\section{Introduction} String gas cosmology, originally discussed in \cite{BV,TV} (see also \cite{Perlt}) and later reconstructed in the context of branes as fundamental degrees of freedom in string theory~\cite{ABE,BEK}, provides a mechanism for constructing a nonsingular cosmology and a mechanism which confines all but three spatial dimensions to microscopic size. String winding modes play a key role in this scenario: they are unable to annihilate in more than three spatial dimensions \cite{BV} because the probability that two string world sheets intersect is of measure zero (see \cite{caveats} for some recent caveats regarding this scenario). A key assumption in string gas cosmology is that all spatial dimensions start out at string scale, and that the universe is hot (and brane degrees of freedom excited). However, in order to connect this scenario to our current universe, a mechanism is required which expands the volume of our observed three spatial dimensions to be large enough to contain the currently observed Hubble volume. This is one aspect of the ``flatness problem'' of Standard Big Bang cosmology (see e.g. \cite{Guth} for a discussion), and a period of cosmological inflation after the winding modes in our three spatial dimensions have disappeared is the only currently known solution. Thus, an outstanding challenge for string gas cosmology has been to provide a stringy realization of inflation \footnote{It has recently been shown that a conventional four-dimensional realization of inflation using regular scalar fields in the four-dimensional effective field theory is not consistent with the late-time stabilization of the extra dimensions \cite{Patil}.}. One suggestion was recently made in \cite{BEM} and made use of a gas of co-dimension one branes which provides a period of power-law inflation, with an equation of state $\omega = -2/3$ (where $\omega = p / \rho$, and $p$ and $\rho$ are the pressure and energy density, respectively), which is however incompatible with inflation being the source of the cosmic microwave anisotropies (see e.g. \cite{WMAP}). The graceful exit from inflation was achieved by considering the branes to be unstable in the vacuum, and stabilized by plasma effects in a hot gas, in analogy to how embedded defects in field theory can be stabilized in the plasma of the early universe \cite{Nag,Carter}. In this paper we consider a more natural realization of inflation in the context of string gas cosmology. We study the period of cosmology after our three dimensions have shed their winding modes (as discussed in detail in \cite{BEK}), while the other dimensions are still confined by strings and branes which wind around them. We consider the four-dimensional effective action, and focus in particular on two scalar fields in this action, one corresponding to a modulus of the higher-dimensional theory, the other corresponding to the radion. We assume an exponential potential for the modulus field and consider brane and string matter (treated as a hydrodynamical fluid) whose overall action has a prefactor which depends exponentially on the two scalar fields. We then demonstrate that it is possible to obtain solutions of power-law inflation with an effective equation of state more negative than $\omega = -2/3$. The coupling of matter to the scalar fields required for acceleration arises automatically if we consider branes winding around the compact space as our matter. 
To obtain a graceful exit from inflation, we again require the branes to be unstable, as in \cite{BEM}. Compared to \cite{BEM}, our mechanism may provide an inflationary phase with a smaller value of $\omega$, and it does not require branes which are extended in our large spatial dimensions. Our mechanism relies on the analysis of \cite{BM}, which we now review. \section{Coupled Inflation} In \cite{BM} a mechanism to realize an accelerated phase of expansion was proposed which relies on a rolling scalar field being coupled (roughly with gravitational strength) to some form of matter or radiation. Although the main motivation in \cite{BM} came from trying to explain the late-time acceleration phase that we seem to be entering today, in this paper we show how one can obtain inflation with a gas of unstable branes via a similar mechanism. Since branes in general couple to both the dilaton and the radion (the volume of the extra dimensions), unless one invokes an additional mechanism to stabilize one of the scalars, one has to study the dynamics involving both the radion and the dilaton. Hence we first review and generalize the analysis done in \cite{BM}. {\bf Evolution Equations:} We start with an effective four-dimensional action with two scalar fields minimally coupled to gravity \begin{eqnarray} {\hat{S}} \, &=& \, \frac{M_p^2}{2} \int d^{4}x\sqrt{-g}\bigl[R - \partial_{m}\phi\partial^{m}\phi \\ &-& \partial_{m}\psi\partial^{m}\psi-\frac{2V_0}{M_p^2}e^{-2(\alpha\psi+\beta\phi)}\bigr] \, .\nonumber \end{eqnarray} where $M_p$ is the reduced Planck mass which we will set to 1 from now on. Such exponential potentials are common in dimensionally reduced supergravity/M-theories in dilaton-radion systems~\cite{exppot}, or in the context of large extra dimensions~\cite{mohapatra}, and we will give some explicit examples later. In the above, $\alpha,\beta$ are constants which depend on the origin of the potential as well as on the dimensionality of the original model. Consider now that a form of matter or radiation also couples to the scalar fields: \begin{equation} S_{\mathtt{mat}}=\int d^4x \sqrt{-g}\widetilde{\rho}=\int d^4x \sqrt{-g}\rho e^{2(\mu\psi+\nu\phi)} \end{equation} where, as usual, $\widetilde{\rho}$ is the observed energy density and we define a ``bare density'', $\rho\equiv\widetilde{\rho} e^{-2(\mu\psi+\nu\phi)}$, which obeys the familiar evolution equation involving the equation of state parameter $\omega$ \begin{equation} \dot{\rho} + 3H(p+\rho) \, = \, 0 \end{equation} with \begin{equation} p=\omega\rho \Rightarrow \rho=\rho_0\left(\frac{a}{a_0}\right)^{-3(1+\omega)} \, . \label{rho} \end{equation} In the next section we will identify $\widetilde{\rho}$ with the energy density of a gas of branes wrapping extra dimensions and will derive the coupling exponents $\mu$ and $\nu$ explicitly. Due to this coupling, the matter action not only acts as a source term for gravity but also provides an effective potential term for the scalar fields. The field equations for the scalars now read: \begin{equation} \ddot{\phi}+3H\dot{\phi}=-\frac{\partial V_{\mathtt{eff}}(\phi,\psi)}{\partial \phi} \end{equation} and \begin{equation} \ddot{\psi}+3H\dot{\psi}=-\frac{\partial V_{\mathtt{eff}}(\phi,\psi)}{\partial \psi} \end{equation} where \begin{equation} V_{\mathtt{eff}}(\phi,\psi)=V_0e^{-2(\alpha\psi+\beta \phi)}+e^{2(\mu \psi+\nu\phi)}\rho \, . 
\label{effective} \end{equation} The above multi-field exponential potential has been studied in the context of {\it assisted inflation}, see~\cite{Liddle-maz}. Similar multi-field potentials were also used to construct inflationary models in \cite{ACD1,ACD2}. {F}rom here onwards it is useful to work in terms of rotated fields~\cite{wands}, \begin{equation} \left(\begin{array}{c} \psi'\\ \phi' \end{array}\right)= \left(\begin{array}{cc} \cos\theta&\sin\theta\\ -\sin\theta&\cos\theta \end{array}\right) \left(\begin{array}{c} \psi\\ \phi \end{array}\right) \label{linear} \end{equation} where we choose \begin{equation} \cos\theta=\frac{\mu}{\sqrt{\mu^2+\nu^2}} \end{equation} such that \begin{equation} \widetilde{\rho}=\rho e^{2\mu'\psi'}\ \mbox{ with } \ \mu'=\sqrt{\mu^2+\nu^2} \end{equation} depends only on a single field. The exponents $\alpha,\beta$ transform in the same way as the fields: \begin{equation} \left(\begin{array}{c} \alpha'\\ \beta' \end{array}\right)= \left(\begin{array}{cc} \cos\theta&\sin\theta\\ -\sin\theta&\cos\theta \end{array}\right) \left(\begin{array}{c} \alpha\\ \beta \end{array}\right) \end{equation} Thus, the ``rotated'' field equations read \begin{equation} \ddot{\phi'}+3H\dot{\phi'}=2\beta' V_0e^{-2(\alpha'\psi'+\beta' \phi')} \label{phi} \end{equation} and \begin{equation} \ddot{\psi'}+3H\dot{\psi'}=2\alpha' V_0e^{-2(\alpha'\psi'+\beta' \phi')}-2\mu'\rho e^{2\mu' \psi'} \label{psi} \end{equation} One can also write down the Friedmann equation for a Robertson-Walker metric: \begin{eqnarray} \label{Hubble} H^2 =\textstyle{1\over 3}\left( \frac{\dot{\phi'}^2}{2}+\frac{\dot{\psi'}^2}{2}+V_0e^{-2(\alpha'\psi'+\beta' \phi')}+\rho e^{2\mu' \psi'}\right)\, . \end{eqnarray} {\bf Inflation from a Single Field:} Before we solve (\ref{phi}-\ref{Hubble}) in full generality, it is insightful to look at a special case in which the dynamics reduces to that of a single field, which has been studied in detail in \cite{BM}. This happens when \begin{equation} \frac{\beta}{\alpha}=\tan\theta=\frac{\nu}{\mu}\Rightarrow \beta'=0 \end{equation} In this case, $\phi'$ remains frozen (even if one starts with a non-zero $\dot{\phi'}$ it is quickly Hubble damped) and $\psi'$ evolves under the influence of the effective potential (\ref{effective}) with exponents $\alpha'=\sqrt{\alpha^2+\beta^2}$ and $\mu'$. As discussed in \cite{BM}, provided the exponents are of ${\cal O}(1)$ and have the same sign, $\psi'$ tracks the minimum formed between the two opposing exponentials. Since $\rho$ redshifts, the position of the minimum evolves in time, and one can solve for the evolution of $\psi'$ and $a(t)$ to get \begin{eqnarray} \label{psi-evol} e^{2\psi'} &=& \left(\frac{\alpha'V_0}{\mu'\rho}\right)^{1/(\mu'+\alpha')} \\ &=& \left(\frac{\alpha'V_0}{\mu'\rho_0}\right)^{1/(\mu'+\alpha')}\left(\frac{a}{a_0}\right)^{3(1+\omega)/(\mu'+\alpha')} \nonumber \end{eqnarray} and \begin{equation} \label{scalefact} a(t)=a_0\left(\frac{t}{t_0}\right)^{2(\mu'+\alpha')/\left[3\alpha'(1+\omega)\right]}\,. 
\label{a-evol} \end{equation} Thus, the universe accelerates if \begin{equation} \frac{\mu'}{\alpha'}> \frac{1}{2}(1+3\omega) \label{acceleration} \end{equation} {F}or non-relativistic dust-type matter, like wrapped branes, we have $\omega=0$, so that (\ref{acceleration}) becomes \begin{equation} \sqrt{\frac{\mu^2+\nu^2}{\alpha^2+\beta^2}}>\frac{1}{2} \end{equation} In the next section, when we discuss the dynamics involving a gas of branes, we will see that, although the system does not reduce to single-field dynamics, one can get power-law inflation in a manner similar to the case discussed above. As a next step, let us calculate the density fluctuations and spectral tilt in this model. From (\ref{psi-evol}) and (\ref{a-evol}) one can compute $\dot{\psi'}$ and $H$ and hence the amplitude of density fluctuations \begin{eqnarray} \delta_H &=& \frac{H^2}{2\pi\dot{\psi'}}=\frac{(\mu'+\alpha')H}{3\pi M_p} \\ &=&\frac{(\mu'+\alpha')}{3\pi M_p}\sqrt{\frac{V_0(\mu'+\alpha')e^{-2\alpha'\psi'}}{3\mu' M_p^2}} \nonumber \end{eqnarray} or \begin{equation} \delta_H^2\sim a^{-\frac{3(1+\omega)\alpha'}{\mu'+\alpha'}} \, , \end{equation} where the right hand side is evaluated at the time when the length scale of the fluctuation being considered is exiting the Hubble radius (see e.g. \cite{MFB,LLbook} for reviews). Therefore, the spectral tilt $\eta$ is given by \begin{equation} \eta\approx 1+\frac{d\ln\delta_H^2}{d\ln a}=1-\frac{3(1+\omega)}{1+\mu'/\alpha'} \label{eta} \end{equation} It is clear from (\ref{eta}) that in order to explain the observed spectral tilt of the CMB spectrum (see e.g. \cite{WMAP}), $\eta>0.94$, one needs a very large ratio $\mu'/\alpha'\sim {\cal O}(10)$, which is difficult to achieve in string theory. Hence, it seems likely that one needs to supplement this inflationary mechanism with an additional mechanism to generate the observed almost scale-invariant spectrum of density perturbations. We will comment on this later. {\bf Two-Field Dynamics:} To solve the evolution equations (\ref{phi}-\ref{Hubble}) for the two-field case we choose the usual ansatz \begin{equation} a=a_0\left(\frac{t}{t_0}\right)^m\ ;e^{\psi}=e^{\psi_0}\left(\frac{t}{t_0}\right)^n\ ;e^{\phi} =e^{\phi_0}\left(\frac{t}{t_0}\right)^p \label{e-ansatz} \end{equation} In order that all the terms in (\ref{phi}-\ref{Hubble}) have the same $t$ dependence, $t^{-2}$ to be specific, matching the exponents requires \begin{equation} \alpha'n+\beta'p=1=\frac{3m}{2}-\mu'n \label{np} \end{equation} {F}urther, one can substitute the potentials associated with $V_0$ and $\rho_0$ from (\ref{phi},\ref{psi}) in (\ref{Hubble}) to obtain another relation between the power-law exponents: \begin{eqnarray} \label{m} m^2 &=& \textstyle{1\over 3}\bigl[\frac{1}{2}(n^2+p^2) \\ &+& (3m-1)(\frac{p}{2\beta'}(1+\frac{\alpha'}{\mu'})-\frac{n}{2\mu'})\bigr] \nonumber \end{eqnarray} One can thus solve (\ref{m}) and (\ref{np}) to obtain $m,n$ and $p$ in terms of the exponents $\alpha',\beta'$ and $\mu'$ (see Appendix for details). In particular one finds \begin{equation} m=2\frac{3\mu'\alpha'+\alpha^{'2}+\beta^{'2}+2\mu^{'2}}{6\mu'\alpha'+3\alpha^{'2}+3\beta^{'2}+8\mu^{'2}\beta^{'2}} \label{m-soln} \end{equation} In order to find out whether one gets inflation, one has to check whether $m > 1$ or not. Intuitively, it is clear what needs to happen in order to have an accelerated expansion: along the $\psi'$ direction, due to the coupling, one has a slowly evolving minimum formed by the two opposing exponential potentials, as before. 
The field, though, can also roll along the $\phi'$ direction, since $\beta'\neq 0$. However, if $\beta'$ is sufficiently small (in other words, if the exponent vectors corresponding to the $V_0$ and $\rho_0$ terms are approximately collinear), then the field also rolls slowly along the $\phi'$ direction and we can have acceleration. {F}inally, one has to check that the ansatz (\ref{e-ansatz}) gives meaningful solutions, i.e. that solutions exist for positive $V_0$ and $\rho_0$. We show in the Appendix that this is indeed the case provided $m>1/3$, which is obviously true for inflationary solutions. \section{Supergravity and Brane Gases} {\bf Supergravity and Effective Potentials:} Let us start with a typical bosonic sector of a supergravity theory: \begin{eqnarray}\label{eq:sugra} {\hat{S}} &=& \frac{1}{16\pi {\hat{G}}} \int d^{\widehat{D}}x\sqrt{-g}\bigl[{\hat{R}} - \partial_{\widehat{m}}\phi\partial^{\widehat{m}}\phi \\ &-& {1 \over 2}\sum_i e^{-2a_i\phi}{1 \over {n_i!}} F^2_{n_i} - V(\phi)\bigr] \nonumber \end{eqnarray} where hatted quantities denote the full higher-dimensional ($\widehat{D}$-dimensional) objects, $\phi$ is the dilaton field and the field strengths $F_{n_i}$ are $n_i$-forms with $i=1\dots M$. For generality, we have also included a potential for the dilaton which may have either a classical \cite{classical} or a quantum \cite{quantum} origin depending on the specific supergravity/string theory model. In general, the dilaton runs to $\pm\infty$ along such a potential, leading to the much-studied issue of dilaton stabilization. For the purposes of this paper, we do not concern ourselves with this issue and simply assume that either the dilaton is stabilized after inflation or that it couples very weakly (if at all) to standard model particles, so that even though the dilaton slowly runs towards infinity, constraints coming from fifth force experiments and the variation of physical constants \cite{variation} are satisfied. Indeed, a ``least coupling mechanism'' has been proposed \cite{least} to explain why the dilaton may couple very weakly to the ordinary standard model particles. As a simple example of a running potential we choose \begin{equation} V(\phi)=V_0e^{-2\delta\phi} \end{equation} We mention in passing that such exponential potentials are found in several supergravity theories \cite{classical}. To obtain an effective four-dimensional theory one has to perform a consistent dimensional reduction \cite{duff}. For a flux compactification the only consistent ansatz (without involving squashing) is given by \cite{exppot,flux} \begin{equation} \widehat{g}_{\widehat{m}\widehat{n}}=\left( \begin{array}{cc} g_{mn}(x) & 0\\ 0 &e^{2\psi(x)}\st{\circ}{g}_{\st{\circ}{m}\st{\circ}{n}}(y) \end{array} \right) \label{eq:metric} \end{equation} and \begin{equation} F_{\st{\circ}{m}_1\dots\st{\circ}{m}_{\st{\circ}{D}}}\sim \epsilon_{\st{\circ}{m}_1\dots\st{\circ}{m}_{\st{\circ}{D}}} \end{equation} where $F$ has to be a $\st{\circ}{D}$-form, $\st{\circ}{D}$ being the number of extra dimensions, and we use the symbol ``$\circ$'' to indicate extra-dimensional quantities. Once one solves the Bianchi identity and the field equations for $F$, the $F^2$ term in the action (\ref{eq:sugra}) gives us a potential term for the scalars, $\phi$ and $\psi$. 
After performing the usual dimensional reduction by integrating over the extra dimensions and applying conformal transformations \begin{equation} \widehat{g}\rightarrow e^{-\st{\circ}{D}\psi/(\st{\circ}{D}+2)}\widehat{g} \end{equation} followed by \begin{equation} \psi\rightarrow \sqrt{\frac{\st{\circ}{D}(\st{\circ}{D}+2)}{2}}\psi \label{conf-psi} \end{equation} to go to the Einstein frame in four dimensions, one finds that \begin{equation} \int d^{\widehat{D}}x\sqrt{-\widehat{g}}\frac{e^{-2a_i\phi}}{n_i!}F^2_{n_i} \end{equation} becomes \begin{equation} \int d^4x\sqrt{-g}\st{\circ}{c}^2e^{-2(a\phi+\gamma' \psi)} \end{equation} with \begin{equation} 2\gamma'\equiv 3\sqrt{\frac{2\st{\circ}{D}}{\st{\circ}{D}+2}} \end{equation} and $a$ is the same exponent that appears in the dilaton coupling to the $\st{\circ}{D}$-form in (\ref{eq:sugra}); its value depends on the specific supergravity model. Also, $\st{\circ}{c}$ is a constant determining the strength of the flux background. Let us now compute the four-dimensional effective potential coming from the dilaton potential $V(\phi)$. We observe that dimensional reduction followed by the conformal transformation (\ref{conf-psi}) gives a $\psi$ dependence which is in fact the same as one gets from a higher-dimensional cosmological constant. The four-dimensional potential reads \begin{equation} V(\phi,\psi)=V_0e^{-2(\delta\phi+\gamma\psi)} \label{dilaton} \end{equation} with \begin{equation} 2\gamma=\sqrt{\frac{2\st{\circ}{D}}{\st{\circ}{D}+2}} \end{equation} The full four-dimensional action then reads \begin{eqnarray}\label{eq:4dgravity} S &=& \frac{1}{2}\int d^4x\sqrt{-g}\bigl[R-\partial_{m}\phi\partial^{m}\phi-\partial_{m}\psi\partial^{m}\psi \nonumber \\ &-& 2(V_0'e^{-2(a\phi+\gamma' \psi)}+V_0e^{-2(\delta\phi+\gamma\psi)})\bigr] \end{eqnarray} where, as before, we have set $M_p=1$ and traded the constant $\st{\circ}{c}$ for $V_0'$. As a specific example, let us consider type IIA string theory. In the string frame the relevant bosonic part of the action reads \begin{eqnarray} \hat{S}_{II} &=& \int d^{10}x\sqrt{-g}e^{-2\phi}\bigl[\hat{R}+4\partial_{\widehat{m}}\phi\partial^{\widehat{m}}\phi-V(\phi)\bigr] \nonumber \\ &+& {1 \over 2}\frac{1}{4!}F^2_{4}+\dots \label{typeII} \end{eqnarray} where we have only included the 4-form, whose dual is a 6-form field strength and hence can be consistently turned on since there are 6 extra dimensions. The ellipsis denotes other form fields and fermionic fields, which are set to zero as usual. To be general we have also added a potential for the dilaton which could result from quantum corrections. Performing the well-known conformal transformation \begin{equation} \hat{g}_{\widehat{m}\widehat{n}}\rightarrow e^{-4\phi/(\st{\circ}{D}+2)}\hat{g}_{\widehat{m}\widehat{n}} \end{equation} followed by \begin{equation} \phi\rightarrow {\sqrt\frac{4}{\st{\circ}{D}+2}}\phi \label{conf-phi} \end{equation} one recovers an action of the form (\ref{eq:sugra}) in the 10-dimensional Einstein frame with $2a_4=-1/\sqrt{2}$ corresponding to the four-form coupling exponent, or in terms of the 6-form dual, $2a_6=1/\sqrt{2}$: \begin{equation} \frac{1}{4!}e^{\phi}F^2_{4}\leftrightarrow\frac{1}{6!}e^{-\phi}F^2_{6} \, . \end{equation} The dimensionally reduced 4-dimensional action then looks like (\ref{eq:4dgravity}) with \begin{equation} 2a_6=\frac{1}{\sqrt{2}}\mbox{ and }2\gamma'=3\sqrt{\frac{3}{2}} \end{equation} where $\gamma'$ and $a_6$ can be identified with $\alpha$ and $\beta$ of Section 2, respectively. 
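For quick reference, the dependence of these reduction exponents on the number of extra dimensions is easy to tabulate. The following minimal sketch (plain Python, not part of the original derivation; the variable names are ours) evaluates the formulas above and reproduces the ten-dimensional values just quoted:
\begin{verbatim}
import math

def reduction_exponents(D_extra):
    """Exponents of the 4d Einstein-frame potentials for D_extra
    compact dimensions, in the conventions of this section."""
    two_gamma_prime = 3*math.sqrt(2*D_extra/(D_extra + 2))  # flux term
    two_gamma       =   math.sqrt(2*D_extra/(D_extra + 2))  # dilaton potential
    return two_gamma_prime, two_gamma

# Type IIA example: 6 extra dimensions
tgp, tg = reduction_exponents(6)
print(tgp, tg)         # 2*gamma' = 3*sqrt(3/2) ~ 3.67, 2*gamma = sqrt(3/2) ~ 1.22
print(1/math.sqrt(2))  # 2*a_6 = 1/sqrt(2) ~ 0.71
\end{verbatim}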
Instead of looking at a flux background one may also study a potential of the form (\ref{dilaton}) coming from a higher dimensional dilatonic potential where substituting $\st{\circ}{D}=6$ we get \begin{equation} 2\gamma=\sqrt{\frac{3}{2}} \label{10-gamma} \end{equation} and $\delta$ remains a free parameter. {\bf Brane Stress Energy:} Let us now consider a gas of branes wrapping all the compact internal dimensions and hence these are $\st{\circ}{D}$-branes. The action for such a gas is given by \begin{equation} S_{\mathtt{brane}}=\int d^{\widehat{D}}x \sqrt{-\widehat{g}}\rho_0e^{-2\nu\phi}e^{-3\alpha}=\int d^{\widehat{D}}x \sqrt{-\widehat{g}}\hat{\rho} \end{equation} where $e^{\alpha}=a/a_0$ denotes the usual scale factor of our observable universe. The exponential involving the dilaton originates from the dilaton coupling present in the brane action in the string frame, which depends on several factors such as the nature of the brane/string (whether it is fundamental or solitonic, etc. \cite{coupling}), the dimensionality of the supergravity model, and so on. The second exponential corresponds to the well-known fact that the brane energy density redshifts as non-relativistic dust along the transverse directions, which in this case are the three large spatial dimensions. In order to obtain the brane energy density in the 4-dimensional Einstein frame one has to perform the conformal rescaling (\ref{conf-psi}), so that in terms of the rescaled $\psi$ one has \begin{equation} S_{\mathtt{brane}}=\int d^4x \sqrt{-g}\rho_0e^{2(\mu\psi+\nu\phi)}\left(\frac{a}{a_0}\right)^{-3} \label{brane} \end{equation} with \begin{equation} 2\mu=\sqrt{\frac{\st{\circ}{D}}{2(\st{\circ}{D}+2)}} \, , \label{mu} \end{equation} or \begin{eqnarray} S_{\mathtt{brane}} &=&\int d^4x \sqrt{-g}\rho_0e^{2\mu'\psi'}\left(\frac{a}{a_0}\right)^{-3} \nonumber \\ &=& \int d^4x \sqrt{-g}\widetilde{\rho} \end{eqnarray} where $\psi'$ is now the linear combination of the dilaton and the radion as defined earlier in (\ref{linear}). $\widetilde{\rho}$ corresponds to the observed four-dimensional energy density of the wrapped branes. Let us now focus our attention on the 10-dimensional superstring theories. The DBI action for a $p$-brane in the string frame is given by \cite{leigh} \begin{equation} S_{DBI}=T_p\int d^{p+1}\sigma e^{-2\phi}\sqrt{-\gamma} \label{dbi} \end{equation} where $\gamma$ is the induced metric on the world volume of the brane parameterized by the $\sigma$'s. (\ref{dbi}) leads to an action for a gas of 6-branes wrapping all six compact extra directions, which looks like \begin{equation} S_{\mathtt{brane}}=\int d^{10}x \sqrt{-\widehat{g}}\rho_0e^{-2\phi}e^{-3\alpha} \end{equation} in the string frame. Conformal transformation of $\widehat{g}$ (\ref{conf-phi}) and subsequent rescaling of $\phi$ then gives us the action in the Einstein frame (\ref{brane}) with \begin{equation} 2\nu=\frac{\st{\circ}{D}}{2}\sqrt{\frac{1}{\st{\circ}{D}+2}}=\frac{3}{\sqrt{8}} \end{equation} while substituting $\st{\circ}{D}=6$ in (\ref{mu}) gives us the exponent $\mu$ \begin{equation} 2\mu=\sqrt{\frac{3}{8}} \, . \end{equation} Moreover, one finds in this case \begin{equation} \psi'= \cos(\pi/3) \psi+\sin(\pi/3)\phi\mbox{ and }\mu'=\sqrt{\frac{3}{8}} \, . 
\label{10-mu} \end{equation} {\bf Inflation from a 10-Dimensional Universe:} We have finally accumulated all the ingredients necessary to understand whether or not one can indeed obtain a phase of acceleration in the early universe with branes in conjunction with string theory potentials for moduli fields and the dilaton. Let us first look at the case when no flux is turned on ($V_0'=0$ in (\ref{eq:4dgravity})) but rather we have a dilatonic potential of the form (\ref{dilaton}). In this case we have one free parameter, namely $\delta$, and whether one has acceleration or not depends on it. First, realize that in order to end inflation we want \begin{equation} \alpha_{eff}\equiv\sqrt{\alpha^{'2}+\beta^{'2}}>1 \label{tracking} \end{equation} This is because, once the branes have decayed, the scalars evolve as if under the influence of an exponential potential with an effective exponent $\alpha_{eff}$ \cite{flux}. Therefore, in order for the scalars to track radiation \cite{copeland} after the end of inflation we require (\ref{tracking}), which implies the bound $\delta>\sqrt{5/8}$. Now, plugging in all the relevant exponents in the expression for $m$ in (\ref{m-soln}) one finds that it is possible to have an inflationary phase provided \begin{equation} \sqrt{\frac{5}{8}}<\delta<\sqrt{\frac{9}{8}} \, . \label{range} \end{equation} For these values of $\delta$ one finds \begin{equation} 1<m<1.1 \, , \end{equation} in other words, inflation occurs when $\delta$ is close to 1, which seems reasonable from the string theory point of view. Next, let us look at the potential coming from the 4-form flux. Substituting the exponents $a_6$ and $\gamma'$ in (\ref{m-soln}) one finds \begin{equation} m<1 \end{equation} Thus, with just a 4-form flux and branes one cannot get acceleration. This situation, however, can change if a separate mechanism is available to stabilize a particular linear combination of the dilaton and the radion. \section{Graceful Exit and Reheating} Branes in string theory can also be interpreted as solitonic solutions in the corresponding low-energy supergravity theory. One subclass of such branes consists of the unstable branes of string theory (e.g. even-dimensional branes in Type IIB string theory). Provided that the tachyon which describes the decay of these branes interacts with fields which are in thermal equilibrium in the early universe, these branes could be stabilized at early times - like the embedded Z-string in the standard electroweak theory \cite{Nag} - and thus trigger inflation as described above. The phase of inflation would last until the fields which mediate the interaction, e.g. the photon in the case of the standard model Z-string, fall out of equilibrium. At that time, the tachyonic instability could set in. The energy density stored in the unstable branes would lead to reheating. In the example below, the role of the bulk modes is played by the Kaluza-Klein modes. To be specific, we consider a subclass of brane-like solutions of the supergravity equations known as black branes \cite{bb}, which appear to us (i.e. in the effective four-dimensional world) as black holes. 
Stability of such black branes (both charged and uncharged) has been discussed in detail in \cite{stab}, and in particular it was realized that when the Kaluza-Klein modes of the gravitational supermultiplet are lighter than the tension of the branes, these modes become unstable (Gregory-Laflamme instability) \cite{stab}\footnote{Stability of branes depends on various factors like the charge and the dilatonic coupling of the branes \cite{dilaton}, but for simplicity we will consider uncharged branes and assume that the dilatonic couplings do not run.}. Since in our model the volume of the internal manifold slowly grows, the Gregory-Laflamme instability provides us with a natural mechanism to end inflation. Assume that the mass scale associated with the tension of the branes is given by \begin{equation} T=10^{-B}M_p \end{equation} The masses of the Kaluza-Klein modes are, on the other hand, given by \begin{equation} M_K=e^{-\zeta\psi}M_p\mbox{ with }\zeta=\sqrt{\frac{\st{\circ}{D}+2}{2\st{\circ}{D}}} \end{equation} The branes become unstable when $M_K\sim T$. In terms of the rotated basis this happens when \begin{eqnarray} 10^{B} &=& e^{\zeta(\psi'\cos\theta-\phi'\sin\theta)} \\ &=& e^{\zeta(\psi'_0\cos\theta-\phi'_0\sin\theta)}\left(\frac{t}{t_0}\right)^{\zeta(n\cos\theta-p\sin\theta)} \nonumber \\ &=& e^{-\zeta\psi_0}\left(\frac{a}{a_0}\right)^{\zeta(n\cos\theta-p\sin\theta)/m} \nonumber \\ &\equiv& N_0e^{{\cal N}\zeta(n\cos\theta-p\sin\theta)/m} \nonumber \end{eqnarray} where ${\cal N}$ is the number of e-foldings. In the spirit of string gas cosmology \cite{BV} we assume that, initially, the internal volume is of Planck size, i.e. $\psi\approx 0$. This implies $N_0 \sim {\cal O}(1)$. One can then estimate the number ${\cal N}$ of e-foldings as \begin{equation} {\cal N}=\frac{2.3Bm}{\zeta(n\cos\theta-p\sin\theta)} \end{equation} For a range of the relevant parameters one can indeed obtain a sufficient number of e-foldings. For example, for a ten-dimensional model, substituting $\st{\circ}{D}=6$ and $\theta=\pi/3$ one finds \begin{equation} {\cal N}=\frac{2.3Bm}{0.41n-0.71p} \end{equation} For the exponents calculated for a gas of 6-branes (\ref{10-mu}) with a dilatonic potential (\ref{dilaton},\ref{10-gamma}) and $\delta$ within the range (\ref{range}), one finds that for a moderate hierarchy, $B\sim 5-7$, we get around 50-60 e-foldings, which is necessary to solve the flatness and horizon problems. After the black branes have decayed, the fields start rolling along the exponential potential of the dilaton and the radion. At late times, there are two possible scenarios. If the potential is indeed a pure exponential as we considered in our model, then the fields would continue to roll, tracking first the radiation and later the matter energy density (since $\sqrt{\alpha^2+\beta^2}>1$). One can try to connect this model to a late-time coupled quintessence regime as discussed in \cite{BM}. However, the phenomenological viability of this scenario requires a mechanism which ensures that the standard model particles are only very weakly coupled to the radion and the dilaton - otherwise one has conflicts with fifth force experiments. Indeed, in \cite{least} such a least coupling principle was proposed for the coupling of the dilaton to standard model particles. We do not speculate further here on the details of such mechanisms. The other possibility arises if the radion-dilaton potential has a minimum at which the fields can be stabilized after the end of inflation. 
This possibility might be realized if, for example, the potential is really a sum of exponentials. This occurs quite commonly in supergravity reductions \cite{exppot,flux}. In this case, of course, the moduli are stabilized and one does not need to worry about the fifth force constraints (as long as the potential is sufficiently curved at the minimum). \section{Discussion and Conclusions} We have presented a mechanism for obtaining inflation in the context of string gas cosmology. We start with the conventional scenario of string gas cosmology: the universe starts out hot and small (string scale), with all spatial dimensions of comparable scale. As discussed in \cite{BV,ABE,BEK}, the fundamental string winding modes can disappear in at most three spatial dimensions, thus leading to an explanation of why the spatial dimensions predicted by string theory but not seen experimentally are confined. Radion stabilization is a natural result of the string winding and momentum modes about the extra spatial dimensions \cite{Patil}. At the later times studied in this paper, we have focused on the effective four-dimensional field theory coming from dimensionally reducing the Lagrangian of string gas cosmology. Assuming the existence of an exponential potential for the dilaton and the radion at these later times, we have shown that branes winding the extra dimensions couple to the radion and the dilaton, and that this coupling can lead to a period of power-law inflation. This inflationary period is generated by the combined radion-dilaton system tracking the time-dependent position of its potential minimum. The time-dependence of the effective potential is induced by the coupling of the brane gas to the two fields. A graceful exit from inflation is obtained by making use of a gas of unstable branes, branes which are stabilized at early times via their interactions with the Kaluza-Klein modes of the fields of the bulk gravitational supermultiplet. The decay of these branes then leads to reheating after inflation. For the specific branes we considered, we obtained a power-law exponent $m$ which is too small to be consistent with the observational limits on the tilt in the power spectrum of density fluctuations. In addition, a hierarchy between the Planck scale and the tension of the branes is required in order to obtain a sufficient number of e-foldings of inflation. Both of these problems can be solved provided the brane coupling exponents are larger than what we obtained for 6-branes in the context of 10-dimensional supergravity/string theory. This can happen in several ways. As noted in \cite{quantum}, quantum string loop corrections can change several coefficients in the effective dilatonic action, which in turn will change the exponents (for example, if the coefficient in front of the kinetic term for the dilaton changes, one has to rescale the dilaton field, which in turn changes its coupling exponent to the brane). One could also consider various types of branes with different dimensionalities, perhaps in the context of lower-dimensional supergravity models. The various exponents are again expected to be different, as is evident from the general analysis we performed. Finally, it is not necessary to restrict oneself only to radion-dilaton systems. For example, one could consider squashed configurations where the branes couple to the volume as well as to the shape moduli (the dilaton would be stabilized by some other mechanism). 
As shown in \cite{flux}, for such configurations it is possible to consistently turn on other lower form fluxes (not just the 6-form as considered here) which may come with the right exponents to realize an accelerated regime in conjunction with the branes. We leave an exploration of all these possibilities for the future. \section{Appendix: Two-Field Solution} We want to solve cosmological evolution equations of the form (\ref{phi}-\ref{Hubble}): \begin{equation} \ddot{\phi}+3H\dot{\phi}=2\beta V_0e^{-2(\alpha\psi+\beta\phi)} \label{aphi} \end{equation} \begin{equation} \ddot{\psi}+3H\dot{\psi}=2\alpha V_0e^{-2(\alpha\psi+\beta \phi)}-2\mu\rho e^{2\mu \psi} \label{apsi} \end{equation} \begin{eqnarray} \label{aHubble} H^2 =\textstyle{1\over 3}\left( \frac{\dot{\phi}^2}{2}+\frac{\dot{\psi}^2}{2}+V_0e^{-2(\alpha\psi+\beta \phi)}+\rho e^{2\mu \psi}\right)\,. \end{eqnarray} We choose the ansatz as in (\ref{e-ansatz}) \begin{equation} a=a_0\left(\frac{t}{t_0}\right)^m\ ;e^{\psi}=e^{\psi_0}\left(\frac{t}{t_0}\right)^n\ ;e^{\phi} =e^{\phi_0}\left(\frac{t}{t_0}\right)^p \label{a-ansatz} \end{equation} In order that all the terms in (\ref{aphi}-\ref{aHubble}) have the same $t$ dependence, $t^{-2}$ to be specific, matching the exponents requires \begin{equation} \alpha n+\beta p=1=\frac{3m}{2}-\mu n\label{a-np} \end{equation} Further, one can substitute the potentials associated with $V_0$ and $\rho_0$ from (\ref{aphi},\ref{apsi}) in terms of the field derivatives in (\ref{aHubble}) to obtain \begin{eqnarray} H^2 &=&\textstyle{1\over 3}\left( \frac{\dot{\phi}^2}{2}+\frac{\dot{\psi}^2}{2}-\frac{1}{2\mu}(\ddot{\psi}+3H\dot{\psi}) \right.\nonumber\\ &+&\left.\frac{1}{2\beta}(1+\alpha/\mu)(\ddot{\phi}+3H\dot{\phi})\right) \end{eqnarray} Substituting the ansatz (\ref{a-ansatz}) in the above equation one finds another relation between the power-law exponents \begin{equation} m^2=\textstyle{1\over 3}\left[\frac{1}{2}(n^2+p^2)+(3m-1)(\frac{p}{2\beta}(1+\frac{\alpha}{\mu})-\frac{n}{2\mu})\right] \label{a-m} \end{equation} One can thus solve (\ref{a-m}) and (\ref{a-np}) to obtain $m,n$ and $p$ in terms of the exponents $\alpha,\beta$ and $\mu$: \begin{equation} m=2\frac{3\mu\alpha+\alpha^{2}+\beta^{2}+2\mu^{2}}{6\mu\alpha+3\alpha^{2}+3\beta^{2}+8\mu^{2}\beta^{2}} \label{am-soln} \end{equation} \begin{equation} n=\frac{3\alpha+6\mu-8\beta^{2}\mu}{6\mu\alpha+3\alpha^{2}+3\beta^{2}+8\mu^{2}\beta^{2}} \label{n-soln} \end{equation} \begin{equation} p=\frac{\beta(8\mu^2+3+8\mu\alpha)}{6\mu\alpha+3\alpha^{2}+3\beta^{2}+8\mu^{2}\beta^{2}} \label{p-soln} \end{equation} Although the $t$ dependence now cancels in all the evolution equations by virtue of (\ref{am-soln}-\ref{p-soln}), we are still left with matching the coefficients in the two equations (\ref{aphi}) and (\ref{apsi}). These equations essentially determine the other unknown parameters $\phi_0$ and $\psi_0$ in terms of $V_0$ and $\rho_0$ or vice-versa. In fact, to have a consistent solution one should check whether these equations can be satisfied for positive values of $V_0$ and $\rho_0$, as those are the physical scenarios we are interested in. 
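Before turning to these coefficients, it is straightforward to verify symbolically that the exponents (\ref{am-soln})-(\ref{p-soln}) indeed satisfy (\ref{a-np}) and (\ref{a-m}); all three expressions below simplify to zero. A minimal cross-check (a sketch in Python/SymPy, not part of the original derivation) is:
\begin{verbatim}
import sympy as sp

al, be, mu = sp.symbols('alpha beta mu', positive=True)

# Power-law exponents (am-soln)-(p-soln)
Delta = 6*mu*al + 3*al**2 + 3*be**2 + 8*mu**2*be**2
m = 2*(3*mu*al + al**2 + be**2 + 2*mu**2)/Delta
n = (3*al + 6*mu - 8*be**2*mu)/Delta
p = be*(8*mu**2 + 3 + 8*mu*al)/Delta

# Matching the powers of t, eq. (a-np)
print(sp.simplify(al*n + be*p - 1))                 # -> 0
print(sp.simplify(sp.Rational(3, 2)*m - mu*n - 1))  # -> 0

# Friedmann constraint, eq. (a-m)
rhs = ((n**2 + p**2)/2
       + (3*m - 1)*(p/(2*be)*(1 + al/mu) - n/(2*mu)))/3
print(sp.simplify(m**2 - rhs))                      # -> 0
\end{verbatim}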
After some algebra one finds \begin{equation} V_0=e^{-2(\alpha\psi_0+\beta\phi_0)}\frac{p(3m-1)}{2\beta} \label{Vnot} \end{equation} and \begin{eqnarray} \rho_0=e^{-2\mu \psi_0}\left(\frac{3m-1}{2\mu}\right)\left(\frac{\alpha}{\beta}p-n\right)\nonumber\\ =e^{-2\mu \psi_0}(3m-1)\left(\frac{4(\alpha^2+\beta^2)-3+4\mu\alpha}{6\mu\alpha+3\alpha^{2}+3\beta^{2}+8\mu^{2}\beta^{2}}\right) \label{rhonot} \end{eqnarray} From (\ref{p-soln}) it is clear that the ratio $p/\beta$ is positive. Also, for the solutions we are looking at, $m>1>1/3$, and thus the right-hand side of (\ref{Vnot}) is positive, implying $V_0>0$. Next, let us look at the right-hand side of (\ref{rhonot}). In order for the potential to track radiation after inflation we demanded $\alpha^2+\beta^2>1$, and thus it is clear that $\rho_0$ is positive too. We indeed have consistent attractor solutions. \begin{acknowledgements} RB is supported in part (at McGill) by an NSERC Discovery grant and (at Brown) by the US Department of Energy under Contract DE-FG02-91ER40688, TASK~A. The work of TB is supported by NSERC Grant No.\ 204540. DE is supported in part by NSF-PHY-0094122 and funds from Syracuse University. \end{acknowledgements}
\section{Introduction} Higher-curvature theories of gravity play an important role in theoretical physics. On one hand, higher derivatives seem necessary to obtain a consistent quantum description of gravity. For example, string theory and effective field theory approaches predict an infinite tower of higher-curvature corrections to the usual Einstein-Hilbert action~\cite{Callan:1985ia,Zwiebach:1985uq,Bergshoeff:1989de,Gross:1986iv, Gross:1986mw}, while other approaches introduce a finite number of higher-curvature terms to restore certain desirable properties, {\it e.g.,}\ renormalizability~\cite{Stelle:1976gc, Stelle:1977ry, Starobinsky:1980te}. On the other hand, studying higher-curvature theories can provide insight into the special or universal properties of gravitational theories. This program has been especially fruitful in the holographic context, where deformations of the gravitational theory correspond to deformations of the dual CFT. In this way, it has been possible to provide evidence for universal relationships that hold within holography and beyond~\cite{Myers:2010tj, Myers:2010xs, Perlmutter:2013gua, Mezei:2014zla, Bueno1, Bueno3, Chu:2016tps, Bueno:2018yzo, Li:2018drw, Bueno:2020odt,Bueno:2022jbl}. In this work we are specifically interested in the structural aspects of a class of theories known as \textit{generalized quasi-topological gravities} (GQTGs). Schematically, we write the action of these theories as \begin{equation}\label{action} S=\frac{1}{16\pi G} \int \mathrm{d}^D x \sqrt{|g|} \left[\frac{(D-1)(D-2)}{L^2}+R+\sum_{n=2}\sum_{i_n}L^{2(n-1)} \mu_{i_n}^{(n)} \mathcal{R}_{i_n}^{(n)} \right]\, , \end{equation} where $\mathcal{R}_{i_n}^{(n)}$ are densities constructed from $n$ Riemann tensors and the metric, the $\mu_{i_n}^{(n)}$ are dimensionless couplings, $L$ is some length scale, and $i_n$ is an index running over all independent GQTG invariants of order $n$. For this action, the field equations can be expressed as \begin{equation}\label{EOM} \mathcal{E}_{ab} = P_{a}{}^{cde}R_{bcde} - \frac{1}{2} g_{ab} \mathcal{L} - 2 \nabla^c \nabla^d P_{a c d b} = 0\, , \quad \text{with} \quad P^{abcd} \equiv \frac{\partial \mathcal{L}}{\partial R_{abcd}} \, , \end{equation} where $\mathcal{L}$ is the Lagrangian of the theory. For a general theory polynomial in curvature tensors, it is clear that the field equations can contain fourth-order derivatives of the metric. The defining property of GQTGs is that they allow for spherically symmetric solutions of the Schwarzschild-like form characterized by a single metric function, {\it i.e.,}\ with $g_{tt} g_{rr}=-1$. Then, the static spherically symmetric black holes of the theory have the form \begin{equation}\label{fEq} \mathrm{d} s^2_{f}=-f(r)\mathrm{d} t^2+\frac{\mathrm{d} r^2}{f(r)}+r^2\mathrm{d} \Omega^2_{(D-2)}\, , \end{equation} with $f(r)$ satisfying an equation that contains at most second derivatives. GQTGs can be further subdivided into different classes depending on the character of the field equations on spherically symmetric and other backgrounds. The most important subclass is Lovelock gravity~\cite{Lovelock1, Lovelock2}, for which the equations on spherically symmetric backgrounds are algebraic in the metric function $f(r)$, and second-order for any metric. Lovelock gravities are also the most constrained. Besides Einstein gravity, there exists no Lovelock theory in $D = 4$, and in general a Lovelock theory of order $n$ in curvature is non-trivial only when $D \ge 2n+1$. 
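As a quick orientation on the structure of \req{EOM} (a standard check, included here for illustration), consider pure Einstein gravity, $\mathcal{L}=R$. In that case \begin{equation} P^{abcd}=\frac{\partial R}{\partial R_{abcd}}=\frac{1}{2}\left(g^{ac}g^{bd}-g^{ad}g^{bc}\right)\, , \end{equation} which is covariantly constant, so the last term in \req{EOM} drops and one is left with $\mathcal{E}_{ab}=R_{ab}-\frac{1}{2}g_{ab}R=0$ (plus the cosmological-constant contribution coming from the first term of \req{action}), {\it i.e.,}\ second-order equations. For Lovelock densities $P^{abcd}$ is no longer covariantly constant, but it still satisfies $\nabla_{a}P^{abcd}=0$ identically, which again removes the higher-derivative term and is the reason their equations remain second order for any metric.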
A second subclass of the GQTG family consists of quasi-topological gravities~\cite{Quasi2, Quasi, Dehghani:2011vu,Ahmed:2017jod,Cisterna:2017umf}. For quasi-topological gravities, the field equations for spherically symmetric black holes are algebraic, as for Lovelock theory. However, on general backgrounds the equations of motion will be fourth-order. Quasi-topological gravities are less constrained in the sense that they exist in any spacetime dimension $D \ge 5$ for any order in curvature cubic or higher, as explicitly constructed in~\cite{Bueno:2019ycr}. These possibilities do not fully exhaust the space of possible theories, and there exist further GQTGs for which the field equation for spherically symmetric black holes is a second-order differential equation for $f(r)$~\cite{PabloPablo, Hennigar:2016gkm, PabloPablo2, Hennigar:2017ego,PabloPablo3} ---these theories can exist even in $D = 4$. GQTGs have by now been the subject of quite intensive investigation, {\it e.g.,}\ \cite{PabloPablo,Hennigar:2016gkm,PabloPablo2,Hennigar:2017ego,PabloPablo3,Ahmed:2017jod,PabloPablo4,Dey:2016pei,Feng:2017tev,Hennigar:2017umz,Hennigar:2018hza,Bueno:2018xqc,Bueno:2018yzo,Bueno:2018uoy,Poshteh:2018wqy,Mir:2019ecg,Mir:2019rik,Arciniega:2018fxj,Cisterna:2018tgx,Arciniega:2018tnn,Mehdizadeh:2019qvc,Erices:2019mkd,Emond:2019crr,Jiang:2019fpz, Cano:2019ozf, Burger:2019wkq, Bueno:2019ltp,Bueno:2020odt, Frassino:2020zuv, KordZangeneh:2020qeg, Marciu:2020ysf, Quiros:2020uhr, Pookkillath:2020iqq, Marciu:2020ski, Adair:2020vso, Khodabakhshi:2020hny, Edelstein:2020nhg, Konoplya:2020jgt, Khan:2020kwl, Khodabakhshi:2020ddv, Cano:2020qhy, Cano:2020ezi, Quiros:2020eim, Edelstein:2020lgv, Jimenez:2020gbw, Caceres:2020jrf, Mustafa:2020qjo, Cano:2020oaa, Fierro:2020wps, Bhattacharjee:2021nfx, Bhattacharjee:2021jwm, Jawad:2021kkp, Bueno:2021krl, Li:2021jfh, Ghosh:2021zpb, Bakhtiarizadeh:2021vdo, Sardar:2021blt, Bakhtiarizadeh:2021hjr, Gray:2021roq, Jaime:2022cho, Bueno:2022lhf, Edelstein:2022xlb, Cano:2022ord}, and many of the interesting properties of these theories are now well-understood. Here we summarize some particularly relevant ones: \begin{enumerate} \item When linearized around any maximally symmetric background, their equations are identical to the Einstein gravity ones, up to a redefinition of the Newton constant ---in other words, they only propagate the usual transverse and traceless graviton in the vacuum \cite{PabloPablo,Hennigar:2016gkm,PabloPablo2,Hennigar:2017ego,PabloPablo3,Ahmed:2017jod,PabloPablo4}. \item They possess non-hairy black hole solutions fully characterized by their ADM mass/energy and whose thermodynamic properties can be obtained from an algebraic system of equations. \item Although the defining property pertains to static spherically symmetric black holes, certain subsets of GQTGs allow for reduction of order in the field equations for other metrics, such as Taub-NUT/Bolt~\cite{Bueno:2018uoy}, slowly-rotating black holes~\cite{Adair:2020vso, Gray:2021roq}, near extremal black holes~\cite{Cano:2019ozf}, and cosmological solutions~\cite{Arciniega:2018fxj, Arciniega:2018tnn,Cisterna:2018tgx, Cano:2020oaa}. \item In the context of gravitational effective field theory, any higher-curvature theory can be mapped, via field redefinition, into some GQTG~\cite{Bueno:2019ltp, Bueno:2019ycr}. 
\item We can consider arbitrary linear combinations of GQTG densities and the corresponding properties still hold, which means, in particular, that GQTG theories have a well-defined and continuous Einstein gravity limit, corresponding to setting all higher-curvature couplings to zero. \item Extensions away from pure metric theories, including scalars or vector fields, while preserving the main properties are possible~\cite{Cano:2020qhy,Bueno:2021krl,Cano:2022ord}. \end{enumerate} Our purpose here is to complete the study of structural aspects of GQTGs. In~\cite{Bueno:2019ycr} we proved the existence of GQTGs at all orders in curvature and in all dimensions $D \ge 4$. In this manuscript, we will address how many \textit{distinct/inequivalent} GQTGs exist at each order in curvature and in each dimension. The organization of the manuscript is as follows. We begin in Section~\ref{GQTGss} by reviewing in more detail the defining properties of GQTGs and introducing notation that will be used throughout. Then, in Section~\ref{GQTGssss}, we provide a simple argument that gives an upper bound on the number of possible distinct GQTGs. We then refine this upper bound into an exact result, showing that at order $n$ in curvature there are $n-1$ distinct GQTGs, provided $D > 4$, while there is a single family when $D = 4$. In Section~\ref{GQTGthermo} we compute the thermodynamic charges for any possible GQTG, and verify that they satisfy the first law. Intriguingly, we find good evidence that the thermodynamics for GQTGs can be written entirely in terms of the embedding function for the given family of theories, which determines the maximally symmetric vacua of the theory. We collect a number of useful results and expressions in the appendix. \section{Generalized quasi-topological gravities}\label{GQTGss} We start in this section with a quick review of the defining properties of GQTGs and some notation. Our discussion here closely follows that of~\cite{Bueno:2019ycr}. We also introduce the notion of {\it inequivalent} GQTG densities, which will be important for the rest of the paper. Roughly speaking, we will say that two GQTG densities of a fixed curvature order $n$ are inequivalent if they give rise to different equations for the metric function $f(r)$. \subsection{Definitions} A general static and spherically symmetric (SSS) metric can be written in terms of two undetermined functions $N(r)$ and $f(r)$ as \begin{equation} \label{SSS} \mathrm{d} s^2_{N, f}=-N(r)^2 f(r) \mathrm{d} t^2+\frac{\mathrm{d} r^2}{f(r)}+ r^2\mathrm{d} \Omega^2_{(D-2)}\, , \end{equation} where $\mathrm{d} \Omega^2_{(D-2)}$ is the metric of the $(D-2)$-dimensional round sphere. Essentially all our discussion extends straightforwardly to the cases in which the horizon is planar or hyperbolic instead. The formulas below will include those as well, the different cases being parametrized by a constant $k$ taking values $k=1,0,-1$ for spherical, planar and hyperbolic horizons, respectively. For a given curvature invariant of order $n$, $\mathcal{R}_{(n)}$, we define $L_{N,f}$ and $S_{N,f} $ as the effective Lagrangian and on-shell action which result from evaluating $\sqrt{|g|}\mathcal{R}_{(n)}$ in the ansatz (\ref{SSS}) \begin{equation}\label{ansS} L_{N,f}\equiv \left. N(r) r^{D-2} \mathcal{R}_{(n)}\right|_{N,f}\, , \quad S_{N,f}\equiv \Omega_{(D-2)}\int \mathrm{d} t \int \mathrm{d} r L_{N,f} \, , \end{equation} where we integrated over the angular directions, $\Omega_{(D-2)}\equiv 2\pi^{\frac{D-1}{2}}/\Gamma[\frac{D-1}{2}]$. 
We will define $L_f\equiv L_{1,f}$ and $S_f\equiv S_{1,f}$, namely, the expressions obtained from setting $N=1$ in $L_{N,f}$ and $S_{N,f}$. Now, solving the full nonlinear equations of motion for a metric of the form (\ref{SSS}) can be shown to be equivalent to solving the Euler-Lagrange equations of $S_{N,f}$ associated to $N(r)$ and $f(r)$ \cite{Palais:1979rca,Deser:2003up,PabloPablo4}, namely, \begin{equation} \left.\mathcal{E}^{ab}\right|_{N,f}\equiv \left. \frac{1}{\sqrt{|g|}} \frac{\delta S}{\delta g^{ab}} \right|_{N,f}=0 \quad \Leftrightarrow \quad \frac{\delta S_{N,f}}{\delta N}= \frac{\delta S_{N,f}}{\delta f}=0\, . \end{equation} We say that $\mathcal{R}_{(n)}$ is a GQTG density if the Euler-Lagrange equation of $f(r)$ associated to $L_f$ vanishes identically, {\it i.e.,}\ if \begin{equation} \label{GQTGcond} \frac{\delta S_{f}}{\delta f}=0\, , \quad \forall \, \, f(r)\, . \end{equation} This is the same as requiring $L_f$ to be a total derivative, \begin{equation}\label{condd2} L_f =T_0'\, , \end{equation} for some function $T_0(r,f(r),f'(r))$. The equation satisfied by $f(r)$ for a given GQTG density can be obtained from the variation of $L_{N,f}$ with respect to $N(r)$ as \begin{equation}\label{eqf} \left.\frac{\delta S_{N,f}}{\delta N}\right|_{N=1}=0\, \quad \Leftrightarrow\quad \text{equation of}\quad f(r)\, . \end{equation} As explained in \cite{PabloPablo3}, whenever \req{condd2} holds, the effective Lagrangian $L_{N,f}$ takes the form \begin{equation}\label{fofwo} L_{N,f}=N T_0' + N' T_1 + N'' T_2 +\mathcal{O}(N'^2/N)\, , \end{equation} where $T_{1}$, $T_2$ are functions of $f(r)$ and its derivatives, and $\mathcal{O}(N'^2/N)$ is a sum of terms all of which are at least quadratic in derivatives of $N(r)$. Integrating by parts, it follows that \begin{equation} S_{N,f} = \Omega_{(D-2)} \int \mathrm{d} t \int \mathrm{d} r \left[N\left(T_0-T_1+T_2' \right)' +\mathcal{O}(N'^2/N) \right]\, . \end{equation} Hence it is possible to write all terms involving one power of $N(r)$ or its derivatives as a product of $N(r)$ and a total derivative which depends on $f(r)$ alone. Now, it follows straightforwardly that condition (\ref{eqf}) equates that total derivative to zero. Integrating once, we are left with \cite{PabloPablo3} \begin{equation} \label{eqqqf} \mathcal{F}_{\mathcal{R}_{(n)}} \equiv T_0-T_1+T_2'=C\, , \end{equation} where $C$ is an integration constant related to the ADM mass of the solution \cite{Arnowitt:1960es,Arnowitt:1960zzc,Arnowitt:1961zz,Deser:2002jk}. In particular, for spherical horizons, the precise relation reads \begin{equation} C= \frac{M}{\Omega_{(D-2)}}\, . \end{equation} Hence, given some linear combination of GQTG densities, obtaining the equation satisfied by the metric function $f(r)$ amounts to evaluating $L_{N,f}$ as defined in \req{ansS} and then identifying the functions $T_{i=0,1,2}$ from \req{fofwo}. The equation is then given by (\ref{eqqqf}).\footnote{Sometimes we will refer to this equation as the ``integrated equation'' of $f(r)$ to emphasize the fact that it follows from integrating once (on $r$) the only non-vanishing component of the actual equations of motion of the theory evaluated on the single-function SSS ansatz. } As argued in \cite{PabloPablo3}, the integrated equation is at most second-order in derivatives of $f(r)$. 
In fact, there are two possibilities as far as the number of derivatives of $f(r)$ is concerned: i) theories whose integrated equation involves $f'(r)$ and $f''(r)$; ii) theories whose integrated equation exclusively involves $f(r)$, so the equation is algebraic instead of differential. We shall call theories of the former class ``genuine'' GQTG densities. Theories of the latter class are called Quasi-topological gravities, and they include Einstein and Lovelock theories as subcases. Now, a natural question is: given a fixed spacetime dimension $D$ and a curvature order $n$, are the integrated equations corresponding to different genuine GQTG densities $\{ \mathcal{R}^{I}_{(n)}$, $\mathcal{R}^{II}_{(n)}$, $\dots$ $\mathcal{R}^{i_n}_{(n)}\}$ proportional to each other ---{\it i.e.,}\ are the functional dependences on $r$, $f(r)$, $f'(r)$ and $f''(r)$ of the equations identical--- for the various densities? If not, how many inequivalent contributions to the equation of $f(r)$ are there at a given order in curvature? Analogous questions can be asked fixing $D$ and $n$ for theories belonging to the Quasi-topological class. Given two genuine GQTG densities of order $n$, we will say they are ``inequivalent'' (as far as SSS solutions are concerned) if the quotient of their integrated equations is not a constant, \begin{equation} \mathcal{R}^{I}_{(n)} \quad \text{inequivalent from} \quad \mathcal{R}^{II}_{(n)} \quad \Leftrightarrow \quad \frac{\mathcal{F}_{\mathcal{R}^I_{(n)}}(r,f(r),f'(r),f''(r)) }{\mathcal{F}_{\mathcal{R}^{II}_{(n)}} \left(r,f(r),f'(r),f''(r)\right)} \neq \text{constant} \, . \end{equation} Otherwise we will say they are ``equivalent''. Given two Quasi-topological gravities of order $n$, we adopt an analogous definition, \begin{equation} \mathcal{Z}^{I}_{(n)} \quad \text{inequivalent from} \quad \mathcal{Z}^{II}_{(n)} \quad \Leftrightarrow \quad \frac{\mathcal{F}_{\mathcal{Z}^I_{(n)}}(r,f(r)) }{\mathcal{F}_{\mathcal{Z}^{II}_{(n)}} \left(r,f(r)\right)} \neq \text{constant} \, , \end{equation} but we will show later that, in fact, all Quasi-topological gravities of a given order are equivalent. That will not be the case for genuine GQTGs, for which we will prove that there exist $(n-2)$ inequivalent densities for $D\geq 5$.\footnote{The existence of multiple types of GQTG densities was first pointed out in \cite{Bueno:2020odt}, where two inequivalent quintic densities were explicitly constructed in $D=6$.} \section{How many types of GQTGs are there?}\label{GQTGssss} In this section we prove that there exist exactly $(n-2)$ inequivalent genuine GQTG densities and a single inequivalent Quasi-topological one at a given curvature order $n$ in $D\geq 5$. In $D=4$ there are no Quasi-topological theories and we argue that our proof for the existence of $(n-2)$ genuine GQTG densities fails in that case, in agreement with the fact that a single genuine GQTG density exists in $D=4$ for $n\geq 3$. \subsection{At most $(n+1)$ order-$n$ densities} Let us start our study by putting an upper bound on the possible number of inequivalent GQTG densities existing at a given curvature order $n$. 
As argued in \cite{Deser:2005pc}, evaluated on a metric of the form (\ref{fEq}), the Riemann tensor can be written as \begin{equation}\label{rieef} \left.\tensor{R}{^{ab}_{cd}}\right|_f=2\left[-A T^{[a}_{[c}T^{b]}_{d]}+2B T^{[a}_{[c}\sigma^{b]}_{d]}+\psi \sigma^{[a}_{[c}\sigma^{b]}_{d]}\right]\, , \end{equation} where $\sigma_a^b$ and $T_a^b$ are projectors on the angular and ($t$,$r$) directions, respectively.\footnote{These satisfy $T_a^b T_b^c=T_a^c$, $\sigma_a^b \sigma_b^c=\sigma_a^c$, $\sigma_a^b T_b^c=0$, $\delta^a_b T_{a}^b=2$, $\delta^a_b \sigma_a^b=(D-2)$, $\delta^{a}_{b}=T^{a}_{b}+\sigma^{a}_{b}$. } On the other hand, the dependence on the radial coordinate appears exclusively through the three functions $A$, $B$ and $\psi$, which read \begin{equation} A\equiv \frac{f''(r)}{2}\, , \quad B\equiv -\frac{f'(r)}{2r}\, , \quad \psi\equiv \frac{k-f(r)}{r^2}\, , \end{equation} where $k=1,0,-1$ for spherical, planar and hyperbolic horizons respectively. Now, GQTG densities are built from contractions of the metric and the Riemann tensor, so any order-$n$ density of that type will become some polynomial in these objects when evaluated on (\ref{fEq}), namely, \begin{equation}\label{rfff} \mathcal{S}\big|_f=\sum_{l=0}^n\sum_{k=0}^l c_{k,l}B^k\psi^{l-k}A^{n-l}\, , \end{equation} for some constants $c_{k,l}$. The idea is now to determine the most general constants $c_{k,l}$ consistent with the GQTG requirement, which requires $r^{D-2} \mathcal{S}|_f $ to be a total derivative, {\it i.e.,}\ \begin{equation}\label{pazo} r^{D-2} \mathcal{S}|_f = T_0'(r)\, . \end{equation} Note that imposing this condition on \req{rfff} and finding the compatible values of $c_{k,l}$ does not guarantee that the corresponding GQTG densities actually exist, as this does not provide an explicit construction of covariant curvature densities. Doing this does impose, nonetheless, a necessary condition which all actual densities must satisfy. Given a GQTG density, $\mathcal{S}$, it is useful to define the object $\tau(r)$ through the relation \begin{equation}\label{ttau} T_0 \equiv r^{D-1} \tau\, , \quad \text{so that} \quad \mathcal{S}\big|_f= \frac{1}{r^{D-2}} \frac{\mathrm{d} }{\mathrm{d} r} \left[ r^{D-1} \tau(r)\right] \, . \end{equation} In a sense, $\tau(r)$ is the fundamental building block as far as on-shell GQTG densities are concerned. Observe that since \begin{equation} \sum_i \alpha_i \mathcal{S}_i\big|_f = \frac{1}{r^{D-2}} \frac{\mathrm{d} }{\mathrm{d} r} \left[ r^{D-1} \sum_i \alpha_i \tau_{(i)}(r)\right] \, , \end{equation} linear combinations of the $\tau_{(i)}$ give rise to linear combinations of GQTG densities in an obvious way. Now, imposing (\ref{pazo}) on densities of the form \req{rfff}, we find that there are $(n+1)$ independent possible densities at a given order $n$. In terms of the $\tau(r)$, the possibilities turn out to be simply given by $\tau=\tau_{(n,j)}$, where we defined \begin{equation} \tau_{(n,j)}\equiv \psi^{n-j} B^j \, , \quad \text{with} \quad j=0,1,\dots,n\, . \end{equation} The corresponding putative on-shell densities read\footnote{Note that for the objects $\mathcal{S}_{(n,j)}$ we omit the $|_f$. By this we mean that we literally define $\mathcal{S}_{(n,j)}$ to be the expression that appears in the right-hand side. Actual densities evaluated on the single-function SSS ansatz will reduce to linear combinations of the $\mathcal{S}_{(n,j)}$. 
} \begin{equation}\label{rj} \mathcal{S}_{(n,j)} \equiv \frac{1}{r^{D-2}} \frac{\mathrm{d}}{\mathrm{d} r} \left[ r^{D-1} \tau_{(n,j)}\right] \, , \quad j=0,1,\dots,n\, . \end{equation} Observe that the resulting possibilities are such that $A$ only appears either to the power $1$ or to the power $0$ when expanding $\mathcal{S}_{(n,j)}$, which amounts to restricting the sum in $l$ appearing in (\ref{rfff}) to $l=\{n-1,n\}$. It follows that any GQTG density in any number of dimensions and at any order in curvature must necessarily be expressible as a linear combination of the above densities when evaluated on the single-function SSS ansatz, namely \begin{equation}\label{genS} \mathcal{S}|_f= \frac{1}{r^{D-2}} \frac{\mathrm{d}}{\mathrm{d} r} \left[ r^{D-1} \sum_{j=0}^n \alpha_{(n,j)} \tau_{(n,j)}(r)\right] \, , \end{equation} for certain constants $\alpha_{(n,j)}$. Using the methods developed in~\cite{Bueno:2019ycr} ---cf. section 5 of that work--- it is possible to compute the field equations for the putative theory~\eqref{genS} despite the fact that a covariant form of the action is not known. The integrated equation for the metric function $f(r)$ corresponding to a putative density $\mathcal{S}_{(n,j)}$ is given, in the notation of \req{eqqqf}, by\footnote{So, for a linear combination of densities, the equation would read $\sum_j \alpha_{(n,j)} \mathcal{F}_{(n,j)}=C $ where $C$ is an integration constant related to the mass of the solution.} \begin{align} \label{fnj} &\mathcal{F}_{(n,j)}= \frac{(-1)^{j+1}}{2^{j+1}}r^{D-2+j-2n} (k-f)^{n-j-1}(f')^{j-2} \times \\ &\Big[ f' \left[j(D-1+j-2n)(k-f)f-(j-1)r (k+(n-j-1)f) f' \right] +j(j-1) r (k-f) f f''\Big] \, . \notag \end{align} Observe that this simplifies considerably both for $j=0$ and $j=1$. In those cases the dependence on $f'$ and $f''$ disappears and one finds algebraic equations for $f(r)$, \begin{equation} \mathcal{F}_{(n,0)}= -\frac{r^{D-1-2n}}{2} (k-f)^{n-1} [k+(n-1)f]\, , \quad \mathcal{F}_{(n,1)}=\frac{(D-2n)r^{D-1-2n}}{4}(k-f)^{n-1}f \, . \end{equation} An obvious question at this point is: which of these possible densities, if any, actually corresponds to the Einstein-Hilbert one? In that case we have $n=1$, and the two possible densities and their integrated equations of motion read, respectively, \begin{align} \mathcal{S}_{(1,0)}&=-\frac{1}{r^2} \left[(D-3) (f-k)+r f' \right]\, , \quad \mathcal{F}_{(1,0)}=-\frac{r^{D-3} k }{2} \, , \\ \mathcal{S}_{(1,1)}&=-\frac{1}{2r^2} \left[(D-2)r f'+r^2 f'' \right]\, , \quad \mathcal{F}_{(1,1)}=\frac{(D-2)r^{D-3}f }{4} \, . \end{align} Now, the corresponding expressions for the Einstein-Hilbert action ({\it i.e.,}\ for a density given by the Ricci scalar $\mathcal{S}_{\rm \scriptscriptstyle EH}\equiv R$) read \begin{equation} \mathcal{S}_{\rm \scriptscriptstyle EH}|_f=-\frac{1}{r^2} \left[(D-2)(D-3)(f-k)+ 2(D-2)r f'+r^2 f''\right]\, , \quad \mathcal{F}_{\rm \scriptscriptstyle EH}=-(D-2)(f-k) r^{D-3} \, . \end{equation} Hence, none of the putative densities coincides with the Einstein-Hilbert one. Rather, it is a linear combination of the two which does, namely, \begin{equation} \mathcal{S}_{\rm \scriptscriptstyle EH}|_f= (D-2) \mathcal{S}_{(1,0)} + 2 \mathcal{S}_{(1,1)}\, .
\end{equation} Even though our approach has selected two possible independent densities susceptible of giving rise to GQTG densities at linear order in curvature, there (obviously) exists a unique possibility corresponding to an actual density, given by the Ricci scalar, which therefore is given by a linear combination of the two. While the $n=1$ case is somewhat special, this already illustrates the fact that our upper bound of $(n+1)$ densities at order $n$ is not tight and can be improved. For higher $n$, the only known examples of densities which give rise to algebraic integrated equations for $f(r)$ are Lovelock and Quasi-topological gravities. From our perspective, at a given order $n$ in $D$ dimensions, all available Lovelock and Quasi-topological gravities for such $n$ and $D$ are ``equivalent'' as far as the equation of $f(r)$ is concerned, which means that they should correspond to a fixed linear combination of $\mathcal{S}_{(n,0)}$ and $\mathcal{S}_{(n,1)}$. In the next subsections we argue that, indeed, the bound of $(n+1)$ densities can be lowered to at most $(n-1)$ GQTG densities of order $n\geq 2$. While amongst the $(n+1)$ candidates identified here there are two which produce algebraic equations, we will see that only a linear combination of the two survives, precisely corresponding to the known Lovelock and Quasi-topological case. The additional putative $(n-2)$ densities would give rise to distinct second-order differential equations for $f(r)$. \subsection{At most $(n-1)$ order-$n$ densities}\label{atmostnm1} In order to lower our upper bound on the number of available GQTG densities existing at a given order, we can impose some further conditions on our candidate on-shell densities $\mathcal{S}_{(n,j)}$. The first condition comes from imposing that the equations of motion associated to them admit maximally symmetric solutions. When evaluated on such backgrounds, the equations of motion of actual higher-curvature densities reduce to an algebraic equation which involves the cosmological constant, the curvature scale of the background ({\it e.g.,}\ the AdS radius) as well as the higher-curvature couplings. More precisely, consider a gravitational Lagrangian consisting of a linear combination of generic higher-curvature densities of the form given in \req{action}. The result for the equations of motion when evaluated for \begin{equation} f(r)=\frac{r^2}{L_{\star}^2}+k\, , \end{equation} which corresponds to pure AdS$_{D}$ with radius $L_{\star}$, is given by \begin{equation} \frac{r^{D-1}}{16\pi G} \left[\frac{(D-2)}{L^2}-\frac{(D-2)}{L_{\star}^2}+ \sum_{n=2} \sum_{i_n} \frac{L^{2(n-1)}}{L_{\star}^{2n}} \mu_{i_n}^{(n)} a_{i_n}^{(n)}\right]=0\, , \end{equation} for certain constants $a_{i_n}^{(n)}$. Interestingly, as we will see below, this same equation which determines the vacua, also appears to play a key role in the thermodynamics of black holes in the theory. Naturally, the solution for Einstein gravity is simply $L^2=L_{\star}^2$, which relates the action scale to the AdS radius in the usual way. Now, what happens when we consider the integrated equations of a linear combination of candidate on-shell GQTG densities, each contributing as in \req{fnj}, on such a background? It turns out that the result $\sum_j \alpha_{(n,j)} \mathcal{F}_{(n,j)}$ contains two different kinds of terms, one which goes with a power of $r^{D-1}$, and one which goes with a power of $r^{D-3}$. 
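For instance, the simplest candidate involving $f''$ arises for $n=2$, $j=2$, in which case \req{fnj} reduces to \begin{equation} \mathcal{F}_{(2,2)}=-\frac{r^{D-4}}{8}\left[2(D-3)f f'-r (f')^2+2 r f f''\right]\, , \end{equation} and evaluating this expression on $f(r)=r^2/L_{\star}^2+k$ one finds \begin{equation} \mathcal{F}_{(2,2)}\Big|_{\rm AdS}=-\frac{(D-3)}{2L_{\star}^4}\,r^{D-1}-\frac{(D-2)\,k}{2L_{\star}^2}\,r^{D-3}\, , \end{equation} which indeed contains both types of contributions.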
As we have seen, actual densities contribute with a single power of the type $r^{D-1}$, so we must impose that the second kind of term is absent for our putative densities. Removing such a piece amounts to imposing the condition \begin{equation}\label{cond1} \sum_{j=0}^{n} \alpha_{(n,j)} (2n-Dj)=0\, . \end{equation} Hence, we learn that not all the candidate densities can be independent and we reduce the number from $(n+1)$ to $n$. There is another condition we can easily impose on our candidate densities. As explained in the first section, GQTG densities have second-order linearized equations around general maximally symmetric backgrounds. This is in contradistinction to most higher-curvature gravities, whose linearized equations involve up to four derivatives of the metric ---see {\it e.g.,}\ \cite{Aspects} for general formulas. Suppose then that we consider a small radial perturbation on AdS space such that the metric function becomes \begin{equation}\label{perv} f(r)=\frac{r^2}{L_{\star}^2}+k+\varepsilon h(r)\, , \end{equation} where $\varepsilon \ll 1$. Now, observe that in our general discussion, the integrated equation of motion for a GQTG density, $\mathcal{F}_{\mathcal{S}_n}$, has been integrated once (on $r$) with respect to the actual equations of motion of the corresponding density. Hence, the fact that the actual (linearized) equations of motion for GQTG densities are second order for any perturbation on a maximally symmetric background implies that the integrated equations cannot contain terms involving $h''(r)$ (or more derivatives) at leading order in $\varepsilon$. If they did, the actual linearized equations would involve terms of the form $\sim \varepsilon h'''(r)$, in contradiction with the linearized second-order behavior. With this in mind, our strategy now is to insert \req{perv} in a linear combination of integrated equations for our candidate on-shell densities (\ref{fnj}) and impose that no terms involving $h''(r)$ appear at leading order in $\varepsilon$. By doing so, we find an additional (remarkably simple) condition, which reads \begin{equation}\label{cond2} \sum_{j=0}^{n} \alpha_{(n,j)}j (j-1)=0\, . \end{equation} Imposing it further reduces the number of independent densities from $n$ to $(n-1)$. Hence, we conclude that in $D$ dimensions there exist at most $(n-1)$ inequivalent GQTG theories of order $n$. Later in subsection \ref{proofexa} we will prove that in fact there exist exactly $(n-1)$ inequivalent densities for $D\geq 5$. There are many possible ways to choose a basis of on-shell densities so that \req{cond1} and \req{cond2} are implemented. For instance, we may choose for the $\tau(r)$ functions defined in \req{ttau} \begin{align}\label{qt} \tau^{\rm QT}_{\{n\}}\equiv & +(2n-D) \tau_{(n,0)}-2n \tau_{(n,1)}\, ,\\ \tau^{\rm GQT}_{\{n,j\}}\equiv& +(j+1)(D j -4 n) \tau_{(n,j+1)} \\ \notag &+ \left[2D(1-j^2)-4n(1-2j) \right]\tau_{(n,j)} + j [D(j+1)-4n]\tau_{(n,j-1)}\, , \end{align} with $j=2,\dots,n-1$, where we isolated the QT class combination in the first line ---see next subsection. Naturally, constructing actual covariant densities of each of the classes is a non-trivial problem on its own. Explicit formulas for order-$n$ GQTG densities in arbitrary dimensions $D\geq 4$ as well as for order-$n$ QT densities in $D\geq 5$ were presented in \cite{Bueno:2019ycr}. However, these cases only exhausted $2$ of the $(n-1)$ classes which we show to exist for $D\geq 5$ in the present paper (one of the genuine GQTG types and the Quasi-topological one). 
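To see how conditions \req{cond1} and \req{cond2} operate in the simplest case, consider $n=2$. There, \req{cond2} forces $\alpha_{(2,2)}=0$ ---consistent with the fact that, at quadratic order, $\mathcal{F}_{(2,2)}$ is the only candidate containing $f''$--- and \req{cond1} then reduces to $4\alpha_{(2,0)}+(4-D)\alpha_{(2,1)}=0$, whose solution is, up to normalization, $(\alpha_{(2,0)},\alpha_{(2,1)},\alpha_{(2,2)})\propto(4-D,-4,0)$. This is precisely $\tau^{\rm QT}_{\{2\}}$ in \req{qt}, so at quadratic order there is a single possible on-shell structure, the one realized, for $D\geq 5$, by the Gauss--Bonnet density.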
In Appendix \ref{densiii} we present explicit formulas for the $(n-2)$ different types of GQTG densities for $n=4,5,6$ in $D=5$ and $D=6$. \subsection{Uniqueness of Quasi-topological densities} As mentioned above, Quasi-topological densities are a subclass of GQTGs characterized by having an algebraic (as opposed to second-order differential) integrated equation of motion for the metric function $f(r)$ \cite{Quasi2, Quasi, Dehghani:2011vu,Ahmed:2017jod,Cisterna:2017umf}. Theories of that kind are required to satisfy an additional condition besides (\ref{GQTGcond}), namely \cite{Bueno:2019ycr} \begin{equation} \left[ \frac{D-2}{r} \frac{\partial}{\partial f''}+\frac{\mathrm{d} }{\mathrm{d} r} \frac{\partial}{\partial f''}+\frac{(D-3)}{2} \frac{\partial}{\partial f'}+\frac{r}{2} \frac{\mathrm{d} }{\mathrm{d} r} \frac{\partial}{\partial f'}-r \frac{\partial}{\partial f}\right] \mathcal{Z}|_f=0\, , \end{equation} which is equivalent to enforcing that the term $\nabla^d P_{acdb}$ from the field equations vanishes on a static spherically symmetric metric ansatz. Imposing this condition on a general linear combination of our candidate densities (\ref{genS}) severely constrains the values of the $\alpha_{(n,j)}$, and we find that $\tau^{\rm QT}_{\{n\}}$ as defined in \req{qt} is in fact the only possibility. Hence, we learn that the only combination of putative densities compatible with the Quasi-topological condition is given by \begin{equation}\label{zf} \mathcal{Z}_{(n)}|_f= \frac{1}{r^{D-2}} \frac{\mathrm{d}}{\mathrm{d} r} \left[ r^{D-1} \left( (2n-D) \tau_{(n,0)}-2n \tau_{(n,1)}\right) \right]\, . \end{equation} Now, Quasi-topological gravities with precisely this structure were shown to exist in \cite{Bueno:2019ycr} at all orders in $n$ and for all $D\geq 5$. Therefore, we conclude that the only possible on-shell structure of a Quasi-topological density is given by (\ref{zf}). There are no additional inequivalent Quasi-topological densities besides the known ones: if a given higher-curvature density possesses second-order linearized equations around maximally symmetric backgrounds and admits black hole solutions satisfying $g_{tt}g_{rr}=-1$ and such that the equation for $f(r)$ is algebraic, then the equation which determines such a function is uniquely fixed to be \begin{equation} \mathcal{F}_{\mathcal{Z}_n}=\frac{(D-2n)}{2} r^{D-2n-1} (k-f)^n\, . \end{equation} This naturally includes the subcases of Einstein and Lovelock gravities. \subsection{Exactly $(n-1)$ order-$n$ densities}\label{proofexa} Let us finally proceed to prove that there exist exactly $(n-1)$ inequivalent GQTG densities of order $n$ in dimensions higher than four. Consider the following combination of ``on-shell densities'' \begin{equation} \mathcal{S}^{(k)}_{p}=\sum_{i=0}^{p}\alpha^{(k)}_{p,i} \mathcal{S}_{(p,i)}\, ,\quad k=1,\ldots,k_p\equiv \max\,(1,p-1)\, , \end{equation} where the $\mathcal{S}_{(p,i)}$ are defined in \req{rj} and where we assume the constants $\alpha^{(k)}_{p,i}$ to satisfy the constraints found in subsection \ref{atmostnm1}, namely, \begin{equation}\label{alphaconstrv2} \sum_{j=0}^{p} \alpha^{(k)}_{p,j} (2p - D j )=0\, ,\quad \sum_{j=0}^{p} \alpha^{(k)}_{p,j} j(j-1)=0\, . \end{equation} At each curvature order $p$, there are $k_{p}$ linearly independent solutions and the index $k$ labels each of them. Now, let us assume that for $p=1,2,\ldots ,n$ we have proven that all of these on-shell densities correspond to the evaluation of actual higher-curvature densities on the single-function SSS ansatz.
Namely, there exists a set of Lagrangians $\mathcal{R}_{p}^{(k)}$ such that \begin{equation} \mathcal{R}_{p}^{(k)}\Big|_{f}=\mathcal{S}^{(k)}_{p}\, ,\quad p=1,\ldots n\, , \quad k=1,\ldots, k_{p}\, . \end{equation} With this in mind, let us now consider an order-$(n+1)$ density built from a general linear combination of products of all these lower-order densities, {\it i.e.,}\ \begin{align} \tilde{\mathcal{R}}_{n+1}=\sum_{m=1}^{n}\sum_{k=1}^{k_m} \sum_{k'=1}^{k_{n+1-m}}C_{m,k,k'}\mathcal{R}^{(k)}_{m}\mathcal{R}^{(k')}_{n+1-m}\, , \end{align} where we introduced the constants $C_{m,k,k'}$. We can ask now: is it possible to generate $n$ inequivalent GQTGs of order $(n+1)$ in this way? In order to answer this question, let us evaluate $\tilde{\mathcal{R}}_{n+1}$ on the single-function SSS ansatz and try to obtain all the possible on-shell GQTGs structures. The evaluation yields \begin{align} \tilde{\mathcal{R}}_{n+1}\Big|_{f}&=\sum_{m=1}^{n}\sum_{k=1}^{k_m} \sum_{k'=1}^{k_{n+1-m}}\sum_{i=0}^{m} \sum_{j=0}^{n+1-m}\alpha^{(k)}_{m,i} \alpha^{(k')}_{n+1-m,j}C_{m,k,k'} \mathcal{S}_{(m,i)}\mathcal{S}_{(n+1-m, j)}\\ \label{ramong} &=\sum_{m=1}^{n}\sum_{i=0}^{m} \sum_{j=0}^{n+1-m}\tilde{C}_{m,i,j} \mathcal{S}_{(m,i)}\mathcal{S}_{(n+1-m, j)}\, , \end{align} where we defined \begin{equation} \tilde{C}_{m,i,j}\equiv \sum_{k=1}^{k_m} \sum_{k'=1}^{k_{n+1-m}}\alpha^{(k)}_{m,i} \alpha^{(k')}_{n+1-m,j}C_{m,k,k'}\, . \end{equation} Now, since we are summing over all the $\alpha^{(k)}_{n,j}$ satisfying \eqref{alphaconstrv2} and $C_{m,k,k'}$ is an arbitrary tensor, note that this equality is equivalent to demanding that $\tilde{C}_{m,i,j}$ is an arbitrary tensor satisfying the following constraints \begin{align}\label{Cconstr1} \sum_{j=0}^{n+1-m}\tilde{C}_{m,i,j} [2(n+1-m) - D j ]&=0\, ,\quad \sum_{j=0}^{n+1-m} \tilde{C}_{m,i,j} j(j-1)=0\, ,\\ \label{Cconstr2} \sum_{i=0}^{m}\tilde{C}_{m,i,j} (2m - Di)&=0\,,\quad \sum_{i=0}^{m} \tilde{C}_{m,i,j} i(i-1)=0\, . \end{align} In this way, we do not need to make reference to the $\alpha^{(k)}_{n,j}$ anymore. Next, it is convenient to rearrange the sum in the following form, in terms of the index $l\equiv i+j$, \begin{align} \tilde{\mathcal{R}}_{n+1}\Big|_{f}&=\sum_{m=1}^{n}\sum_{l=0}^{n+1} \sum_{j=\max(l-m,0)}^{\min(l,n+1-m)}\tilde{C}_{m,l-j,j} \mathcal{S}_{(m,l-j)}\mathcal{S}_{(n+1-m, j)}\\ &=\sum_{l=0}^{n+1}\sum_{m=1}^{n} \sum_{j=0}^{n+1-m}\theta(l-j)\theta(j+m-l)\tilde{C}_{m,l-j,j} \mathcal{S}_{(m,l-j)}\mathcal{S}_{(n+1-m, j)}\, , \end{align} where $\theta(x)\equiv 1$ if $x\ge0$ and $\theta(x)\equiv 0$ if $x<0$. Observe that the effect of the theta functions is to enforce that $i \geq 0$ and $i\leq m $, respectively, which in \req{ramong} is explicit from the $i$ sum. 
Expanding the product $\mathcal{S}_{(m,l-j)}\mathcal{S}_{(n+1-m, j)}$ we get the following expression, \begin{align}\notag \tilde{\mathcal{R}}_{n+1}\Big|_{f}=&\sum_{l=0}^{n+1}\sum_{m=1}^{n} \sum_{j=0}^{n+1-m}\theta(l-j)\theta(j+m-l)\tilde{C}_{m,l-j,j} \\\notag & \times \bigg[\alpha_{l,m,j} B^{2+l}\psi ^{n-1-l} +\beta_{l,m,j} B^{1+l} \psi ^{n-l}+\gamma_{l,m,j}B^l\psi ^{1-l+n} \\ &\quad \, \, +\sigma_{l,m,j}r B' B^l \psi^{n-l} +\zeta_{l,m,j}rB'B^{l-1} \psi^{1-l+n} +\omega_{l,m,j}r^2\left(B'\right)^2B^{l-2}\psi ^{1-l+n} \bigg]\, , \end{align} where \begin{align} \alpha_{l,m,j} \equiv &-4 (j-l+m) (-1+j+m-n)\, ,\\\notag \beta_{l,m,j} \equiv& -2 \big[1-4 j^2-5 l+4 m+D (-1+l-n)+4(l-m) (m-n)+n\\ &+4 j (1+l-2 m+n)\big]\, ,\\ \gamma_{l,m,j} \equiv&+(-1+D-2 j+2 l-2 m) (-3+D+2 j+2m-2 n)\, ,\\ \sigma_{l,m,j} \equiv&+2\left[2 j^2-j (1+2 l-2 m+n)+l (1-m+n)\right]\, ,\\ \zeta_{l,m,j} \equiv&-\left[4 j^2-l (-3+D+2 m-2 n)-2 j (1+2 l-2 m+n)\right]\, ,\\ \omega_{l,m,j} \equiv&- j (j-l)\, . \end{align} Finally, this can be recast as follows, \begin{align}\notag \tilde{\mathcal{R}}_{n+1}\Big|_{f}=\sum_{l=0}^{n+1} \left[\Gamma_{l}B^l\psi ^{1-l+n}+\Upsilon_{l}rB' B^{l-1}\psi ^{1-l+n}+\Omega_{l}r^2\left(B'\right)^2B^{l-2}\psi ^{1-l+n}\right]\, , \end{align} where \begin{align}\notag \Gamma_{l}&\equiv \sum_{m=1}^{n}\sum_{j=0}^{n+1-m}\Bigg[\theta(l-2-j)\theta(j+m-l+2)\tilde{C}_{m,l-2-j,j} \alpha_{l-2,m,j}\\ &\quad +\theta(l-1-j)\theta(j+m-l+1)\tilde{C}_{m,l-1-j,j} \beta_{l-1,m,j} +\theta(l-j)\theta(j+m-l)\tilde{C}_{m,l-j,j} \gamma_{l,m,j} \Bigg]\, ,\\ \notag \Upsilon_{l}&\equiv \sum_{m=1}^{n}\sum_{j=0}^{n+1-m}\Bigg[\theta(l-1-j)\theta(j+m-l+1)\tilde{C}_{m,l-1-j,j} \sigma_{l-1,m,j}\\ & \quad +\theta(l-j)\theta(j+m-l)\tilde{C}_{m,l-j,j} \zeta_{l,m,j} \Bigg]\, ,\\ \Omega_{l}&\equiv \sum_{m=1}^{n}\sum_{j=0}^{n+1-m}\theta(l-j)\theta(j+m-l)\tilde{C}_{m,l-j,j} \omega_{l,m,j} \, . \end{align} Now, in order for this to be a GQTG we must have \begin{align} \tilde{\mathcal{R}}_{n+1}\Big|_{f}&=\mathcal{S}^{(k)}_{n+1}=\sum_{l=0}^{n+1}\alpha^{(k)}_{n+1,l} \mathcal{S}_{(n+1,l)}\\ &=\sum_{l=0}^{n+1}\alpha^{(k)}_{n+1,l} B^{l-1} \psi^{n-l} \left(l r \psi B'+B \psi (D+2 l-2 n-3)-2 B^2 (l-n-1)\right)\\ &=\sum_{l=0}^{n+1} \bigg[ B^{l} \psi^{n-l+1}\left(\alpha^{(k)}_{n+1,l}(D+2 l-2 n-3)-\alpha^{(k)}_{n+1,l-1}2(l-n-2)\right)\\ &\quad \quad \quad +\alpha^{(k)}_{n+1,l}l r B'B^{l-1} \psi^{n-l+1} \bigg]\, , \end{align} for some coefficients $\alpha^{(k)}_{n+1,l}$. Therefore, we have the equations \begin{align}\label{rec1} \Gamma_{l}=&\alpha^{(k)}_{n+1,l}(D+2 l-2 n-3)-\alpha^{(k)}_{n+1,l-1}2(l-n-2)\, , \\ \label{rec2} \Upsilon_{l}=&l\alpha^{(k)}_{n+1,l}\, ,\\ \label{rec3} \Omega_{l}=&0\, , \end{align} for $l=0,\ldots , n+1$. In addition, the coefficients $\alpha^{(k)}_{n+1,l}$ should satisfy the constraints \begin{equation}\label{alphaconstrv3} \sum_{l=0}^{n+1} \alpha^{(k)}_{n+1,l} (2n+2 - Dl)=0\, ,\quad \quad \sum_{l=0}^{n+1} \alpha^{(k)}_{n+1,l} l(l-1)=0\, , \end{equation} but note that these must arise as consistency conditions in order for the system of equations to have solutions. Then, the question is whether the system of equations for the tensor $\tilde C_{m,i,j}$ given by Eqs.~\eqref{Cconstr1}, \eqref{Cconstr2}, \eqref{rec1}, \eqref{rec2}, \eqref{rec3} has solutions for \emph{any} value of the $\alpha^{(k)}_{n+1,l}$ satisfying the constraints \eqref{alphaconstrv3}. 
If that is the case, then we have proven the existence of $n$ different GQTGs at order $n+1$ which, as we saw earlier, is the maximum possible number of GQTGs at that order. The number of equations to be solved for fixed $n$ ---namely, the number of equations required for establishing the existence of $n$ densities of order $(n+1)$--- and the number of unknowns ($\tilde C_{m,i,j}$) read, respectively, \begin{equation} \# \text{ equations} = \frac{12+n(11+3n)}{2}\, , \quad \# \text{ unknowns} =\frac{n(n+2)(n+7)}{6}\, . \end{equation} The former is greater than the latter as long as $n<5.10421$ and smaller for larger values of $n$. Observe that while the number of equations grows as $\sim n^2$, the number of unknowns grows as $\sim n^3$. Here, the number of unknowns is the number of constants available to be fixed in order for the GQTG conditions to be satisfied, and so having more unknowns than equations means that we have more than enough freedom to impose all the conditions. Hence, as long as we are able to show that the $(n-1)$ different classes of GQTG exist for $n\leq 6$ using other methods, this result shows that they will generally exist for $n > 6$. In practice, solving this system of equations explicitly for arbitrary $n\geq 6$ and $D$ is challenging. Nevertheless, the resolution for explicit values of $n$ and $D$ is straightforward with the help of a computer algebra system. Doing this, we have checked that there is a solution for any consistent value of the $\alpha^{(k)}_{n+1,l}$ in any $D$ as long as $n\ge 6$.\footnote{In practice, we have checked this explicitly for $n=6$ and general $D$ and for $n=7,\dots, 20$ in $D=5,6,7$. } In sum, our results here imply that, if $(n-1)$ inequivalent GQTGs exist for $n=1,\ldots, 6$, then $(n-1)$ inequivalent densities will exist for every order $n\geq 6$. In Appendix \ref{densiii} we have provided explicit examples of all the inequivalent classes of GQTGs up to $n=6$ for $D=5,6$, so this proves that there are $(n-1)$ inequivalent GQTGs at every order $n\ge 2$ in those cases. The construction of explicit $n\leq 6$ densities of all the different classes for other values of $D$ can be analogously performed (although it requires some non-trivial computational effort in each case), so we are highly confident that our results apply for general $D\geq 7$ as well. On the other hand, note that our argument here does not work in $D=4$. Indeed, we have found no evidence for the existence of additional inequivalent GQTGs (besides the one known prior to this paper \cite{Hennigar:2016gkm,PabloPablo2,PabloPablo4,Bueno:2019ycr}) up to order $6$ in that case. This strongly suggests that in $D=4$ there is a single type of GQTG at every curvature order, although a rigorous proof of this fact would require some additional work. \section{Black Hole Thermodynamics}\label{GQTGthermo} In this section we study thermodynamic aspects of GQTGs in as general a fashion as possible. First we show that the first law of black hole mechanics is satisfied by the black hole solutions of general GQTGs. Then we show that the thermodynamic quantities of at least one class of genuine GQTGs can be, similarly to the Lovelock and quasi-topological cases, expressed in terms of the characteristic polynomial which embeds maximally symmetric backgrounds in the theory and the on-shell Lagrangian. \subsection{The first law for general GQTGs} Here we wish to understand the first law of thermodynamics for all possible GQTGs.
We will begin by working directly with Eq.~\eqref{fnj}, without imposing the constraints on the couplings given in Eqs.~\eqref{cond1} and \eqref{cond2} at this time. The integrated field equations of the putative theory can be written in the form \begin{equation} \sum_{n=0}^{n_{\rm max}} \sum_{j=0}^{n} \alpha_{n,j} \mathcal{F}_{(n,j)} = - \frac{8 \pi G M}{\Omega_{D-2}} \, , \end{equation} where the parameter $M$ is the black hole mass \cite{Arnowitt:1960es,Arnowitt:1960zzc,Arnowitt:1961zz,Deser:2002jk}. At a black hole horizon, where $f(r_+) = 0$, the above equation can be expanded to yield the following constraints: \begin{align} M &= \frac{\Omega_{D-2}}{16 \pi G} \sum_{n=0}^{n_{\rm max}} \sum_{j=0}^{n} \alpha_{n,j} (j-1) k^{n-j} r_+^{D-2n - 1} (-2 \pi r_+ T)^j \, , \\ 0 &= \sum_{n=0}^{n_{\rm max}} \sum_{j=0}^{n} \alpha_{n,j} (D-2n +j-1) k^{n-j} (-2 \pi r_+ T)^j r_+^{D-2n-2} \, . \end{align} where the temperature satisfies $T=f'(r_+)/(4\pi)$. The first equation gives the black hole mass in terms of the temperature $T$ and the horizon radius $r_+$, while the second provides a relationship between $T$ and $r_+$. The other ingredient we need is the black hole entropy. This should be computed according to Wald's formula \cite{Wald:1993nt,Iyer:1994ys} \begin{equation} S = -2\pi \int_{\mathcal{H}} \mathrm{d}^{D-2} x \sqrt{h} \, P_{ab}{}^{cd} \varepsilon^{ab}\varepsilon_{cd} \, , \end{equation} where $\varepsilon_{ab}$ is the binormal to the horizon $\mathcal{H}$. Using the technology introduced in~\cite{Bueno:2019ycr}, this can be computed without knowledge of the covariant form of the Lagrangian. The key insight is that the tensor $P_{ab}{}^{cd}$ can be computed from the on-shell Lagrangian and must take the form \begin{equation} \label{Pabcd} \left.\tensor{P}{_{cd}^{ab}}\right|_{f} = P_1 T^{[a}_{[c} T^{b]}_{d]} +P_2 T^{[a}_{[c} \sigma^{b]}_{d]} +P_3 \sigma^{[a}_{[c} \sigma^{b]}_{d]} \, , \end{equation} where \begin{equation} P_1\equiv - \frac{\partial {\cal R}_{(n)}|_f}{\partial f''} \, ,\quad P_2 \equiv - \frac{r}{D-2} \frac{\partial {\cal R}_{(n)}|_f}{\partial f'} \, , \quad P_3\equiv - \frac{ r^2 }{(D-2)(D-3)} \frac{\partial {\cal R}_{(n)}|_f}{\partial f }\, . \end{equation} For the case of the static and spherically symmetric black holes considered here, the horizon binormal is given by $\varepsilon_{ab} = 2 r_{[a} t_{b]}$ with $r^a$ and $t^b$ the unit spacelike and timelike normal vectors. A calculation then gives \begin{equation} S = -4 \pi \Omega_{D-2} r_+^{D-2} \left[\frac{\partial \mathcal{L}}{\partial f''} \right]_{r = r_+} = \frac{\Omega_{D-2}}{8 G} \sum_{n=0}^{n_{\rm max}} \sum_{j=0}^{n} \alpha_{n,j} j k^{n-j} (-2 \pi r_+ T)^{j-1} r_+^{D-2n} \, . \end{equation} It is then straight-forward to show that the first law of thermodynamics \begin{equation} \mathrm{d} M = T \mathrm{d} S \end{equation} holds independent of any conditions placed on the couplings $\alpha_{n,j}$. This fact is somewhat surprising because, as discussed earlier, it is only when certain constraints are obeyed by the couplings that a genuine, covariant construction for the Lagrangian can be built based on curvature invariants. However, these same constraints are unnecessary to obtain a valid first law. Despite the fact that the coupling constraints are not necessary to obtain a valid first law, it is still possible to understand them from a thermodynamic perspective. 
For this, the natural starting point is the free energy, which reads \begin{equation} F = \frac{\Omega_{D-2}}{16 \pi G} \sum_{n, j} \alpha_{n, j} k^{n-j}(-2 \pi T)^j r_+^{D-1 -2n+j} \, . \end{equation} From the free energy, the equation that relates the temperature and horizon radius can be obtained according to \begin{equation} \frac{\partial F}{\partial r_+} = 0 \, , \end{equation} while the mass and entropy can then be verified to follow in the usual way. The constraints on the couplings enforce the following conditions on the free energy: \begin{eqnarray} F-T\frac{\partial F}{\partial T}-\frac{r_+}{D-1}\frac{\partial F}{\partial r_+}\Bigg|_{2\pi T r_+=-k}&=&0\, ,\\ \frac{\partial^2 F}{\partial T^2}\Bigg|_{2\pi T r_+ =-k}&=&0\, , \end{eqnarray} where it is to be noted that the derivatives here are to be computed without assuming any relationship between $r_+$ and $T$. These expressions, which phrase the coupling constraints as properties of the free energy, can be reinterpreted as statements about massless hyperbolic black holes. The static black hole with metric function \begin{equation} f(r)=-1 + \frac{r^2}{L_{\star}^2} \end{equation} is pure AdS space in a particular slicing. In terms of the parameters we have been using, this corresponds to $k=-1$, $r_+ = L_\star$ and $T = 1/(2 \pi L_\star)$, therefore satisfying the condition $2 \pi T r_+ = - k$. In this language, as we will see explicitly below, the first of the two constraints on the free energy actually ensures that the mass of this black hole vanishes. The second constraint on the free energy does not have as direct an interpretation in terms of the thermodynamic properties of this black hole, but one could imagine it is a statement about fluctuations. \subsection{A unified picture of the thermodynamics?} Lovelock and quasi-topological gravities are, by comparison to alternatives, rather simple extensions of general relativity, especially in the context of static, spherically symmetric black holes. Within our parameterization, the choice of coupling constants $\alpha_{n,j}$ that reproduces the on-shell Lagrangian of Lovelock and quasi-topological theories is precisely~\eqref{qt}. For these theories, as has long been known in the case of Lovelock \cite{Boulware:1985wk,Wheeler:1985nh,Wheeler:1985qd}, the field equations for a static, spherically symmetric black hole take the form \begin{equation} M = \frac{(D-2)\Omega_{D-2} r^{D-1}}{16 \pi G L^2} h(y) \, , \quad y \equiv \frac{(f(r) -k) L^2}{r^2} \, . \end{equation} The function $h(x)$ appearing here is the same function that determines the vacua of the theory, {\it i.e.,}\ the one appearing in the field equations for the maximally symmetric solutions of the theory. This ``embedding function'' or ``characteristic polynomial'' is related to the Lagrangian of the theory evaluated on a maximally symmetric background \cite{Aspects,Bueno:2018yzo} \begin{equation}\label{embed} h(x) = \frac{16 \pi G L^2}{(D-1)(D-2)} \left[\mathcal{L}(x) - \frac{2}{D} x \mathcal{L}'(x) \right]\, , \end{equation} where $x$ is related to the curvature of the maximally symmetric background according to \begin{equation}\label{mssCurv} R_{ab}{}^{cd} = - \frac{2 x}{L^2} \delta_{[a}^c \delta_{b]}^d \, , \end{equation} and $\mathcal{L}(x)$ corresponds to the Lagrangian of the theory evaluated for the curvature~\eqref{mssCurv}.
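As an elementary check of these expressions, for Einstein gravity (with the normalization of the cosmological constant implicit in \req{action}) one finds $h(x)=1-x$, so that the vacuum condition $h(L^2/L_{\star}^2)=0$ reduces to $L_{\star}=L$, as anticipated above, while the field equation displayed above becomes \begin{equation} M=\frac{(D-2)\Omega_{D-2}}{16\pi G}\left[\frac{r^{D-1}}{L^2}-\left(f(r)-k\right) r^{D-3}\right]\, , \end{equation} whose solution is the usual Schwarzschild-AdS$_D$ metric function, $f(r)=k+\frac{r^2}{L^2}-\frac{16\pi G M}{(D-2)\Omega_{D-2}\,r^{D-3}}$.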
The fact that the field equations can be written in terms of the embedding function naturally leads to some simple and universal expressions for black hole thermodynamics: \begin{align} M &= \frac{(D-2)\Omega_{D-2} r_+^{D-1}}{16 \pi G L^2} h(y_+) \, , \quad y_+ \equiv -\frac{kL^2}{r_+^2} \, , \nonumber\\ S &= - \frac{4 \pi \Omega_{D-2} L^2 r_+^{D-2}}{D(D-1) } \mathcal{L}'(y_+) \, . \end{align} These relationships are expressed here in their simplest possible forms, but of course can be massaged using the identity~\eqref{embed} and its derivatives, along with the constraint \begin{equation} 0 = (D-1)k h(y_+) - 2 y_+ (2 \pi r_+ T + k) h'(y_+) \, , \end{equation} which can be used to isolate for the temperature, if desired. It is natural to wonder whether similar relationships hold for the more complicated generalized quasi-topological theories, or whether this result for Lovelock and quasi-topological theories was an artefact of their simplicity. Here we will provide evidence that this is indeed possible, though the situation is more involved than the Lovelock and quasi-topological cases. Consider the family of theories identified according to the following choices of couplings: \begin{align} \alpha_{n, n-j} = \frac{(D-4)^{j-1} n! \left[\left(n - j - 2 \right) D - 4(n-2) \right]}{2^{2j} \, j! \, (n-j)! \, (n-2)} \alpha_{n,n} \, . \end{align} In general dimensions, this corresponds to the family of theories for which an explicit covariant formulation was identified in~\cite{Bueno:2019ycr}. These couplings satisfy the necessary constraints~\eqref{cond1} and~\eqref{cond2}, and in addition define a family of GQTG theories for which the free energy can be written as, \begin{equation} F = - \frac{(D-2) \Omega_{D-2} r_+^{D-1}}{16 \pi G L^2} h(x_+) - \frac{4 L^2 r_+^{D-3}}{D^2(D-1)} \left[(D-2)k + (D-4) \pi r_+ T \right] \mathcal{L}'(x_+) \end{equation} where \begin{equation} x_+ \equiv \frac{8 \pi T L^2}{r_+ D} - \frac{(D-4) k L^2}{r_+^2 D} \, . \end{equation} From this form of the free energy, the full thermodynamic properties for this class of theories can be derived. We obtain for the mass and relationship between the temperature and horizon radius the following two results: \begin{align} M =& \, \frac{(D-2) \Omega_{D-2} r_+^{D-1}}{16 \pi G L^2} h(x_+) - \frac{(D-2) \Omega_{D-2} r_+^{D-3}}{4 \pi G D} [2 \pi r_+ T + k] h'(x_+) \nonumber\\ &+ \frac{(D-4) \Omega_{D-2} L^4 r_+^{D-5}}{D^3(D-1)} [2 \pi r_+ T + k]^2 \mathcal{L}''(x_+) \, , \\ 0 =&\, (D-1)(D-2) h(x_+) - \frac{2 (D-2)^2 L^2}{r_+^2 D} [2 \pi r_+ T + k] h'(x_+) \nonumber\\ &- \frac{8 (D-4) L^6}{D^3 (D-1) r_+^4} [2\pi r_+ T + k]^2 \left( 16 \pi G \mathcal{L}''(x_+)\right) \, , \end{align} while the entropy can be simply obtained from the above according to $S = (M-F)/T$. It is a bit interesting that the thermodynamic properties of black holes can be encoded in terms of the embedding function $h(x)$ and the Lagrangian of the theory $\mathcal{L}(x)$ evaluated on an auxiliary maximally symmetric vacuum spacetime with curvature given by $x_+/L^2$. There is one case where this result is somewhat natural, and this is the case of massless hyperbolic black holes where $f = -1 + r^2/L_{\star}^2$. Of course, this choice of metric function amounts to a pure AdS space in a particular slicing. One has $k=-1$, $T = 1/(2 \pi L_{\star})$, and $x_+ = L^2/L_{\star}^2$. In this case, the only non-trivial field equation demands that $h(x_+) = 0$, which in turn demands that $M = 0$. Next, note that considerable simplification occurs in $D = 4$. 
In this case, the situation reduces to that first studied in~\cite{PabloPablo4}. In that case, the couplings are given by \begin{equation} \alpha_{n, n-1} = -\frac{n }{n-2} \alpha_{n, n} \, , \quad \alpha_{2, j} = 0 \quad \forall j \, , \quad \text{and} \quad \alpha_{n, j} = 0 \quad \forall j \neq n, n-1 \, , \forall n \ge 3 \, . \end{equation} The thermodynamic relations in this case simplify to \begin{align} M &= \frac{\Omega_{D-2} r_+^3}{8 \pi G L^2} h(x_+) - \frac{\Omega_{D-2} r_+}{8 \pi G} \left[2 \pi r_+ T + k \right] h'(x_+) \, , \\ S &= \frac{\Omega_{D-2} k r_+ L^2}{6 T} \mathcal{L}'(x_+) - \frac{\Omega_{D-2} r_+}{8 \pi G T} \left[2 \pi r_+ T + k \right] h'(x_+) \, , \end{align} and the constraint that determines the temperature in terms of the horizon radius reads \begin{equation} 0 = \frac{-3 r_+^2}{L^2} h(x_+) + \left[2 \pi r_+ T + k \right] h'(x_+) \, . \end{equation} It seems likely that the thermodynamics of each family of GQTG can be obtained in this way, though we will leave that full analysis for future work. Nonetheless, we can make a few general remarks, based on the connection with massless hyperbolic black holes. For any given family of GQTGs, the mass must have a term proportional to $h(x)$ followed by a series of terms with powers that vanish for the massless hyperbolic black hole. For example, the simplest possibility would be $(2 \pi r_+ T + k)$ raised to various powers, multiplying derivatives of $h$ and $\mathcal{L}$. Similarly, the entropy must have a term proportional to $\mathcal{L}'(x)$, followed by a series of terms that vanish for the massless hyperbolic black hole, just as above. Lastly, the argument $x$ must be a function of $r_+$, $T$ and $k$ that limits to $L^2/L_{\star}^2$ for the massless hyperbolic black hole. For example, allowing for a linear dependence on the parameters, the most general option is the one-parameter family \begin{equation} x_+ = \frac{2 \pi T L^2 \beta}{r_+} + \frac{(\beta-1) k L^2}{r_+^2} \, . \end{equation} This linear relationship recovers the result for Lovelock/quasi-topological gravity (with $\beta = 0$) and the GQTG family we have presented above (with $\beta = 4/D$). Preliminary calculations have suggested that other GQTG families may require a more complicated dependence than this. \section{Final comments} \label{discuss} In this work, we have completed the structural analysis of generalized quasi-topological gravities, proving that at order $n$ in curvature there exist $n-1$ distinct GQTGs provided $D > 4$. In the case of $D = 4$, our results strongly suggest that there is a single (unique up to addition of trivial densities) GQTG family corresponding to that identified in~\cite{PabloPablo4}. To achieve this, we first derived an upper bound, based on the fact that an on-shell GQTG density must be a polynomial in the three independent terms appearing in the Riemann curvature for a static, spherically symmetric background. This upper bound, which holds independent of any knowledge of the covariant form of the densities, was then refined by demanding of the putative theories additional properties that must hold for a true covariant density. Finally, we proved the refined estimate to be exact using arguments based on recurrence formulas, like those introduced in~\cite{Bueno:2019ycr}. In order for our argument to hold, it is required that $n-1$ densities exist for $n=2,3,4,5,6$, which then implies existence for all $n>6$. 
Such $n-1$ densities for the lowest curvature orders can be constructed explicitly for $D\geq 5$ but not for $D=4$, in which case we have verified that there is always a unique density for every $n=2,\dots,6$. The argument for higher $n$ then fails for $D=4$. While it could in principle be possible that additional inequivalent densities exist in $D=4$ for higher orders ---and our construction involving products of lower-order densities was not general enough to capture them--- we find this possibility highly unlikely. In addition, we have provided a basic analysis of the thermodynamic properties of black holes in all possible theories, confirming that the first law is satisfied. Perhaps the most interesting result in this direction is the strong evidence that the thermodynamics of black holes in any GQTG may be expressible in terms of the same function that determines the vacua of the theory, just like in Lovelock and quasi-topological gravities. Why the thermodynamics of black holes in these theories is encoded in the curvature of some auxiliary maximally symmetric space remains mysterious to us, and may be worth further investigation. More pragmatically, such closed-form and universal expressions provide a simple means by which the thermodynamics could be studied when an infinite number of higher-curvature corrections are simultaneously included. As a by-product, our work has identified $(n-2)$ hitherto unknown families of GQTGs in $D > 4$. Going forward, it would be interesting to understand how the properties of black hole solutions differ between these different families, or whether there exist universal features, as occurs in $D = 4$~\cite{PabloPablo4}. Moreover, the methods we have used to upper bound the number of distinct theories may generalize to allow for a similar analysis to be carried out when there is non-minimal coupling between gravity and matter fields. \section*{Acknowledgments} In some cases, calculations performed in the manuscript have been facilitated by Maple and Mathematica, utilizing the specialized packages GRTensor and xAct~\cite{xact}. The work of PB was partially supported by the Simons Foundation through the It From Qubit Simons collaboration. The work of PAC is supported by a postdoctoral fellowship from the Research Foundation - Flanders (FWO grant 12ZH121N). The work of RAH is supported physically by planet Earth through the electromagnetic and gravitational interactions, and received the support of a fellowship from ``la Caixa'' Foundation (ID 100010434) and from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 847648, under fellowship code LCF/BQ/PI21/11830027. The work of JM is funded by the Agencia Nacional de Investigaci\'on y Desarrollo (ANID) scholarship No. 21190234 and by Pontificia Universidad Cat\'olica de Valpara\'iso.
\section{Introduction} Mesh patterns were first introduced by Br\"and\'en and Claesson in \cite{Bra11} as a generalisation of permutation patterns, and have been studied extensively in recent years, see e.g.,~\cite{CTU15,JKR15}. A mesh pattern consists of a pair $(\pi,P)$, where $\pi$ is a permutation and $P$ is a set of coordinates in a square grid. For example, $(312,\{(0,0),(1,2)\})$ is a mesh pattern, which we depict by \begin{center} \patt{0.5}{3}{3,1,2}[0/0,1/2][][][][][4]. \end{center} A natural definition of when one mesh pattern occurs in another mesh pattern was given in~\cite{TU17}, which we present in \cref{sec:PosMP}. This allows us to generalise the classical permutation poset to a poset of mesh patterns, where $(\sigma,S)\le(\pi,P)$ if there is an occurrence of $(\sigma,S)$ in~$(\pi,P)$. The permutation poset has received a lot of attention in recent years, but due to its complicated structure a full understanding of it has proven elusive, see~\cite{McSt13,Smith15}. The poset of mesh patterns, which we define here, contains the poset of permutations as an induced subposet. Therefore, investigating the poset of mesh patterns may lead to a better understanding of the poset of permutations. Moreover, studying this poset may help to answer some of the open questions on mesh patterns. In \cref{sec:PosMP} we introduce the poset of mesh patterns and related definitions, including a brief overview of poset topology. In \cref{sec:MF} we prove some results on the M\"obius function of this poset. In \cref{sec:purity} we give a characterisation of the non-pure (or non-ranked) intervals of the poset. In \cref{sec:topology} we give some results on the topology of the poset. \section{The Poset of Mesh Patterns}\label{sec:PosMP} To define a mesh pattern we begin with a permutation $\pi=\pi_1\pi_2\ldots\pi_n$. We can plot $\pi$ on an $n\times n$ grid, where we place a dot at coordinates $(i,\pi_i)$, for all $1\le i\le n$. A \emph{mesh pattern} is then obtained by shading some of the boxes of this grid, so a mesh pattern takes the form $p=(\cl{p},\sh{p})$, where $\cl{p}$ is a permutation and $\sh{p}$ is a set of coordinates recording the shaded boxes, which are indexed by their south west corner. For ease of notation we sometimes denote the mesh pattern $(\cl{p},\sh{p})$ as $\cl{p}^{\sh{p}}$. We let~$|\cl{p}|$ represent the length of $\cl{p}$ and $|\sh{p}|$ the size of $\sh{p}$, and define the \emph{length} of $p$ as $|\cl{p}|$, which we denote $|p|$. For example, the mesh pattern $(132,\{(0,0),(0,1),(2,2)\})$, or equivalently $132^{(0,0),(0,1),(2,2)}$, has the form: \begin{center} \patt{0.5}{3}{1,3,2}[0/1,0/0,2/2][][][][][4] \end{center} To define when a mesh pattern occurs within another mesh pattern, we first need to recall two other well-known definitions of occurrence. A permutation $\sigma$ \emph{occurs} in a permutation $\pi$ if there is a subsequence, $\eta$, of $\pi$ whose letters appear in the same relative order of size as the letters of $\sigma$. The subsequence $\eta$ is called an \emph{occurrence} of $\sigma$ in $\pi$. If no such occurrence exists we say that $\pi$ \emph{avoids} $\sigma$. Consider a mesh pattern $(\sigma,S)$ and an occurrence $\eta$ of $\sigma$ in $\pi$, in the classical permutation pattern sense.
Each box $(i,j)$ of $S$ corresponds to an area $R_{\eta}(i,j)$ in the plot of $\pi$, which is the rectangle bounded by the points in $\pi$ which in $\eta$ correspond to the letters $\sigma_i,\sigma_{i+1},j,j+1$ of $\sigma$, where the letters $\sigma_0,\sigma_{|\sigma|+1},0$ and $|\sigma|+1$ correspond to the west, east, south and north boundaries, respectively. A point is contained in $R_\eta(i,j)$ if it is in the interior of $R_\eta(i,j)$, that is, not on the boundary. For example, in \cref{fig:occEx} where $\eta$ is the occurrence in red, the area of $R_\eta(0,0)$ contains the boxes $\{(0,0),(1,0),(0,1),(1,1)\}$, and it contains exactly one point. We say that $\eta$ is an occurrence of the mesh pattern $(\sigma,S)$ in the permutation~$\pi$ if there is no point in $R_{\eta}(i,j)$, for all shaded boxes~$(i,j)\in S$. Using these definitions of occurrence we can recall the concept, introduced in \cite{TU17}, of containment of one mesh pattern in another mesh pattern. An example is given in \cref{fig:occEx}. \begin{defn}[\cite{TU17}]\label{defn:meshOcc} An occurrence of a mesh pattern $(\sigma,S)$ in another mesh pattern $(\pi,P)$ is an occurrence~$\eta$ of~$(\sigma,S)$ in $\pi$, where for any $(i,j)\in S$ every box in $R_\eta(i,j)$ is shaded in $(\pi,P)$. \end{defn} \begin{figure}\centering \begin{subfigure}[b]{0.3\textwidth} \centering\patt{0.75}{2}{1,2}[0/1,0/2,2/2][][][][] \caption{}\label{subfiga}\end{subfigure} \begin{subfigure}[b]{0.3\textwidth}\centering \colpatt{0.75}{3}{1,2,3}[0/0,0/2,0/3,1/3,1/1,1/2,2/1,2/2,3/3][2/2,3/3][3/3,0/2,0/3,1/3,1/2] \caption{}\label{subfigc}\end{subfigure} \caption{A pair of mesh patterns, with an occurrence of (a) in~(b) depicted in red. }\label{fig:occEx} \end{figure} The classical permutation poset $\mathcal{P}$ is defined as the poset of all permutations, with $\sigma\le_\mathcal{P}\pi$ if and only if $\sigma$ occurs in $\pi$. Using \cref{defn:meshOcc} we can similarly define the mesh pattern poset $\mathcal{M}$ as the poset of all mesh patterns, with $m\le_\mathcal{M} p$ if $m$ occurs in $p$. We drop the subscripts from $\le$ when it is clear which partial order is being considered. An \emph{interval} $[\alpha,\beta]$ of a poset is defined as the subposet induced by the set $\{\kappa\,|\,\alpha\le\kappa\le\beta\}$. See \cref{fig:intEx} for an example of an interval of $\mathcal{M}$.
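Spelling out \cref{defn:meshOcc} in the example of \cref{fig:occEx}: the red points of (b) form an occurrence $\eta$ of $12$ in $123$, and the shaded boxes $(0,1)$, $(0,2)$ and $(2,2)$ of (a) correspond to the regions $R_\eta(0,1)=\{(0,2),(1,2)\}$, $R_\eta(0,2)=\{(0,3),(1,3)\}$ and $R_\eta(2,2)=\{(3,3)\}$ in (b); every box in these regions is shaded in (b), so $\eta$ is an occurrence of the mesh pattern (a) in the mesh pattern (b).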
\begin{figure}\centering \begin{tikzpicture} \def1.35{3} \def2{3} \def0.25{0.5} \node (123-123) at (0*2,5*1.35){\patt{0.25}{3}{1,2,3}[0/3,1/3,2/3][][][][][4]}; \node (123-12) at (-1*2,4*1.35){\patt{0.25}{3}{1,2,3}[0/3,1/3][][][][][4]}; \node (123-13) at (0*2,4*1.35){\patt{0.25}{3}{1,2,3}[0/3,2/3][][][][][4]}; \node (123-23) at (1*2,4*1.35){\patt{0.25}{3}{1,2,3}[1/3,2/3][][][][][4]}; \node (123-1) at (-1.5*2,3*1.35){\patt{0.25}{3}{1,2,3}[0/3][][][][][4]}; \node (12-12) at (-0.5*2,3*1.35){\patt{0.25}{2}{1,2}[0/2,1/2][][][][][4]}; \node (123-2) at (0.5*2,3*1.35){\patt{0.25}{3}{1,2,3}[1/3][][][][][4]}; \node (123-3) at (1.5*2,3*1.35){\patt{0.25}{3}{1,2,3}[2/3][][][][][4]}; \node (12-1) at (-1*2,2*1.35){\patt{0.25}{2}{1,2}[0/2][][][][][4]}; \node (123-0) at (0*2,2*1.35){\patt{0.25}{3}{1,2,3}[][][][][][4]}; \node (12-2) at (1*2,2*1.35){\patt{0.25}{2}{1,2}[1/2][][][][][4]}; \node (12-0) at (-1*2,1*1.35){\patt{0.25}{2}{1,2}[][][][][][4]}; \node (1-1) at (1*2,1*1.35){\patt{0.25}{1}{1}[0/1][][][][][4]}; \node (1-0) at (0*2,0*1.35){\patt{0.25}{1}{1}[][][][][][4]}; \draw (123-123) -- (123-12);\draw (123-123) -- (123-13);\draw (123-123) -- (123-23); \draw[-] (123-123) to [bend right=15] (12-12);\draw[-] (12-12) to [bend left=15] (1-1); \draw (123-12) -- (123-1);\draw (123-12) -- (123-2); \draw (123-13) -- (123-1);\draw (123-13) -- (123-3); \draw (123-23) -- (123-2);\draw (123-23) -- (123-3); \draw (123-1) -- (123-0);\draw (123-1) -- (12-1); \draw (123-2) -- (123-0); \draw (123-3) -- (123-0);\draw (123-3) -- (12-2); \draw (12-12) -- (12-1);\draw (12-12) -- (12-2); \draw (12-1) -- (12-0);\draw (12-2) -- (12-0); \draw (123-0) -- (12-0);\draw (12-0) -- (1-0); \draw (1-1) -- (1-0); \end{tikzpicture} \caption{The interval $[1^\emptyset,123^{(0,3),(1,3),(2,3)}]$ of $\mathcal{M}$.}\label{fig:intEx} \end{figure} The first result on the mesh pattern poset is that there are infinitely many maximal elements, which shows a significant difference to the permutation poset, where there are no maximal elements. \begin{lem}\label{lem:max} The poset of mesh pattern contains infinitely many maximal elements, which are the mesh patterns in which all boxes are shaded. \end{lem} \begin{proof} This follows from the easily proven fact that a fully shaded mesh pattern occurs only in itself, and in no other mesh patterns. \end{proof} \subsection{Poset Topology} In this subsection we briefly introduce some poset topology, and refer the reader to \cite{Wac07} for a comprehensive overview of the topic, including any definitions we omit here. The \emph{M\"obius function} of an interval $[\alpha,\beta]$ of a poset is defined by:\linebreak ${\mu(a,a)=1}$, for all $a$, $\mu(a,b)=0$ if $a\not\le b$, and $$\mu(a,b)=-\sum_{c\in[a,b)}\mu(a,c).$$ See \cref{fig:1-123} for an example. The M\"obius function of a poset $P$ is given by $\mu(P)=\mu(\hat{0},\hat{1})$, where $\hat{0}$ and $\hat{1}$ are unique minimal and maximal elements which we add to $P$. In a poset we say that $\alpha$ \emph{covers} $\beta$, denoted $\alpha\gtrdot\beta$, if $\alpha>\beta$ and there is no $\kappa$ such that $\alpha>\kappa>\beta$. A \emph{chain} of length $k$ in a poset is a totally ordered subset $c_1<c_2<\cdots<c_{k+1}$, and the chain is \emph{maximal} if $c_i\lessdot c_{i+1}$, for all $1\le i \le k$. A poset is \emph{pure} (also known as \emph{ranked}) if all maximal chains have the same length. The \emph{dimension} of a poset $P$, denoted $\dim P$, is the length of the longest maximal chain. 
For example, the interval in \cref{fig:intEx} is non-pure because there is one maximal chain of length $3$ ($\pattin{1}{1}{}\lessdot\pattin{1}{1}{0/1}\lessdot \pattin{2}{1,2}{0/2,1/2}\lessdot\pattin{3}{1,2,3}{0/3,1/3,2/3}$), two maximal chains of length $4$ and all other maximal chains have length $5$, so the interval has dimension $5$. The \emph{interior} of an interval $[\alpha,\beta]$ is obtained by removing $\alpha$ and $\beta$, and is denoted $(\alpha,\beta)$. The \emph{order complex} of an interval $[\alpha,\beta]$, denoted $\Delta(\alpha,\beta)$, is the simplicial complex whose faces are the chains of $(\alpha,\beta)$. When we refer to the \emph{topology} of an interval we mean the topology of the order complex of the interval. A simplicial complex is \emph{shellable} if we can order the maximal faces $F_1,\ldots,F_t$ such that the subcomplex $\left(\cup_{i=1}^{k-1}F_i\right)\cap F_k$ is pure and $(\dim F_k-1)$-dimensional, for all $k=2,\ldots,t$. Being shellable implies other properties of the topology, such as having the homotopy type of a wedge of spheres. An interval $I$ is \emph{disconnected} if the interior can be split into two disjoint pairwise incomparable sets, that is, the interior is $A\cup B$ with $A\cap B=\emptyset$ and for every~$a\in A$ and $b\in B$ we have $a\not\le b$ and $b\not\le a$. Each interval $I$ can be decomposed into its smallest connected parts, which we call the \emph{components} of $I$. A component is \emph{nontrivial} if it contains more than one element and we say an interval is \emph{strongly disconnected} if it has at least two nontrivial components. For example, the interval $[1^\emptyset,12^{(0,2),(1,2)}]$ in \cref{fig:intEx} is disconnected but not strongly disconnected. Note that if an interval has dimension less than $3$ it can never be strongly disconnected. We can use disconnectivity as a test for shellability using the following results. \begin{lem}\label{lem:strongdis} If an interval is strongly disconnected, then it is not shellable. \begin{proof} Consider any ordering of the maximal chains and let $F_k$, with ${k>1}$, be the first chain where every preceding chain belongs to a different component and $F_k$ belongs to a nontrivial component. Note that such an $F_k$ exists in every ordering because the interval is strongly disconnected, and because $F_k$ belongs to a nontrivial component it must have dimension of at least $1$. So $\left(\cup_{i=1}^{k-1}F_i\right)\cap F_k=\emptyset$, which has dimension $-1$, so it is not $(\dim F_k-1)$-dimensional. Therefore, the ordering is not a shelling. \end{proof} \end{lem} Since every subinterval of a shellable interval is shellable, \cite[Corollary 3.1.9]{Wac07}, we obtain the following: \begin{cor} An interval which contains a strongly disconnected subinterval is not shellable. \end{cor} Finally, we present a useful result known as the Quillen Fiber Lemma \cite{Quillen78}. Two simplicial complexes are homotopy equivalent if one can be obtained by deforming the other but not breaking or creating any new ``holes'', for a formal definition see \cite{Hat02}. A simplicial complex is \emph{contractible} if it is homotopy equivalent to a point and if two posets are homotopy equivalent their M\"obius functions are equal. Given a poset $P$, with $p \in P$ define the upper ideal $P_{\ge p}=\{q\in P\,|\,q\ge p\}$. \begin{prop}\label{thm:Quil}(Quillen Fiber Lemma) Let $\phi:P\rightarrow Q$ be an order-preserving map between posets such that for any $x\in Q$ the complex\linebreak $\Delta(\phi^{-1}(Q_{\ge x}))$ is contractible.
Then $P$ and $Q$ are homotopy equivalent. \end{prop} \section{M\"obius Function}\label{sec:MF} In this section we present some results on the M\"obius function of the mesh pattern poset. We begin with some simple results on: mesh patterns with the same underlying permutations; the mesh patterns with no points~$\epsilon^\emptyset$ and $\epsilon^{(0,0)}$; and mesh patterns with no shaded boxes. Throughout the remainder of the paper we assume that $m$ and $p$ are mesh patterns. \begin{lem} Let $\pi$ be a permutation. For any sets $A\subseteq B$ the interval~$[\pi^A,\pi^B]$ is isomorphic to the boolean lattice~$B_{|B|-|A|}$. Therefore,\linebreak ${\mu(\pi^A,\pi^B)=(-1)^{|B|-|A|}}$ and $[\pi^A,\pi^B]$ is shellable. \begin{proof} The elements of $[\pi^A,\pi^B]$ are exactly the mesh patterns $\pi^C$ where $C\subseteq B\setminus A$, which implies the result. \end{proof} \end{lem} \begin{lem} Consider $A\in\{\emptyset,(0,0)\}$, then: $$\mu(\epsilon^{A},p)=\begin{cases} 1,&\mbox{ if }p=\epsilon^A \\ -1,&\mbox{ if }A=\emptyset\,\,\&\,\,|\cl{p}|+|\sh{p}|=1\\ 0,&\mbox{ otherwise} \end{cases}.$$ \begin{proof} The first two cases are trivial. By the proof of \cref{lem:max} we know that $\epsilon^{(0,0)}$ is not contained in any larger mesh patterns, which implies $\mu(\epsilon^{(0,0)},p)=0$, for all $p\not=\epsilon^{(0,0)}$. If $|\cl{p}|+|\sh{p}|>1$, then $(\epsilon^\emptyset,p)$ contains a unique minimal element $1^\emptyset$, so $\mu(\epsilon^\emptyset,p)=0$. \end{proof} \end{lem} \begin{lem} The interval $[\sigma^\emptyset,\pi^\emptyset]$ is isomorphic to $[\sigma,\pi]$ in $\mathcal{P}$, so $$\muM(\sigma^\emptyset,\pi^\emptyset)=\muP(\sigma,\pi).$$ \end{lem} The M\"obius function of the classical permutation poset is known to be unbounded \cite{Smith13}. So we get the following corollary: \begin{cor} The M\"obius function is unbounded on $\mathcal{M}$. \end{cor} We can also show that the M\"obius function is unbounded if we include shaded boxes. We do this by mapping to the poset $\mathcal{W}$ of words with subword order, that is, the poset made up of all words and $u\le w$ if there is a subword of $w$ that equals $u$. The map we introduce is analogous to the map in \cite[Section 2]{Smith14}, which maps certain intervals of the permutation poset to intervals of $\mathcal{W}$. A \emph{descent} in a permutation $\pi=\pi_1\pi_2\ldots\pi_n$ is a pair of letters $\pi_i,\pi_{i+1}$ with $\pi_{i}>\pi_{i+1}$. We call $\pi_{i+1}$ the \emph{descent bottom}. An \emph{adjacency tail} is a letter $\pi_i$ with $\pi_i=\pi_{i-1}\pm 1$. Let $adj(\pi)$ be the number of adjacency tails in~$\pi$. Consider the set $\Gamma$ of mesh patterns where the permutation has exactly one descent, the descent bottom is $1$ and we shade everything south west of~$1$. For example, the mesh pattern $2314^{(0,0),(1,0),(2,0)}$: \begin{center} \patt{0.4}{4}{2,3,1,4}[0/0,1/0,2/0][][][][][4]. \end{center} \begin{lem}\label{lem:mobUn} Consider a mesh pattern $m\in \Gamma$, then $[21^{(0,0),(1,0)},m]$ is shellable and \[ \mu(21^{(0,0),(1,0)},m)=\begin{cases} (-1)^{|m|}\lfloor\frac{|m|}{2}\rfloor,&\text{ if } adj(\cl{m})=0\\ (-1)^{|m|},& \text{ if } adj(\cl{m})=1 \text{ \& tail before descent}\\ 0, &\text{ otherwise} \end{cases}. \] \begin{proof} First note that every mesh pattern in $[21^{(0,0),(1,0)},m]$ is also in $\Gamma$. We define a map $f$ from $\Gamma$ to binary words in the following way. Let $b(x)$ be the set of letters that appear before $1$ in $x\in \Gamma$. 
Set $\hat{f}(x)$ as the word where the $i$th letter is $0$ if $i$ is in $b(x)$ and $1$ otherwise, and let $f(x)$ equal $\hat{f}(x)$ with the first letter removed. So $f(\Gamma)$ is the set of binary words with at least one~$0$. The inverse of this map is obtained by the following procedure: 1) take a binary word $w\in f(\Gamma)$ and prepend a $1$; 2) put the positions that are $0$'s in increasing order followed by the positions that are $1$'s in increasing order; and 3) shade everything southwest of $1$. So $f$ is a bijection. It is straightforward to check that $f$ is order preserving. So the interval~$[21^{(0,0),(1,0)},m]$ is isomorphic to $[0,f(m)]$ in $\mathcal{W}$. It was shown in \cite{Bjo90} that intervals of $\mathcal{W}$ are shellable, which proves the shellability part. It was also shown that the M\"obius function equals the number of normal occurrences with the sign given by the dimension, where an occurrence is \emph{normal} if in any consecutive sequence of equal elements every non-initial letter is part of the occurrence. So for an occurrence of $0$ in $f(m)$ to be normal there can be no $1$ directly preceded by a $1$ and at most one $0$ directly preceded by a $0$. If such a $0$ exists it must be the occurrence, otherwise any $0$ can be the occurrence. In our bijection a non-initial letter of such a sequence maps to an adjacency tail. Combining this with the fact that if there are no adjacency tails, then the letters before the descent must be all the even letters of which there are $\lfloor\frac{|m|}{2}\rfloor$, completes the proof. \end{proof} \end{lem} The M\"obius function on $\mathcal{P}$ often takes larger values than on $\mathcal{M}$, but it is not always true that $\muM(m,p)\le \muP(\cl{m},\cl{p})$. A simple counterexample is the interval $$[1^{(0,1)},123^{(0,2),(0,3),(1,2),(1,3)}],$$ which has M\"obius function $1$, whereas $\muP(1,123)=0$; see \cref{fig:1-123}. \begin{figure}\centering \begin{tikzpicture} \node (123) at (0,3){\footnotesize\textcolor{white}{1}\patt{0.25}{3}{1,2,3}[0/2,0/3,1/2,1/3][][][][][4]\textcolor{red}{1}}; \node (12a) at (-1,1.5){\footnotesize\textcolor{red}{-1}\patt{0.25}{2}{1,2}[0/2,1/2][][][][][4]\textcolor{white}{-1}}; \node (12b) at (1,1.5){\footnotesize\textcolor{white}{-1}\patt{0.25}{2}{1,2}[0/1,0/2][][][][][4]\textcolor{red}{-1}}; \node (1) at (0,0){\footnotesize\textcolor{white}{-1}\patt{0.25}{1}{1}[0/1][][][][][4]\textcolor{red}{1}}; \draw (1) -- (12a) -- (123) -- (12b) -- (1); \node (123) at (5,3){{\footnotesize\textcolor{red}{0}} $123$ \textcolor{white}{0}}; \node (12) at (5,1.5){{\footnotesize\textcolor{red}{-1}} $12$ \textcolor{white}{-1}}; \node (1) at (5,0){{\footnotesize\textcolor{red}{1}} $1$ \textcolor{white}{1}}; \draw (1) -- (12) -- (123); \end{tikzpicture} \caption{The interval $[1^{(0,1)},123^{(0,2),(0,3),(1,2),(1,3)}]$ (left) in $\mathcal{M}$ and $[1,123]$ (right) in $\mathcal{P}$, with the M\"obius function in red.}\label{fig:1-123} \end{figure} If we consider intervals where the bottom mesh pattern has no shadings, then we get the following result: \begin{lem}\label{lem:mu0} Consider an interval $[s^\emptyset,p]$ in $\mathcal{M}$ with $\sh{p}\not=\emptyset$. If $s^B\not\in(s^\emptyset,p)$ for any set $B$, then $\mu(s^\emptyset,p)=0$. \begin{proof} Consider the map $f:(s^\emptyset,p)\rightarrow A:x\mapsto\cl{x}^\emptyset,$ that is, $f$ removes all shadings from~$x$.
We can see that ${A=(s^\emptyset,\cl{p}^{\emptyset}]}$, so $A$ is contractible, because it has the unique maximal element~$\cl{p}^\emptyset$, hence $\mu(A)=0$. Moreover,~$f^{-1}(A_{\ge y})=[y,p)$, for all $y\in A$, which is contractible. Therefore,~$(s^\emptyset,p)$ is homotopy equivalent to $A$ by the Quillen Fiber Lemma (\cref{thm:Quil}), which implies~$\mu(s^\emptyset,p)=0$. \end{proof} \end{lem} \begin{ex} Consider the subinterval $[1^\emptyset,12^{(0,2)}]$ in \cref{fig:intEx}; applying \cref{lem:mu0} implies $\mu(1^\emptyset,12^{(0,2)})=0$. However, we cannot apply \cref{lem:mu0} to $[1^\emptyset,12^{(0,2),(1,2)}]$ because it contains the element $1^{(0,1)}$. \end{ex} We can combine Lemma~\ref{lem:mu0} with the following result to see that the M\"obius function is almost always zero on the interval $[1^\emptyset,p]$. \begin{lem} As $n$ tends to infinity the proportion of mesh patterns of length~$n$ that contain any of $\{1^{(0,0)},1^{(1,0)},1^{(0,1)},1^{(1,1)}\}$ approaches $0$. \begin{proof} Let $P(n,i)$ be the probability that the letter $i$ is an occurrence of~$1^{(0,0)}$ in a length $n$ mesh pattern, and let $P(n)$ be the probability that a length $n$ mesh pattern contains $1^{(0,0)}$. The probability $P(n,i)$ can be bounded above by first considering the index $k$ of $i$, each having probability $\frac{1}{n}$, and then requiring that all boxes south west of $i$ are shaded, of which there are $ik$. This provides an upper bound, because it is possible that there is a point south west of $i$, which would imply $i$ is not an occurrence of $1^{(0,0)}$. We can formulate this as: \begin{align*} P(n,i)&\le\sum_{k=1}^{n}\frac{1}{n}\left(\frac{1}{2^i}\right)^k =\frac{1}{n}\left(\frac{1-2^{-i(n+1)}}{1-2^{-i}}-1\right)\\ &=\frac{1}{n}\left(\frac{2^{-i}-2^{-i(n+1)}}{1-2^{-i}}\right) =\frac{1}{n2^i}\left(\frac{1-2^{-in}}{1-2^{-i}}\right) \le\frac{2}{n2^i} \end{align*} To compute the probability $P(n)$ we can sum over all the $P(n,i)$. Note again that this is an overestimate because if a mesh pattern contains multiple occurrences of $1^{(0,0)}$ it counts that mesh pattern more than once. $$ P(n)\le\sum_{i=1}^{n}P(n,i)\le\sum_{i=1}^{n}\frac{2}{n2^i} =\frac{2}{n}\left(\frac{1-\left(\frac{1}{2}\right)^{n+1}}{1-\frac{1}{2}}-1\right) \le\frac{2}{n} $$ Repeating this calculation for the other three shadings of $1$ implies that the probability of containing any of the forbidden mesh patterns is bounded by $\frac{8}{n}$, which tends to zero as $n$ tends to infinity. \end{proof} \end{lem} Because of the previous lemma we obtain: \begin{cor} As $n$ tends to infinity the proportion of mesh patterns $p$ of length $n$ such that $\mu(1^\emptyset,p)=0$ approaches $1$. \end{cor} In the classical case it is true that given a permutation $\sigma$ the probability that a permutation of length $n$ contains $\sigma$ tends to $1$ as $n$ tends to infinity; this follows from the Marcus--Tardos Theorem \cite{MT04}. By the above result we can see the same is not true in the mesh pattern case. In fact we conjecture the opposite is true: \begin{conj} Given a mesh pattern $m$, with at least one shaded box, the probability that a random mesh pattern of length $n$ contains $m$ tends to~$0$ as $n$ tends to infinity. \end{conj} \section{Purity}\label{sec:purity} Recall that a poset is pure (also known as ranked) if all the maximal chains have the same length, and as we can see from \cref{fig:intEx}, intervals of the mesh pattern poset can be non-pure. In this section we classify which intervals~$[1^\emptyset,m]$ are non-pure.
First we consider the length of the longest maximal chain in any interval~$[1^\emptyset,m]$, that is, the dimension of $[1^\emptyset,m]$. \begin{lem} For any mesh pattern $m$, we have $\dim(1^\emptyset,m)=|\cl{m}|+|\sh{m}|$. \begin{proof} We can create a chain from $m$ to $1^\emptyset$ by deshading all boxes, in any order, and then deleting all but one point, in any order. The length of this chain is $|\cl{m}|+|\sh{m}|$. Moreover, we cannot create a longer chain because at every step of a chain we must deshade a box or delete a point. \end{proof} \end{lem} Therefore, we define the \emph{dimension} of a mesh pattern as $\dim(m)=|\cl{m}|+|\sh{m}|$ and we say an edge $m\lessdot p$ is \emph{impure} if $\dim(p)-\dim(m)>1$. Next we give a classification of impure edges. Let $\mX{m}{x}$ be the mesh pattern obtained by deleting the point $x$ in $m$ and let $\occX{m}{x}$ be the occurrence of $\mX{m}{x}$ in $m$ that does not use the point $x$. An occurrence $\eta$ of $m$ in $p$ \emph{uses the shaded box $(a,b)\in\sh{p}$} if $(a,b)\in R_\eta(i,j)$ for some shaded box $(i,j)\in\sh{m}$. We say that deleting a point $x$ \emph{merges shadings} if there is a shaded box in $\mX{m}{x}$ that corresponds to more than one shaded box in $\occX{m}{x}$, see \cref{fig:impEx}. \begin{lem}\label{lem:impureEdge} Two mesh patterns $m<p$ form an impure edge if and only if all occurrences of $m$ in $p$ use all the shaded boxes of $p$ and are obtained by deleting a point that merges shadings. \end{lem} \begin{proof} First we show the backwards direction. Because $m$ is obtained by deleting a point that merges shadings, $m$ must have one less point and at least one less shaded box so $\dim(p)-\dim(m)\ge2$. So it suffices to show that there is no $z$ such that $m<z<p$. Suppose such a $z$ exists, then if~$z$ is obtained by deshading a box in $p$ it can no longer contain $m$ because all occurrences of $m$ in $p$ use all the shaded boxes of $p$. If $z$ is obtained by deleting a point and $m<z$, then $\cl{m}=\cl{z}$. Therefore, we can deshade some boxes of $z$ to get $m$, which implies there is an occurrence of $m$ in $p$ that does not use all the shaded boxes of $p$. Now consider the forward direction. Suppose $m\lessdot p$ is impure, so $\dim(p)-\dim(m)\ge2$. Therefore, $m$ is obtained by deleting a single point which merges shadings, but does not delete shadings because any other combination of deleting points and deshading can be done in successive steps. Furthermore, this must be true for any point that can be deleted to get $m$, that is, for all occurrences of $m$ in $p$. Moreover, if there is an occurrence that does not use all the shaded boxes of $p$, we can deshade the box it doesn't use and get an element that lies between $m$ and $p$. \end{proof} \begin{figure}\centering \begin{subfigure}[b]{0.3\textwidth} \centering\colpatt{0.5}{2}{1,2}[0/2,1/2][2/2][0/2,1/2] \caption*{$a=12^{(0,2),(1,2)}$}\label{subfig:a}\end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering\colpatt{0.5}{3}{1,2,3}[1/3,2/3][1/1,3/3][1/3,2/3] \caption*{$b=123^{(1,3),(2,3)}$}\label{subfig:b}\end{subfigure} \caption{Two mesh patterns with a point $x$ in black whose deletion merges shadings and the occurrences $\occX{a}{x}$ and $\occX{b}{x}$ in red. 
By \cref{lem:impureEdge} $\mX{a}{x}\lessdot a$ is impure, but $\mX{b}{x}<b$ is not an impure edge because there is a second occurrence of $\mX{b}{x}$ in $b$, using points $23$, that does not use all the shaded boxes in $b$.}\label{fig:impEx} \end{figure} \begin{lem}\label{lem:topImpure} If $[m,p]$ contains an impure edge, then it contains an impure edge $a\lessdot b$ where $\cl{p}=\cl{b}$. \begin{proof} Let $x\lessdot y$ be an impure edge in $[m,p]$. So $x$ is obtained from $y$ by deleting a point $i$. Consider an occurrence $\eta$ of $y$ in $p$ and let $b$ be the mesh pattern where $\cl{b}=\cl{p}$ and $\sh{b}$ are the shaded boxes used by $\eta$. Let $a$ be the mesh pattern obtained from $b$ by deleting the point which corresponds to $i$ in $\eta$. The mesh pattern $b$ is constructed from $y$ by adding a collection of points. None of these added points can be touching a shaded box in $b$, as they must be added to empty boxes of~$y$. Moreover, the set of occurrences of~$a$ in $b$ correspond to the set of occurrences of $x$ in $y$, after adding the new points. This implies that the occurrences of $x$ in $y$ satisfy the conditions of \cref{lem:impureEdge} if and only if the occurrences of~$a$ in $b$ satisfy the same conditions. So \cref{lem:impureEdge} implies $a\lessdot b$ is an impure edge. \end{proof} \end{lem} \begin{prop} The interval $[1^\emptyset,m]$ is non-pure if and only if there exists a point $x$ in $m$ whose deletion merges shadings and there is no other occurrence of $\mX{m}{x}$ in $m$ which uses a subset of the shadings used by $\occX{m}{x}$. \begin{proof} First we show the backwards direction. Let $t$ be the mesh pattern obtained by inserting~$x$ back into $\mX{m}{x}$, and $\phi$ the corresponding occurrence of $\mX{m}{x}$ in $t$. Note that there are no other occurrences of $\mX{m}{x}$ in $t$ because there is no occurrence of $\mX{m}{x}$ in $m$ which uses a subset of the shadings used by $\occX{m}{x}$. Therefore, by Lemma~\ref{lem:impureEdge} we get that $\mX{m}{x}\lessdot t$ is an impure edge. To see the other direction suppose there is an impure edge in $[1^{\emptyset},m]$. By Lemma~\ref{lem:topImpure} there is an impure edge $a\lessdot b$ where $\cl{b}=\cl{m}$. By \cref{lem:impureEdge} all occurrences of $a$ in $b$ use all shaded boxes of $b$ and are obtained by deleting a point that merges shadings. Moreover, if deleting a point merges shadings in $b$, then its deletion merges shadings in $m$, which implies the result. \end{proof} \end{prop} \begin{cor} There is an impure edge in the interval $[m,p]$ if and only if there exists a point $x$ in $p$ whose deletion merges shadings and there is no other occurrence of $\mX{p}{x}$ in $p$ with a subset of shadings of $\occX{p}{x}$, and $\mX{p}{x}\ge m$. \end{cor} Note that containing an impure edge in $[m,p]$ does not necessarily imply that $[m,p]$ is non-pure. For example, if $[m,p]$ contains only one edge and that edge is impure, then $[m,p]$ is still pure. Although it is also possible to have a pure poset that contains impure and pure edges, see \cref{fig:pureIm}. 
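For illustration, the following minimal Python sketch (ours; all names are hypothetical) checks purity directly from the cover relations of an interval, using the fact that a poset is pure exactly when all maximal chains between its bottom and top element have the same length. The abstract cover relations of the interval of \cref{fig:pureIm} (shadings omitted) are used as a test case.
\begin{verbatim}
# A minimal sketch (ours): check purity of an interval from its cover
# relations by enumerating all maximal (saturated) chains from bottom to top.
def maximal_chain_lengths(covers, bottom, top):
    lengths, stack = set(), [(bottom, 0)]
    while stack:
        x, d = stack.pop()
        if x == top:
            lengths.add(d)
        for y in covers.get(x, []):
            stack.append((y, d + 1))
    return lengths

def is_pure(covers, bottom, top):
    return len(maximal_chain_lengths(covers, bottom, top)) == 1

# Abstract cover relations of the pure interval discussed above:
covers = {"21": ["213", "312", "231"],
          "213": ["2413"], "312": ["2413"], "231": ["2413"]}
print(is_pure(covers, "21", "2413"))  # True: every maximal chain has two edges
\end{verbatim}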
\begin{figure}\centering \begin{tikzpicture} \def1.35{1.35} \def2{2} \def0.25{0.25} \node (2413) at (0*2,3*1.35){\patt{0.25}{4}{2,4,1,3}[0/0,1/0,2/0][][][][][4]}; \node (213) at (-1*2,1.15*1.35){\patt{0.25}{3}{2,1,3}[0/0,1/0][][][][][4]}; \node (312) at (0*2,1.15*1.35){\patt{0.25}{3}{3,1,2}[0/0,1/0][][][][][4]}; \node (231) at (1*2,1.85*1.35){\patt{0.25}{3}{2,3,1}[0/0,1/0,2/0][][][][][4]}; \node (21) at (0*2,0*1.35){\patt{0.25}{2}{2,1}[0/0,1/0][][][][][4]}; \draw[-] (21) to [bend left=15] (213); \draw[-] (21) to (312); \draw[-] (21) to [bend right=15] (231); \draw[-] (213) to [bend left=15] (2413); \draw[-] (312) to (2413); \draw[-] (231) to [bend right=15] (2413); \end{tikzpicture} \caption{The interval $[21^{(0,0),(1,0)},2413^{(0,0),(1,0),(2,0)}]$, which is pure but contains both pure and impure edges.} \label{fig:pureIm} \end{figure} \section{Topology}\label{sec:topology} A full classification of shellable intervals has not been obtained for the classical permutation poset, so finding such a classification for the mesh pattern poset would be equally difficult, if not more so. However, in \cite{McSt13} all disconnected intervals of the permutation poset are described, and containing a disconnected subinterval implies a pure interval is not shellable. So this gives a large class of non-shellable intervals; in fact, it is shown that almost all intervals are not shellable. We showed in \cref{lem:strongdis} that containing a strongly disconnected interval implies an interval is not shellable. So in this section we consider when an interval is strongly disconnected. First we look at the relationship between connectivity in $\mathcal{P}$ and $\mathcal{M}$. The connectivity of the interval $[\cl{m},\cl{p}]$ in $\mathcal{P}$ does not necessarily imply the same property for $[m,p]$ in $\mathcal{M}$. For example, the interval $[123,456123]$ is disconnected in $\mathcal{P}$ but the interval \begin{equation}\label{eq:a}\left[\patt{.25}{3}{1,2,3}[3/0,3/1,3/2][][][][][4] ,\patt{.25}{6}{4,5,6,1,2,3}[6/0,6/1,6/2][][][][][4]\right]\end{equation} is a chain in $\mathcal{M}$, so is connected. Furthermore, the interval $[321,521643]$ is connected in $\mathcal{P}$ but the interval \begin{equation}\label{eq:b}\left[\patt{.25}{3}{3,2,1}[1/3][][][][][4], \patt{.25}{6}{5,2,1,6,4,3}[1/5,1/6,4/6][][][][][4]\right]\end{equation} is strongly disconnected in~$\mathcal{M}$. Therefore, if $[\cl{m},\cl{p}]$ is (non-)shellable in $\mathcal{P}$, it does not follow that $[m,p]$ has the same property in $\mathcal{M}$. For example, $[123,456123]$ is not shellable but~\eqref{eq:a} is shellable, and $[321,521643]$ is shellable but \eqref{eq:b} is not shellable. In \cite{McSt13} the direct sum operation is used to show that almost all intervals of the permutation poset are not shellable in $\mathcal{P}$. We generalise the direct sum operation to mesh patterns. Given two permutations $\alpha=\alpha_1\ldots\alpha_a$ and~$\beta=\beta_1\ldots\beta_b$ the direct sum of the two is defined as $\alpha\oplus\beta=\alpha_1\ldots\alpha_a(\beta_1+a)(\beta_2+a)\ldots(\beta_b+a)$, that is, we increase the value of each letter of $\beta$ by the length of $\alpha$ and append it to $\alpha$. This can also be thought of in terms of the plots of $\alpha$ and $\beta$ by placing a copy of $\beta$ to the north east of $\alpha$. Similarly we can define the skew-sum $\alpha\ominus\beta$ by prepending $\alpha$ to $\beta$ and increasing the value of each letter of $\alpha$ by the length of $\beta$.
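As a small illustration, the following Python sketch (ours) implements the direct sum and skew-sum of permutations written in one-line notation:
\begin{verbatim}
# A minimal sketch (ours) of the direct sum and skew-sum of permutations,
# written in one-line notation as tuples.
def direct_sum(alpha, beta):
    a = len(alpha)
    return alpha + tuple(b + a for b in beta)   # shift beta up by |alpha|

def skew_sum(alpha, beta):
    b = len(beta)
    return tuple(a + b for a in alpha) + beta   # shift alpha up by |beta|

assert direct_sum((1, 3, 2), (2, 1)) == (1, 3, 2, 5, 4)
assert skew_sum((1, 3, 2), (2, 1)) == (3, 5, 4, 2, 1)
\end{verbatim}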
We extend these definitions to mesh patterns in the following way: \begin{defn}\label{defn:directsum} Consider two mesh patterns $s$ and $t$, where the top right corner of $s$ and bottom left corner of $t$ are not shaded. The direct sum~$s\oplus t$ has the classical pattern $\cl{s}\oplus\cl{t}$ and shaded boxes $\sh{s}\cup\{(i+|\cl{s}|,j+|\cl{s}|)\,|\,(i,j)\in\sh{t}\}$, and also for any shaded boxes $(i,|\cl{s}|)$, $(|\cl{s}|,i)$, $(j,|\cl{s}|)$ or $(|\cl{s}|,j)$, we shade all the boxes north, east, south or west of the box, respectively, for all $0\le i< |\cl{s}|$ and $|\cl{s}|< j\le |\cl{s}|+|\cl{t}|$. We similarly define the skew-sum for when the bottom right corner of $s$ and top left corner of $t$ are not shaded. \end{defn} The direct sum $s\oplus t$ can be considered as placing a copy of $t$ north east of $s$, where any shaded box that was on a boundary is extended to the new boundary; see \cref{fig:directsum}. We define the direct sum in this way because it maintains one of the most important properties in the permutation sense, that the first $|\cl{s}|$ letters are an occurrence of $s$ and the final $|\cl{t}|$ letters are an occurrence of $t$. A permutation is said to be indecomposable if it cannot be written as the direct sum of smaller permutations. We generalise this to mesh patterns. \begin{defn} A mesh pattern $m$ is \emph{indecomposable} (resp. \emph{skew-indecomposable}) if it cannot be written $m=a\oplus b$ (resp. $m=a\ominus b$), where neither $a$ nor $b$ is $m$. \end{defn} \begin{rem} It is well known that a permutation has a unique decomposition into indecomposable permutations. This implies that a mesh pattern also has a unique decomposition. \end{rem} Using these definitions we can give a large class of strongly disconnected intervals, which is a mesh pattern generalisation of Lemma 4.2 in \cite{McSt13}. \begin{figure}\centering $\patt{.5}{3}{1,3,2}[1/3,2/2]\oplus\patt{.5}{3}{3,2,1}[0/3,1/2,2/1,3/3,3/0][][][][][4] =\patt{.5}{6}{1,3,2,6,5,4}[1/3,2/2,3/6,4/5,5/4,6/6,6/3,1/4,1/5,1/6,0/6,2/6,6/0,6/1,6/2][][][][][4]$ \caption{The direct sum of two mesh patterns.}\label{fig:directsum} \end{figure} \begin{lem} If $m$ is indecomposable, $\dim m > 1$ and $(0,0),(|m|,|m|)\not\in\sh{m}$, then $[m,m\oplus m]$ is strongly disconnected. \begin{proof} By Lemma 4.2 in \cite{McSt13} the interval $[\cl{m},\cl{m}\oplus \cl{m}]$ is strongly disconnected, with components $P_1=\{\cl{m}\oplus x\,|\,x\in [1,\cl{m})\}$ and $P_2=\{x\oplus \cl{m}\,|\,x\in [1,\cl{m})\}$. Consider any pair $\alpha,\beta\in[m,m\oplus m]$; if $\cl{\alpha}$ and~$\cl{\beta}$ are not in the same component of $[\cl{m},\cl{m}\oplus \cl{m}]$, then $\alpha$ and $\beta$ are incomparable. Let $\hat{P_1}=\{\alpha\,|\,\cl{\alpha}\in P_1\}$ and $\hat{P_2}=\{\alpha\,|\,\cl{\alpha}\in P_2\}$. However, $\hat{P_1}\cup\hat{P_2}\not=(m,m\oplus m)$ because it does not include the mesh patterns $\alpha$ with ${\cl{\alpha}=\cl{m}\oplus \cl{m}}$. There are exactly two occurrences of $m$ in $m\oplus m$. These are $\eta_1$, the first~$|m|$ letters, and $\eta_2$, the last $|m|$ letters. Note that each shaded box of~${m\oplus m}$ is used by at least one of $\eta_1$ and $\eta_2$, so if we deshade a box the resulting pattern $x$ contains at most one occurrence of $m$, either the first or last $|m|$ letters. Let $Q_1$ and $Q_2$ be sets of patterns with underlying permutation $\cl{m}\oplus \cl{m}$ where the first and last $|m|$ letters are the only occurrence of $m$, respectively.
So any element of $Q_1$ cannot contain an element in $P_2\cup Q_2$ and similarly any element of $Q_2$ cannot contain an element of ${P_1\cup Q_1}$. Therefore, $P_1\cup Q_1$ and $P_2\cup Q_2$ are disconnected nontrivial components of $[m,m\oplus m]$. \end{proof} \end{lem} \begin{cor} If $m$ is skew-indecomposable, $(|m|,0),(0,|m|) \not\in\sh{m}$ and $\dim m>1$, then $[m,m\ominus m]$ is strongly disconnected. \end{cor} Using Lemma 4.2 in \cite{McSt13} it is shown that almost all intervals of the classical permutation poset are not shellable. The proof of this follows from the Marcus--Tardos theorem. We have seen this result does not apply in the mesh pattern case, so we cannot prove a similar result using this technique. A similar problem was studied for boxed mesh patterns in permutations in~\cite{AKV13}, which is equivalent to boxed mesh patterns in fully shaded mesh patterns. So we present the following open question: \begin{que} What proportion of intervals of $\mathcal{M}$ are shellable? \end{que} The M\"obius function in the permutation poset can be computed more easily by decomposing the permutations into smaller parts using the direct sum or skew-sum; see \cite{BJJS11,McSt13}. This leads to the following question: \begin{que} Can a formula for the M\"obius function of $\mathcal{M}$ be obtained by decomposing mesh patterns using direct sums and skew sums? \end{que} \section*{\refname}
\section{Introduction} A wide range of galactic and cosmological observations has verified the existence of dark matter, which contributes a considerable part of the total energy density in the Universe. The $\textit{cold dark matter}$ model has successfully explained the observed large-scale structure. However, we still know little about the constituents of dark matter on smaller scales, and some issues exist in this standard model. For example, according to simulations, galaxies like the Milky Way should have thousands of dark matter sub-halos surviving the tidal stripping process and appearing in the form of satellite dwarf galaxies, whereas only $\sim 10$ such dwarfs have been observed in our galaxy and the Andromeda (M31) galaxy~\citep{Drlica-Wagner2015}. Furthermore, one may conjecture that dark matter (or part of it) consists of compact objects (COs), such as massive compact halo objects (MACHOs)~\citep{Wyrzykowski2011,Pooley2009,Mediavilla2009,Monroy-Rodriguez2014}, primordial black holes (PBHs)~\citep{Carr1974,Carr1975}, axion mini-clusters~\citep{Hardy2017} and compact mini halos~\citep{Ricotti2009}. For convenience, hereafter we refer to all of them as compact dark matter/objects (COs). Theoretical analyses allow the mass of COs to be as light as $10^{-7}M_\odot$ and as heavy as the first stars, $\sim 10^3M_\odot$~\citep{Griest1991}. Probing COs through astronomical observation is therefore crucial to discriminate between models and deepen our understanding of the nature of dark matter. Efforts have been devoted to various approaches, and some progress has been made in constraining the CO fraction in dark matter $f_{\rm{CO}}$ and the mass $M_{\rm{CO}}$. While large-mass ($\geq 100M_\odot$) COs can perturb wide stellar binaries~\citep{Quinn2009}, the microlensing of stars can constrain low-mass ($\leq10M_\odot$) COs in the Milky Way~\citep{Tisserand2007,Wyrzykowski2011,Udalski2015,Calchi Novati2013,Niikura2017}. Besides, by observing the lack of radiation resulting from accretion, one can also constrain large-mass COs with the cosmic microwave background~\citep{Ali-Haimoud2017}. Other methods include millilensing of quasars~\citep{Wilkinson2001}, lensing of supernovae~\citep{Benton2007}, ultra-faint dwarf galaxies~\citep{Brandt2016} and caustic crossing~\citep{Oguri2018}. Generally speaking, no robust evidence of COs has been found for $f_{\rm{CO}}>0.1$ in a wide mass range. The mass range $10-100M_\odot$ has been poorly constrained and has attracted most of the attention, especially after the gravitational waves (GWs) from binary black holes were directly detected by LIGO/VIRGO~\citep{Abbott2016}. The black hole masses lie within this window, which suggests they could be the PBH dark matter~\citep{Bird2016,Sasaki2016}. However, current constraints are too weak~\citep{Ricotti2008,Oguri2018}. More robust and independent evidence is needed to verify this conjecture. Recently, lensing of transients like GWs~\citep{Jung2019,Liao2020}, gamma ray bursts (GRBs)~\citep{Ji2018} and fast radio bursts (FRBs)~\citep{Munoz2016} was proposed as a very promising way of constraining COs. The imprints of COs as lenses correspond to the distorted waveforms of GWs, the auto-correlation in GRB light curves and the echoes of FRB signals. Remarkably, the FRB method should be the simplest and cleanest, even though we do not yet understand the formation mechanism of FRBs.
FRBs are bright pulses of emission at radio frequencies, most of which have durations of order milliseconds or less~\citep{Lorimer2007,Thornton2013}. The short duration and large brightness imply that the emission is coherent in nature. Most FRBs seem to be one-off, but a few are repeaters, manifesting a longer-lived central engine. Recent studies showed it is possible that a large fraction (or even all) of the FRBs are repeaters, and we just happen to catch one of their bursts~\citep{Ravi2019,Munoz2020,Fonseca2020}. While the current detection rate is limited by the small fields-of-view of existing radio telescopes, FRB events are expected to be quite frequent over the full sky ($\sim10^4$/day)~\citep{Thornton2013,Champion2016}. Ongoing wide-field surveys like APERTIF, UTMOST, HIRAX and CHIME will monitor a considerable fraction of the sky, giving thousands of detections per year. If part of dark matter consists of COs, there is a chance that an FRB lies within the Einstein radius of a CO, appearing as split signals with a flux ratio and a time delay. Therefore, detections of such lensed signals could in turn be used to statistically infer the fraction and mass of COs~\citep{Munoz2016}. In principle, lensing of FRBs can effectively probe the mass range down to $20-100M_\odot$, which gives typical time delays comparable to the intrinsic duration of the signal. Realistic constraints depend on the event number and the distributions of signal durations and redshifts. Shorter durations, higher redshifts and a larger event number would give more stringent constraints. The detected FRB events are promptly included in the public catalogue\footnote{http://frbcat.org/}~\citep{Petroff2016}. The current event number is $\sim110$, which constitutes a statistical sample. In this work we use these data to give a first constraint on COs and discuss in more detail how to identify the lensed signals. Besides, we also make corrections to the forecast and discuss how we will deal with detected lensed FRBs. The Letter is organized as follows: in Section 2, we introduce the theory of FRB lensing; in Section 3, we discuss how to identify the lensed signals and apply our method to the existing data, giving the constraints; the forecast and lens mass estimation are shown in Section 4; finally, we summarize and make discussions in Section 5. \section{Lensing of fast radio bursts} Gravitational lensing is usually classified by the lens mass scale (equivalently, the Einstein radius). For FRB lensing, we suggest it is more appropriate to regard it as strong lensing, since the split transient signals can be clearly discriminated, whereas traditional microlensing, limited by the resolution, can only observe the overlapping images of steady sources. We take the CO as a point mass whose Einstein radius is given by \begin{equation} \theta_\mathrm{E}=2\sqrt{\frac{GM_\mathrm{CO}}{c^2D}}\approx(10^{-6})^{\prime\prime}\left(\frac{M_\mathrm{CO}}{M_\odot}\right)^{1/2}\left(\frac{D}{\mathrm{Gpc}}\right)^{-1/2}, \end{equation} where the $\textit{effective lensing distance}$ (also called the $\textit{time delay distance}$) is $D=D_{\rm{L}}D_{\rm{S}}/D_{\rm{LS}}$, a combination of three angular diameter distances. Subscripts $\rm{S,L}$ denote the source and the lens, respectively.
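As a quick numerical check of the Einstein radius expression above, the following Python sketch (ours; constants in SI units) evaluates $\theta_{\rm E}$ for point-mass lenses at an effective lensing distance of 1 Gpc:
\begin{verbatim}
# A quick numerical check (ours) of the Einstein radius expression above.
import math

G, c, M_sun, Gpc = 6.674e-11, 2.998e8, 1.989e30, 3.086e25   # SI units
rad_to_arcsec = 180.0 / math.pi * 3600.0

def theta_E_arcsec(M_solar, D_Gpc):
    return 2.0 * math.sqrt(G * M_solar * M_sun /
                           (c**2 * D_Gpc * Gpc)) * rad_to_arcsec

print(theta_E_arcsec(1.0, 1.0))    # ~3e-6 arcsec, of order 1e-6'' as quoted
print(theta_E_arcsec(30.0, 1.0))   # heavier lens: larger Einstein radius
\end{verbatim}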
Although the spatial resolution in radio observations can reach a very high level (for example, the angular resolution for FRB 121102 with the Very Long Baseline Array, VLBA, is $\sim(10^{-2})^{\prime\prime}$~\citep{Spitler2016,Chatterje2017,Tendulkar2017}), it is still insufficient to distinguish split images spatially for $M_{\rm{CO}}<10^8M_\odot$. Therefore, we cannot obtain information on COs by measuring $\theta_{\rm{E}}$. What one can directly measure is the time delay between the lensed signals, which is determined by \begin{equation}\label{dt} \Delta t=\frac{4GM_\mathrm{CO}}{c^3}(1+z_\mathrm{L})\left[\frac{y}{2}\sqrt{y^2+4}+\mathrm{ln}\left(\frac{\sqrt{y^2+4}+y}{\sqrt{y^2+4}-y}\right)\right], \end{equation} where the dimensionless impact parameter $y=\beta/\theta_E$ stands for the relative source position and $z_{\rm{L}}$ is the lens redshift. Obviously, $\Delta t$ must be larger than the width ($w$) of the observed signal itself such that the split lensed images can be distinguished as double peaks. This requires $y$ to be larger than a certain value $y_{\rm{min}}(M_{\rm{CO}},z_{\rm{L}},w)$ according to Eq.\ref{dt}. In addition, the flux/magnification ratio between the two images ($+,-$) can be directly measured as well: \begin{equation} R_\mathrm{f}\equiv\left|\frac{\mu_+}{\mu_-}\right|=\frac{y^2+2+y\sqrt{y^2+4}}{y^2+2-y\sqrt{y^2+4}}>1.\label{Rf} \end{equation} To make both lensed images (especially the fainter one) detectable with a high enough signal-to-noise ratio (SNR), $R_{\rm{f}}$ should not be too large, which requires the impact parameter to be smaller than a certain value $y_{\rm{max}}=\left[(1+R_{\rm{f,max}})/\sqrt{R_{\rm{f,max}}}-2\right]^{1/2}$. We set the criterion $R_{\rm{f,max}}=5$ following Mu$\rm{\tilde{n}}$oz et al. (2016). For a given FRB event at $z_{\rm{S}}$, the lensing optical depth is the probability that the point source lies within the perceptible region of any CO along the line of sight: \begin{equation} \tau(M_\mathrm{CO},f_\mathrm{CO},z_\mathrm{S},w)=\int_0^{z_\mathrm{S}}d\chi(z_\mathrm{L})(1+z_\mathrm{L})^2n_\mathrm{CO}\sigma(M_\mathrm{CO},z_\mathrm{L},z_\mathrm{S},w),\label{taun} \end{equation} where $\chi$ is the comoving distance, $n_{\rm{CO}}$ is the CO number density and the cross section is given by \begin{equation} \sigma(M_\mathrm{CO},z_\mathrm{L},z_\mathrm{S},w)=\frac{4\pi GM_\mathrm{CO}}{c^2}\frac{D_\mathrm{L}D_\mathrm{LS}}{D_\mathrm{S}}\left[y_\mathrm{max}^2-y_\mathrm{min}^2(M_\mathrm{CO},z_\mathrm{L},w)\right]. \end{equation} Using the Hubble parameter at the lens redshift and the Hubble constant, Eq.\ref{taun} can be rewritten as: \begin{equation} \begin{aligned} \tau(M_{\rm{CO}},f_{\rm{CO}},z_S,w)=\frac{3}{2}f_{\rm{CO}}\Omega_c\int_0^{z_{\rm{S}}}dz_{\rm{L}}\frac{H_0^2}{cH(z_{\rm{L}})}\frac{D_{\rm{L}}D_{\rm{LS}}}{D_{\rm{S}}} \\ \times(1+z_{\rm{L}})^2\left[y_\mathrm{max}^2-y_\mathrm{min}^2(M_\mathrm{CO},z_\mathrm{L},w)\right]. \label{tau} \end{aligned} \end{equation} We adopt the flat $\Lambda$CDM cosmology with total dark matter density $\Omega_c=0.24$, baryonic matter density $\Omega_b=0.06$ and Hubble constant $H_0=70\rm{km\ s^{-1}Mpc^{-1}}$. Following other works, we assume that a fraction of the dark matter is in the form of COs with the same mass $M_{\rm{CO}}$, so that $f_{\rm{CO}}=\Omega_{\rm{CO}}/\Omega_c$.
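For illustration, the following Python sketch (ours; all function names are hypothetical) numerically evaluates the per-burst optical depth of Eq.~(\ref{tau}), using the time delay of Eq.~(\ref{dt}), $R_{\rm f,max}=5$ and the flat $\Lambda$CDM parameters quoted above. For $M_{\rm CO}=30M_\odot$, $f_{\rm CO}=1$, $z_{\rm S}=1$ and $w=1$ ms the result is at the percent level, consistent with the expectation that thousands of FRBs are needed for a detection.
\begin{verbatim}
# A rough numerical sketch (ours) of the per-burst lensing optical depth.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

c_kms, H0 = 2.998e5, 70.0                     # km/s, km/s/Mpc
Om, OL, Oc = 0.30, 0.70, 0.24
T4GM = 1.97e-5                                # 4 G M_sun / c^3 in seconds
y_max = np.sqrt((1.0 + 5.0) / np.sqrt(5.0) - 2.0)   # from R_f,max = 5

def E(z): return np.sqrt(Om * (1 + z)**3 + OL)

def chi(z):                                   # comoving distance, Mpc
    return c_kms / H0 * quad(lambda x: 1.0 / E(x), 0.0, z)[0]

def D_A(z1, z2=0.0):                          # angular diameter distance, Mpc
    return (chi(max(z1, z2)) - chi(min(z1, z2))) / (1.0 + max(z1, z2))

def delay(y, M, zL):                          # time-delay formula, seconds
    r = np.sqrt(y * y + 4.0)
    return T4GM * M * (1 + zL) * (0.5 * y * r + np.log((r + y) / (r - y)))

def y_min(M, zL, w):                          # smallest y giving delay > width
    if delay(y_max, M, zL) <= w:
        return y_max                          # no detectable configuration
    return brentq(lambda y: delay(y, M, zL) - w, 1e-6, y_max)

def tau(M, f, zS, w):                         # per-burst optical depth
    def integrand(zL):
        dist = D_A(zL) * D_A(zL, zS) / D_A(zS)
        dy2 = y_max**2 - y_min(M, zL, w)**2
        return H0 / (c_kms * E(zL)) * dist * (1 + zL)**2 * dy2
    return 1.5 * f * Oc * quad(integrand, 0.0, zS)[0]

print(tau(30.0, 1.0, 1.0, 1e-3))              # of order 1e-2
\end{verbatim}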
According to the definition, the expected number of lensed FRBs is the sum of the lensing optical depths of all FRBs (for $\tau_i\ll1$): \begin{equation} N_\mathrm{lensed}(M_\mathrm{CO},f_\mathrm{CO})=\sum_{i=1}^{N_\mathrm{total}}\tau_\mathrm{i}(M_{\rm{CO}},f_{\rm{CO}},z_{S,i},w_i).\label{N} \end{equation} This equation shows that a given ($M_\mathrm{CO},f_\mathrm{CO}$) predicts a corresponding number of detectable lensed FRB signals. Conversely, one can infer ($M_\mathrm{CO},f_\mathrm{CO}$) from the number of observed lensed signals. In particular, if no lensed signal is detected, the region in the ($M_\mathrm{CO},f_\mathrm{CO}$) parameter space that predicts at least one lensed signal should be ruled out, which is the standard analysis pipeline widely used in the literature. \section{constraints with current observations} The number of verified FRBs is rapidly increasing. At the time of writing this Letter, the number of reported FRBs is 110. In addition, there are 9 extra events that are regarded as strong candidates. Although the method only requires the transient nature of the signal, most of the candidates do not have measured widths and are therefore not used in this work. We will introduce how we analyze these data and constrain COs in this section. \subsection{Identifying the lensed signals} \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[width=8cm,angle=0,trim={25mm 15mm 58mm 30mm}]{FRB121002.pdf} \\ \includegraphics[width=8cm,angle=0,trim={25mm 15mm 58mm 30mm}]{FRB121102.pdf}\\ \end{tabular} \caption{Dynamic spectra of FRB~121002 and a multiple-peak burst of FRB~121102 plotted using the raw data from \protect\url{https://data-portal.hpc.swin.edu.au/dataset} and \protect\url{http://seti.berkeley.edu/frb121102}, respectively. } \label{figure:mp_FRBs} \end{center} \end{figure} In Mu$\rm{\tilde{n}}$oz et al. (2016), the double-peak structure was pointed out to be the signature of a lensed FRB. We have searched for such signals in the catalogue and found a few existing FRBs that have a multiple-peak structure and are candidates for lensing. They are FRB 170827~\citep{Farah2018}, FRB 121002~\citep{Champion2016}, FRB 121102 (repeating)~\citep{Hessels2018}, FRB 180814.J0422+73 (repeating)~\citep{CHIME/FRB2019}, FRB 181112~\citep{Cho2020} and the very recent FRB 181123 detected by the Five-hundred-meter Aperture Spherical radio Telescope (FAST)~\citep{Zhu2020}. To identify a lensed signal, we suggest that one should further use the dynamic spectrum information, which reflects the intrinsic features of an FRB. The dynamic spectra of these events are presented in the original papers except for FRB 121002. One can easily tell that FRB 170827, 121102, 180814.J0422+73 and 181123 are not lensed, since the pulses in the dynamic spectra corresponding to different peaks show different structures. It is impossible to fit them using only a time delay and a relative magnification, as lensing requires. Rather than being a lensing effect, the multiple peaks of these FRBs must come from the intrinsic substructure of the signals themselves. For example, FRB 181123 has three peaks, but peak 2 contains only the higher-frequency part and peak 3 only the lower-frequency part of the emission seen in peak 1. We plot the dynamic spectrum of FRB 121002 in the upper panel of Fig.\ref{figure:mp_FRBs}. Lensing of this event was first discussed by \citet{Munoz2016}. The signal-to-noise ratio is low, so the two peaks cannot be easily distinguished in the dynamic spectrum and we cannot compare them.
However, the second-arriving peak is more intense than the first one, which contradicts the prediction of lensing theory. We therefore take it as an unlensed event. We also plot a burst of the repeating FRB 121102 in the lower panel as an example. It clearly shows the ``frequency drift'' phenomenon, where multiple bursts occur within several milliseconds with decreasing frequencies. Finally, it is worth mentioning that the spectrum of FRB 181112 showed two similar pulses with a very large flux ratio~\citep{Cho2020}. However, the different polarization details and the impossibility of wave effects indicate the peaks should be intrinsic~\citep{Cho2020}. Therefore, we emphasize that it is important to use more information like the dynamic spectra or polarization properties to identify any lensed signals, such that the degeneracy between intrinsic substructure and lensing can be broken. A lensed FRB should appear in the dynamic spectrum as two pulses with the same shape, differing only by a flux magnification and a time delay (the fainter one arrives later as the echo). We have carefully examined the dynamic spectra of the 110 FRBs with their original papers or the raw data on the FRB website, especially those that have multiple peaks. No strong evidence of a lensing signal was found, which can shed light on the properties of the lenses. \subsection{Results} The radio pulse from an FRB experiences a frequency-dependent time delay when propagating through the ionized medium, quantified by the dispersion measure (DM), which is proportional to the number of free electrons along the line of sight. If we know the ionization history of the Universe, we can infer the distances/redshifts from the directly measured DMs. The observed DM of an FRB can be decomposed into \begin{equation} \mathrm{DM=DM_{MW}+DM_E}, \end{equation} where \begin{equation} \mathrm{DM_E=DM_{IGM}+\frac{DM_{host}+DM_{src}}{1+z}} \end{equation} is the external DM contribution outside the Milky Way galaxy, and $\rm{DM_{host}}$ and $\rm{DM_{src}}$ are from the FRB host galaxy and the source environment, respectively. The biggest issue here is that we do not have much information on the host galaxy and source environment, except for those FRBs that can be localized~\citep{Li2020}. The current viewpoint is that the average $\rm{DM_{host}+DM_{src}}$ could span from several tens to $\sim200{\rm\ pc~cm^{-3}}$. To make a robust conclusion, we adopt the maximum value $200{\rm\ pc~cm^{-3}}$, which is equivalent to the minimum inferred $z$ for all host galaxies. For the galactic DM contribution $\rm{DM}_{\rm{MW}}$, we use the NE2001 galactic electron density model~\citep{Cordes2002}. We adopt two intergalactic medium (IGM) models for inferring redshifts from the remaining dispersion measure $\rm{DM_{IGM}}$. In model 1, we follow the original work by \citet{Petroff2016}, where the fraction of baryon mass in the IGM $f_{\rm IGM}$ was assumed to be unity ($f_{\rm IGM}=1.0$) and the He ionization history was not taken into consideration, giving approximately ${\rm DM_{IGM}} \sim 1200z~{\rm pc~cm^{-3}}$~\citep{Ioka2003}. In model 2, the ${\rm DM_{IGM}}-z$ relation is given by \citet{Deng2014}, approximately ${\rm DM_{IGM}} \sim 855z~{\rm pc~cm^{-3}}$~\citep{Zhang2018}, with the He ionization history taken into account and $f_{\rm IGM}=0.83$. With the current five localized FRBs, model 2 seems to be more favored~\citep{Li2020}.
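As an illustration of this inference, the following Python sketch (ours) solves ${\rm DM_E}=A\,z+200/(1+z)$ for $z$, with $A=1200$ (model 1) or $A=855~{\rm pc~cm^{-3}}$ (model 2) and the conservative host-plus-source value of $200~{\rm pc~cm^{-3}}$ adopted above:
\begin{verbatim}
# A small sketch (ours) of the redshift inference described above: given the
# extragalactic dispersion measure DM_E (pc/cm^3), solve
#   DM_E = A*z + (DM_host + DM_src)/(1 + z)
# with A = 1200 (model 1) or A = 855 (model 2), host + source = 200 pc/cm^3.
from scipy.optimize import brentq

def infer_z(DM_E, A=855.0, host=200.0):
    if DM_E <= host:                 # the host term alone can account for DM_E
        return 0.0
    return brentq(lambda z: A * z + host / (1.0 + z) - DM_E, 0.0, 20.0)

for A in (1200.0, 855.0):            # model 1 vs model 2
    print(A, infer_z(1000.0, A=A))   # model 2 yields the larger redshift
\end{verbatim}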
\begin{figure} \includegraphics[width=\columnwidth,angle=0]{results.pdf} \caption{Constraints on the CO fraction and mass based on the fact that no lensed signal has been found in the current data. The shaded regions are ruled out. The limits are at the $68\%$ confidence level (1 $\sigma$); there are no limits at the 2 $\sigma$ level. }\label{results} \end{figure} We follow the standard operating procedure in the literature for studying the nature of COs. Each ($M_{\rm{CO}},f_{\rm{CO}}$) point in Fig.\ref{results} corresponds to an expected number of lensed FRB signals according to Eq.\ref{N}. Since no lensed signal has been found in the current data, the shaded regions in the ($M_{\rm{CO}},f_{\rm{CO}}$) parameter space, which predict at least one detectable lensed signal, are ruled out at the $68\%$ confidence level (1 $\sigma$). In the case of IGM model 2, the mass can be tested down to $\sim 100M_\odot$ and $f_{\rm{CO}}$ is gradually constrained to $\sim0.5-0.6$ for large masses. In the case of IGM model 1, the constraints are weaker since it gives smaller redshifts. Our results are comparable to those from wide binaries~\citep{Quinn2009}. Although the current constraints are relatively weak, especially for small masses, we have demonstrated the feasibility of this method. With thousands of events detected in the near future, we will be able to give much better constraints, especially for small masses ($<100M_\odot$). \section{forecast} In this section, we use the realistic distributions of the data to make an improved forecast. Furthermore, we discuss how COs can be constrained with detected lensed signals. \begin{figure} \includegraphics[width=\columnwidth,angle=0]{distribution.pdf} \caption{The 2-dimensional distribution of widths and inferred redshifts with the two methods. We note that a width desert between 30 and 300 ms exists in the current data. }\label{distribution} \end{figure} \begin{figure} \includegraphics[width=\columnwidth,angle=0]{forecast.pdf} \caption{A forecast based on the realistic distributions of the data. The critical lines (red and blue) correspond to the case in which one lensed signal is expected to be detected. We also show the results of Mu$\rm{\tilde{n}}$oz et al. (2016) for comparison, where the event number is $10^4$ and the constant signal widths are 0.3, 1 and 3 ms in cases 1, 2 and 3, respectively. }\label{forecast} \end{figure} \subsection{A null search case} In Mu$\rm{\tilde{n}}$oz et al. (2016), to calculate the integrated lensing probability, the optical depth for lensing of a single burst had to be convolved with the redshift distribution of FRBs. They assumed that FRBs either have a constant comoving number density or follow the star-formation history. Since we know little about the FRB origin and the DM contribution from host galaxies, there is no reason to make any assumptions about the redshift distribution of FRBs. For example, if the progenitors of FRBs are binary stars, then a delay time distribution (DTD) relative to the star formation rate exists. The direct and more robust way is to infer FRB redshifts from the detected signals themselves. Furthermore, they assumed constant FRB widths of 0.3, 1 and 3 ms, which is not realistic. We make a forecast based on the real distribution of the data. The 2-dimensional distribution of widths and redshifts is plotted in Fig.\ref{distribution}. The widths are the observed ones, rather than the intrinsic widths.
The data are from the FRB website http://frbcat.org/, which provides the observed (as well as inferred) parameters of each verified signal. The redshifts are inferred from the dispersion measure with the two IGM models. The generated events for forecasting follow the 2-D distribution in Fig.\ref{distribution}, i.e., the simulation follows the observed data themselves. The number of $10^4$ events assumed in this work does not rely on any particular survey. The data from all the telescopes could be used in the analysis. The number is chosen such that we can compare our results with Mu$\rm{\tilde{n}}$oz et al. (2016). Besides, $10^4$ FRBs per year is promising for CHIME-like telescopes. The improved forecast is shown in Fig.\ref{forecast}. The critical curves are similar to those in Mu$\rm{\tilde{n}}$oz et al. (2016); however, they are less steep at the small-mass end, which is determined by some very small widths in the catalog, while the decreasing trend persists to large masses due to some very large widths. In addition, we also consider $10^3$ events for either the very near future or a pessimistic scenario. \subsection{Constraints from lensed signals} We now discuss the case in which at least one lensed signal is verified. Once a lensed FRB signal is detected, we can estimate the lens mass from the measured time delay and flux ratio. The source position can be determined from the flux ratio, and then the redshifted lens mass can be determined from the time delay. Compared to the uncertainties in the measured time delay and flux ratio, the uncertainty in the lens redshift dominates. The typical value is $\sigma_{z_L}\sim 0.5$. Nevertheless, it is sufficient for current CO studies. The mass can still be pinned down rather well at a certain scale. Moreover, if more than one lensed signal is detected, we can even test whether all COs have the same mass and probe theories that predict a non-constant mass function. Intermediate-mass black holes may also be discovered in this way. To show how this method works, we simulate a typical lensed signal in Fig.\ref{figure:sim_FRB} as an example. It uses the \emph{\sc psrfits} search-mode format covering a frequency range of 1230$-$1518\,MHz with 512 channels. The data are two-bit sampled with a sampling time of 64\,$\mu$s. The DMs and the widths at the 50\% power point of these two pulses are 1000\,cm$^{-3}$\,pc and 0.5\,ms, respectively. The redshift of the FRB is $z_{\rm{S}}=1.0$ and the compact object is located at $z_{\rm{L}}=0.5$. The source position relative to the Einstein radius is $y=0.5$. Assuming a flat $\Lambda$CDM model with $\Omega_{\rm{M}}=0.3$ and $H_0=70\rm{km/s/Mpc}$, the time delay between the two lensed signals is $\Delta t=1.5$ ms, the first-arriving signal has magnification $|\mu_+|=1.6$ and the second signal has $|\mu_-|=0.6$, with flux ratio $R_{\rm{f}}=2.7$. From the dynamic spectrum, one can clearly see two identical pulses except for the time delay and flux ratio differences. If we detect such a signal with $\Delta t$ and $R_f$ measurements, we can infer the redshifted lens mass $M_z=M_{\rm{CO}}(1+z_{\rm{L}})=45M_\odot$ based on Eq.\ref{dt} and Eq.\ref{Rf}. However, since $z_{\rm{L}}$ cannot be observed, the inference of $M_{\rm{CO}}=M_z/(1+z_{\rm{L}})$ carries the uncertainty of $z_{\rm{L}}$, which should lie in $[0,z_{\rm{S}}=1]$, giving $22.5M_\odot<M_{\rm{CO}}<45M_\odot$. Nevertheless, since we have information on $M_{\rm{CO}}$, the degeneracy between $M_{\rm{CO}}$ and $f_{\rm{CO}}$ can be broken. This would further narrow the allowed regions in Fig.\ref{results} or Fig.\ref{forecast}.
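The inversion itself is elementary; the following Python sketch (ours, with round-trip values chosen only for illustration and not tied to the simulated event above) recovers the impact parameter from the flux ratio via $y=\left[(1+R_{\rm f})/\sqrt{R_{\rm f}}-2\right]^{1/2}$ and then the redshifted mass from the time delay using Eq.~(\ref{dt}):
\begin{verbatim}
# A minimal sketch (ours) of lens-mass estimation from a measured time delay
# and flux ratio, inverting the time-delay and flux-ratio formulas above.
import math

T4GM = 1.97e-5                                   # 4 G M_sun / c^3 in seconds

def y_from_Rf(Rf):                               # invert the flux-ratio formula
    return math.sqrt((1.0 + Rf) / math.sqrt(Rf) - 2.0)

def bracket(y):                                  # geometric factor in the delay
    r = math.sqrt(y * y + 4.0)
    return 0.5 * y * r + math.log((r + y) / (r - y))

def Mz_from_obs(dt, Rf):                         # redshifted mass (1+z_L)*M_CO
    return dt / (T4GM * bracket(y_from_Rf(Rf)))

def Rf_of_y(y):                                  # flux ratio, for the round trip
    r = y * math.sqrt(y * y + 4.0)
    return (y * y + 2.0 + r) / (y * y + 2.0 - r)

# Round trip with illustrative values (ours): M_z = 60 M_sun, y = 0.3.
dt = T4GM * 60.0 * bracket(0.3)
print(Mz_from_obs(dt, Rf_of_y(0.3)))             # recovers ~60 M_sun
\end{verbatim}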
This idea is very similar to the mass estimation in lensing of gravitational waves~\citep{Cao2014}. \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[width=8cm,angle=0,trim={25mm 15mm 58mm 30mm}]{sim_FRB.pdf}\\ \end{tabular} \caption{Dynamic spectrum of a simulated lensed signal.} \label{figure:sim_FRB} \end{center} \end{figure} \section{Summary and prospects} Fast radio bursts are one of the most exciting new mysteries of astrophysics. Beyond how they are created, there is also the prospect of using FRBs to probe the extremes of the Universe and the invisible intervening medium. Due to their short durations, cosmological distances and large event rate, the lensing of FRBs could be a powerful and robust tool to probe compact dark matter/objects. We have made some progress in this work, summarized as follows: \begin{enumerate} \item For the first time, we use realistic FRB data to give a constraint on the fraction and mass of COs. The constraint results are comparable to those from wide binaries; \item We make an improved forecast based on the distributions of the existing FRBs for the upcoming CHIME-like experiments; \item We discuss the importance of using the dynamic spectra of FRBs in identifying lensed signals; this can effectively break the degeneracy between intrinsic substructure and lensing imprints; \item We discuss the situation in which a few lensed signals are detected and find that the CO parameter space can be further narrowed down by lens mass estimation. \end{enumerate} For future studies, it is necessary to build up an effective pipeline to identify lensed FRBs, especially for the upcoming large number of FRBs. It is also important to understand the properties of the host galaxies, the ionization history of the Universe and its fluctuations along different directions, so that the redshift inference can be more accurate. Fast, high-spatial-resolution programs will directly identify the host galaxies, and thus a large number of redshifts can be measured accurately. More and more events have polarization measurements, which can be used as an extra criterion for identifying lensed signals, especially for those with similar dynamic spectra, since lensing would not change the polarization. While writing this Letter, we noted a very recent work based on analyzing FRB 181112 and 180924~\citep{Sammons2020}. It shows that burst substructure can be measured with a time resolution down to $15\mu s$, such that much smaller mass scales can be probed, making the FRB method very promising. \section*{Acknowledgments} We thank the referee for his/her helpful comments and G. Hobbs for his \emph{\sc pfits} software package to simulate the lensed signal. KL was supported by the National Natural Science Foundation of China (NSFC) No.~11973034. ZL was supported by the NSFC No.~11920101003. HG was supported by the NSFC No.~11722324, 11690024, 11633001, the Strategic Priority Research Program of the Chinese Academy of Sciences No.~XDB23040100 and the Fundamental Research Funds for the Central Universities. SBZ was supported by the NSFC No.~11725314.
\section{Introduction}\label{S:Intro} It was shown in \cite{F-T-Gev} (see also~\cite{Chae}, \cite{FMRT}) that the solutions of the 2D Navier--Stokes equations with periodic boundary conditions belong to the Gevrey class of analytic functions (if the forcing term does). Using the Gevrey regularity approach the following estimate for the spatial analyticity radius for the solutions that lie on the global attractor (or are near it) was obtained \begin{equation}\label{estFT} l_a\ge\frac{c|\Omega|^{1/2}}{G^2\log G}, \end{equation} where $G=\|f\|_{L_2}|\Omega|/\nu^2$ is the Grashof number and $|\Omega|=L^2/\gamma$ is the area of the periodic domain $\Omega=[0,L/\gamma]\times[0,L]$, $\gamma\le1$. Therefore, the Fourier coefficients $\hat u_k$ are exponentially small for $|k|\gg L/l_a$, and $l_a$ naturally forms a lower bound for the small dissipative length scale for the system (see, for instance,~\cite{D-T}). There are other ways of estimating the dissipative small length scale for the Navier--Stokes system, for instance, in terms of the dimension of the global attractor \cite{BV}, \cite{CF88}, \cite{CFT85}, \cite{FMRT}, \cite{T}. The Hausdorff and fractal dimensions of the global attractor satisfy the following estimate \cite{CFT} (see also~\cite{CF88}, \cite{T}): $$ \dim_F\mathcal{A}\le c_1 G^{2/3}(\log(1+G))^{1/3},\qquad c_1=c_1(\gamma) $$ which has been shown in \cite{Liu} (following ideas of~\cite{BV}) to be logarithmically sharp. If we accept the point of view that the small length scale can be defined as follows (see \cite{CFT85}, \cite{FMRT}, \cite{Rob}, \cite{T}) \begin{equation}\label{viadim} l_f\sim\left(\frac{|\Omega|}{\dim_F\mathcal{A}}\right)^{1/2}, \end{equation} then up to logarithmic correction we have \begin{equation}\label{heurest} l_f\sim\frac{|\Omega|^{1/2}}{G^{1/3}}\,. \end{equation} This heuristic estimate for the small length scale is probably the best one can hope for since it matches, up to logarithmic term, the physically asserted estimates for the enstrophy dissipation length scale~\cite{Kr} . We also observe that the estimate~(\ref{heurest}) is extensive, that is, independent of the size of the spatial domain provided that its shape is fixed. Another rigorous definition of the small length scale can be given in terms of the number of determining modes, nodes, or volume elements (see \cite{FMRT}, \cite{F-P}, \cite{F-T-Nodes}, \cite{Jones-Titi2} and the references therein). It was shown that if $N$ is sufficiently large and $N$ equal squares of size $l_{dn}$ tile the periodic spatial domain, then any collection of points (one in each square) are determining for the long time dynamics of the 2D Navier--Stokes system. The best to date estimate for $N$ was obtained in \cite{Jones-Titi2}: $$ N\le c_2G, $$ where $c_2=c_2(\gamma)$ depends only on the aspect ratio $\gamma\le1$. (An explicit estimate for $c_2$ was obtained in \cite{I-T3}: $c_2(\gamma)=(68/(\gamma\pi))^{1/2}$.) Therefore the small length scale defined in terms of the lattice of determining nodes satisfies \begin{equation}\label{l-nodes} l_{dn}\ge c_2(\gamma)^{-1/2}\frac{|\Omega|^{1/2}}{G^{1/2}}\,. \end{equation} We observe that this estimate is not extensive, that is, $l_{dn}$ scales like $\lambda^{-1/2}$ if $\Omega$ is replaced by $\lambda\Omega$, $\lambda>0$. 
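To put these scalings side by side, the following short Python sketch (ours) evaluates the quoted estimates for the classical 2D Navier--Stokes system as functions of the Grashof number, in units of $|\Omega|^{1/2}$ and with all order-one prefactors set to 1:
\begin{verbatim}
# A small numerical comparison (ours) of the length-scale estimates quoted
# above, in units of |Omega|^{1/2} and with all prefactors set to 1.
import math

for G in (1e2, 1e4, 1e6):
    l_gevrey = 1.0 / (G**2 * math.log(G))               # Gevrey-class bound on l_a
    l_f      = G**(-1.0 / 3.0)                          # heuristic dissipative scale
    l_dn     = G**(-0.5)                                # determining-nodes scale
    l_a_K    = G**(-0.5) * (1 + math.log(G))**(-0.25)   # analyticity bound of Kukavica
    print(f"G={G:.0e}: {l_gevrey:.2e} {l_f:.2e} {l_dn:.2e} {l_a_K:.2e}")
\end{verbatim}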
We point out here that for the 2D Navier--Stokes system with analytic forcing the results of~\cite{F-K-R}, \cite{F-R} provide the existence of a finite number $N$ of instantaneously determining nodes comparable with the fractal dimension of the attractor. These nodes, however, can be chosen arbitrarily (up to a subset of $\Omega^N$ with $2N$-dimensional Lebesgue measure zero) and therefore do not naturally define a regular lattice of determining nodes. The best to date estimate for the analyticity radius of the solutions of the Navier--Stokes equations with analytic forcing term $f$ was obtained in \cite{Kuk}: \begin{equation}\label{l-a-K} l_a\ge c_3(\gamma)\frac{|\Omega|^{1/2}}{G^{1/2}(1+\log G)^{1/4}}\,. \end{equation} Relating the radius of analyticity to the dissipative small length scale (see also~\cite{H-K-R} in this regard) we note that up to a logarithmic correction the estimate~(\ref{l-a-K}) coincides with (\ref{l-nodes}), but both are worse than~(\ref{viadim}), where the latter coincides, as we have already pointed out, with the physically asserted estimate of~\cite{Kr}. In this paper we focus on the 2D space periodic Navier--Stokes system with damping \begin{equation}\label{DNS} \begin{aligned} \partial_tu+\sum_{i=1}^2u^i\partial_iu&= -\mu u+\nu\Delta\,u-\nabla\,p +f,\\ \div u&=0. \end{aligned} \end{equation} By adding the Coriolis forcing term to~(\ref{DNS}) one obtains the well-known Stommel--Charney barotropic model of ocean circulation~\cite{Charney}, \cite{D-F}, \cite{P}, \cite{Stommel}. Here the damping $\mu u$ represents the Rayleigh friction term and $f$ is the wind stress. For an analytical study of this system see, for instance \cite{BCT}, \cite{Hauk}, \cite{I91}, \cite{W}, and the references therein. In a follow up work we will be studying the effect of adding rotation (Coriolis parameter) on the size of small scales and the complexity of the dynamics of~(\ref{DNS}). Therefore, we will focus in this work on the system~(\ref{DNS}). We also point out that in this geophysical context the viscosity plays a much smaller role in the mechanism of dissipating energy than the Rayleigh friction. That is why in this work the friction coefficient $\mu>0$ will be fixed and we consider the system at the limit when $\nu\to0^+$. Sharp estimates (as $\nu\to0$) for the Hausdorff and the fractal dimensions of the global attractor of the system~(\ref{DNS}) were first obtained in the case of the square-shaped domain in \cite{I-M-T} ($\gamma=1$). Then the case of an elongated domain was studied in~\cite{I-T6} ($\gamma\to0$), where it was shown that \begin{equation}\label{dimD} \dim_F\mathcal{A}\le c_4D,\qquad D=\frac{\|\mathop\mathrm{rot} f\|_\infty|\Omega|}{\mu\nu}\,, \end{equation} where $c_4$ is an absolute constant ($c_4\le12$). This estimate is sharp as both $\nu\to0$ and $\gamma\to0$. Therefore the small length scale defined as in~(\ref{viadim}) is of the order of \begin{equation}\label{lfdamped} l_f\sim\left(\frac{|\Omega|}{\dim_F\mathcal{A}}\right)^{1/2} \sim\,\left(\frac{\mu\nu}{\|\mathop\mathrm{rot} f\|_\infty}\right)^{1/2} \sim\,\frac{|\Omega|^{1/2}}{D^{1/2}}\,. \end{equation} This heuristic estimate is, in fact, a rigorous bound for the small length scale expressed in terms of the number of determining modes and nodes~\cite{I-T3}: \begin{equation}\label{dndamped} l_{dn}= c_5\left(\frac{|\Omega|}{D}\right)^{1/2}=c_5 \left(\frac{\mu\nu}{\|\mathop\mathrm{rot} f\|_\infty }\right)^{1/2}, \qquad c_5=68^{1/4}. 
\end{equation} This means that any lattice of points in $\Omega$ at a typical distance $l\le l_{dn}$ is determining. The main result of this paper is in showing that the analyticity radius $l_a$ of the solutions of the damped-driven Navier--Stokes system~(\ref{DNS}) lying on the global attractor is bounded from below and satisfies the estimate: \begin{equation}\label{la} l_a\ge \frac{c|\Omega|^{1/2}}{D^{1/2}(1+\log D)^{1/2}}\,, \end{equation} which up to a logarithmic correction agrees both with the smallest scale estimate~(\ref{lfdamped}) and the rigorously defined typical distance between the determining nodes~(\ref{dndamped}). It is worth mentioning that this point of view of relating the radius of analyticity of solutions on the Navier--Stokes equations to small scales in turbulence was also presented in~\cite{H-K-R}. This paper is organized as follows. In section~\ref{S:Gevrey} we employ the Gevrey--Hilbert space technique of \cite{F-T-Gev} to derive a lower bound for the radius of analyticity of the order \begin{equation}\label{Gevrey-D} \frac{c|\Omega|^{1/2}}{D^2\log D}\,. \end{equation} This bound considerably improves, for a fixed $\mu>0$, the lower bound~(\ref{estFT}) for the classical Navier--Stokes system as $\nu\to0^+$ (see also Remark~\ref{R:with-mu}). Let us remark that as an alternative to the Gevrey regularity technique for estimating small scales one can apply the ladder estimates approach presented in~\cite{D-G} to obtain estimates for the small scales in~(\ref{DNS}) (see also~\cite{Gi-Ti}). In section~\ref{S:Kuk} the estimate~(\ref{la}) is proved for the system~(\ref{DNS}) following~\cite{Kuk}. \setcounter{equation}{0} \section{Gevrey regularity of the damped Navier--Stokes system}\label{S:Gevrey} As usual (see, for instance, \cite{BV},\cite{CF88},\cite{Lad},\cite{TNS}), we write~(\ref{DNS}) as an evolution equation in the Hilbert space $H$ which is the closed subspace of solenoidal vectors in $(L_2(\Omega))^2$ with zero average over the torus $\Omega=[0,L/\gamma]\times[0,L]$: \begin{equation}\label{FDNS} \partial_tu+B(u,u)+\nu Au+\mu u=f, \qquad u(0)=u_0. \end{equation} Here $A=-P\Delta$ is the Stokes operator with eigenvalues $0<\lambda_1\le\lambda_2\le\dots$, $B(u,v)=P\bigl(\sum_{i=1}^2u^i\partial_iv\bigr)$ is the nonlinear term, $f=Pf\in H$, and $P:(L_2(\Omega))^2\to H$. We restrict ourselves to the case $\gamma=1$ and, in addition, assume that $\Omega=[0,2\pi]^2$ (this simplifies the Fourier series below). The case of the square-shaped domain $\Omega=[0,L]^2$ reduces to this case by scaling. Furthermore, any domain with aspect ratio $\gamma<1$ can be treated in the similar way, the absolute dimensionless constants $c_1, c_2,\dots$ below will then depend on $\gamma$, however. A vector field $u\in H$ has the Fourier series expansion $$ u=\sum_{j\in\mathbb{Z}^2} u_je^{ij\cdot x}, \quad u_j\in\mathbb{C}^2, \quad u_{-j}=\bar u_j, \quad u_j\cdot j=0, \quad u_0=0, $$ and $$ \|u\|^2=\|u\|^2_{L_2}=(2\pi)^2\sum_{j\in\mathbb{Z}^2} |u_j|^2. $$ The eigenvalues of the Stokes operator $A$ are the numbers $|j|^2$, and the domain of its powers is the set of vector functions $u$ such that $$ (2\pi)^2\sum_{j\in\mathbb{Z}^2}|j|^{4\alpha} |u_j|^2= \|A^\alpha u\|^2<\infty. $$ For $\tau, s>0$ we define the Gevrey space $D(e^{\tau A^s})$ of functions $u$ satisfying \begin{equation}\label{def-Gev} (2\pi)^2\sum_{j\in\mathbb{Z}^2} e^{2\tau|j|^{2s}} |u_j|^2= \|e^{\tau A^s}u\|^2<\infty. 
\end{equation} We suppose that the forcing term $f$ belongs to the Gevrey space of analytic functions \begin{equation}\label{f-Gev} f\in D(e^{\sigma_1A^{1/2}}A^{1/2}), \ \text{so that}\ (2\pi)^2\sum_{j\in\mathbb{Z}^2}|j|^2e^{2\sigma_1|j|} |f_j|^2= \|e^{\sigma_1A^{1/2}}A^{1/2}f\|^2<\infty \end{equation} for some $\sigma_1>0$. We set $$ \varphi(t)=\min(\nu\lambda_1^{1/2}t,\sigma_1). $$ The norm and the scalar product in $D(e^{\varphi(t)A^{1/2}})$ are denoted by $\|\cdot\|_\varphi$ and $(\cdot,\cdot)_\varphi$, respectively. We assume that $u_0\in D(A^{1/2})$ and take the scalar product of~(\ref{FDNS}) and $Au$ in $D(e^{\varphi(t)A^{1/2}})$ for sufficiently small $t\le\sigma_1/(\nu\lambda_1^{1/2})$. Since $$ \bigl(e^{\varphi(t)A^{1/2}}\partial_tu(t),e^{\varphi(t)A^{1/2}}Au(t)\bigr)= \frac12\partial_t\|A^{1/2}u(t)\|^2_\varphi- \nu\lambda_1^{1/2}(Au(t),A^{1/2}u(t))_\varphi, $$ we obtain \begin{equation}\label{scal-pr} \frac12\partial_t\|A^{1/2}u\|_\varphi^2+\nu\|Au\|^2_\varphi+\mu\|A^{1/2}u\|^2_\varphi= (B(u,u),Au)_\varphi+\nu\lambda_1^{1/2}(Au,A^{1/2}u)_\varphi+ (A^{1/2}f,A^{1/2}u)_\varphi. \end{equation} Next we use the key estimate (see \cite{F-T-Gev}, \cite{FMRT}, \cite{Titi}) for the nonlinear term in Gevrey spaces $$ (B(u,u),Au)_\varphi\le c_1\|A^{1/2}u\|^2_\varphi\|Au\|_\varphi \biggl(1+ \log\frac{\|Au\|^2_\varphi}{\lambda_1\|A^{1/2}u\|^2_\varphi}\biggr)^{1/2} $$ and use Young's inequality for this estimate and for the last two terms in~(\ref{scal-pr}): $$ \aligned \partial_t\|A^{1/2}u\|_\varphi^2+&\nu\|Au\|^2_\varphi\le\\&\le \frac{2c_1^2}\nu\|A^{1/2}u\|^4_\varphi \biggl(1+\log\frac{\|Au\|^2_\varphi}{\lambda_1\|A^{1/2}u\|^2_\varphi}\biggr) +2\nu\lambda_1\|A^{1/2}u\|^2_\varphi+\frac{\|A^{1/2}f\|^2_\varphi}{2\mu}\le\\ &\le \frac{c_2}\nu\|A^{1/2}u\|^4_\varphi \biggl(1+\log\frac{\|Au\|^2_\varphi}{\lambda_1\|A^{1/2}u\|^2_\varphi}\biggr) +\nu^3\lambda_1^2+\frac{\|A^{1/2}f\|^2_\varphi}{2\mu}, \endaligned $$ where $c_2=2c_1^2+1$. Next, using the inequality $ -\alpha z+\beta(1+\log z)\le\beta\log\beta/\alpha $ (see \cite{FMRT}, \cite{FMT}), we find $$ \aligned -\nu\|Au\|_\varphi^2+ \frac{c_2}\nu\|A^{1/2}u\|^4_\varphi \biggl(1+\log\frac{\|Au\|^2_\varphi}{\lambda_1\|A^{1/2}u\|^2_\varphi}\biggr)\le \frac{c_2}\nu\|A^{1/2}u\|^4_\varphi \log\frac{c_2\|A^{1/2}u\|^2_\varphi}{\lambda_1\nu^2}, \endaligned $$ and obtain the differential inequality $$ \partial_t\|A^{1/2}u\|_\varphi^2\le \frac{c_2}\nu\|A^{1/2}u\|^4_\varphi \log\frac{c_2\|A^{1/2}u\|^2_\varphi}{\lambda_1\nu^2} +\nu^3\lambda_1^2+\frac{\|A^{1/2}f\|^2_\varphi}{2\mu}\,. $$ Hence the function $$ y(t)=\frac{c_2\|A^{1/2}u\|_\varphi^2}{\lambda_1\nu^2}+ \frac{\|A^{1/2}f\|_{\sigma_1}^2}{\lambda_1\nu^{3/2}\mu^{1/2}} +e, $$ where $\ln e=1$, satisfies $$ \partial_ty(t)\le\nu\lambda_1c_3y^2\log y, \qquad c_3=\max(1,c_2/2). $$ Therefore $y(t)\le2y(0)$ for as long as $$t\le(2\nu\lambda_1c_3y(0)\log2y(0))^{-1}.$$ In other words, $$ \|A^{1/2}u\|_\varphi^2\le2\|A^{1/2}u_0\|^2+ c_4(\nu/\mu)^{1/2}\|A^{1/2}f\|_{\sigma_1}+c_4\lambda_1\nu^2, \qquad c_4=e/c_2, $$ as long as $0\le t\le T^*(\|A^{1/2}u_0\|)$, where $$ \aligned T^*(\|A^{1/2}u_0\|)=\frac1{2c_3\nu\lambda_1 \left(\frac{c_2\|A^{1/2}u_0\|^2}{\lambda_1\nu^2} +\frac{\|A^{1/2}f\|_{\sigma_1}}{\lambda_1\nu^{3/2}\mu^{1/2}}+e \right)\log\left(2\left(\frac{c_2\|A^{1/2}u_0\|^2}{\lambda_1\nu^2} +\frac{\|A^{1/2}f\|_{\sigma_1}}{\lambda_1\nu^{3/2}\mu^{1/2}}+e \right)\right)}.
\endaligned $$ We now observe (see Lemma~\ref{L:Est-for-rot}) that on the global attractor or in the absorbing ball we have, respectively, $$ \|A^{1/2}u(t)\|\le\frac{\|A^{1/2}f\|}\mu,\quad t\in\mathbb{R}, \quad \|A^{1/2}u(t)\|\le2\frac{\|A^{1/2}f\|}\mu,\quad t\ge T_0(\|A^{1/2}u_0\|). $$ Therefore we have the following lower bound for $T^*$: $$ T^*\ge c_5\left[\nu\lambda_1 \left(\frac{\|A^{1/2}f\|^2}{\lambda_1\nu^2\mu^2} +\frac{\|A^{1/2}f\|_{\sigma_1}}{\lambda_1\nu^{3/2}\mu^{1/2}}+1 \right)\log\left(\frac{\|A^{1/2}f\|^2}{\lambda_1\nu^2\mu^2} +\frac{\|A^{1/2}f\|_{\sigma_1}}{\lambda_1\nu^{3/2}\mu^{1/2}}+1 \right) \right]^{-1} $$ In the limit $\nu\to0^+$ we have $$ \frac{\|A^{1/2}f\|^2}{\lambda_1\nu^2\mu^2}\gg \frac{\|A^{1/2}f\|_{\sigma_1}}{\lambda_1\nu^{3/2}\mu^{1/2}}, $$ and we can write the lower bound for $T^*$ as follows $$ T^*\ge c_6\left[\nu\lambda_1D^2\log D\right]^{-1}, $$ where $$ \frac{\|A^{1/2}f\|}{\lambda_1^{1/2}\nu\mu}= \frac{\|\mathop\mathrm{rot} f\||\Omega|^{1/2}}{2\pi\nu\mu}\le\frac1{2\pi}D, \quad \text{where}\quad D=\frac{\|\mathop\mathrm{rot} f\|_\infty|\Omega|}{\nu\mu}\,. $$ In terms of the analyticity radius $l_a$ the lower bound for $T^*$ takes the form $$ l_a\ge\frac{c_7|\Omega|^{1/2}}{D^2\log D}\,. $$ Thus, we have proved the following theorem. \begin{theorem}\label{T:Gevrey} Suppose that $f\in D(A^{1/2}e^{\sigma_1A^{1/2}})$ for some $\sigma_1>0$. Then a solution $u$ lying on the global attractor $\mathcal{A}$ is analytic with analyticity radius $$ l_a\ge\min\left(\frac{c_7|\Omega|^{1/2}}{(D^2+D_1+1)\log(D^2+D_1+1)}\ , \ \sigma_1\right), $$ where $$ D=\frac{\|\mathop\mathrm{rot} f\|_\infty|\Omega|}{\nu\mu}\,,\qquad D_1=\frac{\|A^{1/2}f\|_{\sigma_1}}{\lambda_1\nu^{3/2}\mu^{1/2}}. $$ Moreover, \begin{equation}\label{Gev-with-mu} l_a\ge\frac{c_8|\Omega|^{1/2}}{D^2\log D}\,\quad \text{as}\quad\nu\to0^+. \end{equation} The constants $c_7$ and $c_8$ depend only on the aspect ratio of the periodic domain $\Omega$. \end{theorem} \begin{remark}\label{R:with-mu} \rm{ We observe that the estimate~(\ref{Gev-with-mu}) for the system~(\ref{DNS}) is of the order $\nu^{-2}\log(1/\nu)$ as far as the dependence on $\nu\to0^+$ is concerned, while the estimate~(\ref{estFT}) for the classical Navier--Stokes system is, in this respect much larger; namely, is of the order~$\nu^{-4}\log(1/\nu)$. However, the estimate~(\ref{Gev-with-mu}) is not sharp and will be improved in the next section. As has been demonstrated in~\cite{O-T} the Gevrey--Hilbert space technique does not always provide sharp estimates for the radius of analyticity. The mechanism explaining this has been reported in \cite{O-T} by means of an explicitly solvable model equation. } \end{remark} \setcounter{equation}{0} \section{Sharper bounds}\label{S:Kuk} In this section we obtain sharper lower bounds for the analyticity radius $l_a$. This is achieved by combining the $\nu$-independent estimate for the vorticity contained in the following lemma and the $L_p$-technique developed in~\cite{Gr-K}, \cite{Kuk} for the uniform analyticity radius of the solutions of the Navier--Stokes equations. We observe that similar technique has been earlier established in~\cite{B-B} for studying the analyticity of the Euler equations. 
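To get a quick sense of how much is gained, the following sketch (a purely illustrative computation: the absolute constants and the factor $|\Omega|^{1/2}$ are set to $1$, which is an assumption made only for this comparison) evaluates the right-hand sides of~(\ref{Gevrey-D}) and~(\ref{la}) for several values of the dimensionless parameter $D$.
\begin{verbatim}
# Illustrative comparison of the two lower bounds for the analyticity radius
# (absolute constants and |Omega|^{1/2} are set to 1 here).
import math

def gevrey_bound(D):      # ~ 1/(D^2 log D), the Gevrey-type bound
    return 1.0 / (D**2 * math.log(D))

def sharper_bound(D):     # ~ 1/(D^{1/2} (1 + log D)^{1/2}), the sharper bound
    return 1.0 / (math.sqrt(D) * math.sqrt(1.0 + math.log(D)))

for D in (1e2, 1e4, 1e6, 1e8):
    print(f"D = {D:.0e}:  Gevrey bound {gevrey_bound(D):.3e},"
          f"  sharper bound {sharper_bound(D):.3e}")
\end{verbatim}
Already for moderate values of $D$ the second quantity is larger by many orders of magnitude, which is the improvement obtained in this section.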
Applying the operator $\mathop\mathrm{rot}$ to~(\ref{DNS}) we obtain the well-known scalar vorticity equation \begin{equation}\label{VE} \partial_t\omega+u\cdot\nabla\omega=\nu\Delta\omega-\mu \omega+F, \end{equation} where $\omega=\mathop\mathrm{rot} u$, $F=\mathop\mathrm{rot} f$, $u=\nabla^\perp\Delta^{-1}\omega$, so that $u\cdot\nabla\omega=\nabla^\perp\Delta^{-1}\omega\cdot\nabla\omega= J(\Delta^{-1}\omega,\omega)$, and $\nabla^\perp\,=(-\partial_2\ ,\partial_1\ )$, $J(a,b)=\nabla^\perp a\cdot \nabla b$. \begin{lemma}\label{L:Est-for-rot} {\rm(See~\cite{I-T3}.)} The solutions $u(t)$ lying on the global attractor $\mathcal{A}$ satisfy the following bound: \begin{equation}\label{rot-infty} \|\omega(t)\|_{L_{2k}}\le \frac{\|\mathop\mathrm{rot} f\|_{L_{2k}}}{\mu}\ , \qquad t\in\mathbb{R}, \end{equation} where $1\le k\le\infty$. \end{lemma} \begin{proof} We use the vorticity equation~(\ref{VE}) and take the scalar product with $\omega^{2k-1}$, where $k\ge1$ is an integer, and use the identity $$(J(\psi,\varphi),\varphi^{2k-1})= (2k)^{-1}\int J(\psi,\varphi^{2k})dx= (2k)^{-1}\int\div(\varphi^{2k}\nabla^\perp\psi)dx=0.$$ We obtain $$ \aligned \|\omega\|_{L_{2k}}^{2k-1}\partial_t\|\omega\|_{L_{2k}}+ (2k-1)\nu\int|\nabla\,\omega|^2\omega^{2k-2}dx+ \mu\|\omega\|_{L_{2k}}^{2k}=\\=(\mathop\mathrm{rot} f,\omega^{2k-1})\le\ \|\mathop\mathrm{rot} f\|_{L_{2k}}\|\omega\|_{L_{2k}}^{2k-1}. \endaligned $$ Hence, by Gronwall's inequality $$ \|\omega(t)\|_{L_{2k}}\le\|\omega(0)\|_{L_{2k}}e^{-\mu t}+ \mu^{-1}\|\mathop\mathrm{rot} f\|_{L_{2k}}(1-e^{-\mu t}), $$ and passing to the limit as $k\to\infty$ we find $$ \|\omega(t)\|_\infty\le\|\omega(0)\|_\infty e^{-\mu t}+ \mu^{-1}\|\mathop\mathrm{rot} f\|_\infty(1-e^{-\mu t}). $$ Now, we let $t\to\infty$ in the above inequalities and obtain $$ \limsup_{t\to\infty}\|\omega(t)\|_{L_{2k}}\le \frac{\|\mathop\mathrm{rot} f\|_{L_{2k}}}\mu\ , \quad 1\le k\le\infty, $$ which gives (\ref{rot-infty}) since the solutions lying on the attractor are bounded for $t\in\mathbb{R}$. \end{proof} As before we consider the square-shaped domain $\Omega=[0,L]^2$ and it is now convenient to write~(\ref{DNS}) in dimensionless form. We introduce dimensionless variables $x'$, $t'$, $u'$ and $p'$ by setting $$ x=Lx',\quad t=(L^2/\nu)t',\quad u=(\nu/L)u',\quad p=(\nu^2/L^2)p', \quad \mu=(\nu/L^2)\mu'. $$ We obtain \begin{equation}\label{DDNS} \begin{aligned} \partial_{t'}u'+\sum_{i=1}^2{u'}^i\partial'_iu'&= -\mu' u'+\Delta'\,u'-\nabla'\,p' +f',\\ {\div}' u'&=0, \end{aligned} \end{equation} where $x'\in\Omega'=[0,1]^2$, $f'=(L^3/\nu^2)f$. Accordingly, the dimensionless form of~(\ref{VE}) is as follows (we omit the primes): \begin{equation}\label{DVE} \partial_t\omega+u\cdot\nabla\omega=\Delta\omega-\mu \omega+F. \end{equation} \begin{remark}\label{Rm:Dimless-est} {\rm For dimensionless variables $u'$ and $\omega'$ the estimate~(\ref{rot-infty}) with $k=\infty$ takes the form \begin{equation}\label{Dimless-est} \|\omega'\|_\infty=\|{\mathop\mathrm{rot}}'u'\|_\infty\le \frac{\|{\mathop\mathrm{rot}}'f'\|_\infty}{\mu'}= \frac{\|{\mathop\mathrm{rot}}f\|_\infty L^2}{\nu\mu}=D. \end{equation} } \end{remark} The next lemma is similar to the main estimate for the space analyticity radius in~\cite{Kuk}. \begin{lemma}\label{L: analytic sol} Suppose that $F$ is a restriction to $\Omega$ $($that is, $y=0$$)$ of a bounded $x$-periodic analytic function $F(x+iy)+iG(x+iy)$ in the region $|y|\le\delta_F$ and \begin{equation}\label{MF} M_F^2=\sup_{x\in\Omega, \ |y|\le\delta_F} (F(x+iy)^2+G(x+iy)^2).
\end{equation} Let $p\ge3/2$ and let $$ t_0=\frac{M_{2p}^2}{CM_F^2/\mu}\,. $$ Here $($and throughout$)$ $C$ is a sufficiently large universal constant and $M_{2p}\ge\|\omega_0\|_{L_{2p}}$. Then the solution $\omega(t)$ is analytic for $t>0$ and for $0<t\le t_0$ the space analyticity radius of $\omega(t)$ is greater than $$ \delta(t)= \min\left( \frac{t^{1/2}}C, \frac1{Cpt^{(2p-3)/4p}M_{2p}}, \frac1{Cpt^{(2p-3)/(4p+6)}M_{2p}^{2p/(2p+3)}}, \frac1{pt^{1/2}M_{2p}}, \delta_F \right). $$ \end{lemma} \begin{proof} We solve (\ref{DVE}) by a sequence of approximating solutions (see~\cite{Kato}, \cite{Kuk}). We set $u^{(0)}=0$ and $\omega^{(0)}=0$. Then for $\omega^{(n)}$, $u^{(n)}$ we have the equation \begin{equation}\label{Iter} \aligned \partial_t\omega^{(n)}-\Delta\omega^{(n)}+ u^{(n-1)}\cdot\nabla\omega^{(n)}+\mu \omega^{(n)}=F\\ \omega^{(n)}(0)=\omega_0=\mathop\mathrm{rot} u_0, \qquad u^{(n)}=\nabla^\perp\Delta^{-1}\omega^{(n)}. \endaligned \end{equation} The solutions $\omega^{(n)}$ and $u^{(n)}$ for $t>0$ have analytic extensions $\omega^{(n)}+i\theta^{(n)}$ and $u^{(n)}+iv^{(n)}$ and since the system~(\ref{Iter}) is linear, their analyticity radius is at least $\delta_F$. They satisfy the equation $$ \partial_t(\omega^{(n)}+i\theta^{(n)})-\Delta(\omega^{(n)}+i\theta^{(n)})+ (u^{(n-1)}+iv^{(n-1)})\cdot\nabla(\omega^{(n)}+i\theta^{(n)}) +\mu (\omega^{(n)}+i\theta^{(n)})=F+iG, $$ or, equivalently, the system \begin{equation}\label{Comp-syst} \aligned &\partial_t\omega^{(n)}-\Delta\omega^{(n)}+\mu\omega^{(n)}+ u^{(n-1)}\cdot\nabla\omega^{(n)}-v^{(n-1)}\cdot\nabla\theta^{(n)}&=F,\\ &\partial_t\theta^{(n)}-\Delta\theta^{(n)}+\mu\theta^{(n)}+ u^{(n-1)}\cdot\nabla\theta^{(n)}+v^{(n-1)}\cdot\nabla\omega^{(n)}&=G, \endaligned \end{equation} where, as before, $u^{(n)}=\nabla^\perp\Delta^{-1}\omega^{(n)}$, $v^{(n)}=\nabla^\perp\Delta^{-1}\theta^{(n)}$, and the differential operators are taken with respect to $x$. In view of the analyticity of the solutions we have the Cauchy--Riemann equations \begin{equation}\label{Cauchy-Riemann} \aligned \frac{\partial\omega^{(n)}}{\partial y_j}\,&=\,- \frac{\partial\theta^{(n)}}{\partial x_j}\,,\\ \frac{\partial\omega^{(n)}}{\partial x_j}\,&=\, \frac{\partial\theta^{(n)}}{\partial y_j}\,,\quad j=1,2, \endaligned \end{equation} and the similar equations for $u^{(n)}$ and $v^{(n)}$. Let $\varepsilon>0$. We consider the functional \begin{equation}\label{psin} \psi_n(t)=\int_0^1\int_{\Omega} \bigl(\omega^{(n)}(x,\alpha ts,t)^2+\theta^{(n)}(x,\alpha ts,t)^2+ \varepsilon\bigr)^pdxds. \end{equation} We also set $$ Q_n(x,s,t)=\omega^{(n)}(x,\alpha ts,t)^2+\theta^{(n)}(x,\alpha ts,t)^2+\varepsilon. $$ Here $t\in\mathbb{R}^+$ and $\alpha\in \mathbb{R}^2$. The combination $\alpha ts$ will play the role of the variable $y$; $p\ge3/2$, and $\varepsilon>0$ is arbitrary. We differentiate $\psi_n(t)$ taking into account~(\ref{Comp-syst}) and use the Cauchy--Riemann equations~(\ref{Cauchy-Riemann}) to handle the derivatives with respect to $y$. 
We obtain \begin{equation}\label{I0-I4} \frac1{2p}\partial_t\psi_n(t)+I_0=I_1+I_2+I_3+I_4, \end{equation} where $$ \aligned I_0=\int_0^1\int_\Omega Q_n^{p-1}\bigl(|\nabla\omega^{(n)}|^2+ |\nabla\theta^{(n)}|^2+\mu(\omega^{(n)})^2+\mu(\theta^{(n)})^2\bigr)dxds+\\ +2(p-1)\int_0^1\int_\Omega Q_n^{p-2}\bigl(\omega^{(n)}\nabla\omega^{(n)}+ \theta^{(n)}\nabla\theta^{(n)}\bigr)^2dxds, \endaligned $$ and $$ \aligned &I_1=\int_0^1\int_\Omega Q_n^{p-1}\bigl(-\omega^{(n)}\nabla\theta^{(n)}+ \theta^{(n)}\nabla\omega^{(n)}\bigr)\cdot\alpha s\,dxds,\\ &I_2=\int_0^1\int_\Omega Q_n^{p-1}\bigl(\omega^{(n)}\nabla\omega^{(n)}+ \theta^{(n)}\nabla\theta^{(n)}\bigr)\cdot u^{(n-1)}dxds,\\ &I_3=\int_0^1\int_\Omega Q_n^{p-1}\bigl(-\omega^{(n)}\nabla\theta^{(n)}+ \theta^{(n)}\nabla\omega^{(n)}\bigr)\cdot v^{(n-1)}dxds,\\ &I_4=\int_0^1\int_\Omega Q_n^{p-1}\bigl(\omega^{(n)}F+ \theta^{(n)}G)dxds. \endaligned $$ The arguments of $Q_n$ are $x,s,t$, and the arguments of $\omega^{(n)}$, $\theta^{(n)}$, $u^{(n)}$, and $v^{(n)}$ are $x$, $\alpha ts$, and $t$. For an arbitrary $\eta>0$ we have \begin{equation}\label{I1} \aligned I_1&\le\eta \int_0^1\int_\Omega Q_n^{p-1}\bigl(|\nabla\omega^{(n)}|^2+ |\nabla\theta^{(n)}|^2\bigr)dxds+\\ &C_\eta\int_0^1\int_\Omega Q_n^{p-1}\bigl((\omega^{(n)})^2+ (\theta^{(n)})^2\bigr)|\alpha|^2s^2dxds\le \eta I_0+C_\eta|\alpha|^2\psi_n(t). \endaligned \end{equation} Next, \begin{equation}\label{I2} I_2=\frac1{2p}\int_0^1\int_\Omega \nabla Q_n^{p}\cdot u^{(n-1)}dxds=0. \end{equation} For $I_3$ we have \begin{equation}\label{I3-1} I_3\le \eta I_0+C_\eta\int_0^1\int_\Omega Q_n^{p}\,|v^{(n-1)}|^2dxds\le \eta I_0+C_\eta I_3'I_3'', \end{equation} where $$ I_3'=\left(\int_0^1\int_\Omega Q_n(x,s,t)^{p^2/(p-1)} dxds\right)^{(p-1)/p}\!, \, I_3''=\sum_{j=1}^2\left(\int_0^1\int_\Omega |v_j^{(n-1)} (x,\alpha t s, t)|^{2p}dxds\right)^{1/p}\!. $$ We write $I_3'$ as follows $$ I_3'=\|Q_n^{p/2}\|_{L_\beta(\Omega_0)}^2, \qquad\Omega_0=\Omega\times[0,1]\subset\mathbb{R}^3, \qquad\beta=2p/({p-1}),\qquad 2\le\beta\le6, $$ and use in $\Omega_0$ the Gagliardo--Nirenberg inequality $$ \|A\|_{L_\beta(\Omega_0)}\le C\|A\|_{L_2(\Omega_0)}^{3/\beta-1/2} \|\nabla_{x,s}\,A\|_{L_2(\Omega_0)}^{3/2-3/\beta}+ C\|A\|_{L_2(\Omega_0)} $$ for $A=A(x,s)=Q_n^{p/2}(x,s,t)$. We have $$ \aligned &\|\nabla_{x,s}\,A\|_{L_2(\Omega_0)}^2= \|\nabla_{x,s}\,Q_n^{p/2}\|_{L_2(\Omega_0)}^2=\\ &p^2\int_0^1\int_\Omega Q_n^{p-2}\bigl((\omega^{(n)}\nabla\omega^{(n)} +\theta^{(n)}\nabla\theta^{(n)})^2+ t^2 (\theta^{(n)}\alpha\cdot\nabla\omega^{(n)}- \omega^{(n)}\alpha\cdot\nabla\theta^{(n)})^2\bigr)dxds\le\\ &\qquad\qquad \le Cp^2(1+|\alpha|^2t^2)I_0. \endaligned $$ Hence, $$ \|\nabla_{x,s}\,A\|_{L_2(\Omega_0)}^{3/2-3/\beta}= \|\nabla_{x,s}\,Q_n^{p/2}\|_{L_2(\Omega_0)}^{3/2p}\le C(1+|\alpha|^2t^2)^{3/4p}I_0^{3/4p}. $$ Next, $\|Q_n^{p/2}\|_{L_2(\Omega_0)}^2=\psi_n(t)$, $$ \|A\|_{L_2(\Omega_0)}^{3/\beta-1/2}= \|Q_n^{p/2}\|_{L_2(\Omega_0)}^{(2p-3)/2p}=\psi_n(t)^{(2p-3)/4p} $$ and \begin{equation}\label{I3'} I_3'\le C(1+|\alpha|^2t^2)^{3/2p}I_0^{3/2p}\psi_n(t)^{(2p-3)/2p}+C\psi_n(t). \end{equation} We now consider $I_3''$. Since $v_j^{(n-1)}(x,0,t)=0$ (the solution restricted to $y=0$ is real-valued), we have (using the Cauchy--Riemann equations for $v_j$) $$ \aligned |v_j^{(n-1)}(x,\alpha ts,t)|=&\left|\sum_{k=1}^2\alpha_kts \int_0^1\partial_{y_k} v_j^{(n-1)}(x,\alpha ts\tau,t)d\tau\right|=\\ &\left|\sum_{k=1}^2\alpha_kts \int_0^1\partial_{k} u_j^{(n-1)}(x,\alpha ts\tau,t)d\tau\right|. 
\endaligned $$ Then $$ \aligned I_3''=\sum_{j=1}^2\left(\int_0^1\int_\Omega\left|\sum_{k=1}^2 \alpha_kts\int_0^1\partial_ku_j^{(n-1)}(x,\alpha ts\tau,t)d\tau\right|^{2p} dxds\right)^{1/p}\le\\ C|\alpha|^2t^2\left(\int_0^1\int_\Omega\int_0^1\left| \nabla u^{(n-1)}(x,\alpha ts\tau,t)\right|^{2p}s^{2p}d\tau dxds\right)^{1/p}=\\ C|\alpha|^2t^2\left(\int_0^1s^{2p}ds\int_0^1d\tau\int_\Omega\left| \nabla u^{(n-1)}(x,\alpha ts\tau,t)\right|^{2p} dx\right)^{1/p}. \endaligned $$ Since $u=\nabla^\perp\Delta^{-1}\omega$, we have (see~\cite{G-T}, \cite{Yud}) $$ \left(\int_\Omega|\nabla u(x)|^{2p}dx\right)^{1/2p}= \|\nabla\nabla^\perp\Delta^{-1}\omega\|_{L_{2p}}\le \|\Delta^{-1}\omega\|_{W_{2p}^2}\le Cp\|\omega\|_{L_{2p}}\,. $$ Therefore $$ \aligned I_3''\le Cp^2|\alpha|^2t^2\left(\int_0^1\int_0^1\int_\Omega\left| \omega^{(n-1)}(x,\alpha ts\tau,t)\right|^{2p} dx\, s^{2p}dsd\tau\right)^{1/p}\le\\ Cp^2|\alpha|^2t^2\left(\int_0^1\int_\Omega Q_{n-1}^pdxds\right)^{1/p}\le Cp^2|\alpha|^2t^2\psi_{n-1}(t)^{1/p}, \endaligned $$ where we have used $\int_0^1\int_0^1h(s\tau)s^{2p}dsd\tau\le(2p)^{-1}\int_0^1h(s)ds$. Combining this with~(\ref{I3-1}) and (\ref{I3'}) we obtain \begin{equation}\label{I3-final} \aligned &I_3\le\\ &\eta'I_0+C_{\eta'}p^2|\alpha|^2t^2(1+|\alpha|^2t^2)^{3/2p} I_0^{3/2p}\psi_n(t)^{(2p-3)/2p}\psi_{n-1}(t)^{1/p}+ Cp^2|\alpha|^2t^2\psi_{n-1}(t)^{1/p} \psi_n(t)\le\\ &\eta I_0+C_{\eta}p^2(|\alpha|t)^{4p/(2p-3)}(1+|\alpha|^2t^2)^{3/(2p-3)} \psi_{n-1}(t)^{2/(2p-3)}\psi_{n}(t)+Cp^2|\alpha|^2t^2\psi_{n-1}(t)^{1/p} \psi_n(t). \endaligned \end{equation} Finally, we estimate $I_4$: \begin{equation}\label{I4} \aligned I_4\le\int_0^1\int_\Omega Q_n^{p-1}\bigl(((\omega^{(n)})^2 +(\theta^{(n)})^2)\eta\mu+(F^2+G^2)/(4\eta\mu)\bigr)dxds\le\\ \eta I_0+C_\eta(M_F^2/\mu)\int_0^1\int_\Omega Q_n^{p-1}dxds \le\eta I_0+C_\eta(M_F^2/\mu)\psi_n(t)^{(p-1)/p}, \endaligned \end{equation} where $M_F$ is defined in~(\ref{MF}). Taking $\eta>0$ sufficiently small we infer from~(\ref{I0-I4}), (\ref{I1}), (\ref{I2}), (\ref{I3-final}), (\ref{I4}) $$ \aligned \partial_t\psi_n(t)\le Cp|\alpha|^2\psi_n(t)+Cp^3|\alpha|^{4p/(2p-3)} t^{4p/(2p-3)}\psi_{n-1}(t)^{2/(2p-3)}\psi_n(t)+\\ Cp^3|\alpha|^{(4p+6)/(2p-3)} t^{(4p+6)/(2p-3)}\psi_{n-1}(t)^{2/(2p-3)}\psi_n(t)+\\ Cp^3|\alpha|^2t^2\psi_{n-1}(t)^{1/p}\psi_n(t)+ Cp\psi_n(t)^{(p-1)/p}M_F^2/\mu, \endaligned $$ where $$ \psi_n(0)=\int_\Omega(\omega_0(x)^2+\varepsilon)^pdx. $$ We set $\varphi_n(t)=\psi_n(t)^{1/2p}$ and obtain the differential inequality for $\varphi_n$: $$ \aligned \partial_t\varphi_n(t)\le C|\alpha|^2\varphi_n(t)+Cp^2|\alpha|^{4p/(2p-3)} t^{4p/(2p-3)}\varphi_{n-1}(t)^{4p/(2p-3)}\varphi_n(t)+\\ Cp^2|\alpha|^{(4p+6)/(2p-3)} t^{(4p+6)/(2p-3)}\varphi_{n-1}(t)^{4p/(2p-3)}\varphi_n(t)+\\ Cp^2|\alpha|^2t^2\varphi_{n-1}(t)^{2}\varphi_n(t)+ C\varphi_n(t)^{-1}M_F^2/\mu.
\endaligned $$ We now use the Gronwall-type Lemma~\ref{L:Kuk-Gronwall} from \cite{Kuk} below and see that $\varphi_n(t)\le2\varphi_n(0)$ on the time interval specified in~(\ref{tmin}), (\ref{A1-A5}), and letting $\varepsilon\to0$ we obtain $$ \int_0^1\int_\Omega\left(\omega^{(n)}(x,\alpha ts,t)^2+\theta^{(n)}(x,\alpha ts,t)^2 \right)^pdxds\le2^{2p}M_{2p}^{2p}, $$ for $t\ge0$, $|\alpha|t\le\delta_F$ and \begin{equation}\label{tmin} t\le\min(A_1,A_2,A_3,A_4,A_5), \end{equation} where \begin{equation}\label{A1-A5} \aligned &A_1=\frac1{C|\alpha|^2},\\ &A_2=\frac1{Cp^{(4p-6)/(6p-3)}|\alpha|^{4p/(6p-3)}M_{2p}^{4p/(6p-3)}},\\ &A_3=\frac1{Cp^{(4p-6)/(6p+3)}|\alpha|^{(4p+6)/(6p+3)}M_{2p}^{4p/(6p+3)}},\\ &A_4=\frac1{Cp^{2/3}|\alpha|^{2/3}M_{2p}^{2/3}},\\ &A_5=\frac{M_{2p}^2}{CM_F^2\mu^{-1}}\,. \endaligned \end{equation} We now set \begin{equation}\label{t0} t_0=\frac{M_{2p}^2}{CM_F^2/\mu}\,. \end{equation} Then the condition $$ t\le\min(A_1,A_2,A_3,A_4) $$ can be written in terms of $y=\alpha t$ as follows \begin{equation}\label{|y|} |y|\le\min\left( \frac{t^{1/2}}C, \frac1{Cpt^{(2p-3)/4p}M_{2p}}, \frac1{Cpt^{(2p-3)/(4p+6)}M_{2p}^{2p/(2p+3)}}, \frac1{pt^{1/2}M_{2p}} \right). \end{equation} Now for $t_0$ defined in~(\ref{t0}) and \begin{equation}\label{delta(t)} \delta(t)= \min\left( \frac{t^{1/2}}C, \frac1{Cpt^{(2p-3)/4p}M_{2p}}, \frac1{Cpt^{(2p-3)/(4p+6)}M_{2p}^{2p/(2p+3)}}, \frac1{pt^{1/2}M_{2p}}, \delta_F \right) \end{equation} we have for $0<t\le t_0$ and $|y|\le\delta(t)$ $$ \int_0^1\int_\Omega\bigl(\omega^{(n)}(x,sy,t)^2 +\theta^{(n)}(x,sy,t)^2\bigr)^pdxds\le2^{2p}M_{2p}^{2p} $$ for all integers $n\ge1$. Therefore for any $y\in\mathbb{R}^2$ with $|y|=1$ this gives that $$ \int_0^{\delta(t)}\int_\Omega\bigl(\omega^{(n)}(x,sy,t)^2 +\theta^{(n)}(x,sy,t)^2\bigr)^pdxds\le2^{2p}\delta(t)M_{2p}^{2p} $$ and since $\int_0^\delta f(sy)ds\le B$ for all $|y|=1$ implies $\int_{|y|\le\delta}f(y)dy\le2\pi\delta B$, we obtain $$ \int_{|y|\le\delta(t)}\int_\Omega\bigl(\omega^{(n)}(x,y,t)^2 +\theta^{(n)}(x,y,t)^2\bigr)^pdxdy\le2\pi\,2^{2p}\delta(t)^2M_{2p}^{2p}. $$ This estimate is uniform in $n$ and as in~\cite{Gr-K}, \cite{Kuk} we obtain the existence of an analytic solution of~(\ref{DVE}) with analyticity radius satisfying~(\ref{delta(t)}). The proof is complete. \end{proof} \begin{lemma}\label{L:Kuk-Gronwall} {\rm(See~\cite{Kuk}.)} Let $y_n(t)\in C^1[0,T]$ be a sequence of non-negative functions satisfying $y_0(t)\le M$ for $0\le t\le T$, and $y_n(0)\le M$ for $n\ge1$. Suppose that on the interval $0\le t\le T$ $$ \partial_ty_n(t)\le\sum_{j=1}^NK_jt^{\alpha_j}y_n(t)^{\beta_j}y_{n-1}(t)^{\gamma_j}, $$ where $K_j>0$, $\alpha_j>-1$, $\beta_j\in\mathbb{R}$, and $\gamma_j\ge0$ are given constants. Then $y_n(t)\le 2M$ for all $n=0,1,2,\dots$ provided that $$ 0\le t\le\min\left(T, \min_{j=1,\dots,N}\left(\frac{\alpha_j+1} {NK_j2^{\beta_j^++\gamma_j}M^{\beta_j+\gamma_j-1}} \right)^{1/(\alpha_j+1)}\right), $$ where $\beta^+=\max(\beta,0)$. \end{lemma} We can now state the main result of this section. \begin{theorem}\label{T:Analyticity _in_L_infty} The solutions of the 2D space-periodic damped-driven Navier--Stokes system~$(\ref{DNS})$ lying on the global attractor $\mathcal{A}$ are analytic with space analyticity radius $l_a$ satisfying the lower bound \begin{equation}\label{est_for_la} l_a\ge\frac{|\Omega|^{1/2}}{CD^{1/2}(1+\log D)^{1/2}},\qquad \text{where}\qquad D=\frac{\|\mathop\mathrm{rot} f\|_\infty|\Omega|}{\mu\nu}\,.
\end{equation} \end{theorem} \begin{proof} We first observe that~(\ref{est_for_la}) is equivalent to the estimate \begin{equation}\label{est_for_la_dimless} l_a\ge\frac{1}{CD^{1/2}(1+\log D)^{1/2}} \end{equation} for the equation written in dimensionless form. Next, by Young's inequality $$ \aligned &pt^{(2p-3)/4p}M\le CpM^{4p/(4p-3)}t^{1/2}+t^{-1/2},\\ &pt^{(2p-3)/(4p+6)}M^{2p/(2p+3)}\le CpM^{}t^{1/2}+t^{-1/2}. \endaligned $$ Hence, the estimate~(\ref{delta(t)}) can be written as follows \begin{equation}\label{delta(t)3} \delta(t)\ge \min\left( \frac{t^{1/2}}C, \frac1{Cpt^{1/2}\bigl(M_{2p}^{4p/(4p-3)}+M_{2p}\bigr)}, \delta_F \right). \end{equation} The solutions lying on the attractor are bounded in $L_{2p}$: $$ \|\omega(t)\|_{L_{2p}}\le M_{2p}, \qquad \mathrm{M}_{2p}\le CM_\infty. $$ Setting $$ p=C(1+\log M_\infty) $$ we see that $$ \left.p\bigl(M_{2p}^{4p/(4p-3)}+M_{2p}\bigr)\right\vert_{p=C(1+\log M_\infty)} \le C(1+\log M_\infty)M_\infty $$ and therefore $$ \delta(t)\ge \min\left( \frac{t^{1/2}}C\,,\, \frac1{C(1+\log M_\infty)M_\infty t^{1/2}}\,, \, \delta_F \right). $$ At the moment of time $$ t^*=\frac1{C(1+\log M_\infty)M_\infty}\,, $$ which for sufficiently large $M_\infty$ (the case of our interest) is smaller than $t_0$ defined in~(\ref{t0}) (the details are given below) we have $$ \delta(t^*)\ge \frac1{CM_\infty^{1/2}(1+\log M_\infty)^{1/2}}\,. $$ Since $M_\infty\le D$ (see (\ref{Dimless-est})), it follows that $$ \delta(t^*)\ge \frac1{CD^{1/2}(1+\log D)^{1/2}}\,. $$ By the invariance property of the attractor we see that on the attractor the above estimate holds for all $t^*$, which proves~(\ref{est_for_la_dimless}). To complete the proof it remains to show that \begin{equation}\label{t0t*} \frac1{C(1+\log D)D}=t^*\le t_0=\frac{D^2}{CM_{F'}^2/\mu'}\,, \end{equation} where in the expression for $t_0$ we reverted to the prime notation for the dimensionless damping coefficient $\mu'$ and the forcing $F'$. We relate the forcing term and its analytic extension by the equality $$ M_{F'}=K\|{\mathop\mathrm{rot}}'f'\|_\infty,\qquad K=K(F,\delta_F). $$ Recalling that $f'=(L^3/\nu^2)f$, $\mu'=(L^2/\nu)\mu$, and $x'=(1/L)x$ we see that $$ t_0=\frac{\nu}{CK^2\mu L^2}\,. $$ Hence, (\ref{t0t*}) goes over to the condition $$ C(1+\log D)\ge\frac{K^2\mu^2}{\|\mathop\mathrm{rot} f\|_\infty}\,, $$ which is obviously satisfied for all sufficiently small $\nu>0$. The proof is complete. \end{proof} \setcounter{equation}{0} \section{Concluding remarks}\label{S:Conclusion} We have shown that the solutions lying on the attractor of the 2D space-periodic damped-driven Navier--Stokes system, the Stommel--Charney barotropic model of ocean circulation without rotation, with analytic forcing have space ana\-lyti\-city radius which up to a logarithmic term coincides with the small scale estimates both in terms of the sharp bounds for the fractal dimension of the global attractor, and in terms of the spatial lattice of determining nodes. The derivation of this lower bound for the analyticity radius essentially uses the techniques developed in~\cite{Kuk}. \section*{Acknowledgments} A.A.I. would like to thank the warm hospitality of the Mathematics Department at the University of California, Irvine, where this work was done. This work was supported in part by the US Civilian Research and Development Foundation, grant no.~RUM1-2654-MO-05 ( A.A.I. 
and E.S.T.), by the Russian Foundation for Fundamental Research, grants~nos.~06-01-00096 and 05-01-00429, and by the RAS Programme no.1 `Modern problems of theoretical mathematics' (A.A.I.). The work of E.S.T. was supported in part by the National Science Foundation, grant no.~DMS-0504619, the ISF grant no. 120/6, and the BSF grant no. 2004271. \bibliographystyle{amsplain}
\section{Introduction} In~\cite{FKT2}, the authors introduced a class of Riemann surfaces of infinite genus that are ``asymptotic to'' a finite number of complex lines joined by infinite many handles. These surfaces are constructed by pasting together a compact submanifold of finite genus, plane domains, and handles. All these components satisfy a number of geometric/analytic hypotheses stated in~\cite{FKT2} that specify the asymptotic holomorphic structure of the surface. The class of surfaces obtained in this way yields an extension of the classical theory of compact Riemann surfaces that has analogues of many theorems of the classical theory. It was proven in~\cite{FKT2} that this new class includes quite general hyperelliptic surfaces, heat curves (which are spectral curves associated to a certain ``heat-equation''), and Fermi curves with zero magnetic potential. In order to verify the geometric/analytic hypotheses for the latter the authors proved two ``asymptotic'' theorems similar to the ones we prove below. This is the main step needed to verify these hypotheses. In this work we extend their results to Fermi curves with ``small'' magnetic potential. There are two immediate applications of our results. First, as we have already mentioned, one can use our theorems for verifying the geometric/analytic hypotheses of~\cite{FKT2} for Fermi curves with small magnetic potential. This would show that these curves belong to the class of Riemann surfaces mentioned above. Secondly, one can prove that a class of these curves are irreducible (in the usual algebraic-geometrical sense). Both these applications were done in \cite{FKT2}~for Fermi curves with zero magnetic potential. Complex Fermi curves (and other similar spectral curves) have been studied, in different perspectives, in the absence of magnetic field \cite{FKT2, GKT, KT, Kr, Mc}, and in the presence of magnetic field~\cite{FKT1}. Some results on the real Fermi curve in the high-energy region were obtained in~\cite{Ka}. There one also finds a short description of the existing results on periodic magnetic Schr\"odinger operators. An even more general review is presented in~\cite{E}. To our knowledge our work provides new results on complex Fermi curves with magnetic field. At this moment we are only able to handle the case of ``small'' magnetic potential. The asymptotic characterization of Fermi curves with arbitrarily large magnetic potential remains as an open problem. In order to prove our theorems we follow the same strategy as~\cite{FKT2}. The presence of magnetic field makes the analysis considerably harder and requires new estimates. As it was pointed out in~\cite{Ka, E}, the study of an operator with magnetic potential is essentially more complicated than the study of the operator with just an electric potential. This seems to be the case in this problem as well. Before we outline our results let us introduce some definitions. Let~$\Gamma$ be a lattice in~$\mathds R^2$ and let~$A_1$, $A_2$ and~$V$ be real-valued functions in~$L^2(\mathds R^2)$ that are periodic with respect to $\Gamma$. Set~$A \coloneqq (A_1,A_2)$ and define the operator $$ H(A,V) \coloneqq (i \nabla + A)^2 + V $$ acting on $L^2(\mathds R^2)$, where~$\nabla$ is the gradient operator in~$\mathds R^2$. 
For $k \in \mathds R^2$ consider the following eigenvalue-eigenvector problem in~$L^2(\mathds R^2)$ with boundary conditions, \begin{align*} H(A,V) \varphi & = \lambda \varphi, \\ \varphi(x+\gamma) & = e^{i k \cdot \gamma} \varphi(x) \end{align*} for all $x \in \mathds R^2$ and all $\gamma \in \Gamma$. Under suitable hypotheses on the potentials $A$ and $V$ this problem is self-adjoint and its spectrum is discrete. It consists of a sequence of real eigenvalues $$ E_1(k,A,V) \leq E_2(k,A,V) \leq \cdots \leq E_n(k,A,V) \leq \cdots $$ For each integer $n \geq 1$ the eigenvalue $E_n(k,A,V)$ defines a continuous function of $k$. From the above boundary condition it is easy to see that this function is periodic with respect to the dual lattice $$ \Gamma^\# \coloneqq \{ b \in \mathds R^2 \; | \; b \cdot \gamma \in 2\pi \mathds Z \, \text{ for all } \gamma \in \Gamma \}, $$ where $b \cdot \gamma$ is the usual scalar product on~$\mathds R^2$. It is customary to refer to $k$ as the crystal momentum and to $E_n(k,A,V)$ as the $n$-th band function. The corresponding normalized eigenfunctions $\varphi_{n,k}$ are called Bloch eigenfunctions. The operator $H(A,V)$ (and its three-dimensional counterpart) is important in solid state physics. It is the Hamiltonian of a single electron under the influence of magnetic field with vector potential~$A$, and electric field with scalar potential~$V$, in the independent electron model of a two-dimensional solid~\cite{RS4}. The classical framework for studying the spectrum of a differential operator with periodic coefficients is the Floquet (or Bloch) theory \cite{RS4, K, MW}. Roughly speaking, the main idea of this theory is to ``decompose'' the original eigenvalue problem, which usually has continuous spectrum, into a family of boundary value problems, each one having discrete spectrum. In our context this leads to decomposing the problem $H(A,V) \varphi = \lambda \varphi$ (without boundary conditions) into the above $k$-family of boundary value problems. Let $U_k$ be the unitary transformation on~$L^2(\mathds R^2)$ that acts as $$ U_k \, : \, \varphi(x) \longmapsto e^{ik \cdot x} \varphi(x). $$ By applying this transformation we can rewrite the above problem and put the boundary conditions into the operator. Indeed, if we define $$ H_k(A,V) \coloneqq U_k^{-1} \, H(A,V) \, U_k \qquad \text{and} \qquad \psi \coloneqq U_k^{-1} \varphi, $$ then the above problem is unitarily equivalent to $$ H_k(A,V) \psi = \lambda \psi \qquad \text{for} \qquad \psi \in L^2(\mathds R^2 / \Gamma). $$ Furthermore, a simple (formal) calculation shows that $$ H_k(A,V) = (i \nabla + A - k)^2 + V. $$ The real ``lifted'' Fermi curve of $(A,V)$ with energy $\lambda \in \mathds R$ is defined as $$ \widehat{\mathcal{F}}_{\lambda, \mathds R}(A,V) \coloneqq \{ k \in \mathds R^2 \; | \; (H_k(A,V) - \lambda) \varphi = 0 \, \text{ for some } \varphi \in \mathcal{D}_{H_k(A,V)} \setminus \{0\} \}, $$ where $\mathcal{D}_{H_k(A,V)} \subset L^2(\mathds R^2/\Gamma)$ denotes the (dense) domain of $H_k(A,V)$. The adjective ``lifted'' indicates that $\widehat{\mathcal{F}}_{\lambda, \mathds R}(A,V)$ is a subset of $\mathds R^2$ rather than $\mathds R^2 / \Gamma^\#$. As we may replace $V$ by $V-\lambda$, we only discuss the case $\lambda=0$ and write $\widehat{\mathcal{F}}_\mathds R(A,V)$ in place of $\widehat{\mathcal{F}}_{0,\mathds R}(A,V)$ to simplify the notation. Let $|\Gamma| \coloneqq \int_{\mathds R^2 / \Gamma} dx$ and $\hat{A}(0) \coloneqq |\Gamma|^{-1} \int_{\mathds R^2 / \Gamma} A(x) \, dx$. 
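The Floquet picture just described is easy to explore numerically. The following sketch (an illustration only, used nowhere in the proofs: it assumes the square lattice $\Gamma=2\pi\mathds Z^2$, so that $\Gamma^\#=\mathds Z^2$, truncates the Fourier basis, and uses made-up Fourier coefficients for $A$ and $V$) assembles a Galerkin approximation of $H_k(A,V)=(i\nabla+A-k)^2+V$ and reads off the first few band functions $E_n(k,A,V)$ as its eigenvalues.
\begin{verbatim}
# Illustrative Floquet computation (assumptions: Gamma = 2*pi*Z^2, truncated
# Fourier basis, made-up potential coefficients).  For real k the matrix
# below is a Galerkin approximation of H_k(A,V) = (i*grad + A - k)^2 + V in
# the basis e^{i b.x}, |b_1|, |b_2| <= N; its eigenvalues approximate the
# band functions E_n(k, A, V).
import numpy as np

N = 6
modes = [(b1, b2) for b1 in range(-N, N + 1) for b2 in range(-N, N + 1)]

# Fourier coefficients of A = (A1, A2) and V (hermitian-symmetric, so the
# potentials are real-valued; the numbers themselves are made up).
A1_hat = {(1, 0): 0.05, (-1, 0): 0.05}
A2_hat = {(0, 1): 0.05, (0, -1): 0.05}
V_hat = {(1, 1): 0.2, (-1, -1): 0.2}

def conv_matrix(coeffs):
    """Matrix of multiplication by sum_d coeffs[d] e^{i d.x} on the truncated basis."""
    M = np.zeros((len(modes), len(modes)), dtype=complex)
    for i, b in enumerate(modes):
        for j, c in enumerate(modes):
            d = (b[0] - c[0], b[1] - c[1])
            if d in coeffs:
                M[i, j] = coeffs[d]
    return M

def H_matrix(k):
    """Galerkin approximation of H_k(A,V) for real k = (k1, k2)."""
    D = []
    for j, Aj_hat in enumerate((A1_hat, A2_hat)):
        Dj = conv_matrix(Aj_hat)          # multiplication by A_j
        for i, b in enumerate(modes):
            Dj[i, i] += -b[j] - k[j]      # (i d_j) e^{i b.x} = -b_j e^{i b.x}
        D.append(Dj)
    return D[0] @ D[0] + D[1] @ D[1] + conv_matrix(V_hat)

k = np.array([0.3, 0.1])
bands = np.sort(np.linalg.eigvalsh(H_matrix(k)))
print("lowest band functions E_n(k):", bands[:5])
# k lies on the real lifted Fermi curve with energy lambda exactly when some
# E_n(k) equals lambda; scanning k over a fundamental cell of Gamma^# gives a
# rough numerical picture of the curve.
\end{verbatim}
This brute-force picture only covers real $k$; the analysis below is concerned with the complexified curve in the region where $k$ has large imaginary part.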
Since $H_k(A,V)$ is equal to $H_{k-\hat{A}(0)}(A-\hat{A}(0),V)$, if we perform the change of coordinates $k \to k + \hat{A}(0)$ and redefine $A-\hat{A}(0) \to A$ we may assume, without loss of generality, that $\hat{A}(0)=0$. The dual lattice $\Gamma^\#$ acts on $\mathds R^2$ by translating $k \mapsto k+b$ for $b \in \Gamma^\#$. This action maps $\widehat{\mathcal{F}}_\mathds R(A,V)$ to itself because for each $n \geq 1$ the function $k \mapsto E_n(k,A,V)$ is periodic with respect to $\Gamma^\#$. In other words, the real lifted Fermi curve ``is periodic'' with respect to $\Gamma^\#$. Define $$ \mathcal{F}_\mathds R(A,V) \coloneqq \widehat{\mathcal{F}}_\mathds R(A,V) / \Gamma^\#. $$ We call $\mathcal{F}_\mathds R(A,V)$ the real Fermi curve of $(A,V)$. It is a curve in the torus $\mathds R^2/\Gamma^\#$. The above definitions and the real Fermi curve have physical meaning. It is useful and interesting, however, to study the ``complexification'' of these curves. Knowledge about the complexified curves may provide information about the real counterparts. For complex-valued functions $A_1$, $A_2$ and $V$ in $L^2(\mathds R^2)$ and for $k \in \mathds C^2$ the above problem is no longer self-adjoint. Its spectrum, however, remains discrete. It is a sequence of eigenvalues in the complex plane. From the boundary condition in the original problem it is easy to see that the family of functions $k \mapsto E_n(k,A,V)$ remains periodic with respect to $\Gamma^\#$. Moreover, the transformation $U_k$ is no longer unitary but it is still bounded and invertible and it still preserves the spectrum, that is, we can still rewrite the original problem in the form $H_k(A,V) \psi = \lambda \psi$ for $\psi \in L^2(\mathds R^2/\Gamma)$ without modifying the eigenvalues. Thus, it makes sense to define \begin{align*} \widehat{\mathcal{F}}(A,V) & \coloneqq \{ k \in \mathds C^2 \; | \; H_k(A,V) \varphi = 0 \, \text{ for some } \varphi \in \mathcal{D}_{H_k(A,V)} \setminus \{0\} \}, \\ \mathcal{F}(A,V) & \coloneqq \widehat{\mathcal{F}}(A,V) / \Gamma^\#. \end{align*} We call $\widehat{\mathcal{F}}(A,V)$ and $\mathcal{F}(A,V)$ the complex ``lifted'' Fermi curve and the complex Fermi curve, respectively. When there is no risk of confusion we refer to either simply as Fermi curve. We are now ready to outline our results. When $A$ and $V$ are zero the (free) Fermi curve can be found explicitly. It consists of two copies of $\mathds{C}$ with the points $-b_2 + i b_1$ (in the first copy) and $b_2 + i b_1$ (in the second copy) identified for all $(b_1,b_2) \in \Gamma^\#$ with $b_2 \neq 0$. In this work we prove that in the region of $\mathds C^2$ where $k \in \mathds C^2$ has ``large'' imaginary part the Fermi curve (for nonzero $A$ and $V$) is ``close to'' the free Fermi curve. In a compact form, our main result (that will be stated precisely in Theorems \ref{t:reg} and \ref{t:hand}) is essentially the following. \vspace{0.3cm} \noindent {\bf Main result.} \emph{ Suppose that $A$ and $V$ have some regularity and assume that (in a suitable norm) $A$ is smaller than a constant given by the parameters of the problem. Write $k$ in $\mathds{C}^2$ as $k = u + iv$ with $u$ and $v$ in $\mathds{R}^2$ and suppose that $|v|$ is larger than a constant given by the parameters of the problem. (Recall that the free Fermi curve is two copies of $\mathds{C}$ with certain points in one copy identified with points in the other one.) 
Then, in this region of $\mathds{C}^2$, the Fermi curve of $A$ and $V$ is very close to the free Fermi curve, except that instead of two planes we may have two deformed planes, and identifications between points can open up to handles that look like $\{ (z_1,z_2) \in \mathds{C}^2 \; | \; z_1 z_2 = \text{constant} \}$ in suitable local coordinates.} \vspace{0.3cm} The proof of our results has basically three steps: \begin{itemize} \item We first derive very detailed information about the free Fermi curve (which is explicitly known). Then, to compute the interacting Fermi curve we have to find the kernel of $H$ in $L^2(\mathds R^2)$ with the above boundary conditions. \item In the second step of the proof we derive a number of estimates for showing that this kernel has finite dimension for small $A$ and $k \in \mathds C^2$ with large imaginary part. Our strategy here is similar to the Feshbach method in perturbation theory \cite{GS}. Indeed, we prove that in the complement of the kernel of $H$ in $L^2(\mathds R^2)$, after a suitable invertible change of variables in $L^2(\mathds R^2)$, the operator $H$ multiplied by the inverse of the operator that implements this change of variables is a compact perturbation of the identity that is invertible for such $A$ and $k$. This reduces the problem of finding the kernel to finite dimension and thus we can write local defining equations for the Fermi curve. \item In the third step of the proof we use these equations to study the Fermi curve. A few more estimates and the implicit function theorem gives us the deformed planes. The handles are obtained using a quantitative Morse lemma from \cite{deO} that is available in the Appendix \ref{s:morse}. \end{itemize} Steps two and three contain most of the novelties in this work. The critical part of the proof is the second step. The main difficulty arises due to the presence of the term $A \cdot i \nabla$ in the Hamiltonian $H(A,V)$. When $A$ is large, taking the imaginary part of $k \in \mathds C^2$ arbitrarily large is not enough to control this term---it is not enough to make its contribution small and hence have the interacting Fermi curve as a perturbation of the free Fermi curve. (The term $V$ in $H(A,V)$ is easily controlled by this method.) However, the proof can be implemented by assuming that $A$ is small. This work is organized as follows. In \S\ref{s:free} we collect some properties of the free Fermi curve and in \S\ref{s:tubes} we define $\varepsilon$-tubes about it. In \S\ref{s:main} we state our main results and in \S\ref{s:idea} we describe the general strategy of analysis used to prove them. Subsequently, we implement this strategy by proving a number of lemmas and propositions in \S\ref{s:inv} to \S\ref{s:der}, which we put together later in \S\ref{s:reg} and \S\ref{s:hand} to prove our main theorems. The proof of the estimates of \S\ref{s:coeff} and \S\ref{s:der} are left to the Appendices \ref{s:app2} and \ref{s:app3}. \vspace{0.3cm} {\bf Acknowledgments.} I would like to thank Professor Joel Feldman for suggesting this problem and for the many discussions I have had with him. I am also grateful to Alessandro Michelangeli for useful comments about the manuscript. This work is part of the author's Ph.D. thesis \cite{deO} defended at the University of British Columbia in Vancouver, Canada. \section{The free Fermi curve} \label{s:free} When the potentials $A$ and $V$ are zero the curve $\widehat{\mathcal{F}}(A,V)$ can be found explicitly. 
In this section we collect some properties of this curve. For $\nu \in \{1,2\}$ and $b \in \Gamma^\#$ set \begin{align*} N_{b,\nu}(k) & \coloneqq (k_1 + b_1) + i(-1)^\nu(k_2 + b_2), \\ \mathcal{N}_\nu(b) & \coloneqq \{ k \in \mathds C^2 \; | \; N_{b,\nu}(k)=0\},\\ N_b(k) & \coloneqq N_{b,1}(k) N_{b,2}(k), \\ \mathcal{N}_b & \coloneqq \mathcal{N}_1(b) \cup \mathcal{N}_2(b), \\ \theta_\nu(b) & \coloneqq \tfrac{1}{2} ((-1)^\nu b_2 + i b_1). \end{align*} Observe that $\mathcal{N}_\nu(b)$ is a line in $\mathds C^2$. The free lifted Fermi curve is an union of these lines. Here is the precise statement. \begin{prop}[The free Fermi curve] \label{p:free} The curve $\widehat{\mathcal{F}}(0,0)$ is the locally finite union $$ \bigcup_{b\in\Gamma^\#} \bigcup_{\nu\in\{1,2\}} \mathcal{N}_\nu(b).$$ In particular, the curve $\mathcal{F}(0,0)$ is a complex analytic curve in $\mathds C^2 / \Gamma^\#$. \end{prop} The proof of this proposition is straightforward. It can be found in \cite{deO}. Here we only give its first part. \begin{proof}[Proof of Proposition \ref{p:free} (first part)] For all $k \in \mathds C^2$ the functions $\{ e^{ib \cdot x} \; | \; b \in \Gamma^\# \}$ form a complete set of eigenfunctions for $H_k(0,0)$ in $L^2(\mathds R^2/\Gamma)$ satisfying $$ H_k(0,0) e^{ib \cdot x} = (i\nabla-k)^2 e^{ib \cdot x} = (b+k)^2 e^{ib \cdot x} = N_b(k) e^{ib \cdot x}. $$ Hence, $$ \widehat{\mathcal{F}}(0,0) = \{ k \in \mathds C^2 \; | \; N_b(k)=0 \, \text{ for some } b \in \Gamma^\# \} = \bigcup_{b \in \Gamma^\#} \mathcal{N}_b = \bigcup_{b \in \Gamma^\#} \bigcup_{\nu \in \{1,2\}} \mathcal{N}_\nu(b). $$ This is the desired expression for $\widehat{\mathcal{F}}(0,0)$. \end{proof} \begin{figure}[htb] \begin{center} \begin{tikzpicture}[scale=0.9] \draw (1.5,6.4) node {{\small $\mathcal{N}_2(0)$}}; \draw (0.5,3.9) node {{\small $\mathcal{N}_2(b)$}}; \draw (9.5,3.9) node {{\small $\mathcal{N}_1(-b)$}}; \draw (8.5,6.4) node {{\small $\mathcal{N}_1(0)$}}; \draw (5.9,6.8) node {{\small $\mathcal{N}_1(b)$}}; \draw (4,6.8) node {{\small $\mathcal{N}_2(-b)$}}; \draw[-,dashed] (15,0.5) arc (0:180:2cm and 0.3cm); \draw (15,0.5) arc (0:-180:2cm and 0.3cm); \draw[-latex] (13.35,-0.2) arc (-80:-15: 2cm and 0.3cm) node {$\qquad k_2$}; \draw (13,6) ellipse (2 and 0.3); \draw[-latex] (13,6)--(13,7.2); \draw (13,7.5) node {$i k_1$}; \draw (15,0.5)--(15,6) (11,0.5)--(11,6); \draw[-latex] (0,2.5)--(10,2.5); \draw (10.3,2.5) node {$k_2$}; \draw[-latex] (5,0)--(5,7.2); \draw (5,7.5) node {$i k_1$}; \draw[thick,dashed,domain=0:1.3,smooth,variable=\t] plot(15-1.5*\t*\t,1.35+\t); \draw[thick,domain=-1.3:0,smooth,variable=\t] plot(15-1.5*\t*\t,1.35+\t); \draw[thick,dashed,domain=0:1.3,smooth,variable=\t] plot(11+1.5*\t*\t,1.35+\t); \draw[thick,domain=-1.3:0,smooth,variable=\t] plot(11+1.5*\t*\t,1.35+\t); \draw[thick,dashed,domain=0:1.3,smooth,variable=\t] plot(15-1.5*\t*\t,5.15+\t); \draw[thick,domain=-1.3:0,smooth,variable=\t] plot(15-1.5*\t*\t,5.15+\t); \draw[thick,dashed,domain=0:1.3,smooth,variable=\t] plot(11+1.5*\t*\t,5.15+\t); \draw[thick,domain=-1.3:0,smooth,variable=\t] plot(11+1.5*\t*\t,5.15+\t); \draw[thick,domain=0:0.4,smooth,variable=\t] plot(15-5.1*\t*\t,3.25+\t); \draw[thick,dashed,domain=-0.4:0,smooth,variable=\t] plot(15-5.1*\t*\t,3.25+\t); \draw[thick,domain=0:0.4,smooth,variable=\t] plot(11+5.1*\t*\t,3.25+\t); \draw[thick,dashed,domain=-0.4:0,smooth,variable=\t] plot(11+5.1*\t*\t,3.25+\t); \draw[thick,dashed] (12.42,2.65)--(11.816,2.85); \draw[thick,dashed] (13.535,2.65)--(14.184,2.85); \draw[thick] 
(14.184,3.65)--(13.52,3.85); \draw[thick] (11.816,3.65)--(12.47,3.85); \clip (0.5,0) rectangle (9.5,6.5); \draw[-,dotted] (1.5,0)--(1.5,6.5) (3.25,0)--(3.25,6.5) (6.75,0)--(6.75,6.5) (8.5,0)--(8.5,6.5); \draw[-,thick] (6,0)--(10.5,4.5) (4,0)--(0,4) (2.5,0)--(8.5,6) (7.5,0)--(1.5,6) (10.5,0.5)--(4.5,6.5) (0,1)--(5.5,6.5); \end{tikzpicture} \end{center} \caption{Sketch of $\widehat{\mathcal{F}}(0,0)$ and $\mathcal{F}(0,0)$ when both $ik_1$ and $k_2$ are real.} \label{fig:free} \end{figure} The lines $\mathcal{N}_\nu(b)$ have the following properties (see \cite{deO} for a proof). \begin{prop}[Properties of $\mathcal{N}_\nu(b)$] \label{p:lines} Let $\nu \in \{1,2\}$ and let $b,c,d \in \Gamma^\#$. Then: \begin{itemize} \item[\rm (a)] $\mathcal{N}_\nu(b) \cap \mathcal{N}_\nu(c) = \varnothing \quad \text{if} \quad b \neq c \,$; \item[\rm (b)] $\text{\rm dist}(\mathcal{N}_\nu(b), \mathcal{N}_\nu(c)) = \tfrac{1}{\sqrt{2}} |b-c|$; \item[\rm (c)] $\mathcal{N}_1(b) \cap \mathcal{N}_2(c) = \{ ( i \theta_1(c) + i \theta_2(b), \, \theta_1(c) - \theta_2(b) ) \}$; \item[\rm (d)] the map $k \mapsto k+d$ maps $\mathcal{N}_\nu(b)$ to $\mathcal{N}_\nu(b-d)$; \item[\rm (e)] the map $k \mapsto k+d$ maps $\mathcal{N}_1(b) \cap \mathcal{N}_2(c)$ to $\mathcal{N}_1(b-d) \cap \mathcal{N}_2(c-d)$. \end{itemize} \end{prop} Let us briefly describe what the free Fermi curve looks like. In the Figure \ref{fig:free} there is a sketch of the set of $(k_1, k_2) \in \widehat{\mathcal{F}}(0,0)$ for which both $ik_1$ and $k_2$ are real, for the case where the lattice $\Gamma^\#$ has points over the coordinate axes, that is, it has points of the form $(b_1,0)$ and $(0,b_2)$. Observe that, in particular, Proposition \ref{p:lines} yields \begin{align*} \mathcal{N}_1(0) \cap \mathcal{N}_2(b) & = \{ (i\theta_1(b), \theta_1(b)) \},\\ \mathcal{N}_1(-b) \cap \mathcal{N}_2(0) & = \{ (i \theta_2(-b), \theta_2(b))\},\\ \text{the map } k \mapsto k+b \text{ maps } & \mathcal{N}_1(0) \cap \mathcal{N}_2(b) \text{ to } \mathcal{N}_1(-b) \cap \mathcal{N}_2(0). \end{align*} Recall that points in $\widehat{\mathcal{F}}(0,0)$ that differ by elements of $\Gamma^\#$ correspond to the same point in $\mathcal{F}(0,0)$. Thus, in the sketch on the left, we should identify the lines $k_2 = -b_2/2$ and $k_2 = b_2/2$ for all $b \in \Gamma^\#$ with $b_2 \neq 0$, to get a pair of helices climbing up the outside of a cylinder, as illustrated by the figure on the right. The helices intersect each other twice on each cycle of the cylinder---once on the front half of the cylinder and once on the back half. Hence, viewed as a ``manifold'' (with singularities), the pair of helices are just two copies of $\mathds R$ with points that corresponds to intersections identified. We can use $k_2$ as a coordinate in each copy of $\mathds R$ and then the pairs of identified points are $k_2 = b_2/2$ and $k_2 = -b_2/2$ for all $b \in \Gamma^\#$ with $b_2 \neq 0$. So far we have only considered $k_2$ real. The full $\widehat{\mathcal{F}}(0,0)$ is just two copies of $\mathds C$ with $k_2$ as a coordinate in each copy, provided we identify the points $\theta_1(b) = \tfrac{1}{2} (-b_2 + i b_1)$ (in the first copy) and $\theta_2(b) = \tfrac{1}{2} (b_2 + i b_1)$ (in the second copy) for all $b \in \Gamma^\#$ with $b_2 \neq 0$. \section{The $\varepsilon$-tubes about the free Fermi curve} \label{s:tubes} We now introduce real and imaginary coordinates in $\mathds C^2$ and define $\varepsilon$-tubes about the free Fermi curve. 
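Before doing so, the description of the free curve given above can be checked by hand or numerically. The short sketch below (an illustration only, written for the hypothetical square dual lattice $\Gamma^\#=\mathds Z^2$) evaluates the intersection formula of Proposition~\ref{p:lines}(c) and lists a few of the identified pairs $\theta_1(b)$, $\theta_2(b)$.
\begin{verbatim}
# Quick check of the free-curve picture (illustration only; Gamma^# = Z^2).
def theta(nu, b):          # theta_nu(b) = ((-1)^nu b_2 + i b_1)/2
    b1, b2 = b
    return 0.5 * ((-1) ** nu * b2 + 1j * b1)

def N(b, nu, k):           # N_{b,nu}(k) = (k_1 + b_1) + i(-1)^nu (k_2 + b_2)
    k1, k2 = k
    b1, b2 = b
    return (k1 + b1) + 1j * (-1) ** nu * (k2 + b2)

# Part (c) of the proposition above: N_1(b) and N_2(c) meet at the point
# (i theta_1(c) + i theta_2(b), theta_1(c) - theta_2(b)).
b, c = (2, 1), (-1, 3)
k_star = (1j * theta(1, c) + 1j * theta(2, b), theta(1, c) - theta(2, b))
print(abs(N(b, 1, k_star)), abs(N(c, 2, k_star)))   # both are 0

# Identified double points on the two copies of C (coordinate k_2):
for b in [(0, 1), (1, 1), (2, -1)]:
    print(b, theta(1, b), theta(2, b), theta(2, b) - theta(1, b))  # last entry is b_2
\end{verbatim}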
We derive some properties of the $\varepsilon$-tubes as well. For $k \in \mathds C^2$ write $$ k_1 = u_1 + i v_1 \qquad \text{and} \qquad k_2 = u_2 + iv_2, $$ where $u_1$, $u_2$, $v_1$ and $v_2$ are real numbers. Then, \begin{align*} N_{b,\nu}(k) & = (k_1 + b_1) + i(-1)^\nu(k_2 + b_2) \\ & = i(v_1 + (-1)^\nu(u_2 + b_2)) -(-1)^\nu (v_2 - (-1)^\nu (u_1+b_1)), \end{align*} so that $$ |N_{b,\nu}(k)| = |v + (-1)^\nu (u+b)^\perp |, $$ where $(y_1, y_2)^\perp \coloneqq (y_2, -y_1)$. Since $N_b(k) = N_{b,1}(k) N_{b,2}(k)$, we have $N_b(k)=0$ if and only if $$ v - (u+b)^\perp = 0 \qquad \text{or} \qquad v + (u+b)^\perp = 0. $$ Let $2 \Lambda$ be the length of the shortest nonzero ``vector'' in $\Gamma^\#$. Then there is at most one $b \in \Gamma^\#$ with $|v+(u+b)^\perp| < \Lambda$ and at most one $b \in \Gamma^\#$ with $|v-(u+b)^\perp| < \Lambda$ (see \cite{deO} for a proof). Let $\varepsilon$ be a constant satisfying $0 < \varepsilon < \Lambda/6$. For $\nu \in \{1,2\}$ and $b \in \Gamma^\#$ define the $\varepsilon$-tube about $\mathcal{N}_{\nu}(b)$ as $$ T_\nu(b) \coloneqq \{ k \in \mathds C^2 \;\; | \;\; |N_{b,\nu}(k)| = |v + (-1)^\nu (u+b)^\perp | < \varepsilon \}, $$ and the $\varepsilon$-tube about $\mathcal{N}_b = \mathcal{N}_1(b) \cup \mathcal{N}_2(b)$ as $$ T_b \coloneqq T_1(b) \cup T_2(b). $$ Since $(v+(u+b)^\perp) + (v-(u+b)^\perp) = 2v$, at least one of the factors $|v+(u+b)^\perp|$ or $|v-(u+b)^\perp|$ in $|N_b(k)|$ must always be greater or equal to $|v|$. If $k \not\in T_b$ both factors are also greater or equal to $\varepsilon$. If $k \in T_b$ one factor is bounded by $\varepsilon$ and the other must lie within $\varepsilon$ of $|2v|$. Thus, \begin{align} k \not\in T_b \quad & \Longrightarrow \quad |N_b(k)| \geq \varepsilon |v|, \label{bd1} \\ k \in T_b \quad & \Longrightarrow \quad |N_b(k)| \leq \varepsilon(2|v|+\varepsilon). \label{bd2} \end{align} Finally, denote by $\overline{T}_b$ the closure of $T_b$. The intersection $\overline{T}_b \cap \overline{T}_{b'}$ is compact whenever $b \neq b'$, and $\overline{T}_b \cap \overline{T}_{b'} \cap \overline{T}_{b''}$ is empty for all distinct elements $b, b', b'' \in \Gamma^\#$ (see \cite{deO} for details). If a point $k$ belongs to the free Fermi curve the function $N_b(k)$ vanishes for some $b \in \Gamma^\#$. We now give a lower bound for this function when $(b,k)$ is not in the zero set. \begin{prop}[Lower bound for $|N_b(k)|$] \label{p:Nb} $ \, $ \begin{itemize} \item[\rm (a)] If $|b+u+v^\perp| \geq \Lambda$ and $|b+u-v^\perp| \geq \Lambda$, then $|N_b(k)| \geq \frac{\Lambda}{2}(|v|+|u+b|)$. \item[\rm (b)] If $|v| > 2\Lambda$ and $k \in T_0$, then $|N_b(k)| \geq \frac{\Lambda}{2}(|v|+|u+b|)$ for all $b \neq 0$ but at most one $b \neq 0$. This exceptional $\tilde{b}$ obeys $|\tilde{b}| > |v|$ and $| \, |u+\tilde{b}|-|v| \, | < \Lambda$. \item[\rm (c)] If $|v| > 2\Lambda$ and $k \in T_0 \cap T_d$ with $d \neq 0$, then $|N_b(k)| \geq \frac{\Lambda}{2}(|v|+|u+b|)$ for all $b \not\in \{0, d\}$. Furthermore we have $|d| > |v|$ and $|\, |u+d|-|v| \, | < \Lambda$. \end{itemize} \end{prop} \begin{proof} (a) By hypothesis, both factors in $|N_b(k)| = |v + (u+b)^\perp | \, |v - (u+b)^\perp |$ are greater or equal to $\Lambda$. We now prove that at least one of the factors must also be greater or equal to $\frac{1}{2}(|v|+|u+b|)$. Suppose that $|v| \geq |u+b|$. Then, since $(v + (u+b)^\perp) + (v - (u+b)^\perp) = 2v$, at least one of the factors must also be greater or equal to $|v| = \frac{1}{2}(|v|+|v|) \geq \frac{1}{2}(|v|+|u+b|)$. 
Now suppose that $|v| < |u+b|$. Then similarly we prove that $|u+b| > \frac{1}{2}(|v|+|u+b|)$. All this together implies that $|N_b(k)| \ge \frac{\Lambda}{2}(|v|+|u+b|)$, which proves part (a). (b) By hypothesis $\varepsilon < \Lambda / 6 < |v|$. Let $k \in T_{0}$. Then, by \eqref{bd2}, \begin{equation} \label{partb} |N_{0}(k)| \leq \varepsilon(2|v|+\varepsilon) < 3 \varepsilon |v| < \frac{\Lambda}{2} |v|. \end{equation} Thus we have either $|u + v^\perp| < \Lambda$ or $|u - v^\perp| < \Lambda$ (otherwise apply part (a) to get a contradiction). Suppose that $|u + v^\perp| < \Lambda$. Then there is no $b \in \Gamma^\# \setminus \{0\}$ with $|b+u+v^\perp| < \Lambda$ and there is at most one $\tilde{b} \in \Gamma^\# \setminus \{0\}$ satisfying $|\tilde{b}+u-v^\perp| < \Lambda$. This inequality implies $| \, |u + \tilde{b}| - |v| \, | < \Lambda$. Furthermore, for this $\tilde{b}$, $$ |\tilde{b}| = |2v^\perp - (u+v^\perp) + (\tilde{b} + u - v^\perp)| > 2|v| - 2\Lambda > |v|, $$ since $-2\Lambda > -|v|$. Now suppose that $|u - v^\perp| < \Lambda$. Then similarly we prove that $|\tilde{b}| > |v|$. Finally observe that, if $b \not\in \{0,\tilde{b}\}$ then $|b+u+v^\perp| \geq \Lambda$ and $|b+u-v^\perp| \geq \Lambda$. Hence, applying part (a) it follows that $|N_b(k)| \geq \frac{\Lambda}{2}(|v|+|u+b|)$. This proves part (b). (c) As in the proof of part (b), if $k \in T_0 \cap T_d$ then in addition to \eqref{partb} we have $|N_{d}(k)| < \frac{\Lambda}{2} |v|$. Thus, applying part (b) we conclude that $d$ must be the exceptional $\tilde{b}$ of part (b). The statement of part (c) follows then from part (b). This completes the proof. \end{proof} \section{Main results} \label{s:main} The Riemann surfaces introduced in \cite{FKT2} can be decomposed into $$ X^{\rm com} \cup X^{\rm reg} \cup X^{\rm han}, $$ where $X^{\rm com}$ is a compact submanifold with smooth boundary and finite genus, $X^{\rm reg}$ is a finite union of open ``regular pieces'', and $X^{\rm han}$ is an infinite union of closed ``handles''. All these components satisfy a number of geometric/analytic hypotheses stated in \cite{FKT2} that specify the asymptotic holomorphic structure of the surface. Below we state two ``asymptotic'' theorems that essentially characterize the $X^{\rm reg}$ and $X^{\rm han}$ components of Fermi curves with small magnetic potential. Before we move to the theorems let us introduce some definitions. For any $\varphi \in L^2(\mathds R^2/\Gamma)$ define $\hat{\varphi} : \Gamma^\# \to \mathds C$ as $$ \hat{\varphi}(b) \coloneqq (\mathcal{F} \varphi)(b) \coloneqq \frac{1}{|\Gamma|} \int_{\mathds R^2/\Gamma} \varphi(x) \, e^{-i b \cdot x} \, d x, $$ where $|\Gamma| \coloneqq \int_{\mathds R^2/\Gamma} dx$. Then, $$ \varphi(x) = (\mathcal{F}^{-1} \hat{\varphi})(x) = \sum_{b \in \Gamma^\#} \hat{\varphi}(b) \, e^{i b \cdot x}, $$ $$ \| \varphi \|_{L^2(\mathds R^2 / \Gamma)} = |\Gamma|^{1/2} \| \hat{\varphi} \|_{l^2(\Gamma^\#)}. $$ Recall that $k = u + iv$ with $u,v \in \mathds R^2$, let $\rho$ be a positive constant, and set $$ \mathcal{K}_\rho \coloneqq \{k \in \mathds C^2 \;|\;\; |v|\le \rho \}.$$ Finally, consider the projection \begin{align*} pr \, : \, \mathds C^2 \, & \longrightarrow \, \mathds C, \\ (k_1,k_2) \, & \longmapsto \, k_2, \end{align*} and define $$ q \coloneqq (i\nabla \cdot A) + A^2 + V. $$ It is easy to construct a holomorphic map $E : \widehat{\mathcal{F}}(A,V) \to \mathcal{F}(A,V)$ \cite{deO}. The precise form of this map is irrelevant here. 
For our purposes it is enough to think of it simply as a ``projection'' (or ``exponential map''). We are ready to state our results. Clearly, the set $\mathcal{K}_\rho$ is invariant under the action of $\Gamma^\#$ and $\mathcal{K}_\rho / \Gamma^\#$ is compact. Hence, the image of $\widehat{\mathcal{F}}(A,V) \cap \mathcal{K}_\rho$ under the holomorphic map $E$ is compact in $\mathcal{F}(A,V)$. This image set will essentially play the role of $X^{\rm com}$ in the decomposition of $\mathcal{F}(A,V)$. Our first theorem characterizes the regular piece $X^{\rm reg}$ of $\mathcal{F}(A,V)$. \begin{thm}[The regular piece] \label{t:reg} Let $0 < \varepsilon < \Lambda/6$ and suppose that $A_1$, $A_2$ and $V$ are functions in $L^2(\mathds R^2/\Gamma)$ with $\| b^2 \hat{q}(b) \|_{l^1(\Gamma^\#)} < \infty$ and $\|(1+b^2) \hat{A}(b)\|_{l^1(\Gamma^\#\setminus \{0\})} < 2\varepsilon/63$. Then there is a constant $\rho = \rho_{\Lambda,\varepsilon,q,A}$ such that, for $\nu \in \{1,2\}$, the projection $pr$ induces a biholomorphic map between $$ \left( \widehat{\mathcal{F}}(A,V) \cap T_\nu(0) \right) \setminus \left( \mathcal{K}_\rho \cup \bigcup_{b \in \Gamma^\# \setminus \{0\}} T_b \right) $$ and its image in $\mathds C$. This image component contains $$ \Big \{ z \in \mathds C \,\, \Big| \,\, 8 |z| > \rho \, \text{ and } |z+(-1)^\nu \theta_\nu(b)| > \varepsilon \, \text{ for all } b \in \Gamma^\# \setminus \{0\} \Big \} $$ and is contained in $$ \Bigg \{ z \in \mathds C \,\, \Bigg | \,\, |z+(-1)^\nu\theta_\nu(b)| > \frac{1}{2} \left( \varepsilon - \frac{\varepsilon^2}{40 \Lambda} \right) \, \text{ for all } b \in \Gamma^\# \setminus \{0\} \Bigg \}, $$ where $\theta_{\nu}(b) = \frac{1}{2} ( (-1)^\nu b_2 + i b_1)$. Furthermore, \begin{align*} pr^{-1} \, : \, \text{{\rm Image(}pr{\rm)}} & \, \longrightarrow \, T_\nu(0),\\ y & \, \longmapsto \, (-\beta_2^{(1,0)} -i(-1)^\nu y - r(y),y), \end{align*} where $\beta_2^{(1,0)}$ is a constant given by \eqref{cte1} that depends only on $\rho$ and $A$, $$ |\beta^{(1,0)}_2| < \frac{\varepsilon^2}{100 \Lambda} \qquad \text{and} \qquad |r(y)| \leq \frac{\varepsilon^3}{50 \Lambda^2} + \frac{C}{\rho}, $$ where $C = C_{\Lambda,\varepsilon,q,A}$ is a constant. \end{thm} Now observe that, since $T_b + c = T_{b+c}$ for all $b, c \in \Gamma^\#$, the complement of $E \big(\widehat{\mathcal{F}}(A,V) \cap \mathcal{K}_\rho\big)$ in $\mathcal{F}(A,V)$ is the disjoint union of $$ E \Bigg( \big(\widehat{\mathcal{F}}(A,V) \cap T_0 \big) \setminus \Bigg(\mathcal{K}_\rho \cup \bigcup_{\substack{b \in \Gamma^\# \\ b_2 \neq 0}} T_b \Bigg) \Bigg) $$ and $$ \bigcup_{\substack{b \in \Gamma^\# \\ b_2 \neq 0}} E \big( \widehat{\mathcal{F}}(A,V) \cap T_0 \cap T_b \big). $$ Basically, the first of the two sets will be the regular piece of $\mathcal{F}(A,V)$, while the second set will be the handles. The map $\Phi$ parametrizing the regular part will be the composition of the map $E$ with the inverse of the map discussed in the above theorem. The detailed information about the handles $X^{\rm han}$ in $\mathcal{F}(A,V)$ comes from our second main theorem. \begin{thm}[The handles] \label{t:hand} Let $0 < \varepsilon < \Lambda/6$ and suppose that $A_1$, $A_2$ and $V$ are functions in $L^2(\mathds R^2/\Gamma)$ with $\| b^2 \hat{q}(b) \|_{l^1(\Gamma^\#)} < \infty$ and $\|(1+b^2) \hat{A}(b) \|_{l^1(\Gamma^\#\setminus \{0\})} < 2\varepsilon/63$. 
Then, for every sufficiently large constant $\rho$ and for every $d \in \Gamma^\# \setminus \{0\}$ with $2|d| > \rho$, there are maps \begin{align*} \phi_{d,1} \, & : \, \Big\{ (z_1,z_2) \in \mathds C^2 \; \Big| \;\; |z_1| \le \frac{\varepsilon}{2} \text{ and } |z_2| \le \frac{\varepsilon}{2} \Big\} \longrightarrow T_1(0) \cap T_2(d), \\ \phi_{d,2} \, & : \, \Big\{ (z_1,z_2) \in \mathds C^2 \; \Big| \;\; |z_1| \le \frac{\varepsilon}{2} \text{ and } |z_2| \le \varepsilon \Big\} \longrightarrow T_1(-d) \cap T_2(0), \end{align*} and a complex number $t_d$ with $|t_d| \leq \frac{C}{|d|^4}$ such that: \begin{itemize} \item[\rm (i)] For $\nu \in \{1,2\}$ the domain of the map $\phi_{d,\nu}$ is biholomorphic to its image, and the image contains $$ \Big\{ k \in \mathds C^2 \; \Big| \;\; |k_1+i(-1)^\nu k_2| \le \frac{\varepsilon}{8} \; \text{ and } \; |k_1+(-1)^{\nu+1} d_1 - i(-1)^\nu( k_2 + (-1)^{\nu+1} d_2)| \le \frac{\varepsilon}{8} \Big \}. $$ Furthermore, $$ D \hat{\phi}_{d,\nu} = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ -i(-1)^\nu & i(-1)^\nu \end{pmatrix} \left(I + O \left( \frac{1}{|d|^2} \right) \right) $$ and $$ \phi_{d,\nu}(0) = (i \theta_\nu(d), (-1)^{\nu+1} \theta_\nu(d)) + O \left( \frac{\varepsilon}{900} \right) + O \left( \frac{1}{\rho} \right). $$ \item[\rm (ii)] \begin{align*} \phi_{d,1}^{-1} (T_1(0) \cap T_2(d) \cap \widehat{\mathcal{F}}(A,V)) & = \Big\{ (z_1,z_2) \in \mathds C^2 \; \Big| \;\; z_1z_2 = t_d, \;\; |z_1| \le \frac{\varepsilon}{2} \; \text{ and } \; |z_2| \le \frac{\varepsilon}{2} \Big\}, \\ \phi_{d,2}^{-1} (T_1(-d) \cap T_2(0) \cap \widehat{\mathcal{F}}(A,V)) & = \Big\{ (z_1,z_2) \in \mathds C^2 \; \Big| \;\; z_1z_2 = t_d, \;\; |z_1| \le \frac{\varepsilon}{2} \; \text{ and } \; |z_2| \le \frac{\varepsilon}{2} \Big\}. \end{align*} \item[\rm (iii)] $$ \phi_{d,1}(z_1,z_2) = \phi_{d,2}(z_2,z_1) - d. $$ \end{itemize} \end{thm} These are the main results in this paper. In the next section we outline the strategy for proving them. The proofs are presented in the subsequent sections divided in many steps. \section{Strategy outline} \label{s:idea} Below we briefly describe the general strategy of analysis used to prove our results. We first introduce some notation and definitions. Observe that \begin{align*} H_k(A,V) \varphi & = ( (i \nabla + A - k)^2 + V ) \varphi \\ & = ( (i \nabla-k)^2 + 2A \cdot (i \nabla -k) + (i \nabla \cdot A) + A^2 + V ) \varphi, \end{align*} and write $$ H_k(A,V) = \Delta_k + h(k,A) + q(A,V) $$ with $$ \Delta_k \coloneqq (i \nabla - k)^2, \qquad h(k,A) \coloneqq 2 A \cdot (i \nabla - k) \qquad \text{and} \qquad q(A,V) \coloneqq (i\nabla \cdot A) + A^2 + V. $$ For each finite subset $G$ of $\Gamma^\#$ set \begin{align*} G' \coloneqq \Gamma^\# \setminus G \qquad & \text{and} \qquad \mathds C^2_G \coloneqq \mathds C^2 \setminus \bigcup_{b \in G'} \mathcal{N}_b, \\ L^2_G \coloneqq \overline{\text{span} \{ e^{ib \cdot x} \; | \; b \in G \}} \qquad & \text{and} \qquad L^2_{G'} \coloneqq \overline{\text{span} \{ e^{ib\cdot x} \; | \; b \in G' \}}. \end{align*} To simplify the notation write $L^2$ in place of $L^2(\mathds R^2/\Gamma)$. Let $I$ be the identity operator on $L^2$, and let $\pi_G$ and $\pi_{G'}$ be the orthogonal projections from $L^2$ onto $L^2_G$ and $L^2_{G'}$, respectively. Then, $$ L^2 = L^2_G \oplus L^2_{G'} \qquad \text{and} \qquad I = \pi_G + \pi_{G'}. $$ For $k \in \mathds C^2_G$ define the partial inverse $(\Delta_k)^{-1}_G$ on $L^2$ as $$ (\Delta_k)^{-1}_G \coloneqq \pi_G + \Delta_k^{-1} \pi_{G'}. 
$$ Its matrix elements are $$ \big( (\Delta_k)^{-1}_G \big)_{b,c} \coloneqq \left \langle \frac{e^{ib \cdot x}}{|\Gamma|^{1/2}}, (\Delta_k)^{-1}_G \frac{e^{ic \cdot x}}{|\Gamma|^{1/2}} \right\rangle_{\!\!L^2} = \begin{cases} \delta_{b,c} & \text{if} \quad c \in G, \\ \delta_{b,c} \frac{1}{N_c(k)} & \text{if} \quad c \not\in G, \end{cases} $$ where $b,c \in \Gamma^\#$. Here is the main idea. By definition, a point $k$ is in $\widehat{\mathcal{F}}(A,V)$ if $H_k(A,V)$ has a nontrivial kernel in $L^2$. Hence, to study the part of the curve in the intersection of $\cup_{d' \in G} T_{d'}$ with $\mathds C^2 \setminus \cup_{b \in G'} T_b$ for some finite subset $G$ of $\Gamma^\#$, it is natural to look for a nontrivial solution of $$ (\Delta_k + h + q) (\psi_G + \psi_{G'}) = 0, $$ where $\psi_G \in L^2_G$ and $\psi_{G'} \in L^2_{G'}$. Equivalently, if we make the following (invertible) change of variables in $L^2$, $$ (\psi_G + \psi_{G'}) = (\Delta_k)^{-1}_G ( \varphi_G + \varphi_{G'}), $$ where $\varphi_G \in L^2_G$ and $\varphi_{G'} \in L^2_{G'}$, we may consider the equation \begin{equation} \label{eq} (\Delta_k + h + q) \varphi_G + (I + (h + q) \Delta_k^{-1}) \varphi_{G'} = 0. \end{equation} The projections of this equation onto $L^2_{G'}$ and $L^2_G$ are, respectively, \begin{align} \pi_{G'} (h+q) \varphi_G + \pi_{G'}(I + (h + q) \Delta_k^{-1}) \varphi_{G'} & = 0, \label{projG'} \\ \pi_G (\Delta_k + h + q) \varphi_G + \pi_G (h + q) \Delta_k^{-1} \varphi_{G'} & = 0. \label{projG} \end{align} Now define $R_{G'G'}$ on $L^2$ as $$ R_{G'G'} \coloneqq \pi_{G'}(I + (h + q) \Delta_k^{-1}) \pi_{G'}. $$ Observe that $R_{G'G'}$ is the zero operator on $L^2_G$. Then, if $R_{G'G'}$ has a bounded inverse on $L^2_{G'}$, the equation \eqref{projG'} is equivalent to $$ \varphi_{G'} = - R_{G'G'}^{-1} \pi_{G'} (h+q) \varphi_G. $$ Substituting this into \eqref{projG} yields $$ \pi_G ( \Delta_k + h + q - (h + q) \Delta_k^{-1} R_{G'G'}^{-1} \pi_{G'} (h+q) ) \varphi_G = 0. $$ This equation has a nontrivial solution if and only if the (finite) $|G| \times |G|$ determinant $$ \det \, [ \, \pi_G ( \Delta_k + h + q - (h + q) \Delta_k^{-1} R_{G'G'}^{-1} \pi_{G'} (h+q) ) \pi_G \, ] = 0 $$ or, equivalently, expressing all operators as matrices in the basis $\{ |\Gamma|^{-1/2} \, e^{ib \cdot x} \; | \; b \in \Gamma^\# \}$, \begin{equation} \label{detGxG} \det \left[ N_{d'}(k) \delta_{d',d''} + w_{d',d''} - \sum_{b,c \in G'} \frac{w_{d',b}}{N_b(k)} (R_{G'G'}^{-1})_{b,c} w_{c,d''} \right]_{d',d'' \in G} = 0, \end{equation} where $$ w_{b,c} \coloneqq h_{b,c} + \hat{q}(b-c) = -2(c+k) \cdot \hat{A}(b-c) + \hat{q}(b-c). $$ Therefore, if $R_{G'G'}$ has a bounded inverse on $L^2_{G'}$---which is in fact the case under suitable conditions---in the region under consideration we can study the Fermi curve in detail using the (local) defining equation \eqref{detGxG}. \section{Invertibility of $R_{G'G'}$ } \label{s:inv} The following notation will be used whenever we consider vector-valued quantities. Let $\mathcal{X}$ be a Banach space and let $A, B \in \mathcal{X}^2$, where $A = (A_1, A_2)$ and $B = (B_1, B_2)$. Then, $$ \| A \|_\mathcal{X} \coloneqq (\| A_1 \|_\mathcal{X}^2 + \| A_2 \|_\mathcal{X}^2)^{1/2} \qquad \text{and} \qquad A \cdot B \coloneqq A_1 B_1 + A_2 B_2. $$ Furthermore, we will denote by $\| \, \cdot \, \|$ the operator norm on $L^2(\mathds R^2 / \Gamma)$. 
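It may be worth recording an elementary consistency check of the reduction described above (an illustration only; it is not used in what follows). If $A = V = 0$ then $h = q = 0$, so all the $w_{b,c}$ vanish and $R_{G'G'} = \pi_{G'}$ is trivially invertible on $L^2_{G'}$. In this case the defining equation \eqref{detGxG} reduces to
$$
\det \big[ \, N_{d'}(k) \, \delta_{d',d''} \, \big]_{d',d'' \in G} \, = \, \prod_{d' \in G} N_{d'}(k) \, = \, 0,
$$
which recovers the free Fermi curve in the region under consideration. The estimates of this section show that, for a weak magnetic potential and large $|v|$, the operator $R_{G'G'}$ is a small perturbation of this free situation.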
In general, for any $B,C \subset \Gamma^\#$ ($C$ such that $\Delta_k^{-1} \pi_C$ exists) define the operator $R_{BC}$ as \begin{align} R_{BC} \coloneqq & \, \pi_B (I + (h + q) \Delta_k^{-1}) \pi_C \notag \\ = & \, \pi_B \pi_C + \pi_B \, q \, \Delta_k^{-1} \pi_C + \pi_B (2A \cdot i \nabla) \Delta_k^{-1} \pi_C - \pi_B (2 k \cdot A) \Delta_k^{-1} \pi_C. \label{RBC} \end{align} Its matrix elements are \begin{equation} \label{RBCm} (R_{BC})_{b,c} = \delta_{b,c} + \frac{\hat{q}(b-c)}{N_c(k)} - \frac{2c \cdot \hat{A}(b-c)}{N_c(k)} - \frac{2 k \cdot \hat{A}(b-c)}{N_c(k)}, \end{equation} where $b \in B$ and $c \in C$. We first estimate the norm of the last three terms on the right hand side of \eqref{RBC}. We begin with the following proposition. \begin{prop} \label{p:bd1} Let $k \in \mathds C^2$ and let $B,C \subset \Gamma^\#$ with $C \subset \{ b \in \Gamma^\# \; | \; N_b(k) \neq 0 \}$. Then, \begin{align*} \| \pi_B \, q \, \Delta_k^{-1} \pi_C \| & \leq \| \hat{q} \|_{l^1} \sup_{c \in C} \frac{1}{|N_c(k)|}, \\ \| \pi_B (A \cdot i \nabla) \Delta_k^{-1} \pi_C \| & \leq \| \hat{A} \|_{l^1} \sup_{c \in C} \frac{|c|}{|N_c(k)|}, \\ \| \pi_B (k \cdot A) \Delta_k^{-1} \pi_C \| & \leq \| \hat{A} \|_{l^1} \, |k| \, \sup_{c \in C} \frac{1}{|N_c(k)|}. \end{align*} \end{prop} To prove this proposition we apply the following well-known inequality (see \cite{deO}). \begin{prop} \label{p:ineq} Consider a linear operator $T : L^2_C \to L^2_B$ with matrix elements $T_{b,c}$. Then, $$ \| T \| \leq \max \left\{ \sup_{c \in C} \, \sum_{b \in B} |T_{b,c}|, \; \sup_{b \in B} \, \sum_{c \in C} |T_{b,c}| \right\}. $$ \end{prop} \begin{proof}[Proof of Proposition \ref{p:bd1}] We only prove the first inequality. The proof of the other ones is similar. Write $T \coloneqq \pi_B \, q \, \Delta_k^{-1} \pi_C$. Then, in view of \eqref{RBC} and \eqref{RBCm}, \begin{align*} \sup_{c \in C} \sum_{b \in B} |T_{b,c}| & \leq \sup_{c \in C} \sum_{b \in B} \frac{|\hat{q}(b-c)|}{|N_c(k)|} \leq \sup_{c \in C} \frac{1}{|N_c(k)|} \| \hat{q} \|_{l^1}, \\ \sup_{b \in B} \sum_{c \in C} |T_{b,c}| & \leq \sup_{b \in B} \sum_{c \in C} \frac{|\hat{q}(b-c)|}{|N_c(k)|} \leq \sup_{c \in C} \frac{1}{|N_c(k)|} \| \hat{q} \|_{l^1}. \end{align*} By Proposition \ref{p:ineq}, these estimates yield the desired inequality. \end{proof} The key estimate for the existence of $R_{G'G'}^{-1}$ is given below. \begin{prop}[Estimate of $\| R_{SS} - \pi_{S} \|$] \label{p:bdR} Let $k \in \mathds C^2$ with $|u| \leq 2|v|$ and $|v| > 2 \Lambda$. Suppose that $S \subset \{ b \in \Gamma^\# \; | \;\; |N_b(k)| \geq \varepsilon |v| \}$. Then, \begin{equation} \label{key} \| R_{SS} - \pi_S \| \leq \| \hat{q} \|_{l^1} \frac{1}{\varepsilon |v|} + \frac{14}{\varepsilon} \| \hat{A} \|_{l^1}. \end{equation} \end{prop} If $A=0$, the right hand side of \eqref{key} can be made arbitrarily small for any $V$ by taking $|v|$ sufficiently large (recall that $q(0,V)=V$). If $A \neq 0$, however, we need to take $\| \hat{A} \|_{l^1}$ small to make that quantity less than 1. The term $ \tfrac{14}{\varepsilon} \| \hat{A}\|_{l^1}$ in \eqref{key} comes from the estimate we have for $ \| \pi_{G'} \, h \, \Delta_k^{-1} \pi_{G'} \|$. \begin{proof}[Proof of Proposition \ref{p:bdR}] By hypothesis, for all $b \in S$, \begin{equation} \label{b5} \frac{1}{|N_b(k)|} \leq \frac{1}{\varepsilon |v|}. \end{equation} We now show that, for all $b \in S$, \begin{equation} \label{b6} \frac{|b|}{|N_b(k)|} \leq \frac{4}{\varepsilon}. \end{equation} First suppose that $|b| \leq 4|v|$. 
Then,
$$
\frac{|b|}{|N_b(k)|} \leq \frac{4|v|}{\varepsilon |v|} = \frac{4}{\varepsilon}.
$$
Now suppose that $|b| \geq 4|v|$. Again, by hypothesis we have $|u| \leq 2|v|$ and $|v| > 2\Lambda > \varepsilon$. Hence,
$$
|v \pm (u+b)^\perp| \geq |b|-|u|-|v| \geq |b|-3|v| \geq |b|- \frac{3}{4} |b| = \frac{|b|}{4}.
$$
Consequently,
$$
\frac{|b|}{|N_b(k)|} = \frac{|b|}{|v+(u+b)^\perp| \, |v-(u+b)^\perp|} \leq |b| \frac{4}{|b|} \frac{4}{|b|} = \frac{16}{|b|} \leq \frac{4}{|v|} \leq \frac{4}{\varepsilon}.
$$
This proves \eqref{b6}. The expression for $R_{SS} - \pi_S$ is given by \eqref{RBC}. Observe that $|k| \leq |u|+|v| \leq 3|v|$. Then, applying Proposition \ref{p:bd1} and using \eqref{b5} and \eqref{b6} we obtain
\begin{align*}
\| R_{SS} - \pi_S \| & \leq ( 6 |v| \, \| \hat{A} \|_{l^1} + \| \hat{q} \|_{l^1} ) \sup_{c \in S} \frac{1}{|N_c(k)|} + 2 \| \hat{A} \|_{l^1} \sup_{c \in S} \frac{|c|}{|N_c(k)|} \\
& \leq ( 6 |v| \, \| \hat{A} \|_{l^1} + \| \hat{q} \|_{l^1} ) \frac{1}{\varepsilon |v|} + \frac{8}{\varepsilon} \| \hat{A} \|_{l^1} = \| \hat{q} \|_{l^1} \frac{1}{\varepsilon |v|} + \frac{14}{\varepsilon} \| \hat{A} \|_{l^1}.
\end{align*}
This is the desired inequality.
\end{proof}
From the last proposition it follows easily that $R_{SS}$ has a bounded inverse for large $|v|$ and weak magnetic potential.
\begin{lem}[Invertibility of $R_{SS}$] \label{l:invR}
Let $k \in \mathds C^2$,
$$
|u| \leq 2|v|, \qquad |v| > \max \left\{ 2 \Lambda, \, \| \hat{q} \|_{l^1} \frac{2}{\varepsilon} \right\}, \qquad \| \hat{q} \|_{l^1} < \infty \qquad \text{and} \qquad \| \hat{A} \|_{l^1} < \frac{2}{63} \varepsilon.
$$
Suppose that $S \subset \{ b \in \Gamma^\# \; | \; |N_b(k)| \geq \varepsilon |v| \}$. Then the operator $R_{SS}$ has a bounded inverse with
\begin{align*}
\| R_{SS} - \pi_S \| & < \| \hat{q} \|_{l^1} \frac{1}{\varepsilon |v|} + \| \hat{A} \|_{l^1} \frac{14}{\varepsilon} < \frac{17}{18}, \\
\| R_{SS}^{-1} - \pi_S \| & < 18 \| R_{SS} - \pi_S \|.
\end{align*}
\end{lem}
\begin{proof}
Write $R_{SS} = \pi_S + T$ with $T = R_{SS} - \pi_S$. Then, by Proposition \ref{p:bdR},
$$
\| T \| = \| R_{SS} - \pi_S \| \leq \| \hat{q} \|_{l^1} \frac{1}{\varepsilon |v|} + \| \hat{A} \|_{l^1} \frac{14}{\varepsilon} < \frac{1}{2} + \frac{4}{9} = \frac{17}{18} < 1.
$$
Hence, the Neumann series for $R_{SS}^{-1} = (\pi_S + T)^{-1}$ converges (and is a bounded operator). Furthermore,
\begin{align*}
\| R_{SS}^{-1} - \pi_S \| & = \| (\pi_S + T)^{-1} - \pi_S \| = \| (\pi_S + T)^{-1} - (\pi_S + T)^{-1} (\pi_S + T) \| \\
& = \| (\pi_S + T)^{-1} T \| \leq (1-\|T\|)^{-1} \| T \| < 18 \| R_{SS} - \pi_S \|,
\end{align*}
as was to be shown.
\end{proof}
Lemma \ref{l:invR} says that if $G$ is such that $G' \subset \{ b \in \Gamma^\# \; | \; |N_b(k)| \geq \varepsilon |v| \}$, then the operator $R_{G'G'}$ has a bounded inverse on $L^2_{G'}$ for $|u| \leq 2|v|$, large $|v|$, and weak magnetic potential. We are now able to write local defining equations for $\widehat{\mathcal{F}}(A,V)$ under such conditions.
\section{Local defining equations}
In this section we derive local defining equations for the Fermi curve. We begin with a simple proposition.
\begin{prop} \label{p:Gprime}
Suppose either {\rm (i)} or {\rm (ii)} or {\rm (iii)} where:
\begin{itemize}
\item[]
\begin{itemize}
\item[{\rm (i)}] $G = \{ 0 \}$ and $k \in T_0 \setminus \cup_{b \in \Gamma^\# \setminus \{0\} }T_b$;
\item[{\rm (ii)}] $G = \{ 0, d \}$ and $k \in T_0 \cap T_d$;
\item[{\rm (iii)}] $G = \varnothing$ and $k \in \mathds C^2 \setminus \cup_{b \in \Gamma^\#} T_b$.
\end{itemize} \end{itemize} Then $G' = \Gamma^\# \setminus G = \{ b \in \Gamma^\# \; | \;\; |N_b(k)| \geq \varepsilon |v| \}$. \end{prop} \begin{proof} The proposition follows easily if we observe that $G' = \Gamma^\# \setminus G$ and recall from \eqref{bd1} that \[ k \not\in T_b \quad \Longrightarrow \quad |N_b(k)| \geq \varepsilon |v|. \qedhere \] \end{proof} We now introduce some notation. Let $\mathcal{B}$ be a fundamental cell for $\Gamma^\# \subset \mathds R^2$ (see \cite[p~310]{RS4}). Then any vector $u \in \mathds R^2$ can be written as $u = \xi + \mathfrak{u}$ for some $\xi \in \Gamma^\#$ and $\mathfrak{u} \in \mathcal{B}$. Define \[ \alpha \coloneqq \sup \{ |\mathfrak{u}| \; | \; \mathfrak{u} \in \mathcal{B} \}, \qquad R \coloneqq \max \left\{ \alpha, \, 2\Lambda, \, \| \hat{q} \|_{l^1} \frac{2}{\varepsilon} \right \}, \qquad \mathcal{K}_R \coloneqq \{ k \in \mathds C^2 \; | \; |v| \leq R \}. \] We first show that in $\mathds C^2 \setminus \mathcal{K}_R$ the Fermi curve is contained in the union of $\varepsilon$-tubes about the free Fermi curve. \begin{prop}[$\widehat{\mathcal{F}}(A,V) \setminus \mathcal{K}_R$ is contained in the union of $\varepsilon$-tubes] $$ \widehat{\mathcal{F}}(A,V) \setminus \mathcal{K}_R \subset \bigcup_{b \in \Gamma^\#} T_b. $$ \end{prop} \begin{proof} Without loss of generality we may consider $k \in \mathds C^2$ with real part in $\mathcal{B}$. We now prove that any point outside the region $\mathcal{K}_R$ and outside the union of $\varepsilon$-tubes does not belong to $\widehat{\mathcal{F}}(A,V)$. Suppose that $k \in \mathds C^2 \setminus (\mathcal{K}_R \cup \bigcup_{b \in \Gamma^\#} T_b)$ and recall that $k$ is in $\widehat{\mathcal{F}}(A,V)$ if and only if \eqref{eq} has a nontrivial solution. If we choose $G = \varnothing$ then $G' = \Gamma^\#$ and this equation reads $$ R_{G'G'} \varphi_{G'} = 0. $$ By Proposition \ref{p:Gprime}(iii) we have $G' = \Gamma^\# = \{ b \in \Gamma^\# \; | \; |N_b(k)| \geq \varepsilon |v| \}$. Furthermore, since $u \in \mathcal{B}$ and $|v| > R \geq \alpha$, it follows that $|u| \leq \alpha < |v| < 2|v|$. Consequently, the operator $R_{G'G'}$ has a bounded inverse by Lemma \ref{l:invR}. Thus, the only solution of the above equation is $\varphi_{G'} = 0$. That is, there is no nontrivial solution of this equation and therefore $k \not\in \widehat{\mathcal{F}}(A,V)$. \end{proof} We are left to study the Fermi curve inside the $\varepsilon$-tubes. There are two types of regions to consider: intersections and non-intersections of tubes. To study non-intersections we choose $G = \{ 0 \}$ and consider the region $(T_0 \setminus \cup_{b \in \Gamma^\# \setminus \{0\}} T_b) \setminus \mathcal{K}_R$. For intersections we take $G = \{0, d\}$ for some $d \in \Gamma^\# \setminus \{0\}$ and consider $(T_0 \cap T_d) \setminus \mathcal{K}_R$. Observe that, since the tubes $T_b$ have the following translational property, $T_b + c = T_{b+c}$ for all $b,c \in \Gamma^\#$, and the curve $\widehat{\mathcal{F}}(A,V)$ is invariant under the action of $\Gamma^\#$, there is no loss of generality in considering only the two regions above. Any other part of the curve can be reached by translation. 
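As a brief illustration of this reduction (included for orientation only), suppose one wants to describe $\widehat{\mathcal{F}}(A,V)$ in the non-intersection region around some other tube $T_d$ with $d \in \Gamma^\#$. Since $\widehat{\mathcal{F}}(A,V)$ is invariant under translations by $\Gamma^\#$ and $T_b - d = T_{b-d}$, we have
$$
k \in \widehat{\mathcal{F}}(A,V) \cap \Big( T_d \setminus \bigcup_{b \in \Gamma^\# \setminus \{d\}} T_b \Big)
\quad \Longleftrightarrow \quad
k - d \in \widehat{\mathcal{F}}(A,V) \cap \Big( T_0 \setminus \bigcup_{b \in \Gamma^\# \setminus \{0\}} T_b \Big),
$$
so it suffices to carry out the analysis below for $G = \{0\}$ and translate back. The intersection regions $T_{d'} \cap T_{d''}$ are handled in the same way, by translating $d'$ to $0$.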
Recall that $G' = \Gamma^\# \setminus G$ and for $d',d'' \in G$ and $i,j \in \{1,2 \}$ set \begin{equation} \label{defBC} \begin{aligned} B_{ij}^{d'd''}(k;G) & \coloneqq -4 \sum_{b,c \in G'} \frac{\hat{A}_i(d'-b)}{N_b(k)} (R_{G'G'}^{-1})_{b,c} \, \hat{A}_j(c-d''), \\ C_{i}^{d'd''}(k;G) & \coloneqq -2 \hat{A}_i(d'-d'') + 2 \sum_{b,c \in G'} \frac{\hat{q}(d'-b) - 2b \cdot \hat{A}(d'-b)}{N_b(k)} (R_{G'G'}^{-1})_{b,c} \hat{A}_i(c-d'') \\ & \quad + 2 \sum_{b,c \in G'} \frac{\hat{A}_i(d'-b)}{N_b(k)} (R_{G'G'}^{-1})_{b,c} (\hat{q}(c-d'') - 2d'' \cdot \hat{A}(c-d'')), \\ C_{0}^{d'd''}(k;G) & \coloneqq \hat{q}(d'-d'') -2 d'' \cdot \hat{A}(d'-d'') \\ & \quad - \sum_{b,c \in G'} \frac{\hat{q}(d'-b) - 2b \cdot \hat{A}(d'-b)}{N_b(k)} (R_{G'G'}^{-1})_{b,c} (\hat{q}(c-d'') - 2d'' \cdot \hat{A}(c-d'')). \end{aligned} \end{equation} Then, \begin{align*} D_{d',d''}(k;G) & \coloneqq w_{d',d''} - \sum_{b,c \in G'} \frac{w_{d',b}}{N_b(k)} (R_{G'G'}^{-1})_{b,c} w_{c,d''} \\ & = B_{11}^{d'd''} k_1^2 + B_{22}^{d'd''} k_2^2 + (B_{12}^{d'd''} + B_{21}^{d'd''}) k_1 k_2 + C_1^{d'd''} k_1 + C_2^{d'd''} k_2 + C_0^{d'd''}. \end{align*} These functions have the following property. \begin{prop} \label{p:holom} For $d',d'' \in G$ and $i,j \in \{1,2\}$, the functions $B_{ij}^{d'd''}$, $C_i^{d'd''}$, $C_0^{d'd''}$ {\rm(}and consequently $D_{d',d''}${\rm)} are analytic on $(T_0 \setminus \cup_{b \in \Gamma^\# \setminus \{0\}} T_b) \setminus \mathcal{K}_R$ and $(T_0 \cap T_d) \setminus \mathcal{K}_R$ for $G=\{0\}$ and $G=\{0,d\}$, respectively. \end{prop} \begin{proof}[Sketch of the proof] It suffices to show that $B_{ij}^{d'd''}$, $C_i^{d'd''}$ and $C_0^{d'd''}$ are analytic functions. This property follows from the fact that all the series involved in the definition of these functions are uniformly convergent sums of analytic functions. The argument is similar for all cases. See \cite{deO} for details. \end{proof} Using the above functions we can write (local) defining equations for the Fermi curve. \begin{lem}[Local defining equations for $\widehat{\mathcal{F}}(A,V)$] \label{l:eqn} $\,$ \begin{itemize} \item[{\rm (i)}] Let $G = \{0\}$ and $k \in (T_0 \setminus \cup_{b \in \Gamma^\# \setminus \{0\}} T_b) \setminus \mathcal{K}_R$. Then $k \in \widehat{\mathcal{F}}(A,V)$ if and only if $$ N_0(k) + D_{0,0}(k) = 0. $$ \item[{\rm (ii)}] Let $G=\{0,d\}$ and $k \in (T_0 \cap T_d) \setminus \mathcal{K}_R$. Then $k \in \widehat{\mathcal{F}}(A,V)$ if and only if $$ (N_0(k) + D_{0,0}(k))(N_d(k) + D_{d,d}(k)) - D_{0,d}(k)D_{d,0}(k) = 0. $$ \end{itemize} \end{lem} \begin{proof} We only prove part (i). The proof of part (ii) is similar. First, by Proposition \ref{p:Gprime}(i) we have $G' = \Gamma^\# \setminus \{0\} = \{ b \in \Gamma^\# \; | \; |N_b(k)| \geq \varepsilon |v| \}$. Furthermore, since $k \in T_0$, we have either $|v-u^\perp|<\varepsilon$ or $|v+u^\perp|<\varepsilon$. In either case this implies $|u| < \varepsilon + |v| < 2\Lambda + |v| < 2|v|$. Hence, the operator $R_{G'G'}$ has a bounded inverse by Lemma \ref{l:invR}. Thus, in the region under consideration $\widehat{\mathcal{F}}(A,V)$ is given by \eqref{detGxG}: $$ 0 = N_0(k) + w_{0,0} - \sum_{b,c \in G'} \frac{w_{0,b}}{N_b(k)} (R_{G'G'}^{-1})_{b,c} w_{c,0} = N_0(k) + D_{0,0}(k). $$ This is the desired expression. \end{proof} To study in detail the defining equations above we shall estimate the asymptotic behaviour of the functions $B_{ij}^{d'd''}$, $C_i^{d'd''}$, $C_0^{d'd''}$ and $D_{d',d''}$ for large $|v|$. (We sometimes refer to these functions as coefficients.) 
Since all these functions have a similar form it is convenient to prove these estimates in a general setting and specialize them later. This is the contents of \S \ref{s:coeff} and \S \ref{s:der}. We next introduce a change of variables in $\mathds C^2$ that will be useful for proving these bounds. \section{Change of coordinates} \label{s:change} Define the (complementary) index $\nu'$ as $\nu' \coloneqq \nu - (-1)^\nu$. Observe that $\nu' = 2$ if $\nu = 1$, $\nu' = 1$ if $\nu = 2$, and $(-1)^\nu = -(-1)^{\nu'}$. The following change of coordinates in $\mathds C^2$ will be useful for our analysis. For $\nu \in \{1,2\}$ and $d',d'' \in G$ define the functions $w_{\nu,d'}, \, z_{\nu,d'} : \mathds C^2 \to \mathds C$ as \begin{equation} \label{change} \begin{aligned} w_{\nu,d'}(k) & \coloneqq k_1+d'_1 + i(-1)^\nu (k_2+d'_2), \\ z_{\nu,d'}(k) & \coloneqq k_1+d'_1 - i(-1)^\nu (k_2+d'_2). \end{aligned} \end{equation} Observe that, the transformation $(k_1,k_2) \mapsto (w_{\nu,d'}, z_{\nu,d'})$ is just a translation composed with a rotation. Furthermore, if $k \in T_\nu(d') \setminus \mathcal{K}_R$ then $|w_{\nu,d'}(k)|$ is ``small'' and $|z_{\nu,d'}(k)|$ is ``large''. Indeed, $|w_{\nu,d'}(k)| = |N_{d',\nu}(k)| < \varepsilon$ and $|z_{\nu,d'}(k)| = |N_{d',\nu'}(k)| \geq |v| > R$. Define also \begin{align*} J^{d'd''}_\nu & \coloneqq \tfrac{1}{4} (B_{11}^{d'd''} - B_{22}^{d'd''} + i(-1)^\nu (B_{12}^{d'd''} + B_{21}^{d'd''})), \\ K^{d'd''} & \coloneqq \tfrac{1}{2} (B_{11}^{d'd''} + B_{22}^{d'd''}),\\ L^{d'd''}_\nu & \coloneqq -d_1' B_{11}^{d'd''} - i(-1)^\nu d_2' B_{22}^{d'd''} - \tfrac{1}{2}(d_2' + i(-1)^\nu d_1') (B_{12}^{d'd''} + B_{21}^{d'd''}) \\ & \qquad + \tfrac{1}{2}( C_1^{d'd''} + i(-1)^\nu C_2^{d'd''}), \\ M^{d'd''} & \coloneqq d_1'^2 B_{11}^{d'd''} + d_2'^2 B_{22}^{d'd''} + d_1' d_2' (B_{12}^{d'd''} + B_{21}^{d'd''}) -d_1' C_1^{d'd''} - d_2' C_2^{d'd''} + C_0^{d'd''}, \end{align*} where $J^{d'd''}_\nu$, $K^{d'd''}$, $L^{d'd''}_\nu$ and $M^{d'd''}$ are functions of $k \in \mathds C^2$ that also depend on the choice of $G \subset \Gamma^\#$. Using these functions we can express $N_{d'}(k) + D_{d',d'}(k)$ and $D_{d',d''}(k)$ as follows. \begin{prop} \label{p:newv} Let $\nu \in \{1,2\}$ and let $d',d'' \in G$. Then, \begin{align*} N_{d'} + D_{d',d'} & = J^{d'd'}_{\nu'} w_{\nu,d'}^2 + J^{d'd'}_\nu z_{\nu,d'}^2 + (1 + K^{d'd'}) w_{\nu,d'} z_{\nu,d'} + L^{d'd'}_{\nu'} w_{\nu,d'} + L^{d'd'}_\nu z_{\nu,d'} + M^{d'd'}, \\ D_{d',d''} & = J^{d'd''}_{\nu'} w_{\nu,d'}^2 + J^{d'd''}_\nu z_{\nu,d'}^2 + K^{d'd''} w_{\nu,d'} z_{\nu,d'} + L^{d'd''}_{\nu'} w_{\nu,d'} + L^{d'd''}_\nu z_{\nu,d'} + M^{d'd''}. \end{align*} Furthermore, {\allowdisplaybreaks \begin{align*} J^{d'd''}_\nu(k) & = - \sum_{b,c \in G'} \frac{(1,-i(-1)^\nu) \cdot \hat{A}(d'-b)}{N_b(k)} (R_{G'G'}^{-1})_{b,c} \, (1,-i(-1)^\nu) \cdot \hat{A}(c-d''), \\ K^{d'd''}(k) & = - 2 \sum_{b,c \in G'} \frac{\hat{A}(d'-b) \cdot \hat{A}(c-d'')}{N_b(k)} (R_{G'G'}^{-1})_{b,c}, \\ L^{d'd''}_\nu(k) & = \sum_{b,c \in G'} \frac{\hat{q}(d'-b) + 2(d'-b) \cdot \hat{A}(d'-b)}{N_b(k)} (R_{G'G'}^{-1})_{b,c} (1,i(-1)^\nu) \cdot \hat{A}(c-d'')\\ & \quad + \sum_{b,c \in G'} \frac{(1,i(-1)^\nu) \cdot \hat{A}(d'-b)}{N_b(k)} (R_{G'G'}^{-1})_{b,c} \, (\hat{q}(c-d'') + 2 (d'-d'') \cdot \hat{A}(c-d'')) \\ & \quad - (1,i(-1)^\nu) \cdot \hat{A}(d'-d''), \\ M^{d'd''}(k) & = - \sum_{b,c \in G'} \frac{\hat{q}(d'-b) + 2(d'-b) \cdot \hat{A}(d'-b)}{N_b(k)} (R_{G'G'}^{-1})_{b,c} \, \hat{q}(c-d'')\\ & \quad + \hat{q}(d'-d'') + 2 (d'-d'') \cdot \hat{A}(d'-d''). 
\end{align*}
}
\end{prop}
\begin{proof}
To simplify the notation write $w = w_{\nu,d'}$, $z = z_{\nu,d'}$, $B_{ij} = B_{ij}^{d'd''}$ and $C_i = C_i^{d'd''}$. First observe that, in view of \eqref{change},
$$
N_{d'} = (k_1+d_1' + i(-1)^\nu(k_2+d_2')) (k_1+d_1' - i(-1)^\nu(k_2+d_2')) = wz.
$$
Furthermore,
\begin{align*}
k_1 & = \tfrac{1}{2}(w+z)-d_1', \\
k_2 & = \tfrac{(-1)^\nu}{2i}(w-z)-d_2', \\
k_1^2 & = \tfrac{1}{4} (w^2+z^2) + \tfrac{1}{2}wz -d_1'(w+z)+d_1'^2,\\
k_2^2 & = -\tfrac{1}{4} (w^2+z^2) + \tfrac{1}{2}wz + i(-1)^\nu d_2'(w-z) + d_2'^2, \\
k_1 k_2 & = \tfrac{i(-1)^\nu}{4} (z^2 - w^2) - \tfrac{1}{2} (d_2' - i(-1)^\nu d_1') w - \tfrac{1}{2} (d_2' + i(-1)^\nu d_1') z + d_1'd_2'.
\end{align*}
Hence,
\begin{align*}
D_{d',d''} & = B_{11} k_1^2 + B_{22} k_2^2 + (B_{12} + B_{21}) k_1 k_2 + C_1 k_1 + C_2 k_2 + C_0 \\
& = \tfrac{1}{4}(B_{11} - B_{22} -i(-1)^\nu(B_{12} +B_{21})) w^2 + \tfrac{1}{4} (B_{11} - B_{22} + i(-1)^\nu(B_{12} + B_{21})) z^2\\
& \quad + \big( -d_1' B_{11} + i(-1)^\nu d_2' B_{22} - \tfrac{1}{2}(d_2' - i(-1)^\nu d_1')(B_{12} + B_{21}) + \tfrac{1}{2} (C_1 - i(-1)^\nu C_2) \big ) w \\
& \quad + \big( -d_1' B_{11} - i(-1)^\nu d_2' B_{22} - \tfrac{1}{2}(d_2' + i(-1)^\nu d_1')(B_{12} + B_{21}) + \tfrac{1}{2} (C_1 + i(-1)^\nu C_2) \big) z \\
& \quad + d_1'^2 B_{11} + d_2'^2 B_{22} + d_1' d_2' (B_{12} + B_{21}) -d_1' C_1 - d_2' C_2 + C_0 + \tfrac{1}{2}(B_{11}+B_{22}) wz \\
& = J^{d'd''}_{\nu'} w^2 + J^{d'd''}_\nu z^2 + K^{d'd''} w z + L^{d'd''}_{\nu'} w + L^{d'd''}_\nu z + M^{d'd''}.
\end{align*}
This proves the first claim. Consequently,
$$
N_{d'} + D_{d',d'} = J^{d'd'}_{\nu'} w^2 + J^{d'd'}_\nu z^2 + (1+K^{d'd'}) w z + L^{d'd'}_{\nu'} w + L^{d'd'}_\nu z + M^{d'd'},
$$
which proves the second claim. Now, again to simplify the notation write
$$
f g = \sum_{b,c \in G'} \frac{\hat{f}(b,d')}{N_b(k)} (R_{G'G'}^{-1})_{b,c} \, \hat{g}(c,d''),
$$
that is, to represent sums of this form we suppress the summation and the remaining factors. Note that $f g \neq g f$ according to this notation. Then, substituting \eqref{defBC} into the definition of $J^{d'd''}_\nu$ we have
\begin{align*}
J^{d'd''}_\nu & = \tfrac{1}{4} (B_{11} - B_{22} + i(-1)^\nu (B_{12} + B_{21})) = -A_1A_1 + A_2A_2 - i(-1)^\nu (A_1A_2+A_2A_1) \\
& = (A_1 -i(-1)^\nu A_2) ( -A_1 + i(-1)^\nu A_2) = - ((1,-i(-1)^\nu) \cdot A)\, ((1,-i(-1)^\nu) \cdot A ) \\
& = - \sum_{b,c \in G'} \frac{(1,-i(-1)^\nu) \cdot \hat{A}(d'-b)}{N_b(k)} (R_{G'G'}^{-1})_{b,c} \, (1,-i(-1)^\nu) \cdot \hat{A}(c-d'').
\end{align*}
Similarly, substituting \eqref{defBC} into the definitions of $K^{d'd''}$, $L^{d'd''}_\nu$ and $M^{d'd''}$ we derive the other expressions.
\end{proof}
\section{Asymptotics for the coefficients}
\label{s:coeff}
Let $f$ and $g$ be functions on $\Gamma^\#$ and for $k \in \mathds C^2$ and $d',d'' \in G$ set
\begin{equation}
\label{Phi}
\Phi_{d',d''}(k;G) \coloneqq \sum_{b,c \in G'} \frac{f(d'-b)}{N_b(k)} (R_{G'G'}^{-1})_{b,c} \, g(c-d'').
\end{equation}
In this section we study the asymptotic behaviour of the function $\Phi_{d',d''}(k)$ for $k$ in the union of $\varepsilon$-tubes with large $|v|$. Here we only give the statements. See Appendix \ref{s:app2} for the proofs. Reset the constant $R$ as
\begin{equation}
\label{R1}
R \coloneqq \max \left\{1, \, \alpha, \, 2\Lambda, \, 140 \| \hat{A} \|_{l^1}, \, \| (1+b^2)\hat{q}(b) \|_{l^1} \frac{4}{\varepsilon} \right\},
\end{equation}
and make the following hypothesis.
\begin{hyp} \label{h:small}
$$
\| b^2 \hat{q}(b) \|_{l^1} < \infty \qquad \text{and} \qquad \| (1+b^2) \hat{A}(b) \|_{l^1} < \frac{2}{63} \varepsilon.
$$
\end{hyp}
Our first lemma provides an expansion for $\Phi_{d',d'}(k)$ ``in powers of $1/|z_{\nu,d'}(k)|$''.
\begin{lem}[Asymptotics for $\Phi_{d',d'}(k)$] \label{l:b1}
Under Hypothesis \ref{h:small}, let $\nu \in \{1,2\}$ and let $f$ and $g$ be functions on $\Gamma^\#$ with $\| b^2 f(b) \|_{l^1} < \infty$ and $\| b^2 g(b) \|_{l^1} < \infty$. Suppose either {\rm (i)} or {\rm (ii)} where:
\begin{itemize}
\item[]
\begin{itemize}
\item[\rm (i)] $G=\{0\}$ and $k \in (T_\nu(0) \setminus \cup_{b \in G'} T_b) \setminus \mathcal{K}_R$;
\item[\rm (ii)] $G=\{0, d\}$ and $k \in (T_\nu(0) \cap T_{\nu'}(d)) \setminus \mathcal{K}_R$.
\end{itemize}
\end{itemize}
Then, for $(\mu,d') = (\nu,0)$ if {\rm (i)} or $(\mu,d') \in \{ (\nu,0), (\nu',d) \}$ if {\rm (ii)},
$$
\Phi_{d',d'}(k) = \alpha_{\mu,d'}^{(1)}(k) + \alpha_{\mu,d'}^{(2)}(k) + \alpha_{\mu,d'}^{(3)}(k),
$$
where for $1 \le j \le 2$,
$$
|\alpha_{\mu,d'}^{(j)}(k)| \leq \frac{C_j}{(2|z_{\mu,d'}(k)|-R)^j} \qquad \text{and} \qquad |\alpha_{\mu,d'}^{(3)}(k)| \leq \frac{C_3}{|z_{\mu,d'}(k)| R^2},
$$
where $C_j = C_{j;\Lambda,A,q,f,g}$ and $C_3 = C_{3;\varepsilon,\Lambda,A,q,f,g}$ are constants. Furthermore, the functions $\alpha_{\mu,d'}^{(j)}(k)$ are given by \eqref{alpha} and \eqref{alpha3} and are analytic in the region under consideration.
\end{lem}
Below we have more information about the function $\alpha_{\mu,d'}^{(1)}(k)$.
\begin{lem}[Asymptotics for $\alpha_{\mu,d'}^{(1)}(k)$] \label{l:b1b}
Assume the same hypotheses as in Lemma \ref{l:b1}. Then, for $(\mu,d') = (\nu,0)$ if {\rm (i)} or $(\mu,d') \in \{ (\nu,0), (\nu',d) \}$ if {\rm (ii)},
$$
z_{\mu,d'}(k) \, \alpha_{\mu,d'}^{(1)}(k) = \alpha^{(1,0)}_{\mu,d'} + \alpha^{(1,1)}_{\mu,d'}(w(k)) + \alpha^{(1,2)}_{\mu,d'}(k) + \alpha^{(1,3)}_{\mu,d'}(k),
$$
where $\alpha^{(1,0)}_{\mu,d'}$ is a constant given by \eqref{a10}, and the remaining functions $\alpha^{(1,j)}_{\mu,d'}$ are given by \eqref{a1j}. Furthermore, for $0 \le j \le 2$,
$$
|\alpha_{\mu,d'}^{(1,j)}| \leq C_j \qquad \text{and} \qquad |\alpha_{\mu,d'}^{(1,3)}| \leq \frac{C_3}{2|z_{\mu,d'}(k)|-R},
$$
where $C_j = C_{j;\Lambda,A,f,g}$ and $C_3 = C_{3;\Lambda,A,f,g}$ are constants given by \eqref{cts}.
\end{lem}
The next lemma estimates the decay of $\Phi_{d',d''}(k)$ with respect to $z_{\nu',d}(k)$ for $d' \neq d''$.
\begin{lem}[Decay of $\Phi_{d',d''}(k)$ for $d'\neq d''$] \label{l:b2}
Under Hypothesis \ref{h:small}, let $\nu \in \{1,2\}$ and let $f$ and $g$ be functions on $\Gamma^\#$ with $\| b^2 f(b) \|_{l^1} < \infty$ and $\| b^2 g(b) \|_{l^1} < \infty$. Suppose further that $G=\{0, d\}$ and $k \in (T_\nu(0) \cap T_{\nu'}(d)) \setminus \mathcal{K}_R$. Then, for $d',d'' \in G$ with $d' \neq d''$,
$$
|\Phi_{d',d''}(k)| \le \frac{C_{\Gamma^\#,\varepsilon,f,g}}{|z_{\nu',d}(k)|^{3-10^{-1}}},
$$
where $C_{\Gamma^\#,\varepsilon,f,g}$ is a constant.
\end{lem}
The next proposition relates the quantities $|v|$, $|k_2|$, $|z_{\nu,d'}(k)|$ and $|d|$ for $k$ in the $\varepsilon$-tubes with large $|v|$.
\begin{prop} \label{p:order}
For $\nu \in \{1,2\}$ we have:
\begin{itemize}
\item[{\rm (i)}] Let $k \in T_\nu(0) \setminus \mathcal{K}_R$. Then,
$$
\frac{1}{|z_{\nu,0}(k)|} \leq \frac{1}{|v|} \leq \frac{3}{|z_{\nu,0}(k)|} \qquad \text{and} \qquad \frac{1}{4|v|} \leq \frac{1}{|k_2|} \leq \frac{8}{|v|}.
$$
\item[{\rm (ii)}] Let $k \in (T_\nu(0) \cap T_{\nu'}(d)) \setminus \mathcal{K}_R$.
Then, $$ \frac{1}{|z_{\nu,0}(k)|} \leq \frac{1}{|v|} \leq \frac{3}{|z_{\nu,0}(k)|}, \qquad \frac{1}{|z_{\nu',d}(k)|} \leq \frac{1}{|v|} \leq \frac{3}{|z_{\nu',d}(k)|}, $$ $$ \frac{1}{2|z_{\nu',d}(k)|} \leq \frac{1}{|d|} \leq \frac{2}{|z_{\nu',d}(k)|}. $$ \end{itemize} \end{prop} \section{Bounds on the derivatives} \label{s:der} In the last section we expressed $\Phi_{d',d''}(k)$ as a sum of certain functions $\alpha_{\mu,d'}^{(j)}(k)$ for $k$ in the $\varepsilon$-tubes with large $|v|$. In this section we provide bounds for the derivatives of all these functions. Here we only give the statements. See Appendix \ref{s:app3} for the proofs. Our first lemma concerns the derivatives of $\Phi_{d',d''}(k)$. \begin{lem}[Derivatives of $\Phi_{d',d''}(k)$] \label{l:der} Under Hypothesis \ref{h:small}, let $f$ and $g$ be functions in $l^1(\Gamma^\#)$ and suppose either {\rm (i)} or {\rm (ii)} where: \begin{itemize} \item[] \begin{itemize} \item[\rm (i)] $G=\{0\}$ and $k \in (T_0 \setminus \cup_{b \in G'} T_b) \setminus \mathcal{K}_R$; \item[\rm (ii)] $G=\{0, d\}$ and $k \in (T_0 \cap T_d) \setminus \mathcal{K}_R$. \end{itemize} \end{itemize} Then, for any integers $n$ and $m$ with $n+m \geq 1$ and for any $d',d'' \in G$, $$ \left| \frac{\partial^{n+m}}{\partial k_1^n \partial k_2^m} \Phi_{d',d''}(k) \right| \leq \frac{C}{|v|}, $$ where $C$ is a constant with $C = C_{\varepsilon,\Lambda, A, f,g,m,n}$ if {\rm (i)} or $C = C_{\Lambda,A,f,g,m,n}$ if {\rm (ii)}. \end{lem} We now improve the estimate of Lemma \ref{l:der}(ii) for $d'\neq d''$. \begin{lem}[Derivatives of $\Phi_{d',d''}(k)$ for $d' \neq d''$] \label{l:derii} Consider a constant $\beta \ge 2$ and suppose that $\| |b|^\beta \hat{q}(b) \|_{l^1} < \infty$ and $\| (1+|b|^\beta) \hat{A}(b) \|_{l^1} < 2\varepsilon/63$. Let $\nu \in \{1,2\}$ and let $f$ and $g$ be functions on $\Gamma^\#$ obeying $\| |b|^\beta f(b) \|_{l^1} < \infty$ and $\| |b|^\beta g(b) \|_{l^1} < \infty$. Suppose further that $G=\{0, d\}$ and $k \in T_0 \cap T_d$ with $|v| > \frac{2}{\varepsilon} \| |b|^\beta \hat{q}(b) \|_{l^1}$. Then, for any integers $n$ and $m$ with $n+m \geq 0$ and for any $d',d'' \in G$ with $d' \neq d''$, $$ \left| \frac{\partial^{n+m}}{\partial k_1^n \partial k_2^m} \Phi_{d',d''}(k) \right| \le \frac{C}{|d|^{1+\beta}}, $$ where $C = C_{\varepsilon,\Lambda, A, f,g,m,n}$ is a constant. \end{lem} Observe that, in particular, this lemma with $m=n=0$ generalizes Lemma \ref{l:b2}. We next have bounds for the derivatives of $\alpha_{\mu,d'}^{(j)}(k)$. \begin{lem}[Derivatives of $\alpha_{\mu,d'}^{(j)}(k)$] \label{l:der2} Under Hypothesis \ref{h:small}, let $\nu \in \{1,2\}$ and let $f$ and $g$ be functions in $l^1(\Gamma^\#)$. Suppose either {\rm (i)} or {\rm (ii)} where: \begin{itemize} \item[] \begin{itemize} \item[\rm (i)] $G=\{0\}$ and $k \in (T_\nu(0) \setminus \cup_{b \in G'} T_b) \setminus \mathcal{K}_R$; \item[\rm (ii)] $G=\{0, d\}$ and $k \in (T_\nu(0) \cap T_{\nu'}(d)) \setminus \mathcal{K}_R$. 
\end{itemize} \end{itemize} Then, there is a constant $\rho = \rho_{\varepsilon,A,q,m,n}$ with $\rho \geq R$ such that, for $|v| \geq \rho$ and for $(\mu,d') = (\nu,0)$ if {\rm (i)} or $(\mu,d') \in \{ (\nu,0), (\nu',d) \}$ if {\rm (ii)}, for any integers $n$ and $m$ with $n+m \geq 1$ and for $1 \le j \le 2$, $$ \left| \frac{\partial^{n+m}}{\partial k_1^n \partial k_2^m} \alpha_{\mu,d'}^{(j)}(k) \right| \leq \frac{C_j}{(2|z_{\mu,d'}(k)|-R)^j} \qquad \text{and} \qquad \left| \frac{\partial^{n+m}}{\partial k_1^n \partial k_2^m} \alpha_{\mu,d'}^{(3)}(k) \right| \leq \frac{C_3}{|z_{\mu,d'}(k)| R^2}, $$ where $C_l = C_{l;f,g,\Lambda,A,q,n,m}$ for $1 \le l \le 3$ are constants. Furthermore, $$ C_{1;f,g,\Lambda, A, 1, 0}, \, C_{1;f,g,\Lambda, A, 0, 1} \leq 13 \Lambda^{-2} \| f \|_{l^1} \| g \|_{l^1} \qquad \text{and} \qquad C_{1;f,g,\Lambda, A, 1, 1} \leq 65 \Lambda^{-3} \| f \|_{l^1} \| g \|_{l^1}. $$ \end{lem} \section{The regular piece} \label{s:reg} \begin{proof}[Proof of Theorem \ref{t:reg}] {\tt Step 1 (defining equation).} We first derive a defining equation for the Fermi curve. Without loss of generality we may assume that $\hat{A}(0)=0$. Let $G=\{0\}$, recall that $G'=\Gamma^\# \setminus \{0\}$, and consider the region $( T_\nu(0) \setminus \cup_{b \in G'} T_b ) \setminus \mathcal{K}_\rho$, where $\rho$ is a constant to be chosen sufficiently large obeying $\rho \geq R$. By Proposition \ref{p:Gprime}(i) we have $G' = \{ b \in \Gamma^\# \; | \;\; |N_b(k)| \geq \varepsilon |v| \}$. To simplify the notation write $$ \mathcal{M}_\nu \coloneqq \left( \widehat{\mathcal{F}}(A,V) \cap T_\nu(0) \right) \setminus \left( \mathcal{K}_\rho \cup \bigcup_{b \in \Gamma^\# \setminus \{0\}} T_b \right). $$ By Lemma \ref{l:eqn}(i), a point $k$ is in $\mathcal{M}_\nu$ if and only if $$ N_0(k) + D_{0,0}(k) = 0. $$ By Proposition \ref{p:newv}, if we set $$ w(k) \coloneqq w_{\nu,0}(k) = k_1 + i (-1)^\nu k_2 \qquad \text{and} \qquad z(k) \coloneqq z_{\nu,0}(k) = k_1 - i (-1)^\nu k_2, $$ this equation becomes \begin{equation} \label{deq1} \beta_1 w^2 + \beta_2 z^2 + (1 + \beta_3) wz + \beta_4 w + \beta_5 z + \beta_6 + \hat{q}{(0)} = 0, \end{equation} where \begin{alignat*}{3} \beta_1 & \coloneqq J_{\nu'}^{00}, & \qquad \beta_2 & \coloneqq J_\nu^{00}, & \qquad \beta_3 & \coloneqq K^{00}, \\ \beta_4 & \coloneqq L_{\nu'}^{00}, & \qquad \beta_5 & \coloneqq L_\nu^{00}, & \qquad \beta_6 & \coloneqq M^{00} - \hat{q}(0), \end{alignat*} with $J_\nu^{00}$, $K^{00}$, $L_\nu^{00}$ and $M^{00}$ given by Proposition \ref{p:newv}. Observe that all the coefficients $\beta_1, \dots, \beta_6$ have exactly the same form as the function $\Phi_{0,0}(k)$ of Lemma \ref{l:b1}(i) (see \eqref{Phi}). Thus, by this lemma, for $1 \leq i \leq 6$ we have \begin{equation} \label{coef} \beta_i = \beta_i^{(1)} + \beta_i^{(2)} + \beta_i^{(3)}, \end{equation} where the function $\beta_i^{(j)}$ is analytic in the region under consideration with $$ |\beta_i^{(j)}(k)| \leq \frac{C}{(2|z(k)|-\rho)^j} \leq \frac{C}{|z(k)|^j} \quad \text{for} \quad 1 \le j \le 2 \qquad \text{and} \qquad |\beta_i^{(3)}(k)| \leq \frac{C}{|z(k)| \rho^2}, $$ where $C = C_{\varepsilon, \Lambda, q, A}$ is a constant. The exact expression for $\beta_i^{(j)}$ can be easily obtained from the definitions and from Lemma \ref{l:b1}(i). 
Substituting \eqref{coef} into \eqref{deq1} and dividing both sides of the equation by $z$ yields
\begin{equation}
\label{deq1b}
w + \beta_2^{(1)} \, z + g = 0,
\end{equation}
where
\begin{equation}
\label{defg}
g \coloneqq \frac{\beta_1 w^2}{z} + (\beta_2^{(2)} + \beta_2^{(3)}) z + \beta_3 w + \frac{\beta_4 w}{z} + \beta_5 + \frac{\beta_6}{z} + \frac{\hat{q}(0)}{z}
\end{equation}
obeys
\begin{equation}
\label{decg}
|g(k)| \leq \frac{C}{\rho},
\end{equation}
with a constant $C = C_{\varepsilon, \Lambda, q, A}$. Therefore, a point $k$ is in $\mathcal{M}_\nu$ if and only if
$$
F(k) = 0,
$$
where
$$
F(k) \coloneqq w(k) + \beta_2^{(1)}(k) \, z(k) + g(k)
$$
is an analytic function (in the region under consideration).

{\tt Step 2 (candidates for a solution).} Let us now identify which points are candidates to solve the equation $F(k)=0$. First observe that, by Proposition \ref{p:lines}(c) the lines $\mathcal{N}_\nu(0)$ and $\mathcal{N}_{\nu'}(d)$ intersect at $\mathcal{N}_\nu(0) \cap \mathcal{N}_{\nu'}(d) = \{ ( i \theta_\nu(d), (-1)^{\nu'} \theta_\nu(d)) \}$. Hence, the second coordinate of this point and the second coordinate of a point $k$ differ by
$$
pr(k) - pr( \mathcal{N}_\nu(0) \cap \mathcal{N}_{\nu'}(d) ) = k_2 - (-1)^{\nu'} \theta_\nu(d) = k_2 + (-1)^\nu \theta_\nu(d).
$$
Now observe that, if $k \in T_\nu(0) \cap T_{\nu'}(d)$ then $|k_1 + i(-1)^\nu k_2| < \varepsilon$ and
\begin{align*}
|k_2 + (-1)^\nu \theta_\nu(d)| & = \big| \tfrac{1}{2} (k_1+i(-1)^\nu k_2) - \tfrac{1}{2} ( k_1 + d_1 - i(-1)^\nu (k_2+d_2) ) \big| \\
& \leq \tfrac{1}{2} \big| N_{0,\nu}(k) - N_{d,\nu'}(k) \big | < \tfrac{\varepsilon}{2} + \tfrac{\varepsilon}{2} = \varepsilon.
\end{align*}
That is, the second coordinate of $k$ and the second coordinate of $\mathcal{N}_\nu(0) \cap \mathcal{N}_{\nu'}(d)$ must be within $\varepsilon$ of each other. This gives a necessary condition on the second coordinate of a point $k$ for being in $\mathcal{M}_\nu$. Conversely, if a point $k$ is in the $(\varepsilon/4)$-tube inside $T_\nu(0)$, that is, $|k_1 + i(-1)^\nu k_2| < \frac{\varepsilon}{4}$, and its second coordinate differs from the second coordinate of $\mathcal{N}_\nu(0) \cap \mathcal{N}_{\nu'}(d)$ by at most $\varepsilon/4$, that is, $|k_2 + (-1)^\nu \theta_\nu(d)| < \frac{\varepsilon}{4}$, then
$$
|N_{d,\nu'}(k)| = \big| N_{0,\nu}(k) - 2i(-1)^\nu (k_2 + (-1)^\nu \theta_\nu(d)) \big| \leq \frac{\varepsilon}{4} + 2 \, \frac{\varepsilon}{4} < \varepsilon,
$$
that is, the point $k$ is also in $T_{\nu'}(d)$ and hence lies in the intersection $T_\nu(0) \cap T_{\nu'}(d)$. This gives a sufficient condition on the first and second coordinates of a point $k$ for being in $T_\nu(0) \cap T_{\nu'}(d)$. For $y \in \mathds C$ define the set of candidates for a solution of $F(k)=0$ as
$$
M_\nu(y) \coloneqq pr^{-1}(y) \cap \left( T_\nu(0) \setminus \bigcup_{b \in \Gamma^\# \setminus \{0\}} T_b \right) = pr^{-1}(y) \cap \left( T_\nu(0) \setminus \bigcup_{b \in \Gamma^\# \setminus \{0\}} T_{\nu'}(b) \right).
$$
Observe that, if $|y + (-1)^\nu \theta_\nu(b)| \geq \varepsilon$ for all $b \in \Gamma^\# \setminus \{0\}$ then
\begin{equation}
\label{inc1}
M_\nu(y) = pr^{-1}(y) \cap T_\nu(0) = \{ (k_1, y) \in \mathds C^2 \; | \;\; |k_1 + i(-1)^\nu y | < \varepsilon \}.
\end{equation} On the other hand, if $|y + (-1)^\nu \theta_\nu(d)| < \varepsilon$ for some $d \in \Gamma^\# \setminus \{0\}$, then there is at most one such $d$ and consequently \begin{equation} \label{inc2} \begin{aligned} M_\nu(y) & = pr^{-1}(y) \cap (T_\nu(0) \setminus T_{\nu'}(d)) \\ & = \{ (k_1, y) \in \mathds C^2 \; | \;\; |k_1 + i(-1)^\nu y | < \varepsilon \, \text{ and } \, |k_1+d_1 + i(-1)^{\nu'} (y+d_2)| \geq \varepsilon \}. \end{aligned} \end{equation} Indeed, suppose there is another $d' \neq 0$ such that $|y + (-1)^\nu \theta_\nu(d')| < \varepsilon$. Then, $$ |d-d'| = |2(-1)^\nu \theta_\nu(d-d')| = |y + (-1)^\nu \theta_\nu(d) - (y + (-1)^\nu \theta_\nu(d'))| \leq 2\varepsilon < 2 \Lambda, $$ which contradicts the definition of $\Lambda$. Thus, there is no such $d' \neq 0$. {\tt Step 3 (uniqueness).} We now prove that, given $k_2$, if there exists a solution $k_1(k_2)$ of $F(k_1,k_2)=0$, then this solution is unique and it depends analytically on $k_2$. This follows easily using the implicit function theorem and the estimates below, which we prove later. \begin{prop} \label{p:imp} Under the hypotheses of Theorem \ref{t:reg} we have \begin{align} |F(k)-w(k)| & \leq \frac{\varepsilon}{900} + \frac{C_1}{\rho}, \tag{a} \\ \left| \frac{\partial F}{\partial k_1}(k) - 1 \right| & \leq \frac{1}{7 \cdot 3^4} + \frac{C_2}{\rho}, \tag{b} \end{align} where the constants $C_1$ and $C_2$ depend only on $\varepsilon$, $\Lambda$, $q$ and $A$. \end{prop} Now suppose that $(k_1,y) \in M_\nu(y)$. Then, $$ \left| \frac{\partial F}{\partial k_1}(k_1,y) - 1 \right| \leq \frac{1}{7 \cdot 3^4} + \frac{C_2}{\rho}. $$ Hence, by the implicit function theorem, by choosing the constant $\rho \geq R$ sufficiently large, if $F(k_1^*,y)=0$ for some $(k_1^*,y) \in M_{\nu}(y)$, then there is a neighbourhood $U \times V \subset \mathds C^2$ which contains $(k_1^*,y)$, and an analytic function $\eta : V \to U$ such that $F(k_1,k_2) = 0$ for all $(k_1,k_2) \in U \times V$ if and only if $k_1 = \eta(k_2)$. In particular this implies that the equation $F(k_1,k_2)=0$ has at most one solution $(\eta(y),y)$ in $M_\nu(y)$ for each $y \in \mathds C$. We next look for conditions on $y$ to have a solution or have no solution in $M_\nu(y)$. {\tt Step 4 (existence).} We first state an improved version of Proposition \ref{p:imp}(a). \begin{prop} \label{p:imp2} Under the hypotheses of Theorem \ref{t:reg} we have $$ F(k)-w(k) = \beta_2^{(1,0)} + \beta_2^{(1,1)}(w(k)) + \beta_2^{(1,2)}(k) + h(k), $$ where \begin{equation} \label{cte1} \beta^{(1,0)}_2 = -2i \sum_{b,c \in G'_1} \frac{ \theta_{\nu'}(\hat{A}(-b))}{\theta_{\nu'}(b)} \left[ \delta_{b,c} + \frac{\theta_{\nu'}(\hat{A}(b-c))}{ \theta_{\nu'}(c)} \right] \theta_\nu(\hat{A}(c)) \end{equation} is a constant that depends only on $\rho$ and $A$ and $$ h \coloneqq \beta_2^{(1,3)} + g. $$ Furthermore, \begin{align*} |\beta^{(1,0)}_2| < \frac{1}{100 \Lambda} \, \varepsilon^2, & \qquad \quad |\beta^{(1,1)}_2(k)| < \frac{1}{40 \Lambda^2} \, \varepsilon^3, \\ |\beta^{(1,2)}_2(k)| < \frac{1}{7^4 \Lambda^3} \, \varepsilon^4, & \qquad \quad |h(k)| \leq C_{\varepsilon,\Lambda,q,A} \, \frac{1}{\rho}. \end{align*} \end{prop} We now derive conditions for the existence of solutions. Suppose that $F(\eta(y),y)=0$. 
Then, since $\eta(y) + i(-1)^\nu y=w(\eta(y),y)$ and $\varepsilon<\Lambda/6$, using the above proposition we obtain \begin{align*} |\eta(y) + i(-1)^\nu y| = |w(\eta(y),y)| & = |F(\eta(y),y) - w(\eta(y),y)| \\ & \leq \frac{\varepsilon^2}{100 \Lambda} + \frac{\varepsilon^3}{40 \Lambda^2} + \frac{\varepsilon^4}{7^4 \Lambda^3} + \frac{C}{\rho} \leq \frac{\varepsilon^2}{50 \Lambda} + \frac{C}{\rho}. \end{align*} Hence, by choosing the constant $\rho$ sufficiently large we find that $$ |\eta(y) + i(-1)^\nu y| < \frac{\varepsilon^2}{40 \Lambda}. $$ In view of \eqref{inc2}, there is no solution in $M_\nu(y)$ if for some $d \in \Gamma^\# \setminus \{0\}$ we have $$ |y+(-1)^\nu \theta_\nu(d)|<\varepsilon \qquad \text{and} \qquad |\eta(y) + d_1 + i(-1)^{\nu'}(y+d_2) | < \varepsilon. $$ This happens if $$ |y+(-1)^\nu \theta_\nu(d)| \leq \frac{1}{2}\left( \varepsilon - \frac{\varepsilon^2}{40\Lambda} \right) $$ because in this case \begin{align*} | \eta(y) + d_1 + i(-1)^{\nu'}(y+d_2) | & = |\eta(y) + i(-1)^\nu y - 2i(-1)^\nu y + d_1 - i(-1)^\nu d_2| \\ & \leq |\eta(y) + i(-1)^\nu y| + 2 |y+(-1)^\nu \theta_\nu(d)| < \varepsilon. \end{align*} Therefore, the image set of $pr$ is contained in $$ \Omega_1 \coloneqq \Bigg \{ z \in \mathds C \,\, \Bigg | \,\, |z+(-1)^\nu\theta_\nu(b)| > \frac{1}{2}\left( \varepsilon - \frac{\varepsilon^2}{40\Lambda} \right) \, \text{ for all } b \in \Gamma^\# \setminus \{0\} \Bigg \}. $$ On the other hand, in view of \eqref{inc1}, there is a solution in $M_\nu(y)$ if $|y+(-1)^\nu \theta_\nu(b)| > \varepsilon$ for all $b \in \Gamma^\# \setminus \{0\}$. Recall from Proposition \ref{p:order}(a) that $\rho < |v| < 8|k_2|$. Thus, the image set of $pr$ contains the set $$ \Omega_2 \coloneqq \Big \{ z \in \mathds C \,\, \Big| \,\, 8 |z| > \rho \, \text{ and } |z+(-1)^\nu \theta_\nu(b)| > \varepsilon \, \text{ for all } b \in \Gamma^\# \setminus \{0\} \Big \}. $$ {\tt Step 5.} Summarizing, we have the following biholomorphic correspondence: \begin{align*} \mathcal{M}_\nu \ni k \, \xrightarrow{\qquad pr \qquad} \,\, k_2 & \in \Omega, \\ \mathcal{M}_\nu \ni (\eta(y),y) \, \xleftarrow{\qquad pr^{-1} \qquad} \,\, y & \in \Omega, \end{align*} where $$ \Omega_2 \subset \Omega \subset \Omega_1 \qquad \text{and} \qquad \eta(y) = -\beta_2^{(1,0)} -i(-1)^\nu y - r(y), $$ with the constant $\beta_2^{(1,0)}$ given by \eqref{cte1}, $$ |\beta^{(1,0)}_2| < \frac{\varepsilon^2}{100 \Lambda} \qquad \text{and} \qquad |r(y)| \leq \frac{\varepsilon^3}{50 \Lambda^2} + \frac{C}{\rho}. $$ This completes the proof of the theorem. \end{proof} \begin{proof}[Proof of Proposition \ref{p:imp}] (a) Recall that $\beta_2 = J_\nu^{00}$. First observe that, by Proposition \ref{p:newv}, Lemma \ref{l:b1}, and \eqref{alpha}, we have \begin{equation} \label{beta21} \beta_2^{(1)}(k) = (J_\nu^{00})^{(1)}(k) = \sum_{b,c \in G'_1} \frac{(1,i(-1)^\nu) \cdot \hat{A}(-b)}{N_b(k)} \, S_{b,c} \, (1,-i(-1)^\nu) \cdot \hat{A}(c). \end{equation} Thus, by \eqref{rec} and \eqref{estS}, \begin{equation} \label{as1} |\beta_2^{(1)}(k)| \leq \sqrt{2} \| \hat{A} \|_{l_1} \frac{2}{\Lambda (2|z(k)|-R)} \, \frac{45}{44} \, \sqrt{2} \| \hat{A} \|_{l_1} \leq \frac{4}{\Lambda |z(k)|} \, \frac{44}{45} \, \left( \frac{2\varepsilon}{63} \right)^2 \leq \frac{\varepsilon}{900} \, \frac{1}{|z(k)|}. \end{equation} Now recall that $|g(k)| \leq C_{\varepsilon,\Lambda,q,A} \, \frac{1}{\rho}$. Hence, $$ |F(k) - w(k)| = |\beta_2^{(1)}(k) z(k) + g(k)| \leq \frac{\varepsilon}{900} + C_{\varepsilon,\Lambda,q,A} \, \frac{1}{\rho}. $$ This proves part (a). 
(b) We first compute \begin{equation} \label{ee1} \begin{aligned} \frac{\partial g}{\partial k_1} & = \frac{\partial \beta_1}{\partial k_1} \frac{w^2}{z} + \beta_1 \, \frac{2wz-w^2}{z^2} + \left( \frac{\partial \beta_2^{(2)}}{\partial k_1} + \frac{\partial \beta_2^{(3)}}{\partial k_1} \right) z + \beta_2^{(2)} + \beta_2^{(3)} + \frac{\partial \beta_3}{\partial k_1} w + \beta_3 \\ & \quad + \frac{\partial \beta_4}{\partial k_1} \, \frac{w}{z} + \beta_4 \, \frac{z-w}{z^2} + \frac{\partial \beta_5}{\partial k_1} + \frac{\partial \beta_6}{\partial k_1} \, \frac{1}{z} - \frac{\beta_6}{z^2} - \frac{\hat{q}(0)}{z^2}. \end{aligned} \end{equation} Now observe that, since $k \in T_\nu(0) \setminus \mathcal{K}_\rho$ we have $|w(k)| < \varepsilon$, $3|v| \geq |z|$ and $\rho < |v| \leq |z|$. Furthermore, by Lemmas \ref{l:b1}(i), \ref{l:der}(i) and \ref{l:der2}(i), for $1 \leq i \leq 6$ and $1 \leq j \leq 2$, \begin{equation} \label{ee2} \begin{aligned} |\beta_i(k)| & \leq \frac{C}{|z(k)|}, \qquad & |\beta_i^{(j)}(k)| & \leq \frac{C}{|z(k)|^j}, \qquad & |\beta_i^{(3)}(k)| & \leq \frac{C}{|z(k)| \rho^2}, \\ \left| \frac{\partial \beta_i(k)}{\partial k_1} \right| & \leq \frac{C}{|z(k)|}, \qquad & \left| \frac{\partial \beta_i^{(j)}(k)}{\partial k_1} \right| & \le \frac{C}{|z(k)|^j}, \qquad & \left| \frac{\partial \beta_i^{(3)}(k)}{\partial k_1} \right| & \le \frac{C}{|z(k)| \rho^2}, \end{aligned} \end{equation} where $C = C_{\varepsilon,\Lambda,q,A}$ in all cases. Hence, \begin{equation} \label{ee3} \left| \frac{\partial g(k)}{\partial k_1} \right| \leq C_{\varepsilon,\Lambda,q,A} \, \frac{1}{\rho}. \end{equation} By Lemma \ref{l:der2}(i) with $f=g=(1,-i(-1)^\nu) \cdot \hat{A}$, we obtain \begin{equation} \label{ee4} \left | z(k) \frac{ \partial \beta_2^{(1)}(k)}{\partial k_1} \right| \leq |z(k)| \frac{13}{\Lambda^2 |z(k)|} \| (1,-i(-1)^\nu) \cdot \hat{A} \|_{l^1}^2 \leq \frac{26}{\Lambda^2} \| \hat{A} \|_{l^1}^2 < \frac{1}{7 \cdot 3^4}. \end{equation} Therefore, \begin{align*} \left| \frac{\partial F}{\partial k_1}(k) - 1 \right| & = \left| \frac{\partial}{\partial k_1} ( F(k)-w(k) ) \right| = \left| \frac{\partial}{\partial k_1} ( \beta_2^{(1)}(k) z(k) + g(k) ) \right|\\ & = \left| \frac{ \partial \beta_2^{(1)}}{\partial k_1}(k) z(k) + \beta_2^{(1)}(k) + \frac{\partial g}{\partial k_1}(k) \right| \leq \frac{1}{7 \cdot 3^4} + C_{\varepsilon,\Lambda,q,A} \, \frac{1}{\rho}. \end{align*} This proves part (b) and completes the proof of the proposition. \end{proof} \begin{proof}[Proof of Proposition \ref{p:imp2}] First observe that $$ (1,i(-1)^\nu) \cdot A = A_1 + i(-1)^\nu A_2 = A_1 - i(-1)^{\nu'} A_2 = -2i \theta_{\nu'}(A). $$ Thus, recalling \eqref{beta21}, $$ \beta_2^{(1)}(k) = (J_\nu^{00})^{(1)}(k) = \sum_{b,c \in G'_1} \frac{2i \theta_{\nu'}(\hat{A}(-b))}{N_b(k)} \, S_{b,c} \, 2i \theta_\nu(\hat{A}(c)). $$ Now, by Lemma \ref{l:b1b} we have $$ z(k) \beta_2^{(1)}(k) = \beta_2^{(1,0)} + \beta_2^{(1,1)}(w(k)) + \beta_2^{(1,2)}(k) + \beta_3^{(1,3)}(k), $$ where $$ \beta_2^{(1,0)} = - 2i \sum_{b,c \in G_1'} \frac{\theta_{\nu'}(\hat{A}(-b))}{\theta_{\nu'}(b)} \left[ \delta_{b,c} + \frac{\theta_{\nu'}(\hat{A}(b-c))}{\theta_{\nu'}(c)} \right] \theta_{\nu}(\hat{A}(c)) $$ and $$ |\beta_3^{(1,3)}(k)| \leq C_{\Lambda,A} \, \frac{1}{|z(k)|} < C_{\Lambda,A} \, \frac{1}{\rho}. $$ Hence, $$ F(k)-w(k) = z(k) \beta_2^{(1)}(k) + g(k) = \beta_2^{(1,0)} + \beta_2^{(1,1)}(w(k)) + \beta_2^{(1,2)}(k) + h(k) $$ with $h \coloneqq \beta_3^{(1,3)} + g$. 
Furthermore, in view of \eqref{decg},
$$
|h(k)| \leq |\beta_3^{(1,3)}(k)| + |g(k)| < C_{\varepsilon,\Lambda,q,A} \, \frac{1}{\rho}.
$$
This proves the first part of the proposition. Finally, by \eqref{cts}, since $\| \hat{A} \|_{l^1} < 2\varepsilon/63$ and $\varepsilon<\Lambda/6$, we find that
$$
|\beta_2^{(1,0)}| \leq \frac{1}{2 \Lambda} \left( 1 + \frac{1}{2 \Lambda} \| \theta_{\nu'}(\hat{A}) \|_{l^1} \right) \| 2i \theta_{\nu'}(\hat{A}) \|_{l^1} \| 2i \theta_{\nu}(\hat{A}) \|_{l^1} \leq \frac{4}{\Lambda} \| \hat{A} \|_{l^1}^2 < \frac{1}{100 \Lambda} \, \varepsilon^2,
$$
$$
|\beta_2^{(1,1)}| \leq \frac{\varepsilon}{\Lambda^2} \left( 1 + \frac{7}{6 \Lambda} \| \theta_{\nu'}(\hat{A}) \|_{l^1} \right) \| 2i \theta_{\nu'}(\hat{A}) \|_{l^1} \| 2i \theta_{\nu}(\hat{A}) \|_{l^1} \leq \frac{8}{\Lambda^2} \, \varepsilon \| \hat{A} \|_{l^1}^2 < \frac{1}{40 \Lambda^2} \, \varepsilon^3
$$
and
$$
|\beta_2^{(1,2)}| \leq \frac{64}{\Lambda^3} \| \theta_{\nu'}(\hat{A}) \|_{l^1}^2 \| 2i \theta_{\nu'}(\hat{A}) \|_{l^1} \| 2i \theta_{\nu}(\hat{A}) \|_{l^1} \leq \frac{256}{\Lambda^3} \| \hat{A} \|_{l^1}^4 < \frac{1}{7^4 \Lambda^3} \, \varepsilon^4.
$$
This completes the proof.
\end{proof}
\section{The handles}
\label{s:hand}
\begin{proof}[Proof of Theorem \ref{t:hand}]
{\tt Step 1 (defining equation).} Let $G=\{0,d\}$ and consider the region $(T_\nu(0) \cap T_{\nu'}(d)) \setminus \mathcal{K}_\rho$, where $\rho$ is a constant to be chosen sufficiently large obeying $\rho \ge R$. Observe that this requires $d$ to be sufficiently large for $(T_\nu(0) \cap T_{\nu'}(d)) \setminus \mathcal{K}_\rho$ to be nonempty. In fact, by Proposition \ref{p:order}(ii), for $k$ in this region we have $\rho < |v| \le 2|d|$. Now, recall from Proposition \ref{p:Gprime}(ii) that $G' = \{ b \in \Gamma^\# \; | \;\, |N_b(k)| \geq \varepsilon |v| \}$, and to simplify the notation write
$$
\mathcal{H}_\nu \coloneqq \widehat{\mathcal{F}}(A,V) \cap (T_\nu(0) \cap T_{\nu'}(d)) \setminus \mathcal{K}_\rho.
$$
By Lemma \ref{l:eqn}(ii), a point $k$ is in $\mathcal{H}_\nu$ if and only if
\begin{equation}
\label{eqh}
(N_0(k)+D_{0,0}(k)) (N_d(k)+D_{d,d}(k)) - D_{0,d}(k)D_{d,0}(k) = 0.
\end{equation}
Define
\begin{equation}
\label{ch1}
\begin{aligned}
w_1(k) & \coloneqq w_{\nu,0} = k_1 + i (-1)^\nu k_2, \\
z_1(k) & \coloneqq z_{\nu,0} = k_1 - i (-1)^\nu k_2, \\
w_2(k) & \coloneqq w_{\nu',d} = k_1+d_1 + i(-1)^{\nu'} (k_2+d_2), \\
z_2(k) & \coloneqq z_{\nu',d} = k_1+d_1 - i(-1)^{\nu'} (k_2+d_2).
\end{aligned}
\end{equation}
Note that, by Proposition \ref{p:order}(ii),
$$
|v| \le |z_1| \le 3|v|, \qquad |v| \le |z_2| \le 3|v| \qquad \text{and} \qquad \frac{|d|}{2} \le |z_2| \le 2|d|.
$$ By Proposition \ref{p:newv}, \begin{equation} \label{term1} \begin{aligned} N_0 + D_{0,0} & = \beta_1 w_1^2 + \beta_2 z_1^2 + (1 + \beta_3) w_1 z_1 + \beta_4 w_1 + \beta_5 z_1 + \beta_6 + \hat{q}{(0)}, \\ N_d + D_{d,d} & = \eta_1 w_2^2 + \eta_2 z_2^2 + (1 + \eta_3) w_2 z_2 + \eta_4 w_2 + \eta_5 z_2 + \eta_6 + \hat{q}{(0)}, \end{aligned} \end{equation} where \begin{alignat*}{3} \beta_1 & \coloneqq J_{\nu'}^{00}, & \qquad \beta_2 & \coloneqq J_\nu^{00}, & \qquad \beta_3 & \coloneqq K^{00}, \\ \beta_4 & \coloneqq L_{\nu'}^{00}, & \qquad \beta_5 & \coloneqq L_\nu^{00}, & \qquad \beta_6 & \coloneqq M^{00} - \hat{q}(0), \end{alignat*} and \begin{alignat*}{3} \eta_1 & \coloneqq J_\nu^{dd}, & \qquad \eta_2 & \coloneqq J_{\nu'}^{dd}, & \qquad \eta_3 & \coloneqq K^{dd}, \\ \eta_4 & \coloneqq L_\nu^{dd}, & \qquad \eta_5 & \coloneqq L_{\nu'}^{dd}, & \qquad \eta_6 & \coloneqq M^{dd} - \hat{q}(0), \end{alignat*} with $J_\nu^{d'd'}$, $K^{d'd'}$, $L_\nu^{d'd'}$ and $M^{d'd'}$ given by Proposition \ref{p:newv}. Observe that all the coefficients $\beta_1, \dots, \beta_6$ and $\eta_1, \dots, \eta_6$ have exactly the same form as the function $\Phi_{d',d'}(k)$ of Lemma \ref{l:b1}(ii) (see \eqref{Phi}). Thus, by this lemma, for $1 \leq i \leq 6$ we have \begin{equation} \label{coef2} \beta_i = \beta_i^{(1)} + \beta_i^{(2)} + \beta_i^{(3)} \qquad \text{and} \qquad \eta_i = \eta_i^{(1)} + \eta_i^{(2)} + \eta_i^{(3)}, \end{equation} where the functions $\beta_i^{(j)}$ and $\eta_i^{(j)}$ are analytic in the region under consideration with \begin{align*} |\beta_i^{(j)}(k)| \leq \frac{C}{(2|z_1(k)|-\rho)^j} \leq \frac{C}{|z_1(k)|^j} \quad \text{for} \quad 1 \le j \le 2 \qquad \text{and} \qquad |\beta_i^{(3)}(k)| \leq \frac{C}{|z_1(k)| \rho^2}, \\ |\eta_i^{(j)}(k)| \leq \frac{C}{(2|z_2(k)|-\rho)^j} \leq \frac{C}{|z_2(k)|^j} \quad \text{for} \quad 1 \le j \le 2 \qquad \text{and} \qquad |\eta_i^{(3)}(k)| \leq \frac{C}{|z_2(k)| \rho^2}, \end{align*} where $C = C_{\varepsilon,\Lambda, q, A}$ is a constant. The exact expressions for $\beta_i^{(j)}$ and $\eta_i^{(j)}$ can be easily obtained from the definitions and from Lemma \ref{l:b1}(ii). Substituting \eqref{coef2} into \eqref{term1} yields \begin{equation} \label{term1b} \begin{aligned} \frac{1}{z_1} (N_0 + D_{0,0}) & = w_1 + \beta_2^{(1)} \, z_1 + g_1, \\ \frac{1}{z_2} (N_d + D_{d,d}) & = w_2 + \eta_2^{(1)} \, z_2 + g_2, \end{aligned} \end{equation} where \begin{equation} \label{defg1g2} \begin{aligned} g_1 & \coloneqq \frac{\beta_1 w_1^2}{z_1} + (\beta_2^{(2)} + \beta_2^{(3)}) z_1 + \beta_3 w_1 + \frac{\beta_4 w_1}{z_1} + \beta_5 + \frac{\beta_6}{z_1} + \frac{\hat{q}(0)}{z_1}, \\ g_2 & \coloneqq \frac{\eta_1 w_2^2}{z_2} + (\eta_2^{(2)} + \eta_2^{(3)}) z_2 + \eta_3 w_2 + \frac{\eta_4 w_2}{z_2} + \eta_5 + \frac{\eta_6}{z_2} + \frac{\hat{q}(0)}{z_2} \end{aligned} \end{equation} obey \begin{equation} \label{decg1g2} |g_1(k)| \le \frac{C}{\rho} \qquad \text{and} \qquad |g_2(k)| \le \frac{C}{\rho}, \end{equation} with a constant $C = C_{\varepsilon,\Lambda, q,A}$. This gives us more information about the first term in \eqref{eqh}. We next consider the second term in that equation. 
Write \begin{equation} \label{term2} D_{0,d} = c_1(d) + p_1 \qquad \text{and} \qquad D_{d,0} = c_2(d) + p_2 \end{equation} with \begin{alignat*}{2} c_1(d) & \coloneqq \hat{q}(-d) - 2d \cdot \hat{A}(-d), \qquad \quad & p_1 & \coloneqq D_{0,d} - \hat{q}(-d) + 2d \cdot \hat{A}(-d), \\ c_2(d) & \coloneqq \hat{q}(d) + 2d \cdot \hat{A}(d), \qquad \quad & p_2 & \coloneqq D_{d,0} - \hat{q}(d) - 2d \cdot \hat{A}(d). \end{alignat*} We have the following estimates. \begin{prop} \label{p:p1p2} Under the hypotheses of Theorem \ref{t:hand} we have, for any integers $n$ and $m$ with $n+m \ge 0$ and for $1 \le j \le 2$, $$ \left| \frac{\partial^{n+m}}{\partial k_1^n \partial k_2^m} \, p_j(k) \right| \le \frac{C_1}{|d|} \qquad \text{and} \qquad |c_j(d)| \le \frac{C_2}{|d|}, $$ where the constants $C_1$ and $C_2$ depend only on $\varepsilon$, $\Lambda$, $q$ and $A$. \end{prop} Thus, by dividing both sides of \eqref{eqh} by $z_1z_2$ and substituting \eqref{term1b} and \eqref{term2} we find that \begin{equation} \label{eqh2} \begin{aligned} 0 & = \frac{1}{z_1 z_2} \big[ (N_0+D_{0,0}) (N_d+D_{d,d}) - D_{0,d} D_{d,0} \big] \\ & = (w_1 + \beta_2^{(1)} \, z_1 + g_1)(w_2 + \eta_2^{(1)} \, z_2 + g_2) - \frac{1}{z_1 z_2} (c_1(d) + p_1)(c_2(d) + p_2). \end{aligned} \end{equation} We now introduce a (nonlinear) change of variables in $\mathds C^2$. Set \begin{equation} \label{x1x2} \begin{aligned} x_1(k) & \coloneqq w_1(k) + \beta_2^{(1)}(k) \, z_1(k) + g_1(k), \\ x_2(k) & \coloneqq w_2(k) + \eta_2^{(1)}(k) \, z_2(k) + g_2(k). \end{aligned} \end{equation} This transformation obeys the following estimates. \begin{prop} \label{p:jac} Under the hypotheses of Theorem \ref{t:hand} we have: \vspace{0.2cm} \noindent {\rm (i)} For $1 \leq j \leq 2$ and for $\rho$ sufficiently large, $$ |x_j(k) - w_j(k)| \le \frac{\varepsilon}{900} + \frac{C}{\rho} < \frac{\varepsilon}{8}. $$ \noindent {\rm (ii)} \begin{align*} \begin{pmatrix} \frac{\partial x_1}{\partial k_1} & \frac{\partial x_1}{\partial k_2}\\ \frac{\partial x_2}{\partial k_1} & \frac{\partial x_2}{\partial k_2} \end{pmatrix} & = \begin{pmatrix} 1 & i(-1)^\nu \\ 1 & i(-1)^{\nu'} \end{pmatrix} ( I + M ) \shortintertext{and} \begin{pmatrix} \frac{\partial k_1}{\partial x_1} & \frac{\partial k_1}{\partial x_2}\\ \frac{\partial k_2}{\partial x_1} & \frac{\partial k_2}{\partial x_2} \end{pmatrix} & = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ i(-1)^{\nu'} & i(-1)^\nu \end{pmatrix} (I + N) \end{align*} with $$ \| M \| \leq \frac{4}{7 \cdot 3^4} + \frac{C}{\rho} < \frac{1}{2} \qquad \text{and} \qquad \| N \| \le 4 \| M \|. $$ Furthermore, for all $m,i,j \in \{1,2\}$, $$ \left| \frac{\partial^2 k_m}{\partial x_i \partial x_j} \right| \le \frac{3}{\Lambda^3} \, \varepsilon^2 + \frac{C}{\rho}. $$ Here, all the constants $C$ depend only on $\varepsilon$, $\Lambda$, $q$ and $A$. \end{prop} By the inverse function theorem, these estimates imply that the above transformation is invertible. Therefore, by rewriting the equation \eqref{eqh2} in terms of these new variables, we conclude that a point $k$ is in $\mathcal{H}_\nu$ if and only if $x_1(k)$ and $x_2(k)$ satisfy the equation \begin{equation} \label{eqh3} x_1 x_2 + r(x_1,x_2) = 0, \end{equation} where $$ r(x_1,x_2) \coloneqq - \frac{1}{z_1 z_2} (c_1(d) + p_1)(c_2(d) + p_2). $$ In order to study this defining equation we need some estimates. 
{\tt Step 2 (estimates).} Using the above inequalities we have, for $i,j,l \in \{1,2\}$,
$$
\left| \frac{\partial}{\partial x_i} \, p_j(k(x)) \right| \le \sum_{m=1}^2 \left| \frac{\partial p_j}{\partial k_m} \, \frac{\partial k_m}{\partial x_i} \right| \le \frac{C}{|d|}
$$
and
$$
\left| \frac{\partial^2}{\partial x_i \partial x_l} \, p_j(k(x)) \right| \le \sum_{m,n=1}^2 \left| \frac{\partial^2 p_j}{\partial k_m \partial k_n} \, \frac{\partial k_m}{\partial x_i} \, \frac{\partial k_n}{\partial x_l} \right| + \sum_{m=1}^2 \left| \frac{\partial p_j}{\partial k_m} \, \frac{\partial^2 k_m}{\partial x_i \partial x_l} \right| \le \frac{C}{|d|},
$$
so that
$$
|r(x)| \le C \, \frac{1}{|d|^2} \, \frac{1}{|d|} \, \frac{1}{|d|} \le \frac{C}{|d|^4},
$$
$$
\left| \frac{\partial}{\partial x_i} \, r(x) \right| \le C \, \frac{1}{|d|^3} \, \frac{1}{|d|} \, \frac{1}{|d|} + C \, \frac{1}{|d|^2} \, \frac{1}{|d|} \, \frac{1}{|d|} \le \frac{C}{|d|^4}
$$
and
$$
\left| \frac{\partial^2}{\partial x_i \partial x_j} \, r(x) \right| \le \frac{C}{|d|^4}.
$$
Here, all the constants depend only on $\varepsilon$, $\Lambda$, $q$ and $A$.

{\tt Step 3 (Morse lemma).} We now apply the quantitative Morse lemma in Appendix \ref{s:morse} to study the equation \eqref{eqh3}. We consider this lemma with $a = b = C/|d|^4$, $\delta = \varepsilon$, and $d$ sufficiently large so that $b < \min \{ \frac{2}{3} \, \frac{1}{55}, \frac{\varepsilon}{4} \}$. Observe that, under this condition, we have
$$
(\delta-a)(1-19b) > \frac{\varepsilon}{2} \qquad \text{and} \qquad (\delta-a)(1-55b) > \frac{\varepsilon}{4}.
$$
According to this lemma, there is a biholomorphism $\Phi_\nu$ defined on
$$
\Omega_1 \coloneqq \left \{ (z_1,z_2) \in \mathds C^2 \; \Big | \;\; |z_1| < \frac{\varepsilon}{2} \; \text{ and } \; |z_2| < \frac{\varepsilon}{2} \right\}
$$
with range containing
\begin{equation}
\label{range}
\left \{ (x_1,x_2) \in \mathds C^2 \; \Big | \;\; |x_1| < \frac{\varepsilon}{4} \; \text{ and } \; |x_2| < \frac{\varepsilon}{4} \right \}
\end{equation}
such that
\begin{equation}
\label{propPhi}
\begin{aligned}
\| D \Phi_\nu - I \| & \le \frac{C}{|d|^2}, \\
((x_1 x_2 + r) \circ \Phi_\nu)(z_1,z_2) & = z_1 z_2 + t_d, \\
|t_d| & \le \frac{C}{|d|^4}, \\
|\Phi_\nu(0)| & \le \frac{C}{|d|^4},
\end{aligned}
\end{equation}
where $D \Phi_\nu$ is the derivative of $\Phi_\nu$ and $t_d$ is a constant that depends on $d$. Hence, if for $\nu=1$ we define
$$
\phi_{d,1} \, : \, \Omega_1 \longrightarrow T_1(0) \cap T_2(d)
$$
as
$$
\phi_{d,1}(z_1,z_2) \coloneqq (k_1(\Phi_1(z_1,z_2)),k_2(\Phi_1(z_1,z_2))),
$$
where $k(x)$ is the inverse of the transformation \eqref{x1x2}, we obtain the desired map. Note that the conclusion (ii) of the theorem is immediate. We next prove (i) and (iii).

{\tt Step 4 (proof of (i)).} By Proposition \ref{p:jac}(i), for $1 \leq j \le 2$ we have $| x_j(k)-w_j(k) | \le \frac{\varepsilon}{8}$. Now, recall from \eqref{ch1} the definition of $w_1(k)$ and $w_2(k)$. Then, since
$$
|x_j(k)| \le |x_j(k)-w_j(k)| + |w_j(k)| < \frac{\varepsilon}{8} + |w_j(k)|,
$$
the set
$$
\left \{ (k_1,k_2) \in \mathds C^2 \; \Big | \;\; |w_1(k)| < \frac{\varepsilon}{8} \; \text{ and } \; |w_2(k)| < \frac{\varepsilon}{8} \right \}
$$
is contained in the set \eqref{range}. This proves the first part of (i). To prove the second part we use Proposition \ref{p:jac} and \eqref{propPhi}.
First observe that $$ D \phi_{d,1} = \frac{\partial k}{\partial x} D \Phi_1 = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ i & -i \end{pmatrix} (I + N) (I + D\Phi_1 - I) = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ i & -i \end{pmatrix} (I + N + \mathcal{R}), $$ where $$ \| N \| \le \frac{1}{3^3} + \frac{C}{\rho} \qquad \text{and} \qquad \| \mathcal{R} \| \le \frac{C}{|d|^2}. $$ Furthermore, from \eqref{ch1} and \eqref{x1x2} we have $$ k_1 = i \theta_\nu(d) + \frac{1}{2}(w_1+w_2) = i \theta_\nu(d) + \frac{1}{2} ( x_1 + x_2 + \beta_2^{(1)} z_1 + \eta_2^{(1)} z_2 + g_1 + g_2) $$ and similarly $$ k_2 = -(-1)^\nu \theta_\nu(d) + \frac{(-1)^\nu}{2i} ( x_1 - x_2 - \beta_2^{(1)} z_1 + \eta_2^{(1)} z_2 - g_1 + g_2), $$ so that $$ \phi_{d,1}(0) = k(\Phi_1(0)) = k \left( O \left( \frac{1}{|d|^4} \right) \right) = (i \theta_\nu(d), -(-1)^\nu \theta_\nu(d)) + O \left( \frac{\varepsilon}{900} \right) + O \left( \frac{1}{\rho} \right). $$ {\tt Step 5 (proof of (iii)).} To prove part (iii) it suffices to note that $T_1(0) \cap T_2(d) \cap \widehat{\mathcal{F}}(A,V)$ is mapped to $T_1(-d) \cap T_2(0) \cap \widehat{\mathcal{F}}(A,V)$ by translation by $d$ and define $\phi_{d,2}$ by $$ \phi_{d,2}(z_1,z_2) \coloneqq \phi_{d,1}(z_2,z_1) + d. $$ This completes the proof of the theorem. \end{proof} \begin{proof}[Proof of Proposition \ref{p:p1p2}] It suffices to estimate $$ c_{d',d''} \coloneqq \hat{q}(d'-d'') - 2(d'-d'') \cdot \hat{A}(d'-d'') \qquad \text{and} \qquad p_{d',d''} \coloneqq D_{d',d''}- c_{d',d''} $$ for $d',d'' \in \{0,d\}$ with $d' \neq d''$. Define $l^{d'd''}_\nu \coloneqq (1,i(-1)^\nu) \cdot \hat{A}(d'-d'')$. Observe that, since $$ |\hat{q}(d'-d'')| = \frac{1}{|d'-d''|^2} |d'-d''|^2 \, |\hat{q}(d'-d'')| \leq \frac{1}{|d'-d''|^2} \sum_{b \in \Gamma^\#} |b|^2 \, |\hat{q}(b)| \leq \| b^2 \hat{q}(b) \|_{l^1} \frac{1}{|d|^2}, $$ and similarly $$ |\hat{A}(d'-d'')| \leq \| b^2 \hat{A}(b) \|_{l^1} \frac{1}{|d|^2}, $$ it follows that $$ |c_{d',d''}| \leq \frac{C_{A,q}}{|d|} \qquad \text{and} \qquad |l^{d'd''}_\nu| \leq \frac{C_{A}}{|d|^2}. $$ This gives the desired bounds for $c_1$ and $c_2$. Now, by Proposition \ref{p:newv} we have $$ p = J^{d'd''}_{\nu'} w_{\nu,d'}^2 + J^{d'd''}_\nu z_{\nu,d'}^2 + K^{d'd''} w_{\nu,d} z_{\nu,d'} + (\tilde{L}^{d'd''}_{\nu'} - l^{d'd''}_{\nu'}) w_{\nu,d'} + (\tilde{L}^{d'd''}_\nu - l^{d'd''}_\nu) z_{\nu,d'} + \tilde{M}^{d'd''} $$ with $\tilde{L}^{d'd''}_\nu \coloneqq L^{d'd''}_\nu + l^{d'd''}_\nu$ and $\tilde{M}^{d'd''} \coloneqq M^{d'd''} - c$. Observe that all the coefficients $J^{d'd''}_\nu$, $K^{d'd''}$, $\tilde{L}^{d'd''}_\nu$ and $\tilde{M}^{d'd''}$ have exactly the same form as the function $\Phi_{d',d''}(k)$ of Lemma \ref{l:derii} (see Proposition \ref{p:newv} and \eqref{Phi}). Thus, by this lemma with $\beta=2$, for any integers $n$ and $m$ with $n+m \ge 0$, the absolute value of the $\frac{\partial^{n+m}}{\partial k_1^n \partial k_2^m}$-derivative of each of these functions is bounded above by $C_{\varepsilon, \Lambda, A, q,m,n} \, \frac{1}{|d|^3}$. Hence, if we recall from Proposition \ref{p:order}(ii) that $|z_1(k)| \le 6|d|$ and $|z_2(k)| \le 2|d|$, and apply the Leibniz rule we find that $$ \left| \frac{\partial^{n+m}}{\partial k_1^n \partial k_2^m} \, p_{d',d''}(k) \right| \leq C_{m,n} \, \frac{C}{|d|}. $$ This yields the desired bounds for $p_1$ and $p_2$ and completes the proof. 
\end{proof} \begin{proof}[Proof of Proposition \ref{p:jac}] (i) Similarly as in \eqref{as1} we have $$ |\beta_2^{(1)}(k)| \le \frac{\varepsilon}{900} \, \frac{1}{|z_1(k)|} \qquad \text{and} \qquad |\eta_2^{(1)}(k)| \le \frac{\varepsilon}{900} \, \frac{1}{|z_2(k)|}. $$ Thus, in view of \eqref{decg1g2}, and by choosing $\rho$ sufficiently large, $$ |x_1(k)-w_1(k)| \le |\beta_2^{(1)}(k) \, z_1(k) + g_1(k)| \le \frac{\varepsilon}{900} + \frac{C}{\rho} < \frac{\varepsilon}{8}, $$ and similarly $|x_2(k)-w_2(k)| < \varepsilon/8$. This proves part (i). (ii) Recall \eqref{ch1} and \eqref{x1x2}. Then, for $1 \le j \le 2$, \begin{align*} \frac{\partial x_1}{\partial k_j} & = \frac{\partial}{\partial k_j} (w_1 + z_1 \beta_2^{(1)} + g_1) = \frac{\partial w_1}{\partial k_j} + z_1 \frac{\partial \beta_2^{(1)}}{\partial k_j} + \frac{\partial z_1}{\partial k_j} \beta_2^{(1)} + \frac{\partial g_1}{\partial k_j}, \\ \frac{\partial x_2}{\partial k_j} & = \frac{\partial}{\partial k_j} (w_2 + z_2 \eta_2^{(1)} + g_2) = \frac{\partial w_2}{\partial k_j} + z_2 \frac{\partial \eta_2^{(1)}}{\partial k_j} + \frac{\partial z_2}{\partial k_j} \eta_2^{(1)} + \frac{\partial g_2}{\partial k_j}. \end{align*} First observe that the functions $g_1$ and $g_2$ are similar to the function $g$ (see \eqref{defg1g2} and \eqref{defg}). Thus, it is easy to see that $\frac{\partial g_1}{\partial k_j}$ and $\frac{\partial g_2}{\partial k_j}$ are given by expressions similar to \eqref{ee1}. Since $k \in T_\nu(0) \cap T_{\nu'}(d)$ we have $|w_1(k)| < \varepsilon$ and $|w_2(k)| < \varepsilon$. Recall also the inequalities in Proposition \ref{p:order}(ii). Hence, by Lemmas \ref{l:b1}(ii), \ref{l:der}(ii) and \ref{l:der2}(ii), we obtain \eqref{ee2} with $k_1$ and $z(k)$ replaced by $k_j$ and $z_1(k)$, respectively, and for $k_1$, $z(k)$ and $\beta$ replaced by $k_j$, $z_2(k)$ and $\eta$, respectively. Consequently, similarly as in \eqref{ee3} and using again Lemma \ref{l:b1}(ii), for $1 \le j \le 2$ we have $$ \left| \frac{\partial z_1}{\partial k_j} \beta_2^{(1)} + \frac{\partial g_1}{\partial k_j} \right| \leq C_{\varepsilon,\Lambda,q,A} \, \frac{1}{\rho} \qquad \text{and} \qquad \left| \frac{\partial z_2}{\partial k_j} \eta_2^{(1)} + \frac{\partial g_2}{\partial k_j} \right| \leq C_{\varepsilon,\Lambda,q,A} \, \frac{1}{\rho}. $$ Now recall that $\beta_2 = J_\nu^{00}$ and $\eta_2 = J_{\nu'}^{dd}$. Then, by Proposition \ref{p:newv}, Lemma \ref{l:b1}(ii), and \eqref{alpha}, it follows that \begin{align*} \beta_2^{(1)}(k) & = (J_\nu^{00})^{(1)}(k) = \sum_{b,c \in G'_1} \frac{(1,i(-1)^\nu) \cdot \hat{A}(-b)}{N_b(k)} \, S_{b,c} \, (1,-i(-1)^\nu) \cdot \hat{A}(c), \\ \eta_2^{(1)}(k) & = (J_{\nu'}^{dd})^{(1)}(k) = \sum_{b,c \in G'_1} \frac{(1,i(-1)^{\nu'}) \cdot \hat{A}(d-b)}{N_b(k)} \, S_{b,c} \, (1,-i(-1)^{\nu'}) \cdot \hat{A}(c-d). \end{align*} Hence, by Lemma \ref{l:der2}(ii), similarly as in \eqref{ee4}, for $1 \le j \le 2$, $$ \left | z_1(k) \frac{ \partial \beta_2^{(1)}(k)}{\partial k_j} \right| \leq \frac{13}{\Lambda^2} \| (1,-i(-1)^\nu) \cdot \hat{A} \|_{l^1}^2 < \frac{1}{7 \cdot 3^4} \qquad \text{and} \qquad \left | z_2(k) \frac{ \partial \eta_2^{(1)}(k)}{\partial k_j} \right| < \frac{1}{7 \cdot 3^4}. 
$$ Therefore, \begin{align*} \begin{pmatrix} \frac{\partial x_1}{\partial k_1} & \frac{\partial x_1}{\partial k_2} \\ \frac{\partial x_2}{\partial k_1} & \frac{\partial x_2}{\partial k_2} \end{pmatrix} & = \begin{pmatrix} 1 & i(-1)^\nu \\ 1 & i(-1)^{\nu'} \end{pmatrix} + \begin{pmatrix} z_1(k) \frac{ \partial \beta_2^{(1)}(k)}{\partial k_1} & z_1(k) \frac{\partial \beta_2^{(1)}(k)}{\partial k_2} \\ z_2(k) \frac{ \partial \eta_2^{(1)}(k)}{\partial k_1} & z_2(k) \frac{ \partial \eta_2^{(1)}(k)}{\partial k_2} \end{pmatrix} \\ & \qquad + \begin{pmatrix} \beta_2^{(1)} & - i(-1)^\nu \beta_2^{(1)} \\ \eta_2^{(1)} & - i(-1)^{\nu'} \eta_2^{(1)} \end{pmatrix} + \begin{pmatrix} \frac{\partial g_1}{\partial k_1} & \frac{\partial g_1}{\partial k_2} \\ \frac{\partial g_2}{\partial k_1} & \frac{\partial g_2}{\partial k_2} \end{pmatrix} \\ & \eqqcolon \begin{pmatrix} 1 & i(-1)^\nu \\ 1 & i(-1)^{\nu'} \end{pmatrix} ( I + M_1 + M_2 + M_3), \end{align*} where $$ \| M_1 \| \leq 2 \, \frac{2}{7 \cdot 3^4} \qquad \text{and} \qquad \| M_2 + M_3 \| \leq C_{\varepsilon,\Lambda,q,A} \, \frac{1}{\rho}. $$ Set $M \coloneqq M_1 + M_2 + M_3$. This proves the first claim. Now, by choosing $\rho$ sufficiently large we can make $\| M \| < \frac{1}{2}$. Write $$ P \coloneqq \begin{pmatrix} 1 & i(-1)^\nu \\ 1 & i(-1)^{\nu'} \end{pmatrix}. $$ Then, by the inverse function theorem and using the Neumann series, \begin{align*} \begin{pmatrix} \frac{\partial k_1}{\partial x_1} & \frac{\partial k_1}{\partial x_2} \\ \frac{\partial k_2}{\partial x_1} & \frac{\partial k_2}{\partial x_2} \end{pmatrix} & = \begin{pmatrix} \frac{\partial x_1}{\partial k_1} & \frac{\partial x_1}{\partial k_2} \\ \frac{\partial x_2}{\partial k_1} & \frac{\partial x_2}{\partial k_2} \end{pmatrix}^{-1} = (I + M)^{-1} P^{-1} \eqqcolon (I + \tilde{M}) P^{-1} \\ & = P^{-1} (I + P \tilde{M} P^{-1}) = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ i(-1)^{\nu'} & i(-1)^\nu \end{pmatrix} (I + P \tilde{M} P^{-1}), \end{align*} with $$ \| P \tilde{M} P^{-1} \| \leq 2 \| \tilde{M} \| \le \frac{2 \| M \| }{1 - \| M \|} \le 4 \| M \|. $$ Set $N \coloneqq P \tilde{M} P^{-1}$. This proves the second claim. Differentiating the identity $\frac{\partial x}{\partial k} \, \frac{\partial k}{\partial x} = I$ and applying the chain rule we find that $$ \frac{\partial^2 k_m}{\partial x_i \partial x_j} = - \sum_{l,p=1}^2 \frac{\partial k_m}{\partial x_l} \, \frac{\partial}{\partial x_i} \left( \frac{\partial x_l}{\partial k_p} \right) \frac{\partial k_p}{\partial x_j} = - \sum_{l,r,p=1}^2 \frac{\partial k_m}{\partial x_l} \, \frac{\partial^2 x_l}{\partial k_r \partial k_p} \, \frac{\partial k_r}{\partial x_i} \, \frac{\partial k_p}{\partial x_j}. $$ Furthermore, in view of the above calculations we have $$ \left| \frac{\partial k_i}{\partial x_j} \right| \leq \frac{1}{2} ( 1 + \| N \| ) \leq \frac{1}{2} ( 1 + 4 \| M \| ) < \frac{1}{2} \left( 1 + 4 \, \frac{1}{2} \right) = \frac{3}{2}. $$ Thus, $$ \left| \frac{\partial^2 k_m}{\partial x_i \partial x_j} \right| \leq 4 \left( \frac{3}{2} \right)^3 \sup_{l,r,p} \left| \frac{\partial^2 x_l}{\partial k_r \partial k_p} \right|. $$ We now estimate $$ \frac{\partial^2 x_1}{\partial k_i \partial k_j} = \frac{\partial z_1}{\partial k_i} \, \frac{\partial \beta_2^{(1)}}{\partial k_j} + z_1 \frac{\partial^2 \beta_2^{(1)}}{\partial k_i \partial k_j} + \frac{\partial z_1}{\partial k_j} \, \frac{\partial \beta_2^{(1)}}{\partial k_i} + \frac{\partial^2 g_1}{\partial k_i \partial k_j} \qquad \text{and} \qquad \frac{\partial^2 x_2}{\partial k_i \partial k_j}.
$$ From \eqref{ee1} with $g$, $w$ and $z$ replaced by $g_1$, $w_1$ and $z_1$, respectively, we obtain \begin{align*} \frac{\partial^2 g_1}{\partial k_1^2} & = \frac{\partial^2 \beta_1}{\partial k_1^2} \frac{w_1^2}{z_1} + 2 \frac{\partial \beta_1}{\partial k_1} \frac{2w_1 z_1 - w_1^2}{z_1^2} + \beta_1 \, \frac{2 z_1^2 - 6w_1z_1 + 4 w_1^2}{z_1^3} + \left( \frac{\partial^2 \beta_2^{(2)}}{\partial k_1^2} + \frac{\partial^2 \beta_2^{(3)}}{\partial k_1^2} \right) z_1 \\ & \quad + 2 \left( \frac{\partial \beta_2^{(2)}}{\partial k_1} + \frac{\partial \beta_2^{(3)}}{\partial k_1} \right) + \frac{\partial^2 \beta_3}{\partial k_1^2} w_1 + 2 \frac{\partial \beta_3}{\partial k_1} + \frac{\partial^2 \beta_4}{\partial k_1^2} \, \frac{w_1}{z_1} + 2 \frac{\partial \beta_4}{\partial k_1} \, \frac{z_1-w_1}{z_1^2} \\ & \quad + \beta_4 \, \frac{2(w_1-z_1)}{z_1^3} + \frac{\partial^2 \beta_5}{\partial k_1^2} + \frac{\partial^2 \beta_6}{\partial k_1^2} \, \frac{1}{z_1} - 2 \frac{\partial \beta_6}{\partial k_1} \, \frac{1}{z_1^2} + 2 \frac{\beta_6}{z_1^3} + \frac{2 \hat{q}(0)}{z_1^3}. \end{align*} Hence, by Lemmas \ref{l:b1}(ii), \ref{l:der}(ii) and \ref{l:der2}(ii), $$ \left| \frac{\partial^2 g_1}{\partial k_1^2} \right| \le C_{\varepsilon, \Lambda, q, A} \, \frac{1}{\rho}. $$ Similarly we prove that $$ \left| \frac{\partial^2 g_l}{\partial k_i \partial k_j} \right| \le C_{\varepsilon, \Lambda, q, A} \, \frac{1}{\rho} $$ for all $l,i,j \in \{1,2\}$ because all the derivatives acting on $g_l$ are essentially the same up to constant factors (see \cite{deO}). Furthermore, again by Lemma \ref{l:der2}(ii), $$ \left| \frac{\partial \beta_2^{(1)}}{\partial k_j} \right| \le C_{\varepsilon, \Lambda, q, A} \, \frac{1}{\rho} , \qquad \left| \frac{\partial \eta_2^{(1)}}{\partial k_j} \right| \le C_{\varepsilon, \Lambda, q, A} \, \frac{1}{\rho}, $$ and $$ \left | z_1(k) \frac{ \partial^2 \beta_2^{(1)}(k)}{\partial k_i \partial k_j} \right| \leq \frac{65}{\Lambda^3} \| (1,-i(-1)^\nu) \cdot \hat{A} \|_{l^1}^2 < \frac{1}{5 \Lambda^3} \, \varepsilon^2, \qquad \left | z_2(k) \frac{ \partial^2 \eta_2^{(1)}(k)}{\partial k_i \partial k_j} \right| < \frac{1}{5 \Lambda^3} \, \varepsilon^2. $$ Hence, $$ \left| \frac{\partial^2 x_l}{\partial k_i \partial k_j} \right| \leq \frac{1}{5 \Lambda^3} \, \varepsilon^2 + C_{\varepsilon, \Lambda, q, A} \, \frac{1}{\rho}. $$ Therefore, $$ \left| \frac{\partial^2 k_m}{\partial x_i \partial x_j} \right| \leq 4 \left( \frac{3}{2} \right)^3 \sup_{l,r,p} \left| \frac{\partial^2 x_l}{\partial k_r \partial k_p} \right| \le \frac{3}{\Lambda^3} \, \varepsilon^2 + C_{\varepsilon, \Lambda, q, A} \, \frac{1}{\rho}. $$ This completes the proof of the proposition. \end{proof}
\section*{Introduction} \noindent Let $k$ be a field that is finitely generated over~$\bbQ$ and let $X$ be a geometrically connected variety over~$k$. We do not assume that $X$ is complete or non-singular. For $\ell$ a prime number, let $H_\ell = \mathrm{H}^1(X_\kbar,\bbZ_\ell)$ be the $\ell$-adic cohomology in degree~$1$, on which we have a Galois representation $\rho_\ell = \rho_{\ell,X} \colon \Gal(\kbar/k) \to \GL(H_\ell)$. The Zariski closure $\sG_\ell = \sG_{\ell,X}$ of $\Image(\rho_\ell)$ is an algebraic subgroup of~$\sGL(H_\ell)$. Though our understanding of the groups~$\sG_\ell$ is still incomplete, we have at our disposal several highly non-trivial results about their structure; see for instance \cite{SerreRibet}, \cite{LaPi1992} and~\cite{LaPi1995}. In this paper we are mostly concerned with the actual images $\Image(\rho_\ell)$, and especially the way they vary with~$\ell$. It can be shown that $\Image(\rho_\ell)$ is open---and hence of finite index---in~$\sG_\ell(\bbZ_\ell)$ for every~$\ell$. Our first main result is that the index $\bigl[\sG_\ell(\bbZ_\ell) : \Image(\rho_\ell)\bigr]$ is in fact bounded: \vspace{\topsep} \noindent \textbf{Theorem A.} \emph{With $X/k$ as above, $\Image(\rho_\ell)$ is an open subgroup of\/~$\sG_\ell(\bbZ_\ell)$ for every~$\ell$ and the index $\bigl[\sG_\ell(\bbZ_\ell) : \Image(\rho_\ell)\bigr]$ is bounded when $\ell$ varies. The same is true if we everywhere replace $\mathrm{H}^1(X_\kbar,\bbZ_\ell)$ by $\mathrm{H}^1_{\text{c}}\bigl(X_\kbar,\bbZ_\ell(1)\bigr)$.} \vspace{\baselineskip} \noindent In either variant ($\mathrm{H}^1$ or $\mathrm{H}^1_{\text{c}}(1)$) this result (Corollary~\ref{cor:indexbdd} in the text) is in fact a consequence of a similar, but more general, result about Galois representations associated with $1$-motives; see Theorem~\ref{thm:Main}. The proof of the result about $1$-motives occupies most of Section~\ref{sec:Indices}. It has three main ingredients. To handle the case of an abelian variety, in which case $\sG_\ell$ is reductive over~$\bbZ_\ell$ for almost all~$\ell$, we use a result of Wintenberger~\cite{WintenbLang} to control the derived group, and we use techniques from our previous paper~\cite{CadMoon} to control the abelian part. To perform the step from abelian varieties to general $1$-motives we then have to study the unipotent radicals of the groups~$\sG_\ell$; here we make essential use of results of Jossen~\cite{Jossen}. \vspace{\baselineskip} \noindent We apply Theorem~A to improve a recent result of Litt~\cite{Litt}. Let $X/k$ be as above and assume $X$ is normal and geometrically integral. Following Litt, we say that a representation of the geometric fundamental group $\tau \colon \pi_1(X_\kbar) \to \GL_n(\bbZ_\ell)$ is \emph{arithmetic} if, possibly after replacing~$k$ by a finite extension, it appears as a subquotient of a representation of the arithmetic fundamental group~$\pi_1(X)$. (We leave out base points from the notation.) Litt's remarkable result is that we can obtain important information about arithmetic representations from their truncations modulo a power of~$\ell$ that does not depend on the representation. In particular he proves (\cite{Litt}, Theorem~1.2) that there exists a constant~$N$, depending on $X$ and~$\ell$ but not on~$\tau$, such that every arithmetic representation $\tau \colon \pi_1(X_\kbar) \to \GL_n(\bbZ_\ell)$ that is trivial modulo~$\ell^N$ is in fact unipotent.
Our second main result gives the improvement that the dependence on~$\ell$ can be eliminated: \vspace{\topsep} \noindent \textbf{Theorem B.} \emph{With $X/k$ as above, there exists an integer~$\ell_X$ such that for every prime $\ell \geq \ell_X$, every arithmetic representation $\tau \colon \pi_1(X_\kbar) \to \GL_n(\bbZ_\ell)$ that is trivial modulo~$\ell$ is unipotent. In particular, in Litt's result the constant~$N$ can be chosen depending only on~$X$, independently of~$\ell$.} \vspace{\baselineskip} \noindent The proof is given in Section~\ref{sec:Litt}. \subsection*{Notation and conventions.}\label{ssec:Notat} If $\sG$ is an algebraic group, $\sG^\der$ denotes its derived subgroup. We write~$\sG^\ab$ for $\sG/\sG^\der$ and $\ab \colon \sG \to \sG^\ab$ for the canonical map. If $\sG$ is a group scheme over a field, $\sG^\idcomp$ denotes its identity component. If $\sG$ is a group scheme over a Dedekind domain with generic point $\eta$, then by $\sG^\idcomp$ we mean the closed subgroup scheme of~$\sG$ whose generic fibre is~$(\sG_\eta)^\idcomp$. Unless indicated otherwise, reductive group schemes are assumed to have connected fibres. If $\sG$ is a reductive group scheme over a ring~$R$ (for us usually $R = \bbZ_\ell$), let $p \colon \sG^\sconn \to \sG^\der$ be the simply connected cover of its derived subgroup; then we define \[ \sG^\der(R)_\mathrm{u} = \Image\bigl(p\colon \sG^\sconn(R) \to \sG^\der(R)\bigr)\, . \] By the rank of a reductive group over a connected base scheme we mean the absolute rank of its fibres. If $R$ is a ring and $H$ is a free $R$-module of finite type, we denote by $\sGL(H)$ the associated reductive group over~$R$ and by $\GL(H) = \sGL(H)\bigl(R\bigr)$ the (abstract) group of $R$-linear automorphisms of~$H$. \section{Indices of images of Galois representations in their Zariski closure} Let $k$ be a finitely generated extension of $\bbQ$. \label{sec:Indices} \subsection{Notation.} \label{ssec:1MotNot} We refer to \cite{DelHodge3}, Section~10, and \cite{Jossen}, Section~1, for the basic notions of $1$-motives. Let $M$ be a $1$-motive over $k$. For $\ell$ a prime number, let $T_\ell(M)$ be the $\ell$-adic realization of~$M_\kbar$, and let \[ \rho_{\ell,M} \colon \Gal(\kbar/k) \to \GL\bigl(T_\ell(M)\bigr) \] be the associated Galois representation. We denote by $\sG_{\ell,M} \subset \sGL\bigl(T_\ell(M)\bigr)$ the Zariski closure of the image of~$\rho_{\ell,M}$, which is a subgroup scheme of~$\sGL\bigl(T_\ell(M)\bigr)$, flat over~$\bbZ_\ell$. \begin{theorem}\label{thm:Main} With $M/k$ as above, $\Image(\rho_{\ell,M})$ is an open subgroup of ~$\sG_{\ell,M}(\bbZ_\ell)$ for every~$\ell$ and $\bigl[\sG_{\ell,M}(\bbZ_\ell) : \Image(\rho_{\ell,M})\bigr]$ is bounded when $\ell$ varies. \end{theorem} The proof consists of several steps. We start with a lemma. \begin{lemma}\label{lem:GroupLemma} \begin{enumerate} \item\label{itm:Isogeny} Let $\sG$ be a reductive group over~$\bbZ_\ell$. Let $\sZ$ be the centre of\/~$\sG$, so that $q\colon \sZ^\idcomp \to \sG^\ab$ is an isogeny. Then there is a constant $C_1$, depending only on the rank of\/~$\sG$, such that $q$ has degree at most~$C_1$. 
\item\label{itm:scCover} Given a positive integer~$r$ there exist constants $C_2$ and $\ell_0$, depending only on~$r$, such that the following holds: For every prime number $\ell \geq \ell_0$ and every reductive group~$\sG$ over~$\bbZ_\ell$ of rank at most~$r$, the subgroup $\sG^\der(\bbZ_\ell)_\mathrm{u} \subset \sG^\der(\bbZ_\ell)$ (see the end of the Introduction for notation) has index at most~$C_2$. \end{enumerate} \end{lemma} \begin{proof} Let $r$ be given. It follows from the classification of split semisimple groups in terms of root data that there exists a constant $C = C(r) > 0$ such that for every reductive group~$\sG$ of rank $\leq r$ over~$\bbZ_\ell$ the isogeny $p \colon \sG^\sconn \to \sG^\der$ has degree at most~$C$ and also the centre of~$\sG^\der$, which is a finite flat $\bbZ_\ell$-group scheme, has order at most~$C$. Part~\ref{itm:Isogeny} follows because the kernel of~$q$ is contained in the centre of~$\sG^\der$. For~\ref{itm:scCover}, take $\ell_0 = C(r)!$. Let $\ell$ be a prime number with $\ell > \ell_0$, and let $\sG$ be a reductive group over~$\bbZ_\ell$ of rank at most~$r$. Define $\mu_\sG = \Ker(p\colon \sG^\sconn \to \sG^\der)$. Then $\ell$ does not divide the order of the finite group scheme~$\mu_\sG$, which is therefore finite \'etale over~$\bbZ_\ell$. Writing $\bbF = \overline{\bbF}_\ell$, Galois cohomology gives a short exact sequence \[ \sG^\sconn(\bbF) \tto \sG^\der(\bbF) \tto \mathrm{H}^1\bigl(\Gal(\bbF/\bbF_\ell),\mu_\sG(\bbF)\bigr) \to 1\, . \] Since $\Gal(\bbF/\bbF_\ell) \cong \Zhat$, the $\mathrm{H}^1$ that appears is a subquotient of $\mu_\sG(\bbF)$ (see \cite{SerreCL}, Section~XIII.1), and therefore $\bigl[\sG^\der(\bbF_\ell):\sG^\der(\bbF_\ell)_\mathrm{u}\bigr] \leq C(r)$. By Proposition~1 of~\cite{WintenbLang} it follows that $\bigl[\sG^\der(\bbZ_\ell): \sG^\der(\bbZ_\ell)_\mathrm{u}\bigr] \leq C(r)$. \end{proof} \subsection{The case of an abelian variety.}\label{AV} We now prove Theorem~\ref{thm:Main} when $M=A$ is an abelian variety. \subsubsection{} As a first step we reduce the problem to the case where the base field is a number field. Since $k$ is finitely generated over $\bbQ$, there exists an integral scheme~$S$ of finite type over~$\bbQ$, with generic point~$\eta$, and an abelian scheme $X \to S$ such that $k$ is the function field of~$S$ and $A$ is isomorphic to the generic fiber~$X_\eta$ of~$X/S$. Choose a prime number~$\ell$. By a result of Serre (see~\cite{SerreRibet}) there exists a closed point $s \in S$ such that the image of the Galois representation~$\rho_{\ell,X_s}$ is the same, via a specialization isomorphism $\spcl\colon T_{\ell}(X_\eta) \isomarrow T_\ell(X_s)$, as the image of~$\rho_{\ell,X_\eta}$. (Note that Serre's result can also be applied to finitely many prime numbers~$\ell$ at the same time, but not to infinitely many~$\ell$.) It then follows from \cite{Cadoret}, Theorem~1.2, that the image of the adelic Galois representation \[ \rho_{\Zhat,X_s} \colon \Gal(\kbar/k) \to \prod_{\ell} \GL\bigl(T_\ell(X_s)\bigr) \] is open in the image of~$\rho_{\Zhat,X_\eta}$. This implies that $\sG_{\ell,X_s}(\bbZ_\ell)$ has bounded index in $\sG_{\ell,X_\eta}(\bbZ_\ell)$, where again we compare the two via the map~$\spcl$. So it suffices to prove the result for~$X_s$ over the number field~$\kappa(s)$. \subsubsection{} Now assume that $k$ is a number field. To simplify notation, write $\rho_\ell = \rho_{\ell,A}$ and $\sG_\ell = \sG_{\ell,A}$. 
As we may replace $k$ by a finite extension, we may further assume that the group schemes~$\sG_\ell$ have connected fibers. (See \cite{SerreRibet} or \cite{LaPi1992}, Proposition~6.14. It actually suffices to assume that all $n$-torsion points of~$A$ are $k$-rational for some $n \geq 3$.) Moreover, since by \cite{Bogomolov}, Theorem~1, we know that $\Image(\rho_\ell)$ is open in~$\sG_\ell(\bbZ_\ell)$ for every~$\ell$, we may exclude finitely many prime numbers~$\ell$ from our considerations. By \cite{LaPi1995}, Proposition~1.3 together with \cite{WintenbLang}, Theorem~2, the set~$\cL$ of primes numbers~$\ell$ for which the group scheme~$\sG_\ell$ is reductive and $\Image(\rho_\ell)$ contains $\sG_\ell^\der(\bbZ_\ell)_\unip$ contains all but finitely many~$\ell$. Hence it suffices to find a constant~$C$ such that $\bigl[\sG_\ell(\bbZ_\ell) : \Image(\rho_\ell)\bigr] \leq C$ for almost all $\ell \in \cL$. Writing $\rho_\ell^\ab \colon \Gal(\kbar/k) \to \sG_\ell^\ab(\bbZ_\ell)$ for the composition of~$\rho_\ell$ and the canonical map $\ab\colon \sG_\ell (\bbZ_\ell)\to \sG_\ell^\ab(\bbZ_\ell)$, we have a commutative diagram with exact rows \[ \begin{tikzcd} 1 \ar[r] & \Image(\rho_\ell) \cap \sG_\ell^\der(\bbZ_\ell) \ar[r] \ar[d,hook] & \Image(\rho_\ell) \ar[r] \ar[d,hook] & \Image(\rho_\ell^\ab) \ar[r] \ar[d,hook] & 1\\ 1 \ar[r] & \sG_\ell^\der(\bbZ_\ell) \ar[r] & \sG_\ell(\bbZ_\ell) \ar[r] & \sG_\ell^\ab(\bbZ_\ell) & \end{tikzcd} \] As the reductive groups~$\sG_\ell$, for $\ell \in \cL$, all have rank at most $2\cdot \dim(A)$, it follows from Lemma~\ref{lem:GroupLemma}\ref{itm:scCover} that there is an $\ell_0$ and a constant~$C_2$ such that $\Image(\rho_\ell) \cap \sG_\ell^\der(\bbZ_\ell)$ has index at most~$C_2$ in~$\sG_\ell^\der(\bbZ_\ell)$ for all $\ell \geq \ell_0$ in~$\cL$. It now only remains to find a bound for the index of~$\Image(\rho_\ell^\ab)$ in~$\sG_\ell^\ab(\bbZ_\ell)$. \subsubsection{} Choose an embedding $k \hookrightarrow \bbC$ and let $T_\Betti = \mathrm{H}_1(A_\bbC,\bbZ)$. Let $\sG_\Betti \subset \sGL(T_\Betti)$ be the (integral) Mumford--Tate group of~$A_\bbC$, which is part of a Shimura datum $(\sG_\Betti,X)$. Let $(\sG^\ab,X^\ab)$ be the associated abelian Shimura datum. Choose neat compact open subgroups $K \subset \sG_\Betti(\Zhat)$ and $K^\ab \subset \sG_\Betti^\ab(\Zhat)$ with $\ab(K) \subseteq K^\ab$. Possibly after replacing~$k$ with a finite extension (which we may do), there exists a level~$K$ structure on~$A$, and the choice of such allows us to associate with~$A$ a $k$-rational point~$s$ on the Shimura variety $\Sh_K(\sG_\Betti,X)$. Let $S$ denote the irreducible component of~$\Sh_K(\sG_\Betti,X)$ containing~$s$, and let $S^\ab \subset \Sh_{K^\ab}(\sG_\Betti^\ab,X^\ab)$ be its image. In Section~3 of~\cite{CadMoon} we have defined representations $\phi \colon \pi_1(S) \to K$ and $\phi^\ab \colon \pi_1(S^\ab) \to K^\ab$ that fit into a commutative diagram \[ \begin{tikzcd} \pi_1(S) \ar[r,"\phi"] \ar[d] & K \ar[d,"\ab"]\\ \pi_1(S^\ab) \ar[r,"\phi^\ab"] & K^\ab \end{tikzcd} \] (As explained in loc.\ cit., these representations are essentially independent of the choice of geometric base point, which we therefore omit from the notation.) Moreover, it is shown in ibid., Section~5, that the image of~$\phi$ in $K \subset \sG_\Betti(\Zhat)$ equals the image of~$\rho_{\Zhat,A}$ in $\prod_\ell\, \sG_\ell(\bbZ_\ell) \subset \sG_\Betti(\Zhat)$. 
By ibid., Proposition~3.5 and Corollary~3.7 (applied to $(\sG^\ab,X^\ab)$) it follows that the image of the composite homomorphism \[ \Gal(\kbar/k) \xrightarrow{~\rho_\ell^\ab~} \sG_\ell^\ab(\bbZ_\ell) \tto \sG_\Betti^\ab(\bbZ_\ell) \] has bounded index in~$\sG_\Betti^\ab(\bbZ_\ell)$, for varying~$\ell$. By \cite{UllYafQuebec}, Corollary~2.11, or \cite{Vasiu}, Theorem~1.3.1, the Mumford--Tate conjecture is true on connected centers. More precisely: under the inclusion $\sG_\ell \hookrightarrow \sG_\Betti \otimes \bbZ_\ell$ the connected center $\sZ_\ell^\idcomp$ of~$\sG_\ell$ maps into $\sZ_\Betti \otimes \bbZ_\ell$ and induces an isomorphism $\sZ_\ell^\idcomp \isomarrow \sZ_\Betti^\idcomp \otimes \bbZ_\ell$. (We only need to consider the primes $\ell \in \cL$, for which $\sZ_\ell^\idcomp$ and~$\sZ_\Betti^\idcomp \otimes \bbZ_\ell$ are tori over~$\bbZ_\ell$.) By Lemma~\ref{lem:GroupLemma}\ref{itm:Isogeny} it follows that the natural homomorphisms $\sG_\ell^\ab \to \sG_\Betti^\ab \otimes \bbZ_\ell$ are isogenies of bounded degree, and therefore the index of~$\Image(\rho_\ell^\ab)$ in~$\sG_\ell^\ab(\bbZ_\ell)$ is bounded when $\ell$ varies. This completes the proof of Theorem~\ref{thm:Main} in the case where $M = A$ is an abelian variety. \begin{remark} The proof shows that for $\ell \gg 0$ (depending on~$A$) the index $\bigl[\sG_\ell(\bbZ_\ell) : \Image(\rho_\ell)\bigr]$ can be bounded by a constant that only depends on the Mumford--Tate group~$\sG_\Betti$ and the abelianized Shimura datum $(\sG_\Betti^\ab,X^\ab)$. \end{remark} \subsection{The case of a split $1$-motive.} Next, we consider a split $1$-motive $M = [Y \xrightarrow{0} A \times T]$ (i.e., a $1$-motive in which all extensions are trivial). As before we may replace~$k$ with a finite extension. We may therefore assume that $T$ is a split torus and that $Y$ is constant. Then the projection $\sG_{\ell,M} \to \sG_{\ell,A\times T}$ is an isomorphism and restricts to an isomorphism $\Image(\rho_{\ell,M}) \isomarrow \Image(\rho_{\ell,A\times T})$; this reduces the problem to the case $Y=0$. The case where $T$ is trivial is treated in \ref{AV}, so we assume $T \neq \{1\}$. As $T$ is a split torus, the Galois group $\Gal(\kbar/k)$ acts on $T_\ell(T)$ through the $\ell$-adic cyclotomic character~$\chi_\ell$. If the abelian variety~$A$ is zero, then $\Image(\rho_{\ell,M}) \isomarrow \Image(\chi_\ell)$ for all~$\ell$, and the fact that $k\cap \bbQ^\ab$ is a finite extension of~$\bbQ$ gives the desired conclusion that $\Image(\rho_{\ell,M})$ is of bounded index in~$\bbZ_\ell^\times$. Assume then that $Y=0$ and $A \neq 0$. The group scheme $\sG_{\ell,M}$ is a closed subgroup scheme of $\sG_{\ell,A} \times \sG_{\ell,T}$. Let $V_\ell(A) = T_\ell(A) \otimes \bbQ_\ell$. Choose a polarization of~$A$, and let $\psi_\ell \colon V_\ell(A) \times V_\ell(A) \to \bbQ_\ell(1)$ be the associated alternating bilinear form. The image of~$\rho_{\ell,A}$ is contained in the group of symplectic similitudes $\CSp\bigl(V_\ell(A),\psi_\ell\bigr)$, and if $\nu \colon \CSp\bigl(V_\ell(A),\psi_\ell\bigr) \to \bbQ_\ell^\times$ is the multiplier character, we have the relation $\nu \circ \rho_{\ell,A} = \chi_\ell$. It follows that the image of~$\rho_{\ell,M}$ is contained in the graph of~$\nu$, viewed as a subgroup of $\sG_{\ell,A}(\bbQ_\ell) \times \sG_{\ell,T}(\bbQ_\ell)$ and hence the projection map $\sG_{\ell,M}(\bbQ_\ell) \to \sG_{\ell,A}(\bbQ_\ell)$ is injective. 
This gives us a commutative diagram \[ \begin{tikzcd} \Image(\rho_{\ell,M}) \ar[r,hook] \ar[d,"\wr"] & \sG_{\ell,M}(\bbZ_\ell) \ar[r,hook] \ar[d,hook] & \sG_{\ell,M}(\bbQ_\ell) \ar[d,hook]\\ \Image(\rho_{\ell,A}) \ar[r,hook] & \sG_{\ell,A}(\bbZ_\ell) \ar[r,hook] & \sG_{\ell,A}(\bbQ_\ell) \end{tikzcd} \] and it follows that $\bigl[\sG_{\ell,M}(\bbZ_\ell) : \Image(\rho_{\ell,M})\bigr] \leq \bigl[\sG_{\ell,A}(\bbZ_\ell) : \Image(\rho_{\ell,A})\bigr]$. The theorem for~$M$ now follows from the case of an abelian variety treated in \ref{AV}. \subsection{The general case.} As a last step in the proof, we now turn to the case of a general $1$-motive $M = [Y \to G]$. \subsubsection{} The semi-abelian variety~$G$ is an extension of an abelian variety~$A$ by a torus~$T$. On~$M$ we have a weight filtration whose graded pieces are $T$, $A$ and~$Y$, respectively. Let $\tilde{M} = T \oplus A \oplus Y$ (or, in different notation, $\tilde{M} = [Y \xrightarrow{0} A\times T]$) be the associated total graded, which is a split $1$-motive. As before we may replace~$k$ with a finite extension. We therefore can, and from now on will, assume that $T$ is a split torus, that $Y$ is constant, and that all $3$-torsion points of~$A$ are $k$-rational. This implies that the group schemes $\sG_{\ell,M}$ and~$\sG_{\ell,\tilde{M}}$ have connected fibers. \subsubsection{} Let $W$ denote the weight filtration on~$T_\ell(M)$. The Galois representation~$\rho_{\ell,M}$ takes values in the parabolic subgroup $\sStab_W \subset \sGL\bigl(T_\ell(M)\bigr)$. We have a natural identification $\gr^W\bigl(T_\ell(M)\bigr) \isomarrow T_\ell(\tilde{M})$ and if $\pi \colon \sStab_W \to \sGL\bigl(T_\ell(\tilde{M})\bigr)$ is the natural homomorphism then we have $\pi \circ \rho_{\ell,M} = \rho_{\ell,\tilde{M}}$. In particular, $\pi$~restricts to a homomorphism $\sG_{\ell,M} \to \sG_{\ell,\tilde{M}}$. Let $\sU_{\ell,M}$ denote the kernel of the latter homomorphism, which is the intersection of~$\sG_{\ell,M}$ with the unipotent radical of~$\sStab_W$. Let $k\subset k^\unip$ be the field extension (inside~$\kbar$) that corresponds with the kernel of~$\rho_{\ell,\tilde{M}}$. Then $\rho_{\ell,M}$ restricts to a Galois representation $\rho_{\ell,M}^\unip \colon \Gal(\kbar/k^\unip) \to \sU_{\ell,M}(\bbZ_\ell)$. This gives us a commutative diagram with exact rows \[ \begin{tikzcd} 1 \ar[r] & \Image(\rho_{\ell,M}^\unip) \ar[r] \ar[d,hook] & \Image(\rho_{\ell,M}) \ar[r] \ar[d,hook] & \Image(\rho_{\ell,\tilde{M}}) \ar[r] \ar[d,hook] & 1\\ 1 \ar[r] & \sU_{\ell,M}(\bbZ_\ell) \ar[r] & \sG_{\ell,M}(\bbZ_\ell) \ar[r] & \sG_{\ell,\tilde{M}}(\bbZ_\ell) & \end{tikzcd} \] which gives the inequality \[ \bigl[\sG_{\ell,M}(\bbZ_\ell) : \Image(\rho_{\ell,M}) \bigr] \leq \bigl[\sU_{\ell,M}(\bbZ_\ell) : \Image(\rho_{\ell,M}^\unip)\bigr] \cdot \bigl[\sG_{\ell,\tilde{M}}(\bbZ_\ell) : \Image(\rho_{\ell,\tilde{M}}) \bigr]\, . \] (At this point we do not yet know that these numbers are all finite.) \subsubsection{} \label{ssec:LastStep} By the previous part of the proof, $\bigl[\sG_{\ell,\tilde{M}}(\bbZ_\ell) : \Image(\rho_{\ell,\tilde{M}}) \bigr]$ is bounded independently of~$\ell$. What remains to be shown is that also the index of $\Image(\rho_{\ell,M}^\unip)$ in $\sU_{\ell,M}(\bbZ_\ell)$ is finite and bounded when $\ell$ varies. Our proof of this is based on the results of Jossen~\cite{Jossen}. Choose a field embedding $k \to \bbC$. 
Let $T_\Betti(M)$ denote the Hodge realization of~$M_\bbC$, and let $\sG_{\Betti,M} \subset \sGL\bigl(T_\Betti(M)\bigr)$ denote the integral Mumford--Tate group. Again we have a weight filtration~$W$ on~$T_\Betti(M)$, and an associated parabolic subgroup $\sStab_W \subset \sGL\bigl(T_\Betti(M)\bigr)$. Analogous to how we defined~$\sU_{\ell,M}$, let $\sU_{\Betti,M}\subset \sG_{\Betti,M}$ be the intersection of $ \sG_{\Betti,M}$ with the unipotent radical of $\sStab_W $. It follows from the results of Brylinski in~\cite{Bryl} (which extend results of Deligne for abelian varieties), that under the comparison isomorphism $T_\Betti(M) \otimes \bbZ_\ell \isomarrow T_\ell(M)$ we have $\sG_{\ell,M} \subseteq \sG_{\Betti,M} \otimes \bbZ_\ell$. (See also \cite{Jossen}, Theorem~3.1.) Hence $\sU_{\ell,M} \subseteq \sU_{\Betti,M} \otimes \bbZ_\ell$. Writing $\mathfrak{u}_?$ for the Lie algebra of~$\sU_?$, this gives an inclusion of $\bbZ_\ell$-Lie algebras $\mathfrak{u}_{\ell,M} \subseteq \mathfrak{u}_{\Betti,M} \otimes \bbZ_\ell$. Possibly after again replacing $k$ with a finite extension, we may assume that all transformations $\phi \in \GL\bigl(T_2(M)\bigr)$ that lie in the image of the $2$-adic unipotent representation~$\rho_{2,M}^\unip$ have the property that $(\phi - 1)\colon T_2(M) \to T_2(M)$ is divisible by~$2$. For every~$\ell$ we then have a map \[ \vartheta_\ell = \log \circ \rho_{\ell,M}^\unip \colon \Gal(\kbar/k^\unip) \to \mathfrak{u}_{\ell,M}\, , \] whose image is a $\bbZ_\ell$-submodule of~$\mathfrak{u}_{\ell,M}$. Write $V_\Betti(?) = T_\Betti(?) \otimes \bbQ$ and $V_\ell(?) = T_\ell(?) \otimes \bbQ_\ell$. Let $P = P(M)$ be the semi-abelian variety of \cite{Jossen}, Definition~4.3. By ibid., Theorem~6.2, we have an isomorphism $\alpha_\Betti\colon V_\Betti(P) \isomarrow (\mathfrak{u}_{\Betti,M} \otimes \bbQ)$. Moreover, by the first assertion of ibid., Theorem~7.2, this extends to a commutative diagram \[ \begin{tikzcd} V_\ell(P) \ar[r,"\sim"] \ar[d,"\wr"] & \Image(\vartheta_\ell) \otimes \bbQ_\ell \ar[r,hook] & \mathfrak{u}_{\ell,M} \otimes_{\bbZ_\ell} \bbQ_\ell \ar[d,hook] \\ V_\Betti(P) \otimes \bbQ_\ell \ar[rr,"\sim","\alpha_\Betti \otimes 1"'] && \mathfrak{u}_{\Betti,M} \otimes_\bbZ \bbQ_\ell \end{tikzcd} \] and it follows that the inclusion maps $\Image(\vartheta_\ell) \otimes \bbQ_\ell \hookrightarrow \mathfrak{u}_{\ell,M} \otimes_{\bbZ_\ell} \bbQ_\ell \hookrightarrow \mathfrak{u}_{\Betti,M} \otimes_\bbZ \bbQ_\ell$ are isomorphisms. This already implies that $\Image(\rho_{\ell,M}^\unip)$ has finite index in $\sU_{\ell,M}(\bbZ_\ell)$. To bound this index when $\ell$ varies, we use that by the last assertion of ibid., Theorem~7.2, for almost all~$\ell$ the previous diagram restricts to a diagram \[ \begin{tikzcd} T_\ell(P) \ar[r,"\sim"] \ar[d,"\wr"] & \Image(\vartheta_\ell) \ar[r,hook] & \mathfrak{u}_{\ell,M} \ar[d,hook] \\ T_\Betti(P) \otimes \bbZ_\ell \ar[d,hook] && \mathfrak{u}_{\Betti,M} \otimes \bbZ_\ell \ar[d,hook]\\ V_\Betti(P) \otimes \bbQ_\ell \ar[rr,"\sim","\alpha_\Betti \otimes 1"'] && \mathfrak{u}_{\Betti,M} \otimes_\bbZ \bbQ_\ell \end{tikzcd} \] As the index of the lattice $T_\Betti(P)$ inside $T_\Betti(P) + \mathfrak{u}_{\Betti,M}$ (taken inside $\mathfrak{u}_{\Betti,M} \otimes \bbQ$) is finite and independent of~$\ell$, it follows that also the index of~$\Image(\vartheta_\ell)$ inside~$\mathfrak{u}_{\ell,M}$ is bounded when $\ell$ varies. Hence the index of $\Image(\rho_{\ell,M}^\unip)$ in $\sU_{\ell,M}(\bbZ_\ell)$ is finite and bounded when $\ell$ varies. 
The proof of Theorem~\ref{thm:Main} is now complete. \begin{corollary} \label{cor:indexbdd} Let $X$ be a geometrically connected variety over a field~$k$ that is finitely generated over~$\bbQ$. For $\ell$ a prime number, write $H_\ell = \mathrm{H}^1(X_\kbar,\bbZ_\ell)$, let $\rho_{\ell,X} \colon \Gal(\kbar/k) \to \GL(H_\ell)$ be the natural Galois representation, and let\/ $\sG_{\ell,X} \subset \sGL(H_\ell)$ be the Zariski closure of\/~$\Image(\rho_{\ell,X})$. Then\/ $\Image(\rho_{\ell,X})$ is an open subgroup of\/~$\sG_{\ell,X}(\bbZ_\ell)$ for every~$\ell$ and $\bigl[\sG_{\ell,X}(\bbZ_\ell) : \Image(\rho_{\ell,X})\bigr]$ is bounded when $\ell$ varies. The same assertion is true if we everywhere take $H_\ell = \mathrm{H}^1_{\text{c}}\bigl(X_\kbar,\bbZ_\ell(1)\bigr)$. \end{corollary} \begin{proof} We may pass to the dual Galois representation~$H_\ell^\vee$. But by \cite{BaVSri}, Corollary~5.3.2,\footnote{It appears that the definition of $\ell$-adic homology as given in \cite{BaVSri}, Section~2.5, needs to be changed: writing $\Lambda = \bbZ/m\bbZ$ one should define $\mathrm{H}_i(X_\kbar,\Lambda)$ to be $\cH_i\bigl(\mathrm{RHom}(\mathrm{R}\Gamma(X_\kbar,\Lambda),\Lambda)\bigr)$, whereas loc.\ cit.\ uses $\mathrm{RHom}(~,\Lambda(-n)[-2n])$ with $n=\dim(X)$.} $H_\ell^\vee$ is the $\ell$-adic realization of the $1$-motive $\Alb^-(X)$ of $X/k$ as in ibid., Section~7.2; so Theorem~\ref{thm:Main} applies. For the version with $\mathrm{H}^1_{\text{c}}\bigl(X_\kbar,\bbZ_\ell(1)\bigr)$ we apply Theorem~\ref{thm:Main} to the $1$-motive $\mathrm{H}^1_{\text{m}}(X)(1)$ introduced in \cite{DelHodge3}, Section~10.3. \end{proof} \begin{remark} \label{rem:PicCurve} In the next section we apply this result with $X$ a non-singular curve and $H_\ell = \mathrm{H}^1(X_\kbar,\bbZ_\ell)$. In that case we can be much more explicit about the $1$-motive whose $\ell$-adic realization gives the Galois representation~$H_\ell^\vee$. Namely, if $X \hookrightarrow \bar{X}$ is the complete non-singular model of~$X$, we can form the curve~$X^\prime$ that is obtained from~$\bar{X}$ by identifying all points in~$\bar{X}\setminus X$. Then $\Pic^0_{X^\prime/k}$ is a semi-abelian variety whose $\ell$-adic realization is isomorphic to~$H_\ell^\vee$. \end{remark} \section{Uniformity in~$\ell$ in a theorem of Litt} \label{sec:Litt} \subsection{} \label{ssec:LittSetup} As before, let $k$ be a field that is finitely generated over~$\bbQ$. Let $X$ be a normal scheme, geometrically integral, separated and of finite type over $k$, and let~$\bar{x}$ be a geometric point on~$X$. The main result of~\cite{Litt} is the following. \begin{theorem}[Litt] \label{thm:Litt} Let $X/k$ be as in~\emph{\ref{ssec:LittSetup}}, and let $\ell$ be a prime number. Then there exists a positive integer~$N$, depending on $X$ and~$\ell$, such that for any arithmetic representation $\tau \colon \pi_1(X_{\bar{k}})\to \GL_n(\bbZ_\ell)$ we have \[ \text{$\tau$ is trivial modulo $\ell^N$} \quad\implies\quad \text{$\tau$ is unipotent.} \] \end{theorem} If~$\tau$ is a monodromy representation of a smooth proper family over~$X$, it is known that $\tau$ is semisimple; in this case, therefore, the conclusion is that if $\tau$ is trivial modulo~$\ell^N$ then $\tau$ is trivial. See \cite{Litt}, Corollary~1.6. \subsection{} \label{ssec:QlDef} For $X/k$ as in~\ref{ssec:LittSetup}, let $N(X,\ell)$ be the smallest positive integer~$N$ such that all arithmetic representations $\tau \colon \pi_1(X_{\bar{k}}) \to \GL_n(\bbZ_\ell)$ that are trivial modulo~$\ell^N$ are unipotent. 
Our goal is to estimate~$N(X,\ell)$ and to show that it is bounded as a function of~$\ell$. Note that nothing changes if we replace~$\bar{x}$ by a different geometric base point or if we replace~$k$ by a finite extension. In what follows we may therefore assume that~$\bar{x}$ lies over a $k$-rational point $x \in X(k)$. This gives an action of $\Gal(\kbar/k)$ on~$\pi_1(X_{\bar{k}})$ and an isomorphism $\pi_1(X) \cong \pi_1(X_{\bar{k}}) \rtimes \Gal(\kbar/k)$. By a Bertini argument (see \cite{DelPi1}, Lemma~1.4 and \cite{Litt}, Section~4.1) there exists an affine smooth curve~$C/k$ and a $k$-morphism $C\to X$ such that the induced homomorphism $\pi_1(C_\kbar) \to \pi_1(X_\kbar)$ is surjective. For such a curve we have $N(X,\ell) \leq N(C,\ell)$ for all~$\ell$. What we will estimate is~$N(C,\ell)$. In what follows we therefore assume that $X$ is an affine curve, smooth over~$k$. Moreover, if $X \hookrightarrow \bar{X}$ is the complete non-singular model of~$X$ we may assume that $\bar{X}$ has positive genus. (This is not essential but it will simplify some later assertions.) \subsection{} Choose an embedding $k \hookrightarrow \bbC$. As remarked in~\ref{rem:PicCurve} there is a semi-abelian variety $M = \Pic^0_{X^\prime/k}$ whose Hodge and $\ell$-adic realizations are given by $T_\Betti(M) = \mathrm{H}_1(X_\bbC,\bbZ)$ and $T_\ell(M) = \mathrm{H}_1(X_\kbar,\bbZ_\ell)$. As in Section~\ref{sec:Indices}, let $\sG_{\Betti,M} \subset \sGL\bigl(T_\Betti(M)\bigr)$ be the (integral) Mumford--Tate group and $\sG_{\ell,M} \subset \sGL\bigl(T_\ell(M)\bigr)$ the Zariski closure of the image of $\rho_{\ell,X} \colon \Gal(\kbar/k) \to \GL\bigl(T_\ell(M)\bigr)$. If there is no risk of confusion, we will from now on omit~$M$ from the notation. As already remarked, to estimate the constants~$N(X,\ell)$ we may replace~$k$ with a finite extension; we may therefore assume that the group schemes~$\sG_\ell$ have connected fibres. For every~$\ell$ we have a comparison isomorphism $\iota_\ell \colon T_\Betti \otimes \bbZ_\ell \isomarrow T_\ell$, which we will take as an identification. As $M$ is semi-abelian, the weight filtration~$W_\bullet$ on~$T_\Betti$ only has two steps: \[ W_{-3} = 0 \quad\subset\quad W_{-2} = \Ker\bigl(\mathrm{H}_1(X_\bbC,\bbZ) \to \mathrm{H}_1(\bar{X}_\bbC,\bbZ)\bigr) \quad\subset\quad W_{-1} = T_\Betti(M)\, . \] The weight filtration on~$T_\ell$ is $W_\bullet(T_\Betti) \otimes \bbZ_\ell$. The Mumford--Tate group~$\sG_\Betti$ is contained in the stabilizer $\sStab_W \subset \sGL(T_\Betti)$ of the weight filtration, and as we have already seen in~\ref{ssec:LastStep}, $\sG_\ell \hookrightarrow \sG_\Betti \otimes \bbZ_\ell$ for all~$\ell$. Let $\sQ_\ell \subset \sG_\ell$ be the subgroup scheme of elements $g \in \sG_\ell$ for which there is a scalar~$\alpha$ such that $g$ acts on $\gr^W_{-i}$ as $\alpha^i \cdot \id$ ($i=1,2$). Let $\nu \colon \sQ_\ell \to \bbG_\mult$ be the character (over~$\bbZ_\ell$) given by $g \mapsto \alpha$. For our proof of Theorem~\ref{thm:LittUniform} we need the following result. \begin{lemma} \label{lem:nusurj} For almost all~$\ell$ the homomorphism $\nu \colon \sQ_\ell(\bbZ_\ell) \to \bbZ_\ell^\times$ is surjective. \end{lemma} \begin{proof} Analogous to how we defined the group schemes~$\sQ_\ell$, let $\sQ_\Betti \subset \sG_\Betti$ be the subgroup scheme of elements $g \in \sG_\Betti$ that act on~$\gr^W_{-i}$ as $\alpha^i \cdot \id$ for some scalar~$\alpha$, and let $\nu \colon \sQ_\Betti \to \bbG_\mult$ (over~$\bbZ$) be given by $g \mapsto \alpha$. 
The unipotent radical of $\sStab_W \subset \sGL(H_\Betti)$ is the vector group scheme associated with the free $\bbZ$-module $\Hom(\gr_W^{-1},\gr_W^{-2})$. The unipotent radical of $\sG_\Betti \otimes \bbQ$ is the intersection of $\sG_\Betti \otimes \bbQ$ and $\mathrm{R}_\unip(\sStab_W)$ and is therefore again a vector group scheme. We denote it by~$\sU_{\Betti,\bbQ}$. (It is the generic fibre of the $\sU_{\Betti,M}$ of~\ref{ssec:LastStep}.) We have an isomorphism $h_\bbQ \colon \sQ_\Betti \otimes \bbQ \isomarrow \sU_{\Betti,\bbQ} \rtimes \bbG_\mult$, with $z \in \bbG_\mult$ acting on~$\sU_{\Betti,\bbQ}$ as multiplication by~$z$. This extends over an open part of~$\Spec(\bbZ)$, say over $R = \bbZ[1/n]$; by this we mean that $\sG_{\Betti} \otimes R$ is smooth over~$R$, its unipotent radical~$\sU_{\Betti,R}$ is the vector group scheme given by a free $R$-submodule of $\Hom_R(\gr_W^{-1} \otimes R,\gr_W^{-2} \otimes R)$, and $h_\bbQ$ extends to an isomorphism $h_R \colon \sQ_\Betti \otimes R \isomarrow \sU_{\Betti,R} \rtimes \bbG_\mult$. Similarly, the unipotent radical of $\sG_\ell \otimes \bbQ_\ell$ is a vector group scheme~$\sU_{\ell,\bbQ_\ell}$ that is associated with a $\bbQ_\ell$-subspace of $\Hom_{\bbQ_\ell}(\gr_W^{-1} \otimes \bbQ_\ell,\gr_W^{-2} \otimes \bbQ_\ell)$. We have $\sU_{\ell,\bbQ_\ell} \rtimes \{1\} \subseteq \sQ_\ell \otimes \bbQ_\ell \subseteq \sU_{\ell,\bbQ_\ell} \rtimes \bbG_\mult$, where in the semi-direct product $z \in \bbG_\mult$ again acts on~$\sU_{\ell,\bbQ_\ell}$ as multiplication by~$z$. By \cite{Jossen}, Theorems~6.2 and~7.2, the inclusion $\sG_\ell \hookrightarrow \sG_\Betti \otimes \bbZ_\ell$ restricts to an isomorphism $\sU_{\ell,\bbQ_\ell} \isomarrow \sU_{\Betti,\bbQ} \otimes \bbQ_\ell$. We claim that the homomorphism $\nu \colon \sQ_\ell \otimes \bbQ_\ell \to \bbG_{\mult,\bbQ_\ell}$ is surjective (as a homomorphism of algebraic groups) for all~$\ell$. If this is true, it follows that the inclusions $\sG_\ell \hookrightarrow \sG_\Betti \otimes \bbZ_\ell$ restrict to isomorphisms $\sQ_\ell \otimes \bbQ_\ell \isomarrow \sQ_\Betti \otimes \bbQ_\ell$, and hence also to isomorphisms $\sQ_\ell \isomarrow \sQ_\Betti \otimes \bbZ_\ell$ for all $\ell \nmid n$. As $\sQ_\Betti \otimes \bbZ_\ell \cong \sU_{\Betti,\bbZ_\ell} \rtimes \bbG_\mult$, with $\nu$ given by the second projection, this gives the desired conclusion that $\nu \colon \sQ_\ell(\bbZ_\ell) \to \bbZ_\ell^\times$ is surjective for all $\ell \nmid n$. It remains to prove the claim. There exists an integral affine scheme~$S$ of finite type over~$\bbZ$ whose function field is~$k$, such that $M$ extends to a semi-abelian variety over~$S$. Let $s \in S$ be a closed point with residue field of cardinality~$q(s)$ such that $\ell \nmid q(s)$, and let $F_s \in \Gal(\kbar/k)$ be an arithmetic Frobenius element at~$s$. Associated with~$F_s$ we have a Frobenius torus~$\sT(F_s)$ over~$\bbQ$ whose character group is isomorphic to the $\Gal(\Qbar/\bbQ)$-submodule of~$\smash{\Qbar}^\times$ generated by the eigenvalues $\alpha_1,\ldots,\alpha_m$ of~$F_s$ acting on $\gr^W_{-1}\bigl(T_\ell\bigr) \otimes \Qbar_\ell$. (See~\cite{SerreRibet}. Note that $F_s$ acts on~$\gr^W_{-2}$ as multiplication by $q(s) \in \bbZ_\ell^\times$, and since we have assumed that $g(\bar{X}) > 0$ we only have to consider the eigenvalues of~$F_s$ on~$\gr^W_{-1}$.) 
Because $F_s$ acts semi-simply on $T_\ell \otimes \bbQ_\ell$ (cf.\ \cite{Litt}, Lemma~2.9), we obtain an injective homomorphism $i\colon \sT(F_s) \otimes \bbQ_\ell \hookrightarrow \sG_\ell \otimes \bbQ_\ell$. Further, we have a homomorphism $j \colon \bbG_\mult \hookrightarrow \sT(F_s)$ corresponding to the $\Gal(\Qbar/\bbQ)$-equivariant map $X^*\bigl(\sT(F_s)\bigr) \to \bbZ$ that sends each~$\alpha_i$ to~$1$. Then the image of $(i\circ j) \colon \bbG_\mult \to \sG_\ell \otimes \bbQ_\ell$ is contained in~$\sQ_\ell \otimes \bbQ_\ell$ and $i \circ j$ gives a section of the map~$\nu$. So indeed $\nu \colon \sQ_\ell \otimes \bbQ_\ell \to \bbG_{\mult,\bbQ_\ell}$ is surjective. This completes the proof. \end{proof} \subsection{Notation.} For $\ell$ a prime number and $\alpha \in \bbZ_\ell^\times$, define $C(\alpha,\ell)$ by \[ C(\alpha,\ell) = \begin{dcases} \frac{1}{s} \cdot \left(v_\ell(\alpha^s-1) + \frac{1}{\ell-1} + 2\right) & \text{if $\ell = 2$} \\ \frac{1}{s} \cdot \left(v_\ell(\alpha^s-1) + \frac{1}{\ell-1}\right) & \text{if $\ell > 2$} \end{dcases} \] where $v_\ell$ denotes the $\ell$-adic valuation and where $s$ is the order of $(\alpha \bmod 4) \in (\bbZ/4\bbZ)^\times$ if $\ell = 2$ and is the order of $(\alpha \bmod \ell) \in (\bbZ/\ell\bbZ)^\times$ if $\ell > 2$. Note that for $\ell > 2$ we can choose a root of unity $\zeta \in \bbZ_\ell^\times$ of order $\ell-1$, and then $\alpha$ can be written as $\alpha = \zeta^{(\ell-1)/s} \cdot \exp(y)$ for some $y \in \ell\bbZ_\ell$. With this notation, $C(\alpha,\ell) = (1/s) \cdot \bigl(v_\ell(y) + 1/(\ell-1)\bigr)$. \begin{proposition} \label{prop:2CBound} Let $\ell > 2$. Suppose $\Image(\rho_{\ell,X}) \cap \sQ_\ell(\bbZ_\ell)$ contains an element~$g$ such that $\alpha = \nu(g) \in \bbZ_\ell^\times$ has infinite order. Then $N(X,\ell) \leq 1 + \bigl\lfloor 2\cdot C(\alpha,\ell)\bigr\rfloor$. \end{proposition} \begin{proof} All we need to do is to carefully go through the proof of Theorem~1.2 in~\cite{Litt}. For the reader's convenience, let us give the steps that are required to extract the assertion from Litt's paper. Let $g$ and $\alpha = \nu(g)$ be as in the assertion. Let $N = 1 + \bigl\lfloor 2\cdot C(\alpha,\ell)\bigr\rfloor$. The claim is that every arithmetic representation $\tau \colon \pi_1(X_{\bar{k}}) \to \GL_n(\bbZ_\ell)$ that is trivial modulo~$\ell^N$ is unipotent. As explained by Litt in \cite{Litt}, Section~4.1 (especially Lemma~4.1 and the first half of the proof of his Theorem~1.2), it suffices to prove this only for those representations~$\tau$ that extend to a representation $ \pi_1(X) \to \GL_n(\bbZ_\ell)$. Let $\pi_1(X_{\bar{k}})^{(\ell)}$ be the maximal pro-$\ell$ quotient of~$\pi_1(X_{\bar{k}})$, let $\bbZ_\ell\lbb \pi_1(X_{\bar{k}})^{(\ell)} \rbb$ be the completed group algebra, $\cI \subset \bbZ_\ell\lbb \pi_1(X_{\bar{k}})^{(\ell)} \rbb$ the augmentation ideal, and \[ \bbQ_\ell\lbb \pi_1(X_{\bar{k}})^{(\ell)} \rbb = \lim_n \Bigl(\bbQ_\ell \otimes\bigl(\bbZ_\ell\lbb \pi_1(X_{\bar{k}})^{(\ell)} \rbb/\cI^n\bigr)\Bigr)\, . \] These rings come equipped with an action of~$\Gal(\kbar/k)$. Let $r$ be a real number with $\bigl\lfloor 2\cdot C(\alpha,\ell)\bigr\rfloor < r < N$. Litt defines (\cite{Litt}, Definition~3.2) a Galois-stable $\bbQ_\ell$-subalgebra \[ \bbQ_\ell\lbb \pi_1(X_{\bar{k}})^{(\ell)} \rbb^{\leq \ell^{-r}} \subset \bbQ_\ell\lbb \pi_1(X_{\bar{k}})^{(\ell)} \rbb\, \] which he calls the \emph{convergent group ring}. 
The representation~$\tau$ gives rise to a Galois-equivariant homomorphism $\beta\colon \bbZ_\ell\lbb \pi_1(X_{\bar{k}})^{(\ell)} \rbb \to M_n(\bbZ_\ell)$, where the Galois action on $M_n(\bbZ_\ell)$ is obtained by using the section of $\pi_1(X) \to \Gal(\kbar/k)$ associated with the rational base point $x \in X(k)$, together with the assumption that $\tau$ extends to a representation of~$\pi_1(X)$. Litt shows (ibid., Proposition~3.4) that the assumption that $\tau$ is trivial modulo~$\ell^N$ implies that $\beta$ uniquely extends to a $\Gal(\kbar/k)$-equivariant homomorphism \[ \tilde\beta \colon \bbQ_\ell\lbb \pi_1(X_{\bar{k}})^{(\ell)} \rbb^{\leq \ell^{-r}} \to M_n(\bbQ_\ell)\, . \] The convergent group ring comes equipped with a weight filtration~$W_\bullet$ and a Gauss norm for which $\tilde\beta$ is continuous. Write $g = \rho_{\ell,X}(\sigma)$ for some $\sigma \in \Gal(\kbar/k)$, and recall that $\alpha = \nu(g)$. The proof of \cite{Litt}, Theorem~2.8 (at the end of Section~2) shows that $\sigma$ acts on~$\gr_{-i}^W$ as multiplication by~$\alpha^i$. It then follows from ibid., Theorem~3.6 and Remark~3.11, that if we consider the eigenspaces of~$\sigma$ acting on $W_{-n} = W_{-n} \bbQ_\ell\lbb \pi_1(X_{\bar{k}})^{(\ell)} \rbb^{\leq \ell^{-r}}$ with eigenvalues in $\{\alpha^n,\alpha^{n+1},\ldots\}$, the $\bbQ_\ell$-linear span of these eigenspaces is dense in $W_{-n}$ with respect to the Gauss norm. As $\sigma$ has only finitely many eigenvalues on $M_n(\bbQ_\ell)$ and $\alpha$ is not a root of unity, it follows that $\tilde\beta$ is zero on~$W_{-n}$ for $n$ large enough. Finally, as by \cite{Litt}, Proposition~2.7, we have $\cI^n \subset W_{-n}$, it follows that $\beta$ is zero on~$\cI^n$ for $n$ large enough, which means that $\tau$ is unipotent. \end{proof} We can now prove the main result of this section. \begin{theorem} \label{thm:LittUniform} Let $X/k$ be as in~\emph{\ref{ssec:LittSetup}}. Then there exists an integer~$\ell_X$ such that for all prime numbers $\ell \geq \ell_X$ and all arithmetic representations $\tau \colon \pi_1(X_{\bar{k}}) \to \GL_n(\bbZ_\ell)$ we have \begin{equation} \text{$\tau$ is trivial modulo $\ell$} \quad\implies\quad \text{$\tau$ is unipotent.} \end{equation} In particular, there exists a positive integer~$N(X)$ such that $N(X,\ell) \leq N(X)$ for all~$\ell$. \end{theorem} \begin{proof} By Corollary~\ref{cor:indexbdd} and Lemma~\ref{lem:nusurj} there exist integers $L\geq 2$ and~$M$ such that for all prime numbers $\ell > L$ the image of $\Image(\rho_{\ell,X}) \cap \sQ_\ell(\bbZ_\ell)$ under~$\nu$ has index less than~$M$ in~$\bbZ_\ell^\times$. This means that for every $\ell > L$ we can find an element $g_\ell \in \Image(\rho_{\ell,X}) \cap \sQ_\ell(\bbZ_\ell)$ such that $\alpha_\ell = \nu(g_\ell)$ is of the form \[ \alpha_\ell = z_\ell \cdot \exp(y_\ell)\qquad \text{(with $z_\ell \in \bbZ_\ell^\times$ a root of unity and $y_\ell \in \ell\bbZ_\ell$)} \] such that the order of $z_\ell$ is at least $(\ell-1)/M$ and $v_\ell(y) < 1+ \log_\ell(M)$. Then $\lim_{\ell\to\infty} C(\alpha_\ell,\ell) = 0$ and by Proposition~\ref{prop:2CBound} this gives the result. \end{proof} {\small
\section{Introduction} It is a well-known fact that in single-field slow-roll inflation \cite{infl2,infl3,infl4}, the comoving curvature perturbation ${\mathcal R}_c$ and the uniform density curvature perturbation $\zeta$ coincide and are conserved. In the seminal works \cite{Malik3,lms}, it was shown that requiring just energy conservation is enough to show the superhorizon conservation of $\zeta$ given that the non-adiabatic pressure $\delta P_{nad}$ vanishes, under the assumption that gradient terms are negligible. Moreover, it was shown in \cite{Malik3} that for adiabatic perturbations, on superhorizon scales the comoving sli\-cing coincides with the uniform density slicing, as long as $\partial V / \partial \phi \neq 0$. As a result, $\zeta$ and ${\mathcal R}_c$ coincide and both are conserved on superhorizon scales. Nevertheless, there are cases in which the conservation of $\zeta$ or ${\mathcal R}_c$ does not hold even for adiabatic perturbations. This seems to contradict the results quoted in the above. In this paper, we carefully study the meaning of adiabaticity and clarify how these seemingly contradictory statements are reconciled. For this purpose, we first introduce three different definitions of adiabaticity. Then we study the energy-momentum conservation laws for arbitrary matter and derive several useful relations among gauge-invariant variables, independent of the theory of gravity. We find a few useful formulas that relate some of the gauge-invariant variables to each other. Then we specialize to the case of general relativity and discuss the meaning of the conservation of $\zeta$ and ${\mathcal R}_c$ in detail. Finally we study so-called ultra slow-roll inflation as an interesting non-trivial example in which the superhorizon conservation of $\zeta$ or ${\mathcal R}_c$ does not hold even for an exactly adiabatic perturbation, $\delta P_{nad}=\delta P_{c,nad}=0$. Throughout this paper the dot denotes the proper-time derivative ($\dot{}=d/dt$) and the prime the conformal-time derivative ($\prime{}=d/d\eta$), where $dt=a d\eta$, and the proper-time and conformal-time Hubble expansion rates are respectively denoted by $H=\dot a/a$ and ${\cal H}=a'/a=\dot a$. \section{Adiabaticity: several definitions} \label{sec:defs} Let us consider several definitions of (non)-adiabaticity. Adiabaticity is apparently a term from thermodynamics. Therefore ori\-gi\-nal\-ly it is meaningful only when the basic matter variables such as the energy density and pressure are thermodynamic. As can be seen from the perturbed energy and momentum conservation equations for a perfect fluid with equation of state $P=P(\rho)$, adiabatic perturbations move with the speed of sound $c_w$, given by \begin{equation} c_w^2 \equiv \frac{P'}{\rho'}. \end{equation} For a perfect adiabatic fluid, we therefore have $\delta P = c_w^2 \delta \rho$. Then it seems natural to define the non-adiabatic pressure as \begin{equation} \delta P_{nad}\equiv \delta P - c_w^2 \delta\rho, \label{dpnadther} \end{equation} which is gauge invariant and vanishes for a perfect fluid. This is the definition used in \cite{Malik3,lms}, and in much of the literature. However, the early universe is for sure not in thermal equilibrium, so one can question the above definition based on thermodynamics. 
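As an aside, the gauge invariance of the definition (\ref{dpnadther}) can be verified by a quick check that uses only the standard transformation rule for perturbations of background quantities: under a first-order change of the time slicing $\eta \to \eta + \xi^0$ (where $\xi^0$ denotes the shift of the time coordinate), one has $\delta\rho \to \delta\rho - \rho'\xi^0$ and $\delta P \to \delta P - P'\xi^0$, so that
\[
\delta P - \frac{P'}{\rho'}\,\delta\rho
~\longrightarrow~
\left(\delta P - P'\xi^0\right) - \frac{P'}{\rho'}\left(\delta\rho - \rho'\xi^0\right)
= \delta P - \frac{P'}{\rho'}\,\delta\rho\,.
\]
Thus $\delta P_{nad}$ is unchanged, even though $\delta P$ and $\delta\rho$ separately are not.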
In fact, when the universe is dominated by a scalar field, it makes more sense to talk about the propagation speed $c_s$ of that scalar field (the phase speed of sound, see also \cite{Christopherson:2008ry}), defined on comoving slices via \begin{equation} c_s^2 \equiv \left(\frac{\delta P}{\delta \rho}\right)_c. \label{cssq} \end{equation} One is then led to define the non-adiabatic pressure as \begin{equation} \delta P_{c,nad}\equiv\delta P_c-c_s^2\delta\rho_c. \label{dpnadstr} \end{equation} For a fluid, one has $c_s=c_w$ and both definitions coincide. However, this is in general not true. For a minimally coupled scalar field one has, for example, \begin{equation} c_w^2 = -1+\frac{2\epsilon}{3}-\frac{\eta}{3} , \qquad c_s^2=1, \end{equation} with $\epsilon,\eta$ the usual slow-roll parameters. In this sense, the second definition is more general: it can apply both to a fluid and to a scalar field, hence should be regarded as the proper definition of adiabaticity. Therefore, in this paper we focus on perturbations which satisfy $\delta P_{c,nad}=0$. As a consequence, for the first definition we then have (in agreement with \cite{Christopherson:2012kw}) \begin{equation} \delta P_{nad} = (c_s^2-c_w^2) \delta \rho_c. \label{dpnad} \end{equation} The third definition, which is commonly used in inflationary cosmology, refers to the stage when the so-called growing mode of the perturbation dominates. As discussed above, an adiabatic perturbation generally satisfies a second-order differential equation. Hence when it is Fourier decomposed with respect to the spatial comoving wavenumber $k$, there will be two independent solutions for each $k$-mode. Usually what happens is that as the mode goes out of the Hubble horizon during inflation, one of the solutions (the decaying mode) dies out, and the other mode (the growing mode) dominates. It turns out that this growing mode approaches a constant in the superhorizon limit when expressed in terms of the curvature perturbation on comoving slices ${\cal R}_c$ (or equally the one on uniform energy density slices $\zeta$). When the universe enters this stage where the growing mode dominates, the evolution of the universe thereafter is unique. In other words, if we denote the time after which the universe is in this growing mode dominated stage by $t_a$, given the state of the universe at some later but arbitrary time $t_b$ ($>t_a$), one can always recover the initial condition at $t=t_a$ uniquely because the decaying mode is completely negligible during the whole stage of evolution. It is said that when this is the case the universe has arrived at the adiabatic stage (or the adiabatic limit). In particular, when the universe is dominated by a scalar field whose evolution is well described by the slow-roll approximation, this stage is reached as soon as the scale of the perturbation exits the horizon. This third definition differs from the previous two in that it applies only to the stage when the wavelength of the perturbation is much greater than the Hubble horizon. Nevertheless, as long as we are interested in superhorizon scale perturbations, the adiabaticity conditions for both of the previous two cases will be approximately satisfied if the universe is in the adiabatic limit. Namely, both $\delta P_{nad}$ and $\delta P_{c,nad}$ will be of $\mathcal{O}\left((k/{\cal H})^2\right)$ and hence vanish in the superhorizon limit.
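
To make the difference between the two sound speeds tangible, the following sketch (Python/SciPy; the quadratic potential, field range and mass are illustrative assumptions, in reduced Planck units) integrates a slow-roll background and evaluates $c_w^2=\dot P/\dot\rho$ for the field, which indeed sits close to $-1$ while $c_s^2=1$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical quadratic potential V = m^2 phi^2 / 2 in reduced Planck units.
m = 1e-5
V, Vp = lambda p: 0.5 * m**2 * p**2, lambda p: m**2 * p

def rhs(t, y):
    phi, dphi = y
    H = np.sqrt((0.5 * dphi**2 + V(phi)) / 3.0)     # Friedmann equation
    return [dphi, -3.0 * H * dphi - Vp(phi)]

phi0 = 16.0
H0 = np.sqrt(V(phi0) / 3.0)
sol = solve_ivp(rhs, [0.0, 5e5], [phi0, -Vp(phi0) / (3.0 * H0)],
                max_step=1e3, rtol=1e-8, atol=1e-12)
phi, dphi = sol.y
H = np.sqrt((0.5 * dphi**2 + V(phi)) / 3.0)
ddphi = -3.0 * H * dphi - Vp(phi)

# c_w^2 = dP/dt / drho/dt = (phi_dd - V') / (phi_dd + V') for the field
cw2 = (ddphi - Vp(phi)) / (ddphi + Vp(phi))
print(cw2[len(cw2) // 2])   # ~ -1 during slow roll, while c_s^2 = 1
\end{verbatim}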
\section{Formulas for arbitrary matter independent of gravity} Now, let us derive a few useful formulas valid for any gravity theory. Independent of the theory of gravity, the energy-momentum conservation must hold, which follows from the matter equations of motion and general covariance. We set the perturbed metric as \begin{eqnarray} ds^2&=&a^2\Bigl[-(1+2A)d\eta^2+2\partial_jB dx^jd\eta\nonumber\\ && \qquad +\left\{\delta_{ij}(1+2{\cal R})+2\partial_i\partial_j E\right\}dx^idx^j \Bigr]\,, \label{metric} \end{eqnarray} and the perturbed energy-momentum tensor as \begin{align} T^0_0&=-(\rho+\delta\rho)\,, \quad T^0_j=(\rho+P)u^0u_j=\frac{\rho+P}{a}u_j\,, \nonumber\\ \quad T^i_j&=(P+\delta P)\delta^i_j+\Pi^i{}_j\,;\quad \Pi^k{}_k\equiv0\,. \end{align} For a scalar-type perturbation, $u_j$ can be written as a spatial gradient, \begin{align} u_j=-a\partial_j(v-B)\, \qquad \rightarrow \qquad T^0_j=-(\rho+P)\partial_j(v-B) \,, \end{align} and $\Pi^k{}_j$ can be written in the form \begin{align} \Pi_{ij}=\delta_{ik}\Pi^k{}_j =\left[\partial_i\partial_j-\frac{1}{3}\delta_{ij}\mathop\Delta^{(3)}\right]\Pi\,, \end{align} where $\mathop\Delta^{(3)}=\delta^{ij}\partial_i\partial_j$. In this work, we mainly consider the following gauge-invariant variables: \begin{align} {\cal R}_c &\equiv {\cal R} - {\cal H}(v-B)\,, \\ \zeta &\equiv {\cal R} -\frac{{\cal H}}{\rho'} \delta \rho ={\cal R}+\frac{\delta\rho}{3(\rho+P)}\, , \\ V_f &\equiv (v-B)-\frac{{\cal R}}{{\cal H}}\,. \end{align} Their geometrical meanings are apparent: ${\cal R}_c$ represents the curvature perturbation on comoving slices ($v-B=0$), $\zeta$ the curvature perturbation on uniform density slices ($\delta \rho =0$), and $V_f$ the velocity potential on flat slices (${\mathcal R}=0$). They are related to each other as \begin{align} &{\cal R}_c=-{\cal H} V_f\,, \label{calRcVf}\\ &\zeta\equiv{\cal R}_{ud}={\cal R}_c+\frac{\delta\rho_c}{3(\rho+P)}\,. \label{zetacalRc} \end{align} These relations will become useful later. Hereafter we use the suffix `$c$' for quantities on comoving slices, the suffix `$ud$' for those on uniform density slices, and the suffix `$f$' for those on flat slices. The equation of motion is given by $\delta(\nabla_\mu T^\mu{}_j)=0$. Explicitly we have \begin{align} (\rho+P)\bigl[\partial_j(v-B)' &+{\cal H}(1-3c_w^2)\partial_j(v-B)\bigr] \nonumber\\ &=(\rho+P)\partial_jA+\partial_k(\delta T^k{}_j) \nonumber \\ &=(\rho+P)\partial_jA+\partial_j\delta P+\frac{2}{3}\partial_j\nabla^2\Pi\,. \label{eom} \end{align} Therefore, we may remove the common partial derivative $\partial_j$ to obtain \begin{align} &(\rho+P)\left[(v-B)'+{\cal H}(1-3c_w^2)(v-B)\right] \nonumber\\ &\qquad=(\rho+P)A+\delta P+\frac{2}{3}\nabla^2\Pi\,. \end{align} On comoving slices, $v-B=0$ ($\Leftrightarrow~T^0{}_j=0$). Hence \begin{align} (\rho+P)A_c+\delta P_c+\frac{2}{3}\nabla^2 \Pi=0\,. \end{align} If the perturbation is adiabatic, by definition $\Pi=0$. Thus we find \begin{align} \delta P_c=-(\rho+P)A_c\,. \label{Pceq} \end{align} Note that this relation between $\delta P_c$ and $A_c$ is completely independent of the theory of gravity. \section{Useful relations among gauge-invariant variables independent of gravity} Combining Eqs.~(\ref{cssq}), (\ref{dpnad}) and (\ref{Pceq}), we now have \begin{align} \delta P_{nad}=(c_s^2-c_w^2)\delta\rho_c=\frac{c_w^2-c_s^2}{c_s^2}(\rho+P)A_c\,.
\label{PnadAc} \end{align} The first equality is an identity, while the second comes from the conservation of the energy momentum tensor, and is valid for any gravity theory. This equation may be regarded as a statement that $\delta P_{nad}$ has the same behavior as $\delta\rho_c$ and $A_c$ unless $c_w^2=c_s^2$. In other words, the proper-time slicing ($A=0$), comoving slicing ($v-B=0$) and uniform density slicing ($\delta\rho=0$) coincide with each other (approximately) if $c_w^2\neq c_s^2$ and $\delta P_{nad}=0$ (approximately). Namely, \begin{equation} \{\delta P_{nad}\approx 0,\ c_s\neq c_w \} \Rightarrow \delta\rho_c\approx A_c\approx 0\,. \end{equation} We can use Eq.~(\ref{PnadAc}) to obtain for example a general relation between the comoving curvature perturbation ${\mathcal R}_c$ and the uniform density curvature perturbation $\zeta$, \begin{equation} \zeta={\mathcal R}_c -\frac{H}{\dot{\rho}}\delta\rho_c ={\mathcal R}_c+\delta P_{nad}\frac{H}{\dot{\rho}(c^2_w-c^2_s)}\,. \label{zetaR} \end{equation} This is in agreement with the well-known coincidence of $\zeta$ and ${\mathcal R}_c$ on super-horizon scales for slow-roll models in general relativity, since in this case $c_s\neq c_w$ and $\delta P_{nad}\approx 0$ on superhorizon scales. Note also that this relation is degenerate in the case of $c_s=c_w$. As an example of such a case during inflation, later we explicitly consider the so-called ultra slow-roll inflation model. \section{Formulas for arbitrary matter in general relativity} Here we focus on the case of general relativity. On comoving slices, the $G^0_0$- and $G^0_i$-components of the perturbed Einstein equations give \begin{align} \mathop\Delta^{(3)}\left[{\cal H}\sigma_c+{\cal R}_c\right]=-4\pi G\delta\rho_c\,, \label{G00} \\ {\cal R}_c'={\cal H} A_c\,, \label{G0i} \end{align} where $\sigma$ denotes the scalar shear: $\sigma \equiv B-E'$. The $G^i_j$-components give, for adiabatic perturbations $\Pi=0$ and $\delta P_c=c_s^2\delta\rho_c$, \begin{eqnarray} \frac{2}{a^2}({\cal H}'-{\cal H}^2)A_c&=&8\pi Gc_s^2\delta\rho_c\\ \sigma_c'+2{\cal H}\sigma_c+A_c+{\cal R}_c&=&0. \end{eqnarray} Using the Friedmann equation we then derive the equation of motion for ${\mathcal R}_c$: \begin{align} {\cal R}_c''+\frac{{z^2}'}{z^2}{\cal R}_c'-c_s^2 \mathop\Delta^{(3)}{\cal R}_c=0\,; \quad z^2\equiv \frac{(\rho+P)a^4}{c_s^2{\cal H}^2}\,. \label{calRceq} \end{align} Substituting Eq.~(\ref{G0i}) in Eqs.~(\ref{PnadAc}) and (\ref{zetaR}) now gives \begin{eqnarray} \delta P_{nad}&=& \left[\left(\frac{c_w}{c_s}\right)^2-1\right](\rho+P)\frac{\dot{{\mathcal R}_c}}{H} \label{PnaddotRc} \\ \zeta&=&{\mathcal R}_c-\frac{\dot{{\mathcal R}_c}}{3c_s^2H}. \label{equival} \end{eqnarray} Thus $\delta P_{nad}=0$ if either $c_w^2=c_s^2$ or $\dot{\cal R}_c=0$. In particular in the latter case, $\dot{\cal R}_c=0$, we have $\zeta={\cal R}_c$. \subsection{Conserved $\zeta$ and adiabaticity} Here we briefly review the common notion \cite{Malik3} that the superhorizon conservation of $\zeta$ follows directly from adiabaticity, independent of gravity. Indeed, demanding $\delta(\nabla_\mu T^\mu_0)=0 $ yields, in the uniform density slicing, \begin{eqnarray} \zeta'=-\frac{{\cal H}\delta P_{nad}}{(\rho+P)}+\frac{1}{3}\mathop\Delta^{(3)}\left(v-E'\right)_{ud}\,. \label{dzeta} \end{eqnarray} The usual interpretation of the above equation is that for adiabatic perturbations, $\zeta$ is conserved on super-horizon scales, as long as the gradient terms can be neglected.
However, as we have seen, adiabaticity in the general sense (as defined in Eq.~(\ref{dpnadstr})) does not necessarily imply $\delta P_{nad}=0$. Furthermore, neglecting the gradient terms may not be justified. In the remainder of this letter we will consider the case of a minimally coupled scalar field in general relativity, as an example of the applications of the general relations that we have just derived. \section{Ultra slow-roll inflation} \label{usri} As an interesting non-trivial example in which the equivalence between ${\mathcal R}_c$ and $\zeta$ fails to hold, we consider ultra slow-roll inflation (USR): a minimally coupled single scalar field model with constant potential. When $V=V_0$, the background scalar field equation becomes $\ddot{\phi}+3H\dot\phi=0$, and the density and pressure perturbations become equal to each other, $\delta P=\delta\rho$, in arbitrary gauge. Therefore we have \begin{align} c_w^2=c_s^2=1 \,,\quad \delta P_{nad}=\delta P_{c,nad}=0\,. \end{align} In other words, the perturbation is adiabatic both in the sense of $\delta P_{nad}=0$ and $\delta P_{c,nad}=0$. Solving the background equations, we obtain \begin{equation} \dot{\phi} \propto a^{-3}\,. \end{equation} In particular, this implies that $H=const.$ is an extremely good approximation except possibly for the very beginning of the ultra slow-roll phase. This gives \begin{equation} \epsilon \equiv -\frac{\dot{H}}{H^2} = \frac{\dot{\phi}^2}{2H^2} \propto a^{-6}, \qquad \delta \equiv \frac{\ddot\phi}{H\dot\phi} =\frac{1}{2}\frac{\dot{\epsilon}}{\epsilon H} = -3\,. \end{equation} We are now in a position to appreciate the peculiarity of ultra slow-roll inflation. Let us reconsider the relations we found in the previous section. First, as we saw in Eq.~(\ref{PnaddotRc}), $\delta P_{nad}=0$ implies $\dot{\cal R}_c=0$ if $c_s^2\neq c_w^2$. However, since we have $c_s^2=c_w^2=1$ in ultra slow-roll inflation, we are unable to claim anything about the conservation of ${\cal R}_c$. Second, the comoving slicing coincides with the uniform density slicing (and ${\mathcal R}_c$ with $\zeta$) if $\dot{\cal R}_c=0$, see Eq.~(\ref{equival}). However, again, we are unable to claim anything since we do not know if ${\cal R}_c$ is conserved or not. In fact, we find that ${\cal R}_c$ is not conserved even on superhorizon scales. The same follows from Eq.~(\ref{zetaR}): when $c_s^2=c_w^2$ that relation is degenerate, so $\zeta$ and ${\mathcal R}_c$ do not necessarily coincide. Third, we concluded from Eq.~(\ref{dzeta}) that $\zeta$ is conserved on superhorizon scales if $\delta P_{nad}=0$. However, as noted there, this is true only if the gradient terms are negligible. As we shall see below, here they are not negligible at all. \subsection{$\zeta$ and ${\cal R}_c$ in ultra slow-roll inflation} \label{compzeta} From Eq.~(\ref{equival}), we have \begin{align} \zeta={\cal R}_c-\frac{\dot{\cal R}_c}{3H} =-\frac{a^3}{3H}\partial_t \left(\frac{{\cal R}_c}{a^3}\right)\,. \label{zetausl} \end{align} From Eq.~(\ref{calRceq}), on superhorizon scales, we find that the time derivative of the time-dependent solution is given by \begin{align} \dot{\cal R}_c\propto\frac{1}{az^2}=\frac{H^2}{\dot\phi^2a^3}\propto a^3\,. \end{align} Since $H$ is almost constant in USR, we conclude that ${\cal R}_c$ is not conserved but grows as $a^3$ on superhorizon scales. Inserting this into Eq.~(\ref{zetausl}) implies $\zeta=0$.
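
This behaviour in the strict $k\to0$ limit can be checked with a minimal numerical sketch (Python, units $H=1$, arbitrary amplitudes; it is only meant to illustrate the statement above, not a full mode computation): the superhorizon limit of Eq.~(\ref{calRceq}) gives $\dot{\cal R}_c = C\,a^3$, whose solution grows as $a^3$, while the combination in Eq.~(\ref{zetausl}) stays frozen and vanishes for the pure growing mode.
\begin{verbatim}
import numpy as np

# Units H = 1 (H is nearly constant in USR); amplitudes are arbitrary.
H = 1.0
t = np.linspace(0.0, 5.0, 500)
a = np.exp(H * t)                         # quasi-de Sitter scale factor

# Superhorizon (k -> 0) limit of the R_c equation: (z^2 R_c')' = 0,
# and in USR 1/(a z^2) ~ a^3, so dR_c/dt = C a^3 with solution
# R_c = c1 + C (a^3 - 1) / (3 H).
C, c1 = 1e-3, 0.1
Rc_dot = C * a**3
Rc = c1 + C * (a**3 - 1.0) / (3.0 * H)

# zeta = R_c - Rc_dot / (3 c_s^2 H), with c_s^2 = 1 in USR
zeta = Rc - Rc_dot / (3.0 * H)

print(Rc_dot[-1] / Rc_dot[0], a[-1]**3)   # R_c grows like a^3
print(zeta.min(), zeta.max())             # constant, = c1 - C/(3H); zero for the
                                          # pure growing mode (c1 = C/(3H))
\end{verbatim}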
Thus it seems that $\zeta$ is still conserved (corresponding to the conserved solution of ${\cal R}_c$) and the rapidly growing solution of ${\cal R}_c$ does not contribute to $\zeta$ at all. The above conclusion, however, is valid only in the strict large scale limit. The finiteness of the wavelength can affect the behavior of the perturbation significantly even if the wavelength is much larger than the Hubble horizon size. To see this, one can take into account the spatial gradient term of Eq.~(\ref{calRceq}) iteratively. For simplicity, we work in Fourier space where we replace $\mathop\Delta^{(3)}$ by $-k^2$. The superhorizon solution for ${\mathcal R}_c$ is then \begin{equation} {\mathcal R}_c=c_1\left(1+\mathcal{O}(k^2)\right) +c_2 a^3\left(1+\frac{1}{2}\frac{k^2}{{\cal H}^2}+\mathcal{O}(k^4)\right)\,. \label{rsh} \end{equation} Inserting this into Eq.~(\ref{equival}) gives \begin{equation} \zeta = c_1\left(1+\mathcal{O}(k^2)\right) +\frac{c_2a^3}{3}\left(\frac{k^2}{{\cal H}^2}+\mathcal{O}(k^4)\right). \label{zetasol2} \end{equation} Thus we see that the time-dependent part of $\zeta$ grows like $a$ even on superhorizon scales. More specifically, $\zeta(t)\approx \zeta(t_k) a(t)/a(t_k)$ where $t_k$ is the horizon crossing time of the wavenumber $k$, defined by $a(t_k)H=k$. \begin{table*} \begin{tabular}{|l|l|l|} \hline & {Any Gravity theory } & {General Relativity ($A_c=\dot{{\mathcal R}_c}/H$)} \\ \hline Generic matter & $\delta P_{\rm nad} =\delta \rho_c(c_s^2-c_w^2) =\left[\left(\frac{c_w}{c_s}\right)^2-1\right] (\rho+P) A_c $ & $\delta P_{\rm nad}= \left[\left(\frac{c_w}{c_s}\right)^2-1\right](\rho+P)\frac{\dot{{\mathcal R}_c}}{H} $ \\ \hline M. c. scalar field & $\delta P_{\rm nad}=(c^2_w-1)A_{\rm c} \dot{\phi} ^2$ & $\delta P_{\rm nad}=(c^2_w -1 )\frac{ \dot{{\mathcal R}_c} }{H}\dot{\phi}^2$ \\ \hline \end{tabular} \newline \vspace*{1 cm} \newline \centering \begin{tabular}{|l|l|l|} \hline & {Any Gravity theory } & {General Relativity } \\ \hline Generic matter & $\zeta={\mathcal R}_c-\delta P_{\rm nad}\frac{H}{\dot{\rho}(c^2_s-c^2_w)} = {\mathcal R}_c +\frac{H}{\dot{\rho}} \frac{\rho+P}{c_s^2} A_c$ & $ \zeta={\mathcal R}_c+(\rho+P)\frac{\dot{{\mathcal R}_c}}{c_s^2H}\frac{H}{\dot{\rho}} $ \\ \hline M. c. scalar field & $ \zeta={\mathcal R}_c+ A_{\rm c} \dot{\phi} ^2\frac{H}{\dot{\rho}} \, $& $\zeta={\mathcal R}_c+\dot{\phi}^2\frac{\dot{{\mathcal R}_c}}{H}\frac{H}{\dot{\rho}} $ \\ \hline \end{tabular} \caption{The upper table shows the relation between the fluid-based non-adiabatic pressure perturbations $\delta P_{nad}$ and metric perturbations, and the lower table gives the relation between curvature perturbations on uniform density slices $\zeta$ and on comoving slices ${\mathcal R}_c$. For both tables the first column corresponds to relations valid in any gravity theory, the second column to the case of general relativity, the first row is for a generic matter field and the second one is for a minimally coupled scalar field.} \end{table*} \section{Discussion and conclusions} The seminal works \cite{Malik3,lms} have taught us that for any relativistic theory of gravity, adiabaticity implies that $\zeta$ and ${\mathcal R}_c$ coincide and are conserved when gradient terms can be neglected, which in general happens on superhorizon scales. In this work, we have provided more insight into this claim. First, we have specified that the above statement holds when (non)-adiabaticity is defined in the thermodynamical sense, see Eq.~(\ref{dpnadther}).
We have argued that for a system out of equilibrium, like the early universe, one should define (non)-adiabaticity in the strict sense, as in Eq.~(\ref{dpnadstr}). In this work, we have looked at perturbations which are adiabatic in that strict sense ($\delta P_{c,nad}=0$), and checked the implications for non-adiabaticity in the thermodynamical sense $\delta P_{nad}$. A third definition of non-adiabaticity states that the adiabatic limit has been reached as soon as the time-dependent solution (the non-freezing one) for $\zeta$ has become totally negligible. Second, we have rewritten the relation between (thermodynamical) non-adiabaticity and conserved quantities in such a way as to clarify when exactly gradient terms can be neglected, bypassing the need for an explicit computation of these gradient terms. In Eq.~(\ref{PnadAc}) we have shown that for any gravity theory, $\delta P_{nad}$ is proportional to the lapse function in comoving slicing, $A_c$, provided that $c_s^2\neq c_w^2$. In the particular case of general relativity, $A_c$ is proportional to $\dot{{\mathcal R}}_c$, so we obtain the proportionality between $\delta P_{nad}$ and $\dot{{\mathcal R}}_c$, still under the condition that $c_s^2\neq c_w^2$. Furthermore, we have obtained in Eq.~(\ref{zetaR}) that when $\delta P_{nad}=0$, ${\mathcal R}_c$ and $\zeta$ coincide, again under the condition that $c_s^2\neq c_w^2$. This result holds independently of the gravity theory as well. As an illustration, finally, we have studied the model of ultra slow-roll (USR) inflation, where $\delta P_{c,nad}=\delta P_{nad}=0$ and $c_w=c_s=1$. Indeed, for USR inflation all the relations obtained above break down: $\zeta$ and ${\mathcal R}_c$ do not coincide and neither is conserved. This is an example of the fact that adiabaticity (in the thermodynamic sense) is not always enough to ensure the conservation of ${\mathcal R}_c$ or $\zeta$.\\ \\ \noindent\paragraph{\bf\emph{Acknowledgments}} It is a pleasure to thank Karim Malik, Jorge Nore\~na, Gonzalo Palma, Sergey Sibiryakov and Drian van der Woude for illuminating discussions. This work was supported by the Fondecyt 2015 Postdoctoral Grant 3150126 (SM), the ``Anillo'' project ACT1122 funded by the ``Programa de Investigaci\'on Asociativa'' (SM), and by the Greek national funds under the ``ARISTEIA'' Action, the Dedicacion exclusiva and Sostenibilidad programs at UDEA, the UDEA CODI projects IN10219CE and 2015-4044 (AER), and in part by MEXT KAKENHI Grant Number 15H05888.
\section{INTRODUCTION} Working memory (WM) is the ability to maintain and manipulate information over a short period of time, consisting of encoding, retention, and retrieval processes \cite{small2001circuit}. In the field of neuroscience, there is great interest in the neural oscillation dynamics of the WM process using electroencephalography (EEG) \cite{shin2021predicting}. EEG can extract rapidly fluctuating brain activation characteristics based on its high temporal resolution \cite{won2017motion, lee2018high, kwon2019subject}. In general, many studies identify brain mechanisms during WM encoding and WM retrieval \cite{kragel2017similar}, but the resting state (RS), without stimulus-driven neural activity, is also receiving attention \cite{heister2013resting, pyka2009impact, zhang2017hybrid}. However, there are only a few studies on the effects of RS EEG in relation to the WM process. The WM process can be analyzed using the amplitude and phase of the brain activity measured from the EEG \cite{shin2022differential}. Power spectral density (PSD) is a method of quantifying the amplitude in various frequency bands (delta, theta, alpha, beta, and gamma) using a fast Fourier transform (FFT) \cite{lee2019connectivity,lee2020frontal}. Among them, the alpha band is highly correlated with various cognitive functions, such as performance and information processing speed \cite{brokaw2016resting}. Phase transfer entropy (PTE) is a phase-based information flow estimation method for computing interactions between cortical regions \cite{lobier2014phase,ahmadi2020decoding}. Using this, several studies have reported the direction of information flow in brain networks in various frequency bands during WM \cite{hillebrand2016direction, wang2019consistency}. However, no studies have investigated the difference in power and information flow among RS EEG in relation to the WM process. In this study, EEG data were obtained from twenty-nine participants to investigate changes in the neurodynamics of RS EEG according to the WM process. Participants performed a WM task and three RS EEG recordings. We calculated the power and information flow among the RS EEG recordings made before and after WM encoding and WM retrieval. We hypothesized that the power and information flow of frequencies in RS EEG would be greater as the WM task progressed. We also expected a correlation between the significant EEG characteristics in RS and WM performance. \begin{figure*}[t!] \centering \scriptsize \includegraphics[width=\textwidth]{Figure/Fig1.pdf} \caption{Experimental design. (a) Experimental procedures involving resting-state and working memory. (b) Working memory task consisting of encoding and retrieval sessions. (c) Channel placement of 60 EEG electrodes and six regions of interest (frontal, central, left temporal, right temporal, parietal, and occipital regions).} \end{figure*} \section{METHODS} \subsection{Participants and Experimental Procedure} Twenty-nine participants (17 females; mean $\pm$ SD age: 25.1 $\pm$ 2.6 years) were recruited and participated in the study after providing written informed consent. They were free of any neurologic or psychiatric disorders. This study was approved by the Institutional Review Board at Korea University (KUIRB-2021-0155-03). The experiment consisted of a WM task and RS EEG recordings (Fig. 1a). First, each participant visited the laboratory and prepared for the experiment for about an hour. Then, a total of three 5-minute eyes-closed RS recordings were performed before and after the WM sessions. The WM task consisted of 54 word pairs (Fig.
1b) \cite{marshall2006boosting, shin2020assessment}. In the encoding session, each word pair was displayed for 4 seconds, followed by a 1-second break. In the retrieval session, participants used the keyboard to enter, within 30 seconds, the word paired with the one displayed on the screen. After that, the word pair was re-encoded by displaying the correct answer for 2 seconds. For evaluating memory performance, responses containing typos or inflectional errors were counted as correct. The task was implemented with Psychtoolbox (http://psychtoolbox.org). \subsection{Data Recording and Preprocessing} Data were recorded from 64 Ag/AgCl electrodes (60 EEG and 4 EOG channels) using BrainAmp amplifiers (Brain Products GmbH, Germany) at a sampling rate of 1,000 Hz. The EEG electrodes were arranged based on the international 10-20 system, and the EOG electrodes were placed at the outer corners of both eyes (horizontal) and around the right eye (vertical). FCz and Fpz were used as the reference and ground, respectively. The impedance of all electrodes was kept below 20 k$\Omega$. The recorded EEG signals were preprocessed using the EEGLAB \cite{delorme2004eeglab} and BCILAB toolboxes \cite{kothe2013bcilab} for MATLAB 2018b. First, the data were down-sampled to 250 Hz and band-pass filtered between 0.5 and 100 Hz. After that, the three RS EEG recordings were segmented and independent component analysis was applied to remove eye movements based on the EOG signals \cite{jeong2020brain}. Finally, a Laplacian spatial filter was used to improve the signal-to-noise ratio \cite{jeong2020decoding}. \subsection{EEG Data Analysis} To identify the power and information flow among the three RS EEG recordings according to the WM process, we calculated PSD and PTE in five frequency bands: delta (1-3.5 Hz), theta (4-7.5 Hz), alpha (8-13.5 Hz), beta (14-29.5 Hz), and gamma (30-50 Hz). We also compared EEG characteristics between brain regions by grouping the 60 channels into six regions of interest (ROIs): frontal, central, left temporal, right temporal, parietal, and occipital regions (Fig. 1c). \subsubsection{Power Spectral Density} To identify differences in the power distribution of RS EEG, the cleaned EEG signal was transformed into the frequency domain using the FFT \cite{suk2014predicting}. The PSD was calculated for each frequency band and all EEG channels. \subsubsection{Phase Transfer Entropy} PTE is a functional connectivity estimation method that can measure large-scale phase-specific directed connectivity between EEG channels using phase time-series data extracted from the EEG \cite{hillebrand2016direction}. The calculation of the directed PTE (dPTE) is detailed in Wang \textit{et al.} \cite{wang2019consistency}. The dPTE was calculated for all EEG channels of each participant and grouped into the six ROIs. \subsection{Statistical Analysis} To confirm the differences among the RS EEG recordings according to the WM process, we performed statistical tests for three comparisons (RS 1 vs. RS 2, RS 1 vs. RS 3, and RS 2 vs. RS 3). First, a comparative analysis of the PSD was performed using paired \textit{t}-tests. Second, the PTE was compared using a non-parametric permutation test (\textit{r} = 5,000), considering that not all PTE values were normally distributed according to the Lilliefors test. Finally, the Pearson correlation coefficient was calculated to determine the relationship between the significant EEG characteristics and WM performance. The significance level for all analyses was set at \textit{p} $<$ 0.01. \begin{figure*}[t!] \centering \scriptsize \includegraphics[width=\textwidth]{Figure/Fig2.pdf} \caption{Statistical differences in spectral power among RS EEG in each frequency band.
Each color bar represents the \textit{t}-values. The black asterisk indicates a significant channel (\textit{p} $<$ 0.01).} \end{figure*} \begin{figure*}[t!] \centering \scriptsize \includegraphics[width=\textwidth]{Figure/Fig3.pdf} \caption{Mean dPTE and statistical difference for RS EEG in delta, alpha, and beta bands. Each color bar represents the dPTE and \textit{t}-values. The white asterisk indicates significant connectivity (\textit{p} $<$ 0.01).} \end{figure*} \begin{figure*}[t!] \centering \scriptsize \includegraphics[width=\textwidth]{Figure/Fig4.pdf} \caption{Correlation between significant EEG characteristics of the alpha band and WM performance. (a) and (b) represent PSD and PTE, respectively.} \end{figure*} \section{RESULTS} \subsection{Changes in Spectral Power of RS EEG} We calculated spectral power to identify changes in RS EEG according to the WM process. Fig. 2 shows the statistical differences among RS by each frequency band. In the delta band, there was a significant difference in the channel near the central and parietal regions. In the alpha band, it was confirmed that brain activation was statistically higher as WM process progressed. In the beta band, RS 1 vs. RS 2 and RS 2 vs. RS 3 showed differences in channels near the parietal and occipital regions. Conversely, there was no statistical difference in theta and gamma bands. \subsection{Difference in Directional Information Flow among RS EEG} To investigate the pattern changes of directional information flow according to the WM load, we analyzed RS EEG using dPTE. Fig. 3 shows the results of information flow in the delta, alpha, and beta bands. The dPTE values between channels for each RS showed an anterior-to-posterior information flow in the delta band, and conversely, a posterior-to-anterior information flow in the alpha and beta bands. The \textit{t}-value of dPTE between ROIs of RS EEG was most prominent in RS 2 vs. RS 3, and in particular, the posterior-to-anterior information flow of the alpha band was statistically higher in RS 3 than in RS 2. On the other hand, information flow was dispersed or weak in theta and gamma bands. \subsection{Correlation between Alpha Band and WM Performance} We investigated the effect of RS EEG before and after WM encoding and WM retrieval on WM performance. The results showed a relationship with WM performance only in the difference between RS 3 and RS 1 in the alpha band (Fig. 4). Specifically, Fig. 4a shows a negative correlation between spectral power in the left temporal region and performance and Fig. 4b shows a positive correlation between information flow from left temporal to frontal regions and performance. On the other hand, other EEG characteristics did not show a significant relationship with performance. \section{DISCUSSION} In the current study, we identified the differences in spectral power in delta, alpha, and beta bands among RS EEG before and after WM encoding and WM retrieval. The previous study has reported that changes in the frequencies of RS EEG reflect state transitions of brain activity during WM \cite{lopez2013alterations}. This suggests that RS EEG may be modulated by the demands of cognitive tasks. Our results provide evidence that brain activation changes in several frequencies in RS EEG are influenced by previous WM tasks. Interestingly, we found that the information flow of channels and ROIs among RS EEG became much stronger at certain frequencies after WM retrieval. 
The anterior-to-posterior information flow in the delta band and the posterior-to-anterior information flow in the alpha and beta bands were consistent with the results of previous studies \cite{massimini2004sleep, hillebrand2016direction, wang2019consistency}. Information flow has been reported to be much stronger in the alpha band, which is associated with internal mental synchronization that inhibits the processing of previously incoming visual stimuli \cite{scheeringa2012eeg}. In addition, previous functional magnetic resonance imaging studies have reported that a challenging WM task has a significant effect on the activation of the default mode network during subsequent RS \cite{forbes2015spontaneous, pyka2009impact}. Taken together, dynamic changes in the information flow in the delta, alpha, and beta bands during RS EEG can help to explain their functional role in cognitive neural processing. We found correlations between the alpha band of RS EEG and WM performance. The spectral power was negatively correlated with WM performance, possibly reflecting an active integration mechanism similar to that occurring during sleep \cite{brokaw2016resting}. In addition, this may be explained by the notion that increased long-range coherence of information flow reflects the central executive function of WM \cite{sauseng2005fronto}. These findings suggest that power and information flow in the alpha band during RS EEG are important markers related to WM performance. A limitation of this study is that the conclusions were drawn only for changes in RS EEG related to verbal WM. Therefore, we will further investigate the variability among RS EEG by performing various WM tasks such as the N-back \cite{heister2013resting} or visuospatial tasks \cite{shin2020assessment}. In addition, we will further investigate the relationship between RS EEG and the WM process by analyzing the interactions among significant frequencies \cite{thung2018conversion, kim2019subject}. In conclusion, our results showed that RS EEG measured across the WM process exhibited significant variability in brain dynamics and in its relation to performance and cognitive function. In particular, the changes in power and information flow in the alpha band were most prominent in RS EEG after WM retrieval. Therefore, these findings suggest that RS EEG may be useful in cognitive neuroscience research. \bibliographystyle{IEEEtran}
\section{Introduction} \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{overview8.pdf} \normalsize\href{http://3DShapeNets.cs.princeton.edu}{http://3DShapeNets.cs.princeton.edu} \vspace{-1mm} \caption{{\bf Usages of 3D ShapeNets.} Given a depth map of an object, we convert it into a volumetric representation and identify the observed surface, free space and occluded space. 3D ShapeNets can recognize object category, complete full 3D shape, and predict the next best view if the initial recognition is uncertain. Finally, 3D ShapeNets can integrate new views to recognize object jointly with all views. } \label{fig:teaser} \end{figure} Since the establishment of computer vision as a field five decades ago, 3D geometric shape has been considered to be one of the most important cues in object recognition. Even though there are many theories about 3D representation (e.g. \cite{geon,GeometricEra}), the success of 3D-based methods has largely been limited to instance recognition (e.g. model-based keypoint matching to nearest neighbors \cite{rothganger20063d,tang2012textured}). For object category recognition, 3D shape is not used in any state-of-the-art recognition methods (e.g. \cite{DPM,DCNN}), mostly due to the lack of a good generic representation for 3D geometric shapes. Furthermore, the recent availability of inexpensive 2.5D depth sensors, such as the Microsoft Kinect, Intel RealSense, Google Project Tango, and Apple PrimeSense, has led to a renewed interest in 2.5D object recognition from depth maps (e.g. Sliding Shapes \cite{SlidingShapes}). Because the depth from these sensors is very reliable, 3D shape can play a more important role in a recognition pipeline. As a result, it is becoming increasingly important to have a strong 3D shape representation in modern computer vision systems. Apart from category recognition, another natural and challenging task for recognition is shape completion: given a 2.5D depth map of an object from one view, what are the possible 3D structures behind it? For example, humans do not need to see the legs of a table to know that they are there and potentially what they might look like behind the visible surface. Similarly, even though we may see a coffee mug from its side, we know that it would have empty space in the middle, and a handle on the side. \iffalse On the other hand, object recognition is sometimes quite difficult, even for humans. It is possible that we cannot confidently recognize an object from a particular viewpoint and we have to resolve to another view to gather more observation for recognizing an object. This situation is even more common for computer vision. Automatic object recognition systems today fail frequently \cite{Failures}, and we desire a robust system that can recover from errors automatically. In particular, as shown in Figure \ref{fig:ShapeNets}, if a robot cannot identify an object confidently from a given view, a fail-safe mode is to allow the robot to move and observe the object from another viewpoint, in order to reduce the uncertainty for recognition. This naturally raises the question for view planning: which next view is the best for helping the robot to discriminate the object category? \fi \begin{figure*} \centering \begin{subfigure}{0.27\textwidth} \centering \includegraphics[width=1\textwidth]{architecture.pdf} \caption{Architecture of our 3D ShapeNets model. 
For illustration purposes, we only draw one filter for each convolutional layer.} \label{fig:architecture} \end{subfigure} ~ \begin{subfigure}{0.71\textwidth} \centering \includegraphics[width=0.95\textwidth]{l5_surface.png}~L5 \vspace{-3mm}\line(1,0){350} \includegraphics[width=0.95\textwidth]{l4_surface.png}~L4 \vspace{-3mm}\line(1,0){350} \includegraphics[width=0.95\textwidth]{l3_surface.png}~L3 \vspace{-3mm}\line(1,0){350} \includegraphics[width=0.95\textwidth]{l2_surface.png}~L2 \vspace{-3mm}\line(1,0){350} \includegraphics[width=0.95\textwidth]{l1_surface.png}~L1 \caption{Data-driven visualization: For each neuron, we average the top 100 training examples with highest responses ($>$0.99) and crop the volume inside the receptive field. The averaged result is visualized by transparency in 3D (Gray) and by the average surface obtained from zero-crossing (Red). 3D ShapeNets are able to capture complex structures in 3D space, from low-level surfaces and corners at L1, to object parts at L2 and L3, and whole objects at L4 and above.} \label{fig:filters} \end{subfigure} \vspace{-1mm} \caption{{\bf 3D ShapeNets.} Architecture and filter visualizations from different layers.} \label{fig:ShapeNets} \end{figure*} In this paper, we study generic shape representation for both object category recognition and shape completion. While there has been significant progress on shape synthesis~\cite{Sid2011,Sid2012} and recovery~\cite{Shen2012}, these approaches are mostly limited to part-based assembly and heavily rely on expensive part annotations. Instead of hand-coding shapes by parts, we desire a data-driven way to learn the complex shape distributions from raw 3D data across object categories and poses, and automatically discover a hierarchical compositional part representation. As shown in Figure~\ref{fig:teaser}, this would allow us to infer the full 3D volume from a depth map without the knowledge of object category and pose a priori. Beyond the ability to jointly hallucinate missing structures and predict categories, we also desire the ability to compute the potential information gain for recognition with regard to missing parts. This would allow an active recognition system to choose an optimal subsequent view for observation, when the category recognition from the first view is not sufficiently confident. To this end, we propose 3D ShapeNets to represent a geometric 3D shape as a probabilistic distribution of binary variables on a 3D voxel grid. Our model uses a powerful Convolutional Deep Belief Network (Figure~\ref{fig:ShapeNets}) to learn the complex joint distribution of all 3D voxels in a data-driven manner. To train this 3D deep learning model, we construct ModelNet, a large-scale object dataset of 3D computer graphics CAD models. We demonstrate the strength of our model at capturing complex object shapes by drawing samples from the model. We show that our model can recognize objects in single-view 2.5D depth images and hallucinate the missing parts of depth maps. Extensive experiments suggest that our model also generalizes well to real world data from the NYU depth dataset~\cite{NYUdataset}, significantly outperforming existing approaches on single-view 2.5D object recognition. Furthermore, it is also effective for next-best-view prediction in view planning for active object recognition~\cite{NBVsurveys}.
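
To make the volumetric representation used throughout the paper concrete, the following short Python sketch (a hypothetical orthographic toy model; the actual pipeline projects through the sensor's perspective camera, and the grid size, depth range and function name here are illustrative assumptions) labels each voxel of a grid as free space, surface, or occluded given a single depth map, as in Figure~\ref{fig:teaser}.
\begin{verbatim}
import numpy as np

FREE, SURFACE, OCCLUDED = 0, 1, 2

def depth_to_volume(depth, grid=30, z_near=0.0, z_far=1.0):
    """Label a voxel grid from a (grid x grid) depth map viewed along +z
    (orthographic toy model): voxels in front of the measured depth are
    free space, the first voxel at the depth is surface, the rest stay
    occluded (unknown)."""
    vol = np.full((grid, grid, grid), OCCLUDED, dtype=np.uint8)
    z_centers = np.linspace(z_near, z_far, grid)
    for i in range(grid):
        for j in range(grid):
            idx = np.searchsorted(z_centers, depth[i, j])
            vol[i, j, :idx] = FREE
            if idx < grid:
                vol[i, j, idx] = SURFACE
    return vol

# Example: a flat wall at depth 0.5 seen by a 30x30 depth map
vol = depth_to_volume(np.full((30, 30), 0.5))
print([(vol == c).sum() for c in (FREE, SURFACE, OCCLUDED)])
\end{verbatim}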
\begin{figure*}[t] \centering \includegraphics[width = 1\textwidth]{tsdf3.pdf} ~~~~~~(1) object~~~~~~~~~~~~~~~~~~~~~~(2) depth \& point cloud~~~~~~~~~~~~~(3) volumetric representation~~~~~~~~~~~~(4) recognition \& completion \vspace{-1mm} \caption{ {\bf View-based 2.5D Object Recognition.} (1) Illustrates that a depth map is taken from a physical object in the 3D world. (2) Shows the depth image captured from the back of the chair. A slice is used for visualization. (3) Shows the profile of the slice and different types of voxels. The surface voxels of the chair $\mathbf{x}_o$ are in red, and the occluded voxels $\mathbf{x}_u$ are in blue. (4) Shows the recognition and shape completion result, conditioned on the observed free space and surface.} \label{fig:25Drecognition} \vspace{-3mm} \end{figure*} \section{Related Work} There has been a large body of insightful research on analyzing 3D CAD model collections. Most of the works~\cite{Sid2011,TomAssembly,Sid2012} use an assembly-based approach to build deformable part-based models. These methods are limited to a specific class of shapes with small variations, with surface correspondence being one of the key problems in such approaches. Since we are interested in shapes across a variety of objects with large variations and part annotation is tedious and expensive, assembly-based modeling can be rather cumbersome. For surface reconstruction of corrupted scanning input, most related works~\cite{recon2,recon1} are largely based on smooth interpolation or extrapolation. These approaches can only tackle small missing holes or deficiencies. Template-based methods~\cite{Shen2012} are able to deal with large space corruption but are mostly limited by the quality of available templates and often do not provide different semantic interpretations of reconstructions. The great generative power of deep learning models has allowed researchers to build deep generative models for 2D shapes: most notably the DBN~\cite{DBN} to generate handwritten digits and ShapeBM~\cite{ShapeBM2012} to generate horses, etc. These models are able to effectively capture intra-class variations. We also desire this generative ability for shape reconstruction but we focus on more complex real world object shapes in 3D. For 2.5D deep learning, \cite{Socher} and \cite{depthRCNN} build discriminative convolutional neural nets to model images and depth maps. Although their algorithms are applied to depth maps, they use depth as an extra 2D channel instead of modeling full 3D. Unlike~\cite{Socher}, our model learns a shape distribution over a voxel grid. To the best of our knowledge, we are the first work to build 3D deep learning models. To deal with the dimensionality of high resolution voxels, inspired by~\cite{CDBN}\footnote{The model is precisely a convolutional DBM where all the connections are undirected, while ours is a convolutional DBN.}, we apply the same convolution technique in our model. Unlike static object recognition in a single image, the sensor in active object recognition~\cite{callari2001active} can move to new view points to gain more information about the object. Therefore, the Next-Best-View problem~\cite{NBVsurveys} of doing view planning based on current observation arises. Most previous works in active object recognition~\cite{denzler2002information,jia2009active} build their view planning strategy using 2D color information. However this multi-view problem is intrinsically 3D in nature. 
Atanasov et al,~\cite{atanasov2013hypothesis,atanasov2013nonmyopic} implement the idea in real world robots, but they assume that there is only one object associated with each class reducing their problem to instance-level recognition with no intra-class variance. Similar to~\cite{denzler2002information}, we use mutual information to decide the NBV. However, we consider this problem at the precise voxel level allowing us to infer how voxels in a 3D region would contribute to the reduction of recognition uncertainty. \iffalse View planning, a.k.a. the Next-Best-View problem, has been extensively studied \cite{NBVsurveys} in the literature of 3D model acquisition in Computer Graphics, where the goal is to reconstruct the 3D surface completely and accurately using range scans. It desires a next view to have enough overlap with current known space so that reconstruction could be registered very accurately \cite{krainin2011autonomous, pito1995solution} while exploring unknown spaces as much as possible at the same time \cite{fisher1999next, wong1999next}. The Best-Next-View for reconstruction find the trade-off between these two factors. However, in our case, we look for the best next-view that potentially makes the discriminative information for object recognition visible. We do not need to reconstruct the whole 3D object accurately in order to recognize it, and this makes Next-Best-View for recognition unique. For 3D model acquisition, researchers have used shape completion technique to refine the model \cite{pauly2005example, ramamoorthi1999creating}. In our case, we samples different shapes for completion to predict the Next-Best-View for recognition. \fi \iffalse 1. Criticize shape is under utilized 2. criticize object detection is not considered the uncertainty and failure mode. adjust a new viewpoint to gather more information Human: what is this thing. you rotate it to see better. 3D shape information is such an important cue for visual system, but it is not used at all in the modern computer vision system. Shape is the most fundamental concept in visual perception. But they are under-utilized in the state-of-the-art computer vision system. Contribution:\\ 1. convolutional network to learn 3D feature at high resolution \\ 2. do both multi-way object classification\\ 3. do object detection in RGBD\\ 4. do shape complementation\\ 5. ModelNet dataset: a large scale high quality object dataset\\ 6. do next best view for recognition\\ Novelty:\\ 1. graphics people do mesh completion, robotics people do recognition. we have a model to do both\\ 2. very different from Sochar’s paper. because we use shape distribution instead of feature learning Task: We do object classification. Convolution. The box doesn't.t have to be accurate thanks for the convolution Just like image classification From its very beginnings [32] and up until the early nineties [29], object recognition research has been heavily geometry-centric. The central tenet of the time was alignment, and the act of recognition was posed as correctly aligning a 3D model of an object with its 2D depiction in the test image [21, 26]. The parameters recovered during alignment (object pose, object scale, etc.) served as the output of the recognition process, to be used, for instance, in the perception-manipulation loop in robotics applications. Unfortunately, the success of these 3D model-based methods was largely limited to instance recognition tasks for objects with well-pronounced rectilinear structures (e.g. 
staplers were a favorite example). As the field moved toward category recognition and objects with more complex appearance, 3D model-based object recognition has been replaced by the new 2D appearance-based methods (e.g. [9, 14, 37]). These methods forgo 3D and operate directly on the 2D image plane. Thus, instead of a 3D model of an object, they use a large dataset of 2D views of the object class from different viewpoints, as the model. These methods have shown steadily improving performance on a number of challenging tasks, such as the PASCAL VOC dataset [13]. \fi \section{3D ShapeNets} To study 3D shape representation, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid. Each 3D mesh is represented as a binary tensor: 1 indicates the voxel is inside the mesh surface, and 0 indicates the voxel is outside the mesh (i.e., it is empty space). The grid size in our experiments is $30\times30\times30$. To represent the probability distribution of these binary variables for 3D shapes, we design a Convolutional Deep Belief Network (CDBN). Deep Belief Networks (DBN) \cite{DBN} are a powerful class of probabilistic models often used to model the joint probabilistic distribution over pixels and labels in 2D images. Here, we adapt the model from 2D pixel data to 3D voxel data, which imposes some unique challenges. A 3D voxel volume with reasonable resolution (say $30\times30\times30$) would have the same dimensions as a high-resolution image ($165\times 165$). A fully connected DBN on such an image would result in a huge number of parameters making the model intractable to train effectively. Therefore, we propose to use convolution to reduce model parameters by weight sharing. However, different from typical convolutional deep learning models (e.g. \cite{CDBN}), we do not use any form of pooling in the hidden layers -- while pooling may enhance the invariance properties for recognition, in our case, it would also lead to greater uncertainty for shape reconstruction. The energy, $E$, of a convolutional layer in our model can be computed as: \begin{equation} E(\mathbf{v},\mathbf{h}) = -\sum_f \sum_j \left(h_j^f \left( W^f * v\right)_j + c^f h_j^f \right) - \sum_l b_lv_l \end{equation} where $v_l$ denotes each visible unit, $h_j^f$ denotes each hidden unit in a feature channel $f$, and $W^f$ denotes the convolutional filter. The ``$\ast$'' sign represents the convolution operation. In this energy definition, each visible unit $v_l$ is associated with a unique bias term $b_l$ to facilitate reconstruction, and all hidden units $\{h_j^f\}$ in the same convolution channel share the same bias term $c^f$. Similar to~\cite{DCNN}, we also allow for a convolution stride. \begin{figure}[t] \includegraphics[width=1\linewidth]{NBV.pdf} \vspace{-2mm} \caption{{\bf Next-Best-View Prediction.} [Row 1, Col 1]: the observed (red) and unknown (blue) voxels from a single view. [Row 2-4, Col 1]: three possible completion samples generated by conditioning on $(\mathbf{x}_o,\mathbf{x}_u)$. [Row 1, Col 2-4]: three possible camera positions $\mathbf{V}^i$, front top, left-sided, tilted bottom, front, top. [Row 2-4, Col 2-4]: predict the new visibility pattern of the object given the possible shape and camera position $\mathbf{V}^i$.} \label{fig:nextview} \vspace{-3mm} \end{figure} A 3D shape is represented as a $24\times24\times24$ voxel grid with 3 extra cells of padding in both directions to reduce the convolution border artifacts. 
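
As a concrete illustration of the convolutional-layer energy $E(\mathbf{v},\mathbf{h})$ defined above, the following sketch (Python/NumPy with SciPy's n-dimensional convolution; stride 1, a single input channel, and a handful of small random filters, which are illustrative choices rather than the layer sizes used in the paper) evaluates the energy for random binary states.
\begin{verbatim}
import numpy as np
from scipy.signal import convolve

rng = np.random.default_rng(0)
V, F, K = 30, 4, 6                       # grid size, number of filters, filter size
v = (rng.random((V, V, V)) < 0.2).astype(float)                  # visible voxels
h = (rng.random((F, V-K+1, V-K+1, V-K+1)) < 0.5).astype(float)   # hidden units
W = rng.normal(0.0, 0.01, (F, K, K, K))  # convolutional filters W^f
b = np.zeros((V, V, V))                  # per-voxel visible biases b_l
c = np.zeros(F)                          # per-channel hidden biases c^f

def energy(v, h, W, b, c):
    E = -np.sum(b * v)
    for f in range(len(W)):
        conv = convolve(v, W[f], mode='valid')   # (W^f * v)_j
        E -= np.sum(h[f] * conv) + c[f] * np.sum(h[f])
    return E

print(energy(v, h, W, b, c))
\end{verbatim}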
The labels are presented as standard one of $K$ softmax variables. The final architecture of our model is illustrated in Figure~\ref{fig:ShapeNets}(a). The first layer has 48 filters of size 6 and stride 2; the second layer has 160 filters of size 5 and stride 2 (i.e., each filter has $48\hspace{-1mm}\times\hspace{-1mm}5\hspace{-1mm}\times\hspace{-1mm}5\hspace{-1mm}\times\hspace{-1mm}5$ parameters); the third layer has 512 filters of size 4; each convolution filter is connected to all the feature channels in the previous layer; the fourth layer is a standard fully connected RBM with 1200 hidden units; and the fifth and final layer with 4000 hidden units takes as input a combination of multinomial label variables and Bernoulli feature variables. The top layer forms an associative memory DBN as indicated by the bi-directional arrows, while all the other layer connections are directed top-down. We first pre-train the model in a layer-wise fashion followed by a generative fine-tuning procedure. During pre-training, the first four layers are trained using standard Contrastive Divergence~\cite{CD}, while the top layer is trained more carefully using Fast Persistent Contrastive Divergence (FPCD)~\cite{FPCD}. Once the lower layer is learned, the weights are fixed and the hidden activations are fed into the next layer as input. Our fine-tuning procedure is similar to wake sleep algorithm~\cite{DBN} except that we keep the weights tied. In the wake phase, we propagate the data bottom-up and use the activations to collect the positive learning signal. In the sleep phase, we maintain a persistent chain on the topmost layer and propagate the data top-down to collect the negative learning signal. This fine-tuning procedure mimics the recognition and generation behavior of the model and works well in practice. We visualize some of the learned filters in Figure~\ref{fig:ShapeNets}(b). During pre-training of the first layer, we collect learning signal only in receptive fields which are non-empty. Because of the nature of the data, empty spaces occupy a large proportion of the whole volume, which have no information for the RBM and would distract the learning. Our experiment shows that ignoring those learning signals during gradient computation results in our model learning more meaningful filters. In addition, for the first layer, we also add sparsity regularization to restrict the mean activation of the hidden units to be a small constant (following the method of~\cite{Sparsity}). During pre-training of the topmost RBM where the joint distribution of labels and high-level abstractions are learned, we duplicate the label units 10 times to increase their significance. \section{2.5D Recognition and Reconstruction} \subsection{View-based Sampling} \begin{figure*}[t] \vspace{-2mm} \includegraphics[width=1\linewidth]{ModelNet.pdf} \vspace{-2mm} \caption{{\bf ModelNet Dataset.} Left: word cloud visualization of the ModelNet dataset based on the number of 3D models in each category. Larger font size indicates more instances in the category. Right: Examples of 3D chair models.} \label{fig:modelnet} \vspace{-3mm} \end{figure*} After training the CDBN, the model learns the joint distribution $p(\mathbf{x},y)$ of voxel data $\mathbf{x}$ and object category label $y \in \{ 1,\cdots,K\}$. Although the model is trained on complete 3D shapes, it is able to recognize objects in single-view 2.5D depth maps (e.g., from RGB-D sensors). 
As shown in Figure \ref{fig:25Drecognition}, the 2.5D depth map is first converted into a volumetric representation where we categorize each voxel as free space, surface or occluded, depending on whether it is in front of, on, or behind the visible surface (i.e., the depth value) from the depth map. The free space and surface voxels are considered to be observed, and the occluded voxels are regarded as missing data. The test data is represented by $\mathbf{x}=(\mathbf{x}_o,\mathbf{x}_u)$, where $\mathbf{x}_o$ refers to the observed free space and surface voxels, while $\mathbf{x}_u$ refers to the unknown voxels. Recognizing the object category involves estimating $p(y|\mathbf{x}_o)$. We approximate the posterior distribution $p(y|\mathbf{x}_o)$ by Gibbs sampling. The sampling procedure is as follows. We first initialize $\mathbf{x}_u$ to a random value and propagate the data $\mathbf{x} = (\mathbf{x}_o,\mathbf{x}_u)$ bottom up to sample for a label $y$ from $p(y|\mathbf{x}_o, \mathbf{x}_u)$. Then the high level signal is propagated down to sample for voxels $\mathbf{x}$. We clamp the observed voxels $\mathbf{x}_o$ in this sample $\mathbf{x}$ and do another bottom up pass. 50 iterations of up-down sampling are sufficient to get a shape completion $\mathbf{x}$, and its corresponding label $y$. The above procedure is run in parallel for a large number of particles resulting in a variety of completion results corresponding to potentially different classes. The final category label corresponds to the most frequently sampled class. \subsection{Next-Best-View Prediction} Object recognition from a single-view can sometimes be challenging, both for humans and computers. However, if an observer is allowed to view the object from another view point when recognition fails from the first view point, we may be able to significantly reduce the recognition uncertainty. Given the current view, our model is able to predict which next view would be optimal for discriminating the object category. The inputs to our next-best-view system are observed voxels $\mathbf{x}_o$ of an unknown object captured by a depth camera from a single view, and a finite list of next-view candidates $\{\mathbf{V}^i\}$ representing the camera rotation and translation in 3D. An algorithm chooses the next-view from the list that has the highest potential to reduce the recognition uncertainty. Note that during this view planning process, we do not observe any new data, and hence there is no improvement on the confidence of $p(y|\mathbf{x}_o=x_o)$. The original recognition uncertainty, $H$, is given by the entropy of $y$ conditioned on the observed $\mathbf{x}_o$: \begin{equation} \begin{split} H &= H\left(p(y|\mathbf{x}_o=x_o)\right) \\ &= -\sum_{k=1}^{K} p(y=k|\mathbf{x}_o=x_o) \textrm{log }p(y=k|\mathbf{x}_o=x_o) \end{split} \end{equation} where the conditional probability $p(y|\mathbf{x}_o=x_o)$ can be approximated as before by sampling from $p(y,\mathbf{x}_u|\mathbf{x}_o=x_o)$ and marginalizing $\mathbf{x}_u$. When the camera is moved to another view $\mathbf{V}^i$, some of the previously unobserved voxels $\mathbf{x}_u$ may become observed based on its actual shape. Different views $\mathbf{V}^i$ will result in different visibility of these unobserved voxels $\mathbf{x}_u$. A view with the potential to see distinctive parts of objects (e.g. arms of chairs) may be a better next view. 
However, since the actual shape is partially unknown\footnote{If the 3D shape is fully observed, adding more views will not help to reduce the recognition uncertainty in any algorithm purely based on 3D shapes, including our 3D ShapeNets.}, we will hallucinate that region from our model. As shown in Figure~\ref{fig:nextview}, conditioning on $\mathbf{x}_o=x_o$, we can sample many shapes to generate hypotheses of the actual shape, and then render each hypothesis to obtain the depth maps observed from different views, $\mathbf{V}^i$. In this way, we can simulate the new depth maps for different views on different samples and compute the potential reduction in recognition uncertainty. Mathematically, let $\mathbf{x}_n^i = \textrm{Render}(\mathbf{x}_u, \mathbf{x}_o, \mathbf{V}^i ) \setminus \mathbf{x}_o$ denote the {\bf new} observed voxels (both free space and surface) in the next view $\mathbf{V}^i$. We have $\mathbf{x}_n^i \subseteq \mathbf{x}_u$, and they are unknown variables that will be marginalized in the following equation. Then the potential recognition uncertainty for $\mathbf{V}^i$ is measured by this conditional entropy, \begin{equation} \begin{split} H_{i} &= H\left(p(y|\mathbf{x}_n^i,\mathbf{x}_o=x_o)\right) \\ &= \sum_{\mathbf{x}_n^i} p(\mathbf{x}_n^i|\mathbf{x}_o=x_o)H(y|\mathbf{x}_n^i,\mathbf{x}_o=x_o). \end{split} \end{equation} The above conditional entropy could be calculated by first sampling enough $\mathbf{x}_u$ from $p(\mathbf{x}_u|\mathbf{x}_o=x_o)$, doing the 3D rendering to obtain 2.5D depth map in order to get $\mathbf{x}_n^i$ from $\mathbf{x}_u$, and then taking each $\mathbf{x}_n^i$ to calculate $H(y|\mathbf{x}_n^i=x_n^i,\mathbf{x}_o=x_o)$ as before. According to information theory, the reduction of entropy $H - H_i = I(y;\mathbf{x}_n^i|\mathbf{x}_o=x_o) \geq 0$ is the mutual information between $y$ and $\mathbf{x}_n^i$ conditioned on $\mathbf{x}_o$. This meets our intuition that observing more data will always potentially reduce the uncertainty. With this definition, our view planning algorithm is to simply choose the view that maximizes this mutual information, \begin{equation} \mathbf{V}^* = {\arg\max}_{\mathbf{V}^i} I(y;\mathbf{x}_n^i|\mathbf{x}_o=x_o). \end{equation} Our view planning scheme can naturally be extended to a sequence of view planning steps. After deciding the best candidate to move for the first frame, we physically move the camera there and capture the other object surface from that view. The object surfaces from all previous views are merged together as our new observation $\mathbf{x}_o$, allowing us to run our view planning scheme again. \begin{figure}[t] \vspace{-2mm} \footnotesize \centering \includegraphics[width=1\linewidth]{sample_8_crop.pdf} \vspace{-1mm} \caption{{\bf Shape Sampling.} Example shapes generated by sampling our 3D ShapeNets for some categories.} \label{fig:samples} \end{figure} \section{ModelNet: A Large-scale 3D CAD Dataset} Training a deep 3D shape representation that captures intra-class variance requires a large collection of 3D shapes. Previous CAD datasets (e.g.,~\cite{Psb}) are limited both in the variety of categories and the number of examples per category. Therefore, we construct ModelNet, a large-scale 3D CAD model dataset. To construct ModelNet, we downloaded 3D CAD models from 3D Warehouse, and Yobi3D search engine indexing 261 CAD model websites. 
We query common object categories from the SUN database \cite{SUNDB} that contain no less than 20 object instances per category, removing those with too few search results, resulting in a total of 660 categories. We also include models from the Princeton Shape Benchmark \cite{Psb}. After downloading, we remove mis-categorized models using Amazon Mechanical Turk. Turkers are shown a sequence of thumbnails of the models and answer ``Yes'' or ``No'' as to whether the category label matches the model. The authors then manually checked each 3D model and removed irrelevant objects from each CAD model (e.g, floor, thumbnail image, person standing next to the object, etc) so that each mesh model contains only one object belonging to the labeled category. We also discarded unrealistic (overly simplified models or those only containing images of the object) and duplicate models. Compared to~\cite{Psb}, which consists of 6670 models in 161 categories, our new dataset is 22 times larger containing 151,128 3D CAD models belonging to 660 unique object categories. Examples of major categories and dataset statistics are shown in Figure \ref{fig:modelnet}. \section{Experiments} We choose 40 common object categories from ModelNet with 100 unique CAD models per category. We then augment the data by rotating each model every 30 degrees along the gravity direction (i.e., 12 poses per model) resulting in models in arbitrary poses. Pre-training and fine-tuning each took about two days on a desktop with one Intel XEON E5-2690 CPU and one NVIDIA K40c GPU. Figure \ref{fig:samples} shows some shapes sampled from our trained model. \subsection{3D Shape Classification and Retrieval} \label{sec:exp:classification} Deep learning has been widely used as a feature extraction technique. Here, we are also interested in how well the features learned from 3D ShapeNets compare with other state-of-the-art 3D mesh features. We discriminatively fine-tune 3D ShapeNets by replacing the top layer with class labels and use the 5th layer as features. For comparison, we choose Light Field descriptor~\cite{LFDfeature} (LFD, 4,700 dimensions) and Spherical Harmonic descriptor~\cite{SHPfeature} (SPH, 544 dimensions), which performed best among all descriptors~\cite{Psb}. \begin{table}[t] \centering \begin{tabular}{c|c|c|c} \Xhline{2\arrayrulewidth} 10 classes & SPH~\cite{SHPfeature} & LFD~\cite{LFDfeature} & Ours\tabularnewline \Xhline{2\arrayrulewidth} classification & 79.79 \% & 79.87 \% & {\bf 83.54}\% \tabularnewline retrieval AUC & 45.97\% & 51.70\% & {\bf 69.28}\% \tabularnewline retrieval MAP & 44.05\% & 49.82\% & {\bf 68.26}\% \tabularnewline \Xhline{2\arrayrulewidth} 40 classes & SPH~\cite{SHPfeature} & LFD~\cite{LFDfeature} & Ours\tabularnewline \Xhline{2\arrayrulewidth} classification & 68.23\% & 75.47\% & {\bf 77.32}\% \tabularnewline retrieval AUC & 34.47\% & 42.04\% & {\bf 49.94}\% \tabularnewline retrieval MAP & 33.26\% & 40.91\% & {\bf 49.23}\% \tabularnewline \Xhline{2\arrayrulewidth} \end{tabular} \vspace{-2mm} \caption{{\bf Shape Classification and Retrieval Results.} } \vspace{-2mm} \label{table:cls} \end{table} We conduct 3D classification and retrieval experiments to evaluate our features. Of the 48,000 CAD models (with rotation enlargement), 38,400 are used for training and 9,600 for testing. We also report a smaller scale result on a 10-category subset (corresponding to NYU RGB-D dataset~\cite{NYUdataset}) of the 40-category data. 
For classification, we train a linear SVM to classify meshes using each of the features mentioned above, and use average category accuracy to evaluate the performance. For retrieval, we use $L_2$ distance to measure the similarity of the shapes between each pair of testing samples. Given a query from the test set, a ranked list of the remaining test data is returned according to the similarity measure\footnote{For our feature and SPH we use the $L_2$ norm, and for LFD we use the distance measure from~\cite{LFDfeature}.}. We evaluate retrieval algorithms using two metrics: (1) mean area under the precision-recall curve (AUC) for all the testing queries\footnote{We interpolate each precision-recall curve.}; (2) mean average precision (MAP), where AP is defined as the average of the precision values at the ranks where positive samples are returned. We summarize the results in Table~\ref{table:cls} and Figure~\ref{fig:feature}. Since both baseline mesh features (LFD and SPH) are rotation invariant, the performance we achieve suggests that 3D ShapeNets learned this invariance during feature learning. Despite using a significantly lower-resolution input than the baseline descriptors, 3D ShapeNets outperforms them by a large margin. This demonstrates that our 3D deep learning model can learn better features from 3D data automatically. \begin{figure}[t] \centering \includegraphics[width=0.43\linewidth]{precision_recall_10.pdf} \quad \includegraphics[width=0.43\linewidth]{precision_recall_40.pdf} \vspace{-2mm} \caption{{\bf 3D Mesh Retrieval.} Precision-recall curves at standard recall levels.} \label{fig:feature} \end{figure} \subsection{View-based 2.5D Recognition} To evaluate 3D ShapeNets on the 2.5D depth-based object recognition task, we set up an experiment on the NYU RGB-D dataset with Kinect depth maps \cite{NYUdataset}. We select 10 object categories from ModelNet that overlap with the NYU dataset. This results in 4,899 unique CAD models for training 3D ShapeNets. We create each testing example by cropping the 3D point cloud from the 3D bounding boxes. The segmentation mask is used to remove outlier depth in the bounding box. Then we directly apply our model trained on CAD models to the NYU dataset. This is absolutely non-trivial because the statistics of real-world depth data are significantly different from those of the synthetic CAD models used for training. In Figure \ref{fig:NYUvis}, we visualize the successful recognitions and reconstructions. Note that 3D ShapeNets is even able to partially reconstruct the ``monitor'' despite the bad scanning caused by the reflection problem. To further boost recognition performance, we discriminatively fine-tune our model on the NYU dataset using back propagation. By simply assigning invisible voxels as 0 (i.e. considering occluded voxels as free space and only representing the shape as the voxels on the 3D surface) and rotating training examples every 30 degrees, fine-tuning works reasonably well in practice. As a baseline approach, we use $k$-nearest-neighbor matching in our low resolution voxel space. Testing depth maps are converted to the voxel representation and compared with each of the training samples. As a more sophisticated high resolution baseline, we match the testing point cloud to each of our 3D mesh models using the Iterated Closest Point method \cite{ICP} and use the top 10 matches to vote for the labels. We also compare our result with \cite{Socher}, which is the state-of-the-art deep learning model applied to RGB-D data.
To train and test their model, 2D bounding boxes are obtained by projecting the 3D bounding box to the image plane, and object segmentations are also used to extract features. 1,390 instances are used to train the algorithm of~\cite{Socher} and perform our discriminative fine-tuning, while the remaining 495 instances are used for testing all five methods. Table \ref{table:ap} summarizes the recognition results. Using only depth without color, our fine-tuned 3D ShapeNets outperforms all other approaches with or without color by a significant margin. \begin{figure}[t] \vspace{-2mm} {\small ~~~~Input~~~~~~~~GT~~~~~~~~~3D ShapeNets Completion Result~~~~~~NN } \vspace{-0.5mm} \centering \includegraphics[width=1\linewidth]{completion_new.pdf} \vspace{-3mm} \caption{{\bf Shape Completion.} From left to right: input depth map from a single view, ground truth shape, shape completion result (4 cols), nearest neighbor result (1 col).} \label{fig:shapecompletion} \vspace{-3mm} \end{figure} \iffalse \begin{figure}[t] \centering \vspace{-2mm} \includegraphics[width=0.8\linewidth]{new_figures/NBV.pdf}% \vspace{-3mm} \caption{When the uncertainty of observation goes higher, the recognition accuracy drops. } \label{fig:shapecompletion} \vspace{-5mm} \end{figure} \fi \begin{figure*}[t] \includegraphics[width=1\linewidth]{NYU_reconstruction.pdf} \vspace{-5mm} \caption{{\bf Successful Cases of Recognition and Reconstruction on NYU dataset} \cite{NYUdataset}. In each example, we show the RGB color crop, the segmented depth map, and the shape reconstruction from two view points.} \label{fig:NYUvis} \end{figure*} \begin{table*}[t] \centering \setlength{\tabcolsep}{7.8pt} { \centering \footnotesize \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} \Xhline{2\arrayrulewidth} & bathtub & bed & chair & desk & dresser & monitor & \hspace{-1mm}nightstand\hspace{-1mm} & sofa & table & toilet & all\tabularnewline \Xhline{3\arrayrulewidth} \cite{Socher} Depth & 0.000 & 0.729 & 0.806 & 0.100 & 0.466 & 0.222 & 0.343 & 0.481 & 0.415 & 0.200 & 0.376\tabularnewline \hline NN & 0.429 & 0.446 & 0.395 & 0.176 & 0.467 & 0.333 & 0.188 & 0.458 & 0.455 & 0.400 & 0.374\tabularnewline \hline ICP & 0.571 & 0.608 & 0.194 & {\bf 0.375} & {\bf 0.733} & 0.389 & 0.438 & 0.349 & 0.052 & {\bf 1.000} & 0.471\tabularnewline \hline 3D ShapeNets & 0.142 & 0.500 & 0.685 &0.100 & 0.366& {\bf 0.500} & {\bf 0.719} & 0.277& 0.377 & 0.700 & 0.437 \tabularnewline \hline 3D ShapeNets fine-tuned & {\bf 0.857} & 0.703 & {\bf 0.919} & 0.300 & 0.500 & {\bf 0.500} & 0.625 & {\bf 0.735} & 0.247 & 0.400 & {\bf 0.579} \tabularnewline \Xhline{3\arrayrulewidth} \cite{Socher} RGB & 0.142 & {\bf 0.743} & 0.766 & 0.150 & 0.266 & 0.166 & 0.218 & 0.313 & 0.376 & 0.200 & 0.334\tabularnewline \hline \cite{Socher} RGBD & 0.000 & {\bf 0.743} & 0.693 & 0.175 & 0.466 & 0.388 & 0.468 & 0.602 & {\bf 0.441} & 0.500 & 0.448\tabularnewline \Xhline{2\arrayrulewidth} \end{tabular} } \vspace*{-2mm} \caption{{\bf Accuracy for View-based 2.5D Recognition on NYU dataset} \cite{NYUdataset}. The first five rows are algorithms that use only depth information. The last two rows are algorithms that also use color information. Our 3D ShapeNets as a generative model performs reasonably well as compared to the other methods. After discriminative fine-tuning, our method achieves the best performance by a large margin of over 10\%. 
} \label{table:ap} \vspace*{5mm} \centering \setlength{\tabcolsep}{9.4pt} { \centering \footnotesize \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} \Xhline{2\arrayrulewidth} & bathtub & bed & chair & desk & dresser & monitor & \hspace{-1mm}nightstand\hspace{-1mm} & \hspace{1mm}sofa\hspace{1mm} & table & toilet & all\tabularnewline \Xhline{2\arrayrulewidth} Ours & 0.80 & {\bf 1.00} & {\bf 0.85} & 0.50 & {\bf 0.45} & 0.85 & {\bf 0.75} & {\bf 0.85} & 0.95 & {\bf 1.00} & {\bf 0.80}\tabularnewline \hline Max Visibility & {\bf 0.85} & 0.85 & {\bf 0.85} & 0.50 & {\bf 0.45} & {\bf 0.85} & {\bf 0.75} & {\bf 0.85} & 0.90 & 0.95 & 0.78\tabularnewline \hline Furthest Away & 0.65 & 0.85 & 0.75 & {\bf 0.55} & 0.25 & 0.85 & 0.65 & 0.50 & {\bf 1.00} & 0.85 & 0.69\tabularnewline \hline Random Selection & 0.60 & 0.80 & 0.75 & 0.50 & {\bf 0.45} & {\bf 0.90} & 0.70 & 0.65 & 0.90 & 0.90 & 0.72\tabularnewline \Xhline{2\arrayrulewidth} \end{tabular} } \vspace*{-2mm} \caption{{\bf Comparison of Different Next-Best-View Selections Based on Recognition Accuracy from Two Views.} Based on an algorithm's choice, we obtain the actual depth map for the next view and recognize the object using those two views in our 3D ShapeNets representation. } \label{table:nbv} \end{table*} \subsection{Next-Best-View Prediction} For our view planning strategy, computation of the term $p(\mathbf{x}_n^i|\mathbf{x}_o=x_o)$ is critical. When the observation $\mathbf{x}_o$ is ambiguous, samples drawn from $p(\mathbf{x}_n^i|\mathbf{x}_o=x_o)$ should come from a variety of different categories. When the observation is rich, samples should be limited to very few categories. Since $\mathbf{x}_n^i$ is the surface of the completions, we can directly inspect the shape completion performance $p(\mathbf{x}_u|\mathbf{x}_o=x_o)$. In Figure~\ref{fig:shapecompletion}, our results give reasonable shapes across different categories. We also match the nearest neighbor in the training set to show that our algorithm is not just memorizing the shape and that it can generalize well. To evaluate our view planning strategy, we use CAD models from the test set to create synthetic renderings of depth maps. We evaluate the accuracy by running our 3D ShapeNets model on the integrated depth maps of both the first view and the selected second view. A good view-planning strategy should result in a better recognition accuracy. Note that next-best-view selection is always coupled with the recognition algorithm. We prepare three baseline methods for comparison: (1) random selection among the candidate views; (2) choose the view with the highest new visibility (yellow voxels, NBV for reconstruction); (3) choose the view which is farthest away from the previous view (based on camera center distance). In our experiment, we generate 8 view candidates randomly distributed on the sphere of the object, pointing to the region near the object center, and we randomly choose 200 test examples (20 per category) from our testing set. Table \ref{table:nbv} reports the recognition accuracy of different view planning strategies with the same 3D ShapeNets recognition model. We observe that our entropy-based method outperforms all other strategies. \iffalse Sometimes, having two views is not enough and a sequence of views is needed for recognition. The best view planning strategies should take the minimal steps to reach the accuracy upper bound for the recognition algorithm. With the same experiment settings, we run the above view planning strategies in 4 steps.
In each step, 8 random new views are generated for selection. All these view planning strategies are greedy in a sense that they are not optimized for global planning. As the result shows in Figure \ref{fig:feature}(c), our view planning significantly outperforms other methods. We only need ??? steps to achieve the same recognition accuracy with ??? steps by others. \fi \section{Conclusion} To study 3D shape representation for objects, we propose a convolutional deep belief network to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid. Our model can jointly recognize and reconstruct objects from a single-view 2.5D depth map (e.g. from popular RGB-D sensors). To train this 3D deep learning model, we construct ModelNet, a large-scale 3D CAD model dataset. Our model significantly outperforms existing approaches on a variety of recognition tasks, and it is also a promising approach for next-best-view planning. All source code and data set are available at our project website. \paragraph{Acknowledgment.} This work is supported by gift funds from Intel Corporation and Project X grant to the Princeton Vision Group, and a hardware donation from NVIDIA Corporation. Z.W. is also partially supported by Hong Kong RGC Fellowship. We thank Thomas Funkhouser, Derek Hoiem, Alexei A. Efros, Andrew Owens, Antonio Torralba, Siddhartha Chaudhuri, and Szymon Rusinkiewicz for valuable discussion. {\small \bibliographystyle{ieee}
\section{Supplemental material} \subsection{Technicalities of the data-driven approach} The data-driven approach used here was first introduced in Ref.~\cite{Lansberg:2016deg}. In particular, the partonic amplitude is parameterized following Eq.(1) of \cite{Lansberg:2016deg}. In principle, four parameters need to be determined from the proton-proton experimental data after convolving with the PDFs. The fit is done data set by data set for each scale choice. We summarize the fitted values of these parameters, $\kappa,\lambda,\langle P_T \rangle$, in Tab.~\ref{tabppfit} for the four particles ($D^0,J/\psi,B\rightarrow J/\psi, \Upsilon(1S)$) and the three scale variations ($\mu_F=\xi \mu_0,\xi=0.5,1.0,2.0$) which we considered. The fourth parameter, $n$, is kept fixed to 2. \begin{table*}[!hbt] \renewcommand{\arraystretch}{1.4} \setlength{\tabcolsep}{12pt} \begin{tabular}{|cc|cccc|} \hline & & $D^0$ & $J/\psi$ & $B\rightarrow J/\psi$ & $\Upsilon(1S)$ \\ \hline\hline \multirow{3}{*}{$\kappa$} & $\xi=0.5$ & $1.26$ & $0.92$ & $0.25$ & $0.69$\\ & $\xi=1.0$ & $0.66$ & $0.56$ & $0.15$ & $0.77$\\ & $\xi=2.0$ & $0.50$ & $0.41$ & $0.13$ & $0.69$\\\hline \multirow{3}{*}{$\lambda$} & $\xi=0.5$ & $2.97$ & $0.58$ & $0.09$ & $0.10$\\ & $\xi=1.0$ & $1.78$ & $0.30$ & $0.09$ & $0.08$\\ & $\xi=2.0$ & $1.39$ & $0.22$ & $0.08$ & $0.07$\\\hline \multirow{3}{*}{$\langle P_T \rangle$} & $\xi=0.5$ & $0.01$ & $4.5$ (fixed) & $0.02$ & $13.5$ (fixed)\\ & $\xi=1.0$ & $0.09$ & $4.5$ (fixed) & $3.84$ & $13.5$ (fixed)\\ & $\xi=2.0$ & $0.03$ & $4.5$ (fixed) & $0.11$ & $13.5$ (fixed)\\ \hline \end{tabular} \caption{A summary of the fitted parameters $\kappa,\lambda,\langle P_T \rangle$ of our data-driven method.\label{tabppfit}} \end{table*} \subsection{Validation with FONLL}\label{validate} We have explicitly checked the reweighted results from our data-driven approach against those from the FONLL perturbative calculation~\cite{Cacciari:1998it,Cacciari:2012ny} for open heavy-flavour production. The reweighted nPDFs from both approaches are shown in Fig.~\ref{fig:CompareFONLL} for the case of LHC $D^0$ data. In the data-driven approach, the only theoretical uncertainty to be considered is from the factorisation scale variation (shown in the top-left insets of Figs.~\ref{fig:CompareFONLLa} and \ref{fig:CompareFONLLb}). On the other hand, in the case of FONLL, besides the factorisation scale uncertainty (shown in the top-right insets of Figs.~\ref{fig:CompareFONLLa} and \ref{fig:CompareFONLLb}), we also provide other theoretical uncertainties from the renormalisation scale (shown in the bottom-left insets of Figs.~\ref{fig:CompareFONLLa} and \ref{fig:CompareFONLLb}) and charm quark mass variations (shown in the bottom-right insets of Figs.~\ref{fig:CompareFONLLa} and \ref{fig:CompareFONLLb}). This confirms that for observables like $R_{p\rm{Pb}}$, the dominant theoretical uncertainty is from the factorisation scale variation, while the renormalisation scale and the charm mass uncertainties largely cancel. One also clearly sees that the results obtained using our data-driven method and FONLL are very similar, confirming the validity of the data-driven method. The same situation also holds for the $B\rightarrow J/\psi$ case.
\begin{figure}[!thb] \begin{center} \subfloat[nCTEQ15 nPDF]{\includegraphics[width=0.49\textwidth,draft=false]{D0_Rg_MU2GeV_FONLL_nCTEQ15-crop.pdf}\label{fig:CompareFONLLa}} \subfloat[EPPS16 nPDF]{\includegraphics[width=0.49\textwidth,draft=false]{D0_Rg_MU2GeV_FONLL_EPPS16-crop.pdf}\label{fig:CompareFONLLb}} \caption{Comparison of the reweighted nPDFs with $D^0$ data between our data-driven approach and FONLL with various theoretical uncertainties. The error bands due to the nPDF uncertainty are given at the $68\%$ CL. } \label{fig:CompareFONLL} \end{center}\vspace*{-1cm} \end{figure} \subsection{$\chi^2$ numbers} The $\chi^2$ values before and after reweighting are displayed in Tab.~\ref{tabchi2} for $\{D^0,J/\psi,B\rightarrow J/\psi, \Upsilon(1S)\}$ production in $p{\rm Pb}$ collisions at the LHC, together with the total number of data points $N_{\rm data}$. As is customary, these $\chi^2$ values do not account for any theoretical uncertainties. Regardless of the scale choice (i.e. $\xi$), $\chi^2/N_{\rm data}$ is around $1$ after reweighting, while it varies significantly with the original nPDFs. This is expected since the reweighting matches our replicas to the $R_{p{\rm Pb}}$ vs $P_T,y$ data. We take the inclusive $J/\psi$ PHENIX $R_{dAu}$ results as a postdiction, with the $\chi^2$ numbers before and after reweighting shown in Tab.~\ref{tabchi2others}. The compatibility between the theoretical calculations and the PHENIX data is further improved with the reweighted nPDFs. We have also checked the global coherence of the HF constraints with the LHC W/Z and DIS NMC data. The corresponding $\chi^2$ values are also shown in Tab.~\ref{tabchi2others}. No degradation is observed, as the $\chi^2/N_{\rm data}$ values are similar before and after the reweighting. \begin{table*}[!t] \renewcommand{\arraystretch}{1.4} \setlength{\tabcolsep}{12pt} \begin{tabular}{|cc|cccc|} \hline & & $D^0$ & $J/\psi$ & $B\rightarrow J/\psi$ & $\Upsilon(1S)$ \\ \hline\hline & $N_{\rm data}$ & 38 & 71 & 37 & 12 \\\hline\hline \multirow{3}{*}{{\rm Original nCTEQ15}} & $\xi=0.5$ & $142$ & $131$ & $39$ & $14$\\ & $\xi=1.0$ & $39$ & $63$ & $23$ & $11$\\ & $\xi=2.0$ & $63$ & $90$ & $15$ & $11$\\\hline \multirow{3}{*}{{\rm Reweighted nCTEQ15}} & $\xi=0.5$ & $56$ & $46$ & $14$ & $13$\\ & $\xi=1.0$ & $56$ & $53$ & $11$ & $11$\\ & $\xi=2.0$ & $56$ & $46$ & $9$ & $11$\\\hline\hline \multirow{3}{*}{{\rm Original EPPS16}} & $\xi=0.5$ & $53$ & $62$ & $9$ & $10$\\ & $\xi=1.0$ & $140$ & $150$ & $7$ & $10$\\ & $\xi=2.0$ & $218$ & $220$ & $8$ & $11$\\ \hline \multirow{3}{*}{{\rm Reweighted EPPS16}} & $\xi=0.5$ & $37$ & $59$ & $7$ & $10$\\ & $\xi=1.0$ & $37$ & $59$ & $7$ & $10$\\ & $\xi=2.0$ & $37$ & $59$ & $7$ & $11$\\ \hline \end{tabular} \caption{Total numbers of $R_{p{\rm Pb}}$ vs $P_T,y$ data points used for reweighting and the corresponding $\chi^2$ values before and after reweighting.
No theoretical uncertainties are taken into account when evaluating $\chi^2$.\label{tabchi2}} \end{table*} \begin{table*}[!t] \renewcommand{\arraystretch}{1.4} \setlength{\tabcolsep}{12pt} \begin{tabular}{|c|cc|c|cccc|} \hline & & & \multirow{2}{*}{{\rm Original}} & \multicolumn{4}{c|}{Reweighted}\\\cline{5-8} & & & & $D^0$ & $J/\psi$ & $B\rightarrow J/\psi$ & $\Upsilon(1S)$ \\ \hline\hline \multirow{6}{*}{{\rm PHENIX} $J/\psi$ ($N_{\rm data}=74$)} & \multirow{3}{*}{{\rm nCTEQ15}} & $\xi=0.5$ & $265$ & $-$ & $134$ & $-$ & $-$\\ & & $\xi=1.0$ & $189$ & $-$ & $176$ & $-$ & $-$\\ & & $\xi=2.0$ & $231$ & $-$ & $205$ & $-$ & $-$\\\cline{2-8} & \multirow{3}{*}{{\rm EPPS16}} & $\xi=0.5$ & $133$ & $-$ & $138$ & $-$ & $-$\\ & & $\xi=1.0$ & $207$ & $-$ & $167$ & $-$ & $-$\\ & & $\xi=2.0$ & $263$ & $-$ & $209$ & $-$ & $-$\\ \hline \multirow{3}{*}{{\rm LHC} W/Z ($N_{\rm data}=102$)} & \multirow{3}{*}{{\rm nCTEQ15}} & $\xi=0.5$ & \multirow{3}{*}{$248$} & $218$ & $230$ & $212$ & $229$\\ & & $\xi=1.0$ & & $254$ & $271$ & $214$ & $238$\\ & & $\xi=2.0$ & & $317$ & $332$ & $219$ & $243$\\\hline \multirow{3}{*}{{\rm NMC} $F_2^{\rm Sn}/F_2^C$ ($N_{\rm data}=111$)} & \multirow{3}{*}{{\rm nCTEQ15}} & $\xi=0.5$ & \multirow{3}{*}{$65$} & $93$ & $98$ & $86$ & $70$\\ & & $\xi=1.0$ & & $65$ & $66$ & $78$ & $67$\\ & & $\xi=2.0$ & & $62$ & $62$ & $71$ & $65$\\\hline \multirow{3}{*}{{\rm NMC} $F_2^{\rm Pb}/F_2^C$ ($N_{\rm data}=14$)} & \multirow{3}{*}{{\rm nCTEQ15}} & $\xi=0.5$ & \multirow{3}{*}{$8$} & $8$ & $8$ & $8$ & $7$\\ & & $\xi=1.0$ & & $7$ & $6$ & $7$ & $7$\\ & & $\xi=2.0$ & & $9$ & $8$ & $7$ & $8$\\\hline \end{tabular} \caption{Comparison of $\chi^2$ values for inclusive $J/\psi$ $d{\rm Au}$ PHENIX data, $W/Z$ $p{\rm Pb}$ LHC data and NMC data before and after reweighting. No theoretical uncertainties are taken into account for evaluating $\chi^2$.\label{tabchi2others}} \end{table*} \subsection{Effective number of replicas} The reliability of the reweighting procedure can be estimated by the effective number of replicas $N_{\rm eff}$ after reweighting (see Eq.(14) in Ref.~\cite{Kusina:2016fxy}). It provides an estimation of the number of replicas effectively contributing to the reweighting procedure. If $N_{\rm eff}/N_{\rm rep}\ll 1$ with $N_{\rm rep}$ the number of original replicas, the reweighting procedure becomes inefficient and a new global fit is necessary. We provide the values of $N_{\rm eff}$ in Tab.~\ref{tabNeff} based on our original $N_{\rm rep}=10^4$ replicas. Among the $24$ reweighting results (2 nPDFs, 4 data sets and 3 factorisation scale choices $\mu_F=\xi \mu_0$), we conclude that we always have $N_{\rm eff} > 3000$, which confirms the reliability of our reweighting results. \begin{table*}[!t] \renewcommand{\arraystretch}{1.4} \setlength{\tabcolsep}{12pt} \begin{tabular}{|cc|cccc|} \hline & & $D^0$ & $J/\psi$ & $B\rightarrow J/\psi$ & $\Upsilon(1S)$ \\ \hline\hline \multirow{3}{*}{{\rm nCTEQ15}} & $\xi=0.5$ & $3063$ & $3423$ & $6584$ & $9508$\\ & $\xi=1.0$ & $5573$ & $5906$ & $7859$ & $9830$\\ & $\xi=2.0$ & $5353$ & $5479$ & $8625$ & $9929$\\\hline \multirow{3}{*}{{\rm EPPS16}} & $\xi=0.5$ & $3116$ & $3304$ & $7914$ & $9724$\\ & $\xi=1.0$ & $3979$ & $4204$ & $8444$ & $9875$\\ & $\xi=2.0$ & $4226$ & $4462$ & $8783$ & $9932$\\ \hline \end{tabular} \caption{Summary of $N_{\rm eff}$ after performing reweighting with $10^4$ original replicas.\label{tabNeff}} \end{table*} \end{document}
\section{Abstract} \noindent \textbf{Purpose}: ESPIRiT is a parallel imaging method that estimates coil sensitivity maps from the auto-calibration region (ACS). This requires choosing several parameters for the optimal map estimation. While fairly robust to these parameter choices, occasionally, poor selection can result in reduced performance. The purpose of this work is to automatically select parameters in ESPIRiT for more robust and consistent performance across a variety of exams. \noindent \textbf{Methods}: By viewing ESPIRiT as a denoiser, Stein's unbiased risk estimate (SURE) is leveraged to automatically optimize parameter selection in a data-driven manner. The optimum parameters corresponding to the minimum true squared error, minimum SURE as derived from densely-sampled, high-resolution, non-accelerated data, and minimum SURE as derived from ACS are compared using simulation experiments. To avoid optimizing the rank of ESPIRiT's auto-calibrating matrix (one of the parameters), a heuristic derived from SURE-based singular value thresholding is also proposed. \noindent \textbf{Results}: Simulations show SURE derived from the densely-sampled, high-resolution, non-accelerated data to be an accurate estimator of the true mean squared error, enabling automatic parameter selection. The parameters that minimize SURE as derived from ACS correspond well to the optimal parameters. The soft-threshold heuristic improves computational efficiency while providing similar results to an exhaustive search. In-vivo experiments verify the reliability of this method. \noindent \textbf{Conclusion}: Using SURE to determine ESPIRiT parameters allows for automatic parameter selections. In-vivo results are consistent with simulation and theoretical results. \noindent \textbf{Keywords}: Parallel Imaging Calibration, \emph{ESPIRiT}, \emph{Stein's Unbiased Risk Estimate}. \clearpage \section{Introduction} \label{sec:introduction} Modern MRI leverages multiple receive coil arrays to perform parallel imaging (PI) for faster acquisition-time and improved signal to noise ratio (SNR). The high level mechanics of PI can be broken up into two steps: calibration and reconstruction. Calibration typically involves exploiting the data redundancy provided by the multiple receive channels to derive linear data consistency operators that define the subspace the signal is expected to live in. Reconstruction then enforces data consistency through these derived operators along with other priors to reconstruct the desired image. Consider the two most common PI techniques, SENSE \cite{ref:sense} and GRAPPA \cite{ref:grappa}. Both methods can be broken down into the two steps. SENSE is an image-domain method that uses an explicit calibration scan to derive explicit spatial coil sensitivity maps (CSM), which is then used as a spatial weighting operator during the reconstruction step. GRAPPA uses auto-calibration signal (ACS) lines or a separate calibration scan to derive convolution kernels in k-space that enforce data consistency. In this work, the calibration of ESPIRiT \cite{ref:uecker} is studied. ESPIRiT is a method that bridges SENSE and GRAPPA by using ACS to derive so called ESPIRiT maps that can be used in a SENSE-like reconstruction. In ESPIRiT, there are several user-set parameters in the calibration that determine the quality of the resulting ESPIRiT maps. These parameters are described in the theory section. 
While ESPIRiT is in general robust to variation in these parameters, occasionally, poor parameter choice can result in reduced PI performance like noise amplification or image attenuation. This is exemplified in Figure \ref{fig:variability}, where different parameter choices result in ESPIRiT maps with significant variations in terms of signal attenuation and noise amplification. This motivates the study of ESPIRiT parameter selection. Since the quality of MRI reconstruction is often quantified in terms of Mean Squared Error (MSE), it is desirable to select parameters that result in ESPIRiT maps that would result in a minimum MSE reconstruction. However, calculating MSE as a function of the parameters is impossible without access to the true, under-lying noise-free image. To overcome this limitation, the parameter selection problem is reformulated into a denoising problem, where ESPIRiT is viewed as a linear denoiser of the calibration data. This formulation allows Stein's Unbiased Risk Estimate (SURE) \cite{ref:stein} to be used. SURE is a technique that calculates the expected MSE of a given denoiser, and has seen widespread applicability in the field of denoising \cite{luisier2007new, zhang1998adaptive, blu2007sure, luisier2008sure}. By calculating SURE as a function of ESPIRiT parameters, the expected MSE of the corresponding ESPIRiT maps can be calculated and used to determine the optimal ESPIRiT map in this expected MSE sense. Succinctly, SURE is used as a proxy to the underlying MSE for parameter optimization. In order to calculate SURE for a given denoiser, it is required to calculate the divergence of the denoiser with respect to the acquired data \cite{ref:stein, ref:ramani1}. This is often very difficult for general denoising algorithms, resulting in the emergence of Monte-Carlo methods. In particular, Ramani et al.\ demonstrates how Monte-Carlo simulations can calculate the divergence term required by SURE for a general (possibly non-linear) denoiser and provides a black-box framework for denoiser parameter optimization \cite{ref:ramani1}. This Monte-Carlo SURE-framework has seen successful application to parallel imaging. Weller et al.\ presents a SURE-based method of selecting regularization for k-space data-consistency linear operators (GRAPPA \cite{ref:grappa} and SPIRiT \cite{lustig2010spirit}) that optimizes the linear operator for reconstruction \cite{ref:weller1, ref:weller2}. However, as discussed by Ramani et al., the divergence of a linear denoising operator corresponds to the trace of the operator. Consequently, if it is feasible to calculate the trace, the exact SURE value is obtained while avoiding Monte-Carlo simulations. This work demonstrates that, when considering non-accelerated, densely-sampled, high-resolution data, the exact SURE value can be calculated as a function of ESPIRiT parameters. In this case, ESPIRiT decouples into image-domain pixel-wise operators for which the divergence can be calculated exactly to be the trace of said operators which can be calculated efficiently. This work then augments ESPIRiT with a projection onto ACS to construct an ACS denoiser, and calculates the exact SURE value for this approximate problem. It is seen that the parameters corresponding to the minimum SURE as derived from ACS correspond well to the optimal parameters derived from the minimum true squared error. This allows for near-optimum parameter estimation while overcoming the need to perform Monte-Carlo simulations. 
That being said, it is expected that the Monte-Carlo methods will achieve similar results with a larger computational cost. This work calculates SURE for ESPIRiT by partitioning the complex vector space into real and imaginary parts, and takes advantage of the Hermitian symmetric characteristics of the ESPIRiT operator for efficient calculation of the divergence term required by SURE. This is discussed in the Theory section and in the Appendix. By partitioning the vector space, the application of SURE to ESPIRiT in this work fits within the general linear denoiser framework presented by Ramani et al.\ \cite{ref:ramani1}. Using SURE as a metric to quantitatively evaluate the performance of ESPIRiT parameters is desirable as it avoids adding further model complexity and regularization to ESPIRiT. This is in contrast to methods where regularization, additional iterations and other techniques are incorporated into the model to build-in noise robustness while performing calibration \cite{xu2014robust, majumdar2013calibrationless, jin2016general, park2012adaptive}. While this work focuses on ESPIRiT, the main theme of this work is to demonstrate how the quantitative comparison of the denoising efficacy of a PI operator can be used to perform optimal PI calibration. The field of PI is extensive with many novel methods designed to fill different requirements and applications. The proposed reformulation of the calibration step can be applied to many other novel PI methods as well. For example, the matrix low-rank optimization in Parallel-LORAKS \cite{haldar2016p} can be reformulated into a matrix denoising problem, where SURE can be used to find the optimal soft threshold to denoise the matrix \cite{ref:candes}. The encoding matrix presented in Joint-SENSE \cite{ying2007joint} will also benefit from the application of SURE. SURE can be calculated as a function of the polynomial coefficients used in Joint-SENSE to approximate CSM to determine which derived CSM best denoises ACS data. For GRAPPA based methods like iterative GRAPPA \cite{zhao2008iterative}, a Tikhonov parameter can be introduced into the least squared fitting procedure, and this can be optimized over with SURE. However, this reformulation is unlikely to benefit techniques involving non-linear functions acting on noise as the noise model assumed by SURE no longer holds. For example, \cite{chang2012nonlinear, lyu2018kernl} involve non-linear mapping of data, and \cite{majumdar2013calibrationless, jin2016general} explicitly rely on CS for noise robustness; both of which change the noise model assumed by SURE. Additionally, while this work presents the concept of ESPIRiT calibration based on its denoising efficacy, there have been numerous works done on using SURE in the regime of regularization selection for MRI reconstruction. For example, Ramani et al.\ presented a framework for tuning non-linear reconstructions based on their respective Jacobians evaluated on acquired data \cite{ref:ramani2}; and Marin et al.\ presented a parameterized wavelet-based estimator that uses SURE to determine the optimal parameters for reconstruction \cite{ref:marin}. These works focus on selecting optimal regularization parameters for reconstruction, and often include a data-consistency term in the reconstruction formulation. Consequently, the SURE-based calibration in this work should work synergistically with these methods as it is expected to improve the performance of the data-consistency operator. 
Finally, one of the parameters in ESPIRiT determines the rank of a block Hankel structured auto-calibration matrix. This work first demonstrates how SURE can be used to optimize for the same, then presents a singular value soft-thresholding heuristic derived from the SURE-based low rank matrix approximation theory developed by Cand\`es et al.\ \cite{ref:candes} to reduce computational overhead. The heuristic uses the SURE-optimal soft threshold derived by Cand\`es et al. to weight the singular vectors of the auto-calibration matrix. This is seen to provide similar results to an exhaustive rank search while significantly reducing computational burden. The details of this heuristic is discussed in the sections below. It should be explicitly noted that this work assumes that noise is additive, complex normal noise that is not correlated. If there is noise correlation between channels, the data should be whitened using noise characteristics derived from either the data itself or an explicit noise acquisition. \section{Theory} \label{sec:theory} \subsection{ESPIRiT} ESPIRiT is a technique that combines the data-based calibration advantages of GRAPPA to derive SENSE-like relative CSM. These CSM are derived from the null-space of the auto-calibration matrix, which is formed by sweeping a kernel through the calibration region (as depicted in Figure \ref{fig:espirit}). The following provides a brief overview of ESPIRiT, with emphasis on the parameters that need to be tuned. For more detail about ESPIRiT and comparison to other PI methods, please refer to \cite{ref:uecker}. Let $A$ be the auto-calibration matrix, $V_{||}$ be a matrix consisting of the right singular vectors of $A$ corresponding to the dominant singular values of $A$, $V_{\perp}$ be a matrix consisting of the remaining singular vectors that span the null space of $A$, $U$ be a matrix consisting of the left singular vectors of $A$ and $\Sigma$ be a diagonal matrix consisting of the singular values of $A$ in descending magnitude order. Then, by taking the singular value decomposition, \begin{equation} A = U \Sigma V^* \text{ with } V = \mat{V_{||}, & V_{\perp}}. \label{eq:svd} \end{equation} Let $y$ be the noise-free, fully-sampled, underlying multi-channel k-space data (at the sampling locations). Let $R_r$ be an operator that extracts a block from $y$ around the k-space position $r$ (including $r$ itself) across all the channels. Since the signal $y$ should be orthogonal to the null-space of $A$, the following normal equations are derived: \begin{subequations} \begin{equation} \left(\sum_r R_r^* V_\perp V_\perp^* R_r\right) y = 0 \end{equation} \begin{equation} \left(\sum_r R_r^* \left(I - V_{||} V_{||}^* \right)R_r\right) y = 0 \end{equation} \begin{equation} \underbrace{\left(\sum_r R_r^* R_r\right)^{-1} \left(\sum_r R_r^* \left(V_{||} V_{||}^* \right)R_r\right)}_{\mathcal{W}} y = y \label{eq:espiritconst} \end{equation} \label{eq:espiritnormal} \end{subequations} $\left(\sum_r R_r^* R_r\right)^{-1}$ effectively scales each channel by a scalar that is the inverse of the number of k-space elements selected by $R_r$. Thus, $\mathcal{W}$ is a convolution with a matrix-valued kernel where the matrix operates on the channel dimension. This convolution operator is decoupled into pixel-wise operations along the channel dimension in the image domain. In other words, let $x$ be the true multi-channel image data such that $y = F x$ where $F$ is the unitary Fourier transform operator. 
Equation \eqref{eq:espiritconst} becomes, \begin{equation} \underbrace{(F^* \mathcal{W} F)}_{\mathcal{G}} x = x, \label{eq:espiritimconst} \end{equation} where $\mathcal{G}$ is defined to be $F^*\mathcal{W}F$. Particularly, since $\mathcal{G}$ is decoupled into pixel-wise operators, it suffices to look at the effect of $\mathcal{G}$ on a particular image pixel with index $q$. \begin{equation} \mathcal{G}(q) x(q) = x(q) \label{eq:espiritimconstpix} \end{equation} $x(q)$ is a vector of dimension equal to the number of CSM. Let the source image be $m$ and let $S$ be the vector constructed from stacking the coil sensitivities of the different channels. The SENSE model states, \begin{equation} x(q) = S(q) m(q) \label{eq:forwardmodel} \end{equation} $m(q)$ is a scalar and $S(q)$ is a vector of dimension equal to the number of coil-sensitivity maps. Applying this to \eqref{eq:espiritimconstpix}, \begin{equation} \mathcal{G}(q) S(q) m(q) = S(q) m(q) \end{equation} If $m(q) \neq 0$, the following eigenvalue-eigenvector condition is derived: \begin{equation} \mathcal{G}(q) S(q) = 1 \cdot S(q) \label{eq:espiriteig} \end{equation} Thus, in the ideal case, sensitivity maps are eigenvectors of $\mathcal{G}$ with eigenvalue one, with the other eigenvectors of $\mathcal{G}$ having eigenvalues much smaller than one. This comes from observing that $\mathcal{W}$ is an average of projections and is consequently positive semi-definite with eigenvalues smaller than or equal to one. In practice, due to data-inconsistencies (like noise), the observed eigenvalues of the eigenvector maps are very close to (but not exactly) one. This motivates defining an approximate ``$\approx 1$'' condition for eigenvalues that would be $1$ in the ideal case but are instead merely close to one. The eigenvalue decomposition of $\mathcal{G}$ is taken and the eigenvectors corresponding to eigenvalues ``$\approx 1$'' are considered to be ESPIRiT maps. In some cases, multiple eigenvalues ``$\approx 1$'' appear, such as when the calibration region supports a field-of-view smaller than the object, which results in multiple sensitivity values at a pixel location due to aliasing \cite{ref:uecker}. This motivates using more than one set of these eigenvector maps to better capture the desired signal. \subsection{Parameter choices in ESPIRiT} There are three parameters in the ESPIRiT calibration. The first is the size of the window, or kernel, that is swept through the ACS to construct the auto-calibration matrix $A$; the second is the size of the signal subspace used to partition $V_{||}$ from $V_{\perp}$; and the third is the threshold, above which eigenvalues are considered ``$\approx 1$''. These parameters are denoted as the kernel size $(k)$, the subspace size $(w)$ and the eigenvalue crop threshold $(c)$, respectively. Let $n_c$ be the number of channels. For 2D data, a kernel size $(k)$ implies that a window of dimensions $(k \times k \times n_c)$ is swept through the calibration region to construct the rows of the auto-calibration matrix $A$. Consequently, $V_{||}$ and $V_{\perp}$ together span a linear space of dimension $(k \cdot k \cdot n_c)$, and the rank of $V_{||}$ can vary from $0$ to $(k \cdot k \cdot n_c)$. The chosen rank of $V_{||}$ is the subspace size $(w)$ and is measured in Window Normalized Singular Values Number (WNSVN). This normalizes the rank by the kernel spatial dimensions and thus $(w)$ is in the range from $0$ to $n_c$. A subspace size of $(w)$ implies $V_{||}$ consists of $w \cdot k^2$ orthogonal vectors.
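Before turning to the third parameter, the roles of the kernel size $(k)$ and the subspace size $(w)$ can be made concrete with a short, schematic Python/NumPy sketch that assembles the calibration matrix from a 2D ACS block and keeps the first $w \cdot k^2$ right singular vectors as $V_{||}$. The array conventions (a channel-last ACS block, a simple double loop over window positions) are assumptions made purely for illustration; this is not the implementation used in this work.
\begin{verbatim}
import numpy as np

def calibration_matrix(acs, k):
    """Build A by sweeping a (k x k x n_c) window through the ACS block.

    acs : complex array of shape (ny, nx, n_c)
    Each vectorized window becomes one row of A.
    """
    ny, nx, n_c = acs.shape
    rows = []
    for iy in range(ny - k + 1):
        for ix in range(nx - k + 1):
            rows.append(acs[iy:iy + k, ix:ix + k, :].ravel())
    return np.array(rows)

def signal_subspace(acs, k, w):
    """Return V_par for a subspace size w given in WNSVN units.

    The number of retained right singular vectors is round(w * k**2),
    so w ranges from 0 to n_c.
    """
    A = calibration_matrix(acs, k)
    _, s, Vh = np.linalg.svd(A, full_matrices=True)
    n_keep = int(round(w * k ** 2))
    return Vh.conj().T[:, :n_keep], s
\end{verbatim}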
Finally, the eigenvalue crop threshold $(c)$ determines the pixel positions $(q)$ where the eigenvectors of operator $\mathcal{G}(q)$ are well defined, which corresponds to pixel positions $(q)$ within the object. Too high a threshold results in direct attenuation of the signal, while too small a threshold admits eigenvectors of operator $\mathcal{G}(q)$ from positions that are not well defined in terms of \eqref{eq:espiriteig}, which in turn lets in signal that is not necessarily from the object, such as noise. The feasible values of the eigenvalue crop threshold $(c)$ are from $0$ to $1$, with realistic values residing in the range from $0.7$ to $0.95$. The rank of $V_{||}$ also represents a similar trade-off. Since the same underlying data is observed through multiple receive channels, the auto-calibration matrix $A$ is expected to be low-rank, but this is often not the case due to noise and other data inconsistencies. Since the ESPIRiT operator is derived from $V_{||}$, too large a subspace size implies that the ESPIRiT operator captures these inconsistencies within its range space while too small a subspace size yields an operator with insufficient information to properly describe the underlying signal. In the latter case, a projection onto the operator's range space will result in some signal loss. Lastly, the kernel size $(k)$ of ESPIRiT determines the smoothness of the resulting ESPIRiT maps, where a smaller kernel results in smoother CSM. However, too small a kernel size may result in a calibration matrix $A$ whose $V_{||}$ does not contain sufficient information to create the SENSE-like operator. While ESPIRiT is fairly robust to these parameter choices, there is variability in map quality (such as how well the maps capture the field of view of the object) and choosing parameters that result in optimal reconstruction is desirable. Figure \ref{fig:variability} exemplifies the variability in ESPIRiT maps when varying the subspace size $(w)$ and eigenvalue crop threshold $(c)$ for a fixed kernel size $(k)$. In order to develop a robust, data-driven method of automatic parameter selection, Stein's unbiased risk estimate (SURE) is explored as a metric to select parameters that are optimal in an expected mean squared error sense. \subsection{Stein's unbiased risk estimate} Stein's unbiased risk estimate is a data-driven method of calculating the expected mean squared error of an estimator in the presence of zero-mean additive Gaussian noise, given that the estimator is differentiable with respect to the data almost everywhere \cite{ref:stein}. The SURE expression for a Hermitian symmetric operator is presented and then extended to ESPIRiT. Let $x \in \mathbb{C}^m$ be the ground truth to be estimated; $n$ be zero-mean, additive, Gaussian complex noise with standard deviation $\sigma$; and $x_\text{acq} = x + n$ be the acquired data. Let $\pmb{P}_\theta \in \mathbb{C}^{m\times m}$ be a Hermitian symmetric linear operator, parameterized by $\theta$, that is an estimator of $x$ given $x_{\text{acq}}$. This work assumes that the noise is additive, normal noise that is not correlated. If there is noise correlation between channels, the data should be pre-whitened using noise characteristics derived either from the data itself or from an explicit noise measurement. Let $\E{\cdot}$ denote the expected value operation.
Partitioning the complex vector space into real and imaginary parts and noting that the divergence of a linear operator is the trace of the linear operator (steps described in the appendix), Stein's first theorem \cite{ref:stein} implies: \begin{subequations} \begin{equation} \E{\norm{\pmb{P}_\theta x_\text{acq} - x}_2^2} = \E{SURE_{\pmb{P}_\theta} (x_\text{acq})} \label{eq:surebasic_a} \end{equation} \begin{equation} \begin{array}{rcl} SURE_{\pmb{P}_\theta} \left(x_\text{acq}\right) &=& -m \sigma^2 + \norm{\left(\pmb{P}_\theta - I\right)x_\text{acq}}_2^2 + 2 \sigma^2 \, \left[\text{div}_{x_\text{acq}}\left(\pmb{P}_\theta\right)\right]\left(x_\text{acq}\right) \\ &=& -m \sigma^2 + \norm{\left(\pmb{P}_\theta - I\right)x_\text{acq}}_2^2 + 2 \sigma^2 \text{trace} \left(\pmb{P}_\theta\right) \end{array} \label{eq:surebasic_b} \end{equation} \label{eq:surebasic} \end{subequations} Particularly, note that Equation $\eqref{eq:surebasic_b}$ is independent of $x$, and only depends on the acquired data $x_\text{acq}$. Thus SURE can be used as a surrogate for the expected mean squared error to find the optimal parameters $\theta^*$. \subsection{SURE with ESPIRiT} The main concepts will first be illustrated through a non-accelerated, densely-sampled, high-resolution case where the requirement of zero-mean additive normal noise is satisfied. In this case, there will be no noise-amplification due to data interpolation and the efficacy of the ESPIRiT operator is tied to the quality of the denoising it performs. This will later extend to the case when the densely-sampled region is restricted to the low-resolution ACS signal. To quantify denoising, first define an ESPIRiT projection operator, which denoises the acquired data using a projection onto the ESPIRiT maps. An illustration is provided in Figure \ref{fig:espirit}f. Let $x_{acq}$ denote the acquired multi-channel images obtained from applying a Discrete Fourier transform on the non-accelerated, densely-sampled, high-resolution acquired k-space. Let $n_c$ be the number of coils. Let $S^i(q)$ be the $i^{th}$ eigenvector of $\mathcal{G}(q)$ with eigenvalue $\lambda_i(q)$. $S^i(q)$ is a vector of dimension $n_c$ and has unit norm. Since $\mathcal{G}$, and consequently $\mathcal{G}(q)$, is Hermitian symmetric, the eigenvectors $S^i(q)$ will be orthonormal to each other. The ESPIRiT projection operator at pixel position $q$, which will be denoted as $P(q)$, is defined as: \begin{equation} \pmb{P}(q) = \mat{| & \null & | \\ S^1(q) & \dots & S^{n_c}(q) \\ | & \null & |} \mat{| & \null & | \\ S^1(q) & \dots & S^{n_c}(q) \\ | & \null & |}^* \label{eq:Scomp} \end{equation} The aggregate ESPIRiT projection operator $\pmb{P}$ can be represented by stacking the pixel-wise operators diagonally: \begin{equation} \pmb{P} = \mat{ \pmb{P}(1) & \hdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \hdots & \pmb{P}(N) } \text{ where $N$ is the number of image pixels.} \label{eq:espiritprojop} \end{equation} The range space of the ESPIRiT projection operator describes the linear subspace the desired signal is expected to reside in. Projecting onto this subspace is expected to remove undesirable signal such as additive, white, Gaussian noise that contaminates MRI data. This interpretation of ESPIRiT as a denoiser allows for the application of SURE. Let $\pmb{P}_\theta$ be the projection operator derived from a particular ESPIRiT parameter set $\theta = (k, w, c)$. 
Define $x$ as the densely-sampled, high-resolution, noise-less, multi-channel images; $n$ as additive, complex normal noise of standard deviation $\sigma$; $x_\text{acq} = x + n$ as the acquired coil images; and $I$ as the identity operator. In this densely-sampled, high-resolution case, finding the optimal projection operator is reduced to finding the optimum $\theta^*$ that results in a projection operator $\pmb{P}_{\theta^*}$ that best denoises the input data $x_\text{acq}$. That is to say, \begin{equation} \theta^* = \argmin_\theta \norm{x - \pmb{P}_\theta x_\text{acq}}_2^2 \label{eq:opt} \end{equation} Since ESPIRiT is a pixel-wise linear projection operator in the image domain, the divergence contributed by a single pixel $x(q)$ is the trace of the linear operator affecting that pixel (denoted $\pmb{P}_\theta(q)$; see the appendix). Thus, the SURE value of $\pmb{P}_\theta$ can be calculated by summing over all pixel positions $q$. \begin{equation} SURE_\theta(x_\text{acq}) = \sum_q \left[-n_c \sigma^2 + \norm{\left(\pmb{P}_\theta(q) - I\right)x_\text{acq}(q)}_2^2 + 2 \sigma^2 \text{trace } \pmb{P}_\theta(q)\right] \label{eq:sureespirit} \end{equation} Equation $\eqref{eq:sureespirit}$ can then be used as a surrogate for $\eqref{eq:opt}$. \begin{equation} \theta^* = \argmin_\theta \norm{\pmb{P}_\theta x_\text{acq} - x}_2^2 \approx \argmin_\theta SURE_\theta\left(x_\text{acq}\right) \label{eq:suresol} \end{equation} Since the trace of $AB$ is equal to the trace of $BA$ for any matrices $A$ and $B$, and since $S^i(q)$ are orthonormal to each other, the trace of $\pmb{P}_\theta$ can be efficiently calculated by permuting the matrices in Equation \eqref{eq:Scomp}. \begin{equation} \text{trace } \pmb{P}_\theta = \sum_q \sum_i \norm{S^i(q)}^2 \label{eq:trace} \end{equation} Thus, in the above case, it is possible to search through values of $\theta$ to determine the optimal projection operator $\pmb{P}_\theta$ for calibration. Equation \eqref{eq:suresol} is used as a basis to optimize for the ESPIRiT projection operator. A variant of \eqref{eq:suresol} is presented in the following for the case in which the densely sampled region is limited to only the ACS data. \subsection{Accelerated case with the auto-calibration signal} In order to accelerate PI scans while still being auto-calibrating, the ESPIRiT operator is estimated from only the low-resolution, densely sampled ACS data. This is accomplished by incorporating a Fourier sampling operator into the projection operator $\pmb{P}$ and enforcing data consistency within the ACS data in k-space. Define $R$ to be the operator that outputs an ACS region from densely-sampled k-space. Let a new, augmented projection operator $\pmb{P}_\theta^R$ be defined as follows: \begin{equation} \pmb{P}_\theta^R = R F S S^* F^* R^* \label{eq:espiritprojopcal} \end{equation} Let $y$ be the noise-less, densely-sampled, low-resolution auto-calibration region in k-space; $n$ be additive, zero-mean, complex normal noise of standard deviation $\sigma$; and $y_\text{acq} = y + n$ be the acquired auto-calibration region. Then, interpreting \eqref{eq:espiritprojopcal} as a denoiser of ACS data yields the following SURE expression: \begin{equation} SURE_\theta(y_\text{acq})= \norm{\left(\pmb{P}_\theta^R-I\right) y_\text{acq}}_2^2 + 2 \sigma^2 \text{trace } \pmb{P}_\theta^R + C \label{eq:sureespiritcal} \end{equation} $C$ is some constant term that is ignored because it does not affect the minimum.
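As a concrete illustration of Equation \eqref{eq:sureespirit}, the following schematic Python/NumPy sketch evaluates SURE for the pixel-wise ESPIRiT projection built from cropped eigenvector maps. The array layout and the convention that map entries at pixels below the crop threshold have been set to zero are assumptions made for illustration only; because the retained eigenvectors are orthonormal, the trace term reduces to counting retained map vectors, which is exactly Equation \eqref{eq:trace}.
\begin{verbatim}
import numpy as np

def sure_espirit(x_acq, maps, sigma):
    """Evaluate SURE for the pixel-wise ESPIRiT projection (illustrative sketch).

    x_acq : (N, n_c) acquired multi-channel image pixels
    maps  : (N, n_c, n_maps) eigenvector maps; entries at pixels whose
            eigenvalue falls below the crop threshold are assumed zeroed
    sigma : noise standard deviation
    """
    N, n_c = x_acq.shape
    # Pixel-wise projection P x = S (S^H x).
    coeffs = np.einsum('qcm,qc->qm', maps.conj(), x_acq)
    proj = np.einsum('qcm,qm->qc', maps, coeffs)
    residual = np.sum(np.abs(proj - x_acq) ** 2)
    # trace(P) = sum_q sum_i ||S^i(q)||^2, i.e. the number of retained vectors.
    trace = np.sum(np.abs(maps) ** 2)
    return -N * n_c * sigma ** 2 + residual + 2.0 * sigma ** 2 * trace
\end{verbatim}
The same bookkeeping carries over to the ACS-restricted operator $\pmb{P}_\theta^R$, whose trace is obtained analogously below.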
Since the trace of a matrix does not depend on a basis and $F^*$ is a change of basis, it follows that, \begin{equation} \text{trace } \pmb{P}_\theta^R = \text{trace } R F S S^* F^* R^* = \text{trace } F^* R F S S^* F^* R^* F \end{equation} Thus, the trace can be calculated efficiently in a manner similar to Equation \eqref{eq:trace}. \begin{equation} \text{trace } \pmb{P}_\theta^R = \sum_q \sum_i \norm{\underline{S}^i(q)}^2 \text{ where } \underline{S} = F^*RFS \end{equation} Note that the exact SURE value is calculated for a denoiser constructed by augmenting ESPIRiT with a projection onto ACS. Since this is a different denoising operator (compared to the previous densely-sampled, high-resolution non-accelerated case), the SURE values calculated (via Equations $\eqref{eq:sureespiritcal}$ and $\eqref{eq:sureespirit}$) will not necessarily match. That being said, the SURE values calculated by Equations $\eqref{eq:sureespiritcal}$ and $\eqref{eq:sureespirit}$ are exact for their respective denoisers. The parameters $(\theta)$ obtained from minimizing Equations \eqref{eq:sureespirit} and \eqref{eq:sureespiritcal} often correspond in practice. With the assumption that the optimal ESPIRiT maps are derived from parameters that best denoise densely-sampled, high-resolution, non-accelerated data, near optimal parameter selection while being limited to ACS is achieved. Enforcing consistency of ACS data and the corresponding SURE expression is seen to be a good representative of the performance of ESPIRiT maps. This allows different parameter sets to be searched through to obtain the ESPIRiT projection operator that results in near-optimal performance in the expected mean squared error sense even when restricted to ACS data. \subsection{Soft-threshold based weighting for subspace estimation} In practice, sweeping through different kernel sizes and signal subspace sizes (or the rank of $A$ and $V_{||}$) is computationally intensive whereas sweeping through different thresholds to determine the eigenvalue crop threshold $(c)$ is relatively quick. To aid viability, a heuristic that appropriately weights the right singular vectors based on singular value soft-thresholding as an alternative to sweeping through different rank values is presented. Since the same underlying data is being observed through multiple channels, the auto-calibration matrix $A$ is expected to be low rank. However, due to noise and other data inconsistencies present in the data, the observed auto-calibration matrix often has full rank. A low-rank matrix estimate of $A$ is then constructed by hard thresholding the singular values of $A$. Ideally, it would be beneficial to use SURE to determine the optimal low-rank matrix estimate in the sense of ``denoising'' the matrix. However, this is difficult to do since a hard threshold is not weakly-differentiable. A common alternative is to soft-threshold the singular values. Consider the singular value decomposition of $A$ in its dyadic form. \begin{equation} A = \sum_i s_i u_i v_i^* \end{equation} Here, $u_i$ are the left singular vectors, $v_i$ are the right singular vectors and $s_i$ are the singular values. A soft-threshold low rank matrix estimate of $A$, which will be denoted as $\widehat{A}$, is constructed by soft-thresholding the singular values by threshold $\lambda$. 
\begin{equation} \widehat{A} = \sum_i (s_i-\lambda)_+ u_i v_i^* \label{eq:lowrankest} \end{equation} If $A$ does not contain structured noise, Cand\`es et al.\ showed that it is possible to use SURE to efficiently find the optimal $\lambda$ in Equation \eqref{eq:lowrankest}, which will be denoted as $\lambda^*$ \cite{ref:candes}. However, since $A$ is block-Hankel, it contains structured noise. That being said, in this work, the method derived by Cand\`es et al.\ is applied directly to obtain an approximate $\lambda^*$. With more relaxed computational efficiency requirements, black-box Monte-Carlo methods may be utilized to estimate $\lambda^*$, such as the work by Ramani et al.\ \cite{ref:ramani1}. If computational efficiency is a non-issue, it is possible to utilize Equations \eqref{eq:sureespirit} and \eqref{eq:sureespiritcal} to enumerate all possible ranks of the matrix $A$ and determine its optimal rank. This is demonstrated by Experiment (a) in the Methods section. The goal of this heuristic is to avoid enumerating the rank of $A$ by incorporating $\lambda^*$ to form a weighted subspace estimate. To motivate weighting the singular vectors, consider the following. The subspace selection problem can be modeled by a hard threshold on the singular values using a threshold $\lambda$. \begin{equation} V_{||} = V W \text{ where } W \text{ is a diagonal weight matrix with } W_{ii} = \left\{\begin{array}{rl}1, & \left(s_i > \lambda\right)\\ 0, & \text{otherwise}\end{array}\right. \label{eq:surehardthresh} \end{equation} Instead, the singular vectors are weighted with the soft-threshold variant used in \eqref{eq:lowrankest}. Let this weighted subspace be $V_{||}^w$. \begin{equation} V_{||}^w = V W \text{ where } W \text{ is a diagonal weight matrix with } W_{ii} = \frac{\left(s_i - \lambda \right)_+}{s_i} \label{eq:sureweightsub} \end{equation} The parameter $\lambda^*$ calculated by Cand\`es' SURE method is used in \eqref{eq:sureweightsub} to obtain a weighted subspace estimate $V_{||}^w$. This weighted subspace estimate is then used in Equation \eqref{eq:espiritconst} instead of $V_{||}$. The ESPIRiT operators derived from $V_{||}^w$ retain the differentiability property utilized by Equations \eqref{eq:sureespirit} and \eqref{eq:sureespiritcal}. Consequently, the exact SURE value as a function of the crop threshold can be calculated for ESPIRiT operators derived from $V_{||}^w$. \section{Methods} \label{sec:methods} To verify the efficacy of the technique, exhaustive simulation experiments were conducted using MATLAB (MathWorks, Natick, MA) to compare the true squared error and the expected risk as calculated by SURE for ESPIRiT. The feasibility of the method is demonstrated by in-vivo experiments using ESPIRiT. Additionally, the soft-threshold heuristic with an eigenvalue crop threshold search (using a method akin to a line search) is implemented in the Berkeley Advanced Reconstruction Toolbox (BART) \cite{ref:uecker2}. \subsection{Simulation Experiments} Fully-sampled, high-resolution data of the human brain were acquired on a 1.5T scanner (GE, Waukesha, WI) using an eight-channel coil for a subject (with IRB approval and informed consent obtained). Data were acquired using an inversion-recovery prepared 3D RF-spoiled gradient-echo sequence with the following parameters: $TR/TE = 12.2/5.2\;ms$, $TI = 450\;ms$, $FA = 20^\circ$, $BW = 15\;kHz$, and a matrix size of $256\times180\times230$ with $1\; mm$ isotropic resolution.
This was done to have a ground truth to compare against and to verify the accuracy of SURE as an estimator of the mean squared error. The 3D dataset was Fourier transformed along the readout direction and a slice along the readout dimension was taken. ESPIRiT maps were calculated from this slice and the projection of this slice onto the ESPIRiT maps was considered to be the true, underlying ground truth. The ground truth has dimensions $230 \times 180 \times 8$, where $8$ is the number of channels, and an $l_2$-norm of $8369.46$. Additive complex k-space noise of standard deviation $4$ was retrospectively added to the ground truth and the result was considered to be the acquired data. For different parameter values $(\theta)$, ESPIRiT maps were generated and the true squared error between the projection and the ground truth was calculated. This error was compared to the calculated SURE value. The soft-threshold based weighting heuristic was also tested by similarly varying kernel sizes and eigenvalue crop thresholds and comparing the true squared error to SURE. For all the previous cases, the SURE approximation when restricted to low-resolution, densely-sampled ACS data was also calculated. In Experiment (a), a fixed kernel size of $6$ was used and the subspace size and eigenvalue crop thresholds were varied. For each subspace size and eigenvalue crop threshold, the true squared error, the SURE value given high-resolution, densely sampled data, and the SURE value given low-resolution, densely-sampled ACS data were calculated. The latter curve was normalized by a constant to better compare the minima of the curves. In Experiment (b), a fixed kernel size of $6$ was used along with the soft-threshold based subspace weighting heuristic. The true squared error, the SURE value given high-resolution, densely sampled data, and the SURE value given low-resolution, densely-sampled ACS data were calculated as the crop threshold was varied. The latter curve was normalized by a constant to better compare the minima of the curves. In Experiment (c), the same three curves were calculated by varying the kernel size, subspace size and eigenvalue crop threshold parameters. The minima across the subspace sizes and eigenvalue crop thresholds for a given kernel size were taken. This experiment was conducted to test the dependence of ESPIRiT maps on the kernel size $(k)$ assuming the optimal subspace size and eigenvalue crop threshold for that particular kernel size $(k)$. In Experiment (d), the same three curves were calculated by varying the kernel size and eigenvalue crop threshold parameters while using the soft-threshold based subspace weighting heuristic. The minima across the eigenvalue crop thresholds for a given kernel size were taken. This experiment was conducted to test the dependence of ESPIRiT maps on the kernel size $(k)$ assuming the optimal eigenvalue crop threshold for that particular kernel size $(k)$. In Experiment (e), a fixed kernel size of $6$ was used along with the soft-threshold based subspace weighting heuristic, and the eigenvalue crop threshold was varied. For each eigenvalue crop threshold, the true squared error and the corresponding g-factor maps were calculated. The maximum and average g-factor within the field of view of the desired object are used to demonstrate how ESPIRiT maps calibrated according to the minimum squared error can aid accelerated acquisitions.
The g-factor maps were calculated assuming equispaced $2\times 2$ sub-sampling in the phase encode directions. Experiments (a, b, c, d, e) used an ACS size of $24 \times 24$. \subsection{In-Vivo Experiments} In Experiment (f), a 3D accelerated dataset was acquired on a 3T Achieva scanner (Philips, Best, The Netherlands) with IRB approval and informed consent obtained. The acquisition was a $T_1$-weighted, TFE dataset acquired using Poisson disk under-sampling ($R \sim 2$) with $18\times18$ ACS lines using an 8-channel head coil. The k-space data were pre-whitened using scanner software based on a noise measurement. A unitary inverse Fourier transform was taken along the readout direction and a slice was extracted. ESPIRiT calibration was performed on the dataset using parameters selected by the SURE-based method with the soft-threshold based weighting heuristic. A fixed kernel size of $6$ was used along with the line-search-like method for calculating the eigenvalue crop threshold. A PI+compressed sensing (CS) iterative reconstruction was performed with the SURE-calibrated ESPIRiT maps (with an $l_1$ regularization of $0.01$ using a wavelet sparsity prior). The iterative reconstruction used BART. In Experiment (g), one pre-whitened 3D dataset was acquired on a 3T Skyra scanner (Siemens Healthcare, Erlangen, Germany) with IRB approval and informed consent obtained. The acquisition was a high resolution, densely sampled $T_2$-weighted, TSE dataset with no under-sampling. The data were coil compressed from 32 channels to 8 channels using geometric coil compression \cite{zhang2013coil}. A unitary inverse Fourier transform was taken along the readout direction and a slice was extracted. The densely sampled data were retrospectively under-sampled using a $2 \times 2$ Poisson disk sampling mask and a $2 \times 2$ equispaced sampling mask. ESPIRiT calibration was performed on the dataset using parameters selected by the SURE-based method with the soft-threshold based weighting heuristic. A fixed kernel size of $6$ was used along with the line-search-like method for calculating the eigenvalue crop threshold. PI+CS iterative reconstructions were performed with the SURE-calibrated ESPIRiT maps (with an $l_1$ regularization of $0.01$ using a wavelet sparsity prior). These iterative reconstructions were implemented in BART. In Experiment (h), the SURE-based parameter selection was applied to the same data used in the original ESPIRiT work \cite{ref:uecker} that demonstrated ESPIRiT's robustness to aliasing due to the calibration region supporting a FOV smaller than the object. The same 2D spin-echo dataset ($TR/TE=550/14\;ms$, $FA=90^\circ$, $BW = 19\;kHz$, matrix size: $320\times168$, slice thickness: $3\;mm$, 24 reference lines) with an FOV of $(200 \times 150)\;mm^2$, acquired at $1.5T$ using an $8$-channel head coil, was used. Additionally, the data were retrospectively under-sampled using an equispaced $2\times2$ sampling mask with an ACS size of $24 \times 24$. The data were Fourier transformed to the image domain and the noise variance was estimated from a corner of the image data that did not contain any desired signal. To determine the ESPIRiT maps, a fixed kernel size of $6$ was used along with the soft-threshold based weighted subspace estimate. The eigenvalue crop threshold was calculated using the line-search-like method. The resulting ESPIRiT maps were then calculated to verify robustness to aliasing.
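For concreteness, the soft-threshold based weighting heuristic used in the experiments above can be sketched as follows. This is a minimal illustration only: the function and array names are assumptions made for exposition and are not taken from the released MATLAB or BART code, and the threshold $\lambda^*$ is assumed to have been obtained separately (e.g.\ with Cand\`es' SURE formula for singular-value soft-thresholding, or by a sweep).
\begin{verbatim}
import numpy as np

def weighted_subspace(A, lam):
    # A   : block-Hankel calibration matrix built from the ACS data.
    # lam : soft threshold on the singular values (e.g. lambda*).
    # Returns V_par_w = V W with W_ii = (s_i - lam)_+ / s_i,
    # the weighted right-singular-vector subspace of Eq. (sureweightsub).
    _, s, vh = np.linalg.svd(A, full_matrices=False)
    w = np.maximum(s - lam, 0.0) / np.maximum(s, 1e-30)  # guard s_i = 0
    return vh.conj().T * w  # scale column v_i by w_i
\end{verbatim}
The weighted subspace returned here would then replace $V_{||}$ in Equation \eqref{eq:espiritconst} when forming the ESPIRiT operator, after which the eigenvalue crop threshold is the only parameter left to search over.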
\subsection{Characterization Experiments} Experiment (i): Using the same data as in Experiment (a), ESPIRiT maps are calculated as a function of calibration size $(r)$ using the soft-weighting heuristic and the MSE-optimal crop threshold. A calibration size of $r$ implies $(r \times r)$ ACS lines. The resulting projection MSE is also calculated. This is to characterize the effect of the number of ACS lines on deriving optimal ESPIRiT maps. Experiment (j) explores the possibility of using SURE for more finely-tuned parameterization by calibrating ESPIRiT maps on a slice-by-slice basis so that the calibration is SURE-optimal per slice. This should allow ESPIRiT's parameters to vary as a function of the noise level per slice. This concept is applied to the $T_2$-weighted dataset in Experiment (g), where the SURE-calibration uses the soft-weighting heuristic. Additionally, this experiment compares the SURE-optimal calibration to the default calibration parameters currently present in BART. Experiment (k): It is also worthwhile considering the effects of kernel size when using the soft-threshold heuristic and the SURE-optimized crop threshold. ESPIRiT calibration is performed on the same dataset as in Experiment (a), and the g-factor is calculated as a function of kernel size $(k)$. \section{Results} \label{sec:results} \subsection{Simulation Results} The simulation results of Experiments (a), (b), (c) and (d) are illustrated in Figure \ref{fig:espirit_sim}. In Experiments (a) and (b), it is seen that the true squared error calculated with the densely-sampled, high-resolution data and SURE correspond well. It is also seen that SURE as calculated from ACS data has a minimum close to the minimum of the true squared error. Additionally, the minimum true squared error of Experiment (b) is higher than the minimum true squared error of Experiment (a) by approximately $1.34\%$, demonstrating that the soft-threshold heuristic yields ESPIRiT maps that perform almost as well as the optimal ESPIRiT maps derived from enumerating through the rank of the auto-calibration matrix. With comparable MATLAB (MathWorks, Natick, MA) implementations, the exhaustive rank search and crop threshold search in Experiment (a) took days to complete while the exhaustive crop threshold search with the soft-threshold heuristic in Experiment (b) completed in a few hours on an Intel(R) Xeon(R) Gold 6138 CPU. In comparison, for Experiment (h), the soft-threshold heuristic and crop threshold search implemented in BART complete in $7.112$ seconds on the same CPU. Thus a significant computational gain is achieved with minimal performance penalty. Experiments (c) and (d) do not show perfect correspondence. However, the squared errors in these cases are on the order of $10^5$, which is much smaller than in Experiments (a) and (b), where the squared errors are on the order of $10^7$. The simulation results of Experiment (e) are illustrated in Figure \ref{fig:gfact}. Note how, until Point B, a decrease in the squared error corresponds to better g-factor performance, while any increase in the crop threshold beyond Point B attenuates the signal. This experiment demonstrates that, even though the crop threshold value reflected in Point A is sufficient, there is g-factor performance to be gained by optimizing for the minimum squared error. On simulated data that fits the model, it is seen that SURE as estimated from densely-sampled, high-resolution non-accelerated data is an accurate estimator of the squared error.
Furthermore, restricting ourselves to ACS data results in near-optimal parameter choices. \subsection{In-Vivo Results} The results of Experiments (f), (g) and (h) are depicted in Figures \ref{fig:invivo_t1}, \ref{fig:invivo_t2} and \ref{fig:res_aliasing}, respectively. Figures \ref{fig:invivo_t1} and \ref{fig:invivo_t2} show how SURE allows ESPIRiT to tightly capture the field of view of the desired object without attenuating the signal. Experiment (h) demonstrates that SURE calibration allows ESPIRiT to retain its robustness to aliasing when the FOV is smaller than the object, while still tightly capturing the field of view of the desired object. \subsection{Characterization Results} The result of Experiment (i) is illustrated in Figure \ref{fig:calib_size}. While a larger calibration size does result in a lower squared error, the advantage is on the order of $10^5$, which is much smaller than the gain from optimizing the other parameters depicted in Figure \ref{fig:espirit_sim}, which is on the order of $10^7$. The result of Experiment (j) is illustrated in Figure \ref{fig:crop_per_slice}. While the SURE-based parameters derive ESPIRiT maps that tightly support the FOV of the desired signal for most slices and are an improvement over the default parameters, the spurious signal outside the human body due to eye-motion related artifacts in Slice 125 results in ESPIRiT maps that do not tightly wrap around the FOV of the desired signal in both cases. Experiment (k): The g-factor performance of calibrated ESPIRiT maps as a function of kernel size can be seen in Figure \ref{fig:g_per_k}. This suggests that, when using an 8-channel coil, g-factor performance is fairly stable as a function of kernel size. \section{Discussion} \label{sec:discussion} With respect to an exhaustive subspace rank and crop threshold search, SURE as estimated from densely-sampled, high-resolution, non-accelerated data corresponds well to the true squared error, yielding optimal parameter selection in this full-data case since SURE is used as a proxy for the true squared error. The parameters that minimize SURE estimated from ACS data correspond well to the optimal parameters that minimize full-data SURE. With the assumption that the optimal ESPIRiT maps are derived from the parameters that best denoise densely-sampled, high-resolution, non-accelerated data, the resulting parameter choices are optimal in an expected error sense. The soft-threshold heuristic is seen to significantly ease the computational burden while yielding ESPIRiT maps that perform similarly to the optimal maps derived from an exhaustive rank search. Using SURE as a metric to determine ESPIRiT parameters results in consistent performance across different datasets. In practice, with the parameter ranges mentioned in the in-vivo experiments, the resulting parameter choices tend to optimize for SNR performance while causing no signal attenuation. Motion during the scan presents a challenge for estimating accurate auto-calibrated CSM, and the presented SURE-based parameter optimization cannot solve this issue. In the case of motion, the SURE-optimized parameters may lead to CSM that extend outside of the imaged object, as depicted by the eye-motion related artifacts in Slice 125 of Figure \ref{fig:crop_per_slice}. Since SURE tries to avoid data attenuation at all costs, the ESPIRiT maps calibrated at this slice do not tightly wrap around the FOV of the desired signal.
This is in contrast to Slice 1 of Figure \ref{fig:crop_per_slice}, where the absence of motion allows SURE and ESPIRiT to calibrate the map support tightly about the FOV of the desired signal. While the soft-threshold weighting heuristic and the line-search based crop threshold search do help alleviate some of the computational burden, there will always be an added cost. For Experiment (h), ESPIRiT in BART takes $0.416$ seconds using BART's default parameters, versus $7.112$ seconds using the soft-weighting heuristic and line search on an Intel(R) Xeon(R) Gold 6138 CPU. However, this method does offer a robust, data-consistent metric that can adapt to the noise level for denoising while avoiding signal attenuation. For practical usage, we recommend a kernel size of $6-8$ with the soft-threshold based weighted subspace heuristic and at least $24$ ACS lines. With respect to the ``soft'' SENSE model described in \cite{ref:uecker}, we recommend using two ESPIRiT maps. While SURE should help determine the number of ESPIRiT maps as well, it would contribute non-trivially to the computational burden for minimal gain. If it is known that the ACS-prescribed FOV is sufficient for the object, one set of ESPIRiT maps is usually sufficient. \section{Conclusion} \label{sec:conclusion} Using SURE as a metric to determine parameters in ESPIRiT allows for automatic parameter selection. In-vivo results are consistent with simulation and theoretical results. \section{Data Availability Statement} \label{sec:data} The MATLAB (MathWorks, Natick, MA) code and data that support the findings of Experiments $(a-e, i, k)$, used to produce Figures \ref{fig:espirit_sim}, \ref{fig:gfact}, \ref{fig:res_aliasing}, \ref{fig:calib_size}, and \ref{fig:g_per_k}, are openly available in \url{https://github.com/mikgroup/auto-espirit} at DOI:10.5281/zenodo.3679377. The $C$ code that supports the findings of Experiments $(f, g, j)$, used to produce Figures \ref{fig:invivo_t1}, \ref{fig:invivo_t2} and \ref{fig:crop_per_slice}, is openly available as a part of the BART Toolbox in \url{https://github.com/mrirecon/bart/} at DOI:10.5281/zenodo.3376744, reference \cite{ref:uecker2}.
\section{The ionization problem}\label{sec:1} A great achievement of quantum mechanics, which goes back to the very birth of the theory, is the satisfactory explanation of {\em the stability of atoms}. In this context, the fact that the electrons do not fall into the nucleus of an atom can be derived mathematically from a variant of {\em Heisenberg's uncertainty principle} (e.g. Hardy's or Sobolev's inequality). On the other hand, the deeper question ``How many electrons can a nucleus bind?'' has not been answered completely. From experiments it is widely believed that a neutral atom can bind at most one or two extra electrons, but proving this fact mathematically from the many-body Schr\"odinger equation remains a very challenging problem. In this article, we will limit the discussion to the Born--Oppenheimer approximation of non-relativistic atoms. To be precise, we consider a system of $N$ quantum electrons moving around a classical nucleus and interacting via Coulomb forces. The statistical properties of the electrons are encoded by a normalized wave function in $L^2(\mathbb{R}^{3N})$ which satisfies the anti-symmetry \begin{equation} \label{eq:Pauli} \Psi(x_1,...,x_i,...,x_j,...,x_N)= - \Psi(x_1,...,x_j, ..., x_i, ...,x_N), \quad \forall i\ne j, \end{equation} where $x_i\in \mathbb{R}^3$ stands for the position of the $i$-th electron (we ignore the spin of electrons for simplicity). Usually $|\Psi|^2$ is interpreted as the probability density of the $N$ electrons. The condition \eqref{eq:Pauli} is called {\em Pauli's exclusion principle}; in particular it implies that $\Psi(x_1,...,x_i,...,x_j,...,x_N)=0$ if $x_i=x_j$ with $i\ne j$, namely two electrons cannot occupy a common position. As we will see, Pauli's exclusion principle plays a crucial role in the ionization problem. The Hamiltonian of the system is \begin{equation} \label{eq:HN} H_{N} = \sum\limits_{i = 1}^N {\left( { - \Delta _{x_i} - \frac{Z} {{|x_i |}}} \right)} + \sum\limits_{1 \le i < j \le N} {\frac{1} {{|x_i - x_j |}}} . \end{equation} Here we use atomic units; in particular the electronic charge is $-1$ and the nuclear charge is $Z\in \mathbb{N}$. By Hardy's inequality \begin{equation} \label{eq:Hardy} -\Delta \ge \frac{1}{4|x|^2}\quad \text{on }L^2(\mathbb{R}^3) \end{equation} it is easy to see that $H_N$ is bounded from below on the core domain $C_c^\infty(\mathbb{R}^{3N})$, and hence it can be extended to a self-adjoint operator by Friedrichs' method with the form domain $H^1(\mathbb{R}^{3N})$. We will always restrict $H_N$ to the anti-symmetric subspace induced by \eqref{eq:Pauli}. We are interested in the ground state problem \begin{equation} \label{eq:EN} E_N = \inf \sigma(H_N)=\mathop {\inf }\limits_{||\Psi ||_{L^2} = 1} \langle \Psi ,H_{N} \Psi \rangle. \end{equation} Obviously, both $H_N$ and $E_N$ depend on $Z$, but we ignore this dependence in the notation. It is well-known that if a minimizer $\Psi$ of \eqref{eq:EN} exists, then it is a solution to the Schr\"odinger equation \begin{equation} \label{eq:EN-eq} H_N \Psi = E_N \Psi. \end{equation} Although $E_N$ is finite, the existence of minimizers of \eqref{eq:EN} does not necessarily hold. Heuristically, there should be a transition when $N$ increases, as follows: \begin{itemize} \item If $N< Z+1$, then the outermost electron is attracted by the rest of the system, which has the effective charge $Z-(N-1)>0$. Existence is likely to hold in this case; \item If $N>Z+1$, then the outermost electron prefers to ``escape to infinity'' due to the Coulomb repulsion.
Nonexistence is expected in this case. \end{itemize} The first half of this prediction, namely the existence of all positive ions and neutral atoms, was proved by Zhislin in 1960. \begin{theorem}[\cite{Z60}] \label{thm:Zhislin} If $N<Z+1$, then $E_N$ has a minimizer. \end{theorem} On the other hand, the second half of the above prediction, namely the nonexistence of highly negative ions, is much more difficult and is often referred to as the ``ionization conjecture''; see \cite[Problem 9]{S00} and \cite[Chapter 12]{LS09}. To be precise, let us denote by $N_c=N_c(Z)$ the largest number of electrons such that $E_{N_c}$ has a minimizer. Then we have the following conjecture. \begin{conjecture}[\cite{S00,LS09}] \label{con2} $N_c\le Z+C$ with a constant $C>0$ (possibly $C=1$). \end{conjecture} Due to the celebrated Hunziker--van Winter--Zhislin (HVZ) theorem (see e.g. \cite[Theorem 11.2]{T09}), the essential spectrum of $H_N$ is $[E_{N-1},\infty)$. Consequently, we always have $E_N\le E_{N-1}$; moreover, if $E_N<E_{N-1}$, then $H_N$ has a bound state with eigenvalue $E_N$. In \cite{Z60}, Zhislin proved Theorem \ref{thm:Zhislin} by establishing the strict binding inequality $E_N<E_{N-1}$ for all $N<Z+1$ by an induction argument (if $E_{N-1}$ has an eigenfunction $\Psi_{N-1}$, then the energy of the $N$-body state $\Psi_{N-1}\wedge \varphi$ is lower than $E_{N-1}$ for some function $\varphi\in L^2(\mathbb{R}^3)$ describing one electron at infinity). In contrast, Conjecture \ref{con2} implies that there exists $N_c \le Z+C$ such that \begin{equation} \label{eq:EN-ENc} E_N=E_{N_{c}},\quad \forall N\ge N_c. \end{equation} It is believed that $E_N$ is not only {\em strictly decreasing} when $N\le N_c$, but also {\em convex}. \begin{conjecture}[\cite{LS09}] \label{conCV} The function $E_N$ is convex in $N\in \mathbb{N}$. \end{conjecture} A consequence of Conjecture \ref{conCV} is that if the nucleus can bind $N$ electrons ($E_N<E_{N-1}$), then it can also bind $N-1$ electrons ($E_{N-1}<E_{N-2}$). This seemingly obvious fact is still an open mathematical question! \bigskip In the next sections, we will review some rigorous results on the ionization problem, in the full many-body Schr\"odinger theory as well as in some simplified models. We will summarize the main ideas and mention several open problems, thus extending a cordial invitation to young researchers to enter the subject. See also \cite{N21} for a shorter review on the same topic. \bigskip \section{Non-asymptotic bounds} \label{sec:2} The fact that $N_c=N_c(Z)<\infty$ is already highly nontrivial. This was proved in 1982 independently by Ruskai \cite{R82,R82b} and Sigal \cite{S82,S84}. While the conjectured bound $N_c(Z)\le Z+O(1)_{Z\to \infty}$ remains open, there are non-asymptotic bounds on $N_c(Z)$, and the most important one was proved by Lieb in 1984. \begin{theorem}[\cite{L84b,L84}] \label{thm:L84} We have $N_c(Z)<2Z+1$ for all $Z>0$. \end{theorem} This result holds even if $Z$ is not an integer, and it can also be extended to molecules. In particular, it settles the ionization conjecture for the hydrogen atom. In spite of its importance, the proof of Theorem \ref{thm:L84} is so short and elegant that, as recommended in \cite{L84b}, it can be given in any elementary course of quantum mechanics. \begin{proof}[Proof of Theorem \ref{thm:L84}] Assume that Schr\"odinger's equation \eqref{eq:EN-eq} has a solution $\Psi$.
Then, multiplying the equation with $|x_N|$ we have \begin{align} \label{eq:Lieb-proof-0} 0&=\langle |x_N| \Psi, (H_N -E_N) \Psi\rangle= \langle |x_N| \Psi, (H_{N-1} -E_N) \Psi\rangle \nonumber\\ &\quad+ \Re \langle |x_N| \Psi, (-\Delta_{x_N}) \Psi\rangle - Z +\frac{1}{2}\sum_{i=1}^{N-1} \left\langle \Psi, \frac{|x_N|+|x_i|}{|x_i-x_N|} \Psi \right\rangle. \end{align} Here we have used the symmetry of $|\Psi|^2$ to symmetrize the interaction term. The first two terms on the right hand side of \eqref{eq:Lieb-proof-0} can be dropped for a lower bound thanks to the obvious inequality $ H_{N-1} \ge E_{N-1}\ge E_N $ for the first $(N-1)$ electrons and the operator inequality \begin{equation} \label{eq:Lieb-ineq} (-\Delta)|x|+|x|(-\Delta)\ge 0\text{ on } L^2(\mathbb{R}^3) \end{equation} for the $N$-th electron. Combining with the triangle inequality $|x|+|y| \ge |x-y|$ we conclude from \eqref{eq:Lieb-proof-0} that $ 0 > - Z + \frac{N-1}{2}, $ namely $N<2Z+1$. Here we get the strict inequality because the triangle inequality is strict almost everywhere. \end{proof} The one-body inequality \eqref{eq:Lieb-ineq} was proved directly in \cite{L84b,L84} by integration by parts. As explained in \cite{N12}, this bound can also be deduced from Hardy's inequality \eqref{eq:Hardy} by applying the IMS formula \begin{equation} \label{eq:IMS} \frac{\varphi(x)^2 (-\Delta)+ (-\Delta)\varphi(x)^2}{2}= \varphi(x) (-\Delta) \varphi (x) - |\nabla \varphi|^2 \end{equation} to the case $\varphi(x)= |x|^{1/2}$. In 2013, Chen and Siedentop \cite{CS13} proved an interesting generalization of \eqref{eq:Lieb-ineq}: if $\min(a,b)\in [0,2]$ and $a+b\le d$, then $$ |\nabla_x|^a |x|^b + |x|^b |\nabla_x|^a\ge 0 \text { on }L^2(\mathbb{R}^d). $$ The method of ``multiplying the equation by $|x|$'' is called the {\em Benguria--Lieb argument}. It was used by Benguria on simplified models \cite{B79} and extended by Lieb to the full many-body context. Nowadays, this is a standard argument in the analysis of Coulomb systems. Let us discuss below two further results obtained by variants of this argument. In 2012, we derived a new bound which improves Theorem \ref{thm:L84} for $Z\ge 6$. \begin{theorem}[\cite{N12}] \label{thm:N12} We have $N_c(Z)<1.22 Z+3 Z^{1/3}$ for all $Z\ge 1$. \end{theorem} \begin{proof}[Ideas of the proof] Multiplying Schr\"odinger's equation \eqref{eq:EN-eq} with $|x_N|^2$ (instead of $|x_N|$) we have \begin{align*} 0&=\langle |x_N|^2 \Psi, (H_N -E_N) \Psi\rangle = \langle |x_N|^2 \Psi, (H_{N-1} -E_N) \Psi\rangle \\ &\quad + \Re \langle |x_N|^2 \Psi, (-\Delta_{x_N}) \Psi\rangle - Z \left\langle \Psi, |x_N| \Psi \right\rangle + \frac{1}{2}\sum_{i=1}^{N-1} \left\langle \Psi, \frac{|x_N|^2+|x_i|^2}{|x_i-x_N|} \Psi \right\rangle. \end{align*} The first term on the right hand side can be dropped for a lower bound as before. For the second term, by the IMS formula \eqref{eq:IMS} and Hardy's inequality \eqref{eq:Hardy} we have \begin{equation} \label{eq:IMS-x2} \frac{|x|^2(-\Delta)+ (-\Delta)|x|^2}{2}= |x|(-\Delta) |x| - 1 \ge \frac{1}{4}-1= -\frac{3}{4}. \end{equation} The error in \eqref{eq:IMS-x2} is small in comparison with the third term thanks to the bound $$ Z \left\langle \Psi, |x_N| \Psi \right\rangle > 0.553 N^{2/3} $$ which is a consequence of the Lieb--Thirring inequality \cite{LT75}.
On the other hand, the last term on the right hand side, up to a symmetrization, can be estimated using the classical inequality $$ \beta \ge \mathop {\inf }\limits_{\{x_i\}_{i=1}^N\subset \mathbb{R}^3 } \frac{{\sum\limits_{1 \le i < j \le N} {\frac{{|x_i |^2 + |x_j |^2 }} {{|x_i - x_j |}}} }} {{N\sum\limits_{i = 1}^N {|x_i |} }} \ge \beta -1.55 N^{-2/3} $$ where $\beta$ is determined by a variational problem of infinitely many classical particles $$ \beta:= \inf_{\substack{\rho~\text{probability}\\ \text{measure in}~\mathbb{R}^3}} \left\{ {\frac{{\iint\limits_{\mathbb{R}^3 \times \mathbb{R}^3 } {\frac{{x^2+y^2}} {{2|x - y|}} {{\rm d}\rho} (x){{\rm d}\rho} (y)}}} {{\int\limits_{\mathbb{R}^3 } {|x| {{\rm d}\rho} (x)} } }} \right\}. $$ The key improvement comes from the fact that $\beta \ge 0.82$ (instead of $1/2$ from the triangle inequality). This eventually implies that $N<1.22 \,Z + 3 Z^{1/3}$ (here $\beta^{-1} \approx 1.22$). \end{proof} In 2013, Lenzmann and Lewin proved a stronger version of the nonexistence result, concerning the absence not only of ground states but of all eigenfunctions. \begin{theorem}[\cite{LL13}] \label{thm:LL13} If $N\ge 4Z+1$, then $H_N$ has no eigenvalue. \end{theorem} \begin{proof}[Ideas of the proof] If $\Psi$ is an eigenfunction of $H_N$, then for every one-body self-adjoint operator $A$ on $L^2(\mathbb{R}^3)$ we have \begin{equation} \label{eq:LL-proof-0-1} 0 = \langle \Psi, {\bf i} [H_N, A_{x_N}] \Psi\rangle = \left\langle \Psi, {\bf i} \left[-\Delta_{x_N} -\frac{Z}{|x_N|} + \sum_{j=1}^{N-1} \frac{1}{|x_j-x_N|}, A_{x_N} \right] \Psi \right\rangle \end{equation} with ${\bf i}^2=-1$. In particular, choosing $$ A= {\bf i}[\Delta, f(x)]= ({\bf i} \nabla_{x})\cdot \nabla f(x) + \nabla f(x) \cdot ({\bf i} \nabla_{x}) $$ we find that $$ 0 = \left\langle \Psi, [\Delta_{x_N},[\Delta_{x_N},f(x_N)]] \Psi \right\rangle + \left\langle \Psi, \nabla f (x_N) \cdot \left( \frac{Z x_N}{|x_N|^3} - \sum_{j=1}^{N-1} \frac{x_N-x_j}{|x_N-x_j|^3}\right) \Psi \right\rangle. $$ With the special choice $f(x)=\frac{1}{3}|x|^3$ we have $\nabla f (x) = |x| x$, and hence $$ 0 = \frac{1}{3}\left\langle \Psi, [\Delta_{x_N},[\Delta_{x_N},|x_N|^3]] \Psi \right\rangle + Z - \frac{1}{2}\left\langle \Psi, \sum_{j=1}^{N-1} \frac{(|x_j| x_j - |x_N| x_N)\cdot (x_j-x_N) }{|x_j-x_N|^3} \Psi \right\rangle $$ where we used the symmetrization for the interaction term. The latter identity is very similar to \eqref{eq:Lieb-proof-0}. Then the conclusion follows from two key ingredients: first $$ [\Delta,[\Delta,|x|^3]] \le 0 \text{ on }L^2(\mathbb{R}^3) $$ which should be compared with \eqref{eq:Lieb-ineq}, and second $$ \inf_{x\ne y\in \mathbb{R}^3}\frac{(|x|x- |y|y)\cdot (x-y)}{|x-y|^3} = \frac{1}{2} $$ which should be compared with the triangle inequality. Thus $0>-Z+\frac{N-1}{4}$, namely $N<4Z+1$. Strictly speaking, some decay condition on $\Psi$ is needed to ensure that all relevant quantities are finite. However, this technical condition can be relaxed by choosing $f_R(x)=R^3 g(|x|/R)$ with $g(r)=r-\arctan r$ and then sending $R\to \infty$. \end{proof} In the above proof, it is helpful to interpret \eqref{eq:LL-proof-0-1} as a stationary condition for the Schr\"odinger dynamics $\Psi(t)=e^{-{\bf i} tH_N} \Psi$: \begin{align*} 0 = \frac{\,{\rm d}^2}{\,{\rm d} t^2} \langle \Psi(t), f(x_N) \Psi(t)\rangle = \langle \Psi(t), -[H_N,[H_N, f(x_N)]] \Psi(t)\rangle, \end{align*} which explains the choice of $A= {\bf i} [\Delta, f(x)]$.
This time-dependent technique goes back to the famous {\em Morawetz--Lin--Strauss estimate} for nonlinear Schr\"odinger equations (NLS). In the standard NLS, the choices $f(x)=|x|, |x|^2, |x|^4$ were used in \cite{M68,G77,T08}, respectively. The argument in \cite{LL13} shows that the new choice $f(x)=|x|^3$ corresponds to the time-dependent version of Lieb's proof in \cite{L84b,L84}. The above approach motivates a stronger version of Conjecture \ref{con2}. \begin{conjecture} \label{eq:Con-stronger} There exists a universal constant $C>0$ such that if $N>Z+C$, then $H_N$ has no eigenvalue. \end{conjecture} \section{Stability of bosonic atoms} We have mentioned in the introduction that the ionization problem is strongly associated with Pauli's exclusion principle. However, we have not seen this subtle fact so far. The heuristic idea supporting Conjecture \ref{con2} relies only on an electrostatic argument which is purely classical. Hence, in principle it applies not only to {\em anti-symmetric} wave functions $\Psi\in L^2(\mathbb{R}^{3N})$ but also to {\em totally symmetric} ones, namely \begin{equation} \label{eq:Pauli-off} \Psi(x_1,...,x_i,...,x_j,...,x_N)= \Psi(x_1,...,x_j, ..., x_i, ...,x_N), \quad \forall i\ne j. \end{equation} The latter case corresponds to the so-called ``bosonic atoms'' where the electrons are treated as if they were bosonic particles. Note that the HVZ theorem, Zhislin's theorem (Theorem \ref{thm:Zhislin}) and Lieb's theorem (Theorem \ref{thm:L84}) all work equally well for bosonic atoms. In contrast, in 1983, Benguria and Lieb proved the striking result that Conjecture \ref{con2} {\em fails} in the bosonic case, thus firmly validating the importance of Pauli's exclusion principle \eqref{eq:Pauli} in the ionization problem. \begin{theorem}[\cite{BL83}] \label{thm:BL83} If \eqref{eq:Pauli} is replaced by the bosonic symmetry \eqref{eq:Pauli-off}, then $$\liminf_{Z\to \infty} \frac{N_c(Z)}{Z} \ge t_c >1. $$ \end{theorem} As we will see below, the constant $t_c$ is taken from Hartree's theory, and its value is known numerically to be $t_c \approx 1.21$ \cite{B84}. In 1990, Solovej \cite{S90b} proved the optimality of Theorem \ref{thm:BL83} by providing a matching asymptotic upper bound, namely $N_c/Z\to t_c$. In principle, it is also natural to consider the variational problem \eqref{eq:EN} without any symmetry condition, but this results in the same problem as with the bosonic symmetry \eqref{eq:Pauli-off}, see e.g. \cite[Corollary 3.1]{LS09}. \begin{proof}[Proof of Theorem \ref{thm:BL83}] The main principle of the proof is that the many-body energy $E_N$ in \eqref{eq:EN} with the symmetry condition \eqref{eq:Pauli-off} can be approximated by {\em Hartree's theory}, where one restricts to the uncorrelated ansatz \begin{align}\label{eq:Hartree} \Psi(x_1,...,x_N)= (u^{\otimes N})(x_1,...,x_N) = u(x_1)... u(x_N) \end{align} for a normalized function $u\in L^2(\mathbb{R}^3)$. This leads to Hartree's energy \begin{align} E^{\rm H}_N=&\inf_{\|u\|_{L^2}=1} \langle u^{\otimes N}, H_N u^{\otimes N}\rangle \nonumber\\ &= \inf_{\|u\|_{L^2}=1} N \int_{\mathbb{R}^3} \left(|\nabla u(x)|^2 - \frac{Z |u(x)|^2}{|x|} + \frac{N-1}{2} |u(x)|^2 \left (|u|^2 * \frac{1}{|x|} \right) \right) \,{\rm d} x.
\end{align} Note that by the rescaling $u(x)= t^{-1/2} Z^{3/2} v(Zx)$ with $t=(N-1)/Z$ we can write \begin{align} \label{eq:eH} E^{\rm H}_N = \frac{N Z^3}{N-1} e(t) \end{align} where \begin{align} \label{eq:eHt} e(t)= \inf_{\|v\|^2_{L^2}=t} \int_{\mathbb{R}^3} \left(|\nabla v(x)|^2 - \frac{|v(x)|^2}{|x|} + \frac{1}{2} |v(x)|^2 \left (|v|^2 * \frac{1}{|x|} \right) \right) \,{\rm d} x . \end{align} (Under this rescaling, each term of the integrand in the Hartree energy becomes $Z^3/(N-1)$ times the corresponding term of $e(t)$.) The existence/nonexistence within Hartree's theory is fairly easy to handle since the functional on the right hand side of \eqref{eq:eHt} is convex in $|v|^2$. In particular, it is well-known that $e(t)$ has a minimizer if and only if $t\le t_c$ for a constant $t_c>1$; moreover $e(t)$ is negative and strictly decreasing when $t\le t_c$ while $e(t)=e(t_c)$ for all $t\ge t_c$ (see \cite[Lemma 13]{BBL81} and \cite[Theorem 7.16]{L81}). The main difficulty here is to justify Hartree's approximation. The upper bound $E_N\le E_N^{\rm H}$ follows directly from the variational principle, but obtaining a good lower bound is not obvious. This follows from the {\em Hoffmann--Ostenhof inequality} \cite{HO77} $$ K=\left\langle \Psi, \sum_{i=1}^N (-\Delta_{x_i})\Psi \right\rangle \ge \int_{\mathbb{R}^3} |\nabla \sqrt{\rho_{\Psi}}|^2 $$ and the {\em Lieb--Oxford inequality} \cite{LO81} \begin{align} \label{eq:LO} \left\langle \Psi, \sum_{1\le i<j\le N} \frac{1}{|x_i-x_j|} \Psi \right\rangle \ge \frac{1}{2}\iint_{\mathbb{R}^3\times \mathbb{R}^3} \frac{\rho_\Psi(x)\rho_\Psi(y)}{|x-y|} \,{\rm d} x \,{\rm d} y - 1.68 \int_{\mathbb{R}^3}\rho_\Psi^{4/3} \end{align} where $\rho_\Psi$ is the one-body density of the $N$-body wave function $\Psi$, \begin{align} \label{eq:rhoN} \rho_\Psi(x) = N \int_{\mathbb{R}^{3(N-1)}} |\Psi(x,x_2,...,x_N)|^2 \,{\rm d} x_2 ... \,{\rm d} x_N. \end{align} Note that $\rho_\Psi\ge 0$ and $\int_{\mathbb{R}^3} \rho_\Psi=N$ since $\Psi$ is normalized. The error term in \eqref{eq:LO} can be controlled further by H\"older's and Sobolev's inequalities $$ \int_{\mathbb{R}^3} \rho_\Psi^{4/3} \le \left( \int_{\mathbb{R}^3} \rho_\Psi \right)^{5/6} \left( \int_{\mathbb{R}^3} \rho_\Psi^3 \right)^{1/6} \le C N^{5/6} K^{1/2}. $$ By Hardy's inequality \eqref{eq:Hardy} it is easy to show that $E_N \ge - CN Z^{2}$. Consequently, for a ground state $\Psi$ of $H_N$, the kinetic energy is controlled as $K\le CNZ^{2}$. Moreover, thanks to Lieb's theorem (Theorem \ref{thm:L84}), the existence of a minimizer implies that $N\le 2Z+1$. All this gives \begin{align} \label{eq:EN-boson-lowerbound} E_N &\ge \int_{\mathbb{R}^3} \left( |\nabla \sqrt{\rho_{\Psi}}|^2 - \frac{Z \rho_\Psi(x)}{|x|} + \frac{1}{2} \rho_\Psi(x) \left ( \rho_\Psi * \frac{1}{|x|} \right) \right) \,{\rm d} x - CN^{5/6} (NZ^2)^{1/2} \nonumber\\ &\ge Z^3 \Big( e(N/Z) - CZ^{-2/3}\Big). \end{align} In summary, if $N_c \le t_c Z+1$ (otherwise there is nothing to prove), then from \eqref{eq:EN-ENc}, \eqref{eq:eH}, and \eqref{eq:EN-boson-lowerbound} we find that $$Z^3 \Big( e(N_c/Z) - CZ^{-2/3}\Big) \le E_{N_c}=E_{t_c Z+1} \le Z^3 e(t_c),$$ and hence $$ \limsup_{Z\to \infty} e(N_c/Z) \le e(t_c). $$ Since $e(t)$ is strictly decreasing when $t\le t_c$, we obtain the desired result $$\liminf_{Z\to \infty} N_c/Z \ge t_c.$$ \end{proof} In fact, the influence of Benguria and Lieb's argument in \cite{BL83} goes far beyond the context of the ionization problem. It has inspired several works dealing with the justification of Hartree's theory for many-body bosonic systems, which is particularly relevant to the description of Bose--Einstein condensation in interacting Bose gases. We refer to \cite{LNR14} for further discussions.
Now let us focus on Hartree's theory. Note that the bound $t_c<2$ follows easily from the Benguria--Lieb argument as in Theorem \ref{thm:L84}. However, this bound is not easy to improve. Very recently, Benguria and Tubino \cite{BT22} proved that $t_c<1.5211$ (thus moving closer to the numerical value $t_c \approx 1.21$ \cite{B84}). Their proof strategy is similar to that of Theorem \ref{thm:N12}, but they were able to replace the use of the Lieb--Thirring inequality (which works only for fermions) by a clever application of the virial theorem to control the kinetic energy. Finally, let us mention the following analogue of Conjecture \ref{eq:Con-stronger} for Hartree's equation, which is essentially taken from \cite{LL13}. \begin{conjecture} \label{con:Hartree-hard} If Hartree's equation $$ (-\Delta - |x|^{-1} + |u|^2* |x|^{-1}) u= \lambda u $$ has a solution $u\in H^1(\mathbb{R}^3)$ with a constant $\lambda\in \mathbb{R}$, then $\int_{\mathbb{R}^3}|u|^2 \le t_c.$ \end{conjecture} In \cite{LL13}, Lenzmann and Lewin proved a weaker bound with $4t_c$ instead of $t_c$, using the same proof strategy as in Theorem \ref{thm:LL13}. In fact, they considered the following dynamical version of Conjecture \ref{con:Hartree-hard} and proved the bound $4t_c$ in this stronger sense. \begin{conjecture} \label{con:Hartree-hard-time} Consider the time-dependent Hartree equation $$ {\bf i} \partial_t u= (-\Delta - |x|^{-1} + |u|^2* |x|^{-1}) u. $$ Then for every initial state $u_0\in H^1(\mathbb{R}^3)$, we have $$ \limsup_{T\to \infty}\frac{1}{T} \int_0^T \int_{|x|\le R} |u(x,t)|^2 \,{\rm d} x \,{\rm d} t \le t_c, \quad \forall R>0. $$ \end{conjecture} The idea here is that even if we start from an initial state with arbitrarily large mass, at large times at most a mass of $t_c$ remains in any bounded domain. Conjecture \ref{con:Hartree-hard-time} is closely related to the scattering theory of dispersive PDEs with long-range interaction potentials, which is a very interesting topic in its own right. \section{Asymptotic neutrality} Now we come back to the ionization problem with Pauli's exclusion principle \eqref{eq:Pauli}. A fundamental step towards the ionization conjecture is the following result of Lieb, Sigal, Simon and Thirring in 1984. \begin{theorem}[\cite{LSST84,LSST88}] \label{thm:asympneu}We have $$ \lim_{Z\to \infty} \frac{N_c(Z)}{Z} = 1. $$ \end{theorem} \begin{proof}[Ideas of the proof] The general strategy goes back to the geometric localization method used in Sigal's proof of $N_c(Z)<\infty$ in \cite{S82}. The idea is that by introducing a suitable partition of unity of $(\mathbb{R}^{3})^{N}$, the quantum problem can be reduced to a classical problem. In \cite{S84}, Sigal derived the asymptotic bound \begin{equation} \label{eq:Sigal-0} \limsup_{Z\to \infty}\frac{N_c(Z)}{Z}\le 2 \end{equation} using its classical analogue \begin{equation} \label{eq:Sigal} \mathop {\max }\limits_{1\le j\le N} \left\{ {\sum\limits_{1 \le i \le N,i \ne j} {\frac{1} {{|x_i - x_j |}}} - \frac{Z} {{|x_j |}}} \right\} \ge 0, \end{equation} which holds as soon as $N\ge 2Z+1$ and follows easily from the triangle inequality. The key ingredient in \cite{LSST84} is the following improvement of \eqref{eq:Sigal}: for every $\varepsilon>0$, $N\ge N_\varepsilon$, and $\{x_i\}_{i=1}^N\subset \mathbb{R}^3$ we have \begin{equation} \label{eq:Sigal-improved} \mathop {\max }\limits_{1\le j\le N} \left\{ {\sum\limits_{i \ne j} {\frac{1} {{|x_i - x_j |}}} - \frac{(1-\varepsilon)N} {{|x_j |}}} \right\} \ge 0.
\end{equation} By a contradiction argument, \eqref{eq:Sigal-improved} can be deduced from its continuum analogue, which is a nice result in potential theory: for any probability measure $\mu\ne \delta_0$ in $\mathbb{R}^3$ and for any $\varepsilon>0$, there exists a point $x \in {\rm supp} (\mu)\backslash\{0\}$ such that \begin{equation} \label{eq:Sigal-improved-2} f(x)=\int_{\mathbb{R}^3} \frac{1}{|x-y|} \,{\rm d} \mu(y) - \frac{1-\varepsilon}{|x|} \ge 0. \end{equation} In the simple case, if ${\rm supp} (\mu)$ is bounded and does not contain $0$, then $f$ is harmonic outside ${\rm supp} (\mu)\cup\{0\}$ and satisfies $f(x)\sim \varepsilon/|x|>0$ as $|x|\to \infty$. Hence, if we assume further that $f$ is continuous, then $f$ must be nonnegative somewhere in ${\rm supp} (\mu)$ by the maximum principle. For the general case see \cite{LSST84,LSST88} for details. For every $\varepsilon>0$, $N\ge N_\varepsilon$, and $R>0$, the inequality \eqref{eq:Sigal-improved} and its refinements make it possible to construct a partition of unity $\{J_a\}_{a=0}^N$ of $C^\infty$ functions in $(\mathbb{R}^{3})^N$ so that the following hold. \begin{itemize} \item[(i)] $J_a \ge 0$ for all $a$ and $\sum_{a=0}^N J_a^2(X)=1$ for all $X= \{x_b\}_{b=1}^{N} \subset (\mathbb{R}^{3})^N$. \item[(ii)] Denoting $|X|_\infty =\max_{1\le b\le N} |x_b|$ we have $$L=\sum_{a=0}^N |\nabla_{\mathbb{R}^{3N}}J_a(X)|^2 \le C_\varepsilon \frac{N^{1/2}\log(N)^2}{R |X|_\infty}.$$ \item[(iii)] $J_0$ is totally symmetric in $\{x_b\}_{b=1}^{N}$ and $${\rm supp}\, J_0 \subset \left\{ X= \{x_b\}_{b=1}^{N} \,|\, |X|_\infty \le R \right\}.$$ \item[(iv)] If $a\ne 0$, then $J_a$ is symmetric in $\{x_b\}_{b\ne a}$ and $${\rm supp}\, J_a \subset \left\{ X =\{x_b\}_{b=1}^{N} \,|\, \sum_{b\ne a} \frac{1}{|x_b-x_a|} \ge \frac{(1-\varepsilon)N}{|x_a|} \right\}.$$ \end{itemize} Now let us conclude the proof of Theorem \ref{thm:asympneu}. Assume that $$N\in [(1-2\varepsilon )^{-1}Z,2Z+1]$$ for some $\varepsilon>0$ small (independent of $Z$). We choose $$ Z^{-1/2} \log(Z)^2 \ll R \ll Z^{-1/3}. $$ By the IMS localization formula (cf. \eqref{eq:IMS}) we can decompose $H_N$ in \eqref{eq:HN} as \begin{equation}\label{eq:HN-decompose} H_N = \sum_{a=0}^N \frac{1}{2} (J_a^2 H_N + H_N J_a^2) = \sum_{a=0}^N J_a H_N J_a - L = \sum_{a=0}^N J_a (H_N -L) J_a. \end{equation} When $a=N$, by (ii) we have $$ L \le C_\varepsilon \frac{Z^{1/2}\log(Z)^2}{R |X|_\infty} \ll \frac{\varepsilon N}{|x_N|}. $$ Moreover, by the property of the support of $J_N$ in (iv), $$ J_N \sum_{i=1}^{N-1} \frac{1}{|x_i-x_N|} J_N \ge J_N \frac{(1-\varepsilon)N}{|x_N|} J_N. $$ By decomposing $$ H_N -L = H_{N-1} -\Delta_{x_N} -\frac{Z}{|x_N|} + \sum_{i=1}^{N-1} \frac{1}{|x_i-x_N|} - L, $$ then using $H_{N-1}\ge E_{N-1}$ and $-\Delta_{x_N}\ge 0$ we get $$ J_N (H_N - L) J_N \ge J_N \left( E_{N-1} + \frac{(1-2\varepsilon) N - Z}{|x_N|} \right) J_N \ge J_N^2 E_{N-1}. $$ Similarly, $J_a (H_N - L) J_a\ge J_a^2 E_{N-1}$ for all $a\ne 0$. For $a=0$, we use (ii) again for $L$ and use the triangle inequality to control the interaction energy. This gives $$ J_0^2 \left( \sum_{1\le i<j\le N} \frac{1}{|x_i-x_j|} - L \right) \ge J_0^2 \left( \frac{N(N-1)}{4|X|_\infty} - C_\varepsilon \frac{Z^{1/2}\log(Z)^2}{R |X|_\infty} \right) \ge J_0^2 \frac{Z^2}{4R} $$ where in the last inequality we also used the property of the support of $J_0$ in (iii). On the other hand, by summing the first $N$ eigenvalues of the hydrogen atom, we have $$ \sum_{i=1}^N \Big( -\Delta_{x_i} - \frac{Z}{|x_i|} \Big) \ge - C Z^{7/3}. $$ Here Pauli's exclusion principle is crucial.
Thus $$ J_0 (H_N-L)J_0 \ge J_0^2 \left( \frac{Z^2}{4R} - C Z^{7/3} \right) \ge 0 \ge J_0^2 E_{N-1}. $$ In summary, from \eqref{eq:HN-decompose} we deduce that \begin{equation} H_N = \sum_{a=0}^N J_a (H_N -L) J_a \ge \sum_{a=0}^N J_a^2 E_{N-1} = E_{N-1} \ge E_N. \end{equation} Therefore, if $H_N \Psi=E_N \Psi$ for a ground state $\Psi$, then we must have $$E_N=\langle \Psi, H_N \Psi\rangle=E_{N-1}.$$ A careful reconsideration of the above estimates shows that this equality cannot hold for any eigenfunction. Thus there is no ground state when $N\in [(1-2\varepsilon )^{-1}Z,2Z+1]$, which implies the desired result. \end{proof} The convergence result in Theorem \ref{thm:asympneu} is not quantitative. In 1990, Fefferman and Seco \cite{FS90} and Seco, Sigal and Solovej \cite{SSS90} offered different proofs for the quantitative bound $$ N_c(Z) \le Z + O(Z^{5/7}) $$ which remains the best known asymptotic result. The proofs in \cite{FS90,SSS90} use certain information about the minimizing wave function obtained via a careful comparison with the Thomas--Fermi (TF) theory, which will be revisited below. \section{The TF theory} Since the many-body Schr\"odinger equation is too complicated, for practical computations one often replaces the wave function $\Psi$ by its one-body density $\rho_\Psi$ defined in \eqref{eq:rhoN}, resulting in density functional theories. A famous example is the TF theory, in which the ground state energy $E_N$ in \eqref{eq:EN} is replaced by its semiclassical approximation \begin{align}\label{eq:TF-theory} E^{\rm TF}(N)= \inf_{\rho\ge 0, \int \rho=N} \int_{\mathbb{R}^3} \Big( C^{\rm TF} \rho^{5/3}(x) - \frac{Z}{|x|} \rho(x) + \frac{1}{2} \rho(x) \Big(\rho*\frac{1}{|x|} \Big) \Big) \,{\rm d} x \end{align} with a constant $C^{\rm TF}>0$. Here $N,Z>0$ are not necessarily integers. The mathematical properties of the TF theory were studied in full detail by Lieb and Simon in 1973 \cite{LS73,LS77}. In particular, concerning the ionization problem we have the following theorem. \begin{theorem}[\cite{LS77}] For all $Z>0$, $E^{\rm TF}(N)$ has a minimizer if and only if $N\le Z$. \end{theorem} \begin{proof}[Ideas of the proof] A very useful argument introduced in \cite{LS77} is the {\em relaxation method}, which relates the variational problem \eqref{eq:TF-theory} to the ``unconstrained problem'' \begin{align}\label{eq:TF-theory<=} E^{\rm TF}_{\le}( N)= \inf_{\rho\ge 0, \int \rho \le N} \int_{\mathbb{R}^3} \Big( C^{\rm TF} \rho^{5/3}(x) - \frac{Z}{|x|} \rho(x) + \frac{1}{2} \rho(x) \Big(\rho*\frac{1}{|x|} \Big) \Big) \,{\rm d} x. \end{align} The advantage of \eqref{eq:TF-theory<=} is that the set of states is {\em convex}, and hence the existence of minimizers of \eqref{eq:TF-theory<=} follows easily by the direct method in the calculus of variations (in particular, the set of states is stable under the weak convergence in $L^{5/3}$). Moreover, as explained in \cite{LS77}, by standard rearrangement inequalities it is easy to see that the functional on the right hand side of \eqref{eq:TF-theory} is strictly convex. This implies that the unconstrained minimizer $\rho$ of \eqref{eq:TF-theory<=} is unique, with $\int_{\mathbb{R}^3}\rho \le N$. Thus the existence of a minimizer of the original problem \eqref{eq:TF-theory} is equivalent to $\int_{\mathbb{R}^3}\rho =N$. The existence part is rather standard: if $\int_{\mathbb{R}^3}\rho < N\le Z$, then we can put some positive mass at infinity to lower the energy and obtain a contradiction.
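In slightly more detail, a heuristic version of this step goes as follows (the precise argument can be found in \cite{LS77}): if $m=\int_{\mathbb{R}^3}\rho < N\le Z$, add a small extra mass $\varepsilon>0$, spread over a large region at distance $\sim R$ from the nucleus so that its own contribution to the $C^{\rm TF}\int\rho^{5/3}$ term is negligible. The dominant change of the energy is then electrostatic, \[ \Delta E \approx -\frac{Z\varepsilon}{R} + \frac{m\varepsilon}{R} + o\!\left(\frac{\varepsilon}{R}\right) = -\frac{(Z-m)\varepsilon}{R} + o\!\left(\frac{\varepsilon}{R}\right) < 0 \] for $R$ large, since $m<N\le Z$. The modified density is still admissible for \eqref{eq:TF-theory<=} because its total mass is $m+\varepsilon\le N$ for $\varepsilon$ small, which contradicts the minimality of $\rho$.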
Let us focus on the nonexistence part, which is more challenging. As proved in \cite{LS77}, the unconstrained minimizer solves the TF equation $$ \frac{5}{3}C^{\rm TF}\rho(x)^{2/3} = [\Phi(x)]_+, \quad \Phi(x)= Z|x|^{-1}-\rho*|x|^{-1} -\mu $$ for some chemical potential $\mu \ge 0$. We prove that $\int_{\mathbb{R}^3}\rho \le Z$ for all $N>0$, which implies the nonexistence when $N>Z$. Assume by contradiction that $\int_{\mathbb{R}^3}\rho > Z$. Then the TF potential satisfies $$ |x| \Phi(x) \le Z -|x| \Big( \rho* \frac{1}{|x|}\Big) \to Z -\int_{\mathbb{R}^3} \rho<0 $$ as $|x|\to \infty$ (we used $\mu\ge 0$). Therefore, $A=\{x: \Phi(x)<0\}$ is nonempty. Moreover, since $\Phi$ is continuous on $\mathbb{R}^3\backslash\{0\}$ and $\Phi(x)\to \infty$ as $|x|\to 0$, we find that $A$ is open and $0\notin A$. To conclude, using $\Delta |x|^{-1} = -4\pi \delta_0(x)$ and the TF equation, we find that $$\Delta \Phi(x)= 4\pi \rho(x) = 0 \text { in } A.$$ Thus $\Phi$ is harmonic in $A$, $\Phi<0$ in $A$ and $\Phi=0$ on the boundary of $A$. All this contradicts the maximum principle. Thus $\int_{\mathbb{R}^3} \rho\le Z$. \end{proof} In fact, by a variant of the Benguria--Lieb argument, the nonexistence part can also be proved differently, using only harmonic analysis in the form of {\em Newton's theorem}. \begin{proof}[Another proof of $N\le Z$ \cite{N13}] Let $\rho$ be the unconstrained minimizer. Integrating the TF equation against $|x|^k\rho(x)$, $k\ge 1$, we get $$ 0 \le \frac 5 3 C^{\rm TF} \int_{\mathbb{R}^3} \rho(x)^{5/3} |x|^k \,{\rm d} x = \int_{\mathbb{R}^3} \Big(Z|x|^{-1}-\rho*|x|^{-1} -\mu \Big) \rho(x) |x|^k \,{\rm d} x. $$ The contribution associated with $\mu \ge 0$ can be ignored for an upper bound. Since the TF functional is rotation invariant and $\rho$ is unique, it must be radial. Hence, by Newton's theorem (see e.g. \cite[Theorem 5.2]{LS09}) we have $$ \rho*|x|^{-1}= \int_{\mathbb{R}^3} \frac{\rho(y)}{\max(|x|, |y|)} \,{\rm d} y. $$ Consequently, \begin{align*} Z \int_{\mathbb{R}^3} |x|^{k-1} \rho(x) \,{\rm d} x &\ge \int_{\mathbb{R}^3} |x|^k \rho(x) (\rho*|x|^{-1}) \,{\rm d} x = \frac{1}{2} \iint \frac{ (|x|^k+|y|^k) \rho(x) \rho(y)}{\max(|x|, |y|)} \,{\rm d} x \,{\rm d} y. \end{align*} Thanks to the AM-GM inequality, \[ \frac{{|x{|^k} + |y{|^k}}}{{\max( |x|,|y|) }} \ge \left( {1 - \frac{1}{k}} \right)\left( {|x{|^{k - 1}} + |y{|^{k - 1}}} \right), \] and hence $$ Z \int_{\mathbb{R}^3} |x|^{k-1} \rho(x) \,{\rm d} x \ge \left(1-\frac{1}{k}\right) \Big( \int |x|^{k-1} \rho(x) \,{\rm d} x \Big) \Big( {\int \rho(y) {\rm d}y }\Big).$$ Thus $Z\ge (1-k^{-1})\int_{\mathbb{R}^3}\rho$, and the desired bound follows by taking $k\to \infty$. Strictly speaking, in the above proof we need $\int_{\mathbb{R}^3} |x|^{k-1} \rho(x) \,{\rm d} x$ to be finite, which may fail when $k$ is large. However, this issue can be fixed by integrating the TF equation over $\{|x|\le R\}$ and then sending $R\to \infty$ at the end. \end{proof} It is unclear whether the above proof can be used to derive the asymptotic neutrality in Theorem \ref{thm:asympneu}. To end this section, let us mention the following analogue of Conjecture \ref{con:Hartree-hard-time} for the Thomas--Fermi theory. \begin{conjecture} \label{con:TF-time} Consider the time-dependent Thomas--Fermi theory $$ \begin{cases} \partial_t \varphi &= \frac{1}{2} (\nabla \varphi)^2 + c_0 \rho^{2/3} - Z|x|^{-1} + \rho* |x|^{-1},\\ \partial_t \rho &= \nabla (\rho \nabla \varphi) \end{cases} $$ with a fixed constant $c_0>0$.
Then for every initial state $(\varphi_0,\rho_0)$ satisfying $0\le \rho_0\in L^1(\mathbb{R}^3)\cap L^{5/3}(\mathbb{R}^3)$ and $\sqrt{\rho_0} |\nabla \varphi_0| \in L^2(\mathbb{R}^3)$, we have $$ \limsup_{T\to \infty}\frac{1}{T} \int_0^T \int_{|x|\le R} \rho (x,t) \,{\rm d} x \,{\rm d} t \le Z, \quad \forall R>0. $$ \end{conjecture} In 2018, Chen and Siedentop \cite{CS18} proved the weaker bound $4Z$ instead of $Z$, using a strategy similar to that of \cite{LL13}. Their analysis also covers the Vlasov equation. \section{The Thomas--Fermi--von Weizs\"acker (TFW) theory} In principle, the TF theory is purely semiclassical and it only describes well the bulk of the electrons at distances $O(Z^{-1/3})$ from the nucleus. For physical and chemical applications, it is important to capture additional contributions of the {\em innermost} and {\em outermost} electrons, the ones at distances $O(Z^{-1})$ and $O(1)$ from the nucleus, respectively. We refer to Lieb's review \cite{L81} for a pedagogical introduction to several refined versions of the TF theory. In this section, we focus on the first refinement: the TFW theory, where the ground state energy is given by \begin{align}\label{eq:TFW-theory} E^{\rm TFW}(N)= \inf_{\|u\|_{L^2}^2=N} \int_{\mathbb{R}^3} \Big( C^{\rm TF} |u|^{10/3} + C^{\rm W} |\nabla u|^2 - \frac{Z|u|^2}{|x|} + \frac{1}{2} |u|^2 \Big(|u|^2*\frac{1}{|x|} \Big) \Big) \,{\rm d} x. \end{align} Here $|u|^2$ plays the role of the electron density and the von Weizs\"acker correction term $C^{\rm W} |\nabla u|^2$, with a constant $C^{\rm W}>0$, corresponds to the contribution of the innermost electrons. In the context of the ionization conjecture, we have the following theorem. \begin{theorem}[\cite{B79,BBL81,BL85}] The variational problem $E^{\rm TFW}(N)$ has a minimizer if and only if $N\le N_c(Z)$ for some critical value $Z<N_c(Z)\le Z+C$. \end{theorem} The general framework of the existence, uniqueness and properties of TFW minimizers was discussed in great detail by Benguria, Brezis and Lieb in 1981 \cite{BBL81}. Although the functional on the right hand side of \eqref{eq:TFW-theory} is still convex in $\rho=|u|^2$, the TFW theory is significantly more complicated than the TF theory. The analysis in \cite{BBL81} contains several steps, with the starting point being the study of a ``fully unconstrained problem'' (namely a version of \eqref{eq:TFW-theory} without any mass constraint on $u$), which is in the same spirit as the above analysis of the Thomas--Fermi theory. The unique fully unconstrained minimizer is a positive, radial solution to the TFW equation $$ \left( \frac{5}{3}C^{\rm TF} u^{4/3} + C^{\rm W} (-\Delta) - \Phi(x)\right) u (x)=0, \quad \Phi(x)=\frac{Z}{|x|} - |u|^2*\frac{1}{|x|} $$ and moreover $\int_{\mathbb{R}^3} |u|^2=N_c(Z)$. The strict lower bound $N_c(Z)>Z$ shows that the von Weizs\"acker correction really improves the binding ability of atoms. This remarkable result was proved by Benguria in his 1979 PhD thesis under Lieb's supervision \cite{B79}. The nonexistence part, namely $N_c(Z)\le Z+C$, was proved by Benguria and Lieb in 1984 \cite{BL85}. Let us quickly explain these proofs below. \begin{proof}[Proof of $N_c(Z)>Z$ in TFW theory \cite{B79}] Assume that $N_c(Z)=\int_{\mathbb{R}^3} |u|^2 \le Z$. By Newton's theorem, the TFW potential $$ \Phi(x)=\frac{Z}{|x|} - |u|^2*\frac{1}{|x|} = \frac{Z}{|x|} - \int_{\mathbb{R}^3} \frac{|u(y)|^2}{\max(|x|,|y|)} \,{\rm d} y $$ is nonnegative.
Therefore, from the TFW equation we have $$ -\Delta u(x) + c_1 u^{7/3}(x) \ge 0 \quad \text{ for all }x\ne 0 $$ with a constant $c_1\ge 0$. Consider the function $\widetilde{u}(x)= c_2 |x|^{-3/2}$ with $c_2>0$ sufficiently small such that \begin{align*} -\Delta \widetilde{u}(x) + c_1 \widetilde{u} ^{7/3}(x) &\le 0, \quad \forall |x|\ge 1,\\ \widetilde{u}(x) &\le u(x), \quad \forall |x|=1. \end{align*} If the open set $A=\{|x|>1, \widetilde{u}(x) > u(x) \}$ is nonempty, then $\widetilde{u}-u$ is subharmonic and positive in $A$, but vanishes on the boundary of $A$, which contradicts the maximum principle. Thus $u(x)\ge \widetilde u(x)$ for all $|x|\ge 1$, but this contradicts the fact that $u\in L^2(\mathbb{R}^3)$. Thus $N_c(Z)>Z$. \end{proof} \begin{proof}[Proof of $N_c\le Z+C$ in TFW theory \cite{BL85}] The main idea is that the function $$ p(x)= (4\pi C^{\rm W} u^2(x) + \Phi^2(x))^{1/2}$$ is subharmonic for $|x|>0$ and $p(x)\to 0$ as $|x|\to \infty$. This implies that $|x|p(x)$ is convex and decreasing in $|x|$. On the other hand, when $|x|\to \infty$, the TFW minimizer $u$ decays faster than any polynomial while the TFW potential satisfies $|x|\Phi(x)\to -Q(Z)<0$ where $Q(Z)=N_c(Z)-Z$. Therefore, we conclude that $$ Q(Z)= \lim_{|y|\to \infty} |y|p(y) \le |x| p(x), \quad \forall |x|>0. $$ We can choose $|x_0|\sim O(1)$ such that $\Phi(x_0)<0$ (this follows from $Q(Z)>0$) and $u(x_0)\le C$ (this follows from the TFW equation). Thus $Q(Z) \le C$ as desired. \end{proof} Further important results on the TFW theory were established later by Solovej in his 1989 PhD thesis under Lieb's supervision. In particular, Solovej introduced the {\em universality} concept, namely some relevant quantities not only are bounded uniformly but also have limits when $Z\to \infty$. Specifically, he proved the following theorem. \begin{theorem}[\cite{S90}] \label{thm:S90}The TFW unconstrained minimizer $u_Z$ and the TFW potential $\Phi_Z(x)= Z|x|^{-1}- |u_Z|^2*|x|^{-1}$ have limits when $Z\to \infty$ $$ \lim_{Z\to \infty}u_Z(x) = u_\infty(x) , \quad \lim_{Z\to \infty} \Phi_Z(x)=\Phi_\infty (x), \quad \forall x\ne 0. $$ Consequently, the maximum ionization $Q_Z= N_c(Z)-Z$ also has a limit $$ \lim_{Z\to \infty} Q_Z = Q_\infty = - \lim_{|x|\to \infty} |x| \Phi_\infty(x). $$ \end{theorem} Recall that the TF theory describes the bulk of the electrons at distance $O(Z^{-1/3})$ from the nucleus, and hence the rescaled function $Z^{-1}u_Z(Z^{-1/3}x)$ has a limit when $Z\to \infty$ which is given by the TF minimizer. However, the universality in Theorem \ref{thm:S90} is much deeper since it describes the outermost electrons at distance $O(1)$ from the nucleus which are responsible for chemical binding. At the level of the many-body Schr\"odinger theory, the convergence of the one-body density $Z^{-2} \rho_Z(Z^{-1/3}x)$ was already proved by Lieb and Simon \cite{LS77}, but the universality remains open. \begin{conjecture}[Universality] \label{conj-uni} In the many-body Schr\"odinger theory \eqref{eq:EN}, the one-body density $\rho_Z$ of the ground state with $N=N_c(Z)$ has a limit up to subsequences $Z=Z_n\to \infty$, namely $$ \lim_{Z_n\to \infty} \rho_{Z_n}(x) = \rho_\infty(x), \quad \forall x\ne 0. $$ \end{conjecture} Here different subsequences $Z_n\to \infty$ may lead to different limits, which corresponds to the existence of different groups in an ``infinite periodic table".
In principle, if Conjecture \ref{conj-uni} holds true, then we should be able to extract the convergence of the maximum ionization as well as the radius of atoms. Currently, the boundedness of these quantities is unknown \cite{S00,LS09}. We refer to a recent paper of Solovej \cite{S17} for further discussions on the universality of large atoms and molecules. \section{The Hartree--Fock (HF) theory} In computational physics and chemistry, not only the electron density, but also the electron orbitals are of fundamental interest. One of the most popular methods in this direction is the Hartree--Fock (HF) theory in which the many-body wave functions are restricted to the Slater determinants \begin{align}\label{eq:Slater} \Psi(x_1,...,x_N)= (u_1\wedge ... \wedge u_N)(x_1,...,x_N)= \frac{1}{\sqrt{N!}} \det [ u_i(x_j)]_{1\le i,j\le N} \end{align} where $\{u_i\}_{i=1}^N$ is an orthonormal family in $L^2(\mathbb{R}^3)$. In principle, the Slater determinants are very similar to the Hartree states in \eqref{eq:Hartree}, except that the anti-symmetric tensor product has to be taken in \eqref{eq:Slater} to ensure Pauli's exclusion principle \eqref{eq:Pauli}. The Hartree--Fock energy is defined by \begin{align}\label{eq:HF} E^{\rm HF}(N)= \inf_{\Psi \in SD_N} \langle \Psi, H_N \Psi\rangle \end{align} where $SD_N$ is the set of $N$-body Slater determinants. Here $N\in \mathbb{N}$ and $Z>0$ is not necessarily an integer. The analysis of the HF theory is an important subject of mathematical physics. In the context of the ionization problem, the existence of HF minimizers for $N\le Z$ is much harder to prove than in the many-body Schr\"odinger theory since the set of states is very nonlinear due to the orthogonality of the orbitals $\{u_i\}_{i=1}^N$. This issue was settled in a seminal paper of Lieb and Simon in 1977. \begin{theorem}[\cite{LS77b}] \label{thm:LS77b} For every $N<Z+1$, the HF minimization problem \eqref{eq:HF} has a minimizer. Moreover, the minimizing orbitals $\{u_i\}_{i=1}^N$ are the $N$ lowest eigenfunctions of the one-body operator $$ h = -\Delta - Z|x|^{-1} + U_\Psi(x) - K_\Psi $$ with the multiplication operator $U_\Psi(x)= \sum_{i=1}^N |u_i|^2*|x|^{-1}$ and the Hilbert--Schmidt operator $K_\Psi$ with kernel $K_\Psi(x,y)=\sum_{i=1}^N u_i(x) \overline{u_i(y)} |x-y|^{-1}$. In fact, $h$ has infinitely many negative eigenvalues; in particular $h u_i =\varepsilon_i u_i$ with $\varepsilon_i< 0$ for all $i$. \end{theorem} \begin{proof} The general strategy is to use the relaxation method in the same spirit as the TF theory. In the HF case, the corresponding ``unconstrained problem" is \begin{align}\label{eq:HF-ext} \inf_{\Psi \in \widetilde{SD}_N} \langle \Psi, H_N \Psi\rangle \end{align} where $\widetilde{SD}_N$ contains all $\Psi=u_1\wedge ... \wedge u_N$ such that the $N\times N$ matrix $$M=(\langle u_i, u_j\rangle)_{1\le i,j\le N}$$ satisfies $0\le M\le 1$. Note that $M=1$ if $\Psi$ is a Slater determinant. The extension to $0\le M\le 1$ makes the set $\widetilde{SD}_N$ stable under the weak convergence in $L^2(\mathbb{R}^3)$ of the orbitals $\{u_i\}_{i=1}^N$, thus ensuring the existence of minimizers of \eqref{eq:HF-ext} by the direct method in the calculus of variations. In order to go back to the original problem \eqref{eq:HF}, three key ingredients are needed. First, since the energy $\langle \Psi, H_N \Psi\rangle$ with $\Psi=u_1\wedge ...
\wedge u_N$ is invariant under changing $\{u_i\}_{i=1}^N$ to $\{A u_i\}_{i=1}^N$ with any $N\times N$ unitary matrix $A$, we can assume that $M$ is diagonal, namely $$\langle u_i, u_j\rangle = \lambda_{i} \delta_{ij}\quad \text { with } 0\le \lambda_i \le 1.$$ Second, note that for each $i$, the function $u=u_i$ is the minimizer of the functional $$\Psi= u_1\wedge ... \wedge u_{i-1} \wedge u \wedge u_{i+1}\wedge... \wedge u_N \mapsto \langle \Psi, H_N \Psi\rangle={\rm const}+ \langle u, h u\rangle$$ subject to the constraints $$\langle u, u_j\rangle=0 \text{ for all }j\ne i, \quad \|u\|_{L^2}\le 1.$$ Therefore, $u_i$ must be a linear combination of the $N$ smallest eigenfunctions of $h$. Up to a further unitary transformation, we can assume that all $u_i$ are eigenfunctions of $h$. Third, when $N<Z+1$, $h$ has infinitely many eigenvalues below its essential spectrum $[0,\infty)$. This fact can be proved by the min-max principle, using radial trial states with disjoint supports. Thus for all $i$, we have $hu_i=\varepsilon_i u_i$ with $\varepsilon_i<0$, and hence minimizing $u_i\mapsto \langle u_i, h u_i\rangle = \varepsilon_i \|u_i\|_{L^2}^2$ under the constraint $\|u_i\|_{L^2}\le 1$ forces $\|u_i\|_{L^2}=1$. \end{proof} Note that a Slater determinant can be encoded fully in terms of its {\em one-body density matrix}. Recall that for every $N$-body wave function $\Psi$, the one-body density matrix $\gamma_\Psi$ is a trace class operator on $L^2(\mathbb{R}^3)$ with kernel \begin{align} \label{eq:1pdm} \gamma_\Psi(x,y)=N \int_{\mathbb{R}^{3(N-1)}} \Psi(x,x_2,...,x_N)\overline{\Psi(y,x_2,...,x_N)} \,{\rm d} x_2 ... \,{\rm d} x_N. \end{align} In particular, if $\Psi$ is given in \eqref{eq:Slater}, then $\gamma_\Psi$ is the rank-$N$ orthogonal projection $$ \gamma_\Psi = \sum_{i=1}^N |u_i\rangle \langle u_i|. $$ The one-body density $\rho_\Psi$ defined in \eqref{eq:rhoN} is given equivalently by $\rho_\Psi(x)=\rho_{\gamma_\Psi}(x)$, where for a general one-body density matrix $\gamma$ we write $\rho_\gamma(x)=\gamma(x,x)$. Using these notations, the energy of a Slater determinant $\Psi$ is given by $$ \langle \Psi, H_N \Psi\rangle = \mathcal{E}^{\rm HF}(\gamma_\Psi) $$ where $$ \mathcal{E}^{\rm HF}(\gamma)= {\rm Tr} ((-\Delta - Z|x|^{-1} )\gamma) + \frac{1}{2} \iint \frac{\rho_\gamma(x)\rho_\gamma(y) - |\gamma(x,y)|^2}{|x-y|} \,{\rm d} x \,{\rm d} y. $$ Consequently, the Hartree--Fock energy in \eqref{eq:HF} can be written equivalently as \begin{align}\label{eq:HF-gamma} E^{\rm HF}(N)=\inf_{\substack {0\le \gamma=\gamma^2 \le 1\\ {\rm Tr} \gamma =N} } \mathcal{E}^{\rm HF}(\gamma). \end{align} In this direction, the relaxation method suggests relating \eqref{eq:HF-gamma} to the ``unconstrained problem" \begin{align}\label{eq:HF-gamma-0-1} E^{\rm HF}_{\le}(N)=\inf_{\substack {0\le \gamma \le 1\\ {\rm Tr} \gamma =N} } \mathcal{E}^{\rm HF}(\gamma). \end{align} Here we drop the projection condition $\gamma=\gamma^2$ in \eqref{eq:HF-gamma-0-1} in order to make the set of states convex. Thus in principle, the unconstrained energy $E^{\rm HF}_{\le}(N)$ is much easier to compute than the original energy $E^{\rm HF}(N)$. On the other hand, while $E^{\rm HF}(N)$ is an obvious upper bound to the full many-body $E_N$ in \eqref{eq:EN}, it is unclear whether the unconstrained energy $E^{\rm HF}_{\le}(N)$ has this nice property. This conceptual difficulty was removed completely in 1981 by Lieb. \begin{theorem}[Lieb's variational principle \cite{L81b}] \label{thm:L81b} Let $0\le \gamma \le 1$ and ${\rm Tr} \gamma=N$.
Then there exist an $N$-body Slater determinant $\Psi$ and an $N$-body mixed state $\Gamma$ such that the one-body density matrix of $\Gamma$ is $\Gamma^{(1)}=\gamma$ and $$ \langle \Psi, H_N \Psi\rangle \le {\rm Tr}(H_N \Gamma) \le \mathcal{E}^{\rm HF}(\gamma). $$ \end{theorem} Here $\Gamma$ is an $N$-body mixed state if $\Gamma= \sum_{i} \lambda_i |\Psi_i\rangle \langle \Psi_i|$ with $N$-body orthonormal functions $\{\Psi_i\}$ and nonnegative numbers $\{\lambda_i\}$ satisfying $\sum_i \lambda_i=1$. In terms of the one-body density matrices, we have $\Gamma^{(1)}=\sum_i \lambda_i \gamma_{\Psi_i}$ where $\gamma_{\Psi_i}$ is defined in \eqref{eq:1pdm}. A direct consequence of Lieb's theorem is that $E^{\rm HF}_{\le}(N)=E^{\rm HF}(N)$, which makes the formulation \eqref{eq:HF-gamma-0-1} extremely helpful to compute an energy upper bound of $E_N$ by the trial state argument. As mentioned in \cite{L81b}, Theorem \ref{thm:L81b} holds for any two-body interaction which is positive semidefinite. We refer to Bach's paper \cite{B92} for a simplified proof of this result. Theorem \ref{thm:L81b} is one of the main tools in Bach's proof that the HF energy agrees with the best known expansion of the quantum energy, namely $$ E_N = c_1 Z^{7/3} + c_2 Z^{2} + c_3 Z^{5/3} + o(Z^{5/3}) = E_N^{\rm HF}+o(Z^{5/3}), \quad \forall N\in [Z,N_c(Z)] $$ where the first equality was established previously by Fefferman and Seco \cite{FS90b}. Using the concept of one-body density matrices, we can rewrite the proof of Theorem \ref{thm:LS77b} as follows. \begin{proof}[A shorter proof of Theorem \ref{thm:LS77b}] Consider the variational problem \begin{equation}\label{eq:EHF-rel-rel} \widetilde{E}^{\rm HF}_{\le}(N) = \inf_{\substack {0\le \gamma \le 1\\ {\rm Tr} \gamma \le N} } \mathcal{E}^{\rm HF}(\gamma). \end{equation} The existence of a minimizer $\gamma$ of \eqref{eq:EHF-rel-rel} can be proved by the direct method in the calculus of variations (the set of states is stable under the weak-* convergence in trace class). For every $0\le \widetilde{\gamma}\le 1$ with ${\rm Tr}\widetilde{\gamma} \le N$, the function $t\mapsto \mathcal{E}^{\rm HF}( (1-t) \gamma + t\widetilde \gamma)$ with $t\in [0,1]$ attains its minimum at $t=0$, and hence $$ 0 \le \frac{d}{dt} \mathcal{E}^{\rm HF}( (1-t) \gamma + t\widetilde \gamma)|_{t=0}= {\rm Tr} (h (\widetilde{\gamma}-\gamma)) $$ where $h=-\Delta - Z|x|^{-1}+\rho_\gamma*|x|^{-1}-K_\gamma$ with the operator $K_\gamma$ given by the kernel $K_\gamma(x,y)=\gamma(x,y)/|x-y|$. If $N<Z+1$, then $h$ has infinitely many negative eigenvalues $\varepsilon_1\le \varepsilon_2\le ...<0$ (this was already explained in the previous proof). Consequently, we can choose $\widetilde \gamma$ to be the projection on the lowest $N$ eigenfunctions of $h$, so that $${\rm Tr}(h\gamma)\le {\rm Tr} (h \widetilde \gamma)=\sum_{i=1}^N \varepsilon_i.$$ Since $0\le \gamma \le 1$ and ${\rm Tr}\gamma\le N$, this implies that ${\rm Tr}\gamma=N$: indeed, if ${\rm Tr}\gamma<N$, then the minimum of ${\rm Tr}(h\gamma)$ over all $0\le \gamma\le 1$ with that trace would be strictly larger than $\sum_{i=1}^N \varepsilon_i$, since all $\varepsilon_i<0$. Thus $\gamma$ is also a minimizer for $E^{\rm HF}_{\le}(N)$, and by Theorem \ref{thm:L81b} the existence of minimizers of $E^{\rm HF}(N)$ follows. \end{proof} Now let us turn to the nonexistence in the HF theory. All non-asymptotic bounds in Section \ref{sec:2} extend to the HF case without significant modifications; in particular Lieb's Theorem \ref{thm:L84} ensures that $N_c(Z)<2Z+1$. However, the conjectured bound $N_c(Z)\le Z +C$ remained open for a long time, until it was solved by Solovej in 2003. \begin{theorem}[\cite{S03}] $N_c(Z)\le Z+C$ in the Hartree--Fock theory.
\end{theorem} The proof in \cite{S03} is based on a clever use of the Benguria--Lieb method, but applied only to the outermost electrons. More precisely, assuming that we have an efficient method to separate $m$ outermost electrons from the rest of the system, which has effective charge $Z'= Z - (N-m)$, then the Benguria--Lieb method gives $m < 2Z' + 1$. Since $Z'$ is smaller than $Z$, the loss of the factor 2 becomes less serious. Solovej's idea is a rigorous bootstrap argument which brings $Z'$ down to order 1 after finitely many steps. On the technical level, the key tool in \cite{S03} is a rigorous comparison between the HF potential $$ \Phi^{\rm HF}_Z(x) = \frac{Z}{|x|} - \int_{|y|\le |x|} \frac{\rho^{\rm HF}(y)}{|x-y|} dy $$ and the corresponding TF potential $\Phi^{\rm TF}_Z(x)$, namely \begin{align} \label{eq:HF-TF} |\Phi_Z^{\rm HF}(x) - \Phi^{\rm TF}_Z(x)| \le C (1+ |x|^{-4+\varepsilon}), \quad \forall x\ne 0 \end{align} for some universal constants $C>0$, $\varepsilon>0$. Note that $\Phi^{\rm TF}_Z(x)$ behaves as $|x|^{-4}$ for $|x|\gg Z^{-1/3}$. The significance of \eqref{eq:HF-TF} is that the TF theory correctly captures the HF theory, at least in terms of the potentials, up to a length scale of order 1. This is highly remarkable since due to its semiclassical nature the TF theory is supposed to be good only for $|x|\sim Z^{-1/3}$. This property suggests that the universality in Conjecture \ref{conj-uni} should hold also in the HF theory, but a rigorous proof is still missing. In Solovej's strategy, the main conceptual difficulty is to separate the ``problem from outside" from the ``problem from inside". In the HF theory, this can be done using the unconstrained formulation \eqref{eq:HF-gamma-0-1} and Lieb's Theorem \ref{thm:L81b}. Unfortunately, this technique is not available on the level of the many-body Schr\"odinger theory. It seems that a completely new many-body localization technique would be needed to solve the ionization conjecture. \section{Liquid drop model} In 1928, Gamow proposed a theory to describe a nucleus using only the number of nucleons (protons and neutrons) and the electrostatic energy of protons. This problem has gained renewed interest from many mathematicians \cite{CMT17}. To be precise, the liquid drop model is associated with the minimization problem \begin{align} \label{eq:EG} E^{\rm G}(m)= \inf_{|\Omega|=m} \mathcal{E}(\Omega) \end{align} where $$ \mathcal{E}(\Omega) = {\rm Per}(\Omega) + D(\Omega) = {\rm Per}(\Omega) + \frac{1}{2}\int_{\Omega} \int_{\Omega}\frac{1}{|x-y|} dx dy. $$ Here $\Omega\subset \mathbb{R}^3$ stands for the nucleus and ${\rm Per}(\Omega)$ is the perimeter in the sense of De Giorgi (which is the surface area of $\Omega$ when the boundary is smooth). It is generally assumed in the physics literature that if a minimizer exists, then it is a ball. Consequently, by comparing the energy of a ball of volume $m$ with the energy of a union of two balls of volume $m/2$, one expects the nonexistence of minimizers if $m>m_*$ with $$m_*=5\frac{2-2^{2/3}}{2^{2/3}-1}\approx 3.512.$$ \begin{conjecture}[\cite{CP11}] \label{conj:li}$E^{\rm G}(m)$ has a minimizer if and only if $m\le m_*$. Moreover, if a minimizer exists, then it is a ball. \end{conjecture} The question here is somewhat similar to that of the ionization conjecture of atoms. As we will see, some ideas from the liquid drop model turn out to be helpful for the ionization problem.
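Let us briefly recall where the threshold $m_*$ comes from. For any set $\Omega_1$ of volume $1$ we have ${\rm Per}(m^{1/3}\Omega_1)=m^{2/3}{\rm Per}(\Omega_1)$ and $D(m^{1/3}\Omega_1)=m^{5/3}D(\Omega_1)$, while for the ball $B_1$ of volume $1$ and radius $r_1=(3/(4\pi))^{1/3}$ one computes ${\rm Per}(B_1)=4\pi r_1^2=3/r_1$ and $D(B_1)=3/(5r_1)$, so that ${\rm Per}(B_1)/D(B_1)=5$. Hence a single ball of volume $m$ has lower energy than two far-apart balls of volume $m/2$ precisely when $$ m^{2/3}{\rm Per}(B_1)+m^{5/3}D(B_1) \le 2^{1/3}m^{2/3}{\rm Per}(B_1)+2^{-2/3}m^{5/3}D(B_1), $$ namely when $$ m\le \frac{{\rm Per}(B_1)}{D(B_1)}\cdot\frac{2^{1/3}-1}{1-2^{-2/3}} = 5\,\frac{2-2^{2/3}}{2^{2/3}-1}=m_*. $$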
On the mathematical side, among all measurable sets of a given volume, although a ball minimizes the perimeter (by the isoperimetric inequality \cite{DG58}), it does maximize the Coulomb self-interaction energy (by the Riesz rearrangement inequality \cite{R30}). Therefore, it is unclear why balls should be the minimizers. Consequently, the argument predicting the threshold $m_*$ is questionable. In 2014, Kn\"upfer and Muratov \cite{KM14} proved that if $m>0$ is sufficiently small, then $E^{\rm G}(m)$ has a unique minimizer which is a ball. The proof in \cite{KM14} uses deep techniques in geometric measure theory, including a quantitative isoperimetric inequality of Fusco, Maggi, and Pratelli \cite{FMP08}. On the other hand, it is desirable to develop a non-perturbative approach to handle larger masses. In 2015, Frank and Lieb proved the following result, which serves as a basic tool to analyze the existence question for all $m>0$. \begin{theorem}[\cite{FL15}] \label{thm:FL15} If for a given $m>0$, one has the strict binding inequality \begin{align}\label{eq:FL-strict-ineq} E^{\rm G}(m)< E^{\rm G}(m-m') + E^{\rm G}(m'), \quad \forall 0<m'<m, \end{align} then $E^{\rm G}(m)$ has a minimizer. \end{theorem} Theorem \ref{thm:FL15} can be interpreted in the same spirit of the strict binding inequality $E(N)<E(N-1)$ in the context of the ionization problem. The proof in \cite{FL15} is based on the ``method of the missing mass", which goes back to Lieb's 1983 work on sharp Hardy--Littlewood--Sobolev and related inequalities \cite{L83}. Very recently, Frank and Nam \cite{FN21} used Theorem \ref{thm:FL15} to establish the {\em optimal existence} in Conjecture \ref{conj:li}. \begin{theorem}[\cite{FN21}] \label{thm:FL21} $E^{\rm G}(m)$ has a minimizer for every $0<m\le m_*$. \end{theorem} \begin{proof} Let us prove the strict binding inequality \eqref{eq:FL-strict-ineq} for all $0<m<m_*$. Let $0<m_1<m$ and $s= m_1/m \in (0,1)$. As a first step, we observe that if $|\Omega|=m_1$, then $|s^{-1/3} \Omega|=m$, and hence by the variational principle \begin{align*} E^{\rm G}(m) \le \mathcal E (s^{-1/3} \Omega) = s^{-2/3} {\rm Per} \Omega + s^{-5/3} D (\Omega) = s^{-5/3} \mathcal E(\Omega) - s^{-5/3}( 1 - s) {\rm Per} \ \Omega. \end{align*} On the other hand, by the isoperimetric inequality $$ {\rm Per} \ \Omega \ge m_1^{2/3} {\rm Per} B_1 = s^{2/3} m^{2/3} {\rm Per} B_1 $$ where $B_1$ is the ball of volume 1 in $\mathbb{R}^3$ . Inserting this in the above inequality, optimizing over $\Omega$, and rearranging terms we find that $$ E(m_1) \ge s^{5/3} E(m) + s^{2/3} (1-s) m^{2/3} {\rm Per} B_1. $$ Similarly, $$ E(m-m_1) \ge (1-s)^{5/3} E(m) + (1-s)^{2/3} s m^{2/3} {\rm Per} B_1. $$ Therefore, \begin{align*} & E(m_1)+ E(m-m_1) - E(m) \\ & \ge ( s^{5/3} + (1-s)^{5/3} - 1) E(m) + \Big( s^{2/3} (1-s) + (1-s)^{2/3} s \Big) m^{2/3} {\rm Per} \ B_1. \end{align*} Using $s^{5/3} + (1-s)^{5/3} - 1<0$ and $E(m)\le \mathcal E(m^{1/3}B_1)$ we find that \begin{align*} & E(m_1)+ E(m-m_1)- E(m) \nonumber \\ &\ge \Big( s^{5/3} + (1-s)^{5/3} - 1 \Big) \Big( D(B_1) m - f(s) {\rm Per} \ B_1 \Big) m^{2/3} \end{align*} with \begin{equation*} f(s):= \frac{s^{2/3} + (1-s)^{2/3} -1 }{1-s^{5/3} - (1-s)^{5/3} }. \end{equation*} Therefore, $E(m_1)+ E(m-m_1)- E(m)>0$ if \begin{equation} \label{eq:gs>m} m< \frac{{\rm Per} \ B_1}{D(B_1)} \min_{s\in [0,1]} f(s). \end{equation} A direct computation shows that the right hand side of \eqref{eq:gs>m} coincides with $m_*$. 
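For the reader's convenience, here is a sketch of this direct computation. The function $f$ satisfies $f(s)=f(1-s)$ and $f(s)\to\infty$ as $s\to 0^+$, and one can check that its minimum over $(0,1)$ is attained at $s=1/2$, where $$ f(1/2)= \frac{2\cdot 2^{-2/3}-1}{1-2\cdot 2^{-5/3}} = \frac{2^{1/3}-1}{1-2^{-2/3}} = \frac{2-2^{2/3}}{2^{2/3}-1}. $$ Combining this with ${\rm Per}(B_1)/D(B_1)=5$ (computed above), the right hand side of \eqref{eq:gs>m} equals $5(2-2^{2/3})/(2^{2/3}-1)=m_*\approx 3.512$.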
Thus the existence of minimizers for every $0<m < m_*$ follows immediately from Theorem \ref{thm:FL15}. The existence can be extended to $m=m_*$ by a continuity argument from \cite[Theorem 3.4]{FL15}. \end{proof} In \cite{FN21}, we also proved that if the nonexistence in Conjecture \ref{conj:li} holds, then the above proof can be refined to show that the minimizer for $m<m_*$ is unique and is a ball. Thus only the (optimal) nonexistence part is missing. There are some partial nonexistence results. Kn\"upfer and Muratov \cite{KM14} proved that $E^{\rm G}(m)$ has no minimizer if $m$ is sufficiently large. The same result was proved by Lu and Otto \cite{LO14} by a different method. This result is comparable to the bound $N_c(Z)<\infty$ in the ionization problem. In 2016, Frank, Killip and Nam \cite{FKN16} proved the nonexistence for all $m>8$, which is somewhat comparable to Lieb's bound $N_c(Z)<2Z+1$ in the ionization problem. \begin{proof}[Proof of nonexistence for $m>8$ \cite{FKN16}.] Assume that $E^{\rm G}(m)$ has a minimizer $\Omega$. We split $\Omega$ into two parts, $ \Omega= \Omega^+ \cup \Omega^- $, by a hyperplane $H$ and then move $\Omega^-$ to infinity by translations. Since $\Omega$ is a minimizer, we obtain the binding inequality \begin{align*} {\rm Per}(\Omega) + \frac{1}{2}\int_{\Omega} \int_{\Omega}\frac{1}{|x-y|} dx dy &\le {\rm Per}(\Omega^+) + \frac{1}{2}\int_{\Omega^+} \int_{\Omega^+}\frac{1}{|x-y|} dx dy \\ &\quad +{\rm Per}(\Omega^-) + \frac{1}{2}\int_{\Omega^-} \int_{\Omega^-}\frac{1}{|x-y|} dx dy \end{align*} which is equivalent to $$ 2 \mathcal{H}^2 (\Omega\cap H) \ge \int_{\Omega^+} \int_{\Omega^-}\frac{1}{|x-y|} dx dy. $$ Here $\mathcal{H}^2$ is the two-dimensional Hausdorff measure. Next, we parameterize: $$ H= H_{\nu,\ell} = \{ x\in \mathbb{R}^3: \, x \cdot \nu =\ell\} $$ with $\nu\in S^2$, $\ell\in \mathbb{R}$. The above inequality becomes $$ 2 \mathcal{H}^2 (\Omega\cap H_{\nu,\ell}) \ge \int_{\Omega} \int_{\Omega}\frac{\chi(\nu\cdot x > \ell > \nu \cdot y)}{|x-y|} dx dy. $$ Integrating over $\ell\in \mathbb{R}$ and using Fubini's theorem we get $$ 2 |\Omega| \ge \int_{\Omega} \int_{\Omega}\frac{[\nu\cdot (x-y)]_+}{|x-y|} dx dy. $$ Finally, averaging over $\nu \in S^2$ and using $$ \int [\nu \cdot z]_{+} \frac{d\nu}{4\pi} = \frac{|z|}{2} \int_0^{\pi/2} \cos \theta \sin \theta d\theta =\frac{|z|}{4} $$ with $z=x-y$, we conclude that $2|\Omega|\ge \frac 1 4 |\Omega|^2$, namely $|\Omega| \le 8$. \end{proof} It is interesting that the above cutting argument can be used to replace the Benguria--Lieb argument in the ionization problem in various situations. In 2018, Frank, Nam and Van Den Bosch \cite{FNV18} used this technique to establish the ionization conjecture in the Thomas--Fermi--Dirac-von Weizs\"acker theory. In this model, the standard Benguria--Lieb method does not apply due to Dirac's correction term to the exchange energy, but a modification of the above cutting argument gives an efficient control of the number of particles ``outside" in terms of the particles ``inside", thus enabling us to employ Solovej's bootstrap argument as in the HF theory. In \cite{FNV18b}, we extended the nonexistence $N_c(Z)\le Z+C$ to the M\"uller density-matrix-functional theory. In this model, the existence for $N\le Z$ was proved by Frank, Lieb, Seiringer and Siedentop in 2007 \cite{FLSS07} using a relaxation method in the spirit of TF and HF theories. In \cite{K17}, Kehle established the nonexistence for a family of density-matrix-functional theories interpolating between the HF and M\"uller theories.
Hopefully an exchange of ideas between the ionization problem and the liquid drop model will lead to further results in the future.
\section{Introduction and Notation} \noindent A rational map from $\mathbb{P}^{n}$ to itself is determined by an $(n+1)$-tuple of polynomials in $n+1$ variables, all homogeneous of the same degree $d$. If this map is a morphism, it will be finite of degree $d^{n}$. In the rest of this paper, we will refer to such a rational map as a degree $d$ map on $\mathbb{P}^{n}$ by abuse of notation. The space of degree $d$ maps on $\mathbb{P}^{n}$ is projective, with homogeneous coordinates coming from monomials of degree $d$. There are ${n+d \choose d}$ such monomials, so that this space has dimension ${n+d \choose d}(n+1) - 1$. We write $N_{d}^{n}$ for the dimension of this space, or $N$ when $d$ and $n$ are clear. \medskip The case of interest is morphisms on $\mathbb{P}^{n}$. In the sequel, we refer to the polynomials defining the map as $q_{0}, q_{1}, \ldots, q_{n}$. Then a map $(q_{0}:\ldots:q_{n})$ is a morphism if and only if the $q_{i}$'s share no common geometric root. The $q_{i}$'s only share a common root on a hypersurface of $\mathbb{P}^{N}$ which we call the resultant subvariety and which is defined over $\mathbb{Z}$; we denote its complement by $\operatorname{Hom}_{d}^{n}$. \medskip The space $\mathbb{P}^{N}$ of rational maps comes equipped with an action of $\operatorname{PGL}(n+1)$ by conjugation. The conjugation action $A\cdot\varphi = A\varphi A^{-1}$, fixes the resultant, which gives an action of $\operatorname{PGL}(n+1)$ on $\operatorname{Hom}_{d}^{n}$. In this paper, we mainly study the quotient of this action, which we denote $\mathrm{M}_{d}^{n}$, or $\mathrm{M}_{d}$ when $n = 1$. We will show that this quotient is geometric in the sense of geometric invariant theory \cite{GIT}, and compute the largest stable and semistable loci $\operatorname{Hom}_{d}^{n, s}$ and $\operatorname{Hom}_{d}^{n, ss}$, which satisfy $\operatorname{Hom}_{d}^{n} \subset \operatorname{Hom}_{d}^{n, s} \subset \operatorname{Hom}_{d}^{n, ss} \subset \mathbb{P}^{N}$. \medskip Knowing that the quotient $\mathrm{M}_{d}^{n}$ is well-behaved is often necessary to answer questions about the geometry of families of dynamical systems. In \cite{PST}, Petsche, Szpiro, and Tepper prove that $\mathrm{M}_{d}^{n}$ exists as a geometric quotient in order to show that isotriviality is equivalent to potential good reduction for morphisms of $\mathbb{P}^{n}$ over function fields, generalizing previous results in the one-dimensional case. In \cite{DeM07}, DeMarco uses the explicit description of the space $\mathrm{M}_{2}$ in order to study iterations of quadratic maps on $\mathbb{P}^{1}$, and one can expect similar results in higher dimension given a better understanding of the structure of $\mathrm{M}_{d}^{n}$. \medskip By now the theory of morphisms on $\mathbb{P}^{1}$ is the standard example in dynamical systems. For a survey of the arithmetic theory, see \cite{ADS}; also see a recent paper by Manes \cite{Man} about moduli of morphisms on $\mathbb{P}^{1}$ with a marked point of period $n$, which functions as a dynamical level structure. In the complex case, see an overview by Milnor \cite{D1C}, and the work of DeMarco \cite{DeM05} \cite{DeM07} about compactifications of the space $\mathrm{M}_{d}$ that respect the iteration map. Despite this, the higher-dimensional theory remains understudied. The only prior result in the direction of moduli of morphisms on $\mathbb{P}^{n}$ is the proof in \cite{PST} that $\mathrm{M}_{d}^{n}$ exists as a geometric quotient. 
Unfortunately, the proof does not lend itself well to finding the stable and semistable spaces for the action of $\operatorname{PGL}(n+1)$ on $\mathbb{P}^{N}$, nor does it bound the size of the finite stabilizer group uniformly on $\operatorname{Hom}_{d}^{n}$. \medskip The first two tasks in this paper are then to construct alternative proofs of the fact that the quotient $\mathrm{M}_{d}^{n}$ is geometric, first by explicitly describing the stable and semistable loci, and second by finding a uniform bound for the size of the stabilizer group in $\operatorname{PGL}(n+1)$. The former we will do in section $2$, using the Hilbert-Mumford criterion for stability and semistability. We will see that the complements of both $\operatorname{Hom}_{d}^{n, s}$ and $\operatorname{Hom}_{d}^{n, ss}$ are equal to a finite union of linear subvarieties and their $\operatorname{PGL}(n+1)$-conjugates; this contrasts with the $n = 1$ case, when the complement is the $\operatorname{PGL}(2)$-orbit of only one linear subvariety. In section $3$ we will study the stabilizer groups, proving a uniform bound on their sizes, valid over all fields and rings of definition, depending only on $n$ and $d$. This will strengthen previous results in this direction for $n = 1$ in \cite{Sil94}. \medskip Most results in this paper are a natural generalization of the study of morphisms on $\mathbb{P}^{1}$ in \cite{Sil96}, which refers to the space of morphisms as $\operatorname{Rat}_{d}$ and its quotient as $\mathrm{M}_{d}$, and which proves that $\mathrm{M}_{2} \cong_{\operatorname{Spec}\mathbb{Z}} \mathbb{A}^{2}$ using the theories of fixed points and multipliers. Specializing to the case where $n = 1$, we will prove that $\mathrm{M}_{d}$ is rational for all $d$ in section $4$. This is new even in the case of $d = 3$. The proof in this paper is based on showing that $\mathrm{M}_{d}$ is birational to a vector bundle over the space $\mathrm{M}_{0, d+1}$ of $d + 1$ unmarked points on $\mathbb{P}^{1}$, which is known to be rational. \medskip Unfortunately, we do not see any easy generalization of rationality to $\mathrm{M}_{d}^{n}$. The obstruction is that the space of unmarked points on $\mathbb{P}^{n}$ is not known to be rational. Clearly $\operatorname{Hom}_{d}^{n}$ is rational, so $\mathrm{M}_{d}^{n}$ is unirational, which for some applications, such as the density of points defined over a number field $K$, is enough. However, in order to investigate the structure of $\mathrm{M}_{d}^{n}$ we need more than that. We do not expect a result along the lines of that in \cite{Sil96}, that $\mathrm{M}_{2} \cong \mathbb{A}^{2}$, but we do expect rationality of $\mathrm{M}_{d}^{n}$. \medskip I would like to express my gratitude to my advisor Shouwu Zhang for introducing me to dynamical systems and guiding my research, to Lucien Szpiro and Joe Silverman for looking at the proofs of the major theorems in this paper, and to Xander Faber for helping me with this paper's presentation. \section{The Spaces $\operatorname{Hom}_{d}^{n}$ and $\mathrm{M}_{d}^{n}$} \noindent The space $\operatorname{Hom}_{d}^{n}$ of degree-$d$ morphisms on $\mathbb{P}^{n}$ arises as the subset of $\mathbb{P}^{N} = \{(q_{0}:q_{1}:\ldots:q_{n})\}$ defined by the condition that the $q_{i}$'s share no common root. In order to give this space an algebraic structure, we investigate its complement. 
We will show the following result, proven by Macaulay \cite{Mac} and reinterpreted here in modern language: \begin{thm}\label{Macaulay}The maps on $\mathbb{P}^{n}$ of degree $d$ such that the $q_{i}$'s share a nonzero root form a closed, irreducible subvariety of $\mathbb{P}^{N}$ of codimension $1$, which is defined over $\mathbb{Z}$.\end{thm} \begin{proof}Consider the variety $V = \mathbb{P}^{n} \times \mathbb{P}^{N}$. We think of $V$ as representing a set of polynomials $(q_{0}: q_{1}: \ldots: q_{n})$ acting on the point $(x_{0}: x_{1}: \ldots: x_{n})$. Consider the resultant subvariety $U \subset V$ defined by the condition that $q_{i}(\mathbf{x}) = 0$ for all $i$. This variety clearly has codimension at most $n+1$. If we denote the variables defining $\mathbb{P}^{N}$ as $a^{i}_{j^{i}_{0}j^{i}_{1}\ldots j^{i}_{n}}$ with $j^{i}_{0} + \ldots + j^{i}_{n} = d$, representing the $x_{0}^{j^{i}_{0}}\ldots x_{n}^{j^{i}_{n}}$ monomial of $q_{i}$, then we see that $U$ is defined by equations that are bihomogeneous of degree $1$ in the $a^{i}_{J}$'s and $d$ in the $x_{i}$'s. \medskip We claim that $U$ is irreducible. The claim follows from a generalization of the fact that a primitive polynomial is irreducible over a domain whenever it is irreducible over its fraction field. More precisely, let $R$ be a domain with fraction field $K$, and let $I$ be an ideal of $R[y_{1}, \ldots, y_{m}]$ that is not contained in any prime of $R$. We have a natural map $f$ from $\operatorname{Spec} K[y_{1}, \ldots, y_{m}]$ to $\operatorname{Spec} R[y_{1}, \ldots, y_{m}]$. If $V(I)$ is reducible over $R$, say $V(I) = V_{1} \cup V_{2}$ with $V_{i}$ nonempty, then either $V(I)$ is reducible over $K$, or one $f^{-1}(V_{i})$, say $f^{-1}(V_{1})$, is empty. In the latter case, $I(V_{1})$ may not contain nonconstant polynomials, so it contains at least one prime constant. This contradicts the assumption that $I$ is not contained in any prime of $R$; hence, $V(I)$ is reducible over $K$. \medskip With the above generalization, suppose that $U$ is reducible. Then it is also reducible as a subvariety of $\mathbb{A}^{n+1}\times\mathbb{A}^{N+1}$. Further, by letting $R = \mathbb{Z}[x_{0}, \ldots, x_{n}]$ and $K$ be its fraction field, we see that either $U$ is contained in a prime of $R$, or $U$ is reducible in $\mathbb{A}^{N+1}_{K}$. The former case is impossible since $U$ is not contained in any prime of $\mathbb{Z}$ or any relevant prime ideal of the ring of polynomials over $\mathbb{Z}$, and the latter is impossible since it is defined by linear equations in the $a^{i}_{J}$'s. Either way this is a contradiction, so $U$ is irreducible and the claim is proven. \medskip Finally, the maps on $\mathbb{P}^{n}$ of degree $d$ whose polynomials have a common nonzero root arise as the projection of $U$ onto the second factor of $\mathbb{P}^{n} \times \mathbb{P}^{N}$. It is irreducible because the projection map is surjective. It is closed because the map is proper. It has codimension at most $1$ because almost all polynomials in $U$ share just one root, so that the dimension of $U$ and its image are equal. It has exact codimension $1$ because some maps, for instance $q_{i} = x_{i}^{d}$, are morphisms. And it is defined over $\mathbb{Z}$ because every construction we have made in this proof is defined over $\mathbb{Z}$.\end{proof} \medskip We call the image of $U$ the resultant subvariety of $\mathbb{P}^{N}$; we call its generating polynomial the Macaulay resultant and denote it by $\operatorname{Res}_{d}^{n}$. 
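To make this concrete, in the simplest case $n = 1$ the Macaulay resultant is the classical resultant of the two binary forms $q_{0}$ and $q_{1}$. For $d = 1$, writing $q_{0} = a_{0}x_{0} + a_{1}x_{1}$ and $q_{1} = b_{0}x_{0} + b_{1}x_{1}$, we have $\operatorname{Res}_{1}^{1} = a_{0}b_{1} - a_{1}b_{0}$, which has integer coefficients, is irreducible, and vanishes exactly when $q_{0}$ and $q_{1}$ share a nonzero root. For general $d$ (still with $n = 1$), $\operatorname{Res}_{d}^{1}$ is the classical resultant of two degree-$d$ binary forms, which is homogeneous of degree $d$ in the coefficients of each of $q_{0}$ and $q_{1}$.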
Macaulay proved the theorem by constructing the resultant explicitly, and showing that it has integer coefficients and is irreducible. His explicit construction shows that if the polynomials are homogeneous of degrees $d_{0}, d_{1}, \ldots, d_{n}$, then the resultant is $(n+1)$-homogeneous in the coefficients of each polynomial $p_{i}$ of degree $\prod_{j \neq i}d_{j}$. In our case, all the degrees are equal to $d$, so that the resultant is $(n+1)$-homogeneous in the coefficients of each $q_{i}$ of degree $d^{n}$. In particular, the resultant subvariety is a hypersurface of degree $(n+1)d^{n}$. \medskip Theorem~\ref{Macaulay} shows that the space of morphisms is the complement of the resultant subvariety, and is therefore affine and of dimension $N$. Silverman \cite{Sil96}, who only considers the case $n = 1$, refers to this space as $\operatorname{Rat}_{d}$; we will refer to it as $\operatorname{Hom}_{d}^{n}$ and to its complement in $\mathbb{P}^{N}$ as $\operatorname{Res}_{d}^{n}$ by abuse of notation. \medskip The action of $\operatorname{PGL}(n+1)$ on $\mathbb{P}^{n}$ leads to a conjugation action on $\operatorname{Hom}_{d}^{n}$, wherein $A \in \operatorname{PGL}(n+1)$ acts on a rational map $\varphi$ by sending it to $A\varphi A^{-1}$. The property of being ill-defined at a point is stable under both the left action mapping $\varphi$ to $A\varphi$ and the right action mapping $\varphi$ to $\varphi A^{-1}$; hence, the conjugation action is well-defined on $\operatorname{Hom}_{d}^{n}$. The space of endomorphisms of $\mathbb{P}^{n}$ defined by degree-$d$ polynomials may be regarded as the quotient of $\operatorname{Hom}_{d}^{n}$ by the conjugation action. \medskip \emph{A priori}, we only know that over an algebraically closed field, the quotient exists as a set. In order to give it algebraic structure, we need to pass to the stable or semistable space in geometric invariant theory \cite{GIT}. Fortunately, we have the following result: \begin{thm}\label{stable}Every $\varphi \in \operatorname{Hom}_{d}^{n}$ is stable.\end{thm} \begin{proof}We use the Hilbert-Mumford criterion, as described in chapter $2$ of \cite{GIT}. To do that, we pull back the action of $\operatorname{PGL}(n+1)$ on $\mathbb{P}^{N}$ to the action of $\operatorname{SL}(n+1)$ on $\mathbb{A}^{N+1}$, and consider one-parameter subgroups of $\operatorname{SL}(n+1)$. The criterion states that a point lies in the stable space $\operatorname{Hom}_{d}^{n, s}$ (respectively, the semistable space $\operatorname{Hom}_{d}^{n, ss}$) iff for every such subgroup, its action on the point can be diagonalized with diagonal elements $t^{a_{I}}$, and at least one $a_{I}$ is negative (resp. non-positive). \medskip Note that the action of $A \in \operatorname{SL}(n+1)$ on $\varphi \in \mathbb{A}^{N+1}$ is conjugate to the action of $BAB^{-1}$ on $B\varphi B^{-1}$. In particular, it will have the same eigenvalues, so the action of a one-parameter subgroup $G = \mathbb{G}_{m}$ will have the same $a_{I}$'s. Therefore, we may conjugate $G$ to be diagonal, which will be enough to give us criteria for stability and semistability up to conjugation. So from now on, we assume $G$ is the diagonal subgroup whose $i$th diagonal entry is $t^{a_{i}}, a_{i} \in \mathbb{Z}$. Here we label the rows and columns from $0$ to $n$, in parallel with the label for the $q_{i}$'s. We have $a_{0} + \ldots + a_{n} = 0$. We may also assume that $a_{0} \geq a_{1} \geq \ldots \geq a_{n}$, after conjugation if necessary, and that the $a_{i}$'s are coprime. 
\medskip The action of $G$ on $\mathbb{A}^{N+1}$ is already diagonal. We denote the $\mathbf{x^{d}}$ coefficient of $q_{i}$ by $c_{\mathbf{d}}(i)$; then $G$ multiplies $c_{\mathbf{d}}(i)$ by $t^{a_{i}}t^{-(a_{0}d_{0} + \ldots + a_{n}d_{n})}$. A point $\varphi$ is not stable (resp. unstable) if for some choice of $G$, all the $c_{\mathbf{d}}(i)$'s for which $a_{0}d_{0} + \ldots + a_{n}d_{n} > a_{i}$ (resp. $a_{0}d_{0} + \ldots + a_{n}d_{n} \geq a_{i}$) are zero. Let us observe that this means that, for $d > 1$, every $x_{0}^{d}$ coefficient has to be zero, as we will have $da_{0} > a_{0} \geq a_{i}$ for every $i$. This means that $\varphi$ lacks any $x_{0}^{d}$ coefficient, so that the $q_{i}$'s have a nontrivial zero at $(1:0:\ldots:0)$, and $\varphi \notin \operatorname{Hom}_{d}^{n}$. The property of not being a morphism is preserved under conjugation, proving the theorem.\end{proof} \medskip Since $\operatorname{Hom}_{d}^{n}$ is stable, it has a natural geometric quotient induced by the $\operatorname{PGL}(n+1)$ action on $\mathbb{P}^{N}$, which we denote by $\mathrm{M}_{d}^{n}$; as $\operatorname{Hom}_{d}^{n}$ is affine, $\mathrm{M}_{d}^{n}$ is affine, with structure sheaf $\mathcal{O}_{\operatorname{Hom}_{d}^{n}}^{\operatorname{SL}(n+1)}$. We may also write $\mathrm{M}_{d}^{n, s}$ for the quotient of the stable space and $\mathrm{M}_{d}^{n, ss}$ for the quotient of the semistable space. The latter quotient is only categorical, rather than geometric, but will be proper over $\operatorname{Spec}\mathbb{Z}$ (all spaces in question, as well as $\operatorname{SL}(n+1)$, are defined over $\mathbb{Z}$; hence, so are the quotients). \medskip Let us now describe the not-stable and unstable spaces more explicitly. In the $n = 1$ case, $G$ depends only on $a_{0}$, which may be taken to be $1$. This gives us only one criterion for stability (resp. semi-stability), which means that the not-stable (resp. unstable) space is irreducible (in fact, it will be a linear subvariety and its orbit under $\operatorname{PGL}(2)$-conjugation). When $n > 1$, this is no longer true: $G$ depends on multiple variables, and we can find many infinite families of coprime $a_{i}$'s that sum to $0$ and are in decreasing order. \medskip However, the not-stable (resp. unstable) space will still be a union of finitely many linear subvarieties and their $\operatorname{PGL}(n+1)$ conjugates, whose number will generally grow with $d$ and $n$. This is because there are only $2^{N+1}$ linear spaces defined by conditions of the form $c_{\mathbf{d}}(i) = 0$ for a collection $J$ of $(\mathbf{d}, i)$ pairs. For each such space, either there exists a $G$ such that $(\mathbf{d}, i) \in J$ if and only if $a_{0}d_{0} + \ldots + a_{n}d_{n} > a_{i}$ (resp. $a_{0}d_{0} + \ldots + a_{n}d_{n} \geq a_{i}$), or there doesn't. Of course, a given $J$ may correspond to infinitely many $G$, which will in general have ratios $a_{0}:\ldots:a_{n}$ that are close in the archimedean metric. \medskip We omit the calculation of the linear subvarieties that occur as the not-stable (resp. unstable) space for each $d$ and $n$, as well as the number of such varieties. We will just note that there are far fewer than $2^{N+1}$ such varieties: for a start, we have already seen that $((d, 0, \ldots, 0), i) \in J$ for all $i$. One more constraint that follows trivially from the definition of the $a_{i}$'s is that if $(\mathbf{d}, i) \in J$, then so is $(\mathbf{d}, j)$ for $j > i$. Put another way, not being stable (resp. 
instability) imposes more conditions on $q_{j}$ than on $q_{i}$ for $j > i$. It may also be shown that for each $G$ the number of conditions is roughly between one half and $e^{-1}$ times $N$; we omit the proof, as this result will not be relevant in the remainder of this paper. \medskip Finally, when $n = 1$, the only $G$ has $a_{0} = 1, a_{1} = -1$, so $a_{0}d_{0} + a_{1}d_{1} = d_{0} - d_{1} = 2d_{0} - d$. When $d$ is even, $2d_{0} - d$ is always even, so the conditions $a_{0}d_{0} + a_{1}d_{1} > a_{i}$ and $a_{0}d_{0} + a_{1}d_{1} \geq a_{i}$ coincide, and the stable and semistable spaces are the same; this was shown in \cite{Sil96}. We will show that this will never be the case for higher $n$. First, observe that if we set $a_{0} = 1$, $a_{n} = -1$, and $a_{i} = 0$ for $i \neq 0, n$, we obtain $a_{0}d_{0} + \ldots + a_{n}d_{n} = d_{0} - d_{n}$, which may take any value between $-d$ and $d$ inclusive. Hence, the conditions $a_{0}d_{0} + \ldots + a_{n}d_{n} > a_{i}$ and $a_{0}d_{0} + \ldots + a_{n}d_{n} \geq a_{i}$ will not coincide. \medskip Now, suppose that $\varphi$ is a point that is not stable, with $c_{\mathbf{d}}(i) = 0$ if and only if $d_{0} - d_{n} > a_{i}$ with $a_{i}$ as above. If $\varphi$ is unstable, then we can find some $G$ such that if $a_{0}d_{0} + \ldots + a_{n}d_{n} \geq a_{0}$ then $d_{0} - d_{n} > 1$, and if $a_{0}d_{0} + \ldots + a_{n}d_{n} \geq a_{i}$ for $i \neq 0, n$, then $d_{0} - d_{n} > 0$. If for that $G$ we have $a_{1} \geq 0$, then looking at the $x_{0}x_{1}^{d-1}$ monomial, we get $a_{0}d_{0} + \ldots + a_{n}d_{n} = a_{0} + (d-1)a_{1} \geq a_{0}$ but $d_{0} - d_{n} = 1$, a contradiction. If $a_{1} < 0$, then we must have $a_{i} < 0$ for all $i > 0$, so $a_{0} + a_{n} > 0$. For $d = 2k + 1$, we consider the $x_{0}^{k+1}x_{n}^{k}$ monomial, for which $a_{0}d_{0} + \ldots + a_{n}d_{n} = k(a_{0} + a_{n}) + a_{0} > a_{0}$ but $d_{0} - d_{n} = 1$; for $d = 2k$, we consider the $x_{0}^{k}x_{n}^{k}$ monomial, for which $a_{0}d_{0} + \ldots + a_{n}d_{n} = k(a_{0} + a_{n}) > 0 > a_{1}$ but $d_{0} - d_{n} = 0$. Either way, we have a contradiction, so $\varphi$ is semistable but not stable. This proves: \begin{prop}\label{semistable}For all $d, n > 1$, we have $\operatorname{Hom}_{d}^{n, s} \subsetneq \operatorname{Hom}_{d}^{n, ss}$.\end{prop} \medskip We will conclude this section with the following strict containment: \begin{prop}\label{containment}$\operatorname{Hom}_{d}^{n} \subsetneq \operatorname{Hom}_{d}^{n, s}$.\end{prop} \begin{proof}Observe that the linear subvarieties defined above are invariant under conjugation by every upper triangular matrix, at least when we ensure $a_{0} \geq a_{1} \geq \ldots \geq a_{n}$. Hence, the codimension of the not-stable space is equal to the codimension of the largest linear subvariety, minus $\frac{n(n+1)}{2}$. It suffices to show this codimension is more than $1$, or, in other words, that every linear subvariety has codimension at least $\frac{n(n+1)}{2} + 2$. We will consider two cases. \medskip\textit{Case 1.} $a_{1} \geq 0$. When $d_{0} > 0$, the $x_{0}^{d_{0}}x_{1}^{d_{1}}$ monomial has $a_{0}d_{0} + a_{1}d_{1} > a_{1}$, so it is zero for all $q_{i}$'s except $q_{0}$; when $d_{0} > 1$ it is also zero for $q_{0}$, since $a_{0}d_{0} + a_{1}d_{1} \geq 2a_{0}$. This gives us a total codimension of $n^{2} + (n-1)$, which is larger than $\frac{n(n+1)}{2} + 1$ for all $n \geq 2$. When $n = 1$ this case is impossible because we need to have $a_{0} + a_{1} = 0$. \medskip\textit{Case 2.} $a_{1} < 0$. 
We have $a_{0} = -(a_{1} + \ldots + a_{n}) > -a_{i}$ for all $i$; therefore, the $x_{0}^{d-1}x_{i}$ monomial is zero in every $q_{j}$ except $q_{0}$; the $x_{0}^{d}$ monomial is always zero. This gives us a codimension of $n^{2} + n + 1$, which is large enough for all $n$.\end{proof} \begin{rem}The larger spaces $\operatorname{Hom}_{d}^{n, s}$ and $\operatorname{Hom}_{d}^{n, ss}$ have a meaning in the field of moduli spaces more than in this of dynamical systems, where we study the iterates of morphisms. The problem is that we cannot always iterate rational maps which are not morphisms, even if they are stable: the image may not be dense, and may eventually map to a locus on which the map is ill-defined. A map of the form $(q:0:0:\ldots:0)$ with $q(1, 0, \ldots, 0) = 0$ will be impossible to iterate. For general $q$, it will also be stable for large $d$, because we will have $a_{0}d_{0} + \ldots + a_{n}d_{n} > a_{0}$ for many different $\mathbf{d}$'s no matter how we choose the $a_{i}$'s, even after conjugation. When $n = 1$, it suffices to have $d \geq 4$, because then $\varphi$ is unstable only if is of the form $(p:q)$ with $p$ and $q$ sharing a common root of multiplicity at least $\frac{d-1}{2}$, and we may pick a map $(q:0)$ with $q$ having distinct roots. For one approach for giving a completion of $\operatorname{Hom}_{d}^{n}$ in a way that permits iteration at the boundary, see \cite{DeM05}.\end{rem} \section{Stabilizer Groups} \medskip The moduli space $\mathrm{M}_{d}^{n}$, as well as its stable and semistable completions, has a well-defined function mapping each morphism to its stabilizer group in $\operatorname{PGL}(n+1)$, which will be well-defined up to conjugation. This stabilizer will be finite, at least on $\mathrm{M}_{d}^{n, s}$, from standard facts from geometric invariant theory. We will study the possible subgroups of $\operatorname{PGL}(n+1)$ that may occur as stabilizers of morphisms. We gain very little by assuming Theorem~\ref{stable}, so we might as well not assume it \emph{a priori}; this will provide an alternative proof for it. \medskip Note that the resultant is a $\operatorname{PGL}(n+1)$-invariant section of a $\operatorname{PGL}(n+1)$-linearizable divisor on $\mathbb{P}^{N}$ that is nonzero on $\operatorname{Hom}_{d}^{n}$. Therefore, on $\operatorname{Hom}_{d}^{n}$ stability is equivalent to having closed fibers, which is equivalent to having a stabilizer group of the lowest possible dimension (see chapter $1$ of \cite{GIT}). Hence, to provide a second proof of Theorem~\ref{stable}, it suffices to show that the stabilizer of every $\varphi \in \operatorname{Hom}_{d}^{n}$ is finite. This was done in \cite{PST}. We will prove a stronger result: \begin{thm}\label{finite}The stabilizer of every point in $\operatorname{Hom}_{d}^{n}, d > 1$, is a finite group of order bounded in terms of $n$ and $d$.\end{thm} \begin{proof}Note that if $A \in \operatorname{Stab}(\varphi)$, then $BAB^{-1} \in \operatorname{Stab}(B\varphi B^{-1})$. Therefore, when considering individual stabilizing matrices, we may assume they are in Jordan canonical form. We use the following result: \begin{lem}\label{diagonal}If $A \in \operatorname{Stab}(\varphi)$, and $\varphi$ is not purely inseparable, then $A$ is diagonalizable.\end{lem} \begin{proof} In characteristic zero, this is trivial given Theorem~\ref{stable}. However, it is not trivial in characteristic $p$; the proof works for every characteristic, so we lose nothing from not using Theorem~\ref{stable}. 
\medskip We will assume that $A$ is not diagonalizable and derive a contradiction. It suffices to assume that $A$ is a Jordan matrix whose largest Jordan block is of size $r > 1$. After conjugation and scaling, we may assume that the first Jordan block is also the largest, and has eigenvalue $1$. We will label the rows and columns from $0$ to $n$, in parallel with the labels for the $q_{i}$'s. We will also write $\varphi = (q_{0}:q_{1}:\ldots:q_{n})$, $k_{i} = a_{ii}$ for the eigenvalue in the $i$th position, and $r_{i}$ for the size of the Jordan block containing $a_{ii}$. We have $r_{0} = r, k_{0} = 1, r_{i} \leq r$. \medskip Note that the inverse of the first Jordan block is the matrix with zeroes below the main diagonal and $a_{ij} = (-1)^{i-j}$ on or above it. Therefore, each vector $\mathbf{x} = (x_{0}, x_{1}, \ldots, x_{n})$ is transformed to: $$\mathbf{x'} = (x_{0}-x_{1}+\ldots\pm x_{r-1}, x_{1}-x_{2}+\ldots\mp x_{r-1}, \ldots, x_{r-1}, \ldots, \frac{1}{k_{n}}x_{n})$$ \noindent We write $q_{i}'(\mathbf{x}) = q_{i}(\mathbf{x'})$. Similarly, $A$ transforms $\varphi = (q_{0}, \ldots, q_{n})$ to: $$\varphi' = (q_{0}' + q_{1}', q_{1}' + q_{2}', \ldots, q_{r-1}', \ldots, k_{n}q_{n}')$$ \noindent Since $A$ stabilizes $\varphi$, we need $\varphi'$ to be a scalar multiple of $\varphi$. \medskip For each $\mathbf{d} \in \mathbb{Z}^{n+1}$, we denote the $\mathbf{x^{d}}$ coefficient of $q_{i}$ (respectively $q_{i}'$) by $c_{\mathbf{d}}(i)$ (resp. $c_{\mathbf{d}}'(i)$). We suppress trailing zeroes for simplicity, so that $c_{d}$ denotes the $x_{0}^{d}$ coefficient. We are looking for the largest $i$ such that $c_{d}(i) \neq 0$; such an $i$ exists, or else $(1:0:\ldots:0)$ is a common root of all the $q_{i}$'s. As the only $x_{0}^{d}$ term in $\mathbf{x'^{d}}$ comes from $x_{0}'^{d}$, we have $c_{d}'(j) = c_{d}(j)$ for all $j$. Now in $\varphi'$, the $i$th term is either $k_{i}q_{i}'$ or $k_{i}q_{i}' + q_{i+1}'$, so that its $x_{0}^{d}$ coefficient is $k_{i}c_{d}(i)$. This implies that the scaling factor is $k_{i}$, i.e. $\varphi' = k_{i}\varphi$. \medskip Now, assume that $i$ is not at the beginning of its Jordan block, that is, $a_{i-1, i} = 1$. Then $k_{i-1} = k_{i}$, and the fact that $\varphi' = k_{i}\varphi$ implies that $k_{i-1}c_{d}'(i-1) + c_{d}'(i) = k_{i}c_{d}(i-1)$. This reduces to $c_{d}(i) = 0$, a contradiction. Therefore, $i$ is at the beginning of its Jordan block. \medskip Let us now consider the $x_{0}^{d-1}x_{1}$ coefficients, and assume throughout that all indices are in the same Jordan block as $i$. We have $c_{d-1, 1}'(j) = c_{d-1, 1}(j) - dc_{d}(j)$. For $j > i$, this reduces to $c_{d-1, 1}'(j) = c_{d-1, 1}(j)$. On the other hand, the term corresponding to $c_{d-1, 1}$ in $\varphi' = k_{i}\varphi$ gives $k_{i}c_{d-1, 1}'(j) + c_{d-1, 1}'(j+1) = k_{i}c_{d-1, 1}(j)$. When $j > i$, this implies that $c_{d-1, 1}'(j+1) = 0$, so that $c_{d-1, 1}(j) = 0$ for $j > i + 1$; conversely, for $i + 1$, we obtain $k_{i}c_{d-1, 1}'(i) + c_{d-1, 1}'(i+1) = k_{i}c_{d-1, 1}(i)$, which reduces to $c_{d-1, 1}(i+1) = k_{i}dc_{d}(i) \neq 0$. This shows that $i + 1$ is the largest index with a nonzero $x_{0}^{d-1}x_{1}$ coefficient, at least in the Jordan block containing $i$. \medskip We may apply induction on $s(\mathbf{d}) = d_{1} + 2d_{2} + \ldots + (r-1)d_{r-1}$, and find that in the Jordan block containing $i$, the largest index with a nonzero $\mathbf{x^{d}}$ coefficient is $i + s(\mathbf{d})$.
Note that the Jordan block has $r_{i} \leq r$ elements, but the number of monomial indices attached to the first Jordan block is $(r-1)d + 1$, which is strictly greater than $r$ when $d, r > 1$. This is a contradiction: the last element of the Jordan block has $k_{i}c_{\mathbf{d}}' = k_{i}c_{\mathbf{d}}$ for all $\mathbf{d}$, i.e. $c_{\mathbf{d}}'(i + r_{i} - 1) = c_{\mathbf{d}}(i + r_{i} - 1)$, but that last equality is only true when $s(\mathbf{d}) \leq r_{i}$, which is not the case for all $\mathbf{d}$. Since we are assuming $d > 1$, we must have $r = 1$, and we are done. \medskip The careful reader may note that the proof that $i + s$ is the largest index with a nonzero $\mathbf{x^{d}}$ coefficient for $s(\mathbf{d}) = s$ makes an assumption about the characteristic we are working in. In characteristic zero, $d \neq 0$ and there is no problem. In characteristic $p$, we need to treat separately the case when $p \leq d$. Then for example we may have $p \mid d$, so that $c_{d-1, 1}'(j) = c_{d-1, 1}(j)$ for all $j$, and $c_{d-1, 1}(i+1)$ may be zero. Note that the number of monomial indices containing $x_{0}^{d-2}$ attached to the first Jordan block is $2(r-1) + 1$, which is strictly greater than $r$ when $r > 1$; when $p \nmid d(d-1)$, we may restrict ourselves to such monomials, and the proof proceeds as in characteristic zero. \medskip When $p \mid d-1$, we may restrict ourselves to monomials containing $x_{0}^{d-1}$, and proceed with the proof. We will only encounter an obstruction if $r_{i} = r$ and only at the end of the Jordan block, where the existence of a nonzero $x_{0}^{d-1}x_{r-1}$ monomial does not guarantee that of $x_{0}^{d-2}x_{1}x_{r-1}$. However, the action of $A$ on $q_{i + r - 1}$ takes it to $k_{i}q'_{i + r - 1}$, and we must have $c_{\mathbf{d}}'(i + r - 1) = c_{\mathbf{d}}(i + r - 1)$ for all $\mathbf{d}$. If we write $d-1 = p^{l}m$, $p \nmid m$, then we see that $x_{0}^{d-1}x_{r-1}$ is transformed to $k_{i}(x_{0} - x_{1} + \ldots \pm x_{r-1})^{d-1}x_{r-1} = k_{i}(x_{0}^{p^{l}} - \ldots \pm x_{r-1}^{p^{l}})^{m}x_{r-1}$ which shows that the $x_{0}^{p^{l}(m-1)}x_{1}^{p^{l}}x_{r-1}$ monomial does not satisfy $c_{\mathbf{d}}'(i + r - 1) = c_{\mathbf{d}}(i + r - 1)$. This yields a contradiction. \medskip Finally, when $p \mid d$, we may write $d = p^{l}m$ with $p \nmid m$. When $m > 1$, we apply exactly the same proof as in characteristic zero, except that we write $m$ instead of $d$ and $m_{j} = \frac{d_{j}}{p^{l}}$ instead of $d_{j}$; then we define $s(\mathbf{d}) = m_{1} + \ldots + (r-1)m_{r-1}$, and in the Jordan block containing $i$, the largest index with a nonzero $\mathbf{x^{d}}$ coefficient is $i + s(\mathbf{d})$. As $m > 1$, we have $(r-1)m + 1 > r$ for $r > 1$, and we have the same contradiction as in the characteristic zero case. Note that when $m = 1$, we may derive the same contradiction from any nonzero monomial not of the form $x_{j}^{d}$, which must exist if $\varphi$ is not purely inseparable. Hence, if $\varphi$ has a non-diagonalizable stabilizing matrix then it is purely inseparable and we are done.\end{proof} \medskip With the above lemma, we know that any abelian subgroup of $\operatorname{Stab}(\varphi) \subset \operatorname{GL}(n+1)$ will be simultaneously diagonalizable.
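As a simple illustration (not needed for the proof), consider the power map $\varphi = (x_{0}^{d}:x_{1}^{d}:\ldots:x_{n}^{d})$. A diagonal matrix $A$ with entries $(a_{0}, \ldots, a_{n})$ sends $\varphi$ to $(a_{0}^{1-d}x_{0}^{d}:\ldots:a_{n}^{1-d}x_{n}^{d})$, so $A$ stabilizes $\varphi$ in $\operatorname{PGL}(n+1)$ exactly when all the ratios $a_{i}/a_{0}$ are $(d-1)$st roots of unity. Together with the coordinate permutations, this already gives a stabilizer of size at least $(d-1)^{n}(n+1)!$, which is consistent with the uniform bound proved below.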
We will prove the following uniform bound on the size of abelian stabilizing subgroups: \begin{lem}\label{bound}Every diagonal subgroup stabilizing $\varphi \in \operatorname{Hom}_{d}^{n}$ is of size at most $d^{n+1}$.\end{lem} \begin{proof}A diagonal matrix $A$ with diagonal entries $(a_{0}, a_{1}, \ldots, a_{n})$ acts on each $q_{i}$ by multiplying $c_{\mathbf{d}}(i)$ by $\frac{a_{i}}{a_{0}^{d_{0}}\ldots a_{n}^{d_{n}}}$. Our case of interest will be the $x_{j}^{d}$ coefficients. Each has to be nonzero for at least one $i$, which induces the equation $a_{i} = a_{j}^{d}$. Note that we may set the scaling factor $k$ to be $1$, since the scalar matrix $k^{\frac{1}{1-d}}$ multiplies every coefficient by $k$. \medskip Now, we have at least $n + 1$ different relations $a_{i} = a_{j}^{d}$. We may drop relations until each $j$ has just one $i$ such that such a relation holds; dropping relations will increase the size of the group, so by bounding the size of the larger group, we will bound the size of any stabilizing subgroup. \medskip We obtain a function $j \mapsto i$. If the function is bijective, we may write it as a product of disjoint cycles, and conjugate to get the cycles to be $(0\ 1\ \ldots\ r_{1}-1) \ldots (n-r_{k}+1\ \ldots\ n)$, where here $r_{i}$ denotes the length of the $i$th cycle, and has nothing to do with the definition in Lemma~\ref{diagonal}. Then $a_{0}^{d^{r_{1}}} = a_{0}$ and $a_{0}$ is a root of unity of order dividing $d^{r_{1}} - 1$, the choice of which uniquely determines $a_{i}, 0 \leq i \leq r_{1} - 1$. We have similar results for $a_{r_{1}}, \ldots, a_{n-r_{k}+1}$; since $\sum r_{i} = n+1$, this bounds the size of the group by $d^{n+1}$. \medskip In general, of course, the function $j \mapsto i$ may not be bijective, so we can only write it as a product of precycles, whose cycles are disjoint. Here a precycle means a cycle and zero or more tails. The above discussion applies to the cycles. For the tails, suppose without loss of generality that $(0\ 1\ \ldots\ r)$ is a tail, where $r$ is part of a cycle and no element before it is; then the choice of $a_{r}$ determines a choice of $d$ possibilities for $a_{r-1}$ and in general $d^{s}$ for $a_{r-s}$ subject to the obvious compatibility condition. This clearly respects the bound of $d^{n+1}$: if $m$ is the total number of elements in cycles, then we have at most $d^{m}$ possibilities for the entries in the cycles, each of which gives at most $d^{n+1-m}$ possibilities for the entries in the tails.\end{proof} \medskip The bound $d^{n+1}$ works for abelian stabilizing subgroups in the purely inseparable case as well. We may view a purely inseparable $\varphi$ as the action of raising every coordinate to the $d$th power followed by the matrix $B$. Then $A\varphi A^{-1} = \varphi$ if and only if $ABA^{-1}_{d} = B$, where $A_{d}$ is the image of the matrix $A$ under the homomorphism of raising every entry to the $d$th power; we need to show the group of such $A$, which we will write as $\operatorname{Stab}(B)$, is finite. Since $A$ and $A_{d}$ are conjugate, the set of eigenvalues of $A$ is permuted by $\lambda \mapsto \lambda^{d}$, so every eigenvalue lies in a finite field. \medskip We may conjugate an abelian stabilizing subgroup $G$ to obtain a block diagonal group with each block upper triangular and with its $(i, j)$ entry depending only on $j - i$. We may also fix one element $C$ to be in Jordan canonical form, in which case we will have $C_{d} = C$ and thus $BC = CB$.
Then $B$ is in block form; labeling the blocks by $r, s$ and the $r$th block of $C$ by $C_{r}$, we see that the $B_{rs}$ is nonzero if and only if the blocks $r$ and $s$ are of the same size and equal for every element of $\operatorname{Stab}(B)$, and in any case $B_{rs}$ commutes with $C_{r} = C_{s}$, so it is upper triangular with its $(i, j)$ entry depending only on $j - i$. In particular, it commutes with every $A_{r} = A_{s}$, so that $B$ commutes with $G$. Hence for all $A \in G$, we have $AB = BA$ and $ABA^{-1}_{d} = B$, so that $A = A^{d}$ and $A$ has entries in $\mathbb{F}_{d}$. Furthermore, for each block in $G$ of size $r$, we have $r$ positive possibilities for $j - i$, inducing $d^{r}$ possible blocks, and $d^{n+1}$ possible matrices in $G$. \medskip Note that we may have additional stabilizing matrices in $\operatorname{PGL}(n+1)$. These occur when there exists an automorphism of the set $\{0, 1, \ldots, n\}$ that does not leave the diagonal vector $\mathbf{a} = (a_{0}, \ldots, a_{n}) \in \mathbb{A}^{n+1}$ fixed, but does fix $\mathbf{a} = (a_{0}:\ldots:a_{n}) \in \mathbb{P}^{n}$. Since the automorphism has to fix $a_{0}a_{1}\ldots a_{n}$, we see that it must send each $a_{i}$ to $\zeta a_{i}$ where $\zeta$ is a root of unity of order at most $n+1$; hence there are at most $n+1$ possibilities for such an automorphism, modulo automorphisms that fix $\mathbf{a} \in \mathbb{A}^{n+1}$ and are hence simultaneously block-diagonalizable with $A$. \medskip We will rely on one final bound, due to G. A. Miller \cite{Mil}: \begin{prop}\label{group}The size of a finite group is bounded in terms of the size of its largest abelian subgroup.\end{prop} \begin{proof}It suffices to show this for $p$-groups. For each $n$, we let $k(n)$ be the minimal exponent of the largest abelian subgroup of any $p$-group of exponent $n$. Furthermore, for each $l \leq n$, we let $k(n, l)$ be the minimal exponent subject to the restriction that $Z = Z(G)$ have exponent $l$, so that $k(n) = \min\{k(n, l)\}$. It is enough to show that $\lim_{n \to \infty}k(n) = \infty$. \medskip It is trivial to show that $k(2) = 2$. In general, for a $p$-group of exponent $n$ and center of exponent $l$, let $g$ be such that $g \notin Z$, $g^{p} \in Z$, and $gZ \in Z(G/Z)$. Unless $G$ is abelian, in which case the result is trivial, we may take $g$ to be a preimage of a nontrivial element in the socle of $G/Z$. For every $h \in G$, $hgh^{-1} = gz$ for some $z \in Z$; we obtain a group homomorphism $h \mapsto z$ from $G$ to $Z$. The homomorphism has kernel $K$ of exponent at least $n-l$ and center containing $\langle Z, g\rangle$. Any abelian subgroup of $K$ will be an abelian subgroup of $G$, so that we obtain $k(n, l) \geq k(n-l, l+1)$. It easily follows that $k(n) \geq 2\sqrt{n}$.\end{proof} \medskip The bound in the above proposition is very weak. It is known that for odd $p$ we have $k(n) \leq \frac{n+4}{3}$ and for $p = 2$ we have $k(n) \leq 2\frac{n+3}{5}$ \cite{Dan}, but little more. It is also not known \emph{a priori} that the group has to be finite, only that if it is finite then it is bounded. We may use Theorem~\ref{stable} and finish. However, with little additional effort, we may prove finiteness directly, providing an alternative proof that all morphisms are stable. The fact that finite implies uniformly bounded means that it is enough to show that every finitely generated stabilizing subgroup is finite. 
More precisely: \begin{prop}\label{inject}Every finitely generated subgroup of $\operatorname{PGL}(n)$ contained in finitely many finite-order conjugacy classes is finite.\end{prop} \begin{proof}Let $R$ be the $\mathbb{Z}$-algebra generated by the finitely many coefficients of the generators. Then the group is contained in $\operatorname{PGL}(n, R)$, and we may project it into the finite group $\operatorname{PGL}(n, R/\mathfrak{m})$ where $\mathfrak{m}$ is a maximal ideal in $R$; we will show the map can be chosen to be injective. In fact, each non-unipotent conjugacy class $i$ contains two different eigenvalues, $a_{i_{1}}, a_{i_{2}}$; therefore, if we choose $\mathfrak{m}$ not to contain $a_{i_{1}} - a_{i_{2}}$, which we can since there are only finitely many such elements, then the map will have unipotent kernel. In characteristic $0$, the only finite-order unipotent matrix is the identity, so the map is injective and we are done. \medskip In characteristic $p$, we obtain a finite-index and hence finitely generated unipotent group. We may conjugate it by some matrix $P$ to be upper triangular; then matrix multiplication is equivalent to addition of the $(r, r+1)$ entry for any $r$, and the finite generation implies that the set of all $(r, r+1)$ entries lies in a finitely generated $\mathbb{Z}/p\mathbb{Z}$-vector space, which is finite. For the matrices with all $(r, r+k)$ entries for all $k \leq l$, matrix multiplication corresponds to addition of $(r, r+l+1)$ entries, and we may add those entries to our vector space, which will remain finite. We may now construct $\mathfrak{m}$ to avoid the finite vector space and the determinant of $P$, as well as the eigenvalue differences described above. The map will then be injective.\end{proof} \medskip Note that in the proof of proposition we make no assumption on the base ring. Of course, the argument in the proposition applies to $\operatorname{GL}(n+1)$, and shows that the answer to Burnside's problem, which asks whether a finitely generated group of bounded exponent is necessarily finite, is yes when restricted to subgroups with faithful finite-dimensional representations over any field.\end{proof} \medskip For each stabilizer group $G \in \operatorname{PGL}(n+1)$, there is a closed subscheme $\operatorname{Fix}(G) \in \operatorname{Hom}_{d}^{n}$ consisting of all $\varphi$ with stabilizer group containing $G$. Theorem~\ref{finite} states that every $G$ with nonempty $\operatorname{Fix}(G)$ is finite and of bounded order. Furthermore, each nontrivial stabilizing matrix is, up to conjugation, one of the $d^{n+1}$ possibilities for each of the $(n+1)^{n+1}$ functions on the set $\{0, 1, \ldots, n\}$. We may strengthen this result as follows: \begin{cor}\label{stabilizer}There are only finitely many $G$ with nonempty $\operatorname{Fix}(G)$ up to conjugation. In particular, on an open dense set of $\operatorname{Hom}_{d}^{n}$, which descends to $\mathrm{M}_{d}^{n}$, the stabilizer group is trivial.\end{cor} \begin{rem}The statement that there are only finitely many such $G$ up to conjugation is stronger than the statement that there are only finitely many $G$ up to isomorphism, which follows trivially from the bound on the size of $G$.\end{rem} \begin{proof}Since the size of $G$ is bounded, it suffices to show that each stabilizing subgroup has finitely many projective $n+1$-dimensional representations up to conjugacy. 
This is always true when the representation is completely reducible, which will be true if the ambient characteristic $p$ does not divide $\left|G\right|$. But when $\operatorname{Fix}(G)$ is not purely inseparable, every element will be diagonalizable, so it will have order not divisible by $p$, so that $G$ has order not divisible by $p$. In the purely inseparable case, we have $\operatorname{PGL}(n+1)$ acting on itself stably and with finite stabilizers, so that each orbit is of dimension $(n+1)^{2} - 1$ and thus consists of all of $\operatorname{PGL}(n+1)$. In other words, every purely inseparable map is, up to conjugation, $(x_{0}^{d}:\ldots:x_{n}^{d})$, so that its stabilizer group is conjugate to $\operatorname{PGL}(n+1, \mathbb{F}_{d})$. \medskip It remains to be shown that the complement of $\bigcup_{G \supset I}\operatorname{Fix}(G)$ is dense; its openness follows from the fact that the condition $A\varphi A^{-1} = \varphi$ is closed. It suffices to show that each $\operatorname{Fix}(G)$ is a proper subset of $\operatorname{Hom}_{d}^{n}$. We lose nothing if we ignore purely separable maps. From the proof of Lemma~\ref{bound}, each of the finitely many elements that may occur in $G$, a diagonal matrix with $i$th entry $a_{i}$, multiplies $c_{\mathbf{d}}(i)$ by $\frac{a_{i}}{\mathbf{a^{d}}}$, and hence induces the relation $c_{\mathbf{d}}(i) = 0$ outside a set of $(\mathbf{d}, i)$'s for which $\frac{a_{i}}{\mathbf{a^{d}}}$ is constant. If $\frac{a_{i}}{\mathbf{a^{d}}}$ is constant for all $(\mathbf{d}, i)$, then we have $a_{i} = k\mathbf{a^{d}}$; choosing a constant $\mathbf{d}$, we see that $a_{i}$ is constant, so $A$ is a scalar matrix. Hence no non-trivial $A$ fixes all of $\operatorname{Hom}_{d}^{n}$.\end{proof} \medskip Note that when $n = 1$, \cite{Sil96} has an explicit bound on the size of $\operatorname{Stab}(\varphi)$ of $n_{1}!n_{2}!n_{3}!$, where the $n_{i}$'s are indices for which there exist periodic points for $\varphi$ of exact order $n_{i}$. The technique in this paper improves on that bound. Following the proof of Lemma~\ref{bound}, we have three possibilities for the map $j \mapsto i$ up to conjugation: $(1, 2) \mapsto (1, 2)$, $(1, 2) \mapsto (2, 1)$, and $(1, 2) \mapsto (1, 1)$. In the first case, $a_{0} = \zeta_{d-1}^{i}$ and $a_{1} = \zeta_{d-1}^{j}$, where we use $\zeta_{i}$ to denote an $i$th root of unity; modulo multiplying both $a_{0}$ and $a_{1}$ by some $\zeta_{d-1}$, we obtain a cyclic group of order $d-1$. In the second case, we have $a_{0} = \zeta_{d^{2} - 1}^{i}, a_{1} = a_{0}^{d}$, and modulo multiplying both by $\zeta_{d^{2} - 1}^{d+1}$, we obtain a cyclic group of order $d+1$. In the third case, $a_{0} = \zeta_{d-1}$ and $a_{1}^{d} = a_{0}$, and modulo multiplying both by $\zeta_{d-1}$, we obtain a cyclic group of order $d$. \medskip Thus every diagonalizable abelian subgroup $A$ of $\operatorname{Stab}(\varphi)$ will be cyclic of size dividing $d-1$, $d$, or $d+1$. Furthermore, the only non-diagonalizable element commuting with $A$ can be the matrix $M$ corresponding to the automorphism permuting $x_{0}$ and $x_{1}$; we have $M^{-1} = M$ and $MAM = A$ in $\operatorname{PGL}(2)$ if and only if $\frac{a_{1}}{a_{0}} = \frac{a_{0}}{a_{1}}$, or, equivalently, $a_{i} = \pm 1$ for $i = 0, 1$. In other words, the only possible non-diagonalizable abelian subgroup $A$ is $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$. 
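\medskip For instance, the power map $(x_{0}^{d} : x_{1}^{d})$, i.e. $z \mapsto z^{d}$ in the affine coordinate $z = x_{0}/x_{1}$, is stabilized by the diagonal matrices $\operatorname{diag}(\zeta, 1)$ with $\zeta^{d-1} = 1$ (the first case above) together with the coordinate swap $M$; for $d \geq 3$ these generate a dihedral group of order $2(d-1)$, so stabilizers of roughly this size do occur.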
\medskip Now, the only finite subgroups of $\operatorname{PGL}(2)$ are, up to conjugation, cyclic, dihedral, tetrahedral, octahedral, or icosahedral \cite{Sil94}. The last three groups are of order at most $60$; only the first two are infinite families. Since the largest abelian subgroup of the dihedral group of order $2k$ is of order $k$, we see that for large $d$, the order of $\operatorname{Stab}(\varphi)$ is bounded by $2(d+1)$. \medskip We conclude this section with a remark that $\mathrm{M}_{d}^{n}(k)$, consisting of all $k$-rational points in $\mathrm{M}_{d}^{n}(\overline{k})$, is not the same as the quotient $\operatorname{Hom}_{d}^{n}(k)/\operatorname{PGL}(n+1, k)$. The latter parametrizes morphisms of $\mathbb{P}^{n}_{k}$ up to conjugation defined over $k$, the former up to conjugation defined over $\overline{k}$. There exist maps defined over $k$ which are conjugate over $\overline{k}$ but not over $k$ itself. For examples, see \cite{Sil96} and \S\S 4.7-4.10 of \cite{ADS}. \section{Rationality of $\mathrm{M}_{d}$} \noindent In this section, we show that when $n = 1$, the variety $\mathrm{M}_{d} = \mathrm{M}_{d}^{1}$ is rational. This partly generalizes Silverman's result in \cite{Sil96} that $\mathrm{M}_{2} = \mathbb{A}^{2}$ over $\mathbb{Z}$. We do so by parametrizing fixed points of $\varphi$. The fixed point set of $\varphi$, $\operatorname{Fix}(\varphi)$, is the intersection of two curves in $\mathbb{P}^{1} \times \mathbb{P}^{1}$, the graph $\Gamma_{\varphi}$ and the diagonal embedding $\Delta$. As $\Delta$ is irreducible and not contained in $\Gamma_{\varphi}$ for $d > 1$, this is a proper intersection of divisors of type $(1, 1)$ and $(d, 1)$, so it has $d + 1$ points, counting multiplicity. We have: \begin{thm}\label{rational}$\mathrm{M}_{d}$ is birational to the total space of a rank-$d$ vector bundle on $\mathrm{M}_{0, d+1}$, the space of unmarked $d+1$ points on $\mathbb{P}^{1}$. Since $\mathrm{M}_{0, d+1}$ is rational, it follows that $\mathrm{M}_{d}$ is rational.\end{thm} \begin{proof}We explicitly write $\varphi(x:y) = (p:q)$ where $p(x, y) = a_{d}x^{d} + \ldots + a_{0}y^{d}$ and $q(x, y) = b_{d}x^{d} + \ldots + b_{0}y^{d}$. The fixed points of $\varphi$ are those for which $(p:q) = (x:y)$, which are the roots of the homogeneous degree-$d + 1$ polynomial $py - qx$. The polynomial $py - qx$ induces a map from $\operatorname{Rat}_{d}$ to $(\mathbb{P}^{1})^{d+1}/S_{d+1}$ where $S_{d+1}$ acts by permutation of the factors. We will call this map $\operatorname{Fix}$. We use the following lemma: \begin{lem}\label{surjective}The map $\operatorname{Fix}$ is surjective, and has rational fibers.\end{lem} \begin{proof}A point $(x:y)$ is fixed if and only if we have $py = qx$, i.e. $a_{d}x^{d}y + \ldots + a_{0}y^{d+1} = b_{d}x^{d+1} + \ldots + b_{0}xy^{d}$. This is a homogeneous linear condition in the coefficients of $\varphi$, and we have $d+1$ such conditions compared with $2d + 2$ variables. From elementary linear algebra, we have a solution space of linear dimension $d+1$, or projective dimension $d$. It is a linear subvariety of $\mathbb{P}^{2d+1}$, so it is rational. \medskip We can also show that this dimension-$d$ space will not be contained in the resultant locus. We fix a set of fixed points and write $r$ for the polynomial having those fixed points as roots. We need to show $r$ is of the form $py - qx$ for some $p$ and $q$ sharing no common root. 
By conjugating, we may assume neither $(0:1)$ nor $(1:0)$ is a root of $r$, so that it has a nonzero $x^{d+1}$ coefficient, which we may take to be $1$, and a nonzero $y^{d+1}$ coefficient. Now we let $q = -x^{d}$ so that $r + qx$ is divisible by $y$, yielding $p = \frac{r + qx}{y}$. Now $r + qx$ has a nonzero $y^{d+1}$ coefficient, so $p$ has a nonzero $y^{d}$ coefficient; therefore, $p$ does not have $(0:1)$ as a root, so it shares no root with $y$.\end{proof} \medskip Now, $\operatorname{Fix}$ descends to a rational map $\operatorname{Fix}':\mathrm{M}_{d} \to (\mathbb{P}^{1})^{d+1}/S_{d+1}\operatorname{PGL}(2)$ where $\operatorname{PGL}(2)$ acts diagonally; we are restricting to the open set of $\mathrm{M}_{d}$ whose fixed points are in the stable space of the action of $\operatorname{PGL}(2)$ on $(\mathbb{P}^{1})^{d+1}/S_{d+1}$. With this restriction, the image is $\mathrm{M}_{0, d+1}$, so it suffices to show the general fiber of $\operatorname{Fix}'$ is rational. Lemma~\ref{surjective} says that the fiber of $\operatorname{Fix}$ is rational, so it suffices to show that the automorphism group of the general point in $(\mathbb{P}^{1})^{d+1}/S_{d+1}$ is small enough that the quotient of the fiber by it is still rational. Using Noether's problem \cite{Noe} \cite{Sal}, we will show a stabilizer of size $4$ or $6$ is small enough. \begin{lem}\label{auto}Let $d > 1$. The automorphism group of a general configuration of $d + 1$ unmarked points in $\mathbb{P}^{1}$ is trivial, unless $d = 2$, in which case it is $S_{3}$, or $d = 3$, in which case it is $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$.\end{lem} \begin{proof}We will use inhomogeneous coordinates. For $d = 2$, we can conjugate the three points to be $0, 1, \infty$; the set is then stabilized by every permutation in $S_{3}$, so it has size $6$. For $d > 3$, we will show that the stabilizer is generically trivial, and on the way show that for $d = 3$ the stabilizer is generically of order $4$, consisting of all elements in $S_{4}$ of cycle type $(2, 2)$. This will be enough to prove the theorem. \medskip First, note that if a $(d+1)$-cycle stabilizes the set of points, then by conjugation we may assume it sends $0$ to $1$, $1$ to $\lambda$, $\mu$ to $\infty$, and $\infty$ to $0$. The cycle, regarded as an element of $\operatorname{PGL}(2)$, is of the form $\frac{ax + b}{cx + e}$; then $\frac{b}{e} = 1$, $a = 0$, $\frac{a + b}{c + e} = \lambda$, and $c\mu + e = 0$. These equations together imply that $\lambda = \frac{e}{c + e} = \frac{e}{e - \frac{e}{\mu}} = \frac{\mu}{\mu - 1}$. For a generic choice of $\mu, \lambda$, this can never happen, so no $(d+1)$-cycle is in the stabilizer. This remains true for $d = 3$, in which case we are forced to have $\lambda = \mu$, since generically $\lambda \neq \frac{\lambda}{\lambda - 1}$. \medskip Observe that if an automorphism of cycle type $(c_{1}, \ldots, c_{k})$ stabilizes the set, then each subset corresponding to the $i$th cycle is stabilized by a $c_{i}$-cycle. Therefore, the above discussion shows that no cycle of length $4$ or more stabilizes a generic set. We have reduced to the case when all cycles are of size $1$, $2$, or $3$. Now, if we have a stabilizing automorphism which includes a $3$-cycle, we may conjugate the $3$-cycle to be $(0\ 1\ \infty)$, forcing it to act on $\mathbb{P}^{1}$ as $\frac{1}{1-x}$. Generically, if $\lambda$ is a fourth point, none of the points in the set (including $\lambda$) will be $\frac{1}{1 - \lambda}$. We are left with cycles of size $1$ or $2$. 
If we have a stabilizing automorphism with two $2$-cycles, then up to conjugation we may assume the element acts on four points as $(0\ \infty)(1\ \lambda)$, so that it maps $x$ to $\frac{\lambda}{x}$. If $d = 3$ then this will stabilize the set regardless of what $\lambda$ is. If $d > 3$ then we have an additional point $\mu$, and generically $\frac{\lambda}{\mu}$ will not be in our set. \medskip We are left with automorphisms that act as single $2$-cycles, fixing $d - 1$ points. For $d \geq 4$, they will fix $3$ points and therefore act trivially. For $d = 3$, we may assume by conjugation that the element acts as $(0\ 1)$ and fixes $\infty$; this forces it to be the automorphism $1 - x$, which generically does not fix $\lambda$. This leaves us with automorphisms consisting only of $1$-cycles, i.e. the identity.\end{proof} \medskip We will return to Noether's problem now. Let us work over a fixed field $k$. Recall \cite{Noe} that if $K = k(x_{1}, \ldots, x_{m})$ is a purely transcendental field, and $G$ is a finite group of size $2$, $3$, $4$, or $6$ permuting the $x_{i}$'s, then $K^{G}$ is purely transcendental as well. In particular, if $R$ is the graded $k$-algebra $k[x_{1}, \ldots, x_{m}]$, and $G$ acts on it by permutation of the $x_{i}$'s, then $\operatorname{Proj} R^{G}$ is rational. We will show this to be the case when $R$ is the fiber of $\operatorname{Fix}$ in the $d = 2$ and $d = 3$ cases, by finding an orbit $y_{1}, \ldots, y_{m}$ generating $R$ over $k$. \medskip When $d = 2$, we have a $2$-dimensional fiber. Explicitly, we have six homogeneous variables $a_{i}, b_{i}, 0 \leq i \leq 2$, on which the automorphism group $\operatorname{PGL}(2)$ acts linearly. The fiber we are interested in consists of maps fixing the points $0, 1, \infty$, corresponding to the linear conditions $a_{0} = 0$, $a_{0} + a_{1} + a_{2} = b_{0} + b_{1} + b_{2}$, $b_{2} = 0$, respectively. The values of $a_{2}, a_{1}, b_{0}$ uniquely determine that of $b_{1}$, so we may write the fiber as $\operatorname{Proj} k[a_{2}, a_{1}, b_{0}]$. The group $S_{3}$ acts linearly and faithfully on the $k$-vector space spanned by $a_{2}, a_{1}, b_{0}$. Let us consider the action of the automorphism $(0\ \infty) = \frac{1}{x}$: $$\varphi(x) = \frac{a_{2}x^{2} + a_{1}x}{b_{1}x + b_{0}}$$ $$\frac{1}{\varphi(\frac{1}{x})} = \frac{b_{0}x^{2} + b_{1}x}{a_{1}x + a_{2}}$$ $$a_{2} \mapsto b_{0}$$ $$a_{1} \mapsto b_{1} = a_{2} + a_{1} - b_{0}$$ $$b_{0} \mapsto a_{2}$$ \noindent Observe that this automorphism fixes $a_{2} + b_{0}$. Let us also consider the action of the automorphism $(0\ 1) = 1 - x$: \begin{align}\nonumber 1 - \varphi(1-x) & = 1 - \frac{a_{2}(1-x)^{2} + a_{1}(1-x)}{b_{1}(1-x) + b_{0}} \\ \nonumber & = \frac{-a_{2}(1-x)^{2} + (b_{1} - a_{1})(1-x) + b_{0}}{b_{1}(1-x) + b_{0}}\end{align} $$a_{2} \mapsto -a_{2}$$ $$a_{1} \mapsto 2a_{2} + a_{1} - b_{1} = a_{2} + b_{0}$$ $$b_{0} \mapsto b_{0} + b_{1} = a_{2} + a_{1}$$ \noindent This automorphism does not stabilize $a_{2} + b_{0}$; hence, $a_{2} + b_{0}$ has stabilizer of order $2$, and orbit of size $3$. By repeating the maps $1-x$ and $\frac{1}{x}$, we can compute the orbit as $\{a_{2} + b_{0}, a_{1}, a_{2} + a_{1} - b_{0}\}$. This generates $R$ as long as $\operatorname{char} k \neq 2$. When $\operatorname{char} k = 2$, the automorphism $1 - x$ fixes $a_{2}$, whose orbit is then $\{a_{2}, b_{0}, a_{2} + a_{1}\}$. In either case, we can construct the action of $S_{3}$ as an action of generators, reducing the quotient to Noether's problem. 
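\medskip The two substitution rules above are easy to double-check by machine. The following small script (our own sketch, using the computer algebra package sympy; the symbols and the helper \texttt{phi} are introduced only for this check) verifies both $d = 2$ coefficient actions:
\begin{verbatim}
import sympy as sp

x, a2, a1, b0 = sp.symbols('x a2 a1 b0')

def phi(A2, A1, B0, t):
    # degree-2 map fixing 0, 1 and infinity; b1 is determined by phi(1) = 1
    B1 = A2 + A1 - B0
    return (A2*t**2 + A1*t) / (B1*t + B0)

# conjugation by x -> 1/x acts as (a2, a1, b0) -> (b0, a2 + a1 - b0, a2)
print(sp.simplify(1/phi(a2, a1, b0, 1/x) - phi(b0, a2 + a1 - b0, a2, x)))

# conjugation by x -> 1 - x acts as (a2, a1, b0) -> (-a2, a2 + b0, a2 + a1)
print(sp.simplify(1 - phi(a2, a1, b0, 1 - x) - phi(-a2, a2 + b0, a2 + a1, x)))

# both lines print 0
\end{verbatim}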
\medskip When $d = 3$, we similarly obtain a $3$-dimensional fiber, fixing the points $0, 1, \lambda, \infty$. We obtain the linear conditions $a_{0} = 0$, $b_{3} = 0$, $a_{3} + a_{2} + a_{1} = b_{2} + b_{1} + b_{0}$, $\lambda^{2}a_{3} + \lambda a_{2} + a_{1} = \lambda^{2}b_{2} + \lambda b_{1} + b_{0}$, and we may write $R$ as $k[a_{3}, a_{2}, b_{1}, b_{0}]$. We look at the automorphism $(0\ \infty)(1\ \lambda) = \frac{\lambda}{x}$: $$\varphi(x) = \frac{a_{3}x^{3} + a_{2}x^{2} + a_{1}x}{b_{2}x^{2} + b_{1}x + b_{0}}$$ $$\frac{\lambda}{\varphi(\frac{\lambda}{x})} = \frac{\lambda}{\frac{a_{3}\lambda^{3} + a_{2}x\lambda^{2} + a_{1}x^{2}\lambda}{b_{2}x\lambda^{2} + b_{1}x^{2}\lambda + b_{0}x^{3}}} = \frac{b_{0}x^{3} + b_{1}\lambda x^{2} + b_{2}\lambda^{2}x}{a_{1}x^{2} + a_{2}\lambda x + a_{3}\lambda^{2}}$$ $$a_{3} \mapsto b_{0}$$ $$a_{2} \mapsto \lambda b_{1}$$ $$b_{1} \mapsto \lambda a_{2}$$ $$b_{0} \mapsto \lambda^{2} a_{3}$$ \noindent We may scale down by a factor of $\lambda$ to obtain $(\lambda^{-1}b_{0}, b_{1}, a_{2}, \lambda a_{3})$, which is equivalent to picking the representative function $\frac{\sqrt{\lambda}}{\sqrt{\lambda^{-1}}x}$. Let us also consider the action of the automorphism $(0\ \lambda)(1\ \infty) = \frac{x - \lambda}{x - 1}$: $$\varphi(\frac{x - \lambda}{x-1}) = \frac{a_{3}(x-\lambda)^{3} + a_{2}(x-\lambda)^{2}(x-1) + a_{1}(x-\lambda)(x-1)^{2}}{b_{2}(x-\lambda)^{2}(x-1) + b_{1}(x-\lambda)(x-1)^{2} + b_{0}(x-1)^{3}}$$ \noindent We obtain: $$\frac{a_{3}(x-\lambda)^{3} + (a_{2}-\lambda b_{2})(x-\lambda)^{2}(x-1) + (a_{1}-\lambda b_{1})(x-\lambda)(x-1)^{2} - \lambda b_{0}(x-1)^{3}}{a_{3}(x-\lambda)^{3} + (a_{2}-b_{2})(x-\lambda)^{2}(x-1) + (a_{1}-b_{1})(x-\lambda)(x-1)^{2} - b_{0}(x-1)^{3}}$$ $$a_{3} \mapsto a_{3} + a_{2} + a_{1} - \lambda(b_{2} + b_{1} + b_{0})$$ \noindent We will show the orbit of $a_{3}$ generates $R$. But first, note that $a_{3} + a_{2} + a_{1} = b_{2} + b_{1} + b_{0}$ implies that $a_{1} = b_{2} + b_{1} + b_{0} - a_{2} - a_{3}$, and then $\lambda^{2}a_{3} + \lambda a_{2} + a_{1} = \lambda^{2}b_{2} + \lambda b_{1} + b_{0}$ implies that $(\lambda^{2} - 1)a_{3} + (\lambda - 1)a_{2} = (\lambda^{2} - 1)b_{2} + (\lambda - 1)b_{1}$, that is, $b_{2} = a_{3} + \frac{a_{2} - b_{1}}{\lambda + 1}$. \medskip We have $\frac{x - \lambda}{x - 1}$ mapping $a_{3}$ to $a_{3} + a_{2} + a_{1} - \lambda(b_{2} + b_{1} + b_{0}) = (1 - \lambda)(b_{2} + b_{1} + b_{0}) = (1 - \lambda)(a_{3} + b_{0} + \frac{a_{2} + \lambda b_{1}}{\lambda + 1})$. If we then apply the map $\frac{\lambda}{x}$, we obtain $(1 - \lambda)(\lambda^{-1}b_{0} + \lambda a_{3} + \frac{b_{1} + \lambda a_{2}}{\lambda + 1})$. The orbit is, up to scaling, $\{a_{3}, b_{0}, a_{3} + b_{0} + \frac{a_{2} + \lambda b_{1}}{\lambda + 1}, \lambda^{-1}b_{0} + \lambda a_{3} + \frac{b_{1} + \lambda a_{2}}{\lambda + 1}\}$, which generates $R$. Again, we apply Noether's problem and obtain a rational quotient, as desired.\end{proof} \medskip Unfortunately, this proof does not seem to generalize to $\mathrm{M}_{d}^{n}$. Although Lemma~\ref{auto} is true for all $n, d > 1$, there are two significant obstructions. First, the dimension of the target space of the map $\operatorname{Fix}$ will be $n(1 + d + \ldots + d^{n})$, which is larger than $N_{d}^{n}$ unless $n$ and $d$ are very small. This means that the map will not be surjective, though the fibers are still rational whenever they are nonempty. 
And second, even for small $n$ and $d$ the base space for the vector bundle is not $\mathrm{M}_{0, d+1}$, which is relatively tame, but rather the space of $1 + d + \ldots + d^{n}$ points on $\mathbb{P}^{n}$, a much more complex object. All we can say at this stage is that $\mathrm{M}_{d}^{n}$ is unirational, which follows trivially from the fact that it is covered by $\operatorname{Hom}_{d}^{n}$. \bibliographystyle{amsplain}
\begin{document} \begin{titlepage} \null \begin{flushright} KOBE-TH-96-04\\ hep-ph/9701302\\ December 1996 \end{flushright} \vspace{7mm} \begin{center} {\Large\bf Unitarity of Neutral Kaon System \par} \vspace{1.5cm} \baselineskip=7mm {\large Tsukasa Kawanishi\footnote{E-mail address: [email protected]}\par} \vspace{5mm} {\sl Graduate School of Science and Technology, Kobe University\\ Rokkodai, Nada, Kobe 657, Japan \par} \vspace{3cm} {\large\bf Abstract} \end{center} \par In the neutral kaon system we always use a non-hermitian Hamiltonian for convenience in treating the decay process, so unitarity seems to be lost. If we take the decay channels ($\pi\pi$, $\pi\pi\pi$, $\pi\ell\nu$ $\ldots$ etc.) into account, however, the Hamiltonian of the whole system must be hermitian. We attempt to derive an effective Hamiltonian involving only the $K^0$, $\bar{K}^0$ states, starting from the hermitian Hamiltonian. For brevity, we take only a $\pi\pi$ state into account as the decay channel in this paper. We cannot avoid an oscillation between the $K^0$, $\bar{K}^0$ and $\pi\pi$ states if we start from a hermitian Hamiltonian whose states all have discrete energy levels. We therefore treat the $\pi\pi$ state, more appropriately, as having a continuous energy spectrum, so that the decay of $K^0$, $\bar{K}^0$ into $\pi\pi$ is achieved. As a consequence, we find a time evolution immediately after the decay starts that differs from what we expect in the conventional method, though it recovers Fermi's golden rule on a long enough time scale. \end{titlepage} \setcounter{footnote}{0} \baselineskip=7mm \sect{Introduction} The neutral kaon system has long served as a probe of fundamental physics. The time evolution equation of the neutral kaon system is customarily written with a non-hermitian Hamiltonian, so the unitarity of the system seems to be lost. This is because we do not take the decay channels ($\pi\pi$, $\pi\pi\pi$, $\pi\ell\nu$ $\ldots$ etc.) of the K mesons into account. The Hamiltonian of the neutral kaon system must be hermitian once the decay channels are included in the basis of this Hamiltonian. We attempt to derive the decay behavior of the neutral kaon system effectively, starting from a hermitian Hamiltonian \cite{sanda}. Recently, there have been many discussions about CPT and quantum mechanics violation \cite{cpt}. If these violations exist, they must be extremely small. In order to treat these extremely small quantities, we need to describe the time evolution of the neutral kaon system as exactly as possible. Therefore, it seems important to clarify, by relying on an exact treatment, whether the conventional treatment using a non-hermitian Hamiltonian gives the exact result. In the future, such a clarification may affect analyses of CPT and quantum mechanics violation. In Sec.2, for convenience, we derive the time evolution equation of the neutral kaon system in the $K^0$, $\bar{K}^0$ and $\pi\pi$ basis. Here we treat the $\pi\pi$ state as a state with a continuous energy spectrum. Then we can avoid an oscillation between the $\pi\pi$ and K meson states, and describe the decay phenomenon properly. 
In Sec.3, assuming CP invariance, we solve the time evolution equation concerning $K_1$ and $\pi\pi$ states perturbatively, and consider the decay process of $K_1$. In Sec.4, we examine the validity of making use of perturbation to solve time evolution equation. We introduce a simplified model where we can solve the equation non-perturbatively, and compare the result with the one obtained by our perturbative method. Sec.5 is devoted to summary and discussion. \sect{Derivation of Time Evolution Equation } For brevity, we consider a system with $K^0$, $\bar{K}^0$ and $\pi\pi$ states. In a frame where the $K^0$, $\bar{K}^0$ are at rest, their 4-momenta are fixed, while $\pi\pi$ state has an ambiguity of relative momentum $\vec{k}$, $\pi(\vec{k})\pi( - \vec{k})$. If we start from $\pi\pi$ state as a discrete state, we can not avoid an oscillation $\pi\pi \leftrightarrow K$ . Thus $K^0$, $\bar{K}^0$ are treated as discrete states, while $\pi\pi$ state is a continuous state with a continuous parameter $\vec{k}$. At arbitrary time $ t $, a state $|\psi(t)>_I$ is given as \begin{equation} |\psi(t)>_I = C_{K^0} (t) |K^0> + C_{\bar{K}^0} (t) |\bar{K}^0> + \int {d^3}k C_{\vec{k}}(t) |\pi(\vec{k})\pi(-\vec{k})> , \end{equation} where index $ I $ implies interaction representation. We note that the momentum $\vec{P}$ should be preserved in the $K^0$, $\bar{K}^0$ $\rightarrow$ $\pi\pi$ transition, since the interaction Hamiltonian of the form \begin{eqnarray} & &{H^I}_{int} = \int {d^3}x {{\cal H}^I}_{int}, \\ & &{{\cal H}^I}_{int} = \alpha K^0 \pi\pi + \bar{\alpha} \bar{K}^0 \pi\pi, \\ & &(\pi\pi : \pi^+ \pi^- , \pi^0 \pi^0 ; CP invariance \rightarrow \alpha = - \bar{\alpha} ), \nonumber \end{eqnarray} implies that the total momentum should vanish due to the factor $\delta (\vec{P_K} + \vec{P_\pi} + \vec{P_\pi})$. We thus have assigned $\vec{k} , - \vec{k}$ for the momentum of two $\pi$'s $(\vec{P_K} = \vec{0})$. The normalization of continuous state is fixed as \begin{equation} <\pi (\vec{k}) \pi ( - \vec{k}) | \pi (\vec{k'}) \pi (- \vec{k'})> = \delta^3 (\vec{k} - \vec{k'}). \end{equation} Then, $<\psi(t) | \psi(t)>_I = 1$ and Eq.(2.1), (2.4) gives \begin{equation} |C_{K^0} (t) |^2 + |C_{\bar{K^0}} (t) |^2 + \int {d^3}k |C_{\vec{k}} (t) |^2 = 1 \end{equation} Schr\"odinger equation is written as \begin{eqnarray} & &i\frac{\partial}{\partial t} |\psi(t)>_S = H|\psi(t)>_S \\ & &H = H_0 + H_{int}. \nonumber \end{eqnarray} Here, $ H_0 $ is free Hamiltonian and $ H_{int} $ is interaction Hamiltonian due to weak interaction. Index $S$ implies Schr\"odinger representation. It is related to the interaction picture by \begin{eqnarray} |\psi(t)>_I = e^{iH_0t} |\psi(t)>_S \\ i\frac{\partial}{\partial t} |\psi(t)>_I = {H^I}_{int} |\psi(t)>_I, \end{eqnarray} with the interaction Hamiltonian being defined by \begin{equation} {H^I}_{int} = e^{iH_0t} H_{int} e^{-iH_0t} . 
\end{equation} A "matrix" form of time evolution equation is possible by inserting a complete set \begin{equation} 1 = |K^0><K^0| + |\bar{K^0}><\bar{K^0}| + \int{d^3}k |\pi(\vec{k})\pi(-\vec{k})><\pi(\vec{k})\pi(-\vec{k})| , \end{equation} to get \begin{eqnarray} \lefteqn{i\frac{\partial}{\partial t}\pmatrix{<K^0|\psi(t)>_I \cr <\bar{K^0}|\psi(t)>_I \cr <\pi\pi|\psi(t)>_I \cr}} \nonumber \\ &=& \pmatrix{<K^0|{H^I}_{int}|K^0>& <K^0|{H^I}_{int}|\bar{K^0}>& <K^0|{H^I}_{int}|\pi\pi> \cr <\bar{K^0}|{H^I}_{int}|K^0>& <\bar{K^0}|{H^I}_{int}|\bar{K^0}>& <\bar{K^0}|{H^I}_{int}|\pi\pi> \cr <\pi\pi|{H^I}_{int}|K^0>& <\pi\pi|{H^I}_{int}|\bar{K^0}>& <\pi\pi|{H^I}_{int}|\pi\pi> \cr} \nonumber \\ & & \hspace{7cm} \times \pmatrix {<K^0|\psi(t)>_I \cr <\bar{K^0}|\psi(t)>_I \cr <\pi\pi|\psi(t)>_I \cr}. \end{eqnarray} Eq.(2.1) and above gives \begin{equation} i\frac{\partial}{\partial t} \pmatrix{C_{K^0} (t) \cr C_{\bar{K^0}} (t) \cr C_{\vec{k}} (t) \cr} = \pmatrix{ {H^I}_{K^0K^0} & {H^I}_{K^0{\bar{K^0}}} & {H^I}_{{K^0}{\vec{k'}}} \cr {H^I}_{\bar{K^0}K^0} & {H^I}_{\bar{K^0}\bar{K^0}} & {H^I}_{\bar{K^0}\vec{k'}} \cr {H^I}_{\vec{k}K^0} & {H^I}_{\vec{k}\bar{K^0}} & {H^I}_{\vec{k}\vec{k'}} \cr } \pmatrix{C_{K^0}(t) \cr C_{\bar{K^0}} (t) \cr C_{\vec{k'}} (t) \cr} \end{equation} where multiplication of matrices should be understood as \begin{equation} {H^I}_{K^0\vec{k}} C_{\vec{k}} (t) \Rightarrow \int {d^3}k {H^I}_{K^0\vec{k}} C_{\vec{k}}(t) . \end{equation} In Eq(2.12), the Hamiltonian in the interaction representation is related to the time independent Hamiltonian in the Schr\"odinger representation as \begin{eqnarray} & & \pmatrix{e^{iE_{K^0}t}& 0& 0 \cr 0& e^{iE_{\bar{K^0}t}}& 0 \cr 0& 0& e^{iE_{\vec{k}}t} \cr} \pmatrix{ H_{{K^0}{K^0}}& H_{{K^0}{\bar{K^0}}}& H_{{K^0}{\vec{k'}}} \cr H_{\bar{K^0}{K^0}}& H_{\bar{K^0}\bar{K^0}}& H_{\bar{K^0}\vec{k'}} \cr H_{\vec{k}{K^0}}& H_{\vec{k}\bar{K^0}}& H_{\vec{k}\vec{k'}} \cr} \nonumber \\ & & \hspace{4cm} \times \pmatrix{ e^{-iE_{K^0}t} & 0 & 0 \cr 0 & e^{-iE_{\bar{K^0}t}} & 0 \cr 0 & 0 & e^{-iE_{\vec{k'}}t} \cr} \nonumber \\ &=&\pmatrix{ H_{{K^0}{K^0}}& H_{{K^0}{\bar{K^0}}}& e^{i \Delta E_{{K^0} \vec{k'}} t} H_{{K^0}{\vec{k'}}} \cr H_{\bar{K^0}{K^0}}& H_{\bar{K^0}\bar{K^0}}& e^{i \Delta E_{\bar{K^0} \vec{k'}} t} H_{\bar{K^0}\vec{k'}} \cr e^{i \Delta E_{\vec{k} {K^0}} t} H_{\vec{k} {K^0}}& e^{i \Delta E_{\vec{k} \bar{K^0}} t} H_{\vec{k} \bar{K^0}}& e^{i\Delta E_{\vec{k} \vec{k'}}t} H_{\vec{k} \vec{k'}} \cr} \end{eqnarray} where, \begin{equation} \Delta E_{\vec{k} \vec{k'}} = E_{\vec{k}} - E_{\vec{k'}}, \end{equation} and so on. CPT invariance implies \begin{equation} E_{K^0} = E_{\bar{K^0}} = M_{K}. \end{equation} \sect{Solving Time Evolution Equation Perturbatively} In order to see how the unitarity is effectively lost we simplify the situation by assuming CP invariance, and consider only ($K_1, \pi\pi$) subsystem where $K_1$ denotes CP even eigenstate, i.e $K_2$ decouples from the system. We thus consider $K_1 \rightarrow \pi\pi $ decay and solve time evolution equation by perturbative method. For convenience, we treat $\pi$ as a massless particle. 
Refering to Eq.(2.14), we obtain \begin{eqnarray} & &\pmatrix{C_{K_1}(t) \cr C_{\vec{k}} (t) \cr} = {\bf T} e^{i \int_{0}^t \hat{H} (t) dt} \pmatrix{C_{K_1} (0) \cr C_{\vec{k'}} (0) \cr} \nonumber \\ &=& (1 + i \int_{0}^t \hat{H} (t') dt' - \int_{0}^t \hat{H} (t') dt' \int_{0}^{t'} \hat{H} (t") dt" + \cdots) \pmatrix {C_{K_1} (0) \cr C_{\vec{k'}} (0) \cr} , \\ & &\hat{H}(t) = \pmatrix{0 & e^{i(E_{K_1} - E_{\vec{k'}} )t} H_{{K_1}{\vec{k'}}} \cr e^{-i(E_{K_1} - E_{\vec{k}})t} H_{\vec{k} K_1} & 0 \cr} , \end{eqnarray} where $ \bf{T} $ stands for time ordered product and we have taken an interaction representation where \begin{equation} < K_1 | H_{int} | K_1> = 0 , \hspace{1cm} <\pi\pi | H_{int} | \pi\pi > = 0. \end{equation} Inserting Eq.(3.2) into Eq.(3.1) and expanding in powers of $\hat{H} (t)$ to second order, we obtain \begin{equation} \pmatrix{C_{K_1}(t) \cr C_{\vec{k}} (t) \cr} = \pmatrix{ H_{11} & H_{12} \cr H_{21} & H_{22} \cr} \pmatrix{C_{K_1} (0) \cr C_{\vec{k'}} (0) \cr}, \end{equation} \begin{equation} H_{11} = 1 + \int {d^3} k \frac{e^{i \Delta E_{{K_1} \vec{k}} t} - 1 -i \Delta E_{{K_1} \vec{k}} t}{( \Delta E_{{K_1} \vec{k}} )^2} |H_{K_1 \vec{k}}|^2 \end{equation} \begin{equation} H_{12} = \frac{e^{i \Delta E_{{K_1} \vec{k'}} t} -1} {\Delta E_{{K_1} \vec{k'}}} H_{K_1 \vec{k'}} \end{equation} \begin{equation} H_{21} = - \frac{e^{-i \Delta E_{{K_1} \vec{k}} t} -1} {\Delta E_{{K_1} \vec{k}}} H_{\vec{k} K_1} \end{equation} \begin{equation} H_{22} = 1 + \frac{1}{\Delta E_{{K_1} \vec{k'}}} ( \frac{e^{i \Delta E_{\vec{k} \vec{k'}} t} -1} {\Delta E_{\vec{k} \vec{k'}}} + \frac{e^{-i \Delta E_{{K_1} \vec{k}} t} -1} {\Delta E_{{K_1} \vec{k}}}) H_{\vec{k} K_1} H_{K_1 \vec{k'}} \end{equation} and take initial conditions into consideration, \begin{equation} C_{K_1} (0) = 1 , \hspace{1cm} C_{\vec{k}} (0) = 0 . \end{equation} Then \begin{equation} |C_{K_1}(t)|^2 = \{1 + 2 \int d^3 k \frac{{\rm{cos}} (E_{K_1} - E_{\vec{k}} )t - 1} {(E_{K_1} - E_{\vec{k}} )^2} |H_{K_1 \vec{k}} |^2 + O(H^4) \} |C_{K_1} (0) |^2 . \end{equation} Time derivative of Eq.(3.10) is \begin{equation} \frac{\partial}{\partial t} |C_{K_1}(t)|^2 = \{ -2 \int d^3 k \frac{{\rm{sin}} (E_{K_1} - E_{\vec{k}} )t}{E_{K_1} - E_{\vec{k}}} |H_{K_1 \vec{k}} |^2 + O(H^4) \} |C_{K_1} (t) |^2 . \end{equation} For the assumed interaction in Eq.(2.2) and Eq.(2.3), $|H_{K_1 \vec{k}} |^2 \propto \frac{1}{\sqrt{E_{K_1}{E_{\vec{k}}}^2}}$. Thus, \begin{eqnarray} \frac{\partial}{\partial t} |C_{K_1}(t)|^2 &\propto& - \int_{M_{K_1}}^\infty dk \frac{{\rm{sin}} kt}{k} |C_{K_1} (t) |^2 \nonumber \\ &=& - \{ \pi - {\rm{si}} (M_{K_1} t) \} |C_{K_1} (t)|^2, \end{eqnarray} where si is sine integral function and $M_{K_1}$ denotes the $K_1$ mass. With $ t \rightarrow \infty $ , Eq.(3.12) reduces to what we expect from Fermi's golden rule, \begin{equation} \frac{\partial}{\partial t} |C_{K_1} (t)|^2 \propto - {\rm const} |C_{K_1}(t)|^2 . \end{equation} This equation indicates a decay process, not an oscillation. We should, however, note that when time $t$ is relatively small, our result shows a clear difference from the conventional result expected from the golden rule. \sect{Comparison with a Non-perturbative Method} In this section, we would like show that our procedure relying on a perturbative method actually reproduces the exact result solved nonperturbatively for some simplified case. This indicates the validity of our method. 
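Before turning to that comparison, it may help to attach numbers to the small-$t$ behavior found in Sec.3. The following short script (our own illustration, in natural units, with all constant prefactors dropped) evaluates the $k$-integral of Eq.(3.11) under the assumptions $|H_{K_1 \vec{k}}|^{2} \propto 1/E_{\vec{k}}^{2}$ and $E_{\vec{k}} \simeq k$, for which the integral equals $\frac{\pi}{2} + {\rm Si}(M_{K_1}t)$; with these assumptions the instantaneous decay rate starts at half of its asymptotic value and saturates to the golden-rule constant on a time scale of order $1/M_{K_1}$:
\begin{verbatim}
import numpy as np
from scipy.special import sici

M = 1.0                                  # K_1 mass in natural units
for t in [0.1, 1.0, 5.0, 20.0, 100.0]:
    Si, _ = sici(M*t)                    # sici(x) returns (Si(x), Ci(x))
    rate = np.pi/2 + Si                  # k-integral of Eq.(3.11), prefactors dropped
    print(f"M*t = {M*t:6.1f}   rate / asymptotic rate = {rate/np.pi:.3f}")
\end{verbatim}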
\subsection{A Simplified Model Solved by Non-perturbative Method \cite{kawai}} In this subsection, we treat the following system in 1+1 dimension. \begin{eqnarray} H | K_1 > = M_{K_1} | K_1 > + v \int dk | k >, \\ H | k > = v^* | K_1 > + k | k >, \end{eqnarray} where $k$ is assumed to take $-\infty$ to $\infty$, and $v$ is assumed to be constant. We write eigenvalue and eigenvector of $H$ as $H | \omega > = \omega | \omega >$ and difine $ | \omega >$ as following \begin{equation} | \omega > = N_{\omega} \{ | K_1 > + \int dk f_{\omega} (k) | k > \}. \end{equation} Here, $N_{\omega}$ is a normalization factor. From the above we obtain \begin{equation} \omega = M_{K_1} + v^*\int dk f_{\omega} (k), \hspace{1cm} \omega f_{\omega} (k) = v + k f_{\omega} (k), \end{equation} which reads as \begin{equation} f_{\omega} (k) = \frac{{\bf P}}{\omega - k} + \frac{\omega - M_{K_1} }{v^*} \delta (\omega -k). \end{equation} Here, {\bf P} denotes a principal value\cite{ww}. The normalization condition for continuous states, \begin{equation} < \omega | \omega ' > = \delta ( \omega - \omega ') , \hspace{1cm} < k | k' > = \delta (k-k'), \end{equation} fixes $N_\omega$ as, \begin{equation} | N_{\omega} |^2 = \frac{|v|^2 }{ {\pi}^2 |v|^4 + (\omega - M_{K_1})^2}. \end{equation} At $t=0$, the state is assumed to be pure $K_1$ state, \begin{equation} | \psi (0)> = |K_1 >= \int d \omega | \omega >< \omega | K_1 >= \int d \omega {N_{\omega}}^* | \omega>. \end{equation} Time evolution of $| \psi (t) >$ is uniquely fixed as \begin{equation} | \psi (t) >= \int d \omega e^{-i \omega t} {N_{\omega}}^* | \omega >. \end{equation} The "survival probability" of $K_1$ is then given as \begin{equation} |<K_1 | \psi(t)>|^2 = |\int d \omega \frac{|v|^2 e^{-i \omega t}} { {\pi}^2 |v|^4 + (\omega - M_{K_1})^2}|^2 \nonumber \\ = e^{-2 \pi |v|^2 t}. \end{equation} This expression means that a particle $K_1$ decays with $\Gamma = 2 \pi |v|^2$, and corresponds to the Fermi's golden rule. Namely, in this simplified model there is no deviation from the conventional result. \subsection{Comparison with our Perturbative Model} How about the result obtained by our perturbative method?. From Eq.(3.10), for 1+1 dimensional case we obtain ($|H_{K_1 \vec{k}} |^2 = |v|^2$) \begin{equation} | C_{K_1} (t)|^2 = \{ 1+2 \int dk \frac{\cos (M_{K_1} - k)t -1} {(M_{K_1} - k)^2} |v|^2 + O( H^4 ) \} |C_{K_1} (0) |^2. \end{equation} Eq.(4.11) becomes \begin{equation} |C_{K_1} (t) |^2 = \{1-2 |v|^2 \pi t + O( |v|^4 ) \} |C_{K_1} (0) |^2 . \end{equation} and time differential is \begin{equation} \frac{ \partial }{ \partial t } |C_{K_1} (t) |^2 \simeq -2 \pi |v|^2 |C_{K_1} (t)|^2 . \end{equation} This answer reproduces the exact result Eq.(4.10), and we realize that once we go through differential equation our method actually reproduces the nonperturbative result correctly. \sect{Summary and Discussion} In this paper, we have found a clear deviation from the result obtained by conventional Fermi's golden rule for relatively small $t$. But as is seen from Eq.(3.12) we can see this difference only for the time duration of order $t \simeq \frac{1}{M_{K_1}}$. Therefore, it seems to be rather hard to observe such difference. When, however, CPT symmetry is tested, such small deviation might affect the physical quantities, since CPT violation, if any, should be very tiny. 
\vspace{1cm} {\bf Acknowledgement} I wish to record my special thanks to A.I.Sanda(Nagoya-U) for stimulating discussions and M.Kobayashi(KEK), C.S.Lim (Kobe-U) and T.Morozumi (Hiroshima-U) for their suggestions, and H.Kawai (KEK) and H.Sonoda (UCLA) for explaining the prescription in subsection 4.1. \vspace{1cm}
\section{Introduction} Shock waves are generated by a discontinuous change in the thermodynamic variables of a system which travels faster than the local speed of sound. They are usually caused by the sudden compression or expansion of matter. Shocks are very common in astrophysics: they are seen in supernovae and binary neutron star collisions and are associated with stellar winds, supernova remnants, radio jets, accretion on to compact objects, and phase transitions (PT) in neutron stars (NSs) \citep{amir,sironi,joel,lundman,amano}. When the shock is radiation dominated, shocks are also believed to be sources of non-thermal photons, cosmic rays and neutrinos \citep{zeldovich,ronaldo,tavani,jones}. In the last few decades renewed interest has been generated in shock waves not only in astrophysics and cosmology but also in high energy physics \citep{elth,carrus,smoller,kons,blandford,bland}. Non-relativistic shock waves have been known for a long time and are very well studied \citep{landau,bethe,hirs,raizer,richard,shapiro}. The relativistic theory of shock waves started with Taub \citep{taub} and Landau \citep{landau}, where the Rankine-Hugoniot (RH) jump conditions (basically the energy-momentum and mass flux conservation equations) are solved to derive a single equation, known as the Taub adiabat (TA) equation, connecting the thermodynamic properties of the upstream and downstream matter. Relativistic shocks were further analysed rigorously by Lichnerowicz \citep{lichnero,thorne,ran}, treating matter as an ideal fluid with infinite conductivity. Since then a number of astrophysical problems have been addressed in the literature \citep{colgate,anile,toshi,mangano,padua,eliz,gs,hoover}. In all these calculations the velocity of the shock is less than the speed of light (c). The shock surface or front forms a hyper-surface which is space-like (SL), which is to say that the shock hyper-surface has a SL normal vector. However, this is not the only possibility, and it was pointed out by Csernai \citep{csernai} that the normal to the hyper-surface can also be time-like (TL). He argued that a system undergoing rapid rarefaction can generate bubbles at spatial points that are not causally connected. The hyper-surface then becomes TL with bubble formation and growth. As an example it was pointed out that as a supercooled quark-gluon plasma fireball expands in a heavy-ion collision, it cools and gets hadronized. Various works then analyzed these results, mainly in the context of heavy-ion collisions \citep{gorenstein,rosenhauer,gyulassy}. In a shock wave the matter properties behind and in front of the shock are usually the same (satisfying the same equation of state (EoS)). There is only compression and rarefaction of matter due to the shock. However, if the initial and final states of matter across the shock discontinuity do not belong to the same EoS, then we also have a combustion. The initial and the final state then do not lie on the same curve because of the difference in chemical energy. The TA equation is then called the combustion or Chapman-Jouguet adiabat (CA). The CA is particularly interesting when we study astrophysical PTs \citep{ritam-irfan,ritam-shailendra-rana} in connection with NSs. After the proposition of the conjecture that strange quark matter (SQM) is the most stable state of matter at high density \citep{itoh,witten,bodmer}, there has been renewed interest in the astrophysical community in testing the theory. 
The best laboratory to test this theory is the core of a NS, where the central density can reach as high as $4-6$ times the nuclear saturation density. Such densities are ideal for the conversion from hadronic matter (HM) to quark matter (QM). One of the models for the PT process involves a shock-induced PT whose kinematics can be studied using the RH conditions and the TA equation \citep{bhattacharyya1,igor,schramm,ritam-amit,ritam-shailendra-rana}. These works assume that a shock wave initiates a combustion where the upstream matter is HM and the downstream matter is QM. They then solve the RH equations or the CA equation to obtain the downstream matter properties along with the matter velocities on either side of the front. Comparing the matter velocities with the local speed of sound one can identify the combustion process. Most of the calculations done previously assume relativistic RH conditions. However, if we are to extend them to analyze problems in astrophysics we need to extend the relativistic results to the general relativistic (GR) regime. That is the main aim of this work. In Section II we give the formalism for extending the calculation from the relativistic to the general relativistic regime. Section III discusses and compares the results of the relativistic and general relativistic cases and also gives a simple application of the result to a neutron star. We summarize and discuss our results in Section IV. \section{Formalism} We begin with the assumption that the matter is flowing along the radial direction and the shock front is perpendicular to it. The width of the discontinuity is negligible in comparison to the actual system, and the front is assumed to be a single discontinuity across which the flow variables are discontinuous. Denoting the two sides of the shock discontinuity as “a” and “b”, the difference of a thermodynamic quantity (Q) across the front is given by [Q] = $Q_{a}$-$Q_{b}$. The discontinuous surface is denoted by $\Sigma$, having a unit normal vector, $\wedge^{\mu}$, in space-time. The shock discontinuity can be SL or TL according to the normalization condition given by \begin{align} \wedge^{\mu}\wedge_{\mu} & = -1 && \text{(For TL Hyper-surface($\Sigma$))}& \nonumber \\ & = +1 && \text{(For SL Hyper-surface($\Sigma$))}\nonumber \end{align} \begin{figure} \hspace{0.5cm} \includegraphics[scale=0.8]{sf.eps} \caption{Schematic diagram of a moving shock discontinuity. The shock front is moving from left to right. $v_a$ and $v_b$ are the matter velocities on either side of the front.} \end{figure} \subsection{Relativistic Shock Waves} The relativistic fluid dynamic equations are constructed from the energy-momentum tensor ($T^{\mu\nu}$) in Minkowski ST, where $T^{00}$ is the energy density, $T^{0\alpha}$ is the energy flux and $T^{\alpha\beta}$ the momentum flux. 
The line element in flat ST is given by \begin{equation} ds^2= \eta_{\mu\nu}dx^{\mu}dx^{\nu}=-dt^2 + dr^2 + r^{2}d\Omega^{2}, \end{equation} where $\eta_{\mu\nu} $ is flat space-time metric which in spherical polar coordinates is given by \begin{equation*} \eta_{\mu\nu} = \begin{pmatrix} - 1 & 0 & 0 & 0\\ 0 &+1 & 0 & 0\\ 0 & 0 & r^{2} & 0\\ 0 & 0 & 0 & r^{2}\sin ^2\theta \end{pmatrix} \end{equation*} Considering a perfect fluid (i.e, fluid with no viscosity or heat conductivity), the stress-energy tensor is given by \begin{equation} T^{\mu\nu} = wu^{\mu}u^{\nu} + pg^{\mu\nu} \end{equation} where, $w$ (the enthalpy) is equal to the $e$ (energy density) + $p$ (pressure), $u^{\mu}=(\gamma,\gamma v,0,0)$ is the four-velocity vector with $\gamma = \frac{1}{\sqrt{1 - v^{2}}}$ and its norm is given by $g_{\mu\nu}u^{\mu}u^{\nu} = -1$. The conservation of energy-momentum tensor ($\partial_{\mu}T^{\mu\nu}=0$) and mass-flux ($\partial_{\mu}J^{\mu}=0$) in the rest frame of the front becomes the RH jump condition across the discontinuity \begin{align} T_{a}^{\mu\nu}\wedge_{\nu} & = T_{b}^{\mu\nu}\wedge_{\nu} \\ n_{a}u_{a}^{\mu}\wedge_{\mu} &= n_{b}u_{b}^{\mu}\wedge_{\mu}. \end{align} \begin{itemize} \item For SL discontinuity, the normal vector (NV) is given by \begin{equation} \Lambda^{\mu} = (0,1,0,0) \nonumber \end{equation} Therefore, the RH jump conditions becomes \begin{align} T_{a}^{01} & = T_{b}^{01} \nonumber\\ \Rightarrow w_{a}\gamma_{a}^{2}v_{a} & = w_{b}\gamma_{b}^{2}v_{b} \\ T_{a}^{11} & = T_{b}^{11} \nonumber \\ \Rightarrow w_{a}\gamma_{a}^{2}v_{a}^{2} + P_{a} & = w_{b}\gamma_{b}^{2}v_{b}^{2} + P_{b} \\ n_{a}u_{a}^{1} & = n_{b}u_{b}^{1} \nonumber \\ \Rightarrow n_{a}v_{a}\gamma_{a} & = n_{b}v_{b}\gamma_{b} \end{align} \item For TL discontinuity, the NV is given by \begin{equation} \Lambda^{\mu} = (1,0,0,0). \nonumber \end{equation} and the RH jump conditions are given as \begin{align} T_{a}^{10} & = T_{b}^{10} \nonumber\\ \Rightarrow w_{a}\gamma_{a}^{2}v_{a} & = w_{b}\gamma_{b}^{2}v_{b} \\ T_{a}^{00} & = T_{b}^{00} \nonumber \\ \Rightarrow w_{a}\gamma_{a}^{2} - P_{a} & = w_{b}\gamma_{b}^{2} - P_{b}\\ n_{a}u_{a}^{0} & = n_{b}u_{b}^{0} \nonumber \\ \Rightarrow n_{a}\gamma_{a} & = n_{b}\gamma_{b} \end{align} \end{itemize} where $\gamma_i = \frac{1}{(1-v_i^2/c^2)^{1/2}} = \frac{1}{(1-v_i^2)^{1/2}}$, where $i=a, b$. The above equations can be further solved to derive the TA/CA equation both for the SL and TL case and is given by \begin{equation} \Bigg(\frac{w_{a}^2}{n_{a}^{2}} - \frac{w_{b}^2}{n_{b}^{2}}\Bigg) + (p_{b} - p_{a})\Bigg(\frac{w_{a}}{n_{a}^{2}} + \frac{w_{b}}{n_{b}^{2}}\Bigg) = 0. \\ \end{equation} Interestingly, for the relativistic case the TA/CA equation for both SL and TL comes out to be the same. The TA equation does not have the velocity terms, however knowing the matter properties on both sides of the front, the velocity of the matter of both phases can be calculated. 
They are given as \begin{itemize} \item SL velocities \begin{align} v_a = \sqrt{\frac{(p_b - p_a)(e_b + p_a)}{(e_b - e_a)(e_a + p_b)}} \nonumber \\ v_b = \sqrt{\frac{(p_b - p_a)(e_a + p_b)}{(e_b - e_a)(e_b + p_a)}} \nonumber \end{align} \item TL velocities \begin{align} v_a = \sqrt{\frac{(e_b - e_a)(e_a + p_b)}{(p_b - p_a)(e_b + p_a)}} \nonumber\\ v_b = \sqrt{\frac{(e_b - e_a)(e_b + p_a)}{(p_b - p_a)(e_a + p_b)}} \nonumber \end{align} \end{itemize} \subsection{General Relativistic Shock Waves} For general relativistic shock the line element is given by (for a spherically symmetric space-time (ST)) \begin{equation} ds^2= g_{\mu\nu}dx^{\mu}dx^{\nu}=- e^{2\phi(r)}dt^2 + e^{2\Lambda(r)}dr^2 + r^{2}d\Omega^{2} \end{equation} where $g_{\mu\nu}$ is general spherically symmetric metric given by \begin{equation*} g_{\mu\nu} = \begin{pmatrix} - e^{2\phi(r)} & 0 & 0 & 0\\ 0 & e^{2\Lambda(r)} & 0 & 0\\ 0 & 0 & r^{2} & 0\\ 0 & 0 & 0 & r^{2}\sin ^2(\theta ) \end{pmatrix} \end{equation*} Although the form of the energy-momentum tensor remains the same, the four velocity in curved ST is given by \begin{equation} u^{\mu} = \dfrac{dx^{\mu}}{ d\tau} = \dfrac{dx^{\mu}dt}{dt d\tau}\nonumber \end{equation} where, the four position vector in spherical polar coordinate is $x^{\mu}=(t,r,\theta,\Phi)$ and the norm of $u^\mu$ is given by $g_{\mu\nu}u^\mu u^\nu = -1 $. \par Considering the front to be moving along the radial direction the four-velocity of fluid particles is given by \begin{equation} u^{\mu} = \gamma_g(1,v_r,0,0) \end{equation} where, $\theta $ and $\Phi =$ constant, $\gamma_g=\frac{1}{[e^{2\phi}-e^{2\Lambda}v_r^2]^{1/2}}$ and $v_r=\dfrac{dr}{dt}$ is radial velocity. Following the same normalization conditions (as done for relativistic treatment), the NV for SL shock is \begin{equation} \Lambda^{\mu} = (0,\frac{1}{\sqrt{g_{11}}},0,0) \end{equation} and the NV for TL discontinuity is \begin{equation} \Lambda^{\mu} = (\frac{1}{\sqrt{-g_{00}}},0,0,0) \end{equation} where, $g_{00} = - e^{2\phi(r)}$ and $g_{11} = e^{2\Lambda(r)} $ are the metric elements. Having defined our ST metric and the shock normal, the jump condition for GR shocks is given by \begin{itemize} \item For SL shock \par the energy-flux conservation jump condition is \begin{align} T_{a}^{01} & = T_{b}^{01} \nonumber\\ \Rightarrow w_{a}\gamma_{ga}^2 v_{ra} & = w_{b}\gamma_{gb}^2 v_{rb} \end{align} \par the momentum-flux conservation jump condition becomes \begin{align} T_{a}^{11} & = T_{b}^{11} \nonumber \\ \Rightarrow w_{a}\gamma_{ga}^2 v_{ra}^2 + \frac{p_a}{e^{2\Lambda_a}} & = w_{b}\gamma_{gb}^2 v_{rb}^2 + \frac{p_b}{e^{2\Lambda_b}} \end{align} \par and the particle-flux conservation is given by \\ \begin{equation} n_{a}u_{a}^{1} = n_{b}u_{b}^{1} \nonumber \\ \end{equation} \begin{align} \Rightarrow n_a\gamma_{ga}v_{ra} = n_b\gamma_{gb}v_{rb} =j \end{align} \par Using eqn 17 and 18 the particle-current density (j) is given by \begin{align} j^2 & = (\frac{p_b}{e^{2\Lambda_b}}-\frac{p_a}{e^{2\Lambda_a}})\frac{1}{(w_aV_a^{2}-w_bV_b^{2})} \end{align} \par where, $V_a=\frac{1}{n_a}$ and $V_b=\frac{1}{n_b}$. 
\item For TL shocks \par \par the energy-flux conservation \begin{align} T_{a}^{10} & = T_{b}^{10} \nonumber\\ \Rightarrow w_a\gamma_{ga}^2 v_{ra} & = w_b\gamma_{gb}^2 v_{rb} \end{align} \par the momentum-flux conservation \begin{align} T_{a}^{00} & = T_{b}^{00} \nonumber\\ \Rightarrow w_a\gamma_{ga}^2 - \frac{p_a}{e^{2\phi_a}} & = w_b\gamma_{gb}^2 - \frac{p_b}{e^{2\phi_b}} \end{align} \par particle-flux conservation \\ \begin{equation} n_{a}u_{a}^{0} = n_{b}u_{b}^{0} \nonumber \\ \end{equation} \begin{align} \Rightarrow n_a\gamma_{ga} = n_b\gamma_{gb} = j \end{align} \par where j$^2$ is given by \begin{align} j^2 & = (\frac{p_a}{e^{2\phi_a}}-\frac{p_b}{e^{2\phi_b}}) \frac{1}{(w_aV_a^{2}-w_bV_b^{2})} \end{align} \end{itemize} Once the Jump condition are derived, we then proceed to derive the corresponding TA/CA for the SL and TL shocks. \begin{itemize} \item For SL Shocks, using eqn 18, $v_{ra}$ and $v_{rb}$ becomes \\ \end{itemize} \begin{align} v_{ra}^2 = \frac{e^{2\phi_a}}{\frac{1}{j^2V_a^2}+e^{2\Lambda_a}} \\ v_{rb}^2 = \frac{e^{2\phi_b}}{\frac{1}{j^2V_b^2}+e^{2\Lambda_b}} \end{align} Squaring eqn 16 and using above equations (i.e, the value of $v_{ra}^{2}$ and $v_{rb}^{2}$ with $j^2$), the TA/CA equation for SL shocks becomes \begin{equation} (w_a^{2}\gamma_{ga}^4 v_{ra}^{2} - w_b^{2}\gamma_{gb}^4 v_{rb}^{2} ) = 0 \nonumber \end{equation} \begin{align} \Rightarrow &\left[\frac{p_b}{e^{2\Lambda_b}} - \frac{p_a}{e^{2\Lambda_a}}\right]\left[\frac{w_a^2V_a^4e^{2\Lambda_a}}{e^{2\phi_a}} - \frac{w_b^2V_b^4e^{2\Lambda_b}}{e^{2\phi_b}}\right]\nonumber\\ &+ (w_aV_a^2-w_bV_b^2)\left[ \frac{w_a^2V_a^2}{e^{2\phi_a}} - \frac{w_b^2V_b^2}{e^{2\phi_b}}\right] = 0. \end{align} \begin{itemize} \item Similarly, we can derive the TA/CA for TL shock and is given by \end{itemize} \begin{equation} (w_a^{2}\gamma_{ga}^4 v_{ra}^{2} - w_b^{2}\gamma_{gb}^4 v_{rb}^{2} ) = 0 \nonumber \end{equation} \begin{align} \Rightarrow &\left[\frac{p_b}{e^{2\phi_b}} - \frac{p_a}{e^{2\phi_a}} \right]\left[\frac{w_a^2V_a^4e^{2\phi_a}}{e^{2\Lambda_a}} - \frac{w_b^2V_b^4e^{2\phi_b}}{e^{2\Lambda_b}}\right] \nonumber \\ & + (w_aV_a^2-w_bV_b^2)\left[ \frac{w_a^2V_a^2}{e^{2\Lambda_a}}-\frac{w_b^2V_b^2}{e^{2\Lambda_b}}\right] = 0 \end{align} It is interesting to note that for the GR shocks the TA/CA equation are different for the SL and TL shocks unlike the relativistic case where they were same. 
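As a quick consistency check (a sketch of our own, not part of the derivation), one can verify with the computer algebra package sympy that when the metric potentials on the two sides of the front are equal, the SL TA/CA equation above factors into the flat space-time TA equation of the previous subsection times the prefactor $(w_{a}V_{a}^{2} - w_{b}V_{b}^{2})$, so every solution of the relativistic adiabat also satisfies the GR equation in this limit:
\begin{verbatim}
import sympy as sp

p_a, p_b, e_a, e_b, n_a, n_b = sp.symbols('p_a p_b e_a e_b n_a n_b', positive=True)
w_a, w_b = p_a + e_a, p_b + e_b
V_a, V_b = 1/n_a, 1/n_b

# GR space-like TA/CA equation with equal metric potentials on both sides
# (here exp(2*phi) = exp(2*Lambda) = 1 on either side of the front)
gr_sl = ((p_b - p_a)*(w_a**2*V_a**4 - w_b**2*V_b**4)
         + (w_a*V_a**2 - w_b*V_b**2)*(w_a**2*V_a**2 - w_b**2*V_b**2))

# flat space-time Taub adiabat
taub = (w_a**2/n_a**2 - w_b**2/n_b**2) + (p_b - p_a)*(w_a/n_a**2 + w_b/n_b**2)

# the difference should vanish identically; sympy prints 0
print(sp.simplify(gr_sl - (w_a*V_a**2 - w_b*V_b**2)*taub))
\end{verbatim}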
The matter velocities of the two phases in terms of the thermodynamic variables are given by \\ SL shocks \\ \begin{align} {v_a} &= \sqrt{\frac{A_2B_1B_2w_a^{2} - a_{11}+b_{11}}{2c_{11}}} \\ {v_b} &= \frac{{v_a}B_1(A_2 w_a + A_1 w_b - \frac{b_{11}}{w_a B_1B_2} )}{2A_1[B_2p_a + B_1\epsilon_b]} \end{align} where, we have defined \begin{align*} A_1 & = e^{2\phi_a} \text{,} A_2 = e^{2\phi_b} \text{,} B_1 = e^{2\Lambda_a} \text{,} B_2 = e^{2\Lambda_b}\\ w_a & = (p_a + \epsilon_a) \text{,} w_b = (p_b + \epsilon_b)\\ a_{11} &= A_1\big[B_1B_2(\epsilon_a - p_a)(\epsilon_b - p_b ) + 2(B_2^{2}p_a\epsilon_a +B_1^{2}p_b\epsilon_b)]\\ b_{11} &= w_a\sqrt{B_1B_2}\sqrt{B_1B_2(A_1^{2}w_b^{2}+A_2^{2}w_a^{2}) - 2A_2a_{11}}\\ c_{11} &= B_1(B_1p_b + B_2\epsilon_a)(B_2\epsilon_a - B_1\epsilon_b) \\ \end{align*} TL shocks\\ \begin{align} {v_a} &= \sqrt{\frac{A_1(a_{21} - b_{21})}{2c_{21}}} \\ {v_b} &= \frac{{v_a}A_2(B_2 w_a + B_1 w_b + \frac{b_{21}}{w_a A_1A_2} )}{2B_2[A_1p_b + A_2\epsilon_a]} \end{align} where, \begin{align*} A_1 & = e^{2\phi_a} \text{,} A_2 = e^{2\phi_b} \text{,} B_1 = e^{2\Lambda_a} \text{,} B_2 = e^{2\Lambda_b}\\ w_a & = (p_a + \epsilon_a) \text{,} w_b = (p_b + \epsilon_b)\\ a_{21} &= A_1A_2\big[B_2 w_a^{2} - B_1( \epsilon_b - p_b)( \epsilon_a - p_a)] - 2B_1(A_2^{2}p_a\epsilon_a +A_1^{2}p_b\epsilon_b)\\ b_{21} &= w_a\sqrt{A_1A_2}\sqrt{A_1A_2(B_1^{2}w_b^{2}-B_2^{2}w_a^{2}) + 2B_2a_{21}}\\ c_{21} &= B_1^{2}(A_2p_a + A_1\epsilon_b)(A_2p_a - A_1p_b) \\ \end{align*} The jump conditions, TA/CA equations and the matter velocities of the GR shocks reduces to the corresponding relativistic shocks (both for SL and TL) if either the metric potentials becomes zero \begin{equation*} \Rightarrow \Lambda_a = \phi_a = \Lambda_b = \phi_b = 0 \\ \end{equation*} or if the metric potentials on either side of the front becomes equal, i.e. \\ \begin{equation*} \Lambda_a = \Lambda_b = \Lambda; ~ \phi_a = \phi_b = \phi. \\ \end{equation*} \section{Results} We assume that the shock front is accompanied with a combustion front which is altering the matter properties on the other side of the front. Therefore, we need two equation of state to describe matter properties on either side of the front. The initial matter EoS is most probably a zero temperature hadronic EoS as we are considering cold matter. We also assume that the final burnt matter EoS is of quark matter as the model system that we will study in this work is a neutron star (NS) where hadronic matter is undergoing a deconfinment to quark matter due to the shock induced burning. Although, the final burnt QM can have finite temperature the temperature is less than $10$ MeV \citep{ritam-slow-burn}. Such low temperature has negligible effect of the EoS of quark matter \citep{igor,ritam-amit}. For the hadronic phase, we adopt a relativistic mean-field EoS with PLZ parameter setting \citep{serot,glen,reinhard}. The EoS of the quark matter is modelled after the MIT bag model \citep{chodos} having u, d, and s quarks with mass $5$, $10$ and $80$ MeV. The bag pressures is assumed to be $B^{1/4} = 160$ MeV and the quark interaction term is taken to be $0.6$ \citep{alford}. It is possible to deduce the combustion process associated with a shock. They are categorized comparing the matter velocities and speed of sound on either side of the front like fast combustion or slow combustion. However, for relativistic shocks, the matter velocities are usually comparable to speed of light or even can be super-luminous. 
There can even be situations where the matter velocities become imaginary and non-physical. Starting with a given initial state and solving for the final state, one can then categorize the different physical and non-physical regions in an $\epsilon$-$p$ diagram, depending on the matter velocities. The two important velocity limits are \\ 1. the matter velocities reach the speed of light ($v=c=1$);\\ 2. the matter velocities become imaginary. The condition for which the matter velocities of relativistic SL shocks reach the speed of light is given by \begin{align*} v_a^{2} &= 1 = v_b^{2} \\ \Rightarrow p_b &= \epsilon_b + p_a - \epsilon_a \end{align*} and the conditions for which they become imaginary are \begin{align*} v_a^{2} &< 0, \quad v_b^{2} < 0 \\ \Rightarrow p_b &< p_a \textrm{ \& } \epsilon_b > \epsilon_a \\ & \quad \textrm{or} \quad \\ \Rightarrow p_b &> p_a \textrm{ \& } \epsilon_b < \epsilon_a \end{align*} It is interesting to note that for relativistic shocks the above two conditions are the same for TL shocks, because $v_a^{sl} = \frac{1}{v_a^{tl}}$ and $v_b^{sl} = \frac{1}{v_b^{tl}}$. \\ However, for GR shocks the conditions for SL and TL shocks are different. For the GR SL shocks the conditions for luminal matter velocities are \\ a) the condition for $v_a \approx 1$ is \begin{align*} p_b &=\frac{{B_2} \left[p_a h_1 +{A_1} {B_1} h_2 + {A_2}{B_1}^2 h_3 + {B_1}^2 {\epsilon_a} ({B_2} {\epsilon_a} - {B_1} {\epsilon_b})\right]}{{B_1} \left(h_1 + {A_1} {B_1} h_4 + {B_1}^2 ({B_1}{\epsilon_b} - {B_2}{\epsilon_a})\right)}. \end{align*} b) and the condition for $v_b \approx 1$ is \begin{align*} p_b &= \frac{{B_2} (- h {A_1} {B_1}{\epsilon_b}- {A_1} {B_2}{p_a}+{A_2}{B_1}{p_a}+ h {A_2}{B_1} {\epsilon_a})}{{B_1} ({A_2}{B_1}-{A_1}{B_2})}. \end{align*} where \begin{align*} h_1 & = {A_1}^2 ({B_1} {\epsilon_b}+{B_2}{p_a})\\ h_2 & = - {B_1} {p_a} {\epsilon_b}+ {B_1} {\epsilon_a} {\epsilon_b} + 2 {B_2} {p_a} {\epsilon_a} \\ h_3 & = - {p_a}^2 - 2 {p_a} {\epsilon_a} - {\epsilon_a}^2 \\ h_4 & = - 2 {B_1} {\epsilon_b} - {B_2} {p_a} + {B_2} {\epsilon_a} \end{align*} The luminal matter velocity conditions for GR TL shocks are \\ a) the condition for $v_a \approx 1$ becomes \begin{align*} p_b &= \frac{{A_2}({A_2}G1+{A_1}G3)}{{A_1} \left( {A_1}^3 {\epsilon_b}+{A_1}^2 G4 + {A_1}{B_1} G5 +{A_2}{B_1}^2 {p_a}\right)} \end{align*} b) and the condition for $v_b \approx 1$ becomes \begin{align*} p_b &= \frac{h {A_2} [{A_1} ({B_1}{\epsilon_b}+{B_2} {p_a}-{B_2}{\epsilon_a})-{A_2}{B_1}{p_a}]}{{A_1} ({A_1}{B_2}-{A_2}{B_1})} \end{align*} where \begin{align*} G1 &= ({A_1}^2{\epsilon_a}^2+2{A_1}{B_1}{p_a}{\epsilon_a}+{B_1}^2 {p_a}^2) \\ G2 &= (-{B_1}{p_a} {\epsilon_b}+{B_1} {\epsilon_a} {\epsilon_b}-{B_2}{p_a}^2-2 {B_2} {p_a}{\epsilon_a} -{B_2}{\epsilon_a}^2)\\ G3 &= -{A_1}^2 {\epsilon_a} {\epsilon_b}+{A_1}G2 + {B_1}^2{p_a}{\epsilon_b} \\ G4 &= -({A_2}{\epsilon_a}-2 {B_1} {\epsilon_b})\\ G5 &= (-{A_2}{p_a}+{A_2}{\epsilon_a}+{B_1}{\epsilon_b})\\ h & = \textrm{constant}.\\ \end{align*} Similarly, the conditions for which the matter velocities become imaginary for GR SL shocks are (the same for both $v_a$ and $v_b$) \begin{align*} B_2\epsilon_a &< B_1\epsilon_b \textrm{ \& } (A_2B_1 B_2w_a^{2} + b_{11}) > a_{11} \\ & \quad \textrm{or} \quad \\ B_2 \epsilon_a &> B_1 \epsilon_b \textrm{ \& } (A_2B_1 B_2w_a^{2} + b_{11}) < a_{11} \\ \end{align*} For TL shocks, the conditions come out to be \begin{align*} A_2 p_a &< A_1 p_b \textrm{ \& } a_{21} > b_{21} \\ & \quad \textrm{or} \quad \\ A_2 p_a &> A_1 p_b \textrm{ \& } a_{21} < b_{21} \end{align*} \begin{figure*} \centering \includegraphics[width =
7.0in,height=2.70in]{rp2.eps} \hspace{3.0cm} \scriptsize{(a)} \hspace{9.10cm} \scriptsize{(b)} \caption{Final states in the pressure ($p$) versus energy-density ($\epsilon$) plane are shown for a shock-induced transition from the initial state ($\epsilon_0 = \epsilon_a$, $p_0 = p_a$) for (a) SL shocks and (b) TL shocks in the relativistic case. For SL shocks, regions A and C are sub-luminal (blue), while regions B and D are super-luminal (gray). However, for TL shocks, regions B and D are sub-luminal (blue), but regions A and C are super-luminal (gray). The other regions (red) for both SL and TL shocks are unphysical (the velocities are imaginary).} \end{figure*} Starting with a given initial condition ($p_0,\epsilon_0$) we can find the regions corresponding to the different velocity conditions in the $\epsilon$-$p$ diagram. We start with the relativistic SL shocks, whose different regions are shown in fig. 2(a). The black solid line corresponds to the $p=\epsilon$ line, which is the causality limit. The region above this line is globally forbidden. We characterize the region below this line according to the luminal and imaginary velocity conditions. Regions A and C are the regions where the matter velocities are less than the speed of light. However, there are also regions below the solid black line for which the matter velocities become super-luminal (regions B and D). With the given initial condition, if we are able to reach any point in these regions as the final state, then the matter velocities break the speed-of-light limit. There are also forbidden regions (marked in red) where the matter velocities for the given initial state become imaginary. A similar region plot is shown in fig. 2(b) for relativistic TL shocks. For this case the forbidden region remains the same; however, the regions of sub- and super-luminal matter velocities are interchanged, as the matter velocities for SL and TL shocks are the inverse of each other. These regions have been mentioned earlier in the literature \citep{csernai,gorenstein}, where it is argued that the spontaneous hadronization seen in heavy-ion processes can be explained by such super-luminal velocities \citep{gorenstein,rosenhauer}. The novelty of the present work lies in the fact that we extrapolate this idea to GR shocks. The relativistic shock problem is simple in comparison to GR shocks: to study relativistic shocks we do not need any particular system, as the metric is always the same. However, this is not the case for the GR treatment, where we always need a particular system to obtain the metric potentials, which vary from system to system. In the present work our model system is the interior of a neutron star (NS). We solve the TOV equations \citep{tov} for a given initial density and obtain the pressure and density profiles as functions of radius. Knowing the pressure and density profiles, we also find the metric potentials as functions of pressure (and thereby of radius). We follow this procedure for both the hadronic and quark EoS. The region plot for GR SL shocks is shown in fig. 3(a). The causality line remains the same. The initial conditions are given by ($p_0,\epsilon_0$) and the different regions are shown with different colours. Analytical results are not possible for the GR shocks, and we have solved our equations numerically to obtain the different regions. The corresponding $v_h=1$ and $v_q=1$ lines are shown by the green and red lines, respectively. The forbidden region (a small region) is shown in red for SL shocks.
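As an aside, the TOV step that supplies the metric potentials used in these GR region plots can be sketched as follows. The sketch assumes the usual Schwarzschild-like form of the metric with potentials $\phi$ and $\Lambda$ (so that $e^{2\Lambda}=(1-2m/r)^{-1}$), uses geometrized units ($G=c=1$), and the EoS interpolant \texttt{eps\_of\_p} together with the toy polytrope standing in for it are hypothetical placeholders for the tabulated hadronic or quark EoS.
\begin{verbatim}
import numpy as np

def tov_profile(p_c, eps_of_p, r_max=20.0, dr=1e-3):
    """Integrate the TOV equations outward for a given central pressure p_c
    (geometrized units).  eps_of_p is a hypothetical interpolant of the
    tabulated EoS, eps(p).  Returns arrays of r, p(r), m(r) and the metric
    potential phi(r), assuming the usual Schwarzschild-like metric where
    e^{2 Lambda} = (1 - 2 m / r)^{-1}."""
    rs, ps, ms, phis = [dr], [p_c], [0.0], [0.0]
    while ps[-1] > 1e-12 * p_c and rs[-1] < r_max:
        r, p, m = rs[-1], ps[-1], ms[-1]
        eps = eps_of_p(p)
        dphi = (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
        dp = -(eps + p) * dphi                  # hydrostatic equilibrium
        dm = 4.0 * np.pi * r**2 * eps           # mass function
        rs.append(r + dr)
        ps.append(p + dp * dr)
        ms.append(m + dm * dr)
        phis.append(phis[-1] + dphi * dr)
    # shift phi so that it matches the exterior Schwarzschild value at the surface
    phi_surf = 0.5 * np.log(1.0 - 2.0 * ms[-1] / rs[-1])
    phis = np.array(phis) - phis[-1] + phi_surf
    return np.array(rs), np.array(ps), np.array(ms), phis

# toy polytrope standing in for the tabulated EoS (purely illustrative)
toy_eos = lambda p: (p / 10.0) ** (3.0 / 5.0)
r, p, m, phi = tov_profile(1e-3, toy_eos)
\end{verbatim}
The potentials obtained in this way enter the factors $A_{1,2}$ and $B_{1,2}$ of the jump conditions as functions of the local pressure.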
Regions A and B are the two allowed regions, where the matter velocities are either smaller than the speed of light or very close to it. Regions C and D are regions where the matter velocities are super-luminal. The region plots for GR shocks differ from the SR case in the sense that the relations between pressure and energy density are no longer linear. The regions are affected by the gravitational potentials, i.e. by the mass of the star. The corresponding TL GR shock regions are shown in fig. 3(b). However, unlike the SR case, the matter velocities of the SL and TL GR shocks are not the inverse of each other, and therefore the TL GR regions are not exactly the opposite of the SL GR shock regions. For GR TL shocks a huge region is forbidden, where the matter velocities become imaginary. Regions A and B are regions where the matter velocities become super-luminal. Regions C and D are regions where the matter velocities are less than the speed of light. Also, there is a small window between the forbidden regions where the matter velocities are finite. It should be clear that the region plots differ for different initial conditions, especially for GR shocks. \begin{figure*} \centering \includegraphics[width = 7.0in,height=2.70in]{rp1.eps} \hspace{3.60cm} \scriptsize{(a)} \hspace{8.50cm} \scriptsize{(b)} \caption{Final states in the pressure ($p$) versus energy-density ($\epsilon$) plane are shown for a shock-induced transition from the initial state ($\epsilon_0 = \epsilon_a$, $p_0 = p_a$) for (a) SL shocks and (b) TL shocks in the general relativistic case. For SL shocks, region B is sub-luminal (blue) and in region A the velocities are less than or equal to the speed of light, but regions C and D are super-luminal (gray). However, for TL shocks, regions C and D are sub-luminal (blue), but regions A, B and E are super-luminal. The other regions (red) for both SL and TL shocks are unphysical (the velocities become imaginary).} \end{figure*} \subsection{Relativistic and General Relativistic Combustion Adiabat} The region plots are important and exciting from the physics point of view in the sense that they give rise to super-luminal velocities, which can have important physical applications in connection with astrophysical shocks. However, in the present problem we focus on the GR CA and study the combustion of an NS to a QS, whose relativistic study has been carried out earlier \citep{ritam-irfan,ritam-shailendra-rana}. The relativistic CA was used to predict and constrain the maximum mass of the daughter QS which results from the combustion of the parent NS. In the present work we extend the study and include GR corrections. \begin{figure*} \centering \includegraphics[width = 3.45in,height=2.5in]{casl.eps} \includegraphics[width = 3.45in,height=2.5in]{catl.eps} \hspace{0.5cm} \scriptsize{(a)} \hspace{9.50 cm} \scriptsize{(b)} \hspace{3.0cm} \includegraphics[width = 3.45in,height=2.5in]{casr.eps} \hspace{.6cm} \\ \scriptsize{(c)} \caption{(a) General relativistic CA ($p$ versus $x$) curve (SL), (b) general relativistic CA ($p$ versus $x$) curve (TL), and (c) relativistic CA ($p$ versus $x$) curve (SL/TL), showing the HM EoS (red solid line with circles) with the corresponding burnt-state QM (orange stars). The red dot on the QM EoS indicates the maximum of the burnt QM EoS, corresponding to the orange dot on the HM EoS. At low pressure the arrows indicate the regular jump from the HM EoS to its burnt state, but as the pressure becomes higher, the arrows indicate the retracing nature of the QM curves.} \end{figure*} The initial state or the upstream quantities are the input (here the HM EoS).
The final EoS of the burnt or downstream state is also known (the QM EoS). The CA is used to calculate the corresponding state of the downstream matter for a given initial state. As the EoS are different, the upstream and downstream points lie on different curves. We can plot the curves for the HM and its corresponding QM in the $x$-$p$ plane. The initial point lies on the red HM curve and, for a given initial state, solving the CA equation we obtain a point lying on the burnt QM (orange) curve. The line connecting the initial and the final point is known as the ``Rayleigh line'' (RL), whose slope is proportional to $(\gamma_n v_n)^2$. The CA curves for SL and TL relativistic shocks are shown in fig. 4(c). Initially, there is a compression due to the shock, which means that an initial point with smaller pressure corresponds to a final point with larger pressure. As we go up the HM curve, the slope of the RL decreases; however, both the upstream and downstream curves rise. However, there is a maximum point on the burnt trajectory (or downstream curve) beyond which, if we continue along the HM curve, the downstream curve comes down and retraces its path. The retracing nature of the burnt trajectory can also be seen as we draw a Rayleigh line connecting the upstream curve and the corresponding downstream curve (represented by arrows). Also, the sign of the slope of the Rayleigh line changes after the maximum point. We plot the corresponding SL GR curve in fig. 4(a). The range of the downstream curve increases for the GR case as compared to the SR case. Also, initially the slope of the RL is larger for the GR shocks. The maximum of the pressure also has a much higher value. We also show the corresponding TL GR CA curve in fig. 4(b). The CA range for TL GR shocks is much larger; however, the maximum of the pressure remains more or less the same. The relativistic and GR curve ranges may differ, but all of them lie on the same downstream curve defined by the QM EoS. The maximum of the pressure becomes clearer if we plot the pressure as a function of number density (fig. 5). For a given HM curve the corresponding QM pressure is lowest for relativistic shocks. For the GR shocks the corresponding pressure is much higher, both for the SL and TL shocks. It is also clear that the maximum pressure for the relativistic case occurs at a much lower density, whereas for the GR case it appears at much larger densities. So far we have discussed the conversion from pure HM to pure quark matter. However, there can be a matter phase in a QS where we have both hadrons and quarks, which is termed the mixed phase \citep{glen}. Therefore, we can have a hybrid star (HYS) which has HM in the outer region, mixed phase in the intermediate region and the core of the star, and, in some massive stars, also a pure quark phase at the core. The corresponding pressure as a function of number density for the HYS is also shown in fig. 5. The maximum pressure for the HYS is significantly smaller than for the QS, indicating that the pressure difference between the NS and the HYS is not very large. \begin{figure*} \centering \includegraphics[width = 3.30in,height=2.750in]{CA.eps} \includegraphics[width = 3.30in,height=2.750in]{MM1.eps} \hspace{3.60cm} \scriptsize{(a)} \hspace{8.50cm} \scriptsize{(b)} \caption{(a) $p$ as a function of baryon density ($n$) for HM and the corresponding downstream QM and Hy.M curves, shown for the SR and GR (SL/TL) cases. The burnt-matter pressure first rises and then decreases, cutting the corresponding HM pressure curve at a particular number density.
(b) Mass-radius relations for the NS, QS and Hy.S are plotted. Each arrow indicates the maximum mass of the phase-transitioned QS (QM) and Hy.S (Hy.M) corresponding to the input NS (HM), for both the relativistic (SL/TL) and general relativistic (SL/TL) cases.} \end{figure*} It was shown earlier that the maximum pressure of the QM can be used to calculate the maximum mass of the QS which results from the combustion of a parent NS \citep{ritam-irfan}. The occurrence of the maximum mass is shown in fig. 5(b). The mass-radius diagram gives the sequence of stellar masses with their corresponding radii. It also gives the maximum mass that can be attained for a given EoS. In the figure the green solid curve gives the mass-radius sequence of the NS. The maximum mass it can reach is about 2.35 $M_\odot$, corresponding to a radius of about 12 km. The mass-radius sequence of the QM EoS is shown with the red curve, which has a maximum mass of 2.05 $M_\odot$ corresponding to a radius of about 10.7 km. However, if we assume that the quark star is obtained from the combustion of a parent NS (with the relativistic calculation), then the maximum mass of the QS is about 1.963 $M_\odot$. The parent NS has a mass of about 1.56 $M_\odot$. Therefore, such a combustion is highly unlikely, as it needs an external source of energy. However, the maximum masses of the daughter QS obtained from GR shocks are in agreement with the maximum mass of the EoS sequence (as shown in the figure). For GR SL shocks the mass difference (between the parent NS and the daughter QS) is again huge; however, for GR TL shocks the mass difference is negligible, implying a possible combustion process. However, if we now study the combustion from NS to HYS the situation is different. Again, the maximum mass of the daughter HYS is lower than that of the EoS sequence and is smallest for relativistic shocks. Relativistic shocks are unable to combust an NS and produce a HYS more massive than $1.3 M_\odot$. The situation is better for GR shocks, which can produce a HYS of mass as large as $1.9 M_\odot$. However, for SL shocks a huge amount of external energy is required to initiate such a combustion, whereas for TL shocks an NS can undergo exothermic combustion and produce a HYS. Note that there can be another scenario where an NS undergoing combustion to either a QS or a HYS becomes unstable and collapses to a black hole. \subsection{Matter velocities across the front} The matter velocities across the front are a valuable tool to understand the properties of shock-induced combustion. Combustion can be either a detonation (fast burning, where the combustion front and the shock front almost coincide) or a deflagration (slow burning). If the velocity of the burnt matter is larger than that of the unburnt matter, the phase transition corresponds to a detonation, whereas if its speed is smaller than that of the unburnt matter, the phase transition resembles a deflagration or slow combustion. This is expressed as \begin{align} v_{upstream}>v_{downstream} \Rightarrow \textrm{Deflagration} \nonumber \\ v_{upstream}<v_{downstream} \Rightarrow \textrm{Detonation} \nonumber \end{align} \begin{figure*} \centering \includegraphics[width = 3.30in,height=2.750in]{SRsl.eps} \includegraphics[width = 3.30in,height=2.750in]{SRtl.eps} \hspace{3.60cm} \scriptsize{(a)} \hspace{8.50cm} \scriptsize{(b)} \caption{The upstream ($v_a = v_h$, HM EoS, red circles) and downstream ($v_b = v_q$, QM EoS, green triangles) velocities are shown as functions of number density for (a) SL and (b) TL relativistic shocks.
For SL shocks, $v_h$ is always greater than $v_q$ at low density and both are sub-luminal, approaching the speed of light as the density becomes much higher. However, for TL shocks, $v_q$ is less than $v_h$ at low density and both are super-luminal, approaching the speed of light as the density becomes much higher.} \end{figure*} \begin{figure*} \centering \includegraphics[width = 3.30in,height=2.750in]{GRVSL.eps} \includegraphics[width = 3.30in,height=2.750in]{GRVTL.eps} \hspace{3.60cm} \scriptsize{(a)} \hspace{8.50cm} \scriptsize{(b)} \caption{The upstream ($v_a = v_h$, HM EoS, red circles) and downstream ($v_b = v_q$, QM EoS, green squares) velocities are shown as functions of number density for (a) SL and (b) TL general relativistic shocks. For SL shocks, $v_h$ becomes super-luminal at much lower densities and is always greater than $v_q$ (which remains sub-luminal). However, for TL shocks, $v_q$ is less than $v_h$ at low density but both are super-luminal, and both again tend towards super-luminal values as the density becomes much higher.} \end{figure*} \begin{figure*} \centering \includegraphics[width = 3.30in,height=2.750in]{SRslhm.eps} \includegraphics[width = 3.30in,height=2.750in]{SRtlhm.eps} \hspace{3.60cm} \scriptsize{(a)} \hspace{8.50cm} \scriptsize{(b)} \caption{The upstream ($v_a = v_h$, HM EoS, red circles) and downstream ($v_b = v_{Hy.M}$, Hy.M EoS, green triangles) velocities are shown as functions of number density for (a) SL and (b) TL relativistic shocks. For SL shocks, $v_h$ is always greater than $v_{Hy.M}$ at low density and both are sub-luminal, becoming luminal or super-luminal as the density becomes much higher. However, for TL shocks, $v_{Hy.M}$ is less than $v_h$ at low density and both are super-luminal, tending towards the speed of light as the density becomes much higher.} \end{figure*} \begin{figure*} \centering \includegraphics[width = 3.30in,height=2.750in]{GRVSLhm.eps} \includegraphics[width = 3.30in,height=2.750in]{GRVTLhm.eps} \hspace{3.60cm} \scriptsize{(a)} \hspace{8.50cm} \scriptsize{(b)} \caption{The upstream ($v_a = v_h$, HM EoS, red circles) and downstream ($v_b = v_{Hy.M}$, Hy.M EoS, green squares) velocities are shown as functions of number density for (a) SL and (b) TL general relativistic shocks. For SL shocks, $v_h$ is always greater than $v_{Hy.M}$ and both are always sub-luminal. However, for TL shocks, $v_{Hy.M}$ is much larger than $v_h$, with the Hy.M velocity being super-luminal and $v_h$ always sub-luminal.} \end{figure*} We show the matter velocities for relativistic SL and TL shocks in fig. 6 for the combustion from HM to QM. The matter velocity of the hadronic matter is shown by the red curve, whereas the quark matter velocity is shown in green. For the SL shocks we find that at low densities the hadronic matter velocity is much higher than the quark matter velocity, indicating a deflagration-type combustion. As the density increases their difference decreases, and at a density of $0.54$ fm$^{-3}$ the velocities become equal and go to zero, indicating no combustion process. This indicates that at such densities the shock-induced combustion from HM to QM is not possible. At about $0.58$ fm$^{-3}$ the matter velocities again become non-zero and attain values close to the speed of light. The properties of the matter velocities for relativistic TL shocks are just the inverse of those of the SL shocks. Therefore, at low densities the quark matter velocities are greater than the hadronic matter velocities, but both are greater than the speed of light, indicating an almost instantaneous combustion.
At higher densities they attain velocities close to the speed of light. However, the situation is quite different for GR shocks, as shown in fig. 7. Although at low density $v_q$ is less than $v_h$ (just as for the relativistic shocks), the value of $v_h$ at low densities is greater than the speed of light. This situation continues up to very large density values of about $0.8$ fm$^{-3}$. Beyond that the matter velocities become zero, indicating that the combustion process is not sustainable at such densities. At much higher densities (which are usually not realized even in NS cores) the matter velocities can again be finite and even greater than the speed of light. For the TL GR shocks, initially $v_q$ is greater than $v_h$ and both are greater than the speed of light, indicating an instantaneous detonation. This continues up to $0.75$ fm$^{-3}$, where the velocities become infinitely large. Beyond that the velocities become zero. However, at about $0.78$ fm$^{-3}$ they become finite and initially less than the speed of light, but at very high densities they again become greater than the speed of light. Similar velocity plots for the relativistic and GR shock combustion from NS to HYS are shown in figs. 8 \& 9. The sub-luminal velocities for relativistic SL shocks appear only at lower densities, giving a deflagration type of combustion. At higher densities the matter velocities are either luminal or super-luminal. As expected, the matter velocities are just the opposite for relativistic TL shocks. The scenario is quite different for GR shocks. For GR SL shocks the velocities are sub-luminal over quite an extended range, giving a deflagration combustion. Beyond that the velocities are imaginary, indicating no combustion for massive stars. For the TL shocks the matter velocities are imaginary at low densities, whereas at high densities the quark matter velocity is super-luminal and the hadronic matter velocity is sub-luminal, indicating a detonation type of combustion only for very massive stars. It should be mentioned that although the matter velocities are super-luminal over quite a range of densities, the front velocity remains sub-luminal for almost the entire density range. \section{Summary and Discussion} The study of shock waves is important for understanding many physical scenarios and particle acceleration in astrophysics. One needs relativistic calculations to understand physical phenomena in such regimes. The relativistic shock conditions were derived by Taub long ago \citep{taub}; however, he considered only SL shocks. Csernai \citep{csernai} introduced the idea of TL shocks, which can also in principle exist. Although a few GR shock calculations exist in the literature, a detailed analysis of SL and TL shocks was lacking. In the present work we present a detailed analysis of GR SL and TL shocks and compare them with the relativistic case to observe the differences. We study the GR shocks in the NS system, where we assume that the shock-induced transition deconfines HM to QM. We first derive the RH conditions for both the SL and TL GR shocks and from there derive the CA equations. However, unlike the SR shocks, where the CA equation was the same for the SL and TL cases, the CA equations for GR SL and TL shocks are different because of the occurrence of the metric potentials. Also, the matter velocities for relativistic SL and TL shocks were the inverse of each other, which is not the case for GR shocks.
We also find that if the gravitational potentials become either zero or equal on the two sides of the shock front, the GR conditions reduce to the SR conditions. Assuming an initial state for the HM, we solve the CA equation to obtain the final state of the QM in the star. The matter velocities can then be calculated from the thermodynamic variables of the initial and final states. With a given initial state we can study the matter velocities to check for which final states the matter velocities become imaginary or break the speed-of-light limit. It is interesting to find that, even while ensuring the causality condition, there are certain final states (for a given initial state) where the matter velocities can be super-luminal. For the relativistic shocks, given an initial state, the regions with velocities greater than the speed of light are just the opposite for SL and TL shocks in the $p$-$\epsilon$ plane. However, the picture for GR shocks is quite different. For a given initial state, the region of imaginary velocities and the region of velocities greater than the speed of light are also functions of the mass of the star (which determines the gravitational potentials). Also, the regions for SL and TL shocks are not complementary. The occurrence of regions where the matter velocities become greater than the speed of light is of great importance because, if we are able to reach such final states, we can have almost super-luminal combustion/shocks, which can have important astrophysical implications. The GR shock calculation is extended to study the combustion of HM to QM in an NS and to calculate the maximum mass of the daughter QS which results from the combustion of the parent NS. Previous SR calculations indicated that the maximum mass of the phase-transitioned quark star is much less than the actual maximum mass of the quark sequence; however, the GR calculation shows that the maximum mass of the combusted QS is in agreement with the maximum mass of the QS EoS sequence. The GR calculation shows that combustion from NS to QS is more viable with TL shocks. The combustion of a massive NS to a HYS is almost impossible in the relativistic calculation; however, in the GR calculation a massive NS can combust to a massive HYS. The velocities of the matter phases on either side of the front can be examined to determine the combustion process, and we find that for both the SR and GR SL shocks the combustion at low density involves a deflagration process. For the TL shocks at low density the velocities are higher than the speed of light and can signify super-luminal combustion. It should be mentioned that the calculation is done in the rest frame of the front. Although the matter velocities can be greater than the speed of light, the front velocity in most cases is sub-luminal. We are in the process of obtaining the dynamical equations for SL and TL GR shocks, which would give us a clearer picture of the shock propagation. However, the present analysis shows that the TL shock velocities are in principle greater than the SL shock velocities. Most shock calculations in astrophysical scenarios are studied with SL shocks, and therefore it will be interesting to revisit them for TL shocks. The GR shocks depend on the mass of the system, and this is another aspect that can be studied in more detail, where large masses (like black holes) can be considered. \section{Acknowledgments} The authors are grateful to the Indian Institute of Science Education and Research Bhopal for providing all the research and infrastructure facilities.
\section{Introduction} % Asymmetric tensor fields appear in many scientific and engineering applications. In fluid dynamics, the gradient of the velocity is an asymmetric tensor field that encodes fundamental behaviors such as rotation, angular deformation (also known as pure shear), and volumetric deformation. Similar behaviors are encoded in the deformation gradient tensor in solid mechanics. While these types of motions can be understood by visualizing the vector field itself, tensor field visualization provides a more direct visual representation~\cite{Chen:11}. % % Existing visualization techniques for 3D asymmetric tensor fields have focused on three different approaches. % First, the tensor field is analyzed locally, with a focus on designing proper glyph representations for tensors~\cite{gerrits:2016:glyphs}. % Second, the topological structures of the symmetric part of the tensor field are extracted and visualized~\cite{Palacios:16}. % However, asymmetric tensors can have complex eigenvalues which lead to features and structures that are not well preserved by the symmetric part of the tensor field (Figure~\ref{fig:comp_Sym}). % Third, researchers have attempted to understand 3D asymmetric tensor fields by performing 2D analysis on the projection of the tensor field on some probe planes or surfaces. % Unfortunately, the region where the projected tensors have complex eigenvalues does not usually coincide with the complex domain of the original 3D tensor field. % These difficulties highlight a fundamental need to perform topological analysis and visualization {\em directly} on 3D asymmetric tensor fields, rather than on their 2D projections or their symmetric part. % % \begin{figure}[!t] \centering \begin{overpic}[width={0.95\columnwidth}]{images/comp_Sym} \end{overpic} \caption{The rich structure in a 3D asymmetric tensor field, such as that of the velocity gradient tensor of the Rayleigh-B{\'e}nard flow (left), cannot be adequately captured when only visualizing the symmetric part of the tensor field (right). } \label{fig:comp_Sym} \end{figure} % In this paper, we introduce the notion of {\em tensor mode} for 3D asymmetric tensors, which leads to a model that we refer to as the {\em eigenvalue space}. % Each point in the eigenvalue space gives rise to a levelset surface that we call a {\em mode surface}. % We have identified seven special modes, which give rise to six topological feature surfaces and one feature curve: {\em linear and planar degenerate surfaces} (Figure~\ref{fig:teaser} (b)), {\em real and complex neutral surfaces} (Figure~\ref{fig:teaser} (c)), {\em linear and planar balanced surfaces} (Figure~\ref{fig:teaser} (d)), and {\em triple degenerate curves} (the black curve in Figure~\ref{fig:teaser} (b-d)). % In particular, unlike 3D {\em symmetric} tensor fields, triple degenerate points are stable features in 3D {\em asymmetric} tensor fields and form curves (triple degenerate curves). % We also observe that, unlike 2D asymmetric tensor fields, there are two different measures for the relative strengths of rotation and shear in the tensor, emphasizing the significance of the degenerate surface and the balanced surface. % These differences highlight the richer structures in 3D asymmetric tensor fields. In addition, we define some non-topological feature surfaces such as the levelsets of the tensor magnitude ({\em magnitude surfaces}) and isotropicity ({\em isotropicity surfaces}).
To better understand the eigenvector behaviors in the asymmetric field, we develop an {\em augmented hyperstreamline} visualization method. When traveling along a hyperstreamline following one eigenvector field, we also visualize the other eigenvectors in the real domain and the dual-eigenvectors in the complex domain along the hyperstreamline. The hyperstreamline is shown as a tree stem, while the other eigenvectors in the real domain are visualized as thorns attached to the stem. Similarly, the dual-eigenvectors in the complex domain are visualized as leaves attached to the stem. % This can be particularly useful for inspecting the eigenvector behavior when crossing special mode surfaces such as the neutral surface and the degenerate surface. % % For piecewise linear tensor fields defined on tetrahedral meshes, three of the aforementioned feature surfaces, namely the balanced surfaces, magnitude surfaces, and isotropicity surfaces, are quadratic inside each tetrahedron. % For such surfaces, we provide a quadratic surface extraction method that leads to a seamless extracted surface. % For feature surfaces of higher degree, such as the neutral surface, the degenerate surface, and other mode surfaces, we employ the A-patches method~\cite{luk:2009}. % Finally, we extract the triple degenerate curve by finding the intersection of the balanced surface and the neutral surface. We demonstrate the utility of our approach by applying our tensor field analysis and visualization to solid mechanics and fluid dynamics applications and providing physical interpretation. % % \section{Related Work}\label{sec:related_work} % Tensor field visualization has advanced considerably in the last decades~\cite{EPFL-BOOK-138668,Kratz:13}. Topological analysis of tensor fields has found many applications in understanding solid and fluid mechanics data. % Existing topology-driven tensor field visualization has focused on symmetric tensors in two and three dimensions. Tensor field topology is first studied by Delmarcelle and Hesselink~\cite{delmarcelle:visualizing}, who extend the notions of singularities and separatrices from vector fields to 2D symmetric tensor fields. % % The topological features of 3D symmetric tensor fields are first studied by Hesselink et al.~\cite{hesselink:topology}, who define degenerate points as those where the tensor field has an eigenvalue with a multiplicity of {\em three}, i.e. triple degeneracy. Zheng and Pang~\cite{Zheng:04} point out that triple degenerate points are structurally unstable. That is, under an arbitrarily small perturbation to the tensor field such points disappear. Instead, Zheng and Pang define the topology of a 3D symmetric tensor field as the collection of {\em double degenerate points}, where the tensor field has two eigenvalues, one of which is repeating (multiplicity of two). Such points form curves, i.e. degenerate curves. Since then, a number of techniques have been developed to extract degenerate curves~\cite{Zheng:05a,Tricoche:08,Palacios:16,Roy:18}. % In particular, Tricoche et al.~\cite{Tricoche:08} point out that the degenerate curves are a subset of the ridge and valley lines of {\em tensor mode}, a tensor invariant whose name originated from mechanics~\cite{CRISCIONE:00}. With this formulation, Tricoche et al.~\cite{Tricoche:08} introduce the concept of tensor mode to the Visualization community and the idea of using tensor mode to define and extract topological structures.
% More recently, a number of feature surfaces have been introduced for 3D symmetric tensor fields, such as neutral surfaces and mode surfaces~\cite{Palacios:16}, extremal surfaces~\cite{Zobel2017}, and fiber surfaces~\cite{raith2019tensor}. % The visualization of asymmetric tensor fields started more recently, and it has focused on 2D. Zheng and Pang~\cite{Zheng:05c} extend the topological analysis from 2D symmetric tensor fields to 2D {\em asymmetric} tensor fields with the introduction of {\em dual-eigenvectors} in the {\em complex domains} where the tensor field has complex eigenvalues. Zhang et al.~\cite{Zhang:09} provide a rigorous analysis of 2D asymmetric tensor fields with the introduction of the notion of {\em eigenvalue manifold}. Chen et al.~\cite{Chen:11} introduce a visualization in which glyphs and hyperstreamlines are both used in visualizing asymmetric tensor fields. Lin et al.~\cite{Lin:12} introduce the notions of {\em eigenvalue graphs} and {\em eigenvector graphs} for 2D asymmetric tensor fields, which are extended to surfaces and a multi-scale framework by Khan et al.~\cite{Khan:20}. % % Despite the advances in 3D symmetric tensor fields and 2D asymmetric tensor fields, there has been relatively little work in the topological analysis of 3D asymmetric tensor fields. Visualization research on such fields is usually focused on glyph design~\cite{gerrits:2016:glyphs}. In this paper, we provide the results of our initial investigation of the topological analysis for 3D asymmetric tensor fields. % % \begin{figure*}[!t] \centering% \begin{overpic}[width={\textwidth}]{images/crossing_case.png} \put(65, -8) {(a)} \put(30, 20) {real} \put(30, 10) {domain} \put(30, 60) {outer} \put(30, 50) {complex} \put(30, 40) {domain} \put(30, 95) {inner} \put(30, 85) {complex} \put(30, 75) {domain} \put(78, 30) {$w_1$} \put(92, 15) {$w_2$} \put(200, -8) {(b)} \put(330, -8) {(c)} \put(460, -8) {(d)} \end{overpic} \caption{ % We visualize a hyperstreamline following an eigenvector field as a tree stem, with the other eigenvectors in the real domain as thorns, and the dual-eigenvectors in the complex domain as leaves (a). A segment of the hyperstreamline is textured with a wood texture inside the real domain and given a smooth appearance in the inner complex domain. The segment inside the outer complex domain is a composition of the two appearances. % When crossing the degenerate surface (b), the dominant eigenvector field (tree stem in this case) in the real domain changes into the real eigenvector field in the complex domain. % Notice that the repeating eigenvector at the crossing point is the limit of the other two eigenvectors (thorns) from the real domain and the major dual-eigenvector (long axes of the leaves) from the complex domain. % When crossing the triple degenerate curve (c), all three eigenvectors from the real domain converge to the only eigenvector at the triple degenerate point. % Additionally, the dominant eigenvector field is discontinuous at the real neutral surface (d). % } \label{fig:degenerate_crossing} \end{figure*} % \section{Tensor Background} \label{sec:math_background} % Before presenting our analysis, we first review relevant mathematical background on 3D asymmetric tensor fields. % We start with 3D asymmetric tensors, which, under a given basis, can be represented as $3\times 3$ matrices. % A $3 \times 3$ tensor $T$ has a {\em characteristic polynomial} $f(\lambda)=\lambda^3+a_2\lambda^2 + a_1\lambda+a_0$ such that $f(T)=0$.
The {\em trace} of $T$ is $\trace(T)=-a_{2}$. When the trace is zero, the tensor $T$ is referred to as being {\em traceless}. % The {\em determinant} of $T$ is $\det(T)= -a_0$, and the {\em minor} is $\minor(T)=a_1$. % Additionally, the set of all $3 \times 3$ tensors forms a $9$-dimensional linear space, on which the following inner product of two tensors $R$ and $S$ can be introduced~\cite{spence2000elementary}: $\langle R, S \rangle = \sum_{i=1}^{3}\sum_{j=1}^{3}R_{ij}S_{ij} = \trace(S^T R)$. % With this product, one can define the {\em magnitude} of a tensor $T$ as $||T||=\sqrt{\langle T, T \rangle}$. % % The roots of the characteristic polynomial $f(\lambda)$ are the eigenvalues of $T$. There are either three mutually distinct real-valued eigenvalues, one real-valued eigenvalue and two complex-valued conjugate eigenvalues, two real-valued eigenvalues with one of them having a multiplicity of two, or one real-valued eigenvalue with a multiplicity of three. % When all three eigenvalues are real and mutually distinct, we refer to the largest, middle, and smallest eigenvalues as the {\em major}, {\em medium}, and {\em minor eigenvalues}, respectively. % When there is only one real eigenvalue, it is referred to as the {\em real eigenvalue} of $T$. % When there are two eigenvalues, we refer to the eigenvalue with a multiplicity of two as the {\em repeating eigenvalue} and the other eigenvalue as the {\em dominant eigenvalue}. The notions of {\em major eigenvectors}, {\em medium eigenvectors}, {\em minor eigenvectors}, {\em real eigenvectors}, {\em repeating eigenvectors}, and {\em dominant eigenvectors} can be defined as the eigenvectors corresponding to the respective eigenvalues. % % A tensor $T$ is {\em symmetric} if it is equal to its transpose; otherwise, it is {\em asymmetric}. A special case of asymmetric tensors is {\em anti-symmetric tensors}, which are equal to their negated transpose. The eigenvalues of a symmetric tensor are guaranteed to be real-valued, while the eigenvalues of an asymmetric tensor can be either real-valued or complex-valued. % Furthermore, eigenvectors belonging to different eigenvalues form an orthonormal basis for symmetric tensors. % For asymmetric tensors, even when the eigenvalues are real-valued, their respective eigenvectors are in general not mutually perpendicular. % A tensor field is a continuous tensor-valued function defined in the domain. A hyperstreamline is a curve that is tangent to an eigenvector field everywhere along its path. For example, a {\em dominant hyperstreamline} follows the dominant eigenvector field, while a {\em real hyperstreamline} follows the real eigenvector field. \section{Analysis of 3D Asymmetric Tensor Fields} \label{sec:eigenvalue_manifold} % In this section, we describe our analysis of 3D asymmetric tensor fields. % A $3 \times 3$ asymmetric tensor $T$ can be uniquely decomposed as % \begin{equation} T = D+A, \label{eq:decomposition} \end{equation} % where $D=\frac{\trace(T)}{3}\mathbb{I}$ is a multiple of the identity matrix $\mathbb{I}$ and $A=T-D$ is a traceless tensor that is referred to as the {\em deviator} of $T$. % Note that $T$ and $A$ have the same set of eigenvectors, i.e. the same anisotropy. % Therefore, we begin with the analysis of 3D traceless asymmetric tensors in the following subsections.
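To fix the sign conventions used above, the following small numpy sketch (the helper name and the example tensor are ours and purely illustrative) computes the trace, minor, determinant, and magnitude of a $3\times 3$ tensor and reports whether all three eigenvalues are real or whether a complex-conjugate pair is present.
\begin{verbatim}
import numpy as np

def invariants(T):
    """Basic invariants of a 3x3 (possibly asymmetric) tensor T, following
    the conventions above: trace(T) = -a2, minor(T) = a1, det(T) = -a0,
    and the magnitude ||T|| = sqrt(<T, T>).  The example tensor below is
    arbitrary."""
    trace = np.trace(T)
    det = np.linalg.det(T)
    # minor(T) = a1 = sum of the principal 2x2 minors
    minor = 0.5 * (trace**2 - np.trace(T @ T))
    magnitude = np.sqrt(np.sum(T * T))
    eigvals = np.linalg.eigvals(T)
    n_real = np.sum(np.abs(eigvals.imag) < 1e-12)
    structure = 'all real' if n_real == 3 else 'one real + complex pair'
    return dict(trace=trace, minor=minor, det=det,
                magnitude=magnitude, eigvals=eigvals, structure=structure)

T = np.array([[0.2, -1.1, 0.4],
              [0.9,  0.3, 0.0],
              [0.1,  0.2, -0.5]])
print(invariants(T))
\end{verbatim}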
% \subsection{Dual-Eigenvectors} % A traceless asymmetric tensor with complex eigenvalues has a real Schur form~\cite{Aldhaheri:89} of $\begin{pmatrix} a & -c & \quad\quad d \\ c & \quad b & \quad\quad e \\ 0 & \quad 0 & -a-b \end{pmatrix}$ % where $(a-b)^2<4c^2$ under some orthonormal basis $\langle v_1, v_2, v_3 \rangle$. Such a tensor $T$ has one real eigenvalue $\lambda_3=-a-b$ and two complex eigenvalues $\lambda_{1,2}=\frac{(a+b)\pm \sqrt{4c^2-(a-b)^2}i}{2}$. % Note that the eigenvectors corresponding to the complex eigenvalues are also complex-valued. % We extend the notion of {\em dual-eigenvectors} from 2D asymmetric tensor fields~\cite{Zheng:05c} to 3D. % In the plane spanned by $\langle v_1, v_2 \rangle$, the projection of $T$ has the form $\begin{pmatrix} a & -c \\ c & \quad b \end{pmatrix}$, whose major and minor dual-eigenvectors are well-defined. % We refer to the dual-eigenvectors of the projection tensor as the dual-eigenvectors of $T$. % \subsection{Degenerate Surface} Given a 3D asymmetric tensor field, the set of points in the tensor field with three mutually distinct real eigenvalues is referred to as the {\em real domain} of the field, while the set of points with one real eigenvalue and two complex conjugate eigenvalues is referred to as the {\em complex domain} of the field. The boundary between the real domain and the complex domain consists of points where the tensor has a real-valued eigenvalue with a multiplicity of at least two. We refer to such a boundary point as a {\em degenerate point}. % The real Schur form for degenerate traceless tensors is expressed as $\begin{pmatrix} a & c & \quad d \\ 0 & a & \quad e \\ 0 & 0 & -2a \end{pmatrix}.$ % When $a=0$, $T$ has one real eigenvalue $0$ with a multiplicity of three. % It is therefore referred to as a {\em triple degenerate tensor}. % Otherwise, $T$ has one real eigenvalue $a$ with a multiplicity of two and another real eigenvalue $-2a$. In this case, $T$ is a {\em double degenerate tensor}. % Furthermore, for a double degenerate tensor, the $2\times 2$ sub-block corresponds to a plane, and the projection of the tensor onto the plane is a 2D degenerate tensor. % % In general, a traceless tensor is degenerate if and only if its {\em discriminant} $\Delta(T)=0$ where $\Delta(T)=-27\det(T)^2-4\minor(T)^3$. % Note that the discriminant $\Delta$ can be negative for asymmetric tensors. % Consequently, the set of degenerate points is co-dimension one and forms a surface which we refer to as the {\em degenerate surface}. Additionally, the set of triple degenerate tensors satisfies one additional constraint, $a=0$. Therefore, this set of tensors forms curves, i.e. {\em triple degenerate curves}. % Notice that in 3D symmetric tensor fields, the complex domain is empty, triple degenerate tensors are structurally unstable, and double degenerate tensors form curves. % Contrasting these properties with the properties of 3D asymmetric tensor fields suggests that features in a 3D asymmetric tensor field cannot be properly represented by the features in its symmetric part~\cite{Palacios:16}. % % We wish to understand the eigenvector behavior at the degenerate surface. % For this, we travel along a dominant hyperstreamline towards the degenerate surface as shown in Figure~\ref{fig:degenerate_crossing}. % Since there are two more eigenvectors in the real domain and two dual-eigenvectors in the complex domain, we develop a visualization metaphor in which the hyperstreamline is the stem of a plant to which thorns and leaves can be attached.
% Along the stem, the other eigenvectors are represented as thorns and the dual-eigenvectors as leaves (Figure~\ref{fig:degenerate_crossing} (a)). We refer to a hyperstreamline with thorns and leaves as an {\em augmented hyperstreamline}. % Notice that when traveling along the dominant hyperstreamline towards the degenerate surface from the real domain (Figure~\ref{fig:degenerate_crossing} (b)), the other two eigenvectors converge and become the same at the degenerate surface, which is the repeating eigenvector. % On the other hand, when traveling from the complex domain towards the degenerate surface, the eccentricities of the leaves increase towards one (the ellipse becomes a thin line). % The major dual-eigenvectors converge to the repeating eigenvector at the degenerate surface. % It is also possible to cross the triple degenerate curve. % In this case (Figure~\ref{fig:degenerate_crossing} (c)), all three eigenvectors in the real domain converge to the only eigenvector at the triple degenerate curve (the three stems become tangent at their common intersection point). % On the other side, the real eigenvector from the complex domain also converges to the same eigenvector at the triple degenerate curve. % These behaviors at the degenerate surface and triple degenerate curve signify their topological importance. % % \subsection{Neutral Surface} % In the real domain, a 3D asymmetric traceless tensor $T$ has three mutually distinct real eigenvalues (i.e. $\lambda_1>\lambda_2>\lambda_3$) which sum to zero. There are three cases: % \begin{inparaenum}[i)] \item {\em linear}: $\lambda_2<0$ where the major eigenvalue $\lambda_1$ is the {\em dominant eigenvalue}, \item {\em planar}: $\lambda_2>0$ where the minor eigenvalue $\lambda_3$ is the dominant eigenvalue, and \item {\em neutral}: $\lambda_2=0$ where the dominant eigenvalue is not well-defined, since the major eigenvalue and minor eigenvalue have an equal absolute value but opposite signs. \end{inparaenum} Similarly, we can classify degenerate traceless tensors as being {\em linear}, {\em planar}, or {\em neutral} if the repeating eigenvalue (corresponding to $\lambda_2$ in the real domain) is negative, positive, or zero, respectively. % Note that the set of neutral degenerate tensors is exactly the set of triple degenerate tensors. Furthermore, the dominant eigenvalue of a degenerate tensor is positive (linear), negative (planar), and not well-defined (neutral). % % In the complex domain, we also classify tensors in a similar fashion. % Such tensors have only one real eigenvalue, which is the dominant eigenvalue. We refer to such a tensor as linear, planar, or neutral if the real eigenvalue is positive, negative, or zero, respectively. % Note that this classification of linearity/planarity/neutrality is consistent with that for the real domain and degenerate surface. % That is, when travelling along a path from the real domain to the complex domain without ever reaching any neutral tensor, the linearity/planarity does not change. % % The real Schur form of real neutral tensors is $\begin{pmatrix} a & \quad c & d \\ 0 & -a & e \\ 0 & \quad 0 & 0 \end{pmatrix}.$ % The projection of the tensor onto the plane spanned by the major and minor eigenvectors is traceless and has two real eigenvalues $\pm a$. 
% On the other hand, the real Schur form of complex neutral tensors is $\begin{pmatrix} a & -c & d \\ c & -a & e \\ 0 & \quad 0 & 0 \end{pmatrix}.$ % The projection of such a tensor onto the plane spanned by the dual-eigenvectors is also traceless and has a pair of conjugate complex eigenvalues. % % The collection of real neutral tensors, triple degenerate tensors, and complex neutral tensors forms a surface which we refer to as the {\em neutral surface}. It separates the domain of the tensor field into the {\em linear domain} and the {\em planar domain}. Furthermore, the neutral surface is characterized by $\det(T)=0$. % When one travels from the linear domain into the planar domain through the real neutral surface, the dominant eigenvalue (and eigenvector) switches from the major eigenvalue (and eigenvector) to the minor eigenvalue (and eigenvector). Notice the sudden change in the hyperstreamline direction in Figure~\ref{fig:degenerate_crossing} (d). % Furthermore, the degenerate surface intersects the neutral surface at exactly the triple degenerate curve. % \subsection{Balanced Surface} % A traceless asymmetric tensor $T$ can be uniquely decomposed as the sum of a symmetric tensor $S$ and an anti-symmetric tensor $R$. When $T$ is the velocity gradient of an incompressible flow, $S$ represents the rate of angular deformation and $R$ the rate of rotation in the fluid. % We define the strength of rotation as $\tau_R=\sqrt{\langle R, R \rangle}$ and the strength of the angular deformation (shear) as $\tau_S=\sqrt{\langle S, S \rangle}$. % A tensor $T$ is {\em shear-dominant} if $\tau_S>\tau_R$. % On the other hand, $T$ is {\em rotation-dominant} if $\tau_S<\tau_R$. When $\tau_S=\tau_R$, we refer to $T$ as a {\em balanced tensor}. % % In the 2D case, a $2\times 2$ tensor $T$ has complex eigenvalues if and only if $\tau_R>\tau_S$. % Otherwise, it has only real eigenvalues. % That is, the complex domain is identical to the rotation-dominant domain, and the real domain is identical to the shear-dominant domain. % % However, the situation is different in 3D. It can be verified that a tensor $T$ is balanced, i.e. $\tau_R=\tau_S$, if and only if $\minor(T)=0$. % The real Schur form for a balanced tensor $T$ is $\begin{pmatrix} a & -c & \quad\quad d \\ c & \quad b & \quad\quad e \\ 0 & \quad 0 & -a-b \end{pmatrix}$ where $c^2=a^2+ab+b^2$. % In this case, $|a|$, $|b|$, and $|c|$ form the side lengths of a triangle with the angle between sides of $|a|$ and $|b|$ being $120^\circ$ if $ab>0$ and $60^\circ$ if $ab<0$. Furthermore, $T$ is linear if $a+b<0$ and planar if $a+b>0$. % Note that a balanced tensor $T$ must have complex eigenvalues except when $a=b=0$, i.e. the tensor is triple degenerate. % Consequently, the set of balanced tensors is not the same as the set of degenerate tensors. That is, the complex domain is not the same as the rotation-dominant domain for 3D asymmetric tensor fields. % % The balanced surface divides the complex domain into % \begin{inparaenum}[i)] \item {\em inner complex domain} (dominated by rotation), and \item {\em outer complex domain} (dominated by shear). \end{inparaenum} % This signifies the importance of the balanced surface as a feature in the tensor field. % Moreover, the difference between the balanced surface and the degenerate surface shows the richer structure in 3D asymmetric tensor fields when it comes to understanding the interaction between rotation and shearing.
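The equivalence between $\tau_R=\tau_S$ and $\minor(T)=0$ for traceless tensors can be checked numerically. The following numpy sketch (the example tensor and the helper are ours and purely illustrative) forms the deviator, splits it into its symmetric and anti-symmetric parts, and returns $\tau_S$, $\tau_R$, and the minor of the deviator.
\begin{verbatim}
import numpy as np

def rotation_shear_strengths(T):
    """Split the deviator of T into its symmetric part S and anti-symmetric
    part R and return the shear strength tau_S = ||S||, the rotation
    strength tau_R = ||R||, and minor of the deviator.  For a traceless
    tensor, tau_R = tau_S exactly when the minor vanishes (balanced tensor)."""
    A = T - np.trace(T) / 3.0 * np.eye(3)       # deviator of T
    S = 0.5 * (A + A.T)
    R = 0.5 * (A - A.T)
    tau_S = np.sqrt(np.sum(S * S))
    tau_R = np.sqrt(np.sum(R * R))
    minor = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
    return tau_S, tau_R, minor

# arbitrary example: shear-dominant if tau_S > tau_R, rotation-dominant otherwise
T = np.array([[0.1,  0.8, 0.0],
              [-0.4, 0.2, 0.3],
              [0.0,  0.5, -0.3]])
print(rotation_shear_strengths(T))
\end{verbatim}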
% Another important observation is that the neutral surface, the degenerate surface, and the balanced surface intersect exactly at the triple degenerate curve, signifying the latter's topological importance. % \subsection{Tensor Mode} % An important invariant for 3D symmetric tensor fields is their mode~\cite{Palacios:16}, which is intricately connected to the topology of the fields. % For example, the mode is zero at precisely the neutral surface and $\pm 1$ at precisely the degenerate curves. % All possible mode values for 3D symmetric tensors are between $-1$ and $1$. Extending the notion of tensor modes from the symmetric case, we define the mode for 3D asymmetric tensors in a way that is motivated by the formulas for the eigenvalues of the tensor. % Let $T$ be a traceless tensor. % When the discriminant $\Delta(T) \ge 0$, $T$ has three real eigenvalues: % \begin{equation} \label{eq:real_domain} \begin{split} \lambda_1 &= -2\sqrt{-\frac{p}{3}} \cos \left(\frac{1}{3}\arccos\left(\frac{3q}{2p}\sqrt{\frac{-3}{p}}\right)+\frac{2\pi}{3}\right) \\ \lambda_2 &= -2\sqrt{-\frac{p}{3}} \cos \left(\frac{1}{3}\arccos\left(\frac{3q}{2p}\sqrt{\frac{-3}{p}}\right)\right) \\ \lambda_3 &= -2\sqrt{-\frac{p}{3}} \cos \left(\frac{1}{3}\arccos\left(\frac{3q}{2p}\sqrt{\frac{-3}{p}}\right)-\frac{2\pi}{3}\right), \\ \end{split} \end{equation} % where $p=\minor(T)$ and $q=\det(T)$. In particular, when $\Delta(T)=0$, two of the eigenvalues are the same and $T$ is degenerate. % When $\Delta(T)<0$, $T$ is in the complex domain and the real eigenvalue can be expressed as % \begin{equation} \lambda_1 = -\left\{ \begin{array}{ll} 2\frac{|q|}{q}\sqrt{-\frac{p}{3}}\cosh\left( \frac{1}{3} \arcosh\left(\frac{-3|q|}{2p}\sqrt{\frac{-3}{p}} \right) \right) & \textrm{ if $p<0$} \\[1em] -\sqrt[3]{q} & \textrm{ if $p=0$} \\[0.5em] 2\sqrt{\frac{p}{3}} \sinh\left(\frac{1}{3} \arsinh \left(\frac{3q}{2p}\sqrt{\frac{3}{p}} \right) \right) & \textrm{ if $p>0$} \end{array} \right. \label{eq:complex_domain} \end{equation} % From these formulas, we can see that the eigenvalues of $T$ are the result of the interplay among $p$, $q$, and $\Delta$. Therefore, we define the mode of $T$ as the triple $(\mu, \sign(p), \sign(q))$ where $\mu = \frac{3|q|}{2|p|}\sqrt{\frac{3}{|p|}}$. Note that in the real domain, $\mu \in [0,1]$ and $p<0$. % In the complex domain, $\mu\in (1,\infty)$ when $p<0$. % Once $p > 0$, $T$ must be in the complex domain and $\mu \in [0,\infty)$. % % Given a particular mode $(\mu, \sign(p), \sign(q))$ for $\mu \in [0,\infty]$, the set of points in the tensor field of this mode forms a surface which we refer to as the {\em mode surface} of mode $(\mu, \sign(p), \sign(q))$. % Note that the feature surfaces that we have defined earlier have unique tensor modes. % More specifically, % \begin{itemize} \item {\em Real neutral surface}: $(\mu=0, \sign(p)=\text{``-''}, \sign(q)=\text{``0''})$, \item {\em Complex neutral surface}: $(\mu=0, \sign(p)=\text{``+''}, \sign(q)=\text{``0''})$, \item {\em Linear degenerate surface}: $(\mu=1, \sign(p)=\text{``-''}, \sign(q)=\text{``+''})$, \item {\em Planar degenerate surface}: $(\mu=1, \sign(p)=\text{``-''}, \sign(q)=\text{``-''})$, \item {\em Linear balanced surface}: $(\mu=\infty, \sign(p)=\text{``0''}, \sign(q)=\text{``+''})$, \item {\em Planar balanced surface}: $(\mu=\infty, \sign(p)=\text{``0''}, \sign(q)=\text{``-''})$. \end{itemize} % Another special mode is when $\sign(p)=\text{``0''}$ and $\sign(q)=\text{``0''}$; in this case, $\mu$ is undefined.
% The set of points with this mode is precisely the triple degenerate curve. % % \begin{figure}[!b] \centering% \includegraphics[width={\columnwidth}]{images/eigenvalue_manifold.png} \caption{ The eigenvalue space contains seven special tensors based on the tensor mode: real neutral tensors, complex neutral tensors, linear degenerate tensors, planar degenerate tensors, linear balanced tensors, planar balanced tensors, and triple degenerate tensors. % When the tensor represents the velocity gradient of some incompressible flow, the corresponding flow patterns are illustrated next to the tensor. % Notice that the flow pattern inside the 2D plane is simple shear when the tensor is degenerate (linear, planar, and triple). % For neutral tensors, the projection is either a saddle (real neutral) or an elliptical pattern (complex neutral). % For linear tensors, the flow leaves the plane in the third dimension, while for planar tensors, the flow enters the plane in the third dimension. % } \label{fig:eigenvalue_manifold} \end{figure} % % \subsection{Eigenvalue Space} % The definition of tensor mode allows us to construct a model for all 3D asymmetric tensors, which we refer to as the {\em eigenvalue space} for 3D asymmetric tensors. % We first consider the set of all traceless tensors, which we map to the border of a hexagon and its center as shown in Figure~\ref{fig:eigenvalue_manifold}. % Each point on the border of the hexagon represents a unique tensor mode. % Starting from the top and continuing counterclockwise, we encounter special modes in the order of real neutral tensors, linear degenerate tensors, linear balanced tensors, complex neutral tensors, planar balanced tensors, and planar degenerate tensors, as shown in Figure~\ref{fig:eigenvalue_manifold}. % In addition, the center of the hexagon corresponds to the triple degenerate tensors. % On the other hand, points inside the hexagon other than the center do not correspond to any valid tensor mode. % Note that the real domain consists of two edges in the hexagon (upper-left and upper-right), while the complex domain consists of the other four edges. % The left and right edges correspond to the outer complex domain, while the lower-left and lower-right edges correspond to the inner complex domain. % Furthermore, the left half and the right half of the hexagon signify the symmetry between linear and planar tensors. % Note that the triple degenerate curve is adjacent to every mode surface. The domain of the tensor field is the disjoint union of all the mode surfaces. We can consider the volume as a book, with each mode surface as a page and the triple degenerate curve as the book spine. Figure~\ref{fig:teaser} (a) illustrates this idea with a tensor field. % For a tensor $T$ that may have a non-zero trace, we define the notion of {\em isotropicity} as % \begin{equation}\label{eq:isotropicity} \eta(T)=\frac{\trace(T)}{\sqrt{3}||T||} \end{equation} % Note that the isotropicity of a tensor must be between $-1$ and $1$, where an isotropicity of $\pm 1$ corresponds to a positive or negative multiple of the identity matrix, respectively. % % Given a 3D asymmetric tensor of unit magnitude, its mode and isotropicity uniquely determine its eigenvalues. % In addition, when a tensor has an isotropicity of $\pm 1$, its deviator is zero, making its mode not well-defined. % Consequently, we add to our eigenvalue space two additional special tensors: isotropicity of $1$ and isotropicity of $-1$.
% This leads to a double hexagonal cone and line segment in the middle as shown in ~Figure~\ref{fig:eigenvalue_manifold_cone}. % The base of the double cone corresponds to traceless tensors, which are modeled by the hexagon in~Figure~\ref{fig:eigenvalue_manifold}. % The top and bottom tip points in the double cone correspond to the $1$ and $-1$ isotropocities, respectively. % Each point on the surface of this double cone as well as the line between the top and bottom tips corresponds to a unique combination of eigenvalues in a tensor up to a positive multiple. % We refer to the double cone and the center line segment as the {\em eigenvalue space} for 3D asymmetric tensors. % Note that pure isotropocity tensors (i.e. $\eta(T) = \pm 1$) are co-dimension eight in the space of 3D asymmetric tensors. % Thus, they are structurally unstable. % However, we include them in our eigenvalue space for their theoretical values. % The set of points in the field with a given isotropicity $\eta\ne \pm 1$ is a surface, which we refer to as an {\em isotropicity surface}. A special isotropicity surface is the {\em traceless surface}, whose corresponding isotropicity value is zero. Note that for incompressible fluid data, its gradient tensor is always {\em traceless}; thus, the traceless surface becomes the whole domain. % Another feature surface that we visualize is the {\em magnitude surface}, which consists of the points in the field with the same tensor magnitude that is not zero. % Note that the set of zero magnitude tensors is co-dimension nine in the tensor field and thus structurally unstable. % % \begin{figure}[!t] \centering% \begin{overpic}[width={0.8\columnwidth}]{images/eigenvalue_manifold_cone.png} \put(-9, 5) {-1} \put(-8, 55) {0} \put(-8,100) {1} \put(-12, -5) {\small{Isotropicity}} \end{overpic} \caption{ We add to our eigenvalue space two additional special tensors with isotropicity of $\pm 1$ at the top (brown dot) and the bottom (pink dot) and model the eigenvalue space as a hexagonal double cone with one additional line segment.} \label{fig:eigenvalue_manifold_cone} \end{figure} % % \begin{figure*}[!t] \centering% \begin{overpic}[width={\textwidth}]{images/method_quad2.png} \put(55, -7) {(a)} \put(30, 40) {\LARGE{$f(x,y,z)$}} \put(180, -7) {(b)} \put(325, -7) {(c)} \put(455, -7) {(d)} \end{overpic} \caption % Given a quadratic function $f(x, y, z)$ defined on a tetrahedron (a), we find its zeroth levelset by first extracting the quadratic curves as an ellipse or a hyperbola on each face of the tetrahedron (b). % These curves are then mapped to the parameter domain (c) for the quadratic surface (ellipsoid or hyperboloid) to bound a region, which we triangulate. % Mapping the triangulated region back to the $XYZ$ space produces the quadratic surface (d). % } \label{fig:method_quadratic} \end{figure*} % % \section{Extraction of Feature Curves and Surfaces} \label{sec:extraction} % The input data to our visualization is a piecewise linear tensor field defined on a tetrahedral mesh. To extract the aforementioned feature curves and surfaces, we consider their complexity. % Inside a tetrahedron, the tensor field is linear and can be locally expressed as $T(x, y, z) = T_0+xT_x+yT_y+zT_z$ where $T_0$, $T_x$, $T_y$, and $T_z$ are 3D asymmetric tensors. % \subsection{Magnitude Surface} The magnitude surface of a given magnitude $s>0$ is thus characterized by $||T||^2=s^2$. Note that when $T(x, y, z)$ is linear, $f(x, y, z)=||T(x, y, z)||^2-s^2$ is a quadratic polynomial of $x$, $y$, and $z$. 
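As a concrete illustration of this quadratic structure, the short sketch below (Python with NumPy; the helper name is ours) collects the coefficients of $f$ into the $4\times 4$ symmetric matrix that is used for classification in the next paragraph.
\begin{verbatim}
import numpy as np

def magnitude_quadric(T0, Tx, Ty, Tz, s):
    """Return the symmetric 4x4 matrix K with
    f(x,y,z) = [x y z 1] K [x y z 1]^T = ||T0 + x Tx + y Ty + z Tz||_F^2 - s^2."""
    B = [Tx, Ty, Tz, T0]                                   # coefficients of x, y, z, 1
    K = np.array([[np.sum(Ba * Bb) for Bb in B] for Ba in B], dtype=float)
    K[3, 3] -= s ** 2                                      # absorb the constant -s^2
    return K

# Example: a random linear tensor field, as it appears inside one tetrahedron.
rng = np.random.default_rng(0)
T0, Tx, Ty, Tz = (rng.standard_normal((3, 3)) for _ in range(4))
K = magnitude_quadric(T0, Tx, Ty, Tz, s=1.0)
print(np.linalg.eigvalsh(K))   # eigenvalue signs distinguish ellipsoid vs. hyperboloid
\end{verbatim}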
% Under structurally stable conditions, a quadratic surface is part of either an ellipsoid, a single-sheet hyperboloid, or a double-sheet hyperboloid. % Note that each of these types of surfaces can be parameterized over two variables~\cite{hilbert:1999:geometry}. Therefore, we first express $f(x, y, z)$ as $f(x, y, z)= \textbf{x}^T K \textbf{x} = 0$ where $\textbf{x} = (x, y, z, 1)$ and $K$ is a $4 \times 4$ symmetric matrix. % Using the eigenvalues of $K$, we can decide whether the magnitude surface is an ellipsoid, a single-sheet hyperboloid, or a double-sheet hyperboloid~\cite{beyer:1991:crc}. % Finally, we can extract the magnitude surface using proper parameterization at any given accuracy. % % Observing that the magnitude surface is piecewise quadratic and continuous across the faces of adjacent tetrahedra, we first extract the intersection of the magnitude surface with each face. Such an intersection is a quadratic curve inside each triangle face, which is part of either an ellipse or a hyperbola. % We extract these quadratic curves using the method described in~\cite{Khan:20}. % Next, we extract the magnitude surface inside each tetrahedron by collecting its intersection with the four faces of the tetrahedron. % These intersection curves are then mapped to the parameter space for the magnitude surface (an ellipsoid or a hyperboloid), which form loops and bound a number of regions in the parameter space. % We then sample each region at a given sampling rate to generate a set of points inside. % Finally, we apply constrained Delaunay triangulation~\cite{chew:1989:constrained} with the boundary curves as the constraints to generate a triangulation of the regions in the parameter space. % These regions, when mapped back to the $XYZ$-space, give rise to the magnitude surfaces in the tetrahedron. Fig.~\ref{fig:method_quadratic} illustrates this process. Finally, the magnitude surface from adjacent tetrahedra are stitched together from their shared quadratic curves in the common face. % Note that the process of extracting the magnitude surface inside each tetrahedron is independent of the other tetrahedra. % Thus, we enable parallel computation to speed up the process. % % \subsection{Isotropicity Surface} The isotropicity surface is defined in Equation~\ref{eq:isotropicity}, which involves radicals in its formulation. To overcome this issue, we use an alternative formulation: % \begin{equation}\label{eq:isotropicity_2} \trace(T)^2-3\eta(T)^2||T||^2=0 \end{equation} % Note that this formulation captures both the positive isotropicity surface and the negative isotropocity surface, and we refer to the collection of both surfaces as the {\em generalized isotropocity surface}. % Such a surface is a quadratic surface in the domain, which we can extract using the same approach for extracting the magnitude surface mentioned above. % The only issue is how to separate the positive and negative parts of the generalized isotropicity surface. This is achieved as follows. % When extracting the intersection of the positive or negative isotropicity surface with a face of a tetrahedron, we use the sign of $\trace(T)$ to extract only the relevant intersection segments. % We then use these segments as input to the remainder of our pipeline to extract the isotropicity surface inside each tetrahedron. % This leads to the correct extraction of the surface, either positive only, or negative only. % \subsection{Balanced Surface} The balanced surface satisfies that $\minor(T)=0$. 
% This is again a quadratic surface, which we extract using the same method as above. % It consists of both the linear part and planar part, separated by the triple degenerate curve. % We will provide the detail of extracting triple degenerate curves next, as part of our effort to extract neutral surfaces. % \subsection{Neutral Surface and Triple Degenerate Curve} The neutral surface satisfies $\det(T)=0$ and is a cubic surface. % To extract such a surface, we employ the A-patches technique~\cite{luk:2009}, which allows the extraction of algebraic curves and surfaces. % This is achieved by converting a degree-$n$ polynomial $f(x,y,z)$ into its Bernstein coefficients and testing the sign of coefficients on a tetrahedral grid to find the zeroth levelset. % % % % % % % % % % % % % In addition, note that the triple degenerate curve is precisely the intersection of the neutral surface and the balanced surface. Consequently, we extract the triple degenerate curve as follows. % We first extract the balanced surface using our quadratic surface extraction algorithm, which results in a triangular mesh. Next, we compute the curve $\det(T)=0$ on this mesh, which is the triple degenerate curve. % To find this cubic curve on the triangular mesh, we employ the same A-patches method for a lower-dimension. That is, on a triangular mesh representing the balanced surface, we build a triangular Bernstein grid for each triangle, test the A-patch conditions~\cite{luk:2009}, and either extract the curve inside the triangle or subdivide the triangle into smaller triangles and repeat the process. % % The ability to extract triple degenerate curves allows us to separate real neutral surfaces from complex neutral surfaces as well as separate linear balanced surfaces from planar balanced surfaces. \subsection{Degenerate surface and Mode surface} Other than the balanced surface and the neutral surface, all other mode surfaces are degree-six surfaces, including the degenerate surface. % Such a surface can be extracted using the A-patches method. % In addition, for such a surface, the linear part and the planar part are separated by precisely the triple degenerate curve. % Thus, we can extract either the linear part, or the planar part, or both for any mode surface. % % \section{Performance} % Our feature extraction algorithm is tested on a number of analytical and simulation data from solid mechanics and fluid dynamics. The number of tetrahedra in our data ranges from $500,000$ to $1,500,000$. % Measurements were taken on a computer with an Intel(R) Xeon(R) E3-2124G CPU$@$ 3.40 GHz, 16GB of RAM, and an NVIDIA Quadro P620 GPU. % The time to extract quadratic surfaces such as magnitude surfaces, isotropicity surfaces, and balanced surfaces range from $0.38$ second to $2.91$ seconds, depending on the number of tetrahedra in the data. % It is more expensive to extract feature surfaces using the A-patches algorithm due to the recursive nature of the technique. The neutral surface is a degree-three surface. The time to extract this surface ranges from $0.69$ second to $5.90$ seconds. On the other hand, the degenerate surface and other mode surfaces are degree-six surfaces. The A-patches method requires $4.76$ seconds to $22.75$ seconds for our data. % Note that the time reported above includes the time to compute the triple degenerate curve. % \section{Applications} \label{sec:apps} % In this section, we apply our novel analysis to a number of analytical and simulation datasets in solid mechanics and fluid dynamics. 
% Additionally, we provide some physical interpretation of our visualization based on our tensor field analysis. % % \begin{figure}[!b] \centering% \begin{overpic}[width={0.95\columnwidth}]{images/app_Twist2.png} \put(-5, 165) {30\%} \put(-5, 70) {100\%} \put(-5, -6) {(a) deformation} \put(70, -6) {(b) Cauchy stress} \put(165, -6) {(c) PK1 stress} \end{overpic} \caption % We visualize the features at $30\%$ and $100\%$ of the loading of the twisting scenario. % The images from the left to the right column are (a) the deformation, (b) the degenerate curves of the Cauchy stress tensor fields, and (c) the degenerate surfaces of the PK1 stress tensor fields. } \label{fig:app_Twist2} \end{figure} % % \begin{figure}[!b] \centering% \begin{overpic}[width={\columnwidth}]{images/app_Twist3.png} \put(15, 92) {(a) $\eta(T)=\pm 0.7$} \put(90, 92) {(b) $\eta(T)=\pm 0.5$} \put(175,92) {(c) $\eta(T)=0$} \put(15, -6) {(d) $\Vert T \Vert=50$} \put(90, -6) {(e) $\Vert T \Vert=100$} \put(175,-6) {(f) $\Vert T \Vert=150$} \end{overpic} \caption We visualize three isotropicity surfaces (a-c) and three magnitude surfaces (d-f). % The magnitude surfaces are colored in navy blue, while the positive and negative isotropicity surfaces are colored in brown and pink, respectively. The traceless surface (zero isotropicity) is colored in teal. } \label{fig:app_Twist} \end{figure} % % \subsection{Solid Mechanics} % Twisted bundles of steel cables can be found in many places in real life, from those embedded in truck tires to bring additional support, to those used for suspension structures such as cable cars, elevators, and cranes (machines). The cables can fail due to the wear and tear from the cables untwisting under stress (heavy weight lifting). % Such failures can in turn lead to property damages and loss of lives. % To understand the potential weakness in the steel cables under stress due to twisting, we consider the first Piola-Kirchhoff (PK1) stress tensor~\cite{kelly:2020:solid} used to study metal plasticity. % Unlike the perhaps better-known Cauchy stress which is symmetric, the PK1 stress tensor is asymmetric as it is the product of the Cauchy stress and the deformation gradient tensor. % {\bf The twisting scenario} is simulated with SIMULIA~\cite{ABAQUS:614}. % Figure~\ref{fig:app_Twist2} shows two twisting stages of a block whose front face is fixed and the back face is twisted: (top) $30\%$ of the full twist, and (bottom) fully twisted ($18^\circ$). % We observe that the degenerate curves in the Cauchy stress tensor fields (Figure~\ref{fig:app_Twist2} (b): colored curves) and the degenerate surfaces in the PK1 stress tensor fields (Figure~\ref{fig:app_Twist2} (c): colored surfaces) both have a twisting structure. % However, the Cauchy stress tensor fields (Figure~\ref{fig:app_Twist2} (b)) do not have a complex domain and the set of degenerate tensors in the field form curves. This is visible from the visualization shown in the middle column, where linear degenerate curves are colored in green and planar degenerate curves are colored in yellow. % Note that despite the significantly different twists at different stages, the Cauchy stress leads to the nearly identical set of degenerate curves (Figure~\ref{fig:app_Twist2} (b): top and bottom). % In contrast, the visualization of the PK1 tensor fields (Figure~\ref{fig:app_Twist2} (c)) shows a clear difference in the degenerate surfaces for the two stages. 
While at $30\%$ (Figure~\ref{fig:app_Twist2} (c): top) the degenerate surface in the PK1 tensor is similar to the degenerate curves in the Cauchy stress (Figure~\ref{fig:app_Twist2} (b): top), at $100\%$ a pronounced difference appears (Figure~\ref{fig:app_Twist2} (b-c): bottom). % This highlights the potential benefits of visualizing the asymmetric PK1 stress over the symmetric Cauchy stress for twisting motions. % % % Moreover, in the PK1 stress tensor fields, the linear and planar degenerate surfaces reflect the boundary conditions of the fixed side and the twisting side of the block. % Since the fixed side has less deformation, its region of the complex domain is smaller. % Furthermore, as the loading increases, the complex region grows, which indicates that the rotation-dominant domain is getting larger. % Figure~\ref{fig:app_Twist} visualizes the isotropicity surfaces and the magnitude surfaces, which have a skew-symmetric structure as well. % In addition, the isotropicity surfaces illustrate that the material is compressed at the center and isotropically stretched on the boundary. Insights such as these depend on the ability to perform tensor field analysis. % Note that both the PK1 stress and the Cauchy stress provide important insights into the underlying mechanics, and such insights are complementary. As tensor fields, certain tools, such as eigenvalue analysis, are available for both symmetric and asymmetric tensor fields. On the other hand, the interpretation of such analysis depends on many factors, such as the type of the tensors and the physical quantities that they represent. Consequently, visualizing both types of tensor fields and understanding the connection between their structures, i.e. multi-field visualization, can provide a more holistic view of the underlying physics than using only one of them. % \subsection{Fluid Dynamics} % The velocity gradient tensor field of a flow plays an important role in understanding fluid dynamics, and asymmetric tensor field analysis of such a field can provide insight complementary to existing vector field visualization methods~\cite{Zhang:09}. % In this paper, we perform analysis and visualization of 3D velocity gradient tensor fields directly instead of their 2D projections onto some lower-dimensional surface or probe plane. We discuss the following data sets: (1) the Lorenz attractor, (2) the Rayleigh-B{\'e}nard flow, (3) the Arnold–Beltrami–Childress flow, and (4) an open-channel flow (Appendix). % % % % {\bf The Lorenz attractor} is a set of chaotic solutions to the Lorenz system~\cite{lorenz:1963:deterministic} with system parameters $\sigma$, $\rho$, and $\beta$. Figure~\ref{fig:teaser} (a) shows the butterfly-shaped attractor (the grey winding curve) in the system when $\sigma =10$, $\rho =28$, and $\beta =8/3$~\cite{lorenz:1963:deterministic}. % We extract and visualize the feature curves and surfaces in the gradient tensor, such as (b) linear degenerate surfaces (green) and planar degenerate surfaces (yellow), (c) real neutral surfaces (orange) and complex neutral surfaces (red), and (d) linear balanced surfaces (blue) and planar balanced surfaces (magenta). % Note that all of these surfaces intersect exactly at the triple degenerate curves (black). % Moreover, topological feature surfaces separate the two critical points in the attractor, and these surfaces exhibit a two-way rotational symmetry.
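For this dataset the velocity gradient tensor is available in closed form, since the Jacobian of the Lorenz equations can be evaluated analytically at any sample point. The sketch below is our own illustration (the paper does not state how its field was sampled) and can be combined with the mode computation sketched earlier.
\begin{verbatim}
import numpy as np

def lorenz_gradient(x, y, z, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Jacobian of the Lorenz system
    dx/dt = sigma (y - x), dy/dt = x (rho - z) - y, dz/dt = x y - beta z."""
    return np.array([[-sigma,  sigma,   0.0],
                     [rho - z, -1.0,    -x ],
                     [ y,       x,    -beta]])

T = lorenz_gradient(1.0, 1.0, 25.0)
print(np.trace(T))   # constant -(sigma + 1 + beta): the Lorenz flow is volume-contracting
\end{verbatim}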
% % \begin{figure*}[!t] \centering% \begin{overpic}[width={\textwidth}]{images/app_Bernard_original.png} \put(25, -7) { (a) vector field} \put(145, -7) { (b) degenerate surfaces} \put(275, -7) { (c) balanced surfaces} \put(410, -7) { (d) neutral surfaces} \end{overpic} \caption{ % This figure visualizes the Rayleigh-B{\'e}nard flow with (a) the vector field, (b) the linear and planar degenerate surfaces (green and yellow), (c) the real and complex neutral surfaces (orange and red), and (d) the linear and planar balanced surfaces (blue and magenta). % Notice that the triple degenerate curve (black curves) can be considered as the spine to which the feature surfaces are attached. } \label{fig:app_Bernard} \end{figure*} % \begin{figure}[!t] \centering% \begin{overpic}[width={\columnwidth}]{images/app_Bernard2.png} \put(0, -8) {(a) vector field} \put(75, -8) {(b) mode surfaces} \put(150, -8) {(c) triple degenerate curves} \end{overpic} \caption{% % We magnify the center portion of the B{\'e}nard cell and show (a) the streamlines of the vector field, (b) the mode surfaces, and (c) the triple degenerate curves. % } \label{fig:app_Bernard2} \end{figure} {\bf The Rayleigh-B{\'e}nard flow} is thermal convection in a thin horizontal layer of fluid heated from below by maintaining a constant temperature difference between the upper and lower boundaries. % The flow is characterized by the formation of two convection cells as shown in Figure~\ref{fig:app_Bernard} (a). % Identifying regions of stretching and compression as well as the rotation-dominant region is useful yet challenging. % With our feature surfaces, such regions can be better perceived. Notice that the triple degenerate curve (black) can be considered as a spine to which other feature surfaces are attached. % In Figure~\ref{fig:app_Bernard2} (a), we zoom in on the center portion of the B{\'e}nard cell ($x \in [0.5, 0.8]$; $y \in [0.35, 0.65]$; $z \in [0.35, 0.65]$) and visualize the neutral surfaces, the degenerate surfaces, the balanced surfaces, and the triple degenerate curves (Figure~\ref{fig:app_Bernard2} (b)). % Note that the left face of the cube corresponds to the center face that separates the pair of convection cells and the lower-left corner contains the converging upwelling flow. % There, we observe a quick transition of the relatively flat and parallel mode surfaces: the planar degenerate surface (leftmost, yellow), the real neutral surface (orange), the linear degenerate surface (green), and the linear balanced surface (rightmost, blue). % Near the bottom, underneath the real neutral surface, the converging flow is dominated by shearing with compression (planar), and then becomes stretching-dominant (linear) after crossing the real neutral surface. % Next, the strength of the rotation gradually increases until the shear balances the rotation at the linear balanced surface (blue). % Finally, we enter the rotation-dominant convection cell domain on the right-hand side of the linear balanced surface (blue). % % Furthermore, in the upper part of the upwelling convection, the flow characteristic transitions exhibit a more volumetric appearance: from the linear degenerate surface (leftmost, green), real neutral surface (orange), planar degenerate surface (yellow), planar balanced surface (magenta), to the complex neutral surface (rightmost, red) in the rotation cell.
% Lastly, we notice both the triple degenerate curves have an ``M'' shape, with the triple degenerate curve near the bottom of the cube being narrower (Figure~\ref{fig:app_Bernard2} (c)). % % We conjecture that the converging flow pattern pushes the triple degenerate curves towards the center of the domain, thus the ``M'' shapes. % The above observations of the flow characteristics and behaviors can be difficult to detect and interpret correctly with the 2D flow and tensor field visualization in probe planes. % Such comprehensive analysis is attainable with the use of 3D visualization of the velocity gradient tensors. % \begin{figure*}[!t] \centering% \begin{overpic}[width={\textwidth}]{images/app_ABC4.png} \put(30, -7) {(a) Vector field} \put(170, -7) { (b) } \put(310, -7) { (c) } \put(435, -7) { (d) } \end{overpic} \caption{On the boundary face of the ABC flow, streamlines exhibit {\em ordered} behavior (a: inside the red frame) and {\em chaotic} behaviors (a: outside the red frame). The non-chaotic streamlines occur around the cylindrical regions (b: transparent cylinders and half-cylinders) that are referred to as the principal vortices. Notice the one-to-one correspondence between the principal vortices and the triple degenerate curves (b: colored curves). In addition, the chaotic streamlines (c: grey curves) appear to travel along the corresponding triple degenerate curve. The dominant hyperstreamline starting from $(\pi, \pi, \pi)$ (d) contains multiple real segments (with thorns) and complex segments (with leaves) as it travels in the periodicity box. Note that when intersecting a face, the hyperstreamline continues from the same location on the opposite face, i.e. points with the same labels. Successive intersection points are labeled with $1$-$4$. Notice that the hypersteamline is mostly a straight line except when it crosses the real neutral surface (not shown). It also consists of mostly complex segments. } \label{fig:app_ABC} \end{figure*} {\bf Arnold–Beltrami–Childress flow} (ABC flow) is a 3D incompressible vector field that is a steady-state solution to Euler's equations~\cite{dombre:1986:chaotic}. The ABC flow is periodic in each of the $X$, $Y$, and $Z$ directions with a period of $2\pi$ and is usually studied in its {\em periodicity box}: $[0, 2\pi) \times [0, 2\pi) \times [0, 2\pi)$ (Figure~\ref{fig:app_ABC} (a): the cube). One of the main characteristics of the ABC flow is the existence of {\em chaotic streamlines}, which, due to the periodicity in the flow, can intersect a face of the periodicity box infinitely many times so that the set of the intersection points fills a region in the face~\cite{dombre:1986:chaotic}. When $A=1$, $B=\sqrt{2/3}$, and $C=\sqrt{1/3}$, chaotic streamlines occur outside the so-called {\em principal vortices}, each of which is a tubular region along one of the $X$, $Y$, and $Z$ axes (Figure~\ref{fig:app_ABC} (b): colored cylinders and half-cylinders due to periodicity). There are a total of six principal vortices, two along each axis. Inside a principal vortex, the streamlines' orientations are predominantly along the direction of the tube. Each such streamline intersects a face of the cube at a set of points that are on a curve (instead of a region). Such streamlines are not chaotic. On the faces of the periodicity box, the intersection points with chaotic streamlines are outside the principal vortices. 
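Both the velocity and its gradient are available analytically for the ABC flow, which is how the tensor field in this example can be sampled on a grid over the periodicity box (a sketch under that assumption; the helper names are ours).
\begin{verbatim}
import numpy as np

A, B, C = 1.0, np.sqrt(2.0 / 3.0), np.sqrt(1.0 / 3.0)   # parameter values used here

def abc_velocity(x, y, z):
    """ABC flow: (A sin z + C cos y, B sin x + A cos z, C sin y + B cos x)."""
    return np.array([A * np.sin(z) + C * np.cos(y),
                     B * np.sin(x) + A * np.cos(z),
                     C * np.sin(y) + B * np.cos(x)])

def abc_gradient(x, y, z):
    """Velocity gradient d u_i / d x_j; its trace is zero (incompressible flow)."""
    return np.array([[ 0.0,            -C * np.sin(y),  A * np.cos(z)],
                     [ B * np.cos(x),   0.0,           -A * np.sin(z)],
                     [-B * np.sin(x),   C * np.cos(y),  0.0          ]])

print(np.trace(abc_gradient(np.pi, np.pi, np.pi)))   # 0.0, so the deviator is T itself
\end{verbatim}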
While Dombre and Frisch~\cite{dombre:1986:chaotic} illustrate the principal vortices as cylinders, they point out that these regions are helical, which, when traveling from one face to the opposite face of the cube, finish a turn of $2\pi$. We observe that there is a one-to-one correspondence between the set of triple degenerate curves (Figure~\ref{fig:app_ABC} (b): colored curves) and the set of principal vortices. Note that some of the triple degenerate curves are divided into three segments by the faces of the periodicity box. We notice that each triple degenerate curve also has a helical shape and finishes a turn of $2\pi$ after traveling from one face to the opposite face. Moreover, each triple degenerate curve is a loop under the periodic condition. In addition, the streamlines in a principal vortex (Figure~\ref{fig:app_ABC} (c): grey curves) appear to be around the triple degenerate curves. The correlation between the triple degenerate curves and the principal vortices in terms of their numbers, locations, and shapes suggests that additional insights may be gained on the ABC flow by inspecting the topological structures in its gradient tensor field. % \begin{figure}[!t] \centering% \begin{overpic}[width={\columnwidth}]{images/app_ABC2.png} \put(25, 0) { (a) Balanced surfaces} \put(50, -7) { \small (back view)} \put(135, 0) { (b) Complex neutral surfaces} \put(175, -7) { \small (side view)} \end{overpic} \caption{ % We compare the vortex core lines (golden curves) of the ABC flow to (a) the linear and planar balanced surfaces (blue and magenta), and (b) the complex neutral surfaces (red). % Notice that the vortex core lines appear to be in the inner complex domain (a) and separated into the linear and planar segments by the complex neutral surfaces (b). } \label{fig:app_ABC2} \end{figure} % Besides the periodicity in the flow, there is an additional eight-fold symmetry within the periodicity box~\cite{dombre:1986:chaotic} that leads to the {\em fundamental box}: $[0, \pi) \times [0, \pi) \times [0, \pi)$ which is one-eighth of the periodicity box. Given this, we compute the dominant hyperstreamline from $p=(\pi, \pi, \pi)$ in one direction. Figure~\ref{fig:app_ABC} (d) shows the augmented hyperstreamline through $p$, which, when intersecting a face of the periodicity box, continues from the same location on the opposite face. Successive intersection points are labeled with $1$-$4$. Notice that the hyperstreamline is mostly straight, except where the dominant eigenvectors are discontinuous (crossing the real neutral surfaces). The variety of tensor field behavior along the hyperstreamline reflects the rich structure in the ABC flow. This highlights the benefit of our tree-based augmented hyperstreamline visualization, which can be used as a probing tool for the field with only one user-specified seed point. The eigenvector information in the field along the hyperstreamline is captured by the thorns and leaves, giving the user a more holistic view of the field than showing only the stem. Vortex core lines are a popular visualization for understanding fluid flows~\cite{roth:1998:higher}. We compare vortex core lines (extracted using VTK~\cite{schroeder:2006:VTK} to our tensor-based feature surfaces. As shown in Figure~\ref{fig:app_ABC2} (a) (colored curves), there are four vortex core lines given the periodicity condition. Notice that they do not intersect the balanced surfaces (a) but intersect the complex neutral surfaces (b). 
This suggests that the vortex core lines are inside not only the complex domain but also the inner complex domain. Such an observation signifies the importance of the balanced surfaces in understanding fluid flows. In addition, if this observation can be justified theoretically, it may be used in the future to evaluate the effectiveness of vortex core line extraction methods. Moreover, the complex neutral surface divides vortex core lines into linear segments and planar segments. To our knowledge, a vortex core line is usually extracted and studied as a whole. Understanding the transition from linear parts to planar parts and vice versa has the potential of bringing additional insight to the understanding of the underlying fluid dynamics. % \section{Conclusion and Future Work} \label{sec:conclusion} % In this paper, we explore the topology of 3D asymmetric tensor fields and introduce an eigenvalue space based on the tensor mode that facilitates our analysis. % At the core of our analysis is the definition of tensor mode, which gives rise to a number of feature curves and surfaces with topological significance. In addition, we show that triple degenerate tensors are stable and form curves. % Additionally, we introduce the notion of balanced surface, which divides the complex domain into the inner part (rotation-dominant) and the outer part (shear-dominant). Such a feature is not present for 2D asymmetric tensor fields. % % Observing that a number of the feature surfaces are quadratic, we provide an algorithm to extract them effectively and quickly. Note that our algorithm can also be used to extract quadratic feature surfaces in symmetric tensor fields such as the magnitude surfaces and the isotropic index surfaces~\cite{Palacios:16}. % To enable a holistic view of the eigenvectors and dual-eigenvectors, we visualize a hyperstreamline following one eigenvector field as a tree stem with attached thorns and leaves to show the other eigenvectors or dual-eigenvectors. % This allows us to inspect the change in the tensor field behavior across important feature surfaces. % Finally, we have applied our analysis and visualization to a number of analytical and simulation data and provided some physical observations. % % In the future, we wish to investigate more robust extraction of feature surfaces of the tensor fields than the A-patches method, which neither guarantees to find all the surfaces nor provides a seamless surface extraction. % For example, there has been work on the seamless extraction of mode surfaces for 3D symmetric tensor fields~\cite{Qu:21}, which is based on a reparameterization of the space of mode surfaces in a linear tensor field. % We plan to investigate a potential adaptation of this approach for asymmetric tensor fields. % Deeper understanding of the relationship between features in a 3D asymmetric tensor field and those of its symmetric part is another direction that we plan to explore. % The two types of tensors share many characteristics such as the concepts of eigenvalues and eigenvectors as well as tensor invariants such as the magnitude and trace. % Moreover, the symmetric part of an asymmetric tensor is symmetric, and understanding how the topological features in an asymmetric tensor field such as neutral surfaces, degenerate surfaces, and balanced surfaces relate to the features in the symmetric part has the potential of creating a unified framework for 3D tensor fields, whether symmetric or asymmetric. 
% Such deeper understanding can potentially lead to more insight into the data by examining the features in both the asymmetric tensor field and its symmetric part. % Finally, developing a multi-scale representation of 3D asymmetric tensor field topology is an important research area that has received relatively little attention from the Visualization community. % We plan to investigate this area in our future work. % % % % % % \acknowledgments{The authors wish to thank our anonymous reviewers for their constructive feedback. We appreciate the help from Avery Stauber and Yichuan Yin during video production. This work was supported in part by the NSF award (\#1619383). } \bibliographystyle{abbrv-doi}
1,108,101,562,532
arxiv
\section{Introduction} Phylogenetic trees are models of evolutionary relationships. The general approach in phylogenetics is to represent evolutionary relationships using bifurcating trees with sampled taxa (represented by so-called labeled vertices) placed at the leaves. Neighbor-joining (NJ) is a popular method for constructing such trees and uses distances between each pair of taxa. Such trees have the maximum number of unsampled ancestors (represented by so-called latent vertices), each ancestor corresponding to a vertex comprising a branching point in the tree. This approach does not allow the labeled vertices to share an ancestor-descendant relationship, and thus may not be appropriate for data sets that have been densely sampled with respect to evolutionary time, for example, genomic sequences of pathogens that have been sampled from individuals who are part of the same transmission chain. To account for ancestor-descendant relationships \citet{Jombart2011} model evolutionary relationships using a directed acyclic graph in which each edge is directed from a parent to its child. This graph does not contain any latent vertices and is not necessarily connected. In case the graph is disconnected, it is an incomplete representation of the evolutionary relationships among all the labeled vertices. In related work \citet{Gavryushkina2014} provide a method for constructing so-called sampled ancestor (SA) trees in which labeled vertices come to be placed at internal vertices by contracting terminal branches. The authors do this in a Bayesian inference framework where trees are generated under a model that does not allow labeled vertices to have degree greater than two and, in addition, does not allow latent vertices to have degree greater than three. Two distance-based algorithms, recursive grouping (RG) and Chow-Liu recursive grouping (CLRG), have been developed by \citet{Choi2010b} for constructing trees which may contain latent vertices with degree greater than two and labeled vertices with degree greater than 0 (so-called generally labeled trees). The authors additionally developed NJc, a method for constructing generally labeled trees by initially constructing a tree using NJ and subsequently contracting all branches that are incident to a latent vertex and are smaller than a preselected threshold. The performance of RG, CLRG, and NJc was compared on simulated data where only the tree topology was varied. In that study, no method clearly outperformed the others. We developed a distance-based agglomeration method called family-joining (FJ). FJ iteratively identifies, on the basis of a distance threshold, vertices that are in a parent-child or sibling relationship, and introduces latent vertices if required. After inferring all the edges, the branch lengths are estimated using ordinary least-squares (OLS) regression. RG, CLRG and FJ require the setting of a threshold that determines the model complexity (number of branches) of the output tree. We tested three approaches to threshold selection which minimized Bayesian information criterion (BIC), Akaike information criterion (AIC), and cross-validation (CV) error, respectively. We compared the performance of FJ-BIC, FJ-AIC, FJ-CV with NJc-BIC, RG-BIC, CLRG-BIC and SA across diverse simulation scenarios. We applied FJ-BIC to an HIV-1 transmission chain data set \citep{Vrancken2014} and checked if the known transmission events were compatible with the FJ-BIC tree. 
Additionally in the analysis of HIV-1 sequences, we compared the bootstrap support of branches in the FJ-BIC tree and the maximum likelihood tree constructed using RAxML \citep{Stamatakis2006}. \section{New Approaches} \subsection{An overview of family-joining} The family-joining (FJ) method consists of a distance-based agglomeration algorithm for constructing generally labeled trees, and an efficient algorithm for computing ordinary least-squares (OLS) branch lengths. Trees are inferred using the following agglomeration procedure. We initialize a vertex set with all labeled vertices. At each iteration we select from the vertex set, the vertex pair that optimizes the neighbor-joining objective, as defined by \citet{Saitou1987}, see eq. (\ref{eqn:neighborIdentificationStep}) in Materials and Methods. We classify the selected vertex pair as being either parent-child or siblings on the basis of a threshold $\epsilon$, see eq. (\ref{eqn:relationshipTest}) in Materials and Methods. If they are found to be siblings we check if there is another vertex that is the parent of both the siblings. If no such vertex is found, a latent vertex is introduced as the parent of both the siblings. The distance matrix is augmented by adding distances from the newly introduced latent vertex to each of the other vertices, obtained using the formula described in \citet{Studier1988}, see eq. (\ref{eqn:distancesFromLatentVertex}) in Materials and Methods. Rows and columns of the distance matrix corresponding to the children are removed, and the procedure is iterated until a connected graph is obtained. Subsequently, we estimate branch lengths using ordinary least-squares (OLS) regression. For efficient calculation of OLS branch lengths we extended the algorithm by \citet{Bryant1997}, which was designed for leaf-labeled trees, to generally labeled trees. OLS branch lengths may be negative, which has no biological interpretation. To account for this, after estimating the branch lengths, all branches that are shorter than $\epsilon$ and are incident to a latent vertex are contracted. Overall, the procedure is similar to constructing the neighbor-joining tree followed by contracting short branches. We demonstrate FJ by applying it to a tree-additive distance matrix. A distance matrix is tree-additive if there exists a tree, in which the distance between each pair of labeled vertices is equal to the corresponding sum of lengths of the branches that lie along the unique path between the two vertices. \subsubsection{An example using tree-additive distances} \begin{figure*} \centering \textsl{}\includegraphics[width=0.8\textwidth]{FJ_example.pdf} \vspace*{2 em} \caption{Panel A: The tree-additive distances used in this example. Labeled vertices are represented by solid circles and latent vertices by white circles with black border. Panels B to G: The agglomeration steps of FJ which identifies the correct tree topology. The edges that are inferred in each agglomeration step are shown as solid lines. The dotted lines connect the labeled and latent vertices that will be used in the next iteration. Panel H: The correct branch lengths estimated using OLS.} \label{fig:FJ_example} \end{figure*} We simulated a generally labeled tree and computed corresponding tree-additive distances. We applied FJ to the resulting tree-additive distance matrix and describe the major steps below. See Fig. \ref{fig:FJ_example} for an illustration. The first iteration identified $O_{1}$ and $O_{2}$ as neighbors that share a sibling relationship. 
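For reference, a single agglomeration step can be sketched as follows (Python with NumPy). The pair selection and the distance update use the standard neighbor-joining formulas cited above; the parent-child test shown here, which compares the shorter estimated branch length to $\epsilon$, is only a simplified stand-in for the actual criterion in eq. (\ref{eqn:relationshipTest}).
\begin{verbatim}
import numpy as np

def fj_step(D, eps):
    """One FJ-style agglomeration step on an m x m distance matrix D (sketch only)."""
    m = D.shape[0]
    R = D.sum(axis=1)
    Q = (m - 2) * D - R[:, None] - R[None, :]               # neighbor-joining objective
    np.fill_diagonal(Q, np.inf)
    i, j = np.unravel_index(np.argmin(Q), Q.shape)
    bi = 0.5 * D[i, j] + (R[i] - R[j]) / (2.0 * (m - 2))     # branch to the joint parent
    bj = D[i, j] - bi
    relation = "parent-child" if min(bi, bj) < eps else "siblings"
    d_parent = 0.5 * (D[i, :] + D[j, :] - D[i, j])           # distances from a new latent parent
    return (i, j), relation, d_parent                        # entries i, j of d_parent are dropped
\end{verbatim}
We now continue with the worked example.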
No parent was found for these siblings and a latent vertex $L_{1}$ was introduced. Distances between $L_{1}$ and vertices $O_{3}$ through $O_{9}$ were calculated and the rows and columns corresponding to $O_{1}$ and $O_{2}$ were removed from the distance matrix. Edges were added between $L_{1}$ and $O_{1}$, and between $L_{1}$ and $O_{2}$. The second iteration found $O_{4}$ and $O_{5}$ as neighbors that share a parent-child relationship with $O_{4}$ being the parent. An edge was added between $O_{4}$ and $O_{5}$, and $O_{5}$ was removed from the distance matrix. The following two iterations identified neighbors that are siblings with no parent thus introducing two latent vertices $L_{2}$ and $L_{3}$. The sibling pairs found in the third and fourth iteration are $(L_{1}, O_{3})$ and $(L_{2}, O_{4})$ respectively. The fifth iteration identified $L_{3}$ and $O_{6}$ as siblings, both of which are the children of $O_{9}$. Similarly, the next iteration found $O_{9}$ to be the parent of both $O_{7}$ and $O_{8}$. The final step involved estimating branch lengths using ordinary least-squares. The estimated branch lengths are identical to the corresponding branch lengths in the simulated tree. \section{Results and Discussion} \subsection{Simulated data} Simulated data sets were constructed by varying either the tree type, proportion of labeled internal vertices, type of contracted edge, number of labeled vertices, sequence length or branch length. Each of these parameters is described in detail below. An overview of the parameter settings is provided in Table \ref{tab:simulatedDatasets}. \begin{table*}[t] \begin{center} \caption{Simulated data sets were constructed by varying either the tree type, proportion of labeled internal vertices, type of contracted edge, number of labeled vertices, sequence length or branch length. All settings that were considered for each parameter are shown below. The default setting for each parameter is indicated with $^{*}$.} \begin{tabular}{|l|c|c|c|c|c|}\hline \label{tab:simulatedDatasets} Tree type &&balanced&random$^{*}$&unbalanced&\\\hline Fraction of latent vertices &0.5&0.37&0.25$^{*}$&0.12&0\\\hline Contracted edge &\emph{leaf/latent}&\emph{labeled/latent}&\emph{any/latent}$^{*}$&\emph{latent/latent}&\\\hline Average branch length &0.001&0.004&0.016$^{*}$&0.064&0.256\\\hline Number of labeled vertices &20&40&80&160$^{*}$&320\\\hline Sequence length &250&500&1000$^{*}$&2000&4000\\\hline \end{tabular} \end{center} \end{table*} Three types of binary trees were generated: balanced, unbalanced and random. Unbalanced or ladder-like trees have the largest diameter among all the trees with the same number of vertices. The diameter of a tree is the number of edges that lie on the path in the tree with the maximum number of edges. We chose this tree type because it has been shown that the accuracy of the neighbor identification step (\ref{eqn:neighborIdentificationStep}), which forms a part of FJ, is inversely related to tree diameter \cite{St.John2003}. A balanced tree is complementary to an unbalanced tree and has the smallest diameter possible. The fraction of latent vertices ranges from zero to $(n-2)/(2n-2)$ where $n$ is the number of labeled vertices. We simulated trees by varying the fraction of latent vertices over this range in four equal steps. Trees with the desired proportion of labeled vertices were constructed by contracting edges of a binary tree. 
Depending on the type of simulation experiment, the following edges were contracted: \emph{leaf/latent}, \emph{labeled/latent}, \emph{latent/latent}, and \emph{any/latent}. For each setting of tree type, fraction of latent vertices, and edge type, we randomly generated corresponding types of binary trees and contracted randomly selected edges of the appropriate type, until the desired fraction of latent vertices was reached. Once the topology was generated, branches were assigned lengths by uniformly sampling numbers between 1 and 100, and scaling them such that the expected branch length was equal to a preselected branch length average. Branch length averages took values of 0.001, 0.004, 0.016, 0.064, and 0.256 subs/site. A vertex was randomly selected as the root and sequences were evolved along the branches according to a GTR+$\Gamma$ model of substitution \cite{Lanave1984}. The parameters of the GTR model were set using estimates from a real data set \cite{Waddell1997}. The shape and scale parameters of the $\Gamma$ model were set to 1, which resulted in a moderate variation of substitution rate across sites. Seq-Gen was used for simulating sequence evolution \cite{Rambaut1997}. Sequence lengths took values of 250, 500, 1000, 2000, and 4000 nt. The number of labeled vertices (taxa) took values of 20, 40, 80, 160, and 320. Simulation scenarios were defined by varying each parameter over its range while keeping the remaining parameters fixed at their default setting. The default settings for each parameter are described below. Note that this procedure results in 22 different parameter combinations. We simulated the corresponding 22 scenarios. For the categorical parameters \emph{tree type} and \emph{contracted edge type}, the respective default settings were \emph{random} and \emph{any/latent}. These settings were selected as the defaults as they do not restrict the generation of generally labeled trees. For the continuous parameter fraction of vertices that are latent, which has a bounded range, the midpoint was used as the default value. For the following continuous parameters with no upper bound: number of labeled vertices, sequence length, and average branch length, we selected the range and default settings such that the trend in performance over each parameter range would be apparent. The default setting for the number of labeled vertices was 160, for the sequence length it was 1000 nt, and for the average branch length it was 0.016 subs/site. For each setting of parameter values, 100 trees and corresponding sequences were simulated. For distance-based methods we computed pairwise distances using ML distance estimates under a GTR+$\Gamma$ model, computed using RAxMLv8.2.8 \cite{Stamatakis2014}. For SA, which constructs rooted trees, we provided sampling times for each labeled vertex. This was done by randomly selecting a vertex as the root and defining the sampling time for each labeled vertex as the path length from the root. Note that this method of defining sampling times is equivalent to assuming a strict molecular clock with a clock rate of 1.0. When substitution rates (subs./site/time) follow a strict molecular clock, the distance from the root to each labeled vertex is proportional to the time elapsed since divergence from the root. SA recovers the correct clock rate of 1.0 under the strict molecular clock model in all scenarios except two where the average branch length is very small (0.001 and 0.004; see Supplementary Fig.
3) \subsection{Performance metrics} Precision and recall were used to quantify the accuracy of the various methods at reconstructing the simulated trees. These metrics are defined below. \begin{align*} &\text{Precision}(T,\hat{T}) \:\:\:\quad= &\dfrac{|S\cap \hat{S}|}{|\hat{S}|}&\mbox{, and}\\ &\text{Recall}(T,\hat{T}) \!\!\!\!\!\!\qquad\quad= &\dfrac{|S\cap \hat{S}|}{|S|},& \end{align*} where $S$ and $\hat{S}$ are the set of splits corresponding to the simulated tree $T$ and the reconstructed tree $\hat{T}$, respectively. Please note that $S$ contains the split of every branch in $T$, including the terminal branches. Precision and recall range from zero to one. Precision is equal to one only if all the splits in the reconstructed tree are present in the simulated tree. Similarly, recall is equal to one only if all the splits in the simulated tree are present in the reconstructed tree. Please note that we do not report Robinson-Foulds distance, which is popularly used for quantifying reconstruction accuracy, since it would be biased against methods that do not allow polytomies. Each of the reconstruction methods that we tested can achieve the highest and the lowest possible value of recall. Among the reconstruction methods that were compared, only SA can not achieve a precision of one if the simulated tree contains polytomies. We feel that both precision and recall are important measures of reconstruction accuracy. \subsection{Results of comparative study on simulated data} We present the results of applying FJ-BIC, NJc-BIC, RG-BIC, CLRG-BIC and SA to all simulated data sets. For methods which have the suffix BIC, we performed threshold selection by minimizing Bayesian information criterion (BIC). For FJ, we also tested FJ-AIC and FJ-CV which optimized Akaike information criterion (AIC), and cross-validation error (CV), respectively. As FJ-AIC and FJ-CV never performed higher than FJ-BIC in any simulation scenario we do not show the results in the main paper. These results are shown in Supplementary Fig. 4. A change in precision or recall is considered to be statistically significant if the corresponding Welch's t-test has a p-value that is smaller than $0.01$. A method is said to have significantly high precision or recall if no other method has significantly higher precision or recall, respectively. \begin{table*} \small{ \begin{centering} \caption{Methods with the significantly highest precision and recall are shown below. All methods that are not significantly worse than the best method are also shown. F, N, R, C, and S stand for FJ-BIC, NJc-BIC, RG-BIC, CLRG-BIC, and SA, respectively. Black and red indicate methods with the highest precision and recall, respectively. 
The default setting for each simulation parameter is indicated with $^{*}$.} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{Precision, \textcolor{red}{Recall}}\tabularnewline \hline \multirow{2}{*}{Tree type} & & balanced & random{*} & unbalanced & \tabularnewline & & \FJP, \NJcR, \SAR & \FJP, \FJR, \NJcR, \CLRGR, \SAR & \CLRGP, \CLRGR, \SAR & \tabularnewline \hline \multirow{2}{*}{Contracted edge} & \emph{leaf/latent} & \emph{labeled/latent} & \emph{any/latent}{*} & \emph{latent/latent} & \tabularnewline & \FJP, \NJcP, \FJR, \NJcR, \CLRGR & \FJP, \NJcR & \FJP, \FJR, \NJcR, \CLRGR, \SAR & \RGP, \SAR & \tabularnewline \hline \multirow{2}{*}{Fraction of latent vertices} & 0.5 & 0.37 & 0.25{*} & 0.12 & 0\tabularnewline & \NJcP, \SAR & \NJcP, \CLRGP, \SAR & \FJP, \FJR, \NJcR, \CLRGR, \SAR & \FJP, \NJcR, \CLRGR, \SAR & \CLRGP, \CLRGR\tabularnewline \hline \multirow{2}{*}{Average branch length} & 0.001 & 0.004 & 0.016{*} & 0.064 & 0.256\tabularnewline & \CLRGP, \SAR & \FJP, \SAR & \FJP, \FJR, \NJcR, \CLRGR, \SAR & \FJP, \CLRGP, \NJcR, \SAR & \CLRGP, \NJcR, \SAR\tabularnewline \hline \multirow{2}{*}{Number of labeled vertices} & 20 & 40 & 80 & 160{*} & 320\tabularnewline & \FJP, \NJcR, \CLRGR & \FJP, \NJcR, \CLRGR & \FJP, \NJcR, \CLRGR, \SAR & \FJP, \FJR, \NJcR, \CLRGR, \SAR & \FJP, \NJcR, \CLRGR, \SAR\tabularnewline \hline \multirow{2}{*}{Sequence length} & 250 & 500 & 1000{*} & 2000 & 4000\tabularnewline & \FJP, \CLRGP, \SAR & \FJP, \SAR & \FJP, \FJR, \NJcR, \CLRGR, \SAR & \FJP, \NJcP, \CLRGP, \FJR, \NJcR, \CLRGR, \SAR & \FJP, \NJcP, \CLRGP, \FJR, \NJcR, \SAR\tabularnewline \hline \end{tabular} \label{tab:bestMethods} \par\end{centering} } \end{table*}\textsl{} \begin{figure*}[htbp] \centering \includegraphics[width=\textwidth]{precisionRecall_main.pdf} \vspace*{1 em} \caption{A comparison of the reconstruction accuracy of all methods in six simulation categories. One parameter (x-axes) was varied in each category. The default parameter settings are denoted as parameterValue(d) on each x-axis. For each parameter setting, 100 data sets were created. Precision is shown in blue and recall is shown in pink.} \label{fig:precisionRecallAll} \end{figure*} \subsubsection*{Tree type} Both FJ-BIC and NJc-BIC have significantly higher precision and recall on balanced trees than on unbalanced trees. This behavior is expected, since the accuracy of the step of FJ, in which neighbors are identified, is inversely related to tree diameter\cite{St.John2003}. Even on unbalanced trees, which have large diameters, FJ-BIC and NJc-BIC have moderately large (median) precision/recall values of 0.79/0.81 and 0.76/0.87 respectively (see Fig. \ref{fig:precisionRecallAll}A). Similarly RG-BIC performs low on unbalanced trees than on balanced trees, which is in agreement with previous work \cite{Choi2010b}. RG iteratively partitions the entire vertex set into families. Balanced trees and unbalanced trees have $n_\text{leaves}/2$ and two families of size two, respectively. This suggests that RG has a higher error rate for unbalanced trees than for balanced trees. In contrast, CLRG-BIC performs significantly higher on unbalanced trees than on balanced trees with median precision/recall values of 0.89/0.93 and 0.89/0.91, respectively. CLRG constructs the MST and then iteratively applies RG to the neighborhood of each internal vertex. The higher performance of CLRG-BIC on unbalanced trees is most likely due to the MST being topologically close to the unbalanced tree. 
SA has a median precision and recall of 0.77 and 0.96, respectively, across all tree types. The comparatively lower precision of SA is due to this method constructing trees in which a labeled vertex can only have up to one descendant and all other internal vertices have degree three. Consequently, this results in trees with excess branches if the true tree contains polytomies. \subsubsection*{Type of contracted edge} FJ-BIC has a significantly higher precision than other methods for all types of contracted edges, except \emph{latent/latent}. SA has a high median recall of 0.96 for all types of contracted edges. However, the recall values of SA are not significantly higher than those of FJ-BIC if the contracted edge is \emph{leaf/latent}. This is due to a large variance in the performance of SA, quantified with an inter-quartile range of 0.26 (see Fig. \ref{fig:precisionRecallAll}B). SA has a high median precision of 0.94 if the contracted edge is \emph{leaf/latent}. Contracting \emph{leaf/latent} edges results in trees in which a labeled vertex can have up to one descendant and all other internal vertices have degree three. The high performance of SA in this category is because these are the same type of trees which SA samples when optimizing tree topology. SA has lower performance when any other edge type is contracted. RG-BIC and CLRG-BIC have significantly higher precision and recall if \emph{latent/latent} edges are contracted, when compared to precision and recall for other edge types. \subsubsection*{Fraction of vertices that are latent} For leaf-labeled trees, which have a maximal fraction (0.5) of latent vertices, all methods have a median precision higher than 0.95 (see Fig. \ref{fig:precisionRecallAll}C). In this simulation scenario, with a median recall of 0.97, SA has significantly higher recall than other methods, even though FJ-BIC also has a high median recall of 0.94. In general, precision decreases and recall increases with a decrease in the fraction of latent vertices. FJ-BIC has a median precision and recall that is greater than 0.89 across all settings of fraction of latent vertices. CLRG-BIC has a significantly higher precision and recall than other methods when all vertices are labeled. This is because the CLRG algorithm involves the construction of an MST, which should be topologically similar to the completely labeled tree. \subsubsection*{Average branch length (substitution rate)} All methods perform poorly on trees with short average branch lengths of 0.001 subs/site, with median recall smaller than 0.8 each (see Fig. \ref{fig:precisionRecallAll}D). This is because a significant portion of the simulated sequences are identical. Thus, in FJ-BIC, NJc-BIC, RG-BIC, and CLRG-BIC there is a preference for choosing parent-child relationships over sibling relationships. CLRG-BIC has significantly higher precision than other methods if branch lengths are either very small or very large. FJ-BIC has high precision if the average branch length is between 0.004 and 0.064. In trees with larger branch lengths there is a high chance that sequences undergo multiple substitutions at the same site. This effect has been termed genetic saturation and results in an underestimation of the true evolutionary distance. Additionally, estimates of large distances are associated with large variance \cite{Hoyle2003}, which results in the selection of wrong neighbors in the neighbor-joining step.
CLRG-BIC has higher performance for trees with large branch lengths because the input to CLRG-BIC is the MST and the construction of the MST is probably robust to noise in distance estimates. The performance of SA is not greatly affected by long branches. \subsubsection*{Number of labeled vertices (taxa)} The performance of all the methods is expected to worsen with increasing number of labeled vertices. RG shows significant change in precision/recall with corresponding median values changing from 0.88/0.75 (5 labeled vertices) to 0.83/0.61 (80 labeled vertices) (see Fig. \ref{fig:precisionRecallAll}E). The change in precision and recall shown by SA is not significant. FJ-BIC and CLRG-BIC show a significant drop in precision but not in recall. Even for trees with 320 taxa, FJ-BIC has high median precision and recall of 0.92 and 0.9 respectively. NJc-BIC shows significant change in both precision and recall with median precision/recall changing from 0.93/0.93 to 0.89/0.91. \subsubsection*{Sequence length} The performance of all methods improves with increase in sequence length. For all settings of sequence length, FJ-BIC is among the methods with significantly high precision (see Fig. \ref{fig:precisionRecallAll}F). FJ-BIC is among the methods with significantly high recall for sequences of length 1000 nt to 4000 nt. For all settings of sequence length, SA is among the methods with significantly high recall. \subsubsection*{Summary of performance} For the simulations performed at the default parameter settings, the methods listed in order of decreasing median precision are FJ-BIC (0.93), NJc-BIC (0.9), CLRG-BIC (0.89), RG-BIC (0.82), and SA (0.77), and the methods listed in order of decreasing median recall are SA (0.96), NJc-BIC (0.92), CLRG-BIC (0.92), FJ-BIC (0.91) and RG-BIC (0.63). In 15 out of the 22 simulated scenarios FJ-BIC is among the methods with significantly high precision. In 17/22 simulated scenarios SA is among the methods with significantly high recall. In 13/22 simulated scenarios NJc-BIC is among the methods with significantly high recall. FJ-BIC has a median recall that is greater than 0.9 in 16/22 simulated scenarios. The remaining scenarios are (i) trees with 20 taxa (recall of 0.89), (ii) trees in which branches are very short (0.001 and 0.004 subs/site; recall of 0.6 and 0.84 respectively), (iii) unbalanced trees (0.81), and (iv) trees constructed using short sequences (250 and 500 nt; recall of 0.77 and 0.85 respectively). \subsection{Comparison of time-complexities and run times} Clustering methods are deterministic procedures for which we report worst-case run times. Both FJ and NJ run in time $O(n^{3})$. RG runs in time $O(n^{4})$ which makes it infeasible to run on large datasets. CLRG runs in $O(n^{2}\log n + n_{i}\delta^{3}_{\max}(\mbox{MST}))$ where $n_{i}$ is the number of internal vertices of the MST and $\delta_{\max}(\mbox{MST})$ is the largest vertex degree in the MST. Model selection with BIC or AIC requires the repeated optimization of the likelihood function with respect to parameters of the substitution model. Computing the likelihood with Felsenstein's dynamic programming algorithm \cite{Felsenstein1981} takes $O(nA^{2}L)$ time where $L$ is the sequence length and $A$ is the size of the alphabet. $A$ is four for genetic sequences and 20 for protein sequences. We used RAxML for computing and optimizing likelihoods; RAxML is highly optimized for this task. 
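Schematically, the threshold selection reduces to scoring each candidate tree with BIC. The sketch below assumes BIC $= k\ln N - 2\ln\hat{L}$, with $k$ the number of branches plus substitution-model parameters and $N$ the number of alignment columns (our conventions for this illustration); \texttt{fit\_fj\_tree} is a hypothetical helper standing in for the FJ construction plus likelihood optimization described above.
\begin{verbatim}
import math

def bic(log_likelihood, n_branches, n_subst_params, n_sites):
    k = n_branches + n_subst_params          # free parameters (our convention here)
    return k * math.log(n_sites) - 2.0 * log_likelihood

def select_threshold(thresholds, fit_fj_tree, n_sites):
    """fit_fj_tree(eps) is assumed to return (log_likelihood, n_branches,
    n_subst_params) for the FJ tree built with distance threshold eps."""
    scored = [(bic(*fit_fj_tree(eps), n_sites=n_sites), eps) for eps in thresholds]
    return min(scored)[1]                    # threshold with the smallest BIC
\end{verbatim}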
SA performs Bayesian inference by MCMC sampling, a stochastic procedure whose runtime depends on how easily the MCMC chain moves through the space of trees and model parameters. The observed run times (see Fig. \ref{fig:runTimesAll}) suggest that FJ-BIC and NJc-BIC are the fastest methods for trees containing up to 320 taxa, with median run times of 5.4 and 4.8 minutes, respectively. CLRG-BIC took around 9.3 minutes to reconstruct trees containing 320 taxa and showed the slowest growth in run time. RG showed the largest growth in run time, taking 4.8 hours for reconstructing trees with 320 taxa. SA was run with MCMC chain-length set to $10^8$ states. SA took around two hours to construct trees containing 20 taxa and 30 hours for constructing trees containing 320 taxa. \begin{figure*}[htbp] \centering \includegraphics[width=0.6\textwidth]{runTime_numberOfObservedVertices.pdf} \vspace*{1 em} \caption{A comparison of run times of all methods in the scenario where the number of labeled vertices was varied. Run times are shown on a log-scale.} \label{fig:runTimesAll} \end{figure*} \subsection{HIV-1 transmission chain data} We applied FJ-BIC to a dataset of HIV-1 subtype C \textit{env} gene sequences that were sampled from 11 hosts who are part of a partially known transmission chain \cite{Vrancken2014}. We selected this dataset because it contains sequences from viruses that are evolutionarily closely related. We discarded 31 sequences which had gaps and analyzed the remaining 181 sequences of length 1376 nt. The hosts are labeled $A,B,C,D,E,F,G,H,I,K,\text{ and } L$. Sequences from multiple time points are available for $A,B,C,D,E,\text{ and } H$. The sampling times for all sequences are known. All the pairs of hosts who were involved in a transmission event are known, having been identified by interviewing the hosts. The direction of transmission is known for all transmission events except for the transmission between $A$ and $B$. Additionally, we compared the bootstrap support of branches in the FJ-BIC tree with that of branches in the maximum likelihood tree constructed using RAxML v8.2 \cite{Stamatakis2006}. We first identified the most appropriate model of substitution using JModelTest2 \cite{Darriba2012}. A BIONJ tree \cite{Gascuel1997} was constructed with Jukes-Cantor distances and AIC was computed for the following models of substitution: JC, F81, K80, HKY, TrNef, TrN, TPM1, TPM1uf, SYM, GTR. Variants of all substitution models which included a parameter for invariant sites (I) and/or a Gamma model ($\Gamma$) for inter-site rate variation were also tested. GTR+$\Gamma$+I was the best model, i.e., the one with the smallest AIC score. We constructed a tree with RAxML using the original sequence alignment and the GTRCATI model of substitution, which we refer to as the RAxML tree. The CAT model approximates the Gamma model and enables fast computation \cite{Stamatakis2006}. We inferred a generally labeled tree using FJ-BIC. Pairwise distances were computed using RAxML, which involved the following steps \cite{Stamatakis2005}. First, a maximum parsimony tree was constructed using stepwise addition and the parameters of the substitution model GTR+$\Gamma$ were optimized. The optimized substitution model was used to compute maximum likelihood distances for all sequence pairs. For computing the likelihood of FJ trees at various values of the distance threshold we used RAxML as follows. 
FJ trees were converted to leaf-labeled trees by replacing each interior labeled vertex with a latent vertex and adding an edge of length zero between the newly added latent vertex and the former interior labeled vertex. This conversion was necessary since RAxML can only handle leaf-labeled trees. We then maximized the likelihood of the converted FJ tree by fixing the tree topology and branch lengths and optimizing the parameters of the substitution model GTR+$\Gamma$. The maximized log-likelihood was used for computing BIC. The FJ-BIC tree was rooted assuming a strict molecular clock model. We define the optimal position of the root as that position which minimizes the residual sum of squares (RSS) obtained when regressing the distances from the root to each labeled vertex against sampling times. We placed the root at the midpoint of each branch and computed the RSS for each branch. We then picked the branch that had the smallest RSS and searched along the branch for the position of the root with the smallest RSS. This position was chosen as the root of the FJ-BIC tree. \subsubsection{Compatibility of the FJ-BIC tree with known transmission events} \begin{figure*}[htbp] \centering \includegraphics[width=0.8\textwidth]{FJBICTreeForBelgianTransmissionChainData_final.pdf} \vspace{1 em} \caption{The FJ-BIC tree of 181 HIV-1 \textit{env} gene sequences sampled from hosts involved in a known transmission chain. Each vertex is represented by a circle whose inner color is black if the vertex is labeled and white if the vertex is latent. The outer color of each circle indicates the host of the corresponding vertex. Branches reflecting transmission events have been labeled. 9/10 transmission events are compatible with the FJ-BIC tree. The red box highlights the transmission event $B \rightarrow I$ which is not compatible with the FJ-BIC tree.} \label{fig:FJBICTree} \end{figure*} In order to check if the known transmission events are compatible with a rooted tree we needed to label all latent vertices with a host. Latent vertices were visually labeled with hosts using standard maximum parsimony. The labeling that we applied resulted in the minimum possible total cost of 10 (see Fig. \ref{fig:FJBICTree}). Given a rooted tree with all vertices labeled with a host, we define a transmission event ($X \rightarrow Y$) to be compatible with the tree if there is a directed edge from a vertex labeled $X$ to a vertex labeled $Y$. 9 out of 10 transmission events are compatible with the FJ-BIC tree. The direction of transmission between $A$ and $B$ is not known. The FJ-BIC tree suggests that $A$ was infected by $B$. The branch of the FJ-BIC tree that suggests this transmission event has been labeled with the known transmission event $A \leftrightarrow B$. 8 out of the remaining 9 transmission events are compatible with the FJ-BIC tree and branches indicative of these transmission events are labeled in Fig. \ref{fig:FJBICTree}. The transmission event $B \rightarrow I$ is not compatible with the FJ-BIC tree (red solid box in Fig. \ref{fig:FJBICTree}), which may be due to insufficient sampling; only three sequences were available from host $I$. It is possible that the polytomy present inside the red dotted box in Fig. \ref{fig:FJBICTree} may be resolved if more sequences from $I$ were available, in such a way that the resulting tree would be compatible with the transmission $B \rightarrow I$. 
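The root-placement criterion described earlier in this section can be summarized in a short sketch: for each candidate root position, the root-to-tip distances are regressed against the sampling times and the RSS is recorded, and the candidate with the smallest RSS is kept. The distances, dates, and names below are purely illustrative and assume the candidate root-to-tip distances have already been read off the tree.
\begin{verbatim}
import numpy as np

def root_to_tip_rss(distances, sampling_times):
    # RSS of an ordinary least-squares fit of root-to-tip distance
    # against sampling time (strict molecular clock assumption)
    t = np.asarray(sampling_times, dtype=float)
    d = np.asarray(distances, dtype=float)
    X = np.column_stack([np.ones_like(t), t])     # intercept and slope
    coef, *_ = np.linalg.lstsq(X, d, rcond=None)
    r = d - X @ coef
    return float(r @ r)

# each candidate root gives one vector of distances to the labeled vertices
candidates = {"branch 1 midpoint": [0.11, 0.13, 0.20, 0.24],
              "branch 2 midpoint": [0.09, 0.15, 0.21, 0.22]}
times = [1997.2, 1998.5, 2001.1, 2003.4]          # sampling dates
best = min(candidates, key=lambda c: root_to_tip_rss(candidates[c], times))
print(best)
\end{verbatim}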
\subsubsection{Branch support in the FJ-BIC tree and the RAxML tree} \begin{figure*}[htbp] \centering \includegraphics[width=0.42\textwidth]{supportsForCommonBranches.pdf} \includegraphics[width=0.56\textwidth]{supportsForOtherBranches.pdf} \vspace*{1 em} \caption{Left: Comparing the support of common branches in the FJ-BIC tree and the RAxML tree. Right: Supports for branches that are only present in either the FJ-BIC tree or the RAxML tree.} \label{fig:commonBranches} \end{figure*} The bootstrap support of a branch is defined as the fraction of bootstrap replicate trees that contain this branch. The bootstrap supports of branches in the FJ-BIC tree and the RAxML tree were computed using 1000 replicates. Since each labeled vertex is a leaf in all bootstrap RAxML trees, all terminal branches of the RAxML tree trivially have a support of one. The support of a terminal branch in the FJ-BIC tree is not necessarily one. 75 internal branches were common to both the FJ-BIC tree and the RAxML tree. The median (IQR) supports for the common branches were 0.73 (0.43) and 0.76 (0.38) in the FJ-BIC and the RAxML tree, respectively. Supports for the common branches in both trees were strongly correlated (Pearson's $\rho = 0.84$, see Fig. \ref{fig:commonBranches}). There are 44 and 103 internal branches that are present only in the FJ-BIC tree and the RAxML tree, respectively, with lower median (IQR) branch supports of 0.22 (0.28) and 0.18 (0.33) (see Fig. \ref{fig:commonBranches}). The 124 terminal branches in the FJ-BIC tree have a median (IQR) branch support of 0.95 (0.16). On average, an internal branch in the FJ-BIC tree has a higher support than an internal branch in the RAxML tree. $36\%$ of FJ-BIC branches and $25\%$ of RAxML branches have supports greater than 0.7. \subsection{Summary and Outlook} In this paper, we present a fast distance-based agglomeration method called family-joining (FJ) for constructing generally labeled trees. A key feature of the algorithm presented here is its low worst-case time complexity, $O(n^{3})$, where $n$ is the number of taxa, making it feasible for analyzing large data sets. For precomputed distances between 320 taxa, FJ-BIC took around 5.4 minutes ($\pm 0.76$) to estimate a tree. At each agglomeration step FJ only adds branches (both internal and terminal) if there is sufficient data to support this move. The algorithm treats short branches as unreliable and identifies an optimal threshold by minimizing test error. We tested two methods, FJ-BIC and FJ-CV error, which minimize BIC and CV error, respectively. When compared with related methods, FJ-BIC was best at reconstructing the correct tree across a wide range of simulation settings. FJ-BIC was applied to HIV sequences sampled from individuals involved in a known transmission chain. The FJ-BIC tree was compatible with nine out of ten transmission events. On average, internal branches in the FJ-BIC tree were found to have higher statistical support than internal branches in the tree constructed using RAxML. A method for reconstructing phylogenetic trees with high precision circumvents the need for a time-consuming bootstrap analysis. To the best of our knowledge, the method presented here is the first attempt at modeling evolutionary relationships using generally labeled trees. 
\section{Materials and Methods} \subsection{Terminology} A phylogenetic tree is an edge-weighted undirected tree consisting of two types of vertices, labeled vertices (representing observed sequences) and latent vertices (representing unobserved sequences). Sequence information is present only at labeled vertices. Where appropriate, we refer to edges as branches and edge weights as branch lengths. A branch length quantifies the amount of expected change between the sequences corresponding to the respective incident vertices. Branch lengths are usually in units of substitutions per site. Labeled vertices and latent vertices have minimum degrees of one and three, respectively. For a tree consisting of $n$ labeled vertices, the number of latent vertices lies between zero and $n-2$. For trees with a maximal number of latent vertices, all labeled vertices are leaves (degree one) and all latent vertices have degree three. Trees are leaf-labeled if all labeled vertices are leaves, else they are generally labeled. A distance matrix $\mathbf{d}$ is tree-additive in a tree $T$ if the distance between each pair of labeled vertices equals the corresponding path length (sum of branch lengths along the unique path between the two vertices) in $T$. Each branch partitions the set of all labeled vertices into two disjoint sets which are referred to as the split of the branch. The two sets of labeled vertices that are present in a split are referred to as the sides of the split. A split is compatible with a tree if there is any branch in the tree such that removing the branch bipartitions the set of labeled vertices as defined by the split. $S(T)$ denotes the set of splits that are defined by a branch in $T$. A pair of vertices are siblings if both of them are leaves and are adjacent to the same vertex. A vertex pair is in a parent-child relationship if they are adjacent and one of them is a leaf. Thus, we call siblings what is called neighbors in the context of the neighbor-joining algorithm. A rooted tree is a tree with directed edges. In such trees there is one latent vertex (the root) which has indegree zero and outdegree greater than zero. 
All edges in a rooted tree are directed away from the root. Edges incident to leaves are referred to as terminal edges, while edges joining two internal vertices are referred to as internal edges. \subsection{Family-joining algorithm} \begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{FJ_overview.pdf} \vspace*{1 em} \caption{An illustration of the family-joining algorithm. The main steps have been labeled with their time complexity.} \label{fig:FJ_illustration} \end{figure} Our objective is, given a tree-additive distance matrix $\mathbf{d}$, to infer the respective tree $T_{o}$. $T_{o}$ may be generally labeled and may contain latent vertices with degree greater than three. We assume that all branch lengths in $T_{o}$ are strictly greater than zero. We provide a method which correctly infers $T_{o}$ using entries in $\mathbf{d}$. Let $\mathcal{T}_{\max}$ be the set of all trees that satisfy the following criteria: (i) their set of leaves includes all the labeled vertices in $T_{o}$, (ii) they have the maximum number of latent vertices, and (iii) $\mathbf{d}$ is tree-additive in every tree in $\mathcal{T}_{\max}$. All splits in $S(T_{o})$ are compatible with every tree in $\mathcal{T}_{\max}$. If this were not true for some tree $T_{\max}$ in $\mathcal{T}_{\max}$ then there would be two branches, $b_o$ in $T_o$ and $b_{\max}$ in $T_{\max}$, such that the labeled vertex pairs $\{i, j\}$ and $\{k, l\}$ lie on opposite sides of $b_o$, while $\{i, k\}$ and $\{j, l\}$ lie on opposite sides of $b_{\max}$. Applying Buneman's 4-point condition \cite{Buneman1971} would result in the following contradictory inequalities: \begin{align*} &d_{ij} + d_{kl} < d_{ik} + d_{jl} \text{ for } b_{o}\\ &d_{ij} + d_{kl} \geq d_{ik} + d_{jl} \text{ for } b_{\max} \end{align*} The inequality is strict for $b_{o}$ as all branch lengths in $T_{o}$ are greater than zero. Thus any tree in $\mathcal{T}_{\max}$ can be constructed as follows. If there is a labeled vertex in $T_o$ with degree greater than one, replace this vertex with a latent vertex and add a branch of length zero between the labeled vertex and the newly added latent vertex. If there is a latent vertex with degree greater than three ($v_\text{poly}$), disconnect two randomly selected vertices adjacent to $v_\text{poly}$ and connect them to a new latent vertex, which is itself attached to $v_\text{poly}$ by a branch of length zero. Lengths of branches between the newly added latent vertex and each of the two reconnected vertices are the same as the lengths of the original branches between $v_\text{poly}$ and those vertices. Both of these augmentation operations are performed until all latent vertices have degree three and there are no labeled internal vertices. Applying the neighbor-joining algorithm using distances in $\mathbf{d}$ yields a tree $T_{\it{NJ}}$ with the maximum number of latent vertices such that $\mathbf{d}$ is tree-additive in $T_{\it{NJ}}$. Thus $T_{\it{NJ}}$ belongs to $\mathcal{T}_{\max}$ and consequently, neighbors in $T_{\it{NJ}}$ are either parent-child or siblings in $T_{o}$. NJ is an agglomerative clustering method that identifies, at each step, the pair of vertices to cluster by minimizing the following objective value \cite{Saitou1987,Studier1988}. \begin{equation} \label{eqn:neighborIdentificationStep} (n-2)d_{ij} - \sum_{k\neq i}d_{ik}-\sum_{k\neq j}d_{jk} \end{equation} where $n$ is the number of vertices that are yet to be clustered. Neighbors $i$ and $j$ can be classified as parent-child or siblings based on the following quantity. 
$$\Delta_{ij}=\displaystyle\sum_{k\neq i,j}\dfrac{d_{ji}+d_{ik}-d_{jk}}{2(n-2)}$$ It can be easily shown that: \begin{equation*} \begin{aligned} &\text{ if $i$ is the parent of $j$ }&&\text{then } \Delta_{ij} = 0, \\ &\text{ if $j$ is the parent of $i$ }&&\text{then } \Delta_{ij} = d_{ij}, \\ &\text{ if $i$ and $j$ are siblings }&&\text{then } 0 < \Delta_{ij} < d_{ij} \\ \end{aligned} \end{equation*} These criteria are shown to be true in the following statements. If $i$ is the parent of $j$ then the path from $j$ to any vertex $k \neq i,j$, will visit $i$. Thus $d_{jk} = d_{ji} + d_{ik}$, which gives $\Delta_{ij} = 0$ and $\Delta_{ji} = d_{ij}$. If $i$ and $j$ are siblings then $d_{jk} = d_{ju} + d_{uk}$ where $u$ is the vertex adjacent to both $i$ and $j$. Similarly $d_{ik} = d_{iu} + d_{uk}$, which gives $\Delta_{ij} = d_{iu}$. It follows that $0 < \Delta_{ij} < d_{ij}$. When distances are estimated from sequences we use a threshold $\epsilon$ for classifying the relationship as parent-child or sibling. Specifically $i$ is the parent of $j$ if $|\Delta_{ij}| < \epsilon$. The unordered vertex pair $\{i,j\}$ are said to be in a parent-child relationship if \begin{equation} \label{eqn:relationshipTest} \min\{|\Delta_{ij}|,|\Delta_{ji}|\} < \epsilon \end{equation} The criterion for selecting the appropriate $\epsilon$ is discussed in detail later. When $\mathbf{d}$ is tree-additive any sufficiently small $\epsilon$ can be used for correctly classifying the vertices. The family-joining algorithm consists of two main parts: GetTreeTopology which infers the tree topology, and GetBranchLengths which estimates the branch lengths. We describe these two steps in detail below. \subsubsection{Inferring tree topology} An overview of GetTreeTopology is provided in Algorithm \ref{algo:getTreeTopology}. GetTreeTopology initializes a so-called active vertex set $V_\text{a}$ with the set of all labeled vertices. It then performs agglomerative clustering where the following actions are performed at each step. The pair $\{i,j\}$ of vertices in $V_\text{a}$ that minimizes (\ref{eqn:neighborIdentificationStep}) is identified. $i$ and $j$ are then classified as parent-child or siblings using (\ref{eqn:relationshipTest}). If $i$ is the parent of $j$, or vice-versa, an edge is added between them and all distances from the child are removed from $\mathbf{d}$. If $i$ and $j$ are found to be siblings then we search for another vertex $k$ in $V_\text{a}$ that minimizes the following quantity. \begin{equation} \label{eqn:deltaParentToChildren} |d_{ik}+d_{kj}-d_{ij}| \end{equation} If $|d_{ik}+d_{kj}-d_{ij}| < 2\epsilon$ then $k$ is the parent of both $i$ and $j$. Corresponding edges are added and all distances from $i$ and $j$ are removed from $\mathbf{d}$. $i$ and $j$ are removed from $V_\text{a}$. Note that there are alternate criteria for checking if there is a vertex $k$ that is the parent of both $i$ and $j$. One such criterion is to compute \begin{equation} \label{eqn:deltaParentToChildren2} \min\{|\Delta_{ki}|,|\Delta_{kj}|\}, \end{equation} and consider $k$ to be the parent of both $i$ and $j$ if $\min\{|\Delta_{ki}|,|\Delta_{kj}|\} < 2\epsilon$. In the simulation study we found that reconstruction accuracy was higher when we used the quantity in eqn. (\ref{eqn:deltaParentToChildren}) as opposed to eqn. (\ref{eqn:deltaParentToChildren2}) (see Supplementary Fig. 4). This is probably because the quantity in eqn. (\ref{eqn:deltaParentToChildren}) is more robust to noise in the estimates of large distances. 
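Before turning to the case in which no such parent exists, the two relationship tests just described can be made concrete with a small sketch operating on a dense distance matrix over the active vertices. The matrix, the threshold value, and the helper names are illustrative assumptions, not part of the FJ implementation.
\begin{verbatim}
import numpy as np

def delta(d, i, j):
    # Delta_ij computed directly from its definition
    n = d.shape[0]
    ks = [k for k in range(n) if k not in (i, j)]
    return sum(d[j, i] + d[i, k] - d[j, k] for k in ks) / (2.0 * (n - 2))

def classify_pair(d, i, j, eps):
    # parent-child if min(|Delta_ij|, |Delta_ji|) < eps, else siblings
    if min(abs(delta(d, i, j)), abs(delta(d, j, i))) < eps:
        return "parent-child"
    return "siblings"

def find_parent_of_pair(d, i, j, eps):
    # search for a vertex k minimizing |d_ik + d_kj - d_ij|;
    # k is the parent of both i and j if this quantity is below 2*eps
    others = [k for k in range(d.shape[0]) if k not in (i, j)]
    k = min(others, key=lambda k: abs(d[i, k] + d[k, j] - d[i, j]))
    return k if abs(d[i, k] + d[k, j] - d[i, j]) < 2.0 * eps else None

# tree-additive toy example: vertex 2 is the parent of leaves 0 and 1
d = np.array([[0.0, 3.0, 1.0],
              [3.0, 0.0, 2.0],
              [1.0, 2.0, 0.0]])
print(classify_pair(d, 0, 1, eps=1e-6))        # siblings
print(find_parent_of_pair(d, 0, 1, eps=1e-6))  # 2
\end{verbatim}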
If $k$ is not the parent of both $i$ and $j$, a latent vertex $l$ is introduced as the parent of both $i$ and $j$. Corresponding edges are added and distances from $l$ to any vertex $m$ in $V_\text{a}$ other than $i$ and $j$ are calculated using the following estimate by \citet{Studier1988}. \begin{equation} \label{eqn:distancesFromLatentVertex} d_{lm} = (d_{im}+d_{jm}-d_{ij})/2 \qquad\text{for }m\neq i,j \end{equation} $i$ and $j$ are removed from $V_\text{a}$ and all distances from $i$ and $j$ are removed from $\mathbf{d}$. Distances from $l$ are added to $\mathbf{d}$ and $l$ is added to $V_\text{a}$. The agglomeration step described above is repeated until the number of vertices in $V_\text{a}$ is less than four. After each iteration $V_\text{a}$ reduces by either one or two vertices. If $V_\text{a}$ has reached size three, we check using (\ref{eqn:deltaParentToChildren}) if there are vertices $i$, $j$, and $k$ in $V_\text{a}$ such that $k$ is the parent of both $i$ and $j$. If we find such vertices, corresponding edges are added. Otherwise, a latent vertex $u$ is introduced and edges are added between $u$ and the three remaining vertices. If $V_\text{a}$ has reached size two, an edge is added between the two remaining vertices. GetTreeTopology returns the list of edges of the estimated tree $\hat{T}$. $\hat{T}$ has the same topology as the true tree if distances are additive in the true tree. \begin{algorithm} \label{algo:getTreeTopology} \SetKwInput{Initialize}{Initialize} \SetKwData{numberOfSingletons}{$s$} \SetKwFor{While}{while}{do}{} \KwIn{The distance matrix $\mathbf{d}$, the threshold $\epsilon$, and the labeled vertices $V_{\text{obs}}$} \Initialize{edge-set $E \leftarrow \emptyset$, $V_{\text{a}} \leftarrow V_{\text{obs}}$} \While{$|V_{\text{a}}| > 3$}{ From $V_{\text{a}}$ pick $\{i,j\}$ minimizing (\ref{eqn:neighborIdentificationStep})\; Classify $\{i,j\}$ using (\ref{eqn:relationshipTest})\; \eIf{$\{i,j\}$ are parent-child}{ Add edge $\{i,j\}$ to $E$\; Remove child from $V_{\text{a}}$\; Remove distances from child, from $\mathbf{d}$\; }{ Remove $i$ and $j$ from $V_{\text{a}}$\; From $V_{\text{a}}$ pick $k$ minimizing (\ref{eqn:deltaParentToChildren})\; \eIf{$k$ is the parent of both $i$ and $j$} { Add edges $\{i,k\}$ and $\{j,k\}$ to $E$\; } { Introduce vertex $u$ and add it to $V_{\text{a}}$\; Add edges $\{i,u\}$ and $\{j,u\}$ to $E$\; Get distances from $u$ (\ref{eqn:distancesFromLatentVertex}), add to $\mathbf{d}$\; } Remove distances from $i$ and $j$, from $\mathbf{d}$\; } } \eIf{$|V_{\text{a}}|=2$}{ $\{i,j\} \leftarrow V_{\text{a}}$; Add edge $\{i,j\}$ to $E$\; }{ From $V_{\text{a}}$ pick $i,j,k$ minimizing (\ref{eqn:deltaParentToChildren})\; \eIf{$k$ is the parent of both $i$ and $j$}{ Add edges $\{i,k\}$ and $\{j,k\}$ to $E$\; }{ Introduce vertex $u$\; Add edges $\{i,u\}$, $\{j,u\}$, and $\{k,u\}$ to $E$\; } } \KwOut{edge-set $E$} \caption{GetTreeTopology} \end{algorithm} \subsubsection{Upper bound on the time complexity of GetTreeTopology} At first glance it appears that the neighbor identification step requires $\Omega(n^{3})$ time. This can be reduced to $O(n^{2})$ with the observation that the neighbor-joining objective can be reformulated as follows \cite{Studier1988}: \begin{align} &(n-2)d_{ij} - R_{i}-R_{j}\nonumber\\ &\mbox{ where } R_{i} = \sum_{k\neq i}d_{ik}\label{eqn:Ri} \end{align} From eq. (\ref{eqn:Ri}) it is evident that initializing each row sum $R_{i}$ with the original distances takes $O(n)$ time. 
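As an illustration of how the row sums enter the neighbor-identification step, the following sketch evaluates the reformulated objective for every pair in $O(n^{2})$ time once the $R_{i}$ have been initialized. The distance matrix below is an illustrative additive example, the function name is hypothetical, and ties are broken by iteration order.
\begin{verbatim}
import numpy as np

def pick_neighbors(d):
    # return the pair (i, j) minimizing (n - 2) d_ij - R_i - R_j,
    # where R_i is the i-th row sum of the distance matrix d
    n = d.shape[0]
    R = d.sum(axis=1)          # computed once in O(n^2), stored in O(n)
    best, pair = np.inf, None
    for i in range(n):
        for j in range(i + 1, n):
            q = (n - 2) * d[i, j] - R[i] - R[j]
            if q < best:
                best, pair = q, (i, j)
    return pair

d = np.array([[0.0,  5.0,  9.0,  9.0],
              [5.0,  0.0, 10.0, 10.0],
              [9.0, 10.0,  0.0,  8.0],
              [9.0, 10.0,  8.0,  0.0]])
print(pick_neighbors(d))       # (0, 1) for this additive example
\end{verbatim}
After each agglomeration step only the affected row sums need to be adjusted; this is the $O(1)$ update described next.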
Updating each $R_{i}$ after each agglomeration step is done by subtracting distances from children and, if applicable, adding distances to the newly introduced latent vertices. Thus the process of updating each $R_{i}$ takes $O(1)$ time. Additionally, storing all the $R_{i}$ in memory requires $O(n)$ space which incurs very little memory overhead compared to the $O(n^{2})$ space required to store all the pairwise distances. If all distances and row sums are stored in memory then identifying the neighbors takes $O(n^{2})$ time. Note that $\Delta_{ij}$ can also be reformulated for faster computation as follows. \begin{align*} \Delta_{ij}&=\sum_{k\neq i,j}\dfrac{d_{ji}+d_{ik}-d_{jk}}{2(n-2)}\\ &=\dfrac{d_{ji}}{2} + \dfrac{(\sum_{k\neq i,j}d_{ik})-(\sum_{k\neq i,j}d_{jk})}{2(n-2)}\\ &=\dfrac{d_{ji}}{2} + \dfrac{(d_{ij}+\sum_{k\neq i,j}d_{ik})-(d_{ji}+\sum_{k\neq i,j}d_{jk})}{2(n-2)}\\ &=\dfrac{d_{ji}}{2} + \dfrac{(\sum_{k\neq i}d_{ik})-(\sum_{k\neq j}d_{jk})}{2(n-2)}\\ &=\dfrac{d_{ji}}{2} + \dfrac{R_{i}-R_{j}}{2(n-2)}. \end{align*} Thus, once the neighbors $\{i,j\}$ have been identified, it takes $O(1)$ time to compute both $\Delta_{ij}$ and $\Delta_{ji}$. It takes $O(n)$ time to find the vertex $k$ which minimizes |$d_{ki}+d_{kj}-d_{ij}$|. The overall time-complexity of GetTreeTopology is $O(n^{3})$. The time-complexities associated with the main steps of GetTreeTopology are shown in Fig. \ref{fig:FJ_illustration}. \subsubsection{Efficient estimation of branch lengths} Branch lengths $b$ of $\hat{T}$ are estimated by ordinary least squares. This is done by solving $\mathbf{A}b=d$ where $d$ is a column vector containing all those entries of $\mathbf{d}$ that are above or alternatively all those entries of $\mathbf{d}$ that are below the diagonal. $\mathbf{A}$ is the branch incidence matrix of $\hat{T}$ and is constructed as follows. If the $m^{th}$ entry of the $d$ is $d_{ij}$, then \begin{equation} \label{eqn:constructBranchIncidenceMatrix} a_{me} = \begin{cases} 1 & \mbox{if } \mbox{the path from $i$ to $j$ contains $e$} \\ 0 & \mbox{otherwise} \end{cases} \end{equation} $\mathbf{A}$ has the dimension $n(n-1)/2 \times |E|$ where $|E|$ is the number of branches in the tree, $n$ is the number of labeled vertices, and $b$ is the vector of branch lengths that we wish to estimate. The ordinary least squares (OLS) estimate of branch lengths is given by \begin{equation} \label{eqn:OLSEstimate} \hat{b} = (\mathbf{A}^{t}\mathbf{A})^{-1}\mathbf{A}^{t}d. \end{equation} For the estimation of OLS branch lengths we do not make the assumption that distances are tree-additive. For leaf-labeled trees there is a fast $O(n^{2})$ algorithm for computing the OLS branch lengths \cite{Bryant1997}. Any algorithm that estimates OLS branch lengths by performing the matrix operations that are defined in eqn. (\ref{eqn:OLSEstimate}) needs to use all entries of the distance vector, and thus must run in $\Omega(n^{2})$ time \citep{Bryant1998}. Thus the algorithm by \citet{Bryant1997} is time-optimal. We show that this algorithm extends to generally labeled trees. The main steps involved in this computation are computing first $\mathbf{A}^{t}d$ and then $(\mathbf{A}^{t}\mathbf{A})^{-1}\mathbf{A}^{t}d$, each in $O(n^{2})$ time. We describe both of these steps below. \noindent Computing $\mathbf{A}^{t}d$ \noindent The $i^{\mbox{th}}$ entry of $\mathbf{A}^{t}d$, $\delta^{T}_{i}d$, is the sum of all distances between labeled vertices $a$ and $b$ that lie on either side of edge $e_{i}$. 
$\delta_{i}$ is the $i^{\mbox{th}}$ column of $\mathbf{A}$. For efficient computation of $\mathbf{A}^{t}d$, edges are visited in order of increasing distance from leaves, keeping track of which edges have already been visited. We first compute $\delta^{T}_{i}d$ for every terminal edge $e_{i}$ which is defined as follows. \begin{equation} \label{eqn:deltaId_terminal} \delta^{T}_{i}d = \sum_{j,j \neq i} d_{ij} \end{equation} Next we compute $\delta^{T}_{i}d$ for every internal edge $e_{i}$ which are visited in the order of increasing distance from leaves. Consider the internal vertex $\alpha$ with only one incident edge $e_{i}$ such that $\delta^{T}_{i}d$ has not been calculated. Let the edges incident to $e_{i}$ be $e_{j_{1}},\ldots,e_{j_{m}}$ Let $C_{i}$ be the side of the split of the edge $e_{i}$ that does not contain $\alpha$. Similarly $C_{j_{k}}$ is the side of the split of $e_{j_{k}}$ that does not contain $\alpha$. Depending on whether $\alpha$ is labeled or not labeled, $\delta^{T}_{i}d$ is computed as follows: Case 1: Vertex $\alpha$ is not labeled \cite{Bryant1997}. \begin{equation} \label{eqn:deltaId_internal_not_labeled} \begin{aligned} \delta^{T}_{i}d &= \sum_{k}\sum_{a \in C_{j_{k}}, b\in C_{i}}\mkern-18mu d_{ab}\\ &=\sum_{k} \delta^{T}_{j_{k}}d -2\sum_{k<l}\sum_{a\in C_{j_{k}},b\in C_{j_{l}}}\mkern-18mu d_{ab} \end{aligned} \end{equation} Case 2: Vertex $\alpha$ is labeled. \begin{equation} \label{eqn:deltaId_internal_labeled} \begin{aligned} \delta^{T}_{i}d &= \sum_{k}\sum_{a \in C_{j_{k}}, b\in C_{i}} \mkern-18mud_{ab} + \sum_{b \in C_{i}} d_{\alpha b}\\ &=\sum_{k} \delta^{T}_{j_{k}}d -2\sum_{k<l}\sum_{a\in C_{j_{k}},b\in C_{j_{l}}}\mkern-18mud_{ab} - \sum_{k}\sum_{b\in C_{j_{k}}}d_{\alpha b}+ \sum_{b \in C_{i}} d_{\alpha b} \end{aligned} \end{equation} Computing each element of $\mathbf{A}^{t}d$ involves the summation of entries of the distance vector. Since each element of the distance vector is summed over just once, $\mathbf{A}^{t}d$ is computed in $O(n^{2})$ time. \noindent Computing $(\mathbf{A}^{t}\mathbf{A})^{-1}(\mathbf{A}^{t}d)$ There is a closed-form solution for the OLS branch length $b_{0}$ of any edge $e_{0}$ which is formulated in terms of the splits, and the elements of $\mathbf{A}^{t}d$, that are defined by $e_0$ and the edges adjacent to $e_{0}$. A description of the branch length formula is given later. When computing branch lengths, edges can be visited in any order. We derive the branch length formula for an internal edge. \noindent \begin{figure*} \centering \includegraphics[width=\textwidth]{internalEdges.pdf} \vspace*{0.2 em} \caption{The three cases for the internal edge $e_{0}$. Case 1: Both $\alpha$ and $\beta$ are not labeled. Case 2: Only $\alpha$ is labeled. Case 3: Both $\alpha$ and $\beta$ are labeled. The triangles represent subtrees.} \label{fig:internalEdges} \end{figure*} Consider the internal edge $e_{0}$ shown in Fig. \ref{fig:internalEdges} with adjacent edges $e_{1}, \ldots e_{k}, e_{k+1} \ldots e_{m}$. $e_{0}$ is incident to $\{\alpha,\beta\}$. The respective sizes of the parts of the split defined by $e_{0}$ are $n_{\alpha}$ and $n_{\beta}$ For each edge $e_{i}$ define $P_{i} = \sum_{x\in A_{i}, y \in B_{i}} p_{xy}$ where $A_{i}$ and $B_{i}$ are the parts of the split defined by edge $e_{i}$. Here $p_{xy}$ denotes the length of the path from $x$ to $y$ when branch lengths are determined by OLS. It turns out that $P_{i} = \delta_{i}^{T}d$. 
For each edge $e_{i}$, let $C_{i}$ be the side of the split that does not contain $\alpha$ and $\beta$. $n_{i}$ is the cardinality of $C_{i}$. Define \begin{equation*} Q_{i}=\begin{cases} \sum_{x \in C_{i}} p_{\alpha x}, & \mbox{if } 1 \leq i \leq k\\ \sum_{x \in C_{i}} p_{\beta x}, & \mbox{if } k+1 \leq i \leq m\\ \end{cases} \end{equation*} For the case where both $\alpha$ and $\beta$ are not labeled it can be shown that \cite{Bryant1997} $$\underline{P} = (nI - 2N)\underline{Q} +NU\underline{Q} + b_{0}N\underline{v}$$ where $N$ is the $m \times m$ diagonal matrix with $(n_{1},n_{2},\ldots, n_{m})$ on the diagonal, $I$ is the identity matrix, $\underline{Q} = (Q_{1},Q_{2},\ldots,Q_{m})^{T}$, $U$ is the $m \times m$ matrix of ones, $\underline{v}$ is the vector with $n_{\beta}$ in positions $1$ to $k$ followed by $n_{\alpha}$ in positions $k+1$ to $m$, and $\underline{P} = (P_{1},P_{2},\ldots,P_{m})^{T}$. Similarly for the internal edge $e_{0}$ \begin{equation*} P_{0}=\underline{v}^{T}\underline{Q}+ n_{\alpha}n_{\beta}b_{0} \end{equation*} Letting $X = (nN^{-1}-2I+U)$ and substituting $\underline{Q}$ gives the following branch length estimate. \begin{equation} \label{eqn:branchLengthEstimate} b_{0} = \dfrac{P_{0}-\underline{v}^{T}X^{-1}N^{-1}\underline{P}}{n_{\alpha}n_{\beta}-\underline{v}^{T}X^{-1}\underline{v}} \end{equation} For cases where only $\alpha$ and both $\alpha$ and $\beta$ are labeled, respectively, the derivation of the above-mentioned equations is similar to that described in \citet{Bryant1997} and is provided in the supplementary material. The formula, eqn. (\ref{eqn:branchLengthEstimate}), for branch length is valid only when $X^{-1}$ exists. \citet{Bryant1997} showed that $X$ is invertible as long as there is at most one zero on the diagonal of the matrix $(nN^{-1}-2I)$. The $i^{th}$ diagonal element is zero if $n/n_{i} = 2$, which occurs if there is an edge where both sides of the split have equal size. Even in generally labeled trees there can be at most one such edge. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{terminalEdges.pdf} \vspace*{1 em} \caption{The two cases for the terminal edge $e_{0}$. $\alpha$ is labeled in case 1 and not labeled in case 2. The triangles represent subtrees.} \label{fig:terminalEdges} \end{figure} There are two cases to consider for external branches, one if $\alpha$ is not labeled and the other if $\alpha$ is labeled (see Fig. \ref{fig:terminalEdges}). In both cases the derivation of the branch length formula is similar to what has been described earlier and is omitted. The branch length formulae turn out to be identical in all cases. The reader is referred to the supplementary material for the proof. For a more detailed description of the algorithm for computing OLS branch lengths, the reader is referred to \citet{Bryant1997}. Once $\mathbf{A}^{t}d$ has been computed, each branch length can be calculated in $O(n)$ time. Since there are $O(n)$ edges, the time complexity of computing OLS branch lengths is $O(n^{2})$. The overall time complexity of FJ is $O(n^{3})$. This can be reduced further if heuristics are used at the neighbor identification step, eqn. (\ref{eqn:neighborIdentificationStep}). OLS branch lengths may be negative, which has no biological interpretation. After estimating the branch lengths, all branches that are shorter than $\epsilon$ and are incident to a latent vertex are contracted. If there is a branch between two labeled vertices that has a negative length, its length is set to $10^{-7}$. 
$10^{-7}$ is smaller than the smallest non-zero distance estimate computed in any of the simulation scenarios. \subsection{Model selection} Values of $\epsilon$ are inversely related to the number of latent vertices and thus inversely related to model complexity. We performed model selection using three estimates for test error, cross-validation error, Akaike information criterion (AIC) and Bayesian information criterion (BIC). In all cases, model selection is performed by identifying the value of $\epsilon$ that minimizes the estimate for test error. Please refer to the Supplementary material for a description on how cross-validation error is computed. AIC and BIC are Taylor series approximations of the Kullback-Leibler distance between the generative model which one wishes to recover and the model that is obtained by maximum likelihood estimation. These are formulated as, $$\mbox{AIC} = -2 \log \mbox{likelihood} + 2m$$ $$\mbox{BIC} = -2 \log \mbox{likelihood} + m\log(n)$$ Under the likelihood framework, phylogenetic trees are probabilistic graphical models which are completely described by tree topology and branch lengths. $n$ denotes sample size and is given by sequence length. The number $m$ of parameters equals the number of branches in the tree. We use FJ branch lengths as approximations of the maximum likelihood branch lengths. Likelihood is computed using Felsenstein's pruning algorithm which is a dynamic programming algorithm that enables efficient calculation of the likelihood \cite{Felsenstein1981}. The calculation of cross-validation error is described on page 6 of the supplement. \subsection{Related methods considered in the comparative validation} \subsubsection{Sampled ancestors} We used the sampled ancestors package \cite{Gavryushkina2014} of BEASTv2.3.0 \cite{Drummond2012} for the comparative validation of the FJ algorithm. The following models were considered: the GTR model for substitution, the four-category $\Gamma$ model for rate heterogeneity across sites, the strict molecular clock model and the fossilized birth death model for generating trees. Uniform priors were set for all model parameters. For all datasets, $10^8$ states were visited using Markov chain Monte Carlo (MCMC) and every $10^5th$ state was sampled. The first $5\%$ of the sampled states were discarded as burn-in and the effective sample size (ESS) was computed for all model parameters using the R package CODA \cite{Plummer2006}. ESS were found to be greater than 200 for all parameters across all the MCMC chains indicating that the chains were sufficiently long. The trees that are produced by BEASTv2.3.0 are rooted and contain the maximum number of latent vertices. The sampled trees were post-processed by unrooting them and contracting all terminal edges of length zero. We reported the average precision and recall of the post-processed sampled trees from the true tree. \subsubsection{Recursive grouping and Chow-Liu recursive grouping} For assessing the performance of RG and CLRG we used the Matlab implementation that was provided by the authors. Both of these methods are distance-based. RG initially sets the active vertex set $V_{\text{a}}$ to the set of all labeled vertices. At each iteration $V_{\text{a}}$ is partitioned into so-called families using k-means clustering. For each family containing more than one vertex, a relationship test similar to the one used in FJ is performed. 
If there is a vertex that is the parent of all other vertices in the family then edges are added from the parent to each child. If no such parent is found then a latent vertex is introduced as the parent to all vertices of the family and corresponding edges are added. $V_{\text{a}}$ is reduced by removing all the children. This procedure is iterated until a connected graph is obtained. CLRG starts by constructing a minimum spanning tree over all the labeled vertices. For each internal vertex $v_{i}$, the vertex set $V_{i}$ comprising of $v_{i}$ and its neighbors is constructed and RG is applied to distances between vertices in $V_{i}$, producing the tree $T_{i}$. The subgraph in the minimum spanning tree that is induced by $V_{i}$ is replaced by $T_{i}$. Both RG and CLRG require the setting of two thresholds, $\epsilon$ and $\tau$. The first threshold, $\epsilon$ is used for performing the relationship test. RG and CLRG additionally contract branches that are smaller than this threshold. The second threshold, $\tau$ is used to filter out large distances and only distances below this threshold are used when performing the relationship test. We optimized $\epsilon$ using BIC and set $\tau$ to a reasonably high value of $0.5$. We modified the implementation provided by the authors, in order to correctly evaluate distances of value zero. Such distance estimates were encountered, predominantly, when the average branch length was the shortest and when a large fraction of internal vertices were labeled. The modification is that all distances of value zero were changed to $10^{-7}$. \subsubsection{Neighbor-joining with edge contraction} We implemented NJc in Python. NJc involves two steps. The first step is the construction of a tree using NJ. Subsequently all branches that are incident to a latent vertex and are smaller than a preselected threshold $\epsilon$ are contracted. We optimized $\epsilon$ using BIC. \subsection{Acknowledgments} PK is partially supported by German Center for Infection Research, grant no. DZIF 80008023. This work has been performed in the context of the EuResist Network GEIE, and the project MASTER-HIV/HEP which is funded by the German Health Ministry. \subsection{Availability of code} A program that constructs generally labeled trees using FJ-BIC is provided at https://bioinf.mpi-inf.mpg.de/publications/prabhavk/familyJoining. \clearpage \begin{center} \Large{Supplementary material} \end{center} \section{Fast OLS for generally labeled trees} In what follows we show that the branch length formula, eqn. (\ref{eqn:branchLengthEstimate}) (see also eqn. (10) in the main paper), that was derived by \citet{Bryant1997} for leaf-labeled trees is also applicable for generally labeled trees. We follow the same terminology that was defined in the main paper. \begin{figure*} \centering \includegraphics[width=\textwidth]{internalEdges.pdf} \vspace*{0.2 em} \caption{The three cases for the internal edge $e_{0}$. Case 1: Both $\alpha$ and $\beta$ are not labeled. Case 2: Only $\alpha$ is labeled. Case 3: Both $\alpha$ and $\beta$ are labeled. The triangles represent subtrees.} \label{fig:internalEdges} \end{figure*} Consider the internal edge $e_{0}$ shown in Fig. \ref{fig:internalEdges} with adjacent edges $e_{1}, \ldots e_{k}, e_{k+1} \ldots e_{m}$. $e_{0}$ is incident to the vertices $\alpha$ and $\beta$. The respective sizes of the sides of the split defined by $e_{0}$ are $n_{\alpha}$ and $n_{\beta}$. 
For each edge $e_{i}$, define $P_{i} = \sum_{x\in A_{i}, y \in B_{i}} p_{xy}$ where $A_{i}$ and $B_{i}$ are the sides of the split defined by edge $e_{i}$. Here $p_{xy}$ denotes the length of the path from $x$ to $y$ when branch lengths are determined by OLS. It turns out that $P_{i} = \delta_{i}^{T}d$. For each edge $e_{i}$, $i \neq 0$, let $C_{i}$ be the side of the split defined by $e_{i}$ that does not contain $\alpha$ and $\beta$. $n_{i}$ is the cardinality of $C_{i}$. Define \begin{equation*} Q_{i}=\begin{cases} \sum_{x \in C_{i}} p_{\alpha x}, & \mbox{if } 1 \leq i \leq k\\ \sum_{x \in C_{i}} p_{\beta x}, & \mbox{if } k+1 \leq i \leq m\\ \end{cases} \end{equation*} If both $\alpha$ and $\beta$ are not labeled (Case 1 in Fig. \ref{fig:internalEdges}) it can be shown that \citep{Bryant1997} $$\underline{P} = (nI - 2N)\underline{Q} +NU\underline{Q} + b_{0}N\underline{v}$$ where $N$ is the $m \times m$ diagonal matrix with $(n_{1},n_{2},\ldots, n_{m})$ on the diagonal, $I$ is the identity matrix, $\underline{Q} = (Q_{1},Q_{2},\ldots,Q_{m})^{T}$, $U$ is the $m \times m$ matrix of ones, $\underline{v}$ is the vector with $n_{\beta}$ in positions $1$ to $k$ followed by $n_{\alpha}$ in positions $k+1$ to $m$, $\underline{P} = (P_{1},P_{2},\ldots,P_{m})^{T}$, $n$ is the total number of labeled vertices, and $b_{0}$ is the branch length of the edge $e_{0}$ Similarly for the internal edge $e_{0}$, \begin{equation*} P_{0}=\underline{v}^{T}\underline{Q}+ n_{\alpha}n_{\beta}b_{0} \end{equation*} Letting $X = (nN^{-1}-2I+U)$ and substituting $\underline{Q}$ gives the following branch length estimate. \begin{equation*} b_{0} = \dfrac{P_{0}-\underline{v}^{T}X^{-1}N^{-1}\underline{P}}{n_{\alpha}n_{\beta}-\underline{v}^{T}X^{-1}\underline{v}} \end{equation*} For cases where only $\alpha$ and both $\alpha$ and $\beta$ are labeled, respectively, the derivation of the equations are similar to that described in \citet{Bryant1997} and is described below. 
\subsection*{Case 2: $\alpha$ is labeled and $\beta$ is not labeled} For edges $e_{i}$ incident to $\alpha$, $i = 1\ldots k$, we have \begin{align*} P_{i} &= \sum_{x\in A_{i}}\sum_{y \in B_{i}} p_{xy} \\ &=\sum_{j = 1, j\neq i}^{m}\sum_{x\in C_{i}}\sum_{y\in C_{j}} p_{xy} + \sum_{x\in C_{i}}p_{\alpha x}\\ &=\sum_{j = 1, j\neq i}^{k}\sum_{x\in C_{i}}\sum_{y\in C_{j}}(p_{\alpha x} + p_{\alpha y}) + \sum_{j = k+1}^{m}\sum_{x\in C_{i}}\sum_{y\in C_{j}}(p_{\alpha x} + b_{0} + p_{\beta y}) + \sum_{x\in C_{i}}p_{\alpha x}\\ &= \sum_{j = 1,j \neq i }^{k}\!\!\!\![n_{j}Q_{i} + n_{i}Q_{j}] + \sum_{j = k+1}^{m}\!\!\![n_{j}Q_{i} + n_{i}Q_{j} + n_{i}n_{j}b_{0}] + Q_{i}\\ &= (n-n_{i}-1)Q_{i} + n_{i}(Q_{1}+\ldots+Q_{i-1}+Q_{i+1}+\ldots+Q_{m}) + n_{i}n_{\beta}b_{0} + Q_{i}\\ &=(n-2n_{i})Q_{i} + n_{i}\sum_{j=1}^{m}Q_{j} + n_{i}n_{\beta}b_{0} \end{align*} For edges $e_{i}$ incident to $\beta$, $i = k+1\ldots m$, we have \begin{align*} P_{i} &= \sum_{x\in A_{i}}\sum_{y \in B_{i}} p_{xy} \\ &=\sum_{j = 1, j\neq i}^{m}\sum_{x\in C_{i}}\sum_{y\in C_{j}} p_{xy} + \sum_{x\in C_{i}}p_{\alpha x}\\ &=\sum_{j = 1}^{k}\sum_{x\in C_{i}}\sum_{y\in C_{j}}(p_{\beta x} + b_{0} +p_{\alpha y}) + \sum_{j = k+1, j\neq i}^{m}\sum_{x\in C_{i}}\sum_{y\in C_{j}}(p_{\beta x} + p_{\beta y}) + \sum_{x\in C_{i}}(p_{\beta x}+b_{0})\\ &= (\sum_{j = 1}^{k}n_{j}Q_{i} + n_{i}Q_{j} +n_{i}n_{j}b_{0}) + (\sum_{j = k+1,j \neq i }^{m}n_{j}Q_{i} + n_{i}Q_{j}) + Q_{i} + n_{i}b_{0}\\ &= (n-n_{i}-1)Q_{i} + n_{i}(Q_{1}+\ldots+Q_{i-1}+Q_{i+1}+\ldots+Q_{m}) + n_{i}(n_{\alpha}-1)b_{0} + Q_{i}+n_{i}b_{0}\\ &=(n-2n_{i})Q_{i} + n_{i}\sum_{j=1}^{m}Q_{j} + n_{i}n_{\alpha}b_{0} \end{align*} In matrix form, \begin{align*} &\underline{P} = (nI - 2N)\underline{Q} +NU\underline{Q} + b_{0}N\underline{v}\\ &\Leftrightarrow N(nN^{-1}-2I+U)\underline{Q} = \underline{P} - b_{0}N\underline{v} \end{align*} Setting $X = (nN^{-1}-2I+U)$ and rearranging, we get $$\underline{Q} = X^{-1}N^{-1}\underline{P}-b_{0}X^{-1}\underline{v}$$ For the internal edge $e_{0}$ we have \begin{align*} P_{0} &= \sum_{i=1}^{k}\sum_{j=k+1}^{m}\sum_{x \in C_{i}, y \in C_{j}} p_{xy} + \sum_{j=k+1}^{m} \sum_{x \in C_{j}}(b_{0} + p_{\beta x})\\ &=(\sum_{i=1}^{k}\sum_{j=k+1}^{m}\sum_{x \in C_{i}, y \in C_{j}}p_{\alpha x} + b_{0} + p_{\beta y}) + n_{\beta}b_{0} + \sum_{j=k+1}^{m}Q_{j}\\ &=(\sum_{i=1}^{k}\sum_{j=k+1}^{m}n_{j}Q_{i} + n_{i}n_{j}b_{0} + n_{i}Q_{j}) + n_{\beta}b_{0} + \sum_{j=k+1}^{m}Q_{j}\\ &=\sum_{i=1}^{k}n_{\beta}Q_{i} + \sum_{j=k+1}^{m}(n_{\alpha}-1)Q_{j} + (n_{\alpha}-1)n_{\beta}b_{0} + n_{\beta}b_{0} + \sum_{j=k+1}^{m}Q_{j}\\ &=\underline{v}^{T}\underline{Q}+ n_{\alpha}n_{\beta}b_{0} \end{align*} After substituting \underline{Q} and rearranging we get, \begin{equation} \label{eqn:branchLengthEstimate} b_{0} = \dfrac{P_{0}-\underline{v}^{T}X^{-1}N^{-1}\underline{P}}{n_{\alpha}n_{\beta}-\underline{v}^{T}X^{-1}\underline{v}} \end{equation} \subsection*{Case 3: Both $\alpha$ and $\beta$ are labeled} For edges $e_{i}$ incident to $\alpha$, $i = 1\ldots k$, we have \begin{align*} P_{i} &= \sum_{x\in A_{i}}\sum_{y \in B_{i}} p_{xy} \\ &=\left[\sum_{j = 1, j\neq i}^{m}\sum_{x\in C_{i}}\sum_{y\in C_{j}} p_{xy}\right] + \sum_{x\in C_{i}}p_{\alpha x} + \sum_{x\in C_{i}}p_{\beta x}\\ &=\left[\sum_{j = 1, j\neq i}^{k}\sum_{x\in C_{i}}\sum_{y\in C_{j}}p_{\alpha x} + p_{\alpha y}\right] + \left[\sum_{j = k+1}^{m}\sum_{x\in C_{i}}\sum_{y\in C_{j}}p_{\alpha x} + b_{0} + p_{\beta y}\right] + 2\sum_{x\in C_{i}}p_{\alpha x} + n_{i}b_{0}\\ &= \left[\sum_{j = 1,j \neq i }^{k} n_{j}Q_{i} + 
n_{i}Q_{j}\right] + \left[\sum_{j = k+1}^{m} n_{j}Q_{i} + n_{i}Q_{j} + n_{i}n_{j}b_{0}\right] + 2Q_{i}+n_{i}b_{0}\\ &= (n-n_{i}-2)Q_{i} + n_{i}(Q_{1}+\ldots+Q_{i-1}+Q_{i+1}+\ldots+Q_{m}) + n_{i}b_{0}(1+\!\!\!\!\sum_{j=k+1}^{m}\!\!\!\!n_{j}) + 2Q_{i}\\ &=(n-2n_{i})Q_{i} + n_{i}\sum_{j=1}^{m}Q_{j} + n_{i}n_{\beta}b_{0} \end{align*} By symmetry, for edges $e_{i}$ incident to $\beta$, $i = k+1\ldots m$, we have, $$P_{i}=(n-2n_{i})Q_{i} + n_{i}\sum_{j=1}^{m}Q_{j} + n_{i}n_{\alpha}b_{0}$$ In matrix form, $$\underline{P} = (nI - 2N)\underline{Q} +NU\underline{Q} + b_{0}N\underline{v}$$ For the internal edge $e_{0}$ we have \begin{align*} P_{0} &= \sum_{i=1}^{k}\sum_{j=k+1}^{m}\sum_{x \in C_{i}, y \in C_{j}} p_{xy} + \left[\sum_{j=1}^{k} \sum_{x \in C_{j}}b_{0} + p_{\alpha x}\right] + \left[\sum_{j=k+1}^{m} \sum_{x \in C_{j}}b_{0} + p_{\beta x}\right] + b_{0}\\ &=\left[\sum_{i=1}^{k}\sum_{j=k+1}^{m}\sum_{x \in C_{i}, y \in C_{j}}\!\!\!\!p_{\alpha x} + b_{0} + p_{\beta y}\right] + (n_{\alpha}+n_{\beta}-1)b_{0} + \sum_{j=1}^{m}Q_{j}\\ &=\left[\sum_{i=1}^{k}\sum_{j=k+1}^{m}\!\!n_{j}Q_{i} + n_{i}n_{j}b_{0} + n_{i}Q_{j}\right] + (n_{\alpha}+n_{\beta}-1)b_{0} + \sum_{j=1}^{m}Q_{j}\\ &=(n_{\beta}-1)\sum_{i=1}^{k}Q_{i} + (n_{\alpha}-1)\sum_{j=k+1}^{m}Q_{j} + (n_{\alpha}-1)(n_{\beta}-1)b_{0} +(n_{\alpha}+n_{\beta}-1)b_{0} + \sum_{j=1}^{m}Q_{j}\\ &=n_{\beta}\sum_{i=1}^{k}Q_{i} + n_{\alpha}\sum_{i=k+1}^{m}Q_{i} + n_{\alpha}n_{\beta}b_{0}\\ &=\underline{v}^{T}\underline{Q}+ n_{\alpha}n_{\beta}b_{0} \end{align*} After substituting \underline{Q} and rearranging we get, $$b_{0} = \dfrac{P_{0}-\underline{v}^{T}X^{-1}N^{-1}\underline{P}}{n_{\alpha}n_{\beta}-\underline{v}^{T}X^{-1}\underline{v}}$$ \begin{figure} \centering \includegraphics[width=0.45\textwidth]{terminalEdges.pdf} \vspace *{1 em} \caption{The two cases for the terminal edge $e_{0}$. $\alpha$ is labeled in case 1 and not labeled in case 2. The triangles represent subtrees.} \label{fig:terminalEdges} \end{figure} Consider the terminal edge $e_{0}$ shown in Fig. \ref{fig:terminalEdges} with adjacent edges $e_{1},e_{2}\ldots e_{m}$. $e_{0}$ is incident to the vertices $\alpha$ and $\beta$. The respective sizes of the sides of the split defined by $e_{0}$ are $n_{\alpha}$ and $n_{\beta}$. Since $e_{0}$ is a terminal edge the leaf $\beta$ is labeled. There are two cases to consider depending on if $\alpha$ is labeled or not labeled. If $\alpha$ is not labeled (Case 1 in Fig. \ref{fig:terminalEdges}), the branch length formula given by \citet{Bryant1997} is $$b_{0} = \dfrac{P_{0}-\underline{v}^{T}X^{-1}N^{-1}\underline{P}}{n_{\alpha}n_{\beta}-\underline{v}^{T}X^{-1}\underline{v}}$$ where $n_\alpha = (n-1)$, $n_\beta = 1$ and $k=m$. If $\alpha$ is labeled (Case 2 in Fig. \ref{fig:terminalEdges}), the branch length formula can be derived as follows. 
For edges $e_{i}$ incident to $\alpha$ we have, \begin{align*} P_{i} &= \sum_{x\in A_{i}}\sum_{y \in B_{i}} p_{xy}\\ &=\sum_{j = 1, j\neq i}^{m}\sum_{x\in C_{i}}\sum_{y\in C_{j}} p_{xy} + \sum_{x\in C_{i}}(p_{\alpha x} + p_{\beta x})\\ &=\sum_{j = 1, j\neq i}^{m}\sum_{x\in C_{i}}\sum_{y\in C_{j}} (p_{\alpha x} + p_{\alpha y}) + \sum_{x\in C_{i}}(2p_{\alpha x} + b_{0})\\ &=\sum_{j = 1, j\neq i}^{m}[n_{j}Q_{i}+n_{i}Q_{j}] +2Q_{i}+n_{i}b_{0}\\ &=(n-n_{i}-2)Q_{i} + n_{i}\sum_{j = 1, j\neq i}^{m}Q_{j}+2Q_{i}+n_{i}b_{0}\\ &=(n-2n_{i})Q_{i} + n_{i}\sum_{j = 1}^{m}Q_{j}+n_{i}b_{0}\\ \end{align*} In matrix form, $$\underline{P} = (nI - 2N)\underline{Q} +NU\underline{Q} + b_{0}N\underline{v}$$ For the terminal edge $e_0$ we have, \begin{align*} P_{0} &= \sum_{i=1}^{m}\sum_{x \in C_{i}} p_{\beta x} + b_{0}\\ &=(\sum_{i=1}^{m}\sum_{x \in C_{i}} p_{\alpha x} +b_{0}) + b_{0}\\ &=\sum_{i=1}^{m}Q_{i} + (n-1)b_{0}\\ &=\underline{v}^{T}\underline{Q}+ n_{\alpha}n_{\beta}b_{0} \end{align*} where $n_\alpha = (n-1)$, $n_\beta = 1$ and $k=m$. After substituting \underline{Q} and rearranging we get, $$b_{0} = \dfrac{P_{0}-\underline{v}^{T}X^{-1}N^{-1}\underline{P}}{n_{\alpha}n_{\beta}-\underline{v}^{T}X^{-1}\underline{v}}$$ \pagebreak \section{Molecular clock rate inferred by SA} \begin{figure}[htbp] \centering \includegraphics[width=0.8\textwidth]{meanClockRate} \caption{Rate of the strict molecular clock that is estimated by SA. The true rate of the strict molecular clock is 1.0 subs./site/time in all simulation scenarios.} \end{figure} \section{Comparison of various FJ-based methods} For computing cross-validation error the original sequence alignment with $L$ columns was partitioned into $K$ validation alignments by randomly sampling $L/K$ columns without replacement. For each validation alignment, the corresponding training alignment was constructed using the complimentary set of $L-L/K$ alignment columns. This procedure was repeated $R$ times, giving $RK$ training and validation alignments in total. ML distances were computed for all training and validation alignments. For a fixed value of $\epsilon$, FJ trees were constructed for each training distance matrix. We set $R$ to 10 and tried two values for $K$, i.e., 3 and 5. Test error was computed as the residual sum of squares between the fitted distances (path length on the tree) and the corresponding distances computed from the validation alignment. We then found the $\epsilon$ that minimized expected test error as this would yield the most generalizable model. \[\arg\min\limits_{\epsilon} \displaystyle\sum_{k}\displaystyle\sum_{i,j}\!\!\!\underbrace{({d_{T(\!\epsilon,k\!)}}(i,j)}_{\text{distance in fitted tree}}-\underbrace{{d_{V(k)}}(i,j))^{2}}_{\text{distance in validation set}} \] where $T(\epsilon,k)$ is the tree constructed at threshold $\epsilon$ using distances from the $k^{\text{th}}$ training alignment and $V(k)$ is the $k^{\text{th}}$ validation alignment. \begin{figure}[htbp] \centering \includegraphics[width=0.8\textwidth]{precisionRecall_supplement} \caption{A comparison of various FJ-based methods. FJ-BIC is the method that is presented in the main paper. FJ2-BIC checks if siblings have a parent using the criterion shown in eqn. (4) of the main paper. FJ-AIC uses AIC for model selection. FJ-3CV and FJ-5CV performs model selection using 3-fold CV and 5-fold CV respectively.} \label{fig:precisionRecall_FJ} \end{figure} \bibliographystyle{natbib}
\section{INTRODUCTION} Determination of the relationship between the acoustic field scattered by the seafloor and environmental parameters is crucial to understanding and predicting acoustic interaction with the ocean environment. An important step in this process is to perform measurements of seafloor scattering in conjunction with measurements of seafloor geoacoustic and roughness properties. The scattered field is typically characterized by the differential scattering cross section per unit area per unit solid angle, $\sigma$, which will be abbreviated here as `scattering cross section' or `cross section,' keeping in mind that it is dimensionless. It is a system-independent quantity that characterizes the angular and frequency dependence of the second moment of the acoustic pressure field due to scattering \citep{jackson_richardson_2007,pierce_acoustics}. In terms of acoustic scattering measurements, rock seafloors have received little attention to date, with five existing scattering strength measurements reported in the literature. To the authors' knowledge, detailed acoustic scattering measurements of rock seafloors, coupled with measured ground truth, have never been made. Scattering strength measurements of rock seafloors without quantitative geophysical parameters have been made by \cite{eyering_etal_1948} at 24 kHz, \cite{urick_1954} at 55 kHz, \cite{mckinney_anderson_1964} at 100 kHz, and \cite{soukup_gragg_2003} at 2-3.5 kHz. A report from the Applied Physics Laboratory, University of Washington \citep{APL_9407} presents model curves that were fit to scattering strength measurements taken at a rocky site. Previous measurements of scattering strength fall between -15 and -22 dB at 20$^\circ$ grazing angle, with the exception of the APL-UW measurement from `rough rock', which is approximately -8 dB at the same angle. These measurements tend to decrease monotonically with grazing angle, apart from typical statistical fluctuations, and some systematic ripples in \cite{soukup_gragg_2003}. Some of these measurements likely suffer from bias. The measurements by \cite{eyering_etal_1948} likely include the effect of multiple interactions with the sea surface and floor, and may represent an overestimate of scattering strength. One of the authors of the APL-UW report \citep{jackson_pc_TR9407doubts} has expressed concerns regarding the reliability of the calibration for the measurements on which the models were based. The model curves from the APL-UW report exceed the maximum possible Lambert's law curve, and would violate energy conservation unless the scattering cross section is azimuthally anisotropic or if enhanced backscattering \citep{ishimaru_chen_1990,thorsos_jackson_1991} were present. The present work addresses the paucity of scattering measurements from rock seafloors by presenting estimates of scattering strength obtained from glacially-eroded rock outcrops, accompanied by characterization of geoacoustic and roughness properties. These outcrops contain two contrasting roughness characteristics that allow model-data comparisons to be made under different conditions. Acoustic backscattering data at 100 kHz were collected off the coast of Sandefjord, Norway by the Norwegian Defence Research Establishment (FFI) aboard the HU Sverdrup II using the HISAS 1030 synthetic aperture sonar (SAS) system from a HUGIN autonomous vehicle \citep{midtgaard_etal_2011,fossum_etal_2008}. 
This sonar has not been calibrated in terms of its receiver sensitivity $s_r$ or source strength $s_0$, which are required for scattering strength estimates. The product of these two parameters was estimated by comparison of measured data to a model which used measured input parameters. Roughness estimates of rock outcrops were obtained using a digital stereo photogrammetry system. Geoacoustic parameters were estimated using an effective medium model with previously measured mineral composition of the bedrock in the area. Characterization of the environment, including estimates of geoacoustic and roughness parameters of the bedrock, is summarized in Section~\ref{sec:envchar}. Section~\ref{sec:expOver} gives an overview of the acoustic scattering experiment, and details the data processing and calibration technique. Scattering strength results are presented and discussed in Section~\ref{sec:results}, with conclusions given in Section~\ref{sec:conclusions}. \section{ENVIRONMENTAL CHARACTERIZATION} \label{sec:envchar} \subsection{Geoacoustic characterization} \label{sec:geoacoustic} The bedrock surrounding Larvik and Sandefjord, Norway, and their coastline is composed of monzonite, a crystalline intrusive igneous rock \citep{petersen_1978, neumann_1980, lemaitre_2002}. This material supports both compressional and shear waves, and like most natural materials, contains intrinsic dispersion and attenuation \citep{mavko_rockPhysics}. Wave propagation within monzonite is modeled here by treating the rock as an elastic medium with frequency-independent complex wave speeds. This model contains transverse and longitudinal waves, but is dispersionless with a linear dependence of attenuation on frequency. Since a linear frequency dependence of attenuation over a limited frequency range implies logarithmic dispersion via the Kramers-Kronig relations, this dispersionless model does not satisfy causality \citep{futterman_1962,milton_etal_1997}. Although the model is acausal, it is commonly used to characterize the seafloor, especially when little information is available \citep[Chap. 9]{jackson_richardson_2007}. The frequency dependence of interface scattering is primarily determined by the roughness spectrum, and is only weakly dependent on sound speed dispersion. Thus the effect of including appropriate dispersion in the elastic model would likely not be observable when comparing scattering measurements to models. Parameters for the elastic model are $\tilde{c}_{p}$ and $\tilde{c}_t$, the compressional and shear phase speeds, $\delta_p$ and $\delta_t$, the compressional and shear loss parameters (imaginary component of the complex sound speed divided by the phase speed), and $\rho_b$, the bulk density. The complex wave speeds, $c_p$ and $c_t$, are related to the phase speeds by $c_p = \tilde{c}_p (1+i \delta_p)$ and $c_t = \tilde{c}_t (1+i \delta_t)$ respectively, where $i$ is the imaginary unit \citep[Chap. 9]{jackson_richardson_2007}. The bulk and shear moduli, $K$ and $\mu$ respectively, are related to $\tilde{c}_p$ and $\tilde{c}_t$ through the standard formulae: $\tilde{c}_p = \sqrt{(K+ \frac{4}{3} \mu)/\rho_b}$ and $\tilde{c}_t = \sqrt{\mu/\rho_b}$ \citep{mavko_rockPhysics}. Geoacoustic parameters of the area were not measured, but bounds were computed using an effective medium approximation combined with previously measured mineral compositions \citep{neumann_1976, neumann_1980}. Crystalline igneous rock is composed of randomly-oriented crystal grains of individual minerals \citep{lemaitre_2002}.
An effective medium model is used to replace the heterogeneous granular material with a homogeneous material whose properties reflect the aggregate effect of the crystal grains. The narrowest bounds for the aggregate bulk and shear moduli without knowledge of grain shapes or distributions are attained by the Hashin-Shtrikman-Walpole (HSW) bounds for multiphase composites \citep{mavko_rockPhysics}. For each mineral component, its bulk modulus, shear modulus, density, and volume fraction $\beta_i$ are required. The two isotropic moduli are computed from anisotropic crystalline mineral elastic properties using a Voigt average \citep{denToonder_etal_1999}, and the bulk density is computed using a simple volume average. Due to the slight porosity of crystalline igneous rock, one of the components of the effective medium is water, the volume fraction of which was not measured by \cite{neumann_1976, neumann_1980}. Measurements of porosity in granite, a similar crystalline igneous rock, were made by \cite{tullborg_2006} in Sweden and by \cite{norton_knapp_1977} in the United States. These measurements range between 0.61\% and 2.60\% with a mean of 1.02\% and a standard deviation of 0.43\%. The porosity mean is used to compute the mean wave speeds and the bulk density, and the standard deviation is used to compute model uncertainties. Attenuation at 100 kHz cannot be estimated using the HSW bounds. Instead, an estimate for attenuation is based on measurements of water-saturated granite by \cite{coyner_martin_1990}. Using the resonant bar technique, the measured quality factor at 100 kHz, the center frequency of the HISAS 1030, was approximately 30. This estimate results in $\delta_p \approx 0.02$. Shear wave attenuation in saturated granite was found by the same researchers to be approximately twice that of compressional wave attenuation in this frequency range, so $\delta_t = 2 \delta_p$. The isotropic elastic moduli ($K$ and $\mu$) obtained from the Voigt average, the density ($\rho$), and the volume fraction ($\beta$) for each mineral in the Sandefjord area can be found in Table \ref{tab:mineralParameters}. Inputs to the Voigt average for each mineral are from \cite{lbRFM}. These moduli were used to estimate HSW bounds on compressional velocity, shear velocity, and density, which are reported in Table \ref{tab:geoacousticParameters}, along with the attenuation parameters. Since detailed information regarding mineral composition is rarely available for a given region, Table~\ref{tab:geoacousticParameters} also includes tabulated parameters for generic rock from Table IV.2 in \cite{APL_9407}, and generic granite (a similar crystalline igneous rock) from Table 5.2 in \cite{bourbie_1987_acoustics}. The granite parameters are the mean of the ranges reported in \cite{bourbie_1987_acoustics}, which are 4500 - 6000 m/s for $c_p$, 2500 - 3300 m/s for $c_t$, and 2500 - 2700 kg/m$^3$ for $\rho_b$. The parameters for generic rock agree poorly with the estimates based on detailed mineralogy, although the parameters for generic granite are much closer. This comparison indicates that tabulated values for geoacoustic properties may be adequate for modeling purposes if the lithology is known.
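As a rough illustration of how aggregate properties such as those in Table~\ref{tab:geoacousticParameters} follow from the per-mineral values in Table~\ref{tab:mineralParameters}, the Python sketch below computes a volume-averaged density and a Voigt (upper-bound) estimate of the aggregate moduli and wave speeds. This is only a sketch: the paper uses the narrower Hashin-Shtrikman-Walpole bounds, which also treat the fluid (water) phase properly, and the numbers below are simply those tabulated above.
\begin{verbatim}
import numpy as np

# Per-mineral isotropic moduli [GPa], densities [kg/m^3], and volume
# fractions [%] copied from Table I (water, albite, ..., apatite).
K    = np.array([2.20, 79.17, 110.84, 81.71, 80.43, 166.31, 184.70, 129.60, 161.00, 125.80])
mu   = np.array([0.00, 22.14,  42.88, 31.85, 32.86,  65.77,  75.99,  81.05,  91.30,  49.65])
rho  = np.array([1000., 2630., 2760., 2560., 2620., 3310., 3200., 3224., 5180., 3218.])
beta = np.array([1.20, 51.05, 8.97, 23.91, 3.32, 3.45, 0.08, 2.88, 2.05, 1.35]) / 100.0
beta /= beta.sum()                       # normalize the tabulated fractions

rho_b    = np.sum(beta * rho)            # volume-averaged bulk density
K_voigt  = np.sum(beta * K)              # Voigt (upper-bound) bulk modulus
mu_voigt = np.sum(beta * mu)             # Voigt (upper-bound) shear modulus

# Phase speeds from the standard isotropic relations quoted in the text.
cp = np.sqrt((K_voigt + 4.0 * mu_voigt / 3.0) * 1e9 / rho_b)
ct = np.sqrt(mu_voigt * 1e9 / rho_b)
print(f"rho_b = {rho_b:.0f} kg/m^3, cp (Voigt) = {cp:.0f} m/s, ct (Voigt) = {ct:.0f} m/s")
\end{verbatim}
As expected for an upper bound, the Voigt estimate lies slightly above the HSW upper bound reported in Table~\ref{tab:geoacousticParameters}.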
\begin{table} \centering \begin{tabular}{ l r r r r} \hline \hline Mineral Name & $K$ [GPa] & $\mu$ [GPa] & $\rho$ [kg /m$^3$] &$\beta$ [\%] \\ \hline Water & 2.20 & 0.00 & 1000 & 1.20 \\ Albite & 79.17 & 22.14 & 2630 & 51.05 \\ Anorthite & 110.84 & 42.88 & 2760 & 8.97 \\ Orthoclase & 81.71 & 31.85 & 2560 & 23.91 \\ Nepheline & 80.43 & 32.86 & 2620 & 3.32 \\ Diopside & 166.31 & 65.77 & 3310 & 3.45 \\ Enstatite & 184.70 & 75.99 & 3200 & 0.08 \\ Olivine & 129.60 & 81.05 & 3224 & 2.88 \\ Magnetite & 161.00 & 91.30 & 5180 & 2.05 \\ Apatite & 125.80 & 49.65 & 3218 & 1.35 \\ \hline \hline \end{tabular} \caption{Mineral composition of the experimental site, along with the volume fraction, $\beta$, isotropic bulk modulus, $K$, isotropic shear modulus, $\mu$, and density, $\rho$, of each mineral. Mineral fractions have been averaged from three sites in \cite{neumann_1976,neumann_1980}, and the elastic constants used to estimate the isotropic moduli are from \cite{lbRFM}.} \label{tab:mineralParameters} \end{table} \begin{table} \centering \begin{tabular}{ l c c c c c} \hline \hline \noalign{\smallskip} Parameter & \MyHead{1.3cm}{Lower\\ Bound} & \MyHead{1.3cm}{Upper\\ Bound} & Mean & \MyHead{1.5cm}{Generic\\Rock }& \MyHead{1.5cm}{Generic\\Granite} \\ \noalign{\smallskip} \hline $\tilde{c}_p$ [m/s] & 5945 & 6842 & 6393 & 3600 & 5300\\ $\tilde{c}_t$ [m/s] & 3198 & 3353 & 3276 & 1900 & 2900\\ $\rho_b$ [kg/m$^3$] & 2696 & 2720 & 2708& 2500 & 2600\\ $\delta_p$ & 0.02 &0.02 & 0.02 & 0.0018 & 0.01\\ $\delta_t$ & 0.04 & 0.04 & 0.04 & 0.085 & 0.05\\ \hline \hline \end{tabular} \caption{Summary of geoacoustic parameter estimates and their bounds. The values for $\tilde{c}_p$, $\tilde{c}_t$ and $\rho_b$ were calculated using the Hashin-Shtrikman-Walpole bounds \citep{mavko_rockPhysics}. Also included are estimates for generic rock from Table IV.2 in \cite{APL_9407}, and average values for generic granite (which is similar to monzonite) from Table 5.2 in \cite{bourbie_1987_acoustics}.} \label{tab:geoacousticParameters} \end{table} \subsection{Roughness characterization} \label{sec:roughness} An experiment to characterize the roughness of rock outcrops was performed in May 2013 near Sandefjord, Norway, at 59$^\circ$4'26.2''N, 10$^\circ$15'42.1''E. Roughness measurements of in-air (subaerial) glacially-eroded rock outcroppings called \textit{roches moutone\'es} were obtained using digital stereo photogrammetry. These roughness measurements provided inputs to the effective acoustic system calibration described below, and inputs to approximate scattering models that were compared with measured data. These types of outcrops have two contrasting roughness characteristics: a gently-undulating surface where the ice flowed onto the outcrop (stoss), and a stepped surface where the ice flowed off of the outcrop (lee). The stoss side has been shaped through the mechanism of glacial clast abrasion, whereby sediment grains (clasts) trapped beneath glaciers gouge and scrape the underlying bedrock \citep{scholz_1976,alley_1997}. The resulting surface is characterized by large-scale undulations that follow the glacial flow pattern, and small-scale scratches, or striae, from individual clasts or sediment grains. The stepped leeward side is formed when hydraulic fracturing dislodges blocks of rock delineated by the internal joint structure, a process termed glacial quarrying or plucking \citep{hallet_1996,iverson_2012, zoet_anandakrishnan_alley_2012}.
\begin{figure} \centering \includegraphics[width=\columnwidth]{figure1.JPG} \caption{(color online) Photograph of in-air (subaerial) glacially abraded roughness. Ice flow in this area was from the bottom left of the image to the top right. The large-scale undulations can be seen, and are represented in the height fields in Fig.~\ref{fig:abradedHeights}. The distance between the surface shown in the bottom left and upper right is approximately 2.5 m. The diagonal line in the upper part of the image is a shadow cast by the measurement system.} \label{fig:area4photo} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{figure2.jpg} \caption{(color online) Photograph of glacially plucked roughness. Ice flow in this area is out of the page. The typical stepped characteristics of the leeward side of a \textit{roche moutone\'e} can be seen here. One leg of the stereophotogrammetry frame, of which 1.3 m is visible, can be seen in the upper left quadrant of the image.} \label{fig:area5photo} \end{figure} Roughness measurements are reported from two in-air surfaces that contain glacial abrasion and plucking respectively. They were chosen because they are representative of other surfaces with the same geomorphology in the region. To illustrate the roughness characteristics of these surfaces, photographs are shown in Fig.~\ref{fig:area4photo} for glacial abrasion, and Fig.~\ref{fig:area5photo} for glacial plucking. These surfaces are the in-air expression of the same geomorphological features studied below in Section~\ref{sec:expOver}. Similarities with the SAS imagery of these features are explored below. The ice flowed from north to south at the roughness measurement site \citep{mangerud_2004}. Additional measurements and analysis can be found in \cite{olson_thesis}. In this research, acoustic measurements were performed on submerged outcrops, which preserved the glacial features. Roughness measurements were performed on subaerial outcrops, which are subject to additional erosion through chemical weathering \citep{nicholson_2009} over the approximately 12,000 years since the glaciers retreated from this area \citep{mangerud_2011}. Chemical erosion creates small pits in the rock surface as grains are dissolved due to oxidation and exposure to an acidic environment. These features are the primary source of difference between roughness characteristics of submerged and subaerial \textit{roches moutone\'es}. It is argued below that the chemical weathering pits do not affect the scattered acoustic field at these frequencies, and that subaerial roughness measurements are acceptable as input parameters to scattering models used to predict the cross section from submerged outcrops. The stereo photogrammetry system used in these measurements consisted of two Nikon D7000 digital single lens reflex cameras with Nikkor 28 mm f/2.8 D lenses mounted to an aluminum frame. Baseline separations were set at approximately 0.5 m and 1 m, enabling the system to operate at heights of 1 m and 2 m from the rock surface; these configurations will be called the high- and low-resolution modes respectively.
The nominal system resolutions determined by the camera resolution, camera separation, focal length, and mean distance from the rock surface are $(\Delta x, \Delta y, \Delta z) = (160, 158, 77.4)$ $\mu$m for the high-resolution mode and $(356, 357, 176)$ $\mu$m for the low-resolution mode, where $\Delta x$ is the resolution parallel to the baseline direction in the horizontal plane, $\Delta y$ is the resolution perpendicular to the baseline in the horizontal plane, and $\Delta z$ is the precision of the depth estimate. Since a 7$\times$7 correlation window was used for the stereo-matching algorithm in the photogrammetry processing, a conservative estimate of the realistic image-plane resolutions is seven times the nominal resolution, (1.12, 1.11) mm and (2.49, 2.45) mm for the high- and low-resolution modes respectively. Surface features as small as the nominal system resolution are observable, but features at scales smaller than the realistic resolution are subject to an effective low-pass filter due to the correlation window. These resolutions correspond to Nyquist-Shannon spatial frequencies of approximately 449 m$^{-1}$ and 202 m$^{-1}$ for the high- and low-resolution modes respectively. The cameras were calibrated using a black and white checkerboard pattern attached to a glass plate. Images were processed to obtain height fields using the OpenCV library \citep{openCV}, and the intrinsic and extrinsic camera parameters were obtained using the camera calibration toolbox \citep{bouguet_camCalBox}. Details on stereo photogrammetry can be found in \cite{manualPhoto} and an example of an underwater system can be found in \cite{lyons_pouliquen_2004}. \subsubsection{\textit{Data analysis}} Two-dimensional roughness power spectra were estimated from measured height fields using Thomson's multitaper approach \citep{thomson_1982,percival_walden}. This approach uses the discrete prolate spheroidal sequences (DPSS) as orthogonal window functions. One of the advantages of the DPSS windows is that for a given value of the equivalent noise bandwidth, $N_{BW}$, several orthogonal windows are available. Power spectra computed with orthogonal windows were incoherently averaged to produce a power spectrum estimate with reduced variance compared to single realizations \citep{percival_walden}. Two-dimensional window functions were obtained by taking the outer product of two one-dimensional window functions. To further mitigate spectral leakage, a least-squares plane was subtracted from the height field before windowing and spectrum estimation. The $N_{BW}$ parameter was set at six, and seven windows in each direction were used, for a total of 49 orthogonal windows. \subsubsection{\textit{Roughness results}} Two roughness measurements of a glacially abraded rough surface are presented in Fig.~\ref{fig:abradedHeights}. These measurements were made of the same rock surface, but at the two different heights of the roughness measurement system, 2 m for Fig.~\ref{fig:abradedHeights}(a), and 1 m for Fig.~\ref{fig:abradedHeights}(b). A mean plane was subtracted from each of the measurements before plotting. Because of this operation it is difficult to use a global coordinate system for all roughness measurements, and coordinates are referenced to their mean values. The region of overlap between the two measurements is indicated by the dashed box in Fig.~\ref{fig:abradedHeights}(a).
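A minimal sketch of the two-dimensional multitaper spectrum estimate described in the data analysis subsection above is given below, assuming a height field \texttt{h} sampled on a uniform grid with spacings \texttt{dx} and \texttt{dy}. The taper parameters follow the text ($N_{BW}=6$, seven tapers per direction, 49 windows in total); the normalization convention and the variable names are illustrative.
\begin{verbatim}
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd_2d(h, dx, dy, nbw=6.0, ntapers=7):
    """Two-dimensional multitaper power spectrum of a height field h [m]."""
    ny, nx = h.shape

    # Remove a least-squares plane before windowing, as in the text.
    yy, xx = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h.size)])
    coeff, *_ = np.linalg.lstsq(A, h.ravel(), rcond=None)
    h = h - (A @ coeff).reshape(h.shape)

    # 2D orthogonal windows from outer products of 1D DPSS tapers.
    ty = dpss(ny, nbw, ntapers)
    tx = dpss(nx, nbw, ntapers)

    psd = np.zeros((ny, nx))
    for wy in ty:
        for wx in tx:
            w2d = np.outer(wy, wx)
            H = np.fft.fftshift(np.fft.fft2(w2d * h))
            psd += np.abs(H) ** 2 * dx * dy / np.sum(w2d ** 2)
    psd /= ntapers ** 2                  # incoherent average over the 49 windows

    ux = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))   # spatial frequencies [1/m]
    uy = np.fft.fftshift(np.fft.fftfreq(ny, d=dy))
    return ux, uy, psd
\end{verbatim}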
Roughness is displayed in a color representation in which the color bar communicates surface height, and surface slope information modifies the black/white balance (lightness), as if the surface were illuminated by a light source. The lightness modification to the color bar is biased to lighter values, since black is part of the color bar and white is not. Thus pixels may appear to have colors that are not part of the color bar (e.g. white glints) due to a large modification of their lightness values. This visualization scheme is used because the dynamic range of the color scale is dominated by the large amplitude features and cannot resolve the low-amplitude, short wavelength features captured by the photogrammetry system. After a mean plane was subtracted, the root mean square (rms) height of Fig.~\ref{fig:abradedHeights}(a) is 13.1 mm and the rms slope is 0.32. In Fig.~\ref{fig:abradedHeights}(b), the rms height is 2.01 mm and the rms slope is 0.37. In both images, glacier flow is approximately in the negative $y$ direction. Long wavelength undulations in both the along- and across-flow directions are evident at scales of approximately 50 cm. At wavelengths on the order of a few cm, the roughness is primarily caused by scratches parallel to the glacier flow direction. These striae are likely caused by individual clasts being dragged across the rock surface by the glacier. At wavelengths less than 5 mm there exist pits in the surface that are likely the result of post-glacial chemical weathering \citep{nicholson_2009}; these pits do not appear to have any preferred orientation. Post-glacial weathering is the largest source of discrepancy between submerged and subaerial bedrock roughness characteristics. From Fig.~\ref{fig:abradedHeights}(b), weathering pits are common, although their prevalence is exaggerated somewhat by the shading scheme. There are also several small cracks running through both height fields. The cracks are likely pre-existing joints in the rock material that were widened by freeze-thaw cycles \citep{nicholson_2009}. Although pits 10 mm wide and 1 mm deep exist, they are relatively rare, with most pits less than 2-3 mm in horizontal extent, and shallower than 0.5 mm. It is likely that roughness parameters estimated for wavelengths larger than 2-3 mm are applicable to underwater (submerged) abraded surfaces. Since first-order perturbation theory \citep{thorsos_jackson_1989} states that roughness at scales less than $\lambda$/2 (7.5 mm for the HISAS 1030 sonar) cannot affect the scattered field, and perturbation theory is expected to be accurate for these surfaces, chemical weathering pits likely do not affect the scattered field. \begin{figure} \centering \includegraphics[width=\columnwidth]{figure3.jpeg} \caption{(color online) Rough interface results from a glacially abraded surface in (a) the low-resolution mode, and (b) the high-resolution mode. The glaciers flowed in the negative $y$ direction. The color bar corresponds to height referenced to the surface mean, and the brightness, or black/white information, communicates the surface slope. The dashed box in (a) indicates the portion of the surface that is shown in (b).} \label{fig:abradedHeights} \end{figure} Power spectra estimated for the height fields depicted in Fig.~\ref{fig:abradedHeights} are presented in Fig.~\ref{fig:abradedSpectra}. The 2D power spectra are plotted as a function of horizontal spatial frequencies, $u_x$ and $u_y$.
Both spectra are anisotropic at low spatial frequencies, but are isotropic at high spatial frequencies, with the division at approximately 200 m$^{-1}$. This division between large and small scales corresponds to a 5 mm wavelength. Since the chemical weathering pits appear to be isotropic and are mostly restricted to diameters of less than 5 mm, the high-spatial-frequency isotropic regime would seem to represent the effect of chemical weathering. Since the weathering pits are confined to a region of spatial frequencies above the highest Bragg spatial frequency accessible by the HISAS sonar (135 m$^{-1}$), they likely do not contribute significantly to the scattered field. So long as attention is restricted to spatial frequencies less than 200 m$^{-1}$, the spectral characteristics of subaerial roughness can be used to represent spectral characteristics of submerged \textit{roches moutone\'es}. The low-wavenumber anisotropy takes the form of broad peaks in the spatial frequency domain centered at the origin, and at aspect angles, $u_\phi=\tan^{-1}(u_y/u_x)$, of 0$^\circ$, -21$^\circ$, and 77$^\circ$ for Fig.~\ref{fig:abradedSpectra}(a), and 0$^\circ$ and 77$^\circ$ for Fig.~\ref{fig:abradedSpectra}(b). By the projection-slice theorem \citep{ferguson_wyber_2005}, angles in the spatial-frequency domain correspond to angles in the spatial domain. Since these peaks are centered at the origin, their widths set the correlation scale of the surface in the corresponding direction. The width of the anisotropic peak at 0$^\circ$ likely corresponds to the correlation scale of small scratches perpendicular to the direction of glacier flow. The anisotropic feature at 77$^\circ$ is present in both measurements and likely corresponds to large-scale undulations approximately parallel to glacier flow. The peak at -21$^\circ$ in Fig.~\ref{fig:abradedSpectra}(a) likely corresponds to the undulations present between the upper left and lower right corners of Fig.~\ref{fig:abradedHeights}(a). \begin{figure} \centering \includegraphics[width=\columnwidth]{figure4-eps-converted-to.pdf} \caption{(color online) Decibel version of the two-dimensional power spectra in m$^4$ of the abraded surfaces shown in Fig.~\ref{fig:abradedHeights} as a function of horizontal spatial frequencies, $u_x$ and $u_y$. The power spectrum of the abraded surface measured in the low-resolution mode is presented in (a), and the spectrum measured in the high-resolution mode is in (b). Angles mentioned in the text are measured counterclockwise from the +$u_x$ axis.} \label{fig:abradedSpectra} \end{figure} A rough surface formed through glacial plucking is presented in Fig.~\ref{fig:pluckedSurf} with the same visualization scheme as in Fig.~\ref{fig:abradedHeights}. This surface is composed of approximately planar polygonal facets at large scales, with low-level roughness superimposed at small scales. After a mean plane was subtracted, the rms height of this surface was 45.6 mm, and the rms slope was 5.2. Note that some of the steep faces appear to be quite smooth. This artifact results from a shortcoming of wide-baseline photogrammetry in which the stereo correspondence algorithm fails for steep slopes, and the missing areas are interpolated. The small-scale roughness appears to be isotropic, and lacks the parallel striae exhibited by glacially abraded surfaces.
It is likely that the small-scale roughness reflects the shape of the preexisting internal joint surface before glacial quarrying, and is not the result of glacial abrasion \citep{iverson_2012}. \begin{figure} \centering \includegraphics[width=\columnwidth]{figure5.jpeg} \caption{(color online) Roughness measurement results for the plucked rough interface. The color scale communicates surface height, and the surface slope has been included as changes to the grayscale value to accentuate low-amplitude roughness not resolved by the color scale. Only the low-resolution measurement is presented because the high-resolution mode does not include enough facets to obtain a proper sample size.} \label{fig:pluckedSurf} \end{figure} The two-dimensional power spectrum of this surface is shown in Fig.~\ref{fig:pluckedSpectrum}. The plucked spectrum is anisotropic over much of the spatial frequency domain. At low spatial frequencies, it has peaks at $u_\phi$ of 90$^\circ$ and $\pm$23$^\circ$. These directional peaks have most of their energy at the origin and extend to spatial frequencies of 150 m$^{-1}$. These peaks likely result from the large-scale facet structure of the plucked interface. Aside from the directional peaks, there is a background isotropic spectrum that decays as a power law, likely representing the isotropic small-scale roughness on each facet. \begin{figure} \centering \includegraphics[width=3.375in]{figure6-eps-converted-to.pdf} \caption{(color online) Decibel version of the two-dimensional power spectrum in m$^4$ of the plucked interface shown in Fig.~\ref{fig:pluckedSurf} as a function of horizontal spatial frequencies, $u_x$ and $u_y$. Angles mentioned in the text are measured counterclockwise from the +$u_x$ axis.} \label{fig:pluckedSpectrum} \end{figure} Parameters for an isotropic two-dimensional power spectrum are required for the effective acoustic system calibration step detailed below, and as inputs for scattering models. Averaging over azimuth is typically performed only for isotropic spectra; in this case, the azimuthally averaged spectrum is expected to reflect the behavior of scattering cross section measurements averaged over several azimuth angles, which may be anisotropic. Based on perturbation theory, the roughness spectrum components responsible for backscattered power are at the Bragg wavenumbers, $2 k_w \cos\theta$, or spatial frequencies, $2 u_w \cos\theta$, where $k_w= 2\pi u_w = 2\pi f/c_w$ is the wavenumber in water, $c_w$ is the sound speed in water, and $f$ is the acoustic frequency. For the angles covered by the experimental geometry, this corresponds to spatial frequencies between 105 m$^{-1}$ and 135 m$^{-1}$. The azimuthally averaged spectra of the small- and large-scale abraded measurements, and of the plucked measurement, are shown on a log-log scale in Fig.~\ref{fig:abradedSpectrumAzAvg}. Low spatial frequencies that are biased by the apodization functions are not shown here. The small- and large-scale abraded spectra match very closely in their overlapping spatial-frequency domains, which is expected. The plucked spectrum exceeds the abraded spectra by more than three orders of magnitude in the low spatial frequency domain, but is much closer in power at high spatial frequencies. A power-law model is fit to the azimuthally averaged spectra, of the form \begin{align} \Phi(u_r) = \phi_2 / u_r^{\gamma_2} \end{align} where $\Phi(u_r)$ is the 2D power spectrum, $u_r$ is the radial spatial frequency, $\phi_2$ is the spectral strength, and $\gamma_2$ is the spectral slope.
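A sketch of the azimuthal averaging and the log-log least-squares fit used to obtain $\phi_2$ and $\gamma_2$ is shown below. The radial binning scheme and the frequency limits are illustrative; in the analysis described in the text, only spatial frequencies unbiased by apodization or the photogrammetry processing are included.
\begin{verbatim}
import numpy as np

def fit_power_law(ux, uy, psd, u_min, u_max, nbins=40):
    """Azimuthally average a 2D spectrum and fit Phi(u_r) = phi2 / u_r**gamma2."""
    UX, UY = np.meshgrid(ux, uy)
    ur = np.hypot(UX, UY).ravel()
    p = psd.ravel()

    # Azimuthal average in logarithmically spaced radial bins.
    edges = np.logspace(np.log10(u_min), np.log10(u_max), nbins + 1)
    centers = np.sqrt(edges[:-1] * edges[1:])
    phi_avg = np.array([p[(ur >= lo) & (ur < hi)].mean()
                        for lo, hi in zip(edges[:-1], edges[1:])])

    # Straight-line fit in log-log space:
    # log10(Phi) = log10(phi2) - gamma2 * log10(u_r)
    good = np.isfinite(phi_avg) & (phi_avg > 0)
    slope, intercept = np.polyfit(np.log10(centers[good]),
                                  np.log10(phi_avg[good]), 1)
    return 10.0 ** intercept, -slope     # phi2, gamma2
\end{verbatim}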
The subscripts on $\gamma_2$ and $\phi_2$ indicate that these parameters are for 2D power spectra, following the convention in \cite{jackson_richardson_2007}. Power-law fits are displayed in Fig.~\ref{fig:abradedSpectrumAzAvg}, and estimated parameters can be found in Table~\ref{tab:roughnessParams}. All spatial frequency components of the spectrum that were not biased by apodization or the photogrammetry processing were used in the fit. Although a single power law is fit to the plucked spectrum, it appears to have some curvature in log-log space and could converge with the abraded spectra at high spatial frequencies if higher-resolution data were available. \begin{table} \centering \begin{tabular}{ l l l} \hline \hline Parameter & Abraded & Plucked\\ \hline $\phi_2$ [m$^{4-\gamma_2}$] & 4.847$\times10^{-8}$ & 6.083$\times10^{-3}$\\ $\gamma_2$ & 2.73 & 4.36\\ $\eta_\Phi$ [$\%$] & 4.5 & 6.2\\ \hline \hline \end{tabular} \caption{Roughness spectrum model parameters and uncertainties.} \label{tab:roughnessParams} \end{table} \begin{figure} \centering \includegraphics[width=\columnwidth]{figure7-eps-converted-to.pdf} \caption{(color online) Azimuthally averaged power spectra from the abraded and plucked surfaces with power-law fits. The power spectra from both the low- and high-resolution modes of the photogrammetry system are plotted for the abraded surfaces.} \label{fig:abradedSpectrumAzAvg} \end{figure} Uncertainty in the model parameter estimates is a major contributor to uncertainty in the resulting scattering strength measurements reported below. Parameters are estimated using a least-squares fit to the power law in log-log space, which means that parameter uncertainty is a function of the residual sum of squares and the number of points used in the estimate. An additional source of uncertainty is the variation of $\Phi$ as a function of azimuthal spatial frequency, $u_\phi$. The total relative variance $\eta^2_\Phi$ in the spectrum estimate is computed as $\eta^2_\Phi = \eta^2_{LS} + \eta^2_{aniso}$, where $\eta^2_{LS}$ is the relative variance due to the least squares fit, and \begin{align} \eta^2_{aniso} = \frac{1}{u_{max} - u_{min}}\int\limits_{u_{min}}^{u_{max}}\frac{\langle \Phi^2 (u_\phi,u_r)-\Phi^2(u_r)\rangle_{u_\phi}}{\Phi^2(u_r)}\,\textrm{d} u_r \end{align} represents the azimuthal variability in the power spectrum. It is the variance of the power spectrum over $u_\phi$ divided by the squared mean, and averaged over the Bragg spatial frequencies accessible by the measurement system. A parameter for the total uncertainty of the spectrum over the Bragg spatial frequencies is used rather than uncertainties of individual parameter estimates. Uncertainties can be found in Table~\ref{tab:roughnessParams}. \section{ACOUSTIC SCATTERING EXPERIMENTS} \label{sec:expOver} Acoustic backscattering measurements were performed off the coast of Larvik and Sandefjord, Norway, in April 2011 \citep{midtgaard_etal_2011}. This experiment was performed by the Norwegian Defence Research Establishment (FFI) aboard the HU Sverdrup II. Data were collected using the HUGIN 1000 HUS autonomous underwater vehicle (AUV) equipped with the HISAS 1030 interferometric SAS system \citep{fossum_etal_2008}. This sonar has not been calibrated in terms of its open-circuit receiver sensitivity, $s_r$, or the source strength, $s_0$, although beam patterns were measured. These parameters must be determined in order to estimate the absolute seafloor scattering cross section.
The seafloor in this area consisted of \textit{roches moutone\'es} surrounded by a sediment of cobble with a mud matrix. The water depth at the experimental site was approximately 30 m, with a nominal vehicle altitude of 10 m from the seafloor. The sound speed profile was slightly upward-refracting, with the surface sound speed at approximately 1452 m/s, and approximately 1456 m/s at the seafloor. The change in sound speed over the lower 10 m of the water column was a maximum of 3 m/s, and therefore refraction effects on the local grazing angle can be ignored. Further details on the experiment can be found in \cite{midtgaard_etal_2011}. \subsection{Synthetic aperture sonar overview} \label{sec:sasOver} Synthetic aperture sonar (SAS) is a data collection and beamforming technique in which a transmitter and a receiver array move along a track, which is typically linear or circular. Acoustic energy is transmitted at regular intervals, and the resulting scattered field is sampled, also at regular intervals. The received field from many transmissions can be concatenated to form a synthetic array that has a length many times that of the physical receiver array. A SAS image can be formed using a variety of beamforming techniques, the simplest and most robust of which is the backprojection, or delay-and-sum, algorithm \citep{hawkins_thesis}. Synthetic array lengths scale linearly with pixel range, and images have a constant Cartesian resolution over the whole scene. The array lengths can be quite long, and in most situations imaged objects are in the near field. Aligning successive physical aperture locations to form a synthetic array is challenging in the ocean environment due to hydrodynamic forces on the sonar platform. Typically, an inertial navigation system combined with displaced-phase center acoustic navigation is used to estimate the sonar element locations to within a fraction of a wavelength \citep{bellettini_pinto_2002}. More information on SAS processing can be found in \cite{hansen_sas} and references therein. Interferometric SAS is possible with two vertically-separated receiver arrays on a sonar platform. The phase difference between SAS image pixels formed from each array is related to the seafloor height. Since phase is wrapped to $2\pi$ radians, discontinuities in the phase difference must be detected in the presence of noise. Typically, an $n\times n$ window is used to provide an estimate of seafloor height with reduced variance at the cost of reduced resolution \citep{saebo_etal_2013, saebo_thesis}. Bathymetry estimates can be used in scattering strength measurements by providing an estimate of the local seafloor slope and global grazing angle at each pixel. The assumption of a constant or planar seafloor, which is often used in scattering strength measurements \citep{jackson_etal_1986}, is severely violated in the case of the rock outcrops studied in this research. The local seafloor slope is estimated using a weighted-average version of a finite difference operator on the SAS bathymetry to reduce high-frequency noise. The weights are computed using an $m$-point least squares fit to a quadratic function, with the derivative of the polynomial computed analytically \citep{savitsky_golay_1964}. Second-order polynomials were used to estimate all slopes, with window sizes of 55 pixels for plucked surfaces, and 23 pixels for abraded surfaces. The larger window size was used to suppress the effects of steps on the leeward surfaces of \textit{roches moutone\'es} and focus on the mean trend.
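A simplified version of the slope estimate described above can be written with a standard Savitzky-Golay filter, which fits a quadratic over the window and differentiates it analytically. This sketch applies the filter independently along each axis of the bathymetry grid; the weighted-average formulation used by the authors and the handling of missing data are not reproduced.
\begin{verbatim}
import numpy as np
from scipy.signal import savgol_filter

def local_slope(z, dx, dy, window=23, polyorder=2):
    """Smoothed seafloor slope components from SAS bathymetry z [m].

    A window of 23 samples was used for abraded areas and 55 samples for
    plucked areas in the text; dx, dy are the grid spacings [m].
    """
    dz_dx = savgol_filter(z, window, polyorder, deriv=1, delta=dx, axis=1)
    dz_dy = savgol_filter(z, window, polyorder, deriv=1, delta=dy, axis=0)
    return dz_dx, dz_dy
\end{verbatim}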
\subsection{Estimating the scattering cross section from synthetic aperture sonar data} \label{sec:dataProc} Estimation of scattering strength from SAS systems is similar in principle to estimation using other real-aperture sonar systems designed to measure scattering strength \citep{jackson_etal_1986,williams_etal_2002}, with the exceptions of near-field beamforming and the ability to estimate seafloor slope. Typically, beamformed time series are incoherently averaged over independent areas of the seafloor, and then the sonar equation is inverted for scattering strength for each intensity sample. Since SAS is a near-field imaging technique, transmission loss changes significantly along the synthetic array for all ranges. Consequently, sonar equation terms that vary as a function of sensor location, i.e. spherical spreading and attenuation, are removed before the image is beamformed. Terms that do not vary as a function of sensor position, such as the ensonified area and calibration parameters, are removed after image formation. The resulting calibrated SAS pixel intensity values correspond to the unaveraged cross section, $\tilde{\sigma}$, for azimuthally isotropic scatterers. A sonar equation is used that relates pixel intensity to the scattering cross section. It is equivalent to Eq. (G.11) in \cite{jackson_richardson_2007} but adapted to SAS quantities. This equation is valid so long as the scattering cross section and vertical beam pattern are both slowly varying within the system resolution. Let $v_{ij}$ be the complex matched-filtered voltage sensed by the $j$th receiver element, and delayed such that it corresponds to the $i$th pixel. Let $x$ be the along-track position of the pixel location, and $y$ be the cross-track (ground range) position. The complex value of the $i$th pixel, $q_i$, is the output of the delay-and-sum beamformer, and is a weighted sum over the synthetic array, with the weights determined by transducer patterns, and by tapering applied to reduce side-lobes. When corrected for propagation, vertical beam pattern effects, and coherent gain, $q_i$ is defined by \begin{align} \begin{split}\label{eq:pressureToPixel} q_i &= \left(\sum\limits_{j}^{Nr} w_j b_{tx}(\phi_{ij}) b_{rx}(\phi_{ij})\right)^{-1}\times \\ &\left( \sum\limits_{j}^{Nr} w_j b_{tx}(\phi_{ij}) b_{rx}(\phi_{ij}) \frac{r^2_{ij}e^{2 \alpha r_{ij}}}{\left| b_{tx}(\theta_{ij}) b_{rx}(\theta_{ij})\right|}v_{ij} \right) \end{split} \end{align} where $w_j$ is the processing weight applied to the $j$th receiver, $\alpha$ is the attenuation of seawater, $r_{ij}$ is the distance (slant range) from pixel $i$ to receiver $j$, $b_{tx}(\phi_{ij})$ and $b_{rx}(\phi_{ij})$ are the transmit and receive horizontal beam patterns, and $b_{tx}(\theta_{ij})$ and $b_{rx}(\theta_{ij})$ are the vertical beam patterns. The variables $\phi_{ij}$ and $\theta_{ij}$ are the horizontal and depression angles from sensor $j$ to pixel $i$ in the sonar's coordinate system. The HISAS 1030 sonar has a single transmitter and multiple receivers. The phase-center approximation \citep{bellettini_pinto_2002} is employed to work with an equivalent monostatic configuration with colocated transmitter and receiver elements.
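The correction in Eq.~(\ref{eq:pressureToPixel}) can be sketched for a single pixel as below, assuming that the matched-filtered voltages have already been delayed to the pixel location and that the beam-pattern values and slant ranges are available; all variable names are illustrative.
\begin{verbatim}
import numpy as np

def corrected_pixel(v, w, r, b_tx_h, b_rx_h, b_tx_v, b_rx_v, alpha):
    """Propagation- and pattern-corrected delay-and-sum pixel value q_i.

    v      : delayed, matched-filtered complex voltages, one per phase center
    w      : beamformer taper weights
    r      : slant ranges from the pixel to each phase center [m]
    b_*_h  : horizontal transmit/receive beam-pattern values
    b_*_v  : vertical transmit/receive beam-pattern values
    alpha  : seawater attenuation [nepers/m]
    """
    g = w * b_tx_h * b_rx_h                              # coherent-gain weights
    corr = r ** 2 * np.exp(2.0 * alpha * r) / np.abs(b_tx_v * b_rx_v)
    return np.sum(g * corr * v) / np.sum(g)
\end{verbatim}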
The unaveraged scattering cross section, $\tilde{\sigma}$, is related to $q_{i}$ through \begin{align} \left | q_i \right |^2 = (s_r s_0)^2 \Gamma \tilde{\sigma} \delta x \delta y \frac{\cos(\theta - \theta_0)}{\cos \theta} \label{eq:pixelToSigma} \end{align} where $s_r$ is the receiver voltage sensitivity in V/Pa, $s_0$ is the source level in Pa-m, and the cosine terms result from converting from the transducer array coordinate system to the seafloor coordinate system, with $\theta_0$ the depression angle of the main response axis of the transmitter as discussed in Appendix G of \cite{jackson_richardson_2007}. The quantities $\delta x$ and $\delta y$ are the along-track and range resolutions of the system respectively. These resolutions are defined as the widths of rectangular windows that have the same integrated power as the ambiguity function along a particular direction, similar to the concept of equivalent noise bandwidth. This definition of resolution differs from others, such as the 3 dB down points, and is a simple way to define a sonar equation. The local seafloor slope at each pixel is used to define the local grazing angle. Ensembles are formed by grouping local grazing angles into 1$^\circ$ bins and estimating the sample mean. The along-track resolution, $\delta x$, is nominally 3.14 cm for the parameters of the HISAS 1030 and the processing parameters used for these data. Due to vehicle motion, the value of $\delta x$ fluctuates by as much as 5\% based on calculations of the along-track resolution for each pixel. To simplify data processing, the nominal value of $\delta x$ is used, and its variability is included in the uncertainty analysis. Note that since SAS is a near-field imaging algorithm, a given patch of the seafloor is always in the near field of the synthetic array. The Fourier transform relationship between the beam pattern in the far field and the aperture weighting function is used to compute $\delta x$, since the beam pattern in the focal plane of a focused near-field array is identical to the far-field beam pattern \citep{mast_2007}. The range resolution, $\delta y$, is determined by the transmitted bandwidth and spectral weighting of the received pulse, and is equal to 3.25 cm. The relative signal gain, $\Gamma$, is defined here as the power ratio between the partially coherent array gain and the coherent array gain \citep{cox_1973,carey_moseley_1991,carey_1998}. It characterizes the bias observed when the received signal is a fluctuating quantity rather than the return from a point scatterer. In this work, partial coherence is due to two mechanisms: 1) phase fluctuations due to uncertainty in the sensor positions and oceanographic conditions, and 2) amplitude and phase fluctuations due to the scattering characteristics of random rough surfaces. The first mechanism is included as a source of uncertainty and discussed at the end of the next subsection. The spatial coherence of rough surface scattering is a complex topic and a rigorous treatment is outside the scope of this work. However, an intuitive argument is given that $\Gamma$ is the same for all pixels. In general, the coherence of the field due to scattering from a rough surface has a Fourier transform relationship with the covariance of the pressure at the scattering surface \citep{mallert_fink_1991}. Theoretical models of coherence typically employ an isotropic point scatterer model (see \cite{jackson_moravan_1984} and references therein), or the van Cittert-Zernike theorem (vCZT) \citep{born_wolf, mallert_fink_1991}.
Under the experimental conditions in this work, both models predict that the coherence along the receiver aperture is equal to the autocorrelation of the transmitting aperture function. This relationship, which we will call the vCZT, depends on one of two equivalent assumptions: that the covariance of the surface field behaves like a Dirac delta function, or that the scattering cross section is isotropic. For the ground ranges studied in this work, the grazing angles along the synthetic array vary by less than 0.1$^\circ$. Therefore we can restrict attention to azimuthal coherence, and only the azimuthal dependence of the scattering cross section is of import. Since azimuthally isotropic scatterers have already been assumed above, it is an appropriate assumption to use for coherence as well. Since the scattering cross section is not isotropic in all situations, this result may not always be applicable. However, the intuitive argument can be made that if the bistatic scattering cross section is isotropic over the azimuth angles subtended by the transmitter and receiver arrays, then the spatial coherence is practically equal to the autocorrelation of the transmitter aperture. Using the vCZT, $\Gamma$ can be demonstrated to be the same for all pixels. The transmit and receive arrays are always matched in length due to the phase center approximation \citep{bellettini_pinto_2002}. Additionally, the transmit and receive aperture weights have functional forms that scale as a function of $x/y=\tan(\phi)$, where $\phi$ is the horizontal angle from the sensor location to a given pixel. Therefore range dependence in both the partially coherent gain and the coherent gain is canceled out by the definition of $\Gamma$. Under the assumption of azimuthal isotropy, $\Gamma$ is identical for all pixels and may be normalized out during the calibration procedure described below. Note that if the system had been calibrated using a technique such as reciprocity, the value for $\Gamma$ would need to be explicitly calculated based on the system parameters. \subsection{System calibration} \label{sec:sysCal} \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{figure8.jpeg} \caption{(color online) SAS images of the calibration rock outcrop. Boxes denote areas where pixels were extracted to form estimates of scattering strength. With respect to Ra, the rest of the images (Rb-Rf in order) are related in azimuth angle by the following clockwise rotations: 180$^\circ$, 45$^\circ$, 225$^\circ$, 270$^\circ$, and 90$^\circ$. Grayscale value denotes the decibel equivalent of $\tilde{\sigma}$, the unaveraged scattering cross section. Horizontal axes represent ground range from the sonar track with positive values representing the port side, and negative values representing the starboard side. Vertical axes represent distance along the sonar track.} \label{fig:SAScalibrationRock} \end{figure*} The two parameters characterizing system calibration, $s_r$ and $s_0$, have not been measured, but are required for estimating the scattering cross section. An effective calibration technique is applied that estimates $s = (s_r s_0)^2$ by fitting the scattering strength of an area of the seafloor to a valid scattering model with known inputs. Note that this technique also normalizes out any bias that is constant for all pixels and measurement locations, such as the relative signal gain, $\Gamma$, discussed above.
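Once $s$ is known, each corrected pixel value is converted to an unaveraged cross section by inverting Eq.~(\ref{eq:pixelToSigma}); a minimal sketch follows, with $\Gamma$ set to unity because it is absorbed by the effective calibration.
\begin{verbatim}
import numpy as np

def sigma_tilde(q, s, dx, dy, theta, theta0, gamma=1.0):
    """Unaveraged scattering cross section from a corrected SAS pixel value q.

    theta, theta0 in radians; s = (s_r*s_0)**2; dx, dy are the along-track
    and range resolutions [m].  Scattering strength for a grazing-angle bin
    is then 10*log10(mean(sigma_tilde)) over the pixel ensemble in that bin.
    """
    return (np.abs(q) ** 2 * np.cos(theta)
            / (s * gamma * dx * dy * np.cos(theta - theta0)))
\end{verbatim}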
This method of calibration has been applied previously using a seafloor scattering model by \cite{dwan_thesis} for the Gloria side-scan sonar, and used for scattering strength measurements by \cite{lyons_anderson_1994}. A similar technique using man-made targets, such as metal spheres, has become a standard technique \citep{foote_etal_2005} for sonar calibration. A rock outcrop with very low-amplitude roughness was chosen based on inspection of SAS imagery to provide an effective system calibration. The apparent low-amplitude roughness enabled the use of a first-order scattering model. SAS images of the calibration rock taken at various azimuthal angles are presented in Fig.~\ref{fig:SAScalibrationRock}. This outcrop extends approximately 0.5 m above the surrounding sediment. The gray scale corresponds to the decibel value of the relative scattering cross section. Note that the decibel quantity here and in Fig.~\ref{fig:sasSvabergs} is dimensionless and thus appears without a reference unit. The vertical axes correspond to the distance along the synthetic aperture, and horizontal axes correspond to the ground range to the sonar, with negative values representing distance from the port side of the vehicle, and positive values from the starboard side. Boxes indicate areas that were used to estimate scattering strength. In the case that multiple boxes intersect, their union is used to define a more complicated boundary that avoids double counting of pixels. Ensonification direction is nominally from the line where the ground range is zero, and can be from the left or right of the images depending on the sign of the ground range values. The calibration rock outcrop is approximately 10 m $\times$ 5 m in horizontal extent, and appeared to be flat and smooth based on the image texture, as supported by interferometric SAS bathymetry. Dropstones, likely deposited by glaciers, are present on the rock surface. Apart from discrete features such as dropstones and edges, the image amplitude appears to slowly decrease as the absolute value of the ground range from the sonar track increases. Since all propagation and system effects have been removed, this slow change in image amplitude can be attributed to the decrease in scattering strength with grazing angle, which is confirmed in the scattering strength plots below. The surrounding sediment is composed of approximately 4 cm cobble in a mud matrix, as indicated by diver samples. The surface of the calibration rock scatters sound at a level approximately 7 dB lower than the surrounding cobble, which is a further indication that the surface is quite smooth. The calibration rock surface, like most of the exposed bedrock in this area, has been formed through glacial abrasion, as indicated by the smooth image texture and the low amplitude of the scattered field. The relative scattering strength is averaged over all of these aspects for the system calibration. In Section~\ref{sec:roughness}, the roughness power spectra of glacially abraded surfaces were seen to be anisotropic at scales on the order of the Bragg wavelength for this sonar system. To minimize bias due to anisotropy, the calibration procedure compares the azimuthally averaged scattering strength to the small-slope model computed with azimuthally averaged roughness power spectrum parameters. Azimuthal variability of the roughness spectrum at the range of Bragg spatial frequencies was used to compute error bars on the scattering model, and thus on the estimate of $s$.
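A minimal sketch of this effective-calibration step is given below. It assumes that azimuthally averaged, uncalibrated cross-section estimates (proportional to $s\,\sigma$) and a model curve evaluated at the same grazing angles (such as the SSA curve described next) are already available; whether the fit is carried out in linear or decibel units is not stated in the text, so linear units are assumed here.
\begin{verbatim}
import numpy as np

def fit_calibration_constant(sigma_rel, sigma_model):
    """Least-squares estimate of s = (s_r*s_0)**2.

    sigma_rel   : uncalibrated cross-section estimates (|q|^2 with geometry
                  and resolution terms removed), i.e. proportional to s*sigma
    sigma_model : model cross section at the same grazing angles
    """
    sigma_rel = np.asarray(sigma_rel, dtype=float)
    sigma_model = np.asarray(sigma_model, dtype=float)
    # Linear model sigma_rel = s * sigma_model; the least-squares solution
    # is a projection onto the model curve.
    return np.sum(sigma_rel * sigma_model) / np.sum(sigma_model ** 2)
\end{verbatim}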
The first-order small-slope approximation (SSA) \citep{voronovich_1985} was used to compute the model cross section curve from which $s$ is estimated. In its first-order form, the SSA can be decomposed into the product of a term depending on the geoacoustic properties and the Kirchhoff integral \citep[Chap. 13]{jackson_richardson_2007}. The Kirchhoff integral was numerically evaluated using the algorithm developed by \cite{gragg_etal_2001} and \cite{drumheller_gragg_2001}. The SSA is accurate in the respective regions of validity of the simpler small-roughness perturbation approximation and the Kirchhoff approximation \citep[Chap. 13]{jackson_richardson_2007}. Although these bounds have not been established for power-law surfaces, perturbation theory for fluid interfaces and a von K{\'a}rm{\'a}n spectrum has been shown to be accurate with $k_w h\approx 1$, where $h$ is the rms roughness \citep{thorsos_jackson_williams_2000}. The rms roughness of the surface presented in Fig.~\ref{fig:abradedHeights}(b) is 2.09 mm, which corresponds to $kh = 0.86$, and the rms slope is 0.37. Given these considerations, the SSA is expected to be accurate for glacially abraded surfaces. The elastic SSA model requires nine inputs: $\phi_2$, $\gamma_2$, $c_p$, $c_t$, $c_w$, $\delta_p$, $\delta_t$, $\rho_b$, and $\rho_w$, the density of water. It was argued in Section~\ref{sec:roughness} that the roughness characteristics of the surfaces from which the acoustic calibration measurements were made are similar to the roughness characteristics of glacially abraded surfaces at the roughness measurement site. Therefore, the power spectrum estimates presented in Section~\ref{sec:roughness} are assumed to represent the roughness power spectrum of the calibration rock. Since the two sites are approximately 5 km apart, the roughness characteristics are not exactly the same. However, since many clasts have been dragged across the surface, the small-scale roughness is assumed to be well-approximated by a stationary random field. The statistics of this random height field can then be assumed to be the same at both sites, given that the mineral composition and erosion mechanisms are the same. Mean geoacoustic properties and their bounds were summarized in Table~\ref{tab:geoacousticParameters}. The effective calibration parameter, $s$, is estimated through a least-squares fit of the model cross section, $\sigma_{SSA}$, to the measured cross section, $\sigma$, over the angular domain covered by the data. A comparison between $\sigma_{SSA}$ and the measured scattering strength from the calibration rock using the best-fit value of $s = 7.03\times10^{-6}$ V$^2$m$^2$ is presented in Fig.~\ref{fig:calibration}. The measured data and SSA model curves are quite similar in shape over the grazing angles of interest. These data exhibit no dramatic minima or maxima, indicating that there is no critical angle in the angular domain of the measurement. From the geoacoustic parameters listed in Table~\ref{tab:geoacousticParameters}, the compressional and shear wave critical angles are 77$^\circ$ and 63$^\circ$ respectively. Below the critical angle, the shape of the scattering strength curve is insensitive to the compressional and shear wave speed contrasts, and more sensitive to the density contrast. \begin{figure} \centering \includegraphics[width=\columnwidth]{figure9-eps-converted-to.pdf} \caption{Results from the effective calibration technique.
Shown here is the scattering strength as a function of grazing angle, averaged over all azimuths of the calibration rock and using the estimated value of $s$. It is compared to the SSA model, with dashed lines representing the model uncertainty. Uncertainties for the measured data are represented as error bars.} \label{fig:calibration} \end{figure} Uncertainty in the scattering strength estimates is a function of uncertainty in $s$, uncertainty in $\delta y$, and the finite ensemble of pixels used to estimate the mean scattered power. Ensemble sizes for the abraded surfaces reported below were between 220 and 737 samples with a mean of 561 samples. Ensemble sizes for the plucked surfaces were between 1148 and 2525 samples with a mean of 2109 samples. Uncertainty in $s$ is driven by the finite ensemble from the calibration rock, by uncertainty in the model parameters used to compute $\sigma_{SSA}$, and by uncertainty in $\delta y$. Imperfect knowledge of vehicle motion and the ocean environment causes the array to become partially coherent, as discussed above. At present, there is no technique to estimate the degree of phase error along the synthetic aperture \citep{hansen_saebo_2013}, so it will be assumed that the element positions are known to within $\lambda$/10, a common requirement for well-focused images \citep{hansen_etal_2011}. This requirement translates into a residual coherence bias of $1- \exp\lbrace -\pi^2/50\rbrace$, or 18\%. This bias is included as an additional relative uncertainty contribution because the navigational accuracy is expected to vary from site to site and cannot be normalized out by the calibration technique. The total uncertainty of the scattering cross section estimates, expressed as 95\% confidence intervals, is approximately 61\%, or 6 dB (+2 dB, -4 dB). \section{RESULTS AND DISCUSSION} \label{sec:results} \begin{figure*} \centering \includegraphics[width=\textwidth]{figure10.jpeg} \caption{(color online) SAS images of \textit{roches moutone\'es}. Horizontal axes show ground range distance from the sonar track, with negative values corresponding to port, and positive corresponding to starboard. Vertical axes denote distance along the sonar track. Grayscale value corresponds to the decibel equivalent of $\tilde{\sigma}$, the unaveraged scattering cross section, and boxes outline areas where pixels were used to estimate scattering strength. Solid boxes indicate regions of glacial abrasion and dashed boxes indicate regions of glacial plucking.} \label{fig:sasSvabergs} \end{figure*} Scattering strength measurements from the calibration rock and four more \textit{roches moutone\'es} are presented in this section. SAS images of the additional \textit{roches moutone\'es} are presented in Fig.~\ref{fig:sasSvabergs}, and are designated S1-S4. The highest point on these outcrops is approximately 2 - 2.5 m above the surrounding sediment. There are two images of S1, with a and b designating different azimuthal ensonification directions. Boxes in the images denote areas where pixels were used to estimate scattering strength, with solid lines representing areas formed by glacial abrasion, and dashed lines representing areas with glacial plucking. The imagery and slope field within these boxes were manually examined, and areas with shadows were excluded from the estimate. Plucked areas appear in SAS imagery as regions of low scattering strength containing bright areas that seem to correspond to vertical facets, and can be compared to the photographs of the subaerial surfaces in Fig.~\ref{fig:area4photo} and Fig.~\ref{fig:area5photo}.
Some of these regions contain dropstones as well, which can be identified by their highlight-shadow structure. The mean scattering strength of the combined highlights and shadows of dropstones is about -23 dB. The glacier direction may be inferred from these images since the plucked areas are downstream from the abraded areas. Ten measurements of glacially abraded surfaces are reported: six are the individual aspects of the calibration rock, designated R with the individual aspects labeled a-f, and the rest are from S1, S2, S3, and S4. Five measurements of glacially plucked surfaces are reported, two of which are from S1 at different aspects. \begin{figure*} \centering \includegraphics[width=\textwidth]{figure11-eps-converted-to.pdf} \caption{(color online) Scattering strength as a function of grazing angle from glacially abraded surfaces a) R, the calibration rock, b) S1, c) S2, d) S3, and e) S4. Error bars represent the measurement uncertainty, and the two gray dashed lines indicate the error bounds of the small slope approximation (SSA) due to uncertainty in input parameters.} \label{fig:ssAbraded} \end{figure*} Scattering strength measurements from glacially abraded portions of S1, S2, S3, S4, and the calibration rock are shown in Fig.~\ref{fig:ssAbraded} as a function of grazing angle and are compared to the small slope approximation. Scattering strengths fall between -33 dB and -26 dB at a reference grazing angle of 20$^\circ$. Most of the points are clustered at the lower end of this range, except for the measurements from S1a, S1b, and S3. These three measurements are consistently greater than both the small slope model and the other scattering strength measurements from abraded surfaces. This discrepancy is likely due to differing roughness parameters between individual \textit{roches moutone\'es}. From the two outcrops with more than one aspect, R and S1, anisotropy is evident. The measurements from S2 and S4 appear to drop off more rapidly than the model curves. This trend may be due to the fact that the local grazing angle estimates may be positively biased, so that lower grazing angles with less scattered power are included in these estimates. To within the uncertainty bounds of this measurement, almost all the data points overlap with the 95\% confidence intervals of the SSA. Therefore the SSA is a sufficient model for glacially abraded surfaces. Since the SSA reduces to perturbation theory \citep{dacol_berman_1988,gragg_etal_2001} for these roughness parameters and grazing angles, perturbation theory is also an adequate model. In the 15-25$^\circ$ range, all scattering strength measurements fall within $\pm$ 4.5 dB of one another. This interval represents the global bias that would be expected from the effective calibration if roughness parameters from other abraded sites were used instead of the one used here. \begin{figure*} \centering \includegraphics[width=\textwidth]{figure12-eps-converted-to.pdf} \caption{(color online) Scattering strength as a function of grazing angle from glacially plucked surfaces a) S1, b) S2, c) S3, and d) S4. Error bars represent the measurement uncertainty, and the black line represents the empirical Lambertian model (LM) with the parameter $\mu$ estimated by a best fit to the data.} \label{fig:ssPlucked} \end{figure*} Scattering strength measurements from glacially plucked surfaces are shown in Fig.~\ref{fig:ssPlucked} as a function of grazing angle.
Each measurement is compared to Lambert's model, $\sigma = \mu \sin^2\theta$ where $\mu$ is the Lambert parameter and $\theta$ is the grazing angle. The parameter, $\mu$, is estimated by a least-squares fit to the data and has no direct connection with environmental parameters. Model curves for perturbation theory and the small-slope approximation are not shown in the figure because $kh\approx20$, and $s>1$, where $s$ is the rms slope. These parameters are outside the range of validity for these first-order approximate models. Elastic perturbation theory (PT) \citep{dacol_berman_1988} computed with the plucked roughness parameters yields a scattering strength of -12 dB at 20$^\circ$ grazing angle, and SSA results in a scattering strength of -1 dB at the same angle. For the plucked roughness parameters, the Kirchhoff integral in the SSA was computed using numerical quadrature because the spectral exponent, $\gamma_2$, for the plucked surfaces was 4.36, and the algorithm in \cite{gragg_etal_2001} is restricted to values of the spectral exponent less than 3.8. Scattering from dropstones is present in the measurements, but absent in the model. Each dropstone sits on a low scattering strength horizontal facet and contributes a constant scattering strength of about -23 dB. The presence of dropstones slightly increases the scattering strength compared to plucked surfaces alone. Lambert's model provides a reasonable fit to the shape of data in all cases, and quite a good fit for S1 and S3. Lambert parameters cluster at approximately -13 dB for S1 and S3, and -18 dB for S2 and S4. On average, scattering strength tends to decrease as grazing angle decreases. Variability in these measurements is greater than in the glacially abraded surfaces, and exhibits some deterministic features, such as correlated fluctuations and jumps. These correlated fluctuations are likely the result of poor estimates of the local slope, and the fact that the surface is composed of facets of various orientations. If only a few facets are included in the ensemble (even though 1000s of pixels may be included), the measurements could exhibit these correlated fluctuations. Inclusion of a larger ensemble of facets would likely result in reduced variability. At 20$^\circ$ grazing angle, the scattering strength measurements from abraded surfaces fall between -33 and -26 dB, whereas measurements from plucked surfaces fall between -30 and -20 dB. Compared to previous measurements of seafloor scattering compiled in Chapter 12 of \cite{jackson_richardson_2007}, the plucked measurements overlap with the upper range of measurements from mud and sand, and the abraded measurements overlap with the lower end of those measurements. Compared with previous measurements from rock seafloors, the plucked measurements overlap with those by \cite{mckinney_anderson_1964} and \cite{soukup_gragg_2003}. The remaining three measurements \cite{eyering_etal_1948,urick_1954,APL_9407} exceed all scattering strength measurements reported here. Bragg scattering is responsible for the levels and angular dependence of the cross section of the glacially abraded surfaces. This inference can be made since the small-slope approximation is in good agreement with measured data, and as stated above, this model reduces to perturbation theory at low grazing angles and these roughness parameters \citep{gragg_etal_2001}. The low scattered levels result from low-levels of roughness at the Bragg wavenumber components. 
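Returning to the plucked surfaces, the single-parameter Lambert fit used in Fig.~\ref{fig:ssPlucked} can be illustrated with a short sketch (hypothetical data; the fit is performed here in the linear cross-section domain, which is one plausible reading of the least-squares fit described above, and the numbers below are purely illustrative):
\begin{verbatim}
import numpy as np

def fit_lambert_mu(grazing_deg, ss_dB):
    """Closed-form least-squares estimate of mu in sigma = mu*sin(theta)^2,
    given scattering strength in dB; returns 10*log10(mu)."""
    sigma = 10.0 ** (np.asarray(ss_dB) / 10.0)            # dB -> linear
    s = np.sin(np.radians(np.asarray(grazing_deg))) ** 2
    mu = np.sum(s * sigma) / np.sum(s * s)                # minimizes sum (sigma - mu*s)^2
    return 10.0 * np.log10(mu)

# Hypothetical example: noise-free data drawn from a -13 dB Lambert law
theta = np.array([15.0, 18.0, 20.0, 22.0, 25.0])
ss = -13.0 + 20.0 * np.log10(np.sin(np.radians(theta)))
print(fit_lambert_mu(theta, ss))                          # ~ -13.0
\end{verbatim}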
Neither perturbation theory nor the small-slope approximation accurately predicts the scattering strength obtained from plucked surfaces. At 20$^\circ$ grazing angle, perturbation theory overpredicts the scattering strength by 8 dB, and small-slope overpredicts scattering strength by 19 dB. These models are expected to be inaccurate for these roughness parameters at 100 kHz because the rms height, $h$, is large compared to the acoustic wavelength, and the rms slope is greater than unity. Higher-order terms in the perturbation series and small-slope approximation are unlikely to match these measurements because these terms tend to be positive-definite and would only increase the data-model mismatch. Although it is possible that some higher-order terms are negative, their net effect is typically positive-definite, as discussed in \cite{thorsos_1990}. An alternative to higher-order scattering can be motivated by examination of the SAS images of the plucked areas in Fig.~\ref{fig:sasSvabergs}. The low amplitude pixels appear to correspond to horizontal facets, and bright pixels appear to correspond to facets facing directly back at the sonar system. The scattered field due to small-scale roughness from each individual facet appears to be modulated by the large-scale facet structure of the surface. This situation could be described using the composite roughness, or two-scale scattering model \citep{brown_1978, mcdaniel_gorman_1983, jackson_composite}. Note that since the power spectrum of the plucked surface is a power-law, a separation of scales cannot be unambiguously defined in the spatial frequency domain, as discussed by \cite{jackson_composite}. However, the system resolution is high enough to resolve the facet structure, and a separation of scales could be defined in the spatial domain. \section{CONCLUSION} \label{sec:conclusions} Acoustic scattering measurements from two contrasting roughness features of glacially eroded rock outcroppings were made using a SAS system at 100 kHz. A method by which scattering strength is estimated from SAS data is detailed, as well as a method to estimate the effective system calibration parameters. This calibration technique effectively normalizes out any bias that is consistent for all measurements, including the effects of partial coherence of rough surface scattering. Scattering strength estimates from five different glacially abraded areas of the seafloor were obtained, and varied between -33 and -26 dB at 20$^\circ$ grazing angle. To within the uncertainty of the measurement system, these measurements are consistent with each other and are in good agreement with the small-slope scattering model. Scattering strength estimates from four glacially plucked areas of the seafloor were obtained, with scattering strengths varying between -30 and -20 dB at 20$^\circ$. Scattering from glacially plucked surfaces matches well with the shape of a Lambertian model, with two surfaces having $\mu\approx-13$ dB, and the others having $\mu\approx-18$ dB. Supporting environmental information in the form of geoacoustic and roughness properties was presented to support the acoustic system calibration, and to provide inputs to the small-slope model. \section*{ACKNOWLEDGEMENTS} \label{sec:ack} The authors would like to thank the crew of the HU Sverdrup II, operators of the HUGIN AUV, and researchers at the Norwegian Defence Research Establishment for conducting the acoustic scattering experiment and collecting the data. 
The first author is grateful for helpful discussions with Daniel Brown, Sridhar Anandakrishnan, and Roy Hansen. This work was supported by the U.S. Office of Naval Research under grant N-00014-13-1-0056, and by the National Defense Science and Engineering Graduate Fellowship.
\section{Introduction} Young, forming giant planets are surrounded by their circumplanetary disks (CPDs), where their regular satellites will eventually form. Regardless of whether the planet formed via the core accretion or the disk instability scenario, the circumplanetary disk forms in the last phase of planet formation \citep[e.g.][]{Kley99,Lubow99,SB13,AB09a,AB09b}. The characteristics of circumplanetary disks have been studied numerically for two decades now \citep[e.g.][]{Kley99,Lubow99,DAngelo03,AB09a,AB09b,Machida10,AB12,Tanigawa12,SB13,Gressel13,Szulagyi14,Fujii14,Perez15,DAngelo15,Tanigawa14,Zhu16}. Like circumstellar disks, they vary widely in mass, from $10^{-4}$ $\rm{M_{planet}}$ (\citealt{DAngelo03}; G. D'Angelo private communication; \citealt{Gressel13}) up to $\sim$ $\rm{M_{planet}}$ \citep{SB13,Szulagyi17GI} if the planet formed via gravitational instability. Because the CPD is constantly fed from the circumstellar disk \citep[e.g.][]{Szulagyi14,FC16}, the mass of the CPD depends on the circumstellar disk mass (apart from the planetary mass; \citealt{Szulagyi17gap}). The CPD's temperature depends on the semi-major axis of the planet, the local density and opacity, the mass of the planet and its age, and the viscosity of the gas, among other factors \citep{DAngelo03,PN05,AB09b,AB12,Gressel13,Szulagyi17gap}. The temperature of the CPD varies from thousands of Kelvins in the forming planet's vicinity to a few hundred Kelvins in the outer subdisk. Of course, the CPD evolves in time, similarly to the circumstellar disk, getting lighter and cooler during its lifetime \citep{Szulagyi17gap}. The characteristics of circumplanetary disks affect their detectability, hence creating mock observations from hydrodynamical simulations is a useful tool to plan and interpret real observations. Planet-disk interactions, such as gaps, have been studied on synthetic images \citep{Dipierro15, Szulagyi17alma, DSHARP}. Circumplanetary disks have been predicted to be detectable with ALMA and the VLA \citep{Szulagyi17alma,IsellaTurner18,Zhu18}. Mock images of polarized light from circumstellar disks have helped us understand what polarized light observations can reveal about circumstellar disk characteristics \citep{Dong12}. Synthetic observations of scattered light shed light on how planet-disk interactions -- especially spirals -- are expected to look \citep{Dong15,Dong15b,FD15,Dong16}. It has also been suggested that polarized light from the circumplanetary disk dust could be detected in favorable circumstances \citep{Stolker}. The detectability of the CPD in the near-infrared as well as via spectral energy distributions was investigated in \citet{Szulagyi19}. Actual detections of circumplanetary disks only began in 2019: two possible candidates from ALMA continuum emission in the PDS70 disk \citep{Isella19}, as well as an infrared excess detection of circumplanetary material around PDS70b \citep{Christiaens19}. With only these few possible detections, the characterization of circumplanetary disks from observations is not yet possible. Unlike circumplanetary disks, circumstellar disks have been thoroughly characterized from observations during the last decade thanks to optical/near-IR instruments like VLT/SPHERE and GPI \citep[e.g.,][]{Garufi2017b, Rapson2015} and to the (sub-)mm interferometer ALMA \citep[e.g.,][]{Andrews2018}. 
Among the near-IR observations, the most successful technique to directly image circumstellar disks is currently polarized differential imaging \citep[PDI,][]{Kuhn2001, Apai2004}. This technique allows a very good removal of the strong stellar flux by separating the polarized light (mostly scattered light from the disk) from the unpolarized light (mainly stellar light). Therefore, most of the available high-resolution near-IR maps of circumstellar disks trace the polarized scattered light from the disk surface. In principle, these polarized scattered light observations also open the way to detect the circumplanetary disk in the same way, although this is yet to be proven observationally. In this paper we combine temperature-included (i.e. radiative) 3D gas hydrodynamic simulations with Monte-Carlo radiative transfer to create mock observations of the circumplanetary disk in scattered light with and without polarization. In the first paper of this series, we looked at the circumplanetary disk observability at sub-mm/radio wavelengths \citep{Szulagyi17alma}. In the second paper, we reviewed the case for near-infrared and spectral energy distributions \citep{Szulagyi19}. In \citet{SzE20} we made predictions of hydrogen recombination line fluxes with extinction and determined the planet-mass/planet-accretion versus H-alpha, Paschen-beta, and Brackett-gamma line luminosity relationships. \section[]{Methods} \label{sec:numerical} We used a three-step process for creating the mock images presented in this work. First, we ran 3D radiative hydrodynamic simulations of the circumstellar disk with a forming planet embedded within (Sect. \ref{sec:hydro}). Then we used the RADMC-3D radiative transfer tool to create wavelength-dependent images of the systems {at 1.245 microns} with polarization (Sect. \ref{sec:radmc3d}). Finally, we convolved the images with a diffraction limited PSF for the VLT/SPHERE instruments and created polarization maps (Sect. \ref{sec:convol}). \subsection{Hydrodynamic Simulations} \label{sec:hydro} The hydrodynamic simulations in this study are the same as in our previous paper \citep{Szulagyi19} of the series. In brief, we had a circumstellar disk with a mass of $\sim 10^{-2} \mathrm{M_{Sun}}$ between 20 and 120 AU around a solar-mass star, where a planet is forming at 50 AU. In four different simulations, the planet masses were chosen to be a Saturn-mass, 1 Jupiter-mass, 5 Jupiter-masses and 10 Jupiter-masses (i.e. only one planet present in each hydrodynamic run). We used the JUPITER code, developed by F. Masset and J. Szul\'agyi \citep{Szulagyi14,Szulagyi16a}, to carry out the hydrodynamic calculations; it solves not only the Euler equations but also the radiative transfer with the flux-limited diffusion approximation (two-temperature approach, \citealt[e.g.][]{Kley89,Commercon11}). {The heating processes include adiabatic compression, viscous heating, shock heating and stellar irradiation, while the cooling processes are adiabatic expansion and radiative diffusion. The main source of heating in the circumplanetary disk is the accretion process \citep{Szulagyi16a}, as the gas tries to fall onto the planet, leading to adiabatic compression in this region that heats up the compressing gas. Furthermore, in the case of the higher-mass planets, the accretion shock front on the circumplanetary disk surface \citep{SzM17} is also strongly heated. Viscous heating in the CPD is secondary, with the chosen low viscosity value. 
Stellar irradiation is computed based on solar flux, but its effect on the CPD is negligible. } The viscosity was a constant kinematic viscosity of $10^{-5} \mathrm{a_{p}}^2\Omega_p$, where $ \mathrm{a_{p}}$ is the semi-major axis and $\Omega_p$ denotes the orbital frequency of the planet. Given that we were particularly interested in the circumplanetary region, where high resolution is necessary to get the disk characteristics (density, temperature, velocities) right, we used mesh refinement in this region. This meant that while the circumstellar disk was simulated at a lower resolution (680 cells azimuthally over $2\pi$, 215 cells radially between 20 and 120 AU and 20 cells in the co-latitude direction over 7.4 degrees opening angle from the midplane), the Hill-sphere of the planet was well resolved with four levels of refinement. Each level doubled the resolution in each spatial direction, hence the final resolution in the planet vicinity was 0.029 AU. While the dust was not explicitly simulated within the hydrodynamics, its effect on the temperature of the disk is taken into account through the dust opacities (with the limit of assuming a constant dust-to-gas ratio of 1\%). The opacity table was equivalent to what was used in \citet{Szulagyi19}, and included both gas and dust opacities. \subsection{RADMC-3D post-processing} \label{sec:radmc3d} The RADMC-3D \citep{Dullemond12}\footnote{\url{http://www.ita.uni-heidelberg.de/~dullemond/software/radmc-3d/}} radiative transfer tool was used to create wavelength-dependent intensity images from the hydrodynamic simulations. {We used $5\times10^7$ photons for these Monte-Carlo runs. We ran RADMC-3D with the flux conservation option, which ensures that the total flux of the images is conserved regardless of the image resolution. We produced two sets of images: \begin{itemize} \item 5000x5000 pixel resolution images of the entire circumstellar disk, run multiple times with planet positions of 0 deg, 45 deg, 90 deg, and inclinations of 0 deg, 30 deg, 60 deg and 90 deg. \item 1000x1000 pixel images of the Hill-sphere (using the \url{zoomau} command), run 20 times with randomly changing seed numbers and averaged at the end. We calculated the variance between these 20 runs and verified that it is near zero, so convergence was reached. These CPD region images were produced for the four different inclinations (0 deg, 30 deg, 60 deg and 90 deg). \end{itemize} } The dust-density files were created from the gas density (i.e. assuming that these micron-sized dust grains are strongly coupled to the gas), by multiplying the gas density in each cell with the dust-to-gas ratio of 1\%. We assumed thermal equilibrium, hence we set the dust temperature equal to the gas temperature, except that dust evaporation above 1500 K was taken into account. This meant that in the cells hotter than this limit, the dust density was set to zero, to be consistent with the radiative hydrodynamic simulation, where the opacity table contains the dust evaporation as well. {We used the temperature calculated by the hydrodynamic simulation for the dust, because it includes shock heating, accretional heating, and viscous heating, which are very important in the circumplanetary disk region and result in a hot planet vicinity. 
We compared our results with another method, in which the dust temperature is calculated with a thermal Monte Carlo simulation using \url{radmc3d mctherm}; however, this method does not account for the main heating mechanisms that take place in the planet vicinity and hence results in different mock images (see discussion in Sect. \ref{sec:discussion})}. { The distance of the circumstellar disk was assumed to be 100 parsec for all the calculations.} The hydrodynamic simulations cannot handle well the optically thin, low-density regions of the circumstellar disk, such as the disk atmosphere. In the hydro simulations the disk opening angle was only 7.4 degrees, but real circumstellar disks have a larger opening angle. Therefore we had to extend the circumstellar disk in the vertical direction using an extrapolation technique, before we ran the RADMC-3D calculations. The extrapolation was as follows. First, we fitted Gaussian functions to the density field in each cell column (z-direction) separately, so that the vertical extent of the circumstellar disk was 2.5 times larger than in the original hydro simulation. Second, in this circumstellar disk atmosphere region, we kept the temperature as it is in the last (optically thin) co-latitude cell. Here the temperature is high due to stellar irradiation, much higher than in the bulk of the circumstellar disk (midplane regions). This meant that the temperature in the circumstellar disk atmosphere was constant with co-latitude. The dust opacities were identical to those used in \citet{Pohl17}. The dust was assumed to be a mixture made of silicates (\citealt{draine2003b}), carbon (\citealt{zubko1996}), and water ice (\citealt{warren2008}) with fractional abundances of 7\%, 21\%, and 42\%, consistent with \citet{ricci2010}. The remaining 30\% was vacuum. The opacity of the mixture was determined by means of the Bruggeman mixing formula. The absorption and scattering opacities, $\kappa_{\mathrm{scat}}$ and $\kappa_{\mathrm{abs}}$, as well as the scattering matrix elements $Z_{ij}$ were calculated for spherical, compact dust grains with Mie theory using the BHMIE code of \citet{bohren1983}. The grain sizes were between 0.01 micron and 150 micron, with a power-law index of -3.5. \subsection{Polarization maps} \label{sec:convol} To compare our simulations to the available observations, we first convolved the images with a rotationally symmetric 2D Gaussian Point-Spread-Function, with a full width at half maximum of $1.22\cdot\lambda/D$, where $\lambda$ is the wavelength and $D$ is the mirror diameter of 8.2 meters (equivalent to the VLT mirror diameter). RADMC-3D provides the set of Stokes parameters $I, Q, U, V$. The polarized intensity map $PI$ was obtained through: \begin{equation} PI = \sqrt{Q^2+U^2} \label{eq:stokespi} \end{equation} An alternative treatment of the Stokes parameters, commonly used in observational work, is the creation of the tangential (sometimes called radial or polar) parameters $Q_{\phi}$ and $U_{\phi}$ \citep{Canovas2015,Monnier19}. 
These are defined as: \begin{equation} \begin{split} Q_{\phi} &= +Q \cos\,2\phi + U \sin\,2\phi \,, \\ U_{\phi} &= -Q \sin\,2\phi + U \cos\,2\phi \, \label{eq:stokesphi} \end{split} \end{equation} \noindent with $\phi$ being the angle with respect to the stellar position (x$_{0}$,y$_{0}$) calculated as: \begin{equation} \phi = \mathrm{arctan} \frac{x-x_0}{y-y_0}\, \label{eq:azimuth} \end{equation} \noindent By construction, $Q_{\phi}$ corresponds to $PI$ in the scenario of perfectly centro-symmetric scattering {and single scattering}, whereas $U_{\phi}$ is ideally expected to only contain noise. \section{Results} \label{sec:results} We obtained $I,\ PI,\ Q_{\phi}$ and $U_{\phi}$ maps {for the different inclinations (0, 30, 60, 90 degrees) and different planetary positions (0, 45, 90 degrees)}. The J-band images of zero inclination are shown in Fig. \ref{fig:scat12_incl0}, while the other inclinations are shown in Appendix \ref{All_maps}. Fig. \ref{fig:scat12_incl0} compares the simulations of the four planetary masses considered: 0.3 $\rm{M_{Jup}}$, 1 $\rm{M_{Jup}}$, 5 $\rm{M_{Jup}}$, 10 $\rm{M_{Jup}}$. On the images, the planet (and circumplanetary disk) always lies to the East at 50 AU from the central star. \begin{figure*} \includegraphics[width=18cm]{Image_J0.pdf} \caption{Polarized scattered light images at 1.245 microns (J band) with 0$^\circ$ inclination for the 10, 5, 1, and 0.3 Jupiter-mass cases (from top to bottom). The columns are the $I$, $PI$, $Q_\phi$, $U_\phi$, and a zoom of the $Q_\phi$ (top) and $U_\phi$ maps (bottom) on the CPD region, respectively. The $PI$, $Q_\phi$, and $U_\phi$ images have the same color stretch to highlight their relative flux brightness, whereas the zoomed maps have a harder stretch with negative values shown in black. The white box indicates the zoomed area of the last column. The yellow and red lines on the last $Q_\phi$ map highlight the region used to calculate the contrast of the CSD and CPD, respectively. {The assumed distance is 100 pc.}} \label{fig:scat12_incl0} \end{figure*} From these images, the main circumstellar disk (CSD) is always very bright in $PI$ and its morphology resembles that of the $I$ images. The circumplanetary disk is visible in the first two cases only, that is with a planet of 10 and 5 $\rm{M_{Jup}}$. Similar considerations apply to the $Q_{\phi}$ images, and these maps look very similar to the $PI$. On the other hand, the $U_{\phi}$ images do not show any significant signal except around the circumplanetary disk in the first case. The CPD is visible only in the high-mass planet cases because {in these cases the accretion shock front on the surface of the CPD is strong (the velocity of the incoming accretion flow is super-sonic), creating a hot, bright surface \citep{SzM17}. In the smaller mass planet cases, the accretion flow is slower and sub-sonic, hence the shock is not as strong and not as hot (see \ref{sec:discussion} for further discussion on this point). This accretion shock is created by the incoming circumstellar disk material, through the meridional circulation, a mass transfer between the circumstellar and circumplanetary disks \citep{Szulagyi14,FC16}. The accretion shock on the circumplanetary disk surface hence contributes to the observability and observational appearance of the CPD.} What is described above for the 0 inclination case (Fig. \ref{fig:scat12_incl0}) also applies to the other images created for the other inclinations (see Appendix \ref{All_maps}). 
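For reference, the map combinations defined in Eqs. (\ref{eq:stokespi})-(\ref{eq:azimuth}) can be assembled from the Stokes frames with a few lines of code (a schematic sketch; \texttt{Q}, \texttt{U} are 2D arrays and \texttt{x0}, \texttt{y0} the stellar position in pixels, all placeholder names):
\begin{verbatim}
import numpy as np

def polarization_maps(Q, U, x0, y0):
    """Return PI, Q_phi and U_phi maps from Stokes Q and U frames,
    following the definitions above; (x0, y0) is the stellar position."""
    ny, nx = Q.shape
    y, x = np.mgrid[0:ny, 0:nx]
    phi = np.arctan2(x - x0, y - y0)     # angle w.r.t. the stellar position
    PI = np.sqrt(Q**2 + U**2)
    Q_phi = Q * np.cos(2 * phi) + U * np.sin(2 * phi)
    U_phi = -Q * np.sin(2 * phi) + U * np.cos(2 * phi)
    return PI, Q_phi, U_phi
\end{verbatim}
For a perfectly centro-symmetric (and singly scattered) pattern, $Q_\phi$ reproduces $PI$ and $U_\phi$ contains only noise, which is the property exploited below to separate the circumstellar and circumplanetary disk signals.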
Comparing the different inclinations, the only obvious differences are that the circumplanetary disk becomes less and less evident with increasing inclination, and that some signal is recovered from the $U_{\phi}$ image when the inclination is high, in agreement with the theoretical prediction by \citet{Canovas2015}. \begin{figure} \includegraphics[width=8.5cm]{Radial_profile.pdf} \caption{Radial profile of the polarized-to-stellar light contrast of $PI$, $Q_{\phi}$ and $U_{\phi}$ as described in Sect.\,\ref{sec:contrast}. The profiles obtained from three different planetary masses are displaced along the y-axis for a better visualization. The vertical line indicates the planet location. {The $PI$ and $Q_{\phi}$ profiles are coincident except where the 5 $M_{\rm jup}$ mass planet is detected} (see Sect.\,\ref{sec:CSDvsCPD}).} \label{fig:profile} \end{figure} \subsection{Polarized contrast} \label{sec:contrast} In this section, we provide a more quantitative analysis of the maps in Fig.\,\ref{fig:scat12_incl0}, as well as of those shown in Appendix \ref{All_maps}. Measuring the amount of scattered light from real observations is a challenging task because of the difficulties in flux-calibrating the images and because the disk flux is directly dependent on the stellar flux. Some authors {(e.g., \citealt{Avenhaus2018, Garufi18})} quantified the near-IR polarized light from the disk in relation to the stellar flux, so as to alleviate the dependence on the stellar brightness. In particular, one way to do this is by dividing the observed polarized flux at a certain disk radius, $F_{\rm pol}(r)$, by the stellar flux incident on that disk region, $F_*/4\pi r^2$. This number contains information on both the intrinsic albedo of particles (see e.g., \citealt{Mulders13}) and on the fraction of photons scattered toward the observer (see e.g., \citealt{Stolker16}) and is thus sometimes referred to as (polarized) geometric albedo, or contrast. This measurement is available for a relatively large number of real circumstellar disks \citep[see][]{Garufi17,Garufi18}. From our simulations, we obtained the aforementioned contrast along a {4 au-large} radial cut oriented toward the planet location. This profile is obtained from the $PI,\ Q_{\phi}$ and $U_{\phi}$ images as well, and is shown in Fig.\,\ref{fig:profile} for some illustrative cases. $Q_{\phi}$ almost always lies on the profile of total polarized intensity, and at the planet location there is zero excess from the presence of the CPD in the case of Jupiter- and Saturn-mass planets. For the higher mass planets ($\geq$ 5 $\rm{M_{Jup}}$), at the CPD location there is a contrast of {0.15 and 0.6} for $Q_{\phi}$ and $PI$, respectively. A locally different flux recorded in the $Q_\phi$ and $PI$ maps indicates that the pattern of the polarization diverges from centro-symmetric in the circumstellar disk (see also Sect.\,\ref{sec:CSDvsCPD}). This can be appreciated by plotting the polarization angles $\psi=0.5\arctan(U/Q)$ on top of the $Q_\phi$ maps, as done in Fig.\,\ref{fig:angles}, {zoomed to the Hill-sphere region. From the image, it is clear how the CPD changes the polarization vectors locally in the 5 and 10 M$_{\rm jup}$ planet cases. Furthermore, in the 10 M$_{\rm jup}$ case the deviation includes not only the CPD ($\sim$ half of the Hill-sphere), but the surrounding area as well (the spiral wakes).} \begin{figure*} \includegraphics[width=18cm]{Image_angle.pdf} \caption{Polarization vectors overplotted on the $Q_\phi$ maps {of the close region around the CPD. 
In the first two cases, a centro-symmetric pattern is visible. In the last two cases, polarization vectors are vertical, in line with the local pattern of the protoplanetary disk}.} \label{fig:angles} \end{figure*} We also extracted the contrast from the circumstellar and from the circumplanetary disk by averaging the contribution from their respective regions (0.04" for the CPD, 0.25" for the CSD, lying exactly within the inner and outer edge of the disk). The values thus obtained for the circumstellar disk from the different simulations fall within a narrow interval (from 1.5\% to 3.2\%). Compared to real disks, these numbers are realistic, if high, since the brightest disks ever observed in PDI reach contrasts of up to $\sim2\%$ (see \citealt{Garufi17}). {This shows that if the CPDs have a strong enough accretion-shock surface on them, they could be surprisingly bright in some cases. } On the other hand, the contrast obtained around the circumplanetary disk spans an enormous range (from {4500\%} to $\lesssim0.1\%$). From the $PI$ image, the contrast of the 10 $\rm{M_{Jup}}$ case is always larger than 1 (i.e., more photons than those incident from the star are detected), indicating a strong additional source of photons to be scattered {(the hot circumplanetary disk shock surface)}. This observational scenario would by itself be natural, robust evidence of a circumplanetary disk. However, for all the other planetary mass cases that we studied, the detection of the circumplanetary disk in polarized light is less straightforward. Observationally, we can define a formal threshold of 0.1\% below which the signal is mostly noise \citep{Garufi17}. According to this criterion, {7 of the remaining 9 cases (3 planet masses, 3 inclinations)} should still be regarded as detections. We caution, however, that our images do not contain extra sources of noise, which could make the detections even more difficult than described here. So these detection numbers should be regarded as idealistic, best-case scenarios. We must nonetheless consider the effect of the circumstellar disk itself, which may still be present at the planet location (in particular for the 0.3 $\rm{M_{Jup}}$ case, where the disk gap is shallower than for the more massive planets) and leaves the same imprint on the scattered-light images. In this regard, we noticed that the contrast around the planet decreases toward smaller masses but then increases again for the lowest-mass case. \subsection{Circumstellar versus circumplanetary disk signal} \label{sec:CSDvsCPD} Our simulations show that it is formally possible to distinguish between the scattered light from the circumstellar and from the circumplanetary disk by comparing the contrast from the $PI$ and $Q_{\phi}$ images. In fact, for the two largest-mass planet scenarios the polarized contrast around the planet calculated from these two maps differs significantly {(up to a factor 50)}, whereas in the 1.0 $\rm{M_{Jup}}$ case {only a minor ratio ($\sim30\%$) is visible, and no difference is appreciable in the 0.3 $\rm{M_{Jup}}$ case.} Conversely, for all our simulations the circumstellar disk signal from the $PI$ and $Q_{\phi}$ images is very similar (always within $10\%$). This behaviour can be appreciated from Fig.\,\ref{fig:ratios}. Strong discrepancies between $PI$ and $Q_{\phi}$ are expected when the scattered light deviates from a centro-symmetric pattern (see Fig.\,\ref{fig:angles}), which is the assumption under which $Q_{\phi}$ is constructed (see Eq. \ref{eq:stokesphi}). 
In the presence of a circumplanetary disk, photons are not expected to be scattered in such a pattern since the star is no longer the only source of photons (see Fig. \ref{fig:scat12_incl0} for the 10 Jupiter-mass case). Therefore, the comparison of the polarized contrast from the $PI$ and $Q_{\phi}$ images is a simple but potentially powerful way to discriminate the presence of a circumplanetary disk. \begin{figure} \includegraphics[width=\columnwidth]{CPD_PI_Qphi.pdf} \caption{ {$PI$/$Q_{\phi}$ contrast ratios of circumstellar and circumplanetary disks for different planet masses and different inclinations}. From these simulations, in the 10 and 5 $\rm{M_{Jup}}$ cases, {as well as marginally in the 1 $\rm{M_{Jup}}$ case,} the signal from the circumplanetary disk can be observationally disentangled from the circumstellar disk signal.} \label{fig:ratios} \end{figure} {Comparing the $PI$/$Q_{\phi}$ contrast ratios of the CPD for various position angles of the planet (0, 45, 90 degrees) does not show a clear trend in either direction, hence we only show position angle 0 in Fig. \ref{fig:ratios}. However, for all position angles considered, we found that the $PI$/$Q_{\phi}$ contrast ratio decreases with increasing inclination angle.} \section{Discussion} \label{sec:discussion} {The results of Sect.\,\ref{sec:results} suggest that the parallel employment of $PI$ and $Q_\phi$ maps and of multiple wavebands may help reveal a circumplanetary disk. In fact, while the light scattered off by the CPD could easily be confused for a substructure of the circumstellar disk, its polarization pattern is different. This results in a local divergence between the tangential and the total component of the polarized light, as indicated by the $Q_\phi$ and $PI$ maps respectively. In this work, we showed that this effect is appreciable for massive planets, with masses of at least $5\,M_{\rm jup}$.} {As of today, the paucity of planets detected in circumstellar disks does not allow us to test these predictions. Nonetheless, the ideal threshold of $5\,M_{\rm jup}$ above which the scattered light from a CPD becomes detectable is close to the common mass upper limit determined in a number of objects \citep{Claudi19,Maire17,Mesa19,Mesa19b}, indicating that this approach could in principle be used in parallel to the typical differential techniques to detect the planetary thermal light.} {The results presented here are greatly affected by the temperature calculation. We used the temperatures calculated by the hydrodynamic simulation (via the flux-limited diffusion approximation), where we assumed perfect thermal equilibrium between the micron-sized dust and the gas (Fig. \ref{fig:temp} top-left panel). The more traditional method to calculate the dust temperature is a thermal Monte Carlo computation, which can be done with RADMC-3D's \url{mctherm} command (Fig. \ref{fig:temp} top-right panel; calculated with $10^6$ photon packages). However, the latter method does not include the accretional heating due to adiabatic compression, the shock heating due to accretion shock fronts, and the viscous heating, all of which heat up the planet vicinity. Hence, the circumplanetary region does not show up on the mctherm temperature maps (Fig. \ref{fig:temp} top-right panel), which also means that this region would not be very visible in mock observations. The difference between the two temperature calculations is shown in the bottom panels of Fig. \ref{fig:temp}. Everywhere else in the simulation box (i.e. 
in the circumstellar disk) the temperature difference between the two methods is always smaller than 10-20 K; the only region where the difference is at least 50 K (and ranges up to a 1600 K difference) is the accreting planet's vicinity. This test shows that for creating mock observations, the way the temperature is calculated is very important for the outcome.} \begin{figure*} \includegraphics[width=8.5cm]{hydro_temp_midplane.png} \includegraphics[width=8.5cm]{mctherm_midplane.png} \includegraphics[width=8.5cm]{temp_difference_midplane.png} \includegraphics[width=8.5cm]{temp_diff_50K.png} \caption{ {Comparison of the temperature calculations in the 10 Jupiter-mass case. Top-left panel: midplane temperature calculated by the hydrodynamic simulation, which includes adiabatic compression, shock heating and viscous heating. Top-right panel: midplane temperature field calculated by RADMC-3D with the thermal Monte-Carlo method (\url{mctherm}). Bottom-left panel: difference between the temperature calculated by the hydrodynamic simulation and that from RADMC-3D's Monte-Carlo calculation; clearly there is a large difference in the circumplanetary region. Bottom-right panel: the region of the entire simulation box where the temperature difference between the two methods is at least 50 K -- clearly this is only the circumplanetary region (a vertical slice of this area is shown). The planet and the inner circumplanetary disk form the orange region in the middle; the shock front on the circumplanetary disk created by the accretion stream is also visible as a horizontal yellow line above the planet.}} \label{fig:temp} \end{figure*} {In the 5 and 10 Jupiter-mass planet cases the circumplanetary region pops up, and some spurious photons can be seen to originate from this region (Fig. \ref{fig:scat12_incl0}). Closer inspection showed that the accretion shock front created on the surface of the circumplanetary disk (Fig. \ref{fig:shock}) due to the incoming mass influx from the circumstellar disk (via the meridional circulation; \citealt{Szulagyi14}) is the origin of the noisiness in the mock images in Fig. \ref{fig:scat12_incl0}. This shock front is strong and hot ($>1000$\,K) enough only in the 5 and 10 Jupiter-mass cases, not for the smaller planets. This is due to the fact that the incoming accretion flow velocity scales with the planetary mass (nearly free-fall velocity). The higher velocity of the influx creates stronger and hotter shock fronts on the circumplanetary disk surface with increasing planetary mass \citep{Szulagyi17gap}. This shock front helps the observability of the circumplanetary disk, hence it is also important to include it self-consistently when creating mock observations of circumplanetary disks.} \begin{figure} \includegraphics[width=\columnwidth]{hydro_temp_shock.png} \caption{ {Hydrodynamic simulation calculated temperature field (thresholded) in the circumplanetary disk (10 Jupiter-mass case), where the accretion shock front ($>1000$\,K) on the disk surface is visible as the yellow, orange and red areas above the planet. The noisiness of the circumplanetary region in the mock observations (e.g. Fig. \ref{fig:scat12_incl0}) is due to this shock front. However, this shock front also helps the detection of the circumplanetary disk.}} \label{fig:shock} \end{figure} {The results also depend on the optical depth. 
In this work we took into account dust evaporation above the silicate evaporation temperature (1500 K), which meant that the dust density was set to zero where the temperature rose above this limit (this region is near the planet, e.g. in the shock front on the circumplanetary disk surface). We made a test run for the 10 Jupiter-mass planet case (Fig. \ref{fig:comp}), in which we compared cases with and without dust evaporation taken into account, for the Monte-Carlo temperature calculation versus the temperatures resulting from the hydrodynamic simulation. Fig. \ref{fig:comp} shows that when the dust evaporation is not included, the circumplanetary disk region (and its surface shock front) does not show up on the mock images. This is partially due to the different optical thickness -- if evaporation is taken into account and the dust no longer contributes to the local optical depth, the region becomes more optically thin (gas-only case). This again highlights that the modeling of the planetary vicinity should be done carefully, incorporating the various physical effects, when making observational predictions.} \begin{figure*} \includegraphics[width=18cm]{compare_evap_noevap_mctherm_hydrotemp.pdf} \caption{ {Comparison between the RADMC-3D total intensity images when the dust temperature was calculated with the hydrodynamic simulation's radiative transfer module (top row), which includes shock heating, accretional heating due to adiabatic compression, and viscous heating, and with RADMC-3D's \url{mctherm} Monte-Carlo method (bottom row). In the left column the dust evaporation was not taken into account, while on the right evaporation was included (i.e. the dust density was set to zero where the temperature rose above 1500 K). This test run was done with $10^6$ photon packages at 1.245 microns, on the 10 Jupiter-mass simulation; units are Jy/pixel, scaled to a distance of 100 pc.}} \label{fig:comp} \end{figure*} Our models only cover part of the parameter space. We assumed a fixed dust-to-gas ratio of 0.01 even in the circumplanetary disk \citep{DSz18}; however, real disks can have smaller or larger values than this \citep[e.g.][]{YG05,Marel13,DD14,Birnstiel12,WB14,Ansdell16}, which might affect the results. In this work we have considered the planets to be 50 AU from the star, but the circumplanetary disk-circumstellar disk contrast can be very different if the circumplanetary disk is at another distance. Circumplanetary disks closer to the star tend to be more optically thick and hotter than the more distant ones. For the circumstellar disk mass we considered an average value of 0.01 $\rm{M_{Sun}}$, and the radial extent was between 20-120 AU, similar to a transitional disk with an inner cavity. While the circumplanetary disk mass scales linearly with the circumstellar disk mass \citep{Szulagyi17gap}, the changes in mass will also result in a different optical depth, which can affect the results described here. The large, optically thin inner 20 AU can also affect the results. The hydrodynamic simulations did not include magnetic fields, e.g. the fields of the disks, which might affect the dust density distribution \citep{Gressel13}. Since the planet interior structure was not part of the simulation, any temperature in the planet region is a lower limit to the radiation from the planet. This will affect the radiation field in/from the CPD, as well as its spatial structure. \section{Summary} In this work we investigated the polarized scattered light detectability of circumplanetary disks surrounding nascent planets. 
We ran hydrodynamic simulations with mesh refinement to sufficiently resolve the circumplanetary disk. We used hydrodynamics with radiative transfer included, in order to estimate the temperature realistically. Then, we post-processed the simulations with the RADMC-3D Monte-Carlo radiative transfer software to create polarized light images in the J band. We then convolved the images with a diffraction-limited PSF, assuming an 8.2 meter mirror like that of the VLT. We considered different planetary mass cases: Saturn, 1, 5, 10 $\rm{M_{Jup}}$, different disk inclinations of 0, 30, 60, 90 degrees, { and various planetary locations (0, 45, 90 degrees).} The planets were embedded in a 0.01 $\rm{M_{Sun}}$ circumstellar disk, 50 AU away from their star, which was assumed to be a Sun-equivalent. Our $I,\ PI,\ Q_{\phi}$ and $U_{\phi}$ images revealed that the circumplanetary disk detection is only possible in the case of very massive planets (5 and 10 $\rm{M_{Jup}}$), although it is highly dependent on how optically thick the circumplanetary disk is (i.e. how much dust it contains, what the temperature is there, { and whether the dust is evaporated or not). In these high mass planet cases the accretion shock front on the surface of the circumplanetary disk \citep{SzM17} is so strong and luminous that it helps the observability of this subdisk. Therefore, the inclusion of this shock front is important when modelling the observability of the circumplanetary disk.} The circumplanetary disk detection is challenging in polarized light, not only because of sensitivity but also due to the contrast with the circumstellar disk. However, we showed that, ideally speaking, it is possible to distinguish between the two disks' contributions by comparing the total polarized light (from the $PI$ image) and the centro-symmetric polarized light (from the $Q_\phi$ image), as well as by finding stronger polarized colors in the circumplanetary disk than in the neighboring circumstellar disk. In conclusion, while circumplanetary disk detection might be challenging in polarized light, the $PI$/$Q_\phi$ images are possible tools to detect the circumplanetary disk within the circumstellar disk. \section*{Acknowledgments} We thank Adriana Pohl for providing the opacity table, including the polarization matrix. J.Sz. acknowledges financial support through the Swiss National Science Foundation (SNSF) Ambizione grant PZ00P2\_174115. Furthermore, these results are part of a project that has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 948467). We also acknowledge support from the project PRIN-INAF 2016 The Cradle of Life - GENESIS-SKA (General Conditions in Early Planetary Systems for the rise of life with SKA) and from INAF/Frontiera (Fostering high ResolutiON Technology and Innovation for Exoplanets and Research in Astrophysics) through the "Progetti Premiali" funding scheme of the Italian Ministry of Education, University, and Research. Computations have been done partially on the "Piz Daint" machine hosted at the Swiss National Computational Centre and partially carried out on ETH Z\"urich's Euler machine. \section*{Data Availability} The data underlying this article will be shared on reasonable request to the corresponding author.
\section{Introduction} \noindent The groups $F$, $T$, and $V$ introduced by R. Thompson in the 1960s have a particular place in the history of group theory. First, $T$ and $V$ are the first examples of infinite finitely presented simple groups, and $F$ is the first example of a torsion-free group of type $F_\infty$ that is not of type $F$. But, since then, many groups have been constructed by varying the constructions of $F$, $T$, and $V$; see for instance \cite{HigmanV, Stein, MR1396957, Rover, Nekrashevych, BrinnV, FunarUniversal, DehornoybrV, BrinbrV, Funar-Kapoudjian, Sim, Monod, QV, MR4009393}. Although all these groups turn out to share similar properties, axiomatising the family of ``Thompson-like groups'' seems difficult; see \cite{Thumann, Witzel} for attempts in this direction. Nowadays, the investigation of Thompson-like groups is a subject on its own. Recent successes include the construction of new examples of non-amenable groups without non-abelian free subgroups \cite{Monod, LM} and the construction of simple groups distinguished by finiteness properties \cite{MR3910073, TwistedThompson}. \medskip \noindent In this article, we are mainly interested in braided versions of Thompson's groups and their finiteness properties. Recall that a group $G$ is \emph{of type $F_n$} if it admits a \emph{classifying space} (i.e. an aspherical CW complex with $G$ as its fundamental group) that contains only finitely many cells in its $n$-skeleton. A group is \emph{of type $F_\infty$} if it is of type $F_n$ for every $n \geq 1$. Notice that groups of type $F_1$ coincide with finitely generated groups, and that groups of type $F_2$ coincide with finitely presented groups. Being of type $F_n$ for $n \geq 3$ is usually thought of as a higher dimensional analogue of these properties. Because the homotopy type of an aspherical CW complex depends only on its fundamental group, one can associate topological invariants to a group from a classifying space, such as (co)homology groups. Then, being of type $F_n$ assures that such invariants in dimension $\leq n$ are finitely generated. One can expect, next, to construct an explicit classifying space and to compute these invariants. Interestingly, the property of being of type $F_n$ can also be characterised from coarse geometry. In some sense, a finitely generated group is of type $F_n$ if and only if it is \emph{coarsely $(n-1)$-connected}. See \cite[Section~9.2]{MR3753580} for more details. In particular, being of type $F_n$ is a quasi-isometric invariant. \medskip \noindent An interesting question is to determine to what extent a braided version of a Thompson group satisfies the same finiteness properties as the corresponding Thompson group. A positive answer in this direction can be found in \cite{MR3545879}, where the authors prove that the braided version of $V$ introduced in \cite{DehornoybrV, BrinbrV} and the braided version of $F$ introduced in \cite{MR2384840} are of type $F_\infty$. In \cite{Funar-Kapoudjian}, two braided versions of $T$, denoted by $T^\sharp$ and $T^\ast$, are introduced and proved to be finitely presented. The authors next proved that they are of type $F_3$ in \cite{MR2803858}, and they conjectured that they are of type $F_\infty$. This conjecture was the motivation of the present work. \medskip \noindent Our framework, largely inspired by \cite{Funar-Kapoudjian}, is the following. Fix a locally finite tree $A$ embedded into the plane in such a way that its vertex-set is discrete. 
The \emph{arboreal surface} $\mathscr{S}(A)$ is the oriented planar surface with boundary obtained by thickening $A$ in the plane. We denote by $\mathscr{S}^\sharp(A)$ the punctured arboreal surface obtained from $\mathscr{S}(A)$ by adding a puncture for each vertex of the tree. Now we fix a \emph{rigid structure} on $\mathscr{S}^\sharp(A)$, i.e. a decomposition into \emph{polygons} by means of a family of pairwise non-intersecting arcs whose endpoints are on the boundary of $\mathscr{S}(A)$ such that each polygon contains exactly one vertex of the underlying tree in its interior and such that each arc crosses once and transversely a unique edge of the tree. See for instance Figure~\ref{Dsharp}. We are interested in a specific subgroup of the big mapping class group of $\mathscr{S}^\sharp(A)$, corresponding to the (isotopy classes of the) homeomorphisms that, loosely speaking, preserve the rigid structure ``almost everywhere''. \begin{definition} A homeomorphism of $\mathscr{S}^\sharp(A)$ is \emph{asymptotically rigid} if it sends all but finitely many polygons of the rigid structure to polygons. We denote by $\amod (A)$ the group of isotopy classes of orientation-preserving asymptotically rigid homeomorphisms of $\mathscr{S}^\sharp(A)$. \end{definition} \noindent As an example, if $A_n$ denotes the $(n+1)$-regular tree ($n \geq 2$), then $\amod(A_n)$ provides a braided version of Thompson's group $T_n$, referred to as the \emph{braided Ptolemy-Thompson group} and denoted by $\mathrm{br}T_n$. For instance, $\mathrm{br}T_2$ coincides with the group $T^\sharp$ introduced in \cite{Funar-Kapoudjian}. More generally, if $A_{n,m}$ denotes the tree with one vertex of valence $m$ while all the other vertices have valence $n+1$ ($n \geq 2$), we refer to $\mathrm{br}T_{n,m}:= \amod(A_{n,m})$ as the \emph{braided Higman-Thompson group}. So, for $m=n+1$, we recover the braided Ptolemy-Thompson group $\mathrm{br}T_n$. Interestingly, the group $T^\ast$ introduced in \cite{Funar-Kapoudjian} turns out to coincide with $\mathrm{br}T_{2,4}$; see Section \ref{sec:arboreal} for more details. As another example, if $R_n$ denotes the union of $n$ infinite rays whose origins are all identified, then the finite-index subgroup $\mathrm{br}H_n$ of elements in $\amod(R_n)$ that preserve the ends of the surface $\mathscr{S}^\sharp(R_n)$ coincides with the \emph{braided Houghton group} introduced in \cite{Degenhardt}, as observed in \cite{FunarHoughton}. Finally, let us mention that a braided version of the lamplighter group $\mathbb{Z}\wr \mathbb{Z}$ can also be constructed, but this example will be treated in more detail in a forthcoming work \cite{Next}; see also Section 2. \medskip \noindent The major contribution of our work is the introduction of a geometric point of view on asymptotically rigid mapping class groups. Given a simplicial tree $A$, we make $\amod(A)$ act on the cube complex $\mathscr{C}(A)$ constructed as follows. The vertices are classes of marked subsurfaces $[\Sigma, \varphi]$ where $\Sigma$ is a non-empty connected union of finitely many polygons of the rigid structure and $\varphi$ is an asymptotically rigid homeomorphism of $\mathscr{S}^\sharp(A)$ (for a more precise definition see Section \ref{section:Construction}). 
For any $H_1, \ldots, H_k$ pairwise distinct adjacent polygons of $\Sigma$, there is a $k$-cube issued from $[\Sigma, \varphi]$ spanned by $$\left\{ \left[ \Sigma \cup \bigcup\limits_{i \in I} H_i , \varphi \right] \mid I \subset \{1, \ldots, k\} \right\}.$$ This simple cube complex turns out to be tightly related to $\amod(A)$. As a first application, we show that every finite subgroup in $\amod(A)$ fixes a vertex in $\mathscr{C}(A)$, which allows us to prove that finite subgroups in $\amod(A)$ are all cyclic with specific orders. See Theorem \ref{thm:FiniteOrders}. As a particular case, we characterise all the orders of finite-order elements in braided Higman-Thompson groups: \begin{restatable*}{thm}{torsionthm}\label{prop:torsion} Let $n \geq 2$ and $m \geq 1$ be two integers. The braided Higman-Thompson group $\mathrm{br}T_{n,m}$ contains an element of order $l$ if and only if $$\left\{ \begin{array}{ll} \text{$l$ divides $m$ or $m-n+1$} & \text{if $m \neq n+1$} \\ \text{$l=2$ or $l$ divides $n+1$} & \text{if $m=n+1$} \end{array} \right..$$ \end{restatable*} \noindent This result has interesting consequences on the isomorphism problem. For instance, we deduce that the braided Ptolemy-Thompson groups $\mathrm{br}T_n$ and $\mathrm{br}T_m$ are isomorphic if and only if $n=m$. We also recover the fact, proved in \cite{Funar-Kapoudjian}, that $T^\sharp$ and $T^\ast$ are not isomorphic. However, not all braided Higman-Thompson groups can be distinguished by their finite-order elements: notice that, for every $n \geq 2$, $\mathrm{br}T_{n,n-1}$ contains an element of every possible order. \medskip \noindent But our main application concerns finiteness properties. Such applications are motivated by the following observation (which is a consequence of Theorem \ref{thm:contractible} and Lemma \ref{lem:VertexStab}): \begin{prop} For every simplicial tree $A$, $\mathscr{C}(A)$ is a contractible cube complex, on which $\amod(A)$ acts with cube-stabilisers isomorphic to finite extensions of braid groups. \end{prop} \noindent As a consequence, by looking at the action of the groups $\amod(A)$ on the cube complexes $\mathscr{C}(A)$, we are in a good position to prove finiteness properties among asymptotically rigid mapping class groups. The two main theorems of this article go in this direction: \begin{restatable*}{thm}{ThompsonThm}\label{PtolemyConnected} For all $n \geq 2$ and $m \geq 1$, the braided Higman-Thompson group $\mathrm{br}T_{n,m}$ is of type $F_\infty$. \end{restatable*} \begin{restatable*}{thm}{HoughtonThm}\label{thm:brHfiniteness} For every $n \geq 1$, the braided Houghton group $\mathrm{br}H_n$ is of type $FP_{n-1}$ but not of type $FP_n$. Moreover, if $n \geq 3$, $\mathrm{br}H_n$ is finitely presented. \end{restatable*} \noindent The property $FP_n$ is a cohomological analogue of the property $F_n$. A group of type $F_n$ is automatically of type $FP_n$, and the converse holds for finitely presented groups (see for instance \cite[Section 8.7]{MR1324339}). Consequently, Theorem \ref{thm:brHfiniteness} shows that, for every $n \geq 1$, the braided Houghton group $\mathrm{br}H_n$ is of type $F_{n-1}$ but not of type $F_n$. \medskip \noindent For $(n,m)=(2,1)$ and $(n,m)=(2,4)$, Theorem \ref{PtolemyConnected} proves Funar and Kapoudjian's conjecture according to which $T^\sharp$ and $T^\ast$ are of type $F_\infty$. Theorem \ref{thm:brHfiniteness} was conjectured in \cite{Degenhardt} and verified for $n \leq 3$. 
A strategy was suggested in \cite{BuxHoughton} for the general case, but our proof of the conjecture uses a different approach. Theorems \ref{PtolemyConnected} and \ref{thm:brHfiniteness} show that the braided versions of Thompson's and Houghton's groups satisfy the same finiteness properties as the unbraided versions. \medskip \noindent We emphasize that our proofs of Theorems \ref{PtolemyConnected} and \ref{thm:brHfiniteness} are constructive, providing explicit highly connected complexes on which asymptotically rigid mapping class groups act. As an application, in a forthcoming article \cite{NextNext}, we compute explicit presentations of the braided Higman-Thompson groups. In particular, this allows us to compute the abelianisations of these groups, providing other algebraic invariants in addition to Theorem \ref{prop:torsion}. We also emphasize that the techniques developed in this article go beyond the arboreal surfaces we study here. For instance, we expect that our arguments can be adapted to the universal mapping class groups constructed in \cite{MR2105950, AramayonaFunar}. \paragraph{A few words about the proofs.} If we allow $n=1$ in our notation $A_{n,m}$, then, when $n \geq 2$, $\amod(A_{n,m})$ coincides with the braided Higman-Thompson group $\mathrm{br}T_{n,m}$; and, when $n=1$, $\amod(A_{n,m})$ contains the braided Houghton group $\mathrm{br}H_m$ as a finite-index subgroup. This notation allows us to work with these two families of groups simultaneously. The cube complex $\mathscr{C}(A_{n,m})$, on which $\amod(A_{n,m})$ acts, is naturally endowed with the \emph{height function} $$h : \text{vertex $[\Sigma, \varphi]$} \mapsto \text{number of punctures in $\Sigma$};$$ and standard arguments from Morse theory allow us to deduce finiteness properties of $\amod(A_{n,m})$ from a careful analysis of the descending links in $\mathscr{C}(A_{n,m})$. However, this strategy may fail because of vertices of arbitrarily large height with non-simply connected descending links. We first need to extract from $\mathscr{C}(A_{n,m})$ a suitable subcomplex $\mathscr{SC}(A_{n,m})$, referred to as the \emph{spine}. It corresponds to the subcomplex spanned by the vertices of the form $[\Sigma,\varphi]$ where $\Sigma$ contains the vertex of valence $m$ in $A_{n,m}$, and it turns out to be homotopy equivalent to the original complex. In particular, it is also contractible. \medskip \noindent Interestingly, the descending links in $\mathscr{SC}(A_{n,m})$ can be described as complexes of arcs $\mathfrak{C}(p,q,r)$ for some $p,q \geq 1$ and $r \geq 0$. \begin{definition} Let $p, q\geq1$ and $r \geq 0$ be three integers. Fix a disc $\mathbb{D}$ with $p$ punctures in its interior and $q$ marked points on its boundary. Let $\{m_i \mid i \in \mathbb{Z}_q \}$ denote these marked points, ordered cyclically. From now on, an arc in $\mathbb{D}$ refers to an arc that starts from a marked point and that ends at a puncture. Two arcs that start from the marked points $m_i,m_j$ are \emph{$r$-separated} if they are disjoint and if the distance between $i$ and $j$ in $\mathbb{Z}_q$ is $>r$ (where $\mathbb{Z}_q$ is metrically thought of as the cycle $\mathrm{Cayl}(\mathbb{Z}_q,\{1\})$). Notice that being $0$-separated amounts to being disjoint. We define $\mathfrak{C}(p,q,r)$ as the simplicial complex whose vertices are the isotopy classes of arcs and whose simplices are collections of arcs that are pairwise $r$-separated (up to isotopy). 
\end{definition} \noindent Then, proving Theorems \ref{PtolemyConnected} and \ref{thm:brHfiniteness} essentially amounts to showing that, for a fixed $r \geq 0$, our arc complex $\mathfrak{C}(p,q,r)$ becomes more and more connected as $p$ and $q$ increase. More precisely, we prove that: \begin{thm}\label{Intro:ArcComplex} Let $p \geq 2$, $q \geq 1$ and $r \geq 0$ be three integers. The complex $\mathfrak{C}(p,q,r)$ is $\left( \left\lfloor \frac{1}{3} \left( p+ \left\lfloor \frac{q}{r+1} \right\rfloor \right) \right\rfloor -2 \right)$-connected. Moreover, if $r=0$, then $\mathfrak{C}(p,q,r)$ is homotopy equivalent to a bouquet of infinitely many $(q-1)$-spheres as soon as $p \geq 2q$. \end{thm} \noindent Our theorem, proved more generally for arbitrary surfaces and not only for discs, is a direct consequence of Propositions \ref{prop:SimConnected} and \ref{prop:BouquetSpheres} below. Our argument follows closely the lines of the proof of \cite[Theorem~3.10]{MR3545879}, which is inspired by an argument from \cite{MR1123262}. \medskip \noindent The topology of complexes of arcs has been widely studied in the literature, especially because of its connection with homology stability. See for instance the foundational article \cite{MR786348}. However, it is usually assumed that a collection of arcs that have disjoint interiors but that start from the same marked point on the boundary spans a simplex, which is not the case in $\mathfrak{C}(p,q,r)$. Even worse, in order to span a simplex, we require the marked points to be sufficiently far away from each other. So $\mathfrak{C}(p,q,r)$ is significantly different from the complexes of arcs usually studied. \begin{remark} After the completion of this work, K.-U. Bux informed us that he also had a proof of Theorem \ref{thm:brHfiniteness}. His proof is announced in \cite[Remark 13.10.2]{BuxBraided} and some details are given in \cite{BuxArcComplexes}. Notice that the complex $\mathcal{A}(m,n,0)$ defined in \cite{BuxArcComplexes} coincides with our $\mathfrak{C}(m,n,0)$, so the families of arc complexes considered in \cite{BuxArcComplexes} and in our paper overlap. However, neither is included in the other. In particular, $\mathfrak{C}(m,n,r)$ for $r \geq 1$ does not belong to the framework of \cite{BuxArcComplexes}. \end{remark} \paragraph{Organisation of the article.} Section~\ref{sec:arboreal} is dedicated to basic definitions and examples about asymptotically rigid mapping class groups. Our cube complex, the main object of the article, is constructed in Section~\ref{section:CC}, where we prove that it is contractible. A quick discussion related to its curvature is included in Subsection~\ref{section:curvature}. As a first application, we classify finite subgroups in asymptotically rigid mapping class groups in Section~\ref{section:torsion}. The core of the article is Section~\ref{section:TypeF} where we prove Theorems~\ref{PtolemyConnected} and~\ref{thm:brHfiniteness}. The spine is introduced and studied in Subsection~\ref{section:spine} and the descending links of its vertices are described as arc complexes in Subsection~\ref{section:Links}. Finally, we study their homotopy type (Theorem~\ref{Intro:ArcComplex}) and we prove Theorems~\ref{PtolemyConnected} and~\ref{thm:brHfiniteness} in Subsections~\ref{section:HigmanThompson} and~\ref{section:Houghton}. \paragraph{Acknowledgments.} The authors thank the University of Basel and the EPFL for their hospitality during parts of this project. A. G. 
was supported by a public grant as part of the Fondation Math\'ematique Jacques Hadamard. A. L. acknowledges support from the Swiss National Science Foundation Grant ``Birational transformations of threefolds'' $200020\mathunderscore178807$. \section{Arboreal surfaces and asymptotically rigid mapping class groups}\label{sec:arboreal} \noindent Let us recall from the introduction the general framework of this article. \medskip \noindent Fix a locally finite tree $A$ embedded into the plane in such a way that its vertex-set is discrete. The \emph{arboreal surface} $\mathscr{S}(A)$ is the oriented planar surface with boundary obtained by thickening $A$ in the plane. We denote by $\mathscr{S}^\sharp(A)$ the punctured arboreal surface obtained from $\mathscr{S}(A)$ by adding a puncture for each vertex of the tree. Following \cite{Funar-Kapoudjian}, we fix a \emph{rigid structure} on $\mathscr{S}^\sharp(A)$, i.e. a decomposition into \emph{polygons} by means of a family of pairwise non-intersecting arcs whose endpoints are on the boundary of $\mathscr{S}(A)$ such that each polygon contains exactly one vertex of the underlying tree in its interior and such that each arc crosses once and transversely a unique edge of the tree. See for instance Figure~\ref{Dsharp}. \begin{figure} \begin{center} \includegraphics[scale=0.3]{surfaces} \caption{Surfaces with rigid structures associated to a simplicial tree.} \label{Dsharp} \end{center} \end{figure} \medskip \noindent A subsurface of $\mathscr{S}^\sharp(A)$ is \emph{admissible} if it is a non-empty connected finite union of polygons belonging to the rigid structure. A homeomorphism $\varphi : \mathscr{S}^\sharp(A) \to \mathscr{S}^\sharp(A)$ is \emph{asymptotically rigid} if the following conditions are satisfied: \begin{itemize} \item there exists an admissible subsurface $\Sigma \subset \mathscr{S}^\sharp(A)$ such that $\varphi(\Sigma)$ is also admissible; \item the homeomorphism $\varphi$ is \emph{rigid outside $\Sigma$}, i.e. the restriction \[\varphi : \mathscr{S}^\sharp(A) \backslash \Sigma \to \mathscr{S}^\sharp(A) \backslash \varphi( \Sigma)\] respects the rigid structure, mapping polygons to polygons. Such a surface $\Sigma$ is called a \emph{support} for $\varphi$. \end{itemize} We denote by $\amod (A)$ the group of isotopy classes of orientation-preserving asymptotically rigid homeomorphisms of $\mathscr{S}^\sharp(A)$. We emphasize that isotopies have to fix each puncture. \medskip \noindent In the sequel, we refer to the \emph{frontier} $\mathrm{Fr}(\Sigma)$ of an admissible subsurface $\Sigma$ as the union of the arcs defining the rigid structure that are contained in the boundary. Also, a polygon is called \emph{adjacent} to $\Sigma$ if it is not contained in $\Sigma$ but shares an arc with the frontier of $\Sigma$. \medskip \noindent Any asymptotically rigid homeomorphism $\mathscr{S}^\sharp(A) \to \mathscr{S}^\sharp(A)$ induces a \emph{quasi-auto\-morph\-ism} $A \to A$, i.e. a bijection defined on the vertices of $A$ that preserves adjacency and non-adjacency for all but finitely many pairs of vertices. Let $\mathrm{QAut}(A)$ denote the group of quasi-automorphisms of $A$. The induced morphism $\amod(A) \to \mathrm{QAut}(A)$ is not surjective in general, and we denote its image by $\mathfrak{mod}_a(A)$; its kernel corresponds to the homeomorphisms $\mathscr{S}^\sharp(A) \to \mathscr{S}^\sharp(A)$ that are the identity outside an admissible subsurface and that fix each puncture. 
In other words, we have a short exact sequence $$1 \to PB_\infty \to \amod(A) \to \mathfrak{mod}_a(A) \to 1$$ where $PB_\infty$ denotes the limit of pure mapping class groups $\bigcup\limits_{\text{$\Sigma$ admissible}} \mathrm{PMod}(\Sigma)$. Notice that $PB_\infty$ is isomorphic to the group of compactly supported pure braids with infinitely many strands. We refer to the previous exact sequence as the \emph{arboreal short exact sequence} satisfied by $\amod(A)$. \medskip \noindent Another interesting exact sequence comes from the \emph{forgetful map} $\amod(A) \to \Mod(\mathscr{S}(A))$, where we forget the punctures. We denote by $\amod_f(A)$ the image of $\amod(A)$ in the mapping class group $\Mod(\mathscr{S}(A))$. The kernel of the forgetful map corresponds to the homeomorphisms $\mathscr{S}^\sharp(A) \to \mathscr{S}^\sharp(A)$ that are the identity outside an admissible subsurface. In other words, we have a short exact sequence $$1 \to B_\infty \to \amod(A) \to \amod_f(A) \to 1$$ where $B_\infty$ denotes the limit $\bigcup\limits_{\text{$\Sigma$ admissible}} \mathrm{Mod}(\Sigma)$. Notice that $B_\infty$ is isomorphic to the group of compactly supported braids with infinitely many strands. We refer to the previous exact sequence as the \emph{forgetful short exact sequence} satisfied by $\amod(A)$. \medskip \noindent These two short exact sequences motivate the idea that $\amod(A)$ is a braided version of $\mathfrak{mod}_a(A)$ and $\amod_f(A)$. They may also be useful to compute explicit presentations; see Example \ref{ex:brH2}. However, as $PB_\infty$ and $B_\infty$ are not even finitely generated, such presentations are infinite, and in general it is not clear if finite presentations can be extracted from them, and if so, how. \medskip \noindent Depending on the choice of the tree $A$, we obtain various groups with a rich structure. Let us present some particular cases of groups that interest us most. \paragraph{Braided Higman-Thompson groups.} For integers $n\geq 2$ and $m\geq 1$, let $A_{n,m}$ be the tree with one vertex of valence $m$ and all other vertices of valence $n+1$. We then call the group $\amod(A_{n,m})$ the \emph{braided Higman-Thompson group} and denote it by $\mathrm{br}T_{n,m}$. The terminology is justified by the forgetful short exact sequence $$1 \to B_\infty \to \mathrm{br}T_{n,m} \to T_{n,m} \to 1,$$ where $T_{n,m}:= \amod_f(A_{n,m})$ coincides with the Higman-Thompson group of type $(n,m)$ introduced in \cite{Brown} by analogy with \cite{HigmanV}. (We refer to \cite[Section 2.3]{Survey} for a justification of this identification for $n=2$ and $m=1$. The general case follows similarly.) Note that $A_n:=A_{n,n+1}$ is the $(n+1)$-regular tree. We refer to $\mathrm{br}T_n:= \mathrm{br}T_{n,n+1}$ as the \emph{braided Ptolemy-Thompson group}. For $n=2$, we recover the group $T^\sharp$ introduced in \cite{Funar-Kapoudjian}. \begin{ex} Notice that many isometries of $A_n$ induce (asymptotically) rigid homeomorphisms of $\mathscr{S}^\sharp(A_n)$. Moreover, any two distinct such isometries induce non-isotopic homeomorphisms, so many subgroups of $\mathrm{Aut}(A_n)$ turn out to define subgroups of $\mathrm{br}T_n$. They include non-abelian free subgroups (which do not come from braid subgroups) and finite cyclic subgroups. More precisely, if $H$ is a polygon of the rigid structure, then the homeomorphism $r_H$ that shifts cyclically the arcs of the frontier of $H$ defines a(n asymptotically) rigid homeomorphism of $\mathscr{S}^\sharp(A_n)$ of finite order. 
\end{ex} \begin{ex}\label{ex_rotations} More generally, if $\Sigma$ is any admissible subsurface containing $k$ punctures, then its frontier consists of $k(n-1)+2$ arcs. Consequently, the complement of $\Sigma$ in $\mathscr{S}^\sharp(A_n)$ consists of $k(n-1)+2$ pairwise homeomorphic arboreal surfaces. We denote by $r_\Sigma$ the asymptotically rigid homeomorphism that cyclically clockwise shifts the arcs of the frontier of $\Sigma$ (and hence the homeomorphic arboreal surfaces) and whose restriction to a disk in $\Sigma$ containing all the punctures is the identity. Figure \ref{rotation} illustrates the case $k=2$. \end{ex} \begin{figure} \begin{center} \includegraphics[scale=0.4]{rotation} \caption{An asymptotically rigid homeomorphism.} \label{rotation} \end{center} \end{figure} \vspace{-0.3cm} \paragraph{Braided Houghton groups.} Fix an $n \geq 1$ and let $R_n$ denote the union of $n$ infinite rays whose initial vertices are identified. We refer to the index-$n$ subgroup of $\amod(R_n)$ that stabilises each end of the surface $\mathscr{S}(R_n)$ as the \emph{braided Houghton group} $\mathrm{br}H_n$. As explained below, $\mathrm{br}H_n$ coincides with the groups introduced in \cite{Degenhardt} and \cite{FunarHoughton}. The terminology is justified by the exact sequence $$1 \to PB_\infty \to \mathrm{br}H_n \to H_n \to 1$$ given by the arboreal short exact sequence satisfied by $\amod(R_n)$, where $H_n$ (defined as the finite-index subgroup in $\mathrm{QAut}(R_n)$ that stabilises each end of $R_n$) denotes the Houghton group as introduced in \cite{Houghton}. \begin{ex}\label{ex:brH2} The braided Houghton group $\mathrm{br}H_2$ satisfies the short exact sequence $$1 \to B_\infty \to \mathrm{br}H_2 \to \mathbb{Z} \to 1$$ where the cyclic group corresponds to the subgroup in $\amod_f(R_2)$ that fixes the two ends of $R_2$. This exact sequence splits and provides the decomposition $\mathrm{br}H_2 = B_\infty \rtimes \langle t \rangle$, where $t$ denotes the homeomorphism coming from the translation acting on $R_2$. It follows that $\mathrm{br}H_2$ admits $$\left\langle t, \tau_i \ (i \in \mathbb{Z}) \left| \begin{array}{l} [\tau_i, \tau_j]=1, \ i,j \in \mathbb{Z}, \ |i-j| \geq 2 \\ \tau_i \tau_{i+1} \tau_i = \tau_{i+1} \tau_i \tau_{i+1}, \ i \in \mathbb{Z} \\ \tau_i^t=\tau_{i+1}, \ i \in \mathbb{Z} \end{array} \right\rangle \right.$$ as a presentation, where $\tau_i$ corresponds to the twist between the punctures $i$ and $i+1$, and with the notation $a^b:= bab^{-1}$. Setting $\tau:= \tau_0$ and using the relation $\tau_i = t^i \tau t^{-i}$ for every $i \in \mathbb{Z}$, the presentation can be simplified as: $$\left\langle t, \tau \mid \tau \tau^t \tau = \tau^t \tau \tau^t, \ \left[ \tau, \tau^{t^n} \right]=1 \ (n \in \mathbb{Z}, |n| \geq 2) \right\rangle.$$ (As proved in \cite{Degenhardt}, and reproved by Theorem~\ref{thm:brHfiniteness}, $\mathrm{br}H_2$ is not finitely presented, so a finite presentation cannot be extracted from the previous presentation.) \end{ex} \paragraph{Braided lamplighter group.} Let $A$ denote the tree obtained from an horizontal bi-infinite line by gluing an infinite descending vertical ray to each of its vertices. We refer to $\mathrm{br}\mathcal{L}:= \amod (A)$ as the \emph{braided lamplighter group}. 
The terminology is justified by the short exact sequence $$1 \to B_\infty \to \mathrm{br}\mathcal{L} \to \mathcal{L} \to 1$$ given by the forgetful short exact sequence satisfied by $\amod(A)$, where $\mathcal{L}$ denotes the \emph{lamplighter group}, defined as the wreath product $\mathbb{Z} \wr\mathbb{Z}:= \left( \bigoplus\limits_\mathbb{Z} \mathbb{Z} \right) \rtimes \mathbb{Z}$ (where $\mathbb{Z}$ acts on the direct sum by shifting the coordinates). In a forthcoming article \cite{Next}, it will be proved that the braided lamplighter group $\mathrm{br}\mathcal{L}$, like its unbraided version $\mathcal{L}$, is finitely generated but not finitely presented. \paragraph{Variations of the definition.} Following \cite{Funar-Kapoudjian}, given a simplicial tree $A$ we can define a different surface with rigid structure $\mathscr{S}^\ast(A)$ from $\mathscr{S}(A)$: we add a puncture for each edge of the tree, and we decompose the surface into polygons by means of a family of pairwise non-intersecting arcs whose endpoints are on the boundary of $\mathscr{S}(A)$ such that each arc contains a puncture and crosses the underlying tree once and transversely. See Figure \ref{Dsharp}. Mimicking the definition of $\amod(A)$, one defines a new group $\amod^\ast(A)$. As observed in \cite{Funar-Kapoudjian}, the groups $\amod(A)$ and $\amod^\ast(A)$ may not be isomorphic (see Remark~\ref{rem:funar_kap_non-iso} below). However, as justified by the next observation, there is no loss of generality in studying only the groups $\amod(A)$. \begin{lemma}\label{lem:Iso} Let $A$ be a simplicial tree. If $A'$ is a tree obtained from $A$ by collapsing an edge, then $\amod^\ast(A)$ is isomorphic to $\amod(A')$. \end{lemma} \begin{proof} Fix a vertex $u \in A$. Given a polygon $H$ of the rigid structure of $\mathscr{S}^\sharp(A)$ that does not contain $u$, let $\alpha$ denote the arc in $\mathrm{Fr}(H)$ that separates $H\backslash \alpha$ from the puncture $u$. There is an isotopy of $\mathscr{S}(A)$ supported in $(H \backslash N(\mathrm{Fr}(H))) \cup N(\alpha)$, where $N(\alpha)$ is a small tubular neighborhood of $\alpha$ and $N(\mathrm{Fr}(H))$ is a small tubular neighborhood of $\mathrm{Fr}(H)$, that sends $\alpha$ to an arc that passes through the puncture of $H$ and that crosses the underlying tree once and transversely. Since all these isotopies have pairwise disjoint supports, we can perform them on all the polygons at once and, after removing the puncture $u$, we obtain a homeomorphism $\mathscr{S}^\sharp(A) \cup \{u\} \to \mathscr{S}^\ast(A)$ that respects the rigid structures, sending polygons to polygons. (This paragraph is extracted from the proof of \cite[Proposition~2.10]{Funar-Kapoudjian}.) \medskip \noindent Now, fix a neighbor $v \in A$ of $u$ and let $A'$ denote the tree obtained from $A$ by collapsing the edge between $u$ and $v$. Let $H_u,H_v$ denote the polygons of $\mathscr{S}^\sharp(A)$ containing the punctures $u,v$. Define a new rigid structure on $\mathscr{S}^\sharp(A) \cup \{u\}$ by removing the arc common to $H_u,H_v$. (In other words, we are merging $H_u$ and $H_v$ into a unique polygon.) Notice that the surface with rigid structure we obtain coincides with $\mathscr{S}^\sharp(A')$ (up to a homeomorphism that respects the rigid structure). \medskip \noindent Therefore, there exists a homeomorphism $\mathscr{S}^\sharp(A') \to \mathscr{S}^\ast(A)$ that sends each polygon of $\mathscr{S}^\sharp(A')$ to a polygon of $\mathscr{S}^\ast(A)$, except one that is sent to the union of two polygons. 
By conjugating by such a homeomorphism, one obtains an isomorphism $\amod(A') \to \amod^\ast(A)$. \end{proof} \noindent In \cite{Funar-Kapoudjian}, the authors study the groups $T^\sharp:= \amod(A_2)$ and $T^\ast:= \amod^\ast(A_2)$ and prove that they are not isomorphic by comparing their abelianisations (see also Remark \ref{rem:funar_kap_non-iso}). As a consequence of Lemma~\ref{lem:Iso}, $T^\ast$ turns out to be isomorphic to $\amod(A_{2,4})$. Thus, one may justify the difference between $T^\sharp$ and $T^\ast$ by saying that $T^\sharp$ is a braided version of the Ptolemy-Thompson group $T_2$ but that $T^\ast$ is a braided version of the Higman-Thompson group $T_{2,4}$ (as defined in \cite{Brown}, following \cite{HigmanV}). \begin{remark} As a consequence of Lemma \ref{lem:Iso}, if $A_1$ and $A_2$ are two trees that can both be obtained from a third tree $A$ by collapsing an edge, then $\amod(A_1)$ and $\amod(A_2)$ are isomorphic, both being isomorphic to $\amod^\ast(A)$. Therefore, we obtain non-trivial examples of isomorphic asymptotically rigid mapping class groups. \end{remark} \begin{remark} It is worth noticing that, as a consequence of Lemma \ref{lem:Iso} (and its proof), the three braided versions of the Houghton groups associated to the surfaces with rigid structures illustrated by Figure \ref{houghtons} all coincide. We used the first surface in our definition of braided Houghton's groups; the second surface is used in \cite{FunarHoughton} in order to recover the original definition of \cite{Degenhardt}; and the third surface is used in \cite{FunarHoughton} in order to define an a priori different braided version of Houghton's groups. \end{remark} \begin{figure} \begin{center} \includegraphics[scale=0.28]{Houghtons} \caption{} \label{houghtons} \end{center} \end{figure} \section{A contractible cube complex}\label{section:CC} \subsection{Construction}\label{section:Construction} \noindent In this section, we fix a locally finite tree $A$ embedded into the plane in such a way that its vertex-set is discrete. Let $\mathscr{C}(A)$ denote the cube complex defined as follows: \begin{itemize} \item A vertex of $\mathscr{C}(A)$ is a pair $(\Sigma,\varphi)$, where $\Sigma \subset \mathscr{S}^\sharp(A)$ is an admissible subsurface and $\varphi : \mathscr{S}^\sharp(A) \to \mathscr{S}^\sharp(A)$ an asymptotically rigid homeomorphism, modulo the following equivalence relation: $(\Sigma_1, \varphi_1) \sim (\Sigma_2, \varphi_2)$ if $\varphi_2^{-1} \varphi_1$ is isotopic to an asymptotically rigid homeomorphism that maps $\Sigma_1$ to $\Sigma_2$ and that is rigid outside $\Sigma_1$. We denote by $[\Sigma,\varphi]$ the vertex of $\mathscr{C}(A)$ represented by $(\Sigma, \varphi)$. \item An edge of $\mathscr{C}(A)$ links $[\Sigma, \varphi]$ and $[\Sigma \cup H, \varphi]$ where $H$ is a polygon adjacent to $\Sigma$. \item If $[\Sigma, \varphi]$ is a vertex and if $H_1, \ldots, H_k$ are pairwise distinct adjacent polygons of $\Sigma$, the subgraph spanned by $$\left\{ \left[ \Sigma \cup \bigcup\limits_{i \in I} H_i , \varphi \right] \mid I \subset \{1, \ldots, k\} \right\}$$ is filled in with a $k$-cube. \end{itemize} \noindent Figure \ref{ExCC} illustrates a piece of $\mathscr{C}(A)$ when $A$ is a linear tree of length two. 
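\medskip \noindent As a minimal illustration of the construction (the notation in this paragraph is only meant as an example and does not refer to Figure \ref{ExCC}): if $\Sigma$ is a single polygon of the rigid structure and $H_1, H_2$ are two distinct polygons adjacent to $\Sigma$, then the four vertices $$[\Sigma, \mathrm{id}], \quad [\Sigma \cup H_1, \mathrm{id}], \quad [\Sigma \cup H_2, \mathrm{id}], \quad [\Sigma \cup H_1 \cup H_2, \mathrm{id}]$$ span a square of $\mathscr{C}(A)$; the corresponding admissible subsurfaces contain one, two, two and three punctures respectively.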
The group $\amod(A)$ naturally acts on the cube complex $\mathscr{C}(A)$ by isometries preserving the structure of a cube complex in the following way: for every asymptotically rigid homeomorphism $g \in \amod(A)$ and every $[\Sigma, \varphi] \in \mathscr{C}(A)$, we define $$g \cdot [\Sigma, \varphi]:= [\Sigma, g \varphi].$$ Let us note that this action is well-defined. If $g_1,g_2$ are two representatives of $g$ in $\amod(A)$ and if $(\Sigma_1,\varphi_1),(\Sigma_2, \varphi_2)$ are two representatives of the vertex $[\Sigma,\varphi]$, then $g_2^{-1}g_1$ is isotopic to the identity and $\varphi_2^{-1} \varphi_1$ is isotopic to a homeomorphism that sends $\Sigma_1$ to $\Sigma_2$ and that is rigid outside $\Sigma_1$. So $\varphi_2^{-1} g_2^{-1}g_1 \varphi_1$ is isotopic to a homeomorphism that sends $\Sigma_1$ to $\Sigma_2$ and that is rigid outside $\Sigma_1$, i.e. $[\Sigma_1,g_1 \varphi_1]= [\Sigma_2, g_2 \varphi_2]$. \begin{figure} \begin{center} \includegraphics[trim=0 13cm 0 0,clip,scale=0.27]{CC6} \caption{Two adjacent squares in the cube complex $\mathscr{C}(A)$ when $A$ is a linear tree of length two. The colored $2$-cells indicate the markings. So the vertices in the left square are marked by the identity and the two right vertices are marked by a twist.} \label{ExCC} \end{center} \end{figure} \medskip \noindent It is worth noticing that our cube complex is naturally endowed with a \emph{Morse function}, which will be a key tool to prove the contractibility of $\mathscr{C}(A)$. It is defined as follows. \medskip \noindent Observe that, if $[\Sigma_1,\varphi_1]=[\Sigma_2,\varphi_2]$, then the surfaces $\Sigma_1$ and $\Sigma_2$ are homeomorphic, so they must have the same number of punctures. This allows us to define the \emph{height} of a vertex $x=[\Sigma, \varphi]$ as the number of punctures contained in $\Sigma$; we denote it by $h(x)$. Notice that, by construction of $\mathscr{C}(A)$, if $x$ and $y$ are two adjacent vertices then $h(y)=h(x) \pm 1$. Hence, the edges of $\mathscr{C}(A)$ are naturally oriented by the height function (from small to large height). Also, notice that the action of $\amod(A)$ preserves the height function. \medskip \noindent Let us record a few elementary observations about the cube complex $\mathscr{C}(A)$. \begin{claim} If $A$ contains at least two vertices, then $\mathscr{C}(A)$ is not locally compact. \end{claim} \noindent Consider a vertex $[\Sigma, \id]$ of height $2$, $H$ one of the two polygons of $\Sigma$, and let $\tau\in\amod(A)$ correspond to the homeomorphism that is the identity outside $\Sigma$, but twists the two punctures inside $\Sigma$. Then $[\Sigma, \id]=[\Sigma, \tau^n]$ for all $n\in\Z$, but $[\Sigma\setminus H, \tau^n]\neq [\Sigma\setminus H, \tau^m]$ for all $m\neq n$. Thus we obtain infinitely many edges descending from $[\Sigma, \id]$ to vertices of smaller height. \begin{claim} The cube complex $\mathscr{C}(A)$ is locally finite-dimensional. \end{claim} \noindent Fix a vertex $x:= [\Lambda,\psi] \in \mathscr{C}(A)$ and assume that it belongs to an $n$-cube for some $n \geq 1$. By construction, there exist an admissible subsurface $\Sigma$, polygons $H_1, \ldots, H_n$ and a homeomorphism $\varphi$ such that our cube is spanned by the vertices $$\left\{ \left[ \Sigma \cup \bigcup\limits_{i \in I} H_i , \varphi \right] \mid I \subset \{1, \ldots, n\} \right\}.$$ As a consequence, $x=[\Sigma \cup H_{i_1} \cup \cdots \cup H_{i_r},\varphi]$ for some $1 \leq i_1, \ldots, i_r \leq n$ and $0\leq r \leq n$. 
Because we also have $x=[\Lambda,\psi]$, there must exist a homeomorphism of $\mathscr{S}^\sharp(A)$ sending $\Lambda$ to $\Sigma \cup H_{i_1} \cup \cdots \cup H_{i_r}$; as a consequence, the frontiers of these two surfaces have the same number of connected components. But the dimension $n$ of our cube is necessarily bounded above by the number of components of the frontier of $\Sigma$, which is at most the number of components of the frontier of $\Sigma \cup H_{i_1} \cup \cdots \cup H_{i_r}$. We conclude that any cube in $\mathscr{C}(A)$ containing $x$ has dimension at most the number of components in the frontier of~$\Lambda$. \subsection{Contractibility} \noindent In this section, we fix a locally finite tree $A$ embedded into the plane in such a way that its vertex-set is discrete and we show that the complex $\mathscr{C}(A)$ constructed in Section \ref{section:Construction} is contractible: \begin{thm}\label{thm:contractible} The cube complex $\mathscr{C}(A)$ is contractible. \end{thm} \noindent Our first preliminary lemma is an elementary statement, which will be fundamental in the sequel. \begin{lemma}\label{lem:Choice} Let $x,y$ be two adjacent vertices in $\mathscr{C}(A)$ such that $h(y)>h(x)$. For every representative $(\Sigma, \varphi)$ of $x$, there exists a polygon $H$ adjacent to $\Sigma$ such that $y=[\Sigma \cup H, \varphi]$. \end{lemma} \begin{proof} There exist an admissible surface $\Xi$, a polygon $K$ and an asymptotically rigid homeomorphism $\psi$ such that $x=[\Xi,\psi]$ and $y= [\Xi \cup K, \psi]$. Since we also have $x=[\Sigma, \varphi]$, we know that $\varphi^{-1}\psi$ is isotopic to a homeomorphism $\xi$ that maps $\Xi$ to $\Sigma$ and that is rigid outside $\Xi$. Notice that $\xi$ maps $K$ to a polygon $H$ adjacent to $\Sigma$, so $\xi$ sends $\Xi \cup K$ to $\Sigma \cup H$ and is rigid outside $\Xi \cup K$, hence $y = [\Sigma \cup H, \varphi]$ as desired. \end{proof} \noindent We begin by observing that our complex is connected: \begin{lemma}\label{lem:Connected} The cube complex $\mathscr{C}(A)$ is connected. \end{lemma} \begin{proof} Fix two vertices $[\Sigma_1, \varphi_1]$ and $[\Sigma_2, \varphi_2]$. Since $\varphi_1$ and $\varphi_2$ are asymptotically rigid, so is $\varphi_2^{-1} \varphi_1$. Let $\Xi$ be a support of $\varphi_2^{-1} \varphi_1$ and let $\Sigma_1^+$ be an admissible subsurface containing $\Sigma_1 \cup \varphi_1^{-1} \varphi_2(\Sigma_2) \cup \Xi.$ By construction, $\varphi_2^{-1} \varphi_1$ is rigid outside $\Sigma_1^+$. Set $\Sigma_2^+= \varphi_2^{-1} \varphi_1( \Sigma_1^+)$. \medskip \noindent Because $\Sigma_1 \subset \Sigma_1^+$ and $\Sigma_2 \subset \Sigma_2^+$, adding polygons to $\Sigma_1$ and $\Sigma_2$ one by one produces a path in $\mathscr{C}(A)$ from $[\Sigma_1,\varphi_1]$ to $[\Sigma_1^+,\varphi_1]$ and a path from $[\Sigma_2, \varphi_2]$ to $[\Sigma_2^+, \varphi_2]$. By construction, $\varphi_2^{-1} \varphi_1$ maps $\Sigma_1^+$ to $\Sigma_2^+$ and is rigid outside $\Xi$. As $\Xi \subset \Sigma_1^+$, $\varphi_2^{-1} \varphi_1$ must be rigid outside $\Sigma_1^+$. Consequently $[\Sigma_1^+, \varphi_1]=[\Sigma_2^+, \varphi_2]$ and this concludes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:contractible}.] We begin by constructing specific contractible compact subcomplexes in $\mathscr{C}(A)$. First of all, we need to introduce some terminology. 
Given a vertex $x$ and an admissible subsurface $\Sigma$, we say that $[\Sigma,\mathrm{id}]$ \emph{dominates} $x$ if there exists an \emph{increasing path} from $x$ to $[\Sigma, \mathrm{id}]$, i.e. a path along which the height of a vertex is always greater than the height of the previous one. Given a finite collection of vertices $\mathcal{S}$ and an admissible subsurface $\Sigma$ such that $[\Sigma, \mathrm{id}]$ dominates every vertex in $\mathcal{S}$, we denote by $X(\mathcal{S},\Sigma)$ the subcomplex spanned by the union of all the increasing paths from a vertex in $\mathcal{S}$ to $[\Sigma, \mathrm{id}]$. \medskip \noindent Observe that dominating vertices always exist: \begin{claim}\label{claim:Dom} Let $\mathcal{S}$ be a finite collection of vertices. There exists an admissible subsurface $\Sigma$ such that $[\Sigma, \mathrm{id}]$ dominates all the vertices in $\mathcal{S}$. \end{claim} \noindent Fix an enumeration $\mathcal{S}= \{ [\Sigma_i, \varphi_i] \mid i \in I\}$. For every $i\in I$, let $\Lambda_i$ be a support of $\varphi_i$ and let $\Omega_i$ denote the smallest admissible subsurface containing $\Sigma_i\cup \Lambda_i$. Fix an admissible subsurface $\Sigma$ that contains $\varphi_i(\Omega_i)$ for every $i \in I$. Adding polygons to $\Sigma_i$ produces an increasing path from $[\Sigma_i, \varphi_i]$ to $[\Omega_i,\varphi_i]= [\varphi_i(\Omega_i), \mathrm{id}]$; and next adding polygons to $\varphi_i(\Omega_i)$ produces an increasing path from $[\varphi_i(\Omega_i),\mathrm{id}]$ to $[\Sigma, \mathrm{id}]$. Thus, we have proved that $[\Sigma, \mathrm{id}]$ dominates each vertex in $\mathcal{S}$, concluding the proof of our claim. \medskip \noindent Our goal now is to show that the subcomplexes $X(\mathcal{S}, \Sigma)$ are always contractible. \medskip \noindent So fix a finite collection of vertices $\mathcal{S}$ and an admissible subsurface $\Sigma$ such that $[\Sigma, \mathrm{id}]$ dominates all the vertices in $\mathcal{S}$. We assume that $X(\mathcal{S},\Sigma)$ is not reduced to a single vertex, i.e. $X(\mathcal{S}, \Sigma) \neq \{[\Sigma, \mathrm{id}] \}$. As a consequence, a vertex $x \in \mathcal{S}$ of minimal height admits neighbors in $X(\mathcal{S}, \Sigma)$, say $x_1, \ldots,x_k$ where $k\geq 1$. Notice that $x$ must also have minimal height in $X(\mathcal{S},\Sigma)$, so it follows from Lemma \ref{lem:Choice} that there exist an admissible subsurface $\Xi$, pairwise distinct polygons $H_1, \ldots, H_k$ and a homeomorphism $\varphi$ such that $x= [\Xi, \varphi]$ and $x_i= [\Xi \cup H_i, \varphi]$ for every $1 \leq i \leq k$. \begin{claim}\label{claim:CubeinX} The cube spanned by $\left\{ \left[\Xi \cup \bigcup\limits_{i \in I} H_i, \varphi \right] \mid I \subset \{1, \ldots, k\} \text{ finite} \right\}$ lies in $X(\mathcal{S},\Sigma)$. \end{claim} \noindent Fix a non-empty finite subset $I \subset \{1, \ldots, k\}$. Because there exists an increasing path from $x$ to $[\Sigma, \mathrm{id}]$, we know from Lemma \ref{lem:Choice} that there exist polygons $P_1, \ldots, P_r$ such that $[\Xi \cup P_1 \cup \cdots \cup P_r, \varphi]= [\Sigma,\mathrm{id}]$. Given an $i \in I$, notice that $x_i$ lies on an increasing path from $x$ to $[\Sigma, \mathrm{id}]$, so we deduce again from Lemma \ref{lem:Choice} that there exist polygons $Q_1, \ldots, Q_s$ such that $[\Sigma, \mathrm{id}]=[\Xi \cup H_i \cup Q_1 \cup \dots \cup Q_s, \varphi]$. 
From $$[\Xi \cup P_1 \cup \cdots \cup P_r, \varphi]= [\Sigma,\mathrm{id}]= [\Xi \cup H_i \cup Q_1 \cup \dots \cup Q_s, \varphi]$$ it follows that $\Xi \cup P_1 \cup \cdots \cup P_r = \Xi \cup H_i \cup Q_1 \cup \dots \cup Q_s$. Thus, we have proved that, for every $i \in I$, there exists $1 \leq \sigma(i) \leq r$ such that $H_i= P_{\sigma(i)}$. Because the $H_i$'s are adjacent to $\Xi$, adding to $\Xi$ the polygons $H_i$, $i \in I$, and next the remaining $P_j$'s produces an increasing path from $x$ to $[\Sigma, \mathrm{id}]$ passing through $\left[\Xi \cup \bigcup\limits_{i \in I} H_i, \varphi \right]$, concluding the proof of our claim. \medskip \noindent We deduce from Claim \ref{claim:CubeinX} that $X(\mathcal{S}, \Sigma)$ deformation retracts onto the proper subcomplex $X\left( (\mathcal{S}\backslash \{x\}) \cup \{x_1, \ldots, x_k\}, \Sigma \right)$. Moreover, notice that $[\Sigma,\mathrm{id}]$ dominates all the vertices in $(\mathcal{S}\backslash \{x\}) \cup \{x_1, \ldots, x_k\}$. Therefore, by iterating the process, we find a sequence of finite sets $\mathcal{S}_1, \mathcal{S}_2, \ldots$ of vertices all dominated by $[\Sigma,\mathrm{id}]$ such that $$X(\mathcal{S},\Sigma) \supsetneq X(\mathcal{S}_1,\Sigma) \supsetneq X(\mathcal{S}_2, \Sigma) \supsetneq \cdots.$$ Since all these subcomplexes contain only finitely many cells, the sequence must eventually stop, and we find a finite collection $\mathcal{R}$ of vertices dominated by $[\Sigma,\id]$ such that $X(\mathcal{S}, \Sigma)$ deformation retracts onto $X(\mathcal{R}, \Sigma)$ and such that the previous process cannot apply to $X(\mathcal{R},\Sigma)$; in other words, $X(\mathcal{R}, \Sigma)= \{ [\Sigma, \mathrm{id}] \}$. Thus, we have proved that $X(\mathcal{S}, \Sigma)$ is contractible. \medskip \noindent We are finally ready to prove our theorem. Given an $n \geq 1$ and a continuous map $f : \mathbb{S}^n \to \mathscr{C}(A)$, the image of $f$ lies in a compact subcomplex, say $Y$. According to Claim~\ref{claim:Dom}, there exists an admissible subsurface $\Sigma$ such that $[\Sigma, \mathrm{id}]$ dominates each vertex in $Y$, so $Y$ lies in the contractible subcomplex $X(Y^{(0)}, \Sigma)$, proving that $f$ is homotopically trivial. Thus, all the homotopy groups of $\mathscr{C}(A)$ are trivial, which implies that our cube complex is contractible thanks to Whitehead's theorem. \end{proof} \subsection{A word about curvature}\label{section:curvature} \noindent In view of Theorem \ref{thm:contractible}, it is natural to ask whether our cube complex is nonpositively curved. Recall that a cube complex is \emph{CAT(0)} if it is simply connected and if the links of its vertices are simplicial flag complexes. \medskip \noindent Unfortunately, our cube complexes may not be CAT(0). As an example, let $R_3$ denote the union of three infinite rays with a common origin and let $\varphi,\psi$ be the two homeomorphisms illustrated by Figure \ref{phipsi}. Then Figure \ref{noCAT} shows three squares in $\mathscr{C}(R_3)$ pairwise intersecting along an edge and all three intersecting along a vertex. However, this subcomplex does not span a $3$-cube as the missing vertex would have to be of height zero, which is impossible. Thus, the cube complex $\mathscr{C}(R_3)$ contains a vertex whose link is not flag, and a fortiori is not CAT(0). 
\begin{figure} \begin{center} \includegraphics[scale=0.27]{phipsi} \caption{The homeomorphisms $\varphi$ and $\psi$.} \label{phipsi} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.35]{noCAT} \caption{} \label{noCAT} \end{center} \end{figure} \medskip \noindent Nevertheless, our cube complexes turn out to be CAT(0) in several cases of interest, including the case of the braided Ptolemy-Thompson groups. And, in other cases, the construction can be adapted. This will be explained in a forthcoming article \cite{Next}. \section{Classification of finite subgroups}\label{section:torsion} \noindent In this section, we fix a locally finite tree $A$ embedded into the plane in such a way that its vertex-set is discrete. As a first application of the cube complex constructed in the previous section, let us notice that we can classify the finite subgroups in $\amod(A)$. The key observation is the following: \begin{lemma}\label{prop:finitegrps} Let $G \leq \amod(A)$ be a finite subgroup. Then $G$ fixes a vertex of $\mathscr{C}(A)$ of the form $[\Sigma,\id]$. \end{lemma} \begin{proof} Let $\Sigma' \subset \mathscr{S}^\sharp(A)$ be an admissible subsurface that is a support for all elements $g\in G$. Such a surface exists since $G$ is finite, by assumption. Let $\Sigma \subset \mathscr{S}^\sharp(A)$ be the smallest admissible subsurface containing the admissible subsurfaces $\{g(\Sigma')\mid g\in G\}$. We claim that the vertex $[\Sigma,\id]$ is fixed by $G$. \medskip \noindent So let $k \in G$ be an arbitrary element. Then $k(\Sigma)$ is again admissible and $k$ is rigid outside $\Sigma$, since $\Sigma$ is a support of $k$. Moreover, $\{g(\Sigma')\mid g\in G\}$ is invariant under $k$, so we must have $k(\Sigma)=\Sigma$. Consequently, we have $$[\Sigma, \id]=[\Sigma, k]= k \cdot [\Sigma, \id],$$ i.e. $k$ fixes $[\Sigma, \id]$, as desired. \end{proof} \noindent Therefore, the classification of finite subgroups in $\amod(A)$ reduces to the classification of finite subgroups in vertex-stabilisers of $\mathscr{C}(A)$. The structure of these stabilisers is described by the following statement: \begin{lemma}\label{lem:VertexStab} The stabiliser in $\amod(A)$ of a vertex $[\Sigma, \id]$ in $\mathscr{C}(A)$ is a subgroup of $\mathrm{stab}(\Sigma)$ in $\mathrm{Mod}(\mathscr{S}^\sharp(A))$, and it satisfies $$1 \to \mathrm{Mod}(\Sigma) \to \stab([\Sigma, \id]) \to \mathbb{Z}_{r(\Sigma)} \to 1$$ for some integer $r(\Sigma) \geq 0$, where the morphism to $\mathbb{Z}_{r(\Sigma)}$ comes from the action of $\mathrm{stab}([\Sigma,\id])$ by cyclic permutations on the components of $\mathrm{Fr}(\Sigma)$. \end{lemma} \noindent For instance, if $A$ is the $n$-regular tree and if $\Sigma$ is a single polygon, then $r(\Sigma)=n$. However, if $A=R_n$ with $n \geq 3$ and if $\Sigma$ is again a single polygon, then $r(\Sigma)=n$ if $\Sigma$ is the central polygon and $r(\Sigma)=0$ otherwise. \begin{proof}[Proof of Lemma \ref{lem:VertexStab}.] 
The equivalence relation in the definition of vertices in $\mathscr{C}(A)$ implies directly that the stabiliser $\stab([\Sigma, \id])$ is the subgroup $$\left\{ g \in \amod(A) \mid \text{up to isotopy, $g$ maps $\Sigma$ to itself and is rigid outside $\Sigma$} \right\}.$$ Now, note that $\Mod(\Sigma)$ injects into the stabiliser $\stab([\Sigma, \id])$ in the following way: an isotopy class of homeomorphisms in $\Mod(\Sigma)$ represented by $g$ defines an element in $\amod(A)$ as the isotopy class of the asymptotically rigid element whose restriction to $\Sigma$ is $g$ and that is the identity outside $\Sigma$. Consider the following short exact sequence: \[ 1\longrightarrow \Mod(\Sigma)\overset{\iota}{ \hooklongrightarrow} \stab([\Sigma, \id]) \overset{\pi}{\longrightarrow} \Z_{r(\Sigma)} \longrightarrow 1, \] where $\iota$ is the inclusion and $\pi$ is defined as follows. If $g\in \stab([\Sigma, \id])$ then $g$ (up to isotopy) is an orientation-preserving homeomorphism of $\Sigma$. Consequently, it preserves the cyclic order of the connected components of $\mathrm{Fr}(\Sigma)$. This yields a homomorphism from the stabiliser $\stab([\Sigma, \id])$ to some finite cyclic group, whose image is hence a cyclic group $\mathbb{Z}_{r(\Sigma)}$. \end{proof} \noindent We are now ready to state the main result of this section: \begin{thm}\label{thm:FiniteOrders} Every finite subgroup in $\amod(A)$ is cyclic. Moreover, the positive divisors of the integers in $$\{ \gcd(r(\Sigma),h(\Sigma)), \ \gcd(r(\Sigma),h(\Sigma)-1) \mid \text{$\Sigma$ admissible, $r(\Sigma) \neq 0$}\},$$ where $r(\Sigma)$ is defined in Lemma \ref{lem:VertexStab}, are exactly the orders of the finite-order elements in $\amod(A)$. \end{thm} \noindent Before turning to the proof of Theorem \ref{thm:FiniteOrders}, recall that the center of the braid group $\mathrm{Mod}(D_k)$ on $k$ strands is the infinite cyclic group generated by the \emph{full twist} $\tau$, which is a Dehn twist around a simple closed curve lying in a small tubular neighborhood of the boundary $\partial D_k$ and parallel to $\partial D_k$. An element $g\in \Mod(D_k)$ is \emph{periodic} if $g^r=\tau^s$ for some integers $r,s \in \mathbb{Z}$, not both zero. For instance, if we fix a cyclic order on the punctures of $D_k$, then the (class of the) homeomorphism $\varepsilon$ given by permuting the $k$ punctures cyclically is periodic (see Figure~\ref{fig:braid1}); and, if we fix a cyclic order on all the punctures of $D_k$ but one, then the (class of the) homeomorphism $\delta$ given by permuting cyclically the ordered punctures is also periodic (see Figure~\ref{fig:braid2}). 
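\medskip \noindent Let us also record an elementary observation that is used implicitly in the computations below: with the punctures arranged as in Figures~\ref{fig:braid1} and~\ref{fig:braid2}, one may take $\varepsilon$ to be a rotation of $D_k$ of angle $2\pi/k$ and $\delta$ a rotation of angle $2\pi/(k-1)$ fixing the central puncture, so that $\varepsilon^{k}$ and $\delta^{k-1}$ are both isotopic to a rotation of angle $2\pi$, i.e. to the full twist $\tau$.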
As observed in \cite[Corollary~2.2]{MR2253663}, a consequence of a theorem proved by Ker\'ekj\'art\'o, Brouwer, and Eilenberg \cite{Eilenberg}, which we record for future use, is that $\varepsilon$ and $\delta$ are essentially the only periodic elements in our braid group: \begin{figure} \centering \begin{minipage}{0.45\textwidth} \begin{center} \begin{tikzpicture} \draw[thick, ->] (2,0) arc (0:50:2); \draw[thick, ->] (0.6,1.8) arc (72:122:2); \draw[thick, ->] (-1.6,1.218) arc (144:194:2); \draw[thick, ->] (-1.6,-1.18) arc (216:266:2); \draw[thick, ->] (0.6,-1.8) arc (288:338:2); \filldraw[black] (1.9,-0.3) circle (2pt); \filldraw[black] (0.88,1.7) circle (2pt); \filldraw[black] (-1.4,1.5) circle (2pt); \filldraw[black] (-1.8,-0.8) circle (2pt); \filldraw[black] (0.2,-1.9) circle (2pt); \end{tikzpicture} \end{center} \caption{The element $\varepsilon$ in $\Mod(D_k)$.} \label{fig:braid1} \end{minipage}\hfill \begin{minipage}{0.45\textwidth} \begin{center} \begin{tikzpicture} \draw[thick, ->] (2,0) arc (0:75:2); \draw[thick, ->] (0,2) arc (90:165:2); \draw[thick, ->] (-2,0) arc (180:255:2); \draw[thick, ->] (0,-2) arc (270:345:2); \filldraw[black] (1.98,-0.2) circle (2pt); \filldraw[black] (0.16,1.98) circle (2pt); \filldraw[black] (-1.98,0.2) circle (2pt); \filldraw[black] (-0.16,-1.98) circle (2pt); \filldraw[black] (0,0) circle (2pt); \end{tikzpicture} \end{center} \caption{The element $\delta$ in $\Mod(D_k)$.} \label{fig:braid2} \end{minipage} \end{figure} \begin{prop}\label{prop:periodic} Every periodic element in $\Mod(D_k)$ is conjugate to a power of $\varepsilon$ or~$\delta$. \end{prop} \noindent Now we are ready to prove our main theorem. \begin{proof}[Proof of Theorem \ref{thm:FiniteOrders}.] Let $G \leq \amod(A)$ be a finite subgroup. By Proposition~\ref{prop:finitegrps}, $G$ fixes a vertex $[\Sigma, \id]$. Recall from Lemma~\ref{lem:VertexStab} that there exists a short exact sequence \begin{equation}\label{eq:stab} 1\longrightarrow \Mod(\Sigma) \hooklongrightarrow \stab([\Sigma, \id]) \overset{\pi}{\longrightarrow} \Z_{r(\Sigma)} \longrightarrow 1. \end{equation} The subgroup $\mathrm{Mod}(\Sigma)$ is isomorphic to the mapping class group $\Mod(D_{h(\Sigma)})$ of the $h(\Sigma)$-punctured disk $D_{h(\Sigma)}$, itself isomorphic to the braid group on $h(\Sigma)$ strands. Since braid groups are torsion-free, the homomorphism $\pi$ injects $G$ into $\mathbb{Z}_{r(\Sigma)}$. In particular, $G$ is cyclic, proving the first assertion of our theorem. Moreover, if $r(\Sigma)=0$ then $G$ must be trivial, so, from now on, we assume that $r(\Sigma) \neq 0$. \medskip \noindent If $h(\Sigma)=1$, then $\mathrm{Mod}(\Sigma)=\{1\}$, which implies that $\mathrm{stab}([\Sigma,\id])$ is cyclic of order $r(\Sigma)$. From now on, we assume that $h(\Sigma) \geq 2$. \medskip \noindent Fix an element $\rho\in \stab([\Sigma, \id])$ such that its restriction to some disk in $\Sigma$ containing all the punctures of $\Sigma$ is the identity and such that it cyclically shifts the components of $\mathrm{Fr}(\Sigma)$ with the smallest possible angle. (Here, following Lemma \ref{lem:VertexStab}, we are thinking of $\mathrm{stab}([\Sigma,\id])$ as a subgroup of $\mathrm{stab}(\Sigma) \leq \mathrm{Mod}(\mathscr{S}^\sharp(A))$, and so $\rho$ as a homeomorphism stabilising $\Sigma$.) 
Notice that $\rho$ is sent to a generator in $\mathbb{Z}_{r(\Sigma)}$; that $\rho^{r(\Sigma)}$ coincides with the full twist $\tau$ of $\Sigma$; that $\rho$ commutes with all the elements in $\mathrm{Mod}(\Sigma) \leq \mathrm{stab}([\Sigma,\id])$; and that $\rho$ has infinite order because $h(\Sigma) \geq 2$. \medskip \noindent Let $g \in \mathrm{stab}([\Sigma,\id])$ be a non-trivial element. It can be written as $\rho^t \sigma$ where $t \in \mathbb{Z}$ and $\sigma \in \mathrm{Mod}(\Sigma)$; up to replacing $g$ with $g^{-1}$, we can suppose without loss of generality that $t \geq 0$. Notice that, if $\sigma$ is not periodic, then $g$ has infinite order. Indeed, if $g$ has finite order then $\sigma$ must have a power that is a power of $\rho$, so $\sigma$ must have a power that is a power of the full twist $\tau$. From now on, we assume that $\sigma$ is periodic. According to Proposition \ref{prop:periodic}, $g$ must be conjugate to $\rho^t \epsilon^{s}$ or $\rho^t \delta^{s}$ for some $s \in \mathbb{Z}$. Because $g$ is non-trivial, $t$ and $s$ cannot be both zero. Therefore, if $t = 0$ or $s=0$, then $g$ has infinite order. From now on, we assume that $t>0$ and $s \neq 0$. \medskip \noindent First, assume that $g$ is conjugate to $\rho^t \epsilon^{s}$. Notice that $(t \vee r(\Sigma))/t$ is the smallest integer $n \geq 1$ such that $(\rho^t)^n$ belongs to $\langle \tau \rangle$, and similarly $(|s| \vee h(\Sigma))/|s|$ is the smallest integer $n \geq 1$ such that $(\epsilon^{s})^n$ belongs to $\langle \tau \rangle$. (Here, $\vee$ refers to the least common multiple.) Therefore, $\nu:= \frac{t \vee r(\Sigma)}{t} \vee \frac{|s| \vee h(\Sigma)}{|s|}$ is the smallest integer $n \geq 1$ such that $(\rho^t\epsilon^{s})^n$ belongs to $\langle \tau \rangle$. Notice that $$\left( \rho^t\epsilon^{s} \right)^\nu = \left( \rho^{r(\Sigma)} \right)^{t \nu/ r(\Sigma)} \left( \epsilon^{h(\Sigma)} \right)^{s \nu/h(\Sigma)} = \tau^{\frac{t\nu}{r(\Sigma)} +\frac{s \nu}{h(\Sigma)}}.$$ Therefore, either $th(\Sigma) + s r(\Sigma) \neq 0$ and $g$ has infinite order (since it is conjugate to a non-trivial power of $\tau$) or $th(\Sigma) + s r(\Sigma)=0$ and $g$ has order $\nu$. \medskip \noindent Similarly, if $g$ is conjugate to $\rho^t \delta^s$, we set $\nu:= \frac{t \vee r(\Sigma)}{t} \vee \frac{|s| \vee (h(\Sigma)-1)}{|s|}$ and we show that either $t(h(\Sigma)-1) + sr(\Sigma) \neq 0$ and $g$ has infinite order or $t(h(\Sigma)-1) + sr(\Sigma) = 0$ and $g$ has order $\nu$. \medskip \noindent So far, we have proved that $$\begin{array}{l} \displaystyle \left\{ \text{divisors of } r(\Sigma) \mid \Sigma \text{ single polygon, $r(\Sigma) \neq 0$} \right\} \\ \\ \displaystyle \cup \left\{ \frac{t \vee r(\Sigma)}{t} \vee \frac{s \vee h(\Sigma)}{s} \mid t,s>0, th(\Sigma) = s r(\Sigma), \text{$\Sigma$ admissible}, r(\Sigma) \neq 0 \right\} \\ \\ \displaystyle \cup \left\{ \frac{t \vee r(\Sigma)}{t} \vee \frac{s \vee (h(\Sigma)-1)}{s} \mid t,s>0, t(h(\Sigma)-1) = s r(\Sigma), \text{$\Sigma$ admissible}, r(\Sigma) \neq 0 \right\} \end{array}$$ is the set of orders of the finite-order elements in $\amod(A)$; according to our next claim, it coincides with the positive divisors of the integers in $$\begin{array}{l} \displaystyle \left\{ r(\Sigma) \mid \text{$\Sigma$ single polygon, $r(\Sigma) \neq 0$} \right\} \\ \hspace{2cm} \displaystyle \cup \left\{ \gcd(r(\Sigma), h(\Sigma)), \gcd(r(\Sigma),h(\Sigma)-1) \mid \text{$\Sigma$ admissible, $r(\Sigma) \neq 0$} \right\}. 
\end{array}$$ We conclude the proof of our theorem by noticing that the left set lies in the right one. \begin{claim} For all integers $a,b \geq 1$, $$\left\{ \frac{t \vee a}{t} \vee \frac{s\vee b}{s} \mid t,s \geq 1, \ tb=sa \right\}$$ coincides with the set of positive divisors of $\mathrm{gcd}(a,b)$. \end{claim} \noindent First, assume that $t,s \geq 1$ are two integers satisfying $tb=sa$. Fix a prime $p$ and let $$v_p : n \mapsto \max\{k \geq 0 \mid p^k \text{ divides } n \}$$ denote the $p$-adic valuation. We distinguish two cases. If $v_p(t) \geq v_p(a)$, then $$v_p \left( \frac{t \vee a}{t} \right) = \max(v_p(t),v_p(a))-v_p(t) =0.$$ Notice that $v_p(t)+v_p(b)=v_p(tb)=v_p(sa)=v_p(s)+v_p(a)$ implies that $v_p(s) \geq v_p(b)$, so we also have $$v_p \left( \frac{s \vee b}{s} \right) = \max(v_p(s),v_p(b))-v_p(s)=0.$$ Therefore, $$v_p \left( \frac{t \vee a}{t} \vee \frac{s \vee b}{s} \right) =\max \left( v_p \left( \frac{t \vee a}{t} \right), v_p \left( \frac{s \vee b}{s} \right) \right)=0.$$ Next, if $v_p(t)< v_p(a)$, then $$v_p \left( \frac{t \vee a}{t} \right) = \max(v_p(t),v_p(a))-v_p(t) = v_p(a)-v_p(t).$$ Notice that $v_p(t)+v_p(b)=v_p(tb)=v_p(sa)=v_p(s)+v_p(a)$ implies that $v_p(s) < v_p(b)$, so we also have $$v_p \left( \frac{s \vee b}{s} \right) = \max(v_p(s),v_p(b))-v_p(s)= v_p(b)-v_p(s).$$ Therefore, $$v_p \left( \frac{t \vee a}{t} \vee \frac{s \vee b}{s} \right) =\max \left( v_p \left( \frac{t \vee a}{t} \right), v_p \left( \frac{s \vee b}{s} \right) \right)$$ coincides with $v_p(a)-v_p(t)= v_p(b)-v_p(s)$. We conclude that $$v_p \left( \frac{t \vee a}{t} \vee \frac{s \vee b}{s} \right) \leq \min(v_p(a),v_p(b)) \text{ for every prime $p$},$$ proving that $\frac{t \vee a}{t} \vee \frac{s \vee b}{s}$ divides $\gcd(a,b)$. Thus, the integers in the set given by our claim are all positive divisors of $\gcd(a,b)$. Conversely, let $k$ be a positive divisor of $\gcd(a,b)$. Set $t:=a/k$ and $s:=b/k$, and notice that $tb= ab/k = sa$ and that $$ \frac{t \vee a}{t} \vee \frac{s \vee b}{s}= \frac{a}{t} \vee \frac{b}{s} = k \vee k = k.$$ So the divisor $k$ belongs to the set given by our claim. This concludes the proof. \end{proof} \noindent Let us illustrate Theorem \ref{thm:FiniteOrders} with two examples. First, we consider the trees associated to the braided Houghton groups. \begin{prop} Let $n \geq 2$ be an integer. Then $\amod(R_n)$ contains an element of order $l$ if and only if $l$ divides $n$. \end{prop} \begin{proof} If $\Sigma$ is an admissible subsurface that does not contain the central vertex of $R_n$, then $r(\Sigma)=2$ if $n=2$ and $r(\Sigma)=0$ if $n \geq 3$ (because the complement of $\Sigma$ has two components, one of which contains a $2n$-gon while the other contains only $4$-gons). If $\Sigma$ is an admissible subsurface containing the central vertex of $R_n$, then $r(\Sigma)=n$. The desired conclusion follows from Theorem \ref{thm:FiniteOrders}. \end{proof} \noindent As a second example, we apply Theorem \ref{thm:FiniteOrders} to the braided Higman-Thompson groups. \torsionthm \begin{proof} First, assume that $m =n+1$. So $A_{n,m}$ is $(n+1)$-regular. If $\Sigma$ is an admissible subsurface, then $$r(\Sigma)= n+1 + (h(\Sigma)-1)(n-1) = 2 + h(\Sigma)(n-1).$$ Notice that $\gcd(r(\Sigma),h(\Sigma))$ must divide $r(\Sigma)-h(\Sigma)(n-1) = 2$ and that $\gcd(r(\Sigma),h(\Sigma)-1)$ must divide $r(\Sigma)-(h(\Sigma)-1)(n-1)=n+1$. 
Also, notice that, if $\Sigma$ has height $2$, then $\gcd(r(\Sigma),h(\Sigma))= \gcd(2n,2)=2$; and that, if $\Sigma$ has height $n+2$, then $\gcd(r(\Sigma),h(\Sigma)-1) = \gcd(n(n+1),n+1)=n+1$. Therefore, it follows from Theorem \ref{thm:FiniteOrders} that the possible finite orders in $\mathrm{br}T_{n,m}$ are the positive divisors of $2$ and $n+1$. \medskip \noindent From now on, we assume that $m \neq n+1$. If $\Sigma$ is an admissible subsurface that does not contain the vertex of valence $m$, then $r(\Sigma)=0$ because the complement of $\Sigma$ contains one component with a $2m$-gon while all the other components contain only $2(n+1)$-gons. But, if $\Sigma$ contains the vertex of valence $m$, then $$r(\Sigma)= m + (h(\Sigma)-1)(n-1) = m-n+1 + h(\Sigma)(n-1).$$ Notice that $\gcd(r(\Sigma),h(\Sigma)-1)$ must divide $r(\Sigma)-(h(\Sigma)-1)(n-1)=m$ and that, if $\Sigma$ has height $m+1$, then $\gcd(r(\Sigma),h(\Sigma)-1)=\gcd(mn,m)=m$. We need to distinguish two cases: \begin{itemize} \item Assume that $m \neq n-1$. Notice that $\gcd(r(\Sigma),h(\Sigma))$ must divide $r(\Sigma)-h(\Sigma)(n-1)=m-n+1$. Moreover, if $\Sigma$ has height $\lvert m-n+1\rvert$, then $r(\Sigma)=\lvert m-n+1\rvert(n-1+ \epsilon)$ where $\epsilon \in \{\pm 1\}$ according to the sign of $m-n+1$ and we have $\gcd(r(\Sigma), h(\Sigma))=|m-n+1|$. It follows from Theorem~\ref{thm:FiniteOrders} that the finite orders in $\mathrm{br}T_{n,m}$ are the positive divisors of the integers in $\{m\} \cup \{m-n+1\}.$ \item Assume that $m=n-1$. Then $\gcd(r(\Sigma),h(\Sigma))= \gcd(h(\Sigma)(n-1),h(\Sigma))=h(\Sigma)$. It follows from Theorem~\ref{thm:FiniteOrders} that all the possible orders occur in $\mathrm{br}T_{n,m}$. \end{itemize} This concludes the proof of our theorem.\qedhere \end{proof} \noindent Interestingly, Theorem \ref{prop:torsion} allows us to distinguish many braided Higman-Thompson groups up to isomorphism. For instance: \begin{cor} Let $m,n \geq 2$ be two integers. The braided Ptolemy-Thompson groups $\mathrm{br}T_n$ and $\mathrm{br}T_m$ are isomorphic if and only if $m=n$. \end{cor} \begin{remark}\label{rem:funar_kap_non-iso} From Theorem~\ref{prop:torsion}, we can directly recover the result of Funar-Kapoudjian that $\amod^*(A_2) \simeq \mathrm{br}T_{2,4}$ is not isomorphic to $\amod(A_2)=\mathrm{br}T_{2,3}$. More generally, we see that $\amod^*(A_n)\simeq \mathrm{br}T_{n,2n}$ is not isomorphic to $\amod(A_{n})=\mathrm{br}T_{n,n+1}$. \medskip \noindent However, Theorem~\ref{prop:torsion} does not give us any information on whether $\mathrm{br}T_{6n-1}$ is isomorphic to $\mathrm{br}T_{6n-2,6n}$, for instance. Note also that an interesting phenomenon occurs for the braided Higman-Thompson groups of the form $\mathrm{br}T_{n,n-1}$ for $n\geq 2$. They contain an element of each order, and so torsion cannot be used to distinguish them up to isomorphism either. \end{remark} \section{Finiteness properties}\label{section:TypeF} This section is dedicated to the proofs of Theorems \ref{PtolemyConnected} and~\ref{thm:brHfiniteness} from the introduction. Namely, we want to prove that the braided Higman-Thompson group $\mathrm{br}T_{n,m}$ is of type $F_\infty$ and that the braided Houghton group $\mathrm{br}H_n$ is of type $F_{n-1}$ but not of type $F_n$. 
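\medskip \noindent Recall that a group is of type $F_n$ if it admits a classifying space with finite $n$-skeleton, and of type $F_\infty$ if it is of type $F_n$ for every $n \geq 1$; being of type $F_1$ (resp. $F_2$) amounts to being finitely generated (resp. finitely presented).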
\medskip \noindent If we allow $n=1$ in our notation $A_{n,m}$, then $\amod(A_{n,m})$ coincides with the braided Higman-Thompson group $\mathrm{br}T_{n,m}$ if $n \geq 2$, and it contains the braided Houghton group $\mathrm{br}H_m$ as a finite-index subgroup if $n =1$. This notation allows us to work with these two families of groups simultaneously in Sections \ref{section:spine} and \ref{section:Links}. \medskip \noindent Our strategy is the following. First, we extract from the cube complex $\mathscr{C}(A_{n,m})$, on which $\amod(A_{n,m})$ acts, a smaller cube complex $\mathscr{SC}(A_{n,m})$, referred to as the \emph{spine}. We show in Section \ref{section:spine} that $\mathscr{SC}(A_{n,m})$ is a $\amod(A_{n,m})$-invariant subcomplex, on which $\mathscr{C}(A_{n,m})$ deformation retracts. In particular, $\mathscr{SC}(A_{n,m})$ is also contractible and its vertex-stabilisers are also finite extensions of braid groups. Next, in Section \ref{section:Links}, we describe the descending links in $\mathscr{SC}(A_{n,m})$, with respect to the height function, as complexes of arcs on the disc. Finally, in Sections \ref{section:HigmanThompson} and \ref{section:Houghton}, we study the connectedness of these complexes and we deduce our main theorems from standard arguments of Morse theory. \subsection{The spine of the cube complex}\label{section:spine} \noindent There are two different types of vertices in the cube complex $\mathscr{C}(A_{n,m})$: those associated to the admissible subsurfaces containing the \emph{central polygon} (i.e. the polygon containing the vertex of degree $m$), and those associated to the admissible subsurfaces that do not contain the central polygon. The descending links (with respect to the height function) of the latter vertices are non-simply connected graphs if $n=1$, which is an obstruction to proving finiteness properties. This observation motivates the following definition: \begin{definition} Fix two integers $n,m \geq 1$. The \emph{spine} $\mathscr{SC}(A_{n,m})$ is the subcomplex of $\mathscr{C}(A_{n,m})$ generated by the vertices $$\{ [\Sigma, \varphi] \mid \text{$\Sigma$ contains the central polygon} \},$$ where the central polygon refers to an arbitrary polygon we fix once and for all if $m=n+1$ and otherwise to the polygon of $\mathscr{S}^\sharp(A_{n,m})$ that contains the puncture corresponding to the unique vertex of $A_{n,m}$ having valence $m$. \end{definition} \noindent It is clear that $\amod(A_{n,m})$ stabilises the spine $\mathscr{SC}(A_{n,m})$. By reproducing the proof of Theorem~\ref{thm:contractible} word for word, it can be proved that the spine is also contractible. In fact, we can prove more: \begin{prop}\label{prop:HomotopyEqui} For all $n,m \geq 1$, the spine $\mathscr{SC}(A_{n,m}) \subset \mathscr{C}(A_{n,m})$ is a $\amod(A_{n,m})$-invariant subcomplex, on which $\mathscr{C}(A_{n,m})$ deformation retracts. In particular, $\mathscr{C}(A_{n,m})$ and $\mathscr{SC}(A_{n,m})$ are homotopy equivalent. \end{prop} \begin{proof} If $m=n+1$, then $A_{n,m}$ is the $m$-regular tree and $\mathscr{SC}(A_{n,m})=\mathscr{C}(A_{n,m})$. Indeed, if $[\Sigma, \varphi]$ is a vertex of $\mathscr{C}(A_{n,n+1})$, then there exists a rigid homeomorphism $t$ associated to an isometry of $A_{n,m}$ such that $t \Sigma$ contains the central polygon. Then $[\Sigma,\varphi]=[t \Sigma, \varphi \circ t^{-1}] \in \mathscr{SC}(A_{n,n+1})$. From now on, we assume that $m \neq n+1$. 
\medskip \noindent As a consequence of Whitehead's theorem, it is sufficient to show that the inclusion $\mathscr{SC}(A_{n,m}) \hookrightarrow \mathscr{C}(A_{n,m})$ induces an isomorphism on each homotopy group in order to deduce that $\mathscr{C}(A_{n,m})$ deformation retracts on $\mathscr{SC}(A_{n,m})$. This is a consequence of the following statement: every finite subcomplex $F \subset \mathscr{C}(A_{n,m})$ is contained in another subcomplex $F^+$ that deformation retracts on $F^+ \cap \mathscr{SC}(A_{n,m})$. We need some preliminary observations before proving this statement. \medskip \noindent Let $\Sigma$ be an admissible subsurface. We refer to the \emph{central height} of $\Sigma$ as the minimal number of polygons to add to $\Sigma$ in order to get an admissible subsurface containing the central polygon, and we denote by $\tau(\Sigma)$ the subsurface obtained from $\Sigma$ by adding the polygon adjacent to $\Sigma$ that is the closest to the central polygon. If $\Sigma$ already contains the central polygon, then it has zero central height and $\tau(\Sigma)= \Sigma$. \begin{claim}\label{claim:CentralHeight} If $(\Sigma_1,\varphi_1)$ and $(\Sigma_2,\varphi_2)$ are two representatives of a vertex in $\mathscr{C}(A_{n,m})$, then $\Sigma_1$ and $\Sigma_2$ have the same central height and $[\tau(\Sigma_1),\varphi_1]=[\tau(\Sigma_2),\varphi_2]$. \end{claim} \noindent Because having central height zero can be read from the height and the number of components in the frontier of the admissible subsurface under consideration, our claim is clear if the vertex belongs to the spine. From now on, we assume that it does not belong to the spine, i.e. $\Sigma_1$ and $\Sigma_2$ do not contain the central polygon. We know that $\varphi_2^{-1}\varphi_1$ is isotopic to a homeomorphism $\varphi$ that sends $\Sigma_1$ to $\Sigma_2$ and that is rigid outside $\Sigma_1$. Necessarily, $\varphi$ stabilises the central polygon. Also, it sends the polygons between $\Sigma_1$ and the central polygon to the polygons between $\Sigma_2$ and the central polygon; in particular, it sends $\tau(\Sigma_1)$ to $\tau(\Sigma_2)$. The latter assertion shows that $[\tau(\Sigma_1),\varphi_1]=[\tau(\Sigma_2),\varphi_2]$; and the former assertion shows that $\Sigma_1$ and $\Sigma_2$ have the same central height. Thus, our claim is proved. \medskip \noindent Claim \ref{claim:CentralHeight} allows us to define the central height of a vertex $[\Sigma,\varphi]$ as the central height of $\Sigma$, and to define the map $$\tau : \left\{ \begin{array}{ccc} \mathscr{C}(A_{n,m})^{(0)} & \to & \mathscr{C}(A_{n,m})^{(0)} \\ \left[ \Sigma, \varphi \right] & \mapsto & \left[ \tau(\Sigma), \varphi \right] \end{array} \right..$$ Notice that, for every vertex $x \in \mathscr{C}(A_{n,m})$, $\tau^k(x)$ belongs to $\mathscr{SC}(A_{n,m})$ if $k$ is at least the central height of $x$. Consequently, if $F$ is an arbitrary finite subcomplex in $\mathscr{C}(A_{n,m})$, then the subcomplex $F^+$ spanned by $$\left\{ \tau^k(x) \mid x \in F^{(0)}, k \geq 0 \right\}$$ is also finite and it intersects $\mathscr{SC}(A_{n,m})$. Let $S \subset F^+$ denote the subcomplex generated by the vertices of maximal central height. Notice that, as a consequence of Claims~\ref{claim:Tau} and~\ref{claim:InjectiveComponent}, for each connected component $R$ of $S$ the subcomplex in $F^+$ generated by $R \cup \tau(R^{(0)})$ is isomorphic to $R \times [0,1]$, where $R$ is sent to $R \times \{0\}$ and $\tau(R^{(0)})$ to $R \times \{1\}$. 
Therefore, we can retract simultaneously all the components of $S$ onto the subcomplex of $F^+$ spanned by the vertices that do not have maximal central height. By iterating the process, we eventually deformation retract $F^+$ onto its subcomplex spanned by the vertices of central height zero, i.e. onto $F^+ \cap \mathscr{SC}(A_{n,m})$ as desired. \begin{claim}\label{claim:Tau} For a $k$-cube $C$ in $S$, $C \cup \tau(C^{(0)})$ spans a $(k+1)$-cube. \end{claim} \noindent There exist an admissible subsurface $\Sigma$, a homeomorphism $\varphi$ and polygons $H_1, \ldots, H_k$ such that the vertices of $C$ are $$\left\{ \left[ \Sigma\cup \bigcup\limits_{i \in I} H_i ,\varphi \right] \mid I \subset \{1 ,\ldots, k\} \right\}.$$ Because they all have maximal central height, the polygons $H_1, \ldots, H_k$ are distinct from the polygon adjacent to $\Sigma$ that is the closest to the central polygon. Let $H_{k+1}$ denote the latter polygon. Then the vertices of $C \cup \tau(C^{(0)})$ are $$\left\{ \left[ \Sigma\cup \bigcup\limits_{i \in I} H_i ,\varphi \right] \mid I \subset \{1 ,\ldots, k+1\} \right\}.$$ Thus, $C \cup \tau(C^{(0)})$ spans a $(k+1)$-cube. \begin{claim}\label{claim:InjectiveComponent} The map $\tau$ is injective on each connected component of $S$. \end{claim} \noindent Before turning to the proof of our claim, we need to introduce some notation. Given an admissible subsurface $\Sigma$ that does not contain the central polygon, we denote by $\alpha(\Sigma)$ the polygon of $\Sigma$ that is the closest to the central polygon, and by $\omega(\Sigma)$ the polygon adjacent to the central polygon that is the closest to $\Sigma$. \medskip \noindent Now, fix two vertices $a,b \in S$ belonging to the same connected component and assume that $\tau(a)= \tau(b)$. Given a path of vertices $x_1, \ldots, x_k \in S$ from $a$ to $b$, we claim that they admit representatives $(\Sigma_1,\varphi_1), \ldots, (\Sigma_k,\varphi_k)$ such that, for every $1 \leq i \leq k-1$, we have $\alpha(\Sigma_i) = \alpha(\Sigma_{i+1})$ and $\varphi_{i+1}^{-1}\varphi_i$ is isotopic to an asymptotically rigid homeomorphism that is rigid on the connected subsurface delimited by $\alpha(\Sigma_i)$ and that contains the central polygon. We construct our representatives inductively, starting with an arbitrary representative $(\Sigma_1,\varphi_1)$ of $x_1$. \medskip \noindent Assume that $(\Sigma_i, \varphi_i)$ is defined for some $1 \leq i \leq k-1$ and that the height of $x_{i+1}$ is larger than the height of $x_i$. According to Lemma \ref{lem:Choice}, there exists a polygon $H$ such that $(\Sigma_{i}\cup H,\varphi_i)$ is a representative of $x_{i+1}$. Then set $\Sigma_{i+1}=\Sigma_i \cup H$ and $\varphi_{i+1}= \varphi_i$. Observe that $H$ does not separate $\alpha(\Sigma_i)$ and the central polygon because $x_i$ and $x_{i+1}$ have the same central height, so $\alpha(\Sigma_{i+1})= \alpha(\Sigma_i)$. \medskip \noindent Now, assume that $(\Sigma_i,\varphi_i)$ is defined for some $1 \leq i \leq k-1$ and that the height of $x_{i+1}$ is smaller than the height of $x_i$. Fix a representative $(\Sigma, \varphi)$ of $x_{i+1}$. Up to rotating around the central polygon by a rigid homeomorphism, we assume that $\omega(\Sigma)= \omega(\Sigma_i)$. If $\alpha(\Sigma_i) \neq \alpha(\Sigma)$, then we denote by $K$ the polygon that separates both $\Sigma$ and $\Sigma_i$ from the central polygon and that is the farthest from the central polygon; see Figure \ref{Tau}. We also denote by $P$ (resp.
$Q$) the polygon adjacent to $K$ that is the closest to $\Sigma_i$ (resp. $\Sigma$). Now, according to Lemma \ref{lem:Choice}, there exists a polygon $H$ such that $x_i=[\Sigma \cup H, \varphi]$. From the equality $[\Sigma \cup H, \varphi]= [\Sigma_i, \varphi_i]$, we know that $\varphi^{-1} \varphi_i$ is isotopic to a homeomorphism $\psi$ that sends $\Sigma_i$ to $\Sigma \cup H$ and that is rigid outside $\Sigma_i$. Necessarily, $\psi$ fixes $K$ and the central polygon, which are distinct. Consequently, $\psi$ must stabilise each component of the frontier of $K$, contradicting the fact that $\psi$ must also send $P$ to $Q$. Thus we have proved that $\alpha(\Sigma_i)= \alpha(\Sigma)$. Observe that, because $x_i$ and $x_{i+1}$ have the same central height, necessarily $\alpha(\Sigma)= \alpha(\Sigma \cup H)$. Therefore, we can set $\Sigma_{i+1}= \Sigma$ and $\varphi_{i+1}=\varphi$. \begin{figure} \begin{center} \includegraphics[scale=0.3]{Tau} \caption{} \label{Tau} \end{center} \end{figure} \medskip \noindent Thus, our representatives are constructed. Since $\alpha(\Sigma_i)= \alpha(\Sigma_{i+1})$ for every $1 \leq i \leq k-1$, we denote by $A$ this common polygon. Also, we denote by $B$ the polygon adjacent to $A$ that is the closest to the central polygon. By construction, $$\varphi_k^{-1}\varphi_1 = (\varphi_k^{-1} \varphi_{k-1}) \cdot (\varphi_{k-1}^{-1} \varphi_{k-2}) \cdots (\varphi^{-1}_2 \varphi_1)$$ is isotopic to an asymptotically rigid homeomorphism that is rigid on the connected subsurface delimited by $A$ and containing the central polygon. Observe that such a homeomorphism must stabilise $B$. On the other hand, it follows from the equality $$[\Sigma_1 \cup B , \varphi_1] = \tau(x_1)=\tau(a)=\tau(b)= \tau(x_k) = [\Sigma_k \cup B, \varphi_k]$$ that $\varphi_k^{-1}\varphi_1$ is isotopic to an asymptotically rigid homeomorphism sending $\Sigma_1 \cup B$ to $\Sigma_k \cup B$ that is rigid outside $\Sigma_1 \cup B$. We deduce that $\varphi_{k}^{-1} \varphi_1$ is isotopic to an asymptotically rigid homeomorphism that sends $\Sigma_1$ to $\Sigma_k$ and that is rigid outside $\Sigma_1$, hence $a=[\Sigma_1,\varphi_1]= [\Sigma_k, \varphi_k]=b$, concluding the proof of Claim \ref{claim:InjectiveComponent}. \end{proof} \subsection{Description of the descending links}\label{section:Links} \noindent Let us recall some basic definitions about links and Morse functions in cube complexes. \begin{definition} Let $X$ be a cube complex. A map $f : X \to \mathbb{R}$ is a \emph{Morse function} if it is affine and non-constant on each cube of positive dimension and if the image $f(X^{(0)})$ is discrete. For every $m \in \mathbb{R}$, the \emph{sublevel set} $X_m$ is the subcomplex of $X$ generated by the vertices $y \in X$ satisfying $f(y) \leq m$. The \emph{link} of a vertex $x \in X$ is the simplicial complex whose vertices are the edges containing $x$ and whose simplices are collections of edges spanning cubes. The \emph{descending link} of $x \in X$ (with respect to $f$) is the link of $x$ in the sublevel set $X_{f(x)}$. \end{definition} \noindent For $\mathscr{SC}(A_{n,m})$, the affine extension of the height function is our Morse function. Our goal in this section is to describe the descending links with respect to this function. \medskip \noindent Let $p,q \geq 1$ and $r \geq 0$ be three integers. Fix a disc $\mathbb{D}$ with $p$ punctures in its interior and $q$ marked points on its boundary. Let $\{m_i \mid i \in \mathbb{Z}_q \}$ denote these marked points, ordered cyclically.
From now on, an arc in $\mathbb{D}$ will refer to an arc that starts from a marked point and that ends at a puncture. Two arcs starting from the marked points $m_i,m_j$ are \emph{$r$-separated} if they are disjoint and if the distance between $i$ and $j$ in $\mathbb{Z}_q$ is $>r$ (where $\mathbb{Z}_q$ is metrically thought of as the cycle $\mathrm{Cayl}(\mathbb{Z}_q,\{1\})$). Notice that being $0$-separated amounts to being disjoint. We define $\mathfrak{C}(p,q,r)$ as the simplicial complex whose vertices are the isotopy classes of arcs and whose simplices are collections of arcs that are pairwise $r$-separated (up to isotopy). Observe that pairwise $r$-separated classes of arcs $\alpha_1,\dots, \alpha_n$ can be represented by pairwise $r$-separated arcs $\beta_1,\dots, \beta_n$. \medskip \noindent The braid group $\mathrm{Mod}(\mathbb{D})$ naturally acts on $\mathfrak{C}(p,q,r)$. Moreover: \begin{lemma}\label{lem:LinkTopo} For every $i \in \mathbb{Z}_q$, fix an arc $\alpha_i$ from $m_i$ to a puncture and assume that, if two marked points $m_i,m_j$ satisfy $\left| i-j\right| >r$, then $\alpha_i$ and $\alpha_j$ are disjoint. The subcomplex spanned by $\{\alpha_i, i \in \mathbb{Z}_q\}$ is a strict fundamental domain for the action $\mathrm{Mod}(\mathbb{D}) \curvearrowright \mathfrak{C}(p,q,r)$. Moreover, for every $I \subset \mathbb{Z}_q$ such that $i$ and $j$ are at distance $>r$ for all distinct $i,j \in I$, the stabiliser of the simplex spanned by $\{\alpha_i, i \in I\}$ coincides with the subgroup $\mathrm{Mod} \left( \mathbb{D} \backslash \bigcup\limits_{i \in I} N(\alpha_i) \right)$ where $N(\alpha_i)$ is a small tubular neighborhood of $\alpha_i$ for every $i \in I$. \end{lemma} \begin{proof} Let $\mathscr{S} \subset \mathfrak{C}(p,q,r)$ denote the subcomplex spanned by $\{\alpha_0, \ldots, \alpha_{q-1}\}$. A simplex $\{\alpha_i, i \in I\}$ in $\mathscr{S}$, $I \subset \mathbb{Z}_q$, is uniquely determined by the marked points $\{m_i, i \in I\} \subset \partial \mathbb{D}$. Because $\mathrm{Mod}(\mathbb{D})$ fixes the boundary $\partial \mathbb{D}$ pointwise, it follows that no two subsimplices in $\mathscr{S}$ belong to the same $\mathrm{Mod}(\mathbb{D})$-orbit. Next, let $\mathscr{S}'$ be an arbitrary simplex in $\mathfrak{C}(p,q,r)$. The vertices of $\mathscr{S}'$ can be represented by pairwise $r$-separated arcs $\beta_1, \ldots, \beta_s$. For every $1 \leq i \leq s$, let $m_{\sigma(i)}$ denote the unique marked point that belongs to $\beta_i$. Then there exists a homeomorphism $g \in \mathrm{Mod}(\mathbb{D})$ such that $g \cdot \beta_i = \alpha_{\sigma(i)}$ for every $1 \leq i \leq s$. By construction, $g \mathscr{S}' \subset \mathscr{S}$. Thus, we have proved that $\mathscr{S}$ is a strict fundamental domain for the action $\mathrm{Mod}(\mathbb{D}) \curvearrowright \mathfrak{C}(p,q,r)$. The second assertion of our lemma is clear. \end{proof} \noindent We are now ready to prove the main result of this section. \begin{prop}\label{prop:deslink} Let $n ,m \geq 1$ be two integers. In $\mathscr{SC}(A_{n,m})$, the descending link of a vertex of height $k \geq m+1$ is isomorphic to $\mathfrak{C}(k, m+(k-1)(n-1), n-1)$. \end{prop} \begin{proof} Fix a vertex $[\Sigma, \varphi] \in \mathscr{SC}(A_{n,m})$ of height $k \ge m+1$. Up to translating our vertex by $\varphi^{-1}$, we can assume without loss of generality that $\varphi=\mathrm{id}$. We denote by $\mathscr{A}$ the set of collections of $n$ consecutive frontier-arcs of $\Sigma$. 
For every collection of $n$ consecutive frontier-arcs $i \in \mathscr{A}$, we choose a topological disc $D_i \subset \Sigma$ and a puncture $p_i \in \Sigma$ such that $D_i\cap \partial\Sigma$ is connected, such that $D_i$ intersects the frontier of $\Sigma$ exactly in $i$, and such that $p_i$ is the only puncture in $D_i$. Moreover, we assume that $D_i \cap D_j = \emptyset$ for all $i, j \in \mathscr{A}$ satisfying $i \cap j= \emptyset$, which is possible because $k \geq m+1$. \medskip \noindent For every set $I \subset \mathscr{A}$ of pairwise disjoint collections of frontier-arcs, we fix an admissible subsurface $\Sigma_I$ and an asymptotically rigid homeomorphism $\varphi_I$ of $\mathscr{S}^\sharp(A_{n,m})$ such that $\varphi_I(\Sigma_I)=\Sigma$, $\varphi_I$ is rigid outside $\Sigma_I$, and $\varphi_I^{-1}(D_i)$ is an extremal polygon $H_i^I \subset \Sigma_I$ of the rigid structure for every $i \in I$. Observe that these conditions imply that $[\Sigma,\mathrm{id}]= [\Sigma_I, \varphi_I]$. For instance, $\Sigma_I$ can be constructed as follows. Since $\Sigma$ has height $k$, its frontier contains $m+(k-1)(n-1)$ arcs, which implies that $n \#I \leq m+(k-1)(n-1)$. Because $k \geq m+1$, we deduce that $\# I \leq k-1$. Consequently, we can fix an admissible subsurface $\Xi$ of height $k-\#I \geq 1$ containing the central polygon. Notice that the frontier of $\Xi$ contains $\left[ m+(k-1)(n-1) \right] - (n-1) \#I\geq \# I$ arcs, so we can fix a collection $J$ of $\#I$ arcs in the frontier of $\Xi$ and a bijection $\iota : J \to I$ so that any two consecutive arcs in $J$ (with respect to the cyclic order) are separated by the same number of arcs as their images under $\iota$ in the frontier of $\Sigma$. Then setting $\Sigma_I$ as the subsurface obtained from $\Xi$ by adding a polygon for each arc in $J$ provides the desired subsurface. \medskip \noindent If $I \subset \mathscr{A}$ is a set of pairwise disjoint collections, we denote by $Q_I$ the cube generated by $$\left[ \Sigma_I \backslash \bigcup\limits_{j \in J}{H_j^{I}}, \varphi_I \right], J \subset I.$$ Such a cube corresponds to a simplex in the descending link $\mathscr{L}$ of $[\Sigma,\id]$, namely the simplex associated to the edges from $[\Sigma, \mathrm{id}]$ to the vertices $$\left[ \Sigma_I \backslash H^I_i, \varphi_I \right], \ i \in I.$$ We denote this simplex by $S_I$. Consider now the subcomplex $\mathscr{S}$ of the descending link $\mathscr{L}$ consisting of the union of all the simplices $S_I$. \begin{claim}\label{claim:subcplx} The vertices of $\mathscr{S}$ correspond to the vertices in $V:=\{ [\Sigma_i \setminus H_i^i, \varphi_i] \mid i \in \mathscr{A} \}.$ \end{claim} \noindent Fix a set $I \subset \mathscr{A}$ of pairwise disjoint collections and an element $i \in I$. Notice that $$\left[ \Sigma_i \backslash H^i_i, \varphi_i \right]= \left[ \Sigma_I \backslash H^I_i, \varphi_I \right].$$ Indeed, $\varphi_i$ maps $\Sigma_i$ to $\Sigma$, $H^i_i$ to $D_i$, and is rigid outside $\Sigma_i$; and $\varphi_I$ maps $\Sigma_I$ to $\Sigma$, $H^I_i$ to $D_i$, and is rigid outside $\Sigma_I$. So $\varphi_I^{-1} \varphi_i$ maps $\Sigma_i$ to $\Sigma_I$, $H^i_i$ to $H^I_i$, and is rigid outside $\Sigma_i$. Equivalently, $\varphi_I^{-1}\varphi_i$ sends $\Sigma_i \backslash H^i_i$ to $\Sigma_I \backslash H^I_i$ and is rigid outside $\Sigma_i \backslash H^i_i$. The desired equality follows. Thus, the vertices of $\mathscr{S}$ belong to $V$.
Conversely, a vertex of $V$ is the simplex $S_J$ associated to a singleton $J \subset \mathscr{A}$, concluding the proof of our claim. \medskip \noindent Our goal is now to show that $\mathscr{S}$ is a strict fundamental domain under the action of $\mathrm{Mod}(\Sigma)\leq \mathrm{stab}([\Sigma,\id])$ on the descending link $\mathscr{L}$. This is the content of the following two claims. \begin{claim}\label{claim:NoTwoSameOrbit} No two simplices in $\mathscr{S}$ belong to the same $\mathrm{Mod}(\Sigma)$-orbit. \end{claim} \noindent Two simplices in $\mathscr{S}$ can be written as $S_I$ and $S_J$ for some $I,J \subset \mathscr{A}$. Assume that there exists a $g\in\mathrm{Mod}(\Sigma)$ such that $g S_I=S_J$. If $i \in I$, then $g$ must send the edge from $[\Sigma,\id]$ to $[\Sigma_I \backslash H^I_i, \varphi_I]$ (which corresponds to a vertex in $S_I$) to an edge from $[\Sigma,\id]$ to $[\Sigma_J \backslash H^J_j, \varphi_J]$ for some $j \in J$ (which is a vertex in $S_J$). We already know that $\varphi_J^{-1}g \varphi_I$ sends $\Sigma_I$ to $\Sigma_J$, and we deduce from the equality $$\left[ \Sigma_I \backslash H^I_i, g \varphi_I \right] = \left[ \Sigma_J \backslash H^J_j, \varphi_J \right]$$ that it also sends $H^I_i \cap \mathrm{Fr}(\Sigma_I)$ to $H^J_j \cap \mathrm{Fr}(\Sigma_J)$. Because $\varphi_I$ sends $H^I_i \cap \mathrm{Fr}(\Sigma_I)$ to $i$ and since $g$ fixes pointwise $\partial \Sigma$, it follows that $\varphi_J^{-1}$ sends $i$ to $H^J_j \cap \mathrm{Fr}(\Sigma_J)$. But $\varphi_J$ sends $H^J_j \cap \mathrm{Fr}(\Sigma_J)$ to $j$, so necessarily $i=j \in J$. Thus, we have proved that $I \subset J$. By symmetry, we also have $J \subset I$, hence $S_I=S_J$. \begin{claim}\label{claim:HaveAtranslate} Every simplex in $\mathscr{L}$ has a $\mathrm{Mod}(\Sigma)$-translate in $\mathscr{S}$. \end{claim} \noindent Let us fix a simplex $S$ in $\mathscr{L}$. It corresponds to a cube $Q$, which is generated by the vertices $$\left\{ \left[\Xi \cup \bigcup\limits_{j \in J} K_j, \psi \right] \mid J \subset \{1, \ldots, s\} \right\},$$ where $\Xi$ is an admissible subsurface, $K_1, \ldots, K_s$ are $2n$-gons of the rigid structure adjacent to $\Xi$, $\psi$ is an asymptotically rigid homeomorphism, and $[\Xi \cup K_1 \cup \cdots \cup K_s, \psi]=[ \Sigma, \mathrm{id}]$. \medskip \noindent Let $I \subset \mathscr{A}$ denote the set of pairwise disjoint collections corresponding to the intersections between the frontier of $\Sigma$ and the $\psi(K_i)$'s (which lie in $\Sigma$). So, for every $1 \leq i \leq s$, there exists some $j(i) \in I$ such that $\psi(K_i)$ and $D_{j(i)}$ have the same intersection with the frontier of $\Sigma$. Fix a $\beta \in \mathrm{Mod}(\Sigma)$ such that $\beta(\psi(K_i))= D_{j(i)}$ for every $1 \leq i \leq s$. The existence of such a $\beta$ follows from the fact that the discs $D_{j(i)}$ as well as the images $\psi(K_i)$ are disjoint. To summarize, we have: \begin{itemize} \item The asymptotically rigid homeomorphism $\psi$ maps $\Xi \cup K_1 \cup \cdots \cup K_s$ to $\Sigma$, each of the $K_i$ to $\psi(K_i)$, and $\psi$ is rigid outside $\Xi \cup K_1 \cup \cdots \cup K_s$. \item The asymptotically rigid homeomorphism $\beta$ sends $\Sigma$ to $\Sigma$, each of the images $\psi(K_i)$ to the disc $D_{j(i)}$, and it is rigid outside $\Sigma$. \item The asymptotically rigid homeomorphism $\varphi_I^{-1}$ maps $\Sigma$ to $\Sigma_I$, each of the discs $D_{j(i)}$ to the $2n$-gon $H^I_{j(i)}$, and it is rigid outside $\Sigma$.
\end{itemize} This implies that $\varphi_I^{-1} \beta \psi$ maps $\Xi \cup K_1 \cup \cdots \cup K_s$ to $\Sigma_I$, each of the $K_i$ to $H^I_{j(i)}$, and it is rigid outside $\Xi \cup K_1 \cup \cdots \cup K_s$. Consequently, $$\beta \cdot \left[ \Xi \cup K_1 \cup \cdots \cup K_{i-1} \cup K_{i+1} \cup \cdots \cup K_s, \psi \right] = \left[ \Sigma_I \backslash H^I_{j(i)}, \varphi_I \right]$$ for every $1 \leq i \leq s$. In other words, $\beta$ sends $S$ to $S_I$. This concludes the proof of our claim. \medskip \noindent Thus, we have proved that $\mathscr{S}$ is a strict fundamental domain for the action of $\mathrm{Mod}(\Sigma)$ on $\mathscr{L}$. Next, notice that: \begin{claim}\label{claim:StabModSimplex} Fix a set $I \subset \mathscr{A}$ of pairwise disjoint collections. The stabiliser of $S_I$ in $\mathrm{Mod}(\Sigma)$ coincides with the subgroup $\mathrm{Mod}\left(\Sigma \backslash \bigcup\limits_{i \in I} D_i \right)$. \end{claim} \noindent Notice that, as a consequence of Claim \ref{claim:NoTwoSameOrbit}, the stabiliser of $S_I$ coincides with its pointwise stabiliser. If $g \in \mathrm{Mod}(\Sigma)$ stabilises $S_I$, then, for every $i \in I$, we have $$\left[ \Sigma_I \backslash H^I_i, g \varphi_I \right] = g \cdot \left[ \Sigma_I \backslash H^I_i, \varphi_I \right]= \left[ \Sigma_I \backslash H^I_i, \varphi_I \right],$$ so $\varphi_I^{-1}g \varphi_I$ is isotopic to a homeomorphism that stabilises $\Sigma_I \backslash H^I_i$ and that is rigid outside of it. But we know that $\varphi_I^{-1}g \varphi_I$ first sends, through $\varphi_I$, the subsurface $\Sigma_I$ to $\Sigma$ and $H^I_i$ to $D_i$, being rigid outside $\Sigma_I$; next it applies $g$, which preserves $\Sigma$ and is rigid outside $\Sigma$; finally, through $\varphi_I^{-1}$, it sends $\Sigma$ back to $\Sigma_I$ and $D_i$ to $H^I_i$, being rigid outside $\Sigma$. Therefore, $g$ has to preserve (up to isotopy) the topological disc $D_i$. This concludes the proof of our claim. \begin{figure} \begin{center} \includegraphics[scale=0.3]{ArcLink} \caption{Associating a marked point $m_i$ and an arc $\alpha_i$ to a disc $D_i$.} \label{ArcLink} \end{center} \end{figure} \medskip \noindent For every $i \in \mathscr{A}$, fix a marked point $m_i \in \partial \Sigma$ that lies in the middle frontier-arc of $i$ if $n$ is odd or that lies between the two middle frontier-arcs of $i$ if $n$ is even. We identify the set of marked points $\{m_i \mid i \in \mathscr{A}\}$ with $\mathbb{Z}_c$ (where $c:=m+(k-1)(n-1)$ denotes the cardinality of $\mathscr{A}$) in such a way that the cyclic order induced by $\partial \Sigma$ coincides with the cyclic order induced by $\mathbb{Z}_c$. For every $i \in \mathscr{A}$, we also fix an arc $\alpha_i$ from the marked point $m_i$ to the unique puncture of the disc $D_i$. See Figure \ref{ArcLink}. Notice that $I \subset \mathscr{A}$ is a set of pairwise disjoint collections if and only if $\{D_i, i \in I\}$ is a collection of pairwise disjoint discs if and only if the arcs in $\{\alpha_i, i \in I\}$ are pairwise $(n-1)$-separated. By identifying $\Sigma$ with the disc $\mathbb{D}$ from the definition of $\mathfrak{C}(k,c,n-1)$, we deduce from the combination of Claims~\ref{claim:NoTwoSameOrbit},~\ref{claim:HaveAtranslate},~\ref{claim:StabModSimplex} with Lemma~\ref{lem:LinkTopo} that the map $$g \cdot \left( [\Sigma,\mathrm{id}], \left[ \Sigma_{\{i\}} \backslash H_i^{\{i\}}, \varphi_{\{i\}} \right] \right) \mapsto g \cdot \alpha_i$$ induces an isomorphism $\mathscr{L} \to \mathfrak{C}(k,c,n-1)$.
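\medskip \noindent For instance, in the case $n=1$, which is the one relevant to the braided Houghton groups, the frontier of an admissible subsurface of height $k$ always contains $m$ arcs and being $(n-1)$-separated simply amounts to being disjoint, so the descending link of a vertex of height $k \geq m+1$ in $\mathscr{SC}(A_{1,m})$ is the complex $\mathfrak{C}(k,m,0)$ of arcs joining the $m$ marked points to the $k$ punctures of the disc.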
\end{proof} \begin{remark} Notice that, as a consequence of \cite[Proposition II.12.20(1)]{BH} and of the proof of Proposition \ref{prop:deslink}, the descending links in $\mathscr{SC}(A_{n,m})$ can also be described as developments of simple complexes of braid groups. Interestingly, the underlying complexes turn out to coincide with the descending links in a CAT(0) cube complex on which the unbraided group $\amod_f(A_{n,m})$ acts. More precisely, if $n=1$, then the underlying complexes are $(m-1)$-simplices, which are isomorphic to the descending links in $\mathbb{R}^m$ on which $\amod_f(A_{n,m}) \simeq \mathbb{Z}^m \rtimes \mathbb{Z}_m$ acts; and if $n \geq 2$, then $\amod_f(A_{n,m}) \simeq T_{n,m}$ and the underlying complexes coincide with the descending links in the CAT(0) cube complex we can construct following \cite{MR2146639} by thinking of the Higman-Thompson group $T_{n,m}$ as an \emph{annular diagram group}. \end{remark} \subsection{Braided Higman-Thompson groups}\label{section:HigmanThompson} \noindent This section is dedicated to the proof of one of the two main results of this article, namely: \ThompsonThm \noindent The strategy is to apply \emph{Morse theory} to the action of $\mathrm{br}T_{n,m}$ on the spine $\mathscr{SC}(A_{n,m})$. The following criterion is a standard combination of Bestvina-Brady Morse theory \cite{Morse} and Brown's Criterion \cite{Brown}. \begin{prop}\label{prop:Morse} Let $G$ be a group acting cellularly on a contractible affine cell complex $X$ and let $f : X \to \mathbb{R}$ be a $G$-invariant Morse function. Assume that each sublevel set is $G$-cocompact, that the cell-stabilisers are of type $F_n$, and that, for every $k \geq 0$, there exists some $m \geq 0$ such that the descending link of every vertex $x \in X$ satisfying $f(x) \geq m$ is $k$-connected. Then $G$ is of type $F_n$. \end{prop} \noindent Consequently, in order to deduce Theorem \ref{PtolemyConnected} from Proposition \ref{prop:Morse}, we need to show that the complexes introduced in the previous section are highly connected. More precisely, our goal is to prove the following statement: \begin{prop}\label{prop:ArcComplex} Let $p\geq 2$, $q \geq 1$ and $r \geq 0$ be three integers. The complex $\mathfrak{C}(p,q,r)$ is $\left( \left\lfloor \frac{1}{3} \left( p + \left\lfloor \frac{q}{r+1} \right\rfloor \right) \right\rfloor -2 \right)$-connected. \end{prop} \noindent In fact, we are going to prove the proposition in a much more general framework. Indeed, in order to argue by induction, we need to introduce a more general family of complexes so that links of simplices still belong to this larger family. For the reader's convenience, let us fix the definitions of \emph{links} and \emph{stars} in simplicial complexes as used in this section and the next one. \begin{definition} Let $X$ be a simplicial complex and $\Delta \subset X$ a simplex. The \emph{link} of $\Delta$ is the subcomplex of $X$ that is the union of all the simplices that are disjoint from $\Delta$ but that span simplices with $\Delta$. The \emph{star} of $\Delta$, denoted by $\mathrm{star}(\Delta)$, is the union of all the simplices having $\Delta$ as a face. \end{definition} \noindent The general framework in which we are going to prove Proposition \ref{prop:ArcComplex} is the following. Let $S$ be a punctured surface with boundary. Fix a set of punctures $P$, a set of marked points $M \subseteq \partial S$ and a symmetric relation $\sim$ on $M$.
Here, we are interested in the simplicial complex $\mathfrak{R}=\mathfrak{R}(S,P,M,\sim)$ defined as follows: the vertices of $\mathfrak{R}$ are the isotopy classes of arcs connecting a point in $M$ to a point in $P$, and its simplices are collections of arcs that are pairwise disjoint and that start from marked points that are pairwise $\sim$-related. \medskip \noindent Notice that, if $S$ is a disc with $p$ punctures, if $P$ is the set of all the punctures of $S$, if $M$ has cardinality $q$ and if $\sim$ is the $r$-separation, then $\mathfrak{R}$ coincides with the complex $\mathfrak{C}(p,q,r)$ from the previous section. Our main result about the connectedness properties of $\mathfrak{R}$ is the following: \begin{prop}\label{prop:SimConnected} Assume that $\#M \geq 1$ and $\#P \geq 2$. The complex $\mathfrak{R}(S,P,M,\sim)$ is $\left( \left\lfloor \frac{\#P + \min(\sim)}{3} \right\rfloor -2 \right)$-connected, where $\min(\sim)$ denotes the minimum size of a $\subseteq$-maximal collection of pairwise $\sim$-related marked points in $M$. \end{prop} \noindent We record the following statement for future use during the proof of Proposition \ref{prop:SimConnected}. \begin{lemma}\label{LEMMA39} Let $Y$ be a compact $m$-dimensional combinatorial manifold. Let $X$ be a simplicial complex and assume that the link of every $k$-simplex in $X$ is $(m-k-2)$-connected. Let $\psi : Y \to X$ be a simplicial map whose restriction to $\partial Y$ is injective on simplices. Then, after possibly subdividing the simplicial structure of $Y$, $\psi$ is homotopic relative to $\partial Y$ to a map that is injective on simplices. \end{lemma} \noindent This lemma appears in \cite[Lemma 3.9]{MR3545879} with the constant $m-2k-2$ instead of $m-k-2$. However, as it was pointed out to us by K.-U. Bux, there is a mistake in the proof of \cite[Lemma 3.9]{MR3545879}. (More precisely, at the very end of the first paragraph of the proof, the induction hypothesis does not apply to $\varphi : B \to \mathrm{link}(\psi(\sigma))$ because the assumptions only show that the link of a $d$-simplex in $\mathrm{link}(\psi(\sigma))$ is $(m-k-2d-3)$-connected instead of $(m-k-2d-2)$-connected.) Replacing the constant $m-2k-2$ with $m-k-2$ solves the problem, and Lemma \ref{LEMMA39} can be proved by reproducing the proof of \cite[Lemma 3.9]{MR3545879} word for word. \begin{proof}[Proof of Proposition \ref{prop:SimConnected}.] Our argument follows closely the proof of \cite[Theorem 3.10]{MR3545879}. We argue by induction over $\#M+\#P$. If $\#M+\#P=3$, then the statement is clear. From now on, we assume that $\#M+\#P > 3$. Fix a puncture $p \in P$ and a marked point $m \in M$, and let $\mathfrak{R}_0$ denote the subcomplex generated by the vertices corresponding to the arcs connecting a point in $M \backslash \{m\}$ to a point in $P \backslash \{p\}$. The first step of our argument is to show that we can work with $\mathfrak{R}_0$ instead of $\mathfrak{R}$. \begin{claim}\label{claim:LinkInRzero1} For every $k \geq 0$, the link of a $k$-simplex in $\mathfrak{R}_0$ is $\left( \left\lfloor \frac{\#P+\min(\sim)}{3} \right\rfloor - k-3 \right)$-connected. \end{claim} \noindent Let $x_0,\ldots, x_k$ denote the vertices of a $k$-simplex $\Delta$ in $\mathfrak{R}_0$. For every $0 \leq i \leq k$, $x_i$ is represented by an arc $\alpha_i$ connecting a point $n_i \in M \backslash \{m\}$ to a point $q_i \in P \backslash \{p\}$. Notice that the marked points $n_0, \ldots, n_k$ and the punctures $q_0, \ldots,q_k$ are pairwise distinct. 
By definition, the simplices in $\mathrm{link}(\Delta)$ correspond to the simplices in $\mathfrak{R}_0$ whose vertices are represented by arcs that are pairwise disjoint up to isotopy, that are disjoint from $\alpha_0,\ldots, \alpha_k$ up to isotopy, that start from pairwise $\sim$-related marked points, and that start from marked points $\sim$-related to $n_0, \ldots, n_k$. Consequently, the link of $\Delta$ is isomorphic to $$\mathfrak{R}(S\cup \{q_0, \ldots, q_k\}, P\backslash \{p, q_0, \ldots, q_k\}, M', \approx)$$ where $M'$ denotes the set of the elements in $M\backslash \{m,n_0, \ldots, n_k\}$ that are $\sim$-related to $n_0, \ldots, n_k$ and where $\approx$ denotes the restriction of $\sim$ to $M'$. By our induction hypothesis, we know that our link is $\left( \left\lfloor \frac{\#P-k-2+ \min(\approx)}{3} \right\rfloor -2 \right)$-connected. Now, fix a $\subseteq$-maximal collection $\{y_1, \ldots, y_r\}$ in $M'$ of pairwise $\approx$-related points with $r=\min(\approx)$. Then either $\{y_1, \ldots, y_r, n_0, \ldots, n_k\}$ or $\{y_1, \ldots, y_r,m,n_0, \ldots, n_k\}$ is a $\subseteq$-maximal collection in $M$ of pairwise $\sim$-related points. Consequently, we have $\min(\sim) \leq r+k+2 = \min(\approx)+k+2$, which leads to the desired conclusion. \medskip \noindent In the sequel, the previous argument will be used several times in different contexts. \begin{claim}\label{claim:Rzero1} The pair $(\mathfrak{R},\mathfrak{R}_0)$ is $\left( \left\lfloor \frac{\#P + \min(\sim)}{3} \right\rfloor -2 \right)$-connected, i.e. the inclusion $\mathfrak{R}_0 \hookrightarrow \mathfrak{R}$ induces an isomorphism on $\pi_i$ for $i< \left\lfloor \frac{\#P + \min(\sim)}{3} \right\rfloor -2$ and an epimorphism on $\pi_{\left\lfloor \frac{\#P + \min(\sim)}{3} \right\rfloor -2}$. \end{claim} \noindent Let $\mathfrak{R}_1$ denote the subcomplex generated by $\mathfrak{R}_0$ and the vertices, said to be of type $1$, corresponding to the arcs connecting $m$ to $p$. Because no two vertices of type $1$ are adjacent, $\mathfrak{R}_1$ is obtained from $\mathfrak{R}_0$ by gluing cones over the links of the vertices of type $1$. Arguing as in Claim \ref{claim:LinkInRzero1}, we show that such links are isomorphic to $\mathfrak{R}(S\cup \{p\},P\backslash \{p\}, M', \approx)$, where $M'$ denotes the set of the elements in $M \backslash \{m\}$ that are $\sim$-related to $m$ and where $\approx$ denotes the restriction of $\sim$ to $M'$. We know by induction that such a link is $\left( \left\lfloor \frac{\#P-1+\min(\approx)}{3} \right\rfloor -2 \right)$-connected and we show that the inequality $\min(\approx) \geq \min(\sim)-1$ holds, which allows us to deduce that the link under consideration is $\left( \left\lfloor \frac{\#P + \min(\sim)}{3} \right\rfloor -3 \right)$-connected. \medskip \noindent Next, let $\mathfrak{R}_2$ denote the subcomplex generated by $\mathfrak{R}_1$ and the vertices, said to be of type $2$, corresponding to the arcs connecting $m$ to a point in $P \backslash \{p\}$. Because no two vertices of type $2$ are adjacent, $\mathfrak{R}_2$ is obtained from $\mathfrak{R}_1$ by gluing cones over the links of the vertices of type $2$.
Arguing as in Claim \ref{claim:LinkInRzero1}, we show that such links are isomorphic to $\mathfrak{R}(S \cup \{q\}, P\backslash \{p,q\}, M', \approx)$, where $q$ is a puncture distinct from $p$, where $M'$ is the set of the elements in $M\backslash \{m\}$ that are $\sim$-related to $m$ and where $\approx$ denotes the restriction of $\sim$ to $M'$, and we show that they are $\left( \left\lfloor \frac{\#P + \min(\sim)}{3} \right\rfloor -3 \right)$-connected. \medskip \noindent Finally, let $\mathfrak{R}_3$ denote the subcomplex generated by $\mathfrak{R}_2$ and the vertices, said to be of type $3$, corresponding to the arcs connecting a point in $M \backslash \{m\}$ to $p$. Because no two vertices of type $3$ are adjacent, $\mathfrak{R}_3$ is obtained from $\mathfrak{R}_2$ by gluing cones over the links of the vertices of type $3$. Arguing as in Claim \ref{claim:LinkInRzero1}, we show that such links are isomorphic to $\mathfrak{R}(S\cup \{p\}, P\backslash \{p\}, M', \approx)$, where $n$ is a marked point distinct from $m$, where $M'$ is the set of the elements in $M\backslash \{n\}$ that are $\sim$-related to $n$ and where $\approx$ denotes the restriction of $\sim$ to $M'$, and that they are $\left( \left\lfloor \frac{\#P + \min(\sim)}{3} \right\rfloor -3 \right)$-connected. \medskip \noindent Notice that a vertex in $\mathfrak{R}$ either belongs to $\mathfrak{R}_0$ or is of type $1$, $2$ or $3$, i.e. $\mathfrak{R}_3$ coincides with the entire complex $\mathfrak{R}$. Consequently, it follows from the previous paragraphs that $\mathfrak{R}$ can be obtained from $\mathfrak{R}_0$ by gluing cones over $\left( \left\lfloor \frac{\#P + \min(\sim)}{3} \right\rfloor -3 \right)$-connected subcomplexes, concluding the proof of Claim \ref{claim:Rzero1}. \medskip \noindent As a consequence of Claim \ref{claim:Rzero1}, it suffices to show that a map $\psi : \mathbb{S}^r \to \mathfrak{R}_0$ from a combinatorial sphere of dimension $r \leq \left\lfloor \frac{\#P + \min(\sim)}{3} \right\rfloor -2$ is homotopically trivial in $\mathfrak{R}$ in order to deduce that $\mathfrak{R}$ is $\left( \left\lfloor \frac{\#P + \min(\sim)}{3} \right\rfloor -2 \right)$-connected. By simplicial approximation, we may suppose without loss of generality that $\psi$ is simplicial. Also, as a consequence of Lemma \ref{LEMMA39}, which applies according to Claim \ref{claim:LinkInRzero1}, we may suppose without loss of generality that $\psi$ is injective on each simplex. \medskip \noindent Fix an arc $\gamma$ from $m$ to $p$. We want to prove that $\psi$ can be homotoped so that its image lies in the star of $\gamma$. Since the latter is contractible, this will show that $\psi$ is homotopically trivial, as desired. \medskip \noindent The arcs representing the vertices in the image of $\psi$ have their endpoints distinct from $p$ and $m$, but they may intersect $\gamma$. If there is no such intersection, then the vertices of the image of $\psi$ already lie in the star of $\gamma$. Consequently, the image of $\psi$ lies in the subcomplex generated by the star of $\gamma$, which coincides with the star of $\gamma$ itself because the link of $\gamma$ is \emph{flag} (i.e. every collection of pairwise adjacent vertices spans a simplex), so there is nothing to prove in this case. Otherwise, let $x \in \mathbb{S}^r$ be the vertex whose image is represented by the arc $\alpha$ that intersects $\gamma$ the closest to $p$.
Fix a small disc $D \subseteq S$ containing $p$ such that $D \cap \alpha$ is a subarc contained in $\partial D$ and such that $D$ is disjoint from all the arcs representing the images under $\psi$ of the vertices of $\mathbb{S}^r$ distinct from $x$. Now let $\alpha'$ denote the arc obtained from $\alpha$ by replacing the subarc $\alpha \cap \partial D$ with $\partial D \backslash \alpha$. See Figure \ref{pushing}. Because $\psi$ is injective on simplices, the link of $x$ is sent into the link of $\psi(x)$ (which is represented by $\alpha$); and, by construction, this image also lies in the link of the vertex represented by $\alpha'$. Therefore, we can define a new map $\psi' : \mathbb{S}^r \to \mathfrak{R}_0$ by sending $x$ to the vertex represented by $\alpha'$ and by sending each vertex $y$ distinct from $x$ to $\psi(y)$. \begin{figure} \begin{center} \includegraphics[scale=0.32]{pushing} \caption{Pushing $\alpha$ to $\alpha'$.} \label{pushing} \end{center} \end{figure} \medskip \noindent We claim that $\psi$ and $\psi'$ are homotopic in $\mathfrak{R}$. Arguing as in Claim \ref{claim:LinkInRzero1}, we show that the intersection $L$ of the links in $\mathfrak{R}$ of the two vertices represented by $\alpha$ and $\alpha'$ is isomorphic to $\mathfrak{R}(S \cup \{p,q\}, P\backslash \{p,q\}, M', \approx)$, where $q \in P$ and $n \in M$ are the endpoints of $\alpha$, where $M'$ denotes the set of the elements in $M \backslash \{n\}$ that are $\sim$-related to $n$ and where $\approx$ denotes the restriction of $\sim$ to $M'$; and we show that $L$ is $\left( \left\lfloor \frac{\#P + \min(\sim)}{3} \right\rfloor -3 \right)$-connected. As a consequence, the common restriction $\mathbb{S}^{r-1} \to L$ of $\psi$ and $\psi'$ to $\mathrm{link}(x)$ is homotopically trivial, i.e. there exists a map $\varphi : \mathrm{star}(x) \to L$ such that $\varphi_{|\mathrm{link}(x)}$ coincides with the previous restriction. Because the star of $\psi(x)$ is contractible and the image of $\varphi$ lies in the link of $\psi(x)$, we can homotope $\psi$ without modifying it outside the star of $x$ so that $\psi_{|\mathrm{star}(x)}= \varphi$. The same process applied to $\psi'$ leads to the same map, proving that $\psi$ and $\psi'$ are homotopic, as claimed. \medskip \noindent Notice that the total number of intersections between $\gamma$ and the arcs representing the images under $\psi'$ of the vertices in $\mathbb{S}^r$ is smaller than the total number of intersections between $\gamma$ and the arcs representing the images under $\psi$ of the vertices in $\mathbb{S}^r$. By iterating the argument, we construct a map $\mathbb{S}^r \to \mathfrak{R}_0$ that is homotopic to $\psi$ and whose image lies in the star of $\gamma$, as desired. \medskip \noindent Thus, we have proved that $\mathfrak{R}(S,P,M,\sim)$ is $\left( \left\lfloor \frac{\#P + \min(\sim)}{3} \right\rfloor -2 \right)$-connected, as desired. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:ArcComplex}.] Let $S$ be a disc with a set $P$ of $p$ punctures and a set $M$ of $q$ marked points on its boundary. Let $\sim$ denote the relation of being $r$-separated in $M$. Then $\mathfrak{R}(S,P,M,\sim)$ coincides with $\mathfrak{C}(p,q,r)$. By noticing that $\min(\sim)= \left\lfloor \frac{q}{r+1} \right\rfloor$, the desired conclusion follows from Proposition \ref{prop:SimConnected}. \end{proof} \begin{proof}[Proof of Theorem \ref{PtolemyConnected}.]
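Before applying the Morse-theoretic criterion, let us record the following consequence of Proposition \ref{prop:deslink} combined with Proposition \ref{prop:ArcComplex}: for every $k \geq m+1$, the descending link of a vertex of height $k$ in $\mathscr{SC}(A_{n,m})$ is isomorphic to $\mathfrak{C}(k,m+(k-1)(n-1),n-1)$, and is therefore $$\left( \left\lfloor \frac{1}{3} \left( k + \left\lfloor \frac{m+(k-1)(n-1)}{n} \right\rfloor \right) \right\rfloor -2 \right)\text{-connected},$$ a quantity that tends to infinity as $k$ tends to infinity.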
We want to apply Proposition \ref{prop:Morse} to the spine $\mathscr{SC}(A_{n,m})$ endowed with its height function. Notice that $\mathscr{SC}(A_{n,m})$ is contractible according to Proposition \ref{prop:HomotopyEqui} and Theorem \ref{thm:contractible}; that cell-stabilisers are finite extensions of braid groups according to Lemma \ref{lem:VertexStab}, so they are of type $F_\infty$; that, as a consequence of Proposition~\ref{prop:deslink} and Proposition \ref{prop:ArcComplex}, the descending link of a vertex is $k$-connected for any given $k$ as soon as its height is sufficiently large; and finally that Claim \ref{claim:CocompactLevel} below shows that sublevel sets are $\mathrm{br}T_{n,m}$-cocompact. Therefore, Proposition \ref{prop:Morse} proves that $\mathrm{br}T_{n,m}$ is of type $F_\infty$, as desired. \begin{claim}\label{claim:CocompactLevel} Fix two integers $n,m \geq 1$. For every $i \geq 1$, $\amod(A_{n,m})$ acts cocompactly on the subcomplex $X_i$ of $\mathscr{SC}(A_{n,m})$ generated by the vertices of height $\leq i$. \end{claim} \noindent Fix an $i \geq 1$ and let $C$ be a cube in $X_i$. So there exist an admissible subsurface $\Sigma$ containing the central polygon, a homeomorphism $\varphi$ and polygons $H_1, \ldots, H_k$ such that the vertices of $C$ are $$\left\{ \left[ \Sigma \cup \bigcup\limits_{j \in I} H_j, \varphi \right] \mid I \subseteq \{1, \ldots, k\} \right\}.$$ Consequently, the vertices of the $\amod(A_{n,m})$-translate $\varphi^{-1} \cdot C$ are $$\left\{ \left[ \Sigma \cup \bigcup\limits_{j \in I} H_j, \id \right] \mid I \subseteq \{1, \ldots, k\} \right\}.$$ But $\Sigma$ must be the union of at most $i$ polygons and it contains the central polygon, so there are only finitely many possibilities for $\Sigma$ and $H_1, \ldots, H_k$. Thus, $X_i$ contains only finitely many $\amod(A_{n,m})$-translates of cubes, concluding the proof of our claim. \end{proof} \begin{remark} We do not expect the constant given by Proposition \ref{prop:ArcComplex} to be optimal. In fact, we think that $\mathfrak{R}(S,P,M,\sim)$ is homotopy equivalent to a bouquet of infinitely many $\left( \min \left( \#P, \min(\sim) \right)-1 \right)$-spheres. Proposition \ref{prop:BouquetSpheres} below proves this assertion in a particular case. However, the weaker information provided by Proposition \ref{prop:ArcComplex} is sufficient for our purpose, so we do not pursue this direction further. \end{remark} \subsection{Braided Houghton groups}\label{section:Houghton} \noindent This section is dedicated to the second main result of this article, namely: \HoughtonThm \noindent Similarly to the unbraided Houghton groups, the strategy is to apply the following criterion: \begin{prop}\label{prop:BrownHoughton}\emph{\cite[Corollary 3.3]{Brown}} Let $G$ be a group acting on a contractible simplicial complex $X$ with cell-stabilisers that are finitely presented and of type $FP_\infty$. Fix a filtration $X_1 \subset X_2 \subset \cdots$ such that $G$ acts cocompactly on each $X_i$. If, up to homotopy, $X_{i+1}$ is obtained from $X_i$ by the adjunction of $n$-cells for every $i \geq 1$, then $G$ is of type $FP_{n-1}$ but not $FP_n$. Moreover, if $n \geq 3$, then $G$ is finitely presented.
\end{prop} \noindent More precisely, we are going to show that the descending links in the cube complex $\mathscr{SC}(A_{1,n})$, on which $\mathrm{br}H_n$ acts, are homotopy equivalent to bouquets of $(n-1)$-spheres, which amounts, as a consequence of Proposition \ref{prop:deslink} and by theorems of Hurewicz and Whitehead, to proving that the complex $\mathfrak{C}(k,n,0)$ is $(n-2)$-connected (when $k$ is large enough). We already know from Proposition \ref{prop:ArcComplex} that this complex is $\left( \left\lfloor \frac{k+n}{3} \right\rfloor -2 \right)$-connected, but this is not sufficient. However, assuming that $r=0$, we will be able to reproduce the proof of Proposition \ref{prop:ArcComplex} almost word for word but with optimal constants. \medskip \noindent Let $S$ be a punctured surface with boundary. Fix a set of punctures $P$ and a set of marked points $M \subset \partial S$. Here, we are interested in the simplicial complex $\mathfrak{R}(S,P,M)$ defined as follows: the vertices of $\mathfrak{R}(S,P,M)$ are the isotopy classes of arcs connecting a point in $M$ to a point in $P$, and its simplices are collections of arcs that are pairwise disjoint up to isotopy. Notice that, if $S$ is a disc with $p$ punctures, if $P$ is the set of all the punctures of $S$, and if $M$ has cardinality $q$, then $\mathfrak{R}(S,P,M)$ coincides with the complex $\mathfrak{C}(p,q,0)$. \begin{prop}\label{prop:BouquetSpheres} If $\#M \geq 1$ and $\#P \geq 2 \cdot \#M$, then $\mathfrak{R}(S,P,M)$ is homotopy equivalent to a bouquet of infinitely many $(\#M-1)$-spheres. \end{prop} \begin{proof} First of all, we claim that $\mathfrak{R}(S,P,M)$ is $(\#M-2)$-connected. Our argument follows closely the proof of \cite[Theorem 3.10]{MR3545879}. We argue by induction over $\#M$. If $\#M=1$, the statement is clear and there is nothing to prove. So assume that $\#M \geq 2$. Fix a puncture $p \in P$ and a marked point $m \in M$, and let $\mathfrak{R}_0$ denote the subcomplex generated by the vertices corresponding to the arcs connecting a point in $M \backslash \{m\}$ to a point in $P \backslash \{p\}$. \begin{claim}\label{claim:LinkInRzero} For every $k \geq 0$, the link of a $k$-simplex in $\mathfrak{R}_0$ is $(\#M-k-4)$-connected. \end{claim} \noindent Let $x_0,\ldots, x_k$ denote the vertices of a $k$-simplex $\Delta$ in $\mathfrak{R}_0$. For every $0 \leq i \leq k$, $x_i$ is represented by an arc $\alpha_i$ connecting a point $n_i \in M \backslash \{m\}$ to a point $q_i \in P \backslash \{p\}$. Notice that the marked points $n_0, \ldots, n_k$ and the punctures $q_0, \ldots,q_k$ are pairwise distinct. By definition, the simplices in $\mathrm{link}(\Delta)$ correspond to the simplices in $\mathfrak{R}_0$ whose vertices are represented by arcs that are pairwise disjoint up to isotopy and that are disjoint from $\alpha_0,\ldots, \alpha_k$ up to isotopy. Consequently, the link of $\Delta$ is isomorphic to $$\mathfrak{R}(S\cup \{q_0, \ldots, q_k\}, P\backslash \{p, q_0, \ldots, q_k\}, M \backslash \{m,n_0, \ldots, n_k\}). $$ By induction, this complex is $(\#M-k-4)$-connected, as desired. \begin{claim}\label{claim:Rzero} The pair $(\mathfrak{R}(S,P,M),\mathfrak{R}_0)$ is $(\#M-2)$-connected, i.e. the inclusion $\mathfrak{R}_0 \hookrightarrow \mathfrak{R}(S,P,M)$ induces an isomorphism on $\pi_i$ for $i< \#M-2$ and an epimorphism on $\pi_{\#M-2}$.
\end{claim} \noindent Let $\mathfrak{R}_1$ denote the subcomplex generated by $\mathfrak{R}_0$ and the vertices, said to be of type $1$, corresponding to the arcs connecting $m$ to $p$. Because no two vertices of type $1$ are adjacent, $\mathfrak{R}_1$ is obtained from $\mathfrak{R}_0$ by gluing cones over the links of the vertices of type $1$. Notice that such links are isomorphic to $\mathfrak{R}(S\cup \{p\},P\backslash \{p\}, M\backslash \{m\})$, and so are $(\#M-3)$-connected by induction. \medskip \noindent Next, let $\mathfrak{R}_2$ denote the subcomplex generated by $\mathfrak{R}_1$ and the vertices, said to be of type $2$, corresponding to the arcs connecting $m$ to a point in $P \backslash \{p\}$. Because no two vertices of type $2$ are adjacent, $\mathfrak{R}_2$ is obtained from $\mathfrak{R}_1$ by gluing cones over the links of the vertices of type $2$. Notice that such links are isomorphic to $\mathfrak{R}(S \cup \{q\}, P\backslash \{p,q\}, M \backslash \{m\})$ where $q$ is a puncture distinct from $p$, and so they are $(\#M-3)$-connected by induction. \medskip \noindent Finally, let $\mathfrak{R}_3$ denote the subcomplex generated by $\mathfrak{R}_2$ and the vertices, said to be of type $3$, corresponding to the arcs connecting a point in $M \backslash \{m\}$ to $p$. Because no two vertices of type $3$ are adjacent, $\mathfrak{R}_3$ is obtained from $\mathfrak{R}_2$ by gluing cones over the links of the vertices of type $3$. Notice that such links are isomorphic to $\mathfrak{R}(S\cup \{p\}, P\backslash \{p\}, M\backslash \{n\})$ where $n$ is a marked point distinct from $m$, and so they are $(\#M-3)$-connected by induction. \medskip \noindent Notice that a vertex in $\mathfrak{R}(S,P,M)$ either belongs to $\mathfrak{R}_0$ or is of type $1$, $2$ or $3$, i.e. $\mathfrak{R}_3$ coincides with the entire complex $\mathfrak{R}(S,P,M)$. Consequently, it follows from the previous paragraphs that $\mathfrak{R}(S,P,M)$ can be obtained from $\mathfrak{R}_0$ by gluing cones over $(\#M-3)$-connected subcomplexes. This concludes the proof of Claim \ref{claim:Rzero}. \medskip \noindent As a consequence of Claim \ref{claim:Rzero}, it suffices to show that a map $\psi : \mathbb{S}^r \to \mathfrak{R}_0$ from a combinatorial sphere of dimension $r \leq \#M-2$ is homotopically trivial in $\mathfrak{R}(S,P,M)$ in order to deduce that $\mathfrak{R}(S,P,M)$ is $(\#M-2)$-connected. By simplicial approximation, we may suppose without loss of generality that $\psi$ is simplicial. Also, as a consequence of Lemma \ref{LEMMA39}, which applies according to Claim \ref{claim:LinkInRzero}, we may suppose without loss of generality that $\psi$ is injective on each simplex. \medskip \noindent Fix an arc $\gamma$ from $m$ to $p$. We want to prove that $\psi$ can be homotoped so that its image lies in the star of $\gamma$ in $\mathfrak{R}(S,P,M)$. Since the star of a vertex is contractible, this will show that $\psi$ is homotopically trivial, as desired. \medskip \noindent The arcs representing the vertices in the image of $\psi$ have their endpoints distinct from $p$ and $m$, but they may intersect $\gamma$. If there is no such intersection, then the vertices of the image of $\psi$ already lie in the star of $\gamma$. Consequently, the image of $\psi$ lies in the subcomplex generated by the star of $\gamma$, which coincides with the star of $\gamma$ itself because the link of $\gamma$ is \emph{flag} (i.e. every collection of pairwise adjacent vertices spans a simplex), so there is nothing to prove in this case.
Otherwise, let $x \in \mathbb{S}^r$ be the vertex whose image is represented by the arc $\alpha$ that intersects $\gamma$ the closest to $p$. Fix a small disc $D \subset S$ containing $p$ such that $D \cap \alpha$ is a subarc contained in $\partial D$ and such that $D$ is disjoint from all the arcs representing the images under $\psi$ of the vertices of $\mathbb{S}^r$ distinct from $x$. Now let $\alpha'$ denote the arc obtained from $\alpha$ by replacing the subarc $\alpha \cap \partial D$ with $\partial D \backslash \alpha$. See Figure~\ref{pushing}. Because $\psi$ is injective on simplices, the link of $x$ is sent into the link of $\psi(x)$ (which is represented by $\alpha$); and, by construction, this image also lies in the link of the vertex represented by $\alpha'$. Therefore, we can define a new map $\psi' : \mathbb{S}^r \to \mathfrak{R}_0$ by sending $x$ to the vertex represented by $\alpha'$ and by sending each vertex $y$ distinct from $x$ to $\psi(y)$. \medskip \noindent We claim that $\psi$ and $\psi'$ are homotopic in $\mathfrak{R}(S,P,M)$. Notice that the intersection $L$ of the links in $\mathfrak{R}(S,P,M)$ of the vertices represented by $\alpha$ and $\alpha'$ is isomorphic to $\mathfrak{R}(S \cup \{p,q\}, P\backslash \{p,q\}, M \backslash \{n\})$ where $q \in P$ and $n \in M$ are the endpoints of $\alpha$. By induction, the intersection is therefore $(\#M-3)$-connected. As a consequence, the common restriction $\mathbb{S}^{r-1} \to L$ of $\psi$ and $\psi'$ to $\mathrm{link}(x)$ is homotopically trivial, i.e. there exists a map $\varphi : \mathrm{star}(x) \to L$ such that $\varphi_{|\mathrm{link}(x)}$ coincides with the previous restriction. Because the star of $\psi(x)$ is contractible and the image of $\varphi$ lies in the link of $\psi(x)$, we can homotope $\psi$ without modifying it outside the star of $x$ so that $\psi_{|\mathrm{star}(x)}= \varphi$. The same process applied to $\psi'$ leads to the same map, proving that $\psi$ and $\psi'$ are homotopic, as claimed. \medskip \noindent Notice that the total number of intersections between $\gamma$ and the arcs representing the images under $\psi'$ of the vertices in $\mathbb{S}^r$ is smaller than the total number of intersections between $\gamma$ and the arcs representing the images under $\psi$ of the vertices in $\mathbb{S}^r$. By iterating the argument, we construct a map $\mathbb{S}^r \to \mathfrak{R}_0$ that is homotopic to $\psi$ and whose image lies in the star of $\gamma$, as desired. \medskip \noindent Thus, we have proved that $\mathfrak{R}(S,P,M)$ is $(\#M-2)$-connected. Notice that $\mathfrak{R}(S,P,M)$ has dimension $\#M-1$. But it is well-known that an $(n-1)$-connected CW complex of dimension $n$ is homotopy equivalent to a bouquet of $n$-spheres, so it follows that $\mathfrak{R}(S,P,M)$ is homotopy equivalent to a bouquet of $(\#M-1)$-spheres. \medskip \noindent It remains to show that this bouquet contains infinitely many spheres. Because the complex $\mathfrak{R}(S,P,M)$ is $(\#M-1)$-dimensional and $(\#M-2)$-connected, its $(\#M-1)$th homotopy group is isomorphic to its $(\#M-1)$th homology group, which coincides with the group of $(\#M-1)$-cycles (and is in particular free abelian). Therefore, in order to conclude, it suffices to construct infinitely many non-trivial $(\#M-1)$-cycles supported on pairwise disjoint collections of simplices in $\mathfrak{R}(S,P,M)$. \medskip \noindent Write $M=\{m_1, \ldots, m_k\}$.
Because $\#P \geq 2 \#M$, we can assign to each marked point $m_i$ two punctures $p_i,q_i$ such that $p_1,q_1, \ldots, p_k,q_k$ are pairwise distinct. For every $1 \leq i \leq k$, let $\alpha^i_1,\alpha^i_2,\ldots$ be a sequence of arcs connecting $m_i$ to $p_i$ and supported in a topological disc that contains $m_i$, $p_i$ and $q_i$ and no other marked points or punctures. We choose our arcs such that: \begin{itemize} \item for every $1 \leq i \leq k$, the arcs $\alpha^i_1, \alpha^i_2, \ldots$ are pairwise non-isotopic; \item for all distinct $1 \leq i,j \leq k$ and for all $r,s \geq 1$, the arcs $\alpha^i_r$ and $\alpha^j_s$ are disjoint. \end{itemize} For all $j \geq 1$, let $S_j$ denote the subcomplex generated by the vertices represented by the arcs $\{\alpha^i_j, \alpha^i_{j+1} \mid 1 \leq i \leq k\}$. Notice that, for every $1 \leq t \leq k-1$, the subcomplex associated to $\{\alpha^i_j,\alpha^i_{j+1} \mid 1 \leq i \leq t+1\}$ coincides with the suspension of the subcomplex associated to $\{\alpha^i_j, \alpha^i_{j+1} \mid 1 \leq i \leq t\}$, so $S_j$ is a triangulation of a $(k-1)$-sphere. Consequently, $S_j$ can be thought of as a $(k-1)$-cycle, which is non-trivial since it is a sum of pairwise distinct simplices. Moreover, for every $j \geq 1$, the subcomplexes $S_j$ and $S_{j+2}$ are disjoint, so the cycles $S_1, S_3, S_5, \ldots$ provide infinitely many non-trivial and pairwise disjoint $(k-1)$-cycles, as desired. \end{proof} \noindent As an immediate consequence of Proposition \ref{prop:BouquetSpheres}, we get: \begin{cor}\label{cor:ArcComplex} For all $q \geq 1$ and $p \geq 2q$, the complex $\mathfrak{C}(p,q,0)$ is homotopy equivalent to a bouquet of infinitely many $(q-1)$-spheres. \end{cor} \begin{proof}[Proof of Theorem \ref{thm:brHfiniteness}.] Fix an $n \geq 1$ and, for every $i \geq 1$, let $X_i$ denote the subcomplex of $\mathscr{SC}(A_{1,n})$ generated by the vertices of height $\leq 2n+i$. Recall from Proposition \ref{prop:HomotopyEqui} and Theorem \ref{thm:contractible} that $\mathscr{SC}(A_{1,n})$ is contractible. Moreover, we know from Claim \ref{claim:CocompactLevel} that each $X_i$ is $\amod(A_{1,n})$-cocompact, and a fortiori $\mathrm{br}H_n$-cocompact since $\mathrm{br}H_n$ has finite index in $\amod(A_{1,n})$. Next, notice that, for every $i \geq 1$, $X_{i+1}$ is obtained from $X_i$ by gluing cones over the descending links of the vertices of height $2n+i+1$ \cite[Lemma 2.5]{Morse}. According to Proposition \ref{prop:deslink}, these links are isomorphic to $\mathfrak{C}(2n+i+1,n,0)$, so Corollary \ref{cor:ArcComplex} applies (since $2n+i+1 \geq 2n$) and they are homotopy equivalent to bouquets of $(n-1)$-spheres. As a consequence, up to homotopy, $X_{i+1}$ is obtained from $X_i$ by adjunctions of $n$-cells. Now, Theorem \ref{thm:brHfiniteness} follows from Proposition \ref{prop:BrownHoughton}. \end{proof} \addcontentsline{toc}{section}{References} \bibliographystyle{alpha} {\footnotesize
\section{Introduction} In gravitational theories it is expected that very high energy scattering at small impact parameter is dominated by the formation and evaporation of black holes. Our basic understanding of how this might happen in a semiclassical approximation has led to Hawking's paradox, the statement that black holes might lose information, with the scattering being non-unitary \cite{Hawking}. The same understanding that has led to this paradox shows that black holes have a very large entropy and that they also have a temperature. Neither of these features can be explained by classical means. The gauge/gravity duality has given us a way, in principle, to formulate these phenomena in a unitary framework, where the black hole intermediate state should be described by some approximately thermal object with large entropy in a dual quantum field theory. The details of such an approximately thermal field theory configuration, however, have not been worked out so far. Large black holes in $AdS$ have positive specific heat \cite{HawkingPage} and can be readily identified with a thermal state in the field theory~\cite{Witten:1998qj}. On the other hand, small black holes are not stable objects (since they evaporate) and a complete description of the system would require a theory of how such small black holes form and evaporate. An initial state that produces the black hole is not thermal, but the dynamics should be such that the system thermalizes rapidly (the black hole formation and ringing suggest this kind of picture \cite{SS}). The ensuing evaporation of the black hole shows that the system should be thought of as being out of equilibrium, but very long lived. For this second stage of evaporation one would need a detailed analysis of how the degrees of freedom -- the entropy -- of the black hole escape from it, if one is to claim to have really solved the black hole information paradox. An attempt to describe the initial stages of thermalization of small black holes\footnote{These are black holes which are much smaller than the radius of the $AdS$ geometry and behave as ten-dimensional Schwarzschild black holes \cite{Horowitz:2000kx}.} was done for ${\cal N}=4 $ super Yang-Mills theory in \cite{AsplundB}, but the details of the dynamics in that context are still poorly understood. In general, problems of thermalization in full-fledged quantum field theories are hard to approach, especially at strong coupling. A promising strategy to try to address these issues is therefore to look for examples with fewer degrees of freedom. This paper is a first step in this direction, where we show that formulating the problem of black hole thermalization in a simpler setting has various advantages.\footnote{Other studies of thermalization based on toy models have been carried out in \cite{IP}, but part of the system there already begins in a thermal state and the process of black hole production is not addressed.} One such setting is provided by matrix quantum mechanical systems. The BFSS matrix model \cite{BFSS} is the prime example of a matrix quantum mechanics. It is dual to the discrete light-cone quantization of M-theory on flat space and it has been argued that it accommodates black holes, which have been analyzed in various works \cite{BFKS}. The geometry of the black hole horizon can be related to properties of the matrix quantum mechanics and to the presence of tachyons \cite{KLif}.
Some of the studies of the BFSS model have also included numerical analysis based on lattice methods in Euclidean field theory \cite{Latt}. However, a detailed analysis of the formation and thermalization properties of these matrix black holes is still missing. Moreover, a numerical instability arises in these Euclidean thermal systems, exactly because eigenvalues can leave the black hole state. As such, the Euclidean ensemble does not exist, as one would expect when studying systems that cannot be in true equilibrium. This problem can be fixed by considering instead the BMN matrix model \cite{BMN}, where the large volume instability of having a moduli space of vacua with a continuous spectrum is cured and the simulations make sense \cite{Catterall:2010gf}. The BMN matrix model is in fact a mass deformation of the BFSS model which lifts the flat directions and has no problems from this point of view. Like the BFSS model, it has an M-theory interpretation: it describes the discrete light-cone quantization of the theory on a plane wave (rather than on flat space). Another nice feature of the BMN matrix model is that it can be thought of as an $SU(2)$ invariant sector of ${\cal N}=4$ super Yang-Mills theory on $S^3 \times \mathbb{R}$ \cite{KKP}. This means that certain classes of analysis of this model might also be fruitful to better understand the physics of that four-dimensional theory. Our main interest in this paper is to explore some time dependent dynamics, in a matrix quantum mechanics setting, that might help us understand how the initial stages of black hole thermalization might proceed. For this, we need some simple configurations with given initial conditions that are similar to the problem of scattering gravitons at high energies. We can then study more precisely the dynamics of the modes that produce the thermalization. In the analysis of \cite{AsplundB}, it was argued that particle production in off-diagonal modes connecting eigenvalues was responsible for generating the entropy of the black hole configuration and for trapping the eigenvalues. If we try to mimic this intuition, we would expect it to work very similarly for the BFSS matrix model: we could think of scattering two graviton states with large matrices and hope to see the black hole form. Unfortunately, we are unable to do this because these graviton states in the BFSS matrix model are very poorly understood: they are bound states at threshold and therefore we would need a detailed understanding of their wave function, which we do not have, to do the analysis. For this reason and for the reasons mentioned above, we instead turn our attention to the BMN matrix model. Here the set of ground states is richer than in the BFSS model and instead of scattering eigenvalues off each other we can scatter more complicated ground state configurations. These configurations are given by sets of concentric fuzzy spheres. Each such fuzzy sphere is a giant graviton \cite{GG} that can grow in size due to the Myers effect \cite{Myers:1999ps}. For each fuzzy sphere, one has a decoupled $U(1)$ set of degrees of freedom that correspond to the center of mass motion of the corresponding membrane object. These degrees of freedom can be excited leaving the rest of the $SU(N)$ degrees of freedom unaffected. This means that the fuzzy spheres can be rigidly moved around without being deformed and their geometry is simple to analyze. Thus, it is a simple matter to find reasonable initial conditions for these fuzzy spheres. 
By aiming them properly, we can make them collide (namely, have two such fuzzy spheres intersect each other at some time $t$). The dynamics near the intersection is interesting. From the point of view of string theory, these are configurations with branes at angles, so there is a possibility of having tachyons form at these intersections \cite{Berkooz:1996km}. Moreover, even in the absence of tachyons, modes near the intersections are light. As we move the branes past each other, these modes have a time dependent mass and in general we expect copious particle production in these modes if the adiabatic approximation breaks down. At weak coupling this is similar to the problem of preheating in inflation \cite{Kofman:1994rk}. In our case, the matrix model black hole would result from thermalization after moduli trapping \cite{Kofman:2004yc}.\footnote{The trapping between D0-branes was analyzed in \cite{DKPS}.} A further advantage that we have in this setup over the BFSS model is that the classical solutions corresponding to branes crossing each other are periodic: the branes can cross repeatedly and the fluctuations that are produced in this way have time to grow through many repetitions until they cause a large back-reaction. The main goal of this paper is to describe in detail the mass spectrum of the bosonic modes stretching between fuzzy spheres, as we displace the spheres along a direction longitudinal to their world-volume and we make them intersect. Our calculation is a generalization of the analysis done in \cite{VRetal} to study the fluctuations around concentric fuzzy spheres.\footnote{Some studies of the thermodynamics of these configurations include the works in \cite{fuzzytherm}. Other important results about protected multiplets can be found in \cite{BMNextra}. } While fluctuations around some fuzzy sphere configurations have already been considered in the past by other groups, in this paper we focus our attention on a different and novel set of configurations. In previous studies, the displacements of the fuzzy spheres are in the transverse directions to the spheres and the distance between the spheres (or other branes \cite{Bak:2002rq}) does not change in time, or does so in such a way that the time dependence of the off-diagonal modes is perturbatively small \cite{fuzzycalc}. We will argue that in our setup there is a spectrum of tachyons at the intersection locus of the fuzzy spheres. The intersection locus being one-dimensional gives us a one-dimensional tower of such tachyons. There is such a tachyon even for two D0-branes at some finite distance. These tachyons can have a large negative mass squared when at least one of the fuzzy spheres corresponds to a large matrix, so that any small fluctuation in these modes can grow rapidly and might lead to fast thermalization. A detailed analysis of the evolution of the system following the tachyon production is beyond the scope of the present work, but we sketch nonetheless a picture of what happens in a simple case. The paper is organized as follows. In Section~\ref{sec-2} we briefly review the BMN plane wave matrix model and the classical vacua of the theory. We present a novel derivation of the spectrum of fluctuations around concentric fuzzy spheres making use of the aforementioned relation between the BMN matrix model and ${\cal N}=4$ super Yang-Mills. 
In Section~\ref{sec-3} we consider a classical solution of the BMN matrix model consisting of two fuzzy spheres which are not concentric, but displaced along a direction longitudinal to their world-volume. After giving a geometrical characterization of the fluctuations around this configuration, we proceed in Section~\ref{sec-4} with a detailed computation of the spectrum of off-diagonal fluctuations, both along the transverse directions to the spheres and along the longitudinal ones. We find that certain longitudinal modes can become tachyonic. In Section~\ref{sec-5} we study the time dependence of these tachyonic modes as we allow the displaced fuzzy spheres to oscillate towards each other along the direction of the displacement. Some concluding remarks can be found in Section~\ref{sec-6}, while in the Appendix we present an alternative derivation of the spectrum of longitudinal fluctuations. \section{The BMN matrix model } \label{sec-2} The BMN matrix model \cite{BMN} is a massive deformation of the BFSS matrix model \cite{BFSS}. The latter is obtained from the dimensional reduction of ten-dimensional ${\cal N}=1$ super Yang-Mills down to $0+1$ dimensions and has an action given by \begin{equation} S_{BFSS}= \frac 1{2 g^2}\int dt \, {\rm tr} \left( (D_t X^I)^2+ \frac 1{2} [X^I,X^J]^2\right) +\hbox{fermions}\,, \label{S_BFSS} \end{equation} where $X^I$ ($I=1,\ldots,9$) are nine hermitian matrices. The covariant time derivative is given by \begin{equation} D_t X^I = \partial_t X^I-i [A_t, X^I] \end{equation} and $g$ is a dimensionful coupling constant that can be removed by rescaling the fields and the time coordinate. It can be set to one, if desired, or factored out of the action and interpreted as determining $\hbar$. We will not work in detail with the fermions in this paper, so we shall just suppress them from now on. The Hamiltonian of this system (in the $A_t=0$ gauge) is given by \begin{equation} {\cal H} = \frac{1}{2}{\rm tr}\left(g^2 (\Pi^I)^2- \frac 1{2g^2} [X^I,X^J]^2\right)\,. \end{equation} The BMN matrix model system is a massive deformation of (\ref{S_BFSS}) that preserves all 32 supersymmetries. It also preserves a diagonal set of modes that decouple and constitute a system of free degrees of freedom. These are the `center of mass motion' degrees of freedom in the BFSS matrix model. The BMN matrix model splits the $X^I$ into two groups of variables: $X^{1,2,3}$, that we will label $X^i$, and $X^{4,\dots, 9}$, that we will label $Y^a$. The action includes additional terms given by \begin{equation} S_{BMN} = S_{BFSS} - \frac{1}{2g^2}\int dt\, {\rm tr}\left(\mu^2 (X^i)^2+\frac{\mu^2}{4}(Y^a)^2 +2\mu i \,\epsilon_{\ell jk}X^\ell X^jX^k \right) \,. \label{BMNaction} \end{equation} In the conventions above, $\mu$ is real and has been rescaled by a factor of $3$ with respect to \cite{BMN}. It has units of frequency, as do the $X^I$. The equations of motion following from this action are \begin{eqnarray} \ddot X^i &=& -\mu^2X^i -3 i\mu\,\epsilon^{ijk}X^jX^k-\left[\left[X^i,X^I\right],X^I\right]\,,\cr \ddot Y^a &=& -\frac{\mu^2}{4}Y^a -\left[\left[Y^a,X^I\right],X^I\right]\,. 
\label{EoM} \end{eqnarray} It is convenient for our study to recast the potential for the $X^i$ fields in the following form \begin{equation} V^{(X)}_{BMN}= \frac 1{2g^2}{\rm tr} \left[\left(i [X^2,X^3]+\mu X^1 \right) ^2 +\left( i[X^3,X^1]+\mu X^2 \right) ^2+\left(i [X^1,X^2]+\mu X^3 \right) ^2\right]\,.\label{eq:VXBMN} \end{equation} In this paper we will choose to rescale $\mu \to 1$ by both scaling the matrices and the time coordinate. This can always be done. The overall action then has a factor of $1/g^2$ in front. This serves as a calibration of $\hbar$: we are free to absorb $g^2$ in the definition of $\hbar$. This has no effect on the classical physics except for the global normalization of the energy units. It is only when we quantize the theory that the value of $g$ will be important and it will characterize the strength of quantum effects or fluctuations around classical solutions of the dynamics. An important point to notice is that the potential is a sum of squares of hermitian matrices. The $V_{BMN}^{(X)}$ is positive definite and the same is true for the other terms in the potential: the quadratic terms in $Y^a$ are obviously a sum of squares and the rest of the terms in (\ref{S_BFSS}) are commutator squared terms with the right sign to make them positive definite. Another very useful result to recall is that the BMN matrix model can be obtained by considering a truncation to the $SU(2)$ invariant configurations of ${\cal N}=4 $ super Yang-Mills on $S^3\times {\field R}$ \cite{KKP}. The sphere has a $SO(4)\simeq SU(2)_L\times SU(2)_R$ symmetry and the $SU(2)_L$ invariant sector of the theory gives exactly the model above. In that case, $g$ is identified with the coupling constant of ${\cal N}=4$ super Yang-Mills. The $X^i$ degrees of freedom arise from the gauge connection on $S^3$ and in the usual field theory analysis would have mass 2. The $Y^a$ degrees of freedom arise instead from the scalars $\phi^a$ of the super Yang-Mills theory and would ordinarily have mass equal to one. This corresponds to setting $\mu=2$ in the BMN model above. Equivalently, we can think of this as quantizing the ${\cal N}= 4$ theory on a sphere of radius equal to 2, rather than one. This $SU(2)_L$ invariant reduction is a convenient device to calculate sometimes and it also explains why the potential is a sum of squares. \subsection{Fuzzy sphere ground states and fluctuations} \label{sec-grouptheory} The ground states of the BMN matrix model are those that have zero energy. These are characterized by $Y^a=0$ and also by \begin{equation} [X^i, X^j]= i \epsilon_{ijk} X^k\,, \end{equation} where we have set $\mu=1$. These are obtained by requiring that each of the individual squares in the potential vanish. The most general solution to this equation is given by a possibly reducible lie algebra representation of $SU(2)$, where $X^i \simeq \bigoplus_\alpha L_{(n_\alpha)}^i$, with the sum indicating a sum over irreducible representations of $SU(2)$. Each irreducible representation (which we can label by the size of the representation $n=2j+1$, or by the maximal spin $j$) gives rise to a fuzzy sphere configuration. In the BMN model these are interpreted as giant graviton membranes of the plane wave limit M-theory dual. If we fix the total size of the matrices $N$, we can set $N=\sum_\alpha n_\alpha$. The complete set of vacua of the theory is characterized by these possible splittings. This is equal to the partitions of $N$. 
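Although not needed for the arguments that follow, the vacuum equations are easy to verify explicitly. The short Python sketch below is an illustrative aside (it assumes only \texttt{numpy}, and the representation size $n=5$ is an arbitrary choice): it builds the spin-$j$ generators, checks the fuzzy sphere condition $[X^i,X^j]=i\epsilon_{ijk}X^k$ and the vanishing of each square in (\ref{eq:VXBMN}) at $\mu=1$, and evaluates the quadratic Casimir that controls the radius of the sphere discussed later.
\begin{verbatim}
import numpy as np

def su2_generators(n):
    # spin j = (n-1)/2 irrep, normalized so that [L^a, L^b] = i eps_{abc} L^c
    j = (n - 1) / 2.0
    m = j - np.arange(n)                      # eigenvalues of L^3: j, j-1, ..., -j
    L3 = np.diag(m).astype(complex)
    Lp = np.zeros((n, n), dtype=complex)      # raising operator L^+ = L^1 + i L^2
    for k in range(1, n):
        Lp[k - 1, k] = np.sqrt((j - m[k]) * (j + m[k] + 1.0))
    Lm = Lp.conj().T
    return (Lp + Lm) / 2.0, (Lp - Lm) / 2.0j, L3

def comm(a, b):
    return a @ b - b @ a

n = 5                                         # a single fuzzy sphere of spin j = 2
L1, L2, L3 = su2_generators(n)
j = (n - 1) / 2.0

# ground state condition [X^i, X^j] = i eps_{ijk} X^k at mu = 1
print(np.allclose(comm(L1, L2), 1j * L3))                       # True

# each square in the potential vanishes, e.g. i[X^2, X^3] + X^1 = 0
print(np.allclose(1j * comm(L2, L3) + L1, np.zeros((n, n))))    # True

# quadratic Casimir: sum_i (L^i)^2 = j(j+1) * identity
print(np.allclose(L1 @ L1 + L2 @ L2 + L3 @ L3, j * (j + 1) * np.eye(n)))  # True
\end{verbatim}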
Given such a configuration, we can think of the ground state vacuum expectation values as being block diagonal. We can then ask for the spectrum of off-diagonal fluctuations connecting two of these fuzzy spheres. This has been analyzed in detail in \cite{VRetal}. Here, we will reproduce that result using a slightly different calculation. The idea is to remember that this matrix quantum mechanics is an $SU(2)_L$ invariant reduction of the $U(N)$ ${\cal N}=4$ super Yang-Mills theory on $S^3\times {\field R}$. All of the fuzzy sphere vacua are related to each other by a gauge transformation in the theory on $S^3$ \cite{LinMalda}. The gauge transformation that relates them is not $SU(2)_L$ invariant, so the modes that are considered $SU(2)_L$ invariant get shuffled. The gauge transformation uses the obvious map $S^3\to SU(2)$, since they are identical spaces. This $SU(2)$ has to be embedded in the gauge group. For a fuzzy sphere configuration $N=\sum_\alpha n_\alpha$, we embed $SU(2)$ into $U(N)$ by the action on the fundamental of $U(N)$ according to $\bigoplus_\alpha R_\alpha= \bigoplus_\alpha (n_\alpha)$, where we label the $SU(2)$ representations by their dimensions. This $SU(2)$ embedding can also be thought of as $SU(2)_L$: the translation on the sphere by $SU(2)_L$ generates the sphere itself. Also notice that the fuzzy sphere configurations are $SU(2)_L$ invariant only if an $SU(2)_L$ rotation is accompanied by a compensating gauge transformation. Now let us do some group theory. The Fock space spectrum of physical polarizations of fluctuations of the $A=0$ vacuum is given by \begin{eqnarray} {Spec}( \phi) &=& \bigoplus_j (j,j)\,,\cr {Spec} (A) &=& \bigoplus_j (j,j+1)\oplus(j+1,j)\,, \end{eqnarray} where we are using the spin notation for the representations. The first line indicates the spectrum of representations of the scalar fluctuations, while the second line represents the transverse fluctuations of the gluons. The energy for the scalars $\phi$ is $2j+1$, while for the vectors it is $2j+2$. When we consider the embedding on matrices, we have that the off-diagonal block connecting blocks $n_1$ and $n_2$ transforms as $j_1 \otimes j_2$ with respect to $SU(2)$. That is, the matrices transform as $\bigoplus_{j'=|j_1-j_2|}^{j_1+j_2} j'$ with respect to $SU(2)_L$. If we tensor these together, we find that for the scalars we need to take the tensor product \begin{equation} \bigoplus_{\tilde j} \tilde j \otimes (j,j) \simeq \bigoplus_{\tilde j} \bigoplus_{ j'= |\tilde j-j|}^{\tilde j +j} ( j',j)\,. \end{equation} We can only have $SU(2)_L$ singlets if $j'=0$, so we find that the singlet sector is given by the case where $j=\tilde j$. We find this way that \begin{equation} \bigoplus_{\tilde j} \tilde j \otimes (j,j)\big|_{singlet} \simeq \bigoplus_{\tilde j} ( 0 ,\tilde j)\,, \end{equation} with energy $2\tilde j+1$. Notice that $\tilde j$ is either only half-integer or only integer, as per the usual rules of angular momentum. Similarly, we find that for the vectors \begin{equation} \bigoplus_{\tilde j}\tilde j \otimes \left((j,j+1)\oplus(j+1,j)\right) \big|_{singlet} \simeq \bigoplus_{\tilde j} (0, \tilde j+1)\oplus (0, \tilde j-1)\,, \end{equation} with energies $2\tilde j+2$ and $2\tilde j$ respectively. For the special case $\tilde j=0$, the representation $(0,-1)$ is not counted. 
Properly normalizing to the value of $\mu$ we have used ({\it i.e.}, dividing by 2 the energies above), we get that on the fuzzy sphere the scalars $Y^a$ have a spectrum under $SU(2)_R$ rotations given by \begin{equation} Spec(Y)= \bigoplus_{\tilde j} \tilde j\,, \end{equation} with energies $\tilde j +1/2$, where $\tilde j $ is only integer or half-integer and it runs between $|j_1-j_2|$ and $j_1+j_2$ in integer steps. Similarly, for the $X^i$ fields, we get a group decomposition \begin{equation} Spec(X) = \bigoplus_{\tilde j} (\tilde j+1)\oplus (\tilde j-1)\,, \end{equation} with energies $\tilde j+1$ and $ \tilde j$ respectively. Remember that $\tilde j$ is always integer or half-integer. The same analysis can be done for fermions; we will not do this here. The representations $\tilde j$ appearing in the decomposition of the off-diagonal matrix fluctuations of $Y^a$ are called fuzzy monopole harmonics. They are fuzzy spherical harmonics if the two representations have the same dimension. For the $X^i$ variables, the $\tilde j$ representations appearing in the decomposition are called fuzzy monopole vector (tensor) harmonics and fuzzy vector (tensor) harmonics if the two representations have the same size. Notice that we only described the physical fluctuations. The typical configurations also have zero modes due to gauge transformations. These zero modes have been projected out. If we add them, we get that $X$ has an additional set of zero modes that are described by the fuzzy monopole spherical harmonics. There is one more thing that needs to be remarked: the zero mode associated to rotations in the diagonal $U(1)$ is absent as nothing is charged under it. For matrices of the same size the theory has an enhanced unbroken $SU(2)$ gauge symmetry and the modes organize themselves into triplets and singlets of $SU(2)$. This generalizes to the case where one has more coincident fuzzy spheres. \section{Kicking the spheres} \label{sec-3} We now consider the following classical solutions of the BMN matrix model:\footnote{These configurations are non-BPS.} \begin{eqnarray} \vev{X^i}= \begin{pmatrix} L_{(n_1)}^i +\mathrm{Re}\big(b^i_1\,\mathsf{1}_{(n_1)} \exp(i t)\big)& 0 &\dots\\ 0&L_{(n_2)}^i+\mathrm{Re}\big(b^i_2\,\mathsf{1}_{(n_2)} \exp(i t)\big)&\dots\\ \vdots&\vdots &\ddots \end{pmatrix}\,. \label{vev} \end{eqnarray} We have turned on modes proportional to the identity that decouple on each diagonal block and describe the center of mass motion of the fuzzy spheres. These modes leave the shape of the spheres invariant, but move their centers of mass in time. A similar kicking of the spheres can be done by turning on a displacement $b^a_\alpha$ along the $Y^a$ directions \begin{eqnarray} \vev{Y^a}= \begin{pmatrix} \mathrm{Re}\big(b^a_1\,\mathsf{1}_{(n_1)} \exp(i t/2)\big)& 0 &\dots\\ 0&\mathrm{Re}\big(b^a_2\,\mathsf{1}_{(n_2)} \exp(i t/2)\big)&\dots\\ \vdots&\vdots &\ddots \end{pmatrix}\,. \end{eqnarray} Notice that in order to obey the equations of motion (\ref{EoM}) the $Y^a$ directions must have a frequency that is half the frequency of the $X^i$ directions. For the rest of this paper we shall set $\vev{Y^a}=0$. We now restrict our analysis to the case of two diagonal blocks, with ranks $n_1$ and $n_2$. We will first study the dynamics of the off-diagonal modes connecting the two fuzzy spheres when they are large, {\it i.e.} with $n_1$, $n_2,$ and $ n_1-n_2$ being large. 
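Before turning to the off-diagonal modes, it is straightforward to confirm numerically that the kicked configuration (\ref{vev}) solves the classical equations of motion exactly, so that the identity modes indeed oscillate with unit frequency (the analogous check with $e^{it/2}$ works for the $Y^a$ kick). The sketch below is an illustrative aside only: it reuses the \texttt{su2\_generators} and \texttt{comm} helpers of the earlier sketch, and the block size and kick values are arbitrary.
\begin{verbatim}
import numpy as np
# assumes su2_generators(n) and comm(a, b) from the earlier sketch

def eom_residual(L, b, t):
    # residual of eq. (EoM) on X^i = L^i + Re(b^i e^{it}) 1, Y^a = 0, mu = 1
    n = L[0].shape[0]
    c = [(b[i] * np.exp(1j * t)).real for i in range(3)]
    X = [L[i] + c[i] * np.eye(n) for i in range(3)]
    res = 0.0
    for i in range(3):
        lhs = -c[i] * np.eye(n)                       # second time derivative of the ansatz
        cubic = comm(X[(i + 1) % 3], X[(i + 2) % 3])  # eps^{ijk} X^j X^k
        rhs = -X[i] - 3j * cubic - sum(comm(comm(X[i], X[J]), X[J]) for J in range(3))
        res = max(res, np.max(np.abs(lhs - rhs)))
    return res

L = su2_generators(7)                  # one spin-3 fuzzy sphere block
b = np.array([0.3, -1.1, 0.7])         # an arbitrary real kick
print(max(eom_residual(L, b, t) for t in np.linspace(0.0, 6.0, 13)))   # ~ 1e-14
\end{verbatim}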
For the configurations with $b^i_\alpha=0$, the off-diagonal modes have a high angular frequency starting at $n_1-n_2$, so this frequency is much larger than the frequency of the zero mode that we are kicking. This means that small fluctuations on these degrees of freedom are expected to be adiabatic for a typical $b^i_\alpha$. There is a clear procedure to analyze this type of configurations. One thinks of this as a Born-Oppenheimer approximation where one can solve the dynamics of fast degrees of freedom as we freeze the slow degrees of freedom. The fast degrees of freedom are the off-diagonal modes and the slow degrees of freedom are going to be the zero modes that we turned on. They are only fast relative to the motion we have described if their spin is large. We will analyze these by freezing the result at some time $t$ and studying the spectrum of quadratic fluctuations of the off-diagonal modes for that frozen configuration. Since $t$ is fixed for the analysis, we can use the rotational symmetry of the system to align $\vec b_{1}- \vec b_{2}$ along the $3$-axis ({\it i.e.} only $b^3_1-b^3_2\neq 0$). Moreover, a combination of the form $n_1 \vec b_1+n_2 \vec b_2$ is proportional to the center of mass of the whole system and is a decoupled degree of freedom. This means that all the dynamics we are interested in depends only on $\vec b_{1}-\vec b_2$. We can without loss of generality set $\vec b_1=0$. Also, we can choose the displacements to be real. In the rest of this section we will give a geometric characterization of what type of results we expect to find. A detailed analysis of the spectrum of these fluctuations will then follow in the next section. \subsection{Geometric characterization of the dynamics} These matrix configurations can be described geometrically in the M-theory plane wave geometry. A fuzzy sphere of rank $n$ corresponds to an M2-brane giant graviton of size proportional to its light-cone momentum $n$ (the proportionality constant depends on conventions). In matrix coordinate units, $n$ acts as a cut-off on the range of the angular momentum (see \cite{Madore}), so that the size of the sphere is also proportional to the maximum eigenvalue $j$. Defining the radius of the sphere as given by the distance to the center of mass (this is also done in Matrix Theory \cite{Kabat:1997im}), we get \begin{equation} R^2 = (X^1- x^1_{cm})^2+(X^2-x^2_{cm})^2+(X^3-x^3_{cm})^2= j(j+1)\,. \end{equation} At large $j$ we have that the radius is $R= j+1/2+ o(1/j)$. Notice that the spacing of the radii between different $j$ is essentially $\Delta j$. These M2-branes are described as D2-branes. Off-diagonal modes connecting different such configurations are interpreted as strings stretching between the D2-branes, while the eigenvalues are interpreted as D0-branes. It is standard to think of the D2-branes as branes that have absorbed D0-branes and as a consequence have a strong magnetic field on their world-volume \cite{Dougl}. As a matrix of rank $n$ has $n$ eigenvalues, this is the magnetic flux threading the D2-brane sphere. A string ending on the D2-brane is charged under this magnetic field and experiences a magnetic monopole flux of strength $n$. As is well known, if we consider a charged scalar degree of freedom on a sphere with a magnetic monopole background and if we restrict to the lowest Landau level, the wave functions carry angular momentum and can be argued to be localized on the sphere so that they cover an area of order $1/n$. 
The angular momentum points in the direction in which we localize the wave function on the sphere. If we reverse the charge, the angular momentum points in the opposite direction. If we have two fuzzy spheres of sizes $j, j'$ at rest they can be described by two concentric spheres of ranks $n, n'$. Now let us consider strings stretching between them. If we attach $n$ states to one sphere and $n'$ states to the other sphere, we get a total of $nn'$ possible strings. Each of these strings will have one endpoint on one sphere, associated to a positive charge, while the other endpoint, on the other sphere, will carry the opposite charge (the string theory is oriented). The effective magnetic flux that the particle sees is $n-n'$. This describes the minimal angular momentum that the modes connecting the fuzzy spheres can have: $|j-j'|$. This is also the length of a string stretched from the north pole of one fuzzy sphere to the north pole of the other one. \begin{figure}[ht] \begin{center} \includegraphics{Angularmomentum.pdf} \caption{An off-diagonal mode can be thought of as a string stretching between two D2-branes. In this figure we consider the case of zero displacement, {\it i.e.} concentric spheres. The angular momentum of a state is given by subtracting the position vectors of the string endpoints. Using the symmetries of the system, the angular momentum vector can always be chosen to be aligned along the vertical axis. \label{fig:ang}} \end{center} \end{figure} The string of maximum length has angular momentum given by $j+j'$ and a mass of order $j+j'$ in appropriate units. The angular momentum vector can be obtained by taking the difference of the position vectors of the endpoints of the string. This is depicted in figure \ref{fig:ang}. The angular momentum vector of the string state points parallel to the string. Also, the geometric length is proportional to the length of the angular momentum vector (this also happens for non-commutative field theories on the Moyal plane \cite{BigattiSusskind}). We depict in the figure various highest weight states. The longest string goes from the north pole of one fuzzy sphere to the south pole of the other. Notice also that each string endpoint can be thought of as occupying a uniform fixed area on each sphere. The sphere with $n_1$ eigenvalues has $n_1$ such patches and similarly the second one, with $n_2$ eigenvalues, will be divided into $n_2$ patches. Each of these is to be thought of as a D0-brane end-point (region) on the sphere. If we compare with the spectrum of the $Y^a$ fluctuations, we get a precise matching between the possible values of angular momentum we compute geometrically in this way and the ones we obtain from the field theory calculation. These are the polarizations of the strings transverse to the three directions in which the branes are embedded. For polarizations of the string modes in the brane 3-plane, they have extra spin: indeed, they carry one unit of spin that is either along the direction of the string, or opposite to it (only transverse polarizations appear on the string) and one can match this to the values of angular momentum of the $X^i$ fluctuations as well. Again, the geometric estimate of the mass is good enough. Notice that there are in general corrections of order one to the mass. Now we can consider what happens when we displace the spheres (see figure \ref{fig:displaced}). It is clear that the density of the string endpoints on each fuzzy sphere is not going to change. 
This is because this is roughly the density of eigenvalues per unit area. Moreover, since the end points of the string are charges in a very strong magnetic field, the magnetic field is good at keeping them from changing positions. Roughly, we should imagine that the string ends are stuck to a particular D0-brane and that these D0-branes are heavy and do not get moved around much by the force of a string. Indeed, in perturbation theory the tension of the D-branes is large \cite{Dai:1989ua}. So we can argue that the endpoint location of the string on each fuzzy sphere will not change, nor how we think about its angular momentum in the $3$-direction (the one that is preserved by the configuration). However, the length of the string will change. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.7]{Displacedfuzzy.pdf} \caption{The length of the strings changes as we displace the fuzzy spheres. \label{fig:displaced}} \end{center} \end{figure} If we use the same labels for the string endpoints as before, that is we label them by their angular momentum, we find that the length is given by \begin{equation} L ^2\simeq (\Delta L^1)^2+(\Delta L^2)^2+(\Delta L^3-b)^2\,, \end{equation} where $b$ is the displacement. The masses should then be roughly given by \begin{equation} M^2\simeq (\Delta \vec L)^2 - 2 b \Delta L^3 + b^2\,. \label{eq:geommass} \end{equation} The mass formula will attain the minimum value on a sphere for fixed $(\Delta \vec L)^2$ when $\Delta L^3$ takes either the maximum or the minimum value, depending on the sign of $b$. Clearly, this value will be minimized when the spheres touch. The states with minimum energy have their spin aligned along the $3$-axis shown in the figure (the strings of length zero do not have a horizontal component of $\vec L$). There will be corrections to this simple geometric formula. This can be seen by comparing the values of the energy when the spheres are concentric to those that are obtained from the exact answer. These corrections to the mass squared are of order $j$ and can in principle make some of the modes tachyonic. This is what one can expect from the intuition of branes at angles \cite{Berkooz:1996km}. To check this requires doing a computation, which we do in the next section. What is important to notice is that if there are tachyons, they are localized near where the spheres touch. In this example this is a circle, so one can expect these degrees of freedom to behave like the modes of a one-dimensional field theory. Notice that the naive geometrical correction is proportional to $\Delta L^3$. If we treat it as an operator in the theory of angular momentum it commutes with $(\Delta \vec L)^2$, so even though the configurations break the rotational symmetry, the spherical harmonic representation of the states should remain diagonal. This is similar to what happens in the computation of the Zeeman splitting in the hydrogen atom. We will see this explicitly when we do the full computation. Also notice that the most tachyonic mode will depend on the displacement, because, of all the vertical strings, the one that is shortest depends on the precise value of $b$. This means that, since in our kicked-sphere solution the displacement is changing with time, which off-diagonal mode is the most tachyonic also depends on time. If all of these start condensing, one expects that, since they carry angular momentum, the axial symmetry will be broken classically: the spheres will deform non-uniformly and change shape. 
The $L^3$ will remain a constant of motion and the axial symmetry will only be restored quantum mechanically by averaging over the orientation of the resulting shape. The axial symmetry can be restored later again classically if the system thermalizes and any coarse grained observation becomes sufficiently homogeneous (this usually requires large $N$). Such effects would be expected when we form a non-rotating black hole in the gravity theory. \section{Spectrum of fluctuations} \label{sec-4} Now we are ready to start calculating the spectrum of fluctuations of off-diagonal modes for displaced fuzzy spheres. As we have described above, at large $N$ the off-diagonal modes connecting two fuzzy spheres are generically heavy, except where the fuzzy spheres intersect. We have also argued that geometrically it seems that the basis of spherical harmonics is preserved. We will expand both the $X^i$ and $Y^a$ variables in spherical monopole harmonics. This is because the off-diagonal matrices have the same rank. The fuzzy vector spherical harmonics will be particular linear combinations of these. The basis of matrices will be labeled by $Y_{\ell m} \in Hom(n_2, n_1)$, which explains how we think of these as matrices. The conjugate harmonic is $Y_{\ell m} ^\dagger = (-1)^m Y_{\ell, -m} \in Hom(n_1,n_2)$. They are normalized so that \begin{equation} {\rm tr}( Y_{\ell m}^\dagger Y_{\ell' m'}) = \frac 12 \delta_{\ell\ell'}\delta_{mm'} \,. \end{equation} The matrices $Y^a$ that are off-diagonal and connecting the two fuzzy spheres will be hermitian and expanded as follows \begin{equation} Y^a = \sum_{\ell,m} y^a_{\ell m} Y_{\ell m}+ (y^a_{\ell m})^* Y_{\ell m}^\dagger\,, \end{equation} or, writing the blocks more explicitly, \begin{equation} Y^a = \sum_{\ell,m}\begin{pmatrix} 0 & y^a_{\ell m} Y_{\ell m}\\ (y^a_{\ell m})^* Y^\dagger_{\ell m} & 0 \end{pmatrix}\,. \label{expYblocks} \end{equation} We will do the same for the $X^i$ matrices. The difference is that the $X^i$ matrices will also have a vacuum expectation value as in (\ref{vev}). As argued above, we can take \begin{equation} \vev{X^3}= \begin{pmatrix} L_{(n_1)}^3 &0\\ 0 & L_{(n_2)}^3+ b \end{pmatrix}\,, \end{equation} where $b$ is the displacement of the second fuzzy sphere, while we take $\vev{X^1}$ and $ \vev{X^2}$ with no displacement. Our goal is to compute the masses of the fluctuations in terms of $\ell, m, n_1, n_2, b$ and any possible mixings between the states. To do this, we will need the following commutation relations for the $Y_{\ell m}$ \begin{eqnarray} {[L^3, Y_{\ell m}] } &=& m Y_{\ell m}\,,\cr {[L^+, Y_{\ell m}] }&=& \sqrt{(\ell -m)(\ell +m+1)} Y_{\ell m+1}\,,\cr {[L^-, Y_{\ell m}]} &=&\sqrt{(\ell +m)(\ell+1-m)} Y_{\ell m-1}\,, \end{eqnarray} and their adjoints \begin{eqnarray} {[L^3, Y^\dagger_{\ell m}] }&=& -mY_{\ell m}^\dagger\,,\cr {[L^+, Y_{\ell m}^\dagger] }&=&- \sqrt{(\ell +m)(\ell -m+1)} Y^\dagger_{\ell m-1}\,,\cr {[L^-, Y_{\ell m}^\dagger] }&=&- \sqrt{(\ell -m)(\ell +m+1)} Y^\dagger_{\ell m+1}\,, \end{eqnarray} where $L^\pm = L^1\pm i L^2$. First we study the fluctuations of $Y^a$, which are easier to analyze, and then those of $X^i$. We will do the calculation for arbitrary values of $n_1, n_2$ with the configurations being off-shell (frozen at finite displacement). 
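To make the expansion in fuzzy monopole harmonics concrete, one can diagonalize the action of the $L^i$ on an off-diagonal block numerically. The sketch below is illustrative only: the block sizes $n_1=6$, $n_2=3$ are arbitrary, and it reuses the \texttt{su2\_generators} helper of the earlier sketch. It recovers the values $\ell = |j_1-j_2|, \ldots, j_1+j_2$, each with multiplicity $2\ell+1$, for a total of $n_1 n_2$ modes.
\begin{verbatim}
import numpy as np
# assumes su2_generators(n) from the earlier sketch

n1, n2 = 6, 3                          # two fuzzy spheres of spins j1 = 5/2, j2 = 1
LA, LB = su2_generators(n1), su2_generators(n2)

def hat(A, B):
    # action M -> A M - M B on an off-diagonal block M, written as a matrix
    # on the n1*n2-dimensional space of blocks
    return np.kron(A, np.eye(B.shape[0])) - np.kron(np.eye(A.shape[0]), B.T)

casimir = sum(hat(LA[i], LB[i]) @ hat(LA[i], LB[i]) for i in range(3))
lam = np.linalg.eigvalsh(casimir)                  # eigenvalues ell(ell+1)
ell = np.round((np.sqrt(1.0 + 4.0 * lam) - 1.0) / 2.0, 6)
values, counts = np.unique(ell, return_counts=True)
print(values, counts)
# ell = 1.5, 2.5, 3.5 with multiplicities 4, 6, 8 -- i.e. |j1-j2| <= ell <= j1+j2,
# with 2*ell+1 states each, and 4 + 6 + 8 = n1*n2 = 18 modes in total
\end{verbatim}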
\subsection{Transverse fluctuations} To compute the spectrum of transverse fluctuations we need to expand the Lagrangian in fluctuations $\delta Y^a = \sum_{\ell, m} \delta y^a_{\ell m} Y_{\ell m}+ (\delta y^a_{\ell m})^* Y_{\ell m}^\dagger$ keeping only the terms up to quadratic order. Since the $Y^a$ do not have a vev and they appear always at least quadratically in the action, we can just truncate the action to quadratic order in $Y^a$ and replace the $X^i$ by their vevs. There can be no mixing between $X^i$ and $Y^a$ (this is easiest to argue by the fact that the $Y^a$ carry $SO(6)$ symmetry labels and the configurations under study are $SO(6)$ invariant). The expansion of the Lagrangian is straightforward and yields \begin{eqnarray} {\cal L}^{(Y)} &=& \frac12 {\rm tr}(\dot Y^a)^2-\frac18 {\rm tr} (Y^a)^2 + \frac 12 {\rm tr} [\vev{X^i},Y^a]^2\cr &=&\frac 12 {\rm tr}(\dot Y^a)^2-\frac18 {\rm tr} (Y^a)^2 +\frac 12{\rm tr}[\vev{X^3},Y^a]^2+ \frac 12 {\rm tr}[\vev{X^+},Y^a][\vev{X^-},Y^a]\,. \end{eqnarray} The first thing we compute is the kinetic term, which is obviously given by \begin{equation} {\cal L}^{(Y)}_{kin} =\frac{1}{2} \sum_{\ell, m} |\delta \dot y^a_{\ell m}|^2\,. \end{equation} For $b=0$ it is easy to see that the potential can be written as \cite{VRetal} \begin{equation} {\cal L}^{(Y)}_{mass} = -\frac{1}{2} {\rm tr} \left[Y^a\left(\frac14 Y^a+\left[L^i,\left[L^i,Y^a\right]\right]\right)\right]\,. \end{equation} Since $[L^i,[L^i,Y_{\ell m}]]=\ell(\ell+1)Y_{\ell m}$, this implies that at $b=0$ we get that the mass terms are given by \begin{equation} {\cal L}^{(Y)}_{mass} = -\frac{1}{2}\sum_{\ell,m} \left(\ell+\frac 12\right)^2 |\delta y^a_{\ell m}|^2\,. \end{equation} This matches of course with the result obtained in Section~\ref{sec-grouptheory} using group theoretical arguments. A more detailed description of these spherical harmonics can be found in \cite{Ishiki:2006yr}. We now turn on $b$. Notice that schematically it is $\vev{X^3}= L^3 + b\begin{pmatrix}0&0\\ 0&1\end{pmatrix}$, so that \begin{eqnarray} [\vev{X^3}, \delta Y^a]&=& [L^3,\delta Y^a] + b \left[ \begin{pmatrix}0&0\\ 0&1\end{pmatrix}, \delta Y^a\right]\cr &=& \sum_{\ell,m} (m -b) \delta y^a_{\ell m} Y_{\ell m}-(m-b)(\delta y^a_{\ell m})^* Y_{\ell m}^\dagger \,. \end{eqnarray} Notice that the addition of $b$ does not mix the $Y_{\ell m}$ with each other in the commutator and all it does is to replace $m\to m-b$ in the commutation relations. This is what makes the computation so simple. This is what we were arguing for geometrically in the previous section. When we square the expression above, we get the same result as when $b=0$, plus additional terms that are cross terms and a quadratic term in $b$, so that \begin{equation} \omega^2_{\ell m} = \left(\ell+\frac12\right)^2 - m^2 + (m-b)^2 \geq 0\,. \end{equation} Notice that these are all positive, because $\ell \geq |m|$. The minimum possible value, fixing $\ell$ and $m $ but varying $b$, is given by $b=m$, in which case the frequency squared is \begin{equation} \omega_{\ell m }^2 = \left(\ell+\frac12\right)^2 - m^2 \end{equation} and the minimum value this can acquire is given by $m=\pm\ell$, so that the frequency squared becomes \begin{equation} \omega_{\ell \ell }^2= \ell +\frac 14\,. \end{equation} So, as we go to higher and higher $\ell$, we find that the mode is more and more massive at the place where it is lightest (namely for $b=m$). However, this only grows as $\sqrt \ell$, which is subleading to the typical value of the frequency which is of order $\ell$. 
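This formula admits a direct numerical cross-check (again a purely illustrative aside, with arbitrary block sizes and displacement, reusing the helpers of the earlier sketches): diagonalizing the quadratic form $\tfrac14+\sum_i \hat X^i\hat X^i$ on the off-diagonal block, where $\hat X^i$ denotes the commutator action of the background $\vev{X^i}$ on the block, reproduces $\omega^2_{\ell m}=(\ell+\tfrac12)^2-m^2+(m-b)^2$.
\begin{verbatim}
import numpy as np
# assumes su2_generators(n) and hat(A, B) from the earlier sketches

n1, n2, b = 6, 3, 0.8                  # arbitrary illustrative values
LA, LB = su2_generators(n1), su2_generators(n2)
j1, j2 = (n1 - 1) / 2.0, (n2 - 1) / 2.0

X1 = hat(LA[0], LB[0])
X2 = hat(LA[1], LB[1])
X3 = hat(LA[2], LB[2] + b * np.eye(n2))          # displaced <X^3>

quad_form = 0.25 * np.eye(n1 * n2) + X1 @ X1 + X2 @ X2 + X3 @ X3
numeric = np.sort(np.linalg.eigvalsh(quad_form))

analytic = np.sort([(l + 0.5)**2 - m**2 + (m - b)**2
                    for l in np.arange(abs(j1 - j2), j1 + j2 + 1)
                    for m in np.arange(-l, l + 1)])
print(np.allclose(numeric, analytic))            # True
\end{verbatim}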
If we compare with our geometric result in equation \eqref{eq:geommass} we find that it matches it very closely and it is exactly the same if we interpret $(\Delta \vec L)^2$ with the usual quantum value $\ell(\ell+1)$ plus the 1/4 from the background curvature of the plane wave. \subsection{Longitudinal fluctuations} Now we analyze the fluctuations of the $X^i$ fields. This is trickier than for the $Y^a$ fluctuations. First, the $X^i$ have vevs, so that expanding the action in fluctuations is more involved. Secondly, the system is a gauged quantum mechanical system. This means that there are zero modes that should be projected out of the dynamics. Finally, for the displaced fuzzy spheres we are not at an extremum of the potential, so the gradient of the potential does not vanish. This means that the Hessian that determines quadratic fluctuations is not invariant under non-linear field redefinitions, unlike in the case of an extremum of the potential where the Hessian is a symmetric tensor on the tangent space of the corresponding configuration point. This is potentially problematic for the removal of the zero modes, as the group of $U(N)$ rotations of the configuration gives us a non-linear geometric space. All of these problems are solvable in practice. What we need to do is to argue that our fluctuations are orthogonal to linearized gauge transformations on a particular configuration defined by a background. Since we have a metric on the configuration space defined by the kinetic term, this is a well defined procedure. We expand the fluctuations of the off-diagonal blocks as follows\footnote{An equivalent way of doing this computation is to expand the fluctuations in the basis of eigenstates of the $b=0$ problem, which was originally solved in \cite{VRetal}. We outline this alternative derivation in the appendix.} \begin{eqnarray} X^3&\simeq& L^3 + b\begin{pmatrix}0&0\\ 0&1\end{pmatrix} + \sum_{\ell,m} \delta x^3_{\ell m} Y_{\ell m} + (\delta x^3_{\ell m})^* Y^\dagger _{\ell m}\,,\cr X^+& \simeq& L^+ +\sum_{\ell,m} \delta x^+_{\ell m-1} Y_{\ell m} + (\delta x^-_{\ell m+1})^* Y^\dagger _{\ell m}\,,\cr X^-&\simeq& L^- + \sum \delta x^-_{\ell m+1} Y_{\ell m} + (\delta x^+_{\ell m-1})^* Y^\dagger _{\ell m}\,. \label{Xexp} \end{eqnarray} Notice that we have shifted the index $m$ in the coefficients of the $X^\pm$ fluctuations by $\pm 1$. The reason for doing this is that the matrices $L^+$ and $L^-$ usually are associated with angular momentum one. However the configurations with $\delta x=0$ are spherically invariant, so the $L^+$ and $L^-$ matrices should be associated to having no spin: in this way the spin of the matrix is cancelled by the spin of the label. The kinetic term for the longitudinal fluctuations will be then given by \begin{equation} {\cal L}^{(X)}_{kin}= \frac{1}{2} \sum_{\ell,m} |\delta \dot x^3_{\ell m}|^2 + \frac 12 |\delta \dot x^+_{\ell m-1}|^2+\frac 12 |\delta \dot x^-_{\ell m+1}|^2\,. \label{eq:kinetic} \end{equation} Notice that the metric on these fluctuations is diagonal, but the coefficients are not one. This is important for evaluating frequencies and for projecting out the gauge fluctuations. Now let us perform a gauge transformation that is off-diagonal, with parameters $\delta \theta_{\ell m} Y_{\ell m}+ h.c.$. This is necessary for the generator to be hermitian. We find that \begin{equation} \delta_\theta X^3 = i [\delta \theta_{\ell m} Y_{\ell m}+ h.c.,X^3]= -i (m-b) \delta \theta_{\ell m} Y_{\ell m}+ i (m-b) \delta\theta_{\ell m}^* Y^\dagger_{\ell m}\,. 
\end{equation} Again, notice that $b$ just shifts the value of $m$ for this calculation. Similarly we find that \begin{eqnarray} \delta_\theta X^+ &=& -i \sqrt{(\ell -m)( \ell+m+1) } \delta \theta_{\ell m} Y_{\ell m+1}+ i \sqrt{(\ell + m)( \ell-m+1) } \delta \theta^*_{\ell m}Y^\dagger_{\ell m-1}\,,\cr \delta_\theta X^-&=& (\delta_\theta X^+)^\dagger \,. \end{eqnarray} We then require that the allowed $\delta x^i_{\ell m}$ are orthogonal to the $\delta \theta_{\ell m}$ variations of the configuration. The conjugate variables to these rotations vanish. Classically this means that for fluctuations proportional to $\delta \theta_{\ell m}$ we have to impose the constraint $\dot {\delta \theta}_{\ell m}=0$, so that $\delta \theta_{\ell m}=0$. These gauge transformations are unphysical and are projected out by the Gauss law constraint. To match our labeling, we should replace the dummy index $m\to m-1$ or $m\to m+1$ in the various terms in the expansion of $\delta_\theta X^+$ and $\delta_\theta X^-$. In this way we get \begin{eqnarray} \delta_\theta X^+ &=& -i \sqrt{( \ell+m)(\ell -m+1) } \delta \theta_{\ell m-1} Y_{\ell m}+ i \sqrt{( \ell-m) (\ell + m+1)} \delta \theta^*_{\ell m+1}Y^\dagger_{\ell m}\,,\cr \delta_\theta X^-&=& (\delta_\theta X^+)^\dagger \end{eqnarray} and we see the consistency of the conventions of spin labeling, for the $\delta \theta_{\ell m}$ appear in the same way as the $\delta x^i_{\ell m}$ in the expansion of the fields. Notice how in the gauge variations the coefficient of $Y_{\ell \ell}$ in $\delta_\theta X^-$ vanishes, and also the one of $Y_{\ell, -\ell}$ in $\delta_\theta X^+$. These are the objects with maximum helicity (total spin along the $3$-axis) at fixed $\ell$. They have helicity $\ell+1$ and $-\ell-1$, respectively. From our previous considerations these are the modes that are most likely to become very light. We will see that they can indeed become tachyonic for some values of $b$. Now we move on to analyze the potential for these fluctuations. Using the following identities \begin{eqnarray} && i[X^2,X^3]+X^1+i\left(i[X^3,X^1]+X^2\right)=[X^+,X^3]+X^+\,,\cr && i[X^2,X^3]+X^1-i\left(i[X^3,X^1]+X^2\right)=-[X^-,X^3]+X^-\,,\cr && i [X^1,X^2]= \frac 12 [X^-,X^+]\,, \end{eqnarray} the potential can be rewritten as \begin{equation} V^{(X)}_{BMN}= \frac 12 {\rm tr}\left[ \left( \frac 12 [X^-,X^+]+X^3\right)^2 + ( [X^+,X^3]+X^+)(- [X^-,X^3] +X^-)\right]\,. \end{equation} If we expand in quadratic fluctuations, we can expand each term in the square to linearized order and get that way some quadratic terms. There is an additional term that arises because when we turn on $b$ the background no longer satisfies $[X^+,X^-]=2X^3$. The other two equations are satisfied. These contributions from the potential not being at a minimum affect the coefficient of $\delta x^+\, \delta x^-$ in the quadratic terms. Writing (\ref{Xexp}) in blocks we have that, schematically, \begin{equation} \delta X^+ = \begin{pmatrix} 0& \delta x^+ Y\\ (\delta x^{-})^*Y^\dagger &0 \end{pmatrix}\,, \end{equation} plus a similar expansion for $\delta X^-$. When expanding the total potential to quadratic order in the off-diagonal modes there are two contributions: those that are linear in the fluctuations in each term of the potential that is squared and those that are quadratic in the fluctuations. The linear terms are off-diagonal, the quadratic terms are block diagonal. 
The following intermediate calculations are useful before giving the final answer: \begin{eqnarray} &&\delta X^3 + \frac 12[L^-, \delta X^+]- \frac 12 [L^+, \delta X^-] =\nonumber\\ &&\sum_{\ell,m} \left( \delta x_{\ell m}^3 +\frac 12 \sqrt{(\ell-m) (\ell +m+1)}\delta x^+_{\ell m} - \frac 12 \sqrt{(\ell+m)(\ell -m+1)} \delta x^-_{\ell m}\right) Y_{\ell m} + h.c.\,, \cr && \cr && [\delta X^+,\vev{X^3} ]+ [L^+, \delta X^3]+\delta X^+=\nonumber\\ &&\sum_{\ell,m}\left((b-m+1) \delta x^+_{\ell ,m-1} + \sqrt{(\ell+m)(\ell-m+1)} \delta x^3_{\ell, m-1} \right )Y_{\ell m} \nonumber \\ && \hskip 1cm +\left((m-b+1) ( \delta x^-_{\ell m+1})^*- \sqrt{(\ell-m)(\ell+m+1)}(\delta x^3_{\ell m+1})^*\right) Y_{\ell m}^\dagger\,, \end{eqnarray} and a similar equation involving $\delta X^-$. All these terms are off-diagonal (being proportional to the $Y_{\ell m}$). Notice that because of our conventions, only the same values of $\ell, m$ appear in all of the coefficients of these linear terms: that is, the mixing of modes only mixes the same values of $\ell, m$. This simplification makes the problem very tractable for these modes as well. In the end, we need to understand how three modes mix, but one such mode is projected out because of the gauge constraint. The general mass problem reduces to diagonalizing a $2\times 2$ matrix for each value of $\ell,m$. This can be combined to give a block diagonal term in $\frac12[ X^-,X^+]+ X^3$ proportional to \begin{equation} \begin{pmatrix} 0&0\\ 0& b \end{pmatrix}+\frac 12 \begin{pmatrix} \delta x^- (\delta x^-)^*YY^\dagger-\delta x^+ (\delta x^+)^* Y Y^\dagger &0\\ 0 &\delta x^+ (\delta x^+)^* Y^\dagger Y- \delta x^- (\delta x^-)^*Y^\dagger Y \end{pmatrix}\,. \end{equation} When we square and take traces, expanding to quadratic order in fluctuations, this gives us a contribution to the mass matrix equal to \begin{equation} \frac 12 b ( \delta x^+ (\delta x^+)^*- \delta x^- (\delta x^-)^*)\,. \label{bcontr} \end{equation} Notice that the contribution from this term is negative or positive for different modes depending on the sign of $b$. The kinetic term \eqref{eq:kinetic} suggests that we normalize the fields slightly differently, $\delta x^{\pm} = \sqrt 2\, \delta X^{\pm}$, to have canonical normalizations for every mode. The terms in the expansion can be rewritten as \begin{eqnarray} \delta x_{\ell m}^3 +\frac 12 \sqrt{(\ell-m) (\ell +m+1)}\delta x^+_{\ell m} - \frac 12 \sqrt{(\ell+m)(\ell -m+1)} \delta x^-_{\ell m}\nonumber \\ = \begin{pmatrix} 1\,, ~~ & \sqrt{\frac{(\ell-m)(\ell +m+1)} { 2}} \,,~~ & -\sqrt{\frac {(\ell+m) (\ell+1 -m)}{ 2}}\end{pmatrix}\begin{pmatrix} \delta x^3_{\ell m}\\ \delta X^+_{\ell m}\\ \delta X^-_{\ell m} \end{pmatrix}\equiv V^3 \, \delta X\,. \end{eqnarray} Similarly we find a $V^+$ and $V^-$ given by \begin{eqnarray} V^+ &=&\begin{pmatrix}\sqrt{(\ell-m)(\ell +m+1)}\,, ~~ & {\sqrt 2} {(b-m)}\,, ~~ & 0\end{pmatrix}\,,\cr V^- &=& \begin{pmatrix}- \sqrt{(\ell+m)(\ell -m+1)} \,,~~ & 0\,, ~~ &{\sqrt 2}{(m-b)}\end{pmatrix}\,. \end{eqnarray} We have shifted $m\to m\pm 1$ in the equations above so that we are comparing the same coefficients of $\ell, m$. The mass matrix is given by squaring these vectors and adding them together, including also the contribution of (\ref{bcontr}): \begin{equation} \omega^2_{\ell m} = (V^3)^\dagger V^3 + \frac 12 ( V^+)^\dagger V^++\frac 12(V^-)^\dagger V^-+ \begin{pmatrix} 0&0&0\\ 0&b&0\\ 0&0&-b \end{pmatrix}\,. 
\end{equation} The end result is given by \begin{equation} \omega^2_{\ell m} = \begin{pmatrix} 1+\ell+\ell^2-m^2&\ & (b-m+1)\Lambda_-&\ & (b-m-1)\Lambda_+\\ (b-m+1)\Lambda_-&& b+(b-m)^2+\Lambda_-^2&& -\Lambda_+\Lambda_-\\ (b-m-1)\Lambda_+&& -\Lambda_+\Lambda_-&& -b+(b-m)^2+\Lambda^2_+ \end{pmatrix}\,, \end{equation} where we have defined the shorthands \begin{equation} \Lambda_\pm\equiv \sqrt{\frac{(\ell\pm m)(\ell \mp m+1)}{2}}\,. \end{equation} Of particular interest to us is when $m=\ell +1$ for $\delta X^-_{\ell, m}$ ({\it i.e.} $\Lambda_+=0$), and when $m=-\ell -1$ for $\delta X^+_{\ell, m}$ ({\it i.e.} $\Lambda_-=0$). For these cases there is no mixing with any other mode and these fields have maximum spin in the $3$-direction for fixed $\ell$. We have already argued why these modes are important. Their masses are given by \begin{eqnarray} && (\omega^-_{\ell, \ell+1})^2= -b +(b-\ell-1)^2\,,\cr && (\omega^+_{\ell, -\ell-1})^2= b +(b+\ell+1)^2\,. \label{spectrum} \end{eqnarray} These modes are tachyonic for $b= \pm(\ell +1) $ on an interval for $b$ of order $\sqrt \ell$. Notice that there is a tower of tachyonic modes for each $b$ labeled by $\ell$ with a quadratic dispersion relation. This can be interpreted as a tower of tachyonic modes on a circle in the presence of some holonomy for a gauge field under which these fields are charged. Other modes for which $m$ is not maximal in the sense above are not tachyonic. From the equation for the masses above we still need to project out the gauge variations. This is straightforward, but tedious. If we call the projection matrix that projects onto the gauge degrees of freedom as $\Sigma_{\ell m}$, then $1-\Sigma_{\ell m}$ is the projection in the orthogonal components. The mass matrix we need is then given by \begin{equation} \omega^2_{\ell m, \, \tiny{phys}}= (1-\Sigma_{\ell m})\omega^2_{\ell m}(1-\Sigma_{\ell m}) \end{equation} The precise expressions are not very illuminating. However, none of the modes that appear this way are tachyonic, except the ones that we have already discussed. We can also check that the eigenvalues of the above matrix are $\ell^2, (\ell+1)^2, 0$ when we set $b=0$ (as originally found in \cite{VRetal}) as a consistency check. For $b=0$ the modes with zero eigenvalue are the gauge zero modes. For $b\neq 0$ these modes seem to become massive (the determinant is not zero), but as we have argued already, this is an artifact of the linearization. After all, expanding to second order in these gauge variations we find that \begin{equation} V^{(X)}_{BMN} (b, \delta \theta) \simeq V^{(X)}_{BMN}(b) + \partial_b V^{(X)}_{BMN}(b)(\delta \theta^2) + \ldots\,, \end{equation} and the second term only vanishes for $b=0$. However, the potential is invariant, so $b$ must be corrected to second order in gauge fluctuations. This is a non-linear change of variables. This is why it is better to project on directions orthogonal to the gauge transformations than trying to sort this second order variation and how it affects the metric of the other modes. It is clear that when we consider the above result, we should organize the modes according to the following criteria. If the two fuzzy spheres intersect for some $b >0$, we should fix $K= \ell -m$ for the modes where $b\simeq m$ (those that are near the intersection). For each such $K$ we get a tower of states on a circle labeled by the different values $\ell$ (or $m$) (there are two states for general $\ell, m$). These look like a tower of fields in one dimension. 
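A small numerical cross-check of these statements may be useful. The sketch below is an illustrative aside (it assumes only \texttt{numpy}); in particular, the identification of the gauge direction with the vector $(m-b,\Lambda_-,\Lambda_+)$ in the basis $(\delta x^3,\delta X^+,\delta X^-)$ is our reading of the linearized gauge variations above, not a formula quoted from elsewhere. It reproduces the eigenvalues $\{0,\ell^2,(\ell+1)^2\}$ at $b=0$ and the tachyonic window of width of order $\sqrt\ell$ following from (\ref{spectrum}).
\begin{verbatim}
import numpy as np

def Lam(l, m, sign):                   # Lambda_{+-} as defined above
    return np.sqrt((l + sign * m) * (l - sign * m + 1) / 2.0)

def mass_matrix(l, m, b):
    # the 3x3 matrix omega^2_{lm}, in the basis (delta x^3, delta X^+, delta X^-)
    Lm_, Lp_ = Lam(l, m, -1), Lam(l, m, +1)
    return np.array([
        [1 + l + l**2 - m**2,  (b - m + 1) * Lm_,         (b - m - 1) * Lp_],
        [(b - m + 1) * Lm_,     b + (b - m)**2 + Lm_**2,  -Lp_ * Lm_],
        [(b - m - 1) * Lp_,    -Lp_ * Lm_,                -b + (b - m)**2 + Lp_**2]])

def projected_spectrum(l, m, b):
    # project onto directions orthogonal to the gauge variation (our reading:
    # proportional to (m - b, Lambda_-, Lambda_+)); one zero eigenvalue remains
    # from the projected-out direction, the other two are the physical ones
    g = np.array([m - b, Lam(l, m, -1), Lam(l, m, +1)])
    P = np.eye(3) - np.outer(g, g) / (g @ g)
    return np.sort(np.linalg.eigvalsh(P @ mass_matrix(l, m, b) @ P))

l, m = 4, 2
print(np.allclose(np.sort(np.linalg.eigvalsh(mass_matrix(l, m, 0.0))),
                  [0.0, l**2, (l + 1)**2]))      # the b = 0 consistency check: True

print(projected_spectrum(l, m, 1.3))             # gauge-projected spectrum at b != 0

# tachyonic window of the unmixed mode (omega^-_{l, l+1})^2 = -b + (b - l - 1)^2:
print(np.roots([1.0, -(2 * l + 3), (l + 1)**2]))
# two roots straddling b = l + 1, separated by sqrt(4*l + 5) ~ 2 sqrt(l)
\end{verbatim}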
For $b<0$ we would do the same by fixing $K=\ell +m$ for the modes near the intersecting fuzzy spheres. Notice that this result is very similar to results that have been found in other matrix models with fuzzy spheres \cite{Azuma:2007jr}. Since in their case the solutions with displaced fuzzy spheres are critical points of the potential, the issues with zero modes do not appear. \subsection{Interpretation from ${\cal N}=4$ super Yang-Mills} We would like to look at what we have obtained so far from the point of view of ${\cal N}=4 $ super Yang-Mills on $S^3\times \mathbb{R}$. The results above are expected to give details of an $SU(2)_L$ invariant subsector of this theory on the sphere. After all, the BMN dynamics is exactly the truncation to the $SU(2)_L$ invariant subsector of the field theory. Thus, any dynamical feature present in the $SU(2)_L$ invariant truncation is a feature of the full ${\cal N}=4$ super Yang-Mills on the sphere. In particular, the unstable modes we found in the matrix model are necessarily unstable modes in the full field theory. For all the fluctuations we have studied, notice that, apart from the bounds on the total magnetic moment, $|j_1\pm j_2|$, our results are essentially independent of the values $n_1$ and $n_2$. If we think back at how the plane wave matrix model is related to a truncation of ${\cal N}=4$ super Yang-Mills on $S^3$ we realize that we should not be surprised by this fact. The splitting into $n_1$ and $ n_2$ as fuzzy spheres is artificial in ${\cal N}=4 $ super Yang-Mills. The fuzzy spheres are gauge transformations of the vacuum. However $b$ matters: its presence gives states with non-zero energy. In the field theory setup the total angular momentum of physical states also matters, but not the details of the splitting into $n_1, n_2$, as these would just give a multiplicity of components with different masses from different angular momentum objects. To make things precise, we should remember that the fields $X^i$ arise from the gauge connection on $S^3$, while the fields $Y^a$ arise from the constant scalar modes of the field theory. The way to do this is simple: since we have an $S^3$ sphere, the spatial components of the gauge connection can be written in a basis of left invariant one-forms $e^i$ \begin{equation} {\cal A}(t) = X^i(x,t) e^i\,. \end{equation} The spatial component of the field strength (chromo-magnetic field) is given by \begin{equation} d{\cal A}+{\cal A}\wedge {\cal A}= dX^i \wedge e^i + X^i de^i+ \frac 12 [X^i, X^j] e^i \wedge e^j\,. \end{equation} Restricting to $SU(2)$ invariant states requires that $X^i$ be constant. Thus on such configurations the first term vanishes. However, the one-forms are not exact $de^i\neq 0$. Instead they satisfy the Maurer-Cartan equations. Thus the second and third terms do not vanish. Even for constant diagonal $X^i$ there is a non-zero result for the magnetic field. Considering the magnetic field squared part of the Hamiltonian gives an expression that is exactly the term in the potential given by equation \eqref{eq:VXBMN}, after one accounts for various normalization issues. Thus, a non-trivial diagonal displacement in the $X^i$ (of size $b$) corresponds to a chromo-magnetic field of strength proportional to $b$ (the commutator squared term vanishes for this configuration). 
Notice that in such a description of the theory the parameter $b$ would correspond to a magnetic field that leaves invariant an $SU(2)_L$ of rotations on the $S^3$ (the electric field is related to the time derivative of the $X^i$). It also leaves invariant a $U(1)_R$ of rotations along the direction of said magnetic field. For such a system the $SU(2)$ spherical harmonics under $SU(2)_L$ and the $U(1)$ quantum numbers completely specify the spectrum for the scalar fluctuations; after all, these are originally classified as the $(n,n)$ representations under $SU(2)\times SU(2)$. The first quantum number determines $n$, while the second quantum number denotes the $z$-component of spin in the $SU(2)_R$. Moreover, this symmetry splits the degeneracy in energies for fluctuations with different $U(1)_R$ completely. When truncating to $SU(2)_L$ invariant states the choices of fuzzy spheres matter again, but this does not affect the energies, just the $SU(2)_L$ labels of the fields that get twisted. As we also argued, the $SU(2)_R$ labels are tied very closely to the $SU(2)_L$ labels of the field around the trivial vacuum. Since the system preserves the $SU(2)_L$ symmetry, we find that the splitting of modes into $SU(2)_R$ representations is preserved in the trivial vacuum. This is not obvious in the matrix model computation since we were turning on a vacuum expectation value that breaks the $SU(2)_R$ symmetry to $U(1)_R$ and in principle could induce mixing between modes with different $SU(2)_R$ representations. This non-mixing between modes looks like a happy coincidence from the matrix quantum mechanics point of view. Here we see that there is a symmetry reason from super Yang-Mills for that to happen. Moreover, each such $(n,n)$ can contribute at most one singlet under $SU(2)_L$ for each value of the $U(1)$ angular momentum. We should also ask what kind of instability the modes between intersecting fuzzy spheres represent in super Yang-Mills. Clearly, the fuzzy sphere is a gauge artifact, but not the presence of a magnetic field. So we should ask what kind of instabilities can arise in the presence of a constant chromo-magnetic field on a sphere. If we take $b$ large, this corresponds to a large magnetic field. By thinking in terms of engineering units, the magnetic field $b$ generates a scale in the system that, if $b$ is large, is much shorter in length than the radius of the sphere. Under such conditions we can ignore the radius of the sphere and treat the magnetic field as if it were constant in space. The only scale in the problem is associated to $b$. The mass squared of the unstable mode should therefore be linear in $b$ (just a result of dimensional analysis). If one has momentum $p$ along the direction of the magnetic field, the spectrum gives modes with frequencies of the form $\omega_{\ell m}^2 = -|b|+ p^2$. This is exactly the result we found in equation \eqref{spectrum}, if we identify $p= \ell+1\pm b$. Such a result is well known. It is the Nielsen-Olesen instability \cite{Nielsen:1978rm} (see also \cite{Leutwyler:1980ma}). This instability is due to the fact that gluons that are charged under the magnetic field have a large magnetic moment, which is enough to drive them tachyonic. For scalars there is no magnetic moment, and the localization effects in a magnetic field cost energy via the uncertainty principle, so these modes are not tachyonic, just as we found for the fluctuations of the $Y^a$ fields. 
For fermions, one can get zero modes, which is the familiar connection between massless fermions and index theory. Incidentally, it is well known that it is the magnetic moment contribution to the $\beta$-function of non-abelian gauge theories that flips the sign of the contribution of vector particles relative to that of scalar particles. It was argued that this would also lead to an effective action where the chromo-magnetic field condenses \cite{Savvidy:1977as}. The Nielsen-Olesen instability would destroy that type of order. \section{Time dependence for the tachyonic modes} \label{sec-5} So far in our analysis we have considered configurations that are frozen in time. We have seen that tachyons can form in the region where the two fuzzy spheres intersect and we have argued that they do not mix with any other mode. At this point an interesting question to ask is what happens to these tachyons as we let our system evolve. In this section we present a straightforward analysis for the simplest possible trajectory, namely an oscillation of the spheres along one fixed direction. Consider two fuzzy spheres of sizes $j_1$ and $ j_2$. The tachyons appear for any integer value from $|j_1-j_2|$ all the way to $j_1+j_2$ if the smaller sphere is allowed to oscillate from the center all the way to the outside of the larger sphere. As we have computed in (\ref{spectrum}), the mass squared of these tachyonic modes is given by \begin{equation} (m_\ell^\pm(t))^2 = (\ell+1\pm b(t))^2\pm b(t)\,, \end{equation} where we do not write the label $m$, which is set equal to $\mp(\ell+1)$. We take $b(t)= \tilde b \sin t$, so as to have a simple sinusoidal motion along the $X^3$ direction. The motion in $t$ is periodic with period $2\pi$, so the full analysis can be restricted to the interval $t\in [0, 2\pi]$. We need to solve the differential equation \begin{equation} \ddot q_\ell (t) + (m^\pm_\ell(t))^2 q_\ell(t)=0\,. \label{mathieu} \end{equation} This is the equation for a harmonic oscillator with a time dependent mass. It is somewhat similar\footnote{Albeit more complicated and not generically solvable in terms of elementary functions.} to the Mathieu equation that describes parametric resonances and, in cosmology, the fluctuations of the inflaton around the minimum of the potential during preheating \cite{Kofman:1994rk}. In general there are two linearly independent solutions to the equation (\ref{mathieu}), which we call $q_1(t)$ and $q_2(t)$. We can relate the initial time problem (at time $t$) to the problem at one period later (at time $t+2\pi$) using a periodicity matrix \begin{equation} \begin{pmatrix} q_1(t+2\pi)\\ q_2(t+2\pi) \end{pmatrix}= \begin{pmatrix} A&B\\ C&D \end{pmatrix} \begin{pmatrix} q_1(t)\\ q_2(t) \end{pmatrix}\,. \end{equation} This relation can be diagonalized, so we can choose the solutions to be eigenvectors of the matrix above. The Wronskian of the solutions is constant, so the matrix relating one period to the next has determinant equal to one. Also, since the differential equation has real coefficients, the solutions can be made real, and in that case the matrix above is real as well. Hence, the eigenvalues are either real or unitary (of unit modulus). These eigenvalues serve as Lyapunov exponents for the classical periodic orbit. When the eigenvalues are unitary the system is stable; when they are real the system is unstable.
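A minimal numerical sketch of this diagnostic is given below (Python with NumPy/SciPy; the choice of the lower-sign branch, the sample values of $\ell$ and $\tilde b$, and the tolerances are illustrative assumptions rather than the parameters used for the figures). The matrix built here acts on the phase-space vector $(q,\dot q)$ over one period, which has the same eigenvalues as the periodicity matrix acting on a basis of solutions.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def m_squared(t, ell, b_tilde):
    # Mass squared of the lower-sign branch, with b(t) = b_tilde * sin(t).
    b = b_tilde * np.sin(t)
    return (ell + 1.0 - b) ** 2 - b

def monodromy_eigenvalues(ell, b_tilde):
    # Integrate q'' + m^2(t) q = 0 over one period for two independent initial
    # conditions and return the eigenvalues of the resulting periodicity matrix.
    def rhs(t, y):
        q, qdot = y
        return [qdot, -m_squared(t, ell, b_tilde) * q]
    columns = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):
        sol = solve_ivp(rhs, (0.0, 2.0 * np.pi), y0, rtol=1e-10, atol=1e-12)
        columns.append(sol.y[:, -1])
    M = np.array(columns).T  # maps (q, qdot) at t = 0 to (q, qdot) at t = 2*pi
    return np.linalg.eigvals(M)

for ell in (1, 3, 5):
    lams = monodromy_eigenvalues(ell, b_tilde=8.0)
    # det(M) ~ 1 (Wronskian conservation); max |eigenvalue| is the growth per period
    print(ell, np.prod(lams).real, max(abs(lams)))
\end{verbatim}
Scanning $\tilde b$ while recording the largest eigenvalue magnitude produces amplification curves with the band structure discussed below; the product of the eigenvalues serves as a check that the determinant is one up to integration error.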
The system can be interpreted also as a Schr\"odinger problem with fixed energy in a periodic potential,\footnote{For an elementary treatment see \cite{kittel}.} which is the negative of the $(m^\pm_{\ell}(t))^2$ function. If the solutions are quasi-periodic (the eigenvalues are in the unit circle), one of the functions is identified with positive frequency and the other one is identified with negative frequency modes. This is the case where the functions $q_\ell(t)$ belong to a band of the periodic potential. Generically, if there are regions where the mode is tachyonic, the corresponding Schr\"odinger particle needs to tunnel through the barrier. This phenomenon generically leads to the property that the eigenvalues of the matrix above are non-unitary, with the tunneling amplitude characterizing the growth of the signal. We can in general estimate this using a WKB approximation. The large eigenvalue tells us how the modes grow around these periodic solutions and it describes the discrete time dependence of the instability under various oscillations. The two linearly independent solutions can also be thought of as coefficients of raising/lowering operators. The matrix computed in this basis is a Bogolubov transformation for each period and the amplitude growth correlates with the amount of particle creation between oscillations. For us, the most important question to ask is which of all the modes above grows the fastest, as these modes will dominate the initial stages of the brane collapse problem. It is not hard to figure out that the tachyon with the highest $\ell$ will generally dominate. First, it will be tachyonic for the longest time during the periodic trajectory, not only because the range of $b$ where it is tachyonic is larger (recall that this range is of order $\sqrt{\ell}$), but also because the motion of the oscillations is slower at larger displacement. This means that the main condensation of modes will happen between the north pole of one fuzzy sphere and the south pole of the other one. We can do this numerically for various values of the amplitude of oscillation $\tilde b$ and for different values of $\ell$. This is shown in Figures \ref{fig:tachyonamp} and \ref{fig:tachyonamplong}. \begin{figure}[ht] \begin{center} \includegraphics[scale=1]{tachyon1a.pdf} \caption{ Here we show the norm of the maximum eigenvalue of the periodicity matrix for various values of $\ell$ as a function of the amplitude $\tilde b$, labeling the horizontal axis. We clearly see the band structure of the problem and that the largest value of $\ell$ is typically the one with most amplification so long as it is tachyonic. \label{fig:tachyonamp}} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.95]{tachyon2a.pdf} \caption{ Here we show the maximum eigenvalue of the periodicity matrix for various values of $\ell$, where we explore the problem in the region of large amplitude $\tilde b$, labeling again the horizontal axis. Asymptotically the amplification becomes of order one, since the time during which each mode is tachyonic on a single oscillation becomes very small. \label{fig:tachyonamplong}} \end{center} \end{figure} Notice also that because condensation happens for various modes with different $\ell$, the system classically breaks the rotation symmetry around the axis of symmetry for small perturbations on the off-diagonal modes. 
Each of these is expected to decohere from the others once the non-linearities set in, because the classical trajectories diverge rapidly from each other. Also, as seen in the figures, any small amount of back-reaction can move modes between the oscillation bands and the growth bands. Let us assume that this will happen for some fixed value of the off-diagonal perturbation in the classical picture. The time to reach this value will depend on the initial amplitude of the off-diagonal mode. If we begin in an adiabatic ground state for these modes in the non-tachyonic region, we can estimate the initial size of fluctuations by a harmonic oscillator wave function. This is a power-law function of $\hbar$. The growth is exponential (and can easily be of a few orders of magnitude per oscillation), so the time it takes quantum perturbations around a fixed classical periodic orbit to grow to the stage where back-reaction becomes important is logarithmic in the $\hbar\to 0$ limit. As we increase $\hbar$, the time to reach back-reaction is shorter. In the very quantum regime the approximations used here break down and a more robust formalism with quantum back-reaction needs to be developed (an initial attempt at such a setup has recently been pursued in \cite{Asplund:2010xh}). This suggests that in the strong coupling regime thermalization could be very fast. \section{Conclusion} \label{sec-6} The purpose of this paper was to initiate a study of the process of formation and thermalization of black holes, using the gauge/gravity correspondence. Since these are notoriously complicated problems to address in ordinary field theories, our strategy has been to focus our attention on simplified models with a reduced number of degrees of freedom. The prototypical example of such models is the BFSS matrix quantum mechanics describing M-theory in the discrete light-cone quantization of flat space. Unfortunately, this model possesses some features that make its study extremely challenging. These features include the presence of flat directions in the potential (which give rise to a continuous spectrum and to the difficulty of distinguishing between single-particle and multi-particle states) and the absence of a tunable coupling constant. Moreover, the wave function of bound states in this model is not known, making it impossible to describe scattering processes in the regime of high energy and small impact parameters where the non-linearities of gravity become important and black hole formation takes place.\footnote{Most of the past work aimed at matching the loop expansion of the BFSS matrix model with perturbative expansions in gravity has been limited to linearized examples or to terms protected by non-renormalization theorems; see for example \cite{Becker:1997xw} and \cite{wati} for a review.} We have therefore considered a different model, the BMN matrix model given by (\ref{BMNaction}), where many of the problems that plague the BFSS model no longer occur. For example, the mass terms in the BMN model lift the flat directions and give rise to a discrete spectrum with an isolated set of classical vacua, the fuzzy spheres that have been the central ingredient of our setup. In the limit of large mass, these different vacua are divided into superselection sectors described by harmonic oscillators, and the spectrum of their fluctuations can be studied.
While the supersymmetric vacua of the model consist of concentric spheres, one can also consider non-BPS configurations obtained by displacing the centers of the spheres. These are obtained by turning on the diagonal modes that control the center of mass motion of the spheres without deforming their shape. In this way one can set up a scattering problem that might be used as a proxy, under computational control, for the high energy scattering of particles in gravity. We have started our analysis by considering two fuzzy spheres, displaced along a direction in their world-volume and frozen in time in that position. The formation and thermalization of a black hole can be associated with copious particle production in off-diagonal modes of a configuration. The presence of classical tachyons makes the analysis simpler, as we get a classical instability that can in principle drive the system towards thermalization. Expanding the fields in fuzzy spherical harmonics, we have found that the modes with the maximal angular momentum along the direction of the displacement become tachyonic at the intersection locus between the two spheres. Interestingly, this instability has a four-dimensional interpretation. It can in fact be regarded as a Nielsen-Olesen instability in ${\cal N}=4$ super Yang-Mills, of which the BMN model is a truncation. The role of the background magnetic field is played in our system by the displacement vector. An obvious generalization of our initial conditions would be to allow for a displacement of the fuzzy spheres also along the transverse directions (this corresponds to turning on some $\vev{Y^a}\neq 0$). The analysis in this case is slightly complicated by the fact that the fluctuations of the $X^i$ and the $Y^a$ fields get coupled and the diagonalization problem becomes less straightforward. One could also try to include the fermionic modes, which have not been considered here. In the last part of the paper, we have allowed our initial configuration to evolve in time, describing a periodic motion with the two fuzzy spheres oscillating along the direction of the displacement and crossing each other repeatedly. This is the simplest trajectory to study and gives rise to a dynamics that is somewhat similar to the physics of preheating during inflation. We have argued that the tachyonic modes that form at the intersection locus between the spheres typically get reinforced after each period of the oscillation. The growth is exponential, and can potentially give rise to a fast thermalization of the other degrees of freedom living on the spheres. If these instabilities cause the BMN model to thermalize fast, it is suggestive that the Nielsen-Olesen instability could drive fast thermalization in QCD processes like heavy-ion collisions. This thermalization is observed in experiments at RHIC \cite{Ackermann:2000tr} and it is argued to happen on a very short time scale \cite{Heinz:2004pj}. A better understanding of the details of this dynamics is surely desirable, and this paper can be considered as a first step toward this ambitious goal. In particular, we find it extremely interesting to try to estimate from our setup the time scales characterizing the various phases of the black hole evolution, for example the thermalization time. In this regard, it has recently been conjectured \cite{SS} (prompted by the analysis in \cite{preskill}) that black holes are the `fastest scramblers' in nature.
By fastest it is meant that the thermalization time scale is logarithmic in the number of degrees of freedom of the system, rather than a power law, as was originally proposed in \cite{page}. Our hope is that it might be possible to check this claim using the ideas and techniques we have presented in this paper. An even more involved scenario would start with a configuration that carries angular momentum in the plane. Such a scenario might lead to different black hole shapes. The time-dependent analysis we performed would be more complicated because there is more mixing between modes (the system does not preserve an azimuthal symmetry). We conclude by repeating the observation in the Introduction that the BMN matrix model is amenable to being put on a computer. We can think of implementing numerical simulations of our system, where we define some initial configuration of displaced fuzzy spheres and let them evolve. Such simulations should provide us with a more accurate description of the details of the dynamics that follows the formation of the tachyons and shed light on what happens during the first phases of the evolution of the black holes. We are currently looking into this. \subsection*{Acknowledgements} D.B. would like to thank C. Asplund, D. Kabat, J. Maldacena, E. Silverstein, and H. Verlinde for various discussions related to this work. D.T. is grateful to S. Giddings for several instructive discussions on high energy scattering in gravity and to N. Iizuka for collaboration on related topics. D.B. would like to thank the Simons Center for Geometry and Physics, where some of this work was carried out. Work supported in part by DE-FG02-91ER40618. D.T. was also supported by PHY04-56556 and DE-FG02-95ER40896.
\section{Introduction} \label{Introduction} This paper focuses on small organ ({\em e.g.}, the {\em pancreas}) segmentation from abdominal CT scans, which is an important prerequisite for enabling computers to assist human doctors for clinical purposes. This problem falls into the research area named {\em medical imaging analysis}. Recently, great progress has been brought to this field by the fast development of deep learning, especially convolutional neural networks~\cite{Krizhevsky_2012_ImageNet}\cite{Long_2015_Fully}. Many conventional methods, such as the graph-based segmentation approaches~\cite{Ali_2007_Graph} or those based on handcrafted local features~\cite{Wang_2014_Geodesic}, have been replaced by deep segmentation networks, which typically produce higher segmentation accuracy~\cite{Ronneberger_2015_UNet}\cite{Roth_2015_DeepOrgan}. \newcommand{7.0cm}{7.0cm} \begin{figure}[t] \begin{center} \includegraphics[width=7.0cm]{Dataset.pdf} \end{center} \caption{ A typical example from the NIH {\em pancreas} segmentation dataset~\cite{Roth_2015_DeepOrgan} (best viewed in color). We highlight the {\em pancreas} in red, seen from three different viewpoints. It is a relatively small organ with irregular shape and boundary. } \label{Fig:Dataset} \end{figure} Segmenting a small organ from CT scans is often challenging. As the target often occupies a {\em small part} of the input data ({\em e.g.}, less than $1.5\%$ in a 2D image, see Figure~\ref{Fig:Dataset}), deep segmentation networks such as FCN~\cite{Long_2015_Fully} and DeepLab~\cite{Chen_2015_Semantic} can be easily confused by the background region, which may contain complicated and variable contents. This motivates researchers to propose a {\em coarse-to-fine} approach~\cite{Zhou_2017_Fixed} with two {\em stages}, in which the coarse stage provides a rough localization and the fine stage performs accurate segmentation. However, despite the state-of-the-art performance achieved in pancreas segmentation, this method suffers from {\em inconsistency} between its training and testing flowcharts: the training phase dealt with the coarse and fine stages individually and did not minimize a global energy function, whereas the testing phase assumed that these two stages could cooperate with each other in an iterative process. From another perspective, this also makes it difficult for multi-stage visual cues to be incorporated into segmentation; {\em e.g.}, the previous segmentation mask, which carries rich information, is discarded except for the bounding box. As a consequence, the fine stage, which consists of a sequence of iterations, does not converge very well, and sometimes produces even lower segmentation accuracy than the coarse stage (see Section~\ref{Approach:Baseline}). Motivated to alleviate these shortcomings, we propose a {\bf Recurrent Saliency Transformation Network}. The chief innovation is to relate the coarse and fine stages with a saliency transformation module, which repeatedly transforms the segmentation probability map from the previous iteration into spatial priors for the current iteration. This brings two advantages over~\cite{Zhou_2017_Fixed}. First, in the training phase, the coarse-scaled and fine-scaled networks are optimized jointly, so that the segmentation ability of each of them gets improved. Second, in the testing phase, the segmentation mask of each iteration is preserved and propagated throughout iterations, enabling multi-stage visual cues to be incorporated towards more accurate segmentation.
To the best of our knowledge, this idea has not been studied in the computer vision community, as it requires making use of some special properties of CT scans (see Section~\ref{Approach:Discussions}). We perform experiments on two CT datasets for small organ segmentation. On the NIH {\em pancreas} segmentation dataset~\cite{Roth_2015_DeepOrgan}, our approach outperforms the state-of-the-art by an average of over $2\%$, measured by the average Dice-S{\o}rensen coefficient (DSC). On another multi-organ dataset collected by the radiologists in our team, we also show the superiority of our approach over the baseline on a variety of small organs. In the testing phase, our approach enjoys better convergence properties, which guarantees its efficiency and reliability in real clinical applications. The remainder of this paper is organized as follows. Section~\ref{RelatedWork} briefly reviews related work, and Section~\ref{Approach} describes the proposed approach. After experiments are shown in Sections~\ref{ExperimentsNIH} and~\ref{ExperimentsMutliOrgan}, we draw our conclusions in Section~\ref{Conclusions}. \section{Related Work} \label{RelatedWork} Computer-aided diagnosis (CAD) is an important technique which can assist human doctors in many clinical scenarios. An important prerequisite of CAD is medical imaging analysis. As a popular and cheap way of medical imaging, contrast-enhanced computed tomography (CECT) produces detailed images of internal organs, bones, soft tissues and blood vessels. It is of great value to automatically segment organs and/or soft tissues from these CT volumes for further diagnosis~\cite{Brosch_2016_Deep}\cite{Wang_2016_Deep}\cite{Havaei_2017_Brain}\cite{Zhou_2017_Deep}. To capture specific properties of different organs, researchers often design individualized algorithms for each of them. Typical examples include the {\em liver}~\cite{Ling_2008_Hierarchical}\cite{Heimann_2009_Comparison}, the {\em spleen}~\cite{Linguraru_2010_Automated}, the {\em kidneys}~\cite{Lin_2006_Computer}\cite{Ali_2007_Graph}, the {\em lungs}~\cite{Hu_2001_Automatic}, the {\em pancreas}~\cite{Chu_2013_Multi}\cite{Wang_2014_Geodesic}, {\em etc}. Small organs ({\em e.g.}, the {\em pancreas}) are often more difficult to segment, partly due to their low contrast and large anatomical variability in size and (most often irregular) shape. Compared to the conventional approaches used in the papers cited above, the progress of deep learning has brought more powerful and efficient solutions. In particular, convolutional neural networks have been applied to a wide range of vision tasks, such as image classification~\cite{Krizhevsky_2012_ImageNet}\cite{Simonyan_2015_Very}\cite{He_2016_Deep}, object detection~\cite{Girshick_2014_Rich}\cite{Ren_2015_Faster}, and semantic segmentation~\cite{Long_2015_Fully}\cite{Chen_2015_Semantic}. Recurrent neural networks, as a related class of networks, were first designed to process sequential data~\cite{Graves_2013_Speech}\cite{Socher_2011_Parsing}, and later generalized to image classification~\cite{Liang_2015_Recurrent} and scene labeling~\cite{Pinheiro_2014_Recurrent} tasks. In the area of medical imaging analysis, in particular organ segmentation, these techniques have been shown to significantly outperform conventional approaches, {\em e.g.}, segmenting the {\em liver}~\cite{Dou_2016_3D}, the {\em lung}~\cite{Harrison_2017_Progressive}, or the {\em pancreas}~\cite{Roth_2016_Spatial}\cite{Cai_2017_Improving}\cite{Roth_2017_Spatial}.
Note that medical images differ from natural images in that data appear in a volumetric form. To deal with these data, researchers either slice a 3D volume into 2D slices (as in this work), or train a 3D network directly~\cite{Merkow_2016_Dense}\cite{Milletari_2016_VNet}\cite{Kamnitsas_2017_Efficient}\cite{Yu_2017_Volumetric}. In the latter case, limited GPU memory often leads to patch-based training and testing strategies. The tradeoff between 2D and 3D approaches is discussed in~\cite{Lai_2015_Deep}. By comparison to the entire CT volume, the organs considered in this paper often occupy a relatively small area. As deep segmentation networks such as FCN~\cite{Long_2015_Fully} are less accurate in depicting small targets, researchers proposed two types of ideas to improve detection and/or segmentation performance. The first type involved rescaling the image so that the target becomes comparable to the training samples~\cite{Xia_2016_Zoom}, and the second one considered to focus on a subregion of the image for each target to obtain higher accuracy in detection~\cite{Chen_2016_Mitosis} or segmentation~\cite{Zhou_2017_Fixed}. The coarse-to-fine idea was also well studied in the computer vision area for saliency detection~\cite{Kuen_2016_Recurrent} or semantic segmentation~\cite{Li_2017_Instance}\cite{Lin_2017_RefineNet}. This paper is based on a recent coarse-to-fine framework~\cite{Zhou_2017_Fixed}, but we go one step further by incorporating multi-stage visual cues in optimization. \vspace{-0.02cm} \section{Our Approach} \label{Approach} We investigate the problem of segmenting an organ from abdominal CT scans. Let a CT image be a 3D volume $\mathbf{X}$ of size $W\times H\times L$ which is annotated with a binary ground-truth segmentation $\mathbf{Y}$ where ${y_i}={1}$ indicates a foreground voxel. The goal of our work is to produce a binary output volume $\mathbf{Z}$ of the same dimension. Denote $\mathcal{Y}$ and $\mathcal{Z}$ as the set of foreground voxels in the ground-truth and prediction, {\em i.e.}, ${\mathcal{Y}}={\left\{i\mid y_i=1\right\}}$ and ${\mathcal{Z}}={\left\{i\mid z_i=1\right\}}$. The accuracy of segmentation is evaluated by the Dice-S{\o}rensen coefficient (DSC): ${\mathrm{DSC}\!\left(\mathcal{Y},\mathcal{Z}\right)}= {\frac{2\times\left|\mathcal{Y}\cap\mathcal{Z}\right|}{\left|\mathcal{Y}\right|+\left|\mathcal{Z}\right|}}$. This metric falls in the range of $\left[0,1\right]$ with $1$ implying perfect segmentation. \subsection{Coarse-to-Fine Segmentation and Drawbacks} \label{Approach:Baseline} \renewcommand{7.0cm}{8.0cm} \begin{figure}[t] \begin{center} \includegraphics[width=7.0cm]{Motivation.pdf} \end{center} \caption{ A failure case of the stage-wise {\em pancreas} segmentation approach~\cite{Zhou_2017_Fixed} (in the {\em axial} view, best viewed in color). The red masks show ground-truth segmentations, and the green frames indicate the bounding box derived from the coarse stage. In either slice, unsatisfying segmentation is produced at the fine stage, because the cropped region does not contain enough contextual information, whereas the coarse-scaled probability map carrying such information is discarded. This is improved by the proposed Recurrent Saliency Transformation Network, see Figure~\ref{Fig:VisualizationNIH}. } \label{Fig:Motivation} \end{figure} We start with training 2D deep networks for 3D segmentation\footnote{ Please see Section~\ref{ExperimentsNIH:Comparison} for the comparison to 3D networks.}. 
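For quick reference, the DSC metric just defined can be computed with the following minimal NumPy sketch; treating two empty masks as a perfect match is an assumption of the sketch, not a convention stated in the text.
\begin{verbatim}
import numpy as np

def dsc(y_true, z_pred):
    # Dice-Sorensen coefficient between two binary volumes of equal shape.
    y = np.asarray(y_true).astype(bool)
    z = np.asarray(z_pred).astype(bool)
    denom = y.sum() + z.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement (a convention)
    return 2.0 * np.logical_and(y, z).sum() / denom
\end{verbatim}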
Each 3D volume $\mathbf{X}$ is sliced along three axes, the {\em coronal}, {\em sagittal} and {\em axial} views, and these 2D slices are denoted by $\mathbf{X}_{\mathrm{C},w}$ (${w}={1,2,\ldots,W}$), $\mathbf{X}_{\mathrm{S},h}$ (${h}={1,2,\ldots,H}$) and $\mathbf{X}_{\mathrm{A},l}$ (${l}={1,2,\ldots,L}$), where the subscripts $\mathrm{C}$, $\mathrm{S}$ and $\mathrm{A}$ stand for {\em coronal}, {\em sagittal} and {\em axial}, respectively. On each axis, an individual 2D-FCN~\cite{Long_2015_Fully} on a $16$-layer VGGNet~\cite{Simonyan_2015_Very} is trained\footnote{ This is a simple segmentation baseline with a relatively shallow network. Deeper network structures such as ResNet~\cite{He_2016_Deep} and more complicated segmentation frameworks such as DeepLab~\cite{Chen_2015_Semantic}, while requiring a larger memory and preventing us from training two stages jointly (see Section~\ref{Approach:Formulation}), often result in lower segmentation accuracy as these models seem to over-fit in these CT datasets.}. Three FCN models are denoted by $\mathbb{M}_\mathrm{C}$, $\mathbb{M}_\mathrm{S}$ and $\mathbb{M}_\mathrm{A}$, respectively. We use the DSC loss~\cite{Milletari_2016_VNet} in the training phase so as to prevent the models from being biased towards the background class. Both multi-slice segmentation ($3$ neighboring slices are combined as a basic unit in training and testing) and multi-axis fusion (majority voting over three axes) are performed to incorporate pseudo-3D information into segmentation. The organs investigated in this paper ({\em e.g.}, the {\em pancreas}) are relatively small. In each 2D slice, the fraction of the foreground pixels is often smaller than $1.5\%$. To prevent deep networks such as FCN~\cite{Long_2015_Fully} from being confused by the complicated and variable background contents, \cite{Zhou_2017_Fixed} proposed to focus on a smaller input region according to an estimated bounding box. On each viewpoint, two networks were trained for coarse-scaled segmentation and fine-scaled segmentation, respectively. In the testing process, the coarse-scaled network was first used to obtain the rough position of the {\em pancreas}, and the fine-scaled network was executed several times and the segmentation mask was updated iteratively until convergence. Despite the significant accuracy gain brought by this approach, we notice a drawback originating from the {\em inconsistency} between its training and testing strategies. That is to say, the training stage dealt with two networks individually without enabling global optimization, but the testing phase assumed that they can cooperate with each other in a sequence of iterations. From another perspective, a pixel-wise segmentation probability map was predicted by the coarse stage, but the fine stage merely preserved the bounding box and discarded the remainder, which is a major information loss. Sometimes, the image region within the bounding box does not contain sufficient spatial contexts, and thus the fine stage can be confused and produce even lower segmentation accuracy than the coarse stage. A failure case is shown in Figure~\ref{Fig:Motivation}. This motivates us to connect these two stages with a saliency transformation module so as to jointly optimize their parameters. \subsection{Recurrent Saliency Transformation Network} \label{Approach:Formulation} Following the baseline approach, we train an individual model for each of the three viewpoints. 
Without loss of generality, we consider a 2D slice along the {\em axial} view, denoted by $\mathbf{X}_{\mathrm{A},l}$. Our goal is to infer a binary segmentation mask $\mathbf{Z}_{\mathrm{A},l}$ of the same dimensionality. In the context of deep neural networks~\cite{Long_2015_Fully}\cite{Chen_2015_Semantic}, this is often achieved by first computing a {\em probability map} ${\mathbf{P}_{\mathrm{A},l}}={\mathbf{f}\!\left[\mathbf{X}_{\mathrm{A},l};\boldsymbol{\theta}\right]}$, where $\mathbf{f}\!\left[\cdot;\boldsymbol{\theta}\right]$ is a deep segmentation network (FCN throughout this paper) with $\boldsymbol{\theta}$ being network parameters, and then binarizing $\mathbf{P}_{\mathrm{A},l}$ into $\mathbf{Z}_{\mathrm{A},l}$ using a fixed threshold of $0.5$, {\em i.e.}, ${\mathbf{Z}_{\mathrm{A},l}}={\mathbb{I}\!\left[\mathbf{P}_{\mathrm{A},l}\geqslant0.5\right]}$. In order to assist segmentation with the probability map, we introduce $\mathbf{P}_{\mathrm{A},l}$ as a latent variable. We introduce a {\em saliency transformation} module, which takes the probability map to generate an updated input image, {\em i.e.}, ${\mathbf{I}_{\mathrm{A},l}}={\mathbf{X}_{\mathrm{A},l}\odot \mathbf{g}\!\left(\mathbf{P}_{\mathrm{A},l};\boldsymbol{\eta}\right)}$, and uses the updated input $\mathbf{I}_{\mathrm{A},l}$ to replace $\mathbf{X}_{\mathrm{A},l}$. Here $\mathbf{g}\!\left[\cdot;\boldsymbol{\eta}\right]$ is the transformation function with parameters $\boldsymbol{\eta}$, and $\odot$ denotes element-wise product, {\em i.e.}, the transformation function adds spatial weights to the original input image. Thus, the segmentation process becomes: \begin{equation} \label{Eqn:RecurrentNetwork} {\mathbf{P}_{\mathrm{A},l}}= {\mathbf{f}\!\left[\mathbf{X}_{\mathrm{A},l}\odot \mathbf{g}\!\left(\mathbf{P}_{\mathrm{A},l};\boldsymbol{\eta}\right);\boldsymbol{\theta}\right]}. \end{equation} This is a recurrent neural network. Note that the saliency transformation function $\mathbf{g}\!\left[\cdot,\boldsymbol{\eta}\right]$ needs to be differentiable so that the entire recurrent network can be optimized in an end-to-end manner. As $\mathbf{X}_{\mathrm{A},l}$ and $\mathbf{P}_{\mathrm{A},l}$ share the same spatial dimensionality, we set $\mathbf{g}\!\left[\cdot,\boldsymbol{\eta}\right]$ to be a {\em size-preserved} convolution, which allows the weight added to each pixel to be determined by the segmentation probabilities in a small neighborhood around it. As we will show in the experimental section (see Figure~\ref{Fig:VisualizationNIH}), the learned convolutional kernels are able to extract complementary information to help the next iteration. \renewcommand{7.0cm}{8.0cm} \begin{figure}[t] \begin{center} \includegraphics[width=7.0cm]{RecurrentNetwork.pdf} \end{center} \caption{ We formulate our approach into a recurrent network, and unfold it for optimization and inference. } \label{Fig:RecurrentNetwork} \end{figure} \renewcommand{7.0cm}{15.0cm} \begin{figure*}[t] \begin{center} \includegraphics[width=7.0cm]{Framework.pdf} \end{center} \caption{ Illustration of the training process (best viewed in color). We display an input image along the {\em axial} view which contains $3$ neighboring slices. To save space, we only plot the coarse stage and the first iteration in the fine stage. } \label{Fig:Framework} \end{figure*} To optimize Eqn~\eqref{Eqn:RecurrentNetwork}, we unfold the recurrent network into a plain form (see Figure~\ref{Fig:RecurrentNetwork}). 
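Before unfolding the recurrence, the following minimal sketch illustrates a single application of Eqn~\eqref{Eqn:RecurrentNetwork} (Python with NumPy/SciPy; the fixed averaging kernel is only a placeholder for the learned parameters $\boldsymbol{\eta}$, and the toy \texttt{segment\_fn} stands in for the FCN $\mathbf{f}\!\left[\cdot;\boldsymbol{\theta}\right]$).
\begin{verbatim}
import numpy as np
from scipy.ndimage import convolve

def saliency_transform(prob_map, kernel):
    # Size-preserved convolution of the probability map: a stand-in for g[.; eta].
    return convolve(prob_map, kernel, mode="nearest")

def recurrent_step(x_slice, prob_map, segment_fn, kernel):
    # One pass of the recurrence: reweight the input with the transformed
    # probability map (element-wise product), then re-run the segmentation network.
    weights = saliency_transform(prob_map, kernel)
    return segment_fn(x_slice * weights)

# Illustrative usage with random data and a placeholder "network".
rng = np.random.default_rng(0)
x = rng.random((512, 512))
p = rng.random((512, 512))
kernel = np.full((3, 3), 1.0 / 9.0)  # placeholder for the learned 3x3 kernel
segment_fn = lambda img: 1.0 / (1.0 + np.exp(-(img - img.mean())))
p_next = recurrent_step(x, p, segment_fn, kernel)
\end{verbatim}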
Given an input image $\mathbf{X}_{\mathrm{A},l}$ and an integer $T$ which is the maximal number of iterations, we update $\mathbf{I}_{\mathrm{A},l}^{\left(t\right)}$ and $\mathbf{P}_{\mathrm{A},l}^{\left(t\right)}$, ${t}={0,1,\ldots,T}$: \begin{eqnarray} \label{Eqn:RecurrentComputation} {\mathbf{I}_{\mathrm{A},l}^{\left(t\right)}} & = & {\mathbf{X}_{\mathrm{A},l}\odot\mathbf{g}\!\left(\mathbf{P}_{\mathrm{A},l}^{\left(t-1\right)};\boldsymbol{\eta}\right)}, \\ {\mathbf{P}_{\mathrm{A},l}^{\left(t\right)}} & = & {\mathbf{f}\!\left[\mathbf{I}_{\mathrm{A},l}^{\left(t\right)};\boldsymbol{\theta}\right]}. \end{eqnarray} Note that the original input image $\mathbf{X}_{\mathrm{A},l}$ does not change, and the parameters $\boldsymbol{\theta}$ and $\boldsymbol{\eta}$ are shared by all iterations. At ${t}={0}$, we directly set ${\mathbf{I}_{\mathrm{A},l}^{\left(0\right)}}={\mathbf{X}_{\mathrm{A},l}}$. When segmentation masks $\mathbf{P}_{\mathrm{A},l}^{\left(t\right)}$ (${t}={0,1,\ldots,T-1}$) are available for reference, deep networks benefit considerably from a shrunk input region especially when the target organ is very small. Thus, we define a {\em cropping} function $\mathrm{Crop}\!\left[\cdot;\mathbf{P}_{\mathrm{A},l}^{\left(t\right)}\right]$, which takes $\mathbf{P}_{\mathrm{A},l}^{\left(t\right)}$ as the {\em reference map}, binarizes it into ${\mathbf{Z}_{\mathrm{A},l}^{\left(t\right)}}= {\mathbb{I}\!\left[\mathbf{P}_{\mathrm{A},l}^{\left(t\right)}\geqslant0.5\right]}$, finds the minimal rectangle covering all the activated pixels, and adds a $K$-pixel-wide margin (padding) around it. We fix $K$ to be $20$; our algorithm is not sensitive to this parameter. Finally note that $\mathbf{I}_{\mathrm{A},l}^{\left(0\right)}$, the original input (the entire 2D slice), is much larger than the cropped inputs $\mathbf{I}_{\mathrm{A},l}^{\left(t\right)}$ for ${t}>{0}$. We train two FCN's to deal with such a major difference in input data. The first one is named the {\em coarse-scaled} segmentation network, which is used {\em only} in the first iteration. The second one, the {\em fine-scaled} segmentation network, takes the charge of all the remaining iterations. We denote their parameters by $\boldsymbol{\theta}^\mathrm{C}$ and $\boldsymbol{\theta}^\mathrm{F}$, respectively. These two FCN's are optimized jointly. We compute a DSC loss term on each probability map $\mathbf{P}_{\mathrm{A},l}^{\left(t\right)}$, ${t}={0,1,\ldots,T}$, and denote it by $\mathcal{L}\!\left\{\mathbf{Y}_{\mathrm{A},l},\mathbf{P}_{\mathrm{A},l}^{\left(t\right)}\right\}$. Here, $\mathbf{Y}_{\mathrm{A},l}$ is the ground-truth segmentation mask, and ${\mathcal{L}\!\left\{\mathbf{Y},\mathbf{P}\right\}}={1-\frac{2\times{\sum_i}Y_iP_i}{{\sum_i}Y_i+P_i}}$ is based on a {\em soft} version of DSC~\cite{Milletari_2016_VNet}. Our goal is to minimize the overall loss: \begin{equation} \label{Eqn:LossFunction} {\mathcal{L}}={{\sum_{t=0}^T}\lambda_t\cdot \mathcal{L}\!\left\{\mathbf{Y}_{\mathrm{A},l}^{\left(t\right)},\mathbf{Z}_{\mathrm{A},l}^{\left(t\right)}\right\}}. \end{equation} This leads to joint optimization over all iterations, which involves network parameters $\boldsymbol{\theta}^\mathrm{C}$, $\boldsymbol{\theta}^\mathrm{F}$, and transformation parameters $\boldsymbol{\eta}$. $\left\{\lambda_t\right\}_{t=0}^T$ controls the tradeoff among all loss terms. We set ${2\lambda_0}={\lambda_1}=\ldots={\lambda_T}={2/\left(2T+1\right)}$ so as to encourage accurate fine-scaled segmentation. 
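A minimal sketch of the cropping function and the soft DSC loss described above is given below (NumPy, with the $K=20$ margin from the text; the function names, the fallback when no pixel is activated, and the small $\epsilon$ added for numerical safety are our own choices).
\begin{verbatim}
import numpy as np

def crop(image, reference_map, margin=20, threshold=0.5):
    # Crop[.; P]: binarize the reference map, find the minimal box covering the
    # activated pixels, and pad it with a margin-pixel-wide frame.
    mask = reference_map >= threshold
    if not mask.any():
        return image  # nothing activated: keep the full slice (a fallback choice)
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    r0, r1 = max(rows[0] - margin, 0), min(rows[-1] + margin + 1, image.shape[0])
    c0, c1 = max(cols[0] - margin, 0), min(cols[-1] + margin + 1, image.shape[1])
    return image[r0:r1, c0:c1]

def soft_dsc_loss(y_true, p_pred, eps=1e-6):
    # L{Y, P} = 1 - 2*sum(Y*P) / (sum(Y) + sum(P)); eps avoids division by zero.
    num = 2.0 * np.sum(y_true * p_pred)
    den = np.sum(y_true) + np.sum(p_pred) + eps
    return 1.0 - num / den
\end{verbatim}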
\subsection{Training and Testing} \label{Approach:TrainingAndTesting} \newcommand{-0.01cm}{-0.01cm} \begin{algorithm}[t!] \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetKwInOut{Return}{Return} \Input{ input volume $\mathbf{X}$, viewpoint ${\mathcal{V}}={\left\{\mathrm{C},\mathrm{S},\mathrm{A}\right\}}$;\\ \vspace{-0.01cm} parameters $\boldsymbol{\theta}_\mathrm{v}^\mathrm{C}$, $\boldsymbol{\theta}_\mathrm{v}^\mathrm{F}$ and $\boldsymbol{\eta}_\mathrm{v}$, ${\mathrm{v}}\in{\mathcal{V}}$;\\ \vspace{-0.01cm} max number of iterations $T$, threshold $\mathrm{thr}$; } \Output{ segmentation volume $\mathbf{Z}$; } ${t}\leftarrow{0}$, ${\mathbf{I}_\mathrm{v}^{\left(0\right)}}\leftarrow{\mathbf{X}}$, ${\mathrm{v}}\in{\mathcal{V}}$;\\ \vspace{-0.01cm} ${\mathbf{P}_{\mathrm{v},l}^{\left(0\right)}}\leftarrow {\mathbf{f}\!\left[\mathbf{I}_{\mathrm{v},l}^{\left(0\right)};\boldsymbol{\theta}_\mathrm{v}^\mathrm{C}\right]}$, ${\mathrm{v}}\in{\mathcal{V}}$, $\forall l$;\\ \vspace{-0.01cm} ${\mathbf{P}^{\left(0\right)}}={\frac{\mathbf{P}_\mathrm{C}^{\left(0\right)}+ \mathbf{P}_\mathrm{S}^{\left(0\right)}+\mathbf{P}_\mathrm{A}^{\left(0\right)}}{3}}$, ${\mathbf{Z}^{\left(0\right)}}={\mathbb{I}\!\left[\mathbf{P}^{\left(0\right)}\geqslant0.5\right]}$;\\ \vspace{-0.01cm} \Repeat{${t}={T}$\ {\bf or}\ ${\mathrm{DSC}\!\left\{\mathbf{Z}^{\left(t-1\right)},\mathbf{Z}^{\left(t\right)}\right\}}\geqslant{\mathrm{thr}}$}{ ${t}\leftarrow{t+1}$;\\ \vspace{-0.01cm} ${\mathbf{I}_{\mathrm{v},l}^{\left(t\right)}}\leftarrow {\mathbf{X}_{\mathrm{v},l}\odot\mathbf{g}\!\left(\mathbf{P}_{\mathrm{v},l}^{\left(t-1\right)};\boldsymbol{\eta}\right)}$, ${\mathrm{v}}\in{\mathcal{V}}$, $\forall l$;\\ \vspace{-0.01cm} ${\mathbf{P}_{\mathrm{v},l}^{\left(t\right)}}\leftarrow {\mathbf{f}\!\left[\mathrm{Crop}\!\left[ \mathbf{I}_{\mathrm{v},l}^{\left(t\right)};\mathbf{P}_{\mathrm{v},l}^{\left(t-1\right)}\right]; \boldsymbol{\theta}_\mathrm{v}^\mathrm{F}\right]}$, ${\mathrm{v}}\in{\mathcal{V}}$, $\forall l$;\\ \vspace{-0.01cm} ${\mathbf{P}^{\left(t\right)}}={\frac{\mathbf{P}_\mathrm{C}^{\left(t\right)}+ \mathbf{P}_\mathrm{S}^{\left(t\right)}+\mathbf{P}_\mathrm{A}^{\left(t\right)}}{3}}$, ${\mathbf{Z}^{\left(t\right)}}={\mathbb{I}\!\left[\mathbf{P}^{\left(t\right)}\geqslant0.5\right]}$;\\ \vspace{-0.01cm} } \Return{ ${\mathbf{Z}}\leftarrow{\mathbf{Z}^{\left(t\right)}}$. } \caption{ The Testing Phase } \label{Alg:Testing} \end{algorithm} {\bf The training phase} is aimed at minimizing the loss function $\mathcal{L}$, defined in Eqn~\eqref{Eqn:LossFunction}, which is differentiable with respect to all parameters. In the early training stages, the coarse-scaled network cannot generate reasonable probability maps. To prevent the fine-scaled network from being confused by inaccurate input regions, we use the ground-truth mask $\mathbf{Y}_{\mathrm{A},l}$ as the reference map. After a sufficient number of training, we resume using $\mathbf{P}_{\mathrm{A},l}^{\left(t\right)}$ instead of $\mathbf{Y}_{\mathrm{A},l}$. In Section~\ref{ExperimentsNIH:Settings}, we will see that this ``fine-tuning'' strategy improves segmentation accuracy considerably. Due to the limitation in GPU memory, in each mini-batch containing one training sample, we set $T$ to be the maximal integer (not larger than $5$) so that we can fit the entire framework into the GPU memory. The overall framework is illustrated in Figure~\ref{Fig:Framework}. 
As a side note, we find that setting ${T}\equiv{1}$ also produces high accuracy, suggesting that major improvement is brought by joint optimization. {\bf The testing phase} follows the flowchart described in Algorithm~\ref{Alg:Testing}. There are two minor differences from the training phase. First, as the ground-truth segmentation mask $\mathbf{Y}_{\mathrm{A},l}$ is not available, the probability map $\mathbf{P}_{\mathrm{A},l}^{\left(t\right)}$ is always taken as the reference map for image cropping. Second, the number of iterations is no longer limited by the GPU memory, as the intermediate outputs can be discarded on the way. In practice, we terminate our algorithm when the similarity of two consecutive predictions, measured by ${\mathrm{DSC}\!\left\{\mathbf{Z}^{\left(t-1\right)},\mathbf{Z}^{\left(t\right)}\right\}}= {\frac{2\times{\sum_i}Z_i^{\left(t-1\right)}Z_i^{\left(t\right)}} {{\sum_i}Z_i^{\left(t-1\right)}+Z_i^{\left(t\right)}}}$, reaches a threshold $\mathrm{thr}$, or a fixed number ($T$) of iterations are executed. We will discuss these parameters in Section~\ref{ExperimentsNIH:Diagnosis:Convergence}. \subsection{Discussions} \label{Approach:Discussions} Coarse-to-fine recognition is an effective idea in medical imaging analysis. Examples include~\cite{Zhou_2017_Fixed}, our baseline, and~\cite{Chen_2016_Mitosis} for metosis detection. Our approach can be applied to most of them towards higher recognition performance. Attention-based or recurrent models are also widely used for natural image segmentation~\cite{Chen_2016_Attention}\cite{Li_2017_Instance}\cite{Xia_2016_Zoom}\cite{Lin_2017_RefineNet}. Our approach differs from them in making full use of the special properties of CT scans, {\em e.g.}, each organ appears at a roughly fixed position, and has a fixed number of components. Our approach can be applied to detecting the lesion areas of an organ~\cite{Kamnitsas_2017_Efficient}\cite{Zhou_2017_Deep}, or a specific type of vision problems such as {\em hair} segmentation in a {\em face}~\cite{Luo_2013_Structure}, or detecting the targets which are consistently small in the input images~\cite{Singh_2016_Learning}. \section{Pancreas Segmentation Experiments} \label{ExperimentsNIH} \newcommand{2.2cm}{2.2cm} \newcommand{1.2cm}{1.2cm} \begin{table*}[!btp] \centering \begin{tabular}{|l||R{2.2cm}|R{1.2cm}||R{1.2cm}|R{1.2cm}|} \hline Model & Average & Gain & Max & Min \\ \hline\hline Stage-wise segmentation~\cite{Zhou_2017_Fixed} & $82.37\pm 5.68$ & $ -$ & $90.85$ & $62.43$ \\ \hline\hline Using $3\times3$ kernels in saliency transformation (basic model) & $83.47\pm 5.78$ & $+0.00$ & $90.63$ & $57.85$ \\ \hline Using $1\times1$ kernels in saliency transformation & $82.85\pm 6.68$ & $-0.62$ & $90.40$ & $53.44$ \\ \hline Using $5\times5$ kernels in saliency transformation & $83.64\pm 5.29$ & $+0.17$ & $90.35$ & $66.35$ \\ \hline\hline Two-layer saliency transformation ($3\times3$ kernels) & $83.93\pm 5.43$ & $+0.46$ & $90.52$ & $64.78$ \\ \hline\hline Fine-tuning with noisy data ($3\times3$ kernels) & $83.99\pm 5.09$ & $+0.52$ & $90.57$ & $65.05$ \\ \hline \end{tabular} \caption{ Accuracy (DSC, $\%$) comparison of different settings of our approach. Please see the texts in Section~\ref{ExperimentsNIH:Settings} for detailed descriptions of these variants. For each variant, the ``gain'' is obtained by comparing its accuracy with the basic model. 
} \label{Tab:Settings} \end{table*} \subsection{Dataset and Evaluation} \label{ExperimentsNIH:DatasetAndEvaluation} We evaluate our approach on the NIH {\em pancreas} segmentation dataset~\cite{Roth_2015_DeepOrgan}, which contains $82$ contrast-enhanced abdominal CT volumes. The resolution of each scan is $512\times512\times L$, where ${L}\in{\left[181,466\right]}$ is the number of slices along the long axis of the body. The distance between neighboring voxels ranges from $0.5\mathrm{mm}$ to $1.0\mathrm{mm}$. Following the standard cross-validation strategy, we split the dataset into $4$ fixed folds, each of which contains approximately the same number of samples. We apply cross validation, {\em i.e.}, training the models on $3$ out of $4$ subsets and testing them on the remaining one. We measure the segmentation accuracy by computing the Dice-S{\o}rensen coefficient (DSC) for each sample, and report the average and standard deviation over all $82$ cases. \subsection{Different Settings} \label{ExperimentsNIH:Settings} We use the FCN-8s model~\cite{Long_2015_Fully} pre-trained on PascalVOC~\cite{Everingham_2010_Pascal}. We initialize the up-sampling layers with random weights, set the learning rate to be $10^{-4}$ and run $80\rm{,}000$ iterations. Different options are evaluated, including using different kernel sizes in saliency transformation, and whether to fine-tune the models using the predicted segmentations as reference maps (see the description in Section~\ref{Approach:TrainingAndTesting}). Quantitative results are summarized in Table~\ref{Tab:Settings}. As the saliency transformation module is implemented by a size-preserved convolution (see Section~\ref{Approach:Formulation}), the size of convolutional kernels determines the range that a pixel can use to judge its saliency. In general, a larger kernel size improves segmentation accuracy ($3\times3$ works significantly better than $1\times1$), but we observe the marginal effect: the improvement of $5\times5$ over $3\times3$ is limited. As we use $7\times7$ kernels, the segmentation accuracy is slightly lower than that of $5\times5$. This may be caused by the larger number of parameters introduced to this module. Another way of increasing the receptive field size is to use two convolutional layers with $3\times3$ kernels. This strategy, while containing a smaller number of parameters, works even better than using one $5\times5$ layer. But, we do not add more layers, as the performance saturates while computational costs increase. As described in Section~\ref{Approach:TrainingAndTesting}, we fine-tune these models with images cropped from the coarse-scaled segmentation mask. This is to adjust the models to the testing phase, in which the ground-truth mask is unknown, so that the fine-scaled segmentation needs to start with, and be able to revise the coarse-scaled segmentation mask. We use a smaller learning rate ($10^{-6}$) and run another $40\rm{,}000$ iterations. This strategy not only reports $0.52\%$ overall accuracy gain, but also alleviates over-fitting (see Section~\ref{ExperimentsNIH:Diagnosis:OverFitting}). In summary, all these variants produce higher accuracy than the state-of-the-art ($82.37\%$ by~\cite{Zhou_2017_Fixed}), which verifies that the major contribution comes from our recurrent framework which enables joint optimization. In the later experiments, we inherit the best variant learned from this section, including in a large-scale multi-organ dataset (see Section~\ref{ExperimentsMutliOrgan}). 
That is to say, we use two $3\times3$ convolutional layers for saliency transformation, and fine-tune the models with coarse-scaled segmentation. This setting produces an average accuracy of $84.50\%$, as shown in Table~\ref{Tab:ComparisonNIH}. \subsection{Comparison to the State-of-the-Art} \label{ExperimentsNIH:Comparison} \renewcommand{2.2cm}{2.0cm} \renewcommand{1.2cm}{1.0cm} \begin{table}[!btp] \centering \begin{tabular}{|l||R{2.2cm}|R{1.2cm}|R{1.2cm}|} \hline Approach & Average & Max & Min \\ \hline\hline Roth {\em et al.}~\cite{Roth_2015_DeepOrgan} & $71.42\pm10.11$ & $86.29$ & $23.99$ \\ \hline Roth {\em et al.}~\cite{Roth_2016_Spatial} & $78.01\pm 8.20$ & $88.65$ & $34.11$ \\ \hline Zhang {\em et al.}~\cite{Zhang_2016_Coarse} & $77.89\pm 8.52$ & $89.17$ & $43.67$ \\ \hline Roth {\em et al.}~\cite{Roth_2017_Spatial} & $81.27\pm 6.27$ & $88.96$ & $50.69$ \\ \hline Zhou {\em et al.}~\cite{Zhou_2017_Fixed} & $82.37\pm 5.68$ & $90.85$ & $62.43$ \\ \hline Cai {\em et al.}~\cite{Cai_2017_Improving} & $82.4 \pm 6.7 $ & $90.1 $ & $60.0 $ \\ \hline\hline Our Best Model & $\mathbf{84.50}\pm 4.97$ & $\mathbf{91.02}$ & $\mathbf{62.81}$ \\ \hline \end{tabular} \caption{ Accuracy (DSC, $\%$) comparison between our approach and the state-of-the-arts on the NIH {\em pancreas} segmentation dataset~\cite{Roth_2015_DeepOrgan}. \cite{Zhang_2016_Coarse} was implemented in~\cite{Zhou_2017_Fixed}. } \label{Tab:ComparisonNIH} \end{table} We show that our approach works better than the baseline, {\em i.e.}, the coarse-to-fine approach with two stages trained individually~\cite{Zhou_2017_Fixed}. As shown in Table~\ref{Tab:ComparisonNIH}, the average improvement over $82$ cases is $2.13\pm2.67\%$, which is impressive given such a high baseline accuracy ($82.37\%$ is already the state-of-the-art). The standard deviations ($5.68\%$ of~\cite{Zhou_2017_Fixed} and $4.97\%$ of ours) are mainly caused by the difference in scanning and labeling qualities. The student's $t$-test suggests statistical significance (${p}={3.62\times10^{-8}}$). A case-by-case study reveals that our approach reports higher accuracies on $67$ out of $82$ cases, with the largest advantage being $+17.60\%$ and the largest deficit being merely $-3.85\%$. We analyze the sources of improvement in Section~\ref{ExperimentsNIH:Diagnosis}. Another related work is~\cite{Zhang_2016_Coarse} which stacks two FCN's for segmentation. Our work differs from it by {\bf (i)} our model is recurrent, which allows fine-scaled segmentation to be updated iteratively, and {\bf (ii)} we crop the input image to focus on the salient region. Both strategies contribute significantly to segmentation accuracy. Quantitatively, \cite{Zhang_2016_Coarse} reported an average accuracy of $77.89\%$. Our approach achieves $78.23\%$ in the {\em coarse} stage, $82.73\%$ after {\em only one iteration}, and an entire testing phase reports $84.50\%$. We briefly discuss the advantages and disadvantages of using 3D networks. 3D networks capture richer contextual information, but also require training more parameters. Our 2D approach makes use of 3D contexts more efficiently. At the end of each iteration, predictions from three views are fused, and thus the saliency transformation module carries these information to the next iteration. We implement VNet~\cite{Milletari_2016_VNet}, and obtain an average accuracy of $83.18\%$ with a 3D {\em ground-truth} bounding box provided for each case. 
Without the ground-truth, a sliding-window process is required, which is very slow -- an average of $5$ minutes on a Titan-X Pascal GPU. In comparison, our approach needs $1.3$ minutes, slower than the baseline~\cite{Zhou_2017_Fixed} ($0.9$ minutes), but faster than other 2D approaches~\cite{Roth_2015_DeepOrgan}\cite{Roth_2016_Spatial} ($2$--$3$ minutes). \renewcommand{7.0cm}{16.0cm} \begin{figure*}[t] \begin{center} \includegraphics[width=7.0cm]{VisualizationNIH.pdf} \end{center} \caption{ Visualization of how recurrent saliency transformation works in coarse-to-fine segmentation (best viewed in color). This is a failure case of the stage-wise approach~\cite{Zhou_2017_Fixed} (see Figure~\ref{Fig:Motivation}), but segmentation accuracy is largely improved by making use of the probability map from the previous iteration to help the current iteration. Note that the three weight maps capture different visual cues, with two of them focused on the foreground region, and the remaining one focused on the background region. } \label{Fig:VisualizationNIH} \end{figure*} \subsection{Diagnosis} \label{ExperimentsNIH:Diagnosis} \subsubsection{Joint Optimization and Multi-Stage Cues} \label{ExperimentsNIH:Diagnosis:JointOptimization} Our approach enables joint training, which improves both the coarse and fine stages individually. We denote the two networks trained in~\cite{Zhou_2017_Fixed} by $\mathbb{I}^\mathrm{C}$ and $\mathbb{I}^\mathrm{F}$, and similarly, those trained in our approach by $\mathbb{J}^\mathrm{C}$ and $\mathbb{J}^\mathrm{F}$, respectively. In the coarse stage, $\mathbb{I}^\mathrm{C}$ reports $75.74\%$ and $\mathbb{J}^\mathrm{C}$ reports $78.23\%$. In the fine stage, applying $\mathbb{J}^\mathrm{F}$ on top of the output of $\mathbb{I}^\mathrm{C}$ yields $83.80\%$, which is considerably higher than $82.37\%$ ($\mathbb{I}^\mathrm{F}$ on top of $\mathbb{I}^\mathrm{C}$) but lower than $84.50\%$ ($\mathbb{J}^\mathrm{F}$ on top of $\mathbb{J}^\mathrm{C}$). Therefore, we conclude that both the coarse-scaled and fine-scaled networks benefit from joint optimization. A stronger coarse stage provides a better starting point, and a stronger fine stage improves the upper-bound. In Figure~\ref{Fig:VisualizationNIH}, we visualize how the recurrent network assists segmentation by incorporating multi-stage visual cues. This is a failure case of the baseline approach~\cite{Zhou_2017_Fixed} (see Figure~\ref{Fig:Motivation}), in which fine-scaled segmentation performed even worse because of the missing contextual information. It is interesting to see that in saliency transformation, different channels deliver complementary information, {\em i.e.}, two of them focus on the target organ, and the remaining one adds most of its weight to the background region. Similar phenomena happen in the models trained on different viewpoints and different folds. This reveals that, in addition to the foreground, the background and boundary also contribute to visual recognition~\cite{Zhu_2017_Object}. \vspace{-0.2cm} \subsubsection{Convergence} \label{ExperimentsNIH:Diagnosis:Convergence} We study convergence, which is a very important criterion for judging the reliability of our approach.
We choose the best model reporting an average accuracy of $84.50\%$, and record the inter-iteration DSC throughout the testing process: ${d^{\left(t\right)}}= {\mathrm{DSC}\!\left\{\mathbf{Z}^{\left(t-1\right)},\mathbf{Z}^{\left(t\right)}\right\}}= {\frac{2\times{\sum_i}Z_i^{\left(t-1\right)}Z_i^{\left(t\right)}} {{\sum_i}Z_i^{\left(t-1\right)}+Z_i^{\left(t\right)}}}$. After $1$, $2$, $3$, $5$ and $10$ iterations, these numbers are $0.9037$, $0.9677$, $0.9814$, $0.9908$ and $0.9964$ for our approach, and $0.8286$, $0.9477$, $0.9661$, $0.9743$ and $0.9774$ for~\cite{Zhou_2017_Fixed}, respectively. Each number reported by our approach is considerably higher than that of the baseline. The better convergence property provides us with the opportunity to set a stricter terminating condition, {\em e.g.}, using ${\mathrm{thr}}={0.99}$ rather than ${\mathrm{thr}}={0.95}$. We note that~\cite{Zhou_2017_Fixed} also tried to increase the threshold from $0.95$ to $0.99$, but only $3$ out of $82$ cases converged after $10$ iterations, and the average accuracy went down from $82.37\%$ to $82.28\%$. In contrast, when the threshold is increased from $0.95$ to $0.99$ in our approach, $80$ out of $82$ cases converge (in an average of $5.22$ iterations), and the average accuracy is improved from $83.93\%$ to $84.50\%$. In addition, the average number of iterations needed to achieve ${\mathrm{thr}}={0.95}$ is also reduced from $2.89$ in~\cite{Zhou_2017_Fixed} to $2.02$ in our approach. On a Titan-X Pascal GPU, one iteration takes $0.2$ minutes, so using ${\mathrm{thr}}={0.99}$ requires an average of $1.3$ minutes in each testing case. In comparison, \cite{Zhou_2017_Fixed} needs an average of $0.9$ minutes and~\cite{Roth_2016_Spatial} needs $2$--$3$ minutes. \vspace{-0.2cm} \subsubsection{The Over-Fitting Issue} \label{ExperimentsNIH:Diagnosis:OverFitting} Finally, we investigate the over-fitting issue by making use of {\em oracle} information in the testing process. Following~\cite{Zhou_2017_Fixed}, we use the ground-truth bounding box {\em on each slice} to crop the input region in {\em every} iteration. Note that annotating a bounding box in each slice is expensive and thus not applicable in real-world clinical applications. This experiment is aimed at exploring the upper-bound of our segmentation networks under perfect localization. With oracle information provided, our best model reports $86.37\%$, which is considerably higher than the number ($84.50\%$) without using oracle information. If we do not fine-tune the networks using coarse-scaled segmentation (see Table~\ref{Tab:Settings}), the above numbers are $86.26\%$ and $83.68\%$, respectively. That is to say, fine-tuning prevents our model from relying on the ground-truth mask. It not only improves the average accuracy, but also alleviates over-fitting (the disadvantage of our model against that with oracle information is decreased by $0.67\%$).
\section{Multi-Organ Segmentation Experiments} \label{ExperimentsMutliOrgan} \renewcommand{2.2cm}{1.1cm} \begin{table}[!btp] \centering \begin{tabular}{|l||R{2.2cm}|R{2.2cm}||R{2.2cm}|R{2.2cm}|} \hline Organ & \cite{Zhou_2017_Fixed}-{\bf C } & \cite{Zhou_2017_Fixed}-{\bf F} & Ours-{\bf C} & Ours-{\bf F} \\ \hline\hline {\em adrenal g.} & $57.38$ & $61.65$ & $60.70$ & $\mathbf{63.76}$ \\ \hline {\em duodenum} & $67.42$ & $69.39$ & $71.40$ & $\mathbf{73.42}$ \\ \hline {\em gallbladder} & $82.57$ & $^\sharp82.12$ & $87.08$ & $\mathbf{87.10}$ \\ \hline {\em inferior v.c.} & $71.77$ & $^\sharp71.15$ & $79.12$ & $\mathbf{79.69}$ \\ \hline {\em kidney l.} & $92.56$ & $92.78$ & $96.08$ & $\mathbf{96.21}$ \\ \hline {\em kidney r.} & $94.98$ & $95.39$ & $95.80$ & $\mathbf{95.97}$ \\ \hline {\em pancreas} & $83.68$ & $85.79$ & $86.09$ & $\mathbf{87.60}$ \\ \hline \end{tabular} \caption{ Comparison of coarse-scaled ({\bf C}) and fine-scaled ({\bf F}) segmentation by~\cite{Zhou_2017_Fixed} and our approach on our own dataset. A fine-scaled accuracy is indicated by $\sharp$ if it is lower than the coarse-scaled one. The {\em pancreas} segmentation accuracies are higher than those in Table~\ref{Tab:ComparisonNIH}, due to the increased number of training samples and the higher resolution in CT scans. } \label{Tab:ComparisonMultiOrgan} \end{table} To verify that our approach applies to other organs, we collect a large dataset which contains $200$ CT scans, $11$ abdominal organs and $5$ blood vessels. This corpus took $4$ full-time radiologists around $3$ months to annotate. To the best of our knowledge, this dataset is larger and contains more organs than any public dataset. We choose the $5$ most challenging targets including the {\em pancreas} and a blood vessel, as well as the two {\em kidneys}, which are relatively easier. Other easy organs such as the {\em liver} are ignored. To the best of our knowledge, some of these organs were never investigated before, but they are important in diagnosing pancreatic diseases and detecting pancreatic cancer at an early stage. We randomly partition the dataset into $4$ folds for cross validation. For each organ, a model is trained and tested individually. When a pixel is predicted as more than one organ, we choose the one with the largest confidence score. Results are summarized in Table~\ref{Tab:ComparisonMultiOrgan}. We first note that~\cite{Zhou_2017_Fixed} sometimes produced a lower accuracy in the fine stage than in the coarse stage. Apparently this is caused by the unsatisfactory convergence of the iterations, but essentially it stems from the loss of contextual information and the lack of a globally optimized energy function. Our approach solves this problem and reports a $4.29\%$ average improvement over $5$ challenging organs (the {\em kidneys} excluded). For some organs, {\em e.g.}, the {\em gallbladder}, we do not observe significant accuracy gain from iterations. But we emphasize that in these scenarios, our coarse stage already provides much higher accuracy than the fine stage of~\cite{Zhou_2017_Fixed}, and our fine stage preserves this high accuracy through the iterations, demonstrating stability. An example is displayed in Figure~\ref{Fig:VisualizationMultiOrgan}. \renewcommand{7.0cm}{8.0cm} \begin{figure}[t] \begin{center} \includegraphics[width=7.0cm]{VisualizationMultiOrgan.pdf} \end{center} \caption{ Multi-organ segmentation in the {\em axial} view (best viewed in color). Organs are marked in different colors (input image is shown with the ground-truth annotation).
} \label{Fig:VisualizationMultiOrgan} \end{figure} \section{Conclusions} \label{Conclusions} This work is motivated by the difficulty of small organ segmentation. As the target is often small, the network needs to focus on a local input region, but it is then sometimes confused by the lack of contextual information. We present the {\bf Recurrent Saliency Transformation Network}, which enjoys three advantages. {\bf (i)} Benefiting from a (recurrent) global energy function, our models generalize better from training data to testing data. {\bf (ii)} With joint optimization over two networks, both of them are improved individually. {\bf (iii)} By incorporating multi-stage visual cues, more accurate segmentation results are obtained. As the fine stage is less likely to be confused by the lack of context, we also observe better convergence during the iterations. Our approach is applied to two datasets, for {\em pancreas} segmentation and multi-organ segmentation, and outperforms the baseline (the state-of-the-art) significantly. As confirmed by the radiologists on our team, these segmentation results are helpful for computer-assisted clinical diagnosis. \vspace{0.2cm} \noindent {\bf Acknowledgements:} This paper was supported by the Lustgarten Foundation for Pancreatic Cancer Research. We thank Wei Shen, Seyoun Park, Weichao Qiu, Song Bai, Zhuotun Zhu, Chenxi Liu, Yan Wang, Siyuan Qiao, Yingda Xia and Fengze Liu for discussions. {\small \bibliographystyle{ieee}
\section{Simulated annealing} The original formulation of simulated annealing was inspired by the analogy between the stochastic evolution of the thermodynamic state of an annealing material towards the configurations of minimal energy and the search for the global minimum of an optimization criterion \cite{Kirkpatrick-et-al-83}. In the procedure, the optimization criterion plays the role of the energy and the state of the annealed material is simulated by the evolution of the state of an inhomogeneous Markov chain. The state of the chain evolves according to the Metropolis-Hastings algorithm in order to simulate the Boltzmann distribution of thermodynamic equilibrium. The Boltzmann distribution is simulated for a decreasing sequence of temperatures (``cooling''). The target distribution of the cooling procedure is the limiting Boltzmann distribution, as the temperature tends to zero, which takes non-zero values only on the set of global minimizers \cite{Laarhoven-Aarts-87}. The original formulation of the method was for a finite domain. However, simulated annealing can be generalized straightforwardly to a continuous domain because the Metropolis-Hastings algorithm can be used with almost no differences on discrete and continuous domains. The main difference is that on a continuous domain the equilibrium distributions are specified by probability densities. On a continuous domain, Markov transition kernels in which the distribution of the elements visited by the chain converges to an equilibrium distribution with the desired density can be constructed using the Metropolis-Hastings algorithm and the general family of MCMC methods \cite{Robert-Casella-04}. We point out that Boltzmann distributions are not the only distributions which can be adopted as equilibrium distributions in simulated annealing \cite{Laarhoven-Aarts-87}. In this paper it is convenient for us to adopt a different type of equilibrium distribution in place of Boltzmann distributions. \subsection{Our setting} The optimization criterion is $U:\bm{\Theta}\rightarrow [0,\, 1]$, with $\bm{\Theta}\subset\mathbb{R}^N$. The assumption that $U$ takes values in the interval $[0,\, 1]$ is a technical one. It does not imply any serious loss of generality. In general, any bounded optimization criterion can be scaled to take values in $[0,\, 1]$. We assume that the optimization task is to find a global maximizer; this can be done without loss of generality. We also assume that $\bm{\Theta}$ is a bounded set. We consider equilibrium distributions defined by probability density functions proportional to $[U(\theta)+\delta]^J$ where $J$ and $\delta$ are two strictly positive parameters. We use $\pi^{(J)}$ to denote an equilibrium distribution, i.e.~$\pi^{(J)}(d\theta)\propto[U(\theta)+\delta]^J\pi_{Leb}(d\theta)$ where $\pi_{Leb}$ is the standard Lebesgue measure. Here, $J^{-1}$ plays the role of the temperature: if the function $U(\theta)$ (Figure \ref{fig:figfun}.a) plus $\delta$ is taken to a positive power $J$, then as $J$ increases (i.e.~as $J^{-1}$ decreases) $[U(\theta)+\delta]^J$ (Figure \ref{fig:figfun}.b-d) becomes increasingly peaked around the global maximizers. The parameter $\delta$ is an offset which guarantees that the equilibrium densities are always strictly positive, even if $U$ takes zero values on some elements of the domain. The offset $\delta$ is chosen by the user and we show later that our results allow one to make an optimal selection of $\delta$.
The zero-temperature distribution is the limiting distribution, for $J\rightarrow \infty$, which takes non-zero values only on the set of global maximizers. It is denoted by $\pi^{(\infty)}$.\medskip\linebreak \begin{figure}[t] \includegraphics[width=\columnwidth]{Figfun.eps} \caption{The function $U(\theta)$ (upper left) and some probability densities of the form $h_{\bm{\theta}}(\theta)\propto [U(\theta)+ \delta]^{J}$ for $\delta=0.5$ and $J=3$ (upper right), $J=6$ (lower left) and $J=20$ (lower right).}\label{fig:figfun} \end{figure} In the generic formulation of the method, the Markov transition kernel of the $k$-th step of the inhomogeneous chain has equilibrium distribution $\pi^{(J_k)}$ where $\{J_k\}_{k=1,2,\dots}$ is the ``cooling schedule''. The cooling schedule is a non-decreasing sequence of positive numbers according to which the equilibrium distributions become increasingly sharpened during the evolution of the chain. We use $\bm{\theta}_k$ to denote the state of the chain and $P_{\bm{\theta}_k}$ to denote its probability distribution. The distribution $P_{\bm{\theta}_k}$ obviously depends on the initial condition $\bm{\theta}_0$. However, in this work, we do not need to make this dependence explicit in the notation. {\it Remark 1:} If, given an element $\theta$ in $\bm{\Theta}$, the value $U(\theta)$ can be computed directly, we say that $U$ is a deterministic criterion, e.g.~the energy landscape in protein structure prediction \cite{Wales-03}. In problems involving random variables, the value $U(\theta)$ may be the expected value $U(\theta) = \int g(x,\theta)p_{\bm{x}}(x;\theta)dx$ of some function $g$ which depends on both the optimization variable $\theta$ and on some random variable $\bm{x}$ which has probability density $p_{\bm{x}}(x;\theta)$ (which may itself depend on $\theta$). In such problems it is usually not possible to compute $U(\theta)$ directly, either because evaluation of the integral requires too much computation, or because no analytical expression for $p_{\bm{x}}(x;\theta)$ is available. Typically one must perform stochastic simulations in order to obtain samples of $\bm{x}$ for a given $\theta$, hence obtain sample values of $g(\bm{x},\theta)$, and thus construct a Monte Carlo estimate of $U(\theta)$. The Bayesian design of clinical trials is an important application area where such expected-value criteria arise \cite{Spiegelhalter-et-al-2004}. The authors of this paper investigate the optimization of expected-value criteria motivated by problems of aircraft routing \cite{Leccchini-et-al-2006}. In the particular case that $p_{\bm{x}}(x;\theta)$ does not depend on $\theta$, the optimization task is often called ``empirical risk minimization'', and is studied extensively in statistical learning theory \cite{Vapnik-95,Vidyasagar-03}. The results of this paper apply in the same way to the optimization of both deterministic and expected-value criteria. The MCMC method developed by M\"{u}ller \cite{Muller-99,Muller-et-al-04} allows one to construct simulated annealing algorithms for the optimization of expected-value criteria. M\"{u}ller \cite{Muller-99,Muller-et-al-04} employs the same equilibrium distributions as those described in our setting; in his context $J$ is restricted to integer values. In Figure \ref{fig:algo}, we illustrate the basic iteration of a generic simulated annealing algorithm with equilibrium distributions $\pi^{(J)}(d\theta)$ for the optimization of deterministic and expected-value criteria.
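As a complement to Figure \ref{fig:algo}, the sketch below shows what the basic iteration looks like for a deterministic criterion, in plain NumPy. It is only illustrative: the symmetric Gaussian proposal, the rejection of proposals falling outside the bounded domain, and all numerical values are our own choices, and the expected-value variant would replace the direct evaluations of $U$ by Monte Carlo estimates, as in M\"{u}ller's scheme.
\begin{verbatim}
import numpy as np

def sa_step(theta, J, U, delta, prop_std, bounds, rng):
    """One Metropolis step whose equilibrium density is proportional to
    [U(theta) + delta]^J (symmetric Gaussian proposal; proposals leaving
    the bounded domain Theta are simply rejected)."""
    prop = theta + prop_std * rng.standard_normal(theta.shape)
    lo, hi = bounds
    if np.any(prop < lo) or np.any(prop > hi):
        return theta
    ratio = ((U(prop) + delta) / (U(theta) + delta)) ** J
    return prop if rng.uniform() < ratio else theta

def simulated_annealing(U, theta0, schedule, delta=1.0,
                        prop_std=0.05, bounds=(0.0, 1.0), seed=0):
    """Inhomogeneous chain: the k-th step targets pi^(J_k)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for J_k in schedule:               # non-decreasing cooling schedule
        theta = sa_step(theta, J_k, U, delta, prop_std, bounds, rng)
    return theta

# toy example: a criterion with several local maxima on [0, 1]
U = lambda x: float(0.5 + 0.5 * np.sin(12.0 * x[0]) * np.exp(-x[0]))
schedule = np.repeat(np.arange(1, 51), 20)   # J = 1, ..., 50, 20 steps each
print(simulated_annealing(U, [0.5], schedule))
\end{verbatim}
Note that the acceptance ratio $[(U(\theta')+\delta)/(U(\theta)+\delta)]^{J}$ is exactly the Metropolis ratio for the unnormalized density $[U(\theta)+\delta]^{J}$, so no normalization constant is ever needed.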
\begin{figure}[t]\centering \includegraphics[width=0.9\columnwidth]{Figalgo.eps} \caption{The basic iterations of simulated annealing with equilibrium distributions $\pi^{(J)}(d\theta)$, for the maximization of deterministic and expected-value criteria (see Remark 1). In the algorithms, $q_{\tilde{\bm{\theta}}}$ is the density of the ``proposal distribution'' of the Metropolis step. The iteration for the expected value criterion has been proposed by M\"{u}ller \cite{Muller-99,Muller-et-al-04}. }\label{fig:algo}\vspace{-1cm} \end{figure} \section{Convergence} The rationale of simulated annealing is as follows: if the temperature is kept constant, say $J_k=J$, then the distribution of the state of the chain $P_{\bm{\theta}_k}$ tends to the equilibrium distribution $\pi^{(J)}$; if $J\rightarrow\infty$ then the equilibrium distribution $\pi^{(J)}$ tends to the zero-temperature distribution $\pi^{(\infty)}$; as a result, if the cooling schedule $J_k$ tends to infinity, one obtains that $P_{\bm{\theta}_k}$ ``follows'' $\pi^{(J_k)}$ and that $\pi^{(J_k)}$ tends to $\pi^{(\infty)}$ and eventually that the distribution of the state of the chain $P_{\bm{\theta}_k}$ tends to $\pi^{(\infty)}$. The theory shows that, under conditions on the cooling schedule and the Markov transition kernels, the distribution of the state of the chain $P_{\bm{\theta}_k}$ actually converges to the target zero-temperature distribution $\pi^{(\infty)}$ as $k\rightarrow\infty$ \cite{Haario-Saksman-91,Gelfand-Mitter-91,Tsallis-Stariolo-96,Locatelli-00}. Convergence to the zero-temperature distribution implies that asymptotically the state of the chain eventually coincides with a global optimizer with probability one. The difficulty which must be overcome in order to obtain finite step results on simulated annealing algorithms on a continuous domain is that usually, in an optimization problem defined over continuous variables, the set of global optimizers has zero Lebesgue measure (e.g.~a set of isolated points). If the set of global optimizers has zero measure then the set of global optimizers has null probability according to the equilibrium distributions $\pi^{(J)}$ for any finite $J$ and, as a consequence, according to the distributions $P_{\bm{\theta}_k}$ for any finite $k$. Put another way, the probability that the state of the chain visits the set of global optimizers is constantly zero after any finite number of steps. Hence the confidence of the fact that the solution provided by the algorithm in finite time coincides with a global optimizer is also constantly zero. Notice that this is not the case for a finite domain, where the set of global optimizers is of non-null measure with respect to the reference counting measure \cite{Laarhoven-Aarts-87,Mitra-et-al-86,Hajek-88,Hannig-et-al-06}. It is instructive to look at the issue also in terms of the rate of convergence to the target zero-temperature distribution. On a discrete domain, the distribution of the state of the chain at each step and the zero-temperature distribution are both standard discrete distributions. It is then possible to define a distance between them and study the rate of convergence of this distance to zero. This analysis allows one to obtain results on the finite-time behavior of simulated annealing \cite{Laarhoven-Aarts-87,Mitra-et-al-86}. 
On a continuous domain and for a set of global optimizers of measure zero, the target zero-temperature distribution $\pi^{(\infty)}$ ends up being a mixture of probability masses on the set of global optimizers. In this situation, although the distribution of the state of the chain $P_{\bm{\theta}_k}$ still converges asymptotically to $\pi^{(\infty)}$, it is not possible to introduce a sensible distance between the two distributions and a rate of convergence to the target distribution cannot even be defined (weak convergence), see \cite[Theorem 3.3]{Haario-Saksman-91}. This is the reason that until now there have been no guarantees on the performance of simulated annealing on a continuous domain after a finite number of computations: by adopting the zero-temperature distribution $\pi^{(\infty)}$ as the target distribution it is only possible to prove asymptotic convergence in infinite time to a global optimizer. {\it Remark 2:} The standard distance between two distributions, say $\mu_1$ and $\mu_2$, on a continuous support is the total variation norm $\|\mu_1 - \mu_2\|_{TV} = \sup_{A} | \mu_1(A) - \mu_2(A)|$, see e.g.~\cite{Roberts-Rosenthal-04}. In simulated annealing on a continuous domain the distribution of the state of the chain $P_{\bm{\theta}_k}$ is absolutely continuous with respect to the Lebesgue measure (i.e.~\mbox{$\pi_{Leb}(A)=0\Rightarrow P_{\bm{\theta}_k}(A)=0$}), by construction for any finite $k$. Hence if the set of global optimizers has zero Lebesgue measure then it has zero measure also according to $P_{\bm{\theta}_k}$. The set of global optimizers has however measure 1 according to $\pi^{(\infty)}$. The distance $\|P_{\bm{\theta}_k} - \pi^{(\infty)}\|_{TV}$ is then constantly $1$ for any finite $k$. It is also worth mentioning that if the set of global optimizers has zero measure then asymptotic convergence to the zero-temperature distribution $\pi^{(\infty)}$ can be proven only under the additional assumptions of continuity and differentiability of $U$ \cite{Haario-Saksman-91,Gelfand-Mitter-91,Tsallis-Stariolo-96,Locatelli-00}. \section{Finite-time guarantees} In general, optimization algorithms for problems defined on continuous variables can only find approximate solutions in finite time \cite{Blum-et-al-98}. Given an element $\theta$ of a continuous domain how can we assess how good it is as an approximate solution to an optimization problem? Here we introduce the concept of {\it approximate global optimizer} to answer this question. The definition is given for a maximization problem in a continuous but bounded domain. We use two parameters: the {\it value imprecision} $\epsilon$ (greater than or equal to 0) and the {\it residual domain} $\alpha$ (between 0 and 1) which together determine the level of approximation. We say that $\theta$ is an approximate global optimizer of $U$ with value imprecision $\epsilon$ and residual domain $\alpha$ if the function $U$ takes values strictly greater than $U(\theta) + \epsilon$ only on a subset of values of $\theta$ no larger than an $\alpha$ portion of the optimization domain. The formal definition is as follows. \begin{definition} Let $U:\bm{\Theta}\rightarrow\mathbb{R}$ be an optimization criterion where $\bm{\Theta}\subset\mathbb{R}^N$ is bounded. Let $\pi_{Leb}$ denote the standard Lebesgue measure. Let $\epsilon\geq 0$ and $\alpha\in [0,\,1]$ be given numbers. 
Then $\theta$ is an {\it approximate global optimizer} of $U$ with value imprecision $\epsilon$ and residual domain $\alpha$ if $\pi_{Leb}\{\theta'\in\bm{\Theta} : U(\theta') > U(\theta) + \epsilon \} \leq \alpha\, \pi_{Leb}(\bm{\Theta})\, .$ \end{definition} In other words, the value $U(\theta)$ is within $\epsilon$ of a value which is greater than the values that $U$ takes on at least a $1-\alpha$ portion of the domain. The smaller $\epsilon$ and $\alpha$ are, the better is the approximation of a true global optimizer. If both $\alpha$ and $\epsilon$ are equal to zero then $U(\theta)$ coincides with the essential supremum of $U$. Our definition of approximate global optimizer carries an important property, which holds regardless of what the criterion $U$ is: if $\epsilon$ and $\alpha$ have non-zero values then the set of approximate global optimizers always has non-zero Lebesgue measure. It follows that the probability that the chain visits the set of approximate global optimizers can be non-zero. Hence, it is sensible to study the confidence of the fact that the solution found by simulated annealing in finite time is an approximate global optimizer. {\it Remark 3:} The intuition that our notion of approximate global optimizer can be used to obtain formal guarantees on the finite-time performance of optimization methods based on a stochastic search of the domain is already apparent in the work of Vidyasagar \cite{Vidyasagar-03,Vidyasagar-01}. Vidyasagar \cite{Vidyasagar-03,Vidyasagar-01} introduces a similar definition and obtains rigorous finite-time guarantees in the optimization of expected value criteria based on uniform independent sampling of the domain. Notably, the number of independent samples required to guarantee some desired accuracy and confidence turns out to be polynomial in the values of the desired imprecision, residual domain and confidence. Although the method of Vidyasagar is not highly sophisticated, it has had considerable success in solving difficult control system design applications \cite{Vidyasagar-01,Tempo-et-al-05}. Its appeal stems from its rigorous finite-time guarantees which exist without the need for any particular assumption on the optimization criterion. Here we show that finite-time guarantees for simulated annealing can be obtained by selecting a distribution $\pi^{(J)}$ with a finite $J$ as the target distribution in place of the zero-temperature distribution $\pi^{(\infty)}$. The fundamental result is the following theorem which allows one to select in a rigorous way $\delta$ and $J$ in the target distribution $\pi^{(J)}$. It is important to stress that the result holds universally for {\it any} optimization criterion $U$ on a bounded domain. The only minor requirement is that $U$ takes values in $[0,\, 1]$. \begin{theorem}\label{th:confidence} Let $U:\bm{\Theta}\rightarrow[0,\,1]$ be an optimization criterion where $\bm{\Theta}\subset\mathbb{R}^N$ is bounded. Let $J\geq 1$ and $\delta>0$ be given numbers. Let $\bm{\theta}$ be a multivariate random variable with distribution $\pi^{(J)}(d\theta)\propto [U(\theta)+\delta]^J\pi_{Leb}(d\theta)$. Let $\alpha\in (0,\, 1]$ and $\epsilon\in [0,\, 1]$ be given numbers and define \begin{equation}\label{eq:confidence} \sigma = \frac { 1 } { \mbox{$\displaystyle 1 + \left[\frac{1 + \delta}{\epsilon+1+\delta}\right]^{\,J} \left[\frac{1}{\alpha}\frac{1 +\delta}{\epsilon+\delta} - 1\right] \frac{1+\delta}{\delta} $}}\,\, . 
\end{equation} Then the statement ``$\bm{\theta}$ is an approximate global optimizer of $U$ with value imprecision $\epsilon$ and residual domain $\alpha$'' holds with probability at least $\sigma$. \end{theorem} {\it Proof.} See Appendix A. The importance of the choice of a target distribution $\pi^{(J)}$ with a finite $J$ is that $\pi^{(J)}$ is absolutely continuous with respect to the Lebesgue measure. Hence, the distance $\|P_{\bm{\theta}_k} - \pi^{(J)}\|_{\mbox{\tiny TV}}$ between the distribution of the state of the chain $P_{\bm{\theta}_k}$ and the target distribution $\pi^{(J)}$ is a meaningful quantity. Convergence of the Metropolis-Hastings algorithm and MCMC methods in total variation norm is a well studied problem. The theory provides simple conditions under which one derives upper bounds on the distance to the target distribution which are known at each step of the chain and decrease monotonically to zero as the number of steps of the chain grows. The theory has been developed mainly for {\it homogeneous} chains \cite{Meyn-Tweedie-93,Rosenthal-95,Mengersen-Tweedie-96,Roberts-Rosenthal-04}. In the case of simulated annealing, the factor that enables us to employ these results is the absolute continuity of the target distribution $\pi^{(J)}$ with respect to the Lebesgue measure. However, simulated annealing involves the simulation of inhomogeneous chains. In this respect, another important fact is that the choice of a target distribution $\pi^{(J)}$ with a finite $J$ implies that the inhomogeneous Markov chain can in fact be formed by a finite sequence of homogeneous chains (i.e.~the cooling schedule $\{J_k\}_{k=1,2,\dots}$ can be chosen to be a sequence that takes only a finite set of values). In turn, this allows one to apply the theory of homogeneous MCMC methods to study the convergence of $P_{\bm{\theta}_k}$ to $\pi^{(J)}$ in total variation norm. On a bounded domain, simple conditions on the `proposal distribution' in the iteration of the simulated annealing algorithm allow one to obtain upper bounds on $\|P_{\bm{\theta}_k} - \pi^{(J)}\|_{\mbox{\tiny TV}}$ that decrease geometrically to zero as $k\rightarrow\infty$, without the need for any additional assumption on $U$ \cite{Meyn-Tweedie-93,Rosenthal-95,Mengersen-Tweedie-96,Roberts-Rosenthal-04}. It is then appropriate to introduce the following finite-time result. \begin{theorem}\label{th:confidence_k} Let the notation and assumptions of Theorem \ref{th:confidence} hold. Let $\bm{\theta}_k$, with distribution $P_{\bm{\theta}_k}$, be the state of the inhomogeneous chain of a simulated annealing algorithm with target distribution $\pi^{(J)}$. Then the statement ``$\bm{\theta}_k$ is an approximate global optimizer of $U$ with value imprecision $\epsilon$ and residual domain $\alpha$'' holds with probability at least $\sigma-\|P_{\bm{\theta}_k} - \pi^{(J)}\|_{\mbox{\tiny TV}}$. \end{theorem} The proof of the theorem follows directly from the definition of the total variation norm. It follows that if simulated annealing is implemented with an algorithm which converges in total variation distance to a target distribution $\pi^{(J)}$ with a finite $J$, then one can state with confidence arbitrarily close to 1 that the solution found by the algorithm after the known appropriate finite number of steps is an approximate global optimizer with the desired approximation level.
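To give a concrete feeling for the bound, the expression (\ref{eq:confidence}) is straightforward to evaluate numerically. The sketch below (the numerical values of $\epsilon$, $\alpha$ and $\delta$ are arbitrary illustrations) prints $\sigma$ for increasing $J$ and performs a crude grid search over the offset $\delta$; the behavior it exhibits is discussed more precisely in the next paragraph.
\begin{verbatim}
import numpy as np

def confidence_sigma(J, delta, eps, alpha):
    """Lower bound sigma of Theorem 1 (the expression for sigma above)."""
    a = ((1.0 + delta) / (eps + 1.0 + delta)) ** J
    b = (1.0 / alpha) * (1.0 + delta) / (eps + delta) - 1.0
    return 1.0 / (1.0 + a * b * (1.0 + delta) / delta)

# sigma can be made arbitrarily close to 1 by increasing J
# (eps, alpha and delta below are arbitrary illustrative values)
for J in (10, 100, 1000, 10000):
    print(J, confidence_sigma(J, delta=1.0, eps=0.05, alpha=0.05))

# for a fixed J, a crude grid search over the offset delta
J_fixed = 1000
deltas = np.linspace(1e-3, 5.0, 500)
best_delta = max(deltas, key=lambda d: confidence_sigma(J_fixed, d, 0.05, 0.05))
print("best delta for J =", J_fixed, ":", round(best_delta, 3))
\end{verbatim}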
For given non-zero values of $\epsilon$, $\alpha$ the value of $\sigma$ given by (\ref{eq:confidence}) can be made arbitrarily close to 1 by choice of $J$; while the distance $\|P_{\bm{\theta}_k} - \pi^{(J)}\|_{\mbox{\tiny TV}}$ can be made arbitrarily small by taking the known sufficient number of steps. It can be shown that there exists the possibility of making an optimal choice of $\delta$ and $J$ in the target distribution $\pi^{(J)}$. In fact, for given $\epsilon$ and $\alpha$ and a given value of $J$ there exists an optimal choice of $\delta$ which maximizes the value of $\sigma$ given by (\ref{eq:confidence}). Hence, it is possible to obtain a desired $\sigma$ with the smallest possible $J$. The advantage of choosing the smallest $J$, consistent with the required approximation and confidence, is that it will decrease the number of steps required to achieve the desired reduction of $\|P_{\bm{\theta}_k} - \pi^{(J)}\|_{\mbox{\tiny TV}}$. \section{Conclusions} We have introduced a new formulation of simulated annealing which admits rigorous finite-time guarantees in the optimization of functions of continuous variables. First, we have introduced the notion of approximate global optimizer. Then, we have shown that simulated annealing is guaranteed to find approximate global optimizers, with the desired confidence and the desired level of accuracy, in a known finite number of steps, if a proper choice of the target distribution is made and conditions for convergence in total variation norm are met. The results hold for {\it any} optimization criterion on a bounded domain with the only minor requirement that it takes values between 0 and 1. In this framework, simulated annealing algorithms with rigorous finite-time guarantees can be derived by studying the choice of the proposal distribution and of the cooling schedule, in the generic iteration of simulated annealing, in order to ensure convergence to the target distribution in total variation norm. To do this, existing theory of convergence of the Metropolis-Hastings algorithm and MCMC methods on continuous domains can be used \cite{Meyn-Tweedie-93,Rosenthal-95,Mengersen-Tweedie-96,Roberts-Rosenthal-04}. Vidyasagar \cite{Vidyasagar-03,Vidyasagar-01} has introduced a similar definition of approximate global optimizer and has shown that approximate optimizers with desired accuracy and confidence can be obtained with a number of uniform independent samples of the domain which is polynomial in the accuracy and confidence parameters. In general, algorithms developed with the MCMC methodology can be expected to be equally or more efficient than uniform independent sampling. \subsubsection*{Acknowledgments} Work supported by EPSRC, Grant EP/C014006/1, and by the European Commission under projects HYGEIA FP6-NEST-4995 and iFly FP6-TREN-037180. We thank S.~Brooks, M.~Vidyasagar and D.~M.~Wolpert for discussions and useful comments on the paper.
\section{Introduction} \label{sec:intro} The smallest neutral molecules such as alkali dimers for instance have often served as benchmark systems in theoretical or experimental studies of femtosecond molecular dynamics due to their simple electronic structure which can be easily probed with common femtosecond laser systems \cite{JCP.103.7269, PRA.54.R4605, JCP.108.9259, CPL.302.363, APB.71.259, JCP.112.8871, CPL.339.362, JCP.114.1259, JCP.114.10311, PRA.66.043402, CPL.376.457, PRA.68.043409, CPL.402.27, CPL.402.126, CPL.26.073301, CPB.19.033301}. In this context, the lithium dimer has been extensively studied both experimentally \cite{JCP.103.7269, JCP.108.9259, JCP.114.10311, PRA.66.043402, PRA.68.043409, CPL.402.27, CPL.402.126} and theoretically \cite{JCP.114.1259, CPL.26.073301, CPB.19.033301, PRA.74.033407}, and time-resolved photoelectron energy distributions were sometimes used as an efficient probe of the molecular dynamics. These studies are usually based on a three-step photoionization scheme \cite{JCP.114.10311, JCP.103.7269, JCP.108.9259, PRA.66.043402, PRA.68.043409, CPL.402.27, CPL.402.126} where a combination of linearly polarized laser pulses excite, in a two-photon process, either a single transition from the ground electronic state X\,($^1\Sigma_g^+$) of \mbox{Li$_{2}$\ } to a pure rovibrational level of the excited E\,($^1\Sigma_g^+$) electronic state, or a series of such transitions. The first excited A\,($^1\Sigma_u^+$) electronic state serves then as an intermediate resonance. The populated rovibrational levels of the E electronic potential are then ionized with an ultrashort laser pulse of linear polarization. In the present study of \mbox{Li$_{2}$\ } photoionization, instead of concentrating our attention on photoelectron energy distributions, we focus on photoelectron angular distributions and we show that a strong modification of these angular distributions happens when the pulse duration varies from the femtosecond to the picosecond regime. We quantify this modification by calculating the asymmetry parameter $\beta$ of the emitted photoelectrons as a function of the pulse duration. Our results are then rationalized using a simple analytical model, and we finally show that this model can be used to extract spectroscopic molecular parameters from measurements of the photoelectron angular distribution performed with ultrashort laser pulses, even if the pulse duration is much shorter than the rotational period of both the neutral molecule and the ion. The outline of the paper is as follows. In Sec.\,\ref{sec:model} we briefly recall the theoretical model we use for the calculation of the molecular photoionization process using ultrashort laser pulses. Then, in Sec.\,\ref{sec:results}, through results of numerical simulations, we illustrate the specific features of the photoelectron angular distributions and calculate the asymmetry parameter at specific photoelectron energies for different pulse durations. We then show that a simple analytical expression is able to reproduce the numerical results and to predict relevant spectroscopic parameters. The last section finally gives some concluding remarks and perspectives for future work. 
\section{Theoretical model} \label{sec:model} The present theoretical study is based on a three-step excitation scheme used in several experiments \cite{JCP.103.7269, JCP.108.9259, JCP.114.10311, CPL.402.27, CPL.402.126, PRA.66.043402, PRA.68.043409}, and we concentrate here our attention on the last step which consists in the photoionization of the E$(^1\Sigma_g^+)$ electronic state of {Li$_{2}$}, as shown in Fig.\,\ref{fig:PES}. \begin{figure}[ht!] \includegraphics[width=8.6cm]{Fig1.pdf} \caption{(Color online) Potential energy curves involved in the photoionization of \mbox{Li$_{2}$\ } as a function of the internuclear distance $R$. The potential energy curve associated with the E$(^1\Sigma_g^+)$ electronic state of \mbox{Li$_{2}$\ } is in blue (lower curve), and the potential energy curve associated with the $X(^2\Sigma_g^+)$ of \mbox{Li$_{2}^{+}$\ } is in red (upper curve). Their associated molecular rotational quantum numbers are denoted by $N$ and $N^+$, respectively.} \label{fig:PES} \end{figure} We assume that in the first two steps, a sequence of two CW linearly polarized laser pulses has been used to excite in a two-photon transition a pure rovibrational $(v=0,N=0)$ level on the E$(^1\Sigma_g^+)$ excited electronic potential curve of {Li$_{2}$}. This kind of technique has already been used with \mbox{Li$_{2}$\ } \cite{PRA.66.043402, PRA.68.043409}. Since we assume that the \mbox{Li$_{2}$\ } molecule is prepared in its ground rotational level $N=0$, the value of its projection on any quantization axis is $M=0$. This corresponds to an isotropic initial distribution. We propose to ionize the molecule with the help of a femtosecond or picosecond ionizing pulse. Depending on its exact duration, this laser pulse has a bandwidth $\Delta\omega$ which can encompass the two rotational components $N^+=0$ and $N^+=2$, which can be accessed in the ground electronic state of the ion. On this time scale, describing the ionization process and the induced nuclear dynamics requires a full quantum treatment. Here we recall the particular points of the quantum model that are essential for understanding our forthcoming discussion. For a detailed description of the model, see Refs. [\onlinecite{PRA.74.033407, JCP.138.024108, JCP.144.154109, PRB.95.115406}]. This model was designed to treat the ionization dynamics of neutral alkali dimers, including the induced ro-vibrational dynamics. We follow the electronic dynamics and the nuclear motion by expanding the total molecular wave function as \begin{eqnarray} \Psi(\mathbi{r},\mathbi{R},t) & = & \psi^{E}(\mathbi{R},t)\,\phi^{E}(r|R)\,Y_{00}(\,\hat{\mathbi{\!r}}\,) + \nonumber\\ & \displaystyle\sum_{m}\int & \psi^{+}_{\ell m}(E,\mathbi{R},t)\, \phi_{\ell}^{+}(E,r|R)\,Y_{\ell m}(\,\hat{\mathbi{\!r}}\,)\,dE\;\; \label{Eq:DefWF} \end{eqnarray} where $\phi^{E}(r|R)\,Y_{00}(\,\hat{\mathbi{\!r}}\,)$ denotes the electronic wave function associated with the E-state of {Li$_{2}$}. This electronic state of $^1\Sigma_g^+$ symmetry is considered as a 3s$\sigma$ Rydberg state, and its wave function is expressed in the \textit{molecular frame} following Hund's case (b) representation \cite{CUP.2003}. On the other hand, the ionized wave function $\phi_{\ell}^{+}(E,r|R)\,Y_{\ell m}(\,\hat{\mathbi{\!r}}\,)$ is expressed in the \textit{laboratory frame} following Hund's case (d) representation \cite{CUP.2003}. The electron coordinate is denoted by $\mathbi{r}$, and $\mathbi{R}$ denotes the internuclear coordinate. 
$E$ is the electron asymptotic kinetic energy and $(\ell,m)$ denote the electron angular momentum and its projection in the laboratory frame. Note that the one-photon ionization considered here results in the ejection of a $p$-type electron with $\ell=1$. The sum over $\ell$ is thus omitted in Eq.\,(\ref{Eq:DefWF}). Similar approaches based on such time-dependent expansions have been used in the past for the calculation of time-resolved photoelectron spectra \cite{JCP.110.147, CPL.302.363, JCP.112.8871, JCP.114.1259}. The photoionization step is performed by a laser pulse of linear polarization and the polarization axis is chosen as the quantization axis in the laboratory frame. The following selection rule therefore applies for the projections $M$ and $M^+$ of the molecular rotational angular momenta of \mbox{Li$_{2}$\ } and \mbox{Li$_{2}^{+}$\ } : $M = M^+ + m$. In the present case, we thus have $M^+ + m = 0$. The rotational motion is described by expanding the nuclear wave packets $\psi^{E}(\mathbi{R},t)$ and $\psi^{+}_{\ell m}(E,\mathbi{R},t)$ of Eq.\,(\ref{Eq:DefWF}) as \begin{subequations} \label{eq:psiE+} \begin{eqnarray} \label{eq:psiE} \psi^{E}(\mathbi{R},t) & \!=\! & \psi^{E}_{N,M}(R,t) \, {\cal D}^{N^{\,*}}_{M,0}(\,\hat{\mathbi{\!R}}\,)\\ \label{eq:psi+lm} \psi^{+}_{\ell m}(E,\mathbi{R},t) & \!=\! & \!\sum_{N^{+}} \psi_{\ell,m}^{N^+}(E,R,t) \, {\cal D}^{N^{+\,*}}_{M^+,0}(\,\hat{\mathbi{\!R}}\,) \end{eqnarray} \end{subequations} where ${\cal D}^{N^{*}}_{M,\Lambda}(\,\hat{\mathbi{\!R}}\,)$ denote the normalized Wigner rotation matrices \cite{Zare}. Introducing these expansions in the time-dependent Schr\"odinger equation describing the molecule-field interaction and projecting onto the electronic and rotational basis functions yields, in the dipole approximation, a set of coupled differential equations \cite{PRA.74.033407} for the nuclear wave packets $\psi^{E}(\mathbi{R},t)$ and $\psi^{+}_{\ell m}(E,\mathbi{R},t)$ that we solve using the short-time split-operator method\cite{JComputP.47.412}. To integrate the corresponding differential equations we need to define the potential energy curves $V_{E}(R)$ and $V_{+}(R)$ associated with the electronic states of the neutral and of the ion, and the matrix elements which couple the nuclear wave packets evolving on these electronic potential curves. Such matrix elements are given in Ref. [\onlinecite{PRA.74.033407}], and for the potential curves we use the accurate ab-initio values given in Ref. [\onlinecite{CP.92.263}]. Note that the rotational constant of the E state of \mbox{Li$_{2}$\ }, $B_E \simeq 0.506\,$\mbox{cm$^{-1}$}, is very close to the one of the ground state of the ion, $B_+ \simeq 0.503\,$\mbox{cm$^{-1}$}. The molecules are subjected to a linearly polarized ionizing field with the classical expression \begin{equation} \mathbi{E}(t) = E_0\,f(t)\,\cos(\omega_0 t)\,\hat{\mathbi{\!e}}\,, \end{equation} where $E_0$ and $\omega_0$ denote the electric field amplitude and the frequency of the radiation. $\hat{\mathbi{\!e}}$ is the polarization vector. The pulse envelope is finally defined as \begin{equation} f(t) = \sin^2(\pi t /(2\tau))\,, \end{equation} $\tau$ being the width of the pulse and $2\tau$ its total duration. 
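The spectral content of this pulse, which determines how many rotational exit channels can be reached coherently, is easy to estimate numerically. The short sketch below is only an illustration (the chosen values of $\tau$ are arbitrary and all unit conversions are made explicit): it computes the modulus of the Fourier transform of the $\sin^2$ envelope and its full width at half maximum in \mbox{cm$^{-1}$}.
\begin{verbatim}
import numpy as np

CM1_TO_RAD_FS = 2.0 * np.pi * 2.99792458e-5     # 1 cm^-1 expressed in rad/fs

def envelope_spectrum(detunings_cm1, tau_fs, n=2048):
    """|Fourier transform| of f(t) = sin^2(pi t / (2 tau)) on [0, 2 tau]."""
    t = np.linspace(0.0, 2.0 * tau_fs, n)
    f = np.sin(np.pi * t / (2.0 * tau_fs)) ** 2
    w = np.atleast_1d(detunings_cm1) * CM1_TO_RAD_FS
    kernel = np.exp(1j * w[:, None] * t[None, :])
    return np.abs(np.trapz(f[None, :] * kernel, t, axis=1))

def bandwidth_fwhm_cm1(tau_fs, span_cm1=60.0, npts=1201):
    """Full width at half maximum of the spectral amplitude, in cm^-1."""
    d = np.linspace(-span_cm1, span_cm1, npts)
    s = envelope_spectrum(d, tau_fs)
    above = d[s >= 0.5 * s.max()]
    return above[-1] - above[0]

# this bandwidth is what must be compared with the spacing between the
# rotational exit channels of the ion (a few cm^-1, see the next section)
for tau_ps in (1.0, 5.0, 15.0):
    print(tau_ps, "ps ->", round(bandwidth_fwhm_cm1(tau_ps * 1000.0), 2), "cm^-1")
\end{verbatim}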
The analysis of the photoelectron angular distributions is made by projecting the wave packets $\psi^{+}_{\ell m}(E,\mathbi{R},t_f)$ defined in Eq.\,(\ref{eq:psi+lm}) at the end of the pulse (at time $t_f$) on the energy-normalized solutions of the field-free ionized molecular states represented by the usual expansions on angular momentum states\cite{PRA.49.R641, PRA.51.4824, Messiah}. After integration over the electronic coordinate and over the angular degree of freedom for the nuclei, the angular distribution of the ejected photoelectron at some prescribed asymptotic electron kinetic energy $E=\hbar^2k^2/2m$ and angle $\,\hat{\mathbi{\!k}}\,$ is given by \begin{equation} \label{eq:P(E,theta)simple} P(E,\,\hat{\mathbi{\!k}}\,) = \sum_{N^{+}} \,P_{N^+}(E,\,\hat{\mathbi{\!k}}\,)\,, \end{equation} where \begin{equation} \label{eq:PNplus(E,theta)simple} P_{N^+}(E,\,\hat{\mathbi{\!k}}\,) = \sum_{m} \,|Y_{\ell m}(\,\hat{\mathbi{\!k}}\,)|^2 \int |\psi^{N^{+}}_{\ell,m}(E,R,t_f)|^2\,dR\,. \end{equation} Integrating $P(E,\,\hat{\mathbi{\!k}}\,)$ over the electron kinetic energy $E$ yields the total angular distribution $P(\,\hat{\mathbi{\!k}}\,)$, while an integration of the same quantity over the angle $\,\hat{\mathbi{\!k}}=(\theta,\phi)$ yields the integrated photoelectron spectrum $P(E)$. In the next section\,\ref{sec:results}, we will discuss both the integrated photoelectron spectra and the angular distributions at specific energies $P(E,\,\hat{\mathbi{\!k}}\,)$ corresponding to different exit channels. In the present case with linear polarization, the angular distributions do not vary with the azimuthal angle $\phi$ and they are just functions of the polar angle $\theta$. We will thus write $P(E,\,\hat{\mathbi{\!k}}\,) = P(E,\theta)$. Compared to our previous study \cite{PRA.74.033407}, which concentrated on kinetic energy distributions only, we put here the emphasis on angular distributions. Note that the theory described here can be generalized to non-isotropic initial conditions by adding a sum over $M$ in Eq.\,(\ref{eq:psiE}). In practice, we use this generalized approach \cite{PRA.74.033407} when $N \neq 0$. \section{Results} \label{sec:results} \subsection{Photoelectron spectra} Following the scenario shown in Fig.\,\ref{fig:PES}, the second harmonic of an Nd:YAG laser pulse induces photoionization of the molecule at 532\,nm. Depending on the duration of the pulse, between 50\,fs and 15\,ps, the angular distribution may change dramatically because of a strong overlap of the final rotational exit channels. In the following subsections, we will discuss two cases corresponding to the same initial vibrational state $v=0$ in the E-state of the neutral molecule, but with two different initial rotational excitations, $N=0$ and $N=2$. \subsubsection{Isotropic rotational case $N=0$} Starting from the initial rovibrational level $v=0$, $N=0$, $M=0$ and taking into account the vibrational selection rule $v_{+}=v$ which corresponds to the highest Franck-Condon factor, there are two different exit channels corresponding to the two accessible ion rotational quantum numbers $N^+=0$ and $N^+=2$. In Fig.\,\ref{fig:Photoelectron_Spectrum}, we display the photoionization probability as a function of the electron kinetic energy for various pulse durations $\tau$ (Full Width at Half Maximum). For very short pulses $(\tau < 5\,\mathrm{ps})$, the spectra show a broad single peak, while a well-separated double-peak structure appears for long pulse durations $(\tau > 10\,\mathrm{ps})$.
The centers of the peaks correspond to the energies associated with the ion ro-vibrational levels $(v_+=0,N^+=0)$ and $(v_+=0,N^+=2)$, at $E=4671.6$\,\mbox{cm$^{-1}$} $\,$ and $E=4668.5$\,\mbox{cm$^{-1}$}, respectively. \begin{figure}[ht!] \includegraphics[width=8.6cm]{Fig2.pdf} \caption{(Color online) Photoelectron spectra calculated using a single initial rovibrational level $(v=0,N=0)$ as a function of the electron kinetic energy $E$ in wavenumbers and of the pulse duration $\tau$ in ps. The laser wavelength is $\lambda=532$\,nm (laser Nd:YAG).} \label{fig:Photoelectron_Spectrum} \end{figure} Decreasing the duration of the pulse increases its bandwidth. When this bandwidth encompasses the two exit channels, it induces a strong overlap effect that will strongly influence the photoelectron angular distribution. The importance of this effect depends on the relative values of the laser bandwidth and of the spectral separation between the two final ro-vibrational levels. As seen in Fig.\,\ref{fig:Photoelectron_Spectrum}, in the present scenario the peak corresponding to the channel $(v_+=0,N^+=0)$ is predominant because the rotational constant of the E-state of Li$_2$ and that of the ground electronic state of Li$_2^+$ are very similar \cite{PRA.74.033407}. \begin{figure}[ht!] \includegraphics[width=8.6cm]{Fig3.pdf} \caption{(Color online) Photoelectron angular distribution spectra calculated using the initial $(v=0, N=0)$ rovibrational state for the main ion exit channel $(v_{+}=0, N^+=0)$, which corresponds to the energy $E=4671.6$\,\mbox{cm$^{-1}$}, as a function of the ejection angle $\theta$ (varied from $0^{\circ}$ to $180^{\circ}$) and of the pulse duration $\tau$. Upper panel (a): Angular distribution spectrum $P(E,\theta)$ in a Cartesian view. Central panel (b): In a binocular view. Lower panel (c): In a polar view. The total pulse duration is varied from $\tau=50$\,fs to $\tau=15$\,ps. The laser wavelength is $\lambda=532$\,nm (laser Nd:YAG).} \label{fig:ANG_0_1} \end{figure} \begin{figure}[ht!] \includegraphics[width=8.6cm]{Fig4.pdf} \caption{(Color online) Photoelectron angular distribution spectra calculated using the initial $(v=0, N=0)$ rovibrational state for the secondary ion exit channel $N^+=2$, which corresponds to the energy $E=4668.5$\,\mbox{cm$^{-1}$}, as a function of the ejection angle $\theta$ (varied from $0^{\circ}$ to $180^{\circ}$) and of the pulse duration $\tau$. Upper panel (a): Angular distribution spectrum $P(E,\theta)$ in a Cartesian view. Central panel (b): In a binocular view. Lower panel (c): In a polar view. The total pulse duration is varied from $\tau=50$\,fs to $\tau=15$\,ps. The laser wavelength is $\lambda=532$\,nm (laser Nd:YAG).} \label{fig:ANG_0_2} \end{figure} In Figs.\,\ref{fig:ANG_0_1} and \ref{fig:ANG_0_2}, we display the angular distribution spectra for the specific energies corresponding to the principal peak $(v_+=0, N^+=0)$ and to the secondary peak $(v_+=0, N^+=2)$, respectively. In each figure, the information is shown in three ways. In panel (a), the photoionization probability $P(E,\theta)$ is shown as a function of $\theta$, the angle between the polarization vector of the photon beam and the propagation vector of the ejected photoelectrons, for different values of the pulse duration.
In panel (b), the photoionization probability is plotted in polar coordinates for the same values of the pulse duration as in panel (a), where short-duration curves are plotted in front (black), while long-duration curves are plotted in the back (yellow) of the 3D picture. The shape of this figure suggests labeling this type of representation a ``binocular'' view of the angular distribution. Finally, in panel (c), a front projection of panel (b) is performed. The effect of the pulse duration on the angular distribution is strikingly different in the two cases. For the main ($v_+=0, N^+=0$) exit channel ($E=4671.6$\,\mbox{cm$^{-1}$}, Fig.\,\ref{fig:ANG_0_1}), the shape of the distribution is independent of the pulse duration. Moreover, the distribution is peaked towards $\theta=0^{\circ}$ and $180^{\circ}$. The photoelectrons are thus predominantly ejected in the direction parallel to the photon polarization. However, in Fig.\,\ref{fig:ANG_0_2} corresponding to the minor $(v_+=0,N^+=2)$ exit channel ($E=4668.5$\,\mbox{cm$^{-1}$}), the photoelectron angular distribution evolves from a peaked anisotropic shape similar to the preceding case for short pulses $(\tau < 5\,\mathrm{ps})$, to a quasi-isotropic shape for longer pulses $(\tau > 10\,\mathrm{ps})$. This behavior results from the overlap of the two exit channels: when the pulse is short enough so that its bandwidth is larger than the separation between the two rovibrational levels, the first channel $(v_+=0,N^+=0)$ contributes to the signal and eventually dominates, even at the kinetic energy which normally corresponds to the minor channel. To investigate the angular distribution further, we now analyze the evolution of the asymmetry parameter $\beta$ with the pulse duration. We recall that $\beta$ is defined from the angular differential photoionization cross section as \begin{equation} \frac{d\sigma}{d\Omega}=\frac{\sigma}{4\pi}\;\Big[1+\beta\,P_{2}(\cos\theta)\Big]\,, \label{eq: cross section} \end{equation} where $\sigma$ is the total photoionization cross section and $P_2(x)=(3x^2-1)/2$ is the Legendre polynomial of order 2. Using the notations adopted in the present study, we can write \begin{equation} P(E,\theta) \propto \Big[A+B\cos^2\theta\Big]\,, \end{equation} where $\beta$ can be deduced unambiguously from the relation \begin{equation} \beta = \frac{2B}{3A+B}\,. \label{eq:beta2} \end{equation} We remind the reader that $\beta$, which can take values between $-1$ and 2, gives a global idea of the shape of the angular distribution of photoelectrons. For instance, $\beta=0$ corresponds to an isotropic distribution, while $\beta=2$ and $-1$ correspond to peaked anisotropic distributions along the light polarization axis and along the perpendicular direction, respectively. It can be easily seen from Fig.\,\ref{fig:ANG_0_1}(b) that in this case $\beta$ is close to 2 and that in Fig.\,\ref{fig:ANG_0_2}(b) $\beta$ evolves from a value close to 2 for short pulses to a value close to 0 for long pulses. Indeed, the photoionization probabilities plotted in these figures are proportional to the differential photoionization cross section of Eq.(\ref{eq: cross section}). The aim of Section\,\ref{sec:extr} is to understand this evolution of $\beta$ as a function of energy and pulse duration.
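Extracting $\beta$ from a computed (or measured) angular distribution is straightforward. The following sketch is only an illustration, not part of the numerical method of Sec.\,\ref{sec:model}: it fits the form $A+B\cos^2\theta$ by linear least squares and applies Eq.\,(\ref{eq:beta2}); the two checks correspond to the limiting shapes mentioned above ($\beta=2$ for a distribution peaked along the polarization axis, $\beta=0$ for an isotropic one).
\begin{verbatim}
import numpy as np

def asymmetry_parameter(theta, p):
    """Estimate beta from samples of P(E, theta), assuming the form
    P ~ A + B cos^2(theta) and beta = 2B / (3A + B)."""
    x = np.cos(theta) ** 2
    design = np.column_stack([np.ones_like(x), x])    # columns: A, B
    (A, B), *_ = np.linalg.lstsq(design, p, rcond=None)
    return 2.0 * B / (3.0 * A + B)

theta = np.linspace(0.0, np.pi, 181)
print(asymmetry_parameter(theta, np.cos(theta) ** 2))     # peaked: beta = 2
print(asymmetry_parameter(theta, np.ones_like(theta)))    # isotropic: beta = 0
\end{verbatim}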
For a given energy corresponding to one of the two exit channels and for a given pulse duration, we extract from Fig.\,\ref{fig:ANG_0_1}(a) or Fig.\,\ref{fig:ANG_0_2}(a) the values of $P(E,\theta)$ along and perpendicular to the polarization axis, namely $P(E,0^{\circ})$ and $P(E,90^{\circ})$. One can easily show that the ratio $\rho=P(E,0^{\circ})/P(E,90^{\circ})$ is directly related to $\beta$ through \begin{equation} \beta=\frac{\rho-1}{\rho/2+1}\,. \end{equation} This very simple formula allows an accurate estimation of the asymmetry parameter for each energy and pulse duration. \begin{figure}[t!] \includegraphics[width=8.6cm]{Fig5.pdf} \caption{Asymmetry parameter $\beta$ as a function of the pulse duration varying from $\tau=50$\,fs to $\tau=15$\,ps for the two peaks corresponding to $N^+=0$ ($E=4671.6$\,\mbox{cm$^{-1}$}) and $N^+=2$ ($E=4668.5$\,\mbox{cm$^{-1}$}). It includes the full quantum simulation (green symbols) and two curves: one for the analytical model with the exact parameters (solid blue lines) and one for the numerical fit (dashed red lines). The inset shows the two accessible ionization pathways in competition. See text for details.} \label{fig:BETA_0_l0} \end{figure} In Fig.\,\ref{fig:BETA_0_l0}, we display the asymmetry parameter $\beta$ as a function of the pulse duration for the two exit channels $N^+=0$ and $N^+=2$ corresponding to $E=4671.6$\,\mbox{cm$^{-1}$} $\,$ and $E=4668.5$\,\mbox{cm$^{-1}$}, respectively. Three sets of data are plotted but we concentrate at the moment on the symbols (green squares and circles), which are the values extracted from our time-dependent quantum calculation. The other estimates correspond to an analytical model (solid blue lines) and to a numerical fit (dashed red lines) that we will discuss in Section\,\ref{sec:extr}. We can already note that the agreement between the three sets of results is very good, demonstrating that the analytical model and the numerical fit allow for a good description of the dynamics of the system. For very short pulses, the two curves (for both exit channels) converge to the same limit $\beta \simeq 1.69$. This behavior is correlated with the overlap effect that we have already discussed. Finally, as the pulse duration increases, the two curves separate and reach their asymptotic values $\beta= 0.2$ and $\beta=2$ for the $N^+=2$ and $N^+=0$ channels, respectively. These exact values can be derived easily. Indeed, for a given $N^+$ the photoionization probability as a function of $\theta$, deduced from Eq.\,(\ref{eq:PNplus(E,theta)simple}), can be written in the CW limit as \begin{equation} P_{N^+}(E,\theta) \propto \sum_{m} \,|Y_{\ell m}(\theta,\phi)|^2\,|{\cal{M}}_{\ell m}^{N,N^+}|^2\,, \end{equation} where ${\cal{M}}_{\ell m}^{N,N^+}$ is the electric dipole matrix element between the initial and final states \cite{PRA.74.033407}. For each exit channel, the value of $\beta$ can be deduced unambiguously from Eq.\,(\ref{eq:beta2}). Indeed, for $N^+=0$ we find $P_{0}(E,\theta) \propto \cos^2\theta$, giving $A=0$ and therefore $\beta=2$. On the other hand, for $N^+=2$, we have $P_{2}(E,\theta) \propto (3 + \cos^{2}\theta)$, {\it i.e.} $A=3$, $B=1$ and thus $\beta=1/5=0.2$, as expected. \begin{figure}[t!] \includegraphics[width=8.6cm]{Fig6.pdf} \caption{(Color online) Photoelectron spectra calculated with the initial rovibrational level $(v=0,N=2)$ as a function of the electron kinetic energy $E$ and of the pulse duration $\tau$.
The laser wavelength is $\lambda=532$\,nm (laser Nd:YAG).} \label{fig:Photoelectron_Spectrum2} \end{figure} \subsubsection{Excited rotational case $N=2$} The main difference when starting initially from the $(v=0,N=2)$ excited rovibrational level is that there are three exit channels corresponding to the three ion rotational quantum numbers $N^+=0$, 2 and 4 when a $p$ electron is emitted. The overlap effect discussed in the previous Section is therefore expected to be richer and more complex, especially for short pulse durations. \begin{figure}[t!] \includegraphics[width=8.6cm]{Fig7.pdf} \caption{(Color online) Binocular view of the angular distribution spectra for the initial state $(v=0,N=2)$ as a function of $\theta$ and of the total pulse duration $\tau$. Upper panel (a): Exit channel $N^+=4$ corresponding to the energy $E=4664$\,\mbox{cm$^{-1}$}. Central panel (b): Exit channel $N^+=2$ corresponding to the energy $E=4671$\,\mbox{cm$^{-1}$}. Lower panel (c): Exit channel $N^+=0$ corresponding to the energy $E=4674$\,\mbox{cm$^{-1}$}. The total pulse duration is varied from $\tau=50$\,fs to $\tau=15$\,ps. The laser wavelength is $\lambda=532$\,nm (laser Nd:YAG).} \label{fig:ANG_2} \end{figure} Fig.\,\ref{fig:Photoelectron_Spectrum2} shows the photoionization probability as a function of the electron kinetic energy for various pulse durations. In a way similar to Fig.\,\ref{fig:Photoelectron_Spectrum}, for very short pulses, the spectra show a broad structureless single peak. However, this compound structure splits into three well-separated peaks for long pulse durations. Moreover, the principal peak at $E= 4671$\,\mbox{cm$^{-1}$} $\,$ corresponds to the main exit channel $N^+=2$, in accordance with the propensity rule $N^+=N$. The two other exit channels give rise to two satellite peaks on both sides of the main peak, namely a lower-energy peak around $E=4664$\,\mbox{cm$^{-1}$} $\,$ corresponding to $N^+=4$ and a higher-energy peak near $E=4674$\,\mbox{cm$^{-1}$} $\,$ for $N^+=0$. In Fig.\,\ref{fig:ANG_2}, the photoelectron angular distributions at these three energies are plotted in the previously defined binocular view as a function of the ionization angle $\theta$ for various pulse durations. The three channels $N^+=4$, $N^+=2$ and $N^+=0$ are shown in panels (a), (b) and (c), respectively. Here again, the predominant channel $N^+=2$ is almost unaffected by the pulse duration, whereas the satellite channels present angular distributions that evolve from peaked shapes at short durations to quasi-isotropic shapes for very long pulses. The evolution shown in Fig.\,\ref{fig:ANG_2}(c) for $N^+=0$ is progressive and similar to that of Fig.\,\ref{fig:ANG_0_2}(b). However, in the case of the $N^+=4$ exit channel, the transition from an anisotropic photoelectron angular distribution to a quasi-isotropic distribution is sharp, occurring around $\tau\simeq 3$\,ps. The asymmetry parameter $\beta$ for the three channels is displayed in Fig.\,\ref{fig:BETA_0_l2} as a function of the pulse duration $\tau$. It shows an evolution similar to that of Fig.\,\ref{fig:BETA_0_l0}, which showed the evolution of the same parameter for the initial rovibrational state $N=0$.
For very short pulses, the three curves become degenerate at the value $\beta \simeq 1.69$, while they fully separate in the intermediate range 1\,ps\;$\leqslant\tau\leqslant$\;10\,ps, before reaching their asymptotic values $\beta \simeq 1.9$ for the principal exit channel $N^+=2$ ($E=4671$\,\mbox{cm$^{-1}$}) and $\beta \simeq 0.2$ for the secondary exit channels $N^+=0$ ($E=4674$\,\mbox{cm$^{-1}$}) and $N^+=4$ ($E=4664$\,\mbox{cm$^{-1}$}). Here again, we have a very good agreement between the values calculated using the time-dependent quantum simulation, the analytical model and the numerical fit for all exit channels. This analytical model and the numerical fit will be described in the next section. \subsection{Extraction of spectroscopic molecular parameters} \label{sec:extr} In contrast with the CW case, for short pulse durations, a given electron kinetic energy $E$ may be associated with different exit channels $N^+$. This effect is simply due to the increased bandwidth of the pulse which leads to non-zero contributions of different $N^+$ channels, as seen in Eq.\,(\ref{eq:P(E,theta)simple}). The strong variation of the asymmetry parameter $\beta$ with the pulse duration can thus be easily rationalized from the fact that the electron angular distribution is an incoherent sum over the different exit channels, as written in Eq.\,(\ref{eq:P(E,theta)simple}). As a consequence, the differential ionization probability can be written as \begin{equation} P(E,\theta) \propto \Big[1+\beta(\tau)\,P_{2}(\cos\theta)\Big] \end{equation} for an arbitrary pulse duration $\tau$, where $\beta(\tau)$ is given by a weighted average of the CW $\beta$-values of the different individual exit channels $N^+$ \begin{equation} \beta(\tau) = \frac{\sum_j P_j\,\tilde{f}_j\,\beta_j}{\sum_j P_j\,\tilde{f}_j}\,, \label{eq:beta_model} \end{equation} where $P_j$ is the total ionization probability in the exit channel number $j$, $\tilde{f}_j$ denotes the field amplitude at the frequency corresponding to the energy of the exit channel number $j$, and $\beta_j$ is the asymptotic CW value of the asymmetry parameter of channel $j$. Note that since the probabilities $P_j$ are proportional to the square of the electric dipole moment, $\mu_j^2$, between the initial state and a given final state $j$, Eq.\,(\ref{eq:beta_model}) can equivalently be written as \begin{equation} \beta(\tau) = \frac{\sum_j \mu_j^2\,\tilde{f}_j\,\beta_j}{\sum_j \mu_j^2\,\tilde{f}_j}\,. \label{eq:beta_model2} \end{equation} We have used Eq.\,(\ref{eq:beta_model2}) to compare, in Figs.\,\ref{fig:BETA_0_l0} and \ref{fig:BETA_0_l2}, the result obtained from the numerical solution of the time-dependent Schr\"odinger equation (solid green squares and circles) with the analytical model (solid blue lines), and the agreement is extremely good. Another advantage of Eq.\,(\ref{eq:beta_model2}) is that it allows one, from a limited number of measurements performed with ultrashort laser pulses, to extract the asymptotic CW asymmetry parameters $\beta_j$ of the different exit channels as well as the relative electric dipole moments $\mu_j^2$ by using a simple numerical fitting procedure. We have shown the results of such a numerical fit using Eq.\,(\ref{eq:beta_model2}) in Figs.\,\ref{fig:BETA_0_l0} and \ref{fig:BETA_0_l2} as red dashed lines.
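To make this more concrete, Eq.\,(\ref{eq:beta_model2}) can be evaluated in a few lines of NumPy. The sketch below is only illustrative: it assumes the $\sin^2$ envelope defined in Sec.\,\ref{sec:model}, takes $\tilde{f}_j$ as the modulus of the Fourier transform of that envelope at the detuning of channel $j$ from the observation energy (our reading of the definition above), and uses a placeholder value for the relative dipole strength of the weaker channel; only the $3.1$\,\mbox{cm$^{-1}$} channel spacing is taken from the $N=0$ spectra discussed above.
\begin{verbatim}
import numpy as np

CM1_TO_RAD_FS = 2.0 * np.pi * 2.99792458e-5     # 1 cm^-1 in rad/fs

def envelope_amplitude(detuning_cm1, tau_fs, n=2048):
    """Spectral amplitude |f~| of the sin^2 envelope at a given detuning."""
    t = np.linspace(0.0, 2.0 * tau_fs, n)
    f = np.sin(np.pi * t / (2.0 * tau_fs)) ** 2
    return abs(np.trapz(f * np.exp(1j * detuning_cm1 * CM1_TO_RAD_FS * t), t))

def beta_of_tau(tau_fs, channels):
    """Weighted average of the CW beta_j with weights mu_j^2 * f~_j
    (the expression for beta(tau) given above)."""
    w = np.array([mu2 * envelope_amplitude(d, tau_fs) for d, mu2, _ in channels])
    b = np.array([beta for _, _, beta in channels])
    return float(np.sum(w * b) / np.sum(w))

# two-channel example patterned on the N = 0 case, observed at the N+ = 0 peak:
# (detuning from the observation energy in cm^-1, relative mu_j^2, CW beta_j);
# the dipole ratio 0.2 is a placeholder, only the 3.1 cm^-1 spacing is taken
# from the spectra discussed above
channels = [(0.0, 1.0, 2.0), (3.1, 0.2, 0.2)]
for tau_ps in (0.05, 1.0, 5.0, 15.0):
    print(tau_ps, "ps ->", round(beta_of_tau(tau_ps * 1000.0, channels), 3))
\end{verbatim}
With the placeholder ratio $\mu_2^2/\mu_0^2=0.2$, the short-pulse limit of this weighted average lies close to the value $\beta\simeq1.7$ quoted above, while $\beta\rightarrow2$ is recovered for long pulses, reproducing the qualitative behavior of Fig.\,\ref{fig:BETA_0_l0}.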
The fitting parameters are the values of $\beta_j$ and $\mu_j^2$ since we consider that for a given pulse duration $\tau$ the Fourier transform $\tilde{f}_j$ of the electric field $E(t)$ at the frequency corresponding to each exit channel is known. The agreement of the numerical fit with the numerical solution of the time-dependent Schr\"odinger equation is excellent. For example, using the 5 values of $\beta$ shown with green symbols in Fig.\,\ref{fig:BETA_0_l0} for $\tau \leqslant 4.5$\,ps (on the left of the dashed vertical line), we obtained $\beta_0=2.0000$ instead of 2 and $\beta_2=0.19995$ instead of 0.2 for the initial state $N=0$. One can note in Fig.\,\ref{fig:Photoelectron_Spectrum} that for these very short pulses the rotation is not yet resolved in the kinetic energy spectrum, which shows a single broad peak. Indeed, for such ultrashort pulses, the pulse duration is smaller than the rotational period of both \mbox{Li$_{2}$\ } and Li$_{2}^{+}$. Similarly, using the 5 values of $\beta$ shown with green symbols in Fig.\,\ref{fig:BETA_0_l2} for $\tau \leqslant 4.5$\,ps, we obtained $\beta_0=0.2099$ instead of 0.2 (error $<$ 5\%), $\beta_2=1.897$ instead of 1.9 (error $<$ 1\%) and $\beta_4=0.2075$ (error $<$ 4\%) instead of 0.2 for the initial state $N=2$. The numerical values obtained with this fitting procedure for the relative electric dipole moments $\mu_j^2$ are also correct within 5\%. \begin{figure}[t!] \includegraphics[width=8.6cm]{Fig8.pdf} \caption{Asymmetry parameter $\beta$ as a function of the pulse duration from $\tau=50$\,fs to $\tau=15$\,ps for the three peaks corresponding to $N^+=0$ ($E=4674$\,\mbox{cm$^{-1}$}), $N^+=2$ ($E=4671$\,\mbox{cm$^{-1}$}) and $N^+=4$ ($E=4664$\,\mbox{cm$^{-1}$}). It includes the quantum simulation with green symbols and 2 curves: one for the analytical model with the exact parameters (solid blue lines) and one for the numerical fit (red dashed lines). The inset shows the three accessible ionization pathways in competition. See text for details} \label{fig:BETA_0_l2} \end{figure} With a limited number of measurements performed with very short laser pulses it is thus possible to extract accurate values of some spectroscopic molecular parameters such as the asymmetry parameters of the different exit channels, thanks to the simplicity of Eq. (\ref{eq:beta_model2}), and therefore to the efficiency of the associated numerical fit. \section{Conclusion} \label{sec:conclusion} In this paper, we have presented a theoretical study of the short pulse ionization of the \mbox{Li$_{2}$\ } molecule in the fs to ps regime, solving the time-dependent Schr\"odinger equation for both the nuclear and the electronic dynamics. We have concentrated our study on the calculation of the energy-resolved angular distributions of the emitted photoelectrons. We have shown that when the ionizing pulse is much shorter than the typical rotational period of the molecule the kinetic energy distribution is characterized by a single broad peak which comprises the contribution of the different ion rotational levels involved. In this ultrafast regime, one would normally expect that it is impossible to extract spectroscopic molecular parameters such as the asymmetry parameters $\beta$ associated with each individual ion rotational level. In this paper we have shown however that it is in fact possible to extract such values from a limited number of measurements performed with different pulse durations. 
These values can be obtained from a numerical fitting procedure, and we have shown that the accuracy of this procedure is of the order of 5\% or better. In the future, we plan to extend the present model to the study of the dissociative ionization of Li$_{2}$. This would make it possible to extract more information from coincidence measurements \cite{Dowek,MP.110.131}. \section*{Acknowledgments} We acknowledge the use of the computing cluster GMPCS of the LUMAT federation (FR 2764 CNRS). We acknowledge CINES, France for providing access and support to their computing platform Occigen under project AP-010810188. MT acknowledges an invited Professor position at Paris-Sud University, during which part of this work was conducted.
\section{Introduction} \begin{figure}[!t] \centerline{\includegraphics[width=\columnwidth]{fig4.pdf}} \caption{A comparison of the speed-accuracy trade-off on the Cityscapes test set. The red triangles indicate our methods while blue triangles represent other methods. Green circles represent architecture search methods.} \label{fig4} \end{figure} \IEEEPARstart{S}{emantic} segmentation is a fundamental task in which each pixel of the input image should be assigned to the corresponding label\cite{8006236,8760555,9126262}. It plays a vital role in many practical applications, such as medical image segmentation and the navigation of autonomous vehicles and robots\cite{8264783,romera2017erfnet}. With the rise of deep learning technologies, convolutional neural networks have been applied to image segmentation and greatly outperform traditional methods based on handcrafted features. Since the fully convolutional network (FCN)\cite{long2015fully} was proposed to handle semantic segmentation problems, a series of novel networks have been proposed. DeepLab\cite{chen2014semantic} eliminates some of the downsampling in ResNet to maintain high resolution and utilizes convolutions with large dilations\cite{mallat1999wavelet} to enlarge receptive fields. Since then, backbones based on dilated convolutions combined with context extraction modules have become the standard layout widely used in a variety of methods, including DeepLabV2\cite{chen2017deeplab}, DeepLabV3\cite{chen2017rethinking}, PSPNet\cite{zhao2017pyramid}, and DenseASPP\cite{yang2018denseaspp}.
Since semantic segmentation is a dense prediction task, neural networks need to output high-resolution feature maps with large receptive fields to produce satisfactory results, which is computationally expensive. This problem is especially critical for scene parsing in autonomous driving, which requires inference on very large images to cover a wide field of view. Therefore, the above methods are very time-consuming in the inference stage and cannot be directly deployed on actual autonomous vehicles. Because they rely on multi-scale testing to improve accuracy, they cannot even process an image within one second. With the ever-increasing demand for mobile device deployment, real-time segmentation algorithms\cite{paszke2016enet,9040271,zhao2018icnet,mehta2018espnet,9032321} are receiving more and more attention. DFANet\cite{li2019dfanet} employs deep multi-scale feature aggregation and lightweight depthwise separable convolutions, achieving 71.3$\%$ test mIoU at 100 FPS. Different from the encoder-decoder paradigm, the authors of \cite{yu2018bisenet} propose a novel bilateral network composed of a spatial path and a context path. Specifically, the spatial path utilizes three relatively wide 3$\times$3 convolutional layers to capture spatial details, and the context path is a compact pre-trained backbone for extracting contextual information. Such bilateral methods, including \cite{poudel2019fast}, achieved higher inference speed than encoder-decoder structures at that time. Recently, some competitive real-time methods aiming at semantic segmentation of road scenes have been proposed. These methods can be divided into two categories. One utilizes GPU-efficient backbones, especially ResNet-18\cite{orsic2019defense,hu2020real,li2020semantic}. The other develops complex lightweight encoders trained from scratch; one of them, BiSeNetV2\cite{yu2020bisenet}, reaches a new peak in real-time performance, achieving 72.6$\%$ test mIoU at 156 FPS on Cityscapes. However, except for \cite{li2020semantic}, which uses extra training data, these recent works do not show the potential for higher-quality results. Some of them suffer from a lack of scalability due to deliberately designed architectures and tuned hyper-parameters. Additionally, ResNet-18 offers little advantage given the availability of more powerful backbones. In this paper, we propose dual-resolution networks with deep high-resolution representations for real-time semantic segmentation of high-resolution images, especially road-driving images. Our DDRNets start from one trunk and then divide into two parallel deep branches with different resolutions. One deep branch generates relatively high-resolution feature maps and the other extracts rich semantic information through multiple downsampling operations. Multiple bilateral connections are bridged between the two branches to achieve efficient information fusion. Besides, we propose a novel module named DAPPM which takes low-resolution feature maps as input, extracts multi-scale contextual information, and merges it in a cascaded way. Before training on semantic segmentation datasets, the dual-resolution networks are trained on ImageNet following common paradigms. According to extensive experimental results on three popular benchmarks (\emph{i.e.}, Cityscapes, CamVid, and COCOStuff), DDRNets attain an excellent balance between segmentation accuracy and inference speed.
Our method achieves new state-of-the-art accuracy on both Cityscapes and CamVid compared with other real-time algorithms, without attention mechanisms or extra bells and whistles. With standard test augmentation, DDRNet is comparable to state-of-the-art models and requires far fewer computing resources. We also report statistically meaningful results (averages and standard deviations over several runs) and conduct ablation experiments to analyze the effect of architecture improvements and standard training tricks. The main contributions are summarized as follows: \begin{itemize} \item A family of novel bilateral networks with deep dual-resolution branches and multiple bilateral fusions is proposed as efficient backbones for real-time semantic segmentation. \item A novel module is designed to harvest rich context information by combining feature aggregation with pyramid pooling. When executed on low-resolution feature maps, it leads to little increase in inference time. \item Our method achieves a new state-of-the-art trade-off between accuracy and speed on a 2080Ti: 77.4$\%$ mIoU at 102 FPS on the Cityscapes test set and 74.7$\%$ mIoU at 230 FPS on the CamVid test set. To the best of our knowledge, we are the first to achieve 80.4$\%$ mIoU in nearly real time (22 FPS) on Cityscapes using only fine annotations. \end{itemize} \section{Related Work} In recent years, methods based on dilated convolutions have boosted the performance of semantic segmentation in many challenging scenes, and pioneering works have explored more possibilities for lightweight architectures such as the encoder-decoder and two-pathway designs. In addition, contextual information has proved to be crucial for scene parsing tasks. In this section, we group the related works into three categories, \emph{i.e.}, high-performance semantic segmentation, real-time semantic segmentation, and context extraction modules. \begin{figure*}[!t] \centerline{\includegraphics[width=\textwidth]{fig6.pdf}} \caption{A comparison of dilation methods, encoder-decoder methods, two-pathway methods, and our deep dual-resolution network.} \label{fig6} \end{figure*} \subsection{High-performance Semantic Segmentation} The output of the last layer of a common encoder cannot be used directly to predict segmentation masks due to the lack of spatial details. Moreover, if the downsampling of classification backbones is simply removed, the effective receptive fields become too small to learn high-level semantic information. An acceptable strategy is to utilize dilated convolutions to set up long-range connections between pixels while removing the last two downsampling layers\cite{chen2017rethinking,zhao2017pyramid}, as shown in Fig. \ref{fig6} (a). However, it also poses new challenges to real-time inference due to the rapid growth of high-resolution feature-map dimensions and the inadequately optimized implementation of dilated convolutions. In fact, most state-of-the-art models are built on dilation backbones and are therefore largely unsuitable for scene parsing in self-driving. Some works attempt to explore substitutes for the standard dilation backbones. The authors of DeepLabv3plus\cite{chen2018encoder} propose a simple decoder that fuses upsampled feature maps with low-level feature maps. It alleviates the requirement for high-resolution feature maps generated directly from dilated convolutions. DeepLabv3plus can achieve competitive results even though the output stride of the encoder is set to 16.
HRNet\cite{sun2019high} highlights deep high-resolution representations and shows higher efficiency than dilation backbones. We find that the higher computational efficiency and inference speed of HRNet stem from its much thinner high-resolution information flows. Taking HRNetV2-W48 as an example, the dimensions of the 1/4-resolution and 1/8-resolution features are 48 and 96, respectively, which are much smaller than those of pre-trained ResNets\cite{he2016deep} with dilated convolutions. Although the high-resolution branches of HRNet are much thinner, they can be greatly enhanced by parallel low-resolution branches and repeated multi-scale fusions. Our work begins with deep, thin, high-resolution representations and puts forward more compact architectures. They maintain high-resolution representations and extract high-level contextual information simultaneously through two concise trunks. \subsection{Real-time Semantic Segmentation} Almost all real-time semantic segmentation models adopt one of two basic approaches: encoder-decoder methods and two-pathway methods. Lightweight encoders, which play a significant role in both approaches, are also discussed. \subsubsection{Encoder-decoder Architecture} Compared to models based on dilated convolutions, encoder-decoder architectures intuitively require less computation and inference time. The encoder is usually a deep network with repeated spatial reductions to extract contextual information, and the decoder restores the resolution by interpolation or transposed convolution\cite{zeiler2010deconvolutional} to complete dense predictions, as shown in Fig. \ref{fig6} (b). Specifically, the encoder can be a lightweight backbone pre-trained on ImageNet or an efficient variant trained from scratch like ERFNet\cite{romera2017erfnet} and ESPNet\cite{mehta2018espnet}. SwiftNet\cite{orsic2019defense} defends the advantage of pre-training encoders on ImageNet and leverages lightweight lateral connections to assist with upsampling. The authors of \cite{si2019real} propose a strategy of multiply spatial fusion and class boundary supervision. FANet \cite{hu2020real} achieves a good trade-off between speed and accuracy with fast attention modules and extra downsampling throughout the network. SFNet\cite{li2020semantic} introduces a Flow Alignment Module (FAM) to align feature maps of adjacent levels for better fusion. \subsubsection{Two-pathway Architecture} The encoder-decoder architecture reduces the computational effort, but some information lost during repeated downsampling cannot be completely recovered by upsampling, which impairs the accuracy of semantic segmentation. The two-pathway architecture was proposed to alleviate this problem\cite{yu2018bisenet}, as shown in Fig. \ref{fig6} (c). In addition to a pathway for extracting semantic information, a second shallow, high-resolution pathway provides rich spatial details as a supplement. To further improve the accuracy, BiSeNetV2\cite{yu2020bisenet} uses global average pooling for context embedding and proposes attention-based feature fusion. The two pathways in BiSeNetV1$\&$V2 are initially separate, while the two branches in Fast-SCNN\cite{poudel2019fast} share the learning-to-downsample module. CABiNet\cite{kumaar2020cabinet} adopts the overall architecture of Fast-SCNN but uses MobileNetV3\cite{howard2019searching} as the context branch.
Unlike existing two-pathway methods, the deep and thin high-resolution branch of DDRNets enables multiple feature fusions and sufficient ImageNet pre-training while preserving inference efficiency. Our method can be easily scaled to achieve higher accuracy (above 80$\%$ mIoU on Cityscapes). \subsubsection{Lightweight Encoders} There are many computationally efficient backbones that can be used as the encoder, such as MobileNet\cite{howard2017mobilenets}, ShuffleNet\cite{zhang2018shufflenet}, and the small version of Xception\cite{chollet2017xception}. MobileNet replaces standard convolutions with depthwise separable convolutions to reduce parameters and computation. The strong regularization effect of depthwise separable convolutions is alleviated by inverted residual blocks in MobileNetV2\cite{sandler2018mobilenetv2}. ShuffleNet utilizes the compactness of grouped convolutions and proposes a channel shuffle operation to facilitate information fusion between different groups. However, these networks contain numerous depthwise separable convolutions which cannot be efficiently implemented on existing GPU architectures. Therefore, although the FLOPs of ResNet-18\cite{he2016deep} are about six times those of MobileNetV2 1.0$\times$, the inference speed of the former is higher than that of the latter on a single 1080Ti GPU\cite{orsic2019defense}. However, the existing lightweight backbones may be suboptimal for semantic segmentation because they are usually overly tuned for image classification. \subsection{Context Extraction Modules} Another key to semantic segmentation is how to capture richer contextual information. Atrous Spatial Pyramid Pooling (ASPP)\cite{chen2017deeplab} consists of parallel atrous convolutional layers with different rates, which can attend to multi-scale contextual information. The Pyramid Pooling Module (PPM)\cite{zhao2017pyramid} in PSPNet is more computationally efficient than ASPP because it performs pyramid pooling before the convolutional layers. Unlike convolutional kernels, which are local in nature, the self-attention mechanism is good at capturing global dependencies. The Dual Attention Network (DANet)\cite{fu2019dual} takes advantage of both position attention and channel attention to further improve feature representations. The Object Context Network (OCNet)\cite{yuan2018ocnet} utilizes the self-attention mechanism to explore the object context, defined as the set of pixels belonging to the same object category. The authors of CCNet\cite{huang2019ccnet} propose criss-cross attention to improve memory and computational efficiency. However, these context extraction modules are designed for and applied to high-resolution feature maps, which is too time-consuming for lightweight models. Taking the low-resolution feature maps as input, we strengthen the PPM module with more scales and deep feature aggregation. When appended to the end of the low-resolution branch, the proposed module outperforms the PPM and the Base-OC module of OCNet. \section{Method} \begin{table*}[] \caption{The architectures of DDRNet-23-slim and DDRNet-39 for ImageNet. `conv4$\times r$' denotes that conv4 is repeated $r$ times.
For DDRNet-23-slim, $r = 1$ and for DDRNet-39, $r = 2$.} \label{tab:3} \begin{tabular}{p{40pt}<{\centering}|p{60pt}<{\centering}|p{85pt}<{\centering}|p{85pt}<{\centering}|p{85pt}<{\centering}|p{85pt}<{\centering}} \toprule stage &output & \multicolumn{2}{c|}{DDRNet-23-slim}& \multicolumn{2}{c}{DDRNet-39} \\ \hline conv1 &$112\times112$&\multicolumn{2}{c|}{$3\times3, 32$, stride 2}& \multicolumn{2}{c}{$3\times3, 64$, stride 2} \\ \hline \multirow{3}{*}{conv2}&\multirow{3}{*}{$56\times56$} & \multicolumn{2}{c|}{$3\times3, 32$, stride 2}& \multicolumn{2}{c}{$3\times3, 64$, stride 2} \\ \cline{3-6} & & \multicolumn{2}{c|}{$\left[\begin{aligned}&3 \times 3, 32\\&3 \times 3, 32\end{aligned}\right] \times 2$} & \multicolumn{2}{c}{$\left[\begin{aligned}&3 \times 3, 64\\&3 \times 3, 64\end{aligned}\right] \times 3$}\\ \hline conv3 &$28\times28$ & \multicolumn{2}{c|}{$\left[\begin{aligned}&3 \times 3, 64\\&3 \times 3, 64\end{aligned}\right] \times 2$}& \multicolumn{2}{c}{$\left[\begin{aligned}&3 \times 3, 128\\&3 \times 3, 128\end{aligned}\right] \times 4$}\\\hline \multirow{2}{*}{conv4$\times r$} & \multirow{2}{*}{$14\times14, 28\times28$}& $\left[\begin{aligned}&3 \times 3, 128\\&3 \times 3, 128\end{aligned}\right] \times 2$ &$\left[\begin{aligned}&3 \times 3, 64\\&3 \times 3, 64\end{aligned}\right] \times 2$ & $\left[\begin{aligned}&3 \times 3, 256\\&3 \times 3, 256\end{aligned}\right] \times 3$ &$\left[\begin{aligned}&3 \times 3, 128\\&3 \times 3, 128\end{aligned}\right] \times 3$ \\ \cline{3-6} & & \multicolumn{2}{c|}{Bilateral fusion} & \multicolumn{2}{c}{Bilateral fusion} \\ \hline \multirow{6}{*}{conv5\_1} & \multirow{6}{*}{$7\times7, 28\times28$}& $\left[\begin{aligned}&3 \times 3, 256\\&3 \times 3, 256\end{aligned}\right] \times 2$ &$\left[\begin{aligned}&3 \times 3, 64\\&3 \times 3, 64\end{aligned}\right] \times 2$ & $\left[\begin{aligned}&3 \times 3, 512\\&3 \times 3, 512\end{aligned}\right] \times 3$ &$\left[\begin{aligned}&3 \times 3, 128\\&3 \times 3, 128\end{aligned}\right] \times 3$ \\ \cline{3-6} & & \multicolumn{2}{c|}{Bilateral fusion} & \multicolumn{2}{c}{Bilateral fusion} \\ \cline{3-6} & & $\left[\begin{aligned}&1 \times 1, 256\\&3 \times 3, 256\\&1 \times 1, 512\end{aligned}\right] \times 1$ & $\left[\begin{aligned}&1 \times 1, 64\\&3 \times 3, 64\\&1 \times 1, 128\end{aligned}\right] \times 1$ & $\left[\begin{aligned}&1 \times 1, 512\\&3 \times 3, 512\\&1 \times 1, 1024\end{aligned}\right] \times 1$ &$\left[\begin{aligned}&1 \times 1, 128\\&3 \times 3, 128\\&1 \times 1, 256\end{aligned}\right] \times 1$ \\ \hline \multirow{2}{*}{conv5\_2}& \multirow{2}{*}{$7\times7$}& \multicolumn{2}{c|}{High-to-low fusion} & \multicolumn{2}{c}{High-to-low fusion} \\ \cline{3-6} & & \multicolumn{2}{c|}{$1\times1, 1024$} & \multicolumn{2}{c}{$1\times1, 2048$} \\ \hline &\multirow{2}{*}{$1\times1$} & \multicolumn{2}{c|}{$7\times7$ global average pool} & \multicolumn{2}{c}{$7\times7$ global average pool} \\ \cline{3-6} & & \multicolumn{2}{c|}{1000-d fc, softmax} & \multicolumn{2}{c}{1000-d fc, softmax} \\ \bottomrule \end{tabular} \end{table*} In this section, the whole pipeline is described, which consists of two main components: the Deep Dual-resolution Network and the Deep Aggregation Pyramid Pooling Module. \subsection{Deep Dual-resolution Network} For convenience, we can add an additional high-resolution branch to the widely used classification backbone such as ResNets. 
To achieve a trade-off between resolution and inference speed, we let the high-resolution branch create feature maps whose resolution is 1/8 of the input image resolution. Therefore, the high-resolution branch is appended to the end of the conv3 stage. Note that the high-resolution branch does not contain any downsampling operations and has a one-to-one correspondence with the low-resolution branch to form deep high-resolution representations. Then multiple bilateral feature fusions can be performed at different stages to fully fuse the spatial information and semantic information. The detailed architectures of DDRNet-23-slim and DDRNet-39 are shown in Table \ref{tab:3}. We modify the input stem of the original ResNet, replacing the single 7$\times$7 convolutional layer with two sequential 3$\times$3 convolutional layers. Residual basic blocks are utilized to construct the trunk and the subsequent two branches. To expand the output dimension, one bottleneck block is added at the end of each branch. The bilateral fusion includes fusing the high-resolution branch into the low-resolution branch (high-to-low fusion) and fusing the low-resolution branch into the high-resolution branch (low-to-high fusion). For high-to-low fusion, high-resolution feature maps are downsampled by a sequence of 3$\times$3 convolutions with a stride of 2 prior to pointwise summation. For low-to-high fusion, low-resolution feature maps are first compressed by a 1$\times$1 convolution and then upsampled with bilinear interpolation. Fig. \ref{fig5} shows how bilateral fusion is implemented. The $i$-th high-resolution feature maps $X_{Hi}$ and low-resolution feature maps $X_{Li}$ can be written as: \begin{equation} \left\{ \begin{aligned} &X_{Hi} = R(F_H(X_{H(i-1)}) + T_{L-H}(F_L(X_{L(i-1)}))) \\ &X_{Li} = R(F_L(X_{L(i-1)}) + T_{H-L}(F_H(X_{H(i-1)}))) \end{aligned} \right. \label{eq1} \end{equation} where $F_H$ and $F_L$ denote the sequences of residual basic blocks at high and low resolution, $T_{L-H}$ and $T_{H-L}$ refer to the low-to-high and high-to-low transformations, and $R$ denotes the ReLU function. In total, we construct four dual-resolution networks of different depths and widths. DDRNet-23 is twice as wide as DDRNet-23-slim, and DDRNet-39 1.5$\times$ is a wider version of DDRNet-39. \begin{figure}[H] \centerline{\includegraphics[width=\columnwidth]{fig5.pdf}} \caption{The details of bilateral fusion in DDRNet. Pointwise summation is performed before the ReLU.} \label{fig5} \end{figure} \begin{figure*} \centerline{\includegraphics[width=\textwidth]{fig1.pdf}} \caption{The overview of DDRNets for semantic segmentation. ``RB'' denotes sequential residual basic blocks. ``RBB'' denotes the single residual bottleneck block. ``DAPPM'' denotes the Deep Aggregation Pyramid Pooling Module. ``Seg. Head'' denotes the segmentation head. Black solid lines denote information paths with data processing (including upsampling and downsampling) and black dashed lines denote information paths without data processing. ``sum'' denotes pointwise summation. Dashed boxes denote components that are discarded in the inference stage.} \label{fig1} \end{figure*} \subsection{Deep Aggregation Pyramid Pooling Module} \label{deep} \begin{figure}[!t] \centerline{\includegraphics[width=\columnwidth]{fig2.pdf}} \caption{The detailed architecture of the Deep Aggregation Pyramid Pooling Module.
The number of multi-scale branches can be adjusted according to the input resolution.} \label{fig2} \end{figure} Here, we propose a novel module to further extract contextual information from low-resolution feature maps. Fig. \ref{fig2} shows the interior structure of the DAPPM. Taking feature maps of 1/64 image resolution as input, large pooling kernels with exponentially increasing strides are applied to generate feature maps of 1/128, 1/256, and 1/512 image resolution. The input feature maps and image-level information generated by global average pooling are also utilized. We argue that a single 3$\times$3 or 1$\times$1 convolution is inadequate to blend all the multi-scale contextual information. Inspired by Res2Net\cite{gao2019res2net}, we first upsample the feature maps and then use more 3$\times$3 convolutions to fuse contextual information of different scales in a hierarchical-residual way. Considering an input $x$, each scale $y_i$ can be written as: \begin{equation} y_i=\left\{ \begin{aligned} &C_{1\times1}(x), & i = 1 ; \\ &C_{3\times3}(U(C_{1\times1}(P_{2^i+1,2^{i-1}}(x)))+y_{i-1}), &1<i<n ; \\ &C_{3\times3}(U(C_{1\times1}(P_{global}(x)))+y_{i-1}), &i=n . \end{aligned} \right. \label{eq2} \end{equation} where $C_{1\times1}$ denotes a $1\times1$ convolution, $C_{3\times3}$ a $3\times3$ convolution, $U$ the upsampling operation, $P_{j,k}$ a pooling layer with kernel size $j$ and stride $k$, and $P_{global}$ global average pooling. In the end, all feature maps are concatenated and compressed using a 1$\times$1 convolution. Besides, a 1$\times$1 projection shortcut is added for easy optimization. Similar to the SPP in SwiftNet\cite{orsic2019defense}, DAPPM is implemented with the sequence BN-ReLU-Conv. Inside a DAPPM, contexts extracted by larger pooling kernels are integrated with deeper information flows, and the multi-scale nature is formed by combining different depths with different pooling kernel sizes. Table \ref{tab:6} shows that DAPPM is able to provide much richer context than the PPM. Although DAPPM contains more convolutional layers and a more complex fusion strategy, it hardly affects the inference speed because its input resolution is only 1/64 of the image resolution. For example, with a 1024$\times$1024 image, the maximum resolution of its feature maps is 16$\times$16. \begin{table}[] \caption{Considering an input image of 1024$\times$1024, the generated context sizes of PPM and DAPPM are listed} \label{tab:6} \begin{tabular}{p{50pt}p{82pt}<{\centering}p{82pt}<{\centering}} \toprule & PPM & DAPPM \\ \midrule \multirow{5}{*}{Output scale} & \multirow{5}{*}{[16, 6, 3, 2, 1]} & [16] \\ & & [16, 8] \\ & & [16, 8, 4] \\ & & [16, 8, 4, 2] \\ & & [16, 8, 4, 2, 1] \\ \bottomrule \end{tabular} \end{table} \subsection{Overall Architecture for Semantic Segmentation} An overview of our method is depicted in Fig. \ref{fig1}. Some changes are made to the dual-resolution network to accommodate the semantic segmentation task. First, the stride of the 3$\times$3 convolution in the RBB of the low-resolution branch is set to 2 for further downsampling. Then, a DAPPM is added to the output of the low-resolution branch, which extracts rich contextual information from the high-level feature maps of 1/64 image resolution. Besides, the last high-to-low fusion is replaced by a low-to-high fusion implemented with bilinear interpolation and summation. Finally, we devise a simple segmentation head consisting of a 3$\times$3 convolutional layer followed by a 1$\times$1 convolutional layer.
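To make the computation of Eq.\,(\ref{eq2}) more concrete, the following is a minimal PyTorch-style sketch of a DAPPM-like module. It is an illustrative reimplementation based on the description above, not the code released with this work: the channel widths, the use of average pooling, and the bilinear upsampling are assumptions.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

def bn_relu_conv(in_ch, out_ch, k):
    # BN-ReLU-Conv ordering, as stated above for the DAPPM
    return nn.Sequential(
        nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False),
    )

class DAPPMSketch(nn.Module):
    # Illustrative DAPPM-like module following Eq. (2): multi-scale pooling,
    # 1x1 compression, upsampling, and hierarchical-residual 3x3 fusion.
    def __init__(self, in_ch=512, mid_ch=128, out_ch=128):
        super().__init__()
        # Pooling with kernel 2^i + 1 and stride 2^(i-1) for i = 2, 3, 4,
        # plus global average pooling for i = n (average pooling is assumed).
        self.pools = nn.ModuleList([
            nn.AvgPool2d(kernel_size=5, stride=2, padding=2),
            nn.AvgPool2d(kernel_size=9, stride=4, padding=4),
            nn.AvgPool2d(kernel_size=17, stride=8, padding=8),
            nn.AdaptiveAvgPool2d(1),
        ])
        self.scale0 = bn_relu_conv(in_ch, mid_ch, 1)   # y_1 = C_1x1(x)
        self.compress = nn.ModuleList([bn_relu_conv(in_ch, mid_ch, 1)
                                       for _ in self.pools])
        self.fuse = nn.ModuleList([bn_relu_conv(mid_ch, mid_ch, 3)
                                   for _ in self.pools])
        self.out = bn_relu_conv(mid_ch * (len(self.pools) + 1), out_ch, 1)
        self.shortcut = bn_relu_conv(in_ch, out_ch, 1)  # 1x1 projection shortcut

    def forward(self, x):
        size = x.shape[-2:]
        ys = [self.scale0(x)]
        for pool, comp, fuse in zip(self.pools, self.compress, self.fuse):
            z = F.interpolate(comp(pool(x)), size=size, mode='bilinear',
                              align_corners=False)
            ys.append(fuse(z + ys[-1]))                 # hierarchical-residual fusion
        return self.out(torch.cat(ys, dim=1)) + self.shortcut(x)

model = DAPPMSketch().eval()                  # eval mode so BN uses running stats
y = model(torch.randn(1, 512, 16, 16))        # 1/64 resolution of a 1024x1024 image
print(y.shape)                                # torch.Size([1, 128, 16, 16])
\end{verbatim}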
The computational load of the segmentation head can be adjusted by changing the output dimension of the 3$\times$3 convolutional layer. We set it to 64 for DDRNet-23-slim, 128 for DDRNet-23, and 256 for DDRNet-39. Note that except for the segmentation head and the DAPPM module, all the modules have been pre-trained on ImageNet. \subsection{Deep Supervision} Extra supervision during the training stage can ease the optimization of deep convolutional neural networks (DCNNs). In PSPNet, an auxiliary loss is added to supervise the output of res4\_22 block of ResNet-101 and the corresponding weight is set to 0.4 according to experimental results\cite{zhao2017pyramid}. BiSeNetV2\cite{yu2020bisenet} proposes a booster training strategy in which extra segmentation heads are added at the end of each stage of the semantic branch. However, it needs numerous experiments to find the optimal weights to balance each loss, and leads to a non-negligible increase in training memory. To acquire better results, SFNet\cite{li2020semantic} utilizes a similar strategy named Cascaded Deeply Supervised Learning. In this paper, we only adopt simple extra supervision for a fair comparison with most of the methods. We add the auxiliary loss as shown in Fig. \ref{fig1} and set the weight to 0.4 following the PSPNet. The auxiliary segmentation head is discarded during the testing stage. The final loss which is the weighted sum of cross-entropy loss can be expressed as: \begin{equation} L_f = L_n + \alpha L_a \label{eq3} \end{equation} where $L_f$, $L_n$, $L_a$ represents the final loss, normal loss, auxiliary loss respectively and $\alpha$ denotes the weight of auxiliary loss, which is 0.4 in this paper. \section{Experiments} \subsection{Datasets} Cityscapes\cite{cordts2016cityscapes} is one of the well-known datasets focusing on urban street scene parsing. It contains 2975 finely annotated images for training, 500 images for validation, and 1525 images for testing. We do not use extra 20000 coarsely labeled images during training. There is a total of 19 classes available for the semantic segmentation task. The resolution of images is 2048$\times$1024, which is challenging for real-time semantic segmentation. CamVid\cite{brostow2009semantic} consists of 701 densely annotated frames and the resolution of each frame is 960$\times$720. It comprises 367 images for training, 101 images for validation, and 233 images for testing. We merge the train set and validation set for training and evaluate our models on the test set using 11 classes following previous works\cite{yu2018bisenet,orsic2019defense,li2019dfanet}. COCOStuff\cite{caesar2018coco} provides 10$K$ complex images with dense annotations of 182 categories, including 91 thing and 91 stuff classes. Note that 11 of the thing classes do not have any segmentation annotations. We follow the split in \cite{caesar2018coco} (9$K$ for training and 1$K$ for testing) for a fair comparison. \subsection{Train Setting} Before finetuning on semantic segmentation tasks, the dual-resolution networks are trained on ImageNet\cite{russakovsky2015imagenet} following the same data augmentation strategy as previous works\cite{he2016deep},\cite{xie2017aggregated}. All the models are trained with an input resolution of 224$\times$224, a batch size of 256, and 100 epochs on four 2080Ti GPUs. The initial learning rate is set to 0.1 and is reduced by 10 times at epoch 30, 60, and 90. We train all the networks using SGD with a weight decay of 0.0001 and a Nesterov momentum of 0.9. 
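For reference, the ImageNet schedule just described corresponds, for instance, to the following PyTorch-style setup. This is a minimal sketch: the ResNet-18 model and the FakeData loader are stand-ins so that the snippet is self-contained, and only the optimizer and schedule values are taken from the text.
\begin{verbatim}
import torch
from torchvision import datasets, models, transforms

model = models.resnet18(num_classes=1000)    # stand-in for a DDRNet variant
dataset = datasets.FakeData(size=512, image_size=(3, 224, 224), num_classes=1000,
                            transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(dataset, batch_size=256, shuffle=True)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            weight_decay=1e-4, nesterov=True)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[30, 60, 90], gamma=0.1)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(100):                      # 100 epochs in total
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()                          # lr divided by 10 at epochs 30, 60, 90
\end{verbatim}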
Top-1 errors on ImageNet validation set are shown in Table \ref{tab:5}. Although the efficiency of DDRNet is not superior to many advanced lightweight backbones which are elaborately designed on ImageNet, it still achieves start-of-the-art results on semantic segmentation benchmarks considering a speed trade-off. The training settings of Cityscapes, CamVid, and COCOStuff are introduced as follows. \begin{table}[] \caption{Top-1 error rates, parameter size and GFLOPs of four scaled-up DDRNets} \label{tab:5} \begin{tabular}{p{70pt}p{33pt}<{\centering}p{55pt}<{\centering}p{45pt}<{\centering}} \toprule Model & top-1 err. & Params. & GFLOPs \\ \midrule DDRNet-23-slim & 29.8 & 7.57M & 0.98G \\ \midrule DDRNet-23 & 24.1 & 28.22M & 3.88G \\ \midrule DDRNet-39 & 22.7 & 40.13M & 6.95G \\ \midrule DDRNet-39 1.5$\times$ & 21.6 & 76.86M & 14.85G \\ \bottomrule \end{tabular} \end{table} \begin{table*}[] \caption{Accuracy and speed comparison on Cityscapes. We report results on both val set and test set. Since inference speed of different models is measured under different conditions, the corresponding GPU models and input resolutions are reported. Our GFLOPs calculation adopts 2048$\times$1024 image as input. The corresponding speed is measured using TensorRT acceleration if the method is marked with \dag} \label{tab:1} \begin{tabular}{p{80pt}p{43pt}<{\centering}p{43pt}<{\centering}p{50pt}<{\centering}p{50pt}<{\centering}p{50pt}<{\centering}p{50pt}<{\centering}p{50pt}<{\centering}} \toprule \multirow{2}{*}{Model} & \multicolumn{2}{c}{MIoU} & \multirow{2}{*}{Speed (FPS)} & \multirow{2}{*}{GPU} & \multirow{2}{*}{Resolution} & \multirow{2}{*}{GFLOPs} & \multirow{2}{*}{Params} \\ \cmidrule{2-3} & val & test & & & & & \\ \midrule SegNet\cite{badrinarayanan2017segnet}& - & 57 & 16.7 & TitanX & 640$\times$360 & 286 & 29.5M \\ ENet\cite{paszke2016enet} & - & 57 & 135.4 & TitanX & 640$\times$360 & 3.8 & 0.4M \\ SQ\cite{treml2016speeding} & - & 59.8 & 16.7 & TitanX & 2048$\times$1024 & 270 & - \\ ICNet\cite{zhao2018icnet} & - & 69.5 & 30 & TitanX M & 2048$\times$1024 & 28.3 & 26.5M \\ ESPNet\cite{mehta2018espnet}& - & 60.3 & 113 & TitanX & 1024$\times$512 & - & 0.4M \\ ERFNet\cite{romera2017erfnet}& 70.0 & 68.0 & 41.7 & TitanX M & 1024$\times$512 & 27.7 & 20M \\ \midrule Fast-SCNN\cite{poudel2019fast}& 68.6 & 68.0 & 123.5 & TitanXp & 2048$\times$1024 & - & 1.1M \\ DFANet A\cite{li2019dfanet}& - & 71.3 & 100 & TitanX & 1024$\times$1024 & 3.4 & 7.8M \\ DFANet B\cite{li2019dfanet}& - & 67.1 & 120 & TitanX & 1024$\times$1024 & 2.1 & 4.8M \\ \midrule SwiftNetRN-18\cite{orsic2019defense}& 75.5 & 75.4 & 39.9 & GTX 1080Ti & 2048$\times$1024 & 104.0 & 11.8M \\ SwiftNetRN-18 ens\cite{orsic2019defense}& - & 76.5 & 18.4 & GTX 1080Ti & 2048$\times$1024 & 218.0 & 24.7M \\ \midrule BiSeNet1\cite{yu2018bisenet}& 69.0 & 68.4 & 105.8 & GTX 1080Ti & 1536$\times$768 & 14.8 & 5.8M \\ BiSeNet2\cite{yu2018bisenet}& 74.8 & 74.7 & 65.5 & GTX 1080Ti & 1536$\times$768 & 55.3 & 49M \\ BiSeNetV2\dag\cite{yu2020bisenet}& 73.4 & 72.6 &$\textbf{156}$ & GTX 1080Ti & 1024$\times$512 & 21.1 & - \\ BiSeNetV2-L\dag\cite{yu2020bisenet}& 75.8 & 75.3 & 47.3 & GTX 1080Ti & 1024$\times$512 & 118.5 & - \\ \midrule CAS\cite{zhang2019customizable} & 71.6 & 70.5 & 108 & TitanXp & 1536$\times$768 & - & - \\ GAS\cite{lin2020graph} & 72.4 & 71.8 & 108.4 & TitanXp & 1537$\times$769 & - & - \\ \midrule SFNet(DF1)\cite{li2020semantic}& - & 74.5 & 74 & GTX 1080Ti & 2048$\times$1024 & - & 9.03M \\ SFNet(DF2)\cite{li2020semantic}& - & 77.8 & 53 & GTX 1080Ti & 
2048$\times$1024 & - & 10.53M \\ SFNet(ResNet-18)\cite{li2020semantic}& - &78.9 & 18 & GTX 1080Ti & 2048$\times$1024 & 247 & 12.87M \\ \midrule MSFNet*\cite{si2019real}& - & 71.3 & 117 & GTX 2080Ti & 1024$\times$512 & 24.2 & - \\ MSFNet\cite{si2019real}& - & 77.1 & 41 & GTX 2080Ti & 2048$\times$1024 & 96.8 & - \\ \midrule CABiNet\cite{kumaar2020cabinet}& 76.6 &75.9 & 76.5 & GTX 2080Ti & 2048$\times$1024 & 12.0 & 2.64M \\ \midrule DDRNet-23-Slim& $\textbf{77.8}$(77.3$\pm$0.4) & 77.4 & 101.6 & GTX 2080Ti & 2048$\times$1024 & 36.3 & 5.7M \\ DDRNet-23 & $\textbf{79.5}$(79.1$\pm$0.3) & 79.4 & 37.1 & GTX 2080Ti & 2048$\times$1024 & 143.1 & 20.1M \\ DDRNet-39 & - & $\textbf{80.4}$ & 22.0 & GTX 2080Ti & 2048$\times$1024 & 281.2 & 32.3M \\ \bottomrule \end{tabular} \end{table*} \subsubsection{Cityscapes} Following \cite{9052469}, we use the SGD optimizer with an initial learning rate of 0.01, a momentum of 0.9, and a weight decay of 0.0005. We adopt the poly learning rate policy with a power of 0.9 to decay the learning rate, and apply data augmentation including random cropping, random scaling in the range of 0.5 to 2.0, and random horizontal flipping. Images are randomly cropped into 1024$\times$1024 for training following \cite{li2019dfanet},\cite{si2019real},\cite{li2020semantic}. All the models are trained for 484 epochs (about 120$K$ iterations) with a batch size of 12 and syncBN on four 2080Ti GPUs. For the models evaluated on the test server, we use images from both the train and val sets during training. For a fair comparison with \cite{yu2020bisenet} and \cite{li2020semantic}, online hard example mining (OHEM)\cite{shrivastava2016training} is also used. \subsubsection{CamVid} We set the initial learning rate to 0.001 and train all the models for 968 epochs. Images are randomly cropped into 960$\times$720 for training following \cite{li2019dfanet}. All the models are trained on a single GPU and other training details are identical to those for Cityscapes. When employing Cityscapes pre-training, we finetune the models for 200 epochs. \subsubsection{COCOStuff} The initial learning rate is 0.001 and the total number of training epochs is 110. We resize the short side of the images to 640 before data augmentation. The crop size is 640$\times$640, the same as that of BiSeNetV2\cite{yu2020bisenet}. Other training details are identical to those for Cityscapes except that the weight decay is 0.0001. In the inference phase, we fix the image resolution to 640$\times$640. \subsection{Measure of Inference Speed and Accuracy} The inference speed is measured on a single GTX 2080Ti GPU with a batch size of 1, CUDA 10.0, CUDNN 7.6, and PyTorch 1.3. Similar to MSFNet and SwiftNet, we exclude batch normalization layers after convolutional layers because they can be integrated into the convolutions during inference. We use the protocols established by \cite{chen2019fasterseg} for a fair comparison (image size: 2048$\times$1024 for Cityscapes, 960$\times$720 for CamVid, and 640$\times$640 for COCOStuff). Following ResNet\cite{he2016deep}, we report the best results, average results, and standard deviations of four trials, except for the Cityscapes test set, whose accuracy is provided by the official server. \subsection{Speed and Accuracy Comparisons} \subsubsection{Cityscapes} As can be observed from Table \ref{tab:1} and Fig. \ref{fig4}, our method achieves a new state-of-the-art trade-off between inference speed and accuracy.
Specially, DDRNet-23-slim (our smallest model) achieves 77.4$\%$ mIoU on the test set at 102 FPS. It outperforms DFANet A and MSFNet* by 6.1$\%$ mIoU with similar inference speed, and reasons approximately 2.5 times as fast as MSFNet. Besides, it runs 40$\%$ faster than the smallest SFNet and achieves a 2.9$\%$ mIoU gain on the test set. It is worth noting that our method also towers over those methods based on architecture search for real-time semantic segmentation such as CAS\cite{zhang2019customizable} and GAS\cite{lin2020graph} with similar inference speed. For the wider model, DDRNet-23 achieves the overall best accuracy among the published real-time methods in Table \ref{tab:1}, attaining 79.4$\%$ mIoU at 37 FPS. DDRNet-23 has a performance gain of 0.5$\%$ over SFNet (ResNet-18) and runs twice as fast as it. We keep going deeper with DDRNets and achieve 80.4$\%$ mIoU on the Cityscapes test server at 22 FPS, only using finely annotated data. If benefitting from pre-training on Mapillary\cite{neuhold2017mapillary} dataset and TensorRT acceleration like \cite{li2020semantic}, our method can establish a skyscraping baseline for real-time semantic segmentation of road scenes. On the Cityscapes val set, DDRNet-23-slim is superior to all published results in Table \ref{tab:1} with 36.3 GFLOPs and 5.7M parameters. And DDRNet-23 achieves a new overall best result of 79.5$\%$ mIoU. Fig. \ref{fig7} shows the visualized results of DDRNet-23-slim and DDRNet-23 under different scenes. \begin{figure*} \centering \begin{minipage}{1.6in} \includegraphics[width=1.6in]{image1.pdf} \end{minipage} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{truth1.pdf} \end{minipage} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{sval1.pdf} \end{minipage} \vspace{0.3em} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{val1.pdf} \end{minipage} \vspace{0.3em} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{image2.pdf} \end{minipage} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{truth2.pdf} \end{minipage} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{sval2.pdf} \end{minipage} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{val2.pdf} \end{minipage} \vspace{0.3em} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{image3.pdf} \end{minipage} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{truth3.pdf} \end{minipage} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{sval3.pdf} \end{minipage} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{val3.pdf} \end{minipage} \vspace{0.3em} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{image5.pdf} \end{minipage} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{truth5.pdf} \end{minipage} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{sval5.pdf} \end{minipage} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{val5.pdf} \end{minipage} \vspace{0.3em} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{image4.pdf} \end{minipage} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{truth4.pdf} \end{minipage} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{sval4.pdf} \end{minipage} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{val4.pdf} \end{minipage} \vspace{0.3em} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{image6.pdf} \end{minipage} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{truth6.pdf} \end{minipage} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{sval6.pdf} \end{minipage} \begin{minipage}{1.6in} \includegraphics[width=1.6in]{val6.pdf} \end{minipage} 
\caption{Visualized segmentation results on Cityscapes val set. The four columns left-to-right refer to the input image, the ground truth, the output of DDRNet-23-slim, and the output of DDRNet-23. The first four rows show the performance of two models while the last two rows represent some segmentation failures.} \label{fig7} \end{figure*} \subsubsection{CamVid} \begin{table}[] \caption{Accuracy and speed comparison on CamVid. MSFNet runs at 1024$\times$768 and MSFNet* runs at 768$\times$512 while other methods run at 960$\times$720. The corresponding speed is measured using TensorRT acceleration if the method is marked with \dag } \label{tab:2} \begin{tabular}{p{80pt}p{33pt}<{\centering}p{45pt}<{\centering}p{45pt}<{\centering}} \toprule Model & MIoU & Speed (FPS) & GPU \\ \midrule \multicolumn{4}{c}{w/o Cityscapes pre-training} \\ \midrule DFANet A\cite{li2019dfanet} & 64.7 & 120 & TitanX \\ DFANet B\cite{li2019dfanet} & 59.3 & 160 & TitanX \\ SwiftNetRN-18 pyr\cite{orsic2019defense} & 73.9 & - & GTX 1080Ti \\ SwiftNetRN-18\cite{orsic2019defense} & 72.6 & - & GTX 1080Ti \\ BiSeNet1\cite{yu2018bisenet} & 65.6 & 175 & GTX 1080Ti \\ BiSeNet2\cite{yu2018bisenet} & 68.7 & 116 & GTX 1080Ti \\ BiSeNetV2\dag\cite{yu2020bisenet} & 72.4 & 124 & GTX 1080Ti \\ BiSeNetV2-L\dag\cite{yu2020bisenet} & 73.2 & 33 & GTX 1080Ti \\ CAS\cite{zhang2019customizable} & 71.2 & 169 & TitanXp \\ GAS\cite{lin2020graph} & 72.8 & 153 & TitanXp \\ SFNet(DF2)\cite{li2020semantic} & 70.4 & 134 & GTX 1080Ti \\ SFNet(ResNet-18)\cite{li2020semantic}& 73.8 & 36 & GTX 1080Ti \\ MSFNet*\cite{si2019real} & 72.7 & 160 & GTX 2080Ti \\ MSFNet\cite{si2019real} & 75.4 & 91 & GTX 2080Ti \\ \midrule DDRNet-23-slim & 74.7(74.3$\pm$0.3) & $\textbf{230}$ & GTX 2080Ti \\ DDRNet-23 & $\textbf{76.3}$(76.0$\pm$0.3)& 94 & GTX 2080Ti \\ \midrule \multicolumn{4}{c}{w/ Cityscapes pre-training} \\ \midrule VideoGCRF\cite{chandra2018deep} & 75.2 & - & - \\ CCNet3D\cite{9133304} & 79.1 & - & - \\ BiSeNetV2\dag\cite{yu2020bisenet} & 76.7 & 124 & GTX 1080Ti \\ BiSeNetV2-L\dag\cite{yu2020bisenet} & 78.5 & 33 & GTX 1080Ti \\ \midrule DDRNet-23-slim & 78.6(78.2$\pm$0.3) & $\textbf{230}$ & GTX 2080Ti \\ DDRNet-23 & $\textbf{80.6}$(80.1$\pm$0.4)& 94 & GTX 2080Ti \\ \bottomrule \end{tabular} \end{table} As shown in Table \ref{tab:2}, DDRNet-23-slim achieves 74.7$\%$ mIoU on the CamVid test set at 230 FPS without Cityscapes pre-training. It obtains the second-highest accuracy and runs faster than all the other methods. In particular, the performance of DDRNet-23 is superior to MSFNet, the previous state-of-the-art method. DDRNet-23 also has a big performance gain over BiSeNetV2-L and SFNet (ResNet-18) and runs about two times faster than them. Given that the training pixels of CamVid are much less than that of Cityscapes, we believe that the outstanding performances of DDRNets partly attribute to adequate ImageNet pre-training. In addition, our Cityscapes pre-trained models achieve new state-of-the-art accuracy with the real-time speed. Specially, Cityscapes pre-trained DDRNet-23 realizes 80.6$\%$ mIoU with 94 FPS, stronger and much faster than BiSeNetV2-L. The corresponding visualized results are shown in Fig. \ref{fig8}. 
\begin{figure} \centering \begin{minipage}{1.1in} \includegraphics[width=1.1in]{camvid1.pdf} \end{minipage} \begin{minipage}{1.1in} \includegraphics[width=1.1in]{camvid_truth1.pdf} \end{minipage} \vspace{0.3em} \begin{minipage}{1.1in} \includegraphics[width=1.1in]{camvid_val1.pdf} \end{minipage} \vspace{0.3em} \begin{minipage}{1.1in} \includegraphics[width=1.1in]{camvid2.pdf} \end{minipage} \begin{minipage}{1.1in} \includegraphics[width=1.1in]{camvid_truth2.pdf} \end{minipage} \begin{minipage}{1.1in} \includegraphics[width=1.1in]{camvid_val2.pdf} \end{minipage} \vspace{0.3em} \begin{minipage}{1.1in} \includegraphics[width=1.1in]{camvid3.pdf} \end{minipage} \begin{minipage}{1.1in} \includegraphics[width=1.1in]{camvid_truth3.pdf} \end{minipage} \begin{minipage}{1.1in} \includegraphics[width=1.1in]{camvid_val3.pdf} \end{minipage} \vspace{0.3em} \begin{minipage}{1.1in} \includegraphics[width=1.1in]{camvid4.pdf} \end{minipage} \begin{minipage}{1.1in} \includegraphics[width=1.1in]{camvid_truth4.pdf} \end{minipage} \begin{minipage}{1.1in} \includegraphics[width=1.1in]{camvid_val4.pdf} \end{minipage} \vspace{0.3em} \begin{minipage}{1.1in} \includegraphics[width=1.1in]{camvid5.pdf} \end{minipage} \begin{minipage}{1.1in} \includegraphics[width=1.1in]{camvid_truth5.pdf} \end{minipage} \begin{minipage}{1.1in} \includegraphics[width=1.1in]{camvid_val5.pdf} \end{minipage} \vspace{0.3em} \begin{minipage}{1.1in} \includegraphics[width=1.1in]{camvid6.pdf} \end{minipage} \begin{minipage}{1.1in} \includegraphics[width=1.1in]{camvid_truth6.pdf} \end{minipage} \begin{minipage}{1.1in} \includegraphics[width=1.1in]{camvid_val6.pdf} \end{minipage} \caption{Visualized segmentation results on CamVid test set. The color of ignored labels during testing is set to black. The three columns left-to-right refer to the input image, the ground truth, and the output of DDRNet-23. The first four rows show the successful samples while the last two rows represent some segmentation failures.} \label{fig8} \end{figure} \subsubsection{COCOStuff} We also validate our method on COCOStuff which is a more challenging dataset for real-time semantic segmentation due to the plentiful categories. The stride of RBB in the low-resolution branch is set to 1, for the image resolution is smaller than the other two datasets. Time to reshape images and predicted masks is not counted. Table \ref{tab:12} demonstrates that our method outperforms BiSeNetV2 by a substantial degree under very challenging scenarios. Our DDRNet-23 achieves a similar accuracy with PSPNet50 while running 20 times as fast as it. \begin{table}[] \caption{Accuracy and speed comparison on COCOStuff. The input resolution is 640$\times$640 and the result of PSPNet50 comes from \cite{yu2020bisenet}. 
The corresponding speed is measured using TensorRT acceleration if the method is marked with \dag} \label{tab:12} \begin{tabular}{p{65pt}p{60pt}<{\centering}p{30pt}<{\centering}p{48pt}<{\centering}} \toprule Model & MIoU & PixAcc & Speed (FPS) \\ \midrule PSPNet50\cite{zhao2017pyramid} & 32.6 & - & 6.6 \\ \midrule ICNet\cite{zhao2018icnet} & 29.1 & - & 35.7 \\ \midrule BiSeNetV2\dag\cite{yu2020bisenet} & 25.2 & 60.5 & 87.9 \\ BiSeNetV2-L\dag\cite{yu2020bisenet} & 28.7 & 63.5 & 42.5 \\ \midrule DDRNet-23 & 32.1(31.8$\pm$0.2)& 64.7(64.7$\pm$0.1) & $\textbf{129.2}$ \\ DDRNet-39 & $\textbf{34.8}$(34.6$\pm$0.1)& $\textbf{66.6}$(66.7$\pm$0.2) &83.8 \\ \bottomrule \end{tabular} \end{table} \subsection{Comparisons with State-of-the-art Results} \begin{table}[] \caption{State-of-the-art models on Cityscapes test set. OS denotes the final output stride. All the methods train models on both train and val set except PSPNet marked with \dag only using train set. GFLOPs calculation adopts 1024$\times$1024 image as input and most of results about GFLOPs and Params can be found in \cite{li2020semantic}} \label{tab:9} \begin{tabular}{p{80pt}p{15pt}<{\centering}p{32pt}<{\centering}p{32pt}<{\centering}p{32pt}<{\centering}} \toprule Model & OS & mIoU & GFLOPs & Params. \\ \midrule SAC\cite{zhang2017scale} & 8 & 78.1 & - & - \\ DepthSeg\cite{kong2018recurrent} & 8 & 78.2 & - & - \\ PSPNet\dag\cite{zhao2017pyramid} & 8 & 78.4 & 1065.4 & 65.7M \\ ResNet38\cite{wu2019wider} & 8 & 78.4 & - & - \\ BiSeNet\cite{yu2018bisenet} & 8 & 78.9 & 219.1 & 51.0M \\ DFN\cite{yu2018learning} & 8 & 79.3 & 1121.0 & 90.7M \\ PSANet\cite{zhao2018psanet} & 8 & 80.1 & 1182.6 & 85.6M \\ DenseASPP\cite{yang2018denseaspp} & 8 & 80.6 & 632.9 & 35.7M \\ CCNet\cite{huang2019ccnet} & 8 & 81.4 & 1153.9 & 66.5M \\ DANet\cite{fu2019dual} & 8 & 81.5 & 1298.8 & 66.6M \\ OCNet\cite{yuan2018ocnet} & 8 & 81.7 & - & - \\ OCRNet\cite{yuan2019object} & 8 & 81.8 & - & - \\ HRNetV2-W48\cite{9052469} & 4 & 81.6 & 348.1 & 65.9M \\ SFNet\cite{li2020semantic} & 4 & 81.8 & 417.5 & 50.3M \\ \midrule DDRNet-39 & 8 & 81.9 & $\textbf{140.6}$ & $\textbf{32.3M}$\\ DDRNet-39 1.5$\times$ & 8 & $\textbf{82.4}$ & 303.0 & 70.2M \\ \bottomrule \end{tabular} \end{table} In this part, we further demonstrate the capacity of DDRNet for semantic segmentation by comparing it to state-of-the-art models on the Cityscapes test set. Such methods frequently employ multi-scale and horizontal flip inference to achieve better results regardless of time cost. For a fair comparison with them, we also apply multiple scales including 0.50$\times$, 0.75$\times$, 1$\times$, 1.25$\times$, 1.5$\times$, 1.75$\times$, 2$\times$ with left-right flipping during test. As is shown in Table \ref{tab:9}, standard test augmentation improves the accuracy of DDRNet-39 from 80.4$\%$ to 81.9$\%$. Our DDRNet-39 outperforms numerous powerful models which are integrated with self-attention modules such as CCNet, DANet, and OCNet. It is noteworthy that our method only requires 11$\%$ computation of DANet. DDRNet-39 also gets ahead of SFNet (based on ResNet-101 backbone) which is a state-of-the-art method for real-time semantic segmentation, only requiring 34$\%$ computation of it. DDRNet-39 1.5$\times$ of which size is closer to other models in Table \ref{tab:9} achieves a very competitive performance (82.4$\%$ mIoU). \subsection{Comparisons with HRNet} The major difference between DDRNet and HRNet is the number of parallel branches. 
Besides, we append the multi-scale context extraction module to the end of the low-resolution branch. Experimental results in Table \ref{tab:10} demonstrate the improvement of DDRNet over HRNet in both inference time and training memory usage. We get the val results of two small HRNets from the official implementation. Training memory is measured on a single 2080Ti with a batch size of 2 and a crop size of 1024$\times$512, excluding the auxiliary segmentation head. \begin{table}[] \caption{Comparative experiments between DDRNet and HRNet in terms of mIoU, FPS and train memory} \label{tab:10} \begin{tabular}{p{100pt}p{30pt}<{\centering}p{30pt}<{\centering}p{43pt}<{\centering}} \toprule Model & mIoU & FPS & Train mem. \\ \midrule HRNetV2-W18-Small-v1\cite{9052469} & 70.3 & 67.2 & 1989MiB \\ HRNetV2-W18-Small-v2\cite{9052469} & 76.2 & 31.1 & 2745MiB \\ DDRNet-23-slim & $\textbf{76.9}$ & $\textbf{101.6}$ & $\textbf{1629MiB}$ \\ \bottomrule \end{tabular} \end{table} \subsection{Ablative Experiments on Cityscapes} \subsubsection{Standard Bells and Whistles} \begin{table}[] \caption{Influences of standard bells and whistles, including deep supervision (DS), OHEM and training at a crop size of 1024$\times$1024 (the default is 1024$\times$512)} \label{tab:8} \begin{tabular}{p{60pt}p{33pt}<{\centering}p{33pt}<{\centering}p{33pt}<{\centering}p{33pt}<{\centering}} \toprule Model & DS & OHEM & 1024$\times$1024 & mIoU \\ \midrule DDRNet-23-slim & & & & 76.1 \\ DDRNet-23-slim & \checkmark & & & 76.1 \\ DDRNet-23-slim & \checkmark &\checkmark & & 76.9 \\ DDRNet-23-slim & \checkmark &\checkmark & \checkmark & 77.8 \\ \bottomrule \end{tabular} \end{table} We analyze the effect of some basic training tricks which are also adopted by recent advanced method SFNet\cite{li2020semantic}. As shown in Table \ref{tab:8}, the accuracy is raised from 76.1 to 77.8 with deep supervision, OHEM, and training at a larger crop size. \subsubsection{DAPPM} \begin{table}[] \caption{Comparison of DAPPM and other context extraction modules. RES2 denotes the Res2Net module and Base-OC is the object context module proposed in \cite{yuan2018ocnet}} \label{tab:7} \begin{tabular}{p{30pt}<{\centering}p{30pt}<{\centering}p{30pt}<{\centering}p{30pt}<{\centering}p{30pt}<{\centering}p{30pt}<{\centering}} \toprule PPM & RES2 &Base-OC & DAPPM & mIoU & Speed\\ \midrule & & & & 74.1 & 107.9 \\ \checkmark & & & & 76.8 & 104.9 \\ &\checkmark & & & 76.8 & 103.6 \\ & &\checkmark & & 75.6 & 104.9 \\ & & &\checkmark & 77.8 & 101.6 \\ \bottomrule \end{tabular} \end{table} We compare the DAPPM with the pyramid pooling based methods (PPM), self-attention based modules (Base-OC), and the res2net module. The results in Table \ref{tab:7} shows that the proposed module improves the performance of scene parsing from 74.1$\%$ mIoU to 77.8$\%$ mIoU while the inference speed is hardly affected. Compared to the PPM and RES2, DAPPM also achieves 1$\%$ mIoU gain while the Base-OC, another state-of-the-art method, gets a relatively poor performance with low-resolution feature maps. \subsubsection{Dual-resolution Networks} For faster experiments, we train all the bilateral networks from scratch with a initial learning rate of 0.05, a crop size of 1024$\times$512, 600 epochs in total, and without using OHEM. As shown in Table \ref{tab:11}, using thinner detail branch results in 1.3$\%$ accuracy decrease and running much faster than the baseline. 
Appending the detail branch to the middle layer of the network contributes to the deep high-resolution representation and also improves the inference speed because it avoids computation at higher resolutions. The bottleneck expands the feature dimension, which generates richer features for the DAPPM and the final segmentation head. The bilateral fusions further improve the segmentation accuracy at a small time cost. Finally, our dual-resolution network requires fewer computational resources and less time than the baseline while achieving better performance. \begin{table}[] \caption{Ablation study of dual-resolution networks. The baseline is adapted from BiSeNetV2 by replacing the complicated semantic branch with our low-resolution branch. `+Thinner detail branch' represents cutting the dimension of the detail branch in half. `+Conv3' represents appending the detail branch to the end of the conv3 stage. `+Residual' denotes replacing the 3$\times$3 convolutions with residual basic blocks. `+Bottleneck' denotes adding a bottleneck block to the end of each branch. `+Low-to-high fusion' or `+Bilateral fusion' denotes performing multiple low-to-high fusions or bilateral fusions} \label{tab:11} \begin{tabular}{p{72pt}p{30pt}<{\centering}p{30pt}<{\centering}p{30pt}<{\centering}p{30pt}<{\centering}} \toprule Model & mIoU & Params. & GFLOPs & Speed\\ \midrule Baseline & 72.2 & 4.3M & 70.2 & 60.2 \\ +Thinner detail branch & 70.9 & 3.8M & 34.0 & 103.7 \\ +Conv3 & 71.4 & 4.0M & 31.7 & 128.7 \\ +Residual & 71.2 & 4.0M & 31.7 & 125.2 \\ +Bottleneck & 73.3 & 5.2M & 34.4 & 110.2 \\ +Low-to-high fusion & 74.0 & 5.3M & 34.5 & 107.6 \\ +Bilateral fusion & 74.6 & 5.7M & 36.3 & 101.6 \\ \bottomrule \end{tabular} \end{table} \section{Conclusion} In this paper, we address real-time and accurate semantic segmentation of road scenes and present a simple solution without extra bells and whistles. In particular, novel deep dual-resolution networks are proposed as efficient backbones for real-time semantic segmentation, and a new module is designed for extracting multi-scale contextual information from low-resolution feature maps. To the best of our knowledge, we are the first to introduce deep high-resolution representations into real-time semantic segmentation, and our simple strategy outperforms all previous real-time models on three popular benchmarks. DDRNets mainly consist of residual basic blocks and bottleneck blocks, providing a wide range of speed-accuracy trade-offs by scaling the model width and depth. Due to the simplicity and efficiency of our method, it can be seen as a strong baseline for unifying real-time and high-accuracy semantic segmentation. Further studies will focus on improving the baseline and transferring the backbones to other downstream tasks. \bibliographystyle{ieeetr}
\chapter{Tools from general category theory} We include here some notions and tools from general category theory which we shall make use of in the main body of the text.
\section{The end construction} Let \( \catGamma \) and~\( \catC \) be categories, \( \catC \)~complete and cocomplete, and let~\( \mH \colon \catGamma[op] \times \catGamma \to \catC \) be a bifunctor.
The \textdef{end} of~\( \mH \) is an object \[ \End[\catGamma]{ \mH } = \End[\vgamma \in \catGamma]{ \mH { \vgamma , \vgamma } } \] in~\( \catC \), together with morphisms~\( \End[\catGamma]{ \mH } \to \mH { \vgamma , \vgamma } \) for all~\( \vgamma \in \catGamma \), such that for any~\( \mf \colon \vgamma \to \vgamma[prime] \), the following diagram commutes: \begin{equation*} \begin{tikzcd} \End[\catGamma]{ \mH } \ar[r] \ar[d] & \mH { \vgamma , \vgamma } \ar[d, "\mH { \vgamma , \mf }"] \\ \mH { \vgamma[prime] , \vgamma[prime] } \ar[r, "\mH { \mf , \vgamma[prime] }"'] & \mH { \vgamma , \vgamma[prime] } \invdot \end{tikzcd} \end{equation*} Furthermore, \( \End[\catGamma]{ \mH} \)~is universal with this property, meaning that if \( A \)~is another object of~\( \catC \) with a collection of arrows~\( A \to \mH { \vgamma , \vgamma } \) for all~\( \vgamma \), subject to the same commutativity conditions, then these factor through a unique arrow~\( A\to \End[\catGamma]{ \mH } \): \begin{equation*} \begin{tikzcd} A \ar[rd,dashed] \ar[rrd, bend left] \ar[ddr, bend right] \\[-1em] &[-2em] \End[\catGamma]{ \mH } \ar[r] \ar[d] & \mH { \vgamma , \vgamma } \ar[d, "\mH { \vgamma , \mf }"] \\ & \mH { \vgamma[prime] , \vgamma[prime] } \ar[r, "\mH { \mf , \vgamma[prime] }"'] & \mH { \vgamma , \vgamma[prime] } \invdot \end{tikzcd} \end{equation*} Clearly, we may obtain the end by the formula \begin{equation*}\label{eq:Equalizer_formula_for_end} \End[\catGamma]{ \mH } = \Eq[par=\Big]{ \Prod[ \vgamma \in \catGamma ]{ \mH { \vgamma , \vgamma } } \rightrightarrows \Prod[ \mathclap{\mf \colon \vgamma \to \vgamma[prime] } ]{ \mH { \vgamma , \vgamma[prime] } } }. \end{equation*} Here, the second product runs over all morphisms~\( \mf \colon \vgamma \to \vgamma[prime] \) in~\( \catGamma \), and the two arrows are given by \( \mf[push] \colon \mH { \vgamma , \vgamma } \to \mH { \vgamma , \vgamma[prime] } \) resp.~\( \mf[pull] \colon \mH { \vgamma[prime] , \vgamma[prime] } \to \mH { \vgamma , \vgamma[prime] } \). There is a dual notion of a \emph{coend}, denoted instead by~\(\smash{ \Coend[\catGamma]{ \mH }} \), which we shall not spell out. \begin{example} Given functors~\( \mF , \mG \colon \catGamma \to \catGamma[prime] \), we obtain a bifunctor \[ \mH = \Hom[ \catGamma[prime] ]{ \mF {\slot } , \mG {\slot } } \colon \catGamma[op] \times \catGamma \to \catset, \] and the universal property shows that \[ \End[\catGamma]{ \mH } = \Hom[ \catfun { \catGamma , \catGamma[prime] } ]{ \mF , \mG } \] is the set of natural transformations between \( \mF \) and~\( \mG \). \end{example} \begin{example} A diagram~\( \mF \in \catC[diag=\catGamma] \) may be regarded as a diagram in~\( \catC[diag=\catGamma[op]\times\catGamma] \) which is constant with respect to the first variable. In that case, it follows from the universal property of the end that \( \End[\catGamma]{ \mF } = \invlim[\catGamma]{ \mF } \) recovers the limit of the diagram. \end{example} \begin{proposition}\label{res:end_adjunction} The end fits as the right adjoint of the adjunction \[\begin{tikzcd}[sep=scriptsize] \Coprod[ \Hom[\catGamma] ] \colon \catC \ar[r, yshift=1.5pt] & \catC[diag=\catGamma[op]\times\catGamma] \ar[l,yshift=-1.5pt] \noloc \End[\catGamma]\invdot \end{tikzcd}\] The left adjoint takes~\( \vA \in \catC \) to the bifunctor \( \Coprod[ \Hom[\catGamma]{\slot ,\slot } ]{ \vA } \colon \catGamma[op]\times\catGamma\to\catC \). \end{proposition} \begin{proof} Clear from the definition.
\end{proof} This is equivalent to the statement that we have an adjunction \[ \catset[diag=\catGamma[op]\times\catGamma,par=\big]{ \Hom[\catGamma] , \catC { \vA , \mF } } \cong \catC[diag=\catGamma[op]\times\catGamma,par=\big]{ \vA , \textstyle \End[\catGamma]{ \mF } } \] for \( \vA \in \catC \). This says that the end is the \emph{weighted limit}~\( \catC[diag=\catGamma[op]\times\catGamma] \to \catC \) with weight~\( \Hom[\catGamma] \). The dual statement for \emph{coends} is that the coend functor \[\textstyle \Coend[\catGamma] \colon \catC[diag=\catGamma[op]\times\catGamma] \longto \catC \] is left adjoint to~\( \Prod[\Hom[\catGamma]] \). \section{The projective and injective model structures} If \( \catC \)~is a model category and \( \catGamma \)~any category, there is no completely general way to turn the functor category \( \catC[diag=\catGamma] = \catfun{ \catGamma , \catC } \) into a model category. The na\"ive approach, calculating weak equivalences, cofibrations, and fibrations componentwise, will not in general yield a model structure. It is natural to demand that at least the weak equivalences be calculated componentwise for any model structure to be satisfactory. In general, however, at least one of the other two classes will in turn become more complicated. The two most natural model structures one can hope for (which may or may not exist) are \begin{itemize} \item The \textdef{projective model structure}~% \( \catC[diag=\catGamma,proj] \) where weak equivalences and fibrations are calculated componentwise. \item The \textdef{injective model structure}~% \( \catC[diag=\catGamma,inj] \) where weak equivalences and cofibrations are calculated componentwise. \end{itemize} Existence of these model structures depends heavily on the structure of the target category~\( \catC \) (see \cref{res:existence_of_proj_inj_model_structures} below). We shall also use the attributes \textquote{projective(ly)} and \textquote{injective(ly)} when referring to these model structures, so e.g.~\textquote{projectively cofibrant} means cofibrant in the projective model structure. \begin{propositionbreak}[Proposition {\parencite[Proposition~A.2.8.7]{htt}}]% \label{res:Kan_extensions_Quillen_adjunctions} If \( \catC \)~is a model category and \( \mf \colon \catGamma\to\catGamma[prime] \) a functor, denote by~\( \mf[*res] \) the restriction functor \(\smash{ \catC[diag={\catGamma[prime]}] \to \catC[diag={\catGamma}] }\). Then \( \mf[*res] \)~fits as the right and left adjoint of Quillen adjunctions \[ \begin{tikzcd}[sep=scriptsize] \mf[!kan] \colon \catC[diag=\catGamma,proj] \ar[r,yshift=1.5pt] & \ar[l,yshift=-1.5pt] \catC[diag=\catGamma[prime],proj] \noloc \mf[*res] \end{tikzcd} \qquad\text{resp.}\qquad \begin{tikzcd}[sep=scriptsize] \mf[*res] \colon \catC[diag=\catGamma[prime],inj] \ar[r,yshift=1.5pt] & \ar[l,yshift=-1.5pt] \catC[diag=\catGamma,inj] \noloc \mf[*kan] \end{tikzcd} \] whenever the model structures in question exist. \end{propositionbreak} The adjoints \( \mf[!kan] \) and~\( \mf[*kan] \) are the usual \textdef[left Kan extension]{left} and \textdef[right Kan extension]{right Kan extensions} along~\( \mf \), which are given by the colimit resp.\ limit formulas \begin{equation}\label{eq:kan_extensions} \mf[!kan]{ \mF }{ \vgamma[prime] } = \dirlim[\mf{\vgamma}\to\vgamma[prime]]{\mF{\vgamma}} \qquad\text{and}\qquad \mf[*kan]{ \mF }{ \vgamma[prime] } = \invlim[\vgamma[prime]\to\mf{\vgamma}]{\mF{\vgamma}} .
\end{equation} These (co)limits are taken over the categories of maps~\( \mf{\vgamma}\to\vgamma[prime] \) (resp.~\( \vgamma[prime]\to\mf{\vgamma} \)) in~\( \catGamma[prime] \) for varying~\( \vgamma \in \catGamma \). \begin{proof} Since the adjunctions in question exist, their being Quillen follows from the observation that \( \mf[*res] \)~clearly preserves (trivial) projective fibrations and (trivial) injective cofibrations. \end{proof} \newvar\vc{c} \begin{corollary} Assume in the following that the relevant model structures exist. \begin{corollarylist} \item\label{res:simple_cofibrations} If \( \mvarphi \colon \vc \to \vc[prime] \) is a (trivial) cofibration in~\( \catC \) and~\( \vgamma[0] \in \catGamma \) is an object, then the coproduct map \( \Coprod[\catGamma{ \vgamma[0] ,\slot }]{ \mvarphi } \colon \Coprod[\catGamma{ \vgamma[0] ,\slot }]{ \vc } \to \Coprod[\catGamma{ \vgamma[0] ,\slot }]{ \vc[prime] } \) is a (trivial) cofibration in~% \( \catC[diag=\catGamma,proj,smash] \). We shall refer to such (trivial) cofibrations as \textdef[simple (trivial) projective cofibration]{simple projective cofibrations}\index{cofibration!projective!simple}. \item\label{res:preserve_simple_cofibrations} If \( \mf \colon \catGamma \to \catGamma[prime] \)~is a functor, then \( \mf[!kan] \colon \catC[diag={\catGamma},proj] \to \catC[diag={\catGamma[prime]},proj] \) preserves simple (trivial) projective cofibrations, taking \( \Coprod[\catGamma{ \vgamma[0] ,\slot }]{ \mvarphi } \) to~% \( \Coprod[ { \catGamma[prime]{ \mf { \vgamma[0] } ,\slot } } ]{ \mvarphi } \). \item If \( \mpsi \colon \vc \to \vc[prime] \) is a (trivial) fibration in~\( \catC \) and \( \vgamma[0] \in \catGamma \)~is an object, then the product map \( \Prod[ \catGamma {\slot , \vgamma[0] } ]{ \mpsi } \colon \Prod[ \catGamma {\slot , \vgamma[0] } ]{ \vc } \to \Prod[ \catGamma {\slot , \vgamma[0] } ]{ \vc[prime] } \) is a (trivial) fibration in~% \( \catC[diag=\catGamma,inj,smash] \). We shall refer to such (trivial) fibrations as \textdef[simple (trivial) injective fibration]{simple injective fibrations}\index{fibration!injective!simple}. \item If \( \mf \colon \catGamma \to \catGamma[prime] \) is a functor, then \( \mf[*kan] \colon \catC[diag={\catGamma},inj] \to \catC[diag={\catGamma[prime]},inj] \) preserves simple (trivial) injective fibrations, taking \( \Prod[ \catGamma {\slot , \vgamma[0] } ]{ \mpsi } \) to~\( \Prod[ { \catGamma[prime]{ \slot , \mf { \vgamma[0] } } } ]{ \mpsi } \). \end{corollarylist} \end{corollary} \begin{proof} Applying~\cref{res:Kan_extensions_Quillen_adjunctions} to the embedding% ~\( \miota \colon \vgamma[0] \into \catGamma \) of the full subcategory with~\( \vgamma[0] \) as the only object, we get that \( \miota[!kan]{ \mvarphi } \)~is a (trivial) cofibration. Now \( \miota[!kan]{ \mvarphi } = \Coprod[ { \catGamma { \vgamma[0],\slot } } , smash ]{ \mvarphi } \) by the above colimit formula for left Kan extension. The statement~\localref{res:preserve_simple_cofibrations} follows by applying Kan extensions to the diagram \[\begin{tikzcd}[sep=small] \vgamma[0] \ar[r,hook] \ar[d] & \catGamma \ar[d,"\mf"] \\ \mf { \vgamma[0] } \ar[r,hook] & \catGamma[prime] \end{tikzcd}\] and using that Kan extensions, being adjoints to restriction, respect compositions. The other statements are dual. \end{proof} \begin{corollary}\label{res:limit_functor_quillen} Denote by \( \mconst \colon \catC \to \catC[diag=\catGamma] \) the functor taking~\( \vc \in \catC \) to the constant diagram at~\( \vc \).
\begin{corollarylist} \item\label{res:limit_functor_quillen_dirlim} If \( \catC[diag=\catGamma,proj] \) exists, then \( \dirlim \colon \catC[diag=\catGamma,proj] \rightleftarrows \catC \noloc \mconst \) is a Quillen adjunction. \item\label{res:limit_functor_quillen_invlim} If \( \catC[diag=\catGamma,inj] \) exists, then \( \mconst \colon \catC \rightleftarrows \catC[diag=\catGamma,inj] \noloc \invlim \) is a Quillen adjunction. \end{corollarylist} \end{corollary} \begin{proof} Apply \cref{res:Kan_extensions_Quillen_adjunctions} to the functor \( \catGamma\to * \). \end{proof} \begin{propositionbreak}[Proposition {\parencite[Proposition~A.2.8.2]{htt}}]% \label{res:existence_of_proj_inj_model_structures} If \( \catC \)~is a combinatorial model category, then both the projective and injective model structures on~\( \catC[diag=\catGamma,smash] \) exist and are combinatorial. Given a generating set of (trivial) cofibrations in~\( \catC \), the corresponding \emph{simple} (trivial) cofibrations in~\( \catC[diag=\catGamma,proj,smash] \), for all choices of~\( \vgamma[0] \in \catGamma \), form a generating set of (trivial) cofibrations. \end{propositionbreak} \begin{proof}[Sketch of proof] For the projective model structure, one checks by hand that simple trivial cofibrations (resp.\ simple cofibrations) have the left lifting property with respect to all degreewise fibrations (resp.\ degreewise trivial fibrations). One then checks that the mentioned simple (trivial) cofibrations form a generating set. The injective model structure, on the other hand, requires more work and has a less explicit set of generating cofibrations. \end{proof} \begin{propositionbreak}[Proposition {{\parencite[Remark~A.2.8.6]{htt}}}]% \label{res:projective_injective_quillen_functorial} A Quillen adjunction \( \mF \colon \catC \rightleftarrows \catD \noloc \mG \) between combinatorial model categories induces Quillen adjunctions \[ \begin{tikzcd}[sep=scriptsize] \catC[diag=\catGamma,proj] \ar[r,yshift=1.5pt] & \ar[l,yshift=-1.5pt] \catD[diag=\catGamma,proj] \end{tikzcd} \qquad\text{and}\qquad \begin{tikzcd}[sep=scriptsize] \catC[diag=\catGamma,inj] \ar[r,yshift=1.5pt] & \ar[l,yshift=-1.5pt] \catD[diag=\catGamma,inj] \end{tikzcd} \] which are Quillen equivalences if \( \tup { \mF , \mG } \)~is. \end{propositionbreak} \section{The Reedy model structure} A third approach exists to equip diagram categories~\( \catC[diag=\catGamma] \) with a model structure, provided the category~\( \catGamma \) has the structure of a Reedy category. Remarkably, unlike the projective and injective cases, this does not require any additional assumptions on~\( \catC \). A category~\( \catGamma \) is called \textdef{Reedy} if it contains two subcategories \( \catGamma[reedy+] , \catGamma[reedy-] \subset \catGamma \), each containing all objects, such that \begin{itemize} \item there exists a degree function \( \catob{\catGamma} \to \Z \), such that non-identity morphisms from~\( \catGamma[reedy+] \) strictly raise the degree and non-identity morphisms from~\( \catGamma[reedy-] \) strictly lower the degree (more generally, an ordinal number can be used instead of~\( \Z \)); \item each morphism \( \mf \in \catGamma \) factors \emph{uniquely} as \( \mf = \mg \mh \) for \( \mg \in \catGamma[reedy+] \) and \( \mh \in \catGamma[reedy-] \). \end{itemize} We note that a direct category is Reedy with \( \catGamma[reedy+] = \catGamma \), and that an inverse category is Reedy with \( \catGamma[reedy-] = \catGamma \).
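For a concrete instance beyond these two degenerate cases, we record a small example, to which we shall return when discussing homotopy limits. \begin{example}\label{ex:cospan_shape} Let \( \catGamma \) be the cospan shape \( \bullet \to \bullet \leftarrow \bullet \), i.e.\ the category with three objects \( x \), \( y \), \( z \) and exactly two non-identity morphisms \( x \to z \) and \( y \to z \). Assigning degree~\( 1 \) to \( x \) and~\( y \) and degree~\( 0 \) to~\( z \), every non-identity morphism strictly lowers the degree, and the required factorizations are trivial, so \( \catGamma = \catGamma[reedy-] \)~is an inverse category. Dually, the span shape \( \bullet \leftarrow \bullet \to \bullet \) is a direct category. \end{example}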
\begin{remark} If \( \catGamma \)~is Reedy, then so is~\( \catGamma[op] \), with \( \catGamma[op,spar,reedy+] = \catGamma[reedy-,spar,op] \) and \( \catGamma[op,spar,reedy-] = \catGamma[reedy+,spar,op] \). \end{remark} \begin{example}\label{ex:delta_reedy} The simplex category~\( \catdelta \) is Reedy with~\( \catdelta[reedy+] \) consisting of injective maps and \( \catdelta[reedy-] \)~consisting of surjective maps. The degree function does the obvious thing, \( \ordset{n} \mapsto n \). \end{example} If \( \catGamma \)~is a Reedy category and \( \catC \)~is any model category, and if \( \mF \in \catC[diag=\catGamma] \)~is a diagram, we define the \textdef[latching object]{latching} and \textdef[matching object]{matching objects} by \[ \mF[latch=\vgamma] = \dirlim[ (\valpha\underset{\smash{\neq}}{ \to }\vgamma)\in\catGamma[reedy+] ]{ \mF { \valpha } } \qquad\text{and}\qquad \mF[match=\vgamma] = \invlim[ (\vgamma\underset{\smash{\neq}}{\to}\valpha)\in\catGamma[reedy-] ]{ \mF { \valpha } }. \] In other words, the colimit (resp.\ limit) runs over the category of all \emph{non-identity} maps \( \valpha \to \vgamma \) in~\( \catGamma[reedy+] \) (resp.\ \( \vgamma \to \valpha \) in~\( \catGamma[reedy-] \)). The \textdef{latching map} is the canonical map \( \mF[latch=\vgamma,smash] \to \mF{\vgamma} \), and the \textdef{matching map} is the canonical map \( \mF{\vgamma} \to \mF[match=\vgamma,smash] \). If \( \mf \colon \mF \to \mG \) is a map in~\( \catC[diag=\catGamma] \), then the \textdef{relative latching map} is the map \[ \mF { \vgamma } \varpushout[ limits, {\mF[latch=\vgamma]} ] \mG[latch=\vgamma] \longto \mG { \vgamma } \] given by the universal property of the pushout. We say that \( \mf \)~is a \textdef{(trivial) Reedy cofibration} if the relative latching map is a (trivial) cofibration in~\( \catC \). If \( \mF = \varnothing \), we recover the latching map. Dually, the \textdef{relative matching map} is the map \[ \mF { \vgamma } \longto \mG { \vgamma } \varpullback[ limits, {\mG[match=\vgamma]} ] \mF[match=\vgamma] \] given by the universal property of the pullback. We say that \( \mf \)~is a \textdef{(trivial) Reedy fibration} if the relative matching map is a (trivial) fibration in~\( \catC \). If \( \mF = * \), we recover the matching map. \begin{propositionbreak}[{Proposition \parencite[Theorem~15.3.4]{hir}}] If \( \catC \)~is an arbitrary model category and \( \catGamma \)~is a Reedy category, then this defines a model structure on~\( \catC[diag=\catGamma,smash] \), called the \textdef{Reedy model structure}. The weak equivalences are the componentwise weak equivalences. We shall write~\( \catC[diag=\catGamma,reedy,smash] \) when we equip the diagram category with this model structure. \end{propositionbreak} \begin{propositionbreak}[{Proposition \parencite[Example~A.2.9.22]{htt}}]% \label{res:reedy_projective_injective_restriction} Let~\( \catC \) be a model category and \( \catGamma \)~a Reedy category. Then \begin{propositionlist} \item If \( \catGamma = \catGamma[reedy+] \)~is a direct category, the projective model structure~\( \catC[diag=\catGamma,proj] \) exists and coincides with the Reedy model structure. \item If \( \catGamma = \catGamma[reedy-] \)~is an inverse category, the injective model structure~\( \catC[diag=\catGamma,inj] \) exists and coincides with the Reedy model structure.
\end{propositionlist} Furthermore, a map \( \mf \colon \mF \to \mG \) in~\( \catC[diag=\catGamma] \) is a \begin{propositionlist}[resume] \item\label{res:reedy_projective_injective_restriction_plus} (trivial) cofibration if and only if the restriction \( \smash{ \mf[res=\catGamma[reedy+]] \colon \mF[res=\catGamma[reedy+]] \to \mG[res=\catGamma[reedy+]] } \) is a (trivial) projective cofibration in~\( \catC[diag=\catGamma[reedy+],proj,smash] \). \item\label{res:reedy_projective_injective_restriction_minus} (trivial) fibration if and only if the restriction \( \smash{ \mf[res=\catGamma[reedy-]] \colon \mF[res=\catGamma[reedy-]] \to \mG[res=\catGamma[reedy-]] } \) is a (trivial) injective fibration in~\( \catC[diag=\catGamma[reedy-],inj,smash] \). \end{propositionlist} \end{propositionbreak} \begin{propositionbreak}[{Proposition \parencite[Theorem~15.5.2]{hir}}]% \label{res:reedy_category_product} If \( \catGamma \) and \( \catGamma[prime] \)~are both Reedy categories, then so is \( \catGamma\times\catGamma[prime] \), and the three possible Reedy model structures one can put on \( \catC[diag=\catGamma\times\catGamma[prime]] \) agree, i.e. \[ \catC[diag=\catGamma\times\catGamma[prime],reedy] = \catC[diag=\catGamma,reedy,spar=\big,smash,diag=\catGamma[prime],reedy] = \catC[diag=\catGamma[prime],reedy,spar=\big,smash,diag=\catGamma,reedy] . \] \end{propositionbreak} \section{Homotopy limits} The following theorem is the basis for all our homotopy limit formulae: \begin{theorem}\label{res:end_functor_quillen} Let~\( \catC \) be a model category and~\( \catGamma \) a category. Regard the functor category~\( \catC[diag=\catGamma[op]\times\catGamma] \) as a model category in any of the following ways: \begin{theoremlist} \item\label{res:end_functor_quillen_gammaop_gamma} as~\( \catC[diag=\catGamma[op]\times\catGamma] = \catC[ diag=\catGamma[op], proj, spar, diag=\catGamma, inj, ] \) (assuming this model structure exists); \item\label{res:end_functor_quillen_gamma_gammaop} as~\( \catC[diag=\catGamma[op]\times\catGamma] = \catC[ diag=\catGamma, proj, spar, diag=\catGamma[op], inj, ] \) (assuming this model structure exists); \item\label{res:end_functor_quillen_reedy} as~\( \catC[diag=\catGamma[op]\times\catGamma] = \catC[ diag=\catGamma[op]\times\catGamma, reedy, ] \) (assuming \( \catGamma \)~is Reedy). \end{theoremlist} Then the end functor~\( \End[\catGamma] \colon \catC[diag=\catGamma[op]\times\catGamma] \to \catC \) is right Quillen. \end{theorem} \begin{proof} We first prove the first statement and obtain the second one by duality. By \cref{res:end_adjunction}, it suffices to check that the left adjoint~\( \Coprod[\Hom[\catGamma]] \) takes (trivial) cofibrations in~\( \catC \) to (trivial) cofibrations in~\( \catC[ diag=\catGamma[op], modelstructure=\modelstructureproj, spar, diag=\catGamma, modelstructure=\modelstructureinj, smash, ] \). If \( \vc \to \vc[prime] \)~is a (trivial) cofibration in~\( \catC \), then we must therefore consider the map~\( \Coprod [ \catGamma {\slot ,\slot } ]{ \vc } \to \Coprod [ \catGamma {\slot ,\slot } ]{ \vc[prime] } \) in~\( \catC[ diag=\catGamma[op], modelstructure=\modelstructureproj, spar, diag=\catGamma, modelstructure=\modelstructureinj, smash, ] \). Checking that this is a (trivial) injective cofibration over~\( \catGamma \) amounts, by definition, to checking this componentwise.
But for a fixed~\( \vgamma[0] \in\catGamma \), this component is \( \Coprod[ \catGamma{\slot , \vgamma[0] } ]{ \vc } \to \Coprod[ \catGamma{\slot , \vgamma[0] } ]{ \vc[prime] }, \) which is a simple (trivial) projective cofibration in~\( \catC[diag=\catGamma[op],smash] \). For the Reedy case, we recall from \cref{res:reedy_projective_injective_restriction,res:reedy_category_product} that being a (trivial) cofibration in the model category~\( \catC[diag=\catGamma\times\catGamma[op],reedy,smash] = \catC[diag=\catGamma,reedy,spar,diag=\catGamma[op],reedy,smash] \) is equivalent to the restriction being a (trivial) cofibration in \[ \catC[ diag={\catGamma[reedy+]},proj, spar=\big,smash, diag={\catGamma[op,spar,reedy+]},proj, ] = \catC[ diag={\catGamma[reedy+]},proj, spar=\big,smash, diag={\catGamma[reedy-,spar,op]},proj, ]. \] But we have, by the unique factorization property of Reedy categories, that \[ \Coprod[ \catGamma{\slot ,\slot } ]{ \vc } = \Coprod[ \vgamma[0] \in \catGamma ]{ \Coprod [ \catGamma[reedy-] ({\slot , \vgamma[0] }) ]{ \Coprod [ \catGamma[reedy+] ({ \vgamma[0] ,\slot }) ]{ \vc } } } \] for any~\( \vc\in\catC \). These consist of coproducts of exactly the same form as the ones appearing in the definition of simple (trivial) projective cofibrations (\cref{res:simple_cofibrations}). Thus we find that for any (trivial) cofibration~\( \vc\to\vc[prime] \) in~\( \catC \), the map~\( \Coprod[ \catGamma {\slot ,\slot } ]{ \vc } \to \Coprod[ \catGamma {\slot ,\slot } ]{ \vc[prime] } \) is a (trivial) cofibration in~\( \catC[diag=\catGamma[op]\times\catGamma,reedy,smash] \). \end{proof} Thus we can derive the end using any of these three model structures, when available. Write \( \End[rder,\catGamma] \colon\catC[diag=\catGamma[op]\times\catGamma]\to\catC \) for the derived functor, which we shall call the \textdef{homotopy end}. \begin{corollary}\label{res:homotopy_limits_via_ends} If \( \catC \)~is a combinatorial model category and \( \catGamma \)~a category, then for a diagram~\( \mF \in \catC[diag=\catGamma] \), \[\textstyle \invholim[\catGamma]{ \mF } = \End[rder,\catGamma]{ \mF } = \End [ \catGamma]{ \mfibrep{ \mF } } , \] where \( \mfibrep \)~is a fibrant replacement with respect to the model structure~\( \catC[diag=\catGamma,proj,spar,diag=\catGamma[op],inj,smash] \) or, if \( \catGamma \)~is Reedy, in~\( \catC[diag=\catGamma[op]\times\catGamma,reedy,smash] \). \end{corollary} \begin{proof} First write \( \invholim { \mF } = \invlim { \mfibrep[\catGamma]{ \mF } } \) for some fibrant replacement functor~\( \mfibrep[\catGamma] \) in~\( \catC[diag=\catGamma,inj,smash] \). Now \cref{res:limit_functor_quillen_dirlim,res:projective_injective_quillen_functorial} show that the constant functor embedding~\( \catC[diag=\catGamma,inj,smash] \into \catC[diag=\catGamma[op],proj,spar,diag=\catGamma,inj,smash] \) is right Quillen and thus preserves fibrant objects. Thus \( \mfibrep[\catGamma]{ \mF } \)~is also fibrant in~\( \catC[diag=\catGamma[op],proj,spar,diag=\catGamma,inj,smash] \). This proves the first equality sign. The second one is clear. \end{proof} Of course, even though \( \mF \) as a diagram in~\( \catC[diag=\catGamma[op]\times\catGamma] \) was constant with respect to the first variable, \( \mfibrep{ \mF } \) is in general not. Remarkably, since ends calculate naturality between the two variables, this often makes calculations of homotopy limits more manageable, compared to resolving the diagram inside~\( \catC[diag=\catGamma,inj,smash] \). 
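For instance (a standard special case, written here with generic letters \( X \to Z \leftarrow Y \) rather than in the notation used for diagrams elsewhere in this text), take \( \catGamma \) to be the cospan category \( \bullet \to \bullet \leftarrow \bullet \). This category is inverse, so the injective and Reedy model structures on~\( \catC[diag=\catGamma,smash] \) agree, and a diagram is fibrant precisely when \( Z \) is fibrant and both \( X \to Z \) and \( Y \to Z \) are fibrations. Resolving inside~\( \catC[diag=\catGamma,inj,smash] \) thus amounts to the familiar recipe for the homotopy pullback: replace the cospan by a diagram of this form and take the ordinary pullback.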
\begin{corollary}\label{res:holim_direct_category} Suppose \( \catGamma \)~is a direct category, and let~\( \smash{ \mfibrep \colon \catC \to \catC[diag=\catGamma[op],inj] } \) be a functor that takes~\( \vc\in\catC \) to a fibrant replacement of the constant diagram at~\( \vc \). Then \[\textstyle \invholim[\catGamma]{ \mF } = \End[\vgamma\in\catGamma]{ \mfibrep{ \mF{\vgamma} }(\vgamma) }. \] \end{corollary} \begin{proof} Clearly, \( \mfibrep{ \mF } \)~is a fibrant replacement inside~\( \catC[ diag=\catGamma[op], inj, spar, diag=\catGamma, proj, smash, ] \). By \cref{res:reedy_projective_injective_restriction,res:reedy_category_product}, this model category is equal to \[ \catC[ diag=\catGamma[op], inj, spar=\big, smash, diag=\catGamma, proj, ] = \catC[ diag=\catGamma[op], reedy, spar=\big, smash, diag=\catGamma, reedy, ] = \catC[ diag=\catGamma, reedy, spar=\big, smash, diag=\catGamma[op], reedy, ] = \catC[ diag=\catGamma, proj, spar=\big, smash, diag=\catGamma[op], inj, ] , \] so the result follows from~\cref{res:end_functor_quillen}. \end{proof} \section{Bousfield--Kan formula} In \textcite[chapter~19]{hir}, homotopy limits are developed for arbitrary model categories via a machinery of simplicial resolutions. In this section, we use \cref{res:end_functor_quillen}/\cref{res:homotopy_limits_via_ends} to explain why this machinery works. Throughout, we denote by~\( \catsset \) the category of simplicial sets endowed with the Quillen model structure. If \( \catC \)~is a (complete) category and~\( \ssetX{*} \in \catC[diag=\catdelta[op]] \) a simplicial diagram in~\( \catC \), we may extend~\( \ssetX{*} \) to a continuous functor~\( \ssetX \colon \catsset[op] \to \catC \) via the right Kan extension along the Yoneda embedding~\( \catdelta[op] \into \catsset[op] \): \[ \ssetX[powering=\ssetK] = \invlim[\simp{n}\to\ssetK]{ \ssetX{n} }, \qquad \ssetK\in\catsset. \] If \( \catC \)~is a model category, the matching object at~\( \ordset{n} \) is~\( \ssetX[match=n]{*} = \ssetX[powering=\simp{n}[boundary]] \), and so \( \ssetX{*} \)~being Reedy-fibrant is equivalent to the map~\(\smash{ \ssetX{n} = \ssetX[powering=\simp{n}] \to \ssetX[powering=\simp{n}[boundary]] }\) being a fibration in~\( \catC \) for all~\( n \). \begin{theorem}[Theorem (Bousfield--Kan formula)]\label{res:bousfield_kan} Suppose \( \catC \)~is a combinatorial model category,~\( \catGamma \) a category, and~\( \mF \in \catC[diag=\catGamma] \). Let~\( \fibrep \colon \catC \to \catC[diag=\catdelta[op],smash] \) be a functor that takes~\( \vc \in \catC \) to a Reedy-fibrant replacement of the constant \( \catdelta[op] \)-diagram at~\( \vc \). Let furthermore~\( \ssetK \in \catsset[diag=\catGamma,proj,smash] \) be a projectively cofibrant resolution of the point. Then \[ {\textstyle\invholim[\catGamma]{ \mF }} = \End[\vgamma\in\catGamma]{ \fibrep[output=simplicial]{\mF{\vgamma}}[ powering={\ssetK[arg=\vgamma]}, ] }. \] \end{theorem} One may prove \parencite[see e.g.][Proposition~14.8.9]{hir} that the diagram~\( \ssetK[arg=\slot] = \nerve{{ \catGamma[mod=\slot] }} \in \catsset[diag=\catGamma,proj,smash] \), taking~\( \vgamma \) to the nerve~\( \nerve{{ \catGamma[mod=\vgamma] }} \) of the comma category~\( \catGamma[mod=\vgamma] \) of all maps in~\( \catGamma \) with codomain~\( \vgamma \), is a projectively cofibrant resolution of the point.
Thus we have \begin{equation}\label{eq:classical_bousfield_kan} {\textstyle\invholim[\catGamma]{ \mF }} = \End[\vgamma\in\catGamma]{ \fibrep[output=simplicial]{\mF{\vgamma}}[ powering={\nerve{{ \catGamma[mod=\vgamma] }}}, ] }, \end{equation} which is the classical form of the Bousfield--Kan formula. The proof relies on the following standard lemma: \begin{lemmabreak}[{{Lemma \parencite[Proposition~3.6.8]{hovey}}}]\label{res:hovey_lemma} Let~\( \catC \) be a model category and \( \mF \colon \catsset \to \catC \) a functor preserving colimits and cofibrations. Then \( \mF \)~preserves trivial cofibrations if and only if \( \smash{ \mF{\simp{n}} \to \mF{\simp{0}} } \)~is a weak equivalence for all~\( n \). \end{lemmabreak} \begin{proof}[Proof of~\cref{res:bousfield_kan}] Clearly, \( \fibrep{ \mF{\slot} }[ibullet] \)~is a fibrant replacement of~\( \mF \) with respect to the model structure~\( \catC[ diag=\catdelta[op], reedy, spar, diag=\catGamma, proj, smash, ] \). The theorem will follow if we prove that \( \fibrep{ \mF{\slot} }[powering={\ssetK[arg=\slot]}] \)~is a fibrant replacement of~\( \mF \) in~\( \catC[ diag=\catGamma, proj, spar, diag=\catGamma[op], inj, smash, ] \). This will follow from Ken Brown's Lemma if we prove that the continuous functor \[ \catsset[diag=\catGamma,proj,spar=\big,op] \longto \catC[diag=\catGamma,proj,spar=\big,smash,diag=\catGamma[op],inj], \qquad \ssetK[arg=\slot] \longmapsto \fibrep{ \mF{\slot} }[powering={\ssetK[arg=\slot]}] , \] takes opposites of (trivial) cofibrations to (trivial) fibrations. (Trivial) cofibrations in~\( \catsset[diag=\catGamma,proj,smash] \) are generated from \emph{simple} (trivial) projective cofibrations via pushouts, transfinite compositions, and retracts, cf.~\cref{res:existence_of_proj_inj_model_structures}. Thus by continuity of the functor, it suffices to prove the statement for \emph{simple} (trivial) cofibrations. We therefore let \( \Coprod[\catGamma{\vgamma[0],\slot}]{ \ssetK } \into \Coprod[\catGamma{\vgamma[0],\slot}]{ \ssetL } \)~be one such, where \( \ssetK\into\ssetL \) is a (trivial) cofibration and~\( \vgamma[0] \in \catGamma \). This is mapped to \[\textstyle \Prod[\catGamma{\vgamma[0],\slot}]{ \fibrep{\mF{\slot}}[powering=\ssetL] } \to \Prod[\catGamma{\vgamma[0],\slot}]{ \fibrep{\mF{\slot}}[powering=\ssetK] }. \] Thus we must show that the composition \[ \catsset[op] \xrightarrow{\Prod[\catGamma{\vgamma[0],\slot}]} \catsset[diag=\catGamma[op],proj,spar=\big,op] \longto \catC[diag=\catGamma,proj,spar=\big,smash,diag=\catGamma[op],inj], \qquad \textstyle \ssetK \longmapsto \Prod[\catGamma{\vgamma[0],\slot}]{ \fibrep{ \mF{\slot} }[powering={\ssetK}] }, \] takes (trivial) cofibrations to (trivial) fibrations. Checking that it takes cofibrations to fibrations amounts to checking this for the generating cofibrations~\( \simp{n}[boundary]\into\simp{n} \) in~\( \catsset \). This holds by the assumption that \( \fibrep{\mF{\slot}}[ibullet] \) is componentwise Reedy-fibrant. Since the functor takes colimits to limits, the claim now follows from (the dual of) the lemma. \end{proof} \chapter{Homotopy-initial functors} A functor~\( \mf \colon \catGamma \to \catGamma[prime] \) is called \textdef{homotopy-initial} if for all objects~\( \vgamma[prime] \in \catGamma[prime] \), the nerve~\( \nerve{{ \mf[over=\vgamma[prime]] }} \) is contractible as a simplicial set; here \( \mf[over=\vgamma[prime]] \) denotes the comma category whose objects are pairs~\( \tup{ \vgamma , \malpha } \) where \( \malpha \) is a map~\( \mf{ \vgamma } \to \vgamma[prime] \).
A morphism~\( \tup{ {\vgamma[1]} , \malpha[1] } \to \tup{ {\vgamma[2]} , \malpha[2] } \) is a morphism~\( \vgamma[1] \to \vgamma[2] \) in~\( \catGamma \) making the diagram \[\begin{tikzcd}[row sep=0em] \mf{\vgamma[1]} \ar[dd] \ar[rd] \\ & \vgamma[prime] \\ \mf{\vgamma[2]} \ar[ru] \end{tikzcd}\] commute. We aim to reprove the statement \begin{theorembreak}[{Theorem \parencite[Theorem~19.6.7]{hir}}]\label{res:homotopy_initial} Suppose \( \catC \)~is a combinatorial model category and~\( \catGamma, \catGamma[prime] \) two categories. If \( \mf \colon \catGamma \to \catGamma[prime] \)~is homotopy-initial, then we have \[\textstyle \invholim[\catGamma[prime]]{\mF} = \invholim[\catGamma]{{\mf[*res]{\mF}}} \] for all~\( \mF \in \catC[diag=\catGamma[prime]] \). \end{theorembreak} This relies on a few technical lemmas: \begin{lemma}\label{res:kan_extension_nerve} If \( \mf \colon \catGamma \to \catGamma[prime] \) is a functor, then~\( \mf[!kan]{ \nerve{{ \catGamma[over=\slot] }}} = \nerve{{\mf[over=\slot]}} \in \catsset[diag=\catGamma[prime]] \). In particular, since \( \nerve{{ \catGamma[over=\slot] }} \in \catsset[diag=\catGamma,proj,smash] \) is cofibrant, \( \nerve{{ \mf[over=\slot] }} \in \catsset[diag=\catGamma[prime],proj,smash] \) is cofibrant by \cref{res:Kan_extensions_Quillen_adjunctions}. \end{lemma} \begin{proof} Since colimits in diagram categories over cocomplete categories can be checked componentwise, this boils down to the observation \[ \dirlim[ \mf{\vgamma} \to \vgamma[prime] ]{ \nerve{{ \catGamma[over=\vgamma] }}{n} } = \nerve{{ \mf[over=\vgamma[prime]] }}{n} . \] \end{proof} The following lemma is inspired by \textcite[Proposition~19.6.6]{hir}. See also \textcite[Lemma~8.1.4]{riehl}. \begin{lemmabreak}\label{res:change_of_diagrams} Suppose that \( \catC \) is a complete category and that \( \catGamma \) and~\( \catGamma[prime] \) are two categories with a functor~\( \mf \colon \catGamma \to \catGamma[prime] \). Then we have \[ \End[\vgamma\in\catGamma]{ \mF{\mf{\vgamma}}[powering=\nerve{{\catGamma[over=\vgamma]}}] } = \End[\vgamma[prime]\in\catGamma[prime]]{ \mF{\vgamma[prime]}[powering={ \nerve{{ \mf[over=\vgamma[prime]] }} }] } \] for~\( \mF \in \catC[diag=\catdelta[op],spar,smash,diag=\catGamma] \) (see the previous chapter for an explanation of the power notation). \end{lemmabreak} \begin{proof} For the purpose of the proof, we recall that the Kan extension formulas in~\eqref{eq:kan_extensions} may be equivalently written in terms of (co)ends: \[ \mf[!kan]{\mF}{\vgamma[prime]} = \Coend[\vgamma\in\catGamma]{ \catGamma[prime]{\mf{\vgamma},\vgamma[prime]} \times \mF{\vgamma} } \qquad\text{and}\qquad \mf[*kan]{\mF}{\vgamma[prime]} = \End[\vgamma\in\catGamma]{ \mF{\vgamma}[powering={\catGamma[prime]{\vgamma[prime],\mf{\vgamma}}}] } . \] Here we are using the natural copowering and powering of~\( \catset \) on~\( \catC \), given by~\( S \times \vc = \Coprod[S]{\vc} \) and~\( \vc[powering=S] = \Prod[S]{\vc} \) for~\( S \in \catset \) and~\( \vc \in \catC \), which make sense whenever \( \catC \) is cocomplete resp.~complete. We shall furthermore make use of the so-called \enquote{co-Yoneda lemma} which says that \[ \mG{\mf{\vgamma}} = \End[\vgamma[prime]\in\catGamma[prime]]{ \mG{\vgamma[prime]}[ powering=\catGamma[prime]{\mf{\vgamma},\vgamma[prime]} ] } \qquad \text{for all } \mG \in \catC[diag=\catGamma[prime]] . \] Finally, we use \enquote{Fubini's theorem} for ends, which says that ends, being limits, commute.
This together yields \begin{align*} \End[\vgamma\in\catGamma]{ \mF{\mf{\vgamma}}[powering=\nerve{{\catGamma[over=\vgamma]}}] } &= \End[\vgamma\in\catGamma]{ \End[\ordset{n}\in\catdelta]{ \mF{\mf{\vgamma}}[n,powering=\nerve{{\catGamma[over=\vgamma]}}{n}] } } \\ &= \End[\vgamma\in\catGamma]{ \End[\ordset{n}\in\catdelta]{ \End[\vgamma[prime]\in\catGamma[prime]]{ \mF{\vgamma[prime]}[ n, powering=\catGamma[prime]{\mf{\vgamma},\vgamma[prime]}, spar=\big, powering=\nerve{{\catGamma[over=\vgamma]}}{n}, ] } } } \\ &= \End[\vgamma\in\catGamma]{ \End[\ordset{n}\in\catdelta]{ \End[\vgamma[prime]\in\catGamma[prime]]{ \mF{\vgamma[prime]}[ n, powering={ \catGamma[prime]{\mf{\vgamma},\vgamma[prime]} \times \nerve{{\catGamma[over=\vgamma]}}{n} }, ] } } } \\ &= \End[\ordset{n}\in\catdelta]{ \End[\vgamma[prime]\in\catGamma[prime]]{ \mF{\vgamma[prime]}[n,powering={ \Coend[\vgamma\in\catGamma]{ \catGamma[prime]{\mf{\vgamma},\vgamma[prime]} \times \nerve{{\catGamma[over=\vgamma]}}{n} } }] } } \\ &= \End[\vgamma[prime]\in\catGamma[prime]]{ \mF{\vgamma[prime]}[n,powering={ \mf[!kan]{{ \nerve{{ \catGamma[over=\slot] }}[arg=\vgamma[prime]] }} }] } = \End[\vgamma[prime]\in\catGamma[prime]]{ \mF{\vgamma[prime]}[powering={ \nerve{{ \mf[over=\vgamma[prime]] }} }] } \end{align*} where the last equality sign is due to \cref{res:kan_extension_nerve}. \end{proof} \begin{proof}[Proof of~\cref{res:homotopy_initial}] \Cref{res:bousfield_kan} and equation~\eqref{eq:classical_bousfield_kan} show that \[ { \textstyle \invholim[\catGamma]{ \mf[*res]{\mF} } } = \End[\vgamma[prime]\in\catGamma[prime]]{ \fibrep{\mF{\vgamma[prime]}}[powering={ \nerve{{ \mf[over=\vgamma[prime]] }} }] } . \] Since \( \nerve{{ \mf[over=\vgamma[prime]] }} \) is contractible for all~\( \vgamma[prime]\), \( \nerve{{ \mf[over=\slot ] }} \)~is a projectively cofibrant resolution of the point by \cref{res:kan_extension_nerve}. Thus the \anrhs is exactly~\( \invholim[\catGamma[prime]]{\mF}\) by~\cref{res:bousfield_kan}. \end{proof} \begin{examplebreak}[Example: Fat totalization formula]\label{ex:fat_totalization} Recall from \cref{ex:delta_reedy} that the simplex category~\( \catdelta \) is Reedy with~\( \catdelta[plus] \) being the subcategory containing only injective maps. The inclusion \(\smash{ \miota\colon \catdelta[plus]\into\catdelta }\) is homotopy-initial (\cite[see e.g.][Example~8.5.12]{riehl} or \cite[Example 21.2]{dugger}), hence \(\smash{ \invholim[\catdelta]{ \cosimplicial{X}{*} } = \invholim[\catdelta[plus]]{ \cosimplicial{X}{*} } }\) for all~\( \smash{ \cosimplicial{X}{*} \in \catC[diag=\catdelta] } \). As \( \catdelta[plus] \)~is a direct category, we obtain from~\cref{res:holim_direct_category} that we may calculate~\( \invholim[\catdelta]{ \cosimplicial{X}{*} } \) as \[\textstyle \invholim[\catdelta] { \cosimplicial{X}{*} } = \End[\catdelta[plus]]{ \fibrep[output=simplicial] ( \cosimplicial{X}{n} )_{n} } \] for some functor \( \smash{ \fibrep \colon \catC\to\catC[diag={\catdelta[plus,op]}] } \) that takes~\( \vx \) to an injectively (i.e.~Reedy-) fibrant replacement of the constant diagram at~\( \vx \). This is the so-called \textdef{fat totalization} formula for homotopy limits over~\( \catdelta \). The dual formula for homotopy colimits over~\( \catdelta[op] \) is called the \textdef{fat geometric realization} formula. \end{examplebreak} \section*{Acknowledgements} We would like to thank Edouard Balzin, Marcel B\"okstedt, and Stefan Schwede for many fruitful discussions and for reading through a draft of this paper.
Special thanks to Henning Haahr Andersen for many years of great discussions, help, and advice, and for making our cooperation possible in the first place. This paper was written mostly while the authors were visiting the Max Planck Institute for Mathematics in Bonn, Germany. We would like to express our gratitude to the institute for inviting us and for providing us with an excellent and stimulating working environment. \endgroup \pagebreak
\section{Introduction} \label{sec-intro} Magnetic reconnection --- a process of rapid rearrangement of magnetic field lines corresponding to a change of magnetic field topology --- is widely recognized as one of the most important fundamental plasma physical processes \citep{Biskamp-2000, Zweibel_Yamada-2009, Yamada_etal-2010}. It is ubiquitous in laboratory, space, and astrophysical plasmas. As with many other important plasma processes, the reason why it is considered to be important, and why researchers are interested in it, is that it is one of the processes controlling the plasma energetics, i.e., energy exchange between the different plasma system constituents. The specific case of reconnection involves the flow of energy from the magnetic field to the charged particles comprising the plasma. This energy conversion is made possible by the relaxation of magnetic field to a lower energy state suddenly made accessible by the breaking of topological constraints in reconnection, in particular, the flux freezing constraint. Although on the fundamental particle level the released energy goes to the kinetic energy of individual particles, it is customary to describe the resulting energization of the plasma by splitting it into several parts at the fluid level, such as the thermal heating of plasma and the bulk kinetic energy of the reconnection outflows, plus nonthermal particle acceleration at the kinetic level of plasma description. The question of the partitioning of the dissipated magnetic energy among these different forms of plasma energy (as well as the partitioning of energy between electrons and ions) is considered to be one of the main driving questions in magnetic reconnection research (see, e.g., the chapter by Yamada et al. in this volume). To a large extent, this is because it most directly relates to the observed radiation powered by reconnection in remote astrophysical sources. In fact, one of the main reasons why scientists are interested in reconnection, and especially in its energetics aspects, is that this process is commonly believed to be responsible for some of the most spectacular and energetic phenomena observed in the Universe. In particular, it is believed to be the mechanism for powering many explosive phenomena exhibiting very rapid time variability and impulsive character --- various kinds of {\it flares} and bursts of high-energy (UV, X-ray, and gamma-ray) radiation. Reconnection is especially important as an energetically dominant mechanism in systems that are magnetically dominated (low plasma-$\beta$), that is in tenuous hot coronae and magnetospheres of dense and relatively cold gravitationally stratified objects~\citep{Uzdensky-2006}. The two most prominent and best-studied classic examples of reconnection in Nature are solar flares and magnetic substorms in the Earth magnetosphere (see, e.g., the chapters by Shibata \& Takasao, by Cassak \& Fuselier, and by Petrukovich et al. in this volume). This is the area where reconnection research started more than 50 years ago \citep{Sweet-1958, Parker-1957, Dungey-1961, Axford-1967, Vasyliunas-1975} and where the case for reconnection is most convincingly established by observations \citep{Masuda_etal-1994, Shibata-1996, Yokoyama_Shibata-1995, Yokoyama_etal-2001, Tsuneta-1996, Paschmann_etal-2013}.
However, in the past couple of decades, reconnection has also been increasingly often proposed (although without such strong observational evidence as in the above heliospheric examples) as an attractive possible mechanism for powerful flares in many astrophysical systems outside the solar system, especially in high-energy astrophysics. It has been invoked to explain energy dissipation and radiation in pulsar systems (e.g., in pulsar magnetospheres, the striped pulsar winds, and pulsar wind nebulae, PWNe) \citep{Michel-1982, Coroniti-1990, Michel-1994, Lyubarsky-1996, Lyubarsky-2000, Lyubarsky_Kirk-2001, Kirk_Skjaeraasen-2003, Petri_Lyubarsky-2007, Contopoulos-2007, Lyutikov-2010, Uzdensky_etal-2011, Bednarek_Idec-2011, Cerutti_etal-2012a, Cerutti_etal-2013, Cerutti_etal-2014a, Cerutti_etal-2014b, Sironi_Spitkovsky-2011, Clausen-Brown_Lyutikov-2012, Arka_Dubus-2013, Uzdensky_Spitkovsky-2014, Philippov_etal-2014, Philippov_etal-2015, Cerutti_etal-2015}; in gamma-ray bursts (GRBs) \citep{Spruit_etal-2001, Drenkhahn_Spruit-2002, Lyutikov-2006b, Giannios_Spruit-2007, McKinney_Uzdensky-2012}; in magnetospheres of magnetars \citep{Lyutikov-2003b, Lyutikov-2006a, Masada_etal-2010, Uzdensky-2011, Parfrey_etal-2012, Parfrey_etal-2013}; and in coronae and jets of accreting black holes (BHs) \citep{GRV-1979, van_Oss_etal-1993, deGouveia_etal-2005, Uzdensky_Goodman-2008, Goodman_Uzdensky-2008, deGouveia_etal-2010, Khiali_etal-2015a, Kadowaki_etal-2015, Singh_etal-2015}, including those in active galactic nuclei (AGN) and blazars \citep{Romanova_Lovelace-1992, DiMatteo-1998, DiMatteo_etal-1999, Lesch_Birk-1998, Schopper_etal-1998, Larrabee_etal-2003, Liu_etal-2003, Lyutikov-2003a, Jaroschek_etal-2004a, Jaroschek_etal-2004b, Giannios_etal-2009, Giannios_etal-2010, Giannios-2010, Nalewajko_etal-2011, Khiali_etal-2015b}. It is worth noting that in most traditional, solar-system applications of reconnection, including solar flares, the Earth magnetosphere, and sawtooth crashes in tokamaks, one rightfully ignores radiation emitted promptly during the reconnection process. This is well justified because in these situations the radiative cooling time of the energetic particles is usually much longer than the time they spend in the reconnection region and, in fact, than the entire duration of the reconnection event. In contrast, however, in many high-energy astrophysical environments, especially relativistic ones, the ambient magnetic and radiation energy density is so high and the system size is so large that reconnection takes place in the radiative regime. This means that the radiation reaction force on the emitting particles is rather strong and needs to be taken into account because it materially affects the particles' motion. This makes it necessary to understand {\it radiative magnetic reconnection}, which we define here as a reconnection regime in which radiation back-reaction has an important effect on reconnection dynamics, energetics, and/or particle acceleration. In this regime magnetic dissipation and radiative processes intertwine and influence each other. Understanding how this happens represents an exciting new frontier in plasma astrophysics. This frontier is only now beginning to be charted, and the main goal of this article is to give an overview of the recent progress in this area.
In principle, radiation reaction can exert several effects on reconnection: (1) putting an upper limit on the high-energy extent of nonthermal particle acceleration and hence on the energy of the emitted photons; (2) optically-thin or optically-thick radiative cooling; (3) radiative drag on the current-carrying electrons in the layer (radiative resistivity); (4) radiation drag on the reconnection outflows (manifested as an effective radiation viscosity in the optically-thick case); (5) radiation pressure; (6) pair creation. The main radiative mechanisms in high-energy astrophysical plasmas, especially relativistic ones, are cyclotron/synchrotron emission, curvature emission, and inverse-Compton (IC) scattering. Apart from its significance to basic plasma physics, the main motivation for exploring radiative reconnection comes from numerous astrophysical applications. Examples of radiative relativistic reconnection in astrophysics include: (1) Accretion flows and accretion disk coronae (ADC) around black holes with accretion rates approaching the Eddington limit, in both stellar-mass galactic X-ray binary (XRB) systems and in supermassive BHs in AGN, e.g., quasars. Here, reconnection processes occur in the presence of a very intense radiation field emitted by the disk, which leads to a very powerful inverse-Compton (IC) cooling of the electrons energized by reconnection \citep{Goodman_Uzdensky-2008, Khiali_etal-2015a, Khiali_etal-2015b}. (2) Magnetospheres and relativistic winds of pulsars (rapidly rotating magnetized neutron stars), where reconnection should happen in the ballerina-skirt equatorial current sheet outside the pulsar light cylinder (LC). In many cases (e.g., the Crab), the magnetic field at the LC is so high that prompt synchrotron cooling of the plasma heated by reconnection is very strong; it controls the energy balance of the layer and limits the plasma temperature. This radiation may then explain the powerful pulsed high-energy (GeV) $\gamma$-ray emission observed in Crab and other pulsars (e.g., \citep{Lyubarsky-1996, Uzdensky_Spitkovsky-2014}, Fish et~al.~2015, in prep.). (3) Pulsar Wind Nebulae (PWN), including the Crab Nebula; here, although radiative cooling is not strong enough to affect the bulk of the plasma, synchrotron radiation reaction may limit the extreme (PeV) particle acceleration, which has important implications for the $\gamma$-ray flares recently discovered in the Crab Nebula \citep{Bednarek_Idec-2011, Uzdensky_etal-2011, Cerutti_etal-2012a, Cerutti_etal-2013, Cerutti_etal-2014a, Cerutti_etal-2014b}. (4) Ultra-relativistic jets in blazars where radiative braking and cooling may alter reconnection dynamics and radiation production, e.g., in the context of very rapid TeV flares in several blazar systems \citep{Giannios_etal-2009, Nalewajko_etal-2011, Giannios-2013}. (5) Gamma-Ray Bursts (GRBs), where magnetic reconnection has been conjectured to power the main prompt gamma-ray emission \citep{Drenkhahn_Spruit-2002, Giannios_Spruit-2006}. Here, reconnection takes place in an environment with high energy density and large optical depth, so that photon resistivity, radiation cooling, and radiation pressure need to be taken into account \citep{McKinney_Uzdensky-2012}. (6) Magnetospheres of magnetars (ultra-magnetized neutron stars), where it has been suggested that, by analogy with the solar corona, reconnection may explain the observed powerful gamma-ray flares (e.g., \citep{Lyutikov-2006a, Uzdensky-2011, Parfrey_etal-2012, Parfrey_etal-2013, Yu-2012}).
The energy density in these systems is extremely high and hence reconnection inevitably leads to relativistically-hot temperatures and copious photon and pair creation \citep{Uzdensky-2011, Uzdensky_Rightley-2014}. One can thus see that the large number and the great diversity of examples of radiative magnetic reconnection in astrophysics strongly motivate advancing this new research frontier. Another practical reason to study radiative reconnection is that, in astrophysics, remote telescopic observations of the radiation produced in a flare provide our only diagnostic probe for studying the underlying physics. For this reason, the ability to calculate the observable radiation spectrum is critical for testing whether reconnection (or any other given process) can explain observations. Thus, in order to connect our theoretical/numerical reconnection models with the actual observable radiation, we must develop a rigorous method for calculating the produced radiation signatures in detail, such as time-resolved spectra for a given orientation of the observer's line of sight. Finally, radiative reconnection should also be of potential considerable interest to experimental High-Energy-Density (HED) Physics, a new branch of modern physics that has emerged in recent years. One can anticipate rapid progress in HED reconnection studies facilitated by new experimental capabilities developed in the HED Physics community, e.g., made possible by powerful lasers (such as Omega EP and NIF) and Z-pinches (e.g., Magpie \citep{Lebedev_etal-2014, Suttle_etal-2014}). In fact, several HED reconnection experiments utilizing laser-produced plasmas with mega-gauss magnetic fields have already been reported (e.g., \citep{Nilson_etal-2006, Li_etal-2007, Nilson_etal-2008, Dong_etal-2012, Fox_etal-2011, Fox_etal-2012, Fiksel_etal-2014}). This review Chapter is organized as follows. Before embarking on our main discussion of the effects that radiation may exert on reconnection, in~\S~\ref{sec-passive-rad-signs} we first make some remarks about passive radiative signatures of reconnection (\S\S~\ref{subsec-passive-general}-\ref{subsec-passive-nonthermal}), including applications to hard X-ray bremsstrahlung emission in solar flares (\S~\ref{subsec-loop_top}) and low-frequency radio emission due to coherent plasma motions in reconnecting current sheets (\S~\ref{subsec-radio}). We then (in \S~\ref{sec-rad_reaction}) talk about the underlying physics of the radiation reaction force with the focus on a few astrophysically most relevant radiation mechanisms: synchrotron, curvature, and inverse-Compton. After this, we begin a systematic exposition of the different manifestations of the effects that the radiation reaction force can have on reconnection. We begin this program with the discussion of a quintessential kinetic effect of radiation: the limits that radiation reaction places on nonthermal particle acceleration in collisionless magnetic reconnection (\S~\ref{sec-particle_acceleration}); and we specifically touch upon two particularly important astrophysical contexts where this issue plays a central role: the Crab Nebula gamma-ray flares (\S~\ref{subsec-Crab-flares}) and coronae of accreting black holes (\S~\ref{subsec-BH_ADC}). Then, we move on to the fluid picture and discuss several fluid-level effects. 
The first category of such effects concerns the effect of the radiation reaction force on random thermal motions of the particles in the reconnection layer; it thus affects the layer's thermodynamics and can be described as radiative cooling (\S~\ref{sec-rad_cooling}); here, one has to distinguish the optically thin (\S~\ref{subsec-rad_cooling-thin}) and the optically thick regimes (\S~\ref{subsec-rad_cooling-thick} and~\ref{subsec-rad_pressure}). The second group of radiative effects on the fluid-level description of magnetic reconnection deals with the effect of the radiation reaction force on the plasma bulk flows and thus concerns the dynamics and electrodynamics of the process; it constitutes the subject of~\S~\ref{sec-rad_drag}. The main two processes included under this rubric are radiative resistivity (\S~\ref{subsec-rad_resistivity}) and radiative braking of the reconnection outflow (\S~\ref{subsec-rad_drag-outflow}). Finally, \S~\ref{sec-other} is devoted to a brief discussion of a few other, more exotic effects that take place in optically thick reconnection layers, such as radiation pressure, radiative viscosity, hyper-resistivity, and pair creation. We then summarize the paper and discuss critical open questions and outline the directions for future research in \S~\ref{sec-conclusions-outlook}. \section{Passive Radiative Signatures of Reconnection} \label{sec-passive-rad-signs} \subsection{General remarks} \label{subsec-passive-general} Although most of this review is devoted to a discussion of radiative magnetic reconnection, which, once again, is defined here as a situation where various aspects of the reconnection process are significantly affected by radiative energy losses, before we proceed with that discussion we first would like to discuss what one can call {\it passive} radiative signatures of reconnection. Thus, in this section we address the question of what a reconnection region {\it looks like}, in the literal sense of the word, even in the case where radiative back reaction on the reconnection process, discussed in the subsequent sections, is not important energetically or dynamically. Technically, this means that the radiative cooling time for the particles energized by the reconnection process is longer than the duration of the reconnection process itself, or at least longer than the characteristic time a typical particle spends inside the reconnection region (which could be taken as the Alfv\'en transit time along the layer). In contrast to the high-energy astrophysical reconnection discussed in the rest of this article, this non-radiative reconnection situation is the main subject of conventional reconnection research, aimed at traditional applications such as solar flares, magnetospheric substorms, tokamak sawtooth crashes, and dedicated laboratory reconnection experiments. The reasons why radiative signatures of reconnection have so far been neglected in these traditional contexts are that, first, the amount of light produced during these reconnection processes is very small and often not easily detectable with our current technology, and second, we have other, better diagnostic tools to study these reconnection processes, e.g., with direct measurements. This is especially the case for reconnection in the Earth magnetosphere, where direct in-situ measurements of various plasma properties with spacecraft are available (see, e.g., the chapters by Cassak \& Fuselier and by Petrukovich et al.
in this volume), and for dedicated laboratory experiments where one can use probe arrays (see the chapter by Yamada et al. in this volume). In contrast, in many astrophysical situations remote telescopic observations of radiation powered by a reconnection event provide our only diagnostic probe into the physical processes at play in a given system. In other words, we usually have no other tools to measure the plasma properties in remote astrophysical reconnection regions and therefore, in order to make any sense of the observations, it is imperative to build predictive theoretical models that can connect the underlying physics of reconnection with the resulting light that we then can, and do, observe at Earth. It is interesting to note that this is in fact also true for solar flares, but this case is complicated because the post-reconnection conversion of the particle energy into radiation is neither immediate nor straightforward: it involves interaction of the accelerated particle beams with the solar surface, chromospheric evaporation, etc. All these additional processes bring in extra modeling uncertainties and thus, to some degree, complicate the task of using the observed flare radiation to learn something about the underlying reconnection process and about the plasma properties before reconnection. In contrast, however, in many (although not all) astrophysical contexts reconnection happens in a free plasma flow far from any dense bodies, and in this case one can be reasonably certain that the observed radiation is in fact produced by the plasma that has been energized in a reconnection region. The most notable astrophysical examples of such a situation are astrophysical jets and winds, AGN radio-lobes, PWN, interstellar medium (ISM), and intra-cluster medium (ICM). The relative slowness of radiative cooling compared with the expected reconnection (and hence particle acceleration) time enables one to disentangle the particle acceleration and radiative cooling processes in this case. Then, a comparison of the observed flare spectrum (and its time evolution due to cooling) with the results of a rather straightforward spectral modeling calculation can yield unambiguous information about the energy spectrum of particles accelerated by the reconnection process. It is worthwhile to comment on what concrete radiative signatures of reconnection are regarded as being of interest. The answer depends on the specific astrophysical context and also on our observational capabilities. In general, it would be interesting to be able to calculate from first principles the actual spatially- and temporally resolved image of the reconnection layer (at different photon frequencies), that is, to produce a simulated picture or a movie of what a reconnection layer looks like. In many cases, however, the flaring region is spatially unresolved with our present technology and thus appears as a point-like source. At the same time, however, we often do have detailed spectral and temporal information. In this situation, one is interested in (time-resolved) photon energy spectra and/or (energy-resolved) light-curves. As an illustration, in the next subsection we discuss the recent numerical results on nonthermal particle acceleration in relativistic pair-plasma reconnection and the corresponding radiative (synchrotron) spectral signatures.
These expected radiation spectra produced by reconnection events can potentially be compared with real observations of various astrophysical systems, as was done, for example in a series of studies of reconnection-powered synchrotron flares in accreting black holes (both microquasars and AGN) by E.~de Gouveia dal Pino and her collaborators \citep{deGouveia_etal-2005, deGouveia_etal-2010, Khiali_etal-2015a, Khiali_etal-2015b, Kadowaki_etal-2015, Singh_etal-2015}. \subsection{Radiative Signatures of Reconnection-Powered Nonthermal Particle Acceleration} \label{subsec-passive-nonthermal} In many high-energy astrophysical sources the observed radiation spectra and thus the energy distributions of the radiating particles are often non-thermal, described by power laws. For this reason, the question of nonthermal particle acceleration is a key problem in modern plasma astrophysics and reconnection is widely seen as one of key candidate mechanisms (see, e.g., \citep{Hoshino_Lyubarsky-2012} for review). Particle acceleration in reconnection is usually addressed by means of PIC simulations, together with analytical theory. A substantial number of non-radiative PIC studies have attacked this problem in the context of relativistic reconnection in pair plasmas \citep{Zenitani_Hoshino-2001, Zenitani_Hoshino-2005, Zenitani_Hoshino-2007, Zenitani_Hoshino-2008, Jaroschek_etal-2004a, Lyubarsky_Liverts-2008, Liu_etal-2011, Bessho_Bhattacharjee-2012, Kagan_etal-2013, Cerutti_etal-2012b, Sironi_Spitkovsky-2011, Sironi_Spitkovsky-2014, Guo_etal-2014, Werner_etal-2014, Nalewajko_etal-2015}. In addition, \citep{Werner_etal-2013} and \citep{Melzani_etal-2014b} have explored particle acceleration by relativistic reconnection in electron-ion plasmas, and \citep{Jaroschek_Hoshino-2009} and \citep{Cerutti_etal-2013, Cerutti_etal-2014a, Cerutti_etal-2014b} have investigated the case of relativistic pair plasma reconnection with synchrotron radiation reaction. It is very encouraging to see that continuing improvement in the available computer power has allowed researchers to tackle this problem in a reliable and systematic way with the adequate and necessary dynamic range (e.g., \citep{Sironi_Spitkovsky-2014, Guo_etal-2014, Melzani_etal-2014b, Werner_etal-2014}). These studies have shown that relativistic reconnection does indeed efficiently generate hard power-law particle distributions, $f(\gamma) \sim \gamma^{-\alpha}$. The power-law index~$\alpha$ in general varies with the system size $L$ and the upstream magnetization $\sigma\equiv B_0^2/4 \pi h$, where $h$ is the relativistic enthalpy density. For large enough pair-plasma systems, $\alpha(L,\sigma)$ seems to approach an asymptotic value $\alpha_*(\sigma)$ in the limit $L\rightarrow \infty$. Viewed as a function of~$\sigma$, this value can be larger than $\sim$2 for modestly relativistic cases ($\sigma \sim 1$) but then monotonically decreases with $\sigma$ and asymptotically approaches a finite value $\alpha_* \simeq 1-1.2$ in the ultra-relativistic limit $\sigma \gg 1$ (e.g., \citep{Sironi_Spitkovsky-2014, Guo_etal-2014, Werner_etal-2014}). This value is consistent with analytical predictions by \citep{Larrabee_etal-2003} and with the results of several previous numerical studies, e.g., \citep{Jaroschek_etal-2004a, Lyubarsky_Liverts-2008}. 
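A simple but useful corollary of such hard slopes (a one-line estimate, independent of any particular simulation) is that the energy budget is controlled by the upper end of the distribution: for $f(\gamma) \sim \gamma^{-\alpha}$ with $1 < \alpha < 2$, the total particle energy scales as \begin{equation} \int^{\gamma_{\rm max}} \gamma \, f(\gamma)\, d\gamma \, \propto \, \gamma_{\rm max}^{2-\alpha} \, , \end{equation} i.e., it is dominated by particles near the high-energy cutoff. This is one reason why the location of this cutoff, discussed below, is of direct physical importance.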
One of the most important new results is that relativistic non-thermal particle spectra seem to be produced at late times with high efficiency in both 2D and 3D simulations and both with and without a guide magnetic field \citep{Sironi_Spitkovsky-2014, Guo_etal-2014}. This is in contradiction with the previous picture proposed by Zenitani \& Hoshino \citep{Zenitani_Hoshino-2007, Zenitani_Hoshino-2008}, who suggested that a strong guide field is essential for nonthermal particle acceleration in 3D because it can suppress the relativistic drift-kink instability that leads to strong magnetic dissipation but inhibits nonthermal particle acceleration. Furthermore, as was shown by \citep{Werner_etal-2014} in 2D PIC simulations without a guide field starting with a relatively cold initial background plasma, the resulting final nonthermal power law is truncated at high energies by a combination of two cutoffs: \begin{equation} f(\gamma) \sim \gamma^{-\alpha} \, e^{-\gamma/\gamma_{c1}} e^{-(\gamma/\gamma_{c2})^2}, \end{equation} where the exponential cutoff $\gamma_{c1}$ and the super-exponential cutoff $\gamma_{c2}$ are well fit as functions of $L$ and~$\sigma$ by \begin{equation} \gamma_{c1} \simeq 4 \, \sigma \, , \qquad \gamma_{c2} \simeq 0.1 \, L/\rho_0 \, . \end{equation} Here $\rho_0 \equiv m_e c^2 /e B_0$ is the nominal Larmor radius and $\sigma \equiv B_0^2/4 \pi n_b m_e c^2$ is the "cold" background plasma magnetization, with $B_0$ being the reconnecting magnetic field strength and $n_b$ being the total (electrons and positrons) background plasma density, both taken upstream of the layer. Also note that the length-scale $L$ in the above expression is the size of the computational domain (with aspect ratio $L_x/L_y = 1$) and is about twice the actual length of the reconnection layer. The first of the above two cutoffs can be understood as arising from the typical acceleration time $\ell/c$ that a given particle spends in an elementary marginally stable current layer of width $\delta \sim \rho(\bar{\gamma}) = \bar{\gamma} \rho_0 \sim \sigma \rho_0$, where $\bar{\gamma} \sim 0.3 \sigma$ is the average dissipated energy per particle, and of length $\ell \sim 50-100\, \delta$ dictated by the stability condition for secondary tearing. The second cutoff probably arises from the finite time the particle spends in the entire layer of system-size length~$L$. In practice, it is the smaller of the two cutoffs that matters for limiting the extent of the power law \citep{Werner_etal-2014}. Their ratio can be expressed as \begin{equation} {\gamma_{c2}\over {\gamma_{c1}}} \simeq {1\over 40} \, {L\over{\sigma\rho_0}} = {1\over 40} \, \cdot {{3 B_{\rm cl}}\over{2 B_0} }\, \tau_T \, , \end{equation} where $\tau_T \equiv n_b L \sigma_T$ is the Thomson optical depth along the layer [here, $\sigma_T = (8\pi/3)\, r_e^2$ is the Thomson cross section, $r_e = e^2/m_e c^2 \simeq 2.8 \times 10^{-13}\, {\rm cm}$ is the classical electron radius], and $B_{\rm cl} \equiv e/r_e^2 = m_e^2 c^4/e^3 \simeq 6 \times 10^{15} \, {\rm G}$ is the critical classical magnetic field strength. 
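To attach some rough numbers to these fits (chosen here purely for orientation and not tied to any particular system or simulation), take $\sigma = 10$ and $L/\rho_0 = 10^3$. Then \begin{equation} \gamma_{c1} \simeq 4\, \sigma = 40 \, , \qquad \gamma_{c2} \simeq 0.1\, L/\rho_0 = 100 \, , \end{equation} so that $L/\sigma \rho_0 = 100$ and it is the exponential cutoff $\gamma_{c1}$ that terminates the power law.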
The fact that there are two cutoffs allows one to define and distinguish between two different physical regimes: (1) the small-system regime ($L/\sigma \rho_0 \lesssim 40$), in which $\gamma_{c2} < \gamma_{c1}$ and so $\gamma_{c2} \propto L$ determines where the power law ends, and (2) the plasmoid-dominated, large-system regime ($L/\sigma \rho_0 \gtrsim 40$), in which $\gamma_{c1} < \gamma_{c2}$ and so $\gamma_{c1} \propto \sigma$ limits the high-energy extent of the power law, independent of~$L$. Next, to the extent that we are interested in potentially observable radiation signatures of reconnection, it is interesting to ask what radiation spectra are emitted by the particle distributions described above. If relativistic reconnection indeed produces a power law energy spectrum of electrons with an index $\alpha \simeq 1.2$ in the ultra-relativistic, high-$\sigma$ regime, then the corresponding synchrotron photon spectrum {\it immediately after the reconnection event} will be a nearly flat power law with a spectral index of $(\alpha-1)/2 \sim 0.1$, which in practice would be indistinguishable from a flat spectrum. In terms of the photon-number power-law index $\Gamma_{\rm ph}$, defined by $n_{\rm ph} (\epsilon_{\rm ph}) \sim \epsilon_{\rm ph}^{-\Gamma_{\rm ph}}$, this corresponds to $\Gamma_{\rm ph} = (\alpha+1)/2 \simeq 1.1$. In large systems ($L/\sigma \rho_0 \gtrsim 40$), the power-law synchrotron spectrum is then expected to extend up to the characteristic photon energy of \begin{eqnarray} \epsilon_{\rm sync,\, max} &=& {3\over 2} \, \hbar \Omega_{c0}\, \gamma_{c1}^2 = {3\over 2} \, \hbar {e B_0\over{m_e c}}\, \gamma_{c1}^2 = {3\over 2} \, \alpha_{\rm fs}^{-1}\, m_e c^2\, {r_e\over{\rho_0}} \, \gamma_{c1}^2 \nonumber \\ & = & {3\over 2} \, \alpha_{\rm fs}^{-1}\, m_e c^2\, {B_0\over{B_{\rm cl}}} \, \gamma_{c1}^2 \simeq 100\, {\rm MeV} \, {B_0\over{B_{\rm cl}}} \, \gamma_{c1}^2 \, , \end{eqnarray} where $\alpha_{\rm fs} = e^2/\hbar c \simeq 1/137$ is the fine structure constant. Substituting our expression $\gamma_{c1} \simeq 4 \sigma$, we find \begin{equation} \epsilon_{\rm sync,\, max} \simeq 24\, \alpha_{\rm fs}^{-1}\, m_e c^2\, {B_0\over{B_{\rm cl}}} \, \sigma^2 \, . \end{equation} It is interesting to note that this limit grows very rapidly with the magnetic field, namely as $B_0^5$. On a longer time scale following a reconnection event, subsequent cooling evolution will, of course, soften the emission spectrum since the highest energy particles have shorter radiation cooling times: \begin{eqnarray} t_{\rm cool}^{\rm sync} &=& {{\gamma m_e c^2}\over{P_{\rm rad}}} = {{\gamma m_e c^2}\over{(4/3)\, \sigma_T c\, \gamma^2 B^2/8\pi}} = {9\over 4}\, (\gamma\, \Omega_{c0})^{-1} \, {\rho_0\over{r_e}} \nonumber \\ &= & {9\over 4}\, (\gamma\, \Omega_{c0})^{-1} \, {B_{\rm cl}\over{B}} \simeq 7.7 \times 10^8 \, {\rm s}\, \gamma^{-1} \, \biggl( {B\over{1\, {\rm G}}} \biggr)^{-2} \simeq 24\, {\rm yr}\, \gamma^{-1} \, \biggl( {B\over{1\, {\rm G}}} \biggr)^{-2} \, . \end{eqnarray} Here, $\Omega_{c0} \equiv e B/m_e c \simeq 1.76 \times 10^7 \, (B/{1\,{\rm G}})\, {\rm rad/s}$ is the classical electron cyclotron frequency. This results in a time-evolving cooling break $\gamma_{\rm br}$, set by $t = t_{\rm cool}(\gamma_{\rm br})$, above which the particle energy spectrum is cut off sharply.
Next, in the case of a complex system (e.g., a corona) with a large number of independent reconnection events (flares) continuously injecting power-law populations of energetic relativistic electrons, each with an initial power-law index of $\alpha_{\rm inj} = 1.2$, one expects the interplay of this continuous injection and synchrotron radiative cooling to result in a steady-state electron distribution with a power-law index $\alpha_{\rm ss} = 1+\alpha_{\rm inj} \simeq 2.2$. This corresponds to a photon number index of $\Gamma_{\rm ph} = (1+\alpha_{\rm ss})/2 =1.6$. Similar considerations apply in the case of IC emission, resulting in the same photon index of $\Gamma_{\rm ph} =1.6$ for the IC photons, which is intriguingly close to the hard X-ray photon index of 1.7 often observed in the low-hard state of~XRBs (e.g., \citep{Remillard_McClintock-2006}). Finally, when thinking about possible observable radiative signatures of relativistic reconnection at highest photon energies (hence produced by the highest-energy accelerated particles), one should take into account a possible anisotropy of the accelerated particle population. As was shown by \citep{Cerutti_etal-2012b}, the highest energy particles accelerated in a reconnection layer may be focused in a few tight beams that sweep from side to side, while staying mostly in the current sheet plane. This {\it kinetic beaming} effect is strongly energy-dependent, with the effective solid angle $\Omega$ of the particle population decreasing from $\Omega/4\pi \sim 1$ for low- and modest-energy particles to as small as $\Omega/4\pi \sim 10^{-2}$ for the highest-energy ones. This effect potentially has important implications for understanding radiative signatures of reconnection and, especially, for connecting theoretical models with observations since it suggests that the usual isotropic emission assumption may lead to large errors in evaluating the energetic requirements implied by the observed radiation flux. Kinetic beaming is also important for correctly interpreting the very rapid emission variability frequently observed in many relativistic astrophysical sources, such as the Crab PWN and blazar and GRB jets. This is because the radiative flux as seen by an external observer is greatly enhanced (relative to the isotropically-averaged total flux) when one of the beams intersects the observer's line of sight. As a result, the observed signal is strongly intermittent, leading to an enhanced rapid and energy-dependent variability. \subsection{Loop-top hard X-ray emission in solar flares} \label{subsec-loop_top} An important example of high-energy radiation produced promptly by the plasma energized in a reconnection event is the hard X-ray (up to about 100 keV) emission at the top of post-reconnected coronal magnetic loops in solar flares. In contrast to the hard X-ray emission produced at the footpoints of the reconnected loops on the solar surface, which is traditionally understood as bremsstrahlung radiation emitted by the electrons accelerated in the coronal reconnection region as they strike a dense cold target (the solar photosphere), the loop-top emission involves only those plasma particles that have gone through, and have been accelerated in, the reconnection region, without the agency of any other plasma. This radiation is also believed to be optically-thin nonthermal bremsstrahlung corresponding to a power-law distribution of electrons extending up to relativistic (MeV) energies (Krucker et al. 
2010), although it can also be modeled as a kappa-distribution \citep{Oka_etal-2013, Oka_etal-2015}. The observed X-ray radiative power is so high that it implies an extremely high efficiency of non-thermal particle acceleration, with the number density of energetic particles populating the power-law tail being comparable to the expected density of ambient thermal particles. This challenges traditional flare emission models and strongly suggests that a large fraction of the ambient plasma particles in the flare region are accelerated into the power-law tail (e.g., \citep{Krucker_etal-2010, Oka_etal-2013, Oka_etal-2015}). However, these challenges may be partially alleviated by noticing that the plasmoid-dominated reconnection regime expected in solar flares naturally leads to strong inhomogeneity of the energized plasma, concentrating it into relatively compact, dense plasmoid cores. Since bremsstrahlung is a collisional process, with radiated power per unit volume proportional to the square of the plasma density, this inhomogeneity can greatly enhance the overall emission power. This effect can be easily tested in PIC simulations and perhaps also in laboratory laser-plasma studies of reconnection. It is interesting to try to generalize Werner et al.'s (2014) results for the high-energy nonthermal cutoff of particles accelerated by a relativistic pair-plasma reconnection process described in the preceding section to the case of non-relativistic reconnection in electron-ion plasmas and to apply them to solar flares. Since the flaring region size in solar flares ($10^9-10^{10}$~cm) is many orders of magnitude larger than the ion Larmor radius, one is squarely in the large-system regime. Therefore, one expects that the relevant cutoff is $\epsilon_{c1}$, set by the acceleration in elementary (marginally stable to secondary tearing) current layers, $\epsilon_{c1} \simeq e E_{\rm rec} \ell$. We can estimate the characteristic length of these elementary layers as $\ell \sim 100\, \delta \sim 100\, \rho_{i,\rm layer}$, where the layer thickness $\delta$ is taken to be comparable to the Larmor radius of the ions inside the layer~$\rho_{i,\rm layer}$. Taking for illustration $B_0 = 100\, {\rm G}$ and the plasma density in the layer $n_e = 10^{10}\, {\rm cm^{-3}}$, and hence $V_A \simeq 2\times 10^3 \, {\rm km/s} \simeq 0.7 \times 10^{-2}\, c$, one can estimate (e.g., from the pressure balance across the layer) the plasma temperature in the layer as $k_B T = B_0^2/(16 \pi n_e) \simeq 12\, {\rm keV}$. This corresponds to an ion Larmor radius, and hence the elementary layer's thickness, of $\delta\sim \rho_{i,\rm layer} \simeq 1\, {\rm m}$, and thus an elementary layer length of $\ell \sim 100\, {\rm m}$. Next, since we are dealing with non-relativistic collisionless reconnection, the reconnection electric field can be estimated as $E_{\rm rec} \simeq 0.1\, B_0 V_A/c \simeq 0.07\, {\rm G}$. Therefore, the expected high-energy cutoff should be $\epsilon_{c1} \simeq e E_{\rm rec} \ell \sim 200\, {\rm keV}$, corresponding to mildly relativistic electrons. \subsection{Coherent Radio Emission} \label{subsec-radio} In addition to the production of high-energy radiation through incoherent mechanisms such as synchrotron, IC, and bremsstrahlung radiation, another important radiative aspect of reconnection is the possible generation of coherent low-frequency (e.g., radio or microwave) emission associated with collective plasma motions.
\subsection{Coherent Radio Emission} \label{subsec-radio} In addition to the production of high-energy radiation through incoherent mechanisms such as synchrotron, IC, and bremsstrahlung radiation, another important radiative aspect of reconnection is the possible generation of coherent low-frequency (e.g., radio or microwave) emission associated with collective plasma motions. This emission may be driven by small-scale plasma motions excited inside thin reconnection current layers by various plasma instabilities, such as the secondary tearing, drift-kink, lower-hybrid, ion-acoustic, and/or Buneman instabilities. The nonlinear development of these instabilities may lead to the production of a broad spectrum of fluctuations, e.g., having the form of plasmoids and flux ropes of different sizes in the case of the secondary tearing instability. These plasmoids exhibit complex dynamics marked by their interaction with each other through mergers. It is then plausible that some of these fluctuations will eventually be converted into low-frequency electromagnetic waves that may escape the system and be observed at Earth. Although the efficiency of conversion of reconnection-released energy into such low-frequency emission can be low, this emission may nevertheless provide an important additional diagnostic window into the reconnection process. The typical frequencies of this radiation are expected to be low, on the scale of a fraction of the plasma frequency and lower, corresponding to radio emission in systems as diverse as the solar corona \citep{Shklovsky-1947} and the Crab pulsar magnetosphere~\citep{Uzdensky_Spitkovsky-2014}. In coronae of accreting black holes in XRBs, however, the plasma density (and hence the plasma frequency) is much higher and hence the corresponding emission probably falls into the infra-red or even optical range \citep{Goodman_Uzdensky-2008}. \section{Radiation Reaction Force} \label{sec-rad_reaction} The main reason why radiation can sometimes be important in various plasma processes, including reconnection, is that it affects the motion of the plasma particles and thus influences the basic dynamics and energetics of the process in question. The primary effect of radiation can be described by the radiation-reaction drag force ${\bf f}_{\rm rad}$ experienced by individual particles and, associated with this force, the energy loss term $P_{\rm rad}$ in the particle's energy equation. The relativistic 4-force representing radiation reaction on an emitting particle, called the Abraham-Lorentz-Dirac (ALD) force, can be written as \citep{Jackson-1975}: \begin{equation} F_{\rm rad}^\mu = {2e^2\over{3c^3}} \, {{d^2 u^\mu}\over{d\tau^2}} - {P_{\rm rad}\over{c^2}}\, u^\mu \, , \label{eq-rad-4-force} \end{equation} where $u^\mu$ is the particle's 4-velocity, $\tau$ is the particle's proper time (with $d\tau = \gamma^{-1} dt$), and where the radiative power $P_{\rm rad}$ is given by the Larmor formula, which reads, in relativistically covariant notation \citep{Jackson-1975}: \begin{equation} P_{\rm rad} = {2\over 3} \, {e^2 \over{m^2 c^3}} \, {{d p_\mu}\over{d\tau}} {{d p^\mu}\over{d\tau}} \, . \end{equation} Here, $p^\mu = m u^\mu = m c (\gamma, \gamma \pmb{\beta})$ is the four-momentum of the particle moving with a 3-velocity ${\bf v} = \pmb{\beta} c$. This expression can be recast in terms of the parallel ($a_\parallel$) and perpendicular ($a_\perp$) components (with respect to the particle's direction of motion) of the 3-acceleration as \citep{RL-1979}: \begin{equation} P_{\rm rad} = {2\over 3} \, {e^2 \over{c^3}} \gamma^4\, (a_\perp^2 + \gamma^2 a_\parallel^2) \, .
\end{equation} Furthermore, for a particle moving non-relativistically, this formula reduces to the familiar non-relativistic Larmor formula: \begin{equation} P_{\rm rad} = {2\over 3} \, {e^2 \over{m^2 c^3}}\, |\dot{\bf p}|^2 = {2\over 3} \, {e^2 a^2\over{c^3}} \, , \end{equation} where $a$ is the particle's 3-acceleration. Returning to our discussion of radiation reaction, in the more familiar 3D language the radiation reaction enters the relativistic equation of motion of a charged particle as an additional friction force~${\bf f}_{\rm rad}$: \begin{equation} d{\bf p}/{dt} = q \, ({\bf E} + [{\bf v\times B}]/c) + {\bf f}_{\rm rad} \, , \end{equation} where ${\bf p} = m \gamma {\bf v}$ is the particle's relativistic 3-momentum. The radiation reaction 3-force is related to the ALD 4-force via $F_{\rm rad}^\mu = (\gamma \pmb{\beta} \cdot {\bf f}_{\rm rad}, \gamma {\bf f}_{\rm rad})$. The first term in expression~(\ref{eq-rad-4-force}) for $F_{\rm rad}^\mu$, called the Schott term, is quite peculiar: it involves the second time derivative of the 4-velocity and hence the third-order time derivative of position, which means that the equation of motion that includes this term becomes a third-order differential equation in time. This term dominates in the non-relativistic case ($|{\bf v}| \ll c$, $\gamma \rightarrow 1$), in which the radiation reaction force reduces to what is known as the Abraham-Lorentz force (e.g., \citep{Landau_Lifshitz-1971}): \begin{equation} {\bf f}_{\rm rad} (\beta \ll 1) \approx {2e^2\over{3c^3}}\, {{d^2 {\bf v}}\over{dt^2}} \, . \end{equation} In contrast, in the case of ultra-relativistic motion (of main interest to this review) the Schott term can be shown to be small. Ignoring it, the radiation reaction 3-force on ultra-relativistic particles can be expressed in terms of the radiative power simply as \begin{equation} {\bf f}_{\rm rad} (\gamma \gg 1) \approx -\, {{P_{\rm rad}}\over{c}} \, \pmb{\beta} \, . \label{eq-f_rad-general} \end{equation} That is, the radiation reaction force in this case indeed plays the role of a friction force, directed opposite to the particle's direction of motion. Furthermore, the magnitude of the radiation reaction force for an ultra-relativistic particle is then simply $|{\bf f}_{\rm rad}| \approx P_{\rm rad}/c$. This is consistent with the notion that the rate of work done by the radiation reaction force, ${\bf f}_{\rm rad} \cdot {\bf v} = - P_{\rm rad}\, v^2/c^2$, becomes equal to the particle's radiative energy loss rate $- P_{\rm rad}$ in the limit $v\rightarrow c$. We shall now apply these general expressions to several specific astrophysically-important radiative processes corresponding to different types of accelerated particle motion that enter the above formulae for~$P_{\rm rad}$. In astrophysical plasmas acceleration is usually due to the particle's motion in an external electromagnetic field.
The most important types of accelerated motion, and the corresponding radiation mechanisms, are: (1) cyclotron gyro-motion in a magnetic field and, correspondingly, the cyclotron/synchrotron emission; (2) parallel motion along a curved magnetic field line and curvature emission; (3) oscillatory motion of a charged particle in the electromagnetic field of an incident electromagnetic wave, resulting in Compton scattering (usually referred to as inverse-Compton (IC) scattering if the energy of incident photons is smaller than that of the scattering particles); in astrophysical studies focussed on production of high-energy radiation by energetic electrons scattering soft seed photons, one sometimes treats IC scattering effectively as an emission process; (4) motion of one charged particle in the electric field of another in a close binary collision and, correspondingly, bremsstrahlung (free-free) radiation emission. We shall now discuss the radiative power and the radiation reaction force for each one of these mechanisms (except for bremsstrahlung) in more detail. {\bf (1) Synchrotron Radiation.} First, for the cyclotron motion of a particle with a 4-velocity $(\gamma, \pmb{\beta} \gamma)$ in a general electro-magnetic field, the radiative power is~(\citep{Landau_Lifshitz-1971}) \begin{equation} P_{\rm rad} \simeq {1\over{4\pi}}\, \sigma_T c \gamma^2 \, \biggl( ({\bf E} + [\pmb{\beta} \times {\bf B}])^2 - (\pmb{\beta} \cdot {\bf E})^2 \biggr) \, . \end{equation} This expression is actually only approximate: it is based on a perturbative approach, keeping only the acceleration due to the usual Lorentz 4-force $-(e/c) u_\nu F^{\mu\nu}$ in the Larmor formula, while neglecting the effect of the radiation reaction force itself. However, it is valid in most realistic astrophysical situations. In the frame of reference in which the electric field vanishes (the so-called Teller-Hoffmann frame), the radiative power of a charged particle spiraling in a magnetic field is \begin{equation} P_{\rm rad} = P_{\rm synch} = 2 \sigma_T c \, \beta^2 \gamma^2 \,{{B_\perp^2}\over{8\pi}} = {2\over 3} r_e^2 c \beta^2 \gamma^2 \,{B^2} \, \sin^2 \alpha \, , \end{equation} where $\alpha$ is the pitch angle of the particle relative to the direction of the magnetic field and $B_\perp \equiv B \sin \alpha$ is the magnetic field component perpendicular to the particle's velocity. This radiation is called synchrotron radiation in the case of ultra-relativistic particles, cyclotron radiation in the case of non-relativistic particles, and gyro-synchrotron radiation for the intermediate case of moderately relativistic particles. It is important to note that cyclotron gyration is perpendicular to the magnetic field and hence only the perpendicular velocity of the particle is involved in this radiation. Even a very energetic particle in a strong magnetic field produces no synchrotron radiation if it moves strictly parallel to the field (although it may still produce the so-called curvature radiation if the field lines are curved, see below). In the ultra-relativistic case $\gamma \gg 1$, the synchrotron radiative power (in the Teller-Hoffmann frame) becomes \begin{equation} P_{\rm synch}(\gamma\gg 1) = 2 \sigma_T c \gamma^2 \,{B^2\over{8\pi}}\, \sin^2 \alpha \, , \end{equation} and hence the corresponding synchrotron radiation reaction force is \begin{equation} {\bf f}_{\rm rad}^{\rm synch} = - {{\bf v}\over c^2} P_{\rm synch} = - 2 \, \pmb{\beta}\, \sigma_T \, \gamma^2 \,{B^2\over{8\pi}}\, \sin^2 \alpha \, . 
\end{equation} It is worth noting that $P_{\rm synch}$ is proportional to the square of the particle energy and hence the radiative cooling time, $t_{\rm cool} = \gamma m_e c^2/P_{\rm synch} \sim \gamma^{-1}$, is inversely proportional to the particle energy. Correspondingly, ${\bf f}_{\rm rad}^{\rm synch}$ and synchrotron energy losses are especially important for the highest-energy relativistic particles. It is also important to note that the above simple expressions for $P_{\rm synch}$ and ${\bf f}_{\rm rad}^{\rm synch}$ are valid only in the Teller-Hoffmann frame, where the electric field vanishes. This reference frame corresponds to the ${\bf E} \times {\bf B}$ drift, ${\bf v}_E = c\, [{\bf E\times B}]/B^2$. An important consequence is that synchrotron emission arises only due to the perpendicular (to ${\bf B}$) motion of particles relative to the ${\bf E\times B}$ drift. In particular, this means that a cold ideal-MHD plasma flow with the ${\bf E\times B}$ velocity does not produce synchrotron emission, even if it is highly relativistic as in the case of a pulsar wind. Also, the Teller-Hoffmann frame exists only if the electric field is weaker than the magnetic field and has no component parallel to~${\bf B}$, as can be seen by examining two electromagnetic-field Lorentz invariants, ${\bf E\cdot B}$ and $E^2-B^2$. If these conditions are not satisfied, then the radiative power and hence the radiation reaction force need to be found from more general expressions for the ALD force. We finally note that the above formula for synchrotron radiation is valid only if the magnetic field remains smooth on the length scale of radiation formation, which is about $\rho_L /\gamma = \rho_0$. This requirement is usually satisfied in most astrophysical cases, but there are situations where it is violated, namely, when the nominal Larmor radius $\rho_0$ is larger than the magnetic field reversal scale~$\lambda_B$. In this case, one has the so-called jitter radiation \citep{Medvedev-2000} instead of synchrotron, which may be relevant for GRB prompt emission. However, although the jitter radiation spectrum differs substantially from that of the classical synchrotron radiation, it turns out that the overall radiative power and hence the radiation reaction force are the same in the two cases. {\bf (2) Curvature radiation.} In some astrophysical applications, especially involving ultra-relativistic particles moving in a strong magnetic field, e.g., in a pulsar magnetosphere, the particles quickly lose their perpendicular (cyclotron) energy by synchrotron radiation and fall into their lowest Landau level. Then their subsequent motion becomes essentially one-dimensional (1D), parallel to the magnetic field, and is adequately described as the motion of beads on a wire or train cars running along the rails \citep{Sturrock-1971}. However, this does not mean that the motion becomes completely trivial or that radiative effects are not important. In particular, if the magnetic field lines are not straight, the particles still experience centripetal acceleration as they move along the curved field lines, and hence can still radiate according to the Larmor formula \citep{Shklovsky-1960}.
For relativistic motion along curved magnetic field lines this radiation is called curvature radiation and its radiative energy loss rate is given by \citep{Shklovsky-1960, Sturrock-1971, Chugunov_etal-1975, Zheleznyakov-1977}: \begin{equation} P_{\rm rad} = P_{\rm curv} = {2\over 3} \, {c e^2\over{R_c^2}}\, \gamma^4 \, , \end{equation} where $R_c$ is the field lines' radius of curvature. Correspondingly, ultra-relativistic particles experience a radiative reaction drag force: \begin{equation} {\bf f}_{\rm rad}^{\rm curv} = -\, {2\over 3} \, {e^2\over{R_c^2}}\, \gamma^4 \, \pmb{\beta}. \end{equation} {\bf (3) Inverse-Compton radiation.} In the case of Compton scattering in an isotropic radiation field, the radiative power $P_{\rm rad}$ entering the above expressions (\ref{eq-rad-4-force}) and (\ref{eq-f_rad-general}) for the radiation reaction force for relativistic electrons is given by (e.g., \citep{Blumenthal_Gould-1970, Pozdnyakov_etal-1983}): \begin{equation} P_{\rm rad} = P_{\rm IC} = (4/3)\, \sigma c \, U_{\rm rad} \, \gamma^2 \beta^2 \approx (4/3)\, \sigma c \, U_{\rm rad} \, \gamma^2 \, , \label{eq-Prad-IC} \end{equation} corresponding to the radiation reaction force \begin{equation} {\bf f}_{\rm rad}^{\rm IC} = - {\bf v}\, P_{\rm rad}/c^2 = - \,(4/3)\, \sigma \gamma^2 \, U_{\rm rad} \, {\bf v}/c \, , \label{eq-f_rad-IC} \end{equation} where $U_{\rm rad}$ is the radiation energy density and $\sigma$ is the applicable scattering cross-section. In most astrophysical applications the scattering is in the so-called Thomson regime, in which the seed photon's energy in the electron's rest frame, $\epsilon_{\rm ph}' \sim \gamma \epsilon_{\rm ph,\ seed}$, is less than $m_e c^2$; then one can use the simple energy-independent Thomson cross-section, $\sigma = \sigma_T$. In the opposite case, however, one has to use the more general quantum-mechanical Klein-Nishina expression for the cross-section: \begin{equation} \sigma_{\rm KN}(x) = {3\over 4}\, \sigma_T\, \biggl[ {{1+x}\over{x^2}} \, \biggl( {{2(1+x)}\over{1+2x}} - {{\ln(1+2x)}\over{x}} \biggr) + {{\ln(1+2x)}\over{2x}} - {{1+3x}\over{(1+2x)^2}} \biggr] \, , \end{equation} where $x \equiv \epsilon_{\rm ph}' / m_e c^2$. In the ultra-relativistic limit $x \gg 1$, this expression can be approximated as \begin{equation} \sigma_{\rm KN}(x \gg 1) \approx {3\over 8}\, \sigma_T\, {{\ln 2x + 1/2}\over x} \, . \end{equation} Since the radiative reaction force on relativistic particles due to both the synchrotron and inverse-Compton (in the Thomson regime) mechanisms scales as the square of the particle energy $\epsilon=\gamma m c^2$, the corresponding radiative cooling time, $\tau_{\rm rad} = \epsilon/P_{\rm rad}$, scales inversely with the energy. For curvature radiation the effect is even stronger since $P_{\rm rad}^{\rm curv} \sim \gamma^4$ and hence $\tau_{\rm rad}^{\rm curv} \sim \gamma^{-3}$. This means that, for each of these processes, radiative losses affect the most energetic particles most strongly. What this implies is that radiative energy losses not only lead to the overall cooling of the plasma but also affect the shape of the particle distribution function. In particular, radiative losses may result in an effective upper energy limit on nonthermal particle acceleration, which may have very important observational consequences, as we discuss in the next section.
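The formulae collected in this section are straightforward to verify numerically. The short Python sketch below (an illustration only, not part of any published calculation) checks that the covariant Larmor formula, written in terms of $a_\perp$, reproduces the synchrotron power $(2/3)\, r_e^2 c\, \beta^2 \gamma^2 B^2 \sin^2\alpha$ for pure gyromotion, and that the Klein-Nishina cross-section quoted above tends to $\sigma_T$ for $x \ll 1$ and to its stated asymptotic form for $x \gg 1$:
\begin{verbatim}
# Numerical cross-checks of the radiative-power and cross-section formulae (cgs units).
from math import pi, sqrt, log, sin

e, m_e, c = 4.803e-10, 9.109e-28, 2.998e10
r_e   = e**2/(m_e*c**2)          # classical electron radius
sig_T = 8*pi/3 * r_e**2          # Thomson cross-section

# (1) Relativistic Larmor formula vs. the synchrotron formula for a gyrating electron.
gamma, B, alpha = 1.0e3, 1.0, pi/3               # illustrative values
beta   = sqrt(1 - 1/gamma**2)
a_perp = e*B*beta*c*sin(alpha)/(gamma*m_e*c)     # gyro-acceleration (a_par = 0)
P_larmor = (2/3)*e**2/c**3 * gamma**4 * a_perp**2
P_synch  = (2/3)*r_e**2*c * beta**2*gamma**2 * B**2*sin(alpha)**2
print(P_larmor/P_synch)                          # -> 1.0

# (2) Klein-Nishina cross-section and its limiting forms.
def sigma_KN(x):
    return 0.75*sig_T*((1+x)/x**2*(2*(1+x)/(1+2*x) - log(1+2*x)/x)
                       + log(1+2*x)/(2*x) - (1+3*x)/(1+2*x)**2)

print(sigma_KN(1e-3)/sig_T)                          # -> ~1   (Thomson limit)
x = 1e3
print(sigma_KN(x)/(0.375*sig_T*(log(2*x)+0.5)/x))    # -> ~1   (x >> 1 limit)
\end{verbatim}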
\section{Radiation Effects on the Kinetic Picture of Magnetic Reconnection: Limiting Nonthermal Particle Acceleration} \label{sec-particle_acceleration} Nonthermal particle acceleration, the hallmark of which is usually considered to be the production of (truncated) power-law particle energy distributions, is an important and ubiquitous phenomenon in collisionless space-, solar-, and astrophysical plasmas. Among plasma-physical processes commonly believed to be responsible for nonthermal particle acceleration, the most popular are collisionless shocks, magnetic reconnection, and MHD turbulence. Whatever the mechanism is, however, a particle's energy can be increased only by the work done by the electric field, since the Lorentz force due to the magnetic field is perpendicular to the particle's direction of motion. It therefore follows that, in the presence of radiative losses, the maximum Lorentz factor $\gamma_{\rm rad}$ that a charged particle accelerated by an electric field $E$ can attain is determined by the balance between the accelerating electric force $eE$ and the radiation reaction force $f_{\rm rad}(\gamma)$, i.e., $f_{\rm rad}(\gamma_{\rm rad})= eE$. Importantly, the electric field in most astrophysical applications is tied to the magnetic field and is typically of order the motional electric field, $E\lesssim v B/c = \beta B$, where $v\equiv \beta c$ is the plasma 3-velocity. Thus, it is often useful to parametrize the electric field in terms of the magnetic field, e.g., by introducing a dimensionless parameter $\beta_E = E/B$, which is usually less than unity. In most magnetically dominated systems, the typical flow velocity is of the order of the Alfv\'en speed, $V_A$, and so one typically expects $\beta_E \lesssim V_A/c$. For example, in the context of magnetic reconnection, the relevant electric field is usually the main reconnection electric field and $B$ is the reconnecting component of the magnetic field; then, $\beta_E = \beta_{\rm rec} = v_{\rm rec}/c$, where the $v_{\rm rec}$ is the reconnection inflow velocity, typically of order $v_{\rm rec} \sim 0.1 V_A$ for collisionless reconnection (e.g., \citep{Birn_etal-2001}) and $v_{\rm rec} \sim 0.01 V_A$ for resistive-MHD reconnection in the large-system, plasmoid-dominated regime \citep{Bhattacharjee_etal-2009, Huang_Bhattacharjee-2010, Uzdensky_etal-2010, Loureiro_etal-2012}, although it can be higher in the presence of background turbulence (e.g., \citep{Lazarian_Vishniac-1999, Kowal_etal-2009, Loureiro_etal-2009, Eyink_etal-2011}). In relativistic plasmas, where $\beta_E \sim 1$, one obtains the following upper limits on relativistic particle acceleration in the presence of the three main radiative mechanisms (synchrotron, IC, and curvature) discussed in \S~\ref{sec-rad_reaction}: {\bf (1) Synchrotron radiation} (\citep{Guilbert_etal-1983, deJager_etal-1996, Lyutikov-2010, Uzdensky_etal-2011}): \begin{equation} \gamma_{\rm rad}^{\rm sync} = {1\over{\sin\alpha}} \, \biggl[ {{3\, \beta_E}\over{2}}\, {e\over{B r_e^2}} \biggr]^{1/2} = {1\over{\sin\alpha}} \, \biggl[ {{3\, \beta_E}\over{2}}\, {B_{\rm cl}\over B} \biggr]^{1/2} \, , \label{eq-gamma_rad-synch} \end{equation} where $\alpha$ is the pitch angle of the particle with respect to the magnetic field (here we assume that the electric field is parallel to the direction of the particle's motion). 
The corresponding maximum characteristic synchrotron photon energy then is \begin{equation} \epsilon_{\rm ph, max}^{\rm sync} = {3\over 2}\, (\gamma_{\rm rad}^{\rm sync})^2\, \hbar \Omega_{c0} = {9\over 4}\, \alpha_{fs}^{-1}\, m_e c^2\, \beta_E \simeq 160\, {\rm MeV} \, \beta_E\, , \end{equation} where $\alpha_{fs} = e^2 / \hbar c \simeq 1/137$ is the fine structure constant. {\bf (2) Curvature radiation} (\citep{Sturrock-1971, Chugunov_etal-1975, Lyutikov_etal-2012b}): \begin{equation} \gamma_{\rm rad}^{\rm curv} = \biggl({{3 E_\parallel R_c^2}\over{2e}}\biggr)^{1/4} \, , \label{eq-gamma_rad-curv} \end{equation} where $E_\parallel$ is the accelerating parallel (to the magnetic field and to the particle velocity) electric field. This corresponds to a characteristic maximum photon energy that can be achieved by curvature radiation (\citep{Lyutikov_etal-2012b}) of \begin{equation} \epsilon_{\rm ph, max}^{\rm curv} = \biggl({3\over 2} \biggl)^{7/4}\, \hbar c R_c^{1/2} \, \biggl({E_\parallel\over e} \biggl)^{3/4} \, , \end{equation} which can be recast as \begin{equation} \epsilon_{\rm ph, max}^{\rm curv} = \biggl({3\over 2} \biggl)^{7/4}\, m_e c^2\, \alpha_{fs}^{-1} \, \sqrt{R_c\over{r_e}}\, \biggl({E_\parallel\over{B_{\rm cl}}} \biggl)^{3/4} \, . \end{equation} {\bf (3) Inverse Compton radiation} (in the Thomson regime): \begin{equation} \gamma_{\rm rad}^{\rm IC} = \biggl[ {3\over{4}}\, {{e E}\over{\sigma_T U_{\rm rad}}} \biggr]^{1/2} = \biggl[ {{9\,\beta_E}\over{32 \pi}}\, {{B B_{\rm cl}}\over{U_{\rm rad}}} \biggr]^{1/2} \, . \label{eq-gamma_rad-IC} \end{equation} These radiation-reaction upper energy limits become important in situations where they are lower than the applicable energy limits that may arise due to other reasons. One such other limit, for example, is due to a finite maximum available voltage drop associated with a given system size~$L$ and~electric field~$E$: $\gamma_{\rm max} = \epsilon_{\rm max}/m c^2 = eE L/m c^2 = \beta_E\, L/\rho_0$, where, once again, $\rho_0 \equiv m_e c^2/ eB$ is the fiducial Larmor radius of a mildly relativistic electron corresponding to a magnetic field~$B$ \citep{Hillas-1984, Aharonian_etal-2002}. While the condition $\gamma_{\rm rad} < \gamma_{\rm max}$ is usually not satisfied in heliospheric environments, this situation does happen naturally in some of the most important high-energy astrophysical systems. In particular, this happens in pulsar magnetospheres (curvature and synchrotron radiation), in PWN (synchrotron), and in black-hole accretion flows (inverse Compton and synchrotron), as we will discuss in the following two subsections. For example, in the case of synchrotron radiation, the condition $\gamma_{\rm rad} < \gamma_{\rm max}$ can be recast (ignoring factors of order unity) as $L > \rho_0 \, (\rho_0/r_e)^{1/2} = r_e\, (B/B_{\rm cl})^{-3/2} \simeq 1.3 \times 10^{11}\, {\rm cm}\, [B/(1\,{\rm G})]^{-3/2} = 1.3\, {\rm m}\, B_6^{-3/2}$, where $B_6 \equiv B/(1\,{\rm MG})$ is the magnetic field normalized to 1~MG, a value typical for gamma-ray-emitting pulsar magnetospheres near the light cylinder and for accretion disks of stellar-mass black holes in~XRBs. An equivalent way to think about the relative importance of radiation reaction in limiting particle acceleration is to consider relativistic particles moving at the radiation reaction limit $\gamma_{\rm rad}$ and to cast their radiative cooling length, $\ell_{\rm cool} = c t_{\rm cool} \equiv \gamma m c^3 /P_{\rm rad}$ in terms of their Larmor radius, $\rho \equiv \gamma m c^2 / eB$. 
Since $\gamma_{\rm rad}$ is determined by the force balance between the radiation reaction force $f_{\rm rad} \approx P_{\rm rad}/c$ and the accelerating electric force~$eE= e \beta_E B$, one can immediately see that $\ell_{\rm cool} = \rho\, B/E = \beta_E^{-1} \rho(\gamma_{\rm rad})$. In particular, in the case of reconnection, the electric field is parametrized in terms of the upstream reconnecting magnetic field $B_0$ as $E = \beta_{\rm rec} B_0$, where $\beta_{\rm rec}$ is the dimensionless reconnection rate, which in collisionless relativistic systems is of order~0.1. Thus, we see that \begin{equation} \ell_{\rm cool}(\gamma_{\rm rad}) \simeq \gamma_{\rm rad} m_e c^2/ eE \simeq \beta_{\rm rec}^{-1}\, \rho(\gamma_{\rm rad}) \, , \end{equation} i.e., only perhaps by a factor of $\beta_{\rm rec}^{-1} \sim 10$ longer than the Larmor radius of these particles. This means that, if one is interested in extreme high-energy nonthermal particle acceleration, one has to take radiation reaction into account once the size of the accelerating region exceeds about $10\rho(\gamma_{\rm rad})$. Once again, this is usually not a concern in most heliospheric environments, but this situation is ubiquitous in astrophysics. Finally, we would like to note that the above formulation treats radiation reaction as a continuous force on the particles, ignoring the fact that in reality radiation is emitted in the form of discrete photons. When the energy of the emitted photons becomes comparable to the kinetic energy $\gamma m c^2$ of the emitting particle, one has to take the quantized, discrete nature of the radiation process into account. For example, for synchrotron radiation this happens when the particle's Lorentz factor approaches $\gamma_Q = \rho_0/l_C = B_Q/B$, where $l_C \equiv \hbar/m c$ is the Compton length scale and $\rho_0 \equiv m c^2/e B$, and $B_Q = \alpha_{\rm fs} B_{\rm cl} \simeq 4.4 \times 10^{13} \, {\rm G}$ is the critical quantum magnetic field. Comparing $\gamma_Q$ with $\gamma_{\rm rad}^{\rm sync}$ and neglecting for simplicity order-unity factors like $\sin\alpha$ and $3\beta_E/2$, we see that $\gamma_{\rm rad}^{\rm sync}/\gamma_Q \sim (B/\alpha_{\rm fs} B_Q)^{1/2}$. Therefore, synchrotron radiation reaction prevents a particle from reaching the quantum-radiation regime (i.e., $\gamma_{\rm rad}^{\rm sync} < \gamma_Q$) under most astrophysically-relevant circumstances, namely, as long as $B \lesssim \alpha_{\rm fs} B_Q \sim 10^{11}\, {\rm G}$. The only class of astrophysical objects for which this inequality is violated is neutron stars and, especially, ultra-magnetized neutron stars called magnetars: typical magnetic fields in normal neutron stars are of order~$10^{12}\, {\rm G}$, and in magnetars they routinely reach $10^{15}\, {\rm G}$. This means that when considering energetic plasma processes, such as reconnection, in the close vicinity of these objects, the usual continuous-emission picture for synchrotron radiation is not applicable and one should instead describe it as emission of discrete quanta.
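Putting numbers into these expressions is instructive. The sketch below (illustrative only; the fiducial choices $\beta_E = 0.1$ and $\sin\alpha = 1$ are assumptions) evaluates the synchrotron radiation-reaction limit $\gamma_{\rm rad}^{\rm sync}$, the corresponding maximum synchrotron photon energy, the minimum system size above which this limit matters, and the ratio $\gamma_{\rm rad}^{\rm sync}/\gamma_Q$ that controls whether the quantum-synchrotron regime can be reached:
\begin{verbatim}
# Synchrotron radiation-reaction limit and related scales (Gaussian-cgs units).
from math import sqrt

e, m_e, c, hbar = 4.803e-10, 9.109e-28, 2.998e10, 1.055e-27
r_e      = e**2/(m_e*c**2)       # classical electron radius [cm]
B_cl     = e/r_e**2              # ~6e15 G
alpha_fs = e**2/(hbar*c)         # fine-structure constant, ~1/137
B_Q      = alpha_fs*B_cl         # quantum critical field, ~4.4e13 G
mec2_MeV = 0.511

beta_E = 0.1                     # E/B, fiducial reconnection value (assumption)

def gamma_rad_sync(B, sin_alpha=1.0):
    return sqrt(1.5*beta_E*B_cl/B)/sin_alpha

# Maximum synchrotron photon energy, ~(9/4) alpha_fs^-1 m_e c^2 beta_E (independent of B):
print(f"eps_ph,max ~ {2.25/alpha_fs*mec2_MeV*beta_E:.0f} MeV")   # ~16 MeV for beta_E = 0.1

for B in (1.0e-3, 1.0e6, 1.0e12):   # G: Crab Nebula, pulsar light cylinder, neutron star
    g_rad = gamma_rad_sync(B)
    L_min = r_e*(B/B_cl)**-1.5       # system size above which the limit is relevant
    print(f"B = {B:.0e} G: gamma_rad = {g_rad:.1e}, L_min = {L_min:.1e} cm, "
          f"gamma_rad/gamma_Q = {g_rad*B/B_Q:.1e}")
\end{verbatim}
Note that the last column retains the $(3\beta_E/2)^{1/2}$ factor that is dropped in the order-of-magnitude comparison above; the conclusion that the quantum regime is approached only for neutron-star-strength fields is unchanged.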
\subsection{Radiative Relativistic Reconnection and the Crab Nebula Flares} \label{subsec-Crab-flares} One of the most prominent examples of possible radiative effects on nonthermal particle acceleration is given by the emission produced by the Crab PWN. It has now been reasonably firmly established that most of the baseline steady-state nonthermal continuum emission from the Nebula, spanning from radio, to optical, to X-rays, and to high-energy (tens of MeV) gamma-rays, is produced by synchrotron radiation from ultra-relativistic electrons and positrons that populate the Nebula (e.g., \citep{Shklovsky-1957, Shklovsky-1966}). Then, however, the spectrum is observed to drop rather sharply above about 100~MeV, which is convincingly explained by the above ``standard'' synchrotron radiation reaction limit, $\epsilon_{\rm ph, max} \simeq (9\hbar c/4e^2)\, m_e c^2 \simeq 160$~MeV \citep{Guilbert_etal-1983, deJager_etal-1996, Lyutikov-2010, Uzdensky_etal-2011, Komissarov_Lyutikov-2011}. This indicates that the theoretical reasoning behind this limit is solid and the limit is indeed applicable in real situations, at least under normal circumstances. It turns out, however, that this is not the whole story; the actual situation is far more interesting. The validity of the above standard radiation reaction limit was recently challenged observationally by the discovery, made by the space-based gamma-ray observatories AGILE and FERMI, of short ($\sim$ 1 day), very intense 100~MeV--1~GeV flares in the Crab Nebula \citep{Abdo_etal-2011, Tavani_etal-2011, Balbo_etal-2011, Striani_etal-2011, Buehler_etal-2012, Buehler_Blandford-2014}. For basic energetics reasons, the only viable emission mechanism for the flares is still believed to be synchrotron radiation by PeV electrons in milli-Gauss magnetic fields, but the typical energies of the observed flare photons clearly exceed, by a factor of a few, the ``standard'' $\lesssim 100$~MeV synchrotron radiation reaction limit \citep{Abdo_etal-2011, Tavani_etal-2011} (see \citep{Buehler_Blandford-2014} for a recent review). This paradox thus challenges standard theories of high-energy particle acceleration in relativistic astrophysical plasmas and has led to an intense theoretical effort aimed at resolving it \citep{Uzdensky_etal-2011, Bednarek_Idec-2011, Komissarov_Lyutikov-2011, Yuan_etal-2011, Clausen-Brown_Lyutikov-2012, Cerutti_etal-2012a, Bykov_etal-2012, Sturrock_Aschwanden-2012, Lyutikov_etal-2012a, Lyubarsky-2012, Cerutti_etal-2013, Cerutti_etal-2014a, Cerutti_etal-2014b}. One promising idea invokes particle acceleration by magnetic reconnection (\citep{Uzdensky_etal-2011, Cerutti_etal-2012a, Cerutti_etal-2013, Cerutti_etal-2014a, Cerutti_etal-2014b}; see also \citep{Bednarek_Idec-2011}). The main idea is based on a specific peculiar property of the reconnection process that allows one to circumvent the usual expectation (on which the standard radiation reaction limit is based) that the accelerating electric field $E$ be weaker than the perpendicular magnetic field $B_\perp$ that causes the particle to radiate and hence lose its energy. Indeed, this expectation is usually well justified almost everywhere in astrophysical plasmas and is related to the applicability of ideal MHD, but intense reconnection layers are precisely the places where ideal MHD does not apply and hence where one can expect the condition $E < B_{\perp}$ to break down! In fact, the reconnecting magnetic field vanishes exactly at the X-point at the center of a current layer, whereas the electric field there remains finite.
Thus, one may expect that the Crab flare paradox can be resolved if the required particle acceleration to PeV energies takes place deep inside a reconnection layer, where the magnetic field is weak and so the associated synchrotron radiation reaction force is greatly reduced \citep{Uzdensky_etal-2011, Cerutti_etal-2012a}. What makes this scenario particularly attractive is that ultra-relativistic particles moving along relativistic Speiser trajectories in a current layer have a natural tendency to focus deeper and deeper into the layer as they gain energy \citep{Kirk-2004, Contopoulos-2007, Uzdensky_etal-2011, Cerutti_etal-2012a}. This leads to the formation of discrete, highly focused, and very intense beams of energetic particles that can be accelerated by the reconnection electric field to energies well above the radiation reaction limit $\gamma_{\rm rad}^{\rm synch}$ associated with the upstream reconnecting magnetic field $B_0$. Eventually, these particles escape the low-$B_\perp$ accelerating region and enter a finite-$B_\perp$ region where they quickly radiate their energy in an intense short burst of synchrotron radiation above 100~MeV. The plausibility of this picture, first suggested analytically by~\citep{Uzdensky_etal-2011}, has since been tested in both test-particle simulations \citep{Cerutti_etal-2012a} and fully self-consistent numerical simulations using the radiative relativistic PIC code Zeltron \citep{Cerutti_etal-2013, Cerutti_etal-2014a, Cerutti_etal-2014b}. This latter study was one of the first (second only to Ref.~\citep{Jaroschek_Hoshino-2009}) numerical PIC studies of magnetic reconnection that incorporated the radiation reaction force, and also the first to compute the observable radiative signatures (photon spectra and light curves) of reconnection. This example illustrates that, whereas there exist important, high-profile astrophysical phenomena where radiation-reaction effects on particle acceleration are expected to play an important role, how exactly these effects play out, in particular, in the case of reconnection-powered synchrotron radiation, is highly non-trivial and extremely interesting. \subsection{Reconnection-Powered Particle Acceleration in Accreting Black Hole Coronae} \label{subsec-BH_ADC} Another important area in high-energy astrophysics where radiation reaction may play an important role in limiting relativistic electron (and perhaps positron) acceleration by reconnection, with potentially important observational consequences, is represented by accretion disks and their coronae in black hole systems, such as galactic X-ray binaries (XRBs) and~AGN. Here, unlike in pulsar systems, the main radiative mechanism is inverse-Compton (IC) scattering of soft (10--100 eV in AGN and $\sim 1\, {\rm keV}$ in XRBs) accretion-disk photons by the energetic electrons accelerated in coronal reconnection events. This is especially so in bright systems accreting at a significant fraction of the Eddington limit, such as quasars and microquasars (in the high-soft state). ADCe in such systems often have a Thomson optical depth of order unity, and the reconnection layers responsible for the coronal heating and the hard X-ray production are often marginally collisionless \citep{Goodman_Uzdensky-2008}. Importantly, though, the ambient soft photon field produced by the underlying accretion disk is so intense that the resulting IC radiation reaction is very strong and needs to be taken into account.
In particular, it results in an effective Compton-drag resistivity (see \S~\ref{subsec-rad_resistivity}) which, under some conditions, becomes greater than the Spitzer resistivity due to electron-ion Coulomb collisions \citep{Goodman_Uzdensky-2008}. And, relevant to our present discussion, radiation reaction due to both IC and synchrotron mechanisms can affect the high-energy end of the electron distribution function and hence the observable hard X-ray and gamma-ray emission (e.g., \citep{Khiali_etal-2015a, Khiali_etal-2015b}). As discussed above, the relative importance of radiation reaction in reconnection-driven particle acceleration can be assessed by examining the radiative cooling length for electrons at the radiation reaction limit, \begin{equation} \ell_{\rm cool} (\gamma_{\rm rad}) = c t_{\rm cool} (\gamma_{\rm rad}) = \gamma_{\rm rad}\, m_e c^3/ P_{\rm rad} (\gamma_{\rm rad}) \sim \beta_E^{-1}\, \rho(\gamma_{\rm rad}) \, , \end{equation} and comparing it with other important length scales in the system. For typical conditions in ADCe of XRB black holes accreting near the Eddington limit, $\gamma_{\rm rad}^{\rm IC}\, m_e c^2$ can be of the order of 1000~MeV; thus, there should be virtually no electrons with energies much higher than the proton rest-mass energy (perhaps times a factor of a few) in these systems. The corresponding cooling length of these energetic electrons is then comparable to (or perhaps somewhat larger than) the fiducial proton Larmor radius~$\rho_{i0} \sim m_p c^2/eB_0 = 0.3\, {\rm cm}\, B_7^{-1}$, where $B_7 \equiv B_0/(10^7\,{\rm G})$. Obviously, this length is much smaller than the typical expected reconnection layer length in an~ADC, $L \sim R_g$, where $R_g \equiv GM/c^2 \simeq 1.5 \, {\rm km}\, M/M_\odot$ is the gravitational radius of a black hole of mass~$M$ and $M_\odot \simeq 2\times 10^{33}\, {\rm g}$ is the solar mass. For example, for a typical XRB stellar-mass black hole with $M \sim 10 M_\odot$, we have $R_g = 15\, {\rm km}$, and for a typical large super-massive black hole with $M\sim 10^8 M_\odot$, we have $R_g \sim 1.5 \times 10^{8} \, {\rm km} \approx 1\, {\rm AU}$. Interestingly, $\gamma_{\rm rad}^{IC}\, m_e c^2 \sim 1000\, {\rm MeV}$ is also comparable to the average dissipated energy per particle, $\bar{\gamma}\, m_e c^2 \sim B_0^2/(16 \pi n_e)$, provided that electrons and ions get comparable amounts of energy and that $\sigma_i \equiv B_0^2/(4\pi n_i m_p c^2) \sim 1$ in the corona. This in turn means that the above radiation reaction energy limit is more or less comparable to Werner {\it et al.}'s ``natural'' cutoff~$\gamma_{c1} \sim 10\, \bar{\gamma}$ (see \S~\ref{subsec-passive-nonthermal}). Thus, provided that this cutoff, discovered in 2D PIC simulations of relativistic pair-plasma reconnection \citep{Werner_etal-2014}, also applies to electron-ion plasmas, we see that $\gamma_{\rm rad}^{IC}$ may in fact be the smallest, and hence the governing, cutoff that limits nonthermal electron acceleration in coronae of real black holes accreting matter at high accretion rates (e.g., of XRBs in the high-soft state).
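For concreteness, the scales invoked in this estimate can be reproduced with the short sketch below (a minimal illustration; the inputs $M = 10\, M_\odot$, $B_0 = 10^7$~G, $\beta_E = 0.1$, $R = 10 R_g$, and a disk radiating at the Eddington luminosity are assumptions in the spirit of the discussion above, not values taken from a specific model):
\begin{verbatim}
# Illustrative scales for reconnection-driven electron acceleration
# in an XRB black-hole accretion-disk corona (Gaussian-cgs units).
from math import pi, sqrt

e, m_e, m_p, c = 4.803e-10, 9.109e-28, 1.673e-24, 2.998e10
sig_T = 6.652e-25

M      = 10.0      # black-hole mass in solar masses   (assumption)
B0     = 1.0e7     # coronal magnetic field [G]        (assumption)
beta_E = 0.1       # reconnection rate E/B0            (assumption)

R_g   = 1.48e5*M              # gravitational radius [cm]
L_Edd = 1.26e38*M             # Eddington luminosity [erg/s]
R     = 10.0*R_g              # size of the bright inner disk (assumption)
U_rad = L_Edd/(4*pi*R**2*c)   # ambient soft-photon energy density [erg/cm^3]

gamma_rad_IC = sqrt(0.75*e*beta_E*B0/(sig_T*U_rad))   # IC radiation-reaction limit
rho_i0   = m_p*c**2/(e*B0)                            # fiducial proton Larmor radius [cm]
ell_cool = gamma_rad_IC*m_e*c**2/(e*beta_E*B0)        # cooling length at gamma_rad [cm]

print(f"R_g = {R_g:.1e} cm,  U_rad = {U_rad:.1e} erg/cm^3")
print(f"gamma_rad_IC = {gamma_rad_IC:.0f}  (~{gamma_rad_IC*0.511/1e3:.0f} GeV electrons)")
print(f"rho_i0 = {rho_i0:.2f} cm,  ell_cool = {ell_cool:.0f} cm  (<< R_g)")
\end{verbatim}
For these particular inputs the radiation-reaction limit comes out at a few GeV, of the same order as the $\sim 1000$~MeV figure quoted above, and the corresponding cooling length (of order 10 cm) is indeed vastly smaller than $L \sim R_g$.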
The most important potential observational implication of these findings is the prediction that reconnection events in accretion disk coronae of XRB black holes in the high-soft state should be able to produce power-law high-energy IC radiation spectra extending to photon energies on the order of $\epsilon_{\rm ph,\, max} \sim \epsilon_{\rm ph,\, soft} (\gamma_{\rm rad}^{IC})^2 \sim 10^4-10^6 \, \epsilon_{\rm ph,\, soft} \sim 10-1000 \, {\rm MeV}$, where we took $\gamma_{\rm rad}^{IC} \sim 100-1000$ and $\epsilon_{\rm ph,\, soft} \sim 1\, {\rm keV}$, a typical energy for the dominant radiation emitted by Shakura-Sunyaev \citep{Shakura_Sunyaev-1973} accretion disks around stellar-mass black holes. However, because of the high compactness of these systems, most of these high-energy gamma-ray photons probably get absorbed by other photons and create electron-positron pairs before they can escape. This could be the primary process that governs (or at least strongly contributes to) the rate of pair production in black-hole coronae and thus may affect the composition (pairs vs. electron-ion plasma) of black-hole-powered winds and jets. Thus, our ability to calculate from first principles the number of electrons accelerated by reconnection to tens and hundreds of MeV, and hence the number of IC photons at these extreme energies, should give us an important handle on the efficiency of pair production in these systems --- a fundamental issue in black-hole astrophysics. \section{Reconnection with Radiative Cooling} \label{sec-rad_cooling} One of the most important effects that the radiative drag force can have on reconnection, and one that often comes into play first in various astrophysical contexts, is its effect on the random thermal motions of average, run-of-the-mill particles in the reconnection layer. We call this effect {\it radiative cooling}. Here we are particularly interested in the case of prompt radiative cooling, in which the characteristic cooling time $t_{\rm cool}$ of the hot plasma energized by the reconnection processes is shorter than or comparable to the characteristic time that a given fluid element spends inside the reconnection layer, which is typically the Alfv\'en transit time along the layer, $\tau_A= L/V_A$. In this regime, which we will call the strong radiative cooling regime, radiative energy losses become important in the overall energy balance of the reconnection process and need to be taken into account. The opposite case was considered in~\S~\ref{sec-passive-rad-signs}. It is easy to estimate that radiative cooling is not important in most traditional (solar-system and laboratory) applications of reconnection, i.e., in the environments of solar flares, the Earth's magnetosphere, tokamak fusion devices, and dedicated laboratory experiments designed to study reconnection under controlled conditions. Because of this, there has been relatively little work done on incorporating radiative cooling effects into reconnection models. However, when one tries to think about reconnection in various astrophysical contexts, one often finds, by doing simple estimates, that if magnetic reconnection happens in these environments, it has to take place in the strong radiative cooling regime.
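As a simple example of the kind of estimate meant here, the following sketch compares the IC cooling time of the mildly relativistic electrons heated in a coronal reconnection layer with the Alfv\'en transit time along the layer, using the same illustrative XRB-corona numbers as in \S~\ref{subsec-BH_ADC} (the values of $U_{\rm rad}$, $L$, $V_A$, and $\gamma$ below are assumptions, not model results):
\begin{verbatim}
# Is reconnection in an XRB black-hole corona in the strong radiative cooling regime?
# Compare the IC cooling time of a typical heated electron with the Alfven transit time.
m_e, c, sig_T = 9.109e-28, 2.998e10, 6.652e-25

U_rad = 1.5e13    # erg/cm^3, ambient disk radiation (illustrative, as estimated above)
L     = 1.5e6     # cm, layer length ~ R_g of a 10 M_sun black hole (assumption)
V_A   = 0.5*c     # magnetically dominated corona: V_A of order c (assumption)
gamma = 2.0       # typical mildly relativistic heated electron (assumption)

P_IC   = (4.0/3.0)*sig_T*c*U_rad*gamma**2   # IC radiative power per electron [erg/s]
t_cool = gamma*m_e*c**2/P_IC                # cooling time [s]
tau_A  = L/V_A                              # Alfven transit time [s]

print(f"t_cool = {t_cool:.1e} s, tau_A = {tau_A:.1e} s, t_cool/tau_A = {t_cool/tau_A:.1e}")
# t_cool/tau_A << 1  ->  strong radiative cooling regime
\end{verbatim}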
This realization has led to an increased interest in radiative magnetic reconnection in the high-energy astrophysics community, especially in recent years~\citep{Dorman_Kulsrud-1995, Lyubarsky-1996, Jaroschek_Hoshino-2009, Giannios_etal-2009, Nalewajko_etal-2011, Uzdensky-2011, Uzdensky_McKinney-2011, McKinney_Uzdensky-2012, Takahashi_Ohsuga-2013, Takahashi_Ohsuga-2015, Uzdensky_Spitkovsky-2014}. The importance of radiative cooling effects on reconnection has also been recognized in the context of reconnection in the solar chromosphere \citep{Steinolfson_vanHoven-1984, Leake_etal-2013, Ni_etal-2015}. \subsection{Reconnection with Radiative Cooling: Optically Thin Case} \label{subsec-rad_cooling-thin} The first key step toward studying the effects of strong cooling on reconnection in a systematic way was made recently by Uzdensky \& McKinney (2011) \citep{Uzdensky_McKinney-2011}, who developed a simple but self-consistent Sweet--Parker-like model for a non-relativistic resistive-MHD reconnection layer subject to strong optically thin cooling. It is of course understood that, just like the original Sweet--Parker model, this model should not be expected to provide a complete description of reconnection in real large astrophysical systems, which are often subject to a host of other effects, such as ambient turbulence \citep{Lazarian_Vishniac-1999}, secondary current-layer instabilities such as the plasmoid instability \citep{Loureiro_etal-2007}, and collisionless effects (e.g., \citep{Birn_etal-2001}), just to name a few. For these reasons the Uzdensky \& McKinney (2011) model of reconnection with radiative cooling should only be viewed as a toy model which, however, brings out several important physical insights into the problem and provides a useful fundamental building block for future studies. The main idea of this model is that, since radiative cooling limits the rise of the plasma temperature in the layer, in the absence of a guide magnetic field the plasma inside the layer has to compress in order to maintain the pressure balance with the upstream magnetic pressure. By analyzing carefully the balance between ohmic heating and radiative and advective cooling, together with the cross-layer pressure balance and with the equation of motion along the layer, Ref.~\citep{Uzdensky_McKinney-2011} obtained estimates for the key parameters characterizing the reconnecting system: the plasma compression ratio, the reconnection rate, and the layer thickness, in terms of the general parameters of the radiative cooling function. It was found that the reconnection rate, in the case with no guide field, is enhanced relative to the non-radiative case and that the layer thickness is reduced due to the cooling-related compression; a strong guide field, however, suppresses this effect by preventing strong compression. In addition, for the specific case of Spitzer resistivity~$\eta_{\rm Sp}$, reconnection is sped up (with or without a guide field) by radiative cooling even further due to the strong inverse scaling of the resistivity with temperature, $\eta_{\rm Sp} \sim T^{-3/2}$. Furthermore, several specific astrophysically-important radiative mechanisms (bremsstrahlung, cyclotron, and inverse Compton) were considered and the conditions for the strong-cooling regime were formulated for each one of them. The theory led to specific expressions for the reconnection rate and to the prediction of a cooling-catastrophe behavior for the case of strong bremsstrahlung cooling.
Although this study~\citep{Uzdensky_McKinney-2011} focused mostly on the optically thin case, many of its ideas, concepts, and conclusions should be valid more broadly; however, analyzing reconnection dynamics in the optically-thick case requires approaching the problem as a radiative-transfer problem, as we discuss in~\S~\ref{subsec-rad_cooling-thick}. An interesting astrophysical example of reconnection where strong radiative cooling is important is reconnection in the magnetospheres of gamma-ray pulsars (e.g., the Crab), at distances comparable to, or perhaps somewhat larger than, the light cylinder (LC) radius (\citep{Uzdensky_Spitkovsky-2014}; see also \citep{Lyubarsky-1996, Arka_Dubus-2013}). The rotating pulsar magnetosphere naturally develops an equatorial current sheet beyond the light cylinder, somewhat similar to the heliospheric current sheet, and magnetic reconnection in this current sheet can dissipate a nontrivial fraction of the overall pulsar spin-down power within a few LC radii. In some rapidly rotating pulsars, the reversing magnetic field just beyond the light cylinder is so strong (e.g., of order 1 MG for the Crab pulsar) that prompt synchrotron cooling of the heated plasma in the layer inevitably becomes important; it controls the energetics of reconnection and may result in the production of the observed strong pulsed GeV $\gamma$-ray emission \citep{Lyubarsky-1996, Uzdensky_Spitkovsky-2014, Arka_Dubus-2013}. In particular, by combining the conditions of the pressure balance across the current layer (reconnection in the pulsar magnetosphere is expected to take place without a guide field) and of the balance between the heating by magnetic energy dissipation and synchrotron cooling, one can obtain simple estimates for key physical parameters of the layer's plasma, such as the temperature, density, and layer thickness, in terms of the reconnecting upstream magnetic field~$B_0$ \citep{Uzdensky_Spitkovsky-2014}. Specifically, one expects the plasma to be heated to roughly the radiation reaction limit $\theta_e \equiv T_e/m_e c^2 \sim \gamma_{\rm rad}^{\rm synch}$ (see Eq.~\ref{eq-gamma_rad-synch}) and thus to be compressed to the density $n_e \sim B_0^2/(16\pi \gamma_{\rm rad}^{\rm synch} m_e c^2)$ (in the comoving frame of the relativistic pulsar wind). The corresponding thickness of the small elementary inter-plasmoid current layers is then expected to be comparable to the relativistic Larmor radius of these particles, $\delta \sim \rho_L(\gamma_{\rm rad}^{\rm synch}) = \gamma_{\rm rad}^{\rm synch} \rho_0$. For the particularly important case of the Crab pulsar, one finds: $T_e \sim 10^4 \, m_e c^2 \sim 10\, {\rm GeV}$, $n_e \sim 10^{13}\, {\rm cm^{-3}}$, and $\delta \sim 10\, {\rm cm}$. After accounting for the bulk Doppler boosting due to the pulsar wind (with a Lorentz factor of order 100), the synchrotron and inverse-Compton emission from the reconnecting current sheet may plausibly explain the Crab's observed pulsed high-energy (GeV) and VHE ($\sim 100$ GeV) radiation, respectively, while the rapid motions of the secondary plasmoids in the large-scale current layer may contribute to the production of the pulsar radio emission \citep{Uzdensky_Spitkovsky-2014}. In addition to astrophysical applications, magnetic reconnection in the strong optically thin radiative cooling regime may soon be within reach of laboratory studies utilizing powerful modern laser-plasma facilities, such as Omega EP and NIF (Uzdensky et al. 2016, in prep.).
The main cooling mechanism in these experiments is collisional bremsstrahlung, perhaps augmented by atomic-line cooling, depending on the plasma composition. Since the bremsstrahlung cooling rate scales strongly with the plasma density (as $n_e^2$) but only weakly with the temperature (as $T^{1/2}$), in order to reach the desired radiative regime, it is advantageous to configure the laser-target setup towards a higher density, a lower temperature, and a larger illuminated area. In addition, the role of radiative cooling is enhanced if one uses targets made of high-$Z$ materials, such as copper and gold. Overall, preliminary estimates indicate that the strong radiative cooling regime is reachable on NIF and perhaps even on Omega EP when using gold targets. Interestingly, some of the physical parameters achievable in these laser-plasma experiments, e.g., magnetic field strengths, densities, characteristic kinetic plasma length scales, are not that different from the values expected in, e.g., BH accretion disk coronae in XRBs. This points to tantalizing potential prospects of studying in the lab the magnetic reconnection processes in the regimes relevant to these astrophysical environments. \subsection{Reconnection with Radiative Cooling: Optically-Thick Case} \label{subsec-rad_cooling-thick} In the optically-thick radiative cooling case, i.e., when the optical depth $\tau$ {\it across} the layer is large, a self-consistent treatment of radiation calls for serious modifications to our overall theoretical approach to the reconnection problem. Specifically, one has to view the reconnection problem in this case essentially as a radiative transfer problem \citep{Uzdensky-2011}. The current layer develops a photosphere, with a photospheric surface temperature, $T_{\rm ph}$, which in a steady state is related to the temperature $T_0$ at the center of the reconnection layer via $T_0^4/T_{\rm ph}^4 = \tau \gg 1$. Furthermore, in the strong-cooling regime the basic steady-state energy balance between the Poynting flux entering the layer from upstream with the reconnection inflow, $S = (c/4\pi)\, E_{\rm rec} B_0 = v_{\rm rec}\, B_0^2/4\pi = c\, \beta_{\rm rec} B_0^2/4\pi$, and the outgoing radiative flux emitted by the photosphere, $F_{\rm rad} = \sigma_{\rm SB} T_{\rm ph}^4$, where $\sigma_{\rm SB} = \pi^2 k_B^4/60 \hbar^3 c^2 \simeq 5.67 \times 10^{-5}\, {\rm erg \,cm^{-2}\, s^{-1} \, K^{-4}}$ is the Stefan-Boltzmann constant, determines the photospheric temperature in terms of the reconnecting magnetic field: \begin{equation} T_{\rm ph} = \biggl[ {c\,\beta_{\rm rec}\over{\sigma_{\rm SB}}} \, {{B_0^2}\over{4\pi}} \biggr]^{1/4} \, , \label{eq-opt_thick-T_ph} \end{equation} and hence the central layer temperature for a given~$\tau$: \begin{equation} T_0 = \tau^{1/4} T_{\rm ph} = \biggl[ \tau {c\,\beta_{\rm rec}\over{\sigma_{\rm SB}}} \, {{B_0^2}\over{4\pi}} \biggr]^{1/4} \, . \label{eq-opt_thick-T_0} \end{equation} Next, if there is no guide field and if the pressure inside the layer is dominated by the gas pressure, $P_0 = 2 n_{e,0} k_B T_0$, then one can use the condition of pressure balance across the layer between $P_0$ inside the layer and the combined magnetic plus plasma pressure in the upstream region outside the layer, $(1 + \beta_{\rm up})\, B_0^2/(8\pi)$, where $\beta_{\rm up}$ is the upstream plasma-$\beta$ parameter, to obtain the central plasma density: \begin{equation} n_{e,0} = {B_0^2\over{16\pi}}\, {{1+\beta_{\rm up}}\over{k_B T_0}} \, . 
\label{eq-opt_thick-n_0} \end{equation} The expressions (\ref{eq-opt_thick-T_ph})--(\ref{eq-opt_thick-n_0}) govern the basic thermodynamics of a reconnection layer subject to strong radiative cooling in the optically thick regime. \subsection{Optically-Thick Current Layer: Radiation Pressure Effects} \label{subsec-rad_pressure} However, in some important astrophysical phenomena, the reconnecting magnetic field is so strong, and hence the total plasma energy density in the layer is so high, that the pressure is dominated by radiation pressure. For an optically thick layer, we can assume thermal black-body radiation pressure: $P_{\rm rad,0} = a T_0^4/3$, where $a = 4\sigma_{\rm SB}/c \simeq 7.57 \times 10^{-15}\, {\rm erg\, cm^{-3}\, K^{-4}}$ is the radiation constant. In this case, the cross-layer pressure balance does not involve the plasma density and instead yields, in combination with the steady-state energy balance $S=c\,\beta_{\rm rec} B_0^2/4\pi = F_{\rm rad} = \sigma_{\rm SB} T_{\rm ph}^4 = \tau^{-1}\,\sigma_{\rm SB} T_0^4$, a simple but important relationship between the optical depth and the reconnection rate \citep{Uzdensky-2011}: \begin{equation} \tau \, \beta_{\rm rec} = {3\over 8} \, . \end{equation} The validity of this expression is limited by a few assumptions, namely, by the steady-state assumption and by the assumption of strong cooling. The latter, in particular, implies that the radiative diffusion time across the layer, $t_{\rm diff} \sim \tau \delta/c$, is much shorter than the (Alfv\'enic) advection time $\tau_A \sim L/V_A$ along the layer, and this imposes a certain condition on the layer's aspect ratio relative to the optical depth: $ L/\delta > \tau V_A/c$.
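The thermodynamic relations obtained in this and the preceding subsection are easy to evaluate for a given field strength. The sketch below (all inputs are purely illustrative assumptions, not values tied to a specific object) computes $T_{\rm ph}$, $T_0$, and $n_{e,0}$ for a gas-pressure-dominated layer and then checks which pressure actually dominates; consistent with the relation $\tau\,\beta_{\rm rec} = 3/8$ above, radiation pressure takes over once $\tau\,\beta_{\rm rec}$ approaches that value:
\begin{verbatim}
# Thermodynamics of an optically thick, radiatively cooled reconnection layer (cgs units).
from math import pi

c, k_B = 2.998e10, 1.381e-16
sig_SB = 5.67e-5         # Stefan-Boltzmann constant
a_rad  = 4*sig_SB/c      # radiation constant

B0, beta_rec, tau = 1.0e8, 0.01, 10.0   # illustrative inputs (assumptions)

S    = c*beta_rec*B0**2/(4*pi)          # incoming Poynting flux per unit area
T_ph = (S/sig_SB)**0.25                 # photospheric temperature [K]
T_0  = tau**0.25*T_ph                   # central layer temperature [K]
n_e0 = B0**2/(16*pi*k_B*T_0)            # central density (gas-pressure balance, beta_up ~ 0)

P_gas, P_rad = 2*n_e0*k_B*T_0, a_rad*T_0**4/3
print(f"T_ph = {T_ph:.2e} K, T_0 = {T_0:.2e} K, n_e0 = {n_e0:.2e} cm^-3")
print(f"P_rad/P_gas = {P_rad/P_gas:.2f}  (analytically (8/3)*tau*beta_rec = {8*tau*beta_rec/3:.2f})")
\end{verbatim}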
\section{Radiation Drag on Fluid Flow} \label{sec-rad_drag} Another fluid-level manifestation of the radiative drag force is its direct braking effect on bulk plasma motions expected in a reconnecting system. Thus, in contrast to radiative cooling, which affects the {\it thermodynamics} of the reconnection process, the radiation effects that we consider in this section influence the {\it dynamics} and {\it electrodynamics} of reconnection. Here it is convenient to distinguish two aspects of such action: \\ {\bf (1)} radiative friction on the current-carrying charged particles moving in the out-of-plane direction in the reconnection layer, resulting in an effective {\it radiative resistivity}; \\ {\bf (2)} radiative friction slowing down the {\it reconnection outflows} in the direction along the reconnecting magnetic field (the outflow direction). Although the actual underlying physical mechanism behind these two effects is the same, for practical reasons it is convenient to consider them separately because they play different roles in changing the reconnection dynamics. \subsection{Radiative Resistivity} \label{subsec-rad_resistivity} When electrons carrying the electric current in a reconnecting current layer drift through an external radiation field, or perhaps radiate themselves via, e.g., synchrotron radiation, the radiation drag force on the electron flow may produce an effective radiative resistivity, $\eta_{\rm rad}$. In particular, for non-relativistic electrons, e.g., in applications such as accretion disks and ADCe, equating the radiation drag force given by Eq.~(\ref{eq-f_rad-IC}) with the accelerating electric force, one finds the steady-state drift velocity $\langle {\bf v}_e \rangle = -\, (3/4) (ce/\sigma_T U_{\rm rad}) \, {\bf E}$. These electrons thus carry an electric current of ${\bf j}_e = - e n_e \langle {\bf v}_e \rangle = (3/4)\, (cn_e e^2/\sigma_T U_{\rm rad}) \, {\bf E}$, which corresponds to an effective electron contribution to the electric conductivity of \begin{equation} \sigma_{e,\rm rad}^{\rm IC} = {3\over 4} \, {{cn_e e^2}\over{\sigma_T U_{\rm rad}}} \, . \label{eq-sigma_e_IC} \end{equation} It is interesting to note that, as was discussed in \citep{Goodman_Uzdensky-2008}, strictly speaking, Compton drag does not change the steady-state resistivity in an electron-ion plasma; perhaps counter-intuitively, the resistivity actually just remains the Spitzer collisional resistivity. This is because radiation drag essentially affects only the electrons but not the ions (because the Thomson cross-section scales as the inverse square of the particle mass). Therefore, even if the electrons are greatly slowed down by the radiation field, the ions can eventually (if long enough time scales and length scales are available) get accelerated by the applied electric field to carry the necessary current. In many important astrophysical applications, however, including reconnection in BH ADCe, one is interested in processes that take place on such short length scales that the ions may not have enough range to get accelerated to the Coulomb-collision-limited steady-state drift velocity. In such situations, one can ignore the ion current and hence cast the effective Ohm's law in terms of an effective Compton-drag resistivity, or a Compton magnetic diffusivity given by \begin{equation} \eta_{\rm IC} = {c^2 \over{4\pi \sigma_{e,\rm rad}^{\rm IC}}} = {1\over{3\pi}} \, {{c \sigma_T U_{\rm rad}}\over{n_e e^2}} \, . \label{eq-eta_IC} \end{equation} In a pair plasma, of course, both electrons and positrons are subject to radiative drag equally; hence, their conductivities are both given by~(\ref{eq-sigma_e_IC}) (in the non-relativistic regime) and therefore the total radiative resistivity equals one half of the value given by Eq.~(\ref{eq-eta_IC}). As we discussed in \S~\ref{subsec-BH_ADC}, in astrophysical applications such as coronae of accretion disks of black holes accreting at a large fraction of the Eddington luminosity~$L_E$, the ambient radiation field, with a radiation energy density $U_{\rm rad} \sim L_E/(4\pi R^2 c)$, where $R \simeq 10 R_g$ is the characteristic size of the bright inner part of the accretion disk, is very intense. Under such conditions, the resulting effective radiative resistivity can be quite high and may dominate over the Spitzer resistivity due to classical Coulomb collisions. It can then seriously affect the reconnection processes that are believed to be responsible for coronal heating and for powering the observed hard X-ray emission from these sources. In particular, enhanced radiative resistivity may alter the analysis of whether the global reconnection layer is in the collisional or collisionless regime \citep{Goodman_Uzdensky-2008} and may thus affect (reduce) the reconnection rate and the hierarchy of secondary plasmoids emerging in the reconnection layer. Needless to say, however, the regime where IC effective resistivity is important probably also implies that radiative cooling is important as well, which may actually speed up reconnection (see \S~\ref{subsec-rad_cooling-thin}). In this case, it is not yet clear what the overall combined effect of radiation (cooling plus resistivity) on the reconnection rate is.
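To get a feel for the numbers, the sketch below evaluates the Compton-drag conductivity and magnetic diffusivity, Eqs.~(\ref{eq-sigma_e_IC}) and~(\ref{eq-eta_IC}), for illustrative coronal parameters (the values of $U_{\rm rad}$ and $n_e$ are assumptions chosen to represent a bright XRB corona with a Thomson optical depth of order unity over $\sim 10\,R_g$, not the output of any specific model):
\begin{verbatim}
# Effective Compton-drag (IC) conductivity and magnetic diffusivity in a BH corona (cgs units).
from math import pi

e, c, sig_T = 4.803e-10, 2.998e10, 6.652e-25

U_rad = 1.5e13   # erg/cm^3, near-Eddington disk radiation at R ~ 10 R_g   (assumption)
n_e   = 1.0e17   # cm^-3, coronal electron density                         (assumption)

sigma_IC = 0.75*c*n_e*e**2/(sig_T*U_rad)   # conductivity [s^-1]
eta_IC   = c**2/(4*pi*sigma_IC)            # magnetic diffusivity [cm^2/s]

print(f"sigma_IC = {sigma_IC:.1e} 1/s,  eta_IC = {eta_IC:.1f} cm^2/s")
\end{verbatim}
Whether this Compton-drag diffusivity exceeds the classical Spitzer value depends on the coronal electron temperature and density; the comparison is discussed in \citep{Goodman_Uzdensky-2008}.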
In the relativistic case, the Compton-drag resistivity was calculated in \citep{van_Oss_etal-1993}. An important point to keep in mind when considering effective resistivity in the relativistic case is that the electric current depends only on the 3-velocity of the charge carriers and not on their Lorentz factor. Thus, as long as the particles are ultra-relativistic (and thus travel nearly at the speed of light), the current density they can carry is limited by $e n_e c$, independent of the electric field. The effect of radiation drag on the resistivity in this case is diminished. In addition to black-hole accretion disks and their coronae, radiative resistivity due to various radiative mechanisms may also potentially play a role in reconnection processes in a number of other high-energy astrophysical systems, e.g., in the magnetospheres of pulsars \citep{Uzdensky_Spitkovsky-2014}, magnetars \citep{Uzdensky-2011}, and GRBs~\citep{McKinney_Uzdensky-2012}. \subsection{Radiative Drag on Reconnection Outflow} \label{subsec-rad_drag-outflow} Finally, let us discuss the effects of radiative drag on the bulk outflow from the reconnection region. This outflow is an important aspect of the reconnection process; its main role is to evacuate from the reconnection layer the plasma that flows into the layer bringing in fresh unreconnected magnetic flux, and thus to make room for more plasma to enter. The outflow thus represents an important element of the overall stagnation-flow pattern around the magnetic X-points. The outflow is driven by a combination of the pressure gradient force and the magnetic tension force associated with reconnected magnetic field lines. It usually represents the fastest motion found in a reconnecting system, with a speed on the order of the Alfv\'en speed and hence significantly higher than the reconnection inflow velocity. Importantly, the outflow can usually be described roughly as an ideal-MHD motion as it involves the electrons and ions moving together in the same direction%
\footnote{Strictly speaking, in weakly collisional plasmas this is not quite correct since the electron and ion outflow patterns are somewhat different, which results in an in-plane current circulation responsible for the quadrupole out-of-plane magnetic field.}. In a number of astrophysical applications, including TeV flares in blazar jets \citep{Nalewajko_etal-2011} and black-hole accretion disks, GRB jets, and others, the Compton drag due to ambient radiation may have a substantial effect on the reconnection outflow. It can slow down the outflow, choking the motion of plasma through the reconnection system and thereby reducing the reconnection rate in a manner similar to the effect of a large viscosity. For example, the relativistic resistive MHD simulations of \citep{Takahashi_Ohsuga-2013, Takahashi_Ohsuga-2015}, which included optically-thick radiation effects, showed that radiative drag on the reconnection outflow led to a reduction of the reconnection rate for Petschek-like relativistic reconnection. In addition, as the ambient isotropic radiation field exerts a braking Compton-drag force on the plasma flow, it also extracts energy from the flow and can, under certain circumstances, convert a noticeable fraction of the bulk kinetic energy of the outflow into radiation beamed in the outflow direction.
To illustrate the effect of radiative braking of the outflow on the reconnection rate, let us consider a simple, Sweet-Parker-like toy model of a laminar non-relativistic incompressible resistive-MHD reconnection problem. For illustration, we will only take into account the Compton-drag force in the outflow fluid equation of motion and will ignore other radiative effects such as radiative cooling and resistivity. Since the inflow is generally much slower than the outflow, the effect of the radiative drag on the inflow can also be neglected. We will also ignore the guide magnetic field. Furthermore, we will focus on an extreme case where radiative drag dominates over the plasma inertia in establishing the ultimate outflow velocity. The model is then somewhat similar to the analysis of resistive Hall-MHD reconnection in Ref.~\citep{Uzdensky-2009}. For definiteness, let us choose a system of coordinates with $x$ being the direction of the reconnecting magnetic field (and hence of the reconnection outflow), $y$ being the direction across the layer, and $z$ being the ignorable direction. Then, ignoring the plasma inertia in the outflow ($x$) equation of motion, the outflow velocity $u_x$ is governed by the balance between the outward pressure gradient force $-dP/dx$ (magnetic tension may give a comparable contribution but we ignore it here for simplicity) and the Compton-drag force (per unit volume), $- (4/3) \, n_e \sigma_T U_{\rm rad} \, u_x/c$. This yields the following estimate for the final outflow speed at the end of the layer of length~$L$: \begin{equation} u_{\rm out} \sim c \, {\Delta P \over L} \, {1\over{n_e \sigma_T U_{\rm rad}}} = c \, {\Delta P \over{\tau_T\, U_{\rm rad}}} \, , \end{equation} where $\tau_T \equiv n_e \sigma_T L$ is the Thomson optical depth along the layer. The drop $\Delta P$ of the plasma pressure along the layer can be estimated, as is done in the traditional Sweet-Parker model, by using the condition of pressure balance across the layer, $P_0 = P^{\rm up} + B_0^2 /8\pi$, and ignoring the variation of the upstream plasma pressure $P^{\rm up}$ along the layer. Thus, $\Delta P \simeq B_0^2 /8\pi$ and we get \begin{equation} u_{\rm out} \sim c \, {{B_0^2}\over{8\pi \tau_T\, U_{\rm rad}}} = c \, \tau_T^{-1}\, {{U_{\rm magn}}\over{U_{\rm rad}}}\, . \end{equation} One can see that, since we assumed the outflow to be non-relativistic, $u_{\rm out} \ll c$, this result requires that the radiation energy density times the optical depth be sufficiently large compared to the magnetic energy density, i.e., $\tau_T\, U_{\rm rad} \gg U_{\rm magn}$, a condition that is indeed satisfied, for example, in the inner parts of black-hole accretion disks. Furthermore, since in this model we neglected the plasma inertia compared to the radiation drag, we must also require that $u_{\rm out} \ll V_A = B_0\, (4\pi \rho)^{-1/2}$. This, in turn, imposes an even more stringent condition than $u_{\rm out} \ll c$, namely, $\tau_T\, U_{\rm rad} \gg (U_{\rm magn}\, \rho c^2)^{1/2}$, which, however, can also be satisfied inside black-hole accretion disks. The rest of the reconnection problem analysis is the same as in the classical Sweet-Parker model.
Employing the incompressibility condition, $\delta u_{\rm out} = v_{\rm rec} L$, and the steady-state resistive magnetic induction equation, $\delta v_{\rm rec} = \eta$, where $v_{\rm rec}$ is the reconnection inflow velocity, $\delta$ is the layer thickness, and $\eta$ is the magnetic diffusivity (which may, in general, be due to both Coulomb collisions and radiative resistivity), one obtains the usual Sweet-Parker scaling for the reconnection rate and for the aspect ratio: \begin{equation} {\delta\over L} \sim {v_{\rm rec} \over{u_{\rm out}}} \sim S_{\rm rad}^{-1/2} \, , \end{equation} where, however, the radiation-controlled outflow velocity $u_{\rm out}$ replaces the Alfv\'en speed in the effective Lundquist number, i.e., \begin{equation} S_{\rm rad} \equiv L \, {u_{\rm out}\over \eta} \, . \end{equation} This model, although highly simplified, may provide a useful building block in constructing a more complete theoretical picture of magnetic reconnection in certain radiation-rich astrophysical environments, for example, in the context of high accretion rate black-hole accretion flows in XRBs and~AGNs. \section{Other Radiation Effects in Optically Thick Plasmas: Radiation Pressure, Radiative Viscosity and Hyper-Resistivity, and Pair Creation} \label{sec-other} In systems with non-negligible optical depth across the layer, some of the photons produced by the reconnection process do not promptly leave the system but may interact with the particles in the layer again by scattering or absorption (or pair creation at higher energies, see below). This interaction can lead to additional effects, such as radiation pressure and radiative viscosity, both of which can affect the reconnection dynamics. In particular, if the layer is optically thick to scattering, then radiation pressure $P_{\rm rad}$ enters the pressure balance across the layer: \begin{equation} P_{\rm gas} + P_{\rm rad} + B^2 / 8\pi = {\rm const} \, . \end{equation} This implies that the plasma pressure at the center of the layer does not need to increase as much as in the case without radiation pressure in order to balance the outside magnetic pressure. For example, if the optical depth is large enough for radiation to reach local thermal equilibrium with the plasma at the local plasma temperature~$T$, then $P_{\rm rad} = a T^4/3$ and hence the pressure balance becomes \begin{equation} 2 n k_B T + aT^4/3 + B^2 / 8\pi = {\rm const} \, . \end{equation} Therefore, the temperature in the layer can be lower than in the case without radiation pressure. The thermodynamic structure of the layer in this case is determined by the radiative transfer problem across the layer. If, however, the optical depth is modest, $\tau \lesssim 1$, then the effects of radiation pressure are reduced by a factor of $\tau$, but can still be significant under some circumstances. Because of the very steep dependence of the radiation pressure on temperature, we can see that radiation pressure effects are important only in very hot environments. In addition, since the optical depth must be high enough to ensure a good coupling of the radiation pressure to the plasma, the density must also be high. From this one can deduce that radiation pressure effects on reconnection are expected to be important mostly in high-energy-density systems.
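To make this statement concrete, the short Python script below solves the above pressure balance, $2 n k_B T + aT^4/3 = B_0^2/8\pi$, for the central temperature and reports the resulting split between gas and radiation pressure. It is a minimal illustrative sketch: the density and magnetic field values are assumed for illustration only, and pair creation, which becomes important above $B_Q$ as discussed below, is ignored.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

k_B = 1.38e-16   # Boltzmann constant [erg/K]
a   = 7.57e-15   # radiation constant [erg cm^-3 K^-4]

def central_temperature(n, B0):
    """Solve 2 n k_B T + a T^4 / 3 = B0^2 / (8 pi) for T (pairs ignored)."""
    P_mag = B0**2 / (8.0 * np.pi)
    return brentq(lambda T: 2.0 * n * k_B * T + a * T**4 / 3.0 - P_mag,
                  1.0, 1.0e12)

# Illustrative (assumed) parameter pairs: (density [cm^-3], field [G])
for n, B0 in [(1.0e20, 1.0e5),     # weak field: gas pressure dominates
              (1.0e20, 1.0e12)]:   # neutron-star-strength field: radiation dominates
    T0 = central_temperature(n, B0)
    P_gas, P_rad = 2.0 * n * k_B * T0, a * T0**4 / 3.0
    print(f"B0 = {B0:.0e} G: T0 = {T0:.2e} K, P_rad/P_gas = {P_rad / P_gas:.1e}")
\end{verbatim}
Running this sketch shows the expected trend: at the weaker (assumed) field the layer stays cool and gas-pressure dominated, while at neutron-star field strengths the central temperature rises enough for radiation pressure to dominate overwhelmingly.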
The most notable astrophysical examples of such systems include the inner parts of black-hole and neutron-star accretion disks in binary systems; magnetospheres of normal neutron stars, e.g., X-ray pulsars (with ``normal'' magnetic fields of about $10^{12}\, {\rm G}$), and magnetars in, e.g., SGR systems (with magnetic fields of order $10^{15}\, {\rm G}$); and central engines of supernovae (SNe) and~GRBs. In addition to the radiation pressure effects in a reconnection layer of non-negligible optical depth, the momentum extracted by radiation from the reconnection outflow (see the discussion of radiative braking in \S~\ref{subsec-rad_drag-outflow}) can be deposited back to the plasma in other parts of the layer, which results in an effective viscosity mediated by the photons. This radiative viscosity is then expected to affect the basic reconnection dynamics in a manner similar to the usual collisional viscosity caused by the thermal motions of electrons and ions: it should lead to broadening of the layer and to decreasing the reconnection rate. Likewise, the momentum extracted by radiation from current-carrying electrons in the current layer (which leads to a Compton-drag radiative resistivity, see \S~\ref{subsec-rad_resistivity}) can be deposited to other current-carrying electrons elsewhere in the layer if the optical depth is not negligible. This essentially spreads the electric current, making the layer broader, an effect that can be described as a result of a radiative hyper-resistivity proportional to the optical depth (for $\tau< 1$). Interestingly, this hyper-resistivity should only work in electron-ion plasmas; in pair plasmas, since photons can be scattered or absorbed by both electrons and positrons (which drift in opposite directions), radiative hyper-resistivity just gives way to an enhanced radiative resistivity. In the most extreme astrophysical systems, such as the magnetospheres of magnetars in SGR systems and GRB and SN central engines, the reconnecting magnetic field $B_0$ exceeds the quantum critical magnetic field $B_Q \equiv \alpha_{\rm fs} B_{\rm cl} = m_e^2 c^3/(e \hbar) \simeq 4.4 \times 10^{13} \, {\rm G}$. Then, the dissipated magnetic energy density is so high that the pressure of the heated plasma inside the reconnection layer becomes dominated by the radiation pressure and, furthermore, the resulting radiation temperature becomes relativistic, i.e., $T_0 \sim m_e c^2 \, (B_0/B_Q)^{1/2}$ \citep{Uzdensky-2011}. In this case, prodigious pair production inevitably results and the current layer gets quickly ``dressed'' in an optically thick and dense pair coat. Once again, the problem of determining the thermodynamic structure across such a dressed layer becomes a radiative transfer problem, but one in which the pair density at any location in the layer below its pair-creation photosphere is determined by the local thermodynamic equilibrium. In particular, for reconnecting magnetic fields that are not just higher, but much higher than~$B_Q$, e.g., in magnetar systems, one expects ultra-relativistically hot plasma, $T \gg m_e c^2$; and under these conditions the pair density scales as~$T^3$ and the pair contribution to the total pressure becomes comparable to the radiation pressure \citep{Uzdensky-2011}.
Magnetic reconnection in this very exotic regime may be the mechanism behind some of the most spectacular, energetic phenomena in the Universe --- giant SGR flares, releasing huge amounts of energy in the form of gamma-rays in just a fraction of a second \citep{Uzdensky-2011, Uzdensky_Rightley-2014}. This regime can be regarded as the most extreme case of radiative reconnection, because all of the radiative effects discussed in this Chapter --- radiation-reaction limits on particle acceleration, strong radiative cooling, radiation pressure, Compton-drag resistivity, etc. --- are active in this case. \section{Conclusions and Outlook} \label{sec-conclusions-outlook} This Chapter presented a review of the physics of radiative magnetic reconnection and its applications to various astrophysical phenomena. Traditional reconnection research, motivated by applications to relatively low-energy-density solar-system and laboratory plasma environments, has historically ignored the possible effects and observational signatures of radiation. In many astrophysical reconnecting systems, however, various radiation effects exert an important influence on the dynamics and energetics of the reconnection process, as well as on the associated nonthermal particle acceleration. These effects ultimately stem from the radiation reaction force on individual particles, which is directly related to the rate of energy losses suffered by the particle (i.e., the particle's radiative power). Since the radiative power is often proportional to the energy density of the external agent field that causes the particle to radiate (e.g., magnetic energy density for synchrotron radiation and ambient radiation energy density for inverse-Compton radiation), we see that the relative importance of the radiation reaction force in the particle equation of motion can usually be traced to the high energy density in astrophysical systems of interest, combined with their large size. The main radiation mechanisms involved in high-energy astrophysical reconnection, especially in relativistic systems, are cyclo/synchrotron radiation, curvature radiation, and inverse-Compton scattering. In addition, bremsstrahlung radiation and pair creation can play a role under some circumstances. The radiation reaction force can manifest itself via several different radiative effects, the relative importance of which depends on the particular astrophysical context. The first radiative effect that comes in at the lowest energy densities is the radiation-reaction limit on relativistic particle acceleration. This is a purely kinetic effect: it is due to the fact that for relativistic particles the radiative power, and hence the radiation reaction force, grow rapidly with the particle's Lorentz factor~$\gamma$. This means that the radiation back-reaction first affects the most energetic particles, while leaving lower-energy particles less affected. This necessitates a kinetic treatment. One of the most prominent astrophysical examples where this effect has to be taken into account in considering magnetic reconnection is the Crab pulsar wind nebula, in particular in relation to the recently discovered short and bright gamma-ray (hundreds of MeV) flares that seem to require extreme particle acceleration to PeV energies, overcoming the synchrotron radiation reaction limit.
Another important astrophysical example is found in reconnection events powering coronal heating and hard-X-ray emission in accretion disk coronae of black holes, e.g., in galactic X-ray binaries and active galactic nuclei. Here, inverse-Compton radiation drag due to the intense ambient soft photon field emitted by the underlying accretion disk imposes interesting upper limits on the electron acceleration. Most of the other radiative effects acting in high-energy astrophysical reconnection can be described as fluid-level effects; they affect not just a select few highest-energy particles but most of the particle population; thus, they seriously affect the overall dynamics and energy budget of a reconnection process. Correspondingly, these effects require very high energy densities, which implies that they usually become important for reconnection events happening close to the central compact object, such as a neutron star or a black hole. Just to organize our thinking, we can categorize the radiative effects on reconnection according to the different components of the particle motion that are being affected by the radiative drag. Thus, radiative drag on random, ``thermal'', particle motions, especially in the direction across the current layer, effectively leads to radiative cooling, which is reviewed in \S~\ref{sec-rad_cooling}. It may lead to a substantial plasma compression and speed up the reconnection process. Radiative cooling is important in systems such as the reconnecting equatorial current sheet in a pulsar magnetosphere just outside the pulsar light cylinder, perhaps powering the observed pulsed high-energy gamma-ray emission (synchrotron cooling, see \S~\ref{subsec-rad_cooling-thin}); inner parts of accretion disks and accretion disk coronae of black hole systems (inverse-Compton cooling, see~\S~\ref{subsec-BH_ADC}); reconnection events in magnetospheres of magnetars, perhaps powering giant gamma-ray flares in Soft Gamma Repeaters; and in relativistic jets of gamma-ray bursts. Next, radiative drag on the bulk collective motions of electrons (and perhaps positrons) may result in: ({\it i}) effective radiative (Compton drag) resistivity for the flow of electrons carrying the main electric current in the reconnection current layer, important, e.g., in accreting black hole coronae, magnetar magnetospheres, and central engines of supernovae and gamma-ray bursts; and ({\it ii}) effective braking of the plasma outflow from the reconnection layer, potentially slowing down the reconnection process; this effect has been explored in the context of TeV flares in blazar jets but may also be important in a number of other systems, including accretion disks around black holes and neutron stars. Whereas most of the above-mentioned radiative effects can operate in optically thin plasmas, there are some radiation effects that take place in optically thick reconnecting systems. In particular, this may occur at very high plasma densities and energy densities, found, e.g., in systems like the central engines and jets of gamma-ray bursts, magnetar magnetospheres, and perhaps central parts of black hole accretion disks. Reconnection layers in these environments may become optically thick, which allows the photons emitted by the energetic particles in the layer to interact with the layer particles again. This secondary interaction opens up avenues for additional radiative effects, namely, radiation pressure and effective radiative viscosity (see \S~\ref{sec-other}).
Furthermore, since most of these plasmas are relativistically hot and compact, there are many gamma-ray photons above the pair-production threshold, which makes copious pair production not only possible but often inevitable. Intense pair production, in turn, further increases the optical depth and plasma collisionality of the reconnection layer. Finally, apart from its possible active role in influencing reconnection dynamics and energetics, radiation emitted by a reconnection layer also plays the role of an important (and, quite often in astrophysics, the only) {\it diagnostic tool} that we can use to study remote astrophysical systems. This applies not only to all of the above-mentioned radiative reconnection systems, but also, arguably, to most astrophysical systems where we believe magnetic reconnection takes place, even when radiation acts as a purely passive tracer. For this reason, it is particularly important to develop theoretical and computational tools that will enable us to predict and calculate potentially observable radiative signatures of a reconnection process. One should expect continuing rapid development of the field of radiative magnetic reconnection in the next few years. This optimistic outlook for accelerating progress in this exciting new frontier of plasma astrophysics is justified by the convergence of several factors. First, there is a strong and growing astrophysical motivation for its serious development, based on the increasing recognition by the broad astrophysical community of the importance of magnetic reconnection as a potent mechanism for plasma heating, nonthermal particle acceleration, and high-energy radiation production in numerous astrophysical phenomena. This leads to an increased interest among astrophysicists in magnetic reconnection in general; however, as argued in this Chapter, in many, if not most, of the astrophysical phenomena of interest reconnection inevitably takes place in the radiative regime, in which prompt radiative energy losses materially affect the process. In addition, the need to connect reconnection theory to observations, by developing the capability to calculate observable radiative signatures, also contributes to the astrophysical motivation. The second fundamental reason for expecting rapid progress in radiative reconnection is the emerging ability to study this reconnection regime in the lab, namely, by using modern high-energy-density laser-plasma facilities, such as Omega EP and NIF. By using high-$Z_{\rm eff}$ target materials such as gold, it should be possible to achieve a reconnection regime where bremsstrahlung and perhaps atomic-line radiative cooling become important. In addition, powerful Z-pinch facilities (such as Imperial College's MAGPIE) could also potentially be adapted to laboratory studies of radiative HED reconnection. All these new experimental capabilities that are now becoming available can potentially provide a valuable research tool, a testbed for validating theories and numerical models of radiative reconnection, and also perhaps lead to completely new, unexpected discoveries. Finally, current and future progress in radiative reconnection is greatly facilitated by the appearance of new computational tools, coupled with analytical theory. The most important new development on this front is the emergence of numerical plasma codes that self-consistently include radiation reaction effects on the plasma and, simultaneously, compute various observable radiative signatures.
One of the most prominent examples of this is the radiative relativistic PIC code Zeltron developed at the University of Colorado \citep{Cerutti_etal-2013, Cerutti_etal-2014a}. In addition, active efforts are now underway to augment various fluid-level (e.g., resistive MHD and two-fluid) codes with radiative modules \citep{Takahashi_Ohsuga-2013, Leake_etal-2013, Sadowski_etal-2014, McKinney_etal-2014, Takahashi_Ohsuga-2015, Ni_etal-2015}, which will enable one to study collisional and optically-thick reconnection problems. Importantly, while Zeltron has been developed specifically to study radiative magnetic reconnection, an area in which it has already made important contributions, this code --- and, one hopes, other radiative plasma codes that are being developed or will be developed in the near future --- is sufficiently versatile and can be employed to study other important problems in radiative plasma physics and astrophysics, such as collisionless shocks and turbulence. In this sense, our research efforts towards better understanding astrophysical radiative reconnection should not only lead to progress in this particular area, but also should benefit the broader fields of plasma physics and plasma astrophysics. Although a lot of progress in developing new radiative computational capabilities has already been achieved, still more work needs to be done. Among the most important radiation processes that should, and hopefully will, be incorporated into kinetic plasma codes (in addition to synchrotron and inverse-Compton radiation already implemented in Zeltron) are non-relativistic cyclotron radiation, Klein-Nishina effects for Compton scattering, curvature radiation, and the quantum-electrodynamic modifications to various radiation processes in the presence of a magnetar-strength (above $B_Q$) magnetic field. In addition, there is also strong astrophysical motivation to include collisional and finite optical depth effects such as bremsstrahlung emission and absorption, synchrotron self-absorption, synchrotron-self-Compton (SSC) radiation, and pair creation. All these capabilities will greatly expand our ability to study magnetic reconnection, as well as other important plasma processes, in various high-energy astrophysical contexts and thus ultimately will help us attain a better understanding of this violent, shining, beautiful Universe. \begin{acknowledgements} I am very grateful to the organizers of the Parker Workshop on Magnetic Reconnection in Brazil, March 2014, and especially to Dr. Walter Gonzalez. I am also indebted to Prof. Eugene Parker for being a constant shining inspiration. I am also grateful to numerous colleagues for many stimulating and insightful conversations over many years on various topics discussed in this Chapter. Specifically, I would like to thank M. Begelman, A. Beloborodov, A. Bhattacharjee, B. Cerutti, W. Daughton, E. de Gouveia dal Pino, J. Drake, D. Giannios, J. Goodman, R. Kulsrud, H. Li, N. Loureiro, Yu. Lyubarsky, M. Lyutikov, J. McKinney, M. Medvedev, K. Nalewajko, A. Spitkovsky, and G. Werner. This work has been supported by NSF Grants PHY-0903851 and AST-1411879 , DOE Grants DE-SC0008409 and DE-SC0008655, and NASA Grants NNX11AE12G, NNX12AP17G, NNX12AP18G, and NNX13AO83G. \end{acknowledgements} \bibliographystyle{apj}
\section{Introduction} The current availability of cloud-based quantum computers has led to great excitement over the possibility of obtaining quantum advantage for chemistry, materials science, and other applications in the near future. However, these Noisy, Intermediate-Scale Quantum (NISQ) devices are limited by both the number of qubits available and the hardware noise~\cite{preskill2018quantum}. Parameterized quantum circuits (PQCs) have been proposed as an ideal strategy to deal with the practical limitations of NISQ devices. Indeed, PQCs are employed in both variational quantum algorithms~\cite{cerezo2020variationalreview,bharti2021noisy,endo2021hybrid,peruzzo2014variational,farhi2014quantum,mcclean2016theory,khatri2019quantum,sharma2019noise,larose2019variational,arrasmith2019variational,cerezo2020variationalfidelity,endo2020variational,cirstoiu2020variational} and quantum neural networks~\cite{schuld2014quest,cong2019quantum,verdon2018universal,abbas2020power,beer2020training,biamonte2017quantum}, which respectively focus on ground state (and related) applications and data classification applications. PQCs have the potential to solve problems with shorter circuits and fewer qubits than traditional approaches. When training PQCs, one efficiently evaluates a task-specific cost function on a quantum computer, while employing a classical optimizer to optimize the PQC parameters. Off-the-shelf classical optimizers may not perform as well as optimizers that exploit the nature of quantum cost function landscape, and this has led to the field of quantum-aware optimizers~\cite{kubler2020adaptive,arrasmith2020operator,stokes2020quantum,koczor2019quantum,sweke2020stochastic,nakanishi2020sequential,harrow2019low,lavrijsen2020classical,parrish2019jacobi,fontana2020optimizing}. Of course, understanding the cost function landscape for PQCs will be crucial for designing quantum-aware optimizers. Unfortunately, very little is known about such landscapes. What makes quantum landscapes different from landscapes encountered in classical optimization? This is a fundamental question, not only for practical applications of quantum computers, but also for quantum foundations (i.e., efforts to understand the uniqueness of quantum theory relative to other mathematical theories~\cite{janotta2014generalized}). One important discovery about these landscapes is that deep PQCs (i.e., those that have many sequential quantum operations) exhibit \textit{barren plateaus}~\cite{mcclean2018barren}. A barren plateau (BP) is a landscape where the magnitudes of gradients are exponentially suppressed with growing problem size. This result was generalized to show that the higher the expressibility of a PQC (i.e., how many different states it can prepare), the more the gradients will be suppressed~\cite{holmes2021connecting}. It was also proven that when the cost function depends on global properties of the solution state, BPs arise even for shallow PQCs~\cite{cerezo2020cost,uvarov2020barren}. Large degrees of entanglement can also give rise to BPs~\cite{sharma2020trainability,marrero2020entanglement,patti2020entanglement}. Finally, BPs can arise due to quantum error processes washing out all landscape features, and these BPs are called noise-induced BPs~\cite{wang2020noise}. 
Due to the prevalence of BPs, several strategies have been developed to circumvent or mitigate the effect of barren plateaus~\cite{verdon2019learning,volkoff2021large,skolik2020layerwise,grant2019initialization,pesah2020absence,zhang2020toward,bharti2020quantum,cerezo2020variational,sauvage2021flip,liao2021quantum}. A different landscape feature that has been observed for PQCs is a \textit{narrow gorge}~\cite{cerezo2020cost}. The narrow gorge phenomenon is where the well around a minimum contracts exponentially quickly with growing system size. The narrow gorge phenomenon is not fully general, as for noise-induced BPs the minima are flattened as well. However, recognizing that narrow gorges are caused by the cost function values probabilistically concentrating about a mean, we can rephrase this phenomenon in terms of concentration. It has been suggested previously that the exponential concentration of the cost values (often in the form of a narrow gorge) always accompanies a BP, but this has not yet been proven~\cite{cerezo2020cost}. \begin{figure*}[t!] \centering \includegraphics[width=0.75\textwidth]{totalFig.pdf} \caption{\textbf{Schematic representation of our results.} Here we plot in green (blue) a cross section of four different theoretically possible cost landscapes for a system with $n = 50$ ($n=4$) qubits. Landscape a) has a narrow gorge and a barren plateau, landscape b) has a narrow gorge but no barren plateau, landscape c) has no narrow gorge but a barren plateau, and landscape d) has no narrow gorge and no barren plateau. While these four variations are mathematically possible, we prove that for parameterized quantum circuits (PQCs), under a natural set of assumptions, barren plateaus and narrow gorges are perfectly correlated. That is, for PQCs, only options a) and d) are possible. For details of the cost functions plotted here, see Appendix~\ref{sec:landscapes}.}\label{fig:Schematic} \end{figure*} In this work, we prove that the suppression of cost function gradients is always accompanied by a probabilistic concentration of cost values, and vice versa. As a consequence, we show that saying that a landscape features a BP is logically equivalent to saying that the cost values are, on average, exponentially concentrated about the mean value, which for landscapes with a well-defined minimum means a narrow gorge. Figure~\ref{fig:Schematic} illustrates this phenomenon. This result is practically significant as it means that numerical testing for a barren plateau can be significantly sped up by gathering statistics on finite differences between random points rather than computing costly gradients. At the conceptual level, our results show that quantum mechanics rules out certain kinds of cost landscapes (see Fig.~\ref{fig:Schematic}), which may have foundational interest in the effort to understand the uniqueness of quantum theory. Below we first provide background regarding PQCs and barren plateaus. Next we present our analytical results proving the connection between different landscape features for PQCs. We then show a numerical demonstration of the similar scaling of the variances of finite differences and gradients. Finally, we conclude and discuss the practical applications of this result. \section{Background}\label{sec:Background} To set the stage for our analytical results we will first review some prior results and establish our notation. First, we will define the class of cost functions we consider and state our assumptions about the ansatzes used.
Next we discuss barren plateaus and state the formal definition we will be working with. We then review a recent result that ties the expressibility of a quantum circuit to the suppression of cost function gradients. Finally, we discuss previous results on the narrow gorge phenomenon and more generally the concentration of cost function values. \subsection{Cost Functions and PQC Structure}\label{sec:costs} In this work we will consider a highly general form of cost function, which we will train our parameterized quantum circuit to minimize. The PQC is expressed as a unitary $U(\vec{\theta})$ parameterized by a vector $\vec{\theta}$ of $m$ continuous parameters. If we are optimizing over a training set $\mathcal{S}$ containing $S$ initial state and measurement operator pairs $(\rho_i, O_i )$, we will write this cost as \begin{equation}\label{eq:cost} C(\vec{\theta}) =\sum_{i=1}^{S}a_i {\rm Tr}[U(\vec{\theta})\rho_i U^\dagger(\vec{\theta}) O_i]. \end{equation} Here the $a_i$'s are the coefficients of the linear combination of expectation values. The subscript $i$ highlights the fact that one could work with different functions for each state in the training set. We note that for VQAs such as the variational quantum eigensolver (VQE) the training set would be of size $S=1$. For quantum machine learning tasks, however, one will typically work with larger training sets. We will assume that the parameterized unitary $U$ can be written in the following form: \begin{equation} U(\vec{\theta}) = \prod_{l=1}^{m} e^{-i\theta_lH_l} W_l \, , \end{equation} where $\{W_l\}$ is some set of fixed unitaries. We further assume that the generators $H_l$ have two distinct, non-zero eigenvalues, normalized to be $\pm 1/2$ so that the parameter-shift rule used below holds exactly. In this case, the parameter space is periodic and has a maximum length scale $L_\textrm{max}$ which is in $\mathcal{O}(\operatorname{poly}(m))$. Finally, for the sake of computational efficiency we assume that for $n$ qubits the number of parameters $m$ is in $\mathcal{O}(\operatorname{poly}(n))$. We will later wish to consider the portions of the PQC $U(\vec{\theta})$ that come before or after a given parameter $\theta_j$. To that end we define the left and right portions of the PQC split at the index $j$ to be \begin{equation} U(\vec{\theta})_{L,j}=\prod_{l=j+1}^{m} e^{-i\theta_lH_l} W_l\,, \end{equation} and \begin{equation} U(\vec{\theta})_{R,j}=\prod_{l=1}^{j} e^{-i\theta_lH_l} W_l\, , \end{equation} respectively. \subsection{Barren Plateaus} A barren plateau landscape is one where the average magnitudes of the cost gradients are exponentially suppressed. This notion is usually mathematically formalized in terms of the mean and variance of partial derivatives of the cost function. \begin{definition}[Barren Plateau]\label{def:BP} Consider the cost function defined in Eq.~\eqref{eq:cost}. This cost exhibits a barren plateau if, for all $\theta_\mu\in\vec{\theta}$, the variance of its partial derivative vanishes exponentially with $n$, i.e., as \begin{equation}\label{eq:var} {\rm Var}_{\vec{\theta}}[\partial_\mu C(\vec{\theta})]\leq F(n)\,,\quad \text{with}\quad F(n)\in\mathcal{O}\left(\frac{1}{b^n}\right)\,, \end{equation} for some $b> 1$. As indicated, the expectation values are taken over the parameters $\vec{\theta}$. \end{definition} An immediate consequence of Definition~\ref{def:BP} is that the fraction of parameter space with a partial derivative magnitude above a threshold $c>0$ is also exponentially suppressed.
To see this, first note that on a periodic parameter space the mean partial derivative $\text{E}_{\vec{\theta}}[\partial_\mu C(\vec{\theta})]$ will always be exactly zero, as follows from the fact that the integral of $\partial_\mu C(\vec{\theta})$ along a closed curve must be zero. It then follows from Chebyshev's inequality that the probability that the cost function partial derivative deviates from its mean value (of zero) by at least $c$ is bounded as \begin{equation}\label{eq:cheb} P(|\partial_\mu C(\vec{\theta})|\geq c)\leq \frac{{\rm Var}_{\vec{\theta}}[\partial_\mu C(\vec{\theta})]}{c^2}\,. \end{equation} Thus if the variance in the partial derivative of the cost vanishes exponentially, the probability that the magnitude of the partial derivative exceeds any fixed threshold similarly vanishes. When a cost exhibits a barren plateau according to Definition~\ref{def:BP}, an exponentially large precision (i.e., an exponentially large number of shots) is needed to navigate through the flat landscape and determine a cost minimizing direction. Moreover, the exponential precision requirement has been shown to affect both derivative-based~\cite{cerezo2020impact} and derivative-free~\cite{arrasmith2020effect} optimization methods, meaning that one cannot improve the trainability of the cost function in a barren plateau by simply changing the optimization method. The barren plateau phenomenon was first identified in~\cite{mcclean2018barren}. Therein, the authors showed that randomly initialized deep unstructured circuits exhibit exponentially vanishing gradients. Here, the mechanism leading to barren plateaus is essentially that the set of unitaries generated by the parameterized quantum circuit is random enough that it becomes a $2$-design (i.e., the distribution matches that of the uniform Haar distribution of unitaries up to the second moment). In this case, the randomness of the circuit makes the cost function values concentrate around their average and hence changes in individual parameters lead to small changes in cost values. This intuition has also been used to show that barren plateaus preclude learning scramblers~\cite{holmes2020barren}. In addition, we note that the randomness-induced barren plateaus can also be linked to the entanglement generated by the quantum circuit. Specifically, it has been shown that circuits which generate large amounts of entanglement also lead to barren plateaus~\cite{sharma2020trainability,patti2020entanglement,marrero2020entanglement}. The barren plateau phenomenon was later studied in shallow layered hardware efficient PQCs which are not random enough (or entangling enough) to lead to randomness-induced barren plateaus. In~\cite{cerezo2020cost} the authors analyze parameterized quantum circuits that are composed of random two-qubit gates acting on neighboring qubits in a brick-like fashion. Here it was proved that the locality of the operators $O_i$ in Eq.~\eqref{eq:cost} can be linked to the presence of barren plateaus, so that global cost functions (i.e., when $O_i$ acts non-trivially on all qubits) have barren plateaus irrespective of the circuit depth. In this case, the mechanism leading to barren plateaus is not the randomness in the circuit, but rather the fact that the cost function trains the parameters by comparing objects that live in exponentially large Hilbert spaces. On the other hand, it was shown that local cost functions (i.e.
when $O_i$ acts non-trivially on at most two neighboring qubits) can be generically trainable for shallow circuits, as here the variance ${\rm Var}_{\vec{\theta}}[\partial_\mu C(\vec{\theta})]$ is at worst polynomially vanishing with $n$. Finally, we remark that it has also been shown that some types of hardware noise acting throughout the quantum circuit can also lead to barren plateaus. As proven in~\cite{wang2020noise}, unital quantum noise corrupts the quantum state as it evolves through the circuit and maps it to the fixed point of the noise model (i.e., the maximally mixed state). This noise-induced barren plateau then flattens the whole landscape and again leads to exponentially vanishing gradients. \subsection{Expressibility} Recent work has generalized the notion of a barren plateau by providing an upper bound on the variance of partial derivatives based on more general notions of expressibility~\cite{holmes2021connecting}. To formalize the notion of expressibility, we define $\mathbb{U}$ as the set of unitaries that are reachable by varying the parameters of the PQC $U(\vec{\theta})$. Intuitively, a maximally expressive PQC could reach any unitary in the unitary group $\mathcal{U}(2^N)$. The expressibility is therefore quantified by comparing a uniform distribution over $\mathbb{U}$ with the uniform (Haar) distribution over $\mathcal{U}(2^N)$. This comparison is done by defining the following expressibility super-operator~\cite{sim2005best,nakaji2020expressibility} \begin{equation} \begin{aligned} \mathcal{A}_{\mathbb{U}}^{(t)}(\cdot)=&\int_{\mathcal{U}(2^N)} d\mu(V) V^{\otimes t}(\cdot)(V^{\dagger})^{\otimes t}\\ &-\int_{\mathbb{U}} dU U^{\otimes t}(\cdot)(U^{\dagger})^{\otimes t}. \end{aligned} \end{equation} Here $d\mu(V)$ is the Haar measure and $dU$ is the uniform measure over $\mathbb{U}$. If $\mathcal{A}_{\mathbb{U}}^{(t)}(O)=0$ for all operators $O$, then the $t$-th moments of $\mathbb{U}$ match those of the Haar distribution and $\mathbb{U}$ is called a $t$-design~\cite{divincenzo2002quantum,gross2007evenly,roberts2017chaos,low2010pseudo,hunter2019unitary}. As a $2$-design is sufficient to guarantee a barren plateau, bounding the suppression of cost gradients only requires considering $t=2$ and we will focus on that case. In the context of optimizing a cost of the form in Eq.~\eqref{eq:cost}, we do not need to consider the expressibility on all possible operators but rather on the initial states $\{\rho_i\}$ and on the measurement operators $\{O_i\}$. As such, the quantities \begin{equation} \varepsilon_{\mathbb{U}}^{O_i}\equiv\|\mathcal{A}_{\mathbb{U}}^{(2)}(O_i^{\otimes 2})\|_2 \end{equation} and \begin{equation} \varepsilon_{\mathbb{U}}^{\rho_i}\equiv\|\mathcal{A}_{\mathbb{U}}^{(2)}(\rho_i^{\otimes 2})\|_2 \end{equation} capture the expressibility of the PQC relative to the cost function. When these quantities are small, the PQC is highly expressible. The variance of the cost function partial derivative is then bounded from above by~\cite{holmes2021connecting}: \begin{equation}\label{eq:express_bound} \begin{aligned} {\rm Var}_{\vec{\theta}}[\partial_\mu C(\vec{\theta})]\leq& \widetilde{F}(n)+\sum_i\left( 4\varepsilon^{O_i}_{\mathbb{U_L}} \varepsilon^{\rho_i}_{\mathbb{U_R}}\right.\\ &\left.+ \frac{2^{n+2}\left(\varepsilon^{O_i}_{\mathbb{U_L}}\|O_i\|^2_2+ \varepsilon^{\rho_i}_{\mathbb{U_R}}\|\rho_i\|^2_2\right)}{2^{2n}-1}\right) \, .
\end{aligned} \end{equation} Here $\mathbb{U_L}$ and $\mathbb{U_R}$ are the sets of unitaries reachable by $U_L(\vec{\theta})$ and $U_R(\vec{\theta})$, respectively, and $\widetilde{F}(n)\in\mathcal{O}\left(1/\widetilde{b}^n\right)$ is the variance of the partial derivative of the cost for a 2-design (i.e., a maximally expressive PQC)~\cite{mcclean2018barren}. We note that for a maximally expressive PQC (i.e., a $2$-design) this bound becomes an equality. More generally, the bound implies that highly expressive PQCs will have flatter landscapes and hence be harder to train. \subsection{Narrow Gorges and Cost Concentration} The notion of a narrow gorge arose alongside investigations of barren plateaus~\cite{cerezo2020cost}. For an example barren plateau landscape considered in that work, the authors found that the volume of the ``gorges'' near the minima was exponentially small. More precisely, the probability that the cost associated with a randomly sampled parameter vector $\vec{\theta}$ was lower than any fixed (non-maximal) cost threshold could be upper bounded by a function that was exponentially suppressed with growing numbers of qubits $n$. Motivated by that example, we give the following intuitive definition for a narrow gorge. \begin{definition}[Narrow Gorge]\label{def:NG} Consider the cost function defined in Eq.~\eqref{eq:cost}. This cost exhibits a narrow gorge if: \begin{itemize} \item There exist cost minima which are lower than the mean cost value by at least \begin{equation} \Delta(n)>0\,,\quad \textrm{with} \quad \Delta(n)\in\Omega\left(1/\textrm{poly}(n)\right). \end{equation} \item The probability that the cost function value at a point chosen from the uniform distribution over the parameter space differs from the mean by at least $\delta$ is bounded as \begin{equation}\label{eq:ng_prob} P(|E_{\vec{\theta}}[C(\vec{\theta})]-C(\vec{\theta})|\ge \delta)\le \frac{G(n)}{\delta^2} \end{equation} with $G(n)\in\mathcal{O}\left(\frac{1}{b^n}\right)$ for some $b> 1$. \end{itemize} \end{definition} We remark that the probability in Eq.~\eqref{eq:ng_prob} is with respect to the uniform distribution over the parameters $\vec{\theta}$. We can therefore understand it as a fractional volume of parameter space. A landscape with a narrow gorge is then one where the cost function departs from the mean value by at least $\delta$ for an exponentially small fraction of the volume of the landscape. We note that the requirement that there exist cost minima lower than the mean does not always hold. In addition to simple cases like constant cost functions, this can arise in the case of a noise-induced barren plateau, where the noise can wash out the minima~\cite{wang2020noise}. In these cases there may be no gorge, but one can instead consider the concentration of the cost function about the mean. A more general concept than a narrow gorge is then an exponential (probabilistic) concentration about the mean. We will define this concentration by bounding the variance of cost function differences as \begin{equation}\label{eq:concentrate} {\rm Var}_{\vec{\theta}_A}\left[E_{\vec{\theta}}[C(\vec{\theta})]-C(\vec{\theta}_A) \right]\le G(n) \end{equation} again with $G(n)\in\mathcal{O}\left(\frac{1}{b^n}\right)$ for some $b> 1$. We note that so long as the minima are at least $\Delta(n)>0$ below the mean cost value for $\Delta(n)\in\Omega\left(1/\textrm{poly}(n)\right)$, such a concentration implies a narrow gorge via Chebyshev's inequality. \section{Results} We begin our results by stating the following Lemma.
\begin{lemma}\label{lem:zero_mean_diffs} For any cost function defined on a periodic parameter space, the following statements hold. \begin{itemize} \item The mean value of the difference in cost values between $\vec{\theta}_A$, a random draw from the uniform probability distribution over the parameter space, and the point $\vec{\theta}_A+L\hat{e}$ for a deterministically chosen distance $L$ and direction indicated by the unit vector $\hat{e}$ is zero: \begin{equation} \text{E}_{\vec{\theta}_A}\left[C(\vec{\theta}_A+L\hat{e})-C(\vec{\theta}_A)\right]=0. \end{equation} \item The mean value of the difference in cost values between two points $\vec{\theta}_A$ and $\vec{\theta}_B$, both random draws from the uniform distribution over the parameter space, is zero: \begin{equation} \begin{aligned} \text{E}_{\vec{\theta}_A}\left[C(\vec{\theta}_B)-C(\vec{\theta}_A)\right]=&\text{E}_{\vec{\theta}_B}\left[C(\vec{\theta}_B)-C(\vec{\theta}_A)\right]\\ =&0. \end{aligned} \end{equation} \end{itemize} \end{lemma} We note that Lemma~\ref{lem:zero_mean_diffs} can be thought of as a direct consequence of the fact that the average gradient vanishes, since a finite difference can be computed via $C(\vec{\theta}_B)-C(\vec{\theta}_A)=\int_{\vec{\theta}_A}^{\vec{\theta}_B}\vec{\nabla}C(\vec{\theta})\cdot d\vec{\theta}$. For more details see the proof in Appendix~\ref{app:zero_mean_diffs}. \subsection{Barren plateaus imply cost concentration} We now state the first of our main results. \begin{theorem}\label{thm:grad->diffs} For any parameterized quantum circuit as defined in Section~\ref{sec:Background}, if ${\rm Var}_{\vec{\theta}}\Big[\partial_{\theta_j}C(\vec{\theta})\Big]\le F(n)$ for all $\theta_j$, then: \begin{itemize} \item The variance of cost function differences between a randomly chosen point in parameter space, $\vec{\theta}_A$, and the point $\vec{\theta}_A+L\hat{e}$ for any deterministically chosen distance $L$ and direction indicated by the unit vector $\hat{e}$ is bounded above by: \begin{equation} {\rm Var}_{\vec{\theta}_A}\left(C(\vec{\theta}_A+L\hat{e})-C(\vec{\theta}_A) \right)\le m^2 L^2 F(n)\,, \end{equation} where $m$ is the dimension of the parameter space. \item The variance of the cost function differences between two independently chosen random points in parameter space, $\vec{\theta}_A$ and $\vec{\theta}_B$, is bounded above by: \begin{equation} {\rm Var}_{\vec{\theta}_A}\left(C(\vec{\theta}_B)-C(\vec{\theta}_A) \right)\le m^2 L_{\textrm{max}}^2 F(n)\,, \end{equation} where $L_\textrm{max}$, the maximum length scale of the periodic parameter space, is an upper bound on the distance between the two points. \end{itemize} \end{theorem} As with Lemma~\ref{lem:zero_mean_diffs}, the basic idea behind this theorem is that the second moment of a finite difference can be related to the second moment of a gradient by integration. See Appendix~\ref{app:BP->NG} for the full proof of Theorem~\ref{thm:grad->diffs}. As an immediate consequence of this theorem we then have the following corollary. \begin{corollary}\label{corr:BP=>NG} If a cost function landscape exhibits a barren plateau, then it also exhibits an exponential concentration of cost values. Additionally, if there exist minima at least $\Delta(n)>0$ below the mean cost value for $\Delta(n)\in\Omega\left(1/\textrm{poly}(n)\right)$, that landscape also has a narrow gorge. \end{corollary} Corollary~\ref{corr:BP=>NG} follows from plugging Definition~\ref{def:BP} into Theorem~\ref{thm:grad->diffs}, averaging over $\vec{\theta}_B$, and applying Chebyshev's inequality.
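As a simple numerical sanity check of Lemma~\ref{lem:zero_mean_diffs} and of the bound in Theorem~\ref{thm:grad->diffs}, the following Python snippet evaluates both statements for a toy periodic cost, $C(\vec{\theta})=\prod_j \cos\theta_j$, rather than for a PQC cost; the number of parameters, the shift length, and the sample size are arbitrary choices made only for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, n_samples, L = 6, 200_000, 0.7      # parameters, samples, shift length

def cost(theta):                       # toy periodic cost, theta: (samples, m)
    return np.prod(np.cos(theta), axis=1)

def grad_j(theta, j):                  # exact partial derivative dC/dtheta_j
    rest = np.prod(np.cos(np.delete(theta, j, axis=1)), axis=1)
    return -np.sin(theta[:, j]) * rest

theta_A = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, m))
e_hat = rng.normal(size=m)
e_hat /= np.linalg.norm(e_hat)         # direction drawn once, then held fixed

diffs = cost(theta_A + L * e_hat) - cost(theta_A)
F = max(np.var(grad_j(theta_A, j)) for j in range(m))   # stand-in for F(n)

print(f"mean of differences : {diffs.mean():+.2e}  (Lemma 1: ~0)")
print(f"Var of differences  : {diffs.var():.3e}")
print(f"bound m^2 L^2 F     : {m**2 * L**2 * F:.3e}")
\end{verbatim}
For this toy cost the empirical mean difference is consistent with zero and the variance of the differences sits comfortably below the $m^2L^2F$ bound, as the theorem requires.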
We note that combining Theorem~\ref{thm:grad->diffs} and Eq.~\eqref{eq:express_bound} allows one to show the extent to which the cost must be concentrated for a given level of expressibility. Hence, by leveraging the results of Ref.~\cite{holmes2021connecting}, we can connect the expressibility of a PQC to the concentration of the cost landscape and also to the existence of narrow gorges. We now remark on the significance of this result for the different types of barren plateau landscapes. For barren plateaus arising due to expressibility~\cite{mcclean2018barren,holmes2021connecting}, entanglement~\cite{sharma2020trainability,marrero2020entanglement,patti2020entanglement}, or a global cost function~\cite{cerezo2020cost}, the flatness of the landscape does not preclude good minima. Therefore, assuming a reasonable PQC structure, each of these barren plateau classes will always be accompanied by narrow gorges. Noise-induced barren plateau landscapes~\cite{wang2020noise}, on the other hand, flatten all features (including the minima) exponentially quickly and so naturally exhibit cost concentration. However, barren plateau landscapes due to noise do not exhibit narrow gorges as they do not have any gorges left to be narrow. \subsection{Cost concentration implies barren plateaus} We also have the following result. \begin{lemma}\label{lemma:diffs->grad} For any parameterized quantum circuit as defined in Section~\ref{sec:Background}, if ${\rm Var}_{\vec{\theta}_A}\left(C(\vec{\theta}_A+L\hat{e})-C(\vec{\theta}_A) \right)\le G(n)$ for any deterministically chosen distance $L$ and direction indicated by the unit vector $\hat{e}$, then the variance of the gradient can be bounded above by: \begin{equation} {\rm Var}_{\vec{\theta}}\Big[\partial_{\theta_j}C(\vec{\theta})\Big]\le \frac{G(n)}{4}. \end{equation} \end{lemma} This lemma follows directly from noting that the parameter shift rule allows us to write \begin{equation} \partial_{\theta_j}C(\vec{\theta})=\frac{C(\vec{\theta}'+\pi\hat{e}_j)-C(\vec{\theta}')}{2} \end{equation} where $\hat{e}_j$ is the unit vector along the $j$-th parameter direction and $\vec{\theta}'=\vec{\theta}-(\pi/2)\hat{e}_j$. Combining Theorem~\ref{thm:grad->diffs} and Lemma~\ref{lemma:diffs->grad} then gives us the following result. \begin{theorem}\label{thm:grad=diffs} For any parameterized quantum circuit as defined in Section~\ref{sec:Background} the following statements hold: \begin{itemize} \item A cost landscape is a barren plateau landscape if and only if it exhibits exponential concentration of the cost function as in Eq.~\eqref{eq:concentrate}. \item A cost landscape for which there exist minima at least $\Delta(n)>0$ below the mean cost value (with $\Delta(n)\in\Omega\left(1/\textrm{poly}(n)\right)$) is a barren plateau landscape if and only if it is a narrow gorge landscape. \end{itemize} \end{theorem} As we will demonstrate numerically in the following section, Theorem~\ref{thm:grad=diffs} implies that when numerically testing for barren plateaus, one can look at the variance of cost differences rather than computing the variance of the comparably costly gradient evaluations. This result allows for a significant speed-up for numerically testing for barren plateaus since it removes the need to consider each parameter separately. \section{Numerical Demonstration}\label{sec:Numerics} Here we numerically demonstrate the equivalence of cost concentration and barren plateaus.
For these numerical implementations, we consider a layered hardware efficient PQC, \begin{equation} \label{eq:HEA} U(\vec{\theta}, D) := \prod_{l=1}^D W V(\vec{\theta}_l) \, , \end{equation} composed of $D$ alternating layers of random single-qubit gates and entangling gates. The single-qubit layer consists of a product of random $x$, $y$ and $z$ rotations on each qubit. That is, \begin{equation} V(\vec{\theta}_l) = \prod_{j=1}^n R_{j}^x(\theta_l^{xj}) R_{j}^y(\theta_l^{yj}) R_{j}^z(\theta_l^{zj}) \, , \end{equation} where $R_j^{k}(\theta_l^{kj})$ is a rotation of the $j$-th qubit by an angle $\theta_l^{kj}$ about the $k = x, y$ or $z$ axis. The entangling layer, \begin{equation} W = \prod_{j = 1}^{n-1} \text{C-Phase}_{j, j+1} \, , \end{equation} consists of a ladder of controlled-phase operations, $\text{C-Phase}$, between adjacent qubits in a 1-dimensional array. For concreteness, we suppose the circuit is initialized in $\rho = \left(|\psi_0\rangle \langle \psi_0 |\right)^{\otimes n}$ where $\ket{\psi_0} = \exp(-i(\pi/8)\sigma_Y) \ket{0}$ and focus on a 2-local cost where the measurement operator is composed of Pauli-$z$ measurements on the first and second qubits, $O = \sigma^z_1 \sigma^z_2$. We used TensorFlow Quantum~\cite{broughton2020tensorflow} to calculate the second moment of the derivative of the cost and its finite differences for an ensemble of 2000 random initializations. In Fig.~\ref{fig:numerics} we plot the variance in the partial derivative with respect to $\theta_1^{x1}$, the $x$ rotation angle for the first qubit in the first layer. While similar behaviour is expected for other parameters (see Appendix~\ref{app:paramdep} for more details), the decision to show the data for $\theta_1^{x1}$ is somewhat arbitrary. Strictly speaking, one would need to evaluate the variance of the partial derivative with respect to all $3 D n$ parameters in order to determine whether a general cost has a barren plateau. In contrast, evaluating the second moment of the cost requires simply generating two random ensembles of cost evaluations, taking their difference, and evaluating the variance of the resulting ensemble of differences. This is substantially less resource intensive than evaluating the variance of the partial derivatives with respect to all parameters. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures/var_grads.pdf} \caption{\textbf{Comparing the second moments of finite differences and derivatives.} The variance in the partial derivative of the cost (dashed) and the variance in its finite differences (solid) as a function of the number of qubits $n$ as we vary the circuit depth $D$ of a hardware efficient PQC. In both cases we consider a local cost with $O = \sigma^z_1 \sigma^z_2$, we suppose each qubit is prepared in the state $\exp(-i(\pi/8)\sigma_Y) \ket{0}$, and the variance is taken over an ensemble of 2000 random initializations. The partial derivative is taken with respect to $\theta_1^{x1}$, the $x$ rotation angle of the first qubit in the first layer.} \label{fig:numerics} \end{figure} As is shown in Fig.~\ref{fig:numerics}, we find that the second moment of the derivative of the cost and its finite differences scale equivalently, as is expected from Theorem~\ref{thm:grad=diffs}. Specifically, for sufficiently deep circuits, namely $D \geq 60$, both the variance of the partial derivative and that of the cost differences vanish exponentially with the size of the system $n$; however, for shallower circuits a constant scaling may be observed.
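For readers without access to TensorFlow Quantum, the following self-contained NumPy sketch reproduces the above comparison qualitatively for a small instance of the ansatz in Eq.~\eqref{eq:HEA}. It is an illustrative re-implementation rather than the code used to produce Fig.~\ref{fig:numerics}: the qubit number, depth, and ensemble size are deliberately small so that it runs in seconds, and the single-qubit rotations are implemented as $e^{-i\theta P/2}$, for which the $\pm\pi/2$ parameter-shift rule used below is exact.
\begin{verbatim}
import numpy as np

n, D, n_trials = 4, 10, 500
rng = np.random.default_rng(1)

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
Y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)

def rot(P, angle):
    # Single-qubit rotation exp(-i * angle * P / 2)
    return np.cos(angle / 2) * I2 - 1j * np.sin(angle / 2) * P

def embed(gate, qubit):
    # Lift a single-qubit gate to the full n-qubit Hilbert space
    out = np.array([[1.0 + 0.0j]])
    for q in range(n):
        out = np.kron(out, gate if q == qubit else I2)
    return out

def cphase(q):
    # Controlled-phase (CZ) gate between qubits q and q+1
    d = np.ones(2 ** n, dtype=complex)
    for idx in range(2 ** n):
        bits = format(idx, f"0{n}b")
        if bits[q] == "1" and bits[q + 1] == "1":
            d[idx] = -1.0
    return np.diag(d)

W = np.eye(2 ** n, dtype=complex)
for q in range(n - 1):
    W = cphase(q) @ W

# Initial product state: each qubit in exp(-i (pi/8) sigma_Y) |0>
one_qubit = rot(Y, np.pi / 4) @ np.array([1.0, 0.0], dtype=complex)
psi0 = one_qubit
for _ in range(n - 1):
    psi0 = np.kron(psi0, one_qubit)

O = embed(Z, 0) @ embed(Z, 1)          # local cost operator sigma^z_1 sigma^z_2

def cost(theta):
    # theta has shape (D, n, 3): layer, qubit, rotation axis (x, y, z)
    psi = psi0.copy()
    for l in range(D):
        for q in range(n):
            for k, P in enumerate((X, Y, Z)):
                psi = embed(rot(P, theta[l, q, k]), q) @ psi
        psi = W @ psi
    return np.real(np.vdot(psi, O @ psi))

grads, diffs = [], []
for _ in range(n_trials):
    th_A = rng.uniform(0.0, 2.0 * np.pi, size=(D, n, 3))
    th_B = rng.uniform(0.0, 2.0 * np.pi, size=(D, n, 3))
    diffs.append(cost(th_B) - cost(th_A))
    plus, minus = th_A.copy(), th_A.copy()
    plus[0, 0, 0] += np.pi / 2
    minus[0, 0, 0] -= np.pi / 2
    grads.append(0.5 * (cost(plus) - cost(minus)))   # parameter-shift derivative

print(f"Var[dC/dtheta_1^x1]   : {np.var(grads):.3e}")
print(f"Var[cost differences] : {np.var(diffs):.3e}")
\end{verbatim}
Varying the assumed $n$ and $D$ in this sketch and recording the two printed variances reproduces the qualitative trend of Fig.~\ref{fig:numerics}: the two second moments track one another as the system size grows.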
\section{Discussion} By minimizing both the number of qubits required and the depth of the circuits that need to be executed, variational quantum algorithms and quantum neural networks may yield a quantum advantage before fault-tolerant quantum devices are available. In these algorithms, a problem-specific cost function is minimized using a classical optimizer to vary the parameters in a parameterized quantum circuit (PQC). While this framework minimizes the quantum resources required to perform a computation, it potentially involves a difficult classical optimization. One particular impediment to efficient optimization is a barren plateau landscape, where gradients are exponentially suppressed as the problem size scales. These landscapes occur for highly expressive PQCs, for cost functions that measure global properties of quantum states, for PQCs that generate high degrees of entanglement, and for hardware with significant noise. Another problematic landscape feature is a narrow gorge, where the fraction of parameter space below some value becomes exponentially suppressed. While these narrow gorges were originally discovered in connection to barren plateaus, there has been debate as to whether or not they are always connected. In this work, we introduced a bound for how concentrated a cost function is about the mean based on a bound on the variance of the partial derivatives of the landscape. Additionally, we have pointed out that, due to the parameter shift rule for computing derivatives, it is simple to bound the variance of the partial derivatives by a bound on the variance of finite differences computed at random points. As a consequence of these bounds, we have proven that, assuming minima below the mean values exist, saying that a landscape exhibits a barren plateau is logically equivalent to saying that it exhibits a narrow gorge. The practical application of our result arises in the process of numerically demonstrating a barren plateau. Previous approaches have either needed to focus on the partial derivative of the cost with respect to some particular parameter(s) (potentially missing different behaviors for different parameters) or chosen to accept the computational cost of gathering statistics on the scaling of full gradient vectors. By proving that the exponential suppression of the cost differences between randomly chosen points guarantees a barren plateau, our result allows one to explore the scaling of the entire gradient for a cost equivalent to that of studying a single partial derivative. Our result thus allows for significantly more efficient testing for barren plateaus. On a deeper level, one can imagine all of the possible landscapes that are allowed by PQCs, and hence by quantum mechanics. One could label this field of research as \textit{quantum landscape theory}. Our work shows that some landscapes that are mathematically possible are actually ruled out by quantum mechanics (e.g., landscapes with a narrow gorge but no barren plateau, as illustrated in Fig.~\ref{fig:Schematic}). It is worth considering the analogy to efforts to distinguish quantum theory from general mathematical theories~\cite{janotta2014generalized}, which is similar to how we distinguish quantum landscapes from general mathematical landscapes. Hence, we believe that our work is interesting from a foundations perspective, and that quantum landscape theory appears to be an important foundational area of research. \section*{Acknowledgments} AA was supported by the U.S. 
Department of Energy (DOE), Office of Science, Office of High Energy Physics QuantISED program under Contract No.~DE-AC52-06NA25396. ZH and PJC were supported by the Los Alamos National Laboratory (LANL) ASC Beyond Moore's Law project. MC was supported by the Center for Nonlinear Studies at LANL. This work was supported by the U.S. DOE, Office of Science, Office of Advanced Scientific Computing Research, under the Accelerated Research in Quantum Computing (ARQC) program.
\section{Introduction} \label{sec:intro} CP violation through a Dirac phase in the CKM matrix of weak interactions has been well tested in flavored systems of hadrons. It is generally believed, however, that this cannot be the unique or even the dominant source of CP violation, because of the observed large baryon number asymmetry in our universe (BAU). One attractive solution to the BAU is offered by the mechanism of leptogenesis \cite{Fukugita:1986hr}, in which a lepton number asymmetry is first generated through CP violation in the lepton sector and is then partly converted into the BAU via sphaleron effects \cite{Kuzmin:1985mm}. Our current information on the leptonic mixing matrix comes dominantly from neutrino oscillation experiments \cite{Maltoni:2007zf}. The matrix is CKM-like involving a single Dirac phase if neutrinos are Dirac particles, but can contain two additional CP phases if neutrinos are of Majorana nature. The oscillations are blind to the latter Majorana phases while their sensitivity to the Dirac phase is seriously diminished by a very small, if not vanishing, mixing angle out of three. This leaves CP violation in the lepton sector largely untested so far, except perhaps for neutrinoless double beta ($0\nu 2\beta$) decay experiments, which can be sensitive to CP phases but whose status is under debate. Nevertheless, there is another physical observable, the electric dipole moment (EDM), that can provide an independent probe of CP violation. The current experimental limits on the EDMs of the mercury atom \cite{Romalis:2000mg}, neutron \cite{Baker:2006ts}, electron \cite{Regan:2002ta} and muon \cite{Bennett:2008dy} are already very impressive, and further improvements are expected to take place in the near future \cite{Pospelov:2005pr}. These EDMs can in principle be induced by the Dirac phase in the CKM matrix of the standard model (SM). However, it was known long ago that the electric and chromoelectric dipole moments of quarks vanish to the two-loop order \cite{Shabalin:1978rs}. This was interpreted as a joint result of two features of the SM \cite{Liao:1999yt}, namely the unitarity of the CKM matrix and the purely left-handed chirality of the charged current, and relaxing either of them would yield quark EDMs at two loops \cite{Liao:1999fc}. The lepton EDMs then become extremely small in the SM as they are first induced at four loops \cite{Pospelov:1991zt}. This makes them a potentially ideal place to search for CP violation in the lepton sector. If neutrinos are Dirac particles, the EDMs of charged leptons will be hopelessly tiny. The situation for quarks in the SM repeats itself in the lepton sector in an even worse manner, since neutrinos can be considered degenerate to very good precision at the weak scale, in which case there is effectively no CP violation in the lepton sector. But the situation could be different when neutrinos are Majorana particles, because of peculiarities with Majorana CP phases \cite{deGouvea:2002gf}. Since Majorana phases dominate over the Dirac phase in this case, an observable lepton EDM would not only discover CP violation in the lepton sector but also expose the Majorana nature of neutrinos through a lepton number conserving quantity, in sharp contrast to $0\nu 2\beta$ decays. Indeed, as pointed out in \cite{Ng:1995cs}, when neutrinos are Majorana particles there is a topologically new type of two-loop Feynman diagram that can contribute to the charged lepton EDMs.
But it was found subsequently that this type of contribution is always severely suppressed by neutrino masses from virtual loops, whether one works in the standard type I seesaw model \cite{Archambault:2004td}, in one augmented with an additional Higgs doublet \cite{Chang:2004pba}, or in the type II seesaw \cite{deGouvea:2005jj}. The numbers obtained are actually even smaller than the four-loop result due to the CKM phase, and thus would not be observable in any foreseeable experiments. When neutrinos are Majorana particles, the lepton number may be violated either by a bare Majorana-type mass of heavy neutrinos that are singlets of the SM, or by some other fields that are active in the SM and couple in particular to leptons. The physics at low energies is much richer in the latter case, and the simplest choice would be to add a scalar triplet as in the type II seesaw model \cite{Konetschny:1977bn}. We are thus motivated to consider the most general Majorana-type Yukawa couplings of charged leptons to a doubly charged scalar. These couplings could arise as part of the interactions of the type II seesaw or of a larger extension of the SM, but they are not restricted to such settings. The CP violation encoded in the couplings can induce EDMs for the charged leptons, and we find that this contribution is indeed parametrically large. Besides the product of Yukawa couplings and a lepton mass factor for the chirality flip, the EDM is suppressed by charged lepton masses squared over four powers of the scalar mass and is partly enhanced by a logarithm. In particular, it incurs no suppression by neutrino masses since no neutrinos appear in virtual loops. To our knowledge, this is the largest contribution to lepton EDMs coming from a flavor dependent CP source. The paper is organized as follows. In the next section we describe the Majorana-type Yukawa couplings between the charged leptons and doubly charged scalars, and count the number of independent physical parameters. The two-loop diagrams for EDMs are then evaluated analytically in section 3, and a sum rule is found. Using the most stringent constraints from lepton flavor violating decays we estimate in section 4 the largest allowed values for the EDMs. We summarize and conclude in the last section.
\section{Majorana-type Yukawa couplings}
The relevant interactions for our study are the Majorana-type Yukawa couplings
\begin{eqnarray} {\cal L}_{\rm Yuk}=\ell^Tb{\cal C} P_L\ell\xi^{++}+\bar\ell b^\dagger{\cal C} P_R\bar\ell^T\xi^{--}
\label{eqn_yukawa} \end{eqnarray}
and the standard QED interactions
\begin{eqnarray} {\cal L}_{\rm QED}=-eA^\mu\bar\ell\gamma_\mu\ell
+2ieA^\mu(\xi^{--}\partial_\mu\xi^{++} - \xi^{++}\partial_\mu\xi^{--})
\label{eqn_qed} \end{eqnarray}
Here $\ell$, $\xi^{\pm\pm}$ and $A_\mu$ are respectively the charged lepton, doubly charged scalar and electromagnetic fields, and $e$ is the electromagnetic coupling. We use Greek letters to label the three charged lepton flavors. ${\cal C}=i\gamma^0\gamma^2$ is the matrix employed in charge conjugation and $P_{R,L}=\frac{1}{2}(1\pm\gamma_5)$ are chiral projectors. The Yukawa coupling matrix $b$ is symmetric in lepton flavors due to the antisymmetry of the fermion fields but is otherwise arbitrary. With $n$ flavors of leptons, $b$ has generally $n+\frac{1}{2}n(n-1)$ moduli and $n+\frac{1}{2}n(n-1)$ phases. All moduli are physical parameters. However, not all of the phases are physical. For instance, the phases in the diagonal entries $b_{\alpha\alpha}$ can all be removed by redefining the complex fields $\ell_\alpha$.
After this, there are no more degrees of freedom to rephase fields without reintroducing phases into $b_{\alpha\alpha}$. There are thus only $\frac{1}{2}n(n-1)$ physical phases. They signal $T$ and $CP$ violation as we analyze below. When the matrix $b$ is real, we can prescribe the $T$ and $CP$ transformations as $T\xi^{++}T^{-1}=+\xi^{++}$, $(CP)\xi^{++}(CP)^{-1}=-\xi^{--}$ so that both are preserved by the above interactions. If $b$ is purely imaginary, we can prescribe in the opposite manner to preserve both $T$ and $CP$, $T\xi^{++}T^{-1}=-\xi^{++}$, $(CP)\xi^{++}(CP)^{-1}=+\xi^{--}$. The latter case can of course be reduced to the former by rephasing the $\xi^{++}$ field by a factor of $i$. Therefore, $T$ and $CP$ symmetries are violated only when the matrix $b$ is genuinely complex, neither real nor purely imaginary. The above results are general and do not rely on any model. It is also possible to preserve lepton number by assigning two units to $\xi^{--}$. Our later calculation of EDM applies to the general case. But since the doubly charged scalars appear naturally in type II seesaw, it is interesting to consider this particular case separately: \begin{eqnarray} b=\frac{1}{2v_3}V^\ast m_\nu V^\dagger % \label{eqn_seesaw} \end{eqnarray} where $V$ is the lepton mixing matrix and $m_\nu$ the diagonal neutrino mass matrix with real, semi-positive eigenvalues $m_i$. The vacuum expectation value of the scalar triplet, $v_3$, is induced from that of the scalar doublet through a soft lepton number violating term. It is possible and common practice in type II seesaw to arrange order one Yukawa couplings $b$ by assuming $v_3$ to be the same order of magnitude as $m_\nu$. The moduli in $b$ correspond to $n$ neutrino masses (over $|v_3|$) plus $\frac{1}{2}n(n-1)$ mixing angles in $V$, while the physical phases are equivalent to $\frac{1}{2}(n-1)(n-2)$ Dirac phases and $(n-1)$ Majorana phases in $V$. \section{Evaluation of electric dipole moments} Now we calculate the EDM $d_\alpha$ induced for the charged lepton $\ell_\alpha$ due to interactions in eqs. (\ref{eqn_yukawa}, \ref{eqn_qed}). The effective EDM interaction is defined as \begin{eqnarray} {\cal L}_{\rm EDM}=-\frac{i}{2}d_\alpha \bar\ell_\alpha\gamma_5\sigma_{\mu\nu}\ell_\alpha F^{\mu\nu}% \label{eqn_edm} \end{eqnarray} There is no contribution at one loop level since the matrix element $b_{\alpha\beta}$ always appears in a self-conjugate form, $|b_{\alpha\beta}|^2$, so that no phases can survive. The other way to see this is to notice that, when computing $d_\alpha$ for a specific $\alpha$, one can choose suitable phases for the $\ell_\beta$ fields so that all of $b_{\alpha\beta}$ are real. Thus more factors of $b$ have to be involved to induce an EDM, and the first contribution occurs at two loop level. The two-loop Feynman diagrams contributing to $d_\alpha$ are depicted in Fig. 1. The incoming and outgoing momenta of the $\ell_\alpha$ are respectively $p\pm\frac{1}{2}q$ with $q$ being the outgoing momentum of the photon attached at the vertex indicated by $\otimes$. The arrows in the graphs denote the flow of negative charges, and the summation over the virtual charged leptons $\ell_\beta$, $\ell_\gamma$ and $\ell_\delta$ is implied. We find that because of the chirality structure in ${\cal L}_{\rm Yuk}$ the chirality flip required by the EDM has to be done by the external lepton mass, $m_\alpha$. Upon extracting out this mass factor we ignore further dependence on it. 
This is a good approximation for practical purposes with incurred relative errors of order $O(r_\alpha)$, where $r_\alpha=m^2_\alpha/m^2$ and $m$ is the mass of $\xi^{\pm\pm}$. The dependence on other charged lepton masses enters in a quadratic form, i.e., via $r_{\beta,\gamma,\delta}$. The lepton flavor dependence in the relevant term of each graph can thus be described as \begin{eqnarray} b^\ast_{\gamma\alpha}b^\ast_{\delta\beta}b_{\delta\gamma}b_{\beta\alpha} f(r_\beta,r_\gamma;r_\delta), \end{eqnarray} where $f$ is a function of the indicated mass ratios. $f$ is generally a sum of terms that are respectively symmetric and antisymmetric in $\beta$ and $\gamma$. The symmetric term cannot contribute to EDM since we are effectively summing the self-conjugated $b$ factors, $b^\ast_{\gamma\alpha}b^\ast_{\delta\beta} b_{\delta\gamma}b_{\beta\alpha}+{\rm c.c.}$, which do not vanish even for a real or purely imaginary $b$. The antisymmetric combination on the other hand survives only when $b$ is genuinely complex with CP phases involved: \begin{eqnarray} \frac{i}{2}\Im[b^\ast_{\gamma\alpha}b^\ast_{\delta\beta} b_{\delta\gamma}b_{\beta\alpha}] [f(r_\beta,r_\gamma;r_\delta)-f(r_\gamma,r_\beta;r_\delta)] \end{eqnarray} Since $r_\delta\ll 1$, the leading term, if not vanishing, is obtained by setting $r_\delta=0$. The $b$ factors then degenerate into the form \begin{eqnarray} \Im[b^\ast_{\gamma\alpha}b_{\beta\alpha}(b^\dagger b)_{\beta\gamma}] \end{eqnarray} \begin{center} \begin{picture}(320,280)(0,0) \SetOffset(20,220) % \ArrowLine(-20,0)(0,0)\ArrowLine(20,0)(0,0)% \ArrowLine(20,0)(80,0)\ArrowLine(100,0)(80,0)% \ArrowLine(100,0)(120,0)% \DashArrowArc(50,0)(30,0,180){3}% \DashArrowArcn(50,0)(50,180,95){3}\DashArrowArcn(50,0)(50,85,0){3}% \Text(-10,8)[]{$\alpha$}\Text(10,8)[]{$\beta$} \Text(50,8)[]{$\delta$}\Text(90,8)[]{$\gamma$} \Text(110,8)[]{$\alpha$} % \Text(50,50)[]{$\otimes$}% \Text(50,-20)[]{$(a)$} \SetOffset(180,220) % \ArrowLine(-20,0)(0,0)\ArrowLine(20,0)(0,0) \ArrowLine(20,0)(80,0)\ArrowLine(100,0)(80,0)\ArrowLine(100,0)(120,0)% \DashArrowArc(50,0)(30,0,80){3}\DashArrowArc(50,0)(30,100,180){3}% \DashArrowArcn(50,0)(50,180,0){3}% \Text(50,30)[]{$\otimes$}% \Text(50,-20)[]{$(b)$} \SetOffset(20,130) % \ArrowLine(-20,0)(0,0)\ArrowLine(20,0)(0,0) \ArrowLine(20,0)(46,0)\ArrowLine(54,0)(80,0) \ArrowLine(100,0)(80,0)\ArrowLine(100,0)(120,0)% \DashArrowArc(50,0)(30,0,180){3}\DashArrowArcn(50,0)(50,180,0){3}% \Text(50,0)[]{$\otimes$}% \Text(50,-20)[]{$(c)$} \SetOffset(180,130) % \ArrowLine(-20,0)(0,0)\ArrowLine(7,0)(0,0)\ArrowLine(20,0)(13,0) \ArrowLine(20,0)(80,0)\ArrowLine(100,0)(80,0)\ArrowLine(100,0)(120,0)% \DashArrowArc(50,0)(30,0,180){3}\DashArrowArcn(50,0)(50,180,0){3}% \Text(10,0)[]{$\otimes$}% \Text(50,-20)[]{$(d)$} \SetOffset(20,40) % \ArrowLine(-20,0)(0,0)\ArrowLine(20,0)(0,0) \ArrowLine(20,0)(80,0)\ArrowLine(100,0)(94,0)\ArrowLine(86,0)(80,0) \ArrowLine(100,0)(120,0)% \DashCArc(50,0)(30,0,180){3}\DashCArc(50,0)(50,0,180){3}% \DashArrowArc(50,0)(30,0,180){3}\DashArrowArcn(50,0)(50,180,0){3}% \Text(90,0)[]{$\otimes$}% \Text(50,-20)[]{$(e)$} \Text(0,-40)[l]{Figure 1. Diagrams contributing to EDM of $\ell_\alpha$.} \end{picture} \end{center} It is interesting that the above form does not vanish even in the case of two flavors where only a single Majorana phase can appear. To see the point, it suffices to consider the easier case of type II seesaw in eq. 
(\ref{eqn_seesaw}) with
\begin{eqnarray} V=\left(\begin{array}{cc} c&s\\-s&c \end{array}\right)\left( \begin{array}{cc}u&\\&1\end{array}\right), \end{eqnarray}
where $c$, $s$ are the cosine and sine of the mixing angle, and $u$ is the CP phase. Then, we find for instance
\begin{eqnarray} (2|v_3|)^4\,2i\,\Im[b^*_{\mu e}b_{ee}(b^\dagger b)_{e\mu}] =c^2s^2(m_1^2-m_2^2)m_1m_2(u^2-u^{*2}) \end{eqnarray}
which does not vanish in general. This is a feature pertaining to the Majorana-type couplings of charged leptons in eq. (\ref{eqn_yukawa}) or the Majorana nature of neutrinos in type II seesaw. We will see in the next section that the combination $b^\ast_{\delta\beta}b_{\delta\gamma}$ is no less constrained than $(b^\dagger b)_{\beta\gamma}$. It is thus a good approximation to keep the leading term at $r_\delta=0$ while ignoring small corrections that are at most of order $r_\delta\ln r_\delta$. The final answer for $d_\alpha$ thus looks like
\begin{eqnarray} d_\alpha=C\frac{em_\alpha}{m^2} \Im[b^\ast_{\gamma\alpha}b_{\beta\alpha}(b^\dagger b)_{\beta\gamma}] [f(r_\beta,r_\gamma;0)-f(r_\gamma,r_\beta;0)] \end{eqnarray}
where $C$ is a loop factor. This result entails an interesting sum rule
\begin{eqnarray} \frac{d_e}{m_e}+\frac{d_\mu}{m_\mu}+\frac{d_\tau}{m_\tau}=0
\label{eqn_sum} \end{eqnarray}
which is exact up to small relative corrections of $O(r_{\alpha,\delta})$. And up to logarithmic enhancements, we have approximately,
\begin{eqnarray} d_\alpha\sim C\frac{em_\alpha(r_\beta-r_\gamma)}{m^2} \Im[b^\ast_{\gamma\alpha}b_{\beta\alpha}(b^\dagger b)_{\beta\gamma}] \end{eqnarray}
We are now ready to present the results for the graphs. Graph (a) is symmetric in $\beta$ and $\gamma$, and does not contribute to EDM. Graphs (b) and (c) each contain symmetric and antisymmetric terms, while the sum of (d) and (e) is antisymmetric. The contribution to EDM is
\begin{eqnarray} d_\alpha=\frac{2^5em_\alpha}{24(4\pi)^4m^2} \Im\big[b^*_{\gamma\alpha}b_{\beta\alpha}(b^\dagger b)_{\beta\gamma}\big]J(r_\beta,r_\gamma)
\label{eqn_d} \end{eqnarray}
where again summation over $\beta,~\gamma$ is implied and $J$ is a sum over four graphs:
\begin{eqnarray} J(r_\beta,r_\gamma)=J^{(b)}(r_\beta,r_\gamma) +J^{(c)}(r_\beta,r_\gamma)+J^{(d+e)}(r_\beta,r_\gamma) \end{eqnarray}
Each of these four graphs has ultraviolet sub-divergences. In $4-2\epsilon$ dimensions, they are
\begin{eqnarray} J^{(b){\rm div}}=-2J^{(c){\rm div}}=-2J^{(d+e){\rm div}} =F(r_\beta,r_\gamma)\Gamma(\epsilon), \end{eqnarray}
where the arguments in $J$ are suppressed and
\begin{eqnarray*} F(b,c)&=&-\frac{2[b^2+c^2-bc-bc(b+c)+b^2c^2]} {(b-1)^2(c-1)^2(b-c)}\\ &&+\frac{b^2[-3c+b(1+b+c)]\ln b}{(b-1)^3(b-c)^2} -\frac{c^2[-3b+c(1+b+c)]\ln c}{(c-1)^3(b-c)^2} \end{eqnarray*}
The divergences are canceled on summation as they must be. The analytic result for the finite part is much more lengthy. In addition to the displayed function $F$, each graph involves one or two other twofold parameter integrals that can be worked out in terms of the fractions, logarithms $\ln r_\beta$ and $\ln r_\gamma$, and the dilogarithms ${\rm Li}_2(1-r_\beta)$ and ${\rm Li}_2(1-r_\gamma)$.
We will not record these exact results, but only the sum of all graphs expanded to leading order in $r_{\beta,\gamma}$:
\begin{eqnarray} J(r_\beta,r_\gamma)=\frac{r_\beta^2+r_\gamma^2-r_\beta r_\gamma} {r_\beta-r_\gamma} +\frac{r_\beta^2(r_\beta-3r_\gamma)\ln r_\beta -r_\gamma^2(r_\gamma-3r_\beta)\ln r_\gamma} {2(r_\beta-r_\gamma)^2} +\cdots,
\label{eqn_J} \end{eqnarray}
where the dots stand for higher order terms in $r_{\beta,\gamma}$. Since the charged lepton masses are hierarchical, further expansion is possible; for $1\gg r_\beta\gg r_\gamma$, we have
\begin{eqnarray} J(r_\beta,r_\gamma)= r_\beta-2r_\gamma+\frac{1}{2}(r_\beta-3r_\gamma)\ln r_\beta+\cdots \end{eqnarray}
We have tested that the leading terms shown in eq. (\ref{eqn_J}) recover the first three digits of the exact results at $m=200~{\rm GeV}$ and are good enough for our later numerical analysis.
\section{Numerical analysis} \label{sec:num}
Our result for the charged lepton EDMs shown in eqs. (\ref{eqn_d},\ref{eqn_J}) is suppressed by charged lepton masses squared over four powers of the scalar mass, and has a mild logarithmic enhancement factor. This is a parametrically large contribution. For instance, at our reference point $m=200~{\rm GeV}$, we have $J(r_e,r_\mu)\approx 1.83\times 10^{-6}$, $J(r_e,r_\tau)\approx J(r_\mu,r_\tau)\approx 2.94\times 10^{-4}$, and
\begin{eqnarray} d_e\sim 4\times 10^{-30}\times[b~{\rm factors}]~e~{\rm cm} \end{eqnarray}
which would be within the reach of the next generation of experiments at a sensitivity of order $10^{-31}~e~{\rm cm}$ \cite{Kawall:2004nv}. However, the same Yukawa couplings in eq. (\ref{eqn_yukawa}) induce other effects as well, and a realistic estimate of the EDMs should take into account the constraints from those effects. In this section, we present our numerical results in two approaches. The main constraints considered are from lepton flavor violating (LFV) decays of charged leptons. Also mentioned are anomalous magnetic moments and $0\nu 2\beta$ decays. We start with a model independent analysis in the next subsection and then specialize to the case of type II seesaw. The constraints in the second approach are more stringent because fewer free parameters are involved.
\subsection{Approach 1: model independent result}
The Yukawa couplings in eq. (\ref{eqn_yukawa}) mediate radiative LFV decays at the one-loop level and purely leptonic decays at tree level. The branching ratio for the radiative decay is
\begin{eqnarray} \textrm{Br}(\ell_\beta\to\ell_\alpha\gamma)=\frac{3^3\alpha}{2^6\pi} \left|\frac{(b^\dagger b)_{\alpha\beta}}{G_Fm^2}\right|^2 B_\beta B_\xi,
\label{eqn_br_rad} \end{eqnarray}
with $B_\mu=1$ and $B_\tau\approx 17\%$. $B_\xi$ is a model parameter which equals $(8/9)^2$ for the contribution of $\xi^{\pm\pm}$ alone (in this subsection) and equals $1$ when both $\xi^{\pm\pm}$ and $\xi^{\pm}$ are included as in the type II seesaw model (in the next). The branching ratio for the purely leptonic decay is
\begin{eqnarray} \textrm{Br}(\ell_\delta\to\bar\ell_\alpha\ell_\beta\ell_\gamma) =\frac{1}{2^2}\left|\frac{b_{\delta\alpha}b_{\beta\gamma}}{G_Fm^2}\right|^2 (2-\delta_{\beta\gamma})B_\delta,
\label{eqn_br_lep} \end{eqnarray}
which is only induced by $\xi^{\pm\pm}$ exchange. The factor $(2-\delta_{\beta\gamma})$ distinguishes between identical and nonidentical particles in the final state.
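Before turning to the constraints, we note as a cross-check that the leading-order expression (\ref{eqn_J}) indeed reproduces the values of $J$ quoted at the beginning of this section. The short Python sketch below (our own illustration; the function and variable names are arbitrary) evaluates it at the reference point $m=200~{\rm GeV}$.
\begin{verbatim}
import numpy as np

def J(rb, rc):
    # leading-order expression of eq. (eqn_J)
    t1 = (rb**2 + rc**2 - rb*rc) / (rb - rc)
    t2 = (rb**2*(rb - 3*rc)*np.log(rb)
          - rc**2*(rc - 3*rb)*np.log(rc)) / (2*(rb - rc)**2)
    return t1 + t2

m = 200.0                                        # doubly charged scalar mass in GeV
m_e, m_mu, m_tau = 0.000511, 0.10566, 1.77686    # charged lepton masses in GeV
r_e, r_mu, r_tau = (m_e/m)**2, (m_mu/m)**2, (m_tau/m)**2

print(J(r_e, r_mu), J(r_e, r_tau), J(r_mu, r_tau))
# approximately 1.8e-06, 2.9e-04, 2.9e-04
\end{verbatim}
The values agree with those quoted above to the accuracy relevant for the estimates that follow.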
Using the experimental bounds on the branching ratios we can constrain $|(b^\dagger b)_{\alpha\beta}|/(G_Fm^2)$ and $|b_{\delta\alpha}b_{\beta\gamma}|/(G_Fm^2)$ respectively. These numbers are shown in table 1, and will be employed to set conservative upper bounds on EDMs. \begin{table} \begin{center} \begin{tabular}{|c|l|l|l|l|l|} \hline modes & $\mu\to e\gamma$ & $\tau\to e\gamma$ % & $\tau\to\mu\gamma$ & $\mu\to 3e$ & $\tau\to 3e$\\ \hline Br &$1.2~10^{-11}$ \cite{Brooks:1999pu} % & $1.1~10^{-7}$ \cite{Aubert:2005wa} % & $4.5~10^{-8}$ \cite{Hayasaka:2007vc} % & $1.0~10^{-12}$ \cite{Bellgardt:1987du} % & $4.3~10^{-8}$ \cite{Aubert:2007pw}\\ \hline % bounds & $1.2~10^{-4}$ & $2.9~10^{-2}$ & $1.9~10^{-2}$ & $2.0~10^{-6}$ & $1.0~10^{-3}$ \\ \hline % \hline modes & $\tau\to 3\mu$ & $\tau\to\bar e 2\mu$ % & $\tau\to\bar\mu 2e$ & $\tau\to\bar ee\mu$ & $\tau\to\bar\mu\mu e$\\ \hline Br &$5.3~10^{-8}$ \cite{Aubert:2007pw} % & $5.6~10^{-8}$ \cite{Aubert:2007pw} % & $5.8~10^{-8}$ \cite{Aubert:2007pw} % & $8.0~10^{-8}$ \cite{Aubert:2007pw} % & $3.7~10^{-8}$ \cite{Aubert:2007pw}\\ \hline % bounds & $1.1~10^{-3}$ & $1.1~10^{-3}$ & $1.2~10^{-3}$ & $9.7~10^{-4}$ & $6.6~10^{-4}$ \\ \hline % \end{tabular} \caption{Experimental upper bounds on branching ratios of decays in eqs. (\ref{eqn_br_rad}, \ref{eqn_br_lep}) set upper bounds on $|(b^\dagger b)_{\alpha\beta}|/(G_Fm^2)$ and $|b_{\delta\alpha}b_{\beta\gamma}|/(G_Fm^2)$ respectively.}% \label{tab_1} \end{center} \end{table} Each $d_\alpha$ has three terms proportional to $J(r_e,r_\mu)$, $J(r_e,r_\tau)$, and $J(r_\mu,r_\tau)$ respectively, for instance, \begin{eqnarray} \frac{96\pi^4}{G_F^2}\frac{d_e}{em_e}&=&% +\Im\left[\frac{b^*_{\mu e}b_{ee}}{m^2G_F}% \frac{(b^\dagger b)_{e\mu}}{m^2G_F}\right]m^2J(r_e,r_\mu) \nonumber\\ &&+\Im\left[\frac{b^*_{\tau e}b_{ee}}{m^2G_F} % \frac{(b^\dagger b)_{e\tau}}{m^2G_F}\right]m^2J(r_e,r_\tau) \nonumber\\% &&+\Im\left[\frac{b^*_{\tau e}b_{\mu e}}{m^2G_F} % \frac{(b^\dagger b)_{\mu\tau}}{m^2G_F}\right]m^2J(r_\mu,r_\tau) \end{eqnarray} The first term is much smaller because of a smaller $J$ factor and more severely suppressed moduli of the products of $b$ factors, and can safely be ignored. In the optimistic case where the products of $b$ factors in the last two terms are purely imaginary and add constructively, we get at $m=200~{\rm GeV}$, \begin{eqnarray} |d_e|\le 8.1\times 10^{-35}~e~{\rm cm} \end{eqnarray} Since our bounds in table 1 are given independently of $m^2$ while $m^2J$ depends only logarithmically on $m^2$, the above bound is stable against mild variations of $m$. Similarly, we obtain \begin{eqnarray} |d_\mu|\le 1.4\times 10^{-32}~e~{\rm cm} \end{eqnarray} The expression for $d_\tau$ contains several combinations of $b$ factors that cannot be constrained in LFV decays, so that a direct bound is not possible. But we can utilize the sum rule (\ref{eqn_sum}) to set a bound \begin{eqnarray} |d_\tau|\le 5.2\times 10^{-31}~e~{\rm cm} \end{eqnarray} The limit on $|d_e|$, though larger than the four-loop SM result \cite{Pospelov:1991zt} and the bounds reached via other mechanisms \cite{Ng:1995cs,Archambault:2004td,Chang:2004pba,deGouvea:2005jj}, is still about three orders of magnitude below the precision reachable in the near future \cite{Kawall:2004nv}. \subsection{Approach 2: a case study in type II seesaw} The discussion in the previous subsection is model independent. When the Yukawa interaction in eq. 
(\ref{eqn_yukawa}) is part of a complete structure in a model, more stringent constraints on EDMs can be obtained. This is the case particularly in the type II seesaw model where the Yukawa couplings are related via eq. (\ref{eqn_seesaw}) to the neutrino masses and mixing matrix which have been determined to certain extent. In this subsection we will not attempt a global fitting but demonstrate the point by a case study in this model. The mixing pattern determined by oscillation data is close to the tribimaximal texture \cite{Harrison:2002er}. We will work in this simplified scenario. There is then no Dirac phase but there can be two Majorana phases $u_{1,2}$: \begin{eqnarray} V=\left(\begin{array}{ccc} \sqrt{\frac{2}{3}}u_1&\frac{1}{\sqrt{3}}u_2&0\\ -\frac{1}{\sqrt{6}}u_1&\frac{1}{\sqrt{3}}u_2&\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{6}}u_1&-\frac{1}{\sqrt{3}}u_2&\frac{1}{\sqrt{2}} \end{array}\right) \end{eqnarray} Then, the matrix $b^\dagger b$ is real and symmetric, \begin{eqnarray} 4|v_3|^2b^\dagger b &=&\frac{1}{3}(m_1^2+m_2^2+m_3^2)1_3\nonumber\\ &&+\frac{1}{6}\left(\begin{array}{ccc} 2\Delta_{13}&-2\Delta_{12}&2\Delta_{12}\\ -2\Delta_{12}&-\Delta_{13}&-\Delta_{13}-2\Delta_{23}\\ 2\Delta_{12}&-\Delta_{13}-2\Delta_{23}&-\Delta_{13} \end{array}\right) \end{eqnarray} with $\Delta_{ij}=m^2_i-m^2_j$. The off-diagonal moduli $|(b^\dagger b)_{\alpha\beta}|$ depend explicitly on $\Delta_{ij}$, which have been determined e.g. in \cite{Garayoa:2007fw} to be, $\Delta_{21}=7.6\times 10^{-5}~{\rm eV}^2$, $|\Delta_{31}|=2.4\times 10^{-3}~{\rm eV}^2$. The bound on $\textrm{Br}(\mu\to e\gamma)$ then implies (using $B_\xi=1$) \begin{eqnarray} |v_3|^2m^2G_F>5.75\times 10^{-2}~{\rm eV}^2% \label{eqn_bound1} \end{eqnarray} Since $\textrm{Br}(\tau\to e\gamma)$ also depends on $\Delta_{12}$, its less stringent bound is useless. Instead, its relation to $\textrm{Br}(\mu\to e\gamma)$ in type II seesaw and the experimental bound on the latter mean \begin{eqnarray} \textrm{Br}(\tau\to e\gamma)=B_\tau\textrm{Br}(\mu\to e\gamma)\le 2.0\times 10^{-12} \end{eqnarray} which is much below the current bound. We also notice that the bound on $\textrm{Br}(\tau\to\mu\gamma)$, though more than three orders of magnitude larger than $\textrm{Br}(\mu\to e\gamma)$, gives a constraint that is only slightly weaker than in eq. (\ref{eqn_bound1}), because of an enhancement factor $|\Delta_{31}|/\Delta_{21}$. A similar relation also holds between $\textrm{Br}(\tau\to 3e)$ and $\textrm{Br}(\mu\to 3e)$; the more stringent bound on the latter implies \begin{eqnarray} \textrm{Br}(\tau\to 3e)=B_\tau\textrm{Br}(\mu\to 3e)\le 1.7\times 10^{-13} \end{eqnarray} which is much smaller than its current bound and thus more difficult to observe than other decay modes of $\tau$. To proceed further with leptonic decays of $\tau$ and EDM, we set all neutrino masses in $b^*_{\delta\alpha}b_{\beta\gamma}$ (but not in $(b^\dagger b)_{\beta\alpha}$ of course) to be equal to their average value $\bar m_\nu$. This simplification holds true barring very delicate cancellation among neutrino mass differences and Majorana phases. Then, the most stringent constraints on $\mu\to 3e$, $\tau\to\bar ee\mu,~\bar\mu\mu e$ (together with the less stringent one on $\tau\to\bar e2\mu$) are proportional to $|u_1^2-u_2^2|$. We may reach the most optimistic values of EDMs by assuming $u_1^2=u_2^2=e^{i\phi}$ to avoid these bounds. 
The remaining ones on $\tau\to 3\mu,~\bar\mu 2e$ yield comparable constraints:
\begin{eqnarray} \frac{\bar m_\nu^2|\sin\phi|}{8|v_3|^2m^2G_F} <1.1\times 10^{-3},\qquad \frac{\bar m_\nu^2|\sin(\phi/2)|}{4|v_3|^2m^2G_F}<1.2\times 10^{-3}
\label{eqn_bound2} \end{eqnarray}
The electron EDM, being proportional to $\Im(u_1^{*2}u_2^2)$, vanishes, while the other two simplify to
\begin{eqnarray} \frac{d_\mu}{em_\mu}=-\frac{d_\tau}{em_\tau} =\frac{\bar m_\nu^2\Delta_{13}\sin\phi}{2^{11}\cdot 3\,\pi^4|v_3|^4m^2} J(r_\mu,r_\tau), \end{eqnarray}
barring cancellation of $O(\Delta_{21}/|\Delta_{31}|)\sim 3\%$ or $O([\Delta_{21}J(r_e,r_\mu)]/[|\Delta_{31}|J(r_\mu,r_\tau)])\sim 2\times 10^{-4}$. The bounds in eqs. (\ref{eqn_bound1}, \ref{eqn_bound2}) then give at $m=200~{\rm GeV}$
\begin{eqnarray} |d_\mu|<2.0\times 10^{-33}~e~{\rm cm},~|d_\tau|<3.4\times 10^{-32}~e~{\rm cm} \end{eqnarray}
As expected, this result is more stringent than the model-independent one in the previous subsection. In this special scenario, the effective neutrino mass measured in $0\nu 2\beta$ decays is $m_{\beta\beta}=(2m_1+m_2)/3\sim \bar m_\nu$. The contributions to anomalous magnetic dipole moments depend on the diagonal elements of $b^\dagger b$ and are given by
\begin{eqnarray} a_e&=&\frac{3}{(4\pi)^2}\frac{\bar m_\nu^2m_e^2G_F}{4|v_3|^2m^2G_F}< 1.1\times 10^{-14}\nonumber\\ a_\mu&=&\frac{3}{(4\pi)^2}\frac{\bar m_\nu^2m_\mu^2G_F}{4|v_3|^2m^2G_F}<4.7\times 10^{-10} \end{eqnarray}
using eq. (\ref{eqn_bound1}) and $\bar m_\nu\sim 0.21~{\rm eV}$ from a recent update of cosmological bounds on the sum of neutrino masses \cite{Hannestad:2008js}. Both $a_e$ and $a_\mu$ are below the potential gap between measurements and SM expectations \cite{Odom:2006zz, Yao:2006px}.
\section{Conclusion}
CP violation in the lepton sector has remained an experimentally unexplored issue. The charged lepton EDMs offer a potential arena to detect it. This is especially encouraged by the experimental precision in EDM measurements that has already been reached and that may become accessible in the near future. However, it has been found previously that it is hard to obtain a charged lepton EDM that is not exceedingly small from a flavor CP source. This may be blamed on the neutrinos being very light, and thus almost degenerate at the electroweak scale. Together with one of the three mixing angles being small, this makes the Dirac phase effectively unobservable, and it suppresses the effects of Majorana phases on EDMs by several factors of neutrino masses. We have thus been motivated to consider a CP source that arises from Majorana-type Yukawa couplings of charged leptons. Such couplings may appear naturally in the SM with an extended scalar sector, such as the type II seesaw model, but we have presented our analytic results in a general setting. We found that the EDMs so obtained are parametrically large. They are suppressed only by charged lepton masses squared over four powers of heavy scalar masses for order one Yukawa couplings, which may be arranged naturally, for instance, in the type II seesaw model by assigning a tiny vacuum expectation value to the scalar triplet. Nevertheless, the usual fate of a flavor CP source seems hard to escape. A large enough EDM, though flavor diagonal, demands reasonably large flavor changing couplings, and this may not be allowed by the strictly bounded LFV transitions. With these bounds taken into account, we found that the electron EDM is at least three orders of magnitude below the precision achievable in the near future, although it is still much larger than the contributions considered previously.
\vspace{0.5cm} \noindent % {\bf Acknowledgement} This work is supported in part by the grants NCET-06-0211 and NSFC-10775074. \vspace{0.5cm} \noindent %
\section*{Appendix \thesection\protect\indent #1} \addcontentsline{toc}{section}{Appendix \thesection\ \ \ #1} } \newcommand{\tr}[1]{\,{\rm tr}\,#1\,} \newcommand{\frac{\tau} 2 }{\frac{\tau} 2 } \newcommand{${\cal T}$ }{${\cal T}$ } \newcommand{{\bf d}}{{\bf d}} \def\varepsilon{\varepsilon} \def\int\hspace{-1.07em}\not\hspace{0.6em}{\int\hspace{-1.07em}\not\hspace{0.6em}} \def\hspace*{\fill}\linebreak{\hspace*{\fill}\linebreak} \def\vspace*{\fill}\pagebreak{\vspace*{\fill}\pagebreak} \def\begin{equation}{\begin{equation}} \def\label{\label} \def\end{equation}{\end{equation}} \def\begin{eqnarray}{\begin{eqnarray}} \def\end{eqnarray}{\end{eqnarray}} \def\varepsilon{\varepsilon} \def\alpha{\alpha} \def\beta{\beta} \def\sigma{\sigma} \def\nabla{\nabla} \def\Sigma{\Sigma} \def\Delta{\Delta} \def\Gamma{\Gamma} \def\gamma{\gamma} \def\delta{\delta} \def\omega{\omega} \def\theta{\theta} \def\left({\left(} \def\right){\right)} \def\partial{\partial} \def\vec{x}{\vec{x}} \def\vec{y}{\vec{y}} \def\vec{z}{\vec{z}} \newcommand{{\mu }}{{\mu }} \newcommand{{\bf k}}{{\bf k}} \newcommand{{\bf q}}{{\bf q}} \newcommand{{\bf Z}}{{\bf Z}} \newcommand{{\cal H }}{{\cal H }} \newcommand{{\cal L }}{{\cal L }} \begin{document} \title{\hfill{LMU-TPW 99-12} \\ \hfill{UAHEP993} \\ \hfill{hep-th/9907085} \\ \vspace{1cm} Some Cubic Couplings in Type IIB Supergravity on $AdS_5\times S^5$ and Three-point Functions in SYM$_4$ at Large $N$} \author{G.Arutyunov$^{a\, c}$ \thanks{[email protected]} \mbox{} and \mbox{} S.Frolov$^{b\,c}$\thanks{[email protected] \newline $~~~~~$$^c$On leave of absence from Steklov Mathematical Institute,Gubkin str.8, GSP-1, 117966, Moscow, Russia } \vspace{0.4cm} \mbox{} \\ $^a$ Sektion Physik, \vspace{-0.1cm} \mbox{} \\ Munich University \vspace{-0.1cm} \mbox{} \\ Theresienstr. 37, \vspace{-0.1cm} \mbox{} \\ D-80333 Munich, Germany \vspace{0.4cm} \mbox{} \\ $^b$Department of Physics and Astronomy, \vspace{-0.1cm} \mbox{} \\ University of Alabama, Box 870324, \vspace{-0.1cm} \mbox{} \\ Tuscaloosa, Alabama 35487-0324, USA \mbox{} } \date {} \maketitle \begin{abstract} All cubic couplings in type IIB supergravity on $AdS_5\times S^5$ that involve two scalar fields $s^I$ that are mixtures of the five form field strength on $S^5$ and the trace of the graviton on $S^5$ are derived by using the covariant equations of motion and the quadratic action for type IIB supergravity on $AdS_5\times S^5$. All corresponding three-point functions in SYM$_4$ are calculated in the supergravity approximation. It is pointed out that the scalars $s^I$ correspond not to the chiral primary operators in the ${\cal N}=4$ SYM but rather to a proper extension of the operators. \end{abstract} \newpage \section{Introduction} According to the AdS/CFT correspondence \cite{M,GKP,W}, the generating functional of Green functions in $D=4$, ${\cal N}=4$ supersymmetric Yang-Mills theory (SYM$_4$) at large $N$ and at strong 't Hooft coupling $\lambda$ coincides with the on-shell value of the type IIB supergravity action on $AdS_5\times S^5$. For this reason, to calculate an $n$-point Green function one has to know the supergravity action up to the $n$-th order. In particular, the normalization constants of two- and three-point Green functions \cite{AV}-\cite{LT2} are determined by the quadratic and cubic actions for physical fields of supergravity. 
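Schematically (in our own shorthand, with $\varphi$ a generic supergravity field, $\varphi_0$ its boundary value and ${\cal O}$ the dual SYM$_4$ operator), the statement reads
\begin{eqnarray}
\left\langle \exp\int d^4x\, \varphi_0(\vec{x} )\, {\cal O}(\vec{x} )\right\rangle_{{\rm SYM}_4}
=\exp\left(-S_{\rm sugra}[\varphi ]\Big|_{{\rm on-shell},\ \varphi\to\varphi_0}\right),\nonumber
\end{eqnarray}
so that $n$ functional derivatives with respect to $\varphi_0$ generate the $n$-point Green functions, and the quadratic and cubic parts of the on-shell action fix the two- and three-point normalizations.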
The particle spectrum of type IIB supergravity on $AdS_5\times S^5$ \cite{KRN,GM} contains scalar fields $s^I$ that are mixtures of the five form field strength on $S^5$ and the trace of the graviton on $S^5$. The transformation properties of the scalars with respect to the superconformal group of SYM$_4$ allow one to conclude that they correspond to chiral primary operators (CPOs) of SYM$_4$. In \cite{LMRS} the quadratic and cubic actions for the scalars $s^I$ have been found and used to calculate all three-point functions of normalized CPOs. These three-point functions turned out to coincide with the three-point functions of CPOs computed in free field theory for generic values of the conformal dimensions of the CPOs. However, there is an apparent contradiction. As was noted in \cite{HF1} (see also \cite{LT2}), a three-point function of CPOs calculated in the AdS/CFT framework vanishes if the sum of the conformal dimensions of any two of the operators equals the conformal dimension of the third operator, because of the vanishing of the cubic couplings of the corresponding scalar fields. Thus we are forced to conclude that the scalars $s^I$ used in \cite{LMRS} cannot correspond to CPOs. Another way to reach this conclusion is to note that the scalars from \cite{LMRS} do not coincide with the original scalars that are mixtures of the five-form and the graviton, but depend nonlinearly on the original scalars and their derivatives. Thus the scalars used in \cite{LMRS} do not transform with respect to the superconformal group in a proper way and cannot correspond to CPOs. In this paper we show that a scalar $s^I$ used in \cite{LMRS} corresponds to an operator which is the sum of a CPO and non-chiral composite operators. The non-chiral operators are normal-ordered products of CPOs and their descendants, i.e. so-called double- and multi-trace operators. The knowledge of the correlation functions of the chiral primary operators allows one to compute correlation functions of all their descendants, in particular, the correlation functions of the stress energy tensor and $R$-symmetry currents. To compute four-point functions\footnote{Some results on four-point functions have been obtained in \cite{HF1}-\cite{BKRS}.} of the chiral operators one has to know the $s^I$-dependent quartic terms and all cubic terms that involve two scalar fields $s^I$. In the present paper, as the first step in this direction, we determine all such cubic terms. It is sufficient to consider only the sector of type IIB supergravity that depends on the graviton and the four-form potential. There are four different types of vertices describing the interaction of two scalars $s^I$ with symmetric second-rank tensor fields coming from the $AdS_5$ components of the graviton, with vector fields, with scalar fields coming from the $S^5$ components of the graviton, and with scalar fields $t^I$ that are mixtures of the trace of the graviton on the sphere and the five form field strength on the sphere. To this end we apply an approach similar to the one used in \cite{LMRS}. Namely, we use the quadratic action for type IIB supergravity on $AdS_5\times S^5$ recently obtained in \cite{AF3} and the covariant equations of motion of \cite{S,SW,HW}. Just as in the case of the cubic couplings of three scalars $s^I$ \cite{LMRS}, to get rid of higher-derivative terms we will have to redefine the original gravity fields.
Thus the fields entering the final action correspond not to descendants of CPOs but to extended operators involving products of CPOs and their descendants. However, we expect that for generic values of conformal dimensions of these operators, their three-point functions coincide with the three-point functions of the corresponding descendants of CPOs. Let us note in passing that the only way to find an action depending on the fields that correspond directly to CPOs and their descendants seems to be to derive the action starting from the covariant action of \cite{ALS,ALT}. In this way one probably should obtain a nonvanishing cubic couplings of scalars $s^I$ corresponding to CPOs whose conformal dimensions satisfy the relation $\Delta_1 +\Delta_2=\Delta_3$. These cubic terms seem to be of the form suggested in \cite{HF1}. Unfortunately, the lack of covariance of the gauge-fixed action of \cite{ALS,ALT} makes the analysis extremely complicated. The paper is organized as follows. In section 2 we suggest the operators that correspond to the scalars $s^I$ from \cite{LMRS}. In section 3 we recall equations of motion for the graviton and the four-form potential, and the quadratic actions for the fields under consideration, and introduce notations. In section 4 we obtain cubic couplings of two scalars $s^I$ with a scalar $t^I$, and with scalars $\phi^I$ coming from the graviton on the sphere, and calculate their three-point functions by using results obtained in \cite{FMMR}. In section 5 cubic couplings of two scalars $s^I$ with symmetric second rank tensor fields are derived and the corresponding three-point functions are found. In section 6 we obtain cubic vertices of two scalars $s^I$ and a vector field, and calculate their three-point functions. Note that three-point functions of two scalars with a massive vector field, or a massive symmetric second rank tensor, were not considered in the literature before. In the Conclusion we discuss the results obtained, and open problems. In the Appendix we recall the definitions of scalar, vector and tensor spherical harmonics. \section{Extended chiral primary operators} In this section we recall the definition of chiral primary operators and introduce a notion of extended chiral primary operators. According to \cite{LMRS}, CPOs have the form \begin{eqnarray} O^I(\vec{x} )=\frac{(2\pi )^k}{\sqrt{k\lambda^k}}C^I_{i_1\cdots i_k} \tr (\phi^{i_1}(\vec{x} )\cdots \phi^{i_k}(\vec{x} )), \label{cpo} \end{eqnarray} where $C^I_{i_1\cdots i_k}$ are totally symmetric traceless rank $k$ orthonormal tensors of $SO(6)$: $\langle C^IC^J\rangle =C^I_{i_1\cdots i_k}C^J_{i_1\cdots i_k}=\delta^{IJ}$, and $\phi^{i}$ are scalars of SYM$_4$. The two- and three-point functions of CPOs computed in free theory are \cite{LMRS} \begin{eqnarray} &&\langle O^I(\vec{x} )O^J(\vec{y} )\rangle =\frac{\delta^{IJ}}{|\vec{x} -\vec{y} |^{2k}}, \label{cpo2}\\ &&\langle O^{I_1}(\vec{x} )O^{I_2}(\vec{y} )O^{I_3}(\vec{z} )\rangle =\frac 1N \frac{\sqrt{k_1k_2k_3}\langle C^{I_1}C^{I_2}C^{I_3}\rangle } {|\vec{x} -\vec{y} |^{2\alpha_3}|\vec{y} -\vec{z} |^{2\alpha_1}|\vec{z} -\vec{x} |^{2\alpha_2}}, \label{cpo3} \end{eqnarray} where $\alpha_i =\frac 12 (k_j+k_l-k_i)$, $j\neq l\neq i$, and $\langle C^{I_1}C^{I_2}C^{I_3}\rangle $ is the unique $SO(6)$ invariant obtained by contracting $\alpha_1$ indices between $C^{I_2}$ and $C^{I_3}$, $\alpha_2$ indices between $C^{I_3}$ and $C^{I_1}$, and $\alpha_3$ indices between $C^{I_2}$ and $C^{I_1}$. 
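As a simple illustration (our own example of the formulas above), for the lowest case $k_1=k_2=k_3=2$ one has $\alpha_1=\alpha_2=\alpha_3=1$, so that
\begin{eqnarray}
\langle C^{I_1}C^{I_2}C^{I_3}\rangle =C^{I_1}_{ij}C^{I_2}_{jk}C^{I_3}_{ki},\qquad
\langle O^{I_1}(\vec{x} )O^{I_2}(\vec{y} )O^{I_3}(\vec{z} )\rangle =\frac{2\sqrt{2}}{N}\,
\frac{\langle C^{I_1}C^{I_2}C^{I_3}\rangle }
{|\vec{x} -\vec{y} |^{2}|\vec{y} -\vec{z} |^{2}|\vec{z} -\vec{x} |^{2}}. \nonumber
\end{eqnarray}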
According to the AdS/CFT conjecture, there should exist fields of type IIB supergravity on $AdS_5\times S^5$ that correspond to CPOs. The transformation properties of CPOs and supergravity fields with respect to the superconformal group of SYM$_4$ show that these fields seem to be scalar fields $s^I$, that are mixtures of the five form field strength on $S^5$ and the trace of the graviton on $S^5$.\footnote{Strictly speaking this correspondence between CPOs and scalars $s^I$ may be valid only at linear order in supergravity fields. The reason is that the local supersymmetry transformations of supergravity fields are nonlinear, and, one should expect that the induced superconformal transformations are nonlinear too. Thus the original gravity fields seem to depend nonlinearly on fields with the linear transformation law.} To calculate the three-point functions of CPOs in the framework of the AdS/CFT correspondence the quadratic and cubic actions for the scalars $s^I$ were found in \cite{LMRS}. Then, it was shown that for generic values of conformal dimensions of CPOs the normalized three-point functions computed using the actions precisely coincide with the free field theory result (\ref{cpo3}). On the other hand, as was pointed out in \cite{HF1} the cubic couplings of scalars $s^I$ satisfying one of the three relations: \begin{eqnarray} k_1+k_2=k_3,\quad k_2+k_3=k_1,\quad k_3+k_1=k_2, \label{kkk} \end{eqnarray} vanish, and, therefore, the three-point functions of the operators corresponding to scalars $s^I$ vanish too. Thus, scalars $s^I$ used in \cite{LMRS} do not correspond to CPOs. We can explain this by noting that the scalars $s^I$ from \cite{LMRS} differ from the original scalars that are mixtures of the graviton and the five-form on $S^5$. The original scalars $s^I$ satisfy equations which depend on higher-derivative terms. To remove the derivative terms the following field redefinition was made in \cite{LMRS} \begin{eqnarray} s^{I_1}= s'^{I_1}+\sum_{I_2,I_3}\left( J_{I_1I_2I_3}s'^{I_2}s'^{I_3}+L_{I_1I_2I_3}\nabla^a s'^{I_2}\nabla_a s'^{I_3} \right). \label{red} \end{eqnarray} Namely for the scalars $s'^{I}$ the cubic couplings mentioned above vanish. Because of the redefinition (\ref{red}) new scalars $s'^{I}$ do not transform with respect to the superconformal group in a proper way, and, therefore, cannot correspond to CPOs. From the computational point of view these cubic couplings have to vanish because if, say, $k_1+k_2=k_3$ then the three-point function (\ref{cpo3}) is nonsingular at $x=y$, but gravity calculations with a nonvanishing on-shell bulk cubic coupling always lead to a function singular at $x=y$, $x=z$ and $y=z$. By the same reason we expect that $n$-point functions of operators corresponding to scalars $s'^{I}$ (with an additional field redefinition which is required to remove higher-derivative terms from the $(n-1)$-th order equations of motion for $s^{I}$) would vanish if, say, $k_n=k_1+\cdots +k_{n-1}$. Study of the general scalar exchange performed in \cite{HF1} seems to confirm the conclusion. Thus, scalars $s^I$ (here and in what follows we omit the primes on redefined fields) correspond to properly extended CPOs which have vanishing three-point functions if (\ref{kkk}) is fulfilled. Indeed one can easily find such an extension of CPOs. 
Namely, we define the extended CPOs that correspond to scalars $s^{I}$ as
\begin{eqnarray} \tilde {O}^{I_1}=O^{I_1} -\frac {1}{2N}\sum_{I_2+I_3=I_1} C^{I_1I_2I_3}O^{I_2}O^{I_3},
\label{ecpo} \end{eqnarray}
where $C^{I_1I_2I_3}=\sqrt{k_1k_2k_3}\langle C^{I_1}C^{I_2}C^{I_3}\rangle $. It is not difficult to verify that in the large $N$ limit these operators have the normalized two-point functions (\ref{cpo2}), the three-point functions (\ref{cpo3}) if (\ref{kkk}) is not satisfied, and vanishing three-point functions if (\ref{kkk}) takes place. However, these operators will require a further modification to be consistent with all $n$-point functions computed in the framework of the AdS/CFT correspondence. In general, an extended CPO is the sum of a CPO and non-chiral composite operators which are normal-ordered products of CPOs and their descendants. Nevertheless, we expect that in the large $N$ limit an $n$-point function of extended CPOs coincides with $n$-point functions of CPOs for generic values of conformal dimensions of the operators. As we will discuss in the next sections, a similar modification is required for operators corresponding to other supergravity fields.
\section{Equations of motion and quadratic actions}
To obtain cubic couplings of two scalars $s^I$ with other type IIB supergravity fields it is sufficient to consider only the graviton and the four-form potential. To this end we apply the method of \cite{LMRS}, and use the covariant equations of motion \cite{S,SW,HW} and the quadratic action for type IIB supergravity on $AdS_5\times S^5$ \cite{AF3}. The equations of motion of the 4-form potential and the graviton are
\begin{eqnarray} F_{M_1...M_5}&=&\frac{1}{5!}\varepsilon_{M_1...M_{10}}F^{M_6...M_{10}}, \label{ffe}\\ R_{MN}&=&\frac{1}{3!}F_{MM_1...M_4}F_N^{~~M_1...M_4}. \label{gre} \end{eqnarray}
Here $M,N,\ldots =0,1,\ldots ,9$ and we use the following notations:
$$F_{M_1\ldots M_5}=5\partial_{[M_1}A_{M_2\ldots M_5]}= \partial_{M_1}A_{M_2\ldots M_5}+4~{\mbox{ terms}} ,$$
i.e.\ all antisymmetrizations are with ``weight'' 1. The dual forms are defined as
\begin{eqnarray} &&\varepsilon_{01\ldots 9}=\sqrt{-G};\quad \varepsilon^{01\ldots 9}=-\frac{1}{\sqrt{-G}} \nonumber\\ &&\varepsilon^{M_1\ldots M_{10}}= G^{M_1N_1}\cdots G^{M_{10}N_{10}}\varepsilon_{N_1\ldots N_{10}} \nonumber\\ && (F^*)_{M_1\ldots M_k}= \frac{1}{k!}\varepsilon_{M_1\ldots M_{10}}F^{M_{k+1}\ldots M_{10}} =\frac{1}{k!}\varepsilon^{N_1\ldots N_{10}}G_{M_1N_1}\cdots G_{M_{k}N_{k}}F_{N_{k+1}\ldots N_{10}}. \nonumber \end{eqnarray}
In units in which the radius of $S^5$ is set to unity, the $AdS_5\times S^5$ background solution reads
\begin{eqnarray} &&ds^2=\frac{1}{x_0^2}(dx_0^2+\eta_{ij}dx^idx^j)+d\Omega_5^2= g_{MN}dx^Mdx^N \nonumber\\ &&R_{abcd}=-g_{ac}g_{bd}+g_{ad}g_{bc}; \quad R_{ab}=-4g_{ab}\nonumber\\ &&R_{\alpha\beta\gamma\delta}=g_{\alpha\gamma}g_{\beta\delta}-g_{\alpha\delta}g_{\beta\gamma}; \quad R_{\alpha\beta}=4g_{\alpha\beta}\nonumber\\ &&\bar{F}_{abcde}=\varepsilon_{abcde};\quad \bar{F}_{\alpha\beta\gamma\delta\varepsilon}= \varepsilon_{\alpha\beta\gamma\delta\varepsilon}, \label{back} \end{eqnarray}
where $a,b,c,\ldots $ and $\alpha ,\beta ,\gamma ,\ldots$ are the AdS and the sphere indices respectively and $\eta_{ij}$ is the $4$-dimensional Minkowski metric.
We represent the gravitational field and the 4-form potential as
$$G_{MN}=g_{MN}+h_{MN};\quad A_{MNPQ}=\bar{A}_{MNPQ}+a_{MNPQ}; \quad F=\bar{F} +f.$$
Then the self-duality equation (\ref{ffe}) decomposed up to the second order reads
\begin{eqnarray} f-f^*+T^{(1)}+T(h,f^*)+T(h)=0. \label{ffeq} \end{eqnarray}
Here we introduced the following notations
\begin{eqnarray} T_{M_1...M_5}^{(1)}&=&\frac{1}{2}h\bar{F}_{M_1...M_5}- 5h^K_{[M_1}\bar{F}_{M_2...M_5]K},\quad h=h_K^K \nonumber \\ T_{M_1...M_5}(h,f^*)&=& \frac{1}{2}hf^*_{M_1...M_5}-5h^K_{[M_1}f^*_{M_2...M_5]K} \nonumber \\ \nonumber T_{M_1...M_5}(h) &=&\frac{5}{2}hh^K_{[M_1}\bar{F}_{M_2...M_5]K} -\left(\frac{1}{8}h^2+\frac{1}{4}h^{ML}h_{ML}\right) \bar{F}_{M_1...M_5}\nonumber\\ &-&10h^{K_1}_{[M_1}h^{K_2}_{M_2}\bar{F}_{M_3M_4M_5]K_1K_2}. \label{deft} \end{eqnarray}
Decomposing the Einstein equation (\ref{gre}) up to the second order, we get
\begin{eqnarray} &&R_{MN}^{(1)}+R_{MN}^{(2)}= -\frac{4}{3!}h^{KL}\bar{F}_{MKM_1M_2M_3}\bar{F}_{NL}^{~~M_1M_2M_3} \\ &&+\frac{1}{3!}(f_{MM_1...M_4}\bar{F}_{N}^{~M_1...M_4}+ \bar{F}_{MM_1...M_4}f_{N}^{~M_1...M_4})\nonumber\\ &&+\frac{4}{3!}h^{KL}h_L^S\bar{F}_{MKM_1M_2M_3}\bar{F}_{NS}^{~~M_1M_2M_3} +\frac{2\cdot3}{3!} h^{K_1S_1}h^{K_2S_2}\bar{F}_{MK_1K_2M_1M_2}\bar{F}_{NS_1S_2}^{~~~M_1M_2} \nonumber \\ \nonumber &&-\frac{4}{3!}h^{KS}(f_{MKM_1M_2M_3}\bar{F}_{NS}^{~~M_1M_2M_3} +f_{NKM_1M_2M_3}\bar{F}_{MS}^{~~M_1M_2M_3}) +\frac{1}{3!}f_{MM_1...M_4}f_N^{~M_1...M_4}. \label{greq} \end{eqnarray}
Here
\begin{eqnarray} &&R_{MN}^{(1)}=\nabla_Kh_{MN}^K - \frac 12 \nabla_M\nabla_Nh_L^L \nonumber\\ &&R_{MN}^{(2)}=-\nabla_K(h^K_Lh_{MN}^L) + \frac 12 \nabla_N(h_{KL}\nabla_Mh^{KL})+ \frac 12 h_{MN}^K\nabla_Kh_L^L -h_{MK}^Lh_{NL}^K \label{defr} \end{eqnarray}
and we introduce the notation
\begin{eqnarray} h_{MN}^K=\frac{1}{2}(\nabla_Mh_N^K +\nabla_Nh_M^K -\nabla^Kh_{MN}) \label{defh} \end{eqnarray}
In eqs.~(\ref{ffeq}-\ref{defh}) and in what follows indices are raised and lowered by means of the background metric, and the covariant derivatives are with respect to the background metric, too. The gauge symmetry of the equations of motion allows one to impose the de Donder gauge:
\begin{eqnarray} \nabla^\alpha h_{a\alpha }=\nabla^\alpha h_{(\alpha\beta )}=\nabla^\alpha a_{M_1M_2M_3\alpha}=0;\quad h_{(\alpha\beta )}\equiv h_{\alpha\beta }-\frac{1}{5}g_{\alpha\beta}h_\gamma^\gamma . \label{ga} \end{eqnarray}
This gauge choice does not remove all the gauge symmetry of the theory; for a detailed discussion of the residual symmetry see \cite{KRN}. As was shown in \cite{KRN}, the gauge condition (\ref{ga}) implies that the components of the 4-form potential of the form $a_{\alpha\beta\gamma\delta}$ and $a_{a\alpha\beta\gamma}$ can be represented as follows:
\begin{equation} a_{\alpha\beta\gamma\delta}=\varepsilon_{\alpha\beta\gamma\delta\varepsilon}\nabla^\varepsilon b ;\quad a_{a\alpha\beta\gamma}=\varepsilon_{\alpha\beta\gamma\delta\varepsilon}\nabla^\delta\phi_{a}^{\varepsilon}. \label{phib} \end{equation}
It is also convenient to introduce the dual 1- and 2-forms for $a_{abcd}$ and $a_{abc\alpha}$:
\begin{equation} a_{abcd}=-\varepsilon_{abcde}Q^e;\quad a_{abc\alpha}=-\varepsilon_{abcde}\phi_\alpha^{de}. \label{aa} \end{equation}
Then the solution of the first-order self-duality equation can be written as
\begin{equation} Q^a=\nabla^ab,\quad \phi_\alpha^{ab}=\nabla^{[a}\phi_\alpha^{b]}. \label{qphi} \end{equation}
The quadratic action for physical fields of type IIB supergravity was found in \cite{AF3}.
To write down the action we need to expand fields in spherical harmonics, and make some fields redefinition. We begin with the scalar fields $b$ and $\pi \equiv h_\alpha^\alpha$. Expanding them into a set of scalar spherical harmonics\footnote{Here and in what follows we suppose that the spherical harmonics of all types are orthonormal.} \begin{eqnarray} \pi(x,y)=\sum\, \pi^{I_1}(x)Y^{I_1}(y);\quad b(x,y)=\sum\, b^{I_1}(x)Y^{I_1}(y); \quad \nabla_\beta^2Y^k=-k(k+4)Y^k, \nonumber \end{eqnarray} and making the fields redefinition \cite{LMRS}\footnote{We often denote $\pi^{I_1}$ as $\pi_k$ and a similar notation for other fields.} \begin{eqnarray} \pi_k=10ks_k+10(k+4)t_k;\quad b_k=-s_k+t_k \label{redbpi} \end{eqnarray} we write the quadratic actions for the scalars $s^I$ and $t^I$ in the form \begin{eqnarray} &&S(s)=\frac{4N^2}{(2\pi )^5}\int d^{5}x\sqrt{-g_a}\sum ~ \frac{32k(k-1)(k+2)}{k+1}\left( -\frac12 \nabla_as_k\nabla^as_k -\frac12 k(k-4)s_k^2\right), \label{as}\\ &&S(t)=\frac{4N^2}{(2\pi )^5}\int d^{5}x\sqrt{-g_a}\sum ~ \frac{32(k+2)(k+4)(k+5)}{k+3} \left( -\frac12 \nabla_at_k\nabla^at_k -\frac12 (k+4)(k+8)t_k^2\right).\nonumber\\ \label{at} \end{eqnarray} Now we expand the graviton on $AdS_5$ in scalar spherical harmonics \begin{eqnarray} h_{ab}(x,y)=\sum\, h_{ab}^{I_1}(x)Y^{I_1}(y) \nonumber \end{eqnarray} and make the following shift of the gravitational fields: \begin{equation} h_{ab}^k=\phi_{(ab)}^k +\nabla_{(a}\nabla_{b)}\zeta_k + \frac15 g_{ab}(\phi_{ck}^c -\frac35 \pi_k) , \label{redh} \end{equation} where \begin{equation} \zeta_k =\frac{4}{k+1}s_k +\frac{4}{k+3}t_k . \label{zeta} \end{equation} Then the zero mode $\phi_{ab}^0\equiv \phi_{ab}$ describes a graviton on $AdS_5$ with the standard action \begin{eqnarray} S(\phi_{ab})=&&\frac{4N^2}{(2\pi )^5}\int d^{5}x\sqrt{-g_a}\left( -\frac{1}{4}\nabla_c\phi_{ab}\nabla^c\phi^{ab}+ \frac{1}{2}\nabla_a\phi^{ab}\nabla^c\phi_{cb}- \frac{1}{2}\nabla_a\phi_c^c\nabla_b\phi^{ba} \right.\nonumber\\ &&+\frac{1}{4}\nabla_c\phi_a^a\nabla^c\phi^{b}_b +\left.\frac{1}{2}\phi_{ab}\phi^{ab}+ \frac{1}{2}(\phi_{a}^{a})^2\right) \label{agr0} \end{eqnarray} and the action for the traceless symmetric tensor fields $\phi_{ab}^k$ has the form \begin{eqnarray} S(\phi_{(ab)}^k)=&&\frac{4N^2}{(2\pi )^5}\int d^{5}x\sqrt{-g_a}\sum ~\left( -\frac{1}{4}\nabla_c\phi_{(ab)}^k\nabla^c\phi^{(ab)}_k+ \frac{1}{2}\nabla_a\phi^{(ab)}_k\nabla^c\phi_{(cb)}^k\right.\nonumber\\&&\left. - \frac{1}{4}(k^2+4k-2)\phi_{(ab)}^k\phi^{(ab)}_k\right) \label{agr2} \end{eqnarray} As was shown in \cite{KRN} the fields $\phi_{ck}^c$ are nondynamical and vanish on shell at the linearized level. 
Expanding vector fields $h_{a\alpha}$ and $\phi_{a\alpha}$ into a set of vector spherical harmonics \begin{eqnarray} &&h_{a\alpha}(x,y)=\sum\, hh_a^{I_5}(x)Y_\alpha^{I_5}(y);\quad \phi_{a\alpha}(x,y)=\sum\, \phi_a^{I_5}(x)Y_\alpha^{I_5}(y); \nonumber\\ &&(\nabla_\beta ^2-4)Y_\alpha^k=-(k+1)(k+3)Y_\alpha^k, \nonumber \end{eqnarray} and making the change of variables \cite{KRN} \begin{eqnarray} A_a^k=h_a^k -4(k+3)\phi_a^k;\quad C_a^k=h_a^k +4(k+1)\phi_a^k \label{redhphi} \end{eqnarray} we present the actions for the vector fields in the form \begin{eqnarray} &&S(A)=\frac{4N^2}{(2\pi )^5}\int d^{5}x\sqrt{-g_a}\sum \frac{k+1}{2(k+2)}\left(-\frac14 (F_{ab}(A^k))^2 -\frac12 (k^2-1)(A_a^k)^2\right) \label{aA}\\ &&S(C)=\frac{4N^2}{(2\pi )^5}\int d^{5}x\sqrt{-g_a}\sum \frac{k+3}{2(k+2)}\left(-\frac14 (F_{ab}(C^k))^2 -\frac12 (k+3)(k+5)(C_a^k)^2\right) \label{aC} \end{eqnarray} where $F_{ab}(A)=\partial_aA_b-\partial_bA_a$. Finally, expanding the graviton on the sphere in tensor harmonics \begin{eqnarray} h_{(\alpha\beta)}(x,y)=\sum\, \phi^{I_{14}}(x)Y_{(\alpha\beta)}^{I_{14}}(y);\quad (\nabla_\gamma^2-10)Y_{(\alpha\beta)}^k=-(k^2+4k+8)Y_{(\alpha\beta)}^k, \nonumber \end{eqnarray} we write the action for the scalars $\phi_k$ in the form \begin{eqnarray} S(\phi)=\frac{4N^2}{(2\pi )^5}\int d^{5}x\sqrt{-g_a}\sum ~ \left( -\frac14 \nabla_a\phi_k\nabla^a\phi_k -\frac14 k(k+4)\phi_k^2\right) \label{aphi} \end{eqnarray} \section{Cubic couplings of scalars} The aim of this section is to find the cubic couplings of the scalar fields $t_k$ and $\phi_k$ with a pair of scalars $s_k$. This can be achieved by finding the quadratic contribution of the scalars $s_k$ to the equations of motion for $t_k$ and $\phi_k$ respectively with a subsequent reconstruction of the corresponding Lagrangian vertex. \vskip 0.4cm {\it 4.1 Cubic Couplings of $t_k$} \vskip 0.4cm \noindent Since $t_k$ appear as the mixture of fields $\pi_k$ and $b_k$ we begin by considering the equations of motion for these fields. Restricting in (\ref{greq}) indices $M$ and $N$ to the sphere and taking into account the gauge conditions (\ref{ga}), (\ref{phib}) we find that Einstein equation (\ref{greq}) results in \begin{eqnarray} \label{bG} &&\frac{1}{10}g_{\alpha\beta}\left( (\nabla_M\nabla^M-32)\pi+80\nabla_{\gamma}\nabla^{\gamma}b\right) +\frac{1}{2}\nabla_{\alpha}\nabla_{\beta}\phi_a^a=\\ \nonumber &+&\frac{1}{10}g_{\alpha\beta}\nabla_a(h^{ab}\nabla_b \pi)+\frac{3}{100}\nabla_{\alpha}\pi\nabla_{\beta}\pi +\frac{3}{50}\pi\nabla_{\alpha}\nabla_{\beta}\pi+\frac{1}{4}\nabla_{\alpha}h_{ab}\nabla_{\beta}h^{ab} +\frac{1}{2}h_{ab}\nabla_{\alpha}\nabla_{\beta}h^{ab}\\ \nonumber &+&8\nabla_{\alpha}\nabla^a b\nabla_{\beta}\nabla_a b -4g_{\alpha\beta}\left( \nabla_{\gamma}\nabla^a b \nabla^{\gamma}\nabla_a b+\nabla_{\gamma}^2b \nabla_{\delta}^2b +\frac{2}{5}\pi^2 -\frac{8}{5}\pi\nabla_{\gamma}^2b-\frac{1}{200}\nabla_{\gamma}(\pi\nabla^{\gamma}\pi) \right), \end{eqnarray} where $\phi_a^a= h_a^a+\frac{3}{5}\pi$ in accordance with (\ref{redh}). Note that we have omitted all the linear terms that are projected out under the projection onto the spherical harmonics $\nabla_{(\alpha}\nabla_{\beta)}Y^I$ or $Y^I$ and accounted only for the quadratic terms that contain after the field redefinition (\ref{redbpi}) and (\ref{redh}) two scalars $s_k$. In particular the scalars $s_k$ appear after redefinition (\ref{redh}) for the gravitational field $h_{ab}$. 
Equation (\ref{bG}) then implies the following two equations \begin{eqnarray} \label{ctr} \nabla_{(\alpha}\nabla_{\beta)}\phi_a^a &=& \frac{3}{50}\nabla_{(\alpha}\pi\nabla_{\beta)}\pi +\frac{3}{25}\pi\nabla_{(\alpha}\nabla_{\beta)}\pi \\ \nonumber &+&\frac{1}{2}\nabla_{(\alpha}h_{ab}\nabla_{\beta)}h^{ab} +h_{ab}\nabla_{(\alpha}\nabla_{\beta)}h^{ab}+16\nabla_{(\alpha}\nabla^a b\nabla_{\beta)}\nabla_a b \end{eqnarray} and \begin{eqnarray} \label{1bp} && (\nabla_M\nabla^M-32)\pi+80\nabla_{\gamma}\nabla^{\gamma}b +\nabla_{\alpha}\nabla^{\alpha}\phi_a^a=\\ \nonumber &&\nabla_a(h^{ab}\nabla_b \pi)+\frac{13}{50}\nabla_{\alpha}\pi\nabla^{\alpha}\pi +\frac{8}{25}\pi\nabla_{\alpha}\nabla^{\alpha}\pi+\frac{1}{2}\nabla_{\alpha}h_{ab}\nabla^{\alpha}h^{ab} +h_{ab}\nabla_{\alpha}\nabla^{\alpha}h^{ab}\\ \nonumber &&-24\nabla_{\alpha}\nabla^a b\nabla^{\alpha}\nabla_a b -40\nabla_{\gamma}^2b \nabla_{\delta}^2b -16\pi^2 +64\pi\nabla_{\gamma}^2b \end{eqnarray} which are obtained by separating the trace part of (\ref{bG}). Projecting eq.(\ref{ctr}) onto $\nabla_{\alpha}\nabla_{\beta}Y^I$ one can solve it for $\phi_a^a$ and, substituting the result in (\ref{1bp}), obtain the closed equation for $\pi$ and $b$. According to \cite{KRN} the second equation involving the fields $\pi$ and $b$ is found by considering the component of the self-duality equation (\ref{ffeq}) involving one sphere and four AdS indices, and the component with five AdS indices. In our case these components read as \begin{eqnarray} \label{sd1} \nabla_{\alpha}\left( a_{a_1...a_4}+\varepsilon_{a_1...a_5}\nabla^{a_5}b \right) = \varepsilon_{a_1...a_4 a} \left( \frac{3}{5}\pi\nabla^a\nabla_{\alpha}b+h^{ab}\nabla_b\nabla_{\alpha}b \right) \end{eqnarray} and \begin{eqnarray} \label{sd2} 5\nabla_{[a_1}a_{a_2...a_5]}= \varepsilon_{a_1...a_5}\left( \nabla_{\gamma}^2 b+\frac{1}{2}\phi_a^a-\frac{4}{5}\pi -\frac{4}{5}\pi\nabla_{\gamma}^2 b-\frac{1}{4}h_{ab}h^{ab}+\frac{37}{100}\pi^2 \right) . \end{eqnarray} Projecting (\ref{sd1}) onto $\nabla_{\alpha}Y^I$ one finds $a_{a_1...a_5}$. Substituting then $a_{a_1...a_5}$ as well as the previously found $\phi_a^a$ into (\ref{sd2}) one obtains the equation for $\pi$ and $b$. The required equation for $t_k$ is then obtained by substituting the redefinition (\ref{redbpi}) in (\ref{1bp}-\ref{sd2}) and by eliminating all the terms linear in $s_k$. Skipping all the computational details, we write down the equation for $t^{I}$ that is found to be of the form \begin{eqnarray} \nonumber (\nabla_a\nabla^a-(k_3+4)(k_3+8))t^{I_3}=D_{123}s^{I_1}s^{I_2} +E_{123}\nabla^a s^{I_1}\nabla_a s^{I_2} +F_{123}\nabla_{(a}\nabla_{b)} s^{I_1}\nabla^{(a}\nabla^{b)} s^{I_2}. \end{eqnarray} To remove the derivative terms we perform the appropriate redefinition of $t^I$ similar to (\ref{red}): \begin{eqnarray} \nonumber t^{I_3}= t'^{I_3}+\sum_{I_1,I_2}\left( J_{I_1I_2I_3}s'^{I_1}s'^{I_2}+L_{I_1I_2I_3}\nabla^a s'^{I_1}\nabla_a s'^{I_2} \right). \end{eqnarray} Introducing the notation $a_{123}=\int Y^{I_1} Y^{I_2} Y^{I_3}$ we quote the final answer \begin{eqnarray} \nonumber &&(\nabla_a\nabla^a-(k_3+4)(k_3+8))t^{I_3}=-t_{I_1I_2I_3}s^{I_1}s^{I_2},\\ \nonumber &&t_{I_1I_2I_3}=a_{123}\frac{4(\Sigma+4)(\alpha_1+2)(\alpha_2+2)\alpha_3 (\alpha_3-1)(\alpha_3-2)(\alpha_3-3)(\alpha_3-4)} {(k_1+1)(k_2+1)(k_3+2)(k_3+4)(k_3+5)}, \end{eqnarray} where $\alpha_3=\frac{1}{2}(k_1+k_2-k_3)$, $\Sigma=k_1+k_2+k_3$.
Taking into account the normalization of the quadratic action for $t_k$ fields (\ref{at}) we obtain the corresponding vertex \begin{eqnarray} \nonumber S_{tss}=\frac{4N^2}{(2\pi )^5} T_{I_1I_2I_3} \int \sqrt{-g_a}~s^{I_1}s^{I_2}t^{I_3} \end{eqnarray} with \begin{eqnarray} \label{vtss} T_{I_1I_2I_3}= a_{123}\frac{2^7(\Sigma+4)(\alpha_1+2)(\alpha_2+2)\alpha_3(\alpha_3-1)(\alpha_3-2)(\alpha_3-3)(\alpha_3-4)} {(k_1+1)(k_2+1)(k_3+3)}. \end{eqnarray} \vskip 0.4cm {\it 4.2 Cubic Couplings of $\phi_k$} \vskip 0.4cm \noindent To find equations of motion for the fields $\phi_k$ coming from the graviton on the sphere we again consider eq.(\ref{greq}) for the indices $M=\alpha$, $N=\beta$: \begin{eqnarray} \nonumber (\nabla_M\nabla^M-2)h_{(\alpha\beta)}&=&\frac{3}{50}\nabla_{(\alpha}\pi\nabla_{\beta)}\pi +\frac{3}{25}\pi\nabla_{(\alpha}\nabla_{\beta)}\pi+ \frac{1}{2}\nabla_{(\alpha}h_{ab}\nabla_{\beta)}h^{ab}\\ \nonumber &+&h_{ab}\nabla_{(\alpha}\nabla_{\beta)}h^{ab}+16\nabla_{(\alpha}\nabla^a b\nabla_{\beta)}\nabla_a b, \end{eqnarray} where this time all the linear terms that are projected out under the projection on $Y_{(\alpha\beta)}$ were omitted. Introducing the notation $p_{123}=\int \nabla^{\alpha}Y^{I_1}\nabla^{\beta}Y^{I_2}Y_{(\alpha\beta)}^{I_3}$ and projecting both sides of the last equation on $Y_{(\alpha\beta)}$ we get an equation for $\phi$: \begin{eqnarray} \nonumber (\nabla_a\nabla^a-k_3(k_3+4))\phi^{I_3}= p_{123}\left( -\frac{3}{50}\pi^{I_1}\pi^{I_2} -\frac{1}{2}h_{ab}^{I_1}h^{ab}_{I_2} +16\nabla^a b^{I_1}\nabla_a b^{I_2} \right) . \end{eqnarray} Finally leaving on the r.h.s. only the contribution of the scalars $s_k$ we obtain \begin{eqnarray} \nonumber &&(\nabla_a\nabla^a-k_3(k_3+4))\phi^{I_3}=-\frac{p_{123}} {5(k_1+1)(k_2+1)}\times \\ \nonumber && \left( 48k_1k_2(k_1+1)(k_2+1)s^{I_1}s^{I_2}-80(k_1+1)(k_2+1)\nabla_a s^{I_1}\nabla^a s^{I_2} +40\nabla_{(a}\nabla_{b)}s^{I_1}\nabla^{(a}\nabla^{b)}s^{I_2} \right) . \end{eqnarray} Performing again a shift of $\phi^I$ to get rid of the derivative terms one arrives at \begin{eqnarray} \nonumber (\nabla_a\nabla^a-k_3(k_3+4))\phi^{I_3}=-\frac{8 p_{123}\Sigma (\Sigma+2)} {(k_1+1)(k_2+1)}(\alpha_3-1)(\alpha_3-2). \end{eqnarray} Taking into account the normalization of the quadratic action for $\phi_k$ we can read off the corresponding vertex $S_{ss\phi}$: \begin{eqnarray} S_{ss\phi}=\frac{4N^2}{(2\pi )^5}\Phi_{I_1I_2I_3} \int \sqrt{-g_a}~s^{I_1}s^{I_2}\phi^{I_3}, \label{ssphi} \end{eqnarray} where $$ \Phi_{I_1I_2I_3}=\frac{4 p_{123}\Sigma (\Sigma+2)} {(k_1+1)(k_2+1)}(\alpha_3-1)(\alpha_3-2). 
$$ \vskip 0.4cm {\it 4.3 Three-point Functions} \vskip 0.4cm \noindent Recall that two- and three-point correlation functions of operators ${\cal O}_{\Delta}$ in a boundary conformal field theory corresponding to scalar fields on AdS are given by \cite{FMMR}: \begin{eqnarray} \langle {\cal O}_{\Delta}(\vec{x} ){\cal O}_{\Delta}(\vec{y} )\rangle =\frac{2}{\pi^2} \frac{\theta (\Delta-1)(\Delta-2)^2}{|\vec{x}-\vec{y}|^{2\Delta}}, \end{eqnarray} \begin{eqnarray} \langle {\cal O}_{\Delta_1}(\vec{x} ){\cal O}_{\Delta_2}(\vec{y} ){\cal O}_{\Delta_3}(\vec{z} )\rangle =\frac{\lambda_{123}} {|\vec{x}-\vec{y}|^{\Delta_1+\Delta_2-\Delta_3}|\vec{x}-\vec{z}|^{\Delta_1+\Delta_3-\Delta_2}|\vec{y}-\vec{z}|^{\Delta_3+\Delta_2-\Delta_1}}, \end{eqnarray} where $\lambda_{123}$ is given by $$ \lambda_{123}=-\varphi_{123} \frac{ \Gamma[\frac12 \left( \Delta_1+\Delta_2+\Delta_3-4 \right)] \Gamma[\bar{\Delta}_1 ] \Gamma[\bar{\Delta}_2 ] \Gamma[\bar{\Delta}_3 ] }{2\pi^4\Gamma(\Delta_1-2)\Gamma(\Delta_2-2)\Gamma(\Delta_3-2)} $$ and $\bar{\Delta}_1=\frac{1}{2}(\Delta_2+\Delta_3-\Delta_1)$. Here $\varphi_{123}$ stands for the coupling of scalar fields (that is a doubled interaction vertex for the fields we consider) and $\theta$ denotes the normalization constant of their quadratic action. Taking into account that a scalar $t^{I_3}$ $(\phi^{I_3})$ corresponds to a YM operator ${\cal O}_{\Delta_3}$ with the conformal weight $\Delta_3=k_3+8$ $(\Delta_3=k_3+4)$, we, therefore find correlation functions of two extended CPOs with this operator. The constant $\lambda_{123}$ reads for both cases as follows: \begin{eqnarray} \nonumber \lambda_{123}(t)=-\frac{4N^2}{(2\pi )^5}\frac{2^8}{\pi^4} \frac{\Gamma\left( \frac{1}{2}\Sigma+3 \right) \Gamma(\alpha_1+4)\Gamma(\alpha_2+4)\Gamma(\alpha_3+1)(\alpha_1+2)(\alpha_2+2) }{(k_1+1)(k_2+1)(k_3+3)\Gamma(k_1-2)\Gamma(k_2-2)\Gamma(k_3+6)}a_{123} \end{eqnarray} and \begin{eqnarray} \nonumber \lambda_{123}(\phi)=-\frac{4N^2}{(2\pi )^5}\frac{2^4}{\pi^4} \frac{\Gamma\left( \frac{1}{2}\Sigma+2 \right) \Gamma(\alpha_1+2)\Gamma(\alpha_2+2)\Gamma(\alpha_3) }{(k_1+1)(k_2+1) \Gamma(k_1-2)\Gamma(k_2-2)\Gamma(k_3+2)}p_{123}. \end{eqnarray} Taking into account the normalization of the two-point functions one can introduce the normalized extended CPO \cite{LMRS}: \begin{eqnarray} \label{nCPO} O_{\Delta}=\frac{(2\pi )^{5/2}}{2N}\frac{\pi}{8(k-1)(k-2)}\left( \frac{k+1}{k(k+2)}\right)^{1/2}{\cal O}_{\Delta} \end{eqnarray} as well as the normalized gauge theory operator corresponding to scalar $t_k$: $$ O_{\Delta}=\frac{(2\pi )^{5/2}}{2N}\frac{\pi}{8(k+6)}\left( \frac{k+3}{(k+2)(k+4)(k+5)(k+7)}\right)^{1/2} {\cal O}_{\Delta}, ~~~\Delta = k+8 $$ and to scalar $\phi_k$: $$ O_{\Delta}=\frac{(2\pi )^{5/2}}{2N} \frac{\pi}{(k+3)^{1/2}(k+2)} {\cal O}_{\Delta}, ~~~\Delta = k+4. $$ With these formulae at hand we can finally write down the normalized constants: \begin{eqnarray} \nonumber \lambda^{norm}_{123}(t)&=&-\frac{(2\pi )^{5/2}}{N}\frac{1}{(2\pi)^{5/2}} \left( \frac{k_1k_2(k_3+1)(k_3+7)}{(k_3+3)(k_3+4)(k_3+5)} \right)^{1/2} \\ \nonumber &\times & \frac{\Gamma(\alpha_1+4)(\alpha_1+2)}{\alpha_1!}\frac{\Gamma(\alpha_2+4)(\alpha_2+2)}{\alpha_2!} \frac{k_3!}{\Gamma(k_3+8)}\langle {\cal C}^{I_1}{\cal C}^{I_2}{\cal C}^{I_3}\rangle \end{eqnarray} and \begin{eqnarray} \nonumber \lambda^{norm}_{123}(\phi)=-\frac{(2\pi )^{5/2}}{N} \frac{(\alpha_1+1)(\alpha_2+1)}{4(2\pi)^{5/2}} \left( \frac{k_1k_2}{(k_3+1)(k_3+2)(k_3+3)} \right)^{1/2} P_{123} . 
\end{eqnarray} Here we used explicit expressions for $a_{123}$ and $p_{123}$ from the Appendix. \section{Cubic couplings of second rank tensors with $s^{I}$} \vskip 0.4cm {\it 5.1 Cubic Couplings} \vskip 0.4cm \noindent Clearly, the coupling of the symmetric second rank tensor $\phi_{(ab)}^k$ with a pair of scalars $s_k$ can be found by studying the corrected equation of motion for $\phi_{(ab)}^k$. The simplest way, however, consists in finding the equations of motion for the field $s_k$ corrected by the quadratic terms each containing one field $\phi_{(ab)}^k$ and $s_k$. This is explained by noting that the field $\phi_{(ab)}^k$ is transverse on-shell and therefore the interaction term, being in the latter case a Lorentz scalar, does not contain derivatives acting on $\phi_{(ab)}^k$. As a consequence, the additional shift needed to get rid of derivative terms is not required. Since the field $s_k$ appears as the mixture (\ref{redbpi}) of $\pi$ and $b$, the equation for $s_k$ again follows from the system (\ref{ctr})-(\ref{sd2}). Clearly this time eqs.(\ref{ctr}) and (\ref{1bp}) read as \begin{eqnarray} \label{ctr1} \nabla_{(\alpha}\nabla_{\beta)}\phi_a^a = \nabla_{(\alpha}\phi_{(ab)}\nabla_{\beta)} \nabla^a\nabla^b \zeta +\phi_{(ab)}\nabla_{(\alpha}\nabla_{\beta)}\nabla^a\nabla^b\zeta +\nabla_a\nabla_b\zeta \nabla_{(\alpha}\nabla_{\beta)}\phi^{(ab)} \end{eqnarray} and \begin{eqnarray} \label{2bp} && (\nabla_M\nabla^M-32)\pi+80\nabla_{\gamma}\nabla^{\gamma}b +\nabla_{\alpha}\nabla^{\alpha}\phi_a^a=\\ \nonumber &&\nabla_a(\phi^{(ab)}\nabla_b \pi)+ \nabla_{(\alpha}\phi_{(ab)}\nabla_{\beta)}\nabla^a\nabla^b \zeta +\phi_{(ab)}\nabla_{(\alpha}\nabla_{\beta)}\nabla^a\nabla^b\zeta +\nabla_a\nabla_b\zeta \nabla_{(\alpha}\nabla_{\beta)}\phi^{(ab)} \end{eqnarray} where we have used the representation (\ref{redh}) for the graviton field $h_{ab}$ and left only the terms contributing to the vertex under consideration. For this reason, the coefficients $\zeta_k$ in $\zeta=\int \zeta^I Y^I$ are now reduced to $\zeta_k=\frac{4}{k+1}s_k$ in comparison with (\ref{zeta}). Again projecting eq.(\ref{ctr1}) onto $\nabla_{\alpha}\nabla_{\beta}Y^I$ one solves for $\phi_a^a$ and after substitution of the solution into (\ref{2bp}) one obtains a closed form equation for $\pi$ and $b$. The second equation for $\pi$ and $b$ follows from eqs.(\ref{sd1}) and (\ref{sd2}) that now acquire the form \begin{eqnarray} \label{sd1'} \nabla_{\alpha}\left( a_{a_1...a_4}+\varepsilon_{a_1...a_5}\nabla^{a_5}b \right) = \varepsilon_{a_1...a_4 a} \left( \phi^{(ab)}\nabla_b\nabla_{\alpha}b \right) \end{eqnarray} and \begin{eqnarray} \label{sd2'} 5\nabla_{[a_1}a_{a_2...a_5]}= \varepsilon_{a_1...a_5}\left( \nabla_{\gamma}^2 b+\frac{1}{2}\phi_a^a-\frac{4}{5}\pi -\frac{1}{2}\phi_{(ab)}\nabla^a\nabla^b\zeta \right) . \end{eqnarray} Omitting the straightforward but lengthy algebraic manipulations we write down the final answer for the Lagrangian vertex describing the interaction of the symmetric second rank tensor $\phi_{(ab)}$ with scalars $s^I$: \begin{eqnarray} \nonumber S_{ssg}=\frac{4N^2}{(2\pi )^5}G_{I_1I_2I_3}\int \sqrt {-g_a} \nabla^a s^{I_1}\nabla^b s^{I_2}\phi_{(ab)}^{I_3}, \end{eqnarray} where $G_{I_1I_2I_3}$ is found to be \begin{eqnarray} \nonumber G_{I_1I_2I_3}=\frac{4(\Sigma+2)(\Sigma+4)\alpha_3(\alpha_3-1)}{(k_1+1)(k_2+1)} a_{123}.
\end{eqnarray} \vskip 0.4cm {\it 5.2 Three-point Functions} \vskip 0.4cm \noindent Denote by ${\cal T}_{ij}^I$ the operator in SYM of the conformal weight $\Delta_G=k+4$ that corresponds to the AdS field $\phi_{(ab)}$. To compute the three-point correlation function of this operator with extended CPOs in the boundary conformal field theory one needs the bulk-to-boundary propagator for the field $ \phi_{(ab)}^{I}$. In principle this can be extracted from the momentum space results of \cite{P}. In the case of three-point correlators it is however more convenient to deal directly with the $x$-space propagator. Recall that the linearized equations of motion for $ \phi_{(ab)}^{I}$ read as \begin{eqnarray} \label{gre1} \nabla_{c}\nabla^{c}\phi_{(ab)}^I+(2-k^2-4k)\phi_{(ab)}^I=0,\quad \nabla^{b}\phi_{(a b)}^I =0. \end{eqnarray} Now one can easily check that the following function \begin{eqnarray} \label{grg} G_{ab~ij}(\omega_0,\vec{x})=\frac{\Delta_G+1}{\Delta_G-1} \omega_0^2{\cal K}_{\Delta_G}(\omega,\vec{x} )J_{ak}(\omega-\vec{x})J_{bl}(\omega-\vec{x}) {\cal E}_{ij, kl} \end{eqnarray} is the bulk-to-boundary Green function for eq.(\ref{gre1}). Here ${\cal E}_{ij, kl}$ denotes the traceless symmetric projector: $$ {\cal E}_{ij, kl}=\frac{1}{2}(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{kj})-\frac{1}{4} \delta_{ij}\delta_{kl}, $$ ${\cal K}_{\Delta} (\omega,\vec{x})$ is a bulk-to-boundary propagator for a scalar field corresponding to an operator of conformal dimension $\Delta$: \begin{eqnarray} \label{skprop} {\cal K}_{\Delta} (\omega,\vec{x})= c_{\Delta}\frac{\omega_0^{\Delta}}{(\omega_0^2+({\vec{\omega}}-\vec{x})^2)^\Delta},\quad c_{\Delta}=\frac{\Gamma(\Delta)}{\pi^2 \Gamma(\Delta-2)}, \end{eqnarray} and $J_{ab}(x)=\delta_{ab}-2\frac{x_a x_b}{x^2}$. Note that function (\ref{grg}) satisfies the transversality condition $\nabla^{a}G_{ab~ij}=0$. The normalization constant $\frac{\Delta_G+1}{\Delta_G-1}$ in (\ref{grg}) is fixed by requiring the corresponding solution of (\ref{gre1}) to reproduce correctly the boundary data in the limit $\omega_0\to 0$. In the case of vanishing AdS mass eq.(\ref{grg}) turns into the graviton bulk-to-boundary propagator \cite{LT}. Having discussed the propagator for $\phi_{(ab)}$ we come back to the three-point correlator that now reads as \begin{eqnarray} \label{OOT} \langle {\cal O}^{I_1}(\vec{x}){\cal O}^{I_2}(\vec{y}){\cal T}_{ij}^{I_3}(\vec{z})\rangle =-\frac{8N^2}{(2\pi )^5}G_{I_1I_2I_3}\int \frac{d^5\omega}{\omega_0^5}\omega_0^4\nabla_a\nabla_b {\cal K}_{\Delta_1}(\omega,\vec{x}){\cal K}_{\Delta_2}(\omega,\vec{y})G_{ab~ij}^{I_3}(\omega, \vec{z}). \end{eqnarray} By the conformal symmetry this correlator is defined up to the normalization constant $\beta_{123}$: \begin{eqnarray} \nonumber &&\langle {\cal O}^{I_1}(\vec{x}){\cal O}^{I_2}(\vec{y}){\cal T}_{ij}^{I_3}(\vec{z})\rangle =\\ &&\frac{\beta_{123}} {|\vec{x}-\vec{y}|^{\Delta_1+\Delta_2-\Delta_G}|\vec{x}-\vec{z}|^{\Delta_1+\Delta_G-\Delta_2} |\vec{y}-\vec{z}|^{\Delta_2+\Delta_G-\Delta_1}} \left( \frac{Z_iZ_j}{Z^2}-\frac{1}{d}\delta_{ij}\right) , \nonumber \end{eqnarray} where \begin{eqnarray} \label{defzi} Z_i=\frac{(\vec{x}-\vec{z})_i}{(\vec{x}-\vec{z})^2}-\frac{(\vec{y}-\vec{z})_i}{(\vec{y}-\vec{z})^2}. 
\end{eqnarray} This constant is then found by explicit evaluation of the integral (\ref{OOT}): \begin{eqnarray} \beta_{123}&=&-\frac{4N^2}{(2\pi )^5} 4\pi^2c_{\Delta_1}c_{\Delta_2}c_{\Delta_G} G_{I_1I_2I_3} \frac{\Delta_G+1}{\Delta_G-1} \frac{\Gamma\left( \frac{1}{2} (\Delta_1+\Delta_2+\Delta_G-2) \right) }{\Gamma(\Delta_G+2) } \\ \nonumber & \times & \frac{\Gamma\left(\frac{1}{2} (\Delta_1+\Delta_G-\Delta_2+2) \right) \Gamma\left(\frac{1}{2} (\Delta_2+\Delta_G-\Delta_1+2) \right) \Gamma\left(\frac{1}{2} (\Delta_1+\Delta_2-\Delta_G+2) \right) } { \Gamma(\Delta_1)\Gamma(\Delta_2) } \end{eqnarray} Substituting here the normalization constants and $G_{I_1I_2I_3}$ we finally find \begin{eqnarray} \nonumber \beta_{123}&=&-\frac{4N^2}{(2\pi )^5}\frac{64}{\pi^4}\left( \frac{k_3+2}{(k_1+1)(k_2+1)}\right) \frac{\Gamma\left( \frac{1}{2}\Sigma+3 \right) \Gamma\left(\alpha_1+3 \right) \Gamma\left(\alpha_2+3\right) \Gamma\left(\alpha_3+1\right) } {\Gamma(k_1-2)\Gamma(k_2-2)\Gamma(k_3+5) }a_{123} \end{eqnarray} The two-point correlation function of the YM operator ${\cal T}_{ij}$ corresponding to the symmetric second rank tensor field $\phi_{(ab)}$ was computed in \cite{P} $$ \langle {\cal T}_{ij}^I(\vec{x}){\cal T}_{kl}^J(\vec{y})\rangle =\frac{4N^2}{(2\pi )^5} \frac{1}{\pi^2}(\Delta_G-2)^2(\Delta_G+1)\frac{\delta^{IJ}}{|\vec{x}-\vec{y}|^{2\Delta_G}} {\cal E}_{ij~i'j'}J_{i'k}(\vec{x}-\vec{y})J_{j'l}(\vec{x}-\vec{y}). $$ Therefore, introducing the normalized operator $$ T_{ij}^I=\frac{(2\pi )^{5/2}}{2N} \frac{\pi}{(\Delta_G-2)(\Delta_G+1)^{1/2}}{\cal T}_{ij}^I $$ one obtains the correlation function of two normalized CPOs and $T_{ij}^I$ with the constant $\beta_{123}^{norm}$: \begin{eqnarray} \nonumber \beta_{123}^{norm}&=&-\frac{(2\pi )^{5/2}}{N} \frac{1}{2^{3/2}\pi^{5/2}} \left( k_1 k_2(k_3+1)(k_3+2)(k_3+5)\right)^{1/2} \\ \nonumber &\times & \frac{(\alpha_1+1)(\alpha_1+2)(\alpha_2+1)(\alpha_2+2)} {(k_3+1)(k_3+2)(k_3+3)(k_3+4)(k_3+5)} \langle {\cal C}^{I_1}{\cal C}^{I_2}{\cal C}^{I_3}\rangle , \end{eqnarray} where the explicit expression for $a_{123}$ was used. Note that the variable $\alpha_3$ completely disappeared from the final answer. \section{Cubic couplings of two scalars $s^I$ with vector fields} \vskip 0.4cm {\it 6.1 Cubic Couplings} \vskip 0.4cm \noindent To obtain cubic couplings of two scalars $s^I$ with vector fields we need the equations of motion for the vector fields up to the second order. The equations of motion for the vector fields $\phi_a^\alpha$ can be derived from the following components of the self-duality equation \begin{eqnarray} &&f_{\alpha abcd}-f^*_{\alpha abcd}+T^{(1)}_{\alpha abcd}+ T(h,f^*)_{\alpha abcd}+T(h)_{\alpha abcd}=0, \label{phi1}\\ &&f_{\alpha\beta abc}-f^*_{\alpha\beta abc}+T^{(1)}_{\alpha\beta abc}+ T(h,f^*)_{\alpha\beta abc}+T(h)_{\alpha\beta abc}=0. \label{phi2} \end{eqnarray} From the definition of $f$ we have $$ f_{\alpha\beta abc}= 2\nabla_{[\alpha}a_{\beta ]abc},\quad f^*_{\alpha\beta abc}= \varepsilon_{abcde}(\nabla^d\nabla_\alpha\phi^e_\beta - \nabla^d\nabla_\beta\phi^e_\alpha ). $$ Here we omitted all terms dependent on the components of the 4-form potential of the form $a_{ab\alpha\beta}$ which are not relevant for the cubic couplings under consideration. From the definition of the tensors $T$ (\ref{deft}) we can easily see that $$T^{(1)}_{\alpha\beta abc}= T(h,f^*)_{\alpha\beta abc}= T(h)_{\alpha\beta abc}=0, $$ if we keep only terms which may give a contribution to the cubic couplings.
Thus eq.(\ref{phi2}) does not get relevant quadratic corrections, and, therefore, \begin{eqnarray} a_{\alpha abc}=\varepsilon_{abcde}\nabla^{d}\phi_\alpha^{e} \label{phi3} \end{eqnarray} Taking into account eq.(\ref{phi3}) and formulas (\ref{deft}) for the tensors $T$, one can rewrite eq.(\ref{phi1}) in the form \begin{eqnarray} (\nabla_b^2 +\nabla_\beta^2 -4)\phi_\alpha^a -\nabla_b\nabla^a\phi_\alpha^b -h_\alpha^a + \frac12 h_b^b\nabla^a\nabla_\alpha b-\frac{3}{10}\pi\nabla^a\nabla_\alpha b- h^{ab}\nabla_b\nabla_\alpha b=0 \label{phi4} \end{eqnarray} Here we have omitted all terms that are projected out under the projection onto $Y_\alpha$. Expanding all the fields in spherical harmonics and using eqs.(\ref{redbpi}-\ref{zeta}), we obtain equations of motion for the vector fields $\phi_a^{I}$ \begin{eqnarray} &&\nabla_b^2\phi_a^3 -\nabla^b\nabla_a\phi_b^3 -(k_3+1)(k_3+3)\phi_a^3 -h_a^3 =\nonumber\\ &&-t_{123}\left( \frac{4k_2(k_2+2)}{k_2+1}s_2\nabla_as_1 + \frac{4}{k_2+1}\nabla_a\nabla_bs_2\nabla^bs_1\right), \label{phi5} \end{eqnarray} where $t_{123}\equiv t_{I_1I_2I_3}=\int\nabla^\alpha Y^{I_1}Y^{I_2}Y_\alpha^{I_3}$, $\phi_3$ means $\phi^{I_3}$ and so on, and summation over 1 and 2 is assumed. Now we proceed with the equations of motion for $h^\alpha_a$. These equations can be derived from the $a,\alpha$ components of eq.(\ref{greq}). Omitting all intermediate calculations, we present the equations in the form \begin{eqnarray} &&\nabla_b^2h_a^3 -\nabla^b\nabla_ah_b^3 -((k_3+1)(k_3+3)+8)h_a^3 -16(k_3+1)(k_3+3)\phi_a^3 =\nonumber\\ &&2t_{123}f(k_1,k_2)s_1\nabla_as_2 - 16t_{123}\frac{k_2-5}{k_1+1}\nabla_a\nabla_bs_1\nabla^bs_2 +\frac{8t_{123}}{(k_1+1)(k_2+1)}\nabla_a\nabla_b\nabla_cs_2\nabla^b\nabla^cs_1,\nonumber\\ \label{phi6} \end{eqnarray} where \begin{eqnarray} &&f(k_1,k_2)=\frac{2k_1k_2(k_2-1)}{k_2+1}- \frac{4k_1(k_2^2-4k_2-4)}{k_2+1}- \frac{5k_1(k_1-1)k_2(k_2-1)}{(k_1+1)(k_2+1)}+\nonumber\\ &&\frac{4k_1(k_1-4)k_2(k_2-1)}{(k_1+1)(k_2+1)}- \frac{8k_1(k_1-4)}{(k_1+1)(k_2+1)}-k_1k_2+ 48k_1-8k_1(k_1+4). \nonumber \end{eqnarray} The equations of motion for vector fields $A$ and $C$ are linear combinations of the two above and can be written in the form \begin{eqnarray} &&\nabla_b^2V_a^3 -\nabla^b\nabla_aV_b^3 -m_3^2V_a^3=\nabla_aV^3 + D_{123}s_1\nabla_as_2+\nonumber\\ &&\quad E_{123}\nabla^bs_1\nabla_a\nabla_bs_2+ F_{123}\nabla^b\nabla^cs_1\nabla_a\nabla_b\nabla_cs_2, \label{phi7} \end{eqnarray} where $V$ may be either $A$ or $C$, and the constants $D$, $E$, $F$ are antisymmetric with respect to the permutation of the indices 1 and 2. 
We can remove the higher-derivative terms from the equation by means of the following field redefinition \begin{eqnarray} V_a^3\to V_a^3 -\frac{1}{m_3^2}\nabla_a\tilde{V}^3 + J_{123}s_1\nabla_as_2 + L_{123}\nabla^bs_1\nabla_a\nabla_bs_2 , \label{phi8} \end{eqnarray} where \begin{eqnarray} &&2L_{123}=F_{123}\nonumber\\ &&2J_{123}+L_{123}(m_1^2+m_2^2-m_3^2-12)=E_{123}\nonumber\\ &&\tilde{V}^3=V^3-(J_{123}-2L_{123})m_1^2s_1s_2- L_{123}m_1^2\nabla_bs_1\nabla^bs_2 \nonumber \end{eqnarray} Then eq.(\ref{phi7}) acquires the form \begin{eqnarray} \nabla_b^2V_a^{I_3} -\nabla^b\nabla_aV_b^{I_3} -m_3^2V_a^{I_3} + \sum_{I_1,I_2}v_{I_1I_2I_3}s^{I_1}\nabla_as^{I_2}=0, \label{phi9} \end{eqnarray} where \begin{eqnarray} v_{I_1I_2I_3}=-D_{I_1I_2I_3}+ J_{I_1I_2I_3}(m_1^2+m_2^2-m_3^2) - 2L_{I_1I_2I_3}(m_1^2+m_2^2) \label{phi10} \end{eqnarray} A straightforward calculation of the constants $v$ gives \begin{eqnarray} \label{phi11} &&v_{I_1I_2I_3}(A)=\frac{4(\alpha_3-1/2)(\Sigma-1)(\Sigma+1)(\Sigma+3)} {(k_1+1)(k_2+1)}t_{123}\\ &&v_{I_1I_2I_3}(C)=\frac{16(\alpha_3-1/2)(\alpha_3-3/2)(\alpha_3-5/2)(\Sigma+3)} {(k_1+1)(k_2+1)}t_{123} \label{phi12} \end{eqnarray} Taking into account the normalization of the quadratic actions (\ref{aA}) and (\ref{aC}), we get the corresponding cubic terms \begin{eqnarray} \label{V} S_{ssv}= \frac{4N^2}{(2\pi )^5}V_{I_1I_2I_3}\int \sqrt{-g_{a}}~ s^{I_1}\nabla^a s^{I_2}V_a^{I_3}, \end{eqnarray} where \begin{eqnarray} \label{phi13} &&V_{I_1I_2I_3}(A)= \frac{2(k_3+1)(\alpha_3-1/2)(\Sigma-1)(\Sigma+1)(\Sigma+3)} {(k_1+1)(k_2+1)(k_3+2)}t_{123}\\ \label{phi14} &&V_{I_1I_2I_3}(C)= \frac{8(k_3+3)(\alpha_3-1/2)(\alpha_3-3/2)(\alpha_3-5/2)(\Sigma+3)} {(k_1+1)(k_2+1)(k_3+2)}t_{123} \end{eqnarray} \vskip 0.4cm {\it 6.2 {Three-point functions }} \vskip 0.4cm \noindent Denote by ${\cal R}_i^{I_3}$ the operator in SYM that corresponds to $V_i^{I_3}$ on the gravity side. Then the three-point function of two scalars and a vector field is given by the integral \begin{eqnarray} \label{tpV} \langle {\cal O}^{I_1}(\vec{x} ){\cal O}^{I_2}(\vec{y} ){\cal R}_i^{I_3}(\vec{z} )\rangle = \frac{8N^2}{(2\pi )^5}V_{I_1I_2I_3}\int \frac{d^5\omega}{\omega_0^5} \omega_0^2 {\cal K}_{\Delta_1} (\omega,\vec{x})\partial_{b}{\cal K}_{\Delta_2}(\omega,\vec{y}) G_{bi}^{I_3}(\omega,\vec{z}). \end{eqnarray} Here ${\cal K}_{\Delta} (\omega,\vec{x})$ with $\Delta =k$ is a bulk-to-boundary propagator (\ref{skprop}) for $s^{I}$ and $G_{a i}(\omega , \vec{x})$ is a bulk-to-boundary propagator for a massive vector field $V_a^{I_3}$ with a mass $m(V)$: \begin{eqnarray} \nonumber G_{a i}(\omega , \vec{x})=\frac{\Delta_v}{\Delta_v-1}\omega_0^{-1} {\cal K}_{\Delta_v}(\omega,\vec{x} )J_{a i}(\omega-\vec{x} ), \end{eqnarray} where $J_{ab}(x)=\delta_{ab}-2\frac{x_a x_b}{x^2}$. \noindent In the last formula $\Delta_v=2+\sqrt{1+m^2(V)}$ and, thus $\Delta_v=k+2$ for the field $A_a^I$ and $\Delta_v=k+6$ for $C_a^I$. Note that $G_{a i}$ obeys the transversality condition $\nabla^a G_{a i}=0$. 
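As an aside (an added illustration, not part of the original text; the index conventions below are only a plausible reading of the formulas above), the scalar propagator (\ref{skprop}) and the massive vector bulk-to-boundary propagator $G_{a i}$ quoted above can be evaluated numerically as follows, with $\omega=(\omega_0,\vec{\omega})$ a bulk point and $\vec{x}$ a boundary point:
\begin{verbatim}
# Illustrative numerical evaluation of the propagators quoted above
# (an added sketch, not taken from the original paper).
import numpy as np
from math import gamma, pi

def K(delta, w0, w_vec, x_vec):
    # Scalar bulk-to-boundary propagator, eq. (skprop).
    c = gamma(delta) / (pi**2 * gamma(delta - 2))
    return c * w0**delta / (w0**2 + np.sum((w_vec - x_vec)**2))**delta

def J(x):
    # J_{ab}(x) = delta_{ab} - 2 x_a x_b / x^2 for a 5-vector x.
    x = np.asarray(x, dtype=float)
    return np.eye(len(x)) - 2.0 * np.outer(x, x) / np.dot(x, x)

def G_vector(delta_v, w0, w_vec, x_vec):
    # Massive vector propagator G_{a i}; a runs over the 5 AdS directions,
    # i over the 4 boundary directions (components 1..4 of omega - x).
    diff5 = np.concatenate(([w0], w_vec - x_vec))
    return (delta_v / (delta_v - 1.0)) / w0 * K(delta_v, w0, w_vec, x_vec) * J(diff5)[:, 1:]

# Example: for A_a^k one has Delta_v = k + 2 (here k = 3).
print(G_vector(5, 0.1, np.zeros(4), np.ones(4)).shape)   # (5, 4)
\end{verbatim}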
The condition of conformal covariance defines the correlator (\ref{tpV}) uniquely up to the coefficient $\lambda_{123}$: \begin{eqnarray} \label{tpV1} &&\langle {\cal O}^{I_1}(\vec{x} ){\cal O}^{I_2}(\vec{y} ){\cal R}_i^{I_3}(\vec{z} )\rangle =\\ &&\frac{\lambda_{123}} {|\vec{x}-\vec{y}|^{\Delta_1+\Delta_2-\Delta_v}|\vec{x}-\vec{z}|^{\Delta_1+\Delta_v-\Delta_2} |\vec{y}-\vec{z}|^{\Delta_2+\Delta_v-\Delta_1}} \left( \frac{|\vec{x}-\vec{z}||\vec{y}-\vec{z}|}{|\vec{x}-\vec{y}|} Z_i \right), \nonumber \end{eqnarray} with $$ Z_i=\frac{(\vec{x}-\vec{z})_i}{(\vec{x}-\vec{z})^2}-\frac{(\vec{y}-\vec{z})_i}{(\vec{y}-\vec{z})^2}. $$ Applying the inversion method of \cite{FMMR} to integrate (\ref{tpV}) one finds for $\lambda_{123}$ the following answer \begin{eqnarray} \nonumber \lambda_{123}&=&\frac{8N^2}{(2\pi )^5}\frac{1}{\pi^4}V_{I_1I_2I_3} \frac{(\Delta_v-2)\Gamma\left( \frac{1}{2} (\Delta_1+\Delta_2+\Delta_v-3) \right) }{\Gamma(\Delta_v) } \\ \nonumber & \times & \frac{\Gamma\left(\frac{1}{2} (\Delta_1+\Delta_v-\Delta_2+1) \right) \Gamma\left(\frac{1}{2} (\Delta_2+\Delta_v-\Delta_1+1) \right) \Gamma\left(\frac{1}{2} (\Delta_1+\Delta_2-\Delta_v+1) \right) } { \Gamma(\Delta_1-2)\Gamma(\Delta_2-2) } \end{eqnarray} For the field $A^I$ the last formula reads as \begin{eqnarray} \nonumber \lambda_{123}(A)&=&\frac{4N^2}{(2\pi )^5}\frac{2^5}{\pi^4} \frac{\Gamma\left( \frac{1}{2}\Sigma+\frac{5}{2}\right) }{(k_1+1)(k_2+1)(k_3+2)} \frac{\Gamma\left(\alpha_1+3/2 \right) \Gamma\left( \alpha_2+3/2\right) \Gamma\left( \alpha_3+1/2\right) } { \Gamma(k_1-2)\Gamma(k_2-2)\Gamma(k_3) } t_{123} \end{eqnarray} while for $C^I$: \begin{eqnarray} \nonumber \lambda_{123}(C)&=&\frac{4N^2}{(2\pi )^5}\frac{2^5}{\pi^4} \frac{\Gamma\left( \frac{1}{2}\Sigma+\frac{5}{2}\right) (k_3+3)(k_3+4)}{(k_1+1)(k_2+1)(k_3+2)} \frac{\Gamma\left(\alpha_1+7/2 \right) \Gamma\left( \alpha_2+7/2\right) \Gamma\left( \alpha_3+1/2\right) } { \Gamma(k_1-2)\Gamma(k_2-2)\Gamma(k_3+6) } t_{123} \end{eqnarray} The two-point correlator corresponding to a massive vector field on the AdS space was found in \cite{MV}: \begin{eqnarray} \langle {\cal R}_{i}^I(\vec{x}), {\cal R}_j^J(\vec{y})\rangle =\frac{2}{\pi^2}\theta \Delta_v (\Delta_v-1)^2 \frac{\delta^{IJ}}{|\vec{x}-\vec{y}|^{2\Delta_v}}J_{ij}(\vec{x}-\vec{y}), \end{eqnarray} where the constant $\theta$ accounts for our normalization of the quadratic action for the vector fields and is equal to $\theta=\frac{4N^2}{(2\pi )^5}\frac{k+1}{2(k+2)}$ for the field $A$ and to $\theta=\frac{4N^2}{(2\pi )^5}\frac{k+3}{2(k+2)}$ for $C$, respectively. We introduce a normalized operator $R_i^I$ with the two-point correlation function $$ \langle R_{i}^I(\vec{x}), R_j^J(\vec{y})\rangle =\frac{\delta^{IJ}}{|\vec{x}-\vec{y}|^{2\Delta_v}}J_{ij}(\vec{x}-\vec{y}). $$ Explicitly $R_i^I$ is given by $$ R_i^I=\frac{(2\pi )^{5/2}}{2N}\frac{\pi}{(k+1)^{3/2}}{\cal R}_{i}^I $$ for the YM operator corresponding to $A_a^I$ and $$ R_i^I=\frac{(2\pi )^{5/2}}{2N}\frac{\pi}{(k+5)}\left( \frac{k+2}{(k+3)(k+6)} \right)^{1/2}{\cal R}_{i}^I $$ for $C_a^I$.
By using these formulae, the definition (\ref{nCPO}) of the normalized CPO, and the expression for $t_{123}$ from the Appendix, one gets the correlation functions of normalized operators \begin{eqnarray} \nonumber \lambda_{123}^{norm}(A)&=&\frac{(2\pi )^{5/2}}{N} \frac{1}{4\pi^{5/2}} \left( \frac{k_1k_2}{k_3+2}\right)^{1/2} \frac{k_3(\alpha_1 +1/2)(\alpha_2 +1/2)}{(k_3+1)^2}T_{123} \end{eqnarray} \begin{eqnarray} \nonumber \lambda^{norm}_{123}(C)&=&\frac{(2\pi )^{5/2}}{N} \frac{1}{4\pi^{5/2}} \left( \frac{k_1k_2(k_3+3)}{(k_3+1)(k_3+6)}\right)^{1/2} \frac{k_3+4}{k_3+5}\\ \nonumber &\times&\frac{k_3!\Gamma (\alpha_1+7/2)\Gamma (\alpha_2+7/2)} {\Gamma (k_3+6)(\alpha_1-1/2)!(\alpha_2-1/2)!}T_{123} \end{eqnarray} \section{Conclusion} In this paper we obtained the cubic couplings in type IIB supergravity on $AdS_5\times S^5$ involving two scalar fields $s^I$ and the corresponding three-point functions by using the covariant equations of motion and the quadratic action. Since all the fields we considered correspond to operators which are descendants of CPOs, one may, in principle, derive the same results directly from superconformal invariance. This would require a detailed study of superconformal Ward identities in SYM$_4$ which, to our knowledge, has not been carried out yet. In most cases, to find a cubic coupling of two scalars $s^I$ with a field $F$, we used a corrected equation of motion for the field $F$. We saw that to get rid of higher-derivative terms we had to make a field redefinition of the form (\ref{red}). For this reason, a field $F$ corresponds not to a descendant of a CPO, but to a properly extended operator which includes products of CPOs. In fact, one can obtain the cubic couplings by using corrected equations of motion for scalars $s^I$ as was done in the case of the graviton couplings. We have done that to derive the cubic couplings of two scalars $s^I$ with the scalars $\phi^I$, and with the vector fields, and we have indeed obtained the same results (\ref{ssphi}) and (\ref{V}). The fact that we derived the same vertices from different equations also confirms the correct normalization of the quadratic action for type IIB supergravity \cite{AF3}. It is worth noting that contrary to the graviton case considered in section 5, in these cases to remove higher-derivative terms from the corrected equations of motion for $s^I$ we had to make the following redefinitions of the scalars $s^I$ $$ s_1\to s_1 +J_{123}s_2\phi_3 +L_{123}\nabla_a s_2\nabla^a\phi_3$$ and $$s_1\to s_1 +J_{123}\nabla_a s_2V^a_3 + L_{123}\nabla_a\nabla_b s_2\nabla^b V^a_3.$$ This implies that the extended CPOs corresponding to the scalars $s^I$ that were discussed in section 2 have to depend on products of CPOs and their descendants. Unfortunately, the knowledge of the three-point functions obtained in the paper does not allow one to fix the explicit form of the extended CPOs uniquely. It is worth noting that the cubic couplings of three scalars $s$ vanish when any of the $\alpha$'s vanishes \cite{LMRS}. The cubic couplings studied in this paper vanish if $\alpha_3$ takes special values, and in most of the cases there are several such values of $\alpha_3$. Since $\alpha_1$ and $\alpha_2$ have to be non-negative, the cubic couplings have no zeroes at $\alpha_1$ and $\alpha_2$. However, in all of the three-point functions considered, zeroes of the cubic couplings are cancelled by poles in the general expressions for the three-point functions, just as in the case of the three-point functions of extended CPOs.
This gives us a reason to believe that for generic values of conformal dimensions the three-point functions obtained coincide with the three-point functions of CPOs and their descendants. The next natural step is to find quartic couplings of scalars $s^I$, and to compute four-point functions of extended CPOs. We expect that the quartic couplings vanish if, say, $k_4=k_1+k_2+k_3$, because in this case there is no exchange diagram, and all contributions to the four-point functions may be given only by the quartic couplings. However, the four-point functions in this case are nonsingular at $\vec{x}_1=\vec{x_2}$, and it seems to be impossible to reproduce such a coordinate dependence via supergravity with a nonzero on-shell quartic coupling. Finally, it would be interesting to find the supergravity fields that correspond to CPOs, and to compute their cubic couplings. A similar problem exists in the case of AdS compactifications of 11-dimensional supergravity, where analogous cubic couplings \cite{CFMc,BZ} also have zeroes. In the 11-dimensional case the problem seems to be simpler because the covariant action is known. \vskip 1cm {\bf ACKNOWLEDGMENT} We would like to thank Prof. S.Theisen and Prof. J.Wess for kind hospitality at the University of Munich. We are grateful to Prof. S.Theisen and to S.Kuzenko for valuable discussions and to F.Bastianelli and R.Zucchini for pointing out the factor 2 in the Green functions missed in the first version of the paper. The work of G.A. was supported by the Alexander von Humboldt Foundation and in part by the RFBI grant N96-01-00608, and the work of S.F. was supported by the U.S. Department of Energy under grant No. DE-FG02-96ER40967 and in part by the Alexander von Humboldt Foundation. \section{Appendix} We follow \cite{LMRS} describing spherical harmonics on $S^5$. The scalar spherical harmonics $Y^I$ are defined by \begin{eqnarray} \label{ssh} Y^I =z(k)^{-1/2}C^{I}_{i_1...i_k}x^{i_1}\cdots x^{i_k} \end{eqnarray} where $C^I_{i_1\cdots i_k}$ are totally symmetric traceless rank $k$ orthonormal tensors of $SO(6)$: $\langle C^IC^J\rangle =C^I_{i_1\cdots i_k}C^J_{i_1\cdots i_k}=\delta^{IJ}$, $x^i$ are the Cartesian coordinates of the ${\bf R}^6$ in which $S^5$ is embedded, and $$z(k)=\frac{\pi^3}{2^{k-1}(k+1)(k+2)}$$ The scalar spherical harmonics are orthonormal and satisfy the relation \begin{eqnarray} \label{ssh2} &&\int~ Y^{I_1}Y^{I_2}Y^{I_3}=a_{123} \\ \nonumber &&a_{123}=(z(k_1)z(k_2)z(k_3))^{-1/2} \frac{\pi^3}{(\frac12\Sigma +2)!2^{\frac12 (\Sigma -2)}} \frac{k_1!k_2!k_3!}{\alpha_1!\alpha_2!\alpha_3!}\langle C^{I_1}C^{I_2}C^{I_3}\rangle , \end{eqnarray} where $\alpha_i =\frac 12 (k_j+k_l-k_i)$, $j\neq l\neq i$, and $\langle C^{I_1}C^{I_2}C^{I_3}\rangle $ is the unique $SO(6)$ invariant obtained by contracting $\alpha_1$ indices between $C^{I_2}$ and $C^{I_3}$, $\alpha_2$ indices between $C^{I_3}$ and $C^{I_1}$, and $\alpha_3$ indices between $C^{I_2}$ and $C^{I_1}$. A vector spherical harmonic is defined as a tangent component of the following vector \begin{eqnarray} Y_m^I=z(k)^{-1/2}C^{I}_{m;i_1...i_k}x^{i_1}\cdots x^{i_k} \label{defvh} \end{eqnarray} where the tensor $C^{I}_{m;i_1...i_k}$ is symmetric and traceless with respect to $i_1,...,i_k$, and its symmetric part vanishes. 
The tensors are orthonormal $$ C^{I}_{m;i_1...i_k}C^{J}_{n;i_1...i_k}=\delta^{IJ}\delta_{mn} $$ The vector spherical harmonics are orthonormal and satisfy the relation \begin{eqnarray} &&\int~ \nabla^\alpha Y^{I_1}Y^{I_2}Y_\alpha^{I_3}=t_{123}\nonumber\\ &&t_{123}= \frac{\pi^3}{k_3+1}\frac{(z(k_1)z(k_2)z(k_3))^{-1/2}} {(\frac12 (\Sigma +3))! 2^{\frac12 (\Sigma -3)}}\frac{k_1!k_2!k_3!}{(\alpha_1-\frac12)! (\alpha_2-\frac12)!(\alpha_3-\frac12)!}T_{123} \label{t123} \end{eqnarray} where \begin{eqnarray} T_{123}=&&C^{I_1}_{mi_1...i_{p_2}j_1...j_{p_3}} C^{I_2}_{j_1...j_{p_3}l_1...l_{p_1}} C^{I_3}_{m;l_1...l_{p_1}i_1...i_{p_2}}-\nonumber\\ &&C^{I_1}_{i_1...i_{p_2+1}j_1...j_{p_3}} C^{I_2}_{j_1...j_{p_3}l_1...l_{p_1-1}m} C^{I_3}_{m;l_1...l_{p_1-1}i_1...i_{p_2+1}} \label{v123} \end{eqnarray} and $p_1=\alpha_1 +\frac12$, $p_2=\alpha_2-\frac12$, $p_3=\alpha_3-\frac12$. A tensor spherical harmonic is defined as a projection of the following six-dimensional tensor onto the sphere. \begin{eqnarray} Y_{mn}=z(k)^{-1/2}C^{I}_{mn;i_1...i_k}x^{i_1}\cdots x^{i_k}, \label{deftenh} \end{eqnarray} where the tensor $C^{I}_{mn;i_1...i_k}$ is symmetric and traceless with respect to $i_1,...,i_k$, and $m,n$, and its symmetric part vanishes, i.e. $$ C^{I}_{mn;i_1...i_k}+C^{I}_{mi_1;n...i_k}+\cdots + C^{I}_{mi_k;i_1...i_{k-1}n}=0 $$ The tensors are orthonormal $$ C^{I}_{m_1n_1;i_1...i_k}C^{J}_{m_2n_2;i_1...i_k}=\delta^{IJ}\delta_{m_1n_1;m_2n_2} $$ Then, we get that the tensor spherical harmonics are orthonormal and satisfy the relation \begin{eqnarray} &&\int~ \nabla^\alpha Y^{I_1}\nabla^\beta Y^{I_2}Y_{(\alpha\beta )}^{I_3}=p_{123}\nonumber\\ &&p_{123}=(z(k_1)z(k_2)z(k_3))^{-1/2} \frac{\pi^3}{(\frac12\Sigma +1)! 2^{\frac12\Sigma }}\cdot\frac{k_1!k_2!k_3!}{\alpha_1 ! \alpha_2 !(\alpha_3-1)!}P_{123}, \label{s123} \end{eqnarray} where \begin{eqnarray} P_{123}=C^{I_1}_{mi_1...i_{p_2}j_1...j_{p_3}} C^{I_2}_{nj_1...j_{p_3}l_1...l_{p_1}} C^{I_3}_{mn;l_1...l_{p_1}i_1...i_{p_2}} \label{S123} \end{eqnarray} and $p_1=\alpha_1$, $p_2=\alpha_2$, $p_3=\alpha_3-1$. In deriving the equations of motions for scalar fields $t_k$ and for tensor $\phi_{(ab)}^k$ one comes across a number of integrals of scalar spherical harmonics, all of them can be reduced to $a_{123}$. Introducing the concise notation $f(k)=k(k+4)$ we present below the corresponding formulae: \begin{eqnarray} \nonumber \int \nabla^{\alpha}Y^{I_1}Y^{I_2}\nabla_{\alpha}Y^{I_3} =\frac{1}{2}(f(k_1)+f(k_3)-f(k_2))a_{123}, \end{eqnarray} \begin{eqnarray} \nonumber \int \nabla^{(\alpha}\nabla^{\beta)}Y^{I_1}\nabla_{\alpha}Y^{I_2}\nabla_{\beta}Y^{I_3}&=& \left(\frac{1}{10}f(k_1)f(k_2)+\frac{1}{10}f(k_1)f(k_3)+\frac{1}{2}f(k_2)f(k_3) \right. \\ \nonumber &-&\left.\frac{1}{4}f(k_2)^2-\frac{1}{4}f(k_3)^2+\frac{3}{20}f(k_1)^2 \right) a_{123}, \end{eqnarray} \begin{eqnarray} \nonumber \int \nabla^{(\alpha}\nabla^{\beta)}Y^{I_1}Y^{I_2}\nabla_{\alpha}\nabla_{\beta}Y^{I_3}&=& \frac{1}{2}\left( -f(k_1)f(k_2)-f(k_2)f(k_3)+\frac{3}{5}f(k_1)f(k_3) +\frac{1}{2}f(k_1)^2 \right. \\ \nonumber &+&\frac{1}{2}\left. f(k_2)^2+\frac{1}{2}f(k_3)^2 -4(f(k_1)+f(k_3)-f(k_2)) \right) a_{123}. \end{eqnarray} Analogously, when computing the interaction vertex $S_{ssv}$ from equations of motion for scalars $s_k$ one finds two integrals involving the vector harmonics $Y_{\alpha}^I$. 
Both of them are expressed via $t_{123}$: \begin{eqnarray} \nonumber \int \nabla^{(\alpha}\nabla^{\beta)}Y^{I_1}Y^{I_2}\nabla_{\alpha}Y_{\beta}^{I_3}&=& \frac{1}{2}\left( (k_3+1)(k_3+3)-8+f(k_1)-f(k_2) \right) t_{123} \\ \nonumber \int \nabla^{(\alpha}\nabla^{\beta)}Y^{I_1}\nabla_{\alpha}Y^{I_2}Y_{\beta}^{I_3} &=&\frac{1}{2}\left( f(k_2)+\frac{3}{5}f(k_1) - (k_3+1)(k_3+3) \right) t_{123}. \end{eqnarray} Finally the derivation of the $S_{ss\phi}$-vertex from the equations of motion for scalars $s_k$ requires the knowledge of the following integrals: \begin{eqnarray} \nonumber \int \nabla^{(\alpha}\nabla^{\gamma)}Y^{I_1}\nabla^{\beta}\nabla_{\gamma} Y^{I_2}Y_{(\alpha\beta)}^{I_3}&=& \frac{1}{10}(3f(k_1)+5f(k_2)-5k_3^2-20k_3-30)p_{123} \\ \nonumber \int \nabla^{(\alpha}\nabla^{\beta)}Y^{I_1}\nabla_{\gamma} Y^{I_2}\nabla^{\gamma}Y_{(\alpha\beta)}^{I_3}&=& \frac{1}{2}(f(k_1)-f(k_2)-k_3^2-4k_3-8) p_{123}. \end{eqnarray} \newpage
\section{Introduction} Glioblastoma (GBM) is the most malignant and frequently occurring type of primary brain tumor in adults. Despite the development of various aggressive treatments, patients with GBM inevitably suffer tumor recurrence with an extremely poor prognosis \cite{au2}. The dilemma in the clinical management of post-treatment patients remains the precise assessment of treatment responsiveness. However, such assessment still relies mostly on pathological evaluations of biopsies~\cite{au1}. Magnetic resonance imaging (MRI) is considered the best non-invasive assessment method of GBM treatment responsiveness~\cite{au62}. Compared with anatomic MRI, such as T1-weighted ($T_1$w), gadolinium enhanced $T_1$w (Gd-$T_1$w), T2-weighted ($T_2$w), and fluid-attenuated inversion recovery ($FLAIR$) images, amide proton transfer-weighted ($APT$w) MRI is a novel molecular imaging technique. It has been shown by different labs across the world to positively influence clinical management~\cite{au60}. Recently, convolutional neural network (CNN) based medical image analysis methods have provided exciting solutions in the neuro-oncologic community~\cite{au3}. Several studies have demonstrated that CNN-based methods outperform humans on fine-grained classification tasks but require a large amount of accurately annotated data with rich diversity~\cite{au61}. While collecting large anatomic MRI datasets is already demanding, it becomes even more impractical when collecting cutting-edge MR image data. Furthermore, obtaining aligned lesion annotations on the corresponding co-registered multi-modal MR images (namely, paired training data) is costly, since expert radiologists are required to label and verify the data. While deploying conventional data augmentations, such as rotation, flipping, random cropping, and distortion, during training partly mitigates such issues, the performance of CNN models is still limited by the diversity of the dataset~\cite{au4}. In this paper, we address the problem of synthesizing meaningful high-quality anatomic $T_1$w, Gd-$T_1$w, $T_2$w, $FLAIR$, and molecular $APT$w MR images based on the input lesion information. \indent Goodfellow et al. \cite{au7} proposed generative adversarial networks (GANs) and first applied them to synthesize photo-realistic images. Isola et al. \cite{au8} and Wang et al. \cite{au9} further investigated conditional GANs and achieved impressive solutions to image-to-image translation problems. Synthesizing realistic MR images is a difficult task since radiographic features vary dramatically on MR images corresponding to the underlying diverse pathological changes. Nevertheless, several generative models have been successfully proposed for MRI synthesis. Nguyen et al. \cite{au10} and Chartsias et al. \cite{au12} proposed CNN-based architectures to synthesize cross-modality MR images. Cordier et al. \cite{au13} further used a generative model for multi-modal MR images with brain tumors from a single label map. However, their inputs are conventional MRI modalities, and the diversity of the synthesized images is limited by the training images. Moreover, the method is not yet capable of producing manipulated outputs. Shin et al. \cite{au14} adopted Pix2Pix \cite{au8} to transfer brain anatomy and lesion segmentation maps to multi-modal MR images with brain tumors. Although their approach can synthesize realistic brain anatomy for multiple MRI sequences, it does not consider the significant differences in radiographic features between anatomic and molecular MRI.
Moreover, pathological information corresponds to high-frequency components and may need extra supervision during synthesis. As a result, their method cannot produce realistic molecular MR images and fails around the lesion region (see Figure~\ref{fig1}(b)). In our previous work, the synthesis of anatomic and molecular MR images network (SAMR)~\cite{au59}, a novel generative model was proposed to simultaneously synthesize a diverse set of anatomic and molecular MR images. It takes arbitrarily manipulated lesion masks as input, which is facilitated by a brain atlas generated from the training data. SAMR \cite{au59} is a GAN-based approach, which consists of a stretch-out up-sampling module, a segmentation consistency module, and multi-scale label-wise discriminators. In this paper, we extend SAMR \cite{au59} by incorporating extra supervision on the latent features and their confidence information to further improve the synthesis performance. Intuitively, directly providing the estimated synthesized images (i.e., intermediate results) to the subsequent layers of the network may propagate errors to the final synthesized images. With the confidence map module, the proposed algorithm is capable of measuring an uncertainty metric of the intermediate results and blocking the flow of incorrect estimations. To this end, we formulate a joint task of estimating the confidence score at each pixel location of the intermediate results and synthesizing realistic multi-modal MR images. Figure~\ref{fig1}(e) presents sample results from the proposed network, where CG-SAMR generates realistic multi-modal brain MR images with more detailed pathological information as compared with Figure~\ref{fig1}(b-d). Furthermore, to overcome the insufficiency of paired training data, we modify the network to allow unsupervised training, namely unpaired CG-SAMR (UCG-SAMR). In other words, the proposed unsupervised approach does not require aligned pairs of lesion segmentation maps and multi-modal MR images during training. This is achieved by adding an extra GAN which reverses the synthesis process to a segmentation task. In summary, this paper makes the following contributions: \begin{itemize} \item A novel GAN-based model, called CG-SAMR, is proposed to synthesize high-quality multi-modal anatomic and molecular MR images with controllable lesion information. \item A novel stretch-out up-sampling module in the decoder is proposed which performs customized synthesis for images of each MR sequence. \item Confidence scores of each sequence measured during synthesis are used to guide the subsequent layers for better synthesis performance. \item Multi-scale label-wise discriminators are developed to provide specific supervision on distinguishing regions of interest (ROIs). \item In order to increase the diversity of the synthesized data, rather than explicitly using white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) masks, we leverage the atlas of each sequence to provide brain anatomic information in CG-SAMR. \item We demonstrate the feasibility of extending the application of the CG-SAMR network to unpaired data training. \item Comparisons have been performed against several recent state-of-the-art paired/unpaired synthesis approaches. Furthermore, an ablation study is conducted to demonstrate the improvements obtained by various components of the proposed method. \end{itemize} The rest of the paper is organized as follows. Section~\ref{sec2} provides a review of some related works. Details of the proposed method are given in Section~\ref{sec3}.
Implementation details, experimental results, and an ablation study are given in Section~\ref{sec4}. Finally, Section~\ref{sec5} concludes the paper with a brief discussion and summary. \begin{figure*} \centering \includegraphics[width=.8\textwidth]{fig1.png} \caption{An overview of the proposed CG-SAMR network. The goal of the CG-SAMR network is to produce realistic multi-modal MR images given the corresponding lesion masks and atlases. The orange blocks indicate the encoder part. The green blocks represent the decoder part with stretch-out up-sampling, in which we leverage the same latent feature maps to perform customized synthesis for each MRI sequence. The synthesis module produces the intermediate results for each branch of the stretch-out up-sampling and is denoted as SM. CM represents the confidence map module that computes confidence maps to guide the subsequent networks. The U-net lesion segmentation module regularizes the encoder-decoder part to produce lesion regions with correct radiographic features by a lesion shape consistency loss $\mathcal{L}_{\text{SC}}$. \label{fig2}} \vskip-0.5cm \end{figure*} \section{Related Works} \label{sec2} The goal of MR image synthesis is to generate target images with realistic radiographic features~\cite{au23}. Technically, MR image synthesis can be achieved by a generative model that translates the source domain to the MR image domain. The source domain usually corresponds to noise or different modalities/contrast types (e.g., from CT images to MR images and from $T_1$w images to $T_2$w images). In what follows, we review some recent studies on this topic and applications of modeling uncertainty in CNNs.\\ \subsection{Conventional Methods} \indent The conventional medical image synthesis methods include intensity-based methods and registration-based methods~\cite{au24}. Intensity-based methods essentially learn a transformation function mapping source intensities to target intensities. Roy \emph{et al.}~\cite{au25} proposed an example-based approach relying on sparse reconstruction from image patches to achieve contrast synthesis and further extended it under the setting of patch-based compressed sensing~\cite{au27}. Joy \emph{et al.}~\cite{au26} leveraged random forest regression to learn the nonlinear intensity mappings for synthesizing full-head $T_2$w images and $FLAIR$ images. Huang \emph{et al.}~\cite{au28} proposed a geometry regularized joint dictionary learning framework to synthesize cross-modality MR images. For registration-based methods, the synthesized images are generated by the registration between source images and target co-registered images~\cite{au29}. Cardoso \emph{et al.}~\cite{au30} further extended this idea to synthesize expected intensities in an unseen image modality by a template-based multi-modal generative mixture-model. \subsection{CNN-based Methods} With the development of deep learning, CNN-based medical image synthesis methods have shown significant improvements over the conventional methods. Instead of using patch-based methods~\cite{au35,au36}, Sevetlidis \emph{et al.}~\cite{au34} introduced a whole image synthesis approach relying on a CNN-based autoencoder architecture. Nguyen \emph{et al.}~\cite{au10} and Chartsias \emph{et al.}~\cite{au12} proposed CNN-based architectures integrating intensity features from images to synthesize cross-modality MR images. Various GAN-based methods have also been used for medical image analysis~\cite{au37,au38}.
Shin et al.~\cite{au14} adopted pix2pix~\cite{au8} to transfer brain anatomy and lesion segmentation maps to multi-modal MR images with brain tumors. Their work shows the benefit of using a brain anatomy prior, such as WM, GM, and CSF masks, to facilitate MR image synthesis. One major challenge of image synthesis is that paired source/target images are required during training, which are expensive to acquire. Recent developments in GAN-based architectures, such as cycle-consistent adversarial networks (CycleGAN)~\cite{au39} and unsupervised image-to-image translation networks (UNIT)~\cite{au48}, point to a promising direction for cross-modality biomedical image synthesis using unpaired source/target images. Wolterink \emph{et al.}~\cite{au40} leveraged cycle consistency to achieve bidirectional MR/CT image synthesis. Chartsias \emph{et al.}~\cite{au41} proposed a two-stage framework for MR/CT image synthesis and demonstrated that the synthesized data can further improve the segmentation performance. Zhang \emph{et al.}~\cite{au42} and Huo \emph{et al.}~\cite{au43} introduced SynSeg-Net to achieve bidirectional synthesis and anatomy segmentation. In their approach, the source domain consists of MR images as well as segmentation labels, and the target domain consists of CT images. Inspired by these works, we also add an extra GAN-based network to CG-SAMR and leverage cycle consistency to allow training using unpaired data. \subsection{Modeling Uncertainty in CNN} Many recent approaches model the uncertainty and use it to benefit the network in different applications. Kendall \emph{et al.}~\cite{au44} leveraged Bayesian deep learning models to demonstrate the benefit of modeling uncertainty in semantic segmentation and depth regression tasks. In~\cite{au45}, Kendall \emph{et al.} extended the previous work~\cite{au44} to multi-task learning by proposing a multi-task loss function maximizing the Gaussian likelihood with homoscedastic uncertainty. Yasarla \emph{et al.}~\cite{au46} and Jose \emph{et al.}~\cite{au51} modeled the aleatoric uncertainty as maximum likelihood inference on image restoration and ultrasound image segmentation tasks, respectively. Inspired by these works, we introduce a novel loss function to measure the confidence score of the intermediate synthesis results and guide the subsequent networks of CG-SAMR by the estimated confidence scores. \section{Methodology} \label{sec3} Figure~\ref{fig2} gives an overview of the proposed encoder and decoder part in the CG-SAMR framework. By incorporating multi-scale label-wise discriminators and shape consistency-based optimization, the generator aims to produce meaningful high-quality anatomical and molecular MR images with diverse and controllable lesion information. While applying 3D convolution operations might reflect the reality of the data, the output of the proposed method is multi-modal MRI image slices, since the voxel size between anatomical and molecular MRI in the axial direction is significantly different and re-sampling to isotropic resolution can severely degrade the image quality. Detailed imaging parameters are given in Section~\ref{sec:da}. In what follows, we describe key parts of the network and training processes using paired and unpaired data.\\ \begin{figure} \centering \includegraphics[width=.8\columnwidth]{sn_cn.png} \caption{(a) Synthesis module. (b) Confidence map module. Here, Conv represents a convolution block that contains a convolutional layer, a batch normalization layer, and a Rectified Linear Unit (ReLU) activation.
$\oplus$ is the channel-wise concatenation. \label{fig3}} \end{figure} \subsection{Multi-modal MRI Generation} Our generator architecture is inspired by the models proposed by Johnson et al. \cite{au15} and Wang et al. \cite{au9}. The generator network consists of two components (see Fig.~\ref{fig1}): an encoder and a decoder with a stretch-out up-sampling module. Let the set of multi-modal MR images be denoted as $\mathcal{Y}$ and the corresponding set of lesion segmentation maps and anatomic priors as $\mathcal{X}$. The generator aims to synthesize multi-modal MR images $y \in \mathcal{Y}$ given input $x \in \mathcal{X}$. Unlike many deep learning-based methods that directly synthesize MR images from input, we first estimate the intermediate synthesis results $\hat{y}_{\times 0.5}$ (0.5 scale size of $y$) and the corresponding confidence map $c_{\times 0.5}$, then use them to guide the synthesis of the final output $\hat{y}$. The input $x$ is passed through the encoder module to get the latent feature maps. Then, the same latent feature maps are passed through each branch of the stretch-out up-sampling block to perform customized synthesis. The encoder part (orange blocks in Figure~\ref{fig2}) consists of a fully-convolutional module with 5 layers and subsequent 3 residual learning blocks (ResBlock) \cite{au49}. We set the kernel size and stride equal to 7 and 1, respectively, for the first layer. For the purpose of down-sampling, instead of using maximum-pooling, the stride of the other 4 layers is set equal to 2. Rectified Linear Unit (ReLU) activation and batch normalization are sequentially added after each layer. To learn better transformation functions and representations through a deeper network, the depth of the encoder network is increased by 3 ResBlocks \cite{au4, au49}. We can observe the significantly different radiographic features between anatomic and molecular MR images as shown in Figure~\ref{fig1}(a), which vastly increases the difficulty of simultaneous synthesis. To address this issue, the decoder part (green blocks in Figure~\ref{fig2}) consists of 3 ResBlocks and a stretch-out up-sampling module that contains 5 identical sub-modules designed to utilize the same latent representations from the preceding ResBlock and perform customized synthesis for each MR sequence. Each sub-module has an architecture symmetric to the fully-convolutional module in the encoder. All convolutional layers are replaced by transposed convolutional layers for up-sampling. The synthesized multi-modal MR images are produced from each sub-module. \subsection{Synthesis and Confidence Map Modules} The synthesis networks are prone to generating incorrect radiographic features at or near the edges, since these are high-frequency components. Thus, special attention to those regions where the network tends to be uncertain can improve the MR image synthesis task. To address this issue, a synthesis module and a confidence map module are added on each branch of the stretch-out up-sampling block (see Synthesis Module (SM) and Confidence Map Module (CM) in Figure~\ref{fig2}). Specifically, we estimate the intermediate synthesis results at 0.5 scale size of the final output by SM and measure the confidence map, which gives attention to the uncertain regions, by CM. The confidence score at each pixel is a measurement of certainty about the intermediate results computed at each pixel.
Confidence maps assign high confidence values (i.e., close to 1) to the regions where the network is certain about the synthesized intensity values, and low confidence scores (i.e., close to 0) to those pixels where the network is uncertain. To this end, we can suppress the erroneous regions by combining the confidence maps with the intermediate results. The masked intermediate results are then passed to the subsequent networks, which makes the network more attentive to the uncertain regions. As shown in Figure~\ref{fig3}, feature maps at scale $\times$0.5 ($f_{\times 0.5}$) are given as input to SM to compute the intermediate results of each MR sequence at scale $\times$0.5. SM is a sequence of four convolutional blocks. Then, we feed the estimated intermediate results and the feature maps as inputs to CM for computing the confidence scores at every pixel. CM is also a sequence of four convolutional blocks. Finally, the confidence-masked intermediate results (i.e., the element-wise multiplication between $\hat{y}_{\times 0.5}$ and $c_{\times 0.5}$) combined with the feature maps at scale $\times 0.5$ are fed back to the network to guide the subsequent layers to produce the final output. Inspired by the modeling of data-dependent aleatoric uncertainty \cite{au44,au45}, we define the confidence map loss as follows: \setlength{\belowdisplayskip}{0pt} \setlength{\belowdisplayshortskip}{0pt} \setlength{\abovedisplayskip}{0pt} \setlength{\abovedisplayshortskip}{0pt} \begin{equation} \label{eq:cf} \begin{aligned} \mathcal{L}_{\text{CM}}(f_{\times 0.5}) &= c_{\times 0.5} \otimes \|\hat{y}_{\times 0.5}-y_{\times 0.5}\|_{1}- \lambda_{\text{cm}} C, \\ \hat{y}_{\times 0.5} &= \text{SM}(f_{\times 0.5}),\\ c_{\times 0.5} &= \text{CM}(f_{\times 0.5}\oplus\hat{y}_{\times 0.5}),\\ C &= \sum_i\sum_j \log(c_{\times 0.5}^{ij}), \end{aligned} \end{equation} where $\otimes$ and $\oplus$ are the element-wise multiplication and the channel-wise concatenation, respectively. $c_{\times 0.5}^{ij}$ represents the confidence score at the $i$th row, $j$th column of the confidence map $c_{\times 0.5}$, and $\hat{y}_{\times 0.5}$ denotes the intermediate synthesis results produced by the decoder part. In $\mathcal{L}_{\text{CM}}$, minimizing the first term reduces both the $L_1$ difference between $\hat{y}_{\times 0.5}$ and $y_{\times 0.5}$ and the values of $c_{\times 0.5}$. To avoid the trivial solution (i.e., $c_{\times 0.5}^{ij} = 0, \forall i, j$), we introduce the second term as a regularizer. $\lambda_{\text{cm}}$ is a constant adjusting the weight of this regularization term $C$. A similar loss has been used for image restoration and ultrasound segmentation tasks in \cite{au50,au51}. To the best of our knowledge, our method is the first attempt to introduce this kind of loss in MR synthesis tasks. \begin{figure} \centering \includegraphics[width=2.5in]{MLD.png} \caption{An overview of the multi-scale label-wise discriminators. ROI masks are produced from the reorganized input lesion masks. We denote $\otimes$ as the element-wise multiplication operation. GAP is global average pooling, which generates a 0.5 scale version of the input. $\mathbb{D}$ is the set of discriminators. } \label{fig:4} \end{figure} \subsection{Multi-scale Label-wise Discriminators} In order to achieve a large receptive field in the discriminators without introducing deeper networks, we adopt multi-scale PatchGAN discriminators \cite{au8}, which have identical network architectures but take multi-scale inputs \cite{au9}.
To distinguish between real and synthesized images, conventional discriminators operate on the whole input. However, discriminating on holistic images cannot guarantee that the generator is optimized to produce realistic images in each ROI, since the difficulty of synthesizing images varies across regions. To address this issue, we introduce label-wise discriminators. Based on the radiographic features, the original lesion segmentation masks are reorganized into 3 ROIs: background, normal brain, and lesion. As shown in Figure~\ref{fig:4}, the input of each discriminator is masked by the corresponding ROI. Since the proposed discriminators are in a multi-scale setting, for each ROI there are 2 discriminators that operate on the original and a down-sampled ($\times$0.5) scale. Thus, there are in total 6 discriminators for the 3 ROIs, and we refer to this set of discriminators as $\mathbb{D}=\{D_1, D_2, D_3, D_4, D_5, D_6\}$. In particular, \{$D_1$,$D_2$\},\{$D_3$,$D_4$\}, and \{$D_5$,$D_6$\} operate on the original and down-sampled versions of background, normal brain, and lesion, respectively. An overview of the proposed discriminators is given in Figure~\ref{fig:4}. The objective function corresponding to the discriminators $\mathcal{L}_{\text{GAN}}(G,D_k)$ is as follows: \begin{equation} \label{eq:3} \begin{aligned} \mathcal{L}_{\text{GAN}}(G,D_k)= & \mathbb{E}_{(x^\prime,y^\prime)}[\log D_{k}(x^\prime,y^\prime)] \\ + & \mathbb{E}_{x^\prime}[\log (1- D_{k}(x^\prime, G^\prime(x)))], \\ \mathcal{L}_{\text{GAN}}(G,D) = & \sum_{k=1}^{6} \mathcal{L}_{\text{GAN}} (G,D_k ), \end{aligned} \end{equation} where $x$ and $y$ are the paired input and real multi-modal MR images, respectively. Here, $x^\prime \triangleq m_k\otimes x$, $y^\prime \triangleq m_k\otimes y$, and $G^\prime(x) \triangleq m_k\otimes G(x)$, where $\otimes$ denotes element-wise multiplication and $m_k$ corresponds to the ROI mask. For simplicity, we omit the down-sampling operation in this equation. \subsection{Training Using Paired Data} A multi-task loss is designed to train the generator and the discriminators in an adversarial setting. Instead of only using the conventional adversarial loss $\mathcal{L}_{\text{GAN}}$, we also adopt a feature matching loss $\mathcal{L}_{\text{FM}}$ \cite{au9} to stabilize training, which optimizes the generator to match the intermediate representations of the real and the synthesized images in multiple layers of the discriminators. For the discriminators, $\mathcal{L}_{\text{FM}}(G,D_k)$ is defined as follows: \begin{equation} \label{eq:4} \begin{aligned} \mathcal{L}_{\text{FM}}(G,D_k)= & \sum_{i=1}^{T} \frac{1}{N_{i}} \left[\|D_{k}^{(i)}(x^\prime,y^\prime)- D_{k}^{(i)}(x^\prime, G^\prime(x))\|_{2}^{2}\right], \\ \mathcal{L}_{\text{FM}}(G,D) =& \sum_{k=1}^{6}\mathcal{L}_{\text{FM}}(G,D_k ), \end{aligned} \end{equation} where $D_{k}^{(i)} $ denotes the $i$th layer of the discriminator $D_{k}$, $T$ is the total number of layers in $D_{k}$, and $N_i$ is the number of elements in the $i$th layer. It is worth noting that, if lesion segmentation is performed on the synthesized images, the predicted lesion maps should be consistent with the real ones serving as input to the generator. Lesion labels usually occlude each other and the brain anatomic structure, which causes ambiguity when synthesizing realistic MR images.
To tackle this problem, we propose a lesion shape consistency loss $\mathcal{L}_{\text{SC}}$ by adding a U-net \cite{au11} segmentation module (see Figure~\ref{fig2}) that regularizes the generator to obey this consistency relation. We adopt the Generalized Dice Loss (GDL) \cite{au16} to measure the difference between the predicted and real segmentation maps, which is defined as follows \begin{equation} \label{eq:7} \text{GDL}(R,S) = 1 - \frac{2\sum_i^N r_i s_i }{\sum_i^N r_i + \sum_i^N s_i }, \end{equation} where $R$ denotes the ground truth and $S$ is the segmentation result. $r_i$ and $s_i$ represent the ground truth and predicted probability values at each pixel $i$, respectively, and $N$ is the total number of pixels. The lesion shape consistency loss $\mathcal{L}_{\text{SC}}$ is then defined as follows \begin{equation} \label{eq:5} \begin{aligned} \mathcal{L}_{\text{SC}}(U) = \text{GDL}(s,U(y)) + \text{GDL}(s,U(G(x))), \end{aligned} \end{equation} where $U(y)$ and $U(G(x))$ represent the lesion segmentation probability maps predicted by the segmentation module when taking $y$ and $G(x)$ as inputs, respectively, and $s$ denotes the ground truth lesion segmentation map. The final multi-task objective function for training CG-SAMR is defined as \begin{equation} \label{eq:6} \begin{aligned} \min\limits_{\text{G,U}}(\max\limits_{\text{D}} \mathcal{L}_{\text{GAN}} (G,D)) + & \lambda_{1} \mathcal{L}_{\text{FM}}(G,D) \\ +& \lambda_{2}\mathcal{L}_{\text{SC}}(U) + \lambda_{3}\mathcal{L}_{\text{CM}}(f_{\times 0.5}), \end{aligned} \end{equation} where $\lambda_{1}$, $\lambda_{2}$ and $\lambda_{3}$ are three parameters that control the importance of each loss. \subsection{Training Using Unpaired Data} \begin{figure} \centering \includegraphics[width=.6\columnwidth]{unpair.png} \vskip -10pt \caption{The schematic of the proposed method corresponding to training using unpaired data. $E_1$ and $E_2$ are two encoders mapping the input to the latent codes. $F_1$ is a decoder, with an architecture symmetric to the encoders, mapping the latent codes to domain 1. $F_2$ is the decoder used in CG-SAMR, mapping the latent codes to multi-modal MR images (domain 2). $D_1$ and $D_2$ are two discriminators for domain 1 and domain 2. \label{fig6}} \end{figure} \begin{table*}[] \setlength{\tabcolsep}{0.6pt} \centering \caption{Quantitative comparison. The quality of the synthesized data under paired data training is measured by pixel accuracy. Lesion indicates the union of edema, cavity, and tumor. Brain represents the holistic brain region. Here, the unit is percent (\%).}\label{tab1} \scriptsize \begin{tabular}{cccccccccccccccccccccccccc} \hline & \multicolumn{5}{c}{Pix2Pix \cite{au8}} & \multicolumn{5}{c}{Pix2PixHD \cite{au9}} & \multicolumn{5}{c}{Shin et al.
\cite{au14}} & \multicolumn{5}{c}{SAMR \cite{au59}} & \multicolumn{5}{c}{CG-SAMR (our)} \\ \hline & Edema & Cavity & Tumor & Lesion & \multicolumn{1}{c|}{Brain} & Edema & Cavity & Tumor & Lesion & \multicolumn{1}{c|}{Brain} & Edema & Cavity & Tumor & Lesion & \multicolumn{1}{c|}{Brain} & Edema & Cavity & Tumor & Lesion & \multicolumn{1}{c|}{Brain} & Edema & Cavity & Tumor & Lesion & Brain \\ \hline \multicolumn{1}{c|}{$APT$w} & 50.8 & 42.1 & 48.2 & 48.8 & \multicolumn{1}{c|}{51.0} & 55.0 & 42.1 & 51.2 & 51.5 & \multicolumn{1}{c|}{52.9} & 45.2 & 40.0 & 42.0 & 43.9 & \multicolumn{1}{c|}{46.8} & 65.9 & 52.7 & 63.1 & 63.8 & \multicolumn{1}{c|}{55.1} & 67.1 & 51.3 & 64.3 & 64.2 & 56.1 \\ \multicolumn{1}{c|}{$T_1$w} & 54.6 & 56.2 & 49.2 & 53.5 & \multicolumn{1}{c|}{42.4} & 54.0 & 53.0 & 47.8 & 52.7 & \multicolumn{1}{c|}{44.2} & 72.6 & 71.7 & 68.0 & 71.8 & \multicolumn{1}{c|}{73.9} & 73.0 & 69.0 & 67.5 & 72.8 & \multicolumn{1}{c|}{53.4} & 76.0 & 67.8 & 71.1 & 75.0 & 57.4 \\ \multicolumn{1}{c|}{$FLAIR$} & 51.7 & 41.0 & 44.7 & 48.5 & \multicolumn{1}{c|}{57.7} & 47.1 & 36.3 & 46.3 & 44.5 & \multicolumn{1}{c|}{58.8} & 60.1 & 41.9 & 51.9 & 56.9 & \multicolumn{1}{c|}{65.8} & 75.4 & 61.5 & 68.1 & 73.1 & \multicolumn{1}{c|}{68.1} & 78.2 & 67.4 & 71.5 & 76.4 & 71.6 \\ \multicolumn{1}{c|}{$T_2$w} & 52.1 & 52.3 & 42.5 & 51.2 & \multicolumn{1}{c|}{57.3} & 50.6 & 59.3 & 46.4 & 50.3 & \multicolumn{1}{c|}{57.8} & 65.6 & 55.5 & 56.3 & 63.1 & \multicolumn{1}{c|}{70.0} & 76.7 & 77.7 & 71.2 & 77.3 & \multicolumn{1}{c|}{68.9} & 81.0 & 77.7 & 74.3 & 80.7 & 72.5 \\ \multicolumn{1}{c|}{Gd-$T_1$w} & 70.4 & 57.7 & 38.1 & 63.3 & \multicolumn{1}{c|}{58.5} & 72.3 & 58.5 & 37.4 & 65.0 & \multicolumn{1}{c|}{60.5} & 74.4 & 64.8 & 38.7 & 67.5 & \multicolumn{1}{c|}{71.4} & 81.2 & 67.7 & 64.2 & 78.0 & \multicolumn{1}{c|}{69.9} & 83.1 & 69.3 & 62.6 & 79.1 & 73.2 \\ \hline \multicolumn{1}{c|}{Avg.} & 55.9 & 49.9 & 44.5 & 53.1 & \multicolumn{1}{c|}{53.4} & 55.8 & 49.8 & 45.8 & 52.8 & \multicolumn{1}{c|}{54.8} & 63.6 & 54.8 & 51.4 & 60.6 & \multicolumn{1}{c|}{65.6} & 74.4 & 65.7 & 66.8 & 73.0 & \multicolumn{1}{c|}{63.1} & \textbf{77.1} & \textbf{66.7} & \textbf{68.8} & \textbf{75.1} & \textbf{66.2} \\ \hline \end{tabular} \end{table*} Figure~\ref{fig6} shows the schematic of the proposed method corresponding to training using unpaired data. Our framework is based on the proposed CG-SAMR network and an additional GAN: $\text{GAN}_1 = \{E_1,F_1,D_1\}$ and $\text{GAN}_2 = \{E_2,F_2,D_2\}$. Denote the set of lesion segmentation maps and anatomic prior as domain 1 and the set of multi-modal MR images as domain 2. Here, we denote \textbf{unpaired} instances in domain 1 and 2 as $x_u$ and $y_u$, respectively. In $\text{GAN}_1$, $D_1$ aims to evaluate whether the translated unpaired instances are realistic. It outputs true for real instances sampled from the domain 1 and false for instances generated by $F_1$. As shown in Figure~\ref{fig6}, $F_1$ can generate two types of instances: (1) instances from the reconstruction stream $x_u^{1\rightarrow 1} = F_1(E_1(x_u))$, and (2) instances from the cross-domain stream $x_u^{2\rightarrow 1} = F_1(E_2(y_u))$. We have similar properties in $\text{GAN}_2$, but the decoder $F_2$ is replaced by the corresponding decoder part in CG-SAMR. Thus, we can realize confidence-guided customized synthesis for each MR sequence under unpaired data training. 
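For illustration, a minimal PyTorch sketch of the four data streams used in the unpaired setting (reconstruction, cross-domain, and cycle-reconstruction, cf. Figure~\ref{fig6}) is given below; the tiny convolutional stand-ins, channel counts, and image sizes are purely illustrative and do not reproduce the actual CG-SAMR encoders and decoders. The corresponding objective functions are defined next.
\begin{verbatim}
# Hypothetical sketch of the unpaired-training data streams (Fig. 6).
# E1/F1 act on domain 1 (lesion maps + anatomic prior), E2/F2 on domain 2
# (the five MR sequences); tiny stand-in networks keep the example runnable.
import torch
import torch.nn as nn

def tiny_net(in_ch, out_ch):
    # Stand-in for an encoder or decoder; the real modules are much deeper.
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(out_ch, out_ch, 3, padding=1))

E1, E2 = tiny_net(4, 8), tiny_net(5, 8)   # encoders for domain 1 / domain 2
F1, F2 = tiny_net(8, 4), tiny_net(8, 5)   # decoders back to domain 1 / domain 2

x_u = torch.randn(1, 4, 64, 64)           # unpaired instance from domain 1
y_u = torch.randn(1, 5, 64, 64)           # unpaired instance from domain 2

x_rec = F1(E1(x_u))                       # reconstruction stream: x_u -> x_u^{1->1}
y_rec = F2(E2(y_u))                       # reconstruction stream: y_u -> y_u^{2->2}

y_trans = F2(E1(x_u))                     # cross-domain stream: x_u -> y_u^{1->2}
x_trans = F1(E2(y_u))                     # cross-domain stream: y_u -> x_u^{2->1}
                                          # (judged by D2 and D1, omitted here)

x_cyc = F1(E2(y_trans))                   # cycle reconstruction, should stay near x_u
y_cyc = F2(E1(x_trans))                   # cycle reconstruction, should stay near y_u

l1 = nn.L1Loss()
print((l1(x_rec, x_u) + l1(y_rec, y_u)).item(),   # reconstruction terms
      (l1(x_cyc, x_u) + l1(y_cyc, y_u)).item())   # cycle-consistency terms
\end{verbatim}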
The objective functions for the reconstruction streams are defined as follows \begin{equation} \label{eq:recon} \begin{aligned} \mathcal{L}_{\text{recon}_1} = & \|x_u-x_u^{1\rightarrow 1}\|_1, \\ \mathcal{L}_{\text{recon}_2} = & \|y_u-y_u^{2\rightarrow 2}\|_1 + \mathcal{L}_{\text{CM}}(f_{\times 0.5}|y_u), \end{aligned} \end{equation} where $\mathcal{L}_{\text{CM}}$ is defined in equation~(\ref{eq:cf}) and $F_2$ is a decoder network with the same architecture as used in CG-SAMR. We denote the feature maps used for $\mathcal{L}_{\text{CM}}$ in $F_2$ as $f_{\times 0.5}|y_u$ when decoding the latent code obtained by encoding $y_u$. The objective functions of the cross-domain streams can be expressed as follows \begin{equation} \label{eq:cross} \begin{aligned} \mathcal{L}_{\text{GAN}_1} = & \mathbb{E}_{(x_u)}[\log D_{1}(x_u)] + \mathbb{E}_{(z_2)}[\log (1- D_{1}(F_1(z_2)))],\\ \mathcal{L}_{\text{GAN}_2} = & \mathbb{E}_{(y_u)}[\log D_{2}(y_u)] + \mathbb{E}_{(z_1)}[\log (1- D_{2}(F_2(z_1)))], \\ \end{aligned} \end{equation} where $z_1$ and $z_2$ are the latent codes, $z_1 = E_1(x_u)$, $z_2 = E_2(y_u).$ Simply relying on the reconstruction streams and adversarial training (i.e., the cross-domain streams) cannot guarantee learning the desired mapping functions. To reduce the number of possible mapping functions, we require the learned mapping functions to obey a cycle-consistency constraint (i.e. $x_u \rightarrow y_u^{1\rightarrow2} \rightarrow F_1(E_2(y_u^{1\rightarrow2})) \approx x_u )$ \cite{au39}. The objective functions for the cycle-reconstruction streams are defined as follows \begin{equation} \label{eq:cyc} \begin{aligned} \mathcal{L}_{\text{cyc}_1} = & \|x_u-F_1(E_2(y_u^{1\rightarrow 2}))\|_1, \\ \mathcal{L}_{\text{cyc}_2} = & \|y_u-F_2(E_1(x_u^{2\rightarrow 1}))\|_1 + \mathcal{L}_{\text{CM}}(f_{\times 0.5}|x_u^{2\rightarrow 1}). \end{aligned} \end{equation} The overall objective function used to train UCG-SAMR in the unsupervised setting is defined as follows \begin{equation} \label{eq:unpair} \begin{aligned} G^* = &\min\limits_{\{E_1,F_1,E_2,F_2\}}\max\limits_{\{D_1,D_2\}} \mathcal{L}_{\text{domain}_1} + \mathcal{L}_{\text{domain}_2}, \;\text{where}\\ \mathcal{L}_{\text{domain}_1} = & \mathcal{L}_{\text{recon}_1} + \mathcal{L}_{\text{GAN}_1} + \mathcal{L}_{\text{cyc}_1},\;\;\text{and} \\ \mathcal{L}_{\text{domain}_2} = & \mathcal{L}_{\text{recon}_2} + \mathcal{L}_{\text{GAN}_2} + \mathcal{L}_{\text{cyc}_2}. \end{aligned} \end{equation} \begin{figure} \centering \includegraphics[width=\columnwidth]{super.png} \vskip -8pt \caption{Qualitative comparison of different methods under paired data training. The same lesion mask is used to synthesize images from the different methods. (a) Real data (ground truth). (b) Pix2Pix \cite{au8}. (c) Pix2PixHD \cite{au9}. (d) Shin et al. \cite{au14}. (e) CG-SAMR (our). (f) Confidence maps from CG-SAMR. Red boxes indicate the lesion region. \label{fig7}} \end{figure} \begin{table}[ht] \setlength{\tabcolsep}{4.0pt} \centering \caption{Quantitative results corresponding to image segmentation when the synthesized data is used for data augmentation. For each experiment, the header row reports the percentage of synthesized/real data used for training and the number of synthesized/real instances in parentheses.
Exp.3 reports the results of baseline trained only by real data.}\label{tab2} \scriptsize \begin{tabular}{ccccccc} \hline \multicolumn{7}{c}{Exp.1: 50\% Synthesized+ 50\% Real (1080 + 1080)} \\ \hline \multicolumn{1}{l}{} & \multicolumn{3}{c}{Dice Score} & \multicolumn{3}{c}{Hausdorff95 Distance} \\ \hline \multicolumn{1}{c|}{} & Edema & Cavity & \multicolumn{1}{c|}{Tumor} & Edema & Cavity & Tumor \\ \multicolumn{1}{c|}{Pix2Pix \cite{au8}} & 0.589 & 0.459 & \multicolumn{1}{c|}{0.562} & 13.180 & 21.003 & 10.139 \\ \multicolumn{1}{c|}{Pix2PixHD \cite{au9}} & 0.599 & 0.527 & \multicolumn{1}{c|}{0.571} & 17.406 & 8.606 & 10.369 \\ \multicolumn{1}{c|}{Shin et al. \cite{au14}} & 0.731 & 0.688 & \multicolumn{1}{c|}{0.772} & 7.306 & 6.290 & 6.294 \\ \multicolumn{1}{c|}{SAMR \cite{au59}} & 0.794 & 0.813 & \multicolumn{1}{c|}{0.821} & 6.049 & 1.568 & 2.293 \\ \multicolumn{1}{c|}{CG-SAMR (our)} & \textbf{0.804} & \textbf{0.839} & \multicolumn{1}{c|}{\textbf{0.828}} & \textbf{4.166} & \textbf{1.381} & \textbf{1.810} \\ \hline \multicolumn{7}{c}{Exp.2: 25\% Synthesized+ 75\% Real (540 + 1080)} \\ \hline \multicolumn{1}{c|}{Pix2Pix \cite{au8}} & 0.602 & 0.502 & \multicolumn{1}{c|}{0.569} & 10.706 & 9.431 & 10.147 \\ \multicolumn{1}{c|}{Pix2PixHD \cite{au9}} & 0.634 & 0.514 & \multicolumn{1}{c|}{0.663} & 17.754 & 9.512 & 9.061 \\ \multicolumn{1}{c|}{Shin et al. \cite{au14}} & 0.673 & 0.643 & \multicolumn{1}{c|}{0.708} & 14.835 & 7.798 & 6.688 \\ \multicolumn{1}{c|}{SAMR \cite{au59}} & 0.745 & 0.780 & \multicolumn{1}{c|}{0.772} & 8.779 & 6.757 & 4.735 \\ \multicolumn{1}{c|}{CG-SAMR (our)} & \textbf{0.756} & \textbf{0.793} & \multicolumn{1}{c|}{\textbf{0.773}} & \textbf{7.676} & \textbf{6.258} & \textbf{4.325} \\ \hline \multicolumn{7}{c}{Exp.3: 0\% Synthesized + 100\% Real (0 + 1080)} \\ \hline \multicolumn{1}{c|}{Baseline} & \textbf{0.646} & \textbf{0.613} & \multicolumn{1}{c|}{\textbf{0.673}} & \textbf{8.816} & \textbf{7.856} & \textbf{7.078} \\ \hline \end{tabular} \end{table} \begin{table*}[] \setlength{\tabcolsep}{3.0pt} \centering \caption{Quantitative comparison. The quality of synthesized data under unpaired data training is measured by pixel accuracy. Lesion indicates the union of edema, cavity, and tumor. 
Brain represents the holistic brain region.}\label{tab3} \scriptsize \begin{tabular}{cccccccccccccccc} \hline & \multicolumn{5}{c}{CycleGAN~\cite{au39}} & \multicolumn{5}{c}{UNIT~\cite{au48}} & \multicolumn{5}{c}{UCG-SAMR (our)} \\ \hline & Edema & Cavity & Tumor & Lesion & \multicolumn{1}{c|}{Brain} & Edema & Cavity & Tumor & Lesion & \multicolumn{1}{c|}{Brain} & Edema & Cavity & Tumor & Lesion & Brain \\ \hline \multicolumn{1}{c|}{$APT$w} & 51.3 & 32.8 & 39.9 & 47.3 & \multicolumn{1}{c|}{47.3} & 44.1 & 33.7 & 41.3 & 42.2 & \multicolumn{1}{c|}{42.3} & 51.5 & 36.8 & 44.7 & 48.2 & 43.0 \\ \multicolumn{1}{c|}{$T_1$w} & 35.5 & 23.6 & 34.2 & 34.9 & \multicolumn{1}{c|}{53.2} & 64.5 & 65.1 & 64.4 & 63.1 & \multicolumn{1}{c|}{68.5} & 66.4 & 60.5 & 68.7 & 65.2 & 68.0 \\ \multicolumn{1}{c|}{$FLAIR$} & 56.8 & 33.2 & 35.2 & 49.2 & \multicolumn{1}{c|}{60.1} & 55.3 & 37.9 & 49.9 & 52.3 & \multicolumn{1}{c|}{62.5} & 65.6 & 37.1 & 58.0 & 60.4 & 65.9 \\ \multicolumn{1}{c|}{$T_2$w} & 67.0 & 6.1 & 54.7 & 57.1 & \multicolumn{1}{c|}{64.0} & 63.7 & 41.4 & 52.0 & 58.8 & \multicolumn{1}{c|}{66.6} & 69.1 & 48.4 & 59.5 & 65.7 & 68.5 \\ \multicolumn{1}{c|}{$Gd$-$T_1$w} & 47.5 & 45.1 & 22.2 & 42.5 & \multicolumn{1}{c|}{62.4} & 65.7 & 65.3 & 42.2 & 62.3 & \multicolumn{1}{c|}{69.7} & 72.5 & 65.8 & 45.3 & 67.8 & 69.9 \\ \hline \multicolumn{1}{c|}{Avg.} & 51.6 & 28.2 & 37.2 & 46.2 & \multicolumn{1}{c|}{57.4} & 58.7 & 48.7 & 50.0 & 55.7 & \multicolumn{1}{c|}{61.9} & \textbf{65.0} & \textbf{49.7} & \textbf{55.2} & \textbf{61.5} & \textbf{63.1} \\ \hline \end{tabular} \end{table*} \begin{table}[] \setlength{\tabcolsep}{3.0pt} \centering \caption{Quantitative evaluation of the segmentation performance of different methods under unpaired data training. }\label{tab4} \scriptsize \begin{tabular}{cccc|ccc} \hline & \multicolumn{3}{c|}{Dice Score} & \multicolumn{3}{c}{Hausdorff95 Distance} \\ \hline \multicolumn{1}{c|}{} & Edema & Cavity & Tumor & Edema & Cavity & Tumor \\ \cline{2-7} \multicolumn{1}{c|}{CycleGAN~\cite{au39}} & 0.333 & 0.01 & 0.073 & 18.647 & 30.859 & 39.611 \\ \multicolumn{1}{c|}{UNIT~\cite{au48}} & 0.527 & 0.368 & 0.506 & 9.008 & 11.225 & 10.183 \\ \multicolumn{1}{c|}{UCG-SAMR (our)} & \textbf{0.558} & \textbf{0.393} & \textbf{0.613} & \textbf{8.321} & \textbf{11.130} & \textbf{7.044} \\ \hline \end{tabular} \end{table} \section{Experiments and Results} In this section, we first discuss the data acquisition and training details. Then, the experimental setup, evaluations of the proposed synthesis methods against a set of recent state-of-the-art approaches, and comprehensive ablation studies are presented. \label{sec4} \subsection{Data Acquisition} \label{sec:da} This study was approved by the Institutional Review Board (IRB) and conducted in accordance with the U.S. Common Rule, and the consent form was waived. Patient enrollment criteria were: at least 20 years old; initial diagnosis of pathologically proven primary malignant glioma; status post initial surgery and chemoradiation. Ninety patients were involved in this study. MRI scans were acquired on a 3T human MRI scanner (Achieva; Philips Medical Systems) using a body coil for excitation and a 32-channel phased-array coil for reception \cite{au5}. $T_1$w, Gd-$T_1$w, $T_2$w, $FLAIR$, and $APT$w MRI sequences were collected for each patient. Imaging parameters for $APT$w can be summarized as: field of view (FOV), 212 $\times$ 212 $\times$ 66 $mm^{3}$; resolution, 0.82 $\times$ 0.82 $\times$ 4.4 $mm^3$; size of matrix, 256 $\times$ 256 $\times$ 15.
The other anatomic MRI sequences were acquired with the following imaging parameters: FOV, 212 $\times$ 212 $\times$ 165 $mm^{3}$; resolution, 0.41 $\times$ 0.41 $\times$ 1.1 $mm^3$; size of matrix, 512 $\times$ 512 $\times$ 150. Co-registration between $APT$w and the anatomic sequences \cite{au17}, skull stripping \cite{au20}, N4 bias field correction \cite{au18}, and MRI standardization \cite{au19} were performed sequentially. After preprocessing, the final volume size of each sequence is 256 $\times$ 256 $\times$ 15. For every collected volume, lesions were manually annotated by an expert neuroradiologist into three labels: edema, cavity, and tumor. Then, a multivariate template construction tool \cite{au21} was used to create the group average (atlas) for each sequence. 1350 instances of size 256 $\times$ 256 $\times$ 5 were extracted from the volumetric data, where 5 corresponds to the five MRI sequences. For every instance, the corresponding atlas slice and two adjacent (in the axial direction) atlas slices were extracted to provide the prior of human brain anatomy in paired data training. The WM, GM, and CSF probability masks were also extracted with SPM12 \cite{au52} to provide the anatomic prior used in the unsupervised case. We split these instances randomly into 1080 (80\%) for training and 270 (20\%) for testing. Since the data was split on the patient level, the training and testing data did not include instances from the same patient. \subsection{Implementation Detail} The CG-SAMR synthesis model was trained based on the final objective function in equation~(\ref{eq:6}) using the Adam optimizer \cite{au21}. $\lambda_{1}$, $\lambda_{2}$ and $\lambda_{3} $ were set equal to 5, 1 and 1, respectively. Hyperparameters are set as follows: constant learning rate of 2 $\times 10^{-4}$ for the first 250 epochs, then linearly decaying to 0; 500 maximum epochs; batch size of 8. $\lambda_{\text{cm}}$ in equation~(\ref{eq:cf}) was initially set equal to 0.1. When the mean of the scores in the confidence maps $c_{\times 0.5}$ is greater than 0.7, $\lambda_{\text{cm}}$ was set equal to 0.03. Hyperparameters for unpaired data training are set as follows: constant learning rate of 2 $\times 10^{-4}$ for the first 400 epochs, then linearly decaying to 0; 800 maximum epochs; batch size of 1. To further evaluate the effectiveness of the synthesized MRI sequences for data augmentation, we leveraged U-net \cite{au11} to train lesion segmentation models. U-net \cite{au11} was trained with the Adam optimizer \cite{au21}. Hyperparameters are set as follows: constant learning rate of 2 $\times 10^{-4}$ for the first 100 epochs, then linearly decaying to 0; 200 maximum epochs; batch size of 16. In the segmentation training, all the synthesized data was produced by CG-SAMR from randomly manipulated lesion masks. For evaluation, we always keep 20\% of the data unseen for both the synthesis and the segmentation models. \begin{figure} \centering \includegraphics[width=.9\columnwidth]{lesion_m.png} \caption{Examples of lesion mask manipulations in CG-SAMR. (a) Real images (ground truth). (b) Synthesized images from the original mask. (c) Synthesized images obtained by increasing the tumor size by 100\%. (d) Synthesized images obtained by shrinking the tumor size to 50\%. (e) Synthesized images obtained by replacing the lesion with one from another slice. In lesion masks, gray, green, yellow, and blue represent normal brain, edema, tumor, and cavity, respectively.
\label{fig8}} \end{figure} \begin{figure} \centering \includegraphics[width=.9\columnwidth]{unsuper.png} \caption{Qualitative comparison of segmentation and synthesis performance under unpaired data training. (a) Real data (ground truth). (b) CycleGAN \cite{au39}. (c) UNIT \cite{au48}. (d) UCG-SAMR (our). (e) Confidence maps from UCG-SAMR. In lesion masks, gray, green, yellow, and blue represent normal brain, edema, tumor, and cavity, respectively. \label{fig9}} \end{figure} \subsection{Results Corresponding to Supervised Training} We evaluate the performance of our method against the following recent state-of-the-art generic synthesis methods: Pix2Pix \cite{au8} and Pix2PixHD \cite{au9}, as well as the MRI synthesis methods of Shin et al. \cite{au14} and SAMR \cite{au59}. We use pixel accuracy to compare the performance of the different methods \cite{au8,au9,au39}. In particular, we calculate the difference between the synthesized data and the corresponding ground truth data, and a synthesized pixel is counted as correct if the difference is within 16 of the ground truth intensity value (a minimal numerical sketch of this criterion is given below). Table~\ref{tab1} shows the quantitative performance of the different methods in terms of pixel accuracy. As can be seen from this table, our method clearly outperforms the present state-of-the-art synthesis algorithms. CG-SAMR gains improvement especially in the lesion regions. Figure~\ref{fig7} presents the qualitative comparison of the synthesized multi-modal MRI sequences from four different methods. It can be observed that Pix2Pix \cite{au8} and Pix2PixHD \cite{au9} fail to synthesize realistic looking human brain MR images. There is either an unreasonable brain ventricle or wrong radiographic features in the lesion region (see Figure~\ref{fig7} (b)(c)). Shin et al. \cite{au14} can produce realistic brain anatomic structures for the anatomic MRI sequences. However, there is an obvious disparity between the synthesized and real $APT$w sequences in both the normal brain and the lesion region. The boundary of the synthesized lesion is also blurry (see red boxes in Figure~\ref{fig7} (d)). The proposed method produces more accurate radiographic features of lesions and more diverse anatomic structures based on the human anatomy prior provided by the atlas. To further evaluate the quality of the synthesized MR images, we perform data augmentation by using the synthesized images in training and then perform lesion segmentation. The evaluation metrics of the BraTS challenge \cite{au3} (i.e., Dice score and Hausdorff distance (95\%)) are used to measure the performance of the different methods. Data augmentation by synthesis is evaluated by the improvement it brings to the lesion segmentation models. We arbitrarily control the lesion information to synthesize different numbers of instances for augmentation. To simulate the practical usage of data augmentation, we conduct the experiments in the manner of utilizing all real data. In each experiment, we vary the percentage of synthesized data to observe its contribution to data augmentation. Table~\ref{tab2} shows the resulting segmentation performance. Compared with the baseline experiment that only uses real data, the synthesized data from pix2pix \cite{au8} and pix2pixHD \cite{au9} degrade the segmentation performance. The performance is improved when the synthesized data from Shin et al. \cite{au14} and SAMR \cite{au59} are used for segmentation, but the proposed method outperforms the other methods by a large margin.
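For concreteness, a small self-contained Python sketch of the pixel-accuracy criterion described above is given below; the function and variable names are illustrative, and intensities are assumed to lie in a 0 to 255 range after standardization.
\begin{verbatim}
# Hypothetical sketch of the pixel-accuracy metric: a synthesized pixel is
# counted as correct if its intensity is within 16 of the ground truth value.
import numpy as np

def pixel_accuracy(synth, real, roi_mask, tol=16):
    # Fraction of ROI pixels whose synthesized intensity is within `tol`
    # of the ground-truth intensity.
    diff = np.abs(synth.astype(np.float64) - real.astype(np.float64))
    correct = (diff <= tol) & roi_mask
    return correct.sum() / roi_mask.sum()

# Toy example: a noisy copy of a random "image" evaluated inside a lesion ROI.
rng = np.random.default_rng(0)
real = rng.integers(0, 256, size=(256, 256)).astype(np.float64)
synth = np.clip(real + rng.normal(0.0, 10.0, size=real.shape), 0, 255)
mask = np.zeros(real.shape, dtype=bool)
mask[100:150, 100:150] = True
print(f"pixel accuracy in ROI: {100.0 * pixel_accuracy(synth, real, mask):.1f}%")
\end{verbatim}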
Figure~\ref{fig8} demonstrates the robustness of the proposed model under different lesion mask manipulations (e.g., changing the size of the tumor and even reassembling lesion information between lesion masks). As can be seen from this figure, our method is robust to various lesion mask manipulations. \subsection{Results Corresponding to Unsupervised Training} We denote the proposed method under unpaired data training as UCG-SAMR and evaluate its performance against the following recent state-of-the-art unsupervised synthesis methods: CycleGAN \cite{au39} and UNIT \cite{au48}. Table~\ref{tab3} shows the quantitative synthesis performance of the different methods in terms of pixel accuracy. As can be seen from this table, our method outperforms the other state-of-the-art synthesis algorithms. On average, UCG-SAMR gains 15.3\% and 5.8\% improvement in the lesion regions compared to CycleGAN \cite{au39} and UNIT \cite{au48}, respectively. Table~\ref{tab4} shows the comparison of segmentation performance for the different methods. We can observe that UCG-SAMR approaches the performance upper bound (i.e., supervised training with real paired data, Exp.3 in Table~\ref{tab2}). Figure~\ref{fig9} presents the qualitative comparison of the segmentation and multi-modal MRI synthesis. It can be observed that CycleGAN \cite{au39} and UNIT \cite{au48} fail to synthesize realistic looking lesions, especially in the $APT$w and Gd-$T_1$w sequences. The proposed method produces more accurate radiographic features for each type of lesion label in both molecular and anatomic sequences. Facilitated by the high-quality synthesis, the segmentation network performs better than the other models, as can be seen from Table~\ref{tab4}. \subsection{Ablation Study} \begin{table}[] \setlength{\tabcolsep}{2.0pt} \centering \caption{Ablation study of the designed modules in terms of data augmentation by synthesis.}\label{tab5} \scriptsize \begin{tabular}{lcccccccc} \hline \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{3}{c}{Dice Score} & \multicolumn{1}{l}{} & \multicolumn{3}{c}{Hausdorff95 Distance} \\ \hline & \multicolumn{1}{c|}{} & Edema & Cavity & \multicolumn{1}{c|}{Tumor} & & Edema & Cavity & \multicolumn{1}{c}{Tumor} \\ \cline{3-9} w/o Stretch-out & \multicolumn{1}{c|}{} & 0.677 & 0.697 & \multicolumn{1}{c|}{0.679} & & 13.909 & 11.481 & \multicolumn{1}{c}{7.123} \\ w/o Multi-label D & \multicolumn{1}{c|}{} & 0.753 & 0.797 & \multicolumn{1}{c|}{0.785} & & 7.844 & 2.570 & \multicolumn{1}{c}{2.719} \\ w/o Atlas & \multicolumn{1}{c|}{} & 0.684 & 0.713 & \multicolumn{1}{c|}{0.705} & & 6.592 & 5.059 & \multicolumn{1}{c}{4.002} \\ w/o $\mathcal{L}_{\text{SC}}$ & \multicolumn{1}{c|}{} & 0.728 & 0.795 & \multicolumn{1}{c|}{0.771} & & 8.604 & 3.024 & \multicolumn{1}{c}{3.233} \\ w/o $\mathcal{L}_{\text{CM}}$ & \multicolumn{1}{c|}{} & 0.794 & 0.813 & \multicolumn{1}{c|}{0.821} & \textbf{} & 6.049 & 1.568 & \multicolumn{1}{c}{2.293} \\ CG-SAMR (proposed) & \multicolumn{1}{c|}{} & \textbf{0.828} & \textbf{0.839} & \multicolumn{1}{c|}{\textbf{0.828}} & \textbf{} & \textbf{4.166} & \textbf{1.381} & \multicolumn{1}{c}{\textbf{1.810}} \\ \hline \end{tabular} \end{table} \begin{table}[] \setlength{\tabcolsep}{3.0pt} \centering \caption{Ablation study of the designed modules in terms of synthesis quality. The reported values are pixel accuracies in the lesion region in percent (\%).}\label{tab6} \scriptsize \begin{tabular}{lccccc|c} \hline & $APT$w & $T_1$w & $FLAIR$ & $T_2$w & Gd-$T_1$w & Avg.
\\ \hline w/o Stretch-out & 62.5 & 66.1 & 66.2 & 70.5 & 72.4 & 67.5 \\ w/o Label-wise D & 63.3 & 73.8 & 73.1 & 74.2 & 77.1 & 72.3 \\ w/o Atlas & 61.6 & 66.3 & 69.2 & 73.4 & 73.7 & 68.8 \\ w/o $\mathcal{L}_{\text{SC}}$ & 63.4 & 71.7 & 71.3 & 75.8 & 75.8 & 71.6 \\ w/o $\mathcal{L}_{\text{CM}}$ & 63.8 & 72.8 & 73.1 & 77.3 & 78.0 & 73.0 \\ CG-SAMR (proposed) & \textbf{64.2} & \textbf{75.0} & \textbf{76.4} & \textbf{80.7} & \textbf{79.1} & \textbf{75.1} \\ \hline \end{tabular} \end{table} We conduct a comprehensive ablation study to separately evaluate the effectiveness of the stretch-out up-sampling module in the decoder network, the label-wise discriminators, the atlas, the lesion shape consistency loss $\mathcal{L}_{\text{SC}}$, and the confidence map loss $\mathcal{L}_{\text{CM}}$ in the proposed method. We evaluate each designed module based on two aspects: (1) its effectiveness in data augmentation by the synthesized data, and (2) its contribution to the synthesis quality. For the former, we use the same experimental setting as Exp.1 in Table~\ref{tab2}. The effectiveness of the modules for data augmentation by synthesis is reported in Table~\ref{tab5}, and Table~\ref{tab6} shows the contribution of the designed modules to the MR image synthesis of the different sequences. We can observe that the two tables show a similar trend. Removing the customized reconstruction for each sequence (the stretch-out up-sampling module) severely degrades the synthesis quality. We find that when the atlas is not used in our method, the synthesis quality is significantly affected due to the lack of a human brain anatomy prior. Moreover, dropping either $\mathcal{L}_{\text{SC}}$ or the label-wise discriminators in training also reduces the performance, since the shape consistency loss and the specific supervision on ROIs are then not used to optimize the generator to produce more realistic images. In addition, dropping the confidence loss $\mathcal{L}_{\text{CM}}$ leads to performance degradation, since the supervision on the intermediate results and the attention to uncertain regions during synthesis provide improved results. \section{Conclusion} \label{sec5} We proposed an effective generative model, called CG-SAMR, for multi-modal MR images, including anatomic $T_1$w, Gd-$T_1$w, $T_2$w, and $FLAIR$, and molecular $APT$w. It was shown that the proposed multi-task optimization under adversarial training further improves the synthesis quality in each ROI. The synthesized data can be used for data augmentation, particularly for images with pathological information of gliomas. Moreover, the proposed approach is an automatic, low-cost solution that is capable of producing high-quality data with diverse content for training data-driven methods. We further extended CG-SAMR to UCG-SAMR, demonstrating the feasibility of training with unpaired data. While our method outperforms state-of-the-art methods to some extent, there are several limitations in our current study. First, all subjects in this study were obtained from a single medical center, so the deep-learning models were not trained and tested on any external data. Without calibration, this may introduce proportional bias into our study. To make the algorithm more generalizable, our future work will incorporate MRI data from multiple external institutions. Second, our method is geared towards synthesizing 2D MR images. Given the lack of continuity between adjacent scans and the cross-sectional analysis, a 2D model compromises the fidelity of the MR data.
As discussed in Section~\ref{sec:da}, along the axial direction the resolution is 4.4 mm for $APT$w images and 1.1 mm for anatomic images. These two non-comparable resolutions limit the application of 3D methods. Moreover, resampling to an isotropic resolution for 3D convolution can severely degrade the valuable pathological information in $APT$w images. Therefore, in our future work, the proposed method will be extended to 3D synthesis when molecular MRI data of comparable quality is available for training. \bibliographystyle{IEEEtran} \input{tmi_syn_v5.bbl} \end{document}
\section{Introduction} \setcounter{equation}{0} \noindent Although not present in the Periodic Table the homogeneous electron gas (HEG) is still an important and so far unsolved model system for electronic structure theory, cf. e.g. \cite{Tos}. In its spin-unpolarized version, the HEG ground state is characterized by only one parameter $r_s$, such that a sphere with the radius $r_s$ contains {\it on average} one electron \cite{Zie1}. It determines the Fermi wave number as $k_{\rm F}=1/(\alpha r_s)$ in atomic units (a.u.) with $\alpha =[4/(9\pi)]^{1/3}\approx 0.521062$ and it measures simultaneously both the interaction strength and the density such that high density corresponds to weak interaction and hence weak correlation \cite{foo}. For recent papers on this limit cf. \cite{Cio0,Zie2,Zie3,Mui,Cio,Zie4,Gla2,Zie5}. Usually the total ground-state energy per particle is written as (here and in the following are wave numbers measured in units of $k_{\rm F}$ and energies in units of $k_{\rm F}^2$) \begin{equation}\label{a1} e=e_0+e_{\rm x}+e_{\rm c}, \quad e_0=\frac{3}{5}\cdot \frac{1}{2}, \quad e_{\rm x}=-\frac{3}{4}\cdot\frac{\alpha r_s}{\pi}, \quad e_{\rm c}=(\alpha r_s)^2[a\ln r_s +(b+b_{2{\rm x}})+O(r_s)], \end{equation} where $e_0$ is the energy of the ideal Fermi gas, $e_{\rm x}$ is the exchange energy in lowest (1st) order (the corresponding direct term is zero, because the system is neutral), and $e_{\rm c}$ is referred to as correlation energy. The constants $a$ and $b$ arise from the ring-diagram summation as explained in the following. Naively one should expect that in the high-density limit the Coulomb repulsion $\epsilon^2/r$ \cite{foo} can be treated as perturbation. But in the early theory of the HEG, Heisenberg has shown \cite{Hei}, that ordinary perturbation theory with $e_{\rm c}=e_2+e_3+\cdots$ and $e_n\sim (\alpha r_s)^n$, where the subscript $n$ is the perturbation order, does not apply. Namely, in 2nd order, there is a direct term $e_{2{\rm d}}$ and an exchange term $e_{2{\rm x}}$, so that $e_2=e_{2{\rm d}}+e_{2{\rm x}}$. Unfortunately the direct term $e_{2{\rm d}}$ logarithmically diverges along the Fermi surface (i.e. for vanishing transition momenta $q$): $e_{2{\rm d}}\to\ln q$ for $q\to 0$ \cite{Hei}. This failure of perturbation theory has been repaired by Macke \cite{Ma} with an appropriate partial summation of higher-order terms $e_{3{\rm r}},e_{4{\rm r}}, \cdots$ (the subscript ``r'' means ``ring diagram'') up to infinite order. The result is Eq. (\ref{a1}) with $a=(1-\ln 2)/\pi^2\approx 0.031091$ after Macke \cite{Ma} and $b\approx -0.0711$ after Gell-Mann and Brueckner \cite{GB}. The latter means, that there is a pure 2nd-order remainder of the ring-diagram summation in addition to the non-analyticity $r_s^2\ln r_s$. A consistent description up to this order requires to take into account all other terms of the same order, i.e. $e_{\rm 2x}\sim r_s^2$. After Onsager, Mittag, and Stephen \cite{Ons} it is $e_{\rm 2x}=(\alpha r_s)^2b_{\rm 2x}$ with $b_{2{\rm x}}=(1/6)\ln 2-3 \zeta(3)/(2\pi)^2\approx +0.02418$. Thus part of the direct term $b$ is compensated by the exchange term $b_{\rm 2x}=-0.34\ b\ \curvearrowright\ b+b_{\rm 2x}=0.66\ b$. 
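For orientation, the numerical values of the constants quoted above can be checked with a few lines of Python; the script below merely re-evaluates $a$ and $b_{\rm 2x}$ and the degree of compensation, using the literature value $b\approx -0.0711$.
\begin{verbatim}
# Numerical check of the high-density constants quoted in the text:
# a = (1 - ln 2)/pi^2,  b_2x = (1/6) ln 2 - 3 zeta(3)/(2 pi)^2,  b ~ -0.0711.
import math

zeta3 = sum(1.0 / n**3 for n in range(1, 200000))    # Apery's constant
a     = (1.0 - math.log(2.0)) / math.pi**2           # -> 0.031091
b_2x  = math.log(2.0) / 6.0 - 3.0 * zeta3 / (2.0 * math.pi)**2   # -> +0.02418
b     = -0.0711                                      # Gell-Mann/Brueckner
print(f"a      = {a:.6f}")
print(f"b_2x   = {b_2x:.6f}")
print(f"b_2x/b = {b_2x / b:.2f},  (b + b_2x)/b = {(b + b_2x) / b:.2f}")
\end{verbatim}
The printed ratios reproduce the compensation $b_{\rm 2x}=-0.34\,b$ and $b+b_{\rm 2x}=0.66\,b$ stated above.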
The physically plausible partial summation of higher-order perturbation terms used by Macke, respectively Gell-Mann and Brueckner is called ring-diagram summation with its characteristic particle-hole-pair excitations ($\vek k\to\vek k+\vek q, |\vek k|<1,|\vek k+\vek q|>1$, $\vek q=$ momentum transfer), also known as the RPA = random phase approximation. It is the simplest approximation which simultaneously describes the closely related phenomena of screening effects and plasma oscillations as well as plasmon propagation with dispersion. (For their damping one has to go beyond RPA with local field corrections.) In the high-density limit, correlation ``c'' starts with RPA or ring-diagram terms ``r'', symbolically written as c = r + $\cdots$. In the following the screening parameter $q_c^2=4\alpha r_s/\pi$ (being the interaction strength $\alpha r_s$ times $4/\pi$) is used. In terms of this screening wave number $q_c$ the electron-gas plasma frequency (measured in units of $k_{\rm F}^2$) is $\omega_{\rm pl}=q_c/\sqrt 3={\sqrt {4\alpha r_s/(3\pi)}}$. The building elements of the RPA Feynman diagrams are shown in Figs. 1a,b. The diagrams for $e_{\rm r}$ and $e_{\rm 2x}$ are in Figs. 1c,d, middle parts. \\ \noindent The non-analytical behavior of the total energy $e$ at the high-density limit carries over to its kinetic and potential components, $t$ respectively $v$, through the virial theorem \cite{Mar} and to the chemical potential $\mu$ through the Seitz theorem \cite{Sei}: \begin{equation}\label{a2} v=r_s\frac{d}{d r_s}e\ ,\quad t=-r_s^2\frac{d}{dr_s}\frac{1}{r_s}e\ , \quad \mu=\left(\frac{5}{3}-\frac{1}{3}r_s\frac{d}{dr_s}\right)e\ . \end{equation} In the high-density limit, this means \begin{equation}\label{a3} t=t_0+t_{\rm x}+t_{\rm c}, \quad t_0=\frac{3}{5}\cdot \frac{1}{2}, \quad t_{\rm x}=0, \quad t_{\rm c}=-(\alpha r_s)^2[a \ln r_s+(a+b+b_{2\rm x})+O(r_s)]\ , \end{equation} \begin{equation}\label{a4} v=v_0+v_{\rm x}+v_{\rm c}, \quad v_0=0, \quad v_{\rm x}=-\frac{3}{4}\cdot\frac{\alpha r_s}{\pi}, \quad v_{\rm c}=(\alpha r_s)^2[2a \ln r_s+(a+2b+2b_{2\rm x})+O(r_s)]\ , \end{equation} \begin{equation}\label{a5} \mu=\mu_0+\mu_{\rm x}+\mu_{\rm c}\ , \quad \mu_0=\frac{1}{2}\ , \quad \mu_{\rm x}=-\frac{\alpha r_s}{\pi}\ , \quad \mu_{\rm c}=(\alpha r_s)^2\left[a \ln r_s+\left(-\frac{1}{3}a+b+b_{\rm 2x}\right)+O(r_s)\right]\ . \end{equation} Fundamental relations between the simplest quantum-kinematical quantities (momentum distribution $n(k)$ and static structure factor $S(q)$) and the energy components $t$ and $v$ are \begin{eqnarray}\label{a6} t&=&\int\limits_0^\infty d(k^3)\ n(k)\frac{k^2}{2}\ ,\quad \int\limits_0^\infty d(k^3)\ n(k)=1 \ ,\quad 0\leq n(k)\leq 1\ , \\ && \quad z_{\rm F}=n(1^-)-n(1^+), \quad 0\leq z_{\rm F}=1-O(r_s)\ , \quad n(k\to \infty)\to\frac{A(r_s)}{k^8}+\cdots\ , \nonumber \\ v&=&-\frac{1}{3\cdot 4}\int\limits_0^\infty d(q^3)\ [1-S(q)]\frac{q_c^2}{q^2}\ ,\quad S(q\to 0)=\frac{q^2}{2\omega_{\rm pl}}+\cdots\ ,\quad [1-S(q\to\infty)]\to\frac{B(r_s)}{q^4}+\cdots\ \nonumber \\ \end{eqnarray} with the static 1-body quantity $n(k)$, the momentum distribution, and with the static 2-body quantity $S(q)$, the static structure factor (SSF). The Fourier transform of $1-S(q)$ is $1-g(r)$ with $g(r)\geq 0$ being the pair density (PD), see Sec.V. The SSF $S(q)$ behaves at transition momenta $|\vek q|=2$ non-analytically, because there occurs a topological change from two overlapping to two non-overlapping Fermi spheres. 
This causes asymptotic Friedel oscillations of the PD $g(r\to\infty)$, whereas cusp singularities $S(q\rightarrow 0)\sim q^{\rm odd}$ let emerge non-oscillatory asymptotic terms of $g(r\to \infty)$. For the asymptotic coefficients the sum rules \begin{equation}\label{a8} A(r_s)=\frac{1}{2}\omega_{\rm pl}^4\ g(0) \quad {\rm and} \quad B(r_s)=2\omega_{\rm pl}^2\ g(0)\quad {\rm with}\quad 1-g(0)=\frac{1}{2}\int\limits_0^\infty d(q^3)\ [1-S(q)] \end{equation} hold with the on-top PD $g(0)=1/2-O(r_s)$ \cite{Kim2,YaKa} (for $n(k)$) and \cite{Kim1,Kim2,Ya} (for $S(q)$). $g(0)$ is given by the normalisation of $1-S(q)$ and describes short-range correlations together with the peculiar behavior of $g(r\ll 1/q_c)$, besides it determines the large-wave-number asymptotics of $n(k)$ and $S(q)$. \\ \noindent In view of (\ref{a6}) and (1.7), one may ask, which peculiarities of these lowest-order quantum-kinematical quantities cause the non-analyticities of $t$ and $v$ \cite{Zie3}. The above mentioned drastic changes, when switching on the Coulomb interaction, show up in the redistribution of the non-interacting momentum distribution $n_0(k)=\Theta (1-k)$ within thin layers inside and outside the Fermi surface $|\vek k|=1$ and a remaining finite discontinuity $z_{\rm F}> 0$ (Migdal theorem \cite{Mig,Lu}). They show up also in the plasmon behavior of $S(q)$ within a small spherical region around the origin of the reciprocal space, what causes an inflexion point $q_{\rm infl}, S_{\rm infl}\sim \omega_{\rm pl}$. All these reconstructions describe the long-range correlation (screening, collective mode called plasmon), characteristic for the Coulomb interaction. They are treated in lowest order again by ring-diagram summations with the replacements $n_{\rm 2d}(k)\to n_{\rm r}(k)$ and $S_{\rm 1d}(q)\to S_{\rm r}(q)$. (Note, that $t_{\rm 2d}$ arises from $n_{\rm 2d}(k)$, but $v_{\rm 2d}$ from $S_{\rm 1d}(q)$.) How these replacements lead to the $r_s^2\ln r_s$ terms of $t$ and $v$ is shown in \cite{Zie3}. However, because the redistributions take place essentially only in the mentioned sensitive regions $||\vek k|-1|\ll q_c$ and $|\vek q|\ll q_c$ the r-terms approach the original d-terms far off these regions. Thus, to be consistent up to this order, the corresponding x-terms must be taken into account: $n_{\rm c}(k)=n_{\rm r}(k)+n_{\rm 2x}(k)+\cdots$, $S_{\rm c}(q)=S_{\rm r}(q)+S_{\rm 1x}(q)+\cdots$. These x-terms compensate part of the r-terms, similarly as this is the case for $e_{\rm c}$, $t_{\rm c}$, $v_{\rm c}$, $\mu_{\rm c}$ with the ratios of $b_{\rm 2x}$ to $b$, $a+b$, $a/2+b$, $-a/3+b$ being -0.34, -0.6, -0.43, -0.3, respectively. The ring-diagram summation for $n(k)$ has been developed in \cite{Da,Ku}, an analytical extrapolation is given in \cite{GGZ}, for the spin-polarized case see \cite{Zie8}. The ring-diagram summation for $S(q)$ has been done in \cite{Gli,Kim3}. In \cite{La} both $n(k)$ and $S(q)$ are considered on the same footing. \\ \noindent Another relevant ground state property is Dyson's self-energy $\Sigma(k,\omega)$, a {\it dynamical} 1-body quantity. Its on-shell value (as a function of $r_s$) is related to the chemical-potential shift through the Hugenholtz-van Hove (Luttinger-Ward) theorem $\mu-\mu_0= \Sigma(1,\mu)$ \cite{Hug}. 
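As an elementary consistency check of the compensation ratios quoted above for $e_{\rm c}$, $t_{\rm c}$, $v_{\rm c}$, and $\mu_{\rm c}$, the short script below re-evaluates the combinations $b$, $a+b$, $a/2+b$, $-a/3+b$ from the constants of the Introduction; it is meant purely as an illustration.
\begin{verbatim}
# Check of the ratios of b_2x to b, a+b, a/2+b, -a/3+b, which should come out
# as roughly -0.34, -0.6, -0.43, -0.3 for e_c, t_c, v_c, mu_c, respectively.
import math

zeta3 = 1.2020569031595943                      # Apery's constant
a     = (1.0 - math.log(2.0)) / math.pi**2
b     = -0.0711                                 # Gell-Mann/Brueckner constant
b_2x  = math.log(2.0) / 6.0 - 3.0 * zeta3 / (2.0 * math.pi)**2

for name, denom in [("e_c", b), ("t_c", a + b),
                    ("v_c", a / 2 + b), ("mu_c", -a / 3 + b)]:
    print(f"{name:4s}: b_2x / ({denom:+.4f}) = {b_2x / denom:.2f}")
\end{verbatim}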
Besides this, with the off-shell self-energy $\Sigma(k,\omega)$ also the 1-body Green's function \begin{equation}\label{a9} G(k,\omega)=\frac{1}{\omega-t(\vek k)-\Sigma(k,\omega)}\quad {\rm with}\quad \Sigma(k,\omega)\to{\rm sign}(k-1)\ {\rm i}\delta\quad {\rm for}\quad r_s\to 0 \\ \end{equation} is known, from which follow the momentum distribution (Migdal formula \cite{Mig}) \begin{equation}\label{a10} n(k)=\int\limits_{C_+}\frac{d\omega}{2\pi{\rm i}}\;{\rm e}^{{\rm i}\omega\delta}G(k,\omega) \quad \curvearrowright \quad \int\limits_0^\infty d(k^3)\ n(k)=1, \quad t=\int\limits_0^\infty d(k^3)\ n(k)\ \frac{k^2}{2} \end{equation} and the potential energy (Galitskii-Migdal formula \cite{Gal}, for its use in total-energy calculations cf. \cite{Miy} and refs. therein) \begin{equation}\label{a11} v=\frac{1}{2}\int\limits_0^\infty d(k^3)\int\limits_{C_+}\frac{d\omega}{2\pi{\rm i}}\;{\rm e}^{{\rm i}\omega\delta}G(k,\omega) \Sigma(k,\omega) \end{equation} where $\delta{_> \atop ^{\to}}0$ and $C_+$ denotes closing of the contour in the upper complex $\omega$-plane. \\ \noindent Note that $\Sigma(k,\omega)$ and thus also $G(k,\omega)$, $n(k)$ and $t$, as well as $v$ are functionals of $t(\vek k)$ and $v(\vek q)$. Provided the ground-state energy $e$ is available as such a {\it functional}, the (generalized) Hellmann-Feynman theorems \cite{Mar} \begin{equation}\label{a12} n(k)=\frac{4\pi}{3}\frac{\delta e}{\delta t(\vek k)}\ , \quad \quad S(q)-1=16\pi\frac{\delta e}{\delta v(\vek q)} \end{equation} hold, cf. App. A. These are equivalent writings of expressions given in \cite{Da,La} for $n(k)$ and in \cite{Ya,Kim3} for $S(q)$. The quantities $e$, $n(k)$, $t$, $S(q)$, and $v$ result as functions of $r_s$ from the replacements $t(\vek k)\to k^2/2$ and $v(\vek q)\to q_c^2/q^2$. These fundamental relations permit the following procedure (see Fig. 2): If $\Sigma(k,\omega)$ is available as a functional of $t(\vek k)$ and $v(\vek q)$ from perturbation theory or otherwise, then $n(k)$ can be calculated with the Migdal formula (\ref{a10}). From it, $t$ follows with (\ref{a6}) and $v$ with the Galitskii-Migdal formula (\ref{a11}). Finally, from their sum $t+v=e$ and the functional derivative (\ref{a12}), the SSF $S(q)$ results (and $n(k)$ may be checked once more for consistency). So the dynamical 1-body quantity $\Sigma(k,\omega)$ provides the static 2-body quantity $S(q)$. \\ \noindent Whereas in \cite{Zie5} the {\it on-shell} self-energy $\Sigma (1,\mu)$ has been studied, here the {\it off-shell} self-energy $\Sigma(k,\omega)$ is considered. The problem in \cite{Zie5} was to find out the correct diagrammatic sum for $\Sigma(1,\mu)$ on the rhs of the Luttinger-Ward theorem, which makes it an identity in the high-density limit $r_s\to 0$. The answer: the $GW$ approximation with $W$ = RPA and $G$ = an appropriately renormalized particle-hole line yields the correct $r_s$-behavior, which agrees with the lhs as it follows from $\mu(r_s\to 0)$, cf. (\ref{a5}). Here it is shown in detail how the off-shell self-energy $\Sigma(k,\omega)$ for the model case $r_s\to 0$ yields $n(k)$, $t$, $v$, and $S(q)$ step by step according to the procedure of Fig. 2. So it is shown how $n(k)$ and $S(q)$, which have their common origin in the 2-body density matrix, are indirectly linked mutually through the self-energy $\Sigma(k,\omega)$.
\\ \noindent Perturbation theory for $\Sigma(k,\omega)$ means $\Sigma=\Sigma_{\rm x}+\Sigma_{\rm c}$ with $\Sigma_{\rm c}=\Sigma_2+\Sigma_3+\cdots$, where $\Sigma_2=\Sigma_{\rm 2d}+\Sigma_{\rm 2x}$. The divergence of $\Sigma_{\rm 2d}$ is corrected by the ring-diagram summation (RPA): $\Sigma_{2d}+\cdots\to\Sigma_{\rm r}$, so $\Sigma _{\rm c}=\Sigma_{\rm r}+\Sigma_{\rm 2x}+\cdots$. The building elements of the RPA Feynman diagrams (cf. Figs. 1a and 1b) are the Coulomb repulsion $\pi^2v(\vek q)\to\pi^2q_c^2/q^2$ with the coupling constant $q_c^2=4\alpha r_s/\pi$ and the one-body Green's function of free electrons with $t(\vek k)\to k^2/2$, \begin{equation}\label{a13} G_0(k,\omega)=\frac{\Theta(k-1)}{\omega-t(\vek k)+{\rm i}\delta}+ \frac{\Theta(1-k)}{\omega-t(\vek k)-{\rm i}\delta}\ , \quad {\mbox{ $\delta{_> \atop ^{\to}}0$}}\ . \end{equation} From $G_0(k,\omega)$ follows the particle-hole propagator $Q(q,\eta)$ in RPA according to \begin{equation}\label{a14} Q(q,\eta)=-\int\frac{d^3k}{4\pi}\int\frac{d\omega}{2\pi{\rm i}}G_0(k,\omega) G_0(|{\mbox{\boldmath $k$}}+{\mbox{\boldmath $q$}}|,\omega+\eta) \end{equation} with the result \begin{equation}\label{a15} Q(q,\eta)=\int\limits_{|\vek k|<1,\ |\vek k+\vek q|>1} \frac{d^3k}{4 \pi} \left[\frac{1}{t(\vek k+\vek q)-t(\vek k) -\eta-{\rm i}\delta}+ \frac{1}{t(\vek k+\vek q)-t(\vek k)+\eta-{\rm i}\delta}\right]\ . \end{equation} The denominators contain the excitation energy to create a hole with $\vek k$ inside the Fermi sphere and a particle with $\vek k+\vek q$ outside the Fermi sphere. Thus, $Q(q,\eta)$ is a functional of $t(\vek k)$ with the functional derivative (\ref{A5}) and with $R(q,u)=Q(q,{\rm i}qu)$ defining a real function, see (\ref{B1}). In the following the self-energy contributions $\Sigma_{\rm x}$ (Sec. II), $\Sigma_{\rm r}$ (Sec. III), $\Sigma_{2{\rm x}}$ (Sec. VI) are explicitly given as they result from the diagram rules, and it is derived what follows from them: $n(k),\ t,\ v,\ e,\ S(q)$. For issues not considered here, such as $S(q,\omega)$, $\varepsilon(k,\omega)$, the $GW$-approximation etc., cf. e.g. \cite{Nech,Miy,Hol,Ara,Bro} and refs. therein. Sec. V deals with the short-range correlation following from $S(q)$, cf. (\ref{a8}). Although the high-density spin-unpolarized HEG is only a marginal corner in the complex field of electron correlation, its ring-diagram summation gives deeper insight, through rigorous theorems, into how energies and low-order quantum kinematics are functionally related, which should be of more general interest. \section{First order} \setcounter{equation}{0} \noindent The 1st-order direct term vanishes because of the neutralizing positive background. So the expansion of $\Sigma(k,\omega)$ starts with the 1st-order exchange term \begin{equation}\label{b1} \Sigma_{\rm x}(k,[v(\vek q)])=-\int\limits_{|\vek k+\vek q|<1} \frac{d^3q}{(2\pi)^3}\ \pi^2 v(\vek q)\ = -\frac{1}{8\pi}\int\limits_{|\vek k+\vek q|<1} d^3q\ v(\vek q). \end{equation} Its peculiarity is that it does not depend on $\omega$, which makes $n_{\rm x}(k)$ vanish identically, because of Eq. (\ref{a10}) with $G\approx G_0\Sigma_{\rm x}G_0$. This is in agreement with $t_{\rm x}=0$ as a consequence of the virial theorem (\ref{a2}). One may therefore conclude that all the features of $n(k)$ start with the 2nd order. But this is not true.
The peculiar redistribution of the non-interacting momentum distribution $n_0(k)=\Theta(1-k)$ due to the Coulomb repulsion makes the discontinuity jump $z_{\rm F}=n(1^{-})-n(1^+)$ deviate from its non-interacting value of 1 already in 1st order: $z_{\rm F}=1-0.18\ r_s+\cdots$. But the corresponding kinetic energy starts with $t_{\rm c}\sim r_s^2\ln r_s$. The potential energy $v_{\rm x}$ follows again as a functional of $v(\vek q)$ from the Galitskii-Migdal formula (\ref{a11}): \begin{equation}\label{b2} v_{\rm x}[v(\vek q)]= \frac{3}{8\pi}\int\limits_{|\vek k|<1}d^3k\ \Sigma_{\rm x}(k,[v(\vek q)])= -\frac{3}{(8\pi)^2}\int\limits_{|\vek k|<1, |\vek k+\vek q|<1}d^3k\int d^3q\ v(\vek q)\ . \end{equation} This is, because of $t_{\rm x}=0$, also the total energy in this order: $e_{\rm x}[v(\vek q)]=v_{\rm x}[v(\vek q)]$. Thus the functional derivative (\ref{a12}) yields \begin{equation}\label{b3} S_0(q)=1-\frac{3}{4\pi} \int\limits_{|\vek k|<1,|\vek k+\vek q|<1}d^3k= \left[\frac{3}{2}\frac{q}{2}-\frac{1}{2} \left(\frac{q}{2}\right)^3\right]\Theta(2-q)+\Theta(q-2). \end{equation} The integral arises (for $q\leq 2$) from two overlapping Fermi spheres. For $q\geq 2$ they do not overlap, thus a drastic change of the topology occurs when passing $q=2$. After Fourier transformation, see (\ref{e1}), this non-analyticity causes the asymptotic Friedel oscillations of the non-interacting PD $g_0(r\to \infty)-1\sim \cos 2r, \sin 2r$. Other non-analyticities, namely the cusp singularities $S_0(q\to 0)\sim q, q^3$, generate the non-oscillatory terms of $g_0(r\rightarrow\infty)$. The above integral $\int_{\cdots}d^3k$ is just the volume of two calottes with the height $h=1-q/2$, hence $\int_{\cdots} d^3k = 2\cdot\frac{\pi}{3}h^2(3-h)$. If in Eqs. (\ref{b1}) and (\ref{b2}) the general interaction line $v(\vek q)$ is replaced by the Coulomb interaction $q_c^2/q^2$, then \begin{equation}\label{b4} \Sigma_{\rm x}\left(k,\left[\frac{q_c^2}{q^2}\right]\right)=-\frac{q_c^2}{4}\ \left(1+\frac{1-k^2}{2k} \ln\left|\frac{k+1}{k-1}\right|\right) \quad {\rm and} \quad v_{\rm x}=-\frac{3}{16}q_c^2 \end{equation} turn out. Besides $\Sigma_{\rm x}(1)=-q_c^2/4=(4/3)e_{\rm x}$. Note $e_{\rm x}=v_{\rm x}$ in agreement with the virial theorem (\ref{a2}). This shows how the procedure of Fig. 2 works in lowest order. \section{Second Order: The direct term $\Sigma_{\rm 2d}$ and its RPA correction} \setcounter{equation}{0} \subsection{The self-energy $\Sigma_{\rm r}(k,\omega)$} \noindent In 2nd order there is a direct term (d) and an exchange term (x), see Figs. 1c,d, left: $\Sigma_2(k,\omega)=\Sigma_{2{\rm d}}(k,\omega)+\Sigma_{2{\rm x}}(k,\omega)$. The direct term diverges along the Fermi surface, i.e. for vanishing transition momenta $q_0\to 0$. This flaw is repaired by the ring-diagram summation (Fig. 1c, left) with the result \begin{equation} \Sigma_{{\rm r}}(k,\omega)=\frac{1}{8\pi}\int\limits_{q>q_0}d^3q \int\frac{d\eta}{2\pi {\rm i}} \; \frac{v^2(\vek q)Q(q,\eta)}{1+v(\vek q)Q(q,\eta)}\ G_0(|{\mbox{\boldmath $k$}}+{\mbox{\boldmath $q$}}|,\omega+\eta), \; {\mbox{ $q_0{_> \atop ^{\to}}0$}}\ .
\nonumber \\ \end{equation} Note, that this defines a functional of $v(\vek q)$ and note also \begin{equation} \frac{v(\vek q)}{1+v(\vek q)Q(q,\eta)}\to\frac{q_c^2}{q^2+q_c^2Q(q,\eta)} \quad {\rm for} \quad v(\vek q)\to\frac{q_c^2}{q^2}\ , \nonumber \end{equation} where the screening (or ``Yukawa'') term $q_c^2Q(q,\eta)$ in the denominator makes the bare Coulomb repulsion renormalized and removes the above mentioned divergence of $\Sigma_{\rm 2d}(k,\omega)$. $G_0(k,\omega)$ and $Q(q,\eta)$ are functionals of $t(\vek k)$, see (\ref{a13}), (\ref{a15}). Use of Eq. (\ref{a13}) leads to \begin{eqnarray}\label{c1} \Sigma_{\rm r}(k,\omega)&=& \frac{1}{8\pi}\int d^3q\int \frac{d\eta}{2\pi{\rm i}}\ \frac{v^2(\vek q)Q(q,\eta)}{1+v(\vek q)Q(q,\eta)}\times \nonumber \\ &\times&\left[\displaystyle\frac{\Theta(|{\vek k}+{\mbox{\boldmath $q$}}|-1)} {\omega+\eta-\frac{1}{2}k^2-{\mbox{\boldmath $q$}}\cdot(\vek k+\frac{1}{2}\mbox{\boldmath $q$})+{\rm i}\delta}+ \displaystyle \frac{\Theta(1-|{\vek k}+{\mbox{\boldmath $q$}}|)}{\omega+\eta-\frac{1}{2}k^2-{\mbox{\boldmath $q$}}\cdot(\vek k+\frac{1}{2} \mbox{\boldmath $q$})-{\rm i}\delta}\right] . \end{eqnarray} In the following it is shown, how $n_{\rm r}(k)$ and $t_{\rm r}$ result from this $\Sigma_{\rm r}(k,\omega)$. \subsection{How $n_{\rm r}(k)$ results from $\Sigma_{\rm r}(k,\omega)$ and $t_{\rm r}$ from $n_{\rm r}(k)$} \noindent According to the Migdal formula (\ref{a10}), $n(k)$ follows from $G(k,\omega)$. In RPA it is \begin{equation}\label{c2} G(k,\omega)=G_0(k,\omega)+G_0(k,\omega)\Sigma_{\rm r}(k,\omega)G_0(k,\omega)+\cdots \end{equation} The first term yields the momentum distribution of the ideal Fermi gas \begin{equation}\label{c3} n_0(k)=\int\limits_{C_+}\frac{d\omega}{2\pi{\rm i}}\left[\frac{\Theta(k-1)}{\omega-\frac{1}{2}k^2+{\rm i}\delta}+ \frac{\Theta(1-k)}{\omega-\frac{1}{2}k^2-{\rm i}\delta}\right]=\Theta(1-k) \end{equation} being for small $r_s$ additively corrected by the RPA expression $n_{\rm r}(k)$, which follows from the second term of Eq. (\ref{c2}) according to \begin{equation}\label{c4} n_{\rm r}(k)=\int\limits\frac{d\omega}{2\pi{\rm i}}G_0(k,\omega)\Sigma_{\rm r}(k,\omega)G_0(k,\omega)\ . \end{equation} Using $\Sigma_{\rm r}(k,\omega)$ of Eq. (\ref{c1}) it is \begin{equation}\label{c5} n_{\rm r}(k)=\frac{1}{8\pi}\int d^3q \int \frac{d\eta}{2\pi{\rm i}}\ \frac{v^2(\vek q)Q(q,\eta)}{1+v(\vek q)Q(q,\eta)} \ f(\vek k,\vek q,\eta). \end{equation} The case ``2d'' appears, when the ``Yukawa'' term $v(\vek q)Q(q,\eta)$ in the denominator is deleted (`descreening'). The contour integrations are comprised [using $\Theta(k-1)\Theta(1-k)=0$] in \begin{eqnarray}\label{c6} f(\vek k,\vek q,\eta)=\int&\displaystyle\frac{d\omega}{2\pi{\rm i}}& \left[ \frac{\Theta(k-1)}{(\omega-\frac{1}{2}k^2+{\rm i}\delta_1)(\omega-\frac{1}{2}k^2+{\rm i}\delta_2)}+ \frac{\Theta(1-k)}{(\omega-\frac{1}{2}k^2-{\rm i}\delta_1)(\omega-\frac{1}{2}k^2-{\rm i}\delta_2)} \right]\times \nonumber \\ &\times& \left[\frac{\Theta(|\vek k+\vek q|-1)}{\omega+\eta-\frac{1}{2}k^2-\vek q(\vek k+\frac{1}{2}\vek q)+{\rm i}\delta }+ \frac{\Theta(1-|\vek k+\vek q|)}{\omega+\eta-\frac{1}{2}k^2-\vek q(\vek k+\frac{1}{2}\vek q)-{\rm i}\delta}\right] . \nonumber \\ \end{eqnarray} Only the combinations $\Theta(k-1)\Theta(1-|\vek k+\vek q|)$ with a pole in the upper $\omega$-plane and $\Theta(1-k)\Theta(|\vek k+\vek q|-1)$ with a pole in the lower plane contribute. Next the contour integration along the real $\omega$-axis is closed by half-circles in the upper, respectively lower plane. 
Note that $\int\limits_{C_\pm}d\omega/(\omega-\omega_\pm)=\pm2\pi{\rm i}$, where Im $\omega_\pm \gtrless 0$. The result is \begin{equation}\label{c7} f(\vek k,\vek q,\eta)=\frac{\Theta(\vek k, \vek q)}{[\eta- \vek q (\vek k+\frac{1}{2} \vek q)]^2} \end{equation} with $\Theta(\vek k, \vek q)=\pm 1$ for $|\vek k|\gtrless 1, \ |\vek k+\vek q|\lessgtr 1$ and 0 otherwise. This ensures particle-number conservation, because of \begin{equation}\label{c8} \left(\int\limits_{|\vek k|>1,|\vek k+\vek q|<1} -\int\limits_{|\vek k|<1,|\vek k+\vek q|>1} \right) \frac{d^3k}{[\eta-\vek q(\vek k+\frac{1}{2}\vek q)]^2}=0 \quad {\rm or}\quad \int d^3k\ n_{\rm r}(k)=0. \end{equation} Here, with the replacement $\vek k\to -\vek k-\vek q$, the denominator of the second integral transforms to $[\eta+\vek q(\vek k+\frac{1}{2}\vek q)]^2$. The second integral is identical to the first one. This is because of the property $Q(q,-\eta)=Q(q,\eta)$ in the term in front of $f(\vek k,\vek q,\eta)$ in Eq. (\ref{c5}). \\ \noindent Next it is shown how $t_{\rm r}$ results from $n_{\rm r}(k)$, starting with Eq. (\ref{c5}). With the identity (A.5) it can be written as \begin{eqnarray}\label{c9} n_{\rm r}(k)&=&-\frac{1}{4\pi}\int d^3q\int\frac{d\eta}{2\pi {\rm i}}\sum\limits_{n=1}^\infty (-1)^{n+1}v^{n+1}(\vek q)\ Q^n(q,\eta)\ \frac{\delta Q(q,\eta)}{\delta t(\vek k)} \nonumber \\ &=&-\frac{1}{4\pi}\int d^3q\int \frac{d\eta}{2\pi{\rm i}}\sum\limits_{n=1}^\infty\frac{(-1)^{n+1}}{n+1}v^{n+1}(\vek q)\ \frac{\delta Q^{n+1}}{\delta t(\vek k)}\ . \end{eqnarray} Next $t_{\rm r}=3/(4\pi)\ \int d^3k\ n_{\rm r}(k)\ t(\vek k)$ is combined with the identity (\ref{A6}). This results in \begin{equation}\label{c10} t_{\rm r}=\frac{3}{16}\int d^3q \int \frac{d\eta}{2\pi{\rm i}}\sum\limits_{n=1}^\infty (-1)^{n+1}\frac{n}{n+1} v^{n+1}(\vek q)Q^{n+1}(q,\eta)\ . \end{equation} This is the contribution of the ring-diagram summation to the kinetic energy. For the total energy contribution $e_{\rm r}$ the potential energy $v_{\rm r}$ is needed. \subsection{How $v_{\rm r}$ results from $\Sigma_{\rm r}(k,\omega)$ and $S_{\rm r}(q)$ from $e_{\rm r}$} \noindent Inserting Eqs. (\ref{a9}) and (\ref{c1}) into the Galitskii-Migdal formula (\ref{a11}) and performing the $\omega$-integration yields \begin{eqnarray}\label{c11} v_{\rm r}&=&\frac{1}{16\pi}\int d^3q \int \frac{d\eta}{2\pi{\rm i}} \frac{v^2(\vek q)Q(q,\eta)}{1+v(\vek q)Q(q,\eta)}\times \nonumber \\ &\times&\int\frac{3\ d^3k}{4\pi} \left[\frac{\Theta(|\vek k+\vek q|-1)\Theta(1-k)}{\eta-\vek q(\vek k+\frac{1}{2}\vek q)+{\rm i}\delta}+ \frac{\Theta(1-|\vek k+\vek q|)\Theta(k-1)}{-\eta+\vek q(\vek k+\frac{1}{2}\vek q)+{\rm i}\delta}\right]\ . \end{eqnarray} With the definition (\ref{a15}) of $Q(q,\eta)$ this results in \begin{equation}\label{c12} v_{\rm r}=-\frac{3}{16\pi}\int d^3q\int \frac{d\eta}{2\pi{\rm i}} \frac{v^2(\vek q)Q^2(q,\eta)}{1+v(\vek q)Q(q,\eta)}\ . \end{equation} The power-series expansion (its 1st term is $v_{\rm 2d}$) \begin{equation} v_{\rm r}=-\frac{3}{16\pi}\int d^3q\int\frac{d\eta}{2\pi{\rm i}}\sum\limits_{n=1}^\infty (-1)^{n+1}v^{n+1}(\vek q)Q^{n+1}(q,\eta) \nonumber \end{equation} makes it easier to compare with Eq. (\ref{c10}) for $t_{\rm r}$. Their sum yields [with $-\frac{n}{n+1}+1=\frac{1}{n+1}$] the well-known RPA expression for the total energy (after Macke) \begin{equation}\label{c13} e_{\rm r}=\frac{3}{16\pi}\int d^3q\int\frac{d\eta}{2\pi{\rm i}}[\ln(1+v(\vek q)Q(q,\eta))-v(\vek q)Q(q,\eta)]\ . 
\end{equation} (The 1st term of the power-series expansion gives $e_{\rm 2d}$.) Note that $r_sde_{\rm r}/dr_s$ agrees with $v_{\rm r}$ of Eq. (\ref{c12}): virial theorem (\ref{a2}). By means of functional derivatives [see Appendix A and Eqs. (\ref{a12})], the SSF $S_{\rm r}(q)$ and the momentum distribution $n_{\rm r}(k)$ follow. \\ \noindent Indeed, $S_{\rm r}(q)$ results from $e_{\rm r}[t(\vek k),v(\vek q)]$ as \begin{equation}\label{c14} S_{\rm r}(q)= 16\pi\frac{\delta e_{\rm r}}{\delta v(\vek q)}=-3\int \frac{d\eta}{2\pi{\rm i}}\ \frac{v(\vek q)Q^2(q,\eta)}{1+v(\vek q)Q(q,\eta)}\ , \end{equation} cf. Fig. 1c, right. If this is multiplied by $v(\vek q)/(16\pi)$ and integrated $\int d^3q$ [according to (1.7)], then $v_{\rm r}$ turns out, as it should. \\ \noindent The analogous procedure for $n_{\rm r}(k)$ is \begin{equation}\label{c15} n_{\rm r}(k)=\frac{4\pi}{3}\frac{\delta e_{\rm r}}{\delta t(\vek k)}=-\frac{1}{4}\int d^3q \frac{d\eta}{2\pi{\rm i}}\ \frac{v^2(\vek q)Q(q,\eta)}{1+v(\vek q)Q(q,\eta)}\frac{\delta Q(q,\eta)}{\delta t(\vek k)}, \quad \frac{\delta Q(q,\eta)}{\delta t(\vek k)}=-\frac{1}{2\pi}f(\vek k,\vek q,\eta)\ , \end{equation} in agreement with Eq. (\ref{c5}), as it should. \\ \noindent In the following it is shown how Eqs. (\ref{c12}-\ref{c15}) indeed yield the high-density results of Macke \cite{Ma}, Gell-Mann/Brueckner \cite{GB}, Daniel/Vosko \cite{Da}, Kulik \cite{Ku}, and Kimball \cite{Kim3}. \subsection{How the complex $Q(q,\eta)$ is replaced by the real $R(q,u)$ and \\ how $v_{\rm r}$, $S_{\rm r}(q)$, and $n_{\rm r}(k)$ behave for $r_s\to 0$} \noindent The replacement $\eta\to{\rm i}qu$ turns the contour integration in the RPA expressions (\ref{c12}-\ref{c15}) into one along the real axis. Besides, it is a useful trick to introduce the velocity $u$ instead of the frequency $\eta$. With the replacement $v(\vek q)\to q_c^2/q^2$ they take the form (note $q\ d^3q=2\pi q^2d(q^2)$): \begin{eqnarray}\label{c16} {\bf (1)}\hspace{1cm}\qquad e_{\rm r}&=&\frac{3}{8\pi}\int\limits_0^\infty du \int\limits_0^\infty d(q^2)\ q^2 \left[\ln\left(1+\frac{q_c^2}{q^2}R(q,u)\right) -\frac{q_c^2}{q^2}R(q,u)\right]\ , \\ {\bf (2)}\hspace{1cm}\qquad v_{\rm r}&=&-\frac{3q_c^4}{8\pi}\int\limits_0^\infty du \int\limits_0^\infty d(q^2)\ \frac{R^2(q,u)}{q^2+q_c^2R(q,u)}\ , \end{eqnarray} \newpage \begin{eqnarray}\label{c18} {\bf (3)}\hspace{4mm}\qquad S_{\rm r}(q)&=&-\frac{3q_c^2}{\pi}\ q\int\limits_0^\infty du\ \frac{R^2(q,u)}{q^2+q_c^2R(q,u)}\ , \\ {\bf (4)}\hspace{4mm}\qquad n_{\rm r}(k)&=&\frac{q_c^4}{8\pi}\int\limits_0^\infty du \int\limits_0^\infty d(q^2) \frac{R(q,u)}{q^2+q_c^2R(q,u)}\int\limits_{-1}^{+1} d\zeta \frac{\Theta(k,q,\zeta)}{q^2[{\rm i}u-(k\zeta+\frac{1}{2}q)]^2} \end{eqnarray} with $\Theta(k,q,\vek e_k\vek e_q)=\Theta(\vek k,\vek q)$, see (\ref{c7}). In the following, these four RPA quantities are discussed in detail in the high-density limit $r_s\to 0$. \\ \noindent {\bf (1)} Let us first consider the {\bf total energy} $e_{\rm r}$ of Eq. (3.16). The power-series expansion leads in lowest order to \begin{equation}\label{c20} e_{\rm 2d}=-\frac{3q_c^4}{16\pi}\int\limits_0^\infty du \int\limits_{q_0^2}^\infty \frac{d(q^2)}{q^2}\ R^2(q,u) =-\frac{3q_c^4}{(8\pi)^2}\int\limits_{q_0}^\infty\frac{dq}{q^2}I(q)\ . \end{equation} For the momentum transfer function $I(q)$, to be referred to as Macke function, see (\ref{C11}). Its property $I(q\to 0)\sim q$ makes $e_{\rm 2d}$ diverge for $q_0{_> \atop ^{\to}}0$. 
Conversely, this flaw of the 2nd-order perturbation theory is rectified by the RPA summation (3.16). How this $e_{\rm r}$ behaves for small $r_s$ with the result (\ref{a1}) has been shown by Macke \cite{Ma} and Gell-Mann/Brueckner \cite{GB}. \\ \noindent {\bf (2)} In the following their method is applied to the {\bf potential energy} $v_{\rm r}$ of Eq. (3.17). Deleting the term $q_c^2R(q,u)$ in the denominator undoes the RPA partial summation and the diverging expression \begin{eqnarray}\label{c21} v_{2{\rm d}}=-\frac{3}{8\pi}q_c^4\int\limits_0^\infty du\int\limits_{q_0^2}^\infty d(q^2) \frac{R^2(q,u)}{q^2} =-\frac{2\cdot 3q_c^4}{(8\pi)^2}\int\limits_{q_0}^\infty\frac{dq}{q^2}I(q) \end{eqnarray} results. It remains to show how to extract from Eq. (3.17) the constants $c_1$ and $c_2$ of $v_{\rm r}=(\alpha r_s)^2[c_1\ln r_s+c_2+O(r_s)]$ to be compared with $v_{\rm c}=v_{\rm r}+O(r_s^3)$ of Eq. (\ref{a4}). Whereas $c_1$ follows from $v_{2{\rm d}}$, $c_2$ results from the peculiar behavior of $v_{\rm r}$ for small transition momenta $q$. Therefore one can approximate $R(q,u)\approx R_0(u)+\cdots$ and restrict $q$ to $q<q_1$, where $q_1$ is a small (non-vanishing) momentum: \begin{eqnarray}\label{c22} v_{\rm r}^0&=&-\frac{3}{8\pi}q_c^4\int\limits_0^\infty du\int\limits_0^{q_1^2} d(q^2) \frac{R_0^2(u)}{q^2+q_{\rm c}^2R_0(u)} \nonumber \\ &=&-\frac{3}{8\pi}q_c^4\int\limits_0^\infty du\ R_0^2(u) \{\ln[q_1^2+q_{\rm c}^2R_0(u)]-\ln [q_{\rm c}^2R_0(u)]\} \nonumber \\ &=&-\frac{3}{8\pi}q_c^4\int\limits_0^\infty du\ R_0^2(u) \{[\ln q_1^2+O(r_s)]-\ln [q_{\rm c}^2R_0(u)]\}\ . \end{eqnarray} With the constants $a$, $b_{\rm r}'$ defined in Appendix B one obtains \begin{equation}\label{c23} v_{\rm r}^0=(\alpha r_s)^2 [2a\ln r_s+2(a\ln\frac{4\alpha}{\pi}+b_{\rm r}'-a\ln q_1^2)]+O(r_s^3)\ , \end{equation} thus $c_1=2a$. To find also $c_2$ the difference $\Delta v_{2{\rm d}}=v_{2{\rm d}}-v_{2{\rm d}}^0$ between the correct 2nd-order term of Eq. (\ref{c22}) and the first term in the expansion of $v_{\rm r}^0$, namely \begin{eqnarray}\label{c24} v_{2{\rm d}}^0=-(\alpha r_s)^2\frac{3}{2\pi^4}\int\limits_{q_0}^{q_1}\frac{dq}{q^2}2\cdot 4\pi q\frac{\pi^3}{3}a \end{eqnarray} has to be considered (exploiting the trick of Gell-Mann/Brueckner {\it mutatis mutandis}): \begin{eqnarray}\label{c25} \Delta v_{2{\rm d}}&=&-(\alpha r_s)^2\frac{3}{2\pi^4}\left\{\int\limits_{q_0}^\infty\frac{dq}{q^2}I(q)- \int\limits_{q_0}^{q_1}\frac{dq}{q^2}2\cdot 4\pi q\frac{\pi^3}{3}a\right\}+O(r_s^3) \nonumber \\ &=&(\alpha r_s)^2\left\{-\frac{3}{2\pi^4}\int\limits_{q_0}^\infty \frac{dq}{q^2}\left[I(q)- \frac{8\pi^4}{3}\frac{a}{q}\Theta(1-q)\right]+4a\int\limits_1^{q_1}\frac{dq}{q}\right\} +O(r_s^3)\ . \end{eqnarray} The first integral no longer diverges for $q_0\to 0$; therefore one can set $q_0=0$. Besides \begin{equation}\label{c26} \Delta v_{2{\rm d}}=(\alpha r_s)^2 2(b_{2{\rm d}}+a\ln q_1^2)+O(r_s^3) \end{equation} shows [for $b_{2{\rm d}}$ see (B.8)] that for $r_s\to 0$ the sum $v_{\rm r}^0+\Delta v_{2{\rm d}}$ does not depend on the arbitrary cut-off $q_1$: \begin{equation}\label{c27} v_{\rm r}=(\alpha r_s)^2 2 \left[a \ln r_s+(a\ln\frac{4\alpha}{\pi}+b_{\rm r}'+b_{2{\rm d}})\right]+O(r_s^3)\ . \end{equation} This has to be compared with Eq. (\ref{a4}). Indeed, the constant direct term beyond $\ln r_s$ yields \begin{equation}\label{c28} a+2b=2(a\ln\frac{4\alpha}{\pi}+b_{\rm r}'+b_{2{\rm d}})\ , \end{equation} which defines $b$. Its value $b\approx -0.0711$ agrees with what is given in \cite{GB}, as it should. 
- The appearance of a 2nd-order term $\sim r_s^2$ means that - to be consistent - all other terms of the same order contribute to the small-$r_s$ behavior of $e$. There is only one such term, namely $v_{\rm 2x}$, as treated in Sec. IV. \\ \noindent {\bf (3)} Next the {\bf static structure factor} of Eq. (3.18) is considered. In the lowest order (r$\to$1d) it reads - again with the Macke function $I(q)$ of Eq. (\ref{C11}) and with $\omega_{\rm pl}=q_c/\sqrt 3$ \begin{eqnarray}\label{c29} S_{\rm 1d}(q)&=&-2\frac{\omega_{\rm pl}^2}{(4\pi/3)^2}\frac{I(q)}{q^2} \quad \curvearrowright \\ S_{\rm 1d}(q\to 0)= - 3(1-&\ln 2&)\frac{\omega_{\rm pl}^2}{q}+O(1/q^3)\ , \quad S_{\rm 1d}(q\to \infty)=-2\frac{\omega_{\rm pl}^2}{q^4}+O(1/q^6)\ . \nonumber \end{eqnarray} At $q=2$, the non-interacting value $S_0(2)=1$ and its 2nd-derivative discontinuity $\Delta S_0''(2)=3/4$, which arises from the Fermi edge, are reduced by [with $I(2),\Delta I''(2)$ from \cite{Zie5}, Eq. (C.2)] \begin{equation}\label{c30} S_{\rm 1d}(2)=-(13-16\ln 2)\frac{3}{20}\omega_{\rm pl}^2\ , \quad \Delta S_{\rm 1d}''(2)=-\left(\frac{3}{4}\right)^2\omega_{\rm pl}^2\ . \end{equation} Note that $S_{\rm 1x}(q)$ compensates part (ca. half) of $S_{\rm 1d}(q)$, cf. Eq. (\ref{d7}), and note that $S(q)=S_0(q)+ S_{\rm 1d}(q)+S_{\rm 1x}(q)+\cdots$ decreases with increasing $r_s$ for a given value of $q$. Whereas for $q\gg q_c$ the perturbative treatment $S_{\rm r}(q)=S_{\rm 1d}(q)+O(r_s^2)$ holds (screening effects are not so important for large momentum transfers $q$), for $q\ll q_c$ there is a big difference between the `bare' $S_{\rm 1d}(q)$, which behaves unphysically because of $I(q\to 0)\sim q$, cf. Eq. (\ref{C11}), and its renormalized counterpart $S_{\rm r}(q)$, where the ring-diagram summation ameliorates the above-mentioned flaw of $S_{\rm 1d}(q)$, cf. \cite{Zie3}, Fig. 3. For small $q$, the approximation $R(q,u)=R_0(u)+\cdots$ is sufficient. So $S_{\rm r}(q)\approx -q_cL(q/q_c)$ with \begin{equation}\label{c31} L(y)=\frac{3}{\pi}\ y\int\limits_0^\infty du\ \frac{R_0^2(u)}{y^2+R_0(u)}\quad \curvearrowright\quad L(y\to 0)=\frac{3}{4}y-\frac{\sqrt 3}{2}y^2+\frac{9\sqrt 3}{20}y^4+\cdots , \end{equation} to be referred to as the Kimball function; for its properties cf. \cite{Zie3}. As a consequence, the expression \begin{equation}\label{c32} S_{\rm r}(q\ll q_c)=-\frac{3}{4}q+\frac{q^2}{2\omega_{\rm pl}}-\frac{3}{20}\frac{q^4}{\omega_{\rm pl}^3}\cdots , \quad \end{equation} (i) eliminates the divergence of $S_{\rm 1d}(q\to 0)$ and (ii) simultaneously replaces in the sum $S_0(q)+S_{\rm r}(q)$ the linear term of $S_0(q)$ with a quadratic one, which is in agreement with the plasmon sum rule \cite{Pin,Thom}. Higher-order terms arising from the difference $R(q,u)-R_0(u)=q^2R_1(u)+\cdots$ and from local field corrections beyond RPA also have to cancel the cubic term $-q^3/16$ of $S_0(q)$, to modify the coefficient of the term $\sim q^4/\omega_{\rm pl}^3$ of $S_{\rm r}(q)$ correspondingly, and to add a term $\sim q^5$, such that \cite{GGSB,Iwa} \begin{equation}\label{c33} S(q\ll q_c)=\frac{q^2}{2\omega_{\rm pl}}+s_4\frac{q^4}{\omega_{\rm pl}^3}+s_5\frac{q^5}{\omega_{\rm pl}^4}+\cdots, \quad s_4<0,\quad \frac{1}{12}<|s_4|<\frac{3}{20}\ . \end{equation} A direct consequence of these replacements is the appearance of an inflexion point, which moves for $r_s\to 0$ towards the origin with $q_{\rm infl}, S_{\rm infl}\sim\omega_{\rm pl}={\sqrt {4\alpha r_s/(3\pi)}}$\ . 
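\\ \noindent The small-$y$ expansion in (\ref{c31}) is easily checked numerically. The following short Python sketch does so; note that it assumes $R_0(u)=1-u\arctan(1/u)$, the standard small-$q$ (Lindhard-type) limit, which is an assumption here since $R_0(u)$ is defined only in Appendix B (with this form, $\int_0^\infty du\,R_0(u)=\pi/4$ indeed reproduces the leading coefficient $3/4$):
\begin{verbatim}
# Numerical sketch: small-y expansion of the Kimball function L(y), Eq. (c31).
# Assumption (not taken from the text): R_0(u) = 1 - u*arctan(1/u).
import numpy as np
from scipy.integrate import quad

def R0(u):
    return 1.0 - u*np.arctan(1.0/u) if u > 0.0 else 1.0

def L(y):
    val, _ = quad(lambda u: R0(u)**2/(y**2 + R0(u)), 0.0, np.inf, limit=200)
    return 3.0/np.pi*y*val

for y in (0.02, 0.05, 0.1):
    series = 0.75*y - np.sqrt(3.0)/2.0*y**2 + 9.0*np.sqrt(3.0)/20.0*y**4
    print(y, L(y), series)      # the two columns agree for small y
\end{verbatim}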
$S_{\rm r}(q)$ of (\ref{c18}) realizes the smooth transition from the $k_{\rm F}(\sim 1/r_s)$-scaling ``far off'' the origin [reasonably approximated by $I(q)$ of (\ref{c29})] to the $k_{\rm F}q_c(\sim1/\sqrt r_s)$-scaling near the origin [reasonably approximated by the Kimball-function $L(q/q_c)$ of (\ref{c31})]. This transition causes the non-analyticity $v_{\rm r}\sim r_s^2\ln r_s$ \cite{Zie3}. \\ \noindent {\bf (4)} Finally the {\bf momentum distribution} (3.19) is considered. The expression $q^2f(\vek k,\vek q,{\rm i}qu)$, see (\ref{c7}), is rewritten in the following way \begin{equation}\label{c34} \frac{1}{[{\rm i}u-(k\zeta+\frac{1}{2}q)]^2}=-\frac{1}{[u+{\rm i}(k\zeta+\frac{1}{2}q)]^2}= \frac{\partial}{\partial u}\ \frac{1}{u+{\rm i}(k\zeta+\frac{1}{2}q)}\to\frac{\partial}{\partial u}\ \frac{u}{u^2+(k\zeta+\frac{1}{2}q)^2}\ . \end{equation} This allows Eq. (3.19) to be written as \begin{equation}\label{c35} n_{\rm r}(k)=\frac{\omega_{\rm pl}^4}{(4\pi/3)^2}F_{\rm r}(k) \quad {\rm with} \quad F_{\rm r}(k)=\int\limits_0^\infty du\int\frac{d^3q}{q^3}\ \frac{R(q,u)}{q^2+q_c^2R(q,u)}\ F(k,q,u) \end{equation} and \begin{equation}\label{c36} F(k,q,u)=\frac{\partial}{\partial u}\left(u\int\limits_{-1}^{+1}d\zeta\ \frac{\Theta(k,q,\zeta)}{u^2+(k\zeta+\frac{1}{2}q)^2}\right)\ . \end{equation} Next the $\zeta$-integration has to be performed. The boundary conditions $k\gtrless 1$ and $|\vek k+\vek q|\lessgtr 1$ mean $k^2+q^2+2kq\zeta\lessgtr 1$ or $\zeta\lessgtr\zeta_0$ with $\zeta_0=(1-k^2-q^2)/(2kq)$. So we have \begin{equation}\label{c37} F(k>1,q,u)=\left.-\frac{1}{k}\frac{k\zeta+\frac{1}{2}q}{(k\zeta+\frac{1}{2}q)^2+u^2}\right|_{-1}^{\zeta_0},\quad F(k<1,q,u)=\left.\frac{1}{k}\frac{k\zeta+\frac{1}{2}q}{(k\zeta+\frac{1}{2}q)^2+u^2}\right|_{\zeta_0}^{+1}\ . \end{equation} For $k>1$ the function $F(k,q,u)$ is non-zero only within a certain strip of the $k$-$q$-plane (see Fig. 3), namely $k-1<q<k+1$ with $\zeta = -1\cdots \zeta_0$ (area I). For $k<1$ the corresponding areas are the triangle $1-k<q<1+k$ with $\zeta=\zeta_0\cdots +1$ (area II) and the strip $q>1+k$ with $\zeta=-1\cdots +1$ (area III). With $k\zeta_0+\frac{1}{2}q=\frac{1-k^2}{2q}$ (in areas I and II) it follows \begin{eqnarray}\label{c38} F_{\rm I}(k>1,q,u)=+\frac{1}{k}\ \left[\frac{\frac{k^2-1}{2q}}{(\frac{k^2-1}{2q})^2+u^2}- \frac{k-\frac{1}{2}q}{(k-\frac{1}{2}q)^2+u^2}\right]\quad {\rm for}\quad (k,q) \quad {\rm in} \quad {\rm I}\ , \nonumber \\ F_{\rm II}(k<1,q,u)=-\frac{1}{k}\ \left[\frac{\frac{1-k^2}{2q}}{(\frac{1-k^2}{2q})^2+u^2}- \frac{\frac{1}{2}q+k}{(\frac{1}{2}q+k)^2+u^2}\right]\quad {\rm for}\quad (k,q) \quad {\rm in} \quad {\rm II}\ , \nonumber \\ F_{\rm III}(k<1,q,u)=-\frac{1}{k}\ \left[\frac{\frac{1}{2}q-k}{(\frac{1}{2}q-k)^2+u^2}- \frac{\frac{1}{2}q+k}{(\frac{1}{2}q+k)^2+u^2}\right]\quad {\rm for}\quad (k,q) \quad {\rm in} \quad {\rm III}\ . \end{eqnarray} This together with (\ref{c35}) is the momentum distribution in the ring-diagram summation \cite{Da,Ku}, see also \cite{Cio0,GGZ}. For $k=0$ only the last line for III contributes, with $F_{\rm III}(0,q,u)=-8(q^2-u^2)/(q^2+u^2)^2$, giving $F_{\rm r}(0)=-4.112335+1.35595\ r_s+\cdots$. If in the denominator of Eq. 
(\ref{c35}) the ``Yukawa''-term $q_{\rm c}^2R(q,u)$ is deleted, then \begin{equation}\label{c39} n_{2{\rm d}}(k)=\frac{\omega_{\rm pl}^4}{(4\pi/3)^2}F_{\rm 2d}(k)\ ,\quad F_{\rm 2d}(k)=4\pi\int\limits_0^\infty\frac{dq}{q^3}\int\limits_0^\infty du\ R(q,u)\ F(k,q,u) \end{equation} arises with $F_{\rm 2d}(k>1)=F_{\rm I}(k)$ and $F_{\rm 2d}(k<1)=F_{\rm II}(k)+F_{\rm III}(k)$. [Note that the definition of $F_{\rm 2d}(k)$ with ${F_{\rm 2d}(k\gtrless 1)}\gtrless 0$ differs from what is used in \cite{Cio0, GGZ}, where $F(k)>0$ for all $k\gtrless 1$]. The function (with $u\sim \tan \varphi$ the $u$-integration is replaced by an angular integration \cite{Cio0}) \begin{eqnarray}\label{c40} F_{\rm 2d}(k>1)=+ \frac{4\pi}{k}\int\limits_{k-1}^{k+1}\frac{dq}{q^3}\int\limits_0^{\pi/2}d\varphi\ [R(q,\frac{k^2-1}{2q}\tan \varphi)-R(q,(k-\frac{1}{2}q)\tan \varphi)]\ , \nonumber \\ F_{\rm 2d}(k<1)= -\frac{4\pi}{k}\int\limits_{1-k}^{1+k}\frac{dq}{q^3}\int\limits_0^{\pi/2}d\varphi\ [R(q,\frac{1-k^2}{2q}\tan \varphi)-R(q,(k+\frac{1}{2}q)\tan \varphi)] \nonumber \\ -\frac{4\pi}{k}\int\limits_{1+k}^{\infty}\frac{dq}{q^3}\int\limits_0^{\pi/2}d\varphi\ [R(q,(\frac{1}{2}q-k)\tan \varphi)-R(q,(\frac{1}{2}q+k)\tan \varphi)] \end{eqnarray} possesses the properties (see also \cite{Cio0,Zie7} and note that $F_{\rm 2x}(k)$ is of the same order, cf. Eq. (\ref{d5})) \begin{eqnarray}\label{c41} F_{\rm 2d}(k\to 0)=-\left(4.112335+8.984\ k^2+\cdots\right)\ , \quad F_{\rm 2d}(k\to \infty) =\frac{1}{2}\frac{(4\pi/3)^2}{k^8}+\cdots\ , \nonumber \\ F_{\rm 2d}(k\to 1^{\pm})=\pm\frac{\pi^2}{3}(1-\ln 2)\frac{1}{(k-1)^2}+\cdots . \quad \quad \quad \end{eqnarray} Consequently, $n_{\rm 2d}(k)$ approximates $n_{\rm r}(k)$ for $k\ll 1$ and $k\gg 1$ (far off the Fermi surface, where the screening effect is not so important), but it diverges near the Fermi surface as $\pm 1/(k-1)^2$ for $k\gtrless 1$. Conversely, this divergence is removed through the ring-diagram partial summation with the replacement $q^3\to q[q^2+q_c^2R(q,u)]$ in Eq. (\ref{c39}). For $k$ near the Fermi edge, one has \cite{Zie3} \begin{equation}\label{c42} F_{\rm r}(k\to 1^{\pm})\to \pm \frac{2\pi}{q_{\rm c}^2k^2}G\left(\frac{|k-1|}{q_{\rm c}}\right) \quad {\rm for} \quad 1\lessgtr k\lessgtr 1\pm\sqrt q_c \end{equation} with $G(x)$ being the Kulik function (\ref{B15}). Thus the discontinuity jump for $r_s\to 0$ is described by (the higher-order terms are different for outside/inside the Fermi surface) \begin{equation}\label{c43} n_{\rm r}(1^{\pm})=\pm\frac{\omega_{\rm pl}^4}{(4\pi/3)^2}\frac{2\pi}{q_{\rm c}^2}G(0)+\cdots =\pm \frac{\omega_{\rm pl}^2}{(4\pi/3)^2}\frac{2\pi}{3}G(0)+\cdots= \pm 0.088519\ r_s+\cdots\ . \end{equation} Comparison of (\ref{c41}) with (\ref{c42}) shows that the divergence at $k\to 1^\pm$ is eliminated and replaced by the non-analytical behavior of $G(x\to 0)$, see (\ref{B16}). Near the Fermi surface the distribution is ``symmetrical'' and shows a logarithmic ``snuggling'' with infinite slopes: \begin{eqnarray} n_{\rm r}(k\to 1^\pm)=n_{\rm r}(1^\pm)\pm\frac{\omega_{\rm pl}}{8}\left(\frac{\sqrt 3\pi}{4}+3\right)|k-1|\ln|k-1|+O(k-1)\ . \nonumber \end{eqnarray} $F_{\rm r}(k)$ of (\ref{c35}) realizes the smooth transition from the $k_{\rm F}(\sim 1/r_s)$-scaling ``far off'' the Fermi surface [reasonably approximated by $F_{\rm 2d}(k)$ of (\ref{c39})] to the $k_{\rm F}q_c(\sim1/\sqrt r_s)$-scaling near the Fermi surface [reasonably approximated by the Kulik-function $G(|k-1|/q_c)$, see (\ref{c42})]. 
This transition causes the non-analyticity $t_{\rm r}\sim r_s^2\ln r_s$ \cite{Zie3}. - From (\ref{c43}) follows $z_{\rm F}=1-0.177038\ r_s+\cdots$, which is in agreement with the Luttinger formula \cite{Lu}, which relates the quasi-particle weight $z_{\rm F}$ directly to the self-energy $\Sigma(k,\omega)$: \begin{equation}\label{c44} z_{\rm F}=\frac{1}{1-\Sigma'(1,\mu)}\ ,\quad \Sigma'(1,\mu)= \left.{\rm Re}\ \frac{\partial\Sigma(1,\omega)}{\partial \omega}\right|_{\omega=\mu}\ . \end{equation} Indeed, with the ring-diagram approximation (\ref{c1}) it becomes \begin{eqnarray}\label{c45} \Sigma'_{\rm r}(1,\frac{1}{2})&=& \frac{1}{8\pi}\int d^3q\int \frac{d\eta}{2\pi{\rm i}}\ \frac{v^2(\vek q)Q(q,\eta)}{1+v(\vek q)Q(q,\eta)} \frac{\partial}{\partial\eta}\displaystyle\frac{1}{\eta-{\mbox{\boldmath $q$}}\cdot(\vek e+\frac{1}{2}\mbox{\boldmath $q$})\pm{\rm i}\delta}\ , \; |\vek e +\vek q|\gtrless 1\ .\nonumber \\ \end{eqnarray} For $r_s\to 0$ it behaves as \cite{Zie0} \begin{equation}\label{c46} \Sigma'_{\rm r}(1,\frac{1}{2})= \frac{\alpha r_s}{\pi^2}\int\limits_0^\infty du\ \frac{R'_0(u)}{\sqrt {R_0(u)}}\ \arctan\frac{1}{u}+\cdots \approx -0.177038\ r_s +\cdots\ , \end{equation} in agreement with (\ref{c43}) and (\ref{B17}). (In \cite{Os}, $z_{\rm F}=1-0.12\ r_s+\cdots$ is claimed, instead of the RPA figure 0.18.) The linear behavior of $z_{\rm F}(r_s)$ is an example of how higher-order partial summation may create lower-order terms. Calculations beyond RPA with (\ref{c44}) have been done in \cite{Gel1}. Calculations of $z_{\rm F}$ for $r_s\leq 55$ have been done in \cite{Nech}. - The strength of the correlation tail, i.e. the relative number of particles [with $k>1$ and using (\ref{B19})] is \cite{Ku} \begin{equation}\label{c47} N_{\rm r}=\int\limits_1^\infty d(k^3)\ n_{\rm r}(k)=\omega_{\rm pl}^3\frac{(3/2)^{5/2}}{2\pi^2}\int\limits_0^\infty du\ \frac{R'_0(u)}{\sqrt {R_0(u)}}\ u\ln \frac{u^2}{1+u^2} +\cdots \approx 0.05383\ r_s^{3/2}+\cdots \end{equation} The $u$-integral is 1.06252. How does $n_{\rm 2x}(k)$ change the above results for $z_{\rm F}$ and $N$? \section{Second order: The exchange term $\Sigma_{\rm 2x}$ and its consequences} \setcounter{equation}{0} \noindent As already mentioned above and stressed by Geldart \cite{Gel1}, for a consistent small-$r_s$ description up to terms $\sim r_s^2\ln r_s$ and $\sim r_s^2$ the exchange terms $v_{\rm 2x}$, $e_{\rm 2x}$, $n_{\rm 2x}(k)$, $S_{\rm 1x}(q)$ are needed. Here it is shown how they arise from $\Sigma_{\rm 2x}(k,\omega)$, cf. Fig. 1d, left. \\ \noindent The self-energy in the second order of exchange is \begin{eqnarray}\label{d1} \Sigma_{2{\rm x}}(k,\omega)&=&\frac{q_c^4}{(8\pi)^2}\int\frac{d^3q_1d^3q_2}{q_1^2q_2^2}\int \frac{d\eta_1d\eta_2}{(2\pi{\rm i})^2}\times \\ &\times& G_0(|{\mbox{\boldmath $k$}}+{\mbox{\boldmath $q$}}_2|,\omega+\eta_2)G_0(|{\mbox{\boldmath $k$}}+ {\mbox{\boldmath $q$}}_1+{\mbox{\boldmath $q$}}_2|,\omega+\eta_1+\eta_2)G_0(|{\mbox{\boldmath $k$}}+{\mbox{\boldmath $q$}}_1|,\omega+\eta_1)\ . \nonumber \end{eqnarray} Use of (\ref{a9}) yields \begin{eqnarray}\label{d2} \Sigma_{2{\rm x}}(k,\omega)=-\frac{q_c^4}{(8\pi)^2}\int\frac{d^3q_1d^3q_2}{q_1^2q_2^2} &\left[\displaystyle\frac{\Theta(|{\mbox{\boldmath $k$}}+{\mbox{\boldmath $q$}}_1+{\mbox{\boldmath $q$}}_2|-1) \Theta(1-|{\mbox{\boldmath $k$} }+{\mbox{\boldmath $q$}}_1|)\Theta(1-|{\mbox{\boldmath $k$} }+{\mbox{\boldmath $q$}}_2|)}{\omega-\frac{1}{2}k^2+{\mbox{\boldmath $q$}}_1\cdot{\mbox{\boldmath $q$}}_2- {\rm i} \delta}\right. 
& \nonumber \\ &+\left.\displaystyle\frac{\Theta(1-|{\mbox{\boldmath $k$}}+{\mbox{\boldmath $q$}}_1+{\mbox{\boldmath $q$}}_2|) \Theta(|{\mbox{\boldmath $k$} }+{\mbox{\boldmath $q$}}_1|-1)\Theta(|{\mbox{\boldmath $k$} }+{\mbox{\boldmath $q$}}_2|-1)}{\omega-\frac{1}{2}k^2+{\mbox{\boldmath $q$}}_1\cdot{\mbox{\boldmath $q$}}_2+ {\rm i} \delta}\right]& , \nonumber \\ \end{eqnarray} see also \cite{Zie4}, Eq. (A.5). This together with (\ref{a9}) used in the Galitskii-Migdal formula (\ref{a11}) gives (cf. Fig. 1d, middle), after the $\omega$-integration has been performed, \begin{eqnarray}\label{d3} v_{2{\rm x}}= -\frac{3q_c^4}{(8\pi)^3}{\rm {Re}}\left[\int\limits _A\frac{d^3kd^3q_1d^3q_2}{q_1^2q_2^2} \frac{1}{{\vek q}_1\cdot{\vek q}_2+{\rm i}\delta} +\int\limits_B\frac{d^3kd^3q_1d^3q_2}{q_1^2q_2^2}\frac{1}{{\vek q}_1\cdot(-{\vek q}_2)+{\rm i}\delta}\right]\ . \end{eqnarray} It is easy to show with the help of the substitutions ${\vek q}_1\to{\vek q}_1'$, ${\vek q}_2\to -{\vek q}_2'$, ${\vek k}\to -({\vek k}'+{\vek q}_1')$ that the second term equals the first one. The virial theorem $v_{2{\rm x}}=2e_{2{\rm x}}$ gives the 2nd-order exchange energy $e_{\rm 2x}$ to be compared with the 2nd-order direct energy $e_{\rm 2d}$ [see Eq. (\ref{c20})]: \begin{eqnarray}\label{d4} e_{2{\rm x}}=- \frac{3q_c^4}{(8\pi)^3}\int\limits_{A}\frac{d^3kd^3q_1d^3q_2}{q_1^2q_2^2}\frac{P}{{\vek q}_1\cdot {\vek q}_2}\ , \quad e_{2{\rm d}}=+2 \frac{3q_c^4}{(8\pi)^3}\int\limits_{A}\frac{d^3kd^3q_1d^3q_2}{q_1^2q_1^2}\frac{P}{{\vek q}_1\cdot {\vek q}_2}\ . \end{eqnarray} ${P}$ denotes the Cauchy principal value. Note the replacement $1/q_1^2\to 1/q_2^2$ and the additional factor $-1/2$ when going from the direct term $e_{\rm 2d}$ to the corresponding exchange term $e_{\rm 2x}$. Having $e_{\rm 2x}$ available, the functions $n_{\rm 2x}(k)$ and $S_{\rm 1x}(q)$ follow by means of the functional derivatives (\ref{a12}). \\ \noindent Combining Eq. (\ref{a12}) with (\ref{d4}) yields $n_{\rm 2x}(k)=\frac{\omega_{\rm pl}^4}{(4\pi/3)^2}F_{\rm 2x}(k)$ with \begin{equation}\label{d5} F_{\rm 2x}(k\gtrless 1)=\mp\frac{1}{4}\int\frac{d^3q_1d^3q_2}{q_1^2q_2^2}\frac{1}{({\vek q}_1\cdot {\vek q}_2)^2}, \quad {\rm for} \quad |{\vek k}+{\vek q}_1+{\vek q}_2|\gtrless 1, \quad |{\vek k}+{\vek q}_{1,2}|\lessgtr 1\ . \end{equation} For comparison, the same procedure with $e_{\rm 2d}$ yields $n_{\rm 2d}(k)=\frac{\omega_{\rm pl}^4}{(4\pi/3)^2}F_{\rm 2d}(k)$ with \begin{equation}\label{d6} F_{\rm 2d}(k\gtrless 1)=\pm\frac{1}{2}\int\frac{d^3q_1d^3q_2}{q_1^2q_1^2}\frac{1}{({\vek q}_1\cdot {\vek q}_2)^2}\quad {\rm for} \quad |{\vek k}+{\vek q}_1+{\vek q}_2|\gtrless 1, \quad |{\vek k}+{\vek q}_{1,2}|\lessgtr 1\ . \end{equation} It follows from (\ref{C17}), (\ref{C18}) that (\ref{d6}) is equivalent to what arises from (\ref{c5}) for r$\to$2d (`descreening'). Whereas the direct term drives the electrons outside the Fermi surface and decreases the quasi-particle weight $z_{\rm F}$, the exchange process draws them back inside the Fermi surface and increases $z_{\rm F}$ \cite{Gel1}. Again the boundary conditions enforce $q_{1,2}\to \infty$ for $k\to \infty$, so the integral becomes simply $(4\pi/3)^2/k^8$. Hence $n_{\rm 2d}(k\to\infty)\to +2\omega_{\rm pl}^4/(4k^8)$ and $n_{\rm 2x}(k\to\infty)\to-\omega_{\rm pl}^4/(4k^8)$ $\curvearrowright$ $[n_{\rm r}(k)+n_{\rm 2x}(k)]_{k\to\infty}\to+\omega_{\rm pl}^4/(4k^8)$. Comparison with (\ref{a8}) shows $g(0)\approx g_0(0)=1/2$. 
$F_{\rm 2d,2x}(0)$ are given in (\ref{C22}), (\ref{C23}); integral properties of $F_{\rm 2x}(k)$ are given in (\ref{C24}), (\ref{C25}). As concerns $N_{\rm 2x}$, one should expect $N_{\rm 2x}\approx -\frac{1}{2}N_{\rm r}$, because of $n_{\rm 2x}(k\to \infty)=-\frac{1}{2}n_{\rm r}(k\to\infty)$. Thus $N\approx N_{\rm r}/2+\cdots\approx 0.02691\ r_s^{3/2}+\cdots$. \\ \noindent Combining Eqs. (\ref{a12}) and (\ref{d4}), the 1st-order exchange term of $S(q)$ is (Fig. 1d, right) \begin{equation}\label{d7} S_{\rm 1x}(q)=+\left(\frac{\omega_{\rm pl}}{4\pi/3}\right)^2\ \frac{I_{\rm x}(q)}{q^2}\ , \quad \frac{I_{\rm x}(q)}{q^2}=-\int\limits_{A} \frac{d^3kd^3q_2}{q_2^2} \left .\frac{P}{{\vek q}_1\cdot{\vek q}_2}\right|_{{\vek q}_1\to \vek q}\ . \end{equation} For comparison with $S_{\rm 1d}(q)$ the Macke function $I(q)$ in Eq. (\ref{c29}) is rewritten with (\ref{C11}): \begin{equation}\label{d8} S_{\rm 1d}(q)=-2\left(\frac{\omega_{\rm pl}}{4\pi/3}\right)^2\frac{I(q)}{q^2}\ , \quad \frac{I(q)}{q^2}=-\int\limits_{A} \frac{d^3kd^3q_2}{q_1^2} \left .\frac{P}{{\vek q}_1\cdot{\vek q}_2}\right|_{{\vek q}_1\to \vek q}\ . \end{equation} They have the asymptotics $S_{\rm 1d,r}(q\to \infty)\to -2\omega_{\rm pl}^2/q^4$ and $S_{\rm 1x}(q\to \infty)\to +\omega_{\rm pl}^2/q^4$ $\curvearrowright$ [$S_{\rm r}(q)+S_{\rm 1x}(q)]_{q\to\infty}=-\omega_{\rm pl}^2/q^4$ \cite{Hol}. Thus the x-term again compensates half of the direct term. Comparison with (\ref{a8}) shows $g(0)\approx g_0(0)=1/2$, as it should up to this order. How does $S_{\rm 1x}(q)$ influence the non-analyticity of $S(q)$ at $q=2$ and the behavior of $S(q)$ at the origin $q=0$? \section{Pair density and short-range correlation} \setcounter{equation}{0} \noindent The SSF $S(q)$ and the PD $g(r)$ are mutually related through the Fourier transforms \begin{equation}\label{e1} 1-g(r)=\frac{1}{2}\int\limits_0^\infty d(q^3)\ \frac{\sin qr}{qr}[1-S(q)]\ ,\quad 1-S(q)=\alpha^3\int\limits_0^\infty d(r^3)\ \frac{\sin qr}{qr}\ [1-g(r)]\ . \end{equation} $S(0)=0$ expresses the perfect screening sum rule: the normalisation of $1-g(r)$ is $9\pi/4$. In addition, the plasmon sum rule states $S(q\to 0)=q^2/(2\omega_{\rm pl})+\cdots$. - For $r_s=0$ (ideal Fermi gas) one has [see also Eq. (\ref{b3})] \begin{eqnarray}\label{e2} S_0(q\leq 2)=\frac{3}{2}\frac{q}{2}-\frac{1}{2}\left(\frac{q}{2}\right)^3, \ S_0(q\geq 2)=1\ \curvearrowright\ g_0(r)=1-\frac{9}{2}\left(\frac{\sin r-r\cos r}{r^3}\right)^2 \leq 1\ . \end{eqnarray} This yields the lowest-order potential energy $v_{\rm x}=-(3/16)\ q_c^2=-(3/4)^2\ \omega_{\rm pl}^2$. The non-analyticity of $S_0(q)$ at $q=2$, namely the 2nd-order-derivative jump $\Delta S''_0(2)=3/4$, causes - Fourier transformed - the non-interacting Friedel oscillations of $g_0(r)$. They are tiny: the 1st minimum at $r\approx 5.76$ is $1-0.0037$. \\ \noindent Short-range correlation concerns the behavior of the PD $g(r)$ for $r\ll 1/q_c$, to which belong also (i) the coalescing cusp and curvature theorems \cite{Kim1,Kim4} and (ii) the influence of the on-top PD $g(0)$ on the large-wave-number asymptotics of $n(k)$ and $S(q)$, Eq. (\ref{a8}). \\ \noindent The relations (\ref{e1}) between the SSF $S(q)$ and the PD $g(r)$ imply that the on-top value $g(0)$ follows from the normalization (1.7) of $1-S(q)$. The expansion $S(q)=S_0(q)+S_{\rm r}(q)+S_{\rm 1x}(q)+\cdots$ creates corresponding on-top terms $g_i(0)$ with $i=0,{\rm r},{\rm 1x},\cdots$. This series starts with $g_0(0)=1/2$ according to (\ref{e2}). 
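\\ \noindent As a small numerical illustration (a sketch only, relying on numpy/scipy), the first relation of (\ref{e1}) applied to $S_0(q)$ of (\ref{e2}) indeed reproduces the closed form $g_0(r)$, in particular the on-top value $g_0(0)=1/2$ and the tiny first Friedel minimum $g_0(5.76)\approx 1-0.0037$:
\begin{verbatim}
# Sketch: Fourier check of Eq. (e1) for the ideal Fermi gas,
# 1 - g0(r) = (1/2) Int_0^oo d(q^3) [sin(qr)/(qr)] [1 - S0(q)],  d(q^3) = 3 q^2 dq,
# against g0(r) = 1 - (9/2) [(sin r - r cos r)/r^3]^2 of Eq. (e2).
import numpy as np
from scipy.integrate import quad

def S0(q):
    return 1.5*(q/2.0) - 0.5*(q/2.0)**3 if q <= 2.0 else 1.0

def g0_from_S0(r):
    f = lambda q: 1.5*q**2*np.sinc(q*r/np.pi)*(1.0 - S0(q))  # sinc(x) = sin(pi x)/(pi x)
    val, _ = quad(f, 0.0, 2.0, limit=200)                     # integrand vanishes for q > 2
    return 1.0 - val

def g0_exact(r):
    return 1.0 - 4.5*((np.sin(r) - r*np.cos(r))/r**3)**2

for r in (1e-6, 2.0, 5.76):
    print(r, g0_from_S0(r), g0_exact(r))  # ~0.5 for r -> 0, ~0.9963 at the first minimum
\end{verbatim}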
The contribution of $S_{\rm r}(q)$ consists of two parts: a term arising from $S_{\rm 1d}(q)$, linear in $r_s$, and a term arising from $S_{\rm 2r}(q)$, logarithmically non-analytic. The first-order direct term is \begin{equation}\label{e3} g_{\rm 1d}(0)= \frac{1}{2}\int\limits_0^\infty d(q^3)\ S_{\rm 1d}(q)= -\left(\frac{\omega_{\rm pl}}{4\pi/3}\right)^2\int\limits_0^\infty \frac{d(q^3)}{q^2}I(q) = -2(\pi^2+6\ln 2-3)\frac{\alpha r_s}{5\pi}\approx -0.73167\ r_s\ , \end{equation} where (\ref{C11}) is used. The factor 2 is cancelled by $g_{\rm 1x}(0)=-\frac{1}{2}g_{\rm 1d}(0)$. So, $g_1(0)=g_{\rm 1d}(0)+g_{\rm 1x}(0)=\frac{1}{2}g_{\rm 1d}(0)\approx -0.3658\ r_s$ \cite{Gel2,Kim3}. Exactly this high-density behavior of $g(0)$ results also from the ladder theory as a method to treat short-range correlation \cite{Cio1,Qi}. The second term $\sim r_s^2\ln r_s$ follows from \begin{eqnarray}\label{e4} S_{\rm 2r}(q)&=&S_{\rm r}(q)-S_{\rm 1d}(q)=\frac{3q_c^4}{\pi}\int\limits_0^\infty du\ \frac{1}{q}\cdot\frac{R^3(q,u)}{q^2+q_c^2R(q,u)} \quad \curvearrowright \\ g_{\rm 2r}(0)=\frac{1}{2}\int\limits_0^\infty d(q^3)\ S_{\rm 2r}(q) &=&\frac{9q_c^4}{2\pi}\int\limits_0^\infty du\int\limits_0^\infty qdq\ \frac{R^3(q,u)}{q^2+q_c^2R(q,u)}= -2\left(3-\frac{\pi^2}{4}\right)\left(\frac{3\alpha r_s}{2\pi}\right)^2\ln r_s+\cdots, \nonumber \end{eqnarray} where $\int\limits_0^\infty du\ R_0^3(u)=\frac{\pi}{8}(3-\frac{\pi^2}{4})$ is used. Again part ($\approx$ half?) of $g_{\rm 2r}(0)$ is compensated by a corresponding exchange term. Thus, with $x\approx 1/2$, one has \cite{Kim3} \begin{eqnarray}\label{e5} g(0)&=&\frac{1}{2}-(\pi^2+6\ln 2 -3)\frac{\alpha}{5\pi}\ r_s-x\ 2\left(3-\frac{\pi^2}{4}\right)\left(\frac{3\alpha}{2\pi}\right)^2 r_s^2 \ln r_s\ +\cdots \nonumber \\ &=&\frac{1}{2}-0.3658\ r_s-x\ 0.032966\ r_s^2\ln r_s+\cdots\ . \end{eqnarray} The decrease of $g(0)$ with increasing $r_s$ reflects the increase of the area between 1 and $S(q)$, the amount of which is $1-g(0)= 1/2+0.3658\ r_s+\cdots$. \section{Summary} \setcounter{equation}{0} \noindent Following the procedure of Fig. 2, it is shown for the ground state of the high-density electron gas (as an example) how the static 2-body quantity $S(q)$, the static structure factor (SSF), follows from the dynamic 1-body quantity $\Sigma(k,\omega)$, the Dyson self-energy, using rigorous theorems such as the Migdal formula, the Galitskii-Migdal formula, the generalized Hellmann-Feynman theorem, and the virial theorem. Along this way all the static RPA results are thoroughly revisited and summarized on a unified footing: the energy $e$ [Eq. (\ref{c16})] and its components $t$ and $v$, the momentum distribution $n(k)$ [Eq. (3.19)], its behavior for $k\to 0$, $k\to \infty$, and at $k\to 1^\pm$ with the discontinuity $z_{\rm F}$, the SSF $S(q)$ [Eq. (3.18)], its behavior at $q\to 0$, and the on-top pair density $g(0)$. Several identities were found, e.g. the relation between the SSF $S(q)$ and the Macke function $I(q)$. $S(q)$ and $n(k)$, stemming from the 2-body density matrix, are mutually linked through the self-energy $\Sigma(k,\omega)$, see Fig. 2. An exercise would be to perform the ``inverse'' procedure $S(q)\to v\to e\to n(k)$. Problems not solved so far: to have all terms up to $\sim r_s^2$ and $r_s^2\ln r_s$ exactly available, the functions $I_{\rm x}(q)$ and $F_{\rm 2x}(k)$ of Eqs. (\ref{C19}) and (\ref{d5}), respectively, have to be calculated. $F_{\rm 2x}(k)$ has to be renormalized. Besides, the exchange term which compensates part of the direct term $g_{\rm 2r}(0)$, Eq. 
(\ref{e4}), has to be specified and calculated. \section*{Acknowledgments} \noindent The author is grateful to P. Gori-Giorgi, K. Morawetz, U. Saalmann for discussions and hints and acknowledges P. Fulde for supporting this work and thanks Th. M\"uller for technical help.
\section{Introduction} Consider $n$ independent and identically distributed random variables $X_1,\dots,X_n$ defined on an abstract probability space $(\Omega, \mathcal{E},\P)$ with values in the measure space $(\mathbb{X},\mathcal{F},\mu)$. We suppose that the distribution of $X_i$ admits a density~$s$ with respect to~$\mu$ and aim at estimating~$s$ by using a parametric approach. When the unknown density $s$ is assumed to belong to a parametric model $\mathscr{F} = \{f_{\theta}, \, \theta \in \Theta \}$ of densities, a traditional method to estimate $s = f_{\theta_0}$ is the maximum likelihood one. It is indeed well known that the maximum likelihood estimator (m.l.e for short) possesses nice statistical properties such as consistency and asymptotic efficiency when the model~$\mathscr{F}$ is regular enough. However, it is also well known that this estimator breaks down for many models~$\mathscr{F}$ of interest and counterexamples may be found in~\cite{Pitman1979, Ferguson1982, Lecam1990mle, BirgeTEstimateurs} among other references. Another drawback of the m.l.e lies in the fact that it is not robust. This means that if $s$ lies in a small neighbourhood of the model~$\mathscr{F}$ but not in it, the m.l.e may perform poorly. Several kinds of robust estimators have been suggested in the literature to overcome this issue. We can cite the well-known $L$ and $M$ estimators (which include the class of minimum divergence estimators of~\cite{Basu1998divergence}) and the class of estimators built from a preliminary non-parametric estimator (such as the minimum Hellinger distance estimators introduced in~\cite{Beran1977} and the related estimators of~\cite{Lindsay1994,Basu1994}). In this paper, we focus on estimators built from robust tests. This approach, which began in the 1970s with the works of Lucien Le Cam and Lucien Birgé (\cite{LeCam1973,LeCam1975,Birge1983, Birge1984, Birge1984a}), has the nice theoretical property of yielding robust estimators under weak assumptions on the model~$\mathscr{F}$. A key modern reference on this topic is~\cite{BirgeTEstimateurs}. The recent papers~\cite{BirgeGaussien2002,BirgePoisson,Birge2012,BirgeDens,BaraudBirgeHistogramme,BaraudMesure,Baraud2012,SartMarkov,Sart2012} show that increasing attention is being paid to this kind of estimator. Their main interest is to provide general theoretical results in various statistical settings (such as general model selection theorems) which are usually unattainable by the traditional procedures (such as those based on the minimization of a penalized contrast). For our statistical issue, the procedures using tests are based on the pairwise comparison of the elements of a thin discretisation $\mathscr{F}_{\text{dis}}$ of $\mathscr{F}$, that is, a finite or countable subset~$\mathscr{F}_{\text{dis}}$ of $\mathscr{F}$ such that for every function $f \in \mathscr{F}$, the distance between $f$ and $\mathscr{F}_{\text{dis}}$ is small (in a suitable sense). As a result, their complexities are of the order of the square of the cardinality of~$\mathscr{F}_{\text{dis}}$. Unfortunately, this cardinality is often very large, making the construction of the estimators difficult in practice. The aim of this paper is to develop a faster way of using tests to build an estimator when the cardinality of~$\mathscr{F}_{\text{dis}}$ is large. From a theoretical point of view, the estimator we propose possesses statistical properties similar to those proved in~\cite{BirgeTEstimateurs, BaraudMesure}. 
Under mild assumptions on~$\mathscr{F}$, we build an estimator $\hat{s} = f_{\hat{\theta}} $ of $s$ such that \begin{eqnarray} \label{RelIntro} \P \left[C h^2 (s, f_{\hat{\theta}} ) \geq \inf_{\theta \in \Theta} h^2 (s, f_{\theta}) + \frac{d}{n} + \xi \right] \leq e^{-n \xi} \quad \text{for all $\xi > 0$,} \end{eqnarray} where $C$ is a positive number depending on $\mathscr{F}$, $h$ is the Hellinger distance and $d$ is such that $\Theta \subset \mathbb{R}^d$. We recall that the Hellinger distance is defined on the cone $\L^1_+ (\mathbb{X}, \mu)$ of non-negative integrable functions on $\mathbb{X}$ with respect to $\mu$ by $$h^2(f,g) = \frac{1}{2} \int_{\mathbb{X}} \left(\sqrt{f (x)} - \sqrt{g(x)} \right)^2 \d \mu (x) \quad \text{for all $f,g \in \L^1_+ (\mathbb{X}, \mu)$.}$$ Let us make some comments on (\ref{RelIntro}). When $s$ does belong to the model~$\mathscr{F}$, the estimator achieves a quadratic risk of order~$n^{-1}$ with respect to the Hellinger distance. Besides, there exists $\theta_0 \in \Theta$ such that $s = f_{\theta_0}$ and we may then derive from~{(\ref{RelIntro})} the rate of convergence of~$\hat{\theta}$ to~$\theta_0$. In general, we do not suppose that the unknown density belongs to the model but rather use~$\mathscr{F}$ as an approximate class (sieve) for $s$. Inequality (\ref{RelIntro}) then shows that the estimator $\hat{s} = f_{\hat{\theta}}$ cannot be strongly influenced by small departures from the model. As a matter of fact, if $\inf_{\theta \in \Theta} h^2 (s, f_{\theta}) \leq n^{-1}$, which means that the model is slightly misspecified, the quadratic risk of the estimator $\hat{s} = f_{\hat{\theta}}$ remains of order~$n^{-1}$. This can be interpreted as a robustness property. The preceding inequality (\ref{RelIntro}) is interesting because it proves that our estimator is robust and converges at the right rate when the model is correct. However, the constant~$C$ depends on several parameters of the model, such as the size of $\Theta$. It is thus far from obvious that such an estimator can be competitive against more traditional estimators (such as the m.l.e). In this paper, we try to give a partial answer to this question by carrying out numerical simulations. When a very thin discretisation~$\mathscr{F}_{\text{dis}}$ is used, the simulations show that our estimator is very close to the m.l.e when the model is regular enough and contains~$s$. More precisely, the larger the number of observations $n$, the closer they are, suggesting that our estimator inherits the efficiency of the m.l.e. Of course, this does not in itself constitute a proof, but it indicates what kind of results can be expected. A theoretical connection between estimators built from tests (with the procedure described in~\cite{BaraudMesure}) and the m.l.e will be found in a future paper of Yannick Baraud and Lucien Birgé. In the present paper, we consider the problem of estimation on a single model. Nevertheless, when the statistician has several candidate models for $s$ at hand, a natural issue is model selection. In order to address it, one may associate to each of these models the estimator resulting from our procedure and then select among those estimators by means of the procedure of~\cite{BaraudMesure}. By combining Theorem~2 of that paper with our risk bounds on each individual estimator, we obtain that the selected estimator satisfies an oracle-type inequality. We organize this paper as follows. We begin with a glimpse of the results in Section~2. 
We then present a procedure and its associated theoretical results to deal with models parametrized by a unidimensional parameter in Section~3. We evaluate its performance in practice by carrying out numerical simulations in Section~4. We work with models parametrized by a multidimensional parameter in Sections~5 and~6. The proofs are postponed to Section~6. Some technical results about the practical implementation of our procedure devoted to the multidimensional models are deferred to Section~7. Let us introduce some notations that will be used throughout the paper. The number $x \vee y$ (respectively $x \wedge y$) stands for $\max(x,y)$ (respectively $\min(x,y)$) and $x_+$ stands for $x \vee 0$. We set $\mathbb{N}^{\star} = \mathbb{N} \setminus \{0\}$. The vector $(\theta_1,\dots,\theta_d)$ of $\mathbb{R}^d$ is denoted by the bold letter $\boldsymbol{\theta}$. Given a set of densities $\mathscr{F} = \{f_{\boldsymbol{\theta}}, \, \boldsymbol{\theta} \in \Theta\}$, for all $A \subset \Theta$, the notation $\text{diam} A$ stands for $\sup_{\boldsymbol{\theta}, \boldsymbol{\theta}' \in A} h^2 (f_{\boldsymbol{\theta}},f_{\boldsymbol{\theta}'})$. The cardinality of a finite set $A$ is denoted by $|A|$. For $(E,d)$ a metric space, $x \in E$ and $A \subset E$, the distance between $x$ and~$A$ is denoted by $d(x,A)= \inf_{a \in A} d(x,a)$. The indicator function of a subset $A$ is denoted by~$\mathbbm{1}_A$. The notations $C$,$C'$,$C''$\dots stand for constants that may change from line to line. \section{An overview of the paper} \subsection{Assumption.} In this paper, we shall deal with sets of densities $\mathscr{F} = \left\{f_{\boldsymbol{\theta}}, \; \boldsymbol{\theta} \in \Theta \right\}$ indexed by a rectangle $$\Theta = \prod_{j=1}^d [m_j, M_j]$$ of $\mathbb{R}^d$. Such a set will be called a model. From now on, we consider models satisfying the following assumption. \begin{hyp} \label{HypSurLeModeleQuelquonqueDebutDimD} There exist positive numbers $\alpha_1,\dots,\alpha_d$, $\underline{R}_1,\dots,\underline{R}_d$, $\widebar{R}_1,\dots,\widebar{R}_d$ such that for all $\boldsymbol{\theta} = (\theta_1,\dots,\theta_d)$, $\boldsymbol{\theta}' = (\theta_1',\dots,\theta_d') \in \Theta = \prod_{j=1}^d [m_j, M_j]$ \begin{eqnarray*} \sup_{j \in \{1,\dots,d\}} \underline{R}_j |\theta_j - \theta'_j|^{\alpha_j} \leq h^2 \left(f_{\boldsymbol{\theta}}, f_{\boldsymbol{\theta}'} \right) \leq \sup_{j \in \{1,\dots,d\}} \widebar{R}_j |\theta_j - \theta'_j|^{\alpha_j}. \end{eqnarray*} \end{hyp} This assumption allows one to connect a (quasi) distance between the parameters to the Hellinger distance between the corresponding densities. A similar assumption may be found in Theorem~5.8 of Chapter~1 of~\cite{Ibragimov1981} to prove results on the maximum likelihood estimator. These authors require, however, that the map $\theta \mapsto f_{\theta} (x)$ be continuous for $\mu$-almost all~$x$ to ensure the existence and the consistency of the m.l.e. Without this additional assumption, the m.l.e may not exist, as shown by the translation model \begin{eqnarray*} \mathscr{F} = \left\{f_{\theta}, \, \theta \in [-1, 1]\right\} \quad \text{where} \quad f_{\theta} (x) = \begin{cases} \frac{1}{ 4 \sqrt{|x-\theta|}} \mathbbm{1}_{[-1,1]} (x-\theta) & \text{for all $x \in \mathbb{R} \setminus \{\theta\}$ } \\ 0 & \text{for $x = \theta$} \end{cases} \end{eqnarray*} for which Assumption~\ref{HypSurLeModeleQuelquonqueDebutDimD} holds with $\alpha_1 = 1/2$. 
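To illustrate this exponent numerically, one may check that $h^2(f_0,f_\theta)$ indeed behaves like $|\theta|^{1/2}$ for small $\theta$. The following Python sketch (scipy is assumed; the integrable singularities at $0$ and $\theta$ are passed to the quadrature routine as break points) computes the Hellinger affinity $\int \sqrt{f_0 f_\theta}\, \d\mu$ and displays the ratio $h^2(f_0,f_\theta)/\sqrt{\theta}$, which stabilises as $\theta$ decreases. This is only an illustration; no such computation is required by the procedure.
\begin{verbatim}
# Sketch: h^2(f_0, f_theta) ~ |theta|^(1/2) for the singular translation model
# f_theta(x) = 1/(4*sqrt(|x - theta|)) on [theta - 1, theta + 1].
import numpy as np
from scipy.integrate import quad

def hellinger_sq(theta):
    a, b = theta - 1.0, 1.0          # overlap of [-1,1] and [theta-1, theta+1], 0 < theta < 2
    integrand = lambda x: 0.25/(abs(x)*abs(x - theta))**0.25
    affinity, _ = quad(integrand, a, b, points=[0.0, theta], limit=400)
    return 1.0 - affinity            # h^2 = 1 - int sqrt(f_0 f_theta)

for theta in (0.1, 0.01, 0.001):
    h2 = hellinger_sq(theta)
    print(theta, h2, h2/np.sqrt(theta))   # last column is roughly constant
\end{verbatim}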
Under suitable regularity conditions on the model, Theorem~7.6 of Chapter~1 of \cite{Ibragimov1981} shows that this assumption is fulfilled with $\alpha_1 = \cdots = \alpha_d = 2$. Other kinds of sufficient conditions implying Assumption~\ref{HypSurLeModeleQuelquonqueDebutDimD} may be found in this book (see the beginning of Chapter~5 and Theorem~1.1 of Chapter 6). Other examples and counter-examples are given in Chapter~7 of~\cite{DacunhaCastelle}. Several models of interest satisfying this assumption will appear later in the paper. \subsection{Risk bound.} In this paper, the risk bound we get for our estimator $\hat{s}$ is similar to the one we would get with the procedures of~\cite{BirgeTEstimateurs, BaraudMesure}. More precisely: \begin{thm} \label{ThmPrincipalDimQuelquonqueDansOverview} Suppose that Assumption~\ref{HypSurLeModeleQuelquonqueDebutDimD} holds. We can build an estimator~$\hat{s}$ of the form $\hat{s} = f_{\boldsymbol{\hat{\theta}}}$ such that for all $\xi > 0$, \begin{eqnarray} \label{eqRiskBoundThmGen} \P \left[ C h^2(s,f_{\boldsymbol{\hat{\theta}}}) \geq h^2(s, \mathscr{F}) + \frac{d}{n} + \xi \right] \leq e^{- n \xi} \end{eqnarray} where $C > 0$ depends on $\sup_{1 \leq j \leq d}\widebar{R}_j/\underline{R}_j$ and $\min_{1 \leq j \leq d} \alpha_j$. \end{thm} We deduce from this risk bound that if $s = f_{\boldsymbol{\theta}_0}$ belongs to the model $\mathscr{F}$, the estimator~${\boldsymbol{\hat{\theta}}}$ converges to $\boldsymbol{\theta}_0$ and the random variable~$h^2(s,f_{{\boldsymbol{\hat{\theta}}}})$ is of order $n^{-1}$. Besides, we may then derive from Assumption~\ref{HypSurLeModeleQuelquonqueDebutDimD} that there exist positive numbers $a$, $b_j$ such that $$ \P \left[n^{1/\alpha_j} \big|\hat{\theta}_j - \theta_{0,j} \big| \geq \xi \right] \leq a e^{-b_j \xi^{\alpha_j}} \quad \text{for all $j \in \{1,\dots, d\}$ and $\xi > 0$.}$$ Precisely, $a = e^{d}$ and $b_j = C \underline{R}_j$. We emphasize here that this exponential inequality on $\hat{\theta}_j$ is non-asymptotic but that the constants $a$, $b_j$ are unfortunately far from optimal. As explained in the introduction, there is no assumption on the true underlying density $s$, which means that the model $\mathscr{F}$ may be misspecified. In particular, when the squared Hellinger distance between the unknown density and the model $\mathscr{F}$ is of order~$n^{-1}$, the random variable~$h^2(s,f_{{\boldsymbol{\hat{\theta}}}})$ remains of order $n^{-1}$. This shows that the estimator $\hat{s}$ possesses robustness properties. \subsection{Numerical complexity.} The main interest of our procedures with respect to those of~\cite{BirgeTEstimateurs, BaraudMesure} lies in their numerical complexity. More precisely, we shall prove the proposition below. \begin{prop} \label{PropCalculComplexiteDimenQuelquonque} Under Assumption~\ref{HypSurLeModeleQuelquonqueDebutDimD}, we can build an estimator $\hat{s}$ satisfying~{(\ref{eqRiskBoundThmGen})} in less than $$4 n C^{d/\boldsymbol{\bar{\alpha}}} \left[\prod_{j=1}^d \left(1 + \left( {\widebar{R}_j}/{ \underline{R}_j}\right)^{1/\alpha_j} \right) \right] \left[\sum_{j=1}^d \max \left\{1, \log \left( (n \widebar{R}_j/d)^{1/\alpha_j} (M_j-m_j) \right) \right\} \right]$$ operations. 
In the above inequality, $C$ is a constant larger than $1$ (independent of $n$ and the model~$\mathscr{F}$) and~$\boldsymbol{\bar{\alpha}}$ stands for the harmonic mean of $\boldsymbol{\alpha}$, that is $$\frac{1}{\boldsymbol{\bar{\alpha}}} = \frac{1}{d} \sum_{j=1}^d \frac{1}{\alpha_j}.$$ \end{prop} If we are interested in the complexity when $n$ is large, we may deduce that this upper-bound is asymptotically equivalent to $C' n \log n$ where $$ C' = 4 ({d}/{ \boldsymbol{\bar{\alpha}}}) C^{d/ \boldsymbol{\bar{\alpha}}} \prod_{j=1}^d \left(1 + \left({\widebar{R}_j}/{ \underline{R}_j}\right)^{1/\alpha_j} \right).$$ This constant is of reasonable size when $d$, $1/{ \boldsymbol{\bar{\alpha}}}$, $(\widebar{R}_j/\underline{R}_j)^{1/\alpha_j}$ are not too large. \paragraph{Remark.} The constant $C'$ does not depend only on the model but also on its parametrisation. As a matter of fact, in the uniform model $$\mathscr{F} = \left\{f_{\theta}, \, \theta \in [m_1, M_1]\right\} \quad \text{where} \quad f_{\theta} = \theta^{-1} \mathbbm{1}_{[0,\theta]} $$ we can compute the Hellinger distance explicitly $$h^2(f_{\theta}, f_{\theta'}) = \frac{|\theta' - \theta|}{(\sqrt{\theta} + \sqrt{\theta'} ) \sqrt{\max \left(\theta,\theta' \right) }}$$ and bound it from above and from below by $$\frac{1}{2 M_1} |\theta' - \theta| \leq h^2(f_{\theta}, f_{\theta'}) \leq \frac{1}{2 m_1} |\theta'- \theta|.$$ Now, if we parametrise $\mathscr{F}$ as $$\mathscr{F} = \left\{f_{e^{t}}, \, t \in [\log m_1, \log M_1]\right\},$$ then the Hellinger distance becomes $h^2(f_{e^t}, f_{e^{t'}}) = 1 - e^{- |t'-t|/2}$, and we can bound it from above and from below by $$\frac{1 - \sqrt{m_1/M_1}}{\log (M_1/m_1)} |t' - t| \leq h^2(f_{e^t}, f_{e^{t'}}) \leq \frac{1}{2} |t' - t|.$$ Assumption~\ref{HypSurLeModeleQuelquonqueDebutDimD} is satisfied in both cases but with different values of $\underline{R}_1$ and $\widebar{R}_1$. When $M_1/m_1$ is large, the second parametrisation is much more interesting since it leads to a smaller constant~$C'$. \section{Models parametrized by a unidimensional parameter} \label{SectionEstimationDim1} We now describe our procedure when the parametric model $\mathscr{F}$ is indexed by an interval $\Theta = [m_1,M_1]$ of $\mathbb{R}$. Throughout this section, Assumption~\ref{HypSurLeModeleQuelquonqueDebutDimD} is supposed to be fulfilled. For the sake of simplicity, the subscripts of $m_1, M_1$ and $\alpha_1$ are omitted. \subsection{Basic ideas.} \label{SectionHeuristique} We begin by detailing the heuristics on which our procedure is based. We assume in this section that $s$ belongs to the model $\mathscr{F}$, that is, there exists $\theta_0 \in \Theta = [m, M]$ such that $s = f_{\theta_0}$. The starting point is the existence for all $\theta, \theta' \in \Theta$ of a measurable function $T(\theta, \theta')$ of the observations $X_1,\dots,X_n$ such that \begin{enumerate} [1.] \item For all $\theta, \theta' \in\Theta $, $T(\theta, \theta') = - T (\theta' , \theta) $. \item There exists $\kappa > 0$ such that if $\mathbb{E} \left[T (\theta,\theta') \right]$ is non-negative, then \begin{eqnarray*} h^2 (s, f_{\theta}) > {\kappa} h^2 (f_{\theta},f_{\theta'}). \end{eqnarray*} \item For all $\theta,\theta' \in\Theta $, $T(\theta , \theta') $ and $\mathbb{E} \left[T (\theta, \theta') \right]$ are close (in a suitable sense). 
\end{enumerate} For all $\theta \in \Theta$, $r > 0$, let $\mathcal{B} (\theta, r)$ be the Hellinger ball centered at $\theta$ with radius $r$, that is \begin{eqnarray} \label{eqDefinitionBouleHel} \mathcal{B}(\theta, r) = \left\{\theta' \in \Theta, \, h (f_{\theta}, f_{\theta'}) \leq r \right\}. \end{eqnarray} For all $\theta, \theta' \in \Theta$, we deduce from the first point that either $T ({\theta}, {\theta'})$ is non-negative, or $T ({\theta'}, {\theta})$ is non-negative. In view of points 2 and 3, it is then likely that in the first case $$\theta_0 \in \Theta \setminus \mathcal{B} \big(\theta, \kappa^{1/2} h (f_{\theta}, f_{\theta'}) \big)$$ while in the second case $$\theta_0 \in \Theta \setminus \mathcal{B} \big(\theta', \kappa^{1/2} h (f_{\theta}, f_{\theta'}) \big).$$ These sets may be interpreted as confidence sets for $\theta_0$. The main idea is to build a decreasing sequence (in the sense of inclusion) of intervals $(\Theta_i)_i$. Set $\theta^{(1)} = m$, $\theta'^{(1)} = M$, and $\Theta_1 = [\theta^{(1)}, \theta'^{(1)}]$ (which is merely $\Theta$). If $T ({\theta^{(1)}}, {\theta'^{(1)}} )$ is non-negative, we consider a set $\Theta_2$ such that $$\Theta_1 \setminus \mathcal{B} \left(\theta^{(1)}, \kappa^{1/2} h (f_{\theta^{(1)}}, f_{\theta'^{(1)}}) \right) \subset \Theta_2 \subset \Theta_1$$ while if $T ({\theta^{(1)}}, {\theta'^{(1)}} )$ is non-positive, we consider a set $\Theta_2$ such that $$ \Theta_1 \setminus \mathcal{B} \left(\theta'^{(1)}, \kappa^{1/2} h (f_{\theta^{(1)}}, f_{\theta'^{(1)}}) \right) \subset \Theta_2 \subset \Theta_1. $$ The set $\Theta_2$ may thus also be interpreted as a confidence set for $\theta_0$. Thanks to Assumption~\ref{HypSurLeModeleQuelquonqueDebutDimD}, we can define $\Theta_2$ as an interval $\Theta_2 = [\theta^{(2)}, \theta'^{(2)}]$. We then repeat the idea to build an interval $\Theta_3 = [\theta^{(3)}, \theta'^{(3)}]$ included in $\Theta_2$ and satisfying either $$\Theta_3 \supset \Theta_2 \setminus \mathcal{B} \left(\theta^{(2)}, \kappa^{1/2} h (f_{\theta^{(2)}}, f_{\theta'^{(2)}}) \right) \quad \text{or} \quad \Theta_3 \supset \Theta_2 \setminus \mathcal{B} \left(\theta'^{(2)}, \kappa^{1/2} h (f_{\theta^{(2)}}, f_{\theta'^{(2)}}) \right) $$ according to the sign of $T ({\theta^{(2)}}, {\theta'^{(2)}} )$. By induction, we build a decreasing sequence of such intervals $(\Theta_i)_i$. We now consider an integer~$N$ large enough so that the length of $\Theta_N$ is small enough. We then define the estimator~$\hat{\theta}$ as the center of the set~$\Theta_N$ and estimate~$s$ by $f_{{\hat{\theta}}}$. \subsection{Definition of the test.} \label{SectionDefTest} The test $T(\theta,\theta')$ we use in our estimation strategy is the one of~\cite{BaraudMesure} applied to two suitable densities of the model. More precisely, let~$\widebar{T}$ be the functional defined for all $g,g' \in \L^1_+ (\mathbb{X},\mu)$ by \begin{eqnarray} \label{eqFonctionnalBaraud} \quad \widebar{T}(g , g') = \frac{1}{n} \sum_{i=1}^n \frac{\sqrt{g' (X_i)} - \sqrt{g (X_i)}}{\sqrt{g (X_i) + g'(X_i)}} + \frac{1}{2 } \int_{\mathbb{X}} \sqrt{g(x) + g'(x)} \left(\sqrt{g' (x)} - \sqrt{g (x)} \right) \d \mu (x) \end{eqnarray} where the convention $0 / 0 = 0$ is in use. We consider $t \in (0,1]$ and $\epsilon = t (\widebar{R} n)^{-1/\alpha}$. 
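For concreteness, a minimal numerical sketch of the functional $\widebar{T}$ of~(\ref{eqFonctionnalBaraud}) is given below (Python with scipy; here $\mu$ is taken to be the Lebesgue measure on $\mathbb{R}$, and the densities \texttt{g}, \texttt{gp} as well as their common support are user-supplied, so these names are ours and not part of the procedure):
\begin{verbatim}
# Sketch of T_bar(g, g') of (eqFonctionnalBaraud) for densities on R
# (mu = Lebesgue measure).  X is the sample (X_1, ..., X_n) as a numpy array.
import numpy as np
from scipy.integrate import quad

def T_bar(g, gp, X, support=(-np.inf, np.inf)):
    gX, gpX = g(X), gp(X)
    num = np.sqrt(gpX) - np.sqrt(gX)
    den = np.sqrt(gX + gpX)
    ratio = np.divide(num, den, out=np.zeros_like(num), where=den > 0)  # 0/0 := 0
    empirical = ratio.mean()
    integrand = lambda x: np.sqrt(g(x) + gp(x))*(np.sqrt(gp(x)) - np.sqrt(g(x)))
    integral, _ = quad(integrand, *support, limit=200)
    return empirical + 0.5*integral
\end{verbatim}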
We then define the finite sets \begin{eqnarray*} \Theta_{\text{dis}} = \left\{m+ k \epsilon , \; k \in \mathbb{N},\; k \leq (M-m) \epsilon^{-1} \right\}, \quad \mathscr{F}_{\text{dis}} = \{f_{\theta}, \, \theta \in \Theta_{\text{dis}} \} \end{eqnarray*} and the map $\pi$ on $[m, M]$ by $$\pi({x}) = m + \lfloor (x - m) / \epsilon \rfloor \epsilon \quad \text{for all $x \in[m, M]$} $$ where $\lfloor \cdot \rfloor$ denotes the integer part. We then define $T(\theta,\theta')$ by $${T} ({\theta},{\theta'}) = \widebar{T}(f_{\pi(\theta)},f_{\pi(\theta')}) \quad \text{for all $\theta,\theta' \in[m, M]$}.$$ The aim of the parameter $t$ is to tune the thinness of the net~$\mathscr{F}_{\text{dis}}$. \subsection{Procedure.} We shall build a decreasing sequence $(\Theta_i)_{i \geq 1}$ of intervals of $\Theta = [m,M]$ as explained in Section~\ref{SectionHeuristique}. Let $\kappa > 0$, and for all $\theta, \theta' \in[m,M]$ such that $\theta < \theta'$, let $\bar{r} (\theta,\theta')$, $\underline{r} (\theta,\theta')$ be two positive numbers such that \begin{eqnarray} \, [m,M] \bigcap \left[ \theta, \theta+\bar{r} (\theta,\theta') \right] &\subset& \mathcal{B} \big(\theta, \kappa^{1/2} h (f_{\theta}, f_{\theta'}) \big) \label{EquationSurRi1}\\ \, [m,M] \bigcap \left[ \theta'-\underline{r} (\theta,\theta') , \theta'\right] &\subset& \mathcal{B} \big(\theta', \kappa^{1/2} h (f_{\theta}, f_{\theta'}) \big) \label{EquationSurRi2} \end{eqnarray} where we recall that $ \mathcal{B} (\theta, {\kappa}^{1/2} h (f_{\theta}, f_{\theta'}) )$ and $ \mathcal{B} (\theta', {\kappa}^{1/2} h (f_{\theta}, f_{\theta'}) )$ are the Hellinger balls defined by~{(\ref{eqDefinitionBouleHel})}. We set $\theta^{(1)} = m$, $\theta'^{(1)} = M$ and $\Theta_1 = [\theta^{(1)},\theta'^{(1)}]$. We define the sequence $(\Theta_i)_{i \geq 1}$ by induction. When $\Theta_i = [\theta^{(i)}, \theta'^{(i)}]$, we set \begin{eqnarray*} \theta^{(i+1)} &=& \begin{cases} \theta^{(i)} + \min \left\{\bar{r} (\theta^{(i)},\theta'^{(i)}), \frac{\theta'^{(i)} - \theta^{(i)}}{2} \right\} & \text{if $ T ({\theta^{(i)}},{\theta'^{(i)}}) \geq 0$} \\ \theta^{(i)} & \text{otherwise} \end{cases} \\ \theta'^{(i+1)} &=& \begin{cases} \theta'^{(i)} - \min \left\{\underline{r} (\theta^{(i)},\theta'^{(i)}), \frac{\theta'^{(i)} - \theta^{(i)}}{2} \right\} & \text{if $ T ({\theta^{(i)}},{\theta'^{(i)}}) \leq 0$} \\ \theta'^{(i)} & \text{otherwise.} \end{cases} \end{eqnarray*} We then define $\Theta_{i+1} = [\theta^{(i+1)}, \theta'^{(i+1)}]$. The role of conditions (\ref{EquationSurRi1}) and (\ref{EquationSurRi2}) is to ensure that $\Theta_{i+1}$ is big enough to contain one of the two confidence sets $$\Theta_i \setminus \mathcal{B} \left(\theta^{(i)}, {\kappa}^{1/2} h (f_{\theta^{(i)}}, f_{\theta'^{(i)}}) \right) \quad \text{and} \quad \Theta_i \setminus \mathcal{B} \left(\theta'^{(i)}, {\kappa}^{1/2} h (f_{\theta^{(i)}}, f_{\theta'^{(i)}}) \right).$$ The parameter $\kappa$ allows one to tune the level of these confidence sets. There is a minimum in the definitions of $\theta^{(i+1)}$ and $\theta'^{(i+1)}$ in order to guarantee the inclusion of $\Theta_{i+1}$ in $\Theta_i$. We now consider a positive number $\eta$ and build these intervals until their lengths become smaller than $\eta$. The estimator we consider is then the center of the last interval built. This parameter $\eta$ stands for a measure of the accuracy of the estimation and must be small enough to get a suitable risk bound for our estimator. The algorithm is the following. 
\begin{algorithm}[H] \caption{ } \label{AlgorithmDim1} \begin{algorithmic} [1] \STATE $\theta \leftarrow m$, $\theta' \leftarrow M$ \WHILE{$\theta' - \theta > \eta$} \STATE Compute $r = \min \left\{\bar{r} (\theta,\theta') , (\theta' - \theta)/2\right\}$ \STATE Compute $r' = \min \left\{\underline{r} (\theta,\theta'), (\theta' - \theta)/2 \right\}$ \STATE Compute $\text{Test} = T(\theta,\theta')$ \IF {$\text{Test} \geq 0$} \STATE $\theta \leftarrow \theta + r$ \ENDIF \IF {$\text{Test} \leq 0$} \STATE $\theta' \leftarrow \theta' - r'$ \ENDIF \ENDWHILE \RETURN $\hat{\theta} = (\theta + \theta')/2$ \end{algorithmic} \end{algorithm} \subsection{Risk bound.} \label{SectionPropEstimateurDim1} The following theorem specifies the values of the parameters $t$, $\kappa$, $\eta$ that allow us to control the risk of the estimator $\hat{s} = f_{\hat{\theta}}$. \begin{thm} \label{ThmPrincipalDim1} Suppose that Assumption~\ref{HypSurLeModeleQuelquonqueDebutDimD} holds. Set \begin{eqnarray} \label{eqEsperanceTest} \bar{\kappa} = \left(1 + \sqrt{\frac{2+\sqrt{2}}{2-\sqrt{2}}}\right)^{-2}. \end{eqnarray} Assume that $t \in (0,1]$, $\kappa \in (0,\bar{\kappa})$, $\eta \in [\epsilon, (\widebar{R} n)^{-1/\alpha} ]$ and that $\bar{r} (\theta,\theta')$, $\underline{r} (\theta,\theta')$ are such that (\ref{EquationSurRi1}) and (\ref{EquationSurRi2}) hold. Then, for all $\xi > 0$, the estimator $\hat{\theta}$ built in Algorithm~\ref{AlgorithmDim1} satisfies \begin{eqnarray*} \P \left[ C h^2(s,f_{\hat{\theta}}) \geq h^2(s, \mathscr{F}) + \frac{1}{n} + \xi \right] \leq e^{- n \xi} \end{eqnarray*} where $C > 0$ depends only on $\kappa, t, \alpha, \widebar{R}/\underline{R}$. \end{thm} A slightly sharper risk bound may be found in the proof of this theorem. \subsection{Choice of $\bar{r} (\theta,\theta')$ and $\underline{r} (\theta,\theta')$.} \label{SectionDefinitionRminBarreDim1} These parameters are chosen by the statistician. They do not change the risk bound given by Theorem~\ref{ThmPrincipalDim1} (provided that (\ref{EquationSurRi1}) and (\ref{EquationSurRi2}) hold) but affect the speed of the procedure. The larger they are, the faster the procedure is. There are three different situations. \paragraph{First case:} the Hellinger distance $ h (f_{\theta}, f_{\theta'}) $ can be made explicit. It is thus in our interest to define them as the largest numbers for which (\ref{EquationSurRi1}) and (\ref{EquationSurRi2}) hold, that is \begin{eqnarray} \, \bar{r} (\theta,\theta') &=& \sup \left\{r > 0, \; [m,M] \cap \left[ \theta, \theta+r\right] \subset \mathcal{B} \big(\theta, \kappa^{1/2} h (f_{\theta}, f_{\theta'}) \big) \right\} \label{EqDefinitionRDim1Optimal1}\\ \, \underline{r} (\theta,\theta') &=& \sup \left\{r > 0, \; [m,M] \cap \left[ \theta'-r, \theta'\right] \subset \mathcal{B} \big(\theta', \kappa^{1/2} h (f_{\theta}, f_{\theta'}) \big) \right\} \label{EqDefinitionRDim1Optimal2}. \end{eqnarray} \paragraph{Second case:} the Hellinger distance $ h (f_{\theta}, f_{\theta'}) $ can be quickly evaluated numerically but the computation of (\ref{EqDefinitionRDim1Optimal1}) and (\ref{EqDefinitionRDim1Optimal2}) is difficult. We may then define them by \begin{eqnarray} \label{EqDefintionRDim2} \underline{r} (\theta,\theta') = \bar{r} (\theta,\theta') = \left( (\kappa / {\widebar{R}}) h^2 (f_{\theta}, f_{\theta'}) \right)^{1/\alpha}. \end{eqnarray} One can verify that (\ref{EquationSurRi1}) and (\ref{EquationSurRi2}) hold.
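To make the procedure more concrete, here is a minimal Python sketch of Algorithm~\ref{AlgorithmDim1} when the radii are chosen as in (\ref{EqDefintionRDim2}); it assumes that a numerical evaluation of the Hellinger distance and of the test are available, and the names are purely illustrative.
\begin{verbatim}
def estimate_theta(m, M, T, hellinger, kappa, R_bar, alpha, eta):
    # Sketch of Algorithm 1 with the second-case radii:
    # r = r' = ((kappa / R_bar) * h^2(f_theta, f_theta'))^(1/alpha).
    theta, thetap = m, M
    while thetap - theta > eta:
        radius = (kappa / R_bar * hellinger(theta, thetap) ** 2) ** (1.0 / alpha)
        r = min(radius, (thetap - theta) / 2)
        rp = min(radius, (thetap - theta) / 2)
        test = T(theta, thetap)
        if test >= 0:   # the ball around theta is discarded
            theta = theta + r
        if test <= 0:   # the ball around theta' is discarded
            thetap = thetap - rp
    return (theta + thetap) / 2
\end{verbatim}
In this sketch, \texttt{T} stands for the test of Section~\ref{SectionDefTest} and \texttt{hellinger} for a numerical evaluation of $h(f_{\theta}, f_{\theta'})$.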
When the model is regular enough and $\alpha = 2$, the value of~$\widebar{R}$ can be calculated by using Fisher information (see for instance Theorem~7.6 of Chapter~1 of \cite{Ibragimov1981}). \paragraph{Third case:} the computation of the Hellinger distance $ h (f_{\theta}, f_{\theta'}) $ involves the numerical computation of an integral and this computation is slow. An alternative definition is then \begin{eqnarray} \label{EqDefintionRDim1} \underline{r} (\theta,\theta') = \bar{r} (\theta,\theta') = (\kappa \underline{R}/ \widebar{R})^{1/\alpha} \left(\theta' - \theta \right). \end{eqnarray} As in the second case, one can check that (\ref{EquationSurRi1}) and (\ref{EquationSurRi2}) hold. Note however that in most cases the computation of the test also involves the numerical computation of an integral (see (\ref{eqFonctionnalBaraud})). This third case is thus mainly devoted to models for which this numerical integration can be avoided, such as the translation models $\mathscr{F} = \left\{f (\cdot - \theta), \, \theta \in [m,M]\right\}$ with $f$ even, $\mathbb{X} = \mathbb{R}$ and $\mu$ the Lebesgue measure (the second term of (\ref{eqFonctionnalBaraud}) is~$0$ for these models). We can upper-bound the numerical complexity of the algorithm when $\bar{r} (\theta,\theta')$ and $\underline{r} (\theta,\theta')$ are large enough. More precisely, we prove the proposition below. \begin{prop} \label{PropCalculComplexiteDimen1} Suppose that the assumptions of Theorem~\ref{ThmPrincipalDim1} hold and that $\underline{r} (\theta,\theta')$, $\bar{r} (\theta,\theta')$ are larger than \begin{eqnarray} \label{eqSurretR} (\kappa \underline{R}/ \widebar{R})^{1/\alpha} \left(\theta' - \theta\right). \end{eqnarray} Then, the number of tests computed to build the estimator $\hat{\theta}$ is smaller than $$ 1 + \max \left\{ \left( {\widebar{R}}/{ (\kappa \underline{R})} \right)^{1/\alpha}, {1}/{\log 2} \right\} \log \left( \frac{M-m}{\eta} \right).$$ \end{prop} It is worth noticing that this upper bound does not depend on $t$, that is, on the size of the net $\mathscr{F}_{\text{dis}}$, contrary to the preceding procedures based on tests. The parameter $\eta$ is of course involved in this upper bound, but the bound grows only logarithmically in $1/\eta$, which allows us to use the procedure with a very small $\eta$. \section{Simulations for unidimensional models} \label{SectionSimuDim1} In what follows, we carry out a simulation study in order to evaluate more precisely the performance of our estimator. We simulate samples $(X_1,\dots,X_n)$ with density~$s$ and use our procedure to estimate~$s$. \subsection{Models.} Our simulation study is based on the following models.
\begin{ExempleSimuDim1} $\mathscr{F} = \left\{f_{\theta}, \, \theta \in [0.01, 100]\right\} $ where $$f_{\theta} (x) = \theta e^{- \theta x} \mathbbm{1}_{[0,+\infty)} (x) \quad \text{for all $x \in \mathbb{R}$.}$$ \end{ExempleSimuDim1} \begin{ExempleSimuDim1} $\mathscr{F} = \left\{f_{\theta}, \, \theta \in [-100, 100]\right\} $ where $$f_{\theta} (x) = \frac{1}{\sqrt{2 \pi}} \exp \left(- \frac{(x-\theta)^2}{2} \right) \quad \text{ for all $x \in \mathbb{R}$.}$$ \end{ExempleSimuDim1} \begin{ExempleSimuDim1} $\mathscr{F} = \left\{f_{\theta}, \, \theta \in [0.01, 100]\right\} $ where $$f_{\theta} (x) = \frac{x}{\theta^2} \exp \left( - \frac{x^2}{2 \theta^2} \right) \mathbbm{1}_{[0,+\infty)} (x) \quad \text{ for all $x \in \mathbb{R}$.}$$ \end{ExempleSimuDim1} \begin{ExempleSimuDim1} $\mathscr{F} = \left\{f_{\theta}, \, \theta \in [-10, 10]\right\} $ where $$f_{\theta} (x) = \frac{1}{\pi \left( 1 + (x - \theta)^2 \right)} \quad \text{ for all $x \in \mathbb{R}$.}$$ \end{ExempleSimuDim1} \begin{ExempleSimuDim1} $\mathscr{F} = \left\{f_{\theta}, \, \theta \in [0.01, 10]\right\} $ where $f_{\theta} = \theta^{-1} \mathbbm{1}_{[0,\theta]}$. \end{ExempleSimuDim1} \begin{ExempleSimuDim1} $\mathscr{F} = \left\{f_{\theta}, \, \theta \in [-10, 10]\right\} $ where $$f_{\theta} (x) = \frac{1}{(x-\theta +1)^2 }\mathbbm{1}_{[\theta,+\infty)} (x) \quad \text{for all $x \in \mathbb{R}$.}$$ \end{ExempleSimuDim1} \begin{ExempleSimuDim1} $\mathscr{F} = \left\{f_{\theta}, \, \theta \in [-10, 10]\right\} $ where $f_{\theta} = \mathbbm{1}_{[\theta - 1/2, \theta + 1/2]}$. \end{ExempleSimuDim1} \begin{ExempleSimuDim1} $\mathscr{F} = \left\{f_{\theta}, \, \theta \in [-1, 1]\right\} $ where $$f_{\theta} (x) = \frac{1}{ 4 \sqrt{|x-\theta|}} \mathbbm{1}_{[-1,1]} (x-\theta) \quad \text{for all $x \in \mathbb{R} \setminus \{\theta\}$}$$ and $f_{\theta} (\theta) = 0$. \end{ExempleSimuDim1} In these examples, we shall mainly compare our estimator to the maximum likelihood one. In Examples 1, 2, 3, 5 and 6, the m.l.e $\tilde{\theta}_{\text{mle}}$ can be made explicit and is thus easy to compute. Finding the m.l.e is more delicate for the problem of estimating the location parameter of a Cauchy distribution, since the likelihood function may be multimodal. We refer to~\cite{Barnett1966} for a discussion of numerical methods devoted to the maximization of the likelihood. In our simulation study, we sidestep these numerical issues by computing the likelihood at $10^6$ equally spaced points between $\max(-10, \hat{\theta}-1)$ and $\min(10, \hat{\theta}+1)$ (where $\hat{\theta}$ is our estimator) and at $10^{6}$ equally spaced points between $\max(-10, \tilde{\theta}_{\text{median}}-1)$ and $\min(10, \tilde{\theta}_{\text{median}}+1)$, where $\tilde{\theta}_{\text{median}}$ is the empirical median. We then select among these points the one at which the likelihood is maximal. In Example 5, we shall also compare our estimator to the minimum variance unbiased estimator defined by $$\tilde{\theta}_{\text{mvub}} = \frac{n+1}{n} \max_{1 \leq i \leq n} X_i.$$ In Example~7, we shall compare our estimator to $$\tilde{\theta}' = \frac{1}{2} \left(\max_{1 \leq i \leq n} X_i + \min_{1 \leq i \leq n} X_i\right).$$ In the case of Example~8, the likelihood is infinite at each observation and the maximum likelihood method fails. We shall then compare our estimator to the empirical median and the empirical mean, as well as to the maximum spacing product estimator $\tilde{\theta}_{\text{mspe}}$ (m.s.p.e for short).
This estimator was introduced in~\cite{Cheng1983,Ranneby1984} to deal with statistical models for which the likelihood is unbounded. The m.s.p.e is known to possess nice theoretical properties, such as consistency and asymptotic efficiency, and precise results on its performance may be found in~\cite{Cheng1983,Ranneby1984,Ekstrom1998,Shao1999,Ghost2001,Anatolyev2005} among other references. This last method requires finding a global maximum of the spacing product function on $\Theta = [-1,1]$ (which may be multimodal). We compute it by considering $2 \times 10^5$ equally spaced points between $-1$ and $1$, evaluating the function to maximize at each of these points, and selecting the point at which it is maximal. Using more points to compute the m.s.p.e would give more accurate results, especially when $n$ is large, but we are limited by the available computing resources. \subsection{Implementation of the procedure.} Our procedure involves several parameters that must be chosen by the statistician. \paragraph{\textbf{Choice of} $t$.} This parameter tunes the fineness of the net $\mathscr{F}_{\text{dis}}$. When the model is regular enough and contains $s$, a good choice of $t$ seems to be $t = 0$ (that is $\Theta_{\text{dis}} = \Theta$, $\mathscr{F}_{\text{dis}} = \mathscr{F}$ and $T ({\theta}, {\theta'}) = \widebar{T}(f_{\theta}, f_{\theta'})$), since the simulations then suggest that, with large probability, our estimator is very close to the m.l.e when the model is true. In the simulations, we take $t = 0$. \paragraph{\textbf{Choice of} $\eta$.} We take $\eta$ small: $\eta = (M-m) / 10^{8}$. \paragraph{\textbf{Choice of} $\kappa$.} This constant influences the level of the confidence sets and thus the time of construction of the estimator: the larger $\kappa$ is, the faster the procedure. We arbitrarily take $\kappa = \bar{\kappa}/2$. \paragraph{\textbf{Choice of} $\underline{r} (\theta,\theta')$ \textbf{and} $\bar{r} (\theta,\theta')$.} In Examples 1, 2, 3, 5 and 7, we define them by (\ref{EqDefinitionRDim1Optimal1}) and (\ref{EqDefinitionRDim1Optimal2}). In Examples~4 and~6, we define them by (\ref{EqDefintionRDim2}). For Example~4, $\alpha = 2$ and $\widebar{R} = 1/16$, while for Example~6, $\alpha = 1$ and $\widebar{R} = 1/2$. In the case of Example~8, we use (\ref{EqDefintionRDim1}) with $\alpha = 1/2$, $\underline{R} = 0.17$ and $\widebar{R} = 1/\sqrt{2}$. \subsection{Simulations when $s \in \mathscr{F}$.} \label{SectionDim1VraiS} We begin by simulating $N$ samples $(X_1,\dots,X_n)$ when the true density~$s$ belongs to the model $\mathscr{F}$. They are generated according to the density~$s = f_1$ in Examples $1,3,5$ and according to~$s = f_0$ in Examples $2,4,6,7,8$. We evaluate the performance of an estimator $\tilde{\theta}$ by computing it on each of the $N$ samples. Let $\tilde{\theta}^{(i)}$ be the value of this estimator corresponding to the $i^{\text{\tiny th}}$ sample and let \begin{eqnarray*} \widehat{R}_N (\tilde{\theta}) = \frac{1}{N} \sum_{i=1}^N h^2 (s, f_{\tilde{\theta}^{(i)}}) \quad \text{and} \quad \widehat{\text{std}}_N (\tilde{\theta})= \sqrt{\frac{1}{N-1} \sum_{i=1}^N \left(h^2 (s, f_{\tilde{\theta}^{(i)}}) - \widehat{R}_N (\tilde{\theta}) \right)^2}. \end{eqnarray*} The risk $\mathbb{E} \left[ h^2 (s, f_{\tilde{\theta}}) \right]$ of the estimator $\tilde{\theta}$ is thus estimated by $\widehat{R}_N(\tilde{\theta})$.
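These quantities are obtained by a plain Monte Carlo loop, as in the following Python sketch; the functions \texttt{sample}, \texttt{estimate} and \texttt{hellinger\_sq} are placeholders for problem-specific routines, and the sketch is only an illustration of the computation.
\begin{verbatim}
import numpy as np

def monte_carlo_risk(sample, estimate, hellinger_sq, N, n):
    # sample(n): draws an n-sample from the true density s
    # estimate(X): value of the estimator on the sample X
    # hellinger_sq(theta): value of h^2(s, f_theta)
    losses = np.empty(N)
    for i in range(N):
        X = sample(n)
        losses[i] = hellinger_sq(estimate(X))
    risk = losses.mean()        # estimates E[h^2(s, f_theta_tilde)]
    std = losses.std(ddof=1)    # uses the 1/(N-1) normalization
    return risk, std
\end{verbatim}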
To assess the precision of this estimate, note that if $Q_{c}$ denotes the quantile of order $(1+c)/2$ of a standard Gaussian distribution, $$\left[\widehat{R}_N(\tilde{\theta}) - Q_{c} \frac{\widehat{\text{std}}_N(\tilde{\theta})}{\sqrt{N}}, \widehat{R}_N(\tilde{\theta}) + Q_{c} \frac{\widehat{\text{std}}_N(\tilde{\theta})}{\sqrt{N}} \right]$$ is a confidence interval for $\mathbb{E} \left[ h^2 (s, f_{\tilde{\theta}}) \right]$ with asymptotic confidence level~$c$. We also introduce $$\mathcal{\widehat{R}}_{N,\text{rel}} (\tilde{\theta}) = \frac{\widehat{R}_N (\hat{\theta})}{\widehat{R}_N (\tilde{\theta})}- 1$$ in order to ease the comparison between our estimator $\hat{\theta}$ and the estimator~$\tilde{\theta}$. When $\mathcal{\widehat{R}}_{N,\text{rel}} (\tilde{\theta})$ is negative our estimator is better than $\tilde{\theta}$, whereas if $\mathcal{\widehat{R}}_{N,\text{rel}} (\tilde{\theta})$ is positive, our estimator is worse than~$\tilde{\theta}$. More precisely, if $\mathcal{\widehat{R}}_{N,\text{rel}} (\tilde{\theta}) = a$, the risk of our estimator corresponds to that of~$\tilde{\theta}$ reduced by $100 |a| \%$ when $a < 0$ and increased by $100 a \%$ when $a > 0$. The results are gathered below. \begin{center} \begin{longtable}{|c||c|c|c|c|c|c|} \hline & & $n = 10$ & $n = 25$ & $n = 50$ & $n = 75$ & $n = 100$ \\ \hline Example 1 & $\widehat{R}_{10^6}(\hat{\theta})$ & 0.0130 & 0.0051 & 0.0025 & 0.0017 & 0.0013 \\* & $\widehat{R}_{10^6}(\tilde{\theta}_{\text{mle}})$ & 0.0129 & 0.0051 & 0.0025 & 0.0017 & 0.0013 \\* & $\mathcal{\widehat{R}}_{10^6,\text{rel}} (\tilde{\theta}_{\text{mle}})$ & $6 \cdot 10^{-4}$ & $10^{-5}$ & $7 \cdot 10^{-7}$ & $-8 \cdot 10^{-9}$ & $2 \cdot 10^{-9}$ \\* \cline{2-7} & $ \widehat{\text{std}}_{10^6} (\hat{\theta})$ & 0.0192 & 0.0073 & 0.0036 & 0.0024 & 0.0018 \\* & $ \widehat{\text{std}}_{10^6} (\tilde{\theta}_{\text{mle}})$ & 0.0192 & 0.0073 & 0.0036 & 0.0024 & 0.0018 \\* \hline Example 2 & $\widehat{R}_{10^6}(\hat{\theta})$ & 0.0123 & 0.0050 & 0.0025 & 0.0017 & 0.0012 \\* & $\widehat{R}_{10^6}(\tilde{\theta}_{\text{mle}})$ & 0.0123 & 0.0050 & 0.0025 & 0.0017 & 0.0012 \\* & $\mathcal{\widehat{R}}_{10^6,\text{rel}} (\tilde{\theta}_{\text{mle}})$ & $5 \cdot 10^{-10}$ & $9 \cdot 10^{-10}$ & $- 2 \cdot 10^{-9}$ & $- 2 \cdot 10^{-9}$ & $- 3 \cdot 10^{-9}$ \\* \cline{2-7} & $ \widehat{\text{std}}_{10^6} (\hat{\theta})$ & 0.0170 & 0.0070 & 0.0035 & 0.0023 & 0.0018 \\* & $ \widehat{\text{std}}_{10^6} (\tilde{\theta}_{\text{mle}})$ & 0.0170 & 0.0070 & 0.0035 & 0.0023 & 0.0018 \\* \hline Example 3 & $\widehat{R}_{10^6}(\hat{\theta})$ &0.0130 & 0.0051 & 0.0025 & 0.0017 & 0.0013 \\* & $\widehat{R}_{10^6}(\tilde{\theta}_{\text{mle}})$ & 0.0129 & 0.0051 & 0.0025 & 0.0017 & 0.0013 \\* & $\mathcal{\widehat{R}}_{10^6,\text{rel}} (\tilde{\theta}_{\text{mle}})$ & $6 \cdot 10^{-4}$ & $2 \cdot 10^{-5}$ & $10^{-6}$ & $-10^{-7}$ & $-4 \cdot 10^{-9}$ \\* \cline{2-7} & $ \widehat{\text{std}}_{10^6} (\hat{\theta})$ & 0.0192 & 0.0073 & 0.0036 & 0.0024 & 0.0018 \\* & $ \widehat{\text{std}}_{10^6}(\tilde{\theta}_{\text{mle}})$ & 0.0192 & 0.0073 & 0.0036 & 0.0024 & 0.0018 \\* \hline Example 4 & $\widehat{R}_{10^6}(\hat{\theta})$ & 0.0152 & 0.0054 & 0.0026 & 0.0017 & 0.0013 \\* & $\widehat{R}_{10^4}(\tilde{\theta}_{\text{mle}})$ & 0.0149 & 0.0054 & 0.0026 & 0.0017 & 0.0012 \\* & $\mathcal{\widehat{R}}_{10^4,\text{rel}} (\tilde{\theta}_{\text{mle}})$ & -0.001 & $-2 \cdot 10^{-4}$ & $- 10^{-8}$ & $-3 \cdot 10^{-8}$ & $9 \cdot 10^{-8}$ \\* \cline{2-7} & $ \widehat{\text{std}}_{10^6} (\hat{\theta})$ & 0.0267 & 0.0083 & 0.0038 & 0.0025 & 0.0018
\\* & $ \widehat{\text{std}}_{10^6}(\tilde{\theta}_{\text{mle}})$ & 0.0255 & 0.0083 & 0.0039 & 0.0025 & 0.0018 \\* \hline Example 5 & $\widehat{R}_{10^6}(\hat{\theta})$ & 0.0468 &0.0192 & 0.0096 & 0.0064 & 0.0048 \\* & $\widehat{R}_{10^6}(\tilde{\theta}_{\text{mle}})$ & 0.0476 & 0.0196 & 0.0099 & 0.0066 & 0.0050 \\* & $\widehat{R}_{10^6}(\tilde{\theta}_{\text{mvub}})$ & 0.0350 & 0.0144 & 0.0073 & 0.0049 & 0.0037 \\* \cline{2-7} & $\mathcal{\widehat{R}}_{10^6,\text{rel}} (\tilde{\theta}_{\text{mle}})$ & -0.0160 & -0.0202 & -0.0287 & -0.0271 & -0.0336 \\* & $\mathcal{\widehat{R}}_{10^6,\text{rel}} (\tilde{\theta}_{\text{mvub}})$ & 0.3390 & 0.3329 & 0.3215 & 0.3243 & 0.3148 \\* \cline{2-7} & $ \widehat{\text{std}}_{10^6}(\hat{\theta})$ & 0.0529 & 0.0223 & 0.0112 & 0.0075 & 0.0056 \\* & $ \widehat{\text{std}}_{10^6}(\tilde{\theta}_{\text{mle}})$ & 0.0453 & 0.0192 & 0.0098 & 0.0066 & 0.0049 \\* & $ \widehat{\text{std}}_{10^6}(\tilde{\theta}_{\text{mvub}})$ & 0.0316 & 0.0132 & 0.0067 & 0.0045 & 0.0034 \\ \hline Example 6 & $\widehat{R}_{10^6}(\hat{\theta})$ & 0.0504 & 0.0197 & 0.0098 & 0.0065 & 0.0049 \\* & $\widehat{R}_{10^6}(\tilde{\theta}_{\text{mle}})$ &0.0483 & 0.0197 & 0.0099 & 0.0066 & 0.0050 \\* & $\mathcal{\widehat{R}}_{10^6,\text{rel}} (\tilde{\theta}_{\text{mle}})$ & 0.0436 & -0.0019 & -0.0180 & -0.0242 & -0.0263 \\* \cline{2-7} & $ \widehat{\text{std}}_{10^6}(\hat{\theta})$ & 0.0597 & 0.0233 & 0.0115 & 0.0076 & 0.0057 \\* & $ \widehat{\text{std}}_{10^6}(\tilde{\theta}_{\text{mle}})$ & 0.0467 & 0.0195 & 0.0099 & 0.0066 & 0.0050 \\ \hline Example 7 & $\widehat{R}_{10^6}(\hat{\theta})$ & 0.0455 & 0.0193 & 0.0098 & 0.0066 & 0.0050 \\* & $\widehat{R}_{10^6}(\tilde{\theta}')$ & 0.0454 & 0.0192 & 0.0098 & 0.0066 & 0.0050 \\* & $\mathcal{\widehat{R}}_{10^6,\text{rel}} (\tilde{\theta}')$ & 0.0029 & 0.0029 & 0.0031 & 0.0028 & 0.0030 \\* \cline{2-7} & $ \widehat{\text{std}}_{10^6}(\hat{\theta})$ & 0.0416 & 0.0186 & 0.0096 & 0.0065 & 0.0049 \\* & $ \widehat{\text{std}}_{10^6}(\tilde{\theta}')$ & 0.0415 & 0.0185 & 0.0096 & 0.0065 & 0.0049 \\ \hline Example 8 & $\widehat{R}_{10^4}(\hat{\theta})$ & 0.050 & 0.022 & 0.012 & 0.008 &0.006 \\* & $\widehat{R}_{10^4}(\tilde{\theta}_{\text{mean}})$ & 0.084 & 0.061 & 0.049 & 0.043 & 0.039 \\* & $\widehat{R}_{10^4}(\tilde{\theta}_{\text{median}})$ & 0.066 & 0.036 & 0.025 & 0.019 & 0.017 \\* & $\widehat{R}_{10^4}(\tilde{\theta}_{\text{mspe}})$ & 0.050 & 0.022 & 0.012 & 0.008 & 0.006 \\* \cline{2-7} & $\mathcal{\widehat{R}}_{10^4,\text{rel}} (\tilde{\theta}_{\text{mean}})$ &-0.40 & -0.64 & -0.76 & -0.82 & -0.85 \\* & $\mathcal{\widehat{R}}_{10^4, \text{rel}} (\tilde{\theta}_{\text{median}})$ & -0.25 & -0.39 & -0.54 & -0.59 & -0.65 \\* \cline{2-7} & $ \widehat{\text{std}}_{10^4}(\hat{\theta})$ & 0.054 & 0.025 & 0.013 & 0.009 & 0.007 \\* & $ \widehat{\text{std}}_{10^4}(\tilde{\theta}_{\text{mean}})$ & 0.045 & 0.032 & 0.025 & 0.022 & 0.020 \\ & $ \widehat{\text{std}}_{10^4}(\tilde{\theta}_{\text{median}})$ & 0.052 & 0.032 & 0.020 & 0.016 & 0.014 \\ & $ \widehat{\text{std}}_{10^4}(\tilde{\theta}_{\text{mspe}})$ & 0.051 & 0.025 & 0.014 & 0.009 & 0.007 \\ \hline \end{longtable} \end{center} In the first four examples, the risk of our estimator is very close to the maximum likelihood estimator one, whatever the value of $n$. In Example~5, our estimator slightly improves the maximum likelihood estimator but is worse than the minimum variance unbiased estimator. 
In Example~6, the risk of our estimator is larger than that of the m.l.e when $n = 10$ but is slightly smaller for $n \geq 25$. In Example~7, the risk of our estimator is~{$0.3 \%$} larger than that of~$\tilde{\theta}'$. In Example~8, our estimator significantly improves on the empirical mean and the median. Its risk is comparable to that of the m.s.p.e (we omit in this example the value of $\mathcal{\widehat{R}}_{10^4, \text{rel}} (\tilde{\theta}_{\text{mspe}})$ because it is influenced by the numerical procedure we have used to compute the m.s.p.e). These simulations show that, when the model is regular enough, our estimation strategy provides an estimator whose risk is almost equal to that of the maximum likelihood estimator. Moreover, our estimator seems to work rather well in a model where the m.l.e does not exist (as in Example~8). Note that, contrary to the maximum likelihood method, our procedure does not involve the search for a global maximum. We now highlight the connection between our estimator and the m.l.e when the model is regular enough (that is, in the first four examples). For $c \in \{0.99,0.999, 1\}$, let $q_{c}$ be the $c$-quantile of the random variable $ \big|\hat{\theta} - \tilde{\theta}_{\text{mle}} \big|$, and let $\hat{q}_c$ be its empirical version based on $N$ samples ($N = 10^{6}$ in Examples~1, 2, 3 and $N = 10^{4}$ in Example~4). The results are the following. \begin{center} \begin{tabular}{|c||c|c|c|c|c|c|} \hline & & $n = 10$ & $n = 25$ & $n = 50$ & $n = 75$ & $n = 100$ \\ \hline Example 1 & $\hat{q}_{0.99}$ & $10^{-7}$ & $10^{-7}$ & $10^{-7}$ & $10^{-7}$ & $10^{-7}$\\ & $\hat{q}_{0.999} $ & $0.07$ & $10^{-7}$ & $10^{-7}$ & $10^{-7}$ & $10^{-7}$ \\ & $\hat{q}_{1} $ & $1.9$ & $0.3$ & $0.06$ & $0.005$ & $10^{-7}$ \\ \hline Example 2 & $\hat{q}_{0.99}$ & $2 \cdot 10^{-7}$ & $3 \cdot 10^{-7}$ & $3 \cdot 10^{-7}$ & $3 \cdot 10^{-7}$ & $3 \cdot 10^{-7}$ \\ & $\hat{q}_{0.999} $ & $3 \cdot 10^{-7}$ & $3 \cdot 10^{-7}$ & $3 \cdot 10^{-7}$ & $3 \cdot 10^{-7}$ & $3 \cdot 10^{-7}$ \\ & $\hat{q}_{1}$ & $3 \cdot 10^{-7}$ & $3 \cdot 10^{-7}$ & $3 \cdot 10^{-7}$ & $3 \cdot 10^{-7}$ & $3 \cdot 10^{-7}$ \\ \hline Example 3 & $\hat{q}_{0.99}$ & $10^{-7}$ & $10^{-7}$ & $10^{-7}$ & $10^{-7}$ & $10^{-7}$ \\ & $\hat{q}_{0.999} $ & $0.03$ & $10^{-7}$ & $10^{-7} $ & $10^{-7}$ & $10^{-7}$ \\ & $\hat{q}_{1} $ & $0.38$ & $0.12$ & $0.01$ & 0.007 & $10^{-7}$ \\ \hline Example 4 & $\hat{q}_{0.99}$ & $10^{-6}$ & $10^{-6}$ & $10^{-6}$ & $10^{-6}$ & $10^{-6}$ \\ & $\hat{q}_{0.999} $ & $3\cdot10^{-6}$ & $10^{-6}$ & $10^{-6}$ & $10^{-6}$ & $10^{-6}$ \\ & $\hat{q}_{1}$ & $1.5$ & $0.1$ & $10^{-6}$ & $10^{-6}$ & $10^{-6}$ \\ \hline \end{tabular} \end{center} This table shows that, with large probability, our estimator is very close to the m.l.e. This probability is already quite high for small values of $n$ and becomes even higher for larger values of $n$. This explains why the risks of these two estimators are very close in the first four examples. Note that the value of $\eta$ prevents the empirical quantile from being lower than a quantity of order $10^{-7}$, depending on the example (in Example~4, the value of $10^{-6}$ is due to the way we have computed the m.l.e). \subsection{Speed of the procedure.} For the sake of completeness, we specify below the number of tests that have been computed in the preceding examples.
\begin{figure}[H] \begin{center} \begin{tabular}{|c||c|c|c|c|c|} \hline & $n = 10$ & $n = 25$ & $n = 50$ & $n = 75$ & $n = 100$ \\ \hline Example 1 & 77 (1.4) & 77 (0.9) & 77 (0.7) & 77 (0.6) & 77 (0.5) \\ \hline Example 2 & 293 (1) & 294 (1) & 294 (0.9) & 295 (0.9) & 295 (0.9) \\ \hline Example 3 & 89 (0.75) & 90 (0.5) & 90 (0.5) & 90 (0.5) & 90 (0.5) \\ \hline Example 4 & 100 (3.5) & 100 (0.5) & 100 (0.001) & 100 (0) & 100 (0) \\ \hline Example 5 & 460 (3) & 461 (1) & 462 (0.6) & 462 (0.4) & 462 (0.3) \\ \hline Example 6 & 687 (0) & 687 (0) & 687 (0) & 687 (0) & 687 (0)\\ \hline Example 7 & 412 (8) & 419 (8) & 425 (8) & 429 (8) & 432 (8) \\ \hline Example 8 & 173209 (10) & 173212 (0) & 173212 (0.9) & 173206 (12) & 173212 (0.3) \\ \hline \end{tabular} \end{center} \caption{Number of tests computed, averaged over $10^6$ samples for Examples 1 to 7 and over $10^4$ samples for Example~8. The corresponding standard deviations are given in parentheses. } \label{FigureNombreTestsDim1} \end{figure} \subsection{Simulations when $s \not \in \mathscr{F}$.} \label{SectionRobustessDim1} In Section~\ref{SectionDim1VraiS}, we were in the favourable situation where the true density $s$ belonged to the model~$\mathscr{F}$, which may not be the case in practice. We now work with random variables $X_1,\dots,X_n$ simulated according to a density~$s \not \in \mathscr{F}$ to illustrate the robustness properties of our estimator. We begin with an example proposed in~\cite{BirgeTEstimateurs}. We generate $X_1,\dots,X_n$ according to the density $$s(x) = 10 \left[ (1-2n^{-1}) \mathbbm{1}_{[0,1/10]} (x) + 2n^{-1} \mathbbm{1}_{[9/10,1]}(x) \right] \quad \text{for all $x \in \mathbb{R}$}$$ and compare our estimator to the maximum likelihood estimator for the uniform model \begin{eqnarray} \label{FModelUniformeSimu} \mathscr{F} = \left\{f_{\theta}, \, \theta \in [0.01, 10]\right\} \quad \text{where} \quad f_{\theta} = \theta^{-1} \mathbbm{1} _{[0, \theta]}. \end{eqnarray} It is worth noticing that $h^2(s,\mathscr{F}) = \mathcal{O} (n^{-1})$, which means that $s$ is close to $\mathscr{F}$ when $n$ is large, and that our estimator still satisfies $\mathbb{E} [h^2 (s,f_{\hat{\theta}})] = \mathcal{O} (n^{-1})$. Contrary to what happens for our estimator, the outliers make the m.l.e unstable, as shown in the table below. \begin{figure}[H] \begin{center} \begin{tabular}{|c||c|c|c|c|c|} \hline & $n = 10$ & $n = 25$ & $n = 50$ & $n = 75$ & $n = 100$ \\ \hline $\widehat{R}_N (\hat{\theta}) $ & 0.20 & 0.06 & 0.03 & 0.02 & 0.015 \\ $\widehat{R}_N (\tilde{\theta}_{\text{mle}}) $ & 0.57 & 0.56 & 0.56 & 0.56 & 0.57 \\ \hline \end{tabular} \end{center} \caption{Risks for simulated data, averaged over $10^4$ samples.} \label{figureriskcomparaison} \end{figure} We now propose a second example based on a mixture of two uniform distributions. We use the same statistical model $\mathscr{F}$ but we modify the distribution of the observations. We take $p \in (0,1)$ and define the true underlying density by $$s_{p} (x) = (1-p) f_1(x) + p f_2(x) \quad \text{for all $x \in \mathbb{R}$.}$$ Set $p_0 = 1-1/\sqrt{2}$. One can check that \begin{eqnarray*} H^2(s_p,\mathscr{F}) &=& \begin{cases} H^2(s_p,f_1) & \text{if $p \leq p_0$} \\ H^2(s_p,f_2) & \text{if $p > p_0$,} \end{cases} \\ &=& \begin{cases} 1 - \sqrt{2-p}/\sqrt{2} & \text{if $p \leq p_0$} \\ 1 - (\sqrt{2-p}+\sqrt{p})/{2} & \text{if $p > p_0$,} \end{cases} \end{eqnarray*} which means that the best approximation of~$s_p$ in $\mathscr{F}$ is $f_1$ when $p < p_0$ and $f_2$ when $p > p_0$.
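For completeness, here is the computation behind these formulas; the expressions above correspond to the normalization $H^2(f,g) = 1 - \int_{\mathbb{X}} \sqrt{fg} \, \d \mu$ for two probability densities $f$ and $g$. Since $f_1 = \mathbbm{1}_{[0,1]}$ and $f_2 = \frac{1}{2} \mathbbm{1}_{[0,2]}$, the density $s_p$ equals $(2-p)/2$ on $[0,1]$ and $p/2$ on $(1,2]$, so that \begin{eqnarray*} H^2(s_p,f_1) &=& 1 - \int_0^1 \sqrt{\frac{2-p}{2}} \, \d x = 1 - \frac{\sqrt{2-p}}{\sqrt{2}}, \\ H^2(s_p,f_2) &=& 1 - \int_0^1 \sqrt{\frac{2-p}{4}} \, \d x - \int_1^2 \sqrt{\frac{p}{4}} \, \d x = 1 - \frac{\sqrt{2-p}+\sqrt{p}}{2}. \end{eqnarray*} Equating these two quantities leads to $(2 - \sqrt{2})\sqrt{2-p} = \sqrt{2} \sqrt{p}$, whose unique solution in $(0,1)$ is $p = 1 - 1/\sqrt{2} = p_0$.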
We now compare our estimator $\hat{\theta}$ to the m.l.e $\tilde{\theta}_{\text{mle}}$. For many values of~$p$, we simulate $N$ samples of $n$ random variables with density~$s_p$ and investigate the behaviour of the estimator $\tilde{\theta} \in \{\hat{\theta}, \tilde{\theta}_{\text{mle}}\}$ by computing the function \begin{eqnarray*} \widehat{R}_{p,n,N} (\tilde{\theta}) = \frac{1}{N} \sum_{i=1}^N h^2 (s_p, f_{\tilde{\theta}^{(p,i)}}) \end{eqnarray*} where $\tilde{\theta}^{(p,i)}$ is the value of the estimator $\tilde{\theta}$ corresponding to the $i^{\text{\tiny th}}$ sample whose density is $s_p$. We draw below the functions $p \mapsto \widehat{R}_{p,n,N} (\hat{\theta}) $, $p \mapsto \widehat{R}_{p,n,N} (\tilde{\theta}_{\text{mle}}) $ and $p \mapsto H^2(s_p, \mathscr{F})$ for $n = 100$ and then for $n = 10^4$. \begin{figure}[H] \includegraphics[scale=1]{FonctionRobustessen100Arxiv.eps} \includegraphics[scale=1]{FonctionRobustessen10000Arxiv.eps} \caption{Red: $p \mapsto H^2(s_p, \mathscr{F})$. Blue: $p \mapsto \widehat{R}_{p,n,5000} (\hat{\theta}) $. Green: $p \mapsto \widehat{R}_{p,n,5000} (\tilde{\theta}_{\text{mle}}) $.} \end{figure} We observe that the m.l.e is rather good when $p \geq p_0$ and very poor when $p < p_0$. This can be explained by the fact that the m.l.e $\tilde{\theta}_{\text{mle}}$ is close to $2$ as soon as the number $n$ of observations is large enough. The shape of the function $p \mapsto \widehat{R}_{p,n,5000} (\hat{\theta}) $ is much more satisfactory since it is closer to that of the function $p \mapsto H^2(s_p, \mathscr{F})$. The lower figure suggests that $ \widehat{R}_{p,n,N} (\hat{\theta}) $ converges to $H^2(s_p, \mathscr{F})$ when $n$ and $N$ go to infinity, except on a small neighbourhood to the left of $p_0$. \section{Models parametrized by a multidimensional parameter} \label{SectionEstimationDimQuel} \subsection{Assumption.} We now deal with models $\mathscr{F} = \left\{f_{\boldsymbol{\theta}}, \; \boldsymbol{\theta} \in \Theta \right\}$ indexed by a rectangle $$\Theta = \prod_{j=1}^d [m_j, M_j]$$ of $\mathbb{R}^d$ with $d \geq 2$. Assumption~\ref{HypSurLeModeleQuelquonqueDebutDimD} is assumed to hold throughout this section. \subsection{Definition of the test.} \label{SectionDefTestDimd} As previously, our estimation strategy is based on the existence, for all $\boldsymbol{\theta}, \boldsymbol{\theta}' \in \Theta$, of a measurable function $T(\boldsymbol{\theta}, \boldsymbol{\theta}')$ of the observations possessing suitable statistical properties. The definition of this functional is the natural extension of the one we have proposed in Section~\ref{SectionDefTest}. For $j \in \{1,\dots,d\}$, let $t_j \in (0,d^{1/\alpha_j}]$ and $\epsilon_j = t_j (\widebar{R}_j n)^{-1/\alpha_j}$. We then define the finite sets \begin{eqnarray*} \Theta_{\text{dis}} &=& \left\{\left(m_1+ k_1 \epsilon_1, \dots, m_d+ k_d \epsilon_d\right), \; \forall j \in \{1,\dots, d\}, \; k_j \in \mathbb{N}, \; k_j \leq (M_j-m_j) \epsilon_j^{-1} \right\}\\ \mathscr{F}_{\text{dis}} &=& \{f_{\boldsymbol{\theta}}, \, \boldsymbol{\theta} \in \Theta_{\text{dis}} \}. \end{eqnarray*} Let $\pi$ be the map defined on $\prod_{j=1}^d [m_j, M_j]$ by $$\pi({\boldsymbol{x}}) = \big(m_1 + \lfloor (x_1 - m_1) / \epsilon_1\rfloor \epsilon_1, \dots, m_d + \lfloor (x_d - m_d) / \epsilon_d\rfloor \epsilon_d \big) \quad \text{for all $\boldsymbol{x} = (x_1,\dots,x_d) \in \prod_{j=1}^d [m_j, M_j]$} $$ where $\lfloor \cdot \rfloor$ is the integer part.
We then define $T(\boldsymbol{\theta},\boldsymbol{\theta}')$ for all $\boldsymbol{\theta}, \boldsymbol{\theta}' \in \Theta$ by \begin{eqnarray} \label{eqDefinitionTDimD} {T} ({\boldsymbol{\theta}},{\boldsymbol{\theta}'}) = \widebar{T}(f_{\pi(\boldsymbol{\theta})},f_{\pi(\boldsymbol{\theta}')}) \quad \text{for all $\boldsymbol{\theta},\boldsymbol{\theta}' \in \Theta = \prod_{j=1}^d [m_j, M_j]$} \end{eqnarray} where $\widebar{T}$ is the functional given by (\ref{eqFonctionnalBaraud}). \subsection{Basic ideas.}\label{SectionHeuristiqueDim2} For the sake of simplicity, we first consider the case $d = 2$. We shall build a decreasing sequence $(\Theta_i)_i$ of rectangles by induction. When there exists $\boldsymbol{\theta}_0 \in \Theta$ such that $s = f_{\boldsymbol{\theta}_0}$, these rectangles $\Theta_i$ can be interpreted as confidence sets for $\boldsymbol{\theta}_0$. Their construction is strongly inspired by the heuristics of Section~\ref{SectionHeuristique}. We set $ \Theta_1 = \Theta$. Assume that $\Theta_i= [a_1,b_1] \times [a_2,b_2]$ and let us explain how we can build a confidence set $\Theta_{i+1} = [a_1,b_1] \times [a_2',b_2']$ with $a_2',b_2'$ satisfying $b_2'-a_2' < b_2-a_2$. We begin by building by induction two preliminary finite sequences $(\boldsymbol{\theta}^{(j)})_{1 \leq j \leq N}$, $(\boldsymbol{\theta}'^{(j)})_{1 \leq j \leq N}$ of elements of $\mathbb{R}^2$. Let $\boldsymbol{\theta}^{(1)} = (a_1,a_2)$ be the bottom left-hand corner of $\Theta_i$ and $\boldsymbol{\theta}'^{(1)} = (a_1,b_2)$ be the top left-hand corner of~$\Theta_i$. Let $\bar{r}_1 (\boldsymbol{\theta}^{(1)},\boldsymbol{\theta}'^{(1)})$, $\bar{r}_2 (\boldsymbol{\theta}^{(1)},\boldsymbol{\theta}'^{(1)})$, $\bar{r}_1 (\boldsymbol{\theta}'^{(1)},\boldsymbol{\theta}^{(1)})$, $\underline{r}_2 (\boldsymbol{\theta}'^{(1)},\boldsymbol{\theta}^{(1)})$ be positive numbers such that the rectangles \begin{eqnarray*} \mathcal{R}_1 &=& [a_1, a_1+\bar{r}_1 (\boldsymbol{\theta}^{(1)},\boldsymbol{\theta}'^{(1)})] \times [a_2, a_2+ \bar{r}_2 (\boldsymbol{\theta}^{(1)},\boldsymbol{\theta}'^{(1)})]\\ \mathcal{R}_1' &=& [a_1, a_1+\bar{r}_1 (\boldsymbol{\theta}'^{(1)},\boldsymbol{\theta}^{(1)})] \times [b_2-\underline{r}_2 (\boldsymbol{\theta}'^{(1)},\boldsymbol{\theta}^{(1)}), b_2] \end{eqnarray*} are respectively included in the Hellinger balls $$\mathcal{B} \big(\boldsymbol{\theta}^{(1)}, \kappa^{1/2} h (f_{\boldsymbol{\theta}^{(1)}}, f_{\boldsymbol{\theta}'^{(1)}}) \big) \quad \text{and} \quad \mathcal{B} \big(\boldsymbol{\theta}'^{(1)}, \kappa^{1/2} h (f_{\boldsymbol{\theta}^{(1)}}, f_{\boldsymbol{\theta}'^{(1)}}) \big).$$ See (\ref{eqDefinitionBouleHel}) for the precise definition of these balls. We define $\boldsymbol{\theta}^{(2)},\boldsymbol{\theta}'^{(2)} \in \mathbb{R}^2$ as follows \begin{eqnarray*} \boldsymbol{\theta}^{(2)} &=& \begin{cases} \boldsymbol{\theta}^{(1)} + (\bar{r}_1 (\boldsymbol{\theta}^{(1)},\boldsymbol{\theta}'^{(1)}), 0) & \text{if $T ({\boldsymbol{\theta}^{(1)}},{\boldsymbol{\theta}'^{(1)}}) \geq 0$} \\ \boldsymbol{\theta}^{(1)} & \text{otherwise} \end{cases} \\ \boldsymbol{\theta}'^{(2)} &=& \begin{cases} \boldsymbol{\theta}'^{(1)} + (\bar{r}_1 (\boldsymbol{\theta}'^{(1)},\boldsymbol{\theta}^{(1)}), 0) & \text{if $T ({\boldsymbol{\theta}^{(1)}},{\boldsymbol{\theta}'^{(1)}}) \leq 0$} \\ \boldsymbol{\theta}'^{(1)} & \text{otherwise.} \end{cases} \end{eqnarray*} Here is an illustration.
\begin{figure}[H] \begin{tikzpicture} [scale = 4] \draw (0,0) rectangle (1,1); \draw[black,fill=gray!20] (0,0) rectangle (1/6,1.4/6); \draw (0,0) node{$\bullet$}; \draw (1/6,0) node{$\bullet$}; \draw (0,0) node[below ]{$\boldsymbol{\theta}^{(1)}$}; \draw (1/6,0) node[below right ]{$\boldsymbol{\theta}^{(2)}$}; \draw (1/12,1.4/12) node{$\mathcal{R}_1$}; \draw (0,1) node{$\bullet$}; \draw (0,1) node [above ] {$\boldsymbol{\theta}'^{(1)} \!\! = \!\! \boldsymbol{\theta}'^{(2)}$}; \end{tikzpicture} \caption{Construction of $\boldsymbol{\theta}^{(2)}$ and $\boldsymbol{\theta}'^{(2)}$ when $T ({\boldsymbol{\theta}^{(1)}}, {\boldsymbol{\theta}'^{(1)}}) > 0$.} \end{figure} It is worthwhile to notice that in this figure, the heuristics of Section~\ref{SectionHeuristique} suggest that $\boldsymbol{\theta}_0$ belongs to $\Theta_i \setminus \mathcal{R}_1$. Now, if the first component of $\boldsymbol{\theta}^{(2)} = (\theta^{(2)}_1, \theta^{(2)}_2)$ is larger than $b_1$, that is $\theta^{(2)}_1 \geq b_1$, we set $N = 1$ and stop the construction of the vectors $\boldsymbol{\theta}^{(i)}$, $\boldsymbol{\theta}'^{(i)}$. Similarly, if $\theta'^{(2)}_1 \geq b_1$, we set $N = 1$ and stop the construction of the $\boldsymbol{\theta}^{(i)}$, $\boldsymbol{\theta}'^{(i)}$. If $\theta^{(2)}_1 < b_1$ and $\theta'^{(2)}_1 < b_1$, we consider positive numbers $\bar{r}_1 (\boldsymbol{\theta}^{(2)},\boldsymbol{\theta}'^{(2)})$, $\bar{r}_2 (\boldsymbol{\theta}^{(2)},\boldsymbol{\theta}'^{(2)})$, $\bar{r}_1 (\boldsymbol{\theta}'^{(2)},\boldsymbol{\theta}^{(2)})$, $\underline{r}_2 (\boldsymbol{\theta}'^{(2)},\boldsymbol{\theta}^{(2)})$ such that the rectangles \begin{eqnarray*} \mathcal{R}_2 &=& [\theta_1^{(2)}, \theta_{1}^{(2)}+ \bar{r}_1 (\boldsymbol{\theta}^{(2)},\boldsymbol{\theta}'^{(2)})] \times [a_2, a_2+\bar{r}_2 (\boldsymbol{\theta}^{(2)},\boldsymbol{\theta}'^{(2)})]\\ \mathcal{R}_2' &=& [\theta_{1}'^{(2)}, \theta_{1}'^{(2)}+\bar{r}_1 (\boldsymbol{\theta}'^{(2)},\boldsymbol{\theta}^{(2)})] \times [b_2-\underline{r}_2 (\boldsymbol{\theta}'^{(2)},\boldsymbol{\theta}^{(2)}), b_2] \end{eqnarray*} are respectively included in the Hellinger balls $$\mathcal{B} \big(\boldsymbol{\theta}^{(2)}, \kappa^{1/2} h (f_{\boldsymbol{\theta}^{(2)}}, f_{\boldsymbol{\theta}'^{(2)}}) \big) \quad \text{and} \quad \mathcal{B} \big(\boldsymbol{\theta}'^{(2)}, \kappa^{1/2} h (f_{\boldsymbol{\theta}^{(2)}}, f_{\boldsymbol{\theta}'^{(2)}}) \big).$$ We then define $\boldsymbol{\theta}^{(3)},\boldsymbol{\theta}'^{(3)} \in \mathbb{R}^2$ by \begin{eqnarray*} \boldsymbol{\theta}^{(3)} &=& \begin{cases} \boldsymbol{\theta}^{(2)} + (\bar{r}_1 (\boldsymbol{\theta}^{(2)},\boldsymbol{\theta}'^{(2)}), 0) & \text{if $T ({\boldsymbol{\theta}^{(2)}},{\boldsymbol{\theta}'^{(2)}}) \geq 0$} \\ \boldsymbol{\theta}^{(2)} & \text{otherwise} \end{cases} \\ \boldsymbol{\theta}'^{(3)} &=& \begin{cases} \boldsymbol{\theta}'^{(2)} + (\bar{r}_1 (\boldsymbol{\theta}'^{(2)},\boldsymbol{\theta}^{(2)}), 0) & \text{if $T ({\boldsymbol{\theta}^{(2)}},{\boldsymbol{\theta}'^{(2)}}) \leq 0$} \\ \boldsymbol{\theta}'^{(2)} & \text{otherwise.} \end{cases} \end{eqnarray*} If $\theta_{1}^{(3)} \geq b_1$ or if $\theta_{1}'^{(3)} \geq b_1$ we stop the construction and set $N = 2$. In the contrary case, we repeat this step to build the vectors $\boldsymbol{\theta}^{(4)}$ and $\boldsymbol{\theta}'^{(4)}$. We repeat these steps until the construction stops. Let $N$ be the integer for which $\theta_{1}^{(N+1)} \geq b_1$ or $\theta_{1}'^{(N+1)} \geq b_1$. 
We then define \begin{eqnarray*} a_2' &=& \begin{cases} a_2 + \min_{1 \leq j \leq N} \bar{r}_2 (\boldsymbol{\theta}^{(j)},\boldsymbol{\theta}'^{(j)}) & \text{if $\theta_1^{(N+1)} \geq b_1$} \\ a_2 & \text{otherwise} \end{cases} \\ b_2' &=& \begin{cases} b_2 - \min_{1 \leq j \leq N} \underline{r}_2 (\boldsymbol{\theta}'^{(j)},\boldsymbol{\theta}^{(j)}) & \text{if $\theta_{1}'^{(N+1)} \geq b_1$}\\ b_2 & \text{otherwise} \end{cases} \end{eqnarray*} and set $\Theta_{i+1} = [a_1,b_1] \times [a_2',b_2']$. \begin{figure}[H] \begin{minipage}[c]{0.5\linewidth} \centering \begin{tikzpicture} [scale = 4] \draw (0,0) rectangle (1,1); \draw[black,fill=gray!20] (0,0) rectangle (1/6,1.4/6); \draw[black,fill=gray!20] (1/6,0) rectangle (3/6,1/6); \draw[black,fill=gray!20] (3/6,0) rectangle (4.8/6,1.7/6); \draw[black,fill=gray!20] (4.8/6,0) rectangle (1,0.8/6); \draw[black,fill=gray!20] (0,5/6) rectangle (2.2/6,1); \draw (0,0) node{$\bullet$}; \draw (1/6,0) node{$\bullet$}; \draw (0,0) node[below ]{$\boldsymbol{\theta}^{(1)}$}; \draw (1/6,0) node[below ]{$\boldsymbol{\theta}^{(2)}$}; \draw (1/12,1.4/12) node{$\mathcal{R}_1$}; \draw (3/6,0) node{$\bullet$}; \draw (3/6,0) node[below ]{$\boldsymbol{\theta}^{(3)} \!\! = \!\! \boldsymbol{\theta}^{(4)}$}; \draw (1/6+2/12,1/12) node{$\mathcal{R}_2$}; \draw (4.8/6,0) node{$\bullet$}; \draw (4.8/6,0) node[below ]{$\boldsymbol{\theta}^{(5)}$}; \draw (3/6+1.8/12,1.7/12) node{$\mathcal{R}_4$}; \draw (4.8/6+1.2/12,0.8/12) node{$\mathcal{R}_5$}; \draw (0,1) node{$\bullet$}; \draw (0,1) node [above ] {$\boldsymbol{\theta}'^{(1)} \!\!=\!\! \boldsymbol{\theta}'^{(2)} \!\!=\!\! \boldsymbol{\theta}'^{(3)}$}; \draw (2.2/6,1) node {$\bullet$}; \draw (2.2/6,1) node [above right] {$\boldsymbol{\theta}'^{(4)}$}; \draw (2.2/12,5.5/6) node{$\mathcal{R}_3'$}; \end{tikzpicture} \end{minipage}\hfill \begin{minipage}[c]{0.5\linewidth} \centering \begin{tikzpicture} [scale = 4] \draw (0,0) rectangle (1,1); \draw[black,fill=gray!20] (0,0.8/6) rectangle (1,1); \draw (0.5,0.8/6+5.2/12) node{$\Theta_{i+1}$}; \end{tikzpicture} \end{minipage} \caption{Illustration when $N = 5$, $T (\boldsymbol{\theta}^{(i)},\boldsymbol{\theta}'^{(i)}) > 0$ for $i \in \{1,2,4,5\}$ and $T (\boldsymbol{\theta}^{(3)},\boldsymbol{\theta}'^{(3)}) < 0$.} \end{figure} In this figure, the set $$\Theta_i \setminus \left( \mathcal{R}_1 \cup \mathcal{R}_2 \cup \mathcal{R}_3' \cup \mathcal{R}_4 \cup \mathcal{R}_5 \right)$$ is a confidence set for $\boldsymbol{\theta}_0$. The set $\Theta_{i+1}$ is the smallest rectangle containing this confidence set. \paragraph{Remark 1.} We define $\Theta_{i+1}$ as a rectangle to make the procedure easier to implement. \paragraph{Remark 2.} By using a similar strategy, we can also build a confidence set $\Theta_{i+1}$ of the form $\Theta_{i+1} = [a_1',b_1'] \times [a_2,b_2]$ where $a_1',b_1'$ are such that $b_1'-a_1' < b_1-a_1$. We shall build the rectangles $\Theta_i$ until their diameters become sufficiently small. The estimator we shall consider will be the center of the last rectangle built. \subsection{Procedure.} In the general case, that is when $d \geq 2$, we build a finite sequence of rectangles $(\Theta_i)_i$ of $\Theta = \prod_{j=1}^d [m_j,M_j]$. 
We consider $\kappa > 0$ and for all rectangle $\mathcal{C} = \prod_{j=1}^{d} [a_j,b_j] \subset\Theta$, vectors $\boldsymbol{\theta}, \boldsymbol{\theta}' \in \mathcal{C}$, and integers $j \in \{1,\dots,d\}$, we introduce positive numbers $\bar{r}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$, $\underline{{r}}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$ such that \begin{eqnarray} \qquad \mathcal{C} \bigcap \prod_{j=1}^d \left[ \theta_j - \underline{r}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}'), \theta_j+\bar{r}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}') \right] \subset \mathcal{B} \big(\boldsymbol{\theta}, \kappa^{1/2} h (f_{\boldsymbol{\theta}}, f_{\boldsymbol{\theta}'}) \big) \label{eqInclusionRC1}. \end{eqnarray} We also consider for all $j \in \{1,\dots,d\}$, $\underline{R}_{\mathcal{C},j} \geq \underline{R}_j$ such that \begin{eqnarray} \label{eqMinorationRCj} h^2 \left(f_{\boldsymbol{\theta}}, f_{\boldsymbol{\theta}'} \right) \geq \sup_{1 \leq j \leq d} \underline{R}_{\mathcal{C},j} |\theta_j - \theta'_j|^{\alpha_j} \quad \text{for all $\boldsymbol{\theta}$, $\boldsymbol{\theta}' \in \mathcal{C}$.} \end{eqnarray} We finally consider for all $j \in \{1,\dots,d\}$, an one-to-one map $\psi_j$ from $\{1,\dots,d-1\}$ into $\{1,\dots,d\}\setminus \{j\}$. We set $\Theta_1 =\Theta$. Given $\Theta_i$, we define $\Theta_{i+1}$ by using the following algorithm. \begin{algorithm}[H] \caption{Definition of $\Theta_{i+1}$ from $\Theta_i$} \label{algoConstructionDimQuelquonqueAvant} \begin{algorithmic}[1] \REQUIRE $\Theta_i = \prod_{j=1}^d [a_j, b_j]$ \STATE Choose ${k} \in \{1,\dots,d\}$ such that $$\underline{R}_{\Theta_i, k} (b_{k} - a_{k})^{\alpha_k} = \max_{1 \leq j \leq d} \underline{R}_{\Theta_i,j} (b_{j} - a_{j})^{\alpha_j}$$ \STATE $\boldsymbol{\theta} = (\theta_1,\dots,\theta_d) \leftarrow (a_1,\dots,a_d)$, $\boldsymbol{\theta}' = (\theta_1',\dots,\theta_d') \leftarrow \boldsymbol{\theta}$ and $\theta'_{{k}} \leftarrow b_{k}$ \STATE ${\varepsilon_j} \leftarrow \bar{r}_{\Theta_i,j} (\boldsymbol{\theta},\boldsymbol{\theta}')$ and $\varepsilon_j'\leftarrow \bar{r}_{\Theta_i,j} (\boldsymbol{\theta}',\boldsymbol{\theta})$ for all $j \neq k$ \STATE $\varepsilon_{k} \leftarrow (b_k - a_k)/2$ and $\varepsilon_k' \leftarrow (b_k - a_k)/2$ \REPEAT \STATE $\text{Test} \leftarrow T(\boldsymbol{\theta},\boldsymbol{\theta}') $ \STATE For all $j$, $\bar{r}_j \leftarrow \bar{r}_{\Theta_i,j} (\boldsymbol{\theta},\boldsymbol{\theta}') $, $\bar{r}_j' \leftarrow \bar{r}_{\Theta_i,j} (\boldsymbol{\theta}',\boldsymbol{\theta})$, $\underline{r}_j' \leftarrow \underline{r}_{\Theta_i,j} (\boldsymbol{\theta}',\boldsymbol{\theta})$ \IF {$\text{Test} \geq 0$} \STATE $\varepsilon_{\psi_{k}(1)} \leftarrow \bar{r}_{\psi_k(1)} $ \STATE $\varepsilon_{\psi_k(j)} \leftarrow \min (\varepsilon_{\psi_k(j)}, \bar{r}_{\psi_k(j)})$ for all $j \in \{2,\dots, d-1\}$ \STATE $\varepsilon_{k} \leftarrow \min (\varepsilon_{k}, \bar{r}_{k})$ \STATE $J \leftarrow \left\{1 \leq j \leq d -1,\; \theta_{\psi_{k}(j)} + \varepsilon_{\psi_{k}(j)} < b_{\psi_{k}(j)} \right\}$ \IF {$J \neq \emptyset$} \STATE $\mathfrak{j}_{\text{min}} \leftarrow \min J$ \STATE $\theta_{\psi_{k}(j)} \leftarrow a_{\psi_{k}(j)}$ for all $j \leq \mathfrak{j}_{\text{min}} - 1$ \STATE $\theta_{\psi_{k}(\mathfrak{j}_{\text{min}})} \leftarrow \theta_{\psi_{k}(\mathfrak{j}_{\text{min}} )} + \varepsilon_{\psi_{k}(\mathfrak{j}_{\text{min}})}$ \ELSE \STATE $\mathfrak{j}_{\text{min}} \leftarrow d$ \ENDIF \ENDIF \IF 
{$\text{Test} \leq 0$} \STATE $\varepsilon_{\psi_{k}(1)}' \leftarrow \bar{r}'_{\psi_k(1)}$ \STATE $\varepsilon_{\psi_{k}(j)}' \leftarrow \min (\varepsilon_{\psi_{k}(j)}', \bar{r}'_{\psi_{k}(j)})$ for all $j \in \{2,\dots, d-1\}$ \STATE $\varepsilon_{k}' \leftarrow \min (\varepsilon_{k}', \underline{r}'_{k})$ \STATE $J' \leftarrow \left\{1 \leq j' \leq d -1,\; \theta_{\psi_{k}(j')}' + \varepsilon_{\psi_{k}(j')}' < b_{\psi_{k}(j')}\right\}$ \IF {$J' \neq \emptyset$} \STATE $\mathfrak{j}_{\text{min}}' \leftarrow \min J'$ \STATE $\theta_{\psi_{k}(j)}' \leftarrow a_{\psi_{k}(j)}$ for all $j \leq \mathfrak{j}_{\text{min}}' - 1$ \STATE $\theta_{\psi_{k}(\mathfrak{j}_{\text{min}} ')}' \leftarrow \theta_{\psi_{k}(\mathfrak{j}_{\text{min}} ')}' + \varepsilon_{\psi_{k}(\mathfrak{j}_{\text{min}} ')}'$ \ELSE \STATE $\mathfrak{j}_{\text{min}}' \leftarrow d$ \algstore{coupemonalgodimd} \end{algorithmic} \end{algorithm} \begin{algorithm} \begin{algorithmic}[1] \algrestore{coupemonalgodimd} \ENDIF \ENDIF \UNTIL {$\mathfrak{j}_{\text{min}} = d$ or $\mathfrak{j}_{\text{min}} ' = d$} \IF {$ \mathfrak{j}_{\text{min}} = d$} \STATE $a_{k} \leftarrow a_{k} + \varepsilon_{k}$ \ENDIF \IF { $\mathfrak{j}_{\text{min}} ' = d$} \STATE $b_{k} \leftarrow b_{k} - \varepsilon_{k}' $ \ENDIF \STATE $\Theta_{i+1} \leftarrow \prod_{j=1}^{d} [a_j, b_j] $ \RETURN $\Theta_{i+1}$ \end{algorithmic} \end{algorithm} We now consider $d$ positive numbers $\eta_1,\dots,\eta_d$ and use the algorithm below to build our estimator~$\boldsymbol{\hat{\theta}}$. \begin{algorithm}[H] \caption{ } \label{algoConstructionDimQuelquonque} \begin{algorithmic}[1] \STATE Set $a_j = m_j$ and $b_j = M_j$ for all $j \in \{1,\dots,d\}$ \STATE $i \leftarrow 0$ \WHILE{There exists $j \in \{1,\dots,d\}$ such that $ b_j - a_j > \eta_j$} \STATE $i \leftarrow i + 1$ \STATE Build $\Theta_i$ and set $a_1,\dots,a_d$, $b_1,\dots,b_d$ such that $\prod_{j=1}^d [a_j,b_j] = \Theta_i$ \ENDWHILE \RETURN $$\boldsymbol{\hat{\theta}} = \left(\frac{a_1 + b_1}{2}, \dots, \frac{a_d + b_d}{2} \right)$$ \end{algorithmic} \end{algorithm} The parameters $\kappa$, $t_j$, $\eta_j$ $\bar{{r}}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$, $\underline{{r}}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$ can be interpreted as in dimension $1$. We have introduced a new parameter $\underline{R}_{\mathcal{C},j}$ whose role is to control more accurately the Hellinger distance in order to make the procedure faster. Sometimes, the computation of this parameter is difficult in practice, in which case we can avoid it by proceeding as follows. For all $\boldsymbol{\theta}, \boldsymbol{\theta}' \in \Theta$, \begin{eqnarray*} h^2 \left(f_{\boldsymbol{\theta}}, f_{\boldsymbol{\theta}'} \right) \geq \sup_{1 \leq j \leq d} \underline{R} |\theta_j - \theta'_j|^{\alpha_j} \end{eqnarray*} where $\underline{R } = \min_{1 \leq j \leq d} \underline{R}_j$, which means that we can always assume that $\underline{R}_j$ is independent of~$j$. Choosing $\underline{R}_{\Theta_i, j} = \underline{R}$ simplifies the only line where this parameter is involved (line 1 of Algorithm~\ref{algoConstructionDimQuelquonqueAvant}). It becomes $$ (b_{k} - a_{k})^{\alpha_k} = \max_{1 \leq j \leq d} (b_{j} - a_{j})^{\alpha_j}$$ and $k$ can be calculated without computing $\underline{R}$. \subsection{Risk bound.} \label{SectionPropEstimateurDimD} Suitable values of the parameters lead to a risk bound for our estimator~$\boldsymbol{\hat{\theta}}$. 
\begin{thm} \label{ThmPrincipalDimQuelquonque} Suppose that Assumption~\ref{HypSurLeModeleQuelquonqueDebutDimD} holds. Let $\bar{\kappa}$ be defined by (\ref{eqEsperanceTest}), and assume that $\kappa \in (0, \bar{\kappa})$, and for all $j \in \{1,\dots,d\}$, $t_j \in (0,d^{1/\alpha_j}]$, $$\epsilon_j = t_j (\widebar{R}_j n)^{-1/\alpha_j}, \quad \eta_j \in [\epsilon_j, d^{1/\alpha_j} (\widebar{R}_j n)^{-1/\alpha_j}].$$ Suppose that for every rectangle $\mathcal{C}$ and all $\boldsymbol{\theta}, \boldsymbol{\theta}' \in \mathcal{C}$, the numbers $\bar{r}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$, $\underline{{r}}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$ are such that (\ref{eqInclusionRC1}) holds. Then, for all $\xi > 0$, the estimator $\boldsymbol{\hat{\theta}}$ built by Algorithm~\ref{algoConstructionDimQuelquonque} satisfies $$\P \left[ C h^2(s,f_{\boldsymbol{\hat{\theta}}}) \geq h^2(s, \mathscr{F}) + \frac{d}{n} + \xi \right] \leq e^{- n \xi} $$ where $C > 0$ depends only on $\kappa$, $(\widebar{R}_j/\underline{R}_j)_{1 \leq j \leq d}$, $(\alpha_j)_{1 \leq j \leq d}$, $(t_j)_{1 \leq j \leq d}$. \end{thm} \paragraph{Remark.} A look at the proof of the theorem shows that Theorem~\ref{ThmPrincipalDimQuelquonqueDansOverview} follows from this theorem when $t_j = d^{1/\alpha_j}$ and $\eta_j = \epsilon_j$. \subsection{Choice of $\bar{r}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$ and $\underline{{r}}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$.} The parameters $\bar{{r}}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$, $\underline{{r}}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$ are involved in the procedure and must be calculated. They may be chosen arbitrarily provided that the rectangle \begin{eqnarray*} \qquad \mathcal{C} \bigcap \prod_{j=1}^d \left[ \theta_j - \underline{r}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}'), \theta_j+\bar{r}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}') \right] \end{eqnarray*} is included in the Hellinger ball $\mathcal{B} \big(\boldsymbol{\theta}, \kappa^{1/2} h (f_{\boldsymbol{\theta}}, f_{\boldsymbol{\theta}'}) \big) $. Indeed, the theoretical properties of the estimator given by the preceding theorem do not depend on these values. However, the numerical complexity of the algorithm strongly depends on these parameters. The algorithm computes fewer tests when $\bar{{r}}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$, $\underline{{r}}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$ are large and it is thus in our interest to define them as the largest possible numbers. In the cases where a direct computation of these numbers is difficult, we may use a strategy similar to the one adopted in the unidimensional case (Section~\ref{SectionDefinitionRminBarreDim1}).
\paragraph{First way.} We may consider $(\widebar{R}_{\mathcal{C},1},\dots,\widebar{R}_{\mathcal{C},d}) \in \prod_{j=1}^d (0,\widebar{R}_j]$ such that \begin{eqnarray} \label{eqDefintionRhojDimD} h^2 \left(f_{\boldsymbol{\theta}}, f_{\boldsymbol{\theta}'} \right) \leq \sup_{1 \leq j \leq d} \widebar{R}_{\mathcal{C},j} |\theta_j - \theta'_j|^{\alpha_j} \quad \text{for all $\boldsymbol{\theta}, \boldsymbol{\theta}' \in \mathcal{C}$} \end{eqnarray} and define them by \begin{eqnarray} \label{DefinitionRDimensionQuelquonque2} \bar{r}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}') = \underline{r}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}') = \left( (\kappa / \widebar{R}_{\mathcal{C},j}) h^2(f_{\boldsymbol{\theta}}, f_{\boldsymbol{\theta}'}) \right)^{1/\alpha_j}. \end{eqnarray} One can verify that this definition implies (\ref{eqInclusionRC1}). \paragraph{Second way.} An alternative definition that does not involve the Hellinger distance is \begin{eqnarray} \label{DefinitionRDimensionQuelquonque1} \bar{r}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}') = \underline{r}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}') = \left( (\kappa /\widebar{R}_{\mathcal{C},j}) \sup_{1 \leq k \leq d} \underline{R}_{\mathcal{C},k} |\theta'_k - \theta_k|^{\alpha_k} \right)^{1/\alpha_j}. \end{eqnarray} Similarly, one can check that (\ref{eqInclusionRC1}) holds. The complexity of our procedure can be upper-bounded as soon as $\bar{r}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$ and $\underline{{r}}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$ are large enough. \begin{prop} \label{PropCalculComplexiteDimenQuelquonque} Suppose that the assumptions of Theorem~\ref{ThmPrincipalDimQuelquonque} are fulfilled and that for all $j \in \{1,\dots,d\}$, every rectangle $\mathcal{C}$ and all $\boldsymbol{\theta}, \boldsymbol{\theta}' \in \mathcal{C}$, the numbers $\bar{r}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$, $\underline{{r}}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$ are larger than \begin{eqnarray} \label{eqSurretRDimD} \left( (\kappa /\widebar{R}_{\mathcal{C},j}) \sup_{1 \leq k \leq d} \underline{R}_{\mathcal{C},k} |\theta'_k - \theta_k|^{\alpha_k} \right)^{1/\alpha_j} \end{eqnarray} where the $\underline{R}_{\mathcal{C},j} $ and $\widebar{R}_{\mathcal{C},j}$ are respectively such that (\ref{eqMinorationRCj}) and (\ref{eqDefintionRhojDimD}) hold and such that $\underline{R}_{\mathcal{C},j} \geq \underline{R}_j$ and $\widebar{R}_{\mathcal{C},j} \leq \widebar{R}_j$. Then, the number of tests computed to build the estimator $\boldsymbol{\hat{\theta}}$ is smaller than $$ 4 \left[\prod_{j=1}^d \left(1 + \left( {\widebar{R}_j} / {(\kappa \underline{R}_j)}\right)^{1/\alpha_j} \right) \right] \left[\sum_{j=1}^d \max \left\{1, \log \left( \frac{M_j-m_j}{\eta_j} \right) \right\} \right].$$ \end{prop} \section{Simulations for multidimensional models} \label{SectionSimulationDimD} In this section, we complete the simulation study of Section~\ref{SectionSimuDim1} by dealing with multidimensional models. \subsection{Models.} We propose to work with the following models.
\begin{ExempleSimuDimd} $\mathscr{F} = \left\{f_{(m,\sigma)}, \, (m,\sigma) \in [-5, 5]\times [1/5,5]\right\} $ where $$f_{(m,\sigma)} (x) = \frac{1}{\sqrt{2 \pi} \sigma} \exp \left(- \frac{(x-m)^2}{2 \sigma^2} \right) \quad \text{ for all $x \in \mathbb{R}$.}$$ \end{ExempleSimuDimd} \begin{ExempleSimuDimd} $\mathscr{F} = \left\{f_{(m,\sigma)}, \, (m,\sigma) \in [-5, 5]\times [1/5,5]\right\} $ where $$f_{(m,\sigma)} (x) = \frac{ \sigma}{\pi \left((x-m)^2 + \sigma^2\right)} \quad \text{for all $x \in \mathbb{R}$.}$$ \end{ExempleSimuDimd} \begin{ExempleSimuDimd} $\mathscr{F} = \left\{f_{(a,b)}, \, (a,b) \in [0.6, 10]\times [0.1,20]\right\} $ where $$f_{(a,b)} (x) =\frac{b^{a}}{ \Gamma(a)} x^{a - 1} e^ {-b x} \mathbbm{1}_{[0,+\infty)}(x) \quad \text{for all $x \in \mathbb{R}$} $$ where $\Gamma$ is the Gamma function. \end{ExempleSimuDimd} \begin{ExempleSimuDimd} $\mathscr{F} = \left\{f_{(a,b)}, \, (a,b) \in [0.7, 20]\times [0.7,20]\right\} $ where $$f_{(a,b)} (x) = \frac{1}{B(a,b)} x^{a - 1} (1-x)^{b - 1} \mathbbm{1}_{[0,1]} (x) \quad \text{for all $x \in \mathbb{R}$,}$$ where $B(a,b)$ is the Beta function. \end{ExempleSimuDimd} \begin{ExempleSimuDimd} $\mathscr{F} = \left\{f_{(m,\lambda)}, \, (m,\lambda) \in [-1, 1]\times [1/5,5]\right\} $ where $$f_{(m,\lambda)} (x) = \lambda e^{- \lambda (x - m)} \mathbbm{1}_{[m,+\infty)} (x) \quad \text{for all $x \in \mathbb{R}$.}$$ \end{ExempleSimuDimd} \begin{ExempleSimuDimd} $\mathscr{F} = \left\{f_{(m,r)}, \, (m,r) \in [-0.5, 0.5]\times [0.1,2]\right\} $ where $$f_{(m,r)} (x) = r^{-1} \mathbbm{1}_{[m,m + r]} (x) \quad \text{for all $x \in \mathbb{R}$.}$$ \end{ExempleSimuDimd} We shall use our procedure with $t_j = 0$ for all $j \in \{1,\dots,d\}$ (that is $\Theta_{\text{dis}} = \Theta$, $\mathscr{F}_{\text{dis}} = \mathscr{F}$ and $T ({\boldsymbol{\theta}}, {\boldsymbol{\theta}'}) = \widebar{T}(f_{\boldsymbol{\theta}}, f_{\boldsymbol{\theta}'})$) and with $\kappa = 0.9 \bar{\kappa}$, $\eta_j = (M_j-m_j) 10^{-6}$. In order to avoid technicalities, we postpone to Section~\ref{SectionImplementationProcedure} the values of $\underline{R}_{\mathcal{C},j}$, $\bar{{r}}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$, $\underline{{r}}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$ chosen in this simulation study. \subsection{Simulations when $s \in \mathscr{F}$.} We simulate $N = 10^4$ independent samples $(X_1,\dots,X_n)$ according to a density $s \in \mathscr{F}$ and use our procedure to estimate $s$ from each of these samples. In Examples~1, 2, 5, 6 the density is $s = f_{(0,1)}$, in Example~3, $ s = f_{(2,3)}$, and in Example~4, $s = f_{(3,4)}$. The results are the following.
\begin{center} \begin{longtable}{|c||c|c|c|c|c|} \hline & & $n = 25$ & $n = 50$ & $n = 75$ & $n = 100$ \\ \hline Example 1 & $\widehat{R}_{10^4}(\boldsymbol{\hat{\theta}})$ & 0.011 & 0.0052 & 0.0034 & 0.0025 \\* & $\widehat{R}_{10^4}(\boldsymbol{\tilde{\theta}}_{\text{mle}})$ & 0.011 & 0.0052 & 0.0034 & 0.0025 \\* & $\mathcal{\widehat{R}}_{10^4,\text{rel}} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$ & $10^{-4}$ & $6 \cdot 10^{-5}$ & $-5 \cdot 10^{-8}$ & $3 \cdot 10^{-8}$ \\* \cline{2-6} & $\widehat{\text{std}}_{10^4} ({\boldsymbol{\hat{\theta}}})$ & 0.012 & 0.0055 & 0.0035 & 0.0026 \\* & $\widehat{\text{std}}_{10^4} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$ & 0.012 & 0.0055 & 0.0035 & 0.0026 \\* \hline Example 2 & $\widehat{R}_{10^4}(\boldsymbol{\hat{\theta}})$ & 0.011 & 0.0052 & 0.0034 & 0.0026 \\* & $\widehat{R}_{10^4}(\boldsymbol{\tilde{\theta}}_{\text{mle}})$ & 0.011 & 0.0052 & 0.0034 & 0.0026 \\* & $\mathcal{\widehat{R}}_{10^4,\text{rel}} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$ & $10^{-8}$ & $10^{-8}$ & $-10^{-9}$ & $4 \cdot 10^{-8}$ \\* \cline{2-6} & $\widehat{\text{std}}_{10^4} ({\boldsymbol{\hat{\theta}}})$ & 0.011 & 0.0052 & 0.0035 & 0.0026 \\* & $\widehat{\text{std}}_{10^4} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$ & 0.011 & 0.0052 & 0.0035 & 0.0026 \\* \hline Example 3 & $\widehat{R}_{10^4}(\boldsymbol{\hat{\theta}})$ & 0.011 & 0.0052 & 0.0034 & 0.0025 \\* & $\widehat{R}_{10^4}(\boldsymbol{\tilde{\theta}}_{\text{mle}})$ & 0.011 & 0.0052 & 0.0034 & 0.0025 \\* & $\mathcal{\widehat{R}}_{10^4,\text{rel}} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$ & $2 \cdot 10^{-4}$ & $2 \cdot 10^{-5}$ & $10^{-7}$ & $10^{-7}$ \\* \cline{2-6} & $\widehat{\text{std}}_{10^4} ({\boldsymbol{\hat{\theta}}})$ & 0.011 & 0.0053 & 0.0035 & 0.0026 \\* & $\widehat{\text{std}}_{10^4} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$ & 0.011 & 0.0053 & 0.0035 & 0.0026 \\* \hline Example 4 & $\widehat{R}_{10^4}(\boldsymbol{\hat{\theta}})$ & 0.011 & 0.0052 & 0.0034 & 0.0025 \\* & $\widehat{R}_{10^4}(\boldsymbol{\tilde{\theta}}_{\text{mle}})$ & 0.011 & 0.0052 & 0.0034 & 0.0025 \\* & $\mathcal{\widehat{R}}_{10^4,\text{rel}} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$ & $2 \cdot 10^{-4}$ & $10^{-5}$ & $2 \cdot 10^{-7}$ & $ 2\cdot 10^{-7}$ \\* \cline{2-6} & $\widehat{\text{std}}_{10^4} ({\boldsymbol{\hat{\theta}}})$ & 0.011 & 0.0053 & 0.0035 & 0.0026 \\* & $\widehat{\text{std}}_{10^4} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$ & 0.011 & 0.0053 & 0.0035 & 0.0026 \\* \hline Example 5 &$\widehat{R}_{10^4}(\boldsymbol{\hat{\theta}})$ & 0.025 & 0.012 & 0.0082 & 0.0063 \\* & $\widehat{R}_{10^4}(\boldsymbol{\tilde{\theta}}_{\text{mle}})$ & 0.025 & 0.012 & 0.0083 & 0.0063 \\* & $\mathcal{\widehat{R}}_{10^4,\text{rel}} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$ & 0.020 & -0.0020 & -0.0073 & 0.0012 \\* \cline{2-6} & $\widehat{\text{std}}_{10^4} ({\boldsymbol{\hat{\theta}}})$ & 0.025 & 0.012 & 0.0079 & 0.0061 \\* & $\widehat{\text{std}}_{10^4} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$ & 0.021 & 0.011 & 0.0070 & 0.0053 \\* \hline Example 6 & $\widehat{R}_{10^4}(\boldsymbol{\hat{\theta}})$ & 0.040 & 0.019 & 0.013 & 0.0098 \\* & $\widehat{R}_{10^4}(\boldsymbol{\tilde{\theta}}_{\text{mle}})$ & 0.039 & 0.020 & 0.013 & 0.010 \\* & $\mathcal{\widehat{R}}_{10^4,\text{rel}} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$ & 0.010 & -0.015 & -0.018 & -0.016 \\* \cline{2-6} & $\widehat{\text{std}}_{10^4} ({\boldsymbol{\hat{\theta}}})$ & 0.033 & 0.016 & 0.011 & 0.0080 \\* & $\widehat{\text{std}}_{10^4} 
(\boldsymbol{\tilde{\theta}}_{\text{mle}})$ & 0.027 & 0.014 & 0.0093 & 0.0069 \\* \hline \end{longtable} \end{center} The risk of our estimator is very close to that of the m.l.e. In the first four examples they are even almost indistinguishable. As in dimension~1, this can be explained by the fact that the first four models are regular enough to ensure that our estimator is very close to the maximum likelihood one. To see this, let, for $c \in \{0.99,0.999, 1\}$, $q_{c}$ be the $c$-quantile of the random variable $$\max \left\{ \big|\hat{\theta}_1 - \tilde{\theta}_{\text{mle},1} \big|, \big|\hat{\theta}_2 - \tilde{\theta}_{\text{mle},2} \big| \right\}$$ and let $\hat{q}_c$ be its empirical version based on the $10^4$ samples. These empirical quantiles are very small as shown in the table below. \begin{center} \begin{longtable}{|c||c|c|c|c|c|} \hline & & $n = 25$ & $n = 50$ & $n = 75$ & $n = 100$ \\ \hline Example 1 & $\hat{q}_{0.99}$ & $9 \cdot 10^{-7}$ & $9 \cdot 10^{-7}$ & $9 \cdot 10^{-7}$ & $9\cdot 10^{-7}$ \\* & $\hat{q}_{0.999} $ & 0.023 & $ 10^{-6}$ & $9\cdot 10^{-7}$ & $10^{-6}$ \\ & $\hat{q}_{1} $ & 0.22 & 0.072 & $10^{-6}$ & $ 10^{-6}$ \\ \hline Example 2 & $\hat{q}_{0.99}$ & $4 \cdot 10^{-7}$ & $4 \cdot 10^{-7}$ & $4 \cdot 10^{-7}$ & $4 \cdot 10^{-7}$\\* & $\hat{q}_{0.999} $ & $5 \cdot 10^{-7}$ & $5 \cdot 10^{-7}$ & $5 \cdot 10^{-7}$ & $5 \cdot 10^{-7}$ \\ & $\hat{q}_{1}$ & $5 \cdot 10^{-7}$ & $5 \cdot 10^{-7}$ & $5 \cdot 10^{-7}$ & $5 \cdot 10^{-7}$\\ \hline Example 3 & $\hat{q}_{0.99}$ & $7 \cdot 10^{-7}$ & $7 \cdot 10^{-7}$ & $7 \cdot 10^{-7}$ & $7 \cdot 10^{-7}$ \\* & $\hat{q}_{0.999} $ & $9 \cdot 10^{-7}$ & $8 \cdot 10^{-7}$ & $8 \cdot 10^{-7}$ & $8 \cdot 10^{-7}$ \\ & $\hat{q}_{1} $ & $1.5$ & $0.29$ & $10^{-6}$ & $9 \cdot 10^{-7}$ \\ \hline Example 4 & $\hat{q}_{0.99}$ & $10^{-6}$ & $10^{-6}$ & $10^{-6}$ & $10^{-6}$ \\ & $\hat{q}_{0.999} $ & $2\cdot 10^{-6}$ & $10^{-6}$ & $10^{-6}$ & $10^{-6}$ \\ & $\hat{q}_{1}$ & $1.6$ & $0.27$ & $2 \cdot 10^{-6}$ & $2 \cdot 10^{-6}$ \\ \hline \end{longtable} \end{center} \subsection{Simulations when $s \not \in \mathscr{F}$.} \label{SectionSimuRobustesseDimD} Contrary to the maximum likelihood estimator, our estimator possesses robustness properties. The goal of this section is to illustrate them. Suppose that we observe $n = 100$ i.i.d.\ random variables $X_1,\dots, X_n$ whose distribution we wish to estimate by using the Gaussian model $$\mathscr{F} = \left\{f_{(m,\sigma)}, \, (m,\sigma) \in [-10,10]\times [0.5,10]\right\} $$ where $f_{(m,\sigma)}$ is the density of a Gaussian random variable with mean $m$ and variance $\sigma^2$. The preceding section shows that when the unknown underlying density $s$ belongs to $\mathscr{F}$, our estimator is as good as the m.l.e. We now consider $p \in [0,1]$ and define $s = s_p$ where $$s_p (x) = (1-p) f_{(-5,1)}(x) + p f_{(5,1)}(x) \quad \text{for all $x \in \mathbb{R}$.}$$ This density belongs to the model only if $p = 0$ or $p = 1$ and we are interested in comparing our estimator to the m.l.e.\ when $p \neq 0$ and $p \neq 1$. We then proceed as in Section~\ref{SectionRobustessDim1}.
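Before describing the experiment, we give a minimal Python sketch of its ingredients; it is meant as an illustration only. The constrained m.l.e.\ is assumed to be the sample mean and sample standard deviation projected onto the parameter box (an assumption: on these data the constraints are essentially never active), and $h^2(s_p, f_{(m,\sigma)})$ is evaluated by numerical integration, with the normalization $h^2(f,g) = 1 - \int \sqrt{fg} \, d\mu$ assumed above.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def gauss_pdf(x, m, s):
    return np.exp(-(x - m)**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)

def s_p(x, p):
    # the contaminated density (1-p) f_{(-5,1)} + p f_{(5,1)}
    return (1 - p) * gauss_pdf(x, -5.0, 1.0) + p * gauss_pdf(x, 5.0, 1.0)

def sample_sp(n, p, rng):
    # draw n i.i.d. observations from s_p
    z = rng.random(n) < p
    return np.where(z, rng.normal(5.0, 1.0, n), rng.normal(-5.0, 1.0, n))

def hellinger2_sp(p, m, s):
    # h^2(s_p, f_{(m,sigma)}) by numerical integration
    bc, _ = quad(lambda x: np.sqrt(s_p(x, p) * gauss_pdf(x, m, s)),
                 -np.inf, np.inf)
    return 1.0 - bc

def mle_gauss(x):
    # Gaussian m.l.e. projected onto [-10,10] x [0.5,10] (assumption)
    return np.clip(x.mean(), -10.0, 10.0), np.clip(x.std(), 0.5, 10.0)
\end{verbatim}
Averaging $h^2(s_p, f_{\boldsymbol{\tilde{\theta}}^{(p,i)}})$ over the $1000$ samples described below gives the quantity $\widehat{R}_{p,1000}(\boldsymbol{\tilde{\theta}}_{\text{mle}})$ defined hereafter; replacing \texttt{mle\_gauss} by an implementation of our procedure gives $\widehat{R}_{p,1000}(\boldsymbol{\hat{\theta}})$.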
For many values of~$p \in [0,1]$, we simulate $N = 1000$ samples of $100$ random variables with density~$s_p$ and measure the quality of the estimator $\boldsymbol{\tilde{\theta}}$ by \begin{eqnarray*} \widehat{R}_{p,N} (\boldsymbol{\tilde{\theta}}) = \frac{1}{1000} \sum_{i=1}^{1000} h^2 (s_p, f_{\boldsymbol{\tilde{\theta}}^{(p,i)}}) \end{eqnarray*} where $\boldsymbol{\tilde{\theta}}^{(p,i)}$ is the value of~$\boldsymbol{\tilde{\theta}}$ corresponding to the $i^{\text{\tiny th}}$ sample whose density is $s_p$. We compute this function for $\boldsymbol{\tilde{\theta}} \in \{\boldsymbol{\tilde{\theta}}_{\text{mle}}, \boldsymbol{\hat{\theta}}\}$ and obtain the graph below. \begin{figure}[H] \includegraphics[scale=1]{FonctionRobustesseLoiNormalen100Arxiv.eps} \caption{Red: $p \mapsto H^2(s_p, \mathscr{F})$. Blue: $p \mapsto \widehat{R}_{p,1000} ({\boldsymbol{\hat{\theta}}}) $. Green: $p \mapsto \widehat{R}_{p,1000} (\boldsymbol{\tilde{\theta}}_{\text{mle}}) $.} \end{figure} This figure shows that the risk of our estimator is smaller than that of the m.l.e.\ when $p$ is close to $0$ or $1$ (say $p \leq 0.2$ or $p \geq 0.8$) and is similar otherwise. For the Gaussian model, our estimator may thus be interpreted as a robust version of the m.l.e. \section{Proofs} \subsection{A preliminary result.} \label{SectionProcedureGenerique} In this section, we prove a result that will allow us to establish Theorems~\ref{ThmPrincipalDim1} and~\ref{ThmPrincipalDimQuelquonque}. Given $\Theta' \subset \Theta$, we recall that in this paper $\text{diam} \Theta'$ stands for $$\text{diam} \Theta' = \sup_{\boldsymbol{\theta}, \boldsymbol{\theta}' \in \Theta'} h^2(f_{\boldsymbol{\theta}},f_{\boldsymbol{\theta}'}).$$ \begin{thm} \label{ThmPrincipal} Suppose that Assumption~\ref{HypSurLeModeleQuelquonqueDebutDimD} holds. Let $\kappa \in (0,\bar{\kappa})$, $N \in \mathbb{N}^{\star}$ and let $\Theta_1, \dots, \Theta_N$ be $N$ non-empty subsets of $\Theta$ such that $\Theta_1 =\Theta$.
For all $j \in \{1,\dots,d\}$, let $t_j$ be an arbitrary number in $(0,d^{1/\alpha_j}]$ and $$\epsilon_j = t_j (\widebar{R}_j n)^{-1/\alpha_j}.$$ Assume that for all $i \in \{1,\dots, N-1\}$, there exists $L_i \geq 1$ such that for all $\ell \in \{1,\dots,L_i\}$ there exist two elements $\boldsymbol{\theta}^{(i,\ell)} \neq \boldsymbol{\theta}'^{(i,\ell)}$ of $\Theta_i$ such that $$\Theta_i \setminus \bigcup_{\ell=1}^{L_i} B^{(i,\ell)} \subset \Theta_{i+1} \subset \Theta_i$$ where $B^{(i,\ell)}$ is the set defined by \begin{eqnarray*} B^{(i,\ell)} = \begin{cases} A^{(i,\ell)} & \text{if $T ({\boldsymbol{\theta}^{(i,\ell)}},{\boldsymbol{\theta}'^{(i,\ell)}}) > 0$} \\ A'^{(i,\ell)} & \text{if $T ({\boldsymbol{\theta}^{(i,\ell)}},{\boldsymbol{\theta}'^{(i,\ell)}}) < 0$} \\ A^{(i,\ell)} \bigcup A'^{(i,\ell)} & \text{if $T ({\boldsymbol{\theta}^{(i,\ell)}},{\boldsymbol{\theta}'^{(i,\ell)}}) = 0$} \end{cases} \end{eqnarray*} where $ A^{(i,\ell)}$ and $ A'^{(i,\ell)}$ are the Hellinger balls defined by \begin{eqnarray*} A^{(i,\ell)} &=& \left\{ \boldsymbol{\theta}'' \in \Theta_i, \, h^2 (f_{\boldsymbol{\theta}''}, f_{\boldsymbol{\theta}^{(i,\ell)}}) \leq \kappa h^2 (f_{\boldsymbol{\theta}^{(i,\ell)}}, f_{\boldsymbol{\theta}'^{(i,\ell)}}) \right\} \\ A'^{(i,\ell)} &=& \left\{ \boldsymbol{\theta}'' \in \Theta_i, \, h^2 (f_{\boldsymbol{\theta}''}, f_{\boldsymbol{\theta}'^{(i,\ell)}}) \leq \kappa h^2 (f_{\boldsymbol{\theta}^{(i,\ell)}}, f_{\boldsymbol{\theta}'^{(i,\ell)}}) \right\} \end{eqnarray*} and where $T$ is the functional defined by (\ref{eqDefinitionTDimD}). Suppose moreover that there exists $\kappa_0 > 0$ such that $$\kappa_0 \text{diam} (\Theta_i) \leq \inf_{1 \leq \ell \leq L_i} h^2(f_{\boldsymbol{\theta}^{(i,\ell)}},f_{\boldsymbol{\theta}'^{(i,\ell)}}) \quad \text{for all $i \in \{1,\dots,N-1\}$}.$$ Then, for all $\xi > 0$, $$\P \left[ C \inf_{\boldsymbol{\theta} \in \Theta_N} h^2(s,f_{\boldsymbol{\theta}}) \geq h^2(s, \mathscr{F}) + \frac{D_{\mathscr{F}}}{n} + \xi \right] \leq e^{- n \xi}$$ where $C > 0$ depends only on $\kappa, \kappa_0$, where \begin{eqnarray*} D_{\mathscr{F}} = \max \left\{d, \sum_{j=1}^d \log \left(1 + t_j^{-1} \left( (d/ \boldsymbol{\bar{\alpha}}) (c \widebar{R}_j / \underline{R}_j) \right)^{1/\alpha_j}\right) \right\} \end{eqnarray*} and where $c$ depends only on $\kappa$. \end{thm} This result says that if $(\Theta_i)_{1 \leq i \leq N}$ is a finite sequence of subsets of $\Theta$ satisfying the assumptions of the theorem, then there exists an estimator $\boldsymbol{\hat{\theta}}$ with values in $\Theta_N$ which satisfies, for all $\xi > 0$, $$\P \left[ C h^2(s,f_{\boldsymbol{\hat{\theta}}}) \geq h^2(s, \mathscr{F}) + \frac{D_{\mathscr{F}}}{n} + \xi \right] \leq e^{- n \xi}.$$ We shall show that Algorithms~\ref{AlgorithmDim1} and~\ref{algoConstructionDimQuelquonque} correspond to suitable choices of sets $\Theta_i$.
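Let us also record an elementary consequence of this deviation bound, which eases the comparison with the Monte Carlo risks reported in the simulation studies above: since $\mathbb{E}[Z] \leq a + \int_0^{+\infty} \P \left[Z \geq a + \xi \right] \mathrm{d}\xi$ for any non-negative random variable $Z$ and any $a \geq 0$, integrating the above inequality with respect to $\xi$ yields the risk bound $$C \, \mathbb{E} \left[h^2(s,f_{\boldsymbol{\hat{\theta}}})\right] \leq h^2(s, \mathscr{F}) + \frac{D_{\mathscr{F}} + 1}{n}.$$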
\subsection{Proof of Theorem~\ref{ThmPrincipal}.} Let $\boldsymbol{\theta}_0 \in\Theta$ be such that $$h^2(s, f_{\boldsymbol{\theta}_0}) \leq h^2(s, \mathscr{F}) + 1/n.$$ Define $C_{\kappa}$ such that $(1 + \sqrt{C_{\kappa}})^2 = \kappa^{-1}$ and $\varepsilon \in (1/\sqrt{2},1)$ such that $$\frac{\left(1 + \min \left( \frac{1 - \varepsilon}{2}, \varepsilon - \frac{1}{\sqrt{2}} \right) \right)^4 (1 + \varepsilon) + \min \left( \frac{1 - \varepsilon}{2}, \varepsilon - \frac{1}{\sqrt{2}} \right) }{1 - \varepsilon - \min \left( \frac{1 - \varepsilon}{2}, \varepsilon -\frac{1}{\sqrt{2}} \right)} = C_{\kappa}.$$ We then set \begin{eqnarray*} \beta &=& \min \big\{ (1 - \varepsilon)/{2}, \varepsilon - 1/{\sqrt{2}} \big\} \\ \gamma &=& (1+\beta) (1 + \beta^{-1}) \left[1-\varepsilon + (1 +\beta) (1+\varepsilon)\right]\\ c &=& 24 \big(2 + \sqrt{2}/6\, (\varepsilon - 1/\sqrt{2} ) \big) / (\varepsilon - 1/\sqrt{2} )^2 \cdot 10^{3} \\ \delta &=& (1+\beta^{-1}) \left[1 - \varepsilon + (1+\beta)^3 (1+\varepsilon) \right] + c (1 + \beta)^2. \end{eqnarray*} The proof of the theorem is based on the lemma below whose proof is delayed to Section~\ref{SectionPreuveLemmeControle}. \begin{lemme} \label{LemmeControleH} For all $\xi > 0$, there exists an event $\Omega_{\xi}$ such that $\P(\Omega_{\xi}) \geq 1 - e^{-n \xi}$ and on which the following assertion holds: if there exists $p \in \{1,\dots,N-1\}$ such that $\boldsymbol{\theta}_0 \in \Theta_p$ and such that \begin{eqnarray} \label{eqControleH} \gamma h^2(s, f_{\boldsymbol{\theta}_0}) + \delta\left(\frac{D_{\mathscr{F}}}{n} + \xi \right) < \beta \inf_{\ell \in \{1,\dots,L_p\}} \left(h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}^{(p,\ell)}}) + h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}'^{(p,\ell)}}) \right) \end{eqnarray} then $\boldsymbol{\theta}_0 \in \Theta_{p+1}$. \end{lemme} The result of Theorem~\ref{ThmPrincipal} is straightforward if $\boldsymbol{\theta}_0 \in \Theta_{N}$, and we shall thus assume that $\boldsymbol{\theta}_0 \not \in \Theta_N$. Set $$p = \max \left\{i \in \{1,\dots,N-1\}, \, \boldsymbol{\theta}_0 \in \Theta_i\right\}.$$ Let $\boldsymbol{\theta}_0'$ be any element of $\Theta_N$. Then, $\boldsymbol{\theta}_0'$ belongs to $\Theta_p$ and \begin{eqnarray*} h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}_0'}) &\leq& \sup_{\boldsymbol{\theta}, \boldsymbol{\theta}' \in \Theta_{p}} h^2(f_{\boldsymbol{\theta}},f_{\boldsymbol{\theta}'}) \\ &\leq& \kappa_0^{-1} \inf_{\ell \in \{1,\dots,L_p\}} h^2(f_{\boldsymbol{\theta}^{(p,\ell)}},f_{\boldsymbol{\theta}'^{(p,\ell)}}) \\ &\leq& 2 \kappa_0^{-1} \inf_{\ell \in \{1,\dots,L_p\}} \left(h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}^{(p,\ell)}}) + h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}'^{(p,\ell)}}) \right). \end{eqnarray*} By definition of $p$, $\boldsymbol{\theta}_0 \in \Theta_p \setminus \Theta_{p+1}$. We then derive from the above lemma that on $\Omega_{\xi}$, \begin{eqnarray*} \beta \inf_{\ell \in \{1,\dots,L_p\}} \left(h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}^{(p,\ell)}}) + h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}'^{(p,\ell)}}) \right) \leq \gamma h^2(s, f_{\boldsymbol{\theta}_0}) + \delta \frac{D_{\mathscr{F}} + n \xi}{n}. 
\end{eqnarray*} Hence, \begin{eqnarray*} h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}_0'}) \leq \frac{2}{ \beta \kappa_0} \left( \gamma h^2(s, f_{\boldsymbol{\theta}_0}) + \delta\frac{D_{\mathscr{F}} + n \xi}{n} \right) \end{eqnarray*} and thus \begin{eqnarray*} h^2(s,f_{\boldsymbol{\theta}_0'}) &\leq& 2 h^2 \left(s, f_{\boldsymbol{\theta}_0} \right) + 2 h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}_0'}) \\ &\leq& \left(2 + \frac{4 \gamma}{\beta \kappa_0 }\right) h^2(s, f_{\boldsymbol{\theta}_0}) + \frac{4 \delta }{n \beta \kappa_0} \left( D_{\mathscr{F}} + n \xi \right). \end{eqnarray*} Since $ h^2(s, f_{\boldsymbol{\theta}_0}) \leq h^2(s, \mathscr{F}) + 1/n$, there exists $C > 0$ such that \begin{eqnarray*} C h^2(s,f_{\boldsymbol{\theta}_0'}) \leq h^2(s, \mathscr{F}) + \frac{D_{\mathscr{F}} }{n} + \xi \quad \text{on $\Omega_{\xi}$.} \end{eqnarray*} This concludes the proof. \qed \subsubsection{Proof of Lemma~\ref{LemmeControleH}} \label{SectionPreuveLemmeControle} We use the claim below, whose proof is postponed to Section~\ref{SectionPreuveClaimOmegaXi}. \begin{Claim} \label{ClaimOmegaXi} For all $\xi > 0$, there exists an event~$\Omega_{\xi}$ such that $\P(\Omega_{\xi}) \geq 1 - e^{-n \xi}$ and on which, for all $f,f' \in \mathscr{F}_{\text{dis}}$, $$\left(1 - \varepsilon \right) h^2(s,f' ) + \frac{\widebar{T} (f,f') }{\sqrt{2}} \leq \left(1 + \varepsilon \right) h^2(s,f ) + c \frac{ \left( D_{\mathscr{F}} + n \xi \right)}{n}$$ (see Section~\ref{SectionDefTestDimd} for the definition of $\mathscr{F}_{\text{dis}}$). \end{Claim} Let $p \in \{1,\dots,N-1\}$ be such that $\boldsymbol{\theta}_0 \in \Theta_p$ and (\ref{eqControleH}) holds. Let $\ell \in \{1,\dots,L_p\}$. The aim is to show that $\boldsymbol{\theta}_0 \not \in B^{(p,\ell)}$. Without loss of generality, we assume that $T ({\boldsymbol{\theta}^{(p,\ell)}},{\boldsymbol{\theta}'^{(p,\ell)}}) = \widebar{T}(f_{\pi(\boldsymbol{\theta}^{(p,\ell) })}, f_{\pi( \boldsymbol{\theta}'^{(p,\ell)})} )$ is non-negative, and prove that $\boldsymbol{\theta}_0 \not \in A^{(p,\ell)}$. On the event $\Omega_{\xi}$, we deduce from the claim that $$\left(1 - \varepsilon \right) h^2(s,f_{\pi(\boldsymbol{\theta}'^{(p,\ell)})} ) \leq \left(1 + \varepsilon \right) h^2(s,f_{\pi(\boldsymbol{\theta}^{(p,\ell)})} ) + c \frac{ \left( D_{\mathscr{F}} + n \xi \right)}{n}.$$ Consequently, by using the triangle inequality and the above inequality, \begin{eqnarray*} \left(1 - \varepsilon \right) h^2(f_{\boldsymbol{\theta}_0},f_{\pi(\boldsymbol{\theta}'^{(p,\ell)})} ) &\leq& \left(1 + \beta^{-1} \right) \left(1 - \varepsilon\right) h^2(s,f_{\boldsymbol{\theta}_{0}} ) \\ & & \quad + (1 + \beta) \left(1 - \varepsilon \right) h^2(s,f_{\pi(\boldsymbol{\theta}'^{(p,\ell)})} ) \\ &\leq& \left(1 + \beta^{-1} \right) \left(1 - \varepsilon \right) h^2(s,f_{\boldsymbol{\theta}_{0}} ) \\ & & + (1 + \beta) \left[\left(1 + \varepsilon \right) h^2(s,f_{\pi(\boldsymbol{\theta}^{(p,\ell)})} ) + c \frac{ \left( D_{\mathscr{F}} + n \xi \right)}{n} \right].
\end{eqnarray*} Since $h^2(s,f_{\pi(\boldsymbol{\theta}^{(p,\ell)})} ) \leq (1+\beta^{-1}) h^2(s,f_{\boldsymbol{\theta}_0}) + (1+\beta) h^2(f_{\boldsymbol{\theta}_0},f_{\pi(\boldsymbol{\theta}^{(p,\ell)})})$, \begin{eqnarray} \label{eqPreuveThmPrincipalm} \left(1 - \varepsilon \right) h^2(f_{\boldsymbol{\theta}_0},f_{\pi(\boldsymbol{\theta}'^{(p,\ell)})} ) &\leq& (1 + \beta^{-1}) \left[1-\varepsilon + (1 +\beta) (1+\varepsilon)\right] h^2(s, f_{\boldsymbol{\theta}_0}) \\ & & \quad + (1+\beta)^2 (1 + \varepsilon) h^2(f_{\boldsymbol{\theta}_0},f_{\pi(\boldsymbol{\theta}^{(p,\ell)})}) \nonumber\\ & & \quad + \frac{c (1 + \beta) \left( D_{\mathscr{F}} + n \xi \right)}{n}. \nonumber \end{eqnarray} Note now that for all $\boldsymbol{\theta} \in\Theta$, \begin{eqnarray*} h^2(f_{\boldsymbol{\theta}},f_{\pi(\boldsymbol{\theta})} ) \leq \sup_{1 \leq j \leq d} \widebar{R}_j \epsilon_j^{\alpha_j} \leq d/n. \end{eqnarray*} By using the triangle inequality, \begin{eqnarray*} h^2(f_{\boldsymbol{\theta}_0},f_{\pi(\boldsymbol{\theta}^{(p,\ell)})} ) &\leq& (1+\beta) h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}^{(p,\ell)}}) + d (1 + \beta^{-1})/n \\ h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}'^{(p,\ell)}} ) &\leq& (1+\beta) h^2(f_{\boldsymbol{\theta}_0},f_{\pi(\boldsymbol{\theta}'^{(p,\ell)})}) + d (1+\beta^{-1})/n. \end{eqnarray*} We deduce from these two inequalities and from (\ref{eqPreuveThmPrincipalm}) that \begin{eqnarray*} \left(1 - \varepsilon \right) h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}'^{(p,\ell)}} ) \!\!\!\!\! &\leq& \!\!\!\!\! \gamma h^2(s, f_{\boldsymbol{\theta}_0}) + (1+\beta)^4 (1 + \varepsilon) h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}^{(p,\ell)}}) \\ & & \!\!\!\!\!\!\!\!\!\! + \frac{d (1+\beta^{-1}) \left[1 - \varepsilon + (1+\beta)^3 (1+\varepsilon) \right] + c (1 + \beta)^2 \left( D_{\mathscr{F}} + n \xi \right)}{n}. \end{eqnarray*} Since $D_{\mathscr{F}} \geq d$ and $\delta \geq 1$, \begin{eqnarray*} \left(1 - \varepsilon \right) h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}'^{(p,\ell)}} ) &\leq& \gamma h^2(s, f_{\boldsymbol{\theta}_0}) + \frac{\delta \left( D_{\mathscr{F}} + n \xi \right)}{n} \\ & & \quad + (1+\beta)^4 (1 + \varepsilon) h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}^{(p,\ell)}}). \end{eqnarray*} By using (\ref{eqControleH}), \begin{eqnarray*} \left(1 - \varepsilon \right) h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}'^{(p,\ell)}} ) &<& \beta \left(h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}^{(p,\ell)}}) + h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}'^{(p,\ell)}}) \right) \\ & & \quad + (1+\beta)^4 (1 + \varepsilon) h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}^{(p,\ell)}}) \end{eqnarray*} and thus $$h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}'^{(p,\ell)}} ) < C_{\kappa} h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}^{(p,\ell)}} ).$$ Finally, \begin{eqnarray*} h^2(f_{\boldsymbol{\theta}^{(p,\ell)}},f_{\boldsymbol{\theta}'^{(p,\ell)}} ) &\leq& \left(h(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}^{(p,\ell)}} ) + h(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}'^{(p,\ell)}} ) \right)^2 \\ &<& \left(1 + \sqrt{C_{\kappa}} \right)^2 h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}^{(p,\ell)}}) \\ &<& \kappa^{-1} h^2(f_{\boldsymbol{\theta}_0},f_{\boldsymbol{\theta}^{(p,\ell)}}) \end{eqnarray*} which leads to $\boldsymbol{\theta}_0 \not \in A^{(p,\ell)}$ as desired.
\subsubsection{Proof of Claim~\ref{ClaimOmegaXi}} \label{SectionPreuveClaimOmegaXi} This claim ensues from the work of~\cite{BaraudMesure}. More precisely, we derive from Proposition~2 of~\cite{BaraudMesure} that for all $f,f' \in \mathscr{F}_{\text{dis}}$, $$\left(1 - \frac{1}{\sqrt{2}}\right) h^2(s,f') + \frac{\widebar{T} (f,f')}{\sqrt{2}} \leq \left(1 + \frac{1}{\sqrt{2}}\right) h^2(s,f ) + \frac{\widebar{T}(f,f') - \mathbb{E} \left[\widebar{T}(f,f')\right]}{\sqrt{2}}.$$ Let $z = \varepsilon - 1/\sqrt{2} \in (0, 1-1/\sqrt{2})$. We define $\Omega_{\xi}$ by $$\Omega_{\xi} = \bigcap_{f,f' \in \mathscr{F}_{\text{dis}} } \left[ \frac{\widebar{T}(f,f') - \mathbb{E} \left[\widebar{T}(f,f')\right]}{ z \left(h^2(s,f ) + h^2(s,f' ) \right) + c (D_{\mathscr{F}} + n \xi)/n} \leq \sqrt{2} \right].$$ On this event, we have $$\left(1 - \varepsilon\right) h^2(s,f') + \frac{\widebar{T} (f,f')}{\sqrt{2}} \leq \left(1 + \varepsilon \right) h^2(s,f ) + c \frac{D_{\mathscr{F}} + n \xi}{n}$$ and it remains to prove that $\P(\Omega_{\xi}^c) \leq e^{- n \xi}$. The following claim shows that Assumption~3 of~\cite{BaraudMesure} is fulfilled. \begin{Claim} \label{ClaimDimMetriqueF2} Let \begin{eqnarray*} \tau &=& 4 \frac{2 + \frac{n \sqrt{2}}{6} z}{\frac{n^2}{6} z^2} \\ \eta^2_{\mathscr{F}} &=& \max \left\{3 d e^{4}, \sum_{j=1}^d \log \left(1 + 2 t_j^{-1} \left( (d/ \boldsymbol{\bar{\alpha}}) (c \widebar{R}_j / \underline{R}_j) \right)^{1/\alpha_j}\right) \right\}. \end{eqnarray*} Then, for all $r \geq 2 \eta_{\mathscr{F}}$, \begin{eqnarray} \label{EqControlDimensioNFdis2} \left|\mathscr{F}_{\text{dis}} \cap \mathcal{B}_h \left(s, r \sqrt{\tau} \right)\right| \leq \exp (r^2 / 2) \end{eqnarray} where $\mathcal{B}_h(s, r \sqrt{\tau})$ is the Hellinger ball centered at $s$ with radius $r \sqrt{\tau}$ defined by $$\mathcal{B}_h (s, r \sqrt{\tau}) = \left\{f \in \L^1_+ (\mathbb{X},\mu), \; h^2 (s,f) \leq r^2 \tau \right\}.$$ \end{Claim} We then derive from Lemma~1 of~\cite{BaraudMesure} that for all $\xi > 0$ and $y^2 \geq \tau \left( 4 \eta^2_{\mathscr{F}} + n \xi \right)$, $$\P \left[ \sup_{f,f' \in \mathscr{F}_{\text{dis}} } \frac{ \left(\widebar{T}(f,f') - \mathbb{E} \left[\widebar{T}(f,f')\right] \right)/ \sqrt{2}}{\left( h^2(s,f ) + h^2(s,f' ) \right) \vee y^2} \geq z \right] \leq e^{- n \xi}.$$ Notice now that $4 \eta_{\mathscr{F}}^2 \leq 10^{3} D_{\mathscr{F}}$ and $10^{3} \tau \leq c /n$. This means that we can choose $$y^2 = c \left(D_{\mathscr{F}} + n \xi \right)/n,$$ which concludes the proof of Claim~\ref{ClaimOmegaXi}. \begin{proof} [Proof of Claim~\ref{ClaimDimMetriqueF2}] If $\mathscr{F}_{\text{dis}}\cap \mathcal{B}_h(s, r \sqrt{\tau}) = \emptyset$, (\ref{EqControlDimensioNFdis2}) holds. 
Otherwise, there exists $\boldsymbol{\theta}_0 = (\theta_{0,1},\dots,\theta_{0,d}) \in \Theta_{\text{dis}}$ such that $h^2(s,f_{\boldsymbol{\theta}_0}) \leq r^2 \tau$ and thus $$|\mathscr{F}_{\text{dis}}\cap \mathcal{B}_h (s, r \sqrt{\tau}) | \leq |\mathscr{F}_{\text{dis}} \cap \mathcal{B}_h (f_{\boldsymbol{\theta}_0}, 2 r \sqrt{\tau})|.$$ Now, \begin{eqnarray*} |\mathscr{F}_{\text{dis}} \cap \mathcal{B}_h (f_{\boldsymbol{\theta}_0}, 2 r \sqrt{\tau})| &=& \left|\left\{f_{\boldsymbol{\theta}}, \, \boldsymbol{\theta} \in \Theta_{\text{dis}}, \, h^2(f_{\boldsymbol{\theta}}, f_{\boldsymbol{\theta}_0}) \leq 4 r^2\tau \right\} \right| \\ &\leq& \left|\left\{\boldsymbol{\theta} \in \Theta_{\text{dis}}, \; \forall j \in \{1,\dots,d\}, \; \underline{R}_j |\theta_j - \theta_{0,j} |^{\alpha_j} \leq 4 r^2 \tau \right\} \right|. \end{eqnarray*} Let $k_{0,j} \in \mathbb{N}$ be such that $\theta_{0,j} = m_j + k_{0,j} \epsilon_j$. Then, \begin{eqnarray*} |\mathscr{F}_{\text{dis}} \cap \mathcal{B}_h (f_{\boldsymbol{\theta}_0}, 2 r \sqrt{\tau})| &\leq& \prod_{j=1}^d \left|\left\{k_j \in \mathbb{N}, \; |k_j - k_{0,j} | \leq \big( {4 r^2 \tau}/{ \underline{R}_j } \big)^{1/\alpha_j} \epsilon_j^{-1} \right\} \right| \\ &\leq& \prod_{j=1}^d \left(1 + 2 \epsilon_j^{-1} \left( {4 r^2 \tau}/{ \underline{R}_j } \right)^{1/\alpha_j} \right). \end{eqnarray*} By using $4 \tau \leq c/n$ and $\epsilon_j = t_j (\widebar{R}_j n)^{-1/\alpha_j}$, \begin{eqnarray*} |\mathscr{F}_{\text{dis}} \cap \mathcal{B}_h (f_{\boldsymbol{\theta}_0}, 2 r \sqrt{\tau})| \leq \prod_{j=1}^d \left(1 + 2 t_j^{-1} \left(r^2 c \widebar{R}_j/{\underline{R}_j} \right)^{1/\alpha_j}\right). \end{eqnarray*} If $\boldsymbol{\bar{\alpha}} \leq e^{-4}$, one can check that $\eta_{\mathscr{F}}^2 \geq 4 d/ \boldsymbol{\bar{\alpha}}$ (since $c \geq 1$ and $t_j^{-1} \geq d^{-1/\alpha_j}$). If now $\boldsymbol{\bar{\alpha}} \geq e^{-4}$, then $\eta_{\mathscr{F}}^2 \geq 3 d e^{4} \geq 3 d/ \boldsymbol{\bar{\alpha}}$. In particular, we always have $r^2 \geq 10 \left( d/ \boldsymbol{\bar{\alpha}}\right)$. We derive from the weaker inequality $r^2 \geq d/ \boldsymbol{\bar{\alpha}}$ that \begin{eqnarray*} |\mathscr{F}_{\text{dis}} \cap \mathcal{B}_h (f_{\boldsymbol{\theta}_0}, 2 r \sqrt{\tau})| &\leq& \left(\frac{r^2}{ d/ \boldsymbol{\bar{\alpha}} }\right) ^{d/ \boldsymbol{\bar{\alpha}}} \prod_{j=1}^d \left(1 + 2 t_j^{-1} \left( ( d/ \boldsymbol{\bar{\alpha}}) (c \widebar{R}_j / \underline{R}_j) \right)^{1/\alpha_j}\right) \\ &\leq& \exp \left( \frac{\log \left(r^2/( d/ \boldsymbol{\bar{\alpha}}) \right)}{r^2/( d/ \boldsymbol{\bar{\alpha}})} r^2\right) \exp \left(\eta_{\mathscr{F}}^2 \right). \end{eqnarray*} We then deduce from the inequalities $r^2/(d/ \boldsymbol{\bar{\alpha}}) \geq 10$ and $\eta_{\mathscr{F}}^2 \leq r^2/4$ that \begin{eqnarray*} |\mathscr{F}_{\text{dis}} \cap \mathcal{B}_h (f_{\boldsymbol{\theta}_0}, 2 r \sqrt{\tau})| \leq \exp \left( r^2/4\right) \exp \left( r^2/4\right) \leq \exp (r^2/2) \end{eqnarray*} as desired. \end{proof} \subsection{Proof of Theorem~\ref{ThmPrincipalDim1}.} This theorem is a consequence of the following result. \begin{thm} Suppose that the assumptions of Theorem~\ref{ThmPrincipalDim1} hold.
For all $\xi > 0$, the estimator $\hat{\theta}$ built in Algorithm~\ref{AlgorithmDim1} satisfies \begin{eqnarray*} \P \left[ C h^2(s,f_{\hat{\theta}}) \geq h^2(s, \mathscr{F}) + \frac{D_{\mathscr{F}}}{n} + \xi \right] \leq e^{- n \xi} \end{eqnarray*} where $C > 0$ depends only on $\kappa, \widebar{R}/\underline{R}$, where $$D_{\mathscr{F}} = \max \left\{1, \log \left(1 + t^{-1} \left(c \widebar{R}/(\alpha \underline{R})\right)^{1/\alpha} \right) \right\}$$ and where $c$ depends on $\kappa$ only. Besides, if $$h^2(f_{\theta_2}, f_{\theta_2'}) \leq h^2(f_{\theta_1}, f_{\theta_1'}) \quad \text{for all $m \leq \theta_1 \leq \theta_2 < \theta_2' \leq \theta_1' \leq M$}$$ then $C$ depends only on~$\kappa$. \end{thm} \begin{proof} The theorem follows from Theorem~\ref{ThmPrincipal} on page~\pageref{ThmPrincipal} applied with $\Theta_i = [\theta^{(i)}, \theta'^{(i)}]$ and $L_i = 1$. Note that $$\text{diam} \Theta_i \leq \widebar{R} \big(\theta'^{(i)} - \theta^{(i)}\big)^{\alpha} \leq ({\widebar{R}}/{\underline{R}}) h^2(f_{\theta^{(i)}}, f_{\theta'^{(i)}} )$$ which implies that the assumptions of Theorem~\ref{ThmPrincipal} are fulfilled with $\kappa_0 = \underline{R}/\widebar{R}$. Consequently, $$\P \left[ C \inf_{\theta \in \Theta_N } h^2(s,f_{\theta}) \geq h^2(s, \mathscr{F}) + \frac{D_{\mathscr{F}}}{n} + \xi \right] \leq e^{- n \xi}$$ where $\Theta_N = [\theta^{(N)}, \theta'^{(N)}]$ is such that $\theta'^{(N)} - \theta^{(N)} \leq \eta$. Now, for all $\theta \in \Theta_N$, \begin{eqnarray*} h^2(s,f_{\hat{\theta}}) &\leq& 2 h^2(s,f_{\theta}) + 2 h^2(f_{\theta},f_{\hat{\theta}}) \\ &\leq& 2 h^2(s,f_{\theta}) + 2 \widebar{R} \eta^{\alpha} \end{eqnarray*} hence, \begin{eqnarray*} h^2(s,f_{\hat{\theta}}) \leq 2 \inf_{\theta \in \Theta_N} h^2(s,f_{\theta}) + 2 /n \end{eqnarray*} which establishes the first part of the theorem. The second part follows from the fact that, under the additional assumption, $\text{diam} \Theta_i \leq h^2(f_{\theta^{(i)}}, f_{\theta'^{(i)}} )$, so that the assumptions of Theorem~\ref{ThmPrincipal} are fulfilled with~$\kappa_0 = 1$. \end{proof} \subsection{Proof of Proposition~\ref{PropCalculComplexiteDimen1}.} For all $i \in \{1,\dots,N-1\}$, \begin{eqnarray*} \theta^{(i+1)}&\in& \left\{ \theta^{(i)}, \theta^{(i)} + \min \left(\bar{r} (\theta^{(i)},\theta'^{(i)}) , (\theta'^{(i)} - \theta^{(i)})/2 \right) \right\} \\ \theta'^{(i+1)} &\in& \left\{\theta'^{(i)}, \theta'^{(i)} - \min \left(\underline{r} (\theta^{(i)},\theta'^{(i)}), (\theta'^{(i)} - \theta^{(i)})/2 \right) \right\}. \end{eqnarray*} Since $\bar{r} (\theta^{(i)},\theta'^{(i)})$ and $\underline{r} (\theta^{(i)},\theta'^{(i)})$ are larger than $$(\kappa \underline{R}/ \widebar{R})^{1/\alpha} (\theta'^{(i)} - \theta^{(i)} ),$$ we have \begin{eqnarray*} \theta'^{(i+1)} - \theta^{(i+1)} \leq \max \left\{1 - {\left(\kappa \underline{R}/\widebar{R} \right)^{1/\alpha}} , {1}/{2} \right\} (\theta'^{(i)} - \theta^{(i)} ). \end{eqnarray*} By induction, we derive that for all $i \in \{1,\dots,N-1\}$, \begin{eqnarray*} \theta'^{(i+1)} - \theta^{(i+1)} \leq \left(\max \left\{1 - {\left(\kappa \underline{R}/\widebar{R} \right)^{1/\alpha}} , {1}/{2} \right\} \right)^{i} (M -m).
\end{eqnarray*} The procedure thus requires the computation of at most $N$ tests, where $N$ is the smallest integer such that $$\left( \max \left\{1 - {\left(\kappa \underline{R}/\widebar{R} \right)^{1/\alpha}} , {1}/{2} \right\} \right)^{N} (M-m) \leq \eta $$ that is, $$N\geq \frac{\log \left( (M- m )/\eta \right)}{ -\log \left[ \max \left\{1 - {\left(\kappa \underline{R}/\widebar{R} \right)^{1/\alpha}} , 1/2 \right\} \right]}.$$ We conclude by using the inequality $-1/\log (1-x) \leq 1/x$ for all $x \in (0,1)$. \qed \subsection{Proofs of Proposition~\ref{PropCalculComplexiteDimenQuelquonque} and Theorem~\ref{ThmPrincipalDimQuelquonque}.} \subsubsection{Rewriting of Algorithm~\ref{algoConstructionDimQuelquonque}.} \label{SectionReecritureProcedure} We rewrite the algorithm to introduce some notation that will be essential to prove Proposition~\ref{PropCalculComplexiteDimenQuelquonque} and Theorem~\ref{ThmPrincipalDimQuelquonque}. \begin{algorithm}[H] \caption{Construction of $\Theta_{i+1}$ from $\Theta_i$.} \begin{algorithmic}[1] \REQUIRE $\Theta_i = \prod_{j=1}^d [a_j^{(i)}, b_j^{(i)}]$ \STATE Choose $k^{(i)} \in \{1,\dots,d\}$ such that $$\underline{R}_{\Theta_i, k^{(i)}} \big(b_{{k^{(i)}}}^{(i)} - a_{{k^{(i)}}}^{(i)}\big)^{\alpha_{k^{(i)}}} = \max_{1 \leq j \leq d} \underline{R}_{\Theta_i, j} \big(b_{j}^{(i)} - a_{j}^{(i)}\big)^{\alpha_j}.$$ \STATE $\boldsymbol{\theta}^{(i,1)} = (a_1^{(i)},\dots,a_d^{(i)})$, $\boldsymbol{\theta}'^{(i,1)} = \boldsymbol{\theta}^{(i,1)}$ and $\theta'^{(i,1)}_{{k^{(i)}}} = b_{k^{(i)}}^{(i)}$. \STATE ${\varepsilon_j}^{(i,0)} = \bar{r}_{\Theta_i,j} (\boldsymbol{\theta}^{(i,1)},\boldsymbol{\theta}'^{(i,1)})$ and ${\varepsilon_j}'^{(i,0)} = \bar{r}_{\Theta_i,j} (\boldsymbol{\theta}'^{(i,1)},\boldsymbol{\theta}^{(i,1)})$ for all $j \neq k^{(i)}$ \STATE $\varepsilon_{k^{(i)}}^{(i,0)} = (b_{k^{(i)}}^{(i)} - a_{k^{(i)}}^{(i)})/2$ and $\varepsilon_{k^{(i)}}'^{(i,0)} = (b_{k^{(i)}}^{(i)} - a_{k^{(i)}}^{(i)})/2$ \FORALL {$\ell \geq 1$} \STATE $\boldsymbol{\theta}^{(i,\ell+1)} = \boldsymbol{\theta}^{(i,\ell)}$ and $\boldsymbol{\theta}'^{(i,\ell+1)} = \boldsymbol{\theta}'^{(i,\ell)}$ \IF {$T(\boldsymbol{\theta}^{(i,\ell)},\boldsymbol{\theta}'^{(i,\ell)}) \geq 0$} \STATE $\varepsilon_{\psi_{k^{(i)}}(1)}^{(i,\ell)} = \bar{r}_{\Theta_i,\psi_{k^{(i)}}(1)} (\boldsymbol{\theta}^{(i,\ell)},\boldsymbol{\theta}'^{(i,\ell)})$ \STATE $\varepsilon_{\psi_{k^{(i)}} (j)}^{(i,\ell)} = \min (\varepsilon_{\psi_{k^{(i)}} (j)}^{(i,\ell-1)}, \bar{r}_{\Theta_i,\psi_{k^{(i)}}(j)} (\boldsymbol{\theta}^{(i,\ell)},\boldsymbol{\theta}'^{(i,\ell)}))$, for all $j \in \{2,\dots, d-1\}$ \STATE $\varepsilon_{k^{(i)}}^{(i,\ell)} = \min (\varepsilon_{k^{(i)}}^{(i,\ell-1)}, \bar{r}_{\Theta_i,k^{(i)}} (\boldsymbol{\theta}^{(i,\ell)},\boldsymbol{\theta}'^{(i,\ell)}))$ \STATE $\mathfrak{J}^{(i,\ell)} = \left\{1 \leq j \leq d -1,\; \theta_{\psi_{k^{(i)}}(j)}^{(i,\ell)} + \varepsilon_{\psi_{k^{(i)}}(j)}^{(i,\ell)} < b_{\psi_{k^{(i)}}(j)}^{(i)}\right\}$ \IF {$\mathfrak{J}^{(i,\ell)} \neq \emptyset$} \STATE $\mathfrak{j}_{\text{min}}^{(i,\ell)} = \min \mathfrak{J}^{(i,\ell)}$ \STATE $\theta_{\psi_{k^{(i)}}(j)}^{(i,\ell+1)} = a_{\psi_{k^{(i)}}(j)}^{(i)}$ for all $j \leq \mathfrak{j}_{\text{min}}^{(i,\ell)} - 1$ \STATE $\theta_{\psi_{k^{(i)}}(\mathfrak{j}_{\text{min}}^{(i,\ell)})}^{(i,\ell+1)} = \theta_{\psi_{k^{(i)}}(\mathfrak{j}_{\text{min}}^{(i,\ell)})}^{(i,\ell)} + \varepsilon_{\psi_{k^{(i)}}(\mathfrak{j}_{\text{min}}^{(i,\ell)})}^{(i,\ell)}$ \ELSE \STATE $\mathfrak{j}^{(i,\ell)}_{\text{min}} = d$ \ENDIF
\ENDIF \IF {$T(\boldsymbol{\theta}^{(i,\ell)},\boldsymbol{\theta}'^{(i,\ell)}) \leq 0$} \STATE $\varepsilon_{\psi_{k^{(i)}}(1)}'^{(i,\ell)} = \bar{r}_{\Theta_i,\psi_{k^{(i)}}(1)} (\boldsymbol{\theta}'^{(i,\ell)},\boldsymbol{\theta}^{(i,\ell)})$ \STATE $\varepsilon_{\psi_{k^{(i)}} (j)}'^{(i,\ell)} = \min (\varepsilon_{\psi_{k^{(i)}} (j)}'^{(i,\ell-1)}, \bar{r}_{\Theta_i,\psi_{k^{(i)}}(j)} (\boldsymbol{\theta}'^{(i,\ell)},\boldsymbol{\theta}^{(i,\ell)}))$, for all $j \in \{2,\dots, d-1\}$ \STATE $\varepsilon_{k^{(i)}}'^{(i,\ell)} = \min (\varepsilon_{k^{(i)}}'^{(i,\ell-1)}, {\underline{r}}_{\Theta_i,k^{(i)}} (\boldsymbol{\theta}'^{(i,\ell)},\boldsymbol{\theta}^{(i,\ell)}))$ \STATE $\mathfrak{J}'^{(i,\ell)} = \left\{1 \leq j \leq d -1,\; \theta_{\psi_{k^{(i)}}(j)}'^{(i,\ell)} + \varepsilon_{\psi_{k^{(i)}}(j)}'^{(i,\ell)} < b_{\psi_{k^{(i)}}(j)}^{(i)}\right\}$ \IF {$\mathfrak{J}'^{(i,\ell)} \neq \emptyset$} \STATE $\mathfrak{j}'^{(i,\ell)}_{\text{min}} = \min \mathfrak{J}'^{(i,\ell)}$ \STATE $\theta_{\psi_{k^{(i)}}(j)}'^{(i,\ell+1)} = a_{\psi_{k^{(i)}}(j)}^{(i)}$ for all $j \leq \mathfrak{j}'^{(i,\ell)}_{\text{min}} - 1$ \algstore{coupemonalgo2} \end{algorithmic} \end{algorithm} \addtocounter{algorithm}{-1} \begin{algorithm} \begin{algorithmic}[1] \algrestore{coupemonalgo2} \STATE $\theta_{\psi_{k^{(i)}}(\mathfrak{j}'^{(i,\ell)}_{\text{min}})}'^{(i,\ell+1)} = \theta_{\psi_{k^{(i)}}(\mathfrak{j}_{\text{min}}'^{(i,\ell)})}'^{(i,\ell)} + \varepsilon_{\psi_{k^{(i)}}(\mathfrak{j}_{\text{min}}'^{(i,\ell)})}'^{(i,\ell)}$ \ELSE \STATE $\mathfrak{j}'^{(i,\ell)}_{\text{min}} = d$ \ENDIF \ENDIF \IF { $\mathfrak{j}^{(i,\ell)}_{\text{min}} = d$ or $\mathfrak{j}'^{(i,\ell)}_{\text{min}} = d$} \STATE $L_i = \ell$ and quit the loop \ENDIF \STATE $a^{(i+1)}_j = a^{(i)}_j$ and $b^{(i+1)}_j = b^{(i)}_j$ for all $j \neq k^{(i)}$ \ENDFOR \IF {$ \mathfrak{j}^{(i,\ell)}_{\text{min}} = d$} \STATE $a_{k^{(i)}}^{(i+1)} = a_{k^{(i)}}^{(i)} + \varepsilon_{k^{(i)} }^{(i,L_i)} $ \ENDIF \IF { $\mathfrak{j}'^{(i,\ell)}_{\text{min}} = d$} \STATE $b_{k^{(i)}}^{(i+1)} = b_{k^{(i)}}^{(i)} - \varepsilon_{k^{(i)} }'^{(i,L_i)} $ \ENDIF \RETURN $\Theta_{i+1} = \prod_{j=1}^d [a_j^{(i+1)}, b_j^{(i+1)}]$ \algstore{coupemonalgo3} \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{Rewriting of Algorithm~\ref{algoConstructionDimQuelquonque}.} \label{algoConstructionDimQuelquonque2} \begin{algorithmic}[1] \algrestore{coupemonalgo3} \STATE $\Theta_1 = \prod_{j=1}^d [a_j^{(1)},b_j^{(1)}] = \prod_{j=1}^d [m_j,M_j]$ \FORALL {$i \geq 1$} \IF {There exists $j \in \{1,\dots,d\}$ such that $ b_j^{(i)} - a_j^{(i)} > \eta_j$} \STATE Compute $\Theta_{i+1}$ \ELSE \STATE Leave the loop and set ${N} = i$ \ENDIF \ENDFOR \RETURN $$\boldsymbol{\hat{\theta}} = \left(\frac{a_1^{(N)} + b_1^{(N)}}{2}, \dots, \frac{a_d^{(N)} + b_d^{(N)}}{2} \right)$$ \end{algorithmic} \end{algorithm} \subsubsection{Proof of Proposition~\ref{PropCalculComplexiteDimenQuelquonque}.} The algorithm computes $\sum_{i=1}^{N} L_i$ tests. Define for all $j \in \{1,\dots,d\}$, $$I_j = \left\{ i \in \{1,\dots,N\} , \, k^{(i)} = j \right\}.$$ Then, $\cup_{j=1}^d I_j = \{1,\dots,N\} $. Since \begin{eqnarray} \label{eqPreuvePropoCalculComplexite} \sum_{i=1}^{N} L_i \leq \sum_{j=1}^d |I_j| \sup_{i \in I_j} L_i, \end{eqnarray} we begin by bounding $|I_j|$ from above.
For all $i \in \{1,\dots,N-1\}$, \begin{eqnarray*} b_{j}^{(i+1)} - a_{j}^{(i+1)} &\leq& b_{j}^{(i)} - a_{j}^{(i)} - \min \left(\varepsilon_{j}^{(i, L_i) }, \varepsilon_{j}'^{(i, L_i) }\right) \quad \text{if $i \in I_j$ } \\ b_{j}^{(i+1)} - a_{j}^{(i+1)} &=& b_{j}^{(i)} - a_{j}^{(i)}\quad \text{if $i \not \in I_j$.} \end{eqnarray*} For all $i \in I_j$, and all $\ell \in \{1,\dots,L_i\}$, we derive from (\ref{eqSurretRDimD}), from the equalities $\theta'^{(i,\ell)}_j = b_j^{(i)}$ and $\theta^{(i,\ell)}_j = a_j^{(i)}$, and from the inequalities $\underline{R}_{\Theta_i,j} \geq \underline{R}_j$ and $\widebar{R}_{\Theta_i,j} \leq \widebar{R}_j$ that \begin{eqnarray} \bar{r}_{\Theta_i,j} (\boldsymbol{\theta}^{(i,\ell)},\boldsymbol{\theta}'^{(i,\ell)}) &\geq& (\kappa \underline{R}_j/ \widebar{R}_j)^{1/\alpha_j} (b_j^{(i)} - a_j^{(i)} ) \label{eqMinorationr1} \\ \underline{r}_{\Theta_i,j} (\boldsymbol{\theta}'^{(i,\ell)},\boldsymbol{\theta}^{(i,\ell)}) &\geq& (\kappa \underline{R}_j/ \widebar{R}_j)^{1/\alpha_j} (b_j^{(i)} - a_j^{(i)}) \label{eqMinorationr2}. \end{eqnarray} Consequently, \begin{eqnarray*} \min \left(\varepsilon_{j}^{(i, L_i) }, \varepsilon_{j}'^{(i, L_i) }\right) \geq \min \left\{{(b_j^{(i)}-a_j^{(i)})}/{2} , (\kappa \underline{R}_j/\widebar{R}_j)^{1/\alpha_j} (b_j^{(i)} - a_j^{(i)}) \right\}. \end{eqnarray*} We then have \begin{eqnarray*} b_{j}^{(i+1)} - a_{j}^{(i+1)} &\leq& \max \left(1/2, 1 - \left({\kappa \underline{R}_j}/\widebar{R}_j \right)^{1/\alpha_{j}} \right) \big(b_j^{(i)} - a_j^{(i)}\big) \quad \text{when $i \in I_j$} \\ b_{j}^{(i+1)} - a_{j}^{(i+1)} &=& b_j^{(i)} - a_j^{(i)} \quad \text{when $i \not \in I_j$.} \end{eqnarray*} Let $n_j$ be any integer such that $$ \left(\max \left\{1/2, 1 - \left({\kappa \underline{R}_j/\widebar{R}_j} \right)^{1/\alpha_{j}} \right\} \right)^{ n_j} \leq \eta_j/(M_j-m_j).$$ If $|I_j| > n_j$, then for $i = \max I_j$, \begin{eqnarray*} b_{j}^{(i)} - a_{j}^{(i)} &\leq& \left( \max \left\{1/2, 1 - \left({\kappa \underline{R}_j} /\widebar{R}_j\right)^{1/\alpha_{j}} \right\} \right)^{|I_j| - 1} (M_j-m_j) \\ &\leq& \left(\max \left\{1/2, 1 - \left({\kappa \underline{R}_j} /\widebar{R}_j\right)^{1/\alpha_{j}} \right\} \right)^{ n_j} (M_j-m_j) \end{eqnarray*} and thus $ b_{j}^{(i)} - a_{j}^{(i)} \leq \eta_j$. This is impossible because $i \in I_j$ implies that $b_{j}^{(i)} - a_{j}^{(i)} > \eta_j$. Consequently, $|I_j| \leq n_j$. We then set $n_j$ as the smallest integer larger than $$\frac{\log \left(\eta_j / (M_j-m_j) \right)}{ \log \left(\max \left\{1/2, 1 - \left({\kappa \underline{R}_j} /\widebar{R}_j\right)^{1/\alpha_{j}} \right\} \right)}.$$ By using the inequality $-1/\log (1-x) \leq 1/x$ for all $x \in (0,1)$, we obtain \begin{eqnarray*} |I_j| \leq 1 + \max \left( 1/\log 2, \left(\kappa \underline{R}_j /\widebar{R}_j\right)^{-1/\alpha_{j}} \right) \log \left( \frac{M_j-m_j}{\eta_j} \right). \end{eqnarray*} We now roughly bound from above the right-hand side of this inequality: \begin{eqnarray*} |I_j| \leq 2 \left( 1 + \left( \widebar{R}_j/(\kappa \underline{R}_j) \right)^{1/\alpha_{j}} \right) \left( 1 \vee \log \left( \frac{M_j-m_j}{\eta_j} \right) \right). \end{eqnarray*} We recall that our aim is to bound from above $\sum_{i=1}^{N} L_i$. Thanks to (\ref{eqPreuvePropoCalculComplexite}), it remains to upper bound $ \sup_{i \in I_j} L_i$. This is the purpose of the following lemma.
\begin{lemme} \label{LemmeDansPreuveComplexiteAlgo} Let $$\mathcal{L} = \left\{1 \leq \ell \leq L_i, \; T(\boldsymbol{\theta}^{(i,\ell)},\boldsymbol{\theta}'^{(i,\ell)}) \geq 0 \right\} \quad \text{and} \quad \mathcal{L}' = \left\{1 \leq \ell \leq L_i, \; T(\boldsymbol{\theta}^{(i,\ell)},\boldsymbol{\theta}'^{(i,\ell)}) \leq 0 \right\}.$$ Then, \begin{eqnarray*} |\mathcal{L} | &\leq& \prod_{k \in \{1,\dots,d\} \setminus \{ k^{(i)}\}} \left[ 1 + \left( {\widebar{R}_k}/({\kappa \underline{R}_k})\right)^{1/\alpha_k}\right] \\ |\mathcal{L}'| &\leq& \prod_{k \in \{1,\dots,d\} \setminus \{ k^{(i)}\}} \left[ 1 + \left( {\widebar{R}_k}/({\kappa \underline{R}_k})\right)^{1/\alpha_k}\right]. \end{eqnarray*} \end{lemme} Since $\{1,\dots,L_i\} \subset \mathcal{L} \cup \mathcal{L}'$, we obtain $$\sum_{j=1}^d |I_j| \sup_{i \in I_j} L_i \leq 4 \left[\sum_{j=1}^d \left( 1 \vee \log \left( \frac{M_j-m_j}{\eta_j} \right)\right)\right] \left[\prod_{j=1}^d \left(1 + \left( {\widebar{R}_j}/({\kappa \underline{R}_j})\right)^{1/\alpha_j} \right) \right],$$ which completes the proof. \subsubsection{Proof of Lemma~\ref{LemmeDansPreuveComplexiteAlgo}.} Without loss of generality and for the sake of simplicity, we assume that $k^{(i)} = d$ and $\psi_{d } (j) = j$ for all $j \in \{1,\dots,d-1\}$. Let $\ell_1 < \cdots < \ell_r$ be the elements of $\mathcal{L} $. Define for all $p \in \{1,\dots,d-1\}$, $k_{p,0} = 0$ and by induction for every integer $\mathfrak{m}$, \begin{eqnarray*} k_{p,\mathfrak{m}+1} = \begin{cases} \inf \left\{k > k_{p,\mathfrak{m}}, \; \mathfrak{j}_{\text{min}}^{(i,\ell_{k})} > p \right\} & \text{if there exists $k \in \{k_{p,\mathfrak{m}} +1,\dots,r \}$ such that $\mathfrak{j}_{\text{min}}^{(i,\ell_{k})} > p$} \\ r & \text{otherwise.} \end{cases} \end{eqnarray*} Let $\mathfrak{M}_p$ be the smallest integer $\mathfrak{m}$ for which $k_{p,\mathfrak{m}} = r$. Set for all $\mathfrak{m} \in \{0,\dots,\mathfrak{M}_p-1\}$, $$K_{p,\mathfrak{m}} = \left\{k_{p,\mathfrak{m}}+1, \dots, k_{p,\mathfrak{m}+1} \right\}.$$ The cardinality of $K_{p,\mathfrak{m}}$ can be upper bounded by the claim below. \begin{Claim} \label{ClaimMajorationCardinalDeKpm} For all $p \in \{1,\dots,d-1\}$ and $\mathfrak{m} \in \{0,\dots,\mathfrak{M}_p-1\}$, \begin{eqnarray} \label{eqMajorationKpm} |K_{p,\mathfrak{m}}| \leq \prod_{k=1}^p \left[ 1 + \left(\frac{\widebar{R}_k}{\kappa \underline{R}_k}\right)^{1/\alpha_k} \right]. \end{eqnarray} \end{Claim} Lemma~\ref{LemmeDansPreuveComplexiteAlgo} follows from the equality $\mathcal{L} = K_{d-1,0}$. The cardinality of $\mathcal{L}'$ can be bounded from above in the same way. \qed \begin{proof} [Proof of Claim~\ref{ClaimMajorationCardinalDeKpm}] The result is proved by induction. We begin by proving (\ref{eqMajorationKpm}) when $p = 1$. Let $\mathfrak{m} \in \{0,\dots,\mathfrak{M}_1-1\}$. We have $\theta_1^{(i,\ell_{ k_{1,\mathfrak{m}} + 1})} = a_1^{(i)}$ and for $j \in \{ 1,\dots, k_{1,\mathfrak{m}+1}-k_{1,\mathfrak{m}}-1\}$, $$ \theta_1^{(i,\ell_{ k_{1,\mathfrak{m}}+j+1})} \geq \theta_1^{(i,\ell_{ k_{1,\mathfrak{m}}+j})} + \bar{r}_{\Theta_i,1} \left(\boldsymbol{\theta}^{(i,\ell_{ k_{1,\mathfrak{m}}+j})}, \boldsymbol{\theta}'^{(i,\ell_{ k_{1,\mathfrak{m}}+j})}\right).$$ Now, \begin{eqnarray*} \bar{r}_{\Theta_i,1} \left(\boldsymbol{\theta}^{(i,\ell_{ k_{1,\mathfrak{m}}+j})}, \boldsymbol{\theta}'^{(i,\ell_{ k_{1,\mathfrak{m}}+j})}\right) \geq \left( (\kappa \underline{R}_{\Theta_i,d} / \widebar{R}_{\Theta_i,1}) ( b_d^{(i)}-a_d^{(i)} )^{\alpha_d} \right)^{1/\alpha_1}.
\end{eqnarray*} Since $\underline{R}_{\Theta_i,d} (b_d^{(i)}-a_d^{(i)} )^{\alpha_d} \geq \underline{R}_{\Theta_i,1} (b_1^{(i)}-a_1^{(i)} )^{\alpha_1} $, \begin{eqnarray} \bar{r}_{\Theta_i,1} \left(\boldsymbol{\theta}^{(i,\ell_{ k_{1,\mathfrak{m}}+j})}, \boldsymbol{\theta}'^{(i,\ell_{ k_{1,\mathfrak{m}}+j})}\right) &\geq& \left( \kappa \underline{R}_{\Theta_i,1} / \widebar{R}_{\Theta_i,1} \right)^{1/\alpha_1} ( b_1^{(i)}-a_1^{(i)}) \nonumber \\ &\geq& (\kappa \underline{R}_{1}/ \widebar{R}_1)^{1/\alpha_1} ( b_1^{(i)}-a_1^{(i)}) \label{eqMinorationbarrPreuve}. \end{eqnarray} This leads to $$ \theta_1^{(i,\ell_{ k_{1,\mathfrak{m}}+j+1})} \geq \theta_1^{(i,\ell_{ k_{1,\mathfrak{m}}+j})} + \left({\kappa \underline{R}_{1}}/{\widebar{R}_1}\right)^{1/\alpha_1} (b_1^{(i)} - a_1^{(i)} ).$$ Moreover, $ \theta_1^{(i,\ell_{ k_{1,\mathfrak{m}+1}})} \leq b_1^{(i)} $ (because all the $\boldsymbol{\theta}^{(i,\ell)}, \boldsymbol{\theta}'^{(i,\ell)}$ belong to $\Theta_i$). Consequently, $$a_1^{(i)} + \left(k_{1,\mathfrak{m}+1}-k_{1,\mathfrak{m}}-1\right) \left({\kappa \underline{R}_{1}}/{\widebar{R}_1}\right)^{1/\alpha_1} \left(b_1^{(i)} - a_1^{(i)}\right) \leq b_1^{(i)},$$ which shows the result for $p = 1$. Suppose now that (\ref{eqMajorationKpm}) holds for $p \in \{1,\dots,d-2\}$. We shall show that it also holds for $p+1$. Let $\mathfrak{m} \in \{0,\dots,\mathfrak{M}_{p+1} -1 \}$. We use the claim below whose proof is postponed to Section~\ref{SectionPreuveDesClaims}. \begin{Claim} \label{ClaimInclusionKpDansKp} For all $\mathfrak{m} \in \{0,\dots, \mathfrak{M}_{p+1}-1\}$, there exists $\mathfrak{m}' \in \{0,\dots, \mathfrak{M}_{p}-1\}$ such that $k_{p,\mathfrak{m}'+1} \in K_{p+1,\mathfrak{m}}$. \end{Claim} The claim says that we can consider the smallest integer $\mathfrak{m}_0$ of $\{0,\dots, \mathfrak{M}_p-1\}$ such that $k_{p,\mathfrak{m}_0+1} > k_{p+1,\mathfrak{m}}$, and the largest integer $\mathfrak{m}_1$ of $\{0,\dots, \mathfrak{M}_p-1\}$ such that $k_{p,\mathfrak{m}_1+1} \leq k_{p+1,\mathfrak{m}+1}$. We define \begin{eqnarray*} I_{\mathfrak{m}_0} &=& \left\{k_{p+1,\mathfrak{m}}+1,\dots,k_{p,\mathfrak{m}_0+1}\right\}\\ I_{\mathfrak{m}'} &=& \left\{k_{p,\mathfrak{m}'}+1,\dots,k_{p,\mathfrak{m}'+1}\right\} \quad \text{for all $\mathfrak{m}' \in \{\mathfrak{m}_0+1,\dots,\mathfrak{m}_1\}$}\\ I_{\mathfrak{m}_1+1} &=& \left\{k_{p,\mathfrak{m}_1+1}+1,\dots,k_{p+1,\mathfrak{m}+1}\right\}. \end{eqnarray*} We then have $$K_{p+1,\mathfrak{m}} =\bigcup_{\mathfrak{m}'=\mathfrak{m}_0}^{\mathfrak{m}_1+1} I_{\mathfrak{m}'}.$$ Notice that for all $\mathfrak{m}' \in \{\mathfrak{m}_0,\dots,\mathfrak{m}_1\}$, $I_{\mathfrak{m}'} \subset K_{p,\mathfrak{m}'}$. We consider two cases. \begin{itemize} \item If $k_{p,\mathfrak{m}_1+1} = k_{p+1,\mathfrak{m}+1}$, then $I_{\mathfrak{m}_1+1} = \emptyset$ and thus, by using the above inclusion and the induction assumption, \begin{eqnarray*} |K_{p+1,\mathfrak{m}}| \leq (\mathfrak{m}_1-\mathfrak{m}_0+1) \prod_{k=1}^p \left[ 1 + \left(\frac{\widebar{R}_k}{\kappa \underline{R}_k}\right)^{1/\alpha_k}\right]. \end{eqnarray*} \item If $k_{p,\mathfrak{m}_1+1} < k_{p+1,\mathfrak{m}+1}$ then $\mathfrak{m}_1 + 1\leq \mathfrak{M}_p-1$. Indeed, if this is not true, then $\mathfrak{m}_1 = \mathfrak{M}_p-1$, which leads to $k_{p,\mathfrak{m}_1+1} = r$ and thus $k_{p+1,\mathfrak{m}+1} > r$. This is impossible since $k_{p+1,\mathfrak{m}+1}$ is never larger than~$r$ (by definition).
Consequently, $I_{\mathfrak{m}_1+1} \subset K_{p,\mathfrak{m}_1+1}$ and we derive from the induction assumption that $$|K_{p+1,\mathfrak{m}}| \leq (\mathfrak{m}_1-\mathfrak{m}_0+2) \prod_{k=1}^p \left[ 1 + \left(\frac{\widebar{R}_k}{\kappa \underline{R}_k}\right)^{1/\alpha_k}\right].$$ \end{itemize} We now bound from above $\mathfrak{m}_1-\mathfrak{m}_0$. Since for all $k \in \left\{k_{p+1,\mathfrak{m}}+1,\dots,k_{p,\mathfrak{m}_0+1}-1\right\}$, $\mathfrak{j}_{\text{min}}^{(i,\ell_k)} \leq p$, we have $$\theta_{p+1}^{(i,\ell_{ k_{p,\mathfrak{m}_0+1}})} = \theta_{p+1}^{(i,\ell_{k_{p+1,\mathfrak{m}}+1})} = a_{p+1}^{(i)}.$$ Since $\mathfrak{j}_{\text{min}}^{(i,\ell_{ k_{p,\mathfrak{m}_0+1}})} = p+1$, $$\theta_{p+1}^{(i,\ell_{ k_{p,\mathfrak{m}_0+1}+1})} \geq \theta_{p+1}^{(i,\ell_{ k_{p,\mathfrak{m}_0+1}})} + \bar{r}_{\Theta_i,p+1} \left(\boldsymbol{\theta}^{(i,1)}, \boldsymbol{\theta}'^{(i,1)}\right) $$ and thus, by using an argument similar to the one used in the proof of (\ref{eqMinorationbarrPreuve}), \begin{eqnarray*} \theta_{p+1}^{(i,\ell_{ k_{p,\mathfrak{m}_0+1}+1})} &\geq& \theta_{p+1}^{(i,\ell_{ k_{p,\mathfrak{m}_0+1}})} + \left({\kappa \underline{R}_{p+1}}/{\widebar{R}_{p+1}}\right)^{1/\alpha_{p+1}} \left(b_{p+1}^{(i)} - a_{p+1}^{(i)}\right) \\ &\geq& a_{p+1}^{(i)} + \left({\kappa \underline{R}_{p+1}}/{\widebar{R}_{p+1}}\right)^{1/\alpha_{p+1}} \left(b_{p+1}^{(i)} - a_{p+1}^{(i)}\right). \end{eqnarray*} Similarly, for all $\mathfrak{m}' \in \{\mathfrak{m}_0+1,\dots,\mathfrak{m}_1\}$ and $k \in \left\{k_{p,\mathfrak{m}'}+1,\dots,k_{p,\mathfrak{m}'+1}-1\right\}$, $\mathfrak{j}_{\text{min}}^{(i,\ell_k)} \leq p$ and thus $$\theta_{p+1}^{(i,\ell_{ k_{p,\mathfrak{m}'+1}})} = \theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}'}+1})}.$$ Moreover, for all $\mathfrak{m}' \in \{\mathfrak{m}_0+1,\dots,\mathfrak{m}_1-1\}$, $\mathfrak{j}_{\text{min}}^{(i,\ell_{k_{p,\mathfrak{m}'+1}})} = p+1$ and thus \begin{eqnarray} \label{EqDansPreuveComplexiteAlgoDimQuel} \theta_{p+1}^{(i,\ell_{ k_{p,\mathfrak{m}'+1}+1})} &\geq& \theta_{p+1}^{(i,\ell_{ k_{p,\mathfrak{m}'+1}})} + \left({\kappa \underline{R}_{p+1}}/{\widebar{R}_{p+1}}\right)^{1/\alpha_{p+1}} \left(b_{p+1}^{(i)} - a_{p+1}^{(i)}\right) \\ &\geq& \theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}'}+1})} + \left({\kappa \underline{R}_{p+1}}/{\widebar{R}_{p+1}}\right)^{1/\alpha_{p+1}} \left(b_{p+1}^{(i)} - a_{p+1}^{(i)}\right) \nonumber. \end{eqnarray} This leads to \begin{eqnarray*} \theta_{p+1}^{(i,\ell_{ k_{p,\mathfrak{m}_1}+1})} &\geq& \theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}_0+1 }+1})} + \left(\mathfrak{m}_1 - \mathfrak{m}_0-1 \right) \left({\kappa \underline{R}_{p+1}}/{\widebar{R}_{p+1}}\right)^{1/\alpha_{p+1}} \left(b_{p+1}^{(i)} - a_{p+1}^{(i)}\right) \\ &\geq& a_{p+1}^{(i)} + \left(\mathfrak{m}_1 - \mathfrak{m}_0 \right) \left({\kappa \underline{R}_{p+1}}/{\widebar{R}_{p+1}}\right)^{1/\alpha_{p+1}} \left(b_{p+1}^{(i)} - a_{p+1}^{(i)}\right). \end{eqnarray*} Two cases may occur: either $k_{p,\mathfrak{m}_1+1} = k_{p+1,\mathfrak{m}+1}$ or $k_{p,\mathfrak{m}_1+1} < k_{p+1,\mathfrak{m}+1}$. \begin{itemize} \item If $k_{p,\mathfrak{m}_1+1} = k_{p+1,\mathfrak{m}+1}$, \begin{eqnarray*} \theta_{p+1}^{(i,\ell_{ k_{p+1,\mathfrak{m}+1}})} &=& \theta_{p+1}^{(i,\ell_{ k_{p,\mathfrak{m}_1}+1})} \\ &\geq& a_{p+1}^{(i)} + \left(\mathfrak{m}_1 - \mathfrak{m}_0 \right) \left({\kappa \underline{R}_{p+1}}/{\widebar{R}_{p+1}}\right)^{1/\alpha_{p+1}} \left(b_{p+1}^{(i)} - a_{p+1}^{(i)}\right).
\end{eqnarray*} Since $ \theta_{p+1 }^{(i,\ell_{ k_{p+1,\mathfrak{m}+1}})} \leq b_{p+1}^{(i)}$, we have $$\mathfrak{m}_1 - \mathfrak{m}_0 \leq \left({\widebar{R}_{p+1}}/{(\kappa \underline{R}_{p+1})}\right)^{1/\alpha_{p+1}}.$$ \item If now $k_{p,\mathfrak{m}_1+1} < k_{p+1,\mathfrak{m}+1}$, then~{(\ref{EqDansPreuveComplexiteAlgoDimQuel})} also holds for $\mathfrak{m}' = \mathfrak{m}_1$. This implies \begin{eqnarray*} \theta_{p+1}^{(i,\ell_{ k_{p,\mathfrak{m}_1+1}+1})} \geq a_{p+1}^{(i)} + \left(\mathfrak{m}_1 - \mathfrak{m}_0 + 1 \right) \left({\kappa \underline{R}_{p+1}}/{\widebar{R}_{p+1}}\right)^{1/\alpha_{p+1}} \left(b_{p+1}^{(i)} - a_{p+1}^{(i)}\right). \end{eqnarray*} Since $\mathfrak{j}_{\text{min}}^{(i,\ell_k)} \leq p$ for all $k \in \left\{k_{p,\mathfrak{m}_1+1},\dots,k_{p+1,\mathfrak{m}+1}-1\right\}$, \begin{eqnarray*} \theta_{p+1}^{(i,\ell_{ k_{p+1,\mathfrak{m}+1}})} &=& \theta_{p+1}^{(i,\ell_{ k_{p,\mathfrak{m}_1+1}+1})} \\ &\geq& a_{p+1}^{(i)} + \left(\mathfrak{m}_1 - \mathfrak{m}_0 + 1 \right) \left({\kappa \underline{R}_{p+1}}/{\widebar{R}_{p+1}}\right)^{1/\alpha_{p+1}} \left(b_{p+1}^{(i)} - a_{p+1}^{(i)}\right). \end{eqnarray*} Since, $ \theta_{p+1}^{(i,\ell_{ k_{p+1,\mathfrak{m}+1}})} \leq b_{p+1}^{(i)}$, $$\mathfrak{m}_1 - \mathfrak{m}_0 + 1 \leq \left({\widebar{R}_{p+1}}/{(\kappa \underline{R}_{p+1})}\right)^{1/\alpha_{p+1}}.$$ \end{itemize} This ends the proof. \end{proof} \subsubsection{Proof of Theorem~\ref{ThmPrincipalDimQuelquonque}.} The lemma and claim below show that the assumptions of Theorem~\ref{ThmPrincipal} (page~\pageref{ThmPrincipal}) are satisfied. \begin{lemme} \label{PreuveLemmeAlgoDimQuelc} For all $i \in \{1,\dots, N-1\}$, \begin{eqnarray*} \Theta_i \setminus \bigcup_{\ell=1}^{L_i} B^{(i,\ell)} \subset \Theta_{i+1} \subset \Theta_i. \end{eqnarray*} \end{lemme} \begin{Claim} \label{ClaimPreuveAlgoDim2Kappa0} For all $i \in \{1,\dots,N - 1\}$ and $\ell \in \{1,\dots,L_i\}$, $$ \kappa_0 \text{diam} (\Theta_i) \leq h^2(f_{\boldsymbol{\theta}^{(i,\ell)}},f_{\boldsymbol{\theta}'^{(i,\ell)}})$$ where $\kappa_0 = \inf_{1 \leq j \leq d} \underline{R}_j/\widebar{R}_j$. \end{Claim} We now derive from Theorem~\ref{ThmPrincipal} that $$\P \left[ C \inf_{\boldsymbol{\theta} \in \Theta_N} h^2(s,f_{\boldsymbol{\theta}}) \geq h^2(s, \mathscr{F}) + \frac{D_{\mathscr{F}}}{n} + \xi \right] \leq e^{- n \xi}$$ where $C > 0$ depends only on $\kappa, \sup_{1 \leq j \leq d} \widebar{R}_j/\underline{R}_j$. Consequently, with probability larger than $1 - e^{-n \xi}$, \begin{eqnarray*} h^2(s,f_{\boldsymbol{\hat{\theta}}}) &\leq& 2 \inf_{\boldsymbol{\theta} \in \Theta_N} h^2(s,f_{\boldsymbol{\theta}}) + 2 \text{diam}\, \Theta_N \\ &\leq& 2 C^{-1} \left(h^2(s, \mathscr{F}) + \frac{D_{\mathscr{F}}}{n} + \xi\right) + 2 \sup_{1 \leq j \leq d} \widebar{R}_j \eta_j^{\alpha_j} \\ &\leq& 2 C^{-1} \left(h^2(s, \mathscr{F}) + \frac{D_{\mathscr{F}}}{n} + \xi\right) + 2d /n\\ &\leq& C' \left(h^2(s, \mathscr{F}) + \frac{d}{n} + \xi\right). \end{eqnarray*} The theorem follows. \qed \subsubsection{Proof of Lemma~\ref{PreuveLemmeAlgoDimQuelc}. } Since $$\varepsilon_{k^{(i)} }^{(i,L_i)} \leq \frac{b^{(i)}_{k^{(i)} } - a^{(i)}_{k^{(i)} }}{2} \quad \text{and} \quad \varepsilon_{k^{(i)} }'^{(i,L_i)} \leq \frac{b^{(i)}_{k^{(i)} } - a^{(i)}_{k^{(i)} }}{2},$$ we have $\Theta_{i+1} \subset \Theta_i$. We now aim at proving $\Theta_i \setminus \cup_{\ell=1}^{L_i} B^{(i,\ell)} \subset \Theta_{i+1}$. 
We introduce the rectangles \begin{eqnarray*} \mathcal{R}^{(i,\ell)} &=& \prod_{q=1}^d \left[\theta_q^{(i,\ell)}, \theta_q^{(i,\ell)} + \varepsilon_{q}^{(i,\ell)} \right] \\ \mathcal{R}'^{(i,\ell)} &=& \prod_{q=1}^{k^{(i)}-1} \left[\theta_q'^{(i,\ell)}, \theta_q'^{(i,\ell)} + \varepsilon_{q}'^{(i,\ell)} \right] \times\left[\theta_{k^{(i)}}'^{(i,\ell)} - \varepsilon_{k^{(i)}}'^{(i,\ell)} , \theta_{k^{(i)}}'^{(i,\ell)} \right] \times \prod_{q=k^{(i)}+1}^{d} \left[\theta_q'^{(i,\ell)}, \theta_q'^{(i,\ell)} + \varepsilon_{q}'^{(i,\ell)} \right] \end{eqnarray*} and we set \begin{eqnarray*} \mathcal{R}''^{(i,\ell)}= \begin{cases} \mathcal{R}^{(i,\ell)}& \text{if $T ({\boldsymbol{\theta}^{(i,\ell)}},{\boldsymbol{\theta}'^{(i,\ell)}}) > 0$} \\ \mathcal{R}'^{(i,\ell)}& \text{if $T ({\boldsymbol{\theta}^{(i,\ell)}},{\boldsymbol{\theta}'^{(i,\ell)}}) < 0$} \\ \mathcal{R}^{(i,\ell)} \bigcup \mathcal{R}'^{(i,\ell)} & \text{if $T ({\boldsymbol{\theta}^{(i,\ell)}},{\boldsymbol{\theta}'^{(i,\ell)}}) = 0$}. \end{cases} \end{eqnarray*} We derive from (\ref{eqInclusionRC1}) that $\Theta_i \cap \mathcal{R}''^{(i,\ell)} \subset B^{(i,\ell)}$. It is then sufficient to show that $$ \Theta_i \setminus \bigcup_{\ell=1}^{L_i} \mathcal{R}''^{(i,\ell)} \subset \Theta_{i+1}.$$ For this purpose, note that either $T(\boldsymbol{\theta}^{(i,L_i)},\boldsymbol{\theta}'^{(i,L_i)}) \geq 0$ or $T(\boldsymbol{\theta}^{(i,L_i)},\boldsymbol{\theta}'^{(i,L_i)}) \leq 0$. In what follows, we assume that $T(\boldsymbol{\theta}^{(i,L_i)},\boldsymbol{\theta}'^{(i,L_i)}) \geq 0$ but a similar argument applies when $T(\boldsymbol{\theta}^{(i,L_i)},\boldsymbol{\theta}'^{(i,L_i)})$ is non-positive. Without loss of generality, and for the sake of simplicity, we suppose as in the proof of Lemma~\ref{LemmeDansPreuveComplexiteAlgo} that $k^{(i)} = d$ and $\psi_{d} (j) = j$ for all $j \in \{1,\dots,d-1\}$. Let $$\mathcal{L} = \left\{1 \leq \ell \leq L_i, \; T(\boldsymbol{\theta}^{(i,\ell)},\boldsymbol{\theta}'^{(i,\ell)}) \geq 0 \right\}$$ and $\ell_1 < \cdots < \ell_r$ be the elements of $\mathcal{L} $. We have $$\Theta_{i+1} = \prod_{q=1}^{d-1} \left[a_q^{(i)}, b_q^{(i)}\right] \times \left[a_d^{(i)} + \varepsilon_d^{(i,L_i)} , b_d^{(i)} \right]$$ and it is sufficient to prove that $$\prod_{q=1}^{d-1} \left[a_q^{(i)}, b_q^{(i)}\right] \times \left[a_d^{(i)}, a_d^{(i)} + \varepsilon_d^{(i,L_i)} \right] \subset \bigcup_{k=1}^{r} \mathcal{R}^{(i, \ell_k)}.$$ To this end, note that for all $k \in \{1,\dots,r\}$, $\theta_d^{(i,\ell_{k})} = a_d^{(i)} $ and thus $$\mathcal{R}^{(i, \ell_k)} = \prod_{q=1}^{d-1} \left[ \theta_q^{(i,\ell_{k})}, \theta_q^{(i,\ell_{k})} + \varepsilon_{q}^{(i,\ell_k)} \right] \times \left[ a_d^{(i)}, a_d^{(i)} + \varepsilon_{d}^{(i,\ell_k)} \right].$$ By using the fact that the sequence $ (\varepsilon_{d}^{(i,\ell_k)})_k$ is non-increasing, $$\left[a_d^{(i)}, a_d^{(i)} + \varepsilon_d^{(i,L_i)} \right] \subset \bigcap_{k=1}^{r}\left[ a_d^{(i)}, a_d^{(i)} + \varepsilon_{d}^{(i,\ell_k)} \right].$$ This means that it is sufficient to show that \begin{eqnarray} \label{eqInclusionPreuveDimenQ} \prod_{q=1}^{d-1} \left[a_q^{(i)}, b_q^{(i)}\right] \subset \bigcup_{k=1}^{r} \prod_{q=1}^{d-1} \left[ \theta_q^{(i,\ell_{k})} , \theta_q^{(i,\ell_{k})} + \varepsilon_{q}^{(i,\ell_k)} \right].
\end{eqnarray} Let us now define (as in the proof of Lemma~\ref{LemmeDansPreuveComplexiteAlgo}) for all $p \in \{1,\dots,d-1\}$, $k_{p,0} = 0$ and by induction for every integer $\mathfrak{m}$, \begin{eqnarray*} k_{p,\mathfrak{m}+1} = \begin{cases} \inf \left\{k > k_{p,\mathfrak{m}}, \; \mathfrak{j}_{\text{min}}^{(i,\ell_{k})} > p \right\} & \text{if there exists $k \in \{k_{p,\mathfrak{m}} +1,\dots,r \}$ such that $\mathfrak{j}_{\text{min}}^{(i,\ell_{k})} > p$} \\ r & \text{otherwise.} \end{cases} \end{eqnarray*} Let $\mathfrak{M}_p$ be the smallest integer $\mathfrak{m}$ such that $k_{p,\mathfrak{m}} = r$. Set, for all $\mathfrak{m} \in \{0,\dots,\mathfrak{M}_p-1\}$, $$K_{p,\mathfrak{m}} = \left\{k_{p,\mathfrak{m}}+1, \dots, k_{p,\mathfrak{m}+1} \right\}.$$ We shall use the claim below (whose proof is postponed to Section~\ref{SectionPreuveDesClaims}). \begin{Claim} \label{ClaimPreuveAlgoDimen2} Let $p \in \{1,\dots,d-1\}$ and $\mathfrak{m}' \in \{0,\dots,\mathfrak{M}_{p+1}-1\}$. There exists a subset $\mathcal{M}$ of $\{0,\dots,\mathfrak{M}_p-1\}$ such that $$K'_{p} = \left\{k_{p,\mathfrak{m}+1}, \, \mathfrak{m} \in \mathcal{M} \right\} \subset K_{p+1,\mathfrak{m}'}$$ and $$\left[a_{p+1}^{(i)}, b_{p+1}^{(i)}\right] \subset \bigcup_{ k \in K_p'} \left[ \theta_{p+1}^{(i,\ell_{k})}, \theta_{p+1}^{(i,\ell_{k})} + \varepsilon_{p+1}^{(i,\ell_{k })} \right].$$ \end{Claim} We prove by induction on $p$ the following result. For all $p \in \{1,\dots,d-1\}$, and all $\mathfrak{m} \in \{0,\dots,\mathfrak{M}_p-1\}$, \begin{eqnarray} \label{EqInclusionDansPreuve} \prod_{q=1}^{p} \left[a_q^{(i)}, b_q^{(i)}\right] \subset \bigcup_{k\in K_{p,\mathfrak{m}}} \prod_{q=1}^p \left[ \theta_q^{(i,\ell_{k})}, \theta_q^{(i,\ell_{k})} + \varepsilon_{q}^{(i,\ell_k)} \right]. \end{eqnarray} Note that~{(\ref{eqInclusionPreuveDimenQ})} follows from this inclusion when $p = d -1$ and $\mathfrak{m} = 0$. We begin by proving (\ref{EqInclusionDansPreuve}) for $p = 1$ and all $\mathfrak{m} \in \{0,\dots,\mathfrak{M}_1-1\}$. For all $k \in \{k_{1,\mathfrak{m}} +1, \dots, k_{1,\mathfrak{m}+1} - 1\}$, $\mathfrak{j}_{\text{min}}^{(i,\ell_{k})} \leq 1$ and thus $$\theta_1^{(i,\ell_{k+1})} \in \left\{ \theta_1^{(i,\ell_{k})} , \theta_1^{(i,\ell_{k})} + \varepsilon_{1}^{(i,\ell_{k})}\right\}.$$ This implies that the set $$\bigcup_{k=k_{1,\mathfrak{m}}+1}^{k_{1,\mathfrak{m}+1}} \left[ \theta_1^{(i,\ell_{k})}, \theta_1^{(i,\ell_{k})} + \varepsilon_{1}^{(i,\ell_{k})} \right]$$ is an interval. Now, $\theta_1^{(i,\ell_{k_{1,\mathfrak{m}}+1})} = a_{1}^{(i)}$ and $\theta_1^{(i,\ell_{k_{1,\mathfrak{m}+1}})} + \varepsilon_{1}^{(i,\ell_{k_{1,\mathfrak{m}+1}})}\geq b_1^{(i)}$ since $\mathfrak{j}_{\text{min}}^{(i,\ell_{k_{1,\mathfrak{m}+1}})} > 1$. Hence $$[a_{1}^{(i)}, b_1^{(i)}] \subset \bigcup_{k=k_{1,\mathfrak{m}}+1}^{k_{1,\mathfrak{m}+1}} \left[ \theta_1^{(i,\ell_{k})}, \theta_1^{(i,\ell_{k})} + \varepsilon_{1}^{(i,\ell_{k})} \right]$$ which establishes (\ref{EqInclusionDansPreuve}) when $p = 1$. Let now $p \in \{1,\dots,d-2\}$ and assume that for all $\mathfrak{m} \in \{0,\dots, \mathfrak{M}_p-1\}$, \begin{eqnarray*} \prod_{q=1}^{p} \left[a_q^{(i)}, b_q^{(i)}\right] \subset \bigcup_{k\in K_{p,\mathfrak{m}}} \prod_{q=1}^p \left[ \theta_q^{(i,\ell_{k})},\theta_q^{(i,\ell_{k})} + \varepsilon_{q}^{(i,\ell_k)} \right]. \end{eqnarray*} Let $\mathfrak{m}' \in \{0,\dots , \mathfrak{M}_{p+1}-1\}$.
We shall show that \begin{eqnarray*} \prod_{q=1}^{p+1} \left[a_q^{(i)}, b_q^{(i)}\right] \subset \bigcup_{k \in K_{p+1,\mathfrak{m}'}} \prod_{q=1}^{p+1} \left[ \theta_q^{(i,\ell_{k})}, \theta_q^{(i,\ell_{k})} + \varepsilon_{q}^{(i,\ell_k)} \right]. \end{eqnarray*} Let $\boldsymbol{x} \in \prod_{q=1}^{p+1} \left[a_q^{(i)}, b_q^{(i)}\right]$. By using Claim~\ref{ClaimPreuveAlgoDimen2}, there exists $\mathfrak{m} \in \{0,\dots,\mathfrak{M}_{p}-1\}$ such that $$x_{p+1} \in \left[ \theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}+1}})}, \theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}+1}})} + \varepsilon_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}+1}})} \right]$$ and such that $k_{p,\mathfrak{m}+1} \in K_{p+1,\mathfrak{m}'}$. By using the induction assumption, there exists $k \in K_{p,\mathfrak{m}}$ such that $$\boldsymbol{x} = (x_1,\dots,x_p) \in \prod_{q=1}^p \left[ \theta_q^{(i,\ell_{k})} , \theta_q^{(i,\ell_{k})} + \varepsilon_{q}^{(i,\ell_k)} \right].$$ Since $k \in K_{p,\mathfrak{m}}$, $\theta_{p+1}^{(i,\ell_{k})} = \theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}+1}})} $ and $ \varepsilon_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}+1}})} \leq \varepsilon_{p+1}^{(i,\ell_{k})}$. Hence, $$x_{p+1} \in \left[ \theta_{p+1}^{(i,\ell_{k})} , \theta_{p+1}^{(i,\ell_{k})} + \varepsilon_{p+1}^{(i,\ell_{k})} \right].$$ We finally use the claim below to show that $k \in K_{p+1,\mathfrak{m}'}$ which concludes the proof. \begin{Claim} \label{ClaimPreuveAlgoDimen2deuxieme} Let $\mathfrak{m} \in \{0,\dots,\mathfrak{M}_p-1\}$ and $\mathfrak{m}' \in \{0,\dots, \mathfrak{M}_{p+1}-1\}$. If $k_{p,\mathfrak{m}+1} \in K_{p+1,\mathfrak{m}'}$, then $K_{p,\mathfrak{m}} \subset K_{p+1,\mathfrak{m}'}$. \end{Claim} \subsubsection{Proof of the claims.}\label{SectionPreuveDesClaims} \begin{proof} [Proof of Claim~\ref{ClaimInclusionKpDansKp}.] The set $\{\mathfrak{m}' \in \{0,\dots, \mathfrak{M}_p-1\}, \; k_{p,\mathfrak{m}'+1} \leq k_{p+1,\mathfrak{m}+1} \}$ is non empty and we can thus define the largest integer $\mathfrak{m}'$ of $\{0,\dots, \mathfrak{M}_p-1\}$ such that $ k_{p,\mathfrak{m}'+1} \leq k_{p+1,\mathfrak{m}+1} $. We then have $$k_{p,\mathfrak{m}'} = \sup \left\{k < k_{p,\mathfrak{m}'+1}, \; \mathfrak{j}_{\text{min}}^{(i,\ell_k)} > p \right\}. $$ Since $k_{p,\mathfrak{m}'} < k_{p+1,\mathfrak{m}+1}$, \begin{eqnarray*} k_{p,\mathfrak{m}'} &=& \sup \left\{k < k_{p+1,\mathfrak{m}+1} , \; \mathfrak{j}_{\text{min}}^{(i,\ell_k)} > p \right\} \\ &\geq& \sup \left\{k < k_{p+1,\mathfrak{m}+1} , \; \mathfrak{j}_{\text{min}}^{(i,\ell_k)} > p + 1 \right\} \\ &\geq& k_{p+1,\mathfrak{m}}. \end{eqnarray*} Hence, $k_{p,\mathfrak{m}'+1} \geq k_{p,\mathfrak{m}'} + 1 \geq k_{p+1,\mathfrak{m}} + 1$. Finally, $k_{p,\mathfrak{m}'+1} \in K_{p,\mathfrak{m}}$. \end{proof} \begin{proof} [Proof of Claim~\ref{ClaimPreuveAlgoDim2Kappa0}] Let $i \in \{1,\dots,N-1\}$ and $\ell \in \{1,\dots,L_i\}$. Then, \begin{eqnarray*} \text{diam} (\Theta_i) &\leq& \sup_{1 \leq j \leq d} \widebar{R}_{\Theta_i,j} \big(b_{j}^{(i)} - a_{j}^{(i)} \big)^{\alpha_j} \\ &\leq& \left( \sup_{1 \leq j \leq d}\frac{\widebar{R}_{\Theta_i,j} }{\underline{R}_{\Theta_i,j} } \right) \sup_{1 \leq j \leq d} \underline{R}_{\Theta_i,j} \big(b_{j}^{(i)} - a_{j}^{(i)} \big)^{\alpha_j} \\ &\leq& \left( \sup_{1 \leq j \leq d}\frac{\widebar{R}_{\Theta_i,j} }{\underline{R}_{\Theta_i,j} } \right) \underline{R}_{\Theta_i,k^{(i)}} \big(b_{{k^{(i)}}}^{(i)} - a_{{k^{(i)}}}^{(i)} \big)^{\alpha_{k^{(i)}}}. 
\end{eqnarray*} Now, $\theta^{(i,\ell)}_{k^{(i)}} = a_{k^{(i)}}^{(i)}$ and $\theta'^{(i,\ell)}_{k^{(i)}} = b_{k^{(i)}}^{(i)}$ and thus \begin{eqnarray*} \text{diam} (\Theta_i) &\leq& \left( \sup_{1 \leq j \leq d}\frac{\widebar{R}_{\Theta_i,j} }{\underline{R}_{\Theta_i,j} } \right) \underline{R}_{\Theta_i,k^{(i)}} \big(\theta'^{(i,\ell)}_{{k^{(i)}}} - \theta^{(i,\ell)}_{{k^{(i)}}} \big)^{\alpha_{k^{(i)}}} \\ &\leq& \left( \sup_{1 \leq j \leq d}\frac{\widebar{R}_{\Theta_i,j} }{\underline{R}_{\Theta_i,j} } \right) \sup_{1 \leq j \leq d} \underline{R}_{\Theta_i,j} \big(\theta'^{(i,\ell)}_{j} - \theta^{(i,\ell)}_{j}\big)^{\alpha_j} \\ &\leq& \left( \sup_{1 \leq j \leq d}\frac{\widebar{R}_{\Theta_i,j} }{\underline{R}_{\Theta_i,j} } \right) h^2(f_{\boldsymbol{\theta}^{(i,\ell)}},f_{\boldsymbol{\theta}'^{(i,\ell)}}). \end{eqnarray*} We conclude by using $\widebar{R}_{\Theta_i,j}/ \underline{R}_{\Theta_i,j} \leq \widebar{R}_{j}/\underline{R}_{j}$. \end{proof} \begin{proof} [Proof of Claim~\ref{ClaimPreuveAlgoDimen2}.] Thanks to Claim~\ref{ClaimInclusionKpDansKp} (page~\pageref{ClaimInclusionKpDansKp}), we can define the smallest integer $\mathfrak{m}_0$ of $\{0,\dots,\mathfrak{M}_{p}-1\}$ such that $k_{p,\mathfrak{m}_0 + 1} \in K_{p+1,\mathfrak{m}'}$, and the largest integer $\mathfrak{m}_1$ of $\{0,\dots,\mathfrak{M}_{p}-1\}$ such that $k_{p,\mathfrak{m}_1 + 1} \in K_{p+1,\mathfrak{m}'}$. Define now $$\mathcal{M} = \left\{\mathfrak{m}_0,\mathfrak{m}_0+1,\dots,\mathfrak{m}_1\right\}.$$ Note that for all $\mathfrak{m} \in \{\mathfrak{m}_0,\dots,\mathfrak{m}_1\}$, $k_{p,\mathfrak{m}+1} \in K_{p+1,\mathfrak{m}'}$ (this follows from the fact that the sequence $(k_{p,\mathfrak{m}})_{\mathfrak{m}}$ is increasing). Let $\mathfrak{m} \in \{0,\dots,\mathfrak{M}_p-1\}$ be such that $k_{p,\mathfrak{m}} \in K_{p+1,\mathfrak{m}'}$ and $k_{p,\mathfrak{m}} \neq k_{p+1,\mathfrak{m}'+1}$. Then $ \mathfrak{j}_{\text{min}}^{(i,\ell_{k_{p,\mathfrak{m}}})} \leq p + 1$ and since $ \mathfrak{j}_{\text{min}}^{(i,\ell_{k_{p,\mathfrak{m}}})} > p $, we also have $\mathfrak{j}_{\text{min}}^{(i,\ell_{k_{p,\mathfrak{m}}})} = p + 1$. Consequently, $$ \theta_{p+1}^{ \left(i,\ell_{k_{p,\mathfrak{m}}+1}\right)} = \theta_{p+1}^{ \left(i,\ell_{k_{p,\mathfrak{m}}}\right)} + \varepsilon_{p+1}^{ \left(i,\ell_{k_{p,\mathfrak{m}}}\right)}. $$ Now, $\theta_{p+1}^{ \left(i,\ell_{k_{p,\mathfrak{m}}+1}\right)} = \theta_{p+1}^{ (i,\ell_{k_{p,\mathfrak{m}+1}})}$ since $k_{p,\mathfrak{m}}+1$ and $k_{p,\mathfrak{m}+1}$ belong to $K_{p,\mathfrak{m}}$.
The set $$\left[ \theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}}})} , \theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}}})} + \varepsilon_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}} })} \right] \bigcup \left[ \theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}+1}})}, \theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}+1}})} + \varepsilon_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}+1} })} \right] $$ is thus the interval $$\left[ \theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}}})} , \theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}+1}})} + \varepsilon_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}+1} })} \right].$$ We apply this argument for each $\mathfrak{m} \in \{\mathfrak{m}_0+1,\dots,\mathfrak{m}_1\}$ to derive that the set $$I = \bigcup_{\mathfrak{m}=\mathfrak{m}_0}^{\mathfrak{m}_1} \left[ \theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}+1}})} , \theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}+1}})} + \varepsilon_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}+1} })} \right]$$ is the interval $$I = \left[ \theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}_0+1}})} , \theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}_1+1}})} + \varepsilon_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}_1+1} })} \right].$$ The claim is proved if we show that $$\left[a_{p+1}^{(i)}, b_{p+1}^{(i)}\right] \subset I.$$ Since $I$ is an interval, it remains to prove that $a_{p+1}^{(i)} \in I$ and $b_{p+1}^{(i)} \in I$. We begin to show $a_{p+1}^{(i)} \in I$ by showing that $ a_{p+1}^{(i)} = \theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}_0+1}})}$. If $k_{p+1,\mathfrak{m}'} = 0$, then $\mathfrak{m}' = 0$ and $\mathfrak{m}_0 = 0$. Besides, since $1$ and $k_{p,1}$ belong to $K_{p,0}$, we have $\theta_{p+1}^{(i,\ell_{k_{p,1}})} = \theta_{p+1}^{(i,\ell_{1})}$. Now, $\theta_{p+1}^{(i,\ell_{1})} = a_{p+1}^{(i)}$ and thus $a_{p+1}^{(i)} \in I$. We now assume that $k_{p+1,\mathfrak{m}'} \neq 0$. Since $k_{p,\mathfrak{m}_0} \leq k_{p+1,\mathfrak{m}'}$, there are two cases. \begin{itemize} \item First case: $k_{p,\mathfrak{m}_0} = k_{p+1,\mathfrak{m}'}$. We then have $\mathfrak{j}_{\text{min}}^{(i,\ell_{k_{p,\mathfrak{m}_0}})} > p + 1$ and thus $\theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}_0}+1})}= a_{p+1}^{(i)}$. Since $k_{p,\mathfrak{m}_0+1}$ and $k_{p,\mathfrak{m}_0}+1$ belong to $K_{p,\mathfrak{m}_0}$, $\theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}_0+1}})} = \theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}_0}+1})}$ and thus $\theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}_0+1}})} = a_{p+1}^{(i)}$ as wished. \item Second case: $k_{p,\mathfrak{m}_0} + 1 \leq k_{p+1,\mathfrak{m}'}$. Then, $k_{p+1,\mathfrak{m}'} \in K_{p,\mathfrak{m}_0}$ and thus $$\theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}_0}+1})} = \theta_{p+1}^{(i,\ell_{k_{p+1,\mathfrak{m}'}})}.$$ Since $\mathfrak{j}_{\text{min}}^{(i,\ell_{k_{p+1,\mathfrak{m}'}})} > p + 1$, we have $\theta_{p+1}^{(i,\ell_{k_{p+1,\mathfrak{m}'}} )} + \varepsilon_{p+1}^{(i,\ell_{k_{p+1,\mathfrak{m}'}} )} \geq b_{p+1}^{(i)}$. By using the fact that the sequence $( \varepsilon_{p+1}^{(i,\ell_{k})})_k$ is decreasing, we then deduce $$\theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}_0} + 1})} + \varepsilon_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}_0} + 1} )} \geq b_{p+1}^{(i)}$$ and thus $\mathfrak{j}_{\text{min}}^{(i,\ell_{k_{p,\mathfrak{m}_0} + 1})} > p + 1$. This establishes that \begin{eqnarray} \label{EqPreuveThetap} \theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}_0} + 2})}= a_{p+1}^{(i)}. \end{eqnarray} Let us now show $k_{p,\mathfrak{m}_0} + 2 \leq k_{p,\mathfrak{m}_0+1}$. 
Otherwise, $k_{p,\mathfrak{m}_0} + 2 \geq k_{p,\mathfrak{m}_0+1} + 1 $ and thus $k_{p,\mathfrak{m}_0} + 1 \geq k_{p,\mathfrak{m}_0+1}$ which means that $k_{p,\mathfrak{m}_0} + 1 = k_{p,\mathfrak{m}_0+1}$ (we recall that $(k_{p,\mathfrak{m}})_{\mathfrak{m}}$ is an increasing sequence of integers). Since we are in the case where $ k_{p,\mathfrak{m}_0}+1 \leq k_{p+1,\mathfrak{m}'}$, we have $ k_{p,\mathfrak{m}_0+1} \leq k_{p+1,\mathfrak{m}'}$ which is impossible since $ k_{p,\mathfrak{m}_0+1} \in K_{p+1,\mathfrak{m}'}$. Consequently, since $k_{p,\mathfrak{m}_0} + 2 \leq k_{p,\mathfrak{m}_0+1}$, we have $k_{p,\mathfrak{m}_0} + 2 \in K_{p,\mathfrak{m}_0}$ and thus $\theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}_0+1} })}= \theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}_0}+2 })}$. We then deduce from (\ref{EqPreuveThetap}) that $\theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}_0+1} })} = a_{p+1}^{(i)}$ as wished. \end{itemize} We now show that $b_{p+1}^{(i)} \in I$ by showing that $\theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}_1+1}})} + \varepsilon_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}_1+1} })} \geq b_{p+1}^{(i)}$. If $\mathfrak{m}_1 = \mathfrak{M}_p-1$, then $$ \theta_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}_1+1}})} + \varepsilon_{p+1}^{(i,\ell_{k_{p,\mathfrak{m}_1+1} })} = \theta_{p+1}^{(i,\ell_{r})} + \varepsilon_{p+1}^{(i,\ell_{r })} = \theta_{p+1}^{(i,L_i)} + \varepsilon_{p+1}^{(i,L_{i })}.$$ Since $\mathfrak{j}_{\text{min}}^{(i, L_i)} = d$, we have $\theta_{p+1}^{(i,L_i)} + \varepsilon_{p+1}^{(i,L_{i })} \geq b_{p+1}^{(i)}$ which proves the result. We now assume that $\mathfrak{m}_1 < \mathfrak{M}_p-1$. We first prove that $k_{p,\mathfrak{m}_1+1} = k_{p+1,\mathfrak{m}'+1}$. If this equality does not hold, we derive from the inequality $k_{p,\mathfrak{m}_1+1} \leq k_{p+1,\mathfrak{m}'+1} < k_{p,\mathfrak{m}_1+2}$, that $k_{p,\mathfrak{m}_1+1} + 1 \leq k_{p+1,\mathfrak{m}'+1}$ and thus $k_{p+1,\mathfrak{m}'+1} \in K_{p,\mathfrak{m}_1+1}$. Since $\mathfrak{j}_{\text{min}}^{(i, \ell_{k_{p+1,\mathfrak{m}'+1}})} > p +1$, we have $$\theta_{p+1}^{(i,\ell_{k_{p+1,\mathfrak{m}'+1}} )} + \varepsilon_{p+1}^{(i,\ell_{k_{p+1,\mathfrak{m}'+1}} )} \geq b_{p+1}^{(i)}.$$ Hence, $$\theta_{p+1}^{(i,\ell_{(k_{p,\mathfrak{m}_1+1})+1 })} + \varepsilon_{p+1}^{(i,\ell_{(k_{p,\mathfrak{m}_1+1})+1 } )} \geq b_{p+1}^{(i)} \quad \text{which implies} \quad \mathfrak{j}_{\text{min}}^{(i,\ell_{(k_{p,\mathfrak{m}_1+1})+1 })} > p + 1.$$ Since $$ k_{p+1,\mathfrak{m}'+1} = \inf \left\{k > k_{p+1,\mathfrak{m}'} , \; \mathfrak{j}_{\text{min}}^{(i,\ell_{k})} > p + 1\right\}$$ and $k_{p,\mathfrak{m}_1+1} + 1 > k_{p+1,\mathfrak{m}'}$, we have $ k_{p+1,\mathfrak{m}'+1} \leq k_{p,\mathfrak{m}_1+1} + 1 $. Moreover, since $ k_{p+1,\mathfrak{m}'+1} \geq k_{p,\mathfrak{m}_1+1} + 1 $, we have $k_{p,\mathfrak{m}_1+1} + 1 = k_{p+1,\mathfrak{m}'+1}$. Consequently, $$k_{p,\mathfrak{m}_1+2} = \inf \left\{k > k_{p,\mathfrak{m}_1+1} , \; \mathfrak{j}_{\text{min}}^{(i,\ell_{k})} > p \right\} = k_{p+1,\mathfrak{m}'+1}.$$ This is impossible because $ k_{p+1,\mathfrak{m}'+1} < k_{p,\mathfrak{m}_1+2}$, which finally implies that $k_{p,\mathfrak{m}_1+1} = k_{p+1,\mathfrak{m}'+1}$. We then deduce from this equality that $$\mathfrak{j}_{\text{min}}^{(i,\ell_{k_{p,\mathfrak{m}_1+1}})} = \mathfrak{j}_{\text{min}}^{(i,\ell_{k_{p+1,\mathfrak{m}'+1}})} > p + 1.$$ Hence $ \theta_{p+1}^{(i, \ell_{k_{p,\mathfrak{m}_1+1} } )} + \varepsilon_{p+1}^{(i, \ell_{k_{p,\mathfrak{m}_1+1} } )} \geq b_{p+1}^{(i)} $ and thus $b_{p+1}^{(i)} \in I$. This ends the proof.
\end{proof} \begin{proof} [Proof of Claim~\ref{ClaimPreuveAlgoDimen2deuxieme}.] We have \begin{eqnarray*} k_{p+1,\mathfrak{m}'} = \sup \left\{ k < k_{p+1,\mathfrak{m}'+1}, \; \mathfrak{j}_{\text{min}}^{(i,\ell_{k})} > p + 1 \right\}. \end{eqnarray*} Since $k_{p,\mathfrak{m}+1} > k_{p+1,\mathfrak{m}'}$, \begin{eqnarray*} k_{p+1,\mathfrak{m}'} &=& \sup \left\{ k < k_{p,\mathfrak{m}+1}, \; \mathfrak{j}_{\text{min}}^{(i,\ell_{k})} > p + 1 \right\}\\ &\leq& \sup \left\{ k < k_{p,\mathfrak{m}+1}, \; \mathfrak{j}_{\text{min}}^{(i,\ell_{k})} > p \right\}\\ &\leq& k_{p,\mathfrak{m}}. \end{eqnarray*} We then derive from the inequalities $k_{p+1,\mathfrak{m}'} \leq k_{p,\mathfrak{m}}$ and $k_{p,\mathfrak{m}+1} \leq k_{p+1,\mathfrak{m}'+1}$ that $K_{p,\mathfrak{m}} \subset K_{p+1,\mathfrak{m}'}$. \end{proof} \section{Annex: implementation of the procedure when $d \geq 2$} \label{SectionImplementationProcedure} We give in the following sections the values of $\underline{R}_{\mathcal{C},j}$, $\bar{r}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$ and $\underline{{r}}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$ we have used in the simulation study of Section~\ref{SectionSimulationDimD}. We do not claim that they minimize the number of tests to compute. The number of tests that have been computed in the simulation study with these choices of parameters may be found in Section~\ref{SectionVitesseProcedureDimD}. \subsection{Example 1.} In the case of the Gaussian model, it is worthwhile to notice that the Hellinger distance between two densities $f_{(m,\sigma)}$ and $f_{(m',\sigma')}$ can be made explicit: $$h^2 \left(f_{(m,\sigma)}, f_{(m',\sigma')}\right) = 1 - \sqrt{\frac{2\sigma \sigma' }{\sigma^2 + \sigma'^2}} \exp \left(- \frac{(m-m')^2}{4 (\sigma^2 + \sigma'^2)}\right).$$ For all $\xi > 0$, a sufficient condition for $ h^2 (f_{(m,\sigma)}, f_{(m',\sigma')}) \leq \xi $ is thus $$\sqrt{\frac{2\sigma \sigma' }{\sigma^2 + \sigma'^2}} \geq \sqrt{1-\xi} \quad \text{and} \quad \exp \left(- \frac{(m-m')^2}{4 (\sigma^2 + \sigma'^2)}\right) \geq \sqrt{1-\xi}.$$ One then deduces that the rectangle \begin{eqnarray*} & & \hspace{-2cm} \left[ m - 2 \frac{1-\sqrt{2\xi-\xi^2}}{1-\xi} \sqrt{\log \left(\frac{1}{1-\xi}\right) } \sigma, m +2 \frac{1-\sqrt{2\xi-\xi^2}}{1-\xi} \sqrt{\log \left(\frac{1}{1-\xi}\right) } \sigma \right] \\ & & \qquad \times \left[\frac{1-\sqrt{2\xi-\xi^2}}{1-\xi}\sigma , \frac{1+\sqrt{2\xi-\xi^2}}{1-\xi}\sigma \right] \end{eqnarray*} is included in the Hellinger ball $$\left\{(m',\sigma') \in \mathbb{R} \times (0,+\infty), \, h^2 (f_{(m,\sigma)}, f_{(m',\sigma')}) \leq \xi \right\}.$$ Given $\boldsymbol{\theta} = (m,\sigma)$, $\boldsymbol{\theta}' = (m', \sigma')$, we can then define $\underline{r}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$, $\bar{{r}}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$ by \begin{eqnarray*} \underline{\boldsymbol{r}}_{\mathcal{C}} (\boldsymbol{\theta},\boldsymbol{\theta}') &=& \left(2 \frac{1-\sqrt{2\xi-\xi^2}}{1-\xi} \sqrt{\log \left(\frac{1}{1-\xi}\right) } \sigma, \frac{-\xi+\sqrt{2\xi-\xi^2}}{1-\xi}\sigma \right) \\ \bar{\boldsymbol{r}}_{\mathcal{C}} (\boldsymbol{\theta},\boldsymbol{\theta}') &=& \left( 2 \frac{1-\sqrt{2\xi-\xi^2}}{1-\xi} \sqrt{\log \left(\frac{1}{1-\xi}\right) }\sigma, \frac{\xi+\sqrt{2\xi-\xi^2}}{1-\xi}\sigma\right) \end{eqnarray*} where $\xi = \kappa H^2 (f_{\boldsymbol{\theta}}, f_{\boldsymbol{\theta}'}) $.
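For illustration purposes only, the closed-form expressions above are straightforward to transcribe into code. The following Python snippet is a sketch of ours (the function names are hypothetical and it is not part of the code used for the simulation study); it evaluates the squared Hellinger distance between two Gaussian densities and the rectangle included in the Hellinger ball of squared radius $\xi$ around $(m,\sigma)$:
\begin{verbatim}
import numpy as np

def hellinger2_gauss(m, s, mp, sp):
    # Squared Hellinger distance between N(m, s^2) and N(mp, sp^2), closed form.
    bc = np.sqrt(2.0 * s * sp / (s**2 + sp**2)) \
         * np.exp(-(m - mp)**2 / (4.0 * (s**2 + sp**2)))
    return 1.0 - bc

def inner_rectangle(m, s, xi):
    # Rectangle included in the Hellinger ball of squared radius xi (0 < xi < 1)
    # around (m, s), following the displayed formulas above.
    c = np.sqrt(2.0 * xi - xi**2)
    half_m = 2.0 * (1.0 - c) / (1.0 - xi) * np.sqrt(np.log(1.0 / (1.0 - xi))) * s
    return (m - half_m, m + half_m), \
           ((1.0 - c) / (1.0 - xi) * s, (1.0 + c) / (1.0 - xi) * s)
\end{verbatim}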
We now consider a rectangle $\mathcal{C} = \left[\underline{m}_0, \widebar{m}_0 \right] \times \left[\underline{\sigma}_0, \bar{\sigma}_0\right]$ of $\mathbb{R} \times (0,+\infty)$ and aim at choosing $\boldsymbol{\underline{R}}_{\mathcal{C}} = \left(\underline{R}_{\mathcal{C},1}, \underline{R}_{\mathcal{C},2}\right) $. For all $(m,\sigma)$, $(m', \sigma') \in \mathcal{C}$, \begin{eqnarray*} h^2 \left(f_{(m,\sigma)}, f_{(m',\sigma')}\right) = 1 - \sqrt{\frac{2\sigma \sigma' }{\sigma^2 + \sigma'^2}} + \sqrt{\frac{2\sigma \sigma' }{\sigma^2 + \sigma'^2}} \left[1 - \exp \left(- \frac{(m-m')^2}{4 (\sigma^2 + \sigma'^2)}\right)\right]. \end{eqnarray*} Yet, \begin{eqnarray*} 1 - \sqrt{\frac{2\sigma \sigma' }{\sigma^2 + \sigma'^2}} = \frac{\left(\sigma'-\sigma\right)^2}{ \left(\sqrt{\sigma^2 + \sigma'^2} + \sqrt{2 \sigma \sigma'} \right) \sqrt{\sigma^2 + \sigma'^2}} \geq \frac{\left(\sigma'-\sigma\right)^2}{4 \bar{\sigma}_0^2} \end{eqnarray*} and $$ \sqrt{\frac{2\sigma \sigma' }{\sigma^2 + \sigma'^2}} \geq \sqrt{\frac{2 \bar{\sigma}_0 \underline{\sigma}_0}{ \bar{\sigma}^2_0+ \underline{\sigma}^2_0 }}.$$ Moreover, \begin{eqnarray*} 1 - \exp \left(- \frac{(m-m')^2}{4 (\sigma^2 + \sigma'^2)}\right) \geq \frac{1 - e^{-{(\widebar{m}_0-\underline{m}_0)^2}/{(8 \bar{\sigma}_0^2)}}}{ (\widebar{m}_0-\underline{m}_0)^2} (m'-m)^2. \end{eqnarray*} In particular, we have proved that $$h^2 \left(f_{(m,\sigma)}, f_{(m',\sigma')}\right) \geq \max \left\{ \sqrt{\frac{2 \bar{\sigma}_0 \underline{\sigma}_0}{ \bar{\sigma}^2_0+ \underline{\sigma}^2_0 }} \frac{1 - e^{-{(\widebar{m}_0-\underline{m}_0)^2}/{(8 \bar{\sigma}_0^2)}}}{ (\widebar{m}_0-\underline{m}_0)^2} (m'-m)^2, \frac{1}{4 \bar{\sigma}^2_0} (\sigma - \sigma')^2\right\}$$ which means that we can take \begin{eqnarray*} \boldsymbol{\underline{R}}_{\mathcal{C}} = \left(\sqrt{\frac{2 \bar{\sigma}_0 \underline{\sigma}_0}{ \bar{\sigma}^2_0+ \underline{\sigma}^2_0 }} \frac{1 - e^{-{(\widebar{m}_0-\underline{m}_0)^2}/{(8 \bar{\sigma}_0^2)}}}{ (\widebar{m}_0-\underline{m}_0)^2} , \frac{1}{4 \bar{\sigma}_0^2}\right). \end{eqnarray*} \subsection{Example 2.} In the case of the Cauchy model, the Hellinger distance cannot be made explicit. However, we can use Theorem~7.6 of~\cite{Ibragimov1981} (chapter~1) to show that for all $m \in \mathbb{R}$, $\sigma > 0$, $$h^2 \left(f_{(0,1)},f_{(m,1)}\right) \leq m^2/16 \quad \text{and} \quad h^2 \left(f_{(0,1)},f_{(0,\sigma)}\right) \leq (\log^2 \sigma)/16.$$ Now, \begin{eqnarray*} h \left(f_{(m,\sigma)},f_{(m',\sigma')}\right) &\leq& h \left(f_{(m,\sigma)},f_{(m',\sigma)}\right) + h \left(f_{(m',\sigma)},f_{(m',\sigma')}\right) \\ &\leq& h \left(f_{(0,1)},f_{((m'-m)/\sigma,1)}\right) + h \left(f_{(0,1)},f_{(0,\sigma'/\sigma)}\right) \\ &\leq& \frac{|m'-m|}{4 \sigma} + \frac{|\log (\sigma'/\sigma)|}{4 }. 
\end{eqnarray*} For all $\xi > 0$, one then deduces that the rectangle $$\left[ m- 2 \sigma \sqrt{\xi} , m +2 \sigma \sqrt{\xi} \right] \times \left[\sigma e^{- 2 \sqrt{ \xi}}, \sigma e^{2 \sqrt{\xi}} \right]$$ is included in the Hellinger ball $$\left\{(m',\sigma') \in \mathbb{R} \times (0,+\infty), \, h^2 (f_{(m,\sigma)}, f_{(m',\sigma')}) \leq \xi \right\}.$$ This provides the values of $\underline{r}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$ and $\bar{{r}}_{\mathcal{C},j} (\boldsymbol{\theta},\boldsymbol{\theta}')$: given $\mathcal{C} \subset \Theta$, $\boldsymbol{\theta} = (m,\sigma)$, $\boldsymbol{\theta}' = (m', \sigma') \in \mathcal{C}$, we can take \begin{eqnarray*} \underline{\boldsymbol{r}}_{\mathcal{C}} (\boldsymbol{\theta},\boldsymbol{\theta}') &=& \left(2 \sigma \sqrt{{ \kappa H^2 (f_{\boldsymbol{\theta}}, f_{\boldsymbol{\theta}'})}} , \sigma - \sigma e^{- 2 \sqrt{ \kappa H^2 (f_{\boldsymbol{\theta}}, f_{\boldsymbol{\theta}'}) }} \right)\\ \bar{\boldsymbol{r}}_{\mathcal{C}} (\boldsymbol{\theta},\boldsymbol{\theta}') &=& \left(2 \sigma \sqrt{{ \kappa H^2 (f_{\boldsymbol{\theta}}, f_{\boldsymbol{\theta}'})}} , \sigma e^{2 \sqrt{ \kappa H^2 (f_{\boldsymbol{\theta}}, f_{\boldsymbol{\theta}'}) }} - \sigma\right). \end{eqnarray*} For every rectangle $\mathcal{C} \subset \mathbb{R} \times (0,+\infty)$ we choose $\underline{R}_{\mathcal{C},1} = \underline{R}_{\mathcal{C},2}$. Notice that this choice makes it easy to find the number $k$ that appears at line~1 of Algorithm~\ref{algoConstructionDimQuelquonqueAvant} since then the equation becomes $$ b_{k} - a_{k} = \max_{1 \leq j \leq 2} (b_{j} - a_{j}).$$ \subsection{Example 3.} Let $\xi > 0$, $a, b > 0$ and $\mathcal{C}$ be the rectangle $\mathcal{C} = [a_1, a_2] \times [b_1, b_2] \subset (0,+\infty)^2$. We aim at finding a rectangle~$\mathcal{R}$ containing $(a,b)$ such that $$\mathcal{C} \cap \mathcal{R} \subset \left\{(a',b') \in (0,+\infty)^2, \, h^2 (f_{(a,b)}, f_{(a',b')}) \leq \xi \right\}.$$ For this, notice that for all positive numbers $a', b'$, \begin{eqnarray*} h^2 \left(f_{(a,b)}, f_{(a',b')}\right) \leq 2 h^2 \left(f_{(a,b)}, f_{(a,b')}\right) + 2 h^2 \left(f_{(a,b')}, f_{(a',b')} \right). \end{eqnarray*} Now, \begin{eqnarray*} h^2 \left(f_{(a,b)}, f_{(a,b')} \right) = 1 - \left(\frac{2 \sqrt{b b'}}{b+b'}\right)^{a}. \end{eqnarray*} Let $\Gamma'$ be the derivative of the Gamma function $\Gamma$ and $\psi$ be the derivative of the digamma function $\Gamma'/\Gamma$. We derive from Theorem~7.6 of~\cite{Ibragimov1981} that \begin{eqnarray*} h^2 \left(f_{(a,b')}, f_{(a',b')}\right) \leq \frac{(a'-a)^2}{8} \sup_{t \in [\min(a,a'), \max(a,a')]} \psi (t). \end{eqnarray*} The function $\psi$ being non-increasing, \begin{eqnarray*} h^2 \left(f_{(a,b')}, f_{(a',b')}\right) \leq \begin{cases} 1/8 \psi (a) (a' - a)^2 & \text{if $a' \geq a$} \\ 1/8 \psi(a_1) (a' - a)^2 & \text{if $a' < a$.} \end{cases} \end{eqnarray*} We deduce from the above inequalities that we can take $$\mathcal{R} = \left[a - \sqrt{\frac{2}{\psi (a_1)} \xi}, a + \sqrt{\frac{2}{\psi (a)} \xi} \right] \times \left[ \frac{2-\xi'^2- 2 \sqrt{1-\xi'^2}}{\xi'^2} b, \frac{2-\xi'^2+ 2 \sqrt{1-\xi'^2}}{\xi'^2} b \right] $$ where $\xi' = (1 - \xi/4)^{1/a}$. For every rectangle $\mathcal{C}' \subset (0,+\infty)^2$ we define $\underline{R}_{\mathcal{C}',1} = \underline{R}_{\mathcal{C}',2}$. \subsection{Example 4.} As in the preceding example, we consider $\xi > 0$, $a, b > 0$ and the rectangle $\mathcal{C} = [a_1, a_2] \times [b_1, b_2] \subset (0,+\infty)^2$.
Our aim is to find a rectangle~$\mathcal{R}$ containing $(a,b)$ such that $$\mathcal{C} \cap \mathcal{R} \subset \left\{(a',b') \in (0,+\infty)^2, \, h^2 (f_{(a,b)}, f_{(a',b')}) \leq \xi \right\}.$$ For all positive numbers $a', b'$, \begin{eqnarray*} h^2 \left(f_{(a,b)}, f_{(a',b')}\right) \leq 2 h^2 \left(f_{(a,b)}, f_{(a,b')}\right) + 2 h^2 \left(f_{(a,b')}, f_{(a',b')} \right). \end{eqnarray*} We derive from Theorem~7.6 of~\cite{Ibragimov1981} that \begin{eqnarray*} h^2 \left(f_{(a,b)}, f_{(a,b')} \right) \leq \frac{ (b' - b)^2 }{8} \sup_{t \in [\min(b,b'), \max(b,b')]} \left| \psi \left( t \right) - \psi \left(a + t \right) \right| \end{eqnarray*} where $\psi$ is defined in the preceding example. By using the monotony of the function $t \mapsto \psi (t) - \psi (a+t)$, we deduce that if $b' > b_1$, \begin{eqnarray*} h^2 \left(f_{(a,b)}, f_{(a,b')} \right) \leq \begin{cases} 1/8 \left( \psi (b) - \psi (a+b) \right) (b' - b)^2 & \text{if $b' \geq b$} \\ 1/8 \left( \psi (b_1) - \psi (a+b_1)\right) (b' - b)^2 & \text{if $b' < b$.} \end{cases} \end{eqnarray*} Similarly, \begin{eqnarray*} h^2 \left(f_{(a,b')}, f_{(a',b')} \right) \leq \frac{ (a' - a)^2 }{8} \sup_{t \in [\min(a,a'), \max(a,a')]} \left| \psi \left( t \right) - \psi \left(b' + t \right) \right|. \end{eqnarray*} Hence, if $a' \in [a_1,a_2]$ and $b' \in [b_1,b_2]$, \begin{eqnarray*} h^2 \left(f_{(a,b')}, f_{(a',b')} \right) \leq \begin{cases} 1/8 \left( \psi (a) - \psi (a+b_2) \right) (a' - a)^2 & \text{if $a' \geq a$} \\ 1/8 \left( \psi (a_1) - \psi (a_1+b_2)\right) (a' - a)^2 & \text{if $a' < a$.} \end{cases} \end{eqnarray*} We deduce from the above inequalities that we can take \begin{eqnarray*} \mathcal{R} &=& \left[a - \sqrt{\frac{{ 2 \xi}}{ \psi (a_1) - \psi (a_1 +b_2)} }, a + \sqrt{\frac{{ 2 \xi}}{ \psi (a) - \psi (a +b_2)} } \right] \\ & & \qquad \times \left[b - \sqrt{\frac{{ 2 \xi}}{ \psi (b_1) - \psi (a +b_1)} }, b + \sqrt{\frac{{ 2 \xi}}{ \psi (b) - \psi (a +b)} } \right]. \end{eqnarray*} As in the two last examples, we take $\underline{R}_{\mathcal{C}',1} = \underline{R}_{\mathcal{C}',2}$ for all rectangle $\mathcal{C}' \subset (0,+\infty)^2$. \subsection{Example 5.} For all $m,m' \in \mathbb{R}$, $\lambda, \lambda' > 0$, $$h^2 \left(f_{(m,\lambda)},f_{(m',\lambda')}\right) = \begin{cases} 1 - \frac{2 \sqrt{\lambda \lambda'}}{\lambda + \lambda'} e^{- \frac{\lambda}{2} |m' - m| } & \text{if $m' \geq m$} \\ 1 - \frac{2 \sqrt{\lambda \lambda'}}{\lambda + \lambda'} e^{ - \frac{\lambda'}{2} |m' - m|} & \text{if $m' \leq m$.} \end{cases} $$ We consider $\xi > 0$ and aim at finding~$\mathcal{R}$ containing $(m,\lambda)$ such that $$\mathcal{R} \subset \left\{(m',\lambda') \in \mathbb{R} \times (0,+\infty), \, h^2 (f_{(m,\lambda)}, f_{(m',\lambda')}) \leq \xi \right\}.$$ Notice that if $m' \geq m$ and if \begin{eqnarray*} \frac{2 \sqrt{\lambda \lambda'}}{\lambda + \lambda'} \geq \sqrt{1-\xi} \quad \text{and} \quad e^{- \frac{\lambda}{2} (m'-m)} \geq \sqrt{1-\xi} \end{eqnarray*} then $h^2 (f_{(m,\lambda)}, f_{(m',\lambda')}) \leq \xi $. Similarly, if $m' \leq m$ and if \begin{eqnarray*} \frac{2 \sqrt{\lambda \lambda'}}{\lambda + \lambda'} \geq \sqrt{1-\xi} \quad \text{and} \quad e^{-\frac{\lambda'}{2} (m-m')} \geq \sqrt{1-\xi} \end{eqnarray*} then $h^2 (f_{(m,\lambda)}, f_{(m',\lambda')}) \leq \xi $. 
We can then take $$\mathcal{R} = \left[m- \frac{ 1-\xi}{1 + \xi + 2 \sqrt{\xi}} \frac{ \log \left(1/ (1-\xi) \right)}{\lambda}, m +\frac{ \log \left(1/ (1-\xi) \right)}{\lambda}\right] \times \left[ \frac{1 + \xi - 2 \sqrt{\xi}}{ 1 - \xi }\lambda, \frac{1 + \xi + 2 \sqrt{\xi}}{ 1-\xi }\lambda \right].$$ Let now $\mathcal{C}' = \left[\underline{m}_0, \bar{m}_0 \right] \times \left[\underline{\lambda}_0, \bar{\lambda}_0\right]$ be a rectangle of $\mathbb{R} \times (0,+\infty)$. By proceeding as in the Gaussian model, we can define $\boldsymbol{\underline{R}}_{\mathcal{C}'} $ by \begin{eqnarray*} \boldsymbol{\underline{R}}_{\mathcal{C}'} = \left(\underline{R}_{\mathcal{C}',1}, \underline{R}_{\mathcal{C}',2}\right) = \left(\frac{2 \sqrt{\bar{\lambda}_0 \underline{\lambda}_0}}{ \bar{\lambda}_0+ \underline{\lambda}_0 } \frac{1- e^{-\underline{\lambda}_0 (\bar{m}_0-\underline{m}_0)/2}} {\bar{m}_0 - \underline{m}_0} , \frac{1}{8 \bar{\lambda}_0^2}\right). \end{eqnarray*} \subsection{Example 6.} For all $m, m' \in \mathbb{R}$, $r, r' > 0$, \begin{eqnarray*} h^2 \left(f_{(m,r)},f_{(m',r')}\right) = 1 - \frac{\left(\min \{m +r , m'+r' \} - \max \{m,m'\}\right)_+}{\sqrt{r r'}} \end{eqnarray*} where $(\cdot)_+$ is the positive part of $(\cdot)$. We consider $\xi \in (0,\bar{\kappa})$, and aim at finding a rectangle~$\mathcal{R}$ containing $(m,r)$ such that $$ \mathcal{R} \subset \left\{(m',r') \in (0,+\infty)^2, \, h^2 (f_{(m,r)}, f_{(m',r')}) \leq \xi \right\}.$$ For this, we begin to assume that $m' \leq m + r$ and $m' + r' \geq m$ to ensure that $$h^2 \left(f_{(m,r)},f_{(m',r')}\right) = 1 - \frac{ \min \{m +r , m'+r' \} - \max \{m,m'\} }{\sqrt{r r'}}.$$ Several cases are involved \begin{itemize} \item If $m' \leq m$ and $m'+r' \geq m + r$, a sufficient condition for $h^2 \left(f_{(m,r)},f_{(m',r')}\right) \leq \xi$ is $$r' \leq \frac{1}{(1-\xi)^2} r.$$ \item If $m' \geq m$ and $m'+r' \leq m + r$, a sufficient condition is $$r' \geq (1-\xi)^2 r.$$ \item If $m' \leq m$ and if $m' + r' \leq m + r$, a sufficient condition is $$m - m' \leq \left( \sqrt{r'} - (1-\xi) \sqrt{r} \right) \sqrt{r'},$$ which holds when $$r' \geq (1 - \xi/2)^2 r \quad \text{and} \quad |m' - m| \leq \xi/2\sqrt{1-\xi/2} r.$$ \item If $m' \geq m$ and if $m' + r' \geq m + r$, a sufficient condition is $$m' - m \leq \left(\sqrt{r} - (1-\xi) \sqrt{r'} \right) \sqrt{r}.$$ This condition is fulfilled when $$r' \leq \frac{1}{(1 - \xi/2)^2} r \quad \text{and} \quad |m' - m| \leq \frac{\xi}{2 - \xi} r.$$ \end{itemize} We can verify that if $(m',r')$ belongs to the rectangle \begin{eqnarray*} \mathcal{R} = \left[m - \frac{\xi \sqrt{2 - \xi}}{2 \sqrt{2}} r, m+ \frac{\xi}{2-\xi} r \right] \times \left[ \left(1 - \xi/2 \right)^2 r, \frac{r}{\left(1 - \xi/2 \right)^2} \right] \end{eqnarray*} then $m' \leq m + r$ and $m' + r' \geq m$ (since $\xi \leq \bar{\kappa}$). The rectangle $\mathcal{R}$ suits. For all rectangle $\mathcal{C}' \subset (0,+\infty)^2$ we choose in this example $\underline{R}_{\mathcal{C}',1} = \underline{R}_{\mathcal{C}',2}$. \subsection{Speed of the procedure.} \label{SectionVitesseProcedureDimD} By way of indication, we give below the number of tests that have been calculated in the simulation study of Section~\ref{SectionSimulationDimD}. 
\begin{figure}[H] \begin{center} \begin{tabular}{|c||c|c|c|c|} \hline & $n = 25$ & $n = 50$ & $n = 75$ & $n = 100$ \\ \hline Example 1 & 1602 (117) & 1577 (72) & 1570 (60) & 1567 (52) \\ \hline Example 2 & 2935 (90) & 2937 (76) & 2938 (69) & 2938 (64) \\ \hline Example 3 & 9082 (1846) & 8700 (1183) & 8569 (934) & 8511 (800) \\ \hline Example 4 & 10411 (778) & 10272 (461) & 10236 (357) & 10222 (304) \\ \hline Example 5 & 6691 (296) & 6699 (210) & 6715 (175) & 6726 (158) \\ \hline Example 6 & 32614 (1238) & 33949 (1211) & 34822 (1190) & 35397 (1177) \\ \hline \end{tabular} \end{center} \caption{Number of tests computed averaged over $10^4$ samples and their corresponding standard deviations in brackets.} \label{figureriskcomparaison} \end{figure} \thanks{Acknowledgements: the author acknowledges the support of the French Agence Nationale de la Recherche (ANR), under grant Calibration (ANR 2011 BS01 010 01). We are thankful to Yannick Baraud for his valuable suggestions and careful reading of the paper.} \bibliographystyle{apalike}
\section{Introduction} For a complex algebraic variety $X$ there exists a mixed Hodge structure $(W_{\bullet}, F^{\bullet})$ on the homology group $H_{*}(X;\mathbb Q)$(\cite{De1, De2}). In \cite{Mo} J. W. Morgan first put mixed Hodge structures on the rational homotopy groups in the smooth case. Then, Morgan's results were extended to singular varieties by R. M. Hain \cite{Hain 2} (cf. \cite{Hain 1}) and V. Navarro-Aznar \cite{Nav} independently (e.g., see \cite[p.234, Historical Remarks]{PS}). Then as defined in the abstract we can define the following polynomials of three variables $t,u,v$ (see Remark \ref{rem1} below) : $$MH_X(t,u,v) := \sum_{k,p,q} \dim \Bigl ( Gr_{F^{\bullet}}^{p} Gr^{W_{\bullet}}_{p+q} H_k (X;\mathbb C) \Bigr) t^{k} u^{-p} v^{-q},$$ $$\quad \, \, MH^{\pi}_X(t,u,v) := \sum_{k,p,q} \dim \Bigl (Gr_{\tilde F^{\bullet}}^{p} Gr^{\tilde W_{\bullet}}_{p+q} (\pi_k(X) \otimes \mathbb C) \Bigr ) t^ku^{-p} v^{-q}.$$ \begin{rem}\label{rem1} In this paper we consider the rational homology groups $H_k(X;\mathbb Q)$ instead of the cohomology groups $H^k(X;\mathbb Q) \cong Hom(H_k(X;\mathbb Q), \mathbb Q)$ (by the universal coefficient theorem), thus the mixed Hodge structures have both $p, q$ negative, thus negative weights. Therefore in defining the mixed Hodge polynomial $MH_X(t,u,v)$ we consider $u^{-p} v^{-q}$ instead of $u^pv^q$ (cf. \cite[p.35]{PS}). It is the same for the homotopical mixed Hodge polynomial $MH^{\pi}_X(t,u,v)$. In other words, the above two polynomials can be also defined respectively using the cohomology groups $H^k (X;\mathbb C)$ and the dual $(\pi_k(X)\otimes \mathbb C)^{\vee}= Hom(\pi_k(X)\otimes \mathbb C; \mathbb C)$ of the homotopy group $\pi_k(X)\otimes \mathbb C$ by $$MH_X(t,u,v) := \sum_{k,p,q} \dim \Bigl ( Gr_{F^{\bullet}}^{p} Gr^{W_{\bullet}}_{p+q} H^k (X;\mathbb C) \Bigr) t^{k} u^p v^q, $$ $$\quad \quad \, \, \, MH^{\pi}_X(t,u,v) := \sum_{k,p,q} \dim \Bigl (Gr_{\tilde F^{\bullet}}^{p} Gr^{\tilde W_{\bullet}}_{p+q} ((\pi_k(X)\otimes \mathbb C)^{\vee})\Bigr ) t^ku^p v^q. $$ \end{rem} \begin{rem} In order to get the mixed Hodge structure on the homotopy groups, in fact it suffices that the algebraic variety is nilpotent in the sense that $\pi_1$ is nilpotent and acting nilpotently on higher homotopy groups (e.g.,see \cite[Remark 8.12]{PS}). Simply connected is then a particular case. \end{rem} The first polynomial is well-known, usually called the mixed Hodge polynomial and has been studied very well. The second one is a homotopical analogue, defined by the mixed Hodge structure on the homotopy groups $\pi_{*}(X)$. So, we call these two polynomials respectively \emph{the homological mixed Hodge polynomial} and \emph{the homotopical mixed Hodge polynomial}. Here we observe the following for the special values $(u,v)=(1,1)$:\\ $$P_{X}(t) = MH_X(t,1,1) = \sum_{k\geqq 0} \dim H_k(X;\mathbb C) t^{k} = 1 + \sum_{k\geqq 1} \dim H_k(X;\mathbb C) t^{k},$$ $$\, \, \, P^{\pi}_{X}(t) = MH^{\pi}_X(t,1,1) = \sum_{k\geqq 2} \dim (\pi_{k}(X) \otimes \mathbb C) t^{k} =\sum_{k\geqq 2} \dim (\pi_{k}(X) \otimes \mathbb Q) t^{k}.$$ The first polynomial is the usual \emph{Poincar\'e polynomial} and the second one is its homotopical analogue, called the \emph{homotopical Poincar\'e polynomial}. In this note we discuss some inequalities concerning these two mixed Hodge polynomials $MH_X(t,u,v)$ and $MH^{\pi}_X(t,u,v)$. More details will appear elsewhere. 
\section{Homological mixed Hodge polynomial and homotopical mixed Hodge polynomial} The most important and fundamental topological invariant in geometry and topology is the Euler--Poincar\'e characteristic $\chi(X)$, which is defined to be the alternating sum of the Betti numbers $\beta_i(X):=\dim_{\mathbb Q} H_i(X;{\mathbb Q}) = \dim_{\mathbb C} H_i(X;\mathbb C) $: $$\chi(X):= \sum_{i \geqq 0} (-1)^i\beta_i(X),$$ provided that each $\beta_i(X)$ and $\chi(X)$ are both finite. Similarly, for a topological space whose fundamental group is an Abelian group one can define the \emph{homotopical Betti number} $\beta^{\pi}_i(X):= \dim (\pi_i(X)\otimes {\mathbb Q})$ where $i\geqq 1$ and the \emph{homotopical Euler--Poincar\'e characteristic}: $$\chi^{\pi}(X):= \sum_{i \geqq 1} (-1)^i\beta^{\pi}_i(X),$$ provided that each $\beta^{\pi}_i(X)$ and $\chi^{\pi}(X)$ are both finite. The Euler--Poincar\'e characteristic is the special value of the Poincar\'e polynomial $P_X(t)$ at $t=-1$ and the homotopical Euler--Poincar\'e characteristic is the special value of the homotopical Poincar\'e polynomial $ P^{\pi}_X(t)$ at $t=-1$: $$P_X(t):= \sum_{i \geqq 0} t^i \beta_i(X), \quad \chi(X) = P_X(-1),$$ $$ P^{\pi}_X(t):= \sum_{i \geqq 1} t^i \beta^{\pi}_i(X), \quad \chi^{\pi}(X) = P^{\pi}_X(-1).$$ The Poincar\'e polynomial $P_X(t)$ is \emph{multiplicative} in the following sense: $$P_{X \times Y}(t) = P_X(t) \times P_Y(t),$$ which follows from the K\"unneth Formula: $$H_n(X \times Y;{\mathbb Q})= \sum_{i+j=n} H_i(X; {\mathbb Q}) \otimes H_j(Y;{\mathbb Q}).$$ The homotopical Poincar\'e polynomial $P^{\pi}_X(t)$ is \emph{additive} in the following sense: $$P^{\pi}_{X \times Y}(t) = P^{\pi}_X(t) + P^{\pi}_Y(t),$$ which follows from $$\pi_i(X \times Y) = \pi_i (X) \times \pi_i(Y) =\pi_i(X) \oplus \pi_i(Y)$$ and $ (A \oplus B)\otimes {\mathbb Q} = (A \otimes {\mathbb Q} ) \oplus (B \otimes {\mathbb Q}).$ Here we note that \begin{equation*} P_X(t) = MH_X(t,1,1), \quad P^{\pi}_X(t) = MH^{\pi}_X(t,1,1). \end{equation*} In fact the homological mixed Hodge polynomial is also multiplicative just like the Poincar\'e polynomial $P_X(t)$ \begin{equation}\label{mh-multi} MH_{X \times Y}(t,u,v) =MH_X(t,u,v) \times MH_Y(t,u,v) \end{equation} which follows from the fact that the mixed Hodge structure is compatible with the tensor product (e.g., see \cite[\S 3.1, Examples 3.2]{PS}.) As to the homotopical mixed Hodge polynomial, it is additive just like the homotopical Poincar\'e polynomial $P^{\pi}_X(t)$ \begin{equation}\label{mh-pi-additive} MH^{\pi}_{X \times Y}(t,u,v) = MH^{\pi}_X(t,u,v) + MH^{\pi}_Y(t,u,v) \end{equation} since $\pi_{*}(X \times Y) = \pi_{*}(X) \oplus \pi_{*}(Y)$ and the category of mixed Hodge structures is abelian and the direct sum of a mixed Hodge structure is also a mixed Hodge structure. \section{Local comparisons of these two mixed Hodge polynomials} By the above definition, we have $0= P^{\pi}_X(0) = MH^{\pi}_X(0,1,1) < MH_X(0,1,1) =P_X(0) = 1$. Hence we get the following strict inequality\footnote{We note that given two real valued polynomial (therefore, continuous) functions $f(x,y,z)$ and $g(x,y,z)$, a strict inequality $f(a,b,c) < g(a,b,c)$ at a special value $(a,b,c)$ implies a local strict inequality $f(x,y,z)<g(x,y,z)$ for $|x-a| \ll 1, |y-b| \ll 1, |z-c| \ll 1$}: \begin{cor}\label{cor011} $$MH^{\pi}_X(t,u,v) < MH_X(t,u,v)$$ for $|t| \ll 1, |u-1| \ll 1, |v-1| \ll 1$. 
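For instance, iterating (\ref{mh-multi}) and (\ref{mh-pi-additive}) over the $n$-fold product $X^n = X \times \cdots \times X$ gives $$MH_{X^n}(t,u,v) = \left(MH_X(t,u,v)\right)^n, \quad \quad MH^{\pi}_{X^n}(t,u,v) = n \, MH^{\pi}_X(t,u,v),$$ so that under taking products the homological mixed Hodge polynomial grows multiplicatively while the homotopical one grows only additively (linearly in $n$); this elementary observation underlies the ``modulo product'' statements of the next section.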
\end{cor} When $t=-1$, $MH_X(-1,1,1)=P_X(-1) = \chi(X)$ is the Euler--Poincar\'e characteristic and $MH^{\pi}_X(-1,1,1)=P^{\pi}_X(-1) = \chi^{\pi}(X)$ is the homotopical Euler--Poincar\'e characteristic. In this case we do have the following theorem due to F\'elix--Halperin--Thomas \cite[Proposition 32.16]{FHT}: \begin{thm} We have $\chi^{\pi}(X) < \chi(X)$, namely $MH^{\pi}_X(-1,1,1) < MH_X(-1,1,1)$. \end{thm} Hence we get the following strict inequality: \begin{cor}\label{cor-111} $$MH^{\pi}_X(t,u,v) < MH_X(t,u,v)$$ for $|t+1|\ll1, |u-1| \ll 1, |v-1| \ll 1 $. \end{cor} As to the case when $(t,u,v)=(1,1,1)$, we have $$\quad \quad MH_X(1,1,1)=P_X(1) = \sum_{k\geqq 0} \dim H_k(X;\mathbb C)= 1 + \sum_{k\geqq 1} \dim H_k(X;\mathbb C),$$ $$MH^{\pi}_X(1,1,1) = P^{\pi}_X(1) = \sum_{k\geqq 2} \dim (\pi_{k}(X) \otimes \mathbb C). \hspace{3cm} $$ For these integers we do have the following Hilali conjecture \cite{Hil}, which has been solved affirmatively for many spaces such as smooth complex projective varieties and symplectic manifolds (e.g. see \cite{BFMM, HM, HM2}), but which is still open in general: \begin{con}[Hilali conjecture] $$P^{\pi}_X(1) \leqq P_X(1),$$ i.e., $MH^{\pi}_X(1,1,1) \leqq MH_X(1,1,1).$ \end{con} \begin{rem} The inequality $\leqq$ in the Hilali conjecture cannot be replaced by the strict inequality $<$. It follows from the minimal model of the de Rham algebra of $\mathbb P ^n$ that we have (see \cite[Example 9.9]{PS}) $$\pi_k(\mathbb P^{n}) \otimes {\mathbb Q} =\begin{cases} 0 & \, k \not =2,2n+1\\ {\mathbb Q} & \, k=2, 2n+1. \end{cases} $$ In particular, in the case when $n=1$, we have $$MH^{\pi}_{\mathbb P^1}(t,u,v) = t^2uv + t^3u^2v^2, \quad MH_{\mathbb P^1}(t,u,v) = 1 + t^2uv.$$ \noindent So we have that $MH^{\pi}_{\mathbb P^1}(1,1,1) = MH_{\mathbb P^1}(1,1,1) =2$, i.e. $P^{\pi}_{\mathbb P^1}(1) = P_{\mathbb P^1}(1)=2$. We also remark that in the case of the (non-strict) inequality $MH^{\pi}_X(1,1,1) \leqq MH_X(1,1,1)$, unlike Corollary \ref{cor011} and Corollary \ref{cor-111} we cannot expect the following local inequality $$MH^{\pi}_X(t,u,v) \leqq MH_X(t,u,v)$$ for $|t-1| \ll 1, |u-1| \ll 1, |v-1| \ll 1$. Indeed, clearly the following does not hold: $$MH^{\pi}_{\mathbb P^1}(t,1,1) = t^2 + t^3 \leqq 1 + t^2 = MH_{\mathbb P^1}(t,1,1)$$ for $|t-1| \ll 1.$ \end{rem} However, using the multiplicativity of the Poincar\'e polynomial $P_X(t)$ and the additivity of the homotopical Poincar\'e polynomial $P^{\pi}_X(t)$, we can get the following theorem, which roughly says that the Hilali conjecture holds ``modulo product'' \cite{Yo}: \begin{thm} \label{hilali-product} There exists a positive integer $n_{0}$ such that for $\forall n \geqq n_0$ the following strict inequality holds: $$P^{\pi}_{X^n}(1) < P_{X^n}(1).$$ \end{thm} Hence, since $P^{\pi}_{X^n}(1) < P_{X^n}(1)$ means $MH^{\pi}_{X^n}(1,1,1) < MH_{X^n}(1,1,1)$, we have that \begin{equation}\label{ineq-111} MH^{\pi}_{X^n}(1,1,1) < MH_{X^n}(1,1,1) \, \, \text{for} \, \, \forall n \geqq n_0. \end{equation} In fact we can get the following strict inequality, which, it should be noted, does not follow straightforwardly from the above strict inequality (\ref{ineq-111}) and requires a bit of work: \begin{cor} There exists a positive integer $n_{0}$ such that for $\forall n \geqq n_0$ $$MH^{\pi}_{X^n}(t,u,v) < MH_{X^n}(t,u,v)$$ for $|t-1| \ll 1, |u-1| \ll 1, |v-1| \ll 1$.
\end{cor} In fact, in a similar way, using the multiplicativity of the mixed Hodge polynomial, i.e., $(\ref{mh-multi})$, and the additivity of the homotopical mixed Hodge polynomial, i.e., $(\ref{mh-pi-additive})$, we can show the following theorem. Let $\mathbb R_{>0}$ be the set of positive real numbers. \begin{thm} Let $(s,a,b) \in (\mathbb R_{>0})^3$. Then there exists a positive integer $n_{(s,a,b)}$ such that for $\forall n \geqq n_{(s,a,b)}$ the following strict inequality holds $$MH^{\pi}_{X^n}(t,u,v) < MH_{X^n}(t,u,v)$$ for $|t-s| \ll 1, |u-a| \ll 1, |v-b| \ll 1$. \end{thm} The following theorem follows from the above theorem and the compactness of the cube $\mathscr C_{\varepsilon,r}$ defined below. \begin{thm} Let $\varepsilon, r$ be positive real numbers such that $0<\varepsilon \ll 1$ and $\varepsilon < r$ and $\mathscr C_{\varepsilon,r}:=[\varepsilon, r] \times [\varepsilon,r] \times [\varepsilon,r] \subset (\mathbb R_{> 0})^3$ be a cube. Then there exists a positive integer $n_{\varepsilon,r}$ such that for $\forall n \geqq n_{\varepsilon,r}$ the following strict inequality holds $$MH^{\pi}_{X^n}(t,u,v) < MH_{X^n}(t,u,v)$$ for $\forall (t,u,v) \in \mathscr C_{\varepsilon,r}.$ \end{thm} We would like to pose the following conjecture: \begin{con} Let $\varepsilon$ be a positive real number such that $0<\varepsilon \ll 1$. There exists a positive integer $n_0$ such that for $\forall n \geqq n_0$ the following strict inequality holds $$MH^{\pi}_{X^n}(t,u,v) < MH_{X^n}(t,u,v)$$ for $\forall (t,u,v) \in [\varepsilon, \infty)^3 \subset (\mathbb R_{> 0})^3$. \end{con} In the case when $u=v=1$, i.e., in the case of $P^{\pi}_X(t)$ and $P_X(t)$, we do have the following ``half-global'' version of Theorem \ref{hilali-product}: \begin{thm} Let $\varepsilon$ be a positive real number such that $0<\varepsilon \ll 1$. There exists a positive integer $n_{0}$ such that for $\forall n \geqq n_0$ the following strict inequality holds: $$P^{\pi}_{X^n}(t) < P_{X^n}(t) \quad (\forall t \in [\varepsilon, \infty)).$$ \end{thm} {\bf Acknowledgements:} The author would like to thank the anonymous referee for his/her very useful comments and suggestions. The author also would like to thank Anatoly Libgober for his interest in this work and various comments and suggestions on an earlier version of the paper. Their comments and suggestions improved the paper. In fact, a joint paper with A. Libgober is in preparation and will contain detailed proofs of results of the present paper as well as calculations and further information about mixed Hodge polynomials of elliptic spaces. This work is supported by JSPS KAKENHI Grant Number JP19K03468.
\section{Introduction} \label{sec:intro} The problem of comparing two independent data samples and looking for deviations is ubiquitous in statistical analyses. It is of particular interest in physics, when addressing the problem of searching for new phenomena in data, to compare observations with expectations to find discrepancies. In general, one would like to assess (in a statistically sound way) whether the observed experimental data are compatible with the expectations, or there are signals of the presence of new phenomena. In high-energy physics, although the Standard Model (SM) of particle physics has proved to be extremely successful in predicting a huge variety of elementary particle processes with spectacular accuracy, it is widely accepted that it needs to be extended to account for unexplained phenomena, such as the dark matter of the Universe, the neutrino masses, and more. The search for New Physics (NP) beyond the SM is the primary goal of the Large Hadron Collider (LHC). The majority of NP searches at the LHC are performed to discover or constrain specific models, i.e. specific particle physics extensions of the SM. Relatively less effort has been devoted to design and carry out strategies for model-independent searches for NP \cite{Aaltonen:2007dg, Aaltonen:2008vt, CMS-PAS-EXO-08-005, CMS-PAS-EXO-10-021, Choudalakis:2011qn, ATLAS-CONF-2017-001, Asadi:2017qon, DAgnolo:2018cun, Aaboud:2018ufy}. At the current stage of no evidence for NP in the LHC data, it is of paramount importance to increase the chances of observing the presence of NP in the data. It may even be already there, but it may have been missed by model-specific searches. Recently, there has been growing interest in applying Machine Learning (ML) techniques to high-energy physics problems, especially using supervised learning (see e.g. Refs. \cite{Kuusela:2011aa, Cranmer:2015bka, Baldi:2016fzo, Hernandez2016, Caron:2016hib, Bertone:2016mdy, Weisser:2016cnc, Dery:2017fap, Louppe:2017ipp, Cohen:2017exh, Metodiev:2017vrx, Chang:2017kvc, Paganini:2017dwg, Komiske:2018oaa, Fraser:2018ieu, Brehmer:2018eca, Brehmer:2018kdj, Brehmer:2018hga} and in particular the recent work of Ref.~\cite{DAgnolo:2018cun} with which we share some ideas, although with a very different implementation). On the other hand, applications of unsupervised learning have been relatively unexplored \cite{Kuusela:2011aa, Andreassen:2018apy, Collins:2018epr}. In unsupervised learning the data are not labeled, so the presence and the characteristics of new phenomena in the data are not known \emph{a priori}. One disadvantage of unsupervised learning is that one cannot easily associate a performance metric to the algorithm. Nevertheless, unsupervised methods such as anomaly (or outlier) detection techniques, or clustering algorithms, provide powerful tools to inspect the global and local structures of high-dimensional datasets and discover `never-seen-before' processes. In this paper, we propose a new scientific application of unsupervised learning techniques to boost our ability to search for new phenomena in data, by measuring the degree of compatibility between two data samples (e.g.~observations and predictions). In particular, we build a statistical test upon a test statistic which measures deviations between two datasets, relying on a Nearest-Neighbors technique to estimate the ratio of the local densities of points in the samples. 
Generally speaking, there are three main difficulties one may face when trying to carry out a search for the presence of new processes in data: (1) a model for the physics describing the new process needs to be assumed, which limits the generality of the method; (2) it is impossible or computationally very expensive to evaluate directly the likelihood function, e.g. due to the complexity of the experimental apparatus; (3) a subset of relevant features needed to be extracted from the data, otherwise the histogram methods may fail due to the sparsity of points in high-dimensional bins. A typical search for NP at LHC suffers from all such limitations: a model of NP (which will produce a signal, in the high-energy physics language) is assumed, the likelihood evaluation is highly impractical, and a few physically motivated variables (observables or functions of observables) are selected to maximize the presence of the signal with respect to the scenario without NP (the so-called background). Our approach overcomes all of these problems at once, by having the following properties: \begin{enumerate} \item it is \textit{model-independent}: it aims at assessing whether or not the observed data contain traces of new phenomena (e.g. due to NP), regardless of the specific physical model which may have generated them; \item it is \textit{non-parametric}: it does not make any assumptions about the probability distributions from which the data are drawn, so it is likelihood-free; \item it is \textit{un-binned}: it partitions the feature space of data without using fixed rectangular bins; so it allows one to retain and exploit the information from the full high-dimensional feature space, when single or few variables cannot. \end{enumerate} The method we propose in this paper is particularly useful when dealing with situations where the distribution of data in feature space is almost indistinguishable from the distribution of the reference (background) model. Although our main focus will be on high-energy particle physics searches at the LHC, our method can be successfully applied in many other situations where one needs to detect incompatibilities between data samples. The remainder of the paper is organized as follows. In Section \ref{sec:2ST} we describe the details of the construction of our method and its properties. In Section \ref{sec:applications} we apply it to case studies with simulated data, both for synthetic Gaussian samples and for a more physics-motivated example related to LHC searches. We outline some directions for further improvements and extensions of our approach, in Section \ref{sec:extensions}. Finally, we conclude in Section \ref{sec:conclusion}. \section{Statistical test of dataset compatibility} \label{sec:2ST} In general terms, we approach the problem of measuring the compatibility between datasets sampled from unknown probability densities, by first estimating the probability densities and then applying a notion of functional distance between them. The first task is worked out by performing density ratio estimation using Nearest Neighbors, while the distance between probability densities is chosen to be the Kullback-Leibler divergence \cite{KullbackLeibler}. We now describe our statistical test in more detail. \subsection{Definition of the problem} \label{subsec:generalities} Let us start by defining the problem more formally. 
Let $\{\boldsymbol{x}_i | \boldsymbol{x}_i\in\mathbb{R}^D\}_{i=1}^{N_T}$ and $\{\boldsymbol{x}'_i | \boldsymbol{x}'_i\in\mathbb{R}^D \}_{i=1}^{N_B}$ be two independent and identically distributed $D$-dimensional samples drawn independently from the probability density functions (PDFs) $p_T$ and $p_B$, respectively: \bea \Tsample&\equiv&\{\boldsymbol{x}_i\}_{i=1}^{N_T} \stackrel{\textrm{iid}}{\sim}p_T\,,\\ \Bsample&\equiv&\{\boldsymbol{x}'_i\}_{i=1}^{N_B} \stackrel{\textrm{iid}}{\sim}p_B\,. \eea We will refer to $\Bsample$ as a `benchmark' (or `control' or `reference') sample and to $\Tsample$ as a `trial' (or `test') sample. The $\Tsample, \Bsample$ samples consist of $N_T, N_B$ points, respectively. The $\mathbb{R}^D$ space where the sample points $\boldsymbol{x}_i,\boldsymbol{x}'_i$ live will be referred to as `feature' space. The primary goal is to check whether the two samples are drawn from the same PDF, i.e. whether $p_B=p_T$. In other words, we aim at assessing whether (and to what significance level) the two samples are compatible with each other. More formally, we want to perform a statistical test of the null hypothesis $\{H_0:p_T=p_B\}$ versus the alternative hypothesis $\{H_1:p_T\neq p_B\}$. This problem is well-known in the statistics literature as a \textit{two-sample} (or \textit{homogeneity}) test, and many ways to handle it have been proposed. We want to construct a statistical hypothesis test of dataset compatibility satisfying the properties 1-3 outlined in the introduction. First, the $\Bsample, \Tsample$ samples are going to be analyzed without any particular assumptions about the underlying model that generated them (property 1); our hypothesis test does not try to infer or estimate the parameters of the parent distributions, but it simply outputs to what degree the two samples can be considered compatible. Second, if one is only interested in a location test, such as determining whether the two samples have the same mean or variance, then a $t$-test is often adopted. However, we assume no knowledge about the original PDFs, and we want to check the equality or difference of the two PDFs as a whole; therefore, we will follow a non-parametric (distribution-free) approach (property 2). Third, we want to retain the full multi-dimensional information of the data samples, but high-dimensional histograms may result in sparse bins of poor statistical use. The popular Kolmogorov-Smirnov method only works for one-dimensional data, and extensions to multi-dimensional data are usually based on binning (for an alternative method that instead reduces the dimensionality of the data to one, see Ref.~\cite{Weisser:2016cnc}). Alternative non-parametric tests like the Cram\'er-von Mises-Anderson test or the Mann-Withney test require the possibility of ranking the data points in an ordinal way, which may be ill-defined or ambiguous in high-dimensions. Thus, we will employ a different partition of feature space not based on fixed rectangular bins (property 3), which allows us to perform a non-parametric two-sample test in high dimensions. So, in order to construct our hypothesis test satisfying properties 1-3, we need to build a new test statistic and construct its distribution, as described in the next sections. 
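As a toy illustration of this setup (our own example, not taken from the case studies of Section~\ref{sec:applications}), one may generate a benchmark sample and a trial sample of multivariate Gaussian points in Python as follows:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
D, N_B, N_T = 5, 20000, 5000

# Benchmark (reference) sample, drawn from the reference density p_B.
B = rng.normal(0.0, 1.0, size=(N_B, D))
# Trial sample, drawn from p_T; under the null hypothesis p_T = p_B.
# Here we introduce a small shift in the first feature so that p_T != p_B.
T = rng.normal(0.0, 1.0, size=(N_T, D))
T[:, 0] += 0.1
\end{verbatim}
These two arrays play the role of $\Bsample$ and $\Tsample$ in the code sketches given in the next subsections.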
\subsection{Test statistic} \label{subsec:ts} Since we are interested in measuring the deviation between the two samples, it is convenient to define the ratio of probability densities to observe the points in the two samples, in the case $p_B\neq p_T$ (numerator) relative to the case $p_B=p_T$ (denominator) \begin{equation} \lambda \equiv\frac{\prod_{\boldsymbol{x}'_j\in \Bsample}p_B(\boldsymbol{x}'_j) \prod_{\boldsymbol{x}_j\in \Tsample}p_T(\boldsymbol{x}_j)} {\prod_{\boldsymbol{x}'_j\in \Bsample}p_B(\boldsymbol{x}'_j) \prod_{\boldsymbol{x}_j\in \Tsample}p_B(\boldsymbol{x}_j)} =\prod_{\boldsymbol{x}_j\in \Tsample}\frac{p_T(\boldsymbol{x}_j)}{p_B(\boldsymbol{x}_j)}\,. \end{equation} The above quantity may also be thought of as a likelihood ratio. However, as we are carrying out a non-parametric test, we prefer not to use this term to avoid confusion. Now, since the true PDFs $p_{B,T}$ are not known, we follow the approach of finding estimators $\hat p_{B,T}$ for the PDFs and evaluate the ratio $\lambda$ on them \begin{equation} \hat\lambda = \prod_{\boldsymbol{x}_j\in \Tsample}\frac{\hat p_T(\boldsymbol{x}_j)}{\hat p_B(\boldsymbol{x}_j)}\,. \end{equation} We then define our \textit{test statistic} $\ts$ over the trial sample as \begin{equation} \ts(\Bsample, \Tsample)\equiv \log\hat\lambda^{1/|\Tsample|} =\frac{1}{N_T}\sum_{j=1}^{N_T}\log\frac{\hat p_T(\boldsymbol{x}_j)} {\hat p_B(\boldsymbol{x}_j)}\,, \label{eq:ts} \end{equation} where $|\Tsample|=N_T$ is the size of the trial sample. This test statistic will take values close to zero when $H_0$ is true, and far from zero (positively or negatively) when $H_0$ is false. The test statistic defined in Eq.~\eqref{eq:ts} is also equal to the estimated Kullback-Leibler (KL) divergence $\hat D_{\rm KL}(\hat p_T||\hat p_B)$ between the estimated PDFs of trial and benchmark samples, with the expectation value replaced by the empirical average (see Appendix \ref{app:KL} and in particular Eq.~\eqref{KLestimated}). The KL divergence plays a central role in information theory and can be interpreted as the relative entropy of a probability distribution with respect to another one. Our choice is also motivated by the fact that the log function in Eq.~(\ref{eq:ts}) makes the test statistic linearly sensitive to small differences between the distributions. Of course, other choices for the test statistic are possible, based on an estimated divergence between distributions other than the KL divergence, e.g. the Pearson squared-error divergence. The exploration of other possibilities is beyond the scope of this paper and is left for future work. Ultimately, we want to conclude whether or not the null hypothesis can be rejected, with a specified significance level $\alpha$ (e.g. $\alpha=0.05$), therefore we need to associate a $p$-value to the null hypothesis, to be compared with $\alpha$. To this end, we first need to estimate the PDFs $\hat p_{B,T}$ from the samples, then compute the test statistics $\ts_{\rm obs}$ observed on the two given samples. Next, in order to evaluate the probability associated with the observed value $\ts_{\rm obs}$ of the test statistic, we need to reconstruct its probability distribution $f(\ts|H_0)$ under the null hypothesis $H_0$, and finally compute a \textit{two-sided} $p$-value of the null hypothesis. The distribution of the test statistic is expected to be symmetric around its mean (or median), which in general may not be exactly zero as a finite-sample effect. 
Therefore, the two-sided $p$-value is simply double the one-sided $p$-value. A schematic summary of the method proposed in this paper is shown in Figure \ref{fig:schematic_view}. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{schematic_view.pdf} \caption{ Schematic view of the proposed method to compute the $p$-value of the null hypothesis that the two samples are drawn from the same probability density.} \label{fig:schematic_view} \end{figure} In the remainder of this section we will describe this procedure in detail. \subsection{Probability density ratio estimator} \label{subsec:estimator} We now turn to describing our approach to estimating the ratio of probability densities $\hat p_{T}/\hat p_{B}$ needed for the test statistic. There exist many possible ways to obtain density ratio estimators, e.g. using kernels \cite{SUGIYAMA2011735} (see Ref.~\cite{sugiyama_suzuki_kanamori_2012} for a comprehensive review). We choose to adopt a Nearest-Neighbors (NN) approach \cite{Schilling1986, henze1988, Wang2005,Wang2006,Dasu06, PerezCruz2008, kremer}. Let us fix an integer $K>0$. For each point $\boldsymbol{x}_j\in \Tsample$, one computes the Euclidean distance\footnote{Other distance metrics may be used, e.g. an $L^p$-norm. We do not explore other possibilities here.} $r_{j,T}$ to the $K$th nearest neighbor of $\boldsymbol{x}_j$ in $\Tsample\setminus\{\boldsymbol{x}_j\}$, and the Euclidean distance $r_{j,B}$ to the $K$th nearest neighbor of $\boldsymbol{x}_j$ in $\Bsample$. Since the probability density is proportional to the density of points, the probability estimates are simply given by the number of points ($K$, by construction) within a sphere of radius $r_{j,B}$ or $r_{j,T}$, divided by the volume of the sphere and the total number of available points. Therefore, the local nearest-neighbor estimates of the PDFs read \bea \hat p_B(\boldsymbol{x}_j) &=& \frac{K}{N_B}\frac{1}{\omega_D r_{j,B}^D}\,,\\ \hat p_T(\boldsymbol{x}_j) &=& \frac{K}{N_T-1}\frac{1}{\omega_D r_{j,T}^D}\,, \eea (for any $\boldsymbol{x}_j\in \Tsample$) where $\omega_D=\pi^{D/2}/\Gamma(D/2+1)$ is the volume of the unit sphere in $\mathbb{R}^D$. So, the test statistic defined in Eq.~\eqref{eq:ts} is simply given by \begin{equation} \ts(\Bsample, \Tsample)= \frac{D}{N_T}\sum_{j=1}^{N_T}\log\frac{r_{j,B}}{r_{j,T}} + \log\frac{N_B}{N_T-1}\,. \label{eq:ts2} \end{equation} The value of the test statistic on the benchmark and trial samples will also be referred to as the `observed' test statistic $\ts_{\rm obs}$. The NN density ratio estimator described above has been proved to be consistent and asymptotically unbiased \cite{Wang2005,Wang2006,PerezCruz2008}, i.e. the test statistic $\ts$ \eqref{eq:ts2} built from the estimated probability densities converges almost surely to the KL divergence between the true probability densities in the large sample limit $N_{B},N_T\to\infty$. Two advantages of the NN density ratio estimator are that it easily handles high-dimensional data, and its calculation is relatively fast, especially if $k$-$d$ trees are employed to find the nearest neighbors. As a disadvantage, for finite sample sizes, the estimator \eqref{eq:ts2} retains a small bias, although several methods have been proposed to reduce it (see e.g. Refs.~\cite{Wang2005, Noh2014BiasRA}).
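For concreteness, the estimator of Eq.~\eqref{eq:ts2} can be coded in a few lines using $k$-$d$ trees. The sketch below is our own minimal illustration (the function name and interface are hypothetical and are not those of the public implementation mentioned in Section~\ref{subsec:summary}):
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def test_statistic(B, T, K=5):
    # NN estimate of the test statistic:
    # TS = (D/N_T) * sum_j log(r_{j,B} / r_{j,T}) + log(N_B / (N_T - 1)).
    N_B, D = B.shape
    N_T = T.shape[0]
    # Distance of each trial point to its K-th nearest neighbor in the
    # benchmark sample.
    r_B = cKDTree(B).query(T, k=K)[0].reshape(N_T, -1)[:, -1]
    # Distance to the K-th nearest neighbor within the trial sample itself
    # (k = K + 1 because the closest point is the query point, at zero distance).
    r_T = cKDTree(T).query(T, k=K + 1)[0][:, -1]
    return (D / N_T) * np.sum(np.log(r_B / r_T)) + np.log(N_B / (N_T - 1.0))
\end{verbatim}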
Such a residual bias is only related to the asymptotic convergence properties of the test statistic to the estimated KL divergence $\hat D_{\rm KL}(\hat p_T||\hat p_B)$, and does not affect the outcome or the power of our test in any way. The use of NN is also convenient as it allows the partition of the feature space not into rectangular bins, but into hyper-spheres of varying radii, making sure they are all populated by data points. The test statistic $\ts$ in Eq.~\eqref{eq:ts2}, being an estimator of the KL divergence between the two underlying (unknown) PDFs, provides a measure of dataset compatibility. In the construction of $\ts$ we have chosen a particular $K$ as the number of nearest neighbors. Of course, there is no \textit{a priori} optimal value of $K$ to choose. In the following analyses we will use a particular choice of $K$, and we will comment on the possibility of extending the algorithm with adaptive $K$ in Section \ref{subsec:adaptive}. Now that we have a test statistic which correctly encodes the degree of compatibility between two data samples, and whose asymptotic properties are ensured by theorems, we need to associate a probability with the value of the $\ts$ calculated on the given samples, as described in the next section. \subsection{Distribution of the test statistic and $p$-value} \label{subsec:permutation} In order to perform a hypothesis test, we need to know the distribution of the test statistic $ f(\ts|H_0)$ under the null hypothesis $H_0$, to be used to compute the $p$-value. Classical statistical tests have well-known distributions of the test statistics, e.g. normal, $\chi^2$ or Student-$t$. In our case, the distribution of $\ts$ is not theoretically known for finite sample sizes. Therefore, it needs to be estimated from the data samples themselves. We employ the resampling method known as the permutation test \cite{Edgington, vanderVaart} to construct the distribution $f(\ts|H_0)$ of the $\ts$ under the null hypothesis. It is a non-parametric (distribution-free) method based on the idea of sampling different relabellings of the data, under the assumption that they are coming from the same parent PDF (null hypothesis). In more detail, the permutation test is performed by first constructing a pool sample by merging the two samples, $\mathcal{U}=\Bsample \cup \Tsample$, then randomly shuffling (sampling without replacement) the elements of $\mathcal{U}$, assigning the first $N_B$ elements to $\tilde \Bsample$ and the remaining $N_T$ elements to $\tilde \Tsample$. Next, one computes the value of the test statistic on the relabelled pair $(\tilde \Bsample, \tilde \Tsample)$. If one repeats this procedure for every possible permutation (relabelling) of the sample points, one collects a large set of test statistic values under the null hypothesis which provides an accurate estimation of its distribution (exact permutation test). However, it is often impractical to work out all possible permutations, so one typically resorts to performing a smaller number $N_{\rm perm}$ of permutations, which is known as an approximate (or Monte-Carlo) permutation test. The $\ts$ distribution is then reconstructed from the $N_{\rm perm}$ values of the test statistic obtained by the procedure outlined above. The distribution of the test statistic under a permutation test is asymptotically normal with zero mean in the large sample limit $N_B,N_T\to \infty$ \cite{vanderVaart}, as a consequence of the Central Limit Theorem.
Furthermore, when the number $N_{\rm perm}$ is large, the distribution of the $p$-value estimator approximately follows a normal distribution with mean $p$ and variance $p(1-p)/N_{\rm perm}$ \cite{EfronTibshirani, Edgington}. For example, if we want to know the $p$-value in the neighborhood of the significance level $\alpha$ to better than $\alpha/3$, we need $N_{\rm perm}>9(1-\alpha)/\alpha$, which is of the order of 1000 for $\alpha=0.01$. Once the distribution of the test statistic is reconstructed, it is possible to define the critical region for rejecting the null hypothesis at a given significance $\alpha$, defined by large enough values of $\ts_{\rm obs}$ such that the corresponding $p$-value is smaller than $\alpha$. As anticipated in Section \ref{subsec:ts}, for finite samples the test statistic distribution is still approximately symmetric around the mean, but the latter may deviate from zero. In order to account for this general case, and give some intuitive meaning to the size of the test statistic, it is convenient to standardize (or `studentize') the $\ts$ to have zero mean and unit variance. Let $\hat\mu$ and $\hat\sigma$ be the mean and the standard deviation of the test statistic under the distribution $f(\ts|H_0)$. We then transform the test statistic as \begin{equation} \ts \rightarrow \ts' \equiv \frac{\ts - \hat\mu}{\hat \sigma}\,, \end{equation} which is distributed according to \begin{equation} f'(\ts'|H_0)=\hat\sigma f(\hat\mu+\hat\sigma\ts'|H_0)\,, \end{equation} with zero mean and unit variance. With this redefinition, the two-sided $p$-value can be easily computed as \begin{equation} p = 2 \int_{|\ts'_{\rm obs}|}^{+\infty} f'(\ts'|H_0)d\ts'\,. \label{pvalue} \end{equation} \subsection{Summary of the algorithm} \label{subsec:summary} The pseudo-code of the algorithm for the statistical test presented in this paper is summarized in Table \ref{algo:algo1}. We implemented it in Python and an open-source package is available on GitHub \footnote{ \href{https://github.com/de-simone/NN2ST} {https://github.com/de-simone/NN2ST}}. \begin{table}[t] \begin{algorithm}[H] \caption{Nearest-Neighbors Two-Sample Test} \begin{algorithmic}[1] \REQUIRE{Benchmark sample: $\Bsample=\{\boldsymbol{x}'_i| \boldsymbol{x}'_i \in \mathbb{R}^{D}\}_{i=1}^{N_B}$, Trial sample: $\Tsample=\{\boldsymbol{x}_j| \boldsymbol{x}_j \in \mathbb{R}^{D}\}_{j=1}^{N_T}$} \item[\textbf{Input:}] $K, N_{\rm perm}\in \mathbb{N}\setminus\{0\}$. \item[\textbf{Output:}] $p$-value of the null hypothesis.
\item[] \FOR{$j=1$ to $N_T$} \STATE{$r_{j,B}\leftarrow$ distance of $K$th-NN in $\Bsample$ from $\boldsymbol{x}_j\in \Tsample$} \STATE{$r_{j,T}\leftarrow$ distance of $K$th-NN in $\Tsample$ from $\boldsymbol{x}_j\in \Tsample$} \ENDFOR \STATE{$\ts_{\rm obs} \leftarrow \frac{D}{N_T}\sum_{j=1}^{N_T} \log\frac{r_{j,B}}{r_{j,T}} + \log{\frac{N_B}{N_T-1}}$} \COMMENT{observed value of test statistic} \item[] \FOR{$n=1$ to $N_{\rm perm}$} \COMMENT{permutation test} \STATE{$\mathcal{U}_n\leftarrow $ randomly reshuffle $\Bsample \cup \Tsample$} \STATE{$\tilde \Bsample\leftarrow$ first $N_B$ elements of $\mathcal{U}_n$ } \STATE{$\tilde \Tsample\leftarrow$ remaining $N_T$ elements of $\mathcal{U}_n$ } \FOR{$j=1$ to $N_T$} \STATE{$\tilde r_{j,B}\leftarrow$ distance of $K$th-NN in $\tilde \Bsample$ from $\boldsymbol{\tilde x}_j\in \tilde \Tsample$} \STATE{$\tilde r_{j,T}\leftarrow$ distance of $K$th-NN in $\tilde \Tsample$ from $\boldsymbol{\tilde x}_j\in \tilde \Tsample$} \ENDFOR \STATE $ \ts_n \leftarrow \frac{D}{N_T}\sum_{j=1}^{N_T} \log\frac{\tilde r_{j,B}}{\tilde r_{j,T}} + \log{\frac{N_B}{N_T-1}}$ \COMMENT{test statistic on permutation $n$} \ENDFOR \item[] \STATE $f(\ts|H_0)\leftarrow \{\ts_n\}$ \COMMENT{probability distribution of $\ts$ under $H_0$} \STATE{$\hat\mu,\hat\sigma^2\leftarrow$ mean and variance of $\ts$ under $f$} \STATE{$\ts' \leftarrow (\ts - \hat\mu)/\hat \sigma$} \STATE{$f'(\ts'|H_0)\leftarrow\hat\sigma f(\hat\mu+\hat\sigma\ts'|H_0)$} \COMMENT{probability distribution of $\ts'$ under $H_0$} \STATE{$p\leftarrow 2 \int_{|\ts'_{\rm obs}|}^{+\infty} f'(\ts'|H_0)d\ts'$} \end{algorithmic} \end{algorithm} \caption{Pseudo-code for the two-sample test algorithm, using nearest neighbors density ratio estimation.} \label{algo:algo1} \end{table} \subsection{Extending the test to include uncertainties} \label{subsec:ext_uncert} So far we have assumed that both $\Bsample$ and $\Tsample$ samples are precisely known. However, in several situations of physical interest this may not be the case, as the features may be known only with some uncertainty, e.g. when the sample points come from physical measurements. There can be several factors affecting the precision with which each sample point is known, for instance systematic uncertainties (e.g. the smearing effects of the detector response) and the limited accuracy of the background (Monte-Carlo simulation), which may be particularly poor in some regions of the feature space. Of course, once such uncertainties are properly taken into account, we expect a degradation of the results of the statistical test described in the previous sections, leading to weaker conclusions about the rejection of the null hypothesis. Here we describe a simple and straightforward extension of the method described in this section, to account for uncertainties in the positions of the sample points. We consider the test statistic itself as a random variable, which is a sum of the test statistic $\ts$ defined in Section \ref{subsec:ts}, and computed on the original $\Bsample,\Tsample$ samples, and an uncertainty fluctuation (noise) $U$, originating when each point of $\Bsample$ (or $\Tsample$ or both) is shifted by a random vector: $\ts_u=\ts+U$. 
The trial and benchmark samples with uncertainties are then given by \bea \Tsample_u&=& \{\boldsymbol{x}_i+\Delta\mathbf{x}_i\}_{i=1}^{N_T}\,, \\ \Bsample_u&=& \{\boldsymbol{x}'_i+\Delta\mathbf{x}'_i\}_{i=1}^{N_B}\,, \eea which represent a point-wise random shift, where the error samples $\Delta\mathbf{x}_i,\Delta\mathbf{x}_i'\in\mathbb{R}^D$ are independent random variables drawn from the same distribution, according to the expected (or presumed) distribution of uncertainties in the features, e.g. zero-mean multivariate Gaussians. Next, one can compute the test statistic on the `shifted' samples as \begin{equation} \ts_u\equiv\ts(\Bsample_u, \Tsample_u)= \ts(\Bsample, \Tsample) + U\,. \end{equation} Since the $\ts$ computed on the original $\Bsample, \Tsample$ samples is given by the observed value $\ts_{\rm obs}$, the value of $U$ for any random sampling of the error samples is simply $U=\ts(\Bsample_u, \Tsample_u)-\ts_{\rm obs}$. By repeating the calculation of $U$ many ($N_{\rm iter}$) times, each time adding random noise to $\Bsample$ (or $\Tsample$ or both), we can reconstruct its probability distribution $f(U)$, which is asymptotically normal with zero mean in the large-sample limit $N_B,N_T\to\infty$. The resulting distribution of the test statistic $\ts_u$, being the sum of two independent random variables, is then given by the convolution of the distribution $f(\ts|H_0)$, computed via permutation test on $\Bsample,\Tsample$, and the distribution $f(U)$ with mean set to zero. This is motivated by the desire to eliminate the bias in the mean of the distribution of $U$ coming from finite-sample effects. As a result of this procedure, the distribution $f(\ts_u|H_0)$ will have the same mean as $f(\ts|H_0)$ but a larger variance. The $p$-value of the test is computed from $\ts_{\rm obs}$ with the same steps as described in Section \ref{subsec:permutation}, but with the distribution of the test statistic with uncertainties given by $f(\ts_u|H_0)$, rather than $f(\ts|H_0)$. Since $f(\ts_u)$ has a larger variance than $f(\ts)$, the $p$-value will turn out to be larger, and therefore the equivalent significance $Z$ will be smaller. This conclusion agrees with the expectation that the inclusion of uncertainties leads to a degradation of the power of the test. The summary of the algorithm to compute the distribution $f(U)$ can be found in Table \ref{algo:algo2}. Once $f(U)$ is computed, it needs to be convolved with $f(\ts|H_0)$, which was previously found via permutation test, as described in Section \ref{subsec:permutation}, to provide the distribution of the test statistic with uncertainties needed to compute the $p$-value.
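For concreteness, we note that the core numerical ingredient of both Table \ref{algo:algo1} and Table \ref{algo:algo2} is the evaluation of the test statistic of Eq.~\eqref{eq:ts2} on a given pair of samples. As an illustration only (this is not the reference implementation of the released package, and all names are ours), a minimal Python sketch of this evaluation, using a $k$-$d$ tree for the nearest-neighbor searches, could read as follows.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def test_statistic(B, T, K=5):
    """TS of Eq. (ts2): (D/N_T) * sum_j log(r_jB/r_jT) + log(N_B/(N_T-1)).
    B and T are arrays of shape (N_B, D) and (N_T, D)."""
    N_B, D = B.shape
    N_T = T.shape[0]
    # K-th neighbor of each trial point within the trial sample itself:
    # query K+1 neighbors, since the closest one is the point itself.
    r_T = cKDTree(T).query(T, k=K + 1)[0][:, -1]
    # K-th neighbor of each trial point within the benchmark sample.
    r_B = cKDTree(B).query(T, k=K)[0][:, -1]
    return D / N_T * np.sum(np.log(r_B / r_T)) + np.log(N_B / (N_T - 1.0))
\end{verbatim}
The permutation loop of Table \ref{algo:algo1} and the noise loop of Table \ref{algo:algo2} simply call such a routine repeatedly on reshuffled or shifted samples.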
\begin{table}[t] \begin{algorithm}[H] \caption{Distribution of the test statistic noise} \begin{algorithmic}[1] \REQUIRE{Benchmark sample: $\Bsample=\{\boldsymbol{x}'_i| \boldsymbol{x}'_i \in \mathbb{R}^{D}\}_{i=1}^{N_B}$, Trial sample: $\Tsample=\{\boldsymbol{x}_j| \boldsymbol{x}_j \in \mathbb{R}^{D}\}_{j=1}^{N_T}$} \item[\textbf{Input:}] $K, N_{\rm iter}\in \mathbb{N}\setminus\{0\}$ \item[\textbf{Input:}] $F_{\Bsample}(\mathbf{x}), F_{\Tsample}(\mathbf{x})$: distributions of feature uncertainties for $\Bsample, \Tsample$ samples \item[\textbf{Output:}] $f(U)$: distribution of the test statistic noise $U$ \item[] \STATE{$\ts_{\rm obs}\leftarrow\ts(\Bsample, \Tsample)$} \COMMENT{observed value of test statistic} \FOR{$j=1$ to $N_{\rm iter}$} \STATE{$\mathcal{E}_\Tsample= \{\Delta \mathbf{x}_i\}_{i=1}^{N_T}$ randomly drawn from $F_{\Tsample}(\mathbf{x})$} \STATE{$\mathcal{E}_\Bsample= \{\Delta \mathbf{x}_i'\}_{i=1}^{N_B}$ randomly drawn from $F_{\Bsample}(\mathbf{x})$} \STATE{$\Tsample_u\leftarrow \Tsample+\mathcal{E}_\Tsample$ } \COMMENT{point-wise sum} \STATE{$\Bsample_u\leftarrow \Bsample+\mathcal{E}_\Bsample$ } \COMMENT{point-wise sum} \STATE{$\ts_{u}\leftarrow\ts(\Bsample_u, \Tsample_u)$} \STATE{$U_j\leftarrow\ts_u-\ts_{\rm obs}$} \ENDFOR \STATE $f(U)\leftarrow \{U_j\}$ \COMMENT{distribution of $U$} \end{algorithmic} \end{algorithm} \caption{Pseudo-code for the algorithm to find the distribution $f(U)$ of the test statistic noise $U$. } \label{algo:algo2} \end{table} \section{Applications to simulated data} \label{sec:applications} \subsection{Case study: Gaussian samples} \label{subsec:2dgaussians} As a first case study of our method let us suppose we know the original distributions from which the benchmark and trial samples are randomly drawn. For instance, let us consider the multivariate Gaussian distributions of dimension $D$ defined by mean vectors $\boldsymbol{\mu}_{B,T}$ and covariance matrices $\Sigma_{B,T}$: \begin{equation} p_B=\mathcal{N}(\boldsymbol{\mu}_B, \Sigma_B)\,, \qquad p_T=\mathcal{N}(\boldsymbol{\mu}_T, \Sigma_T)\,. \end{equation} In this case, the KL divergence can be computed analytically (see Eq.~\eqref{KL_gaussians}). In the large sample limit, we recover that the test statistic converges to the true KL divergence between the PDFs (see Figure \ref{convergence_2Dgaussians} and Appendix \ref{app:KL}). Of course, the comparison is possible because we knew the parent PDFs $p_B, p_T$. \begin{figure} \centering \includegraphics[width=0.7\linewidth]{convergence_2Dgaussians.pdf} \caption{ Convergence of the test statistic to the exact KL divergence (dashed horizontal line) between two 2-dimensional Gaussian distributions, in the large-sample limit. The $\Bsample$,$\Tsample$ samples have the same size $N_B=N_T$, and they are sampled from 2-dimensional Gaussian distributions with $\boldsymbol{\mu}_B=1.0_2$, $\boldsymbol{\mu}_T=1.2_2$, $\Sigma_B=\Sigma_T=\boldsymbol{I}_2$. Two different choices for the number of nearest neighbors are shown: $K=3$ (blue squares) and $K=20$ (red crosses). } \label{convergence_2Dgaussians} \end{figure} For our numerical experiments we fix the benchmark $\Bsample$ sample by the parameters $\boldsymbol{\mu}_B=1_D$, $\Sigma_B=\boldsymbol{I}_D$, and we construct 4 different trial samples $\Tsample_{G0}, \Tsample_{G1}, \Tsample_{G2}, \Tsample_{G3}$ drawn by Gaussian distributions whose parameters are defined in Table \ref{tab:datasets}. Each sample consists of 20\,000 points randomly drawn from the Gaussian distributions defined above. 
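As a simple cross-check of this setup (illustrative only; the names are ours), the following sketch generates a benchmark/trial pair analogous to $\Bsample$ versus $\Tsample_{G3}$ and evaluates the exact $D_{\rm KL}(p_T||p_B)$ from the standard closed-form expression for multivariate Gaussians (cf. Eq.~\eqref{KL_gaussians}), to which the test statistic converges in the large-sample limit.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
D, N = 2, 20000

# Benchmark and trial samples analogous to B vs T_G3 defined in the text.
mu_B, mu_T = np.ones(D), 1.15 * np.ones(D)
Sigma = np.eye(D)
B = rng.multivariate_normal(mu_B, Sigma, size=N)
T = rng.multivariate_normal(mu_T, Sigma, size=N)

def kl_gaussians(mu0, S0, mu1, S1):
    """Closed-form KL( N(mu0,S0) || N(mu1,S1) ) for multivariate Gaussians."""
    d, S1inv, dm = len(mu0), np.linalg.inv(S1), mu1 - mu0
    return 0.5 * (np.trace(S1inv @ S0) + dm @ S1inv @ dm - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Exact D_KL(p_T || p_B) = 0.5 * D * 0.15^2 = 0.0225 for this configuration.
print(kl_gaussians(mu_T, Sigma, mu_B, Sigma))
\end{verbatim}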
Notice that the first trial sample $\Tsample_{G0}$ is drawn from the same distribution as the benchmark sample. \begin{table}[t] \centering \begin{tabular}{|c|c|c|} \hline Dataset & $\boldsymbol{\mu}$ & $\Sigma$ \\ \hline $\Bsample$ & $1_D$ & $\boldsymbol{I}_D$ \\ \hline $\Tsample_{G0}$ & $1_D$ & $\boldsymbol{I}_D$ \\ \hline $\Tsample_{G1}$ & $1.12_D$ &$\boldsymbol{I}_D$ \\ \hline $\Tsample_{G2}$ & $1_D$ & $\left( \begin{array}{c|c} \begin{matrix} 0.95 & 0.1\\ 0.1 & 0.8 \end{matrix} & \boldsymbol{0}\\ \hline \boldsymbol{0}& \boldsymbol{I}_{D-2} \end{array} \right)$\\ \hline $\Tsample_{G3}$ & $1.15_D$& $\boldsymbol{I}_D$ \\ \hline \end{tabular} \caption{ Definition of the Gaussian datasets used for the numerical experiments. Each sample consists of $N_B=N_T=20\,000$ points randomly drawn from $D$-dimensional Gaussian distributions $\mathcal{N}(\boldsymbol{\mu}, \Sigma)$. } \label{tab:datasets} \end{table} \begin{table}[t] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Trial Dataset} & \multicolumn{2}{c|}{$D=2$} & \multicolumn{2}{c|}{$D=5$} & \multicolumn{2}{c|}{$D=10$} \\ \cline{2-7} & $p$-value & $Z$ & $p$-value & $Z$ & $p$-value & $Z$ \\ \hline $\Tsample_{G0}$ & $8.2\cdot 10^{-1}$ & 0.2 $\sigma$ & $6.9\cdot 10^{-1}$ & 0.4 $\sigma$ & $6.9\cdot 10^{-1}$ & 0.4 $\sigma$ \\ \hline $\Tsample_{G1}$ & $2.8\cdot 10^{-2}$ & 2.2 $\sigma$ & $1.5\cdot 10^{-7}$ & 5.2 $\sigma$ & $3.6\cdot 10^{-13}$ & 7.3 $\sigma$\\ \hline $\Tsample_{G2}$ & $4.0\cdot 10^{-4}$ & 3.5 $\sigma$ & $8.8\cdot 10^{-8}$ & 5.3 $\sigma$ & $9.4\cdot 10^{-9}$ & 5.7 $\sigma$ \\ \hline $\Tsample_{G3}$ & $1.2\cdot 10^{-6}$ & 4.9 $\sigma$ & $1.4\cdot 10^{-19}$ & 9.1 $\sigma$ & $1.9\cdot 10^{-30}$ & 11.5 $\sigma$ \\ \hline \end{tabular} \caption{ Summary of the results comparing $\Bsample$ with 4 trial samples, for different dimensionality $D$. The samples are defined in Table \ref{tab:datasets}. We set $K=5$ and $N_{\rm perm} = 1000$. } \label{tab:gaussian-results} \end{table} As is customary, we associate an equivalent Gaussian significance $Z$ to a given (two-sided) $p$-value as: $Z\equiv\Phi^{-1}(1-p/2)$, where $\Phi$ is the cumulative distribution of a standard (zero-mean, unit-variance) one-dimensional Gaussian distribution. In Table \ref{tab:gaussian-results} we show the $p$-values and the corresponding $Z$ significance of the statistical tests for different dimensions $D$. The results are interpreted as follows. For $D=2$, the first two trial samples $\Tsample_{G0}, \Tsample_{G1}$ are not distinguished from the benchmark $\Bsample$ at more than 99\%CL ($p>0.01$), while $\Tsample_{G2}, \Tsample_{G3}$ are distinguished ($p\leq 0.01$, or equivalently $Z\geq 2.6\sigma$). Therefore, one would reject the null hypothesis at more than 99\%\,CL and conclude that the PDFs from which $\Tsample_{G2}, \Tsample_{G3}$ are drawn are different from the benchmark PDF $p_B$. It is remarkable that our statistical test is able to reject the null hypothesis with a large significance of $4.9\sigma$ for two random samples $\Bsample, \Tsample_{G3}$ drawn from 2-dimensional distributions which only differ by a shift of the mean by 15\% along each dimension. For higher dimensionality of the data, the discriminating power of the test increases, and the null hypothesis is rejected at more than $5\sigma$ significance for all trial samples $\Tsample_{G1}, \Tsample_{G2}, \Tsample_{G3}$. 
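The conversion from a two-sided $p$-value to the equivalent significance $Z$ defined above is a one-liner, e.g. with \texttt{scipy} (shown here only for illustration):
\begin{verbatim}
from scipy.stats import norm

def z_significance(p):
    """Equivalent Gaussian significance Z = Phi^{-1}(1 - p/2)."""
    return norm.ppf(1.0 - p / 2.0)

print(z_significance(0.01))   # ~2.58, the 99% CL threshold quoted in the text
\end{verbatim}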
The running time to compute the $p$-value on a standard laptop for two 2-dimensional samples of 20\,000 points each, and for 1000 permutations, was about 2 minutes. The running time scales linearly with the number of permutations. The number of sample points ($N_{B,T}$) plays an important role. As an example, we sampled the same datasets $\Bsample, \Tsample_{G0}, \Tsample_{G1}, \Tsample_{G2}, \Tsample_{G3}$ with $N_B=N_T=2000$ points, i.e. ten times fewer points than for the cases shown in Table \ref{tab:gaussian-results}. The results for the equivalent significance for $\Tsample_{G0}, \Tsample_{G1}, \Tsample_{G2}, \Tsample_{G3}$ with $D=2$ are $Z=1.4 \sigma$, $1.9 \sigma$, $1.9 \sigma$, $2.3 \sigma$, respectively. Clearly, the test is not able to reject the null hypothesis at more than 99\%CL (the $p$-value is never below 0.01, or equivalently $Z<2.6\sigma$) in any of these cases. As another illustration of this point, we run the statistical test for $\Bsample=\Tsample_{G0}$ vs $\Tsample=\Tsample_{G3}$ for $D=2$ and different sample sizes $N_B=N_T$, and show the resulting $Z$ significance in Figure \ref{Z_vs_Nsamples} (left panel). We find that for $N_B\leq 10^4$, the test is not able to reject the null hypothesis at more than 99\%CL. Therefore, the power of our statistical test increases for larger sample sizes, as expected since bigger samples lead to more accurate approximations of the original PDFs. \begin{figure} \centering \includegraphics[width=0.45\linewidth]{Z_vs_Nsamples.pdf} \includegraphics[width=0.45\linewidth]{Z_vs_Buncertainty.pdf} \caption{ We compare $\Bsample=\Tsample_{G0}$ and $\Tsample=\Tsample_{G3}$, as defined in Table \ref{tab:datasets}, with $D=2$, using $K=5$ and $N_{\rm perm}=1000$. The $\Bsample$,$\Tsample$ samples have the same size $N_B=N_T$. \emph{Left panel:} The $Z$ significance of the test for different sample sizes. \emph{Right panel:} The $Z$ significance for different relative uncertainties added to $\Bsample$ only, with fixed $N_B=N_T=20\,000$. The $U$ distribution has been computed with $N_{\rm iter}=1000$ random samplings from the distribution of feature uncertainties. } \label{Z_vs_Nsamples} \end{figure} We have also studied the power performance of our statistical test with respect to parametric competitors. We ran 200 tests of two samples drawn from multivariate Gaussian distributions with $D=1,2,5$, with sample sizes $N_B=N_T=100$, and computed the approximated power as the fraction of runs where the null hypothesis is rejected with significance level 5\% ($p<0.05$). We considered normal location alternatives, with $\Bsample\sim \mathcal{N}(0_D, \boldsymbol{I}_D)$ and $\Tsample\sim \mathcal{N}(\Delta_D, \boldsymbol{I}_D)$, where $\Delta$ varies from $0.05$ to $1.0$. As a competitor, we choose Student's $t$-test (or its generalization, Hotelling's $T^2$-test, for $D>1$). We find that our test shows a power comparable to its competitor, in some cases lower by at most a factor of 3, which is satisfactory given that the $T^2$-test is parametric and designed to spot location differences. Next, we run the statistical test by including uncertainties, as described in Section \ref{subsec:ext_uncert}. For the uncertainties, we assume uncorrelated Gaussian noise, so the covariance matrix of the uncertainties is a $D$-dimensional diagonal matrix $\textrm{diag}(\sigma_1^2,\ldots, \sigma_D^2)$ where each eigenvalue is proportional to the relative uncertainty $\epsilon$ of the component $x_i$ of the sample point $\mathbf{x}$: $\sigma_i=\epsilon x_i$.
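A minimal sketch of this noise model (illustrative only): each point is shifted by zero-mean Gaussian noise whose standard deviation along each coordinate is the fraction $\epsilon$ of that coordinate, as used in Table \ref{algo:algo2}.
\begin{verbatim}
import numpy as np

def add_relative_noise(X, eps, rng):
    """Shift each point x by Gaussian noise with sigma_i = eps * x_i
    (the sign of x_i is immaterial, since the noise is symmetric)."""
    return X + rng.normal(size=X.shape) * (eps * X)

# e.g. a 10% relative uncertainty on the benchmark sample B:
# B_u = add_relative_noise(B, 0.10, np.random.default_rng())
\end{verbatim}
Shifted copies of $\Bsample$ generated in this way are what enter the reconstruction of $f(U)$.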
In Figure \ref{Z_vs_Nsamples} (right panel) we show how the significance of rejecting the null hypothesis degrades once uncorrelated relative uncertainties are added to the $\Bsample$ sample. For $D=2$, the initial $4.9\,\sigma$ when comparing $\Bsample=\Tsample_{G0}$ and $\Tsample=\Tsample_{G3}$ without noise goes down to about $4.1\,\sigma$ with 10\% relative error. \subsection{Case study: Monojet searches at LHC} \label{subsec:monojet} A model-independent search at the LHC for physics Beyond the Standard Model (BSM), such as Dark Matter (DM), has been elusive \cite{ CMS-PAS-EXO-08-005, CMS-PAS-EXO-10-021, Choudalakis:2011qn, ATLAS-CONF-2017-001}. Typically it is necessary to simulate the theoretical signal in a specific model, and compare with data to test whether the model is excluded. The signal-space for DM and BSM physics in general is enormous, and despite thorough efforts, the possibility exists that a signal has been overlooked. The compatibility test described in Section \ref{sec:2ST} is a promising technique to overcome this challenge, as it can search for deviations between the expected simulated Standard Model signal and the true data, without any knowledge of the nature of the new physics. In a real application of our technique by experimental collaborations, the benchmark dataset $\Bsample$ will be a simulation of the SM background, while the trial dataset $\Tsample$ will consist of real measured data, potentially containing an unknown mix of SM and BSM events. As a proof-of-principle, we test whether our method would be sensitive to a DM signature in the monojet channel. For our study, both $\Bsample$ and $\Tsample$ will consist of simulated SM events (`background'), however $\Tsample$ will additionally contain injected DM events (`signal'). The goal is to determine whether the algorithm is sensitive to differences in $\Bsample$ and $\Tsample$ caused by this signal. \subsubsection*{Model and simulations} The signal comes from a standard simplified DM model (see e.g. Ref.~\cite{DeSimone:2016fbz} for a review) with Fermion DM $\chi$ and an $s$-channel vector $Z'$ mediator \cite{Boveia:2016mrp, Albert:2017onk}. Our benchmark parameters are $g_\chi = 1$, $g_q = 0.1$, $g_\ell = 0.01$, in order to match the simplified model constraints from the ATLAS summary plots \cite{vector_summary}. We use a DM mass of 100 GeV, and mediator masses of (1200, 2000, 3000) GeV, in order to choose points that are not yet excluded but could potentially be in the future \cite{vector_summary}. Signal and background events are first simulated using MG5\_aMC@NLO v$2.6.1$ \cite{Alwall:2014hca} at center-of-mass energy $\sqrt{s}=13$ TeV, with a minimal cut of $E_T^{\rm miss} > 90$ GeV, to emulate trigger rather than analysis cuts. We use Pythia 8.230 \cite{Sjostrand:2014zea} for hadronization and Delphes 3.4.1 \cite{delphes} for detector simulation. The so-called `monojet' signal consists of events with missing energy from DM and at least one high-$p_T$ jet. The resulting signal cross-section is $\sigma_{\rm signal} = (20.4,\,3.8,\,0.6)$ pb for $M_{\rm med} = (1200,\,2000,\,3000)$ GeV respectively. For the background samples, we simulate 40\,000 events of the leading background, $Z\rightarrow \nu \bar \nu + n j$ where $n$ is 1 or 2, resulting in a cross section of $\sigma_{\rm background} = 202.6$ pb. 
The Delphes ROOT file is converted to LHCO and a feature vector is extracted with Python for each event, consisting of $p_T$ and $\eta$ for the two leading jets; the number of jets; missing energy $E_T^{\rm miss}$; hadronic energy $H_T$; and $\Delta \phi$ between the leading jet and the missing energy. Together this gives an 8-dimensional feature vector $(D=8)$, which is scaled to zero-mean unit-variance based on the mean and variance of the background simulations. This feature vector is chosen to capture sufficient information about each event while keeping the running time of the algorithm reasonable. Other feature vectors could be chosen to capture different aspects of the physical processes, including higher- or lower-level features, such as raw particle 4-vectors. Application of high-performance computing resources would allow the feature vector to be enlarged, potentially strengthening results. A full study of the choice of feature vector is left to future work. Our simulation technique is simple and designed only as a proof of principle; we do not include sub-leading SM backgrounds, nor full detector effects, adopting a generic Delphes profile. \subsubsection*{Test Statistic distribution under null hypothesis} Following the technique described in Section~\ref{sec:2ST}, for each of the 3 considered points in signal model parameter space, we first construct an empirical distribution of the test statistic under the null hypothesis, $f(\ts|H_0)$, and we then measure $\ts_{\rm obs}$ and compute the $p$-value to determine the compatibility of the datasets. We choose $K=5$, and $f(\ts|H_0)$ is constructed over $N_{\rm perm} = 3000$ permutations. The pool sample $\Bsample \cup \Tsample$ consists of the 40\,000 background events, along with a number of signal events proportional to the signal cross-section. We define $\Bsample$ and $\Tsample$ as having an equal number of background events, so that $N_{\rm signal} = 20\,000 \times \sigma_{\rm signal}/\sigma_{\rm background}$, $N_T = 20\,000 + N_{\rm signal}$. The resulting distribution of TS under the null hypothesis is shown in Fig.~\ref{fig:TS-monojet}. The simulations are relatively fast, taking approximately an hour per 1000 permutations on a standard laptop, although computation time grows as a power-law with the number of events, such that further optimization and high-performance computing resources will be a necessity for application to real LHC data with many thousands of events. The statistics of $f(\ts|H_0)$ converge quickly, as shown in Fig.~\ref{fig:TSvsNPerm-monojet}, consistent with the discussion of $N_{\rm perm}$ in Section \ref{subsec:permutation}, and showing that $N_{\rm perm}$ is more than sufficient. Note that since $\tilde{\Bsample}, \, \tilde{\Tsample}$ are chosen from permutations of $\Bsample \cup \Tsample$, it is not necessary to specify how the 40\,000 background events are divided between $\Bsample$ and $\Tsample$; it is only necessary to specify $N_B$ and $N_T$ at this point. \subsubsection*{Observed Test Statistic} To test whether the null hypothesis would be excluded in the event of an (otherwise unobserved) DM signal hiding in the data, we calculate $\ts_{\rm obs}$ using $\Bsample$ containing only background, and $\Tsample$ containing background plus a number of signal events proportional to the relative cross section.
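As a simple illustration of the event counts involved (the numbers are the cross-sections quoted above; the variable names are ours):
\begin{verbatim}
sigma_background = 202.6   # pb, simulated Z -> nu nu + jets background
sigma_signal     = 3.8     # pb, M_med = 2 TeV signal point
N_bkg_per_sample = 20000   # background events in each of B and T

# Signal events injected into T, proportional to the relative cross-section.
N_signal = round(N_bkg_per_sample * sigma_signal / sigma_background)
print(N_signal)            # 375 events added to the trial sample
\end{verbatim}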
In a practical application of this technique by the experimental collaborations, $\Bsample$ would instead correspond to background simulations, while $\Tsample$ would be the real-world observation; therefore only one measurement of $\ts_{\rm obs}$ would be performed. However, in our case the distribution of TS under the null hypothesis is insensitive to the way the 40\,000 background events are divided between $\Bsample$ and $\Tsample$. Therefore we can simulate multiple real-world measurements of $\ts_{\rm obs}$ by dividing the 40\,000 background events between $\Bsample$ and $\Tsample$ in different permutations (always keeping 20\,000 background events in each sample). This allows us to be more robust: since $\ts_{\rm obs}$ is itself a random variable, multiple measurements of $\ts_{\rm obs}$ allows us to avoid the claim of a small $p$-value, when in reality the algorithm may not be sensitive to a small signal. The calculation of $\ts_{\rm obs}$ is performed for 100 random divisions. The $p$-value and significance $Z$ of each $\ts_{\rm obs}$ are calculated with respect to the empirical distribution $f(\ts|H_0)$ where possible. In many cases, $\ts_{\rm obs}$ is so extreme that it falls outside the measured range of $f(\ts|H_0)$, in which case $p$ and $Z$ are determined from a Gaussian distribution with mean $\hat \mu$ and variance $\hat\sigma^2$. This is equivalent to assuming that $f(\ts|H_0)$ is well-approximated by a Gaussian, which is true to a good approximation, as seen in Fig.~\ref{fig:TS-monojet}. To be conservative, the technique is only considered sensitive to the signal if all simulated observations of TS exclude the null hypothesis, i.e. we show the minimum $Z$ significance (and maximum $p$-value). These results are shown in Table~\ref{tab:monojet-results}, where we see that the background-only hypothesis is strongly excluded for $\Tsample_1$ and $\Tsample_2$, even though these points are not yet excluded by traditional LHC searches. Bear in mind that this is a proof-of-concept, and real-world results are unlikely to be as clean, as discussed in Section~\ref{subsec:realdata}. \begin{figure} \centering \includegraphics[width=0.32\linewidth]{null1} \includegraphics[width=0.32\linewidth]{null2} \includegraphics[width=0.32\linewidth]{null3} \caption{ Distribution of the test statistic under the null hypothesis for our 3 signal points. Overlayed is a Gaussian distribution with the same mean and standard deviation as the data. } \label{fig:TS-monojet} \end{figure} \begin{table}[t] \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline Sample & $M_{\rm med}$ & $\sigma_\Tsample$ [pb] & $\sigma_{\rm signal}$ [pb] & max($p$-value) & min($Z$) \\ \hline $\Tsample_1$ &1.2 TeV& 223.0 & 20.4 & $<10^{-50}$ & $>15\,\sigma$ \\ \hline $\Tsample_2$ &2 TeV& 206.4 & 3.8 & $5.7\times 10^{-25}$ & $ 10 \, \sigma$ \\ \hline $\Tsample_3$ &3 TeV& 203.2 & 0.6 & 0.90 & $0.13\, \sigma$ \\ \hline \end{tabular} \caption{Summary of monojet results comparing $\Bsample$ (background only) with $\Tsample$ (background plus DM signal). The cross section corresponding to the trial sample is simply given by $\sigma_{\Tsample}=\sigma_{\rm background}+\sigma_{\rm signal}$. The $p$-value and $Z$ statistic show the compatibility between $\Bsample$ and $\Tsample$; Large $Z$ indicates that $\Tsample$ is not consistent with the background-only hypothesis. 
Note that these results will be weakened by application of uncertainties (see text for details).} \label{tab:monojet-results} \end{table} \subsubsection*{Inclusion of uncertainties} To test the sensitivity of this technique to uncertainties and errors in the background simulation, we use the method outlined in Section~\ref{subsec:ext_uncert} to estimate the drop in significance when uncertainties are taken into account. Uncorrelated Gaussian noise with $\epsilon = 10\%$ (as defined in Section~\ref{subsec:2dgaussians}) is added to $\Bsample$, allowing the construction of $f(\ts_u|H_0)$ using $N_{\rm iter} = 1000$. Note that while the primary result without uncertainties is agnostic as to how the overall background sample is divided between $\Bsample$ and $\Tsample$, this is not the case when applying uncertainties. We construct $f(\ts_u|H_0)$ by repeatedly applying different noise to the same $\Bsample$, and so $\Bsample$ and $\Tsample$ must be defined from the outset, leaving just one measurement of $\ts_{\rm obs}$, for a random draw of $\Bsample$ and the background component of $\Tsample$ from the overall pool of background simulations. For $(\Tsample_1, \Tsample_2, \Tsample_3)$, we find that without noise $Z = (40, 13, 2.7)$. Note that as expected, these are larger than the minimum values over 100 observations reported in Table~\ref{tab:monojet-results}. With $\epsilon = 10\%$, we find that this reduces to $Z = (26, 12, 2.5)$ for the 3 samples, respectively. This is in line with expectations: while this is a powerful technique, limited knowledge of the expected background will degrade the results. With this in mind, we reiterate that results based on simulations alone should be taken with a grain of salt. They show the strengths of the statistical test we are proposing and prove it is worthwhile to investigate it further, but they will be weakened in a real-world situation. As an application to experimental data, our technique could be applied by seeding the simulated background $\Bsample$ with noise associated with uncertainties in the Monte-Carlo background estimation, or seeding the measured data sample $\Tsample$ with noise associated with systematic uncertainties. \subsubsection*{Discussion} To study the threshold to which this technique is sensitive, we can construct $\Tsample$ by adding an arbitrary number of signal events to the background, without reference to the relative signal cross-section. The result is shown in Figure \ref{fig:zvsX-monojet} (left panel), using the signal dataset with $M_{\rm med} = 2$ TeV. For each value of $N_{\rm sig}$, the distribution $f(\ts|H_0)$ is constructed over 1000 permutations, and the $Z$ significance is determined through taking the minimum value of $Z$ over 100 measurements of $\ts_{\rm obs}$ for different background permutations. There is a clear threshold, below which the significance is negligible and constant, and above which the significance grows as a power-law. The number of signal events in $\Tsample_2$ crosses this threshold while $\Tsample_3$ does not, explaining the rapid drop in the significance. The strength of the technique is also sensitive to the number of samples. Figure~\ref{fig:zvsX-monojet} (right panel) demonstrates this, again using the signal dataset with $M_{\rm med} = 2$ TeV, $N_{\rm perm} = 1000$, and taking the minimum $Z$ over 100 measurements of $T_{\rm obs}$. It shows an approximately power-law growth in the significance, consistent with the same growth in the significance with number of signal events. 
Clearly, the more data the better. \begin{figure} \centering \includegraphics[width=0.6\linewidth]{mj-TSvsNPerm.pdf} \caption{ Effect of $N_{\rm perm}$ on the null-hypothesis test statistic for the monojet study with $\Tsample_2$. } \label{fig:TSvsNPerm-monojet} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\linewidth]{mj-zvsNSig2-log} \includegraphics[width=0.45\linewidth]{mj-zvsNBkg2-log} \caption{The effect of $N_{\rm sig}$ (left) and $N_B$ (right) on the ability of the algorithm to distinguish $\Bsample$ and $\Tsample$. For the left figure, $N_B = 20\,000$ background events and $N_T = N_B + N_{\rm sig}$. (Based on the actual simulated signal and background cross-sections, the true value is $N_{\rm sig} = 375$.) In the right figure, $N_T = N_B + N_{\rm sig}$, where $N_{\rm sig}$ varies in proportion to $N_B$ and the relative signal/background cross-sections. In both cases, we use the trial sample $\Tsample_2$ corresponding to the signal with $M_{\rm med} = 2$ TeV.} \label{fig:zvsX-monojet} \end{figure} \subsection{Future application to real data} \label{subsec:realdata} In a practical application of this technique by experimental collaborations, $\Bsample$ would correspond to simulations of the SM background, while $\Tsample$ would be the real-world observation, consisting of an unknown mix of signal and background events. Both $\Bsample$ and $\Tsample$ could be constructed under the same set of minimal cuts, imposed based on trigger requirements rather than as a guide to finding new physics. While the technique itself is model-independent, there is freedom to apply physical knowledge in the choice of minimal cuts to keep the background simulation and data load manageable, and in the choice of feature vector, which can either be low-level (raw 4-vectors of reconstructed objects, or even pixel hits) or high-level (missing energy, hadronic energy etc.). Even though we have only applied our method to a generic monojet signal, the strength of the algorithm is that it is sensitive to unspecified signals, and is limited only by the accuracy of the background simulation. We emphasize that our case study in Section~\ref{subsec:monojet} is a proof of concept with a generic signal and a na\"ive estimation of the background. Accurately estimating SM backgrounds at the LHC is a significant challenge in the field and must be considered carefully in any future application of this technique. Currently used techniques of matching simulations to data in control regions still allow the use of our method, although this introduces some model-dependent assumptions. Alternatively, one may apply our statistical test in the context of data-driven background calculation, as a validation tool to measure the compatibility of Monte-Carlo simulations with data in control regions. For instance, it is common practice to tune the nuisance parameters in order to make the Monte-Carlo simulation of the background match the data in control regions. When one deals with more than one control region, this procedure results in a collection of patches of the feature space, in each of which the background simulation is fit to the data. The statistical test we propose in this paper can be used to determine to what extent (significance) the background simulation is representative of the data at the global level, in all control regions. And in case of discrepancies, it can pinpoint the regions of feature space where the mismatch between data and simulations is the largest. 
As we have shown by implementing sample uncertainties in our statistical test, the test alone may not be sufficient to claim discovery in cases where background simulations are not sufficiently accurate, but this does not weaken the value of the method. It remains valuable as a tool to identify regions of excess in a model-independent way, allowing follow-up hand-crafted analyses of potential signal regions. \section{Directions for extensions} \label{sec:extensions} In this section we summarize two main directions to extend and improve the method proposed in this paper. We limit ourselves to just outlining some ideas, leaving a more complete analysis of each of these issues to future work. \subsection{Adaptive choice of the number of nearest neighbors} \label{subsec:adaptive} The procedure for the density ratio estimator described in Section \ref{subsec:estimator} relies on choosing the number $K$ of NN. As mentioned earlier, it is also possible to make the algorithm completely unsupervised by letting it choose the optimal value of $K$. One approach is to proceed by model selection as in Refs.~\cite{SugiyamaMuller, SUGIYAMA2011735,kremer}. We define the loss function as a mean-squared error between the true (unknown) density ratio $r(\boldsymbol{x})=p_T(\boldsymbol{x})/p_B(\boldsymbol{x})$ and the estimated density ratio $\hat r(\boldsymbol{x})=\hat p_T(\boldsymbol{x})/\hat p_B(\boldsymbol{x})$ over the benchmark PDF $p_B(\boldsymbol{x})$, \bea L(r,\hat r)&=&\frac{1}{2}\int \left[\hat r(\boldsymbol{x}')- r(\boldsymbol{x}')\right]^2 p_B(\boldsymbol{x}') d\boldsymbol{x}'\\ &=&\frac{1}{2}\int \hat r(\boldsymbol{x}')^2 p_B(\boldsymbol{x}') d\boldsymbol{x}' -\int \hat r(\boldsymbol{x}) p_T(\boldsymbol{x}) d\boldsymbol{x} +\frac{1}{2}\int r(\boldsymbol{x}')^2 p_B(\boldsymbol{x}') d\boldsymbol{x}'\,, \eea where the last term is constant and can be dropped, thus making the loss function independent of the unknown ratio $r(\boldsymbol{x})$. The estimated loss function is obtained by replacing the expectations over the unknown PDF $p_B$ with the empirical averages \begin{equation} \hat L(r,\hat r) = \frac{1}{2N_B}\sum_{\boldsymbol{x'}\in \Bsample}\hat r(\boldsymbol{x'})^2 -\frac{1}{N_T}\sum_{\boldsymbol{x}\in \Tsample}\hat r(\boldsymbol{x})\,. \label{loss} \end{equation} So, one can perform model selection by minimizing the estimated loss function \eqref{loss} with respect to the parameter $K$ and choosing this value of $K$ as the optimal one. However, this procedure may be computationally intensive as it requires running the full algorithm several times (one for each different value of $K$). Another approach is to implement the Point-Adaptive k-NN density estimator (PAk) \cite{Rodriguez1492, Laio2018, 2018arXiv180210549D}, which is an algorithm to automatically find a compromise between large variance of the k-NN estimator (for small $K$), and large bias (for large $K$) due to variations of the density of points. \subsection{Identifying the discrepant regions} \label{subsec:location} Suppose that after running the statistical test described in this paper one finds a $p$-value leading to a rejection of the null hypothesis, or at least for evidence of incompatibility between the original PDFs. 
This means that the absolute value of the test statistic on the actual samples $|\ts_{\rm obs}|$ is large enough to deviate from zero significantly (to simplify the discussion, we assume in this subsection that $\ts_{\rm obs}>0$ and the distribution of $\ts$ has zero mean and unit variance: $\hat \mu=0, \hat\sigma =1$). Then, our algorithm has a straightforward by-product: it allows one to characterize the regions in feature space which contribute the most to a large $\ts_{\rm obs}$. \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{circle_cross_samples.pdf}\\ \includegraphics[width=0.6\linewidth]{zscore.pdf} \caption{ \textit{Upper panel:} benchmark (magenta crosses, left) and trial (blue squares, right) samples. \textit{Lower panel:} points of trial sample with $z>3.0$; this condition isolates the regions where most of the discrepancy between samples occurs.} \label{fig:zfield} \end{figure} From the expression of the test statistic in Eq.~\eqref{eq:ts2} we see that we may associate a density field $u(\boldsymbol{x}_j)$ to each point $\boldsymbol{x}_j\in\Tsample$ as \begin{equation} u(\boldsymbol{x}_j)\equiv \log\frac{r_{j,B}}{r_{j,T}}\,, \end{equation} such that the test statistic is simply given by the expectation value (arithmetic average) of $u(\boldsymbol{x}_j)$ over the whole trial sample $\Tsample$ \begin{equation} \ts_{\rm obs} = D\cdot\textrm{E}_\Tsample[ u(\boldsymbol{x}_j)] + \log\frac{N_B}{N_T-1}\,. \end{equation} It is then convenient to define a $z$-score field over the trial sample, by standard normalization of $u(\boldsymbol{x}_j)$ as \begin{equation} z(\boldsymbol{x}_j)\equiv\frac{u(\boldsymbol{x}_j) -\textrm{E}_\Tsample[ u(\boldsymbol{x}_j)]}{\sqrt{\textrm{Var}_\Tsample[u(\boldsymbol{x}_j)]}}\,. \end{equation} One can then use this score field to identify those points in $\Tsample$ whose contribution $u(\boldsymbol{x}_j)$ is significantly larger than the average, and these points can be interpreted as the regions (or clusters) where the two samples manifest larger discrepancies. This way, the $z$-score field provides guidance for characterizing the regions in feature space where the discrepancy is more relevant, similar in spirit to regions of large signal-to-background ratio. For instance, the points $\boldsymbol{x}_j$ with $z(\boldsymbol{x}_j)$ larger than a given threshold, e.g. $z(\boldsymbol{x}_j)>3$, are the points where one expects most of the ``anomaly'' to occur. An example of this is shown in Figure \ref{fig:zfield}, where a circular $\Bsample$ sample is compared with a cross-like $\Tsample$ sample. As expected, the points with large $z$-score concentrate near the corners of the cross. Such regions of highest incompatibility between trial and benchmark samples may even be clustered using standard clustering algorithms, thus extending the method studied in this paper with another unsupervised learning technique. Once they have been characterized and isolated, these high-discrepancy regions in feature space can provide guidance for further investigation, in order to identify what causes the deviations. For example, they can be used to place data selection cuts. \section{Conclusions} \label{sec:conclusion} Many searches for new phenomena in physics (such as searches for New Physics at the LHC) rely on testing specific models and parameters. Given the unknown nature of the physical phenomenon we are searching for, it is becoming increasingly important to find model-independent methods that are sensitive to an unknown signal hiding in the data.
The presence of a new phenomenon in data manifests itself as deviations from the expected distribution of data points in absence of the phenomenon. So, we propose a general statistical test for assessing the degree of compatibility between two datasets. Our method is model-independent and non-parametric, requiring no information about the parameters or signal spectrum of the new physics being tested; it is also un-binned, taking advantage of the full multi-dimensional feature space. The test statistic we employ to measure the `distance' between two datasets is built upon a nearest-neighbors estimation of their relative local densities. This is compared with the distribution of the test statistic under the null hypothesis. Observations of the test statistic at extreme tails of its distribution indicate that the two datasets come from different underlying probability densities. Alongside an indication of the presence of anomalous events, our method can be applied to characterize the regions of discrepancy, providing a guidance for further analyses even in the case where one of the two samples (e.g. the background) is not known with enough accuracy to claim discovery. The statistical test proposed in this paper has a wide range of scientific and engineering applications, e.g. to decide whether two datasets can be analyzed jointly, to find outliers in data, to detect changes of the underlying distributions over time, to detect anomalous events in time-series data, etc. In particular, its relevance for particle physics searches at LHC is clear. In this case the observed data can be compared with simulations of the Standard Model in order to detect the presence of New Physics events in the data. Our method is highly sensitive even to a small number of these events, showing the strong potential of this technique. \acknowledgments We would like to thank A.~Davoli and A.~Morandini for collaboration at the early stages of this work, and D.~Barducci, R.~Brasselet, R.T.~D' Agnolo, A.~Farbin, F.~Gieseke, E.~Merelli, E.~Mer\'enyi, A.~Laio, L.~Lista and A.~Wulzer for insightful discussions.
\section{Introduction} Model predictive control (MPC) is used in many applications to control complex dynamical systems. Examples of such systems include production lines, car engines, robots, other numerically controlled machines, and power generators. MPC is based on optimizing the operation of the system over a future finite time horizon, subject to constraints, and implementing the control only over the current time step. Model predictive controllers rely on dynamic models of the process, most often linear empirical models, in which case the MPC is linear. Nonlinear MPC (NMPC), which describes systems with nonlinear models and constraints, is often more realistic, compared to the linear MPC, but computationally more difficult. Similar to the linear MPC, the NMPC requires solving optimal control problems, generally non-convex, on a finite prediction horizon, which poses computational challenges. Numerical solution of the NMPC optimal control problems may be based on Newton-type optimization schemes. Exact Newton-type optimization schemes require an analytic expression of a corresponding Jacobian matrix, which is rarely available in practice and is commonly replaced with a forward difference (FD) approximation; see, e.g., \cite{K95}. Such approximate Newton-type optimization schemes utilize the FD approximation of the original nonlinear equation during every time step. An efficient variant of the approximate Newton-type optimization can be performed by a Continuation NMPC (CNMPC) numerical method proposed by T.~Ohtsuka in \cite{O04}, where each step of the algorithm requires solving a system of linear equations by the GMRES iterative method \cite{SS86}. Our contributions presented below are two-fold. We describe an extension of CNMPC with a terminal constraint, suitable to solve minimum-time optimal control problems, and with an optimization parameter. We investigate preconditioning for GMRES in the context of NMPC problems, as well as the use of the MINRES iteration \cite{PS74} instead of GMRES. MINRES provides an overall faster implementation of our approach, compared to GMRES without restarts, in cases where many iterations are required. Our numerical simulations show that the preconditioning can considerably improve the quality of controllers with marginal extra computational time, which can be reduced or eliminated by employing parallel processing for the preconditioner setup. The rest of the paper is organized as follows. In Section \ref{s2}, we formulate the CNMPC of Ohtsuka, extended to include a terminal constraint and a parameter. Section~\ref{s3} describes the original algorithm of Ohtsuka, where the FD linear system is solved using GMRES, and then introduces MINRES as an alternative to GMRES, discusses preconditioning for GMRES and MINRES, and suggests specific algorithms for constructing the preconditioner and using it to accelerate convergence of iterations. In Section \ref{s4}, we give a detailed description of a test minimum-time optimal control problem, defining the quickest arrival of the system at a given destination, with inequality constraints on the system control, and its CNMPC formulation. Section \ref{s5} presents our results of numerical experiments solving the test problem, demonstrating advantages of the proposed approaches. \section{Finite horizon optimization by CNMPC}\label{s2} As a specific example of a mathematical formalism of NMPC, we consider an extended version of the control problem considered by T.
Ohtsuka \cite{O04} as follows, \[ \min_{u,p} J, \] \[ J = \phi(t+T,x(t+T),p)+\int_t^{t+T}L(t',x(t'),u(t'),p)dt' \] subject to \begin{equation}\label{e1} \dot{x} = \frac{dx}{dt'}=f(t',x(t'),u(t'),p), \end{equation} \begin{equation}\label{e2} C(t',x(t'),u(t'),p) = 0, \end{equation} \begin{equation}\label{e3} \psi(t+T,x(t+T),p) = 0. \end{equation} Here, $x=x(t)$ denotes the vector of the state of the dynamic system, also serving as an initial state for the optimal control problem over the horizon. The vector $u=u(t)$ is the control vector, serving as an input to control the system. The scalar function $J$ describes a performance cost to be minimized, which includes a terminal cost (the first term in the sum) and a cost over the finite horizon (the second term in the sum). Equation (\ref{e1}) is the system dynamic model that may be nonlinear in $x$ and/or $u$. Equation (\ref{e2}) describes the equality constraints for the state $x$ and the control~$u$. The horizon time length $T$ may in principle also depend on $t$, e.g.,\ for time-optimal control problems. In this case, the original problem can be converted into a fixed horizon problem by letting $T(t)=1\cdot t_f$, where $t_f$ is an additional parameter to be included in $p$ and determined in MPC. Substituting $t+\tau t_f$ for the time $t'$, we arrive at a problem with the normalized time scale $\tau$ and fixed horizon $[t,t+1]$. Such a conversion is applied to the test problem in Section \ref{s4}. Compared to \cite{O04}, one extra constraint (\ref{e3}), described by the terminal constraint function $\psi$, and an extra parameter vector $p$ are being added to the problem formulation, allowing one to extend CNMPC to a wide range of optimal control and design problems. The NMPC optimal control problem is solved by a variational approach. Its discrete counterpart is solved by the traditional Lagrange method of undetermined multipliers. We denote the costate vector by $\lambda$ and the Lagrange multiplier vector associated with the equality constraint (\ref{e2}) by $\mu$. The terminal constraint (\ref{e3}) is relaxed by introducing the Lagrange multiplier $\nu$. The so-called Hamiltonian function, as defined in control theory, is \begin{eqnarray*} \lefteqn{H(t,x,\lambda,u,\mu,p) = L(t,x,u,p)}\hspace*{6em}\\ &&{}+\lambda^Tf(t,x,u,p)+\mu^TC(t,x,u,p). \end{eqnarray*} To discretize the continuous formulation of the optimal control problem stated above, we introduce a uniform horizon time grid by dividing the horizon $[t,t+T]$ into $N$ time steps of size $\Delta\tau$ and replace the time-continuous vector functions $x(\tau)$ and $u(\tau)$ by their indexed values $x_i$ and $u_i$ at the grid points. Thus, $N$ is a number of artificial time steps for the optimal control problem over the horizon. The integral in the performance cost $J$ over the time horizon is approximated by a simple quadrature rule. The time derivative of the state vector is approximated by the forward difference formula. Then the discretized optimal control problem appears as follows, \[ \min_{u_i,p} J, \] \[ J = \phi(\tau_N,x_N,p) + \sum_{i=0}^{N-1}L(\tau_i,x_i,u_i,p)\Delta\tau, \] subject to \begin{equation* \qquad x_{i+1} = x_i + f(\tau_i,x_i,u_i,p)\Delta\tau,\quad i = 0,1,\ldots,N-1, \end{equation*} \begin{equation* C(\tau_i,x_i,u_i,p) = 0,\quad i = 0,1,\ldots,N-1, \end{equation*} \begin{equation* \psi(\tau_N,x_N,p) = 0. \end{equation*} We note that we have so far discretized the NMPC optimal control problem only in the horizon time. 
We will discretize the system time $t$ later using the uniform time step size $\Delta t$, i.e. discretization in the horizon time may be different from the time discretization of the system. The necessary optimality conditions for the discretized horizon problem are obtained using the discrete Lagrangian function \begin{eqnarray*} \lefteqn{\mathscr{L}(X,U)=\phi(\tau_N,x_N,p)+\sum_{i=0}^{N-1}L(\tau_i,x_i,u_i,p)\Delta\tau}\\ &&{}+\lambda_0^T[x(t)-x_0]\\ &&+\sum_{i=0}^{N-1}\lambda_{i+1}^T[x_i-x_{i+1} +f(\tau_i,x_i,u_i,p)\Delta\tau]\\ &&+\sum_{i=0}^{N-1}\mu_i^TC(\tau_i,x_i,u_i,p)\Delta\tau+\nu^T\psi(\tau_N,x_N,p), \end{eqnarray*} where $X = [x_i\; \lambda_i]^T$ and $U = [u_i\; \mu_i\; \nu\; p]^T$. Namely, the necessary optimality conditions coincide with the stationarity conditions \[ \frac{\partial \mathscr{L}^T}{\partial X}(X,U)=0 \mbox{ and } \frac{\partial \mathscr{L}^T}{\partial U}(X,U)=0. \] For example, the derivative with respect to $u_i$, which is $\partial \mathscr{L}^T/\partial u_i=0$, yields the following equation: \begin{eqnarray*} \lefteqn{\frac{\partial L}{\partial u_i}(\tau_i,x_i,u_i,p)\Delta\tau+\lambda_{i+1}^T \frac{\partial f}{\partial u_i}(\tau_i,x_i,u_i,p)\Delta\tau}\hspace*{6.2em}\\ &&+\mu_i^T\frac{\partial C}{\partial u_i}(\tau_i,x_i,u_i,p)\Delta\tau=0. \end{eqnarray*} Using the Hamiltonian function, it can be shortened to \[ \frac{\partial H}{\partial u_i}(\tau_i,x_i,\lambda_{i+1},u_i,\mu_i,p)\Delta\tau=0. \] Taking the derivative with respect to $\mu_i$, which is ${\partial \mathscr{L}^T}/{\partial\mu_i}=0$, we obtain the following equation, which also involves the factor $\Delta\tau$, \[ C(\tau_i,x_i,u_i,p)\Delta\tau = 0. \] Now we proceed to the construction of a vector function $F(U,x,t)$, which is used to formulate the full set of necessary optimality conditions. The vector function $U=U(t)$ combines the control input $u$, the Lagrange multiplier $\mu$, the Lagrange multiplier $\nu$, and the parameter $p$, all in one vector, as follows, \[ U(t)=[u_0^T,\ldots,u_{N-1}^T,\mu_0^T,\ldots,\mu_{N-1}^T,\nu^T,p^T]^T. \] The vector argument $x$ in the function $F(U,x,t)$ denotes the current measured state vector, which serves as the initial vector $x_0$ in the following algorithm, defining an evaluation of $F(U,x,t)$. \begin{enumerate} \item Starting with the current measured state $x_0$, compute $x_i$, $i=1,2\ldots,N$, by the forward recursion \[ x_{i+1} = x_i + f(\tau_i,x_i,u_i,p)\Delta\tau,\, i=0,\ldots,N-1. \] Then starting with the value \[ \lambda_N=\frac{\partial\phi^T}{\partial x}(\tau_N,x_N,p)+ \frac{\partial\psi^T}{\partial x}(\tau_N,x_N,p)\nu \] compute the costate $\lambda_i$, $i=N\!-\!1,\ldots,0$, by the backward recursion \[ \lambda_i=\lambda_{i+1}+\frac{\partial H^T}{\partial x} (\tau_i,x_i,\lambda_{i+1},u_i,\mu_i,p)\Delta\tau. 
\] \item Calculate the vector function $F[U,x,t]$, using the just obtained $x_i$ and $\lambda_i$, $i=0,1,\ldots,N$, as follows, \begin{eqnarray*} \lefteqn{F[U,x,t]}\\ &&\hspace*{-2em}=\left[\begin{array}{c}\begin{array}{c} \frac{\partial H^T}{\partial u}(\tau_0,x_0,\lambda_{1},u_0,\mu_0,p)\Delta\tau\\ \vdots\\\frac{\partial H^T}{\partial u}(\tau_i,x_i,\lambda_{i+1},u_i,\mu_i,p)\Delta\tau\\ \vdots\\\frac{\partial H^T}{\partial u}(\tau_{N-1},x_{N-1},\lambda_{N},u_{N-1}, \mu_{N-1},p)\Delta\tau\end{array}\\\;\\ \begin{array}{c}C(\tau_0,x_0,u_0,p)\Delta\tau\\ \vdots\\C(\tau_i,x_i,u_i,p)\Delta\tau\\\vdots\\ C(\tau_{N-1},x_{N-1},u_{N-1},p)\Delta\tau\end{array}\\\;\\ \psi(\tau_N,x_N,p)\\[2ex] \begin{array}{c}\frac{\partial\phi^T}{\partial p}(\tau_N,x_N,p)+ \frac{\partial\psi^T}{\partial p}(\tau_N,x_N,p)\nu\\ +\sum_{i=0}^{N-1}\frac{\partial H^T}{\partial p}(\tau_i,x_i, \lambda_{i+1},u_i,\mu_i,p)\Delta\tau\end{array} \end{array}\right]. \end{eqnarray*} \end{enumerate} The optimality condition is the nonlinear equation \begin{equation}\label{e7} F[U(t),x(t),t]=0 \end{equation} with respect to the unknown $U(t)$, which needs to be solved numerically by a computer processor at each time step of NMPC in real time on the controller board. This is the most difficult and challenging part of the implementation of NMPC. At the initial time $t=t_0$, we need to approximately solve \eqref{e7} directly. Let us denote the step size of the system time discretization by $\Delta t$, assume that $U(t-\Delta t)$ is already available at the time $t$, and set $\Delta U=U(t)-U(t-\Delta t).$ For a small scalar $h>0$, which may be different from the system time step $\Delta t$ and from the horizon time step~$\Delta \tau$, we introduce the operator \begin{eqnarray}\label{e9} \lefteqn{a(V)=(F[U(t-\Delta t)+hV,x(t),t]}\hspace*{4em}\\ &&{}-F[U(t-\Delta t),x(t),t])/h.\nonumber \end{eqnarray} Then equation (\ref{e7}) is equivalent to the equation \begin{equation*} h a(\Delta U/h)=b, \text{ where } b=-F[U(t-\Delta t),x(t),t]. \end{equation*} Let us denote the $j$-th column of the $m\times m$ identity matrix by $e_j$, where $m$ is the dimension of the vector $U$, and construct an $m\times m$ matrix $A$ with the columns $Ae_j$, $j=1,\ldots,m$, defined by the formula \begin{equation}\label{e10} Ae_j=a(e_j). \end{equation} The matrix $A$ approximates the symmetric Jacobian matrix $F_U[U(t-\Delta t),x(t),t]$ so that $a(V)=AV+O(h)$. It is important to realize that the operator $a(\cdot)$ in~\eqref{e9} may be nonlinear. In particular, this explains why explicitly computing $A$ for the purpose of a preconditioner setup may result in a non-symmetric matrix $A$. Numerical stability of computations may be improved by enforcing the symmetry, i.e., by substituting $(A+A^T)/2$ for $A$. The deviation from symmetry gets smaller as $h$ decreases, and we are free to choose $h$ independently of $\Delta t$ and $\Delta \tau$. A key limitation in the choice of $h$ comes from the fact that cancellation errors start to dominate the finite difference evaluation of the operator $a(V)$ due to the inexact arithmetic of the controller processor. This is an unavoidable side effect of using the finite difference approximation of the derivative. A recommended lower bound for the value of $h$ is, for example, $10^{-8}$ in double precision arithmetic, but the optimal value also depends on the function $F[U,x,t]$.
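For illustration, a minimal Python sketch of the operator $a(\cdot)$ from \eqref{e9} and of the column-by-column construction \eqref{e10} of the matrix $A$ is given below. The routine \texttt{F} is a user-supplied evaluation of the vector function $F[U,x,t]$ (our own naming, not a library interface), and the optional symmetrization $(A+A^T)/2$ discussed above is included.
\begin{verbatim}
import numpy as np

def make_a_operator(F, U_prev, x, t, h):
    # a(V) = (F[U_prev + h V, x, t] - F[U_prev, x, t]) / h, cf. (9)
    F0 = F(U_prev, x, t)
    def a(V):
        return (F(U_prev + h * V, x, t) - F0) / h
    return a, F0

def assemble_A(a, m, symmetrize=True):
    # A e_j = a(e_j) for j = 1, ..., m, cf. (10); the columns are
    # independent and may be computed in parallel on the controller.
    A = np.empty((m, m))
    for j in range(m):
        e_j = np.zeros(m)
        e_j[j] = 1.0
        A[:, j] = a(e_j)
    if symmetrize:
        A = 0.5 * (A + A.T)  # enforce symmetry for numerical stability
    return A
\end{verbatim}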
Given the formulas for computing the vector function $F[U,x,t]$, the nonlinear equation (\ref{e7}) must be solved at the points of the grid $t_i=t_0+i\Delta t$, $i=0,1,\ldots$. At the initial state $x_0=x(t_0)$, we find an approximate solution $U_0$ to the equation $F[U_0,x_0,t_0]=0$ by a suitable optimization procedure. The dimension of the vector $u(t)$ is denoted by $n_u$. Since \[ U(t)=[u_0^T,\ldots,u_{N-1}^T,\mu_0^T,\ldots,\mu_{N-1}^T,\nu^T,p^T]^T, \] the first block entry of $U_0$, formed from the first $n_u$ elements of $U_0$, is taken as the control $u_0$ at the state $x_0$. The next state $x_1=x(t_1)$ is either measured by a sensor or computed by the formula $x_1=x_0+\Delta tf(t_0,x_0,u_0)$; cf. \eqref{e1}. Now we start the recursion as follows. At the time $t_i$, where $i>0$, we arrive with the state $x_i$ and the vector $U_{i-1}$. The operator \[ a_i(V)=\left(F[U_{i-1}+hV,x_i,t_i]-F[U_{i-1},x_i,t_i]\right)/h, \] defined as in \eqref{e9}, determines an $m\times m$ matrix $A_i$ with the columns \[ A_ie_j=a_i(e_j),\, j=1,\ldots,m, \] as in (\ref{e10}). At the current time $t_i$, our goal is to solve the following equation \begin{equation}\label{e11} ha_i(\Delta U_i/h)=b_i, \text{ where } b_i=-F[U_{i-1},x_i,t_i]. \end{equation} Then we set $U_i=U_{i-1}+\Delta U_i$ and choose the first $n_u$ components of $U_i$ as the control $u_i$. The next state $x_{i+1}=x(t_{i+1})$ either comes from a sensor, is estimated, or is computed by the formula $x_{i+1}=x_i+\Delta tf(t_i,x_i,u_i)$. Having described the basic setup of CNMPC, leading to equation \eqref{e11}, we next discuss the numerical solution of \eqref{e11}. Let us highlight that equation \eqref{e11} is never solved exactly in practice; thus, the choice of algorithm may greatly affect not only the performance of the controller, but also the computed control itself. \section{Algorithms}\label{s3} A direct way to solve \eqref{e11} approximately is to generate the matrix $A_i$ and then solve the system of linear equations $A_i\Delta U_i=b_i$ by, e.g.,\ Gaussian elimination. Another way is to solve (\ref{e11}) by a suitable Krylov subspace iteration, e.g.,\ by GMRES \cite{SS86} or MINRES~\cite{PS74} methods, where we do not need to generate the matrix~$A_i$ explicitly. Namely, we simply use the operator $a_i(V)$ instead of computing the matrix-vector product $A_iV$, for arbitrary vectors $V$; cf.,~\cite{K95,KK04}. In his seminal paper~\cite{O04}, T.~Ohtsuka uses the GMRES iteration. A typical implementation of the preconditioned GMRES without restarts is given by Algorithm \ref{a1}, where $Tr$ denotes the action of a preconditioner $T$ on a vector $r$, as explained below. The unpreconditioned GMRES, as in~\cite{O04}, simply uses $z=r$. We denote by $H_{i_1:i_2,j_1:j_2}$ the submatrix of $H$ with the entries $H_{ij}$ such that $i_1\leq i\leq i_2$ and $j_1\leq j\leq j_2$.
\begin{algorithm} \caption{Preconditioned GMRES without restarts} \begin{algorithmic}[1]\label{a1} \REQUIRE $a(v)$, $b$, $x_0$, $k_{\max}$, $T$ \ENSURE Solution $x$ of $a(x)=b$ \STATE $r=b-a(x_0)$, $z=Tr$, $\beta=\|z\|_2$, $v_1=z/\beta$ \FOR{$k=1,\ldots,k_{\max}$} \STATE $r=a(v_k)$, $z=Tr$ \STATE $H_{1:k,k}=[v_1,\ldots,v_k]^Tz$ \STATE $z=z-[v_1,\ldots,v_k]H_{1:k,k}$ \STATE $H_{k+1,k}=\|z\|_2$ \STATE $v_{k+1}=z/\|z\|_2$ \ENDFOR \STATE $y=\mbox{arg min}_y\|H_{1:k_{\max}+1,1:k_{\max}}y-[\beta,0,\dots,0]^T\|_2$ \STATE $x=x_0+[v_1,\ldots,v_{k_{\max}}]y$ \end{algorithmic} \end{algorithm} We emphasize that the operator $a_i(\cdot)$ may be nonlinear, but approximates the symmetric Jacobian matrix $F_U[U_{i-1},x_i,t_i]$. This implies a slight deviation from the symmetry property $V_2^Ta_i(V_1)=(a_i(V_2))^TV_1$ for arbitrary vectors $V_1$ and $V_2$. We assume that the deviation is small and propose applying the MINRES iteration to solve equation (\ref{e11}). When the operator $a_i(\cdot)$ is linear and symmetric, the projected $(k_{\max}+1)\times k_{\max}$ matrix $H$, constructed by GMRES without preconditioning, is tridiagonal. The~MINRES method is then a special variant of GMRES, which makes use of the tridiagonal structure. The table below, adopted from \cite{CS14}, gives a comparison of computational complexities of MINRES and GMRES without preconditioning for the solution of a linear system $Ax=b$ with a symmetric $m\times m$ matrix $A$, in terms of the memory storage required by the working vectors in the solvers and the number of floating-point operations per iteration. By $t_P$ we denote the work needed for evaluating $a_i(V)$. \vspace{1ex}\hspace{-1.5em}\begin{tabular}{|l|c|c|} \hline Solver&Storage&Work per iteration\\ \hline MINRES&$7m$&$t_P+9m$\\ \hline GMRES&$(k_{\max}+2)m$&$t_P+(k_{\max}+3)m+\frac{m}{k_{\max}}$\\[.5ex] \hline \end{tabular} \vspace{1ex} If the matrix $A_i$ becomes ill-conditioned, the convergence of GMRES or MINRES may stagnate. The convergence can be improved by preconditioning. A matrix $T_i$ that approximates the matrix $A_i^{-1}$, and such that computing the product $T_ir$ for an arbitrary vector $r$ is relatively inexpensive, is referred to as a preconditioner. The preconditioning for the system of linear equations $Ax=b$ with the preconditioner $T$ formally replaces the original system $Ax=b$ with the equivalent preconditioned linear system $TAx=Tb$. If the condition number $\kappa(TA)=\|TA\|\|A^{-1}T^{-1}\|$ of the matrix $TA$ is small, convergence of iterative solvers for the preconditioned system can be fast. However, the convergence of the preconditioned GMRES, in contrast to that of the preconditioned MINRES with a symmetric positive definite preconditioner, is not necessarily determined by the condition number $\kappa(TA)$. Results on convergence of GMRES in a nonlinear case can be found in \cite{BM01}. When the approximate solution $x_{k_{\max}}$ computed by GMRES after $k_{\max}$ iterations is not accurate enough, it is very common to restart GMRES with $x_0$ equal to $x_{k_{\max}}$ instead of increasing the maximum number of iterations $k_{\max}$. Practical implementations of GMRES perform restarts. Restarts cap the GMRES memory use at $k_{\max}+2$ vectors, but may significantly slow down the convergence. In our tests, we apply GMRES without restarts for simplicity of presentation.
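For illustration, Algorithm~\ref{a1} admits the following direct transcription into Python/NumPy. This is only a sketch, not an optimized implementation: the preconditioner is passed as a function \texttt{T} applying $T$ to a vector, and the small least squares problem in the last step is solved by \texttt{numpy.linalg.lstsq}. Note that, like Algorithm~\ref{a1}, the sketch keeps all $k_{\max}+1$ basis vectors in memory.
\begin{verbatim}
import numpy as np

def gmres_no_restart(a, b, x0, k_max, T=lambda r: r):
    # Preconditioned GMRES without restarts, following Algorithm 1;
    # a(v) applies the (possibly nonlinear) operator, T(r) the preconditioner.
    m = b.size
    V = np.zeros((m, k_max + 1))
    H = np.zeros((k_max + 1, k_max))
    z = T(b - a(x0))
    beta = np.linalg.norm(z)
    V[:, 0] = z / beta
    for k in range(k_max):
        z = T(a(V[:, k]))
        H[:k + 1, k] = V[:, :k + 1].T @ z       # Gram-Schmidt coefficients
        z = z - V[:, :k + 1] @ H[:k + 1, k]     # orthogonalize against v_1,...,v_k
        H[k + 1, k] = np.linalg.norm(z)
        V[:, k + 1] = z / H[k + 1, k]
    rhs = np.zeros(k_max + 1)
    rhs[0] = beta
    y = np.linalg.lstsq(H, rhs, rcond=None)[0]  # small least squares problem
    return x0 + V[:, :k_max] @ y
\end{verbatim}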
To set up the preconditioner, the matrix $A_i$ is computed at some time $t_i$, followed by its LU factorization $A_i=LU$, where $L$ is a lower- and $U$ is an upper-triangular matrix. The product $Tr$ is mathematically given by $Tr=U^{-1}(L^{-1}r)$, but is computed by forward and backward substitution, which is much cheaper than the computation of the inverses of $L$ and $U$. The same preconditioner $T$ is used at a number of subsequent grid points starting from $t_i$. The computation of the matrix $A_i$ requires $m$ evaluations $a_i(e_j)$, see \eqref{e10}, which can be efficiently implemented in parallel. The symmetry of the preconditioner $T$ can be used to reduce the memory storage and processor work; see, e.g.,\ \cite{BGL05}. For example, the factorization $T=LDL^T$, see e.g.\ \cite{GVL13}, instead of the LU factorization allows us to use only half of the memory. The anti-triangular factorization from~\cite{MVD13} may also reduce both the memory requirements and work in preconditioning. MINRES requires symmetric positive definite preconditioners such as in~\cite{VK13}. In our MINRES simulations, although not reported in detail in Section \ref{s5}, we use the preconditioned MINRES-QLP method from \cite{CS14}. \section{Test problem}\label{s4} In this section, we formulate a test nonlinear problem called TfC below for brevity, which describes the minimum-time motion from a state $(x_0,y_0)$ to a state $(x_f,y_f)$ with an inequality-constrained control. The problem TfC has the following components: \begin{itemize} \item State vector: $\vec{x}=\left[\begin{array}{c}x\\y\end{array}\right]$. Input: $\vec{u}=\left[\begin{array}{c}u\\u_d\end{array}\right]$. \item Parameter variables: $\vec{p}=[t_f]$, where $t_f$ denotes the length of the evaluation horizon. \item Dynamics: $\dot{\vec{x}}=f(\vec{x},\vec{u},\vec{p})= \left[\begin{array}{c}(Ax+B)\cos u\\(Ax+B)\sin u\end{array}\right]$. \item Constraints: $C(\vec{x},\vec{u},\vec{p})=[(u-c_{u})^2+u_d^2-r_{u}^2]=0$, i.e., the control $u$ always stays within the band $c_{u}-r_{u}\leq u\leq c_{u}+r_{u}$. \item Terminal constraints: $\psi(\vec{x},\vec{p})=\left[\begin{array}{c} x-x_f\\y-y_f\end{array}\right]=0$ (the state should pass through the point $(x_f,y_f)$ at $t=t_f$) \item Objective function to minimize: \[ J=\phi(\vec{x},\vec{p})+\int_t^{t+t_f}L(\vec{x},\vec{u},\vec{p})dt', \] where \[ \phi(\vec{x},\vec{p})=t_f,\quad L(\vec{x},\vec{u},\vec{p})=-w_{d}u_d \] (the state should arrive at $(x_f,y_f)$ in the shortest time; the function $L$ serves to stabilize the slack variable $u_d$) \item Constants: $A=B=1$, $x_0=y_0=0$, $t_0=0$, $x_f=y_f=1$, $c_{u}=0.8$, $r_{u}=0.2$, $w_{d}=0.005$. \end{itemize} The components of the corresponding discretized problem on the horizon are given below: \begin{itemize} \item the scaled horizon time $(\tau-\tau_0)/t_f\in[0,1]$ replaces the original horizon time $\tau\in[\tau_0,\tau_0+t_f]$; \item the discretized scaled horizon time is thus $\tau_i=i\Delta\tau$, where $i=0,1,\ldots,N$, and $\Delta\tau=1/N$; \item the participating variables are the state $\left[\begin{array}{c} x_i\\y_i\end{array}\right]$, the costate $\left[\begin{array}{c} \lambda_{1,i}\\\lambda_{2,i}\end{array}\right]$, the control $\left[\begin{array}{c} u_{i}\\u_{di}\end{array}\right]$, the Lagrange multipliers $\mu_i$ and $\left[\begin{array}{c}\nu_{1}\\\nu_{2}\end{array}\right]$; \item the state is governed by the model equation \[ \left\{\begin{array}{l} x_{i+1}=x_i+\Delta\tau\left[p\left(Ax_{i}+B\right)\cos u_{i}\right],\\ y_{i+1}=y_i+\Delta\tau\left[p\left(Ax_{i}+B\right)\sin u_{i}\right],\end{array}\right.
\] where $i=0,1,\ldots,N-1$; \item the costate is determined by the backward recursion ($\lambda_{1,N}=\nu_1$, $\lambda_{2,N}=\nu_2$) \[ \left\{\begin{array}{l} \lambda_{1,i}=\lambda_{1,i+1}\\ \hspace{2.5em}{}+\Delta\tau\left[pA(\cos u_i \lambda_{1,i+1}+\sin u_i\lambda_{2,i+1})\right],\\ \lambda_{2,i} = \lambda_{2,i+1},\end{array}\right. \] where $i=N-1,N-2,\ldots,0$; \item the equation $F(U,x_0,t_0)=0$, where \begin{eqnarray*} \lefteqn{U=[u_0,u_{d,0},\ldots,u_{N-1},u_{d,N-1},}\hspace*{8em}\\ &&\mu_0,\ldots,\mu_{N-1},\nu_1,\nu_2,p], \end{eqnarray*} has the following rows from the top to the bottom: \[ \left\{\begin{array}{l} \Delta\tau p\left[(Ax_i+B)\left(-\sin u_i\lambda_{1,i+1}+ \cos u_i\lambda_{2,i+1}\right)\right.\\ \hspace*{11em}\left.{}+2\left(u_i-c_{u}\right)\mu_i\right] = 0 \\ \Delta\tau p\left[2\mu_iu_{di}-w_{d}\right] = 0 \end{array}\right. \] \[ \left\{\;\;\Delta\tau p\left[(u_i-c_{u})^{2}+u_{di}^2-r_{u}^2\right]=0 \right.\hspace*{8em} \] \[ \left\{\begin{array}{l} x_N-x_f=0\\y_N-y_f=0\end{array}\right.\hspace{15em} \] \[ \left\{\begin{array}{l}\Delta\tau \{\sum\limits^{N-1}_{i=0} (Ax_i+B)(\cos u_i\lambda_{1,i+1}+\sin u_i\lambda_{2,i+1})\\ \hspace{0em}{}+\mu_i[(u_i-c_u)^2+u_{di}^2-r_u^2]-w_du_{di}\}+1 = 0.\end{array}\right. \] \end{itemize} Substituting $p\mu_i$ for $\mu_i$ prior to differentiating the Lagrangian leads to alternative formulas, which are simpler and, as observed in our tests, more numerically stable: \[ \left\{\begin{array}{l} \Delta\tau\left[p(Ax_i+B)\left(-\sin u_i\lambda_{1,i+1}+ \cos u_i\lambda_{2,i+1}\right)\right.\\ \hspace*{11em}\left.{}+2\left(u_i-c_{u}\right)\mu_i\right] = 0 \\ \Delta\tau\left[2\mu_iu_{di}-w_{d}p\right] = 0 \end{array}\right. \] \[ \left\{\;\;\Delta\tau\left[(u_i-c_{u})^{2}+u_{di}^2-r_{u}^2\right]=0 \right.\hspace*{8em} \] \[ \left\{\begin{array}{l} x_N-x_f=0\\y_N-y_f=0\end{array}\right.\hspace{15em} \] \[ \left\{\begin{array}{l}\Delta\tau [\sum\limits^{N-1}_{i=0} (Ax_i+B)(\cos u_i\lambda_{1,i+1}+\sin u_i\lambda_{2,i+1})\\ \hspace{12em}{}-w_du_{di}]+1 = 0.\end{array}\right. \] We use the latter formulas in our numerical experiments described in the next section. \section{Numerical results}\label{s5} In our numerical experiments with the TfC problem, the system of linear equations (\ref{e11}) is solved by the GMRES method. We have also tested MINRES, obtaining controls similar to those computed with GMRES and reported here. The number of evaluations of $a(V)$ in GMRES does not exceed an a priori chosen parameter denoted by $k_{\max}$; the error tolerance is $tol=10^{-5}$. The sampling time in the evaluation horizon is $\Delta\tau=0.1$, the sampling time of the simulation is $\Delta t=0.02$, and $h=10^{-5}$. The preconditioners are constructed as follows. At the time instances $t=jt_p$, $j=0,1,\ldots$, with an a priori chosen time increment $t_p$, we calculate all entries of the matrix $A$ by (\ref{e10}) and its LU factorization $A=LU$ by Gaussian elimination with partial pivoting. The computed factors $L$ and $U$ are then used in the preconditioner as $Tr=U^{-1}(L^{-1}r)$ for all sampling points $t=i\Delta t$ in the interval $[jt_p,(j+1)t_p)$. The whole set of simulations reported here consists of the following four cases: \begin{enumerate} \item no preconditioning, $k_{\max}=10$; \item preconditioning with $t_p=0.2$ sec, $k_{\max}=1$; \item preconditioning with $t_p=0.4$ sec, $k_{\max}=2$; \item preconditioning with $t_p=0.4$ sec, $k_{\max}=10$. \end{enumerate} The computed results are similar in all reported cases.
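For reference, the preconditioner setup and a single CNMPC step used in these simulations can be sketched in Python as follows. The sketch reuses the routines \texttt{make\_a\_operator}, \texttt{assemble\_A} and \texttt{gmres\_no\_restart} from the earlier sketches (our own illustrative code, not a library), and relies on \texttt{scipy.linalg.lu\_factor} and \texttt{lu\_solve} for the LU factorization with partial pivoting and the corresponding triangular solves.
\begin{verbatim}
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def setup_lu_preconditioner(F, U_prev, x_i, t_i, h):
    # Recomputed once every t_p seconds: assemble A by (10) and factor A = LU.
    a_i, _ = make_a_operator(F, U_prev, x_i, t_i, h)
    A = assemble_A(a_i, U_prev.size, symmetrize=False)
    return lu_factor(A)          # (lu, piv), reused until the next recomputation

def cnmpc_step(F, U_prev, x_i, t_i, h, k_max, lu_piv=None):
    # One sampling step: solve h a_i(dU/h) = b_i, cf. (11), then update U.
    a_i, F0 = make_a_operator(F, U_prev, x_i, t_i, h)
    b_i = -F0
    T = (lambda r: lu_solve(lu_piv, r)) if lu_piv is not None else (lambda r: r)
    W = gmres_no_restart(a_i, b_i / h, np.zeros(b_i.size), k_max, T)
    U_i = U_prev + h * W         # dU_i = h * W
    return U_i                   # the first n_u entries of U_i give the control u_i
\end{verbatim}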
Figure 1 displays the typical CNMPC control $u$, within the constant constraints, and the time to destination $t_f$, both as functions of the system time in seconds, shown on the horizontal axis. Figure 2 shows a typical system trajectory in the $x$-$y$ plane. \begin{figure*} \centering \subfloat[NMPC control $u$ and time to destination $t_f$ for TfC (reaches the target at $t=0.96$)]{ \label{fig1a} \resizebox{.45\textwidth}{!}{{\scalefont{1}\includegraphics{fig1}}} }\hfill \subfloat[TfC trajectory by NMPC]{ \label{fig1b} \resizebox{.45\textwidth}{!}{{\scalefont{1}\includegraphics{fig2}}} } \end{figure*} \begin{figure*} \setcounter{subfigure}{2} \centering \subfloat[GMRES without preconditioning, $k_{\max}=10$]{ \label{fig2a} \resizebox{.45\textwidth}{!}{{\scalefont{1}\includegraphics{fig3}}} }\hfill \subfloat[GMRES with preconditioning, $t_p=0.2$ sec, $k_{\max}=1$]{ \label{fig2b} \resizebox{.45\textwidth}{!}{{\scalefont{1}\includegraphics{fig4}}} } \end{figure*} \begin{figure*} \setcounter{subfigure}{4} \centering \subfloat[GMRES with preconditioning, $t_p=0.4$ sec, $k_{\max}=2$]{ \label{fig3a} \resizebox{.45\textwidth}{!}{{\scalefont{1}\includegraphics{fig5}}} }\hfill \subfloat[GMRES with preconditioning, $t_p=0.4$ sec, $k_{\max}=10$]{ \label{fig3b} \resizebox{.45\textwidth}{!}{{\scalefont{1}\includegraphics{fig6}}} } \end{figure*} Figures 3--6 show the value of $\|F\|$, which we want to vanish, and the GMRES residual (the left vertical axis), as well as the number of actually performed GMRES iterations (the right vertical axis), at every system time step for all four cases, where the horizontal axis represents the system time in seconds. Figure 3 corresponds to the GMRES iterations without preconditioning. Figures 4--6 involve the preconditioner, recalculated with various frequencies, determined by the time increment $t_p$, and for different $k_{\max}$ ranging from $1$ to $10$. In Figure 3, the number of actually performed GMRES iterations without preconditioning is always the maximum allowed in this test, $k_{\max}=10$. We use this test as a baseline for comparisons. We first point out the good behavior of the preconditioned GMRES even with $k_{\max}=1$, where the preconditioner is reconstructed only once every $t_p=0.2$ sec; see Figure 4. This clearly demonstrates the fact that preconditioning reduces the number of evaluations of the vector function $F(U,x,t)$. The effect of increasing the maximum number $k_{\max}$ of GMRES steps is seen by comparing Figures 4--6. Specifically, in Figure 4, $t_p=0.2$ sec and $k_{\max}=1$, compared to $t_p=0.4$ sec and $k_{\max}=2$ in Figure~5, i.e., we can recompute the preconditioner half as frequently at the cost of increasing $k_{\max}$ from $1$ to $2$, and we observe a slightly better quality of the solution, as measured by the generally smaller values of $\|F\|$ and the GMRES residual (the left vertical axis). In Figure 6, the preconditioner is recomputed as frequently as in Figure 5, but the largest allowed number of GMRES iterations is increased from $k_{\max}=2$ to $k_{\max}=10$. We observe in Figure 6 that GMRES often activates the default stopping criterion, requiring the residual norm to be smaller than $10^{-5}$, before reaching the allowed number of iterations $k_{\max}$. Overall, this leads to a generally much smaller residual in Figure 6 compared to that in Figure 5.
However, the most decisive quantity $\|F\|$ behaves similarly in Figures 5 and 6, and the computed controls are so similar that the increase of $k_{\max}$ from $2$ to $10$ may be unnecessary. The efficiency of preconditioning is illustrated by comparing Figures 3 and 5, where the number of iterations is reduced by a factor of five while giving similar or smaller values of $\|F\|$. In minimum-time optimal control problems, the length of the evaluation horizon gets smaller as the state $(x,y)$ approaches the goal position. Near the goal position $(1,1)$ the control has less capability (controllability) to direct the state towards the goal because of the short time remaining for control. This makes the equation $F(U)=0$ more difficult to solve numerically; thus, $\|F\|$ increases near the goal position, as seen in Figures~3--6. \section*{Conclusions} Time-optimal problems are practically important, giving optimal solutions for guidance, navigation and control, which can be used for vehicles, trains, etc. Due to heavily nonlinear equations and highly coupled variables, time-optimal problems are difficult to solve numerically. We present what appears to be the first successful extension of CNMPC to real-time control of such problems. Our numerical experiments demonstrate a dramatic acceleration of the convergence of the iterations, without sacrificing control quality, when proper preconditioning is used. The proposed concurrent construction of the preconditioner can be easily and efficiently implemented in parallel on controllers with multiple processing units, such as multi-core CPUs, graphics processing units, and modern field-programmable gate arrays. Replacing GMRES with the MINRES iterative solver may help reduce controller memory requirements and increase the speed of convergence. Our algorithm, including the preconditioner setup implemented in parallel and the iterative solver, can significantly speed up the calculation of the control, compared to traditional sequential CNMPC algorithms, thus allowing control of systems with faster dynamics. Our future work concerns analyzing MINRES as a possible replacement of GMRES, and developing efficient preconditioners, with faster on-line setup and application, within the framework of CNMPC.
\section{Introduction} Quantum automorphism groups were introduced by Wang \cite{Wangqsymmetry} in his study of noncommutative symmetries of finite dimensional $ C^* $-algebras. These quantum groups are quite different from $ q $-deformations of compact Lie groups, and interestingly, they appear naturally in a variety of contexts, including combinatorics and free probability, see for instance \cite{BBCsurvey}, \cite{BCSdefinetti}. The $ C^* $-algebraic properties of quantum automorphism groups were studied by Brannan \cite{Brannanquantumautomorphism}, revealing various similarities with free group $ C^* $-algebras. \\ An interesting subclass of quantum automorphism groups is provided by quantum permutation groups. Following \cite{BSliberation}, we will write $ S_n^+ $ for the quantum permutation group on $ n $ letters. According to the definition of Wang, the quantum group $ S_n^+ $ is the universal compact quantum group acting on the abelian $ C^* $-algebra $ \mathbb{C}^n $. If one replaces $ \mathbb{C}^n $ by a general finite dimensional $ C^* $-algebra, one has to add the data of a state and restrict to state-preserving actions in the definition of quantum automorphism groups. Indeed, the choice of state is important in various respects. This is illustrated, for instance, by the work of De Rijdt and Vander Vennet on monoidal equivalences among quantum automorphism groups \cite{dRV}. \\ The aim of the present paper is to compute the $ K $-theory of quantum automorphism groups. Our general strategy follows the ideas in \cite{Voigtbcfo}, which in turn build on methods from the Baum-Connes conjecture, formulated in the language of category theory following Meyer and Nest \cite{MNtriangulated}. In fact, the main result of \cite{Voigtbcfo} implies rather easily that the appropriately defined assembly map for duals of quantum automorphism groups is an isomorphism. The main additional ingredient, discussed further below, is the construction of suitable resolutions, entering the left hand side of the assembly map in the framework of \cite{MNtriangulated}. \\ The reason why this is more tricky than in \cite{Voigtbcfo} is that quantum automorphism groups have torsion. At first sight, the presence of torsion may appear surprising because these quantum groups behave like free groups in many respects. Indeed, the way in which torsion enters the picture is different from what happens for classical discrete groups. Therefore quantum automorphism groups provide an interesting class of examples also from a conceptual point of view. Indeed, a better understanding of quantum torsion seems crucial in order to go beyond the class of quantum groups studied in the spirit of the Baum-Connes conjecture so far \cite{MNcompact}, \cite{Voigtbcfo}, \cite{VVfreeu}. We have therefore included some basic considerations on torsion in discrete quantum groups in this paper. \\ From our computations discussed below one can actually see rather directly the effect of torsion on the level of $ K $-theory. In particular, the $ K $-groups of monoidally equivalent quantum automorphism groups can differ quite significantly due to minor differences in their torsion structure. Our results also have some direct operator algebraic consequences, most notably, they imply that the reduced $ C^* $-algebras of functions on quantum permutation groups can be distinguished by $ K $-theory. \\ Let us now explain how the paper is organised. 
In section \ref{secqg} we collect some definitions and facts from the theory of compact quantum groups and fix our notation. Section \ref{secqut} contains more specific preliminaries on quantum automorphism groups and their actions. In section \ref{sectorsion} we collect some basic definitions and facts regarding torsion in discrete quantum groups. In the quantum case, this is studied most efficiently in terms of ergodic actions of the dual compact quantum groups, and our setup generalises naturally previous considerations by Meyer and Nest \cite{MNcompact}, \cite{Meyerhomalg2}. Finally, section \ref{seckqaut} contains our main results. \\ Let us conclude with some remarks on notation. We write $ \LH(\E) $ for the algebra of adjointable operators on a Hilbert module $ \E $. Moreover $ \KH(\E) $ denotes the algebra of compact operators. The closed linear span of a subset $ X $ of a Banach space is denoted by $ [X] $. Depending on the context, the symbol $ \otimes $ denotes either the tensor product of Hilbert spaces, the spatial tensor product of $ C^* $-algebras, or the exterior tensor product of Hilbert modules. \section{Compact quantum groups} \label{secqg} In this preliminary section we collect some definitions from the theory of compact quantum groups and fix our notation. We will mainly follow the conventions in \cite{Voigtbcfo} as far as general quantum group theory is concerned. \\ Let us start with the following definition. \begin{definition} \label{defcqg} A compact quantum group $ G $ is given by a unital Hopf $ C^* $-algebra $ C(G) $, that is, a unital $ C^* $-algebra $ C(G) $ together with a unital $ * $-homomorphism $ \Delta: C(G) \rightarrow C(G) \otimes C(G) $, called comultiplication, such that $$ (\Delta \otimes \id) \Delta = (\id \otimes \Delta) \Delta $$ and $$ [(C(G) \otimes 1) \Delta(C(G))] = C(G) \otimes C(G) = [(1 \otimes C(G)) \Delta(C(G))]. $$ \end{definition} For every compact quantum group there exists a Haar state, namely a state $ \phi: C(G) \rightarrow \mathbb{C} $ satisfying the invariance conditions $ (\id \otimes \phi)\Delta(f) = \phi(f) 1 = (\phi \otimes \id)\Delta(f) $ for all $ f \in C(G) $. The image of $ C(G) $ in the GNS-representation of $ \phi $ is denoted $ C^\red(G) $, and called the reduced $ C^* $-algebra of functions on $ G $. We will write $ L^2(G) $ for the GNS-Hilbert space of $ \phi $, and notice that the GNS-representation of $ C^\red(G) $ on $ L^2(G) $ is faithful. \\ A unitary representation of $ G $ on a Hilbert space $ \H $ is a unitary element $ U \in M(C^\red(G) \otimes \KH(\H)) = \LH(C^\red(G) \otimes \H) $ such that $ (\Delta \otimes \id)(U) = U_{13} U_{23} $. In analogy with the classical theory for compact groups, every unitary representation of a compact quantum group $ G $ is completely reducible, and irreducible representations are finite dimensional. We write $ \Irr(G) $ for the set of equivalence classes of irreducible unitary representations of $ G $. The linear span of matrix coefficients of all unitary representations of $ G $ forms a dense Hopf $ * $-algebra $ \Poly(G) $ of $ C^\red(G) $. \\ The full $ C^* $-algebra $ C^\max(G) $ of functions on $ G $ is the universal $ C^* $-completion of $ \Poly(G) $. It admits a comultiplication as well, satisfying the density conditions in definition \ref{defcqg}. The quantum group $ G $ can be equivalently described in terms of $ C^\max(G) $ or $ C^\red(G) $, or in fact, using $ \Poly(G) $. 
One says that $ G $ is coamenable if the canonical quotient map $ C^\max(G) \rightarrow C^\red(G) $ is an isomorphism. In this case we will simply write again $ C(G) $ for this $ C^* $-algebra. By slight abuse of notation, we will also write $ C(G) $ if a statement holds for both $ C^\max(G) $ and $ C^\red(G) $. \\ The regular representation of $ G $ is the representation of $ G $ on $ L^2(G) $ corresponding to the multiplicative unitary $ W \in M(C^\red(G) \otimes \KH(L^2(G))) $ determined by $$ W^*(\Lambda(f) \otimes \Lambda(g)) = (\Lambda \otimes \Lambda)(\Delta(g)(f \otimes 1)), $$ where $ \Lambda(f) \in L^2(G) $ is the image of $ f \in C^\max(G) $ under the GNS-map. The comultiplication of $ C^\red(G) $ can be recovered from $ W $ by the formula $$ \Delta(f) = W^*(1 \otimes f) W. $$ One defines the algebra of functions $ C_0(\hat{G}) $ on the dual discrete quantum group $ \hat{G} $ by $$ C_0(\hat{G}) = [(\LH(L^2(G))_* \otimes \id)(W)], $$ together with the comultiplication $$ \hat{\Delta}(x) = \hat{W}^* (1 \otimes x) \hat{W} $$ for $ x \in C_0(\hat{G}) $, where $ \hat{W} = \Sigma W^* \Sigma $. We remark that there is no need to distinguish between $ C_0^\max(\hat{G}) $ and $ C_0^\red(\hat{G}) $ in the discrete case. \\ Since we are following the conventions of Kustermans and Vaes \cite{KVLCQG}, there is a flip map built into the above definition of $ \hat{\Delta} $, so that the comultiplication of $ C_0(\hat{G}) $ corresponds to the opposite multiplication of $ C^\red(G) $. This is a natural choice in various contexts, but it is slightly inconvenient when it comes to Takesaki-Takai duality. We will write $ \check{G} $ for $ C_0(\hat{G})^\cop $, that is, for the Hopf $ C^* $-algebra $ C_0(\hat{G}) $ equipped with the opposite comultiplication $ \hat{\Delta}^\cop = \sigma \hat{\Delta} $, where $ \sigma $ denotes the flip map. By slight abuse of terminology, we shall refer to both $ \check{G} $ and $ \hat{G} $ as the dual quantum group of $ G $, but in the sequel we will always work with $ \check{G} $ instead of $ \hat{G} $. According to Pontrjagin duality, the double dual of $ G $ in either of the two conventions is canonically isomorphic to $ G $. \\ An action of a compact quantum group $ G $ on a $ C^* $-algebra $ A $ is a coaction of $ C(G) $ on $ A $, that is, an injective nondegenerate $ * $-homomorphism $ \alpha: A \rightarrow M(C(G) \otimes A) $ such that $ (\Delta \otimes \id) \alpha = (\id \otimes \alpha) \alpha $ and $ [(C(G) \otimes 1) \alpha(A)] = C(G) \otimes A $. In a similar way one defines actions of discrete quantum groups, or in fact arbitrary locally compact quantum groups. We will call a $ C^* $-algebra equipped with a coaction of $ C^\red(G) $ a $ G $-$ C^* $-algebra. Moreover we write $ G \Alg $ for the category of all $ G $-$ C^* $-algebras and equivariant $ * $-homomorphisms. \\ The reduced crossed product $ G \ltimes_\red A $ of a $ G $-$ C^* $-algebra $ A $ is the $ C^* $-algebra $$ G \ltimes_\red A = [(C_0(\check{G}) \otimes 1)\alpha(A)] $$ The crossed product is equipped with a canonical dual action of $ \check{G} $, which turns it into a $ \check{G} $-$ C^* $-algebra. Moreover, one has the following analogue of the Takesaki-Takai duality theorem \cite{BSUM}. \begin{theorem} \label{TTduality} Let $ G $ be a regular locally compact quantum group and let $ A $ be a $ G $-$ C^* $-algebra. Then there is a natural isomorphism $$ \check{G} \ltimes_\red G \ltimes_\red A \cong \KH(L^2(G))) \otimes A $$ of $ G $-$ C^* $-algebras. 
\end{theorem} We will use Takesaki-Takai duality only for discrete and compact quantum groups, and in this setting regularity is automatic. At some points we will also use the full crossed product $ G \ltimes_\max A $ of a $ G $-$ C^* $-algebra $ A $, and we refer to \cite{NVpoincare} for a review of its definition in terms of its universal property for covariant representations. \section{Quantum automorphism groups} \label{secqut} In this section we review some basic definitions and results on quantum automorphism groups of finite dimensional $ C^* $-algebras and fix our notation. We refer to \cite{Wangqsymmetry}, \cite{Banicageneric}, \cite{Banicafusscatalan} for more background on quantum automorphism groups. \\ Let us start with the definition of the quantum automorphism group of a finite dimensional $ C^* $-algebra $ A $, compare \cite{Wangqsymmetry} \cite{Banicageneric}. If $ \mu: A \otimes A \rightarrow A $ denotes the multiplication map then a faithful state $ \omega $ on $ A $ is called a $ \delta $-form for $ \delta > 0 $ if $ \mu \mu^* = \delta^2 \id $ with respect to the Hilbert space structures on $ A $ and $ A \otimes A $ implemented by the GNS-constructions for $ \omega $ and $ \omega \otimes \omega $, respectively. \begin{definition} \label{defqut} Let $ A $ be a finite dimensional $ C^* $-algebra and let $ \omega $ be a $ \delta $-form on $ A $ for some $ \delta > 0 $. The quantum automorphism group $ \Qut(A) = \Qut(A, \omega) $ is the universal compact quantum group acting on $ A $ such that $ \omega $ is preserved. \\ That is, if $ G $ is any compact quantum group together with a coaction $ \delta: A \rightarrow C^\max(G) \otimes A $ then there exists a unique morphism of quantum groups $ \iota: G \rightarrow \Qut(A) $ such that the diagram $$ \xymatrix{ A \ar@{->}[r]^{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \alpha} \ar@{->}[rd]_{\delta} & C^\max(\Qut(A)) \otimes A \ar@{->}[d]^{\iota^* \otimes \id} \\ & C^\max(G) \otimes A } $$ is commutative and $$ (\id \otimes \omega)\alpha(a) = \omega(a) 1 $$ for all $ a \in A $. Here $ \iota^*: C^\max(\Qut(A)) \rightarrow C^\max(G) $ denotes the homomorphism of Hopf $ C^* $-algebras corresponding to the morphism $ \iota $. \end{definition} A basic result of the theory is that the compact quantum group $ \Qut(A, \omega) $ indeed exists \cite{Wangqsymmetry}, see also \cite{Mrozinskiso3deformations}. As indicated in definition \ref{defqut}, we will typically omit the state $ \omega $ from our notation and write $ \Qut(A) $ instead of $ \Qut(A,\omega) $, although $ \omega $ is an important part of the data. The notation for quantum automorphism groups used in \cite{Wangqsymmetry} is $ A_{aut}(B) = C^\max(\Qut(B)) $. \\ We remark that the matrix coefficients of an action of a compact quantum group $ G $ on a finite dimensional $ C^* $-algebra $ A $ are contained in the Hopf $ * $-algebra $ \Poly(G) $. In particular, any coaction $ \alpha: A \rightarrow C^\max(G) \otimes A $, where $ G $ is a compact quantum group, comes from a Hopf algebraic coaction $ \alpha: A \rightarrow \Poly(G) \otimes A $. Using this fact, one can study quantum automorphism groups from a more algebraic perspective, see for instance \cite{Mrozinskiso3deformations}. The universal $ C^* $-algebras of quantum automorphism groups can be defined explicitly in terms of generators and relations \cite{Wangqsymmetry}. \\ Let us take a closer look at the special case of quantum permutation groups. 
By definition, the quantum permutation group $ S_n^+ $ is the quantum automorphism group of $ A = \mathbb{C}^n $ with the trace corresponding to the uniform probability measure on $ n $ points, which is a $ \delta $-form for $ \delta = \sqrt{n} $. In order to describe $ C^\max(S_n^+) $ let us recall some terminology. If $ B $ is any unital $ * $-algebra then a matrix $ u = (u_{ij}) \in M_n(B) $ is called a magic unitary if all entries $ u_{ij} $ are projections, and on each row and column of $ u $ these projections are mutually orthogonal and sum up to $ 1 $. Explicitly, this means \begin{equation*} u_{ij}^* = u_{ij} = u_{ij}^2 \end{equation*} for all $ 1 \leq i,j \leq n $ and \begin{equation*} \sum_{i = 1}^n u_{ik} = 1, \qquad \sum_{i = 1}^n u_{ki} = 1 \end{equation*} for all $ k $. These relations imply in particular that the matrix $ u $ and its transpose $ u^t $ are both unitary. \begin{prop} The full $ C^* $-algebra of functions on the quantum permutation group $ S_n^+ = \Qut(\mathbb{C}^n) $ is the universal unital $ C^* $-algebra $ C^\max(S_n^+) $ with generators $ u_{ij} $ for $ 1 \leq i,j \leq n $ such that $ u = (u_{ij}) \in M_n(C^\max(S_n^+)) $ is a magic unitary matrix. \end{prop} The comultiplication $ \Delta: C^\max(S_n^+) \rightarrow C^\max(S_n^+) \otimes C^\max(S_n^+) $ is defined by the formula $$ \Delta(u_{ij}) = \sum_{k = 1}^n u_{ik} \otimes u_{kj} $$ on generators. The defining coaction $ \alpha: \mathbb{C}^n \rightarrow C^\max(S_n^+) \otimes \mathbb{C}^n $ is given by $$ \alpha(e_i) = \sum_{j = 1}^n u_{ij} \otimes e_j, $$ where $ e_1, \dots, e_n $ are the minimal projections in $ \mathbb{C}^n $. \\ From the fact that $ S_n^+ $ is the universal compact quantum group acting on $ A = \mathbb{C}^n $ we obtain a morphism of quantum groups $ S_n \rightarrow S_n^+ $, that is, a unital $ * $-homomorphism $ C^\max(S_n^+) \rightarrow C(S_n) $ compatible with comultiplications. Here $ S_n $ is the symmetric group acting by permutations on $ A $. In fact, every character of $ C^\max(S_n^+) $ is induced from a character of $ C(S_n) $, and $ C(S_n) $ is the abelianisation of $ C^\max(S_n^+) $. \\ The structure of quantum permutation groups $ S_n^+ $ is well-understood for small values of $ n $, for the following fact compare \cite{Banicasmallmetric}. \begin{lemma} \label{s123plusstructure} For $ n = 1,2,3 $ the canonical morphism of quantum groups $ S_n \rightarrow S_n^+ $ is an isomorphism. \end{lemma} Clearly, this means in particular that $ S_n^+ $ is coamenable for these values of $ n $. \\ For $ n = 4 $ the natural morphism $ S_n \rightarrow S_n^+ $ is no longer an isomorphism. In fact, the $ C^* $-algebra $ C^\max(S_4^+) $ is infinite dimensional. The following result due to Banica and Bichon \cite{BBfourpoints} describes the structure of $ S_4^+ $. \begin{theorem} \label{snplus4structure} There is an isomorphism of quantum groups $ S_4^+ \cong SO_{-1}(3) $, where the latter is obtained from $ SU_{-1}(3) $ by making the fundamental matrix orthogonal. The quantum group $ SO_{-1}(3) $ is a $ 2 $-cocycle twist of the classical group $ SO(3) $. \end{theorem} We note that since the classical group $ SO(3) $ is a coamenable compact quantum group, the same holds for its cocycle twist $ S_4^+ $, see \cite{Banicafusion}. For $ n \geq 5 $ the quantum groups $ S_n^+ $ are no longer coamenable, that is, the reduced $ C^* $-algebras $ C^\red(S_n^+) $ fail to be nuclear. \\ Still, the $ C^* $-algebras $ C^\red(S_n^+) $ are exact for all $ n $. 
This can be shown using the monoidal equivalences among quantum automorphism groups \cite{dRV} to be discussed below, by invoking a general observation of Vaes and Vergnioux \cite{VaesVergnioux}, namely that exactness of the reduced $ C^* $-algebras of functions on compact quantum groups is preserved under monoidal equivalences, compare \cite{Brannanquantumautomorphism}. \\ According to lemma \ref{s123plusstructure}, all quantum automorphisms for $ C^* $-algebras of dimension at most $ 3 $ come from classical automorphisms. In the sequel we will therefore restrict attention to finite dimensional $ C^* $-algebras of dimension at least $ 4 $. \\ In the case of dimension $ 4 $, the only $ C^* $-algebra to consider apart from $ \mathbb{C}^4 $ is $ M_2(\mathbb{C}) $. The quantum automorphism group of $ A = \mathbb{C}^4 $ is the quantum permutation group $ S_4^+ $ already mentioned above, and for $ M_2(\mathbb{C}) $, the quantum automorphism groups are determined by the following result of Soltan \cite{Soltanquantumso3}. \begin{theorem} Let $ \omega $ be a faithful state on $ M_2(\mathbb{C}) $. The quantum automorphism group $ \Qut(M_2(\mathbb{C}), \omega) $ is isomorphic to $ SO_q(3) $ for a unique $ q \in (0,1] $. Here $ SO_q(3) $ is the quantum $ SO(3) $-group of Podle\`s. \end{theorem} We emphasise that the quantum group $ SO_q(3) $ of Podle\`s \cite{Podlesspheres} is different from the quantum group $ SO_{-1}(3) $ appearing in theorem \ref{snplus4structure}. \\ Let us now return to general quantum automorphism groups. The main tool at our disposal are the monoidal equivalences exhibited by De Rijdt and Vander Vennet as follows \cite{dRV}. \begin{theorem} \label{fdmonoidaleq} Let $ A_j $ be finite dimensional $ C^* $-algebras of dimension at least $ 4 $, equipped with $ \delta_j $-forms $ \omega_j $ for $ j = 1,2 $, respectively. Then $ \Qut(A_1) $ is monoidally equivalent to $ \Qut(A_2) $ iff $ \delta_1 = \delta_2 $. \end{theorem} This result will play a crucial role in our analysis, it implies in particular that any quantum automorphism group is monoidally equivalent to $ SO_q(3) $ for some $ q \in (0,1] $. Theorem \ref{fdmonoidaleq} shows also that the quantum permutation groups $ S_n^+ = \Qut(\mathbb{C}^n) $ are pairwise monoidally inequivalent for $ n \geq 4 $. \section{Torsion in discrete quantum groups} \label{sectorsion} In this section we discuss some definitions and facts related to torsion in discrete quantum groups. This will be useful in our analysis of the equivariant Kasparov theory of quantum automorphism groups. The study of torsion in duals of compact groups has already been carried out in \cite{MNcompact}, and our definitions are by and large motivated from this. \\ Firstly, we have to explain what we mean by torsion. Let $ \check{G} $ be the dual of a discrete quantum group $ G $, and assume that $ U \in C^\red(\check{G}) \otimes \KH(V) $ is a unitary representation of $ \check{G} $ on the finite dimensional Hilbert space $ V $. Then we obtain a coaction $ \ad_U: \KH(V) \rightarrow C^\red(\check{G}) \otimes \KH(V) $ by the formula $$ \ad_U(T) = U^* (\id \otimes T) U, $$ which turns $ \KH(\H) $ into a $ \check{G} $-$ C^* $-algebra. Coactions of this form are precisely the coactions on simple matrix algebras which are $ \check{G} $-equivariantly Morita equivalent to the trivial coaction on $ \mathbb{C} $. \\ The idea is, roughly, that torsion in $ G $ is encoded in coactions on finite dimensional $ C^* $-algebras which are not of the above form. 
In order to make this precise, recall that a unital $ \check{G} $-$ C^* $-algebra $ B $ with coaction $ \beta: B \rightarrow C(\check{G}) \otimes B $ is called ergodic iff its fixed point subalgebra $$ B^\beta = \{b \in B \mid \beta(b) = 1 \otimes b \} $$ is equal to $ \mathbb{C} $. \\ The following definition is motivated by the considerations regarding torsion-free quantum groups in \cite{Meyerhomalg2}. \begin{definition} \label{deftorsion} Let $ G $ be a discrete quantum group. A finite dimensional ergodic $ \check{G} $-$ C^* $-algebra $ B $ is called a torsion coaction of $ G $. Such a coaction is called nontrivial if $ B $ is not $ \check{G} $-equivariantly Morita equivalent to the trivial $ \check{G} $-$ C^* $-algebra $ \mathbb{C} $. \\ The quantum group $ G $ is called torsion-free if $ G $ does not admit any nontrivial torsion coactions. \end{definition} It is straightforward to check that a quantum group $ G $ is torsion-free iff for every finite dimensional $ \check{G} $-$ C^*$-algebra $ B $ there are finite dimensional unitary corepresentations $ U_j \in C^\red(\check{G}) \otimes \KH(V_j) $ such that $$ B \cong \KH(V_1) \oplus \cdots \oplus \KH(V_l) $$ as $ \check{G} $-$ C^* $-algebras, where each matrix block $ \KH(V_j) $ is equipped with the adjoint coaction $ \ad_{U_j} $ as explained above. That is, the notion of torsion-freeness in definition \ref{deftorsion} is compatible with the terminology used in \cite{Meyerhomalg2}, \cite{Voigtbcfo}. \\ Assume that $ G $ is a discrete quantum group, and let $ Q $ be a Galois object for $ C^*(H)^\cop = C(\check{H}) $ where $ H \subset G $ is a finite quantum subgroup. That is, $ Q $ is a $ C^* $-algebra equipped with an ergodic coaction of $ C(\check{H}) $ of full quantum multiplicity, compare for instance \cite{BdRV}, \cite{Decommergaloisalg}. Then $ Q $ is a torsion coaction of $ G $ since $ Q $ is clearly finite dimensional, and the ergodic action of $ \check{H} $ on $ Q $ is naturally an ergodic action of $ \check{G} $ as well. \\ The following proposition shows that this class of coactions exhausts all torsion coactions in the case of classical discrete groups. \begin{prop} \label{discretegrouptorsion} Let $ G $ be a discrete group. Then every torsion coaction of $ G $ is $ \check{G} $-equivariantly Morita equivalent to a $ \check{G} $-$ C^* $-algebra of the form $ C^*_\omega(H) $ for some finite subgroup $ H \subset G $ and a normalised cocycle $ \omega \in Z^2(H, U(1)) $. \end{prop} \proof Let $ \beta: B \rightarrow C^*_\red(G) \otimes B $ be a torsion coaction and consider the corresponding spectral decomposition $$ B = \bigoplus_{s \in G} B_s $$ of $ B $, where $ B_s = \{b \in B \mid \beta(b) = s \otimes b \} $. Ergodicity means that the component of the identity element $ e \in G $ is $ B_e = \mathbb{C} $. Observe next that if $ b \in B_s $ then $ b^* \in B_s^{-1} $ and $ b^* b \in B_e $, so that $ b^* b = \lambda > 0 $ is invertible. It follows that all spectral subspaces are one-dimensional and spanned by invertible elements. \\ For every $ s \in G $ such that $ B_s $ is nonzero we may choose a unitary $ \delta_s \in B_s $, and without loss of generality we may pick $ \delta_e = 1 $. Clearly, the elements $ s \in G $ such that $ B_s \neq 0 $ form a finite subgroup $ H $ of $ G $. Moreover we have $ \delta_s \delta_t = \omega(s,t) \delta_{st} $ for $ \omega(s,t) \in U(1) $. 
It is straightforward to check that this yields a normalised $ 2 $-cocycle $ \omega \in Z^2(H, U(1)) $, and we conclude that $ B $ is isomorphic to $ C^*_\omega(H) $, the twisted group $ C^* $-algebra of $ H $. \qed \\ As a corollary of proposition \ref{discretegrouptorsion} we see in particular that the notion of torsion-freeness introduced in definition \ref{deftorsion} agrees with the usual terminology for discrete groups. That is, a discrete group $ G $ is torsion-free in the sense of definition \ref{deftorsion} iff it has no nontrivial finite subgroups. \\ In a sense, the torsion coactions obtained from Galois objects for finite quantum subgroups are the most obvious examples of torsion coactions. If one goes beyond classical discrete groups then more exotic torsion coactions can appear. \\ In particular, as shown in \cite{MNcompact}, this already happens for duals of compact groups. If $ G $ is a compact group then a torsion coaction of the dual discrete quantum group $ \check{G} $ is nothing but an ergodic action of $ G $ on a finite dimensional $ C^* $-algebra $ B $. Such actions factorise over a Lie group quotient $ K $ of $ G $ because the group $ \Aut(B) $ of $ * $-automorphisms of $ B $ is a compact Lie group. Moreover, ergodicity implies that $ B $ must consist of matrix blocks of the same size. That is, we have $ B \cong M_k(\mathbb{C})^{\oplus n} $ for some $ k,n \in \mathbb{N} $. The subgroup $ L \subset K $ preserving a fixed matrix block of $ B $ contains the connected component $ K_{(0)} $ of $ K $, and $ B $ is isomorphic to the induced $ C^* $-algebra $ \ind_L^K(M_k(\mathbb{C})) $ corresponding to the action of $ L $ on $ M_k(\mathbb{C}) $. In other words, we see in particular that all proper homogeneous $ \check{G} $-$ C^* $-algebras considered in \cite{MNcompact} arise from torsion coactions of $ \check{G} $. \\ It is sometimes useful to distinguish between two basic types of torsion coactions. Let us say that a projective torsion coaction of a discrete quantum group $ G $ is a torsion coaction whose underlying $ C^* $-algebra is simple, and which is not equivariantly Morita equivalent to $ \mathbb{C} $. Moreover, let us refer to all torsion coactions on non-simple $ C^* $-algebras as permutation torsion coactions. Clearly, the quantum group $ G $ is torsion-free if it has neither projective torsion nor permutation torsion. The terminology is motivated by the fact that one source of permutation torsion for $ G $ comes from finite quantum subgroups and their regular coactions. Similarly, projective torsion arises from projective representations of $ \check{G} $. \\ Roughly speaking, the following lemma shows that permutation torsion for $ G $ is related to the connectedness of $ \check{G} $. \begin{lemma} The dual of a compact group $ G $ has no permutation torsion iff $ G $ is connected. \end{lemma} \proof If $ G $ is connected, then every action of $ G $ on $$ B = M_{k_1}(\mathbb{C}) \oplus \cdots \oplus M_{k_n}(\mathbb{C}) $$ must preserve the individual matrix blocks, and ergodicity implies that $ n = 1 $. Conversely, assume that $ G $ is not connected. Then the quotient $ G/G_{(0)} $ of $ G $ by its connected component is a nontrivial profinite group. If $ F $ is a nontrivial finite quotient of $ G/G_{(0)} $, then the permutation action of $ F $ on the commutative $ C^* $-algebra $ C(F) $ induces an ergodic action of $ G $ via the quotient maps $ G \rightarrow G/G_{(0)} \rightarrow F $. Hence $ G $ has permutation torsion in this case.
\qed \\ Assume that $ G $ and $ H $ are discrete quantum groups such that $ \check{G} $ and $ \check{H} $ are monoidally equivalent. Then the general correspondence between actions of $ \check{G} $ and $ \check{H} $ shows that torsion coactions of $ G $ and $ H $ are in a bijective correspondence \cite{dRV}, \cite{Voigtbcfo}. For duals of quantum automorphism groups it is therefore quite easy to determine all torsion coactions up to equivariant Morita equivalence. \begin{lemma} \label{quttorsion} Let $ \Qut(A, \omega) $ be the quantum automorphism group of a $ C^* $-algebra $ A $ of dimension at least $ 4 $ with respect to the $ \delta $-form $ \omega $. Then the trivial coaction on $ \mathbb{C} $ and the defining coaction on $ A $ are the only torsion coactions of the dual of $ \Qut(A, \omega) $ up to equivariant Morita equivalence. \end{lemma} \proof According to the results in \cite{dRV}, the quantum group $ \Qut(A,\omega) $ is monoidally equivalent to $ H = SO_q(3) $ for some $ q \in (0,1] $. If we write $ G = SU_q(2) $, then we have $ C(H) \subset C(G) $, and if $ B $ is a finite dimensional ergodic $ H $-$ C^* $-algebra, it is also naturally an ergodic $ G $-$ C^* $-algebra. According to \cite{Voigtbcfo}, the quantum group $ \check{G} $ is torsion-free. It follows that $ B \cong \KH(V(n)) $ for some $ n \in \frac{1}{2} \mathbb{N}_0 \cong \Irr(SU_q(2)) $. If $ n $ is an integer then $ B $ is $ H $-equivariantly Morita equivalent to $ \mathbb{C} $. In this case the corresponding ergodic action of $ \Qut(A, \omega) $ is Morita equivalent to the trivial action on $ \mathbb{C} $. Otherwise $ B $ is $ H $-equivariantly Morita equivalent to $ \KH(V(1/2)) $. The $ H $-$ C^* $-algebra $ \KH(V(1/2)) $ corresponds to $ A $ under the monoidal equivalence, because up to isomorphism it is the only ergodic $ H $-$ C^* $-algebra with the correct spectral decomposition. \qed \\ In particular, lemma \ref{quttorsion} shows that projective torsion may be turned into permutation torsion under monoidal equivalence, and vice versa. \\ For the applications to discrete quantum groups we have to study the crossed products of torsion coactions, and it will be convenient to use the following terminology, compare \cite{MNcompact}. \begin{definition} Let $ G $ be a discrete quantum group. A $ G $-$ C^* $-algebra is called proper almost homogeneous if it is $ G $-equivariantly Morita equivalent to the crossed product of some torsion coaction of $ G $. \end{definition} Notice that we do not need to distinguish between reduced or maximal crossed products here since the dual of $ G $ is compact. \\ A guiding example for proper almost homogeneous algebras arises from the torsion coaction $ B = C(\check{H}) $ for some finite quantum subgroup $ H \subset G $. In this case the proper almost homogeneous algebra $ G \ltimes B $ is $ G $-equivariantly Morita equivalent to $ C_0(G/H) $, which should be viewed as the prototypical example of a proper homogeneous action of $ G $. \\ According to proposition \ref{discretegrouptorsion}, proper almost homogeneous actions of classical discrete groups are indeed essentially determined by homogeneous spaces $ G/H $ where $ H $ is finite. Notice that in the presence of a nontrivial cocycle $ \omega \in Z^2(H, U(1)) $, the crossed product $ P = \check{G} \ltimes C^*_\omega(H) $ is a proper $ G $-$ C^* $-algebra over $ C_0(G/H) $ in the sense of Kasparov. \\ Assume that $ G $ is a locally compact quantum group and let $ A $ be a $ G $-$ C^* $-algebra.
Let us say that $ G $ acts amenably on $ A $ if the canonical quotient map $ G \ltimes_\max A \rightarrow G \ltimes_\red A $ is an isomorphism. In particular, according to this terminology, $ G $ is amenable in the sense that $ C^*_\max(G) \cong C^*_\red(G) $ iff $ G $ acts amenably on $ \mathbb{C} $. \begin{lemma} \label{properamenable} Let $ G $ be a discrete quantum group. If $ P $ is a proper almost homogeneous $ G $-$ C^* $-algebra then $ G $ acts amenably on $ P $. \end{lemma} \proof Without loss of generality we may assume that $ P = \check{G} \ltimes B $ for a finite dimensional $ C^* $-algebra $ B $ with an ergodic action of $ \check{G} $. It is enough to observe that the algebraic crossed product $ G \ltimes_\alg \check{G} \ltimes_\alg B $, taken in the framework of algebraic quantum groups \cite{vDadvances}, is dense inside both $ G \ltimes_\max P $ and $ G \ltimes_\red P $. Indeed, the algebraic crossed product is isomorphic to the algebraic tensor product of $ B $ with an algebra of possibly infinite matrices, and the $ C^* $-norm on such an algebra is uniquely determined. Therefore the quotient map $ G \ltimes_\max P \rightarrow G \ltimes_\red P $ is an isomorphism. \qed \section{The $ K $-theory of quantum automorphism groups} \label{seckqaut} In this section we compute the $ K $-theory of quantum automorphism groups. The basic strategy is the same as for free quantum groups \cite{Voigtbcfo}, \cite{VVfreeu}, namely, in a first step it will be shown that quantum automorphism groups satisfy a strong form of the Baum-Connes conjecture. The second step consists of the actual computation; essentially, this amounts to computing the left hand side of the assembly map. \\ Let $ G = \Qut(A, \omega) $ be the quantum automorphism group of a finite dimensional $ C^* $-algebra $ A $ with respect to a $ \delta $-form $ \omega $. In the sequel we will always assume that the dimension of $ A $ is at least $ 4 $. We will also fix $ q \in (0,1] $ such that $ H = SO_q(3) $ is monoidally equivalent to $ G $, see \cite{dRV}. \\ Firstly, we have to analyse the structure of the equivariant $ KK $-theory of $ G $. For this we need the language of triangulated categories, compare \cite{MNtriangulated}, \cite{NVpoincare}, \cite{Voigtbcfo}. More precisely, we consider the category $ KK^G $ with objects all separable $ G $-$ C^* $-algebras, and the morphism set between objects $ B $ and $ C $ is given by the equivariant Kasparov group $ KK^G(B,C) $. Composition of morphisms is given by the Kasparov product. The category $ KK^G $ is triangulated with translation automorphism $ \Sigma: KK^G \rightarrow KK^G $ given by the suspension $ \Sigma B = C_0(\mathbb{R}, B) $ of a $ G $-$ C^* $-algebra $ B $. The exact triangles are all diagrams isomorphic to diagrams of the form $$ \xymatrix{ \Sigma C \;\; \ar@{->}[r] & C_f \ar@{->}[r] & B \ar@{->}[r]^f & C } $$ where $ C_f $ denotes the mapping cone of an equivariant $ * $-homomorphism $ f: B \rightarrow C $. For more information we refer to \cite{MNtriangulated}, \cite{NVpoincare}, \cite{Voigtbcfo}. \\ Let $ \T_G \subset KK^G $ be the full subcategory consisting of all trivial $ G $-$ C^* $-algebras, and all $ G $-$ C^* $-algebras of the form $ A \otimes C $, equipped with the action coming from the defining action of $ G $ on the first tensor factor. These actions should be thought of as crossed products of compactly induced actions for the dual discrete quantum group $ \check{G} $.
We write $ \bra \T_G \ket $ for the localising subcategory of $ KK^G $ generated by $ \T_G $. Accordingly, we let $ \bra \CI_{\check{G}} \ket \subset KK^{\check{G}} $ be the full subcategory corresponding to $ \bra \T_G \ket $ under Baaj-Skandalis duality \cite{BSUM}, that is, under the equivalence $ KK^G \rightarrow KK^{\check{G}} $ of triangulated categories given by taking reduced crossed products. In the sequel we will however avoid working in $ KK^{\check{G}} $ for most of the time, because the constructions are somewhat clearer in the compact picture. \\ Since $ H = SO_q(3) $ is monoidally equivalent to $ G $, we have an equivalence of triangulated categories between $ KK^H $ and $ KK^G $, see \cite{Voigtbcfo}. Notice that the category $ \T_G $ corresponds to $ \T_H $ under this equivalence. Therefore, it largely suffices to study the structure of $ KK^H $. \\ Recall that the quantum group $ H = SO_q(3) $ is closely related to $ K = SU_q(2) $ since $ C(SO_q(3)) \subset C(SU_q(2)) $. We will identify the set $ \Irr(SU_q(2)) $ of equivalence classes of irreducible representations of $ SU_q(2) $ with $ \frac{1}{2} \mathbb{N}_0 $, and refer to the irreducible representation $ V(n) $ of $ SU_q(2) $ corresponding to $ n \in \frac{1}{2} \mathbb{N}_0 $ as the representation of spin $ n $. In the Peter-Weyl picture, the algebra $ C(SO_q(3)) $ is the norm closure of the space of matrix elements of all integral spin representations. Accordingly, the set $ \Irr(SO_q(3)) $ of irreducible representations of $ SO_q(3) $ identifies with $ \mathbb{N}_0 $. We shall write $ C^*_\omega(SO_q(3)) $ for the quotient of $ C^*(SU_q(2)) $ corresponding to the remaining part of $ \Irr(SU_q(2)) $. That is, $ C^*_\omega(SO_q(3)) $ is the norm closure of the matrix algebras associated to representations of half-integral spin. By construction, we have a direct sum decomposition $$ C^*(SU_q(2)) = C^*(SO_q(3)) \oplus C^*_\omega(SO_q(3)), $$ and this decomposition is compatible with the canonical coactions of $ C^*(SO_q(3)) $ induced from the comultiplication of $ C^*(SU_q(2)) $. \begin{lemma} \label{soq3morita} There is a $ C^*(SO_q(3)) $-colinear Morita equivalence $$ C^*_\omega(SO_q(3)) \sim SO_q(3) \ltimes \KH(V(1/2)) $$ where $ V(1/2) $ is the defining representation of $ SU_q(2) $. \end{lemma} \proof We simply have to cut down the $ C^*(SU_q(2)) $-colinear Morita equivalence between $ C^*(SU_q(2)) $ and $ SU_q(2) \ltimes \KH(V(1/2)) $ implemented by the imprimitivity bimodule $$ C^*(SU_q(2)) \ltimes V(1/2) = [(C^*(SU_q(2)) \otimes 1) \lambda(V(1/2))] \subset \LH(L^2(SU_q(2)) \otimes V(1/2)). $$ More precisely, split the Hilbert space $ L^2(SU_q(2)) = \H_0 \oplus \H_1 $ into the direct sum of $ \H_0 = L^2(SO_q(3)) $ and its orthogonal complement $ \H_1 $. Then we obtain a Hilbert $ C^*_\omega(SO_q(3)) $-module $$ [(C^*(SO_q(3)) \otimes 1) \lambda(V(1/2)) (C^*_\omega(SO_q(3)) \otimes 1)] \subset \LH(\H_1, \H_0) \otimes V(1/2), $$ and it is straightforward to check that this module implements a $ C^*(SO_q(3)) $-colinear Morita equivalence between $ C^*_\omega(SO_q(3)) $ and $ SO_q(3) \ltimes \KH(V(1/2)) $. \qed \\ The advantage of $ SO_q(3) \ltimes \KH(V(1/2)) $ over $ C^*_\omega(SO_q(3)) $ is that it is easy to check what happens to the former under monoidal equivalences. Indeed, according to lemma \ref{quttorsion}, the $ SO_q(3) $-$ C^* $-algebra $ \KH(V(1/2)) $ corresponds to the defining action of $ G = \Qut(A, \omega) $ on $ A $.
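\\ For orientation, note that $ C^*(SU_q(2)) $ is concretely the $ c_0 $-direct sum $ \bigoplus_{n \in \frac{1}{2} \mathbb{N}_0} M_{2n+1}(\mathbb{C}) $ of matrix blocks indexed by the irreducible representations. Under the above decomposition, $ C^*(SO_q(3)) $ collects the blocks of odd size $ 2n + 1 $ for integral $ n $, and $ C^*_\omega(SO_q(3)) $ the blocks of even size for half-integral $ n $. In particular, $ K_0(C^*(SO_q(3))) $ and $ K_0(C^*_\omega(SO_q(3))) $ are free abelian groups with one generator for each integral, respectively half-integral, spin representation; this is the description underlying the identifications used further below.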
\\ Recall that a discrete quantum group $ F $ is called $ K $-amenable if the canonical map $ F \ltimes_\max B \rightarrow F \ltimes_\red B $ is an isomorphism in $ KK $ for any $ F $-$ C^* $-algebra $ B $, compare \cite{Vergniouxkam}. \\ The following theorem shows that duals of quantum automorphism groups satisfy the Baum-Connes conjecture. \begin{theorem} \label{bcqut} Let $ A $ be a finite dimensional $ C^* $-algebra of dimension $ \dim(A) \geq 4 $, and let $ G $ be the quantum automorphism group of $ A $ with respect to a $ \delta $-form on $ A $. Then we have $ KK^G = \bra \T_G \ket $. In particular, the dual of $ G $ is $ K $-amenable. \end{theorem} \proof For the first claim it is enough to check the corresponding assertion for $ H = SO_q(3) $. In this case it follows from the strong Baum-Connes property of $ K = SU_q(2) $, see \cite{Voigtbcfo}. More precisely, let $ B \in KK^{\check{H}} $ and consider the induction functor $ KK^{\check{H}} \rightarrow KK^{\check{K}} $. According to \cite{Voigtbcfo}, the algebra $ \ind_{\check{H}}^{\check{K}}(B) $ is contained in the localising subcategory of $ KK^{\check{K}} $ generated by all algebras of the form $ C_0(\check{K}) \otimes C $, where $ C $ is any $ C^* $-algebra, with the coaction given by comultiplication on the first tensor factor. Moreover, due to lemma \ref{soq3morita}, the $ \check{H} $-$ C^* $-algebra $ C_0(\check{K}) $ decomposes as $$ C_0(\check{K}) \cong (SO_q(3) \ltimes \mathbb{C}) \oplus (SO_q(3) \ltimes \KH(V(1/2))) $$ in $ KK^{\check{H}} $. Therefore it suffices to observe that $ B $ is a direct summand in $ \res^{\check{K}}_{\check{H}} \ind_{\check{H}}^{\check{K}}(B) $ as an $ \check{H} $-algebra. Using the concrete description of induced $ C^* $-algebras given in \cite{VVfreeu}, this in turn follows by considering the coaction of $ B $, viewed as a $ * $-homomorphism $ \beta: B \rightarrow \ind_{\check{H}}^{\check{K}}(B) \subset M(C_0(\hat{K}) \otimes B) $, and the map $ \check{\epsilon} \otimes \id: \ind_{\check{H}}^{\check{K}}(B) \rightarrow B $ where $ \check{\epsilon} $ denotes the counit of $ C_0(\check{K}) $. \\ The $ K $-amenability of $ \check{G} $ follows now immediately using lemma \ref{properamenable}, compare the analogous argument in \cite{Voigtbcfo}. \qed \\ If $ C $ is an $ SO_q(3) $-$ C^* $-algebra we write $ \KH(V(1/2)) \otimes C $ for the $ SO_q(3) $-algebra obtained by considering the $ SU_q(2) $-$ C^* $-algebra $ \KH(V(1/2)) \otimes C $ and observing that the coaction on $ \KH(V(1/2)) \otimes C $ takes values in $ C(SO_q(3)) \otimes \KH(V(1/2)) \otimes C $. \begin{lemma} \label{Khalfshifting} For any $ SO_q(3) $-$ C^* $-algebra $ B $ we have an $ SO_q(3) $-equivariant Morita equivalence $$ \KH(V(1/2)) \otimes \KH(V(1/2)) \otimes B \sim_M B. $$ Moreover, there exists a natural isomorphism $$ KK^{SO_q(3)}(\KH(V(1/2)) \otimes B, \KH(V(1/2)) \otimes C) \cong KK^{SO_q(3)}(B, C) $$ for all $ SO_q(3) $-$ C^* $-algebras $ B, C $. \end{lemma} \proof For the first claim observe that $$ \KH(V(1/2)) \otimes \KH(V(1/2)) \otimes B \cong \KH(V(1/2) \otimes V(1/2)) \otimes B $$ is $ SO_q(3) $-equivariantly Morita equivalent to $ B $ since $ V(1/2) \otimes V(1/2) \cong V(0) \oplus V(1) $ is an honest representation of $ SO_q(3) $. \\ The second part of the lemma follows from this. More precisely, the functor $ F: SO_q(3) \Alg \rightarrow KK^{SO_q(3)} $ given by $ F(B) = \KH(V(1/2)) \otimes B $ is homotopy invariant, stable, and split exact. 
By the universal property of equivariant $ KK $-theory \cite{NVpoincare}, it therefore induces a functor $ f: KK^{SO_q(3)} \rightarrow KK^{SO_q(3)} $. Alternatively, one can also construct $ f $ directly on the level of Kasparov cycles. Using the above Morita equivalence, one checks that $ f^2 $ is naturally isomorphic to the identity. In particular, $ f $ is a natural isomorphism. \qed \\ We need some basic calculations. Let us write $$ A = M_{k_1}(\mathbb{C}) \oplus \cdots \oplus M_{k_n}(\mathbb{C}) $$ and view it as a $ G $-$ C^* $-algebra with the defining action of $ G = \Qut(A, \omega) $. Moreover, we write $ \mathbb{C} $ for the trivial $ G $-$ C^* $-algebra. As indicated above, we may identify $ \Irr(G) \cong \mathbb{N}_0 $ and $ R(G) = \mathbb{Z}[t] $, where $ t $ corresponds to the underlying representation of $ A $, that is, $ t = V(0) + V(1) $ under the identification with $ R(SO_q(3)) = K_*(C^*(SO_q(3))) $. Similarly, we let $ R^\omega(G) = K_*(C^*_\omega(SO_q(3))) $, and identify $ R^\omega(G) = t^{1/2} \mathbb{Z}[t] $ as an $ R(G) $-module. Here the generator $ t^{1/2} $ corresponds to $ V(1/2) $. \\ The representation ring $ R(G) $ acts on the groups $ KK^G(B,C) $ in a natural way, compare \cite{Meyerhomalg2}. We have to identify these $ R(G) $-modules in a few special cases. \begin{lemma} \label{repringcomputations} Let $ G = \Qut(A, \omega) $ as above. Then we have isomorphisms \begin{align*} KK^G(\mathbb{C}, C^\red(G)) &\cong KK(\mathbb{C}, \mathbb{C}) = \mathbb{Z} \\ KK^G(A, C^\red(G)) &\cong KK(A, \mathbb{C}) = \mathbb{Z}^n \end{align*} and \begin{align*} KK^G(\mathbb{C}, \mathbb{C}) \cong R(&G) \cong KK^G(A, A) \\ KK^G(A, \mathbb{C}) \cong R^\omega(&G) \cong KK^G(\mathbb{C}, A) \end{align*} of $ R(G) $-modules. \end{lemma} \proof In the same way as in the proof of proposition 4.7 in \cite{NVpoincare} one checks the first two isomorphisms. Indeed, although the quantum group $ G $ will typically fail to be coamenable, the arguments given there essentially carry over because $ \check{G} $ is $ K $-amenable, see theorem \ref{bcqut}. The remaining isomorphisms follow from lemma \ref{Khalfshifting}. \\ The $ R(G) $-module structures on the various Kasparov groups involving $ A $ and $ \mathbb{C} $ correspond to the obvious ones on $ R(G) $ and $ R^\omega(G) $. \qed \\ We are now ready to prove our main theorem. \begin{theorem} \label{Kqaut} Let $$ A = M_{k_1}(\mathbb{C}) \oplus \cdots \oplus M_{k_n}(\mathbb{C}) $$ be a finite dimensional $ C^* $-algebra of dimension at least $ 4 $, equipped with a $ \delta $-form $ \omega $ for some $ \delta > 0 $, and let $ G = \Qut(A, \omega) $ be the corresponding quantum automorphism group. Then $$ K_0(C(G)) = \mathbb{Z}^{(n - 1)^2 + 1} \oplus \mathbb{Z}_d^{2n - 1}, \qquad K_1(C(G)) = \mathbb{Z}, $$ where $ \mathbb{Z}_d $ is the finite cyclic group of order $ d = \gcd(k_1, \dots, k_n) $, the greatest common divisor of the numbers $ k_1, \dots, k_n $. \end{theorem} \proof Notice that according to theorem \ref{bcqut} the dual of the quantum automorphism group $ G = \Qut(A, \omega) $ is $ K $-amenable, so that we do not have to distinguish between $ C^\max(G) $ and $ C^\red(G) $ as far as $ K $-theory is concerned. \\ In order to compute the $ K $-groups $ K_*(C^\max(G)) \cong K_*(C^\red(G)) $ we have to write down a suitable resolution of the trivial action on $ \mathbb{C} $ in $ KK^{\check{G}} $. It is slightly easier to work in the compact picture, so that we shall construct a resolution of $ C^\red(G) $ in $ KK^G $.
\\ The corresponding homological algebra can be expressed using the framework of homological ideals, see \cite{MNhomalg1}. In order to do this it is convenient to first pass from $ KK^G $ to $ KK^H $ for $ H = SO_q(3) $ monoidally equivalent to $ G $. We define an ideal $ \mathfrak{J}_H \subset KK^H $ by taking the intersection $ \mathfrak{J}_H = \ker(F_0) \cap \ker(F_1) $ where $ F_j: KK^H \rightarrow KK $ are the functors given by $$ F_0(C) = H \ltimes C, \qquad F_1(C) = H \ltimes (\KH(V(1/2)) \otimes C), $$ respectively. It is straightforward to check that $ \mathfrak{J}_H $ is a stable homological ideal in $ KK^H $, and we let $ \mathfrak{J}_G $ be the corresponding ideal in $ KK^G $. The general machinery in \cite{MNhomalg1}, \cite{Meyerhomalg2} now allows us to study $ \mathfrak{J}_G $-projective resolutions in $ KK^G $. \\ Recall from lemma \ref{quttorsion} that the defining coaction on $ A $ and the trivial coaction on $ \mathbb{C} $ are the only torsion coactions of $ \check{G} $ up to equivariant Morita equivalence. It is straightforward to check that both these algebras are $ \mathfrak{J}_G $-projective when viewed as objects in $ KK^G $. \\ Let us consider the diagram $ C_\bullet $ given by $$ \xymatrix{ 0 \ar@{->}[r] & C_1 \ar@{->}[r]^{d_1} & C_0 \ar@{->}[r]^{d_0} & C^\red(G) \ar@{->}[r] & 0 } $$ in $ KK^G $ where \begin{align*} C_1 &= \mathbb{C}^{\oplus n} \oplus A \\ C_0 &= A^{\oplus n} \oplus \mathbb{C} \end{align*} and the arrows are defined as follows. The morphism $ d_0 $ is $$ d_0 = \epsilon_1 \oplus \cdots \oplus \epsilon_n \oplus u $$ where $ \epsilon_j $ corresponds to the canonical basis vector $ e_j $ in $ \mathbb{Z}^n = KK^G(A, C^\red(G)) $ and $ u: \mathbb{C} \rightarrow C^\red(G) $ is the unit homomorphism. The morphism $ d_1 $ is $$ d_1 = \begin{pmatrix} t^{1/2} & 0 & \hdots & & -k_1 \\ 0 & \ddots & & & \vdots \\ \vdots & & \ddots & & \\ & & & t^{1/2} & -k_n \\ -k_1 & \cdots & & -k_n & t^{1/2} \end{pmatrix} $$ where we use the identifications obtained in lemma \ref{repringcomputations}. \\ In order to show that $ C_\bullet $ is $ \mathfrak{J} $-exact, it suffices to check that the sequences $ KK^G(\mathbb{C}, C_\bullet) $ and $ KK^G(A, C_\bullet) $ are both exact. \\ Let us compute the induced maps in the sequence $ KK^G(\mathbb{C}, C_\bullet) $. For $ d_0 $ we obtain $$ d^\mathbb{C}_0 = \begin{pmatrix} k_1 & \cdots & k_n & 1 \end{pmatrix}, $$ viewed as the $ R(G) $-linear map acting on the free $ R(G) $-module $ KK^G(\mathbb{C}, C_0) \cong R^\omega(G)^{\oplus n} \oplus R(G) \cong R(G)^{\oplus n + 1} $. Indeed, the generator $ t^{1/2} $ in the $ j $-th copy of this module corresponds to the unit map $ \mathbb{C} \rightarrow A $, and composing with $ \epsilon_j $ yields $ k_j \in \mathbb{Z} = KK^G(\mathbb{C}, C^\red(G)) $. Moreover, the generator $ 1 $ in the last copy of $ R(G) $ is clearly mapped to $ 1 \in \mathbb{Z} $. \\ The morphism $ d_1^\mathbb{C} $ becomes $$ d^\mathbb{C}_1 = \begin{pmatrix} 1 & 0 & \hdots & & -k_1 \\ 0 & \ddots & & & \vdots \\ \vdots & & \ddots & & \\ & & & 1 & -k_n\\ -k_1 & \cdots & & -k_n & t \end{pmatrix}. $$ It is straightforward to check that $ KK^G(\mathbb{C}, C_\bullet) $ is an exact complex.
\\ Similarly, for the induced maps in $ KK^G(A, C_\bullet) $ we obtain $$ d^A_0 = \begin{pmatrix} 1 & 0 & \hdots & & & k_1 \\ 0 & \ddots & & & & \vdots \\ \vdots & & \ddots & & & \vdots \\ & & & \ddots & & \vdots \\ 0 & \cdots & & 0 & 1 & k_n \end{pmatrix} $$ and $$ d^A_1 = \begin{pmatrix} t & 0 & \hdots & & -k_1 \\ 0 & \ddots & & & \vdots \\ \vdots & & \ddots & & \vdots \\ & & & t & -k_n \\ -k_1 & \cdots & & -k_n & 1 \end{pmatrix}. $$ The resulting diagram is an exact complex as well, but this is slightly more subtle. Indeed, one has to be careful to take into account the correct $ R(G) $-module structures when identifying the action of $ t $. With this in mind, in order to check $ \ker(d^A_0) = \im(d^A_1) $ one has to inductively reduce every element in $ \ker(d^A_0) $ to an $ n $-tuple of constant polynomials modulo $ \im(d^A_1) $, and such elements are in the image of $ d^A_1 $. The surjectivity of $ d^A_0 $ and the injectivity of $ d^A_1 $ are easy. \\ It follows that $ C_\bullet $ is indeed a $ \mathfrak{J} $-projective resolution, and to compute the $ K $-theory of $ C(G) $ it suffices to compute the kernel and cokernel of the map $ \partial: \mathbb{Z}^n \oplus \mathbb{Z}^n = K_0(\mathbb{C}^{\oplus n} \oplus A) \rightarrow K_0(A^{\oplus n} \oplus \mathbb{C}) = (\mathbb{Z}^n)^n \oplus \mathbb{Z} $ given by $$ \partial = \begin{pmatrix} {\bf k}^T & 0 & \hdots & & -k_1 {\bf 1} \\ 0 & \ddots & & & \vdots \\ \vdots & & \ddots & & \vdots \\ & & & {\bf k}^T & -k_n {\bf 1} \\ -k_1 & \cdots & & -k_n & {\bf k} \end{pmatrix}. $$ Here $ {\bf k}^T $ is the transpose of $ (k_1, \dots, k_n) = {\bf k} $ and $ {\bf 1} $ is the identity matrix in $ M_n(\mathbb{Z}) $. \\ Let us compute $ \ker(\partial) $. Inspecting the $ (rn + 1) $-th rows of $ \partial $ for $ r = 0, \dots, n - 1 $ we see that an element of $ \ker(\partial) $ is necessarily of the form $ (a_1, \dots, a_n, a_1, \dots, a_n) $. Moreover, the first $ n $ rows give the relations $ k_i a_1 = k_1 a_i $, the next $ n $ rows give $ k_i a_2 = k_2 a_i $, and so on. That is, we obtain $$ k_i a_j = k_j a_i $$ for all $ 1 \leq i,j \leq n $. Notice in particular that $ a_2, \dots, a_n $ are uniquely determined by $ a_1 \in \mathbb{Z} $. \\ The general solution to these equations is $ (a_1, \dots, a_n) = (m k_1/d, \dots, m k_n/d) $ where $ m \in \mathbb{Z} $ and $ d = \gcd(k_1, \dots, k_n) $ is the greatest common divisor of $ k_1, \dots, k_n $. In particular, we conclude $$ \ker(\partial) = \mathbb{Z}. $$ Let us now compute $ \coker(\partial) $. Using elementary row operations we can transform $ \partial $ to $$ \partial_1 = \begin{pmatrix} {\bf k}^T & 0 & \hdots & & -k_1 {\bf 1} \\ 0 & \ddots & & & \vdots \\ \vdots & & \ddots & & \vdots \\ & & & {\bf k}^T & -k_n {\bf 1} \\ 0 & \cdots & & 0 & {\bf 0} \end{pmatrix}. $$ We thus obtain a direct summand $ \mathbb{Z} $ in $ \coker(\partial) $, and we may restrict to the matrix obtained by deleting the last row from $ \partial_1 $. Performing elementary row and column operations we may reduce the resulting matrix to $$ \partial_2 = \begin{pmatrix} {\bf v}^T & 0 & \hdots & & -k_1 {\bf 1} \\ 0 & \ddots & & & \vdots \\ \vdots & & \ddots & & \vdots \\ & & & {\bf v}^T & -k_n {\bf 1} \end{pmatrix} $$ where $ {\bf v} = (d, 0, \dots, 0) $, and we recall that $ d = \gcd(k_1, \dots, k_n) $. Simplifying the right hand side of this matrix further leads to a diagonal matrix with $ n + (n - 1) $ entries $ d $, and all remaining entries zero.
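\\ To illustrate the computation, consider the case $ n = 2 $ and $ k_1 = k_2 = 2 $, so that $ d = 2 $. Then $ \partial $ is the integer matrix $$ \partial = \begin{pmatrix} 2 & 0 & -2 & 0 \\ 2 & 0 & 0 & -2 \\ 0 & 2 & -2 & 0 \\ 0 & 2 & 0 & -2 \\ -2 & -2 & 2 & 2 \end{pmatrix}, $$ whose kernel is generated by $ (1,1,1,1) $, and elementary row and column operations bring it to the diagonal form with entries $ 2, 2, 2, 0 $. Accordingly, $ \ker(\partial) \cong \mathbb{Z} $ and $ \coker(\partial) \cong \mathbb{Z}^2 \oplus \mathbb{Z}_2^3 $ in this case, in agreement with the general formula.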
Hence the final result is $$ \coker(\partial) = \mathbb{Z}^{(n - 1)^2 + 1} \oplus \mathbb{Z}_d^{2n - 1} $$ as claimed. \qed \\ Let us remark that the case $ n = 1 $ of theorem \ref{Kqaut} was already discussed in \cite{Voigtbcsuq2}. At the opposite extreme $ k_1 = \cdots = k_n = 1 $, theorem \ref{Kqaut} implies the following result. \begin{cor} \label{snplus} Let $ n \geq 4 $. Then the quantum permutation group $ S_n^+ $ is $ K $-amenable, and the $ K $-theory is given by $$ K_0(C(S_n^+)) = \mathbb{Z}^{n^2 - 2n + 2}, \qquad K_1(C(S_n^+)) = \mathbb{Z}. $$ Generators in degree zero are given by the projections $ 1 $ and $ u_{ij} $ for $ 1 \leq i,j \leq n - 1 $. \end{cor} \proof It remains only to verify the claim regarding generators of $ K_0(C(S_n^+)) $. For this it suffices to consider the images of the generating projections $ u_{ij} \in C^\max(S_n^+) $ in $ K_0(C(S_n)) $, and notice that they span a copy of $ \mathbb{Z}^{n^2 - 2n + 2} $ inside $ K_0(C(S_n)) = \mathbb{Z}^{n!} $. Essentially, in each row and column of the matrix $ u = (u_{ij}) $ the last entry is determined by the remaining $ n - 1 $ entries, with no further relations. This accounts for $ (n - 1)^2 = n^2 - 2n + 1 $ generators, and the missing generator is the class of the unit. \qed \\ By mapping $ C(S_n^+) $ to $ C(S_4^+) $ and using theorem 5.2 in \cite{BBfourpoints}, it is not hard to check that the defining unitary $ u \in M_n(C(S_n^+)) $ yields a nonzero class $ [u] \in K_1(C(S_n^+)) = \mathbb{Z} $, and that $ [u] $ is of the form $ k x $ where $ x $ is a generator and $ k \in \mathbb{N} $ is at most $ 8 $. However, to actually identify the generator of $ K_1(C(S_n^+)) $ would require more work. \\ Let us point out that corollary \ref{snplus} shows in particular that the $ K $-theory of $ C(S_4^+) = C(SO_{-1}(3)) $ differs significantly from the $ K $-theory of $ SO(3) $. \\ Using the explicit structure of $ C^\red(S_n^+) $ for $ n = 1,2,3 $ and corollary \ref{snplus}, we can distinguish the reduced $ C^* $-algebras $ C^\red(S_n^+) $ for different values of $ n $. \begin{cor} Let $ m, n \in \mathbb{N} $. Then $ C^\red(S_m^+) \cong C^\red(S_n^+) $ iff $ m = n $. \end{cor} Of course, this result holds for the maximal $ C^* $-algebras as well. Notice that the maximal $ C^* $-algebras can already be distinguished by comparing their abelianisations, a method which does not work for the reduced $ C^* $-algebras. \bibliographystyle{plain}
\section{Offline Analysis and Learning}\label{sec:offline} This section discusses the process of building a generic permission model using program analysis and machine learning. \subsection{Foreground Data Extraction}\label{sec:collection} {\tt COSMOS} models the context of a sensitive request using the foreground data associated with the request. Although one can manually interact with an app and record the foreground data, it is infeasible to build a faithful model by analyzing a large number of apps manually. An alternative approach is to use existing random fuzzing techniques that generate random inputs in order to trigger as many sensitive behaviors as possible. However, random fuzzing is inefficient, as it generates many inputs with similar program behavior. More importantly, without any prior knowledge, random testing wastes time on exploring code paths that are irrelevant to sensitive resource accesses. In this work, we propose a hybrid approach to collect relevant foreground data, including the set of widgets, the triggering events and the windows associated with sensitive API calls. Our approach has two phases, a \textbf{static analysis} phase and a \textbf{dynamic rendering} phase. In particular, we adopt static program analysis to accurately locate the foreground components that would trigger a permission request. Compared with random fuzzing, our approach achieves better coverage and eliminates redundant traces. The identified foreground components are then rendered dynamically with actual execution, which provides more complete and precise information compared to a pure static approach. As an over-approximation approach, pure static analysis is criticized for generating false relationships between UI elements~\cite{borges2017data}. Furthermore, we underline the fact that the existing hybrid approaches~\cite{leaksemantic} only focus on the program slices directly related to sensitive invocations, which typically omit the code corresponding to the user interface. To illustrate our hybrid approach, we use the code in Listing~\ref{lst:component} as an example throughout this section, which presents the underlying logic of the open-source SMS app \texttt{QKSMS}. \smallskip \noindent \textbf{Static Analysis:} For each target app, we first identify its permission-protected API calls through method signatures. We construct a call graph for the given app with the help of {\tt FlowDroid}~\cite{arzt2014flowdroid} and iterate over the graph to locate the target calls. The list of permission-protected API methods is provided in {\tt PScout}~\cite{felt2011android} and {\tt FlowDroid}. In {\tt QKSMS}, \texttt{sendTextMessage} in line 11 is marked as a sensitive API call that requests the {\tt SEND\_SMS} permission. The set of call graph entry points of the sensitive API calls is then identified by traversing through the call graph. For instance, the \texttt{onClick} method inside {\tt ComposeView} (line 8) is found as an entry method of {\tt sendTextMessage}. Further, the set of widgets that invoke the entry points (e.g., {\tt mButton}) is extracted by locating the event handlers of the entry points. We then conduct a \emph{data flow analysis} to track the sources of the widgets. After knowing where the widget {\tt mButton} is initialized, we are able to get its unique resource id ({\tt compose\_button}) within the app by inspecting the initialization procedure (line 4).
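The following sketch illustrates this backward traversal over the call graph. It assumes that {\tt FlowDroid} has already constructed the Soot call graph for the app; the class name, the handler list and the example signature below are purely illustrative rather than part of our actual implementation.
\begin{lstlisting}[language=Java]
import java.util.*;
import soot.Scene;
import soot.SootMethod;
import soot.jimple.toolkits.callgraph.CallGraph;
import soot.jimple.toolkits.callgraph.Edge;

// Sketch: walk the Soot call graph backwards from a permission-protected
// API to the UI event handlers that can reach it.
public class EntryPointFinder {
  // Example sensitive signature (PScout-style permission map entry).
  static final String SEND_SMS =
      "<android.telephony.SmsManager: void sendTextMessage(java.lang.String,"
    + "java.lang.String,java.lang.String,android.app.PendingIntent,"
    + "android.app.PendingIntent)>";

  // Sub-signatures treated as UI entry points (event handlers / lifecycle callbacks).
  static final Set<String> HANDLERS = new HashSet<>(Arrays.asList(
      "void onClick(android.view.View)",
      "boolean onLongClick(android.view.View)",
      "void onCreate(android.os.Bundle)"));

  static Set<SootMethod> entryPointsFor(String sensitiveSig) {
    CallGraph cg = Scene.v().getCallGraph();      // built beforehand by FlowDroid
    SootMethod target = Scene.v().getMethod(sensitiveSig);
    Set<SootMethod> visited = new HashSet<>();
    Set<SootMethod> entries = new LinkedHashSet<>();
    Deque<SootMethod> worklist = new ArrayDeque<>(Collections.singleton(target));
    while (!worklist.isEmpty()) {
      SootMethod m = worklist.poll();
      if (!visited.add(m)) continue;              // already processed
      if (HANDLERS.contains(m.getSubSignature())) {
        entries.add(m);                           // e.g. ComposeView's onClick
        continue;                                 // stop at the entry point
      }
      for (Iterator<Edge> it = cg.edgesInto(m); it.hasNext(); ) {
        worklist.add(it.next().src());            // enqueue all callers of m
      }
    }
    return entries;
  }
}
\end{lstlisting}
From each handler found in this way, the enclosing class (e.g., {\tt ComposeView}) and the widget it is registered on can then be resolved by the data flow analysis described above.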
As the foreground windows set context, our analysis goes beyond individual widgets by further identifying the windows that the widgets belong to. In our case, we aim to identify the {\tt Activity} that includes {\tt mButton}. Since {\tt mButton} is initialized inside {\tt ComposeView}, we search for the usage of {\tt ComposeView} within the app. {\tt ComposeView} is declared in {\tt ComposeFragment}, from which we can finally identify {\tt ComposeActivity} as the window for {\tt mButton}. We notice that due to over-approximation, the static analysis phase may misidentify some UI elements that are not correlated with the indicated permission request. We manually filter the misidentified samples before building the learning model to lower the impact of false alarms as much as possible. However, we remark that it can be beneficial to keep some contextual instances that do not request a permission and label them as illegal since they simulate more scenarios that should not use the permission. \smallskip \noindent \textbf{Dynamic Rendering:} For each target {\tt Activity} recognized by our static analysis (e.g., {\tt ComposeActivity}), we then render it with actual execution to precisely extract its layout and widget information. Actual execution enables us to extract relevant data loaded at runtime. Capturing rendering information specified by source code is intractable for static rendering approaches such as {\tt SUPOR}~\cite{supor}, which solely leverage app resource files to uncover the layout hierarchies. For instance, the title of the crafting page ({\tt Compose}) of {\tt QKSMS}, a critical piece of context while using the app, is declared in the Java code (line 19 in Listing~\ref{lst:component}) instead of the resource files. Losing this kind of dynamically generated information may hinder our subsequent task of precisely inferring the purpose of the underlying program behavior. Most {\tt Activities} cannot be directly called by default. Hence, for each app, we automatically instrument the app manifest file {\tt AndroidManifest.xml} with the {\tt android:exported} attribute and then repackage it into a new {\tt apk} file. After installing the new package, we wake up the {\tt Activities} of interest one by one with the {\tt adb} commands provided by Android. Once an {\tt Activity} is awakened, the contextual foreground app data, including the layout and widget information, is extracted and stored in XML files. For some {\tt Activities} that cannot be correctly started in this way, we interact with them manually. Advanced automatic UI interaction is an active open research problem~\cite{borges2017data} and it goes beyond the scope of this paper. \subsection*{Discussion on Online Permission System} We briefly discuss how {\tt COSMOS} handles corner cases related to the online permission system.\\ \noindent \textbf{Background Services} In an Android app, an {\tt Activity} can start a background {\tt Service} through inter-component communication. When a sensitive call is initiated by a {\tt Service}, its call stack does not contain the information of the starting {\tt Activity}. In this case, {\tt COSMOS} monitors the calls of {\tt Activity.startService(Intent)} to track the relationship between running {\tt Activities} and {\tt Services}. {\tt COSMOS} can then use the information available from the {\tt Activity} to infer the purpose of a {\tt Service} request. One problem with this approach is that a {\tt Service} may still be alive even when the foreground {\tt Activity} has finished.
In this case, {\tt COSMOS} simply notifies the user about the background request and lets the user decide whether to allow or deny the request. Alternatively, we can always reject such requests. We argue that sensitive services should not exist unless they provide sufficient foreground clues to indicate their purposes. Users tend to reject requests without foreground as suggested by three recent important user studies~\cite{wagner2015, wagner2017, smarper}. Google also further restricts background services in the most recent Android O~\cite{androido}. \noindent \textbf{Handling of False Automatic Decisions} Achieving 100\% precision and recall is intractable for any machine learning algorithm. To provide better usability, COSMOS notifies the user of each rejection and provides rich contextual information, including the activation event, the triggering widget, and the screenshot, to help the user perceive the cause. For any false automatic decision made by the system, the user can override it at the backend and our incremental learning models will incorporate the user's decision immediately. } \input{rq3} \subsection*{Discussion on Future Exploration}\label{sec:discussion} \noindent \textbf{Features} We envision that it is a long-term battle to fight against increasingly more advanced adversary. Similar to existing machine learning based detection methods~\cite{zhang2014semantics,arp2014drebin,yangappcontext,ubicomp2015,gorla2014checking,qu2014autocog}, {\tt COSMOS} could be bypassed with feature engineering through carefully designed evasion logic. However, we argue that the design philosophy of {\tt COSMOS} makes such attacks more difficult to succeed. First, the adversary can only target apps that are legitimate to use the target permissions. Second, the adversary is restricted to exploit the target permissions under proper scenes only. Thus, by enforcing contextual integrity, {\tt COSMOS} is more robust than approaches that only check description-to-permission fidelity~\cite{pandita2013whyper,gorla2014checking,qu2014autocog}. Third, as {\tt COSMOS} examines both the trigger events and the activation widgets, the adversary should carefully plug the payload into the correct position of the targeted app source code. Moreover, {\tt COSMOS} can be integrated with other techniques based upon different feature sets to provide more comprehensive protection. \ignore{ Although COSMOS provides a more detailed characterization of user interface than existing approaches~\cite{flowintent, backstage, asdroid, peruim} to better detect improper permission requests, it leaves room to consider more advanced features. Moreover, an adversary who knows the precise list of features we use can potentially obfuscate the user interface to match our criteria. For example, one may put human invisible text labels (e.g. using white text on a white background) on the screen to deceive our system. Although such an attack is possible, it cannot easily bypass the current version of COSMOS, as COSMOS considers multiple types of UI features. As we mentioned before, our system would warn the user if it encounters confused scenarios that do not lead to a confident decision. } \noindent \textbf{Implementation} As mentioned in Section~\ref{sec:online}, our run-time system stores the references of encountered UI elements and leverages the information available in the call stack to match sensitive API calls to the corresponding UI elements. However, the mapping could be imprecise due to multi-threading. 
One reason is that the call stack of a child thread does not contain its caller's information. The problem could be alleviated by modifying the base code of {\tt Xposed} to log the values we need inside the runtime environment. \noindent \textbf{Beyond Lab-based User Study} So far, we have only conducted a preliminary lab-based user study to evaluate our proof-of-concept system. The demographic distribution of participants is not comprehensive and the data set is small. Once our system is ready for daily use, we will release it to popular app stores and get feedback from actual deployments beyond the controlled lab environment. \section{Evaluation}\label{sec:eval} In this section, we evaluate the effectiveness of {\tt COSMOS} by answering the following questions: \noindent \textbf{RQ1}: Can {\tt COSMOS} effectively identify misbehaviors (i.e., inconsistencies between context and request) in mobile apps? How do the feature sets of \emph{who}, \emph{when} and \emph{what} contribute to the effectiveness of misbehavior identification? \noindent \textbf{RQ2}: Can {\tt COSMOS} be applied to capture personal privacy preferences? \noindent \textbf{RQ3}: Can {\tt COSMOS} be deployed on real devices with a low overhead? RQ1 measures the effectiveness of the generic models where individual user preferences are not involved. A request that cannot be confidently labeled as either legal or illegal is considered user-dependent, and the corresponding effectiveness is measured in RQ2. \subsection{RQ1: Accuracy in Identifying Misbehaviors} We manually labeled 6,560 identified permission requests that belong to 1,844 different apps, each of which was either a top-ranked app crawled across 25 categories from Google Play, or a malware sample collected from VirusShare~\cite{virusshare}. Each request was labeled through the associated foreground contextual data, including the widget (if any), the events and the window. In particular, we determined whether a request (e.g., {\tt RECORD\_AUDIO}) was initiated by an appropriate widget (e.g., a ``microphone'' button) after a proper interaction (e.g., clicking) and under a correct environment (e.g., voice assistant). \smallskip \noindent \textbf{Overall Effectiveness:} For each permission type, we leveraged the labeled requests both as training and test data in a five-fold cross validation. Specifically, we randomly divided all instances of the same permission into 5 equally sized buckets, training on 4 of the buckets, and using the remaining bucket for testing. We repeated the process 5 times and every bucket was used exactly once as the testing data. As our online learning approach is a continuous training process that adapts to user decisions, a classifier that can process one example at a time is desired. To determine which machine learning technique to use, we evaluated the effectiveness of four commonly used learning methods that support incremental classification, including {\it Hoeffding Tree, (Multinomial) Naive Bayes, (linear) SVM} and {\it Logistic Regression}. Compared to non-updatable classifiers, all these methods can iteratively incorporate new user feedback to update their knowledge and do not assume the availability of a sufficiently large training set before the learning process can start~\cite{ross2008incremental}. A summary of the results is given in Table~\ref{tab:overall}, where the mean values are calculated over all permission types. As we can see, logistic regression achieved the best result among all four classifiers.
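As a concrete illustration, this comparison can be reproduced with a few lines of {\tt Weka} code; the ARFF file name below is a placeholder for the labeled feature vectors of a single permission type, and the snippet is a simplified sketch rather than our exact evaluation harness.
\begin{lstlisting}[language=Java]
// Sketch: five-fold cross validation of incremental (updateable) classifiers
// on the labeled requests of one permission type; file name is a placeholder.
import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.UpdateableClassifier;
import weka.classifiers.bayes.NaiveBayesMultinomialUpdateable;
import weka.classifiers.functions.SGD;
import weka.classifiers.trees.HoeffdingTree;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ClassifierComparison {
  public static void main(String[] args) throws Exception {
    Instances data = DataSource.read("record_audio.arff");   // placeholder
    data.setClassIndex(data.numAttributes() - 1);            // legal / illegal label

    SGD logistic = new SGD();
    logistic.setOptions(new String[]{"-F", "1"});            // log loss: logistic regression
    SGD svm = new SGD();                                     // default hinge loss: linear SVM
    Classifier[] candidates = {
        new HoeffdingTree(), new NaiveBayesMultinomialUpdateable(), svm, logistic };

    for (Classifier c : candidates) {
      Evaluation eval = new Evaluation(data);
      eval.crossValidateModel(c, data, 5, new Random(1));    // five-fold cross validation
      System.out.printf("%s  P=%.3f R=%.3f F=%.3f%n", c.getClass().getSimpleName(),
          eval.weightedPrecision(), eval.weightedRecall(), eval.weightedFMeasure());
    }

    // At runtime, the chosen model is refined one decision at a time:
    Classifier model = logistic;
    model.buildClassifier(data);
    Instance userDecision = data.lastInstance();             // stand-in for new user feedback
    ((UpdateableClassifier) model).updateClassifier(userDecision);
  }
}
\end{lstlisting}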
Table~\ref{tab:perm} further provides detailed results of logistic regression on each permission type. We considered seven permissions that are highly security or privacy sensitive~\cite{smarper,arzt2014flowdroid} and are commonly required by the collected apps. We observed that among all the permission types, differentiating requests of {\tt DEVICE\_ID} is more challenging since developers normally do not provide sufficient information in apps to indicate why the permission is requested. More human intervention could be beneficial regarding {\tt DEVICE\_ID}. \ignore{ \begin{table}[t] \centering \begin{threeparttable} \caption{\label{tab:overall} Results for Different Classifiers} \small{ \begin{tabular}{ l c c r } \toprule Algorithm & \tabincell{c}{Median \\ F-measure} & \tabincell{c}{Average \\ Precision} & \tabincell{c}{Average \\ Recall} \\\midrule Hoeffding Tree & 77.9\% & 81.7\% & 78.3\% \\ Naive Bayes & 93.9\% & 93.3\% & 92.9\% \\ SVM & 95.5\% & 95.4\% & 95.4\% \\ Logistic Regression & 96.1\% & 95.8\% & 95.5\% \\ \bottomrule \end{tabular} } \end{threeparttable} \end{table} \begin{table}[t] \centering \begin{threeparttable} \caption{\label{tab:perm} Results for Different Permissions} \small{ \begin{tabular}{ l c c r } \toprule Permission & Precision & Recall & F-Measure \\\midrule {\tt DEVICE\_ID} & 89.8\% & 89.3\% & 89.3\% \\ {\tt LOCATION} & 93.8\% & 93.9\% & 93.8\% \\ {\tt CAMERA} & 95.0\% & 95.0\% & 95.0\% \\ {\tt RECORD\_AUDIO} & 96.0\% & 96.1\% & 96.1\% \\ {\tt BLUETOOTH} & 97.9\% & 97.9\% & 97.9\% \\ {\tt NFC} & 96.7\% & 96.6\% & 96.6\% \\ {\tt SEND\_SMS} & 99.8\% & 99.8\% & 99.8\% \\ \bottomrule \end{tabular} } \end{threeparttable} \end{table} } \ignore{ \begin{table}[t] \centering \begin{threeparttable} \caption{\label{tab:feature}Classification with Different Feature Sets} \small{ \begin{tabular}{ l c c r } \toprule Feature Type & Precision & Recall & F-Measure \\\midrule \bf{Who} & 81.9\% & 78.8\% & 75.7\% \\ \bf{When} & 69.7\% & 70.7\% & 70.0\% \\ \bf{What} & 95.4\% & 95.3\% & 95.3\% \\%& 0.836 & 0.824 & 0.826 \\ \bf{Who \& When} & 80.0\% & 79.1\% & 76.9\% \\ \bf{Who \& What} & 95.6\% & 95.6\% & 95.6\% \\ \bf{When \& What} & 95.6\% & 95.6\% & 95.6\% \\ \bf{All} & 96.0\% & 96.1\% & 96.1\% \\ \bottomrule \end{tabular} } \end{threeparttable} \end{table} } \smallskip \noindent \textbf{Feature Comparison:} To measure how each feature set contributes to the effectiveness of behavior classification, we used the same learning technique (e.g., logistic regression) with different feature sets under ``who'', ``when'' and ``what'' and some combinations of them, respectively. The cross validation results of {\tt RECORD\_AUDIO} are presented in Table~\ref{tab:feature}. Since the comparison results of other permissions share the similar trend, we omit them here. For each feature set, we evaluated its effectiveness by comparing the evaluation metrics of our learning models when the feature set is used and when it is not. We found that the ``what'' features contributed the most among the three feature sets. As we mentioned in Section~\ref{sec:intro}, benign instances often share similar themes that can be inferred from window content and layout. For example, an audio recorder instance typically has a title {\tt Recorder}, a timer frame {\tt 00:00} at the center and two buttons with words {\tt start} and {\tt stop}, respectively. From these keywords and their positions in the page, {\tt COSMOS} is often able to tell whether the user is under a recording theme. 
Although the ``what'' features successfully predicted most audio recorder instances, they may be of limited use in other cases where the {\tt RECORD\_AUDIO} permission is used. For instance, developers tend to integrate voice search into their apps to better serve users. However, as the searching scenarios differ greatly from each other, it is hard to classify their intentions using ``what'' features only. The ``who'' features help alleviate the above problem by further examining the metadata of the corresponding widget. For instance, {\tt co.uk.samsnyder.pa:id/speakButton} is an image button for speech recognition, which does not provide useful ``what'' features as the image button does not contain any extractable textual information. However, the word ``speak'' in the resource-id clearly indicates the purpose of the button. In addition to the textual data, the relative position and the class attribute of a widget can also help locate non-functional components, e.g., the advertisements at the bottom. We observed that for {\tt RECORD\_AUDIO}, the ``who'' features and the ``when'' features are highly correlated in most cases. This is because most sensitive method calls initiated by widgets are bound with the event {\tt onClick}. However, there are exceptions. For instance, a walkie talkie app that transfers audio between users displays the tip {\tt Press \& Hold} in its main window, which indicates that recording should start only after the user clicks. However, it actually starts recording once the app is open. This misbehavior can be effectively identified using the ``when'' features, which emphasizes that apps should request a permission only after proper user interactions. \ignore{ Code obfuscation and name manipulation threaten the feasibility of the methods mainly rely on code-level namespace. Shown in the Table~\ref{tab:feature}, the effectiveness of namespace can be highly improved with the combination of the features from user-interface. From another perspective, namespace also contributes to the effectiveness of user-interface features. The class name and method name of the sensitive API methods sometimes contain rich information to further eliminate ambiguous. Recall the button {\tt co.uk.samsnyder.pa:id/speakButton} used for voice recognition, it would be more innocent if it calls {\tt <android.speech.SpeechRecognizer:} {\tt setRecognitionListener(...)>}, instead of {\tt <android.media.AudioRecord:} {\tt startRecording()>}. Note that an adversary could only manipulate the names of its self-defined API callers, not the API methods set by official SDK. } In summary, ``what'' features work well in differentiating between most legitimate and illegitimate instances at the current stage. However, as malware continues to evolve, we expect that collecting more comprehensive contextual data including ``who'', ``when'' and ``what'' can provide better protection. The last row in Table~\ref{tab:feature} shows that the combination of all the three feature sets provides the best results.
\begin{table} \centering \begin{threeparttable} \caption{\label{tab:thread} Runtime overhead of {\tt COSMOS} on representative apps} \begin{tabular}{ l c c } \toprule Target App & Requests/min & \tabincell{c}{CPU Time (\%)} \\\midrule \tabincell{c}{Wechat} & 12.6 & 4.4\% \\ \tabincell{c}{Yelp} & 5.8 & 2.2\% \\ Yahoo Weather & 2.5 & 1.4\% \\ \tabincell{c}{Amazon} & 0.8 & 0.6\% \\ \tabincell{c}{Paypal} & 0.4 & 0.2\% \\ \bottomrule \end{tabular} \end{threeparttable} \end{table} \input{rq2} \subsection{RQ3: Performance Measurements} To investigate the performance overhead incurred by {\tt COSMOS}, we installed five selected representative apps collected from different categories, including {\tt Wechat, Yelp, Yahoo Weather, Amazon} and {\tt Paypal}, on a Nexus 5 with {\tt COSMOS} deployed. We then interacted with them as in common daily use, and monitored the overhead introduced by {\tt COSMOS}. As shown in Table~\ref{tab:thread}, {\tt COSMOS} consumed 1.8\% of total CPU time on average and less than 5\% of total CPU time for all the monitored apps. We also measured other performance-related impacts such as memory and storage overhead, and all the values were reasonably small for daily use. Details are omitted due to the page limit. \section{Introduction}\label{sec:intro} Mobile operating systems such as Android and iOS adopt permission systems that allow users to grant or deny a permission request when it is needed by an app for the first time. However, this approach does not provide sufficient protection as an adversary can easily induce users to grant the permission first, and then exploit the same resource for malicious purposes. A recent user study~\cite{wagner2015} showed that at least 80\% of users would have preferred to prevent at least one permission request involved in the study, and suggested the necessity of more fine-grained control of permissions. Ideally, a permission system should be able to identify suspicious permission requests {\it on the fly} and {\it automatically} by taking user preferences into account and notify users only when necessary. As shown in several user studies~\cite{wagner2015,wagner2017,smarper}, it is crucial to consider the {\it context} pertinent to sensitive permission requests. Moreover, a user's preference is strongly correlated with the foreground app and the visibility of the permission requesting app (i.e., whether the app is currently visible to the user). The intuition is that users often rely on displayed information to infer the purpose of a permission request and they tend to block requests that are considered to be irrelevant to the app's functionalities~\cite{wagner2015}. Thus, a permission system that can properly identify and utilize foreground data may significantly improve decision accuracy and reduce user involvement. We posit that to fully achieve {\it contextual integrity}~\cite{contextualintegrity}, it is crucial to capture detailed foreground information by inspecting \textbf{\emph{who}} is requesting the permission, \textbf{\emph{when}} the request is initiated, and under \textbf{\emph{what}} circumstances it is initiated, in order to model the precise context surrounding a request. In this paper, we present the design and implementation of a lightweight run-time permission control system named {\tt COSMOS} (COntext-Sensitive perMissiOn System). {\tt COSMOS} detects unexpected permission requests through examination of contextual foreground data.
For instance, a user interacting with an SMS composing page would expect the app to ask for the {\tt SEND\_SMS} permission once the sending button is pushed, while an SMS message sent by a flashlight instance is suspicious. Given a large number of popular apps with similar functionalities and user interfaces (UIs), {\tt COSMOS} is able to learn a generic model that reflects the correspondences between foreground user interface patterns (texts, layouts, etc.) and their background behaviors. However, such a one-size-fits-all model is not always sufficient. In practice, different users may have very different preferences on the same permission request even in a similar context~\cite{wagner2017,smarper}. Therefore, {\tt COSMOS} then incrementally trains the generic model on each device with its user's privacy decisions made over time. In the end, each user has a personalized model. In summary, this paper makes the following contributions: \begin{itemize} \item We propose a novel permission system that inspects app foreground information to enforce runtime contextual integrity. Our approach involves a two-phase learning framework to build a personalized model for each user. \item We implement a prototype of the {\tt COSMOS} permission system. It is implemented as a standalone app and can be easily installed on Android devices with root access. It is also completely transparent to third-party apps. \item We show that {\tt COSMOS} achieves both high precision and high recall (95\%) for 6,560 requests from both authentic apps and malware. Further, it is able to capture users' specific privacy preferences with an acceptable median f-measure (84.7\%) for 1,272 decisions collected from users. We also show {\tt COSMOS} can be deployed on real devices to provide real-time protection with a low overhead. \end{itemize} \subsection{Classification}\label{sec:learning} \ignore{ \begin{table}[t] \centering \setlength\tabcolsep{0pt} \begin{threeparttable} \caption{\label{tab:func} Legal Permission Usage} \small{ \begin{tabular} {c c} \toprule Permission & Functionality \\\midrule {\tt LOCATION} & weather, map, navigation, tracking\\ & nearby services, location-based socialization \\ {\tt CAMERA} & photo/video shooting, scanner, flashlight \\ & video chatting, check deposit, face recognition \\ {\tt NFC} & device binding, payment \\ {\tt RECORD\_AUDIO} & audio recording, speech recognition\\ & audio chatting, sleep monitor, talking game \\ {\tt BLUETOOTH} & file transfer, accessory pairing\\ & stream transmission\\ {\tt DEVICE\_ID} & identity verification, device configuration, tracking \\% owner profile edition {\tt SEND\_SMS} & chatting, text invitation, authentication\\ \bottomrule \end{tabular} } \end{threeparttable} \end{table} } Using the extracted foreground data, we are able to build a machine learning model to detect user-unintended resource accesses. Given a permission request, we consider it as : \smallskip \noindent \textit{Legitimate:} if the permission is necessary to fulfill the core functionality indicated by the corresponding foreground context. The requests in this category would be directly allowed by our runtime mediation system to eliminate unnecessary user intervention. We emphasize that the core functionality here is with respect to the running foreground context, not the app as a whole. For example, some utility apps include a referral feature for inviting friends to try the apps by sending SMS messages. 
This is typically not a core functionality of the apps and the developers normally do not mention this feature on the apps' description pages. However, the SMS messages sent under the ``invite friends'' page after a user clicks the {\tt Invite} button should be considered as user intended. In contrast, description-based approaches~\cite{pandita2013whyper,gorla2014checking} would unnecessarily raise alarms. \smallskip \noindent \textit{Illegitimate:} if the permission neither serves the core functionality indicated by the foreground context nor provides any utility gain to the user. An illegitimate request can be triggered by either malicious code snippet or flawed program logic. The latter can happen as developers sometimes require needless permissions due to the misunderstanding of the official development documents~\cite{felt2011android}. \smallskip \noindent \textit{User-dependent:} if the request does not confidently fall into the above two categories; that is, it is not required by the core functionality suggested by the foreground context, but the user may obtain certain utility by allowing it. Intuitively, in addition to the core functionality, the foreground context may also indicate several minor features that require sensitive permissions. Whether these additional features are desirable can be user dependent. For example, besides the {\tt CAMERA} permission, a picture shooting instance may also ask permissions such as {\tt ACCESS\_LOCATION} to add a geo-tag to photos. Although some users may be open to embed their location information into their photos that may be shared online later, those who are more sensitive to location privacy may consider this a bad practice. In this case, we treat {\tt ACCESS\_LOCATION} as a user-dependent request and leave the decision to individual users. \smallskip \noindent \textbf{Features:} Before extracting features from the collected foreground contextual data, we pre-process the crawled layouts to better retrieve their structural properties. Mobile devices have various resolutions. With absolute positions, models built for one device may not apply to other devices with different resolutions. Therefore, we divide a window into a $3 \times 3$ grid and map absolute positions to relative positions. The processed layouts are then used to extract features. We construct three feature sets to enforce contextual integrity discussed in Section~\ref{sec:prob}. More specifically, we derive the following features from a sensitive request: \smallskip \noindent \textit{Who:} The static phase of our foreground data collection described in Section~\ref{sec:collection} allows us to identify the widgets leading to sensitive API calls. We then collect the attribute values of the target widgets using the dynamically extracted layout files. It is possible that the permission request is triggered by an {\tt Activity} rather than a widget. In this case, we would leave the value of this feature set as empty and rely on the ``what'' feature set to handle windows. \smallskip \noindent \textit{When:} The call graph traversal gives us entries of sensitive API calls. An entry point can be either a lifecycle callback or an event listener. The lifecycle models the transition between states such as the creation, pause, resume and termination of an app component. The event listeners monitor and respond to runtime events. Both lifecycle callbacks and event listeners are prior events happened before an API call and serve as useful temporal context to the call. 
We therefore use the signatures of entry methods as the ``when'' feature set. \smallskip \noindent \textit{What:} The text shown on target widgets could be too generic (such as {\tt Ok} and {\tt Yes}) to convey any meaningful context. Therefore, we also derive features from the windows to help infer the overall theme of the requesting environment. We iterate over the view hierarchy of the window layout and obtain all the related widgets with text labels. For every such widget, we save the text on the widget and its relative position in the window as features. Including both textual and structural attributes provides better scalability to capture semantic and structural similarities across millions of pages. Although developers may adopt various design styles for the same functionality, their implementations usually share a similar characterization. For instance, we do not need to know whether a window is implemented with {\tt Material} design. Instead, learning the title shown at the top of the window, such as {\tt Compose} and {\tt New message}, is crucial. By focusing on features directly visible to users, our approach is resilient to code level obfuscation. Note that the entry methods are overridden of the existing official SDK APIs and cannot be renamed by the third parties. For each of the three feature sets mentioned above, we generate a separate feature vector. Note that although attributes of a widget leading to sensitive API calls appear in both the ``who'' feature set and the ``what'' feature set, they are treated separately to stress the triggering widget. For the ``what'' set, the text and positions of all the widgets shown on the window are included, while for the ``who'' set, only those related to the triggering widget are included. All the textual features are pre-processed using natural language processing (NLP) techniques. In particular, we perform {\it identifier splitting, stop-word filtering, stemming} and leverage bag-of-words model to convert them into feature vectors. The process is similar to other text-based learning methods~\cite{flowintent}. In addition to the three sets of features, it is possible to include more features to further raise the bar of potential attacks. \ignore{ \begin{table}[] \centering \setlength\tabcolsep{0.5pt} \begin{threeparttable} \caption{\label{tab:features}The Feature Sets} \small{ \begin{tabular}{ l | c | r } \toprule Type & Source & Features \\\midrule {\tt What} & Window & textual surroundings\\ & & layout \\ {\tt When} & Activation Event & event type\\ {\tt Who} & Sensitive Widget & attribute, position \\ {\tt Namespace} & Sensitive API call and caller & class name \\ & & method name\\ \bottomrule \end{tabular} } \end{threeparttable} \end{table} } \smallskip \noindent \textbf{Training:} \label{sec:learning} Using the three sets of features discussed above, we train a one-size-fits-all learning model as follows. For each permission type, a classifier is trained with a data mining tool {\tt Weka}~\cite{weka} using the manually labeled sensitive API calls related to that permission. The classifiers are trained separately for different permissions to eliminate potential interference. Each permission request is labeled as either \emph{legal} or \emph{illegal} based on the foreground contextual data we collected including: the entry point method signature, the screenshot of the window, and the highlighted widget invoking the API call (if there is such a widget). 
We ensure contextual integrity by checking whether they altogether imply the sensitive API call. The request is marked as illegal if it is not supported by any type of the foreground data. For instance, a {\tt SEND\_SMS} permission requested under the ``Compose'' page without user interactions or required by an advertisement view is categorized as illegal. As we mentioned in Section~\ref{sec:overview}, our generic models will be continuously updated at runtime to incorporate individual user's preferences. One option is to keep sending data to a remote cloud for pruning the models. However, since the content shown on a device can be deeply personal, transmitting this kind of sensitive data out of the device would raise serious concerns on potential leaks~\cite{fernandes2016appstract}. Consider the SMS composing example again, the window may contain private information typed by the user, which is inappropriate to share with a third-party service. On the other hand, the limited computational power of mobile devices makes it infeasible to repeatedly train complicated models from scratch inside the devices. To meet both the privacy and performance requirements, we apply light-weight incremental classifiers that can be updated instantaneously using new instances with a low overhead, which matches the memory and computing constraints of smart phones~\cite{yin2016incremental}. A key question is which incremental learning technique to use. To this end, we have evaluated popular incremental learning algorithms. The detailed results are given in Section~\ref{sec:eval}. \section{Conclusion}\label{sec:conclusion} We propose a context-sensitive permission system called {\tt COSMOS} that automatically detects semantic mismatches between foreground interface and background behavior of running mobile applications. Our evaluation shows that {\tt COSMOS} can effectively detect malicious resource accesses with high precision and high recall. We further show that {\tt COSMOS} is capable of capturing users' specific privacy preferences and can be installed on Android devices to provide real-time protection with a very low performance overhead. \section*{Acknowledgment} The effort described in this article was partially sponsored by the U.S. Army Research Laboratory Cyber Security Collaborative Research Alliance under Contract Number W911NF-13-2-0045. The views and conclusions contained in this document are those of the authors, and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes, notwithstanding any copyright notation hereon. The work of Zhu was supported through NSF CNS-1618684. \footnotesize \bibliographystyle{abbrv} \section{Problem Statement}\label{sec:prob} \subsection{Threat Model} We target threats from third-party apps that access {\it unnecessary} device resources in fulfilling their functionalities provided to the users. Such threats come from both intended malicious logic embedded in an app and vulnerable components of an app that can be exploited by attackers. We assume that the underlying operating system is trustworthy and uncompromised. \subsection{Design Goals} Our goal is to design a runtime permission system that enforces \emph{contextual integrity} with \emph{minimum user involvement}. 
\smallskip \noindent \textbf{Contextual Integrity:} To enforce contextual integrity on mobile platforms, one needs to ask the following three questions regarding a permission request: \textit{Who initiated the request?} An app may request the same permission for different purposes. For instance, a map app may ask for the user's location to update the map as well as for advertising. Although it can be difficult to know the exact purpose of a permission request, it is critical to distinguish the different purposes by tracing the sources of requests. \textit{When did it happen?} Ideally, a permission should be requested only when it is needed, which implies that the temporal pattern of permission requests is an important piece of contextual data. For instance, it is helpful to know if a permission is requested at the beginning or at the termination of the current app activity and if it is triggered by proper user interactions such as clicking, checking, etc. \textit{What kind of environment?} A proper understanding of the overall theme or scenario when a permission is requested is critical for proper permission control. For instance, it is expected that different scenarios such as entertainment, navigation, or message composing may request very different permissions. In contrast to {\it who} and {\it when}, which focus on detailed behavioral patterns, {\it what} focuses on a high-level understanding of the context. The above context will help us detect mismatches between app behavior and user expectation. A research challenge here is how to learn the context automatically for dynamic access control. Moreover, as expectations vary from user to user, how should we meet each user's personal expectation? \smallskip \noindent \textbf{Minimum User Effort:} Recent studies on runtime permission control focus on characterizing users' behavioral habits and attempt to mimic users' decisions whenever possible~\cite{smarper,wagner2015,wagner2017}. Although this approach caters to an individual user's privacy preference, it also raises some concerns. First, a user could be less cautious, and poor decisions made by the user then translate into poor access control~\cite{wagner2017}. Second, malicious resource accesses are user independent (although they may still be context dependent) and should be rejected by the runtime permission system without notifying the user. Furthermore, the permission system should automatically grant the permissions required for the core functional logic indicated by the context of the running app to reduce user intervention. To achieve minimum user involvement, our system should notify a user only when the decision is user dependent {\it and} the current scenario is new to the user. In all other cases, it should automatically accept or deny a permission request based on the current model with the user's previous decisions incorporated. Our system should also provide high \textit{scalability} and {\it adaptivity}. It should scale to a large number of diverse permission requests and require no app source code or additional developer effort. Its accuracy and usability can be continuously improved with more user decisions incorporated. \ignore{ COSMOS is a new permission system that continuously captures semantic information about app behaviors. It enforces contextual integrity through comprehensive inspection of the foreground from three distinct perspectives.
In particular, it answers the questions of ``who'', ``when'' and ``what'' by examining the {\it activation widgets, trigger events} and {\it windows}. Moreover, COSMOS adopts a two-layer design to protect users from malicious logic with minimum user intervention, while catering to individual user's privacy preferences. Overall, we achieve the following specific goals: \begin{itemize} \item Intention-based detection: Our approach detects mismatches between app intentions and user intentions. It infers the purpose of a sensitive permission request through inspection of the foreground context. It stresses on contextual integrity by conducting analysis from three distinct perspectives. Our approach is able to meet users' personal expectations through continuous updates of the on-device learning modules. \item Limited user involvement: Our system notifies a user only when the decision is user dependent {\it and} the current scenario is new to the user. In other cases, it automatically accepts or denies an app request based on the latest model with the user's previous decisions incorporated. \item High scalability and adaptivity: Our approach is scalable to a large number of diverse permission requests. It is transparent to app source code and requires no additional developer efforts. Its accuracy and usability can be continuously improved with more apps available in the app stores and more user decisions incorporated. \item Obfuscation resilience: Previous research utilized program namespace to build context-aware permission models~\cite{ubicomp2015, smarper}. However, commercial apps and malwares often modify their classes, methods, variable names and call sequences to prevent reverse engineering. In contrast, our foreground-based design is resilient to code obfuscation and name space manipulation. \item Privacy-preserving: Our solution not only protects users from privacy threats caused by third-party apps, but also eliminates the potential privacy risk due to sharing user data with a remote server by keeping and processing all user sensitive data on the devices. \end{itemize} } \section{Online Permission System} \label{sec:online} {\tt COSMOS} as a mediation system dynamically intercepts sensitive calls, collects features for them, and finally automatically grants or denies the requests using online learning models. \subsection{Mediation and Data Extraction} Android does not officially allow a third-party app to mediate other apps' requests. Instead of modifying the OS and flashing the new firmware, {\tt COSMOS} is written in Java as a standalone Android app and can be easily installed on Android devices with root access. The implementation of {\tt COSMOS} is based on {\tt Xposed}~\cite{xposed}, an open-source method hooking framework for Android. {\tt Xposed} provides native support to intercept method calls, which enables us to execute our code before and after execution of the hooked method. To detect improper permission requests at runtime, {\tt COSMOS} dynamically extracts information from the UI elements associated with sensitive calls. Consider the example shown in Figure~\ref{fig:online}. The {\tt sendTextMessage} is triggered after clicking {\tt mButton} shown on {\tt MainActivity}. {\tt COSMOS} needs to retrieve the memory references of the interested UI elements, including the running instances of {\tt mButton} and {\tt MainActivity}. However, simply intercepting the target sensitive call is insufficient. 
The problem is that although we can extract the values of the variables that appear in the current call (e.g., {\tt sendTextMessage}), retrieving the values from the prior calls (e.g., {\tt onClick}) is currently infeasible in {\tt Xposed}, which makes it difficult to retrieve the triggering UI instances by only hooking the sensitive API call. To address the above problem, {\tt COSMOS} intercepts the invocations of both Activity lifecycle callbacks (e.g., {\tt Activity.onCreate}) and event listeners (e.g., {\tt onClick}) in addition to sensitive API calls. For each of these methods, it records the references of the method parameters. For instance, in the above example, the references to {\tt mButton} and the Activity are stored when processing {\tt onClick(mButton)}. When it encounters a sensitive API call, {\tt COSMOS} retrieves the {\it latest} widget and Activity it saved, and extracts the same features from them as in the offline model. In particular, ``who'' features are collected from the widget and ``what'' features are extracted from the activity by iterating over all its widgets. Moreover, {\tt COSMOS} examines call stack traces to determine the entry point methods leading to sensitive calls, which are used to derive the ``when'' features. After converting the features into numerical values, {\tt COSMOS} uses the online learning model to predict the type of the sensitive request. It automatically grants the permission if the request is classified as legitimate with high confidence and rejects the request if it is confidently classified as illegitimate. For a rejected request, {\tt COSMOS} further pops up a warning to the user that includes the details of the request. A request that is classified as neither legal nor illegal with high confidence will be treated as user-dependent and will be handled by the user preference module as discussed below. As users can switch between Activities, a request may be initiated by a background Activity. By tracking the memory references of the associated UI elements, \emph{{\tt COSMOS} is able to reason about the background requests even if the associated UI elements are currently invisible.} \begin{figure}[t] \centering \includegraphics[height=1.9in,width=0.53\textwidth]{fig_prompt} \caption{An example user prompt shown by {\tt COSMOS}. In the top right corner, the ``Upload'' button that is accessing the device location is highlighted.} \label{fig:prompt} \end{figure} \smallskip \noindent \textbf{GUI Spoofing:} To ensure that the foreground data is indeed associated with the background request, {\tt COSMOS} dynamically inspects the widget information with the hook support and ignores the widgets that are not owned by the permission-requesting app. Thus, {\tt COSMOS} is resilient to GUI spoofing that tries to evade detection by hiding behind the interfaces of other apps. More advanced GUI spoofing attacks have also been proposed in the literature~\cite{bianchi2015app}. For example, when a benign app running in the foreground expects a sensitive permission to be granted, malware may replicate and replace the window of the benign app to deceive the user. However, such attacks can be hard to implement in practice as they require the user to enable the {\tt Accessibility} feature for the malware. It is worth noting that using {\tt Accessibility} may play against the malware itself, since Android repeatedly warns the user about the threats caused by {\tt Accessibility}. If needed, {\tt COSMOS} can also intercept the calls initiated from {\tt Accessibility} to further warn users.
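Returning to the mediation flow described at the beginning of this section, a minimal sketch of the bookkeeping and the three-way decision is given below. It reuses the toy classifiers from the earlier sketch, treats widgets and windows simply as their visible text, and the class name, confidence cut-offs and sample strings are invented for illustration; the actual logic runs inside Java hooks on the device.
\begin{verbatim}
# Hypothetical sketch of the runtime mediation logic; the real implementation
# lives inside Xposed hooks written in Java.  Thresholds, names and the
# string-based "widgets" are illustrative assumptions only.
class RuntimeMediator:
    def __init__(self, classifiers, hi=0.9, lo=0.1):
        self.classifiers = classifiers      # permission -> (vectorizer, model)
        self.hi, self.lo = hi, lo           # assumed confidence cut-offs
        self.last_widget = ''               # updated by event-listener hooks
        self.last_window_text = ''          # updated by lifecycle hooks

    def on_event(self, widget_text, window_text):
        # called from hooked onClick/onCreate etc.; remember the latest context
        self.last_widget, self.last_window_text = widget_text, window_text

    def on_sensitive_call(self, permission, entry_signature):
        # "who" + "what" + "when" features, flattened into one toy document
        doc = ' '.join([self.last_widget, self.last_window_text, entry_signature])
        vectorizer, model = self.classifiers[permission]
        p_legal = model.predict_proba(vectorizer.transform([doc]))[0, 1]
        if p_legal >= self.hi:
            return 'grant'
        if p_legal <= self.lo:
            return 'deny'        # and show a warning with the request details
        return 'prompt'          # user-dependent: defer to the preference module

mediator = RuntimeMediator(classifiers)
mediator.on_event('send', 'Compose new message to contact')
print(mediator.on_sensitive_call('SEND_SMS', 'View$OnClickListener.onClick'))
\end{verbatim}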
\smallskip \noindent \textbf{Background Services:} An Activity can start a background Service. However, when a sensitive call is initiated by a Service, its call stack does not contain information about the starting Activity. In this case, {\tt COSMOS} monitors calls to {\tt Activity.startService(Intent)} to track the relationship between running Activities and Services. {\tt COSMOS} then uses the information available from the Activity to infer the purpose of a Service request. A Service may exist without any triggering Activity. In this case, {\tt COSMOS} notifies the user about the background request and lets the user decide whether to allow or deny the request. Alternatively, we can always reject such requests. We argue that sensitive services should not exist unless they provide sufficient foreground clues to indicate their purposes. Users tend to reject requests without foreground context, as suggested by three recent user studies~\cite{wagner2015,wagner2017,smarper}. Indeed, the recent updates of Android further restrict background services~\cite{androido}. \subsection{User Preference Modeling} To incorporate user preferences, {\tt COSMOS} notifies the user if the online model identifies a request as user-dependent. Consider the example shown in Figure~\ref{fig:prompt}. The UI shows a product review page, and a location permission is requested once the {\tt Upload} button is clicked. On the one hand, the user may benefit from sharing their location if the seller provides subsequent services to improve the customer experience based on the user's review and location. On the other hand, the sharing behavior could put the user at risk since there is no guarantee of how the location information will be used by the app developer. As the page does not provide enough evidence of whether location sharing is necessary, {\tt COSMOS} treats the instance as user-dependent, and then creates a prompt to collect the user's decision. Our prompt not only alerts the user to the permission request, but also highlights the widget that triggered the request and the activation event. The user decision, along with the features of the instance, is then used to update our model. As discussed in Section~\ref{sec:learning}, our classifiers are built through incremental learning in order to address both privacy concerns and performance overhead. The incremental learning model immediately accepts the new instance and adjusts the decision strategy so as to better match the user's criteria next time.
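The on-device update step can be sketched as follows. {\tt COSMOS} uses {\tt Weka}'s updateable classifiers; the snippet below merely illustrates the same idea with scikit-learn's \texttt{partial\_fit}, and the vocabulary size, seed data and function name are invented.
\begin{verbatim}
# Illustrative analogue of the on-device incremental update (COSMOS itself uses
# Weka's updateable classifiers); all data and names below are invented.
import numpy as np
from sklearn.naive_bayes import MultinomialNB

VOCAB = 32                                    # assumed bag-of-words vocabulary size
clf = MultinomialNB()
rng = np.random.default_rng(0)
X_seed = rng.integers(0, 3, size=(4, VOCAB))  # a few instances from the generic model
y_seed = [1, 1, 0, 0]                         # 1 = allow, 0 = deny
clf.partial_fit(X_seed, y_seed, classes=[0, 1])

def incorporate_user_decision(x_bow, allowed):
    """Fold one prompt outcome back into the model without retraining from scratch."""
    clf.partial_fit(x_bow.reshape(1, -1), [1 if allowed else 0])

# e.g. the user just allowed a request whose bag-of-words vector is x_new
x_new = rng.integers(0, 3, size=VOCAB)
incorporate_user_decision(x_new, allowed=True)
\end{verbatim}
Only the model parameters are updated in this way; the raw window content never leaves the device.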
\section{System Architecture} Figure~\ref{fig:arch} depicts the overall system architecture of {\tt COSMOS}, which contains two phases. \smallskip \noindent \textbf{Offline Phase}: The offline phase (details in Section~\ref{sec:offline}) builds a generic model to predict user expectation when a sensitive permission request is made. To build the model, we collect a large number of benign apps and malicious apps and develop a lightweight static analysis technique to extract the set of sensitive API calls and the corresponding foreground windows. Subsequently, the windows are dynamically rendered to extract their layouts as well as information about their embedded widgets. The system calls, widgets and layouts are then used to extract features to build learning models that classify each sensitive API call of third-party apps as either legitimate, illegal or user-dependent. \smallskip \noindent \textbf{Online Phase}: In the online phase (discussed in Section~\ref{sec:online}), the generic model trained previously is personalized as follows. For each sensitive API call invoked by a third-party app, our mediation system will intercept the call and leverage the personalized model to identify its nature (initially, the personalized model is the same as the generic model). The sensitive API call is allowed if it is classified as legal and is blocked (optionally with a pop-up warning window) if it is classified as illegal. Otherwise, the API call is considered as undetermined and the user will be notified for decision making. The user's decision is then fed back to the online learning model so that automatic decisions can be made for similar scenarios in the future. To better assist users' decisions, detailed contextual information is provided in addition to the sensitive API call itself. Moreover, we provide specific mechanisms to handle background requests without foreground context. \ignore{ \subsection{Data Collection} We perform the followings for the given app: \begin{enumerate} \item Search the sensitive API calls. \item For each sensitive invocation, we retrieve the entry points and identify the Android components (e.g. an Activity) where the entries lie. We refer the extracted Activities as {\it the target Activities}. \item Instrument the app to make each target Activity callable from outside environment. \item Run each target Activity and record the UI information. \end{enumerate} } \section{Related Work} Early studies on building context-aware systems mainly depend on manually crafted policies specific to certain behaviors~\cite{peg,miettinen2014conxsense,zhang2016rethinking}. Recent approaches attempt to infer context-aware policies from users' behavioral traits~\cite{wagner2015,wagner2017,smarper}. They observe that the visibility of apps is the most crucial factor that contributes to users' decisions on permission control. However, they do not capture more fine-grained foreground information beyond visibility and package names. Some recent efforts have also been made to detect unexpected app behavior from UI data. For instance, {\tt AppIntent}~\cite{appintent} uses symbolic execution to extract a sequence of GUI manipulations leading to data transmissions.
{\tt PERUIM}~\cite{peruim} relates user interface with permission requests through program analysis. Both approaches require user efforts to locate suspicious behaviors. {\tt AsDroid}~\cite{asdroid} identifies the mismatch between UI and program behavior with heuristic rules. {\tt DroidJust}~\cite{chen2015droidjust} tracks the sensitive data flows to see whether they are eventually consumed by any human sensible API calls. Ringer et al.~\cite{ringeraudacious} design a GUI library for Android to regulate resource access initiated by UI elements. As these approaches rely on a small set of human crafted policies, they can only recognize certain misbehaviors within the domains. Most recently, machine learning has been used to automate the analysis of user interface. {\tt FlowIntent}~\cite{flowintent} and {\tt Backstage}~\cite{backstage} detect behavioral anomalies by examining all textual information shown on the foreground windows with supervised learning and unsupervised learning, respectively. Though similar in spirit, they touch upon a subset of the challenges that {\tt COSMOS} tries to address and only focus on static app auditing. We extend this line of research in several ways. First, we propose to protect contextual integrity through analyzing UI data from three distinctive perspectives: {\it who}, {\it when} and {\it what}. Second, we provide a two-layer machine learning framework that can automatically grant the necessary permission requests and reject the harmful requests without requiring user involvement, as well as improving the decision accuracy based on user feedback. Third, we implement our system on real devices to provide runtime protection and conduct comprehensive evaluations. \ignore{ FlowIntent~\cite{fu2016flowintent} searches user-unintended sensitive transmissions with the help of front-end user interfaces. Though similar in spirit, FlowIntent handles only a subset of the issues COSMOS does. DroidJust~\cite{chen2015droidjust} detects malicious connections who do not trigger user-observable behaviors. An adversary can evade the detection by invoking a widget that does not contribute to the functionality, i.e. loading an ad image. Wang \emph{et al.}~\cite{wang2015using} use keywords extracted from class names and method names to help infer the purpose of third-party custom code. However, obfuscation that creates human-unreadable class and method names are widely adopted in commercial apps recently, which compromises its overall effectiveness. Our technique provides another dimension towards the goal, which could be applied to improve the above techniques. Researchers conducted user studies to characterize user privacy preferences and leveraged machine learning to predict user decisions\cite{wagner2015, wagner2017, smarper}. They put entirely attention on user profiling and ignored rich semantic information available at user interface. However, the potential poor decisions made by users could put themselves at risk. COSMOS not only captures user intention with on-device self-adaptive learning, but also examines the underlying app intention with features derived from app user interface. Our two-layer model further protects users from malicious behaviors by training upon mobile virus. Yet, our design and implementation reflects more fine-grained foreground information to users for training unique preferences, which brings users one step closer to make right decisions. Moreover, each of them leveraged one single model for all types of permission. 
Instead, COSMOS constructs distinct models for different permission type, therefore eliminate the interferences among decisions from various sources. AUDACIOUS~\cite{ringeraudacious} proposes a library for developers to enforce the integrity of UI elements. Unlike COSMOS that stresses on zero developer effort, AUDACIOUS requires developers to integrate the library into their app and strictly follow the instructions. As an automatic tool, COSMOS does not involve human to generate security policies, whereas AUDACIOUS needs to feed pre-specified rules. More importantly, AUDACIOUS fails to consider the scene of each app usage case and therefore cannot distinguish the widgets who have similar bitmap. For example, a single button with a triangle image attached can represent actions for both ``recording'' and ``playing'', which indicates separate program logics and permission required. } \subsection{RQ2: Effectiveness of Capturing Personal Preferences} \begin{figure} \centering \includegraphics[height=2.2in,width=0.48\textwidth]{figure_user_res} \caption{The precision and recall of each user. We observed that some results were close and even identical, leading to the overlapping dots shown on the diagram. } \label{fig:user} \end{figure} We conducted a lab-based survey \footnote{Our user study was conducted with Institutional Review Board (IRB) approval.} to measure how effectively our models capture individual users' preferences, in which we asked participants to classify a set of requests that could not be faithfully labeled as either legal or illegal. The survey was created and distributed through Google Forms. Among the 24 participants, 3 were professors, 6 were undergraduate students and 15 were graduate students. Each participant was asked to classify 50 location-accessing requests collected from 40 real apps, covering several user-dependent scenarios such as shopping, photo geo-tagging, news, personal assistant and product rating. We collected 1,272 user decisions in total. To simulate the real decision making on device, for each request, the following information is displayed to the participants: 1) Screenshot: the screenshot taken from the app right after the request was initiated, with the triggering widget highlighted. 2) Prior event: the event that led to the request, such as app start and user clicking. 3) Meta-information: the app name and a Google Play link are included, so that the participants can find more information. We evaluated the effectiveness of our user preference modeling by updating the pre-trained model constructed during the evaluation phase of RQ1 with the decisions collected from each individual user. For each user's decisions, we randomly partitioned them into three sets and used two of the three sets as the training set to update the pre-trained model, and the remaining set as the test set. Our model yielded a median f-measure of 84.7\% among the 24 users, which is reasonably good given the limited number of samples. We expect our model to be more accurate with more user feedback in the future. Figure~\ref{fig:user} presents the detailed results for each individual; some results were close or even identical, leading to the overlapping dots in the figure. A quarter of the users' results have more than 90\% precision and 90\% recall. Our model performed surprisingly well for one individual, with 100\% precision and 100\% recall.
One individual tends to behave conservatively by rejecting nearly all requests, producing a sharp outlier in the lower-right corner with perfect precision but very low recall. We also observed that some users made inconsistent decisions under similar contexts. For instance, one user allowed a request from a product rating page but rejected another with a closely related context. One possible explanation is that sometimes users are less cautious and make random decisions, as suggested in~\cite{wagner2017}. Fortunately, our system can greatly help protect users from malicious behaviors caused by malware even if users make random decisions, since our generic model has already learned many misbehaviors in offline training. We also conducted a controlled experiment to test whether the fine-grained contextual information shown in our prompts can help users make better decisions. We used screenshots with location-based functionality at the center and a behavioral advertisement at the bottom. Without prompts, 79.2\% of the participants chose to grant the permission. After being alerted that the location requests were actually initiated by advertisements, 73.9\% of them changed their minds and rejected the requests. These results encourage the deployment of {\tt COSMOS} to better protect users against unintended requests. \subsection*{Performance on Real Devices and Case Studies} In this subsection, we measured the overhead incurred by {\tt COSMOS}. We installed the online module of {\tt COSMOS} on a Google Nexus 5 running Android 5.1.1 with a 2.26 GHz quad-core CPU and 2 GB of RAM. \noindent \textbf{CPU Time} We installed some popular apps from different categories on the phone, interacted with them as in common daily use, and monitored the performance overhead introduced by {\tt COSMOS}. The performance data were collected using the runtime profiling tool Traceview~\cite{traceview}, which is officially supported for debugging Android apps by tracking the performance of each method call. We modified the device firmware to let {\tt Traceview} monitor the released commercial apps without requiring their debuggable installation packages. The overhead introduced by {\tt COSMOS} is measured within the target monitored app. Table~\ref{tab:thread} shows the measured CPU overhead of {\tt COSMOS} when interacting with 5 representative apps installed on the phone. Each of these apps has at least 10 million installations according to Google Play. The first column gives the average number of sensitive requests made by each app per minute. The second column shows the average total time that {\tt COSMOS} spent on inspecting a request, excluding the time waiting for user's decisions. The third column gives the average CPU time that {\tt COSMOS} spent on a request, excluding the waiting time on I/O. The last column gives the percentage of the CPU time used by {\tt COSMOS} within an app over the total CPU time that app used during execution. Note that the value was measured within each target app, not the total CPU time used by the entire device. We observe that {\tt COSMOS} consumed less than 5\% of the total CPU time for all five apps, and the values vary considerably across apps. In particular, {\tt COSMOS} incurred the highest overhead on Wechat, which can be explained by two main reasons. First, Wechat intensively requests permissions when used. As a complicated communication and social app, it needs to access several sensors to provide functionalities such as voice input, location sharing, video call, etc.
It also periodically reads the device ID for analytical purposes. Second, Wechat adopts its own GUI library, which takes {\tt COSMOS} longer to analyze. Yelp and Yahoo Weather also frequently initiate sensitive requests. They continuously update locations to provide nearby services and weather information, respectively. Compared to Wechat, their UI structures are simpler. Amazon asks to access the microphone and location for its embedded voice assistant, which has limited foreground information and is triggered only after proper user interactions. During the experiment, Paypal only initiated sensitive requests when the app was first started. The lower frequency of permission requests and the simpler UI together led to the lowest overhead for Amazon and Paypal. \noindent \textbf{Memory Usage} Since the method profiling provided by {\tt Traceview} does not include memory cost, we estimated the rough memory usage of {\tt COSMOS} by dumping the runtime objects into files. We serialized the running {\tt COSMOS} objects and the related referenced objects, such as {\tt Weka} instances, at the decision points, at which memory use should peak. The average memory use was 5,712 KB over 50 separate decision points. Over 95\% of this memory can be attributed to the {\tt Weka} machine learning module. \noindent \textbf{Storage} The size of the installation package of our run-time control system is 8.7 MB. After installation, the total storage occupied is 19.86 MB, including the {\tt COSMOS} classes, {\tt Xposed} library, {\tt Weka} library, Android support library and the resource files. We can reduce the size by discarding the unused class files inside the libraries, and further reduction is possible by compressing some resource files. \noindent \textbf{Network bandwidth} {\tt COSMOS} does not generate any network traffic during normal use. This is a significant overhead reduction compared to cloud-based systems that continuously consume bandwidth to upload user data.
1,108,101,562,554
arxiv
\section{Introduction} Over the past years, many questions concerning the behavior of disordered systems have been put in a new perspective by addressing them from the point of view of the more general jamming scenario \cite{liu1998}. Especially for granular systems it has turned out to be very fruitful to study the changes in the properties and the response of granular packings as one approaches the jamming point from the jammed side, where the packing gets close to an isostatic solid. An isostatic packing is indeed essentially a marginal solid which has just enough contacts to maintain a stable packing. From simple counting arguments, one finds that the average coordination number $Z$ of a $d$-dimensional isostatic packing of frictionless spheres equals $Z_{\rm iso}=2d$ \cite{alexander}. Upon approaching this marginal solid, many static and dynamic properties exhibit anomalous behavior, associated with the fact that the excess number of average bonds, $\Delta Z\equiv Z-Z_{\rm iso} $, goes to zero \cite{epitome,wyart,wyartlett,wyartE}. In fact, $\Delta Z$ itself scales anomalously, namely as the square root of the difference in density from the one at jamming \cite{epitome}. Likewise, the ratio G/K of the shear modulus G over the compression modulus K is found to scale as $\Delta Z$, and the density of states of the vibrational modes becomes flat at low frequencies above some crossover frequency $\omega^* \sim \Delta Z$, due to the emergence of many low frequency modes. Much of this behavior was explained by Wyart {\em et al.}\cite{wyart,wyartlett,wyartE} in terms of the existence of an important cross-over length scale $\ell^*\sim 1/\Delta Z$, the length up to which the response is close to that of an isostatic packing. This scale $\ell^*$ diverges as the jamming point is approached, but is difficult to probe directly. Nevertheless, the length $\ell^*$ has recently been uncovered as the important cross-over length to continuum behavior in the static response \cite{wouter,ellenbroek2008}. Although most of these results pertain explicitly to packings of frictionless spheres, there are several indications \cite{somfai2007,shundyak2007} that many of these observations and ideas can be generalized to frictional packings. It has been noted in several studies that both the response to a local or global deformation \cite{wouter,tanguy} and the behavior of the vibrational eigenmodes \cite{wyart,wyartE} of a packing become much more disordered as one approaches the jamming point: as the snapshots of two vibrational modes in Fig.~\ref{snapshots} illustrate, far above the jamming point the eigenmodes have a structure reminiscent of what one gets in a continuum theory of an elastic medium, but close to the jamming point one is immediately struck by the appearance of many disordered ``swirls''. The arguments put forward by Wyart {\em et al.} \cite{wyart,wyartlett,wyartE} indicate that the excess low frequency modes cannot be localized on scales $\lesssim \ell^*$ since they are the vestiges of the {\em global} floppy modes that emerge at the isostatic point. Hence, if there are any low-frequency modes away from jamming and if indeed their localization length is $\gtrsim \ell^*$, we should see this as the jamming point is approached. The aim of this paper is to investigate whether this is indeed the case. \begin{figure} \onefigure[width=60mm]{fig1.eps} \caption{Snapshots of two low-frequency eigenmodes in our packings.
The arrows indicate the direction and magnitude of the displacements of the individual particles. (a) At low pressure $p=10^{-6}$, close to the jamming point, the mode is very disordered, whereas at high pressure (b) $p=3\cdot10^{-2}$, the mode is more reminiscent of an elastic shear wave. Similar features are seen in the response to a local or global deformation \cite{wyart,wouter,tanguy}.} \label{snapshots} \end{figure} Localization was discovered fifty years ago by Anderson\cite{anderson}, who in his study of non-interacting electrons in a random potential found that disorder can induce electron localization. Unlike the extended (delocalized) Bloch waves, in a localized state the weight of the electron wave function is concentrated near some point in space; the amplitude falls off as $e^{-r/\xi}$ with distance $r$ from the center. This defines the localization length $\xi(E)$ which depends on the electron energy $E$. The possibility that disorder can localize the eigenmodes of systems governed by wave equations is quite general and extends to many systems, not only sound modes \cite{john,bunde,sheng} but also gravity waves \cite{sheng}, light propagation \cite{sheng} and diffusion on random lattices \cite{bunde,sheng}. We will focus on the localization behavior of vibrational modes of 2$d$ frictionless packings. In two dimensions there is no localization-delocalization transition: in the presence of disorder the states are generally localized in the thermodynamic limit for cases like the one presented here \cite{john}. The dynamic response of granular packings is affected by three types of disorder --- bond disorder, mass disorder and topological packing disorder. Any of these is sufficient to cause localization, but in practice all three play a role for realistic models of granular packings: bond disorder is present for all force laws except one-sided harmonic springs, polydisperse particles will have varying masses, and topological disorder is naturally present except for especially prepared regular piles, like a regular stack of marbles. Of course, in computer models these effects can be separated easily; we will not attempt to disentangle these three contributions here, but do use this freedom later to our advantage in testing our scaling predictions. \begin{figure} \onefigure[width=60mm]{fig2.eps} \vspace{-0.2cm} \caption{Scatter plot of the angularly averaged $\xi$'s of all the 2000 modes of our granular packing of 1000 particles as a function of the frequency $\omega$ at pressure $p=4\cdot10^{-6}$ studied with the method explained in the text. Note the large scatter and the fact that the $\xi$ values are of order of the linear system size $L=45$ or larger throughout most of the frequency range. } \label{spread} \end{figure} The crucial dilemma in extracting the localization length of the vibrational modes of granular packings is that the effective disorder is so weak that one needs prohibitively large systems to reach the true localization regime $\xi \ll L$ for most modes. Here $L$ is the linear system size. At the same time, existing methods which are based on spatial averages (like the direct expression based on the second moment of the eigenmode or the (Inverse) Participation Ratio method \cite{haake}) do not give much insight into the structure of the modes when $\xi$ approaches the system size $L$, \emph{i.e.}, for modes which are extended throughout the finite system. 
As Fig.~\ref{spread} illustrates, this is the relevant regime throughout most of the frequency range, as only the modes with the highest frequency $\omega$ are truly localized. The method we introduce in this paper, which is motivated by earlier work on non-Hermitian quantum problems \cite{nelson}, is based on studying the response to an asymmetric perturbation. It not only gives the proper localization length $\xi$ of each localized mode, but at the same time assigns a well-defined and precise direction-dependent value $\xi (\phi)$ to each mode that spans our finite system --- see Fig.~\ref{angularplots}. We stress that although we will follow common practice in referring to $\xi$ as the {\em localization} length even for $\xi \gtrsim L$, one should keep in mind that many modes {\em extend} throughout our finite periodic system, as both Figs.~\ref{spread} and \ref{angularplots} illustrate. As we shall show, this method does allow us to study the scaling with system size and disorder, and opens up the possibility to bring Random Matrix Theory \cite{haake,beenakker} to bear on this class of problems. While all methods essentially yield the same localization length in the localization regime $\xi \ll L$, the extension of the concept of a localization length to the regime $\xi \stackrel{>}{\sim} L$ depends on the method used and it is not a priori clear what $\xi$ in this regime pertains to. For our method, one can however extract useful information about the large system limit from studying the regime $\xi \stackrel{>}{\sim} L$. In conventional methods, one finds $\xi\approx L$ if the system size is too small. With our method we also find a disorder dependence, which can be used to extract quantitative estimates of the intrinsic localization length. As we will discuss in a forthcoming paper, this is simplest in one dimension where we predict and find a scaling $\xi \simeq A L^{1/2}$ in the regime $ \xi \stackrel{>}{\sim} L$. Since we expect a crossover to the localization regime when the \emph{intrinsic} localization length obeys $\xi_{int} \simeq L$, this allows us to estimate the infinite-size localization length from the small-system data simply as $ \xi_{int} \simeq A^2$. Preliminary analysis \cite{zz} indicates that this simple estimate works well in 1$d$, but we focus here on the behavior as a function of the distance from the jamming point in 2$d$. \begin{figure} \begin{center} \includegraphics[width=25mm]{fig3a.eps}\hspace*{5mm} \includegraphics[width=25mm]{fig3b.eps}\hspace*{5mm} \includegraphics[width=25mm]{fig3c.eps} \end{center} \vspace{-0.5cm} \caption{Polar plot of the localization length $\xi(\phi)$ in one of our granular packings at $p=4\cdot10^{-4}$ at low (a), high (b), and intermediate (c) frequency. The angular variation of $\xi(\phi)$ is comparable to the angularly averaged value itself. } \label{angularplots} \end{figure} Our main results can be summarized as follows. {\em (i)} The average localization length $\bar{\xi}(\omega)$ of granular packings is largely independent of the pressure, and hence of the distance from the jamming point. {\em (ii)} However, away from the jamming point there are a few ``quasi-localized'' low-frequency modes which disappear when approaching jamming. This behavior is qualitatively in accord with theoretical expectations for the change in behavior near the jamming point.
{\em (iii)} In accord with what is expected on the basis of Random Matrix Theory (RMT) \cite{haake,beenakker}, modes with $\xi\lesssim L$ are effectively noninteracting and the distribution of their level spacing is Poissonian, while modes with $\xi \gtrsim L$ show level repulsion: the level spacing follows the so-called Wigner surmise of RMT. {\em (iv) } In the regime $\xi \gtrsim L$, $\bar{\xi}(\omega)$ scales as $L^{d/2}$ and is inversely proportional to the disorder strength, in $d$ dimensions. {\em (v)} Due to level repulsion the distribution $P(\xi) $ falls off for large $\xi$ as $1/ \xi^3$. \section{Method} We use 2$d$ packings of 1000 frictionless particles which are prepared using molecular dynamics simulations --- see \cite{wouter,somfai2007,shundyak2007} for the description of our algorithm that gently prepares packings at a target pressure and other details. The particles interact with the 3$d$ Hertzian force law, $f_{ij}\backsimeq\delta_{ij}^{3/2}$, where $\delta_{ij}$ is the overlap between particles $i$ and $j$. The unit of length is the average particle diameter. Unless noted otherwise we here present results for our most extensive studies with 20\% polydispersity in the radii, but runs with different amounts of polydispersity give similar results. The masses $m_i$ of the grains are taken proportional to $R_i^3$, corresponding to a packing of spheres in 2$d$. The confining pressure, with which we tune the distance from the jamming point, is in the range $p\in(10^{-6},3\cdot 10^{-2})$ in units of the Young modulus of the particles. We employ periodic boundary conditions in both directions. Our use of the 3$d$ Hertzian force law implies that the vibrational bonds $k_{ij}=df_{ij}/d\delta_{ij}\sim \delta_{ij}^{1/2} \sim p^{1/3}$ are disordered (they vary from bond to bond) and get weaker at smaller pressures. The natural frequency scale therefore goes down with pressure as $p^{1/6}$. As in \cite{somfai2007}, when reporting our data we will therefore always rescale all frequencies $\omega$ with a factor $p^{-1/6}$, so as to be able to compare data at different $p$. \begin{figure} \begin{center} \includegraphics[width=30mm]{fig4a.eps} \includegraphics[width=47mm]{fig4b.eps} \end{center} \vspace{-0.2cm} \caption{(a) Histogram of the ratio of squared amplitudes of the fourth (quadrupole) and second (dipole) harmonic at $p=4\cdot10^{-4}$. Most modes have predominantly dipole symmetry. (b) Average angular anisotropy $\Delta \xi / \bar{\xi} $ as a function of frequency for various pressures. } \label{anisotropy} \end{figure} The vibrational modes and their density of states (DOS) are obtained in the standard way, by expanding the energy about the equilibrium positions of the grains up to quadratic terms. Just as in solid state physics, the dynamical matrix, whose elements are the second derivatives of the energy with respect to the positions of the grains, determines the linear equations of motion of the vibrational modes. The dynamical matrix of a granular packing is a sparse symmetric matrix, because each particle only interacts with a few others. Our method to extract the localization length is motivated by the work of Hatano and Nelson \cite{nelson} on the delocalization transition in non-Hermitian transfer matrix problems arising in the statistical mechanics of vortex lines in superconductors. Consider first the case of a one-dimensional chain of masses connected by springs with spring constants $k_{ij}$ ($j=i\pm1$) and periodic boundary conditions.
We introduce an asymmetric bias term into the equations of motion so that the eigenvalue equation of a mode $u_i e^{-i\omega t}$ becomes \begin{equation} m_i \omega^2 u_i = \sum_{j=i\pm1}k_{ij} \left(e^{h\hat{x} \cdot \vec{x}_{ij}} u_{j}-u_i\right). \end{equation} Here $x_i$ are the rest positions of the particles and $\vec{x}_{ij}$ is a vector pointing from particle $i$ to particle $j$. For $h=0$ this is simply the dynamical equation for vibrations. The trick now is that we can extract the localization length $\xi_k$ of each mode $k$ by following whether or not its eigenvalue $\omega_k^2$ changes when we turn on $h$ in small steps. Indeed, as long as $h< 1/ \xi_k$ the eigenvalue $\omega^2_k$ will not change at all. To see this, note that in this case we can perform a ``gauge transformation'' to a field $\tilde{u}_i = {u}_i e^{h x_i}$ which obeys the original equation with $h=0$ {\em and} which falls off exponentially on both sides so that, in a large enough system, it obeys the periodic boundary conditions. This implies that for $h<1/\xi_k$, the eigenvalue $\omega^2_k$ does not change. However, once $h > 1/\xi_k$ the function $\tilde{u}$ obtained with this transformation does not fall off exponentially to both sides. Thus, it cannot obey the periodic boundary condition with the same eigenvalue as it had for $h<1/\xi_k$: its eigenvalue {\em has} to change! In practice, when we increase $h$ the eigenvalue $\omega^2_k$ starts to change rapidly and collide with a neighboring eigenvalue when $h\approx 1/\xi_k$; beyond that, when $h\gtrsim 1/\xi_k$ the eigenvalue $\omega^2_k$ moves into the complex plane \cite{nelson}. Hence we can simply obtain the localization length $\xi_k$ of each mode $k$ from the value $h_k$ at which the eigenvalue moves into the complex plane upon increasing $h$: $\xi_k=1/h_k$. Note that in this method we do not need to calculate the eigenfunctions explicitly --- we only need to track the eigenvalues! It is straightforward to extend this method to higher dimensions: as above, we simply multiply the off-diagonal elements of our dynamical matrix by an exponential $e^{\vec{r}_{ij}\cdot\vec{h}}$, where $\vec{r}_{ij}$ is the vector pointing from the center of particle $i$ to its neighbor $j$. Our probe field $\vec{h}$ is now a vector, so by changing the angle that $\vec{h}$ makes with the $x$-axis, we can extract the angular anisotropy of the localization length $\xi(\phi)$ of each mode.
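As a numerical illustration of the procedure, the sketch below implements the one-dimensional version: it builds the biased dynamical matrix of a disordered spring chain with unit masses, increases $h$ in small steps, keeps each mode's identity by matching eigenvalues to those of the previous step, and records the value $h_k$ at which the eigenvalue acquires an imaginary part, giving $\xi_k=1/h_k$. Modes are labelled by their frequency ordering at $h=0$. All parameters (chain length, disorder strength, step size) are illustrative choices and not those of the granular packings studied below.
\begin{verbatim}
# Minimal numerical sketch of the biased-matrix method for a 1d disordered
# chain with unit masses; all parameters are illustrative, not those of the
# 2d Hertzian packings studied in this paper.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
N = 64                                     # chain length, periodic boundary conditions
k = rng.uniform(0.5, 1.5, N)               # disordered spring constants k_{i,i+1}

def dyn_matrix(h):
    """Biased dynamical matrix: each bond's off-diagonal entries get exp(+-h)."""
    D = np.zeros((N, N))
    for i in range(N):
        j = (i + 1) % N
        D[i, i] += k[i]
        D[j, j] += k[i]
        D[i, j] -= k[i] * np.exp(+h)
        D[j, i] -= k[i] * np.exp(-h)
    return D

prev = np.sort(np.linalg.eigvals(dyn_matrix(0.0)))   # eigenvalues omega^2 at h = 0
omega = np.sqrt(np.abs(np.real(prev)))               # mode frequencies, for labelling
h_pop = np.full(N, np.nan)                           # h at which each mode turns complex

for h in np.linspace(0.0, 2.0, 201)[1:]:
    curr = np.linalg.eigvals(dyn_matrix(h))
    # match eigenvalues to the previous step so each mode keeps its identity
    cost = np.abs(prev[:, None] - curr[None, :])
    _, cols = linear_sum_assignment(cost)
    curr = curr[cols]
    newly = np.isnan(h_pop) & (np.abs(np.imag(curr)) > 1e-6)
    h_pop[newly] = h
    prev = curr

xi = 1.0 / h_pop     # localization length of each mode (NaN if it never popped)
\end{verbatim}
Matching eigenvalues between successive $h$ steps (here with a linear assignment) is what allows each eigenvalue to be followed once collisions start; in two dimensions the only change is that the scalar $h$ becomes the vector $\vec h$ multiplying $\vec r_{ij}$.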
\section{Results} \begin{figure} \onefigure[width=68mm]{fig5.eps} \caption{(a) DOS of our packings for 6 different pressures confirming the main features of earlier studies \cite{epitome,wyart,wyartlett,wyartE,somfai2007} close to jamming. (b) Our frequency binned and angularly averaged values $\bar{\xi}(\omega)/L$ are all very similar. (c,d) Level spacing statistics for the modes that have $\bar{\xi} \gtrsim L$ in (c) and for the modes with $\bar{\xi}\lesssim L$ in (d). The lower frequency modes are essentially all extended and do show level repulsion in accord with the predictions from RMT \cite{haake,beenakker}, while the high frequency modes are truly localized and their level spacing is close to Poissonian. The gray lines indicate the frequency ranges used to obtain the level statistics in (c) and (d).} \label{all} \end{figure} \begin{figure} \onefigure[width=87mm]{fig6.eps} \caption{Scatter plot for the localization lengths $\xi$ (determined to a precision of order unity) on a logarithmic frequency scale at $p=3 \cdot 10^{-2}$ in (a) and $p= 10^{-6}$ in (b) at system size $L=45$. Note that at the large pressure the lowest-frequency modes are localized; at small pressures this is not the case. The inset illustrates the two lowest-frequency modes, which have $\xi/L \approx 0.3$ in (a) and $\xi/L \approx 2$ in (b).} \label{logarithmicplot} \end{figure} We first discuss some properties of the localization length of individual modes before turning to their scaling as a function of frequency, system size and distance from the jamming point. {\em Anisotropy} --- Fig.~\ref{angularplots} shows the angular dependence $\xi(\phi) $ of three typical modes. One clearly sees that $\xi(\phi )$ is a $\pi$-periodic function and that the angular variation of $\xi(\phi)$ is significant. While a few modes, like the second one in Fig.~\ref{angularplots}, have a quadrupolar structure, the anisotropy is predominantly dipolar, as the histogram in Fig.~\ref{anisotropy}(a) shows. We will denote from here on the angularly averaged value of the localization length of an individual mode by $\xi$. Figure~\ref{anisotropy}(b) shows that the root mean square average angular variation $\Delta \xi$ of $\xi(\phi)$ is almost half of $\xi$, and that it is slightly larger at higher frequencies. There is no strong dependence of the anisotropy on the pressure, \emph{i.e.}, on the distance from the jamming point. {\em Spread} --- The angularly averaged values $\xi(\omega)$ also show a large spread, as Fig.~\ref{spread} illustrates for a small value of the pressure. One also sees from this figure that most modes have a value of $\xi\gtrsim L$, which means that they are extended within the systems we can analyze --- only our largest frequency modes are truly localized \cite{privatecommunication,theirresults}. {\em DOS} --- We now turn to a more systematic analysis of our data as a function of pressure and system size. In Fig.~\ref{all}(a) we show that the density of states (DOS) of our packings behaves as found before \cite{epitome,wyart,wyartlett,wyartE,somfai2007} for such packings: As the jamming point is approached by lowering the pressure, the density of low-frequency modes increases dramatically, which, as mentioned before, is due to the nearness of the isostatic point. {\em Average localization length $\bar{\xi}(\omega)$ } --- For each dataset of the individual angularly averaged values of $\xi$, as in Fig.~\ref{spread}, we determine the frequency binned average values $\bar{\xi}(\omega)$ (each based on about 100 to 200 modes). The behavior of $\bar{\xi}/L$ as a function of (scaled) frequency is shown in Fig.~\ref{all}(b) for six different values of the pressure. In these average values, there is no strong variation with pressure, \emph{i.e.} with distance to jamming. We already noted in Fig.~\ref{spread} that most of our eigenmodes have $\xi \gtrsim L$, \emph{i.e.} are extended in our finite system. This is also clear from Fig.~\ref{all}(b): at all but the largest frequencies we have $\bar{\xi}\gtrsim L$. There are indeed roughly three regimes present in Fig.~\ref{all}(b). From high frequencies towards low frequencies, we first have a range of high-frequency localized modes, for which $\bar{\xi}<L$. These modes are always present at any pressure and are the high-frequency modes in which only a few (light) particles oscillate more or less in anti-phase as in an optical mode (such modes generally arise immediately when disorder is introduced into an ordered system). For intermediate-range frequencies there is a plateau in $\bar{\xi}$.
Finally for the lowest frequencies (in the frequency range where actually the excess modes appear in the DOS in Fig.~\ref{all}(a) at low pressures), there is an indication of an upswing in $\bar{\xi}$ for small $\omega$. We find this upswing at low frequencies in all our data, also on percolation lattices \cite{zz}, where it is even more pronounced. {\em Quasi-localized low-frequency modes at high pressure} --- From the above data for the bin-averaged $\bar{\xi}$, it would appear at first sight that we see no signature of the nearness of the jamming point. This, however, is not true: in Fig.~\ref{all} we show data obtained by averaging over 100-200 modes. However, this averaging washes out systematic trends visible for the lowest frequency eigenmodes discovered by Vitelli, Xu \emph{et al.} \cite{theirresults,privatecommunication}. When plotted on a logarithmic scale, as in Fig.~\ref{logarithmicplot}, we see a systematic trend for $\xi$ of the low frequency modes to decrease with increasing pressure. As the inset of Fig.~\ref{logarithmicplot}(a) illustrates, these are ``quasi-localized'' modes in which a reasonably well defined ``localized'' group of particles performs what looks like a resonant oscillation that is weakly coupled to the extended elastic field. For our limited range of $L$, we find $\xi/L\simeq0.3$ and a reduced anisotropy of $\Delta \xi/\xi \approx 0.2$ for these modes. \begin{figure} \begin{center} \includegraphics[width=41mm]{fig7a.eps} \includegraphics[width=43mm]{fig7b.eps} \end{center} \vspace{-0.3cm} \caption{(a) Finite size scaling for $p=4\cdot10^{-6}$ and linear system size $L$ ranging from 15 to 45, confirming that the extended states, where $\bar{\xi}\gtrsim L$, scale with $L$, while the high-frequency modes are, within the statistical error, $L$-independent. (b) Scaling collapse of $\bar{\xi}$ according to (\ref{ourscaling}) for hexagonal lattices with identical springs but varying masses $m_i \propto R_i^3$, as for spheres. The distribution of radii $R_i$ is taken to be flat, and $W$ is taken to be the width of the distribution in percent. } \label{fss}\label{hexagonalresults} \end{figure} As we discussed in the introduction, for packings closer to the jamming point (at lower pressures) the isostaticity length $\ell^*$ increases as $1/\Delta Z$, where $\Delta Z$ is the excess contact number. Up to this scale $\ell^*$ we do not expect localized modes at low frequencies, since up to this scale the response mirrors that of the {\em global} floppy modes that emerge near the isostatic point. Indeed, within the system sizes we can study there are no low-frequency ``quasi-localized'' modes at all at low pressures, as Fig.~\ref{logarithmicplot}(b) illustrates for $p=10^{-6}$, even though the response is in many ways more disordered due to the nearness of the jamming point! While our data are qualitatively in accord with the above scenario, we have unfortunately too few low-frequency ``quasi-localized'' modes to confirm quantitatively that as we tune the packings closer to jamming the extent of the resonant region increases with $ \ell^* \sim 1/ \Delta Z$. 
{\em Level Spacing Statistics } --- Based on the results of RMT \cite{haake,beenakker}, one expects the following: the frequencies $\omega_i$ of the localized modes should be independently distributed, so that their spacing $\Delta \omega_i=\omega_{i+1}-\omega_i$ obeys a Poisson distribution, while the modes which extend throughout the system should interact and repel each other, with a level spacing distribution which is given by the Wigner surmise, $P_W(s)=(\pi s/2)\exp(-\pi s^2/4)$, where $s=\Delta \omega / \overline{\Delta \omega}$. Figs.~\ref{all}(c,d) confirm that this expectation is fully borne out by our data at all pressures. Note that the distribution in Fig.~\ref{all}(c) deviates somewhat from the Wigner surmise at the two highest pressures --- this is due to the ``quasi-localized'' low-frequency modes discussed above. {\em Scaling with system size $L$} --- One would of course expect $\bar{\xi}$ for the modes which extend through our system to scale with $L$. As we will sketch below, we have used RMT \cite{haake,beenakker} to derive for our method the scaling behavior $\xi \sim L^{d/2}$. More generally we propose \begin{equation} \bar{\xi} \sim L^{d/2}/W, \label{ourscaling} \end{equation} where $W$ is a measure of the effective disorder. Fig.~\ref{fss}(a) shows that the $\bar{\xi}\sim L$-scaling is well obeyed for our two-dimensional packings for the extended modes in the range $\omega \lesssim 3$ (as noted before, the quasi-localized modes obey $\bar{\xi}\simeq 0.3L$), while the high-frequency localized modes for $\omega \gtrsim 3.4$ have $\bar{\xi}$'s which are indeed essentially $L$-independent. For our gently prepared granular packings the strength of the disorder cannot easily be varied. In order to test our scaling prediction (\ref{ourscaling}), we have prepared {\em ordered} hexagonal lattices with all spring constants the same but varying masses. As Fig.~\ref{hexagonalresults}(b) shows, we obtain very good data collapse with (\ref{ourscaling}) at all but the highest frequencies. Note also that for a small amount of disorder, we have $\bar{\xi} \gg L$. Results for one-dimensional chains are fully consistent with the predicted $L^{1/2}$ scaling \cite{zz}. \section{Implications from Random Matrix Theory} Let us finally summarize what Random Matrix Theory (RMT) brings to bear on the study of the localization length. We refer for a more extensive discussion to \cite{zz}. {\em 1)} In RMT it is well known that for analyzing the level statistics, as in Fig.~\ref{all}(c,d), it is important to use the proper variable. The procedure to obtain the proper variable is the so-called ``unfolding of the spectrum'' \cite{haake}. For our case, the unfolding ensures that in each small interval, the mean level spacing is the same as in the original spectrum. The proper variables are then indeed the frequencies $\omega$, not the eigenvalues $\omega^2$ of the dynamical matrix. {\em 2)} The scaling (\ref{ourscaling}) of the modes with $\xi \gtrsim L$ can be understood as follows. When we turn on $h$, energy levels start to move on the real axis, some getting closer together and some further apart. Because of reflection symmetry under $\vec{h} \to - \vec{h}$ (which is also apparent in Fig.~\ref{angularplots}), the shift is quadratic in $h$. We determine $\xi$ of a mode from the collision value $h_{\rm c} $ at which two modes collide along the real $\omega$-axis and ``pop'' into the complex plane.
According to RMT \cite{haake}, the typical collision parameter is then $h_{c}^2\approx \frac{\text{mean level spacing}}{\text{typical level velocity}}$. For our systems the mean level spacing is proportional to $1/L^{d}$ and the typical level velocity does not depend on $L$, from which the scaling $\bar{\xi} \sim L^{d/2}$ immediately results upon identifying $h_c$ with $\xi^{-1}$. {\em 3)} In line with the large spread in the values of $\xi$, we find that the distribution $P(\xi /\bar{\xi})$ implied by Fig.~\ref{spread} falls off as $(\xi /\bar{\xi})^{-3}$ for large $\xi$ in both 1 and 2 dimensions. This power law decay can be derived from how the level repulsion of the extended modes changes when we change the perturbation parameter $h$ by $\Delta h$ \cite{zz}. \section{Conclusions and Outlook} In this paper we have introduced a new method, motivated by previous studies of non-Hermitian quantum problems \cite{nelson}, which allows an analysis of localization in the phonon spectrum, including the regime $\bar{\xi}\gtrsim L$ when the eigenmodes are extended within the finite systems we can study. The method is especially relevant for granular packings, where $\bar{\xi}\gtrsim L$ throughout most of the frequency range, since even in this regime our method gives different results depending on the amount of disorder. For the system sizes that are numerically accessible at present we cannot yet test the proposed scaling relation $\xi \gtrsim \ell^* \sim 1/ \Delta Z$ quantitatively. Nevertheless, the disappearance of the ``quasi-localized'' low-frequency modes as we approach the jamming point by lowering the pressure agrees with the scenario advanced by Wyart {\em et al.} \cite{wyart,wyartlett,wyartE} that the low-frequency rearrangements and modes extend over a length scale $\ell^*$ which diverges at the jamming point. We aim to study larger systems and more packings in the future using sparse matrix eigenvalue routines rather than a Mathematica program, to investigate the nature and scaling of the low-frequency modes in more detail. A few final remarks are in order. {\em (i)} Our method will allow us to determine which type of disorder (mass disorder, bond disorder or topological disorder) plays the dominant role in the localization behavior. {\em (ii)} The resonant region of the quasi-localized mode shown in Fig.~\ref{logarithmicplot} has quadrupolar symmetry, and is reminiscent of the quadrupolar deformation fields that have been proposed \cite{lemaitre} to dominate quasistatic shear relaxation. Although not all quasi-localized modes share this symmetry, the resemblance may not be accidental. The possible connection between these resonances and shear transformation zones is extremely intriguing and should be pursued further. {\em (iii)} The results for $\bar{\xi}(\omega)$ in finite systems typically show an upswing for small $\omega$, except at the largest pressures; whether this is a finite-system analogue of the well-known $\omega \to 0$ divergence in infinite 2$d$ systems \cite{john} is unclear to us. {\em (iv)} The states with large but finite localization lengths at low frequency that we find at high pressures (see Fig.~\ref{logarithmicplot}(a)) are intriguing. It will be interesting to see if these states persist in the presence of entropic interactions at finite temperature.
{\em (v)} Diffusion on percolation lattices is also an appealing model system to apply the method to: close to the percolation threshold most eigenmodes are truly localized and thus have $\xi \ll L$, while away from the percolation threshold there is a crossover to the regime where $\bar{\xi} \gg L$ \cite{zz}. \acknowledgments We are grateful to Sid Nagel, Andrea Liu and Vincenzo Vitelli for illuminating discussions and for urging us to focus on the lowest frequency modes, to Jens Bardarson, Martin van Hecke, Kostya Shundyak and Dani ben-Avraham for their interest and advice, and to Wouter Ellenbroek and Ell\'{a}k Somfai for supplying the granular packings needed for this study. DRN would like to acknowledge conversations with Bertrand I.~Halperin. ZZ acknowledges support from the physics foundation FOM, and DRN acknowledges support from the National Science Foundation through Grant DMR-0654191 and from the Harvard Materials Research Science and Engineering Center through Grant DMR-0213805.
\section{Introduction} In the first part of the paper, we squeeze some more results out of Brian Day's PhD thesis \cite{DayPhD}. The question with which the thesis began was how to extend monoidal structures along dense functors, all at the level of enriched categories. Brian separated the general problem into two special cases. The first case concerned extending along a Yoneda embedding, which led to promonoidal categories and Day convolution \cite{DayConv}. The second case involved extending along a reflection into a full subcategory: the Day Reflection Theorem \cite{DayRT}. While the thesis was about monoidal categories, we can, without even modifying the biggest diagrams, adapt the results to skew monoidal categories. Elsewhere \cite{115,116} we have discussed convolution. Here we will provide the skew version of the Day Reflection Theorem \cite{DayRT}. The beauty of this variant is further evidence that the direction choices involved in the skew notion are important for organizing, and adding depth to, certain mathematical phenomena. In the second part of the present paper, the skew warpings of \cite{115} are slightly generalized to involve a skew action; they can in turn be seen as a special case of the skew warpings of \cite{121}. Under certain natural conditions these warpings can be lifted to the category of Eilenberg-Moore coalgebras for a comonad. In particular, this applies to lift skew monoidal structures. For idempotent comonads, we compare the result with our skew reflection theorem. \section{Skew monoidal reflection} Recall from \cite{Szl2012,115,116} the notion of {\em (left) skew monoidal structure} on a category ${\mathscr X}$. It involves a functor $\otimes : {\mathscr X} \times {\mathscr X} \longrightarrow {\mathscr X}$, an object $I\in {\mathscr X}$, and natural families of (not necessarily invertible) morphisms $$\alpha_{A,B,C} : (A\otimes B)\otimes C \rightarrow A\otimes (B\otimes C) , \qquad \lambda_A : I\otimes A \rightarrow A , \qquad \rho_A : I\rightarrow A\otimes I ,$$ satisfying five coherence conditions. It was shown in \cite{JimA} that these five conditions are independent. Recall, also from these references, that an {\em opmonoidal structure} on a functor $L\colon {\mathscr X} \rightarrow {\mathscr A}$ consists of a natural family of morphisms $$\psi_{X,Y} \colon L(X\otimes Y)\rightarrow LX\bar{\otimes} LY$$ and a morphism $\psi_0 \colon LI \rightarrow \bar{I}$ satisfying three axioms. We say the opmonoidal functor is {\em normal} when $\psi_0$ is invertible. We say the opmonoidal functor is {\em strong} when $\psi_0$ and all $\psi_{X,Y}$ are invertible. However, in this paper, a limited amount of such strength, in which only certain components of $\psi$ are invertible, will be important. Suppose $({\mathscr X}, \otimes, I, \alpha, \lambda,\rho)$ and $({\mathscr A}, \bar{\otimes}, \bar{I}, \bar{\alpha}, \bar{\lambda},\bar{\rho})$ are skew monoidal categories. \begin{theorem}\label{skmonrefthm} Suppose $L\dashv N \colon {\mathscr A}\rightarrow {\mathscr X}$ is an adjunction with unit $\eta \colon 1_{{\mathscr X}}\Rightarrow NL$ and invertible counit $\varepsilon \colon LN \Rightarrow 1_{{\mathscr A}}$. Suppose ${\mathscr X}$ is skew monoidal. 
There exists a skew monoidal structure on ${\mathscr A}$ for which $L\colon {\mathscr X}\rightarrow {\mathscr A}$ is normal opmonoidal with each $\psi_{X,NB}$ invertible if and only if, for all $X\in {\mathscr X}$ and $B\in {\mathscr A}$, the morphism \begin{eqnarray}\label{moninv} L(\eta_X\otimes 1_{NB})\colon L(X\otimes NB)\rightarrow L(NLX\otimes NB) \end{eqnarray} is invertible. In that case, the skew monoidal structure on ${\mathscr A}$ is unique up to isomorphism. \end{theorem} \begin{proof} Suppose ${\mathscr A}$ has a skew monoidal structure $(\bar{\otimes}, \bar{I},\bar{\alpha}, \bar{\lambda},\bar{\rho})$ for which $L$ is normal opmonoidal with the $\psi_{X,NB}$ invertible. We have the commutative square $$ \xymatrix{ LX\bar{\otimes} LNB \ar[rr]^-{L\eta_X\bar{\otimes} 1} \ar[d]_-{\psi^{-1}} && LNLX\bar{\otimes} LNB \ar[d]^-{\psi^{-1}} \\ L(X\otimes NB) \ar[rr]^-{L(\eta_X\otimes 1)} && L(NLX\otimes NB)} $$ in which the vertical arrows are invertible. The top arrow is invertible with inverse $\varepsilon_{LX}\bar{\otimes}1$. So the bottom arrow is invertible. Conversely, suppose each $L(\eta_X\otimes 1_{NB})$ is invertible. Wishing $L$ to become opmonoidal with the limited strength, we are forced (up to isomorphism) to put $$A\bar{\otimes} B = L(NA\otimes NB) \ \text{ and } \ \bar{I} = LI \ ,$$ and to define the constraints $\bar{\alpha}, \bar{\lambda},\bar{\rho}$ by commutativity in the following diagrams. $$ \xymatrix{ L((NA\otimes NB)\otimes NC) \ar[rr]^-{L(\eta \otimes 1)} \ar[d]_-{L\alpha} && L(NL(NA\otimes NB)\otimes NC) \ar[d]^-{\bar{\alpha}} \\ L(NA\otimes (NB\otimes NC)) \ar[rr]_-{L(1\otimes \eta)} && L(NA\otimes NL(NB\otimes NC))} $$ $$ \xymatrix{ L(I\otimes NA) \ar[rr]^-{L(\eta_I\otimes 1)} \ar[d]_-{L\lambda} && L(NLI\otimes NA) \ar[d]^-{\bar{\lambda}} \\ LNA \ar[rr]_-{\varepsilon_A} && A} \qquad \xymatrix{ LNA \ar[rr]^-{\varepsilon_A} \ar[d]_-{L\rho} && A \ar[d]^-{\bar{\rho}} \\ L(NA\otimes I) \ar[rr]_-{L(1\otimes \eta_I)} && L(NA\otimes NLI)} $$ The definitions make sense because the top arrows of the squares are invertible (while the bottom arrows may not be). Now we need to verify the five axioms. The proofs all proceed by preceding the desired diagram of barred morphisms by suitable invertible morphisms involving only $\varepsilon_A$, $L\eta_X$, $\eta_{NA}$, or $L(\eta_X\otimes 1_{NB})$, then manipulating until one can make use of the corresponding unbarred diagram. The biggest diagram for this is the proof of the pentagon for $\bar{\alpha}$. Fortunately, the proof in Brian Day's thesis \cite{DayPhD} of the corresponding result for closed monoidal categories has the necessary Diagram 4.1.3 on page 94 written without any inverse isomorphisms, which saves us rewriting it here. (The notation is a little different with $\psi$ in place of $N$ and with some of the simplifications we also use below.) It remains to verify the other four axioms. The simplest of these is \begin{eqnarray*} \bar{\lambda}_{LI} \bar{\rho}_{LI} & = & \bar{\lambda}_{LI} \bar{\rho}_{LI}\varepsilon_{LI}L\eta_I \\ & = & \bar{\lambda}_{LI} L(1\otimes \eta_I) L\rho_{NLI}L\eta_I \\ & = & \bar{\lambda}_{LI} L(1\otimes \eta_I) L(\eta_I\otimes 1) L\rho_I \\ & = & \bar{\lambda}_{LI} L(\eta_I\otimes 1) L(1\otimes \eta_I) L\rho_I \\ & = & \varepsilon_{LI} L\lambda_{NLI} L(1\otimes \eta_I) L\rho_I \\ & = & \varepsilon_{LI} L\eta_I L\lambda_I L\rho_I \\ & = & 1_{LI} L(\lambda_I \rho_I) \\ & = & 1_{LI} \ . 
\end{eqnarray*} For the other three, to simplify the notation (but to perhaps complicate the reading), we write as if $N$ were an inclusion of a full subcategory, choose $L$ so that the counit is an identity, and write $XY$ for $X\otimes Y$. Then we have \begin{eqnarray*} \bar{\lambda}_{B\bar{\otimes}C} \bar{\alpha}_{LI,B,C} L(\eta_{(LI)\bar{\otimes}B}1_C)L((\eta_I1_B)1_C) & = & \bar{\lambda}_{B\bar{\otimes}C} L(1\eta_{BC})L\alpha_{LI,B,C} L((\eta_I1_B)1_C) \\ & = & \bar{\lambda}_{B\bar{\otimes}C} L(1_{LI}\eta_{BC})L(\eta_I1_{BC}) L\alpha_{I,B,C} \\ & = & \bar{\lambda}_{B\bar{\otimes}C} L(\eta_{I}1_{BC}) L(1_I\eta_{BC}) L\alpha_{I,B,C} \\ & = & L\lambda_{BC} L\alpha_{I,B,C} \\ & = & L(\lambda_{B} 1_C) \\ & = & (\bar{\lambda}_B\bar{\otimes}1_C)L(\eta_{(LI)B}1_C)L((\eta_I1_B)1_C) \end{eqnarray*} yielding the axiom $\bar{\lambda}_{B\bar{\otimes}C} \bar{\alpha}_{LI,B,C} = \bar{\lambda}_B\bar{\otimes}1_C$ on right cancellation. For the proof of the axiom $(1_A\bar{\otimes} \bar{\lambda}_C)\bar{\alpha}_{A,LI,C} (\bar{\rho}_A\bar{\otimes} 1_C) = 1_{A\bar{\otimes} C}$, we can look at Diagram 4.1.2 on page 93 of \cite{DayPhD}. The required commutativities are all there once we reverse the direction of the right unit constraint which Day calls $r$ instead of $\rho$. For the final axiom, we have \begin{eqnarray*} \bar{\alpha}_{A,B,LI} \bar{\rho}_{A\bar{\otimes}B} & = & \bar{\alpha}_{A,B,LI} L(\eta_{AB}1_{LI})L(1_{AB}\eta_I)L\rho_{AB} \\ & = & L(1_A\eta_{BLI}) L\alpha_{A,B,LI} L(1_{AB}\eta_I)L\rho_{AB} \\ & = & L(1_A\eta_{BLI}) L(1_A(1_B\eta_I)) L\alpha_{A,B,I} L\rho_{AB} \\ & = & L(1_A\eta_{BLI}) L(1_A(1_B\eta_I)) L(1_A\rho_B) \\ & = & 1_A\bar{\otimes} \bar{\rho}_B \ . \end{eqnarray*} The desired opmonoidal structure on $L$ is defined by $\psi_0 = 1 \colon LI\rightarrow \bar{I}$ and $\psi_{X,Y} = L(\eta_X\otimes \eta_Y) \colon L(X\otimes Y) \rightarrow L(NLX\otimes NLY)$. The three axioms for opmonoidality are easily checked and we have each $\psi_{X,NB} = L(1_{NLX}\otimes \eta_{NB})L(\eta_X\otimes 1_{NB})$ invertible. \end{proof} \section{A reflective lemma}\label{aLoR} In this section we state a standard result in a form required for later reference. For the sake of completeness, we include a proof. Assume we have an adjunction $L\dashv N \colon {\mathscr A}\rightarrow {\mathscr X}$ with unit $\eta \colon 1_{{\mathscr X}}\Rightarrow NL$ and counit $\varepsilon \colon LN \Rightarrow 1_{{\mathscr A}}$. Assume $N$ is fully faithful; that is, equivalently, the counit $\varepsilon$ is invertible. \begin{lemma}\label{reflem} For $Z\in {\mathscr X}$, the following conditions are equivalent: \begin{itemize} \item[(i)] there exists $A\in {\mathscr A}$ and $Z\cong NA$; \item[(ii)] for all $X\in {\mathscr X}$, the function ${\mathscr X}(\eta_X,1)\colon {\mathscr X}(NLX,Z)\rightarrow {\mathscr X}(X,Z)$ is surjective; \item[(iii)] the morphism $\eta_Z \colon Z\rightarrow NLZ$ is a coretraction (split monomorphism); \item[(iv)] the morphism $\eta_Z \colon Z\rightarrow NLZ$ is invertible; \item[(v)] for all $X\in {\mathscr X}$, the function ${\mathscr X}(\eta_X,1)\colon {\mathscr X}(NLX,Z)\rightarrow {\mathscr X}(X,Z)$ is invertible. 
\end{itemize} \end{lemma} \begin{proof} $(i) \Rightarrow (ii)$ \begin{eqnarray*} \xymatrix{ {\mathscr X}(X,Z) \ar[r]^-{\cong} \ar[d]_-{1} & {\mathscr X}(X,NA) \ar[r]^-{\cong} & {\mathscr A}(LX,A) \ar[d]^-{N} \\ {\mathscr X}(X,Z) & {\mathscr X}(NLX,Z) \ar[l]_-{{\mathscr X}(\eta_X,1)} & {\mathscr X}(NLX,NA)\ar[l]_-{\cong} } \end{eqnarray*} $(ii) \Rightarrow (iii)$ Take $X=Z$ and obtain $\nu \colon NLZ \rightarrow Z$ with ${\mathscr X}(\eta_Z,1)\nu=1_Z$. $(iii) \Rightarrow (iv)$ If $\nu \eta_Z=1$ then $(\eta_Z \nu) \eta_Z=1 \eta_Z$, so, by the universal property of $\eta_Z$, we have $\eta_Z \nu = 1$. $(iv) \Rightarrow (v)$ The non-horizontal arrows in the commutative diagram $$\xymatrix{ {\mathscr X}(NLX,Z) \ar[d]_{{\mathscr X}(1,\eta_Z)} \ar[rr]^{{\mathscr X}(\eta_X,1)} && {\mathscr X}(X,Z) \ar[d]^{{\mathscr X}(1,\eta_Z)} \\ {\mathscr X}(NLX,NLZ)\ar[rr]^{{\mathscr X}(\eta_X,1)} && {\mathscr X}(X,NLZ) \\ & {\mathscr A}(LX,LZ) \ar[lu]^-{N} \ar[ru]_-{\cong} }$$ are all invertible, so the horizontal arrows are invertible too. $(v) \Rightarrow (i)$ Clearly $(v) \Rightarrow (ii)$ and we already have $(ii) \Rightarrow (iii)\Rightarrow (iv)$, so take $A=LZ$ and the invertible $\eta_Z$. \end{proof} \section{Skew closed reflection} The Reflection Theorem \cite{DayRT} also deals with closed structure. If, for objects $Y$ and $Z$ the functor ${\mathscr X}(-\otimes Y,Z)$ is representable, say via a natural isomorphism $${\mathscr X}(X\otimes Y,Z)\cong {\mathscr X}(X,[Y,Z]),$$ we call the representing object $[Y,Z]$ a {\em left internal hom}. Recall from Section 8 of \cite{116} that if this exists for all $Z$, so that $-\otimes Y$ has a right adjoint, then ${\mathscr X}$ becomes left skew closed. \begin{theorem}\label{skclosedrefthm} Suppose $L\dashv N \colon {\mathscr A}\rightarrow {\mathscr X}$ is an adjunction with unit $\eta \colon 1_{{\mathscr X}}\Rightarrow NL$ and invertible counit $\varepsilon \colon LN \Rightarrow 1_{{\mathscr A}}$. Suppose ${\mathscr X}$ is skew monoidal and left internal homs of the form $[NB,NC]$ exist for all $B,C\in {\mathscr A}$. The morphisms \eqref{moninv} are invertible for all $X \in {\mathscr X}$ and $B\in {\mathscr A}$ if and only if the morphisms \begin{eqnarray}\label{lclosedinv} \eta_{[NB,NC]} \colon [NB,NC] \rightarrow NL[NB,NC] \end{eqnarray} are invertible for all $B, C\in {\mathscr A}$. In that case, the skew monoidal structure abiding on ${\mathscr A}$, as seen from Theorem~\ref{skmonrefthm}, is left closed. Also, the functor $N$ is strong left closed. \end{theorem} \begin{proof} Consider the following commutative diagram. $$ \xymatrix{ {\mathscr A}(L(NLX\otimes NB),C) \ar[rr]^-{{\mathscr A}(L(\eta \otimes 1),1)} \ar[d]_-{\cong} && {\mathscr A}(L(X\otimes NB),C) \ar[d]^-{\cong} \\ {\mathscr X}(NLX\otimes NB,NC) \ar[d]_-{\cong} && {\mathscr X}(X\otimes NB,NC) \ar[d]^-{\cong} \\ {\mathscr X}(NLX,[NB,NC]) \ar[rr]_-{{\mathscr X}(\eta_X,1)} && {\mathscr X}(X,[NB,NC])} $$ Invertibility of the arrows \eqref{moninv} is equivalent to the invertibility of the top horizontal arrows. This is equivalent to invertibility of the bottom horizontal arrows. By Lemma~\ref{reflem}, this is equivalent to invertibility of the arrows \eqref{lclosedinv}. 
For the penultimate sentence of the Theorem, we now have the natural isomorphisms: \begin{eqnarray*} {\mathscr A}(A\bar{\otimes} B,C) & \cong & {\mathscr X}(NA\otimes NB,NC) \\ & \cong & {\mathscr X}(NA,[NB,NC]) \\ & \cong & {\mathscr X}(NA,NL[NB,NC]) \\ & \cong & {\mathscr A}(A,L[NB,NC]) \end{eqnarray*} yielding the left internal hom $[B,C]=L[NB,NC]$ for ${\mathscr A}$. For the last sentence, we have $N[B,C]=NL[NB,NC]\cong [NB,NC]$. \end{proof} Our notation for a right adjoint to $X\otimes -$ is $${\mathscr X}(X\otimes Y,Z) \cong {\mathscr X}(Y,\langle X,Z\rangle) \ .$$ The {\em right internal hom} $\langle X,Z\rangle$ may exist for only certain objects $Z$. In general, the existence of right homs in a left skew monoidal category does not give a left or right skew closed structure. When they do exist, we can reinterpret a stronger form of the invertibility condition \eqref{moninv} of Theorem~\ref{skmonrefthm}. \begin{theorem}\label{rskclosedrefthm} Suppose $L\dashv N \colon {\mathscr A}\rightarrow {\mathscr X}$ is an adjunction with unit $\eta \colon 1_{{\mathscr X}}\Rightarrow NL$ and invertible counit $\varepsilon \colon LN \Rightarrow 1_{{\mathscr A}}$. Suppose ${\mathscr X}$ is skew monoidal, and left internal homs of the form $[Y,NC]$ and right internal homs of the form $\langle X,NC\rangle$ exist. The invertibility of one of the following three natural transformations implies invertibility of the other two: \begin{eqnarray}\label{moninvY} L(\eta_X\otimes 1_{Y})\colon L(X\otimes Y)\rightarrow L(NLX\otimes Y) \ ; \end{eqnarray} \begin{eqnarray}\label{lclosedinvY} \eta_{[Y,NC]} \colon [Y,NC] \rightarrow NL[Y,NC] \ ; \end{eqnarray} \begin{eqnarray}\label{rclosedinvY} {\langle \eta_X,NC\rangle} \colon \langle NLX,NC\rangle \rightarrow \langle X,NC\rangle \ . \end{eqnarray} \end{theorem} \begin{proof} Consider the commutative diagram \eqref{rightproof}. Invertibility of any one of the horizontal families in the diagram implies that of the other two. Invertibility of the arrows \eqref{moninvY} is equivalent to the invertibility of the top horizontal family. By Lemma~\ref{reflem}, invertibility of the middle horizontal family is equivalent to invertibility of the arrows \eqref{lclosedinvY}. By the Yoneda Lemma, invertibility of the bottom horizontal family is equivalent to invertibility of the arrows \eqref{rclosedinvY}. \end{proof} \begin{eqnarray}\label{rightproof} \begin{aligned} \xymatrix{ {\mathscr A}(L(NLX\otimes Y),C) \ar[rr]^-{{\mathscr A}(L(\eta \otimes 1),1)} \ar[d]_-{\cong} && {\mathscr A}(L(X\otimes Y),C) \ar[d]^-{\cong} \\ {\mathscr X}(NLX\otimes Y,NC) \ar[d]_-{\cong} && {\mathscr X}(X\otimes Y,NC) \ar[d]^-{\cong} \\ {\mathscr X}(NLX,[Y,NC]) \ar[rr]^-{{\mathscr X}(\eta_X,1)} \ar[d]_-{\cong} && {\mathscr X}(X,[Y,NC]) \ar[d]_-{\cong} \\ {\mathscr X}(Y,\langle NLX,NC\rangle) \ar[rr]^-{{\mathscr X}(1,\langle \eta_X,1\rangle)} && {\mathscr X}(Y,\langle X,NC\rangle)} \end{aligned} \end{eqnarray} \section{An example}\label{ex} This is an example of the opposite (dual) of Theorem~\ref{skmonrefthm} which we enunciate explicitly as Proposition~\ref{skmoncorefprop} below. Instead of a reflection we have a coreflection. To keep using left skew monoidal categories we also reverse the tensor product. 
For a monoidal functor $R \colon {\mathscr X} \to {\mathscr A}$, we denote the structural morphisms by $$\phi_0\colon I \to RI \ \text{ and } \ \phi_{X,Y}\colon RX\otimes RY\to R(X\otimes Y) \ .$$ \begin{proposition}\label{skmoncorefprop} Suppose $R\vdash N \colon {\mathscr A}\rightarrow {\mathscr X}$ is an adjunction with counit $\varepsilon \colon NR \Rightarrow 1_{{\mathscr X}}$ and invertible unit $\eta \colon 1_{{\mathscr A}}\Rightarrow RN$. Suppose ${\mathscr X}$ is left skew monoidal. There exists a left skew monoidal structure on ${\mathscr A}$ for which $R\colon {\mathscr X}\rightarrow {\mathscr A}$ is normal monoidal with each $\phi_{NA,Y}$ invertible if and only if, for all $A\in {\mathscr A}$ and $Y\in {\mathscr X}$, the morphism \begin{eqnarray}\label{comoninv} R(NA\otimes \varepsilon_Y)\colon R(NA\otimes NRY)\rightarrow R(NA\otimes Y) \end{eqnarray} is invertible. \end{proposition} Consider an injective function $\mu \colon U\rightarrow O$. For an object $A$ of the slice category $\mathrm{Set}/U$, we write $A_u$ for the fibre over $u\in U$. We have an adjunction $$R\vdash N \colon \mathrm{Set}/U \rightarrow \mathrm{Set}/O$$ defined by $(NA)_i = \sum_{\mu(u)=i}{A_u}$ and $(RX)_u = X_{\mu(u)}$ with invertible unit. The $i$th component of the counit $\varepsilon_X\colon NRX \rightarrow X$ is the function $\sum_{\mu(u)=i}X_{\mu(u)} \rightarrow X_i$ which is the identity of $X_i$ when $i$ is in the image of $\mu$. Let ${\mathscr C}$ be a category with $\mathrm{ob}{\mathscr C} = O$. Then $\mathrm{Set}/O$ becomes left skew monoidal on defining the tensor $X\otimes Y$ by $$(X\otimes Y)_j = \sum_i{X_i\times {\mathscr C}(i,j)\times Y_j}$$ and the (skew) unit $I$ by $I_j=1$. The associativity constraint $\alpha \colon (X\otimes Y)\otimes Z \rightarrow X\otimes (Y\otimes Z)$ is defined by the component functions $$\sum_{i,j}{X_i\times {\mathscr C}(i,j)\times Y_j\times {\mathscr C}(j,k)\times Z_k}\rightarrow \sum_{i,j}{X_i\times {\mathscr C}(i,k)\times Y_j\times {\mathscr C}(j,k)\times Z_k}$$ induced by the functions $$ {\mathscr C}(i,j)\times {\mathscr C}(j,k) \rightarrow {\mathscr C}(i,k)\times {\mathscr C}(j,k)$$ taking $(a \colon i\rightarrow j,b\colon j\rightarrow k)$ to $(b\circ a \colon i\rightarrow k, b\colon j\rightarrow k)$. Define $\lambda_Y\colon I\otimes Y\rightarrow Y$ to have $j$-component $\sum_i{{\mathscr C}(i,j)\times Y_j} \rightarrow Y_j$ whose restriction to the $i$th injection is the second projection onto $Y_j$. Define $\rho_X \colon X\rightarrow X\otimes I$ to have $j$-component $X_j \rightarrow \sum_i{X_i\times {\mathscr C}(i,j)}$ equal to the composite of $X_j\rightarrow X_j\times {\mathscr C}(j,j), \ x \mapsto (x,1_j),$ with the $j$th injection. This provides an example of Proposition~\ref{skmoncorefprop}. In fact, it satisfies the stronger condition of the dual to Theorem~\ref{rskclosedrefthm}. To see that $$R(X \otimes \varepsilon_Y)\colon R(X\otimes NRY)\rightarrow R(X\otimes Y)$$ is invertible, since $N$ is fully faithful, we need to prove $$G(X \otimes \varepsilon_Y)\colon G(X\otimes GY)\rightarrow G(X\otimes Y)$$ is invertible where $G = NR$ is the idempotent comonad generated by the coreflection. Notice that $(GX)_j = U_j\times X_j$ where $U_j$ is the fibre of $\mu$ over $j\in O$. 
Since $\mu$ is injective, $U_j\cong U_j\times U_j$, so \begin{eqnarray*} G(X\otimes GY)_j & = &U_j\times (X\otimes GY)_{j} \\ & = & U_j\times \sum_i{X_i\times {\mathscr C}(i,j)\times (GY)_j} \\ & = & \sum_i{U_j\times X_i\times {\mathscr C}(i,j)\times U_j\times Y_j} \\ & \cong & \sum_i{U_j\times X_i\times {\mathscr C}(i,j) \times Y_j} \\ & = & U_j\times (X\otimes Y)_{j} \\ & = & G(X\otimes Y)_{j} \ . \end{eqnarray*} The resultant left skew structure on $\mathrm{Set}/U$ has tensor product \begin{eqnarray*} (A\bar{\otimes} B)_v & = & R(NA\otimes NB)_v \\ & = & (NA\otimes NB)_{\mu(v)} \\ & = & \sum_i{(NA)_i\times {\mathscr C}(i,\mu(v))\times (NB)_{\mu(v)}} \\ & \cong & \sum_u{A_{u}\times {\mathscr C}(\mu(u),\mu(v))\times B_v} \ . \end{eqnarray*} Of course we can see that this is merely the left skew structure on $\mathrm{Set}/U$ arising from the category whose objects are the elements $u\in U$ and whose morphisms $u\rightarrow v$ are morphisms $\mu(u)\rightarrow \mu(v)$ in ${\mathscr C}$; that is, the category arising as the full image of the functor $\mu \colon U\rightarrow {\mathscr C}$. As an easy exercise the reader might like to calculate the monoidal structure $$RX\bar{\otimes} RY \rightarrow R(X\otimes Y)$$ on $R$ and check that these components are not invertible in general while, of course, they are for $X=NA$. \section{Skew warpings riding a skew action}\label{sw} We slightly generalize the notion of skew warping defined in \cite{115} to involve an action. This is actually a special case of skew warping on a two-object skew bicategory in the sense of \cite{121}. Let ${\mathscr C}$ denote a left skew monoidal category. A {\em left skew action} of ${\mathscr C}$ on a category ${\mathscr A}$ is an opmonoidal functor \begin{eqnarray}\label{lsa} {\mathscr C} \longrightarrow [{\mathscr A},{\mathscr A}] \ , \ X \mapsto X\star - \end{eqnarray} where the skew monoidal (in fact strict monoidal) tensor product on the endofunctor category $[{\mathscr A},{\mathscr A}]$ is composition. The opmonoidal structure on \eqref{lsa} consists of natural families \begin{eqnarray}\label{lsas} \alpha_{X,Y,A} \colon (X\otimes Y)\star A \longrightarrow X\star (Y\star A) \ \text{ and } \ \lambda_A \colon I\star A \longrightarrow A \end{eqnarray} subject to the three axioms \eqref{lsa1}, \eqref{lsa2}, \eqref{lsa3}. \begin{eqnarray}\label{lsa1} \begin{aligned} \xymatrix{ ((X\otimes Y)\otimes Z)\star A \ar[rr]^-{\alpha} \ar[d]_-{\alpha\star 1} & & (X\otimes Y)\star (Z\star A) \ar[d]^-{\alpha} \\ (X\otimes (Y\otimes Z))\star A \ar[r]_-{\alpha} & X\star ((Y\otimes Z)\star A) \ar[r]_-{1\star\alpha} & X\star (Y\star (Z\star A)) } \end{aligned} \end{eqnarray} \begin{eqnarray}\label{lsa2} \begin{aligned} \xymatrix{ (I\otimes Y)\star A \ar[rd]_{\lambda\star 1}\ar[rr]^{\alpha} && I\star (Y\star A) \ar[ld]^{\lambda} \\ & Y\star A & } \end{aligned} \end{eqnarray} \begin{eqnarray}\label{lsa3} \begin{aligned} \xymatrix{ (X\otimes I)\star A \ar[rr]^-{\alpha} && X\star (I\star A) \ar[d]^-{1\star \lambda} \\ X\star A \ar[rr]_-{1} \ar[u]^-{\rho\star 1} && X\star A} \end{aligned} \end{eqnarray} A category ${\mathscr A}$ equipped with a skew action of ${\mathscr C}$ is called a {\em skew ${\mathscr C}$-actegory}. 
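For orientation we note the most basic example, which is simply a restatement of the skew monoidal axioms: every left skew monoidal category ${\mathscr C}$ acts on itself by $X\star A = X\otimes A$, with the structure morphisms \eqref{lsas} given by the constraints $\alpha_{X,Y,A}$ and $\lambda_A$ of ${\mathscr C}$; the axioms \eqref{lsa1}, \eqref{lsa2} and \eqref{lsa3} then become the pentagon condition and two of the unit conditions among the five coherence conditions for ${\mathscr C}$, so that ${\mathscr C}$ is a skew ${\mathscr C}$-actegory. This is the action implicitly in play in the Example below, where the warpings of \cite{115} are recovered by taking ${\mathscr A}={\mathscr C}$.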
A {\em skew left warping} riding the skew action of ${\mathscr C}$ on ${\mathscr A}$ consists of the following data: \begin{itemize} \item[(a)] a functor $T:{\mathscr A} \longrightarrow {\mathscr C}$; \item[(b)] an object $K$ of ${\mathscr A}$; \item[(c)] a natural family of morphisms $v_{A,B} : T(TA\star B)\longrightarrow TA\otimes TB$ in ${\mathscr C}$; \item[(d)] a morphism $v_0 : TK \longrightarrow I$; and, \item[(e)] a natural family of morphisms $k_A : A \longrightarrow TA\star K$; \end{itemize} such that the following five diagrams commute. \begin{equation}\label{warpassoc} \begin{aligned} \xymatrix{ T(TA\star B)\otimes TC \ar[rr]^-{v_{A,B}\otimes 1} && (TA\otimes TB)\otimes TC \ar[d]^-{\alpha_{TA,TB,TC}} \\ T(T(TA\star B)\star C) \ar[u]^-{v_{TA\star B , C}} \ar[d]_-{T(v_{A,B}\star 1)} && TA\otimes (TB\otimes TC) \\ T((TA\otimes TB)\star C) \ar[dr]_-{T\alpha_{TA,TB,C}\phantom{AA}} && TA\otimes T(TB\star C) \ar[u]_-{1\otimes v_{B,C}} \\ & T(TA\star (TB\star C)) \ar[ru]_-{\phantom{AA}v_{A,TB\star C}} &} \end{aligned} \end{equation} \begin{equation}\label{warpunit1} \begin{aligned} \xymatrix{ & TK\otimes TB \ar[rd]^-{ v_0 \otimes 1_{TB}} & \\ T(TK\star B) \ar[ru]^-{v_{K,B}} \ar[d]_-{T(v_0\star 1_B)} & & I\otimes TB \ar[d]^-{\lambda_{TB}} \\ T(I\otimes B) \ar[rr]_-{T\lambda_B} & & TB } \end{aligned} \end{equation} \begin{equation}\label{warpunit2} \begin{aligned} \xymatrix{ T(TA\star K) \ar[rr]^-{v_{A,K}} && TA\otimes TK \ar[d]^-{1\otimes v_{0}} \\ TA \ar[u]^-{Tk_{A}} \ar[rr]_{\rho_{TA}} && TA\otimes I }\end{aligned} \end{equation} \begin{equation}\label{warpunit3} \begin{aligned} \xymatrix{ T(TA\star B)\star K \ar[rr]^-{v_{A,B}\star 1_K} && (TA\otimes TB)\star K \ar[d]^-{\alpha_{TA,TB,K}} \\ TA\star B \ar[u]^-{k_{TA\star B}} \ar[rr]_{1_{TA}\star k_B} && TA\star (TB\star K) } \end{aligned} \end{equation} \begin{equation}\label{warpunit4} \begin{aligned} \xymatrix{ TK\star K \ar[rr]^-{v_0\star1_K} && I\star K \ar[d]^-{\lambda_K} \\ K \ar[rr]_-{1_K} \ar[u]^-{k_K} && K} \end{aligned} \end{equation} \begin{example} A skew warping on a skew monoidal category (in the sense of \cite{115}) is just the case where ${\mathscr A}={\mathscr C}$ with tensor as action. \end{example} Just as in Proposition 3.6 of \cite{115}, we obtain a skew monoidal structure from a skew warping. \begin{proposition}\label{newskew} A skew left warping riding a left skew action of a left skew monoidal category ${\mathscr C}$ on a category ${\mathscr A}$ determines left skew monoidal structure on ${\mathscr A}$ as follows: \begin{itemize} \item[(a)] tensor product functor $A\bar{\otimes} B = TA\star B$; \item[(b)] unit $K$; \item[(c)] associativity constraint $$T(TA\star B)\star C \stackrel{v_{A,B}\star 1_C}\longrightarrow (TA\otimes TB) \star C \stackrel{\alpha_{TA,TB,C}}\longrightarrow TA\star (TB\star C) \ ;$$ \item[(d)] left unit constraint $$TK\star B\stackrel{v_0\star 1_B}\longrightarrow I\star B \stackrel{\lambda_B}\longrightarrow B \ ;$$ \item[(e)] right unit constraint $$A\stackrel{k_A} \longrightarrow TA\star K \ .$$ \end{itemize} There is an opmonoidal functor $(T, v_0 , v_{A,B}) : ({\mathscr A} , \bar{\otimes} , K) \longrightarrow ({\mathscr C} , \otimes , I)$. \end{proposition} \begin{example} Skew warpings are more basic than skew monoidal structures in the following sense. Just pretend, for the moment, that we do not know what a skew monoidal (or even monoidal) category is, except that we would like endofunctor categories to be examples. 
For any category ${\mathscr A}$, the endofunctor category ${\mathscr C} = [{\mathscr A},{\mathscr A}]$ acts on ${\mathscr A}$ by evaluation; as a functor \eqref{lsa}, the action is the identity. A left skew warping riding this action could be taken as the definition of a left skew monoidal structure on ${\mathscr A}$. \end{example} \section{Comonads on skew actegories} For a left skew monoidal category ${\mathscr C}$, let $\ensuremath{\mathrm{Cat}}\xspace^{\mathscr C}$ denote the 2-category whose objects are left skew ${\mathscr C}$-actegories as defined in Section~\ref{sw}. A morphism is a functor $F \colon {\mathscr A} \to {\mathscr B}$ equipped with a natural family of morphisms \begin{eqnarray} \gamma_{X,A}\colon X\star FA \longrightarrow F(X\star A) \end{eqnarray} such that \eqref{am1} and \eqref{am2} commute. \begin{equation}\label{am1} \begin{aligned} \xymatrix{ (X\otimes Y)\star FA \ar[rr]^{\gamma} \ar[d]_{\alpha} && F((X\otimes Y)\star A) \ar[d]^{F\alpha} \\ X\star(Y\star FA) \ar[r]_{1\star\gamma} & X\star F(Y\star A) \ar[r]_{\gamma} &F( X\star(Y\star A)) } \end{aligned} \end{equation} \begin{equation} \label{am2} \begin{aligned} \xymatrix{ I\star FA \ar[r]^{\gamma} \ar[dr]_{\lambda} & F(I\star A) \ar[d]^{F\lambda} \\ & FA } \end{aligned} \end{equation} Such a morphism is called {\em strong} when each $\gamma_{X,A}$ is invertible. A 2-cell $\xi \colon (F,\gamma) \Rightarrow (G,\gamma)$ in $\ensuremath{\mathrm{Cat}}\xspace^{\mathscr C}$ is a natural transformation $\xi \colon F \Rightarrow G$ such that \eqref{am2cell} commutes. \begin{equation} \label{am2cell} \begin{aligned} \xymatrix{ X\star FA \ar[r]^{\gamma_{X,A}} \ar[d]_{1\star\xi_A} & F(X\star A) \ar[d]^{\xi_{X\star A}} \\ X\star GA \ar[r]_{\gamma_{X,A}} & G(X\star A) } \end{aligned} \end{equation} As usual with actions, there is another way to view the 2-category $\ensuremath{\mathrm{Cat}}\xspace^{\mathscr C}$. Regard ${\mathscr C}$ as the homcategory of a 1-object skew bicategory $\Sigma{\mathscr C}$ in the sense of Section 3 of \cite{121}. A left skew ${\mathscr C}$-actegory is an oplax functor ${\mathscr A} \colon \Sigma{\mathscr C} \to \ensuremath{\mathrm{Cat}}\xspace$. A morphism $(F,\gamma) \colon {\mathscr A} \to {\mathscr B}$ in $\ensuremath{\mathrm{Cat}}\xspace^{\mathscr C}$ can be identified with a lax natural transformation between the oplax functors. The 2-cells are the modifications. We are interested in comonads $({\mathscr A}, G,\gamma, \delta, \epsilon)$ in the 2-category $\ensuremath{\mathrm{Cat}}\xspace^{\mathscr C}$. These are objects of the 2-category $\mathrm{Mnd}_*(\ensuremath{\mathrm{Cat}}\xspace^{\mathscr C})$ as defined in \cite{3}. Alternatively, they are oplax functors $\Sigma{\mathscr C} \to \mathrm{Mnd}_*(\ensuremath{\mathrm{Cat}}\xspace)$. For later reference, apart from the conditions for being a comonad on ${\mathscr A}$ and the conditions \eqref{am1} and \eqref{am2}, we require commutativity of \eqref{am3}. 
\begin{equation} \label{am3} \begin{aligned} \xymatrix{ X\star GA \ar[rr]^{\gamma} \ar[d]_{1\star \delta} & & G(X\star A) \ar[d]^{\delta} \\ X\star G^2A \ar[r]_{\gamma} & G(X\star GA) \ar[r]_{G\gamma} & G^2(X\star A) } \quad \xymatrix{ X\star GA \ar[r]^{\gamma} \ar[dr]_{1\star\epsilon} & G(X\star A) \ar[d]^{\epsilon} \\ & X\star A } \end{aligned} \end{equation} The Eilenberg-Moore coalgebra construction $({\mathscr A},G, \delta, \epsilon)\mapsto {\mathscr A}^G$ is the 2-functor right adjoint to the 2-functor $\ensuremath{\mathrm{Cat}}\xspace \to \mathrm{Mnd}_*(\ensuremath{\mathrm{Cat}}\xspace)$ taking each category to that category equipped with its identity comonad. \begin{proposition}\label{coalgskewaction} For each comonad $({\mathscr A}, G,\gamma, \delta, \epsilon)$ in the 2-category $\ensuremath{\mathrm{Cat}}\xspace^{\mathscr C}$, the Eilenberg-Moore coalgebra category ${\mathscr A}^G$ becomes a left skew ${\mathscr C}$-actegory with skew action \begin{eqnarray*} X\star (A \stackrel{a}\longrightarrow GA) = (X\star A, X\star A \stackrel{X\star a \ }\longrightarrow X\star GA \stackrel{\gamma_{X,A} \ }\longrightarrow G(X\star A)) \ . \end{eqnarray*} This provides the Eilenberg-Moore construction in the 2-category $\ensuremath{\mathrm{Cat}}\xspace^{\mathscr C}$ (in the sense of \cite{3}). \end{proposition} \begin{proof} Compose the oplax functor $\Sigma{\mathscr C} \to \mathrm{Mnd}_*(\ensuremath{\mathrm{Cat}}\xspace)$ corresponding to $({\mathscr A}, G,\gamma, \delta, \epsilon)$ with the Eilenberg-Moore 2-functor $\mathrm{Mnd}_*(\ensuremath{\mathrm{Cat}}\xspace) \to \ensuremath{\mathrm{Cat}}\xspace$. \end{proof} Let $\mathrm{U}\colon {\mathscr A}^G \to {\mathscr A}$ denote the underlying functor $(A,a)\mapsto A$. \begin{proposition}\label{liftwarping} Suppose $(T,K,v,v_0,k)$ is a skew left warping riding the ${\mathscr C}$-actegory ${\mathscr A}$. Suppose $({\mathscr A}, G,\gamma , \delta, \epsilon)$ is a comonad in the 2-category $\ensuremath{\mathrm{Cat}}\xspace^{\mathscr C}$ for which all morphisms of the form $\gamma_{TA,K}$ and $\gamma_{TA,TB\star K}$ are invertible. Then $(T\mathrm{U},(GK,\delta_K),v,v'_0,k')$ is a skew left warping riding the ${\mathscr C}$-actegory ${\mathscr A}^G$ of Proposition~\ref{coalgskewaction}, where $v'_0=v_0\circ T\epsilon_K$ and $k'_{(A,a)} = \gamma^{-1}_{TA,K}\circ Gk_A\circ a$. \end{proposition} \begin{proof} First we need to see that $k'_{(A,a)}\colon (A,a) \to (TA\star GK, \gamma_{TA,GK}\circ (1\star \delta_K))$ is a $G$-coalgebra morphism. This uses the first diagram of \eqref{am3}, naturality of $\delta$ with respect to $k_A$, and the coassociativity of the coaction $a\colon A\to GA$. It remains to verify the five axioms \eqref{warpassoc}, \eqref{warpunit1}, \eqref{warpunit2}, \eqref{warpunit3}, \eqref{warpunit4}. Since only $v$ is involved in \eqref{warpassoc}, it follows from axiom \eqref{warpassoc} for the original skew warping. For \eqref{warpunit1}, we have the diagram $$ \xymatrix{ T(TGK\star B) \ar[rr]^-{T(T\epsilon_K\star 1)} \ar[d]_-{v_{GK,B}} && T(TK\star B) \ar[rr]^-{T(v_0\star 1)} \ar[d]^-{v_{K,B}} && T(I\otimes B) \ar[d]^-{T\lambda_B} \\ TGK\otimes TB \ar[rr]_-{T\epsilon_K\otimes 1} && TK\otimes TB \ar[r]_-{v_0\otimes 1} & I\otimes TB\ar[r]_-{\lambda_{TB}}& TB} $$ which uses naturality of $v$ and axiom \eqref{warpunit1} for the original skew warping. The next diagram proves \eqref{warpunit2}. 
$$ \xymatrix{ TA\ar[r]^-{Ta} \ar[dd]_-{1} & TGA \ar[ldd]^-{T\epsilon_A} \ar[r]^-{TGk_A}& TG(TA\star K)\ar[r]^-{T\gamma^{-1}} \ar[d]_-{T\epsilon_{TA\star K}} & T(TA\star GK)\ar[d]^-{v_{A,GK}}\ar[ld]^-{T(1\star \epsilon_K)} \\ & & T(TA\star K)\ar[rd]^-{v_{A,K}} & TA\otimes TGK\ar[d]^-{1\otimes T\epsilon_K} \\ TA \ar[rr]_-{\rho_{TA}}\ar[rru]^-{Tk_A} & & TA\otimes I & TA\otimes TK \ar[l]^-{1\otimes v_0} }$$ Precomposing the next diagram with $1\star b \colon TA\star B \to TA\star GB$ proves \eqref{warpunit3}. Take note here of which components of $\gamma$ are required to be invertible. $$ \xymatrix{ TA\star GB \ar[d]_-{1\star Gk_B}\ar[r]^-{\gamma} & G(TA\star B)\ar[r]^-{Gk_{TA\star B}}\ar@/_/@{->}[lddd]^-{G(1\star k_B)} & G(T(TA\star B)\star K)\ar[d]^-{\gamma^{-1}}\ar[ld]_-{1} \\ TA\star G(TB\star K)\ar[dd]_-{\gamma} & \ G(T(TA\star B)\star K)\ar[d]^-{G(v_{TA,B}\star 1)} & T(TA\star B)\star GK\ar[d]^-{v_{A,B}\star 1}\ar[l]_-{\gamma} \\ & G((TA\otimes TB)\star K)\ar[ld]_-{G\alpha} & (TA\otimes TB)\star GK \ar[d]^-{\alpha}\ar[l]_-{\gamma} \\ G(TA\star (TB\star K))\ar[r]_-{\gamma^{-1}} & TA\star G(TB\star K)\ar[r]_-{1\star \gamma^{-1}} & TA\star (TB\star GK) }$$ Then $$ \xymatrix{ GK\ar[r]^-{\delta_K}\ar@/_/@{->}[rd]_-{1} & G^2K\ar[r]^-{Gk_{GK}}\ar[d]^-{G\epsilon_K} & G(TGK\star K)\ar[r]^-{\gamma^{-1}}\ar[d]^-{G(T\epsilon_K\star 1)} & TGK\star GK \ar[d]^-{T\epsilon_K\star 1} \\ & GK\ar[r]^-{Gk_K}\ar@/_/@{->}[rdd]_-{1} & G(TK\star K)\ar[r]^-{\gamma^{-1}}\ar[d]^-{G(v_0\star 1)} & TK \star GK\ar[d]^-{v_0\star 1} \\ & & G(I\star K)\ar[d]^-{G\lambda_K} & I\star GK\ar[d]^-{\lambda_{GK}}\ar[l]^-{\gamma} \\ & & GK\ar[r]_-{1} & GK }$$ yields \eqref{warpunit4}, which completes the proof. \end{proof} \begin{corollary}\label{liftwarpcor1} Under the hypotheses of Proposition~\ref{liftwarping}, the functor $\mathrm{U}\colon {\mathscr A}^G\to {\mathscr A}$ preserves the tensor products obtained from the skew warpings via Proposition~\ref{newskew} and becomes opmonoidal when equipped with the unit constraint $\epsilon_I \colon GI\to I$. \end{corollary} \begin{corollary}\label{liftwarpcor2} Since ${\mathscr C}$ is an object of $\ensuremath{\mathrm{Cat}}\xspace^{\mathscr C}$ with its own tensor product as skew action, and since it supports the identity skew warping, for any comonad $({\mathscr C}, G,\gamma, \delta, \epsilon)$ in the 2-category $\ensuremath{\mathrm{Cat}}\xspace^{\mathscr C}$, Corollary~\ref{liftwarpcor1} applies to give a skew monoidal structure on ${\mathscr C}^G$ with $\mathrm{U}\colon {\mathscr C}^G\to {\mathscr C}$ opmonoidal. \end{corollary} \begin{remark} If the comonad of Corollary~\ref{liftwarpcor2} is idempotent and $(G,\gamma)$ is strong in $\ensuremath{\mathrm{Cat}}\xspace^{\mathscr C}$ then $U\colon {\mathscr C}^G\to {\mathscr C}$ is a coreflection and the dual of Theorem~\ref{skmonrefthm} applies. The same skew monoidal structure on ${\mathscr C}^G$ is obtained as in Corollary~\ref{liftwarpcor2}. The point is that the diagram \eqref{link} commutes by $G$ applied to the right-hand diagram of \eqref{am3} and a counit property of the comonad. So Theorem~\ref{skmonrefthm} appears to be a stronger result than Corollary~\ref{liftwarpcor2} in the idempotent comonad case. 
\begin{equation}\label{link} \begin{aligned} \xymatrix{ G(X\otimes GY) \ar[rr]^-{G(1\otimes \varepsilon_Y)} \ar[d]_-{G\gamma_{X,GY}} && G(X\otimes Y) \ar[d]^-{\delta_{X\otimes Y}} \\ GG(X\otimes Y) \ar[rr]_-{1} \ar[rru]_-{G\varepsilon_{X\otimes Y}} && GG(X\otimes Y)} \end{aligned} \end{equation} \end{remark} \section{The example of Section~\ref{ex} without injectivity} Let ${\mathscr C}$ be a category with object set $O$ and morphism set $E$, and let $\xi\colon U\to O$ be a function (not necessarily injective). Composition with $\xi$ induces a comonadic functor $N = \xi_!\colon\ensuremath{\mathrm{Set}}\xspace/U\to\ensuremath{\mathrm{Set}}\xspace/O$; write $R = \xi^*$ for the right adjoint, given by pullback. The comonad $G= NR = \xi_!\xi^*$ is given by $-\times_O U$. The category structure on ${\mathscr C}$ induces a skew monoidal structure on $\ensuremath{\mathrm{Set}}\xspace/O$, with tensor product $X\otimes Y$ given by: $$(X\otimes Y)_j = \sum_i X_i \times {\mathscr C}(i,j)\times Y_j$$ and so $X\otimes-$ is given by $X\times_O E\times_O -$. The unit $I$ is the terminal object $1\colon O\to O$. From the formulas for $G$ and $X\otimes-$ involving products in $\ensuremath{\mathrm{Set}}\xspace/O$, it is clear that we have natural isomorphisms $\gamma_{X,Y}\colon X\otimes GY\cong G(X\otimes Y)$, compatible with the comonad structure, in the sense that the diagrams \eqref{am3} commute. Almost as easy is compatibility with the associativity map and left unit constraint in the sense of diagrams \eqref{am1} and \eqref{am2}. So we have a category ${\mathscr C}$ with object-set $O$, giving rise to the skew monoidal category $\ensuremath{\mathrm{Set}}\xspace/O$, and the comonad $G=\xi_!\xi^*$ on $\ensuremath{\mathrm{Set}}\xspace/O$ as required by Corollary~\ref{liftwarpcor2}. This gives rise to a skew monoidal structure on $\ensuremath{\mathrm{Set}}\xspace/U$, with unit $\xi^*I$; in other words with unit $I'$ equal to the terminal object $1\colon U\to U$. It is clear from the construction that this tensor product preserves colimits in each variable. So from the general theory, it must correspond to some category ${\mathscr A}$ with object-set $U$. Since $\xi_!\colon\ensuremath{\mathrm{Set}}\xspace/U\to\ensuremath{\mathrm{Set}}\xspace/O$ is opmonoidal, $\xi$ is the object part of a functor $F\colon{\mathscr A}\to{\mathscr C}$. Since $\xi_!$ preserves the tensor, the functor $F$ is fully faithful. Thus ${\mathscr A}$ must in fact be obtained from $\xi \colon U\to{\mathscr C}$ via the factorization into a bijective-on-objects functor followed by a fully faithful functor.
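For concreteness, and only as a consistency check, the last paragraph can be unwound as follows: the category ${\mathscr A}$ has object set $U$ and hom-sets ${\mathscr A}(u,v)={\mathscr C}(\xi(u),\xi(v))$, with composition and identities inherited from ${\mathscr C}$, so that the corresponding left skew tensor product on $\ensuremath{\mathrm{Set}}\xspace/U$ is $$(A\bar{\otimes}B)_v \ \cong \ \sum_u A_u\times {\mathscr C}(\xi(u),\xi(v))\times B_v \ ,$$ reproducing the formula of Section~\ref{ex} with the injective $\mu$ replaced by the arbitrary function $\xi$.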
\section{Introduction} Matrix models originating from ten-dimensional string theory have been shown in some limit to contain geometry and gravity in less than ten dimensions.\cite{Steinacker:2010rh},\cite{Blaschke:2010ye} Most of the matrix models that have been studied, such as the IKKT model,\cite{Ishibashi:1996xs}, are of the Yang-Mills type, with a Lagrangian which is quadratic in time derivatives. Matrix models with Lagrangians that are first order in the time derivative are also possible. More specifically, they can be matrix analogues of a topological model, such as Chern-Simons theory.\cite{Oda:1998xj},\cite{Kluson} As has been known for some time, Chern-Simons theory allows for a description of gravity in $2+1$ dimensions.\cite{Achucarro:1987vz},\cite{Witten:1988hc} A matrix model analogue of Chern-Simons theory may contain $2+1$ dimensional geometry and gravity in some limit. Here we show that a Chern-Simons matrix model is capable of providing a statistical mechanical explanation of the entropy formula for the black hole in $2+1$ gravity, i.e., the BTZ black hole.\cite{Banados:1992wn} Our matrix model derivation of the entropy proceeds in a similar fashion to Carlip's derivation in \cite{Carlip:1996yb},\cite{Banados:1998ta}, which was based on the continuum Chern-Simons formulation of $2+1$ gravity. The continuum Chern-Simons model of \cite{Carlip:1996yb},\cite{Banados:1998ta} had physical degrees of freedom in the classical theory due to the presence of a boundary, the boundary being associated with the black hole horizon.\cite{Bos:1989kn},\cite{Balachandran:1994up} These degrees of freedom corresponded to edge states in the quantum theory,\cite{Balachandran:1991dw} and the log of the degeneracy of these states gave the entropy \be S=\frac {\pi r_+}{2 G}\;,\label{btzntrop}\ee where $G$ is the $2+1$ gravitational constant and $r_+$ is the outer horizon radius of the BTZ black hole. The matrix model presented here is described in terms of two spatial coordinates, which are represented by $N\times N$ matrices, $\tilde X_i, \;i=1,2$. (Time remains a continuous parameter.) Their dynamics is determined from an action which is similar to that of Chern-Simons theory on the Moyal-Weyl plane.\cite{Grandi:2000av}-\cite{Fradkin:2002qw} Chern-Simons theory on the Moyal-Weyl plane has no dynamical content, and therefore has no hope of describing the properties of a physical system such as a black hole. On the other hand, the matrix model we consider has dynamical degrees of freedom, which are analogous to the edge states of the continuum theory. The system possesses an $SU(N)$ gauge symmetry, along with an additional $U(1)$ gauge symmetry. The $U(1) $ sector often plays a special role in noncommutative gauge theories, and that is the case here as well. While $SU(N)$ corresponds to an internal symmetry group, the relevant $U(1)$ gauge transformations are external transformations. More specifically, they are time-dependent rigid rotations. The $U(1) $ rotations do not decouple from the internal $SU(N)$ transformations in the matrix model, and together they define a semidirect product group. We note that rotations preserve the fundamental commutation relations of the Moyal-Weyl plane, and so rotation symmetry is also implementable for Chern-Simons theory on the Moyal-Weyl plane. Rigid rotation symmetry was also present in Carlip's analysis, and moreover, it played a crucial role in the derivation of the black hole entropy\cite{Carlip:1996yb},\cite{Banados:1998ta}. 
This symmetry was associated with the isometry of the horizon. Rotation symmetry can be utilized in a similar manner for the matrix model calculation. As we shall show, the physical degrees of freedom for the matrix model correspond to $N$ harmonic oscillators, which are constrained by the first class constraint generating rotations. A unique invariant can be written down for the model which is quadratic in the spatial coordinates $\tilde X_i$, and its spectrum and degeneracy are easily computed. In order to make a connection with BTZ geometry, we need to identify the quadratic invariant with a geometric invariant for the BTZ black hole which has units of distance-squared. A natural choice is $r_+^2$. A final requirement is that we take the limit of infinite dimensional representations for $\tilde X_i$, i.e., $N\rightarrow \infty$, for only then can we hope to recover a two-dimensional continuous geometry from the matrix theory. The limit of the matrix model is not Chern-Simons theory on the Moyal-Weyl plane, and moreover the limit yields an infinite number of physical states. Upon taking the asymptotic limit, and identifying the quadratic invariant of the matrix model with $r_+^2$, we obtain a degeneracy which grows exponentially with $r_+$. The usual formula for the BTZ black hole entropy (\ref{btzntrop}) can thus be recovered from this model.\footnote{Here we are assuming that $r_+$ is the outer horizon radius. If one instead makes the identification with the inner horizon radius $r_-$, one recovers the results for the `exotic' BTZ black hole\cite{TownsendZhang}.} The outline of this article is the following: In section 2 we review the standard noncommutative Chern-Simons theory, which has no dynamical content. In section 3 we show that physical degrees of freedom survive in a $N\times N$ matrix model analogue of the theory. The rotation symmetry is introduced in section 4, and a consistent invariant action is found. The density of states is then computed and found to be exponentially increasing in the large $N$ limit. There we also show that the collective quantum system behaves as an integer (half-integer) spin particle for even (odd) $N$ under a $2\pi$-rotation. Concluding remarks and speculations are given in section 5. \section{Noncommutative Chern-Simons theory} \setcounter{equation}{0} We now review standard noncommutative Chern-Simons theory.\cite{Grandi:2000av}-\cite{Fradkin:2002qw} The dynamical variables for the theory are a pair of infinite dimensional square matrices $X_i$, $i=1,2$, which have been referred to in the literature as covariant coordinates. We will take them to have units of distance. The Lagrangian is defined using an invariant trace \be L_{cs}(X_i,\dot X_i)=\frac k{2\theta_0} {\rm Tr}\Bigl (\epsilon_{ij}D_tX_i X_j -{ 2i}{\theta_0} A_0 \Bigr) \;,\label{NCcsLag1}\ee where the covariant derivative is defined by\be D_tX_i =\dot X_i+[A_0,X_i]\;, \ee and the dot denotes differentiation in the time $t$, which is assumed to be continuous. $k$ and $\theta_0$ are real constants. The former, which we assume to be positive, is known as the level, and here takes integer values.\cite{lq},\cite{Bak:2001ze}. Level quantization was a result of the fact that the Lagrangian is not invariant under gauge transformations, but rather changes by a time derivative. $\theta_0$ is the noncommutativity parameter, and has units of length-squared. $k$ and $\theta_0$ will play different roles in the subsequent sections. 
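Schematically, and only to recall the origin of this quantization (the detailed argument is given in the cited works and is not rederived here), a gauge transformation with nontrivial winding in the time direction changes the action by \be S \rightarrow S + 2\pi k\, n \;, \ee where $n$ is the integer winding number of the transformation, so that $e^{iS}$ is single valued only for integer $k$. As will be seen in the next section, no analogous quasi-invariance survives in the finite matrix model, and there $k$ is not quantized.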
$ A_0$ is an infinite dimensional square matrix whose elements correspond to Lagrange multipliers. Reality for the Lagrangian requires $A_0$ to be antihermitean, while $X_i$ can be hermitean or antihermitean. Our convention will be to take $X_i$ antihermitean. The equations of motion obtained from varying $ A_0$ and $X_i$ are \beqa [X_i,X_j]&=& i{\theta_0}\epsilon_{ij}\BI\label{fij0}\\D_tX_i&=&0\label{covX}\;,\eeqa respectively, $\BI$ being the identity. The equation of motion (\ref{fij0}) is the Heisenberg algebra, which implies that the space spanned by coordinates $X_i$ is the Moyal-Weyl plane, with noncommutativity parameter $\theta_0$. The action $\int dt L_{cs}(X_i,\dot X_i)$ is invariant under noncommutative gauge transformations, where $X_i$ is in the adjoint representation. Infinitesimal variations are of the form \beqa \delta_\Lambda X_i&=&[X_i,\Lambda] \cr \delta_\Lambda A_0&=& D_t\Lambda\;,\label{ncft}\eeqa where $\Lambda$ is an infinite dimensional square matrix, with time-dependent matrix elements. The reality conditions for $X_i$ and $A_0$ are preserved provided $\Lambda$ is antihermitean. Gauge transformations are generated by (\ref{fij0}) in the Hamiltonian formulation of the theory. There they correspond to first class constraints, and since there is one first class constraint for every pair of matrix elements in $X_1$ and $X_2$, no physical degrees of freedom remain in this system. \section{ Matrix Chern-Simons theory} \setcounter{equation}{0} Here we consider a finite matrix analogue of the above system. For this let $X_i$ and $ A_0$ now represent finite $N\times N$ antihermitean matrices, and let Tr be the standard matrix trace. A modification of the Lagrangian (\ref{NCcsLag1}) is required in this case. This is evident from the equation of motion (\ref{fij0}) which is inconsistent with the matrix trace. The inconsistency is easily cured by making $ A_0$ traceless. It then takes values in the adjoint representation of the $su(N)$ Lie algebra. The Lagrangian in this case simplifies to \be L_{cs}^{(N)}(X_i,\dot X_i)=\frac k{2\theta_0} \epsilon_{ij}{\rm Tr}D_tX_i X_j \label{NCcsLag}\ee Now instead of (\ref{fij0}), variations of $A_0$ lead to \be [X_i,X_j]=0\label{xixj}\;,\ee while variations in $X_i$ again give (\ref{covX}). The equation of motion (\ref{xixj}) implies that the space spanned by spatial coordinates $X_i$ is {\it commutative}, as opposed to what one gets from (\ref{fij0}). (Here $\theta_0$ no longer plays the role of a noncommutativity parameter.) Commuting configurations did not play a role in a derivation of four dimensional gravity from matrix models.\cite{Steinacker:2010rh} The reason was that they do not support propagating degrees of freedom. On the other hand, there are no propagating degrees of freedom in a $2+1$ gravity theory. As we desire $2+1$ gravity to emerge from the matrix model in some limit, it is reasonable to consider commuting configurations here. The Lagrangian (\ref{NCcsLag}) possesses an $SU(N)$ gauge symmetry, with infinitesimal variations given by (\ref{ncft}). Here $\Lambda$ are {\it traceless} antihermitean matrices. (The Lagrangian will be modified in the following section in order to include an additional $U(1)$ gauge symmetry. The additional symmetry is coupled to the $SU(N)$ symmetry in a non trivial way.) Note that because the Lagrangian (\ref{NCcsLag}) does not contain the previous Tr$A_0$ term, it is invariant under $SU(N)$ gauge transformations, as opposed to changing by a total time derivative. 
This implies that the constant {\it $k$ does not get quantized in this model}. Since $X_i$ has units of length, all we require is that $k/\theta_0$ has units of inverse length-squared. These statements will also apply in section four. At the end of that section, we shall argue that $k/\theta_0$ is proportional to one over the square of the gravitational constant in $2+1$ dimensions. The Poisson structure resulting from Lagrangian (\ref{NCcsLag}) is given by \be \{(X_i)_{\alpha\beta},(X_j)_{\gamma\delta}\}=\frac{\theta_0}k \epsilon_{ij}\delta_{\alpha\delta}\delta_{\beta\gamma} \;,\label{.06} \ee where $\alpha,\beta,\gamma,\delta,...=1,...,N$ are the matrix indices. Here (\ref{xixj}) correspond to first class constraints, with the $SU(N)$ gauge transformations generated from \be G(\Lambda)= -\frac k{2\theta_0}\epsilon_{ij}{\rm Tr}\Lambda[X_i,X_j] \label{sunggens}\ee This is since $\{X_i,G(\Lambda)\}=[X_i,\Lambda]$. Using (\ref{.06}), they form a closed algebra \be \{G(\Lambda), G(\Lambda')\}=G([\Lambda',\Lambda]) \;\label{brktGG}\ee There are a total of $N^2-1$ first class constraints, which means that at least two independent physical degrees of freedom are present in the $N\times N$ matrices $X_1$ and $X_2$. Actually, there are more. To count the number of physical degrees of freedom, one starts with the unconstrained $2N^2-$dimensional phase space spanned by the two matrices $ X_i,\;i=1,2$. The traceless parts of these matrices, call them $ X_i^{tl},\;i=1,2$, can be taken to be elements of the $su(N)$ Lie algebra. Using the $SU(N)$ gauge symmetry, one of them, say $ X_1^{tl}$, can be rotated to the $(N-1)$-dimensional Cartan sub-algebra. (The result is unique up to Weyl reflections.) This corresponds to a gauge fixing. (Actually, it is only a partial gauge fixing, as the rotated $ X_1^{tl}$ are invariant under rotations by the Cartan generators.) From the gauge constraints, the remaining matrix $ X_2^{tl}$ must commute with the gauge fixed $ X_1^{tl}$. If the latter spans all of the $su(N)$ Cartan-subalgebra (we call this the generic case), then $ X_2^{tl}$ must also be in the Cartan-subalgebra. So $2(N-1)$ phase space variables remain amongst $ X_i^{tl},\;i=1,2$, after eliminating the gauge degrees of freedom. Upon including the $SU(N)$ invariant traces of $ X_1$ and $X_2$, one then ends up with $2N$ independent degrees of freedom. They can be expressed in terms of the $SU(N)$ invariants Tr$X_1^nX_2^m$, $n$ and $m$ being integers. The above argument shows that only $2N$ of them are independent. For the example of $N=2$, we can take them to be \be {\rm Tr}X_1\;,\quad {\rm Tr} X_2\;, \quad {\rm Tr}X_1^2,\quad {\rm and} \quad {\rm Tr}X_2^2\label{4ndpndntdof} \ee More generally, (\ref{4ndpndntdof}) correspond to a minimal set of independent degrees of freedom for the matrix model. Let us examine the simplest case of $N=2$. ($N>2$ will be studied in detail in the following section.) The $2\times 2$ antihermitean matrices $X_1$ and $X_2$ can be expressed as \be X_1=\sqrt{\frac {\theta_0}{2 k}}\; p_\mu\tau_\mu \qquad X_2=\sqrt{\frac {\theta_0}{2 k}}\; q_\mu\tau_\mu \;,\qquad \mu,\nu,... =0,...,3\;, \ee where $\tau_0=i\BI$ and $\tau_{1,2,3}=i\sigma_{1,2,3}$. $\BI$ and $\sigma_{1,2,3}$, respectively, denote the unit matrix and Pauli matrices. Then (\ref{.06}) correspond to canonical brackets for $q_\mu$ and $p_\mu$, \be\{q_\mu,p_\nu\}=\delta_{\mu\nu}\label{euclid}\ee The traces of $X_i$, which are proportional to $q_0$ and $p_0$, are $SU(2)$ invariants. 
The traceless parts of $X_i$, corresponding to $\vec q=(q_1,q_2,q_3)$ and $\vec p=(p_1,p_2,p_3)$, transform as vectors, so additional $SU(2)$ invariants are $\vec q^2$, $\vec p^2$ and $\vec q\cdot \vec p$, the dot denoting the scalar product. These invariants are not all independent since the constraint (\ref {xixj}) means that the cross product of $\vec q$ and $\vec p$ vanishes. Excluding the special (non generic) cases where one of the vectors vanishes and the other is arbitrary, we get that $\vec q$ and $\vec p$ are parallel. Then there are a total of four independent gauge invariant quantities, $q_0$, $p_0$, $\vec q^2$ and $\vec p^2$, or equivalently, (\ref{4ndpndntdof}). \section{$ {\rm Diff}_0$ Invariant Matrix Model} \setcounter{equation}{0} Here we modify the above matrix model so that it contains an additional $U(1)$ gauge symmetry. Rather than behaving like another internal gauge symmetry, the $U(1)$ transformation acts on the spatial indices of the coordinates $X_i$, and hence is an external symmetry transformation. More specifically it is the analogue of rigid rotations, which we denote by $ {\rm Diff}_0$. Physically, this is added in order to account for the rotational symmetry of the BTZ solution. The rigid rotation symmetry played a crucial role in Carlip's derivation of the black hole entropy\cite{Carlip:1996yb},\cite{Banados:1998ta}, and we show that it plays an important role in the analogous derivation for the matrix model. After first writing down a consistent Lagrangian, we compute the spectrum of a unique invariant of the model, which is quadratic in the spatial coordinates. The entropy is obtained from the degeneracy of eigenvalues. \subsection{Invariant Action} We define transformations of the matrices $X_i$ in an analogous fashion to how rotations act on components of a vector field $v_i,\;i=1,2$, defined on $ {\mathbb{R}}^2$. For the latter, infinitesimal variations are of the form \be\delta_{\epsilon} v_i =\epsilon(t)\,({\tt L} v_ i + \epsilon_{ij} v_j) \label{ledrvA}\;,\ee where ${\tt L}=\epsilon_{ij}x_i \frac{\partial}{\partial x_j}$ is the angular momentum operator, $\epsilon(t) $ is an infinitesimal time-dependent angle and $x_i$ are Cartesian coordinates on ${\mathbb{R}}^2$. In analogy to this, we write down infinitesimal variations of the matrices $X_i$ of the form \be\delta_\epsilon X_i=\epsilon(t)({\tt L}_\Delta X_i +\epsilon_{ij}X_j) \;,\label{vrtnsetaz}\ee where ${\tt L}_\Delta$ denotes some derivation. We define it by ${\tt L}_\Delta M=[\Delta,M]$, when acting on any $N\times N$ matrix $M$, where $\Delta$ is some time-independent $N\times N$ antihermitean matrix. It follows from (\ref{vrtnsetaz}) that $\delta_\epsilon [X_i,X_j] =\epsilon(t){\tt L}_\Delta[X_i,X_j]$. We need to define the corresponding variation of $A_0$. We take it to have the form \be\delta_\epsilon A_0=\epsilon(t){\tt L}_\Delta A_0 +\dot \epsilon(t)\Upsilon \label{dfona0}\;\ee Since $A_0$ is a traceless $N\times N$ antihermitean matrix, the same must be true for $\Upsilon$. From (\ref{vrtnsetaz}) and (\ref{dfona0}) we get the following variation of the Lagrangian (\ref{NCcsLag}) \be \delta_\epsilon L_{cs}^{(N)}(X_i,\dot X_i)=\dot \epsilon(t)\frac k{2\theta_0}{\rm Tr}\Bigl(\epsilon_{ij}({\tt L}_\Delta X_i) X_j+X_iX_i +\epsilon_{ij}[X_i,X_j]\Upsilon \Bigr)\label{grnvrtnlcs} \ee It vanishes if we set $\Upsilon=-\Delta$ and constrain Tr$X_iX_i$ to zero. 
In this case, we need to require that Tr$\Delta$=0, while the constraint Tr$X_iX_i=0$ can be ensured by adding a Lagrange multiplier term to (\ref{NCcsLag}). More generally, there is a one-parameter family of $\Upsilon$'s for which (\ref{vrtnsetaz}) and (\ref{dfona0}) are symmetry transformations. It is $\Upsilon=iaX_iX_i-\Delta$, along with the constraint \be {\rm Tr}(X_iX_i+i\Delta/a) =0\;,\label{cntonxsq}\ee where $a$ is real. The constraint can be imposed by adding a Lagrange multiplier term to the Lagrangian. Now the variation (\ref{grnvrtnlcs}) is a time derivative. Using (\ref{cntonxsq}), it is $ \delta_\epsilon L_{cs}^{(N)}(X_i,\dot X_i)=\dot \epsilon(t)\frac k{2\theta_0a}(-i{\rm Tr}\Delta )$. (Recall that $\Delta$ is antihermitean, and so its trace is imaginary. Also, for $a\ne 0$ we no longer need to require that $\Delta$ is traceless, since Tr$\Upsilon=0$ follows from the constraint.) The result can be extended to finite rotations. For a $2\pi$-rotation, the corresponding action $S^{(N)}$ changes by \be \frac {\pi k}{\theta_0a}(-i{\rm Tr}\Delta )\label{chngnactn}\ee We show later that its value gets fixed in the quantum theory. In conclusion, the action $S^{(N)}=\int dt L_{cs}^{'(N)}(X_i,\dot X_i)$, with \be L_{cs}^{'(N)}(X_i,\dot X_i)=\frac k{2\theta_0} \epsilon_{ij}{\rm Tr}D_tX_i X_j +\mu{\rm Tr}(X_iX_i+i\Delta/a) \label{dfNCcsLag}\;,\ee is invariant under infinitesimal variations (\ref{vrtnsetaz}) and \beqa\delta_\epsilon A_0&=&\epsilon(t){\tt L}_\Delta A_0 +\dot \epsilon(t)(iaX_iX_i-\Delta)\cr &&\cr \delta_\epsilon\mu&=&-\dot \epsilon(t)\frac k{2\theta_0} \label{difofa0mu}\;,\eeqa where $\mu$ is the Lagrange multiplier. We define (\ref{vrtnsetaz}) and (\ref{difofa0mu}) to be the infinitesimal $ {\rm Diff}_0$ variations for the matrix model. (For the special case $a=0$, we should drop the term $i{\rm Tr}(\Delta/a)$ from the Lagrange constraint and assume that $\Delta$ is traceless.) Of course, in addition to the $ {\rm Diff}_0$ symmetry, the Lagrangian (\ref{dfNCcsLag}) is invariant under $SU(N)$ gauge transformations, where the infinitesimal variations are (\ref{ncft}). The equations of motion following from the Lagrangian (\ref{dfNCcsLag}) are \be D_tX_i+\frac {2\theta_0}k\mu\epsilon_{ij}X_j=0\;,\label{DxeqepX}\ee (\ref{cntonxsq}) and (\ref{xixj}). Eq. (\ref{DxeqepX}) replaces (\ref{covX}), while the condition (\ref{cntonxsq}) is new and has nontrivial consequences. Upon restricting $i$Tr$\Delta /a>0$, it states that all matrix elements of $X_i$ lie on the surface of a $2N^2-1$ dimensional sphere. (Recall that $X_i$ are antihermitean.) However, from (\ref{.06}), one does not have the Poisson structure on a sphere. The constraint (\ref{cntonxsq}) implies that all matrix elements have a finite range, corresponding to the diameter of the sphere. This means that boundary conditions must be imposed in all directions in the phase space, making quantization problematic. [The situation is even worse for the case Tr$\Delta=0$, since then the constraint (\ref{cntonxsq}) says that all matrix elements of the antihermitean matrices $X_i$ vanish!] This obstacle to quantization can be easily rectified by a simple modification of the reality conditions on the matrices $X_i$, as we describe below. \subsection{{\tt Alternative Reality} conditions} An interesting feature of the above matrix model is that one can choose independent reality conditions for the trace and traceless parts of the dynamical matrices. Here we exploit this feature in order to obtain a consistent quantization. 
More specifically, we replace the antihermitean matrices $X_i$ in the Lagrangian (\ref{dfNCcsLag}), by matrices $\tilde X_i$, for which a) the trace is real and b) the traceless part is antihermitean. \noindent This choice is consistent with the reality of $ L_{cs}^{'(N)}(\tilde X_i,\dot {\tilde X_i})$. It is also consistent with the $SU(N)$ and $ {\rm Diff}_0$ symmetry transformations. Infinitesimal variations for the former are given by (\ref{ncft}), while they are given by (\ref{vrtnsetaz}) and (\ref{difofa0mu}) for the latter. We again assume that $\Lambda$ and $\Delta$ are antihermitean matrices. $\Lambda$ is time-dependent and traceless, while $\Delta$ is a constant matrix. From conditions a) and b), the constraint (\ref{cntonxsq}) [with $X_i$ replaced by $\tilde X_i$] now defines a $2N^2-1$ dimensional {\it unbounded} surface. Of course, most of the matrix elements in $\tilde X_i$ are not physical degrees of freedom. In addition to containing the $SU(N)$ gauge degrees of freedom discussed in the previous section, the matrix elements have a $ {\rm Diff}_0$ gauge degree of freedom. In the Hamiltonian formalism, the $SU(N)$ gauge symmetry is generated by (\ref{sunggens}) [with $X_i$ replaced by $\tilde X_i$], while the $ {\rm Diff}_0$ symmetry is generated by the first class constraint \be V_\Delta= \frac k{2\theta_0} {\rm Tr} \Bigl(\epsilon_{ij}( {\tt L}_\Delta \tilde X_i)\tilde X_j + \tilde X_i\tilde X_i+ {i\Delta}/{a}\Bigr) \approx 0\label{genofrr}\ee Using (\ref{.06}), one gets $\{\tilde X_i,V_\Delta\}={\tt L}_\Delta \tilde X_i +\epsilon_{ij} \tilde X_j$, which means that (\ref{vrtnsetaz}) can be generated in the Hamiltonian formalism. From \be \{V_\Delta, G(\Lambda)\}=G([\Delta,\Lambda])\;, \label{pbgv}\ee and (\ref{brktGG}), the $SU(N)$ generators $G(\Lambda)$, along with the $ {\rm Diff}_0$ generator $V_\Delta$, form a closed algebra, and yield a total of $N^2$ first class constraints in the Hamiltonian formalism. (\ref{pbgv}) implies that external rotations are coupled to the internal $SU(N)$ gauge transformations, and that the combination of the two transformations defines the action of a semidirect product group, $SU(N)\rtimes {\rm Diff}_0$. Even though there are now $N^2$ first class constraints, they do not eliminate all physical degrees of freedom from the two $N\times N$ matrices $\tilde X_1$ and $\tilde X_2$. Following the discussion after (\ref{brktGG}), $2N$ independent degrees of freedom remain in the generic case after eliminating the $SU(N)$ gauge degrees of freedom. The $SU(N)$ invariants (\ref{4ndpndntdof}) represent a minimum set of such degrees of freedom. The physical phase space dimension reduces to $2(N-1)$ once one introduces the additional ${\rm Diff}_0$ gauge symmetry. (We shall construct the variables spanning the reduced phase space explicitly in subsections 4.3.1 and 4.3.2.) Then for the example of $N=2$, only two of the four $SU(N)$ invariants (\ref{4ndpndntdof}) can be independent physical degrees of freedom. More generally, a minimum of two physical degrees of freedom occur for this matrix model. One such degree of freedom is the $SU(N)\rtimes {\rm Diff}_0$ invariant \be \hat{\cal I}^{(2)}= \frac 1N\Bigl(({\rm Tr}\,\tilde X_1)^2 +({\rm Tr}\,\tilde X_2)^2 \Bigr)\label{.6}\;\ee The factor of $1/N$ was introduced in order to give it a universal (i.e., $N-$independent) spectrum in the quantum theory. 
(\ref{.6}) is the unique quadratic invariant for the matrix model and it has units of distance-squared.\footnote{ Another quadratic $SU(N)\rtimes {\rm Diff}_0$ invariant is $ {\rm Tr}( \tilde X_1^2 +\tilde X_2^2)$, however it is constrained by (\ref{cntonxsq}) (with $X_i$ replaced by $\tilde X_i$), and hence it is not a physical degree of freedom.} For the BTZ black hole, the natural invariant with units of distance-squared is the square of the horizon radius. We will identify these two invariants at the end of this section. The spectrum of the operator analogue of (\ref{.6}) is that of the energy of a harmonic oscillator. For this we note that the $SU(N)$ invariants Tr$\tilde X_1$ and Tr$\tilde X_2$, obey the Heisenberg algebra \be \{ {\rm Tr}\tilde X_1, {\rm Tr} \tilde X_2\}=\frac {\theta_0 N}k \label{trx1trx2} \ee This algebra persists after eliminating the $ {\rm Diff}_0$ gauge degree of freedom. For this we can impose a gauge fixing condition. A convenient choice is \be \psi = {\rm Tr}\tilde X_2^2-\frac 1N( {\rm Tr}\tilde X_2)^2\approx 0 \;,\label{gnrlgfx} \ee which along with $V_\Delta$ form a second class set of constraints. $\psi$ has zero bracket with both $ {\rm Tr}\tilde X_1$ and $ {\rm Tr}\tilde X_2$, and as a result, the Dirac bracket of $ {\rm Tr}\tilde X_1$ with $ {\rm Tr} \tilde X_2$ is identical to (\ref{trx1trx2}).\footnote{More generally, the Dirac bracket of phase space variables $A$ and $B$ is given by $$\{A,B\}_{ {\tt DB}}= \{A,B\}+\frac{\{A,V_\Delta\}\{\psi,B\} -\{B,V_\Delta\}\{\psi,A\} }{\{V_\Delta,\psi\}}$$} In the quantum theory, $ {\rm Tr}\tilde X_1$ and $ {\rm Tr}\tilde X_2$ are promoted to hermitean operators, which we denote by $\widehat{ {\rm Tr}X_1}$ and $\widehat{ {\rm Tr} X_2}$, respectively. They satisfy commutation relations \be [\widehat{ {\rm Tr}X_1},\widehat{ {\rm Tr} X_2}]=i\frac {\theta_0 N}k \label{2.29} \ee Raising and lowering operators, $a^\dagger$ and $a$ satisfying $[a,a^\dagger]=1$, can be introduced by writing $\widehat{ {\rm Tr}X_1}=\sqrt{\frac{\theta_0 N}{2k}}(a^\dagger+a)$ and $\widehat{ {\rm Tr}X_2}=i\sqrt{\frac{\theta_0 N}{2k}}(a^\dagger -a)$. Then the operator analogue of the invariant (\ref{.6}) can be expressed in terms of a number operator $a^\dagger a$, and has the eigenvalues: \be {\cal I}^{(2)}_n=\frac {2\theta_0} k\Bigl ( n +\frac 12\Bigr) \;,\quad n=0,1,2,...\; \label{nrgignvlu}\ee \subsection{Degeneracy} We now determine the degeneracy of the eigenvalues $ {\cal I}^{(2)}_n $. We first show that all eigenvalues are nondegenerate for the case $N=2$ in subsection 4.3.1, and then compute the degeneracy for $N>2$ in subsection 4.3.2. \subsubsection{$N=2$} It is easy to see that all eigenvalues $ {\cal I}^{(2)}_n $ are nondegenerate for $N=2$. For this it is convenient to expand $\tilde X_1$ and $\tilde X_2$ in terms of $2\times 2$ matrices $\tilde\tau_0=\BI$ and $\tilde\tau_{1,2,3}=i\sigma_{1,2,3}$ according to \be \tilde X_1=\sqrt{\frac {\theta_0}{2 k}}\; p_\mu\tilde\tau_\mu \qquad \tilde X_2=\sqrt{\frac {\theta_0}{2 k}}\; q_\mu\tilde \tau_\mu \qquad \ee In contrast to (\ref{euclid}), $q_\mu$ and $p_\mu$ now satisfy brackets \be\{q_\mu,p_\nu\}=\eta_{\mu\nu}\;,\label{su2pbs}\ee where $\eta$ is the Minkowski metric tensor $ \eta={\rm diag}(-1,1,1,1)$.\footnote{The Minkowski signature is a result of the choice of reality conditions made on the coordinates $\tilde X_i$ in the previous subsection. 
This is in contrast to the Euclidean signature that resulted from the antihermitean coordinates $X_i$, as was seen in (\ref{euclid}).} As noted at the end of section 3, there are four independent rotationally invariant quantities $q_0$, $p_0$, $\vec q^2$ and $\vec p^2$, i.e., (\ref{4ndpndntdof}). Here they generally contain a $ {\rm Diff}_0$ gauge degree of freedom, where from (\ref{vrtnsetaz}), infinitesimal $ {\rm Diff}_0$ variations are of the form \beqa \delta_\epsilon q_0=-\epsilon(t) p_0 &\qquad & \delta_\epsilon p_0=\epsilon(t) q_0 \cr \delta_\epsilon \vec q^2=-2\epsilon(t) \vec q\cdot\vec p &\qquad & \delta_\epsilon \vec p^2=2\epsilon(t) \vec q\cdot\vec p \eeqa Furthermore, from (\ref{cntonxsq}), the four rotationally invariant quantities are (weakly) constrained by \be q^2_0+p^2_0\approx\vec q^2+\vec p^2 +d_0\label{Neq2cnstrnt} \;, \ee where \be d_0=\frac k{\theta_0 a}(-i{\rm Tr} \Delta)\label{dzero}\ee (Again recall that ${\rm Tr} \Delta$ is imaginary.) So here the physical phase space is two dimensional. We can eliminate the $ {\rm Diff}_0$ gauge degree of freedom by imposing the gauge fixing condition $\vec q^2\approx 0$ [i.e., (\ref{gnrlgfx})], and furthermore solve for $\vec p^2$ using (\ref{Neq2cnstrnt}). The remaining independent coordinates are then $q_0$ and $p_0$, i.e., $ {\rm Tr}\tilde X_1$ and $ {\rm Tr}\tilde X_2$, and their Dirac bracket is identical to the bracket $\{q_0,p_0\}=-1$. The rotationally invariant quantity $q^2_0+p^2_0\propto {\cal I}^{(2)} $ has the form of a harmonic oscillator Hamiltonian, and its eigenvalues in the quantum theory are $2n+1$, $n=0,1,2,...$ . Each eigenvalue is associated with a single harmonic oscillator state. In an alternative quantization, one can first eliminate two of the $SU(2)$ gauge degrees of freedom (up to a $\pi-$rotation) by requiring one vector, say $\vec p$, to point along the third direction, i.e., we impose the gauge conditions $p_1=p_2=0$. Upon restricting to the generic solution, $\vec p\parallel \vec q$, of the equation of motion $ [\tilde X_1,\tilde X_2]=0$, we also have that $q_1=q_2=0$.\footnote{The special solutions where one vector (either $\vec q$ or $\vec p$) vanishes, while the other is arbitrary, cannot give a discrete spectrum for the invariant $q^2_0+p^2_0 $, using (\ref{Neq2cnstrnt}), and are therefore inconsistent with the above result, and also with (\ref{nrgignvlu}).} The remaining nonvanishing degrees of freedom are $q_0,p_0,q_3$ and $p_3$. They are subject to the constraint $q^2_0+p^2_0\approx q_3^2+p_3^2+d_0$. While they are invariant under the remaining gauge transformations in the $U(1)$ subgroup of $ SU(2)$, they contain the $ {\rm Diff}_0$ gauge degree of freedom. So again we find two independent physical variables. Now instead of taking them to be $q_0$ and $p_0$, as we did previously, let us choose them to be $q_3$ and $p_3$. We can eliminate the $ {\rm Diff}_0$ gauge degree of freedom (up to a $\pi-$rotation) by imposing the constraint $q_0\approx 0$. Then the Dirac bracket of $q_3$ with $p_3$ is identical to the bracket $\{q_3,p_3\}=1$, and $ q_3^2+ p^2_3$ defines another harmonic oscillator Hamiltonian. It has eigenvalues $2n+1$, $n=0,1,2,...$, in the quantum theory. This spectrum is identical to what we previously obtained for the operator analogue of $q^2_0+p^2_0$, which here is weakly equal to $ p^2_0$.
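The oscillator spectra appearing here, and the general formula (\ref{nrgignvlu}), can also be verified numerically by truncating the oscillator Hilbert space. A minimal Python sketch follows (assuming {\tt numpy}; the values of $\theta_0$, $k$, $N$ and the truncation dimension $M$ are arbitrary choices): it checks the commutator (\ref{2.29}) away from the truncation edge and recovers the lowest eigenvalues $\frac{2\theta_0}{k}(n+\frac 12)$ of the invariant (\ref{.6}).
\begin{verbatim}
import numpy as np

theta0, k, N, M = 1.0, 2.0, 3, 40       # M = truncation of the Fock space

a  = np.diag(np.sqrt(np.arange(1, M)), 1)   # lowering operator, a|n> = sqrt(n)|n-1>
ad = a.T                                    # raising operator

c  = np.sqrt(theta0 * N / (2 * k))
T1 = c * (ad + a)                           # operator analogue of Tr X_1
T2 = 1j * c * (ad - a)                      # operator analogue of Tr X_2

comm = T1 @ T2 - T2 @ T1                    # should equal i*theta0*N/k away from the edge
print(np.allclose(comm[:M-1, :M-1], 1j * theta0 * N / k * np.eye(M - 1)))

inv = (T1 @ T1 + T2 @ T2).real / N          # (1/N)[(Tr X_1)^2 + (Tr X_2)^2]
print(np.sort(np.linalg.eigvalsh(inv))[:5]) # ~ (2*theta0/k)*(n+1/2), n = 0,1,2,...
\end{verbatim}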
In order to make these results consistent with the constraint $q^2_0+p^2_0\approx\ q_3^2+\vec p_3^2+d_0$, we must have $d_0=0$, which from (\ref{dzero}) implies that $\Delta$ is traceless. This result only applies for $N=2$. We shall show that $\Delta$ has nonvanishing trace when $N>2$. In (\ref{chngnactn}) we wrote down the change of the action $S^{(N)}$ under a $2\pi$-rotation. Using (\ref{dzero}), it is just $\pi d_0$. Since we have found that $d_0=0$, we here get that the action is invariant under $2\pi$-rotations. This result is only valid for $N=2$. For general $N\times N$ matrices, $d_0$ depends on $N$, as we show below, and this leads to nontrivial transformation properties of the action. \subsubsection{$N>2$} For $N>2$ it is convenient to expand $\tilde X_1$ and $\tilde X_2$ in the Cartan-Weyl basis of $U(N)$, \be \tilde X_1=\sqrt{\frac{\theta_0} k}\;\Bigl(\frac{p_0\BI} {\sqrt{N}} +i\sqrt{2}p_aH_a +i p_{-\vec\alpha }E_{\vec\alpha }\Bigr)\quad\qquad \tilde X_2=\sqrt{\frac{\theta_0} k}\; \Bigl( \frac{ q_0\BI }{\sqrt{N}}+i\sqrt{2}q_aH_a + iq_{-\vec\alpha }E_{\vec\alpha }\Bigr) \;, \ee where $\{H_a,\;a=1,...,N-1\}$ span the Cartan subalgebra and $E_{\vec\alpha }$ are the root vectors, $\vec\alpha$ labeling the $N(N-1)$ roots. $\BI$ is again the identity matrix. Thus \beqa [H_a,H_b]&=&0\cr&&\cr [H_a,E_{\vec\alpha }]&=& \alpha_a E_{\vec\alpha }\cr&&\cr [E_{\vec\alpha },E_{\vec\beta }]&=&\left\{ \matrix{\alpha_aH_a\;,& {\rm if }\quad \vec\alpha+\vec\beta=0\cr N_{\vec \alpha,\vec\beta} E_{\vec\alpha +\vec\beta}\;,&{\rm if }\quad \vec\alpha+\vec\beta\quad{\rm is }\;{\rm a }\;{\rm root} \cr 0\;,&{\rm if }\quad \vec\alpha+\vec\beta\quad{\rm is }\;{\rm not}\;{\rm a }\;{\rm root}}\right. \;, \eeqa where for all non zero roots $\vec\gamma =\vec\alpha+\vec\beta$, $ N_{\vec \alpha,\vec\beta}=N_{\vec \beta,\vec\gamma}=N_{\vec\gamma ,\vec\alpha}\ne 0$. The representation can be chosen such that \be {\rm Tr}H_aH_b=\frac 12\delta_{a,b}\qquad {\rm Tr}E_{\vec\alpha }E_{\vec\beta } = \delta_{\vec \alpha+\vec \beta,0}\qquad {\rm Tr}H_aE_{\vec\alpha }=0\ee Then from (\ref{.06}), we recover canonical brackets for the $q$'s and $p$'s \beqa \{q_0,p_0\}&=&-1\label{4.35}\\ \{q_a,p_b\}&=&\delta_{a,b}\label{4.36}\\ \{q_{\vec \alpha},p_{\vec \beta}\}&=& \delta_{\vec \alpha+\vec \beta,0}\eeqa In terms of the canonical coordinates, the generators of the $SU(N)$ transformations are the first class constraints \beqa \Phi_a&=&\sum_{\vec \alpha}\alpha_a q_{\vec \alpha} p_{-\vec \alpha}\approx 0\cr\Phi_{\vec \alpha}&=&\sqrt{2}\sum_{a} \alpha_a( q_{-\vec \alpha}p_a- p_{-\vec \alpha}q_a) +\sum_{\vec \beta\ne \vec \alpha}N_{\vec\alpha-\vec\beta ,\vec\beta}\,q_{-\vec \beta}p_{\vec \beta-\vec\alpha}\approx 0\label{sunggenqp} \eeqa Following the procedure outlined in section 3, some of the $SU(N)$ gauge freedom can be eliminated, up to Weyl reflections, by rotating the traceless part of one of the matrices, say $\tilde X_1$, to the $SU(N)$ Cartan sub-algebra. (The freedom to rotate around the Cartan generators is not eliminated by this gauge fixing, since the resulting matrix $\tilde X_1$ is invariant under such rotations.) More specifically, we can fix a point on the adjoint orbit of $\tilde X_1$ by imposing the gauge fixing constraints $p_{\vec\alpha} \approx 0$. Provided that this point is not restricted to intersect certain directions, i.e., $\alpha_a p_a = 0$, one gets from (\ref{sunggenqp}) that all $q_{\vec\alpha}$'s also vanish. 
Thus, in this generic case, the surviving phase space variables in $\tilde X_1$ and $\tilde X_2$ lie in the direction of the $U(N)$ Cartan subalgebra. The nonvanishing Dirac brackets of these variables, which include $q_0$ and $p_0$, are identical to the nonvanishing brackets (\ref{4.35}) and (\ref{4.36}).\footnote{ Dirac brackets $\{\,,\,\}_{ {\tt DB}}$ in the generic case are computed using $\{\Phi_{\vec\alpha}, p_{\vec\beta}\}\approx \sqrt{2}\alpha_ap_a\delta_{\vec\alpha,\vec\beta}$ and $\{\Phi_{a}, p_{\vec\beta}\}\approx 0$. For two functions $A$ and $B$ on phase space, one gets $$ \{A,B\}_{ {\tt DB}}=\{A,B\} +\sum_{\vec \alpha}\frac 1{\sqrt{2}\,\alpha_ap_a}\Bigl( \{A, \Phi_{\vec\alpha}\}\{p_{\vec\alpha},B\}- \{B, \Phi_{\vec\alpha}\}\{p_{\vec\alpha},A\} \Bigr)\;,$$ where the sum is over the roots. The parenthesis vanishes when $A$ and $B$ are taken from the set $q_0,p_0,q_a$ and $p_a$, showing that their Dirac brackets are identical to the brackets (\ref{4.35}) and (\ref{4.36}). Furthermore, these Dirac brackets can be extended to include the lines in phase space along the root directions, $\alpha_ap_a=0.$ } The $2N-$dimensional reduced phase space spanned by $q_0,p_0,q_a$ and $p_a$ are subject to one more constraint and contain one gauge degree of freedom associated with $ {\rm Diff}_0$. The constraint can again be written in the form (\ref{Neq2cnstrnt}), where here $\vec q$ and $\vec p$ are $N-1$ dimensional vectors, $\vec q=(q_1,..,q_{N-1})$ and $\vec p=(p_1,..,p_{N-1})$. So, as stated before, there are $2(N-1)$ independent physical variables. After imposing (\ref{Neq2cnstrnt}) and the gauge fixing constraint $q_0\approx 0$, we can take them to be $\vec q$ and $\vec p$, thereby eliminating $q_0$ and $p_0$, The Dirac brackets for $\vec q$ and $\vec p$, i.e., (\ref{4.36}), are once again preserved by this gauge fixing. From the constraint (\ref{Neq2cnstrnt}), $q_0^2+p_0^2\approx p_0^2$ is now the sum of $N-1$ harmonic oscillator Hamiltonians. If we denote the eigenvalues of their corresponding number operators by $n_a=0,1,...$ , then the eigenvalues for the operator analogue of $q_0^2+p_0^2$ are $2\sum_{a=1}^{N-1} n_a+ {N-1+d_0}$. This means that the eigenvalues of the $SU(N)\rtimes {\rm Diff}_0$ invariant (\ref{.6}) are \be\frac {2\theta_0 } k\Bigl(\sum_{a=1}^{N-1} n_a+\frac {N-1+d_0}2 \Bigr)\label{427}\ee In comparing with (\ref{nrgignvlu}), $n=\sum_{a=1}^{N-1} n_a+\frac {N+d_0}2-1$. Since the eigenvalues of (\ref{nrgignvlu}) and (\ref{427}) must agree, we have that \be n=\sum_{a=1}^{N-1} n_a\;,\qquad \quad d_0=2-N \label{d0frN}\ee Thus only one value of $d_0$ is possible for any given $N$. For $N=2$ the result is $d_0=0$, which agrees with what we found previously. The degeneracy $g_n^{(N)}$ of the $n^{\rm th}$ excited level of the matrix model is identical to what one would get from the $N-1$ dimensional isotropic harmonic oscillator. The system can be expressed in terms of $N-1$ pairs of raising and lowering operators, $\hat a_a^\dagger$ and $\hat a_a, $ respectively. Since we want the degrees of freedom to be associated with those of a gravitational field, it makes sense to identify $\hat a_a^\dagger$ and $\hat a_a $ with {\it bosonic } creation and annihilation operators. In this picture, the $n^{\rm th}$ excited level consists of states of $n$ identical bosons occupying $N-1$ sites. 
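The partition counting that underlies the degeneracy formula given next can be cross-checked by brute force for small values. In the Python sketch below (standard library only; the ranges of $N$ and $n$ are arbitrary choices), unordered occupation patterns of $n$ quanta over $N-1$ sites are enumerated directly and compared with the sum of partition numbers.
\begin{verbatim}
from functools import lru_cache
from itertools import combinations_with_replacement

@lru_cache(maxsize=None)
def p_exact(n, k):
    # number of partitions of n into exactly k positive parts
    if n == 0 and k == 0:
        return 1
    if n <= 0 or k <= 0:
        return 0
    return p_exact(n - 1, k - 1) + p_exact(n - k, k)

def degeneracy(n, N):
    # sum_{k=1}^{N-1} p(n,k): partitions of n into at most N-1 parts
    return sum(p_exact(n, k) for k in range(1, N))

def brute_force(n, N):
    # distinct unordered occupation patterns of n quanta over N-1 sites
    patterns = set()
    for sites in combinations_with_replacement(range(N - 1), n):
        occ = sorted((sites.count(s) for s in range(N - 1)), reverse=True)
        patterns.add(tuple(occ))
    return len(patterns)

for N in (3, 4, 5):
    print([(degeneracy(n, N), brute_force(n, N)) for n in range(1, 7)])
\end{verbatim}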
The degeneracy $g_n^{(N)}$ is a sum of the number $p(n,k)$ of partitions of $n$ into $k$ parts, \be g_n^{(N)}=\sum^{N-1}_{k=1} p(n,k)\label{gsubn}\ee This sum is known to be identical to the number of partitions $p_{N-1}(n)$ of $n$ into parts none of which exceeds $N-1$.\cite{Grosswald} In the asymptotic limit $N,n\rightarrow\infty$, with $N\ge n$, it is given by the Hardy-Ramanujan formula \be g_n^{(N)}\rightarrow \frac 1{4n\sqrt{3} }\exp{\Bigl(\pi\sqrt{\frac{2n}3}\Bigr)}\label{4.32}\ee We define the entropy as the log of the degeneracy. Upon taking the log of (\ref{4.32}) and substituting (\ref{nrgignvlu}), one gets the following result for the entropy of the $n^{\rm th}$ excited level in the asymptotic limit \be S_n\sim \pi \sqrt{\frac{k {\cal I}^{(2)}_n}{3\theta_0}}\label{ntropee}\ee The usual formula for the BTZ black hole entropy (\ref{btzntrop}) is recovered when we make the identification of the quadratic invariant (\ref{.6}) with the square of the black hole horizon radius, $r_+^2$ and the identification of constants in the two theories, $k/\theta_0$ and $\frac{3}{4G^2}$. The latter sets the scale for the eigenvalues of (\ref{.6}), and hence $r_+^2$. It says that they are separated by $\frac 83 G^2$, and that the smallest value for the horizon radius is $\frac 2{\sqrt{3}}\,G$. Concerning the asymptotic limit, we assumed above that both $N$ and $n$ go to $\infty$, with $N\ge n$. Other limits are possible. The leading order entropy doesn't grow as fast as in (\ref{ntropee}) and can depend on $N$ for those cases. For example, if one instead holds the size $N$ of the matrices fixed while taking $n\rightarrow\infty$, then\cite{Grosswald} $g_n^{(N)}\rightarrow \frac{n^{N-2} }{(N-1)!(N-2)!}$. The leading order behavior of the entropy is logarithmic in this case, \be S_n\sim (N-2) \log \frac{k {\cal I}^{(2)}_n}{2\theta_0},\;\;\qquad N>2\ee Finally, we comment on the rotational properties of the collective system. From (\ref{chngnactn}) and (\ref{dzero}), the change of the action $S^{(N)}$ under a $2\pi$-rotation is $\pi d_0$. Our result (\ref{d0frN}) for arbitrary $N$, then gives a change of $\pi (2-N)$. So under a $2\pi$-rotation, the phase $\exp{i S^{(N)}}$ picks up a factor $(-1)^N$. This means that the collective quantum system behaves as an integer (half-integer) spin particle for even (odd) $N$ under a $2\pi$-rotation. \section{Concluding remarks} We have shown that the BTZ black hole entropy formula emerges from a Chern-Simons matrix model in the asymptotic limit $N,n\rightarrow\infty$, with $N\ge n$. One does not recover Chern-Simons theory on the Moyal-Weyl plane in this limit, even though both are expressed in terms of two infinite dimensional matrices representing the spatial coordinates. This is fortunate because Chern-Simons theory on the Moyal-Weyl plane has no dynamical content. The two systems also differ by the fact that our matrix model has only commutative configurations, which persist in the limit, while (\ref{fij0}) states that Chern-Simons theory on the Moyal-Weyl plane has noncommutative configurations. An important ingredient in the matrix model is the $ {\rm Diff}_0$ symmetry. In addition to corresponding to the rotational symmetry of the BTZ solution, it is responsible for the first class constraint (\ref{genofrr}), from which the density of states was computed. The entropy law followed after identifying the invariant (\ref{.6}), which was quadratic in the coordinates $X_i$, with the square of the horizon radius, $r^2_+$, and taking the asymptotic limit. 
From the identification, one gets a harmonic oscillator spectrum for $r^2_+$. From a further identification of the constants of the two systems, one sets the scale of the eigenvalues of $r_+$. For example, the ground state value of $r_+$ is $\frac 2{\sqrt{3}}\,G $. An exact expression for the entropy can be given for any eigenvalue for $r_+$ and for any $N$. Lastly, we found that the collective quantum system behaves as an integer (half-integer) spin particle for even (odd) $N$ under a $2\pi$-rotation. It remains to be seen whether the BTZ geometry can be recovered from this matrix model, with perhaps some modifications, in some asymptotic limit. In this regard, the 4 dimensional Schwarzschild and Reissner-Nordstr\"om black hole geometries were shown to emerge from a matrix model in a `semiclassical' limit.\cite{Blaschke:2010ye} The relevant matrix model in that case was of the Yang-Mills type, with an action that involved quadratic and higher order terms. It also required an embedding in higher dimensions. An analogous derivation of the BTZ solution, starting from a higher dimensional Yang-Mills type matrix model, may also be possible. Our work suggests that the total action should include a topological term in order to recover the correct BTZ entropy formula. It also suggests that commuting configurations and the Diff${}_0$ symmetry should play an important role. A generalization of the topological action examined here can be made to any odd number of dimensions. Questions concerning whether or not the computations carried out here are generalizable to higher dimensions, or if topological terms play a role in higher dimensional matrix models, are worth pursuing. \bigskip {\Large {\bf Acknowledgments} } \noindent We are very grateful to A. Pinzul for valuable discussions. A.S. was supported in part by the DOE, Grant No. DE-FG02-10ER41714. \bigskip
\section{\label{}} Quantum data compression is one of the most fundamental tasks in quantum information theory\cite{Nielsen2000,Wilde2011}. Schumacher first provided a tight bound (equal to the von Neumann entropy of the source) to which quantum information may be compressed\cite{Schmacher1995}. Since then, many compression schemes have been proposed\cite{Koashi2001,Bostroem2002,Jozsa2003,Hayashi2010,Kaltchenko2003, Bennett2006,Jozsa1998,Hayashi2002a,Hayashi2002b,Braunstein2000}. In this paper we consider universal quantum data compression in the case where it is only known that the entropy of the source does not exceed some given value $h$. In classical information, an explicit example of such compression is a scheme based on the theory of types developed by Csiszar and K\"{o}rner\cite{CK2011}. They showed that the data can be compressed to $h$ bits/signal by encoding all the sequences $x^{n}$ for which $H(\mathbf {p_{x}})\leq h +\varepsilon$ (called $CK$ sequences), where $\mathbf{p_{x}}$ is the type of $x^{n}$ and $H(\cdot)$ is the Shannon entropy function. For quantum information, an analogous theory was established\cite{Jozsa1998} by Jozsa et al. They extended the classical $CK$ sequence set to a quantum subspace $\Xi(B)$ for a given basis $B$, and then to $ \Upsilon$, which is the span of $ \Xi(B)$ as $B$ ranges over all bases. They proved that the dimension of $\Upsilon$ is at most some polynomial multiple of $\dim{\Xi(B)}$, so the compression rate $h$ is achievable asymptotically. We note that their proof is based on the $CK$ sequence set, so a natural question arises: Is the $CK$ set essential to the proof? Or can it be replaced by a smaller set? In this paper, we give the answer. It will be shown that, if we replace the $CK$ set with the entropy-typical set $\{ x^{n}:\left| H(\mathbf{p_{x}})-h \right|\leq \varepsilon \} $, the proof still holds. This result is based on the quantum entropy-typical subspace theory, which reveals that any $\rho^{\otimes n}$ with entropy $\leq h$ can be preserved by the entropy-typical subspace with entropy $=h$. Before presenting our main results, we begin by describing some basic concepts which will be used later. Let $\chi=\{ 1, 2,..., d\}$ be an alphabet with $d$ symbols. We use $\mathbf{p}=\left( p(1), p(2),...,p(d) \right)$ to denote a probability distribution on $\chi$, where $p(a)$ is the probability of the symbol $a$. Let $X_{1}, X_{2},..., X_{n}$ be a sequence of $n$ symbols from $\chi$. We will use the notation $x^{n}$ to denote a sequence $x_{1}, x_{2},..., x_{n}$. The type $\mathbf{p_{x}}$ of $x^{n}$ is the relative proportion of occurrences of each symbol in $\chi $, i.e. $\mathbf{p_{x}}(a)=N(a|x^{n})/n$ for all $ a\in \chi$, where $N(a|x^{n})$ is the number of times the symbol $a$ occurs in the sequence $x^{n}$. The strongly typical set of a source with distribution $\mathbf {p}$ is defined as: \begin{eqnarray} A_{\varepsilon}(\mathbf{p})= \bigg \{ x^{n}: \left| \mathbf{p_{x}}(a)-p(a) \right| \leq \frac{\varepsilon}{\left|\chi\right|}, \forall \ a\in \chi \bigg \} \label{eqn:1} \end{eqnarray} $A_{\varepsilon}(\mathbf{p})$ is a high-probability set\cite{Cover2012}, i.e. for any fixed $\varepsilon > 0$ and $\delta > 0$, when $n$ is large enough, \begin{eqnarray} \sum_{x^{n} \in A_{\varepsilon}(\mathbf{p})}p(x^{n}) \geq 1-\delta \label{eqn:2} \end{eqnarray} where $p(x^{n})=p(x_{1})p(x_{2})...p(x_{n})$. Now we specify the definition of the classical entropy-typical set.
\noindent\textbf{Definition1:} \emph{Given $\varepsilon > 0$ and $h > 0$, the classical entropy-typical set $T_{\varepsilon}(h)$ is defined as:} \begin{eqnarray} T_{\varepsilon}(h)=\{ x^{n}:\left| H(\mathbf{p_{x}})-h \right|\leq \varepsilon \} \label{eqn:3} \end{eqnarray} \emph{where $H(\cdot)$ is the Shannon entropy function.}\\ \noindent\emph{Property1.1.} According to the type method theory\cite{CK2011,Cover2012}, we can easily know, for any $\varepsilon >0$, \begin{eqnarray} \left | T_{\varepsilon}(h) \right | \leq (n+1)^{d} 2^{n(h+\varepsilon)} \label{eqn:4} \end{eqnarray} \noindent\emph{Property1.2.} For any source with $H(\mathbf{p})=h $, $T_{\varepsilon}(h)$ is a high-probability set, i.e. for any $\varepsilon > 0$ and $\delta > 0$, for sufficiently large n, \begin{eqnarray} \sum_{x^{n} \in T_{\varepsilon}(h)}p(x^{n}) \geq 1-\delta \label{eqn:5} \end{eqnarray} \emph{Proof}: It is easy to show that $T_{\varepsilon}(h)$ contains a subset $A_{\varepsilon^{\prime}}(\mathbf{p})$. Since $H(\cdot)$ is a continuous function, for any $\varepsilon >0$ there exist $\varepsilon^{\prime} >0$ such that $\left| H(\mathbf{p_{x}})-H(\mathbf{p})\right| \leq \varepsilon$ for all $\left| \mathbf{p_{x}}-\mathbf{p} \right| \leq \varepsilon^{\prime}$. Combined with the definition of the strongly typical set, we see that, $\left| H(\mathbf{p_{x}})-H(\mathbf{p})\right|=\left| H(\mathbf{p_{x}})-h\right| \leq \varepsilon$ holds for all $x^{n}\in A_{\varepsilon^{\prime}}(\mathbf{p}) $, which means $A_{\varepsilon^{\prime}}(\mathbf{p})\subseteq T_{\varepsilon}(h)$. Thus for sufficiently large n, \begin{eqnarray} \sum_{x^{n} \in T_{\varepsilon}(h)}p(x^{n}) \geq \sum_{x^{n} \in A_{\varepsilon^{\prime}}(\mathbf{p})}p(x^{n}) \geq 1-\delta \label{eqn:6} \end{eqnarray} \\ Let $\mathcal{H}$ be a d-dimensional Hilbert space and $B=\left \{ \left | e_{1} \right \rangle,\left | e_{2} \right \rangle,...,\left | e_{d} \right \rangle \right \}$ be a basis of $\mathcal{H}$. We can extend $T_{\varepsilon}(h)$ to quantum case.\\ \noindent\textbf{Definition2.} \emph{The entropy-typical subspace for a given basis B can be defined as:} \begin{eqnarray} \Xi(h,B)=span\{ \left | e_{x_{1}}e_{x_{2}}...e_{x_{n}} \right \rangle:x^{n} \in T_{\varepsilon}(h) \} \label{eqn:7} \end{eqnarray} Denote the projector onto $\Xi(h,B)$ by $\Pi(h,B)$, \begin{eqnarray} \Pi(h,B)=\sum_{x^{n} \in T_{\varepsilon}(h)}\ | e_{x_{1}}e_{x_{2}}...e_{x_{n}} \rangle \langle e_{x_{1}}e_{x_{2}}...e_{x_{n}} \ | \label{eqn:8} \end{eqnarray} From the properties of $T_{\varepsilon}(h)$, we can get the properties of $\Xi(h,B)$.\\ \emph{Property2.1.} For any $\varepsilon > 0$, \begin{eqnarray} \dim{\Xi(h,B)}=|T_{\varepsilon}(h)| \leq (n+1)^{d}2^{n(h+\varepsilon)} \label{eqn:9} \end{eqnarray} \emph{Property2.2.} Given a mixed state $\rho$ with von Neumann entropy $S(\rho)=h$, if the eigenstates of $\rho $ lies in B, then for any fixed $\varepsilon > 0$ and $\delta > 0$, for sufficiently large n, \begin{eqnarray} tr(\Pi(h,B)\rho^{\otimes n})=\sum_{x^{n} \in T_{\varepsilon}(h)}p(x^{n}) \geq 1-\delta \label{eqn:10} \end{eqnarray} Now let $\Upsilon(h)$ be the subspace of $\mathcal{H}^{\otimes n}$ which contains $\Xi(h,B)$ for all choices of basis $B$ and $\Pi_{\Upsilon}(h)$ be the projector onto $\Upsilon(h)$. Any other basis $B^{\prime}$ can be obtained from $B$ by applying some $d \times d$ unitary transformation $U$, thus $\Xi(h,B^{\prime})$ is obtained by applying $U^{\otimes n}$ to $\Xi(h,B)$. 
Then $\Upsilon(h)$ can be represented as the span of all $U^{\otimes n}\left |\phi \right \rangle$ where $U$ ranges over all $d \times d$ unitary matrices and $\left |\phi \right \rangle$ ranges over $\Xi(h,B)$.\\ \noindent\textbf{Definition3} \emph{The quantum entropy-typical subspace $\Upsilon(h)$ is defined as} \begin{eqnarray} \Upsilon(h)=span\{U^{\otimes n}|\phi\rangle:U \in \mathcal{U},|\phi\rangle \in \Xi(h,B) \} \label{eqn:11} \end{eqnarray} \emph{where $\mathcal{U}$ is the collection of all $d \times d$ unitary matrices.}\\ According to Ref.~\cite{Jozsa1998}, the expansion of dimension from $\Xi(h,B)$ to $\Upsilon(h)$ is up to $(n+1)^{d^{2}}$. Combined with property2.1, we have \begin{eqnarray} \dim{\Upsilon(h)} \leq (n+1)^{(d^{2}+d)}2^{n(h+\varepsilon)} \label{eqn:12} \end{eqnarray} An immediate consequence of property2.2 is that $\Pi_{\Upsilon}(h)$ preserves $\rho^{\otimes n}$ approximately if $S(\rho)=h $. However, we give a stronger theorem below.\\ \noindent\textbf{Theorem1} \emph{Given a mixed state $\rho$, if the von Neumann entropy $S(\rho)\leq h $, then for any fixed $\varepsilon > 0$ and $\delta > 0$, for sufficiently large n,} \begin{eqnarray} tr(\Pi_{\Upsilon}(h)\rho^{\otimes n})\geq 1-\delta \label{eqn:13} \end{eqnarray} \\ \textbf{Remark:} $\Pi_{\Upsilon}(h)$ preserves $\rho^{\otimes n}$ not only for the case that $S(\rho)=h$, but also for $S(\rho) < h$!\\ To prove the theorem, we need the following lemma:\\ \noindent\textbf{Lemma1} \emph{Given a mixed state $\rho$, if $S(\rho)\leq h \leq \log d$, then there exists a basis $B^{\prime}=\{|e_{1}^{\prime}\rangle, |e _{2}^{\prime}\rangle... |e_{d}^{\prime} \rangle \}$ such that $S(\rho^{\prime})=h$, where $\rho^{\prime}=\sum_{i}\langle e_{i}^{\prime}|\rho|e_{i}^{\prime}\rangle|e_{i}^{\prime}\rangle\langle e_{i}^{\prime}|$.}\\ \noindent\emph{Proof}: Suppose the spectral decomposition of $\rho$ is $\rho=\sum_{k}p_{k}|e_{k}^{0}\rangle\langle e_{k}^{0}|$, where the eigenstates $|e_{k}^{0}\rangle $ lie in the basis $B^{0}=\{|e_{1}^{0}\rangle, |e _{2}^{0}\rangle... |e_{d}^{0} \rangle \}$. Define a basis $B^{1}=\left\{|e_{1}^{1}\rangle,|e_{2}^{1}\rangle...|e_{d}^{1}\rangle\right\}$ by \begin{eqnarray} |e_{l}^{1}\rangle=\frac{1}{\sqrt{d}}\sum_{k=1}^{d}\exp\left\{j2\pi\frac{kl}{d}\right\} \left|e_{k}^{0}\right\rangle \label{eqn:14} \end{eqnarray} where $j$ is the imaginary unit. If we measure $\rho$ on the basis $B^{1}$, the resulting ensemble is $\rho^{1}=\sum_{l}\langle e^{1}_{l}|\rho|e^{1}_{l}\rangle|e^{1}_{l}\rangle\langle e^{1}_{l}|$. It can be verified that $S(\rho^{1})=\log d$.\\ Define a unitary operator $W$ by \begin{eqnarray} W=\sum_{i}|e_{i}^{1}\rangle\langle e_{i}^{0}| \label{eqn:15} \end{eqnarray} Suppose the spectral decomposition of $W$ is $W=\sum_{s}\exp\{j\theta_{s}\}|e^{W}_{s}\rangle\langle e^{W}_{s}|$. With the basis $\{e_{s}^{W}\}$, we can define a function \begin{eqnarray} U(y_{1},y_{2},...,y_{d})=\sum_{s}\exp\{jy_{s}\}|e^{W}_{s}\rangle\langle e^{W}_{s}| \label{eqn:16} \end{eqnarray} where $y_{s}\in [0,\theta_{s}]$. Obviously, $U$ is unitary, and \begin{eqnarray} U(0,0,...,0)=I, \ U(\theta_{1}, \theta_{2},...,\theta_{d})=W \label{eq:17} \end{eqnarray} By applying $U(y_{1},y_{2},...,y_{d})$ to each state of the basis $B^{0}$, we can obtain the basis $B^{\mathbf{y}}=\{|e_{1}^{\mathbf{y}}\rangle, |e_{2}^{\mathbf{y}}\rangle,...|e_{d}^{\mathbf{y}}\rangle\}$, where $|e_{i}^{\mathbf{y}}\rangle=U(y_{1},y_{2},...,y_{d})|e_{i}^{0}\rangle$.
If we measure $\rho$ on the basis $B^{\mathbf{y}}$, the resulting ensemble is \begin{eqnarray} \rho^{\mathbf{y}}&=&\sum_{i}\left\langle e_{i}^{\mathbf{y}}\right|\rho\left|e_{i}^{\mathbf{y}}\right\rangle \left|e_{i}^{\mathbf{y}}\right\rangle\left\langle e_{i}^{\mathbf{y}}\right| \nonumber\\ &=&\sum_{i}\left\langle e_{i}^{\mathbf{y}}\right|\left( \sum_{k}p_{k}\left|e_{k}^{0}\right\rangle\left\langle e_{k}^{0}\right|\right) \left|e_{i}^{\mathbf{y}}\right\rangle \left|e_{i}^{\mathbf{y}}\right\rangle\left\langle e_{i}^{\mathbf{y}}\right| \nonumber\\ &=&\sum_{i} \sum_{k}p_{k}\left|\langle e_{k}^{0}|e_{i}^{\mathbf{y}}\rangle\right|^{2}|e_{i}^{\mathbf{y}}\rangle\langle e_{i}^{\mathbf{y}}| \nonumber\\ &=&\sum_{i} \sum_{k}p_{k}\left|\langle e_{k}^{0}|U(y_{1},y_{2},...,y_{d})|e_{i}^{0}\rangle\right|^{2}|e_{i}^{\mathbf{y}}\rangle\langle e_{i}^{\mathbf{y}}| \nonumber\\ &=&\sum_{i} \sum_{k}p_{k}\left|\sum_{s}\exp\{jy_{s}\}\langle e_{k}^{0}|e_{s}^{W}\rangle\langle e_{s}^{W}|e_{i}^{0}\rangle\right|^{2} |e_{i}^{\mathbf{y}}\rangle\langle e_{i}^{\mathbf{y}}| \nonumber \label{eqn:18} \end{eqnarray} Suppose the spectral decomposition of $\rho^{\mathbf{y}}$ is $\rho^{\mathbf{y}}=\sum_{i}p_{i}^{\mathbf{y}}|e_{i}^{\mathbf{y}}\rangle\langle e_{i}^{\mathbf{y}}|$, then \begin{eqnarray} p_{i}^{\mathbf{y}}=\sum_{k}p_{k}\left|\sum_{s}\exp\{jy_{s}\}\langle e_{k}^{0}|e_{s}^{W}\rangle\langle e_{s}^{W}|e_{i}^{0}\rangle\right|^{2} \label{eqn:19} \end{eqnarray} Since $S(\rho^{\mathbf{y}})=-\sum_{i}p_{i}^{\mathbf{y}}\log {p_{i}^{\mathbf{y}}}$, we can see that $S(\rho^{\mathbf{y}})$ can be represented as a multi-variable function $S(y_{1},y_{2}...,y_{d})$ with domain $\left\{(y_{1},y_{2},...y_{d}) | \ 0\leq y_{s}\leq \theta_{s}, s=1,2,...d\right\}$. $S(y_{1},...,y_{d})$ is an elementary function, so it is continuous. Furthermore, it is easy to verify that \begin{eqnarray} S(0,0,...,0)=S(\rho), \ \ S(\theta_{1}, \theta_{2},...,\theta_{d})=\log d \label{eq:20} \end{eqnarray} By the intermediate-value theorem, for any $h$ between $S(\rho)$ and $\log d$, there exists a point $(\alpha_{1},\alpha_{2},...\alpha_{d})$ in the domain such that $S(\alpha_{1},\alpha_{2},...,\alpha_{d})=h$. Let $|e_{i}^{\prime}\rangle=U(\alpha_{1},\alpha_{2},...,\alpha_{d})|e_{i}^{0}\rangle$; then the basis $B^{\prime}=\{|e_{i}^{\prime}\rangle, i=1\ldots d\} $ is the desired basis. With this lemma, we can prove Theorem1: \begin{eqnarray} tr(\Pi_{\Upsilon}(h)\rho^{\otimes n})&\geq&tr[\Pi(h,B^{\prime})\rho^{\otimes n}]\nonumber \\ &=&\sum_{x^{n} \in T_{\varepsilon}(h)}\left\langle e_{x_{1}}^{\prime}e_{x_{2}}^{\prime}...e_{x_{n}}^{\prime}\right|\rho^{\otimes n}\left|e_{x_{1}}^{\prime}e_{x_{2}}^{\prime}...e_{x_{n}}^{\prime}\right\rangle \nonumber\\ &=&\sum_{x^{n} \in T_{\varepsilon}(h)}\prod_{i=1}^{n}\left\langle e_{x_{i}}^{\prime}\right|\rho\left|e_{x_{i}}^{\prime}\right\rangle \nonumber\\ &=&\sum_{x^{n} \in T_{\varepsilon}(h)}\prod_{i=1}^{n}\left\langle e_{x_{i}}^{\prime}\right|\rho^{\prime}\left|e_{x_{i}}^{\prime}\right\rangle \nonumber\\ &=&tr[\Pi(h,B^{\prime})\rho^{\prime \otimes n}]\geq 1-\delta \nonumber \label{eqn:21} \end{eqnarray} The first inequality holds because $\Xi(h,B^{\prime})\subseteq \Upsilon(h)$. The third equality holds because $\rho^{\prime}=\sum_{i}\left\langle e^{\prime}_{i}\right|\rho \left|e^{\prime}_{i}\right\rangle\left|e^{\prime}_{i}\right\rangle \left\langle e^{\prime}_{i}\right| $. The last inequality follows from (\ref{eqn:10}).
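The interpolation argument above can be illustrated numerically for a single qubit. The following Python sketch (assuming {\tt numpy} and {\tt scipy}; the eigenvalues of $\rho$ and the target value $h$ are arbitrary choices) builds $W$ from the basis (\ref{eqn:14}), follows the one-parameter slice $y_{s}=t\,\theta_{s}$ of the family $U(y_{1},...,y_{d})$ (which already interpolates between $I$ and $W$), and locates by bisection a basis in which the dephased state has entropy $h$.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def shannon(p):
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

d = 2
p0 = np.array([0.9, 0.1])       # eigenvalues of rho; S(rho) ~ 0.469 bits
h = 0.8                         # target, with S(rho) <= h <= log2(d) = 1

# B^0 = eigenbasis of rho (computational basis), B^1 = Fourier basis of eq. (14)
F = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)
W = F                           # columns of F are the |e_l^1> expressed in B^0
phases, V = np.linalg.eig(W)    # here W is symmetric with distinct eigenvalues,
theta = np.angle(phases)        # so V is an orthonormal eigenbasis

def dephased_entropy(t):
    U = V @ np.diag(np.exp(1j * t * theta)) @ V.conj().T   # U(t*theta_1,...,t*theta_d)
    probs = np.array([np.real(np.vdot(U[:, i], np.diag(p0) @ U[:, i]))
                      for i in range(d)])                  # <e_i^y| rho |e_i^y>
    return shannon(probs)

print(dephased_entropy(0.0), dephased_entropy(1.0))        # ~ S(rho) and ~ log2(d)
t_star = brentq(lambda t: dephased_entropy(t) - h, 0.0, 1.0)
print(dephased_entropy(t_star))                            # ~ h
\end{verbatim}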
This result allows us to construct a universal compression scheme for all sources with von Neumann entropy $\leq h$, using the technique developed by Schumacher\cite{Schmacher1995,Wilde2011,Nielsen2000}. More precisely, the encoding operation is the map $\mathcal{C}^{n}: \mathcal{H}^{\otimes n} \rightarrow \mathcal{H}^{n}_{c}$, \begin{eqnarray} \mathcal{C}^{n}(\sigma)\equiv \Pi_{\Upsilon}(h)\sigma\Pi_{\Upsilon}(h)+\sum_{u}E_{u}\sigma E_{u}^{\dag} \label{eqn:22} \end{eqnarray} where $E_{u}\equiv |0\rangle\langle u|$. Here $|0\rangle$ is some standard state chosen from $\Upsilon(h)$ and $\left \{|u\rangle \right \} $ is an orthonormal basis for the orthocomplement of $\Upsilon(h) $. The decoding operation is the map $\mathcal{D}^{n}: \mathcal{H}^{n}_{c} \rightarrow \mathcal{H}^{\otimes n}$, $\mathcal{D}^{n}(\sigma)\equiv\sigma$. With these encoding and decoding operations, the fidelity of the compression satisfies \begin{eqnarray} F\left(\rho^{\otimes n},\mathcal{D}^{n}(\mathcal{C}^{n}(\rho^{\otimes n}))\right)&=&|tr(\Pi_{\Upsilon}(h)\rho^{\otimes n})|^{2}+\sum_{u}|tr(E_{u}\rho^{\otimes n})|^{2} \nonumber\\ &\geq & |tr(\Pi_{\Upsilon}(h)\rho^{\otimes n})|^{2} \nonumber\\ &\geq &|1-\delta|^{2}\geq 1-2\delta \nonumber \label{eqn:23} \end{eqnarray} Since $\delta$ can be made arbitrarily small for sufficiently large $n$, the compression scheme is reliable. The compression rate $R$ is given by \begin{eqnarray} R&=&\lim_{n\rightarrow \infty} \frac{\log{\dim{\Upsilon(h)}}}{n} \nonumber \\ &\leq &\lim_{n\rightarrow\infty} (d^{2}+d)\frac{\log(n+1)}{n}+h+\varepsilon \nonumber \label{eqn:24} \end{eqnarray} which tends to $h+\varepsilon$. Because $\varepsilon$ can be as small as desired, the rate $h$ is achievable asymptotically. Thus we have shown that for any given $h$ and sufficiently large $n$, projection onto $\Upsilon(h)$ provides reliable compression for all sources with von Neumann entropy $\leq h$. In this paper, we have given the definition of the quantum entropy-typical subspace $\Upsilon(h)$, and shown that any $\rho^{\otimes n}$ with $S(\rho)\leq h$ can be preserved approximately by $\Upsilon(h)$. This result implies a reliable universal compression scheme for the case that the von Neumann entropy of the source does not exceed $h$.\\ \begin{acknowledgments} This work is supported by the National Natural Science Foundation of China Grant No.61271174. \end{acknowledgments}
\section{Introduction} This paper studies a fundamental question arising from the theory of card shuffling, where the evolution of card positions is typically modeled by a Markov chain on the space of permutations on the set of cards. In this paper, we investigate a continuous-time Markov chain in which the cards at positions $i$ and $j$ are interchanged at rate $\alpha_{ij}$. Interchange rates may be zero if cards at the corresponding positions cannot be interchanged. This Markov chain is known as the {\em interchange process}. Another continuous-time Markov chain arises as the position of an arbitrary but fixed card in a deck which evolves according to the interchange process. This Markov chain is known as the {\em random-walk process}. A key question is how long it takes for a deck of cards to be well-shuffled in some sense, and an important quantity in addressing questions of this form is the {\em spectral gap}. Assuming the interchange rates $\alpha_{ij}$ are chosen so that both the interchange process and the random-walk process are irreducible, the spectral gap is defined as the negative of the second largest eigenvalue of the intensity matrix of the interchange process. A conjecture of Aldous and Diaconis from 1992, often referred to as Aldous' conjecture in the literature, says that the spectral gap of the interchange process is {\em exactly} the same as the spectral gap of the random-walk process. This conjecture, which is also listed in the open problem section of the recent book by Levin~{\em et al.}~\cite[Sec.~23.3]{MR2466937}, is the topic of the present paper. (Strictly speaking, the original conjecture is more restrictive than in the above discussion, since it only allows $\alpha_{ij}$ to take values in $\{0,1\}$). It is customary to think of the interchange and random-walk processes in terms of an undirected, connected, weighted graph $G$. Each vertex of $G$ has a label, and each edge $(i,j)$ of $G$ has an associated Poisson process with intensity $\alpha_{ij}\ge 0$. The Poisson processes on different edges are stochastically independent. At each Poisson epoch corresponding to edge $(i,j)$, the labels at vertices $i$ and $j$ are interchanged. The interchange process records the positions of all labels in the graph, while the random-walk process only records the position of a given label. \medskip Aldous' conjecture has attracted the attention of many researchers over the past decades, but all existing results rely on some special structure on the weights $\alpha_{ij}$. These results can roughly be categorized according to their proof methods: induction on the number of vertices in the graph \cite{MR1419872,MR2415139,starrconomos} or representation theory~\cite{cesi2,cesi,MR964069,MR626813,MR770635}. The current paper effectively combines these two approaches, and might serve as a first step towards proving Aldous' conjecture in full generality. The main idea behind the combination of mathematical induction and representation theory can be summarized as follows. In the induction step, a new vertex is attached to a graph for which it is known that the conjecture holds. If there is only one new edge incident to the new vertex, then standard eigenvalue bounds can be employed which imply that the conjecture holds for the new graph \cite{MR1419872}. 
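To make the content of the conjecture concrete, the equality of the two spectral gaps can be observed numerically on small instances. The following Python sketch (assuming {\tt numpy}; the graph size and the random weights are arbitrary choices) builds the negated intensity matrices of the random-walk and interchange processes for a randomly weighted complete graph on four vertices and compares their second smallest eigenvalues.
\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(7)
n = 4
alpha = np.triu(rng.uniform(0.1, 2.0, size=(n, n)), 1)
alpha = alpha + alpha.T                      # symmetric nonnegative edge weights

# random-walk Laplacian (negated intensity matrix of the random-walk process)
L_rw = np.diag(alpha.sum(axis=1)) - alpha

# interchange Laplacian: an n! x n! matrix indexed by permutations
perms = list(itertools.permutations(range(n)))
index = {p: a for a, p in enumerate(perms)}
L_int = np.zeros((len(perms), len(perms)))
for a, sigma in enumerate(perms):
    for i in range(n):
        for j in range(i + 1, n):
            # sigma' = (ij) sigma: relabel the values i and j
            sigma_p = tuple(j if v == i else i if v == j else v for v in sigma)
            b = index[sigma_p]
            L_int[a, b] -= alpha[i, j]
            L_int[a, a] += alpha[i, j]

gap_rw = np.sort(np.linalg.eigvalsh(L_rw))[1]
gap_int = np.sort(np.linalg.eigvalsh(L_int))[1]
print(gap_rw, gap_int)    # the two spectral gaps agree up to numerical error
\end{verbatim}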
In the general case where several edges are incident to the new vertex, however, the main technical obstacle has been that the addition of this vertex may significantly impact the spectrum of the resulting random walk and interchange process. This difficulty can be overcome with representation theory. In fact, we shall argue that the following conjecture suitably controls the changes to the spectrum if the new vertex is of degree $k-1$. An empty sum should be interpreted as zero, $\mathcal S_k$ is the symmetric group on $k$ letters, and $(ij)\in \mathcal S_k$ stands for the transposition of $i$ and $j$. \begin{conjecture} \label{conj} Given any $k\ge 2$, the following holds for any function $g:\mathcal S_k\to{\bb R}$ and any nonnegative $\gamma_{1},\ldots,\gamma_{k-1}$: \[ \sum_{\sigma\in \mathcal S_k} \sum_{i=1}^{k-1} \gamma_{i} [g(\sigma) - g((ik)\sigma)]^2\ge \sum_{\sigma\in \mathcal S_k} \sum_{1\le i<j\le k-1} \frac{\gamma_{i}\gamma_{j}} {\gamma_{1}+\ldots+\gamma_{k-1}} [g(\sigma) - g((ij)\sigma)]^2. \] \end{conjecture} This inequality can be interpreted as a comparison of two Dirichlet forms, with the left-hand side corresponding to the interchange dynamics on a (weighted) `star' graph with center $k$ and the right-hand side corresponding to the interchange dynamics on a (weighted) special complete graph with an isolated vertex $k$. The contributions of this paper are threefold. First, we show that Conjecture~\ref{conj} implies Aldous' conjecture. One of the key ingredients is an interlacing result for Laplacians of random walks on weighted graphs, which appears to be new. Second, we give a proof of Conjecture~\ref{conj} for $k\le 4$ as well as a proof for general $k$ when $\gamma_{1}=\ldots=\gamma_{k-1}$. Third, as an application of the developed theory, we prove that Aldous' conjecture holds for a large family of weighted graphs that only rely on Conjecture~\ref{conj} for $k\le 4$. This class includes all wheel graphs, all weighted graphs with four vertices, certain graphs with weighted cycles of different lengths, certain nonplanar graphs, as well as all trees (for which the conjecture is already known to hold). It is the first time results for such general weighted graphs are obtained, illustrating the power of knowing that Conjecture~\ref{conj} holds even for small values of $k$. Throughout, matrix inequalities of the form $A\le B$ should be interpreted as $A-B$ being negative semidefinite. All vectors in this paper should be interpreted as column vectors, and we use the symbol ${}^{{\top}}$ for vector or matrix transpose. The $\ell\times\ell$ identity matrix is denoted by $I_\ell$. We multiply permutations from right to left, so $\sigma'\sigma$ is the permutation obtained by first applying $\sigma$ and then $\sigma'$. This paper is organized as follows. Sections~\ref{sec:interlacing}--\ref{sec:conjstronger} focus on proving that Conjecture~\ref{conj} implies Aldous' conjecture. The main technical tools are the aforementioned interlacing for Laplacians of random walks, which is discussed in Section~\ref{sec:interlacing}, and a representation-theoretic view of Conjecture~\ref{conj}, which is the topic of Section~\ref{sec:representation}. These tools are tied together in Section~\ref{sec:conjstronger}, which contains the main argument of the proof that Conjecture~\ref{conj} implies Aldous' conjecture. 
Sections~\ref{sec:proofconjS4} and \ref{sec:proofconjJM} prove Conjecture~\ref{conj} in special cases: Section~\ref{sec:proofconjS4} focuses on $k\le 4$ with general nonnegative $\gamma_i$, while Section~\ref{sec:proofconjJM} deals with general $k$ but identical $\gamma_i$. In Section~\ref{sec:application}, we present a class of graphs for which we can prove Aldous' conjecture using the newly developed methodology. A discussion concludes this paper, and two appendices give background on representation theory. {\bf A postscript; independent work of Caputo, Liggett, and Richthammer.} Aldous' conjecture was one of three problems targeted by an international team of researchers at the Markov Chain Working Group in June 2009, held at Georgia Tech. I presented this paper at that meeting, and Pietro Caputo presented a joint work with Thomas Liggett and Thomas Richthammer. Both teams had posted their work on arxiv.org at the beginning of the meeting \cite{caputov1,dieker:interchangev1}. Although the papers were written from different perspectives, we had independently arrived at the same proof outline: both works propose the updating rule (\ref{eq:defalphaprime}) below and formulate Conjecture~\ref{conj}. Only days after the working group meeting, Caputo {\em et al.}~were able to give a full proof of Conjecture~\ref{conj}. It can be found in \cite{caputo}. I currently do not know if it is possible to give a different proof of Conjecture~\ref{conj} using representation theory, i.e., to complete the approach taken here. The present article is an unmodified version of my `working group' preprint \cite{dieker:interchangev1}, with some typos corrected and some arguments clarified. \section{Interlacings for Laplacians of random walks on weighted graphs} \label{sec:interlacing} In this section, we state and prove an interlacing result for the weighted random-walk process on a given weighted graph $G$ with $n$ vertices. For other interlacing results and illustrations of the technique, we refer to Godsil and Royle~\cite[Ch.~9]{MR1829620} or (in a slightly different setting) the recent paper by Butler~\cite{MR2320563}. Let $\alpha_{ij}\ge 0$ be the weight of edge $(i,j)$ in $G$, $i\neq j$. We simply write $\alpha$ for the collection of edge weights $\{\alpha_{ij}\}$. We also write $\{e_i\}$ for the standard basis in ${\bb R}^n$, and define $w_{ij}=e_i-e_j$. The Laplacian of $G$ is defined through \[ \mathcal L_n^{{RW}}(\alpha)(i,j)= \begin{cases} -\alpha_{ij} & \text{if $i\neq j$}\\ \sum_{k=1}^{j-1} \alpha_{kj}+ \sum_{k=j+1}^n \alpha_{jk} & \text{if $i=j$} \end{cases} \] for $1\le i,j\le n$, and can thus be written in matrix form as \[ \mathcal L_n^{{RW}}(\alpha) = \sum_{1\le i<j\le n} \alpha_{ij} w_{ij} w_{ij}^{\top}. \] The superscript `RW' is meant to stress that this Laplacian is the negative of the intensity matrix of the random-walk process defined in the introduction. Consider edge weights $\alpha_{ij}'$ given by, for $1\le i,j\le n-1$, \begin{equation} \label{eq:defalphaprime} \alpha_{ij}'= \alpha_{ij} + \frac{\alpha_{in}\alpha_{jn}}{\alpha_{1n} + \ldots + \alpha_{n-1,n}}, \end{equation} while $\alpha_{in}'=0$ for $i<n$. We abuse notation by writing $\alpha'$ for the restriction to $i,j\le n-1$. We write $\mu_1^n(\alpha)\le \ldots\le \mu_n^n(\alpha)$ for the eigenvalues of $\mathcal L_n^{{RW}}(\alpha)$. The edge weights $\alpha'_{ij}$ assume a particularly simple form if $\alpha_{in}=0$ for all $i$ except possibly for one: then $\alpha'_{ij}=\alpha_{ij}$ for $i,j\le n-1$. 
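The updating rule (\ref{eq:defalphaprime}) and the interlacing established in Proposition~\ref{prop:interlacing} below can be previewed numerically. The following Python sketch (assuming {\tt numpy}; the graph size and random weights are arbitrary choices) computes $\alpha'$ from $\alpha$, checks that the difference of the two Laplacians is the rank-one matrix appearing in the proposition, and verifies the interlacing of the eigenvalues.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n = 6
alpha = np.triu(rng.uniform(0.0, 1.0, size=(n, n)), 1)
alpha = alpha + alpha.T                       # nonnegative weights, zero diagonal

def laplacian(a):
    return np.diag(a.sum(axis=1)) - a

# alpha'_ij = alpha_ij + alpha_in alpha_jn / sum_k alpha_kn, with vertex n isolated
s = alpha[:-1, -1].sum()
alpha_p = alpha.copy()
alpha_p[:-1, :-1] += np.outer(alpha[:-1, -1], alpha[:-1, -1]) / s
np.fill_diagonal(alpha_p, 0.0)
alpha_p[:, -1] = 0.0
alpha_p[-1, :] = 0.0

L, Lp = laplacian(alpha), laplacian(alpha_p)

# rank-one identity: L(alpha) - L(alpha') = (1/s) v v^T, v = sum_i alpha_in (e_i - e_n)
v = np.append(alpha[:-1, -1], -s)
print(np.allclose(L - Lp, np.outer(v, v) / s))

mu = np.sort(np.linalg.eigvalsh(L))
mu_p = np.sort(np.linalg.eigvalsh(Lp))
print(np.all(mu_p <= mu + 1e-10) and np.all(mu[:-1] <= mu_p[1:] + 1e-10))  # interlacing
\end{verbatim}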
Note that $\mathcal L_n^{{RW}}(\alpha')$ is the Laplacian of a graph with an isolated vertex $n$. \begin{proposition} \label{prop:interlacing} If $\sum_{i=1}^{n-1}\alpha_{in}>0$, then we have \[ \mathcal L_n^{{RW}}(\alpha) = \mathcal L_n^{{RW}}(\alpha') + \frac1{\sum_{i=1}^{n-1}\alpha_{in}} \left(\sum_{i=1}^{n-1} \alpha_{in} w_{in}\right) \left(\sum_{i=1}^{n-1} \alpha_{in} w_{in}\right)^{\top}. \] In particular, the eigenvalues of $\mathcal L_n^{{RW}}(\alpha)$ and $\mathcal L_n^{{RW}}(\alpha')$ interlace, i.e., \[ \mu_1^n(\alpha')\le \mu_1^n(\alpha)\le \mu_2^n(\alpha')\le \ldots \le \mu_n^n(\alpha')\le \mu_n^n(\alpha). \] Moreover, for $1\le k\le n$, \[ \mu_k^n(\alpha)-\mu_k^n(\alpha') \le \frac{2\sum_{1\le i\le j \le n-1} \alpha_{in}\alpha_{jn}}{\sum_{i=1}^{n-1} \alpha_{in}}. \] \end{proposition} \proof First observe that \[ \mathcal L_n^{{RW}}(\alpha) - \mathcal L_n^{{RW}}(\alpha') = \sum_{i=1}^{n-1} \alpha_{in} w_{in} w_{in}^{\top} -\sum_{1\le i<j\le n-1} \frac{\alpha_{in}\alpha_{jn}}{\alpha_{1n}+\ldots+\alpha_{n-1,n}} w_{ij} w_{ij}^{\top}. \] For the last sum, we note that \begin{eqnarray*} \sum_{1\le i<j\le n-1} \alpha_{in}\alpha_{jn} w_{ij} w_{ij}^{\top} &=& \sum_{1\le i<j\le n-1} \alpha_{in}\alpha_{jn} (w_{in}-w_{jn}) (w_{in}-w_{jn})^{\top}\\ &=& (\alpha_{1n}+\ldots+\alpha_{n-1,n}) \sum_{i=1}^{n-1} \alpha_{in} w_{in} w_{in}^{\top} - \sum_{1\le i,j\le n-1} \alpha_{in}\alpha_{jn} w_{in} w_{jn}^{\top}. \end{eqnarray*} On combining the preceding two displays, we conclude that \[ \mathcal L_n^{{RW}}(\alpha)-\mathcal L_n^{{RW}}(\alpha') = \sum_{1\le i,j\le n-1} \frac{\alpha_{in}\alpha_{jn}}{\alpha_{1n}+\ldots+\alpha_{n-1,n}} w_{in} w_{jn}^{\top}, \] and we have therefore proven the first claim. The interlacing property follows from standard results in linear algebra on the spectrum of rank-1 perturbations of symmetric matrices, e.g., \cite[Thm.~4.3.4]{MR1084815}. After noting that \[ \frac1{\sum_{i=1}^{n-1}\alpha_{in}} \left(\sum_{i=1}^{n-1} \alpha_{in} w_{in}\right)^{\top} \left(\sum_{i=1}^{n-1} \alpha_{in} w_{in}\right) = \frac{2\sum_{1\le i\le j \le n-1} \alpha_{in}\alpha_{jn}}{\sum_{i=1}^{n-1} \alpha_{in}}, \] the eigenvalue bound is immediate from, e.g., \cite[Thm~4.3.1]{MR1084815}.~\endproof \section{A representation-theoretic view on Conjecture~\ref{conj}} \label{sec:representation} We now relate Conjecture~\ref{conj} to the representation theory of the symmetric group. Background on this theory is given in Appendix~\ref{sec:representationtheory}. This section only contains standard results from representation theory, with a focus on transpositions of the symmetric group; see \cite[Section~3]{cesi} for a recent account in the context of the present paper. We write $\rho^\lambda$ for Young's orthonormal irreducible representation corresponding to the partition $\lambda$, and set $V^\lambda_{ij}=I-\rho^\lambda_{ij}$. Given the edge weights $\alpha =\{\alpha_{ij}\}$ of a graph $G$ with $n$ vertices, we set for $\lambda\vdash n$, \[ \mathcal L^\lambda(\alpha) = \sum_{1\le i<j\le n} \alpha_{ij} V^\lambda_{ij}. \] Note that $\mathcal L^\lambda(\alpha)$ is a symmetric matrix, so it has real eigenvalues. Write $\mu^\lambda(\alpha)$ for the eigenvalues of $\mathcal L^\lambda(\alpha)$, ordered so that $\mu_1^\lambda(\alpha)\le \ldots\le\mu^\lambda_{f^\lambda}(\alpha)$ where $f^\lambda$ is the dimension of the $V_{ij}^\lambda$: the number of standard Young tableau with shape $\lambda$. 
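Before relating Conjecture~\ref{conj} to matrix inequalities, it may be useful to note that the conjecture can be spot-checked by brute force for small $k$. The Python sketch below (assuming {\tt numpy}; the number of random trials is an arbitrary choice) samples nonnegative $\gamma_i$ and arbitrary functions $g$ on $\mathcal S_4$ and verifies the Dirichlet form inequality; no violations occur, consistent with the proof of the conjecture by Caputo, Liggett, and Richthammer cited above.
\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(11)
k = 4
perms = list(itertools.permutations(range(1, k + 1)))
index = {p: a for a, p in enumerate(perms)}

def transpose_left(i, j, sigma):
    # (ij) sigma: relabel the values i and j of sigma
    return tuple(j if v == i else i if v == j else v for v in sigma)

def dirichlet(i, j, g):
    # sum over sigma of [g(sigma) - g((ij) sigma)]^2
    return sum((g[a] - g[index[transpose_left(i, j, s)]]) ** 2
               for a, s in enumerate(perms))

for trial in range(500):
    gamma = rng.uniform(0.0, 1.0, size=k - 1)
    g = rng.standard_normal(len(perms))
    lhs = sum(gamma[i - 1] * dirichlet(i, k, g) for i in range(1, k))
    rhs = sum(gamma[i - 1] * gamma[j - 1] * dirichlet(i, j, g)
              for i in range(1, k) for j in range(i + 1, k)) / gamma.sum()
    assert lhs >= rhs - 1e-9
print("no violations found")
\end{verbatim}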
\section{A representation-theoretic view on Conjecture~\ref{conj}} \label{sec:representation} We now relate Conjecture~\ref{conj} to the representation theory of the symmetric group. Background on this theory is given in Appendix~\ref{sec:representationtheory}. This section only contains standard results from representation theory, with a focus on transpositions of the symmetric group; see \cite[Section~3]{cesi} for a recent account in the context of the present paper. We write $\rho^\lambda$ for Young's orthonormal irreducible representation corresponding to the partition $\lambda$, and set $V^\lambda_{ij}=I-\rho^\lambda_{ij}$. Given the edge weights $\alpha =\{\alpha_{ij}\}$ of a graph $G$ with $n$ vertices, we set for $\lambda\vdash n$, \[ \mathcal L^\lambda(\alpha) = \sum_{1\le i<j\le n} \alpha_{ij} V^\lambda_{ij}. \] Note that $\mathcal L^\lambda(\alpha)$ is a symmetric matrix, so it has real eigenvalues. Write $\mu^\lambda(\alpha)$ for the eigenvalues of $\mathcal L^\lambda(\alpha)$, ordered so that $\mu_1^\lambda(\alpha)\le \ldots\le\mu^\lambda_{f^\lambda}(\alpha)$ where $f^\lambda$ is the dimension of the matrices $V_{ij}^\lambda$: the number of standard Young tableaux with shape $\lambda$. In the context of Markov chains arising from weighted graphs, representation theory allows us to write their Laplacians as direct sums of matrices (up to a change of basis). We first work this out for the weighted random walk, in which case the Laplacian is closely related to the so-called defining representation. \begin{proposition} \label{prop:laplaciandecomp} There exists an $n\times n$ orthonormal matrix $S$ such that for all weights $\alpha_{ij}$ on the edges of a graph with $n$ vertices, \[ S \mathcal L_n^{{RW}}(\alpha) S^{\top} = \mathcal L^{(n)}(\alpha) \oplus \mathcal L^{(n-1,1)}(\alpha). \] \end{proposition} We note that the above decomposition of the random-walk Laplacian has a special structure. Indeed, $\mathcal L^{(n)}(\alpha)$ equals zero regardless of the weights $\alpha$. Moreover, $\mathcal L^{(n-1,1)}(\alpha)$ is closely related to $A_{n-1}$ reflection groups, since $\rho^{(n-1,1)}_{ij}$ is the Householder reflection matrix corresponding to the transposition $(ij)$ of $\mathcal S_n$. That is, each $\rho^{(n-1,1)}_{ij}$ acts on a point in ${\bb R}^{n-1}$ by reflecting it in a certain hyperplane. \medskip The Laplacian $\mathcal L_n^I(\alpha)$ of the interchange process, which is an $n!\times n!$ matrix, can similarly be written as a direct sum (see also \cite[Section~3E]{MR964069}). This Laplacian is defined through \[ \mathcal L_n^I(\alpha)(\sigma,\sigma') = \begin{cases} -\alpha_{ij} & \text{if $\sigma' = (ij) \sigma$} \\ \sum_{1\le \ell< k \le n} \alpha_{\ell k} & \text{if $\sigma = \sigma'$} \\ 0& \text{otherwise,} \end{cases} \] where $\sigma,\sigma'\in\mathcal S_n$. This Laplacian is closely related to the so-called regular representation. We write $f^\lambda \mathcal L^\lambda(\alpha)$ for the direct sum of $f^\lambda$ copies of $\mathcal L^\lambda(\alpha)$. \begin{proposition} \label{prop:interchangelaplaciandecomp} There exists an $n!\times n!$ orthonormal matrix $T$ such that for all weights $\alpha_{ij}$ on the edges of a graph with $n$ vertices, \[ T \mathcal L_n^I(\alpha)T^{\top} = \bigoplus_{\lambda\vdash n} f^\lambda \mathcal L^\lambda(\alpha). \] \end{proposition} The preceding proposition holds without any assumption on the sign of the weights $\alpha_{ij}$. After defining signed weights on a graph with $k$ nodes through \[ \alpha_{ij} = \begin{cases} \gamma_i & \text{if $j=k$}\\ -\frac{\gamma_i\gamma_j}{\gamma_{1}+\ldots+\gamma_{k-1}}& \text{if $1\le i<j\le k-1$}, \end{cases} \] we immediately obtain a reformulation of Conjecture~\ref{conj} from Proposition~\ref{prop:interchangelaplaciandecomp}. \begin{lemma} \label{lem:alphaprime} The following is equivalent to Conjecture~\ref{conj}. Given any $k\ge 2$, the following holds for any $\lambda\vdash k$ and any nonnegative $\gamma_{1},\ldots,\gamma_{k-1}$: \begin{equation} \label{eq:conjrepr} \sum_{i=1}^{k-1} \gamma_{i} V^\lambda_{ik} \ge \sum_{1\le i<j\le k-1} \frac{\gamma_{i}\gamma_{j}}{\gamma_{1}+\ldots+\gamma_{k-1}} V^\lambda_{ij}. \end{equation} \end{lemma} \iffalse Along the lines of the proof of Proposition~\ref{prop:interlacing}, it is readily verified that (\ref{eq:conjrepr}) is equivalent to \begin{equation} \label{eq:conjrepr2} \sum_{i=1}^{k-1} \gamma_{i}^2 V^\lambda_{ik} + \sum_{1\le i<j\le k-1} \gamma_{i}\gamma_{j} [V^\lambda_{ik}+V^\lambda_{jk}-V^\lambda_{ij}] \ge 0. \end{equation} \fi
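Since the matrices $\rho^\lambda_{ij}$ are not constructed explicitly here, a convenient way to probe (\ref{eq:conjrepr}) numerically is to work on the other side of Proposition~\ref{prop:interchangelaplaciandecomp}: the inequality holds for every $\lambda\vdash k$ if and only if $\mathcal L_k^I$ built from the signed weights above is positive semidefinite. The Python sketch below (an illustration under these assumptions, using NumPy and brute-forcing the $k!\times k!$ matrix for $k\le 5$) checks this for randomly sampled nonnegative $\gamma_i$; such sampling is, of course, not a proof of Conjecture~\ref{conj}.
\begin{verbatim}
import numpy as np
from itertools import permutations

def interchange_laplacian(a):
    # L_n^I(a) for a (possibly signed) symmetric weight matrix a with zero diagonal.
    n = a.shape[0]
    perms = list(permutations(range(n)))
    index = {p: m for m, p in enumerate(perms)}
    total = np.triu(a, 1).sum()
    L = np.zeros((len(perms), len(perms)))
    for m, p in enumerate(perms):
        L[m, m] = total
        for i in range(n):
            for j in range(i + 1, n):
                q = list(p)
                q[i], q[j] = q[j], q[i]     # swap the contents of vertices i and j
                L[m, index[tuple(q)]] -= a[i, j]
    return L

def signed_weights(gamma):
    # Signed weights of Lemma lem:alphaprime on k = len(gamma)+1 vertices.
    k = len(gamma) + 1
    a = np.zeros((k, k))
    a[:k-1, :k-1] = -np.outer(gamma, gamma) / gamma.sum()
    np.fill_diagonal(a, 0.0)
    a[:k-1, k-1] = gamma
    a[k-1, :k-1] = gamma
    return a

rng = np.random.default_rng(2)
for k in (3, 4, 5):
    for _ in range(20):
        gamma = rng.uniform(0, 1, k - 1)
        smallest = np.linalg.eigvalsh(interchange_laplacian(signed_weights(gamma))).min()
        assert smallest > -1e-8, (k, smallest)
print("signed interchange Laplacian numerically positive semidefinite for sampled gamma")
\end{verbatim}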
\section{Conjecture~\ref{conj} implies Aldous' conjecture} \label{sec:conjstronger} In this section, we prove that Conjecture~\ref{conj} implies Aldous' conjecture. We use mathematical induction on the number of vertices $n$. The conjecture trivially holds if $n=2$. Suppose Aldous' conjecture holds for all graphs with $n-1$ vertices. Consider an arbitrary weighted graph $G$ with $n$ vertices, and write $\alpha_{ij}\ge 0$ for the weight of edge $(i,j)$ in $G$. By Proposition~\ref{prop:laplaciandecomp} and the fact that $\mu_1^{(n)}(\alpha)=0$, Aldous' conjecture is that the second smallest eigenvalue of $\mathcal L_n^I(\alpha)$ equals $\mu_1^{(n-1,1)}(\alpha)$. In view of Proposition~\ref{prop:interchangelaplaciandecomp}, this is equivalent to \begin{equation} \label{eq:toshow} \mu^\lambda_1(\alpha) \ge \mu^{(n-1,1)}_1(\alpha ) \end{equation} for all partitions $\lambda \vdash n$ with $\lambda\neq (n)$. This inequality trivially holds if $\lambda=(n-1,1)$, so we exclude this partition from further consideration. Note that we do not need to assume that the graph $G$ be connected; since the right-hand side of (\ref{eq:toshow}) vanishes if this is not the case, (\ref{eq:toshow}) then holds trivially since the $V_{ij}^\lambda$ are positive semidefinite. As before, we write $\alpha'$ for the weights given by (\ref{eq:defalphaprime}). The induction hypothesis yields \[ \mu^{(n-2,1)}_1( \alpha' ) = \min_{\lambda' \,\vdash\, n-1: \,\lambda'\neq (n-1)} \mu^{\lambda'}_1(\alpha'). \] To prove (\ref{eq:toshow}), we will show that the following string of inequalities holds if $\lambda\neq (n),(n-1,1)$: \begin{equation} \label{eq:stringofineq} \mu^{(n-1,1)}_1(\alpha) \le \mu^{(n-2,1)}_1(\alpha' ) = \min_{\lambda' \,\vdash\, n-1: \,\lambda'\neq (n-1)} \mu^{\lambda'}_1(\alpha') \le \mu^\lambda_1(\alpha). \end{equation} It is in the last inequality that we use Conjecture~\ref{conj}, but we first prove the first inequality. \begin{lemma} \label{lem:firstineq} We have \[ \mu^{(n-1,1)}_1(\alpha) \le \mu^{(n-2,1)}_1(\alpha') \le \mu^{(n-1,1)}_2(\alpha) \le \ldots\le \mu^{(n-2,1)}_{n-2}(\alpha') \le \mu^{(n-1,1)}_{n-1}(\alpha). \] \end{lemma} \proof Consider the decomposition in Proposition~\ref{prop:laplaciandecomp}. Since $\mathcal L_n^{{RW}}(\alpha)$ and $\mathcal L_n^{{RW}}(\alpha')$ are positive semidefinite and $\mathcal L^{(n)}(\alpha)=\mathcal L^{(n)}(\alpha')$ are one-dimensional and equal to zero, we conclude that $\mu_1^n(\alpha)=\mu_1^n(\alpha') =0$ and that the largest $n-1$ eigenvalues of $\mathcal L_n^{{RW}}(\alpha)$ and $\mathcal L_n^{{RW}}(\alpha')$ are given by the eigenvalues of $\mathcal L^{(n-1,1)}(\alpha)$ and $\mathcal L^{(n-1,1)}(\alpha')$, respectively. In particular, $\mu_{k+1}^n(\alpha) = \mu_k^{(n-1,1)}(\alpha)$ and $\mu_{k+1}^n(\alpha') = \mu_k^{(n-1,1)}(\alpha')$, and Proposition~\ref{prop:interlacing} yields \[ \mu_1^{(n-1,1)}(\alpha')\le \mu_1^{(n-1,1)}(\alpha)\le \mu_2^{(n-1,1)}(\alpha')\le \ldots \le \mu_{n-1}^{(n-1,1)}(\alpha')\le \mu_{n-1}^{(n-1,1)}(\alpha). \] The claim readily follows from these inequalities, for instance after noting that $\mu^{(n-1,1)}_1(\alpha')=\mu^{(n-1)}_1(\alpha')=0$ and $\mu^{(n-1,1)}_k(\alpha')=\mu^{(n-2,1)}_{k-1}(\alpha')$ for $k\ge 2$ by the branching rule (see Appendix~\ref{sec:representationtheory}).\endproof \begin{lemma} \label{lem:lastineq} If Conjecture~\ref{conj} holds, then we have for $\lambda\ne (n),(n-1,1)$, \[ \min_{\lambda' \,\vdash\, n-1: \,\lambda'\neq (n-1)} \mu^{\lambda'}_1(\alpha') \le \mu^\lambda_1(\alpha).
\] \end{lemma} \proof Under Conjecture~\ref{conj}, we have by Lemma~\ref{lem:alphaprime}, \begin{eqnarray} \mathcal L^\lambda(\alpha) &=& \sum_{1\le i<j\le n} \alpha_{ij} V_{ij}^\lambda = \sum_{1\le i<j\le n-1} \alpha_{ij} V_{ij}^\lambda + \sum_{i=1}^{n-1} \alpha_{in} V_{in}^\lambda\nonumber\\ &\ge& \sum_{1\le i<j\le n-1} \alpha_{ij} V_{ij}^\lambda + \sum_{1\le i<j\le n-1} \frac{\alpha_{in}\alpha_{jn}}{\alpha_{1n}+\ldots+\alpha_{n-1,n}} V_{ij}^\lambda\label{eq:conjectureineq}\\ &=&\sum_{1\le i<j\le n-1} \alpha_{ij}' V_{ij}^\lambda = \mathcal L^\lambda(\alpha'),\nonumber \end{eqnarray} so that $\mu^\lambda_k(\alpha)\ge \mu^\lambda_k(\alpha')$ for all $k$. By the branching rule (see Appendix~\ref{sec:representationtheory}), since $\alpha'_{in}=0$ for all $i$, \[ \mathcal L^\lambda(\alpha') = \bigoplus_{\lambda'\vdash n-1 : \lambda' \nearrow \lambda} \mathcal L^{\lambda'}(\alpha'). \] We conclude that \[ \mu^\lambda_1(\alpha)\ge \mu^\lambda_1(\alpha') = \min_{\lambda'\vdash n-1 : \lambda' \nearrow \lambda} \mu^{\lambda'}_1(\alpha')\ge \min_{\lambda'\vdash n-1 : \lambda' \ne (n-1)} \mu^{\lambda'}_1(\alpha'), \] where the last inequality follows from $\lambda\neq (n),(n-1,1)$. \endproof \medskip We have thus finished the proof that Conjecture~\ref{conj} implies Aldous' conjecture. Before continuing, we mention a corollary of the proof which we use in Section~\ref{sec:application}. \begin{corollary} \label{cor:induction} Suppose Conjecture~\ref{conj} holds for $k\le K$. Let $\alpha_{ij}\ge 0$ be the weight of edge $(i,j)$ in a given weighted graph $G$ with $n$ vertices, and suppose that at most $K-1$ of the $n-1$ possible $\alpha_{in}$ are strictly positive. If Aldous' conjecture holds for the graph on $n-1$ vertices induced by edge weights $\alpha'$ given in (\ref{eq:defalphaprime}), then it holds for $G$. \end{corollary} \proof The equality in (\ref{eq:stringofineq}) holds by assumption. The first inequality in (\ref{eq:stringofineq}) holds by Lemma~\ref{lem:firstineq}, so we focus on the last inequality and the proof of Lemma~\ref{lem:lastineq} in particular. When at most $K-1$ of the $n-1$ possible $\alpha_{in}$ are strictly positive, we may assume without loss of generality that these are $\alpha_{1n},\ldots,\alpha_{K-1,n}$. By repeated application of the branching rule and a change of basis on interchanging $n$ and $K$, we get \[ \sum_{i=1}^{n-1} \alpha_{in} V_{in}^\lambda = \bigoplus_{\lambda' \vdash K: \lambda'\nearrow\ldots\nearrow\lambda} \left( \sum_{i=1}^{K-1} \alpha_{in} V_{in}^{\lambda'}\right) \cong \bigoplus_{\lambda' \vdash K: \lambda'\nearrow\ldots\nearrow\lambda} \left( \sum_{i=1}^{K-1} \alpha_{in} V_{iK}^{\lambda'}\right), \] where the direct sum should be taken over all possible simple paths from $\lambda'$ to $\lambda$ in the Hasse diagram of Young's lattice. Thus, one can deduce (\ref{eq:conjectureineq}) from Conjecture~\ref{conj} with $k=K$, and the last inequality in (\ref{eq:stringofineq}) holds in that case. \endproof \medskip This corollary is of particular interest when $K=2$, in which case Conjecture~\ref{conj} holds trivially. If the $n$-th vertex is incident to exactly one other vertex, then the modified weights $\alpha'$ on the edges of the `small' graph consisting of the vertices $1,\ldots,n-1$ are simply equal to the unmodified weights $\alpha$. In this context, Corollary~\ref{cor:induction} is essentially equivalent to the induction step in Handjani and Jungreis~\cite{MR1419872}, and it readily implies that Aldous' conjecture holds for trees.
A related argument is given by Cesi~\cite[Sec.~3]{cesi2}. \section{Proof of Conjecture~\ref{conj} for $k\le 4$} \label{sec:proofconjS4} Conjecture~\ref{conj} for $k=4$ implies the conjecture for $k<4$, so this section focuses on proving Conjecture~\ref{conj} for $k=4$. In view of Lemma~\ref{lem:alphaprime}, it suffices to prove (\ref{eq:conjrepr}) for all $\lambda\vdash 4$. We do so by making use of the explicit forms of Young's orthonormal irreducible representations given in Appendix~\ref{sec:reprS4}. In particular, we use the vectors $v^\lambda_{ij}$ introduced in Appendix~\ref{sec:reprS4}. Throughout this section, for notational convenience, we suppress the superscripts $\lambda$ in these vectors, so that, e.g., $v_{ij}$ stands for $v_{ij}^{(3,1)}$ in Section~\ref{sec:repr31} while it stands for $v_{ij}^{(2,2)}$ in Section~\ref{sec:repr22}. We follow the same notational convention for the superscripts $\lambda$ in $V^\lambda_{ij}$. \subsection{$\lambda={(4)}$} Since $V_{ij} = 0$ for $1\le i<j\le 4$, (\ref{eq:conjrepr}) trivially holds. \subsection{$\lambda={(3,1)}$} \label{sec:repr31} Since $v_{ij} = v_{i4}-v_{j4}$ for $1\le i<j\le 3$, we readily find that \[ (\gamma_1+\gamma_2+\gamma_3) \sum_{i=1}^3 \gamma_i V_{i4} - \sum_{1\le i<j\le 3}\gamma_i\gamma_j V_{ij} = (\gamma_1 v_{14} + \gamma_2 v_{24}+ \gamma_3 v_{34}) (\gamma_1 v_{14} + \gamma_2 v_{24}+ \gamma_3 v_{34})^{\top}, \] as in the proof of Proposition~\ref{prop:interlacing}. This implies (\ref{eq:conjrepr}) for $\lambda=(3,1)$. \subsection{$\lambda=(2,2)$} \label{sec:repr22} We first show that \begin{eqnarray} \lefteqn{(\gamma_1+\gamma_2+\gamma_3) \sum_{i=1}^3 \gamma_i V_{i4} - \sum_{1\le i<j\le 3}\gamma_i\gamma_j V_{ij}} \nonumber\\ &=& \sum_{1\le i<j\le 3} (\gamma_i v_{i4} - (-1)^{i-j} \gamma_j v_{j4}) (\gamma_i v_{i4} - (-1)^{i-j} \gamma_j v_{j4})^{\top} - \sum_{i=1}^3 \gamma_i^2 V_{i4}. \label{eq:toshow22} \end{eqnarray} Since $v_{12} = -(v_{14}-v_{24})$, we see that \begin{eqnarray*} (\gamma_1+\gamma_2)(\gamma_1 V_{14} + \gamma_2 V_{24})-\gamma_1\gamma_2 V_{12} &=& (\gamma_1+\gamma_2)(\gamma_1 V_{14} + \gamma_2 V_{24})-\gamma_1\gamma_2 (v_{14}-v_{24})(v_{14}-v_{24})^{\top} \\ &=& (\gamma_1 v_{14} +\gamma_2 v_{24}) (\gamma_1 v_{14} +\gamma_2 v_{24})^{\top}. \end{eqnarray*} After two similar calculations using $v_{13} = v_{14}+v_{34}$ and $v_{23}=v_{24}-v_{34}$, we find that for $1\le i<j\le 3$, \[ (\gamma_i+\gamma_j)(\gamma_i V_{i4} + \gamma_j V_{j4})-\gamma_i\gamma_j V_{ij}= (\gamma_i v_{i4} - (-1)^{i-j} \gamma_j v_{j4}) (\gamma_i v_{i4} - (-1)^{i-j} \gamma_j v_{j4})^{\top}. \] After summing these identities over $1\le i<j\le 3$ and some rearranging, we get (\ref{eq:toshow22}). To show that the right-hand side of (\ref{eq:toshow22}) is positive semidefinite, we first introduce \[ u = \left(\begin{array}{c} \gamma_1\gamma_2 \\ -\gamma_1\gamma_3 \\ \gamma_2\gamma_3 \end{array}\right), \quad w = \left(\begin{array}{c} 1 \\ -1 \\ 1 \end{array}\right), \] and write $P_u=I_3-u u^{\top}/\|u\|^2$ for the projection matrix on the hyperplane orthogonal to $u$. 
By the Courant-Fischer variational theorem, the smallest eigenvalue of the matrix in (\ref{eq:toshow22}) equals \begin{eqnarray*} \lefteqn{\min_{x\in{\bb R}^2: \|x\|=1} \sum_{1\le i<j\le 3} [\gamma_i x^{\top} v_{i4} -(-1)^{i-j} \gamma_j x^{\top} v_{j4}]^2 - \sum_{i=1}^3 [\gamma_i x^{\top} v_{i4}]^2} \\ &\ge& \min_{z\in{\bb R}^3: \|z\| =1, u^{\top} z=0} [(z_1+z_2)^2+(z_1-z_3)^2+(z_2+z_3)^2 -z_1^2-z_2^2-z_3^2] \\ &=& \min_{z\in{\bb R}^3: \|z\| =1, u^{\top} z=0} z^{\top}\left[ 2I_3 - w w^{\top} \right]z = 2-\max_{z\in{\bb R}^3: \|z\| =1, u^{\top} z=0} (z^{\top} w)^2 \\ &=& 2-\max_{z\in{\bb R}^3: \|z\| =1} (z^{\top} P_u w)^2 = 2-{\rm tr } (P_u w w^{\top} P_u)\\ &=& 2-w^{\top} P_u w = -1 + (w^{\top} u)^2/\|u\|^2, \end{eqnarray*} where the first inequality follows from $v_{14}-v_{24}+v_{34} =0$. Since $(w^{\top} u)^2 \ge \|u\|^2$, we have proven (\ref{eq:conjrepr}) for $\lambda={(2,2)}$. \subsection{$\lambda={(2,1^2)}$} Since $V_{ij} = 2I_3-v_{ij}v_{ij}^{\top}$ and $v_{ij} = v_{i4}-v_{j4}$ for $1\le i<j\le 3$, we find that \begin{eqnarray*} \lefteqn{(\gamma_1+\gamma_2+\gamma_3) \sum_{i=1}^3 \gamma_i V_{i4} - \sum_{1\le i<j\le 3}\gamma_i\gamma_j V_{ij}}\\ &=&2\left(\sum_{1\le i\le j\le 3} \gamma_i\gamma_j\right)I_3 - (\gamma_1+\gamma_2+\gamma_3) \sum_{i=1}^3 \gamma_i v_{i4}v_{i4}^{\top}+ \sum_{1\le i<j\le 3}\gamma_i\gamma_j v_{ij}v_{ij}^{\top}\\ &=&2\left(\sum_{1\le i\le j\le 3} \gamma_i\gamma_j\right)I_3 - (\gamma_1 v_{14} + \gamma_2 v_{24} + \gamma_3 v_{34})(\gamma_1 v_{14} + \gamma_2 v_{24} + \gamma_3 v_{34})^{\top}, \end{eqnarray*} where the last equality follows from the same calculation as in Section~\ref{sec:repr31}. Therefore, the smallest eigenvalue of this matrix is \[ 2\left(\sum_{1\le i\le j\le 3} \gamma_i\gamma_j\right) - (\gamma_1 v_{14} + \gamma_2 v_{24} + \gamma_3 v_{34})^{\top}(\gamma_1 v_{14} + \gamma_2 v_{24} + \gamma_3 v_{34}), \] which equals zero since $v_{i4}^{\top} v_{j4}=1$ for $i\neq j$ while $v_{i4}^{\top} v_{i4}=2$. This proves (\ref{eq:conjrepr}) for $\lambda=(2,1^2)$. \subsection{$\lambda={(1^4)}$} Since $V_{ij} = 2$ for all $i,j$, (\ref{eq:conjrepr}) reduces to $\sum_{i=1}^{k-1} \gamma_i^2 + \sum_{1\le i<j\le k-1} \gamma_i\gamma_j\ge 0$, which clearly holds. \section{Proof of Conjecture~\ref{conj} for $\gamma_{1}=\ldots=\gamma_{k-1}$} \label{sec:proofconjJM} This section proves Conjecture~\ref{conj} for $\gamma_{1}=\ldots=\gamma_{k-1}$. The main ingredients are Jucys-Murphy matrices and a content minimization calculation for standard Young tableaux, see Appendix~\ref{sec:representationtheory} for definitions. Fix $k\ge 2$ and choose some partition $\lambda\vdash k$. We need to prove that \begin{equation} \label{eq:JMdiagmatrix} (k-1) \sum_{i=1}^{k-1} V^\lambda_{ik} - \sum_{1\le i<j\le k-1} V^\lambda_{ij}\ge 0. \end{equation} The left-hand side of (\ref{eq:JMdiagmatrix}) can be written in terms of Jucys-Murphy matrices as \[ \frac 12 k(k-1) I_{f^\lambda} + \sum_{i=1}^{k} X_{i}^\lambda-k X^\lambda_{k}. \] In particular, it is a diagonal matrix and its diagonal elements are readily found. Indeed, element $(t,t)$ of this matrix is calculated from tableau $t$ through \[ \frac 12 k(k-1) + \sum_{i=1}^{k} c^t_i - k c^t_k, \] where $c^t_i$ is the content of the box containing $i$ in tableau $t$. The sum over $i$ is the sum of all contents corresponding to a Young tableau of shape $\lambda$, which can be expressed in terms of $\lambda$ by noting that the sum over the contents in the $j$-th row equals $\frac 12 \lambda_j(\lambda_j-1) - (j-1)\lambda_j$. 
As this is independent of $t$, the smallest diagonal element of the matrix on the left-hand side of (\ref{eq:JMdiagmatrix}) corresponds to a tableau $t$ for which $c^t_k$ is maximized, i.e., to a tableau $t$ with $c^t_k = \lambda_1-1$. Therefore, we find that the smallest eigenvalue of (\ref{eq:JMdiagmatrix}) equals \begin{eqnarray*} \lefteqn{\frac 12 k(k-1) + \sum_{i=1}^{k} c^t_i - k (\lambda_1-1)}\\ &=& \frac 12 k(k-1) + \sum_{j=1}^\infty \frac 12 \lambda_j(\lambda_j-1) - \sum_{j=1}^\infty (j-1)\lambda_j - k (\lambda_1-1)\\ &=& \frac 12 k^2 + \sum_{j=1}^\infty \frac 12 \lambda_j^2 - \sum_{j=1}^\infty (j-1)\lambda_j - k \lambda_1 = \frac 12 (k-\lambda_1)^2 + \frac 12 \sum_{j=2}^\infty \lambda_j^2 - \sum_{j=1}^\infty (j-1)\lambda_j \\&=& \frac 12 \left(\sum_{j=2}^\infty \lambda_j\right)^2 + \frac 12 \sum_{j=2}^\infty \lambda_j^2 - \sum_{j=1}^\infty (j-1)\lambda_j = \sum_{j=2}^\infty \lambda_j^2 + \sum_{j=2}^\infty \sum_{i=2}^{j-1}\lambda_i\lambda_j -\sum_{j=1}^\infty (j-1)\lambda_j\\ &\ge& \sum_{j=2}^\infty \lambda_j^2 + \sum_{j=2}^\infty (j-2)\lambda_j^2 -\sum_{j=1}^\infty (j-1)\lambda_j = \sum_{j=2}^\infty (j-1)\lambda_j (\lambda_j-1). \end{eqnarray*} Since each $\lambda_j$ is a nonnegative integer, this is clearly nonnegative. This proves Conjecture~\ref{conj} for $\gamma_{1}=\ldots=\gamma_{k-1}$. \section{Weighted graphs with nested triangulation} \label{sec:application} In this section, we introduce a class of weighted graphs for which we prove Aldous' conjecture. This class includes all trees and all cycles of arbitrary length, and it arises by repeated application of Corollary~\ref{cor:induction} for $K=4$. \begin{figure} \resizebox{115pt}{!}{\includegraphics*{triangle.eps}} \hspace{7mm} \resizebox{115pt}{!}{\includegraphics*{level2N1.eps}} \hspace{7mm} \resizebox{115pt}{!}{\includegraphics*{level1.eps}} \caption{\label{fig:graphs} The graphs $T_{0,N}=K_3$ (left), $T_{2,1}$ (center), and $T_{1,2}$ (right).} \end{figure} Our graphs with nested triangulations are parameterized by two integers: a branching parameter $N\ge 1$ and a depth parameter $D\ge 0$. For a given $N$, the graphs $\{T_{i,N}: i\ge 0\}$ are nested in the sense that $T_{i,N}$ is a subgraph of $T_{i+1,N}$ for $i\ge 0$. The graphs are defined recursively as follows. Let $T_{0,N}$ be the complete graph on 3 vertices: $T_{0,N}=K_3$. For each cycle of length 3 that is present in $T_{i,N}$ but not in $T_{i-1,N}$, we construct $T_{i+1,N}$ by adding $N$ vertices to $T_{i,N}$, and by adding 3 new edges for each new vertex to connect it to the 3 vertices of the given cycle. Thus, $3N$ edges are added for each cycle of length 3 in $T_{i,N}$ but not in $T_{i-1,N}$. The vertices of $T_{D,N}$ can be partitioned into $D+1$ levels according to the stage at which they have been added. Examples are given in Figure~\ref{fig:graphs}. Note that $T_{D,1}$ is a maximal planar (triangulated) graph for any $D\ge 1$, but that not all maximal planar graphs are graphs with nested triangulations. Also note that $T_{3,1}$ has $K_{3,3}$, the complete bipartite graph on six vertices, as a subgraph and it is therefore nonplanar by Kuratowski's theorem. \begin{proposition} \label{prop:TDN} For any $D\ge 0$, $N\ge 1$, let $T_{D,N}$ have arbitrary nonnegative interchange rates on its edges and assume that the graph remains connected after removing zero-rate edges. Aldous' conjecture holds for this graph. \end{proposition} \proof We use induction. 
Since Aldous' conjecture trivially holds for a connected graph with two vertices, we conclude from Corollary~\ref{cor:induction} that Aldous' conjecture holds for the triangle $K_3$. Since each vertex at level $i+1$ is incident to exactly 3 vertices at lower levels, we may repeatedly use Corollary~\ref{cor:induction} to deduce the claim for $T_{i+1,N}$ from the claim for $T_{i,N}$.~\endproof \medskip This proposition is of particular interest for $D=N=1$, in which case $T_{D,N}$ is the complete graph $K_4$ on four vertices. Proposition~\ref{prop:TDN} then states that Aldous' conjecture holds for {\em all} weighted graphs with four vertices. Choosing some of the interchange rates equal to zero in Proposition~\ref{prop:TDN} proves Aldous' conjecture for some special classes of graphs. For instance, any tree can be embedded in a graph with nested triangulations. Indeed, given any tree, let $D$ be the maximum distance to the root and let $N$ be the maximum degree. It is readily seen that one can embed the tree into $T_{D,N}$ by mapping a vertex at distance $i\ge 0$ from the root to a vertex at level $i$ in $T_{D,N}$. Thus, Proposition~\ref{prop:TDN} recovers the main result from \cite{MR1419872} in this case. Instead of showing that a given graph is a subgraph of a graph with nested triangulations and appealing to Proposition~\ref{prop:TDN}, the following procedure is an alternative way of showing that Aldous' conjecture holds according to the results of this paper. Corollary~\ref{cor:induction} implies that Aldous' conjecture holds for graphs which (after removing all edge weights) can be reduced to an edge by repeatedly using the following permissible rules: \begin{itemize} \item Degree-one reduction: delete a degree-one vertex and its incident edge. \item Series reduction: delete a degree-two vertex $k$ and its two incident edges $(i,k)$ and $(j,k)$, and add in a new edge $(i,j)$. \item Parallel reduction: delete one of a pair of parallel edges. \item Y-$\Delta$ transformation: delete a vertex $k$ and its three incident edges $(i,k)$, $(j,k)$, $(\ell, k)$ and add in a triangle $ij\ell$. \end{itemize} These operations also appear in the context of star-triangle reducibility of a graph~\cite{MR1222626}, but it is important to note that the $\Delta$-Y transformation (which is the inverse of the Y-$\Delta$ transformation) is not permissible here. Wheel graphs are examples of graphs which can be reduced to an edge using these operations. We write $W_n$ for the wheel graph with $n$ vertices; see Figure~\ref{fig:wheel} for $W_7$. Indeed, one obtains $W_{n-1}$ from $W_n$ after applying a Y-$\Delta$ transformation to one of the outer vertices of $W_n$ followed by three parallel reductions. This procedure can be repeated until $W_3=K_3$ arises, which is readily reduced to an edge. Note that cycles are subgraphs of wheel graphs: choose the interchange rates on the spokes of the wheel equal to zero except for two adjacent spokes, and also let the interchange rate vanish on the edge incident to the two outer vertices of these two spokes. Thus, we have also proven that Aldous' conjecture holds for weighted cycles.
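As a quick numerical illustration (and no substitute for the argument above), the sketch below builds randomly weighted wheel graphs $W_n$ for $n=4,5$ (note that $W_4=K_4$, the $D=N=1$ case above) and checks by brute force that the spectral gaps of $\mathcal L_n^{{RW}}$ and $\mathcal L_n^{I}$ coincide; it assumes NumPy and reuses the interchange-Laplacian construction from the sketch in Section~\ref{sec:representation}.
\begin{verbatim}
import numpy as np
from itertools import permutations

def lap_rw(a):
    return np.diag(a.sum(axis=1)) - a

def lap_interchange(a):
    n = a.shape[0]
    perms = list(permutations(range(n)))
    index = {p: m for m, p in enumerate(perms)}
    L = np.zeros((len(perms), len(perms)))
    total = np.triu(a, 1).sum()
    for m, p in enumerate(perms):
        L[m, m] = total
        for i in range(n):
            for j in range(i + 1, n):
                q = list(p); q[i], q[j] = q[j], q[i]
                L[m, index[tuple(q)]] -= a[i, j]
    return L

def random_wheel(n, rng):
    # Wheel W_n: hub 0 joined to rim vertices 1,...,n-1, which form a cycle.
    a = np.zeros((n, n))
    rim = list(range(1, n))
    for v in rim:
        a[0, v] = a[v, 0] = rng.uniform(0, 1)          # spokes
    for v, w in zip(rim, rim[1:] + rim[:1]):
        a[v, w] = a[w, v] = rng.uniform(0, 1)          # rim edges
    return a

rng = np.random.default_rng(3)
for n in (4, 5):
    for _ in range(20):
        a = random_wheel(n, rng)
        gap_rw = np.linalg.eigvalsh(lap_rw(a))[1]      # second smallest eigenvalue
        gap_ip = np.linalg.eigvalsh(lap_interchange(a))[1]
        assert abs(gap_rw - gap_ip) < 1e-8
print("random-walk and interchange spectral gaps agree for all sampled wheels")
\end{verbatim}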
\begin{figure} \resizebox{150pt}{!}{\includegraphics*{wheelgraph.eps}} \caption{\label{fig:wheel} The wheel graph $W_7$ with 7 vertices.} \end{figure} \section{Discussion} \subsection*{Other Markov processes with the same spectral gap as the random walk} Apart from the interchange process and the random walk, several other natural Markov chains arise from the interchange dynamics on a weighted graph. Indeed, one may allow several vertices to receive the same label, which can be thought of as a color. Interchanging nodes with the same color then does not change the color configuration on the graph. Thus, for each possible initial configuration of colors, one obtains a continuous-time Markov chain. One can think of these processes as parameterized by Young diagrams (partitions), where each row corresponds to a color and the number of boxes in each row corresponds to the number of vertices receiving this color. The resulting process can be interpreted as a random walk on a so-called Schreier graph; see also Cesi~\cite{cesi2}. The interchange process is a special case of this construction with $\lambda=(1^n)$, i.e., all vertices have different colors. Similarly, the random walk process arises on setting $\lambda=(n-1,1)$, i.e., one vertex has a different color from the other vertices. After a change of basis, the intensity matrices for these Markov processes can be written as a direct sum of irreducible representations as in Propositions~\ref{prop:laplaciandecomp} and \ref{prop:interchangelaplaciandecomp}. This is called Young's rule; indeed, the intensity matrix corresponding to partition $\lambda$ naturally arises from the $M^\lambda$ module in representation theory. The multiplicities of the irreducible representations are given by the so-called Kostka numbers. As a consequence of the resulting block structure, all of the intensity matrices (except for the trivial one corresponding to $\lambda=(n)$) contain the irreducible representation corresponding to the partition $(n-1,1)$. Thus, if Conjecture~\ref{conj} can be shown to hold, all of these processes have exactly the same spectral gap. \subsection*{Gelfand-Tsetlin patterns} By Proposition~\ref{prop:interlacing}, subsequent removal of vertices and updating of the weights according to (\ref{eq:defalphaprime}) yields a Gelfand-Tsetlin pattern, i.e., a collection of successively interlaced sequences. Subsequent removal of vertices {\em without} weight updating yields a nondecreasing spectral-gap sequence, an observation which has previously proven useful in the context of Aldous' conjecture \cite{MR2415139,starrconomos}. The significance of the Gelfand-Tsetlin structure is currently unclear. \subsection*{The cut-off phenomenon} It is a natural question whether the results of this paper can be exploited to study the cut-off phenomenon for Markov chains. This question is currently open. Proving a cut-off phenomenon requires control over the whole spectrum, not only near the edge. A variety of known results \cite{MR2023654}, e.g., on $\ell$-adjacent transposition walks, suggests that (pre)cut-off thresholds for interchange processes have an extra $\log(n)$ factor when compared to the corresponding random walk processes. Proposition~\ref{prop:interchangelaplaciandecomp} suggests that the second smallest eigenvalue of the Laplacian of the interchange process typically has multiplicity $f^{(n-1,1)} = n-1$ if Aldous' conjecture holds, which may explain the additional $\log(n)$ factor.
\subsection*{Electric networks} Section~\ref{sec:application} showed how the Y-$\Delta$ transformation naturally arises in the context of Corollary~\ref{cor:induction}, but there may be a deeper connection. For $n=4$, the definition of $\alpha'$ in (\ref{eq:defalphaprime}) in terms of $\alpha$ appears in formulas for the resistance in electrical networks when a $\Delta$ is transformed into a Y. The recent work of Caputo {\em et al.}~\cite{caputo} sheds some light on this. \section*{Acknowledgments} The author would like to thank Prasad Tetali and David Goldberg for valuable discussions, and Kavita Ramanan for helpful comments on an earlier draft.
\section{Introduction} The pulsar magnetospheres consist mainly of electron-positron plasma. This plasma can affect the radiation produced in the inner region of the magnetosphere or at the stellar surface and, owing to this, the pulsar emission can provide information regarding the physical conditions in the magnetosphere. This might be a powerful method for the diagnostics of the magnetosphere. For instance, fluctuations of the pulsar emission can be caused by non-stationary phenomena in the magnetospheric plasma (such as instabilities, waves, etc.), which are determined by the conditions in the magnetosphere. Therefore, the spectrum and characteristic timescale of the detected fluctuations can provide important information regarding the state of plasma in the magnetosphere. That is why understanding non-stationary properties of the magnetospheric plasma is of crucial importance for the interpretation of observational data. The physical processes in the pulsar magnetosphere are very particular. The mean free path of particles is typically short compared to the characteristic lengthscale and, hence, the magnetohydrodynamic description is justified. For typical values of the magnetic field, the electromagnetic energy density is greater than the kinetic energy density. This suggests that the force-free equation is a good approximation for determining the magnetic field structure over much of the magnetosphere. The growing observational data on spectra and pulse profiles of isolated pulsars prompt continued improvement of theoretical models of such force-free magnetospheres (see, e.g., Goodwin et al. 2004, Contopoulos et al. 1999, Komissarov 2006, McKinney 2006, Petrova 2013). In the axisymmetric case, the equations governing the structure of a pulsar magnetosphere can be reduced to the well-known Grad-Shafranov equation (see, e.g., Michel 1973, Mestel 1973, Mestel \& Shibata 1994; see also Beskin 1997 for a general overview). Most models based on this equation have a ``dead zone'' with field lines that are closed within the light-cylinder and a ``wind zone'' with poloidal field lines that cross the light-cylinder. Poloidal currents in the ``wind zone'' maintain a toroidal field component, whereas currents are vanishing in the ``dead zone''. Many magnetohydrodynamic (MHD) phenomena in the force-free magnetosphere, however, are still poorly understood. Particularly, this concerns the non-stationary processes, such as the various types of instability, that can occur in the magnetosphere. This question is of particular interest because even the existence of a stationary force-free configuration is in doubt (see, e.g., Timokhin 2006). One of the MHD instabilities that can arise in the pulsar magnetosphere is the so-called diocotron instability, which is the non-neutral plasma analog of the Kelvin-Helmholtz instability. This instability has been studied extensively in the context of laboratory plasma devices (see, e.g., Levy 1965; Davidson 1990; Davidson \& Felice 1998). In pulsars having a differentially rotating equatorial disk with a non-vanishing charge density, the diocotron modes could become unstable (Petri et al. 2002). In the non-linear regime, the diocotron instability might cause outward diffusion of the charged particles across the magnetic field lines (Petri et al. 2003). The role of a diocotron instability in causing drifting subpulses in radio pulsar emission has been discussed by Fung et al. (2006).
Recently, a new mode of magnetospheric oscillations has been considered by Urpin (2011). This mode is closely related to the Alfv\'enic waves of standard magnetohydrodynamics, modified by the force-free condition and the non-vanishing electric charge density. This type of magnetospheric waves can be unstable because there are a number of destabilising factors in the magnetosphere (such as differential rotation, electric currents, non-zero charge density, etc.). For example, many models of the magnetosphere predict that rotation should be differential (see, e.g., Mestel \& Shibata 1994; Contopoulos et al. 1999), but it is known that differential rotation in a plasma with a magnetic field leads to the so-called magnetorotational instability (Velikhov 1959). In the axisymmetric model of a magnetosphere suggested by Contopoulos et al. (1999), the angular velocity decreases in inverse proportion to the cylindrical radius beyond the light cylinder and even more steeply in front of it. For such rotation, the growth time of unstable magnetospheric waves is of the order of the rotation period (Urpin 2012). Numerical modelling by Komissarov (2006) showed that plasma rotates differentially mainly near the equator and the poles within the light cylinder. Such strong differential rotation should lead to an instability that also arises on a timescale of the order of a rotation period. Note that the magnetorotational instability in the pulsar magnetosphere differs essentially from the standard magnetorotational instability because of the non-vanishing charge density and the force-free condition (Urpin 2012). The electric currents flowing in the plasma also provide a destabilising influence that leads to the so-called Tayler instability (see, e.g., Tayler 1973a, b). This instability is well studied in both laboratory and stellar conditions. It arises basically on the Alfv\'en timescale and is particularly efficient if the strengths of the toroidal and poloidal field components differ significantly (see, e.g., Bonanno \& Urpin 2008a,b). This condition is satisfied in many magnetospheric models (see, e.g., Contopoulos et al. 1999), and these models can be unstable. However, this instability might have a number of qualitative features in the pulsar magnetosphere because of the force-free condition and non-zero charge density. In the present paper, we consider an instability of the pulsar magnetosphere related to the magnetospheric waves considered by Urpin (2011). We show that these waves can be unstable in the regions where the magnetic pressure gradient is non-vanishing. The considered instability arises on a short timescale and can be responsible for a short-term variability of the pulsar emission. \section{Basic equations} Despite uncertainties in estimates of many parameters, plasma in the pulsar magnetosphere is likely collisional and the Coulomb mean free path of electrons and positrons is small compared to the characteristic length scale. Therefore, the magnetohydrodynamic description can be applied to such highly magnetized plasma (see Urpin 2012 for more details). Let us define the hydrodynamic velocity and electric current of an electron-positron plasma as \begin{equation} {\bf V} = \frac{1}{n} (n_e {\bf V}_e + n_p {\bf V}_p) , \;\;\; {\bf j}= e (n_p {\bf V}_p - n_e {\bf V}_e), \end{equation} where $({\bf V}_e, n_e)$ and $({\bf V}_p, n_p)$ are the partial velocities and number densities of electrons and positrons, respectively; $n=n_e + n_p$.
Then, the partial velocities of the electrons and positrons can be expressed in terms of ${\bf V}$ and ${\bf j}$: \begin{equation} {\bf V}_e = \frac{1}{2 n_e} \left( n {\bf V} - \frac{{\bf j}}{e} \right), \;\;\; {\bf V}_p = \frac{1}{2 n_p} \left( n {\bf V} + \frac{{\bf j}}{e} \right). \end{equation} If the number density of plasma, $n$, is much greater than the charge number density, $|n_p-n_e|$, then $V \gg j/en$. In the general case, the hydrodynamic and current velocities can be comparable in the electron-positron plasma. MHD equations governing the electron-positron plasma can be obtained from the partial momentum equations for the electrons and positrons in the standard way (see Urpin 2012). Assuming that the plasma is non-relativistic, the momentum equation for particles of the sort $\alpha$ ($\alpha = e, p$) reads \begin{eqnarray} m_{\alpha} n_{\alpha} \left[ \dot{{\bf V}}_{\alpha} + ({\bf V}_{\alpha} \cdot \nabla) {\bf V}_{\alpha} \right] = - \nabla p_{\alpha} + n_{\alpha} {\bf F_{\alpha}} + \nonumber \\ e_{\alpha} n_{\alpha} \left({\bf E} + \frac{{\bf V}_{\alpha}}{c} \times {\bf B} \right) + {\bf R}_{\alpha} \end{eqnarray} (see, e.g., Braginskii 1965, where the general plasma formalism is considered); the dot denotes the partial time derivative. Here, ${\bf V}_{\alpha}$ is the mean velocity of particles $\alpha$; $n_{\alpha}$ and $p_{\alpha}$ are their number density and pressure, respectively; ${\bf F}_{\alpha}$ is an external force acting on the particles $\alpha$ (in our case ${\bf F}_{\alpha}$ is the gravitational force); ${\bf E}$ is the electric field; and ${\bf R}_{\alpha}$ is the internal friction force caused by collisions of the particles $\alpha$ with other sorts of particles. Since ${\bf R}_{\alpha}$ is an internal force, the sum of ${\bf R}_{\alpha}$ over $\alpha$ is zero in accordance with Newton's Third Law. Therefore, we have in the electron-positron plasma ${\bf R}_e = - {\bf R}_p$. The inertial terms on the l.h.s. of Eq.(3) give a small contribution to the force balance because of the small mass of both electrons and positrons. The gravitational force can also be neglected for the same reason. The gas pressure is much smaller than the magnetic pressure in the force-free pulsar magnetosphere. Therefore, the momentum equation (3) for the electron-positron plasma reads \begin{equation} e_{\alpha} n_{\alpha} \left({\bf E} + \frac{{\bf V}_{\alpha}}{c} \times {\bf B} \right) + {\bf R}_{\alpha} = 0. \end{equation} Generally, the friction force, ${\bf R}_{\alpha}$, contains two terms: one proportional to the difference of partial velocities $({\bf V}_e - {\bf V}_p)$ and another proportional to the temperature gradient (see, e.g., Braginskii 1965). We will neglect the thermal contribution to ${\bf R}_{\alpha}$ and take into account only the friction caused by a difference in the partial velocities. This is equivalent to neglecting the thermal diffusion of particles compared to their hydrodynamic velocities. The friction force is related to the velocity difference, $({\bf V}_e - {\bf V}_p)$, by a tensor that generally has components along and across the magnetic field and the so-called Hall component, which is perpendicular to both the magnetic field and the velocity difference. In a strong magnetic field, the parallel and perpendicular components are comparable but the Hall component is small (see Braginskii 1965).
Therefore, we mimic the friction force between electrons and positrons by \begin{equation} {\bf R}_e = - \frac{m_e n_e}{\tau_e} ({\bf V}_e - {\bf V}_p), \end{equation} where $\tau_e$ is the relaxation time of electrons. Note that this simple model for the friction force is often used even for a magnetized plasma in laboratory conditions (Braginskii 1965) and yields qualitatively correct results. We assume that the accuracy of Eq.(5) is sufficient for studying the magnetosphere of pulsars. It is usually more convenient to use linear combinations of Eq.(4) than to solve the partial equations. The sum of the electron and positron equations (4) yields the equation of hydrostatic equilibrium in the magnetosphere \begin{eqnarray} \rho_e {\bf E} + \frac{1}{c} \; {\bf j} \times {\bf B} = 0, \end{eqnarray} where $\rho_e = e (n_p - n_e) = e \delta n$ is the charge density. Taking the difference between the electron and positron equations (4), we obtain Ohm's law in the form \begin{equation} {\bf j} = \rho_e {\bf V} + \sigma \!\left({\bf E} \! + \! \frac{{\bf V}}{c} \! \times \! {\bf B} \right) \end{equation} where $\sigma = e^2 n_p \tau_e/m_e$ is the conductivity of the plasma. It was shown by Urpin (2012) that Eqs.(6)-(7) are equivalent to the two equations \begin{equation} {\bf j} = \rho_e {\bf V}\;, \;\;\; {\bf E} = - \frac{{\bf V}}{c} \times {\bf B}. \end{equation} These equations imply that the force-free condition and Ohm's law (Eqs.(6)-(7)) are equivalent to the conditions of a frozen-in magnetic field and the presence of only advective currents in the magnetosphere. Departures from this equivalence can be caused, for instance, by general relativistic corrections (see Palenzuela 2013) but they are very small in the magnetosphere. Note that the cross product of the frozen-in condition with ${\bf B}$ yields the well-known expression for the component of the velocity transverse to ${\bf B}$: ${\bf V}_{\perp} = c \, ({\bf E} \times {\bf B})/B^2$. Certainly, the electric current should be non-vanishing in the magnetosphere, ${\bf j} \neq 0$, because it maintains the magnetic configuration. Hence, the hydrodynamic velocity should be non-zero as well since the current is advective. Therefore, the force-free magnetosphere can only exist if hydrodynamic motions are non-vanishing (Urpin 2012). \section{Equation governing the magnetospheric waves} Equation (8) should be complemented by the Maxwell equations. Then, the set of equations governing MHD processes in the force-free pulsar magnetosphere reads \begin{eqnarray} \nabla \cdot {\bf E} = 4 \pi \rho_e , \;\;\; \nabla \times {\bf E} = - \frac{1}{c} \frac{\partial {\bf B}}{\partial t} , \nonumber \\ \nabla \cdot {\bf B} = 0 ,\;\;\; \nabla \times {\bf B} = \frac{1}{c} \frac{\partial {\bf E}}{\partial t} + \frac{4 \pi}{c} {\bf j}, \nonumber \\ {\bf j} \approx \rho_e {\bf V}, \;\;\; {\bf E} \approx - \frac{{\bf V}}{c} \times {\bf B}. \end{eqnarray} Consider the properties of MHD waves with small amplitudes as described by these equations. We assume that the electric and magnetic fields are equal to ${\bf E}_0$ and ${\bf B}_0$ in the unperturbed magnetosphere. The corresponding electric current, charge density, and velocity are ${\bf j}_0$, $\rho_{e0}$, and ${\bf V}_0$, respectively. For the sake of simplicity, we assume that motions in the magnetosphere are non-relativistic ($V_0 \ll c$). Linearizing Eq.(9), we obtain the set of equations that describes the behaviour of modes with a small amplitude. Small perturbations are indicated by subscript 1.
We consider waves with a short wavelength. The space-time dependence of such waves can be taken in the form $\propto \exp(i \omega t - i {\bf k} \cdot {\bf r})$, where $\omega$ and ${\bf k}$ are the frequency and wave vector, respectively. Such waves exist if their wavelength $\lambda = 2 \pi /k$ is short compared to the characteristic length scale of the magnetosphere, $L$. Typically, $L$ is greater than the stellar radius. We consider magnetohydrodynamic modes whose frequency satisfies the condition $\omega < 1/ \tau_e$, since we use the MHD approach. Substituting the frozen-in condition, ${\bf E} = - {\bf V} \times {\bf B}/c$, into the equation $c \nabla \times {\bf E} = - \partial {\bf B}/ \partial t$ and linearizing the obtained induction equation, we have \begin{equation} i \omega {\bf B}_1 = \nabla \times ( {\bf V}_1 \times {\bf B}_0 + {\bf V}_0 \times {\bf B}_1 ). \end{equation} The destabilising effect of shear has already been studied by Urpin (2012). In the present paper, we concentrate on the instability caused by the presence of electric currents in the magnetosphere. Therefore, we assume that shear is small and neglect terms proportional to $|\partial V_{0 i} / \partial x_j|$. Then, Eq.(10) reads \begin{equation} i\tilde{\omega} {\bf B}_1 = i {\bf B}_0 ({\bf k} \cdot {\bf V}_1 ) - i {\bf V}_1 ({\bf k} \cdot {\bf B}_0 ) - ({\bf V}_1 \cdot \nabla ){\bf B}_0, \end{equation} where $\tilde{\omega} = \omega - {\bf k} \cdot {\bf V}_0$. The last term on the r.h.s. is usually small compared to the second term (by a factor $\sim \lambda/L$) in the short-wavelength approximation. However, it becomes crucially important if the wavevector of perturbations is almost perpendicular to ${\bf B}_0$. Substituting the expression ${\bf j} = \rho_e {\bf V}$ into Ampere's law (second line of Eq.(9)) and linearising the obtained equation, we have \begin{equation} {\bf V}_1 = - \frac{i}{4 \pi \rho_{e0}} (c {\bf k} \times {\bf B}_1 + \omega {\bf E}_1) - \frac{\rho_{e1}}{\rho_{e0}} {\bf V}_0 . \end{equation} We consider relatively low-frequency magnetohydrodynamic modes with $\omega < c k$. Note that the frequency of MHD modes must also satisfy the condition $\omega < 1/ \tau_e$ because of the MHD approach used. The relaxation time can be estimated as $\tau_e \sim \ell_e / c_e$, where $c_e$ and $\ell_e$ are the thermal velocity and mean free path of particles, respectively. The frequency $ck$ can be greater or smaller than $1/ \tau_e$ depending on the wavelength $\lambda$. If $\lambda > 2 \pi \ell_e (c/c_e)$, then we have $ck < 1/ \tau_e$. If $\lambda < 2 \pi \ell_e (c/c_e)$, then we have $ck > 1/ \tau_e$. Eliminating ${\bf E}_1$ from Eq.(12) by making use of the linearised frozen-in condition and neglecting terms of the order of $(\omega/ck) (V_0/c)$, we obtain the following for such modes: \begin{equation} {\bf V}_1 + \frac{i \omega}{4 \pi c \rho_{e0}} {\bf B}_0 \times {\bf V}_1 = - \frac{i c}{4 \pi \rho_{e0}} {\bf k} \times {\bf B}_1 - \frac{\rho_{e1}}{\rho_{e0}} {\bf V}_0. \end{equation} The perturbation of the charge density can be calculated from the equation $\rho_{e1} = \nabla \cdot {\bf E}_1 /4 \pi$. To the lowest order in $(\lambda/L)$, we then have \begin{equation} \rho_{e1} \!=\! \frac{1}{4 \pi c} [ i {\bf B}_0 \cdot ({\bf k} \!\times\! {\bf V}_1 \!) - i {\bf V}_0 \cdot ({\bf k} \!\times\! {\bf B}_1 \!)].
\end{equation} Substituting Eq.(14) into Eq.(13) and neglecting terms of the order of $V_{0}^2/c^2$, we obtain the second equation, which couples ${\bf B}_1$ and ${\bf V}_1$, \begin{eqnarray} 4 \pi c \rho_{e0} {\bf V}_1 \! + \! i \omega {\bf B}_0 \! \times \! {\bf V}_1 \! = \!-\! i c^2 {\bf k} \! \times \! {\bf B}_1 \!-\! i {\bf V}_0 [ {\bf B}_0 \! \cdot \! ( {\bf k} \! \times \! {\bf V}_1 \! )]. \end{eqnarray} Eliminating ${\bf B}_1$ from Eqs.(11) and (15) in favor of ${\bf V}_1$ and again neglecting terms of the order of $(\omega/ck)(V_0/c)$ and $(\omega/ck)^2$, we obtain the equation for ${\bf V}_1$ in the form \begin{equation} 4 \pi \! c \rho_{e0} \! {\bf V}_{1} \!\!-\!\! i \frac{c^2}{\tilde{\omega}} ({\bf k} \! \cdot \! {\bf B}_0) {\bf k} \! \times \!\!{\bf V}_{1} \!\!=\! \frac{c^2}{\tilde{\omega}} {\bf k} \! \times \! [ (\!{\bf V}_{1} \!\! \cdot \!\! \nabla \!) {\bf B}_0 \!-\!i {\bf B}_0 ({\bf k} \! \cdot \!\! {\bf V}_{1}) ]. \end{equation} It immediately follows from this equation that $({\bf k} \cdot {\bf V}_1) = 0$ and, hence, the magnetospheric waves are transverse. Equation (16) then simplifies to \begin{equation} \alpha {\bf V}_{1} - i ({\bf k} \cdot {\bf B}_0) {\bf k} \times {\bf V}_{1} = {\bf k} \times ( {\bf V}_{1} \cdot \nabla ) {\bf B}_0, \end{equation} where \begin{equation} \alpha = 4 \pi \! \rho_{e0} \frac{\tilde{\omega}}{c}. \end{equation} In the case of a uniform magnetic field, Eq.(17) reduces to the equation considered by Urpin (2011). \section{Dispersion equation and instability of magnetospheric modes} Generally, the stability properties of perturbations are complicated even in the local approximation. Equation (17) can be transformed to a more convenient form that does not contain a cross product of ${\bf k}$ and ${\bf V}_{1}$. Taking the cross product of ${\bf k}$ with Eq.~(17) and taking into account ${\bf k} \cdot {\bf V}_1 = 0$, we have \begin{equation} {\bf k} \times {\bf V}_1 = - \frac{1}{\alpha} \{ i k^2 ({\bf k} \cdot {\bf B}_0) {\bf V}_1 - ({\bf V}_1 \cdot \nabla) [ {\bf k} \times ({\bf k}\times {\bf B}_0)] \}. \end{equation} Substituting this expression into Eq.~(17), we obtain \begin{eqnarray} \left[ \alpha^2 - k^2 ({\bf k} \cdot {\bf B}_0)^2 \right] {\bf V}_1 = \alpha ({\bf V}_1 \cdot \nabla) {\bf k} \times {\bf B}_0 + \nonumber \\ i ({\bf k} \cdot {\bf B}_0) ({\bf V}_1 \cdot \nabla) [ {\bf k} ({\bf k} \cdot {\bf B}_0) - k^2 {\bf B}_0 ]. \end{eqnarray} The magnetospheric waves can exist in the force-free pulsar magnetosphere only if the wavevector ${\bf k}$ and the unperturbed magnetic field ${\bf B}_0$ are almost (but not exactly) perpendicular and the scalar product $({\bf k} \cdot {\bf B}_0)$ is small but non-vanishing (see Urpin 2011, 2012). The reason for this is clear from simple qualitative arguments. The magnetospheric waves are transverse (${\bf k} \cdot {\bf V}_1 =0$), and the velocity of plasma is perpendicular to the wave vector. However, wave motions across the magnetic field are suppressed in a strong field, and the velocity component along the magnetic field should be much greater than the transverse one (see, e.g., Mestel \& Shibata 1994). Therefore, the direction of a wavevector ${\bf k}$ should be close to the plane perpendicular to ${\bf B}_0$. That is why we treat Eq.~(20) only in the case of small $({\bf k} \cdot {\bf B}_0)$. Consider Eq.~(20) in the neighbourhood of a point, ${\bf r}_0$, using local Cartesian coordinates.
We assume that the $z$-axis is parallel to the local direction of the unperturbed magnetic field and the corresponding unit vector is ${\bf b} = {\bf B}_0({\bf r}_0) /B_0({\bf r}_0)$. The wavevector can be represented as ${\bf k} = k_{\parallel} {\bf b} + {\bf k}_{\perp}$, where $k_{\parallel}$ and ${\bf k}_{\perp}$ are the components of ${\bf k}$ parallel and perpendicular to the magnetic field, respectively. Then, the condition ${\bf k} \cdot {\bf V}_1 = 0$ yields \begin{equation} V_{1z} = - \frac{1}{k_{\parallel}} ({\bf k}_{\perp} \cdot {\bf V}_{1 \perp}). \end{equation} Since $k_{\perp} \gg k_{\parallel}$, we have $V_{1z} \gg V_{1 \perp}$ and, hence, $({\bf V}_1 \cdot \nabla) \approx V_{1z} \partial /\partial z$. Therefore, the $z$-component of Eq.~(20) yields the following dispersion relation \begin{equation} \alpha^2 + A \alpha + i D = 0, \end{equation} where \begin{eqnarray} A= ({\bf k} \times {\bf b}) \cdot \frac{\partial {\bf B}_0}{\partial z}\;, \nonumber \\ D = k^2 ({\bf k} \cdot {\bf B}_0) \left[ {\bf b} \cdot \frac{\partial {\bf B}_0}{\partial z} + i ({\bf k} \cdot {\bf B}_0) \right]. \end{eqnarray} We neglect in $D$ corrections of the order $\sim \lambda / L$ to $k^2 ({\bf k} \cdot {\bf B}_0)^2$. The roots of Eq.(22) correspond to two modes with the frequencies given by \begin{equation} \alpha_{1, 2} = - \frac{A}{2} \pm \left( \frac{A^2}{4} - i D \right)^{1/2}. \end{equation} If the magnetic field is approximately uniform along the field lines, then $\partial {\bf B}_0 / \partial z \approx 0$ and, hence, $A \approx 0$ and $D \approx i k^2 ({\bf k} \cdot {\bf B}_0)^2$. In this case, the magnetospheric modes are stable and $\alpha_{1,2} \approx \pm \sqrt{-iD}$. The corresponding frequency is \begin{equation} \tilde{\omega} \approx \pm \frac{ck}{4 \pi \rho_{e0}} ({\bf k} \cdot {\bf B}_0). \end{equation} These waves were first studied by Urpin (2011). In deriving Eq.~(25), it was assumed that $\omega \ll ck$. Therefore, the magnetospheric waves can exist only if $({\bf k} \cdot {\bf B}_0)$ is small, as was discussed above: $({\bf k} \cdot {\bf B}_0) \ll 4 \pi \rho_{e0}$. Sometimes, it is convenient to measure the true charge density, $\rho_{e0}$, in units of the Goldreich-Julian charge density, $\rho_{GJ}= \Omega B_0 / 2 \pi c$, where $\Omega$ is the angular velocity of a pulsar. Then, we can suppose $\rho_{e0} = \xi \rho_{GJ}$, where $\xi$ is a dimensionless parameter and, hence, the condition $\omega \ll ck$ transforms into \begin{equation} 2 \xi \Omega \gg c |{\bf k} \cdot {\bf b}|. \end{equation} Obviously, this condition can be satisfied only for waves with the wavevector almost perpendicular to ${\bf B}_0$. Note, however, that if ${\bf k}$ is exactly perpendicular to ${\bf B}_0$, the magnetospheric waves do not exist. If $\partial {\bf B}_0 / \partial z \neq 0$, the magnetospheric waves turn out to be unstable. The instability is especially pronounced if $|{\bf k} \cdot {\bf B}_0| < B_0 / L$. In this case, the second term in the brackets of Eq.~(24) is smaller than the first one and, therefore, the roots are \begin{equation} \alpha_1 \approx - A + i \frac{D}{A} \;, \;\;\; \alpha_2 = - i \frac{D}{A}. \end{equation} The coefficient $D$ is approximately equal to \begin{equation} D \approx k^2 ({\bf k} \cdot {\bf B}_0) \left( {\bf b} \cdot \frac{\partial {\bf B}_0}{\partial z} \right). \end{equation} The first and second roots of Eq.~(27) correspond to oscillatory and non-oscillatory modes, respectively.
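As a simple numerical illustration of the limit leading to Eq.~(27), the short Python snippet below compares the exact roots (24) of the dispersion relation (22) with the approximate roots (27); the values of $A$ and $D$ are purely illustrative and merely chosen to satisfy $|D| \ll A^2$.
\begin{verbatim}
import cmath

# Illustrative (made-up) values in the regime |D| << A^2 that leads to Eq. (27)
A, D = 1.0, 1.0e-3

disc = cmath.sqrt(A**2 / 4 - 1j * D)
alpha1_exact = -A / 2 - disc            # root of Eq. (24), oscillatory branch
alpha2_exact = -A / 2 + disc            # root of Eq. (24), non-oscillatory branch

alpha1_approx = -A + 1j * D / A         # Eq. (27), oscillatory mode
alpha2_approx = -1j * D / A             # Eq. (27), non-oscillatory mode

print("oscillatory mode    :", alpha1_exact, "~", alpha1_approx)
print("non-oscillatory mode:", alpha2_exact, "~", alpha2_approx)
\end{verbatim}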
The occurrence of instability is determined by the sign of the ratio $D/A$. If this ratio is positive for some wavevector ${\bf k}$, then the non-oscillatory mode is unstable but the oscillatory one is stable for such ${\bf k}$. If $D/A < 0$, then the oscillatory mode is unstable but the non-oscillatory one is stable for the corresponding ${\bf k}$. Note, however, that the frequency of oscillatory modes can often be very high, with $\omega \gg ck$. Our consideration does not apply in this case. Indeed, we have $\alpha_1 \sim A$ and, hence, $\tilde{\omega}_1 \sim ck (B_0 / 4 \pi \rho_{e0} L)$. The condition $\omega \ll ck$ implies that $B_0/4 \pi \rho_{e0} L <1$. Expressing the charge density in units of the Goldreich-Julian density, $\rho_{e0} = \xi \rho_{GJ}$, we transform this inequality into \begin{equation} \frac{1}{2 \xi} \; \frac{c}{\Omega L} \ll 1. \end{equation} This condition can be satisfied only in regions where $\xi \gg 1$ and the charge density is much greater than the Goldreich-Julian density. If inequality (29) is not fulfilled, then Eq.~(27) for the oscillatory mode $\alpha_1$ does not apply, and only the non-oscillatory modes exist. For example, the charge density is large in the region where the electron-positron plasma is created. Therefore, condition (29) can be satisfied there and, hence, the oscillatory instability can occur in this region. The non-oscillatory modes have a lower growth rate and can usually occur in the pulsar magnetosphere. For any magnetic configuration, it is easy to verify that one can choose the wavevector of perturbations, ${\bf k}$, in such a way that the ratio $D/A$ becomes positive and, hence, the non-oscillatory mode is unstable. Indeed, we can represent ${\bf k}$ as the sum of components parallel and perpendicular to the magnetic field, ${\bf k} = {\bf k}_{\parallel} + {\bf k}_{\perp}$. Obviously, $A \propto k_{\perp}$ and $D \propto k_{\parallel}$ and, hence, $D/A \propto k_{\parallel}/ k_{\perp}$. Therefore, if $D/A < 0$ for a value of ${\bf k} = ( k_{\parallel}, k_{\perp})$, this ratio changes sign for ${\bf k} = ( - k_{\parallel}, k_{\perp})$ and ${\bf k} = ( k_{\parallel},- k_{\perp})$, and waves with such wavevectors are unstable. It turns out that there always exists a range of wavevectors for which the non-oscillatory modes are unstable and, hence, the force-free magnetosphere is always subject to the instability. The necessary condition of instability is $D \neq 0$. As mentioned above, the magnetospheric waves exist only if the wavevector ${\bf k}$ is close to the plane perpendicular to the unperturbed magnetic field, ${\bf B}_0$, and the scalar product $({\bf k} \cdot {\bf B}_0)$ is small (but non-vanishing). Therefore, the necessary condition $D \neq 0$ is equivalent to ${\bf b} \cdot (\partial {\bf B}_0/ \partial z) \neq 0$. Since ${\bf b} = {\bf B}_0 / B_0$, we can rewrite this condition as \begin{equation} {\bf B}_0 \cdot \frac{\partial {\bf B}_0}{\partial z} \neq 0. \end{equation} This condition is satisfied if the magnetic pressure gradient along the magnetic field is non-zero. The topology of the magnetic field can be fairly complicated in the magnetosphere, particularly in a region close to the neutron star. This may happen because the field geometry at the neutron star surface should be very complex (see, e.g., Bonanno et al. 2005, 2006). Therefore, Eq.~(30) can be satisfied in different regions of the magnetosphere. However, this condition can be fulfilled even if the magnetic configuration is relatively simple.
As a possible example, we consider a region near the magnetic pole of a neutron star. It was shown by Urpin (2012) that a special type of cylindrical waves can exist there with $m \neq 0$, where $m$ is the azimuthal wavenumber. The criterion of instability (30) is certainly satisfied in this region and, hence, the instability can occur. Indeed, one can mimic the magnetic field by a vacuum dipole near the axis. The radial and polar components of the dipole field in the spherical coordinates $(r, \theta, \varphi)$ are \begin{equation} B_r = B_p \left( \frac{a}{r} \right)^3 \cos \theta , \;\;\; B_{\theta} = \frac{1}{2} B_p \left( \frac{a}{r} \right)^3 \sin \theta , \end{equation} where $B_p$ is the polar strength of the magnetic field at the neutron star surface and $a$ is the stellar radius (see, e.g., Urpin et al. 1994). The radial field is much greater than the polar one near the axis and, therefore, it is easy to check that the criterion of instability (30) is fulfilled in the polar gap. Hence, filament-like structures can be formed there. Note that in some models the force-free field at the top of the polar gap can differ from that of a vacuum dipole (see, e.g., Petrova 2012), but this does not change our conclusion qualitatively. We will consider the instability in the polar gap in more detail elsewhere. From Eq.~(30), it follows that the instability in pulsar magnetospheres is driven by a non-uniformity of the magnetic pressure and, hence, it can be called ``the magnetic pressure-driven instability''. Note that this instability can occur only in plasma with a non-zero charge density, $\rho_{e0} \neq 0$, and does not arise in a neutral plasma with $\rho_{e0} = 0$. \section{Discussion} We have considered the stability of the electron-positron plasma in the magnetosphere of pulsars. The pair plasma in the magnetosphere is likely created in a two-stage process: primary particles are accelerated by an electric field parallel to the magnetic field near the poles up to extremely high energy, and these produce a secondary, denser pair plasma via a cascade process (see, e.g., Michel 1982). The number density of this secondary plasma greatly exceeds the Goldreich-Julian number density, $n_{GJ} = |\rho_{GJ}|/e$, required to maintain a corotation electric field and, hence, the multiplicity factor $\xi$ can be very large. Unfortunately, this factor is model dependent and rather uncertain, with estimates ranging from $10^2$ to $10^6$ (see, e.g., Gedalin et al. 1998). For example, the model of bound-pair creation above polar caps (Usov \& Melrose 1996) results in a relatively low value of $\xi$. This model postulates that the photons emitted by primary particles as a result of curvature emission create bound pairs (positronium) rather than free pairs. While the pairs remain bound, the screening of the component of ${\bf E}$ parallel to the magnetic field is inefficient because screening is attributed to free pairs that can be charge-separated as a result of acceleration by ${\bf E}_{\parallel}$. In the absence of screening, the height of the polar gap and the maximum energy of primary particles increase. The main part of the energy that primary particles gain during their motion through the polar gap is transformed into the energy of curvature photons and then into the energy of secondary pairs.
Assuming that the electric field in the gap has a more or less standard value ($\sim 10^{10}$ V cm$^{-1}$), Usov \& Melrose (1996) obtain the following estimate for the multiplicity factor above polar caps: \begin{equation} \xi \sim 4 \times 10^2 \left( \frac{P}{0.1 {\rm s}} \right)^{-3/4}, \end{equation} where $P$ is the pulsar period. Note that this value can be higher if dissociation of bound pairs is taken into account. However, the instability can operate in the region of pair creation even if $\xi$ is as small as this. It can be efficient in regions with $\xi \sim 1$ as well. The geometry of motions in the unstable magnetospheric waves is simple. Since these waves are transverse (${\bf k} \cdot {\bf V}_1 = 0$) and the wavevector of such waves should be close to the plane perpendicular to ${\bf B}_0$, plasma motions are almost parallel or anti-parallel to the magnetic field. The velocity across ${\bf B}_0$ is small. In our model, we have only considered the stability of plane waves using a local approximation. In this model, the instability should lead to the formation of filament-like structures with filaments aligned with the magnetic field lines. Note that plasma can move in opposite directions in different filaments. The characteristic timescale of formation of such structures is $\sim 1/ {\rm Im} \omega$. Since the necessary condition (30) is likely satisfied in a major fraction of the magnetosphere, one can expect that filament-like structures can appear (and disappear) in different magnetospheric regions. We used the hydrodynamic approach in our consideration, which certainly does not apply at large distances from the pulsar where the number density of the plasma is small. Therefore, the considered instability is most likely efficient in the inner part of the magnetosphere, where filament-like structures can be especially pronounced. An example of a region where the instability can occur is the so-called dead zone. Most likely, the hydrodynamic approximation is valid in this region and hydrodynamic motions are non-relativistic, as was assumed in our analysis. Note that the particular geometry of motions in the basic (unperturbed) state is not crucial for the instability and cannot suppress the formation of filament-like structures. These structures can be responsible for fluctuations of the plasma and, hence, the magnetospheric emission can fluctuate with the same characteristic timescale. It should also be noted that the considered instability is basically electromagnetic in origin, as follows from our treatment. Hydrodynamic motions in the basic state play no important role in the instability. For instance, the unperturbed velocity does not even enter the expression for the growth rate. Therefore, one can expect that the same type of instability arises in regions where velocities are relativistic. This case will be considered in detail elsewhere. Hydrodynamic motions accompanying the instability can give rise to turbulent diffusion in the magnetosphere. This diffusion should be highly anisotropic because both the criteria of instability and its growth rate are sensitive to the direction of the wave vector. However, the turbulent diffusion caused by these motions may be efficient in transporting angular momentum and mixing the plasma, with a much larger diffusion coefficient along the magnetic field than across it, since the velocity of motions across the field is much smaller than along it. 
The characteristic growth rate of unstable waves, Im~$\omega$, can be estimated from Eq.~(27) as Im~$\omega \sim (c/ 4\pi \rho_{e0})(D/A)$. Since ${\bf k}$ and ${\bf B}_0$ should be close to orthogonality in magnetospheric waves, we have $A \sim k B_0 / L$ and $D \sim k^2 ({\bf k} \cdot {\bf b}) B_0^2 /L$, where we estimate ${\bf b} \cdot ( \partial {\bf B}_0 /\partial z)$ as $B_0/L$. Then, \begin{equation} {\rm Im}~\omega \sim c k ~ \frac{({\bf k} \cdot {\bf B}_0)}{4 \pi \rho_{e0}} \sim \frac{1}{\xi} c k ~ \frac{c({\bf k} \cdot {\bf b})}{\Omega}. \end{equation} Like stable magnetospheric modes, the unstable ones can occur in the magnetosphere if Eq.~(26) is satisfied. Generally, this condition requires that ${\bf k}$ and ${\bf B}_0$ be close to (but not exactly) orthogonal. Under certain conditions, however, the instability can arise even if departures from orthogonality are not very small, provided that $\xi \gg 1$. As mentioned above, this can happen in regions where the electron-positron plasma is created. The parameter $\xi$ can also be greater than 1 in those regions where the plasma moves with a velocity greater than $\Omega L$. Indeed, we have $\rho_{e0} = (1/4 \pi) \nabla \cdot {\bf E}_0$ for the unperturbed charge density. Since ${\bf E}_0$ is determined by the frozen-in condition (8), we obtain $\rho_{e0} \sim (1/4 \pi c L) V_0 B_0$. If the velocity of the plasma in the magnetosphere is greater than the rotation velocity, then $\xi \sim V_0 / \Omega L$. Some models predict that the velocity in the magnetosphere can reach a fraction of $c$. Obviously, in such regions, condition (26) is satisfied even if departures from orthogonality of ${\bf k}$ and ${\bf B}_0$ are relatively large. The growth rate of the instability estimated above is rather high and can reach a fraction of $ck$. For example, if a pulsar rotates with a period of 0.01 s and $\xi \sim 1$, magnetospheric waves with wavelengths $\sim 10^5 - 10^6$ cm grow on a timescale $\sim 10^{-4} - 10^{-5}$ s if the departure from orthogonality between ${\bf k}$ and ${\bf B}_0$ is of the order of $10^{-4}$. The considered instability can occur everywhere in the magnetosphere except regions close to the surfaces where ${\bf B}_0 \cdot (\partial {\bf B}_0 /\partial z) = 0$ and the instability criterion (30) is not satisfied. The instability considered is caused by the combined action of a non-uniform magnetic field and a non-zero charge density. Certainly, this is not the only instability that can occur in the pulsar magnetosphere. There are many factors that can destabilize a highly magnetized magnetospheric plasma and lead to various instabilities with substantially different properties. For instance, differential rotation predicted by many models of the magnetosphere can cause an instability, as was shown by Urpin (2012). This instability is closely related to the magnetorotational instability (Velikhov 1959), which is well studied in the context of accretion disks (see, e.g., Balbus \& Hawley 1991). Generally, the regions where rotation is differential and the magnetic field is non-uniform can overlap. Thus, the criteria of both instabilities can be fulfilled in the same region. However, these instabilities usually have substantially different growth rates. The instability caused by differential rotation usually arises on a time-scale comparable to the rotation period of a pulsar. The growth rate of the magnetic pressure-driven instability, estimated above, can even reach a fraction of $ck$ in accordance with our results. 
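Since the growth rate derived above is only an order-of-magnitude expression, it may be helpful to see how the quoted timescales arise. The sketch below evaluates ${\rm Im}\,\omega \sim (1/\xi)\, ck \cdot c\,({\bf k}\cdot{\bf b})/\Omega$, writing $({\bf k}\cdot{\bf b})\sim k\delta$ with $\delta$ the departure from orthogonality; all parameters are assumed values, and the result depends on factors of order unity (for instance whether one takes $k=2\pi/\lambda$ or $k=1/\lambda$), so it should be read only as a rough consistency check.
\begin{verbatim}
# Rough evaluation of Im(omega) ~ (1/xi) * (c*k) * c*k*delta / Omega,
# where delta is the departure from orthogonality of k and B_0.
# All parameters are assumed, illustrative values.
import numpy as np

c = 3.0e10                 # cm/s
P = 0.01                   # pulsar period, s
Omega = 2.0 * np.pi / P
xi = 1.0
delta = 1.0e-4             # departure from orthogonality

for lam in (1.0e5, 1.0e6):             # wavelength, cm
    k = 2.0 * np.pi / lam
    im_omega = (1.0 / xi) * (c * k) * (c * k * delta) / Omega
    print("lambda = %.0e cm : growth time ~ %.1e s" % (lam, 1.0 / im_omega))
\end{verbatim}
With these numbers the growth times come out within an order of magnitude of the $\sim 10^{-4}-10^{-5}$ s quoted above, which is all that can be expected from such an estimate.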
Therefore, this instability typically occurs on a shorter time-scale than the instability caused by differential rotation. If two different instabilities can occur in the same region, then the instability with the shorter growth time usually turns out to be more efficient and determines the plasma fluctuations. It is likely, therefore, that the magnetic pressure-driven instability is more efficient everywhere in the magnetosphere except near the surfaces where criterion (30) is not satisfied. In the neighbourhood of these surfaces, however, the instability associated with differential rotation can occur even though it arises on a longer time-scale. Therefore, it appears that the whole pulsar magnetosphere should be unstable. Instabilities can lead to short-term variability of the plasma and, hence, modulate the magnetospheric emission of pulsars. The unstable plasma can also modulate the radiation produced at the stellar surface and propagating through the magnetosphere. Since the growth time of magnetospheric waves can be substantially different in different regions, the instability can lead to the generation of fluctuations over a wide range of timescales, including those yet to be detected in present and future pulsar searches (Liu et al. 2011, Stappers et al. 2011). Detection of such fluctuations would uncover the physical conditions in the magnetosphere and enable one to construct a relevant model of the pulsar magnetosphere and its observational manifestations beyond the framework of the classical concept (see, e.g., Kaspi 2010). \section*{Acknowledgement} The author thanks the Russian Academy of Sciences for financial support under the programme OFN-15.
\section{Introduction} In astronomy, the collection of data is often limited by the presence of background noise. Various methods are used to filter the noise while retaining as much ``useful'' information as possible. In recent years, wavelets have played an increasing role in astrophysical data analysis. They provide a general parameter-free procedure to look for objects of varying size scales. In the case of the Cosmic Microwave Background (CMB) one is interested in the non-Gaussian component in the presence of Gaussian noise and signal. An application of wavelets is presented by Tenorio et al (1999). This paper generalizes their analysis beyond the thresholding approximation. X-ray images are also frequently noise dominated, owing to instrumental and cosmic backgrounds. Successful wavelet reconstructions were achieved by Damiani et al (1997a,b). At times generic tests for non-Gaussianity are desired. Inflationary theories predict, for example, that the intrinsic fluctuations in the CMB are Gaussian, while topological defect theories predict non-Gaussianity. A full test for non-Gaussianity requires measuring all N-point distributions, which is computationally not tractable for realistic CMB maps. Hobson et al (1998) have shown that wavelets are a more sensitive discriminant between cosmic string and inflationary theories if one examines only the one point distribution function of basis coefficients. For Gaussian random processes, Fourier modes are statistically independent. Current theories of structure formation start from an initially linear Gaussian random field which grows non-linear through gravitational instability. Non-linearity occurs through processes local in real space. Wavelets provide a natural basis which compromises between locality in real and Fourier space. Pando \& Fang (1996) have applied the wavelet decomposition in this spirit to the high redshift Ly$\alpha$ systems, which are in the transition from linear to non-linear regimes and are thus well analyzed by the wavelet decomposition. We will concentrate in this paper on the specific case of data laid out on a two-dimensional grid, where each grid point is called a {\it pixel}. Such images are typically obtained through various imaging instruments, including CCD arrays on optical telescopes, photomultiplier arrays on X-ray telescopes, differential radiometry measurements using bolometers in the radio band, etc. In many instances, the images are dominated by noise. In the optical, the sky noise from atmospheric scatter, zodiacal light, and extragalactic backgrounds sets a constant flux background to any observation. CCD detectors essentially count photons, and are limited by the Poissonian discreteness of their arrival. A deep exposure is dominated by sky background, which is subtracted from the image to obtain the features and objects of interest. Since the intensity of the sky noise is constant, it has a Poissonian error with standard deviation $e\propto n^{1/2}$, where $n$ is the photon count per pixel. After subtracting the sky average, this fluctuating component remains as white noise in the image. For large modern telescopes, images are exposed to near the CCD saturation limit, with typical values of $n\sim 10^4$. The Poisson noise is well described by Gaussian statistics in this limit. We would like to pose the problem of filtering out as much of the noise as possible, while maximally retaining the data. In certain instances, optimal methods are possible. 
If we know the data to consist of astronomical point objects, which have a shape on the grid given by the atmospheric spreading or telescope optics, we can test the likelihood at each pixel that a point source was centered there. The iterative application of this procedure is implemented in the routine {\it clean} of the Astronomical Image Processing System (AIPS) (Cornwell \& Braun 1989). If the sources are not point-like, or the atmospheric point spread function varies significantly across the field, {\it clean} is no longer optimal. In this paper we examine an approach to implement a generic noise filter using a wavelet basis. In section \ref{sec:classical} we first review two popular filtering techniques, thresholding and Wiener filtering. In section \ref{sec:opt} we generalize Wiener filtering to inherit the advantages of thresholding. A Bayesian approach to image reconstruction (Vidakovic 1998) is used, where we use the data itself to estimate the prior distribution of wavelet coefficients. We recover Wiener filtering for Gaussian data. Some concrete examples are shown in section \ref{sec:example}. \section{Classical Filters} \label{sec:classical} \subsection{Thresholding} A common approach to suppressing noise is known as thresholding. If the amplitude of the noise is known, one picks a specific threshold value, for example $\tau=3\sigma_{\rm noise}$, to set a cutoff at three times the standard deviation of the noise. All pixels less than this threshold are set to zero. This approach is useful if we wish to minimize false detections, and if all sources of signal occupy only a single pixel. It is clearly not optimal for extended sources, but often used due to its simplicity. The basic shortcoming is its neglect of correlated signals which cover many pixels. The choice of threshold also needs to be determined heuristically. We will attempt to quantify this procedure. \subsection{Wiener Filtering} \begin{figure} \resizebox{\textwidth}{!}{ \includegraphics{pkwien.eps} } \caption{The power spectrum of figure \protect{\ref{plate:org}}. The dashed line is the power spectrum measured from the noisy data. The horizontal line is the noise. The lower solid line is the difference between the measured spectrum and the noise. We see that the measurement of the difference becomes noise limited at large $k$. } \label{fig:pkwien} \end{figure} In the specific case that both the signal and the noise are Gaussian random fields, an optimal filter can be constructed which minimizes the impact of the noise. If the noise and signal are stationary Gaussian processes, Fourier space is the optimal basis where all modes are uncorrelated. In other geometries, one needs to expand in signal-to-noise eigenmodes (see e.g. Vogeley and Szalay 1996). One needs to know both the power spectrum of the data, and the power spectrum of the noise. We use the least-squares norm as a measure of goodness of reconstruction. Let $E$ be the reconstructed image, $U$ the original image and $N$ the noise. The noisy image is called $D=U+N$. We want to minimize the error $e=\langle (E-U)^2\rangle$. For a linear process, $E=\alpha D$. For our stationary Gaussian random field, different Fourier modes are independent, and the optimal solution is $\alpha=\langle U^2\rangle/\langle D^2\rangle$. Here $\langle U^2\rangle$ is the intrinsic power spectrum. Usually, $\langle D^2\rangle$ can be estimated from the data, and if the noise power spectrum is known, the difference can be estimated subject to measurement scatter as shown in figure \ref{fig:pkwien}. 
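In practice the Wiener prescription amounts to only a few lines of code. The sketch below is one possible implementation for a two-dimensional image with white noise of known variance; the radial binning of the measured power spectrum, used to estimate $\langle U^2\rangle=\langle D^2\rangle-\langle N^2\rangle$, is a convenience for illustration and not part of the formalism above.
\begin{verbatim}
# Minimal sketch of a Fourier-space Wiener filter, alpha = <U^2>/<D^2>, for a
# 2-D image contaminated by white noise of known variance.
import numpy as np

def wiener_filter(noisy, noise_var):
    ny, nx = noisy.shape
    D = np.fft.fft2(noisy)
    P_D = np.abs(D)**2 / (nx * ny)                 # measured power spectrum <D^2>
    ky, kx = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx), indexing='ij')
    k = np.hypot(kx, ky).ravel()
    bins = np.linspace(0.0, k.max(), 50)
    idx = np.digitize(k, bins)
    P_bin = np.array([P_D.ravel()[idx == i].mean() if np.any(idx == i) else noise_var
                      for i in range(len(bins) + 1)])
    P_U = np.maximum(P_bin[idx] - noise_var, 0.0)  # estimate of <U^2>, clipped at zero
    alpha = (P_U / (P_U + noise_var)).reshape(ny, nx)
    return np.real(np.fft.ifft2(alpha * D))
\end{verbatim}
Note that every mode of a given $|k|$ is multiplied by the same factor $\alpha$ regardless of its amplitude; this is precisely the behaviour criticized below for strongly non-Gaussian signals.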
Often, the power spectrum decays with increasing wave number (decreasing length scale): $\langle U^2\rangle =c k^{-n}$. For white noise with unit variance, we then obtain $\alpha= c/(k^n+c)$, which tends to one for small $k$ and zero for large $k$. We really only need to know the parameters $c,n$ in the crossover region where the signal and noise powers are comparable. In section \ref{sec:example} we will illustrate a worked example. Wiener filtering is very different from thresholding, since modes are scaled by a constant factor independent of the actual amplitude of the mode. If a particular mode is an outlier far above the noise, the algorithm would still force it to be scaled back. This can clearly be disadvantageous for highly non-Gaussian distributions. If the data is localized in space, but sparse, the Fourier modes dilute the signal into the noise, thus reducing the signal significantly, as is seen in the examples in section \ref{sec:example}. Furthermore, choosing $\alpha$ independent of $D$ is only optimal for Gaussian distributions. One can generalize as follows: \section{Non-Gaussian Filtering} \label{sec:opt} We can extend Wiener filtering to non-Gaussian Probability Density Functions (PDFs) if the PDF is known and the modes are still statistically independent. We will denote the PDF for a given mode as $\Theta(u)$, which describes a random variable $U$. When Gaussian white noise with unit variance ${\cal N}(0,1)$ is added, we obtain a new random variable $D=U+{\cal N}(0,1)$ with PDF $f(d)=(2\pi)^{-1/2}\int \Theta(u) \exp(-(u-d)^2/2) du$. We can calculate the conditional probability $P(U|D)=P(D|U)P(U)/P(D)$ using Bayes' theorem. For the posterior conditional expectation value we obtain \begin{eqnarray} \langle U|D=d \rangle &=& \frac{1}{\sqrt{2\pi}f(d)} \int \exp[-(u-d)^2/2] \Theta(u) u du \nonumber\\ &=& D + \frac{1}{\sqrt{2\pi}f(d)} \partial_d \int \exp[-(u-d)^2/2] \Theta(u) du \nonumber \\ &=& D+ (\ln f)'(d). \label{eqn:bayes} \end{eqnarray} Similarly, we can calculate the posterior variance \begin{equation} \left\langle (U-\bar{U})^2 | D=d \right\rangle = 1+(\ln f)''(d). \label{eqn:var} \end{equation} For a Gaussian prior with variance $\sigma^2$, equation (\ref{eqn:bayes}) reduces to Wiener filtering. We have a generalized form for $\alpha=1+(\ln f)'(D)/D$. For distributions with long tails, $(\ln f)' \sim 0$, $\alpha \sim 1$, and we leave the outliers alone, just as thresholding would suggest. For real data, we have two challenges: 1. estimating the prior distribution $\Theta$; 2. finding a basis in which $\Theta$ is most non-Gaussian. \subsection{Estimating Prior $\Theta$} The general non-Gaussian PDF on a grid is a function of $N$ variables, where $N$ is the number of pixels. It is generally not possible to obtain a complete description of this large dimensional space (D. Field, this proceedings). It is often possible, however, to make simplifying assumptions. We consider two descriptions: Fourier space and wavelet space. We will assume that the one point distributions of modes are non-Gaussian, but that they are still statistically independent. In that case, one only needs to specify the PDF for each mode. In a hierarchical basis, where different basis functions sample characteristic length scales, we further assume a scaling form of the prior PDF $\Theta_l(u)=l^{-\beta}\Theta(u/l^\beta)$. Here $l\sim 1/k$ is the characteristic length scale, for example the inverse wave number in the case of Fourier modes. For images, we often have $\beta \sim 1$. 
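For a single mode, or a set of modes assumed to share the same prior, the posterior mean (\ref{eqn:bayes}) and variance (\ref{eqn:var}) can be evaluated directly from a histogram estimate of $f$. One possible implementation is sketched below; the histogram smoothing and the finite differences are choices of convenience rather than part of the method.
\begin{verbatim}
# Sketch of the shrinkage rule  E[U|D=d] = d + (ln f)'(d)  and the posterior
# variance  1 + (ln f)''(d),  with f estimated from a histogram of the observed
# coefficients d (rescaled so that the noise has unit variance).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def bayes_shrink(d, nbins=200, smooth=2.0):
    hist, edges = np.histogram(d, bins=nbins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    f = gaussian_filter1d(hist, smooth) + 1e-12    # smoothed estimate of f(d)
    dlogf = np.gradient(np.log(f), centers)        # (ln f)'
    d2logf = np.gradient(dlogf, centers)           # (ln f)''
    mean = d + np.interp(d, centers, dlogf)        # posterior mean
    var = 1.0 + np.interp(d, centers, d2logf)      # posterior variance
    return mean, np.clip(var, 0.0, None)
\end{verbatim}
For a Gaussian histogram this reduces to multiplying each coefficient by the Wiener factor, while for a sharply peaked $f$ with extended tails the large outliers are left nearly untouched, as described above.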
Wavelets still have a characteristic scale, and we can similarly assume scaling of the PDF. In analogy with Wiener filtering, we first determine the scale dependence. For computational simplicity, we use Cartesian product wavelets (Meyer 1992). Each basis function has two scales, call them $2^i,2^j$. The real space support of each wavelet has area $A\propto 2^{i+j}$, and we find empirically that the variance depends strongly on that area. The scaling relation does not directly apply for $i\ne j$, and we introduce a lowest order correction using $\ln(\sigma) = c_1 (i+j)+c_2(i-j)^2+c_3$. We then determine the best-fit parameters $c_i$ from the data. The actual PDF may depend on the length scale $i+j$ and the elongation $i-j$ of the wavelet basis. One could parameterize the PDF, and solve for this dependence (Vidakovic 1998), or bin all scales together to measure a non-parametric scale-averaged PDF. We will pursue the latter. The observed variance is the intrinsic variance of $\Theta$ plus the noise variance of ${\cal N}$, so the variance $\sigma_{\rm intrinsic}^2=\sigma_{\rm obs}^2-\sigma_{\rm noise}^2$ has an error $\propto \sigma_{\rm obs}^2/n$, where $n$ is the number of coefficients at the same length scale. We weight the data accordingly. Because most wavelet modes are at short scales, most of the weight will come near the noise threshold, which is what we desire. We now proceed to estimate $f(d)$. Our Ansatz now assumes $\Theta_{ij}(u)\propto\Theta(u/\exp[c_1(i+j)+c_2(i-j)^2+c_3])$, where $\Theta(u)$ has unit variance. We can only directly measure $f_{ij}$. We sort these in descending order of variance, $f_m$. Again, typically the largest scale modes will have the largest variance. In the images explored here, we find typical values of $c_1$ between 1 and 2, while $c_2 \sim -0.2$. For the largest variance modes, noise is least important. From the data, we directly estimate a binned PDF for the largest scale modes. By hypothesis, $D=U/l^\beta+{\cal N}(0,\sigma_{l,\rm noise})$. We reduce the larger scale PDF by convolving it with the difference of noise levels to obtain an initial guess for the smaller scale PDF: \begin{equation} f'_{l'}(d) = \frac{(l'/l)^\beta}{\sqrt{\pi}} \int_{-\infty}^\infty f_l\left[u (l/l')^\beta\right] \exp\left[-\frac{(u-d)^2}{2(\sigma_{l',\rm noise}^2-\sigma_{l,\rm noise}^2) } \right] du. \end{equation} To this we add the actual histogram of wavelet coefficients at the smaller scale. We continue this hierarchy to obtain an increasingly better estimate of the PDF, having used the information from each scale. In figure \ref{fig:alph} we show the optimal weighting function obtained for the examples in section \ref{sec:example}. \begin{figure} \resizebox{\textwidth}{!}{ \includegraphics{palpha.ps} } \caption{The optimal filter function $\alpha$ for the non-Gaussian wavelet model at $\sigma_{\rm noise}=\sigma_{\rm data}$. $U$ is given in units of the total standard deviation, where $\sigma_{\rm abs}^2=\sigma_{\rm noise}^2+\sigma_{\rm data}^2$. At small amplitudes, it is similar to a Wiener filter $\alpha=1/2$, but tends to 1 for large outliers.} \label{fig:alph} \end{figure} On the largest scales, the PDF will be poorly defined because relatively few wavelets lie in that regime. The current implementation performs no filtering, i.e. sets $\alpha=1$ for the largest scales. A potential improvement could be implemented: within the scaling hypothesis, we can deconvolve the noisy $f(D)$ obtained from small scales to estimate the PDF on large scales. 
The errors in the PDF estimation are themselves Poissonian, and in the limit that we have many points per PDF bin, we can treat those as Gaussian. The deconvolution can then be optimally filtered to maximize the use of the large number of small scale wavelets to infer the PDF of large scale wavelets. Of course, the non-Gaussian wavelet analysis could then be recursively applied to estimate the PDF. Instead of the Bayesian prior PDF, we would then specify a prior for the prior. This possibility will be explored in future work. \subsection{Maximizing Non-Gaussianity using Wavelets} \label{subsec:nongauss} Errors are smallest if a large number of coefficients are near zero, and when the modes are close to being statistically independent. Let us consider several extreme cases and their optimal strategies. Imagine that we have an image consisting of true uncorrelated point sources, and each point source only occupies one pixel. Further assume that only a very small fraction $\epsilon$ of possible pixels are occupied, but when a point source is present, it has a constant luminosity $L$. And then add a uniform white noise background with unit variance. In Fourier space, each mode has unit variance from the noise, and variance $L^2 \epsilon$ from the point sources. We easily see that it will be impossible to distinguish signal from noise if $L^2 \epsilon < 1$. In real space, white noise is also uncorrelated, so we are justified to treat each pixel separately. Now we can easily distinguish signal from noise if $L > \sqrt{-\ln(\epsilon)}$. If $L=10$ and $\epsilon=0.001$, we have a situation where the signal is easy to detect in real space and difficult in Fourier space, and in fact the optimal filter (\ref{eqn:bayes}) is optimal in real space where the points are statistically independent. In Fourier space, even though the covariance between modes is zero, modes are not independent. Now consider the more realistic case that objects occupy more than one pixel, but are still localized in space, and only have a small covering fraction. This is the case of intermittent information. The optimal basis will depend on the actual shape of the objects, but it is clear that we want basis functions which are localized. Wavelets are a very general basis to achieve this, which sample objects of any size scale, and are able to effectively excise large empty regions. We expect PDFs to be more strongly non-Gaussian in wavelet space than either real or Fourier space. In this formulation, we obtain not only a filtered image, but also an estimate of the residual noise, and a noise map. For each wavelet coefficient we find its posterior variance using (\ref{eqn:var}). The inverse wavelet transform then constructs a noise variance map on the image grid. \section{Examples} \label{sec:example} In order to be able to compare the performance of the filtering algorithm, we use as example an image to which the noise is added by hand. The de-noised result can then be compared to the 'truth'. We have taken a random image from the Hubble Space telescope, in this case the 100,000th image (PI: C. Steidel). The original picture is shown in figure \ref{plate:org}. The gray scale is from 0 to 255. At the top are two bright stars with the telescope support structure diffraction spikes clearly showing. The extended objects are galaxies. We then add noise with variance 128, which is shown in figure \ref{plate:noise}. The mean signal to noise ratio of the image is 1/4. 
We can tell by eye that a small number of regions still protrude from the noise. The power spectrum of the noisy image is shown in figure \ref{fig:pkwien}. We use the known expectation value of the noise variance. The subtraction of the noise can be performed even when the noise substantially dominates over the signal, as can be seen in the image. In most astronomical applications, noise is instrumentally induced and the distribution of the noise is very well documented. Blank field exposures, for example, often provide an empirical measurement. \begin{figure} \resizebox{\textwidth}{!}{ \includegraphics{100KObs.ps} } \caption{The original image, taken from the Space Telescope web page {\tt www.stsci.edu}. It is the 100,000th image taken with HST for C. Steidel of Caltech.} \label{plate:org} \end{figure} \begin{figure} \resizebox{\textwidth}{!}{ \includegraphics{noisy.ps} } \caption{Figure \protect\ref{plate:org} with substantial noise added.} \label{plate:noise} \end{figure} \begin{figure} \resizebox{\textwidth}{!}{ \includegraphics{out.100kwien.ps} } \caption{Wiener filtered. We see that the linear scaling function substantially reduces the flux in the bright stars.} \label{plate:wiener} \end{figure} We first apply a Wiener filter, with the result shown in figure \ref{plate:wiener}. We notice immediately the key feature: all amplitudes are scaled by the noise, so the bright stars have been down scaled significantly. The noise on the star was less than unity, but each Fourier mode gets contributions from the star as well as the global noise of the image. The situation worsens if the filling factor of the signal regions is small. The mean intensity of the image is stored in the $k=0$ mode, which is not significantly affected by noise. While total flux is approximately conserved, the flux on each of the objects is non-locally scattered over the whole image by the Wiener filter process. \begin{figure} \resizebox{\textwidth}{!}{ \includegraphics{out.100kd12.ps} } \caption{Non-Gaussian wavelet filtered. Several of the features that had been lost in the Wiener filtering process are recovered here.} \label{plate:dwt} \end{figure} The optimal Bayesian wavelet filter is shown in figure \ref{plate:dwt}. A Daubechies 12 wavelet was used, and the prior PDF reconstructed using the scaling assumption described in section \ref{sec:opt}. We see immediately that the amplitudes on the bright objects are much more accurate. We see also that the faint vertical edge-on spiral on the lower right just above the bright elliptical is clearly visible in this image, while it had almost disappeared in the Wiener filter. \begin{figure} \resizebox{\textwidth}{!}{ \includegraphics{error.100kd12.ps} } \caption{Error map. Plotted is the posterior Bayesian variance. We see that some features, for example the small dot in the upper left, have large errors associated with them, and are therefore artefacts.} \label{plate:err} \end{figure} The Bayesian approach allows us to estimate the error in the reconstruction using Equation (\ref{eqn:var}). We show the result in figure \ref{plate:err}. We can immediately see that some features in the reconstructed map, for example the second faint dot above the bright star on the upper left, have large errors associated with them, and are indeed artefacts of reconstruction. Additionally, certain wavelets experience large random errors. These appear as checkered 'wavelet' patterns on both the reconstructed image and the error map. 
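For readers who wish to experiment, the pieces of section \ref{sec:opt} can be assembled from standard tools. The sketch below uses the {\tt PyWavelets} package with a Daubechies wavelet ({\tt db6}, the 12-tap member of the family) and assumes the {\tt bayes\_shrink} helper sketched earlier is in scope; estimating the prior from per-scale histograms, rather than from the pooled, rescaled histograms described in section \ref{sec:opt}, is a simplification made here for brevity.
\begin{verbatim}
# Simplified end-to-end version of the wavelet filter: decompose, shrink each
# detail coefficient with bayes_shrink (sketched earlier), reconstruct, and keep
# the posterior variances of the coefficients as an error estimate.
import numpy as np
import pywt

def wavelet_denoise(image, sigma_noise, wavelet='db6', levels=5):
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    out = [coeffs[0]]                       # coarsest scale left unfiltered (alpha = 1)
    var_coeffs = [np.zeros_like(coeffs[0])]
    for detail in coeffs[1:]:               # loop over detail scales, coarse to fine
        shrunk, var = [], []
        for band in detail:
            d = band / sigma_noise          # rescale so the noise has unit variance
            m, v = bayes_shrink(d.ravel())
            shrunk.append(m.reshape(band.shape) * sigma_noise)
            var.append(v.reshape(band.shape) * sigma_noise**2)
        out.append(tuple(shrunk))
        var_coeffs.append(tuple(var))
    filtered = pywt.waverec2(out, wavelet)[:image.shape[0], :image.shape[1]]
    return filtered, var_coeffs
\end{verbatim}
The filtered image plays the role of the reconstruction in figure \ref{plate:dwt} and the per-coefficient posterior variances that of the error map in figure \ref{plate:err}, up to the simplifications noted above.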
\section{Discussion} Fourier space has the advantage that for translation invariant processes, different modes are pairwise uncorrelated. If modes were truly independent, the optimal filter for each $k$ mode would also be globally optimal. As we have seen from the example in section \ref{sec:opt}.\ref{subsec:nongauss}, processes which are local in real space are not optimally processed in Fourier space, since different Fourier modes are not independent. Wavelet modes are not independent, either. For typical data, the correlations are relatively sparse. In the astronomical images under consideration, the stars and galaxies are relatively uncorrelated with each other. Wavelets with compact support sample a limited region in space, and wavelets which do not overlap on the same objects on the grid will be close to independent. Even for Gaussian random fields, wavelets are close to optimal since they are relatively local in Fourier space. Their overlap in Fourier space leads to residual correlations which are neglected. We see that wavelets are typically close to optimal, even though they are never truly optimal. But in the absence of a full prior, they allow us to work with generic data sets and usually outperform Wiener filtering. In our analysis, we have used Cartesian product Daubechies wavelets. These are preferentially aligned along the grid axes. In the wavelet filtered map (figure \ref{plate:dwt}) we see residuals aligned with the coordinate axes. Recent work by Kingsbury (this proceedings) using complex wavelets would probably alleviate this problem. The complex wavelets have a factor of two redundancy, which is used in part to sample spatial translations and rotational directions more homogeneously and isotropically. \section{Conclusions} We have presented a generalized noise filtering algorithm. Using the Ansatz that the PDF of mode or pixel coefficients is scale invariant, we can use the observed data set to estimate the PDF. By application of Bayes' theorem, we reconstruct the filter map and noise map. The noise map gives us an estimate of the error, which tells us the performance of the particular basis used and the confidence level of each reconstructed feature. Based on comparison with controlled data, we find that the error estimates typically overestimate the true error by about a factor of two. We argued that wavelet bases are advantageous for data with a small duty cycle that is localized in real space. This covers a large class of astronomical images, and images where the salient information is intermittently present. \begin{acknowledgements} I would like to thank Iain Johnstone, David Donoho and Robert Crittenden for helpful discussions. I am most grateful to Bernard Silverman and the Royal Society for organizing this discussion meeting. \end{acknowledgements}
\section{Curves of arithmetic genus $1$ with $n$ marked points and $A_\infty$-structures} \subsection{Extension of some results from \cite{LPol}}\label{results-sec} Below we always assume that $n\ge 2$. Recall that in the previous work \cite{LPol} we studied the moduli stacks $\mathcal{U}_{1,n}^{sns}$ of curves $C$ of arithmetic genus $1$ together with $n$ distinct smooth marked points $p_1,\ldots,p_n$ satisfying \begin{itemize} \item $h^0(\mathcal{O}_C(p_i))=1$ for all $i$, and \item $\mathcal{O}_C(p_1+\ldots+p_n)$ is ample. \end{itemize} We denote by $\widetilde{\mathcal{U}}_{1,n}^{sns}\to \mathcal{U}_{1,n}^{sns}$ the ${\Bbb G}_m$-torsor corresponding to a choice of a nonzero element $\omega\in H^0(C,\omega_C)$. In the case $n\ge 3$ we identified $\widetilde{\mathcal{U}}_{1,n}^{sns}$ with an explicit affine scheme of finite type over ${\Bbb Z}$, while in the case $n=2$ we proved that over ${\Bbb Z}[1/2]$ one has $\widetilde{\mathcal{U}}_{1,2}^{sns}\simeq {\Bbb A}^3$ (see \cite[Thm.\ 1.4.2]{LPol}). More precisely, in the case $n\ge 3$ we showed that the ring $\mathcal{O}(\widetilde{\mathcal{U}}^{sns}_{1,n})$ is generated over ${\Bbb Z}$ by the functions defined as follows. Let $(C,p_1,\ldots,p_n,\omega)$ be the universal family, and let $h_{ij}$, for $i\neq j$, be elements of $H^0(C,\mathcal{O}(p_i+p_j))$ such that $\operatorname{Res}_{p_i}(h_{ij}\omega)=1$. We normalize the elements $h_{1i}$ by the condition $h_{1i}(p_2)=0$ for $i\ge 3$ and $h_{12}(p_3)=0$. Then there is a relation of the form $$h_{12}h_{13}^2-h_{12}^2h_{13}=ah_{12}h_{13}+bh_{12}+ch_{13}+d,$$ and the functions $a,b,c,d$ together with \begin{equation}\label{c-ij-eq} c_{ij}:=h_{1i}(p_j), \end{equation} where $i,j\ge 2$, $i\neq j$, generate the algebra of functions on $\widetilde{\mathcal{U}}^{sns}_{1,n}$ over ${\Bbb Z}$ (see \cite[Sec.\ 1.1]{LPol}). Note that these functions have the following weights with respect to the ${\Bbb G}_m$-action: \begin{equation}\label{U-n-gen-weights-eq} wt(c_{ij})=wt(a)=1, \ wt(b)=wt(c)=2, \ wt(d)=3. \end{equation} In the case $n=2$ we showed (see \cite[Sec.\ 1.2]{LPol}) that $\widetilde{\mathcal{U}}^{sns}_{1,2}\otimes {\Bbb Z}[1/2]$ is isomorphic to the affine $3$-space over ${\Bbb Z}[1/2]$ with the coordinates $\a$, $\b$ and $\gamma$ of weights $2$, $3$ and $4$, so that the affine part $C\setminus\{p_1,p_2\}$ of the universal curve is given by the equation $$y^2-yx^2=\a(y-x^2)+\b x+\gamma,$$ which is simply the unfolding of the tacnode. Furthermore, for each $(C,p_1,\ldots,p_n)$ as above we considered the generator $$G=\mathcal{O}_C\oplus\mathcal{O}_{p_1}\oplus\ldots\oplus \mathcal{O}_{p_n}$$ of the perfect derived category of $C$. The standard dg-enhancement of this category allows to construct a minimal $A_\infty$-algebra, equivalent to $\mathrm{REnd}^*(G)$. The choice of a nonzero element in $H^0(C,\omega_C)$ gives rise to a canonical identification of the underlying associative algebra, $\operatorname{Ext}^*(G,G)$, with the algebra $E_{1,n}\otimes k$, associated with the following quiver $Q=Q_n$ with relations. \begin{figure}[!h] \centering \includegraphics[scale=1]{quiver.pdf} \caption{The quiver $Q_n$} \label{fig0} \end{figure} The path algebra ${\Bbb Z}[Q]$ is generated by $(A_i,B_i)$, where $A_i$ is the arrow from the central vertex to the vertex $i$ and $B_i$ is the arrow in the opposite direction. We define a grading on ${\Bbb Z}[Q]$ by letting $|A_i|=0$ and $|B_i|=1$. 
The ideal of relations $J$ is generated by the relations $$B_iA_i=B_jA_j,\ \ A_iB_j=0 \ \text{ for } i\neq j.$$ We set \begin{equation}\label{E1n-eq} E_{1,n}:={\Bbb Z}[Q]/J. \end{equation} Let $\mathcal{M}_\infty=\mathcal{M}_\infty(E_{1,n})$ be the functor on the category of commutative rings, associating to $R$ the set of gauge equivalence classes of minimal $A_\infty$-structures on $E_{1,n}\otimes R$ (strictly unital with respect to the idempotents $e_i\in E_{1,n}$ corresponding to the vertices) extending the given $m_2$ on $E_{1,n}\otimes R$. The map associating with a point $(C,p_1,\ldots,p_n,\omega)\in \widetilde{\mathcal{U}}_{1,n}^{sns}(R)$ the corresponding $A_\infty$-structure on $E_{1,n}\otimes R$ (defined up to a gauge equivalence), extends to a morphism of functors \begin{equation}\label{curve-to-ainf-map} \widetilde{\mathcal{U}}_{1,n}^{sns}\to \mathcal{M}_\infty. \end{equation} Namely, we can use the homological perturbation construction associated with a dg-model for the subcategory generated by $\mathcal{O}_C\oplus\mathcal{O}_{p_1}\oplus\ldots\oplus\mathcal{O}_{p_n}$, described in \cite[Sec.\ 3]{P-ainf}. This requires a choice of relative formal parameters $t_i$ at $p_i$ (compatible with $\omega$ so that $\operatorname{Res}_{p_i}(\omega/t_i)=1$). However, this affects only the choice of homotopies in the homological perturbation, and hence, a different choice leads to a gauge equivalent $A_\infty$-structure. Furthermore, the morphism \eqref{curve-to-ainf-map} is compatible with the ${\Bbb G}_m$-action, where the action of $\lambda\in{\Bbb G}_m$ on $\mathcal{M}_\infty$ is given by the rescalings $m_n\mapsto \lambda^{2-n}m_n$. We proved in \cite{LPol} that the map \eqref{curve-to-ainf-map} becomes an isomorphism once we change the base to any field (of characteristic $\neq 2$, if $n=2$). Thus, over a field $k$, every minimal $A_\infty$-structure on $E_{1,n}\otimes k$ can be realised, up to gauge equivalence, in the derived category of some curve $(C,p_1,\ldots,p_n)$ as above. The reason we restricted to working over a field in \cite{LPol} was the dependence on the results of \cite{P-ainf} about moduli of $A_\infty$-structures. Using the new work \cite{P-ainf2} on the relative moduli of $A_\infty$-structures we can now extend our results, so that everything works over ${\Bbb Z}$, for $n\ge 3$. For $n=2$ the similar arguments work over ${\Bbb Z}[1/2]$, but we can also use the more ad hoc construction as in \cite{LP} to establish a partial result we need over ${\Bbb Z}$ (see Section \ref{n=2-case-sec} below). \begin{thm}\label{M1n-ainf-thm} Assume that $n\ge 3$. Then the functor $\mathcal{M}_\infty$ is represented by an affine scheme of finite type over ${\Bbb Z}$. Furthermore, the morphism \eqref{curve-to-ainf-map} is an isomorphism. For $n=2$ the same assertions hold over ${\Bbb Z}[1/2]$. \end{thm} \noindent {\it Proof} . Assume first that $n\ge 3$. By \cite[Thm.\ 2.2.6]{P-ainf2}, the representability of our functor by an affine scheme follows from the vanishing \begin{equation}\label{HH-E1n-van-eq} HH^i(E_{1,n}\otimes k)_{<0}=0 \end{equation} for $i\le 1$ and any field $k$. For $i=1$ this is \cite[Eq.\ (2.2.1)]{LPol}, while for $i=0$ this is clear. Furthermore, to check that $\mathcal{M}_\infty$ is a closed subscheme of the scheme $\mathcal{M}_n$ of finite type, parametrizing $A_n$-structures, it is enough to have the vanishing $$HH^2(E_{1,n}\otimes k)_{<-d}=0$$ for some $d$ and all fields $k$. This is indeed the case for $d=3$ by \cite[Cor.\ 2.2.6]{LPol}. 
Recall that the morphism \eqref{curve-to-ainf-map} is also compatible with the ${\Bbb G}_m$-action, where the action on the global functions $\lambda\mapsto (\lambda^{-1})^*$ has positive weights on generators over ${\Bbb Z}$: for $\widetilde{\mathcal{U}}_{1,n}^{sns}$ the weights are given by \eqref{U-n-gen-weights-eq}, while for $\mathcal{M}_\infty$ the weight of the coordinates of $m_n$, for $n>2$, is equal to $n-2$. Thus, the morphism \eqref{curve-to-ainf-map} corresponds to a homomorphism of non-negatively graded, finitely generated algebras over ${\Bbb Z}$ $$f:A\to B,$$ such that $A_0=B_0={\Bbb Z}$, and $f\otimes{\Bbb Q}$ and $f\otimes{\Bbb Z}/p$ are isomorphisms. In addition, we know that each $B_n$ is a free ${\Bbb Z}$-module. Indeed, for $n\ge 5$ this follows from \cite[Cor.\ 1.1.7]{LPol}, while for $n=3$ (resp., $n=4$) this follows from the identification of the moduli space with the affine space ${\Bbb A}^4$ (resp., ${\Bbb A}^5$) given in \cite[Prop.\ 1.1.5]{LPol}. Thus, $C_n=\operatorname{coker}(f_n:A_n\to B_n)$ is a finitely generated abelian group such that $C_n\otimes {\Bbb Q}=0$ and $C_n\otimes{\Bbb Z}/p=0$. Hence, $C_n=0$ and so $f_n$ is surjective. Since $B_n$ is flat over ${\Bbb Z}$, for each $p$ we have an exact sequence $$0\to (\ker f_n)\otimes{\Bbb Z}/p\to A_n\otimes{\Bbb Z}/p\to B_n\otimes{\Bbb Z}/p\to 0,$$ which shows that $\ker f_n\otimes {\Bbb Z}/p=0$. Since $\ker f_n\otimes{\Bbb Q}=0$, we derive that $\ker f_n=0$. The argument in the case $n=2$ is similar. The vanishing of \eqref{HH-E1n-van-eq} in the case when $\operatorname{char}(k)\neq 2$ follows from \cite[Eq.\ (2.1.4), Cor.\ 2.2.2]{LPol}. In the proof of the second assertion we use the identification of $\widetilde{\mathcal{U}}_{1,2}^{sns}\otimes{\Bbb Z}[1/2]$ with ${\Bbb A}^3$, with coordinates of weights $2$, $3$ and $4$. \qed\vspace{3mm} \begin{rem} An alternative way to prove the second assertion of Theorem \ref{M1n-ainf-thm} is to mimic the proof of \cite[Thm.\ B]{P-ainf2} by first showing that the deformations over ${\Bbb Z}$ of the tacnode point of $\widetilde{\mathcal{U}}_{1,2}^{sns}(k)$, for any field $k$ of characteristic $\neq 2$, match the deformations of the $A_\infty$-structures. \end{rem} \subsection{Case $n=2$}\label{n=2-case-sec} In the cases $n=1$ and $n=2$ the moduli stack $\widetilde{\mathcal{U}}_{1,n}^{sns}$ over $\operatorname{Spec}({\Bbb Z})$ is not an affine scheme. The case $n=1$ is considered in detail in \cite{LP}, so here we discuss the case $n=2$. Given a curve $C$ of arithmetic genus $1$ with smooth marked points $p_1$, $p_2$, such that $H^1(C,\mathcal{O}(p_i))=0$, a choice of a nonzero element in $H^0(C,\omega_C)$ is equivalent to a choice of a nonzero tangent vector at $p_1$ (see \cite[Lem.\ 1.1.1]{LPol}). Let $t_1$ be a formal parameter at $p_1$ compatible with this tangent vector. Then there exist elements $x\in H^0(C,\mathcal{O}(p_1+p_2))$ and $y\in H^0(C,\mathcal{O}(2p_1))$ such that $$x\equiv \frac{1}{t_1}+k[[t_1]], \ \ y\equiv \frac{1}{t_1^2}+t_1^{-1}k[[t_1]]$$ at $p_1$, defined uniquely up to adding a constant. It is easy to see that then the elements $$1,x,x^2,y,xy$$ form a basis of $H^0(C,\mathcal{O}(3p_1+2p_2))$. Note that $y^2-yx^2\in H^0(C,\mathcal{O}(3p_1+2p_2))$. Hence, we should have a relation of the form $$y^2=yx^2+axy+by+b'x^2+cx+d.$$ Adding a constant to $y$ we can make the term with $x^2$ disappear, so there is a unique choice of $y$ such that the above relation takes the form \begin{equation}\label{tacnode-unfold-eq} y^2=yx^2+axy+by+cx+d. 
\end{equation} There remains ambiguity in a choice of $x$: changing $x$ to $x-\a$ leads to the transformation \begin{equation}\label{additive-action-eq} (a,b,c,d)\mapsto (a+2\a, b+\a a+\a^2, c, d+\a c). \end{equation} Let us consider the quotient stack ${\Bbb A}^4/{\Bbb G}_a$, where the action of the additive group on ${\Bbb A}^4$ is given by \eqref{additive-action-eq}. Note that the action \eqref{additive-action-eq} is compatible with the ${\Bbb G}_m$-action on ${\Bbb A}^4$ such that $wt(a)=1$, $wt(b)=2$, $wt(c)=3$, $wt(d)=4$ and the standard ${\Bbb G}_m$-action on ${\Bbb G}_a$ (so $wt(\a)=1$). \begin{prop} One has a natural isomorphism $\widetilde{\mathcal{U}}_{1,2}^{sns}\simeq {\Bbb A}^4/{\Bbb G}_a$, compatible with the ${\Bbb G}_m$-actions. \end{prop} \noindent {\it Proof} . Consider the ${\Bbb G}_a$-torsor over $\widetilde{\mathcal{U}}_{1,2}^{sns}$ corresponding to a choice of a function $x\in H^0(C,\mathcal{O}(p_1+p_2))$ such that the polar part of $x$ at $p_1$ is $\frac{1}{t_1}$. Then as we have seen above, we can uniquely find $y\in H^0(C,\mathcal{O}(2p_1))$, such that the defining equation is of the form \eqref{tacnode-unfold-eq}. Conversely, starting from the affine curve $\operatorname{Spec}(A)$ given by such an equation we construct the projective curve $C$ by taking $\operatorname{Proj} \mathcal{R}(A)$, where $\mathcal{R}(A)$ is the Rees algebra associated with the filtration by degree, where $\deg(x)=1$, $\deg(y)=2$. As in \cite[Thm.\ 1.2.4]{P-ainf}, one easily checks that this gives an isomorphism between our ${\Bbb G}_a$-torsor over $\widetilde{\mathcal{U}}_{1,2}^{sns}$ and ${\Bbb A}^4$. \qed\vspace{3mm} \begin{prop}\label{n=2-ainf-surj-prop} The map ${\Bbb A}^4(R)\to \mathcal{M}_\infty(R)$ associating with $(a,b,c,d)\in R^4$ the $A_\infty$-structure coming from the corresponding curve in $\widetilde{\mathcal{U}}_{1,2}^{sns}$, is surjective in the following cases: (i) $R$ is any field; (ii) $R$ is an integral domain with the quotient field of characteristic zero. \end{prop} \noindent {\it Proof} . We mimic the proof of \cite[Thm.\ C]{LP} (see \cite[Sec.\ 5.3]{LP}). Let $W$ denote the tangent space to ${\Bbb A}^4$ at $0$. The derivative of the action \eqref{additive-action-eq} at $0$ gives a map $$d:\operatorname{Lie}({\Bbb G}_a)\to W,$$ which we can easily compute: $d(\operatorname{\partial}_x)=2\operatorname{\partial}_a$. As in \cite[Sec.\ 5.3]{LP}, the map from ${\Bbb A}^4/{\Bbb G}_a$ to the functor of $A_\infty$-structures, induces at the infinitesimal level a chain map $\kappa$ from the dg Lie algebra $[\operatorname{Lie}({\Bbb G}_a)\to W]$ (living in degrees $0$ and $1$) to the shifted Hochschild cochain complex $CH^*(E_{1,2})_{<0}[1]$ (truncated in negative internal degrees). Similarly to \cite[Thm.\ 5.5]{LP} we claim that this map induces an isomorphism of cohomology in degrees $0$ and $1$, when tensored with any field $k$, or over ${\Bbb Z}$. Indeed, first let us check that $$H^1(\kappa\otimes k): \operatorname{coker}(d\otimes k)\to HH^2(E_{1,2}\otimes k)_{<0}$$ is an isomorphism. Note that the source can be viewed as the tangent space to the deformations of the tacnode curve $(C_{tn},p_1,p_2,\omega)$ in $\widetilde{\mathcal{U}}_{1,2}^{sns}$, and the map itself as the tangent map to the morphism of deformation functors associating to a deformation of $(C_{tn},p_1,p_2,\omega)$ the corresponding family of $A_\infty$-structures on $E_{1,2}$. 
Now \cite[Prop.\ 4.3.1]{P-ainf}, applied to families over $k[\epsilon]/(\epsilon^2)$ implies that the map $H^1(\kappa\otimes k)$ is injective. On the other hand, we claim that the source and the target have the same dimension, which is equal to $3$ when $\operatorname{char}(k)\neq 2$ and is equal to $4$, when $\operatorname{char}(k)=2$. Indeed, for the source this is easy to see, while for the target this follows from \cite[Cor.\ 2.2.6]{LPol} in the case $\operatorname{char}(k)\neq 2$. In the case $\operatorname{char}(k)=2$ it suffices to show that $\dim HH^2(E_{1,2}\otimes k)_{<0}\le 4$, which follows from Lemma \ref{tacnode-lem} below. Thus, we conclude that $H^1(\kappa\otimes k)$ is an isomorphism. Now we turn to showing that $$H^0(\kappa\otimes k): \ker(d\otimes k)\to HH^1(E_{1,2}\otimes k)_{<0}$$ is an isomorphism. If $\operatorname{char}(k)\neq 2$ then $\ker(d\otimes k)=0$ and $HH^1(E_{1,2}\otimes k)_{<0}=0$ (see \cite[Cor.\ 2.2.2]{LPol}). The interesting case is when $k$ has characteristic $2$. Then $\ker(d\otimes k)=\langle\operatorname{\partial}_x\rangle$ and $\operatorname{\partial}_x$ maps under $\kappa$ to the nonzero element of the one-dimensional space $$HH^1(E_{1,2})_{<0}\simeq HH^1(C_{tn})_{<0}\simeq H^0(C_{tn},\mathcal{T})_{<0}$$ corresponding to the global vector field $\operatorname{\partial}_x$ of weight $-1$ on the tacnode $C_{tn}$ (see \cite[Prop.\ 2.1.3, Lem.\ 1.5.2]{LPol} and the proof of \cite[Cor.\ 2.2.2]{LPol}). The rest of the proof goes as in \cite[Sec.\ 5.3]{LP}. \qed\vspace{3mm} We have used the following result. \begin{lem}\label{tacnode-lem} Let $k$ be a field of characteristic $2$. Then $\dim HH^2(E_{1,2}\otimes k)=4$. \end{lem} \noindent {\it Proof} . Let $C=C_{tn}$ be the (projective) tacnode curve over $k$, equipped with a pair of smooth points $p_1,p_2$ on each component. Then we have an isomorphism $HH^*(E_{1,2}\otimes k)_*\simeq HH^*(C)$, so that the second grading is induced by the ${\Bbb G}_m$-action on $C$. Indeed, this follows from the homotopical triviality of the $A_\infty$-structure on $E_{1,2}\otimes k$ associated with $C$ (see \cite[Lem. 2.1.2]{LPol} and \cite[Prop.\ 4.4.1]{P-ainf}). We have an exact sequence $$0\to H^1(C,\mathcal{T})\to HH^2(C)\to HH^2(U)\to 0,$$ where $U=C\setminus\{p_1,p_2\}$ (see \cite[Sec.\ 4.1.3]{LP}). We claim that $H^1(C,\mathcal{T})=0$. Indeed, let $V=C\setminus q$, where $q$ is the singular point. Then $(U,V)$ is an affine covering of $C$. Let $x_1,x_2$ be the natural coordinates on the components of $U$ (both vanishing at $q$). Derivations of $\mathcal{O}(U\cap V)$ are just pairs $(P_1(x_1,x_1^{-1})\operatorname{\partial}_{x_1},P_2(x_2,x_2^{-1})\operatorname{\partial}_{x_2})$. Derivations of $\mathcal{O}(U)$ are those pairs, for which there exist constants $a,b$ such that $P_i\equiv a+bx_i\mod x_i^2k[x_1]$, for $i=1,2$ (see \cite[Lem.\ 1.5.2]{LPol}). On the other hand, when $P_i$ are linear combinations of $x_i^n$ for $n\le 2$ then the corresponding derivation extends to $V$. This immediately implies that every derivation of $\mathcal{O}(U\cap V)$ is a sum of those extending either to $U$ or to $V$, hence, $H^1(C,\mathcal{T})=0$. Finally, $U$ is an affine plane curve $k[x,y]/(y^2-yx^2)$, so $HH^2(U)$ is given by the corresponding Tjurina algebra $k[x,y]/(x^2,y^2)$, which is $4$-dimensional. 
\qed\vspace{3mm} \subsection{$A_\infty$-characterization of the wheel of projective lines}\label{wheel-char-sec} For a commutative ring $R$ we consider $$G_{n,R}:=G_n\times \operatorname{Spec}(R),$$ the {\it standard $n$-gon over} $R$. Note that it has natural smooth $R$-points $p_1,\ldots,p_n$ (corresponding to the point $1\in \P^1$ on each component, where the points $0$ and $\infty$ are used for gluing). Furthermore, there is a natural choice of a section $\omega$ of the dualizing sheaf of $G_{n,R}$ over $R$ (see \cite[Ex.\ 1.1.9]{LPol}), so we can view $(G_{n,R},p_1,\ldots,p_n,\omega)$ as a family in $\widetilde{\mathcal{U}}_{1,n}^{sns}(R)$. By abuse of notation we will sometimes refer to this family simply as $G_{n,R}$. Now let $k$ be a field. We are going to give several characterizations of the equivalence class of minimal $A_\infty$-structures on $E_{1,n}\otimes k$ associated with $G_{n,k}$ (via the morphism \eqref{curve-to-ainf-map}). Note that for every subset $S\subset\{1,\ldots,n\}$ we have a natural subquiver in $Q_n$ such that the corresponding subalgebra is isomorphic to $E_{1,|S|}$. In particular, we have $n$ subquivers $Q_1(i)\subset Q_n$ (where $i=1,\ldots,n$) that give embeddings of $E_{1,1}$ into $E_{1,n}$. Now given a minimal $A_\infty$-structure $m_\bullet$ on $E_{1,n}$, for each $i$ we have a well defined restriction $m_\bullet|_{Q_1(i)}$ which is a minimal $A_\infty$-structure on $E_{1,1}$ (recall that we consider $A_\infty$-structures unital with respect to the idempotents in $E_{1,n}$). On the other hand, every such $m_\bullet$ gives a structure of right $A_\infty$-module on $P_i=e_iE_{1,n}$, $i=0,\ldots,n$ (where $e_0,\ldots,e_n$ are the idempotents in $E_{1,n}$ corresponding to the vertices in $Q_n$). \begin{thm}\label{B-wheel-char-thm} Let $k$ be a field, and let $m^{wh}_\bullet$ be the minimal $A_\infty$-structure on $E_{1,n}\otimes k$ associated with $G_{n,k}$, where $n\ge 2$. Then $m^{wh}_\bullet$ is characterized uniquely (among the $A_\infty$-structures we consider in Theorem \ref{M1n-ainf-thm}) up to gauge equivalence and up to ${\Bbb G}_m$-action, by the following conditions (i) and either (ii) or (ii'): \noindent (i) for every $i=1,\ldots,n$, the restriction $m^{wh}_\bullet|_{Q_1(i)}$ is not homotopically trivial; \noindent (ii) $\dim HH^2(E_{1,n},m^{wh}_\bullet)=n$; \noindent (ii') the subcategories $\langle P_0,P_i\rangle$ split-generated by the right $A_\infty$-modules $P_0$ and $P_i$ (where the $A_\infty$-structure comes from $m^{wh}_\bullet$) are all distinct for $i=1,\ldots,n$. Furthermore, for all minimal $A_\infty$-structures $m_\bullet$ on $E_{1,n}$ satisfying (i), one has $$\dim HH^2(E_{1,n},m_\bullet)\le n.$$ \end{thm} \begin{lem}\label{HH-wheel-lem} One has $\dim HH^2(G_{n,k})=n$. \end{lem} \noindent {\it Proof} . Let us write $G_n$ instead of $G_{n,k}$ for brevity. We have an isomorphism $$HH^2(G_n)\simeq \operatorname{Ext}^1({\bf L}_{G_n/k},\mathcal{O}_{G_n})\simeq \operatorname{Ext}^1(\Omega_{G_n /k},\mathcal{O}_{G_n}),$$ where ${\bf L}_{G_n /k}$ is the cotangent complex, which in this case is isomorphic to $\Omega_{G_n /k}$ (since $G_n$ is a locally complete intersection). Let us pick $n$ smooth points $p_1,\ldots,p_n\in G_n$, one on each component, and let $D=p_1+\ldots+p_n$. 
Then the exact sequence $$0\to \mathcal{O}_{G_n}(-D)\to \mathcal{O}_{G_n} \to \mathcal{O}_D\to 0$$ induces a long exact sequence \begin{align*} &0\to\operatorname{Hom}(\Omega_{G_n /k},\mathcal{O}_{G_n}(-D))\to\operatorname{Hom}(\Omega_{G_n /k},\mathcal{O}_{G_n})\to \operatorname{Hom}(\Omega_{G_n /k},\mathcal{O}_D)\to \\ &\operatorname{Ext}^1(\Omega_{G_n /k},\mathcal{O}_{G_n}(-D))\to \operatorname{Ext}^1(\Omega_{G_n/k},\mathcal{O}_{G_n})\to \operatorname{Ext}^1(\Omega_{G_n/k},\mathcal{O}_D)\to\ldots \end{align*} Since $\Omega_{G_n/k}$ is locally free near $D$, we have $\operatorname{Ext}^1(\Omega_{G_n/k},\mathcal{O}_D)=0$. On the other hand, we have $$\operatorname{Hom}(\Omega_{G_n/k},\mathcal{O}_{G_n}(-D))=H^0(G_n,\mathcal{T}(-D))=0,$$ while $H^0(G_n,\mathcal{T})$ is $n$-dimensional. Hence the restriction map $$H^0(G_n,\mathcal{T})\to H^0(G_n,\mathcal{T}|_D)$$ is an isomorphism, so from the above long exact sequence we obtain an isomorphism $$\operatorname{Ext}^1(\Omega_{G_n/k},\mathcal{O}_{G_n}(-D))\simeq\operatorname{Ext}^1(\Omega_{G_n/k},\mathcal{O}_{G_n}).$$ But the space $\operatorname{Ext}^1(\Omega_{G_n/k},\mathcal{O}_{G_n}(-D))$ is the tangent space to $\overline{\mathcal{M}}_{1,n}$ at the point $(G_n,p_\bullet)$, so it is $n$-dimensional (see \cite{DM}). \qed\vspace{3mm} \noindent {\it Proof of Theorem \ref{B-wheel-char-thm}.} By Theorem \ref{M1n-ainf-thm} (resp., Proposition \ref{n=2-ainf-surj-prop} for $n=2$), any minimal $A_\infty$-structure on $E_{1,n}$ comes from a curve $(C,p_1,\ldots,p_n,\omega)$ defining a point of $\widetilde{\mathcal{U}}^{sns}_{1,n}(k)$. Recall that over an algebraically closed field any such pointed curve coincides with its minimal subcurve of arithmetic genus $1$ without disconnecting nodes (see the proof of \cite[Thm.\ 1.5.7]{LPol}). Hence, by \cite[Lem.\ 3.3]{Smyth-I}, $$(\overline{C},\overline{p}_1,\ldots,\overline{p}_n):=(C,p_1,\ldots,p_n)\times_k \overline{k}$$ (where $\overline{k}$ is an algebraic closure of $k$) is either an elliptic $m$-fold curve with $m\le n$, or the standard $m$-gon $G_{m,\overline{k}}$ with $m\le n$ (the case $m=1$ being the irreducible nodal curve), or a smooth elliptic curve. Recall that by definition of $\widetilde{\mathcal{U}}^{sns}_{1,n}$ there is at least one marked point on each irreducible component of $\overline{C}$. Assume first that $\overline{C}$ is an elliptic $m$-fold curve. Without loss of generality we can assume that the points $\overline{p}_1,\ldots,\overline{p}_m$ all lie on different irreducible components of $\overline{C}$. Then the group $\operatorname{Aut}(\overline{C},\overline{p}_1,\ldots,\overline{p}_m)$ is isomorphic to $\overline{k}^*$, hence, by the Hilbert Theorem 90, we get that $(C,p_1,\ldots,p_m)$ is isomorphic to the elliptic $m$-fold curve over $k$. Now we claim that restricting $m_\bullet$ to the subquiver $Q_m\subset Q_n$ corresponding to the points $p_1,\ldots,p_m$, we obtain a homotopically trivial $A_\infty$-structure. Indeed, this follows from the fact that the operation of restricting to $Q_m$ corresponds to the natural morphism $\widetilde{\mathcal{U}}^{sns}_{1,n}\to \widetilde{\mathcal{U}}^{sns}_{1,m}$ forgetting the last $n-m$ marked points (see \cite[Rem.\ 2.2.10]{LPol}), together with the fact that the elliptic $m$-fold curve in $\widetilde{\mathcal{U}}^{sns}_{1,m}$ corresponds to the trivial $A_\infty$-structure. Hence, for any $A_\infty$-structure satisfying (i), the curve $\overline{C}$ is either smooth or isomorphic to $G_{m,\overline{k}}$ with $m\le n$. 
Note that in the former case $C$ itself is smooth, while in the latter case, using the fact that the standard $m$-gon, equipped with one marked point on each component, has no automorphisms, we see that $C$ itself is isomorphic to $G_{m,k}$ over $k$. In both cases applying forgetful morphisms to $\widetilde{\mathcal{U}}^{sns}_{1,1}$ we get a curve which is either smooth or nodal, hence condition (i) is satisfied. Since for smooth $C$ we have $\dim HH^2(C)=1$, the characterization of $m^{wh}_\bullet$ using conditions (i) and (ii) now follows from Lemma \ref{HH-wheel-lem}. Note that if $C$ is irreducible then for any pair of points $p_1,p_2\in C$ we have $$\operatorname{Perf}(C)=\langle \mathcal{O}_C,\mathcal{O}_{p_1}\rangle=\langle \mathcal{O}_C,\mathcal{O}_{p_2}\rangle$$ (see the proof of \cite[Prop.\ 4.3.1]{P-ainf} or Corollary \ref{Perf-C-gen-cor} below), so the condition (ii') does not hold in this case. Thus, for the characterization using conditions (i) and (ii') we need to show that for a standard $m$-gon $C$ (with $m\ge 2$) and a pair of smooth points $p_1,p_2\in C$, the subcategories $\mathcal{C}_1=\langle \mathcal{O}_C,\mathcal{O}_{p_1}\rangle$ and $\mathcal{C}_2=\langle \mathcal{O}_C,\mathcal{O}_{p_2}\rangle$ are the same if and only if $p_1$ and $p_2$ lie on the same irreducible component of $C$. Assume first that $p_1\in C_1$, $p_2\in C_2$, where $C_1$ and $C_2$ are different components of $C$. Consider the (derived) restriction functor $i_1^*:\operatorname{Perf}(C)\to D^b(C_1)$. Then $i_1^*\mathcal{C}_2\subset \langle \mathcal{O}_{C_1}\rangle$ which does not contain $i_1^*\mathcal{O}_{p_1}\simeq\mathcal{O}_{p_1}$, so $\mathcal{C}_1\neq\mathcal{C}_2$. Now assume that $p_1$ and $p_2$ lie on the same component $C_1$. There exists a morphism $f:C\to \overline{C}$, where $\overline{C}=G_1$ is the irreducible rational curve with one node, contracting all components different from $C_1$, and such that $f|_{C_1}:C_1\to\overline{C}$ is the normalization map. Let $x_i=f(p_i)\in \overline{C}$, $i=1,2$. Then $\mathcal{O}_{p_i}\simeq f^*\mathcal{O}_{x_i}$ for $i=1,2$. Furthermore, we have $$\operatorname{Perf}(\overline{C})=\langle \mathcal{O}_{\overline{C}}, \mathcal{O}_{x_1}\rangle=\langle \mathcal{O}_{\overline{C}}, \mathcal{O}_{x_2}\rangle.$$ Since $\operatorname{Perf}(\overline{C})$ is idempotent-complete, it is enough to check that the functor $f^*:\operatorname{Perf}(\overline{C})\to \operatorname{Perf}(C)$ is fully faithful. By the projection formula, this would follow from the equality $Rf_*\mathcal{O}_{C}\simeq \mathcal{O}_{\overline{C}}$. Let $q\in\overline{C}$ be the node. Then $f$ is an isomorphism over the complement to $q$, so $R^1f_*\mathcal{O}_C$ is supported at $q$. It is also easy to see that $R^0f_*\mathcal{O}_C\simeq \mathcal{O}_{\overline{C}}$. Hence, from the Leray spectral sequence we deduce that $$H^1(C,\mathcal{O})\simeq H^1(\overline{C},\mathcal{O})\oplus H^0(\overline{C}, R^1f_*\mathcal{O}_C).$$ Since both $H^1(C,\mathcal{O})$ and $H^1(\overline{C},\mathcal{O})$ are $1$-dimensional this implies the vanishing of $R^1f_*\mathcal{O}_C$. \qed\vspace{3mm} \section{The $n$-Tate curve} \subsection{Construction} The $n$-Tate curve we are going to construct will be a family of curves over ${\Bbb Z}[[t_1,t_2,\ldots,t_n]]$, generically smooth and with the standard $n$-gon $G_n$ as the specialization at $t_1=\ldots=t_n=0$. 
This is a natural generalization of the construction of the Tate curves in \cite{DR} (obtained as the specialization $t_1=\ldots=t_n$ from our construction). In particular, for $n=1$ we get the standard Tate curve. The construction goes through the same steps as in the case of the usual Tate curve: we first construct a formal scheme over ${\Bbb Z}[[t_1,t_2,\ldots,t_n]]$ as the quotient of the formal completion of certain toric scheme $\mathcal{T}_{\infty,n}$ by the action of ${\Bbb Z}$, and then apply Grothendieck's existence theorem. We start by describing the scheme $\mathcal{T}_{\infty,n}$ in terms of gluing open affine pieces $U_i$ numbered by $i\in{\Bbb Z}$. Consider the periodic set of independent variables $t_i$, $i\in{\Bbb Z}$, where $t_{i+n}=t_i$, and set $${\Bbb Z}[t]_n={\Bbb Z}[\ldots,t_i,t_{i+1},\ldots].$$ Then we define open affine pieces by $$U_i=\operatorname{Spec} {\Bbb Z}[t]_n[X_i,Y_{i+1}]/(X_iY_{i+1}-t_i).$$ The intersections $V_i=U_{i-1}\cap U_i$ correspond to setting $X_iY_i=1$, so $V_i$ is the distinguished open $X_i\neq 0$ in $U_i$ (resp., $Y_i\neq 0$ in $U_{i-1}$). Thus, $$V_i=\operatorname{Spec} {\Bbb Z}[t]_n[X_i,X_i^{-1}].$$ Let $\mathcal{T}_{\infty,n}$ be the scheme over ${\Bbb Z}[t]_n$ obtained by gluing these open pieces (note that $U_i$ and $U_j$ will intersect not only for $|i-j|=1$). Note that the central fiber $\mathcal{T}_0$ (obtained by setting $t_1=\ldots=t_n=0$) is just the infinite chain of projective lines. On the other hand, since $U_i$ are just affine spaces over ${\Bbb Z}$, we see that $\mathcal{T}_{\infty,n}$ is an integral scheme. Note that we have the following relations between rational functions on $\mathcal{T}_{\infty,n}$: \begin{equation}\label{XYt-rel} X_{i-1}=X_it_{i-1}, \ \ Y_{i+1}=Y_it_i, \text{ for } i\in{\Bbb Z}. \end{equation} In particular, all $X_i$ with $i\le i_0$ and all $Y_i$ with $i>i_0$ are regular functions on $U_{i_0}$. We have a natural action of ${\Bbb Z}$ on $\mathcal{T}_{\infty,n}$, so that the automorphism $\tau$ corresponding to $1\in{\Bbb Z}$ maps $U_{i-1}$ to $U_{i}$, and $$\tau^*t_j=t_{j-1}, \ \ \tau^*X_i=X_{i-1}, \ \ \tau^*Y_i=Y_{i-1}.$$ Next, we take the formal neighborhood of $t_1=\ldots=t_n=0$ in $\mathcal{T}_{\infty,n}$, and take the quotient by the action of the subgroup $n{\Bbb Z}\subset {\Bbb Z}$. Note that this subgroup acts trivially on ${\Bbb Z}[t]_n$, so we get a formal curve $\hat{\mathcal{T}}_n$ over ${\Bbb Z}[[t]]_n={\Bbb Z}[[t_1,\ldots,t_n]]$. Note that for each $i\in{\Bbb Z}$ we have a section of the central fiber \begin{equation}\label{sigma-i-eq} \sigma_i:\operatorname{Spec}({\Bbb Z})\to \mathcal{T}_0\cap V_i \end{equation} given by $X_i=-1$. \subsection{Toric point of view} As in the case $n=1$, we can view $\mathcal{T}_{\infty,n}$ as a toric ${\Bbb Z}$-scheme of infinite type associated with an infinite fan. Namely, we observe that there is an open embedding of a torus $T$ (over ${\Bbb Z}$) into $\mathcal{T}_{\infty,n}$ given by $$T=\operatorname{Spec}({\Bbb Z}[t_1,t_1^{-1},\ldots,t_n,t_n^{-1},X_0,X_0^{-1}])\subset V_0.$$ We have $$X_i|_T=\begin{cases}\frac{X_0}{t_0t_1\ldots t_{i-1}}, & i>0, \\ t_it_{i+1}\ldots t_{-1}X_0, &i<0,\end{cases}$$ (and $Y_i=X_i^{-1}$ on $T$). Let us consider the $(n+1)$-dimensional real vector space $N_{\Bbb R}$ with the coordinates $(x,y_1,\ldots,y_n)$. We identify indices of coordinates $y_i$ with ${\Bbb Z}/n$. 
We define simplicial cones $C_i\subset N_{\Bbb R}$ for $i\in{\Bbb Z}$ by $$C_i=\begin{cases} \{(x,y_1,\ldots,y_n)\in{\Bbb R}\times {\Bbb R}_{\ge 0}^n \ |\ y_0+y_1+\ldots+y_{i-1}\le x\le y_0+y_1+\ldots+y_i\}, & i>0,\\ \{(x,y_1,\ldots,y_n)\in{\Bbb R}\times {\Bbb R}_{\ge 0}^n \ |\ 0\le x\le y_0\}, & i=0,\\ \{(x,y_1,\ldots,y_n)\in{\Bbb R}\times {\Bbb R}_{\ge 0}^n \ |\ -y_{-1}\le x\le 0\}, & i=-1,\\ \{(x,y_1,\ldots,y_n)\in{\Bbb R}\times {\Bbb R}_{\ge 0}^n \ |\ -y_i-\ldots-y_{-1}\le x\le -y_{i+1}-\ldots-y_{-1}\}, & i<-1. \end{cases} $$ Let $M_{\Bbb R}$ be the dual vector space to $N_{\Bbb R}$ with the basis $(f,e_1,\ldots,e_n)$ dual to the coordinates $(x,y_1,\ldots,y_n)$, and let $M\subset M_{\Bbb R}$ be the ${\Bbb Z}$-lattice generated by the basis vectors. We identify elements of $M$ with characters of the torus $T$ by $t_i=z^{e_i}$, $X_0=z^f$. Then $$X_i=\begin{cases} z^{f-e_0-\ldots-e_{i-1}}, & i>0, \\ z^f, & i=0, \\ z^{f+e_i+e_{i+1}+\ldots+e_{-1}}, & i<0, \end{cases}$$ and we get $$U_i=\operatorname{Spec}({\Bbb Z}[C_i^\vee\cap M]),$$ Thus, we can identify $\mathcal{T}_{\infty,n}$ with the toric scheme associated with the fan generated by the cones $C_i$. The automorphism $\tau$ of $\mathcal{T}_{\infty,n}$ corresponds to the action of the linear automorphism \begin{equation}\label{tau-N-eq} \tau_N:N_{\Bbb R}\to N_{\Bbb R}:(x,y_0,\ldots,y_{n-1})\mapsto (x+y_{n-1},y_{n-1},y_0,\ldots,y_{n-2}). \end{equation} which preserves the lattice $N$ and sends $C_i$ to $C_{i+1}$. \subsection{Polarization} Let us define the line bundle $L$ over $\mathcal{T}_{\infty,n}$ by setting $L|_{U_i}=\mathcal{O}_{U_i}z_i$ (where $z_i$ is a formal symbol) and defining the transitions on $V_i=U_{i-1}\cap U_i$ by the rule \begin{equation}\label{X-z-rel} z_{i-1}=X_iz_i. \end{equation} We can realize $L$ as a subsheaf in the sheaf of rational functions $\mathcal{K}_{\mathcal{T}_{\infty,n}}$ by identifying $z_i$ with the character $z^{w_i}$ of the torus $T$, where $(w_i\in M)_{i\in{\Bbb Z}}$ is the unique collection of lattice points such that $z^{w_{i-1}}=X_iz^{w_i}$ and $w_0=0$. More explicitly, we have $$w_i=\begin{cases} -if+ie_0+(i-1)e_1+\ldots+e_{i-1}, & i>0,\\ 0, & i=0,\\ f, & i=-1, \\ -if+e_{i+1}+2e_{i+2}+\ldots+(-i-1)e_{-1}, & i<-1.\end{cases} $$ We need to lift the ${\Bbb Z}$-action on $\mathcal{T}_{\infty,n}$ to $L$. For this let us consider the affine transformation $\tau$ of $M_{\Bbb R}={\Bbb R}\times{\Bbb R}^n$ given by \begin{equation}\label{tau-eq} \tau(v)=\tau_M(v)+f, \end{equation} where $\tau_M:M_{\Bbb R}\to M_{\Bbb R}$ is the linear transformation dual to $\tau_N$ (see \eqref{tau-N-eq}), so that $$\tau_M(e_i)=e_{i-1}, \ \tau_M(f)=f+e_{-1}.$$ Then $\tau$ preserves the lattice $M$ and satisfies $\tau(w_i)=w_{i-1}$, hence, it gives the required lifting of the ${\Bbb Z}$-action to $L$. Let $C_i^\vee\subset M_{\Bbb R}$ be the cone dual to $C_i$, and let $$\Delta=\cap_{i\in{\Bbb Z}}(w_i+C_i^\vee).$$ Note that $\tau_M(C_i^\vee)=C_{i-1}^\vee$, so $\tau(w_i+C_i^\vee)=w_{i-1}+C_{i-1}^\vee$, hence, $$\tau(\Delta)=\Delta.$$ Let us consider the piecewise linear function $\phi:{\Bbb R}\to{\Bbb R}_{\ge 0}$ given by \begin{equation}\label{phi-def} \phi(t)=tk-\frac{k(k+1)}{2} \ \text{ for} \ k\le t\le k+1, \ k\in{\Bbb Z}. \end{equation} Note that this function has the property $\phi(\frac{1}{m}{\Bbb Z})\subset \frac{1}{m}{\Bbb Z}$, which will be useful for the theory of theta functions (see Section \ref{mult-theta-sec}). It also satisfies the quasi-periodicity \begin{equation}\label{phi-t+1-eq} \phi(t+1)=\phi(t)+t. 
\end{equation} \begin{lem}\label{wi-coef-lem} We have the following formula for $w_i$: $$w_i=\sum_{j=0}^{n-1} n\phi(\frac{j-i}{n})e_j - if.$$ \end{lem} \noindent {\it Proof} . Let $w'_i$ denote the right-hand side of this formula. Since $w'_0=w_0=0$, it is enough to check that $\tau(w'_i)=w'_{i-1}$. We have \begin{align*} &\tau(\sum_{j=0}^{n-1} n\phi(\frac{j-i}{n})e_j - if)= (n\phi(\frac{-i}{n})-i)e_{n-1}+ \sum_{j=1}^{n-1} n\phi(\frac{j-i}{n})e_{j-1} - (i-1)f=\\ &(n\phi(\frac{-i}{n})-i)e_{n-1}+ \sum_{j=0}^{n-2} n\phi(\frac{j-i+1}{n})e_j - (i-1)f, \end{align*} so the statement reduces to $$n\phi(\frac{-i}{n})-i=n\phi(\frac{n-i}{n}),$$ which follows from \eqref{phi-t+1-eq}. \qed\vspace{3mm} \begin{lem}\label{phi-lem} (i) Let $(x^*,y^*_0,\ldots,y^*_{n-1})$ be the coordinates on $M_{\Bbb R}$. Then the cone $C_i^\vee\subset M_{\Bbb R}$ is described by the inequalities \begin{equation} y^*_j+(\lfloor \frac{i-j}{n}\rfloor+1)x^*\ge 0, \ \ y^*_j+(\lfloor \frac{i-j-1}{n}\rfloor+1)x^*\ge 0, \ \ j=0,\ldots,n-1. \end{equation} \noindent (ii) The set $\Delta\subset M_{\Bbb R}$ is described by the inequalities $$y^*_j\ge n\phi(\frac{x^*+j}{n}), \ j=0,\ldots, n-1.$$ \end{lem} \noindent {\it Proof} . (i) It suffices to check that the cone $C_i$ is generated by the vectors $$e_j^*+(\lfloor \frac{i-j}{n}\rfloor+1)f^*, \ \ e_j^*+ (\lfloor \frac{i-j-1}{n}\rfloor+1)f^*, \ \ j=0,\ldots,n-1,$$ where $(f^*,e_0^*,\ldots,e_{n-1}^*)$ is the standard basis in $N_{\Bbb R}$. In the case $i=0$ these are the vectors $e_0^*+f^*,e_0^*,\ldots,e_{n-1}^*$, so the assertion is clear. The general case follows easily using the automorphism $\tau_N$ that sends $C_i$ to $C_{i+1}$. \noindent (ii) Let us unravel the conditions $v-w_i\in C_i^\vee$, $i\in{\Bbb Z}$. Using part (i) and Lemma \ref{wi-coef-lem} we get the following inequalities on the coordinates $(x^*,y^*_0,\ldots,y^*_{n-1})$ of $v\in M_{\Bbb R}$: $$y^*_j-n\phi(\frac{j-i}{n})+(\lfloor \frac{i-j}{n}\rfloor+1)(x^*+i)\ge 0, \ \ y^*_j-n\phi(\frac{j-i}{n})+(\lfloor \frac{i-j-1}{n}\rfloor+1)(x^*+i)\ge 0, $$ for $j=0,\ldots,n-1$. At this point it is convenient to introduce the function $$\psi(q,t)=qt-\frac{q(q+1)}{2},$$ so that \begin{equation}\label{max-phi-eq} \phi(t)=\max_{q\in{\Bbb Z}}\psi(q,t)=\psi(\lfloor t\rfloor,t). \end{equation} Using the identities $$n\psi(q,\frac{t+j}{n})=n\psi(q,\frac{j-i}{n})+q(t+i) \ \ \ \text{ and}$$ $$n\phi(q)+(q-1)(t+j-nq)=n\psi(q-1,\frac{t+j}{n}), \ \ q\in{\Bbb Z},$$ we can rewrite the above collection of inequalities as $$y^*_j\ge n\psi(q,\frac{x^*+j}{n})$$ for $j=0,\ldots,n-1$, $q\in{\Bbb Z}$. It remains to apply \eqref{max-phi-eq} again. \qed\vspace{3mm} \begin{lem} (i) For each $i$ the section $z_i\in H^0(U_i,L)$ extends uniquely to a global section of $L$. \noindent (ii) The sections $(z_i)$ form a basis of $H^0(\mathcal{T}_{\infty,n},L)$ as a ${\Bbb Z}[t]_n$-module. \end{lem} \noindent {\it Proof} . (i) This is equivalent to showing that $w_i\in M\cap\Delta$. This is obvious for $w_0=0$. The general case follows by the ${\Bbb Z}$-action. \noindent (ii) The group $H^0(\mathcal{T}_{\infty,n},L)$ has a ${\Bbb Z}$-basis given by the elements $z^u$, where $u\in M\cap\Delta$. Note that the multiplication by $t_i$ corresponds to adding $e_i$ to $u$. In particular, the group $H^0(\mathcal{T}_{\infty,n},L)$ decomposes into a direct sum of subgroups spanned by $z^u$, where $u$ has a fixed $f$-component.
Thus, it suffices to check that for any $u=-if+m_1e_1+\ldots+m_ne_n\in M\cap\Delta$ one has $u-w_i\in {\Bbb Z}_{\ge 0}e_1+\ldots+{\Bbb Z}_{\ge 0}e_n$ (note that $w_i\in M\cap\Delta$ by part (i)). But this follows from the inclusion $C_i^\vee\subset {\Bbb R}\times {\Bbb R}_{\ge 0}^n$, which in turn follows from the fact that $C_i$ maps surjectively onto ${\Bbb R}_{\ge 0}^n$ under the projection $(x,y_1,\ldots,y_n)\mapsto (y_1,\ldots,y_n)$. \qed\vspace{3mm} \subsection{Theta functions} Let us set $$\theta=\sum_{i\in{\Bbb Z}} z_i.$$ Note that $\theta$ is invariant with respect to the ${\Bbb Z}$-action. Using the relation \eqref{X-z-rel} and the equivalent relation $z_{i+1}=Y_{i+1}z_i$ we can rewrite this series as $$\theta=z_0\cdot \bigl(1+[X_0+X_0X_{-1}+X_0X_{-1}X_{-2}+\ldots] + [Y_1+Y_1Y_2+Y_1Y_2Y_3+\ldots]\bigr). $$ Recall that $z_0$ gives a trivialization of $L|_{U_0}$ and the functions $(X_i)_{i\le 0}$ and $(Y_i)_{i\ge 1}$ are regular on $U_0$. Furthermore, the relations \eqref{XYt-rel} show that the infinite sum defining $\th$ becomes finite on any finite order thickening of the central fiber in $U_0$. By ${\Bbb Z}$-invariance this is true over all of $\mathcal{T}_{\infty,n}$, and so $\th$ defines a global section of $L$ over the formal neighborhood $\hat{\mathcal{T}}=\hat{\mathcal{T}}_{\infty,n}$ of the central fiber $\mathcal{T}_0$. \begin{lem} The sections $\sigma_i$ of the central fiber (see \eqref{sigma-i-eq}) extend to sections $\sigma^\th_i:\operatorname{Spf}({\Bbb Z}[[t]]_n)\to \hat{\mathcal{T}}$ given by $$-X_i=1+(\tau^*)^{-i}s(t)$$ on $\hat{\mathcal{T}}\cap V_i$ for some formal series $s(t)\in{\Bbb Z}[[t]]_n$ with no constant term, such that $\th$ defines an isomorphism \begin{equation}\label{L-si-th-eq} \mathcal{O}_{\hat{\mathcal{T}}}(\sum_{i\in {\Bbb Z}}\sigma^\th_i)\simeq L|_{\hat{\mathcal{T}}}. \end{equation} \end{lem} \noindent {\it Proof} . Restricting to the open subset $V_0$ we can use the trivialization of $L$ given by $z_0$ and view $\th$ as a function on the formal neighborhood of the central fiber: $$\th=\sum_{i\in{\Bbb Z}} z^{w_i}=\sum_{i\in{\Bbb Z}} z^{e_i(0)}X_0^{-i}= 1-u-t_0u^{-1}+t_{-1}u^2+t_0^2t_1u^{-2}-t_{-2}t_{-1}^2u^3+\ldots,$$ where $u=-X_0$. There is a unique formal series $s(t_1,\ldots,t_n)$ with no constant term such that $u=1+s$ is a zero of the above function. Namely, if we write $s=s_1+s_2+\ldots$, where $s_d$ is homogeneous of degree $d$, then the equation $$\sum_{i\in{\Bbb Z}} (-1)^i z^{e_{-i}(0)}(1+s)^i=-s-t_0(1+s)^{-1}+t_{-1}(1+s)^2+\ldots=0$$ gives the recursive formulas for $s_d$. E.g., we get $s_1=t_{-1}-t_0$, $s_2=(t_0+2t_{-1})(t_{-1}-t_0)$, etc. The fact that the $\sigma^\th_i$ are the only zeros of $\th$, and that these zeros are simple, follows easily by computing the restriction to the central fiber: $$\th|_{\mathcal{T}_0\cap U_i}=1+X_i+Y_{i+1}.$$ This shows that $\th$ is invertible away from the loci $X_i=-1$ on $\mathcal{T}_0\cap V_i$, and that $X_i=-1$ are simple zeros of $\th|_{\mathcal{T}_0\cap V_i}$. \qed\vspace{3mm} We are interested in the formal curve $$\hat{\mathcal{T}}_n:=\hat{\mathcal{T}}/n{\Bbb Z}$$ over ${\Bbb Z}[[t]]_n$. Recall that we have a lifting of the ${\Bbb Z}$-action to $L$, so $L$ descends to a line bundle on $\hat{\mathcal{T}}_n$. For each $i\in{\Bbb Z}/n$ let us set $$\th_i=\sum_{j\in{\Bbb Z}} z_{i+nj}.$$ These are sections of $L$ over $\hat{\mathcal{T}}$, invariant with respect to the action of $n{\Bbb Z}\subset{\Bbb Z}$, so they descend to sections of $L$ on $\hat{\mathcal{T}}_n$.
Similarly, the sections $\sigma^\th_i$ are $n{\Bbb Z}$-equivariant, so we can view them as ${\Bbb Z}[[t]]_n$-points of $\hat{\mathcal{T}}_n$ (with $i\in {\Bbb Z}/n{\Bbb Z}$), and the isomorphism \eqref{L-si-th-eq} descends to \begin{equation}\label{L-si-th-eq-bis} \mathcal{O}_{\hat{\mathcal{T}}_n}(\sigma^\th_0+\ldots+\sigma^\th_{n-1})\simeq L|_{\hat{\mathcal{T}}_n}. \end{equation} We also have a natural generator $\omega$ of the relative dualizing sheaf for the family $\mathcal{T}_{\infty,n}\to \operatorname{Spec}({\Bbb Z}[t]_n)$, such that $\omega|_{V_i}=dX_i/X_i$, which induces a generator of the relative dualizing sheaf of $\hat{\mathcal{T}}_n$ over ${\Bbb Z}[[t]]_n$. \begin{lem}\label{alg-form-lem} The formal scheme $\hat{\mathcal{T}}_n$ is the formal completion of an algebraic scheme $T_n$ over ${\Bbb Z}[[t]]_n$ along its central fiber, and the line bundle $L$, its sections $\th_i$, the sections $\sigma^\th_i$, and the element $\omega$, all come from the corresponding data over $T_n$. \end{lem} \noindent {\it Proof} . The isomorphism \eqref{L-si-th-eq-bis} shows that the restriction of $L$ to the central fiber, which is the standard $n$-gon, has degree one on every component. Hence, this restriction is ample. Thus, the assertion follows from Grothendieck's existence theorem (see \cite[5.1.4, 5.4.5]{EGA3}). \qed\vspace{3mm} \begin{defi}\label{n-Tate-def} The {\it $n$-Tate curve} is the data $(T_n,\sigma^\th_0,\ldots,\sigma^\th_{n-1},\omega)$ over $\operatorname{Spec}({\Bbb Z}[[t]]_n)$ defined in Lemma \ref{alg-form-lem}. \end{defi} Using the fact that the specialization $t_1=\ldots=t_n=0$ gives the $n$-gon curve $G_n$, which is a family in $\widetilde{\mathcal{U}}_{1,n}^{sns}({\Bbb Z})$, one can easily deduce that the $n$-Tate curve $(T_n,\sigma^\th_0,\ldots,\sigma^\th_{n-1},\omega)$ is a family in $\widetilde{\mathcal{U}}_{1,n}^{sns}({\Bbb Z}[[t]]_n)$. Note that by \cite[Prop.\ 1.1.5]{LPol}, for $n\ge 5$ the ring of functions on the moduli space $\widetilde{\mathcal{U}}_{1,n}^{sns}$ (which is an affine scheme over ${\Bbb Z}$) is generated by the functions $c_{ij}$ (see \eqref{c-ij-eq}). Thus, the $n$-Tate curve is determined by the corresponding formal power series $c_{ij}\in {\Bbb Z}[[t]]_n$. Let us show how to express them in terms of theta functions (this result is not used anywhere else in the paper). First, we claim that the rational functions $\th_i/\th$ have poles only along $\sigma^\th_i$ and $\sigma^\th_{i+1}$. Indeed, this is checked easily by computing the restrictions of these functions to the central fiber: we have $$\frac{\th_i}{\th}|_{\mathcal{T}_0\cap V_i}=\frac{1}{X_0+1},$$ $$\frac{\th_i}{\th}|_{\mathcal{T}_0\cap V_{i+1}}=\frac{X_1}{X_1+1},$$ and the restrictions of $\th_i/\th$ to $\mathcal{T}_0\cap V_j$ for $j\neq i,i+1$, are zero. Now we can use our rational functions $\th_i/\th$ to compute the coordinates $c_{ij}$ from Section \ref{results-sec} for the $n$-Tate curve $T_n$ equipped with the marked points $p_i=\sigma^\th_i$ (where $i\in{\Bbb Z}/n$). Let us denote by $\operatorname{\partial}$ the global relative vector field on $\mathcal{T}_{\infty,n}$ such that $\operatorname{\partial}|_{U_i}=X_i\operatorname{\partial}_{X_i}$. Set $$R_0:=\operatorname{Res}_{p_0}(\frac{\th_0}{\th}\omega)=\frac{\th_0 (p_0)}{\operatorname{\partial} \th (p_0)} =\frac{\sum (-1)^{ni} z^{e_{-ni}(0)}(1+s)^{ni}} {\sum_i (-1)^i i z^{e_{-i}(0)}(1+s)^i}.$$ Note that this is an invertible element of ${\Bbb Z}[[t]]_n$ (equal to $1$ modulo the maximal ideal).
Then the rational function $$h_{01}:=\frac{\th_0}{R_0 \th}$$ satisfies the conditions of Section \ref{results-sec}: it belongs to $\mathcal{O}(p_0+p_1)$ and $h_{01}\omega$ has residue $1$ at $p_0$. Similarly, the rational functions $$h_{i,i+1}:=\frac{\th_i}{R_i\th},$$ where $$R_i:=\frac{\th_i (p_i)}{\operatorname{\partial} \th(p_i)}=-\frac{\th_i (p_{i+1})}{\operatorname{\partial} \th (p_{i+1})},$$ satisfy similar properties with respect to $p_i$ and $p_{i+1}$. Now the coordinates $c_{ij}$ on the moduli space are determined by $$b_{ij}:=h_{i,i+1}(p_j), \text{ where }\ j\neq i,i+1, \text{ and}$$ $$b_i:=(h_{i-1,i}+h_{i,i+1})|_{p_i}.$$ Namely, since $h_{1i}=h_{12}+h_{23}+\ldots+h_{i-1,i}$, we have $$c_{ij}=\begin{cases} \sum_{r=1}^{i-1} b_{r,j}, & i<j\\ b_j+\sum_{1\le r<i, r\neq j-1,j} b_{r,j}, & 1<j<i.\end{cases}$$ It remains to express $b_{ij}$ and $b_i$ in terms of the theta functions. \begin{lem} We have $$b_{ij}=h_{i,i+1}(p_j)=\frac{\th_i(p_j) \operatorname{\partial}\th(p_i)}{\th_i(p_i)\th(p_j)},$$ $$b_{i+1}=(h_{i,i+1}+h_{i+1,i+2})|_{p_{i+1}}=\operatorname{\partial} \log \frac{\th_{i+1}}{\th_i} (p_{i+1}),$$ where $j\neq i,i+1$. \end{lem} \noindent {\it Proof} . The first formula is straightforward. For the second we use l'Hopital's rule: $$(h_{i,i+1}+h_{i+1,i+2})|_{p_{i+1}}=\frac{R_{i+1}\th_i+R_i\th_{i+1}}{R_iR_{i+1}\th}|_{p_{i+1}}= \frac{R_{i+1}\operatorname{\partial}\th_i+R_i\operatorname{\partial}\th_{i+1}}{R_iR_{i+1}\operatorname{\partial}\th}|_{p_{i+1}},$$ and then use the definition of $R_i$ and $R_{i+1}$ to rewrite this as $$-\frac{\operatorname{\partial} \th_i(p_{i+1})}{\th_i(p_{i+1})}+\frac{\operatorname{\partial}\th_{i+1}(p_{i+1})}{\th_{i+1}(p_{i+1})}.$$ \qed\vspace{3mm} \subsection{Multiplication of theta functions}\label{mult-theta-sec} First, let us determine a basis of global sections of $L^m$ for $m\ge 1$ on $\hat{T}$. For each $p\in{\Bbb Q}$ let us set $$w_p=-pf+\sum_{i=0}^{n-1}n\phi(\frac{i-p}{n})e_i.$$ \begin{lem}\label{basis-Lm-sections-lem} For $m\ge 1$ the sections $(z^{mw_p})_{p\in \frac{1}{m}{\Bbb Z}}$ form a ${\Bbb Z}[t]_n$-basis (resp., ${\Bbb Z}[[t]]_n$-basis) of $H^0(\mathcal{T}_{\infty,n},L^m)$ (resp., of $H^0(\hat{\mathcal{T}},L^m)$). \end{lem} \noindent {\it Proof} . The ${\Bbb Z}$-basis of $H^0(\mathcal{T}_{\infty,n},L^m)$ is formed by $z^{mu}$ with $u\in\Delta\cap \frac{1}{m}M$. Now Lemma \ref{phi-lem}(ii) implies that $$\Delta\cap \frac{1}{m}M=\sqcup_{p\in \frac{1}{m}{\Bbb Z}} (w_p+{\Bbb Z}_{\ge 0}e_0+\ldots+{\Bbb Z}_{\ge 0}e_{n-1}),$$ and the assertion follows. \qed\vspace{3mm} Note that the ${\Bbb Z}$-action on $L^m$ corresponds to the affine automorphism $v\mapsto m\tau(v/m)$ of $m\Delta\cap M$. This automorphism sends $mw_p$ to $mw_{p-1}$. Now for $m\ge 1$, $p\in \frac{1}{m}{\Bbb Z}$, let us set \begin{equation} \th_{m,p}=\sum_{i\in {\Bbb Z}} z^{mw_{-p+in}}=\sum_{i\in {\Bbb Z}} z^{mn[(\frac{p}{n}+i)f+\phi(\frac{p}{n}+i)e_0+\phi(\frac{p+1}{n}+i)e_1+\ldots+ \phi(\frac{p+n-1}{n}+i)e_{n-1}]}. \end{equation} Then $\th_{m,p+n}=\th_{m,p}$ and each $\th_{m,p}$ is invariant with respect to the action of $n{\Bbb Z}\subset {\Bbb Z}$. Furthermore, as an easy consequence of Lemma \ref{basis-Lm-sections-lem} we get that $(\th_{m,p})$, for $p\in \frac{1}{m}{\Bbb Z}/n{\Bbb Z}$, is a ${\Bbb Z}[[t]]_n$-basis of the space of $n{\Bbb Z}$-invariant sections $H^0(\hat{\mathcal{T}},L^m)^{n{\Bbb Z}}$. By \cite[5.1.4]{EGA3}, the latter space is identified with $H^0(T_n,L^m)$. 
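To illustrate these formulas (this special case plays no role in what follows), let us spell out the case $n=1$, which recovers the classical Tate curve. Writing $t=t_0$ and $x=X_0=z^f$, we have $w_p=-pf+\phi(-p)e_0$, so that $$\th_{m,p}=\sum_{i\in{\Bbb Z}} z^{mw_{-p+i}}=\sum_{j\in p+{\Bbb Z}} t^{m\phi(j)}x^{mj},$$ where $m\phi(j)\in{\Bbb Z}_{\ge 0}$ by the property $\phi(\frac{1}{m}{\Bbb Z})\subset \frac{1}{m}{\Bbb Z}$ noted above. In particular, using $\phi(j)=j(j-1)/2$ for $j\in{\Bbb Z}$, we get $$\th_{1,0}=\sum_{j\in{\Bbb Z}} t^{\frac{j(j-1)}{2}}x^{j},$$ which is, up to normalization, the classical theta series appearing in the theory of the Tate curve (cf.\ \cite{DR}).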
\begin{prop} \label{Bsidecomp} For $m_1,m_2\ge 1$, $p_1\in \frac{1}{m_1}{\Bbb Z}$, $p_2\in \frac{1}{m_2}{\Bbb Z}$, one has $$\th_{m_1,p_1}\th_{m_2,p_2}=\sum_{k\in{\Bbb Z}}\th_{m_1+m_2,E(p_1,p_2+kn)}\cdot \prod_{j=0}^{n-1} t_j^{n\lambda(\frac{p_1+j}{n},\frac{p_2+j}{n}+k)},$$ where $$E(a,b)=\frac{m_1a+m_2b}{m_1+m_2},$$ $$\lambda(a,b)=m_1\phi(a)+m_2\phi(b)-(m_1+m_2)\phi(E(a,b)).$$ \end{prop} \noindent {\it Proof} . The calculation is very similar to the one in \cite[Sec.\ 8.4.2]{Gross}. \qed\vspace{3mm} \section{Relative Fukaya categories of genus 1 curves and homological mirror symmetry} \label{relFuk} We recall the definition of \emph{the relative Fukaya category} $\mathcal{F}(M,D)$ in the case where $M=\mathbb{T}$ is a symplectic $2$-torus and $D$ is the divisor consisting of $n$ points on $\mathbb{T}$. The relative Fukaya category of a pair $(M,D)$ was introduced in \cite{SeidelICM} and was further studied in \cite{Seidelquartic}. We follow closely the exposition provided in \cite{LP} where the case $M=\mathbb{T}$ is a symplectic $2$-torus and $D = \{z_1 \}$ is a single point was discussed in detail. As the construction given there applies to the mildly generalised situation where $D$ is the union of $n$ marked points, we will not give full details here. Let $\mathbb{T}$ be a closed, orientable surface of genus 1; $\omega$ a symplectic form on $\mathbb{T}$. Let $z_1, \ldots, z_n$ be $n$ marked points on $\mathbb{T}$, and $\mathbb{T}_0 = \mathbb{T} \backslash \{ z_1, \ldots, z_n \}$ be the $n$-punctured torus. We shall fix a primitive $\theta$ for $\omega|_{\mathbb{T}_0}$ and give $\mathbb{T}_0$ a Liouville structure. We will also fix an unoriented (real) line-field $l$ on $\mathbb{T}$. Such line fields form a torsor for $C^{\infty}(\mathbb{T}, \mathbb{R}P^1)$ and the connected components can be identified with $H^1(\mathbb{T}; \mathbb{Z})$. One concrete way to fix these data is as follows (cf.\ \cite{HKK}). Let us consider $\mathbb{T} = \mathbb{C} / ( \mathbb{Z} \oplus i \mathbb{Z}) $ as a Riemann surface and $D$ as a divisor on $\mathbb{T}$. Now, consider a holomorphic one-form $\alpha \in H^0(\mathbb{T}, \Omega_{\mathbb{T}}^{1,0})$. The square $\Omega = \alpha \otimes \alpha \in H^0(\mathbb{T}, (\Omega_{\mathbb{T}}^{1,0})^{\otimes 2})$ determines a non-vanishing quadratic form, which gives a flat Riemannian metric $|\Omega|$ on $\mathbb{T}$ and a horizontal foliation of tangent vectors $v$ with $\Omega(v,v) >0$. The flat Riemannian metric determines an area form $\omega$ and the horizontal foliation determines a grading structure on $\mathbb{T}$, i.e., a section of the projectivized tangent bundle of $\mathbb{T}$, which we view as an unoriented line field $l \subset T(\mathbb{T})$. (Note that if one is only interested in working with $\mathbb{T}_0$, one could start with a holomorphic one-form $\alpha \in H^0(\mathbb{T}, \Omega^{1,0}_{\mathbb{T}} (D))$ giving rise to a more general grading structure on the Fukaya category $\mathcal{F}(\mathbb{T}_0)$, which does not extend to $\mathcal{F}(\mathbb{T},D)$.) In addition, to be able to work with exact Lagrangians, we also need to fix a primitive $\theta$ of $\omega|_{\mathbb{T}_0}$. This amounts to giving a (real) vector field $Z$ on $\mathbb{T}_0$ which is {\it Liouville}, i.e., satisfies $\mathcal{L}_Z \omega = d(\iota_Z \omega)= \omega$. We can choose this vector field $Z$ conveniently so as to make our favourite objects given in Figure \ref{figure3} exact, see Prop. \ref{exactlags} and Prop. \ref{primitive} below. 
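For concreteness, here is one possible choice of the holomorphic one-form and of the resulting data (given purely as an illustration; it is compatible with the Cartesian coordinates $(x,y)\in\mathbb{R}^2/\mathbb{Z}^2$ used below): on $\mathbb{T}=\mathbb{C}/(\mathbb{Z}\oplus i\mathbb{Z})$ with $z=x+iy$, take $\alpha=dz$, so that $$\Omega=dz\otimes dz, \ \ \ |\Omega|=dx^2+dy^2, \ \ \ \omega=dx\wedge dy,$$ the horizontal foliation is by the circles $\{y=\mathrm{const}\}$ (the tangent directions $v$ with $\Omega(v,v)>0$ are the real multiples of $\partial_x$), and the corresponding line field is $l=\mathbb{R}\cdot\partial_x\subset T\mathbb{T}$.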
Recall that a closed curve $L$ is an exact Lagrangian if and only if $\int_L \iota_Z \omega =0$. Starting from this data, one constructs the relative Fukaya category ${\mathcal F}(\mathbb{T},D)$, an $A_\infty$-category linear over ${\Bbb Z}[[t_1,t_2,\ldots,t_n ]]$, well-defined up to quasi-equivalence (in particular, independent of the primitive $\theta$ and the line field $l$). Recall that the objects of ${ \mathcal F} (\mathbb{T}, D)$ are compact exact Lagrangian submanifolds $ L \subset \mathbb{T}_0$ which are equipped with an orientation, a spin structure and a grading (a grading is a homotopy from $l|_{L}$ to $TL$ in $T(\mathbb{T}_0)|_{L}$). Since $\dim_{\Bbb R} \mathbb{T}_0 = 2$, an oriented Lagrangian submanifold is just an oriented simple closed curve on $\mathbb{T}_0$. It is well known that an oriented simple closed curve which is not homotopic into a neighborhood of a puncture (in particular, not null-homotopic) is smoothly isotopic to an oriented exact Lagrangian, unique up to Hamiltonian isotopy. Furthermore, since we have required that the grading structure on $\mathbb{T}_0$ is restricted from a grading structure on $\mathbb{T}$, for an oriented exact Lagrangian in $\mathbb{T}_0$ to have a grading, it is necessary and sufficient that the underlying curve $L \subset \mathbb{T}_0$ is \emph{non-separating} (see \cite{seidelgraded} for gradings; in particular the proof of Prop. 2.12 in \cite{seidelgraded} is relevant here). Note also that an oriented exact Lagrangian can be equipped with either the trivial or the non-trivial spin structure. We often refer to an object of $\mathcal{F}(\mathbb{T},D)$ by specifying an exact Lagrangian $L \subset \mathbb{T}_0$, but suppressing the choice of an orientation, a spin structure and a grading. The morphism space $hom(L, L')$ is the free ${\Bbb Z}[[t_1,t_2,\ldots, t_n]]$-module on the intersections $L \cap L'$. Given a sequence of exact Lagrangians $L_0,\ldots, L_k$ in $\mathbb{T}_0$, one constructs the $A_\infty$-structure maps \[ \mathfrak{m}_k: hom(L_{k-1},L_k) \otimes \ldots \otimes hom(L_0, L_1) \to hom(L_0,L_k)[2-k] \] defined by counts of solutions to a family of inhomogeneous Cauchy-Riemann equations $u: S \to (\mathbb{T}, L_0 \cup \ldots \cup L_k)$ on the closed unit disk $S$ with $(k+1)$ boundary punctures, which are weighted by \[ \epsilon(u) t_1^{u \cdot z_1} t_2^{u \cdot z_2} \ldots t_n^{u \cdot z_n} \] where $u \cdot z_i$ denotes the intersection number of $u$ with $z_i$ and $\epsilon(u)$ is a sign (see \cite[Sec. 7]{SeidelGenus2} for a formula). Note that objects of ${ \mathcal F} (\mathbb{T},D)$ are defined as submanifolds of $\mathbb{T}_0 = \mathbb{T} \backslash D$; however, the $A_\infty$-structure is defined by counting maps $u$ that intersect the divisor $D$. One can define an $A_\infty$-category linear over ${\Bbb Z}$ where one requires $u \cdot z_i =0$ for all $i=1,\ldots, n$. We call this \emph{the exact Fukaya category} of the $n$-punctured torus and denote it by ${ \mathcal F}(\mathbb{T}_0)$. Note that by definition $$\mathcal{F}(\mathbb{T}_0)=\mathcal{F}(\mathbb{T},D)\otimes_{{\Bbb Z}[[t_1,\ldots,t_n]]} {\Bbb Z},$$ where we use the homomorphism ${\Bbb Z}[[t_1,\ldots,t_n]]\to{\Bbb Z}$ sending all $t_i$ to zero. Thus, one should think of ${ \mathcal F}(\mathbb{T},D)$ as a deformation of ${\mathcal F}(\mathbb{T}_0)$. We will also write $D^\pi{\mathcal F}(\mathbb{T},D)$ for the split-closed triangulated closure of ${\mathcal F}(\mathbb{T},D)$, which is called the derived Fukaya category of the pair $(\mathbb{T},D)$.
An explicit model for this triangulated category is provided by the split-closure of twisted complexes (see \cite{SeidelBook}). The exact Fukaya category $\mathcal{F}(\mathbb{T}_0)$ has only compact Lagrangians as objects. In fact, there is an enlargement of this category that allows non-compact objects, called the \emph{wrapped Fukaya category} and denoted by $\mathcal{W}(\mathbb{T}_0)$. It is a ${\Bbb Z}$-linear $A_\infty$-category containing $\mathcal{F}(\mathbb{T}_0)$ as a full $A_\infty$-subcategory. The objects of $\mathcal{W}(\mathbb{T}_0)$ are properly embedded eventually conical exact Lagrangian submanifolds of $\mathbb{T}_0$ equipped with orientations, spin structures and gradings. The morphism spaces $hom_{{\mathcal{W}}(\mathbb{T}_0)}(L,L')$ are cochain complexes $CW^*(L,L')$ computing the wrapped Floer cohomology (see \cite{AAEKO} for a working definition in this dimension or \cite{abouzgen} for a general definition). Finally, we note that a symplectomorphism $\phi : \mathbb{T}_0 \to \mathbb{T}_0$ is exact if $[\phi^* \theta - \theta] = 0 \in H^1(\mathbb{T}_0)$. Exact symplectomorphisms, equipped with a grading structure, act on $\mathcal{F}(\mathbb{T}_0)$ and $\mathcal{W}(\mathbb{T}_0)$, since they send exact Lagrangians to exact Lagrangians and symplectomorphisms act on Fukaya categories. If $\phi : \mathbb{T} \to \mathbb{T}$ is a symplectomorphism which fixes $D$ pointwise and such that $\phi|_{\mathbb{T}_0} : \mathbb{T}_0 \to \mathbb{T}_0$ is an exact symplectomorphism, then we also get an action on the relative Fukaya category $\mathcal{F}(\mathbb{T},D)$. We also note that exact symplectomorphisms of $\mathbb{T}_0$ up to exact isotopy form a group, and since $\dim_{{\Bbb R}}(\mathbb{T}_0)=2$, the natural map to the mapping class group of $\mathbb{T}_0$ yields an isomorphism by an application of Moser's theorem (cf. \cite{seidelmcg}). Therefore, to compare exact symplectomorphisms, we may apply techniques from mapping class groups (cf. \cite{primer}). An example of an exact symplectomorphism equipped with a grading is the (right-handed) Dehn twist $\tau_K$ around a (spherical) object $K$ of $\mathcal{F}(\mathbb{T}_0)$. We will use the corresponding Dehn twist exact triangle for $L\in\mathcal{F}(\mathbb{T},D)$ (see \cite{seidelLES}): \begin{equation} \label{DTT-eq} \xymatrix{ \mathit{HF}^*(K,L) \otimes K \ar[r]^-{\mathrm{ev}} & L \ar[d] \\ & \tau_{K}(L). \ar[ul]^{[1]} } \end{equation} Note that since $\tau_K$ is an exact symplectomorphism, the Lagrangian $\tau_{K}(L)$ is exact. Furthermore, the orientations, spin structures and grading structures on $K$ and $L$ induce the same structures on $\tau_{K}(L)$ in a canonical fashion, so that the exact triangle \eqref{DTT-eq} holds. It is worth highlighting that if $K$ and $L$ are equipped with non-trivial spin structures, then $\tau_{K}(L)$ should be equipped with the non-trivial spin structure. (See \cite{LPshort} for an explicit verification of this exact triangle in the case of the once-punctured torus.) \subsection{Generating objects for the relative Fukaya category}\label{generators-Aside-sec} We consider $(n+1)$ objects \[ L_0, L_1, \ldots, L_n \in { \mathcal F}(\mathbb{T},D)\] with the underlying oriented Lagrangians given by exact representatives of simple closed curves that are depicted in Figure \ref{figure2}. In Cartesian coordinates $(x,y) \in {\Bbb R}^2/{\Bbb Z}^2$, we have \[ L_0 = \{ (x,0) : x \in {\Bbb R}/{\Bbb Z} \}, \ \ L_i = \{ ((n-i)/n , y) : y \in {\Bbb R}/{\Bbb Z} \} \text{ for $i =1,\ldots, n$}.
\] We equip these with non-trivial spin structures which we keep track of with a marked point $\star \in L_i$ signifying the non-trivial double cover of $L_i$. These are needed in the calculation of signs $\epsilon(u)$ for polygons $u$ that contribute to the $A_\infty$-structure maps. Namely, if the intersection points at the corners of $u$ have \emph{even} Floer cohomology indices then $\epsilon(u) = (-1)^s$ where $s$ is the number of stars on the boundary. In our explicit computations, we will be in this situation. In general, $\epsilon(u)$ depends on orientations, spin structures and the indices of the corners. For a complete description of gradings and of signs $\epsilon(u)$, see \cite[Sec. 7]{SeidelGenus2}, and also \cite{LP}. As depicted in Figure \ref{figure2}, we specify the divisor $D = \{z_1,\ldots,z_n \}$ by letting \[ \{z_1,z_2,\ldots, z_n \} := \{ (n-i)/n + \epsilon, \epsilon) : i=1,\ldots, n \} \] in $\mathbb{R}^2/\mathbb{Z}^2$, for sufficiently small $\epsilon>0$. Ultimately, the exact locations of the points $z_i$ do not matter as long as there is one point on each connected component of $\mathbb{T} \setminus \{ L_0,L_1,\ldots, L_n \}$. The specific choice that we have (for sufficiently small $\epsilon$) is so that the formulae that we will get in explicit calculations would match the identity of Prop. \ref{Bsidecomp}. \begin{figure}[htb!] \centering \begin{tikzpicture} [scale=1.1] \tikzset{->-/.style={decoration={ markings, mark=at position #1 with {\arrow[scale=2,>=stealth]{>}}},postaction={decorate}}} \draw (0,0) -- (5,0); \draw[red, ->-=.5] (0,0) -- (6,0); \draw [red, ->-=.5] (0,6) -- (0,0); \draw [red, ->-=.5] (1,6) -- (1,0); \draw [red, ->-=.5] (4,6) -- (4,0); \draw [red, ->-=.5] (5,6) -- (5,0); \draw (0,6) -- (6,6); \draw (6,0) -- (6,6); \node at (0,5) {$\star$}; \node at (1,5) {$\star$}; \node at (4,5) {$\star$}; \node at (5,5) {$\star$}; \node at (0.5,0) {$\star$}; \node at (-0.3,2) {\footnotesize $L_n$}; \node at (0.65,2) {\footnotesize $L_{n-1}$}; \node at (3.7,2) {\footnotesize $L_{2}$}; \node at (4.7,2) {\footnotesize $L_1$}; \node at (3,-0.3) {\footnotesize $L_0$}; \draw[red, thick, fill=red] (1.7,2) circle(.02); \draw[red, thick, fill=red] (2.4,2) circle(.02); \draw[red, thick, fill=red] (3.1,2) circle(.02); \draw[black, thick, fill=black] (2.2,0.5) circle(.01); \draw[black, thick, fill=black] (2.6,0.5) circle(.01); \draw[black, thick, fill=black] (3.0,0.5) circle(.01); \draw[thick, fill=black] (0.5,0.5) circle(.03); \draw[thick, fill=black] (1.5,0.5) circle(.03); \draw[thick, fill=black] (3.5,0.5) circle(.03); \draw[thick, fill=black] (4.5,0.5) circle(.03); \draw[thick, fill=black] (5.5,0.5) circle(.03); \node at (0.5,0.7) {\footnotesize $z_n$}; \node at (1.5,0.7) {\footnotesize $z_{n-1}$}; \node at (3.5,0.7) {\footnotesize $z_{3}$}; \node at (4.5,0.7) {\footnotesize $z_2$}; \node at (5.5,0.7) {\footnotesize $z_1$}; \end{tikzpicture} \caption{Symplectic torus with $n+1$ oriented Lagrangians, $n$ vertical, $1$ horizontal} \label{figure2} \end{figure} \begin{lem} \label{relgenerate} The objects $L_0, L_1, \ldots L_n$ split-generate the split-closed triangulated category $D^\pi{\mathcal F}(\mathbb{T},D)$. \end{lem} \noindent {\it Proof}. It is well known that Dehn twists around the curves $L_0,\ldots,L_n$ generate the pure mapping class group of $(\mathbb{T},D)$ (\cite[Sec. 4.4.4]{primer}). 
Furthermore, the pure mapping class group acts transitively on the set of oriented non-separating simple closed curves, as follows from the classification of surfaces (\cite[Sec. 1.3.1]{primer}). Hence, using the exact triangles associated with Dehn twists (see \eqref{DTT-eq}), we deduce that any object $L$ of $\mathcal{F}(\mathbb{T},D)$, where the underlying spin structure is non-trivial, is generated by the collection $L_0,L_1,\ldots, L_n$. Now, given an arbitrary object $L$ with a trivial spin structure, we claim that $L \oplus L[2]$ is generated by $L_0,L_1,\ldots, L_n$. To see this, let $L'$ be the object with the same underlying oriented curve and same grading structure as $L$, but with the non-trivial spin-structure. Let $L^{''}$ be an object whose underlying curve intersect $L'$ at a unique point, and also equipped with a non-trivial spin structure. Now $(L',L'')$ is an $(A_2)$-configuration in $\mathbb{T}_0$. Hence, a neighborhood of $L' \cup L^{''}$ is a torus $P$ with one boundary component $\partial P$, embedded in $\mathbb{T}_0$. Now, looking at the mapping class group of $P$, we have the relation \[ (\tau_{L'} \tau_{L^{''}})^6 \simeq \tau_{\partial P}. \] In particular, we can observe that $(\tau_{L'} \tau_{L^{''}})^6$ sends the curve $L$ back to itself. However, as proven in Lemma 5.9 of \cite{seidelgraded}, this automorphism acts non-trivially on the grading. Namely, we have \[ (\tau_{L'} \tau_{L^{''}})^6 (L) \simeq L[2]. \] By the above argument, we know that $L'$ and $L^{''}$ are generated by the collection $L_0,L_1,\ldots, L_n$. Hence, we can combine the exact triangles of Dehn twists to obtain an exact triangle between $L$, $(\tau_{L'} \tau_{L^{''}})^6(L) \simeq L[2]$ and a complex built out of $L'$ and $L^{''}$. Now, for grading reasons, the map between $L$ and $L[2]$ has to vanish from which we conclude that $L \oplus L[2]$ is generated by $L_0, L_1,\ldots,L_n$, hence $L$ is split-generated by $L_0, L_1 \ldots,L_n$. \qed\vspace{3mm} In view of this generation result, one studies the derived Fukaya category of $(\mathbb{T}, D)$ via the $A_\infty$-algebra over ${\Bbb Z}[[t_1,t_2,\ldots t_n]]$, \[ {\mathscr A} = \bigoplus_{i,j=0}^{n} hom_{{\mathcal F}(\mathbb{T},D)}(L_i,L_j). \] We also consider the exact Fukaya category ${\mathcal F}(\mathbb{T}_0)$ and correspondingly, we have the $A_\infty$-algebra over ${\Bbb Z}$, \begin{equation}\label{A0-algebra-eq} {\mathscr A}_0 = \bigoplus_{i,j=0}^{n} hom_{{\mathcal F}(\mathbb{T}_0)} (L_i,L_j). \end{equation} Note that the proof in Lemma \ref{relgenerate} also gives that the collection $(L_0,\ldots,L_n)$ split-generates $\mathcal{F}(\mathbb{T}_0)$. As for the wrapped Fukaya category, we will consider dual objects. Namely, we consider $(n+1)$ objects \[ \hat{L}_0, \hat{L}_1, \ldots \hat{L}_n \in{ \mathcal W}(\mathbb{T}_0)\] with the underlying oriented non-compact arcs given by exact representatives of free homotopy classes that are depicted in Figure \ref{figure3}. In Cartesian coordinates $(x,y) \in \mathbb{R}^2/ {\Bbb Z}^2$, we have that \[ \hat{L}_0 = \{ ((n-1)/n + \epsilon, \epsilon + t) : t \in (0,1) + {\Bbb Z} \}, \] \[ \hat{L}_i = \{ ( \epsilon + (n-i-t)/n, \epsilon) : t \in (0,1) + {\Bbb Z} \} \text{ for } i=1,\ldots, n. \] Note that there exists a unique isomorphism class of a spin structure on these arcs once they are oriented. \begin{figure}[htb!] 
\centering \begin{tikzpicture} [scale=1.1] \tikzset{->-/.style={decoration={ markings, mark=at position #1 with {\arrow[scale=2,>=stealth]{>}}},postaction={decorate}}} \draw (0,0) -- (5,0); \draw[red, ->-=.5] (0,0) -- (6,0); \draw [red, ->-=.5] (0,6) -- (0,0); \draw [red, ->-=.5] (1,6) -- (1,0); \draw [red, ->-=.5] (4,6) -- (4,0); \draw [red, ->-=.5] (5,6) -- (5,0); \draw [blue, ->-=.5] (1.5,0.5) -- (0.5,0.5); \draw [blue, ->-=.5] (4.5,0.5) -- (3.5,0.5); \draw [blue, ->-=.5] (5.5,0.5) -- (4.5,0.5); \draw [blue, ->-=1] (0.5,0.5) -- (0,0.5); \draw [blue ] (6,0.5) -- (5.5,0.5); \draw [blue, ->-=.5] (5.5,6) -- (5.5,0); \draw (0,6) -- (6,6); \draw (6,0) -- (6,6); \node at (-0.3,2) {\footnotesize $L_n$}; \node at (0.65,2) {\footnotesize $L_{n-1}$}; \node at (3.7,2) {\footnotesize $L_{2}$}; \node at (4.7,2) {\footnotesize $L_1$}; \node at (3,-0.3) {\footnotesize $L_0$}; \node at (-0.2,0.4) {\footnotesize $\hat{L}_n$}; \node at (1,0.25) {\footnotesize $\hat{L}_{n-1}$}; \node at (3.8,0.25) {\footnotesize $\hat{L}_{2}$}; \node at (4.8,0.25) {\footnotesize $\hat{L}_{1}$}; \node at (5.7,2) {\footnotesize $\hat{L}_0$}; \node at (0,5) {$\star$}; \node at (1,5) {$\star$}; \node at (4,5) {$\star$}; \node at (5,5) {$\star$}; \node at (0.5,0) {$\star$}; \draw[red, thick, fill=red] (1.7,2) circle(.02); \draw[red, thick, fill=red] (2.4,2) circle(.02); \draw[red, thick, fill=red] (3.1,2) circle(.02); \draw[black, thick, fill=black] (2.2,0.5) circle(.01); \draw[black, thick, fill=black] (2.6,0.5) circle(.01); \draw[black, thick, fill=black] (3.0,0.5) circle(.01); \draw[thick, fill=black] (0.5,0.5) circle(.03); \draw[thick, fill=black] (1.5,0.5) circle(.03); \draw[thick, fill=black] (3.5,0.5) circle(.03); \draw[thick, fill=black] (4.5,0.5) circle(.03); \draw[thick, fill=black] (5.5,0.5) circle(.03); \node at (0.5,0.7) {\footnotesize $z_n$}; \node at (1.5,0.7) {\footnotesize $z_{n-1}$}; \node at (3.5,0.7) {\footnotesize $z_{3}$}; \node at (4.5,0.7) {\footnotesize $z_2$}; \node at (5.5,0.7) {\footnotesize $z_1$}; \end{tikzpicture} \caption{$n+1$ non-compact Lagrangians (blue) dual to $n+1$ compact Lagrangians (red).} \label{figure3} \end{figure} \begin{lem} \label{ncgener} The objects $\hat{L}_0, \hat{L}_1, \ldots \hat{L}_n$ generate the triangulated category $D^b {\mathcal W}(\mathbb{T}_0)$. \end{lem} \noindent {\it Proof}. We shall apply the argument of Theorem A.1 of \cite{AAEKO} (which in turn is based on Prop.\ 18.17 of \cite{SeidelBook}). Namely, the once-punctured torus has a Lefschetz fibration over ${\Bbb C}$ given by a double covering branched over 3 points. There is an unbranched covering of the once-punctured torus by the $n$-punctured torus. This, in turn, gives a Lefschetz fibration on the $n$-punctured torus. Now Theorem A.1 of \cite{AAEKO} gives that a basis of thimbles for this Lefschetz fibration generates the derived category $D^b \mathcal{W}(C)$. \begin{figure}[htb!] 
\centering \begin{tikzpicture} [scale=1.1] \tikzset{->-/.style={decoration={ markings, mark=at position #1 with {\arrow[scale=2,>=stealth]{>}}},postaction={decorate}}} \draw (0,0) -- (6,0); \draw (0,0) -- (0,6); \draw [blue, ->-=.5] (1.5,0.5) -- (0.5,0.5); \draw [blue, ->-=.5] (4.5,0.5) -- (3.5,0.5); \draw [blue, ->-=.5] (5.5,0.5) -- (4.5,0.5); \draw [blue, ->-=1] (0.5,0.5) -- (0,0.5); \draw [blue ] (6,0.5) -- (5.5,0.5); \draw [blue, ->-=.5] (5.5,6) -- (5.5,0); \draw [green!50!black, ->-=.5] (0.5,6) -- (0.5,0); \draw [green!50!black, ->-=.5] (3.5,6) -- (3.5,0); \draw [green!50!black, ->-=.5] (4.5,6) -- (4.5,0); \draw [violet, ->-=.5] (5.5,0.5) -- (4.65,6); \draw [violet, =.5] (4.65,0) -- (4.5,0.5); \draw [violet, ->-=.5] (4.5,0.5) -- (3.65,6); \draw [violet, =.5] (3.65,0) -- (3.5,0.5); \draw [violet, ->-=.5] (1.5,0.5) -- (0.65,6); \draw [violet, =.5] (0.65,0) -- (0.5,0.5); \draw (0,6) -- (6,6); \draw (6,0) -- (6,6); \draw[green!50!black, thick, fill=green!50!black] (1.7,2) circle(.02); \draw[violet, thick, fill=violet] (2.4,2) circle(.02); \draw[green!50!black, thick, fill=green!50!black] (3.1,2) circle(.02); \draw[black, thick, fill=black] (2.2,0.5) circle(.01); \draw[black, thick, fill=black] (2.6,0.5) circle(.01); \draw[black, thick, fill=black] (3.0,0.5) circle(.01); \draw[thick, fill=black] (0.5,0.5) circle(.03); \draw[thick, fill=black] (1.5,0.5) circle(.03); \draw[thick, fill=black] (3.5,0.5) circle(.03); \draw[thick, fill=black] (4.5,0.5) circle(.03); \draw[thick, fill=black] (5.5,0.5) circle(.03); \node at (0.5,0.7) {\footnotesize $z_n$}; \node at (1.5,0.7) {\footnotesize $z_{n-1}$}; \node at (3.5,0.7) {\footnotesize $z_{3}$}; \node at (4.5,0.7) {\footnotesize $z_2$}; \node at (5.5,0.7) {\footnotesize $z_1$}; \end{tikzpicture} \caption{Generators given by a basis of thimbles} \label{figure3bis} \end{figure} In our case, a choice of such a basis of thimbles is depicted in Figure \ref{figure3bis}. This construction gives $3n$ objects which is more than what one needs. Indeed, we will use explicit exact sequences to show that the (blue) curves that correspond to $\hat{L}_i$, for $i=0,\ldots,n$, generate the other ones. For this purpose, let us label the vertical (green for $i>1$) curves as $G_i$, for $i=1,\ldots,n$, where $G_i$ has ends approaching to the puncture $z_i$. Note that $G_1 = \hat{L}_0$. Let us also label the negatively sloped (violet) curves, that connect $z_i$ to $z_{i+1}$, as $P_i$, for $i=1,\ldots, n$. We claim that we have the following exact triangles for $i=1,\ldots,n$ (where $G_{n+1}=G_1$): \begin{equation} \label{wexact} \xymatrix{ \hat{L}_i \ar[r] & P_i \ar[d] \\ & G_i \ar[ul]^{[1]} } \ \ \ , \ \ \ \xymatrix{ \hat{L}_i \ar[r] & P_i \ar[d] \\ & G_{i+1} \ar[ul]^{[1]} } \end{equation} (Note that the objects on the top of the two exact triangles are the same, however, the degree 0 morphisms between them are different. Compare these to the exact sequences \eqref{O(-1)-O-qi-seq} and \eqref{O(-1)-O-qi-bis-seq} appearing later in the proof of Prop. \ref{Db-Coh-gen-prop}). Since $G_1= \hat{L}_0$ by definition, the above exact sequences suffice to show that $\{ \hat{L}_i \}$, for $i=0,\ldots, n$, generate all the objects $P_i$ and $G_i$, and hence, the wrapped category $D^b \mathcal{W}(\mathbb{T}_0)$. It remains to establish the existence of the exact triangles \eqref{wexact}. We will give the proof for the one on the left. The proof for the other one is similar. 
Let $\Sigma_i$ be a Liouville subdomain in $\mathbb{T}_0$ that is obtained by removing a collar neighborhood of the puncture at $z_i$. In other words, $\mathbb{T}_0$ can be symplectically identified as the completion $\widehat{\Sigma}_i$ of $\Sigma_i$ at its compact boundary (by putting back the collar neighborhood). Our strategy will be to use the restriction functors on wrapped Fukaya categories constructed by Abouzaid and Seidel in \cite{AbouzSeidel}, which in our case gives an exact functor \[ \mathcal{W}(\widehat{\Sigma}_i) \to \mathcal{W}(\mathbb{T}_0) \] such that on objects we just intersect the underlying Lagrangian in $\widehat{\Sigma}_i$ with $\Sigma_i$ and then extend it to $\mathbb{T}_0$ in the obvious conical way. We will establish an exact triangle in $\widehat{\Sigma}_i$ associated to a (negative) Dehn twist around a spherical object and the desired exact triangle in $\mathcal{W}(\mathbb{T}_0)$ will be obtained as the image under this restriction functor. \begin{figure}[htb!] \centering \begin{tikzpicture} [scale=1.1] \tikzset{->-/.style={decoration={ markings, mark=at position #1 with {\arrow[scale=2,>=stealth]{>}}},postaction={decorate}}} \draw[thick] (0,0) circle(1); \draw [green!50!black, ->-=.7] (0,1) to[in=160,out=200] (0,-1); \draw [blue, ->-=.8] (0,0) to (-1,0); \draw [violet, ->-=.4] (0,0) to (-0.7,0.7); \draw[black, thick, fill=black] (0,0) circle(.03); \draw[thick] (3.5,0) circle(1); \draw [green!50!black, ->-=.7] (3.5,1) to (3.5,-1); \draw [blue, ->-=.8] (3.5,0) to (3.5-1,0); \draw [violet, ->-=.4] (3.5,0) to (3.5-0.7,0.7); \draw[black, thick, fill=black] (3.5,0) circle(.03); \draw[thick] (0,0) circle(1); \end{tikzpicture} \caption{Neighborhood of the puncture $z_i$ and the extensions of $\hat{L}_i$, $P_i$ and $G_i$ in $\widehat{\Sigma}_i$ (left) and $\mathbb{T}_0$ (right). } \label{figure3bisbis} \end{figure} Now, we have the restriction of the objects $\hat{L}_i$, $P_i$ and $G_i$ to $\Sigma_i$. We shall extend them to $\widehat{\Sigma}_i$ so that $\hat{L}_i$ and $P_i$ are extended as in $\mathbb{T}_0$ but we modify $G_i$ so that it is extended to a compact curve $\overline{G}_i \subset \widehat{\Sigma}_i$, see Figure \ref{figure3bisbis} where the neighborhood of the puncture $z_i$ is drawn. In order to ensure that $\overline{G}_i$ is actually an exact Lagrangian in $\widehat{\Sigma}_i$, one may need to isotope $G_i$ in $\Sigma_i$ but this is unproblematic. (Note that in Figure \ref{figure3bisbis}, we completed $G_i$ to a compact curve that goes around the puncture in a counter-clockwise manner, the other choice is used in proving the other exact triangle in (\ref{wexact})). On $\widehat{\Sigma}_i$ the Lagrangian $\overline{G}_i$ is a spherical object, so we can apply the Dehn twist exact triangle associated to the (negative) Dehn twist around $\overline{G}_i$. This gives an exact triangle \begin{equation} \xymatrix{ \hat{L}_i \ar[r] & P_i \ar[d] \\ & \overline{G}_i \ar[ul]^{[1]} } \end{equation} since it is easy to see that $\tau_{\overline{G}_i}^{-1}(P_i)$ is isotopic to $\hat{L}_i$ in $\widehat{\Sigma}_i$. Finally, note that we have arranged it so that after restricting $\hat{L}_i, P_i$ and $\overline{G}_i$ to $\Sigma_i$ and extending them conically to $\mathbb{T}_0$, we get back the objects $\hat{L}_i, P_i$ and $G_i$ in $\mathcal{W}(\mathbb{T}_0)$ (this is clear from Figure \ref{figure3bisbis}). 
Hence, the image of the above exact triangle in $\mathcal{W}(\mathbb{T}_0)$ under the restriction functor is \begin{equation}\label{LPG-exact-triangle-eq} \xymatrix{ \hat{L}_i \ar[r] & P_i \ar[d] \\ & G_i \ar[ul]^{[1]} } \end{equation} which is precisely the left triangle in \eqref{wexact}. \qed\vspace{3mm} \begin{rem} \label{pitfall} An alternative approach would be to try to apply the split-generation criterion due to Abouzaid \cite[Thm. 1.1]{abouzgen}. In order to apply this result, one needs to show that the open-closed map \[ \mathcal{OC}: HH_{*-1}( \langle \hat{L}_i \rangle ) \to SH^*(\mathbb{T}_0) \] hits the unit in $SH^0(\mathbb{T}_0)$ (we shifted the grading so that $\mathcal{OC}$ is degree preserving). Now, the unit in $SH^0(\mathbb{T}_0)$ is represented by the fundamental class. Hence, it is enough to find holomorphic polygons with boundary on the $\{ \hat{L}_i \}$ whose total image covers the whole manifold $\mathbb{T}_0$ with multiplicity 1. At first sight, this seems easy to arrange since the complement $\mathbb{T}_0 \setminus \{ \hat{L}_0, \hat{L}_1,\ldots, \hat{L}_n \}$ consists of a disjoint union of (open) polygons. However, even though the Hochschild chain that appears at the corners of these polygons is sent by the open-closed map to the unit in $SH^0(\mathbb{T}_0)$, there is no a priori guarantee that this chain is actually a cycle, and so it may not give an element of $HH_*(\langle \hat{L}_i \rangle)$. We thank Hansol Hong for warning us about this dangerous pitfall. \end{rem} In view of Lemma \ref{ncgener}, one can study the derived wrapped Fukaya category $D^b\mathcal{W}(\mathbb{T}_0)$ via the $A_\infty$-algebra over ${\Bbb Z}$, \begin{equation}\label{B-algebra-eq} {\mathscr B} = \bigoplus_{i,j=0}^{n} hom_{{\mathcal W}(\mathbb{T}_0)}(\hat{L}_i,\hat{L}_j). \end{equation} We note that the intersection pattern of the compact Lagrangians $L_i$ with the non-compact Lagrangians $\hat{L}_i$ immediately gives the following duality property. \begin{prop} \label{exactlags} One has $$\operatorname{Hom}_{\mathcal{W}(\mathbb{T}_0)}(L_i,\hat{L}_j)=\operatorname{Hom}_{\mathcal{W}(\mathbb{T}_0)}(\hat{L}_j,L_i)=0 \text{ for } i\neq j,$$ $$\operatorname{Hom}_{\mathcal{W}(\mathbb{T}_0)}(L_i,\hat{L}_i)\simeq \operatorname{Hom}_{\mathcal{W}(\mathbb{T}_0)}(\hat{L}_i,L_i)[1]\simeq {\Bbb Z}$$ for $i=0,\ldots,n$. Furthermore, the composition map $$\operatorname{Hom}_{\mathcal{W}(\mathbb{T}_0)}(\hat{L}_i,L_i)\otimes \operatorname{Hom}_{\mathcal{W}(\mathbb{T}_0)}(L_i, \hat{L}_i)[1]\to \operatorname{Hom}_{\mathcal{F}(\mathbb{T}_0)}(L_i,L_i)[1]$$ is an isomorphism. \end{prop} \noindent {\it Proof} . We only need to check that we can actually find a primitive $\theta$ of $\omega|_{\mathbb{T}_0}$ that makes the Lagrangians $L_i$ and $\hat{L}_i$, as depicted in Figure \ref{figure3}, exact. This is equivalent to exhibiting a Liouville vector field $Z$ on $\mathbb{T}_0$ such that $\int_{L_i} \iota_Z \omega = \int_{\hat{L}_i} \iota_Z \omega =0$ for all $i=0,\ldots, n$. By observing that $\mathbb{T}_0$ retracts onto a neighborhood of $\bigcup_i L_i$ which can be locally identified with a neighborhood of a plumbing of $T^*L_i$'s, it is easy to see that there is a Liouville vector field $Z$ which vanishes along $L_i$ and is tangent to $\hat{L}_i$. Hence, the Lagrangians $L_i$ and $\hat{L}_i$ can be made exact as drawn in Figure \ref{figure3}.
\qed\vspace{3mm} This duality property will play a key role in proving that $\mathscr{A}_0$ and $\mathscr{B}$, equipped with certain augmentations, are Koszul dual $A_\infty$-algebras (see Sec.\ \ref{koszul}). Let $T : \mathbb{T} \to \mathbb{T}$ be the symplectomorphism given by the translation \begin{equation}\label{T-translation-eq} T(x,y) = (x+1/n, y) \end{equation} Then, $T$ preserves $L_0$, sends $L_i$ to $L_{i+1}$ for $i=1,\ldots (n-1)$, $L_n$ to $L_1$, and preserves $D = \{ z_1,\ldots,z_n\}$. Since the homology classes of $L_i$ for $i=0,\ldots,n$ give a basis of $H_1(\mathbb{T}_0)$, we derive that for a primitive $\theta$ of $\omega|_{\mathbb{T}_0}$ for which $L_i$ are exact Lagrangians, the induced symplectomorphism $T : \mathbb{T}_0 \to \mathbb{T}_0$ is exact, i.e., $[T^*\theta - \theta] = 0 \in H^1(\mathbb{T}_0)$. In particular, $T$ acts on the wrapped Fukaya category $\mathcal{W}(\mathbb{T}_0)$. Another set of generators for the wrapped category is given in the following lemma (see Figure \ref{figure4}). \begin{figure}[htb!] \centering \begin{tikzpicture} [scale=1.1] \tikzset{->-/.style={decoration={ markings, mark=at position #1 with {\arrow[scale=2,>=stealth]{>}}},postaction={decorate}}} \draw [red, ->-=.5] (0,0) -- (6,0); \draw (0,6) -- (0,0); \draw [red, ->-=.5] (0.5,6) -- (0.5,0); \draw [red, ->-=.5] (3.5,6) -- (3.5,0); \draw [red, ->-=.5] (4.5,6) -- (4.5,0); \draw [red, ->-=.5] (5.5,6) -- (5.5,0); \draw (0,6) -- (6,6); \draw (6,0) -- (6,6); \node at (1.1,2) {\footnotesize $T^{n-1} \hat{L}_0$}; \node at (4.85,2) {\footnotesize $T \hat{L}_0$}; \node at (3.95,2) {\footnotesize $T^2 \hat{L}_0$}; \node at (3,-0.3) {\footnotesize $L_0$}; \node at (5.7,2) {\footnotesize $\hat{L}_0$}; \node at (0.3,0) {$\star$}; \draw[red, thick, fill=red] (1.8,2) circle(.02); \draw[red, thick, fill=red] (2.5,2) circle(.02); \draw[red, thick, fill=red] (3.2,2) circle(.02); \draw[black, thick, fill=black] (1.2,0.5) circle(.01); \draw[black, thick, fill=black] (2.1,0.5) circle(.01); \draw[black, thick, fill=black] (3.0,0.5) circle(.01); \draw[thick, fill=black] (0.5,0.5) circle(.03); \draw[thick, fill=black] (3.5,0.5) circle(.03); \draw[thick, fill=black] (4.5,0.5) circle(.03); \draw[thick, fill=black] (5.5,0.5) circle(.03); \node at (0.5,0.7) {\footnotesize $z_n$}; \node at (3.5,0.7) {\footnotesize $z_{3}$}; \node at (4.5,0.7) {\footnotesize $z_2$}; \node at (5.5,0.7) {\footnotesize $z_1$}; \end{tikzpicture} \caption{Another set of generators for $\mathcal{W}(\mathbb{T}_0)$} \label{figure4} \end{figure} \begin{lem}\label{another-generators-W-lem} $D^\pi(\mathcal{W}(\mathbb{T}_0))$ is split-generated by $L_0$ and the objects $T^i\hat{L}_0$, $i=0,\ldots,n-1$. \end{lem} \noindent {\it Proof}. The proof is similar to the proof of Lemma \ref{ncgener}. In the notation of the proof of Lemma \ref{ncgener}, we have $T^i \hat{L}_0 = G_{i+1}$ for $i=0,1\ldots, n-1$. Note that for each $i$ we have that $G_i$ intersects $L_0$ at a unique point. As in the proof of Lemma \ref{ncgener}, we can consider compact curves $\overline{G}_i \subset \widehat{\Sigma}_i$. Let us do this for all $i=1,\ldots, n$, and write $\widehat{\Sigma}$ for the completion obtained in this manner for all $i$ simultaneously. Next, consider the Dehn twist exact triangle corresponding to the composition $\tau = \tau_{\overline{G}_1} \circ \ldots \circ \tau_{\overline{G}_n}$ around the disjoint curves $\overline{G}_i$ for $i=1, \ldots n$. 
This gives the following exact triangle in $\mathcal{W}(\widehat{\Sigma})$: \begin{equation} \xymatrix{ \bigoplus_{i=1}^n \overline{G}_i \ar[r] & L_0[1] \ar[d] \\ & \bigoplus_{i=1}^n \overline{P}_i \ \ar[ul]^{[1]} } \end{equation} where $\overline{P}_i$ are compact curves which restrict to the non-compact curves $P_i[1]$ in $\Sigma$ (note that $P_i[1]$ has the same underlying non-compact Lagrangian as $P_i$ but equipped with the opposite orientation) . Hence, as in Lemma \ref{ncgener}, we can use the exact restriction functor $\mathcal{W}(\widehat{\Sigma}) \to \mathcal{W}(\mathcal{\mathbb{T}}_0)$ to obtain an exact triangle (which is an analog of the exact sequence \eqref{surgery} appearing later on the B-side): \begin{equation} \xymatrix{ L_0 \ar[r] & \bigoplus_{i=1}^n P_i \ar[d] \\ & \bigoplus_{i=1}^n G_i \ \ar[ul]^{[1]} } \end{equation} (Note that one could alternatively use the Lagrangian surgery exact triangle due to Fukaya-Oh-Ohta-Ono \cite{FOOO} to obtain this result more directly.) This proves that $L_0$ together with $G_i$, for $i=1,\ldots n$, split-generate $P_i$. Now the exact triangles \eqref{LPG-exact-triangle-eq} show that $G_i$ and $P_i$ generate $\hat{L}_i$. Since by Lemma \ref{ncgener}, $\hat{L}_i$, for $i=0,\ldots, n$, generate $\mathcal{W}(\mathbb{T}_0)$, the result follows. \qed\vspace{3mm} Additively, wrapped Floer cohomology of $T^i\hat{L}_0$ is easy to compute, as it is easy to see that the Floer differential vanishes. \begin{lem}\label{sympl-dim-Hom-lem} For all $i=0,\ldots, n-1$, the non-compact Lagrangians $T^i\hat{L}_0$ can be equipped with a grading structure such that for any commutative ring $R$ one has an isomorphism of $R$-modules \[ \operatorname{Hom}^*_{\mathcal{W}(\mathbb{T}_0)\otimes R}(T^i\hat{L}_0,T^i\hat{L}_0) \cong R \langle u_i, v_i \rangle / (u_i^2, v_i^2) \] where $\deg(u_i)= \deg(v_i)= 1$. \end{lem} \noindent {\it Proof}. Recall that the wrapped Floer complex $CW^*(T^i \hat{L}_0, T^i \hat{L}_0 )$ is generated by time 1 chords of the flow generated by a Hamiltonian $H : \mathbb{T}_0 \to {\Bbb R}$ which is quadratic at infinity (in other words, $H(r,\theta) = r^2$ at the cylindrical ends; this implies that near the ends the flow is given by the clockwise rotation). For simplicity, we require that $H|_{T^i\hat{L}_0}$ is a Morse function with a unique minimum. The minimum gives a generator of $\operatorname{Hom}^0 (T^i\hat{L}_0, T^i\hat{L}_0)$. Let us label the shortest (non-constant) time 1 chords $u_i$ and $v_i$ (corresponding to left and right semicircles at the cylindrical end oriented clockwise). Since we required the line field $l$ on $\mathbb{T}_0$ to extend to $\mathbb{T}$, the rotation number of a small simple closed loop around $i^{th}$ puncture with respect to the induced trivialization is 1. It follows that the Maslov indices of $u_i$ and $v_i$ have to obey the equality \[ \deg(u_i) + \deg(v_i) = 2, \] as composing $u_i$ and $v_i$, we get a loop that goes once around the puncture. In addition, because of the rigid polygonal region with the boundary $T^i \hat{L}_0$ and $T^{i+1} \hat{L}_0$ and $L_0$, we have the constraint \[ \deg(u_i) + \deg(v_{i+1}) =2. \] A priori, these are the only restrictions that we have on the degrees. To pin down the degrees exactly, we need to choose a grading structure on $T^i\hat{L}_0$, and we do this so that $\deg(u_i)=1$, which then forces that $\deg(v_i)=1$. 
Any other chord is obtained by composing $u_i$ and $v_i$ in alternating fashion, hence it follows that there is a graded isomorphism of vector spaces such that $CW^*(T^i\hat{L}_0, T^i\hat{L}_0) \cong R \langle u_i,v_i \rangle / (u_i^2, v_i^2)$. It remains to see that there is no differential. This could either be deduced directly, or we could argue as follows. We could alternatively choose a grading structure on $T^i\hat{L}_0$ such that $\deg(u_i)=2$ and $\deg(v_i)=0$. Then the complex $CW^*(T^i\hat{L}_0, T^i\hat{L}_0)$ would be concentrated in even degree, hence there could not be any rigid holomorphic curve contributing to the differential. This geometric fact remains the same for any choice of grading structures, hence it follows that the Floer differential vanishes irrespective of how we grade the Lagrangians $T^i\hat{L}_0$. \qed\vspace{3mm} We will always grade our Lagrangian $T^i\hat{L}_0$ so that the graded isomorphism given in Lemma \ref{sympl-dim-Hom-lem} holds. Later, we will see that in fact the isomorphism given in Lemma \ref{sympl-dim-Hom-lem} is an isomorphism of graded rings. \subsection{Calculations in the relative Fukaya category} Following the strategy in \cite{LP}, we will not compute the $A_\infty$-algebra $\mathscr{A}$ directly. The presence of higher products (cf. \cite{LPshort}) makes such a direct computation difficult. On the other hand, the cohomology algebra $A= H^*\mathscr{A}$ is easy to compute. \begin{prop} There exists a choice of gradings on the Lagrangians $(L_i)$, such that we have an isomorphism of graded algebras $$H^*\mathscr{A} \cong E_{1,n}\otimes {\Bbb Z}[[t_1,\ldots,t_n]],$$ where the algebra $E_{1,n}$ is given by \eqref{E1n-eq}. \end{prop} \noindent {\it Proof}. The additive isomorphism is a direct consequence of the intersection pattern of the curves $L_0,L_1,\ldots, L_n$ and the choice of orientations and gradings. Namely, the given orientations imply that the rank 1 free module $HF^*(L_0,L_i)$ is supported in even degree for all $i=1,\ldots, n$. We can pick the grading structure on $L_0$ arbitrarily, and then choose the grading structure on $L_i$ so that $HF^*(L_0,L_i) \cong HF^0(L_0,L_i)$. This implies that the rank 1 module $HF^*(L_i,L_0)$ is supported in degree 1, i.e., $HF^*(L_i,L_0) \cong HF^1(L_i,L_0)$. Note that since $L_i$ are exact, we have that $HF^*(L_i,L_i) \cong H^*(L_i) \cong H^*(S^1)$ as a ring. The rest of the algebra structure is determined by the following pairings, for $i=1,\ldots,n$: \begin{align} \label{PD} HF^1(L_i, L_0) \otimes HF^0(L_0,L_i) &\to HF^1(L_0,L_0), \\ \label{PD2} HF^0(L_0,L_i) \otimes HF^1(L_i,L_0) &\to HF^1(L_i,L_i). \end{align} Note that, as explained in \cite[Sec.7]{SeidelGenus2}, the only contributions to these products are given by constant holomorphic triangles (and their moduli space is regular). We can either compute their contribution explicitly or observe that as a consequence, we have that \[ H^*\mathscr{A} \cong H^*\mathscr{A}_0 \otimes {\Bbb Z}[[t_1,\ldots,t_n]]. \] Now, all of the maps (\ref{PD}) and (\ref{PD2}) have to be non-degenerate pairings by the general Poincar\'e duality property of Floer cohomology (see \cite[Sec. 12e]{SeidelBook}). In $\mathscr{A}_0$ each of these maps is of the form: \[ {\Bbb Z} \otimes {\Bbb Z} \to {\Bbb Z}. \] This has to be non-degenerate after reducing mod $p$ for all $p$, hence we see that the map has to be $(1,1) \to \pm 1$ and the sign does not matter up to isomorphism, so we deduce that $H^* \mathscr{A}_0$ is isomorphic to $E_{1,n}$. 
\qed\vspace{3mm} Thus, by Theorem \ref{M1n-ainf-thm} (resp., by Proposition \ref{n=2-ainf-surj-prop}, if $n=2$), there exists a family of curves $(C_{mirror},p_1,\ldots,p_n,\omega)$ in $\widetilde{\mathcal{U}}_{1,n}^{sns}$ over ${\Bbb Z}[[t_1,\ldots,t_n]]$ such that the $A_\infty$-algebra $\mathscr{A}$ is gauge-equivalent to the $A_\infty$-structure on $E_{1,n}\otimes {\Bbb Z}[[t_1,\ldots,t_n]]$, associated with this family via \eqref{curve-to-ainf-map}. Under this correspondence, we can identify $L_0 \leftrightarrow \mathcal{O}_C$ and $L_i \leftrightarrow \mathcal{O}_{p_i}$. Therefore, to identify the $A_\infty$-algebra $\mathscr{A}$ up to an $A_\infty$-equivalence, it suffices to determine the curve $(C_{mirror},p_1,\ldots, p_n)$. We will do this, by computing its homogeneous coordinate ring which can be done at the level of the cohomological category $H^*(\mathcal{F}(\mathbb{T},D))$: \[ R_{C_{mirror}} = \bigoplus_{N \geq 0} H^0({\mathcal{O}}_C (N (p_1+\ldots+p_n))) \cong \bigoplus_{N \geq 0} HF^0(L_0, (\tau_{L_1} \circ \tau_{L_2} \circ \ldots \tau_{L_n})^N (L_0) ),\] where $\tau_{L_1}, \ldots, \tau_{L_n}$ are Dehn twists around the curves $L_1, \ldots, L_n$, respectively. To make sure that these Dehn twists act on the category $\mathcal{F}(\mathbb{T},D)$, we need to give models for them that are \emph{exact} symplectomorphisms. Furthermore, we prefer to have linear models so that we can explicitly compute $R_{C_{mirror}}$. Rather than exhibiting explicit models for each $\tau_{L_i}$, we will exhibit a model for $\tau_{L_1} \circ \tau_{L_2} \circ \ldots \circ \tau_{L_n}$. To this end, consider the symplectomorphism $\rho : \mathbb{T} \to \mathbb{T}$ given by \[ \rho(x,y) = (x, y-nx). \] This is almost a model for $\tau$, however, it does not preserve the divisor $D$, so we do not get a symplectomorphism of $\mathbb{T}_0$. To fix this, we observe that the divisor $D = \{z_1,\ldots,z_n \}$ on $\mathbb{T}$ is sent by $\rho$ to $\{w_1,\ldots, w_n \}$ where \[ w_i = \rho(z_i)= (\frac{n-i}{n}+\epsilon, \epsilon - (n-i)- n\epsilon) = (\frac{n-i}{n}+\epsilon, \epsilon - n\epsilon) \in {\Bbb R}^2/{\Bbb Z}^2. \] Note that for $\epsilon$ sufficiently small, for all $i$, $z_i$ and $w_i$ are very close to each other as points in $\mathbb{T}$. Let $U_i$ be an $\epsilon^2$-neighborhood of the segment $[z_i, w_i]$. We can find a compactly supported symplectomorphism $\delta_i$ of $U_i$ sending $w_i$ to $z_i$ for all $i$. We then set \begin{equation}\label{tau-sympl-eq} \tau = (\delta_1 \circ \ldots \circ \delta_n) \circ \rho. \end{equation} Note that $\tau$ defines a symplectomorphism of $\mathbb{T}_0$, such that \[ \tau(x,y)= (x, y-nx) \] outside of an arbitrarily small (depending on $\epsilon$) neighbourhood of $D$. The following proposition ensures that for a suitable choice of the primitive $\theta$, $\tau$ will be an exact symplectomorphism of $\mathbb{T}_0$, that is a model for the composition of the Dehn twists around $L_i$, which is linear outside of a small neighborhood of $D$. \begin{prop} \label{primitive} There exists a primitive $\theta = \iota_Z \omega$ of $\omega|_{\mathbb{T}_0}$ such that the Lagrangians $L_0$ and $\tau(L_0)$ are exact. Furthermore, for the same choice of $\theta$, the symplectomorphism $\tau: \mathbb{T}_0 \to \mathbb{T}_0$ given by \eqref{tau-sympl-eq} is exact and is symplectically isotopic to the composition of Dehn twists $\tau_{L_1} \circ \tau_{L_2} \circ \ldots \circ \tau_{L_n}$. \end{prop} \noindent {\it Proof} . 
To make $L_0$ and $\tau({L}_0)$ exact Lagrangians, we need to exhibit a Liouville vector field $Z$ on $\mathbb{T}_0$ such that $\int_{L_0} \iota_Z \omega = \int_{\tau ({L}_0)} \iota_Z \omega =0$. By observing that $\mathbb{T}_0$ retracts onto a neighborhood of $L_0 \cup \tau (L_0)$ which can be locally identified with a neighborhood of a plumbing of $T^*L_0$ and $T^*(\tau(L_0))$'s, it is easy to see that there is a Liouville vector field $Z$ which vanishes along $L_0$ and $\tau(L_0)$ (see Figure \ref{alexander}). \begin{figure}[htb!] \centering \begin{tikzpicture} [scale=1.1] \tikzset{->-/.style={decoration={ markings, mark=at position #1 with {\arrow[scale=2,>=stealth]{>}}},postaction={decorate}}} \draw (0,0) -- (6,0); \draw (6,6) -- (0,6); \draw (6,0) -- (6,6); \draw (0,0) -- (0,6); \draw[red, ->-=.5] (0,0) -- (6,0); \draw [red, ->-=.5] (0,6) -- (1,0); \draw [red, ->-=.5] (1,6) -- (2,0); \draw [red, ->-=.5] (2,6) -- (3,0); \draw [red, ->-=.5] (4,6) -- (5,0); \draw [red, ->-=.5] (5,6) -- (6,0); \node at (3,-0.3) {\footnotesize $L_0$}; \draw[red, thick, fill=red] (3.0,2) circle(.02); \draw[red, thick, fill=red] (3.7,2) circle(.02); \draw[red, thick, fill=red] (4.4,2) circle(.02); \draw[black, thick, fill=black] (3.2,0.5) circle(.01); \draw[black, thick, fill=black] (3.6,0.5) circle(.01); \draw[black, thick, fill=black] (4,0.5) circle(.01); \draw[thick, fill=black] (0.5,0.4) circle(.03); \draw[thick, fill=black] (1.5,0.4) circle(.03); \draw[thick, fill=black] (2.5,0.4) circle(.03); \draw[thick, fill=black] (4.5,0.4) circle(.03); \draw[thick, fill=black] (5.5,0.4) circle(.03); \node at (0.6,0.6) {\footnotesize $z_n$}; \node at (1.6,0.6) {\footnotesize $z_{n-1}$}; \node at (2.6,0.6) {\footnotesize $z_{n-2}$}; \node at (4.6,0.6) {\footnotesize $z_2$}; \node at (5.6,0.6) {\footnotesize $z_1$}; \end{tikzpicture} \caption{Effect of exact symplectomorphisms $\tau$ on $L_0$ } \label{alexander} \end{figure} \begin{comment} \begin{figure}[htb!] 
\centering \begin{tikzpicture} [scale=1.1] \tikzset{->-/.style={decoration={ markings, mark=at position #1 with {\arrow[scale=2,>=stealth]{>}}},postaction={decorate}}} \draw (0,0) -- (6,0); \draw[red, ->-=.5] (0,0) -- (6,0); \draw [red, ->-=.5] (0,6) -- (6,0); \draw [red, ->-=.5] (1,6) -- (6,1); \draw [red, ->-=.5] (0,1) -- (1,0); \draw [red, ->-=.5] (2,6) -- (6,2); \draw [red, ->-=.5] (0,2) -- (2,0); \draw [red, ->-=.5] (0,5) -- (5,0); \draw [red, ->-=.5] (5,6) -- (6,5); \draw (6,6) -- (0,6); \draw (6,0) -- (6,6); \draw (0,0) -- (0,6); \node at (1,5) {$\star$}; \node at (2,5) {$\star$}; \node at (3,5) {$\star$}; \node at (6,5) {$\star$}; \node at (0.5,0) {$\star$}; \node at (0,6.2) {\footnotesize $\tau_1L_0$}; \node at (1,6.2) {\footnotesize $\tau_2L_0$}; \node at (2,6.2) {\footnotesize $\tau_3L_0$}; \node at (5,6.2) {\footnotesize $\tau_nL_0$}; \node at (3,-0.3) {\footnotesize $L_0$}; \draw[red, thick, fill=red] (0.6,2) circle(.02); \draw[red, thick, fill=red] (1.3,2) circle(.02); \draw[red, thick, fill=red] (2,2) circle(.02); \draw[red, thick, fill=red] (3.7,5) circle(.02); \draw[red, thick, fill=red] (4.4,5) circle(.02); \draw[red, thick, fill=red] (5.1,5) circle(.02); \draw[black, thick, fill=black] (2.8,0.5) circle(.01); \draw[black, thick, fill=black] (3.2,0.5) circle(.01); \draw[black, thick, fill=black] (3.6,0.5) circle(.01); \draw[thick, fill=black] (0.2,0.4) circle(.03); \draw[thick, fill=black] (1.2,0.4) circle(.03); \draw[thick, fill=black] (2.2,0.4) circle(.03); \draw[thick, fill=black] (4.2,0.4) circle(.03); \draw[thick, fill=black] (5.2,0.4) circle(.03); \node at (0.2,0.6) {\footnotesize $z_1$}; \node at (1.2,0.6) {\footnotesize $z_2$}; \node at (2.2,0.6) {\footnotesize $z_3$}; \node at (4.2,0.6) {\footnotesize $z_{n-1}$}; \node at (5.2,0.6) {\footnotesize $z_n$}; \end{tikzpicture} \caption{Effect of exact symplectomorphisms $\tau_i$ on $L_0$} \label{figure4} \end{figure} \end{comment} Next, recall that a symplectomorphism $\phi: \mathbb{T}_0 \to \mathbb{T}_0$ is exact if $[\phi^* \theta - \theta ] = 0 \in H^1(\mathbb{T}_0)$. On the other hand, the compact Lagrangians $L_i$ form a basis of $H_1(\mathbb{T}_0)$. Hence, to show that $\tau$ is exact, it suffices to check that $\int_{\tau(L_i)} \iota_Z \omega = \int_{L_i} \iota_Z \omega$ for all $i=0,\ldots, n$. We have chosen the primitive $\theta = \iota_Z \omega$ above so that both integrals are zero for $i=0$. On the other hand, for $i=1,\ldots, n$, we note that $\tau(L_i) = L_i$, since $L_i = \{ ((n-i)/n, y) : y \in {\Bbb R}/{\Bbb Z} \}$ for $i=1,\ldots, n$ and $\tau(L_i) = \{ ((n-i)/n , y- (n-i)) : y \in {\Bbb R}/ {\Bbb Z} \}$ are the same curves. Finally, we will apply Alexander's method (see \cite{primer}) using $L_0, L_1, \ldots, L_n$ as test curves to show that $\tau$ is isotopic to the composition of Dehn twists around $L_i$. Since $\tau({L}_i) = L_i$ for $i=1,\ldots n$, it suffices to check that $\tau({L}_0)$ is isotopic to the curve that is obtained by Dehn twisting $L_0$ around $L_1,\ldots, L_n$. But this is clear from the depiction of $\tau(L_0)$ in Figure \ref{alexander}. \qed\vspace{3mm} For a fixed $m_0$, we can choose $\epsilon$ small enough so that for all $1 \leq m \leq m_0$, $\tau^m : \mathbb{T}_0 \to \mathbb{T}_0$ is given by \[ \tau^m(x,y) = (x, y-mnx) \] outside an arbitrary small neighborhood of $D$ and is a model for the $m^{th}$ power of the composition of the Dehn twists around $L_1,\ldots, L_n$. Therefore, we can use these linear models to compute triangle products. 
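\begin{rem}
The following elementary sanity check is not part of the argument, but may help in keeping track of the linear models. In the basis of $H_1(\mathbb{T};{\Bbb Z})$ given by the classes of $L_0$ and of a vertical circle, each Dehn twist around one of the $L_i$ acts by the matrix $\left(\begin{smallmatrix} 1 & 0 \\ -1 & 1\end{smallmatrix}\right)$ (with orientation conventions matching the formula for $\rho$ above), so the $m^{th}$ power of the composition of the $n$ twists acts by $\left(\begin{smallmatrix} 1 & 0 \\ -mn & 1\end{smallmatrix}\right)$, in agreement with $\tau^m(x,y)=(x,y-mnx)$; moreover, $L_0=\{y=0\}$ meets $\tau^m L_0=\{y=-mnx\}$ in exactly $mn$ points, which will reappear below as the rank of $HF^*(L_0,\tau^m L_0)$. The short Python script below (included purely as an illustration; the function names are ours) verifies these two statements for small values of $n$ and $m$.
{\small
\begin{verbatim}
from fractions import Fraction

def mat_mul(a, b):
    # product of two 2x2 integer matrices
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def twist_power(n, m):
    # each Dehn twist around a vertical circle acts on H_1 by [[1,0],[-1,1]];
    # iterate the composition of the n twists m times
    t, result = [[1, 0], [-1, 1]], [[1, 0], [0, 1]]
    for _ in range(n * m):
        result = mat_mul(result, t)
    return result

def intersection_points(n, m):
    # L_0 = {y=0} meets tau^m(L_0) = {y=-m*n*x} exactly where m*n*x is an
    # integer, i.e. at x = k/(m*n) mod 1 for k = 0,...,m*n-1
    return {Fraction(k, m * n) % 1 for k in range(m * n)}

for n in range(1, 5):
    for m in range(1, 5):
        assert twist_power(n, m) == [[1, 0], [-m * n, 1]]
        assert len(intersection_points(n, m)) == m * n
print("checked n, m = 1, ..., 4")
\end{verbatim}
}
\end{rem}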
As explained in \cite{LP}, the ring structure on $\bigoplus_{N \geq 0 } HF^* (L_0, \tau^{N} L_0)$ is determined by the Floer triangle products: \[ \mathfrak{m}_2: HF^*(\tau^{m_1}L_0, \tau^{m_1+m_2}L_0) \otimes HF^*(L_0, \tau^{m_1}L_0) \to HF^*(L_0, \tau^{m_1+m_2}(L_0)). \] This is what we compute next. First, let us observe that since the intersection points of $L_0$ and $\tau^{m}L_0$ all live in degree 0 by our grading choices, the Floer cohomology $HF^*(L_0, \tau^{m}L_0)$ is freely generated by $L_0 \cap \tau^{m}L_0$. Note that $L_0$ intersects $\tau^{m}L_0$ at the points $x + \mathbb{Z}$, where \[ -nmx \in \mathbb Z. \] So, $x \in \{ 0, \frac{1}{nm}, \frac{2}{nm}, \ldots, \frac{nm-1}{nm} \} + \mathbb{Z}$. Thus, we get a one-to-one correspondence between the $x$-coordinates of the intersection points $L_0 \cap \tau^{m}L_0$ and elements of the set $\frac{1}{m}\mathbb{Z}/n\mathbb{Z}$, so we write \[ HF^*(L_0, \tau^{m}L_0) = \bigoplus_{p \in \frac{1}{m}\mathbb{Z}/n\mathbb{Z}} \mathbb{Z}[[t_1,\ldots,t_n]] x_{m,p}. \] Note also that the Dehn twist $\tau^{m_1}$ gives an identification of $HF(L_0, \tau^{m_2} L_0)$ and $HF(\tau^{m_1} L_0, \tau^{m_1+m_2} L_0)$ by mapping the intersection points $L_0 \cap \tau^{m_2} L_0$ bijectively onto $\tau^{m_1}L_0 \cap \tau^{m_1+m_2}L_0$. Thus, we have \[ HF^*(\tau^{m_1}L_0, \tau^{m_1+m_2}L_0) \cong \bigoplus_{p \in \frac{1}{m_2}\mathbb{Z}/n\mathbb{Z}} \mathbb{Z}[[t_1,\ldots,t_n]] \tau^{m_1}x_{m_2,p}. \] The ring structure on $R_{C_{mirror}} \cong \bigoplus_{N \geq 0} HF^0(L_0, \tau^{N}L_0)$ is given by the triangle counts \[ x_{m_2, p_2} \cdot x_{m_1, p_1} = \mathfrak{m}_2 (\tau^{m_1} x_{m_2,p_2}, x_{m_1,p_1}). \] In the following proposition we compute this explicitly. \begin{prop} Let $x_{m_i,p_i} \in HF^*(L_0, \tau^{m_i} L_0)$, where $p_i \in \frac{1}{m_i} \mathbb{Z}/ n\mathbb{Z}$. For $\epsilon \ll 1/ n(m_1+m_2)$, we have: \[ x_{m_2,p_2} \cdot x_{m_1,p_1} = \mathfrak{m}_2 ( \tau^{m_1}(x_{m_2,p_2}), x_{m_1,p_1}) = \sum_{k \in {\Bbb Z} } x_{m_1+m_2,E(p_1, p_2+ kn)} \Pi_{j=1}^{n} t_j^{n\lambda(\frac{p_1+j}{n}, \frac{p_2+j}{n}+k)} \] where $$E(a,b)=\frac{m_1a+m_2b}{m_1+m_2},$$ $$\lambda(a,b)=m_1\phi(a)+m_2\phi(b)-(m_1+m_2)\phi(E(a,b))$$ and $\phi$ is the piecewise linear function given by (\ref{phi-def}). \end{prop} \noindent {\it Proof}. This is analogous to the computation given in \cite{LP} with the exception that there are $n$ different marked points and we need to keep track of intersection numbers of triangles with respect to these. On the other hand, one can compute these intersection numbers one point at a time, hence Brion's formula \cite{brion} (see also \cite{Barvinok}) applied in \cite{LP} can be used here as well. For $k \in {\Bbb Z}$, let us set $p_3 = E(p_1, p_2+kn)$ and $m_3 = m_1 + m_2$. For $p_i \in \frac{1}{m_i} {\Bbb Z}/ n{\Bbb Z}$ and fixed $j \in \{1,\ldots, n\}$, let us set \begin{align*} \frac{p_1+j}{n} &= q_1 + r_1/m_1 n, \text{ where } q_1, r_1 \in {\Bbb Z} \text{ and } 0 \leq r_1 < m_1 n, \\ \frac{p_2+kn+j}{n} &= q_2 + r_2/m_2 n, \text{ where } q_2, r_2 \in {\Bbb Z} \text{ and } 0 \leq r_2 < m_2 n, \\ \frac{p_3+j}{n} &= q_3 + r_3/m_3 n, \text{ where } q_3, r_3 \in {\Bbb Z} \text{ and } 0 \leq r_3 < m_3 n. \end{align*} Then, one can compute \begin{align*} &n\lambda(\frac{p_1+j}{n}, \frac{p_2+j}{n}+k) = \\ & n \left(m_1 \frac{q_1(q_1-1)}{2} + m_2 \frac{q_2(q_2-1)}{2} - m_3\frac{q_3(q_3-1)}{2} \right) + r_1 q_1 + r_2 q_2 -r_3 q_3. 
\end{align*} The count of triangles contributing to $\mathfrak{m}_2 (\tau^{m_1}x_{m_2,p_2},x_{m_1,p_1})$ can be enumerated as embedded triangles, $T(p_1,p_2+kn)$ for $k\in \mathbb{Z}$, in the universal cover ${\Bbb R}^2$. The first vertex of $T(p_1,p_2+kn)$ is a lift of $x_{m_1,p_1}$ which we can fix to be the point $(\frac{p_1}{n}, 0)$. The second vertex is a lift of $\tau^{m_1}(x_{m_2,p_2})$, which lies on the line of slope $-m_1n$ that passes through $(\frac{p_1}{n}, 0) $, so it has to be of the form \[ \left( \frac{p_2}{n}+k , -m_1(p_2+nk-p_1) \right) , \ \ \ k \in {\Bbb Z}. \] Finally, the third vertex is a lift of $x_{m_1+m_2, E(p_1,p_2+kn)}$ and has the coordinates \[ (\frac{ m_1 p_1 + m_2 (p_2+kn)}{(m_1+m_2)n}, 0) = (\frac{E(p_1,p_2 +kn)}{n} , 0) , \ \ \ k \in {\Bbb Z}. \] Figure \ref{figure5} shows the type of triangles that appear in the computation. \begin{figure}[htb!] \centering \begin{tikzpicture} [scale=1.1] \tikzset{->-/.style={decoration={ markings, mark=at position #1 with {\arrow[scale=2,>=stealth]{>}}},postaction={decorate}}} \node at (-2,6.3) {\footnotesize $L_0$}; \node at (2,4) {\footnotesize $\tau^{m_1+m_2} L_0$}; \node at (-2,4) {\footnotesize $\tau^{m_1} L_0$}; \draw (-4,6) -- (0,6); \node at (-5,6) {\footnotesize $(\frac{p_1}{n}, 0)$}; \draw (0,6) -- (2,2); \node at (3,1.6) {\footnotesize $\left( \frac{p_2}{n}+k , -m_1(p_2+nk-p_1) \right)$ }; \draw (-4,6) -- (2,2); \node at (2,6) {\footnotesize $(\frac{E(p_1,p_2+kn)}{n}, 0)$}; \end{tikzpicture} \caption{Contributions to the product $x_{m_2,p_2} \cdot x_{m_1,p_1}$} \label{figure5} \end{figure} For a convex region $C \subset \mathbb{R}^2$ and $j=1,\ldots,n$ and $\epsilon>0$, let us consider the 2-variable Laurent series recording the count of points: \[ F_{C,j} (x,y) = \sum_{ \{(a,b) \in \mathbb{Z}^2 : (a- \frac{j}{n} + \epsilon, b +\epsilon) \in C\} } x^a y^b. \] Note that $F_{T(p_1,p_2+kn),j}(1,1)$ records the intersection number of the triangle $T(p_1,p_2+kn)$ with the base-point at $z_j = ((n-j)/n + \epsilon, \epsilon)$. We want to show that $$F_{T(p_1,p_2+kn),j}(1,1) = n\lambda(\frac{p_1+j}{n}, \frac{p_2+j}{n}+k).$$ It is slightly more convenient to work with perturbed lattice points $({\Bbb Z}+\epsilon)^2$, hence we rewrite this as: \[ F_{C,j} (x,y) = \sum_{ \{(a,b) \in \mathbb{Z}^2 : (a + \epsilon, b +\epsilon) \in C(j)\} } x^a y^b \] where $C(j)= C + {(\frac{j}{n}, 0)}$ is a horizontal translate of $C$. To apply Brion's formula, we consider the convex cones $C_1, C_2 , C_3$ with sides parallel to the sides of the triangle $T(p_1,p_2+kn)+ (\frac{j}{n},0)$ and the tip points at the vertices $(\frac{p_1+j}{n}, 0)$, $\left( \frac{p_2+j}{n}+k , -m_1(p_2+nk-p_1) \right)$ and $(\frac{p_3+j}{n}, 0)$. Each of $F_{i,j} := F_{C_i,j}$ is a rational function of $x$ and $y$ and Brion's formula gives: \[ F_{T(p_1,p_2+kn),j} = F_{1,j} + F_{2,j} + F_{3,j}. \] To compute $F_{T(p_1,p_2+kn),j}(1,1)$ it suffices to specialize to $y=1$, so let us set $G_{i,j} = {F_{i,j}}|_{{y=1}}$. We can compute the generating function $G_{i,j}$ by first counting the points in a primitive parallelogram $P_i$ of the cone and then tiling the cone. For example, $P_1$ is the parallelogram with the corners \[ (\frac{p_1+j}{n},0), (\frac{p_1+j}{n}+1, 0), (\frac{p_1+j}{n}+1, -nm_1), (\frac{p_1+j}{n}+2 , -nm_1). 
\] The resulting count can be expressed as follows: \[ G_{1,j} = g_{1,j}(x) \frac{1}{(1-x)^2} \ \ \ , \ \ \ G_{2,j} = g_{2,j}(x) \frac{x^2}{(1-x)^2} \ \ \ , \ \ \ G_{3,j} = g_{3,j}(x) \frac{-x}{(1-x)^2}, \] where $g_{i,j}$ are counts of points in the primitive parallelograms. One can compute these explicitly: \[ g_{1,j}(x) = (nm_1-r_1) x^{q_1+1} + r_1 x^{q_1+2}, \] \[ g_{2,j}(x) = (nm_2-r_2) x^{q_2-1} + r_2 x^{q_2}, \] \[ g_{3,j}(x) = (nm_3 - r_3) x^{q_3} + r_3 x^{q_3+1}. \] Let $h(x) = g_{1,j}(x)+ x^2 g_{2,j}(x) - x g_{3,j}(x)$. Then, we have that $h(1)=0$ and $h'(1)=0$ since $r_1 + r_2 - r_3 = n (m_3 q_3 - m_2 q_2 - m_1 q_1)$. Hence, we can compute: \[ F_{T(p_1,p_2+kn),j}(1,1) = \frac{1}{2} h''(1) = n \lambda( \frac{p_1+j}{n}, \frac{p_2+j}{n}+k). \] \qed\vspace{3mm} Comparing the above computation of $R_{C_{mirror}}$ with the formulas for multiplication of theta functions (see Proposition \ref{Bsidecomp}) we derive the following key result. \begin{cor}\label{curve-isom-cor} There exists a ${\Bbb Z}[[t_1,\ldots,t_n]]$-linear isomorphism of the homogeneous ring $R_{C_{mirror}}$ with the ring $\bigoplus_{N\ge 0}H^0(T_n,L^{\otimes N})$, where $T_n$ is the $n$-Tate curve with its natural polarization $L$. In particular, we get an isomorphism of curves $C_{mirror}\simeq T_n$ over ${\Bbb Z}[[t_1,\ldots,t_n]]$. \end{cor} \subsection{Generating objects on the $B$ side}\label{gen-B-sec} Let $G_n=\cup_{i=1}^n C_i$ be the standard $n$-gon curve over ${\Bbb Z}$. We assume that the components $C_i\simeq \P^1$ are glued so that the point $0\in C_i$ is identified with the point $\infty\in C_{i+1}$. For each $i$ let $p_i\subset C_i$ be the ${\Bbb Z}$-point with the coordinate $1$, and let $\pi_i:\P^1\to G_n$ denote the natural map with the image $C_i$. Note that in the case $n=1$ this becomes the normalization map. Let us denote by $q_i$ the node at the intersection of $C_i$ and $C_{i+1}$ (so $q_i$ is a closed subscheme, isomorphic to $\operatorname{Spec}({\Bbb Z})$). We identify indices with elements of ${\Bbb Z}/n$, so $q_0=q_n$. For a commutative ring $R$ we set $G_{n,R}=G_n\times \operatorname{Spec}(R)$. We still denote by $p_i$ and $q_i$ the $R$-points of $G_{n,R}$ obtained from the similar ${\Bbb Z}$-points of $G_n$. \begin{lem}\label{Perf-C-gen-lem} Let $(\mathcal{C},p_1,\ldots,p_n)$ be a flat proper family of pointed curves (where the marked points are smooth and distinct) over $\operatorname{Spec}(R)$, where $R$ is a Noetherian ring. Assume that $\mathcal{O}(p_1+\ldots+p_n)$ is ample on every fiber. Then the perfect derived category $\operatorname{Perf}(\mathcal{C})$ is split-generated by the objects $(\mathcal{O}_\mathcal{C}, \mathcal{O}_{p_1},\ldots,\mathcal{O}_{p_n})$. \end{lem} \noindent {\it Proof}. Using the twist functors with respect to $\mathcal{O}_{p_i}$ we obtain all the line bundles $L^m$, where $L=\mathcal{O}_{\mathcal{C}}(p_1+\ldots+p_n)$ and $m\in{\Bbb Z}$. But $L$ is ample, and it is well known that all powers of an ample line bundle generate the perfect derived category (see e.g., \cite[Thm.\ 4]{O-generators}). \qed\vspace{3mm} \begin{cor}\label{Perf-C-gen-cor} For every Noetherian ring $R$, the category $\operatorname{Perf}(G_{n,R})$ is split-generated by the objects \begin{equation}\label{F-collection-eq} F_0:=\mathcal{O}_{G_{n,R}} \ \text{ and } \ F_i:=\mathcal{O}_{p_i}, \ i=1,\ldots,n. \end{equation} \end{cor} Next, we consider generators of the full derived category $D^b(\operatorname{Coh} G_{n,R})$.
Let us define a collection of objects $(\hat{F}_0,\ldots,\hat{F}_n)$ in $D^b(\operatorname{Coh} G_{n,R})$ as follows: \begin{equation}\label{Fhat-collection-eq} \hat{F}_i:=(\pi_{i})_* \mathcal{O}(-1)[1], \ i=1,\ldots,n, \ \ \hat{F}_0:=\mathcal{O}_{q_0}. \end{equation} \begin{prop}\label{Db-Coh-gen-prop} Assume $R$ is a regular ring. Then the category $D^b(\operatorname{Coh} G_{n,R})$ is split-generated by the objects $(\hat{F}_0,\ldots,\hat{F}_n)$. Furthermore, $D^b(\operatorname{Coh} G_{n,R})$ is generated (as a triangulated category) by the sheaves $P\otimes_R\hat{F}_i$, $i=0,\ldots,n$, where $P$ ranges over finitely generated projective $R$-modules. \end{prop} \noindent {\it Proof} . Below $P$ denotes a finitely generated projective $R$-module. Set $C=G_{n,R}$, and let $\mathcal{C}\subset D^b(\operatorname{Coh} C)$ (resp., $\mathcal{C}'\subset D^b(\operatorname{Coh} C)$) be the thick subcategory split-generated by $\hat{F}_i$, $i=0,\ldots,n$ (resp., the triangulated subcategory generated by $(P\otimes_R\hat{F_i})$). Note that $\mathcal{C}'\subset \mathcal{C}$. \noindent {\bf Step 1}. One has $P\otimes_R (\pi_{i})_* \mathcal{O}\in \mathcal{C}'$, $P\otimes_R \mathcal{O}_{q_i}\in \mathcal{C}'$ for $i=0,\ldots,n-1$. This follows easily by considering the exact sequences \begin{equation}\label{O(-1)-O-qi-seq} 0\to (\pi_{i})_* \mathcal{O}(-1)\to (\pi_{i})_* \mathcal{O}\to \mathcal{O}_{q_i}\to 0, \end{equation} \begin{equation}\label{O(-1)-O-qi-bis-seq} 0\to (\pi_{i+1})_* \mathcal{O}(-1)\to (\pi_{i+1})_* \mathcal{O}\to \mathcal{O}_{q_i}\to 0, \end{equation} and tensoring them with $P$. Indeed, for $i=0$ these sequences give $P\otimes_R(\pi_{0})_* \mathcal{O}\in \mathcal{C}'$, $P\otimes_R(\pi_{1})_* \mathcal{O}\in \mathcal{C}'$. Next, the first sequence for $i=1$ gives $P\otimes_R\mathcal{O}_{q_1}\in \mathcal{C}'$, and then we continue using the induction on $i$. \noindent {\bf Step 2}. One has $\operatorname{Perf}(C)\subset \mathcal{C}$. Indeed, the exact sequence \begin{equation}\label{surgery} 0\to \mathcal{O}_C\to \bigoplus_{i=1}^n (\pi_{i})_* \mathcal{O} \to \bigoplus_{i=1}^n\mathcal{O}_{q_i}\to 0, \end{equation} together with Step 1, shows that $\mathcal{O}_C\in\mathcal{C}$. Also, the exact sequences $$ 0\to (\pi_{i})_* \mathcal{O}(-1)\to (\pi_{i})_* \mathcal{O}\to \mathcal{O}_{p_i}\to 0 $$ show that $\mathcal{O}_{p_i}\in \mathcal{C}$. But $\operatorname{Perf}(C)$ is split-generated by $\mathcal{O}_C$ and $\mathcal{O}_{p_i}$ by Corollary \ref{Perf-C-gen-cor}. \noindent {\bf Step 3}. The category $D^b(\operatorname{Coh} C)$ is split-generated by $\operatorname{Perf}(C)$ and by all the structure sheaves of nodes $\mathcal{O}_{q_i}$, $i=1,\ldots,n$, hence $\mathcal{C}=D^b(\operatorname{Coh} C)$. Indeed, this is proved similarly to \cite[Prop.\ 2.7]{O-compl}. Namely, let $\mathcal{T}\subset D^b(\operatorname{Coh} C)$ be the thick subcategory, split generated by $\operatorname{Perf}(C)$ and $(\mathcal{O}_{q_i})$, and let $j:U\hookrightarrow C$ be the complement to $Z=\cup_{i=1}^n q_i$. Note that any coherent sheaf supported on $Z$ is obtained as an iterated extension of coherent sheaves of the form $q_{i*}F$ for some $F$ on $\operatorname{Spec}(R)$. Since $R$ is regular, each $q_{i*}F$ belongs to the thick subcategory generated by $\mathcal{O}_{q_i}$. Let $D^b_Z(\operatorname{Coh} C)\subset D^b(\operatorname{Coh} C)$ be the subcategory of complexes with cohomology supported on $Z$. 
Then we obtain the inclusion $$D^b_Z(\operatorname{Coh} C)\subset \mathcal{T}.$$ On the other hand, it is well known that the quotient $D^b(\operatorname{Coh} C)/D^b_Z(\operatorname{Coh} C)$ is naturally equivalent to $D^b(\operatorname{Coh} U)$. Hence, the projection $D^b(\operatorname{Coh} C)\to D^b(\operatorname{Coh} C)/\mathcal{T}$ factors as a composition $$D^b(\operatorname{Coh} C)\rTo{j^*} D^b(\operatorname{Coh} U)\to D^b(\operatorname{Coh} C)/\mathcal{T}.$$ Now given a coherent sheaf $F$ on $C$ we can find an exact sequence of the form $$0\to G\to P_N\to P_{N-1}\to \ldots P_1\to P_0\to F\to 0$$ where $P_i$ are vector bundles on $C$, and $N>d$, where $d$ is the global dimension of $R$. Then the induced morphism $\a:F\to G[N+1]$ has a cone in $\operatorname{Perf}(C)$, so it becomes an isomorphism in $D^b(\operatorname{Coh} C)/\mathcal{T}$. On the other hand, $j^*\a=0$ since $\operatorname{Coh}(U)$ has homological dimension $d+1$. Therefore, $\a$ becomes zero in $D^b(\operatorname{Coh} C)/\mathcal{T}$, which gives that $F\in\mathcal{T}$, as required. \noindent {\bf Step 4}. Now we can prove that $\mathcal{C}'=D^b(\operatorname{Coh} C)$, i.e., $D^b(\operatorname{Coh} C)$ is generated by objects of the form $P\otimes_R \hat{F}_i$. By the previous steps we know that $\mathcal{C}'$ is dense in $D^b(\operatorname{Coh} C)$. Hence, by Thomason's theorem (\cite{thomason}), it is enough to see that the classes of our objects generate the Grothendieck group of $D^b(\operatorname{Coh} C)$. Let us consider the localization sequence $$\ldots\to K_0(D^b_Z(\operatorname{Coh} C))\to K_0(D^b(\operatorname{Coh} C))\to K_0(D^b(\operatorname{Coh} U))\to 0.$$ Since $R$ is regular, the objects of the form $P\otimes \mathcal{O}_{q_i}$ generate $D^b_Z(\operatorname{Coh} C)$. Thus, by Step 1, we have $D^b_Z(\operatorname{Coh} C)\subset\mathcal{C}'$. It remains to check that the images of the classes of $(P\otimes\hat{F}_i)_{i=1,\ldots,n}$ generate $K_0(D^b(\operatorname{Coh} U))$. But $U$ is the disjoint union of $n$ copies of $\operatorname{Spec}(R[t,t^{-1}])$, so we have an identification $$K_0(D^b(\operatorname{Coh} U))\simeq \bigoplus_{i=1}^n K_0(R[t,t^{-1}]).$$ It remains to observe that the map $[P]\mapsto [P\otimes\hat{F}_i|_U]$ corresponds to the standard map \begin{equation}\label{Rt-isom} K_0(R)\to K_0(R[t,t^{-1}]) \end{equation} followed by the inclusion as the $i^{th}$ component of the above direct sum. But \eqref{Rt-isom} is known to be an isomorphism, since $R$ is regular. \qed\vspace{3mm} \begin{cor}\label{Db-Coh-gen-cor} If $R$ is a regular ring with $\hat{K}_0(R)=0$ (here $\hat{K}_0\subset K_0$ is the kernel of the rank homomorphism) then the category $D^b(\operatorname{Coh} G_{n,R})$ is generated by the objects $(\hat{F}_i)_{i=0,\ldots,n}$ as a triangulated category. In particular, this is true if $R={\Bbb Z}$ or $R$ is a field. \end{cor} \noindent {\it Proof} . Indeed, in this case every finitely generated projective module over $R$ is stably free, so it has a $2$-term resolution by free modules. \qed\vspace{3mm} Our next result is that the collections $(F_0,\ldots,F_n)$ and $(\hat{F}_0,\ldots,\hat{F}_n)$ (see \eqref{F-collection-eq} and \eqref{Fhat-collection-eq}) are dual. \begin{prop}\label{dual-generators-prop} Let $R$ be a Noetherian ring, and consider the objects $(F_i)$ and $(\hat{F}_i)$ on $G_{n,R}$. 
One has $$R\operatorname{Hom}(F_i,\hat{F}_j)=R\operatorname{Hom}(\hat{F}_j,F_i)=0 \text{ for } i\neq j,$$ $$R\operatorname{Hom}(F_i,\hat{F}_i)\simeq R\operatorname{Hom}(\hat{F}_i,F_i)[1]\simeq R$$ for $i=0,\ldots,n$. \end{prop} \noindent {\it Proof} . Recall that since $F_i$ are perfect objects, by Serre duality, $$R\operatorname{Hom}(F_i,\hat{F}_j)\simeq R\operatorname{Hom}(R\operatorname{Hom}(\hat{F}_j,F_i)[1],R).$$ Thus, the assertion follows from $$R\operatorname{Hom}(\mathcal{O}_C,(\pi_{i})_* \mathcal{O}(-1))=R\Gamma(\P^1,\mathcal{O}(-1))=R\operatorname{Hom}(\mathcal{O}_{p_i},\mathcal{O}_{q_0})=0,$$ $$R\operatorname{Hom}((\pi_{i})_* \mathcal{O}(-1),\mathcal{O}_{p_j})=0 \text{ for } i\neq j,$$ $$R\operatorname{Hom}(\mathcal{O}_C,\mathcal{O}_{q_0})\simeq R\operatorname{Hom}((\pi_{i})_* \mathcal{O}(-1),\mathcal{O}_{p_i})\simeq R.$$ \qed\vspace{3mm} \begin{lem}\label{Ext-nodes-lem} For $C=G_{n,R}$ and any node $q_i\in C$ one has an isomorphism of graded algebras $$\operatorname{Ext}^*_C(\mathcal{O}_{q_i},\mathcal{O}_{q_i})\simeq R\langle u,v\rangle/(u^2,v^2),$$ where $\deg(u)=\deg(v)=1$. \end{lem} \noindent {\it Proof} . First, we claim that this algebra does not change if we replace $C$ by the affine curve $\operatorname{Spec}(A)$, where $A=R[x,y]/(xy)$. Indeed, note that for a locally free sheaf $V$ one has $\operatorname{Ext}^{>0}(V,\mathcal{O}_{q_i})=0$. Hence, we can use any locally free resolution $V_\bullet$ of $\mathcal{O}_{q_i}$ to compute $\operatorname{Ext}^*_C(\mathcal{O}_{q_i},\mathcal{O}_{q_i})$. Since the similar assertion holds for the $\operatorname{Ext}^*$ computed over the affine open $\operatorname{Spec}(A)$, our claim follows. Thus, we have to understand the algebra $\operatorname{Ext}^*_A(R,R)$. But the algebra $A$ is Koszul since it is given by monomial relations $xy=yx=0$, so the $\operatorname{Ext}$-algebra is just given by the quadratic dual. \qed\vspace{3mm} \subsection{Proofs of Theorems A and B(i)} $\phantom{x}$ \noindent {\bf Proof of Theorem A:} We follow the same strategy as in the proof of \cite[Thm.\ A]{LP}. Recall that by Theorem \ref{M1n-ainf-thm} (resp., by Proposition \ref{n=2-ainf-surj-prop}, if $n=2$), there exists a family of curves $(C_{mirror},p_1,\ldots,p_n,\omega)$ in $\widetilde{\mathcal{U}}_{1,n}^{sns}$ over ${\Bbb Z}[[t_1,\ldots,t_n]]$, such that the corresponding $A_\infty$-structure on $E_{1,n}\otimes {\Bbb Z}[[t_1,\ldots,t_n]]$ is gauge equivalent to the $A_\infty$-endomorphism algebra $\AA$ of the object $L_0\oplus\ldots\oplus L_n$ of $\mathcal{F}(\mathbb{T},D)$. We have already seen that the objects $(L_0,\ldots,L_n)$ split-generate the derived category of $\mathcal{F}(\mathbb{T},D)$, while the objects $(F_0,\ldots,F_n)$ split-generate $\operatorname{Perf}(C_{mirror})$ (see Lemmas \ref{relgenerate} and \ref{Perf-C-gen-lem}). Hence, the equivalence of the corresponding $A_\infty$-algebras implies the equivalence between the $A_\infty$-categories $D^\pi\mathcal{F}(\mathbb{T},D)$ and $\operatorname{Perf}(C_{mirror})$. Finally, Corollary \ref{curve-isom-cor} implies that $C_{mirror}$ is isomorphic to the $n$-Tate curve (as a family of curves over ${\Bbb Z}[[t_1,\ldots,t_n]]$).
\qed\vspace{3mm} As a corollary (from the above proof) we obtain a proof of Theorem B(i) stated in the introduction: \begin{cor}\label{Fuk-Perf-cor} For any commutative Noetherian ring $R$ there is an $R$-linear equivalence of the split-closed derived category of $\mathcal{F}(\mathbb{T}_0)\otimes R$ with $\operatorname{Perf}(G_{n,R})$, sending the objects $(L_0,L_1,\ldots,L_n)$ to $(\mathcal{O}, \mathcal{O}_{p_1},\ldots,\mathcal{O}_{p_n})$. \end{cor} \noindent {\it Proof} . We use the fact that the construction of the $A_\infty$-structure on $E_{1,n}\otimes A$ associated with a family of curves in $\widetilde{\mathcal{U}}^{sns}_{1,n}$ over $\operatorname{Spec}(A)$ is compatible with the base changes $A\to A'$. The $n$-Tate curve is such a family over ${\Bbb Z}[[t_1,\ldots,t_n]]$. Applying the base change ${\Bbb Z}[[t_1,\ldots,t_n]]\to{\Bbb Z}\to R$ gives $G_{n,R}$. Since the relevant objects still split-generate $D^\pi(\mathcal{F}(\mathbb{T}_0)\otimes R)$ and $\operatorname{Perf}(G_{n,R})$, respectively, this implies the result. \qed\vspace{3mm} {\bf Another proof of Theorem B(i):} We can give another proof of Corollary \ref{Fuk-Perf-cor}, which still uses the connection to the moduli space $\widetilde{\mathcal{U}}_{1,n}^{ns}$ but replaces the computations leading to Corollary \ref{curve-isom-cor} with Theorem \ref{B-wheel-char-thm}. Assume first that $R=k$ is a field. Then we can check that conditions (i) and (ii)' of Theorem \ref{B-wheel-char-thm} hold for $\mathscr{A}_0\otimes k$. Indeed, condition (i) follows from \cite[Thm.\ 8]{LPshort} when $\text{char}(k) \neq 2$ or $3$ and from the main result of \cite{LP} in arbitrary characteristic. As for condition (ii)', for each $i=1,\ldots, n$, let $N_i$ be a Weinstein neighborhood of the union of $L_0$ and $L_i$. Notice that $N_i$ is a Liouville subdomain in $\mathbb{T}_0$ whose completion $\hat{N}_i$ is a once-punctured torus. Now, the work \cite{AbouzSeidel} of Abouzaid and Seidel gives for each $i$ a restriction functor \[ \mathcal{W}(\mathbb{T}_0) \to \mathcal{W}(\hat{N}_i). \] At the level of objects, this functor intersects a Lagrangian in $\mathbb{T}_0$ with the neighborhood $N_i$ and extends it to the completion in an obvious way. In the case at hand, these restriction functors are easy to understand at the level of objects. In particular, compact Lagrangians contained in $N_i$ go to their obvious representatives in $\hat{N}_i$. By abuse of notation, we do not distinguish these notationally. By pre-composing with the full and faithful inclusion of $\mathcal{F}(\mathbb{T}_0) \to \mathcal{W}(\mathbb{T}_0)$, we obtain functors \[ r_i: \mathcal{F}(\mathbb{T}_0) \to \mathcal{W}(\hat{N}_i). \] Now, it follows easily from Lemma A.2 in \cite{AAEKO} that for $i\neq j$, $r_i(L_j)$ is a non-compact Lagrangian given by a cotangent fibre to $L_0$ inside $\hat{N}_i$. So, $r_i$ sends $\langle L_0, L_i \rangle$ into $\mathcal{F}(\hat{N}_i)$, while it sends $L_j$, for $i\neq j$, to an object of $\mathcal{W}(\hat{N}_i)$ which has infinite-dimensional endomorphisms. Hence, the functors $r_i$, for $i=1,\ldots n$, distinguish the subcategories $\langle L_0 , L_i \rangle$ of $D^\pi\mathcal{F}(\mathbb{T}_0)$, split-generated by $L_0$ and $L_i$. This completes the verification of condition (ii)' of Theorem \ref{B-wheel-char-thm}, hence we deduce that $\AA_0\otimes k$ is equivalent to the $A_\infty$-algebra associated with the standard $n$-gon over $k$. Next, let us consider the case $R={\Bbb Z}$. 
As in the proof of Theorem A, we see that $\AA_0$ is equivalent to the $A_\infty$-algebra coming from a family of curves $(C,p_1,\ldots,p_n,\omega)$ in $\widetilde{\mathcal{U}}_{1,n}^{sns}$ over ${\Bbb Z}$. Furthermore, the above argument shows that the base change of $(C,p_1,\ldots,p_n)$ with respect to any homomorphism ${\Bbb Z}\to k$, where $k$ is a field, gives the standard $n$-gon over $k$ (with one marked point on each component). Hence, the family $(C,p_1,\ldots,p_n)$ corresponds to a morphism from $\operatorname{Spec}({\Bbb Z})$ to an affine neighborhood of the standard $n$-gon (with marked points) in the moduli space of $n$-pointed stable curves without automorphisms. We have another such morphism, which corresponds to the standard $n$-gon over ${\Bbb Z}$. Since these morphisms agree on the generic point of $\operatorname{Spec}({\Bbb Z})$, they are in fact the same, so $C$ is the standard $n$-gon over ${\Bbb Z}$. Thus, the $A_\infty$-algebra $\AA_0$ is equivalent to the one coming from $G_{n,{\Bbb Z}}$. As in Corollary \ref{Fuk-Perf-cor}, this implies the equivalence of $\AA_0\otimes R$ with the $A_\infty$-structure associated with $G_{n,R}$, and hence the equivalence of $D^\pi(\mathcal{F}(\mathbb{T}_0)\otimes R)$ with $\operatorname{Perf}(G_{n,R})$. \qed\vspace{3mm} \subsection{Characterization of the generators on the B side} \begin{lem}\label{DbCoh-lem} Let $R$ be a Noetherian ring, and let $X$ be a projective scheme over $\operatorname{Spec}(R)$. Assume that $\mathcal{F}\in D(\operatorname{Qcoh} X)$ satisfies the following property: $R\operatorname{Hom}(P,\mathcal{F})$ is in $D^b(R)$ for every perfect complex $P$ on $X$. Then $\mathcal{F}\in D^b(\operatorname{Coh} X)$. \end{lem} \noindent {\it Proof} . Let $i:X\to \P^n_R$ be a closed embedding. It is enough to prove that $i_*\mathcal{F}$ is in $D^b(\operatorname{Coh} \P^n_R)$. Note that by our assumption for every $m\in{\Bbb Z}$ we have $$R\operatorname{Hom}(\mathcal{O}(m),i_*\mathcal{F})\simeq R\operatorname{Hom}(i^*\mathcal{O}(m),\mathcal{F})\in D^b(R).$$ Now let us consider the following sequence of mutations of $\mathcal{F}_0=i_*\mathcal{F}$: $$\mathcal{F}_1\to R\operatorname{Hom}(\mathcal{O},\mathcal{F}_0)\otimes_R \mathcal{O}\to \mathcal{F}_0\to \ldots,$$ $$\mathcal{F}_2\to R\operatorname{Hom}(\mathcal{O}(-1),\mathcal{F}_1)\otimes_R \mathcal{O}(-1)\to \mathcal{F}_1\to \ldots,$$ $$\ldots$$ $$\mathcal{F}_{n+1}\to R\operatorname{Hom}(\mathcal{O}(-n),\mathcal{F}_n)\otimes_R \mathcal{O}(-n)\to \mathcal{F}_n\to \ldots$$ Note that all the middle terms are in $D^b(\operatorname{Coh} \P^n_R)$ (this can be seen by induction). Then we have $R\operatorname{Hom}(\mathcal{O}(m),\mathcal{F}_{n+1})=0$ for $m=0,-1,\ldots,-n$. Since the category $D(\operatorname{Qcoh}\P^n_R)$ is generated by $\mathcal{O},\mathcal{O}(-1),\ldots,\mathcal{O}(-n)$ (see \cite[Thm.\ 2.1.2, 3.1.1]{B-VdB}), it follows that $\mathcal{F}_{n+1}=0$. Now the above triangles show that $i_*\mathcal{F}\in D^b(\operatorname{Coh} \P^n_R)$. \qed\vspace{3mm} \begin{lem}\label{char-nodes-lem} Let $R$ be a Noetherian ring, and let $C=G_{n,R}$. Let $F\in D(\operatorname{Qcoh} C)$ be such that $\operatorname{Ext}^*(\mathcal{O}_{p_i},F)=0$ for all $i=1,\ldots,n$, and $\operatorname{Ext}^*(\mathcal{O}_C,F)\simeq R$, concentrated in degree $0$. Then there exists an $R$-point $p:\operatorname{Spec}(R)\to C$ such that $F\simeq\mathcal{O}_p$. Furthermore, assume that $R=k$ is a field.
Then $\operatorname{Ext}^*(F,F)$ is infinite-dimensional (equivalently, $\operatorname{Ext}^1(F,F)$ is $2$-dimensional) if and only if $p$ is one of the nodes. \end{lem} \noindent {\it Proof} . First, since $\mathcal{O}_{p_i}$ and $\mathcal{O}_C$ generate $\operatorname{Perf}(C)$ (see Corollary \ref{Perf-C-gen-cor}), we see that our assumptions imply that $R\operatorname{Hom}(P,F)$ is in $\operatorname{Perf}(R)$ for every $P\in\operatorname{Perf}(C)$. Hence, using Lemma \ref{DbCoh-lem} we derive that $F$ is in $D^b(\operatorname{Coh} C)$. Next, let us consider the case when $R$ is a field. Then using the spectral sequences of the form $$\bigoplus_k \operatorname{Ext}^q_C(\underline{H}^{k-p}(P),\underline{H}^k(F))\implies \operatorname{Ext}^{p+q}(P,F)$$ for $P\in\operatorname{Perf}(C)$, as in \cite[Lem.\ 8.9]{LP}\footnote{The relevant spectral sequences converge due to the fact that $R\underline{\operatorname{Hom}}(P,F)\in D^b(\operatorname{Coh} C)$.} we see that $F$ is supported on the open affine subset $C\setminus \{p_1,\ldots,p_n\}$, and then deduce that it is of the form $\mathcal{O}_p$. The last assertion follows from the fact that over a field, $\operatorname{Ext}^*(\mathcal{O}_q,\mathcal{O}_q)$ is infinite-dimensional (resp., $\operatorname{Ext}^1(\mathcal{O}_q,\mathcal{O}_q)$ is $2$-dimensional) for each node $q$ (see Lemma \ref{Ext-nodes-lem}), whereas $\operatorname{Ext}^*(\mathcal{O}_p,\mathcal{O}_p)$ is finite-dimensional (resp., $\operatorname{Ext}^1(\mathcal{O}_p,\mathcal{O}_p)$ is $1$-dimensional) for each smooth point $p$. Now let us consider the case of general $R$. Since $C$ is flat over $R$, for every $P\in \operatorname{Perf}(C)$ the formation of $R\operatorname{Hom}(P,F)$ is compatible with the base change. Thus, we deduce that for every homomorphism $R\to k$, with $k$ a field, the object $F\otimes^{{\Bbb L}}_R k$ is a structure sheaf of a point on $C\times_{\operatorname{Spec}(R)}\operatorname{Spec}(k)$. This implies that $F$ is in fact the push-forward of a line bundle on $\operatorname{Spec}(R)$ with respect to some section $p:\operatorname{Spec}(R)\to C$. Finally, the condition $H^0(C,F)\simeq R$ implies that this line bundle is trivial. \qed\vspace{3mm} \begin{lem}\label{char-dual-to-point-lem} Let $C=G_{n,k}$, where $k$ is a field. \noindent (i) Assume $n>1$. Let $F\in D(\operatorname{Qcoh} C)$ be such that $\operatorname{Ext}^*(\mathcal{O},F)=0$, $\operatorname{Ext}^*(\mathcal{O}_{p_i},F)=0$ for $i=2,\ldots,n$, and $\operatorname{Ext}^*(\mathcal{O}_{p_1},F)$ is one-dimensional, concentrated in degree $1$. Then $F\simeq \hat{F}_1=(\pi_{1})_* \mathcal{O}(-1)$. \noindent (ii) Now let $n=1$. Then any $F\in D(\operatorname{Qcoh} C)$ such that $\operatorname{Ext}^*(\mathcal{O},F)=0$ and $\operatorname{Ext}^*(\mathcal{O}_{p_1},F)$ is one-dimensional, sitting in degree $1$, is either a nontrivial line bundle of degree $0$ on $C$ or isomorphic to $\pi_*\mathcal{O}(-1)$, where $\pi:\P^1\to C$ is the normalization map. \end{lem} \noindent {\it Proof} . First, as in Lemma \ref{char-nodes-lem}, we see that such $F$ is in $D^b(\operatorname{Coh} C)$. Next, we observe that for $P=\mathcal{O}$ or $P=\mathcal{O}_{p_i}$ the spectral sequence $$E_2^{rs}=\operatorname{Ext}^r(P,\underline{H}^s(F)) \implies \operatorname{Ext}^{r+s}(P,F)$$ degenerates at $E_2$, since $\operatorname{Ext}^r(P,\underline{H}^s(F))=0$ for $r\neq 0,1$. Thus, each cohomology sheaf $G=\underline{H}^s(F)$ still satisfies $H^*(C,G)=0$ and $\operatorname{Ext}^*(\mathcal{O}_{p_i},G)=0$ for $i=2,\ldots,n$.
Note that the condition $H^0(C,G)=0$ implies that $G$ is torsion free. Hence, from the vanishing of $\operatorname{Ext}^*(\mathcal{O}_{p_i},G)$ for $i=2,\ldots,n$ we deduce that $G$ is supported on $C_1$. Assume first that $n>1$. Then we claim that $G$ is necessarily of the form $V\otimes (\pi_{1})_* \mathcal{O}(-1)$ for some vector space $V$, which will imply our statement due to the condition $\dim \operatorname{Ext}^*(\mathcal{O}_{p_1},G)=1$. Indeed, any torsion free coherent sheaf supported on $C_1$ is necessarily of the form $G=(\pi_{1})_* \widetilde{G}$, as follows from the classification of torsion free modules over $k[[x,y]]/(xy)$ (see \cite[Sec.\ 7]{BD}, \cite{Bass}). Now we have $H^*(C,G)=H^*(\P^1,\widetilde{G})=0$, hence, $\widetilde{G}\simeq V\otimes \mathcal{O}(-1)$. Now assume $n=1$. Then using the condition $\dim \operatorname{Ext}^*(\mathcal{O}_{p_1},G)=1$ we see that $G$ is a rank $1$ torsion free sheaf on $C$. It is well known that such $G$ is either a line bundle or has the form $\pi_*L$, where $L$ is a line bundle on $\P^1$ (see e.g., \cite[Sec.\ 1.2]{Jarvis}). Now the condition $H^*(C,G)=0$ implies that in the former case $G$ has to be a nontrivial line bundle of degree $0$, while in the latter case $G\simeq \pi_*\mathcal{O}(-1)$. \qed\vspace{3mm} \subsection{Full faithfulness of the Yoneda functor $\mathcal{W}(\mathbb{T}_0)\to D(\mod \text{-}\mathcal{F}(\mathbb{T}_0))$} In this section $R$ denotes a commutative Noetherian ring. If $\mathcal{C}$ is an enhanced triangulated category, $\mathcal{C}_0\subset \mathcal{C}$ a full subcategory, then by the {\it Yoneda functor} for the pair $(\mathcal{C},\mathcal{C}_0)$ we mean the functor $$\mathcal{C}\to D(\mod \text{-}\mathcal{C}_0)$$ sending $X$ to the module $X_0\mapsto \operatorname{Hom}_{dg}(X_0,X)$. The following result is well known (see \cite[Thm.\ 8.9]{Toen}). \begin{lem}\label{B-Qcoh-lem} The Yoneda functor $D(\operatorname{Qcoh} G_{n,R})\to D(\mod \text{-}\operatorname{Perf}(G_{n,R}))$ is an equivalence. \end{lem} This implies that the Yoneda functor $D^b(\operatorname{Coh} G_{n,R})\to D(\mod \text{-} \operatorname{Perf}(G_{n,R}))$ is fully faithful. We want to prove the following analog of this result on the symplectic side. \begin{thm}\label{A-fully-faithful-thm} Assume that $R$ is a regular ring. Then the Yoneda functor $$Y:\mathcal{W}(\mathbb{T}_0)\otimes R\to D(\mod \text{-}\mathcal{F}(\mathbb{T}_0)\otimes R)$$ is fully faithful. \end{thm} The proof will substantially use the equivalence $$\mathcal{F}(\mathbb{T}_0)\otimes R\simeq \operatorname{Perf}(G_{n,R}),$$ sending $(L_0,L_1,\ldots,L_n)$ to $(\mathcal{O}, \mathcal{O}_{p_1},\ldots,\mathcal{O}_{p_n})$ (see Corollary \ref{Fuk-Perf-cor}). Namely, using this equivalence and Lemma \ref{B-Qcoh-lem} we can identify $D(\mod \text{-}\mathcal{F}(\mathbb{T}_0))$ with $D(\operatorname{Qcoh} G_{n,R})$, and view the Yoneda functor for the pair $(\mathcal{W}(\mathbb{T}_0), \mathcal{F}(\mathbb{T}_0))$ as a functor \begin{equation}\label{Yoneda-W-Qcoh-eq} Y:\mathcal{W}(\mathbb{T}_0)\otimes R\to D(\operatorname{Qcoh} G_{n,R}), \end{equation} whose restriction to $\mathcal{F}(\mathbb{T}_0)$ is fully faithful and sends $(L_0,L_1,\ldots,L_n)$ to $(\mathcal{O}, \mathcal{O}_{p_1},\ldots,\mathcal{O}_{p_n})$. We will use the following notation below: for a coherent sheaf $F$ on $G_{n,R}$, for which we have already constructed an object $\Lambda_F$ in $\mathcal{W}(\mathbb{T}_0)\otimes R$ such that $Y(\Lambda_F)\simeq F$, we will write $[F]=\Lambda_F$.
Note that the functor $Y$ extends naturally to $D^b (\mathcal{W}(\mathbb{T}_0)\otimes R)$ and that for $F$ in $\operatorname{Perf}(G_{n,R})$ we already have an object $[F]$ in $D^\pi(\mathcal{F}(\mathbb{T}_0)\otimes R)$ such that $Y([F])=F$, due to Corollary \ref{Fuk-Perf-cor}. For example, $[\mathcal{O}]=L_0$, $[\mathcal{O}_{p_i}]=L_i$. \begin{lem}\label{Y-hat-L0-lem} Assume that $\operatorname{Spec}(R)$ is connected. Then there exists $i_0$ such that $Y(\hat{L}_0)\simeq \mathcal{O}_{q_{i_0}}$. \end{lem} \noindent {\it Proof} . By Lemma \ref{char-nodes-lem}, there exists an $R$-point $p:\operatorname{Spec}(R)\to G_{n,R}$ such that $Y(\hat{L}_0)\simeq \mathcal{O}_p$. Furthermore, for every homomorphism $R\to k$, where $k$ is a field, by the same Lemma, the $k$-point associated with $p$ is a node of $G_{n,k}$. This implies that the image of $p$ is contained in $\sqcup_{i=1}^n q_i(\operatorname{Spec}(R))$. Since $\operatorname{Spec}(R)$ is connected, the assertion follows. \qed\vspace{3mm} \begin{rem} Later we will prove that $Y(\hat{L}_0)\simeq\mathcal{O}_{q_n}$. \end{rem} Recall that we have an exact symplectomorphism $T : \mathbb{T}_0 \to \mathbb{T}_0$ given by translation (see \eqref{T-translation-eq}). This is an exact symplectomorphism of $\mathbb{T}_0$ which is of contact type at infinity. Such an exact symplectomorphism acts on all the data used to construct $\mathcal{F}(\mathbb{T}_0)$ and $\mathcal{W}(\mathbb{T}_0)$, so we get the following result. \begin{lem} \label{cyclicsym} The Yoneda functor $Y : \mathcal{W}(\mathbb{T}_0)\otimes R \to D(\mod\text{-}\mathcal{F}(\mathbb{T}_0)\otimes R)$ is $T$-equivariant. \qed\vspace{3mm} \end{lem} \noindent {\it Proof of Theorem \ref{A-fully-faithful-thm}}. It is enough to consider the case when $\operatorname{Spec}(R)$ is connected. In this case by Lemma \ref{Y-hat-L0-lem}, we have $[\mathcal{O}_{q_{i_0}}]=\hat{L}_0$. Since $Y$ is compatible with the cyclic symmetry (see Lemma \ref{cyclicsym}), the objects $[\mathcal{O}_{q_1}],\ldots,[\mathcal{O}_{q_n}]$ are obtained from $\hat{L}_0$ by the action of $T$ (see \eqref{T-translation-eq}). Hence, by Lemma \ref{another-generators-W-lem}, $\mathcal{W}(\mathbb{T}_0)\otimes R$ is split-generated by $\mathcal{F}(\mathbb{T}_0)\otimes R$ and by $[\mathcal{O}_{q_1}],\ldots,[\mathcal{O}_{q_n}]$. Since we already know that $Y$ is fully faithful when restricted to $\mathcal{F}(\mathbb{T}_0)\otimes R$, it is enough to prove that for any $a\in\mathcal{F}(\mathbb{T}_0)$ and $i,j\in [1,n]$, the following natural maps are isomorphisms: \begin{equation}\label{Hom-a-q-eq} \operatorname{Hom}^*(a,[\mathcal{O}_{q_i}])\to \operatorname{Hom}^*(Y(a),\mathcal{O}_{q_i}), \end{equation} \begin{equation}\label{Hom-q-a-eq} \operatorname{Hom}^*([\mathcal{O}_{q_i}],a)\to \operatorname{Hom}^*(\mathcal{O}_{q_i},Y(a)), \end{equation} \begin{equation}\label{Hom-q-q-eq} \operatorname{Hom}^*([\mathcal{O}_{q_i}],[\mathcal{O}_{q_j}])\to \operatorname{Hom}^*(\mathcal{O}_{q_i},\mathcal{O}_{q_j}). \end{equation} Note that \eqref{Hom-a-q-eq} is an isomorphism by the definition of $Y$. To see that \eqref{Hom-q-a-eq} is an isomorphism, it is enough to consider the case when $a$ is one of the generators $L_i$ of $\mathcal{F}(\mathbb{T}_0)$. 
Since in this case $Y(a)$ is a spherical object on $G_{n,R}$, by Serre duality, we have a perfect pairing $$\operatorname{Hom}^i(\mathcal{O}_{q_i}, Y(a))\otimes \operatorname{Hom}^{1-i}(Y(a), \mathcal{O}_{q_i})\to \operatorname{Hom}^1(Y(a),Y(a))\simeq R.$$ We have a similar perfect pairing on the symplectic side (see Proposition \ref{exactlags}). Thus, the assertion follows from the fact that \eqref{Hom-a-q-eq} is an isomorphism. Note that $\operatorname{Hom}^*([\mathcal{O}_{q_i}],[\mathcal{O}_{q_j}])=0$ for $i\neq j$. Thus, it remains to prove that \eqref{Hom-q-q-eq} is an isomorphism for $i=j$. By Lemmas \ref{sympl-dim-Hom-lem} and \ref{Ext-nodes-lem}, both sides in \eqref{Hom-q-q-eq} for $i=j$ are free $R$-modules of the same rank. Hence, it is enough to prove the same assertion for the morphism \eqref{Hom-q-q-eq} reduced modulo any maximal ideal in $R$. Since the formation of the maps \eqref{Hom-q-q-eq} is compatible with the change of scalars $R\to R'$, we can assume for the rest of the proof that $R=k$ is a field. We set $C=G_{n,k}$. By cyclic symmetry, it is enough to consider the case of $i=i_0$. Let us write $q=q_{i_0}$ for brevity. We know by Lemma \ref{Ext-nodes-lem} that the algebra $\operatorname{Hom}^*(\mathcal{O}_q,\mathcal{O}_q)$ is generated in degree $1$. Thus, it suffices to check that the map $$\operatorname{Hom}^1([\mathcal{O}_q],[\mathcal{O}_q])\to \operatorname{Hom}^1(\mathcal{O}_q,\mathcal{O}_q)$$ is an isomorphism. Let $J_q\in D^b(\operatorname{Coh} C)$ denote the ideal sheaf of $q$, and let us define $[J_q]$ from the exact triangle \[ [J_q]\to [\mathcal{O}_C]\xrightarrow{\alpha} [\mathcal{O}_q]\xrightarrow{\delta} [J_q][1] \] in $\mathcal{W}(\mathbb{T}_0)$, where $\a$ corresponds to $1\in H^0(C,\mathcal{O}_q)$ under the isomorphism $\operatorname{Hom}^0([\mathcal{O}_C],[\mathcal{O}_q])\simeq \operatorname{Hom}^0(\mathcal{O}_C,\mathcal{O}_q)$. Note that since $Y$ is an exact functor, the image of the above exact triangle under $Y$ is the standard triangle \[ J_q\to \mathcal{O}_C\to \mathcal{O}_q\rightarrow J_q[1]. \] In particular, $Y([J_q])\simeq J_q$. To describe the object $[J_q]$ concretely as a Lagrangian, we note that the Dehn twist triangle \eqref{DTT-eq} with $K = L_0 = [\mathcal{O}_C]$ and $L = \hat{L}_0 =[\mathcal{O}_q]$ gives an isomorphism \[ [J_q] \simeq \tau_{L_0}(\hat{L}_0)[-1]. \] \begin{comment} which we depicted in Figure \ref{fig7}. Note that as in Prop \label{primitive}, we can find a suitable primitive $\theta$ of the symplectic from such that the Dehn twist around $L_0$ is given by: \[ \tau_{L_0}(x,y) = (x-y, y) \] outside a small neighborhood of $D$. \begin{figure}[htb!] 
\centering \begin{tikzpicture} [scale=1.1] \tikzset{->-/.style={decoration={ markings, mark=at position #1 with {\arrow[scale=2,>=stealth]{>}}},postaction={decorate}}} \draw (0,0) -- (6,0); \draw[red, ->-=.5] (0,0) -- (6,0); \draw [red, ->-=.5] (0,6) -- (6,0); \draw [red, ->-=.5] (1,6) -- (6,1); \draw [red, ->-=.5] (0,1) -- (1,0); \draw [red, ->-=.5] (2,6) -- (6,2); \draw [red, ->-=.5] (0,2) -- (2,0); \draw [red, ->-=.5] (0,5) -- (5,0); \draw [red, ->-=.5] (5,6) -- (6,5); \draw (6,6) -- (0,6); \draw (6,0) -- (6,6); \draw (0,0) -- (0,6); \node at (0.5,0) {$\star$}; \node at (0,6.2) {\footnotesize $\tau_{L_0} \hat{L}_0$}; \node at (5,6.2) {\footnotesize $\tau_{L_0} (T^{n-1}\hat{L}_0)$}; \node at (3,-0.3) {\footnotesize $L_0$}; \draw[red, thick, fill=red] (0.6,2) circle(.02); \draw[red, thick, fill=red] (1.3,2) circle(.02); \draw[red, thick, fill=red] (2,2) circle(.02); \draw[red, thick, fill=red] (3.7,5) circle(.02); \draw[red, thick, fill=red] (4.4,5) circle(.02); \draw[red, thick, fill=red] (5.1,5) circle(.02); \draw[black, thick, fill=black] (2.5,0.5) circle(.01); \draw[black, thick, fill=black] (3,0.5) circle(.01); \draw[black, thick, fill=black] (3.5,0.5) circle(.01); \draw[thick, fill=black] (0.5,0.5) circle(.03); \draw[thick, fill=black] (1.5,0.5) circle(.03); \draw[thick, fill=black] (4.5,0.5) circle(.03); \draw[thick, fill=black] (5.5,0.5) circle(.03); \node at (0.5,0.7) {\footnotesize $z_n$}; \node at (1.5,0.7) {\footnotesize $z_{n-1}$}; \node at (4.5,0.7) {\footnotesize $z_{2}$}; \node at (5.5,0.7) {\footnotesize $z_1$}; \end{tikzpicture} \caption{Effect of exact symplectomorphisms $\tau_{L_0}$ on $\hat{L}_i$} \label{fig7} \end{figure} \end{comment} Note that we have a commutative square \begin{diagram} \operatorname{Hom}^0([J_q],[\mathcal{O}_q]) &\rTo{\circ\delta}& \operatorname{Hom}^1([\mathcal{O}_q],[\mathcal{O}_q])\\ \dTo{}&&\dTo{}\\ \operatorname{Hom}^0(J_q,\mathcal{O}_q) &\rTo{}& \operatorname{Hom}^1(\mathcal{O}_q,\mathcal{O}_q)\\ \end{diagram} We claim that the horizontal arrows in this square are isomorphisms. Indeed, for the bottom arrow this follows from the long exact sequence $$0\to \operatorname{Hom}^0(\mathcal{O}_q,\mathcal{O}_q)\xrightarrow{\sim} \operatorname{Hom}^0(\mathcal{O}_C,\mathcal{O}_q)\to \operatorname{Hom}^0(J_q,\mathcal{O}_q)\to \operatorname{Hom}^1(\mathcal{O}_q,\mathcal{O}_q)\to \operatorname{Hom}^1(\mathcal{O}_C,\mathcal{O}_q)=0. $$ For the top arrow this follows similarly from the exact triangle defining $[J_q]$. Therefore, it is enough to check that the Yoneda functor induces an isomorphism $$\operatorname{Hom}^0([J_q],[\mathcal{O}_q])\to \operatorname{Hom}^0(J_q,\mathcal{O}_q).$$ By Lemma \ref{J-sympl-lem} below, there exists a line bundle $\mathcal{L}$ on $C$ (defined over $k$) such that the pairings \begin{equation}\label{line-bundle-pairing-A} \operatorname{Hom}^0([J_q],[\mathcal{O}_q])\otimes_k \operatorname{Hom}^0([\mathcal{L}],[J_q])\to \operatorname{Hom}^0([\mathcal{L}],[\mathcal{O}_q])\simeq k, \end{equation} \begin{equation}\label{line-bundle-pairing-B} \operatorname{Hom}^0(J_q,\mathcal{O}_q)\otimes_k \operatorname{Hom}^0(\mathcal{L},J_q)\to \operatorname{Hom}^0(\mathcal{L},\mathcal{O}_q)\simeq k, \end{equation} given by the composition, are perfect pairings between $2$-dimensional vector spaces (note that $[\mathcal{L}]$ is in $D^\pi(\mathcal{F}(\mathbb{T}_0)\otimes k)$). 
Thus, we have a commutative square \begin{diagram} \operatorname{Hom}^0([J_q],[\mathcal{O}_q])&\rTo{}& \operatorname{Hom}_k\bigl(\operatorname{Hom}^0([\mathcal{L}],[J_q]),\operatorname{Hom}^0([\mathcal{L}],[\mathcal{O}_q])\bigr)\\ \dTo{Y}&&\dTo{Y}\\ \operatorname{Hom}^0(J_q,\mathcal{O}_q)&\rTo{}& \operatorname{Hom}_k\bigl(\operatorname{Hom}^0(\mathcal{L},J_q),\operatorname{Hom}^0(\mathcal{L},\mathcal{O}_q)\bigr) \end{diagram} in which the right vertical arrow is an isomorphism by the definition of $Y$ (since $[\mathcal{L}]$ is in $D^\pi(\mathcal{F}(\mathbb{T}_0)\otimes k)$) and the horizontal arrows are isomorphisms. Hence, we deduce that the left vertical arrow is also an isomorphism, which is what we needed. \qed\vspace{3mm} \begin{lem}\label{J-sympl-lem} Let $C=G_{n,k}$, where $k$ is a field. For $n>1$ let $\mathcal{L}$ be a line bundle that has degrees $-1$ and $-2$ when restricted to the components of $C$, passing through $q$, and is trivial on all the other components. In the case $n=1$ let $\mathcal{L}$ be a line bundle of degree $-3$ on $C$. Then the pairing \eqref{line-bundle-pairing-B} is perfect. Furthermore, for $\mathcal{L}=\mathcal{O}_C(-2p_{i_0}-p_{i_0+1})$ (resp., $\mathcal{L}=\mathcal{O}_C(-3p_1)$ if $n=1$) there exists an object $[\mathcal{L}]\in\mathcal{F}(\mathbb{T}_0)$ such that $Y([\mathcal{L}])\simeq \mathcal{L}$ and the pairing \eqref{line-bundle-pairing-A} is perfect. \end{lem} \noindent {\it Proof} . First, let us check the assertion on the B side in the case $n>1$. Let $C^+$ and $C^-$ be the components of $C$, passing through $q$, and let $q^+\in C^+$ and $q^-\in C^-$ be the points that glue into $q$. Let also $p^+\in C^+$ and $p^-\in C^-$ be the points corresponding to the other nodes on $C^+$ and $C^-$ (we can assume that under an isomorphism $C^\pm\simeq \P^1$ the point $q^\pm$ corresponds to $\infty$, while the point $p^\pm$ corresponds to $0$). Since $\mathcal{L}$ is trivial on all the other components, we can identify $\operatorname{Hom}^0(\mathcal{L},J_q)\simeq H^0(C,\mathcal{L}^{-1}\otimes J_q)$ with the vector space \begin{align*} V=\{ & (s^+,s^-)\in H^0(C^+, \mathcal{L}^{-1}|_{C^+})\oplus H^0(C^-,\mathcal{L}^{-1}|_{C^-}) \ |\ s^+(q+)=0, \\ &s^-(q^-)=0, s^+(p^+)=s^-(p^-)\}. \end{align*} Without loss of generality we can assume that $\mathcal{L}|_{C^-}\simeq\mathcal{O}(-1)$. Then the projection to $s^+$ induces an isomorphism of $V$ with $H^0(C^+,\mathcal{L}^{-1}(-q^+))$. Now the space $\operatorname{Hom}(J_q,\mathcal{O}_q)$ has a basis $(e^+,e^-)$ corresponding to the decomposition of the completion of $J_q$ at $q$ into $xk[[x]] \oplus yk[[y]]$, so that the pairing of $V$ with $e^+$ (resp., $e^-$) is given by $s^+\mod {\frak m}_{q^+}^2$ (resp., $s^-\mod {\frak m}_{q^-}^2$). Note that since $s^-(q^-)=0$, the functional $s^-\mod{\frak m}_{q^-}^2$ on $V$ is proportional (with nonzero scalar) to $s^-(p^-)=s^+(p^+)$. Thus, the two linear maps on $V$ given by the pairing with $e^+$ and $e^-$ can be identified with the maps $$s^+\mapsto s^+\mod {\frak m}_{q^+}^2, \ \ \ s^+\mapsto s^+(p^+).$$ Since these two maps form a basis of the dual of $V$, the assertion follows. In the case $n=1$, the assertion similarly reduces to the fact that $$s\mapsto s\mod {\frak m}_{\infty}^2, \ \ \ s\mapsto s\mod {\frak m}_0^2$$ form a basis of the dual of $H^0(\P^1, \mathcal{O}(3)(-0-\infty))$. Now let us prove the analogous statement on the A side. 
Recall that each $1$-spherical object $S$ gives rise to the corresponding dual twist functor $T'_S$, such that there is an exact triangle $$T'_S(F)\to F\to \operatorname{Hom}(F,S)^\vee\otimes S\to\ldots$$ In the case when $S=\mathcal{O}_{p_i}$ we have $T'_S(F)\simeq F(-p_i)$. Hence, $\mathcal{L}=\mathcal{O}_C(-2p_{i_0}-p_{i_0+1})$ can be obtained from $\mathcal{O}_C$ by applying three reflection functors of this type. On the Fukaya category side, we have the (negative) Dehn twist exact triangle: \begin{equation} \label{eq:exactness2} \xymatrix{ \tau_{K}^{-1}(L)\ar[r] & L \ar[d]^-{\mathrm{ev}^{\vee}} \\ & \mathit{HF}^* (L,K)^{\vee} \otimes K \ar[ul]^{[1]} } \end{equation} where $\tau_{K}^{-1}(L)$ is the exact Lagrangian in $\mathbb{T}_0$ which is obtained by a left-handed Dehn twist of $L$ around $K$ (equipped with the induced orientation, spin structure and grading). This implies that the corresponding object $[\mathcal{L}]$ of the Fukaya category is given by $\tau_{L_{i_0}}^{-2} \tau_{L_{i_0 +1}}^{-1}(L_0)$ for $n\geq 2$ (resp., by $\tau_{L_1}^{-3}(L_0)$ for $n=1$). In Figure \ref{fig7}, assuming that $n\geq 2$, we drew the isotopy classes of the curves corresponding to $[\mathcal{L}]$ (in red), $\hat{L}_0$ (in blue) and $[J_q[1]]$ (in violet). For $n=1$ the same picture can be used except that only the marked point labelled $z_1$ should be left. \begin{figure}[htb!] \centering \begin{tikzpicture} [scale=1.1] \tikzset{->-/.style={decoration={ markings, mark=at position #1 with {\arrow[scale=2,>=stealth]{>}}},postaction={decorate}}} \draw (0,0) -- (6,0); \draw (6,6) -- (0,6); \draw (6,0) -- (6,6); \draw (0,0) -- (0,6); \begin{scope} \clip (2.53,1.49) -- (3.1,6) -- (3.1,2) ; \clip[preaction={draw,fill=gray!30}] (0,0) rectangle (8,8); \end{scope} \begin{scope} \clip (3.1,2) to[in=-123,out=46] (6,6) -- (3.1,0) ; \clip[preaction={draw,fill=gray!30}] (0,0) rectangle (8,8); \end{scope} \draw [red, ->-=.5] (0,0) to[in=-100,out=35] (2.35,6); \draw[red, ->-=.5] (2.35,0) -- (3.1,6); \draw[red, ->-=.5] (3.1,0) -- (6,6); \draw[blue, ->-=.5] (3.1,6) -- (3.1,0); \draw[violet, ->-=.5] (6,6) to[in=22,out=-122] (0,0); \node at (0.5,0.4) {$\star$}; \draw[black, thick, fill=black] (0.7,2) circle(.01); \draw[black, thick, fill=black] (0.9,2) circle(.01); \draw[black, thick, fill=black] (1.1,2) circle(.01); \draw[black, thick, fill=black] (4.7,2) circle(.01); \draw[black, thick, fill=black] (4.9,2) circle(.01); \draw[black, thick, fill=black] (5.1,2) circle(.01); \draw[thick, fill=black] (0.5,2) circle(.03); \draw[thick, fill=black] (1.4,2) circle(.03); \draw[thick, fill=black] (3.1,2) circle(.03); \draw[thick, fill=black] (4.3,2) circle(.03); \draw[thick, fill=black] (5.5,2) circle(.03); \node at (1.4,2.2) {\footnotesize $z_2$}; \node at (3.1,2.2) {\footnotesize $z_1$}; \node at (4.3,2.2) {\footnotesize $z_n$}; \draw[thick, fill=black] (6,6) circle(.05); \draw[thick, fill=black] (3.1,6) circle(.05); \draw[thick, fill=black] (3.1,0) circle(.05); \draw[thick, fill=black] (2.53,1.49) circle(.05); \end{tikzpicture} \caption{The curves corresponding to $[\mathcal{L}]$ (red), $[J_q[1]]$ (violet) and $\hat{L}_0$ (blue)} \label{fig7} \end{figure} Now, the non-degeneracy of the pairing (\ref{line-bundle-pairing-A}) is equivalent to the non-degeneracy of the pairing \begin{equation} \operatorname{Hom}^{1}([J_q][1],[\mathcal{O}_q])\otimes \operatorname{Hom}^{-1}([\mathcal{L}],[J_q][1])\to \operatorname{Hom}^0([\mathcal{L}],[\mathcal{O}_q])\simeq k. 
\end{equation} This can be computed via the Floer $\mathfrak{m}_2$-products given by triangles with boundary on $([\mathcal{L}], [J_q][1], [\mathcal{O}_q])$ as in Figure \ref{fig7}. We see that there are precisely two triangles (shaded in Figure \ref{fig7}) contributing to this product. From this we conclude that the pairing (\ref{line-bundle-pairing-A}) is perfect, as required. \qed\vspace{3mm} \subsection{Equivalence of the wrapped Fukaya category with $D^b(\operatorname{Coh} G_{n,R})$} Recall that the objects $\hat{F}_0,\ldots,\hat{F}_n$ in $D^b(\operatorname{Coh} G_{n,R})$ were defined by \eqref{Fhat-collection-eq}. \begin{lem}\label{Y-gen-lem} One has $Y(\hat{L}_i)\simeq \hat{F}_i$, $i=0,\ldots,n$. \end{lem} \noindent {\it Proof} . For $i=1,\ldots,n$ this follows from Lemma \ref{char-dual-to-point-lem}. Furthermore, Lemma \ref{char-nodes-lem} implies that $Y(\hat{L}_0)$ is isomorphic to $\mathcal{O}_{q_i}$ for one of the nodes $q_i$. By cyclic symmetry, it is enough to distinguish $\mathcal{O}_{q_n}$ from the other nodes. Now we use the property that $\operatorname{Hom}^1(\hat{F}_i,\mathcal{O}_{q_n})=0$ for $i\neq 1,n$, while for $i=1,n$, this space is nonzero. \qed\vspace{3mm} \begin{thm}\label{wrapped-equivalence-thm} For a regular ring $R$ there exists an exact equivalence $D^\pi(\mathcal{W}(\mathbb{T}_0)\otimes R)\to D^b(\operatorname{Coh} G_{n,R})$, extending the equivalence of Corollary \ref{Fuk-Perf-cor}, and sending $\hat{L}_i$ to $\hat{F}_i$ for $i=0,\ldots,n$. If in addition $\hat{K}_0(R)=0$ (say, $R={\Bbb Z}$ or $R$ is a field), then $D^\pi(\mathcal{W}(\mathbb{T}_0)\otimes R)=D^b(\mathcal{W}(\mathbb{T}_0)\otimes R)$. \end{thm} \noindent {\it Proof} . Set $C=G_{n,R}$. Theorem \ref{A-fully-faithful-thm} together with Lemma \ref{another-generators-W-lem} imply that the Yoneda functor induces an equivalence of $D^\pi(\mathcal{W}(\mathbb{T}_0)\otimes R)$ with the subcategory of $D(\operatorname{Qcoh} C)$, split-generated by $Y(\hat{L}_i)$, $i=0,\ldots,n$. But by Lemma \ref{Y-gen-lem} together with Proposition \ref{Db-Coh-gen-prop}, this subcategory is equivalent to $D^b(\operatorname{Coh} C)$. The last assertion follows from Lemma \ref{ncgener} and Corollary \ref{Db-Coh-gen-cor}. \qed\vspace{3mm} \section{Koszul duality} \label{koszul} \subsection{Koszul duality for $A_\infty$-algebras} In this section we work over a field $k$. We consider the Koszul duality picture for $A_\infty$-algebras, similar to the one for the dg-categories, considered by Keller in \cite[Sec.\ 10]{Keller-derived-dg}. It involves a slightly more general notion of augmented $A_\infty$-algebras than the one considered in \cite[Sec.\ 3.5]{Keller-ainf} and \cite{LPWZ}. \begin{defi} (i) Let $K=\bigoplus_{i=0}^n k$, and let $A$ be an $A_\infty$-algebra over $K$. Let $S_i$, $i=0,\ldots,n$, be the simple $K$-modules (so that $S_i$ corresponds to the $i$th summand in $K$). A {\it left augmentation} on $A$ is a collection $(\widetilde{S}_i)$ of left $A_\infty$-modules over $A$ such that $H^*\widetilde{S}_i\simeq S_i$ as $K$-modules. Thus, we get an $A_\infty$-module $\widetilde{K}:=\bigoplus_{i=0}^n \widetilde{S}_i$ over $A$. Similarly, we define a {\it right augmentation} using right $A_\infty$-modules. \noindent (ii) Let $A$ be a right augmented $A_\infty$-algebra over $K$. The {\it Koszul dual of} $A$ is the left augmented dg-algebra $$E(A):=R\operatorname{Hom}_A(\widetilde{K},\widetilde{K}),$$ where the augmentation is given by the natural $E(A)$-module structure on $\widetilde{K}$. 
Similarly, the Koszul dual of a left augmented dg-algebra $A$ is the right augmented dg-algebra $E(A)=R\operatorname{Hom}_A(\widetilde{K},\widetilde{K})^{op}$. \end{defi} Note that if $\widetilde{K}$ is a left augmentation of $A$ then $R\operatorname{Hom}_K(\widetilde{K},K)$ is a right augmentation of $A$ and vice versa. We view augmented $A_\infty$-algebras up to a natural equivalence, extending the $A_\infty$-equivalence of $A_\infty$-algebras over $K$. The operation of passing to the Koszul dual is well-defined on equivalence classes. \begin{rems} 1. For any (left or right) augmented $A_\infty$-algebra $A$ there is an $A_\infty$-morphism $A\to E(E(A))$. However, in general it is not a quasi-isomorphism, and the double Koszul dual $E(E(A))$ should be viewed as some kind of completion of $A$, see \cite{DG}, \cite{Efimov}. \noindent 2. In \cite[Sec.\ 3.5]{Keller-ainf} and \cite{LPWZ} the authors work with a stronger notion of augmentation: they require the existence of a surjection $A\to K$ in the category of $A_\infty$-algebras. In the main example of interest for us, considered below, such a surjection may not exist. \end{rems} Recall that we have the $A_\infty$-algebra $\AA_0$ associated with the generators $(L_i)$ of the exact Fukaya category $\mathcal{F}(\mathbb{T}_0)$ and the $A_\infty$-algebra $\mathscr{B}$ associated with the generators $(\hat{L}_i)$ of the wrapped Fukaya category $\mathcal{W}(\mathbb{T}_0)$ (see \eqref{A0-algebra-eq}, \eqref{B-algebra-eq}). Since we now work over a field $k$, we consider the $A_\infty$-algebras $$\AA_{0,k}:=\AA_0\otimes k, \ \ \mathscr{B}_k:=\mathscr{B}\otimes k.$$ Recall that the cyclic group ${\Bbb Z}/n{\Bbb Z}$ acts on $\mathcal{F}(\mathbb{T}_0)$ (resp., $\mathcal{W}(\mathbb{T}_0)$) by means of the transformation $T$ (see \eqref{T-translation-eq}). This action preserves $L_0$ and cyclically permutes the remaining generators $L_1,\ldots,L_n$, hence, we get an induced action on the $A_\infty$-algebra $\AA_{0,k}$. Note however, that the ${\Bbb Z}/n{\Bbb Z}$-action on $\mathcal{W}(\mathbb{T}_0)$ does not preserve the generator $\bigoplus_{i=0}^n \hat{L}_i$ (the object $\hat{L}_0$ is mapped to similar objects associated with other punctures). Using Lemmas \ref{char-nodes-lem} and \ref{char-dual-to-point-lem}, as well as the equivalence of Corollary \ref{Fuk-Perf-cor}, we derive the following fact about $A_\infty$-modules over $\AA_{0,k}$. \begin{prop}\label{aug-prop} There is a unique left (resp., right) augmentation on $\AA_{0,k}$, up to the twist by the ${\Bbb Z}/n{\Bbb Z}$-action on $\AA_{0,k}$, such that $\dim\operatorname{Ext}^1_{\AA_{0,k}}(\widetilde{S}_i,\widetilde{S}_i)>1$ for every $i$ (equivalently, $\operatorname{Ext}^*_{\AA_{0,k}}(\widetilde{S}_i,\widetilde{S}_i)$ is infinite-dimensional). \qed\vspace{3mm} \end{prop} We are going to check that $\mathscr{B}_k$ and $\AA_{0,k}$ are Koszul dual of each other, with respect to some augmentations, such that the left augmentation on $\AA_{0,k}$ is the one given in Proposition \ref{aug-prop}. Note that $n$ choices of such an augmentation correspond to $n$ different choices of defining the object $\hat{L}_0$ in our generating set for $\mathcal{W}(\mathbb{T}_0)$. \subsection{The proof of Koszul duality on the $B$-side} Let us set $C=G_{n,k}$, and consider the objects $$F:=\bigoplus_{i=0}^n F_i, \ \ \hat{F}:=\bigoplus_{i=0}^n \hat{F}_i$$ in $D^b(\operatorname{Coh} C)$ (see Sec.\ \ref{gen-B-sec}). 
Due to the equivalences of Corollary \ref{Fuk-Perf-cor} and Theorem \ref{wrapped-equivalence-thm}, we have $$\AA_{0,k}:=R\operatorname{Hom}_{\operatorname{Perf}(C)}(F,F), \ \ \mathscr{B}_k=R\operatorname{Hom}_{D^b(\operatorname{Coh} C)}(\hat{F},\hat{F}).$$ Here we apply the homological perturbation to get the corresponding $A_\infty$-algebras over $K$. Let us define the left augmentation of $\AA_{0,k}$ and the right augmentation of $\mathscr{B}_k$ using $$\widetilde{K}:=R\operatorname{Hom}_{D^b(\operatorname{Coh} C)}(\hat{F},F),$$ which has a natural $\AA_{0,k}-\mathscr{B}_k$-bimodule structure. More precisely, we use the corresponding $A_\infty$-bimodule obtained by the homological perturbation. \begin{prop}\label{Koszul-B-side-prop} One has $A_\infty$-equivalences $$\AA_{0,k}\simeq R\operatorname{Hom}_{\mathscr{B}_k}(\widetilde{K},\widetilde{K}), \ \ \mathscr{B}_k\simeq R\operatorname{Hom}_{\AA_{0,k}}(\widetilde{K},\widetilde{K})^{op}.$$ Thus, $\AA_{0,k}$ and $\mathscr{B}_k$ are the Koszul duals of each other. \end{prop} \noindent {\it Proof} . Since $\hat{F}$ generates $D^b(\operatorname{Coh} C)$, the functor $$R\operatorname{Hom}_{D^b(\operatorname{Coh} C)}(\hat{F},?): D^b(\operatorname{Coh} C)\to D(\mod \text{-}\mathscr{B}_k)$$ is fully faithful, so we get an equivalence $$\AA_{0,k}\simeq R\operatorname{Hom}_{D^b(\operatorname{Coh} C)}(F,F)\simeq R\operatorname{Hom}_{\mathscr{B}_k}(\widetilde{K},\widetilde{K}).$$ On the other hand, we claim that the functor $$R\operatorname{Hom}_{D^b(\operatorname{Coh} C)}(?,F): D^b(\operatorname{Coh} C)^{op}\to D(\AA_{0,k}\text{-}\mod),$$ sending $\hat{F}$ to $\widetilde{K}$, is also fully faithful. Indeed, this follows easily from Serre duality and the fact that $R\operatorname{Hom}_{D^b(\operatorname{Coh} C)}(F,?)$ is fully faithful (with the essential image consisting of right $\AA_{0,k}$-modules with finite-dimensional cohomology). Thus, we get $$\mathscr{B}_k\simeq R\operatorname{Hom}_{D^b(\operatorname{Coh} C)}(\hat{F},\hat{F})\simeq R\operatorname{Hom}_{\AA_{0,k}}(\widetilde{K},\widetilde{K})^{op}.$$ \qed\vspace{3mm} \begin{rem}\label{Koszul-rem} A Koszul duality result between $\mathcal{F}(M)$ and $\mathcal{W}(M)$ was proven in \cite{EL} in the case when $M$ is a plumbing of the cotangent bundles $T^*Q$'s according to a plumbing tree, where $Q=S^2$. The Koszul duality result above can be seen as an analogue of this for $Q=S^1$. It is interesting to note that the result in \cite{EL} was inspired by the classical Koszul duality result between the dg-algebras $C^*(Q)$ and $C_{-*}(\Omega Q)$ for $Q$ a \emph{simply-connected} manifold. The relevance of these two dg-algebras comes from the fact that they are quasi-isomorphic to the endomorphism algebras of generators of $\mathcal{F}(T^*Q)$ and $\mathcal{W}(T^*Q)$ respectively (see \cite{EL} for details). In the $2$-dimensional case we see that even though the corresponding Koszul duality result fails for $T^*S^1$ as $S^1$ is not simply-connected, Koszul duality between $\mathcal{F}(M)$ and $\mathcal{W}(M)$ holds for $M=\mathbb{T}_0$, a plumbing of $T^*S^1$'s according to the star-shaped tree as in Figure \ref{fig0}. \end{rem} \begin{rem} In Lemma \ref{ncgener} we prove that the objects $\hat{L}_0, \hat{L}_1, \ldots, \hat{L}_n$ generate the wrapped Fukaya category $\mathcal{W}(\mathbb{T}_0)$. It is likely that this can also be checked via the generation criterion from \cite{abouzgen} (see Remark \ref{pitfall}).
This is the statement that the natural map \[ HH_{*-1}(\mathcal{W}(\mathbb{T}_0)) \to SH^*(\mathbb{T}_0) \] relating the Hochschild homology of the wrapped Fukaya category to the symplectic cohomology of $\mathbb{T}_0$ hits the identity element. When this generation criterion is satisfied, it was proven by Ganatra \cite{ganatra} that the natural maps \[ HH_{*-1}(\mathcal{W}(\mathbb{T}_0)) \to SH^*(\mathbb{T}_0) \to HH^*(\mathcal{W}(\mathbb{T}_0)) \] are all isomorphisms. On the other hand, in the case at hand, it is easy to compute $SH^*(\mathbb{T}_0)$ explicitly via a Morse-Bott type spectral sequence (see Ex. 3.3 of \cite{seidelbiased}). In fact, we have \[ SH^0(\mathbb{T}_0) = k, \ \ SH^1(\mathbb{T}_0) = k^{n+1}, \ \ SH^d(\mathbb{T}_0) = k^n \ \ (d \geq 2). \] In particular, we deduce that $\text{dim} HH^2(\mathcal{W}(\mathbb{T}_0)) = n$. Now, we have seen above that there is a Koszul duality between the exact Fukaya category $\mathcal{F}(\mathbb{T}_0)$ and the wrapped Fukaya category $\mathcal{W}(\mathbb{T}_0)$. By the result of Keller \cite{keller} this implies that we have an isomorphism between $HH^*(\mathcal{F}(\mathbb{T}_0))$ and $HH^*(\mathcal{W}(\mathbb{T}_0))$. This gives that $\text{dim} HH^2 (\mathcal{F}(\mathbb{T}_0)) = \text{dim} HH^2(\mathscr{A}_0, \mathscr{A}_0)= n$, so the above calculation matches our calculation of $HH^2(G_{n,k})$ in Lemma \ref{HH-wheel-lem}. \end{rem}
\section{Introduction} The Titchmarsh--Weyl function is an indispensable tool in direct and inverse spectral theory of ordinary differential operators and more general systems of ordinary differential equations; see the classical monographs~\cite{CL55,T62} and~\cite{B01,DK99,GS96,GS00-1,GP87,HS81,KST12,LW11,S96,S99} for a small selection of more recent contributions. For a singular second order Sturm--Liouville differential operator of the form $\sL_+=- \frac{d^2}{d x^2} + q_+$ on $\mathbb{R}_+$ with a real-valued, bounded potential $q_+$ the Titchmarsh--Weyl function $m_+$ can be defined as \begin{equation}\label{mfct} m_+(\lambda)=\frac{f_\lambda' (0)}{f_\lambda (0)}, \qquad \lambda \in \mathbb{C} \setminus \mathbb{R}, \end{equation} where $f_\lambda$ is a square-integrable solution of $\sL_+ f = \lambda f$ on $\mathbb{R}_+$; cf.~\cite{T62,W10}. The function $m_+ : \mathbb{C} \setminus \mathbb{R} \to \mathbb{C}$ belongs to the class of Nevanlinna (or Riesz--Herglotz) functions and it is a celebrated fact that it reflects the complete spectral properties of the selfadjoint realizations of $\sL_+$ in $L^2 (\mathbb{R}_+)$. E.g. the eigenvalues of the Dirichlet realization $A_D$ are precisely those $\lambda \in \mathbb{R}$, where $\lim_{\eta \searrow 0} i \eta m_+ (\lambda + i \eta) \neq 0$, the isolated eigenvalues among them coincide with the poles of $m_+$, and the absolutely continuous spectrum of $A_D$ (roughly speaking) consists of all $\lambda$ with the property $0 < {\text{\rm Im}} m_+ (\lambda + i 0) < + \infty$. If $\sL=- \frac{d^2}{d x^2} + q$ is a singular Sturm-Liouville expression on $\mathbb{R}$ with $q$ real-valued and bounded, it is most natural to use decomposition methods of Glazman type for the analysis of the corresponding selfadjoint operator in $L^2(\dR)$; cf.~\cite{G65}. More precisely, the restriction of $\sL$ to $\mathbb{R}_+$ gives rise to the Titchmarsh--Weyl function $m_+$ in \eqref{mfct}, and similarly a Titchmarsh--Weyl function $m_-$ associated to the restriction of $\sL$ to $\mathbb{R}_-$ is defined. In that case usually the functions \begin{equation}\label{mfct2} m(\lambda)=-\bigl(m_+(\lambda)+m_-(\lambda)\bigr)^{-1}\quad\text{and}\quad \widetilde m(\lambda)= \begin{pmatrix} - m_+(\lambda) & 1 \\ 1 & m_-(\lambda)^{-1} \end{pmatrix}^{-1} \end{equation} are employed for the description of the spectrum. Whereas the scalar function $m$ seems to be more convenient it will in general not contain the complete spectral data, a drawback that is overcome when using the $2\times 2$-matrix function $\widetilde m$. Some of these observations were already made in \cite{K63,T62}, similar ideas can also be found in \cite{HSW00,HS83,K89} for Hamiltonian systems, and more recently in an abstract operator theoretical framework in \cite{DHMS00,DHMS09}, see also \cite{BLu07,BLT13}. One of the main objectives of this paper is to extend the classical spectral analysis of ordinary differential operators via the Titchmarsh--Weyl functions in~\eqref{mfct2} to the multidimensional setting. 
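Although it is not needed in what follows, it may be useful to keep the simplest explicit example in mind; the following computation is standard and is included here only as an illustration of the objects in \eqref{mfct} and \eqref{mfct2}. For the free expression $\sL_+=-\frac{d^2}{dx^2}$ on $\mathbb{R}_+$ the square-integrable solution of $\sL_+ f=\lambda f$, $\lambda\in\mathbb{C}\setminus\mathbb{R}$, is (up to scalar multiples) $f_\lambda(x)=e^{i\sqrt{\lambda}\,x}$ with the branch of the square root fixed by ${\text{\rm Im}}\,\sqrt{\lambda}>0$, so that \begin{align*} m_+(\lambda)=\frac{f_\lambda'(0)}{f_\lambda(0)}=i\sqrt{\lambda} \qquad\text{and}\qquad {\text{\rm Im}}\, m_+(\lambda+i0)=\sqrt{\lambda}\in(0,\infty)\quad\text{for } \lambda>0. \end{align*} This is in accordance with the fact that the Dirichlet realization of $-\frac{d^2}{dx^2}$ in $L^2(\mathbb{R}_+)$ has purely absolutely continuous spectrum $[0,\infty)$ and no eigenvalues; note also that $\lim_{\eta \searrow 0} i \eta m_+(\lambda+i\eta)=0$ for every $\lambda\in\mathbb{R}$. With the analogous convention on $\mathbb{R}_-$ (chosen such that $m_-$ is a Nevanlinna function) one obtains $m_-(\lambda)=i\sqrt{\lambda}$ as well, and hence $m(\lambda)=-(2i\sqrt{\lambda})^{-1}$ in \eqref{mfct2} for the free expression on $\mathbb{R}$.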
For this consider the second order partial differential expression \begin{align}\label{eq:diffexprIntro} \cL = - \sum_{j, k = 1}^n \frac{\partial}{\partial x_j} a_{jk} \frac{\partial}{\partial x_k} + \sum_{j = 1}^n \left( a_j \frac{\partial}{\partial x_j} - \frac{\partial}{\partial x_j} \overline{a_j} \right) + a \end{align} with smooth, bounded coefficients $a_{jk}, a_j : \mathbb{R}^n \to \mathbb{C}$ and $a: \mathbb{R}^n \to \mathbb{R}$ bounded, and assume that $\cL$ is formally symmetric and uniformly elliptic on $\mathbb{R}^n$. Let $A$ be the selfadjoint operator associated to \eqref{eq:diffexprIntro} in $L^2(\mathbb{R}^n)$. Our main goal is to describe the spectral data of $A$, that is, isolated and embedded eigenvalues, continuous, absolutely continuous and singular continuous spectral points, in terms of the limiting behaviour of appropriate multidimensional counterparts of the functions in \eqref{mfct2}. Note first that the multidimensional analogue of the Titchmarsh--Weyl function \eqref{mfct} is the Dirichlet-to-Neumann map, and in order to define suitable analogues of the functions in \eqref{mfct2} we proceed as follows: Split $\mathbb{R}^n$ into a bounded domain $\Omega_{\rm i}$ with smooth boundary $\Sigma$ and let $\Omega_{\rm e} = \mathbb{R}^n \setminus \overline{\Omega_{\rm i}}$ be the exterior of $\Omega_{\rm i}$. For $\lambda \in \mathbb{C} \setminus \mathbb{R}$ the Dirichlet-to-Neumann maps for $\cL$ in $\Omega_{\rm i}$ and $\Omega_{\rm e}$, respectively, on the compact interface $\Sigma$ are given by \begin{align*} \Lambda_{\rm i} (\lambda) u_{\lambda, \rm i} |_\Sigma := \frac{\partial u_{\lambda, \rm i}}{\partial \nu_{\cL_{\rm i}}} \Big|_\Sigma \quad\text{and}\quad \Lambda_{\rm e} (\lambda) u_{\lambda, \rm e} |_\Sigma := \frac{\partial u_{\lambda, \rm e}}{\partial \nu_{\cL_{\rm e}}} \Big|_\Sigma, \qquad \lambda \in \mathbb{C} \setminus \mathbb{R}, \end{align*} where $u_{\lambda, j} \in H^2 (\Omega_j)$ solve $\cL u_{\lambda, j} = \lambda u_{\lambda,j}$, $j = \rm i, e$, and $u_{\lambda, j}|_\Sigma$ and $\frac{\partial u_{\lambda,j}}{\partial \nu_{\cL_j}} |_\Sigma$ denote the trace and the conormal derivative, respectively; cf.~Section~\ref{41} for further details. Both functions $\Lambda_{\rm i}$ and $\Lambda_{\rm e}$ are viewed as operator-valued functions in $L^2(\Sigma)$ defined on the dense subspace $H^{3/2}(\Sigma)$. The multidimensional counterparts of the functions in \eqref{mfct2} are \begin{align}\label{eq:WeylSchreodIntro} M (\lambda) = \big( \Lambda_{\rm i} (\lambda) + \Lambda_{\rm e} (\lambda) \big)^{-1} \quad\text{and}\quad \widetilde M(\lambda)= \begin{pmatrix} \Lambda_{\rm i}(\lambda) & 1 \\ 1 & -\Lambda_{\rm e}(\lambda)^{-1} \end{pmatrix}^{-1} \end{align} (the differences in the signs are due to the definition of the conormal derivative, where the normals of $\Omega_{\rm i}$ and $\Omega_{\rm e}$ point into opposite directions). Observe that, in contrast to the one-dimensional situation described above, $\mathbb{R}^n$ is split into a bounded domain and an unbounded domain. This yields that $\Lambda_{\rm i}$ is meromorphic, which in turn essentially allows us to give an almost complete characterization of the spectrum of $A$ with the function $M$ in \eqref{eq:WeylSchreodIntro} in Theorem~\ref{thm:eigenSchroed}; the only possible spectral points that cannot be detected with $M$ are eigenvalues of $A$ with vanishing traces on $\Sigma$, and possible accumulation points of such eigenvalues. 
A complete picture of the spectrum of $A$ in terms of the limiting behaviour of Dirichlet-to-Neumann maps is obtained with the help of the $2\times 2$-block operator matrix function $\widetilde M$ in \eqref{eq:WeylSchreodIntro} in Theorem~\ref{thm:eigenSchroedDecoup}. We mention that in connection with Schr\"{o}dinger operators in $\mathbb{R}^3$ the function $M$ in \eqref{eq:WeylSchreodIntro} was already used in \cite{AP04} for the extension of a classical convergence property of the Titchmarsh--Weyl function to the three-dimensional case, see also \cite{BLL13,BLL13-2,R09}. We also remark that for Schr\"odinger operators on exterior domains with $C^2$-boundaries the connection of the spectrum to the limits of the Dirichlet-to-Neumann map was already investigated by the authors in~\cite{BR13}. In this paper our approach to Titchmarsh--Weyl functions and their connection to spectral properties of corresponding selfadjoint differential operators is more abstract and of a general nature, based on the concepts of (quasi) boundary triples and their Weyl functions. Recall first that for a symmetric operator $S$ in a Hilbert space $\cH$ a boundary triple $\{ {\mathcal G}, \Gamma_0, \Gamma_1 \}$ consists of a ``boundary space'' ${\mathcal G}$ and two linear mappings $\Gamma_0, \Gamma_1 : {\text{\rm dom\,}} S^* \to {\mathcal G}$, which satisfy an abstract Green identity \begin{equation}\label{gi} (S^* f, g)_\cH - (f, S^* g)_\cH = (\Gamma_1 f, \Gamma_0 g)_{\mathcal G} - (\Gamma_0 f, \Gamma_1 g)_{\mathcal G}, \quad f, g \in {\text{\rm dom\,}} S^*, \end{equation} and a maximality condition. The corresponding Weyl function $M$ is defined as \begin{equation}\label{wf} M (\lambda) \Gamma_0 f_\lambda = \Gamma_1 f_\lambda, \qquad \lambda \in \mathbb{C} \setminus \mathbb{R}, \end{equation} where $f_\lambda \in \cH$ solves the equation $S^* f = \lambda f$; the values $M(\lambda)$ of the Weyl function $M$ are bounded operators in the Hilbert space ${\mathcal G}$. The example of the Sturm--Liouville expression $\sL_+$ in the beginning of the introduction fits into this scheme: There $\cH = L^2 (\mathbb{R}_+)$,~$S$ is the minimal operator associated with the differential expression $\mathfrak{L}_+$ in $L^2 (\mathbb{R}_+)$, ${\mathcal G} = \mathbb{C}$, and the mappings $\Gamma_0, \Gamma_1$ are given by \begin{align*} \Gamma_0 f = f (0) \quad \text{and} \quad \Gamma_1 f = f' (0), \qquad f \in {\text{\rm dom\,}} S^*, \end{align*} where $S^*$ is the maximal operator associated with $\sL_+$ in $L^2 (\mathbb{R}_+)$. Then the corresponding Weyl function is $m_+$ in \eqref{mfct}, the selfadjoint Dirichlet operator $A_D$ coincides with $S^* \upharpoonright \ker \Gamma_0$, and the spectrum can be described with the help of the limits of the Weyl function. The correspondence between the spectrum of the particular selfadjoint extension $A_0 := S^* \upharpoonright \ker \Gamma_0$ and the limits of the Weyl function is not a special feature of the boundary triple for the above Sturm--Liouville equation. In fact, it holds as soon as the symmetric restriction $S$ (and, thus, the boundary mappings $\Gamma_0$ and $\Gamma_1$) is chosen properly.
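The following elementary observation serves only to illustrate why some minimality condition on $S$ has to be imposed; the operators and spaces appearing in it are ad hoc notation and are not used elsewhere. Suppose that $S=S_1\oplus B$ is the orthogonal sum of a closed, densely defined, symmetric operator $S_1$ in a Hilbert space $\cH_1$ and a selfadjoint operator $B$ in a Hilbert space $\cH_2$. Then $S^*=S_1^*\oplus B$ and $\ker(S^*-\nu)=\ker(S_1^*-\nu)\subset\cH_1$ for all $\nu\in\mathbb{C}\setminus\mathbb{R}$, and every boundary triple $\{{\mathcal G}_1,\Gamma_0^{(1)},\Gamma_1^{(1)}\}$ for $S_1^*$ gives rise to a boundary triple for $S^*$ via \begin{align*} \Gamma_0(f_1\oplus f_2)=\Gamma_0^{(1)}f_1 \quad\text{and}\quad \Gamma_1(f_1\oplus f_2)=\Gamma_1^{(1)}f_1, \qquad f_1\oplus f_2\in{\text{\rm dom\,}} S_1^*\oplus{\text{\rm dom\,}} B, \end{align*} whose Weyl function coincides with the Weyl function of $\{{\mathcal G}_1,\Gamma_0^{(1)},\Gamma_1^{(1)}\}$. Since $A_0=S^*\upharpoonright\ker\Gamma_0=(S_1^*\upharpoonright\ker\Gamma_0^{(1)})\oplus B$, the part of the spectrum of $A_0$ contributed by $B$ cannot be recovered from the Weyl function.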
More abstract considerations from \cite{DM91,KL73,KL77,LT77} yield that the operator $A_0$ (and hence its spectrum) is determined up to unitary equivalence by the Weyl function if and only if the symmetric operator $S$ is simple or completely non-selfadjoint, that is, there exists no nontrivial subspace of $\cH$ which reduces $S$ to a selfadjoint operator. This condition can be reformulated equivalently as \begin{align}\label{eq:simpleIntro} \cH = \clsp \bigl\{ \gamma (\nu) g : \nu \in \mathbb{C} \setminus \mathbb{R}, \, g \in {\mathcal G} \bigr\}, \end{align} where $\gamma (\nu)=(\Gamma_0\upharpoonright\ker(S^*-\nu))^{-1}$ is the so-called $\gamma$-field and $\clsp$ denotes the closed linear span; cf.~\cite{K49}. Under the assumption that $S$ is simple, a description of the absolutely continuous and singular continuous spectrum in the framework of boundary triples and their Weyl functions was given in \cite{BMN02}; for more recent related work see also~\cite{BGW09,BHMNW09,BMNW08,BGP08,HMM13,M10,MN11,P13,R07,SW13}. The concept of boundary triples and their Weyl functions was extended in~\cite{BL07} in such a way that it is conveniently applicable to PDE problems. For that one defines boundary mappings $\Gamma_0,\Gamma_1$ on a suitable, smaller subset of the domain of the maximal operator and requires Green's identity \eqref{gi} only to hold on this subset; the definition of the Weyl function associated to such a quasi boundary triple $\{ {\mathcal G}, \Gamma_0, \Gamma_1 \}$ is as in \eqref{wf}, except that only solutions in the domain of the boundary maps are used; cf. Section~\ref{21}. For the second order elliptic operator $\cL$ in~\eqref{eq:diffexprIntro} restricted to the smooth domain $\Omega_{\rm i}\subset\mathbb{R}^n$ one may choose ${\mathcal G}=L^2(\Sigma)$, \begin{align*} \Gamma_0 u = u\vert_{\partial\Omega_{\rm i}} \quad \text{and} \quad \Gamma_1 u = -\frac{\partial u}{\partial\nu_{\cL_{\rm i}}}\Big|_{\partial\Omega_{\rm i}}, \qquad u \in H^2(\Omega_{\rm i}), \end{align*} in which case the corresponding Weyl function is (minus) the Dirichlet-to-Neumann map $-\Lambda_{\rm i}$. Based on orthogonal couplings of symmetric operators and extending abstract ideas in \cite{DHMS00}, the functions $M$ and $\widetilde M$ in \eqref{eq:WeylSchreodIntro} can also be interpreted as Weyl functions associated to properly chosen quasi boundary triples; e.g., $M$ corresponds to the pair of boundary mappings \begin{equation}\label{couple} \Gamma_0 u = \frac{\partial u_{\rm i}}{\partial \nu_{\cL_{\rm i}}} \Big|_\Sigma + \frac{\partial u_{\rm e}}{\partial \nu_{\cL_{\rm e}}} \Big|_\Sigma, \quad \Gamma_1 u = u |_\Sigma, \qquad u = u_{\rm i} \oplus u_{\rm e}, \quad u_{\rm i} |_\Sigma = u_{\rm e} |_\Sigma, \end{equation} where $u_j \in H^2 (\Omega_j)$, $j = \rm i, e$. Moreover, $\ker \Gamma_0$ is the domain of the unique selfadjoint operator $A$ associated with $\cL$ in $L^2 (\mathbb{R}^n)$. When trying to link the spectral properties of $A$ to the limiting behaviour of the function $M$ it is necessary to extend the known results for boundary triples to the more general notion of quasi boundary triples. Moreover, a subtle difficulty arises: The symmetric operator $S$ corresponding to the boundary mappings in \eqref{couple} may possess eigenvalues and thus in general is not simple. In the abstract part of the present paper we show how this difficulty can be overcome.
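The following informal observation, which anticipates the detailed treatment in Section~\ref{sec:4}, indicates how such eigenvalues arise. Suppose that $u\in{\text{\rm dom\,}} A$ satisfies $\cL u=\lambda u$ and that the trace of $u$ on the interface vanishes, $u|_\Sigma=0$. Writing $u=u_{\rm i}\oplus u_{\rm e}$ with $u_j\in H^2(\Omega_j)$, $j=\rm i,e$, the conormal derivatives of $u_{\rm i}$ and $u_{\rm e}$ on $\Sigma$ are computed with respect to normals pointing in opposite directions, so that for the boundary mappings in \eqref{couple} \begin{align*} \Gamma_0 u = \frac{\partial u_{\rm i}}{\partial \nu_{\cL_{\rm i}}} \Big|_\Sigma + \frac{\partial u_{\rm e}}{\partial \nu_{\cL_{\rm e}}} \Big|_\Sigma = 0 \quad\text{and}\quad \Gamma_1 u = u|_\Sigma = 0. \end{align*} Hence $u\in\ker\Gamma_0\cap\ker\Gamma_1$, that is, $u$ belongs to the domain of the underlying symmetric operator $S$, and $\lambda$ is an eigenvalue of $S$; this is consistent with the observation made after \eqref{eq:WeylSchreodIntro} that eigenvalues of $A$ with eigenfunctions vanishing on $\Sigma$ cannot be detected by $M$.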
In the general setting of quasi boundary triples and their Weyl functions we show that a local simplicity condition on an open interval (or, more generally, a Borel set) $\Delta \subset \mathbb{R}$ suffices to characterize the spectrum of $A_0$ in $\Delta$. To be more specific, we assume that \begin{align}\label{eq:localSimpleIntro} E (\Delta) \cH = \clsp \big\{ E (\Delta) \gamma (\nu) g : \nu \in \mathbb{C} \setminus \mathbb{R}, \, g \in {\text{\rm ran\,}} \Gamma_0 \big\}, \end{align} where $E (\Delta)$ denotes the spectral projection of $A_0 = S^* \upharpoonright \ker \Gamma_0$ on $\Delta$; this is a local version of the condition~\eqref{eq:simpleIntro}. Under this assumption we provide characterizations of the isolated and embedded eigenvalues and the corresponding eigenspaces, as well as the continuous, absolutely continuous and singular continuous spectrum of $A_0$ in~$\Delta$ in terms of the limits of $M (\lambda)$ when $\lambda$ approaches the real axis. For instance, we prove that the eigenvalues of $A_0$ in $\Delta$ are those $\lambda$, where $\lim_{\eta \searrow 0} i \eta M (\lambda + i \eta) g \neq 0$ for some $g \in {\text{\rm ran\,}} \Gamma_0$, and that the absolutely continuous spectrum of $A_0$ can be characterized by means of the points $\lambda$ where $0 < {\text{\rm Im}} (M (\lambda + i 0) g, g)_{\mathcal G}} \def\cH{{\mathcal H}} \def\cI{{\mathcal I} < \infty$. Moreover, we prove inclusions and provide conditions for the absence of singular continuous spectrum. Afterwards we apply the obtained results to the selfadjoint elliptic differential operator associated to $\cL$ in \eqref{eq:diffexprIntro} in $L^2 (\mathbb{R}^n)$. We prove that, despite the fact that the underlying symmetric operator fails to be simple in general, the whole absolutely continuous spectrum of $A_0$ can be recovered from the mapping $M$ in~\eqref{eq:WeylSchreodIntro}. Moreover, we prove that the eigenvalues of $A_0$ and the corresponding eigenfunctions can be characterized by limiting properties of $M$ as far as the eigenfunctions do not vanish on the interface $\Sigma$. A complete picture of the spectrum of $A_0$ is obtained when using the function $\widetilde M$ in~\eqref{eq:WeylSchreodIntro}. This paper is organized in the following way. In Section~2 we recall the basic facts on quasi boundary triples and corresponding Weyl functions and discuss the local simplicity property~\eqref{eq:localSimpleIntro} in detail. In Section~3 the connection between the spectra of selfadjoint operators and corresponding abstract Weyl functions is investigated. Section~4 contains the application of the abstract results to the mentioned PDE problems. Finally, let us fix some notation. For a selfadjoint operator $A$ in a Hilbert space $\cH$ we denote by $\sigma (A)$ ($\sigma_{\rm p} (A), \sigma_{\rm c} (A)$, $\sigma_{\rm ac} (A)$, $\sigma_{\rm sc} (A)$, $\sigma_{\rm s} (A)$, respectively) the spectrum (set of eigenvalues, continuous, absolutely continuous, singular continuous, singular spectrum, respectively) of $A$ and by $\rho (A) = \mathbb{C} \setminus \sigma (A)$ its resolvent set. \section{Quasi boundary triples, associated Weyl functions, and a local simplicity condition} In this preliminary section we first recall the concepts of quasi boundary triples, their $\gamma$-fields and their Weyl functions. Afterwards we discuss a local simplicity property of symmetric operators, which will be assumed to hold in most of the results of Section~\ref{sec:abstr}. 
\subsection{Quasi boundary triples}\label{21} The notion of quasi boundary triples was introduced in~\cite{BL07} as a generalization of the notions of boundary triples and generalized boundary triples, see~\cite{DHMS06,DM91,DM95,GG91,K75}. The basic definition is the following. \begin{definition}\label{def:qbt} Let $S$ be a closed, densely defined, symmetric operator in a separable Hilbert space $\cH$ and let $T \subset S^*$ be an operator whose closure coincides with $S^*$, i.e., $\overline T = S^*$. A triple $\{ {\mathcal G}, \Gamma_0, \Gamma_1\}$ consisting of a Hilbert space ${\mathcal G}$ and two linear mappings $\Gamma_0, \Gamma_1 : {\text{\rm dom\,}} T \to {\mathcal G}$ is called a {\em quasi boundary triple} for $S^*$ if the following conditions are satisfied. \begin{enumerate} \item The range of the mapping $\Gamma:=(\Gamma_0, \Gamma_1)^\top:{\text{\rm dom\,}} T\rightarrow {\mathcal G} \times {\mathcal G}$ is dense. \item The identity \begin{align}\label{eq:absGreen} (T u, v)_\cH - (u, T v)_\cH = (\Gamma_1 u, \Gamma_0 v)_{\mathcal G} - (\Gamma_0 u, \Gamma_1 v)_{\mathcal G} \end{align} holds for all $u, v \in {\text{\rm dom\,}} T$. \item The operator $A_0 := T \upharpoonright \ker \Gamma_0$ is selfadjoint in $\cH$. \end{enumerate} \end{definition} In the following we suppress the indices in the scalar products and simply write $(\cdot, \cdot)$, when no confusion can arise. We recall some facts on quasi boundary triples, which can be found in \cite{BL07,BL11}. Let $S$ be a closed, densely defined, symmetric operator in $\cH$. A quasi boundary triple $\{ {\mathcal G}, \Gamma_0, \Gamma_1\}$ for $S^*$ exists if and only if the defect numbers of $S$ are equal. What we will use frequently is that if $\{ {\mathcal G}, \Gamma_0, \Gamma_1\}$ is a quasi boundary triple for $S^*$ then ${\text{\rm dom\,}} S=\ker\Gamma_0\cap\ker\Gamma_1$. Recall also that a quasi boundary triple with the additional property ${\text{\rm ran\,}}(\Gamma_0,\Gamma_1)^\top={\mathcal G}\times{\mathcal G}$ becomes an (ordinary) boundary triple and that, in particular, in this case the boundary mappings $\Gamma_0,\Gamma_1$ are defined on ${\text{\rm dom\,}} S^*$ and \eqref{eq:absGreen} holds with $T$ replaced by $S^*$. In particular, in the case of finite defect numbers the notions of quasi boundary triples and (ordinary) boundary triples coincide. For more details on quasi boundary triples we refer to \cite{BL07,BL11}. In order to prove that a triple $\{ {\mathcal G}, \Gamma_0, \Gamma_1\}$ is a quasi boundary triple for the adjoint $S^*$ of a given symmetric operator $S$ it is not necessary to know $S^*$ explicitly, as the following useful proposition shows; cf.~\cite[Theorem~2.3]{BL07} for a proof.
\begin{proposition}\label{prop:ratetheorem} Let $T$ be a linear operator in a separable Hilbert space $\cH$, let ${\mathcal G}$ be a further Hilbert space, and let $\Gamma_0, \Gamma_1 : {\text{\rm dom\,}} T \to {\mathcal G}$ be linear mappings which satisfy the following conditions. \begin{enumerate} \item The range of the map $\Gamma = ( \Gamma_0, \Gamma_1 )^\top : {\text{\rm dom\,}} T \to {\mathcal G} \times {\mathcal G}$ is dense in ${\mathcal G}\times{\mathcal G}$ and $\ker \Gamma$ is dense in $\cH$. \item The identity~\eqref{eq:absGreen} holds for all $u,v\in{\text{\rm dom\,}} T$. \item There exists a selfadjoint restriction $A_0$ of $T$ in $\cH$ with ${\text{\rm dom\,}} A_0 \subset \ker \Gamma_0$. \end{enumerate} Then $S := T \upharpoonright \ker \Gamma$ is a closed, densely defined, symmetric operator in $\cH$, $\overline T = S^*$ holds, and $\{ {\mathcal G}, \Gamma_0, \Gamma_1 \}$ is a quasi boundary triple for $S^*$ with $T \upharpoonright \ker \Gamma_0 = A_0$. \end{proposition} \subsection{$\gamma$-fields and Weyl functions} Let $S$ be a closed, densely defined, symmetric operator in $\cH$ and let $\{{\mathcal G},\Gamma_0,\Gamma_1\}$ be a quasi boundary triple for $\overline T=S^*$ with $A_0=T\upharpoonright\ker\Gamma_0$. In order to define the $\gamma$-field and the Weyl function corresponding to $\{{\mathcal G},\Gamma_0,\Gamma_1\}$ note that the direct sum decomposition \begin{align*} {\text{\rm dom\,}} T = {\text{\rm dom\,}} A_0 \dotplus \ker (T - \lambda) = \ker \Gamma_0 \dotplus \ker (T - \lambda) \end{align*} holds for each $\lambda \in \rho (A_0)$ and that, in particular, the restriction of $\Gamma_0$ to $\ker (T - \lambda)$ is injective. The following definition is formally the same as for ordinary and generalized boundary triples. \begin{definition}\label{def:gammaweyl} Let $\{ {\mathcal G}, \Gamma_0, \Gamma_1\}$ be a quasi boundary triple for $\overline T=S^*$ and let $A_0=T\upharpoonright\ker\Gamma_0$. Then the {\em $\gamma$-field} $\gamma$ and the {\em Weyl function} $M$ associated with $\{{\mathcal G}, \Gamma_0, \Gamma_1\}$ are given by \begin{align*} \gamma (\lambda) = \big( \Gamma_0 \upharpoonright \ker (T - \lambda) \big)^{-1} \quad \text{and} \quad M (\lambda) = \Gamma_1 \gamma (\lambda),\quad\lambda\in\rho(A_0), \end{align*} respectively. \end{definition} It follows immediately from the definition that for each $\lambda \in \rho (A_0)$ the operator $M (\lambda)$ satisfies the equality \begin{align*} M (\lambda) \Gamma_0 u_\lambda = \Gamma_1 u_\lambda, \qquad u_\lambda \in \ker (T - \lambda), \end{align*} and that ${\text{\rm ran\,}} \gamma (\lambda) = \ker (T - \lambda)$ holds. We summarize some properties of the $\gamma$-field and the Weyl function. For the proofs of items (i)-(iv) in the next lemma we refer to~\cite[Proposition~2.6]{BL07}; item (v) is a simple consequence of (ii) and (iii).
\begin{lemma}\label{lem:gammaWeylProp} Let $\{ {\mathcal G}, \Gamma_0, \Gamma_1 \}$ be a quasi boundary triple for $\overline T = S^*$ with $\gamma$-field $\gamma$ and Weyl function $M$ and let $A_0=T\upharpoonright\ker\Gamma_0$. Then for $\lambda, \mu,\nu \in \rho (A_0)$ the following assertions hold. \begin{enumerate} \item $\gamma (\lambda)$ is a bounded operator from ${\mathcal G}$ to $\cH$ defined on the dense subspace ${\text{\rm ran\,}}\Gamma_0$. The adjoint $\gamma (\lambda)^* : \cH \to {\mathcal G}$ is defined on $\cH$ and is bounded. It is given by \begin{align*} \gamma (\lambda)^* = \Gamma_1 (A_0 - \overline \lambda)^{-1}. \end{align*} \item The identity \begin{align*} \gamma (\lambda)g = \left( I + (\lambda - \mu) (A_0 - \lambda)^{-1} \right) \gamma (\mu)g \end{align*} holds for all $g\in{\text{\rm ran\,}}\Gamma_0$. \item The $\gamma$-field and the Weyl function are connected via \begin{align*} (\lambda - \overline \mu) \gamma (\mu)^* \gamma (\lambda)g = M (\lambda)g - M (\mu)^*g,\qquad g\in {\text{\rm ran\,}}\Gamma_0, \end{align*} and $M (\overline \lambda) \subset M (\lambda)^*$ holds. \item $M (\lambda)$ is an operator in ${\mathcal G}$ defined on the dense subspace ${\text{\rm ran\,}}\Gamma_0$ and satisfies \begin{equation}\label{eq:Mformula} \begin{split} \qquad M (\lambda)g & = {\text{\rm Re}} M (\mu)g \\ & \qquad + \gamma (\mu)^* \left( (\lambda - {\text{\rm Re}} \mu) + (\lambda - \mu) (\lambda - \overline \mu) (A_0 - \lambda)^{-1} \right) \gamma (\mu)g \end{split} \end{equation} for all $g\in{\text{\rm ran\,}}\Gamma_0$. In particular, for every $g \in {\text{\rm ran\,}} \Gamma_0$ the function $\lambda \mapsto M (\lambda) g$ is holomorphic on $\rho (A_0)$ and each isolated singularity of $\lambda \mapsto M (\lambda) g$ is a pole of first order. Moreover, $\lim_{\eta \searrow 0} i \eta M (\zeta + i \eta) g$ exists for all $g \in {\text{\rm ran\,}} \Gamma_0$ and all $\zeta \in \mathbb{R}$. \item The identity \begin{align*} \gamma (\mu)^* (A_0 - \lambda)^{-1} \gamma (\nu)g = \frac{M (\lambda)g}{(\lambda - \nu) (\lambda - \overline \mu)} + \frac{M (\overline \mu)g}{(\lambda-\overline \mu) (\nu-\overline \mu )} + \frac{M (\nu)g}{(\nu - \lambda) ( \nu - \overline \mu)} \end{align*} holds for all $g\in{\text{\rm ran\,}}\Gamma_0$ if $\lambda\not=\nu$, $\lambda\not=\overline\mu$ and $\nu\not=\overline\mu$. \end{enumerate} \end{lemma} \subsection{Simple symmetric operators and local simplicity}\label{simplesec} Let $S$ be a closed, densely defined, symmetric operator in the separable Hilbert space $\cH$. Recall that $S$ is said to be {\em simple} or {\em completely non-selfadjoint} if there is no nontrivial $S$-invariant subspace $\cH_0$ of $\cH$ which reduces $S$ to a selfadjoint operator in $\cH_0$, see~\cite[Chapter~VII-81]{AG93}. According to \cite{K49} the simplicity of $S$ is equivalent to the density of the span of the defect spaces of $S$ in $\cH$, i.e., $S$ is simple if and only if \begin{align}\label{eq:simple} \cH = \clsp \bigl\{ \ker (S^* - \nu) : \nu \in \mathbb{C} \setminus \mathbb{R} \bigr\} \end{align} holds; here $\clsp$ stands for the closed linear span. Assume that $\{ {\mathcal G}, \Gamma_0, \Gamma_1 \}$ is a quasi boundary triple for $\overline T=S^*$ with $A_0 = T\upharpoonright\ker\Gamma_0$.
Then it follows that $S$ is simple if and only if \eqref{eq:simple} holds with $\ker(S^*-\nu)$ replaced by $\ker(T-\nu)$. Moreover, if $\gamma$ is the $\gamma$-field corresponding to the quasi boundary triple $\{ {\mathcal G}} \def\cH{{\mathcal H}} \def\cI{{\mathcal I}, \Gamma_0, \Gamma_1 \}$ we conclude that $S$ is simple if and only if \begin{align}\label{eq:simple2} \cH = \clsp \bigl\{ \gamma (\nu) g : \nu \in \mathbb{C} \setminus \mathbb{R},\, g \in {\text{\rm ran\,}} \Gamma_0 \bigr\} \end{align} holds. We also mention that the set $\mathbb{C} \setminus \mathbb{R}$ in \eqref{eq:simple2} can be replaced by any set $G \subset \rho (A_0)$ which has an accumulation point in each connected component of $\rho (A_0)$; cf. Lemma~\ref{simplelemma}~(v) below. Our aim is to generalize the notion of simplicity and to replace it by some weaker, local condition, which is satisfied in, e.g., the applications in Section~\ref{sec:4}. Instead of \eqref{eq:simple2} we will assume that \begin{align}\label{eq:localSimple'} E (\Delta) \cH=\clsp \bigl\{ E (\Delta) \gamma (\nu) g : \nu \in \mathbb{C} \setminus \mathbb{R},\, g \in {\text{\rm ran\,}} \Gamma_0 \bigr\} \end{align} holds on a Borel set (later on usually an open interval) $\Delta$; here $E(\cdot)$ denotes the spectral measure of~$A_0$. This condition will be imposed in many of the general results in Section~\ref{sec:abstr}. In the next lemma we discuss this condition and some consequences of it. \begin{lemma}\label{simplelemma} Let $S$ be a closed, densely defined, symmetric operator in $\cH$ and let $\{ {\mathcal G}} \def\cH{{\mathcal H}} \def\cI{{\mathcal I}, \Gamma_0, \Gamma_1 \}$ be a quasi boundary triple for $\overline T=S^*$ with $A_0=T\upharpoonright\ker\Gamma_0$. Then the following holds. \begin{enumerate} \item If $S$ is simple then \eqref{eq:localSimple'} is satisfied for every Borel set $\Delta\subset\dR$. \item If \eqref{eq:localSimple'} holds for some Borel set $\Delta\subset\dR$ then \begin{align}\label{eq:localSimple2} E (\Delta^\prime) \cH=\clsp \bigl\{ E (\Delta^\prime) \gamma (\nu) g : \nu \in \mathbb{C} \setminus \mathbb{R},\, g \in {\text{\rm ran\,}} \Gamma_0 \bigr\} \end{align} holds for every Borel set $\Delta^\prime\subset\Delta$. \item If $\delta_1, \delta_2, \dots$ are disjoint open intervals such that \begin{align}\label{eq:zerlegungSimple} E (\delta_j) \cH=\clsp \bigl\{ E (\delta_j) \gamma (\nu) g : \nu \in \mathbb{C} \setminus \mathbb{R},\, g \in {\text{\rm ran\,}} \Gamma_0 \bigr\} \quad \text{for~all}~j \end{align} then~\eqref{eq:localSimple'} holds for $\Delta = \bigcup_{j} \delta_j$. \item If \eqref{eq:localSimple'} holds for some Borel set $\Delta\subset\dR$ then $\Delta\cap\sigma_{\rm p} (S)=\emptyset$. \item If \eqref{eq:localSimple'} holds and $G$ is a subset of $\rho (A_0)$ which has an accumulation point in each connected component of $\rho (A_0)$ then \begin{align}\label{eq:localSimple99} E (\Delta) \cH = \clsp \bigl\{ E (\Delta) \gamma (\nu) g : \nu \in G,\, g \in {\text{\rm ran\,}} \Gamma_0 \bigr\}. \end{align} \end{enumerate} \end{lemma} \begin{proof} Assertion (i) is a consequence of item (ii) since \eqref{eq:localSimple'} holds with $\Delta = \dR$ when $S$ is simple. For~(ii) note that the inclusion $\supset$ in~\eqref{eq:localSimple2} clearly holds. For the converse inclusion let $u\in E(\Delta^\prime)\cH$. 
As $\Delta^\prime\subset\Delta$ we have $u\in E(\Delta)\cH$ and hence there exists a sequence $(v_n)$, $n=1,2,\dots$, in the linear span of $\{ E (\Delta) \gamma (\nu) g : \nu \in \mathbb{C} \setminus \mathbb{R},\, g \in {\text{\rm ran\,}} \Gamma_0 \}$ which converges to $u$. Then $(E(\Delta^\prime)v_n)$, $n=1,2,\dots$, is a sequence in the linear span of $\{ E (\Delta^\prime) \gamma (\nu) g : \nu \in \mathbb{C} \setminus \mathbb{R},\, g \in {\text{\rm ran\,}} \Gamma_0 \}$ which converges to $E(\Delta^\prime)u=u$. In order to prove (iii) let $\delta_j$ be as in the assumptions and let $\Delta = \bigcup_{j} \delta_j$. The inclusion $\supset$ in~\eqref{eq:localSimple'} again is obvious. For the converse inclusion let $u\in E(\Delta)\cH$ and define \begin{equation}\label{htildemalwieder} \widetilde \cH := \clsp \bigl\{ E (\Delta) \gamma (\nu) g : \nu \in \mathbb{C} \setminus \mathbb{R},\, g \in {\text{\rm ran\,}} \Gamma_0 \bigr\}. \end{equation} Since \begin{equation*} u=E(\Delta) u=\sum_{j} E(\delta_j)u \end{equation*} it is sufficient to show $E(\delta_j)u\in\widetilde \cH$ for all $j$. Note first that by assumption \eqref{eq:zerlegungSimple} we have \begin{equation*} E(\delta_j)u \in \clsp \bigl\{ E (\delta_j) \gamma (\mu) h : \mu \in \mathbb{C} \setminus \mathbb{R},\, h \in {\text{\rm ran\,}} \Gamma_0 \bigr\} \end{equation*} and hence the assertion follows if we verify \begin{equation}\label{schoenwaers} E (\delta_j) \gamma (\mu) h \in \widetilde \cH \end{equation} for all $\mu \in \mathbb{C} \setminus \mathbb{R}$, $h \in {\text{\rm ran\,}} \Gamma_0$, and all $j$. For this purpose consider some fixed $ E (\delta_j) \gamma (\mu) h$. According to Lemma~\ref{lem:gammaWeylProp}~(ii) we have \begin{align*} \gamma (\nu) g = \gamma (\mu) g + (\nu - \mu) (A_0 - \nu)^{-1} \gamma (\mu) g \end{align*} for all $\nu \in \mathbb{C} \setminus \mathbb{R}$ and all $g \in {\text{\rm ran\,}} \Gamma_0$, and hence $\widetilde\cH$ in \eqref{htildemalwieder} can be rewritten in the form \begin{align*} \widetilde \cH = \clsp \bigg\{ E (\Delta) \gamma (\mu) g, E (\Delta) (A_0 - \nu)^{-1} \gamma (\mu) g : \nu \in \mathbb{C} \setminus \mathbb{R}, g \in {\text{\rm ran\,}} \Gamma_0 \bigg\}. \end{align*} It follows that for $\eta, \varepsilon > 0$ the element \begin{align*} \int_{\alpha_j + \eta}^{\beta_j - \eta} E (\Delta) \big( ( A_0 - (\lambda + i \varepsilon) )^{-1} - (A_0 - (\lambda - i \varepsilon) )^{-1} \big) \gamma (\mu) h \,d \lambda \end{align*} belongs to $\widetilde \cH$, where we have written $\delta_j = (\alpha_j, \beta_j)$. From this and Stone's formula it follows \begin{align*} E (\delta_j) \gamma (\mu) h = E (\delta_j) E (\Delta) \gamma (\mu) h \in \widetilde \cH, \end{align*} which proves \eqref{schoenwaers} and, hence, yields the inclusion $\subset$ in \eqref{eq:localSimple'}. Item (iii) is proved. In order to verify (iv), assume that $S u = \lambda u$ for some $u\in{\text{\rm dom\,}} S$ and $\lambda \in \Delta$. Then $A_0 u = \lambda u$ and hence $u \in E (\Delta) \cH$. On the other hand, for $g \in {\text{\rm ran\,}} \Gamma_0$ and $\nu \in \mathbb{C} \setminus \mathbb{R}$ it follows together with Lemma~\ref{lem:gammaWeylProp}~(i) that \begin{align*} (u, E (\Delta) \gamma (\nu) g) = (\gamma (\nu)^* u, g) = \big( \Gamma_1 (A_0 - \overline \nu)^{-1} u, g \big) = (\lambda - \overline \nu)^{-1} (\Gamma_1 u, g) = 0, \end{align*} as $u \in {\text{\rm dom\,}} S \subset \ker \Gamma_1$. 
Hence, $u\in E (\Delta) \cH$ is orthogonal to the linear span of the elements $E(\Delta)\gamma(\nu)g$, $\nu\in\mathbb{C} \setminus \mathbb{R}$, $g\in{\text{\rm ran\,}}\Gamma_0$, which is dense in $E (\Delta) \cH$ by \eqref{eq:localSimple'}. This implies $u = 0$ and thus $S$ does not possess eigenvalues in $\Delta$. It remains to show (v). The inclusion $\supset$ in \eqref{eq:localSimple99} is obvious. In order to prove the inclusion $\subset$ it suffices to verify that the vectors $E (\Delta) \gamma (\nu) g$, $g \in {\text{\rm ran\,}} \Gamma_0$, $\nu \in G$, span a dense set in $E (\Delta)\cH$. Suppose that $E (\Delta) u$ is orthogonal to this set, that is, \begin{equation}\label{samstag2} 0 = (E (\Delta) \gamma (\nu) g,E (\Delta) u) \end{equation} holds for all $g \in {\text{\rm ran\,}} \Gamma_0$ and all $\nu \in G$. Since $\rho(A_0)\ni\nu\mapsto \gamma (\nu) g$ is analytic for each $g \in {\text{\rm ran\,}} \Gamma_0$ (see Lemma~\ref{lem:gammaWeylProp}~(ii)) it follows that for each $g \in {\text{\rm ran\,}} \Gamma_0$ the function $\nu\mapsto (E (\Delta) \gamma (\nu) g, E (\Delta) u)$ is analytic on $\rho(A_0)$, and hence \eqref{samstag2} implies that this function is identically equal to zero. Now \eqref{eq:localSimple'} yields $E (\Delta) u=0$ and (v) follows. \end{proof} \section{Spectral properties of selfadjoint operators and corresponding Weyl functions}\label{sec:abstr} This section contains the main abstract results of this paper. We describe the spectral properties of a given selfadjoint operator by means of a corresponding Weyl function. For this we fix the following setting. \begin{assumption}\label{ass} Let $S$ be a closed, densely defined, symmetric operator in the separable Hilbert space $\cH$ and let $\{ {\mathcal G}} \def\cH{{\mathcal H}} \def\cI{{\mathcal I}, \Gamma_0, \Gamma_1\}$ be a quasi boundary triple for $\overline T = S^*$ with corresponding $\gamma$-field $\gamma$ and Weyl function $M$. Moreover, let $A_0 = T \upharpoonright \ker \Gamma_0$ and denote by $E (\cdot)$ the spectral measure of $A_0$. \end{assumption} \subsection{Eigenvalues and corresponding eigenspaces} Let us start with a characterization of the isolated and embedded eigenvalues as well as the corresponding eigenspaces of a selfadjoint operator by means of an associated Weyl function. We write s-$\lim$ for the strong limit of an operator function. \begin{theorem}\label{thm:eigenGeneral} Let Assumption~\ref{ass} be satisfied. Then $\lambda\in\dR$ is an eigenvalue of $A_0$ such that $\cK := \ker (A_0 - \lambda) \ominus \ker (S - \lambda) \neq \{0\}$ if and only if $R_\lambda M :=$ \textup{s}-$\lim_{\eta \searrow 0} i \eta M (\lambda + i \eta) \neq 0$. If $\dim \cK < \infty$ then the mapping \begin{align}\label{eq:tau} \tau : \cK \to {\text{\rm ran\,}} R_\lambda M, \quad u \mapsto \Gamma_1 u, \end{align} is bijective; if $\dim \cK = \infty$ then the mapping \begin{align}\label{eq:taugen} \tau : \cK \to \cl_\tau \bigl({\text{\rm ran\,}} R_\lambda M\bigr), \quad u \mapsto \Gamma_1 u, \end{align} is bijective, where $\cl_\tau$ denotes the closure in the normed space ${\text{\rm ran\,}} \tau$. \end{theorem} \begin{remark}\label{rem:residue} Recall that the limit $(R_\lambda M) g = \lim_{\eta \searrow 0} i \eta M (\lambda + i \eta)g$ exists for all $\lambda\in\dR$ and all $g\in{\text{\rm ran\,}}\Gamma_0$ by Lemma~\ref{lem:gammaWeylProp}~(iv). 
Moreover, if $\lambda$ is an isolated singularity of $M$, that is, there exists an open neighborhood $\cO$ of $\lambda$ such that $M$ is strongly holomorphic on $\cO \setminus \{ \lambda \}$, then $R_\lambda M \neq 0$ if and only if for some $g \in {\text{\rm ran\,}} \Gamma_0$ the ${\mathcal G}} \def\cH{{\mathcal H}} \def\cI{{\mathcal I}$-valued function $\zeta\mapsto M(\zeta) g$ has a pole at $\lambda$. In this case $R_\lambda M$ coincides with the residue $\textup{Res}_\lambda M$ of $M$ at $\lambda$ in the strong sense, i.e., \begin{align*} (R_\lambda M) g = (\textup{Res}_\lambda M)g= \frac{1}{2 \pi i} \int_\cC M (z) g \,d z, \quad g \in {\text{\rm ran\,}} \Gamma_0, \end{align*} where $\cC$ denotes the boundary of an open ball $B$ such that $M$ is strongly holomorphic in a neighborhood of $\overline B$ except the point~$\lambda$. We also remark that without additional assumptions the Weyl function is not able to distinguish between isolated and embedded eigenvalues of $A_0$; cf.~Proposition~\ref{prop:isolatedEV} below. \end{remark} \begin{proof}[Proof of Theorem~\ref{thm:eigenGeneral}] Let $\lambda \in \mathbb{R}$ be fixed. Note first that the mapping $\Gamma_1 \upharpoonright \cK$ is injective. Indeed, for $u \in \cK =\ker (A_0 - \lambda) \ominus \ker (S - \lambda)$ with $\Gamma_1 u = 0$ we have $u \in \ker\Gamma_0\cap\ker\Gamma_1={\text{\rm dom\,}} S$ and $S u = \lambda u$; hence $u = 0$. It is our aim to prove the inclusions \begin{align}\label{eq:incFlambda} {\text{\rm ran\,}} R_\lambda M \subset {\text{\rm ran\,}} (\Gamma_1 \upharpoonright \cK ) \subset \overline{{\text{\rm ran\,}} R_\lambda M}. \end{align} From this it follows immediately that the mapping $\tau$ in~\eqref{eq:tau} and \eqref{eq:taugen} is well-defined and bijective. In order to verify~\eqref{eq:incFlambda} let $g \in {\text{\rm ran\,}} \Gamma_0$ and denote by $E(\cdot)$ the spectral measure of $A_0$. Then \begin{align}\label{calc} \big\| i \eta (A_0 - & (\lambda + i \eta))^{-1} \gamma (\nu) g + E (\{\lambda\}) \gamma (\nu) g \big\|^2 \nonumber \\ & = \int_\mathbb{R} \left| \frac{i \eta}{t - (\lambda + i \eta)} + \chi_{\{\lambda\}}(t) \right|^2 d (E (t) \gamma (\nu) g, \gamma (\nu) g ) \to 0 \quad \text{as} \quad \eta \searrow 0 \end{align} holds for all $\nu \in \mathbb{C} \setminus \mathbb{R}$. Since by Lemma~\ref{lem:gammaWeylProp}~(i) the operator $\gamma (\nu)^*$ is bounded, it follows from~\eqref{calc} \begin{align}\label{eq:Plambda} \lim_{\eta \searrow 0} i \eta \gamma (\nu)^* (A_0 - (\lambda + i \eta))^{-1} \gamma (\nu) g = - \gamma (\nu)^* E (\{\lambda\}) \gamma (\nu) g \end{align} for all $\nu \in \mathbb{C} \setminus \mathbb{R}$, and together with Lemma~\ref{lem:gammaWeylProp}~(v) we conclude that the limit on the left hand side of \eqref{eq:Plambda} coincides with \begin{equation}\label{mitternacht} \lim_{\eta \searrow 0} i \eta \, \frac{M (\lambda + i \eta)g}{((\lambda + i \eta) - \nu) ((\lambda + i \eta) - \overline \nu)}=\frac{(R_\lambda M) g}{(\lambda-\nu)(\lambda-\overline\nu)}. 
\end{equation} With the help of Lemma~\ref{lem:gammaWeylProp}~(i), \eqref{eq:Plambda} and \eqref{mitternacht} we obtain \begin{align*} \Gamma_1 & E (\{\lambda\}) \gamma (\nu) g \nonumber \\ & = \Gamma_1 (A_0 - \overline \nu)^{-1} (A_0 - \overline \nu) E (\{\lambda\}) \gamma (\nu) g = (\lambda - \overline \nu) \gamma (\nu)^* E (\{\lambda\}) \gamma (\nu) g \nonumber \\ & = - (\lambda - \overline \nu) \lim_{\eta \searrow 0} i \eta \gamma (\nu)^* (A_0 - (\lambda + i \eta))^{-1} \gamma (\nu) g = \frac{1}{\nu - \lambda}(R_\lambda M) g \end{align*} for all $\nu \in \mathbb{C} \setminus \mathbb{R}$. Denoting by $P$ the orthogonal projection in $\cH$ onto $\cK = \ker (A_0 - \lambda) \ominus \ker (S - \lambda)$ it follows \begin{align}\label{eq:stronglimit} \Gamma_1 P \gamma (\nu) g = \frac{1}{\nu - \lambda}(R_\lambda M) g, \end{align} where we have used $\Gamma_1 (\ker (S - \lambda)) = \{0\}$. From this the first inclusion in~\eqref{eq:incFlambda} follows immediately. For the second inclusion in~\eqref{eq:incFlambda} note that the mapping $\Gamma_1\upharpoonright\cK$ is continuous as $\Gamma_ 1u=\gamma(\mu)^*(A_0-\overline\mu)u=(\lambda-\overline\mu)\gamma(\mu)^*u$ holds for all $u\in\cK$ by Lemma~\ref{lem:gammaWeylProp}~(i). Moreover, for each $\nu \in \mathbb{C} \setminus \mathbb{R}$ the linear space $\{P \gamma (\nu) g : g \in {\text{\rm ran\,}} \Gamma_0 \}$ is dense in $\cK$. In fact, fix $\nu \in \mathbb{C} \setminus \mathbb{R}$ and let $u \in \cK$ be orthogonal to $P \gamma (\nu) g$ for all $g \in {\text{\rm ran\,}} \Gamma_0$. Then \begin{align*} 0 = (u, P \gamma (\nu) g) = (\gamma (\nu)^* u, g) = (\Gamma_1 (A_0 - \overline \nu)^{-1} u, g) = (\lambda - \overline \nu)^{-1} (\Gamma_1 u, g) \end{align*} by Lemma~\ref{lem:gammaWeylProp}~(i), which implies $\Gamma_1 u = 0$ as ${\text{\rm ran\,}}\Gamma_0$ is dense. Hence we have $u \in \ker\Gamma_0\cap\ker\Gamma_1={\text{\rm dom\,}} S$ and this implies $u \in \cK\cap \ker (S - \lambda)$, so that $u=0$. Now the second inclusion in~\eqref{eq:incFlambda} follows together with \eqref{eq:stronglimit} and the fact that $\Gamma_1\upharpoonright\cK$ is continuous. Hence the mapping $\tau$ in~\eqref{eq:taugen} is well-defined and bijective. If $\cK$ is finite-dimensional then clearly the closure in~\eqref{eq:taugen} can be omitted and we end up with the bijectivity of~\eqref{eq:tau}. \end{proof} As an immediate consequence of Theorem~\ref{thm:eigenGeneral} all eigenvalues of $A_0$ which are not eigenvalues of $S$ can be characterized as ``generalized poles'' of the Weyl function. \begin{corollary}\label{thm:eigen} Let Assumption~\ref{ass} be satisfied, and assume that $\lambda \in \mathbb{R}$ is not an eigenvalue of $S$. Then $\lambda$ is an eigenvalue of $A_0$ if and only if $R_\lambda M :=$ \textup{s}-$\lim_{\eta \searrow 0} i \eta M (\lambda + i \eta) \neq 0$. If the multiplicity of the eigenvalue $\lambda$ is finite then the mapping \begin{align*} \tau : \ker (A_0 - \lambda) \to {\text{\rm ran\,}} R_\lambda M, \quad u \mapsto \Gamma_1 u, \end{align*} is bijective; if the multiplicity of the eigenvalue $\lambda$ is infinite then the mapping \begin{align*} \tau : \ker (A_0 - \lambda) \to \cl_\tau \bigl({\text{\rm ran\,}} R_\lambda M\bigr), \quad u \mapsto \Gamma_1 u, \end{align*} is bijective, where $\cl_\tau$ denotes the closure in the normed space ${\text{\rm ran\,}} \tau$. 
\end{corollary} \subsection{Continuous, absolutely continuous, and singular continuous spectra} In this subsection we describe the continuous, absolutely continuous, and singular continuous spectrum of a selfadjoint operator $A_0$ by means of the limits of an associated Weyl function $M$. Again we fix the setting in Assumption~\ref{ass}. It is clear that an additional minimality or simplicity condition must be imposed. Usually one assumes that the underlying symmetric operator $S$ is simple; cf. \cite{BMN02}. However, for our purposes the weaker assumption of local simplicity in Section~\ref{simplesec} is more appropriate: in order to characterize the spectrum of $A_0$ in an open interval $\Delta \subset \mathbb{R}$ we assume that \begin{align}\label{eq:localSimple} E (\Delta) \cH = \clsp \bigl\{ E (\Delta) \gamma (\nu) g : \nu \in \mathbb{C} \setminus \mathbb{R},\, g \in {\text{\rm ran\,}} \Gamma_0 \bigr\}. \end{align} For instance, in Theorem~\ref{thm:eigenGeneral} it turned out that an eigenvalue $\lambda$ of $A_0$ with its full multiplicity can only be detected by the Weyl function if $\lambda \notin \sigma_{\rm p} (S)$. This condition corresponds to the identity~\eqref{eq:localSimple} with $\Delta$ replaced by $\{\lambda\}$; cf.~Lemma~\ref{simplelemma}~(iv). In the next theorem we agree to say that the Weyl function $M$ can be continued analytically to some point $\lambda \in \mathbb{R}$ if there exists an open neighborhood $\cO$ of $\lambda$ in~$\mathbb{C}$ such that $\zeta\mapsto M (\zeta) g$ can be continued analytically to $\cO$ for all $g \in {\text{\rm ran\,}} \Gamma_0$. We mention that the proof of (i) is similar to the proof of \cite[Theorem 1.1]{DLS93}. \begin{theorem}\label{thm:specTotal} Let Assumption~\ref{ass} be satisfied, and let $\Delta \subset \mathbb{R}$ be an open interval such that the condition~\eqref{eq:localSimple} is satisfied. Then the following assertions hold for each $\lambda \in \Delta$. \begin{enumerate} \item $\lambda \in \rho (A_0)$ if and only if $M$ can be continued analytically into $\lambda$. \item $\lambda \in \sigma_{\rm c} (A_0)$ if and only if \textup{s}-$\lim_{\eta \searrow 0} i \eta M (\lambda + i \eta) = 0$ and $M$ cannot be continued analytically into $\lambda$. \end{enumerate} If $S$ is simple then~{\rm (i)} and~{\rm (ii)} hold for all $\lambda \in \mathbb{R}$. \end{theorem} \begin{proof} (i) Recall first that by Lemma~\ref{lem:gammaWeylProp}~(iv) the function $\lambda\mapsto M (\lambda) g$ is analytic on $\rho (A_0)$ for each $g \in {\text{\rm ran\,}} \Gamma_0$, which proves the implication $(\Rightarrow)$. In order to verify the implication $(\Leftarrow)$ in (i), let us assume that $M$ can be continued analytically to some $\lambda \in \Delta$, that is, there exists an open neighborhood $\cO$ of $\lambda$ in $\mathbb{C}$ with $\cO \cap \mathbb{R} \subset \Delta$ such that $\zeta\mapsto M (\zeta) g$ can be continued analytically to $\cO$ for each $g \in {\text{\rm ran\,}} \Gamma_0$. Choose $a, b \in \mathbb{R}$ with $\lambda \in (a, b)$, $[a, b] \subset \cO$, and $a, b \notin \sigma_{\rm p} (A_0)$. The spectral projection $E ( (a, b) )$ of $A_0$ corresponding to the interval $(a, b)$ is given by Stone's formula \begin{align}\label{stone} E ( (a, b) ) = \textup{s-}\hspace{-0.8mm}\lim_{\delta \searrow 0} \frac{1}{2 \pi i} \int_a^b \left( (A_0 - (t + i \delta) )^{-1} - (A_0 - (t - i \delta) )^{-1} \right) d t, \end{align} where the integral on the right-hand side is understood in the strong sense. 
Using the identity in Lemma~\ref{lem:gammaWeylProp}~(v) and~\eqref{stone} a straightforward calculation leads to \begin{equation*} \begin{split} \left\| E ( (a,b) ) \gamma (\nu) g \right\|^2 & = \bigl(\gamma(\nu)^* E ( (a,b) ) \gamma (\nu) g, g\bigr) \\ & = \lim_{\delta \searrow 0} \frac{1}{2 \pi i} \int_a^b \Bigl( \bigl(\gamma(\nu)^*(A_0 - (t + i \delta) )^{-1}\gamma(\nu)g,g\bigr ) \\ & \qquad \qquad \qquad \qquad - \bigl(\gamma(\nu)^* (A_0 - (t - i \delta) )^{-1}\gamma(\nu)g,g\bigr) \Bigr) d t=0 \end{split} \end{equation*} for all $g \in {\text{\rm ran\,}} \Gamma_0$ and all $\nu \in \mathbb{C} \setminus \mathbb{R}$, since $\zeta\mapsto ( M (\zeta) g, g)$ admits an analytic continuation into $\cO$ for all $g \in {\text{\rm ran\,}} \Gamma_0$. Thus the assumption~\eqref{eq:localSimple} and $[a, b] \subset \Delta$ together with Lemma~\ref{simplelemma}~(ii) imply $E ( (a, b) ) = 0$. In particular, $\lambda \in \rho (A_0)$. (ii) According to Lemma~\ref{simplelemma}~(iv) the condition~\eqref{eq:localSimple} implies that $S$ does not have eigenvalues in $\Delta$. Hence item~(ii) follows immediately from item~(i) and Corollary~\ref{thm:eigen}. If $S$ is simple then by Lemma~\ref{simplelemma}~(i) the assumption \eqref{eq:localSimple} is satisfied for $\Delta = \mathbb{R}$. Hence~(i) and~(ii) hold for all $\lambda \in \mathbb{R}$. \end{proof} Now we return to the characterization of eigenvalues. We formulate a sufficient condition under which the Weyl function is able to distinguish between isolated and embedded eigenvalues. \begin{proposition}\label{prop:isolatedEV} Let Assumption~\ref{ass} be satisfied and let $\Delta \subset \mathbb{R}$ be an open interval. Assume that the condition~\eqref{eq:localSimple} is satisfied and let $\lambda \in \Delta$. Then all assertions of Corollary~\ref{thm:eigen} hold for $\lambda$. Moreover, $\lambda$ is an isolated eigenvalue of $A_0$ if and only if $\lambda$ is a pole in the strong sense of $M$. In this case $R_\lambda M$ is the residue of $M$ in the strong sense at $\lambda$; cf.~Remark~\ref{rem:residue}. \end{proposition} \begin{proof} Let $\lambda \in \mathbb{R}$ and let $\Delta \subset \mathbb{R}$ be an open interval with $\lambda \in \Delta$ such that~\eqref{eq:localSimple} holds. Then $\lambda\not\in\sigma_{\rm p} (S)$ by Lemma~\ref{simplelemma}~(iv) and hence the assertions in Corollary~\ref{thm:eigen} hold for $\lambda$. Moreover, if $\lambda$ is an isolated eigenvalue of $A_0$ then by Lemma~\ref{lem:gammaWeylProp}~(iv) there exists an open neighborhood $\cO$ of $\lambda$ such that $\zeta\mapsto M (\zeta) g$ is holomorphic on $\cO \setminus \{ \lambda \}$ for all $g \in {\text{\rm ran\,}} \Gamma_0$. From Corollary~\ref{thm:eigen} we conclude that there exists $g \in {\text{\rm ran\,}} \Gamma_0$ such that \begin{align}\label{eq:pole} \lim_{\eta \searrow 0} i \eta M (\lambda + i \eta) g \neq 0. \end{align} Hence Lemma~\ref{lem:gammaWeylProp}~(iv) implies that $M$ has a pole of first order in the strong sense at $\lambda$. Conversely, if $M$ has a pole in the strong sense at $\lambda$ then there exists $g \in {\text{\rm ran\,}} \Gamma_0$ such that the function $\zeta\mapsto M (\zeta) g$ has a pole at $\lambda$. According to Lemma~\ref{lem:gammaWeylProp}~(iv) the order of this pole is one and, hence, \begin{align*} \lim_{\eta \searrow 0} i \eta M (\lambda + i \eta) g = \textup{Res}_\lambda \bigl( M (\cdot) g \bigr) \neq 0, \end{align*} that is, \eqref{eq:pole} is satisfied. It follows with the help of Corollary~\ref{thm:eigen} that $\lambda$ is an eigenvalue of $A_0$.
Moreover, Theorem~\ref{thm:specTotal}~(i) implies that there exists an open neighborhood $\cO$ of $\lambda$ in $\mathbb{C}$ such that $\cO \setminus \{\lambda\} \subset \rho (A_0)$. Hence $\lambda$ is isolated in the spectrum of $A_0$. This completes the proof. \end{proof} Next we discuss the relation of the function $M$ to the absolutely continuous and singular continuous spectrum of $A_0$. In the special case of ordinary boundary triples and $\Delta=\dR$ the following results reduce to those in \cite{BMN02}. For our purposes a localized version and an extension to quasi boundary triples is necessary. The proofs presented here are somewhat more direct than those in \cite{BMN02}; in particular, the integral representation of Nevanlinna functions and the corresponding measures are avoided. In the following for a finite Borel measure $\mu$ on $\mathbb{R}$ we denote the set of all growth points of $\mu$ by $\supp \mu$, that is, \begin{align*} \supp \mu = \big\{ x \in \mathbb{R} : \mu ( (x - \varepsilon, x + \varepsilon) ) > 0~\text{for~all}~\varepsilon > 0 \big\}. \end{align*} Note that $\supp \mu$ is closed with $\mu (\mathbb{R} \setminus \supp \mu) = 0$ and that $\supp \mu$ is minimal with this property, that is, each closed set $S \subset \mathbb{R}$ with $\mu (\mathbb{R} \setminus S) = 0$ satisfies $\supp \mu \subset S$. Moreover, for a Borel set $\chi \subset \mathbb{R}$ we define the {\em absolutely continuous closure} (also called {\em essential closure}) by \begin{align*} \clac (\chi) := \big\{ x \in \mathbb{R} : \left|(x - \varepsilon, x + \varepsilon) \cap \chi \right| > 0 ~\text{for~all}~\varepsilon > 0 \big\}, \end{align*} where $| \cdot |$ denotes the Lebesgue measure, and the {\em continuous closure} by \begin{align}\label{eq:clc} \clc (\chi) := \big\{ x \in \mathbb{R} : (x - \varepsilon, x + \varepsilon) \cap \chi~\text{is~not~countable~for~all}~\varepsilon > 0 \big\}. \end{align} Observe that $\clac (\chi)$ and $\clc (\chi)$ both are closed and that $\clac (\chi) \subset \clc (\chi) \subset \overline\chi$ holds, but in general the converse inclusions are not true. In fact, $\clac (\chi)=\emptyset$ if and only if $|\chi|=0$, and $\clc (\chi) = \emptyset$ if and only if $\chi$ is countable. The following lemma can partly be found in, e.g., the monographs~\cite{D74} or~\cite{T09}. \begin{lemma}\label{mesLem} Let $\mu$ be a finite Borel measure on $\mathbb{R}$ and denote by $F$ its Borel transform, \begin{align*} F (\lambda) = \int_{\mathbb{R}} \frac{1}{t - \lambda} d \mu (t), \quad \lambda \in \mathbb{C} \setminus \mathbb{R}. \end{align*} Then the limit ${\text{\rm Im}} F (x + i0) = \lim_{y \searrow 0} {\text{\rm Im}} F (x + i y)$ exists and is finite for Lebesgue almost all $x \in \mathbb{R}$. Let $\mu_{\rm ac}$ and $\mu_{\rm s}$ be the absolutely continuous and singular part, respectively, of $\mu$ in the Lebesgue decomposition $\mu = \mu_{\rm ac} + \mu_{\rm s}$, and decompose $\mu_{\rm s}$ into the singular continuous part $\mu_{\rm sc}$ and the pure point part. Then the following assertions hold. \begin{enumerate} \item $\supp \mu_{\rm ac} = \clac ( \{ x \in \mathbb{R} : 0 < {\text{\rm Im}} F (x + i 0) < +\infty \} )$. \item $\supp \mu_{\rm s} \subset \overline{\{ x \in \mathbb{R} : {\text{\rm Im}} F (x + i 0) = + \infty\}}$. \item $\supp \mu_{\rm sc} \subset \clc (\{ x \in \mathbb{R} : {\text{\rm Im}} F (x + i 0) = + \infty, \lim_{y \searrow 0} y F ( x + i y) = 0 \})$. 
\end{enumerate} \end{lemma} \begin{proof} From~\cite[Lemma~3.14 and Theorem~3.23]{T09} it follows immediately that assertion~(i) is true, that the limit ${\text{\rm Im}} F (x + i 0)$ exists and is finite for Lebesgue almost all $x \in \mathbb{R}$, and that \begin{align}\label{eq:singSupp} \mu_{\rm s} \big( \mathbb{R} \setminus \{ x \in \mathbb{R} : {\text{\rm Im}} F (x + i 0) = + \infty \} \big) = 0, \end{align} which implies~(ii). In order to verify (iii) note first that $\lim_{y \searrow 0} y F (x + i y) = i \mu (\{ x \} )$ holds for all $x \in \mathbb{R}$ since \begin{align*} \big| y F (x + i y) - i \mu ( \{x\} ) \big| \leq \int_{\mathbb{R}} \left| \frac{y}{t - (x + i y)} - i \mathbbm{1}_{\{x\}} (t) \right| d \mu (t) \to 0, \quad y \searrow 0. \end{align*} In particular, $\mu ( \{x\} ) \neq 0$ if and only if $\lim_{y \searrow 0} y F (x + i y) \neq 0$. Hence it follows from~\eqref{eq:singSupp} and the definition of $\mu_{\rm sc}$ that \begin{align}\label{eq:SCmeszero} \mu_{\rm sc} ( \mathbb{R} \setminus M_{\rm sc} ) = 0, \end{align} where \begin{align*} M_{\rm sc} := \Bigl\{ x \in \mathbb{R} : {\text{\rm Im}} F (x + i 0) = + \infty, \lim_{y \searrow 0} y F (x + i y) = 0 \Bigr\}. \end{align*} For $x \in \mathbb{R} \setminus \clc (M_{\rm sc})$ by definition there exists $\varepsilon > 0$ such that $(x - \varepsilon, x + \varepsilon) \cap M_{\rm sc}$ is countable; thus $\mu_{\rm sc} ((x - \varepsilon, x + \varepsilon) \cap M_{\rm sc}) = 0$. With the help of~\eqref{eq:SCmeszero} it follows \begin{align*} \mu_{\rm sc} ( (x - \varepsilon, x + \varepsilon) ) \leq \mu_{\rm sc} ( (x - \varepsilon, x + \varepsilon) \cap M_{\rm sc} ) + \mu_{\rm sc} (\mathbb{R} \setminus M_{\rm sc}) = 0, \end{align*} that is, $x \notin \supp \mu_{\rm sc}$. \end{proof} The absolutely continuous spectrum of a selfadjoint operator in some interval~$\Delta$ can be characterized in the following way. \begin{theorem}\label{thm:ACtheorem} Let Assumption~\ref{ass} be satisfied and let $\Delta \subset \mathbb{R}$ be an open interval such that the condition \begin{align}\label{eq:localSimple00} E (\delta) \cH = \clsp \bigl\{ E (\delta) \gamma (\nu) g : \nu \in \mathbb{C} \setminus \mathbb{R},\, g \in {\text{\rm ran\,}} \Gamma_0 \bigr\} \end{align} is satisfied for each open interval $\delta \subset \Delta$ with $\delta \cap \sigma_{\rm p} (S) = \emptyset$. Then the absolutely continuous spectrum of $A_0$ in $\Delta$ is given by \begin{align}\label{eq:ACidentity} \overline{\sigma_{\rm ac} (A_0) \cap \Delta} = \overline{ \bigcup_{g \in {\text{\rm ran\,}} \Gamma_0} \clac \big( \big\{x \in \Delta : 0 < {\text{\rm Im}} (M (x + i 0) g, g) < +\infty \big\} \big) }. \end{align} If $S$ is simple then~\eqref{eq:ACidentity} holds for each open interval $\Delta$, including the case $\Delta = \mathbb{R}$. \end{theorem} \begin{proof} The proof of Theorem~\ref{thm:ACtheorem} consists of two separate steps in which the assertions \eqref{acid1} and \eqref{eq:suppMuAc} below will be shown. The identity \eqref{eq:ACidentity} is then an immediate consequence of \eqref{acid1} and \eqref{eq:suppMuAc} (note that the right hand side in \eqref{eq:suppMuAc} does not depend on $\zeta\in\mathbb{C}\setminus\mathbb{R}$). We fix some notation first. Let us set \begin{align}\label{D} {\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}_\Delta := \bigl\{ E (\Delta) \gamma (\zeta) g : \zeta \in \mathbb{C} \setminus \mathbb{R} ,\, g \in {\text{\rm ran\,}} \Gamma_0\bigr\} \end{align} and define the measures $\mu_u := (E (\cdot) u, u)$ for $u\in\cH$. 
Denote by $P_{\rm ac}$ the orthogonal projection in $\cH$ onto the absolutely continuous subspace $\cH_{\rm ac}$ of $A_0$. Observe that the spectral measure of the absolutely continuous part of $A_0$ is $E(\cdot)P_{\rm ac}$ and that the absolutely continuous measures $\mu_{u, \rm ac}$ are given by $\mu_{u, \rm ac}=(E(\cdot)P_{\rm ac}u,P_{\rm ac}u)=\mu_{P_{\rm ac}u}$. \vskip 0.2cm\noindent {\bf Step 1.} In this step the identity \begin{align}\label{acid1} \overline{\sigma_{\rm ac} (A_0) \cap \Delta} = \overline{\bigcup_{u \in {\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}_\Delta} \supp \mu_{u, \rm ac}} \end{align} will be verified. First of all the open set $\Delta^\prime:=\Delta\backslash\overline{\sigma_{\rm p} (S)}$ is the disjoint union of open intervals $\delta_j$, $1\leq j\leq N$, $N\in\dN\cup\{\infty\}$, and for each $\delta_j$ we have \begin{align*} E (\delta_j) \cH = \clsp \bigl\{ E (\delta_j) \gamma (\nu) g : \nu \in \mathbb{C} \setminus \mathbb{R},\, g \in {\text{\rm ran\,}} \Gamma_0 \bigr\} \end{align*} by assumption. With the help of Lemma~\ref{simplelemma}~(iii) we conclude \begin{align*} E (\Delta^\prime) \cH = \clsp \big\{ E (\Delta^\prime) \gamma (\nu) g : \nu \in \mathbb{C} \setminus \mathbb{R},\, g \in {\text{\rm ran\,}} \Gamma_0 \big\}. \end{align*} Since $\Delta^\prime\subset\Delta$ it follows immediately that $E (\Delta^\prime) \cH\subset \clsp {\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}_\Delta$. Moreover, we have \begin{align*} P_{\rm ac} E (\Delta) \cH = P_{\rm ac} E (\Delta^\prime) \cH \subset P_{\rm ac}\bigl(\clsp {\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}_\Delta\bigr)\subset \clsp P_{\rm ac}{\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}_\Delta \subset P_{\rm ac} E (\Delta) \cH \end{align*} and therefore \begin{equation}\label{eq:ohneSchlangeId} P_{\rm ac} E (\Delta) \cH=\clsp P_{\rm ac}{\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}_\Delta. \end{equation} In order to verify~\eqref{acid1}, assume first that $x$ does not belong to the left hand side of \eqref{acid1}, that is, $x \notin \overline{\sigma_{\rm ac} (A_0) \cap \Delta}$. Then there exists $\epsilon > 0$ such that $(x - \epsilon, x + \epsilon) \cap \Delta$ contains no absolutely continuous spectrum of $A_0$. This yields $$E((x - \epsilon, x + \epsilon) \cap \Delta)P_{\rm ac}=0$$ and for $u \in E (\Delta) \cH$ one obtains \begin{equation*} \begin{split} \mu_{u, \rm ac} ((x - \epsilon, x + \epsilon)) & = \bigl(E ((x - \epsilon, x + \epsilon)) P_{\rm ac} u, P_{\rm ac} u\bigr) \\ & = \bigl(E ((x - \epsilon, x + \epsilon)) P_{\rm ac} E (\Delta) u, P_{\rm ac} u\bigr) \\ & = \bigl(E ((x - \epsilon, x + \epsilon) \cap \Delta) P_{\rm ac} u, P_{\rm ac} u\bigr) \\ & =0. \end{split} \end{equation*} Therefore $(x - \epsilon, x + \epsilon)\cap \supp \mu_{u, \rm ac}=\emptyset$ for all $u \in E (\Delta) \cH$, in particular, for all $u \in {\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}_\Delta$. Thus $$ x\not \in\overline{\bigcup_{u \in {\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}_\Delta} \supp \mu_{u, \rm ac}} $$ and the inclusion $\supset$ in~\eqref{acid1} follows. For the converse inclusion assume that $x$ does not belong to the right hand side of~\eqref{acid1}. 
Then there exists $\epsilon > 0$ such that $(x - \epsilon, x + \epsilon) \subset \dR \setminus \supp \mu_{u, \rm ac}$ for all $u \in {\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}_\Delta$, that is, \begin{equation*} \Vert E ((x - \epsilon, x + \epsilon)) P_{\rm ac} u\Vert^2= \mu_{u, \rm ac} ( (x - \epsilon, x + \epsilon) ) = 0 \end{equation*} for all $u \in {\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}_\Delta$, and hence also for all $u \in \clsp {\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}_\Delta$. With the help of~\eqref{eq:ohneSchlangeId} it follows $$ E ((x - \epsilon, x + \epsilon)\cap\Delta) P_{\rm ac} u= E ((x - \epsilon, x + \epsilon)) P_{\rm ac} E(\Delta) u=0 $$ for all $u\in \cH$. This shows that $(x - \epsilon, x + \epsilon) \cap \Delta$ does not contain absolutely continuous spectrum of $A_0$, in particular, $x \notin \overline{\sigma_{\rm ac} (A_0) \cap \Delta}$ and the inclusion $\subset$ in~\eqref{acid1} follows. \vskip 0.2cm\noindent {\bf Step 2.} In this step we show that the identity \begin{align}\label{eq:suppMuAc} \supp \mu_{u, {\rm ac}} = \clac \bigl( \bigl\{ x \in \Delta : 0 < {\text{\rm Im}} \bigl(M (x + i 0) g, g \bigr) < +\infty \bigr\} \bigr) \end{align} holds for all $u=E (\Delta) \gamma (\zeta) g \in {\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}_\Delta$. Indeed, with the help of the formula~\eqref{eq:Mformula} we compute \begin{align}\label{kette} {\text{\rm Im}} (M & (x + i y) g, g) \nonumber \\ & = y \| \gamma (\zeta) g \|^2 + \left( |x - \zeta|^2 - y^2 \right) {\text{\rm Im}} \left( (A_0 - (x + i y) )^{-1} \gamma (\zeta) g, \gamma (\zeta) g \right) \nonumber \\ & \quad + 2 (x - {\text{\rm Re}} \zeta) y {\text{\rm Re}} \left( (A_0 - (x + i y) )^{-1} \gamma (\zeta) g, \gamma (\zeta) g \right), \end{align} for all $x \in \mathbb{R}$, $y>0$, $g \in {\text{\rm ran\,}} \Gamma_0$, and $\zeta \in \mathbb{C} \setminus \mathbb{R}$. Moreover, dominated convergence implies that \begin{align*} y {\text{\rm Re}} \left( (A_0 - (x + i y) )^{-1} \gamma (\zeta) g, \gamma (\zeta) g \right) = \int_\mathbb{R} \frac{y (t - x)}{(t - x)^2 + y^2} d ( E (t) \gamma (\zeta) g, \gamma (\zeta) g ) \end{align*} converges to zero as $y \searrow 0$. Therefore for $x \in \mathbb{R}$~\eqref{kette} implies \begin{align}\label{MResId} {\text{\rm Im}} ( M (x + i 0) g, g) & = |x - \zeta|^2 {\text{\rm Im}} \left( (A_0 - (x + i 0) )^{-1} \gamma (\zeta) g, \gamma (\zeta) g \right), \end{align} in the sense that one of the limits exists if and only if the other limit exists, where $+\infty$ is allowed as (improper) limit. 
For $u \in \cH$, $x \in \mathbb{R}$, and $y > 0$ the imaginary part of the Borel transform $F_u$ of the measure $\mu_{u}=(E(\cdot)u,u)$ is given by \begin{align}\label{eq:Fu} {\text{\rm Im}} F_{u} (x + i y) & = {\text{\rm Im}} \int_{\mathbb{R}} \frac{1}{t - (x+iy)} d (E(t)u,u) = {\text{\rm Im}}\left( (A_0 - (x + i y) )^{-1} u, u \right), \end{align} and for $u \in E (\Delta)\cH$ we obtain \begin{align*} {\text{\rm Im}} F_u (x + i 0) = \begin{cases} {\text{\rm Im}} \left( (A_0 - (x + i 0) )^{-1} u, u \right) & \text{if}\,\,\, x \in \Delta, \\ 0 & \text{if}\,\,\, x \notin \overline \Delta, \end{cases} \end{align*} in particular, if $u = E (\Delta) \gamma (\zeta) g \in {\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}_\Delta$ then \begin{align*} {\text{\rm Im}} F_u (x + i 0) = \begin{cases} {\text{\rm Im}} \left( (A_0 - (x + i 0) )^{-1} \gamma (\zeta) g, \gamma (\zeta) g \right) & \text{if}\,\,\, x \in \Delta, \\ 0 & \text{if}\,\,\, x \notin \overline \Delta. \end{cases} \end{align*} Taking into account \eqref{MResId} we then find \begin{align}\label{eq:wichtig4} {\text{\rm Im}} F_u (x + i 0) = \begin{cases} |x - \zeta|^{-2} {\text{\rm Im}} (M (x + i 0) g, g) & \text{if}\,\,\, x \in \Delta, \\ 0 & \text{if}\,\,\, x \notin \overline \Delta, \end{cases} \end{align} for $u = E (\Delta) \gamma (\zeta) g \in {\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}_\Delta$. From Lemma~\ref{mesLem}~(i) we conclude together with \eqref{eq:wichtig4} that \begin{align*} \supp \mu_{u, {\rm ac}} & = \clac \bigl( \bigl\{ x \in \Delta : 0 < {\text{\rm Im}} F_u (x + i 0) < +\infty \bigr\} \bigr) \nonumber \\ & = \clac \bigl( \bigl\{ x \in \Delta : 0 < {\text{\rm Im}} (M (x + i 0) g, g) < +\infty \bigr\} \bigr) \end{align*} holds for $u = E (\Delta) \gamma (\zeta) g \in {\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}_\Delta$, which shows \eqref{eq:suppMuAc}. \end{proof} Theorem~\ref{thm:ACtheorem} immediately implies the following two corollaries. \begin{corollary}\label{thm:ACcomplete2} Let Assumption~\ref{ass} be satisfied and assume that~\eqref{eq:localSimple00} holds for each open interval $\delta \subset \mathbb{R}$ such that $\delta \cap \sigma_{\rm p} (S) = \emptyset$. Then \begin{align*} \sigma_{\rm ac} (A_0) = \overline{ \bigcup_{g \in {\text{\rm ran\,}} \Gamma_0} \clac \big( \big\{x \in \mathbb{R} : 0 < {\text{\rm Im}} (M (x + i 0) g, g) < +\infty \big\} \big) }. \end{align*} \end{corollary} \begin{corollary}\label{cor:ACDelta} Let Assumption~\ref{ass} be satisfied and let $\Delta \subset \mathbb{R}$ be an open interval such that the condition~\eqref{eq:localSimple} holds. Then the absolutely continuous spectrum of $A_0$ in $\Delta$ is given by \begin{align*} \overline{\sigma_{\rm ac} (A_0) \cap \Delta} = \overline{ \bigcup_{g \in {\text{\rm ran\,}} \Gamma_0} \clac \big( \big\{x \in \Delta : 0 < {\text{\rm Im}} (M (x + i 0) g, g) < +\infty \big\} \big) }. \end{align*} \end{corollary} In the next corollary a necessary and sufficient condition for the absence of absolutely continuous spectrum is given. \begin{corollary}\label{cor:ACequiv} Let Assumption~\ref{ass} be satisfied and let $\Delta \subset \mathbb{R}$ be an open interval. Assume that the condition~\eqref{eq:localSimple00} holds for each open interval $\delta \subset \Delta$ with $\delta \cap \sigma_{\rm p} (S) = \emptyset$. Then $\sigma_{\rm ac} (A_0) \cap \Delta = \emptyset$ if and only if ${\text{\rm Im}} (M (x + i 0) g, g) = 0$ holds for all $g \in{\text{\rm ran\,}} \Gamma_0$ and for almost all $x \in \Delta$. 
\end{corollary} \begin{proof} We make use of the fact that for $g \in {\text{\rm ran\,}} \Gamma_0$ \begin{equation}\label{zerozero} \clac \big(\left\{ x \in \Delta : 0 < {\text{\rm Im}} ( M (x + i 0) g, g) < + \infty \right\} \big)=\emptyset \end{equation} if and only if \begin{equation}\label{zeroset} \big|\left\{ x \in \Delta : 0 < {\text{\rm Im}} ( M (x + i 0) g, g) < + \infty \right\} \big|=0. \end{equation} Assume first that $\sigma_{\rm ac} (A_0) \cap \Delta = \emptyset$. Then \eqref{eq:ACidentity} yields \eqref{zerozero} for all $g \in {\text{\rm ran\,}} \Gamma_0$, and hence \eqref{zeroset} holds for all $g \in {\text{\rm ran\,}} \Gamma_0$. Moreover, for $u = \gamma (\zeta) g$ and $x \in \mathbb{R}$ by~\eqref{MResId} and~\eqref{eq:Fu} we have \begin{align*} {\text{\rm Im}} (M (x + i 0) g, g) = |x - \zeta|^2 {\text{\rm Im}} F_{u} (x + i 0), \end{align*} and by Lemma~\ref{mesLem} this limit exists and is finite for Lebesgue almost all $x \in \mathbb{R}$. Hence~\eqref{zeroset} implies ${\text{\rm Im}} (M (x + i 0) g, g) = 0$ for all $g \in {\text{\rm ran\,}} \Gamma_0$ and almost all $x \in \Delta$. For the converse implication assume that ${\text{\rm Im}} (M (x + i 0) g, g) = 0$ for all $g \in{\text{\rm ran\,}} \Gamma_0$ and for almost all $x \in \Delta$. Then \eqref{zeroset} and hence also \eqref{zerozero} holds for all $g \in{\text{\rm ran\,}} \Gamma_0$. Thus \eqref{eq:ACidentity} yields $\sigma_{\rm ac} (A_0) \cap \Delta = \emptyset$. \end{proof} Let us prove next inclusions for the singular and singular continuous spectra of~$A_0$. Recall the definition of the continuous closure $\clc (\chi)$ of a Borel set $\chi$ in~\eqref{eq:clc}. \begin{theorem}\label{thm:SCtheorem} Let Assumption~\ref{ass} be satisfied, and let $\Delta \subset \mathbb{R}$ be an open interval. Then the following assertions hold. \begin{enumerate} \item If the condition~\eqref{eq:localSimple} holds then the singular spectrum of $A_0$ in $\Delta$ satisfies \begin{align*} \bigl(\sigma_{\rm s} (A_0) \cap \Delta\bigr)\, \subset \overline{ \bigcup_{g \in {\text{\rm ran\,}} \Gamma_0} \big\{x \in \Delta : {\text{\rm Im}} (M (x + i 0) g, g) = +\infty \big\} }. \end{align*} \item If the condition~\eqref{eq:localSimple00} is satisfied for each open interval $\delta \subset \Delta$ with $\delta \cap \sigma_{\rm p} (S) = \emptyset$ then the singular continuous spectrum of $A_0$ in $\Delta$, $\sigma_{\rm sc} (A_0) \cap \Delta$, is contained in the set \begin{align*} \overline{ \bigcup_{g \in {\text{\rm ran\,}} \Gamma_0} \clc \big(\big\{x \in \Delta : {\text{\rm Im}} (M (x + i 0) g, g) = +\infty, \lim_{y \searrow 0} y (M (x + i y) g, g) = 0 \big\} \big) }. \end{align*} \end{enumerate} If $S$ is simple then {\rm (i)} and {\rm (ii)} hold for each open interval $\Delta$, including the case $\Delta = \mathbb{R}$. \end{theorem} \begin{proof} We show the statements (i) and (ii) at once. Let us define \begin{align*} {\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}_\Delta := \bigl\{ E (\Delta) \gamma (\zeta) g : \zeta \in \mathbb{C} \setminus \mathbb{R},\, g \in {\text{\rm ran\,}} \Gamma_0 \bigr\}. \end{align*} Note first that the same arguments as in Step~1 of the proof of Theorem~\ref{thm:ACtheorem} imply \begin{align}\label{eq:SCSrepr} \overline{\sigma_i (A_0) \cap \Delta} = \overline{\bigcup_{u \in {\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}_\Delta} \supp \mu_{u, i}}, \qquad i = \rm s, sc. \end{align} In order to apply Lemma~\ref{mesLem}~(ii) and~(iii), respectively, we calculate the limits that appear there. 
In fact, it follows from \eqref{eq:Mformula} that for each $g \in {\text{\rm ran\,}} \Gamma_0$ and each $\zeta \in \mathbb{C} \setminus \mathbb{R}$ \begin{equation}\label{eq:limitTransform} \lim_{y\searrow 0}{\text{\rm Im}} ( M (x + i y) g, g) = |x - \zeta|^2 \lim_{y\searrow 0} {\text{\rm Im}} \left( (A_0 - (x + i y) )^{-1} \gamma (\zeta) g, \gamma (\zeta) g \right) \end{equation} and \begin{equation}\label{eq:limitTransform'} \lim_{y\searrow 0} y ( M (x + i y) g, g) = |x - \zeta|^2 \lim_{y\searrow 0} y \left( (A_0 - (x + i y) )^{-1} \gamma (\zeta) g, \gamma (\zeta) g \right) \end{equation} hold; cf.~\eqref{MResId} for the first identity and the text below ~\eqref{MResId} for its interpretation as a possible improper limit. Let $u = E (\Delta) \gamma (\zeta) g \in {\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}_\Delta$ and let $$ F_u(x+iy)=\int_{\mathbb{R}} \frac{1}{t - (x+iy)} d (E(t)u,u) =\left( (A_0 - (x + i y) )^{-1} u, u \right) $$ be the Borel transform of $\mu_u = (E (\cdot) u, u)$. Then \begin{align*} {\text{\rm Im}} F_u (x + i 0) & = {\text{\rm Im}} \left( (A_0 - (x + i 0) )^{-1} E (\Delta) \gamma (\zeta) g, E (\Delta) \gamma (\zeta) g \right) \end{align*} for all $x \in \mathbb{R}$. From this we conclude with the help of~\eqref{eq:limitTransform} that \begin{align}\label{eq:wichtig1} {\text{\rm Im}} F_u (x + i 0) = \begin{cases} |x - \zeta|^{-2} {\text{\rm Im}} (M (x + i 0) g, g) & \text{if}\,\,\, x \in \Delta, \\ 0 & \text{if}\,\,\, x \notin \overline \Delta. \end{cases} \end{align} Similarly, from~\eqref{eq:limitTransform'} we obtain \begin{align}\label{eq:wichtig2} \lim_{y \searrow 0} y F_u (x + i y) = \begin{cases} |x - \zeta|^{-2} \lim_{y \searrow 0} y (M (x + i y) g, g) &\text{if}\,\,\, x \in \Delta, \\ 0 & \text{if}\,\,\, x \notin \overline \Delta. \end{cases} \end{align} It follows from~\eqref{eq:wichtig1}, \eqref{eq:wichtig2}, and Lemma~\ref{mesLem} that \begin{align*} \supp \mu_{u, \rm s} \subset \overline{\bigl\{ x \in \Delta: {\text{\rm Im}} ( M (x + i 0) g, g) = + \infty \bigr\}} \end{align*} and \begin{align*} \supp \mu_{u, \rm sc} \subset \clc \Big( \Big\{ x \in \Delta : {\text{\rm Im}} ( M (x + i 0) g, g) = + \infty, \lim_{y \searrow 0} y (M (x + i y) g, g) = 0 \Big\} \Big) \end{align*} for $u = E (\Delta) \gamma (\zeta) g \in {\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}_\Delta$. Thus the assertions of the theorem follow from~\eqref{eq:SCSrepr}. \end{proof} We formulate two immediate corollaries which concern the singular continuous spectrum. \begin{corollary}\label{thm:SCcomplete2} Let Assumption~\ref{ass} be satisfied and assume that~\eqref{eq:localSimple00} holds for each open interval $\delta \subset \mathbb{R}$ such that $\delta \cap \sigma_{\rm p} (S) = \emptyset$. Then the singular continuous spectrum $\sigma_{\rm sc} (A_0)$ of $A_0$ is contained in the set \begin{align*} \overline{ \bigcup_{g \in {\text{\rm ran\,}} \Gamma_0} \clc \big(\big\{x \in \mathbb{R} : {\text{\rm Im}} (M (x + i 0) g, g) = +\infty, \lim_{y \searrow 0} y (M (x + i y) g, g) = 0 \big\} \big) }. \end{align*} \end{corollary} \begin{corollary}\label{thm:SCcomplete3} Let Assumption~\ref{ass} be satisfied, let $\Delta \subset \mathbb{R}$ be an open interval, and assume that the condition~\eqref{eq:localSimple} holds. 
Then the singular continuous spectrum $\sigma_{\rm sc} (A_0)$ of $A_0$ in $\Delta$, $\sigma_{\rm sc} (A_0) \cap \Delta$, is contained in the set \begin{align*} \overline{ \bigcup_{g \in {\text{\rm ran\,}} \Gamma_0} \clc \big(\big\{x \in \Delta : {\text{\rm Im}} (M (x + i 0) g, g) = +\infty, \lim_{y \searrow 0} y (M (x + i y) g, g) = 0 \big\} \big) }. \end{align*} \end{corollary} As a further immediate corollary of Theorem~\ref{thm:SCtheorem} we formulate a sufficient criterion for the absence of singular continuous spectrum in terms of the limiting behaviour of the function~$M$. The corresponding result for ordinary boundary triples (in the special case $\Delta = \mathbb{R}$) can be found in~\cite{BMN02}. \begin{corollary}\label{cor:SCcor} Let Assumption~\ref{ass} be satisfied and let $\Delta \subset \mathbb{R}$ be an open interval such that the condition~\eqref{eq:localSimple00} is satisfied for each open interval $\delta \subset \Delta$ with $\delta \cap \sigma_{\rm p} (S) = \emptyset$. If for each $g \in {\text{\rm ran\,}} \Gamma_0$ there exist at most countably many $x \in \Delta$ such that \begin{align*} {\text{\rm Im}} (M (x + i y) g, g) \to + \infty \quad \text{and} \quad y (M (x + i y) g, g) \to 0 \quad \text{as} \quad y \searrow 0 \end{align*} then $\sigma_{\rm sc} (A_0) \cap \Delta = \emptyset$. If $S$ is simple then the assertion holds for each open interval $\Delta$, including the case $\Delta = \mathbb{R}$. \end{corollary} As a further corollary of the theorems of this section we provide sufficient criteria for the spectrum of the operator $A_0$ to be purely absolutely continuous or purely singular continuous, respectively, in some set. \begin{corollary} Let Assumption~\ref{ass} be satisfied, let $\Delta \subset \mathbb{R}$ be an open interval such that the condition~\eqref{eq:localSimple} is satisfied, and assume that \begin{align}\label{eq:noeig} \lim_{y \searrow 0} y M (x + i y) g = 0 \end{align} for all $g \in {\text{\rm ran\,}} \Gamma_0$ and all $x \in \Delta$. Then the following assertions hold. \begin{enumerate} \item If for each $g \in {\text{\rm ran\,}} \Gamma_0$ there exist at most countably many $x \in \Delta$ such that ${\text{\rm Im}} (M (x + i 0) g, g) = + \infty$, then $\sigma (A_0) \cap \Delta = \sigma_{\rm ac} (A_0) \cap \Delta$. \item If ${\text{\rm Im}} ( M (x + i 0) g, g) = 0$ holds for all $g \in {\text{\rm ran\,}} \Gamma_0$ and almost all $x \in \Delta$, then $\sigma (A_0) \cap \Delta = \sigma_{\rm sc} (A_0) \cap \Delta$. \end{enumerate} In particular, if $S$ is simple and $\Delta$ is an arbitrary open interval such that~\eqref{eq:noeig} holds for all $g \in {\text{\rm ran\,}} \Gamma_0$ and all $x \in \Delta$ then~{\rm (i)} and~{\rm (ii)} are satisfied. \end{corollary} \section{Second order elliptic differential operators on $\mathbb{R}^n$}\label{sec:4} In this section we show how the spectrum of a selfadjoint second order elliptic differential operator on $\mathbb{R}^n$, $n \geq 2$, can be described with the help of a Titchmarsh--Weyl function acting on an $(n-1)$-dimensional compact interface $\Sigma$ which splits $\mathbb{R}^n$ into a bounded domain $\Omega_{\rm i}$ and an unbounded domain $\Omega_{\rm e}$ with common boundary $\Sigma$.
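For orientation we mention the simplest model situation the reader may keep in mind throughout this section: if the differential expression introduced below is chosen as $\cL = - \Delta$, then the operator $A_0$ in \eqref{eq:SchroedingerOp} below is the free Laplacian defined on $H^2 (\mathbb{R}^n)$, and it is well known that \begin{align*} \sigma (A_0) = \sigma_{\rm ac} (A_0) = [0, \infty), \qquad \sigma_{\rm p} (A_0) = \sigma_{\rm sc} (A_0) = \emptyset. \end{align*} In this model case the symmetric operator $S$ in \eqref{eq:SSchroed} below has no eigenvalues for any choice of the interface $\Sigma$; cf.~Example~\ref{ex:simple}. Consequently, Theorem~\ref{thm:eigenSchroed} below yields, in particular, that the corresponding Titchmarsh--Weyl function $M$ in \eqref{eq:WeylSchreod} can be continued analytically into every point $\lambda < 0$, that \textup{s}-$\lim_{\eta \searrow 0} i \eta M (\lambda + i \eta) = 0$ for every $\lambda \in \mathbb{R}$, and that for every open interval $\Delta \subset (0, \infty)$ there exists $g \in H^{1/2} (\Sigma)$ such that $0 < {\text{\rm Im}} (M (x + i 0) g, g) < +\infty$ on a subset of $\Delta$ of positive Lebesgue measure.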
We consider the differential expression \begin{align*} \cL = - \sum_{j, k = 1}^n \frac{\partial}{\partial x_j} a_{jk} \frac{\partial}{\partial x_k} + \sum_{j = 1}^n \left( a_j \frac{\partial}{\partial x_j} - \frac{\partial}{\partial x_j} \overline{a_j} \right) + a, \end{align*} where $a_{jk}, a_j \in C^\infty(\mathbb{R}^n)$ together with their derivatives are bounded and satisfy $a_{jk} (x) = \overline{a_{kj} (x)}$ for all $x \in \mathbb{R}^n$, $1 \leq j, k \leq n$, and $a \in L^\infty(\mathbb{R}^n)$ is real valued. Moreover, we assume that $\cL$ is uniformly elliptic on $\mathbb{R}^n$, that is, there exists $E > 0$ with \begin{align}\label{eq:elliptic} \sum_{j, k = 1}^n a_{jk} (x) \xi_j \xi_k \geq E \sum_{k = 1}^n \xi_k^2, \quad x \in \mathbb{R}^n, \quad \xi = (\xi_1, \dots, \xi_n)^\top \in \mathbb{R}^n. \end{align} The selfadjoint operator associated with $\cL$ in $L^2 (\mathbb{R}^n)$ is given by \begin{align}\label{eq:SchroedingerOp} A_0 u = \cL u, \quad {\text{\rm dom\,}} A_0 = H^2 (\mathbb{R}^n), \end{align} where $H^2 (\mathbb{R}^n)$ is the usual $L^2$-based Sobolev space of order $2$ on $\mathbb{R}^n$. In Sections~\ref{41} and \ref{42} two different choices of Titchmarsh--Weyl functions for the differential expression $\cL$, both acting on the interface $\Sigma$, are studied. \subsection{A Weyl function corresponding to a transmission problem}\label{41} We first consider a Weyl function for the operator $A_0$ which appears in transmission problems in connection with single layer potentials (see, e.g. \cite[Chapter 6]{M00} and \cite{R09}) and which was also used in \cite{AP04} to generalize the classical limit point/limit circle analysis from singular Sturm--Liouville theory to Schr\"{o}dinger operators in $\dR^3$. Let $\Sigma$ be the boundary of a bounded $C^\infty$-domain $\Omega_{\rm i} \subset \mathbb{R}^n$ and denote by $\Omega_{\rm e}$ the exterior of $\Sigma$, that is, $\Omega_{\rm e} = \mathbb{R}^n \setminus \overline{\Omega_{\rm i}}$. In the following we make use of operators induced by $\cL$ in $L^2 (\Omega_{\rm i})$ and $L^2 (\Omega_{\rm e})$, respectively. For $j = {\rm i, e}$ we write $\cL_j$ for the restriction of the differential expression $\cL$ to functions on $\Omega_j$. For functions in $L^2(\Omega_j)$ we use the index $j$ and we write $u=u_{\rm i}\oplus u_{\rm e}$ for $u\in L^2(\dR^n)$. As $\Sigma$ is smooth, the selfadjoint Dirichlet operator associated with $\cL_j$ in $L^2 (\Omega_j)$ is given by \begin{equation*} A_{{\rm D}, j} u_j = \cL_j u_j, \quad {\text{\rm dom\,}} A_{{\rm D}, j} = \left\{ u_j \in H^2 (\Omega_j) : u_j |_{\Sigma} = 0 \right\},\qquad j = {\rm i, e}, \end{equation*} where $u_j |_\Sigma$ denotes the trace of $u_j$ at $\Sigma=\partial\Omega_j$. Let $H^s (\Sigma)$ be the Sobolev spaces of orders $s \geq 0$ on $\Sigma$. We recall that for each $\lambda \in \rho (A_{{\rm D}, j})$ and each $g \in H^{3/2} (\Sigma)$ there exists a unique solution $u_{\lambda, j} \in H^2 (\Omega_j)$ of the boundary value problem $\cL_j u_j = \lambda u_j$, $u_j |_\Sigma =g$. 
This implies that for each $\lambda \in \rho (A_{{\rm D}, j})$ the Dirichlet-to-Neumann map \begin{align}\label{eq:DNie} \Lambda_j (\lambda) : H^{3/2} (\Sigma) \to H^{1/2} (\Sigma), \quad u_{\lambda,j} |_\Sigma \mapsto \frac{\partial u_{\lambda,j}}{\partial \nu_{\cL_j}} \Big|_\Sigma, \end{align} is well-defined; here the conormal derivative with respect to $\cL_j$ in the direction of the outer unit normal $\nu_j = (\nu_{j,1}, \dots, \nu_{j,n})^\top$ at $\Sigma = \partial \Omega_j$ is defined by \begin{align*} \frac{\partial u}{\partial \nu_{\cL_j}} \Big|_{\Sigma} = \sum_{k, l = 1}^n a_{kl} \nu_{j,k} \frac{\partial u}{\partial x_l} \Big|_{\Sigma} + \sum_{k = 1}^n \overline{a_k} \nu_{j,k} u |_{\Sigma}. \end{align*} Note that the outer unit normals at $\partial \Omega_{\rm i}$ and $\partial \Omega_{\rm e}$ coincide up to a minus sign. The operator $\Lambda_{\rm i} (\lambda) + \Lambda_{\rm e} (\lambda)$ is invertible for all $\lambda\in\rho(A_0)\cap\rho (A_{\rm D, i}) \cap \rho (A_{\rm D, e})$ and, hence, the operator function \begin{align}\label{eq:WeylSchreod} \lambda\mapsto M (\lambda) = \big( \Lambda_{\rm i} (\lambda) + \Lambda_{\rm e} (\lambda) \big)^{-1} \end{align} is well-defined on $\rho(A_0)\cap\rho (A_{\rm D, i}) \cap \rho (A_{\rm D, e})$. We remark that the values $M (\lambda)$ are bounded operators in $L^2 (\Sigma)$ with domain $H^{1/2} (\Sigma)$; cf.~Lemma~\ref{prop:qbtSchreod} below for the details. The following theorem is the main result of this section. It states that the absolutely continuous spectrum of $A_0$ can be recovered completely from the knowledge of the function $M$ in~\eqref{eq:WeylSchreod}, while the eigenvalues and corresponding eigenspaces may be only partially visible for the function $M$. This depends on the choice of the interface $\Sigma$ and the fact that the symmetric operator \begin{align}\label{eq:SSchroed} S u = \cL u, \quad {\text{\rm dom\,}} S = \left\{ u \in H^2 (\mathbb{R}^n) : u |_{\Sigma} = 0 \right\}, \end{align} may have eigenvalues. In particular, in general $S$ is not simple; cf.~Example~\ref{ex:simple} and Example~\ref{ex:simple2} below. \begin{theorem}\label{thm:eigenSchroed} Let $A_0$, $\Sigma$, $S$, and $M$ be as above, let $\lambda,\mu\in\dR$ such that $\lambda\not\in\overline{\sigma_{\rm p} (S)}$, $\mu \not\in\sigma_{\rm p} (S)$, and let $\Delta \subset \mathbb{R}$ be an open interval. Then the following assertions hold. \begin{enumerate} \item $\mu \in\sigma_{\rm p} (A_0)$ if and only if $R_\mu M :=$ \textup{s}-$\lim_{\eta \searrow 0} i \eta M (\mu + i \eta) \neq 0$; if the multiplicity of the eigenvalue $\mu$ is finite then the mapping \begin{align}\label{eq:tauSchroed} \tau : \ker (A_0 - \mu) \to {\text{\rm ran\,}} R_\mu M, \quad u \mapsto u |_\Sigma, \end{align} is bijective; if the multiplicity of the eigenvalue $\mu$ is infinite then the mapping \begin{align}\label{eq:taugenSchroed} \tau : \ker (A_0 - \mu) \to \cl_\tau \bigl({\text{\rm ran\,}} R_\mu M\bigr), \quad u \mapsto u |_\Sigma, \end{align} is bijective, where $\cl_\tau$ denotes the closure in the normed space ${\text{\rm ran\,}} \tau$. \item $\lambda$ is an isolated eigenvalue of $A_0$ if and only if $\lambda$ is a pole in the strong sense of~$M$. In this case \eqref{eq:tauSchroed} and~\eqref{eq:taugenSchroed} hold with $\mu=\lambda$ and $R_\lambda M = \textup{Res}_\lambda M$. \item $\lambda\in\rho(A_0)$ if and only if $M$ can be continued analytically into $\lambda$. 
\item $\lambda\in\sigma_{\rm c} (A_0)$ if and only if \textup{s}-$\lim_{\eta \searrow 0} i \eta M (\lambda + i \eta) = 0$ and $M$ cannot be continued analytically into $\lambda$. \item The absolutely continuous spectrum $\sigma_{\rm ac} (A_0)$ of $A_0$ in $\Delta$ is given by \begin{equation*} \qquad\qquad\overline{\sigma_{\rm ac} (A_0)\cap\Delta} = \overline{ \bigcup_{g \in H^{1/2} (\Sigma)} \clac \big( \big\{x \in \Delta : 0 < {\text{\rm Im}} (M (x + i 0) g, g) < +\infty \big\} \big) } \end{equation*} and, in particular, $\sigma_{\rm ac} (A_0) \cap \Delta = \emptyset$ if and only if ${\text{\rm Im}} (M (x + i 0) g, g) = 0$ holds for all $g \in H^{1/2} (\Sigma)$ and for almost all $x \in \Delta$. \item The singular continuous spectrum $\sigma_{\rm sc} (A_0)$ of $A_0$ in $\Delta$ is contained in \begin{equation*} \overline{ \bigcup_{g \in H^{1/2} (\Sigma)} \clc \big(\big\{x \in \Delta : {\text{\rm Im}} (M (x + i 0) g, g) = +\infty, \lim_{y \searrow 0} y (M (x + i y) g, g) = 0 \big\} \big) }, \end{equation*} and, in particular, if for each $g \in H^{1/2} (\Sigma)$ there exist at most countably many $x \in \Delta$ such that ${\text{\rm Im}} (M (x + i y) g, g) \to + \infty$ and $y (M (x + i y) g, g) \to 0$ as $y \searrow 0$ then $\sigma_{\rm sc} (A_0) \cap \Delta = \emptyset$. \end{enumerate} \end{theorem} The proof of Theorem~\ref{thm:eigenSchroed} makes use of the following two lemmas and is given at the end of this subsection. \begin{lemma}\label{prop:qbtSchreod} Let $S$ be defined as in~\eqref{eq:SSchroed} and let \begin{align}\label{eq:TSchreod} T u = \cL u, \quad {\text{\rm dom\,}} T = \left\{ u_{\rm i} \oplus u_{\rm e} \in H^2 (\Omega_{\rm i}) \oplus H^2 (\Omega_{\rm e}) : u_{\rm i} |_\Sigma = u_{\rm e} |_\Sigma \right\}. \end{align} Then $\{ L^2 (\Sigma), \Gamma_0, \Gamma_1\}$, where \begin{align*} \Gamma_0, \Gamma_1 : {\text{\rm dom\,}} T \to L^2 (\Sigma),\quad \Gamma_0 u = \frac{\partial u_{\rm i}}{\partial \nu_{\cL_{\rm i}}} \Big|_\Sigma + \frac{\partial u_{\rm e}}{\partial \nu_{\cL_{\rm e}}} \Big|_\Sigma, \quad \Gamma_1 u = u |_\Sigma, \end{align*} is a quasi boundary triple for $S^*$ such that $A_0 = T \upharpoonright \ker \Gamma_0$ and ${\text{\rm ran\,}}\Gamma_0= H^{1/2} (\Sigma)$. For all $\lambda \in\rho(A_0)\cap \rho (A_{\rm D, i}) \cap \rho (A_{\rm D, e})$ the corresponding Weyl function coincides with the function $M$ in \eqref{eq:WeylSchreod}, and ${\text{\rm dom\,}} M (\lambda) = H^{1/2} (\Sigma)$. \end{lemma} \begin{proof} The proof is similar to the proof of \cite[Proposition 3.2]{BLL13}. For the convenience of the reader we provide the details. In order to show that $\{ L^2 (\Sigma), \Gamma_0, \Gamma_1\}$ is a quasi boundary triple for $S^*$ we verify (i)-(iii) in the assumptions of Proposition~\ref{prop:ratetheorem}. Recall first that by the classical trace theorem the mapping $$ H^2(\Omega_j)\rightarrow H^{3/2}(\Sigma)\times H^{1/2}(\Sigma),\qquad u_j\mapsto\left\{u_j\vert_\Sigma, \frac{\partial u_j}{\partial \nu_{\cL_j}} \Big|_\Sigma\right\},\quad j = \rm i, e, $$ is onto.
Hence, for given $\varphi\in H^{1/2}(\Sigma)$ and $\psi\in H^{3/2}(\Sigma)$ there exist $u_j\in H^2(\Omega_j)$ such that $$ \frac{\partial u_{\rm i}}{\partial \nu_{\cL_{\rm i}}} \Big|_\Sigma = \varphi,\quad \frac{\partial u_{\rm e}}{\partial \nu_{\cL_{\rm e}}} \Big|_\Sigma = 0,\quad\text{and}\quad u_{\rm i} |_\Sigma =\psi= u_{\rm e} |_\Sigma, $$ and it follows $u_{\rm i} \oplus u_{\rm e} \in {\text{\rm dom\,}} T$, $\Gamma_0 (u_{\rm i} \oplus u_{\rm e}) = \varphi$, and $\Gamma_1 (u_{\rm i} \oplus u_{\rm e}) = \psi$. This implies that ${\text{\rm ran\,}}(\Gamma_0,\Gamma_1)^\top=H^{1/2}(\Sigma)\times H^{3/2}(\Sigma)$. In particular, ${\text{\rm ran\,}}(\Gamma_0,\Gamma_1)^\top$ is dense in $L^2(\Sigma)\times L^2(\Sigma)$. Furthermore, $C_0^\infty(\dR^n\backslash\Sigma)$ is a dense subspace of $L^2(\dR^n)$ which is contained in $\ker\Gamma_0\cap\ker\Gamma_1$. Thus (i) in Proposition~\ref{prop:ratetheorem} holds. Next we verify the identity \eqref{eq:absGreen} for $u=u_{\rm i}\oplus u_{\rm e}, v=v_{\rm i}\oplus v_{\rm e}\in{\text{\rm dom\,}} T$. With the help of Green's identity and $u\vert_\Sigma=u_j\vert_\Sigma$, $v\vert_\Sigma=v_j\vert_\Sigma$, $j = \rm i,e$, we compute \begin{equation*} \begin{split} &(Tu,v)-(u,Tv)=(\cL_{\rm e}u_{\rm e},v_{\rm e})-(u_{\rm e},\cL_{\rm e}v_{\rm e})+(\cL_{\rm i}u_{\rm i},v_{\rm i})-(u_{\rm i},\cL_{\rm i}v_{\rm i})\\ &\qquad =\left(u_{\rm e}\vert_\Sigma,\frac{\partial v_{\rm e}}{\partial \nu_{\cL_{\rm e}}} \Big|_\Sigma\right)- \left(\frac{\partial u_{\rm e}}{\partial \nu_{\cL_{\rm e}}} \Big|_\Sigma,v_{\rm e}\vert_\Sigma\right)+ \left(u_{\rm i}\vert_\Sigma,\frac{\partial v_{\rm i}}{\partial \nu_{\cL_{\rm i}}} \Big|_\Sigma\right)- \left(\frac{\partial u_{\rm i}}{\partial \nu_{\cL_{\rm i}}} \Big|_\Sigma,v_{\rm i}\vert_\Sigma\right)\\ &\qquad =\left(u\vert_\Sigma, \frac{\partial v_{\rm i}}{\partial \nu_{\cL_{\rm i}}} \Big|_\Sigma+\frac{\partial v_{\rm e}}{\partial \nu_{\cL_{\rm e}}} \Big|_\Sigma\right)- \left(\frac{\partial u_{\rm i}}{\partial \nu_{\cL_{\rm i}}} \Big|_\Sigma+\frac{\partial u_{\rm e}}{\partial \nu_{\cL_{\rm e}}} \Big|_\Sigma,v\vert_\Sigma\right)\\ &\qquad=(\Gamma_1 u,\Gamma_0 v)-(\Gamma_0 u,\Gamma_1 v). \end{split} \end{equation*} We have shown that (ii) in Proposition~\ref{prop:ratetheorem} holds. Finally it is not difficult to see that ${\text{\rm dom\,}} A_0=H^2(\dR^n)$ is contained in $\ker\Gamma_0$, that is, assumption (iii) in Proposition~\ref{prop:ratetheorem} is satisfied. Therefore we obtain from Proposition~\ref{prop:ratetheorem} that $T\upharpoonright(\ker\Gamma_0\cap\ker\Gamma_1)$ is a densely defined, closed, symmetric operator in $L^2(\dR^n)$, that $\{ L^2 (\Sigma), \Gamma_0, \Gamma_1\}$ is a quasi boundary triple for its adjoint and that $A_0 = T\upharpoonright\ker\Gamma_0$. In particular, $T\upharpoonright\ker\Gamma_0$ is defined on $H^2(\dR^n)$. Hence $T\upharpoonright(\ker\Gamma_0\cap\ker\Gamma_1)$ coincides with the symmetric operator $S$ in \eqref{eq:SSchroed} and $\{ L^2 (\Sigma), \Gamma_0, \Gamma_1\}$ is a quasi boundary triple for $\overline T=S^*$. It remains to check that the corresponding Weyl function has the form \eqref{eq:WeylSchreod}. For this let $\lambda \in\rho(A_0)\cap \rho (A_{\rm D, i}) \cap \rho (A_{\rm D, e})$ and let $u_\lambda=u_{\lambda, \rm i}\oplus u_{\lambda, \rm e} \in \ker (T - \lambda)$, that is, $u_{\lambda, j} \in H^2 (\Omega_j)$, $j = \rm i, e$, $u_{\lambda, \rm i} |_\Sigma = u_{\lambda, \rm e} |_\Sigma$, and $\cL_j u_{\lambda,j}=\lambda u_{\lambda,j}$, $j = \rm i, e$. 
Then we have \begin{align}\label{eq:couplingWeyl} \bigl(\Lambda_{\rm i} (\lambda) + \Lambda_{\rm e} (\lambda)\bigr)\Gamma_1 u_\lambda =\frac{\partial u_{\lambda,\rm i}}{\partial \nu_{\cL_{\rm i}}} \Big|_\Sigma+\frac{\partial u_{\lambda,\rm e}}{\partial \nu_{\cL_{\rm e}}} \Big|_\Sigma =\Gamma_0 u_\lambda. \end{align} Note further that $\Lambda_{\rm i} (\lambda) + \Lambda_{\rm e} (\lambda)$ is injective for all $\lambda \in\rho(A_0)\cap \rho (A_{\rm D, i}) \cap \rho (A_{\rm D, e})$. In fact, assume $\Gamma_1 u_\lambda \in \ker (\Lambda_{\rm i} (\lambda) + \Lambda_{\rm e} (\lambda))$. Then \eqref{eq:couplingWeyl} implies $u_\lambda \in \ker \Gamma_0 = {\text{\rm dom\,}} A_0$, and it follows $u_\lambda \in \ker (A_0 - \lambda)$. Since $\lambda \in \rho (A_0)$ we obtain $u_\lambda = 0$ and, hence, $\Gamma_1 u_\lambda = 0$. Therefore it follows from \eqref{eq:couplingWeyl} that the Weyl function corresponding to $\{{\mathcal G}} \def\cH{{\mathcal H}} \def\cI{{\mathcal I},\Gamma_0,\Gamma_1\}$ coincides with the function $M$ in~\eqref{eq:WeylSchreod}. \end{proof} In the next lemma it is shown that $S$ satisfies the local simplicity in the assumptions of the results in Section~\ref{sec:abstr}. \begin{lemma}\label{sonnenschein} Let $A_0$ be the selfadjoint elliptic operator in \eqref{eq:SchroedingerOp} with spectral measure $E (\cdot)$ and let $S$ be the symmetric operator in \eqref{eq:SSchroed}. Let $\{L^2 (\Sigma), \Gamma_0, \Gamma_1\}$ be the quasi boundary triple in Lemma~\ref{prop:qbtSchreod} and let $\gamma$ be the corresponding $\gamma$-field. Then $$ \clsp \bigl\{ E (\delta) \gamma (\nu) g : g \in H^{1/2} (\Sigma),\,\nu \in \mathbb{C} \setminus \mathbb{R} \bigr\} = E (\delta) L^2 (\mathbb{R}^n)$$ holds for every open interval $\delta\subset\dR$ such that $\delta\cap\sigma_{\rm p}(S)=\emptyset$. \end{lemma} \begin{proof} For $j = \rm i, e$ we consider the densely defined, closed, symmetric operators \begin{align*} S_j u_j = \cL_j u_j, \quad {\text{\rm dom\,}} S_j = \bigg\{ u_j \in H^2 (\Omega_j) : u_j |_\Sigma = \frac{\partial u_j}{\partial \nu_{\cL_j}} \Big|_\Sigma = 0 \bigg\}, \end{align*} in $L^2 (\Omega_j)$ and the operators \begin{align*} T_j u_j = \cL_j u_j, \quad {\text{\rm dom\,}} T_j = H^2 (\Omega_j), \end{align*} in $L^2 (\Omega_j)$. It is not difficult to verify that $\{L^2(\Sigma),\Gamma_0^j,\Gamma_1^j\}$, where \begin{align*} \Gamma_0^j, \Gamma_1^j : {\text{\rm dom\,}} T_j \to L^2 (\Sigma),\quad \Gamma_0^j u_j = u_j |_{\Sigma}, \quad \Gamma_1^j u_j = - \frac{\partial u_j}{\partial \nu_{\cL_j}} \Big|_\Sigma, \end{align*} is a quasi boundary triple for $S_j^*$, $j = \rm i, e$; cf. \cite[Proposition 4.1]{BL07}. For $\lambda \in \rho (A_{\rm D, j})$, $j = \rm i, e$, the corresponding $\gamma$-fields are given by \begin{align*} \gamma_j (\lambda):L^2(\Sigma)\supset H^{3/2}(\Sigma)\rightarrow L^2(\Omega_j),\qquad \varphi \mapsto \gamma_j (\lambda)\varphi= u_{\lambda,j}, \end{align*} where $u_{\lambda,j}$ is the unique solution in $H^2(\Omega_j)$ of $\cL_j u_j=\lambda u_j$, $u_j\vert_\Sigma=\varphi$. It follows in the same way as in \cite[Proposition 2.2]{BR13} that $S_{\rm e}$ is simple; the simplicity of $S_{\rm i}$ follows from a unique continuation argument, see, e.g. \cite[Proposition 2.5]{BR12}. 
Therefore we have \begin{align*} L^2 (\Omega_j) = \clsp \bigl\{ \gamma_j (\nu) g : g \in H^{3/2} (\Sigma), \, \nu \in \mathbb{C} \setminus \mathbb{R} \bigr\}, \quad j = \rm i, e, \end{align*} and hence \begin{align}\label{eq:gammaDensElliptic2} \begin{split} L^2 (\mathbb{R}^n) &= L^2 (\Omega_{\rm i})\oplus L^2 (\Omega_{\rm e})\\ &=\clsp \bigl\{ \gamma_{\rm i} (\mu) g\oplus \gamma_{\rm e} (\nu)h : g,h \in H^{3/2} (\Sigma), \, \mu, \nu \in \mathbb{C} \setminus \mathbb{R} \bigr\}. \end{split} \end{align} Here and in the following $\oplus$ denotes the orthogonality of the closed subspaces $L^2 (\Omega_{\rm i})$ and $L^2 (\Omega_{\rm e})$ in $L^2 (\mathbb{R}^n)$. Let now $\delta\subset\dR$ be an open interval such that $\delta\cap\sigma_{\rm p}(S)=\emptyset$ and let $T$ be as in \eqref{eq:TSchreod}. Since \begin{equation}\label{neuesfeld} \bigl\{\gamma_{\rm i} (\nu) g \oplus \gamma_{\rm e}(\nu)g: g \in H^{3/2} (\Sigma)\bigr\} = \ker (T - \nu) = {\text{\rm ran\,}} \gamma (\nu),\qquad \nu \in \mathbb{C} \setminus \mathbb{R}, \end{equation} we have to verify that \begin{equation*} \cH_\delta := \clsp \big\{ E (\delta) (\gamma_{\rm i} (\nu) g\oplus \gamma_{\rm e} (\nu)g) : g \in H^{3/2} (\Sigma) , \,\nu \in \mathbb{C} \setminus \mathbb{R} \big\} = E (\delta) L^2 (\mathbb{R}^n). \end{equation*} We note first that the inclusion $\cH_\delta\subset E (\delta) L^2 (\mathbb{R}^n)$ is obviously true. For the opposite inclusion we conclude from \eqref{eq:gammaDensElliptic2} that it suffices to verify \begin{equation}\label{simplificationSimpleSchroed} \begin{split} E (\delta) (\gamma_{\rm i} (\mu) g \oplus 0) \in \cH_\delta,&\qquad g \in H^{3/2} (\Sigma), \, \mu \in \mathbb{C} \setminus \mathbb{R},\\ E (\delta) (0 \oplus \gamma_{\rm e} (\nu) h ) \in \cH_\delta,&\qquad h \in H^{3/2} (\Sigma), \, \nu \in \mathbb{C} \setminus \mathbb{R}. \end{split} \end{equation} Let us show the statements in~\eqref{simplificationSimpleSchroed}. We start with the second one. Let us fix $\mu \in \mathbb{C} \setminus \mathbb{R}$. By Lemma~\ref{lem:gammaWeylProp}~(ii) we have \begin{align*} \gamma_j (\nu) h = \big( I + (\nu - \mu) (A_{{\rm D}, j} - \nu)^{-1} \big) \gamma_j (\mu) h, \quad h \in H^{3/2} (\Sigma),\, \nu \in \mathbb{C} \setminus \mathbb{R}, \end{align*} $j = \rm i, e$. From this it follows \begin{equation*} \begin{split} \cH_\delta & = \clsp \big\{ E (\delta) (\gamma_{\rm i} (\nu) h\oplus \gamma_{\rm e} (\nu)h) : h \in H^{3/2} (\Sigma) , \,\nu \in \mathbb{C} \setminus \mathbb{R} \big\}\\ & = \clsp \Big\{ E (\delta) (\gamma_{\rm i} (\mu) h \oplus \gamma_{\rm e} (\mu) h), \\ & \quad E (\delta) \left((A_{\rm D, i} - \nu)^{-1} \gamma_{\rm i} (\mu) h \oplus (A_{\rm D, e} - \nu)^{-1} \gamma_{\rm e} (\mu) h \right) : h \in H^{3/2} (\Sigma), \,\nu \in \mathbb{C} \setminus \mathbb{R} \Big\}. \end{split} \end{equation*} Since $A_{\rm D, i}$ and $A_{\rm D, e}$ are both semibounded from below we may choose $\lambda_0 \in \mathbb{R}$ such that $\sigma (A_{{\rm D}, j}) \subset (\lambda_0, \infty)$, $j = \rm i, e$. Recall that the spectrum of $A_{\rm D, i}$ is purely discrete and let $\lambda_1 < \lambda_2 < \dots$ be the distinct eigenvalues of $A_{\rm D, i}$. 
Then for all $\eta, \varepsilon > 0$ and $k=0, 1, 2,\dots$ the function \begin{align*} E (\delta) \Bigg[\int_{\lambda_k + \eta}^{\lambda_{k + 1} - \eta} & \big( (A_{\rm D, i} - (\lambda + i \varepsilon) )^{-1} - (A_{\rm D, i} - (\lambda - i \varepsilon) )^{-1} \big) \gamma_{\rm i} (\mu) h\, d\lambda \\ & \oplus \int_{\lambda_k + \eta}^{\lambda_{k + 1} - \eta} \big( (A_{\rm D, e} - (\lambda + i \varepsilon) )^{-1} - (A_{\rm D, e} - (\lambda - i \varepsilon) )^{-1} \big) \gamma_{\rm e} (\mu) h \,d \lambda\Bigg] \end{align*} belongs to $\cH_\delta$, and as $(\lambda_k,\lambda_{k+1})\subset\rho(A_{\rm D, i})$, Stone's formula implies \begin{align}\label{eq:defectExpr} E (\delta) \bigl(0 \oplus E_{\rm e} ((\lambda_k, \lambda_{k + 1})) \gamma_{\rm e} (\mu) h \bigr) \in \cH_\delta, \end{align} where $E_{\rm e} (\cdot)$ is the spectral measure of $A_{\rm D, e}$. Next we show that for the eigenvalues $\lambda_k$, $k=1,2,\dots,$ of $A_{\rm D, i}$ the property \begin{align}\label{eq:defectExpr3} E (\delta) \bigl(0 \oplus E_{\rm e} (\{\lambda_k\}) \gamma_{\rm e} (\mu) h \bigr) \in \cH_\delta \end{align} holds. For this consider the element $$u = 0 \oplus E_{\rm e} (\{\lambda_k\}) \gamma_{\rm e} (\mu) h$$ for some fixed $h\in H^{3/2} (\Sigma)$. Clearly, as $u \in \ker ((A_{\rm D, i} \oplus A_{\rm D, e}) - \lambda_k)$ and as $A_{\rm D, i} \oplus A_{\rm D, e}$ is a selfadjoint extension of the symmetric operator $S$ in \eqref{eq:SSchroed} we may write $u$ in the form $u=u_{\rm D}\widetilde\oplus u_S$ with $u_S\in\ker(S-\lambda_k)$ and \begin{equation}\label{ud} u_{\rm D}\in\ker\bigl((A_{\rm D, i} \oplus A_{\rm D, e}) - \lambda_k\bigr)\widetilde\ominus\ker(S-\lambda_k), \end{equation} where $\widetilde\oplus$ and $\widetilde \ominus$ indicate the orthogonality of subspaces in $\ker ((A_{\rm D, i} \oplus A_{\rm D, e}) - \lambda_k)$. Then for each $v \in \bigcap_{\nu \in \mathbb{C} \setminus \mathbb{R}} {\text{\rm ran\,}} (S - \nu)$ and each $\nu \in \mathbb{C} \setminus \mathbb{R}$ one has \begin{align}\label{eq:calc} \begin{split} (v,u_{\rm D}) & = ( (S - \nu) (S - \nu)^{-1} v,u_{\rm D}) = \bigl((S - \nu)^{-1} v,((A_{\rm D, i} \oplus A_{\rm D, e}) - \overline\nu) u_{\rm D}\bigr) \\ & = (\lambda_k - \nu) ((S - \nu)^{-1} v,u_{\rm D}). \end{split} \end{align} Since the limit $$y:=\lim_{\eta \searrow 0} \eta \bigl(S - (\lambda_k + i \eta)\bigr)^{-1} v= \lim_{\eta \searrow 0} \eta \bigl((A_{\rm D, i} \oplus A_{\rm D, e} )- (\lambda_k + i \eta)\bigr)^{-1} v$$ exists and \begin{equation*} \begin{split} \bigl(y,(S^*-\lambda_k)w \bigr) &=\lim_{\eta\searrow 0}\eta\bigl(\bigl(S - (\lambda_k + i \eta)\bigr)^{-1} v,(S^*-\lambda_k)w\bigr)\\ &=\lim_{\eta\searrow 0}\eta\bigl((S-\lambda_k)\bigl(S - (\lambda_k + i \eta)\bigr)^{-1} v,w\bigr)\\ &=\lim_{\eta\searrow 0}\eta\bigl[(v,w)+\bigl(i\eta\bigl(S - (\lambda_k + i \eta)\bigr)^{-1} v,w\bigr)\bigr]=0 \end{split} \end{equation*} holds for all $w \in {\text{\rm dom\,}} S^*$ we conclude that $$y=\lim_{\eta \searrow 0} \eta \bigl(S - (\lambda_k + i \eta)\bigr)^{-1} v\in \bigl({\text{\rm ran\,}} (S^*-\lambda_k)\bigr)^\bot=\ker(S-\lambda_k).$$ In particular, \eqref{ud} implies $(y,u_{\rm D})=0$. Therefore we obtain from the identity~\eqref{eq:calc} with $\nu = \lambda_k + i \eta$ in the limit \begin{equation*} (v,u_{\rm D})=-i\,\lim_{\eta \searrow 0}\eta \bigl(\bigl(S - (\lambda_k + i \eta)\bigr)^{-1} v,u_{\rm D}\bigr)=-i(y,u_{\rm D})=0. 
\end{equation*} This shows that $u_{\rm D}$ is orthogonal to $\bigcap_{\nu \in \mathbb{C} \setminus \mathbb{R}} {\text{\rm ran\,}} (S - \nu)$ and hence \begin{equation*} u_{\rm D} \in \clsp \bigl\{ \ker (S^* - \nu) : \nu \in \mathbb{C} \setminus \mathbb{R} \bigr\}= \clsp \bigl\{ \ker (T - \nu) : \nu \in \mathbb{C} \setminus \mathbb{R} \bigr\}. \end{equation*} Therefore \eqref{neuesfeld} implies \begin{equation}\label{eq:uD} u_{\rm D} \in \clsp \bigl\{ \gamma_{\rm i} (\nu) h \oplus \gamma_{\rm e}(\nu)h : h \in H^{3/2} (\Sigma),\, \nu \in \mathbb{C} \setminus \mathbb{R}\bigr\}. \end{equation} Note that if the eigenvalue $\lambda_k$ of $A_{\rm D, i}$ is contained in the interval $\delta$ then by assumption $\lambda_k\not\in\sigma_{\rm p}(S)$ and hence $u=u_{\rm D}$ in this case. If $\lambda_k\not\in\delta$ then $u_S \in \ker (S - \lambda_k) \subset \ker (A_0 - \lambda_k)$ implies that $u_S$ is orthogonal to ${\text{\rm ran\,}} E (\delta)$, so that $E (\delta) u_S=0$. Summing up we have for any eigenvalue $\lambda_k$, $k=1,2,\dots$, of $A_{\rm D, i}$ that \begin{equation*} E (\delta) \bigl(0 \oplus E_{\rm e} (\{\lambda_k\}) \gamma_{\rm e} (\mu) h\bigr) = E ( \delta) u = E ( \delta) (u_S \widetilde\oplus u_{\rm D}) = E (\delta) u_{\rm D} \in \cH_\delta \end{equation*} by~\eqref{eq:uD}. We have shown \eqref{eq:defectExpr3}. Let $m\in\mathbb{N}$. Then we have $$ E_{\rm e}((-\infty,\lambda_m))\gamma_{\rm e} (\mu) h= \sum_{k=1}^{m-1} E_{\rm e}(\{\lambda_k\})\gamma_{\rm e} (\mu) h+\sum_{k=0}^{m-1} E_{\rm e}((\lambda_k,\lambda_{k+1}))\gamma_{\rm e} (\mu) h $$ and from \eqref{eq:defectExpr} and \eqref{eq:defectExpr3} we conclude \begin{align*} E (\delta) \big( 0 \oplus E_{\rm e} ((-\infty, \lambda_m)) \gamma_{\rm e} (\mu) h \big) \in \cH_\delta. \end{align*} Taking the limit $m \nearrow + \infty$ we obtain $E (\delta) (0 \oplus \gamma_{\rm e} (\mu) h) \in \cH_\delta$. We have proved the second statement in \eqref{simplificationSimpleSchroed}. For the first statement in \eqref{simplificationSimpleSchroed} observe that for $\mu\in\mathbb{C}\setminus\mathbb{R}$ fixed, $g\in H^{3/2}(\Sigma)$ and $k=1,2,\dots$ \begin{align*} E (\delta) \bigl(E_{\rm i} (\{\lambda_k\}) \gamma_{\rm i} (\mu) g \oplus 0 \bigr) \in \cH_\delta \end{align*} can be verified in the same way as \eqref{eq:defectExpr3}, where $E_{\rm i} (\cdot)$ is the spectral measure of $A_{\rm D, i}$. Hence for $m\in\mathbb{N}$ we conclude \begin{align*} E (\delta) \bigl(E_{\rm i} ((-\infty,\lambda_m)) \gamma_{\rm i} (\mu) g \oplus 0 \bigr) \in \cH_\delta \end{align*} and in the limit $m \nearrow + \infty$ we obtain the first statement in \eqref{simplificationSimpleSchroed}. Now \eqref{simplificationSimpleSchroed} together with \eqref{eq:gammaDensElliptic2} implies the inclusion $E(\delta) L^2(\dR^n)\subset \cH_\delta$. This completes the proof of Lemma~\ref{sonnenschein}. \end{proof} As a consequence of Lemma~\ref{sonnenschein} we obtain the following corollary. \begin{corollary}\label{simplecor} The operator $S$ in \eqref{eq:SSchroed} is simple if and only if $\sigma_{\rm p}(S)=\emptyset$. \end{corollary} \begin{proof}[{\bf Proof of Theorem~\ref{thm:eigenSchroed}}] Let $\{L^2 (\Sigma), \Gamma_0, \Gamma_1 \}$ be the quasi boundary triple for $\overline T = S^*$ in Lemma~\ref{prop:qbtSchreod}. Then $T \upharpoonright \ker \Gamma_0$ corresponds to the selfadjoint elliptic differential operator $A_0$ in~\eqref{eq:SchroedingerOp} and the associated Weyl function coincides with the operator function $M$ in~\eqref{eq:WeylSchreod}.
Taking Lemma~\ref{sonnenschein} into account, item (i) follows from Corollary~\ref{thm:eigen} and items (ii)-(iv) are consequences of Theorem~\ref{thm:specTotal} and Proposition~\ref{prop:isolatedEV} when choosing an open interval $\delta \ni \lambda$ with $\delta \cap \sigma_{\rm p} (S) = \emptyset$. Moreover, item~(v) follows from Theorem~\ref{thm:ACtheorem} and Corollary~\ref{cor:ACequiv}, and item~(vi) is due to Theorem~\ref{thm:SCtheorem} and Corollary~\ref{cor:SCcor}. \end{proof} We point out that in the case that the symmetric operator $S$ is simple the assertions in Theorem~\ref{thm:eigenSchroed} hold for all $\lambda,\mu\in\dR$. On the other hand, without further assumptions, it may happen that $S$ possesses eigenvalues. In this case at least the parts of the eigenspaces of $A$ which do not belong to $S$ can be characterized in terms of the function $M$; cf.~Theorem~\ref{thm:eigenGeneral}. The next examples illustrate that a proper choice of the interface $\Sigma$ may avoid eigenvalues of $S$. \begin{example}\label{ex:simple} Assume that $\cL$ equals the Laplacian outside some compact set $K \subset \mathbb{R}^n$ and choose $\Sigma$ to be the boundary of any smooth, bounded domain $\Omega_{\rm i} \supset K$. Then $S$ does not have any eigenvalues. Indeed, if $u \in H^2 (\mathbb{R}^n)$ satisfies $\cL u = \lambda u$ on $\mathbb{R}^n$ and $u |_\Sigma = 0$ then $u |_{\Omega_{\rm e}}$ belongs to $\ker (A_{\rm D, e} - \lambda)$ and must vanish. Then a unique continuation argument implies $u = 0$. Hence $S$ is simple by Corollary~\ref{simplecor} and the assertions in Theorem~\ref{thm:eigenSchroed} hold for all $\lambda,\mu\in\dR$. \end{example} \begin{example}\label{ex:simple2} Let the coefficients of $\cL$ be chosen in a way such that for some bounded, smooth domain $\Omega_{\rm i} \subset \mathbb{R}^n$ the operator $A_{\rm D,i}$ in $L^2 (\Omega_{\rm i})$ is strictly positive; for instance this happens if $- \frac{2}{E} \sum_{j = 0}^n \|a_j\|_\infty^2 + \inf a \geq 0$ on $\Omega_{\rm i}$, where $E$ is an ellipticity constant for $\cL$, see~\eqref{eq:elliptic}. If we choose $\Sigma = \partial \Omega_{\rm i}$ then $S$ has no non-positive eigenvalues, otherwise $S u = \lambda u$ for some $\lambda \leq 0$ and $u \in {\text{\rm dom\,}} S$ with $u \neq 0$, and a unique continuation argument yields that $u_{\rm i}$ is nontrivial, thus $u_{\rm i}$ is an eigenfunction of $A_{\rm D, i}$ corresponding to the eigenvalue $\lambda \leq 0$, a contradiction. Hence in this situation all non-positive eigenvalues of $A_0$ and the corresponding eigenspaces can be described completely in terms of the function $M$. \end{example} \subsection{A block operator matrix Weyl function associated with a decoupled system}\label{42} In this section we consider a different Weyl function for the operator $A_0$, which corresponds to a symmetric operator which is always simple, independently of the choice of the interface $\Sigma$. This symmetric operator is the orthogonal sum of the minimal symmetric realizations $S_{\rm i}$ and $S_{\rm e}$ of $\cL$ in $L^2 (\Omega_{\rm i})$ and $L^2 (\Omega_{\rm e})$, respectively, in the proof of Lemma~\ref{sonnenschein}, and hence an infinite dimensional restriction of the symmetric operator in \eqref{eq:SSchroed}; it can be viewed as a decoupled symmetric operator. 
Let $\Lambda_{\rm i}$ and $\Lambda_{\rm e}$ be the Dirichlet-to-Neumann maps for the interior and exterior elliptic boundary value problem, respectively, defined in~\eqref{eq:DNie}, and let \begin{equation*} A_{\rm N, e} u_{\rm e} = \cL_{\rm e} u_{\rm e}, \quad {\text{\rm dom\,}} A_{{\rm N,e}} = \left\{ u_{\rm e} \in H^2 (\Omega_{\rm e}) : \frac{\partial u_{\rm e}}{\partial \nu_{\cL_{\rm e}}} \Big|_\Sigma= 0 \right\}, \end{equation*} be the selfadjoint realization of $\cL_{\rm e}$ in $L^2(\Omega_{\rm e})$ with Neumann boundary conditions. In Lemma~\ref{prop:qbtSchreod3} below it will turn out that the function \begin{equation}\label{widetildem} \lambda \mapsto \widetilde M(\lambda)= \begin{pmatrix} \Lambda_{\rm i}(\lambda) & 1 \\ 1 & -\Lambda_{\rm e}(\lambda)^{-1}\end{pmatrix}^{-1} \quad \text{in}~L^2 (\Sigma) \times L^2 (\Sigma) \end{equation} is well defined on $\rho(A_0)\cap \rho (A_{\rm D, i}) \cap \rho (A_{\rm N, e})$ and can be viewed as the Weyl function of a quasi boundary triple for $(S_{\rm i} \oplus S_{\rm e})^*$, where $A_0$ in~\eqref{eq:SchroedingerOp} corresponds to the kernel of the first boundary mapping. We mention that a scalar analog of the function $\widetilde M$ in \eqref{widetildem} appears in connection with $\lambda$-dependent Sturm--Liouville boundary value problems in \cite{DLS87} and in more general abstract form in \cite{DHMS00}, see also \cite{BLT13} for more details and references. In the present setting Lemma~\ref{prop:qbtSchreod3} and Lemma~\ref{simpleagain} below combined with the results in Section~\ref{sec:abstr} lead to an improvement of items (i)-(iv) in Theorem~\ref{thm:eigenSchroed}. The assertions (v) and (vi) in Theorem~\ref{thm:eigenSchroed} remain valid with $M$ and $H^{1/2}(\Sigma)$ replaced by $\widetilde M$ and $H^{1/2}(\Sigma)\times H^{3/2}(\Sigma)$, respectively, but will not be formulated again. \begin{theorem}\label{thm:eigenSchroedDecoup} Let $A_0$, $\Sigma$, and $\widetilde M$ be as above and let $\lambda\in\dR$. Then the following assertions hold. \begin{enumerate} \item $\lambda \in\sigma_{\rm p}(A_0)$ if and only if $R_\lambda \widetilde M:=$ \textup{s}-$\lim_{\eta \searrow 0} i \eta \widetilde M (\lambda + i \eta) \neq 0$; if the multiplicity of the eigenvalue $\lambda$ is finite then the mapping \begin{align}\label{eq:tauSchroeddeltaq} \tau : \ker (A_0 - \lambda) \to {\text{\rm ran\,}} R_\lambda \widetilde M, \quad u \mapsto \begin{pmatrix} u_{\rm i} |_\Sigma\\ \frac{\partial u_{\rm e}}{\partial \nu_{\cL_{\rm e}}} \big|_\Sigma\end{pmatrix}, \end{align} is bijective; if the multiplicity of the eigenvalue $\lambda$ is infinite then the mapping \begin{align}\label{eq:taugenSchroeddeltaq} \tau : \ker (A_0 - \lambda) \to \cl_\tau \bigl( {\text{\rm ran\,}} R_\lambda \widetilde M\bigr), \quad u \mapsto \begin{pmatrix} u_{\rm i} |_\Sigma\\ \frac{\partial u_{\rm e}}{\partial \nu_{\cL_{\rm e}}} \big|_\Sigma\end{pmatrix}, \end{align} is bijective, where $\cl_\tau$ denotes the closure in the normed space ${\text{\rm ran\,}} \tau$. \item $\lambda$ is an isolated eigenvalue of $A_0$ if and only if $\lambda$ is a pole in the strong sense of~$\widetilde M$. In this case \eqref{eq:tauSchroeddeltaq} and~\eqref{eq:taugenSchroeddeltaq} hold with $R_\lambda \widetilde M = \textup{Res}_\lambda \widetilde M$. \item $\lambda\in\rho(A_0)$ if and only if $\widetilde M$ can be continued analytically into $\lambda$. 
\item $\lambda\in\sigma_{\rm c}(A_0)$ if and only if \textup{s}-$\lim_{\eta \searrow 0} i \eta \widetilde M (\lambda + i \eta) = 0$ and $\widetilde M$ cannot be continued analytically into $\lambda$. \end{enumerate} \end{theorem} We provide a quasi boundary triple such that $\widetilde M$ in \eqref{widetildem} is the corresponding Weyl function. As indicated above we make use of the densely defined, closed, symmetric operators \begin{align*} S_j u_j = \cL_j u_j, \quad {\text{\rm dom\,}} S_j = \bigg\{ u_j \in H^2 (\Omega_j) : u_j |_\Sigma = \frac{\partial u_j}{\partial \nu_{\cL_j}} \Big|_\Sigma = 0 \bigg\}, \end{align*} in $L^2 (\Omega_j)$ for $j = \rm i, e$, which already appeared in the proof of Lemma~\ref{sonnenschein} and which are both simple. Besides the operators $S_j$ also the operators \begin{align*} T_j u_j = \cL_j u_j, \quad {\text{\rm dom\,}} T_j = H^2 (\Omega_j), \end{align*} appear in the formulation of the next lemma. \begin{lemma}\label{prop:qbtSchreod3} The triple $\{ L^2 (\Sigma)\times L^2 (\Sigma), \widetilde\Gamma_0, \widetilde\Gamma_1\}$, where $\widetilde\Gamma_0, \widetilde\Gamma_1 : {\text{\rm dom\,}} (T_{\rm i}\oplus T_{\rm e}) \to L^2 (\Sigma)\times L^2 (\Sigma)$ and \begin{align*} \widetilde\Gamma_0 u = \begin{pmatrix}\frac{\partial u_{\rm i}}{\partial \nu_{\cL_{\rm i}}} \big|_\Sigma + \frac{\partial u_{\rm e}}{\partial \nu_{\cL_{\rm e}}} \big|_\Sigma \\ u_{\rm i} |_\Sigma-u_{\rm e} |_\Sigma\end{pmatrix} , \quad \widetilde\Gamma_1 u = \begin{pmatrix} u_{\rm i} |_\Sigma\\ \frac{\partial u_{\rm e}}{\partial \nu_{\cL_{\rm e}}} \big|_\Sigma\end{pmatrix}, \end{align*} is a quasi boundary triple for $S_{\rm i}^*\oplus S_{\rm e}^*$ such that $(T_{\rm i}\oplus T_{\rm e})\upharpoonright \ker \widetilde\Gamma_0$ coincides with the operator $A_0$ in~\eqref{eq:SchroedingerOp} and ${\text{\rm ran\,}}\widetilde\Gamma_0= H^{1/2} (\Sigma)\times H^{3/2}(\Sigma)$. For all $\lambda \in\rho(A_0)\cap \rho (A_{\rm D, i}) \cap \rho (A_{\rm N, e})$ the corresponding Weyl function coincides with the function $\widetilde M$ in \eqref{widetildem}. \end{lemma} \begin{proof} The proof of Lemma~\ref{prop:qbtSchreod3} follows the same strategy as the proof of Lemma~\ref{prop:qbtSchreod} and some details are left to the reader. Well-known properties of traces of $H^2$-functions yield $${\text{\rm ran\,}}(\widetilde\Gamma_0,\widetilde\Gamma_1)^\top= \bigl(H^{1/2}(\Sigma)\times H^{3/2}(\Sigma)\bigr)\times\bigl(H^{3/2}(\Sigma) \times H^{1/2}(\Sigma)\bigr),$$ which is dense in $(L^2 (\Sigma)\times L^2 (\Sigma))^2$. Moreover, $C_0^\infty(\dR^n\setminus\Sigma)$ is a dense subspace of $L^2 (\mathbb{R}^n)$ which is contained in $\ker\widetilde\Gamma_0\cap\ker\widetilde\Gamma_1$. Green's identity implies that \eqref{eq:absGreen} holds, and as $H^2(\dR^n)$ is contained in $\ker\widetilde\Gamma_0$ the selfadjoint operator $A_0$ is contained in $(T_{\rm i}\oplus T_{\rm e})\upharpoonright \ker \widetilde\Gamma_0$. Hence the assumptions (i)-(iii) in Proposition~\ref{prop:ratetheorem} are satisfied and it follows that $\{ L^2 (\Sigma)\times L^2 (\Sigma), \widetilde\Gamma_0, \widetilde\Gamma_1\}$ is a quasi boundary triple for $S_{\rm i}^*\oplus S_{\rm e}^*$ such that $A_0 = (T_{\rm i}\oplus T_{\rm e})\upharpoonright \ker \widetilde\Gamma_0 $. Let us verify that the corresponding Weyl function is given by $\widetilde M$ in \eqref{widetildem}.
For this let $\lambda \in\rho(A_0)\cap \rho (A_{\rm D, i}) \cap \rho (A_{\rm N, e})$ and let $u_\lambda=u_{\lambda, \rm i}\oplus u_{\lambda, \rm e} \in{\text{\rm dom\,}} (T_{\rm i}\oplus T_{\rm e})$ be such that $\cL_j u_{\lambda,j}=\lambda u_{\lambda,j}$, $j = \rm i, e$. Then we have \begin{equation*} \begin{split} \begin{pmatrix} \Lambda_{\rm i}(\lambda) & 1 \\ 1 & -\Lambda_{\rm e}(\lambda)^{-1}\end{pmatrix} \widetilde\Gamma_1 u_\lambda &= \begin{pmatrix} \Lambda_{\rm i}(\lambda) & 1 \\ 1 & -\Lambda_{\rm e}(\lambda)^{-1}\end{pmatrix} \begin{pmatrix} u_{\lambda,{\rm i}} |_\Sigma\\ \frac{\partial u_{\lambda,{\rm e}}}{\partial \nu_{\cL_{\rm e}}} \big|_\Sigma\end{pmatrix}\\ &=\begin{pmatrix} \Lambda_{\rm i}(\lambda) u_{\lambda,{\rm i}} |_\Sigma + \frac{\partial u_{\lambda,{\rm e}}}{\partial \nu_{\cL_{\rm e}}} \big|_\Sigma\\ u_{\lambda,{\rm i}} |_\Sigma- \Lambda_{\rm e}(\lambda)^{-1}\frac{\partial u_{\lambda,{\rm e}}}{\partial \nu_{\cL_{\rm e}}} \big|_\Sigma \end{pmatrix}\\ &= \begin{pmatrix} \frac{\partial u_{\lambda,{\rm i}}}{\partial \nu_{\cL_{\rm i}}} \big|_\Sigma+\frac{\partial u_{\lambda,{\rm e}}}{\partial \nu_{\cL_{\rm e}}} \big|_\Sigma \\ u_{\lambda,{\rm i}} |_\Sigma - u_{\lambda,{\rm e}} |_\Sigma \end{pmatrix}=\widetilde\Gamma_0 u_\lambda. \end{split} \end{equation*} By the definition of the Weyl function we obtain that the function $\widetilde M$ in \eqref{widetildem} coincides with the Weyl function associated to the quasi boundary triple $\{ L^2 (\Sigma)\times L^2 (\Sigma), \widetilde\Gamma_0, \widetilde\Gamma_1\}$ for all $\lambda \in\rho(A_0)\cap \rho (A_{\rm D, i}) \cap \rho (A_{\rm N, e})$. \end{proof} The next lemma is a direct consequence of the fact that the symmetric operators $S_{\rm i}$ and $S_{\rm e}$ are simple; cf. \cite[Proposition 2.5]{BR12} and \cite[Proposition 2.2]{BR13}. \begin{lemma}\label{simpleagain} The symmetric operator $S_{\rm i}\oplus S_{\rm e}$ is simple. \end{lemma} \begin{proof}[{\bf Proof of Theorem~\ref{thm:eigenSchroedDecoup}}] Let $\{L^2 (\Sigma)\times L^2(\Sigma), \widetilde\Gamma_0, \widetilde\Gamma_1 \}$ be the quasi boundary triple in Lemma~\ref{prop:qbtSchreod3}. Then $(T_{\rm i}\oplus T_{\rm e}) \upharpoonright \ker \widetilde\Gamma_0$ corresponds to the selfadjoint elliptic differential operator $A_0$ in~\eqref{eq:SchroedingerOp} and the associated Weyl function coincides with the operator function $\widetilde M$ in~\eqref{widetildem}. Taking Lemma~\ref{simpleagain} into account, item (i) follows from Corollary~\ref{thm:eigen} and items (ii)-(iv) are consequences of Theorem~\ref{thm:specTotal} and Proposition~\ref{prop:isolatedEV}. \end{proof}
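To indicate how the entries of the block operator function $\widetilde M$ in \eqref{widetildem} can be expressed in terms of the Dirichlet-to-Neumann maps, we note the following purely formal observation, which ignores all domain, mapping and invertibility questions and is only meant as an illustrative sketch: a formal block inversion gives
\begin{align*}
\widetilde M(\lambda)= \begin{pmatrix} \bigl(\Lambda_{\rm i}(\lambda)+\Lambda_{\rm e}(\lambda)\bigr)^{-1} & \bigl(\Lambda_{\rm i}(\lambda)+\Lambda_{\rm e}(\lambda)\bigr)^{-1}\Lambda_{\rm e}(\lambda) \\ \Lambda_{\rm e}(\lambda)\bigl(\Lambda_{\rm i}(\lambda)+\Lambda_{\rm e}(\lambda)\bigr)^{-1} & -\Lambda_{\rm e}(\lambda)+\Lambda_{\rm e}(\lambda)\bigl(\Lambda_{\rm i}(\lambda)+\Lambda_{\rm e}(\lambda)\bigr)^{-1}\Lambda_{\rm e}(\lambda) \end{pmatrix},
\end{align*}
so that, at this formal level, the upper left entry of $\widetilde M(\lambda)$ is the inverse of the sum $\Lambda_{\rm i}(\lambda)+\Lambda_{\rm e}(\lambda)$ of the interior and exterior Dirichlet-to-Neumann maps.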
\section{Introduction} The foundations of the theory of domination can be traced back to the chess problem of finding the minimum number of queens required such that all the squares are either occupied or can be attacked by a queen \cite{CF}. The applications of the theory of domination include communication network problems, facility location problems, routing, etc.\ \cite{Sas,Pre}. Domination in graphs and signed graphs has been well studied by various authors in different forms, viz.\ Roman domination, double domination, total domination, signed domination, signed total domination, etc.\ \cite{Har1,Hay,BD,As,Pal,Bli,Sam,Boh}. \par In \cite{Pay}, the authors introduced the concept of a Signed Petri net (SPN) by utilizing the properties of signed graphs and Petri nets. SPNs are capable of modeling a large variety of systems and are preferred over Petri nets due to the presence of two distinguishable types of tokens, positive and negative. Another advantage of using an SPN over previously defined extensions of PNs is the ability to assign signs to the vertices of an SPN, which is further utilized to introduce the concept of a balanced SPN as defined in \cite{Pay}. Further, in comparison to a signed graph, an SPN is advantageous since a single SPN can be used to represent various signed graphs by simply varying the marking of the SPN through the firing of a sequence of transitions. Thus, we need to analyse only one SPN in order to draw inferences about all possible signed graph structures that can be formed for a fixed number of vertices. \par As an SPN is a bipartite graph, it can be used to develop a theory of domination for dynamic systems, since such a theory is not prevalent for Petri nets. \section{Basic Definitions} \subsection{Petri Net (PN)} A \textit{Petri net} \cite{Jen} is a 5-tuple $N=(P,T,I^{-},I^{+},\mu_0)$, where \begin{enumerate} \item $P$ is the finite, non-empty set of places. \item $T$ is the finite, non-empty set of transitions. \item $P\cap T =\emptyset$. \item $I^{-},I^{+} :(P\times T)\rightarrow \mathbb{N}$, where $\mathbb{N}$ is the set of non-negative integers, are called the negative and positive incidence functions respectively. \item $\forall p \in P,\exists \ t \in T$ such that $I^{-}(p,t) \neq 0$ or $I^{+}(p,t) \neq 0$, and\\ $\forall t \in T,\exists \ p \in P$ such that $I^{-}(p,t) \neq 0$ or $I^{+}(p,t) \neq 0$. \item $\mu_0 :P\rightarrow \mathbb{N}$ is the initial marking which gives the initial distribution of tokens in places. The arc set of the Petri net $N$ is defined as: $$E=\{(p,t):I^{-}(p,t)>0\} \cup \{(t,p):I^{+}(p,t)>0\}$$ \end{enumerate} \subsection{Signed Petri Net} \label{S_1} \begin{definition}{\textbf{Signed Petri Net (SPN)}} A Signed Petri Net \cite{Pay} is defined as a 3-tuple $N^{*}=(N',\sigma,\mu_0)$, where \begin{enumerate} \item $N'=(P,T,I^{-},I^{+})$ is a Petri net structure. \item $\sigma :E \rightarrow \{+,-\}$, where $E$ is the arc set of $N'$. An arc is called a positive or negative arc respectively according to the sign $+$ or $-$ assigned to it using the function $\sigma$. \item $\mu_0=(\mu^{+}_0,\mu^{-}_0)$ is the initial marking of the SPN, where \begin{enumerate} \item $\mu_0^{+}:P\rightarrow \mathbb{N}$ gives the initial distribution of positive tokens in the places, called the positive marking of the SPN. \item $\mu_0^{-}:P\rightarrow \mathbb{N}$ gives the initial distribution of negative tokens in the places, called the negative marking of the SPN.
\end{enumerate} \end{enumerate} \end{definition} Thus, a marking in an SPN can be represented as a vector $\mu=(\mu^{+},\mu^{-})$ with $\mu^+,\mu^- \in \mathbb{N}^n$, $n=|P|$, such that $\mu(p_i)=(\mu^{+}(p_i),\mu^{-}(p_i)) \ \forall \ p_i \in P$.\\ \par Graphically, positive and negative arcs in an SPN are represented by solid and dotted lines respectively. A positive token is represented by a filled circle and a negative token by an open circle.\\ An SPN is said to be \textit{negative} if all of its arcs are negative in sign. \begin{remark} $N''=(N',\sigma)$ is called an SPN structure, where $N'$ is a PN structure and $\sigma :E \rightarrow \{+,-\}$. \end{remark} \subsubsection{Execution Rules for Signed Petri Net} \label{prop} Similar to a Petri net, the execution of an SPN depends on the distribution of tokens in its places. The execution takes place by the firing of transitions. A transition may fire if it is enabled.\\ A transition $t$ in an SPN $N^{*}$ is \textit{enabled} at a marking $\mu=(\mu^+,\mu^-)$ if \\ $$ I^{-}(p,t) \leq \mu^{+}(p) \ \forall p \in {}^{\bullet}t \ \textnormal{for which} \ \sigma(p,t) = + $$ $$ I^{-}(p,t) \leq \mu^{-}(p) \ \forall p \in {}^{\bullet}t \ \textnormal{for which} \ \sigma(p,t) = - $$ An enabled transition $t$ may \textit{fire} at $\mu=(\mu^+,\mu^-)$ provided $\exists p_k \in t^{\bullet}$ such that: $$\sigma(t,p_k)= \begin{cases} + & \text{if}\ \sigma(p,t)=+ \ \forall p \in {}^{\bullet}t \\ - & \text{if}\ \sigma(p,t)=- \ \forall p \in {}^{\bullet}t \\ + \ \text{or} \ - & \text{if}\ \sigma(p,t)=+ \ \text{for some} \ p \in {}^{\bullet}t \ \text{and} \ - \ \text{for some} \ p \in {}^{\bullet}t \\ \end{cases} $$ After firing, it yields a new marking $\mu_1=(\mu_1^+,\mu_1^-)$ given by the rule: $$\mu_1^+(p)=\mu^+(p) - I^{-}(p,t) + I^{+}(p,t) \ \forall p \in P \quad \textnormal{where $(p,t)$ and $(t,p)$ are positive arcs, if they exist,}$$ $$\mu_1^-(p)=\mu^-(p) - I^{-}(p,t) + I^{+}(p,t) \ \forall p \in P \quad \textnormal{where $(p,t)$ and $(t,p)$ are negative arcs, if they exist.}$$ We say that $\mu_1$ is reachable from $\mu$ and write $\mu \stackrel{t}\to \mu_1$. We restrict the movement of positive (negative) tokens to positive (negative) arcs only. \par A marking $\mu$ is reachable from $\mu_0$ if there exists a firing sequence $\eta$ that transforms $\mu_0$ to $\mu$, written $\mu_0 \stackrel{\eta}\to \mu$. A \textit{firing or occurrence sequence} is a sequence of transitions $\eta=t_1t_2\ldots t_k$ such that $$\mu_0 \stackrel{t_1}\to \mu_1\stackrel{t_2}\to \mu_2 \stackrel{t_3} \to \mu_3 \ldots \stackrel{t_k}\to \mu$$ Note that a transition $t_j$, $1 \leq j \leq k$, can occur more than once in the firing sequence $\eta$.\\ Let us look at the execution of an SPN with the help of an example.\\ In figure \ref{SPN13}$(a)$, $t_1$ and $t_2$ are both enabled at $\mu_0$. Firing of $t_1$ yields a new marking $\mu=((0,1,1,0),(1,0,1,0))$ and firing of $t_2$ yields a new marking $\mu=((1,0,2,0),(0,1,0,1))$. In figure \ref{SPN13}$(b)$, $t_1$ is enabled, while $t_2$ is not.
$t_1$ can fire to give a new marking $\mu=((0,0,1,0),(0,0,0,1))$.\\ \begin{figure}[ht] \centering \subfloat[SPN with $\mu_0=((1,0,1,0),(1,0,0,0))$ ]{{\includegraphics[scale=0.35]{SPN11.jpg} }} \qquad \qquad \qquad \subfloat[SPN with $\mu_0=((1,0,0,0),(0,0,0,0))$]{{\includegraphics[scale=0.3]{SPN12.jpg} }} \caption{ {Execution of an SPN}} \label{SPN13} \end{figure} \begin{definition}{\textbf{Reachability Set of Signed Petri net}} The Reachability Set $R(N^{*},\mu_0)$ of an SPN $N^{*}$ is the set of all markings of $N^*$ reachable from the initial marking $\mu_0$. \end{definition} \subsection{Assignment of sign to vertices of an SPN} The vertices in an SPN can also be assigned signs. A transition is assigned a sign by taking the product of the signs of the arcs (incoming and outgoing) incident on it. In Figures \ref{SPN13}(a) and \ref{SPN13}(b), all transitions are negative in sign. Places can be assigned signs in one of two ways: \begin{enumerate} \item \textbf{With respect to arcs --}\ A sign is assigned to a place by taking the product of the signs of the arcs (incoming and outgoing) incident on that place. In Figure \ref{SPN13}(a), all the places are negative in sign, while in Figure \ref{SPN13}(b) $p_1$ and $p_3$ are positive in sign and $p_2$ and $p_4$ are negative in sign. \item \textbf{With respect to marking --}\ A sign is assigned to a place by taking the product of the signs of the tokens in that place in the given marking. A place without tokens is considered to be positive. In Figure \ref{SPN13}(a), the places $p_2,p_3,p_4$ are positive in sign while $p_1$ is negatively signed w.r.t.\ the marking $\mu_0$. In Figure \ref{SPN13}(b), all the places are positively signed w.r.t.\ the marking $\mu_0$. \end{enumerate} \textit{Remark}-- Assigning signs to places with respect to arcs does not utilize the most important characteristic (dynamic behaviour) of a PN, which is its marking. Hence, assigning signs to places with respect to the marking is used throughout the paper. \par An example is given which uses the concept of place sign to determine whether a product is approved or disapproved by a company. A company has to make a decision on a certain product by the voting of two board members. This situation is represented in figure \ref{fig1} by modeling it with an SPN. \begin{figure}[h] \centering \includegraphics[scale=0.28]{SPN15.jpg} \caption{\textit{An SPN model to check for approval or disapproval of the product}} \label{fig1} \end{figure} In the figure, $p$ and $q$ represent the board members. The transitions $t_1$ and $t_3$ will fire if positive tokens exist in places $p$ and $q$, while the transitions $t_2$ and $t_4$ fire if $p$ and $q$ have negative tokens. A positive token is generated in the places representing the board members ($p$ and $q$) if they approve the product, and a negative token is generated if they disapprove of it. A decision on the product is reached if either both $p$ and $q$ disapprove or both approve the product. On the other hand, if one member approves the product while the other rejects it, no decision is made. We can determine the decision made by the company on the basis of the sign of the place $r$ in the final marking (when place $r$ gets a token). This is shown in table \ref{table1}.
\begin{table*}[htb] \centering \caption{Decision Table for the Product} \label{table1} \begin{tabular}{| c |c |c |} \hline Transition firings & Sign of place $r$ & Decision made \\ \hline \hline $t_1 ,t_3$ & + & Yes\\ \hline $t_1 ,t_4$ & - & No \\ \hline $t_2 ,t_3$ & - & No \\ \hline $t_2 ,t_4$ & + & Yes \\ \hline \end{tabular} \end{table*} Thus, based on the sign of the place $r$, we can infer the following: \begin{enumerate} \item The decision made about the product (the company reaches a final decision if the sign of place $r$ is positive in a marking with $\mu(r) \neq 0$; otherwise no decision is made regarding the product). \item Whether $p$ and $q$ have the same opinion or different ones (both members have the same opinion regarding the product if the sign of place $r$ is positive; otherwise their opinions differ). \end{enumerate} If initially the places $p$ and $q$ in figure \ref{fig1} have one positive and one negative token respectively, then this situation corresponds to the second row in table \ref{table1}. Hence, the product will not be approved by the company. \section{Domination Theory} In this section, we assume an ordinary SPN (without multiple arcs) unless stated otherwise. \begin{definition}{\textbf{Dominating Set}} A set $D \subseteq V=P\cup T$ in an SPN $N^*$ is called a Dominating set with respect to a marking $\mu_1 \in R(N^*,\mu_0)$ if either all the vertices of $V$ are in $D$ or $\forall \ v \in V\backslash D$ $$ v^\bullet \cap D \neq \emptyset \ \textnormal {and} \ \sigma(v,u)=S(v)\cdot S(u) \ \forall \ u \in v^\bullet \cap D, $$ where $S(v),S(u)$ are the signs of the vertices $v,u$ with respect to the marking $\mu_1$. \end{definition} \begin{remark} It may be noted that the sign of a transition remains the same irrespective of the marking of the SPN, while the sign of a place may vary if the marking of the SPN changes. \end{remark} \begin{definition}{\textbf{Dominating Set with respect to a set of markings ($D_M$)}} A set $D_M\subseteq V$ in an SPN $N^*$ is called a Dominating set with respect to a set of markings $M \subseteq R(N^*,\mu_0)$ if $D_M$ is a dominating set with respect to all the markings $\mu \in M$. (Clearly, $|M|\geq 2$.) \end{definition} \begin{definition}{\textbf{Dependent (or connected) Dominating Set}} A dominating set $D_M$ with respect to a set of markings $M$ in an SPN $N^*$ is called a dependent Dominating set if the markings of $M$ are all the nodes of some subtree of the reachability tree of $N^*$. \end{definition} \begin{definition}{\textbf{Minimal Dominating Set}} A dominating set $D$ is called Minimal if no proper subset of it is a dominating set, or if it is a dominating set with the minimum number of vertices. \end{definition} \begin{remark} For application purposes we would like to find a minimal dependent dominating set $D$ over a maximal set of markings $M$, i.e.\ we try to find a maximal set of markings $M$ over which $D$ is a minimal dominating set. \end{remark} \begin{theorem} For an SPN structure $N''$ in which all the transitions are positively signed and each place has incident (input/output) arcs of one kind only, we can find a marking $\mu$ w.r.t.\ which $V\backslash A$ is a dominating set, where $A$ is the set of source vertices. \end{theorem} \begin{proof} If $A=\emptyset$, then $V\backslash A=V$ is a dominating set by definition.\\ If $A\neq \emptyset$, then we have to find a marking $\mu$ such that $V\backslash A$ is a dominating set w.r.t.\ $\mu$. For any $x \in V\backslash(V \backslash A)= A$, we have $x^{\bullet} \cap (V\backslash A) \neq \emptyset$ (since $x$ is a source vertex).
Then, we find a marking $\mu$ such that $\forall y \in x^{\bullet} \cap (V\backslash A)$ \begin{equation}\label{1} \sigma(x,y) =S(x)S(y), \end{equation} where $S(x)$ and $S(y)$ are the signs of the vertices w.r.t.\ $\mu$. Let $y \in x^{\bullet} \cap (V\backslash A)$. Now, two cases arise: \begin{enumerate} \item \textbf{$x$ is a place.}\\ Therefore, $y$ is a transition, so $S(y)= +$ (by hypothesis). \begin{enumerate} \item If $\sigma(x,y)=+$, then for equation \ref{1} to hold we need $S(x)=+$. Therefore, $\mu(x)=(\mu^+(x),\mu^-(x))$ where $\mu^+(x) \in \mathbb{N} \cup \{0\}$ and $\mu^-(x) \in 2\mathbb{N} \cup \{0\}$. \item If $\sigma(x,y)=-$, then for equation \ref{1} to hold we need $S(x)=-$. Therefore, $\mu(x)=(\mu^+(x),\mu^-(x))$ where $\mu^+(x) \in \mathbb{N} \cup \{0\}$ and $\mu^-(x) \in \mathbb{N} \backslash 2\mathbb{N}$. \end{enumerate} \item \textbf{$x$ is a transition.}\\ Therefore, $y$ is a place. Then $S(x)= +$ (by hypothesis). Now, $\sigma(x,y)$ can be positive or negative. \begin{enumerate} \item If $\sigma(x,y)=+$, then for equation \ref{1} to hold we need $S(y)=+$ and hence $\mu(y)=(\mu^+(y),\mu^-(y))$ where $\mu^+(y) \in \mathbb{N} \cup \{0\}$ and $\mu^-(y) \in 2\mathbb{N} \cup \{0\}$. \item If $\sigma(x,y)=-$, then for equation \ref{1} to hold we need $S(y)=-$ and hence $\mu(y)=(\mu^+(y),\mu^-(y))$ where $\mu^+(y) \in \mathbb{N} \cup \{0\}$ and $\mu^-(y) \in \mathbb{N} \backslash 2\mathbb{N}$. \end{enumerate} \end{enumerate} Hence, for all $p_i \in A \cup \{z \in V\backslash A \ | \ z \in A^{\bullet}\}$, $\mu^+(p_i) \in \mathbb{N} \cup \{0\}$ and\\ $\mu^-(p_i) \ \in \ \begin{cases} 2\mathbb{N} \cup \{0\}, & \textnormal{if } p_i \textnormal{ has positive incident arcs}, \\ \mathbb{N} \backslash 2\mathbb{N}, & \textnormal{if } p_i \textnormal{ has negative incident arcs}. \end{cases} $\\ All the remaining places can have any number of positive and negative tokens without any restrictions. \end{proof} \begin{theorem} For an SPN structure $N''$ with no source/sink vertices in which each place has only one type of incident arcs, we can find a marking $\mu$ such that $P$ and $T$ are dominating sets w.r.t.\ $\mu$, provided all the transitions are of the same sign. \end{theorem} \begin{proof} Since all the transitions are of the same sign, two cases arise: \begin{enumerate} \item \textbf{Transitions are positively signed.}\\ We find a marking $\mu$ w.r.t.\ which $P$ and $T$ are dominating sets. \begin{enumerate} \item \textbf{$P$ is a dominating set}.\\ Let $t \in V\backslash P$; then $t^{\bullet} \cap P \neq \emptyset$ (since there are no source/sink vertices). We need to find a marking $\mu$ such that $\forall \ p \in t^{\bullet} \cap P$ \begin{equation}\label{2} \sigma(t,p)=S(t)S(p), \end{equation} where $S(p)$ is the sign of the place $p$ w.r.t.\ the marking $\mu$ and $S(t)$ is the sign of the transition $t$.\\ Let $p \in t^{\bullet} \cap P$. Then, \begin{enumerate} \item $\sigma(t,p)= +$ \\ Since $S(t)=+ \ \forall t \in T$, for equation \ref{2} to hold we need $S(p)=+$. Then $\mu(p)=(\mu^+(p),\mu^-(p))$ where $\mu^+(p) \in \mathbb{N} \cup \{0\}$, $\mu^-(p) \in 2\mathbb{N} \cup \{0\}$. \item $\sigma(t,p)= -$ \\ Since $S(t)=+ \ \forall t \in T$, for equation \ref{2} to hold we need $S(p)=-$. Then $\mu(p)=(\mu^+(p),\mu^-(p))$ where $\mu^+(p) \in \mathbb{N} \cup \{0\}$, $\mu^-(p) \in \mathbb{N} \backslash 2\mathbb{N}$.
\end{enumerate} \item \textbf{$T$ is a dominating set}.\\ Let $p \in V\backslash T$; then $p^{\bullet} \cap T \neq \emptyset$ (since there are no source/sink vertices). We need to find a marking $\mu$ such that $\forall \ t \in p^{\bullet} \cap T$ \begin{equation}\label{3} \sigma(p,t)=S(p)S(t), \end{equation} where $S(p)$ is the sign of the place $p$ w.r.t.\ the marking $\mu$ and $S(t)$ is the sign of the transition $t$.\\ Let $t \in p^{\bullet} \cap T$. Then, \begin{enumerate} \item $\sigma(p,t)= +$\\ Since $S(t)=+$, for equation \ref{3} to hold we need $S(p)=+$. Then $\mu(p)=(\mu^+(p),\mu^-(p))$ where $\mu^+(p) \in \mathbb{N} \cup \{0\}$, $\mu^-(p) \in 2\mathbb{N} \cup \{0\}$. \item $\sigma(p,t)= -$\\ Since $S(t)=+$, for equation \ref{3} to hold we need $S(p)=-$. Then $\mu(p)=(\mu^+(p),\mu^-(p))$ where $\mu^+(p) \in \mathbb{N} \cup \{0\}$, $\mu^-(p) \in \mathbb{N} \backslash 2\mathbb{N}$. \end{enumerate} Hence, for all $p_i$, $\mu^+(p_i) \in \mathbb{N} \cup \{0\}$ and\\ $\mu^-(p_i) \ \in \ \begin{cases} 2\mathbb{N} \cup \{0\}, & \textnormal{if } p_i \textnormal{ has positive incident arcs}, \\ \mathbb{N} \backslash 2\mathbb{N}, & \textnormal{if } p_i \textnormal{ has negative incident arcs}. \end{cases} $\\ \end{enumerate} \item \textbf{Transitions are negatively signed.}\\ By following the same procedure as in the case when the transitions are positively signed, we find that for all $p_i$, $\mu^+(p_i) \in \mathbb{N} \cup \{0\}$ and\\ $\mu^-(p_i) \ \in \ \begin{cases} \mathbb{N} \backslash 2\mathbb{N}, & \textnormal{if } p_i \textnormal{ has positive incident arcs}, \\ 2\mathbb{N} \cup \{0\}, & \textnormal{if } p_i \textnormal{ has negative incident arcs}. \end{cases} $\\ \end{enumerate} Thus, we get a marking $\mu$ w.r.t.\ which $P$ and $T$ are dominating sets. \end{proof} \begin{remark} The above theorem may be used in case of a disaster to check which transitions, representing events in the system, will dominate, and hence may be used for preparedness against any disaster. \end{remark} \begin{remark} Theorems 1 and 2 given above show that: \begin{enumerate} \item We can get a dominating set if we begin with the specified initial marking as mentioned in the proof. \item The sets $P$ and $T$ in a PN represent the conditions and events, respectively, of the system modeled. So, we can check whether conditions or events dominate in the given system w.r.t.\ a given marking. \item If we know the structure of an SPN and a marking $\mu_0$, then we can check whether domination can occur by checking if there exists $\mu \in R(N^*,\mu_0)$ w.r.t.\ which there exists a dominating set. So, in order to avoid or force domination we can restrict (or force) the SPN to avoid (or reach) such a marking. \end{enumerate} \end{remark} \section{Applications of Domination} We discuss applications of domination in SPNs that can be utilized in various areas. \subsection{Producer--Consumer Problem} Consider a standard producer--consumer problem with two producers producing the same product (assuming quality, price and other conditions are the same); we need to check whether one of the producers can dominate the market over the other. This can happen because the availability of the product is greater for one producer as compared to the other, or because one product is better known due to its better marketing, etc. Consider an SPN model for the problem given in figure \ref{fig9}.
\begin{figure}[ht] \centering \includegraphics[scale=0.5]{1.jpg} \caption{\textit{Producer-Consumer Problem}} \label{fig9} \end{figure} Now, in order to check if producer 2 dominates the market, we need to find a set of vertices $D_1$ which is a dependent dominating set over a maximal set of markings $M$. A dependent dominating set is considered because one producer is said to dominate over the other if such a domination exists over a period of time, not for just an instant. Choose $D_1=V \backslash \{p_7,t_7\}$. Find whether there exists a (maximal) set of markings $M$ with a marking $\mu \in M$ such that $\mu(p_6)\neq 0$ and $D_1$ is a dominating set w.r.t.\ $M$. Similarly, to check if producer 1 dominates the market we need to check the domination of the set $D_2$ w.r.t.\ a set of markings, where $D_2 =V \backslash \{p_8,t_7\}$. \subsection{Search of food by bees} Bees (scout bees) go out in search of food. The one which finds the food will return to the hive and celebrate. This scout bee can be considered to dominate the other scout bees. This problem can be modeled using an SPN, and it can then be identified which scout bee will dominate the bee-hive. Consider figure \ref{fig2}, where place $p_1$ represents the bee-hive while $p_2,p_3$ represent possible food locations where scout bees $A$ and $B$ respectively search for food. The transitions $t_2,t_3$ represent events of food search while $t_1,t_4$ represent events of food search completion. The positive tokens are used to represent bees and negative ones to represent food. \par According to the initial marking of the SPN, location $p_2$ has food while location $p_3$ does not. Therefore, bee $A$ must dominate. This can be verified by checking that the set $D_1=\{p_1,p_3,t_1,t_2,t_3,t_4\}$ is a dominating set w.r.t.\ the initial marking $\mu_0$ rather than the set $D_2=\{p_1,p_2,t_1,t_2,t_3,t_4\}$. In the latter case, bee $B$ would dominate the bee-hive. \begin{figure}[ht] \centering \includegraphics[scale=0.32]{2.jpg} \caption{\textit{An SPN model with initial marking $((3,0,0),(0,1,0))$ for food finding by Scout Bees}} \label{fig2} \end{figure} The model can be extended if more than two bees search for food.
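To make such verifications concrete, the following small Python sketch checks the dominating-set condition of Section 3 with respect to a fixed marking. It is an illustration only: the toy net, its arc signs and its marking are invented for demonstration purposes and do not correspond to the SPN of figure \ref{fig2} or to any other figure in this paper.
\begin{verbatim}
# Illustrative sketch: checking the dominating-set condition for a small,
# hypothetical SPN.  The net, its arc signs and its marking are invented
# for demonstration only.

# Arcs of the hypothetical SPN: (source, target) -> sign (+1 or -1).
arcs = {
    ("p1", "t1"): -1, ("t1", "p2"): -1,
    ("p2", "t2"): +1, ("t2", "p1"): +1,
}
places = {"p1", "p2"}
transitions = {"t1", "t2"}
vertices = places | transitions

# Marking: place -> (number of positive tokens, number of negative tokens).
marking = {"p1": (0, 1), "p2": (1, 0)}

def place_sign(p):
    # Sign of a place w.r.t. the marking: product of the token signs,
    # i.e. -1 iff the number of negative tokens is odd (empty place: +1).
    _, neg = marking[p]
    return -1 if neg % 2 else +1

def transition_sign(t):
    # Sign of a transition: product of the signs of all incident arcs.
    sign = 1
    for (u, v), s in arcs.items():
        if u == t or v == t:
            sign *= s
    return sign

def vertex_sign(v):
    return place_sign(v) if v in places else transition_sign(v)

def post_set(v):
    # v's successors (v-bullet): targets of arcs leaving v.
    return {w for (u, w) in arcs if u == v}

def is_dominating(D):
    # D dominates w.r.t. the marking if every vertex v outside D has a
    # successor in D and sigma(v,u) = S(v)*S(u) for each such successor u.
    if vertices <= D:
        return True
    for v in vertices - D:
        succ_in_D = post_set(v) & D
        if not succ_in_D:
            return False
        if any(arcs[(v, u)] != vertex_sign(v) * vertex_sign(u)
               for u in succ_in_D):
            return False
    return True

print(is_dominating(transitions))  # True:  T dominates for this marking
print(is_dominating(places))       # False: P does not dominate here
\end{verbatim}
In the same way, the checks for the sets $D_1$ and $D_2$ discussed above can be carried out once the arcs, signs and markings of the corresponding nets are filled in.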
\subsection{Finding papers with similarity to a given paper using software like Turnitin} Consider a paper which needs to be checked for similarity using such software. This paper is compared with web content that is publicly available, books, papers in journals, articles and other content which is present in the software database. Let the paper to be checked be represented by a place $p_0$ and the rest of the content be represented by places $p_1,p_2,p_3,\ldots, p_k$. To check for similarity, an SPN model is formed by connecting place $p_0$ to all other places via transitions $t_1,t_2,t_3,\ldots, t_k$ and by using negative arcs as in figure \ref{fig3}.\par While comparing paper $p_0$ with another article represented by a place $p_i$ (say), $1\leq i \leq k$, a matching algorithm is used to find a set of strings within the submitted paper $p_0$ that match the papers maintained in the database. If a similarity exists, a negative token is generated in place $p_0$ which can be used to fire the corresponding transition $t_i$. In this way, the submitted paper is checked for similarity with all the content present in the database. After the comparisons are completed, we get a new marking for the SPN in which every article that has some similarity with the submitted paper gets a negative token in the place representing it. All such places form the list of articles that are similar to the submitted paper. \par Now, in order to find the list of all articles which have some similarity with the submitted paper, the concept of domination can be used instead of finding all the places having a negative token. Begin with the set $D_1=P=\{p_0,p_1,p_2,\ldots, p_k\}$. Check whether the set $D_1$ is a dominating set w.r.t.\ the final marking (say $\mu'$) obtained after the matching algorithm is complete. If yes, then all the papers $p_i$, $1\leq i\leq k$, have some similarity with the submitted paper. If not, we find $D_2 \subseteq T$ such that $D'=D_1 \cup D_2$ is a dominating set w.r.t.\ $\mu'$. Then the set $\{p_i\mid t_i \in T\backslash D_2\}$ forms the set of all the articles which have similarity with the submitted paper. \begin{figure}[ht] \centering \includegraphics[scale=0.3]{3.jpg} \caption{\textit{Finding similarity in Papers}} \label{fig3} \end{figure} \bibliographystyle{abbrvnat}
\section{Introduction} A Riemannian manifold $(M,g)$ is called a Ricci soliton if there exist a vector field $X\in\mathfrak{X}(M)$ and a constant $c\in\mathbb{R}$ such that the Ricci tensor satisfies the equation \begin{align}\label{solitonequation} \mathrm{Ric}_g+\frac{1}{2}L_Xg=c\cdot g. \end{align} Ricci solitons were first introduced by Hamilton in the eighties \cite{Ham88}. They appear as self-similar solutions of the Ricci flow: Ricci solitons evolve in a particularly simple way under the Ricci flow, namely by diffeomorphisms and rescalings. A soliton is called gradient, if $X=\mathrm{grad} f$ for some $f\in C^{\infty}(M)$. We call $(M,g)$ expanding, if $c<0$, steady, if $c=0$ and shrinking, if $c>0$. If $X=0$, we recover the definition of an Einstein metric with Einstein constant $c$. If $X\neq0$, we call the soliton nontrivial. In the compact case, any soliton is gradient and any expanding or steady soliton is Einstein. For a detailed exposition of the theory of Ricci solitons, see e.g.\ \cite{Cao10,CC07}. In this paper, we study the moduli space of Ricci solitons (the set of Ricci solitons modulo diffeomorphism and rescaling) on compact manifolds. A local description of this moduli space can be given using a modified version of Ebin's slice theorem provided by Podest{\`a} and Spiro \cite{PS13}. They construct a slice for the natural action of the diffeomorphism group on the space of metrics with tangent space $\mathrm{ker}(\delta_f)$. Here, $\delta_f$ is the divergence weighted with the smooth function $f$. Let $(M,g)$ be a Ricci soliton. A tensor $h\in C^{\infty}(S^2M)$ is called an infinitesimal solitonic deformation, if $\delta_f(h)=0$ (where $f$ is the soliton potential), $\int_M \mathrm{tr} h\text{ }e^{-f}dV=0$ and $h$ lies in the kernel of the linearization of \eqref{solitonequation}. If $g$ is not an isolated point in the moduli space, it admits infinitesimal solitonic deformations. Conversely, given an infinitesimal solitonic deformation $h$, it is not clear whether it is integrable, i.e.\ if there exists a curve of Ricci solitons tangent to $h$. In this paper, we show that for certain deformations, this is not the case. Moreover, we prove that $\mathbb{C}P^{2n}$ with the Fubini-Study metric is an isolated point in the moduli space of Ricci solitons although it has solitonic deformations. Some obstructions against the existence of infinitesimal solitonic deformations are given in \cite{PS13}. Analogous questions have been studied in the Einstein case before \cite{Koi78,Koi80,Koi82,Koi83}, see also \cite{Bes08}, and the methods used here are quite similar. This paper is organized as follows: In Section \ref{shrinkerentropy}, we introduce Perelman's shrinker entropy, which provides a variational characterization of shrinking Ricci solitons as its critical points. In Section \ref{modulispace}, we introduce various notions of rigidity. We mention a theorem of \cite{PS13} describing the structure of the moduli space of Ricci solitons and generalizing previous results obtained in the Einstein case \cite{Koi83}. We prove a criterion for weak solitonic rigidity of Einstein manifolds: \begin{prop}\label{weakrigidity} Let $(M,g)$ be an Einstein manifold with Einstein constant $\mu>0$. If $2\mu\notin\mathrm{spec}(\Delta)$, any Ricci soliton $H^s$-close (with $s\geq[\frac{n}{2}]+3$) to $g$ is Einstein. \end{prop} In Section \ref{nonintegrability}, we discuss integrability of infinitesimal solitonic deformations.
If $(M,g)$ is Einstein with constant $\mu$ and $2\mu$ is a positive eigenvalue of the Laplacian, we have infinitesimal solitonic deformations which can be formed from the corresponding eigenfunctions. For these deformations, we prove an obstruction against integrability: \begin{thm}\label{rigiditytheorem} Let $(M,g)$ be an Einstein manifold with Einstein constant $\mu>0$. Let $v\in C^{\infty}(M)$ be such that $\Delta v=2\mu v$. Then the infinitesimal solitonic deformation $\mu v\cdot g+\nabla^2 v$ is not integrable if there exists another function $w\in C^{\infty}(M)$ with $\Delta w=2\mu w$ such that \begin{align*} \int_M v^2 w \text{ }dV\neq 0. \end{align*} \end{thm} \noindent Concrete examples are discussed in Section \ref{examples}. We use the above theorem to prove \begin{thm}\label{CP2n} $\mathbb{C}P^{2n}$ with the Fubini-Study metric is isolated in the moduli space of Ricci solitons although it has infinitesimal solitonic deformations. \end{thm} \section{Notation and conventions} Throughout, any manifold $M$ will be compact and any metric considered on $M$ will be smooth. The dimension of $M$ will be $n$. For the Riemann curvature tensor, we use the sign convention such that $R_{X,Y}Z=\nabla^2_{X,Y}Z-\nabla^2_{Y,X}Z$. Given a fixed metric, we equip the bundle of $(r,s)$-tensor fields (and any subbundle) with the natural pointwise scalar product induced by the metric. By $S^pM$, we denote the bundle of symmetric $(0,p)$-tensors. Let $f\in C^{\infty}(M)$. We introduce some differential operators weighted by $f$. The $f$-weighted Laplacian (or Bakry-Emery Laplacian) acting on $C^{\infty}(S^pM)$ is \begin{align*} \Delta_f h=-\sum_{i=1}^n \nabla^2_{e_i,e_i}h+\nabla^2_{\mathrm{grad} f}h. \end{align*} By the sign convention, $\Delta_f=(\nabla^*_f)\nabla$, where $\nabla^*_f$ is the adjoint of $\nabla$ with respect to the weighted $L^2$-scalar product $\int_M\langle.,.\rangle \text{ }e^{-f}dV$. The weighted divergence $\delta_f:C^{\infty}(S^pM)\to C^{\infty}(S^{p-1}M)$ and its formal adjoint $\delta_f^*\colon C^{\infty}(S^{p-1}M)\to C^{\infty}(S^pM)$ with respect to the weighted scalar product are given by \begin{align*}\delta_f T(X_1,\ldots,X_{p-1})=&-\sum_{i=1}^n\nabla_{e_i}T(e_i,X_1,\ldots,X_{p-1})+T(\mathrm{grad} f,X_1,\ldots,X_{p-1}),\\ \delta_f^*T(X_1,\ldots,X_p)=&\frac{1}{p}\sum_{i=0}^{p-1}\nabla_{X_{1+i}}T(X_{2+i},\ldots,X_{p+i}), \end{align*} where the sums $1+i,\ldots,p+i$ are taken modulo $p$. If $f$ is constant, we recover the usual notions of Laplacian and divergence. In this case, we will drop the $f$ in the notation. For $\omega\in\Omega^1(M)$, we have $\delta_f^*\omega=\frac{1}{2}L_{\omega^{\sharp}}g$ where $\omega^{\sharp}$ is the vector field dual to $\omega$ and $L$ is the Lie derivative. Thus, $\delta_f^*(\Omega^1(M))$ is the tangent space of the manifold $g\cdot \mathrm{Diff}(M)=\left\{\varphi^*g|\varphi\in\mathrm{Diff}(M)\right\}$. \section{The shrinker entropy}\label{shrinkerentropy} Let $g$ be a Riemannian metric, $f\in C^{\infty}(M)$, $\tau>0$ and define \begin{align*}\mathcal{W}(g,f,\tau)=\frac{1}{(4\pi\tau)^{n/2}}\int_M [\tau(|\nabla f|^2_g+\mathrm{scal}_g)+f-n]\text{ }e^{-f}dV, \end{align*} where $\mathrm{scal}_g$ is the scalar curvature of $g$. Let\index{$\mu(g,\tau)$} \begin{align*}\mu(g,\tau)&=\inf \left\{\mathcal{W}(g,f,\tau)\ \Big| \ f\in C^{\infty}(M),\ \frac{1}{(4\pi\tau)^{n/2}}\int_M \text{ }e^{-f}dV_g=1\right\}.
\end{align*} For any fixed $\tau>0$, the infimum is finite and is realized by a smooth function \cite[Lemma 6.23 and 6.24]{CC07}. We define the shrinker entropy\index{shrinker entropy} as \begin{align*}\nu(g)&=\inf \left\{\mu(g,\tau)\mid\tau>0\right\}. \end{align*} This functional was first introduced in the pioneering work of Perelman \cite{Per02}. If the smallest eigenvalue of the operator $4\Delta_g+\mathrm{scal}_g$ is positive, $\nu(g)$ is finite and realized by some $\tau_g>0$ \cite[Corollary 6.34]{CC07}. By construction, $\nu$ is scale and diffeomorphism invariant. Its first variation is \begin{align*}\nu(g)'(h)=-\frac{1}{(4\pi\tau_g)^{n/2}}\int_M\left\langle \tau_g(\mathrm{Ric}+\nabla^2 f_g)-\frac{1}{2}g,h\right\rangle\text{ }e^{-f_g}dV_g, \end{align*} where $(f_g,\tau_g)\in C^{\infty}(M)\times \mathbb{R}_+$ is a pair realizing $\nu(g)$ (see e.g.\ \cite[Lemma 2.2]{CZ12}). The critical points of $\nu$ are precisely the shrinking Ricci solitons. By the above, these are the metrics for which we have the equation \begin{align}\label{Riccisoliton} \mathrm{Ric}+\nabla^2 f_g=\frac{1}{2\tau_g}g. \end{align} For any Ricci soliton $g$, there exists a $C^{2,\alpha}$-neighbourhood $\mathcal{U}$ in the space of metrics on which $\nu$ depends analytically on the metric. Moreover, the minimizers $f_g,\tau_g$ are unique on $\mathcal{U}$ and depend analytically on the metric \cite[Lemma 4.1]{Kro14}. In the particular case of an Einstein metric, $f_g$ is constant and $\tau_g=\frac{1}{2\mu}$ where $\mu$ is the Einstein constant. \begin{prop}[{\cite{CHI04,CZ12}}]\label{secondnu}Let $(M,g)$ be a shrinking Ricci soliton. Then the second variation of $\nu$ at $g$ is given by \begin{align} \nu''_{g}(h)=\frac{\tau}{(4\pi\tau)^{n/2}}\int_M \langle N h,h\rangle \text{ }e^{-f}dV, \end{align} where $(f,\tau)$ is the minimizing pair realizing $\nu$. The stability operator $N$ is given by \begin{align} Nh=-\frac{1}{2}\Delta_f h+\mathring{R}h+\delta^*_f(\delta_f(h))+\frac{1}{2}\nabla^2 v_h-\mathrm{Ric} \frac{\int_M \langle \mathrm{Ric},h\rangle \text{ }e^{-f}dV}{\int_M \mathrm{scal} \text{ }e^{-f}dV}. \end{align} Here, $\mathring{R}h(X,Y)=\sum_{i=1}^nh(R_{e_i,X}Y,e_i)$ and $v_h$ is the unique solution of \begin{align}\label{v_h} (-\Delta_f+\frac{1}{2\tau})v_h=\delta_f(\delta_f(h)). \end{align} \end{prop} \begin{rem} The operator $N:C^{\infty}(S^2M)\to C^{\infty}(S^2M)$ is formally self-adjoint with respect to $L^2(e^{-f}dV)$. Since $\nu$ is scale and diffeomorphism invariant, $N$ vanishes on \begin{align*} W:=\mathbb{R}\cdot \mathrm{Ric}\oplus \delta_f^*(\Omega^1(M)). \end{align*} \end{rem} \section{The moduli space of Ricci solitons}\label{modulispace} Let $\mathcal{M}$ be the set of smooth metrics on $M$ and assume that $s\in\mathbb{N}$ satisfies $s\geq [\frac{n}{2}]+3$. According to \cite{PS13}, we introduce the following notions: \begin{defn} A shrinking Ricci soliton $g$ is called rigid, if there exists a $H^s$-neighbourhood $\mathcal{U}\subset\mathcal{M}$ such that any shrinking Ricci soliton $\tilde{g}\in\mathcal{U}$ is homothetic to $g$, i.e.\ there exist $\lambda>0$ and $\varphi\in\mathrm{Diff}(M)$ such that $g=\lambda\varphi^*\tilde{g}$. Analoguously, an Einstein metric $g$ is called rigid, if there exists a $H^s$-neighbourhood $\mathcal{U}\subset\mathcal{M}$ such that any Einstein metric in $\mathcal{U}$ is homothetic to $g$. An Einstein metric is called weakly solitonic rigid, if there exists a $H^s$-neighbourhood $\mathcal{U}\subset\mathcal{M}$ such that any Ricci soliton in $\mathcal{U}$ is Einstein. 
\end{defn} \noindent Let $f\in C^{\infty}(M)$ and $g\in\mathcal{M}$. In \cite{PS13}, Podest{\`a} and Spiro construct a set $\mathcal{S}^s_{f}\subset\mathcal{M}$ satisfying the following properties: \begin{itemize} \item There exists a small $H^s$-neighbourhood $\mathcal{U}$ of $g$ in the set of metrics such that any $\tilde{g}\in\mathcal{U}$ is isometric to a unique metric $\hat{g}\in\mathcal{S}^s_{f}$. \item $\mathcal{S}^s_{f}$ is a smooth manifold with tangent space $T_{g}\mathcal{S}^s_{f}=\mathrm{ker}(\delta_{g,f})$. \end{itemize} Such a set is called an $f$-twisted slice. If $f\equiv0$, we recover the slice constructed in Ebin's slice theorem \cite[Theorem 7.1]{Eb70}. Now the question whether all Ricci solitons in a neighbourhood of a given Ricci soliton $g$ are homothetic to $g$ reduces to the question whether $g$ is an isolated point in \begin{align*} \mathcal{S}ol^s_{g}=\left\{\tilde{g}\in\mathcal{S}^s_{f_g}\text{ }\bigg\vert \text{ }\mathrm{Ric}_{\tilde{g}}+\nabla^2f_{\tilde{g}}=\frac{1}{2\tau_g}\cdot \tilde{g}\right\}. \end{align*} Note that we fixed the constant $\tau_{g}$ in order to avoid rescalings of metrics. Let $g_t$ be a $C^1$-curve in $\mathcal{S}ol^s_g$. Then we have \begin{align*} \frac{d}{dt}\bigg\vert_{t=0}g_t=h\in V:=\left\{h\in C^{\infty}(S^2M)\text{ }\bigg\vert \text{ }\delta_f(h)=0 \text{ and }\int_M \langle\mathrm{Ric},h\rangle \text{ }e^{-f}dV=0\right\} \end{align*} and $\nu''(h)=0$ because $\nu$ is constant along $g_t$. The space $V$ is the $L^2(e^{-f}dV)$-orthogonal complement of the space $W$ defined above. This motivates the following definition: \begin{defn}\label{DefnISD} Let $(M,g)$ be a gradient shrinking Ricci soliton and let $N$ be the stability operator of Proposition \ref{secondnu}. We call $h\in C^{\infty}(S^2M)$ an infinitesimal solitonic deformation if $h\in V$ and $N(h)=0$. An infinitesimal solitonic deformation is called integrable if there exists a $C^1$-curve of Ricci solitons $g_t$ through $g=g_0$ such that $\frac{d}{dt}|_{t=0}g_t=h$. \end{defn} In general, one cannot expect that $ \mathcal{S}ol^s_g$ is a manifold, but the following holds \cite[Theorem 3.4]{PS13}: There exists an analytic finite-dimensional submanifold $\mathcal{Z}^s\subset\mathcal{S}^s_{f_g}$ such that $T_g\mathcal{Z}^s=\mathrm{ker}{N|_V}$ and $ \mathcal{S}ol^s_g$ is an analytic subset of $\mathcal{Z}^s$. If all infinitesimal solitonic deformations are integrable, we have $ \mathcal{S}ol^s_g=\mathcal{Z}^s$ (possibly after passing to smaller neighbourhoods). \begin{defn}[{\cite[p.\ 347]{Bes08}}]Let $(M,g)$ be an Einstein manifold. We call $h\in C^{\infty}(S^2M)$ an infinitesimal Einstein deformation if $\delta h=0$, $\mathrm{tr} h=0$ (i.e.\ $h$ is a transverse traceless tensor) and $\nabla^*\nabla h-2\mathring{R}h=0$. An infinitesimal Einstein deformation is called integrable if there exists a $C^1$-curve of Einstein metrics $g_t$ through $g=g_0$ such that $\frac{d}{dt}|_{t=0}g_t=h$. \end{defn} The set of Einstein metrics close to $g$ has properties similar to those of the set of Ricci solitons \cite[Theorem 3.1]{Koi83}. Suppose that $g$ is an Einstein manifold with constant $\mu$. Let $\mathcal{S}^s$ be the slice in the space of metrics as constructed in Ebin's slice theorem.
Then, there exists an analytic finite-dimensional manifold $\mathcal{W}^s\subset \mathcal{S}^s$ such that $T_g\mathcal{W}^s=\mathrm{ker}((\nabla^*\nabla-2\mathring{R})|_{TT})$ and \begin{align*} \mathcal{E}_g^s=\left\{\tilde{g}\in\mathcal{S}^s|\mathrm{Ric}_{\tilde{g}}=\mu\tilde{g}\right\} \end{align*} is an analytic subset of $\mathcal{W}^s$. Let $IED$ be the space of infinitesimal Einstein deformations and $ISD$ be the space of infinitesimal solitonic deformations. Then we have \begin{align*} ISD=IED\oplus \left\{\mu v\cdot g+\nabla^2 v|v\in C^{\infty}(M),\Delta v=2\mu v\right\}, \end{align*} see \cite[Lemma 6.2]{Kro14}. Now we prove the statement about weak solitonic rigidity stated in the introduction: \begin{proof}[Proof of Proposition \ref{weakrigidity}] Suppose that $g$ is not weakly solitonic rigid. Then we have a sequence of nontrivial Ricci solitons $g_i\to g$ in $H^s$. For nontrivial Ricci solitons, it is well-known that $\frac{1}{\tau_{g_i}}\in\mathrm{spec}(\Delta_{f_{g_i}})$ and an eigenfunction is given by $f_{g_i}-\frac{n}{2}-\nu(g_i)$. This follows from the Euler-Lagrange equation for $f_{g_i}$ \cite[p.\ 5]{CZ12}. For the $k$th eigenvalue of $\Delta_{f_g}$, we have the minimax characterization \begin{align*} \lambda_k(\Delta_{f_g})=\min_{\substack{U\subset C^{\infty}(M)\\ \mathrm{dim}{U}=k}}\max_{\substack{u\in U\\u\neq0}}\frac{\int_M |\nabla u|^2 \text{ }e^{-f_g}dV_g}{\int_M u^2 \text{ }e^{-f_g}dV_g} \end{align*} which shows that the spectrum of $\Delta_{f_g}$ depends continuously on the metric with respect to the $C^{2,\alpha}$-topology. By Sobolev embedding, $H^s$-continuity follows. This implies that $\frac{1}{\tau_g}=2\mu\in\mathrm{spec}(\Delta)$ and the proof is finished by contradiction. \end{proof} \begin{rem} This proposition generalizes \cite[Proposition 5.3]{PS13}, where only the case of K\"ahler-Einstein manifolds is considered. \end{rem} \section{Non-integrability of solitonic deformations}\label{nonintegrability} In this section we use similar arguments as in \cite{Koi82}, where the integrability of infinitesimal Einstein deformations was discussed. Consider the tensor \begin{align*} \mathrm{Sol}_g=\tau_g(\mathrm{Ric}_g+\nabla^2 f_g)-\frac{g}{2}. \end{align*} Obviously, the zero set of the map $ g\mapsto \mathrm{Sol}_g$ equals the set of shrinking Ricci solitons. This map is well-defined and analytic in an open $C^{2,\alpha}$-neighbourhood of a given Ricci soliton. Suppose now, we have a smooth curve of Ricci solitons $g_t$. From differentiating the equation $\mathrm{Sol}=0$ along $g_t$ twice, we have \begin{align*} D\mathrm{Sol}(g^{(1)})=0,\qquad D\mathrm{Sol}(g^{(2)})+D^2\mathrm{Sol}(g^{(1)},g^{(1)})=0. \end{align*} Here, $D^k\mathrm{Sol}$ denotes the k'th Fr\'{e}chet derivative of the map $g\mapsto\mathrm{Sol}_g$ and $g^{(k)}$ denotes the k'th derivative of the curve $g_t$. More generally, from differentiating $k$ times, we obtain \begin{align}\label{recursion0} D\mathrm{Sol}(g^{(k)})+\sum_{l=2}^{k}\sum_{\substack{ 1\leq k_1\leq\ldots\leq k_l\\ k_1+\ldots+k_l=k}} C(k,l,k_1,\ldots,k_l) D^l\mathrm{Sol}(g^{(k_1)},\ldots,g^{(k_l)})=0 \end{align} where $C(k,l,k_1,\ldots,k_l)\in\mathbb{N}$ are constants only depending on $k,l,k_1,\ldots,k_l$. Let now $g$ be a Ricci soliton and $h$ an infinitesimal solitonic deformation. Suppose that $h$ is integrable. By projecting to the slice and rescaling, we may assume that there is a curve $g_t$ in the set $\mathcal{S}ol^s_g$ such that $g_0^{(1)}=h$. 
By analyticity of the set $\mathcal{S}ol^s_g$ we may also assume $g_t$ to be analytic. Then the higher derivatives of the curve necessarily satisfy \eqref{recursion0}. \begin{defn} Let $(M,g)$ be a shrinking Ricci soliton and $h=:g^{(1)}\in ISD$. We call $h$ integrable up to order $k$, if there exists a sequence of tensors $g^{(2)},\ldots,g^{(k)}\in C^{\infty}(S^2M)$ satisfying the recursion formulas $\eqref{recursion0}$. If $h$ is integrable up to order $k$ for any $k\in\mathbb{N}$, $h$ is integrable of infinite order. \end{defn} \begin{lem} A tensor $h\in ISD$ is integrable in the sense of Definition \ref{DefnISD} if and only if it is integrable of infinite order. \end{lem} \begin{proof} If $h$ is integrable, it is obviously integrable of infinite order. Conversely, suppose we have $\tilde{g}^{(k)}\in C^{\infty}(S^2M)$, $k\in\mathbb{N}$, solving $\eqref{recursion0}$ for any $k$. Then for any $k\in\mathbb{N}$, we can construct a curve $\tilde{g}_t^k$ of metrics such that $\frac{d^l}{dt^l}|_{t=0}\tilde{g}^k_t=\tilde{g}^{(l)}$ and $\frac{d^l}{dt^l}\mathrm{Sol}_{\tilde{g}_t}=0$ for $l\in\left\{1,\ldots,k\right\}$. By projecting these curves to the subset $\mathcal{Z}^s$ in the slice $\mathcal{S}_f^s$ suitably, we have curves $g_t^k$ in $\mathcal{Z}^s$ and a sequence $g^{(k)}\in C^{\infty}(S^2M)$ such that for each $k\in\mathbb{N}$, we have $\frac{d^l}{dt^l}|_{t=0}g^k_t=g^{(l)}$ and $\frac{d^l}{dt^l}\mathrm{Sol}_{g_t}=0$ for $l\in\left\{1,\ldots,k\right\}$ and $g^{(k)}$ satisfies \eqref{recursion0}. Therefore, since $\mathcal{S}ol^s_g\subset \mathcal{Z}^s$ is an analytic subset, we can apply \cite[Theorem (1.2)]{Art68}: The existence of the formal solution $g+\sum_{k=1}^{\infty}\frac{t^k}{k!}g^{(k)}$ of the equation $\mathrm{Sol}=0$ implies the existence of a real solution $g_t$ of $\mathrm{Sol}=0$ in $\mathcal{Z}^s$ such that $\frac{d}{dt}|_{t=0}g_t=g^{(1)}=h$. Thus, $h$ is integrable. \end{proof} \begin{lem}\label{rigidity} Let $(M,g)$ be a shrinking Ricci soliton. If all $h\in ISD$ are integrable only up to some finite order, then $g$ is rigid. \end{lem} \begin{proof} If $g$ were not isolated, the analyticity of $\mathcal{S}ol^s_g$ implies the existence of a smooth curve $g_t\subset\mathcal{S}ol^s_g$ through $g$ and $h=\frac{d}{dt}|_{t=0}g_t\in ISD$ is integrable of order infinity. \end{proof} By the second variation of $\nu$, $D\mathrm{Sol}=-\tau N$ where $N$ is the stability operator. Thus the operator $D\mathrm{Sol}:C^{\infty}(S^2M)\to C^{\infty}(S^2M)$ is formally self-adjoint with respect to the $L^2(e^{-f_g}dV)$-scalar product. Provided we already found $g^{(1)},\ldots, g^{(k-1)}$ recursively via \eqref{recursion0}, the equation \eqref{recursion0} for $g^{(k)}$ has a solution if and only if \begin{align*} \sum_{l=2}^{k}\sum_{\substack{ 1\leq k_1\leq\ldots\leq k_l\\ k_1+\ldots+k_l=k}} C(k,l,k_1,\ldots,k_l) D^l\mathrm{Sol}(g^{(k_1)},\ldots,g^{(k_l)})\in \mathrm{ker}(D\mathrm{Sol})^{\perp}. \end{align*} In the above line and in the next lemma, the orthogonal complement is taken with respect to $L^2(e^{-f_g}dV)$. \begin{lem} Let $(M,g)$ be a Ricci soliton and $h\in ISD$. Then $h$ is integrable up to second order if and only if $D^2\mathrm{Sol}(h,h)\perp ISD$. \end{lem} \begin{proof} We claim that $D^2\mathrm{Sol}(h,h)\perp \mathbb{R} \cdot g\oplus \delta_f^*(\Omega^1(M))$.
Let $\tilde{h}\in \mathbb{R} \cdot g\oplus \delta_f^*(\Omega^1(M))$ and let $a_t\varphi_t^*g$ be a curve of homothetic metrics such that $a_0=1,\varphi_0=\mathrm{id}_M$ and $\frac{d}{dt}|_{t=0}a_t\varphi_t^*g=\tilde{h}$. Consider the two-parameter-family of metrics given by $g(s,t)=a_t\varphi_t^*(g+sh)$. By the first variation of $\nu$, \begin{align*} \frac{d}{dt}\nu(g(s,t))=-\frac{1}{(4\pi\tau)^{n/2}}\int_M\langle \mathrm{Sol},\frac{d}{dt}g\rangle\text{ }e^{-f}dV. \end{align*} By scale and diffeomorphism invariance, $\nu(g(s,t))$ only depends on $s$. Thus, by differentiating the above twice and using $D\mathrm{Sol}(h)=0$, we have \begin{align*} 0=\frac{d^2}{ds^2}\frac{d}{dt}\bigg\vert_{t,s=0}\nu(g(s,t))=-\frac{1}{(4\pi\tau)^{n/2}}\int_M\langle D^2\mathrm{Sol}(h,h),\tilde{h}\rangle \text{ }e^{-f}dV, \end{align*} which proves the claim. We argued above that $h$ is integrable up to second order if and only if $D^2\mathrm{Sol}(h,h)$ is orthogonal to $\mathrm{ker}(D\mathrm{Sol})$ but we have the orthogonal decomposition \begin{align*} \mathrm{ker}(D\mathrm{Sol})=ISD\oplus \mathbb{R} \cdot g\oplus\delta_f^*(\Omega^1(M)) \end{align*} and so the statement of the lemma follows from the claim. \end{proof} Now we consider an Einstein manifold $g$ with constant $\mu>0$ and suppose that $2\mu\in\mathrm{spec}(\Delta)$. For the rest of the section, we focus on infinitesimal solitonic deformations contained in \begin{align*} \left\{\mu v\cdot g+\nabla^2 v|v\in C^{\infty}(M),\Delta v=2\mu v\right\}. \end{align*} By diffeomorphism invariance, we only have to deal with the conformal part of these deformations which makes the calculations easier. \begin{lem} Let $(M,g)$ be an Einstein manifold $g$ with constant $\mu>0$ and let $v\in C^{\infty}(M)$ so that $\Delta v=2\mu v$. Then, \begin{align*} D^2\mathrm{Ric}(v\cdot g,v\cdot g)=&-(\frac{n}{2}-2)|\nabla v|^2g-2\mu v^2 g+3(\frac{n}{2}-1)\nabla v\otimes\nabla v+(n-2)\nabla^2v\cdot v,\\ D^2(\nabla^2 f_g)(v\cdot g,v\cdot g)=&\nabla^2 u-(n-2)\nabla v\otimes\nabla v+(\frac{n}{2}-1)|\nabla v|^2g, \end{align*} where $u$ is a solution of the equation \begin{align*} (\frac{1}{\mu}\Delta-1)u=D^2\tau(v\cdot g,v\cdot g)-\frac{1}{\mu}[n\mu v^2+(-\frac{3}{4}n+\frac{1}{2})|\nabla v|^2]. \end{align*} \end{lem} \begin{proof} The calculations were already done in \cite[pp.\ 22-24]{Kro13} while computing a third variation of the shrinker entropy. The second Fr\'{e}chet derivative of the Ricci tensor was computed in \cite[p.\ 23]{Kro13}. Moreover, we have \begin{align*} (\nabla^2 f)''=\nabla^2(f'')-\nabla f'\otimes\nabla v-\nabla v\otimes\nabla f'+\langle \nabla f',\nabla v\rangle g, \end{align*} where the primes denote Fr\'{e}chet derivatives in the direction of $v\cdot g$. The function $u:=f''$ satisfies the equation in the statement of the lemma and $f'=(\frac{n}{2}-1)v$. \end{proof} \begin{lem}\label{D^2sol} Let $(M,g)$ be an Einstein manifold $g$ with constant $\mu>0$ and let $v\in C^{\infty}(M)$ so that $\Delta v=2\mu v$. Then, \begin{align*} D^2\mathrm{Sol}(v\cdot g,v\cdot g)=&\frac{1}{2\mu}(\nabla^2 u-2\mu v^2 g+|\nabla v|^2g+(\frac{n}{2}-1)\nabla v\otimes\nabla v+(n-2)\nabla^2v\cdot v)\\ &+D^2\tau(v\cdot g,v\cdot g)\mu g, \end{align*} where $u$ is a solution of the equation \begin{align*} (\frac{1}{\mu}\Delta-1)u=D^2\tau(v\cdot g,v\cdot g)-\frac{1}{\mu}[n\mu v^2+(-\frac{3}{4}n+\frac{1}{2})|\nabla v|^2]. \end{align*} \end{lem} \begin{proof} We have $\tau_g=\frac{1}{2\mu}$ and by \cite[Lemma 2.4]{CZ12}, $D\tau(v\cdot g)=0$ because $\int_M v\text{ }dV=0$. 
This yields \begin{align*} D^2\mathrm{Sol}(v\cdot g,v\cdot g)=D^2\tau(v\cdot g,v\cdot g)\mu g+\frac{1}{2\mu}(D^2\mathrm{Ric}(v\cdot g,v\cdot g)+D^2(\nabla^2 f_g)(v\cdot g,v\cdot g)). \end{align*} Now the result follows from the previous lemma. \end{proof} \begin{thm}\label{secondordercriterion} Let $(M,g)$ be an Einstein manifold with Einstein constant $\mu>0$. Let $v\in C^{\infty}(M)$ be such that $\Delta v=2\mu v$. Then $h=\mu v\cdot g+\nabla^2 v\in ISD$ is not integrable of second order if there exists another function $w\in C^{\infty}(M)$ with $\Delta w=2\mu w$ such that \begin{align*} \int_M v^2 w \text{ }dV\neq 0. \end{align*} \end{thm} \begin{proof} Suppose the contrary. Then, there exists a smooth curve $g_t$ of metrics such that $\frac{d}{dt}|_{t=0}g_t=h$ and $\frac{d}{dt}|_{t=0}\mathrm{Sol}_{g_t}=\frac{d^2}{dt^2}|_{t=0}\mathrm{Sol}_{g_t}=0$. By pulling back by a suitable family of diffeomorphisms, we obtain a smooth curve $\tilde{g}_t$ of Ricci solitons such that $\frac{d}{dt}|_{t=0}\tilde{g}_t=v\cdot g$ and $\frac{d}{dt}|_{t=0}\mathrm{Sol}_{\tilde{g}_t}=\frac{d^2}{dt^2}|_{t=0}\mathrm{Sol}_{\tilde{g}_t}=0$. By the arguments at the beginning of this section, this implies that \begin{align}\label{perp} D^2\mathrm{Sol}(v\cdot g,v\cdot g)\in\mathrm{ker}(D\mathrm{Sol})^{\perp}. \end{align} On the other hand, pick $w\in C^{\infty}(M)$ as in the statement of the theorem. Then, $D\mathrm{Sol}(w\cdot g)=0$ by the calculations in \cite[p.\ 22]{Kro13}. Furthermore, straightforward calculations, using the eigenvalue equation $\Delta w=2\mu w$ and Lemma \ref{D^2sol}, show that \begin{align*} \int_M \langle D^2\mathrm{Sol}(v\cdot g,&v\cdot g),w\cdot g\rangle\text{ }dV=\frac{1}{2\mu}\int_M \langle\nabla^2 u-2\mu v^2 g+|\nabla v|^2g,w\cdot g\rangle\text{ }dV\\ &+\frac{1}{2\mu}\int_M\langle(\frac{n}{2}-1)\nabla v\otimes\nabla v+(n-2)\nabla^2v\cdot v,w\cdot g\rangle\text{ }dV\\ =&-\int_M u\cdot w\text{ }dV-2(n-1)\int_M v^2w\text{ }dV+\frac{1}{\mu}(\frac{3n}{4}-\frac{1}{2})\int_M |\nabla v|^2w\text{ }dV\\ =&\frac{1}{\mu}\int_M \left(\frac{1}{\mu}\Delta-1\right)^{-1}\left[D^2\tau(v\cdot g,v\cdot g)\mu+n\mu v^2+\left(-\frac{3}{4}n+\frac{1}{2}\right)|\nabla v|^2\right]\cdot w\text{ }dV\\ &-2(n-1)\int_M v^2w\text{ }dV+\frac{1}{\mu}(\frac{3n}{4}-\frac{1}{2})\int_M |\nabla v|^2w\text{ }dV\\ =&\frac{1}{\mu}\int_M \left[D^2\tau(v\cdot g,v\cdot g)\mu+n\mu v^2+\left(-\frac{3}{4}n+\frac{1}{2}\right)|\nabla v|^2\right]\cdot \left(\frac{1}{\mu}\Delta-1\right)^{-1}w\text{ }dV\\ &-2(n-1)\int_M v^2w\text{ }dV+\frac{1}{\mu}(\frac{3n}{4}-\frac{1}{2})\int_M |\nabla v|^2w\text{ }dV\\ =&-(n-2)\int_M v^2w\text{ }dV\neq0 \end{align*} which contradicts \eqref{perp}. The terms containing $D^2\tau(v\cdot g,v\cdot g)$ vanish after integration because $w$ has vanishing integral. \end{proof} \begin{rem} Theorem \ref{secondordercriterion} and Lemma \ref{rigidity} imply Theorem \ref{rigiditytheorem}. \end{rem} \section{Examples}\label{examples} As an application of the above criterion, we prove the following \begin{thm} All infinitesimal solitonic deformations of $(\mathbb{C}P^{2n},g_{fs})$ are not integrable of second order. \end{thm} \begin{proof} Let $\mu$ be the Einstein constant. Recall that $(\mathbb{C}P^{m},g_{fs})$ has no infinitesimal Einstein deformations (see e.g.\ \cite[Table 2]{CH13}) but $2\mu\in\mathrm{spec}(\Delta)$ and thus, the space of infinitesimal solitonic deformations is given by \begin{align*} \left\{\mu v\cdot g+\nabla^2 v|v\in C^{\infty}(M),\Delta v=2\mu v\right\}.
\end{align*} We will show that for any such $v$, there exists a function $w$ in the same eigenspace such that $\int_M v^2 w\text{ }dV\neq0$. Then Theorem \ref{secondordercriterion} yields the result. Let us briefly recall the construction of eigenfunctions on $\mathbb{C}P^{2n}$ as in \cite[pp.\ 172-173]{BGM71}. Consider $\mathbb{C}^{2n+1}=\mathbb{R}^{4n+2}$ with coordinates $(x_1,\ldots,x_{2n+1},y_1,\ldots,y_{2n+1})$ and let $z_j=x_j+iy_j$, $\bar{z}_j=x_j-iy_j$ be the complex coordinates. Defining $\partial_{z_j}=\frac{1}{2}(\partial_{x_j}-i\partial_{y_j})$ and $\partial_{\bar{z}_j}=\frac{1}{2}(\partial_{x_j}+i\partial_{y_j})$, we can rewrite the Laplace operator on $\mathbb{C}^{2n+1}$ as \begin{align*}\Delta=-4\sum_{j=1}^{2n+1}\partial_{z_j}\circ \partial_{\bar{z}_j}. \end{align*} Let $P_{k,k}$ be the space of polynomials on $\mathbb{C}^{2n+1}$ which are homogeneous of degree $k$ in $z$ and $\bar{z}$ and let $H_{k,k}$ be the subspace of harmonic polynomials in $P_{k,k}$. We have the decomposition $P_{k,k}=H_{k,k}\oplus r^2P_{k-1,k-1}$. Elements in $P_{k,k}$ are $S^1$-invariant and thus, they descend to functions on the quotient $\mathbb{C}P^{2n}=S^{4n+1}/S^1$. The eigenfunctions on $\mathbb{C}P^{2n}$ correspond to functions in $H_{k,k}$, $k\geq0$. Let $v$ be an eigenfunction for the eigenvalue $2\mu$. Then $v$ corresponds to a function $f\in H_{1,1}$. We may assume that $f$ is of the form $f(z,\bar{z})=\sum_{i}\lambda_i |z_i|^2$ and $\sum_{i} \lambda_i=0$. Consider its square \begin{align*} f^2\in P_{2,2}=H_{2,2}\oplus r^2 H_{1,1}\oplus \mathbb{R} \cdot r^4. \end{align*} To prove the claim, it suffices to show that $f^2\notin H_{2,2}\oplus\mathbb{R} \cdot r^4$ or equivalently, $\Delta(f^2)\notin\mathbb{R}\cdot \Delta (r^4)$. In fact, we have \begin{align*}\Delta(f^2)=&\Delta\left(\sum_i\lambda_i^2 |z_i|^4\right)+\Delta\left( \sum_{i\neq j}\lambda_i\lambda_j |z_i|^2|z_j|^2\right)\\ =&-16\sum_i\lambda_i^2 |z_i|^2-4\sum_{i\neq j}\lambda_i\lambda_j(|z_i|^2+|z_j|^2) =-8\sum_i\lambda_i^2 |z_i|^2. \end{align*} On the other hand, \begin{align*} \Delta(r^4)=&\Delta\left(\sum_i |z_i|^4\right)+\Delta\left( \sum_{i\neq j} |z_i|^2|z_j|^2\right)=-16\sum_i |z_i|^2-4\sum_{i\neq j}(|z_i|^2+|z_j|^2)=-16(n+1)r^2. \end{align*} Thus, if $\Delta(f^2)\in\mathbb{R}\cdot \Delta (r^4)$, then $|\lambda_i|$ is independent of $i$; but since the number of indices, $2n+1$, is odd, values $\lambda_i=\pm c$ can only satisfy $\sum_{i}\lambda_i=0$ if $c=0$, which would force $f=0$ and contradict the choice of $v$. \end{proof} \begin{rem} Together with Lemma \ref{rigidity}, the above result implies Theorem \ref{CP2n}. \end{rem} \begin{rem} The proof of the above theorem shows the following: on $\mathbb{C}P^{2n+1}$, the infinitesimal solitonic deformations which are integrable up to second order form a zero set (in the space of all ISD's). For these deformations, $D^2\mathrm{Sol}(h,h)$ is orthogonal to all conformal deformations of the form $w\cdot g$, $w\in E(2\mu)$, and the proof of Lemma \ref{secondordercriterion} implies that $D^2\mathrm{Sol}(h,h)$ is orthogonal to all ISD's. The statement then follows from Lemma \ref{secondordercriterion}. However, it is not clear whether these deformations are integrable up to higher order. It seems likely that $\mathbb{C}P^{2n+1}$ is also rigid. \end{rem} \begin{exmp} Consider $S^2\times S^2$ with the product of round metrics and let $\mu$ be the Einstein constant of the metric. Then, $2\mu$ is in the spectrum of the product metric and there are no infinitesimal Einstein deformations.
The space of infinitesimal solitonic deformations is therefore again equal to \begin{align*} \left\{\mu v\cdot g+\nabla^2 v|v\in C^{\infty}(M),\Delta v=2\mu v\right\}. \end{align*} Moreover, $2\mu$ is the smallest eigenvalue of the Laplacian of the product metric and its eigenspace is \begin{align*} E(2\mu)=\left\{\mathrm{pr}_1^*v+\mathrm{pr}_2^*w|v,w\in C^{\infty}(S^2), \Delta v=2\mu v,\Delta w=2\mu w\right\}. \end{align*} The eigenfunctions on the factors are restrictions of linear functions on $\mathbb{R}^3$ (e.g.\ \cite[Chapter III C.I.]{BGM71}) and thus, they are antisymmetric with respect to the antipodal map $\sigma\in\mathrm{Iso}(S^2,g_{st})$. Therefore, $(\sigma\times\sigma)^*v=-v$ for any $v\in E(2\mu)$ and so, $\int_{S^2\times S^2}v^2 w\text{ }dV=0$ for any $v,w\in E(2\mu)$. The same argument as in the remark above implies that all infinitesimal solitonic deformations on $S^2\times S^2$ are integrable up to second order. It is again not clear whether they are integrable of higher order. \end{exmp} \begin{rem} In \cite{Koi82}, Koiso showed that the product Einstein metric on $S^2\times \mathbb{C}P^{2n}$ is rigid as an Einstein metric although it has infinitesimal Einstein deformations. In fact, the infinitesimal Einstein deformations are linear combinations of the form $\alpha v\cdot g_1+\beta v\cdot g_2+\gamma \nabla^2 v$ where $\alpha,\beta,\gamma\in\mathbb{R}$, $v=\mathrm{pr}_2^*w$ and $w\in E(2\mu)$ and they are not integrable of second order. It is not clear if $S^2\times \mathbb{C}P^{2n}$ is rigid as a soliton. The eigenfunctions for the first nonzero eigenvalue on the first factor form infinitesimal solitonic deformations which are integrable up to second order. \end{rem} \vspace{3mm} \textbf{Acknowledgement.} The author would like to thank the Sonderforschungsbereich 647 funded by the Deutsche Forschungsgemeinschaft for financial support.
\section{Introduction} \label{sec:intro} \blfootnote{\noindent This research was supported by the Office of Naval Research (ONR) under MURI Grant N00014-19-1-2621.} The Internet of Things (IoT) is arguably the most important technology of the coming decade \cite{saad2019vision}. However, the effective operation of several IoT services, such as industrial monitoring \cite{indus}, health monitoring \cite{health}, drones \cite{mohadrone}, virtual reality \cite{chenvr}, and vehicular network \cite{vehicular}, requires timely and frequent communications. To maintain the proper performance of such diverse IoT applications, the base station (BS) must maintain the most relevant information gathered from the IoT devices at any given time.\\ \indent In addition to timely transmissions from the devices to the BS, another key challenge is to account for the distinctive characteristics of an IoT and its devices. One prominent property of an IoT is its massive scale as the number of devices greatly outnumbers the available communication resources \cite{saad}. Therefore, an appropriate allocation of the limited communication resources among numerous IoT devices is necessary for the deployment of an IoT and its services \cite{iotalloc}. Furthermore, the IoT exhibits a high heterogeneity in terms of device types, functions, messages, transmission requirements, and resource constraints \cite{minehetero}. The aforementioned IoT properties pose challenges for timely uplink transmission in an IoT. To ensure the performance of time-sensitive IoT applications despite the aforementioned challenges, a new information timeliness performance metric is needed as an alternative to conventional delay, reliability, and data rate.\\ \indent To evaluate the communication between the BS and the devices, the \textit{age of information} (AoI), which is a metric that can quantify the relevance and the freshness of the information, is used \cite{aoi1, bo2}. However, the AoI has different characteristics compared to delay \cite{aoi1}, because it explicitly considers packet generation time. The problem of AoI minimization in an IoT has unique challenges due to the characteristics of an IoT, including massive scale, limited communication resources, and IoT device heterogeneity. Largely, AoI minimization can be done in a centralized way or in a distributed way. However, a centralized AoI minimization approach is not always viable for an IoT, because the energy constrained IoT devices may not be able to communicate frequently with BS. On the other hand, a distributed AoI minimization approach may require extensive device-to-device communication and could perform worse than a centralized solution for some IoT scenarios. Therefore, both centralized and distributed AoI minimization must be investigated to compare their applicability and performances in an IoT.\vspace{-1mm} \subsection{Existing Works} A number of recent works studied the problem of AoI minimization in wireless networks \cite{bo2, aoi1, aoischedule1, aoischedule3, multihop, adhoc, aoimultiaccess, aoischedule2, aoicsi, bo3, aoischedule4, multiinfo, multisource, aoibackoff, aoicsma, aoisleep, qm3, mm1, mg1, mg11, aoiurllc, qm2, aoinoma, nonlin1, gc2020}. These studies use various approaches to minimize the AoI under different constraints and conditions. 
For instance, the works in \cite{bo2, aoi1, aoischedule1, aoischedule3, multihop, adhoc, aoimultiaccess, aoischedule2, aoicsi, bo3, aoischedule4, multiinfo, multisource, aoibackoff, aoicsma, aoisleep,qm3, mm1, mg1, mg11, aoiurllc, qm2, aoinoma, nonlin1} study a variety of scheduling policies for AoI minimization in different networks, including single hop broadcast network \cite{aoischedule1}, single-hop uplink communication \cite{aoischedule3}, multi-hop uplink communication \cite{multihop}, ad-hoc networks \cite{adhoc}, and ALOHA-like random access \cite{aoimultiaccess}. The authors in \cite{aoischedule2} and \cite{aoicsi} propose and analyze scheduling policies for the wireless networks with known and unknown channel state information. The works in \cite{aoi1}, \cite{aoischedule3}, \cite{adhoc}, and \cite{bo3} introduce effective scheduling policies to minimize the average AoI with network constraints, such as throughput requirement, physical constraint for sensing, spectrum sharing, and varying packet sizes. In \cite{adhoc}, \cite{aoischedule4}, and \cite{multiinfo}, the authors use online techniques, such as reinforcement learning to perform AoI-minimal scheduling. Moreover, the authors in \cite{multiinfo} and \cite{multisource} analyze the performance of user scheduling for minimizing the average AoI in presence of multiple sources of information and propose a hybrid queueing system. The authors in \cite{aoibackoff} analyze the coexistence of DSRC and WiFi networks as a game, in which the DSRC network minimizes the AoI and the WiFi network maximizes the throughput. For CSMA networks, the work in \cite{aoicsma} optimizes the backoff time of each communication link to minimize the total average AoI, and the authors in \cite{aoisleep} propose a sleep-wake scheduling to optimize the tradeoff between AoI minimization and energy consumption.\\ \indent The works in \cite{qm3, mm1, mg1, mg11} address the problem of AoI minimization using queueing-theoretic approaches. In \cite{qm3}, the authors analyze the peak AoI in a multi-class queueing system with packets having heterogeneous service times and requirements. The authors in \cite{mm1, mg1, mg11} derive closed-form solutions for the average AoI and the peak AoI for different queueing models, including M/M/1, M/G/1, and M/G/1/1. In \cite{aoiurllc}, the authors consider a vehicular network with ultra-reliable low-latency communication and minimize the tail of the AoI distribution. The peak AoI considering the packet delivery failure is analyzed in \cite{qm2}. Moreover, the non-orthogonal multiple access is compared against the conventional orthogonal multiple access in terms of AoI minimization in \cite{aoinoma}. The authors study the sampling policies to minimize the average AoI with the joint status sampling in IoT \cite{bo2} or with the non-linear aging functions \cite{nonlin1}.\\ \indent Despite being interesting, the existing solutions in \cite{bo2, aoi1, aoischedule1, aoischedule3, multihop, adhoc, aoimultiaccess, aoischedule2, aoicsi, bo3, aoischedule4, multiinfo, multisource, aoibackoff, aoicsma, aoisleep, qm3, mm1, mg1, mg11, aoiurllc, qm2, aoinoma, nonlin1, gc2020} do not consider some of the unique properties of an IoT, such as limited communication resources, massive scale, and high device heterogeneity. One of the key challenges in IoT is the massive scale of IoT coupled with highly limited available communication resources. 
However, the works in \cite{aoi1, aoischedule1, aoischedule3, multihop, adhoc, aoimultiaccess, aoischedule4}, and \cite{qm2, aoinoma, nonlin1} do not investigate the realistic IoT scenario in which the number of devices greatly outnumbers the communication resources. Furthermore, the inherent heterogeneity among IoT devices and the presence of non-linear aging functions are not considered in \cite{bo2, aoischedule1, aoischedule3, multihop, adhoc, aoimultiaccess, aoischedule2, aoicsi, bo3, aoischedule4}, and \cite{mm1, mg1, mg11, aoiurllc, qm2, aoinoma}. Moreover, most of the prior works for AoI minimization in \cite{aoi1, aoischedule1, aoischedule3, multihop, adhoc, aoimultiaccess, aoischedule2, aoicsi, multiinfo, multisource,aoibackoff, aoicsma, aoisleep,qm3, mm1, mg1, mg11}, and \cite{nonlin1} only considers a centralized approach. Moreover, in \cite{gc2020}, we studied centralized AoI minimization with non-linear aging functions and proposed a centralized resource allocation scheme to enable the BS to consider different aging functions. However, the work in \cite{gc2020} only investigates a centralized approach for AoI minimization and does not introduce a distributed resource allocation framework for an IoT with non-linear aging functions. A centralized approach may not always be suitable for an IoT, because the frequent communication with BS is not viable for the energy constrained IoT devices. These important challenges for enhancing the AoI in an IoT have been largely overlooked in prior works \cite{bo2, aoi1, aoischedule1, aoischedule3, multihop, adhoc, aoimultiaccess, aoischedule2, aoicsi, bo3, aoischedule4, multiinfo, multisource, aoibackoff, aoicsma, aoisleep, qm3, mm1, mg1, mg11, aoiurllc, qm2, aoinoma, nonlin1, gc2020}. \subsection{Contributions} \indent The main contributions of this paper are novel centralized and distributed resource allocation frameworks that can be used to minimize the average instantaneous AoI for a massive IoT with heterogeneous devices and non-linear aging functions. In particular, we capture the heterogeneity among IoT devices using \emph{non-linear aging functions}. Typically, the AoI is defined only in terms of time, and it is assumed to increase linearly with a slope of $1$ \cite{aoi1}. However, the definition of the AoI can be broader such that the AoI can be a function of completeness, validity, accuracy, currency, and utility \cite{nonlin2, nonlin3}. Under such a broader definition of the AoI, the aging function can be defined as an age penalty function or an age utility function \cite{nonlin1}, which can be an exponential, linear, or step function \cite{nonlin2}. As such, we propose to capture the heterogeneity among IoT devices and messages by assigning different aging functions based on the devices types, the IoT application, the message content, and the transmission requirement.\\ \indent For centralized AoI minimization, we propose a new priority scheduling scheme with a learning perspective such that the device types and the aging functions can be determined. For non-linear aging functions, we show that using the future AoI for priority scheduling achieves a lower average instantaneous AoI than using the current AoI. Simulation results for the centralized approach show that the proposed priority scheduling scheme achieves $26.7\%$ lower average instantaneous AoI with high activation probability and $31.7\%$ lower average instantaneous AoI with high outage probability than a simple priority scheduling. 
In particular, our approach outperforms a simple priority scheduling and performs similar to a priority scheduling with complete information on device types and aging functions.\\ \indent For the distributed AoI minimization, we formulate a minority game \cite{kolkata}, such that massive number of IoT devices can share the limited available communication resources autonomously. Furthermore, a payoff function is designed to allow the messages with the highest AoI to transmit first in a self-organizing manner. We then show the conditions that a resource allocation among IoT devices must satisfy to achieve a Nash equilibrium (NE). We propose a stochastic crowd avoidance algorithm for the resource allocation game and prove that the resource allocation using our proposed algorithm converges to an NE with sufficient information and under certain network parameters. Simulation results for the distributed case show that the proposed algorithm is effective in minimizing the AoI even if the devices only have the partial information about other devices. The results show that our game-based approach achieves $63.6\%$ lower average instantaneous AoI with limited information and $45.8\%$ lower average instantaneous AoI with high outage probability than a random resource allocation. Moreover, after convergence, our game-based approach performs similar to the pre-determined resource allocation with complete information.\\ \indent The centralized and the distributed AoI minimization schemes are compared in terms of overhead, implementation, and requirements. In particular, the centralized AoI minimization has an overhead of uplink communication request, while the distributed AoI minimization has an overhead of device-to-device communication. Simulation results show that the distributed AoI minimization achieves $40$-fold higher average instantaneous AoI than the centralized AoI minimization in a massive IoT, where the communication resources are highly limited and the devices only have partial information. In a less constrained IoT, simulation results show that the distributed AoI minimization achieves $8$-fold higher average instantaneous AoI than the centralized AoI minimization. Although the centralized AoI minimization outperforms the distributed AoI minimization in terms of average instantaneous AoI, the distributed approach may be more suitable for an IoT, because the centralized approach may not be practical or viable for an IoT. As such, our analysis clearly showcases the contrasts between the two solutions.\\ \indent The rest of this paper is organized as follows. Section II introduces the system model and the non-linear aging functions. Section III analyzes the AoI minimization with coexistence of linear and non-linear aging functions. Section IV presents the centralized and the distributed resource allocations in an IoT. Section V analyzes the simulation results, while Section VI draws conclusions.\vspace{-2mm} \section{System Model}\label{sec:SM} Consider the uplink of a wireless IoT system consisting of one BS serving $N$ IoT devices. The IoT devices can transmit their messages to the BS using the communication resources allocated by either a centralized or distributed resource allocation scheme. To transmit to the BS, the IoT devices use time-slotted orthogonal frequency-division multiple access (OFDMA). Here, $R$ time-frequency resource blocks (RBs) are allocated to the IoT devices at each time slot. 
If more than one IoT device uses a given RB, none of the messages transmitted using the given RB can be successfully decoded, which leads to transmission failures. This implies that at most $R$ devices can transmit successfully to the BS in a given time slot. In an IoT where the number of devices $N$ greatly outnumbers the number of RBs $R$, RB allocation is critical for the operation of the IoT, and RB allocation can be done in a centralized or in a distributed way.\\ \indent Under a centralized resource allocation scheme, the BS allocates the RBs to the IoT devices such that a given RB is used by only one device. Therefore, duplicate RB usage will not occur when using a centralized resource allocation. However, centralized resource allocation incurs an overhead related to the need for the devices to request their uplink communication resources via a random access channel (RACH) \cite{aoi3gpp}. Furthermore, the uplink communication resource request using the RACH can fail, resulting in a transmission failure. In contrast, when using a distributed resource allocation scheme, the devices decide which RB to use autonomously without any intervention from the BS and without RACH. Although there is no overhead related to the need for requesting uplink communication resources, distributed resource allocation incurs an overhead related to the devices cooperating to avoid duplicate RB usage. Since there is no RACH request when performing distributed resource allocation, no RACH request failures will happen. However, the uplink transmission may fail because of a duplicate RB usage.\\ \indent For both centralized and distributed resource allocations, the uplink transmission can also fail because of an RB outage. The RB outage is based on the signal-to-noise ratio (SNR) such that the transmission is considered to be a failure if the SNR is less than a given threshold $\epsilon \geq 0$. We consider a stationary Rayleigh fading channel with additive white Gaussian noise (AWGN), such that the statistical properties of the channel do not change over time. Therefore, the SNR outage probability is $\Pr\left(\sfrac{S^2}{\sigma^2} \leq \epsilon \right)$, where the received signal power $S^2$ is an exponentially distributed random variable and $\sigma^2$ is the variance of the AWGN. We assume that the IoT devices only know the distributional properties of the channel and the AWGN at the receiver. Furthermore, we assume that all devices transmit with the same transmit power, as the devices do not know the exact channel gain \cite{chenvr, iotalloc}, and \cite{aoisp1, aoisp2, aoisp3}. Since we consider a Rayleigh fading channel, the received signal power $S^2$ is exponentially distributed, and the devices know the mean $\lambda^{-1}$ of $S^2$. Since the transmit powers of all IoT devices are equal, the mean of the received signal power is the same for all devices. When an IoT device uses multiple RBs simultaneously, the transmit power is equally divided among those RBs. For instance, if an IoT device $i$ uses $R_{i,t}$ RBs simultaneously at time slot $t$, then the received signal power is $\sfrac{S^2}{R_{i,t}}$. For an IoT device $i$ using $R_{i,t}$ RBs simultaneously at time slot $t$, the outage probability $p_{i,t}$ will be: \begin{equation} p_{i,t} = \Pr\left(\frac{\sfrac{S^2}{R_{i,t}}}{\sigma^2} \leq \epsilon \right).
\label{outp1} \end{equation} Since $S^2$ is exponentially distributed with mean $\lambda^{-1}$, $\frac{\sfrac{S^2}{R_{i,t}}}{\sigma^2}$ is exponentially distributed with mean $(R_{i,t} \sigma^2 \lambda)^{-1}$ for a given $R_{i,t}$ and a known $\sigma^2$. Moreover, the outage probability $p_{i,t}$ can be obtained from the cumulative distribution function of an exponential distribution. Since an exponential random variable $S^2$ with mean $\lambda^{-1}$ has a cumulative distribution function of $\Pr\left(S^2 \leq \epsilon\right) = 1 - \exp\left(-\lambda \epsilon\right)$ for $\epsilon \geq 0$, the outage probability $p_{i,t}$ in \eqref{outp1} will be: \begin{equation} p_{i,t} = 1 - \exp\left(-\left(R_{i,t} \sigma^2 \lambda\right)\epsilon\right). \label{outp2} \end{equation} For a successful uplink transmission, an RB must be used by only one device, and the SNR must be higher than a given threshold $\epsilon$.\\ \indent One prominent feature of an IoT is its massive scale. In particular, the number of IoT devices $N$ greatly outnumbers the number of RBs $R$. For an IoT scenario where $N > R$, the problem of RB allocation among the IoT devices becomes more challenging. Another prominent feature of an IoT is the heterogeneity among the IoT devices. The IoT devices are heterogeneous in terms of message types, transmission requirements, and packet content. A metric that can be used to determine which $R$ out of the $N$ devices will transmit and to quantify the freshness of the information from the perspective of the destination is the AoI. Furthermore, the heterogeneity among the IoT devices and their messages can be captured by extending the definition of AoI to include the quality of information and introducing non-linear aging functions. \subsection{Age of Information} The AoI is a metric that quantifies the freshness of the information from the perspective of a destination \cite{bo2}, and the AoI is defined as the time elapsed since the generation of the message most recently received at the BS. In prior art on the AoI, devices are commonly assumed to generate the messages at will \cite{gatwill2} and to update the BS with a new message just in time \cite{jintime}. The generate-at-will model for the AoI implies that IoT devices can have messages to transmit to the BS at any time, and the just-in-time model for the AoI implies that IoT devices transmit a new message to the BS immediately after the successful transmission of the current message. However, for an IoT, the devices are not always \textit{active} and do not always have messages to transmit to the BS. In our model, a device has a message to transmit to the BS at a given time slot with an activation probability $v_a$. If a device transmits to the BS unsuccessfully at a given time slot, then the device retransmits the message immediately at the following time slot without a random backoff time. For the distributed RB allocation, the proposed game is designed to give an incentive to devices with a low AoI not to transmit. The RACH phase of the centralized RB allocation can use the game formulated for the distributed RB allocation to achieve a lower average instantaneous AoI. Meanwhile, the random backoff time used in \cite{aoibackoff} and \cite{backoff} does not consider the different aging functions when the backoff time is determined for each device.
For instance, with such an aging-agnostic design, a long backoff time could end up being assigned to a device with a high AoI and an exponential aging function, while a short backoff time could be assigned to a device with a low AoI and a linear aging function.\\ \indent Minimizing the AoI implies that the destination maintains fresh information from the source. However, minimizing the AoI is different from simply minimizing delay \cite{aoi1}. The AoI is measured by an aging function, and it is typically assumed that all devices have the same linear aging function with a slope of $1$ \cite{aoi1}. However, by using different aging functions for different messages, the AoI can naturally capture the inherent heterogeneity among IoT devices and messages. For instance, depending on the device type and the message characteristics, the aging function can be assigned appropriately. If the device is a simple sensor transmitting time-insensitive update messages, the appropriate aging function is a linear aging function. On the other hand, if the device is an industrial monitoring sensor transmitting a time-sensitive status report, the appropriate aging function is an exponential aging function. By using different aging functions, the AoI captures both the freshness of information and the value of information \cite{nonlin1, nonlin2, nonlin3}.\\ \indent To model the heterogeneous messages, we consider the coexistence of a linear aging function and an exponential aging function. In particular, with a linear aging function, the AoI from IoT device $i$ at the beginning of time slot $t \in \mathbb{Z}_+$ is: \begin{equation} a_i(t) = t - \delta_i(t), \label{af1} \end{equation} where $\delta_i(t)$ is the time slot at which the most recent message from device $i$ received by the BS was generated. With an exponential aging function, the AoI from IoT device $i$ at the beginning of time slot $t \in \mathbb{Z}_+$ is: \begin{equation} b_i(t) = 2^{t - \delta_i(t) - 1}. \label{af2} \end{equation} Although specific linear and exponential aging functions are considered, our proposed centralized and distributed approaches to minimize the AoI are not limited to these aging functions only, and any type of aging function can be used.\\ \indent In a scenario where all IoT devices have the same linear aging function with slope $1$, the IoT devices and the BS can easily determine the AoI of the messages by simply counting the number of time slots passed since the most recently received message was generated \cite{aoi1}. However, in our system model where different aging functions coexist, the aging function of a given message is determined by the content of the message \cite{nonlin2}. For instance, the aging function of a message whose content is critical would be the exponential aging function $b_i(t)$, while the aging function of a message whose content is normal would be the linear aging function $a_i(t)$ \cite{nonlin1}. This implies that the BS cannot determine the aging function directly before receiving the message, and, thus, the BS cannot compute the AoI of the messages. Therefore, the devices determine the aging function of their own message and compute the current AoI.\\ \indent To capture the heterogeneity among the IoT devices, the IoT devices are classified into different types based on the probabilistic properties of their messages. A typical IoT device would not always have a time-sensitive message to send to the BS. Additionally, a device that usually sends time-insensitive messages may sometimes have a critical message to send. In our model, we consider two types of devices.
Type $1$ devices are more likely to have linearly aging messages than exponentially aging messages. In other words, type $1$ devices have messages with aging function $a_i(t)$ with probability $m_1$ and have messages with aging function $b_i(t)$ with probability $(1-m_1)$, where $1 > m_1 > 0.5$. Type $2$ devices are more likely to have exponentially aging messages than linearly aging messages. In other words, type $2$ devices have messages with aging function $b_i(t)$ with probability $m_2$ and have messages with aging function $a_i(t)$ with probability $(1 - m_2)$, where $1 > m_2 > 0.5$. We assume that the characteristics of the device types, such as $m_1$ and $m_2$, are known to the BS, but the BS does not know the type of a given device. Although having different types of messages realistically models the heterogeneity of the IoT devices, this makes the RB allocation more challenging, because the messages transmitted by a given device may have different aging functions. With non-linear aging functions and heterogeneous device types, the problem of AoI minimization is different from AoI minimization with only a linear aging function and homogeneous devices. Hence, next we investigate the problem of non-linear AoI minimization with the coexistence of different aging functions and heterogeneous devices. \section{Non-linear AoI Minimization} \indent To minimize the average instantaneous AoI, the devices with the highest AoI are permitted to transmit to the BS, and the AoI values of the different devices are compared to decide which devices are allocated the RBs in a massive IoT. Without the coexistence of different aging functions, comparing the AoI of different devices to allocate the limited RBs is simple. If all devices have the same linear aging function $a_i(t)$, then $a_i(\tau) > a_h(\tau)$ implies $a_i(\tau + \beta) > a_h(\tau + \beta)$ for any positive integers $\tau$ and $\beta$ given that devices $i$ and $h$ do not transmit successfully to the BS. Therefore, in a massive IoT with $N > R$, comparing the current AoI with $t = \tau$ at time slot $\tau$ can be used to decide which $R$ out of $N$ devices transmit and to minimize the average instantaneous AoI. Furthermore, using the current AoI with $t = \tau$ and using the future AoI with $t = \tau+\beta$ at time slot $\tau$ are equivalent when all devices have the same linear aging function $a_i(t)$.\\ \indent When a linear aging function $a_i(t)$ and an exponential aging function $b_i(t)$ coexist, comparing only the AoI of different devices is insufficient to minimize the average instantaneous AoI, and the RBs must be allocated with the aging functions taken into account. For instance, even if a device $i$ with $b_i(t)$ has a lower AoI than a device $h$ with $a_h(t)$, device $i$ will eventually have a higher AoI than device $h$, because $b_i(t)$ increases faster than $a_h(t)$.
Therefore, the AoI comparison must take the aging functions into account, and one way to do so is to compare the future AoI.\\ \indent The optimization problem to minimize the average instantaneous AoI at time slot $\tau$ is: \begin{align} \min_{\boldsymbol{n}}& \ \ \frac{1}{N}\left(\sum_{i \in \boldsymbol{n}} z_{i}'(\tau) + \sum_{i \not \in \boldsymbol{n}, i \in \boldsymbol{N}} z_{i}(\tau)\right)\\ \textrm{s.t.} &\ \ \boldsymbol{n} \subseteq \boldsymbol{N},\\ &\ \ |\boldsymbol{n}| = R, \end{align} where $\boldsymbol{N} = \{1, \cdots, N\}$ is the set of all devices, $z_{i}'(\tau)$ is the aging function of a device $i$ whose timestamp has been updated, i.e., $\delta_i(\tau) \neq \delta_i(\tau-1)$, and $z_{i}(\tau)$ is the aging function of a device $i$ with $\delta_i(\tau) = \delta_i(\tau-1)$. Here, $\boldsymbol{n}$ is the set of devices allocated the RBs at time slot $(\tau-1)$, while $\boldsymbol{N} - \boldsymbol{n}$ is the set of devices not allocated the RBs at $(\tau-1)$. Next, for the problem of instantaneous AoI minimization, we prove that performing RB allocation based on the future AoI achieves a lower average instantaneous AoI than an alternative that is based on the current AoI. \begin{proposition} \label{pp1} \normalfont In a massive IoT where $N > R$, if there are different aging functions $a_i(t)$ and $b_i(t)$, comparing the future AoI with $t = \tau + \beta$ for some positive integer $\beta$ at time slot $\tau$ to determine the RB allocation achieves a lower average instantaneous AoI than comparing the current AoI with $t = \tau$. \end{proposition} \begin{proof} See Appendix \ref{app1}. \end{proof} \indent From Proposition \ref{pp1}, we observe that allocating RBs to the devices with the highest future AoI results in a lower overall average instantaneous AoI of all devices at all time slots than allocating RBs to the devices with the highest current AoI. It is important to note that the current AoI at the time slot of successful transmission is used to compute the average instantaneous AoI, and the future AoI is only used to determine the RB allocation. In Proposition \ref{pp1}, the value of the positive integer $\beta$ in determining which future AoI to use for the RB allocation is a design parameter, and different values of $\beta$ have different effects. If higher values of $\beta$ are used, devices with exponentially aging messages are more likely to be allocated the RBs than devices with linearly aging messages. This implies that devices with exponentially aging messages are allocated the RBs even when their current AoI is low, while devices with linearly aging messages are not allocated the RBs even when their current AoI is high. Therefore, the exponentially aging messages are transmitted before the linearly aging messages. In other words, with higher values of $\beta$, the average instantaneous AoI of IoT devices with exponentially aging messages is lower, while the average instantaneous AoI of IoT devices with linearly aging messages is higher. Furthermore, using higher values of $\beta$ for the future AoI does not necessarily achieve a lower average instantaneous AoI, and $\beta$ can be chosen depending on how much the exponentially aging messages are prioritized over the linearly aging messages. In our system model, the centralized and the distributed RB allocations use the future AoI with $\beta = 1$.\\ \indent We consider that the IoT devices may have messages requiring multiple RBs to successfully transmit to the BS, as discussed after the following illustration.
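To see Proposition \ref{pp1} at work on a toy instance, the following minimal Python sketch compares the two ranking rules for two devices whose messages age according to \eqref{af1} and \eqref{af2}, respectively; the generation slots, the time slot $\tau$, and $\beta=1$ are assumed purely for illustration and are not part of the system model.
\begin{verbatim}
# Toy illustration of Proposition 1: with coexisting aging functions,
# ranking devices by their future AoI (beta = 1) can differ from ranking
# them by their current AoI. All values are assumed for illustration.
tau = 10
devices = {                      # name: (generation slot delta, aging type)
    "dev1": (5, "linear"),       # a(t) = t - delta
    "dev2": (7, "exponential"),  # b(t) = 2^(t - delta - 1)
}

def aoi(t, delta, kind):
    return t - delta if kind == "linear" else 2 ** (t - delta - 1)

current = {d: aoi(tau, *v) for d, v in devices.items()}
future = {d: aoi(tau + 1, *v) for d, v in devices.items()}
print(current)   # {'dev1': 5, 'dev2': 4} -> current-AoI rule favors dev1
print(future)    # {'dev1': 6, 'dev2': 8} -> future-AoI rule favors dev2
\end{verbatim}
Here, the current-AoI rule would grant the RB to dev1, whereas the future-AoI rule grants it to dev2, whose exponentially aging message already dominates at the next slot.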
When the messages take multiple RBs to successfully transmit, $\delta_i(t)$ in \eqref{af1} and \eqref{af2} will represent the time slot during which the most recent message from device $i$, which is fully received by the BS, was generated. In this case, the devices must determine how to transmit the messages taking multiple RBs. In particular, if a device $i$ has a message that requires $n_i$ RBs to transmit, then the message may be transmitted simultaneously by using $n_i$ RBs at a given time slot, consecutively by using $1$ RB each time for $n_i$ time slots, or jointly by using both simultaneous and consecutive transmissions. The simultaneous transmission may complete the transmission at once, reducing the AoI, but it has a high $R_{i,t}$ and, thus, a high outage probability $p_{i,t}$ in \eqref{outp2}. A consecutive transmission achieves its lowest outage probability $p_{i,t}$ with $R_{i,t} = 1$, but it takes the largest number of time slots to complete the transmission, increasing the AoI. Therefore, the problem of AoI minimization must consider the outage probability. The optimization problem to minimize the average instantaneous AoI for a device $i$ with a linearly aging message requiring $n_i$ RBs at time slot $\tau$ is: \begin{align} \min_{\boldsymbol{R}}& \ \ a_i\left(\tau + \sum_{j = \tau}^{\tau + n_i - 1} (1 - p_{i,j})^{-1}\right), \label{opt}\\ \textrm{s.t.} &\ \ \sum\nolimits_{j = \tau}^{\tau + n_i - 1} R_{i,j} = n_i,\\ &\ \ 0 \leq R_{i,j} \leq R \ \forall \ j, \end{align} where $\boldsymbol{R} = [R_{i,\tau}, ..., R_{i, \tau+n_i-1}]$. Since $R_{i, \tau} \in \mathbb{Z}_+$, $|\boldsymbol{R}| = n_i$, and $n_i$ is typically small for the IoT devices \cite{iotshort}, the solution space of the optimization problem \eqref{opt} is finite and small. Therefore, the optimization problem can be solved easily using any discrete optimization method, such as combinatorial optimization. In particular, for any $n_i$, the optimization problem \eqref{opt} can be mapped onto a directed graph with $(1 - p_{i,j})^{-1}$ as edge weights to find a shortest path. When a device uses $R_{i,t}$ RBs simultaneously, $(1 - p_{i,t})$ is the probability of successful transmission given that duplicate RB selection did not occur. Taking $(1 - p_{i,t})$ as the success probability in a geometric distribution, the expected number of time slots needed for the successful transmission when using $R_{i,t}$ RBs simultaneously is $(1 - p_{i,t})^{-1}$, which is the mean of the geometric distribution. Therefore, $\sum_{j = \tau}^{\tau + n_i - 1} (1 - p_{i,j})^{-1}$ is the expected number of time slots needed to transmit a message requiring $n_i$ RBs. If a device $i$ has an exponentially aging message, $b_i(t)$ replaces $a_i(t)$ in \eqref{opt}. The solution to the optimization problem \eqref{opt} determines the number of RBs $R_{i,\tau}$ that a device $i$ should be allocated at time slot $\tau$ so that the average instantaneous AoI is minimized. Furthermore, the solution to the optimization problem is used by both the centralized and the distributed RB allocation approaches. \section{Resource Block Allocation} In a massive IoT with $N > R$, RB allocation is a challenging problem, especially given the high heterogeneity among the IoT devices and their messages. RB allocation in an IoT can be done in a centralized way or in a distributed way. Moreover, RB allocation schemes can achieve a lower average instantaneous AoI by allocating the limited RBs to IoT devices with higher future AoI.
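Before turning to the two allocation schemes, the following Python sketch shows how the small search space of \eqref{opt} can be explored by direct enumeration for one device with a linearly aging message; the channel parameters $\lambda$, $\sigma^2$, $\epsilon$, the number of RBs $R$, and the message size $n_i$ are assumed toy values. Since $a_i$ is increasing, minimizing the expected number of slots $\sum_j (1-p_{i,j})^{-1}$ is equivalent to minimizing the objective of \eqref{opt}.
\begin{verbatim}
import itertools, math

# Direct enumeration of the RB-splitting problem in (opt) for one device:
# choose how many RBs to use in each of the n_i consecutive slots so that
# the expected number of slots is minimized. All values are assumed.
lam, sigma2, eps, R = 1.0, 0.1, 0.5, 4   # assumed channel / RB parameters
n_i = 3                                   # message requires 3 RBs (assumed)

def outage(r):
    """Outage probability (outp2) when r RBs are used simultaneously."""
    return 1.0 - math.exp(-(r * sigma2 * lam) * eps)

def expected_slots(alloc):
    """Sum of (1 - p_{i,j})^{-1} over the n_i scheduled slots."""
    return sum(1.0 / (1.0 - outage(r)) for r in alloc)

candidates = [a for a in itertools.product(range(R + 1), repeat=n_i)
              if sum(a) == n_i]           # constraint: allocation sums to n_i
best = min(candidates, key=expected_slots)
print(best, round(expected_slots(best), 3))
\end{verbatim}
Concentrating the RBs in fewer slots shortens the schedule but raises the per-slot outage in \eqref{outp2}; the enumeration simply returns the best compromise for the assumed parameters.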
The centralized RB allocation scheme is based on priority scheduling combined with maximum likelihood estimation to determine the aging functions and to learn the device types. The proposed distributed RB allocation scheme is designed to enable only the devices with a sufficiently high future AoI to transmit, and the proposed stochastic crowd avoidance algorithm is proved to converge to an NE of the formulated game. \subsection{Centralized RB Allocation} For centralized RB allocation, the BS allocates the RBs to the IoT devices. In a given time slot, the centralized approach has two phases. In the first phase, the active devices request RBs using the RACH. If an active device is allocated an RB, the active device transmits its message to the BS in the second phase. Although there will be no duplicate RB usage causing a transmission failure in the second phase, there may be RACH preamble collisions causing RB request failures in the first phase. Therefore, under the centralized RB allocation scheme, a device can fail to transmit to the BS because of a RACH preamble collision, an SNR outage, or a lack of RB allocation.\\ \indent We let $P$ be the number of RACH preambles and $N_t$ be the number of active devices at time slot $t$. The probability of the RACH preamble collision $c_t$ at time slot $t$ is: \begin{equation} c_t = 1 - \left(\frac{P-1}{P}\right)^{N_t-1}, \label{prefail} \end{equation} which is the probability that at least one other active device chooses the same RACH preamble. A device fails to request an RB at time slot $t$ with probability $c_t$, and such a request failure is equivalent to a transmission failure. However, if a device $i$ successfully requests an RB at time slot $t$, then device $i$ sends its current AoI $C_i$ and the necessary number of RBs $R_{i,t}$. After gathering the information from the devices, the BS determines the RB allocation based on the future AoI of the devices to minimize the average instantaneous AoI.\\ \indent In the second phase, the BS allocates the RBs to the active devices using a priority scheduling based on the future AoI. The priority scheduling allocates the $R$ RBs to at most $R$ active devices with the highest future AoI, minimizing the average instantaneous AoI. If a device $i$ with high future AoI has $R_{i,t} > 1$, then IoT device $i$ may be allocated more than $1$ RB at time slot $t$. The challenge in the second phase is determining the future AoI from the received current AoI, because of the coexistence of different aging functions. In other words, the BS does not know the aging function of an active device $i$, and the BS must determine the aging function to compute the future AoI $F_i$, which is used for the priority scheduling scheme to achieve a lower average instantaneous AoI as shown in Proposition \ref{pp1}. However, the BS can determine the current aging function of an active device $i$ from $C_i$.\\ \indent The BS can determine that the aging function of an active device $i$ is $a_i(t)$, when $C_i$ cannot be derived using $b_i(t)$. The possible values of the AoI using $a_i(t)$ are $\{1, 2, 3, 4, ...\}$, and the possible values of the AoI using $b_i(t)$ are $\{1, 2, 4, 8, ...\}$. Therefore, the values of the AoI that are only possible using $a_i(t)$ are $\{3, 5, 6, 7, ...\}$.
If the received current age $C_i$ from an active device $i$ is one of $\{3, 5, 6, 7, ...\}$, then the aging function is $a_i(t)$, and, thus, the future AoI $F_i$ of device $i$ is $C_i + 1$. The BS can also determine the aging functions of active devices by using regression. For an active device $i$, the BS can use the AoI from the most recently received uplink request from device $i$ and $C_i$ to determine if the aging function is $a_i(t)$ or $b_i(t)$. After determining the aging functions of active devices, the BS can compute the future AoI $F_i$ of active devices and learn the device types, which are used for the priority scheduling scheme with learning. The BS can use either of the two methods to accurately determine the current aging functions, but it is not always possible to use these methods. When both methods are not possible to use in a given time slot due to RACH preamble collision, the BS uses the expected value of $F_i$ for the priority scheduling scheme.\\ \indent The BS can compute the expected value of $F_i$ of an active device $i$ by learning the device type of device $i$. The BS can learn the type of a device $i$ by using the previous data from the instances that the BS was able to determine the aging function of messages from device $i$. In particular, the BS can use a maximum likelihood to determine the device types. We let $\mathcal{S}$ be a set of all device types and $\boldsymbol{O}_i$ be a vector of aging functions that a device $i$ had that the BS was able to determine exactly. Furthermore, we let $k_{i,f}$ be the number of times that the BS determined device $i$ to have aging function $f$. Assuming that the aging functions of a device $i$ are determined independently, the learned type $H_i \in \mathcal{S}$ of a device $i$ is: \begin{align} H_i &= \argmax\limits_{s \in \mathcal{S}} \Pr(\boldsymbol{O}_i \mid s) = \argmax\limits_{s \in \mathcal{S}} \prod\limits_{f\in\mathcal{F}} {\Pr(f \mid s)}^{k_{i,f}},\\ &= \argmax\limits_{s \in \mathcal{S}} \sum\limits_{f\in\mathcal{F}} {k_{i,f}}\ln(\Pr(f\mid s)), \label{ml} \end{align} where $\mathcal{F}$ is a set of all aging functions. For our model, the values of $\Pr(f \mid s)$ for any $f \in \mathcal{F}$ and $s \in \mathcal{S}$ are known. In particular, $\Pr(a_i \mid s = 1) = m_1$, $\Pr(b_i \mid s = 1) = 1 - m_1$, $\Pr(a_i \mid s = 2) = 1 - m_2$, and $\Pr(b_i \mid s = 2) = m_2$. Therefore, the maximum likelihood in \eqref{ml} can be solved by the BS directly.\\ \indent Once the device type of a device $i$ is learned, the expected future AoI $\mathbb{E}[F_i]$ of device $i$ can be computed. For our model, if $H_i = 1$, then the expected future AoI $\mathbb{E}[F_i\mid H_i = 1]$ is: \begin{equation} \mathbb{E}[F_i \mid H_i = 1] = m_1(C_i + 1) + (1 - m_1)(2C_i). \end{equation} If $H_i = 2$, then the expected future AoI $\mathbb{E}[F_i\mid H_i = 2]$ is: \begin{equation} \mathbb{E}[F_i \mid H_i = 2] = (1 - m_2)(C_i + 1) + m_2(2C_i). \end{equation} The expected value of $F_i$ is used for the priority scheduling scheme only if the exact value of $F_i$ cannot be determined.\\ \indent Priority scheduling determines the RB allocation among the active IoT devices based on the future AoI $F_i$. In a time slot $\tau$, the RBs are allocated first to the devices with the highest $F_i$. Furthermore, for a device $i$ with highest $F_i$ at time slot $\tau$, the number of RBs allocated to device $i$ is $R_{i, \tau}$. 
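As a numerical aside (the description of the priority scheduling continues after this sketch), the following Python snippet illustrates how the BS can evaluate the maximum likelihood in \eqref{ml} and the expected future AoI for a device whose current aging function could not be determined; the probabilities $m_1$, $m_2$, the observation counts, and the reported AoI $C_i$ are assumed toy values.
\begin{verbatim}
import math

# Toy evaluation of the type estimate (ml) and of the expected future AoI
# used when the aging function of a device cannot be determined exactly.
m1, m2 = 0.8, 0.9       # assumed type characteristics known to the BS
k_lin, k_exp = 5, 2     # times the BS identified a linear / exponential message
C_i = 4                 # current AoI reported by the device (assumed)

log_lik = {
    1: k_lin * math.log(m1) + k_exp * math.log(1 - m1),
    2: k_lin * math.log(1 - m2) + k_exp * math.log(m2),
}
H_i = max(log_lik, key=log_lik.get)       # learned device type

if H_i == 1:
    EF_i = m1 * (C_i + 1) + (1 - m1) * (2 * C_i)
else:
    EF_i = (1 - m2) * (C_i + 1) + m2 * (2 * C_i)
print(H_i, EF_i)        # 1, 5.6 for these assumed values
\end{verbatim}
With these counts the device is classified as type $1$, and its expected future AoI weighs the linear and exponential continuations of $C_i$ by $m_1$ and $1-m_1$, as in the expressions above.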
If some of the active devices have the same $F_i$, then the RBs are allocated first to the devices whose type is more likely to have a faster aging function. For instance, if a device $i$ is of type $1$, a device $j$ is of type $2$, and $F_i = F_j$, then device $j$ has priority over device $i$, because device $j$ is more likely to have exponentially aging messages. The RBs are allocated to the active devices until all RBs are allocated or all active devices are allocated the RBs.\\ \begin{algorithm}[t] \caption{Priority scheduling based on future AoI at time slot $t$.} \begin{algorithmic}[1]\label{centalg}\vspace{1mm} \item[1 :] \hspace{0.0cm} Receive values $C_i$ and $R_{i,t}$, and initialize $R$. \item[2 :] \hspace{0.0cm} Compute $F_i$ or $\mathbb{E}[F_i]$ for each $C_i$. \item[3 :] \hspace{0.0cm} {\bf for} $j = 1, 2, \cdots$ \item[4 :] \hspace{0.3cm} $\mathcal{Z}_1 \leftarrow$ set of type $1$ devices with $j$-th highest value among $F_i$. \item[5 :] \hspace{0.3cm} $\mathcal{Z}_2 \leftarrow$ set of type $2$ devices with $j$-th highest value among $F_i$. \item[6 :] \hspace{0.3cm} {\bf for all $i \in \mathcal{Z}_2$} \item[7 :] \hspace{0.6cm} {\bf if} $R > 0$, \item[8 :] \hspace{0.9cm} Allocate $\textrm{min}(R, R_{i,t})$ RBs to device $i$. \item[9 :] \hspace{0.9cm} $R \leftarrow R - \textrm{min}(R, R_{i,t})$. {\bf end if} \item[10 :] \hspace{0.3cm} {\bf end for} \item[11 :] \hspace{0.3cm} {\bf for all $i \in \mathcal{Z}_1$} \item[12 :] \hspace{0.6cm} {\bf if} $R > 0$, \item[13 :] \hspace{0.9cm} Allocate $\textrm{min}(R, R_{i,t})$ RBs to device $i$. \item[14 :] \hspace{0.9cm} $R \leftarrow R - \textrm{min}(R, R_{i,t})$. {\bf end if} \item[15 :] \hspace{0.3cm} {\bf end for} \item[16 :] \hspace{0.0cm} {\bf end for} \end{algorithmic}\vspace{1mm} \end{algorithm} \indent One of the major problems with priority scheduling is the infinite blocking of low-priority tasks. However, since the priority depends on the future AoI, the priority of low-priority messages increases with time. Therefore, regardless of the aging function or the device type, the messages will eventually be allocated the RBs to transmit to the BS. One of the limitations of the centralized RB allocation scheme is the overhead related to the RB request via the RACH. With a higher number of active devices $N_t$ at time slot $t$, the RACH preamble collision probability $c_t$ becomes significant, and, thus, more transmission failures occur. Furthermore, with the centralized RB allocation scheme, frequent communication between the devices and the BS is required, which may not be viable for the IoT devices \cite{nocent}. However, the main advantage of the centralized RB allocation scheme is that the RBs are fully utilized. \subsection{Distributed RB Allocation} Distributed RB allocation enables the devices to allocate the RBs in a self-organizing manner without any intervention from the BS. Since a duplicate RB selection results in transmission failures, the active devices must choose the RBs such that no other device is choosing the same RB. Furthermore, the distributed RB allocation can be modeled as a one-to-one association between the RBs and the devices. The behavior of the devices, each wanting to choose an RB alone, can be formulated as a minority game \cite{iotkolkata}.\\ \indent A suitable minority game for distributed RB allocation in an IoT is the Kolkata paise restaurant (KPR) game \cite{kolkata}. The KPR game is a repeated game in which the customers simultaneously go to one of the restaurants, each of which can only serve one customer.
Additionally, the cost of going to a restaurant is the same for all restaurants, and a customer can only go to one restaurant at any given time. In the KPR game, the players are the customers, whose action in each iteration is to choose one of the restaurants. The payoff of a given player depends on the utility of the chosen restaurant and the number of players choosing the same restaurant. Given the players, actions, and payoffs of a game, one important stable solution is an NE. A vector of actions is an NE if no player can achieve a higher payoff by a unilateral change of action. For the KPR game, the existence of an NE depends on the utilities of the restaurants \cite{kolkata}, and an NE is reached when all customers go to different restaurants and none of the customers has a utility of $0$. If an NE exists in the KPR game, it coincides with the socially optimal solution, which is when all restaurants are being utilized.\\ \indent The fundamental structure of the KPR game can be readily extended for our IoT model. The customers can be modeled as IoT devices, and the restaurants can be modeled as the RBs. Furthermore, the cost of using any of the RBs is the same. However, there are significant differences between the KPR game and the IoT game. In the KPR game, the number of customers and the number of restaurants are the same, and each customer goes to one of the restaurants at every iteration. In the IoT game, the number of devices $N$ and the number of RBs $R$ may not be the same, and not all devices are active and need to use the RBs at each time slot. The most significant difference is the payoff in the case of duplicate RB selection. When multiple customers choose the same restaurant in the KPR game, one of those customers is randomly chosen to get the full payoff, while the other customers with duplicate selection get a zero payoff. However, in the IoT game, all devices that choose the same RB get a zero payoff because of the transmission failures.\\ \indent For our AoI minimization, the players are the $N$ IoT devices, and their action is to transmit using one of the RBs or not to transmit. We let $\mathcal{R}$ be the set of $R$ RBs and let $x_i(t)$ be the action of device $i$ at time slot $t$. If $x_i(t) \in \mathcal{R}$, then device $i$ transmits using RB $x_i(t)$ at time slot $t$. If $x_i(t) = 0$, then device $i$ does not transmit at time slot $t$. At each time slot, the payoff of each device depends on the actions of all devices. If a device transmits successfully by using an RB alone, then the payoff is $\rho$. Furthermore, a successful transmission using any of the RBs has the same payoff of $\rho$. If a device transmits unsuccessfully due to a duplicate RB usage, then the payoff is $-\gamma$. A transmission failure has a negative payoff because energy is consumed for the transmission without success. Additionally, a transmission failure using any of the RBs has the same payoff of $-\gamma$; here, $\rho$ and $\gamma$ are positive numbers such that $\rho > \gamma$.\\ \indent To minimize the AoI of the devices, active devices with a lower AoI must not transmit, while the active devices with a high AoI need to transmit. Using a distributed RB allocation scheme, the devices must know the AoI of other devices to determine if their own AoI is high enough to transmit. We assume that the active devices broadcast their own future AoI $F_i$ to other devices within the communication range $r_c$, and the communication resource for this broadcast is pre-allocated.
Moreover, we assume that device-to-device communication links are orthogonal to the uplink communication as done in \cite{iotd2d1, iotd2d2, iotd2d3, iotd2d4, iotd2d5}. Similar to the overhead related to the RACH uplink request for the centralized RB allocation scheme, the communication between the devices to share the AoI can be seen as the overhead of the distributed RB allocation scheme. However, depending on $r_c$, the active devices may not know $F_i$ of all other active devices. We let $\alpha_i$ be the active status of a device $i$ such that $\alpha_i = 0$ implies that device $i$ is inactive and $\alpha_i = 1$ implies that device $i$ is active. We let $\boldsymbol{A}$ be a vector that captures the future AoI $F_i$ of all active devices, and $\boldsymbol{A}_i$ be a vector of the future AoI $F_i$ of the active devices within $r_c$ of an active device $i$. With $r_c$ sufficiently large, $|\boldsymbol{A}_i| = |\boldsymbol{A}|$ for all devices, where $|\boldsymbol{A}|$ is the cardinality of $\boldsymbol{A}$. For an active device $i$, if $F_i$ is higher than the $\kappa$-th highest AoI in $\boldsymbol{A}_i$, then the active device $i$ transmits. The parameter $\kappa$ determines whether $F_i$ is sufficiently higher than the future AoI of the other active devices, and we let $\boldsymbol{A}_i(\kappa)$ be the $\kappa$-th highest AoI in $\boldsymbol{A}_i$. Moreover, $\kappa$ should ensure that the number of transmitting devices $T_t$ at time slot $t$ is equal to $R$ such that all transmitting devices can be allocated an RB. If $T_t$ is higher than $R$, then there are at least two active devices with transmission failure, which causes the average instantaneous AoI to increase. If $T_t$ is less than $R$, then some of the RBs are not used by any of the active devices, which may cause the average instantaneous AoI to increase. However, $T_t = R$ is difficult to achieve when $r_c$ is not sufficiently large and $|\boldsymbol{A}_i| < |\boldsymbol{A}|$.\\ \indent When $|\boldsymbol{A}_i| = |\boldsymbol{A}|$, the devices have full information on the future AoI $F_i$ of the active devices. Moreover, the devices know all active devices. Since the devices have full information of $F_i$, an active device $i$ transmits if $F_i \geq \boldsymbol{A}_i(R)$, and an active device $i$ does not transmit if $F_i < \boldsymbol{A}_i(R)$. Therefore, $\kappa = R$ when $|\boldsymbol{A}_i| = |\boldsymbol{A}|$ for all $i$. This ensures that $R$ active devices transmit, while $|\boldsymbol{A}| - R$ devices do not transmit. Therefore, the number of transmitting devices $T_t$ at time slot $t$ is equal to $R$. To incorporate this decision to transmit or not to transmit into the payoff, the payoff $y_{\textrm{full}}$ when an active device $i$ does not transmit is: \vspace{-2mm} \begin{multline} y_{\textrm{full}}(\boldsymbol{A}_i) = (\rho + \eta) \theta_{\mathbb{R}_{+}}(\boldsymbol{A}_i(R) - F_i-\eta)\\ - (\gamma + \eta) \theta_{\mathbb{R}_{+}}(F_i - \boldsymbol{A}_i(R)), \label{payoff1} \end{multline} where $\eta$ is a real number in $(0, 1)$ and $\theta_{\mathbb{R}_{+}}$ is an indicator function such that: \begin{equation} \theta_{\mathbb{R}_{+}}(x)=\left\{ \begin{array}{ll} 1 \hfill &\text{if} \ x \in [0, \infty),\\ 0 \hfill &\text{if} \ x \not\in [0, \infty). \end{array} \right. \end{equation} The role of $\eta$ in the payoff functions is to ensure that the payoff functions are used as intended and that only the appropriate indicator function is activated. 
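To make the full-information transmit rule concrete, the following minimal Python sketch evaluates \eqref{payoff1} and lets each active device transmit whenever staying silent would yield a negative payoff. The sketch is only illustrative and not part of the original system model: the function names are hypothetical, and the payoff values $\rho = 1$, $\gamma = 0.5$, $\eta = 0.5$ are example choices satisfying $\rho > \gamma > 0$ and $\eta \in (0,1)$.
\begin{verbatim}
import numpy as np

def y_full(F_i, A_i, R, rho=1.0, gamma=0.5, eta=0.5):
    """Payoff of an active device i for NOT transmitting under full
    information, following the indicator-based payoff in (payoff1)."""
    # A_i(R): the R-th highest future AoI known to device i
    A_sorted = np.sort(np.asarray(A_i))[::-1]
    threshold = A_sorted[min(R, len(A_sorted)) - 1]
    indicator = lambda x: 1.0 if x >= 0 else 0.0
    return ((rho + eta) * indicator(threshold - F_i - eta)
            - (gamma + eta) * indicator(F_i - threshold))

def transmit_decision(F, R, **params):
    """Each active device transmits iff staying silent has a negative payoff."""
    return [y_full(F_i, F, R, **params) < 0 for F_i in F]

# Example: 6 active devices, 3 RBs -> the 3 devices with the highest
# future AoI decide to transmit.
F = [12.0, 3.5, 8.0, 1.0, 20.0, 5.0]
print(transmit_decision(F, R=3))   # [True, False, True, False, True, False]
\end{verbatim}
In this example, exactly the three devices with the highest future AoI decide to transmit, matching the rule $F_i \geq \boldsymbol{A}_i(R)$.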
It is important to note that $y_{\textrm{full}}(\boldsymbol{A}_i)$ does not depend on the actions of the players, because the decision to transmit or not to transmit depends only on the future AoI. Given the payoff in \eqref{payoff1}, the $R$ active devices with $F_i \geq \boldsymbol{A}_i(R)$ transmit, because their payoff of not transmitting is $-(\gamma + \eta)$. The $|\boldsymbol{A}| - R$ active devices with $F_i < \boldsymbol{A}_i(R)$ do not transmit, because their payoff of not transmitting is $\rho + \eta$. Therefore, the IoT devices with the $R$ highest future AoI transmit, while the other devices do not transmit.\\ \indent In a more realistic scenario, the IoT devices do not have full information of $F_i$, such that $|\boldsymbol{A}_i| < |\boldsymbol{A}|$, and the devices do not know all active devices. It is difficult to make only $R$ active devices transmit at time slot $t$, and, thus, $\kappa$ is designed to make $T_t \approx R$. For the case of $|\boldsymbol{A}_i| < |\boldsymbol{A}|$, $\kappa$ is: \begin{equation} \kappa = \left\lceil\frac{R}{|\boldsymbol{A}|} |\boldsymbol{A}_i|\right\rceil, \label{kappa1} \end{equation} where $\kappa = R$ in the case of $\boldsymbol{A} = \boldsymbol{A}_i$. However, with partial information, the active devices do not know $\boldsymbol{A}$ in \eqref{kappa1}. $|\boldsymbol{A}|$ is the number of active devices, and the expected number of newly active devices is $Nv_a$. However, with previously active devices yet to transmit successfully, $|\boldsymbol{A}|$ is typically greater than $Nv_a$. Therefore, in \eqref{kappa1}, $|\boldsymbol{A}|$ can be estimated with $N v_a \zeta$, where $\zeta$ is a design parameter to account for the number of previously active devices yet to transmit successfully. With higher values of $\zeta$, the number of transmitting devices $T_t$ at time slot $t$ is smaller, while $T_t$ is larger with smaller values of $\zeta$. Therefore, with this approximation for $|\boldsymbol{A}|$, $\kappa$ is: \begin{equation} \kappa = \left\lceil\frac{R}{N v_a \zeta} |\boldsymbol{A}_i|\right\rceil. \label{kappa2} \end{equation} For an active device $i$, $\kappa$ in \eqref{kappa2} approximates whether $F_i$ is sufficiently higher than the future AoI of the other active devices based on the percentile of $F_i$ in the known vector of future AoI $\boldsymbol{A}_i$. With $|\boldsymbol{A}_i| < |\boldsymbol{A}|$, the payoff $y_{\textrm{act}}$ when an active device $i$ does not transmit is:\vspace{-2mm} \begin{multline} y_{\textrm{act}}(\boldsymbol{A}_i) = (\rho + \eta) \theta_{\mathbb{R}_{+}}(\boldsymbol{A}_i(\kappa) - F_i-\eta)\\ - (\gamma + \eta) \theta_{\mathbb{R}_{+}}(F_i - \boldsymbol{A}_i(\kappa)). \label{payoff2} \end{multline} It is important to note that $y_{\textrm{act}}(\boldsymbol{A}_i)$ is equal to $y_{\textrm{full}}(\boldsymbol{A}_i)$ when the active devices have full information with $|\boldsymbol{A}_i| = |\boldsymbol{A}|$ and $\kappa = R$. Similar to $y_{\textrm{full}}(\boldsymbol{A}_i)$ in \eqref{payoff1}, with the payoff $y_{\textrm{act}}(\boldsymbol{A}_i)$, the active devices with sufficiently high $F_i$ satisfying $F_i \geq \boldsymbol{A}_i(\kappa)$ transmit, while the active devices with $F_i$ such that $F_i < \boldsymbol{A}_i(\kappa)$ do not transmit. Moreover, $y_{\textrm{act}}(\boldsymbol{A}_i)$ also does not depend on the actions of the players.\\ \indent In the IoT game, the payoff needs to consider the inactive devices. The inactive devices with $\alpha_i = 0$ do not transmit as they do not have messages to transmit. 
The payoff $y_{\textrm{nt}}$ when a device $i$ does not transmit is: \vspace{-1mm} \begin{equation} y_{\textrm{nt}}(\boldsymbol{A}_i, \alpha_i) = (1 - \alpha_i)(\rho + \eta) + \alpha_i y_{\textrm{act}}(\boldsymbol{A}_i). \vspace{-1mm} \end{equation} With the payoff $y_{\textrm{nt}}(\boldsymbol{A}_i, \alpha_i)$ for not transmitting, the inactive devices with $\alpha_i = 0$ do not transmit as they get a payoff of $\rho + \eta$. The active devices with $\alpha_i = 1$ decide to transmit or not to transmit based on $\kappa$ and the future AoI. We let $\boldsymbol{x}(t) = [x_1(t), x_2(t), \cdots, x_N(t)]$ be a vector of all actions of the $N$ devices at time slot $t$. For a given $\boldsymbol{x}(t)$, the payoff function $y_i(\boldsymbol{x}(t), \boldsymbol{A}_i, \alpha_i)$ for a device $i$ at time slot $t$ is: \vspace{-2mm} \begin{multline} y_i(\boldsymbol{x}(t), \boldsymbol{A}_i, \alpha_i)\\ =\left\{ \begin{array}{ll} \rho \hfill &\text{if} \ x_i(t) \neq x_j(t) \ \forall \ j \neq i, x_i(t) \neq 0,\\ -\gamma \hfill &\text{if} \ \exists \ j \neq i \ \text{s.t.} \ x_j(t) = x_i(t) \neq 0,\\ y_{\textrm{nt}}(\boldsymbol{A}_i, \alpha_i) \hfill &\text{if} \ x_i(t) = 0.\\ \end{array} \right. \label{payoff3}\vspace{-2mm} \end{multline} In a simple game where $N = R = 2$ with $v_a = 1$, the payoffs of the two devices at each time slot are summarized in Table \ref{game1}. The NE of this IoT game is $\boldsymbol{x}(t) = [1, 2]$ or $\boldsymbol{x}(t) = [2, 1]$. For this simple IoT game, an NE is when the two devices choose different RBs and both get the payoff of $\rho$. If one device unilaterally deviates from the NE while the other device does not, the deviating device gets a lower payoff of $-\gamma$ or $-(\gamma + \eta)$. Furthermore, in the IoT game, an NE implies that a duplicate RB selection does not occur. An NE for a more general case of the IoT game can be found under certain conditions. \begin{table}[t] \centering \vspace{0mm} \caption{IoT game with $N = R = 2$.} \label{game1} \begin{tabular}{rccc} & $x_1(t) = 1$ & $x_1(t)=2$ & $x_1(t) = 0$ \\ \cline{2-4} \multicolumn{1}{r|}{$x_2(t) = 1$} & \multicolumn{1}{c|}{$(-\gamma, -\gamma)$} & \multicolumn{1}{c|}{$(\rho, \rho)$} & \multicolumn{1}{c|}{$(-(\gamma + \eta), \rho)$} \\ \cline{2-4} \multicolumn{1}{r|}{$x_2(t) = 2$} & \multicolumn{1}{c|}{$(\rho, \rho)$} & \multicolumn{1}{c|}{$(-\gamma, -\gamma)$} & \multicolumn{1}{c|}{$(-(\gamma + \eta), \rho)$} \\ \cline{2-4} \multicolumn{1}{r|}{$x_2(t) = 0$} & \multicolumn{1}{c|}{$(\rho, -(\gamma + \eta))$} & \multicolumn{1}{c|}{$(\rho, -(\gamma + \eta))$} & \multicolumn{1}{c|}{$(-(\gamma+\eta), -(\gamma + \eta))$} \\ \cline{2-4} \end{tabular}\vspace{-4mm} \end{table} \begin{theorem}\label{thm1} For the IoT game with $N$ players with action $x_i(t) \in \{0\} \cup \mathcal{R}$, payoff function $y_i(\boldsymbol{x}(t))$, and $|\boldsymbol{A}_i| = |\boldsymbol{A}|$ for all $i$, any vector of actions $\boldsymbol{x}(t)$ such that at most $R$ active devices with $F_i \geq \boldsymbol{A}_i(R)$ transmit, the rest of the devices do not transmit, and each of the RBs is used by at most one device is an NE. \end{theorem} \begin{proof} See Appendix \ref{app2}. \end{proof} \indent There are many sets of actions that satisfy the conditions described in Theorem \ref{thm1}, and, thus, the NE in the IoT game is not unique. For instance, when the number of devices is equal to the number of RBs with $v_a = 1$ in an IoT, an NE is when each RB is used by one device, and, thus, the number of NEs in that particular IoT game is $N!$. A brute-force verification of the NEs for the simple game of Table \ref{game1} is sketched below. 
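As a minimal illustration (not part of the original analysis; the payoff values $\rho = 1$, $\gamma = 0.5$, $\eta = 0.5$ are the same example choices as above, and the helper names are hypothetical), the NEs of the game in Table \ref{game1} can be recovered by enumerating all joint actions and keeping those from which no device gains by a unilateral deviation.
\begin{verbatim}
import itertools

RHO, GAMMA, ETA = 1.0, 0.5, 0.5   # example values with rho > gamma, eta in (0, 1)
ACTIONS = [0, 1, 2]               # 0 = stay silent, 1/2 = transmit on RB 1 or RB 2

def payoff(i, x):
    """Payoff of device i (0 or 1) under the joint action x = (x_0, x_1).
    Both devices are active (v_a = 1) with full information, so a silent
    active device always receives -(gamma + eta), as in Table I."""
    own, other = x[i], x[1 - i]
    if own == 0:
        return -(GAMMA + ETA)     # active device staying silent
    if own == other:
        return -GAMMA             # duplicate RB selection -> failure
    return RHO                    # successful transmission

def is_ne(x):
    """x is an NE if no device gains by unilaterally deviating."""
    return all(payoff(i, x) >= max(payoff(i, tuple(d if j == i else x[j]
                                                   for j in range(2)))
                                   for d in ACTIONS)
               for i in range(2))

print([x for x in itertools.product(ACTIONS, repeat=2) if is_ne(x)])
# [(1, 2), (2, 1)]
\end{verbatim}
The enumeration returns exactly the two action profiles in which the devices occupy different RBs, in agreement with the discussion above.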
There are many NEs, because the payoff of a successful transmission does not depend on which RB is used. \emph{Although there are many NEs, the expected payoffs of the devices at any given NE are the same, and, thus, one of those NEs is chosen by the distributed RB allocation algorithm discussed in Section \ref{secsca}.} Similar to the simple IoT game in Table \ref{game1}, an NE for our IoT game implies that the number of transmitting devices $T_t$ is equal to the number of RBs $R$ and that a duplicate RB selection does not occur. Furthermore, at an NE, a transmitting device has one of the $R$ highest AoI, and a device that is not transmitting is inactive or has low AoI. This is because the payoff function with $|\boldsymbol{A}_i| = |\boldsymbol{A}|$ is designed to only allow the messages with the $R$ highest AoI to be transmitted. Therefore, the convergence of a distributed RB allocation algorithm to an NE reduces the average instantaneous AoI. However, when the devices only have partial information such that $|\boldsymbol{A}_i| < |\boldsymbol{A}|$, the $\boldsymbol{x}(t)$ described in Theorem \ref{thm1} is not necessarily an NE, because $\kappa$ is not necessarily equal to $R$.\\ \indent In addition to the NE, another solution concept is a socially optimal solution, in which the overall payoff of the IoT game is maximized. In other words, a vector of actions is a socially optimal solution when the sum of the payoffs of all devices is maximized. In the IoT game, a socially optimal solution is when the devices with the highest future AoI $F_i$ fully utilize the RBs without any duplicate RB selection. Therefore, similar to an NE in the KPR game \cite{kolkata}, an NE in the IoT game coincides with a socially optimal solution. A performance metric that can be used to describe an NE and a socially optimal solution is the service rate $s_r$, which is the percentage of RBs that are used by exactly one device. An NE in Theorem \ref{thm1} has a service rate of $1$, which implies that each RB is used by exactly one device, or the highest possible service rate of $\sfrac{T_t}{R}$ when $T_t < R$.\\ \indent With the number of transmitting devices $T_t$ approximately equal to $R$ via the IoT game design, a distributed RB allocation algorithm is necessary to enable the transmitting devices to share the RBs autonomously. Moreover, with the existence of a socially optimal NE in our IoT game, the convergence of a distributed RB allocation algorithm to an NE is crucial in minimizing the average instantaneous AoI. Therefore, to evaluate different RB allocation algorithms, the convergence to an NE and the service rates are analyzed. The service rate is an important metric to determine convergence to an NE, because a vector of actions must achieve a service rate of $1$, or the highest possible service rate, to be an NE. Furthermore, the service rate is an important performance metric for the AoI, because a higher service rate implies that more devices are transmitting successfully at each time slot, reducing the average instantaneous AoI. Therefore, a distributed RB allocation algorithm must achieve a service rate of $1$ or push the service rate as high as possible, because a high service rate is required to achieve a low average instantaneous AoI. \subsubsection{Stochastic Crowd Avoidance} \label{secsca} We propose a stochastic crowd avoidance (SCA) algorithm that enables the devices to stochastically avoid the RBs that are used by many devices and to choose an RB for a successful transmission, as shown in Algorithm 2. 
For the SCA algorithm, the devices need to share more information in addition to their $F_i$ and can perform channel sensing to determine the RBs that were not used at the previous time slot \cite{aoibackoff, aoicsma, aoisleep}, and \cite{aoiinst}. At time slot $t$, the devices that transmitted at time slot $t-1$ share their previous actions $x_i(t-1) \in \mathcal{R}$ and the previous payoff $y_i(\boldsymbol{x}(t-1))$ of their transmission. We let $\boldsymbol{X}_i$ be the vector of actions of the transmitting devices at time slot $t-1$ that a device $i$ knows and $\boldsymbol{P}_i$ be the vector of payoffs of the transmitting devices at time slot $t-1$ that a device $i$ knows. Furthermore, we let $\boldsymbol{X}_i(x)$ with $x \in \mathcal{R}$ be the number of occurrences of $x$ in $\boldsymbol{X}_i$. In other words, $\boldsymbol{X}_i(x)$ is the number of devices that chose the RB $x$ at time slot $t-1$ that a device $i$ knows of. Learning from $\boldsymbol{X}_i$ and $\boldsymbol{P}_i$, the proposed SCA algorithm enables a transmitting device $i$ at time slot $t$ not to use the RBs that are being used successfully by other devices and to stochastically avoid the contended RBs.\\ \begin{algorithm}[t] \caption{SCA for device $i$ at time $t$.} \begin{algorithmic}[1]\label{distalg}\vspace{1mm} \item[1 :] \hspace{0.0cm} Receive $\boldsymbol{X}_i$, $\boldsymbol{P}_i$, $\boldsymbol{A}_i$, and $\mathcal{L}$. \item[2 :] \hspace{0.0cm} {\bf if} $x_i(t-1) \in \mathcal{R}$, $y_i(\boldsymbol{x}(t-1)) = \rho$, and $\alpha_i = 1$, \item[3 :] \hspace{0.3cm} $x_i(t) \leftarrow x_i(t-1)$. \item[4 :] \hspace{0.0cm} {\bf else if} $x_i(t-1) \in \mathcal{R}$, $y_i(\boldsymbol{x}(t-1)) = \rho$, and $\alpha_i = 0$, \item[5 :] \hspace{0.3cm} $j \leftarrow$ one neighboring device chosen with $\frac{F_j}{\sum_{F_h \in \boldsymbol{A}_i} F_h}$. \item[6 :] \hspace{0.3cm} $x_j(t) \leftarrow x_i(t-1)$. \item[7 :] \hspace{0.0cm} {\bf else if} $x_i(t-1) \in \mathcal{R}$ and $y_i(\boldsymbol{x}(t-1)) = -\gamma$, \item[8 :] \hspace{0.3cm} $z \leftarrow 1$ with probability $\boldsymbol{X}_i(x_i(t-1))^{-1}$. \item[9 :] \hspace{0.3cm} {\bf if} $z = 1$, $x_i(t) \leftarrow x_i(t-1)$. \item[10 :] \hspace{0.3cm} {\bf else} $x_i(t) \leftarrow$ randomly chosen from $\mathcal{L}$. {\bf end if}. \item[11 :] \hspace{0.0cm} {\bf else if} $x_i(t-1) = 0$, \item[12 :] \hspace{0.3cm} $x_i(t) \leftarrow$ randomly chosen from $\mathcal{L}$. \item[13 :] \hspace{0.0cm} {\bf end if.} \end{algorithmic} \end{algorithm} \indent Using the SCA algorithm, at time slot $t$, a transmitting device $i$ determines its RB usage based on $x_i(t-1)$ and $y_i(\boldsymbol{x}(t-1))$. If the transmission at time slot $t-1$ was successful, such that $x_i(t-1) \in \mathcal{R}$ and $y_i(\boldsymbol{x}(t-1)) = \rho$, then the transmitting device $i$ uses the same RB, $x_i(t) = x_i(t-1)$. If the transmission at time slot $t-1$ was unsuccessful, such that $x_i(t-1) \in \mathcal{R}$ and $y_i(\boldsymbol{x}(t-1)) = -\gamma$, then the transmitting device $i$ uses the same RB $x_i(t) = x_i(t-1)$ with probability $\boldsymbol{X}_i(x_i(t-1))^{-1}$ or chooses an RB from the set $\mathcal{L}$ uniformly at random. $\mathcal{L}$ is the set of the RBs that were not used by any of the devices at time slot $t-1$, as determined with channel sensing. If there was no transmission at time slot $t-1$, such that $x_i(t-1) = 0$, then the transmitting device $i$ chooses an RB from $\mathcal{L}$ uniformly at random. 
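The per-device decision of Algorithm 2 can be summarized by the following Python sketch. It is an illustrative simplification rather than the original implementation: the variable and function names are hypothetical, the payoff values $\rho = 1$, $\gamma = 0.5$ are example choices, and the hand-over step for a device that becomes inactive after a successful transmission (lines 4--6 of Algorithm 2) is discussed in the text below and omitted here.
\begin{verbatim}
import random

def sca_step(prev_rb, prev_payoff, X_i, L, rho=1.0, gamma=0.5):
    """One SCA decision for a single transmitting device at time slot t.

    prev_rb     : RB used at t-1, or 0 if the device did not transmit
    prev_payoff : payoff at t-1 (rho on success, -gamma on collision)
    X_i         : dict, RB -> number of known devices that used it at t-1
    L           : list of RBs observed idle at t-1 via channel sensing
    """
    if prev_rb != 0 and prev_payoff == rho:
        return prev_rb                              # success: keep the RB
    if prev_rb != 0 and prev_payoff == -gamma:
        contenders = max(X_i.get(prev_rb, 1), 1)
        if random.random() < 1.0 / contenders:      # stay with prob. 1/contenders
            return prev_rb
        return random.choice(L) if L else 0         # else move to an idle RB
    return random.choice(L) if L else 0             # no transmission at t-1
\end{verbatim}
The stochastic retention probability $1/\boldsymbol{X}_i(x_i(t-1))$ after a collision is what keeps, on average, one device on a previously contended RB, while the remaining contenders disperse over the idle RBs in $\mathcal{L}$.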
In the case where $v_a < 1$, a device that transmitted successfully at time slot $t-1$ may no longer be active at time slot $t$. In this case, a device $i$ with $\alpha_i = 0$ and $y_i(\boldsymbol{x}(t-1)) = \rho$ chooses a neighboring device $j$ with $\alpha_j = 1$ with a probability proportional to $F_j$, i.e., with probability $\sfrac{F_j}{\sum_{F_h \in \boldsymbol{A}_i} F_h}$, and hands its RB over to device $j$.\\ \indent In our SCA algorithm, the RB that is used successfully at time slot $t-1$ is also used successfully at time slot $t$ if $v_a = 1$ or if there is an active neighboring device. Moreover, with $r_c$ sufficiently large, the expected number of transmitting devices using an RB that was used by more than one device at the previous time slot $t-1$ is $1$ at the current time slot $t$. This is because the probability of choosing the same RB after a transmission failure is $\sfrac{1}{\boldsymbol{X}_i(x_i(t-1))}$. The devices avoid the same RB only stochastically even after a transmission failure, because a strict crowd avoidance would cause crowding in other RBs, resulting in more transmission failures. Furthermore, when a device $i$ chooses some other RB such that $x_i(t) \neq x_i(t-1)$, device $i$ chooses from the set of RBs $\mathcal{L}$ that were not used at time slot $t-1$. This is to avoid using the RBs that are either used successfully or crowded, both of which cause transmission failures. With the SCA algorithm designed to avoid duplicate RB selection, we next prove that the proposed SCA algorithm converges to an NE under certain IoT system parameters. \begin{theorem}\label{thm2} When $N$ devices are always active with full information and use $1$ out of $N$ RBs to transmit with negligible outage probability $p_{i,t}$ at each time slot, the vector of actions $\boldsymbol{x}(t)$ converges to an NE using SCA. \end{theorem} \begin{proof} See Appendix \ref{app3}. \end{proof} Under the conditions in Theorem \ref{thm2}, $\boldsymbol{x}(t)$ converges to an NE, and this implies that the service rate increases to $1$. In general, the SCA algorithm increases the service rate, because an RB that is used by one device at the previous time slot $t-1$ is still used by one device at the current time slot $t$. Therefore, the SCA algorithm is effective in reducing the average instantaneous AoI. However, SCA is susceptible to a high outage probability $p_{i,t}$ based on the SNR. This is because SCA cannot distinguish between a transmission failure due to duplicate RB selection and a transmission failure due to an SNR outage. Furthermore, with partial information $|\boldsymbol{A}_i| < |\boldsymbol{A}|$, the devices also have partial information of $\boldsymbol{X}_i$ and $\boldsymbol{P}_i$, and, thus, the devices cannot choose $x_i(t)$ accurately in the case of a transmission failure.\\ \indent In a massive IoT with $N > R$ and partial information, the vector of actions $\boldsymbol{x}(t)$ does not converge to an NE using SCA, because it is not possible to achieve a service rate of $1$. The service rate cannot be $1$, because the transmitting devices change at every time slot and a duplicate RB selection is inevitable. Since the service rate cannot be $1$, the average instantaneous AoI in a massive IoT is higher than the average instantaneous AoI in the ideal IoT described in Theorem \ref{thm2}. However, in a massive IoT, the proposed SCA algorithm still enables the transmitting devices to stochastically avoid duplicate RB selection with the available information. 
Therefore, the proposed SCA algorithm still increases the service rate and reduces the average instantaneous AoI. However, in that case, it does not reach an NE but rather a sub-optimal, heuristic solution. To evaluate the performance of the proposed SCA algorithm, we next study random RB selection as a distributed RB allocation scheme. \subsubsection{Random RB Selection} One way for the transmitting devices to determine their RB usage is via random selection. In other words, the actions $x_i(t)$ of the transmitting devices are chosen uniformly at random in $\mathcal{R}$. This random RB selection is used as a baseline. Even with $|\boldsymbol{A}_i| = |\boldsymbol{A}|$ for all $i$, random RB selection is highly unlikely to achieve an $\boldsymbol{x}(t)$ such that each of the RBs is used by at most one device, which is the requirement for $\boldsymbol{x}(t)$ to be an NE. Furthermore, the service rate $s_r$ using random RB selection for the transmitting devices is low. \begin{proposition} \normalfont At a time slot $t$, the service rate $s_r$ with $T_t$ transmitting devices using random RB selection is: \begin{equation} s_r = \frac{T_t}{R} \left(\frac{R-1}{R}\right)^{T_t - 1},\label{srr} \end{equation} and, for a massive IoT with $N$ increasing to infinity, the service rate $s_r$ is: \begin{equation} \lim_{N \rightarrow \infty} s_r = \frac{T_t}{R-1} \exp\left(\frac{-T_t}{R}\right). \label{srriot} \end{equation} \end{proposition} \begin{proof} See Appendix \ref{app4}. \end{proof} Under the conditions in Theorem \ref{thm2}, when $N$ devices are always active and use $1$ out of $N$ RBs, the number of transmitting devices $T_t$ is equal to $N$. In this case, the service rate using random RB selection is always less than $1$ even with $N = R \geq 2$. On the other hand, for a massive IoT with $N > R$, the service rate using random RB selection decreases exponentially to $0$ as $N$ increases. Therefore, random RB selection is not suitable for a massive IoT in which the number of devices $N$ outnumbers the number of RBs $R$. Furthermore, with a low service rate, the probability of a successful transmission is low for a device, and, thus, the average instantaneous AoI is high. \section{Simulation Results and Analysis} For our simulations, we consider a rectangular area with width $w$ and length $l$ within which the $N$ devices are deployed following a Poisson point process. We let $w = l = 10$ m and $R = 50$ with a $10$ MHz frequency band \cite{50rblte}, while the number of devices $N$ will be varied for the analysis. We choose a time slot duration of $1$ ms \cite{1msts} and an expected SNR of $20$ dB with $\lambda = 0.1$ and $\sigma^2 = 0.001$. To vary the outage probability $p_{i,t}$, different values of $\epsilon$ are used. Moreover, a device is assumed to be of type $1$ with probability $0.6$ with $m_1 = 0.75$, while a device is assumed to be of type $2$ with probability $0.4$ with $m_2 = 0.75$. The average of the current AoI $C_i$ at the time slot of successful transmission is the performance metric for the different RB allocation schemes, and their performance is analyzed while varying $v_a$ and $p_{i,t}$.\\ \indent For the centralized RB allocation scheme, the number of RACH preambles $P$ for the uplink transmission request is $64$ \cite{rach}. Three different kinds of priority scheduling are analyzed. Priority scheduling without learning \cite{aoischedule1, aoischedule3, multihop, adhoc} does not learn the device types, and, hence, this scheme is used as a baseline for comparison. 
The proposed priority scheduling with learning learns the device types using maximum likelihood and information on $F_i$. Priority scheduling with full information assumes that the BS always knows the types of all devices, and, hence, this scheme is used as an optimal benchmark for comparison. All three priority scheduling algorithms are analyzed with $N = 500$, while varying $v_a$ and $p_{i,t}$. \begin{figure}[t] \centering \includegraphics[width = 9cm]{cent_actprob.eps}\vspace{-.2cm} \caption{Average instantaneous AoI using centralized RB allocation schemes while varying $v_a$.}\vspace{-.5cm} \label{cpa} \end{figure} Fig. \ref{cpa} shows the average instantaneous AoI of the devices using the centralized RB allocation schemes for different values of the activation probability $v_a$ with $p_{i,t} = 0.01$ and $\epsilon = 1$. The average instantaneous AoI for the no-learning case quickly increases to $27.88$ and then increases slowly above $30$, while the average instantaneous AoI for both learning and full information quickly increases to $19.62$ and then increases slowly above $20$. As $v_a$ increases, the average instantaneous AoI increases for all priority scheduling algorithms, because more devices are transmitting and RACH preamble collisions are more likely to occur. However, for $v_a > 0.2$, the average instantaneous AoI flattens and increases at a much slower rate with increasing $v_a$ for all priority scheduling algorithms, because all RBs are fully saturated. Moreover, there is a significant difference between priority scheduling with learning and without learning. After the average instantaneous AoI flattens, the difference in the average instantaneous AoI between priority scheduling with learning and without learning is consistently about $8$. This implies that the proposed priority scheduling scheme with learning achieves about $26.7\%$ lower average instantaneous AoI when compared to the simple priority scheduling scheme. Hence, learning the device types is important to decrease the average instantaneous AoI for priority scheduling. However, there is an insignificant difference between priority scheduling with learning and with full information, which implies that the proposed scheme is effective in learning the device types. \begin{figure}[t] \centering \includegraphics[width = 9cm]{cent_actprob_xp.eps}\vspace{-.2cm} \caption{Average instantaneous AoI using centralized RB allocation schemes with different transmit powers while varying $v_a$.}\vspace{-.5cm} \label{cpaxp} \end{figure} Fig. \ref{cpaxp} shows the average instantaneous AoI of the devices with different transmit powers using the centralized RB allocation schemes for different values of the activation probability $v_a$ with $p_{i,t} = 0.01$ and $\epsilon = 1$. The devices have different transmit powers such that their SNR values are uniformly distributed random variables from $17.0$ dB to $21.8$ dB, and the only difference between Fig. \ref{cpa} and Fig. \ref{cpaxp} is the assumption on the transmit powers of the devices. The overall trends of the average instantaneous AoI for all three centralized RB allocation schemes are similar to the trends shown in Fig. \ref{cpa}. However, a notable difference is the average instantaneous AoI after the curves flatten. After the average instantaneous AoI flattens, the average instantaneous AoI increases slowly above $30$ for priority scheduling without learning, increases slowly to $25$ for priority scheduling with learning, and increases slowly to $23.5$ for priority scheduling with full information. 
With different SNR values randomly assigned to the devices, some devices have a higher outage probability and other devices have a lower outage probability compared to the devices in Fig. \ref{cpa}. With exponentially aging messages, the increase in the average instantaneous AoI from the devices with higher outage probability outweighs the decrease in the average instantaneous AoI from the devices with lower outage probability. Therefore, there is a slight increase in the average instantaneous AoI when the devices have different transmit powers. However, similar to Fig. \ref{cpa}, learning can still effectively decrease the average instantaneous AoI, and priority scheduling with learning performs similarly to priority scheduling with full information even when the devices have different transmit powers. \begin{figure}[t] \centering \includegraphics[width = 9cm]{cent_eps.eps}\vspace{-.2cm} \caption{Average instantaneous AoI using centralized RB allocation schemes while varying $p_{i,t}$.}\vspace{-.5cm} \label{ceps} \end{figure} Fig. \ref{ceps} shows the average instantaneous AoI of the devices using the centralized RB allocation schemes for different values of the SNR outage probability $p_{i,t}$ with $v_a = 0.2$ and $\epsilon$ varying from $1$ to $20$. Unlike in Fig. \ref{cpa}, the average instantaneous AoI does not flatten and increases at an approximately constant rate. With $p_{i,t} = 0.02$, the average instantaneous AoI without learning is $24.34$, and the average instantaneous AoI with learning is $18.41$. With $p_{i,t} = 0.16$, the average instantaneous AoI without learning is $40.17$, and the average instantaneous AoI with learning is $27.45$. Therefore, the difference between the average instantaneous AoI with learning and without learning increases as $p_{i,t}$ increases. Furthermore, with high $p_{i,t}$, the proposed priority scheduling scheme with learning achieves about $31.7\%$ lower average instantaneous AoI when compared to the simple priority scheduling scheme. For a higher $p_{i,t}$ and more frequent transmission failures, the AoI of the exponentially aging messages becomes much higher than the AoI of the linearly aging messages. In this case, the learning scheme, which enables the BS to accurately identify the messages aging faster, becomes more crucial in reducing the average instantaneous AoI. Furthermore, the difference between the average instantaneous AoI with learning and with full information also increases as $p_{i,t}$ increases. This is because it becomes increasingly difficult to learn the device types as $p_{i,t}$ increases.\\ \indent For the distributed RB allocation scheme, the communication range $r_c$ determines the information $\boldsymbol{A}_i$ that the devices have. For the given dimensions of the deployment area, $r_c \geq 15$ m is sufficiently large such that $|\boldsymbol{A}_i| = |\boldsymbol{A}|$ for any device $i$. The SCA algorithm is compared against two algorithms: random RB selection and pre-determined RB selection. The pre-determined RB selection scheme \cite{kolkata} is known as the dictator's solution in the KPR game, and the RB usage for a device $i$ is pre-determined based on the rank of $F_i$. For instance, if $F_i$ is the $\kappa$-th highest in $\boldsymbol{A}_i$ for $\kappa \leq R$, device $i$ uses a specific RB as previously agreed among the IoT devices. The pre-determined RB allocation scheme requires full information $|\boldsymbol{A}_i| = |\boldsymbol{A}|$ for all $i$ and always achieves a service rate $s_r$ of $1$. 
While the pre-determined RB selection is used as an optimal benchmark, random RB selection is used as a baseline for comparison. To analyze the different distributed RB allocation schemes, the activation probability $v_a$, the SNR outage probability $p_{i,t}$, and the communication range $r_c$ are varied. In addition to the average instantaneous AoI, the service rate $s_r$ is evaluated for the different distributed RB allocation schemes. \begin{figure}[t] \centering \includegraphics[width = 9cm]{dist_actprob.eps}\vspace{-.2cm} \caption{Average instantaneous AoI and service rate using distributed RB allocation schemes while varying $v_a$.}\vspace{-.5cm} \label{dpa} \end{figure} Fig. \ref{dpa} shows the average instantaneous AoI and the service rate of the devices using the distributed RB allocation schemes for different values of the activation probability $v_a$ with $p_{i,t} = 0.01$, $\epsilon = 1$, $r_c = 10$ m, and $N = 200$. It is important to note that the expected number of newly active devices at a given time slot is $Nv_a$, and, thus, the number of active devices $N_t$ outnumbers the number of RBs $R$ for high values of $v_a$, simulating a massive IoT. Moreover, with $r_c = 10$ m, SCA only has partial information such that $|\boldsymbol{A}_i| < |\boldsymbol{A}|$ for all $i$. The service rate converges to $0.42$ for SCA with $v_a = 0.25$, $0.41$ for SCA with $v_a = 0.35$, $0.39$ for SCA with $v_a = 0.45$, and $0.35$ for random RB allocation with $v_a = 0.35$. As $v_a$ increases from $0.25$ to $0.45$, the average instantaneous AoI increases from $22.16$ to $38.82$ using SCA. As $N_t$ increases with increasing $v_a$, a transmission failure is more likely to occur due to duplicate RB selection, because $R$ is fixed. Therefore, as $v_a$ increases, $s_r$ decreases, and the average instantaneous AoI increases. The pre-determined RB allocation scheme achieves a much lower average instantaneous AoI compared to SCA, because the pre-determined RB allocation scheme requires and uses full information. Furthermore, the average instantaneous AoI with random RB allocation is multiple orders of magnitude higher than the average instantaneous AoI with SCA. Therefore, in a massive IoT with partial information, the proposed SCA algorithm is the most suitable algorithm to achieve a low average instantaneous AoI, as it strikes a balance between requiring full information and performing arbitrary allocations. \begin{figure}[t] \centering \includegraphics[width = 9cm]{dist_actprob_xp.eps}\vspace{-.2cm} \caption{Average instantaneous AoI and service rate using distributed RB allocation schemes with different transmit powers while varying $v_a$.}\vspace{-.5cm} \label{dpaxp} \end{figure} Fig. \ref{dpaxp} shows the average instantaneous AoI and the service rate of the devices with different transmit powers using the distributed RB allocation schemes for different values of the activation probability $v_a$ with $p_{i,t} = 0.01$, $\epsilon = 1$, $r_c = 10$ m, and $N = 200$. The devices have different transmit powers such that their SNR values are uniformly distributed random variables from $17.0$ dB to $21.8$ dB, and the only difference between Fig. \ref{dpa} and Fig. \ref{dpaxp} is the assumption on the transmit powers of the devices. The difference in the service rates between Fig. \ref{dpa} and Fig. \ref{dpaxp} is insignificant. However, there is a notable increase in the average instantaneous AoI in Fig. \ref{dpaxp} compared to Fig. \ref{dpa}. 
Some devices have a higher outage probability and other devices have a lower outage probability, because the devices have different transmit powers. With an exponential aging function, the increase in the average instantaneous AoI from the devices with higher outage probability is more significant than the decrease in the average instantaneous AoI from the devices with lower outage probability. Therefore, similar to Fig. \ref{cpaxp}, there is a slight increase in the average instantaneous AoI when the devices have different transmit powers. Even when the devices have different transmit powers, the proposed SCA algorithm is still the most suitable algorithm to achieve a low average instantaneous AoI in a massive IoT with partial information. \begin{figure}[t] \centering \includegraphics[width = 9cm]{dist_eps.eps}\vspace{-.2cm} \caption{Average instantaneous AoI and service rate using distributed RB allocation schemes while varying $p_{i,t}$.}\vspace{-.5cm} \label{deps} \end{figure} Fig. \ref{deps} shows the average instantaneous AoI and the service rate of the devices using the distributed RB allocation schemes for different values of the SNR outage probability $p_{i,t}$ with $r_c = 10$ m, $v_a = 1$, $N = R = 50$, and $\epsilon$ varying from $1$ to $5$. It is important to note that the number of active devices $N_t$ is equal to $R$ with $N = R$ and $v_a = 1$, and this is the condition considered in Theorem \ref{thm2}. The service rate converges to $0.98$ for SCA with $p_{i,t} = 0.01$, $0.92$ for SCA with $p_{i,t} = 0.03$, $0.88$ for SCA with $p_{i,t} = 0.05$, and $0.37$ for random RB allocation. As $p_{i,t}$ increases from $0.01$ to $0.05$, the average instantaneous AoI increases from $1.25$ to $4.14$ using SCA, while the average instantaneous AoI increases from $5.73$ to $7.64$ using random RB allocation. With high $p_{i,t}$, the proposed SCA algorithm achieves about $45.8\%$ lower average instantaneous AoI when compared to random RB allocation. Since $T_t = N$ with $v_a = 1$ and $N = R$, the theoretical value of $s_r$ in \eqref{srr} for the random RB allocation case matches the simulated value of $s_r$ in Fig. \ref{deps}. As $p_{i,t}$ increases, the converged value of $s_r$ for SCA decreases, because the proposed SCA assumes that the transmission failures are caused by duplicate RB selection. Therefore, using SCA, a device $i$ stochastically avoids using an RB even when there was no duplicate RB selection and the transmission failure was caused by an SNR outage. Furthermore, the difference between the average instantaneous AoI using SCA and random RB allocation decreases as $p_{i,t}$ increases, because increasing $p_{i,t}$ has a more negative impact on SCA than on random RB allocation. However, it is important to note that SCA with low $p_{i,t}$ converges quickly to a service rate of $1$, as discussed in Theorem \ref{thm2}. \begin{figure}[t] \centering \includegraphics[width = 9cm]{dist_rc.eps}\vspace{-.2cm} \caption{Average instantaneous AoI and service rate using distributed RB allocation schemes while varying $r_c$.}\vspace{-.5cm} \label{drc} \end{figure} Fig. \ref{drc} shows the average instantaneous AoI and the service rate of the devices using the distributed RB allocation schemes for different values of the communication range $r_c$ with $p_{i,t} = 0.01$, $\epsilon = 1$, $v_a = 1$, and $N = R = 50$. The communication range $r_c$ determines the amount of information $\boldsymbol{A}_i$ that the devices have, and $r_c = 15$ m implies that the devices have full information $|\boldsymbol{A}_i| = |\boldsymbol{A}|$ for all $i$. 
The service rate converges to $0.981$ for SCA with $r_c = 15$ m, $0.978$ for SCA with $r_c = 10$ m, $0.967$ for SCA with $r_c = 5$ m, $0.939$ for SCA with $r_c = 1$ m, and $0.374$ for random RB allocation. As $r_c$ increases from $2$ m to $15$ m, the average instantaneous AoI decreases from $2.62$ to $1.21$ using SCA, while the average instantaneous AoI decreases from $7.19$ to $5.07$ using random RB allocation. With low $r_c$, the proposed SCA algorithm achieves about $63.6\%$ lower average instantaneous AoI when compared to random RB allocation. Similar to Fig. \ref{deps}, the theoretical and simulated values of $s_r$ for random RB allocation match. As $r_c$ increases, the converged value of $s_r$ for SCA increases, because the devices have more information $\boldsymbol{A}_i$ with higher $r_c$. When $r_c$ increases sufficiently such that $|\boldsymbol{A}_i| = |\boldsymbol{A}|$ for all $i$, the service rate converges to $1$, as discussed in Theorem \ref{thm2}. However, SCA with only partial information can still achieve an $s_r$ close to $1$. Moreover, as $r_c$ increases, the average instantaneous AoI using SCA converges to the average instantaneous AoI using the pre-determined RB allocation scheme.\\ \indent Next, the proposed centralized and distributed RB allocation schemes are compared in a massive IoT with $N > R$ and in the ideal IoT described in Theorem \ref{thm2}. To analyze the different RB allocation schemes, the activation probability $v_a$ and the SNR outage probability $p_{i,t}$ are varied. \begin{figure}[t] \centering \includegraphics[width = 9cm]{cd_beta.eps}\vspace{-.2cm} \caption{Average instantaneous AoI using centralized and distributed RB allocation schemes while varying $\beta$ in a massive IoT.}\vspace{-.5cm} \label{beta} \end{figure} Fig. \ref{beta} shows the average instantaneous AoI of the devices using the centralized and distributed RB allocation schemes for different values of $\beta$ with $p_{i,t} = 0.01$, $\epsilon = 1$, $r_c = 10$ m, $N = 100$, and $v_a = 1$. This simulates a massive IoT as the number of devices $N$ greatly outnumbers the number of RBs $R$. For the centralized RB allocation scheme, as $\beta$ increases, the average instantaneous AoI with learning decreases from $3.6$ to $3.13$, and the average instantaneous AoI with full information decreases from $3.34$ to $3.05$. However, the average instantaneous AoI without learning does not change significantly. With many type $2$ devices frequently transmitting exponentially aging messages, high values of $\beta$ can effectively reduce the average instantaneous AoI of type $2$ devices as the BS learns the device types. However, without learning the device types, high values of $\beta$ are ineffective in reducing the average instantaneous AoI. For the distributed RB allocation scheme, as $\beta$ increases, the average instantaneous AoI with random selection decreases from $79.5$ to $25.1$, and the average instantaneous AoI with the proposed SCA decreases from $13.4$ to $5.1$. This is because high values of $\beta$ enable the exponentially aging messages to be transmitted before their AoI increases greatly due to duplicate RB selection. However, the average instantaneous AoI with pre-determined selection does not change significantly, because most duplicate RB selections can be avoided with the given $r_c$ and pre-determined selection. Thus, $\beta$ affects the centralized and distributed RB selection schemes differently depending on the other parameters of the IoT. 
\begin{figure}[t] \centering \includegraphics[width = 9cm]{cd_actprob.eps}\vspace{-.2cm} \caption{Average instantaneous AoI using centralized and distributed RB allocation schemes while varying $v_a$ in a massive IoT.}\vspace{-.5cm} \label{cdpa} \end{figure} Fig. \ref{cdpa} shows the average instantaneous AoI of the devices using the centralized and distributed RB allocation schemes for different values of the activation probability $v_a$ with $p_{i,t} = 0.01$, $\epsilon = 1$, and $N = 200$. This simulates a massive IoT as the number of devices $N$ greatly outnumbers the number of RBs $R$. As $v_a$ increases, the average instantaneous AoI converges to $200$ for SCA with $r_c = 5$ m, $157$ for SCA with $r_c = 10$ m, and $145$ for SCA with $r_c = 15$ m. On the other hand, for priority scheduling, the average instantaneous AoI increases from $1.14$ to $6.03$ as $v_a$ increases from $0.1$ to $0.5$. With high $v_a$, the proposed SCA algorithm with $r_c = 15$ m yields an average instantaneous AoI about $24$ times higher than the proposed priority scheduling with learning. For almost all values of $v_a$, centralized RB allocation with priority scheduling performs much better than distributed RB allocation with SCA in terms of the average instantaneous AoI. However, the centralized RB allocation scheme requires the BS to dictate the RB allocation for all devices, and, thus, it may not be viable for some IoT deployments. Moreover, similar to Fig. \ref{cpa}, the average instantaneous AoI flattens after a certain value of $v_a$, because all RBs are fully saturated. There is a performance gap between SCA with different values of $r_c$, because $r_c$ is directly related to the amount of information that the devices have. With higher $r_c$ and more information for the devices, SCA is more effective in reducing the average instantaneous AoI. \begin{figure}[t] \centering \includegraphics[width = 9cm]{cd_eps.eps}\vspace{-.2cm} \caption{Average instantaneous AoI using centralized and distributed RB allocation schemes while varying $p_{i,t}$ in a massive IoT.}\vspace{-.5cm} \label{cdeps} \end{figure} Fig. \ref{cdeps} shows the average instantaneous AoI of the devices using the centralized and distributed RB allocation schemes for different values of the SNR outage probability $p_{i,t}$ with $v_a = 0.5$, $N = 200$, and $\epsilon$ varying from $1$ to $20$. Similar to Fig. \ref{cdpa}, this simulates a massive IoT. When $p_{i,t} = 0.18$, the average instantaneous AoI is $585.59$ using SCA with $r_c = 5$ m, $529.94$ using SCA with $r_c = 10$ m, $472.81$ using SCA with $r_c = 15$ m, and $11.90$ using priority scheduling. With high $p_{i,t}$, the proposed SCA algorithm with $r_c = 15$ m yields an average instantaneous AoI about $40$ times higher than the proposed priority scheduling with learning. Similar to Fig. \ref{cdpa}, centralized RB allocation with priority scheduling performs much better than distributed RB allocation with SCA in terms of the average instantaneous AoI for all values of $p_{i,t}$. It is interesting to note that the difference in the average instantaneous AoI between the SCA variants increases as $p_{i,t}$ increases. This implies that SCA with less information is more severely affected by increasing $p_{i,t}$ than SCA with more information. 
\begin{figure}[t] \centering \includegraphics[width = 9cm]{cd_epsid.eps}\vspace{-.2cm} \caption{Average instantaneous AoI using centralized and distributed RB allocation schemes while varying $p_{i,t}$ in an ideal IoT.}\vspace{-.5cm} \label{cdid} \end{figure} Fig. \ref{cdid} shows the average instantaneous AoI of the devices using the centralized and distributed RB allocation schemes for different values of the SNR outage probability $p_{i,t}$ with $v_a = 1$, $N = R = 50$, and $\epsilon$ varying from $1$ to $20$. This simulates an ideal IoT for SCA as some of the conditions for NE convergence in Theorem \ref{thm2} are satisfied. When $p_{i,t} = 0.18$, the average instantaneous AoI is $25.53$ using SCA with $r_c = 5$ m, $19.96$ using SCA with $r_c = 10$ m, $17.27$ using SCA with $r_c = 15$ m, and $2.21$ using priority scheduling. With high $p_{i,t}$, the proposed SCA algorithm with $r_c = 15$ m yields an average instantaneous AoI about $8$ times higher than the proposed priority scheduling with learning. It is interesting to note that, even in an ideal IoT for SCA, priority scheduling with learning performs better than SCA in terms of the average instantaneous AoI for most values of $p_{i,t}$. Moreover, similar to Fig. \ref{cdeps}, the difference in the average instantaneous AoI between the SCA variants increases as $p_{i,t}$ increases.\\ \indent From our simulations, we observe that both priority scheduling and SCA are susceptible to a high SNR outage probability $p_{i,t}$, as the average instantaneous AoI increases without flattening as $p_{i,t}$ increases. This is because the SNR outage probability is directly related to the transmission failures. However, the average instantaneous AoI increases slowly after a certain value of the activation probability $v_a$, because the RBs are fully saturated. Since increasing $v_a$ with fixed $N$ is equivalent to increasing $N$ with fixed $v_a$, the average instantaneous AoI also flattens for the case in which only the number of devices $N$ increases. Although the centralized RB allocation scheme outperforms the distributed RB allocation scheme in most cases, SCA can still achieve a high service rate $s_r$ and a low average instantaneous AoI with only partial information. Furthermore, the communication range $r_c$ and information availability are critical to the performance of SCA. \section{Conclusion} In this paper, we have proposed centralized and distributed approaches for allocating the limited communication resources based on the aging function and the current AoI of IoT devices. In the presence of both linear and exponential aging functions, we have shown that comparing the future AoI achieves a lower average instantaneous AoI at the BS than comparing the current AoI. For the centralized approach, we have introduced a priority scheduling scheme with learning, which enables the BS to allocate the limited RBs to the heterogeneous devices based on their future AoI. For the distributed approach, we have formulated the problem of autonomously allocating the limited RBs to the devices using game theory, and we have designed payoff functions to encourage the devices with high AoI to transmit, while discouraging the devices with low AoI from transmitting. Furthermore, we have proposed a novel SCA algorithm such that the heterogeneous devices can allocate the RBs in a self-organizing manner to avoid duplicate RB selection and to minimize the AoI. We have established the conditions that a vector of actions in the IoT game must satisfy to be an NE. 
Furthermore, we have proved that the actions of the devices using our proposed SCA algorithm converge to an NE if the devices have sufficient information under certain network parameters. Simulation results have shown that the average instantaneous AoI is an increasing function of the activation probability and the SNR outage probability. Moreover, the simulation results have shown that the service rate is an increasing function of the communication range and a decreasing function of the activation probability and the SNR outage probability. We have compared our centralized and distributed RB allocation schemes, and we have shown that our centralized RB allocation scheme outperforms our distributed RB allocation scheme in most cases. However, our proposed SCA algorithm has been shown to be effective in reducing the AoI and increasing the service rate with only partial information. With a high SNR outage probability, the proposed priority scheduling scheme with learning has been shown to achieve about $31.7\%$ lower average instantaneous AoI when compared to the simple priority scheduling scheme. Furthermore, with a high SNR outage probability, the proposed SCA algorithm has been shown to achieve about $45.8\%$ lower average instantaneous AoI when compared to random RB allocation. \section*{Acknowledgment} This research was supported by the U.S. Office of Naval Research (ONR) under Grant N00014-19-1-2621.
\section{Introduction} Local quantum field theory is a powerful framework to build models of physical reality. In particular, the Standard Model is known for its extremely precise predictions. Its modifications, known as ``Beyond Standard Model Physics", are also local quantum field theories. For models of quantum gravity, local quantum field theories serve as low-energy effective theories. Despite this success, it is very unlikely that a physical theory including all known interactions can be built within that framework. In particular, it was argued by many authors, starting from the early days of quantum field theory \cite{Bronstein, Klein}, that quantum gravity cannot be local. The physical reason for that is the interplay between the Heisenberg uncertainty relations and the role played by energy and momentum in General Relativity. As a result, any attempt at an overly localized measurement inevitably produces a dramatic change of the geometry, such as the formation of a black hole, making the measurement impossible. Assuming that the main object of the theory is a system of (in principle) measurable observables, we conclude that the language of local QFT is not appropriate. There is a related long-standing conjecture that the gravity-induced non-locality may be a cure for the ultraviolet divergences of local QFT \cite{Deser}, since the latter are caused by the singular products of point-localized observables, or, equivalently, by point-localized elementary scatterings. \par For this reason, non-local quantum field theories attract interest. In particular, many non-local models are considered as effective theories of quantum field theory on quantum (non-commutative) spacetimes, e.g. \cite{DFR,NekrasovNCQFT}. \par \begin{rmk} \label{rmk:Intro/TwoLocalities} Locality plays a crucial role in both the mathematical and physical aspects of Quantum Field Theory. The meaning of this word is, however, slightly different in the two contexts. In mathematics, in particular in algebraic Quantum Field Theory, by locality one usually means commutativity of the observables related to measurements performed in space-like separated regions of spacetime. In physics one is usually concerned with the locality of the interactions. The precise meaning of this varies from approach to approach, but one expects that the terms describing the interaction (e.g. the higher than quadratic part of the Hamiltonian or Lagrangian) are local non-linear functionals of quantum fields. In the language of Feynman graphs, it may be interpreted as the localization of all ``elementary scatterings" at points of spacetime. These two meanings are not unrelated. For example, in causal perturbation theory the ``physical" locality together with causality implies the ``algebraic" one \cite{pAQFT}. Throughout this paper, by default, we understand non-locality in the ``physical" sense. \end{rmk} The main problem of this area of mathematical physics is the lack of a universal understanding of what a non-local quantum field theory is. The ambiguities arise even at the level of perturbation theory. In conventional quantum field theory, there are different approaches, involving various mathematical constructions but describing essentially the same physics. This equivalence is lost in the absence of locality, and the most popular methods simply cannot be applied anymore. 
For example, the approaches based on functional integrals, widely used in physics, fail to provide a unitary theory in the non-local case \cite{NekrasovNCQFT}\footnote{In this paper we consider only the physical Lorentzian signature of the spacetime metric. In the Euclidean signature the functional integral approach is, as usual, much more reasonable and is widely studied, e.g. \cite{EucNCQFT}. Yet, the Wick rotation trick, normally used to pass from one signature to the other, does not work for non-local theories \cite{BahnsWick}, at least in a straightforward way.}. Mathematical approaches to perturbative quantum field theory \cite{EG73}, based on Einstein causality, cannot be applied directly to non-local theories. \par Therefore, less standard ideas turn out to be useful for non-local theories. In this paper we deal with the so-called Hamiltonian approach, first suggested in \cite{DFR} and further developed in \cite{BahnsPhD,BahnsEtAl}. The basic idea is to apply the standard Hamiltonian perturbation theory to a non-local interaction part of the Hamiltonian. As usual, it produces a time-ordered product of interaction Hamiltonians, integrated over spacetime (see Section \ref{HpQFT}). Unlike in local theories, here the time-ordered product makes sense without UV regularization. The integration over spacetime is instead divergent and has to be regularized by multiplying all coupling constants with an adiabatic cut-off function depending on the spacetime coordinates and vanishing far away from the origin. This is a typical trick of perturbative quantum field theory, unavoidable in general due to the Haag theorem \cite{HaagThm}. To obtain a physically meaningful result one has to pass to the limit of the adiabatic cut-off function going to unity, known as the adiabatic limit. \par Among the advantages of the Hamiltonian approach are the explicit expression for the scattering operator and its manifest unitarity\footnote{See, however, Remark \ref{rmk:HpQFT/loc-alg}.}. A notable restriction of this method is that all theories constructed in this way (if they are not local) have a selected reference frame, in which residual locality persists (see Remark \ref{rmk:HpQFT/loc-alg}). \par Originally, in \cite{BahnsEtAl,BahnsPhD} the Hamiltonian approach was applied to a class of non-local quantum field theories built as an effective description of quantum field theories on the Doplicher-Fredenhagen-Roberts quantum (non-commutative) spacetime defined in \cite{DFR}. The latter is a simple Lorentz-covariant realization of the physically motivated spacetime uncertainty relations also derived in \cite{DFR}. A set of Feynman rules for this theory was derived in \cite{BahnsPhD}, and it was shown that the non-locality smears out the typical ultraviolet singularities of local theories. However, it was observed that the adiabatic limit of the scattering operator (the so-called strong adiabatic limit) does not exist in general due to the external line corrections. This is not a surprise. In fact, in the local case, the strong adiabatic limit exists only when particular renormalization conditions are specified \cite{EG73}. \par In the author's Ph.D. thesis \cite{PhDThesis} a class of non-local ultraviolet-finite Hamiltonian perturbative Quantum Field Theories (HpQFT) was defined. This class includes the theories on the Doplicher-Fredenhagen-Roberts quantum spacetime considered in \cite{BahnsPhD,BahnsEtAl}. 
It was shown that the weak adiabatic limit always exists as in the local case. \par In this paper, we reproduce this result using significantly different techniques for the computations. Namely, instead of unbounded operators (operator-valued functions, operator-valued distributions, etc.) on the Hilbert space, we deal with continuous operators on the spaces of Schwartz functions. Then the vectors, vector-valued, operator-valued functions, distributions, and parameter-dependent distributions can be treated uniformly as linear continuous operators between the spaces of Schwartz functions. Furthermore, we introduce the second quantization map (see Subsection \ref{DS/2Q}), allowing us to do all computations with the spaces of a fixed finite number of particles only. Na\"ive formulas, such as the Wick theorem and its generalizations for operator-valued distributions can be provided with a direct sense in this formalism. Here we use only a small part of this framework. In the subsequent paper devoted to the strong adiabatic limit, a suitable generalization of this formalism will be used much more intensively. From this perspective, besides giving an improved exposition of the constructions of \cite{PhDThesis} this paper may be considered as an illustration of this new formalism. \par The paper is organized as follows. The rest of this section contains a summary of the notation and terminology used. In the second Section we explain how different objects, such as vector-valued, operator-valued, and parameter-dependent (as well as mixes of these kinds) distributions can be treated uniformly as linear operators between appropriately chosen spaces. In the third Section, the standard domain of the quantum field theory $\mathcal{D}_{\mathcal{S}}$ is defined. We also construct classes of operators, operator-valued functions, distributions, etc. on $\mathcal{D}_{\mathcal{S}}$ and introduce the second quantization procedure. In the Fourth Section, motivated by the time-dependent perturbation field theory in quantum mechanics, we describe the class of non-local quantum field theories we deal with. In the last, fifth Section, the weak adiabatic limit existence is formulated and proved. The Appendix contains several formulations of Feynman rules in the adiabatic limit convenient for practical computations. \paragraph{Acknowledgements} This work is a reformulation and continuation of the PhD project and later partially supported by the PostDoc fellowship, both granted by the University of Rome ``Tor Vergata" and performed under the advisory of G. Morsella. \subsection{Conventions, notation and terminology}\label{Intro/prelim} \paragraph{General} Three-dimensional vectors semantically related to the physical coordinate or momentum space are denoted with the vector symbol, e.g. $\vec{p}\in\RR{3}$. Other finite-dimensional vectors are denoted with bold letters, e.. $\bm{x}\in\RR{n}$. \par We use the standard notation $$ \partial^{\alpha}=\partial_{x}^{\alpha}=\frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}} $$ for the partial derivatives, where $\alpha=(\alpha_1,\ldots,\alpha_n)\in\mathbb{N}_0^{n}$ is a multi-index and $|\alpha|=\sum_{j=1}^n \alpha_j$. \par The permutation group of $k$ elements is denoted with $\symmgr{k}$. A permutation $\sigma\in\symmgr{k}$ is understood as a functions $\{1,\ldots,k\}\rightarrow \{1,\ldots,k\}$, $i\mapsto \sigma(i)=\sigma_i$. 
\par We use the following abbreviated notations for lists of repeating variables: $$\arrs{x}{n} \Longleftrightarrow x_1,x_2,\ldots, x_n$$ $$\arrsM{x}{n}{k} \Longleftrightarrow x_n,x_{n+1},\ldots, x_k,$$ $$\arrsP{x}{k}{\sigma} \Longleftrightarrow x_{\sigma(1)},\ldots,x_{\sigma(k)}, \, (\sigma\in\symmgr{k}),$$ $$ (x_i)_{i=1\ldots n} \Longleftrightarrow x_1,\ldots, x_n, $$ $$ (x_{i,j})_{i=1\ldots n,j=1\ldots m_j} \Longleftrightarrow x_{1,1},x_{1,2},\ldots, x_{1,m_1},x_{2,1},x_{2,2},\ldots,x_{2,m_2},\ldots,x_{n,m_n}. $$ The last two notations are used for more composite construction depending on $i$ (or $(i,j)$) in place of $x_i$ (or $x_{i,j}$). Inside integrals we use $$ d^{nk}\arrs{\bm{x}}{k}=\prod_{j=1}^k d^n \bm{x}_j $$ for $\arrs{\bm{x}}{n}\in\RR{n}$. Note that we always indicate the dimensions of the space explicitly. We write $\arrs{\bm{x}}{k}\in\RR{n}$ to express that $\bm{x}_i\in\RR{k}$ for each $i=1,\ldots,n$ in contrast with $(\arrs{\bm{x}}{k})\in\RR{n\cdot k}$. \par For two topological spaces $X$, $Y$ we use $\mathcal{L}(X,Y)$ to denote the space of all linear continuous operators from $X$ to $Y$. We set $\mathcal{L}(X)=\mathcal{L}(X,X)$. \par If $f$ is a function on $\RR{n+m}$ we may interchangeably write $f(\bm{x},\bm{y})$ or $f(\bm{z})$ for $\bm{x}\in\RR{n},\bm{y}\in\RR{m}$and $\bm{z}\in\RR{n+m}$. \par We use the ordering convention for non-commutative products $$ \prod_{i=n}^{m} A_i=A_n A_{n+1}\cdots A_{n+m}. $$ \paragraph{Quantum field theories} By a quantum field theory we in general mean a (local or non-local, not necessarily Lorentz-covariant) bosonic real scalar perturbative quantum field theory. All results can be easily generalized to the case of several particle species and the Fermionic statistics. The spin plays no role in absence of the Lorentz invariance, so different polarizations can be treated as separate species. The same can be said about the antiparticles in absence of the locality. \par By a wave function we mean a vector of state of such theory with a fixed number of particles and presented as a function of the momenta. \par The Quantum Field theories we consider are not necessarily Lorentz-invariant, both because no reasonable non-local Lorentz-invariant interaction is known and due to immanent breaking of Lorentz invariance by the non-local Hamiltonian formalism (see Remark \ref{rmk:HpQFT/loc-alg}). Yet, in \cite{BahnsPhD,BahnsEtAl} the free theory is assumed to be Lorentz-invariant. We instead allow it to be broken from the beginning as it produces no additional technical problems. For this reason, we need to deal with generalized dispersion relations. We require that both the energy and its inverse are smooth functions of the momentum which have at most polynomial growth at infinity together with all their partial derivatives. The space of such functions $\Mlt{3}$ is defined in Subsection \ref{Framework/Mlt}. \begin{Def}\label{def:dispRel} By a \emph{(massive) dispersion function} we mean a positive function $\omega_0\in\Mlt{3}$ such that $(\vec{k}\mapsto \frac{1}{\omega_0(\vec{k})})\in\Mlt{3}$, and\footnote{The $M>0$ condition is introduced to eliminate exotic dispersion functions of polynomial decaying at the infinity. The possibility of zero energy at a finite point is already excluded by the smoothness of the inverse energy.} $$ M=\inf_{\vec{k}\in\RR{3}}\omega_0({\vec{k}})>0. $$ The constant $M$ is called the \emph{mass} of the dispersion function. We always assume $\omega_0(-\vec{p})=\omega_0(\vec{p})$ for the sake of simplicity. 
This condition can be easily relaxed. \end{Def} \paragraph{Feynman graphs and Feynman rules} We use the Feynman graphs to keep track of combinatorics in operator products computations. \par The terminology used is mostly standard and intuitive, but it makes sense to summarize it in one place. We introduce various versions of the Feynman rules both in the main text and in Appendix based on Feynman graphs with different labeling and auxiliary structures. \par In general, a \emph{Feynman graph} is a graph with marked vertices and edges (interchangeably called lines). We always exclude the graphs containing an edge starting and ending at the same vertex. \par We consider \emph{partially} or \emph{totally ordered} and \emph{unordered} Feynman graphs, which come with partial, total or no order on the set of vertices respectively. When dealing with ordered graphs, we use terms like \emph{precede}, \emph{earlier}, \emph{earliest} and so on (respectively, \emph{follows}, \emph{later}, \emph{latest} etc.) to describe the relative order of vertices and use the symbols $\prec$ and $\succ$ respectively for these two relations\footnote{This terminology have clear interpretation if we keep in mind that each vertex corresponds to an operator. Then earlier vertices correspond to operators which act first. In computations of the Green functions (and the scattering amplitudes \cite{PhDThesis}) the order can be physically interpreted as the order of the ``elementary scatterings" in the physical time. The physical interpretation is lost in the Wightman functions computation, where some parts of the operator product are anti-time ordered.}. \par A total order, when prescribed, induces the natural (up to an overall flip) orientation of the edges, making the graph directed acyclic. For definiteness, we choose the orientation from the earlier to the later vertices. Partial orders in general do not define orientations of edges, but we consider only such of them that do, and moreover are generated by an acyclic orientation of the edges\footnote{In other words, instead of partially ordered graphs we should be speaking of directed acyclic ones. But the partial order plays a major role in the Feynman rules, which we have emphasized in the terminology.}. In the figures we always draw earlier vertices to the left of the later ones. \par We distinguish external and internal vertices. The external vertices may be incident to only one line, for internal ones there are no limitations\footnote{Of course, the contribution of a diagram may vanish if the corresponding term is absent in the interaction.}. \par The vertices can be enumerated or not, and the numeration can agree with the order or not. When it does, rather counter-intuitively, in ordered graphs we enumerate the vertices from the \emph{last to the earliest}\footnote{This choice of numeration is dictated by the traditional right-to-left composition notation and numeration of the factors in products from left to right}. \par To the lines we often assign a flux of the momentum and sometimes the energy flow. The flow is directed, i.e. it is incoming for some vertices and outgoing for others. Graphically we draw an arrow parallel to the line to show the flow direction. For ordered graphs we assume that the flux always goes from the earlier to the later vertex. We call the difference of the sum of all outgoing and the sum of all incoming momenta (respectively, energy) at a vertex the \emph{momentum} (respectively, \emph{energy}) \emph{defect} of the vertex. 
In some cases we make a distinguishing between the on-shell (i.e. constrained by a dispersion relation) and off-shell (i.e. an independent free variable) energies, and respectively the on-shell and off-shell energy defects. \par We say that a connected component of a Feynman graph is a \emph{vacuum component} if it contains no external vertices. \par The \emph{Feynman rules} consist of the following ingredients: \begin{itemize} \item Description of the classes of relevant Feynman graphs (e.g. ordered graphs with a fixed number of internal and external enumerated vertices with no vacuum energy corrections); \item Labeling rules, i.e. the list of free variables assigned to each line (e.g. spatial momentum) or each vertex (e.g. timestamp); \item Prescription o the factors corresponding to each line and each vertex; \item The integration and summation range for the free parameters and additional overall factor if necessary. \end{itemize} To each graph corresponds an expression constructed as a product of factors corresponding to the components of the graph according to the Feynman rules and the overall factor (if it is present), integrated and summed with respect to the free parameters and divided by the order of the automorphisms group of the graph\footnote{We assume that the automorphisms preserve all defined structure of the graphs (including order and numeration of vertices if they are defined) except for free parameters assignment. The combinatoric factors are not at all important for the weak adiabatic limit, so we do not discuss the symmetry factor, but it can always be reconstructed. Yet, we note that this factor is different from the one of \cite{BahnsPhD} because of different normalization of the vertex factors. Our conventions instead agree with the standard physics textbooks, e.g. \cite{PS} to facilitate eventual comparisons.}. \par We say that some object can be computed by Feynman rules if it is equal to the sum of expressions corresponding to all relevant graphs according to that Feynman rules. \paragraph{Formal power series} Let $g$ be a formal parameter. For a vector space $X$, the space of $X$-valued formal power series in terms of $g$ is defined as the vector space of $X$-valued functions on $\mathbb{N}_0$, $$ X\formalPS{g}=X^{\mathbb{N}_0}=\{x:\mathbb{N}_0\rightarrow X\} $$ with pointwise linear operations. Elements of $X\formalPS{g}$ we symbolically present as $$ x(g)=\sum_{n=0}^{\infty}x_{(n)}g^n. $$ \par If $X$, $Y$ and $Z$ are vector spaces and $*:X\times Y\rightarrow Z$ is a bilinear operation, then for $x\in X\formalPS{g}$, $y\in Y\formalPS{g}$ we set $$ (x*y)(g)=\sum_{n=0}^{\infty}\left(\sum_{m=0}^{n}x_{(m)}*x_{(n-m)}\right)g^n\in Z\formalPS{g}. $$ The limits involving formal power series should be understood in the sense of pointwise convergence of the underlying functions on $\mathbb{N}_0$. Consequently, integrals of power series are defined. \par To avoid confusion we write $(x(g))_{g^n}$ instead of $x_n$ \paragraph{Test functions, distributions, and related objects} We use the standard symbol for the space of Schwartz function $\SRR{n}$ and the space of tempered distributions $\SpRR{n}=\mathcal{L}(\SRR{n},\mathbb{C})$ dual to it (here $n\in\mathbb{N}_0$ with $\SRR{0}=\SpRR{0}=\mathbb{C}$ for convenience). The seminorms of $\SRR{n}$ are denoted as follows: $$ ||f||_{\SRR{n},m,k}=\sup_{\bm{x}\in\RR{n},|a|\leq k}(1+||\bm{x}||)^m|\partial^{a}f(\bm{x})|, \,\forall f\in\SRR{n},\forall m,k\in\mathbb{N}_0. 
$$ We often omit the adjective tempered, as no other classes of distributions appear in this paper. We use the square brackets to denote the evaluation of a distribution on a test function (or, more generally, the action of an operator on a vector\footnote{In selected cases we omit the square bracket in this sense.}). The round brackets after a symbol of distribution are used in the symbolic integral notation which is explained in Subsection \ref{Framework/Symbolic}. \par A lot of non-standard notation is introduced in Section \ref{Framework}. In particular, the operator $\evaluateP{}{}$ appears in Subsection \ref{Framework/VectorTest}; the operation $\extDH{}{}{}$ is explained Subsection \ref{Framework/OpVal}; the operations $\intD{}{}$ and $\restrD{}{}$ is introduced in Subsection \ref{Framework/Restrict}; the operations $\extD{}{}$ and the tensor product $\otimes$ are defined in Subsection \ref{eq:Framework/extProd}. Most of the other operators used in the paper can be found in Subsection \ref{Framework/exmp} \par \par For the secondary quantization map $A\mapsto \widehat{A}$, and an explanation of the underlining notation see Subsection {\ref{DS/2Q}}. As a general rule, the hated underlined symbols denote the objects in their standard sense. For example, one can think of $\uwhat{\phi}_0$ as the usual free quantum field, defined as an operator-valued distribution. The original object $\phi_0\in\widecheck{\mathcal{L}}(\DSS{}{4},\mathcal{D}_{\mathcal{S}}{})$ (notation of Section \ref{DS} is used) does not have a direct sense but is convenient to operate with. \section{Framework for operator-valued parameter-dependent distributions}\label{Framework} As usual in Quantum Field Theory, operator-valued distributions are essential for construction. Standard general formalism (e.g. \cite{SimonReed, NN}) is, however, not suitable for our needs, because we deal with a certain class of operators and distributions only. In particular, instead of the Hilbert space, we may always deal with finer spaces of Schwartz functions $\SRR{n}$, $n\in\mathbb{N}$. This section is devoted to $\SRR{n}$ and $\mathcal{L}(\SRR{n},\SRR{m})$-valued functions and (in general, parameter-dependent) distributions. In Section \ref{DS} we explain how this construction leads to the usual notion of distributions, valued in unbounded operators on the Hilbert space. \par \par The material presented here is not original, but nowhere presented in the form suitable enough for us. In the parts devoted to parameter-dependent distributions and restrictions of distributions we mostly follow \cite{NN}. The later concept is an older ``cheap" version of the microlocal analysis\cite{Lars}, allowing to restrict the distribution to fixed values of the parameters. There are several reasons why we stick to it. First of all, it is just simpler to verify the existence conditions in the situations we deal with (especially taking into account that otherwise we would be forced to use generalized to the tempered distributions case). The second reason is that this way our framework is more uniform, allowing to treat vector-valued and parameter-dependent distributions together. Finally, the strong adiabatic limit, which is not considered here, but will be presented in the subsequent paper \cite{PAP1}, requires more general spaces, and hence the microlocal analysis would require further generalization. 
Instead, we aim to use as few properties of the space $\SRR{n}$ as possible\footnote{In broad terms, we use only that it is Fr`echet and that the partial evaluations are continuous in a siutable sense}. \par In description of vector-valued functions we partially follow \cite{Treves}. The part on the multiplier spaces follows \cite{Amman}. \subsection{Vector-valued test functions}\label{Framework/VectorTest} We start by the following observation\footnote{See chapter 40 of \cite{Treves}}. \begin{rmk}\label{rmk:Framework/VectorTest} For any $n,m\in\mathbb{N}$ there is an isomorphism between the following topological vector spaces \begin{enumerate} \item The space $\SRR{n+m}$; \item The space $$\mathcal{S}(\RR{n},\SRR{m})= $$ $$ \left\{f: \RR{n}\rightarrow \SRR{m}\Big| ||f||_{\mathcal{S}(\RR{n},\SRR{m}),s,k,l,r} < \infty, \quad \forall l,r,s,k\in\mathbb{N}_0\right\}, $$ where for a function $f: \RR{n}\rightarrow \SRR{m}$ and $l,r,s,k\in\mathbb{N}_0$ we set \begin{equation} ||f||_{\mathcal{S}(\RR{n},\SRR{m}),s,k,l,r}=\sup_{\bm{x}\in\RR{n},|a|\leq k}(1+||\bm{x}||)^s||(\partial^{\alpha}f)(\bm{x})||_{\SRR{m},l,r} \end{equation} with the topology induced by these seminorms. \end{enumerate} The isomorphism is given by $\evaluateP{n+1\ldots n+m}{n+m}\in \mathcal{L}(\SRR{n+m},\mathcal{S}(\RR{m},\SRR{n}))$: $$ \evaluateP{n+1\ldots n+m}{n+m}f(\bm{x})(\bm{y})=f(\bm{x},\bm{y}), \forall f\in\SRR{n+m}, \forall \bm{x}\in \RR{n}, \forall \bm{y} \in \RR{m} $$ $$ \left(\evaluateP{n+1\ldots n+m}{n+m}\right)^{-1}g(\bm{x},\bm{y})=g(\bm{x})(\bm{y}), \, \forall g\in \mathcal{S}(\RR{m},\SRR{n}),\forall \bm{x}\in \RR{n}, \forall \bm{y}\in \RR{m}. $$ \end{rmk} \begin{proof} The linear map $\evaluateP{n+1\ldots n+m}{n+m}$ is clearly well-defined. We have to check that $(\evaluateP{n+1\ldots n+m}{n+m})^{-1}$ is indeed valued in $\SRR{n+m}$, or more precisely the joint continuity of $(\evaluateP{n+1\ldots n+m}{n+m})^{-1}g$ together with its partial derivatives for any $g\in \mathcal{S}(\RR{m},\SRR{n})$. For that we write $$ |g(\bm{x}')(\bm{y}')-g(\bm{x})(\bm{y})|\leq |g(\bm{x}')(\bm{y})-g(\bm{x})(\bm{y})|+|g(\bm{x}')(\bm{y}')-g(\bm{x}')(\bm{y})|\leq $$ $$||g||_{\mathcal{S}(\RR{m},\SRR{n}),0,1,0,0}||\bm{x}-\bm{x}'||+ ||g||_{\mathcal{S}(\RR{m},\SRR{n}),0,0,0,1}||\bm{y}-\bm{y}'||, $$ $$ \forall \bm{x},\bm{x}'\in\RR{n}, \bm{y},\bm{y}'\in\RR{m} $$ The right-hand sides can be made anyhow small for sufficiently small $||\bm{x}-\bm{x}'||$ and $||\bm{y}-\bm{y}'||$, so $g$ is jointly smooth. The derivatives can be treated in the same way. The continuity of $\evaluateP{n+1\ldots n+m}{n+m}$ and $\evaluateP{n+1\ldots n+m}{n+m}$ follows by the standard inequalities \begin{equation} (1+||(\bm{x},\bm{y})||)\leq (1+||\bm{x}||)(1+||\bm{y}||), \forall \bm{x}\in\RR{n}, \forall \bm{y}\in\RR{m}, \label{eq:xy<x} \end{equation} \begin{equation} (1+||\bm{x}||)\leq (1+||(\bm{x},\bm{y})||), \forall \bm{x}\in\RR{n}, \forall \bm{y}\in\RR{m}, \label{eq:x<xy} \end{equation} $$ (1+||\bm{y}||)\leq (1+||(\bm{x},\bm{y})||), \forall \bm{x}\in\RR{n}, \forall \bm{y}\in\RR{m}. $$ \end{proof} We conclude that the space $\SRR{n+m}$ can be interpreted as the space of $\SRR{m}$-valued Schwartz functions on $\RR{n}$. The identification is not unique, in general we define $\evaluateP{i_{1}\ldots i_{m}}{n+m}$ extracting the arguments on positions $i_{1},\ldots,i_{m}$ as parameters. This operator is also of great use in the technical definition below in this section. 
\par In what follows we do all real computations with more understandable space $\SRR{n+m}$, but sometimes use the aforementioned interpretation to streamline the presentation and to simplify comparison with standard techniques. \subsection{Vector-valued and parameter-dependent distributions}\label{Framework/VV=PD} There are two related spaces important for Section \ref{DS}. First one may consider the space of \emph{vector-valued distributions} $\mathcal{S}'(\RR{n},\SRR{m})$ which is natural to identify with $\mathcal{L}(\SRR{n},\SRR{m})$. The second one is the space of \emph{parameter-dependent distributions of Schwartz class}\footnote{We use terminology of \cite{NN}} $\mathcal{S}(\RR{m},\SpRR{n})$. Following \cite{NN}, we define it as a space of maps $\RR{m}\rightarrow \SpRR{n}$ $$ F: \RR{m} \ni \bm{x} \rightarrow F_y\in \SpRR{n} $$ such that $$ (y\mapsto F_y[f]) \in \SRR{m}, \quad \forall f\in \SRR{n}. $$ Indeed, it is nothing but a tautological rewriting of differentiability and fast decay (together with all partial derivatives) properties of $F$, considered as a distribution-valued function on $\RR{n}$\footnote{Recall that we assume the weak-* topology for $\SpRR{n}$, so all limits including $F$ should be computed after evaluation on a test function.}. \par These two notions coincide. \begin{rmk}[Variant of Exercise 2.43 of \cite{NN}]\label{rmk:Framework/VV=PD} Fix $n,m\in\mathbb{N}$. Then there is a natural bijection between $\mathcal{S}'(\RR{n},\SRR{m})$ and $\mathcal{S}(\RR{m},\SpRR{n})$. It is given by identifying $$ F_{\bm{x}}[f]=F[f](\bm{x}), \quad, \forall \bm{x}\in \RR{m},\forall f\in \SRR{n}, $$ where in the left and the right-hand sides $F$ is treated as a parameter-dependent and vector-valued distribution. \end{rmk} \begin{proof} The only thing to prove is that a parameter-dependent distribution of class $\SRR{m}$ defines a continuous map $\SRR{n}\rightarrow\SRR{m}$. As $\SRR{n}$ and $\SRR{m}$ are Fr\'echet spaces, by the closed graph theorem, it is enough to show for any sequence $f_j\in\SRR{n}$, $j=1,\ldots,n$ the conditions \begin{equation} f_j\underset{j\rightarrow \infty}{\longrightarrow}{f}\in \SRR{n}, \label{eq:proof:rmk:Framework/VV=PD/f} \end{equation} \begin{equation} (y\mapsto F_y[f_j])\underset{j\rightarrow \infty}{\longrightarrow}{g}\in \SRR{m}, \label{eq:proof:rmk:Framework/VV=PD/F[f]} \end{equation} imply $g=(y\mapsto F_y[f])$. Since convergence in $\SRR{m}$ implies the pointwise one, (\ref{eq:proof:rmk:Framework/VV=PD/f}-\ref{eq:proof:rmk:Framework/VV=PD/F[f]}) and the continuity of $F_y$ for each $y\in\RR{m}$ give $$ g(y)=\lim_{j\rightarrow\infty}F_y[f_j]=F_y[f],\, \forall y\in\RR{m} $$ as needed. \end{proof} From now on we make no distinguishing between vector-valued and parameter-dependent distributions and use the notation $\mathcal{L}(\SRR{n},\SRR{m})$ for both. \begin{rmk} Combining Remarks \ref{Framework/VV=PD} and \ref{rmk:Framework/VectorTest}, for $m,n,k\in\mathbb{N}_0$ we can interpret $\mathcal{L}(\SRR {n},\SRR{m+k})$ as the space of parameter-dependent $\SRR{m}$-valued distributions on $\RR{n}$ of class $\SRR{k}$. \end{rmk} \subsection{Operator-valued (parameter-dependent) distributions}\label{Framework/OpVal} \begin{rmk} Take $n,m,k\in\mathbb{N}_0$. There is a correspondence between elements of the space $\mathcal{L}(\SRR{n+m},\SRR{k})$ and $\mathcal{L}(\SRR{m},\SRR{k})$\emph{-valued distributions on $\RR{n}$}. 
It is given by defining for each $F\in\mathcal{L}(\SRR{m},\SRR{k})$ an operator-valued distribution $\evaluatePD{1,\ldots,n}F\in \mathcal{L}(\SRR{n},\mathcal{L}(\SRR{m},\SRR{k}))$ by $$ \evaluatePD{1,\ldots,n}F[f][\psi]=F[f\otimes \psi],\quad \forall f\in\SRR{n},\forall \psi\in\SRR{m}. $$ Here we assume the pointwise topology on $\mathcal{L}(\SRR{m},\SRR{k})$. \end{rmk} \begin{proof} For $f,\psi$ as in the statement, the map $\psi\mapsto f\otimes \psi$ is continuous, so $$\evaluatePD{1,\ldots,n}F[f]\in\mathcal{L}(\SRR{m},\SRR{k}), \quad, \forall f\in\SRR{n}.$$ Similarly, we get continuity of the map $f\mapsto F[f][\psi]$ for any $\psi\in\SRR{m}$ making $\evaluatePD{1,\ldots,n}F$ into a distribution. \par Conversely, any distribution $G\in \mathcal{L}(\SRR{n},\mathcal{L}(\SRR{m},\SRR{k}))$ defines a separately continuous bilinear map $$\SRR{n}\times \SRR{m}\rightarrow \SRR{k},$$ $$\SRR{n}\times \SRR{m} \ni (f,\psi)\mapsto G[f][\psi].$$ Each such map is jointly continuous (corollary of Theorem 34.1 in \cite{Treves}) and defines ${\evaluatePD{1,\ldots,n}}^{-1}G$ such that $$ {\evaluatePD{1,\ldots,n}}^{-1}G[f\otimes \psi]=G[f][\psi]. $$ \end{proof} \begin{rmk} Combining this result with Remark \ref{rmk:Framework/VectorTest}, for $n,m,k,l\in\mathbb{N}_0$ we may interpret $\mathcal{L}(\SRR{n+m},\SRR{k+l})$ as the space of \emph{parameter-dependent operator-valued distributions on $\RR{n}$, taking values in $\mathcal{L}(\SRR{m},\SRR{k})$ of class $\SRR{l}$}. \end{rmk} As in Subsection \ref{Framework/VectorTest} the identification is not unique and, in general, we define $\evaluatePD{i_1,\ldots,i_n}F$ analogously. \par As before, we prefer to perform computations within the spaces of linear operators, but use the interpretation presented above to make their sense more clear. \subsection{Restriction of a distribution to fixed values of arguments}\label{Framework/Restrict} Occasionally we prefer to treat some of the arguments interchangeably as distributional one parametric ones. It is easy to make the latter into the former using the following operation. \begin{notn} Take $n,k,m\in\mathbb{N}_0$, $i_1,\ldots,i_k\in \{1,\ldots, m+k\}$, $i'_1,\ldots,i'_k\in \{1,\ldots, n+k\}$ and $F\in\mathcal{L}(\SRR{n},\SRR{k+m})$. We define \begin{equation}\label{eq:Framework/RestrExt} \intD{\arrs{i}{k}}{\arrs{i'}{k}}F\in \mathcal{L}(\SRR{k+n},\SRR{m}), \quad \intD{\arrs{i}{k}}{\arrs{i'}{k}}F[f](\bm{y})= \end{equation} $$ \int \evaluateP{\arrs{i}{k}}{m+k}\left[F\left[\left(\evaluateP{\arrs{i'}{k}}{n+k}\left[f\right]\right)\left(\bm{x}\right)(\bm{y})\right]\right](\bm{x})(\bm{y})d^k\bm{x}, \quad \forall \bm{y}\in\RR{k},\,\forall f\in\SRR{n}. $$ \end{notn} The meaning of this operator is that it makes the parameters on the places $i_1,\ldots,i_k$ into distributional arguments inserted on positions $i'_1,\ldots,i'_k$. To understand the meaning of the rather formal construction above let us consider a simple example. \begin{exmp} Take $m\in\mathbb{N}$, $h\in\SRR{m}=\mathcal{L}(\SRR{0},\SRR{m})$. Then $$\intD{1,\ldots,m}{1,\ldots,m}h[f]=\int{h(\bm{x})f(\bm{x})d^m\bm{x}}.$$ \end{exmp} We introduce the partially defined inverse operation $$ \restrD{\arrs{i'}{k}}{\arrs{i}{k}}=\left(\intD{\arrs{i}{k}}{\arrs{i'}{k}}\right)^{-1}. $$ If for $G\in\mathcal{L}(\SRR{n+k},\SRR{m})$ the operator $\restrD{\arrs{i'}{k}}{\arrs{i}{k}}G$ exists, we call it a \emph{restriction of $G$} because effectively it allows us to evaluate $G$ at fixed values of its distributional arguments. \begin{rmk} Here we again follow \cite{NN}. 
Note that the microlocal analysis suggests a more sophisticated notion of restriction. One may show that whenever both are applicable, the results coincide, but the existence conditions are not in general equivalent. \end{rmk} \begin{prop}[Analog of Proposition 2.11 in \cite{NN}] For the existence of $$\restrD{\arrs{i'}{k}}{\arrs{i}{k}}G $$ it is sufficient to find a map $F: \SRR{n}\rightarrow \SRR{m+k}$ (not a priori continuous) such that $G[f]=\intD{\arrs{i}{k}}{\arrs{i'}{k}}F[f]$ where the right-hand sides is formally defined by (\ref{eq:Framework/RestrExt}). \end{prop} \begin{proof} The reasoning is similar to Remark \ref{rmk:Framework/VV=PD}. Without any loss of generality, we assume $$ i_{j}=i'_{j}=j, \quad j=1,\ldots,k. $$ Assume $G\in\mathcal{L}(\SRR{n+k},\SRR{m})$ and $F$ is as in the statement. Our goal is to prove that $F$ is continuous. To do that we evaluate $G$ on a tensor product \begin{equation}\label{eq:proof:Framework/Restr} G[h\otimes g](\bm{y})=\intD{1,\ldots,k}{1,\ldots,k}F[h\otimes g](\bm{y})= \int{F[h](\bm{x},\bm{y})g(\bm{x})d^k\bm{x}}, \end{equation} $$ \forall g\in\SRR{k}, \forall h\in\SRR{n},\,\forall \bm{y}\in\RR{m}, $$ and apply the closed graph theorem. Indeed, assume that there is a sequence $h_{j}\in\SRR{n}$, $j\in\mathbb{N}$ such that \begin{equation}\label{eq:proof:Framework/Restr/closedGraph} \lim_{j\rightarrow \infty}h_j=h, \, \lim_{j\rightarrow \infty}F[h_j]=u. \end{equation} On the one hand, for any $g\in\SRR{k}$ by (\ref{eq:proof:Framework/Restr},\ref{eq:proof:Framework/Restr/closedGraph}) we get \begin{equation}\label{eq:proof:Framework/Restr/outetLim} \lim_{j\rightarrow \infty}G[h_j\otimes g]=G[h\otimes g]. \end{equation} On the other hand, the continuity of $G$ and $h\mapsto h\otimes g$ for any $g\in\SRR{k}$ together with (\ref{eq:proof:Framework/Restr/closedGraph}) gives \begin{equation}\label{eq:proof:Framework/Restr/innerLim}\lim_{j\rightarrow \infty}G[h_j\otimes g](\bm{y})= \int{u(\bm{x},\bm{y})g(\bm{x})d^k\bm{x}}. \end{equation} Combining (\ref{eq:proof:Framework/Restr/outetLim}) with (\ref{eq:proof:Framework/Restr/innerLim}), we conclude that whenever (\ref{eq:proof:Framework/Restr/closedGraph}) is satisfied, $$ \int{u(\bm{x},\bm{y})g(\bm{x})d^k\bm{x}}=\int{F[h](\bm{x},\bm{y})g(x)d^k\bm{x}}, \,\forall g\in \SRR{k}, \forall g\in\SRR{k},\forall \bm{y}\in\RR{m}. $$ By assumption $F[h],u\in\SRR{m+k}$, so the last identity is possible only if $F[h]=u$ as desired. \end{proof} \subsection{Augmentations and products}\label{Framework/Ext} Developing Remark \ref{rmk:Framework/VectorTest} we may act by an operator from $\mathcal{L}(\SRR{n},\SRR{m})$ on a vector from $\SRR{n+k}$ treating the rest $k$ arguments as parameters. This leads to the following construction. \begin{rmk}\footnote{Variant of Theorem 2.1.3 in \cite{Lars}.} Let $n,m,k\in\mathbb{N}$, $F\in\mathcal{L}(\SRR{n},\SRR{m})$. 
Then we may introduce $\extD{1,\ldots,k}{1,\ldots,k}F\in \mathcal{L}(\SRR{n+k},\SRR{m+k}) $ as \begin{equation} \extD{1,\ldots,k}{1,\ldots,k}F[f](\bm{x},\bm{y})=F[\evaluateP{1,\ldots,k}{n+k}f(\bm{x})](\bm{y}), \qquad \forall f\in\SRR{n+k}, \,\forall \bm{x}\in\RR{k},\,\forall \bm{y}\in\RR{m} \label{eq:Framework/extSimp} \end{equation} More generally, take two sequences of pairwise different integers $$i_1,\ldots,i_k\in\{1,\ldots,n+k\}, \qquad i'_1,\ldots,i'_k\in\{1,\ldots,m+k\}.$$ Then there is $\extD{\arrs{i}{k}}{\arrs{i'}{k}}F\in\mathcal{L}(\SRR{n+k},\SRR{m+k})$ defined by \begin{equation} \extD{\arrs{i}{k}}{\arrs{i'}{k}}F=\left(\evaluateP{\arrs{i'}{k}}{m+k}\right)^{-1}\left(\RR{k}\ni \bm{x}\mapsto F[\evaluateP{\arrs{i'}{k}}{n+k}f(\bm{x})]\right). \label{eq:Framework/extGen} \end{equation} \end{rmk} \begin{proof} Clearly, it is enough to consider $\extD{1,\ldots,k}{1,\ldots,k}F$. Equivalence of (\ref{eq:Framework/extGen}) and (\ref{eq:Framework/extSimp}) follows by construction of $\left(\evaluateP{\arrs{i'}{k}}{m+k}\right)^{-1}$ in Remark \ref{rmk:Framework/VectorTest}. \par By Remark \ref{rmk:Framework/VV=PD} we only have to show that the right-hand sides of (\ref{eq:Framework/extSimp}) is in $\SRR{m+k}$ as a function of $(\bm{x},\bm{y})$ and is continuous with respect to $f$ when $(\bm{x},\bm{y})$ is fixed. Both facts follow from Remark \ref{rmk:Framework/VectorTest} and continuity of $F$ (which in particular allows to commute it with derivatives with respect to $\bm{x}$ and estimate seminorms of $F[\evaluateP{1,\ldots,k}{n+k}f(\bm{x})]$ in terms of the ones of $f$). \end{proof} \begin{rmk}\label{rmk:Framewor/ExtUniversal} We use the same notation as in the previous remark. Equivalently $\extD{1,\ldots,k}{1,\ldots,k}F$ can be characterised as a unique operator in $\mathcal{L}(\SRR{n+k},\SRR{m+k})$ such that for any $f\in\SRR{k}$ and $g\in\SRR{n}$ holds \begin{equation}\label{eq:Framework/extProd} \extD{1,\ldots,k}{1,\ldots,k}F[f\otimes g]=g\otimes F[g]. \end{equation} \end{rmk} \begin{proof} Clearly, (\ref{eq:Framework/extSimp}) implies (\ref{eq:Framework/extProd}). Uniqueness of the operator satisfying (\ref{eq:Framework/extProd}) follows by density of $\SRR{k}\otimes_{\mathrm{alg}}\SRR{n}$ in $\SRR{n+k}$. \end{proof} \begin{rmk}\label{rmk:Framework/ProdComm} From the last remark follows that ``operators acting on different variables commute". In particular, for $F\in\mathcal{L}(\SRR{n},\SRR{m})$ and $F'\in\mathcal{L}\left(\SRR{n'},\SRR{m'}\right)$ we have $F\otimes F'\in \mathcal{L}\left(\SRR{n+n'},\SRR{m+m'}\right)$ defined as $$ F\otimes F'=\left(\extD{n+1,\ldots,n+m'}{n+1,\ldots,n+m'}F\right)\circ \left(\extD{1,\ldots,n}{1,\ldots,n}F'\right)=\left(\extD{1,\ldots,m}{1,\ldots,m}F'\right)\circ\left(\extD{n+1,\ldots,n+n'}{n+1,\ldots,n+n'}F\right). $$ This operation coincides with the ordinary tensor product of test functions for $n=n'=0$ and of distributions for $m=m'=0$. \end{rmk} \begin{rmk}\label{rmk:Framework/Augment-as-tensor} The augmentations can be expressed via the tensor product introduced above. In particular, let $n,m,k\in \mathbb{N}_0$. Then $$ \left(\extD{n+1,\ldots,n+k}{m+1,\ldots,m+k}F\right)=F\otimes \mathbb{1}_{\mathcal{L}(\SRR{k})}, $$ $$ \left(\extD{1,\ldots,k}{1,\ldots,k}F\right) = \mathbb{1}_{\mathcal{L}(\SRR{k})}\otimes F. $$ \end{rmk} \begin{rmk}\label{rmk:Framework/Augment/algebra} The following two simple properties of augmentation are useful: \begin{itemize} \item Consecutive augmentations can be combined, e.g. 
for $n,m\in\mathbb{N}_0$ and $F\in\mathcal{L}(\SRR{n},\SRR{m})$ $$ \extD{1,\ldots,l}{1,\ldots,l}(\extD{1,\ldots,k}{1,\ldots,k}F)=\extD{1,\ldots,l+k}{1,\ldots,l+k}F,\,\forall k,l\in\mathbb{N}_0; $$ \item Augmentation is distributive with respect to compositions, e.g. for $n,m\in\mathbb{N}_0$ and $F\in\mathcal{L}(\SRR{n},\SRR{m})$, $G\in\mathcal{L}(\SRR{k},\SRR{n})$ $$ \extD{1,\ldots,k}{1,\ldots,k}(F\circ G)=(\extD{1,\ldots,k}{1,\ldots,k}F)\circ (\extD{1,\ldots,k}{1,\ldots,k}G), \forall k\in\mathbb{N}_0. $$ \item As a consequence of the above and Remark \ref{rmk:Framework/ProdComm}, if $n,n',n'',m,m',m''\in\mathbb{N}_0$, $$F\in\mathcal{L}\left(\SRR{n'},\SRR{n''}\right),\,F'\in\mathcal{L}\left(\SRR{n},\SRR{n'}\right),$$ $$G\in\mathcal{L}\left(\SRR{m'},\SRR{m''}\right),\,G'\in\mathcal{L}\left(\SRR{m},\SRR{m'}\right),$$ then $(F\otimes G)\circ (F'\otimes G')=(F\circ F')\otimes (G\circ G')$. \end{itemize} \end{rmk} \subsection{Multipliers}\label{Framework/Mlt} For now, we considered only parameter-dependent distributions fast decaying at the high values of the parameters. This condition is too restrictive. For example, the free quantum field is known to be an operator-valued distribution restrictable to the fixed values of time (see \cite{SimonReed} for standard treatment and Subsection \ref{DS/2Q} for construction in our formalism) but it does not decay in time. In this subsection we define a large enough space containing such objects. \par We recall that the space of multipliers \cite{NN} $\Mlt{n}$ consists of functions having limited growth together with all their derivatives, $$ \Mlt{n}=\{f\in C^{\infty}(\RR{n})| \forall k\in\mathbb{N}_0\, \exists m\in \mathbb{N}_0: \, \sup_{x\in\RR{n},|a|\leq k} (1+||x||)^{-m}|\partial^{a}f(x)|\leq \infty \}. $$ The name comes from the following alternative characterization of $\Mlt{n}$. For a function $f$ on $\RR{n}$ define pointwise multiplication by $f$ as $$ (\MltS{f} g)(\bm{x})=f(\bm{x})g(\bm{x})\, (\forall g\in\SRR{n}, \forall \bm{x}\in\RR{n}). $$ Then\footnote{See \cite{NN} and \cite{Amman}, or generalization of this fact in Remark \ref{rmk:Framework/Mlt/Alt} below.} $$ \Mlt{n}=\{f:\RR{n}\rightarrow \mathbb{C}| \MltS{f}\in \mathcal{L}(\SRR{n})\}. $$ Note that by Remark \ref{rmk:Framework/VV=PD} it is enough to show that $\MltS{f}$ leaves $\SRR{n}$ invariant. We also define a complementary operator $\MltM{g}: \Mlt{n}\rightarrow \SRR{n}$ defined for each $g\in\SRR{n}$ and defined by $$ \MltM{g}[f]=f\cdot g. $$ \par There is a natural topology on $\Mlt{n}$ generated by seminorms\footnote{We follow \cite{Amman}.} $$ ||f||_{\Mlt{n},g,m,k}=||\MltM{g} f||_{\SRR{n},m,k},\quad g\in\SRR{n},\, m,k\in\mathbb{N}_0 $$ making all $\MltM{g}$ for each $g\in\SRR{n}$ continuous. Conversely, for some topological vector space $X$ an operator $F: X\rightarrow \Mlt{n}$ is continuous whenever $$ \MltM{g}\circ F\in\mathcal{L}(X,\SRR{n}), \quad \forall g\in\SRR{n}. $$ This way we can always replace $\Mlt{n}$-valued operators with $\SRR{n}$ ones. We postpone it to Remark \ref{rmk:Framework/Mlt/Strategy} to introduce first a generalization of the space of multipliers. \par For $m,n\in\mathbb{N}_0$ we introduce the mixed space $\SMlt{n}{m}$ as $$ \SMlt{n}{m}=\{f\in C^{\infty}(\RR{n+m})| \forall k,l\in\mathbb{N}_0\,$$ $$ \exists r\in \mathbb{N}_0: \, \sup_{\substack{\bm{x}\in\RR{n},y\in\RR{m},\\|a|+|b|\leq k}} (1+||\bm{x}||)^{l}(1+||\bm{y}||)^{-r}|\partial_{\bm{x}}^{a}\partial_{\bm{y}}^{b}f(\bm{x},\bm{y})|\leq \infty \}. 
$$ \begin{rmk}\label{rmk:Framework/Mlt/Alt} There is an equivalent characterization of $\SMlt{n}{m}$, $n,m\in\mathbb{N}{0}$ as a space of all functions $f$ that define a map $\MltS{f}:\SRR{m}\rightarrow \SRR{n+m}$ by pointwise multiplication $$ \MltS{f}[g](\bm{x},\bm{y})=f(\bm{x},\bm{y})g(\bm{y}). $$ Furthermore, $$\MltS{f}\in\mathcal{L}(\SRR{m},\SRR{n+m}), \forall f\in\SMlt{n}{m}.$$ For any $g\in\SRR{m}$ there is a map $\MltMn{g}{n}\in\mathcal{L}(\SMlt{n}{m},\SRR{m+n})$ such that $$ \MltMn{g}{n}[f]=\MltS{f}[g], $$ where the topology on $\SMlt{n}{m}$ is defined by $$ ||f||_{\SMlt{n}{m},g,l,k}=||\MltS{f}[g]||_{\SRR{n+m},l,k},\quad g\in\SRR{m},\, k,l\in\mathbb{N}_0. $$ Conversely, for an operator $F:X\rightarrow \SMlt{n}{m}$ with some topological vector space $X$ we have $F\in\mathcal{L}(X,\SMlt{n}{m})$ if and only if $$ \MltS{g}\circ F\in\mathcal{L}(X,\SRR{n+m})\,\forall g\in\SRR{m}. $$ \end{rmk} \begin{proof}\footnote{The presented proof follows \cite{Amman} with minimal changes.} If $f\in\SMlt{n}{m}$ and $g\in\SRR{m}$, then $\MltS{f}[g]\in\SRR{n+m}$ essentially due to (\ref{eq:xy<x}). Conversely, if a function $f$ on $\RR{n+m}$ induces a map $\MltS{f}: \SRR{m}\rightarrow \SRR{m}$, then by Remark \ref{rmk:Framework/VV=PD} this map is continuous. Let $g\in\SRR{m}$ be such that $$g(\bm{x})=1, \forall \bm{x}\in U$$ for some neighborhood of the origin $U$ and for $\bm{y}\in\RR{m}$ set $g_{\bm{y}}\in\SRR{m}$, $g_{\bm{y}}(\bm{x})=g(\bm{x}-\bm{y})$. Then $$ f(\bm{x})=\MltS{f}[g_y](\bm{x}), \quad \forall \bm{x}\in \bm{y}+U. $$ The smoothness of $f$ is clear. By continuity of $\MltS{f}$ for any multi-index $\alpha\in\mathbb{N}_0^{n+m}$ and any $r\in\mathbb{N}{0}$ there are some $C>0$ and $l,b\in\mathbb{N}_0$ such that for every $x\in\RR{n}$ $$ |\partial_{\alpha}f(\bm{x})|(1+||\bm{x}||)^r \leq C ||g_{\bm{y}}||_{\SRR{m},l,b}. $$ Finally, for some $C'>0$ $$ ||g_{\bm{y}}||_{\SRR{m},l,b}\leq C'(1+||\bm{y}||)^{l} $$ by $$1+||\bm{x}-\bm{y}||\leq (1+||\bm{x}||)(1+||\bm{y}||).$$ Thus $f\in\SMlt{n}{m}$. The rest of the statement is trivial. \end{proof} \begin{rmk}\label{rmk:Framework/Mlt/VectorTest} One can prove an analogue of Remark \ref{rmk:Framework/VectorTest}, identifying the $\SRR{n}$-valued vector functions of class $\SMlt{m}{k}$ with elements of $\SMlt{n+m}{k}$. \end{rmk} \begin{rmk}\label{rmk:Framework/Mlt/Strategy} The key strategy of dealing with the operators like $$F: \SRR{n}\rightarrow \SMlt{m}{k}$$ for $n,m,k\in\mathbb{N}_0$ is to always work with $\MltM{f}\circ F$ instead, where $f\in\SRR{n}$. For example, similarly to Subsection \ref{Framework/VV=PD} we may consider the space of parameter-dependent distributions $\mathcal{O_M}(\RR{m},\SpRR{n})$ defined along the same lines as $\mathcal{S}(\RR{m},\SpRR{n})$. Any $F\in \mathcal{O_M}(\RR{m},\SpRR{n})$ defines a map $F:\SRR{n}\rightarrow \Mlt{m}$ which by Remark \ref{rmk:Framework/Mlt/Alt} is continuous if $\MltM{f}\circ F\in\mathcal{L}(\SRR{m},\SRR{n})$ for every $f\in\SRR{n}$, for which we use Remark \ref{rmk:Framework/VV=PD}. This way we get that Subsections \ref{Framework/VV=PD} - \ref{Framework/Restrict} can be generalized to the case then the target space is of the form $\SMlt{m}{k}$, $m,k\in\mathbb{N}_0$. \end{rmk} We should separately take care of the augmentations of Subsection \ref{Framework/Ext}, because in general operators acting from the multiplier and mixed spaces may appear. We consider only one simple case, the generalizaiton is straightforward. \begin{exmp} Let $m,n,k\in \mathbb{N}_0$ and $F\in\mathcal{L}(\SRR{m}, \SRR{n})$. 
Then $\extD{m+1,\ldots,m+k}{n+1,\ldots,n+k}F$ can be continued to an element of $\mathcal{L}(\SMlt{m}{k},\SMlt{n}{k})$ such that \begin{equation} \extD{m+1,\ldots,m+k}{n+1,\ldots,n+k}F[g](\bm{x},\bm{y})=F[\evaluateP{m+1,\ldots,m+k}{m+k}g(\bm{y})](\bm{x}), \label{eq:Framework/Mlt/Ext-Expm} \end{equation} $$\forall \bm{x}\in\RR{n},\forall \bm{y}\in\RR{k},\forall g\in\SMlt{m}{k}. $$ Alternatively, it can be defined by the relation \begin{equation} \MltMn{f}{n}\circ\extD{m+1,\ldots,m+k}{n+1,\ldots,n+k}F= \extD{m+1,\ldots,m+k}{n+1,\ldots,n+k}F\circ \MltMn{f}{m}, \forall f\in\SRR{k}, \label{eq:Framework/Mlt/Ext-Formal} \end{equation} where in the right-hand sides $\extD{m+1,\ldots,m+k}{n+1,\ldots,n+k}F\in\mathcal{L}(\SRR{m+k},\SRR{n+k})$ is defined as in Subseciton \ref{Framework/Ext}. \end{exmp} \begin{proof} First take some $f\in\SRR{k}$ evaluate the right-hand sides of (\ref{eq:Framework/Mlt/Ext-Formal}) on a function $g\in\SMlt{m}{k}$: $$ \extD{m+1,\ldots,m+k}{n+1,\ldots,n+k}F\circ \MltMn{f}{m}[g](\bm{x},\bm{y})=F[\evaluateP{m+1,\ldots,m+k}{m+k}g(\bm{y})](\bm{x})f(\bm{y}). $$ The left-hand side is a Schwartz function of $(\bm{x},\bm{y})$, so pointwise multiplication by a function $\extD{m+1,\ldots,m+k}{n+1,\ldots,n+k}F[g]$ defined by (\ref{eq:Framework/Mlt/Ext-Expm}) induces a map $\SRR{k}\rightarrow \SRR{k+n}$, so by Remark \ref{rmk:Framework/Mlt/Alt} we have $\extD{m+1,\ldots,m+k}{n+1,\ldots,n+k}F[g]\in\SMlt{n}{k}$ and it automatically is the unique solution of (\ref{eq:Framework/Mlt/Ext-Formal}). The right-hand side of (\ref{eq:Framework/Mlt/Ext-Formal}) belongs to $\mathcal{L}(\SMlt{n}{k},\SRR{m+k})$, so applying once again Remark \ref{rmk:Framework/Mlt/Alt} we get $\extD{m+1,\ldots,m+k}{n+1,\ldots,n+k}F\in\mathcal{L}(\SMlt{n}{k},\SMlt{m}{k})$. \end{proof} \subsection{Elementary examples}\label{Framework/exmp} A lot of standard operations often arising in the distributions theory and QFT are operators in $\mathcal{L}\left(\SMlt{n}{m},\SMlt{n'}{m'}\right)$ for some $n,m,n',m'\in\mathbb{N}_0$. We have already constructed the pointwise multiplication operators. Here we list some other classes of operators which are convenient to use later in the text. \paragraph{Partial derivatives} For $n\in\mathbb{N}$ and a multi-index $\alpha\in\mathbb{N}_0^{n}$ we have $\partial^{\alpha}\in\mathcal{L}(\SRR{n},\SRR{n})$. Likewise, the derivatives can be defined for $\SMlt{n}{m}$, $n,m\in\mathbb{N}_0$ \paragraph{Integration with moving boundary} We can define $J_{+},J_{-}\in\mathcal{L}(\SRR{})$ and $J\in\mathcal{L}(\SRR{},\SRR{2})$ as \begin{equation}\label{eq:Framework/exmp/J-} J_{-}[f](t)=\int_{-\infty}^{t}f(t)dt, \, \forall t\in\RR{}, \forall f\in\SRR{}; \end{equation} \begin{equation} J_{+}[f](t)=\int_{t}^{+\infty}f(t)dt, \, \forall t\in\RR{}, \forall f\in\SRR{}; \end{equation} \begin{equation}\label{eq:Framework/exmp/J} J_{}[f](t,t')=\int_{t}^{t'}f(t)dt, \, \forall t,t'\in\RR{}, \forall f\in\SRR{}. \end{equation} These operators interact with the derivatives in obvious ways, e.g. $$\partial\circ J_{+}=-J_{+}\circ \partial=\mathbb{1}_{\mathcal{L}(\SRR{})},$$ \begin{equation}\label{eq:Framework/exmp/Int-dif} \partial_1\circ J=-\partial_2\circ J=\mathbb{1}_{\mathcal{L}(\SRR{})}. \end{equation} \par It is worth noting that the trivial property $J_{+}[f](t)=\lim_{t'\rightarrow +\infty}J[f](t,t')$ survives the augmentation, i.e. 
for any $n\in\mathbb{N}$ and $f\in\SRR{n}$ we have \begin{equation}\label{Framework/exmp/IntLim} \lim_{t'\rightarrow \infty} \evaluateP{2}{1}(\extD{2,\ldots,n}{3,\ldots,n+1}J[f])(t')=\extD{2,\ldots,n}{2,\ldots,n}J_{+}[f], \end{equation} and similarly for analogous relations. \par For future use we introduce more complicated operators $ \intSimplex{n}\in\mathcal{L}(\SRR{n},\SRR{2}) $ and $\intUSimplex{n}{\pm}\in\mathcal{L}(\SRR{n},\SRR{1})$ defined by $$ \intSimplex{n}[f](t,t')=\int_{t}^{t'}dt_1\int_{t}^{t_2}dt_2\cdots \int_{t}^{t_{n-1}}dt_n f(\arrs{t}{n}),\, \forall f\in\SRR{n},\,\forall t,t'\in\RR{}; $$ $$ \intUSimplex{n}{+}[f](t)=\int_{t}^{+\infty}dt_1\int_{t}^{t_2}dt_2\cdots \int_{t}^{t_{n-1}}dt_n f(\arrs{t}{n}),\, \forall f\in\SRR{n},\,\forall t,t'\in\RR{} $$ $$ \intUSimplex{n}{-}[f](t)=\int_{-\infty}^{t}dt_1\int_{t}^{t_2}dt_2\cdots \int_{t}^{t_{n-1}}dt_n f(\arrs{t}{n}),\, \forall f\in\SRR{n},\,\forall t,t'\in\RR{}. $$ Analogues of (\ref{Framework/exmp/IntLim}) can be written for this class of operators. \begin{rmk}\label{rmk:Framework/exmp/Strong} If we treat $\SRR{n+k}$ as $\mathcal{S}(\RR{k},\SRR{n})$ and think of $\SRR{n}$ as a subspace of $L^2(\RR{n})$, then the partial derivatives with respect to the parameters define partial derivatives of the vector-valued functions with respect to the parameters. Moreover, the integration with respect to the parameters\footnote{Here we suppose that the first $k$ arguments are parameters as in Subsection \ref{eq:Framework/extProd}} such as $$ \extD{k+1,\ldots,k+n}{k+1,\ldots,k+n}J[f] \, (f\in\SRR{n+k}) $$ coincides with the Bochner integral of the vector-valued function $\evaluateP{1,\ldots,k}{1,\ldots,k}f$. This follows from approximation of $f\in\SRR{n+k}$ by $$ f=\lim_{n\rightarrow \infty} g_n\otimes h_n, g_n\in\SRR{k}, h_n\in\SRR{n} $$ and then an approximation of $h_n$ by sequences of the simple functions. \end{rmk} \paragraph{Precompositions} Clearly, if $A$ is an injective\footnote{If $A$ is not injective, then $A^*$ is valued in the multiplier or mixed space} linear operator\footnote{We limit ourselves to the linear functions for simplicity. In general a function should belong to the multipliers space (see Section \ref{Framework/Mlt}) together with its inverse, see Exercise 2.8 of \cite{NN}.} $\RR{n}\rightarrow\RR{m}$, then it defines an operator\footnote{This notation, of course, is in conflict with the one for adjoint operators. The later, however, is used here only episodically, since another concept of conjugated operator is more important for us. Anyway, we never introduce adjoint of a finite-dimensional operators, so no confusion is possible.} $A^*\in\mathcal{L}(\SRR{m},\SRR{n}),$ $$ A^*[f](\bm{x})=f(A\bm{x}), \, \forall\in\SRR{n}, \, \forall\bm{x}\in\RR{m}. $$ In particular, the diagonal map $D_n: \RR{n}\rightarrow\RR{2n}$, $$D_n(\bm{x})=(\bm{x},\bm{x}), \, \forall \bm{x}\in\RR{n}$$ defines $\DiagM{n}\in\mathcal{L}(\SRR{2n},\SRR{n})$ such that $\DiagM{n}f(\bm{x})=f(\bm{x},\bm{x})$ for $f\in\SRR{n}$ and $\bm{x}\in\RR{n}$. \par Another important class of maps on comes from the permutation group. For $\sigma\in\symmgr{n}$ and $k\in\mathbb{N}$ we set $(\sigma)_{(k)}: \RR{n\cdot k}\rightarrow \RR{n\cdot k}$ as $(\sigma)_{(k)}(\arrs{\bm{x}}{n})=(\arrsP{\bm{x}}{n}{\sigma}),\, \bm{x}_j\in\RR{k}, j=1,\ldots, n$. We define $\permK{k}{\sigma}=(\sigma^{-1})_{(k)}^*$ defining a representation of $\symmgr{n}$ on $\SRR{n\cdot k}$ and $$ \symmze{k}{n}=\frac{1}{n!}\sum_{\sigma\in\symmgr{n}}\permK{k}{\sigma}. 
$$ \paragraph{Functions as operators, tensor and pointwise multiplication} Recall that we identify $\SRR{n}$ with $\mathcal{L}(\SRR{0},\SRR{n})$ by treating $f\in\SRR{n}$ as a map $$\mathbb{C}\ni c\mapsto c f.$$ \par The augmentation define the tensor multiplication by $f$, e.g. $$\extD{1,\ldots,m}{1,\ldots,m}f[g]=g\otimes f,\forall f\in\SRR{m},\forall g\in\SRR{n}$$. The following example illustrates usage of this operation together with the pointwise multiplication of Subsection \ref{Framework/Mlt}. The subsequent remark points out the issue of symbolic notation of the next subsection which one should be aware of. \begin{exmp}\label{exmp:Framework/2products} There is an important relation between the tensor multiplication above and the pointwise multiplication of Subsection \ref{Framework/Mlt}. For example, let $n,m,k\in\mathbb{N}_0$, $f\in\SRR{n+k}$ and consider a map $F\in\mathcal{L}(\SRR{m+k},\SRR{n+m+k})$ $$ F[g](\bm{x},\bm{y},\bm{z})=g(\bm{x},\bm{y})f(\bm{y},\bm{z}), \,\forall g\in\SRR{m+k}, \forall \bm{x}\in\RR{m},\forall \bm{y}\in\RR{k},\forall \bm{z}\in \RR{n}. $$ On the one hand, we may first tensor with $f$ $$ \extD{1,\ldots,m+k}{1,\ldots,m+k}f[g](\bm{x},\bm{y},\bm{y'},\bm{z})=f(\bm{x},\bm{y})g(\bm{y}',\bm{z})\,\forall g\in\SRR{m+k}, \forall \bm{x}\in\RR{m},\forall \bm{y},\bm{y}'\in\RR{k},\forall \bm{z}\in \RR{n}, $$ and then collapse $\bm{y}$ and $\bm{y}'$ into one variable, \begin{equation} \label{eq:Framework/exmp/mult=tensor} F=\extD{1,\ldots,m,m+2k+1,\ldots,m+2k+n}{1,\ldots,m,m+k+1,\ldots,m+k+n}\DiagM{k}\circ \extD{1,\ldots,m+k}{1,\ldots,m+k}f. \end{equation} On the other hand, we may start from tensoring with a constant function by $\extD{1,\ldots,m+k}{1,\ldots,m+k}\mathbb{1}_{\Mlt{n}}\in\mathcal{L}(\SRR{m+k},\SRR{n+m+k}),$ $$ \extD{1,\ldots,m+k}{1,\ldots,m+k}\mathbb{1}_{\Mlt{n}}[g](\bm{x},\bm{y},\bm{z})=g(\bm{x},\bm{y})\,\forall g\in\SRR{m+k}, \forall \bm{x}\in\RR{m},\forall \bm{y}\in\RR{k},\forall \bm{z}\in \RR{n}, $$ and then multiply by $f$, ignoring first $n$ variables ($\bm{x}$ in the notation above), getting \begin{equation} F=\extD{1,\ldots,n}{1,\ldots,n}\MltS{f}\circ \extD{1,\ldots,m+k}{1,\ldots,m+k}\mathbb{1}_{\Mlt{n}}.\label{eq:Framework/exmp/tensor=mult} \end{equation} \end{exmp} \begin{rmk}\label{rmk:exmp:Framework/2products} In the setting of the previous example the forms (\ref{eq:Framework/exmp/mult=tensor}) and (\ref{eq:Framework/exmp/tensor=mult}). However, only the later generalizes easily to the case $f\in\Mlt{n}$. Indeed, the map $\DiagM{k}$ does maps $\SMlt{k}{k}$ into $\Mlt{k}$ rather than $\SRR{k}$, so in the form (\ref{eq:Framework/exmp/mult=tensor}) it takes a separate effort to prove that $F\in\mathcal{L}(\SRR{m+k},\SRR{n+m+k})$. At the same time, $\MltS{f}$ can be replaced by $\MltM{f}$ which extends to a continuous operator $\Mlt{n}\rightarrow\Mlt{n}$ for obvious reasons. \end{rmk} \paragraph{The Fourier transform} $$ \Fourier{n}{\eta}[f](\bm{x})=\int{e^{\eta \mathrm{i} \bm{k}\cdot \bm{x}}f(\bm{k})d^{k}\bm{x}}, \forall f\in \SRR{n},\forall \bm{k}\in\RR{n} $$ is a well-known invertible operator in $\SRR{n}$. Here $\eta=\pm 1$. Note that $\Fourier{n}{\eta}$ has a restriction $\restrD{1,\ldots,n}{n+1,\ldots,2n}\Fourier{n}{\eta}\in\Mlt{2n}$, \begin{equation} \restrD{1,\ldots,n}{n+1,\ldots,2n}\Fourier{n}{\eta}(\bm{x},\bm{k})=e^{i\eta\bm{k}\cdot \bm{x}}. 
\label{eq:Framework/exmp/Fourier} \end{equation} \subsection{Symbolic notation and calculus}\label{Framework/Symbolic} In this subsection we generalize the standard symbolic integral notation from the distributions theory \begin{equation} F[f]=\int F(\bm{x})f(\bm{x})d^n\bm{x}, \, F\in\SpRR{n}, \,f\in\SRR{n}. \label{Framework/symb/std} \end{equation} We introduce all the notions and operations for the spaces $\SRR{n}$, $n\in\mathbb{N}_0$, but use their generalization to the multiplier and mixed spaces as explained in Subsection \ref{Framework/Mlt}. Before giving it in the full generality let us first make one step from (\ref{Framework/symb/std}) \begin{notn}\label{notn:symb-dist-eval} We write \begin{equation} F[f](\bm{y})=\int F\symbDistArgs{\bm{x}}{\bm{y}}f(\bm{x})d^n\bm{x}, \label{eq:Framework/Symb/F[f]} \end{equation} where $F\in\mathcal{L}(\SRR{n},\SRR{m})$, $f\in\SRR{n}$ and $y\in\RR{m}$. We call $\bm{x}$ the \emph{distributional argument} in contrast to the \emph{parameters} $\bm{y}$ of $F$. The compose symbol $F\symbDistArgs{\bm{x}}{\bm{y}}$ we call the \emph{symbolic kernel} of $F$. \end{notn} Note that (\ref{Framework/symb/std}) reduces to (\ref{eq:Framework/Symb/F[f]}) if $m=0$. \begin{rmk} By the Schwartz kernel theorem we can could treat the symbolic kernel of $F\in\mathcal{L}(\SRR{n},\SRR{m})$ as an element of $\SpRR{n+m}$, keeping (\ref{eq:Framework/Symb/F[f]}). But this is not in the line with the goals we set in the beginning of the section: to use minimally special properties of the Schwartz spaces and to highlight that all the objects we deal with are, in one way or another, linear operators between the Schwartz spaces. \end{rmk} The notation (\ref{eq:Framework/Symb/F[f]}) resembles action of a matrix on a vector. So, it is natural to use the matrix product-like notation for the symbols of compositions of operators. \begin{notn}\label{notn:symb-dist-compose} Let $n\in\mathbb{N}$. For each $j=1,\ldots,n$ take $F_j\in\mathcal{L}(\SRR{m_j},\SRR{m_{j-1}})$, where $m_j\in\mathbb{N}_0$, $j=0,\ldots,n$. Then we write \begin{equation}\label{eq:notn:symb-dist-compose} (F_1\circ F_2\circ\cdots\circ F_n)\symbDistArgs{\bm{x}_{n}}{\bm{x_0}}= \end{equation} $$\int F_1\symbDistArgs{\bm{x}_{1}}{\bm{x_0}}F_1\symbDistArgs{\bm{x}_{2}}{\bm{x_1}}\cdots F_n\symbDistArgs{\bm{x}_{n-1}}{\bm{x_n}}\prod_{j=1}^{n-1}d^{m_j}\bm{x}_j,\,\bm{x}_0\in\RR{m_0},\,\bm{x}_n\in\RR{m_n}. $$ \end{notn} We wrote for one time the product of the symbolic kernels explicitly to underline that it is ordered in a particular way. As these expressions are formal, we have no right to permute the multipliers for now, eventhough essentially they are numerical distributions. At the same time, we may safely use the ``Fubini's theorem" for symbolic integrals by construction (and associativity of the composition). As we make no distinguishment between $\SRR{n}$ and $\mathcal{L}(\SRR{0},\SRR{n})$, we naturally identify $F\in \mathcal{L}(\SRR{0},\SRR{n})$ with the function $F\in\SRR{n}$. Then Notation \ref{notn:symb-dist-compose} reduces to Notation \ref{notn:symb-dist-eval} when $n=2$ and $m_2=0$. We underline that the ordering of symbolic kernels in (\ref{eq:notn:symb-dist-compose}) is not arbitrary, instead, it reproduces the order of operators in the composition. \par The next step is to introduce separate notation for special classes of operators. \begin{notn}\label{notn:Framework/symb/stdop} Fix $n,m\in\mathbb{N}_0$ and $F\in\mathcal{L}(\SRR{n},\SRR{m})$. 
We set: \begin{itemize} \item For $\alpha\in\mathbb{N}_0^m$, $$\partial_{\bm{y}}^{\alpha}F\symbDistArgs{\bm{x}}{\bm{y}}=(\partial^{\alpha}\circ F )\symbDistArgs{\bm{x}}{\bm{y}} (\bm{x}\in\RR{n},\,\bm{y}\in\RR{m});$$ \item For an injective map $A\in\mathcal{L}(\RR{k},\RR{m})$ we set\footnote{As before, the non-injective maps are allowed, but lead to multiplier (or mixed) space-valued operators} $$ F\symbDistArgs{A\bm{x}}{\bm{y}}=({A}^*\circ F)\symbDistArgs{\bm{x}}{\bm{y}}\, (\bm{x}\in\RR{n},\,\bm{y}\in\RR{m}); $$ \item For $f\in\Mlt{m+k}$ ($k\in\mathbb{N}_0$) we set\footnote{Clearly, $\MltM{f}\circ \extD{1,\ldots,m}{1,\ldots,m}\mathbb{1}_{\Mlt{k}}\circ F\mathcal{L}(\SRR{n},\SRR{n+m+k})$. We add this case despite our decision to not discuss the multiplier and mixed spaces, because it concerns the issues not appearing for the Schwartz spaces. We will need this operation to deal with quantum fields and Hamiltonian densities.} $$ f(\bm{y},\bm{z})F\symbDistArgs{\bm{x}}{\bm{y}}=(\MltM{f}\circ \extD{1,\ldots,m}{1,\ldots,m}\mathbb{1}_{\Mlt{k}}\circ F)\symbDistArgs{\bm{x}}{\bm{y},\bm{z}} (\bm{x}\in\RR{n},\,\bm{y}\in\RR{m},\bm{z}\in\RR{k}); $$ \item For $n',m'\in\mathbb{N}_0$ and $F'\in \mathcal{L}\left(\SRR{n'},\SRR{m'}\right)$: $$ (F\otimes F')\symbDistArgs{\bm{x},\bm{x'}}{\bm{y},\bm{y'}}= F\symbDistArgs{\bm{x}}{\bm{y}}F'\symbDistArgs{\bm{x}'}{\bm{y}'},\,\bm{x}\in\RR{n},\bm{y}\in\RR{m},\bm{x}'\in\RR{n'},\bm{y}'\in\RR{m'}. $$ \end{itemize} \end{notn} \begin{rmk} Let us look on the consistency issues of the notation above. \begin{itemize} \item Notation \ref{notn:Framework/symb/stdop} becomes tautological if we put $m=0$; \item If $F$ and $A$ are as in Notation \ref{notn:Framework/symb/stdop} and $B\in\mathcal{L}(\RR{l},\RR{k})$, then $$ F\symbDistArgs{\bm{x}}{BA\bm{y}} $$ has two interpretations $$ ((BA)*\circ F)\symbDistArgs{\bm{x}}{\bm{y}} \,\rm{and}\, (B^*\circ A^*\circ F)\symbDistArgs{\bm{x}}{\bm{y}}. $$ But clearly $(BA)^*=B^*\circ A^*$, so the interpetations coincide; \item for $F,F',A$ as in Notation \ref{notn:Framework/symb/stdop} and $A'\in\mathcal{L}\left(\RR{k'},\RR{m'}\right)$ the expression $$ F\symbDistArgs{\bm{x}}{A\bm{y}}F'\symbDistArgs{\bm{x}'}{A'\bm{y}'} $$ has two possible interpretations: $$ (A\oplus A')^*\circ (F\otimes F') \quad \rm{and}\quad (A^*\circ F)\otimes ({A'}^* \circ F')=(A^*\otimes {A'}^*)\circ (F\otimes F'), $$ where we have used Remark \ref{rmk:Framework/Augment/algebra} to rewrite the second version. By a direct computation $(A\oplus A')^*=A^*\otimes {A'}^*$, so the interpretations are equivaent; \item For $F$ as in Notation \ref{notn:Framework/symb/stdop} and $f\in\SRR{m+k}$ expression $$ f(\bm{y},\bm{z})F\symbDistArgs{\bm{x}}{\bm{y}} $$ has two possible interpretations: $$ (\MltM{f}\circ \extD{1,\ldots,m}{1,\ldots,m}\mathbb{1}_{\Mlt{k}}\circ F)\symbDistArgs{\bm{x}}{\bm{y},\bm{z}} \,\rm{and}\, \extD{k+1,\ldots,k+m,2k+m+1,\ldots,2k+m+n}{k+1,\ldots,k+m+n}\DiagM{k}\circ(f\otimes F). $$ Here in the first case we treat $f$ as an element of $\Mlt{m+k}\supset \SRR{m+k}$, while in the second case as an operator in $\mathcal{L}(\SRR{0},\SRR{m+k})$. They coincide by Example \ref{exmp:Framework/2products}. As explained in Remark \ref{rmk:exmp:Framework/2products}, the separate definition of pointwise multiplication for the multiplier spaces can not be discarded. 
\end{itemize} \end{rmk} \begin{rmk}\label{rmk:Framework/symb/stdop-props} The operations introduced above satisfy the following: \begin{itemize} \item The partial derivatives commute among themselves; \item Derivatives interact with the linear changes of variables as usual, according to the chain rule; \item The Leibniz rule for partial derivatives holds; \item For $F,F'$ as in Notation \ref{notn:Framework/symb/stdop} \begin{equation}\label{eq:Framework/ProdComm-symb} F\symbDistArgs{\bm{x}}{\bm{y}}F'\symbDistArgs{\bm{x}'}{\bm{y}'}= F'\symbDistArgs{\bm{x}'}{\bm{y}'} F\symbDistArgs{\bm{x}}{\bm{y}} \end{equation} \item Let $n,n'\in\mathbb{N}$. Take $$ F_j\in\mathcal{L}\left(\SRR{m_j},\SRR{m_{j-1}}\right), j=1,\ldots,n $$ $$ F'_j\in\mathcal{L}\left(\SRR{m'_j},\SRR{m'_{j-1}}\right), j=1,\ldots,n', $$ where $\arrsM{m}{0}{n},\arrsM{m'}{0}{n'}\in\mathbb{N}_0$. Then \begin{equation} \left(\int \left(\prod_{j=1}^{n}F_j\symbDistArgs{\bm{x}_{j}}{\bm{x}_{j-1}}\right) \prod_{j=1}^{n-1} d^{m_j}\bm{x}_j\right) \left(\int \left(\prod_{j=1}^{n'}F'_j\symbDistArgs{\bm{x}'_{j}}{\bm{x}'_{j-1}}\right) \prod_{j=1}^{n'-1} d^{m'_j}\bm{x}'_j\right)= \label{eq:symb-tensor-compos-distr} \end{equation} $$\int \left(\prod_{j=1}^{n}F_j\symbDistArgs{\bm{x}_{j}}{\bm{x}_{j-1}}\right) \left(\prod_{j=1}^{n'}F'_j\symbDistArgs{\bm{x}'_{j}}{\bm{x}'_{j-1}}\right) \left(\prod_{j=1}^{n}d^{m_j}\bm{x}_j\right) \left(\prod_{j=1}^{n'}d^{m'_j}\bm{x}'_j\right), $$ $$ \bm{x}_0\in \RR{m_0}, \quad \bm{x}_n\in \RR{m_n} $$ \end{itemize} \end{rmk} \begin{proof} Most of the statements are trivial, so we only sketch the proof of some of the claims. The Leibniz rule for the defining form of a symbolic expression (\ref{eq:notn:symb-dist-compose}) is trivial by definition of the partial derivatives. To deal with tensor products use $$ \partial^{(a,b)}=\partial^{a}\otimes \partial^{b}, $$ where $a\in\mathbb{N}_0^m$ and $b\in\mathbb{N}_0^{m'}$ are two multiindices and $(a,b)\in\mathbb{N}_0^{m+m'}$ is their concatenation ($m,m' \in\mathbb{N}_0$), together with Remark \ref{rmk:Framework/Augment/algebra}. \par The commutativity (\ref{eq:Framework/ProdComm-symb}) is due to Remark \ref{rmk:Framework/ProdComm}. The relation (\ref{eq:symb-tensor-compos-distr}) holds by Remark \ref{rmk:Framework/Augment/algebra}. \end{proof} It is natural to write the symbolic kernel of $\mathbb{1}_{\mathcal{L}(\SRR{n})}$ ($n\in\mathbb{N}_0$) as\footnote{It can also be considered as a special case of Notation \ref{notn:Framework/symb/restrict}} $$ \mathbb{1}_{\mathcal{L}(\SRR{n})}\symbDistArgs{\bm{x}}{\bm{y}}=\delta^{(n)}(\bm{x}-\bm{y}),\,\bm{x},\bm{y}\in\RR{n}. $$ Then we can introduce the following formal rules. \begin{notn}\label{notn:symb-dist-1} $$ \int \mathbb{1}_{\mathcal{L}(\SRR{n})}\symbDistArgs{\bm{x}}{\bm{y}} F\symbDistArgs{\bm{z}}{\bm{x},\bm{u}} d^n\bm{x}=F\symbDistArgs{\bm{z}}{\bm{y},\bm{u}}, $$ and $$ \int F'\symbDistArgs{\bm{y},\bm{z}}{\bm{u}} \mathbb{1}_{\mathcal{L}(\SRR{n})}\symbDistArgs{\bm{x}}{\bm{y}} d^n\bm{y}=F'\symbDistArgs{\bm{x},\bm{z}}{\bm{u}}, $$ for appropriately chosen $F$ and $F'$. We assume that such an integral can always be separated from the long symbolic expression by the ``Fubini theorem" and that the factors with independent variables commute as in Remark \ref{rmk:Framework/symb/stdop-props}.
\end{notn} \begin{exmp}By Remark \ref{rmk:Framework/Augment-as-tensor}, for $m,m',n',n,k\in\mathbb{N}_0$, $F\in\mathcal{L}(\SRR{n+k},\SRR{m})$, and $G\in\mathcal{L}\left(\SRR{n'},\SRR{m'+k}\right)$ we have $$\extD{{m+1,\ldots,m+m'}}{n+1,\ldots,n+m'}F\circ \extD{1,\ldots,n}{1,\ldots,n}G=(F\otimes \mathbb{1}_{\mathcal{L}(\SRR{m'})})\circ(\mathbb{1}_{\mathcal{L}(\SRR{n})}\otimes G),$$ so \begin{equation}\label{eq:Framework/Symbolic/ExtProd} (\extD{{m+1,\ldots,m+m'}}{n+1,\ldots,n+m'}F\circ \extD{1,\ldots,n}{1,\ldots,n}G)\symbDistArgs{\bm{x},\bm{x}'}{\bm{y},\bm{y}'}= \end{equation} $$ \int F\symbDistArgs{\bm{x}'',\bm{z}}{\bm{y}}\mathbb{1}_{\mathcal{L}(\SRR{m'})}\symbDistArgs{\bm{y}''}{\bm{y}'} G\symbDistArgs{\bm{x}'}{\bm{y}'',\bm{z}}\mathbb{1}_{\mathcal{L}(\SRR{n})}\symbDistArgs{\bm{x}}{\bm{x}''}d^{k}\bm{z}d^{m'}\bm{y}''d^{n}\bm{x}''= $$ $$ \int F\symbDistArgs{\bm{x},\bm{z}}{\bm{y}} G\symbDistArgs{\bm{x}'}{\bm{y}',\bm{z}}d^{k}\bm{z}, $$ $$\bm{x}\in\RR{n},\bm{x}'\in\RR{n'},\bm{y}\in\RR{m},\bm{y}'\in\RR{m'}. $$ \end{exmp} Finally, to restore all the power of the standard symbolic notation for distributions, we have to allow formal transformations under the integral sign. \begin{notn}\label{notn-symb-dist} Let $n,m,r,k,\arrs{m}{k},\arrs{m'}{k}\in\mathbb{N}_0$, $$F_j\in\mathcal{L}\left(\SRR{m_j},\SRR{m'_j}\right),\, x_j\in\mathcal{L}(\RR{n+m+r},\RR{m_j}),\, y_j\in\mathcal{L}(\RR{n+m+r},\RR{m'_j}) $$ and $F\in\mathcal{L}(\SRR{n},\SRR{m})$. We write $$ F\symbDistArgs{\bm{x}}{\bm{y}}=\int{\prod_{j=1}^k F_j\symbDistArgs{x_j(\bm{x},\bm{y},\bm{z})}{y_j(\bm{x},\bm{y},\bm{z})}d^r\bm{z}} $$ if for general $f\in\SRR{n}$ and $\bm{y}\in\RR{m}$ the expression $$ \int{\prod_{j=1}^k F_j\symbDistArgs{x_j(\bm{x},\bm{y},\bm{z})}{y_j(\bm{x},\bm{y},\bm{z})}f(\bm{x})d^r\bm{z} d^n\bm{x}} $$ can be transformed into one that translates to $F[f](\bm{y})$ by Notations \ref{notn:symb-dist-compose}, \ref{notn:Framework/symb/stdop}, and \ref{notn:symb-dist-1} using \begin{itemize} \item Formal invertible linear transformations of the integration variables, \item Formal integration by parts. \end{itemize} \end{notn} It is easy to see that the necessary change of variables is essentially unique (up to irrelevant renaming), so the construction above is well-defined. The ``Fubini theorem" and the properties stated in Remark \ref{rmk:Framework/symb/stdop-props} are still valid and can be extended to the new operations, namely linear transformations of the distributional arguments and partial derivatives with respect to them. \begin{rmk}\label{rmk:symb-prod-cond} Note that the order of formal kernels in the product is still not arbitrary: after all changes of variables it should be presented in the form (\ref{eq:notn:symb-dist-compose}), where, in particular, each integration variable appears first (if we are counting from left to right) as a distributional argument, then as a parameter. Notations \ref{notn:Framework/symb/stdop} and \ref{notn:symb-dist-1} do not affect this rule. One may show that if the order can be changed so that the conditions of Notation \ref{notn-symb-dist} are preserved, then the resulting operator does not change\footnote{Roughly speaking, such a reordering is possible only for tensor products, for which the commutativity was already established in Remark \ref{rmk:Framework/symb/stdop-props}}. In this light, we may postulate that the product of symbolic kernels is commutative, assuming that the expression is defined whenever it can be reordered so that the conditions of Notation \ref{notn-symb-dist} are satisfied.
However, to avoid possible confusions, we will rarely use this possibility. \end{rmk} The restrictions can be naturally incorporated into this calculus. \begin{notn}\label{notn:Framework/symb/restrict} For $n,m,k\in\mathbb{N}_0$ let $F\in\mathcal{L}(\SRR{n},\SRR{m+k})$ and $G=\intD{1,\ldots,k}{1,\ldots,k}F$ (or equivalently $F=\restrD{1,\ldots,k}{1,\ldots,k}G$). Then $$ G(\bm{x},\bm{y}|\bm{z})=F(\bm{y}|\bm{x},\bm{z}), \bm{x}\in\RR{k},\bm{y}\in\RR{n},\bm{z}\in\RR{m}. $$ \end{notn} However, this identification makes the notation even more confusing, so we will use it only in selected situations. In particular, Notation \ref{notn:Framework/symb/restrict} and (\ref{eq:Framework/exmp/Fourier}) give $$ \Fourier{n}{\eta}(\bm{x},\bm{k})=e^{i\eta\bm{k}\cdot \bm{x}}, $$ which makes the standard integral notation for the (partial) Fourier transform of distributions a special case of (\ref{eq:Framework/extProd}). \par \begin{rmk}\label{rmk:Framework/symb/disc-restriction} In some cases it is convenient to go further and identify the formal symbol with a possibly discontinuous function making (\ref{eq:Framework/Symb/F[f]}) precise. For example, we write $$ J_{+}\symbDistArgs{t'}{t}=\theta(t-t'), \, J_{-}\symbDistArgs{t'}{t}=\theta(t'-t),$$ $$J\symbDistArgs{t'}{t_1,t_2}=\theta(t'-t_1)\theta(t_2-t')-\theta(t'-t_2)\theta(t_1-t'), $$ and treat the Heaviside function just as $\theta\in L^{\infty}(\RR{})$. Note that we are always interested only in the $L^{\infty}$ class of such discontinuous kernels. This identification, unlike the one in Notation \ref{notn:Framework/symb/restrict}, is ill-behaved if derivatives are involved. We also point out that in this form it is absolutely not clear that $J_{\pm}$ and $J$ are smooth with respect to their parameters, although from the direct definitions (\ref{eq:Framework/exmp/J-}-\ref{eq:Framework/exmp/J}) we know that they are. \end{rmk} Finally, we also use another intuitive notation for the integral operators of Subsection \ref{Framework/exmp}, writing, for example, $ \int_{-\infty}^t (\ldots)dt' $ instead of $\int J_+\symbDistArgs{t'}{t}(\ldots)dt'$ and similarly for $J$, $J_-$, $\intSimplex{n}{}$ and $\intUSimplex{n}{\pm}$ ($n\in\mathbb{N}_0$). \par To sum up, we see that all the operations on operators we have introduced have a simple intuitive presentation in the language of symbolic integrals. Most of the formal manipulations with such integrals are allowed by the facts we have established. However, overuse of this notation may hide some non-trivial operations, so we often provide at least one precise translation for the formal notation used to avoid any confusion. \section{The domain \texorpdfstring{$\mathcal{D}_{\mathcal{S}}$}{DS} and the secondary quantization}\label{DS} \subsection{Construction of \texorpdfstring{$\mathcal{D}_{\mathcal{S}}$}{DS}} To begin with, we define the space of $l$-particle states. We start by making the space $\SRR{3l}$ into a pre-Hilbert space with the scalar product \begin{equation}\label{eq:DS/SProd} (\Psi,\Psi')=\int \overline{\Psi\left(\arrs{\vec{p}}{l}\right)} \Psi'\left(\arrs{\vec{p}}{l}\right)d^{3l}\arrs{\vec{p}}{l}, \, \forall \Psi,\Psi'\in\SRR{3l}. \end{equation} \begin{rmk} In Lorentz-invariant theories a $\frac{1}{2\omega_0(\vec{k})}$ multiplier\footnote{Here $\omega_0$ is the dispersion function defined below.} is often added to the measure of integration to make the measure Lorentz invariant (sometimes together with a power of $2\pi$ for convenience).
Here we do not assume the Lorentz invariance and chose the simplest possible measure of integration. Note, that as long as the dispersion function $\omega_0$ is as in Definition \ref{def:dispRel}, such a factor can be always absorbed into a redefinition of the wave function, without leaving the $\SRR{3l}$ class. \end{rmk} \begin{rmk}\label{rmk:DS/tau-cont} $$ |(\Psi,\Psi)|\leq C||\Psi||_{\SRR{3l}, k,0}^2 $$ for sufficiently large $C$ and $k$. Thus the tautological map from $\SRR{3l}$ (with the standard topology) to the homonymous pre-Hilbert space is continuous. \end{rmk} Now we take into account the bosonic statistics and define the $$ \mathcal{D}_{\mathcal{S}}^{l}=\symmze{3}{}\SRR{3l} $$ and \begin{equation} \label{eq:DS-def} \mathcal{D}_{\mathcal{S}}={\bigoplus_{l=0}^{\infty}}_{\mathrm{alg}}\mathcal{D}_{\mathcal{S}}^{l}. \end{equation} For $\Psi\in\mathcal{D}_{\mathcal{S}}$ we denote by $\Psi_l\in\mathcal{D}_{\mathcal{S}}^{l}$ its $l$th component. The scalar product on $\mathcal{D}_{\mathcal{S}}$ is given by $$ (\Psi,\Psi')=\sum_{l=0}^{\infty}(\Psi_l,\Psi'_l), \, \forall \Psi,\Psi'\in\mathcal{D}_{\mathcal{S}}. $$ The completion of $\mathcal{D}_{\mathcal{S}}$ with respect to that product is the Hilbert space of physical states $$\overline{\mathcal{D}_{\mathcal{S}}}=\mathcal{H}_{\mathrm{phys}}.$$ \begin{rmk} The space $\mathcal{H}_{\mathrm{phys}}$ is important for the physical interpretation of the Hamiltonian perturbative QFT construction in Section \ref{HpQFT}. It is also relevant for the conjugated operator notion (Definition \ref{def:DS/conj}) and in particular for the unitarity condition in Subsection \ref{HpQFT/AdmTh}. Besides these two contexts, we ignore the pre-Hilbert space structure and use instead the topology of the summands of (\ref{eq:DS-def}) inherited from $\SRR{3l}$. \end{rmk} It is convenient to introduce the truncated spaces $$ \mathcal{D}_{\mathcal{S}}^{\leq l}={\bigoplus_{l'=0}^{l}}\mathcal{D}_{\mathcal{S}}^{l'}. $$ We also define the vacuum vector as $$\Omega_0=1\in\mathbb{C}=\mathcal{D}_{\mathcal{S}}^{0}.$$ Clearly, $$ (\Omega_0,\Omega_0)=1. $$ \subsection{Operators on \texorpdfstring{$\mathcal{D}_{\mathcal{S}}$}{DS}}\label{DS/op} As $\mathcal{D}_{\mathcal{S}}^{l}$, $l\in\mathbb{N}$ are topological spaces, the space of operators $\mathcal{L}(\mathcal{D}_{\mathcal{S}}^{l},\mathcal{D}_{\mathcal{S}}^{l'})$ can be defined as usual. The following characterization of it will be convenient for us later. \begin{rmk}\label{rmk:DS/op/sigma-proj} There is a one-to-one correspondence between $\mathcal{L}(\mathcal{D}_{\mathcal{S}}^{l},\mathcal{D}_{\mathcal{S}}^{l'})$ and the set $$ \left\{A\in\mathcal{L}(\SRR{3l},\SRR{3l'})| \symmze{3}{}[A]=A\right\}=\symmze{3}{}\left[\mathcal{L}(\SRR{3l},\SRR{3l'})\right], $$ where for $A\in\mathcal{L}(\mathcal{D}_{\mathcal{S}}^{l},\mathcal{D}_{\mathcal{S}}^{l'})$ $$ \symmze{3}{}[A]=\symmze{3}{}\circ A \circ \symmze{3}{}. $$ \end{rmk} \begin{proof} The correspondence is almost tautological. If $A\in \mathcal{L}(\SRR{3l},\SRR{3l'})$ satisfies $\symmze{3}{}[A]=A$, then it is valued in $\mathcal{D}_{\mathcal{S}}^{l'}$ and in particular its restriction to $\mathcal{D}_{\mathcal{S}}^{l'}$ belongs to $\mathcal{L}\left(\mathcal{D}_{\mathcal{S}}^{l},\mathcal{D}_{\mathcal{S}}^{l'}\right)$. Conversely, if $A'\in \mathcal{L}\left(\mathcal{D}_{\mathcal{S}}^{l},\mathcal{D}_{\mathcal{S}}^{l'}\right)$, then we set $A=A'\circ \symmze{3}{}\in\mathcal{L}\left(\SRR{3l},\SRR{3l'}\right)$ and get $\symmze{3}{}[A]=A$. 
It is trivial to see that these two maps are inverse to each other. \end{proof} \begin{Def}\label{def:DS/LDS} The algebra $\LL(\mathcal{D}_{\mathcal{S}})$ consists of all operators $A: \mathcal{D}_{\mathcal{S}}\rightarrow \mathcal{D}_{\mathcal{S}}$ such that for any $l\in\mathbb{N}_0$ there is $l'\in\mathbb{N}_0$ such that $$ A_{\mathcal{D}_{\mathcal{S}}^{l}}\in\mathcal{L}\left(\mathcal{D}_{\mathcal{S}}^{l},\mathcal{D}_{\mathcal{S}}^{\leq l'}\right). $$ Here $A_{\mathcal{D}_{\mathcal{S}}^{l}}$ is the restriction of $A$ to the space $\mathcal{D}_{\mathcal{S}}^{l}$. \end{Def} \begin{rmk} One may show that the notation $\LL(\mathcal{D}_{\mathcal{S}})$ becomes precise if we endow $\mathcal{D}_{\mathcal{S}}$ with the final topology induced by the inclusions $\mathcal{D}_{\mathcal{S}}^{\leq l}\rightarrow \mathcal{D}_{\mathcal{S}}$. Still, the direct characterization above is more convenient. \end{rmk} \begin{rmk}\label{rmk:DS/LDS=unbounded} All elements of $\LL(\mathcal{D}_{\mathcal{S}})$ are at the same time densely-defined (unbounded) operators on $\mathcal{H}_{\mathrm{phys}}$. Their domain of definition is $\mathcal{D}_{\mathcal{S}}$, and they leave it invariant. \end{rmk} \begin{Def}\label{def:DS/LDS1} The space $\widecheck{\mathcal{L}}(\mathcal{D}_{\mathcal{S}}) $ is defined by $$ \widecheck{\mathcal{L}}(\mathcal{D}_{\mathcal{S}})={\bigoplus_{l,l'=0}^{\infty}}_{\mathrm{alg}}\mathcal{L}(\mathcal{D}_{\mathcal{S}}^{l},\mathcal{D}_{\mathcal{S}}^{l'}) $$ \end{Def} \begin{rmk} There is a natural way to embed $\widecheck{\mathcal{L}}(\mathcal{D}_{\mathcal{S}})$ into $\LL(\mathcal{D}_{\mathcal{S}})$ as an ideal, but we do not need that. It serves as a space of ``unquantized" operators on which the second quantization operation (see Subsection \ref{DS/2Q}) is defined. \end{rmk} In the context of Remark \ref{rmk:DS/LDS=unbounded}, it seems natural to look for the adjoint operators $\LL(\mathcal{D}_{\mathcal{S}})^*$. For obvious reasons, $\LL(\mathcal{D}_{\mathcal{S}})^*$ is not a subspace of $\LL(\mathcal{D}_{\mathcal{S}})$, and furthermore, elements of $\LL(\mathcal{D}_{\mathcal{S}})^*$ may not be defined on $\mathcal{D}_{\mathcal{S}}$ at all. So, we introduce an alternative operation on $\LL(\mathcal{D}_{\mathcal{S}})$. \begin{Def}\label{def:DS/conj} Fix an operator $A\in\LL(\mathcal{D}_{\mathcal{S}})$. We say that it \emph{has a conjugate operator} if there is $A^{\dag} \in\LL(\mathcal{D}_{\mathcal{S}})$ such that $$ (\Psi',A \Psi)=(A^{\dag}\Psi', \Psi), \quad \forall \Psi ,\Psi'\in\mathcal{D}_{\mathcal{S}}. $$ \end{Def} \begin{rmk} If $A$ has a conjugate, then by definition $A^{\dag}\subset A^{*}$ is the restriction of the adjoint operator to the space $\mathcal{D}_{\mathcal{S}}$. In general, this inclusion may be strict. The operators satisfying $A^{\dag}=A$ are exactly the symmetric operators defined on $\mathcal{D}_{\mathcal{S}}$. \end{rmk} \begin{rmk}\label{rmk:DS/LDS/conj-product} If $A,B\in\LL(\mathcal{D}_{\mathcal{S}})$ have conjugates, then so does $A\circ B$, and, moreover, $(A\circ B)^{\dag}=B^{\dag}\circ A^{\dag}$ \end{rmk} \begin{rmk} With almost no changes the Definition can be applied to $A\in\mathcal{L}(\mathcal{D}_{\mathcal{S}}^{l},\mathcal{D}_{\mathcal{S}}^{l'})$ and $A^{\dag}\in \mathcal{L}(\mathcal{D}_{\mathcal{S}}^{l'},\mathcal{D}_{\mathcal{S}}^{l})$ for some $l,l'\in\mathbb{N}$, or to the case when both belong to $\widecheck{\mathcal{L}}(\mathcal{D}_{\mathcal{S}})$. We use this generalization. \end{rmk}
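To make these notions more tangible, here is a minimal finite-dimensional caricature (an illustration only, not part of the formalism): we replace $\RR{3}$ by a grid of $G$ points, so that two-particle wave functions become $G\times G$ arrays, the symmetrization becomes an explicit projector, and the conjugate operator of Definition \ref{def:DS/conj} reduces to the ordinary matrix adjoint. The following Python sketch (assuming \texttt{numpy}) illustrates Remark \ref{rmk:DS/op/sigma-proj} and Definition \ref{def:DS/conj} in this toy setting.
\begin{verbatim}
import numpy as np

# Toy model: R^3 -> grid of G points, l = 2 particles, real wave functions.
rng = np.random.default_rng(1)
G = 4

# Exchange of the two particles and the symmetrization projector.
SWAP = np.zeros((G * G, G * G))
for i in range(G):
    for j in range(G):
        SWAP[i * G + j, j * G + i] = 1.0
P = (np.eye(G * G) + SWAP) / 2

A = rng.normal(size=(G * G, G * G))   # a generic operator on the full space
A_sym = P @ A @ P                     # analogue of Sym[A] = Sym o A o Sym

assert np.allclose(P @ A_sym @ P, A_sym)             # Sym[Sym[A]] = Sym[A]
psi = P @ rng.normal(size=G * G)                      # symmetric two-particle state
assert np.allclose(P @ (A_sym @ psi), A_sym @ psi)    # A_sym preserves symmetry

# The conjugate operator reduces to the matrix adjoint in this toy model:
phi = P @ rng.normal(size=G * G)
assert np.isclose(phi @ (A_sym @ psi), (A_sym.T @ phi) @ psi)
\end{verbatim}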
\subsection{\texorpdfstring{$\mathcal{D}_{\mathcal{S}}$, $\LL(\mathcal{D}_{\mathcal{S}})$ and $\widecheck{\mathcal{L}}(\mathcal{D}_{\mathcal{S}})$}{DS, L(DS) and Lv(DS)}-valued functions and distributions} Led by the ideas of Section \ref{Framework}, we introduce the following notions. \begin{Def} Fix $n,l,m\in\mathbb{N}_0$. The space of $\mathcal{D}_{\mathcal{S}}^{l}$-valued functions of class $\SMlt{n}{m}$ is\footnote{Recall that in the notation introduced in Section \ref{Framework} the operator $\extD{3l+1\ldots 3l+m+n}{3l+1\ldots 3l+m+n}\symmze{3}{}$ symmetrizes a function with respect to permutations of the first $3l$ arguments grouped in threes and ignores the rest} $$ \DSSMlt{l}{n}{m}=\left(\extD{3l+1\ldots 3l+m+n}{3l+1\ldots 3l+m+n}\symmze{3}{}\right)\SMlt{3l+n}{m}. $$ We use the abbreviated notation $$ \DSMlt{l}{m}=\DSSMlt{l}{0}{m}, \, \DSS{l}{n}=\DSSMlt{l}{n}{0}. $$ \end{Def} \begin{rmk} The spaces $\DSSMlt{l}{n}{m}$ inherit the topology of $\SMlt{3l+n}{m}$, so $\mathcal{L}\left(\DSS{l}{n},\DSSMlt{l'}{n'}{m}\right)$ and $\mathcal{L}\left(\SRR{n},\DSSMlt{l'}{n'}{m}\right)$ are well-defined for any $l,l',m,n,n'\in\mathbb{N}_{0}$. In the sense explained in Section \ref{Framework} they play the role of parameter-dependent distributions on $\RR{n}$ of class $\SMlt{n'}{m}$ valued in $\mathcal{L}(\mathcal{D}_{\mathcal{S}}^{l},\mathcal{D}_{\mathcal{S}}^{l'})$ and $\mathcal{D}_{\mathcal{S}}^{l'}$ respectively. \end{rmk} \begin{rmk} Similarly to Remark \ref{rmk:DS/op/sigma-proj}, for each $l,l',n,n',m'\in\mathbb{N}_0$ we identify $$ \mathcal{L}\left(\DSS{l}{n},\DSSMlt{l'}{n'}{m'}\right)=$$ $$\symmze{3}{}\left[\mathcal{L}(\SRR{3l+n},\SMlt{3l'+n'}{m'})\right], $$ and $$ \mathcal{L}\left(\SRR{n},\DSSMlt{l'}{n'}{m'}\right)=$$ $$\left(\extD{3l'+1\ldots 3l'+m'+n'}{3l'+1\ldots 3l'+m'+n'}\symmze{3}{}\right)\circ \mathcal{L}(\SRR{n},\SMlt{3l'+n'}{m'}), $$ where for $A\in \mathcal{L}(\SRR{3l+n},\SMlt{3l'+n'}{m'})$ we set $$ \symmze{3}{}[A]= \left(\extD{3l'+1\ldots 3l'+m'+n'}{3l'+1\ldots 3l'+m'+n'}\symmze{3}{}\right)\circ A\circ \left(\extD{3l+1\ldots 3l+n}{3l+1\ldots 3l+n}\symmze{3}{}\right). $$ \end{rmk} Generalization of Definitions \ref{def:DS/LDS} and \ref{def:DS/LDS1} is straightforward. \begin{Def} The space $\mathcal{L}\left(\DSS{}{n},\DSSMlt{}{n'}{m'}\right)$ consists of all operators $A: \DSS{}{n}\rightarrow \DSSMlt{}{n'}{m'}$ such that for any $l\in\mathbb{N}_0$ there is $l'\in\mathbb{N}_0$ for which $$ A_{\DSS{l}{n}}\in\mathcal{L}\left(\DSS{l}{n},\DSSMlt{\leq l'}{n'}{m'}\right), $$ where $A_{\DSS{l}{n}}$ is the restriction of $A$ to the space $\DSS{l}{n}$. \end{Def} \begin{Def} For $n,n',m'\in\mathbb{N}$ we define $$ \widecheck{\mathcal{L}}\left(\DSS{}{n},\DSSMlt{}{n'}{m'}\right)=$$ $${\bigoplus_{l,l'=0}^{\infty}}_{\mathrm{alg}}\mathcal{L}\left(\DSS{l}{n},\DSSMlt{l'}{n'}{m'}\right) $$ and $$ \mathcal{L}\left(\SRR{n},\DSSMlt{}{n'}{m'}\right)={\bigoplus_{l'=0}^{\infty}}_{\mathrm{alg}}\mathcal{L}\left(\SRR{n},\DSSMlt{l'}{n'}{m'}\right). $$ \end{Def} \begin{notn}\label{notn:DS/opval-symb} For the sake of readability, we eventually use the symbolic notation generalizing the one of Subsection \ref{Framework/Symbolic}.
\begin{itemize} \item If $\Psi\in\DSSMlt{l}{n}{m}$, ($l,n,m\in\mathbb{N}_0$) we set $$\underline{\Psi}=\evaluateP{3l+1\ldots 3l+n+m}{3l+n+m}\Psi;$$ \item If $A\in\mathcal{L}\left(\SRR{k},\DSSMlt{l}{n}{m}\right)$ ($l,n,m,k\in\mathbb{N}_0$), then $$ \underline{A}=\evaluateP{3l+1\ldots 3l+n+m}{3l+n+m}\circ A; $$ \item If $A\in\mathcal{L}(\DSS{l}{k},\DSSMlt{l'}{n}{m})$, ($l,l',,n,m,k\in\mathbb{N}_0$), then $\underline{A}$ is the operator-valued parameter-dependent distributions defined by $$ \underline{A}= \evaluateP{3l'+1,\ldots,3l'+n+m}{3l'+n+m}\circ (\evaluatePD{1,\ldots,k}{A})$$ \end{itemize} We assume that these maps are defined for, respectively, $\Psi\in\DSSMlt{}{n}{m}$, $LL\left(\SRR{k},\DSSMlt{}{n}{m}\right)$ and $\mathcal{L}(\DSS{}{k},\DSSMlt{}{n}{m})$ by linearity. \end{notn} \begin{rmk}\label{rmk:DS/opval-symb} \begin{itemize} \item For $\Psi\in\DSSMlt{}{n}{m}$, ($n,m\in\mathbb{N}_0$) we have $\underline{\Psi}\in\mathcal{SO_M}(\RR{n},\RR{m};\mathcal{H}_{\rm{phys}})$\footnote{$\mathcal{SO_M}(\RR{n},\RR{m};\mathcal{H}_{\rm{phys}})$ can be defined similar to Remark \ref{rmk:Framework/VectorTest};.} ; \item For $A\in\mathcal{L}\left(\SRR{k},\mathcal{D}_{\mathcal{S}}\right)$ ($k\in\mathbb{N}_0$), is a vector-valued distribution in the sense of \cite{NN}; \item For $A\in\mathcal{L}(\DSS{}{k},\mathcal{D}_{\mathcal{S}}{})$, ($k\in\mathbb{N}_0$), $\underline{A}$ is an operator-valued distribution in the sense of \cite{NN}. \end{itemize} More general versions define parameter-dependent vector-valued distributions. \end{rmk} \begin{proof} The key ingredient is Remark \ref{rmk:DS/tau-cont} combined with the results of Section \ref{Framework}. \end{proof} Now we want to generalize the operations of Section \ref{Framework} to the operator-valued distributions. In the light of Notation \ref{notn:DS/opval-symb} we just have to ignore the first $3l$ arguments. \begin{notn}\label{notn:DS/hat-op} \begin{itemize} \item For $n,m\in\mathbb{N}_{0}$, two tuples of pairwise distinct numbers $\arrs{i}{k}\in\{1,\ldots,n\}$, $\arrs{i'}{k}\in\{1,\ldots,m\}$, and $A\in\mathcal{L}(\DSS{}{n},\DSS{}{m})$ we define $\extDH{\arrs{i}{k}}{\arrs{i'}{k}}A \in\mathcal{L}(\DSS{}{n+k},\DSS{}{m+k})$ by setting for each $l,l'\in\mathbb{N}_{0}$ $$ (\extDH{\arrs{i}{k}}{\arrs{i'}{k}}A\Psi)_{l'}= (\extD{(3l+i_j)_{j=1,\ldots,k}}{(3l'+i_j)_{j=1,\ldots,k}}A\Psi)_{l'}, \,\forall \Psi \in\DSS{l}{n+k} $$ \item For $n,m\in\mathbb{N}_{0}$, two pairwise distinct tuples $\arrs{i}{k}\in\{1,\ldots,n\}$, $\arrs{i'}{k}\in\{1,\ldots,m\}$, and $A\in\mathcal{L}(\DSS{}{n},\DSS{}{m+k})$ we define \\ $\intDH{\arrs{i}{k}}{\arrs{i'}{k}}A \in\mathcal{L}(\DSS{}{n+k},\DSS{}{m})$ by setting for each $l,l'\in\mathbb{N}_{0}$ $$ (\intDH{\arrs{i}{k}}{\arrs{i'}{k}}A\Psi)_{l'}= (\intD{(3l+i_j)_{j=1,\ldots,k}}{(3l'+i_j)_{j=1,\ldots,k}}A\Psi)_{l'}, \,\forall \Psi \in\DSS{l}{n+k}; $$ We set $\restrDH{\arrs{i}{k}}{\arrs{i'}{k}}=\left(\intDH{\arrs{i'}{k}}{\arrs{i}{k}}\right)^{-1}.$ \item For $n,m,n',m,m'\in\mathbb{N}_0$, $F\in\widecheck{\mathcal{L}}(\DSS{}{n},\DSS{}{m})$ and\\ $F'\in\widecheck{\mathcal{L}}\left(\DSS{}{n'},\DSS{}{m'}\right)$ we set $$ F\widehat{\otimes} F'=\left(\extDH{n+1,\ldots,n+m'}{n+1,\ldots,n+m'}F\right)\circ \left(\extDH{1,\ldots,n}{1,\ldots,n}F'\right)=$$ $$\left(\extDH{1,\ldots,m}{1,\ldots,m}F'\right)\circ\left(\extDH{n+1,\ldots,n+n'}{n+1,\ldots,n+n'}F\right) \in \widecheck{\mathcal{L}}\left(\DSS{}{n+n'},\DSS{}{m+m'}\right). 
$$ \item If $A\in\mathcal{L}(\SRR{n},\SRR{m})$, then we define\footnote{As we will see in Example \ref{exmp:DS/2Q/numdist} this notation is consistent with the secondary quantization formalism of the next Subsection.} $\widehat{A}\in\mathcal{L}(\DSS{}{n},\DSS{}{m})$ by setting for each $l\in\mathbb{N}_0$ $$ \widehat{A}[\Psi]=\extD{1,\ldots,3l}{1,\ldots,3l}A[\Psi],\, \forall \Psi\in\DSS{l}{n}. $$ \end{itemize} \end{notn} \begin{rmk} For $n,n'\in\mathbb{N}_0$, $F\in\widecheck{\mathcal{L}}(\DSS{}{n},\mathcal{D}_{\mathcal{S}}{})$ and $F'\in\widecheck{\mathcal{L}}\left(\DSS{}{n'},\mathcal{D}_{\mathcal{S}}{}\right)$ we have $$ \underline{F\widehat{\otimes} F'}=\underline{F}\otimes \underline{F'}, $$ where the tensor product of the operator-valued distributions in the right-hand side is understood as in \cite{NN}. \end{rmk} \begin{proof} Recall that in \cite{NN} the tensor product of distributions is characterised as the unique distribution such that $$(\underline{F}\otimes\underline{F'})[f\otimes f']= \underline{F}[f] \circ \underline{F'}[f'], \forall f\in\SRR{n},\forall f'\in\SRR{n'}. $$ The rest follows by Remarks \ref{rmk:Framewor/ExtUniversal} and \ref{rmk:Framework/Augment/algebra}. \end{proof} \begin{notn}\label{notn:DS/symb-opval} We define the symbolic notation of Subsection \ref{Framework/Symbolic} for the operator-valued distributions defined in Notation \ref{notn:DS/opval-symb} by replacing all the operations ($\extD{}{}$, $\intD{}{}$, $\restrD{}{}$, $\otimes$, pre- and post-compositions with auxiliary operators) by their hatted versions of Notation \ref{notn:DS/hat-op}. \end{notn} \begin{rmk} As was established in Section \ref{Framework}, the correspondences behind Notation \ref{notn:DS/opval-symb} and Remark \ref{rmk:DS/opval-symb} are one-to-one. In particular, it means that the ``underlining" is an invertible operation. This allows one to define operators via symbolic expressions involving their underlined counterparts. For example, for $F\in\mathcal{L}(\DSS{}{n},\DSS{}{m})$ and $\Psi\in\DSS{}{n}$ we may write $$ \underline{\Phi}(\bm{y})=\int \underline{F}\symbDistArgs{\bm{x}}{\bm{y}} \underline{\Psi}(\bm{x})d^n \bm{x}, \, \bm{y}\in\RR{m} $$ instead of $\Phi=F\circ \Psi$. We will see more complicated examples in which, unlike the trivial one above, the symbolic notation is more insightful. \end{rmk} In the light of the last remark, the following definition is well formulated. \begin{Def} For $A\in \mathcal{L}(\DSS{}{n},\DSSMlt{}{m}{k})$ we set $A^{\dag}\in \mathcal{L}(\DSS{}{n},\DSSMlt{}{m}{k})$ to be the unique operator characterized by $$ \underline{A^{\dag}}[f](\bm{x})=\underline{A}[\overline{f}](\bm{x})^{\dag},\,\forall f\in\SRR{n}, \forall \bm{x}\in\RR{m+k}. $$ (if it exists). \end{Def} \begin{rmk}\label{rmk:DS/LDS/conj-product-gen} Remark \ref{rmk:DS/LDS/conj-product} does not generalize to this setting directly, because the operators, in general, appear in incompatible order. However, the following is true: \begin{itemize} \item If $A\in\mathcal{L}(\DSS{}{n},\DSS{}{m})$, $B\in\mathcal{L}(\DSS{}{k},\DSS{}{l})$ have conjugate operators ($n,m,k,l\in\mathbb{N}_0$) and $$ C=A\widehat{\otimes} B, $$ then $C$ has a conjugate operator, defined by $$ \underline{C^{\dag}}\symbDistArgs{\bm{x},\bm{x}'}{\bm{y},\bm{y'}}=\underline{B}^{\dag}\symbDistArgs{\bm{x}'}{\bm{y'}}\underline{A}^{\dag}\symbDistArgs{\bm{x}}{\bm{y}},\,\bm{x}\in\RR{n},\bm{y}\in\RR{m},\bm{x}'\in\RR{k},\bm{y'}\in\RR{l}.
$$ \item If $F\in\mathcal{L}(\SRR{n},\SRR{m})$, $A\in\mathcal{L}(\DSS{}{m},\DSS{}{k})$, and $A$ has a conjugate operator, then so does $F\circ A$. Moreover, $(F\circ A)^{\dag}=\overline{F}\circ A^{\dag}$. Here $\overline{F}$ is the unique operator in $\mathcal{L}(\SRR{n},\SRR{m})$ such that $$ \overline{F}[f]=\overline{F[\overline{f}]}, \,\forall f\in\SRR{n}. $$ \end{itemize} \end{rmk} The proof is straightforward. It is convenient to use Remark \ref{rmk:Framewor/ExtUniversal}. \subsection{Secondary quantization}\label{DS/2Q} In perturbative quantum field theory we deal with a very special kind of operators in $\LL(\mathcal{D}_{\mathcal{S}})$ (and thus very special kinds of operator-valued functions and distributions). The relevant operators are constructed by smearing tensor (or Wick) products of the creation and annihilation operator-valued distributions with some test functions and, more generally, with distributions with particular types of singularities (see e.g. \cite{pAQFT}). We instead first characterize them explicitly by their action on $\mathcal{D}_{\mathcal{S}}$ and then show the correspondence with more traditional approaches. \par We use the name second quantization because this procedure resembles the way the many-particle operators (say, the energy and the momentum) are constructed from the single-particle ones in the second quantization of the physics literature. We first present the construction in detail for operators and then explain how it generalizes to operator-valued functions and distributions (including parameter-dependent ones). \subsubsection{Basic construction} \begin{Def}\label{def:DS/2Q/AhatDS} Let $l,l'\in\mathbb{N}_0$ and $ A\in \mathcal{L}(\mathcal{D}_{\mathcal{S}}^{l},\mathcal{D}_{\mathcal{S}}^{l'})$. The \emph{second quantization} of $A$ is $\widehat{A}\in\LL(\mathcal{D}_{\mathcal{S}})$ defined by \begin{equation}\label{eq:def:DS/2Q/LDS} \widehat{A}\Psi=\frac{\sqrt{n!(n-l+l')!}}{(n-l)!}\left(\symmze{3}{}\circ \left(\extD{3l+1,\ldots ,3n}{3l'+1,\ldots, 3n-3l+3l'}A\right)\right)\Psi, \forall\Psi\in \mathcal{D}_{\mathcal{S}}^{n}, \qquad n\geq l. \end{equation} $$ \widehat{A}\Psi=0,\qquad \forall\Psi\in \mathcal{D}_{\mathcal{S}}^{n}, \qquad n<l. $$ \end{Def} \begin{rmk} Recall that $$ \extD{3l+1,\ldots ,3n}{3l'+1,\ldots, 3n-3l+3l'}A=A\otimes \mathbb{1}_{\mathcal{L}(\SRR{3(n-l)})}. $$ This form is less direct but more intuitive. \end{rmk} \begin{rmk}\label{rmq:DS/2Q/SmoothKernel} To understand the meaning of the rather technical Definition, let us introduce, a bit in advance, the standard creation and annihilation operator-valued distributions which we denote by $\underline{\widehat{a}_+}$ and $\underline{\widehat{a}_-^{R}}$ as they will appear in Examples \ref{exmp:DS/2Q/apm} and \ref{exmp:DS/2Q/amR} \footnote{We use a non-Lorentz-invariant normalization leading to the commutation relation (\ref{eq:DS/CCR}).}. Then Definition \ref{def:DS/2Q/AhatDS} is a formalization of\footnote{See Remark \ref{rmk:DS/2Q/symbolic} for the rigorous meaning of (\ref{eq:DS/2Q-symbolic})} \begin{equation} \widehat{A}=\int\left(\prod_{j=1}^{l'}\underline{\widehat{a}_+}(\vec{p}_j')\right) A\symbDistArgs{\arrs{\vec{p}}{l}}{\arrs{\vec{p}'}{l'}} \left(\prod_{j=1}^{l}\underline{\widehat{a}_-^{R}}(\vec{p}_j)\right)d^{3l}\arrs{\vec{p}}{l}d^{3l'}\arrs{\vec{p'}}{l'} \label{eq:DS/2Q-symbolic} \end{equation} Here we took into account that the annihilation operator is in fact a smooth function of its parameter (in our terminology it means that it has a restriction, which we denote by $a_{-}^R$).
The correspondence becomes straightforward in the case then $A$ has a smooth kernel (in our terminology, A has an $\SRR{3(l+l')}$-class restriction with respect to all its distributional arguments). Then coincidence of (\ref{eq:DS/2Q-symbolic}) and (\ref{eq:def:DS/2Q/LDS}) follows by comparison of their value on generic $\Psi\in\mathcal{D}_{\mathcal{S}}$. The operators we consider in this paper are of this type only, but for the operator-valued distributions defined in the next subsection the general construction of Definition (\ref{eq:def:DS/2Q/LDS}) is more convenient. It is worth noting that operators with a smooth kernel could never realize momentum or energy conservation law, so in the context of the strong adiabatic limit the full construction will be necessary. \end{rmk} \begin{rmk} The map $A\mapsto\widehat{A}$ is obviously linear. It has a straightforward linear extension to $\widecheck{\mathcal{L}}(\mathcal{D}_{\mathcal{S}})$ which we assume to be defined from now on \end{rmk} \begin{rmk}\label{rmk:DS/2Q/injective-sym} We can extend the secondary quantization from the space $\mathcal{L}(\mathcal{D}_{\mathcal{S}}^{l},\mathcal{D}_{\mathcal{S}}^{l'})$ to $\mathcal{L}\left(\SRR{3l},\SRR{3l'}\right)$ keeping (\ref{eq:def:DS/2Q/LDS}) untouched. It is easy that $\widehat{A}=\widehat{\symmze{3}{}[A]}$. Our choice is dictated by injectivity of the second quantization acting defined on $\widecheck{\mathcal{L}}(\mathcal{D}_{\mathcal{S}})$\footnote{To see injectivity of second quantization on $\mathcal{L}\left(\mathcal{D}_{\mathcal{S}}^{l},\mathcal{D}_{\mathcal{S}}^{l'}\right)$ consider its action on $\mathcal{D}_{\mathcal{S}}^{l}$. For the whole space $\widecheck{\mathcal{L}}(\mathcal{D}_{\mathcal{S}})$, proceed by induction in $l$.}. \end{rmk} The main advantage of this formalism is the fact that all operations on such operators can be done on the level of $\widecheck{\mathcal{L}}(\mathcal{D}_{\mathcal{S}},\mathcal{D}_{\mathcal{S}})$ without keeping track of the extra variables and combinatoric factors. In particular, the conjugation and the products can be computed that way. \begin{rmk}\label{rmk:DS/2Q/2Q-conj} For any $A\in\widecheck{\mathcal{L}}(\mathcal{D}_{\mathcal{S}},\mathcal{D}_{\mathcal{S}})$ one has $$ \widehat{A^{\dag}}=\widehat{A}^{\dag}. $$ In other words, second quantization commutes with conjugation. \end{rmk} \begin{proof} It is enough to consider $A\in\mathcal{L}(\mathcal{D}_{\mathcal{S}}^{l},\mathcal{D}_{\mathcal{S}}^{l'})$ for all possible $l,l'\in\mathbb{N}_0$ and show for any $n\in\mathbb{N}_0$, $n\geq l$, any $\Psi\in \mathcal{D}_{\mathcal{S}}^{n}$ and $\Psi'\in\mathcal{D}_{\mathcal{S}}^{n-l+l'}$ that $$ (\Psi',\extD{l+1,\ldots ,n}{l'+1,\ldots, n-l+l'}A\Psi)={(\extD{l'+1,\ldots, n-l+l'}{l+1,\ldots ,n}A^{\dag}\Psi,\Psi')}. $$ Note that the combinatoric factors canceled out and we put away the symmetrization, for both $\Psi$ and $\Psi'$ being symmetric already. The last line is equivalent to $$ \int \left(\evaluateP{3l'+1,\ldots,3(n-l+l')}{n-l+l'}\Psi'(\arrs{\vec{p}}{n-l}),A\evaluateP{3l+1,\ldots,3n}{n}\Psi(\arrs{\vec{p}}{n-l})\right)d^{3(n-l)}\arrs{\vec{p}}{n-l}= $$ $$\int \left(A^{\dag}\evaluateP{3l'+1,\ldots,3(n-l+l')}{n-l+l'}\Psi'(\arrs{\vec{p}}{n-l}),\evaluateP{3l+1,\ldots,3n}{n}\Psi(\arrs{\vec{p}}{n-l}) \right)d^{3(n-l)}\arrs{\vec{p}}{n-l}, $$ which holds by Definition \ref{def:DS/conj}. \end{proof} Let us now compute the product $\widehat{A}\widehat{B}$ for some $A,B\in\widecheck{\mathcal{L}}(\mathcal{D}_{\mathcal{S}})$. 
First of all, note that the creation and annihilation operators in (\ref{eq:DS/2Q-symbolic}) are normally ordered. Then to present $\widehat{A}\widehat{B}$ again in the form (\ref{eq:DS/2Q-symbolic}) we need some kind of Wick theorem. So, we need a formalization of the Wick product with contractions. Looking once again at (\ref{eq:DS/2Q-symbolic}) we identify the tensor product in $\widecheck{\mathcal{L}}(\mathcal{D}_{\mathcal{S}})$ with the Wick product: $$ :\widehat{A}\widehat{B}:= \widehat{A\otimes B}. $$ Inspired by that we introduce the following \begin{Def}\label{def:DS/2Q/contractions} Let $A\in\mathcal{L}(\mathcal{D}_{\mathcal{S}}^{l_A},\mathcal{D}_{\mathcal{S}}^{l'_A})$, $B\in\mathcal{L}(\mathcal{D}_{\mathcal{S}}^{l_B},\mathcal{D}_{\mathcal{S}}^{l'_B})$ and let $r\leq \min(l_A,l'_B)$. Then the \emph{tensor product with $r$ contractions} $$A\otimes_r B\in \mathcal{L}(\mathcal{D}_{\mathcal{S}}^{l_A+l_B-r},\mathcal{D}_{\mathcal{S}}^{l'_A+l'_B-r})$$ is $$ A\otimes_r B=\symmze{3}{}\left[(\extD{3l_A+1,\ldots,3(l_A+l'_B-r)}{3l'_A+1,\ldots,3(l'_A+l'_B-r)}A)\circ{}(\extD{1,\ldots,3(l_A-r)}{1,\ldots,3(l_A-r)}B)\right]. $$ \end{Def} \begin{rmk}\label{rmk:DS/2Q/contractions} For clarity, we provide more readable versions of Definition \ref{def:DS/2Q/contractions}. First of all, when reading the proof of Proposition \ref{prop:DS/2Q/Wick} it may be convenient to keep in mind that $$ A\otimes_{r}B= \symmze{3}{}\left[\left(A\otimes \mathbb{1}_{\mathcal{L}\left(\SRR{3(l'_B-r)}\right)}\right)\left( \mathbb{1}_{\mathcal{L}\left(\SRR{3(l_A-r)}\right)}\otimes B\right)\right]. $$ To see that this is indeed the Wick product with $r$ contractions, the symbolic form is more suitable. For the sake of clarity, we omit the symmetrization operator, which is not essential by Remark \ref{rmk:DS/2Q/injective-sym}. $$ A\otimes_{r}B=\symmze{3}{}\left[A\otimes'_{r}B\right], $$ $$ \left(A\otimes'_{r}B\right)\symbDistArgs{\arrs{\vec{p}}{l_A+l_B-r}}{\arrs{\vec{p}'}{l'_A+l'_B-r}}= \int A\symbDistArgs{\arrs{\vec{p}}{l_A-r},\arrs{\vec{k}}{r}}{\arrs{\vec{p}'}{l'_A}}\times $$ $$ B\symbDistArgs{\arrsM{\vec{p}}{l_A-r+1}{l_A+l_B-r}}{\arrs{\vec{k}}{r},\arrsM{\vec{p}'}{l'_A+1}{l'_A+l'_B-r}}d^{3r}\arrs{\vec{k}}{r}. $$ Comparing it with (\ref{eq:DS/2Q-symbolic}) we see that our terminology is natural. Finally, such a product may be presented graphically as in Fig. \ref{fig:DS/2Q/contractions}. Here each dot denotes an operator and each line incident to it is an argument of the corresponding operator, distributional (for lines coming from the right), or parametric (for the lines going to the left). The arguments connecting two dots are subject to integration. Note that the graph is totally ordered\footnote{See Subsection \ref{Intro/prelim} for terminology related to the Feynman graphs.}. Clearly, iterated products may be given such a representation in terms of totally ordered graphs. \end{rmk}
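The contracted product is easy to emulate numerically, which gives a sanity check of the two equivalent forms above. In the following sketch (Python with \texttt{numpy}; an illustration only), $\RR{3}$ is replaced by a grid of $G$ points, the integral over the contracted momenta becomes a finite sum, and the symmetrization $\symmze{3}{}$ is omitted, as allowed by Remark \ref{rmk:DS/2Q/injective-sym}; the code checks, for one choice of $l_A,l'_A,l_B,l'_B$ and $r$, that the symbolic form of $A\otimes'_{r}B$ coincides with the operator form $(A\otimes \mathbb{1})\circ(\mathbb{1}\otimes B)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
G = 3                                  # grid points replacing R^3
lA, lAp, lB, lBp, r = 2, 1, 1, 2, 1    # l_A, l'_A, l_B, l'_B and r

# Discretized kernels; axis order: (parameters ..., distributional arguments ...).
A = rng.normal(size=(G,) * (lAp + lA))   # A[q ; p1, p2]
B = rng.normal(size=(G,) * (lBp + lB))   # B[k1, k2 ; s]

# Symbolic form: sum the last r distributional arguments of A
# against the first r parameters of B (the grid sum replaces the d^3k integral).
AB_symbolic = np.einsum('qpk,kms->qmps', A, B)   # (A x_r B)[q, k2 ; p1, s]

# Operator form (A (x) 1) o (1 (x) B), with kernels flattened to matrices.
matA = np.kron(A.reshape(G**lAp, G**lA), np.eye(G**(lBp - r)))
matB = np.kron(np.eye(G**(lA - r)), B.reshape(G**lBp, G**lB))
AB_operator = (matA @ matB).reshape((G,) * (lAp + lBp - r + lA + lB - r))

assert np.allclose(AB_symbolic, AB_operator)
\end{verbatim}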
\begin{figure} \centering \begin{tikzpicture} \filldraw[black] (0,0) circle (2pt) node[anchor=south]{A}; \filldraw[black] (2,-2) circle (2pt) node[anchor=north]{B}; % \draw (-1,1) -- (0,0); \draw (-1,0.5) -- (0,0); \node[] at (-0.8,0) {$\vdots$}; \draw (-1,-0.8) -- (0,0); \draw [decorate, decoration = {brace, amplitude=5pt}] (-1.1,-0.8) -- (-1.1,1); \node[] at (-1.4,0) {$l'_A$}; % \draw (3,1) -- (0,0); \draw (3,0.5) -- (0,0); \node[] at (2.8,0) {$\vdots$}; \draw (3,-0.8) -- (0,0); \draw [decorate, decoration = {brace, amplitude=5pt}] (3.1,1) -- (3.1,-0.8); \node[anchor=west] at (3.3,0) {$l_A-r$}; % \draw (0,0) to[bend left=30] (2,-2); \draw (0,0) to[bend right=45] (2,-2); \draw (0,0) to[bend left=15] (2,-2); \node[] at (1,-1) {$\iddots$}; \draw [decorate, decoration = {brace, amplitude=5pt}] (1.6,-0.8) -- (0.6,-1.7); \node[] at (1.4,-1.4) {$r$}; % \draw (-1,-2) -- (2,-2); \draw (-1,-2.2) -- (2,-2); \node[] at (-0.8,-2.4) {$\vdots$}; \draw (-1,-3) -- (2,-2); \draw [decorate, decoration = {brace, amplitude=5pt}] (-1.1,-3) -- (-1.1,-2); \node[anchor=east] at (-1.2,-2.5) {$l'_B-r$}; % \draw (3,-1.5) -- (2,-2); \draw (3,-1.7) -- (2,-2); \node[] at (2.8,-2) {$\vdots$}; \draw (3,-2.7) -- (2,-2); \draw [decorate, decoration = {brace, amplitude=5pt}] (3.1,-1.5) -- (3.1,-2.7); \node[anchor=west] at (3.2,-2.2) {$l_B$}; \end{tikzpicture} \caption{Graphical presentation of the contracted product $A\otimes_{r}B\in \widecheck{\mathcal{L}}(\mathcal{D}_{\mathcal{S}}^{l_A+l_B-r},\mathcal{D}_{\mathcal{S}}^{l'_A+l'_B-r})$, $A\in\widecheck{\mathcal{L}}(\mathcal{D}_{\mathcal{S}}^{l_A},\mathcal{D}_{\mathcal{S}}^{l'_A})$, $B\in\widecheck{\mathcal{L}}(\mathcal{D}_{\mathcal{S}}^{l_B},\mathcal{D}_{\mathcal{S}}^{l'_B})$. } \label{fig:DS/2Q/contractions} \end{figure} \begin{prop}\label{prop:DS/2Q/Wick} Let $A\in\mathcal{L}(\mathcal{D}_{\mathcal{S}}^{l_A},\mathcal{D}_{\mathcal{S}}^{l'_A})$, $B\in\mathcal{L}(\mathcal{D}_{\mathcal{S}}^{l_B},\mathcal{D}_{\mathcal{S}}^{l'_B})$. Then $\widehat{A}\widehat{B}=\widehat{A\otimes_{\bullet}B}$ with $$ A\otimes_{\bullet}B=\sum_{r=0}^{\min(l_A,l'_B)}r! \binom{l_A}{r}\binom{l'_B}{r} A\otimes_r B\in\widecheck{\mathcal{L}}(\mathcal{D}_{\mathcal{S}}). $$ \end{prop} \begin{rmk} The binomial coefficients are nothing but the number of ways of selecting $r$ pairs from the creation operators of $B$ and annihilation operators of $A$, so the above is indeed a form of the Wick theorem. \end{rmk} The proof of Proposition \ref{prop:DS/2Q/Wick} is straightforward, but technical. It is convenient to separate the following combinatoric facts. \begin{lem}\label{lem:prop:DS/2Q/Wick/Comb/Sigmas} Let $n,l_A,l_B,l_A',l_B'\in\mathbb{N}_0$ and $n\geq \max(l_B,l_A+l_B-l_B')$. Introduce $R: \symmgr{n-l_B+l'_B}\longrightarrow \mathbb{N}_0$ as $$ R(\sigma)=\left|\{j=1,\ldots,l_B'|\sigma(j)\leq l_A\}\right|. $$ For each $r\in\{\max(0,l_A+l_B-n),\max(0,l_A+l_B-n)+1,\ldots,\min(l_A,l_B')\}$ define $\sigma^{(r)}\in \symmgr{n-l_B+l_B'}$ by $$ \sigma^{(r)}(j)=j+l_A-r, \qquad 1\leq j \leq l_B', $$ $$ \sigma^{(r)}(j)=j-l_B', \qquad l_B'< j \leq l_A+l_B'-r, $$ $$ \sigma^{(r)}(j)=j, \qquad l_A+l_B'-r< j \leq n-l_B+l_B'.
$$ Then \begin{enumerate} \item $\max(0,l_A+l_B-n)\leq R(\sigma)\leq \min(l_A,l_B')$ $\forall \sigma\in\symmgr{n-l_B+l_B'}$; \item $R(\sigma^{(r)})=r$, $r\in\{\max(0,l_A+l_B-n),\ldots,\min(l_A,l_B')\} $ \item For any $\sigma\in\symmgr{n-l_B+l_B'}$ there is a (not necessary unique) decomposition \begin{equation} \sigma=(\sigma_1\times \sigma_2)\circ \sigma^{(R(\sigma))}\circ (\sigma_3\times \sigma_4) \label{eq:lem:prop:DS/2Q/Wick/Comb/Sigma} \end{equation} with $\sigma_1\in \symmgr{l_A}$, $\sigma_2\in \symmgr{n-l_B+l_B'-l_A}$, $\sigma_3\in \symmgr{l_B'}$, $\sigma_4\in\symmgr{n-l_B}$, Here we assume the natural inclusion $\symmgr{k}\times \symmgr{k'}\subset \symmgr{k+k'}$ with the first and second factors acting, respectively, on the first $k$ the rest $k'$ elements; \item \begin{equation}\label{eq:lem:prop:DS/2Q/Wick/Comb/count} \big|\{\sigma \in \mathfrak{S}_{n+l_B'-l_B}|R(\sigma)=r\}\big|=\binom{l_B'}{r}\binom{l_A}{r}r! \frac{(n+l_B'-l_B-l_A)!(n-l_B)!}{(n-l_B-l_A+r)!}. \end{equation} \end{enumerate} \end{lem} \begin{proof} \begin{enumerate} \item The upper bound $R(\sigma)\leq \min(l_A,l_B')$ follows directly from the definition. At the same time $$ l_B'-R(\sigma)=\left|\{j=1,\ldots,l_B'|l_A+1\leq \sigma(j) \leq n-l_B+l_B'\}\right|, $$ so $l_B'-R(\sigma)\leq n-l_B+l_B'-l_A$ giving the desired lower bound. \item Directly by construction of $\sigma^{(r)}$ $$ \{j=1,\ldots,l_B'|\sigma(j)\leq l_A\}=\{1,\ldots,r\}, $$ so $R(\sigma^{(r)})=r$. \item[3,4] Let us classify all permutations $\sigma$ with fixed $R(\sigma)=r$. To fix such a permutation we do the following: \begin{itemize} \item We select $r$ elements among the first $l_B'$ numbers, find for them places among the first $l_A$ numbers, and chose a bijection between the former and the latter. This gives $$\binom{l_B'}{r}\binom{l_A}{r}r!$$ possibilities; \item The rest $l_B'-r$ numbers from $1$ to $l_B'$ should be placed somewhere from $l_A+1$ to $n+l_B'-l_B$ leading to $$\frac{(n+l_B'-l_B-l_A)!}{(n-l_B-l_A+r)!}$$ possibilities; \item Finally, we should fix a bijection between the numbers from $l_B'+1$ to $n+l_B'-l_B$ and the yet free places in $$(n-l_B)!$$ ways. \end{itemize} The product of these factor give (\ref{eq:lem:prop:DS/2Q/Wick/Comb/count}), and the described procedure gives a decomposition of type (\ref{eq:lem:prop:DS/2Q/Wick/Comb/Sigma}). \end{enumerate} \end{proof} \begin{proof}[Proof of Proposition \ref{prop:DS/2Q/Wick}] Take first $n< \max(l_B,l_A+l_B-l'_B)$ and $\Psi\in \mathcal{D}_{\mathcal{S}}^{n}$. We get $\widehat{A} \widehat{B}\Psi=0$ immediately as well as $\widehat{A\otimes_{r}B}\Psi$ for any $r\in\{1,\ldots,\min(l_A,l'_B)\}$. \par Now take $n\geq \max(l_B,l_A+l_B-l'_B)$ and again $\Psi\in\mathcal{D}_{\mathcal{S}}^{n}$. We have $$ \widehat{A} \widehat{B}\Psi=\frac{\sqrt{n!(n-l_B+l'_B)!}}{(n-l_B)!}\frac{\sqrt{(n-l_B+l'_B)!(n-l_A+l'_A-l_B+l'_B)!}}{(n-l_B+l'_B-l_A)!}\times$$ $$\symmze{3}{}\left(\extD{3l_A+1,\ldots ,3n-3l_B+3l_B'}{3l'_A+1,\ldots, 3n_A-3(l_A+l_B)+3(l'_A+l'_B)}A\right)\symmze{3}{}\left(\extD{3l_B+1,\ldots ,3n}{3l_B'+1,\ldots, 3n-3l_B+3l_B'}B\right)\Psi. $$ Now we present the operator $\symmze{3}{}$ in the middle explicitly: $$ \widehat{A} \widehat{B}\Psi=c\sum_{\sigma\in\mathfrak{S}_{n-l_B+l_B'}}\symmze{3}{}\left(\extD{3l_A+1,\ldots ,3n-3l_B+3l_B'}{3l'_A+1,\ldots, 3n_A-3(l_A+l_B)+3(l'_A+l'_B)}A\right)\circ$$ $$\permNKS{n-l_B+l'_B}{3}[\sigma] \left(\extD{3l_B+1,\ldots ,3n}{3l_B'+1,\ldots, 3n-3l_B+3l_B'}B\right)\Psi, $$ $$ c=\frac{\sqrt{n!(n-l_A+l'_A-l_B+l'_B)!}}{(n-l_B)!(n-l_B+l'_B-l_A)!}. 
$$ We apply Lemma \ref{lem:prop:DS/2Q/Wick/Comb/Sigmas} to classify the permutations. By the decomposition (\ref{eq:lem:prop:DS/2Q/Wick/Comb/Sigma}) $$ \permNKS{n-l_B+l'_B}{3}[\sigma]=(\permNKS{l_A}{3}[\sigma_1]\otimes \permNKS{n-l_B+l'_B-l_A}{3}[\sigma_2])\permNKS{n-l_B+l'_B}{3}[\sigma^{(R(\sigma))}](\permNKS{l_B'}{3}[\sigma_3]\otimes \permNKS{n-l_B}{3}[\sigma_4]). $$ Since $\symmze{3}{}[B]=B$ and $\Psi=\symmze{3}{}\Psi$ we get\footnote{We use that the symmetrization operator absorbs any permutation.} $$ (\permNKS{l_B'}{3}[\sigma_3]\otimes \permNKS{n-l_B}{3}[\sigma_4])\left(\extD{3l_B+1,\ldots ,3n}{3l_B'+1,\ldots, 3n-3l_B+3l_B'}B\right)\Psi= $$ $$ \left(\extD{3l_B+1,\ldots ,3n}{3l_B'+1,\ldots, 3n-3l_B+3l_B'}B\right)( \extD{1,\ldots ,3l_B}{1,\ldots ,3l_B} \permNKS{n-l_B}{3}[\sigma_4])\Psi=\left(\extD{3l_B+1,\ldots ,3n}{3l_B'+1,\ldots, 3n-3l_B+3l_B'}B\right)\Psi. $$ Similarly, using $\symmze{3}{}[A]=A$ we get $$\symmze{3}{}\left(\extD{3l_A+1,\ldots ,3n-3l_B+3l_B'}{3l'_A+1,\ldots, 3n-3(l_A+l_B)+3(l'_A+l'_B)}A\right) (\permNKS{l_A}{3}[\sigma_1]\otimes \permNKS{n-l_B+l'_B-l_A}{3}[\sigma_2])= $$ $$ \symmze{3}{} \left(\extD{3l_A+1,\ldots ,3n-3l_B+3l_B'}{3l'_A+1,\ldots, 3n-3(l_A+l_B)+3(l'_A+l'_B)}A\right).$$ Finally, $$ \left(\extD{3l_A+1,\ldots ,3n-3l_B+3l_B'}{3l'_A+1,\ldots, 3n-3(l_A+l_B)+3(l'_A+l'_B)}A\right)\left(\extD{3l_B+1,\ldots ,3n}{3l_B'+1,\ldots, 3n-3l_B+3l_B'}B\right)=$$ $$ \extD{3l_A+3l_B+1,\ldots,3n}{3l'_A+3l_B'+1,\ldots, 3n-3l_A-3l_B+3l'_A+3l_B'}\left(A\otimes_r B\right). $$ Putting this all back and using $\widehat{A\otimes_{r} B}\Psi=0$ whenever $n<l_A+l_B-r$ we get $$ \widehat{A} \widehat{B}\Psi=\sum_{r=0}^{\min(l_A,l_B')}C_r\widehat{A\otimes_{r} B}\Psi, $$ where $$C_r=c\big|\{\sigma \in \mathfrak{S}_{n+l_B'-l_B}|R(\sigma)=r\}\big| \left(\frac{\sqrt{n!(n-l_A+l'_A-l_B+l'_B)!}}{(n-l_A-l_B+r)!}\right)^{-1}= $$ $$ \big|\{\sigma \in \mathfrak{S}_{n+l_B'-l_B}|R(\sigma)=r\}\big| \frac{(n-l_A-l_B+r)!}{(n-l_B)!(n-l_B+l'_B-l_A)!}. $$ Thus by the last assertion of Lemma \ref{lem:prop:DS/2Q/Wick/Comb/Sigmas} $C_r=\binom{l_B'}{r}\binom{l_A}{r}r!$ as in the statement. \end{proof} \begin{rmk}\label{rmk:DS/2Q/formal-alg} We can consider $\widecheck{\mathcal{L}}(\mathcal{D}_{\mathcal{S}})$ as a formal involutive algebra with the product $\otimes_{\bullet}$. Then by Proposition \ref{prop:DS/2Q/Wick} and Remark \ref{rmk:DS/2Q/2Q-conj} the second quantization is a homomorphism of such algebras. On the other hand, it means that the whole analysis (see also Remark \ref{rmk:DS/2Q/formal-alg-gen}) of this paper can be done completely within the formal algebra, without mentioning its particular representation by unbounded operators on the physical Hilbert space (similarly to perturbative AQFT \cite{pAQFT}). \end{rmk} \subsubsection{Generalization to operator-valued functions and (possibly parameter-dependent) distributions} From this and the previous subsections it is clear that it is enough to define the secondary quantization for parameter-dependent operator-valued distributions, i.e.\ as a map defined on $$\widecheck{\mathcal{L}}(\DSS{}{n},\DSSMlt{}{m}{k})$$ for $n,m,k\in\mathbb{N}_0$. Then the case of operator-valued functions follows by setting $n=0$, while parameterless distributions correspond to $m=k=0$. \par Generalization of Definition \ref{def:DS/2Q/AhatDS} is straightforward: \begin{Def}\label{def:DS/2Q/AhatGen} Let $n,m,k,l,l'\in\mathbb{N}_0$ and $ A\in \mathcal{L}(\DSS{l}{n},\DSSMlt{l'}{m}{k})$.
The \emph{second quantization} of $A$ is $\widehat{A}\in\mathcal{L}(\DSS{}{n},\DSSMlt{}{m}{k})$ defined by $$ \widehat{A}\Psi=\frac{\sqrt{L!(L-l+l')!}}{(L-l)!}\left(\left(\extD{3L+1,\ldots,3L+m+k}{3L+1,\ldots,3L+m+k}\symmze{3}{}\right)\circ \left(\extD{3l+1,\ldots ,3L}{3l'+1,\ldots, 3L-3l+3l'}A\right)\right)\Psi, $$ $$\forall\Psi\in \DSS{L}{n}, \qquad L\geq l; $$ $$ \widehat{A}\Psi=0,\qquad \forall\Psi\in \DSS{L}{n}, \qquad L<l. $$ \end{Def} \begin{rmk}\label{rmk:AhatGenCommutes} By Remark \ref{rmk:Framework/Augment/algebra}, \emph{the secondary quantization commutes with evaluation}, i.e. for $A$ as in Definition \ref{def:DS/2Q/AhatGen} $$ \widehat{\underline{A}[f](\bm{x})}=\underline{\widehat{A}}[f](\bm{x}), \,\forall f\in\SRR{n}, \forall \bm{x}\in\RR{m+k}. $$ As a consequence, $$\widehat{A}^{\dag}=\widehat{A^{\dag}}.$$ \end{rmk} \begin{rmk}[Wick's theorem for functions and distributions]\label{rmk:DS/2Qgen/Wick} Take $l_A,l'_A,l_B,l'_B,n,n',n''\in\mathbb{N}_{0}$, $A\in\mathcal{L}\left(\DSS{l_A}{n'},\DSS{l'_A}{n''}\right)$ and $B\in\mathcal{L}\left(\DSS{l_B}{n},\DSS{l'_B}{n'}\right)$. Then Proposition \ref{prop:DS/2Q/Wick} holds in the same symbolic form with $$A\otimes_r B\in \mathcal{L}\left(\DSS{l_A+l_B-r}{n},\DSS{l'_A+l'_B-r}{n''}\right)$$ defined precisely as in Definition \ref{def:DS/2Q/contractions}. The proof goes along the same lines. Generalization to the multiplier and mixed spaces goes along the lines of Subsection \ref{Framework/Mlt}. \end{rmk} \begin{rmk} Let $n,m,k,r\in\mathbb{N}_0$, $A\in\widecheck{\mathcal{L}}(\DSS{}{n},\DSSMlt{}{m}{k})$, and two tuples of pairwise distinct numbers $\arrs{i}{r}\in\{1,\ldots,n+r\}$ and $\arrs{i'}{r}\in\{1,\ldots,m+k+r\}$. Define $ B=\extDH{\arrs{i}{r}}{\arrs{i'}{r}}A $. Then, by Remark \ref{rmk:Framework/Augment/algebra}, $$ \widehat{B}=\extDH{\arrs{i}{r}}{\arrs{i'}{r}}\widehat{A}. $$ In other words, \emph{secondary quantization commutes with augmentations}. \end{rmk} Further properties of the secondary quantization and its generalization will be stated after some examples. \subsubsection{Important examples} We list a few applications of the formalism introduced in this subsection. On the one hand, they illustrate the framework we have constructed. On the other hand, they play important roles in what follows. \begin{exmp}\label{exmp:DS/2Q/apm} Define\footnote{Here we use $\DSS{0}{3}=\SRR{3}$. Simple identifications appear in other examples with no further comments.} $a_{+}\in \mathcal{L}(\SRR{3},\DSS{}{3})$ by $$ a_{+}[f](\vec{p})=f(\vec{p}) $$ and $a_{-}\in \mathcal{L}(\DSS{1}{3},\mathbb{C})$, $$ a_{-}[f]=\int{f(\vec{p},\vec{p})d^3{\vec{p}}} . $$ Then $a_{-}=a_{+}^{\dag}$, thus $\widehat{a}_{-}=\widehat{a}_{+}^{\dag}$. The operator-valued distributions $ \underline{\widehat{a}_{\pm}} $ coincide with the standard creation and annihilation distributions. By the Wick theorem of Remark \ref{rmk:DS/2Qgen/Wick} (or by direct computation), \begin{equation} \uwhat{a_{-}}(\vec{p})\uwhat{a_{+}}(\vec{p}')-\uwhat{a_{+}}(\vec{p}')\uwhat{a_{-}}(\vec{p})=\delta^{(3)}(\vec{p}-\vec{p}')\mathbb{1}_{\LL(\mathcal{D}_{\mathcal{S}})}, \label{eq:DS/CCR} \end{equation} where we treat $\delta^{(3)}(\vec{p}-\vec{p}')$ as a (symbolic kernel of a) distribution on $\RR{6}$. \end{exmp} \begin{exmp}\label{exmp:DS/2Q/amR} There is a restriction $a_{-}^R=\restrD{1,2,3}{1,2,3}a_{-}\in\mathcal{L}(\mathcal{D}_{\mathcal{S}}^{3},\SRR{3})$, $$ a_-^R[\psi](\vec{p})=\psi(\vec{p},\vec{p}),\,\forall \psi\in\mathcal{D}_{\mathcal{S}}^{1},\,\forall \vec{p}\in\RR{3}.
$$ It has the secondary quantization $\widehat{a}^{R}_{-}\in\mathcal{L}(\mathcal{D}_{\mathcal{S}},\DSS{}{3})$ and does not have the conjugate operator. This example is in agreement with the well-known fact that the annihilation (but not the creation) operator is in fact a smooth function of its parameter. \end{exmp} \begin{rmk} The commutation relations for $a_{-}^R$ in place of $a_-$ have the same form, but the delta-function should be understood as a parameter-dependent distribution, namely $$\delta^{(3)}(\vec{p}-\vec{p}')=\mathbb{1}_{\mathcal{L}(\SRR{3})}$$ (see Notation \ref{notn:Framework/symb/restrict}). \end{rmk} \begin{exmp}\label{exmp:DS/2Q/numdist} Another class of examples comes from the identification $$ \mathcal{L}(\SRR{n},\SMlt{m}{k})=\mathcal{L}(\DSS{0}{n},\DSSMlt{0}{m}{k}). $$ By Definition \ref{def:DS/2Q/AhatGen}, for $F\in\mathcal{L}(\SRR{n},\SMlt{m}{k})$, $$ \widehat{F}=\mathbb{1}_{\mathcal{D}_{\mathcal{S}}}\otimes F\in\mathcal{L}(\DSS{0}{},\DSSMlt{}{m}{k}). $$ It is consistent with Notation \ref{notn:DS/hat-op}. The Wick Theorem of Remark \ref{rmk:DS/2Qgen/Wick}, applied to products with one of the multipliers of this class leads to rather trivial result (as the number of contraction is always zero). In particular\footnote{As usual, we put $k=0$ for simplicity}, for $F\in \mathcal{L}(\SRR{n},\SRR{m})$, $A\in\widecheck{\mathcal{L}}\left(\DSS{}{n'},\DSS{}{n}\right)$ and $B\in\widecheck{\mathcal{L}}\left(\DSS{}{m},\DSS{}{m'}\right)$ we get $$F\otimes_{\bullet} A=F\otimes_{0} A=\widehat{F}\circ A\in \widecheck{\mathcal{L}}\left(\DSS{}{n'},\DSS{}{m}\right)$$ and $$B\otimes_{\bullet}F=B\otimes_{0}F=B\circ \widehat{F}\in \widecheck{\mathcal{L}}\left(\DSS{}{n},\DSS{}{m'}\right).$$ In the symbolic notation we write $$ \underline{\widehat{F\otimes_{\bullet} A}}\symbDistArgs{\bm{x}}{\bm{y}}=\int F(\bm{z}|\bm{y})\underline{\widehat{A}}\symbDistArgs{\bm{x}}{\bm{z}}d^{n}\bm{z}, \quad \bm{x}\in\RR{n'}, \bm{y}\in\RR{m}, $$ and $$ \underline{\widehat{B\otimes_{\bullet}F}}\symbDistArgs{\bm{x}}{\bm{y}}=\int\underline{\widehat{B}}\symbDistArgs{\bm{z}}{\bm{y}} F\symbDistArgs{\bm{x}}{\bm{z}}d^{m}\bm{z}, \quad,\bm{x}\in\RR{n}, \,\bm{y}\in\bm{m'}, $$ where we omit the hat on the symbolic kernel of the numerical quantization $F$ (as in Notation \ref{notn:DS/symb-opval}). The main lesson we learned here is that composition with (hatted) numerical operator as above can be pulled into the second quantization. \end{exmp} We are now ready to define the quantum fields. \begin{exmp} \label{exmp:DS/2Q/fields} Let $\omega_0$ be a massive dispersion function (Definition \ref{def:dispRel}). We define $\widetilde{\phi}^{R}_0\in\widecheck{\mathcal{L}}(\DSS{}{3},\DSMlt{}{1})$ as $$ \widetilde{\phi}^R_0=\widetilde{\phi}^R_{0+}+\widetilde{\phi}^R_{0-}, $$ where\footnote{We use properties of $\omega_0$ postulated in Definition \ref{def:dispRel}} \begin{equation}\label{eq:DS/2Q/field-def} \widetilde{\phi}^R_{0\pm}\symbDistArgs{\vec{p}}{t}=a_{\pm}(\pm \vec{p})\varphi_{\pm}(t,\vec{p}), \end{equation} $$ \varphi_{\pm} \in \Mlt{4},\quad \varphi_{\pm}(\vec{p},t)=\frac{e^{\mp \mathrm{i} \omega_0(\vec{p})t}}{\sqrt{(2\pi)^3 2\omega_0(\vec{p})}}\, (\forall \vec{p}\in\RR{3},\,\forall t\in\RR{}). $$ We also set $\widetilde{\phi}_0=\intDH{1}{1}\widetilde{\phi}_0^R$. Then $$ \uwhat{\phi}_{0,\pm }(t,\vec{k})=\uwhat{\phi}_{0,\pm }^R\symbDistArgs{\vec{k}}{t}=\uwhat{a}_{\pm}(\vec{k})\frac{e^{\mp \mathrm{i} \omega_0(\vec{k})t}}{\sqrt{(2\pi)^3 2\omega_0(\vec{k})}}. 
$$ which is nothing but the positive and the negative frequency parts of the partial Fourier transform of the real scalar quantum field. \end{exmp} \begin{rmk} For completeness we present a translation of (\ref{eq:DS/2Q/field-def}) from the symbolic language. We take arbitrary $f\in\DSS{}{3}$ and write $$ \underline{\widetilde{\phi}^R_{0\pm}\circ f}=\int \underline{a}_{\pm}(\pm \vec{p})\varphi_{\pm}(t,\vec{p})\underline{f}(t,\vec{p}) dt d^3{\vec{p}}=\widetilde{\phi}_{0\pm}[f]=\int \underline{a}_{\pm}( \vec{p})\varphi_{\pm}(t,\pm\vec{p})\underline{f}(t,\pm\vec{p})dt d^3{\vec{p}}. $$ As $\varphi_{\pm}\in\Mlt{4}$, we present multiplication by it as $\MltM{\varphi_{\pm}}\extD{1,2,3}{2,3,4}{\mathbb{1}_{\Mlt{1}}}$ and add a hat according to Notation \ref{notn:DS/symb-opval}. Treating the linear transform once again by Notations \ref{notn:Framework/symb/stdop} and \ref{notn:DS/symb-opval} we arrive at $$ \widetilde{\phi}^R_{0\pm}= a_{\pm} \circ \extDH{1}{1}\widehat{\left(\pm\mathbb{1}_{\mathcal{L}(\RR{3})}\right)} \circ \widehat{\MltM{\varphi_{\pm}}}\circ \extDH{1,2,3}{2,3,4}{\widehat{\mathbb{1}_{\Mlt{1}}}}. $$ \end{rmk} We also define the position-space presentation of the quantum fields as $\phi_0^R=\widetilde{\phi_0^R}\circ \Fourier{3}{+}$ and $\phi_0=\intDH{1}{1}\phi_0^R$. Symbolically we have, for example, $$ \phi_0^R\symbDistArgs{\vec{x}}{t}=\int e^{\mathrm{i} \vec{k}\cdot\vec{x}} \widetilde{\phi}_0^R\symbDistArgs{\vec{k}}{t}d^3\vec{k}. $$ \begin{rmk}\label{rmk:DS/2Q/symbolic} With Examples \ref{exmp:DS/2Q/apm}-\ref{exmp:DS/2Q/numdist} the expression (\ref{eq:DS/2Q-symbolic}) acquires a precise meaning. Its translation from the symbolic language for $A\in\mathcal{L}(\mathcal{D}_{\mathcal{S}}^{l},\mathcal{D}_{\mathcal{S}}^{l'})$ ($l,l'\in\mathbb{N}_0$) is $$ \widehat{A}=\widehat{a}_{+}^{\widehat{\otimes} l'}\circ \widehat{A\circ \symmze{3}{}}\circ \widehat{a}_{-}^{R\widehat{\otimes} l}, $$ where $A\circ \symmze{3}{}\in{\mathcal{L}\left(\SRR{3l},\SRR{3l'}\right)}$ is a numerical distribution, so for $\widehat{A\circ \symmze{3}{}}$ we can use Notation \ref{notn:DS/hat-op}. A similar expression can be written for the general Definition \ref{def:DS/2Q/AhatGen}. \end{rmk} \begin{rmk}\label{rmk:DS/2Q/formal-alg-gen} The Wick theorem of Remark \ref{rmk:DS/2Qgen/Wick} may be derived from (\ref{eq:DS/CCR}) by means of the symbolic calculus. In this sense the spaces $\widecheck{\mathcal{L}}\left(\DSS{}{n},\DSS{}{m}\right)$ can be considered as formal spaces generated by $a_{+}$ and $a_{-}^R$ with the product $\otimes_{\bullet}$, the partially defined involution $\dag$ and the operations of Notation \ref{notn:DS/hat-op}. As was anticipated in Remark \ref{rmk:DS/2Q/formal-alg}, this allows one to forget about the particular realization of the elements of $\widecheck{\mathcal{L}}\left(\DSS{}{n},\DSS{}{m}\right)$ as parameter-dependent (unbounded) operator-valued distributions acting on $\mathcal{H}_{\rm{phys}}$ and to define Hamiltonian perturbative QFT (Section \ref{HpQFT}) in the formal algebra language. \end{rmk} For future use we also define the Wick products. Note that the singular products never appear in the non-local quantum field theory, so we can get by with a very simple definition.
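Before doing so, let us record a finite-dimensional caricature of the commutation relation (\ref{eq:DS/CCR}), again as an illustration only: if the momenta are replaced by finitely many discrete modes and the occupation of each mode is truncated, the delta function becomes a Kronecker delta, and the only deviation from (\ref{eq:DS/CCR}) sits at the truncation boundary. A minimal Python sketch (assuming \texttt{numpy}):
\begin{verbatim}
import numpy as np

N = 6                                        # occupation cutoff per mode
a = np.diag(np.sqrt(np.arange(1, N)), 1)     # truncated annihilation operator
adag = a.conj().T

comm = a @ adag - adag @ a                   # [a, a^dag] on one mode
assert np.allclose(comm[:-1, :-1], np.eye(N - 1))   # identity below the cutoff
assert np.isclose(comm[-1, -1], 1 - N)               # truncation artifact

I = np.eye(N)
a1, a2 = np.kron(a, I), np.kron(I, a)        # two independent "momentum" modes
assert np.allclose(a1 @ a2.conj().T - a2.conj().T @ a1, 0)   # delta for distinct modes
\end{verbatim}
In the continuum the Kronecker delta is replaced by $\delta^{(3)}(\vec{p}-\vec{p}')$ and no truncation is needed, since the operators act on $\mathcal{D}_{\mathcal{S}}$.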
\begin{exmp}\label{exmp:DS/2Q/exmp-norm} For $n\in\mathbb{N}_0$ and $\arrs{\alpha}{n}\in\{+,-\}$ we set \begin{equation}\label{eq:DS/2Q/exmp/normProd} :\prod_{j=1}^n \uwhat{a}_{\alpha_j}(\vec{p}_j): =\uwhat{a_{:\arrs{\alpha}{n}}:}(\arrs{\vec{p}}{n}), \end{equation} where $a_{:\arrs{\alpha}{n}:}\in\widecheck{\mathcal{L}}(\DSS{}{3n},\mathcal{D}_{\mathcal{S}}{})$ is uniquely defined by \begin{equation} a_{\arrs{\alpha}{n}}\left[\bigotimes_{j=1}^n f_j \right]=\symmze{3}{}\left[\bigotimes_{j=1}^n \underline{a_{\alpha_j}}[f_j]\right], \,\forall \arrs{f}{n}\in\RR{3}. \label{eq:DS/2Q/exmp/normProd-first} \end{equation} To see that (\ref{eq:DS/2Q/exmp/normProd}) indeed defines the Wick product, note that the tensor product is the same as product with no contructions. In particular, if $\alpha_j$ are normally ordered (i.e. if $\alpha_i=+$ and $\alpha_j=-$, then $i<j$), then $$ :\prod_{j=1}^n \uwhat{a}_{\alpha_j}(\vec{p}_j):=\left(\widehat{\bigotimes}_{j=1}^n\uwhat{a}_{\alpha_j} \right)(\arrs{\vec{p}}{n}), $$ and the other cases follow by obvious symmetry of (\ref{eq:DS/2Q/exmp/normProd-first}). It is, of course, possible to present explicitly for each $l\in\mathbb{N}_0$ action of such operator on $\DSS{l}{3n}$, but we omit it to avoid unecessary complicated combinatorics. \end{exmp} \begin{exmp}\label{exmp:DS/2Q/exmp-norm-fields} Continuing the previous example, we set $$ :\prod_{j=1}^n\uwhat{\widetilde{\phi}}^R_{0\alpha_j}\symbDistArgs{\vec{p}_j}{t_j}:=\uwhat{a_{:\arrs{\alpha}{n}}:}(\arrs{\vec{p}}{n})\prod_{j=1}^n \varphi_{\alpha_j}(t_j,\vec{p}_j), $$ and $$:\prod_{j=1}^n\uwhat{\widetilde{\phi}}^R_{0\alpha_j}\symbDistArgs{\vec{p}_j}{t_j}:=\sum_{\arrs{\alpha}{n}=\pm}:\prod_{j=1}^n\uwhat{\phi}^R_{0\alpha_j}\symbDistArgs{\vec{p}_j}{t_j}, $$ and $$:\prod_{j=1}^n\uwhat{\phi}^R_{0\alpha_j}\symbDistArgs{\vec{x}_j}{t_j}:=\int :\prod_{j=1}^n\uwhat{\widetilde{\phi}}^R_{0\alpha_j}\symbDistArgs{\vec{p}_j}{t_j} e^{\mathrm{i}\sum_{j=1}^n\vec{p}_j\cdot \vec{x}_j} d^{3n}\arrs{\vec{p}}{n}. $$ Clearly, all these operators can be presented as second quantization of some objects in $\widecheck{\mathcal{L}}(\DSS{}{3n},\DSMlt{}{1})$. The unrestricted form and the can be defined similarly. \end{exmp} \section{Non-local Hamiltonian Perturbation Quantum Field Theory}\label{HpQFT} \subsection{Motivation: Hamiltonian Perturbation Quantum Field Theory} In order to motivate the technical constructions of this section and fix the terminology we briefly (and rather informally) recall how the Hamiltonian perturbation theory is usually constructed. A rigorous version of this construction for a class of non-local quantum field theories is presented in the next subsections. \par The goal is to construct the operator-valued distributions $\phi$, $\pi$ satisfying the commutation relations \begin{equation}\label{eq:HpQFT/AdmTh/canonicalQuant} [\phi(t,\vec{x}),\pi(t,\vec{x}')]=-i\delta(\vec{x}-\vec{x}'). \end{equation} and the equations of motion \begin{equation} \label{eq:HpQFT/AdmTh/HamEv} \partial_t \phi(x)=-i [H(t),\phi(t,\vec{x})], \end{equation} \begin{equation}\label{eq:HpQFT/AdmTh/HamEvPi} \partial_t \pi(x)=-i [H(t),\pi(t,\vec{x})]. \end{equation} Here the Hamiltonian operator $H(t)$ $$ H(t)=H_0(t)+H_{int}(t). $$ The first term, the free Hamiltonian $H_0$ is chosen so that the solution of (\ref{eq:HpQFT/AdmTh/canonicalQuant}-\ref{eq:HpQFT/AdmTh/HamEvPi}) for $H_0$ in place of $H$ is known. 
More precisely, we are given $\phi_0$, $\pi_0$ such, that \begin{equation}\label{eq:HpQFT/AdmTh/canonicalQuant0} [\phi_0(t,\vec{x}),\pi_0(t,\vec{x}')]=-i\delta(\vec{x}-\vec{x}'). \end{equation} \begin{equation} \label{eq:HpQFT/AdmTh/HamEv0} \partial_t \phi_0(x)=-i [H(t),\phi_0(t,\vec{x})], \end{equation} \begin{equation}\label{eq:HpQFT/AdmTh/HamEvPi0} \partial_t \pi_0(x)=-i [H(t),\pi_0(t,\vec{x})]. \end{equation} The interaction part of the Hamiltonian $H_{int}(t)$ has the form $$ H_{int}(t)=g\int{h_{int}(t,\vec{x})\lambda(t,\vec{x})}d^3\vec{x}, $$ where $h_int(x)$ is a fixed translationally-invariant polynomial functional of the field $\phi(t,\vec{x})$ and its conjugated momentum $\pi(t,\vec{x})$. The function $\lambda(t,\vec{x})$ is the adiabatic cut-off. As mentioned in Introduction, its presence is necessary due to the Haag theorem, forbidding the existence of the unitary equivalence between the free and the interacting theories (\ref{eq:HpQFT/AdmTh/IntPhi}) below for translationally-invariant interaction. At this moment we need $\lambda(t,\vec{x})$ to decay then $t\rightarrow \infty$ in order to make the integral in (\ref{eq:HpQFT/AdmTh/UT}) below converge. In the next subsection we will see that $\lambda$ should also decay whenever $\vec{x}\rightarrow \infty$ for $H_{int}$ to be a well-defined unbounded operator. The function $\lambda$ should also be smooth enough to make the switching adiabatic. These conditions are guaranteed if we assume $\lambda \in \SRR{4}$. To get physically relevant information we should pass to the limit $\lambda(t,\vec{x})\rightarrow 1$ in the appropriate sense. Finally, $g$ is a formal parameter introduced for convenience. \par Then we construct a formal solution of (\ref{eq:HpQFT/AdmTh/canonicalQuant}-\ref{eq:HpQFT/AdmTh/HamEvPi}) in the form \begin{equation}\label{eq:HpQFT/AdmTh/IntPhi} \phi(\vec{x},t)=U(t)^{-1}\phi_0(\vec{x},t)U(t), \end{equation} \begin{equation} \label{eq:HpQFT/AdmTh/IntPi} \pi(\vec{x},t)=U(t)^{-1}\pi_0(\vec{x},t)U(t), \end{equation} where $U(t)$ is given by \begin{equation}\label{eq:HpQFT/AdmTh/UT} U(t)=\sum_{n} (-i)^n \int_{-\infty}^t dt_1 \int_{-\infty}^{t_1} dt_2\ldots \int_{-\infty}^{t_{n-1}} dt_n {\prod_{j=1}^n H_{I}(t_j)d^{n}\arrs{t}{n}}= \end{equation} $$ \timeorder{e^{-i \int_{\tau<t'}{H_{I}(\tau)d\tau}}}, $$ where $\timeorder{\ldots}$ is the symbolic time-ordering operator and \begin{equation}\label{eq:HpQFT/AdmTh/HI} H_I(t)=U(t)H_{int}(t)U(t)^{-1}. \end{equation} Assuming that $H_{int}$ is symmetric, we may expect that $U(t)$ is (in some sense) unitary. \par We claim that (\ref{eq:HpQFT/AdmTh/IntPhi}-\ref{eq:HpQFT/AdmTh/IntPi}) is a solution of (\ref{eq:HpQFT/AdmTh/canonicalQuant}-\ref{eq:HpQFT/AdmTh/HamEvPi}). First, (\ref{eq:HpQFT/AdmTh/IntPhi}-\ref{eq:HpQFT/AdmTh/IntPi}) together with (\ref{eq:HpQFT/AdmTh/canonicalQuant0}) imply the commutation relations (\ref{eq:HpQFT/AdmTh/canonicalQuant}). To get the equations of motion we formally differentiate (\ref{eq:HpQFT/AdmTh/UT}) to get $$ \frac{dU(t)}{dt}=-i H_I(t)U(t). $$ Now differentiating (\ref{eq:HpQFT/AdmTh/HamEvPi0}-\ref{eq:HpQFT/AdmTh/HamEv0}) and substituting (\ref{eq:HpQFT/AdmTh/HI}) we arrive to (\ref{eq:HpQFT/AdmTh/HamEvPi}-\ref{eq:HpQFT/AdmTh/HamEv}). Now let us assume for simplicity that $H_{int}(t)$ is a polynomial functional of $\phi$ and $\pi$ at the time $t$. 
Then by (\ref{eq:HpQFT/AdmTh/IntPhi}-\ref{eq:HpQFT/AdmTh/IntPi}) definition of $H_I$ (\ref{eq:HpQFT/AdmTh/HI}) may be rewritten as \begin{equation}\label{eq:HpQFT/AdmTh/Hloc} H_I(t)=H_{int}(t)_{\phi\rightarrow \phi_0,\quad \pi\rightarrow \pi_0}. \end{equation} \par There are (at least) two ways to get physically relevant information from this construction. First of all, we may the scattering operator. For simplicity of the interpretation let us assume that $\lambda(y,\vec{x})$ vanishes for $t>|T|$. Then the same holds for $H_{int}(t)$ and thus $H_{I}(t)$. We have $$U(t)=\bm{1}, \quad t<-T;\qquad U(t)=S, \quad t>T $$ for some unitary operator $S$. Then from (\ref{eq:HpQFT/AdmTh/IntPhi}-\ref{eq:HpQFT/AdmTh/IntPi}) we see that in the distant past the interacting field $\phi$ coincides with the free one, while in the distant future it belongs to a unitary equivalent realization of the same commutation relations. These two realizations induce two interpretations of the Hilbert space as a Fock space, giving rise the states of incoming and the outgoing particles respectively. Then we can recognize the operator $S$, intertwining these two realizations, as the scattering operator. This object has a direct physical interpretation, but its adiabatic limit (see next subsection or the introduction for a discussion of the adiabatic limit) is rather delicate \cite{EG76}. We postpone it for the next publication \cite{PAP1}. \par Another type of objects of interest are the correlators. We consider two families of distributions\footnote{We use a non-standard partially Fourier transformed presentation of all functions for technical convenience.}: \begin{itemize} \item The \emph{Wightman functions} $\mathcal{W}_n\in\SRR{4n}$ \begin{equation} \mathcal{W}_n\left((t_j,\vec{p}_j)_{j=1\ldots n}\right)=\left(\Omega,\prod_{j=1}^{\infty}\widetilde{\phi}(\vec{t}_j,\vec{p}_j)\Omega\right) \label{eq:HpQFT/WightmanDef} \end{equation} \item The \emph{Green functions} $\mathcal{G}_n\in\SRR{4n}$ \begin{equation} \mathcal{G}_n\left((t_j,\vec{p}_j)_{j=1\ldots n}\right)=\left(\Omega,\timeorder{\prod_{j=1}^{\infty}\widetilde{\phi}(\vec{t}_j,\vec{p}_j)}\Omega\right). \label{eq:HpQFT/GreenDef} \end{equation} \end{itemize} Equations (\ref{eq:HpQFT/WightmanDef}-\ref{eq:HpQFT/GreenDef}) are written already in the adiabatic limit (or at least the temporal adiabatic limit $\lambda(\vec{x},t)\rightarrow\lambda_{\mathrm{S}}(\vec{x})$), and $\Omega$ is the vacuum state of the full Hamiltonian. 
To compute it in the frame of the perturbation theory one may use the Gell-Man and Low theorem \cite{GL, Molinari} giving \begin{equation}\label{eq:HpQFT/AdmTh/WightDef} \mathcal{W}_n=\lim_{\lambda(\vec{x},t)\rightarrow\lambda_{\mathrm{S}}(\vec{x})} \frac{W_n}{W_0},\,\forall n\in\mathbb{N}_0 \end{equation} and \begin{equation}\label{eq:HpQFT/AdmTh/GreenDef} \mathcal{G}_n=\lim_{\lambda(\vec{x},t)\rightarrow\lambda_{\mathrm{S}}(\vec{x})} \frac{G_n}{G_0},\,\forall n\in\mathbb{N}_0 \end{equation} where \begin{equation} W_n\left((t_j,\vec{p}_j)_{j=1\ldots n}\right)= \left(\Omega_0,S\prod_{j=1}^n \widetilde{\phi}_0(t_j,\vec{p}_j)\Omega_0\right) = \end{equation} $$ \left(\Omega_0,U(+\infty,t_1)\prod_{j=1}^n \widetilde{\phi}_0(t_j,\vec{p}_j)U(t_j,t_{j+1})\Omega_0\right)$$ and \begin{equation}\label{eq:HpQFT/GreenUnNormDef} G_n\left((t_j,\vec{p}_j)_{j=1\ldots n}\right)=\left(\Omega_0,S\timeorder{\prod_{j=1}^n \widetilde{\phi}_0(t_j,\vec{p}_j) }\Omega_0\right)= \end{equation} $$ \left(\Omega_0,\timeorder{\prod_{j=1}^n\widetilde{\phi}_0(t_j,\vec{p}_j)e^{-i \int_{-\infty}^{+\infty}{H_{I}(\tau)d\tau}}}\Omega_0\right)$$ with $\Omega_0$ being the vacuum vector of the free theory (in accordance with the one defined in Section \ref{DS}), assuming that the limits above exist. This limit is known as the weak adiabatic limit and is the central object of this paper. Comparing the weak and strong adiabatic limit existence theorems \cite{EG73,EG76} we may expect that the former will be much easier to prove than the latter. \par The LSZ procedure\footnote{The LSZ reduction for non-local theories was considered in \cite{PhDThesis}.} gives a connection between the strong and weak adiabatic limit, allowing to retrieve the matrix elements of $S$ from the residues of the Fourier transform of the Green functions. \par Before going further let us briefly summarize the unclarified issues of the exposition above. \begin{rmk}\label{rmk:HpQFT/AdmTh/fields-dist} The objects $\phi$ and $\pi$ (respectively $\phi_0$ and $\pi_0$) satisfying (\ref{eq:HpQFT/AdmTh/canonicalQuant}) (respectively (\ref{eq:HpQFT/AdmTh/canonicalQuant0})) may exist only as distributions valued in unbounded operators. We build them as elements of $\mathcal{L}(\DSS{}{4},\mathcal{D}_{\mathcal{S}}{})$ which has restrictions\footnote{Otherwise the simultaneous commutation relations can not be defined.} and the mentioned relations must be rewritten more carefully, as was done in Example \ref{exmp:DS/2Q/fields}. In other words, they make sense then both hands sides are applied to a vector in $\DSS{}{n}$ with appropriate $n\in\mathbb{N}_0$. \end{rmk} \begin{rmk} We deal with right-hand sides of (\ref{eq:HpQFT/AdmTh/UT}) as with a formal power series in terms of the formal parameter $g$. Thus, in addition to the previous Remark, to make sense of all expressions containing $H$, $H_{int}$, $S$, $U$, $\phi$ and $\pi$ we have to truncate them at arbitrary finite order (see Subsection \ref{Intro/prelim}). \end{rmk} \begin{rmk} To make sense of (\ref{eq:HpQFT/AdmTh/IntPhi}-\ref{eq:HpQFT/AdmTh/HI}) and dependent equations we have to treat $t$ as a parameter rather than a distributional argument. In Remark \ref{rmk:HpQFT/AdmTh/fields-dist} we already noted that by writing (\ref{eq:HpQFT/AdmTh/canonicalQuant}- \ref{eq:HpQFT/AdmTh/HamEvPi0}) we have implicitly assumed the existence of restrictions of operator-valued distributions $\phi$, $\pi$, $\phi_0$ and $\pi_0$ to fixed values of $t$. We also have to require that $U$ and $H_I$ are functions of $t$. 
In local QFT, due to the UV singularities, $H_I$ exists only as a distribution and thus all expressions including the time-ordered product require a regularization. This problem does not appear in the class of non-local quantum field theories we define in the rest of the section. \end{rmk} \subsection{Admissible Quantum Field Theories}\label{HpQFT/AdmTh} Here we define a class of admissible quantum field theories for which the construction of the previous subsection becomes precise. \par \subsubsection{Free Quantum Field and admissible dispersion relation} Instead of the fields themselves, in the non-local case it is more convenient to work with their partial Fourier transforms. The appropriate definition of such an object was already given in Example \ref{exmp:DS/2Q/fields}. We use it always assuming the mass gap. Analogously, within the same notation we can define the momentum $\widetilde{\pi}_0$ as $$ \pi_0(t,\vec{k})=\partial_{t}\phi_0(t,\vec{k}). $$ Using the Wick Theorem of Remark \ref{rmk:DS/2Qgen/Wick} we may show that $$ [\uwhat{\phi}(t,\vec{k}),\uwhat{\pi}(t,\vec{k}')]=-i\widehat{\delta}(\vec{k}+\vec{k'})\mathbb{1}_{\mathcal{L}(\mathcal{D}_{\mathcal{S}}) $$ which is the precise form of the Fourier transform of (\ref{eq:HpQFT/AdmTh/canonicalQuant0}). \subsubsection{Admissible interaction} The next ingredient is the interaction $H_I$. Our approach mostly follows \cite{PhDThesis}. \begin{Def}\label{def:HpQFT/AdmInt} We say that $h_I\in\widecheck{\mathcal{L}}(\DSS{}{3},\DSMlt{}{1})$ is an \emph{admissible interaction density} if for each $l,l'\in\mathbb{N}_0$ there is$F_{(l,l')}\in\Mlt{3(l+l')}$, which we call \emph{the admissible interaction kernels} such that \begin{equation}\label{eq:def:HpQFT/AdmInt} h_I[\psi]_{l'}(\arrs{\vec{p'}}{l'},t)=\sum_{l=0}^{\infty}\int{F'_{(l',l)}(\arrs{\vec{p'}}{l'},\arrs{\vec{p}}{l})\widehat{\Fourier{-}{3}}\psi\left(\arrs{\vec{p}}{l},\sum_{j=1}^{l'}\vec{p'}_j-\sum_{j=1}^{l}\vec{p}_{j}\right)}\times \end{equation} $$ \exp\left(\mathrm{i}\left(\sum_{j=1}^{l'}\omega_0(\vec{p}'_j)-\sum_{j=1}^{l}\omega_0(\vec{p}_j)\right)t \right) d^{3l}\arrs{\vec{p}}{l}, $$ $$ \forall \psi\in\DSS{l'}{3}, \quad \forall \arrs{\vec{p'}}{l'}\in \RR{3l'}, \, \forall t\in \RR{}, $$ \begin{equation}\label{eq:def:HpQFT/AdmInt/KernelGrowth} \left((\arrs{\vec{p}}{l},\arrs{\vec{p}'}{l'})\mapsto F_{(l,l')} s\left(\sum_{j=1}^{l'}\vec{p'}_j-\sum_{j=1}^{l}\vec{p}_{j}\right)\right) \in\SRR{3(l+l')},\, \forall s\in\SRR{3(l+l')} \end{equation} and $$ h_I^{\dag}=h_I. $$ \end{Def} The following remarks clarify the technical definition above. \begin{rmk} The condition (\ref{eq:def:HpQFT/AdmInt/KernelGrowth}) basically limits the directions in which $F_{(l,l')}$ can grow. We could instead change the variables of integration in (\ref{eq:def:HpQFT/AdmInt}) and ask the kernel to belong to the mixed space, but that would break the symmetry of Definition \ref{def:HpQFT/AdmInt}. \end{rmk} \begin{rmk} An admissible interaction is characterized by its kernels. These kernels are constrained by the following. By definition of $\widecheck{\mathcal{L}}$ there can be only finitely many non-zero kernels among $F_{(l',l)}$. Each such kernel should be symmetric with respect to its first $l'$ arguments for $h_I$ to value in $\DSS{}{1}$. Symmetry in the next $l$ arguments can be always assumed as we do from now on. 
Finally, the ''hermicity'' condition translates to $$ F'_{(l',l)}(\arrs{\vec{p'}}{l'},\arrs{\vec{p}}{l})=\overline{F'_{(l,l')}(\arrs{\vec{p}}{l},\arrs{\vec{p'}}{l'})}, \quad \forall l,l'\in \mathbb{N}_0, \,\forall \arrs{\vec{p}}{l}\RR{3l},\,\arrs{\vec{p'}}{l'}\in\RR{3l'}. $$ It is easy to see that there is a one-to-one correspondence between the admissible interaction densities and the families of kernels satisfying all conditions listed here. \end{rmk} \begin{rmk}\label{rmk:HpQFT/AdmTh/forms-of-HI} In the symbolic notation Definition \ref{def:HpQFT/AdmInt} reads as\footnote{to make it precise we have to use presentation of $F_{l,l'}$ via $F'_{l,l'}$ as in Definition \ref{eq:def:HpQFT/AdmInt}.} $$ \uwhat{h_I}\symbDistArgs{\vec{x}}{t}=\sum_{l,l'=0}^{\infty}\frac{1}{l!l'!}\int F_{(l',l)}(\arrs{\vec{p'}}{l'},\arrs{\vec{p}}{l})\prod_{j=1}^{l'}\uwhat{a}_+(\vec{p}'_j)\prod_{j=1}^{l}\uwhat{a}_-(\vec{p}_j)$$ $$ \exp\left(\mathrm{i}\left(\sum_{j=1}^{l'}\omega_0(\vec{p}'_j)-\sum_{j=1}^{l}\omega_0(\vec{p}_j)\right) -\mathrm{i} \left(\sum_{j=1}^{l'}\vec{p'}_j-\sum_{j=1}^{l}\vec{p}_{j}\right)\vec{x}\right) d^{3l'}\arrs{\vec{p}'}{l'}d^{3l}\arrs{\vec{p}}{l}. $$ Alternatively, one may write \begin{equation}\label{eq:HpQFT/AdmTh/hI-timeloc} \uwhat{h_I}\symbDistArgs{\vec{x}}{t}=\sum_{n=0}^{\infty}\frac{1}{n!}\sum_{\arrs{\alpha}{n}\in\{+,-\}}\int \mathcal{F}^{\arrs{\alpha}{n}}(\arrs{\vec{p}}{n})\prod_{j=1}^{n}{\mathpunct{:}}\uwhat{\widetilde{\phi}}_{0,\alpha_j}(t,-\alpha_j\vec{p}_j){\colon} \end{equation} $$ e^{\mathrm{i}\left(\sum_{j=1}^{l}\vec{p}_{j}\right)\vec{x}}d^{3n}\arrs{\vec{p}}{n}, $$ with $$ \mathcal{F}^{\overbrace{+\ldots+}^{l'}\overbrace{-\ldots-}^{l}}(\arrs{\vec{p'}}{l'},\arrs{\vec{p}}{l})=\mathcal{F}^{(l',l)}(\arrs{\vec{p'}}{l'},\arrs{\vec{p}}{l})=$$ $$ F_{(l',l)}(-\arrs{\vec{p'}}{l'},\arrs{\vec{p}}{l})\left(\prod_{j=1}^l\sqrt{(2\pi)^32\omega_0(\vec{p}_k)}\right)\left( \prod_{j=1}^{l'}\sqrt{(2\pi)^32\omega_0(\vec{p}_k)}\right), $$ $$ \mathcal{F}_{\arrsP{\alpha}{n}{\sigma}}(\arrsP{\vec{k}}{n}{\sigma})=\mathcal{F}_{\arrs{\alpha}{n}}(\arrs{\vec{k}}{n}), \, \forall \arrs{\vec{k}}{n}\in\RR{3n},\, \forall \arrs{\alpha}{n}\in\{+,-\}^{n}, \, \forall \sigma\in\symmgr{n}. $$ We use this form in the formulation of the Feynman rules in Appendix \ref{appFR}. \end{rmk} \begin{rmk} \label{rmk:HpQFT/AdmTh/forms-of-HI-pos} If $F_{(l,l')}\in\SRR{3(l+l')}$, we may further write \begin{equation}\label{eq:HpQFT/AdmTh/hI-timeloc-pos} \uwhat{h_I}\symbDistArgs{\vec{x}}{t}=\sum_{n=0}^{\infty}\frac{1}{n!}\sum_{\arrs{\alpha}{n}\in\{+,-\}}\int \mathcal{K}^{\arrs{\alpha}{n}}((\vec{x}_j-\vec{x})_{j=1,\ldots,n})\prod_{j=1}^{n}{\mathpunct{:}}\uwhat{{\phi}}_{0,\alpha_j}(t,\vec{x}_j){\colon}d^{3n}\arrs{\vec{x}}{n}, \end{equation} where $\mathcal{K}^{\arrs{\alpha}{n}}\in\SRR{3n}$ is given by the relation $$ \mathcal{K}^{\arrs{\alpha}{n}}(\arrs{\vec{x}}{n})=\int{\mathcal{F}^{\arrs{\alpha}{n}}(\arrs{\vec{x}}{n}) e^{-\mathrm{i} \sum_{j=1}^n \alpha_j \vec{p}_j\vec{x}_j}}d^{3n}\arrs{\vec{x}}{n}. $$ The clear advantage of this form is the manifest translational invariance. Unfortunately, in general case $\mathcal{K}$ is only a distribution, so (\ref{eq:HpQFT/AdmTh/hI-timeloc-pos}) may have no sense. \end{rmk} \begin{exmp} Let $\kappa\in\SRR{4n}$. 
Then \begin{equation}\label{eq:HpQFT/AdmInt/exmp-smooth} \uwhat{h_I}\symbDistArgs{\vec{x}}{t}= \end{equation} \begin{equation} \int :\prod_{j=1}^n \uwhat{\widetilde{\phi}}_0(t_j,\vec{x}_j):\kappa\left((\vec{x}_j-\vec{x},{t}_j-t)_{j=1\ldots n}\right)d^{3n}\arrs{\vec{x}}{n}d^n\arrs{t}{n} \label{eq:HpQFT/AdmInt/hI-kappa} \end{equation} defines an admissible interaction density. \end{exmp} \begin{proof} Note, that (\ref{eq:HpQFT/AdmInt/exmp-smooth}) is clearly well-defined, as it is essentially evaluation of a distribution on a test-function. It involves a non-injective linear transform $(t,\arrs{t}{n})\mapsto ((t_j-t)_{j=1\ldots n})$, but this is not a problem since we allow $h_I$ to be a multiplier. \par Now we use Example \ref{exmp:DS/2Q/exmp-norm-fields} to write $$ h_I\symbDistArgs{\vec{x}}{t}= \int a_{:\arrs{\alpha}{n}:}(\arrs{\vec{p}}{n})\left(\prod_{j=1}^n\varphi_{\alpha_j}(t_j,\alpha_j\vec{p}_j)e^{\alpha_j\mathrm{i} \vec{p}_j\vec{x}_j}\right)\times$$ $$ \kappa\left((\vec{x}_j-\vec{x},{t}_j-t)_{j=1\ldots n}\right)d^{3n}\arrs{\vec{x}}{n}d^n\arrs{t}{n}d^{3n}\arrs{\vec{p}}{n}= $$ $$ \int a_{:\arrs{\alpha}{n}:}(\arrs{\vec{p}}{n})e^{\mathrm{i}\left(\sum_{j=1}^n\alpha_j\vec{p}_j\right)\cdot \vec{x}-\mathrm{i}\left(\sum_{j=1}^n\alpha_j\omega_0(\vec{p}_j)\right)t}F_{\arrs{\alpha}{n}}(\arrs{\vec{p}}{n})d^{3n}\arrs{\vec{p}}{n}, $$ where $$ F_{\arrs{\alpha}{n}}(\arrs{\vec{p}}{n})=\int \left(\prod_{j=1}^n\varphi_{\alpha_j}(t_j,\alpha_j\vec{p}_j)e^{\alpha_j\mathrm{i} \vec{p}_j\vec{x}_j}\right)\times$$ $$ \kappa((\vec{x}_j,{t}_j)_{j=1\ldots n})d^{3n}\arrs{\vec{x}}{n}d^n\arrs{t}{n}. $$ Then it is easy to see that $F_{\arrs{\alpha}{n}}\in\SRR{3n}$ and we arrive to the form of Remark \ref{rmk:HpQFT/AdmTh/forms-of-HI}. \end{proof} Informally, one may say that the admissible interactions are the ones of the form (\ref{eq:HpQFT/AdmInt/hI-kappa}), but with slightly singular $\kappa$. There are two allowed types of singularities: first, we can use the fact that the quantum field restricts with respect to time variable, second we can use the fact that $h_I$ is a distribution with respect to $\vec{x}$. The first type we have already seen in Remarks \ref{rmk:HpQFT/AdmTh/forms-of-HI} and \ref{rmk:HpQFT/AdmTh/forms-of-HI-pos}. As for the second one, we consider the following. \begin{exmp}[Quantum Wick Product] The Quantum Wick Product is a combination of the Quantum Diagonal Map \cite{BahnsEtAl,BahnsPhD}, a well-behaved replacement of the pointwise product in the DFR quantum spacetimes \cite{AreaDistanceVolume}, with the Wick product of quantum field theory. It defines an admissible interaction $$ \uwhat{h_I}\symbDistArgs{\vec{x}}{t}=\int \delta\left(t-\frac{1}{n}\sum_{j=1}^n t_j\right)\delta^{(3)}\left(\vec{x}-\frac{1}{n}\sum_{j=1}^n \vec{x}_j\right)\exp \left( -\frac{1}{2l_P^2} \sum_{j=1}^n ((t_j-t)^2) \right)$$ $$:\prod_{j=1}^n \uwhat{\widetilde{\phi}}^R_0(t_j|\vec{x}_j): \exp \left( -\frac{1}{2l_P^2} \sum_{j=1}^n ((\vec{x}_j-\vec{x})^2) \right)d^{3n}\arrs{\vec{x}}{n}d^n\arrs{t}{n}, $$ where $l_{P}$ is the Planck scale. \end{exmp} \begin{proof} To see well-definedness let us multiply with $s\in\SRR{1}$ (since the result is in the multipliers space) and act on $f\in\DSS{}{3}$. 
After change of variables we get: $$ \uwhat{h_I}[f](t)s(t)=\int s(t) \delta\left(t-\frac{1}{n}\sum_{j=1}^n t_j\right)\delta^{(3)}\left(\vec{y}\right)\times$$ $$\exp \left( -\frac{1}{2l_P^2} \sum_{j=1}^n ((t_j-t)^2) \right)s(t):\prod_{j=1}^n \uwhat{\widetilde{\phi}}^R_0(t_j|\vec{x}_j): \times$$ $$ \exp \left( -\frac{1}{2l_P^2} \sum_{j=1}^n ((\vec{x}_j-\vec{y}-\frac{1}{n}\sum_{j=1}^n \vec{x}_j)^2) \right)\underline{f}\left(\vec{y}+\frac{1}{n}\sum_{j=1}^n \vec{x}_j\right)d^{3n}\arrs{\vec{x}}{n}d^n\arrs{t}{n}. $$ Reading from the right to the left, we see multiplication by $$ \left((\vec{y},\arrs{\vec{x}}{n})\mapsto \exp \left( -\frac{1}{2l_P^2} \sum_{j=1}^n ((\vec{x}_j-\vec{y}-\frac{1}{n}\sum_{j=1}^n \vec{x}_j)^2) \right)\right)\in\Mlt{3n+3}, $$ (augmented) action of $:\prod_{j=1}^n \uwhat{\widetilde{\phi}}^R_0:\in\mathcal{L}(\DSS{}{3n},\DSMlt{}{3})$, multiplication by $$ \left((t,\arrs{t}{n})\mapsto \exp \left( -\frac{1}{2l_P^2} \sum_{j=1}^n ((t_j-t)^2) \right)s(t)\right) \in\SRR{n+1}, $$ and application of a numerical distribution to an element of $\DSS{}{n+3}$. \par To see that the interaction is admissible, proceed as in the previous example. \end{proof} \begin{exmp}[Not an example: the star product] Very popular in physics literature (see e.g. \cite{NekrasovNCQFT}) and initially suggested for the DFR spacetime \cite{DFR}, interaction terms, based on the replacement of the pointwise product with a non-commutative product is \emph{not} admissible. Indeed, it would require $$ \kappa_{*}(\arrs{\tau}{n}\arrs{\vec{r}}{n})\sim \exp\left(-2\mathrm{i} \sum_{i<j}(\tau_i,\vec{r}_i)^{\mu}Q_{\mu\nu}(\tau_j,\vec{r}_j)^{\nu} \right), $$ where $(\tau_i,\vec{r}_i)^{\mu}$ stands for the $\mu$th component of the four-dimensional vector $(\tau_i,\vec{r}_i)$, $Q_{\mu\nu}$ is the commutator of the coordinates (we use the notation of \cite{DFR}) and the Einstein summation rule is assumed. A notable property of $\kappa_{*}$ is that it does not decay at large displacements $\vec{r}_j$ and for this reason can not lead to smooth kernels $F_{l,l'}$ (note that in the examples above, the interaction kernels are essentialy Fourier transforms of $\kappa$). Physically it means that the interaction does not become negligible at large distances. For this reason the adiabatic cut-off in Definition \ref{def:HpQFT/HI} is not enough to regularize the infrared divergences of the theory. In fact, in \cite{BahnsWick} it was already noted that the infrared divergences are present already before the adiabatic limit is taken. For rigorous treatment one should introduce further $\vec{r}_i$ adiabatic cut-off functions already in $\kappa$. The adiabatic limit of such theories is more involved and is not considered here. \end{exmp} To conclude, we see that Definition \ref{def:HpQFT/AdmInt} allows translational-invariant interaction of type (\ref{eq:HpQFT/AdmInt/hI-kappa}) with $\kappa$ fast decaying and having restricted singularities. \subsubsection{Admissible Hamiltonian} \begin{Def}\label{def:HpQFT/HI} Let $h_{I}$ be a fixed admissible interaction. 
Then for each $\widetilde{\lambda}\in\SRR{4}$ we define the \emph{interaction Hamiltonian} $H_I^{[g\widetilde{\lambda}]}[\Psi]\in\widecheck{\mathcal{L}}(\mathcal{D}_{\mathcal{S}},\DSS{}{1})$ by setting $$ H_I^{[g\widetilde{\lambda}]}=g\widehat{\DiagM{1}}\widehat{\circ{\extD{4}{1}}}(h_I\circ \widehat{\Fourier{3}{+}})[\Psi\otimes \widetilde{\lambda}],\forall \Psi\in\mathcal{D}_{\mathcal{S}}.$$ \end{Def} \begin{rmk} The symbolic form of (\ref{def:HpQFT/HI}) is $$ \underline{H_I^{[g\widetilde{\lambda}]}}(t)=\int \lambda(t,\vec{x})\underline{h_I}\symbDistArgs{\vec{x}}{t}d^3\vec{x}, t\in\RR{}, $$ where $$ \lambda(t,\vec{x})=\int{\widetilde{\lambda}(\vec{k},t)e^{i\vec{k}\cdot\vec{x}}}d^{3}\vec{x}. $$ By direct computation $$ \underline{H_I^{[g\widetilde{\lambda}]\dag}}(t)=\underline{H_I^{[g\widetilde{\lambda}]}}(t) $$ whenever $$ \lambda(t,\vec{x})\in\RR{}, \quad \forall t\in\RR{}, \forall \vec{x}\in\RR{3}. $$ Naturally, we interpret $H_I$ as the interaction representation of the interaction part of the Hamiltonian and $\lambda$ as the adiabatic cut-off function. \end{rmk} \subsubsection{Evolution and scattering operators} \begin{prop}\label{prop:HpQFT/US} Let $h_I$ be fixed interaction density, $\widetilde{\lambda}\in\SRR{4}$ and $H_I^{[g\widetilde{\lambda}]}$ be the corresponding interaction Hamiltonian (see Definition \ref{def:HpQFT/HI}). Then There is unique $U^{[g\widetilde{\lambda}]}\in\widecheck{\mathcal{L}}(\mathcal{D}_{\mathcal{S}}{},\DSMlt{}{2})\formalPS{g}$ such that \begin{equation} \uwhat{U}^{[g\widetilde{\lambda}]}(t,t)=\mathbb{1}_{\LL(\mathcal{D}_{\mathcal{S}})}, \label{eq:prop:HpQFT/US/U(0)} \end{equation} \begin{equation} \label{eq:prop:HpQFT/US/Urec-dif} \partial_{t_2}\uwhat{U}^{[g\widetilde{\lambda}]}(t_2,t_1)=-i\uwhat{H_{I}}^{[g\widetilde{\lambda}]}(t_2)\uwhat{U}^{[g\widetilde{\lambda}]}(t_2,t_1). \end{equation} The operator $U^{[g\widetilde{\lambda}]}$ can be found recurrently from \begin{equation} \uwhat{U}^{[g\widetilde{\lambda}]}(t_2,t_1)=\mathbb{1}_{\LL(\mathcal{D}_{\mathcal{S}})}-i\int_{t_1}^{t_2}\uwhat{H_I}^{[g\widetilde{\lambda}]}(t)\uwhat{U}^{[g\widetilde{\lambda}]}(t,t_1)dt, \label{eq:prop:HpQFT/US/Urec} \end{equation} or explicitly \begin{equation}\label{eq:prop:HpQFT/US/Uexpl} \uwhat{U}^{[g\widetilde{\lambda}]}(t,t_0)= \end{equation} $$ \sum_{n=0}^{\infty}(-i)^n \int_{t_0}^{t}dt_1\int_{t_0}^{t_1}dt_2\cdots\int_{t_0}^{t_{n-1}}dt_n \prod_{j=1}^{n}\uwhat{H_I}^{[g\widetilde{\lambda}]}(t_i). $$ \end{prop} \begin{proof} To prove the theorem we need only to decipher the symbolic notation of Subsection \ref{Framework/Symbolic} (generalized in Notation \ref{notn:DS/symb-opval}) and use the Wick Theorem of Remark \ref{rmk:DS/2Qgen/Wick} to present the result as a second quantization. For completeness we present all the translations from the symbolic language. \par First, we rewrite (\ref{eq:prop:HpQFT/US/Urec}) explicitly\footnote{We use the integration operator $J$ defined in Subsection \ref{Framework/exmp}.}: \begin{equation} U^{[g\widetilde{\lambda}]}=\mathbb{1}_{\Mlt{2}}-i \left(\extDH{3}{2}\left(\widehat{J}\circ\widehat{\DiagM{1}}\right)\right)\circ \extDH{1,2}{2,3}H_I^{[g\widetilde{\lambda}]}\otimes_{\bullet} U^{[g\widetilde{\lambda}]}, \label{eq:prop:proof:HpQFT/US/Urec-expl} \end{equation} where $$\mathbb{1}_{\Mlt{2}}\in\Mlt{2}=\mathcal{L}(\mathcal{D}_{\mathcal{S}}^{0},\DSMlt{0}{2}),$$ $$ \mathbb{1}_{\Mlt{2}}(t,t')=1, \,\forall t,t'\in\RR{}. 
$$ Since $H_I^{[g\widetilde{\lambda}]}$ is proportional to $g$, (\ref{eq:prop:proof:HpQFT/US/Urec-expl}) gives a well-defined expression of higher orders of $U^{g\widetilde{\lambda}}$ via its lower orders, and the zeroth order is given by $$ U^{[g\widetilde{\lambda}]}_{g=0}=\mathbb{1}_{\Mlt{2}}. $$ Thus (\ref{eq:prop:proof:HpQFT/US/Urec-expl}) can be solved order by order. By (\ref{eq:Framework/exmp/Int-dif}) we see that (\ref{eq:prop:HpQFT/US/U(0)}-\ref{eq:prop:HpQFT/US/Urec}) is equivalent to (\ref{eq:prop:HpQFT/US/Urec}). \par Finally, we secondary quantize and formally iterate (\ref{eq:prop:proof:HpQFT/US/Urec-expl}) to get $$ \widehat{U}^{g\widetilde{\lambda}}=\sum_{i=0}^n(-i)^n \left( \left(\extDH{3}{2}\left(J\circ\widehat{\DiagM{1}}\right)\right)\circ \extDH{1,2}{2,3}\widehat{H_I}^{[g\widetilde{\lambda}]}\right)^{\circ n}\circ \mathbb{1}_{\Mlt{2}}, $$ which is one of the possible forms of (\ref{eq:prop:HpQFT/US/Uexpl}). \end{proof} \begin{=>}\label{=>:HpQFT/Uprop} For $U^{[g\widetilde{\lambda}]}$ as in proposition above, assuming in addition that $\extD{1}{1}\Fourier{3}{+}\widetilde{\lambda}$ is real-valued, \begin{enumerate} \item For any $t,t_0\in\RR{}$ \begin{equation}\label{eq:=>:HpQFT/Uprop/Udag} \uwhat{U}^{[g\widetilde{\lambda}]}(t,t_0)^{\dag}=\uwhat{U}^{[g\widetilde{\lambda}]}(t_0,t); \end{equation} \item For any $t_0,t_1,t_2\in\RR{}$: \begin{equation}\label{eq:=>:HpQFT/Uprop/Ucompose} \uwhat{U}^{[g\widetilde{\lambda}]}(t_2,t_1)\uwhat{U}^{[g\widetilde{\lambda}]}(t_1,t_0)=\uwhat{U}^{[g\widetilde{\lambda}]}(t_2,t_0) \end{equation} \item There are well-defined limits $U_{\mathrm{R}},U_{\mathrm{A}}\in\widecheck{\mathcal{L}}(\mathcal{D}_{\mathcal{S}}{},\DSS{}{1})$ and $S\in\widecheck{\mathcal{L}}(\mathcal{D}_{\mathcal{S}})$ defined by $$\uwhat{U_{\mathrm{R}}}^{[g\widetilde{\lambda}]}(t)=\lim_{t_0\rightarrow -\infty}\uwhat{U}^{[g\widetilde{\lambda}]}(t,t_0)$$ $$\uwhat{U_{\mathrm{A}}}^{[g\widetilde{\lambda}]}(t)=\lim_{t_0\rightarrow +\infty}\uwhat{U}^{[g\widetilde{\lambda}]}(t_0,t)$$ $$\uwhat{S}^{[g\widetilde{\lambda}]}=\lim_{t\rightarrow +\infty,t_0 \rightarrow -\infty }\uwhat{U}^{[g\widetilde{\lambda}]}(t_0,t);$$ Moreover, relations (\ref{eq:=>:HpQFT/Uprop/Udag}-\ref{eq:=>:HpQFT/Uprop/Ucompose}) remain correct then one or more of the timestamps goes to $\pm\infty$. \end{enumerate} \end{=>} \begin{proof} For simplicity we use the symbolic notation. \par For the first statement follows from (\ref{eq:prop:HpQFT/US/Uexpl}) and Remark \ref{rmk:DS/LDS/conj-product-gen}. \par For the second statement it is more convenient to use the recurrent relation (\ref{eq:prop:HpQFT/US/Urec}) and proceed inductively. The zeroth order is trivial. Assuming that (\ref{eq:=>:HpQFT/Uprop/Ucompose}) is valid up to $g^n$ we have $$\left( \uwhat{U}^{[g\widetilde{\lambda}]}(t_2,t_1)\uwhat{U}^{[g\widetilde{\lambda}]}(t_1,t_0)\right)_{g^n} = $$ $$ \left(\mathbb{1}_{\LL(\mathcal{D}_{\mathcal{S}})}-i\int_{t_1}^{t_2}\uwhat{H_I}^{[g\widetilde{\lambda}]}(t)\uwhat{U}^{[g\widetilde{\lambda}]}(t,t_1)\uwhat{U}^{[g\widetilde{\lambda}]}(t_1,t_0)dt\right)_{g^n}= $$ $$ \left( \mathbb{1}_{\LL(\mathcal{D}_{\mathcal{S}})}-i\int_{t_1}^{t_2}\uwhat{H_I}^{[g\widetilde{\lambda}]}(t)\uwhat{U}^{[g\widetilde{\lambda}]}(t,t_0)\right)_{g^n}dt=\left( \uwhat{U}^{[g\widetilde{\lambda}]}(t_2,t_0)\right)_{g^n}, $$ where we have used the fact that $H_I$ is proportional to $g$, so we can use the statement (\ref{eq:=>:HpQFT/Uprop/Ucompose}) for lower orders. \par Finally, the last statement follows from (\ref{Framework/exmp/IntLim}). 
\end{proof} \begin{rmk} In particular, we see that $\uwhat{U}^{[g\widetilde{\lambda}]}$,$\uwhat{U_{\mathrm{A}}}^{[g\widetilde{\lambda}]}$, $\uwhat{U_{\mathrm{R}}}^{[g\widetilde{\lambda}]}$ and $S^{[g\widetilde{\lambda}]}$ are valued in the "unitary" operators in the sense that $$ \uwhat{U}^{[g\widetilde{\lambda}]}(t,t')\uwhat{U}^{[g\widetilde{\lambda}]}(t,t')^{\dag}=\uwhat{U}^{[g\widetilde{\lambda}]}(t,t')^{\dag}\uwhat{U}^{[g\widetilde{\lambda}]}(t,t')=\mathbb{1}_{\LL(\mathcal{D}_{\mathcal{S}})}. $$ \end{rmk} From the above we see that $\uwhat{U}^{[g\widetilde{\lambda}]}_{\mathrm{R}}$ is an analogue of $U$ from the previous subsection. Similarly, $\uwhat{S}$ may be interpreted as the scattering operator. \subsubsection{Interacting fields} To finish the perturbation quantum field theory construction we need to define the interacting quantum field and its momentum similarly to (\ref{eq:HpQFT/AdmTh/IntPhi}-\ref{eq:HpQFT/AdmTh/IntPi}) \begin{prop}\label{prop:HpQFT/QF} There are unique operator-valued distributions $\phi^R,\pi^R\in\widecheck{\mathcal{L}}(\DSS{}{3},\DSS{}{1})$ such that $$ \uwhat{\phi}^{\mathrm{R}[g\widetilde{\lambda}]}\symbDistArgs{\vec{x}}{t}=\uwhat{U_{\mathrm{R}}}^{[g\widetilde{\lambda}]}(t)\uwhat{\phi^{\mathrm{R}}_0}\symbDistArgs{\vec{x}}{t}\uwhat{U_{\mathrm{R}}}^{[g\widetilde{\lambda}]}(t)^{\dag}, $$ $$ \uwhat{\pi}^{\mathrm{R}[g\widetilde{\lambda}]}\symbDistArgs{\vec{x}}{t}=\uwhat{U_{\mathrm{R}}}^{[g\widetilde{\lambda}]}(t)\uwhat{\pi^{\mathrm{R}}_0}\symbDistArgs{\vec{x}}{t}\uwhat{U_{\mathrm{R}}}^{[g\widetilde{\lambda}]}(t)^{\dag}. $$ They satisfy $$ [\uwhat{\phi}^{\mathrm{R}[g\widetilde{\lambda}]}\symbDistArgs{\vec{x}}{t},\uwhat{\phi}^{\mathrm{R}[g\widetilde{\lambda}]}\symbDistArgs{\vec{x}'}{t}]=0, $$ $$ [\uwhat{\pi}^{\mathrm{R}[g\widetilde{\lambda}]}\symbDistArgs{\vec{x}}{t},\uwhat{\pi}^{[\mathrm{R}g\widetilde{\lambda}]}\symbDistArgs{\vec{x}'}{t}]=0, $$ $$ [\uwhat{\pi}^{\mathrm{R}[g\widetilde{\lambda}]}\symbDistArgs{\vec{x}}{t},\uwhat{\phi}^{\mathrm{R}[g\widetilde{\lambda}]}(\vec{x}'|t)]=\mathrm{i}(2\pi)^3\delta^{(3)}(\vec{x}-\vec{x}')\mathbb{1}_{\LL(\mathcal{D}_{\mathcal{S}})}, $$ $$ \partial_{t}\uwhat{\phi}^{\mathrm{R}[g\widetilde{\lambda}]}\symbDistArgs{\vec{x}}{t}=-\mathrm{i}[\uwhat{H_I}^{[g\widetilde{\lambda}]}(t),\uwhat{\phi}^{\mathrm{R}[g\widetilde{\lambda}]}\symbDistArgs{\vec{x}}{t}] $$ $$ \partial_{t}\uwhat{\pi}^{\mathrm{R}[g\widetilde{\lambda}]}\symbDistArgs{\vec{x}}{t}=-\mathrm{i}[\uwhat{H_I}^{[g\widetilde{\lambda}]}(t),\uwhat{\pi}^{\mathrm{R}[g\widetilde{\lambda}]}\symbDistArgs{\vec{x}}{t}]. $$ \end{prop} \begin{proof} Direct application of Proposition \ref{prop:HpQFT/US}, Corollary \ref{=>:HpQFT/Uprop} and translations of the symbolic expressions \end{proof} \par For future use, we introduce several alternative forms of the quantum field and its conjugated momentum. \par First we define the unrestricted form, $$ \phi^{[g\widetilde{\lambda}]}=\intD{1}{1}\phi^{\mathrm{R}[g\widetilde{\lambda}]} $$ $$ \pi^{[g\widetilde{\lambda}]}=\intD{1}{1}\pi^{\mathrm{R}[g\widetilde{\lambda}]} $$ to define the unrestricted versions. \par Instead of $\phi$ and $\pi$ it is more convenient to work with $$ \uwhat{\phi_{\pm}}^{\mathrm{R}[g\widetilde{\lambda}]}\symbDistArgs{\vec{x}}{t}=\uwhat{U_{\mathrm{R}}}^{[g\widetilde{\lambda}]}(t)\uwhat{\phi^{\mathrm{R}}_{0,\pm}}\symbDistArgs{\vec{x}}{t}\uwhat{U_{\mathrm{R}}}^{[g\widetilde{\lambda}]}(t)^{\dag}. 
$$ Since $\phi^{\mathrm{R}}_{0,\pm}$ is a linear combination of $\phi^{\mathrm{R}}_{0}$ and $\pi^{\mathrm{R}}_{0}$, these objects are well-defined already by Proposition \ref{prop:HpQFT/QF}. \begin{rmk} The decomposition of the interacting field into $$\widetilde{\phi}=\widetilde{\phi}_{+}+\widetilde{\phi}_{-}$$ have no physical sense. Neither do they represent positive and negative energy parts, no they annihilate the vacuum acting from the right and left. It is introduced for the convenience of formulations only. \end{rmk} \subsubsection{Correlators} Our next goal is to give a precise form to (\ref{eq:HpQFT/WightmanDef}-\ref{eq:HpQFT/GreenUnNormDef}). We are mostly interested in the Green functions due to their role in physical computations of the scattering amplitudes, but for technical reasons it is more convenient to work with the Wightman function because they can be restricted to fixed values of the timestamps. It is also convenient to work with the correlators of the partial fields $\widetilde{\phi}_{\alpha}$ rather than with the field itself. So, we start by defining $$ \PWightmanRnag{n}{\alpha}{g\widetilde{\lambda}} \in \mathcal{L}(\SRR{3n},\Mlt{n})\formalPS{g}, $$ $$ \PWightmanRnag{n}{\alpha}{g\widetilde{\lambda}}\symbDistArgs{\arrs{\vec{p}}{n}}{\arrs{t}{n}}=\left(\Omega_0,S\prod_{j=1}^n \widetilde{\phi}^{R}_{\alpha}\symbDistArgs{\vec{p}_j} {t_j}\Omega_0\right). $$ Then we normalize it by introducing \begin{equation} \WightmanRnag{n}{\alpha}{g\widetilde{\lambda}}\symbDistArgs{\arrs{\vec{p}}{n}}{\arrs{t}{n}}=\frac{\PWightmanRnag{n}{\alpha}{g\widetilde{\lambda}}\symbDistArgs{\arrs{\vec{p}}{n} }{\arrs{t}{n}}}{\PWightmanRng{0}{g\widetilde{\lambda}}}. \label{eq:HpQFT/Wightman-norm} \end{equation} Finally, we define the unrestricted $$ \WightmanUnag{n}{\alpha}{g\widetilde{\lambda}}=\intD{1,\ldots,n}{1,5,\ldots,4n-3}\WightmanRnag{n}{\alpha}{g\widetilde{\lambda}} $$ and time-ordered \begin{equation}\label{eq:HpQFT/GreenUnag} \GreenUnag{n}{\alpha}{g\widetilde{\lambda}}((t_j,\vec{p}_j)_{j=1,\ldots,n})= \end{equation} $$\sum_{\sigma\in\symmgr{n}}\left(\prod_{j=1}^{n-1}\theta(t_{\sigma_{j}}-t_{\sigma_{j+1}})\right)\WightmanRnagP{n}{\alpha}{g\widetilde{\lambda}}{\sigma}\symbDistArgs{\arrsP{\vec{p}}{n}{\sigma}}{\arrsP{t}{n}{\sigma}}. $$ versions. \begin{rmk} For completeness, we present a form of (\ref{eq:HpQFT/GreenUnag}) not involving the symbolic notation. $$ \GreenUnag{n}{\alpha}{g\widetilde{\lambda}}((t_j,\vec{p}_j)_{j=1,\ldots,n})=$$ $$\sum_{\sigma\in\symmze{4}{}}L_{\permK{\sigma}{4}}\circ I_{\RR{}}\circ\extD{n}{1}\theta^{\otimes n} \circ L_{K_n^{-1}} \circ {\DiagM{n}} \circ\extD{1,5,\ldots,4n-3}{n+1,n+2,\ldots,2n}\WightmanRnagP{n}{\alpha}{g\widetilde{\lambda}}{\sigma}, $$ where $$ K_n: \RR{n}\rightarrow \RR{n} $$ is $$ K_n(\arrs{t}{n})=\left((t_j-t_{j+1)_{j=1\ldots n-1}},t_n\right). $$ \end{rmk} Putting it all back together one may verify that $$ \GreenUnag{n}{\alpha}{g\widetilde{\lambda}}((t_j,\vec{p}_j)_{j=1\ldots n})=\sum_{\sigma\in\symmgr{n}}\prod_{j=1}^{n-1}\theta(t_{\sigma_{j}}-t_{\sigma_{j+1}}) \frac{(\Omega_0,S\prod_{j=1}^n \widetilde{\phi}^{R}_{\alpha_{\sigma_j}}(t_{\sigma_j},\vec{p}_{\sigma_j})\Omega_0) }{(\Omega_0,S\Omega_0)}= $$ $$ \frac{\left(\Omega_0,S\timeorder{\prod_{j=1}^n \widetilde{\phi}^{R}_{\alpha_{\sigma_j}}(t_{\sigma_j},\vec{p}_{\sigma_j})}\Omega_0\right) }{(\Omega_0,S\Omega_0)}. 
$$ The original Wightman and Green functions (\ref{eq:HpQFT/AdmTh/WightDef}-\ref{eq:HpQFT/AdmTh/GreenDef}) are $$ \WightmanUng{n}{g\widetilde{\lambda}}((t_j,\vec{p}_j)_{j=1\ldots n})=\sum_{\alpha_j\in\{+,-\},j=1,\ldots,n}\WightmanUnag{n}{\alpha}{g\widetilde{\lambda}}((t_j,\vec{p}_j)_{j=1\ldots n}). $$ $$ \GreenUng{n}{g\widetilde{\lambda}}((t_j,\vec{p}_j)_{j=1\ldots n})=\sum_{\alpha_j\in\{+,-\},j=1,\ldots,n}\GreenUnag{n}{\alpha}{g\widetilde{\lambda}}((t_j,\vec{p}_j)_{j=1\ldots n}). $$ \subsubsection{Summary and outline of the subsection} Comparing Proposition \ref{prop:HpQFT/US}, Corollary \ref{=>:HpQFT/Uprop} and Proposition \ref{prop:HpQFT/QF} with the exposition in Subsection \ref{HpQFT/AdmTh} we conclude that we have constructed the non-local Hamiltonian quantum field theory. So it is natural to fix the following. \begin{Def} By a massive \emph{admissible Hamiltonian perturbation Quantum Field} (\emph{admissible HpQFT}) Theory we mean a pair $(\omega_0,h_I)$, where $\omega_0\in\SRR{3}$ is a massive dispersion relation\footnote{See Definition \ref{def:dispRel}} and $h_I\in\widecheck{\mathcal{L}}(\SRR{3},\Mlt{1})$ is an admissible interaction density. \end{Def} All admissible HpQFT are non-local in the physical sense (see Remark \ref{rmk:Intro/TwoLocalities} by construction. The algebraic locality is broken in general. Still, some residual locality persists as explained in the following three remarks. \begin{rmk}\label{rmk:HpQFT/loc-phys} The Hamiltonian can always be presented in the time-localized form (\ref{eq:HpQFT/AdmTh/hI-timeloc-pos}). So, from the physical point of view, the interaction is always "local in time". In view of this, the main advantage of the Hamiltonian approach with respect to the Lagrangian approaches\footnote{Recall that the Lagrangian approaches fail to construct a unitary scattering operator when theory is not local in time \cite{NekrasovNCQFT}} seems to be lost. In fact, as was shown in \cite{PhDThesis}, HpQFT is equivalent to a Lagrangian non-local perturbation theory. The corresponding Lagrangian is the Legendre transform of the Hamiltonian with the time-localized interaction (\ref{eq:HpQFT/AdmTh/hI-timeloc-pos}) \footnote{Note that (\ref{eq:HpQFT/AdmTh/hI-timeloc-pos}) can be rewritten in terms of $\phi_0$ and $\pi_0$ instead of $\phi_{0\pm}$.}. The Lagrangian theories of this form, however, are more complicated and, in particular, require regularization of the UV divergences despite the spatial non-locality. \end{rmk} \begin{rmk}\label{rmk:HpQFT/loc-alg} From the algebraic (AQFT) point of view, the commutation relations (\ref{eq:HpQFT/AdmTh/canonicalQuant}) make the theory look like a local one if the time is fixed. Unlike the physical locality in the previous remark, this property is inherent to the theory: if the quantum fields $\phi$ are related to local measurements in any sense, (\ref{eq:HpQFT/AdmTh/canonicalQuant}) has measurable consequences. In particular, it selects a special reference frame, drastically breaking the Lorentz invariance. It is worth noting, that this issue can be bypassed by giving up the direct interpretation of the interacting field. For example, in \cite{HamQFT} it was shown that the Lorentz-covariant approach based on the Yang-Feldman equation can be formulated in the Hamiltonian style by appropriate reference frame-dependent redefinition of the fields. \end{rmk} One may note that these two residual localities are in some sense complementary. 
\begin{rmk} Looking with attention at (\ref{eq:prop:HpQFT/US/Uexpl}) and (\ref{eq:=>:HpQFT/Uprop/Ucompose}), one may conclude that all the admissible HpQFT possess some residual causality property, similar to the causality of local theories \cite{EG73,pAQFT}. Formalization of this property is postponed to \cite{PAP1} where it plays an important role. For now we underline that this property also holds in one particular reference frame only. \end{rmk} \begin{rmk}\label{rmk:HpQFT/higher-g} We have assumed that the interaction Hamiltonian has precisely the first order in terms of the interaction constant $g$. However, up to some technical change, all construction remains valid for more general Hamiltonians, presented by formal power series in terms of $g$, provided that the zeroth order vanishes. This is necessary to allow the (finite) counterterms for renormalization. We ignore this detail for simplicity. \end{rmk} \begin{rmk}\label{rmk:HpQFT/Hint} It may look unnatural to assume that the interacting Hamiltonian is fixed already in the interaction picture. In particular, this is the cause of the residual locality observed in Remark \ref{rmk:HpQFT/loc-phys}. In \cite{PhDThesis} an alternative way to construct non-local Hamiltonian theories, based on setting by analogy with (\ref{eq:HpQFT/AdmTh/HI}) $$ \uwhat{h_I}\symbDistArgs{\vec{x}}{t}=\uwhat{U_{\rm{R}}}^{[g\widetilde{\lambda}]}(t)\uwhat{h_{int}}\symbDistArgs{\vec{x}}{t}\uwhat{U_{\rm{R}}}^{\dag[g\widetilde{\lambda}]}(t), $$ where $$ \uwhat{h_{int}}\symbDistArgs{\vec{x}}{t}= $$ $$\int A\left(\arrs{\tau}{n},\frac{\sum_{j=1}^n\vec{x}_j}{n}-\vec{x}\right){\mathpunct{:} }\prod_{j=1}^n\underline{\widehat{\phi}}(t+\tau_j,\vec{x}_j){\mathpunct{:}} \times $$ $$ B\left(\left(\vec{x}_j-\vec{x}\right)_{j=1,\ldots,n},\arrs{\tau}{n}\right) d^3\vec{x}d^{3n}\arrs{\vec{r}}{n}d^{n}\arrs{\tau}{n}, $$ and $U^{[g\widetilde{\lambda}]}$, $\phi$ are defined as in Proposition \ref{prop:HpQFT/QF}. It is easy to see that this operation is well-defined in the sense of formal series\footnote{Note that in the leading zeroth with respect to $g$ the interacting quantum field can be replaced by the free one, and the higher orders are always expressed via lower ones.}, Propositions \ref{prop:HpQFT/US}, \ref{prop:HpQFT/QF} and Corollary \ref{=>:HpQFT/Uprop} still hold, and $$ \uwhat{h_{I}} \symbDistArgs{\vec{x}}{t}= $$ $$\int A\left(\arrs{\tau}{n},\frac{\sum_{j=1}^n\vec{x}_j}{n}-\vec{x}\right){\mathpunct{:} }\prod_{j=1}^n\uwhat{U}^{[g\widetilde{\lambda}]}(t,t+\tau_j)\underline{\widehat{\phi}_0}(t+\tau_j,\vec{x}_j)\uwhat{U}^{[g\widetilde{\lambda}]}(t+\tau_j,t){\mathpunct{:}} \times $$ $$ B\left(\left(\vec{x}_j-\vec{x}\right)_{j=1,\ldots,n},\arrs{\tau}{n}\right) d^3\vec{x}d^{3n}\arrs{\vec{r}}{n}d^{n}\arrs{\tau}{n}. $$ With minimal technical effort, one may show that the correctly written analogue of the equations of motion (\ref{eq:HpQFT/AdmTh/HamEv}) holds. At the same time one may note that the class of effective theories defined in this way is essentially the same. \end{rmk} \subsection{Feynman rules with adiabatic cut-off} \label{HpQFT/FR} In this subsection we present a technical version of the Feynman rules for computation of the restricted Wightman function in the presence of the adiabatic cut-off which we use in the proof of Theorem \ref{thm:MainW}. More practical Feynman rules are presented in Appendix. We assume a massive admissible HpQFT to be fixed. 
\par \subsubsection{Preparation} We start by the following observation based on construction of $\PWightmanRnag{n}{\alpha}{g\widetilde{\lambda}}$ and $U^{[g\widetilde{\lambda}]}$. \begin{rmk}\label{rmk:HpQFT/V-expansion} For any $n\in\mathbb{N}_0$, $\widetilde{\lambda}\in\SRR{4}$ and $f\in\SRR{3n}$ \begin{equation}\label{eq:HpQFT/FR/V-expansion} \PWightmanRnag{n}{\alpha}{g\widetilde{\lambda}}[f]= \sum_{V=0}^{\infty}g^V \PWightmanRnaV{n}{\alpha}{V}[f\otimes\widetilde{\lambda}^{\otimes V}], \end{equation} where $\PWightmanRnaV{n}{\alpha}{V}[f\otimes\widetilde{\lambda}^{\otimes V}]\in\mathcal{L}(\SRR{3n+4V},\SRR{n})$ is given by $$ \PWightmanRnaV{n}{\alpha}{V}=\sum_{\substack{v_j\in\mathbb{N}_0, j=0,\ldots, n\\ v_0+\ldots+v_n=V}} \PWightmanRnaVarr{n}{\alpha}{v} $$ and $\PWightmanRnaVarr{n}{\alpha}{v}\in \mathcal{L}(\SRR{3n+4(\sum_{j=0}^{n}v_j)},\SRR{n})$ is defined by $$ \PWightmanRnaVarr{n}{\alpha}{v}\symbDistArgs{\arrs{\vec{k}}{n},(\tau_{i,j},\vec{q}_{i,j})_{i=0\ldots n, j=1,\ldots, v_i}}{\arrs{t}{n}}= $$ $$ \intUSimplex{+}{v_0}\symbDistArgs{(\tau_{0,j})_{j=1,\ldots,v_0}}{t_1}\left(\prod_{i=1}^{n-1}\intSimplex{v_i}\symbDistArgs{(\tau_{i,j})_{j=1,\ldots,v_i}}{t_{i+1},t_i}\right)\intUSimplex{-}{v_n}\symbDistArgs{(\tau_{n,j})_{j=1,\ldots,v_n}}{t_n} $$ $$ (\Omega_0,\prod_{j=1}^{v_0}\uwhat{\widetilde{h}}_{I}\symbDistArgs{\vec{k}_{0,j}}{\tau_{0,j}} \prod_{i=1}^{n}\uwhat{\phi^{R}_{0\alpha_i}}\symbDistArgs{\vec{k}_i}{\vec{t}_i}\prod_{j=1}^{v_i}\uwhat{\widetilde{h}}_{I}\symbDistArgs{\vec{q}_{i,j}}{\tau_{i,j}} \Omega_0). $$ \end{rmk} \begin{rmk}\label{rmk:HpQFT/FR/combFactor} Here we use the symbolic presentation of the integration operators. For eventual convenience we present the corresponding symbols as discontinuous functions\footnote{Here we identify the Heaviside function with the corresponding discontinuous function. This is safe as long as no derivatives are involved (see also Remark \ref{rmk:Framework/symb/disc-restriction}).} $$ \intSimplex{v}\symbDistArgs{\arrs{\tau}{v}}{t,t'}=(-1)^{v\mathrm{sign} (t'-t)}\theta((\tau_n-t)\mathrm{sign}(t'-t))\theta((t'-\tau_1)\mathrm{sign}(t'-t))\times $$ $$ \prod_{j=1}^{v-1}\theta(\mathrm{sign} (t'-t)(\tau_j-\tau_{j+1}))= T_{1,\ldots,v;<}^{\arrs{\tau}{v};-\mathrm{sign} (t'-t)} \prod_{j=1}^{v} J\symbDistArgs{\tau_j}{t,t'}, $$ where $T_{1,\ldots,v;<}^{\arrs{\tau}{v};+}\in\{0,1\}$ is defined below. In the same way, we have $$ \intUSimplex{\pm 1}{v}\symbDistArgs{\arrs{\tau}{v}}{t}=T_{1,\ldots,v;<}^{\arrs{\tau}{v};-1}J_{\pm}\symbDistArgs{\tau_j}{t} $$ \end{rmk} \begin{notn} For any partially ordered set $(A,\prec)$, $v\in\mathbb{N}$, any choice of $a_1,\ldots,a_v\in A$ and $\eta=\pm 1$ we set the \emph{ordering indicator function} $\RR{v}\rightarrow \{0,1\}$ $$ (\arrs{\tau}{v})\mapsto T_{a_1,\ldots,a_v;\prec}^{\arrs{\tau}{v};+1}. $$ so that $T_{a_1,\ldots,a_v;\prec}^{\arrs{\tau}{v};+1}$ if \begin{equation}\label{eq:HpQFT/FR/ord-agr} a_i\prec a_j \Rightarrow \eta(\tau_i-\tau_j)<0, \, i,j=1,\ldots, v. 
\end{equation} \end{notn} The combinatoric interpretation of the factors above leads to the following results: \begin{lem}\label{lem:HpQFT/FR/independent} Let $(A',\prec')$ and $(A'',\prec'')$ be partially ordered sets and define $$(A,\prec)=(A',\prec')\sqcup (A'',\prec'').$$ Then for any $n',n''\in\mathbb{N}_0$, any $\arrs{a'}{n'}\in A'$, $\arrs{a''}{n''}\in A''$, $\arrs{\tau'}{n'},\arrs{\tau''}{n''}\in\RR{}$ and $\eta=\pm$ one has $$ T_{\arrs{a'}{n'},\arrs{a''}{n''};\prec}^{\arrs{\tau'}{n'},\arrs{\tau''}{n''};\eta}=T_{\arrs{a'}{n'};\prec'}^{\arrs{\tau'}{n'};\eta}T_{\arrs{a''}{n''};\prec''}^{\arrs{\tau''}{n''};\eta} $$ \end{lem} \begin{lem}\label{lem:HpQFT/FR/incompatible} Let $(A,\prec)$ be a partially ordered set. Let $\overline{\prec}$ be the set of all linear extensions of $\prec$. Take r any , $v\in\mathbb{N}$, $a_1,\ldots,a_v\in A$ , $\eta=\pm 1$ and $\arrs{\tau}{v}$ such that $$i\neq j\Rightarrow \tau_{i}\neq \tau_j, \, i,j=1,\ldots,v.$ Then $$ T_{\arrs{a}{v};\prec}^{\arrs{\tau}{v};\eta}=\sum_{\prec'\in\overline{\prec}}T_{\arrs{a}{v};\prec'}^{\arrs{\tau}{v};\eta}. $$ \end{lem} \begin{proof}[Proof of Lemmas \ref{lem:HpQFT/FR/independent} and \ref{lem:HpQFT/FR/incompatible}] The statements are similar to standard results in combinatorics and elementary probability. In the first case the ordering conditions (\ref{eq:HpQFT/FR/ord-agr}) for the two disjoint subsets are independent, hence the indicator function factors. \par In the second case, we note that if all $\arrs{\tau}{v}$ are different, then there is precisely one total $\prec$ order satisfying (\ref{eq:HpQFT/FR/ord-agr}). At the same time, if $\prec$ is a partial order, then it satisfies the condition (\ref{eq:HpQFT/FR/ord-agr}) if and only if one of its linear extensions does. Thus Lemma \ref{lem:HpQFT/FR/incompatible} follows by summing over the incompatible possibilities \end{proof} \subsubsection{Feynman rules for unrenormalized correlators} The next step is to apply Wick's theorem in the form of Proposition \ref{prop:DS/2Q/Wick} and Remark \ref{rmk:DS/2Qgen/Wick} to get the Feynman rules. \begin{prop}\label{prop:HpQFT/FR/PW-total} The distribution $$\PWightmanRnaVarr{n}{\alpha}{v}\symbDistArgs{\arrs{\vec{k}}{n},(\tau_{i,j},\vec{q}_{i,j})_{i=0\ldots n, j=1,\ldots, v_i}}{\arrs{t}{n}}$$ can be computed by the following Feynman rules\footnote{See Subsection \ref{Intro/prelim} for the terminology.}. \begin{itemize} \item The relevant are the Feynman graphs $\Gamma$ with $V=\sum_{j=0}^{n}$ enumerated internal vertices $\bullet_{(i,j)}$ with $i=0,\ldots,n$ and $j=1,\ldots,v_i$, and $n$ enumerated external vertices $\circ_i$, $i=1,\ldots, n$; \item For convenience we fix total order $\prec_{\Gamma}$ generated by the following relations: \begin{equation}\label{eq:HpQFT/FR/ord1} \circ_{i+1}\prec_{\Gamma} \circ_{i}, \quad i=1,\ldots n, j=1,\ldots,v_i \end{equation} \begin{equation}\label{eq:HpQFT/FR/ord2} \bullet_{(i,j)}\prec_{\Gamma} \circ_{i}, \quad i=1,\ldots n, j=1,\ldots,v_i \end{equation} \begin{equation}\label{eq:HpQFT/FR/ord3} \circ_{i+1}\prec_{\Gamma} \bullet_{(i,j)}, \quad i=1,\ldots n-1, j=1,\ldots,v_i \end{equation} \begin{equation}\label{eq:HpQFT/FR/ord4} \bullet_{(i+1,j)}\prec_{\Gamma} \bullet_{(i,j)}, \quad i=0,\ldots n, j=1,\ldots,v_i-1. 
\end{equation} \item To each line assign a free momentum flux $\vec{p}\in\RR{3}$ directed from the earlier to the later end; \item The factors corresponding to each element are shown in Table \ref{tab:FRules-TM}; \item The overall factor is $$ C_{\Gamma}= \intUSimplex{+}{v_0}\symbDistArgs{(\tau_{0,j})_{j=1,\ldots,v_0}}{t_1}\left(\prod_{i=1}^{n-1}\intSimplex{v_i}\symbDistArgs{(\tau_{i,j})_{j=1,\ldots,v_i}}{t_{i+1},t_i}\right)\times$$ $$\intUSimplex{-}{v_n}\symbDistArgs{(\tau_{n,j})_{j=1,\ldots,v_n}}{t_n}= $$ $$ \left(\prod_{j=1}^{v_0}J_{+}\symbDistArgs{\tau_{0,j}}{t_0}\right) \left(\prod_{i=1}^{n-1}\prod_{j=1}^{v_i}J\symbDistArgs{\tau_{i,j}}{t_{i+1},t_i}\right) \left(\prod_{j=1}^{v_n}J_{-}\symbDistArgs{\tau_{n,j}}{t_n}\right)\times $$ $$ T_{1,\ldots,v_0;<}^{(\tau_{0,j})_{j=1,\ldots,v_0};-} \left(\prod_{i=1}^{n-1}T_{1, \ldots, v_i;<}^{(\tau_{i,j})_{j=1,\ldots,v_i};-\mathrm{sign} (t_{i}-t_{i+1})}\right) T_{1,\ldots,v_n;<}^{(\tau_{n,j})_{j=1,\ldots,v_n};-} % $$ \end{itemize} \end{prop} \begin{proof} The proof goes as usual with using the Wick Theorem of Proposition \ref{prop:DS/2Q/Wick} and Remark \ref{rmk:DS/2Qgen/Wick}. Each external vertex is an insertion of $\phi_0$, each internal is an insertion of $h_I$ and each line depicts a contraction as explained in Remark \ref{rmk:DS/2Q/contractions} and the ordering is just the right-to-left ordering of operators in the composition. The internal vertices are labeled with two numbers $(i,j)$ \end{proof} The ordering (\ref{eq:HpQFT/FR/ord1}-\ref{eq:HpQFT/FR/ord4}) completely fixed by labeling of the vertices is convenient for estimations in the next section. \begin{rmk}\label{rmk:HpQFT/FR/symmze} More standard object would be to deal with symmetrization of $\PWightmanRnaV{n}{\alpha}{V}$, which can be introduced as a formal variational derivative \begin{equation} g^{-V}\frac{\delta^W\PWightmanRnag{n}{\alpha}{g\widetilde{\lambda}}{\arrs{\vec{k}}{n}}{\arrs{t}{n}}}{\delta \widetilde{\lambda}(\tau_1,\vec{q}_1)\cdots \delta \widetilde{\lambda}(\tau_V,\vec{q}_V)}=\left(\PWightmanRnaV{n}{\alpha}{V}\circ \extD{1,\ldots,3n}{1,\ldots,3n}\symmze{4}{}\right)\symbDistArgs{\arrs{\vec{k}}{n},(\tau_{i},\vec{q}_{i})_{i=1\ldots n}}{\arrs{t}{n}}. \label{eq:HpQFT/FR/symmze} \end{equation} As $\symmze{4}{}[\lambda^{\otimes V}]=\lambda^{\otimes V}$, the right-hand sides of (\ref{eq:HpQFT/FR/symmze}) can be safely placed instead of $\PWightmanRnaV{n}{\alpha}{V}$ in (\ref{eq:HpQFT/FR/V-expansion}). Similarly we can perform partial symmetrization of $\PWightmanRnaVarr{n}{\alpha}{v}$ by introducing $\PWightmanRnaVarrS{n}{\alpha}{v}\in\mathcal{L}(\SRR{3n+4V}),\SRR{n})$, \begin{equation} \PWightmanRnaVarrS{n}{\alpha}{v}=\left(\PWightmanRnaVarr{n}{\alpha}{v}\circ \extD{1,\ldots,3n}{1,\ldots,3n}\bigotimes_{i=0}^{n}\symmze{4}{v_i} \right). 
\label{eq:HpQFT/FR/symmzeP} \end{equation} \end{rmk} \begin{prop}\label{prop:HpQFT/FR/PW-partial} $\PWightmanRnaVarrS{n}{\alpha}{v}(\arrs{\vec{k}}{n},\symbDistArgs{(\tau_{i,j},\vec{q}_{i,j})_{i=0\ldots n, j=1,\ldots, v_i}}{\arrs{t}{n}}$ can be computed according to the following Feynman rules: \begin{itemize} \item The relevant are all partially ordered Feynman graphs $(\Gamma,\prec_{\Gamma})$ with $V$ internal vertices $\bullet_{j}$, $j=1,\ldots, v$ and $n$ external vertices $\circ_j$, $j=1,\ldots,n$; \item The order is constrained by relations (\ref{eq:HpQFT/FR/ord1}-\ref{eq:HpQFT/FR/ord3}); \item To each line a momentum flux, directed from earlier to later vertex is assigned; \item Factors corresponding to elements of the diagrams are presented in Table \ref{tab:FRules-TM}; \item The overall factor is $$ C'_{\Gamma,\prec_{\Gamma}}=\left(\prod_{j=1}^{v_0}J_{+}\symbDistArgs{\tau_{0,j}}{t_0}\right) \left(\prod_{i=1}^{n-1}\prod_{j=1}^{v_i}J\symbDistArgs{\tau_{i,j}}{t_{i+1},t_i}\right) \left(\prod_{j=1}^{v_n}J_{-}\symbDistArgs{\tau_{n,j}}{t_n}\right)\times $$ $$ T_{(\bullet_{(0,j) })_{j=1,\ldots,v_0};\prec_\Gamma}^{(\tau_{(i,0),i=1,\ldots,dj})_{j=1,\ldots,v_0};+} \left(\prod_{i=1}^{n-1}T_{(\bullet_{(i,j)})_{j=1,\ldots,v_i};\prec_{\Gamma}}^{(\tau_{i,j})_{j=1,\ldots,v_i};\mathrm{sign} (t_{i}-t_{i+1})}\right) T_{(\bullet_{(n,j) })_{j=1,\ldots,v_n};\prec_\Gamma}^{(\tau_{n,j})_{j=1,\ldots,v_n};-}. % $$ \end{itemize} \end{prop} \begin{proof} Directly symmetrizing the Feynman rules of Proposition \ref{prop:HpQFT/FR/PW-total} with respect to permutations of the internal vertices within sets $\{(\bullet_{(i,j)})|j=1,\ldots,v_i\}$ for each $i=0,\ldots,n$ leads to the Feynman rules as above, but with completely ordered graphs. Indeed, the complete ordering is just a way to describe the permutation restoring of the vertices restoring (\ref{eq:HpQFT/FR/ord4}). \par The factors assigned to elements of the diagrams depend on the relative order of the vertices connected by a line (since the vertex factor distinguishes the incoming and outgoing lines), so we may consider partially ordered graphs as in the statement, but the overall factor will be $$ \sum_{\prec'\mathrm{i} \overline{\prec_{\Gamma}}}C'_{\Gamma,\prec'}=C'_{\Gamma,\prec_{\Gamma}}, $$ where we used notation and statement of Lemma \ref{lem:HpQFT/FR/incompatible} for the last step. 
\end{proof} \begin{table} \centering \begin{tabular}{|m{60pt}|m{80pt}|m{160pt}|} \hline \textbf{Element} & \textbf{Figure} & \textbf{Factor} \\ \hline Internal vertex & \begin{tikzpicture} \node[] at (-1.2,1.6) {$\vec{p}'_1$}; \draw[black, thick] (-1.2,1.2)--(0,0); \draw[black, thick,<-] (-1.2,1.4)--(-0.8,1.0); \node[] at (0.5,1.4) {$\vec{p}_1$}; \draw[black, thick] (1,1)--(0,0); \draw[black, thick,->] (1,1.4)--(0.1,0.6); % \node[] at (0.6,-1.2) {$\vec{p}_l$}; \draw[black, thick] (1,-1)--(0,0); \draw[black, thick,->] (0.9,-1.2)--(0.3,-0.6); \filldraw[black] (0,0) circle (2pt) node[anchor=west] {$(i,j)$}; \node[] at (-1,0.2) {\Huge $\vdots$}; \node[] at (1,0.2) {\Huge $\vdots$}; % \node[] at (-1,-1.2) {$\vec{p}'_{l'}$}; \draw[black, thick] (-1.5,-1)--(0,0); \draw[black, thick,<-] (-1.4,-1.2)--(-0.8,-0.8); \end{tikzpicture} & $$F_{(l',l)}(\arrs{\vec{p}'}{l'},\arrs{\vec{p}}{l})\cdot$$ $$ \delta^{(3)}\left(\sum_{k=1}^{l'}\vec{p}'_k-\sum_{k=1}^{l}\vec{p}_k-\vec{q}_{i,j}\right)\cdot $$ $$ \exp\left(\mathrm{i} \left(\sum_{j=1}^{l'}\omega_0(\vec{p}'_j)-\sum_{j=1}^{l}\omega_0(\vec{p}_j)\right)\tau_{i,j}\right) $$ \\ \hline Internal line & \begin{tikzpicture} \node[] at (1,0.8) {$\vec{p}$}; \filldraw[black] (0,0) circle (2pt); \filldraw[black] (2,0) circle (2pt); \draw[black, thick] (0,0)--(2,0); \draw[black, thick,->] (0.5,0.5)--(1.5,0.5); \end{tikzpicture} & $$ 1$$ \\ \hline External vertex & \begin{tikzpicture} \node[] at (0.7,0.2) {$\vec{p}$}; \node[] at (0.8,-1.3) {$i$}; \draw[black, thick] (0,0)--(1,-1); \draw[black, thick,<-] (0.2,0.3)--(.8,-.2); \filldraw[black,thick, fill=white] (1,-1) circle (2pt); % \node[] at (1.7,-0.4) {or}; % \begin{scope}[shift={(0.5,0)}] \node[] at (1.9,0.2) {$\vec{p}$}; \node[] at (1.6,-1.3) {$i$}; \draw[black, thick] (1.5,-1)--(2.5,0); \draw[black, thick,<-] (1.7,-0.3)--(2.3,.2); \filldraw[black,thick, fill=white] (1.5,-1) circle (2pt); \end{scope} \end{tikzpicture} & \begin{center} $\delta_{\alpha_i,+}\delta^{(3)}(\vec{k}_i-\vec{p})\frac{e^{\mathrm{i}\omega_0(\vec{p})t_i}}{\sqrt{(2\pi)^32 \omega_0(\vec{p})}}$ or $\delta_{\alpha_i,-}\delta^{(3)}(\vec{k}_i+\vec{p})\frac{e^{-\mathrm{i}\omega_0(\vec{p})t_i}}{\sqrt{(2\pi)^32 \omega_0(\vec{p})}}$ \end{center} \\ \hline \end{tabular} \caption{Ordered Feynman rules with adiabatic cut-off, time-momentum presentation} \label{tab:FRules-TM} \end{table} \subsubsection{Normalized correlator} The next step is to identify the denominator of (\ref{eq:HpQFT/Wightman-norm}). For it we note that by Corollary \ref{=>:HpQFT/Uprop} we have $$ \left(\Omega_0,\uwhat{S}^{[g\widetilde{\lambda}]}\Omega_0\right)= $$ $$ \left(\Omega_0,\uwhat{U_{\mathrm{R}}}^{[g\widetilde{\lambda}]}(t_1)\prod_{j=1}^{n-1}\uwhat{U}^{[g\widetilde{\lambda}]}(t_j,t_j+1)\uwhat{U_{\mathrm{R}}}^{[g\widetilde{\lambda}]}(t_n)\Omega_0\right),\, \forall n\in\mathbb{N}, \forall \arrs{t}{n}\in\RR{}. $$ This leads to the following rather strange version of the Feynman rules. 
\begin{prop}\label{prop:HpQFT/FR/denom} For $n\in\mathbb{N}_0$ and $\arrs{t}{n}\in\RR{}$ one has $$ \left(\Omega_0,\uwhat{S}^{[g\widetilde{\lambda}]}\Omega_0\right)=\sum_{\arrsM{v}{0}{n}\in\mathbb{N}_0} g^{\sum_{j=0}^n v_j} N_{n;\arrsM{v}{0}{n}}[\widetilde{\lambda}^{\otimes(\sum_{j=0}^n v_j)}](\arrs{t}{n}), $$ where $N_{n;\arrsM{v}{0}{n}}\in\mathcal{L}\left(\SRR{4(\sum_{j=0}^n v_j)},\SRR{n}\right)$ can be computed according to the following Feynman rules: \begin{itemize} \item The relevant graphs are all partially ordered Feynman graphs $(\Gamma,\prec_{\Gamma})$ with $V$ internal vertices $\bullet_{(i,j)}$, $i=0,\ldots,n$, $j=1,\ldots, v_i$; \item The order is constrained by relations $$ \bullet_{(i+1,j)}\prec \bullet_{(i,j')}, \quad i=0,\ldots,n-1,\,j=1,\ldots,v_{i+1},\,j'=1,\ldots,v_i; $$ \item To each line a momentum flux is assigned, directed from the earlier to the later vertex; \item Factors corresponding to elements of the diagrams are presented in Table \ref{tab:FRules-TM}; \item The overall factor is $$ C'_{\Gamma,\prec_{\Gamma}}=\left(\prod_{j=1}^{v_0}J_{+}\symbDistArgs{\tau_{0,j}}{t_0}\right) \left(\prod_{i=1}^{n-1}\prod_{j=1}^{v_i}J\symbDistArgs{\tau_{i,j}}{t_{i+1},t_i}\right) \left(\prod_{j=1}^{v_n}J_{-}\symbDistArgs{\tau_{n,j}}{t_n}\right)\times $$ $$ T_{(\bullet_{(0,j) })_{j=1,\ldots,v_0};\prec_\Gamma}^{(\tau_{0,j})_{j=1,\ldots,v_0};+} \left(\prod_{i=1}^{n-1}T_{(\bullet_{(i,j)})_{j=1,\ldots,v_i};\prec_{\Gamma}}^{(\tau_{i,j})_{j=1,\ldots,v_i};\mathrm{sign} (t_{i}-t_{i+1})}\right) T_{(\bullet_{(n,j) })_{j=1,\ldots,v_n};\prec_\Gamma}^{(\tau_{n,j})_{j=1,\ldots,v_n};-}. % $$ \end{itemize} \end{prop} \begin{proof} Repeat all the constructions of Remark \ref{rmk:HpQFT/V-expansion} and Propositions \ref{prop:HpQFT/FR/PW-total}, \ref{prop:HpQFT/FR/PW-partial} without insertions of $\widetilde{\phi}_{0}$ (hence with no external vertices in the Feynman graphs). \end{proof} \begin{=>}\label{=>:HpQFT/FR/W-total} For any $n\in\mathbb{N}_0$, $\widetilde{\lambda}\in\SRR{4}$, $\arrsM{v}{0}{n}$ and $f\in\SRR{3n}$ define $\WightmanRnaVarr{n}{\alpha}{v}\in \mathcal{L}(\SRR{3n+4(\sum_{j=0}^{n}v_j)},\SRR{n})$ as the distributions computed by the Feynman rules of Proposition \ref{prop:HpQFT/FR/PW-total} but disregarding all graphs with vacuum components. Then \begin{equation} \sum_{\arrsM{v'}{0}{n}}\sum_{\arrsM{v''}{0}{n}\in\mathbb{N}_0}g^{\sum_{j=0}^{n}(v'_j+v''_j)} \WightmanRnaVarr{n}{\alpha}{v'}[f\otimes \widetilde{\lambda}^{\otimes (\sum_{j=0}^{n}v'_j)}]\cdot N_{n;\arrsM{v''}{0}{n}}[\widetilde{\lambda}^{\otimes (\sum_{j=0}^{n}v''_j)}]= \label{eq:=>:HpQFT/FR/W-total} \end{equation} $$\sum_{\arrsM{v}{0}{n}}g^{\sum_{j=0}^{n}v_j}\PWightmanRnaVarr{n}{\alpha}{v}[f\otimes \widetilde{\lambda}^{\otimes (\sum_{j=0}^{n}v_j)}],\,\forall f\in\SRR{3n}, $$ where $\cdot$ denotes the pointwise multiplication in $\SRR{n}$. \end{=>} \begin{proof} Repeating the argument of Remark \ref{rmk:HpQFT/FR/symmze}, we replace $\WightmanRnaVarr{n}{\alpha}{v'}$ with its symmetrization computed by the Feynman rules of Proposition \ref{prop:HpQFT/FR/PW-partial} with the graphs with vacuum components omitted. Now substitute that symmetrization together with the diagrammatic expansions of Propositions \ref{prop:HpQFT/FR/PW-partial} and \ref{prop:HpQFT/FR/denom} into the left-hand side of (\ref{eq:=>:HpQFT/FR/W-total}) and expand out the product. Each term of the resulting sum can be bijectively identified with a Feynman graph of Proposition \ref{prop:HpQFT/FR/PW-partial}.
Indeed, the factor coming from $N_{n;\arrsM{v''}{0}{n}}$ corresponds to the vacuum components and $\WightmanRnaVarr{n}{\alpha}{v'}$ to the non-vacuum components. The factors corresponding to the elements of a graph are in all cases given by Table \ref{tab:FRules-TM}, so, apart from the overall factors, the contribution of the whole graph is the product of the contributions of the vacuum components and of the rest of the graph. To factorize the overall factor we use Lemma \ref{lem:HpQFT/FR/independent} and the trivial observation that the vacuum components are always disjoint from the rest of the graph\footnote{Recall that the partial orders we consider are always generated by orientation of the edges, so disjoint graph components correspond to disjoint partially ordered subsets.}. Thus the formal sums are equal. \end{proof} \begin{=>} For any $n\in\mathbb{N}_0$, $\widetilde{\lambda}\in\SRR{4}$ and $f\in\SRR{3n}$ $$\WightmanRnag{n}{\alpha}{g\widetilde{\lambda}}[f]= \sum_{V=0}^{\infty}g^V \WightmanRnaV{n}{\alpha}{V}[f\otimes\widetilde{\lambda}^{\otimes V}], $$ where $\WightmanRnaV{n}{\alpha}{V}\in\mathcal{L}(\SRR{3n+4V},\SRR{n})$ is given by $$ \WightmanRnaV{n}{\alpha}{V}=\sum_{\substack{v_j\in\mathbb{N}_0, j=0,\ldots, n\\ v_0+\ldots+v_n=V}} \WightmanRnaVarr{n}{\alpha}{v}, $$ where $\WightmanRnaVarr{n}{\alpha}{v}$ are as in Corollary \ref{=>:HpQFT/FR/W-total}. \end{=>} \begin{notn} The set of all graphs relevant for the computation of $\WightmanRnaVarr{n}{\alpha}{v}$ by the Feynman rules of Corollary \ref{=>:HpQFT/FR/W-total} is denoted by $\mathfrak{G}_{\arrs{\alpha}{n}}^{\arrsM{v}{0}{n}}$. For $\Gamma\in\mathfrak{G}_{\arrs{\alpha}{n}}^{\arrsM{v}{0}{n}}$ we denote by $\WightmanRnaVarrG{n}{\alpha}{v}{\Gamma}$ the corresponding contribution. \end{notn} \section{Weak adiabatic limit} \subsection{Formulation} In this subsection we formulate the main result of the paper and show how it implies the existence of the weak adiabatic limit. The rest of the section is devoted to the proof of this statement, presented as a series of technical lemmas. \par Basically, we want to claim that the Green function restricts to a smooth function of the momentum and (off-shell) energy defects, and thus can be evaluated when all the defects are zero. There is a small complication caused by singularities far from the origin in the space of energy-momentum defects. To avoid it, we bound the support of the adiabatic cut-off function $\widetilde{\lambda}$ in the energy variable conjugate to time, which removes the unwanted singularities. As we work in the position representation for the time coordinate, this can be done by a convolution. For $h\in \SRR{1}$ we define $ \mathcal{M}_{[h]}^{V,n}\in\mathcal{L}(\SRR{4(V+n)}) $ by $$ \mathcal{M}_{[h]}^{V,n}[f](\arrs{(\tau,\vec{q})}{V},\arrs{(t,\vec{p})}{n})=\int{\left(\prod_{j=1}^{V}h(\tau'_j)\right)f((\tau_j-\tau'_j,\vec{q}_j)_{j=1\ldots V},\arrs{(t,\vec{p})}{n})}d^V\arrs{\tau'}{V}. $$ As explained below, in Corollary \ref{=>:MainW} and Remark \ref{rmk:MainW}, this trick does not affect the physical adiabatic limit. We also set $$ Q_{\Delta}=\{h\in\SRR{1}\,|\, \mathrm{supp}\, {\Fourier{1}{-}}^{-1}[h] \subset [-\Delta,\Delta] \}. $$ \begin{thm}\label{thm:MainW} Fix a massive HpQFT with admissible interaction. Take $n, V\in\mathbb{N}_0$, $\arrs{\alpha}{n}\in \{+,-\}$, $\arrsM{v}{0}{n}\in\mathbb{N}_0$ with $\sum_{i=0}^{n}v_i=V$, $\Gamma\in \mathfrak{G}_{\arrs{\alpha}{n}}^{\arrsM{v}{0}{n}}$ and $h\in Q_{\frac{M}{V+1}}$.
Then the distribution $\WightmanRnaVarrG{n}{\alpha}{v}{\Gamma} \circ \mathcal{M}_{[h]}^{V,n}$ restricts to $$\restrictH{3n+1,\ldots,3n+4V}{1,\ldots,4V}\left(\WightmanRnaVarrG{n}{\alpha}{v}{\Gamma}\circ \mathcal{M}_{[h]}^{V,n}\right)\in\mathcal{L}(\SRR{3n},\SMlt{4V}{n}).$$ \end{thm} \begin{=>}\label{=>:MainW} Fix a massive admissible HpQFT. Take $\widetilde{\lambda}\in\SRR{4}$ such that $$ \int \widetilde{\lambda}(\vec{k},t)d^3{\vec{k}}=1 \,\forall t\in\RR{}, $$ and for any $L>0$ define $\widetilde{\lambda}_{L}\in \SRR{4}$ by $$ \widetilde{\lambda}_L(\vec{k},t)=L^3\widetilde{\lambda}(\vec{k}L,L^{-1}t). $$ Then for any $n\in\mathbb{N}$ and any $f\in\SRR{4n}$ \begin{equation} \lim_{L\rightarrow \infty}\GreenUnag{n}{\alpha}{g\widetilde{\lambda}_L}[f]=\GreenUnagAd{n}{\alpha}{g}[f], \label{eq:Main:Green} \end{equation} \begin{equation} \lim_{L\rightarrow \infty}\WightmanRnag{n}{\alpha}{g\widetilde{\lambda}_L}[f]=\WightmanRnagAd{n}{\alpha}{g}[f], \label{eq:Main:WightR} \end{equation} \begin{equation} \lim_{L\rightarrow \infty}\WightmanUnag{n}{\alpha}{g\widetilde{\lambda}_L}[f]=\WightmanUnagAd{n}{\alpha}{g}[f], \label{eq:Main:WightU} \end{equation} where $\GreenUnagAd{n}{\alpha}{g},\WightmanUnagAd{n}{\alpha}{g}\in\SpRR{4n}\formalPS{g}$ and $\WightmanRnagAd{n}{\alpha}{g}\in\mathcal{L}(\SRR{3n},\MltT(n))\formalPS{g}$ do not depend on $\widetilde{\lambda}$. \end{=>} \begin{proof} Fix $V\in \mathbb{N}_0$ and $f\in\SRR{3n}$. Take $h\in Q_{\frac{M}{V+1}}$ as in Theorem \ref{thm:MainW} and for each $L>0$ define $$\widetilde{\lambda}^{(1)}_L=\mathcal{M}_{[h]}^{1,0}[\widetilde{\lambda}_L],$$ $$\widetilde{\lambda}^{(2)}_L=\widetilde{\lambda}_L-\widetilde{\lambda}^{(1)}_L.$$ It is easy to see that $\widetilde{\lambda}^{(2)}_L\underset{L\rightarrow \infty}{\longrightarrow} 0$ in $\SRR{4}$, so $$ \lim_{L\rightarrow \infty}\WightmanRnaV{n}{\alpha}{V}\left[f\otimes \left(\widetilde{\lambda}^{\otimes V}_L-\widetilde{\lambda}^{(1)\otimes V}_L\right)\right]=0. $$ Now by Theorem \ref{thm:MainW} we have: $$ \WightmanRnaV{n}{\alpha}{V}[f\otimes\widetilde{\lambda}^{(1)\otimes V}_L](\arrs{t}{n})= (\WightmanRnaV{n}{\alpha}{V}\circ \mathcal{M}_{[h]}^{V,n})[f\otimes\widetilde{\lambda}^{\otimes V}_L](\arrs{t}{n})= $$ $$ \int{\left(\prod_{j=1}^{V} \widetilde{\lambda}_L(\vec{k}_j,\tau_j)\right)\restrictH{3n+1,\ldots,3n+4V}{1,\ldots,4V}(\WightmanRnaV{n}{\alpha}{V}\circ \mathcal{M}_{[h]}^{V,n})[f]\left(\left(\tau_j,\vec{k}_j\right)_{j=1\ldots V},\arrs{t}{n}\right)\prod_{j=1}^{V}d^{3}\vec{k}_j\,d\tau_j. }$$ Considering now $\widetilde{\lambda}_L$ as a distribution and noting that by the assumptions of the Corollary it converges weakly to $\delta(\vec{k})$, we get that the limit $L\rightarrow \infty$ exists and is equal to $$ \int{\restrictH{3n+1,\ldots,3n+4V}{1,\ldots,4V}(\WightmanRnaV{n}{\alpha}{V}\circ \mathcal{M}_{[h]}^{V,n})[f]\left(\left(\tau_j,\vec{0}\right)_{j=1\ldots V},\arrs{t}{n}\right)d^V\arrs{\tau}{V}. } $$ This proves (\ref{eq:Main:WightR}). Analogously, one can show (\ref{eq:Main:Green}) and (\ref{eq:Main:WightU}). \end{proof} \begin{rmk}\label{rmk:MainW} Theorem \ref{thm:MainW} says more than Corollary \ref{=>:MainW}. There is no need to take the uniform scaling limit in all directions. Instead, we can take any sequence or family of functions $$ \widetilde{\lambda}_L(t,\vec{k})\underset{L\rightarrow\infty}{\longrightarrow} \delta(\vec{k})\, \mathrm{in}\, \SpRR{4}, $$ provided that $$ \widetilde{\lambda}_L-\mathcal{M}_{[h]}^{1,0}\widetilde{\lambda}_L \underset{L\rightarrow\infty}{\longrightarrow} 0\, \mathrm{in}\, \SRR{4}.
$$ Extending the proof above we may also conclude the spatial and temporal limits can be taken in any order. Finally, we can put to the adiabatic limit each vertex of the graph individually. As a result, the Feynman rules in the adiabatic limit can be derived from Subsection \ref{HpQFT/FR} (some variants are listed in Appendix). This nice behavior differs drastically from the strong adiabatic limit \cite{EG73}. \end{rmk} \subsection{Applying Feynman rules} $n, V\in\mathbb{N}_0$, $\arrs{\alpha}{n}\in \{+,-\}$, $\arrs{v}{n}\in\mathbb{N}_0$, $\Gamma\in \mathfrak{G}_{\arrs{\alpha}{n}}^{\arrsM{v}{0}{n}}$, $f\in \SRR{3V}$ and $\arrs{t}{n}\in\RR{}$ by Corollary \ref{=>:HpQFT/FR/W-total} we have: \begin{equation}\label{eq:Main/FormalW} \WightmanRnaVarrG{n}{\alpha}{v}{\Gamma}[f](\arrs{t}{n})= \int \intUSimplex{+}{v_0}\symbDistArgs{(\tau_{0,j})_{j=1,\ldots,v_0}}{t_1}\times \end{equation} $$\left(\prod_{i=1}^{n-1}\intSimplex{v_i}\symbDistArgs{(\tau_{i,j})_{j=1,\ldots,v_i}}{t_{i+1},t_i}\right) \intUSimplex{-}{v_n}\symbDistArgs{(\tau_{n,j})_{j=1,\ldots,v_n}}{t_n}$$ $$ F_{\Gamma}(\arrs{\vec{k}}{I_{\Gamma}},\arrs{\vec{p}}{n}) f\left(\arrs{\vec{p}}{n},\left(\tau_{i,j},\vec{q}_{i,j}^{\Gamma}(\arrs{\vec{k}}{I_{\Gamma}},\arrs{\vec{p}}{n})\right)_{i=1\ldots n,j=1\ldots v_n}\right)\times $$ $$ \exp\left(\mathrm{i}\sum_{i=0}^{n}\sum_{j=1}^{v_i}\Delta_{i,j}^{\Gamma}(\arrs{\vec{k}}{I_{\Gamma}},\arrs{\vec{p}}{n})\tau_{i,j}\right) \left(\prod_{i=0}^{n} \prod_{j=1}^{v_i} d\tau_{i,j}\right)d^{3I_{\Gamma}}\arrs{\vec{k}}{I_{\Gamma}}d^{3n}\arrs{\vec{p}}{n}. $$ The rest of this subsection is devoted to the clarification of the notation used in (\ref{eq:Main/FormalW}). \par For each graph $\Gamma\in \mathfrak{G}_{\arrs{\alpha}{n}}^{\arrsM{v}{0}{n}}$ we set $I_{\Gamma}$ for the total number of internal (i.e. connecting to internal vertices) lines. Then $\arrs{\vec{k}}{I_{\Gamma}}\in\RR{3I_{\Gamma}}$ denotes the corresponding internal momenta. $\arrs{\vec{p}}{n}\in\RR{3n}$ are the $n$ external momenta. \par For a vertex marked by $(i,j)$ of a graph $\Gamma\in \mathfrak{G}_{\arrs{\alpha}{n};\arrs{t}{n}}$ we denote with $l_{i,j}^{\Gamma}$ and ${l'}_{i,j}^{\Gamma}$ the number of incoming and outgoing lines respectively. $$ \kappa^{\Gamma}_{i,j}(\arrs{\vec{k}}{I_{\Gamma}},\arrs{\vec{p}}{n})\in \RR{3(l_{i,j}^{\Gamma}+{l'}_{i,j}^{\Gamma})} $$ denotes the collection of the corresponding incoming/outgoing momenta as a function of the introduced above internal and external momenta. By construction for $i=0,\ldots,n$ and $j=1,\ldots,v_i$, $\kappa^{\Gamma}_{i,j}$ is a linear map. For convenience, we assume that the first $l'$ and the rest $l$ components are the momenta of the outgoing and incoming particles respectively. 
\par Finally, for every $\arrs{\vec{k}}{I_{\Gamma}}\in\RR{3I_{\Gamma}}$, every $\arrs{\vec{p}}{n}\in\RR{3n}$ and every vertex $(i,j)$ (where as always $i=0,\ldots,n$ and $j=1,\ldots,v_i$) we define the momentum and energy defects $$ \vec{q}_{i,j}^{\Gamma}\left(\arrs{\vec{k}}{I_{\Gamma}},\arrs{\vec{p}}{n}\right)= \sum_{r=1}^{{l'}_{i,j}^{\Gamma}} \left( \kappa^{\Gamma}_{i,j}(\arrs{\vec{k}}{I_{\Gamma}},\arrs{\vec{p}}{n})\right)_r -\sum_{r={l'}_{i,j}^{\Gamma}+1}^{{l}_{i,j}^{\Gamma}+{l'}_{i,j}^{\Gamma}}\left( \kappa^{\Gamma}_{i,j}(\arrs{\vec{k}}{I_{\Gamma}},\arrs{\vec{p}}{n})\right)_r, $$ $$\Delta_{i,j}^{\Gamma}\left(\arrs{\vec{k}}{I_{\Gamma}},\arrs{\vec{p}}{n}\right)= $$ $$ \sum_{r=1}^{{l'}_{i,j}^{\Gamma}} \omega_0\left(\left( \kappa^{\Gamma}_{i,j}(\arrs{\vec{k}}{I_{\Gamma}},\arrs{\vec{p}}{n})\right)_r\right) -\sum_{r={l'}_{i,j}^{\Gamma}+1}^{{l}_{i,j}^{\Gamma}+{l'}_{i,j}^{\Gamma}}\omega_0\left(\left( \kappa^{\Gamma}_{i,j}(\arrs{\vec{k}}{I_{\Gamma}},\arrs{\vec{p}}{n})\right)_r\right),$$ i.e. the sum of the (on-shell) energies (momenta) of all the outgoing particles minus the sum of the (on-shell) energies (momenta) of all the incoming particles. Here by $(\cdot)_r$ we mean the projection on the $r$th factor of $\RR{3({l}_{i,j}^{\Gamma}+{l'}_{i,j}^{\Gamma})}$ treated as $\left(\RR{3}\right)^{\times({l}_{i,j}^{\Gamma}+{l'}_{i,j}^{\Gamma})}$. \par All vertex factors are collected into one function $$ F_{\Gamma}(\arrs{\vec{k}}{I_{\Gamma}},\arrs{\vec{p}}{n})= \prod_{i=0}^{n} \prod_{j=1}^{v_i} F_{({l'}_{i,j}^{\Gamma},l_{i,j}^{\Gamma})}\left(\kappa^{\Gamma}_{i,j}(\arrs{\vec{k}}{I_{\Gamma}},\arrs{\vec{p}}{n})\right), $$ $$ \forall \arrs{\vec{k}}{I_{\Gamma}}\in\RR{3I_{\Gamma}}\, \forall \arrs{\vec{p}}{n}\in\RR{3n}. $$ \par \subsection{Spatial adiabatic limit} We analyze (\ref{eq:Main/FormalW}) part by part, starting from the integral over the momenta. \par The UV convergence of the momentum integrals is controlled by the vertex factors, as the following lemma shows. \begin{lem}\label{lem:Main/SpatialUV} Using the notation introduced above, for any graph $\Gamma\in \mathfrak{G}_{\arrs{\alpha}{n}}^{\arrsM{v}{0}{n}}$ one has $F_{\Gamma}\in \SRR{3(I_{\Gamma}+n)}$. \end{lem} \begin{proof} Since each $F_{(l',l)}\in\SRR{3(l+l')}$ and each $\kappa^{\Gamma}_{i,j}$ is a linear function, it is clear that $F_{\Gamma}$ is smooth. For the same reason each of its factors, considered as a function on $\RR{3(I_{\Gamma}+n)}$, is at least bounded together with all its partial derivatives. At the same time each of $\vec{k}_j$, $j=1,\ldots, I_{\Gamma}$, and each of $\vec{p}_j$, $j=1,\ldots,n$, appears as an argument of at least one of the factors. From this and the standard estimates of type (\ref{eq:xy<x}) one concludes that $F_{\Gamma}$, as well as all its partial derivatives, decays at infinity faster than the inverse of any polynomial. \end{proof} So, we see that the convergence of the integral over the internal momenta in (\ref{eq:Main/FormalW}) is completely controlled by the part independent of both the adiabatic cut-off (hidden in the test function $f$) and the timestamps $\tau_{i,j}$. \par For the infrared behavior we also need to effectively resolve all momentum conservation constraints, expressing the internal momenta via external momenta, momentum defects, and independent loop momenta. Formally this is expressed by the following lemma.
\begin{lem}\label{lem:Main/spatialIR} For any choice of $n,V\in\mathbb{N}$ and $\arrsM{v}{0}{n}\in \mathbb{N}_0^{n+1}$ with $\sum_{i=0}^n v_i=V$ and a graph $\Gamma\in \mathfrak{G}_{\arrs{\alpha}{n}}^{\arrsM{v}{0}{n}}$, the $\RR{3}$-valued linear functionals $$\vec{q}_{i,j}^{\Gamma}:\RR{3(I_{\Gamma}+n)}\rightarrow\RR{3},\quad i=0,\ldots,n, \quad j=1,\ldots,v_i$$ are linearly independent. \end{lem} This fact (although not in this form) is well known in quantum physics, but we still formally prove it for completeness. \begin{proof} Assume that there are $\alpha_{i,j}\in\mathbb{R}$, $i=0,\ldots,n$, $j=1,\ldots,v_i$ such that \begin{equation}\label{eq:Main/KappaIndependent} \sum_{i=0}^n\sum_{j=1}^{v_i}\alpha_{i,j}\vec{q}_{i,j}^{\Gamma}=0. \end{equation} For each internal vertex $(i,j)$ of the graph define $d_{\Gamma}(i,j)$, the shortest length of a path in $\Gamma$ connecting $(i,j)$ with an external vertex. Recall that all connected components of the relevant graphs contain some external vertices, so this function is well-defined. Thus we can prove that $\alpha_{i,j}=0$ for any vertex $(i,j)$ by induction in $d_{\Gamma}(i,j)$. \begin{itemize} \item Base of induction: if $d_{\Gamma}(i,j)=1$ then there is an external vertex connected with the internal vertex $(i,j)$. Let it be the $r$th external vertex. We evaluate (\ref{eq:Main/KappaIndependent}) by setting all the internal momenta and all the external momenta except $\vec{p}_r$ to zero. Then only $\vec{q}_{i,j}^{\Gamma}$ survives and we get $\alpha_{i,j}=0$. \item Induction step: assume that we already know that $\alpha_{i,j}=0$ for all vertices $(i,j)$ such that $d_{\Gamma}(i,j)\leq d_0$. Take a vertex $(i',j')$ such that $d_{\Gamma}(i',j')=d_0+1$. Then it is connected with a vertex, say, $(i,j)$ with $d_{\Gamma}(i,j)=d_0$. Assume that one of the connecting edges is marked by $r$. We evaluate (\ref{eq:Main/KappaIndependent}) setting all the external and internal momenta except $\vec{k}_r$ to zero. Then only $\vec{q}_{i,j}^{\Gamma}$ and $\vec{q}_{i',j'}^{\Gamma}$ survive, but we already know that $\alpha_{i,j}=0$. Thus $\alpha_{i',j'}=0$ too. \end{itemize} \end{proof} \begin{=>}\label{=>:Main/Spatial} The expression (\ref{eq:Main/FormalW}) may be rewritten as \begin{equation}\label{eq:Main/FormalWS}\WightmanRnaVarrG{n}{\alpha}{v}{\Gamma}[f](\arrs{t}{n})= \int \intUSimplex{+}{v_0}\symbDistArgs{(\tau_{0,j})_{j=1,\ldots,v_0}}{t_1}\left(\prod_{i=1}^{n-1}\intSimplex{v_i}\symbDistArgs{(\tau_{i,j})_{j=1,\ldots,v_i}}{t_{i+1},t_i}\right)\times \end{equation} $$ \intUSimplex{-}{v_n}\symbDistArgs{(\tau_{n,j})_{j=1,\ldots,v_n}}{t_n} $$ $$F^{\mathrm{S}}_{\Gamma}(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_n},\arrs{\vec{q}}{I_{\Gamma}+n-V})\times$$ $$ f\left(\arrs{\vec{p}_{\Gamma}(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_n},\arrs{\vec{q}}{I_{\Gamma}+n-V})}{n},\left(\tau_{i,j},\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_n}\right)\times $$ $$ \exp\left(\mathrm{i}\sum_{i'=0}^{n}\sum_{j'=1}^{v_{i'}}\Delta_{i',j'}^{\mathrm{S}\Gamma}\left(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V}\right)\tau_{i',j'}\right)\times $$ $$ \left(\prod_{i=0}^{n} \prod_{j=1}^{v_i} d\tau_{i,j}d\kappa_{i,j}\right)d^{3(I_{\Gamma}+n-V)}\arrs{\vec{q}}{I_{\Gamma}+n-V}. $$ Here $\vec{p}_{\Gamma}$ is a linear function and $F^{\mathrm{S}}_{\Gamma}$, $\Delta_{i,j}^{\mathrm{S}\Gamma}$ are the precompositions of $F_{\Gamma}$, $\Delta_{i,j}^{\Gamma}$ with an invertible linear transform. In particular, $F^{\mathrm{S}}_{\Gamma}\in\SRR{3(n+I_{\Gamma} )}$.
\end{=>} \begin{proof} By Lemma \ref{lem:Main/spatialIR} we can add $\RR{3}$-valued coordinate functionals $\vec{q}^{\Gamma}_i:\RR{3(n+I_{\Gamma})}\rightarrow\RR{3}$, $i=1,\ldots,n+I_{\Gamma}-V$, completing the family of functionals $\vec{q}_{i,j}^{\Gamma}$ to a basis. In these coordinates (\ref{eq:Main/FormalW}) takes the form (\ref{eq:Main/FormalWS}). The last statement is a reformulation of Lemma \ref{lem:Main/SpatialUV}. \end{proof} \subsection{Temporal adiabatic limit}\label{Main/Temp} Now we focus on the integrals over the timestamp positions, starting from the inner vertices. From (\ref{eq:Main/FormalWS}) we get \begin{equation}\label{eq:Main/FormalWS*h} \left(\WightmanRnaVarrG{n}{\alpha}{v}{\Gamma}\circ \mathcal{M}_{[h]}^{V,n}\right)[f](\arrs{t}{n})= \end{equation} $$ \int \intUSimplex{+}{v_0}\symbDistArgs{(\tau_{0,j})_{j=1,\ldots,v_0}}{t_1}\left(\prod_{i=1}^{n-1}\intSimplex{v_i}\symbDistArgs{(\tau_{i,j})_{j=1,\ldots,v_i}}{t_{i+1},t_i}\right)\times $$ $$ \intUSimplex{-}{v_n}\symbDistArgs{(\tau_{n,j})_{j=1,\ldots,v_n}}{t_n} $$ $$F^{\mathrm{S}}_{\Gamma}(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_n},\arrs{\vec{q}}{I_{\Gamma}+n-V})\times\prod_{i=0}^{n}\prod_{j=1}^{v_i}h(\tau_{i,j}-\tau'_{i,j})$$ $$ f\left(\arrs{\vec{p}_{\Gamma}(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_n},\arrs{\vec{q}}{I_{\Gamma}+n-V})}{n},\left(\tau'_{i,j},\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_n}\right)\times $$ $$ \exp\left(\mathrm{i}\sum_{i'=0}^{n}\sum_{j'=1}^{v_{i'}}\Delta_{i',j'}^{\mathrm{S}\Gamma}\left(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V}\right)\tau_{i',j'}\right)\times $$ $$ \left(\prod_{i=0}^{n} \prod_{j=1}^{v_i} d\tau_{i,j}d\tau'_{i,j}d\kappa_{i,j}\right)d^{3(I_{\Gamma}+n-V)}\arrs{\vec{q}}{I_{\Gamma}+n-V}. $$ Since all the integrals above are absolutely convergent, the integration can be performed in any order.
In this way we arrive at \begin{equation}\label{eq:Main/Temp-spec} \left(\WightTf{n}{\alpha}{\arrsM{v}{0}{n}}\circ \mathcal{M}_{[h]}^{V,n}\right)[f](\arrs{t}{n})= \sum_{\Gamma\in\mathfrak{G}_{\arrs{\alpha}{n}}^{\arrsM{v}{0}{n}}} \int \end{equation} $$F^{\mathrm{S}}_{\Gamma}(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_n},\arrs{\vec{q}}{I_{\Gamma}+n-V})\times$$ $$ O_{[h]\Gamma,0}\left(\left(\tau'_{0,j}\right)_{j=1\ldots v_{0}},\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V},t_{1} \right)\times $$ $$\prod_{i'=1}^{n-1} O_{[h]\Gamma,i'}\left(\left(\tau'_{i',j}\right)_{j=1\ldots v_{i'}},\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V},t_{i'},t_{i'+1} \right)\times $$ $$ O_{[h]\Gamma,n}\left(\left(\tau'_{n,j}\right)_{j=1\ldots v_{n}},\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V},t_{n} \right)\times $$ $$ f\left(\arrs{\vec{p}_{\Gamma}(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V})}{n},\left(\tau'_{i,j},\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_n}\right)\times $$ $$ \left(\prod_{i=0}^{n} \prod_{j=1}^{v_i} d\tau'_{i,j}d\kappa_{i,j}\right)d^{3(I_{\Gamma}+n-V)}\arrs{\vec{q}}{I_{\Gamma}+n-V}, $$ where for $i'=1,\ldots,n-1$ \begin{equation} \label{eq:Main/OGamma-Inner} O_{[h]\Gamma,i'}\left(\arrs{\tau'}{v_{i'}},\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V},t',t\right)= \end{equation} $$ \int_{t}^{t'}d\tau_{1}\int_{t}^{\tau_{1}}d\tau_{2}\cdots \int_{t}^{\tau_{v_{i'}-1}}d\tau_{v_{i'}}\, \exp\left(\mathrm{i}\sum_{j'=1}^{v_{i'}}\Delta_{i',j'}^{\mathrm{S}\Gamma}\left(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V}\right)\tau_{j'}\right) \prod_{j=1}^{v_{i'}}h(\tau_{j}-\tau'_{j}), $$ and \begin{equation} \label{eq:Main/OGamma-Pre} O_{[h]\Gamma,0}\left(\arrs{\tau'}{v_{0}},\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V},t\right)= \end{equation} $$ \int_{t}^{+\infty}d\tau_{1}\int_{t}^{\tau_{1}}d\tau_{2}\cdots \int_{t}^{\tau_{v_0-1}}d\tau_{v_0}\, \exp\left(\mathrm{i}\sum_{j'=1}^{v_{0}}\Delta_{0,j'}^{\mathrm{S}\Gamma}\left(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V}\right)\tau_{j'}\right) \prod_{j=1}^{v_{0}}h(\tau_{j}-\tau'_{j}), $$ \begin{equation} \label{eq:Main/OGamma-Post} O_{[h]\Gamma,n}\left(\arrs{\tau'}{v_{n}},\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V},t\right)= \end{equation} $$ \int_{-\infty}^{t}d\tau_{1}\int_{-\infty}^{\tau_{1}}d\tau_{2}\cdots \int_{-\infty}^{\tau_{v_n-1}}d\tau_{v_n}\, \exp\left(\mathrm{i}\sum_{j'=1}^{v_{n}}\Delta_{n,j'}^{\mathrm{S}\Gamma}\left(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V}\right)\tau_{j'}\right) \prod_{j=1}^{v_{n}}h(\tau_{j}-\tau'_{j}). $$ Note that here we use the direct functional interpretation of the distributions $\intSimplex{n}$ and $\intUSimplex{\pm}{n}$ rather than the combinatoric version of Remark \ref{rmk:HpQFT/FR/combFactor}. The main advantage of this approach is the manifest smoothness with respect to $\arrs{t}{n}$.
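For orientation, in the simplest case $v_{i'}=1$ the factor (\ref{eq:Main/OGamma-Inner}) reduces to $\int_{t}^{t'} e^{\mathrm{i}\Delta_{i',1}^{\mathrm{S}\Gamma}(\cdots)\tau}\, h(\tau-\tau'_1)\, d\tau$, and the substitution $\xi=(\tau-t)/(t'-t)$ turns it into $(t'-t)\int_{0}^{1} e^{\mathrm{i}\Delta_{i',1}^{\mathrm{S}\Gamma}(\cdots)(\xi(t'-t)+t)}\, h(\xi(t'-t)+t-\tau'_1)\, d\xi$. This simple example already shows how the dependence of the integration region on the arguments produces nothing worse than the polynomial factor $(t'-t)$; the same mechanism is exploited for general $v_{i'}$ in Lemma \ref{lem:Main/TInner} below.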
\par There are two very different kinds of contributions: the \emph{inner vertices} $\bullet_{(i,j)}$, $i=1,\ldots,n-1$, and the \emph{outer vertices} $\bullet_{(0,j)}$ and $\bullet_{(n,j)}$. We treat them separately in two subsections. The terminology comes from the fact that the inner vertices correspond to operators standing between two source insertions (the external vertices), while the operators corresponding to the outer ones appear either before or after all the insertions. This distinction is crucial for the treatment of the corresponding factors. \subsubsection{Inner vertices: compact region of integration} In (\ref{eq:Main/OGamma-Inner}) the region of integration is compact and all the estimates are rather direct. We have to deal with the fact that the integration region depends on the arguments, but this brings nothing but a polynomial factor. It is convenient to first formalize this fact. \begin{lem}\label{lem:Main/TInner} Fix $v,m\in\mathbb{N}_0$ and $g\in\SRR{m+v+2}$. Define a function $H$ on $\RR{m+v+2}$ by $$ H(\arrs{\tau'}{v},\bm{x},t',t)= \int_{ t}^{t'}d\tau_{1}\int_{ t}^{\tau_{1}}d\tau_{2}\cdots \int_{ t}^{\tau_{v-1}}d\tau_v\, g((\tau_j-\tau'_j)_{j=1\ldots v},\bm{x},t',t), $$ $$ \forall t,t'\in \RR{}, \forall \arrs{\tau'}{v}\in\RR{v}, \, \forall \bm{x}\in\RR{m}. $$ Then $H\in\SRR{m+v+2}$. \end{lem} \begin{proof} The smoothness is clear. In particular, $H$ is continuous together with all its derivatives, so it is enough to estimate it for $t\neq t'$. In this case we can change the variables of integration to $$ \xi_i=\frac{\tau_i-t}{t'-t}, \quad i=1,\ldots,v. $$ In this way, for any allowed arguments we get $$ H(\arrs{\tau'}{v},\bm{x},t',t)=$$ $$(t'-t)^{v} \int_{0}^1d\xi_1\int_{0}^{\xi_1}d\xi_2\ldots\int_{0}^{\xi_{v-1}}d\xi_v\, g((\xi_j(t'-t)+t-\tau'_j)_{j=1\ldots v},\bm{x},t',t). $$ The integrand is clearly fast decaying with respect to $t$, $t'$ and $\bm{x}$. For $\tau'_j$ note that whenever $0\leq\xi_j\leq 1$, we have $$ (1+|\xi_j(t'-t)+t-\tau'_j|)(1+|t|)(1+|t'|)\geq 1+|\tau'_j|, $$ so the integrand is fast decaying with respect to $\tau'$. Then using standard estimates of type (\ref{eq:x<xy}-\ref{eq:xy<x}) we get that the integral is fast decaying, and the partial derivatives can be treated in the same way. \end{proof} \begin{=>}\label{=>:Main/TempI} For $n,V,\arrsM{v}{0}{n},\arrs{\alpha}{n},\Gamma$ as in Theorem \ref{thm:MainW}, $i\in\{1,\ldots,n-1\}$ and $h\in Q_{\frac{M}{V+1}}$ one has $$O_{[h]\Gamma,i}\in\SMlt{v_i}{3(I_{\Gamma}+n)+2}.$$ \end{=>} \begin{proof} By Remark \ref{rmk:Framework/Mlt/Alt} it is enough to prove that for any $\chi\in\SRR{3(I_{\Gamma}+n)+2}$ one has $\MltMn{\chi}{v}[O_{[h]\Gamma,i}]\in\SRR{3(I_{\Gamma}+n)+2+v_i}$. \par For that use Lemma \ref{lem:Main/TInner} with $v=v_i$, $m=3(I_{\Gamma}+n)$ and $$ g(\arrs{\tau}{v_i},\bm{x},t',t)=$$ $$ \exp\left(\mathrm{i}\sum_{j'=1}^{v_{i}}\Delta_{i,j'}^{\mathrm{S}\Gamma}\left(\bm{x}\right)\tau_{j'}\right) \prod_{j=1}^{v_{i}}h(\tau_{j}) \chi\left(\bm{x},t',t\right), \quad\forall \bm{x}\in\RR{3(I_{\Gamma}+n)}. $$ Then $$ \MltMn{\chi}{v}[O_{[h]\Gamma,i}]\left(\arrs{\tau'}{v_i},\bm{x},t',t\right)=H(\arrs{\tau'}{v_i},\bm{x},t',t)R(\arrs{\tau'}{v_i},\bm{x}), $$ where $H\in\SRR{v_i+3(I_{\Gamma}+n)+2}$ is constructed in Lemma \ref{lem:Main/TInner} and $$ R(\arrs{\tau'}{v_i},\bm{x})= \exp\left(\mathrm{i}\sum_{j'=1}^{v_{i}}\Delta_{i,j'}^{\mathrm{S}\Gamma}\left(\bm{x}\right)\tau'_{j'}\right). $$ The statement then follows from $R\in\Mlt{v_i+3(I_{\Gamma}+n)}$.
\end{proof} \subsubsection{Outer vertices: spectral properties} The integration regions in the right-hand sides of (\ref{eq:Main/OGamma-Pre}-\ref{eq:Main/OGamma-Post}) are not compact, so at first sight the convergence is controlled by the adiabatic cut-off function only. The situation becomes clearer in the spectral representation. Then $h\in Q_{\frac{M}{V+1}}$ makes the integration region compact. The key observation is that physically these groups of vertices describe the creation of several particles from the vacuum (if $i=n$) or the complete annihilation of particles down to the vacuum (if $i=0$) as an on-shell process. \par We consider in detail the case $i=0$. \par It is convenient to shift the variables of integration to $\arrs{s}{v_0}$ as follows: $$\tau_{j}=t+\sum_{j'=1}^{j}s_{j'}.$$ Then the integration region becomes $\RR{v_0}_{+}$ and \begin{equation}\label{eq:Main/O0} O_{[h]\Gamma,0}\left(\arrs{\tau'}{v_0},\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V},t\right)= \end{equation} $$ \exp\left(-\mathrm{i}\Omega_{v_0}^{\mathrm{f}\Gamma}\left(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V}\right)t\right)\times $$ $$ \int_{\RR{v_0}_+} \exp\left(-\mathrm{i}\sum_{j=1}^{v_{0}}\Omega_{j}^{\mathrm{f}\Gamma}\left(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V}\right)s_j\right)\times $$ $$ \prod_{j=1}^{v_{0}}h\left(t+\sum_{j'=1}^{j}s_{j'}-\tau'_{j}\right) ds_j. $$ Here we introduced the accumulated energy defects \begin{equation}\label{eq:Main/Omega-f} \Omega_{j}^{\mathrm{f}\Gamma}=\sum_{j'=j}^{v_0} (-\Delta_{0,j'}^{\mathrm{S}\Gamma}). \end{equation} The following observation is important. \begin{rmk}\label{rmk:Main/MassBound} For any values of the arguments \begin{equation}\label{eq:Main/OmegaBound} \Omega_{j}^{\mathrm{f}\Gamma}\left(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V}\right)\geq M. \end{equation} \end{rmk} \begin{proof} It follows from the fact that all lines starting at one of the vertices $(0,j)$, $j=1,\ldots,v_0$, must end at another vertex from the same set, because the outgoing state is the vacuum and all external vertices may appear only before $(0,j)$. Let us look at the details. \par Each $(-\Delta_{0,j'}^{\mathrm{S}\Gamma})$ contains both positive contributions of the incoming lines and negative contributions of the outgoing lines of the vertex $(0,j')$. Each outgoing line should end at some later vertex $(0,j'')$, $j''>j'$. In other words, all negative contributions to the right-hand side of (\ref{eq:Main/Omega-f}) cancel out. Finally, if for some vertex all positive contributions were also canceled, it would mean that this vertex belongs to a disconnected piece of the diagram, which is forbidden. Then by the assumed mass gap we get (\ref{eq:Main/OmegaBound}). \end{proof} Let us see how it helps to deal with $O_{[h]\Gamma,0}$.
As long as $h\in Q_{\frac{M}{V+1}}$, (\ref{eq:Main/O0}) is equivalent to \begin{equation}\label{eq:Main/O0-spec} O_{[h]\Gamma,0}\left(\arrs{\tau'}{v_0},\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V},t\right)= \end{equation} $$ \int\exp\left(-\mathrm{i}\left(\sum_{j=1}^{v_0}\Delta_j+\Omega_{v_0}^{\mathrm{f}\Gamma}\left(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V}\right)\right)t\right)\times $$ $$ \prod_{j=1}^{v_0} \frac{\mathrm{i} \exp \left(\mathrm{i}\tau'_j\Delta_j\right)\widetilde{h}(\Delta_j) d\Delta_j} {-\sum_{j'=j}^{v_0}\Delta_{j'}-\Omega_{j}^{\mathrm{f}\Gamma}\left(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V}\right)+\ii0}. $$ \begin{rmk}\label{rmk:Main/SpectralInt} In principle, this expression involves a symbolic integral which should be understood as an evaluation of the distribution $$ F(\arrs{\Delta}{v_0})= \prod_{j=1}^{v_0} \frac{\mathrm{i} \exp \left(\mathrm{i}\tau'_j\Delta_j\right)\widetilde{h}(\Delta_j)} {-\sum_{j'=j}^{v_0}\Delta_{j'}-\Omega_{j}^{\mathrm{f}\Gamma}\left(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V}\right)+\ii0} $$ on a test function. But by Remark \ref{rmk:Main/MassBound} and $h\in Q_{\frac{M}{V+1}}$ the denominators are bounded from below, \begin{equation} \label{eq:Main/TempOuterDenF0} \left| -\sum_{j'=j}^{v_0}\Delta_{j'}-\Omega_{j}^{\mathrm{f}\Gamma}\left(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V}\right)\right|\geq \end{equation} $$ M-v_0\frac{M}{V+1}\geq \frac{M}{V+1}. $$ So, the integral in (\ref{eq:Main/O0-spec}) may be treated literally and the $+\ii0$ can be ignored. \end{rmk} In the same way we get \begin{equation}\label{eq:Main/On-spec} O_{[h]\Gamma,n}\left(\arrs{\tau'}{v_n},\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V},t'\right)= \end{equation} $$ \int\exp\left(-\mathrm{i}\left(\sum_{j=1}^{v_n}\Delta_j-\Omega_{v_n}^{\mathrm{i}\Gamma}\left(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V}\right)\right)t'\right)\times $$ $$ \prod_{j=1}^{v_n} \frac{\mathrm{i} \exp \left(\mathrm{i}\tau'_j\Delta_j\right)\widetilde{h}(\Delta_j) d\Delta_j} {\sum_{j'=1}^{j}\Delta_{j'}-\Omega_{j}^{\mathrm{i}\Gamma}\left(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V}\right)+\ii0}, $$ where \begin{equation} \label{eq:Main/Omega-i} \Omega_{j}^{\mathrm{i}\Gamma}=\sum_{j'=1}^{j} \Delta_{n,j'}^{\mathrm{S}\Gamma}. \end{equation} Similarly to Remark \ref{rmk:Main/MassBound} we have \begin{equation} \label{eq:Main/TempOuterDenF} \left |\sum_{j'=1}^{j}\Delta_{j'}-\Omega_{j}^{\mathrm{i}\Gamma}\left(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V}\right)\right|\geq \frac{M}{V+1}, \end{equation} and in particular the $+\ii0$ symbol can be ignored.
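For illustration, here is the computation behind (\ref{eq:Main/O0-spec}) in the simplest case $v_0=1$ (with the momentum arguments of $\Omega_{1}^{\mathrm{f}\Gamma}$ suppressed), written assuming for definiteness the Fourier convention $h(u)=\int\widetilde{h}(\Delta)e^{-\mathrm{i}\Delta u}\,d\Delta$; the normalization used above may differ, but the structure of the result is the same:
$$ e^{-\mathrm{i}\Omega_{1}^{\mathrm{f}\Gamma}t}\int_{0}^{+\infty} e^{-\mathrm{i}\Omega_{1}^{\mathrm{f}\Gamma} s}\, h(t+s-\tau'_1)\, ds = e^{-\mathrm{i}\Omega_{1}^{\mathrm{f}\Gamma}t}\int \widetilde{h}(\Delta_1)\, e^{-\mathrm{i}\Delta_1 (t-\tau'_1)}\int_{0}^{+\infty} e^{-\mathrm{i}(\Omega_{1}^{\mathrm{f}\Gamma}+\Delta_1-\ii0) s}\, ds\, d\Delta_1 = \int e^{-\mathrm{i}(\Delta_1+\Omega_{1}^{\mathrm{f}\Gamma}) t}\, \frac{\mathrm{i}\, e^{\mathrm{i}\tau'_1\Delta_1}\, \widetilde{h}(\Delta_1)}{-\Delta_1-\Omega_{1}^{\mathrm{f}\Gamma}+\ii0}\, d\Delta_1. $$
Since $\mathrm{supp}\,\widetilde{h}\subset[-\frac{M}{V+1},\frac{M}{V+1}]$ while $\Omega_{1}^{\mathrm{f}\Gamma}\geq M$, the denominator stays bounded away from zero on the support of $\widetilde{h}$, in agreement with Remark \ref{rmk:Main/SpectralInt}.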
% \begin{lem}\label{lem:Main/TempOuter} For $n,V,\arrsM{v}{0}{n},\arrs{\alpha}{n},\Gamma$ as in Theorem \ref{thm:MainW}, $$O_{[h]\Gamma,i}\in \SMlt{v_i}{3(I_{\Gamma}+n)+1}, \, i=0,n.$$ \end{lem} \begin{proof} By (\ref{eq:Main/O0-spec}) and (\ref{eq:Main/On-spec}), for $i=0,n$ the function $O_{[h]\Gamma,i}$ is the Fourier transform of $\widetilde{O}_{[h]\Gamma,i}$, $$\widetilde{O}_{[h]\Gamma,0}\left(\arrs{\Delta}{v_0},\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V},t\right)=$$ $$ \exp\left(-\mathrm{i}\left(\sum_{j=1}^{v_0}\Delta_j+\Omega_{v_0}^{\mathrm{f}\Gamma}\left(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V}\right)\right)t\right)\times $$ $$ \prod_{j=1}^{v_0} \frac{\mathrm{i} \widetilde{h}(\Delta_j)} {-\sum_{j'=j}^{v_0}\Delta_{j'}-\Omega_{j}^{\mathrm{f}\Gamma}\left(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V}\right)+\ii0} $$ and $$\widetilde{O}_{[h]\Gamma,n}\left(\arrs{\Delta}{v_n},\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V},t'\right)=$$ $$ \exp\left(-\mathrm{i}\left(\sum_{j=1}^{v_n}\Delta_j-\Omega_{v_n}^{\mathrm{i}\Gamma}\left(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V}\right)\right)t'\right)\times $$ $$ \prod_{j=1}^{v_n} \frac{\mathrm{i}\widetilde{h}(\Delta_j) } {\sum_{j'=1}^{j}\Delta_{j'}-\Omega_{j}^{\mathrm{i}\Gamma}\left(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V}\right)+\ii0}, $$ where we used the accumulated energy defects (\ref{eq:Main/Omega-f}) and (\ref{eq:Main/Omega-i}). Taking into account (\ref{eq:Main/TempOuterDenF0}) and (\ref{eq:Main/TempOuterDenF}) we get $$\widetilde{O}_{[h]\Gamma,i}\in \SMlt{v_i}{3(I_{\Gamma}+n)+1}, \, i=0,n,$$ and thus the same holds for its Fourier transform. \end{proof} \begin{rmk} It is worth noting that the argument above is applicable only because both the initial and the final states are the vacuum. In particular, it does not work for the adiabatic limit of scattering amplitudes. \end{rmk} \subsection{Proof of Theorem \ref{thm:MainW}} \begin{proof} We only have to combine the results of the two previous subsections. Indeed, applying Corollary \ref{=>:Main/TempI} and Lemma \ref{lem:Main/TempOuter} to the form (\ref{eq:Main/Temp-spec}), we have \begin{equation*} \left(\WightTf{n}{\alpha}{\arrsM{v}{0}{n}}\circ \mathcal{M}_{[h]}^{V,n}\right)[f](\arrs{t}{n})= \sum_{\Gamma\in\mathfrak{G}_{\arrs{\alpha}{n}}^{\arrsM{v}{0}{n}}} \int \end{equation*} $$N^{[h]}_{\Gamma}(\left(\tau'_{i,j}\right)_{i=0\ldots n, j=1\ldots v_i},\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_n},\arrs{\vec{q}}{I_{\Gamma}+n-V},\arrs{t}{n})\times$$ $$ f\left(\arrs{\vec{p}_{\Gamma}(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V})}{n},\left(\tau'_{i,j},\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_n}\right)\times $$ $$ \left(\prod_{i=0}^{n} \prod_{j=1}^{v_i} d\tau'_{i,j}d\kappa_{i,j}\right)d^{3(I_{\Gamma}+n-V)}\arrs{\vec{q}}{I_{\Gamma}+n-V}, $$ where $$ N^{[h]}_{\Gamma}\in\SMltT{V+3(I_{\Gamma}+n)}{n}, $$ and $\vec{p}_{\Gamma}$ is the linear function from Corollary \ref{=>:Main/Spatial}.
Then the necessary restriction is given by $$ \restrictH{3n+1,\ldots,3n+4V}{1,\ldots,4V}\left(\WightTf{n}{\alpha}{\arrsM{v}{0}{n}}\circ \mathcal{M}_{[h]}^{V,n}\right)[f]\left(\left(\tau'_{i,j},\vec{\kappa}_{i,j}\right)_{i=1\ldots n, j=1\ldots v_i},\arrs{t}{n}\right)= \sum_{\Gamma\in\mathfrak{G}_{\arrs{\alpha}{n}}^{\arrsM{v}{0}{n}}} \int $$ $$N^{[h]}_{\Gamma}(\left(\tau'_{i,j}\right)_{i=0\ldots n, j=1\ldots v_i},\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_n},\arrs{\vec{q}}{I_{\Gamma}+n-V},\arrs{t}{n})\times$$ $$ f\left(\arrs{\vec{p}_{\Gamma}(\left(\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_i},\arrs{\vec{q}}{I_{\Gamma}+n-V})}{n},\left(\tau'_{i,j},\vec{\kappa}_{i,j}\right)_{i=1\ldots n,j=1\ldots v_n}\right)\times $$ $$d^{3(I_{\Gamma}+n-V)}\arrs{\vec{q}}{I_{\Gamma}+n-V}. $$ \end{proof}
\section{Introduction} Hodgkin-Huxley (HH) model of neuronal excitability \cite{HH52} provides a milestone for biophysical understanding of information processing in living systems \cite{Koch} in terms of electrical spikes mediated by ionic currents through voltage-dependent membrane pores made by ion channel proteins. One considers the cell membrane as an insulator with specific electrical capacitance $C_m$ per unit of area, which is perforated by ionic channels providing generally voltage-dependent parallel ionic pathways with specific conductances $G_i$ per unit of area for various sorts of ion channels. This yields the following equation for transmembrane electrical potential difference $V$ \begin{equation} \label{voltage-eq} C_m \frac{dV}{dt} +G_{\chem{K}}(n) (V-E_{\chem{K}}) +G_{\chem{Na}}(m,h) ( V - E_{\chem{Na}}) +G_{\chem{L}} (V - E_{L}) = I_{\rm ext}\, . \end{equation} Here, three ionic currents are taken into account, sodium $\chem{Na}$, potassium $\chem{K}$ and unspecific leakage current (mainly due to chloride ions). This is nothing else the Kirchhoff current law, which takes into account the ionic and capacitance currents, as well as an external current $I_{\rm ext}$ which can stimulate electrical excitations. This equation reflects assumption on Ohmic conductance of completely open ion channels with $E_i$ being the reversal or Nernst potentials. They emerge due to the difference of ionic concentrations inside and outside of the excitable cell, which are kept approximately constant by the work of ionic pumps, which is not considered explicitely. Nonlinearity comes from the open-shut gating dynamics of sodium and potassium channels. The corresponding specific conductances \begin{eqnarray} \label{conductances} G_{\chem{K}}(n)&&=g_{\chem{K}}^{\mathrm{max}}n^{4}(V,t) , \nonumber \\ G_{\chem{Na}}(m,h)&&=g_{\chem{Na}}^{\mathrm{max}}m^{3}(V,t) h(V,t)\, , \end{eqnarray} depend on three voltage-dependent gating variables, $n$, $m$, and $h$, where $n(t)$ is the probability of one gate of potassium channel to be open (more precisely the fraction of open gates), $m$ corresponds to one activation gate of sodium channel, and $h$ is the fraction of closed sodium inactivation gates. One assumes four independent identical gates for potassium channel, hence its opening probability is $n^4$, as well as three activation and one inactivation gate for the sodium channel. Hence, $m^3h$ is the fraction of open sodium channels. The maximal conductances $g_{\chem K}^{\mathrm{max}}$ and $g_{\chem{Na}}^{\mathrm{max}}$ can be expressed via the unitary conductances $g_{i,0}$ of single ion channels as $g_{i}^{\mathrm{max}}=g_{i,0}\rho_i$, where $\rho_i$ is the membrane density of the ion channels of sort $i$. The gating dynamics is in turn described by the relaxation kinetics \begin{equation} \label{gates} \frac{d}{dt} x = \alpha_{x}(V)\ (1-x)-\beta_{x}(V)\ x , \quad x=m,h,n\, , \end{equation} with voltage-dependent rates \begin{eqnarray} \label{rates1} \alpha_{m}(V) &&= \frac{ 0.1\ ( V + 40 )}{1-\exp [ - ( V + 40 ) / 10] },\; \beta_{m}(V) = 4 \ \exp [ - ( V + 65 ) / 18 ]\, , \\ \alpha_{h}( V ) &&= 0.07 \ \exp [ - ( V + 65 ) / 20 ], \; \beta_{h}( V ) = \{ 1 + \exp [ - ( V + 35 ) / 10 ] \}^{-1}\, , \\ \alpha_{n}( V ) &&= \frac{ 0.01 \ ( V + 55 )}{ 1 - \exp [ -( V + 55 )/10 ]},\; \label{eq:rates2} \beta_{n}( V ) = 0.125 \ \exp [ - ( V + 65 ) / 80 ]\, . \end{eqnarray} Here the voltage is measured in millivolts and rates in inverse milliseconds. 
Other classical parameters of HH model suitable to describe excitable dynamics of squid giant axon are: $C_m=1 \un{\mu F/cm^2}$, $E_{\chem{Na}}=50\un{mV}$, $E_{\chem{K}}=-77\un{mV}$, $E_{\mathrm{L}}=-54.4\un{mV}$, $G_{\mathrm{L}} =0.3\un{mS/cm^2}$, $g_{\chem K}^{\mathrm{max}}=36\un{mS/cm^2}$, $g_{\chem{Na}}^{\mathrm{max}}=120\un{mS/cm^2}$. The set of four coupled nonlinear differential equations defined by (\ref{voltage-eq})- (\ref{eq:rates2}) presents a milestone in biophysics and neuroscience because of its very clear and insightful physical background. In the same spirit, one can build up various other conductance-based biophysical models starting from the pertinent molecular background and following to the bottom-up approach. However, it assumes macroscopically large numbers of ion channels in neglecting completely the mesoscopic channel noise effects. The number of ion channels in any cell is, however, finite, and the corresponding channel noise can be substantial \cite{Koch}. Especially, one confronts with this question by considering the spatial spike propagation among approximately piece-wise isopotential membrane clusters of ion channels \cite{Anna}. \section{Stochastic Hodgkin-Huxley equations} How to take stochastic dynamics of ion channels within the physical framework of HH model into account is subject of numerous studies \cite{Koch,Strassberg,FoxLu}. The most rigorous way is to consider the variable population of open ion channels as a birth-and-death process \cite{Kampen}. Consider for simplicity a population of $N$ independent two-state ion channels (one gate only) with opening rate $\alpha$ and closing rate $\beta$ (constant under voltage clamp). Each ion channel fluctuates dichotomously between the closed state with zero conductance and the open state having unitary conductance $g_0$. For such two-state Markovian channels, the stationary probability of opening is $p_0=\alpha/(\alpha+\beta)$ and the averaged conductance is $\langle g(t)\rangle=p_0g_0$. The number of open channels $n$ is binomially distributed with probability $P_{N}^{\rm st}(n)=p_0^n(1-p_0)^{N-n}N!/(n!(N-n)!)$, average $\langle n\rangle =p_0N$, and variance $\langle (n-\langle n\rangle)^2\rangle=Np_0(1-p_0)=N\alpha\beta/(\alpha+\beta)^2$. For sufficiently large $N\geq 100$, we introduce quasi-continuous variable $0\leq x(t)=n(t)/N\leq 1$, with smoothened binomial probability density $p_{N}^{\rm st}(x)=Np^{xN}(1-p)^{N(1-x)}N!/(\Gamma(1+xN)\Gamma(1+(1-x)N))$. Use of approximate Stirling formula $n!\approx (n/e)^n$ yields \begin{eqnarray}\label{stat} p^{\rm st}_N(x)\approx C_N(\alpha,\beta)\left(\frac{\alpha}{x}\right)^{Nx} \left(\frac{\beta}{1-x}\right)^{N(1-x)}\;, \end{eqnarray} where $C_N(\alpha,\beta)$ is a normalization constant. We are looking for the best diffusional (continuous) approximation for discrete state birth-and-death process defined by the master equation \begin{eqnarray} \dot P_N(n)&&=F(n-1)P_N(n-1)+B(n+1)P_N(n+1)-[F(n)+B(n)]P_N(n) \end{eqnarray} for $1\leq n\leq N-1$, with forward rate $F(n)=\alpha (N-n)$ and backward rate $B(n)=\beta n$, complemented by the boundary conditions \begin{eqnarray} \dot P_N(0)&&=B(1)P_N(1)-F(0)P_N(0), \\ \dot P_N(N)&&=F(N-1)P_N(N-1)-B(N)P_N(N)\;. 
\end{eqnarray} \subsection{Diffusional approximations for birth-and-death process} \subsubsection{Kramers-Moyal expansion and standard diffusional approximation.} A standard way to obtain diffusional approximation for $p_N(x):=P_N(x N)/\Delta x$ ($\Delta x=1/N$) with rates $f(x):=F(xN)\Delta x$, $b(x):=B(xN) \Delta x$ is to do the Kramers-Moyal expansion \cite{Kampen,Hanggi82}, like $p_N(x+\Delta x)\approx p_N(x)+(\partial p_N(x)/\partial x)\Delta x+ (\partial^2 p_N(x)/\partial x^2)(\Delta x)^2/2$, $f(x+\Delta x)\approx f(x)+(df(x)/dx)\Delta x+(d^2f(x)/dx^2)(\Delta x)^2/2$, to the second order. This yields the Fokker-Planck equation \begin{eqnarray} \frac{\partial }{\partial t} p(x,t)=-\frac{\partial }{\partial x}[f(x)-b(x)]p(x,t)+ \frac{\partial^2 }{\partial x^2}D_{\rm KM}(x)p(x,t) \end{eqnarray} with diffusion coefficient $D_{\rm KM}(x)=[f(x)+b(x)]/(2N)$. This Fokker-Planck equation corresponds to the Langevin equation \begin{eqnarray} \dot x=f(x)-b(x)+\sqrt{2D_{\rm KM}(x)}\xi(t), \end{eqnarray} where $\xi(t)$ is white Gaussian noise of unit intensity, $\langle\xi(t)\xi(t') \rangle=\delta(t-t') $, in pre-point, or Ito interpretation \cite{Gard}. This equation is quite general for any one-dimensional birth-and-death process within this standard diffusional approximation. For the considered population of ion channels, \begin{eqnarray}\label{standard} \dot x =\alpha(1-x)-\beta x+\sqrt{[\alpha(1-x)+\beta x]/N}\xi(t). \end{eqnarray} This is stochastic equation for a gating variable in the stochastic generalization of Hodgkin-Huxley equations by Fox and Lu \cite{FoxLu}. It replaces Eq. (\ref{gates}) with corresponding voltage-dependent $\alpha_x(V)$, $\beta_x(V)$, and $N=N_{\rm Na}=\rho_{\rm Na}S$ for $m$ and $h$, or $N=N_{\rm K}=\rho_{\rm K}S$ for the variable $n$. $S$ is the area of membrane patch, and $\rho_{\rm Na}=60\mu m^{-2}$, $\rho_{\rm K}=18\mu m^{-2}$ within HH model \cite{Strassberg}. Clearly, in the limit $N\to\infty$ the channel noise vanishes, restoring the deterministic HH model. We name this model the second model by Fox and Lu (Fox-Lu 2) in application to stochastic HH dynamics. \subsubsection{Linear noise approximation.} The further approximation (Fox-Lu 1 within stochastic HH model) is obtained by $D_{\rm KM}(x)\to D_{\rm KM}(x_{\rm eq})=const$, where $x_{\rm eq}$ is equilibrium point of deterministic dynamics, $f(x_{\rm eq})=b(x_{\rm eq})$. It corresponds to the so-called $1/\Omega$ expansion with linear additive noise approximation advocated by van Kampen \cite{Kampen}. Then, with $x_{\rm eq}=p_0=\alpha/(\alpha+\beta)$ Eq. (\ref{standard}) reduces to \begin{eqnarray} \dot x =\alpha(1-x)-\beta x+\sqrt{2\alpha\beta/[N(\alpha+\beta)]}\xi(t). \end{eqnarray} \subsubsection{Diffusional approximation with natural boundaries.} The both diffusional approximations are not quite satisfactory because they do not guarantee the boundary conditions in a natural way. As a result, for a sufficiently small opening probability $p_0\ll 1$, and not sufficiently large number of channels the negative values, $x<0$, become possible with appreciable probabilities $p(x,t)>0$. Likewise, the larger than one values, $x>1$, are also possible when the opening probability $p_0$ is close to one. However, this deficiency can easily be corrected numerically by imposing reflecting boundary conditions at $x=0$ and $x=1$ in stochastic simulations. With this correction, Langevin approximation of stochastic HH dynamics is widely used \cite{Schmid01,Schmid04a,Schmid06,Anna}. 
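As an illustration of how this widely used procedure is typically set up numerically, the following is a minimal sketch (not the production code of the works cited above) of an Euler-Maruyama integration of Eq. (\ref{standard}) for a single gating variable under voltage clamp, with reflecting boundaries at $x=0$ and $x=1$ imposed by hand; the function name and the particular reflection rule are illustrative choices.
\begin{verbatim}
import numpy as np

def gate_standard(alpha, beta, N, T=1000.0, dt=1e-3, x0=None, seed=0):
    # Euler-Maruyama scheme for
    #   dx = [alpha (1-x) - beta x] dt + sqrt([alpha (1-x) + beta x]/N) dW,
    # with reflecting boundaries at x = 0 and x = 1 enforced numerically.
    rng = np.random.default_rng(seed)
    x = alpha / (alpha + beta) if x0 is None else x0
    traj = np.empty(int(T / dt))
    for i in range(traj.size):
        drift = alpha * (1.0 - x) - beta * x
        noise2 = max(alpha * (1.0 - x) + beta * x, 0.0) / N   # = 2 D_KM(x)
        x += drift * dt + np.sqrt(noise2 * dt) * rng.standard_normal()
        if x < 0.0:        # reflect at the lower boundary
            x = -x
        elif x > 1.0:      # reflect at the upper boundary
            x = 2.0 - x
        traj[i] = x
    return traj

# e.g. gate_standard(alpha=1.0, beta=9.0, N=100) for an opening probability p0 = 0.1
\end{verbatim}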
However, it is not quite clear if this procedure indeed delivers the correct results \cite{FNL}. To clarify the issue, we consider a different diffusional approximation with natural reflecting boundaries which naturally bound stochastic dynamics to the interval $0\leq x \leq1$. For this, we first demand that the diffusional approximation is consistent with the stationary distribution of birth-and-death process, which can be expressed as $P_N^{\rm st}(n)= \exp [-\Phi(n)]P_N^{\rm st}(0)$ in terms of a pseudo-potential $\Phi(n)=-\sum_{i=1}^{n}\ln\left [F(i-1)/B(i) \right]$ \cite{Kampen}. Hence, in the continuous limit, $p_N^{\rm st}(x)\propto \exp [-N\phi(x)]$, with pseudo-potential $\phi(x)=-\int_0^x \ln\left [f(x')/b(x') \right]d x'= \ln(1-x)-x\ln(\alpha(1-x)/(x\beta))$. This indeed yields the probability density (\ref{stat}). The corresponding Fokker-Planck equation must read \begin{eqnarray} \frac{\partial }{\partial t} p(x,t)= \frac{\partial }{\partial x}\left (D(x)e^{-N\phi(x)} \frac{\partial }{\partial x} e^{N\phi(x)}p(x,t)\right )\\ =\frac{\partial }{\partial x}ND(x)\phi'_x(x)p(x,t)+\frac{\partial }{\partial x}D(x) \frac{\partial }{\partial x}p(x,t) \end{eqnarray} with \begin{eqnarray} ND(x)\phi'_x(x)=b(x)-f(x) \; \end{eqnarray} in order to be also consistent with the deterministic limit $N\to\infty$. The last equation fixes the diffusion coefficient as \begin{eqnarray} D(x)=\frac{1}{N}\frac{f(x)-b(x)}{\ln [f(x)/b(x)]} \;. \end{eqnarray} The Langevin equation which corresponds to this best diffusional approximation of the birth-and-death processes \cite{Hanggi82,Hanggi84} reads \begin{eqnarray} \dot x=f(x)-b(x)+\sqrt{2D(x)}\xi(t), \end{eqnarray} in the post-point, or Klimontovich-H\"anggi interpretation \cite{Hanggi82}. In the standard Ito interpretation suitable for integration with stochastic Euler algorithm \cite{Gard} the corresponding Langevin equation becomes \begin{eqnarray} \dot x=f(x)-b(x)+D'_x(x)+\sqrt{2D(x)}\xi(t) \; \end{eqnarray} with spurious drift $D'_x(x)$. In application to stochastic dynamics of one gating variable it reads \begin{eqnarray} \dot x=\alpha(1-x)-\beta x+D'_x(x)+\sqrt{2D(x)}\xi(t) \;. \end{eqnarray} with \begin{eqnarray} D(x)=\frac{1}{N}\frac{\alpha(1-x)-\beta x}{\ln [\alpha(1-x)/(\beta x)]} \;. \end{eqnarray} Replacing with such equations the stochastic equations for gating variables in the standard Langevin variant of stochastic Hodgkin-Huxley equations we obtain the improved Langevin description of mesoscopic channel noise, with natural boundaries because $D(0)=D(1)=0$, i.e. the channel noise (and the probability flux) vanishes exactly at the reflecting boundaries, in the theory. Nevertheless, in numerical algorithm one must yet additionally secure such boundaries for any \textit{finite} integration time step $\delta t$. Notice also that near the equilibrium point with $|f(x)-b(x)|\ll f(x)+b(x)$, $D(x)\approx D_{\rm KM} (x)$, and the standard diffusional approximation is almost restored, almost, if to neglect the spurious drift correction $D'_x(x)$, which still remains within the Ito interpretation. We test the best diffusional approximation for a gating variable against the earlier Langevin descriptions with reflecting boundary conditions implemented numerically. For this we use stochastic Euler algorithm with time step $\delta t=0.001$ for several values of $N$ and the simulation software XPPAUT \cite{Bard}. The results are shown for $\alpha=1$ and $\beta=9$ with $p_0=0.1$ in Fig. \ref{Fig1} for $N=100$ (a) and $N=10$ (b). 
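For the improved (best) diffusional approximation a corresponding sketch reads as follows; it is again purely illustrative (the simulations reported here were performed with XPPAUT): the spurious drift $D'_x(x)$ is evaluated by a central finite difference, and the boundaries are secured by clipping, which is one possible choice for a finite time step $\delta t$.
\begin{verbatim}
import numpy as np

def D_nat(x, alpha, beta, N, eps=1e-12):
    # D(x) = (1/N) (f - b) / ln(f/b) with f = alpha (1-x), b = beta x;
    # D(0) = D(1) = 0 (natural boundaries); removable singularity at f = b.
    f, b = alpha * (1.0 - x), beta * x
    if f <= eps or b <= eps:
        return 0.0
    if abs(f - b) < eps * (f + b):
        return f / N
    return (f - b) / (np.log(f / b) * N)

def gate_natural(alpha, beta, N, T=1000.0, dt=1e-3, x0=None, seed=0):
    # Ito-Euler scheme for dx = [alpha (1-x) - beta x + D'(x)] dt + sqrt(2 D(x)) dW.
    rng = np.random.default_rng(seed)
    h = 1e-6
    x = alpha / (alpha + beta) if x0 is None else x0
    traj = np.empty(int(T / dt))
    for i in range(traj.size):
        xp, xm = min(x + h, 1.0), max(x - h, 0.0)
        Dp = (D_nat(xp, alpha, beta, N) - D_nat(xm, alpha, beta, N)) / (xp - xm)
        D = D_nat(x, alpha, beta, N)
        x += ((alpha * (1.0 - x) - beta * x + Dp) * dt
              + np.sqrt(2.0 * D * dt) * rng.standard_normal())
        x = min(max(x, 0.0), 1.0)   # secure the boundaries for finite dt
        traj[i] = x
    return traj
\end{verbatim}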
As a big surprise, the simplest linear noise approximation actually seems to work best, if only to implement reflecting boundary conditions. For $N=100$, it reproduces well the still somewhat skewed binomial distribution with the exact mean $\langle x \rangle=0.1$ and standard deviation $\langle \Delta x^2\rangle^{1/2}=0.03$. Even for $N=10$, it gives the mean closer to the correct value of $0.1$ within the discrete state model. However, the variance then deviates from the theoretical value $\langle \Delta x^2\rangle^{1/2}\approx 0.095$ larger than within two other approximations. For a sufficiently large $N=1000$ (not shown), all three diffusional approximations give practically identical results, within the statistical errors of simulations. Surprisingly, all three work reasonably well even for $N=10$! However, such a performance is \textit{a priori} not guaranteed for stochastic nonlinear dynamics with voltage-dependent $\alpha(V(t))$ and $\beta(V(t))$. In fact, for a multistable dynamics the best diffusional approximation is generically expected \cite{Hanggi84} to operate essentially better. \begin{figure} \centering \includegraphics[height=4.3cm]{Fig1a}\hfill \includegraphics[height=4.3cm]{Fig1b} \caption{Stationary distributions of gating variable $x$ for two ensembles of ion channels with $\alpha=1$ and $\beta=9$, (a) $N=100$ and (b) $N=10$. Numerics are compared with binomial distribution (a) and distribution (\ref{stat}) for the best diffusional approximation } \label{Fig1} \end{figure} We compare three different Langevin descriptions of stochastic HH dynamics in Fig. \ref{Fig2}, for two different membrane patches. Here, the interspike interval distributions are presented, together with the corresponding mean, $\langle \tau\rangle$, standard deviation, $\langle [ \tau-\langle \tau\rangle]^2\rangle^{1/2}$, and the relative standard variation, or the coefficient of variation, $C_V=\langle [ \tau-\langle \tau\rangle]^2\rangle^{1/2}/\langle \tau\rangle$, which measures the spike coherence. For $S=10\;\mu m^2$, all three approximations agree well. However, for $S=1\; \mu m^2$ the discrepancies become apparent, and we prefer our improved Langevin description on general theoretical grounds. The coefficient of variation $C_V$, calculated within our Langevin variant of stochastic HH model, is plotted \textit{vs.} the patch size $S$ in Fig. \ref{Fig3}. It displays a typical coherence resonance \cite{Pikovsky} behavior revealed earlier within stochastic HH models in \cite{Schmid01,Jung} as a system-size coherence resonance. There exists an optimal patch size (optimal number of ion channels) with most coherent stochastic dynamics due to internal mesoscopic noise. \subsection{Summary and Conclusions} In this paper, we presented the best diffusional Langevin approximation for excitable cell dynamics within stochastic Hodgkin-Huxley model, with natural boundary conditions for the channel noise implemented. It has clear theoretical advantages over the standard diffusional approximation in the case of transitions induced by mesoscopic noise as discussed for bistable birth-and-death processes long time ago \cite{Hanggi84}. However, within stochastic HH model for a sufficiently large number of ion channels, the standard diffusional approximations were shown to work also very good. Hence, this work confirmes the validity of the previous work done within the Langevin approximations of stochastic HH dynamics, for a sufficently large number of channels. 
This does not mean, however, that the situation cannot be different for other excitable models. Generally, the improved Langevin description should perform better. Other stochastic models of excitable dynamics, e.g. the stochastic Morris-Lecar model, can easily be improved accordingly. This task, as well as a comparison with discrete-state stochastic models of channel noise, is left for a future investigation. \subsubsection*{Acknowledgments.} Support by the DFG (German Research Foundation), Grant GO 2052/1-2, is gratefully acknowledged. \begin{figure} \centering \includegraphics[height=4.2cm]{Fig2a}\hfill \includegraphics[height=4.2cm]{Fig2b} \caption{ Interspike time interval distribution for self-excitable dynamics, $I_{\rm ext}=0$, due to the channel noise for two membrane patches: (a) $S=10\mu m^2$ ($N_{\rm Na}=600$, $N_{\rm K}=180$), and (b) $S=1\mu m^2$ ($N_{\rm Na}=60$, $N_{\rm K}=18$). } \label{Fig2} \end{figure} \begin{figure} \vspace{0.5cm} \centering \includegraphics[height=4.2cm]{Fig3} \caption{ Coefficient of variation versus the membrane patch size within our variant of the stochastic HH model. Self-excitable dynamics, $I_{\rm ext}=0$.} \label{Fig3} \end{figure}
\section{Introduction} \subsection{Statistical mixture models} A statistical mixture model~\cite{FiniteMixtureModels:2000} $M\sim m$ with $k\in\mathbb{N}$ weighted components has underlying probability distribution: \begin{equation} m(x|w,\theta) = \sum_{i=1}^k w_i p(x|\theta_i), \end{equation} with $w=(w_1, ..., w_k)$ and $\theta=(\theta_1, ..., \theta_k)$ denoting the mixture parameters: The $w_i$'s are positive weights summing up to one, and the $\theta_i$'s denote the individual component parameters. (Appendix~\ref{sec:notations} summarizes the notations used throughout the paper.) Mixture models of $d$-dimensional Gaussians\footnote{Also called MultiVariate Normals (MVNs) in software packages.} are the most often used statistical mixtures~\cite{FiniteMixtureModels:2000}. In that case, each component distribution $N(\mu_i,\Sigma_i)$ is parameterized by a mean vector $\mu_i\in\mathbb{R}^d$ and a covariance matrix $\Sigma_i\succ 0$ that is symmetric and positive definite. That is, $\theta_i=(\mu_i,\Sigma_i)$. The Gaussian distribution has the following probability density defined on the support $\mathbb{X}=\mathbb{R}^d$: \begin{equation}\label{eq:mvn} p(x;\mu_i,\Sigma_i)=\frac{1}{(2\pi)^{\frac{d}{2}}\sqrt{|\Sigma_i|}}e^{-\frac{1}{2} M_{\Sigma_i^{-1}}(x-\mu_i,x-\mu_i)}, \end{equation} where $M_Q$ denotes the squared Mahalanobis distance~\cite{bvd-2010} \begin{equation} M_{Q}(x,y)=(x-y)^T Q (x-y), \end{equation} defined for a symmetric positive definite matrix $Q\succ 0$ ($Q_i=\Sigma_i^{-1}$, the precision matrix). To draw a random variate from a Gaussian mixture model (GMM) with $k$ components, we first draw a multinomial variate $z\in\{1, ...,k\}$, and then sample a Gaussian variate from $N(\mu_z,\Sigma_z)$. A multivariate normal variate $x$ is drawn from the chosen component $N(\mu,\Sigma)$ as follows: First, we consider the Cholesky decomposition of the covariance matrix: $\Sigma=CC^T$, and take a $d$-dimensional vector with coordinates being random standard normal variates: $y=[y_1\ ...\ y_d]^T$ with $y_i=\sqrt{-2\log u_1}\cos(2\pi u_2)$ (for $u_1$ and $u_2$ uniform random variates in $[0,1)$). Finally, we assemble the Gaussian variate $x$ as $x=\mu+Cy$. This drawing process emphasizes that sampling a statistical mixture is a {\it doubly stochastic process} by essence: First, we sample a multinomial law for choosing the component, and then we sample the variate from the selected component. Figure~\ref{fig:GMM5D}(b) shows a GMM with $k=32$ components learned from a color image modeled as a 5D xyRGB point set (Figure~\ref{fig:GMM5D}(a)). Since a GMM is a {\it generative model}, we can sample the GMM to create a ``sample image'' as shown in Figure~\ref{fig:GMM5D}(c). Observe that low frequency information of the image is nicely modeled by GMMs. Figure~\ref{fig:GMMhighD}(f) shows a GMM with $k=32$ components learned from a color image modeled as a high-dimensional point set. Each $s\times s$ color image patch anchored at $(x,y)$ is modeled as a point in dimension $d=2+3s^2$. GMM representations of images and videos~\cite{VideoGMM:2004} provide a compact feature representation that can be used in many applications, like in information retrieval (IR) engines~\cite{Blobworld:2002}. 
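For concreteness, the doubly stochastic sampling procedure just described can be sketched in a few lines of Python; the mixture parameters below are illustrative toy values only, and the function name is ours.
\begin{verbatim}
import numpy as np

def sample_gmm(weights, mus, sigmas, n, seed=0):
    """Doubly stochastic GMM sampling: draw the component label z from the
    multinomial (w_1, ..., w_k), then draw x = mu_z + C_z y with
    Sigma_z = C_z C_z^T (Cholesky factor) and y standard normal (Box-Muller)."""
    rng = np.random.default_rng(seed)
    d = len(mus[0])
    chol = [np.linalg.cholesky(S) for S in sigmas]
    samples = np.empty((n, d))
    for i in range(n):
        z = rng.choice(len(weights), p=weights)                  # multinomial draw
        u1, u2 = rng.random(d), rng.random(d)                    # 1-u1 avoids log(0)
        y = np.sqrt(-2.0 * np.log(1.0 - u1)) * np.cos(2.0 * np.pi * u2)  # Box-Muller
        samples[i] = mus[z] + chol[z] @ y                        # x = mu + C y
    return samples

# toy 2-component mixture in 2D (illustrative parameters only)
w   = [0.3, 0.7]
mus = [np.zeros(2), np.array([3.0, 3.0])]
Sig = [np.eye(2), np.array([[2.0, 0.5], [0.5, 1.0]])]
X   = sample_gmm(w, mus, Sig, 1000)
\end{verbatim}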
\begin{figure} \centering (a)\includegraphics[bb=0 0 512 512,width=0.25\textwidth]{Figure/lena.png} (b)\includegraphics[bb=0 0 800 600,width=0.25\textwidth, height=0.25\textwidth]{Figure/lena_ellipses.png} (c)\includegraphics[bb=0 0 512 512,width=0.25\textwidth]{Figure/lena_sample.png} \caption{An RGB color image (a) is interpreted as a 5D xyRGB point set on which a Gaussian mixture model (GMM) with $k=32$ components is trained (b). Drawing many random variates from the generative GMM yields a sample image (c) that keeps low-frequency visual information. \label{fig:GMM5D}} \end{figure} In this paper, we consider the general case of mixtures of distributions belonging to the same exponential family~\cite{EMEF:1974}, like Gaussian mixture models~\cite{GMM-CauchySchwarz-2011} (GMMs), Rayleigh mixture models~\cite{RMM:2011} (RMMs), Laplacian mixture models (LMMs)\cite{LaplacianMM-2007}, Bernoulli mixture models~\cite{BernoulliMM-2006} (BMMs), Multinomial Mixture models~\cite{MMM:2007} (MMMs), Poisson Mixture Models (PMMs)~\cite{PMM:2006}, Weibull Mixture Models~\cite{WeiMM:2010} (WeiMMs), Wishart Mixture Models~\cite{WisMM:2012} (WisMM), etc. \begin{figure} \centering \begin{tabular}{ccc} \includegraphics[bb=0 0 512 512, width=3cm]{Figure/baboon.png} & \includegraphics[bb=0 0 667 508, width=3cm, height=3cm]{Figure/baboon_ellipsesbig.png} & \includegraphics[bb=0 0 512 512, width=3cm]{Figure/baboon_hardsegmentation.png} \\ (a) & (b) & (c)\\ \includegraphics[bb=0 0 512 512, width=3cm]{Figure/baboon_sample.png} & \includegraphics[bb=0 0 512 512, width=3cm]{Figure/baboon_patches_hardseg8x8.png} & \includegraphics[bb=0 0 92 44, width=3cm]{Figure/baboon_patches8x8.png}\\ \\ (d) & (e) & (f) \end{tabular} \caption{Modeling a color image using a Gaussian mixture model (GMM): (a) source image {\tt Baboon}, (b) a 5D $32$-GMM model depicted by its covariance ellipses, (c) hard segmentation using the GMM, (d) sampling the 5D GMM, (e) mean colors ($8\times 8$ patches) for the GMM with patch size $s=8$, (f) patch means $\mu$ for patch size $s=8$.\label{fig:GMMhighD}} \end{figure} \subsection{Contributions and prior work} Expectation-Maximization~\cite{em-1977} (EM) is a traditional algorithm for learning finite mixtures~\cite{FiniteMixtureModels:2000}. Banerjee et al.~\cite{bregmankmeans-2005} proved that EM for mixtures of exponential families amounts to performing a soft Bregman clustering. Furthermore, this EM-Bregman soft clustering equivalence was extended to total Bregman soft clustering for curved exponential families~\cite{tBDPami:2012}. Although mathematically convenient, we should remember that mixture data should be hard clustered since each observation emanates from exactly one component. It is well-known that the $k$-means clustering technique can be interpreted as a limit case of EM for isotropic Gaussian mixtures~\cite{ClusteringMVN:2009}. Kearns~et al.~\cite{HardSoftClustering:1997} cast further light on the hard/soft relationship using an information-theoretic analysis of hard $k$-means and soft expectation-maximization assignments in clustering. Banerjee et al.~\cite{BanerjeeMLEEF:2004} proved a mathematical equivalence between maximum likelihood estimation of exponential family mixtures (MLME, Maximum Likelihood Mixture Estimation) and a rate distortion problem for Bregman divergences. 
Furthermore, Banerjee et al.~\cite{ClusteringvMF:2005} proposed the hardened expectation for the special case of von Mises-Fisher mixtures (hard EM, Section 4.2 of~\cite{ClusteringvMF:2005}) for computational efficiency. In this paper, we build on the duality between Bregman divergences and exponential families~\cite{bregmankmeans-2005} to design $k$-MLE, which iteratively (1) assigns data to mixture components, (2) updates the mixture parameters \`a la $k$-means and repeats step (1) until local convergence, and (3) updates the weights and reiterates from (1) until local convergence (see Algorithm~\ref{algo:kmle}). We prove that $k$-MLE monotonically maximizes the complete likelihood function. We also discuss several initialization strategies and describe a probabilistic initialization $k$-MLE++ with guaranteed performance bounds. The paper is organized as follows: Section~\ref{sec:preliminaries} recalls the basic notions of exponential families, the Legendre transform, and Bregman divergences, and demonstrates the duality between Bregman divergences and exponential families to study the Maximum Likelihood Estimator (MLE). Section~\ref{sec:kmle-theta} presents the framework of $k$-MLE for mixtures with prescribed weights, based on the Bregman-exponential family duality. The generic $k$-MLE algorithm is described in Section~\ref{sec:fullkmle}, and Section~\ref{sec:speedup} discusses proximity location data structures to speed up the assignment step of the algorithm. Section~\ref{sec:kmlepp} presents $k$-MLE++, a probabilistic initialization of $k$-MLE. Finally, Section~\ref{sec:concl} concludes the paper and discusses avenues for future research. \section{Preliminaries}\label{sec:preliminaries} \subsection{Exponential family} An exponential family~\cite{BrownExpFam:1986} $E_F$ is a set of parametric probability distributions \begin{equation} E_F=\{p_F(x;\theta)\ |\ \theta\in\Theta\} \end{equation} whose probability density\footnote{For the sake of simplicity and brevity, we consider without loss of generality in the remainder continuous random variables on $\mathbb{R}^d$. We do not introduce the framework of probability measures nor Radon-Nikodym densities.} can be decomposed canonically as \begin{equation} \label{eq:cexpfam} p_F(x;\theta) = e^{\inner{t(x)}{\theta}-F(\theta)+k(x)} \end{equation} where $t(x)$ denotes the sufficient statistics, $\theta$ the natural parameter, $F(\theta)$ the log-normalizer, and $k(x)$ a term related to an optional auxiliary carrier measure. $\inner{x}{y}$ denotes the inner product (i.e., $x^T y$ for vectors, $\mathrm{tr}(X^T Y)$ for matrices, etc.). Let \begin{equation} \Theta=\left\{\theta\ |\ \int e^{\inner{t(x)}{\theta}+k(x)}\mathrm{d}x < \infty\right\} \end{equation} denote the natural parameter space. The dimension $D$ of the natural parameter space is called the order of the family. For the $d$-variate Gaussian distribution, the order is $D=d+\frac{d(d+1)}{2}=\frac{d(d+3)}{2}$. It can be proved using the Cauchy-Schwarz inequality~\cite{BrownExpFam:1986} that the log-normalizer\footnote{Also called in the literature the log-partition function, the cumulant function, or the log-Laplace function.} $F$ is a strictly convex and differentiable function on an open convex set $\Theta$. The log-density of an exponential family is \begin{equation} l_{F}(x;\theta)= \inner{t(x)}{\theta}-F(\theta)+k(x) \end{equation} To build an exponential family, we need to choose a basic density measure on a support $\X$, a sufficient statistic $t(x)$, and an auxiliary carrier measure term $k(x)$. 
Taking the log-Laplace transform, we get \begin{equation} F(\theta) = \log \int_{x\in\mathbb{X}} e^{\inner{t(x)}{\theta}+k(x)} \mathrm{d}x, \end{equation} and define the natural parameter space as the $\theta$ values ensuring convergence of the integral. In fact, many usual statistical distributions such as the Gaussian, Gamma, Beta, Dirichlet, Poisson, multinomial, Bernoulli, von Mises-Fisher, Wishart, and Weibull are exponential families in disguise. In that case, we start from their probability density or mass function to retrieve the canonical decomposition of Eq.~\ref{eq:cexpfam}. See~\cite{ef-flashcards-2009} for usual canonical decomposition examples of some distributions, including a bijective conversion function $\theta(\lambda)$ for going from the usual $\lambda$-parameterization of the distribution to the $\theta$-parameterization. Furthermore, exponential families can be parameterized canonically either using the natural coordinate system $\theta$, or by using the dual moment parameterization $\eta$ (also called the mean value parameterization) arising from the Legendre transform (see Appendix~\ref{sec:mvn} for the case of Gaussians). \subsection{Legendre duality and convex conjugates} For a strictly convex and differentiable function $F:\mathds{N}\rightarrow \mathbb{R}$, we define its convex conjugate by \begin{equation} F^*(\eta) = \sup_{\theta\in\mathds{N}} \{ \underbrace{\inner{\eta}{\theta}-F(\theta)}_{l_F(\eta;\theta)} \} \end{equation} The maximum is attained for $\eta=\nabla F(\theta)$ and is unique since $F$ is strictly convex ($\nabla^2_\theta l_F(\eta;\theta) = -\nabla ^2 F(\theta) \prec 0$): \begin{equation} \nabla_\theta l_F(\eta;\theta) = \eta-\nabla F(\theta)=0 \Rightarrow \eta=\nabla F(\theta) \end{equation} Thus strictly convex and differentiable functions come in pairs $(F,F^*)$ with gradients being functional inverses of each other: $\nabla F=(\nabla F^*)^{-1}$ and $\nabla F^*=(\nabla F)^{-1}$. The Legendre transform is an involution: ${(F^*)}^*=F$ for strictly convex and differentiable functions. In order to compute $F^*$, we only need to find the functional inverse $(\nabla F)^{-1}$ of $\nabla F$ since \begin{equation}\label{eq:Fdual} F^{*}(\eta) = \inner{(\nabla F)^{-1}(\eta)}{\eta} - F((\nabla F)^{-1}(\eta)). \end{equation} However, this inversion may require numerical solving when no analytical expression of $(\nabla F)^{-1}$ is available. See for example the gradient of the log-normalizer of the Gamma distribution~\cite{ef-flashcards-2009}, or of the Dirichlet and von Mises-Fisher distributions~\cite{ClusteringvMF:2005}. \subsection{Bregman divergence} A Bregman divergence $B_F$ is defined for a strictly convex and differentiable generator $F$ as \begin{equation} B_F(\theta_1 : \theta_2) = F(\theta_1)-F(\theta_2)-\inner{\theta_1-\theta_2}{\nabla F(\theta_2)}. \end{equation} The Kullback-Leibler divergence (relative entropy) between two members $p_1=p_F(x;\theta_1)$ and $p_2=p_F(x;\theta_2)$ of the same exponential family amounts to computing a Bregman divergence on the corresponding swapped natural parameters: \begin{eqnarray} \mathrm{KL}(p_1:p_2) &=& \int_{x\in\mathbb{X}} p_1(x)\log\frac{p_1(x)}{p_2(x)}\mathrm{d}x,\\ &=& B_F(\theta_2:\theta_1),\\ &=& F(\theta_2)-F(\theta_1)-\inner{\theta_2-\theta_1}{\nabla F(\theta_1)} \end{eqnarray} The proof follows from the fact that $E[t(X)]=\int_{x\in\mathbb{X}} t(x)p_F(x;\theta)\mathrm{d}x=\nabla F(\theta)$~\cite{CrossEntropy:2010}. 
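As a quick numerical sanity check of this identity, consider two Poisson distributions, for which $t(x)=x$, $\theta=\log\lambda$ and $F(\theta)=e^{\theta}$; the following Python sketch (illustrative values only) verifies that $\mathrm{KL}(p_1:p_2)=B_F(\theta_2:\theta_1)$ using the closed-form relative entropy between two Poisson laws.
\begin{verbatim}
import numpy as np

# Poisson as an exponential family: t(x) = x, theta = log(lambda), F(theta) = exp(theta)
F, gradF = np.exp, np.exp              # eta = grad F(theta) = lambda

def bregman(t1, t2):
    """B_F(t1 : t2) = F(t1) - F(t2) - <t1 - t2, grad F(t2)>."""
    return F(t1) - F(t2) - (t1 - t2) * gradF(t2)

lam1, lam2 = 2.5, 4.0
th1, th2 = np.log(lam1), np.log(lam2)

kl = lam1 * np.log(lam1 / lam2) + lam2 - lam1     # KL between Poisson(lam1), Poisson(lam2)
print(kl, bregman(th2, th1))                      # identical: KL(p1:p2) = B_F(theta2:theta1)
\end{verbatim}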
Using the Legendre transform, we further have the following equivalences of the relative entropy: \begin{eqnarray} B_F(\theta_2:\theta_1) &=& B_{F*}(\eta_1 :\eta_2),\\ &=& \underbrace{ F(\theta_2) + F^*(\eta_1) -\inner{\theta_2}{\eta_1}}_{C_F(\theta_2 : \eta_1)=C_{F^*}(\eta_1:\theta_2)} \label{eq:cd}, \end{eqnarray} where $\eta=\nabla F(\theta)$ is the dual moment parameter (and $\theta=\nabla F^*(\eta)$). Information geometry~\cite{informationgeometry-2000} often considers the canonical divergence $C_F$ of Eq.~\ref{eq:cd} that uses the mixed coordinate systems $\theta/\eta$, while computational geometry~\cite{bvd-2010} tends to consider dual Bregman divergences, $B_F$ or $B_{F^*}$, and visualize structures in one of those two canonical coordinate systems. Those canonical coordinate systems are dually orthogonal since $\nabla^2 F(\theta) \nabla^2 F^*(\eta)=I$, the identity matrix. \subsection{Maximum Likelihood Estimator (MLE)} For exponential family mixtures with a single component $M\sim E_F(\theta_1)$ ($k=1$, $w_1=1$), we easily estimate the parameter $\theta_1$. Given $n$ independent and identically distributed observations $x_1, ..., x_n$, the Maximum Likelihood Estimator (MLE) maximizes the likelihood function: \begin{eqnarray} \hat\theta &=& \mathrm{argmax}_{\theta\in\Theta} L(\theta; x_1, ..., x_n), \\ & =& \mathrm{argmax}_{\theta\in\Theta} \prod_{i=1}^n p_F(x_i;\theta),\\ &=& \mathrm{argmax}_{\theta\in\Theta} e^{\sum_{i=1}^n \left(\inner{t(x_i)}{\theta}-F(\theta)+k(x_i)\right)} \end{eqnarray} For exponential families, the MLE admits a unique maximum since the Hessian of $F$ is positive definite ($X\sim E_F(\theta) \Rightarrow \nabla^2 F = \mathrm{var}[t(X)]\succ 0$): \begin{equation} \nabla F(\hat\theta) = \frac{1}{n} \sum_{i=1}^n t(x_i) \end{equation} The MLE is consistent and efficient with asymptotic normal distribution: \begin{equation} \hat\theta \sim N \left(\theta,\frac{1}{n} I_F^{-1}(\theta)\right), \end{equation} where $I_F$ denotes the Fisher information matrix: \begin{equation} I_F(\theta) = \mathrm{var}[t(X)] = \nabla ^2 F(\theta) =(\nabla^2 F^*(\eta))^{-1} \end{equation} (This proves the strict convexity of $F$ since the covariance matrix is positive definite.) Note that the MLE may be biased (for example, for the variance of normal distributions). By using the Legendre transform, the log-density of an exponential family can be interpreted as a Bregman divergence~\cite{bregmankmeans-2005}: \begin{equation} \log p_F(x;\theta) = -B_{F^*}(t(x) : \eta) + F^*(t(x)) + k(x) \end{equation} Table~\ref{tab:duality} reports some illustrative examples of the Bregman divergence $\leftrightarrow$ exponential family duality. \begin{table} \centering \begin{tabular}{|ccc|}\hline Exponential Family & $\Leftrightarrow$ & Dual Bregman divergence \\ $p_F(x|\theta)$ & &$B_{F^*}$\\ \hline Spherical Gaussian & $\Leftrightarrow$ & Squared Euclidean divergence\\ Multinomial & $\Leftrightarrow$ & Kullback-Leibler divergence\\ Poisson & $\Leftrightarrow$ & $I$-divergence\\ Geometric & $\Leftrightarrow$ & Itakura-Saito divergence\\ Wishart & $\Leftrightarrow$ & log-det/Burg matrix divergence\\ \hline \end{tabular} \caption{Some examples illustrating the duality between exponential families and Bregman divergences.\label{tab:duality}} \end{table} Let us use the Bregman divergence-exponential family duality to prove that \begin{equation} \hat\theta=\arg\max_{\theta\in\Theta} \prod_{i=1}^n p_F(x_i;\theta)=\nabla F^{-1}\left(\frac{1}{n}\sum_{i=1}^n t(x_i)\right). 
\end{equation} Maximizing the average log-likelihood $\bar l=\frac{1}{n} \log L$, we have: \begin{eqnarray} &\max_{\theta\in\mathds{N}} & \bar l(\theta;x_1, ..., x_n)=\frac{1}{n} \sum_{i=1}^n (\inner{t(x_i)}{\theta}-F(\theta)+k(x_i)) \\ &\max_{\theta\in\mathds{N}} & \frac{1}{n} \sum_{i=1}^n \left(-B_{F^*}(t(x_i):\eta)+F^*(t(x_i)) + k(x_i)\right)\\ &\equiv \min_{\eta\in\mathds{M}} & \frac{1}{n} \sum_{i=1}^n B_{F^*}(t(x_i):\eta) \end{eqnarray} Since right-sided Bregman centroids, defined as the minimizers of the average divergence, always coincide with the center of mass~\cite{bregmankmeans-2005} (independently of the generator $F$), it follows that \begin{equation} \hat\eta=\frac{1}{n}\sum_{i=1}^n t(x_i)=\nabla F(\hat\theta). \end{equation} Hence $\hat\theta=(\nabla F)^{-1}\left(\frac{1}{n}\sum_{i=1}^n t(x_i)\right)$. In information geometry~\cite{informationgeometry-2000}, the point $\hat P$ with $\eta$-coordinate $\hat\eta$ (and $\theta$-coordinate $\nabla F^{-1}(\hat\eta)=\hat\theta$) is called the {\em observed} point. The best average log-likelihood reached by the MLE at $\hat\eta$ is \begin{eqnarray} \bar l(\hat\theta; x_1,..., x_n) &=& \frac{1}{n} \sum_{i=1}^n ( -B_{F^*}(t(x_i):\hat\eta)+F^*(t(x_i)) + k(x_i) ), \\ &=& \frac{1}{n} \sum_{i=1}^n (-F^*(t(x_i))+F^*(\hat\eta)+\inner{t(x_i)-\hat\eta}{\nabla F^*(\hat\eta)}+F^*(t(x_i)) + k(x_i)),\\ &=& F^*(\hat\eta)+\frac{1}{n}\sum_{i=1}^n k(x_i)+\Inner{\underbrace{\frac{1}{n}\sum_{i=1}^n t(x_i)-\hat\eta}_{0}}{\hat\theta},\\ &= & F^*(\hat\eta) +\frac{1}{n}\sum_{i=1}^n k(x_i). \end{eqnarray} The Shannon entropy $H_F(\theta)$ of $p_F(x;\theta)$ is $H_F(\theta)=-F^*(\eta)-\int k(x)p_F(x;\theta)\mathrm{d}x$~\cite{CrossEntropy:2010}. Thus maximizing the likelihood is related to minimizing the entropy (i.e., reducing the uncertainty) of the empirical distribution. Another proof follows from Appendix~\ref{sec:kmeans}, where it is recalled that the Bregman information~\cite{bregmankmeans-2005} (the minimum of the average right-centered Bregman divergence), obtained for the center of mass, is a Jensen diversity index. Thus we have \begin{eqnarray} \bar l &=& -J_{F^*}(t(x_1), \ldots, t(x_n))+\frac{1}{n} \sum_{i=1}^n F^*(t(x_i)) + \frac{1}{n} \sum_{i=1}^n k(x_i),\\ &=& - \left(\frac{1}{n}\sum_{i=1}^n F^*(t(x_i)) -F^*(\hat\eta)\right) +\frac{1}{n} \sum_{i=1}^n F^*(t(x_i)) + \frac{1}{n} \sum_{i=1}^n k(x_i),\\ &=& F^*(\hat\eta)+ \frac{1}{n} \sum_{i=1}^n k(x_i) \end{eqnarray} Appendix~\ref{sec:mvn} reports the dual canonical parameterizations of the multivariate Gaussian distribution family. \section{$k$-MLE: Learning mixtures with given prescribed weights}\label{sec:kmle-theta} Let $\X=\{x_1, ..., x_n\}$ be a sample set of independently and identically distributed observations from a finite mixture $m(x|w,\theta)$ with $k$ components. The joint probability distribution of the observations $x_i$'s with the missing component labels $z_i$'s is \begin{equation} p(x_1, z_1, ..., x_n, z_n | w,\theta) = \prod_{i=1}^n p(z_i | w) p(x_i | z_i, \theta) \end{equation} To optimize the joint distribution, we could test (theoretically) all the $k^n$ label assignments, and choose the best one. This is not tractable in practice since it is exponential in $n$ for $k>1$. 
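Before turning to the latent-variable treatment, the following toy Python sketch makes this combinatorial blow-up explicit by brute-forcing all $k^n$ labelings for a tiny one-dimensional data set with $k=2$ unit-variance Gaussian components (fitting the per-cluster weight and mean by maximum likelihood for each labeling). The data values are illustrative only; even this minimal example already enumerates $2^5=32$ assignments.
\begin{verbatim}
import itertools
import numpy as np

x, k = np.array([-1.2, -0.8, 0.1, 2.9, 3.4]), 2   # n = 5 observations, k = 2

best_ll, best_z = -np.inf, None
for z in itertools.product(range(k), repeat=len(x)):   # all k**n labelings
    z, ll = np.array(z), 0.0
    for j in range(k):
        xs = x[z == j]
        if xs.size == 0:
            continue
        w_j, mu_j = xs.size / x.size, xs.mean()          # per-cluster MLE
        ll += np.sum(np.log(w_j) - 0.5 * (xs - mu_j) ** 2 - 0.5 * np.log(2 * np.pi))
    if ll > best_ll:
        best_ll, best_z = ll, z

print(best_z, best_ll)    # k**n grows exponentially with n
\end{verbatim}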
Since we do not observe the latent variables $z_1, ..., z_n$, we marginalize the hidden variables to get \begin{equation} p(x_1, ..., x_n | w,\theta) = \prod_{i=1}^n \sum_{j=1}^k p(z_i=j|w) p(x_i|z_i=j,\theta_j) \end{equation} The average log-likelihood function is \begin{eqnarray} \bar l(x_1, ..., x_n | w,\theta) &=& \frac{1}{n} \log p(x_1, ..., x_n | w,\theta),\\ &=& \frac{1}{n} \sum_{i=1}^n \log \sum_{j=1}^k p(z_i=j|w) p(x_i|z_i=j,\theta_j). \end{eqnarray} Let $\delta_j(z_i)=1$ if and only if $x_i$ has been sampled from the $j$th component, and $0$ otherwise. The complete average log-likelihood can then be rewritten as \begin{eqnarray} \bar l(x_1, z_1, ..., x_n, z_n | w,\theta) &=& \frac{1}{n} \sum_{i=1}^n \log \prod_{j=1}^k (w_j p_F(x_i|\theta_j))^{\delta_j(z_i)} \\ &=& \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^k \delta_j(z_i) (\log p_F(x_i|\theta_j) + \log w_j) \end{eqnarray} Using the bijection between exponential families and dual Bregman divergences~\cite{bregmankmeans-2005}, we have the mathematical equivalence $\log p_F(x|\theta_j) = -B_{F^*}(t(x):\eta_j)+F^*(t(x))+k(x)$, where $\eta_j=\nabla F(\theta_j)$ is the moment parameterization of the $j$-th component exponential family distribution. It follows that the complete average log-likelihood function is written as \begin{eqnarray} \bar l(x_1, ..., x_n | w,\theta) &=& \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^k \delta_j(z_i) (-B_{F^*}(t(x_i):\eta_j)+F^*(t(x_i))+k(x_i)+ \log w_j)\\ &=& \left( \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^k \delta_j(z_i) (-B_{F^*}(t(x_i):\eta_j)+ \log w_j) \right) + \frac{1}{n} \sum_{i=1}^n F^*(t(x_i))+k(x_i). \label{eq:ll} \end{eqnarray} By removing the constant terms $ \frac{1}{n}\sum_{i=1}^n (F^*(t(x_i))+k(x_i))$ independent of the mixture moment parameters (the $\eta$'s), maximizing the complete average log-likelihood amounts to equivalently minimizing the following loss function: \begin{eqnarray} \bar l' &=& \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^k \delta_j(z_i) (B_{F^*}(t(x_i):\eta_j)-\log w_j),\\ &= & \frac{1}{n} \sum_{i=1}^n \min_{j=1}^k (B_{F^*}(y_i:\eta_j)-\log w_j), \label{eq:dualkmeans}\\ &=& \mathrm{kmeans}_{F^*,\log w}(\mathcal{Y} : H), \end{eqnarray} where $\mathcal{Y}=\{y_1=t(x_1), ..., y_n=t(x_n)\}$ and $H=\{\eta_1, ..., \eta_k\}$. \begin{remark} It is the argmin of Eq.~\ref{eq:dualkmeans} that gives the hidden component labels of the $x_i$'s. \end{remark} \begin{remark} Observe that since $\forall i\in\{1, ...,k\}, -\log w_i \geq 0$ (since $w_i\leq 1$), we have a per-cluster additively weighted dual Bregman divergence $B_{F^*}(y_i:\eta_j)-\log w_j\geq 0$. Depending on the weights (e.g., $w\rightarrow 0$), we may have some empty clusters. In that case, the weight of a cluster is set to zero (and the component parameter is set to $\emptyset$ by convention). Note that it makes sense to consider $(\leq k)$-means instead of $k$-means in the sense that we would rather upper bound the maximum complexity of the model than fix it precisely. \end{remark} Eq.~\ref{eq:dualkmeans} is precisely the loss function of a per-cluster additive Bregman $k$-means (see Appendix~\ref{sec:kmeans}) defined for the Legendre convex conjugate $F^*$ of the log-normalizer $F$ of the exponential family for the sufficient statistic points $\mathcal{Y} = \{y_i=t(x_i)\}_{i=1}^n$. It follows that {\em any} Bregman $k$-means heuristic monotonically decreases the loss function and reaches a local minimum (corresponding to a local maximum of the equivalent complete likelihood function). 
We can either use the batched Bregman Lloyd's $k$-means~\cite{bregmankmeans-2005}, the Bregman Hartigan and Wong's greedy cluster swap heuristic~\cite{KmeansHartiganWong-1979,HartiganKmeans:2010}, or the Kanungo et al.~\cite{KanungoKmeans-2004} $(9+\epsilon)$-approximation global swap approximation algorithm. \begin{remark} The likelihood function $L$ is equal to $e^{n\bar l}$. The average likelihood function $\bar L$ is defined by taking the geometric mean $\bar L=L^{\frac{1}{n}}$. \end{remark} The following section shows how to update the weights once the local convergence of the assignment-$\eta$ of the $k$-MLE loop has been reached. \section{General $k$-MLE including mixture weight updates}\label{sec:fullkmle} When $k$-MLE with prescribed weights reaches a local minimum (see Eq.~\ref{eq:ll} and Eq.~\ref{eq:dualkmeans} and the appendix~\ref{sec:kmeans}), the current loss function is equal to \begin{eqnarray} \bar l=\underbrace{\frac{1}{n} \sum_{i=1}^n \sum_{j=1}^k \delta_j(z_i) (B_{F^*}(t(x_i):\eta_j) - \log w_j)}_{\mbox{Minimized by additive Bregman $k$-means, see Appendix}} &-& \left( \frac{1}{n} \sum_{i=1}^n F^*(t(x_i))+k(x_i) \right),\\ \bar l=\sum_{j=1}^k \alpha_j J_{F^*}(\mathcal{C}_j)-\alpha_j\log w_j &-& \left(\frac{1}{n}\sum_{i=1}^n F^*(t(x_i))+k(x_i)\right),\label{eq:llw} \end{eqnarray} where $\alpha_i=\frac{|\mathcal{C}_i|}{n}$ denotes the proportion of points assigned to the $i$-th cluster $\mathcal{C}_i$, and $\alpha_i J_{F^*}(\mathcal{C}_i)$ is the weighted Jensen diversity divergence of the cluster. In order to further minimize the average complete likelihood of Eq.~\ref{eq:llw}, we update the mixture weights $w_i$'s by minimizing the criterion: \begin{eqnarray} &&\min_{w\in\Delta_k} \sum_{j=1}^k -\alpha_j\log w_j \\ &=& \min_{w\in\Delta_k} H^{\times}(\alpha:w), \end{eqnarray} where $H^{\times}(p:q)=-\sum_{i=1}^k p_i\log q_i$ denotes the Shannon cross-entropy, and $\Delta_k$ the $(k-1)$-dimensional probability simplex. The cross-entropy $H^\times(p:q)$ is minimized for $p=q$, and yields $H^\times(p,p)=H(p)=-\sum_{i=1}^k p_i \log p_i$, the Shannon entropy. Thus we update the weights by taking the relative proportion of points falling into the clusters: \begin{equation} \forall i\in\{1, ...,k\}, w_i\leftarrow \alpha_i. \end{equation} After updated the weights, the average complete log-likelihood is \begin{equation} \bar l= \sum_{i=1}^k w_i J_{F^*}(\mathcal{C}_i) + H(w) - \left( \frac{1}{n}\sum_{i=1}^n F^*(t(x_i))+k(x_i) \right). \end{equation} We summarize the $k$-MLE algorithm in the boxed Algorithm~\ref{algo:kmle}. \begin{algo} \caption{Generic $k$-MLE for learning an exponential family mixture model.\label{algo:kmle}} \underline{Input}:\\ \begin{tabular}{lll} $\X$ &:& a set of $n$ identically and independently distributed observations: $\X=\{x_1, ..., x_n\}$\\ $F$ &:& log-normalizer of the exponential family, characterizing $E_F$ \\ $\nabla F$ &:& gradient of $F$ for moment $\eta$-parameterization: $\eta=\nabla F(\theta)$\\ $\nabla F^{-1}$ &:& functional inverse of the gradient of $F$ for $\theta$-parameterization: $\theta=\nabla F^{-1}(\eta)$\\ $t(x)$ &:& the sufficient statistic of the exponential family\\ $k$ &:& number of clusters\\ \end{tabular} \begin{itemize} \item 0. {\bf Initialization}: $\forall i\in\{1, ..., k\},$ let $w_i=\frac{1}{k}$ and $\eta_i=t(x_i)$\\ (Proper initialization is further discussed later on). \item 1. 
{\bf Assignment}: $\forall i\in\{1, ...,n\}, z_i=\mathrm{argmin}_{j=1}^k B_{F^*}(t(x_i):\eta_j)-\log w_j$.\\ Let $\forall i\in\{1, ..., k\}\ \mathcal{C}_i=\{x_j | z_j=i\}$ be the cluster partition: $\X=\cup_{i=1}^k \mathcal{C}_i$.\\ (some clusters may become empty depending on the weight distribution) \item 2. {\bf Update the $\eta$-parameters}: $\forall i\in\{1, ..., k\}, \eta_i=\frac{1}{|\mathcal{C}_i|}\sum_{x\in\mathcal{C}_i} t(x)$.\\ (By convention, $\eta_i=\emptyset$ if $|\mathcal{C}_i|=0$) {\bf Goto step~1} unless local convergence of the complete likelihood is reached. \item 3. {\bf Update the mixture weights}: $\forall i\in\{1, ..., k\}, w_i=\frac{1}{n}|\mathcal{C}_i|$.\\ {\bf Goto step~1} unless local convergence of the complete likelihood is reached. \end{itemize} \underline{Output}: An exponential family mixture model $m(x)$ (EFMM) parameterized in the natural coordinate system: $\forall i\in\{1,...,k\}, \theta_i=(\nabla F)^{-1}(\eta_i)=\nabla F^*(\eta_i)$: $$ m(x) = \sum_{i=1}^k w_i p_F(x|\theta_i) $$ \end{algo} \begin{remark} Note that we can also do after the assignment step of data to clusters both (i) the mixture $\eta$-parameter update and (ii) the mixture $w$-weight update consecutively in a single iteration of the $k$-MLE loop. This corresponds to the Bregman hard expectation-maximization (Bregman Hard EM) algorithm described in boxed Algorithm~\ref{algo:hardEM}. This Hard EM algorithm is straightforwardly implemented in legacy source codes by hardening the weight membership in the E-step of the EM. Hard EM was shown computationally efficient when learning mixtures of von-Mises Fisher (vMF) distributions~\cite{ClusteringvMF:2005}. Indeed, the log-normalizer $F$ (used when computing densities) of vMF distributions requires to compute a modified Bessel function of the first kind~\cite{vMF:2011}, that is only invertible approximately using numerical schemes. \end{remark} \begin{algo} \caption{Hard EM for learning an exponential family mixture model.\label{algo:hardEM}} \begin{itemize} \item 0. {\bf Initialization}: $\forall i\in\{1, ..., k\},$ let $w_i=\frac{1}{k}$ and $\eta_i=t(x_i)$\\ (Proper initialization is further discussed later on). \item 1. {\bf Assignment}: $\forall i\in\{1, ...,n\}, z_i=\mathrm{argmin}_{j=1}^k B_{F^*}(t(x_i):\eta_j)-\log w_j$.\\ Let $ \forall i\in\{1, ..., k\}\ \mathcal{C}_i=\{x_j | z_j=i\}$ be the cluster partition: $\X=\cup_{i=1}^k \mathcal{C}_i$. \item 2. {\bf Update the $\eta$-parameters}: $\forall i\in\{1, ..., k\}, \eta_i=\frac{1}{|\mathcal{C}_i|}\sum_{x\in\mathcal{C}_i} t(x)$. \item 3. {\bf Update the mixture weights}: $\forall i\in\{1, ..., k\}, w_i=\frac{|\mathcal{C}_i|}{n}$. \item {\bf Goto step~1} unless local convergence of the complete likelihood is reached. \end{itemize} \end{algo} We can also sparsify EM by truncating to the first $D$ entries on each row (thus, we obtain a well-defined centroid per cluster for non-degenerate input). This is related to the sparse EM proposed in~\cite{SparseEM:1998}. Degeneraties of the EM GMM is identified and discussed in~\cite{EMConvergence:2003}. Asymptotic convergence rate of the EM GMM is analyzed in~\cite{EM-GMM-convergence:2001}. There are many ways to initialize $k$-means~\cite{kmeansInit:1999}. Initialization shall be discussed in Section~\ref{sec:kmlepp}. 
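To fix ideas, the following Python sketch implements the Hard EM variant (Algorithm~\ref{algo:hardEM}) for unit-variance spherical Gaussians, for which $t(x)=x$ and $B_{F^*}$ reduces to the halved squared Euclidean distance (Table~\ref{tab:duality}). It is an illustrative sketch only, not an optimized implementation, and the function name is ours.
\begin{verbatim}
import numpy as np

def hard_em_spherical(X, k, iters=50, seed=0):
    """Hard EM (Algorithm 2) for unit-variance spherical Gaussians:
      1. assignment:    z_i = argmin_j 0.5 ||x_i - eta_j||^2 - log w_j
      2. eta update:    eta_j = center of mass of the points assigned to cluster j
      3. weight update: w_j = |C_j| / n"""
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    eta = X[rng.choice(n, size=k, replace=False)].copy()   # step 0: eta_i = t(x_i)
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        with np.errstate(divide="ignore"):
            logw = np.log(w)          # empty clusters (w_j = 0) attract no point
        cost = 0.5 * ((X[:, None, :] - eta[None, :, :]) ** 2).sum(-1) - logw[None, :]
        z = cost.argmin(axis=1)                              # step 1
        for j in range(k):
            if np.any(z == j):
                eta[j] = X[z == j].mean(axis=0)              # step 2
        w = np.bincount(z, minlength=k) / n                  # step 3
    return w, eta, z

X = np.random.default_rng(1).normal(size=(500, 2))
weights, centers, labels = hard_em_spherical(X, k=3)
\end{verbatim}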
\section{Speeding up $k$-MLE and Hard EM using Bregman NN queries}\label{sec:speedup} The proximity cells $\{\V_1, ..., \V_k\}$ induced by the cluster centers $\mathcal{C}=\{c_1, ..., c_k\}$ (in the $\eta$-coordinate system) are defined by \begin{equation} \V_j = \left\{ x\in\mathbb{X} \ |\ B_{F^*}(t(x):\eta_j)-\log w_j \leq B_{F^*}(t(x):\eta_l)-\log w_l, \forall l\in\{1, ...,k\}\backslash\{j\} \right\} \end{equation} and partition the support $\mathbb{X}$ into a Voronoi diagram. This partition is precisely equivalent to the intersection of a Bregman Voronoi diagram for the dual log-normalizer $F^*$ with additive weights~\cite{bvd-2010} on the expectation parameter space $\mathds{M}=\{\eta=\nabla F(\theta)\ | \ \theta\in\mathds{N} \}$ with the hypersurface\footnote{Note that there is only one global minimum for the distance $B_{F^*}(y:\eta)$ with $y\in\mathbb{T}$.} $\mathbb{T}=\{t(x)\ |\ x\in\mathbb{X}\}$. For the case of Gaussian mixtures, the log-density of the joint distribution $w_i p_F(x;\mu_i,\Sigma_i)$ induces a partition of the space into an anisotropic weighted Voronoi diagram~\cite{AnisotropicVoronoiDiagram:2003}. This is easily understood by taking {\em minus the log-density} of the Gaussian distribution (see Eq.~\ref{eq:mvn}): \begin{equation} -\log p(x;\mu_i,\Sigma_i)= \frac{1}{2} M_{\Sigma_i^{-1}}(x-\mu_i,x-\mu_i) + \frac{1}{2}\log |\Sigma_i| +\frac{d}{2}\log 2\pi, \end{equation} with $M_Q$ the squared Mahalanobis distance $M_{Q}(x,y)=(x-y)^T Q (x-y)$. This is an additively weighted Bregman divergence with mass $m_i=\frac{1}{2}\log |\Sigma_i| +\frac{d}{2}\log 2\pi$ and generator $F_i(x)=\inner{x}{\Sigma_i^{-1} x}$ defined by the precision matrix $\Sigma_i^{-1}$ (see the Appendix). Figure~\ref{fig:voronoi} displays the anisotropic Voronoi diagram~\cite{AnisotropicVoronoiDiagram:2003} of a 5D xyRGB GMM restricted to the xy plane. We color each pixel with the mean color of the anisotropic Voronoi cell it belongs to. \begin{figure} \centering (a)\includegraphics[bb=0 0 512 512,width=0.45\textwidth]{Figure/baboon.png} (b)\includegraphics[bb=0 0 512 512,width=0.45\textwidth]{Figure/baboon_hardgeneration.png} \caption{From the source color image (a), we build a 5D GMM with $k=32$ components, and color each pixel with the mean color of the anisotropic Voronoi cell it belongs to.\label{fig:voronoi}} \end{figure} When the order of the exponential family (i.e., the number of parameters) is small (say, $D\leq 3$), we can explicitly compute this additively weighted Bregman Voronoi diagram in the moment parameter space $\mathds{M}$, and use proximity location data structures designed for geometric partitions bounded by planar walls. Otherwise, we speed up the assignment step of $k$-MLE/Hard EM by using proximity location data structures such as Bregman ball trees~\cite{2009-W-BregmanTrees-EuroCG} or Bregman vantage point trees~\cite{2009-BregmanVantagePointTree-IEEE}. See also~\cite{BregmanSearch:2011}. Besides Lloyd's batched $k$-means heuristic~\cite{LLoyd:1982,MacQueen:1967,Forgy-1965}, we can also implement other $k$-means heuristics in $k$-MLE, like Hartigan and Wong's greedy swap~\cite{KmeansHartiganWong-1979,HartiganKmeans:2010}, which selects a point and optimally reassigns it, or Kanungo et al.'s~\cite{KanungoKmeans-2004} global swap optimization, etc. \begin{remark} The MLE equation $\hat\eta=\nabla F(\hat\theta)=\frac{1}{n}\sum_{i=1}^n t(x_i)$ may yield a transcendental equation. 
That is, when $(\nabla F)^{-1}$ is not available analytically (e.g., the von Mises-Fisher family~\cite{ClusteringvMF:2005}), the convex conjugate $F^*$ needs to be approximated by numerically computing the reciprocal gradient $(\nabla F)^{-1}$ (see Eq.~\ref{eq:Fdual}). Sra~\cite{vMF:2011} focuses on efficiently solving the MLE equation\footnote{See also the R software package {\tt movMF}.} for the von Mises-Fisher distributions. \end{remark} \section{Initializing $k$-MLE using $k$-MLE++}\label{sec:kmlepp} To complete the description of $k$-MLE in boxed Algorithm~1, there remains the problem of properly initializing $k$-MLE (step $0$). One way to perform this initialization is to compute the global MLE parameter for the full set $\X$, \begin{equation} \hat\eta=\frac{1}{n}\sum_{i=1}^n t(x_i), \end{equation} and then consider the {\em restricted exponential family} of order $d\leq D$ with {\em restricted sufficient statistic} the first $d$ components $(t_1(x), ..., t_d(x))$ of the full family statistic. We initialize the $i$-th cluster with $\eta_i^{(0)}=(t_1(x_i), ..., t_d(x_i), \hat\eta_{d+1}, ..., \hat\eta_D)$. For the case of multivariate Gaussians with $D=\frac{d(d+3)}{2}$, this amounts to computing the covariance matrix $\hat\Sigma$ of the full set and then setting the translation parameter to $x_i$: $\eta_i^{(0)}=(x_i,-\frac{1}{2}(\hat\Sigma+x_ix_i^T))$ (see appendix~\ref{sec:mvn}). This initialization is a heuristic with {\it no guaranteed performance} on the initial average complete log-likelihood $\bar l$ compared to the best one $\bar l^*$. Note that when $D=d$ (e.g., Poisson, Weibull, Rayleigh, isotropic Gaussian, etc.), we need distinct initializations: instead of taking the global MLE, we rather split the data set into $k$ groups of size $\frac{n}{k}$ and take the MLE of each group for initialization. A good geometric split is given by using a Voronoi partition diagram as follows: We run Bregman $k$-means on $\mathcal{Y}$ for the dual convex conjugate $F^*$ and set the mixture parameters as the MLEs of the clusters and the weights as the relative proportions of data in the clusters. This corroborates the experimental observation of Banerjee et al.~\cite{bregmankmeans-2005} that clustering works best in practice if we choose the dual Bregman divergence associated with the exponential family of the mixture sample set. Let us further use the dual Bregman $k$-means interpretation of EM to perform this initialization efficiently. Assume uniform mixture weights, that is, $\forall i\in\{1, ...,k\}, w_i=\frac{1}{k}$. Maximizing the average complete log-likelihood amounts to minimizing (see Eq.~\ref{eq:dualkmeans}): \begin{equation} \bar l'' = \frac{1}{n} \sum_{i=1}^n \min_{j=1}^k B_{F^*}(y_i=t(x_i):\eta_j). \end{equation} The likelihood function $L(x_1, ..., x_n |\theta,w)$ is \begin{equation} L = e^{-n\mathrm{kmeans}_{F^*}(\mathcal{C})-n\log k+\sum_{i=1}^n (F^*(t(x_i))+k(x_i))}. \end{equation} Thus for uniform mixture weights, the ratio between two different $k$-means optimizations with respective cluster centers $\mathcal{C}$ and $\mathcal{C}'$ is: \begin{equation} \frac{L}{L'} = e^{-n (\mathrm{kmeans}_{F^*}(\mathcal{C})-\mathrm{kmeans}_{F^*}(\mathcal{C}')) } \end{equation} We can use the standard Bregman $k$-means++ initialization~\cite{BregmanClustering-2010} on the convex conjugate $F^*$, which probabilistically gives a guaranteed $O(\mu^{-2} \log k)$ performance, where $\mu$ is a constant factor to be explained below. 
The Bregman $k$-means++ algorithm is recalled in boxed Algorithm~\ref{algo:bregkmeanspp}. \begin{algo} \caption{Bregman $k$-means++: probabilistically guarantees a good initialization.\label{algo:bregkmeanspp}} \begin{itemize} \item Choose first seed $\mathcal{C}=\{y_l\}$, for $l$ uniformly random in $\{1, ..., n\}$. \item For $i\leftarrow 2$ to $k$ \begin{itemize} \item Choose $c_i\in \{y_1, ..., y_n\}$ with probability $$ p_i = \frac{B_F(c_i:\mathcal{C})}{\sum_{i=1}^n B_F(y_i:\mathcal{C})} = \frac{B_F(\mathcal{Y}:\mathcal{C})}{\mathrm{kmeans}_F(\mathcal{Y}:\mathcal{C})}, $$ where $B_F(c:\mathcal{C})=\min_{p\in\mathcal{C}} B_F(c:p)$. \item Add selected seed to the initialization seed set: $\mathcal{C}\leftarrow \mathcal{C}\cup \{c_i\}$, and reiterate until $|\mathcal{C}|=k$. \end{itemize} \end{itemize} \end{algo} Let $\mathrm{kmeans}_F^{*}$ denote the optimal Bregman $k$-means average loss function for generator $F$. Bregman $k$-means++~\cite{BregmanClustering-2010} described in Algorithm~\ref{algo:bregkmeanspp} ensures that \begin{equation} {\mathrm{kmeans}_F}^*(\mathcal{Y} : \mathcal{C}) \leq \mathrm{kmeans}_F(\mathcal{Y} : \mathcal{C}) \leq \frac{8}{\mu^2} (2+\log k) {\mathrm{kmeans}_F}^*(\mathcal{Y} : \mathcal{C}) \end{equation} The factor $\mu$ in the upper bound is related to the notion of $\mu$-similarity that we now concisely explain. Observe that the squared Mahalanobis distance $M_Q(p,q) = (p-q)^T Q (p-q)$ satisfies the double triangle inequality: \begin{equation} M_Q(p,q) \leq 2 (M_Q(p,r) + M_Q(r,q) ). \end{equation} A Bregman divergence is said to have the $\mu$-similarity on a domain $\mathcal{Y}$ if there exists a positive definite matrix $Q\succ 0$ on $\mathcal{Y}=\mathrm{conv}(y_1, ..., y_n)$ and a real $0<\mu\leq 1$ such that \begin{equation} \mu M_Q(p,q) \leq B_F(p:q) \leq M_Q(p,q) \end{equation} Since a Bregman divergence can also be interpreted as the remainder of a Taylor expansion using the Lagrange error term: \begin{equation} B_F(p:q) = (p-q)^T \frac{\nabla ^2 F(\epsilon_{pq})}{2} (p-q), \end{equation} with $\epsilon_{pq}$ being a point on the line segment $[pq]$. It follows that by considering the Hessian $\nabla^2 F$ on a compact subset $\mathcal{Y}=\mathrm{conv}(y_1, ..., y_n)$, we get a bound~\cite{MixedBregmanClustering:2008} for $\mu$ as follows: \begin{equation} \mu=\min_{p,q\in\mathcal{Y}} \frac{\min_{y\in\mathcal{Y}} (p-q)^T \nabla^2 F(y) (p-q)}{\max_{y\in\mathcal{Y}} (p-q)^T \nabla^2 F(y) (p-q)}. \end{equation} By considering a hyperrectangle bounding the convex hull $\mathcal{Y}=\mathrm{conv}(y_1, ..., y_n)$, it is usually easy to compute bounds for $\mu$. See~\cite{BregmanClustering-2010} for some examples. The notion of $\mu$-similarity also allows one to design fast proximity queries~\cite{BregmanSearch:2011} based on the following two properties: \begin{description} \item[Approximately symmetric.] \begin{equation} B_F(p:q) \leq \frac{1}{\mu} B_F(q,p) \end{equation} \item[Deficient triangle inequality.] \begin{equation} B_F(p:q) \leq \frac{2}{\mu} (B_F(p:r)+B_F(q:r)) \end{equation} \end{description} For mixtures with prescribed but different non-zero weighting, we can bound the likelihood ratio using $w^+=\max_i w_i\geq \frac{1}{k}$ and $w^-=\min_i w_i$. When mixture weights are unknown, we can further discretize weights by increments of size $\delta$ ($O(1/\delta^k)$ such weight combinations, where each combination gives rise to a fixed weighting) and choose the initialization that yields the best likelihood. 
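A compact Python sketch of this seeding procedure for a generic Bregman divergence is given below; the squared Euclidean divergence is shown as an example generator, and the data and function names are illustrative only.
\begin{verbatim}
import numpy as np

def bregman_kmeanspp(Y, k, B, seed=0):
    """Bregman k-means++ seeding (Algorithm 3): the first seed is uniform; each
    further seed y is picked with probability proportional to
    B(y : C) = min_{c in C} B(y : c), the divergence to the closest chosen seed."""
    rng = np.random.default_rng(seed)
    n = len(Y)
    seeds = [Y[rng.integers(n)]]
    d = np.array([B(y, seeds[0]) for y in Y])      # divergence to nearest seed
    for _ in range(1, k):
        c = Y[rng.choice(n, p=d / d.sum())]        # D^2-style probabilistic pick
        seeds.append(c)
        d = np.minimum(d, np.array([B(y, c) for y in Y]))
    return np.array(seeds)

# example with the squared Euclidean divergence (B_{F*} for spherical Gaussians)
sqeucl = lambda p, q: 0.5 * np.sum((p - q) ** 2)
Y = np.random.default_rng(1).normal(size=(200, 2))
C = bregman_kmeanspp(Y, 5, sqeucl)
\end{verbatim}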
\section{Concluding remarks and discussion}\label{sec:concl} Banerjee et al.~\cite{bregmankmeans-2005} proved that EM for learning exponential family mixtures amounts to performing a dual Bregman soft clustering. Based on the duality between exponential families and Bregman divergences, we proposed $k$-MLE, a Bregman hard clustering in disguise. While $k$-MLE monotonically increases the complete likelihood until it converges to a local maximum after a finite number of steps, EM monotonically increases the expected complete likelihood and necessarily requires a prescribed stopping criterion. Because $k$-MLE uses hard membership of observations, it fits the doubly stochastic process of sampling mixtures (for which soft EM brings mathematical convenience). Both $k$-MLE and EM are local search algorithms that require a proper initialization of the mixture parameters. We described $k$-MLE++, a simple initialization procedure that builds on Bregman $k$-means++~\cite{BregmanClustering-2010} to probabilistically guarantee an initialization not too far from the global optimum (in case of known weights). While we use Lloyd's $k$-means heuristic~\cite{LLoyd:1982} for minimizing the $k$-means loss, we can also choose other $k$-means heuristics to design a corresponding $k$-MLE. One possible choice is Hartigan's greedy swap~\cite{HartiganKmeans:2010}, which can further improve the loss function when Lloyd's $k$-means is trapped in a local minimum. A local search technique such as Kanungo et al.'s swap~\cite{KanungoKmeans-2004} also guarantees a global $(9+\epsilon)$-approximation. The MLE may yield degenerate situations when, say, one observation point is assigned to one component with weight close to one. For example, the MLE of one point for the normal distribution is degenerate as $\sigma\rightarrow 0$ (and $w\rightarrow 1$), and the likelihood function tends to infinity. That is the unboundedness drawback of the MLE. See~\cite{GMM-MLEPenalized:2001,GMMDegeneracy:2007} for further discussions on this topic, including a penalization of the MLE to ensure boundedness. Statistical mixtures with $k$ components are generative models of overall complexity $k-1+kD$, where $D$ is the order of the exponential family. An interesting future direction would be to compare mixture models versus a {\em single} multi-modal exponential family~\cite{Cobb:1983:EMR} (with implicit log-normalizer $F$). We did not address the model selection problem, which consists in determining the appropriate number of components or the type of distribution family. Although there exist many criteria, like the Akaike Information Criterion (AIC), model selection is a difficult problem since some distributions are infinitely divisible, which makes the selection process unstable. For example, a normal distribution can be interpreted as a sum of normal distributions: $\forall k\in\mathbb{N},\ N(\mu,\sigma^2) = \sum_{i=1}^k N\left(\frac{\mu}{k},\frac{\sigma^2}{k}\right)$. From the practical point of view, it is better to overestimate $k$, and then perform mixture simplification using entropic clustering~\cite{jMEF-2010}. Belkin and Sinha~\cite{PolynomialLearningDistr:2010} studied the polynomial complexity of learning a Gaussian mixture model. We conclude by mentioning that it is still an active research topic to find good GMM learning algorithms in practice (e.g., see the recent entropy-based algorithm~\cite{VBGMM:2012}). 
\section*{Acknowledgments} FN (5793b870) would like to thank Joris Geessels for an early prototype in Python, Professor Richard Nock for stimulating discussions, and Professor Mario Tokoro and Professor Hiroaki Kitano for encouragements.
\section{Introduction} The COVID-19 pandemic surprised the world, generating an enormous health crisis with profound social and economic implications. Reopening the economy becomes a latent issue as many countries are vaccinating considerable portions of their populations~\cite{owidcoronavirus}. The transition period between the prohibition of access to services and the complete and unrestricted reopening of the economy has been marked by the use of proof of vaccination for the SARS-CoV-2 virus, commonly called COVID passports, or COVID pass~\cite{9558786}. Countries, states, and cities are now demanding proof of vaccination, as is the case in the city of São Paulo~\cite{pass} and the state of New York~\cite{nys}. Such vaccination certificates help economic recovery, allowing vaccinated people to access services that invariably require close contact with others. However, the use of vaccination certificates poses privacy-related challenges. For example, you need to ensure that whoever presents a vaccination certificate is who they say they are. It is not trivial to solve this problem without de-anonymizing the certificate submitter. Another challenge is to allow a vaccinated person, in addition to proving their vaccination anonymously, not to reveal which laboratory manufactured their vaccine or even the exact date of vaccination. Vaccination certificates can be made in various ways, from a physical version of paper or plastic card to electronic. While the physical version is more vulnerable to fraud and doesn't solve the challenges listed above, it's cheap and readily available to everyone. The electronic version, which presupposes access to technology, can offer greater privacy, security, and scalability if properly designed and developed, possibly successfully tackling the challenges described above. One promising way to attack such challenges electronically is to use Self-Sovereign Identity (SSI). SSI is a user-centric paradigm of digital identity~\cite{allen,schardong2021selfsovereign}. In an SSI-based ecosystem, the user is central to identity management, having control and cryptographic guarantees over shared personal data. The SSI concept is especially relevant in sensitive personal data, such as health data. In addition to GDPR~\cite{gdpr2016}, health data generally have specific legislation, such as HIPAA~\cite{hipaa}, which establishes a legal framework requiring special care regarding the storage, handling, and sharing of health information. We tackled the challenge of creating a digital vaccination pass that respects users' privacy in this work. It allows their proof of vaccination to be verified while reducing the exposure of personal and health data. Our SSI solution to this challenge maintains user anonymity through blockchain and Zero-Knowledge Proof (ZKP). This article is organized as follows. In Section~\ref{sec:id}, we present the concepts of Digital Identity and Self-Sovereign Identity. Section~\ref{sec:con} introduces concepts required to follow this work, namely Verifiable Credentials (VCs), ZKP, Decentralized Identifiers (DIDs), and Blockchain. We present our proposal and detail our implementation in Sections~\ref{sec:proposal} and \ref{sec:solution}, respectively. In Section~\ref{sec:related}, related works are discussed, and in Section~\ref{sec:conclusion}, we close the article presenting our conclusions. 
\section{Digital Identity}\label{sec:id} \input{figs/identity-models} ISO 24760-1, which addresses security and privacy for identity management, defines digital identity as ``a set of attributes related to an entity''~\cite{iso2019}. In~\cite{4385303}, digital identity is defined as the representation of an entity linked to a specific context. Linking digital identity to specific contexts is a commonly accepted idea in the literature~\cite{8776589,schardong2021selfsovereign,josang2005user}. Identity management also called Identity and Access Management (IAM), is a set of policies, processes, and technologies related to the administration of digital identities~\cite{iso2019}. An identity management system defines how entities are identified, authenticated, and authorized to access restricted access services~\cite{8776589}. Digital identity and IAMs can be modeled in different ways. In this section, we describe the three traditional models of digital identity~\cite{allen}, pointing out their problems to, finally, explain SSI and how this fourth model attacks the issues of the previous models. \subsection{Isolated Identity} The isolated model was the first identity model. It is the simplest model~\cite{8776589}, as depicted in Figure~\ref{fig:iam:centralized}. In this paradigm, only the user and the service provider (SP) exist, and the SP also operates as an Identity Provider (IdP). Each web service that requires authentication and authorization must implement its own IAM. One of the consequences of this model is that users have many digital identities spread across different online services. Furthermore, each SP has to bear the costs of implementing and maintaining an IAM, which involves protecting against possible attacks, vulnerabilities, and taking care of the demands imposed by GDPR. \subsection{Outsourced IdP} The natural evolution of the previous model is the separation of IAM functionalities into a specific service for this purpose, giving rise to the identity provider as a service in itself. In this way, the IdP positions itself as an intermediary between the user and the SP, as shown in Figure~\ref{fig:iam:outsourced}. In this model, the SP outsources identity management to the IdP. This model tackles the problems of (i) usability, \textit{i.e.} users having many accounts, and; (ii) responsibility of SPs, \textit{i.e.} SPs having to design, implement and care for sensitive user information. In this model, users do not need to have several distinct identities, and SPs do not need to implement their own IAM framework. However, this model generated unexpected consequences, such as an immense concentration of data in the hands of a few IdPs, \textit{e.g.} Facebook, Google, and Twitter. These companies have become substantial data silos, trapping people in an oligarchy of few IDPs without portability among themselves~\cite{schardong2021selfsovereign}. By concentrating high amounts of personal data, these silos have become centers of attention, decoys for attacks. In addition, issues such as data leaks, security, data property, and privacy raise concerns. The data belongs to users, but they do not own and control them and are unaware of all services that consume them. \subsection{User-Centric} One of the answers to the problem of users having to hand over their data to an IdP was given by the user-centric identity model~\cite{josang2005user}, illustrated in Figure~\ref{fig:iam:usercentric}. 
The proposal is for the user to store access credentials issued by SPs in a personal authentication device, such as a smartcard or smartphone. The user authenticates to the personal authentication device using a PIN code or another method, and the authentication device, in turn, authenticates with the SP. Although this model advances privacy, allowing the user to control their access credentials, it does not address the management of user attributes or the incorporation of attributes guaranteed by third parties. The latter issue is of great relevance, as many SPs will only trust the value of attributes if they are issued by the IdPs they trust. For example, a car rental company will not accept a driver's license if the user issues it. \subsection{Self-Sovereign Identity} Although a precise definition of SSI is still under debate, as discussed in \cite{schardong2021selfsovereign}, \cite{Muhle2018} and \cite{8776589}, the SSI literature solidifies the idea that the user should own and manage their data. In other words, both attributes and credentials, which can be self-signed or signed by third parties, must be controlled only by users, who present their data to SPs whenever they desire. This is depicted in Figure~\ref{fig:iam:ssi}. Therefore, users can choose to have few or many digital personas, and no silos of personal data are created. In 2016, Christopher Allen proposed ten guiding principles for SSI~\cite{allen}, and although it is a blog post, it is treated as a whitepaper in the area~\cite{schardong2021selfsovereign}. The ten principles are as follows. (i) Existence; the user must exist independent of any service provider/identity provider. (ii) Control; the user must control their identities. (iii) Access; the user must always have access to their data. (iv) Transparency; all systems and algorithms must be transparent to the user. (v) Persistence; identities must be persisted. (vi) Portability; the services and information about the user's identity must be transportable. (vii) Interoperability; identities should have as wide a range of use as possible across different systems. (viii) Consent: The user must always consent to the usage of his identity data. (ix) Minimization; when necessary to show some information, always show as little as possible to accomplish the task. (x) Protection; users' rights must be protected. While it is possible to implement SSI without Blockchain~\cite{vanbokkem2019selfsovereign} technology, using it brings significant benefits. For example, storing credential revocation schemes on the ledger brings high availability and an immutable record of revoked data. Another advantage is to remove third parties when establishing trust by allowing entities to be part of the network and prove they are who they say they are in novel ways. For instance, a customer physically accessing a bank agency scans a QR code posted on the wall and connects to the bank, thus not requiring the bank to have an x509 certificate within a PKI hierarchy nor an IANA-controlled DNS domain. Finally, using blockchain to implement SSI fosters an ontologically coherent ecosystem, as it publishes credential metadata that can be reused and augmented by all participants. \section{Terminology and Concepts}\label{sec:con} This section introduces the fundamental concepts needed to understand this work. These are the decentralized identifiers, verifiable credentials standards, the concept of Zero-Knowledge Proofs (ZKP), and blockchain. 
\subsection{Decentralized Identifiers} SSI aims to give users control over their data. Part of this effort consists of ensuring that communications in peer-to-peer relations do not involve third parties. However, trust in today's communications over the internet is rooted in third-party authorities: chains of certificates tied to root certificate authorities (CAs) chosen \textit{a priori} by browsers and operating systems. The decentralized identifiers (DIDs) standard is being developed at W3C to mitigate the participation of authorities in communications and relationships in SSI~\cite{did2021}. DIDs are designed to operate independently of centralized registries, identity providers, and CAs. Every DID is linked to a unique DID document, a JSON structure that can be stored on-chain or off-chain, that describes an entity, specifying public keys, service endpoints, etc. For instance, an entity can be a person, a group, a relationship between entities, an organization, or an internet of things object~\cite{did2021}. The format of a DID identifier is a textual string composed of three parts separated by colons, where the first part is always \texttt{DID}; the second is a method identifier; the third is an entity identifier in this method. A DID method is a technique that describes the creation and management of method-specific identifiers and how to obtain their respective DID documents. For example, the \texttt{did:indy} method describes identifiers on Hyperledger Indy blockchains, while the \texttt{did:key} method describes identifiers generated from cryptographic keys. An example of DID address of this method is \texttt{did:key:z6MkpTHR8VN sBxYAAWHut2Geadds9dVWtWAnuB}. \subsection{Verifiable Credentials} Working with user attributes is a crucial part of any system that operates with digital identities. Attributes are commonly organized into structures called credentials, which are also used for identification and authentication. A credential contains a set of one or more claims made by an issuer about an entity~\cite{Muhle2018}. More formally, a credential is a set of claims and not attributes because claims can be false or represent an incomplete perception of the whole. In contrast, attributes are properties, \textit{i.e.} absolute truths, of entities. Verifiable Credential (VC) is a set of claims and metadata that can cryptographically prove the identity of the issuer through a digital signature, providing integrity and authenticity~\cite{vc2019}. VCs have metadata that describes the issuer, expiration date, public key, and revocation information. Regarding the latter, the cryptographic accumulator is often employed to create private-preserving revocation registries. The cryptographic accumulator is an algorithm that combines a set of values into one short accumulator, such that evidence is produced that the accumulator incorporated a given value without revealing it.~\cite{fazio2002cryptographic}. Part of the W3C VC standard defines the verifiable presentation (VP) of claims~\cite{vc2019}. VP is the process of revealing one or more claims to a verifier. The verifier may or may not trust the claims presented but must always be able to verify integrity and authenticity. VPs can also reveal the result of operators on claims, for instance, that someone's birth date was at least 18 years ago. This technique is called Zero-Knowledge Proof (ZKP). 
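As a side note on the DID syntax introduced above, a minimal Python sketch parsing the three colon-separated parts of a DID might look as follows; this is a toy parser that ignores the optional path, query, and fragment components of the full DID syntax, and the function name is ours.
\begin{verbatim}
def parse_did(did: str):
    """Split a DID into its three colon-separated parts: the fixed 'did'
    scheme, the method name, and the method-specific identifier."""
    scheme, method, identifier = did.split(":", 2)
    if scheme != "did":
        raise ValueError("not a DID")
    return method, identifier

method, ident = parse_did("did:key:z6MkpTHR8VNsBxYAAWHut2Geadds9dVWtWAnuB")
print(method, ident)   # key z6MkpTHR8VNsBxYAAWHut2Geadds9dVWtWAnuB
\end{verbatim}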
\subsection{Zero-Knowledge Proof} A Zero-Knowledge Proof (ZKP) is a cryptographic protocol in which one party proves to another that they know a value, without revealing any information other than the fact that they know it~\cite{schnorr1989efficient}. Formally, a ZKP is an interactive proof system $(P, V)$ for proving a membership statement for a language $L\subseteq\{0,1\}^*$, with two actors, the prover $P$ and the verifier $V$. To prove that an instance $x \in L$, $P$ and $V$ must share $x$, denoted by $(P, V)(x)$. Then, $P$ and $V$ exchange a sequence of messages so that at the end of the interactions, the result is $(P, V)(x) \in \{accept, reject\}$, representing whether $V$ accepts the statement of $P$ that $x \in L$~\cite{6206859}. ZKP schemes must have two properties~\cite{goldreich1994definitions}: (i) completeness, meaning that if the participants are honest, \textit{i.e.} follow the protocol correctly, the protocol succeeds with overwhelming probability; and (ii) soundness, meaning that a dishonest prover can only mislead $V$ with negligible probability. In addition, the zero-knowledge property requires that $V$ learn nothing from the interaction beyond the validity of the statement. ZKPs are not proofs in the mathematical sense, as there is a small probability of convincing $V$ of a false statement; multiple protocol rounds are therefore used to make this probability negligible. In the context of VCs, ZKPs are used to build VPs that convince the verifier of statements about the claims they contain without revealing them. For instance, that a driver's license was issued to the holder and is valid, or that their credit score is higher than a given threshold. To reduce the communication between the prover and the verifier, Non-Interactive Zero-Knowledge Proofs (NIZKP)~\cite{micali2000computationally} are used. NIZKPs enable proofs to be built and sent via an out-of-band method, such as a QR code. This technique uses the Fiat-Shamir paradigm~\cite{fiat1986prove} to remove interaction from the protocol. \subsection{Blockchain} Most SSI systems~\cite{Muhle2018} use distributed ledger technology to: (i) decentralize storage and, consequently, reduce the control of central authorities over data; and (ii) guarantee the immutability of the information stored in the ledger. These features are possible because of the properties of blockchain data structures such as append-only hash lists and Merkle trees~\cite{merkle1987digital}. In a blockchain, each block contains, together with its data, the hash of the previous block. Thus, to modify the data of a single block within a sequence of linked blocks (\textit{i.e.} a chain), one would need to change all subsequent blocks. Having an append-only structure guarantees immutability at the local level. Nonetheless, this structure must be distributed in a network, and all participating nodes must agree on the correct values of the blocks. This agreement is achieved through a consensus algorithm. Most solutions to this problem are attempts at solving the Byzantine generals problem~\cite{lamport2019byzantine}. The most popular solutions are algorithms based on proof of work, popularized by Bitcoin~\cite{nakamoto}. \section{Digital Vaccination Pass}\label{sec:proposal} The undertaking of designing a credential for COVID vaccination is complex. Such a credential must satisfy the requirements of different actors from different areas, both local and international.
Several study groups and collaborative efforts~\cite{ghpc2021,cci2021,who2022} were formed around the world to design or standardize these vaccination certificates. One such initiative, the Good Health Pass Collaborative (GHPC)~\cite{ghpc2021}, is a multi-sector effort to create a blueprint for the interoperability of digital health pass systems~\cite{ghpcblue2021}. In their first White Paper, the GHPC initiative defined four critical requirements that health credential systems must satisfy~\cite{ghpcwhite2021}: (i) the credential must be able to work across borders and comply with local and international legislation; (ii) the credential must involve the collaboration of different sectors such as health, government, tourism, and travel; (iii) the credential must comply with privacy and data protection regulations and must be able to link to the credential holder; and (iv) the credential shall not add costs or other burdens to users. This work aims to comply with these requirements, adopting open-source tools and open standards as a way to make this technology widely available. SSI introduces essential ideas and principles regarding people's privacy. The ten principles are especially relevant in the contemporary context of the COVID-19 pandemic, which has accelerated the digitization of society and popularized the debate over the privacy of health data~\cite{9558786}. Based on the ten SSI principles, the concepts discussed above, and the requirements laid down by the GHPC, it is possible to tackle the challenge of allowing a person to prove their vaccination status while preserving their privacy. In other words, to prove whether or not one is vaccinated for a disease without revealing when or where they were vaccinated, or even the laboratory that produced the vaccine. Our proposal for a proof of vaccination uses the concepts discussed above. It starts with a public or private entity previously authorized to administer vaccines, namely a vaccinator, which is registered on the blockchain to issue VCs for vaccinated people, \textit{i.e.} the vaccinees. Since any entity on the blockchain can issue VCs, it is necessary to use VC issuing authorization schemes as proposed in~\cite{lauinger2021poa} to ensure that only VCs issued by authorized entities are legitimately recognized. Figure~\ref{fig:triade} illustrates the three entities involved in our solution. \begin{figure}[h] \centering \input{figs/roles} \caption{The three roles and their relations in our solution.} \label{fig:triade} \end{figure} The VC states the vaccinee's full name and date of birth, the laboratory that produced the administered vaccine, the applied dose (first, second, third, or more if applicable), and the date it was administered. Our VC format does not include country-specific identifiers, such as the SSN of the USA or South Korea's Resident Registration Number, for two reasons: (i) our solution is country-independent; and (ii) there are countries with large numbers of unregistered citizens~\cite{id4d}. The vaccinee stores the VC in a digital wallet. This digital wallet can be a smartwatch, a smartphone app that uses a secure hardware element, or another Personal Assistant Device (PAD). Sending the VC from the vaccinator to the vaccinee takes place immediately after the act of vaccination, in the most convenient and secure way for the user. For instance, if the PAD has Near Field Communication (NFC) and Bluetooth, NFC is favored because of the short range of the protocol. QR code scanning is also available.
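For concreteness, the sketch below shows the kind of claim set such a VC could carry. The field names and values are our own illustrative choices rather than a normative schema and, as discussed above, no country-specific identifier is included.
\begin{verbatim}
# Illustrative claim set for the vaccination VC (field names are not normative).
vaccination_claims = {
    "fullName": "Alice Example",
    "dateOfBirth": "1990-05-17",
    "vaccineLaboratory": "LabX",        # hypothetical manufacturer
    "pathogen": "SARS-CoV-2",
    "doseNumber": 1,
    "dateAdministered": "2021-10-01",
}
\end{verbatim}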
Regardless of the communication technology used to connect the digital wallet with the issuer, DIDs represent the entities involved and identify the relationship between these two entities. Note that the credential is held by the user in their wallet, so no personal data are stored on the blockchain, and the system therefore avoids GDPR compliance obligations related to on-chain data. Personal data are in the possession of their owner, and the owner's consent is required to share them. This arrangement makes it difficult for issuing organizations to track user data activities. In addition, keeping the data in the hands of their owners is an excellent way to transport them across borders without the need for substantial centralized data centers. When needed, the vaccinee can use their PAD's digital wallet to produce a VP, proving to a verifier that they are vaccinated. The user can customize the VP to include more or less personal data. In this way, the amount of exposed data can be adjusted to suit the context better. For instance, suppose the vaccinee does not wish to reveal the laboratory that produced their vaccine. In this case, they can create a VP using ZKP to prove that the value of the laboratory field on their VC is one of the values in a list of laboratories accepted by the verifier. Alternatively, the verifier might define the VP format. For instance, suppose that to access a social event such as a concert or play, the verifier (\textit{i.e.} the entity responsible for the event) needs to ensure that participants have taken at least one dose of the COVID-19 vaccine. In this case, the verifier defines a VP request with a specific format and sends it to the customers, who, through their digital wallets, choose whether or not to produce a VP in the requested format. An airline, however, may be obliged to demand complete vaccination of its passengers 14 days before the flight. In this case, the airline constructs a VP request for this context, and passengers choose whether to produce a VP when boarding the plane. Finally, the verifier validates the VP and chooses to trust it if they trust the vaccinator who issued the VC. Verifiers can confirm the authentication and authorization of vaccinators through the DIDs registered in the blockchain. The blockchain also stores the value of the cryptographic accumulator used to maintain a revocation registry of VCs, which can be invoked if, for example, expired vaccine doses have been applied by mistake. \section{Empirical Experimentation}\label{sec:solution} The shift of focus regarding data ownership introduced by SSI allows us to propose a privacy-preserving digital vaccination pass. This section presents the tools and technologies adopted to realize our goal, an architectural overview of our system, and specific implementation details and results. \subsection{Underlying Technologies} We previously presented concepts, standards, and technologies, and now we discuss the tools employed in our solution and how they work. Regarding the blockchain, we use a ledger created specifically for identity management in the SSI model, Hyperledger Indy. It is an open-source project of the Hyperledger community that provides an ecosystem for SSI based on blockchain~\cite{indy}. The Indy distributed ledger consists of two subprojects, \texttt{Indy-Plenum} and \texttt{Indy-Node}.
While the former is a general-purpose blockchain that implements the consensus algorithm, the latter is a specialization of the former, where identity-specific transactions are implemented~\cite{shcherbakov2019}. It is important to note that Indy does not store personal data on the blockchain. The data saved on the ledger are the DIDs of entities that issue VCs, their public keys, contact endpoints, a cryptographic accumulator value to serve as a revocation list, and VC schema metadata. Using blockchain here has several advantages. It is possible to create an international chain of trust through decentralization, with various governments issuing their lists of trusted DIDs. It also avoids a single point of failure that could stop the system. Also, a centralized personal and health data repository cannot be created, because no personal data are stored on the blockchain or in any central location. To foster and facilitate the adoption of SSI, the Hyperledger community has also created Hyperledger Aries~\cite{aries}. It is an abstraction layer above Indy, implementing methods to produce, transfer, and store VCs and VPs independently of the blockchain solution below. Through this abstraction, developers can focus on business rules and not worry about implementation details. The abstraction that Aries introduces is realized through a software agent called the Aries agent. The Aries agent interacts with other entities via DID Communication (DIDComm) or other communication protocols. DIDComm is a protocol that allows asynchronous communication between DIDs through encrypted messages~\cite{didcomm}. Two parts make up the Aries agent, namely the agent and the controller. The former is responsible for creating, signing, and reading transactions on the blockchain, interacting with other agents, managing secure storage, creating and presenting VPs using ZKP, and exchanging messages with the controller. The latter implements business logic, indicating how the agent should respond to events. The agent and controller communicate via a REST API. The agent sends HTTP webhook calls to the controller, which in turn analyzes and responds to the agent accordingly~\cite{become}. \subsection{System Overview} We have previously presented the three roles involved in the vaccine pass and their interactions. We now detail the architecture of our system and how these roles concretely interact using the technologies presented. Figure~\ref{fig:arquiProto} shows the architecture of our system. \begin{figure}[h] \centering \input{figs/architecture} \caption{System architecture.} \label{fig:arquiProto} \end{figure} First, it is important to note that the first time an organization instantiates our solution, some preparation is required. Specifically, an Aries agent must communicate with the Indy blockchain to: (i) record the DID of each health agency that will administer vaccines; (ii) define the format of the VCs that will be issued; and (iii) specify the revocation registry of the VCs. To ensure the chain of trust, DIDs issued to health agencies must be endorsed by the government or the responsible agency. The government then makes available a list of trusted DIDs, contributing to a global chain of trust. After carrying out the preparations described above, each health agency authorized to administer vaccines must control an Aries agent through software, a mobile app, or a website. The Aries agent can run on the same device that controls it or remotely from an organization's or third-party cloud data center.
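To make the agent--controller interaction described above concrete, the sketch below shows a minimal controller that receives the agent's webhook calls and applies business logic. It is only an illustrative sketch: the webhook path, topic names, and payload fields are hypothetical placeholders and do not correspond to the actual Verity or ACA-Py interfaces.
\begin{verbatim}
# Minimal controller sketch (topic and field names are placeholders).
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhooks/<topic>", methods=["POST"])
def handle_agent_event(topic):
    event = request.get_json(force=True)
    if topic == "connections" and event.get("state") == "active":
        # A vaccinee's wallet has connected: ask the agent to offer the VC.
        pass  # e.g. call the agent's REST API here
    elif topic == "present_proof" and event.get("state") == "verified":
        # The agent has cryptographically verified a received VP.
        pass  # e.g. grant access to the event or flight
    return jsonify({"ok": True})

if __name__ == "__main__":
    app.run(port=8080)
\end{verbatim}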
When a health agency representative vaccinates someone, an HTTP call is made via the REST API to the health agency's agent, requesting the issuance of a VC. The vaccinee's name and date of birth are sent in the request, along with information about the applied vaccine such as the manufacturing laboratory, the pathogen the vaccine targets, the dose number, and the place and date of application. The information sent to the agent can be customized. The Aries agent controlled by the health agency stores the private key of the entity it represents in a secure enclave and uses it to issue a VC for the received data. The newly created VC belongs to the vaccinee and must be forwarded to them. A peer-to-peer connection using DIDComm is made between the vaccinator's Aries agent and the vaccinee's digital wallet to transfer the VC. It is important to note that the blockchain stores only the value of the cryptographic accumulator, used to prove the unrevoked status of the VC, and nothing else. Finally, the vaccinee can create a VP to prove something about their VC as described earlier. The secure environment of the digital wallet creates the VP and sends it via DIDComm to the Aries agent of the verifier. The verifier's agent connects to Indy to check whether the VC that originated the VP has been revoked. All data exchanges are private and peer-to-peer, between issuer and holder and between holder and verifier. \subsection{Prototype Implementation} We employ the tools described above to implement a privacy-preserving solution to the problem of proving vaccination status, which is available online\footnote{URL omitted for peer-review}. Regarding the blockchain, our prototype uses Sovrin, an implementation of Hyperledger Indy that acts as a network of interoperable SSI networks~\cite{sov}. It is one of the first SSI offerings and perhaps the most studied~\cite{Kuperberg2019, Muhle2018, Lim2018}, allowing different SSI systems to share VC metadata and to interoperate, thus fostering the adoption of SSI. We created two websites that adapt to desktops and mobile devices using \texttt{HTML} and \texttt{JavaScript} for the proof-of-concept implementation. The vaccinator operates one website to issue VCs, while the verifier uses the second website to define VP formats and present this request to vaccinees. We integrated the backend of each website with an Aries agent. We chose to use a proprietary Aries agent implementation called Verity because it is readily available and runs in the cloud. It is a product of Evernym, the company that created Indy and donated it to the Hyperledger Foundation~\cite{tobin2016inevitable}. Nonetheless, there are free and open-source Aries agents such as \texttt{ACA-Py}~\cite{acapy}. Once a vaccine is applied, the vaccine and vaccinee data are submitted to the vaccinator's website as shown in Figure~\ref{fig:telaEmissaoVC}. In the background, the system requests its Aries agent to create a VC and transfer it to the vaccinee's digital wallet. The transfer begins with the vaccinee reading a QR code that is returned by our application, as shown in Figure~\ref{fig:qrcode}. \begin{figure}[h] \centering \includegraphics[width=.55\linewidth]{figs/issue-tela-zoom.png} \caption{The website where the vaccinator fills in the vaccine and vaccinee data.} \label{fig:telaEmissaoVC} \end{figure} \begin{figure}[h] \centering \includegraphics[width=.5\linewidth]{figs/qr-code-embaixo-central.png} \caption{VC transfer via QR code.
The vaccinee scans the QR code with their digital wallet.} \label{fig:qrcode} \end{figure} In our implementation, we used the digital wallet \texttt{connect.me} to perform the actions of a vaccinee~\cite{connect}. There are a variety of free and paid digital wallets capable of receiving VCs and producing VPs~\cite{wallets}. Figure~\ref{fig:a} shows \texttt{connect.me} asking for user confirmation to either accept or deny connecting to the vaccinator after scanning the QR code. Although the wallet does not inform the user how this connection happens, it is a peer-to-peer connection using \texttt{DIDComm}. Figure~\ref{fig:b} shows the wallet asking the vaccinee to accept or deny receiving their VC, which occurs immediately after confirming the incoming connection from the vaccinator. Upon acceptance, the vaccinee can use their digital wallet to produce VPs and prove their vaccination status. Establishing a biometric link between the wallet and the vaccinee is imperative in all transactions. For instance, when the issuer connects to the vaccinee to send a VC, the issuer must mandate that the wallet is secured with a biometric factor, such as fingerprint or facial recognition. Likewise, when the verifier checks a VP, the wallet that created the VP must be secured with a biometric factor, ensuring that the wallet holder is, in fact, the owner of the VC. In the \texttt{connect.me} wallet, locking and unlocking with biometric factors is available. \begin{figure} \centering \subfigure[Digital wallet requesting confirmation to connect with the vaccinator.]{\label{fig:a}\includegraphics[width=41mm]{figs/tela-conecta2.jpg}} \subfigure[Digital wallet requesting confirmation to receive the VC.]{\label{fig:b}\includegraphics[width=41mm]{figs/tela-aceita-cred.jpg}} \caption{The vaccinee connects to the vaccinator through \texttt{DIDComm} and receives their VC.} \end{figure} The interaction between vaccinee and verifier also begins through a QR code, which we omit for lack of space. Figure~\ref{requisicaoProva} shows the verifier's interface, which defines a VP request format. The vaccinee will need to present the laboratory and the pathogen their vaccine targets, and the dose number must be greater than or equal to 1. Figure~\ref{fig:c} shows how the digital wallet presents the VP request to the vaccinee. Finally, we show in Figure~\ref{fig:d} a VP request that the vaccinee cannot satisfy. In this case, the vaccinee is asked to prove that they have taken three or more vaccine doses for SARS-CoV-2. However, since no VC satisfies this requirement, the digital wallet cannot produce the VP for this request. Selective disclosure and ZKP allow the holder to produce specific VPs for each context. With ZKP, it is possible to issue a credential and create fully anonymous VPs. Such anonymity makes no sense in a context where the holder is already identified, \textit{e.g.}, at work or university, and VPs can be adapted accordingly. It is important to remark that all interactions in our system are contactless, helping with the distancing measures necessary to combat the pandemic.
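To illustrate the kind of VP request just described, the sketch below shows a simplified proof request in which the verifier asks to see the laboratory and pathogen and requires, as a ZKP predicate, that the dose number be at least 1. The structure and field names are our own illustrative choices, not the exact Indy/Verity proof-request schema.
\begin{verbatim}
# Illustrative shape of a VP (proof) request; field names are not normative.
vp_request = {
    "name": "event-entry-check",
    "requested_attributes": {
        "lab": {"name": "vaccineLaboratory"},
        "pathogen": {"name": "pathogen"},
    },
    "requested_predicates": {
        # Proven via ZKP: the wallet reveals only that doseNumber >= 1 holds,
        # not the actual dose count.
        "at_least_one_dose": {"name": "doseNumber", "p_type": ">=", "p_value": 1},
    },
}
\end{verbatim}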
\begin{figure}[h] \centering \includegraphics[width=.55\linewidth]{figs/verify-tela-zoom.png} \caption{Request for proof of first dose.} \label{requisicaoProva} \end{figure} \begin{figure}[h] \centering \subfigure[Digital wallet asks the vaccinee to share a VP.]{\label{fig:c}\includegraphics[width=41mm]{figs/tela-aceita.jpg}} \subfigure[Digital wallet is unable to create the requested VP.]{\label{fig:d}\includegraphics[width=41mm]{figs/tela-rejeita-com-requested-by.png}} \caption{Interactions between vaccinee and verifier through the digital wallet.} \end{figure} \section{Related Work}\label{sec:related} In early 2021, the Good Health Pass Collaborative (GHPC) was launched~\cite{ghpc2021} with more than 25 companies and organizations from multiple sectors. In the second half of 2021, the GHPC initiative released its blueprint~\cite{ghpcblue2021} with recommendations regarding credential and operational infrastructure. Design considerations and technical choices are based on five pillars: (i) individuals must be at the center of data exchange; (ii) the credential must allow the individual to provide evidence of their vaccination status; (iii) a decentralized approach is needed for global security and scalability; (iv) open standards are essential for interoperability and participation; and (v) a pragmatic and realistic approach. We agree with these recommendations of the GHPC and adopt them in this work. In a similar effort, the World Health Organization (WHO) created the Digital Documentation of COVID-19 Certificates (DDCC) with the aim of establishing standards for an architecture for digital vaccination certificates~\cite{who2022}. In the second half of 2021, the DDCC published a report~\cite{whodocument2021} with the technical specifications and implementation guide for the COVID-19 certificates. The document assumes an existing Public Key Infrastructure (PKI) for each member state and proposes a digital counterpart to the paper certificate. It recognizes the importance of a global trust framework but does not detail how to implement a global health trust framework to store the public keys of member states. Presenting technical features for selective disclosure is outside the report's scope. Thus, unlike our approach, the DDCC is not concerned with implementing a global chain of trust or a certificate that preserves data anonymity. Similar to the WHO initiative, the European Union created the EU Digital COVID Certificate~\cite{eupass2021}. This initiative also uses PKI and certificates whose data are carried in plain text, without anonymity. The approach is feasible within the European Union, which has its own PKI. Still, acceptance in foreign countries is uncertain, depending on case-by-case interoperability agreements between nations. In addition to national and international institutions, several private companies have also created their own versions of health passes, such as the BLOK Pass~\cite{blockpass2021} and the Evernym Travel Pass~\cite{travelpass2021}. Both initiatives follow SSI principles and keep users' data only on their smartphones. However, a digital vaccination pass must be open-source, enabling code transparency and making it possible for anyone to verify the system. Regarding academic efforts with similar objectives to this work, we present three research papers and point out their shortcomings.
The authors of~\cite{9105054} acknowledge that proof of vaccination or robust antibody testing will be in high demand and question what format such a certificate should take. They claim that a digital certificate would make more sense as long as it is: (i) privacy-preserving; (ii) unforgeable; (iii) easy to administer; (iv) easy to verify while preserving privacy; (v) scalable to millions of users; and (vi) cost-effective. To achieve these goals, the authors propose a mobile application that implements an architecture based on: (i) VCs; (ii) the Solid~\cite{solid2021} decentralized data platform, which stores the issued VCs on the user's smartphone; and (iii) a private Ethereum blockchain that uses proof of authority~\cite{curran2018proof}, where nodes are previously registered and authorized to confirm transactions. This solution, however, needs to register each user and each issued certificate in the blockchain, resulting in a high number of transactions. Our solution does not require the blockchain for vaccinee registration and VC issuance, as it uses direct communication between the vaccinator and the vaccinee via DIDComm. In~\cite{9286584} the authors seek to solve the challenge of proposing an immunity pass using blockchain, SSI, and re-encryption proxies~\cite{Ateniese2006}, which allow a message encrypted with the public key of $A$ to be decrypted with the private key of $B$ through a proxy that re-encrypts the ciphertext. The solution features smart contracts for the Ethereum Blockchain~\cite{ethereum2021}. The patient's smart contract stores the hash of the vaccination and immunity records and travel history to perform contact tracing. However, as it is based on smart contracts in the Ethereum network, transactions have costs, albeit small, requiring those involved to have financial resources available for execution. Lastly, the authors of~\cite{articleCovid} chose to demonstrate the use of blockchain to create immunity certificates for SARS-CoV-2 in a pre-vaccine period, with a document that certifies that the individual has been infected and is immune. The authors claim that the use of ``immunity licenses'' could create restrictions on who can and cannot interact in social spaces, encouraging counterfeiting. Therefore, the paper proposes using a government-operated blockchain registry, although it does not specify which one. After an antibody test, the details are kept in a smart contract confidentially. Their proposal uses biometric authentication as a private key for the ``account'' of patients on the blockchain, and the data are encrypted with the biometric data of the tested patient. However, the work does not offer any empirical evidence of the functioning of the proposed system. \section{Final Remarks}\label{sec:conclusion} SSI brings greater privacy, security, and ownership to user data than previous identity models. This work presents a system architecture that allows the issuance and verification of VCs based on SSI for proof of vaccination. Our implementation produces a VC with vaccination information that, through selective disclosure and ZKP, ensures proof of vaccination with a high level of privacy. While no single solution will be universally appropriate, researchers and practitioners can easily customize our system for different use cases by exchanging components such as digital wallets, the blockchain, and software agents according to their needs. \printcredits \bibliographystyle{cas-model2-names}
\section{Introduction} \label{sec:intro} Explaining the origin of neutrino masses is a key open problem in particle physics. The significant difference in magnitudes between the masses of the charged and neutral leptons suggests that the dynamics responsible for the observed light neutrino masses, generically denoted here as $m_\nu$, are different from those of the Standard Model (SM) Higgs mechanism. Among the most widely considered possibilities is the seesaw mechanism. Its theoretical attractiveness rests in part on the idea that the suppression of $m_\nu$ results from a ratio of physical scales rather than the appearance of tiny dimensionless Yukawa couplings in the Lagrangian. Several variants of the seesaw mechanism have been studied over the years, with the type I, II, and III models\,\cite{Minkowski:1977sc,Ramond:1979py,GellMann:1980vs,Yanagida:1979as,Mohapatra:1979ia,Schechter:1980gr,Schechter:1981cv,Konetschny:1977bn,Cheng:1980qt,Lazarides:1980nt,Magg:1980ut,Foot:1988aq,Witten:1985bz,Mohapatra:1986aw,Mohapatra:1986bd,Val86,Barr:2003nn,Mohapatra:1980yp} perhaps the most thoroughly considered. It remains to be seen which, if any, of these scenarios is realized in nature. In the conventional type-I model\,\cite{Minkowski:1977sc,Yanagida:1979as,GellMann:1980vs,Mohapatra:1979ia,Ramond:1979py}, the scale of the heavy, right-handed (RH) Majorana neutrinos, $M_N$, lies well above the energies directly accessible in the laboratory, making a direct probe of this scenario infeasible. Theorists have considered lower scale variants with $M_N$ at the TeV scale or below, a possibility that allows for more direct experimental tests, including the observation of the RH neutrinos in high energy collider searches or beam dump experiments. In this case, the scale of the relevant Yukawa couplings need not be too different from those of the charged leptons. In this study, we consider the type-II scenario\,\cite{Konetschny:1977bn,Magg:1980ut,Schechter:1980gr,Cheng:1980qt,Lazarides:1980nt,Mohapatra:1980yp}, wherein the scale of $m_\nu$ is governed by the product of Yukawa couplings $h_\nu$ and the vacuum expectation value (vev) $v_\Delta$ of the neutral component of a complex triplet $\Delta$ that transforms as (1,3,2) under the SM gauge groups. Constraints from electroweak precision tests require that $v_\Delta$ be no larger than a few GeV, though it could be considerably smaller. Consequently, the Yukawa couplings $h_\nu$ may be as large as $\mathcal{O}(1)$. As in the case of low scale type I models, the mass scale of the $\Delta$ may lie at the TeV scale or below without introducing new naturalness issues beyond those already present in the SM Higgs sector. It is, then, interesting to ask under what conditions one may discover the new degrees of freedom essential to the type II scenario and to what extent its interactions may be determined. In this study, we focus on these questions, paying particular attention to the $\Delta$ interactions in the scalar sector. With the discovery of the SM-like Higgs boson\cite{Aad:2012tfa,Chatrchyan:2012xdj}, it is timely to consider the scalar sector potential in more detail. In general, the presence of additional scalar degrees of freedom that interact with the Higgs doublet $\Phi$ may enhance the stability of the potential, as has been noted in the case of the $\Delta$ in Refs.{{\,\cite{Chao:2012mx,Haba:2016zbu,Chun:2012jw,Bonilla:2015eha}}}.
In addition, $\Delta$-$\Phi$ interactions may allow for a strong first order electroweak phase transition (SFOEWPT), thereby providing the needed conditions for generation of the cosmic baryon asymmetry through electroweak baryogenesis\footnote{The electroweak symmetry-breaking transition in the SM is of a crossover type\cite{Aoki:1999fi,Rummukainen:1998as,Csikor:1998eu,Laine:1998jb,Gurtler:1997hr,Kajantie:1995dw}.}. In both cases, knowledge of the Higgs portal couplings $\lambda_4$ and $\lambda_5$ (defined below) is essential. This study represents our first effort to provide a roadmap for discovery of the $\Delta$ and determination of its scalar sector couplings, building on the results of earlier studies that focus on the collider phenomenology of the $\Delta$ at LEP and the LHC\footnote{Note that the triplet $\Delta$ also exists in the left-right symmetric model (LRSM); see Refs.\,\cite{Pati:1974yy,Mohapatra:1974gc,Senjanovic:1975rk,Maiezza:2016ybz,Cao:2012ng,Hsieh:2010zr,Dev:2016dja,Jung:2008pz,Barenboim:2000nn,Huitu:1997vh,Cvetic:1991kh,Grifols:1988ag} and references therein for related works.} as well as its contributions to the SM Higgs di-photon decay rate\,\cite{Quintero:2012jy, Kanemura:2013vxa, Chun:2013vma, Yagyu:2014aaa, Kanemura:2014goa, Muhlleitner:2003me, Chun:2012zu, Kanemura:2014ipa, Chun:2013fya, Chiang:2012dk, Ghosh:2017pxl, Mitra:2016wpr, Haba:2016zbu, Chen:2013dh, Dev:2013ff, Akeroyd:2011zza, Yue:2010zu, Akeroyd:2005gt, Akeroyd:2012ms, Akeroyd:2011ir, Ong:2011fx, Akeroyd:2009hb, Huitu:2017cpc, Biswas:2017tnw, Das:2016bir, Kikuchi:2013kya, Aoki:2012jj, Aoki:2012yt, Arhrib:2012vp, Kanemura:2012rs, Aoki:2011pz, Rodejohann:2010bv, Chen:2010uc, Fukuyama:2009xk, Nishiura:2009yd, Nishiura:2009jn, Akeroyd:2009nu, Petcov:2009zr, Godbole:1994np, Gogoladze:2008gf, Akeroyd:2007zv, Garayoa:2007fw, Ma:2006mr, deS.Pires:2005au, Kakizaki:2003jk, Alanakian:1997ii, CoarasaPerez:1995wa, Arhrib:2014nya, Arbabifar:2012bd, Han:2015hba, Akeroyd:2010je, Akeroyd:2012nd, Akeroyd:2012rg, delAguila:2013mia, Cao:2014nta, Shen:2014rpa, Han:2015sca, Bi:2015fra, Shen:2015ora, Shen:2015bna, Shen:2015pih, Cao:2016hvg, Melfo:2011nx, Xing:2015wzz, Yagyu:2011hh, kang:2014jia, Bonilla:2015jdf, Perez:2008ha, Barger:1982cy, Gunion:1989in, Han:2007bk, Huitu:1996su, Dion:1998pw, Dev:2017ouk, Sui:2017qra, Agrawal:2018pci, Cai:2017mow, Li:2018jns}. {\color{black} Searches for the complex triplet scalars -- including doubly charged $H^{\pm\pm}$, singly charged $H^\pm$, and neutral Higgs particles $H$ and $A$ -- have been carried out at the LHC. A smoking gun for the complex triplet Higgs model (CTHM) has conventionally been the presence of the $H^{\pm\pm}$ decaying into a same-sign di-lepton final state, which has been intensively investigated by the ATLAS and CMS collaborations\,\cite{Aad:2012cg, Aaltonen:2011rta, ATLAS:2012mn, ATLAS:2012hi, Aad:2014hja, ATLAS:2014kca, Aad:2015oga, Sirunyan:2017ret, Aaboud:2017qph}. For other channels related to CTHM discovery, many studies have also been performed at the LHC; see Appendix\,\ref{app:expcon} for a detailed summary.} In what follows, we explore the potential for both discovery of the $\Delta$ and determination of its scalar sector couplings at a prospective future 100\,TeV proton-proton collider, such as the Super Proton Proton Collider (SppC) under consideration in China and the CERN Future Circular Collider (FCC-hh).
Given the higher center of mass energy and prospective integrated luminosity, a 100 TeV pp collider will provide coverage for a considerably larger portion of model parameter space than is feasible with the Large Hadron Collider (LHC). In this context, there exist two distinct mass spectra for the $\Delta$ (governed by the model parameters), as discussed in detail in Sec.\,\ref{subsec:input}. By working in the \lq\lq normal mass hierarchy\rq\rq, where $m_h\le m_{H/A}\approx m_\Delta\le m_{H^\pm}\le m_{H^{\pm\pm}}$ with $m_\Delta$ the mass scale of the model, we find that: \begin{itemize} \item The future 100\,TeV $pp$ collider with an integrated luminosity of $30\,\rm ab^{-1}$ can discover the triplet model up to $m_\Delta\lesssim4.5$\,TeV for $v_\Delta\le10^{-4}$\,GeV and $m_\Delta\lesssim1$\,TeV for $v_\Delta\gtrsim10^{-4}$\,GeV. Our result is shown in Fig.\,\ref{bdtdis}. \item Upon discovery, the Higgs portal parameter $\lambda_5$ can be determined from the mass spectrum of $H^{\pm\pm}$ and $H^\pm$ for $m_\Delta\lesssim1$\,TeV, while $\lambda_4$ is determined by the branching ratio (BR) of $H^\pm\to hW^\pm$. The $h\to\gamma\gamma$ decay rate also provides a complementary probe of the related parameter space, as we discuss below in relation to Fig.\,\ref{haa}. \end{itemize} In our analysis leading to these conclusions, we first study the same-sign di-lepton decay channel for $pp\to H^{++}H^{--}$, whose production cross section at $\sqrt{s}=100$ TeV is the largest among all triplet scalar channels. We find that this channel is only suitable for triplet model discovery at small $v_\Delta$, where the corresponding Yukawa couplings $h_\nu$ that govern the $H^{\pm\pm}$ decay rate can be relatively large and still consistent with the scale of $m_\nu$. For relatively large $v_\Delta$, we find that there exist other promising discovery channels, particularly $pp\to H^{\pm\pm}H^\mp$ with $H^{\pm\pm} \to W^\pm W^\pm/\ell^\pm\ell^\pm$ and $H^\mp \to h W^\mp$. Considering these channels at both small and large $v_\Delta$ will allow for discovery over the entire range of $v_\Delta$ parameter space for triplet masses up to $\sim 4.5$\,TeV ($\sim 1$\,TeV) at small (large) $v_\Delta$, as can be seen from Fig.\,\ref{bdtdis}. Assuming discovery, the next question we ask is: How does one determine the Higgs portal couplings? We find that measurement of the rate for $pp\to H^{\pm\pm}H^\mp$ with $H^{\pm\pm}H^\mp\to W^\pm W^\pm h W^\mp/\ell^\pm\ell^\pm h W^\mp$ and the $W^\mp$ decaying leptonically will be advantageous. These two channels probe a significant portion of the relevant parameter space, as can be seen from Fig.\,\ref{haa}. The presence of the charged triplet scalars with masses and couplings in the same range could also lead to an observable deviation of the $h\to\gamma\gamma$ signal strength compared to Standard Model expectations. For triplet scalar masses below roughly one TeV, the prospective future collider (circular $e^+e^-$ and $pp$) measurements of the Higgs di-photon decay rate could yield significant constraints on the values of the Higgs portal coupling needed for discovery of the $H^{\pm\pm}H^\mp\to W^\pm W^\pm h W^\mp/\ell^\pm\ell^\pm h W^\mp$ modes. For heavier triplet masses, the discovery potential for these modes would be relatively unconstrained. {{The structure of this paper is as follows: In Sec.\,\ref{sec:model}, we set up the complex triplet Higgs model and discuss its key features and various model constraints.
We also discuss neutrino mass generation from the type-II seesaw mechanism as well as experimental constraints on the neutrino masses. In Sec.\,\ref{paramdeter}, we focus on how to determine the model parameters from future collider measurements, and in Sec.\,\ref{sec:decayprod}, we study production cross sections and decay patterns of the triplet Higgs particles. Sec.\,\ref{sec:modeldis} presents our results for model discovery at the 100\,TeV collider, and Sec.\,\ref{sec:lam45} discusses a strategy for the determination of $\lambda_4$. Sec.\,\ref{sec:conclusion} presents our conclusions, and further details are summarized in the Appendices.}} \section{The Complex Triplet Higgs Model} In this section, we discuss the setup of the triplet model and various model constraints. We also discuss key features of the model in Sec.\,\ref{subsec:mdlkey} and close this section by illustrating how neutrino masses are generated through a Type-II seesaw mechanism and by discussing current constraints on the neutrino masses. \label{sec:model} \subsection{Model setup}\label{model:setup} The type-II seesaw model contains the SM Higgs doublet $\Phi$ with hypercharge $Y_\Phi=1$ and the complex triplet Higgs field $\Delta$ with hypercharge $Y_\Delta=2$\,\cite{Konetschny:1977bn} written in a matrix form \,\cite{Mohapatra:1979ia, Cheng:1980qt, Lazarides:1980nt, Schechter:1980gr} \begin{eqnarray}\label{basis} \Phi=\left[ \begin{array}{c} \varphi^+\\ \frac{1}{\sqrt{2}}(\varphi+v_\Phi+i\chi) \end{array}\right], \quad \Delta = \left[ \begin{array}{cc} \frac{\Delta^+}{\sqrt{2}} & H^{++}\\ \frac{1}{\sqrt{2}}(\delta+v_\Delta+i\eta) & -\frac{\Delta^+}{\sqrt{2}} \end{array}\right], \end{eqnarray} where $v_\Phi$ denotes the doublet vev satisfying $\sqrt{v_\Phi^2 + v_\Delta^2}\equiv v\approx246$\,GeV, which sets the scale of spontaneous electroweak symmetry breaking (EWSB). As will be discussed below, $v_\Delta$ is strongly constrained by the $\rho$ parameter. This scalar extension of the SM is also known as the complex triplet Higgs model (CTHM). The kinetic Lagrangian is \begin{align} \mathcal{L}_{\rm{kin}}&=(D_\mu \Phi)^\dagger (D^\mu \Phi)+\rm{Tr}[(D_\mu \Delta)^\dagger (D^\mu \Delta)], \end{align} with the covariant derivatives \begin{equation} D_\mu \Phi=\left(\partial_\mu+i\frac{g}{2}\tau^aW_\mu^a+i\frac{g'Y_{\Phi}}{2}B_\mu\right)\Phi, \quad D_\mu \Delta=\partial_\mu \Delta+i\frac{g}{2}[\tau^aW_\mu^a,\Delta]+i\frac{g'Y_{\Delta}}{2}B_\mu\Delta, \end{equation} where $g'$ and $g$ are the U(1)$_Y$ and SU(2)$_L$ gauge couplings, respectively. The second term in $D_\mu \Delta$ introduces new interactions between the electroweak gauge bosons and the triplet, which contribute to the masses of the former when the triplet acquires a nonzero vev. We write the general CTHM potential as \begin{align} V(\Phi,\Delta)&= - m^2\Phi^\dagger\Phi + M^2\rm{Tr}(\Delta^\dagger\Delta)+\left[\mu \Phi^Ti\tau_2\Delta^\dagger \Phi+\rm{h.c.}\right]+\lambda_1(\Phi^\dagger\Phi)^2 \nonumber\\ &~~~~+\lambda_2\left[\rm{Tr}(\Delta^\dagger\Delta)\right]^2 +\lambda_3\rm{Tr}[ \Delta^\dagger\Delta \Delta^\dagger\Delta] +\lambda_4(\Phi^\dagger\Phi)\rm{Tr}(\Delta^\dagger\Delta)+\lambda_5\Phi^\dagger\Delta\Delta^\dagger\Phi, \end{align} where $m$ and $M$ are the mass parameters and $\lambda_i$ ($i=1,\ldots, 5$) are the dimensionless quartic scalar couplings, which are all real due to hermiticity of the Lagrangian. The $\mu$ parameter, however, is in general complex and, thus, a possible source of CP violation (CPV).
But as discussed in Ref.\,\cite{Arhrib:2011uy,Dey:2008jm}, the CPV phase from $\mu$ is in fact unphysical and can always be absorbed by a redefinition of the triplet field. After EWSB, the minimization conditions \be \frac{\partial V}{\partial \Phi_j} = 0, \qquad \frac{\partial V}{\partial \Delta_j} = 0 \ee imply that \bea m^2 &=& \lambda_1 v_\Phi^2 + \frac{\lambda_{45}v_\Delta^2}{2} - \sqrt2 \mu v_\Delta, \\ M^2 &=& \frac{ \mu v_\Phi^2}{\sqrt{2}v_\Delta} - \lambda_{23} v_\Delta^2 - \frac{\lambda_{45}v_\Phi^2}{2}, \eea with \begin{align} \lambda_{ij}\equiv\lambda_i+\lambda_j. \end{align} We will use the same notation below. The scalar states are, in general, mixtures of the field components that carry the same electric charge: ($\varphi$, $\delta$, $\chi$, $\eta$); ($\varphi^{\pm}$, $\Delta^\pm$); and $H^{\pm\pm}$, which is already in its mass eigenstate. The absence of a CPV phase in the potential implies that the real and imaginary parts of the neutral doublet and triplet fields cannot mix with each other. To diagonalize the corresponding mass matrices, we introduce the following matrices to rotate them into their mass eigenstates $G^0$, $A$, $h$, $H$, $G^\pm$ and $H^\pm$: \begin{eqnarray} \left(\begin{array}{c}\varphi\\\delta\end{array}\right)&=&\left(\begin{array}{cc}\cos \alpha & -\sin\alpha \\\sin\alpha & \cos\alpha\end{array}\right)\left(\begin{array}{c}h\\H\end{array}\right),\label{mhpp} \quad \left(\begin{array}{c}\varphi^\pm\\\Delta^\pm\end{array}\right)=\left(\begin{array}{cc}\cos \beta_\pm & -\sin\beta_\pm \\\sin\beta_\pm & \cos\beta_\pm\end{array}\right)\left(\begin{array}{c} G^\pm\\H^\pm\end{array}\right),\nonumber\\ \left(\begin{array}{c}\chi\\\eta\end{array}\right)&=&\left(\begin{array}{cc}\cos \beta_0 & -\sin\beta_0 \\\sin\beta_0 & \cos\beta_0\end{array}\right)\left(\begin{array}{c} G^0\\ A\end{array}\right), \end{eqnarray} with the mixing angles given by \begin{eqnarray} \cos\beta_\pm&=&\frac{v_\Phi}{\sqrt{v_\Phi^2+2v_\Delta^2}},\quad \sin\beta_\pm=\frac{\sqrt{2}v_\Delta}{\sqrt{v_\Phi^2+2v_\Delta^2}},\quad \tan\beta_\pm=\frac{\sqrt{2}v_\Delta}{v_\Phi},\label{betapm} \\ \cos\beta_0&=&\frac{v_\Phi}{\sqrt{v_\Phi^2+4v_\Delta^2}},\quad \sin\beta_0=\frac{2v_\Delta}{\sqrt{v_\Phi^2+4v_\Delta^2}},\quad \tan\beta_0=\frac{2v_\Delta}{v_\Phi}, \label{beta0}\\ \tan2\alpha &=&\frac{v_\Delta}{v_\Phi}\cdot\frac{2v_\Phi \lambda_{45}-\frac{2 \sqrt2 \mu v_\Phi}{v_\Delta} }{2v_\Phi\lambda_1-\frac{v_\Phi\mu}{\sqrt{2}v_\Delta}-\frac{2 v_\Delta^2 \lambda_{23}}{v_\Phi}}. \label{tan2a} \end{eqnarray} Here $G^0$ and $G^\pm$ are the would-be Goldstone bosons that become the longitudinal components of the $Z$ and $W^\pm$. Among the remaining scalars, $A$ is the pseudoscalar; $h$ is the CP-even Higgs, which is recognized as the SM Higgs particle; $H$ is the other CP-even Higgs particle with a heavier mass compared with $h$; and $H^\pm$ and $H^{\pm\pm}$ are the singly- and doubly-charged Higgs particles respectively. 
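As a simple bookkeeping check, the doublet and triplet carry $4+6=10$ real scalar degrees of freedom; after $G^0$ and $G^\pm$ are eaten by the $Z$ and $W^\pm$, seven physical states remain, namely $h$, $H$, $A$, $H^\pm$, and $H^{\pm\pm}$, matching the spectrum listed above. Furthermore, as an illustrative numerical estimate of our own, even for $v_\Delta$ as large as $1$\,GeV (close to the upper bound from the $\rho$ parameter discussed below), the doublet-triplet mixing in the charged and CP-odd sectors is tiny,
\begin{equation}
\sin\beta_\pm=\frac{\sqrt{2}\,v_\Delta}{\sqrt{v_\Phi^2+2v_\Delta^2}}\approx\frac{\sqrt{2}\times1\,{\rm GeV}}{246\,{\rm GeV}}\approx6\times10^{-3},\qquad
\sin\beta_0\approx\frac{2\times1\,{\rm GeV}}{246\,{\rm GeV}}\approx8\times10^{-3},
\end{equation}
so that $G^{0}$ and $G^\pm$ are almost purely doublet-like, while $A$ and $H^\pm$ are almost purely triplet-like.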
It is useful to express the corresponding mass eigenvalues in terms of the parameters in the potential, vevs, and mixing angles: \begin{eqnarray} &&m_{H^{\pm\pm}}^2=m_\Delta^2-v_\Delta^2\lambda_3-\frac{\lambda_5}{2}v_\Phi^2,\label{mhpp}\\ &&m_{H^\pm}^2=\left(m_\Delta^2-\frac{\lambda_5}{4}v_\Phi^2\right)\left(1+\frac{2v_\Delta^2}{v_\Phi^2}\right),\label{mhp}\\ &&m_A^2 =m_\Delta^2\left(1+\frac{4v_\Delta^2}{v_\Phi^2}\right), \label{mA}\\ &&m_h^2= 2v_\Phi^2\lambda_1\cos^2\alpha+\left( m_\Delta^2+2\lambda_{23}v_\Delta^2\right) \sin^2\alpha + \left( \lambda_{45} v_\Phi v_\Delta - \frac{2v_\Delta}{v_\Phi}m_\Delta^2\right) \sin2\alpha,\label{mh}\\ &&m_H^2=2v_\Phi^2\lambda_1\sin^2\alpha+ \left( m_\Delta^2+2\lambda_{23}v_\Delta^2 \right) \cos^2\alpha - \left( \lambda_{45} v_\Phi v_\Delta - \frac{2v_\Delta}{v_\Phi}m_\Delta^2 \right) \sin2\alpha,\label{mH} \end{eqnarray} where \begin{align} m_\Delta^2\equiv \frac{v_\Phi^2\mu}{\sqrt{2}v_\Delta}. \end{align} As will be discussed below, experimental constraints on the $\rho$ parameter require $v_\Delta\ll v_{\Phi}$, which in turn results in a small $\sin\alpha$ in general as can be seen from Eq.\eqref{tan2a}. Taking the small $v_\Delta$ and $\sin\alpha$ limit, we see that, from the mass expressions above, $m_\Delta$ basically determines the mass scale of the CTHM. We will discuss this in more detail in Sec.\,\ref{subsec:input}. Since we seek to gain information about the potential parameters from measurements of the scalar boson properties, it is also useful to express the potential parameters in terms of the masses, vevs, and mixing angles: \begin{eqnarray} \mu&=&\frac{\sqrt{2}v_\Delta^2}{v_\Phi^2}m_\Delta^2 =\frac{\sqrt{2}v_\Delta}{v_\Phi^2+4v_\Delta^2}m_A^2,\\ \lambda_1 & = &\frac{1}{2v_\Phi^2}(m_h^2\cos^2\alpha+m_H^2\sin^2\alpha),\\ \lambda_2 & =& \frac{1}{2v_\Delta^2}\left[2m_{H^{\pm\pm}}^2+v_\Phi^2\left(\frac{m_A^2}{v_\Phi^2+4v_\Delta^2}-\frac{4m_{H^{\pm}}^2}{v_\Phi^2+2v_\Delta^2}\right)+m_H^2\cos^2\alpha+m_h^2\sin^2\alpha\right],\label{lam2}\\ \lambda_3 & =& \frac{v_\Phi^2}{v_\Delta^2}\left(\frac{2m_{H^{\pm}}^2}{v_\Phi^2+2v_\Delta^2}-\frac{m_{H^{\pm\pm}}^2}{v_\Phi^2}-\frac{m_A^2}{v_\Phi^2+4v_\Delta^2}\right), \label{lam3} \end{eqnarray} \begin{eqnarray} \lambda_4 & =& \frac{4m_{H^{\pm}}^2}{v_\Phi^2+2v_\Delta^2}-\frac{2m_A^2}{v_\Phi^2+4v_\Delta^2}+\frac{m_h^2-m_H^2}{2v_\Phi v_\Delta}\sin2\alpha, \\ \lambda_5 & =& 4\left(\frac{m_A^2}{v_\Phi^2+4v_\Delta^2}-\frac{m_{H^{\pm}}^2}{v_\Phi^2+2v_\Delta^2}\right). \end{eqnarray} From Eq.\,\eqref{lam2}-\eqref{lam3}, we observe that $v_\Delta$ appears in the denominators. Thus, if we take the physical masses as our model input, then in the small $v_\Delta$ limit, we may need to fine tune the masses in order to maintain perturbative values for the couplings $\lambda_{2,3}$. {{Consequently, we will use $\lambda_{2,3}$ as independent input parameters for simulation.}} \vspace{-0.5cm} \subsection{Model constraints} \label{model:const} \subsubsection{Constraint on $v_\Delta$ from the $\rho$ parameter}\label{vdcons} After the EWSB, the electroweak gauge boson masses receive contributions from both the doublet and triplet vevs. At tree level, one has \begin{eqnarray} m_W^2 = \frac{g^2}{4}(v_\Phi^2+2v_\Delta^2), \quad m_Z^2 =\frac{g^2}{4\cos^2\theta_W}(v_\Phi^2+4v_\Delta^2), \end{eqnarray} with $\theta_W$ the weak mixing angle. 
The ratio between $m_W$ and $m_Z$ is strongly constrained through the $\rho$ parameter which is defined as \begin{eqnarray} \rho \equiv \frac{m_W^2}{m_Z^2\cos^2\theta_W}\longeq{CTHM}\frac{1+\frac{2v_\Delta^2}{v_\Phi^2}}{1+\frac{4v_\Delta^2}{v_\Phi^2}}.\label{mwz} \end{eqnarray} The SM predicts $\rho=1$ exactly at tree level, which has been confirmed experimentally to high precision. One therefore expects $v_\Delta$ to be much smaller than $v_\Phi$ from Eq.\,\eqref{mwz} in the CTHM, and in small $v_\Delta$ limit, \begin{eqnarray} \rho \simeq 1 - \frac{2v_\Delta^2}{v_\Phi^2}. \end{eqnarray} Electroweak precision tests\cite{Agashe:2014kda} gives the $1\sigma$ result $\rho=1.0006\pm0.0009$, which leads to \bea 0\le v_\Delta \lesssim 3.0 {\rm ~\,GeV} \eea and thus $v_\Delta\ll v_\Phi$. \subsubsection{Constraint from stability, perturbative unitarity, and perturbativity}\label{secallconst} \begin{figure}[thb!] \captionstyle{flushleft} \begin{tabular}{cc} \includegraphics[scale=0.32]{plot/stability5.pdf} ~& ~ \includegraphics[width=75mm,height=75mm]{plot/running.pdf} \end{tabular} \caption{{{Left panel: Tree-level vacuum stability (green region) and perturbative unitarity (orange region) constraints on the $\lambda_4$-$\lambda_5$ plane with $\lambda_2=0.2$ and $\lambda_3=0$. Right panel: One-loop running of the Higgs quartic couplings at $\lambda_2=0.2$, $\lambda_3=0$, $\lambda_4=0$ and $\lambda_5=-0.1$ with $M_t=173.1$\,GeV being our input scale. The black arrow in the left figure corresponds to regions in which vacuum stability is stable up to a higher scale.}}}\label{stability} \end{figure} {{Constraints from vacuum stability, perturbative unitarity, and perturbativity have been studied in\,\cite{Arhrib:2011uy,Haba:2016zbu,Chao:2006ye,Schmidt:2007nq, Bonilla:2015eha,Machacek:1983tz,Machacek:1983fi,Machacek:1984zw,Ford:1992pn,Arason:1991ic,Barger:1992ac,Luo:2002ey,Chao:2012mx,Chun:2012jw} and are summarized below in our notation:}} \begin{itemize} \item Vacuum stability (VS)\footnote{Here and below, ``\&'' means the logical conjunction ``and''.}: \begin{align} \lambda_1\ge0 ~\&~ \lambda_2+\text{Min}\left\{\lambda_3,\frac{\lambda_3}{2}\right\}\ge0 ~\&~ \nonumber\\ \lambda_4+\text{Min}\left\{0,\lambda_5\right\}+2\text{Min}\left\{\sqrt{\lambda_1\lambda_{23}},\sqrt{\lambda_1(\lambda_2+\frac{\lambda_3}{2})}\right\}\ge0. \end{align} \item Perturbative unitarity (PU): \begin{align} |\lambda_{45}|\le\kappa\pi ~\&~ |\lambda_4|\le\kappa\pi ~\&~ |2\lambda_4+3\lambda_5|\le2\kappa\pi~\&~2|\lambda_1|\le\kappa\pi~\&~2|\lambda_2|\le\kappa\pi ~\&~ \nonumber\\ 2|\lambda_{23}|\le\kappa\pi~\&~|\lambda_4-\frac{\lambda_5}{2}|\le\kappa\pi~\&~|2\lambda_2-\lambda_3|\le\kappa\pi ~\&~ \nonumber\\ |\lambda_{12}+2\lambda_3\pm\sqrt{(\lambda_1-\lambda_2-2\lambda_3)^2+\lambda_5^2}|\le\kappa\pi ~\&~ \nonumber\\ |3\lambda_{13}+4\lambda_2\pm\sqrt{(3\lambda_1-4\lambda_2-3\lambda_3)^2+\frac{3}{2}(2\lambda_4+\lambda_5)^2}|\le\kappa\pi,\label{peruni} \end{align} where $\kappa=8$ or $16$ depending on one's choice on the partial wave amplitude of an elastic scalar scattering from the consideration of S-matrix unitarity. For detailed discussion, see Ref.\,\cite{Arhrib:2011uy}. 
\item Perturbativity: {Keeping only the top Yukawa coupling, gauge interactions, and scalar potential couplings, the one-loop renormalization group equations (RGEs) rewritten in our notation are\footnote{Two-loop RGEs for the Higgs portal parameters have been studied in Ref.\,\cite{Chao:2012mx}.}} \begin{align} \left(4\pi\right)^{2}\frac{dg_{i}}{dt} & =b_{i}g_{i}^{3}\textrm{ with }b_{i}=\left(\frac{47}{10},-\frac{5}{2},-7\right)\,,\\ \left(4\pi\right)^{2}\frac{dy_t}{dt} & =y_t\left[\frac{9}{2}y_t^2-\left(\frac{17}{20}g_1^2+\frac{9}{4}g_2^2+8g_3^2\right)\right]\,,\\ \left(4\pi\right)^{2}\frac{d\lambda_{1}}{dt} & =\frac{27}{200}g_{1}^{4}+\frac{9}{20}g_{1}^{2}g_{2}^{2}+\frac{9}{8}g_{2}^{4}-\left(\frac{9}{5}g_{1}^{2}+9g_{2}^{2}\right)\lambda_{1}+24\lambda_{1}^{2}+3\lambda_{4}^{2}+3\lambda_{4}\lambda_{5}+\frac{5}{4}{\lambda_{5}}^{2}\nonumber \\ & +12\lambda_{1}y_{t}^{2}-6y_{t}^{4}\,,\\ \left(4\pi\right)^{2}\frac{d\lambda_{2}}{dt} & =\frac{54}{25}g_{1}^{4}-\frac{36}{5}g_{1}^{2}g_{2}^{2}+15g_{2}^{4}-\left(\frac{36}{5}g_{1}^{2}+24g_{2}^{2}\right)\lambda_{2}+2\lambda_{4}^{2}+2\lambda_{4}\lambda_{5}\nonumber \\ & +28\lambda_{2}^{2}+24\lambda_{2}\lambda_{3}+6{\lambda_{3}}^{2}\,, \end{align} \begin{align} \left(4\pi\right)^{2}\frac{d\lambda_{3}}{dt} & =\frac{72}{5}g_{1}^{2}g_{2}^{2}-6g_{2}^{4}+{\lambda_{5}}^{2}-\left(\frac{36}{5}g_{1}^{2}+24g_{2}^{2}\right)\lambda_{3}+24\lambda_{2}\lambda_{3}+18{\lambda_{3}}^{2}\,,\\ \left(4\pi\right)^{2}\frac{d\lambda_{4}}{dt} & =\frac{27}{25}g_{1}^{4}-\frac{18}{5}g_{1}^{2}g_{2}^{2}+6g_{2}^{4}-\left(\frac{9}{2}g_{1}^{2}+\frac{33}{2}g_{2}^{2}\right)\lambda_{4}+12\lambda_{1}\lambda_{4}+4\lambda_{1}\lambda_{5}+4\lambda_{4}^{2}\nonumber \\ & +16\lambda_{2}\lambda_{4}+12\lambda_{3}\lambda_{4}+{\lambda_{5}}^{2}+6\lambda_{2}\lambda_{5}+2\lambda_{3}\lambda_{5}+6\lambda_{4}y_{t}^{2}\,,\\ \left(4\pi\right)^{2}\frac{d\lambda_{5}}{dt} & =\frac{36}{5}g_{1}^{2}g_{2}^{2}-\left(\frac{9}{2}g_{1}^{2}+\frac{33}{2}g_{2}^{2}\right)\lambda_{5}+4\lambda_{1}\lambda_{5}+8\lambda_{4}\lambda_{5}+4{\lambda_{5}}^{2}+4\lambda_{2}\lambda_{5}\nonumber \\ & +8\lambda_{3}\lambda_{5}+6\lambda_{5}y_{t}^{2}\,. \end{align} with $t\equiv\ln(\mu/m_t)$. For perturbativity, we require a similar approximate condition on the quartic Higgs couplings as in Ref.\,\cite{Gonderinger:2012rd}, which is based on the work of Ref.\,\cite{Riesselmann:1996is} i.e., \begin{align} \lambda_i(\mu)\lesssim\lambda_{\rm FP}/3, \quad \forall\ m_Z\leq\mu\leq\Lambda, \end{align} where $\lambda_{\rm FP}\simeq12$ in the renormalization of Ref.\,\cite{Hambye:1996wb} and $\Lambda$ is the cutoff scale of the theory. \end{itemize} {\color{black}{Fig.~\ref{stability} gives constraints from VS (green region) and PU (orange region) at tree-level. The black dot corresponds to our benchmark point discussed in Sec.\,\ref{subsec:BMandS}, {\it i.e.}, \begin{equation} \lambda_2=0.2\, , \quad \lambda_3 = \lambda_4 = 0\, , \quad \lambda_5 = -0.1\,. \end{equation} After solving the above mentioned RGEs, one finds that that VS and perturbativity up to the Planck scale impose stringent constraints on $\lambda_i$'s\,\cite{Chao:2012mx}. For our benchmark point as input at the scale $\mu=m_t$, the resulting running couplings are shown in Fig.\,\ref{stability}. From the right panel of Fig.\,\ref{stability}, it is clear that the CTHM stays perturbative even at the Planck scale. We also find that the potential develops a second minimum at $\mathcal{O}(10^{5}\text{-}10^{6}\rm \,GeV)$. 
The presence of this second minimum implies that the SM vacuum may become either unstable or metastable above this scale. In principle, stability could be preserved to higher scales through additional contributions to the RGEs associated with particles heavier than this threshold. A detailed investigation of the possible UV embedding of the CTHM goes beyond the scope of the present study. We observe, however, that the stability region for our benchmark point lies well above the range of triplet scalar masses that we consider below. Moreover, one may also increase the scale at which the potential may develop a second minimum by increasing $\lambda_4$ while preserving perturbativity, which is indicated by the black arrow in the left panel of Fig.\,\ref{stability}. We will discuss this point further in Sec.\,\ref{subsec:disbdt}.}} \subsection{Key features of the CTHM} \label{subsec:mdlkey} Since $v_\Delta\ll v_\Phi$ due to the $\rho$ parameter constraint, we expect, in general, $\tan2\alpha$ (and thus $\sin\alpha$) to be small. In this case, we have, from Eq.\,\eqref{tan2a}, \begin{align} \tan2\alpha\approx \frac{v_\Delta}{v_\Phi} \cdot \frac{2v_\Phi^2\lambda_{45}-4m_\Delta^2}{{2\lambda_1 v_\Phi^2}-m_\Delta^2} {{\approx\frac{v_\Delta}{v_\Phi}\cdot\frac{2v_\Phi^2\lambda_{45}-4m_\Delta^2}{m_h^2-m_\Delta^2} }}.\label{t2aapprox} \end{align} {{In this small $\sin\alpha$ limit, the expressions for the masses given in Eqs.\,(\ref{mhpp})-(\ref{mH}) can be simplified to}} \begin{align}\label{hierarchy} m_h^2\simeq2v_\Phi^2\lambda_1\simeq2v^2\lambda_1,~~ m_H \simeq m_\Delta \simeq m_A,~~ m_{H^\pm}^2\simeq m_\Delta^2-\frac{\lambda_5}{4}v_\Phi^2,~~ m_{H^{\pm\pm}}^2\simeq m_\Delta^2-\frac{\lambda_5}{2}v_\Phi^2. \end{align} We see that $m_\Delta$ sets the overall mass scale of the triplet scalars whereas $\lambda_1$ is basically determined by $m_h$ and $v$. Moreover, in the large $m_\Delta$ limit, the mass splitting is \begin{align} \Delta m=|m_{H^{\pm\pm}}-m_{H^\pm}|\approx|m_{H^\pm}-m_{H,A}|\approx\frac{|\lambda_5|v_\Phi^2}{8m_\Delta}\approx\frac{|\lambda_5|v^2}{8m_\Delta},\label{massspec} \end{align} which depends only on $\lambda_5$, $m_\Delta$, and $v$. For instance, for $|\lambda_5|=0.1$ and $m_\Delta=1$\,TeV, one finds $\Delta m\approx0.1\times(246\,{\rm GeV})^2/(8\times10^3\,{\rm GeV})\approx0.8$\,GeV. Thus, by measuring the masses of any two triplet scalars of differing charges, one could determine both $m_\Delta$ and the Higgs portal coupling $\lambda_5$. A practical corollary is that, in the large $m_\Delta$ limit, once one of the triplet Higgs particles is discovered, the relatively small mass splitting (compared to $m_\Delta$) provides guidance as to the mass region in which to search for the other triplet scalars. \subsection{Neutrino masses from a type-II seesaw mechanism} \label{subsec:seesaw} In the CTHM, the neutrino masses are generated through a type-II seesaw mechanism via the Yukawa Lagrangian\,\cite{Mohapatra:1979ia, Cheng:1980qt, Lazarides:1980nt, Schechter:1980gr} \begin{align} \mathcal{L}_Y= & (h_\nu)_{ij}\overline{L^{ic}}i\tau_2\Delta L^j+\rm{h.c.}.\label{nvlag} \end{align} Here, $L=(\nu_L,e_L)^T$ is the lepton SU(2)$_L$ doublet; $h_\nu$ is the neutrino Yukawa matrix, which is a $3\times3$ complex, symmetric matrix, as shown for the general case in Ref.\,\cite{Bilenky:1987ty}. After EWSB with $v_\Delta\neq0$, neutrinos of different flavors mix through $h_\nu$, as implied by neutrino oscillations.
The mass matrix $h_\nu v_\Delta$ also breaks lepton number explicitly\footnote{In principle, one could assign a lepton number of $-2$ to $\Delta$ so that the overall Lagrangian conserves lepton number before EWSB. The third term in $V(\Phi, \Delta)$ would then explicitly break lepton number conservation. The coefficient of the dimension-five lepton number violating mass term ${\bar{L^C}} H^T H L$ is then proportional to $\mu/M^2$.}, implying that neutrinos are of the Majorana type with their masses given by \begin{align} (m_\nu)_{ij}=\sqrt{2}(h_\nu)_{ij} v_\Delta. \label{numass} \end{align} {{Experimentally, the sum of the neutrino masses is constrained to be $\sum_i m_i<0.23$\,eV by the Planck Collaboration, assuming three light massive neutrinos and the validity of the $\Lambda$ Cold Dark Matter ($\Lambda$CDM) model, and using supernovae and Baryon Acoustic Oscillation data\,\cite{Agashe:2014kda, Ade:2015xua}. Given this constraint, we choose $m_\nu=0.01\rm\,eV$ for each of the three light neutrinos throughout the paper. In principle, one can choose a larger (smaller) value for the neutrino masses while still satisfying the experimental constraints. Larger (smaller) neutrino masses correspond to a larger (smaller) $h_\nu$ for fixed $v_\Delta$, which in turn affects the same-sign di-lepton decay BRs of $H^{\pm\pm}$. The BRs then affect the parameter space relevant for model discovery. We will discuss the effects of smaller/larger $m_\nu$ in Sec.\,\ref{subsec:disbdt}.}} \section{Model parameter determination}\label{paramdeter} The model parameters of the CTHM are, na\"ively, $\{g, g', v_\Phi, v_\Delta, \mu, \lambda_i, h_{\nu}\}$ ($i=1,\dots,5$), or, in terms of mass eigenstates after EWSB, $\{\alpha_{E.M.}, G_F, m_Z, m_h, m_H, m_A, m_{H^\pm},$ $m_{H^{\pm\pm}}, v_\Delta, \sin\alpha, m_\nu\}$. Here $\alpha_{E.M.}$, $G_F$, $m_Z$ and $m_h$ are already well known from electroweak precision and Higgs mass measurements; determining the remaining parameters of the CTHM requires discovery of the new particles, measurement of their masses, and measurement of the mixing angle $\sin\alpha$. Therefore, in the following sub-sections, we discuss how to determine the other parameters of the CTHM experimentally. At the end of this section, we also discuss how to choose the input model parameters from considerations of perturbativity, which is essential for our collider study in Sec.\,\ref{sec:modeldis} and Sec.\,\ref{sec:lam45}. \subsection{Mass spectrum and determination of $\lambda_1$ and $\lambda_5$} \label{subsec:input} From Sec.\,\ref{subsec:mdlkey}, we conclude that $\sin\alpha$ is in general small, and in this small $\sin\alpha$ limit we have Eq.\,\eqref{hierarchy}, i.e., \begin{align} m_h^2\simeq2v_\Phi^2\lambda_1,~~ m_H \simeq m_\Delta \simeq m_A,~~ m_{H^\pm}^2\simeq m_\Delta^2-\frac{\lambda_5}{4}v_\Phi^2,~~ m_{H^{\pm\pm}}^2\simeq m_\Delta^2-\frac{\lambda_5}{2}v_\Phi^2.\tag{\ref{hierarchy}} \end{align} We see that: (a) when $\lambda_5\le0$, $m_h<m_H\simeq m_A\le m_{H^\pm}\le m_{H^{\pm\pm}}$; we call this the Normal Mass Hierarchy (NMH). (b) When $\lambda_5\ge0$, $m_{H^{\pm\pm}} \le m_{H^{\pm}}\le m_A\simeq m_H$ and $m_h< m_H$; we call this the Reversed Mass Hierarchy (RMH). For the NMH, the SM-like $h$ is the lightest particle and $H^{\pm\pm}$ the heaviest, so the ordering of the mass spectrum is unique.
For the RMH, in contrast, $A$ (or equivalently $H$) is the heaviest particle, but the mass ordering between $h$ and ($H^\pm$, $H^{\pm\pm}$) is not fixed and will in general depend on the model input. From $m_h^2\simeq2v^2\lambda_1$, we obtain $\lambda_1\approx\frac{m_h^2}{2v^2}\approx0.129$. To determine $\lambda_5$, one can use the mass splitting $\Delta m\approx\frac{|\lambda_5|v^2}{8m_\Delta}$ defined in Eq.\,\eqref{massspec} upon discovery. \subsection{Measurement of the mixing angle $\sin\alpha$ for determination of $\lambda_4$}\label{lam4determ} \begin{figure}[thb!] \captionstyle{flushleft} \begin{tabular}{cc} \includegraphics[width=80mm,height=65mm]{plot/sa1.pdf} & \includegraphics[width=80mm,height=65mm]{plot/sa2.pdf} \end{tabular} \caption{The dependence of $\sin\alpha$ on $\lambda_{23}$ is negligible due to the smallness of $v_\Delta$, and $\lambda_1\approx m_h^2/(2v^2)\approx0.129$, such that $\sin\alpha$ is approximately a function of $\lambda_{45}$, $m_\Delta$ and $v_\Delta$ only. In the left (right) panel we fix $m_\Delta=300$\,GeV ($v_\Delta=0.1$\,GeV) and plot $\sin\alpha$ as a function of $\lambda_{45}$ for different $v_\Delta$'s ($m_\Delta$'s). One observes that $\sin\alpha$ becomes sufficiently small for increasing $m_\Delta$ and/or decreasing $v_\Delta$.}\label{saplot} \end{figure} To determine $\lambda_4$, we note that from Eq.\,\eqref{t2aapprox}, we can solve for $\alpha$: \begin{align} \alpha\approx\left\{\begin{array}{cc}\frac{1}{2}\arctan\left(\frac{v_\Delta}{v_\Phi}\cdot\frac{2v_\Phi^2\lambda_{45}-4m_\Delta^2}{m_h^2-m_\Delta^2}\right), & \text{if~} \frac{2v_\Phi^2\lambda_{45}-4m_\Delta^2}{m_h^2-m_\Delta^2}\ge0\\ \pi+\frac{1}{2}\arctan\left(\frac{v_\Delta}{v_\Phi}\cdot\frac{2v_\Phi^2\lambda_{45}-4m_\Delta^2}{m_h^2-m_\Delta^2}\right), & \text{if~} \frac{2v_\Phi^2\lambda_{45}-4m_\Delta^2}{m_h^2-m_\Delta^2}<0\end{array}\right.,\label{alphaeq} \end{align} which implies that $\sin\alpha$ is in general a two-to-one function of $\lambda_{45}$. This feature is reflected graphically in Fig.\,\ref{saplot}. In addition, from Fig.\,\ref{saplot}, we see that $\sin\alpha$ indeed decreases with increasing $m_\Delta$ and/or decreasing $v_\Delta$. For example, when $m_\Delta\gtrsim300$\,GeV and/or $v_\Delta\lesssim0.1$\,GeV, {{$\sin\alpha\lesssim0.01$.}} \begin{center} \begin{minipage}{\linewidth} \centering \captionof{table}{Three-point vertices related to the determination of $\lambda_{4,5}$. $\lambda_5$ is determined through the mass splitting, while $\lambda_4$ is determined through the mixing angle $\sin\alpha$, which is sensitive to $\lambda_{45}$.} \label{tab:1} \begin{tabular}{ C{1.25in} C{3.85in} }\toprule[1.5pt] \bf Vertex & \bf Coupling \\\midrule $hAZ$ & $-\frac{g}{2\cos\theta_W}(\cos\alpha\sin\beta_0-2\sin\alpha \cos\beta_0)$ \\\midrule $HZZ$ & $\frac{2ie m_Z}{\sin2\theta_W}(2\sin\beta_0\cos\alpha-\cos\beta_0\sin\alpha)$ \\\midrule $HW^+W^-$ & $igm_Z\cos\theta_W(\sin\beta_0\cos\alpha - \cos\beta_0\sin\alpha)$ \\\midrule $hH^-W^+$ & $\frac{ig}{2}(\sin\beta_\pm\cos\alpha - \sqrt{2}\cos\beta_\pm\sin\alpha)$ \\ \bottomrule[1.25pt] \end{tabular}\par \bigskip \end{minipage} \end{center} On the other hand, the variation of $\sin\alpha$ with $\lambda_{45}$ can also be used to determine $\lambda_{45}$ through various gauge boson-Higgs couplings. {We focus on gauge boson-Higgs vertices as electroweak production of the triplet Higgs particles is the dominant production mechanism in the CTHM.
After a careful investigation of all the three-point vertices listed in Appendix\,\ref{frapp}, we find that only four of the gauge boson-Higgs couplings depend linearly on $\sin\alpha$.}\footnote{{Some of the non-gauge-boson-Higgs vertices also depend linearly on $\sin\alpha$, as can be seen from the $hH^{++}H^{--}$ vertex in Appendix\,\ref{frapp}, but the corresponding production cross section is small compared with the dominant electroweak production.}} {These couplings will eventually affect the decay BRs of the BSM particles. Thus, after their discovery, one could determine $\lambda_5$ from the mass splitting and $\lambda_4$ from the triplet Higgs decay BRs} \footnote{{Here we remind the reader that the Higgs portal parameters $\lambda_{4,5}$ are of particular interest as they may allow a SFOEWPT to explain the baryon asymmetry of the universe (BAU). In this paper, however, we will not discuss the effects of the CTHM on the phase transition or baryogenesis, but rather leave them for future work.}}. \subsection{$\lambda_2$ and $\lambda_3$ determination} In contrast to $\lambda_4$ and $\lambda_5$, however, $\lambda_2$ and $\lambda_3$ are in general very difficult, if not impossible, to measure, as their effects are always suppressed by $v_\Delta^2$ (for mass terms) or by $v_\Delta$ (for trilinear interactions). One possible way to measure them is through the quartic triplet Higgs interactions, but the corresponding production cross section is again suppressed by the smallness of $v_\Delta$. Note that since $\lambda_2$ and $\lambda_3$ are irrelevant for the electroweak phase transition, their precise determination is of less importance here. \subsection{Choice of input model parameters} \label{sec:input} As discussed in the last three sub-sections, one can experimentally use the SM Higgs mass, the mass splitting, and the mixing angle to determine $\lambda_1$, $\lambda_5$, and $\lambda_4$, respectively. Recall, however, that in Sec.\,\ref{vdcons} the $\rho$ parameter constraint requires $v_\Delta$ to be negligible compared with $v_\Phi$ or $v$, which are of the same order as the Higgs masses. The ratio of the Higgs masses to $v_\Delta$ would then lead to very large $\lambda_{2,3}$ [cf. Eq.\,(\ref{lam2}-\ref{lam3})]; thus, to preserve perturbativity of the CTHM, one would have to ``fine-tune'' the Higgs masses to obtain reasonable values of $\lambda_{2,3}$. To avoid this fine tuning, we instead choose $\lambda_{2,3}$ as inputs in our theoretical study. As also discussed in Sec.\,\ref{subsec:input} and Sec.\,\ref{lam4determ}: (a) since the Higgs mass is known precisely, we choose $m_h$ instead of $\lambda_1$ as a model input; (b) we choose $m_\Delta$ and $\lambda_5$ as model inputs as they determine the mass spectrum; (c) $\sin\alpha$ is negligible at small $v_\Delta$, so to avoid ``fine tuning'' $\lambda_4$, we choose $\lambda_4$ instead of $\sin\alpha$ as a model input. Another reason for choosing $\lambda_4$ as an input is that it frequently appears in the combination $\lambda_{45}$ together with $\lambda_5$, such that one can infer $\lambda_4$ from this combination once $\lambda_5$ is known. {At the same time, relevant quantities may depend separately on $\lambda_4$ and $\lambda_5$, {\it e.g.}, the $H^\pm$ decay BRs.} To summarize, our model input parameters are $\{\alpha_{E.M.}, G_F, m_Z, m_h, m_\Delta, v_\Delta, \lambda_2, \lambda_3, \lambda_4, \lambda_5, m_\nu\}$.
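For concreteness, the mapping from this input set to the derived quantities used in the remainder of the paper can be summarized in a few lines. The sketch below evaluates the approximate small-$\sin\alpha$ relations of Eqs.\,\eqref{hierarchy}, \eqref{massspec}, \eqref{t2aapprox} and \eqref{numass}; the numerical values of $v$, $m_h$ and the sample input point are illustrative, and only the principal branch of Eq.\,\eqref{alphaeq} is evaluated:
\begin{verbatim}
# Minimal sketch: map the CTHM inputs {m_Delta, v_Delta, lambda_4, lambda_5, m_nu}
# onto the approximate spectrum in the small-sin(alpha) limit.  The values of v,
# m_h and the example input point below are illustrative only.
import numpy as np

v   = 246.0     # GeV, electroweak vev ~ v_Phi since v_Delta << v_Phi
m_h = 125.0     # GeV

def spectrum(m_delta, v_delta, lam4, lam5, m_nu_eV=0.01):
    lam45 = lam4 + lam5
    lam1  = m_h**2 / (2.0 * v**2)                     # ~ 0.129
    m_H = m_A = m_delta                               # degenerate neutral states
    m_Hp  = np.sqrt(m_delta**2 - lam5/4.0 * v**2)     # singly charged scalar
    m_Hpp = np.sqrt(m_delta**2 - lam5/2.0 * v**2)     # doubly charged scalar
    dm    = abs(lam5) * v**2 / (8.0 * m_delta)        # large-m_Delta splitting
    tan2a = (v_delta/v) * (2.0*v**2*lam45 - 4.0*m_delta**2) / (m_h**2 - m_delta**2)
    sin_a = np.sin(0.5 * np.arctan(tan2a))            # principal branch only
    h_nu  = m_nu_eV * 1e-9 / (np.sqrt(2.0) * v_delta) # diagonal Yukawa, m_nu in GeV
    return dict(lam1=lam1, m_H=m_H, m_A=m_A, m_Hp=m_Hp, m_Hpp=m_Hpp,
                dm=dm, sin_alpha=sin_a, h_nu=h_nu)

# example input: m_Delta = 400 GeV, v_Delta = 1e-4 GeV, lambda_4 = 0, lambda_5 = -0.1
print(spectrum(400.0, 1e-4, 0.0, -0.1))

def lam5_from_splitting(dm, m_delta):
    # invert the splitting: |lambda_5| = 8 m_Delta dm / v^2; NMH sign convention
    return -8.0 * m_delta * dm / v**2
\end{verbatim}
The last function illustrates how, conversely, a measured mass splitting $\Delta m$ together with $m_\Delta$ determines $\lambda_5$, as discussed in Sec.\,\ref{subsec:input}.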
{Here we emphasize that the input parameters need to be chosen carefully to avoid fine tuning the masses and to preserve the validity of perturbation theory for $\lambda_{2,3}$; otherwise one may easily end up in a region where perturbation theory is invalid. For example, for the plots in the second row of Fig.\,2 in Ref.\,\cite{Aoki:2011pz}, the authors used the scalar masses as their input. We find that, with their input, $\lambda_3$ respects perturbativity only when $v_\Delta\gtrsim1$\,GeV, whereas for smaller $v_\Delta$, $\lambda_3$ can be as large as $10^{21}$.} \section{Production and Decay Rates of the Scalars in the CTHM} \label{sec:decayprod} As discussed in the last section, the mass ordering in the RMH will in general depend on the model input. For simplicity, we work in the NMH throughout the paper, and within this framework the production and decay rates of the BSM Higgs particles are studied in detail below. {{We point out, however, that although in the RMH the decay patterns and BRs, and thus our Fig.\,\ref{bdtdis} and Fig.\,\ref{haa}, will change, the same channels studied in this paper can still be used for model discovery and Higgs portal parameter determination.}} \subsection{Production cross section of the Higgs particles in the CTHM}\label{subsec:sg7bg} \label{subsec:prod} In the SM, the Higgs boson can be produced via gluon fusion or vector boson fusion (VBF), but in the CTHM, single production of the triplet Higgs particles via gluon fusion or VBF is highly suppressed by the small $v_\Delta$\footnote{SM $h$ production via gluon fusion, however, does not suffer from suppression from the smallness of $v_\Delta$.}. Therefore, single production of the triplet Higgs particles through gluon fusion or VBF will not be considered in this paper. For double scalar production, a pair of triplet scalars can { be produced through electroweak Drell-Yan processes or gluon fusion}. As in the single Higgs production case, however, scalar pair production via an intermediate $H$ or $A$ produced through gluon fusion is again highly suppressed by the small $v_\Delta$. {No such suppression occurs for electroweak pair production. Consequently, we focus on the latter.} To study the production cross sections of the triplet Higgs particles quantitatively, we first use {\tt Mathematica} and {\tt FeynRules}\,2.3.13\,\cite{Alloul:2013bka,Christensen:2008py} to generate the Universal FeynRules Output (UFO) model file\,\cite{Degrande:2011ua} of the CTHM; we then use {\tt MadGraph}\,2.3.3\,\cite{Alwall:2014hca} with the CTHM UFO file to obtain the production cross sections at $\sqrt{s}=14$\,TeV and $\sqrt{s}=100$\,TeV. However, we find that for the channels studied in this paper, the number of events at $\sqrt{s}=14$\,TeV and $\mathcal{L}=3\,\rm ab^{-1}$ is too small even before considering the corresponding backgrounds, so we only present the cross section results at $\sqrt{s}=100$\,TeV here. \begin{figure}[thb!] \captionstyle{flushleft} \begin{tabular}{cc} \includegraphics[width=90mm,height=75mm]{plot/prod111.pdf} & \includegraphics[width=90mm,height=75mm]{plot/prod222.pdf} \end{tabular} \caption{ Production cross section as a function of $m_\Delta$ at $\sqrt{s}=100$\,TeV with $v_\Delta=10^{-3}$\,GeV.
We set $\lambda_2=0.2$, $\lambda_3=0$, $\lambda_4=0$ and $\lambda_5=-0.1$, corresponding to the black dot in the left panel of Fig.\,\ref{stability}, in order to be consistent with the NMH framework and to satisfy the model constraints discussed in Sec.\,\ref{secallconst}. The left panel shows the associated Higgs production channels while the right one shows pair production, with the exception of the $HA$ channel: since its production cross section is very close to that of $H^-H^{++}$, we include it in the right panel to make the plots more readable.}\label{higgsprod} \end{figure} {The pair production cross sections depend on the couplings of the electroweak gauge bosons to the scalars and on the scalar masses. In what follows, we cast these dependences in terms of our independent parameters.} {{Note that $\lambda_1$ is essentially fixed by $v$ and $m_h$, while the effects of $\lambda_{2,3}$ are suppressed by the small $v_\Delta$. In short, the production cross sections are largely insensitive to $\lambda_{2,3}$ but depend significantly on $\lambda_{4,5}$.} To be consistent with the NMH, which requires a negative $\lambda_5$, and to satisfy the constraints discussed in Sec.\,\ref{model:const}, we choose $\lambda_2=0.2$, $\lambda_3=0$, $\lambda_4=0$ and $\lambda_5=-0.1$.} As an example, we fix $v_\Delta=10^{-3}$\,GeV and obtain the production cross sections given in Fig.\,\ref{higgsprod}, from which we see that pair production of $H^{++}H^{--}$ has the largest cross section, followed by $H^{++}H^-$. In addition, $H^+H^{--}$ is always produced alongside $H^-H^{++}$ at a $pp$ collider; we therefore expect an enhancement of the rate when the $H^-H^{++}$ and $H^+H^{--}$ channels are combined. The hierarchy of the various production cross sections is briefly explained below: (a) {Besides a factor of four enhancement from the electric charge of $H^{\pm\pm}$, the $H^{++}H^{--}$ pair has a larger cross section than $H^+H^-$ because it is produced constructively through $s$-channel $\gamma$ and $Z$ exchange. In contrast, $H^+H^-$ pair production is suppressed due to destructive interference\,\cite{Akeroyd:2011zza}}.{ Note that even though $m_{H^{\pm\pm}}>m_{H^{\pm}}$, the mass splitting is not large due to our choice of $\lambda_5$; therefore, the lighter $H^\pm$ mass does not compensate for the aforementioned factors. (b) $H^{++}H^{-}$ has a larger cross section than $H^{--}H^{+}$ because the former is dominantly produced through a $W^+$ while the latter proceeds through a $W^-$, and the $W^+$ channel is favored by the valence-quark content of the proton. (c) {{The $HH$ and $AA$ channels, or the $H^\pm A$ and $H^\pm H$ channels, have the same production cross sections due to the mass degeneracy of $H$ and $A$.}} (d) {{$H^\pm A/H^\pm H$ has a smaller cross section than $HH/AA$, and $HA$ has a smaller cross section than $H^{++}H^{--}/H^{++}H^{-}$, because of the smaller couplings involved.}}} (e) In the NMH, $m_{H^\pm}>m_{H/A}$, but the couplings involved in $H^+H^-$ production are larger than those in $H^+A/H^+H$ production; phase space and couplings therefore compete, so that at small $m_\Delta$ the $H^+H^-$ channel has the larger cross section, while at large $m_\Delta$ the $H^+A/H^+H$ channels do. The same holds for the $HA$ and $H^+H^-$ channels. In order to study the collider signatures of the triplet Higgs particles, it is natural to focus on the $H^{\pm\pm}H^{\mp\mp}$ and $H^{\pm\pm}H^{\mp}$ channels since they have the largest production cross sections. To determine the final states, we study their dominant decay modes in the next sub-section.
\subsection{Decay rates of the scalar Higgs particles in the CTHM}\label{moddec} To further determine the dominant decay modes of the triplet Higgs particles in the CTHM for the collider simulation, we calculate their decay rates by taking $h_\nu=\mathbb{1}_{3\times3}$ for simplicity. All our decay formulas agree with those in Appendix A of Ref.\,\cite{Aoki:2011pz} if one also takes the unit matrix limit there. {\color{black} In order to illustrate the dependence of the various decay channels on the potential parameters, we show in Fig.~\ref{slicedecay} the BRs for the charged and neutral triplet states as functions of the relevant combinations of $\lambda_4$ and $\lambda_5$ for representative values of $m_\Delta=400$\,GeV and $v_\Delta=10^{-4}$\,GeV. \begin{figure}[thb!] \captionstyle{flushleft} \begin{tabular}{cc} \includegraphics[width=75mm]{plot/hppbr.pdf} & \includegraphics[width=75mm]{plot/hpbr.pdf} \\ \includegraphics[width=75mm]{plot/Hbr.pdf} & \includegraphics[width=75mm]{plot/Abr.pdf} \\ \end{tabular} \caption{Decay BRs for $H$, $A$, $H^{\pm\pm}$ and $H^\pm$ as functions of $\lambda_4$ and $\lambda_5$ for representative values of $m_\Delta=400$\,GeV and $v_\Delta=10^{-4}$\,GeV. For a detailed discussion of the decay features, see the main text in Sec.\,\ref{moddec}.}\label{slicedecay} \end{figure} In this study, we will focus on the NMH with $\lambda_5<0$. From the top left panel of Fig.~\ref{slicedecay}, we observe that the $H^{\pm\pm}$ BRs to $H^\pm W^\pm$ and $W^\pm W^\pm$ depend strongly on $\lambda_5$ in the vicinity of our benchmark value, $\lambda_5=-0.1$. From the top right panel, we observe that $\mathrm{BR}(H^\pm \to h W^\pm)$ also depends strongly on $\lambda_4+\lambda_5$. Even though in the vicinity of our benchmark point, with $\lambda_4+\lambda_5=-0.1$, the $h W^\pm$ mode is subdominant, the corresponding BR depends more strongly on $\lambda_4+\lambda_5$ than do the other modes. Consequently, we will focus on this channel for the decay of the singly-charged scalar. The bottom two panels give the neutral scalar BRs. Though we will not utilize this information in the present study, we include it here for completeness and for future reference. It is also useful to determine how the $H^{\pm\pm}$ BRs vary with $m_\Delta$ and $v_\Delta$. To that end, } {{in Fig.\,\ref{decayregionplotHPP}, we show the regions of parameter space where the BR to various final states is greater than 40\% for $H^{\pm\pm}$. In the left panel of Fig.\,\ref{decayregionplotHPP}, we consider the ($v_\Delta$, $\lambda_5$) plane for fixed $m_\Delta$, while the right panel gives the ($m_\Delta$, $\lambda_5$) plane for fixed $v_\Delta$. Note that the $H^{\pm\pm}$ decay BRs are independent of $\lambda_4$, and for the NMH one has $\lambda_5 < 0$. \begin{figure}[thb!] \captionstyle{flushleft} \begin{tabular}{cc} \includegraphics[width=75mm,height=65mm]{plot/redo_HPP1.pdf} & \includegraphics[width=75mm,height=65mm]{plot/redo_HPP2.pdf} \\ \end{tabular} \caption{Decay region plots for $H^{\pm\pm}$ with BR$\ge40\%$. The left panel is for $m_\Delta=400\rm\,GeV$ and the right panel for $v_\Delta=10^{-4}\rm\,GeV$. The purple region corresponds to the $H^\pm W^\pm$ channel, black to the same-sign di-$W$ boson channel, and blue to the same-sign di-lepton channel. $\lambda_5$ is taken negative to be consistent with the NMH framework.}\label{decayregionplotHPP} \end{figure} \begin{figure}[thb!]
\captionstyle{flushleft} \begin{tabular}{cc} \includegraphics[width=75mm,height=65mm]{plot/redo_HP1.pdf} & \includegraphics[width=75mm,height=65mm]{plot/redo_HP2.pdf} \\ \includegraphics[width=75mm,height=65mm]{plot/redo_HP3.pdf} & \includegraphics[width=75mm,height=65mm]{plot/redo_HP4.pdf} \\ \includegraphics[width=75mm,height=65mm]{plot/redo_HP5.pdf} & \includegraphics[width=75mm,height=65mm]{plot/redo_HP6.pdf} \\ \end{tabular} \caption{Decay region plots for $H^{\pm}$ with $\rm BR\ge40\%$. The purple region is for $HW$ and $AW$, blue for $ZW$, orange for $hW$ and black for the lepton final state. The first row has the same $\lambda_5$ but opposite-sign $\lambda_4$; the second row has the same $v_\Delta$ but opposite-sign $\lambda_{45}$; and the third row has the same $v_\Delta$ but different $\lambda_5$. From these plots we conclude that the $H^\pm\rightarrow hW^\pm$ channel generally prefers $\lambda_{45}<0$. For $\lambda_5=-0.01$, $H^\pm\rightarrow hW^\pm$ also gains a large branching ratio when $\lambda_4$ goes from negative to positive, as can be seen from the last panel.}\label{decayregionplotHP} \end{figure} From Fig.\,\ref{decayregionplotHPP}, we observe that for $H^{\pm\pm}$, the dominant decay channels are $H^{\pm\pm}\rightarrow \ell^\pm\ell^\pm~(W^\pm W^\pm)$ at small (large) $v_\Delta$ when $m_\Delta=400\rm\,GeV$. For intermediate values of the triplet vev, {\it e.g.} $v_\Delta=10^{-4}\rm\,GeV$, these two channels dominate when $\lambda_5\ge-0.2$. Besides the large BRs in the corresponding regions of $v_\Delta$, additional advantages of these channels are: (1) Clean final states: leptons in the final states are relatively easy to identify and analyze experimentally; (2) Absence of cascade decays: the $H^{\pm}W^\pm$ decay mode would introduce extra decay chains, {{making the final state more complicated. We emphasize, however, that even though the same-sign di-$W$ boson (di-lepton) channel dominates for large (small) $v_\Delta$, one may still probe the intermediate $v_\Delta$ region using the $\ell^\pm\ell^\pm$ and $W^\pm W^\pm$ channels. Although these channels have relatively small BRs in this $v_\Delta$ region, we find that by combining these channels with information from other triplet Higgses, one could still explore this region without resorting to the $H^{\pm\pm}\to W^\pm H^\pm$ channel. This feature will become more apparent in our main discovery reach plot, Fig.~\ref{bdtdis}, and the attendant discussion.}} We also note in passing that at small $v_\Delta$, the same-sign di-lepton channel dominates and in fact reaches a decay BR of essentially $100\%$. In those regions where the same-sign di-lepton channel has a $100\%$ decay BR, the experimental constraints are strong. We will discuss this point in detail in Sec.\,\ref{subsec:disbdt}.} {{In Fig.\,\ref{decayregionplotHP}, we show the regions of parameter space where the $H^\pm$ decay BR to various final states is greater than 40\%. Since the BR functions for $H^\pm$ depend on $v_\Delta$, $m_\Delta$, $\lambda_4$ and $\lambda_5$ individually, the decay region plots for $H^\pm$ are more complicated than those for the doubly charged scalars.
We thus plot the dominant decay channels in different planes: in the first row of Fig.\,\ref{decayregionplotHP}, we consider the ($v_\Delta$, $m_\Delta$) plane with varying $\lambda_{45}$, while in the second (third) row, we consider the ($v_\Delta$, $\lambda_{5(4)}$) plane with fixed $\lambda_{4(5)}$ and $v_\Delta$.}} Recall that from Table\,\ref{tab:1}, only the $H^\pm\rightarrow hW^\pm$ channel is related to the determination of $\lambda_4$ {{through the mixing angle $\sin\alpha$, as discussed in Sec.\,\ref{lam4determ}}}. {\color{black} We observe that $\lambda_{45}<0$ generally leads to a large BR for the $H^\pm\rightarrow hW^\pm$ channel, though there also exist some regions giving a large BR$(H^\pm\to hW^\pm)$ for $\lambda_{45}>0$.} With the foregoing observations in mind, we will next study the following channels for model discovery: {{$p p \to H^{++}H^{--}$ and $p p \to H^{\pm\pm}H^{\mp}$ with $H^{\pm\pm}\rightarrow \ell^\pm\ell^\pm~(W^\pm W^\pm)$ and $H^{\mp}\rightarrow hW^\mp$.}} \subsection{Present experimental constraints} \label{subsec:sum} Present experimental constraints on the charged Higgs particles studied here already exclude some portions of the CTHM parameter space, especially from studies of the $p p \to H^{++}H^{--} \to \ell^+\ell^+\ell'^-\ell'^-$ ($\ell=e,\mu$) process. Thus, before moving to the detailed collider study of specific channels, we review the current direct LHC experimental constraints. A detailed summary can be found in Appendix \ref{app:expcon}, with the most stringent ones given below: \begin{enumerate} \item For $H^{\pm\pm}$: By assuming a 100\% di-lepton decay BR, the lower limit on $m_{H^{\pm\pm}}$ is constrained to be 870\,GeV\,\cite{Aaboud:2017qph} for a $\mu^\pm \mu^\pm$ final state. In Ref.\,\cite{ATLAS:2012mn}, an upper limit on the cross section with the $\ell^\pm\ell^\pm$ {{($\ell=e,\mu$)}} final state is set between 1.7\,fb and 67\,fb. By assuming $H^{\pm\pm}$ is long-lived\footnote{As explained in the footnote of Ref.\,\cite{Aad:2015oga}, ``long-lived'' means a particle that does not decay within the full depth of the ATLAS detector.}, $m_{H^{\pm\pm}}\in[50,600]$\,GeV is excluded\,\cite{Aad:2015oga}. \item For $H^\pm$: $\sigma(pp\rightarrow H^\pm t[b])\times\text{BR}(H^\pm\rightarrow \tau\nu)<1.9\text{-}15$\,fb for $m_{H^\pm}\in(200,2000)$\,GeV\,\cite{Aaboud:2016dig}, while for a VBF produced $H^\pm$, $\sigma(pp\rightarrow H^\pm+X)\times\text{BR}(H^\pm\rightarrow W^\pm Z)<36\text{-}573$\,fb for $m_{H^\pm}\in(200,2000)$\,GeV\,\cite{Sirunyan:2017sbn}. {{Here, a larger mass corresponds to a smaller upper bound on the product of the production cross section and the BR; the same convention applies below.}} \item For $H$ and $A$: In Ref.\,\cite{Khachatryan:2016are}, the upper limit on $\sigma(pp\rightarrow S' \rightarrow SZ)\times\text{BR}(S\rightarrow b\bar{b}(\tau^+\tau^-))\times\text{BR}(Z\rightarrow\ell^+\ell^-)$ ($S',~S$ are $H$ or $A$ with $m_{S'}>m_S$) is constrained to be 5\,fb-10\,fb for the $\ell^+\ell^-\tau^+\tau^-$ final state with $m_{H/A}\in(500,1000)$\,GeV and $m_{A/H}\in(90,400)$\,GeV; for the $\ell^+\ell^-b\bar{b}$ final state, the upper limit is 1\,fb-100\,fb with $m_H\in[300,100000]$\,GeV. For the degenerate case, i.e., $m_A=m_H$, which holds in our case, the parameter space remains unexplored.
\end{enumerate} We will recast the constraints on the charged Higgs particles onto the parameter space of the CTHM in Sec.\,\ref{subsec:disbdt}, where we show the part of the parameter space that is already ruled out by current experimental constraints for our chosen benchmark point. \section{Model discovery} \label{sec:modeldis} As discussed in the last section, $H^{++}H^{--}$ has the largest production cross section and will be the dominant discovery channel for the triplet model; $H^{\pm}H^{\mp\mp}$ has the second largest production cross section and is directly related to the determination of $\lambda_{4,5}$. In addition, since the same-sign di-lepton decay channel of $H^{\pm\pm}$ is dominant only at small $v_\Delta$ (left panel of Fig.\,\ref{decayregionplotHPP}) and the $H^\pm\rightarrow hW^\pm$ decay channel dominates at large $v_\Delta$ (first row of Fig.\,\ref{decayregionplotHP}), we expect these two channels to be complementary, covering most of the model parameter space. Therefore, in this section, we study in detail the discovery of the triplet model through these two channels, i.e., $pp\rightarrow H^{++}H^{--}$ and $pp\rightarrow H^{\pm\pm}H^{\mp}$ with $H^{\pm\pm}\rightarrow\ell^\pm\ell^\pm/W^\pm W^\pm$ and $H^\pm\rightarrow hW^\pm$. \subsection{Discovery for small $v_\Delta$: $pp\rightarrow H^{++}H^{--}\rightarrow\ell^+\ell^+\ell'^-\ell'^-$} The dominant discovery channel for the triplet model is $H^{++}H^{--}$, and the cleanest discovery process is $pp\rightarrow H^{++}H^{--}\rightarrow\ell^+\ell^+\ell'^-\ell'^-$. Several theoretical and experimental studies of its LHC signatures have been performed\,\cite{Sui:2017qra, Ghosh:2017pxl,Dev:2017ouk,Perez:2008ha, Barger:1982cy, Gunion:1989in, Muhlleitner:2003me, Han:2007bk, Huitu:1996su, Dion:1998pw, Akeroyd:2005gt, Biswas:2017tnw, Aad:2012cg,Aaltonen:2011rta,ATLAS:2012mn,ATLAS:2012hi,Aad:2014hja,ATLAS:2014kca,Aad:2015oga,Sirunyan:2017ret,Aaboud:2017qph}. Recent theoretical studies relevant to higher energy colliders include: (1) at a lepton collider with $\sqrt{s}=380\rm\,GeV$ and 3\,TeV, the production and decays of $H^{\pm\pm}$ were studied by Agrawal {\it et al.}\,\cite{Agrawal:2018pci}; (2) the $H^{++}H^{--}$ pair production cross section at a future 100\,TeV $pp$ collider was studied by Cai {\it et al.}\,\cite{Cai:2017mow}; (3) the $H^{++}H^{--}\rightarrow\tau^\pm\ell^\pm\ell^\mp\ell^\mp/\ell^+\tau^+\ell^-\tau^-$ processes were studied by Li\,\cite{Li:2018jns} at the high-luminosity and high-energy LHC as well as the future 100\,TeV circular $pp$ collider (FCC); (4) the multi-lepton final state of $H^{++}H^{--}$ at the 13\,TeV LHC and the FCC was studied by Mitra {\it et al.}\,\cite{Mitra:2016wpr} in the RMH by fixing $\lambda_{1}=0.13$ and $\lambda_2=\lambda_3=\lambda_4=1$. To the best of our knowledge, this channel has not yet been studied at the FCC in the NMH. In what follows, we discuss our collider simulation for this channel over a mass range from 40\,GeV to 5000\,GeV. The simulation is done using {\tt MadGraph}\,2.3.3\,\cite{Alwall:2014hca} and the aforementioned pre-generated CTHM UFO file to generate events; each generated event then undergoes parton showering and hadronization through {\tt Pythia-pgs}\,2.4.4\,\cite{Sjostrand:2006za} before arriving at the detector.
The detector response is simulated with {\tt Delphes}\,3.3.0\,\cite{deFavereau:2013fsa}, where the 100\,TeV FCC {\tt Delphes} card\,\cite{delphescard} is used. To analyze the data collected by {\tt Delphes}, we use {\tt ROOT}\,6.06.02\,\cite{Antcheva:2009zz}. The dominant backgrounds for this channel are $ZW^\pm W^\mp$ and $ZZ$, as we are performing an exclusive analysis. In total, we generate 1,000,000 events each for the signal and the two backgrounds, and our preselection cuts for the signal and the backgrounds are: (1) transverse momentum $p_T>20$\,GeV for all final state particles; (2) absolute pseudorapidity $|\eta|<2.5$ for all final state particles. Since Boosted Decision Trees (BDTs)\,\cite{tmva} optimize the cut efficiency and thus generally perform better than a cut-based analysis\,\cite{Roe:2004na}, we use a BDT to train on and test all the events that pass the preselection cuts. We list the variables used for the BDT training and testing in Table \ref{bdtvar}: \begin{table}[!ht] \caption{A list of BDT variables for the $p p \to H^{\pm\pm}H^{\mp\mp}\to\ell^+\ell^+\ell'^-\ell'^-$ signal and its backgrounds.}\label{bdtvar} \centering \begin{tabular}{c}\toprule[1.5pt] $\slashed{E}_T$: Missing transverse energy; $HT$: Scalar sum of transverse momentum\\ $m_{H^{++}}$: Reconstructed $H^{++}$ mass, $m_{H^{--}}$: Reconstructed $H^{--}$ mass\\ $p_{T,\ell^{+}}^{\text{leading}}$, $p_{T,\ell^{+}}^{\text{sub-leading}}$: Transverse momentum of the $\ell^+$ with leading and sub-leading $p_T$ respectively\\ $p_{T,\ell^{-}}^{\text{leading}}$, $p_{T,\ell^{-}}^{\text{sub-leading}}$: Transverse momentum of the $\ell^-$ with leading and sub-leading $p_T$ respectively\\ $\Delta\phi_{\ell^+\ell^+}$, $\Delta R_{\ell^+\ell^+}$: $\Delta\phi$ and $\Delta R$ of the two positively charged leptons\\ $\Delta\phi_{\ell^-\ell^-}$, $\Delta R_{\ell^-\ell^-}$: $\Delta\phi$ and $\Delta R$ of the two negatively charged leptons\\ $m_{Z,1}$, $m_{Z,2}$: The two minimal invariant-mass combinations of same-flavor, opposite-charge lepton pairs\\ \bottomrule[1.25pt] \end{tabular}\par \end{table} \subsection{Discovery for large $v_\Delta$: $pp\rightarrow H^{++}H^{--}\rightarrow W^+W^+W^-W^-\to\ell^+\ell^+\ell'^-\ell'^-\slashed{E}_T$} From the BR discussion in Sec.\,\ref{moddec}, we observe that the $H^{++}H^{--}\rightarrow\ell^+\ell^+\ell'^-\ell'^-$ channel can only cover the small $v_\Delta$ region, and we expect the large $v_\Delta$ region to be covered by the $pp\rightarrow H^{++}H^{--}\rightarrow W^+W^+W^-W^-$ channel. In this paper, we focus only on the $W^\pm\to\ell^\pm\nu_\ell$ mode for all four $W$ bosons. In this case, the $4W$ channel has exactly the same backgrounds as the $H^{++}H^{--}\rightarrow\ell^+\ell^+\ell'^-\ell'^-$ channel considered in the last sub-section. Repeating the same procedure as for the $H^{++}H^{--}\rightarrow\ell^+\ell^+\ell'^-\ell'^-$ channel, we generate 1,000,000 events for our signal and use the background data generated in the last sub-section. We also use the same BDT training and testing variables as those listed in Table\,\ref{bdtvar} to analyze this channel.
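To make the multivariate step more concrete, the sketch below illustrates the type of BDT classification and significance estimate performed here, using {\tt scikit-learn} as a stand-in for the {\tt TMVA} implementation actually employed. The input arrays, cross sections, luminosity handling and working point are placeholders rather than values from our analysis, and the significance is computed with the $S/\sqrt{S+B}$ figure of merit used in Sec.\,\ref{subsec:disbdt}:
\begin{verbatim}
# Schematic BDT classification + significance estimate.  The paper's analysis
# uses TMVA inside ROOT; scikit-learn is used here only as an illustrative
# stand-in.  Input arrays, cross sections and the BDT working point are
# placeholders, not analysis values.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# one row per event, one column per BDT input variable (14 for the list above)
X_sig = rng.normal(1.0, 1.0, size=(10000, 14))   # placeholder signal events
X_bkg = rng.normal(0.0, 1.0, size=(10000, 14))   # placeholder ZWW / ZZ events

X = np.vstack([X_sig, X_bkg])
y = np.concatenate([np.ones(len(X_sig)), np.zeros(len(X_bkg))])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=1)

bdt = GradientBoostingClassifier(n_estimators=300, max_depth=3, learning_rate=0.1)
bdt.fit(X_tr, y_tr)

# choose a working point on the BDT score, then convert efficiencies into yields
score = bdt.decision_function(X_te)
cut = 0.0
eff_s = np.mean(score[y_te == 1] > cut)
eff_b = np.mean(score[y_te == 0] > cut)

lumi = 30.0e3                        # fb^-1, i.e. the 30 ab^-1 adopted below
sigma_s, sigma_b = 1.0e-2, 5.0e-1    # fb, placeholder pre-BDT cross sections
S = sigma_s * lumi * eff_s
B = sigma_b * lumi * eff_b
print("significance S/sqrt(S+B) = %.2f" % (S / np.sqrt(S + B)))
\end{verbatim}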
\subsection{Discovery for intermediate and large $v_\Delta$: $pp\rightarrow H^{\pm\pm}H^{\mp}\rightarrow\ell^\pm\ell^\pm hW^\mp\rightarrow \ell^\pm\ell^\pm b\bar{b}\ell^\mp\slashed{E}_T$ and $pp\rightarrow H^{\pm\pm}H^{\mp}\rightarrow W^\pm W^\pm hW^\mp\rightarrow \ell^\pm\ell^\pm b\bar{b}\ell^\mp\slashed{E}_T$} While the $H^{++}H^{--}\rightarrow\ell^+\ell^+\ell'^-\ell'^-$ ($pp\rightarrow H^{++}H^{--}\rightarrow W^+W^+W^-W^-\to\ell^+\ell^+\ell'^-\ell'^-\slashed{E}_T$) channel covers only the small (large) $v_\Delta$ region, the $H^\pm H^{\mp\mp}$ channel provides complementary discovery potential in the intermediate and large $v_\Delta$ region. To obtain information about $\lambda_{4,5}$, we require $H^{\pm}$ to decay into the $hW^\pm$ final state, while $H^{\mp\mp}$ can decay into either an $\ell^{\mp}\ell^{\mp}$ or a $W^\mp W^\mp$ final state. These two processes yield the same final state particles and thus share the same backgrounds. The backgrounds we consider for these two processes are: $hZW^\pm$; $t\bar{t}j$ and $W^\pm W^\mp b\bar{b}j$ with the light jet $j$ misidentified as a lepton with a fake rate of 0.01\%\,\cite{delphescard}; $t\bar{t}W^\pm$, $t\bar{t}Z$ and $ZZh$ with one lepton missing; $ZW^\pm jj$ with the two light jets misidentified as two $b$ quarks, with a fake rate of 10\% for $c$ misidentified as $b$ and a 0.01\% fake rate for other light quarks\,\cite{delphescard}; and $ZW^\pm b\bar{b}$. The signals and the backgrounds are summarized in Table \ref{table:s-bkg}. \begin{table}[htbp!] \caption{Signals for intermediate and large $v_\Delta$ are listed in the first two rows. The two signals share the same backgrounds, which are listed in the following eight rows.}\label{table:s-bkg} \centering \begin{tabular}{c|c|c|c|c|c|c} \hline \multirow{2}{*}{\bf Signal} & \multicolumn{6}{|c}{$pp\rightarrow H^\mp H^{\pm\pm}\rightarrow hW^\mp \ell^\pm \ell^\pm\rightarrow b\bar{b}\ell'^\mp \ell^\pm \ell^\pm \slashed{E}_T$ (for intermediate $v_\Delta$)} \\ \cline{2-7} & \multicolumn{6}{|c}{$pp\rightarrow H^\mp H^{\pm\pm}\rightarrow hW^\mp W^\pm W^\pm\rightarrow b\bar{b}\ell'^\mp \ell^\pm \ell^\pm \slashed{E}_T$ (for large $v_\Delta$)} \\ \hhline{=|=|=|=|=|=|=} \multirow{8}{*}{\bf Background} & \multicolumn{6}{|c}{$pp\rightarrow hZW^\pm\rightarrow b\bar{b}\ell^+\ell^-\ell'^\pm \slashed{E}_T$}\\ \cline{2-7} & \multicolumn{6}{|c}{$pp\rightarrow hZZ\rightarrow b\bar{b}\ell^+\ell^-\ell'^+\ell'^-$} \\ \cline{2-7} & \multicolumn{6}{|c}{$pp\rightarrow ZW^\pm j j\rightarrow \ell^+\ell^-\ell'^\pm j j\slashed{E}_T$} \\ \cline{2-7} & \multicolumn{6}{|c}{$pp\rightarrow t\bar{t}Z\rightarrow W^+bW^-\bar{b}\ell^+\ell^-\rightarrow b\bar{b}\ell'^+\ell''^-\ell^+\ell^- \slashed{E}_T$} \\ \cline{2-7} & \multicolumn{6}{|c}{$pp\rightarrow ZW^\pm b\bar{b}\rightarrow b\bar{b}\ell^+\ell^-\ell'^\pm \slashed{E}_T$} \\ \cline{2-7} & \multicolumn{6}{|c}{$pp\rightarrow W^+W^-b\bar{b}j\rightarrow b\bar{b}\ell^+\ell'^- j \slashed{E}_T$} \\ \cline{2-7} & \multicolumn{6}{|c}{$pp\rightarrow t\bar{t}W^\pm\rightarrow W^+bW^-\bar{b}\ell^\pm\slashed{E}_T\rightarrow b\bar{b}\ell'^+\ell''^-\ell^\pm \slashed{E}_T$} \\ \cline{2-7} & \multicolumn{6}{|c}{$pp\rightarrow t\bar{t}j\rightarrow W^+bW^-\bar{b}j \rightarrow b\bar{b}\ell'^+\ell''^-j \slashed{E}_T$} \\ \hline \end{tabular} \end{table} \begin{table}[!ht] \caption{A list of BDT variables for the $W^\pm W^\pm hW^\mp$ and $\ell^\pm\ell^\pm hW^\mp$ channels and their backgrounds.
Since these two signals share the same backgrounds, we use the same BDT variables for both channels.}\label{sigbdtvar} \centering \begin{tabular}{c}\toprule[1.5pt] $\slashed{E}_T$: Missing transverse energy; $HT$: Scalar sum of transverse momentum\\ $m_{H^{\pm\pm}}$: Doubly-charged Higgs mass\\ $m_h$, $m_Z$: SM Higgs and $Z$ boson masses; $m_{W,T}$: Transverse mass of the $W^\mp$ boson\\ $\Delta\phi_{b\bar{b}}$, $\Delta R_{b\bar{b}}$: $\Delta\phi$ and $\Delta R$ of the two $b$ quarks; $\Delta\phi_{\ell^\pm\ell^\pm}$, $\Delta R_{\ell^\pm\ell^\pm}$: $\Delta\phi$ and $\Delta R$ of the two same-sign leptons\\ $p_{T,b}^{\text{leading}}$, $p_{T,b}^{\text{sub-leading}}$: Leading and sub-leading transverse momenta of the $b$ quarks\\ $\eta_{b}^{\text{leading}}$, $\eta_{b}^{\text{sub-leading}}$: Pseudo-rapidity of the $b$ quark with leading and sub-leading $p_T$ respectively\\ $p_{T,\ell^{\text{same}}}^{\text{leading}}$, $p_{T,\ell^{\text{same}}}^{\text{sub-leading}}$: Leading and sub-leading transverse momenta of the same-sign leptons\\ $\eta_{\ell}^{\text{leading}}$, $\eta_{\ell}^{\text{sub-leading}}$: Pseudo-rapidity of the same-sign leptons with leading and sub-leading $p_T$ respectively\\ $\eta_{\ell^{\text{oppo.}}}$, $p_{T,\ell^{\text{oppo.}}}$: Pseudo-rapidity and transverse momentum of the opposite-sign lepton\\ \bottomrule[1.25pt] \end{tabular}\par \end{table} As for the $H^{++}H^{--}$ process, we use the same tools to generate events and the same preselection cuts to analyze them. For the BDT training and testing, the variables we use for these two processes and their backgrounds are listed in Table \ref{sigbdtvar}. In addition, for the $t\bar{t}j$, $W^\pm W^\mp b\bar{b}j$ and $ZW^\pm jj$ backgrounds, {{we also apply the following cuts at the generator level}}: (1) $p_T^{j,b}\ge10$\,GeV; (2) $|\eta^{j,b}|\le5$; (3) $\Delta R^{jj,bb,bj}\ge0.05$. With these requirements, in total, we generate 50,000,000 events for the $\ell^\pm\ell^\pm hW^\mp$ signal and 1,000,000 events for the $W^\pm W^\pm hW^\mp$ signal; 4,579,172 events for $W^\pm W^\mp b\bar{b}j$; 5,000,000 events for $ZZh$ and $ZhW^\pm$; 29,000,000 events for $t\bar{t}Z$; 30,000,000 events for $t\bar{t}W^\pm$ and $t\bar{t}j$; and 15,000,000 events for $ZW^\pm jj$ and $ZW^\pm bb$. \subsection{Discovery potential at the 100\,TeV collider}\label{subsec:disbdt} The significance is defined as $\frac{S}{\sqrt{S+B}}$ throughout the paper, with $S=\sigma_s\cdot\mathcal{L}$ and $B=\sigma_{\text{bkg}}^{\text{tot}}\cdot\mathcal{L}$ the total signal and background event numbers at the collider, where $\sigma_s$ and $\sigma_{\text{bkg}}^{\text{tot}}$ are the final signal and final total background cross sections, respectively, and $\mathcal{L}$ is the integrated luminosity, which we take to be $30\,\text{ab}^{-1}$\,\cite{fcc1,fcc2} throughout the paper. By requiring the signal significance to be greater than or equal to 5, the BDT-based result for the discovery channels is given in Fig.\,\ref{bdtdis}. Several features of these results merit emphasis: \begin{itemize} \item We see that at small $v_\Delta$, where the neutrino masses are naturally generated through the type-II seesaw mechanism, the CTHM can be discovered over a very wide mass range from tens of GeV to several TeV through the $pp\rightarrow H^{++}H^{--}\rightarrow\ell^+\ell^+\ell'^-\ell'^-$ channel. We also recast the current LHC constraints for this channel at 8\,TeV and 13\,TeV, which is done by rescaling the production cross sections and the BRs in Refs.\,\cite{ATLAS:2014kca,Aaboud:2017qph}.
We find that the current LHC constraints only exclude the relatively small $m_\Delta$ and small $v_\Delta$ region of the CTHM parameter space for our benchmark point, which therefore motivates the future collider study performed above. \item For the benchmark point we use, \begin{align} m_{H^{\pm\pm}}^2=m_\Delta^2+3001{\rm\,GeV^2}\Rightarrow m_{H^{\pm\pm}}\gtrsim54.78\rm\,GeV, \end{align} such that LEP constraints\,\cite{Acton:1992zp,Abbiendi:2001cr} are automatically satisfied. Note that our Fig.\,\ref{bdtdis} is plotted as a function of $m_\Delta$, such that $m_\Delta=0$ corresponds to a minimal mass of $m_{H^{\pm\pm}}\simeq54.78$\,GeV. \item For the large $v_\Delta$ region, the $pp\rightarrow H^{\pm\pm}H^\mp\rightarrow W^\pm W^\pm hW^\mp$ channel allows discovery of the CTHM up to about 1\,TeV. LHC constraints for this channel are currently absent, and the corresponding parameter space will be covered by the future 100\,TeV collider. {{In addition, for intermediate $v_\Delta$'s, the overlap among the $W^\pm W^\pm hW^\mp$, $\ell^\pm \ell^\pm hW^\mp$ and $H^{++}H^{--}$ channels also allows us to roughly determine $m_\Delta\in[400,1000]$\,GeV and $v_\Delta\in[10^{-4.4},10^{-3.9}]$\,GeV if all three channels are observed with a significance of 5 or more. The redundancy among these channels would provide an important cross check that the signals are due to the CTHM.} } \item {\color{black}For the large $v_\Delta$ and large $m_\Delta$ region, where the $H^{\pm\pm}\rightarrow W^\pm W^\pm$ channel dominates as can be seen from the left panel of Fig.\,\ref{decayregionplotHPP}, one would expect the $H^{++}H^{--}\rightarrow W^+W^+W^-W^-\to \ell^+\ell^+\ell'^-\ell'^-\slashed{E}_T$ channel to cover much of that parameter space. {\color{black} Although our present analysis is not optimized to extend beyond $m_\Delta\sim 1.6$\,TeV for this channel, one might expect use of other $W$ decay modes (and a correspondingly different BDT training) to allow extension to higher masses. As an example, we note that the authors of Ref.\,\cite{kang:2014jia} have studied the channel $p p \to H^{++}(\to W^+\ell\nu_\ell)H^{--}(\to W^- jj)$ and concluded that $H^{\pm\pm}$ could be discovered at the 14\,TeV LHC with $\mathcal{L}=10\text{-}30\rm\, fb^{-1}$. It is worth exploring whether use of this channel (or others) may also afford greater coverage for $m_\Delta \buildrel > \over {_\sim} 1.6$\,TeV. }} \begin{figure}[thb!] \captionstyle{flushleft} \begin{tabular}{c} \includegraphics[scale=0.4]{plot/bdtdis-with-BM.pdf} \end{tabular} \caption{{\color{black}Regions of significance $\ge 5\sigma $ in the $m_\Delta{-}v_\Delta$ plane with $m_{\nu_\ell}=0.01$\,eV ($\ell=e,\mu,\tau$), $\lambda_4=0$, $\lambda_5=-0.1$ and an integrated luminosity of 30\,$\rm ab^{-1}$: The blue region corresponds to discovery using the $pp\rightarrow H^{++}H^{--}\rightarrow\ell^+\ell^+\ell'^-\ell'^-$ channel; the brown region is for the $H^{\pm\pm}H^{\mp}\rightarrow \ell^\pm \ell^\pm hW^\mp$ channel; the green region gives discovery using the $H^{\pm\pm}H^{\mp}\rightarrow W^\pm W^\pm hW^\mp$ mode. The yellow and magenta regions indicate the current LHC exclusion limits at $\sqrt{s}=13$\,TeV\,\cite{Aaboud:2017qph} and $\sqrt{s}=8$\,TeV\,\cite{ATLAS:2014kca}, respectively. LEP constraints\,\cite{Acton:1992zp,Abbiendi:2001cr} are automatically satisfied since our benchmark point corresponds to $m_{H^{\pm\pm}}\gtrsim54.78$\,GeV. See the main text for a detailed discussion.
The black dots show two benchmark values of $m_\Delta$ used for the Higgs portal coupling determination (see Section \ref{sec:lam45}).}}\label{bdtdis} \end{figure} \item {{One may also consider using the $H^{\pm\pm}H^{\mp\mp}\rightarrow W^\pm W^\pm\ell^\mp\ell^\mp$ channel to cover part of the parameter space. We note, however, that since the same-sign di-$W$ and the same-sign di-lepton decay channels are dominant only at large and small $v_\Delta$ respectively (as can be seen from the left panel of Fig.\,\ref{decayregionplotHPP}), we expect this channel to have sufficient significance only at $v_\Delta\sim(10^{-5},10^{-4})$\text{\,GeV}. The same region is already well covered by the $\ell^\pm\ell^\pm hW^\mp$ and $H^{++}H^{--}\rightarrow\ell^\pm\ell^\pm\ell'^\mp\ell'^\mp$ channels. }} \item The $H^{++}H^{--}$ channel covers a very wide range of $m_\Delta$ at small $v_\Delta$, while the $W^\pm W^\pm hW^\mp$ channel loses sensitivity around $m_\Delta=1$\,TeV. The reason for the ``long tail'' of the $H^{++}H^{--}$ channel can be understood from the blue region in Fig.\,\ref{bdtdisexplain}\,(a), from which we see that $\rm BR(H^{\pm\pm}\rightarrow\ell^\pm\ell^\pm)$ decreases slowly with increasing $m_\Delta$ for $v_\Delta\lesssim10^{-4}$\,GeV, leading to a very slowly decreasing significance. In contrast, for the $W^\pm W^\pm hW^\mp$ channel, the significance drops dramatically at $m_\Delta\approx1$\,TeV because of phase space suppression for heavier particles and decay BR suppression at smaller $v_\Delta$, as can be seen from Fig.\,\ref{bdtdisexplain}(b). \item {{We remind the reader that we choose $m_\nu=0.01$\,eV for all three light neutrino generations throughout the paper. Since a larger (smaller) $m_\nu$ corresponds to a larger (smaller) Yukawa coupling and thus a larger (smaller) same-sign di-lepton decay BR of $H^{\pm\pm}$, we expect the same-sign di-lepton discovery regions in Fig.\,\ref{bdtdis} to shift upward (downward) for larger (smaller) $m_\nu$. \item {\color{black}Finally, for our benchmark point, vacuum stability is not guaranteed at the Planck scale, as discussed in Sec.\,\ref{secallconst}. In Ref.\,\cite{Chao:2012mx}, it was shown that vacuum stability up to the Planck scale actually prefers positive $\lambda_4$, as indicated by the black arrow in the left panel of Fig.\,\ref{stability}. This difference is not, in general, problematic, as the stability region for our benchmark point amply covers the triplet mass range considered here. One could anticipate additional degrees of freedom modifying the behavior of the potential at larger scales, so as to ensure stability up to the Planck scale. Nevertheless, it is interesting to ask how the reach indicated in Fig.~\ref{bdtdis} would evolve as we move along the black arrow in Fig.~\ref{stability}, corresponding to higher stability scales. We expect the discovery regions involving the $H^\pm\to hW^\pm$ channel in Fig.\,\ref{bdtdis} to shrink for $0\lesssim\lambda_4\lesssim3$, as the $H^\pm\to hW^\pm$ decay BR decreases for $\lambda_4$ in this region, as can be seen directly from the upper right panel of Fig.\,\ref{slicedecay}. For $\lambda_4\gtrsim6$, one would expect the discovery regions involving the $H^\pm\to hW^\pm$ chain to expand, although one would need to re-examine all the model constraints discussed in Sec.\,\ref{model:const}. For these larger values of $\lambda_4$, however, we would expect to reach the limit of perturbativity well below the stability scale.} }} \end{itemize} \begin{figure}[thb!]
\captionstyle{flushleft} \begin{tabular}{cc} \includegraphics[scale=0.22]{plot/bdtdis3.pdf} ~~&~~ \includegraphics[scale=0.22]{plot/bdtdis2.pdf}\\ (a) & (b) \end{tabular} \caption{Decay BRs for $\lambda_4=0$, $\lambda_5=-0.1$ and $m_\nu=0.01$\,eV. Figure (a): Regions where the decay BR is $\ge20\%$ for the $H^\pm\to hW^\pm$ and $H^{\pm\pm}\to\ell^\pm\ell^\pm$ channels. The slowly decreasing $\rm BR(H^{\pm\pm}\rightarrow \ell^\pm \ell^\pm)$ with increasing $m_\Delta$ explains the ``long tail'' of the significance plot for $H^{++}H^{--}\rightarrow\ell^+\ell^+\ell'^-\ell'^-$ in Fig.\,\ref{bdtdis}. Figure (b): The solid lines indicate constant contours of $\rm BR(H^\pm\rightarrow hW^\pm)\times BR(H^{\mp\mp}\rightarrow W^\mp W^\mp)$. The product of the BRs is suppressed for small $v_\Delta$, which explains the behavior of the $W^\pm W^\pm hW^\mp$ channel in Fig.\,\ref{bdtdis} in the small $v_\Delta$ region.}\label{bdtdisexplain} \end{figure} \section{Triplet Higgs potential determination and simulation} \label{sec:lam45} {\color{black}{From our results in the previous section, for $m_\Delta\lesssim4500$\,GeV, the $H^{++}H^{--}\to\ell^+\ell^+\ell'^-\ell'^-$, $W^\pm W^\pm hW^\mp$ and $\ell^\pm \ell^\pm hW^\mp$ channels can cover a significant portion of the parameter space of the CTHM, except the region where $m_\Delta\gtrsim1$\,TeV and $v_\Delta\gtrsim10^{-4}$\,GeV. We expect some of the latter region to be covered by the $H^{++}H^{--}\to W^+W^+W^-W^-$ channel, as discussed in the last section. Therefore, the discovery potential for the CTHM at a 100\,TeV $pp$ collider is considerable. Assuming discovery of the doubly- and singly-charged scalars, we can fix $\lambda_5$ straightforwardly through the mass splitting, as discussed in Sec.\,\ref{subsec:input}. However, to determine the important Higgs portal parameter $\lambda_4$, \color{black} additional information will be needed. For $v_\Delta$ larger than $\sim 10^{-5}$\,GeV, the BR for $H^\pm\rightarrow hW^\pm$ is particularly useful, as discussed in Sec.\,\ref{lam4determ}\footnote{Note that, according to Fig.\,\ref{bdtdis}, for $v_\Delta$ below $\sim10^{-5}$\,GeV the $\ell^\pm \ell^\pm hW^\mp$ and the $W^\pm W^\pm hW^\mp$ channels will not be observable. In this region, one would need to explore other possible channels in order to determine $\lambda_{4,5}$.}. To investigate this possibility, we adopt the following strategy. First, we will carry out a detailed simulation for a choice of $\lambda_4+\lambda_5$ in the region where BR$(H^\pm\rightarrow hW^\pm)$ is strongly dependent on $\lambda_4+\lambda_5$, according to the top right panel of Fig.~\ref{slicedecay}. We will carry out this study for a choice of the $\lambda_j$ consistent with the stability and perturbativity considerations discussed above and for two different choices of the overall triplet mass scale, $m_\Delta$. Second, we will scan over the values of $\lambda_4$ and $m_\Delta$ for fixed $\lambda_5$, thereby varying the production cross section and BR from the values corresponding to our benchmark points. In doing so, we will rescale the significance of the signal accordingly. Third, we will repeat this analysis for different representative choices of $v_\Delta$ to indicate how the varying $H^{\pm\pm}$ BR affects the $\lambda_4$-sensitivity.
Finally, we will compare the sensitivity with that obtained from the observed rate for SM Higgs boson decay into a pair of photons, as loop corrections from the charged triplet scalars affect this rate as a function of the Higgs portal couplings and $m_\Delta$. The results are plotted in Fig.\,\ref{haa}, where we show the corresponding regions of $5\sigma$ sensitivity to the model parameters. In what follows, we provide a more detailed discussion of the collider simulation and analysis than we provided for the results in Fig.~\ref{bdtdis}, given that we focus on the $H^\pm\rightarrow hW^\pm$ channel for coupling determination. }} \subsection{Benchmark points} \label{subsec:BMandS} {\color{black} As discussed in Sec.\,\ref{moddec}, the $H^{++}H^{--}\rightarrow\ell^+\ell^+\ell'^-\ell'^-$ channel is powerful for triplet model discovery at small $v_\Delta$, but it cannot determine $\lambda_4$, as it is $\lambda_4$-independent. In contrast, $H^\mp H^{\pm\pm}\rightarrow hW^\mp \ell^\pm \ell^\pm/hW^\mp W^\pm W^\pm$ are promising for the determination of $\lambda_{4}$ at intermediate and large $v_\Delta$. In order to determine their collider signatures, we choose two representative benchmark points, taking into account vacuum stability, perturbative unitarity, perturbativity, neutrino masses and our result in Fig.\,\ref{bdtdis}: $m_\Delta=800$\,GeV ($m_\Delta=400$\,GeV), $v_\Delta=10^{-4}$\,GeV, $m_h=125$\,GeV, $m_Z=91.1876$\,GeV, $m_\nu=0.01$\,eV, $\lambda_2=0.2$, $\lambda_3=0$, $\lambda_4=0$, $\lambda_5=-0.1$ for the $W^\pm W^\pm hW^\mp$ ($\ell^\pm \ell^\pm hW^\mp$) channel, which is a representative point of the large (small) $m_\Delta$ region. Note that although these benchmark parameter choices have $\lambda_4=0$, the sum $\lambda_4+\lambda_5$ differs from zero and lies in a region where BR($H^\pm\to hW^\pm)$ varies significantly with this combination of couplings. The choice of the two different mass scales corresponds to the edges of various overlapping discovery regions, as indicated by the two black points in Fig.~\ref{bdtdis}.} \subsection{Simulation: $pp\rightarrow H^\mp H^{\pm\pm}\rightarrow hW^\mp \ell^\pm \ell^\pm\rightarrow b\bar{b}\ell^\mp \ell^\pm \ell^\pm \slashed{E}_T$ for intermediate $v_\Delta$} \label{sec:simu} {\color{black}{In this section we first generate data for $pp\rightarrow H^\mp H^{\pm\pm}\rightarrow hW^\mp \ell^\pm \ell^\pm\rightarrow b\bar{b}\ell^\mp \ell^\pm \ell^\pm \slashed{E}_T$ using {\tt MadGraph}, and then analyze the data with both a cut-based analysis and the BDT method. In the former, we choose a set of ``hard cuts'' by first comparing various signal and background distributions and endeavoring to optimize the choice by hand for the greatest signal significance. As an alternative, we employ the BDT. As we show below, the BDT method generally provides better signal efficiency and significance. }} \subsubsection{Cut based analysis: basic cuts}\label{basiccuts} \begin{figure}[thb!]
\captionstyle{flushleft} \begin{tabular}{cc} \includegraphics[width=90mm]{plot/leppt.pdf} & \includegraphics[width=90mm]{plot/lepptsub.pdf} \\ (a) same-sign lepton leading $p_T$ & (b) same-sign lepton sub-leading $p_T$ \\[6pt] \includegraphics[width=90mm]{plot/lppdphi.pdf} & \includegraphics[width=90mm]{plot/bdphi.pdf} \\ (c) same-sign lepton $\Delta\phi$ & (d) $\Delta\phi$ of two $b$ quarks \\[6pt] \includegraphics[width=90mm]{plot/htcuts.pdf} & \includegraphics[width=90mm]{plot/mhpp.pdf} \\ (e) $H_T$ & (f) Doubly-Charged Higgs invariant mass\\[6pt] \end{tabular} \caption{Representative reconstructed variables for the $\ell^\pm\ell^\pm hW^\mp$ channel after the basic cuts. We use the word ``signal'' to represent the $p p \to H^{\pm\pm}H^\mp\to \ell^\pm\ell^\pm hW^\mp$ channel in all histograms above.}\label{simu:bc} \end{figure} When analyzing the data with {\tt ROOT}\,6.06.02, we require all final state particles to have transverse momentum $p_T>20$\,GeV and pseudorapidity $|\eta|<2.5$; we also require exactly three leptons\footnote{Two of which have the same charge and the same flavor, with the third of opposite charge.} and exactly two jets\footnote{With at least one of them being a $b$ quark.} in the final state for the signal and the $t\bar{t}W^\pm$, $t\bar{t}Z$, $hZW^\pm$, $ZW^\pm b\bar{b}$ and $hZZ$ backgrounds. For the $t\bar{t}j$ and $W^+W^-b\bar{b}j$ backgrounds, we require exactly two leptons and three jets\footnote{With at least one and at most two of the three jets being $b$ quarks. The light jet with the smallest $p_T$ among these three jets is taken to be a lepton with a 0.01\% fake rate\,\cite{delphescard}.} in the final state. For the $ZW^\pm jj$ background, when the light jet is a $c$ quark, we use a fake rate of $10\%$; when the light jets are other light quarks, we use a fake rate of $0.01\%$\,\cite{delphescard}. After the basic cuts, the distributions of the reconstructed variables are shown in Fig.\,\ref{simu:bc}, and the cut efficiencies are given in Table \ref{tab:cft}. {{By comparing the signal and background distributions in Fig.\,\ref{simu:bc}, we find that the $\Delta\phi$ and $\Delta R$ between the two $b$ quarks, the scalar sum of the transverse momenta $H_T$, the same-sign lepton leading and sub-leading $p_T$, the same-sign lepton $\Delta\phi$ and $\Delta R$, $m_h$, $m_{H^{\pm\pm}}$ and the $W$ boson transverse mass $m_{WT}$ show distinct differences between our signal and the backgrounds, which can be exploited to reduce the backgrounds. These are the variables on which we apply the hard cuts below.}} \subsubsection{Cut based analysis: hard cuts}\label{hardcuts} {\color{black}To improve the significance of the signal, we apply the hard cuts in the same order as they are listed in Table \ref{llhctab}. After applying them}, the efficiency of each hard cut and the significance of our signal are presented in Table \ref{tab:cft}. From the table, it is seen that the backgrounds are efficiently reduced and our signal has a final cross section of about $7.3848\times10^{-4}$\,fb, with a significance of around 4; the estimated number of signal events after the basic and hard cuts is around 22 at the FCC with $\mathcal{L}=30\,\text{ab}^{-1}$. \begin{table}[!htbp] \centering \caption{Cut flow table for $p p \to H^{\pm\pm}H^\mp\to \ell^\pm\ell^\pm hW^\mp$ under basic cuts (bc) and hard cuts (hc) with integrated luminosity of $30\, \text{ab}^{-1}$.
Here and in Table\,\ref{tab:wwhc}, we use the same abbreviations: ``proc.'' for ``processes''; ``E'' for ``base 10 exponential function''; ``cs'' for ``cross section'' with unit $fb$; ``eff.'' for ``efficiency'' in percent; ``signi.'' for ``significance'' and ``hci-j'' means ``applying hard cuts \text{i, $\cdots$, j}''}\label{tab:cft} \begin{adjustbox}{max width = \textwidth} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline proc & original cs& - & bc & hc1 & hc1-2 & hc1-3 & hc1-4 & hc1-5 & hc1-6\\\hhline{|=|=|=|=|=|=|=|=|=|=|} & & eff. & 2.94 & 5.78 & 5.84 & 14.86 & 95.95 & 45.07 & 6.25 \\\cline{3-10} \bf hzw & 0.6817 & cs & 0.02 & 1.1584E-3 & 6.7652E-5 & 1.0053E-5 & 9.6460E-6 & 4.3474E-6 & 2.7268E-7 \\\hhline{|=|=|=|=|=|=|=|=|=|=|} & & eff. & 3.47 & 5.03 & 3.99 & 53.16 & 99.46 & 30.98 & 0 \\\cline{3-10} \bf zzh & 0.1107 & cs & 3.8413E-3 & 1.9322E-4 & 7.7094E-6 & 4.0983E-6 & 4.0762E-6 & 1.2628E-6 & 0 \\\hhline{|=|=|=|=|=|=|=|=|=|=|} & & eff. & 0.25 & 5.04 & 3.34 & 48.39 & 100 & 46.67 & 14.29 \\\cline{3-10} \bf zwjj & 46.165 & cs & 0.1133 & 5.7091E-3 & 1.9082E-4 & 9.233E-5 & 9.233E-5 & 4.3087E-5 & 6.1553E-6 \\\hhline{|=|=|=|=|=|=|=|=|=|=|} & & eff. & 3.98 & 4.73 & 1.85 & 43.25 & 81.88 & 17.09 & 0 \\\cline{3-10} \bf ttz & 135.7 & cs & 5.4044 & 0.2556 & 4.7167E-3 & 2.0402E-3 & 1.6705E-3 & 2.8544E-4 & 0 \\\hhline{|=|=|=|=|=|=|=|=|=|=|} & & eff. & 0.83 & 1.95 & 2.32 & 25 & 100 & 14.29 & 0 \\\cline{3-10} \bf zwbb & 42.66 & cs & 0.3521 & 6.8711E-3 & 1.5926E-4 & 3.9816E-5 & 3.9816E-5 & 5.688E-6 & 0 \\\hhline{|=|=|=|=|=|=|=|=|=|=|} & & eff. & 8.42 & 8.92 & 12.69 & 30.61 & 93.34 & 49.56 & 9.55 \\\cline{3-10} \bf wwbbj & 2.293 & cs & 0.1932 & 1.7223E-2 & 2.1858E-3 & 6.6900E-4 & 6.2442E-4 & 3.0946E-4 & 2.9544E-5 \\\hhline{|=|=|=|=|=|=|=|=|=|=|} & & eff. & 2.74 & 19.40 & 1.18 & 39.94 & 81.03 & 27.33 & 12.57 \\\cline{3-10} \bf ttw & 68.7 & cs & 1.8824 & 0.3652 & 4.3235E-3 & 1.7267E-3 & 1.3992E-3 & 3.8243E-4 & 4.809E-5 \\\hhline{|=|=|=|=|=|=|=|=|=|=|} & & eff. & 6.89 & 16.16 & 0.44 & 51.58 & 82.13 & 27.44 & 8.28 \\\cline{3-10} \bf ttj & 257 & cs & 17.7094 & 2.8610 & 1.2456E-2 & 6.425E-3 & 5.2771E-3 & 1.4478E-3 & 1.1993E-4 \\\hhline{|=|=|=|=|=|=|=|=|=|=|} \bf $\sigma_{\text{bkg}}^{\text{tot}}$ & 507.1454 & - & 25.6786 & 3.5130 & 2.4107E-2 & 1.1007E-2 & 9.1171E-3 & 2.4795E-3 & 2.0399E-4 \\\hhline{|=|=|=|=|=|=|=|=|=|=|} & & eff. & 16.15 & 62.03 & 58.30 & 87.20 & 96.94 & 78.43 & 98.50\\\cline{3-10} \bf signal & 0.0148 & cs & 0.0024 & 1.4862E-3 & 8.6373E-4 & 7.5321E-4 & 7.3012E-4 & 5.7264E-4 & 7.3848E-4 \\\hhline{|=|=|=|=|=|=|=|=|=|=|} \bf signi. 
& 0.1138 & - & 0.0820 & 0.1373 & 0.9467 & 1.2030 & 1.2744 & 1.7953 & 4.1664 \\ \hline \end{tabular} \end{adjustbox} ~\\~\\ \end{table} \begin{table}[!ht] \caption{A list of hard cuts for the $pp \to H^{\pm\pm}H^\mp\to\ell^\pm\ell^\pm hW^\mp$ channel.}\label{llhctab} \centering \begin{tabular}{c}\toprule[1.5pt] $m_Z\ge82$\,GeV or $m_Z\le 98$\,GeV, $80\text{\,GeV}\le m_h\le130$\,GeV\\ $p_{T,b}^{\text{leading}}\ge80$\,GeV, $p_{T,\ell^{\text{oppo.}}}\ge40$\,GeV, $p_{T,\ell^{\text{same}}}^{\text{leading}}\ge200$\,GeV, $p_{T,\ell^{\text{same}}}^{\text{sub-leading}}\ge70$\,GeV, $H_T\ge700$\,GeV\\ $0\le m_{WT}\le 90$\,GeV\\ $-2\le\Delta\phi_{b\bar{b}}\le2$, $0\le\Delta R_{b\bar{b}}\le2$\\ $-1.8\le\Delta\phi_{\ell^\pm\ell^\pm}\le1.8$, $0.6\le\Delta R_{\ell^\pm\ell^\pm}\le2.8$\\ $340\text{\,GeV}\le m_{H^{\pm\pm}}\le390$\,GeV\\ \bottomrule[1.25pt] \end{tabular}\par \end{table} \subsubsection{BDT based analysis result} To improve the cut efficiency, we also carry out a BDT based analysis as for analyzing model discovery at the 100\,TeV collider in Sec.\,\ref{sec:modeldis}. The result is shown in parallel with the cut-based result in Table\,\ref{tab:bdt} for comparison, and we find that the BDT method improves the signal significance by about a factor of 2 through optimizing the cut efficiency; in addition, the signal efficiency as well as the signal cross section are also improved by about a factor of 3. \begin{center} \begin{minipage}{\linewidth} \centering \captionof{table}{Comparison between BDT and cut-flow based results at $\mathcal{L}=30\,\text{ab}^{-1}$ for $pp \to H^{\pm\pm}H^\mp\to\ell^\pm\ell^\pm hW^\mp$.} \label{tab:bdt} \begin{tabular}{ C{2.5in} C{1.35in} C{1.35in} }\toprule[1.5pt] & \bf BDT & \bf Cut based \\\midrule \bf signal efficiency & 0.839 & 0.308 \\\midrule \bf signal significance & 6.8922 & 4.1664 \\\midrule \bf final signal cross section (fb) & $1.2417\times10^{-2}$ & $7.3848\times10^{-4}$ \\\midrule \bf event number at detector & 60 & 22 \\ \bottomrule[1.25pt] \end {tabular}\par \bigskip \end{minipage} \end{center} \subsection{Simulation: $H^\mp H^{\pm\pm}\rightarrow hW^\mp W^\pm W^\pm\rightarrow b\bar{b}\ell^\mp \ell^\pm \ell^\pm \slashed{E}_T$ process for intermediate and large $v_\Delta$} The $H^\mp H^{\pm\pm}\rightarrow hW^\mp \ell^\pm \ell^\pm$ channel is helpful for the determination of $\lambda_4$ only at intermediate $v_\Delta$, for large $v_\Delta$'s, the $H^\mp H^{\pm\pm}\rightarrow hW^\mp W^\pm W^\pm$ channel can be used. Since it shares the same backgrounds as the $H^\mp H^{\pm\pm}\rightarrow hW^\mp \ell^\pm \ell^\pm$ channel in last sub-section, we generate 1,000,000 events for this signal and use the background data generated for the $H^\mp H^{\pm\pm}\rightarrow hW^\mp \ell^\pm \ell^\pm$ channel to study its collider phenomenologies. We still perform an exclusive analysis, and by using the same basic cuts as for the $\ell^\pm\ell^\pm hW^\mp$ channel, we obtain the reconstructed variables under basic cuts for the $W^\pm W^\pm hW^\mp$ channel shown in Fig.\,\ref{wwbc1}. Note that $\Delta\Phi$ and $\Delta R$ between the two $b$ quarks and the two same-sign leptons, leading $p_T$ of the same-sign leptons, SM $h$, the doubly-charged Higgs and $Z$ boson masses and the transverse $W$ boson mass are the hard cuts that can be applied to further separate the signal from the backgrounds. Those hard cuts are applied in the same order as they are listed in Table \ref{wwhwhc}: \begin{figure}[thb!] 
\captionstyle{flushleft} \begin{tabular}{cc} \includegraphics[width=90mm]{plot/wwhwleppt.pdf} & \includegraphics[width=90mm]{plot/wwhwmetcuts.pdf} \\ (a) same-sign lepton leading $p_T$ & (b) Missing $E_T$ \\[6pt] \includegraphics[width=90mm]{plot/wwhwlppdR.pdf} & \includegraphics[width=90mm]{plot/wwhwbdR.pdf} \\ (c) same-sign lepton $\Delta R$ & (d) $\Delta R$ of two $b$ quarks \\[6pt] \includegraphics[width=90mm]{plot/wwhwhtcuts.pdf} & \includegraphics[width=90mm]{plot/wwhwmhpp.pdf} \\ (e) $H_T$ & (f) Doubly-charged Higgs invariant mass \\[6pt] \end{tabular} \caption{Reconstructed variables for the $W^\pm W^\pm hW^\mp$ channel under basic cuts. We use the word ``signal'' to represent the $p p \to H^{\pm\pm}H^\mp\to W^\pm W^\pm hW^\mp$ channel in all histograms above.}\label{wwbc1} \end{figure} \begin{table}[!htbp] \centering \caption{Cut flow table for $H^\mp H^{\pm\pm}\rightarrow hW^\mp W^\pm W^\pm$ under basic cuts (bc) and hard cuts (hc) with integrated luminosity of $30\, \text{ab}^{-1}$. Here we use the same abbreviations as in Table\,\ref{tab:cft}.}\label{tab:wwhc} \begin{adjustbox}{max width = \textwidth} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline proc.& original cs & - & bc & hc1 & hc1-2 & hc1-3 & hc1-4 & hc1-5\\\hhline{|=|=|=|=|=|=|=|=|=|} & & eff. & 2.94 & 4.37 & 10.35 & 97.29 & 39.72 & 52.92 \\\cline{3-9} \bf hzw & 0.6817 & cs & 0.02 & 8.741E-4 & 9.0474E-5 & 8.8025E-5 & 3.4965E-5 & 1.8503E-5 \\\hhline{|=|=|=|=|=|=|=|=|=|} & & eff. & 3.47 & 3.80 & 7.02 & 93.30 & 51.62 & 54.71 \\\cline{3-9} \bf zzh & 0.1107 & cs & 3.8413E-3 & 1.4586E-4 & 1.0246E-5 & 9.5603E-6 & 4.9351E-6 & 2.6999E-6 \\\hhline{|=|=|=|=|=|=|=|=|=|} & & eff. & 0.25 & 5.40 & 9.20 & 89.62 & 61.59 & 55.45 \\\cline{3-9} \bf zwjj & 46.165 & cs & 0.1133 & 6.1201E-3 & 5.6308E-4 & 5.0462E-4 & 3.1077E-4 & 1.7231E-4 \\\hhline{|=|=|=|=|=|=|=|=|=|} & & eff. & 3.98 & 5.08 & 4.62 & 63.02 & 36.59 & 41.05 \\\cline{3-9} \bf ttz & 135.7 & cs & 5.4044 & 0.2748 & 1.2704E-2 & 8.0062E-3 & 2.9292E-3 & 1.2026E-3 \\\hhline{|=|=|=|=|=|=|=|=|=|} & & eff. & 0.83 & 1.66 & 3.90 & 82.5 & 74.24 & 44.90 \\\cline{3-9} \bf zwbb & 42.66 & cs & 0.3521 & 5.8326E-3 & 2.2750E-4 & 1.8769E-4 & 1.3935E-4 & 6.2563E-5 \\\hhline{|=|=|=|=|=|=|=|=|=|} & & eff. & 8.42 & 10.72 & 24.12 & 83.77 & 52.14 & 65.83 \\\cline{3-9} \bf wwbbj & 2.293 & cs & 0.1932 & 2.0719E-2 & 4.9978E-3 & 4.1865E-3 & 2.1826E-3 & 1.4369E-3 \\\hhline{|=|=|=|=|=|=|=|=|=|} & & eff. & 2.74 & 22.76 & 1.94 & 59.90 & 42.10 & 53.38 \\\cline{3-9} \bf ttw & 68.7 & cs & 1.8824 & 0.4139 & 8.3198E-3 & 4.9832E-3 & 2.0977E-3 & 1.1198E-3 \\\hhline{|=|=|=|=|=|=|=|=|=|} & & eff. & 6.89 & 18.54 & 1.26 & 65.57 & 45.23 & 34.56 \\\cline{3-9} \bf ttj & 257 & cs & 17.7094 & 3.2826 & 4.1454E-2 & 2.7182E-2 & 1.2293E-2 & 4.2491E-3 \\\hhline{|=|=|=|=|=|=|=|=|=|} \bf $\sigma_{\text{bkg}}^{\text{tot}}$ & 507.1454 & - & 25.6786 & 4.0050 & 6.8367E-2 & 4.5148E-2 & 1.9993E-2 & 8.2645E-3 \\\hhline{|=|=|=|=|=|=|=|=|=|} & & eff. & 5.68 & 51.03 & 79.46 & 100 & 70.07 & 94.24 \\\cline{3-9} \bf signal & 0.0971 & cs & 5.5079E-3 & 2.8104E-3 & 2.2331E-3 & 2.1615E-3 & 1.5146E-3 & 1.4273E-3 \\\hhline{|=|=|=|=|=|=|=|=|=|} \bf sig. & 0.7467 & - & 0.1883 & 0.2433 & 1.4564 & 1.7220 & 1.7896 & 2.5123 \\ \hline \end{tabular} \end{adjustbox} \end{table} Results after applying the hard cuts are given in Table \ref{tab:wwhc}. And for comparison, the BDT based analysis is presented in parallel in Table \ref{bdtwwhc}, we see that BDT based analysis still gives a larger significance, which is about three times larger compared with cut-based result. 
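As a cross-check of the cut-flow numbers, the cut-based significances quoted in Tables \ref{tab:cft} and \ref{tab:wwhc} are consistent with the simple estimator $S/\sqrt{S+B}$, where $S$ and $B$ are the expected signal and total background event counts at $\mathcal{L}=30\,\text{ab}^{-1}$ (the choice of estimator is our reading of the tables rather than something stated explicitly above). A minimal C sketch that reproduces, up to rounding of the tabulated cross sections, the quoted values of 4.1664 and 2.5123 is:
\begin{lstlisting}
#include <math.h>
#include <stdio.h>

/* S/sqrt(S+B) from the final signal and total background cross
   sections (in fb) and the integrated luminosity (in fb^-1).    */
static double significance(double sig_fb, double bkg_fb, double lumi)
{
    double S = sig_fb*lumi, B = bkg_fb*lumi;
    return S/sqrt(S + B);
}

int main(void)
{
    const double lumi = 30000.0;  /* 30 ab^-1 expressed in fb^-1 */

    /* last column of Table tab:cft  (l l h W channel) */
    printf("llhW: %.4f\n", significance(7.3848e-4, 2.0399e-4, lumi));
    /* last column of Table tab:wwhc (W W h W channel) */
    printf("WWhW: %.4f\n", significance(1.4273e-3, 8.2645e-3, lumi));

    return 0;  /* prints approximately 4.17 and 2.51 */
}
\end{lstlisting}
Applied to the pre-cut cross sections (0.0148\,fb signal and 507.1454\,fb total background), the same estimator likewise gives the value 0.1138 quoted in the first column of Table \ref{tab:cft}.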
\begin{table}[!ht] \caption{A list of hard cuts for the $pp\to H^{\pm\pm}H^\mp\to W^\pm W^\pm hW^\mp$ channel.}\label{wwhwhc} \centering \begin{tabular}{c}\toprule[1.5pt] $m_Z\ge80$\,GeV or $m_Z\le 100$\,GeV, $80\text{\,GeV}\le m_h\le140$\,GeV\\ $p_{T,b}^{\text{leading}}\ge80$\,GeV, $p_{T,\ell^{\text{oppo.}}}\ge40$\,GeV, $p_{T,\ell^{\text{same}}}^{\text{leading}}\ge80$\,GeV, $p_{T,\ell^{\text{same}}}^{\text{sub-leading}}\ge50$\,GeV, $800\text{\,GeV}\le H_T\le2200$\,GeV\\ $-1.4\le\Delta\phi_{b\bar{b}}\le1.4$, $0\le\Delta R_{b\bar{b}}\le2$\\ $-2\le\Delta\phi_{\ell^\pm\ell^\pm}\le2$, $0\le\Delta R_{\ell^\pm\ell^\pm}\le2.8$\\ $200\text{\,GeV}\le m_{H^{\pm\pm}}\le800$\,GeV\\ \bottomrule[1.25pt] \end{tabular}\par \end{table} \begin{center} \begin{minipage}{\linewidth} \centering \captionof{table}{Comparison between BDT and cut-flow based results at $\mathcal{L}=30\,\text{ab}^{-1}$ for $pp\to H^{\pm\pm}H^\mp\to W^\pm W^\pm hW^\mp$.} \label{bdtwwhc} \begin{tabular}{ C{2.5in} C{1.35in} C{1.35in} }\toprule[1.5pt] & \bf BDT & \bf Cut based \\\midrule \bf signal efficiency & 0.6009 & 0.2591 \\\midrule \bf signal significance & 6.8507 & 2.5123 \\\midrule \bf final signal cross section (fb) & $3.3097\times10^{-3}$ & $1.4273\times10^{-3}$ \\\midrule \bf event number at detector & 99 & 42 \\ \bottomrule[1.25pt] \end {tabular}\par \bigskip \end{minipage} \end{center} \subsection{Determination of $\lambda_4$ upon discovery at the future 100\,TeV collider}\label{lam4haa} As we have been addressing throughout the paper, the $H^\mp H^{\pm\pm}\rightarrow hW^\mp \ell^\pm \ell^\pm$ and the $H^\mp H^{\pm\pm}\rightarrow hW^\mp W^\pm W^\pm$ channels are important for the determination of $\lambda_4$, but our study above is done at only one benchmark point for both $H^\mp H^{\pm\pm}\rightarrow hW^\mp \ell^\pm \ell^\pm$ and $H^\mp H^{\pm\pm}\rightarrow hW^\mp W^\pm W^\pm$. To see how our result is sensitive to $\lambda_4$, we fix $\lambda_5=-0.1$ and perform a scan in the $\lambda_4\text{-}m_\Delta$ plane\footnote{Note that $\lambda_{2,3}$ are suppressed by $v_\Delta$, so their values do not matter here.}. Doing so, it is straightforward to rescale the signal and, thereby, obtain the variation in signal significance. The corresponding results are given in Fig.\,\ref{haa}\,(a), (b), (c) with $v_\Delta=10^{-1}$\,GeV, $v_\Delta=10^{-4}$\,GeV and $v_\Delta=10^{-5}$\,GeV respectively. There, we indicate the regions giving larger than $5\sigma$ significance for the two channels considered here. In Fig.\,\ref{haa}(a), {\it i.e.}, at large $v_\Delta=10^{-1}\rm\,GeV$, only the $W^\pm W^\pm hW^\mp$ channel is useful, whereas the significance for $\ell^\pm \ell^\pm hW^\mp$ is less than 5 in the entire parameter space. The reason is that the rate for $H^{\pm\pm}\rightarrow\ell^\pm\ell^\pm$ is highly suppressed at large $v_\Delta$ as can be seen from left panel of Fig.\,\ref{decayregionplotHPP}. For $W^\pm W^\pm hW^\mp$, the appearance of the region at the upper-left corner is due to an increase of the decay BR for $H^\pm\rightarrow hW^\pm$ when $\lambda_4$ goes from negative to positive as can be seen from the upper right panel of Fig.\,\ref{slicedecay}. Therefore, at large $v_\Delta$, the $W^\pm W^\pm hW^\mp$ channel is more helpful for the determination of $\lambda_4$ at the FCC. From Fig.\,\ref{haa}(b), {\it i.e.}, corresponding to intermediate $v_\Delta=10^{-4}\rm\,GeV$, both $W^\pm W^\pm hW^\mp$ and $\ell^\pm \ell^\pm hW^\mp$ can help to determine $\lambda_4$. 
The $W^\pm W^\pm hW^\mp$ channel covers a larger region at a higher mass scale while the $\ell^\pm \ell^\pm hW^\mp$ channel provides more coverage at a lower mass scale. The overlap between these two channels makes them useful as a cross check if the triplet scale is around $m_\Delta\in[400,900]$\,GeV. For $m_\Delta\in[900,1100]$\,GeV, the $W^\pm W^\pm hW^\mp$ channel can be used to determine $\lambda_4$; and for $m_\Delta\in[300,400]$\,GeV, we can use the $\ell^\pm \ell^\pm hW^\mp$ channel. \begin{figure}[thb!] \captionstyle{flushleft} \begin{tabular}{ccc} \includegraphics[width=55mm]{plot/haa07.pdf} & \includegraphics[width=55mm]{plot/haa08.pdf} & \includegraphics[width=53mm]{plot/haa09.pdf}\\ (a) $v_\Delta=10^{-1}$\,GeV & (b) $v_\Delta=10^{-4}$\,GeV & (c) $v_\Delta=10^{-5}$\,GeV \\ \end{tabular} \caption{The blue regions have significance $\ge5$ for the $hW^\mp W^\pm W^\pm$ channel and the magenta regions for the $hW^\mp \ell^\pm \ell^\pm$ channel. The outermost very light black region is the combined constraint on $R_{h\gamma\gamma}$ from ATLAS and CMS at 7\,TeV and 8\,TeV; the intermediate light black region is the planned FCC-ee constraint and the innermost black region is the planned FCC-ee+FCC-hh constraint on $R_{h\gamma\gamma}$.}\label{haa} \end{figure} And from Fig.\,\ref{haa}(c), {\it i.e.}, at small $v_\Delta=10^{-5}$\,GeV, only the $\ell^\pm \ell^\pm hW^\mp$ channel can be used to determine $\lambda_4$ since the $H^{\pm\pm}\rightarrow W^\pm W^\pm$ channel is highly suppressed as can be seen from the left panel of Fig.\,\ref{decayregionplotHPP}. Comparing this result with those at $v_\Delta=10^{-1}$\,GeV and $v_\Delta=10^{-4}$\,GeV, we see that at $v_\Delta=10^{-5}$\,GeV, the $\ell^\pm \ell^\pm hW^\mp$ channel covers the largest mass region, up to about 1.4\,TeV. {\color{black} It is now interesting to consider the possible complementarity of these direct probes of the Higgs portal coupling and mass with indirect tests.} As has been studied in Refs.\,\cite{Arhrib:2011vc,Kanemura:2012rs}, the doubly-charged Higgs particle of the CTHM can give a sizable contribution to the $h\rightarrow \gamma\gamma$ decay rate, especially for negative $\lambda_4$ and $\lambda_{45}$, due to constructive interference\,\cite{Chun:2012jw}. We therefore expect the $h\rightarrow\gamma\gamma$ decay rate to provide an indirect determination of $\lambda_4$ by excluding some of the parameter space on the $\lambda_4\text{-}m_\Delta$ plane. {\color{black} In this context, we consider the ratio $R_{h\gamma\gamma}$ given by \begin{align} R_{h\gamma\gamma}=\frac{\Gamma^{\text{NP}}(h\rightarrow\gamma\gamma)+\Gamma^{\text{SM}}(h\rightarrow\gamma\gamma)}{\Gamma^{\text{SM}}(h\rightarrow\gamma\gamma)},\label{haadef} \end{align} with $\Gamma^{\text{NP}}$ and $\Gamma^{\text{SM}}$ the new physics (NP) and pure SM contributions to the decay rate of $h\rightarrow\gamma\gamma$, respectively. From Eq.\,\eqref{haadef} we see that, if nature is completely described by the SM, this ratio will be exactly one; any value that deviates from one would be a hint of new physics.} For the fermion loop contribution to $R_{h\gamma\gamma}$, we retain only the dominant top-quark loop.
The current LHC and the proposed FCC constraints on this ratio are indicated in the $\lambda_4\text{-}m_\Delta$ plane in Fig.\,\ref{haa}\,(a), (b), (c)\footnote{The values we use for $R_{h\gamma\gamma}$ are: for the LHC, the current experimental value $1.16_{-0.18}^{+0.20}$\,\cite{Khachatryan:2016vau,Biswas:2017tnw}; for the FCC-ee collider, the proposed value $1\pm0.05$; and $1\pm0.01$ for the FCC-hh collider\,\cite{Contino:2016spe}. }, where the lightest black region is the combined constraint on $R_{h\gamma\gamma}$ from ATLAS and CMS at 7\,TeV and 8\,TeV; the intermediate black region is the planned FCC-ee constraint and the darkest black region shows the combined planned FCC-ee+FCC-hh constraint on $R_{h\gamma\gamma}$. From Fig.\,\ref{haa}\,(a), we see that the current LHC constraint on $R_{h\gamma\gamma}$ has almost ruled out the small-$m_\Delta$, large-$\lambda_4$ region, but in other regions the current LHC constraints on the $\lambda_4\text{-}m_\Delta$ plane are relatively weak. This situation, however, will change considerably at the future 100\,TeV collider, as can be seen from the darker black regions in Fig.\,\ref{haa}\,(a), (b), (c). {\color{black} Thus, a combination of the direct and indirect probes of the CTHM would be advantageous in the determination of $\lambda_4$. If future precision measurements of the $h\to\gamma\gamma$ decay rate agree with the SM expectations, a substantial portion of the $\lambda_4\text{-}m_\Delta$ parameter space will be excluded, thereby assisting in the determination of $\lambda_4$. In the remaining regions of parameter space, $\lambda_4$ could eventually be determined by $H^\mp H^{\pm\pm}\rightarrow \ell^\pm \ell^\pm hW^\mp $ and $H^\mp H^{\pm\pm}\rightarrow W^\pm W^\pm hW^\mp $ based on our study above. {{It is also possible that future experiments at the LHC, FCC-ee, or FCC-hh see a deviation of $R_{h\gamma\gamma}$ from the SM prediction. In this case, if $\lambda_5$ is determined from the mass splitting ($\lambda_5=-0.1$ in our case), we might also conclude that: (1) if the deviation is detected through the $hW^\mp W^\pm W^\pm$ ($hW^\mp \ell^\pm \ell^\pm$) channel, the triplet will have a large (small) vev with $|\lambda_4|\sim 1$; (2) if the deviation is observed in both the $hW^\mp W^\pm W^\pm$ and $hW^\mp \ell^\pm \ell^\pm$ channels, an intermediate triplet vev can be inferred with $|\lambda_4| \sim 1$.}}} \section{Conclusion} \label{sec:conclusion} {\color{black} In this paper, we have investigated the model discovery and Higgs portal parameter determination of the Complex Triplet Higgs Model at a prospective 100\,TeV $pp$ collider. The triplet with Y=2 has long been known as a key ingredient in generating non-zero neutrino masses through the type-II seesaw mechanism. The triplet interacts with the SM through its electroweak gauge interactions, its coupling to the leptons in the type-II seesaw interaction, and its coupling to the Higgs doublet via the Higgs portal parameters $\lambda_4$ and $\lambda_5$. The latter modify the scalar potential and may enable a strong first-order electroweak phase transition, as needed for electroweak baryogenesis.} The CTHM parameter space is constrained by current experiments at the LHC in the region where the triplet is light ($\lesssim600$\,GeV) and its vev, $v_\Delta$, is small ($\lesssim10^{-4.6}$\,GeV).
In this paper, we have analyzed the reach of a prospective 100\,TeV $pp$ collider by working in the Normal Mass Hierarchy (NMH) framework, wherein the doubly-charged Higgs particle $H^{\pm\pm}$ is the heaviest. Based on our study, we conclude that a large part of the CTHM parameter space will be covered by the 100\,TeV collider in the future as shown in our Fig.\,\ref{bdtdis}. More specifically, we find that : \begin{enumerate} \item The $H^{++}H^{--}$ and $H^{\pm\pm}H^\mp$ channels have the largest and the second largest cross section respectively, making them the dominant discovery channels of the CTHM. Importantly, the $H^{++}H^{--}\rightarrow\ell^+\ell^+\ell'^-\ell'^-$ channel is recognized as the smoking-gun signature of the CTHM, which can be used to discover the triplet up to a mass $\sim$4.5\,TeV when $v_\Delta\lesssim10^{-4}$\,GeV. In addition, for $v_\Delta\gtrsim10^{-4}$\,GeV, the triplet model can be discovered by the $H^{\pm\pm}H^\mp\rightarrow\ell^\pm\ell^\pm hW^\mp/W^\pm W^\pm hW^\mp$ channel when the triplet mass is below $\sim$1\,TeV. \item For $v_\Delta\gtrsim10^{-4}$\,GeV, the triplet can also be discovered through the $H^{++}H^{--}\rightarrow W^+W^+W^-W^-\to\ell^+\ell^+\ell'^-\ell'^-\slashed{E}_T$ channel when the triplet mass is below $\sim$1.7\,TeV. In arriving at this conclusion, we use the same BDT training and test variables as for the $H^{++}H^{--}\rightarrow\ell^+\ell^+\ell'^-\ell'^-$ channel. However, if one were to choose a different set of BDT training and test variables to optimize the cut efficiency, or if one were to study different final states like in Ref.\,\cite{kang:2014jia}, one might anticipate that the quartic-$W$ channel will also cover the upper right white corner in Fig.\,\ref{bdtdis}, such that the whole parameter space can be explored at the future 100\,TeV collider. \item Upon discovery, Higgs portal parameter $\lambda_5$ can be determined straightforwardly from the mass splitting $\Delta m\approx\frac{|\lambda_5|v^2}{8m_\Delta}$ defined in Eq.\,\eqref{massspec}. \end{enumerate} While the triplet can be discovered over a wide range and $\lambda_5$ can be calculated straightforwardly from the mass splitting upon discovery, determination of the other Higgs portal parameter $\lambda_4$ is more complicated even after discovery. Fortunately, we can obtain $\lambda_4$ through precise measurements of the decay branching ratios. We find that only four decay vertices are helpful and summarize them in Table\,\ref{tab:1}. At the same time, to further narrow down the parameter space, precise measurements on the $h\to\gamma\gamma$ decay rate can help indirectly to the determination of $\lambda_4$ by excluding some of the parameter space, as shown in our Fig.\,\ref{haa}. In this work, we only focus on the charged triplet Higgs particles in the NMH framework. However, the neutral triplet Higgs particles can also be used for model discovery and the Higgs portal parameter determination at the 100\,TeV collider. Looking ahead to future studies of the neutral states, we comment that: \begin{enumerate} \item In the NMH framework, the $HA$ channel has the third largest cross section. We present the decay patterns of $H$ and $A$ in Fig.\,\ref{decayregionplotH} and Fig.\,\ref{decayregionplotA} respectively in Appendix\,\ref{HAdecay}. 
Recalling from Table\,\ref{tab:1} that $A\to hZ$ is relevant for $\lambda_4$ determination, we find that the $pp\to HA\to hh\,hZ\to \gamma\gamma b\bar{b}b\bar{b}\ell^+\ell^-$ channel only has $\mathcal{O}(100)$ events at the future collider with $\sqrt{s}=100\rm\,TeV$ and $\mathcal{L}=30\rm\, ab^{-1}$, even before considering the backgrounds. Again, the event number can be improved by studying different final states or different decay chains including the vertices in Table\,\ref{tab:1}. \item For $\lambda_4$ determination, the $H^{\pm}\to hW^\pm$ channel has a larger branching ratio for $\lambda_{45}<0$. In comparison, $H\to ZZ$ has a larger branching ratio for $\lambda_{45}>0$, which makes the vacuum stable up to a higher scale compared with the benchmark point we use in this work. On the other hand, the $H\to W^+W^-$/$A\to hZ$ channels dominate for both positive and negative $\lambda_{45}$, as can be seen from the right panels of Fig.\,\ref{decayregionplotH} and Fig.\,\ref{decayregionplotA}. Therefore, theoretically, the $HA$ channel also provides a way for model discovery and $\lambda_{4,5}$ determination at the 100\,TeV collider. \end{enumerate} \acknowledgments{YD would like to thank Huai-Ke Guo and Hao-Lin Li for many useful discussions on MadGraph and Olivier Mattelaer for quickly answering several technical questions about MadGraph through launchpad. YD, MJRM, and JHY were supported in part under U.S. Department of Energy contract DE-SC0011095. JHY is also supported by the National Science Foundation of China under Grant No. 11875003 and the Chinese Academy of Sciences (CAS) Hundred-Talent Program. }
\section{Introduction} The anomaly mediation \cite{Randall:1998uk} is the most economical mechanism to generate supersymmetry (SUSY) breaking terms in the supersymmetric standard model (SUSY SM). In this breaking mechanism, only a dynamical SUSY-breaking sector is required, and no other extra fields are needed. Under the assumption of a generic form of the K\"ahler potential, all the scalar bosons except the lightest Higgs boson acquire masses, which are of the order of the gravitino mass. The gaugino masses, on the other hand, are generated by quantum effects, and are therefore suppressed by a one-loop factor compared with the gravitino mass. This is a concrete realization of the split SUSY scenario \cite{split}, in which the squarks and sleptons are $O(10^{(1-2)})$ TeV while the gaugino masses are less than $O(1)$ TeV. Such a mass spectrum is favored from the phenomenological viewpoints of the SUSY flavor and CP problems \cite{Gabbiani:1996hi} and the lightest Higgs mass bound \cite{Amsler:2008zzb}. Since it is safe from the cosmological gravitino over-production problem \cite{Weinberg:1982zq}, it is also consistent with thermal leptogenesis \cite{Fukugita:1986hr}. In anomaly mediation the neutral component of the $SU(2)_L$ gauginos, called Winos, becomes the lightest state in the gaugino sector. This is because the gaugino masses are proportional to the beta functions of the gauge couplings. The Higgsino, on the other hand, can be as heavy as the gravitino, depending on the K\"ahler potential. Therefore, the neutral Wino can be the lightest SUSY particle (LSP) in the anomaly mediation scenario, and becomes a viable candidate for dark matter in the universe. The thermal relic abundance of the Wino LSP in the universe is consistent with the WMAP observation when the Wino mass is from 2.7~TeV to 3.0~TeV \cite{Hisano:2006nn}. A lighter Wino predicts too small a thermal relic density; however, it is known that the decay of the gravitino or other quasi-stable particles may produce Winos non-thermally so that the relic abundance is consistent with the observation \cite{Gherghetta:1999sw,Moroi:1999zb}. Successful Big-Bang Nucleosynthesis (BBN) also gives bounds on the annihilation cross section of the dark matter, while large dark matter annihilation in the BBN era may give a solution to the lithium problem \cite{Jedamzik,khori}. In anomaly mediation, a Wino mass of around (150-300)~GeV may be compatible with the lithium problem when the Wino LSP is the dominant component of the dark matter. The direct detection of dark matter is now performed in several experiments with high sensitivities, and the theoretical side is also being studied extensively. The tree-level contribution to the Wino LSP-nucleon ($\tilde{\chi}^0$-$N$) scattering cross section, which is responsible for direct detection of the dark matter, is evaluated in Ref.~\cite{Murakami:2000me}. However, when the SUSY particles and the heavier Higgs bosons in the SUSY SM, except the gauginos, have masses of the order of the gravitino mass, the tree-level interactions of the Wino LSP with quarks are suppressed by the gravitino mass. Thus, the Wino LSP-nucleon scattering process is dominated by the weak gauge boson loop diagrams. Despite the loop factor, it was pointed out that the loop contribution is not suppressed by the Wino mass even if it is heavier than the weak scale \cite{Hisano:2004pv}. In this letter, we reevaluate the Wino LSP-nucleon scattering cross section.
The one-loop contribution to the process is evaluated by Refs.~\cite{Hisano:2004pv, Cirelli:2005uq,Essig:2007az}; however, their results are not consistent with each other. In addition, while the interaction of Wino and gluon is generated by two-loop diagrams, it has to be included for the complete evaluation of the spin-independent Wino LSP-nucleon interaction. We take into account all the relevant diagrams up to two-loop and derive effective operators, which act as leading contribution in the scattering process. \section{Effective interaction for Wino LSP-nucleon scattering} First, we summarize the effective interactions of the Wino LSP with light quarks ($q=u,d,s$) and gluon, which are relevant to the Wino LSP-nucleon scattering. They are given as follows, \begin{eqnarray} {\cal L}^{\rm eff}&=&\sum_{q=u,d,s}{\cal L}^{\rm{eff}}_q +{\cal L}^{\rm{eff}}_g \ , \end{eqnarray} where \beq {\cal L}^{\rm{eff}}_q &=& d_q\ \overline{\tilde{\chi}^0}\gamma^{\mu}\gamma_5\tilde{\chi}^0\ \bar{q}\gamma_{\mu}\gamma_5 q + f_q m_q\ \overline{\tilde{\chi}^0}\tilde{\chi}^0\ \bar{q}q +f_q'\ \overline{\tilde{\chi}^0}\tilde{\chi}^0\ \bar{q} i\sla{\partial}q \cr &+& \frac{g^{(1)}_q}{m_{\tilde{\chi}^0}} \ \overline{\tilde{\chi}^0} i \partial^{\mu}\gamma^{\nu} \tilde{\chi}^0 \ {\cal O}_{\mu\nu}^q + \frac{g^{(2)}_q}{m^2_{\tilde{\chi}^0}}\ \overline{\tilde{\chi}^0}(i \partial^{\mu})(i \partial^{\nu}) \tilde{\chi}^0 \ {\cal O}_{\mu\nu}^q \ , \label{eff_lagq} \\ {\cal L}^{\rm eff}_{ g}&=& f_G\ \overline{\tilde{\chi}^0}\tilde{\chi}^0 G_{\mu\nu}^aG^{a\mu\nu} \nonumber\\ &+&\frac{g^{(1)}_G}{m_{\tilde{\chi}^0}}\ \overline{\tilde{\chi}^0} i\partial^{\mu}\gamma^{\nu} \tilde{\chi}^0 \ {\cal O}_{\mu\nu}^g + \frac{g^{(2)}_G}{m^2_{\tilde{\chi}^0}}\ \overline{\tilde{\chi}^0}(i\partial^{\mu}) (i\partial^{\nu})\tilde{\chi}^0 \ {\cal O}_{\mu\nu}^g \ . \label{eff_lagg} \end{eqnarray} Here, $m_{\tilde{\chi}^0}$ and $m_q$ is mass of Wino and quark, respectively. The first term of ${\cal L}^{\rm eff}_q$ contributes to the spin-dependent $\tilde{\chi}^0$-$N$ interaction, while the other terms in ${\cal L}^{\rm eff}_{q}$ and ${\cal L}^{\rm eff}_{g}$ generate spin-independent ones. The fourth and fifth terms in ${\cal L}^{\rm eff}_q$ and the second and third terms in ${\cal L}^{\rm eff}_g$ depend on the twist-2 operators (traceless parts of the energy momentum tensor) for quarks and gluon, \beq {\cal O}_{\mu\nu}^q&\equiv&\frac12 \bar{q} i \left(\partial_{\mu}\gamma_{\nu} + \partial_{\nu}\gamma_{\mu} -\frac{1}{2}g_{\mu\nu}\sla{\partial} \right) q \ , \nonumber\\ {\cal O}_{\mu\nu}^g&\equiv&\left(G_{\mu}^{a\rho}G_{\rho\nu}^{a}+ \frac{1}{4}g_{\mu\nu} G^a_{\alpha\beta}G^{a\alpha\beta}\right) \ . \end{eqnarray} The scattering cross section of the Wino LSP with target nuclei is expressed compactly by using the coefficients given in ${\cal L}^{\rm eff}_{q}$ and ${\cal L}^{\rm eff}_{g}$ as follows \cite{Jungman:1995df}, \begin{eqnarray} \sigma&=& \frac{4}{\pi}\left(\frac{m_{\tilde{\chi}^0} m_T}{m_{\tilde{\chi}^0} +m_T}\right)^2 \left[(n_p f_p+n_nf_n)^2+4 \frac{J+1}{J} \left( a_p\left\langle S_p\right\rangle+ a_n\left\langle S_n\right\rangle \right)^2\right]\ , \label{sigma} \end{eqnarray} where $m_T$ is the mass of target nucleus. The first term in the bracket comes from the spin-independent interactions while the second one is generated by the spin-dependent one. 
In the spin-independent interaction term, $n_p$ and $n_n$ are proton and neutron numbers in the target nucleus, respectively, and the spin-independent coupling of the Wino with nucleon, $f_N~(N=p,n)$, is given as \begin{eqnarray} f_N/m_N&=&\sum_{q=u,d,s} \left( (f_q+f_q') f_{Tq}+\frac{3}{4} (q(2)+\bar{q}(2))(g_q^{(1)}+g_q^{(2)})\right) \nonumber\\ &-&\frac{8\pi}{9\alpha_s}f_{TG} f_G +\frac{3}{4} G(2)\left(g^{(1)}_G +g^{(2)}_G\right) \ . \label{f} \end{eqnarray} The matrix elements of nucleon are expressed by using nucleon mass $m_N$ ($N=p,n$) as\footnote{ We use equations of motion for quarks for evaluation of the matrix elements of $\langle N \vert \bar{q} i\sla{\partial}q \vert N\rangle$, though this term is not relevant to our calculation, which we will see in the next section. } \begin{eqnarray} f_{Tq}&\equiv& \langle N \vert m_q \bar{q} q \vert N\rangle/m_N \ , \nonumber \\ f_{TG}&\equiv& 1-\sum_{u,d,s}f_{Tq} \ , \nonumber\\ \langle N(p)\vert {\cal O}_{\mu\nu}^q \vert N(p) \rangle &=&\frac{1}{m_N} (p_{\mu}p_{\nu}-\frac{1}{4}m^2_N g_{\mu\nu})\ (q(2)+\bar{q}(2)) \ , \nonumber\\ \langle N(p) \vert {\cal O}_{\mu\nu}^g \vert N(p) \rangle & =& \frac{1}{m_N} (p_{\mu}p_{\nu}-\frac{1}{4}m^2_N g_{\mu\nu})\ G(2) \ . \end{eqnarray} Here, $q(2)$, $\bar{q}(2)$ and $G(2)$ are the second moments of the quark, anti-quark and gluon distribution functions, which are expressed as \begin{eqnarray} q(2)+ \bar{q}(2) &=&\int^{1}_{0} dx ~x~ [q(x)+\bar{q}(x)] \ , \cr G(2) &=&\int^{1}_{0} dx ~x ~g(x) \ . \end{eqnarray} They are scale-dependent, and are mixed with each others once the QCD radiative corrections are included. We use the second moments for gluon and quark distribution functions at the scale of $Z$ boson mass, which are derived by the CTEQ parton distribution \cite{Pumplin:2002vw}, and include bottom and charm quark contributions. On the other hand, the constant $a_N$ ($N=p,n$), which is responsible for the spin-dependent contribution, is defined as \begin{eqnarray} a_{N}&=&\sum_{q=u,d,s} d_q \Delta q_N \ , \end{eqnarray} \begin{eqnarray} 2 s_{\mu}\Delta q_N &\equiv& \langle N \vert \bar{q}\gamma_{\mu}\gamma_5 q \vert N \rangle \ , \end{eqnarray} where $s_{\mu}$ is the nucleon's spin, while $J$ and $\langle S_N\rangle= \langle A\vert S_N\vert A\rangle$ in Eq.~(\ref{sigma}) are total spin of nucleus $A$ and the expectation values of the total spin of protons and neutrons in $A$, respectively. \begin{table} \begin{center} \begin{tabular}{|l|l|} \hline \multicolumn{2}{|c|}{For proton}\cr \hline $f_{Tu}$& 0.023\cr $f_{Td}$& 0.034\cr $f_{Ts}$&0.025\cr \hline \multicolumn{2}{|c|}{For neutron}\cr \hline $f_{Tu}$&0.019\cr $f_{Td}$& 0.041\cr $f_{Ts}$& 0.025 \cr \hline \end{tabular} \hskip 1cm \begin{tabular}{|l|l|} \hline \multicolumn{2}{|c|}{Spin fraction}\cr \hline $\Delta u$& 0.77\cr $\Delta d$& -0.49\cr $\Delta s$& -0.15\cr \hline \end{tabular} \hskip 1cm \begin{tabular}{|l|l||l|l|} \hline \multicolumn{4}{|c|}{Second moment at $\mu=m_Z$}\cr \hline $G(2)$&0.48&&\cr $u(2)$&0.22&$\bar{u}(2)$& 0.034\cr $d(2)$&0.11&$\bar{d}(2)$&0.036\cr $s(2)$&0.026&$\bar{s}(2)$&0.026\cr $c(2)$&0.019&$\bar{c}(2)$&0.019\cr $b(2)$&0.012&$\bar{b}(2)$&0.012\cr \hline \end{tabular} \end{center} \caption{ Parameters for quark and gluon matrix elements used in this letter. $f_{Ti}$ $(i=u,d,s)$ is taken from the estimation in Refs.~\cite{Cheng:1988im,Ohki:2008ff}. The spin fractions for proton comes from Ref.~\cite{Adams:1995ufa}. Those for neutron are given by exchange of up and down quarks in the tables. 
The second moments for gluon and quark distribution functions are calculated at the scale $\mu=m_Z$ ($m_Z$ is the $Z$ boson mass) using the CTEQ parton distribution \cite{Pumplin:2002vw}. } \label{table1} \end{table} Notice that the term proportional to $f_G$ in the spin-independent coupling of $\tilde{\chi}^0$-$N$ in Eq.~(\ref{f}) is divided by $\alpha_s$. It comes from the definition of the gluon contribution to the nucleon mass, $f_{TG}$, and the trace anomaly of the energy momentum tensor as \begin{eqnarray} m_N f_{TG}&=& -\frac{9\alpha_s}{8\pi } \langle N \vert G_{\mu\nu}^aG^{a\mu\nu} \vert N\rangle \end{eqnarray} at the leading order.\footnote{Here, we use the three-flavor approximation.} Thus, when evaluating the spin-independent $\tilde{\chi}^0$-$N$ interaction, we need to include the $O(\alpha_s)$ correction to $f_G$ \cite{Drees:1993bu}. Other contributions to the Wino LSP-gluon interaction, which come from the gluon twist-2 operators, are sub-leading as long as their coefficients are $O(\alpha_s)$. Thus, we neglect the contribution from the gluon twist-2 operators in the following discussion. Parameters for the quark and gluon matrix elements used in this analysis are summarized in Table~1. Notice that the strange quark content of the nucleon $f_{Ts}$ is much smaller than previously thought, according to the recent lattice simulation \cite{Ohki:2008ff}. This leads to a significant suppression of the spin-independent cross section; as a result, the interaction of the Wino with the gluon becomes relatively more important in the cross section. \section{Results} Now we evaluate the coefficients of the effective interactions in Eqs.~(\ref{eff_lagq}, \ref{eff_lagg}), which are needed to calculate the scattering cross section. The Wino LSP is accompanied by the charged Wino ($\tilde{\chi}^-$). The mass difference is dominated by the one-loop contribution unless the Higgsino and Wino masses are almost degenerate; we ignore it in this letter. The neutral and charged Winos couple to the standard model sector only through the gauge interactions, \begin{eqnarray} {\cal L}_{\rm int} &=& -\frac{e}{s_W} \left( \overline{\tilde{\chi}^0}\gamma^\mu\tilde{\chi}^-W^\dagger_\mu + h.c. \right) + e \frac{c_W}{s_W}\overline{\tilde{\chi}^-}\gamma^\mu\tilde{\chi}^-Z_\mu + e\overline{\tilde{\chi}^-}\gamma^\mu\tilde{\chi}^-A_\mu \ . \end{eqnarray} Here, $e$ is the electric charge, $s_W=\sin\theta_W$ and $c_W=\cos\theta_W$ with $\theta_W$ being the Weinberg angle. As described in the Introduction, the effective interactions of the Wino LSP with light quarks are generated by loop diagrams. The leading contribution comes from the one-loop diagrams shown in Fig.~1. Calculating these diagrams, we derive the coefficients in Eq.~(\ref{eff_lagq}) as follows, \begin{eqnarray} f_q &=&\frac{\alpha_2^2}{4m_W m_{h^0}^2} g_{\rm H}(x) \ , \label{fq} \\ f_q' &=& 0 \ ,\nonumber\\ d_q&=& \frac{\alpha_2^2}{m_W^2} g_{\rm AV}(x) \ ,\\ g_q^{(1)}&=&\frac{\alpha_2^2}{m_W^3} g_{T1}(x) \ ,\\ g_q^{(2)}&=&\frac{\alpha_2^2}{m_W^3} g_{T2}(x) \ , \label{1loop} \end{eqnarray} where $m_{h^0}$ is the lightest Higgs boson ({\it i.e.}, SM Higgs boson) mass, $x=m_W^2/m^2_{\tilde{\chi}^0}$ and $\alpha_2=\alpha/s_W^2$ (here $m_W$ is the $W$ boson mass and $\alpha$ is the fine-structure constant). Diagram (a) in Fig.~1, which is induced by SM Higgs boson $(h^0)$ exchange, contributes to $f_q$, while diagram (b) generates the other terms in Eq.~(\ref{eff_lagq}).
With the light quark masses ignored, the mass functions in Eqs.~(\ref{fq}-\ref{1loop}) are given as \begin{eqnarray} g_{\rm H}(x) &=& -\frac{2}{b} (2+2x-x^2)\tan^{-1}(\frac{2 b}{\sqrt{x}}) + 2\sqrt{x}(2-x \log(x)) \ , \nonumber\\ g_{\rm AV}(x) &=& \frac{1}{24b}\sqrt{x}(8-x-x^2)\tan^{-1}(\frac{2b}{\sqrt{x}}) -\frac{1}{24} x(2-(3+x)\log(x)) \ , \label{gav} \nonumber\\ g_{\rm T1}(x) &=& \frac{1}{3}b(2+x^2)\tan^{-1}(\frac{2 b}{\sqrt{x}}) +\frac{1}{12}\sqrt{x}(1-2x-x(2-x)\log{x}) \ , \nonumber\\ g_{\rm T2}(x) &=& \frac{1}{4b} x (2-4x+x^2)\tan^{-1}(\frac{2 b}{\sqrt{x}}) -\frac{1}{4}\sqrt{x}(1-2x-x(2-x)\log(x)) \ , \nonumber\\ \end{eqnarray} with ${b}=\sqrt{1-x/4}$.\footnote {Here, $g_{T1}$ is larger than $F^{(0)}_{T1}$ given in Eq. (42) in \cite{Hisano:2004pv}. We corrected it in this calculation. } \begin{figure}[t] \begin{center} \includegraphics[width=0.55\linewidth]{1loop.eps} \caption{One-loop contributions to effective interactions of Wino LSP and light quarks.} \end{center} \end{figure} As discussed in Ref.~\cite{Hisano:2004pv}, the spin-independent interaction of $\tilde{\chi}^0$-$N$ are not suppressed even if the Wino LSP is much larger than the $W$ boson mass. The mass functions $g_{\rm H}(x)$ and $g_{\rm T1}(x)$ become finite in a limit of $x\rightarrow 0$ while other two functions are zero, as \begin{eqnarray} g_{\rm H}(x)&\simeq& -2\pi \ , \nonumber\\ g_{\rm AV}(x)&\simeq& \frac{\sqrt{x}}6 \pi \ , \nonumber\\ g_{\rm T1}(x)&\simeq& \frac{\pi}3 \ , \nonumber\\ g_{\rm T2}(x)&\simeq& -\frac{\sqrt{x}}6 \ . \end{eqnarray} \begin{figure}[t] \begin{center} \includegraphics[width=0.7\linewidth]{2loop.eps} \caption{Two-loop contributions to interactions of Wino LSP and gluon. Here, $Q$ and $q$ represent heavy and light quarks, respectively. } \end{center} \end{figure} Next, let us discuss the effective interactions of the Wino LSP and gluon. As we discussed in the previous section, the $O(\alpha_s)$ correction to $f_G$ in Eq.~(\ref{eff_lagg}) is relevant at the leading order though it is induced by two-loop order. Three types of diagrams in Fig.~2 contribute to $f_G$. The diagram (a) includes heavy quark loop ($Q=c,b,t$). The heavy quark content of the nucleon is related to the gluon condensate as \cite{Shifman:1978zn} \begin{eqnarray} \langle N \vert m_Q \bar{Q}Q \vert N\rangle &=&-\frac{\alpha_s}{12\pi} \langle N \vert G_{\mu\nu}^aG^{a\mu\nu} \vert N\rangle \ . \label{shifman} \end{eqnarray} Thus, the diagram (a) can be evaluated from Eq.~(\ref{fq}) by replacing light to heavy quarks and using Eq.~(\ref{shifman}). On the other hand, we need to calculate irreducible two-loop diagrams (b) and (c) explicitly. In the diagram (c), the momentum which dominates the quark loop integration is characterized by mass of quark which emits two gluons. Since we are constructing the effective theory under $O(1)$ GeV, the integration in the infrared regime under such energy scale should not be included. Thus, light quarks does not contribute in this diagram. On the other hand, the loop momentum of quark loop in the diagram (b) is dominated by the external momentum of the quark loop diagram ({\it i.e.}, $W$ boson mass in this case); therefore, all quarks contributes in the loop. 
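For the numerical results presented below, these mass functions must be evaluated at the physical Wino mass. As an illustration only (a direct transcription of the one-loop expressions above for numerical evaluation; it is not part of the derivation), they can be coded as:
\begin{lstlisting}
#include <math.h>
#include <stdio.h>

/* One-loop mass functions of the Wino-quark effective interactions,
   transcribed from the text; x = m_W^2/m_chi^2, b = sqrt(1-x/4).   */
static double g_H(double x)
{
    double b = sqrt(1.0 - x/4.0);
    return -2.0/b*(2.0 + 2.0*x - x*x)*atan(2.0*b/sqrt(x))
           + 2.0*sqrt(x)*(2.0 - x*log(x));   /* -> -2*pi as x -> 0 */
}

static double g_AV(double x)
{
    double b = sqrt(1.0 - x/4.0);
    return sqrt(x)*(8.0 - x - x*x)*atan(2.0*b/sqrt(x))/(24.0*b)
           - x*(2.0 - (3.0 + x)*log(x))/24.0;
}

static double g_T1(double x)
{
    double b = sqrt(1.0 - x/4.0);
    return b*(2.0 + x*x)*atan(2.0*b/sqrt(x))/3.0
           + sqrt(x)*(1.0 - 2.0*x - x*(2.0 - x)*log(x))/12.0; /* -> pi/3 */
}

static double g_T2(double x)
{
    double b = sqrt(1.0 - x/4.0);
    return x*(2.0 - 4.0*x + x*x)*atan(2.0*b/sqrt(x))/(4.0*b)
           - sqrt(x)*(1.0 - 2.0*x - x*(2.0 - x)*log(x))/4.0;
}

int main(void)
{
    double mW = 80.4, mchi = 300.0;      /* GeV; mchi is illustrative */
    double x = mW*mW/(mchi*mchi);
    printf("x=%.4f g_H=%+.4f g_AV=%+.4f g_T1=%+.4f g_T2=%+.4f\n",
           x, g_H(x), g_AV(x), g_T1(x), g_T2(x));
    return 0;
}
\end{lstlisting}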
We express the $O(\alpha_s)$ contribution to $f_G$ as follows, \begin{eqnarray} f_G &=& -3\times \frac{\alpha_s}{12\pi} \frac{\alpha_2^2}{4m_W m_{h^0}^2} g_{\rm H}(x) +\frac{\alpha_s}{4\pi} \frac{\alpha_2^2}{m_W^3} g_{\rm B3}(x,y) +2\times \frac{\alpha_s}{4\pi} \frac{\alpha_2^2}{m_W^3} g_{\rm B1}(x) \ , \end{eqnarray} where $y=m_t^2/m^2_{\tilde{\chi}^0}$ ($m_t$ is the top quark mass). The first term represents the contribution from diagram (a). The second term comes from diagrams (b) and (c) with the third-generation quark loop, while the first- and second-generation quarks contribute to the third one. We ignore the quark masses except for the top quark. We then find that the mass functions $g_{\rm B3}(x,y)$ and $g_{\rm B1}(x)$ are given by \begin{eqnarray} g_{\rm B3}(x,y)&=& -\frac{x^{3/2} (2 y-x)}{12 (y-x)^2} -\frac{x^{3/2} y^3 \log (y) }{24 (y-x)^3} +\frac{x^{5/2} (3 y^2-3 x y+x^2 ) \log (x)}{24 (y-x)^3} \nonumber\\ &+&\frac{x^{3/2} \sqrt{y} (y^3-2 y^2-14 y+6 x) \tan^{-1}(\frac{2 b_t}{\sqrt{y}}) }{24 b_t (y-x)^3 } \nonumber\\ &-&\frac{x \left(x^4-3 y x^3-2 x^3+3 y^2 x^2+6 y x^2+4 x^2-6 y^2 x-6 y x-6 y^2\right) \tan ^{-1}(\frac{2 b}{\sqrt{x}})}{24 b (y-x)^3} \ , \nonumber\\ g_{\rm B1}(x)&=& -\frac{1}{24} \sqrt{x} (x \log (x)-2) +\frac{(x^2-2x +4) \tan ^{-1}(\frac{2 b}{\sqrt{x}})}{24b} \ , \end{eqnarray} where $b_t=\sqrt{1-y/4}$. Notice that diagrams (b) and (c) also give finite contributions to the spin-independent $\tilde{\chi}^0$-$N$ interaction in the limit $m_{\tilde{\chi}^0}\rightarrow \infty$, {\it i.e.}, $x,~y\ll 1$, \begin{eqnarray} g_{\rm B3}(x,y)&\simeq&\frac{(3\sqrt{y}+2\sqrt{x}) x} {24(\sqrt{x}+\sqrt{y})^3} \pi \ ,\nonumber\\ g_{\rm B1}(x)&\simeq& \frac{\pi}{12} \ . \end{eqnarray} Now we are in a position to present the scattering cross section. In Fig.~3, we show the spin-independent $\tilde{\chi}^0$-$p$ scattering cross section as a function of $m_{\tilde{\chi}^0}$ (solid lines). Here, we take $m_{h^0}= 115$, $130$, 300~GeV, and 1~TeV from bottom to top. While the latter two values may not be realistic in the minimal SUSY SM, the next-to-minimal SUSY SM (NMSSM), for example, may predict a larger Higgs boson mass. We find that the spin-independent cross section is $O(10^{-(48-46)})$ cm$^2$, depending on the Higgs boson mass. In order to understand the result, we also plot in Fig.~4 each contribution to $f_p$ from the effective operators. The solid line represents the Higgs exchange contribution (Fig.~1(a) and Fig.~2(a)), the dashed line the twist-2 operator contribution (Fig.~1(b)), and the dash-dot line the contribution from the irreducible two-loop diagrams in Fig.~2(b) and (c). As can be seen, the contribution from the quark twist-2 operators is the dominant part. However, the other two also give relatively large contributions with the opposite sign. Consequently, $f_p$ is suppressed by the accidental cancellation, which leads to a smaller spin-independent cross section. When the Higgs boson mass is taken to be larger, the cross section becomes larger since the cancellation is milder. In this letter, we have ignored the tree-level coupling of the Wino LSP to the lightest Higgs boson since it is suppressed by the heavy Higgsino mass.
When it dominates the spin-independent interaction, the spin-independent cross section is evaluated as \begin{eqnarray} \sigma_p&\simeq&9\times 10^{-47} {\rm cm^2} \times \left(\frac{m_{\tilde{H}}}{10{\rm TeV}}\right)^{-2} \left(\frac{m_{h^0}}{115{\rm GeV}}\right)^{-4} \sin^22\beta \ , \end{eqnarray} where $m_{\tilde{H}}$ is the Higgsino mass and $\beta$ is the vacuum angle in the SUSY SM. Thus, the tree-level contribution may dominate the spin-independent cross section, depending on the parameters in the SUSY SM, even if the SUSY particle masses are of the order of the gravitino mass. For completeness, in Fig.~\ref{fig:sigma}, we also show the spin-dependent cross section as a dashed line. As expected from the behavior of the mass function $g_{\rm AV}(x)$, we find that the cross section is suppressed by the Wino mass. Finally, we comment on the differences between our result and the previous works. We found a few errors in the calculation of Ref.~\cite{Hisano:2004pv}, as described in this text, though the qualitative behavior is not different from this work. On the other hand, compared with Refs.~\cite{Cirelli:2005uq,Essig:2007az}, our result for the spin-independent cross section is smaller by a factor of $O(10^{-(2-3)})$. In Ref.~\cite{Cirelli:2005uq} only the contributions to the scalar couplings to quarks and gluon are evaluated. In Ref.~\cite{Essig:2007az} the relative sign of the quark twist-2 and the Higgs boson exchange contributions is opposite to ours. Thus, the cross section is not reduced by the accidental cancellation in those works. Their loop functions are also different from ours. We could not understand the origin of the differences. \begin{figure}[t] \begin{center} \includegraphics[width=0.6\linewidth]{res_sigma.eps} \caption{$\tilde{\chi}^0$-$p$ scattering cross section as a function of $m_{\tilde{\chi}^0}$. The spin-independent (SI) cross section is given as solid lines, taking $m_{h^0}=115,~130~{\rm GeV}$, 300~GeV, and 1~TeV from bottom to top. We also plot the spin-dependent (SD) cross section as a dashed line. } \end{center} \label{fig:sigma} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.6\linewidth]{res_fp.eps} \caption{Each contribution to the spin-independent coupling $f_p$. The solid line represents the Higgs exchange contribution including the heavy quark one, the dashed line the twist-2 operator contribution, and the dash-dot line the contribution from the two-loop diagrams in Fig.~2(b) and (c). The Higgs boson mass is $m_{h^0}=115,~130~{\rm GeV}$, 300~GeV, and 1~TeV from bottom to top. The units are arbitrary. } \end{center} \label{fig:fp} \end{figure} \section{Conclusion and discussion} In this letter, we calculated the Wino LSP-nucleon cross section in the anomaly-mediated SUSY breaking mechanism. We especially consider the scenario in which all SUSY particles except for the gauginos are heavy enough to decouple at the electroweak scale, and the neutral Wino becomes the LSP. In such a scenario, although the Wino LSP does not interact with the nucleon at tree level, it does through loop diagrams. We have taken into account all the loop diagrams which act as leading contributions to the Wino LSP-nucleon scattering. As a result, the spin-independent cross section turns out to be $O(10^{-(48-46)})~{\rm cm}^2$, depending on the Higgs boson mass. In the calculation, we found that the Wino-gluon interaction at the two-loop level contributes with the opposite sign to the main Wino-quark interaction, which leads to a suppression of the total Wino-nucleon coupling.
Therefore, it is concluded that the direct detection of Wino dark matter is difficult in the present experiments in this scenario. We comment on the cancellation that we observed in the Wino-nucleon coupling. The cancellation is supposed to be accidental in the scenario that we analysed. Thus, if one considers other scenarios in SUSY or other models, the two-loop contribution may be as large as that of the lower-order diagrams, which may cause an enhancement of the scattering cross section. Such an analysis will be given elsewhere \cite{HisanoIshiwataNagata}. \section*{Acknowledgment} The work was supported in part by the Grant-in-Aid from the Ministry of Education, Culture, Sports, Science, and Technology, Government of Japan, No. 20244037, No. 2054252 and No. 2244021 (J.H.) and Research Fellowships of the Japan Society for the Promotion of Science for Young Scientists (K.I.). The work of J.H. is also supported by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan.
\section{Introduction} The newest generations of Intel processors, Xeon Scalable (Skylake) and Xeon Phi (Knights Landing), extend the current standard of AVX2 vector instructions with the 512-bit wide AVX-512 instruction sets \cite{AVX512}. The width of registers is similarly increased and, in addition, the number of floating point registers is doubled to 32. Compilers can make use of the new instructions, but targeted code is required to reach optimal performance. Efficient vectorisation can lead to a doubling of the available register memory and floating point capability. OpenQCD-1.6 \cite{openqcd} already includes optional AVX2 and SSE targeted implementations, and an extension targeting BlueGene/Q also exists \cite{BGQ}. Following the logic of these extensions we have reimplemented several performance critical functions using Intel intrinsic instructions for AVX-512 vectors. These are mainly the Dirac operator and the vector operations necessary for the conjugate gradient algorithm. Here we publish the extension to openQCD-1.6 \cite{sa2c-github-page}. In addition, we report scaling studies performed for OpenQCD-FASTSUM, the FASTSUM collaboration's extension of openQCD \cite{FASTSUM}, with the AVX-512 implementation. \section{Implementation} The implementation of the extension is guided by the expectation that lattice QCD simulations are memory bandwidth bound. The performance of the application is limited by the memory bandwidth between the processor and different cache levels rather than the capacity for floating point operations. The same assumption guides the existing AVX2, SSE and BlueGene/Q targeted implementations. Gauge matrices and spinors of the Wilson formulation are stored in memory as structures, and SIMD vectors are constructed out of spinor degrees of freedom. Our implementation combines spinor and direction indices to construct the 512-bit vectors required. The construction of a vector does not require any rearrangement of the data before loading into registers. However, different Dirac and directional indices are often handled differently, increasing the number of floating point instructions required. Since the primary objective is to optimise memory use, we consider this acceptable. In the SSE and AVX2 extensions, direct insertions of assembly code are used to achieve full control over the compiled code. We instead use Intel intrinsic instructions. Compilers generally replace these routines with assembly instructions in a one-to-one correspondence. Intrinsic functions offer more freedom in choosing the optimal compiler and allow easier porting to different processor types. They leave the compiler with the task of choosing the optimal instruction for each operation and assigning data to registers. This is especially important since the number of registers is increased in Xeon Scalable CPUs \cite{AVX512}. Using prewritten assembly code would confine the extension only to future processors with the higher register count. We implement several core functions, including the Dirac operator, the application of the Sheikholeslami-Wohlert term, and several linear algebra functions. The extension is activated using the \lstinline{AVX512} preprocessor flag, similarly to the existing \lstinline{AVX2} and \lstinline{SSE} preprocessor flags. When the flags are combined, the AVX-512 implementation is used when available.
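As a simplified illustration of the style of code involved (a sketch only, not a routine taken from the extension itself), a linear algebra operation of the form $r = a\,x + y$ on packed single-precision spinor components can be written with AVX-512 intrinsics as follows, with each fused multiply-add processing 16 floats per 512-bit register:
\begin{lstlisting}
#include <immintrin.h>

/* Illustrative sketch: r = a*x + y over n packed floats using
   512-bit registers, with a scalar loop for the remainder.     */
void mulr_add_avx512(float *r, const float *x, const float *y,
                     float a, int n)
{
    __m512 va = _mm512_set1_ps(a);              /* broadcast a        */
    int i;

    for (i = 0; i + 16 <= n; i += 16) {
        __m512 vx = _mm512_loadu_ps(x + i);     /* 16 components of x */
        __m512 vy = _mm512_loadu_ps(y + i);     /* 16 components of y */
        _mm512_storeu_ps(r + i, _mm512_fmadd_ps(va, vx, vy));
    }

    for (; i < n; i++)                          /* remainder          */
        r[i] = a*x[i] + y[i];
}
\end{lstlisting}
The Dirac operator kernels follow the same pattern but, as described above, combine spinor and direction indices when filling the 512-bit registers, so that no rearrangement of the data is needed before the loads.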
\section{Benchmarking} \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Volume & \multicolumn{3}{|c|}{ Knights Landing } & \multicolumn{3}{|c|}{ Skylake } \\ & AVX-512 & AVX2 & Speedup & AVX-512 & AVX2 & Speedup \\ \hline \multicolumn{7}{|l|}{ Single Precision } \\ \hline 4 $\times$4$\times$4$\times$4 & 8521 & 5455 & 1.56 & 36177 & 29839 & 1.21 \\ 8 $\times$4$\times$4$\times$4 & 6276 & 4130 & 1.52 & 34649 & 27769 & 1.25 \\ 8 $\times$8$\times$4$\times$4 & 6063 & 4042 & 1.50 & 36167 & 29289 & 1.23 \\ 8 $\times$8$\times$8$\times$4 & 5286 & 3791 & 1.39 & 34894 & 28476 & 1.23 \\ 8 $\times$8$\times$8$\times$8 & 5088 & 3721 & 1.37 & 26408 & 21617 & 1.22 \\ 16$\times$8$\times$8$\times$8 & 4506 & 3338 & 1.35 & 25300 & 19180 & 1.32 \\ \hline \multicolumn{7}{|l|}{ Double precision } \\ \hline 4 $\times$4$\times$4$\times$4 & 6164 & 3725 & 1.65 & 26737 & 24681 & 1.08 \\ 8 $\times$4$\times$4$\times$4 & 4105 & 2857 & 1.44 & 26690 & 24609 & 1.08 \\ 8 $\times$8$\times$4$\times$4 & 3533 & 2517 & 1.40 & 26521 & 19687 & 1.35 \\ 8 $\times$8$\times$8$\times$4 & 3296 & 2421 & 1.36 & 25267 & 19312 & 1.31 \\ 8 $\times$8$\times$8$\times$8 & 3191 & 2405 & 1.33 & 18772 & 14471 & 1.29 \\ 16$\times$8$\times$8$\times$8 & 2911 & 2131 & 1.37 & 15513 & 15125 & 1.03 \\ \hline \end{tabular} \caption{ \label{singlecoretable} The performance of the functions Dw and Dw\_dble (single and double precision respectively) in Mflops on single Knights Landing and Skylake cores. } \end{table} \begin{figure} \centering \includegraphics[width=0.49\linewidth]{Dw.eps} \includegraphics[width=0.49\linewidth]{Dw_dble.eps} \caption{ The performance of Dw() (left), Dw\_dble() (right) performance measures run on a single Skylake core using the AVX-512 and AVX2 implementations. } \label{fig:singlecore} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\linewidth]{Dw_phi.eps} \includegraphics[width=0.49\linewidth]{Dw_dble_phi.eps} \caption{The Dw() (left), Dw\_dble() (right) performance measures on a single KNL core using the AVX-512 and AVX2 implementations. } \label{fig:singlecore_phi} \end{figure} We run several performance tests on the FASTSUM extension of OpenQCD 1.6 with and without the AVX-512 implementation on Cineca Marconi A2 cluster Intel Knights Landing nodes and the Supercomputing Wales Sunbird cluster with Intel Skylake nodes. On Sunbird we use the Intel C Compiler to build the AVX-512 implementation with the compiler flags \begin{lstlisting} -std=c89 -xCORE-AVX512 -mtune=skylake -O3 -DAVX512 -DAVX -DFMA3 -DPM. \end{lstlisting} The original AVX2 version is compiled with \begin{lstlisting} -std=c89 -xCORE-AVX512 -march=skylake -O3 -DPM -DAVX -DFMA3 -DPM. \end{lstlisting} On the Knights Landing cluster the AVX-512 version is compiled with \begin{lstlisting} -std=c89 -xMIC-AVX512 -O3 -DAVX512 -DAVX -DFMA3 -DPM \end{lstlisting} and compared against the AVX2 version compiled using \begin{lstlisting} -std=c89 -xMIC-AVX512 -O3 -DAVX -DFMA3 -DPM. \end{lstlisting} Firstly, we have measured the single core performance of the Dirac operator itself using the timing tools provided in the openQCD code. These tests do not account for memory dependencies, but only measure floating point performance. The performance in Mflops per second is shown in Tab. \ref{singlecoretable} and in Fig. \ref{fig:singlecore} and \ref{fig:singlecore_phi}. With small lattice sizes we observe a significant improvement, exceeding a factor of two with certain cases. 
Naturally the improvement is smaller in a realistic test case due to data dependencies and MPI communication. \begin{figure} \centering \includegraphics[width=0.49\linewidth]{strong.eps} \includegraphics[width=0.49\linewidth]{strong_knl.eps} \caption{ Left: Strong scaling performance on the Sunbird Skylake cluster measured against the AVX2 implementation. Right: Strong scaling performance on the Marconi Knights Landing cluster measured against the AVX2 implementation } \label{fig:strongscaling_skl} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\linewidth]{weak.eps} \includegraphics[width=0.49\linewidth]{weak_knl.eps} \caption{ Left: Weak scaling performance on the Sunbird Skylake cluster using $n$ cores and the lattice size $V=48^3\times n$ measured against the AVX2 implementation. Right: Weak scaling performance on the Marconi Knights Landing cluster using $n$ cores the lattice size $V=32^3\times n$ measured against the AVX2 implementation. } \label{fig:weakscaling} \end{figure} To get a more complete picture we measure the average time taken to generate a HMC trajectory. We have produced 6 trajectories starting from a random gauge configuration and report the average time per trajectory. We use a Wilson-Yukawa gauge action with $\beta=1.5$, $c_0=5/3$ and $c_1=-1/12$ and include two fermions with $\kappa=0.278000465$ and $0.276509194$ and $c_{sw}=1$. Two levels of smearing are enabled in the fermion actions. Domain Deflation and blocking are enabled. \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline $ L $ & $ T $ & $N$ & \multicolumn{2}{|c|}{ Time (s) / trajectory } & Speedup \\ & & & AVX-512 & AVX2 & \\ \hline 32 & 32 & 1 & 1.59e3 & 1.86e3 & 1.17 \\ 32 & 32 & 2 & 8.41e2 & 9.67e2 & 1.15 \\ 32 & 32 & 4 & 4.39e2 & 5.17e2 & 1.18 \\ 32 & 32 & 8 & 2.49e2 & 3.03e2 & 1.22 \\ 32 & 32 & 16 & 1.52e2 & 1.71e2 & 1.13 \\ 32 & 32 & 32 & 1.01e2 & 1.12e2 & 1.11 \\ \hline 32 & 32 & 1 & 1.59e3 & 1.86e3 & 1.17 \\ 32 & 64 & 2 & 1.62e3 & 1.93e3 & 1.19 \\ 32 & 128 & 4 & 1.73e3 & 1.96e3 & 1.13 \\ 32 & 256 & 8 & 1.70e3 & 2.00e3 & 1.18 \\ \hline \end{tabular} \caption{ \label{knltimingtable} Average timings per trajectory using $N$ Knights Landing nodes with the volume $T\times L^3$. Speedup is measured against the AVX2 implementation. } \end{table} \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline $ L $ & $ T $ & $N$ & $n$ & \multicolumn{2}{|c|}{ Time (s) / trajectory } & Relative \\ & & & & AVX-512 & AVX2 & Speedup \\ \hline 32 & 48 & 1 & 16 & 1.31e3 & 1.43e3 & 1.09 \\ 32 & 48 & 1 & 32 & 7.96e2 & 8.58e2 & 1.08 \\ 32 & 48 & 2 & 64 & 3.96e2 & 4.28e2 & 1.08 \\ 32 & 48 & 4 & 128 & 1.99e2 & 2.14e2 & 1.08 \\ 32 & 48 & 7 & 256 & 1.16e2 & 1.23e2 & 1.06 \\ 32 & 48 & 13 & 512 & 6.04e1 & 6.41e1 & 1.06 \\ 32 & 48 & 26 & 1024 & 2.91e1 & 3.13e1 & 1.08 \\ \hline 32 & 24 & 1 & 8 & 2.48e2 & 2.79e2 & 1.13 \\ 32 & 24 & 1 & 16 & 1.54e2 & 1.73e2 & 1.12 \\ 32 & 24 & 1 & 32 & 9.55e1 & 1.03e2 & 1.08 \\ 32 & 24 & 2 & 64 & 4.66e1 & 5.06e1 & 1.09 \\ 32 & 24 & 4 & 128 & 2.36e1 & 2.59e1 & 1.10 \\ \hline 48 & 16 & 1 & 16 & 6.38e2 & 7.146e2& 1.12 \\ 48 & 32 & 1 & 32 & 7.96e2 & 8.55e2 & 1.07 \\ 48 & 64 & 2 & 64 & 7.99e2 & 8.64e2 & 1.08 \\ 48 & 128 & 4 & 128 & 8.01e2 & 8.64e2 & 1.08 \\ 48 & 256 & 7 & 256 & 8.87e2 & 9.81e2 & 1.11 \\ 48 & 512 & 13 & 512 & 9.32e2 & 9.94e2 & 1.07 \\ 48 & 1024 & 26 & 1024 & 9.72e2 & 1.03e3 & 1.06 \\ \hline \end{tabular} \caption{ \label{skltimingtable} Average timings per trajectory using $N$ Skylake nodes with $n$ cores and the volume $T\times L^3$. 
Speedup is measured relative to the AVX2 implementation.}
\end{table}
The timings for several lattice sizes and configurations of nodes are given in Tabs.~\ref{knltimingtable} and \ref{skltimingtable} and shown in Figs.~\ref{fig:strongscaling_skl} and \ref{fig:weakscaling}. Full compilation and runtime parameters and the simulation output are publicly available~\cite{zenodo_skl_data,zenodo_knl_data}. The two versions of the code scale similarly, with the AVX-512 version remaining faster in each case. On the Sunbird Skylake machine, the improvement over a full trajectory is between 6\% and 13\%. Each node has 40 cores and the minimal number of nodes is allocated in each case. On the Marconi KNL system the improvement is between 11\% and 22\%. In this case all 64 cores are used on each node. No clear dependence on the lattice size or number of nodes can be deduced from the data.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$ L $ & $ T $ & $N$ & $n$ & \multicolumn{2}{|c|}{ Time (s) / trajectory } & Speedup \\
 & & & & AVX-512 & AVX2 & \\
\hline
\multicolumn{7}{|l|}{ Skylake } \\
\hline
32 & 24 & 1 & 32 & 2.28e3 & 2.47e3 & 1.08 \\
32 & 24 & 2 & 64 & 1.12e3 & 1.21e3 & 1.08 \\
32 & 24 & 4 & 128 & 5.68e2 & 6.22e2 & 1.10 \\
\hline
\end{tabular}
\caption{\label{gen2ltable} Average timings per trajectory, starting from a thermalised configuration, with the volume $T\times L^3$ on $N$ nodes using $n$ cores. Speedup is measured against the AVX2 implementation.}
\end{table}
Finally, we perform the same test with a thermalised starting configuration and a light quark. Two fermions with $\kappa=0.27831$ and $0.276509$ are included. The results are reported in Table~\ref{gen2ltable}. The speedup achieved is similar to the previous tests, between 8\% and 10\%.
\section{Conclusion}
We present an open-source implementation of the Dirac operator in openQCD 1.6 with extended AVX-512 vector operations written using Intel intrinsics. These operations allow the application to make full use of the wider, 512-bit registers, reducing the total number of memory requests, in particular those to the L1 cache, as well as the number of floating point instructions. The implementation assumes that memory bandwidth is the main bottleneck in the application: tradeoffs that reduce memory use at the cost of additional floating point operations and vector shuffles are considered acceptable. The application performs significantly better than the existing AVX2 implementation on Knights Landing and Skylake processors. In realistic benchmarking cases the improvement is between 6\% and 13\% on Skylake nodes and between 11\% and 22\% on Knights Landing nodes.
\section{Acknowledgements}
We acknowledge the support of the Supercomputing Wales project, which is part-funded by the European Regional Development Fund (ERDF) via Welsh Government.
\section{Introduction}
The advances in multimedia technologies have revolutionized the way we capture, process, transmit and store digital media such as images and videos. Using smart multimedia devices, ranging from digital cameras to smartphones, people capture and share billions of images and videos each day. This tremendous rise in the volume of digital images and videos being captured has led to an ever-increasing demand for ways to assess the quality of the captured images and videos, particularly in the broadcast industry, where video data must undergo different transcoding and compression processes to manage data bandwidth while maintaining a consistent level of quality to meet consumer demands. As a result, \textit{image quality assessment (IQA)} has garnered great interest in the research community~\cite{Wang1,Mittal}. IQA methods can be generally grouped into three main categories: 1) full-reference~\cite{George,Claudio,Shao,Lee}, 2) reduced-reference~\cite{Wang2}, and 3) no-reference~\cite{Mittal,Gu}. While tremendous progress has been made in the area of full-reference and reduced-reference IQA methods, such methods are not suitable for scenarios where there is no reference information to leverage. As such, despite significant research effort in IQA in the past few decades, no-reference image quality assessment remains a great challenge and is largely unsolved.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale = 0.2]{flower}
\end{center}
\caption{Demonstration of several degradation models on an example image from the CSIQ dataset~\cite{Larson}. From top to bottom row: i) additive Gaussian pink noise, ii) Gaussian blur, iii) global contrast, and iv) JPEG-2000 compression. The two columns show different degradation levels, where the first column has the lowest degradation ($c_0$) and the second column has the worst degradation ($c_4$).}
\label{fig1}
\end{figure}
Motivated by this challenge, we propose a novel no-reference image quality assessment system called Deep Quality, which leverages the power of deep learning to model the complex relationship between visual content and perceived quality. Deep Quality attempts to learn this relationship based on the appearance of images across different scales via a novel multi-scale deep convolutional neural network, trained to assess image quality from training samples spanning different distortions and degradations such as blur, Gaussian noise, and compression artifacts.
\section{Methodology}
\begin{figure}[h]
\begin{center}
\includegraphics[scale = 1.5]{dqa}
\end{center}
\caption{Deep Quality's network architecture. The network has three convolutional, ReLU, and maxpool layers. An input patch of size $64 \times 64$ is mapped to a five-class classification task, where each class corresponds to a particular image quality grade.}
\label{fig2}
\end{figure}
Deep Quality formulates image quality assessment as a learning problem where, for an image $I$ of size $n \times m$, the classifier is defined as
\begin{equation}
C(I) = \sum_{i=0}^{n\times m} w_i\times f_i\big(I(X(i))\big),
\end{equation}
where $f_i$ is a classifier for the image patch $I(X(i))$ at location $X(i)$ with weight $w_i$. Motivated by the classification performance of convolutional neural networks, Deep Quality uses a convolutional neural network approach to describe $f_i$. A back-propagation-based gradient descent algorithm can be employed to find the parameters $w_i$ and $f_i$ of the classifier $C(I)$.
An overview of Deep Quality can be described as follows. First, an image is decomposed into several local patches. Second, an image patch quality deep neural network is used to classify individual image patches into five classes, each corresponding to a different image quality grade: $c_0, \ldots, c_4$, with the best and worst grades defined as $c_0$ and $c_4$, respectively. Finally, Deep Quality combines the scores of each patch classifier using a linear classifier to estimate the global image quality of an input image. The network architecture of the Deep Quality deep convolutional neural network is shown in Fig.~\ref{fig2}. The three main components of Deep Quality are: local image patch pooling, the local image patch quality classifier, and the global image quality estimator. The goal of local image patch pooling is to select $l$ local image patches from $g$ images. To achieve this task, we first generate all possible patches using a sliding window approach, sort those patches based upon their variance, and then select the $l$ lowest-variance patches for local patch quality classification. The local image patch classifier is realized using 3 convolutional, 3 ReLU, 3 maxpool, and 2 fully-connected (fc) layers. The three convolutional layers use $5 \times 5$, $3 \times 3$, and $3 \times 3$ kernels with a single stride, respectively. The three maxpool layers use a window of size $[1, 2, 2, 1]$.
\section{Experimental setup and evaluations}
\subsection{Datasets}
The efficacy of the proposed Deep Quality system for no-reference IQA is investigated using the CSIQ image quality benchmark dataset~\cite{Larson}. The CSIQ dataset consists of 30 different natural reference images, with each reference image degraded using five different distortion types: i) JPEG compression, ii) JPEG-2000 compression, iii) global contrast decrements, iv) additive pink Gaussian noise, and v) Gaussian blurring. Some of these distorted images are shown in Fig.~\ref{fig1}. Each distortion type was applied at four to five different levels of distortion, resulting in 866 different distorted images. The CSIQ dataset~\cite{Larson} also provides a corresponding set of 5000 subjective ratings from 35 different observers, reported in the form of DMOS. For the proposed Deep Quality system, the DMOS scores are mapped into 5 different levels of image quality for evaluation purposes.
\subsection{Experimental Results and Discussions}
For learning local patch-level image quality, Deep Quality sampled $60,000$ image patches from the $866$ distorted input images using a variance pooling mechanism. The variance pooler uses an overlapping sliding window of size $64\times 64$ to generate a large set of local patches and selects $70$ low-variance local patches from each image, generating roughly $60,000$ image patches in total. These image patches are split into two sets: i) a training set of $50,000$ samples, and ii) a testing set of $10,000$ samples. For training, we use sparse cross-entropy with a logistic function as the data cost, and the $\ell_2$ norm of the weights of the two fc layers as the regularization cost. The network is trained for $100$ epochs. The patch-level training and testing accuracies after 100 epochs are \textbf{95.5\%} and \textbf{89\%}, respectively.
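As an illustration of the variance-based patch pooling described above, the following minimal C sketch (our own illustrative code with hypothetical names, not the authors' implementation) scores a single $64\times 64$ patch by its pixel variance; in the full pipeline such scores would be computed for every sliding-window position, the patches sorted by score, and the lowest-variance ones retained for the patch-quality classifier.
\begin{lstlisting}
/* Pixel variance of a PxP patch with top-left corner (x0,y0) in a  */
/* single-channel, row-major image with the given row stride.       */
double patch_variance(const unsigned char *img, int stride,
                      int x0, int y0, int P)
{
   int x, y, n = P*P;
   double sum = 0.0, sum2 = 0.0;

   for (y = y0; y < y0 + P; y++)
      for (x = x0; x < x0 + P; x++) {
         double v = (double)img[y*stride + x];
         sum  += v;
         sum2 += v*v;
      }
   return sum2/n - (sum/n)*(sum/n);   /* E[v^2] - E[v]^2 */
}
\end{lstlisting}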
Furthermore, when the score of each patch of an input image is combined with a linear classifier, Deep Quality's image-level accuracy exceeds \textbf{98\%}, which is comparable to state-of-the-art full-reference image quality algorithms. In our experiments, we trained and tested our model for each noise type separately as well as for all noise types combined. We observed that individual noise types can be handled with simpler networks, while training the system to handle the combined noise types required a more complex network. We also observed that region pooling has a high impact on image patch classification accuracy. Variance-based pooling worked best for additive pink Gaussian noise but did not work well for blur and JPEG-type distortions.
\section{Conclusion and Future Work}
In this paper, we have developed and implemented a deep no-reference image quality assessment system called Deep Quality. Preliminary results show that Deep Quality was able to achieve \textbf{89\%} patch-level and \textbf{98\%} image-level accuracy on the CSIQ~\cite{Larson} dataset. Our preliminary implementation of Deep Quality decoupled the terms of the Deep Quality estimator and optimized each step separately. Simultaneous optimization of these terms in a deep learning framework could provide a better quality estimator and is left as future work.
\section{Acknowledgment}
This work was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada, and in part by the Canada Research Chairs program.
\section{Introduction}
The QCD axion is probably the best solution to the strong CP problem of the Standard Model (SM). This puzzle stems from the non-observation of an electric dipole moment (EDM) for the neutron. In the SM, the current experimental bound requires a delicate cancellation of about one part in $10^{10}$~\cite{Abel:2020gbr} between the QCD and electroweak contributions. By contrast, the axion mechanism enforces this cancellation dynamically, and somewhat independently of the details of the axion model. All axion models are based on a spontaneously broken global $U(1)_{PQ}$ symmetry, the Peccei-Quinn (PQ) symmetry, and involve some colored chiral fermions charged under that symmetry~\cite{Peccei:1977hh,Peccei:1977ur}. This makes $U(1)_{PQ}$ anomalous, and ensures that the Goldstone boson~\cite{Weinberg:1977ma,Wilczek:1977pj} arising from its breaking, the so-called axion, is anomalously coupled to gluons. Through this coupling to gluons, non-perturbative QCD effects develop an effective potential for the axion field, such that the strong CP problem disappears precisely when the axion settles at the minimum of its potential. In the process, the axion acquires a mass typically well below the eV scale~\cite{Bardeen:1977bd,Kim:1986ax}. Both its mass and its couplings are governed by the single scale of the spontaneous symmetry breaking of $U(1)_{PQ}$, usually referred to as the decay constant $f_{a}$. Constraints from astrophysics and particle physics require this scale to be much larger than the electroweak scale. While axions are generically light, they are typically accompanied either by the heavier SM chiral fermions or by new heavy vector-like fermions. Even if these new fermions have masses beyond the direct collider reach, they would leave imprints in low-energy experiments in a way that is systematically captured by an Effective Field Theory (EFT), where only the axion is present and interacts via non-renormalizable operators with other light particles (usually from the SM). In order to extract from the current experimental bounds the full information on BSM axion physics, or to prepare the ground for a hypothetical discovery of an axion, we have to identify the relevant parametrisation that best characterizes the axion phenomenology. Ultimately, we could then test the relics of potential UV-complete realizations. Within axion EFTs, the higher-dimensional operators involving the axion and two massless SM gauge fields, photons and gluons, are noteworthy. Indeed, the QCD axion decays into these bosons are generically kinematically allowed, which is why current experimental searches focus on them. By contrast, the couplings of axions to the massive SM gauge fields, namely the $W^{\pm}$ and $Z$ bosons, have been in comparison much less investigated, since their astrophysical, cosmological and collider signatures seem, a priori, less obvious. Still, some interesting work has been performed regarding collider searches~\cite{Jaeckel:2015jla,Brivio:2017ije,Bauer:2017nlg,Alonso-Alvarez:2018irt,Bauer:2017ris,Bauer:2018uxu,Gavela:2019cmq}, as well as several flavour physics analyses~\cite{Izaguirre:2016dfi,Gavela:2019wzg}. In previous works, some of us have shown~\cite{Quevillon:2019zrd, Quevillon:2020hmx,Quevillon:2020aij} that axion models exhibit intrinsic ambiguities in their formulation and that this has a dramatic impact on the coupling of axions to massive gauge fields.
One of the main conclusions of Ref.~\cite{Quevillon:2019zrd} is the following: when axion models are specified in a representation in which the axion has only derivative couplings to SM chiral fermions, such as in DFSZ-like axion models~\cite{Dine:1981rt,Zhitnitsky:1980tq}~\footnote{KSVZ-like models~\cite{Kim:1979if,Shifman:1979if} involve vector-like fermions, whose masses are decoupled from the spontaneous electroweak symmetry breaking, and the discussion is much simpler.}, some chiral reparametrizations of the fermionic fields are implicit and lie at the root of the so-called anomalous axion couplings to gauge field strengths. For vector gauge interactions, it is well-known that derivative couplings to fermions decouple faster than local anomalous operators, which thus capture the whole axion coupling to gauge bosons. By contrast, for chiral gauge interactions, derivative interactions do not systematically decouple, ultimately because the gauge symmetry is necessarily spontaneously broken when the chiral fermions get their masses. Importantly, non-decoupling contributions from derivative interactions can arise from the usual axial coupling to fermions, but also from the vector one. Both can be anomalous in the presence of chiral gauge interactions. In practice, keeping track of these non-decoupling effects is crucial to get consistent, parametrization-independent couplings of the axion to gauge bosons. Only with them can one match the results obtained using a linear representation of the complex Peccei-Quinn scalar field, in which the axion has pseudoscalar couplings to chiral fermions, and no anomaly-related ambiguities ever arise. On a more technical side, these results were derived by computing anomalous triangle diagrams regularized using Weinberg's method \cite{Weinberg:1996kr,Bilal:2008qx}, which allows one to parametrize the initial ambiguity inherent to the momentum routing in the amplitude integrals. This rigorous treatment allows one to obtain generalised Ward identities in which one can choose which current is anomalous and which is not, something that is physically mandatory and not guaranteed in a more naive computation. Ref.~\cite{Bonnefoy:2020gyh} reaches similar conclusions from a more anomaly-matching, EFT-oriented point of view, which brings interesting insights into axion couplings to chiral gauge fields. In the present paper, our goal is not only to add to the understanding and construction of low-energy axion EFTs, but also, more generally, to that of the possible interplays or entanglements between spontaneous and anomalous symmetry breaking that can arise when chiral fermions are integrated out. Further, our goal is to perform this analysis exclusively in a functional context, by building the low-energy EFT through a step-by-step integration of the chiral fermion fields, without recourse to triangle Feynman diagrams or Ward identities, and by taking advantage of the elegant and convenient techniques developed recently to integrate out heavy fermionic fields \cite{Henning:2014wua,Drozd:2015rsp,Fuentes-Martin:2016uol,Zhang:2016pja,Ellis:2016enq,Henning:2016lyp,Ellis:2017jns,Kramer:2019fwz, Ellis:2020ivx,Angelescu:2020yzf,Cohen:2020fcu}. The only ingredients will thus be dimensionally-regulated functional traces, and the order-by-order invariance of the EFT operators under gauge transformations, when the appropriate would-be-Goldstone bosons are accounted for.
Ultimately, the same non-decoupling of derivative interactions will be observed, in the sense that the EFT built from them will start with dimension-five operators. The plan of the manuscript is the following. In section \ref{toymodel}, we integrate out a chiral fermion from a toy model to obtain a gauge and Goldstone boson EFT. This section mainly concentrates on the physical interpretation, so that the reader can understand the logic behind the construction without entering into the details of the calculation. In section \ref{UOLEA}, we detail how to evaluate the one-loop effective action in the path integral functional approach, and how we deal with the ambiguities originating from the QFT anomalies. The main outcome of this section is Eq.~\eqref{masterf}, a master formula which can be easily used to obtain effective couplings between gauge fields and Goldstone bosons, and which encapsulates the subtleties occurring when dealing with anomalous global symmetries in a chiral gauge theory. In section \ref{section: application}, we apply this master formula to various models, starting with a simple chiral toy model with an additional global $U(1)$ symmetry. We then explicitly apply our results to axion models and recover, for instance, the non-intuitive axion couplings involving massive gauge fields in DFSZ-like models. We conclude in section \ref{Ccl}, while additional computational details regarding master integrals can be found in appendix \ref{Appendix:master_integrals}.
\section{EFTs with spontaneously and anomalously broken symmetries}
\label{toymodel}
Our goal is to integrate out fermions that can be charged under both global and local symmetries. Further, those fermions will not be assumed vector-like: their left- and right-handed components need not have the same charges under these symmetries. This generates two complications. First, obviously, such fermions can only acquire a mass, and thus be integrated out, when the chiral part of these symmetries is spontaneously broken. Second, the classical symmetries cannot all survive quantization, and there must be some anomalies. These two effects are entangled, and further, they induce some freedom in how the fermionic part of the UV Lagrangian is to be parametrized. So, before any attempt at integrating out the fermions, it is necessary to fix this freedom. As we will discuss in this section, from a functional point of view, one parametrization emerges as the most natural, but it requires a specific treatment of anomalous effects and derivative interactions.
\subsection{EFTs and gauge invariance}
We start from a generic UV Lagrangian exhibiting some set of local symmetries and involving fermionic degrees of freedom. Typically, the fermionic part of the Lagrangian is of the form, including for simplicity only one axial and one vector gauge field,
\begin{align}
\mathcal{L}_{\rm UV}^{\rm fermion} &= \bar{\Psi} \big( i\partial_{\mu}\gamma^{\mu} + g_{_V}V_{\mu}\gamma^{\mu} - g_{_A}A_{\mu}\gamma^{\mu}\gamma^5 \big) \Psi \, .
\label{Lagrangian: UV toy model}
\end{align}
Let us first consider abelian gauge symmetries for simplicity~\footnote{We will discuss later the peculiarities arising in the non-abelian case.}.
At the classical level, this theory is invariant under $U(1)_V$ and $U(1)_A$ gauge transformations, which we define as:
\begin{align}
U(1)_V & : ~ V_{\mu} \rightarrow V_{\mu} + \dfrac{1}{g_{_V}} \partial_{\mu}\theta_{_V} ~ , ~ \Psi \rightarrow \exp( i\theta_{_V}) \Psi \, , \\
U(1)_A & : ~ A_{\mu} \rightarrow A_{\mu} + \dfrac{1}{g_{_A}} \partial_{\mu}\theta_{_A} ~ , ~ \Psi \rightarrow \exp(- i\theta_{_A}\gamma^5 ) \Psi \, .
\label{GaugeTF1}
\end{align}
Our goal is to integrate out the fermion to get the tower of effective interactions by performing an inverse mass expansion~\footnote{More precisely, we will use convenient Covariant Derivative Expansion (CDE) techniques.}. This obviously means that the fermion to be integrated out should be massive, which forces the axial gauge symmetry to be spontaneously broken. To be able to consistently account for this, let us include the complex scalar field $\phi_A$ which, by acquiring a vacuum expectation value $v$, will spontaneously break the axial gauge symmetry,
\begin{align}
\mathcal{L}_{\rm UV}^{\rm fermion} = \bar{\Psi} \big( i\partial_{\mu}\gamma^{\mu} + g_{_V}V_{\mu}\gamma^{\mu} - g_{_A}A_{\mu}\gamma^{\mu}\gamma^5 \big) \Psi -y_{\Psi}\big(\bar{\Psi}_{_L}\phi_A\Psi_{_R} + \text{h.c.} \big) \, ,
\label{LUVYukawa}
\end{align}
with $y_{\Psi}$ the Yukawa coupling, and where the two Weyl components are $\Psi_{R,L} = P_{R,L} \, \Psi$ with $P_{R,L} = ( 1 \pm \gamma^5 )/2$. If one wants to focus on manifest gauge invariance, it is convenient to include the Goldstone boson, $\pi_A$, and adopt an exponential or polar representation for the complex scalar field,
\begin{align}
\phi_{A} = \frac{1}{\sqrt{2}}(v+\sigma_{A}) \exp\bigg[ i\, \dfrac{\pi_A(x)}{v} \bigg] \, .
\label{NLgold}
\end{align}
Indeed, thanks to the exponential parametrization of the Goldstone boson, this theory is still manifestly gauge invariant provided, together with the transformation of Eq.~(\ref{GaugeTF1}),
\begin{align}
\pi_A \rightarrow \pi_A + 2 v\theta_{_A} \, ,
\label{GaugeTF2}
\end{align}
while $\sigma_{A}$ is gauge invariant and plays no rôle in that regard. Said differently, with this representation, it is sufficient to keep only the gauge bosons and the Goldstone fields explicitly to construct the EFT, which will involve only these fields in a gauge invariant way. By contrast, if one insists on manifest renormalizability, the Goldstone boson has to enter linearly, that is, by writing the scalar field acquiring a vacuum expectation value $v$ as linear in all its components,
\begin{align}
\phi_A=\frac{1}{\sqrt{2}} (v+\sigma_{A}+i\pi_{A}) \, .
\label{Lgold}
\end{align}
In this representation, $\sigma_{A}$ is no longer gauge invariant since a $U(1)_{A}$ gauge transformation is nothing but an $SO(2)$ rotation of the $(v+\sigma_{A},\pi_{A})$ vector. If one insists on manifest gauge invariance, the difficulty then is that $\sigma_{A}$ should explicitly appear in $\mathcal{L}_{\rm UV}^{\rm fermion}$. Even if, in the abelian case, this is quite simple, it would introduce unnecessary model-dependence in the non-abelian case.
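Explicitly, in this linear representation, the $U(1)_A$ transformation $\phi_A \rightarrow \exp(2i\theta_{_A})\,\phi_A$ implied by Eqs.~(\ref{GaugeTF1}) and (\ref{GaugeTF2}) acts as
\begin{align}
\begin{pmatrix} v+\sigma_{A}\\ \pi_{A}\end{pmatrix} \rightarrow
\begin{pmatrix} \cos 2\theta_{_A} & -\sin 2\theta_{_A}\\ \sin 2\theta_{_A} & \cos 2\theta_{_A}\end{pmatrix}
\begin{pmatrix} v+\sigma_{A}\\ \pi_{A}\end{pmatrix} \, ,
\end{align}
which at lowest order in the fields reproduces the shift of Eq.~(\ref{GaugeTF2}), $\pi_{A}\rightarrow\pi_{A}+2v\theta_{_A}$, but unavoidably mixes $\sigma_{A}$ and $\pi_{A}$ at finite $\theta_{_A}$.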
So, at the end of the day, in order to be able to consistently account for the spontaneous breaking of the axial gauge symmetry, it is legitimate to consider the exponential representation of the Goldstone boson,
\begin{align}
\mathcal{L}_{\rm UV}^{\rm fermion} = \bar{\Psi} \bigg( i\partial_{\mu}\gamma^{\mu} + g_{_V}V_{\mu}\gamma^{\mu} - g_{_A}A_{\mu}\gamma^{\mu}\gamma^5 - M \exp\bigg[ i\,\frac{\pi_A}{v}\gamma^5\bigg] \bigg) \Psi \, ,
\label{LUVPseudo}
\end{align}
where $M \equiv y_{\Psi}v/\sqrt{2} $ stands for the mass of the fermion. Yet, at this stage, a Taylor expansion\footnote{For the purpose of evaluating the one-loop effective action using the Covariant Derivative Expansion (CDE), truncating this expansion is perfectly consistent since operators at most linear in a given Goldstone boson will be considered. Issues related to the apparent non-renormalizability of the exponential parametrization will not affect our developments.} produces the pseudoscalar $\pi_{A}\bar{\Psi}\gamma^5\Psi$ coupling, which is the same as it would be in the linear representation of $\phi_A$. So, the distinction between the linear and polar representations may appear quite academic. Yet, the exponential parametrization offers an alternative route. Instead of a Taylor expansion, there is a well-known exact procedure to recover a linearized Lagrangian, which allows one to transfer the Goldstone dependence from the Yukawa sector to the gauge sector. Based on the chiral rotation given by Eq.~\eqref{GaugeTF1}, it suffices to perform a field-dependent reparametrization of the fermion fields,
\begin{align}
\Psi \rightarrow \exp\bigg[-i\dfrac{\pi_{A}(x)}{2 v}\gamma^5 \bigg] \Psi \, ,
\label{FermionRepar}
\end{align}
and the Lagrangian in Eq.~\eqref{LUVPseudo} becomes
\begin{align}
\mathcal{L}_{\rm UV}^{\rm fermion} &= \bar{\Psi}\bigg( i\partial_{\mu}\gamma^{\mu} -M + g_{_V}V_{\mu}\gamma^{\mu} - \bigg[g_{_A} A_{\mu} - \dfrac{\partial_{\mu}\pi_A(x)}{2 v} \bigg]\gamma^{\mu}\gamma^5 \bigg)\Psi \, .
\label{LUVDer}
\end{align}
In this form, $\Psi$ is invariant under the axial gauge transformation $U(1)_A$, so the mass term does not cause any trouble even for a chiral gauge symmetry, and it could easily be factored out for an EFT mass expansion. The quadratic operator defined in Eq.~\eqref{LUVDer} has the virtue of being manifestly gauge invariant. The Goldstone boson itself ensures that the theory stays invariant when $A_{\mu}\rightarrow A_{\mu} + \frac{1}{g_{_A}}\partial_{\mu}\theta_{_A}$ thanks to $\pi_{A}\rightarrow\pi_{A} + 2v\theta_{_A}$. Evidently, for that to work, one should not get rid of the Goldstone bosons by moving to the unitary gauge. However, as a side effect, the theory is still not manifestly renormalizable since the $\partial_{\mu}\pi_A(x)$ operator is of dimension five. Yet, this form looks particularly well suited for an inverse mass expansion since $M\sim v$. Let us stress, though, that one should not be tempted to conclude that the $\partial_{\mu}\pi_A(x)$ operator is subleading and can be neglected. Such considerations can only be made consistently after the fermion field has been integrated out, and as we will see in detail in the following, this operator does contribute in general to the leading terms in the EFT.
\subsection{EFTs and anomalies}
The Lagrangian in Eq.~\eqref{LUVDer} looks promising, but to reach it, we had to reparametrize the fermion field, Eq.~(\ref{FermionRepar}), and there is one crucially important caveat to this step.
The fermion being chiral, this reparametrization does not leave the fermionic path integral measure invariant. In general, given that $\Psi$ is coupled to gauge fields, the Jacobian, obtained using the singlet anomaly result for chiral fermions~\cite{Bilal:2008qx,Bertlmann:1996xk}, gives rise to additional terms in the Lagrangian of the form
\begin{align}
\mathcal{L}_{\rm UV} \supset \mathcal{L}_{\rm UV}^{\rm Jac} = \dfrac{1}{8\pi^{2}} \dfrac{\pi_{A}}{2v} \bigg[g_{_V}^2 F_{_V,\mu\nu} \tilde{F}_{_V}^{\mu\nu} + \dfrac{1}{3} g_{_A}^2 F_{_A,\mu\nu} \tilde{F}_{_A}^{\mu\nu} \bigg] \, ,
\label{LUVJac}
\end{align}
with $F_X^{\mu\nu}=\partial^{\mu}X^{\nu}-\partial^{\nu}X^{\mu}$ the usual field strength tensor of the generic gauge field $X$, and $\tilde{F}_X^{\mu\nu}=(1/2)\epsilon^{\mu\nu\rho\sigma}F_{X,\rho\sigma}$ its dual, the suffix indicating whether this applies to the vector gauge field ($V$) or the axial one ($A$). These terms explicitly break gauge invariance, since they get shifted under $\pi_{A}\rightarrow\pi_{A} + 2v\theta_{_A}$. There are two main ways to deal with the anomalous contributions shown in Eq.~\eqref{LUVJac}. If one wants the interactions to remain gauged, a first possibility consists in tuning the chiral fermionic content such that the total contribution to the anomaly vanishes (as it happens in the SM). The second possibility is to give up gauge invariance and reconsider the local symmetry as a global symmetry. We now clarify these two cases:
\begin{itemize}
\item For gauge interactions that are meant to exist at the quantum level (and thus must not be anomalous), the fermionic content is supposed to be just right so that all the Jacobian terms sum up to zero. As is well-known, this is the prototype of the gauge interactions in the SM, where gauge anomalies cancel out only when all matter fields are summed over. The important point is that the corresponding Goldstone fields are allowed to be moved to and from the mass terms without generating a Jacobian contribution. Indeed, the reparametrization in Eq.~(\ref{FermionRepar}) must not generate Jacobian terms since it acts on the fermions exactly like a gauge transformation, see Eq.~(\ref{GaugeTF1}). In this context, the strict equivalence between the $\bar{\Psi}\big(\partial_{\mu}\pi_{A}\gamma^{\mu}\gamma^{5}\big)\Psi$ and $\bar{\Psi}\big(M\gamma_{5}\pi_{A}/v\big)\Psi$ couplings can be viewed as the transcription of the non-anomalous Ward identity $\partial_{\mu}A^{\mu}=2iMP$, with $A^{\mu}=\bar{\Psi}\gamma^{\mu}\gamma^{5}\Psi$ and $P=\bar{\Psi}\gamma^{5}\Psi$. Indeed, the divergence of any correlation function of the axial gauge current, $\bra{0} A^{\mu}{...} \ket{0}$, can be associated with that of $\partial_{\mu}\pi_{A}$ from Eq.~(\ref{LUVDer}), which can then be equivalently calculated from Eq.~(\ref{LUVPseudo}) (after Taylor expanding the exponential term). Regarding the vector gauge interactions, the situation is simpler since the mass term is gauge invariant. Imposing $V_{\mu}\rightarrow V_{\mu}+\frac{1}{g_{_V}}\partial_{\mu}\theta_{_V}$ requires the $\partial_{\mu}\theta_{_V}$ piece to cancel out, i.e. any correlation function of the vector gauge current $V^{\mu}=\bar{\Psi}\gamma^{\mu}\Psi$ satisfies the non-anomalous Ward identity $\partial_{\mu}V^{\mu} = 0$.
\item Some of the gauge interactions may simply be absent if their symmetry is kept global.
In that case, one can simply remove the corresponding $A_{\mu}$ from the Lagrangian, but keep the Goldstone bosons since they become independent physical degrees of freedom. These global symmetries may or may not have anomalies, but whenever they do, one should keep track of the Jacobian when passing from pseudoscalar to derivative Goldstone boson couplings to fermions. As explained in Refs.~\cite{Quevillon:2019zrd, Quevillon:2020hmx}, one must obtain the same results using either the Lagrangian with pseudoscalar couplings (after Taylor expanding the exponential term in Eq.~\eqref{LUVPseudo}), or that with derivative couplings, Eq.~(\ref{LUVDer}), provided the local anomalous terms, Eq.~(\ref{LUVJac}), are then also included. Indeed, the point is that derivative couplings do induce anomalous effects that precisely cancel those in the local terms of Eq.~(\ref{LUVJac}). In the inverse mass expansion context, this shows that one must be careful not to take the limit $M\rightarrow\infty$ too soon, that is, not to discard the derivative interaction in Eq.~(\ref{LUVDer}) on the basis of its relative $1/M^{2}$ suppression with respect to the fermion mass term, because it does provide terms of the same order in $M$ as those in Eq.~(\ref{LUVJac}).
\end{itemize}
\subsection{EFTs with local and global symmetries}
The goal in the present paper is to consider scenarios combining both situations we have discussed so far, that is, with spontaneously broken gauge symmetries and anomalous global symmetries. Generically, our theory of interest corresponds to
\begin{align}
\mathcal{L}_{\mathrm{UV}} &\supset \mathcal{L}_{\rm UV}^{\rm fermion} + \mathcal{L}_{\rm UV}^{\rm Jac}\, ,
\label{UVtot}
\end{align}
with
\begin{align}
\mathcal{L}_{\rm UV}^{\rm fermion} = \bar{\Psi} \bigg[ i\partial_{\mu}\gamma^{\mu } - M & + \bigg( V_{\mu}-\dfrac{\partial_{\mu}\pi_V}{2v_V}\bigg)\gamma^{\mu} - \bigg( A_{\mu }-\dfrac{\partial_{\mu}\pi_{A}}{2v_A}\bigg)\gamma^{\mu}\gamma^{5} \nonumber \\
& - \bigg(0-\dfrac{\partial_{\mu}\pi_S}{2v_S}\bigg)\gamma^{\mu} - \bigg( 0 -\dfrac{\partial_{\mu}\pi_{P}}{2v_P}\bigg)\gamma^{\mu}\gamma^{5} \bigg]\Psi \, ,
\label{UVfermtot}
\end{align}
and, for the Jacobian, using the singlet anomaly result for chiral fermions~\cite{Bilal:2008qx,Bertlmann:1996xk} and noting that $\Psi_{L/R}$ couples to $V^\mu \pm A^\mu$ and $\partial_{\mu}(\pi_S \pm \pi_P)$,
\begin{align}
\mathcal{L}_{\rm UV}^{\rm Jac} & = \dfrac{1}{16\pi^{2}}\dfrac{\pi_{P}}{2v_{P}}\bigg[ \big(F_{_V}+F_{_A}\big)_{\mu\nu}\big(\tilde{F}_{_V}+\tilde{F}_{_A}\big)^{\mu\nu} +\big(F_{_V}-F_{_A}\big)_{\mu\nu}\big(\tilde{F}_{_V}-\tilde{F}_{_A}\big)^{\mu\nu}\bigg] \nonumber \\
& \quad + \dfrac{1}{16\pi^{2}}\dfrac{\pi_{S}}{2v_{S}}\bigg[ \big(F_{_V}+F_{_A}\big)_{\mu\nu}\big(\tilde{F}_{_V}+\tilde{F}_{_A}\big)^{\mu\nu} - \big(F_{_V}-F_{_A}\big)_{\mu\nu}\big(\tilde{F}_{_V}-\tilde{F}_{_A}\big)^{\mu\nu}\bigg] \nonumber \\
& = \dfrac{1}{8\pi^{2}}\dfrac{\pi_{P}}{2v_P}\left( F_{_{V},\mu\nu} \tilde{F}^{\mu\nu}_{_V} + F_{_{A},\mu\nu}\tilde{F}_{_A}^{\mu\nu}\right) +\dfrac{1}{4\pi^{2}}\dfrac{\pi_{S}}{2v_{S}}F_{_{A},\mu\nu}\tilde{F}_{_V}^{\mu\nu} \, ,
\label{UVJactot}
\end{align}
where $\pi_{A}$ and $\pi_{V}$ have no contact interactions with the field strength tensors since these gauge interactions are assumed anomaly-free. To insist on the fact that $\pi_{S,P}$ are Goldstone bosons associated to global symmetries, we have explicitly set their respective would-be gauge fields to $0$ in Eq.~\eqref{UVfermtot}.
In this expression and throughout the rest of this section, we have set all the couplings to one to unclutter the derivation, but they can be straightforwardly reintroduced, as we will do in the following sections. This parametrization of the fermion sector of the UV theory deserves several important comments: \begin{itemize} \item The UV theory necessarily involves several complex scalar fields, several species of fermions to cancel the gauge anomalies, along with some set of scalar and fermion couplings ensuring the existence of the gauge and global symmetries at the Lagrangian level. Further, as will be detailed in Sec.~4, the pseudoscalar components of these scalar fields in general mix, with some combinations eaten by the gauge fields, and some left over as true physical degrees of freedom. With the above parametrization, we single out one of these fermions, and all the other UV features are encoded into the parameters $v_{S,P}$, $v_{A,V}$, which in general involves vacuum expectation values and some mixing angles, and in the fermion mass term $M$, which in general arises from several Yukawa couplings. \item Adopting a non-linear representation for the scalar fields, with its associated loss of renormalisability, is inevitable if one wishes to leave the details of the whole scalar sector unspecified and start at the UV scale with an effective theory involving only the Goldstone bosons. Indeed, those have to be constrained to live on the specific coset space corresponding to the assumed symmetry breaking pattern. Note that for an abelian global symmetries, the dynamics of the Goldstone bosons is particularly simple, as there are no contact interactions among them, and all that remains is the shift symmetry. \item One of the main goal of this work is to build an EFT by integrating out chiral fermions. As we have discussed, it is then convenient to reparametrize the fermion fields, so that the Goldstone boson couplings to fermions involve local partial derivative. This first ensures the gauge and shift symmetries are manifest, but it also makes the fermion mass term invariant under all symmetries. Though not compulsory, it then allows to construct the EFT by factoring the mass term out in a symmetry preserving way. \item For the abelian toy model described here, the Goldstone bosons involved in vector currents, $\pi_S$ and $\pi_V$, actually play no role. Indeed, for the vector gauge interaction, the $\partial_{\mu}\pi_{V}\bar{\Psi}\gamma^{\mu}\Psi$ interaction can always be eliminated by a non-anomalous reparametrization $\Psi\rightarrow\exp\big(i\frac{\pi_{V}}{2v_{V}}\big)\Psi$, which leaves the fermion mass term invariant. Whether it is spontaneously broken or not is thus irrelevant. For the scalar $\pi_S$ Goldstone boson associated to a global symmetry, the reparametrization $\Psi\rightarrow\exp\big(i\frac{\pi_{S}}{2v_{S}}\big)\Psi$ not only removes the $\partial_{\mu}\pi_{S}\bar{\Psi}\gamma^{\mu}\Psi$ interaction, but being anomalous, it induces a Jacobian that precisely kills the $\pi_S$ terms in Eq.~(\ref{UVJactot}). The field $\pi_S$ thus disappears entirely from the theory. These two facts are truly peculiar to the abelian gauge symmetry case, with the fermion in a one-dimensional representation. 
So, to set up the formalism to deal with more general theories, like the SM, we keep these fields explicitly in the UV parametrization of the fermion couplings.\footnote{Further, integrating out the fermion starting from Eq.~(\ref{UVfermtot}), to verify that the $\pi_S$ derivative interaction indeed induces EFT operators that precisely cancel the Jacobian term in Eq.~(\ref{UVJactot}) provides a non-trivial check for our calculation, see Sec.~4.1.} \end{itemize} So, let us proceed and integrate out the fermion field involving local partial derivatives in its quadratic operator. Details of the calculation will be presented in the following section, but let us already discuss some interesting generic features. If one decides to use Feynman diagrams to integrate out fermions, one will have to deal with divergent triangle amplitudes that one will have to carefully regularise. Even if this is a standard manipulation in QFT, the potential spread of the anomaly has to be considered with great care as discussed in Refs.~\cite{Quevillon:2019zrd, Quevillon:2020hmx}. In the functional approach, that we will follow all along this work, the fact that the axial vector or vector couplings are anomalous manifests itself by the presence of ambiguities in the functional trace~\footnote{More precisely the ambiguity is localised in the Dirac matrices trace if one chooses to use dimensional regularisation, as we will do.}. This means that starting from Eq.~\eqref{UVfermtot}, the fermion-less EFT expansion will start with six dimension-five operators involving the Goldstone bosons $\pi_P$ and $\pi_S$~\footnote{A priori the only non vanishing dimension five operators have to involve Dirac traces with only one $\gamma^{5}$ matrix or with three $\gamma^{5}$ matrices.}, \begin{align} \mathcal{L}_{\rm UV}^{\rm fermion} \rightarrow \mathcal{L}_{\rm EFT}^{\rm 1loop} & = \omega_{_{AVV}}\dfrac{\partial_{\mu}\pi_{P}}{2v_{P}}V_{\nu}\tilde{F}_{_V}^{\mu\nu}+\omega_{_{AAA}}\dfrac{\partial_{\mu}\pi_{P}}{2v_{P}}\left( A_{\nu}-\dfrac{\partial_{\nu}\pi_{A}}{2v_{A}}\right) \tilde{F}_{_A}^{\mu\nu} \nonumber\\ & +\omega_{_{VVA}}\dfrac{\partial_{\mu}\pi_{S}}{2v_{S}}V_{\nu}\tilde{F}_{_A}^{\mu\nu}+\omega_{_{VAV}}\dfrac{\partial_{\mu}\pi_{S}}{2v_{S}}\left( A_{\nu}-\dfrac{\partial_{\nu}\pi_{A}}{2v_{A}}\right) \tilde{F}_{_V}^{\mu\nu}\ . \label{initial4op} \end{align} The evaluation of the $\omega_i$ coefficients involves divergent integrals and after their regularisation, those parameters end up fully ambiguous. We thus need to find a strategy to fix them. Actually, these ambiguities are the exact analog of those arising for the triangle diagrams, whose expressions are ambiguous since they depend on the routing of the momenta when working at the Feynman diagram level. In that case, the ambiguities are removed by imposing the appropriate Ward identities, that is, gauge invariance. So, we would like to do the same here, and impose the vector and axial gauge invariance. However, all the operators in Eq.~\eqref{initial4op} are already gauge invariant! Actually, the would-be-Goldstone bosons $\pi_{A}$ are not even needed to ensure the gauge invariance, and they never contribute to S-matrix elements. 
The reason is that their contributions, or the $\theta_{_{V,A}}$ terms arising when $V_{\mu}\rightarrow V_{\mu}+\partial_{\mu}\theta_{_V}$ or $A_{\mu}\rightarrow A_{\mu}+\partial_{\mu}\theta_{_A}$, drop out by integration by parts~\footnote{One should note that integration by parts can be performed without any hesitation since the fermion has been formally integrated out.} thanks to the antisymmetry of $\tilde{F}_{_{V,A}}^{\mu\nu}$ and the Bianchi identity. In the initial Lagrangian of Eq.~\eqref{UVtot} we decided to treat both the would-be-Goldstone bosons ($\pi_V$ and $\pi_A$) and the Goldstone bosons ($\pi_S$ and $\pi_P$) on an equal footing, by writing them with local derivatives acting on them. Since this increases the degree of divergence of the integrals, one may then be tempted, in order to minimise the number of integrals to regularise, to move the would-be-Goldstone bosons back to the mass term (recall that this can be done trivially since the gauge symmetries are assumed not to be anomalous). Then, after Taylor expanding the mass term, one obtains
\begin{align}
\mathcal{L}_{\rm UV}^{\rm fermion} = \bar{\Psi} \bigg[ i\partial_{\mu}\gamma^{\mu } - M\bigg( 1 +\dfrac{\pi_{A}}{v_A}\,i\gamma^{5} \bigg)+ V_{\mu}\gamma^{\mu} - A_{\mu}\gamma^{\mu}\gamma^{5} + \dfrac{\partial_{\mu}\pi_S}{2v_S} \gamma^{\mu} + \dfrac{\partial_{\mu}\pi_{P}}{2v_P}\gamma^{\mu}\gamma^{5} \bigg]\Psi \, .
\label{UVtot2}
\end{align}
Since by construction the $U(1)_V$ and $U(1)_A$ symmetries are gauged, the would-be-Goldstone bosons, $\pi_V$ and $\pi_A$, are not involved in bosonic operators up to dimension five, starting from Eq.~\eqref{UVtot2}. This means that the fermion-less EFT expansion will start again with four dimension-five operators involving only the Goldstone bosons $\pi_S$ and $\pi_P$,
\begin{align}
\mathcal{L}_{\rm UV}^{\rm fermion} \rightarrow \mathcal{L}_{\rm EFT} & = \omega_{_{AVV}}\dfrac{\partial_{\mu}\pi_{P}}{2v_{P}}V_{\nu}\tilde{F}_{_V}^{\mu\nu} +\omega_{_{AAA}}\dfrac{\partial_{\mu}\pi_{P}}{2v_{P}} A_{\nu} \tilde{F}_{_A}^{\mu\nu} \nonumber \\
& + \omega_{_{VVA}}\dfrac{\partial_{\mu}\pi_{S}}{2v_{S}}V_{\nu}\tilde{F}_{_A}^{\mu\nu} + \omega_{_{VAV}}\dfrac{\partial_{\mu}\pi_{S}}{2v_{S}} A_{\nu}\tilde{F}_{_V}^{\mu\nu} \ .
\end{align}
Since $\pi_A$ drops out of Eq.~\eqref{initial4op} under integration by parts, we recover exactly the same effective interactions. Moving the would-be-Goldstone bosons to the fermion mass term, that is, making it gauge-dependent, does not help to fix the $\omega_{i}$ coefficients because gauge invariance is still automatic for the leading dimension-five operators. The only way forward is to perturb the theory so as to break this automatic gauge invariance, so that non-trivial constraints on the $\omega_{i}$ coefficients can emerge. One possibility is to associate to the Goldstone bosons, $\pi_S$ and $\pi_P$, auxiliary gauge fields $S_{\mu}$ and $P_{\mu}$, respectively, as we will now discuss.
\subsection{Remove ambiguities with artificial gauging}
One way to fix $\omega_{_{AVV}}$, $\omega_{_{VVA}}$, $\omega_{_{VAV}}$ and $\omega_{_{AAA}}$ using the constraint of gauge invariance is to introduce fictitious~\footnote{At the end of the day, we will still want the global symmetry to stay global, and we will set these fictitious vector fields to zero.} vector and axial-vector gauge fields associated to the $\pi_{S}$ and $\pi_{P}$ Goldstone bosons.
These fictitious gauge fields then enter in the effective operators of Eq.~\eqref{initial4op}, and prevent gauge invariance from being automatic under partial integration. They also prevent the contributions involving the would-be-Goldstone bosons of the true symmetries to vanish. This trick, introduced in Ref.~\cite{Bonnefoy:2020gyh}, is the key to derive non-trivial constraints and fix the ambiguous coefficients. One may be a bit uneasy about this gauging of the global symmetries since these are precisely the symmetries that are anomalous. Actually, in the following, we will never need to use the fictitious gauge invariance in any form. All that matters is that these fictitious gauge fields act as background fields for $\partial_{\mu}\pi_{S}$ and $\partial_{\mu}\pi_{P}$, so as to upset the automatic (true) gauge invariances. This is sufficient to derive non-trivial constraints from the true, non-anomalous gauge symmetries. Yet, as advocated in Ref.~\cite{Bonnefoy:2020gyh}, it can also be technically interesting to view these background fields as fictitious gauge fields, because then all the symmetries are treated on the same footing. As we will detail in section~\ref{UOLEA}, the calculation of the EFT becomes fully generic. The nice feature is that under this form, one can decide only at the very end which of the gauge symmetries is to be anomalous, hence fictitious, by imposing the exact invariance of the EFT under the \textit{other} gauge symmetries, those that are kept active. To illustrate all that, let us thus rewrite our initial Lagrangian as \begin{align} \mathcal{L}_{\rm UV, I}^{\rm fermion} = \bar{\Psi} \bigg[ i\partial_{\mu}\gamma^{\mu } - M & +\bigg( V_{\mu}-\dfrac{\partial_{\mu}\pi_V}{2v_V}\bigg)\gamma^{\mu} - \bigg( A_{\mu }-\dfrac{\partial_{\mu}\pi_{A}}{2v_A}\bigg)\gamma^{\mu}\gamma^{5} \nonumber \\ & + \bigg(S_{\mu}-\dfrac{\partial_{\mu}\pi_S}{2v_S}\bigg)\gamma^{\mu} + \bigg( P_{\mu} -\dfrac{\partial_{\mu}\pi_{P}}{2v_P}\bigg)\gamma^{\mu}\gamma^{5} \bigg]\Psi \, . \label{UVtotTrick1} \end{align} In this Lagrangian, the $\partial_{\mu}\pi_{V}$ piece is irrelevant, since it can be eliminated by an innocuous reparametrization, but let us keep it anyway for now. Integrating out the fermion leads to the EFT : \begin{align} \mathcal{L}_{\rm EFT, I}^{\rm 1loop} & = \omega_{_{VVA}}\left( S_{\mu}-\dfrac{\partial_{\mu}\pi_{S}}{2v_{S}}\right) \left(V_{\nu} -\dfrac{\partial_{\nu}\pi_{V}}{2v_{V}}\right) \tilde{F}_{_A}^{\mu\nu} + \omega_{_{AVV}}\left( P_{\mu} - \dfrac{\partial_{\mu}\pi_{P}}{2v_{P}}\right) \left(V_{\nu} -\dfrac{\partial_{\nu}\pi_{V}}{2v_{V} }\right)\tilde{F}_{_V}^{\mu\nu} \nonumber\\ & + \omega_{_{VAV}}\left( S_{\mu}-\dfrac{\partial_{\mu}\pi_{S}}{2v_{S}}\right) \left(A_{\nu} -\dfrac{\partial_{\nu}\pi_{A}}{2v_{A}}\right) \tilde{F}_{_V}^{\mu\nu} +\omega_{_{AAA}}\left( P_{\mu}-\dfrac{\partial_{\mu}\pi_{P}}{2v_{P}}\right) \left(A_{\nu} -\dfrac{\partial_{\nu}\pi_{A}}{2v_{A} }\right) \tilde{F}_{_A}^{\mu\nu} \, , \label{EfftotTrickk1} \end{align} with again the ambiguous coefficients $\omega_{_{AVV}}$, $\omega_{_{VVA}}$, $\omega_{_{VAV}}$ and $\omega_{_{AAA}}$ (the details of the calculation will be presented in the next section). All these interactions are still automatically gauge invariant thanks to the presence of the would-be-Goldstone bosons. Now, the key is to remember that the true gauge interactions are anomaly-free by assumption. 
This means that $\pi_A$ can be freely moved to the mass term by a reparametrization of the fermion field, without Jacobian, and, as noted above, the $\partial_{\mu}\pi_{V}$ term can be discarded, again without Jacobian. Thus, the UV Lagrangian can equivalently be written as
\begin{align}
\mathcal{L}_{\rm UV, II}^{\rm fermion} = \bar{\Psi} \bigg[ i\partial_{\mu}\gamma^{\mu } & - M\bigg( 1 + \dfrac{\pi_{A}}{v_A}\,i\gamma^{5} \bigg)+ V_{\mu}\gamma^{\mu} - A_{\mu}\gamma^{\mu}\gamma^{5} \nonumber \\
& + \bigg(S_{\mu}-\dfrac{\partial_{\mu}\pi_S}{2v_S}\bigg)\gamma^{\mu} + \bigg( P_{\mu} -\dfrac{\partial_{\mu}\pi_{P}}{2v_P}\bigg)\gamma^{\mu}\gamma^{5} \bigg]\Psi \, .
\label{UVtotTrick2}
\end{align}
This time, there is no ambiguity in calculating the Wilson coefficients of the operators involving the would-be-Goldstone bosons. The dimension-five effective interactions become
\begin{align}
\mathcal{L}_{\rm EFT, II}^{\rm 1loop} & \supset \omega_{_{VVA}}\left( S_{\mu}-\dfrac{\partial_{\mu}\pi_{S}}{2v_{S}}\right) V_{\nu}\tilde{F}_{_A}^{\mu\nu} + \omega_{_{AVV}}\left(P_{\mu}-\dfrac{\partial_{\mu}\pi_{P}}{2v_{P}}\right) V_{\nu}\tilde{F}_{_V}^{\mu\nu} \nonumber\\
& +\omega_{_{VAV}}\left( S_{\mu}-\dfrac{\partial_{\mu}\pi _{S}}{2v_{S}}\right) A_{\nu} \tilde{F}_{_V}^{\mu\nu} + \eta_{_{ASV}} \dfrac{\pi_A}{v_A} F_{_S,\,\mu\nu} \tilde{F}_{_V}^{\mu\nu} \nonumber\\
& +\omega_{_{AAA}}\left( P_{\mu}-\dfrac{\partial_{\mu}\pi _{P}}{2v_{P}}\right) A_{\nu} \tilde{F}_{_A}^{\mu\nu} + \eta_{_{APA}} \dfrac{\pi_A}{v_A} F_{_P,\,\mu\nu} \tilde{F}_{_A}^{\mu\nu} \, ,
\label{EfftotTrickk2}
\end{align}
where $\omega_{_{AVV}}$, $\omega_{_{VVA}}$, $\omega_{_{VAV}}$ and $\omega_{_{AAA}}$ are ambiguous, but not $\eta_{_{ASV}}$ and $\eta_{_{APA}}$, since the latter arise from convergent integrals. Importantly, in this form, the true $U(1)_V$ and $U(1)_A$ gauge invariances are no longer automatic. We thus end up with two equivalent ways to fix the ambiguities: either we enforce the matching of Eq.~(\ref{EfftotTrickk2}) with Eq.~(\ref{EfftotTrickk1}), or we impose gauge invariance on Eq.~(\ref{EfftotTrickk2}). In both cases, the constraints take the same form, but the latter is obviously more economical from a calculational point of view and will be adopted in the next sections. For instance, for the vector gauge fields, since $\pi_{V}$ is absent from Eq.~(\ref{EfftotTrickk2}), matching with Eq.~(\ref{EfftotTrickk1}) requires $\omega_{_{VVA}}$ and $\omega_{_{AVV}}$ to vanish. Equivalently, invariance of Eq.~(\ref{EfftotTrickk2}) under $V_{\mu}\rightarrow V_{\mu}+\partial_{\mu}\theta_{_V}$ immediately imposes $\omega_{_{VVA}}=\omega_{_{AVV}}=0$. This corresponds to the usual result that, for vector gauge interactions, the derivative interactions of a Goldstone boson with the fermions contribute only at subleading order in the mass expansion, otherwise known as the Sutherland-Veltman theorem. The local Jacobian terms in Eq.~\eqref{UVJactot} then immediately capture the whole $\pi_{P}VV$ coupling. For the axial gauge field, matching Eq.~(\ref{EfftotTrickk2}) with Eq.~(\ref{EfftotTrickk1}) obviously permits one to fix the ambiguous $\omega_{_{VAV}}$ and $\omega_{_{AAA}}$ in terms of $\eta_{_{ASV}}$ and $\eta_{_{APA}}$, which are fully calculable.
Alternatively, performing a $U(1)_A$ gauge transformation $A_{\mu}\rightarrow A_{\mu}+\partial_{\mu}\theta_{_A}$ together with $\pi_{A}\rightarrow\pi_{A} + 2v_A\theta_{_A}$ in Eq.~(\ref{EfftotTrickk2}) generates the gauge variation, after integrating by parts and using the Bianchi identity,
\begin{align}
\delta_A\big(\mathcal{L}_{\rm EFT, II}^{\rm 1loop}\big) &= \bigg( \dfrac{1}{2}\omega_{_{VAV}} + 2\eta_{_{ASV}} \bigg) \theta_{_A} F_{_S}^{\mu\nu}\tilde{F}_{_V,\,\mu\nu} + \bigg( \dfrac{1}{2} \omega_{_{AAA}} + 2\eta_{_{APA}} \bigg) \theta_{_A} F_{_P}^{\mu\nu}\tilde{F}_{_A,\,\mu\nu} \, .
\label{CoeffTrick1}
\end{align}
Hence, gauge invariance requires
\begin{align}
\delta_A\big(\mathcal{L}_{\rm EFT, II}^{\rm 1loop}\big) &= 0 \Leftrightarrow \omega_{_{VAV}} = -4 \eta_{_{ASV}} ~ \text{and}~ \omega_{_{AAA}} = -4 \eta_{_{APA}} \, .
\label{CoeffTrick2}
\end{align}
The effective axion-bosonic Lagrangian is obtained by adding $\mathcal{L}_{\rm UV}^{\rm Jac}$ and $\mathcal{L}_{\rm EFT,II}^{\rm 1loop}$, and finally setting the fictitious vector fields to zero, which gives
\begin{align}
\mathcal{L}_{\rm EFT} &= \dfrac{1}{16\pi^2}\dfrac{\pi_P}{v_P}F_{_{V},\mu\nu} \tilde{F}^{\mu\nu}_{_V} + \bigg[\dfrac{1}{16\pi^2}-\eta_{_{APA}}\bigg]\dfrac{\pi_P}{v_P}F_{_A,\mu\nu}\tilde{F}_{_A}^{\mu\nu} + \bigg[\dfrac{1}{8\pi^2}-\eta_{_{ASV}}\bigg]\dfrac{\pi_S}{v_S}F_{_A,\mu\nu}\tilde{F}_{_V}^{\mu\nu} \, .
\end{align}
Let us stress again that $\eta_{_{APA}}$ and $\eta_{_{ASV}}$ are fully calculable, unambiguous coefficients originating from convergent integrals. The determination of $\omega_{_{VAV}}$ and $\omega_{_{AAA}}$ from the requirement of gauge invariance is now transparent, and it precisely matches that using Ward identities in a Feynman diagram context~\cite{Quevillon:2019zrd}. This is the general procedure we will adopt in the following to derive our bosonic EFTs. Of course, in the physical case, none of the interactions parametrised by $\eta_{_{ASV}}$ and $\eta_{_{APA}}$ exist, since they require the presence of the fictitious $P_{\mu}$ and $S_{\mu}$ gauge fields as background values\footnote{Looking back, it is clear that gauge invariance under these fictitious symmetries is never imposed in any form. All that matters is to prevent the would-be-Goldstone bosons from being automatically absent from both Eq.~(\ref{EfftotTrickk1}) and Eq.~(\ref{EfftotTrickk2}), and true gauge invariance from being automatic in both EFT Lagrangians.}. Yet, this derivation sheds new light on the violation of the Sutherland-Veltman theorem in the presence of spontaneously broken axial gauge interactions. Ultimately, the violation is due to the contribution of the associated would-be-Goldstone boson. The net effect is that the $\pi_{S}VA$ and $\pi_{P}AA$ couplings are not fully determined by the corresponding terms in the Jacobian, Eq.~\eqref{UVJactot}, since derivative interactions do contribute at leading order in the inverse mass expansion.
\section{Integrating out chiral fermions}\label{UOLEA}
In the previous section we discussed, qualitatively, the peculiarities arising when building an EFT, by integrating out fermionic fields, from a UV theory with exact or spontaneously broken gauge symmetries and anomalous global symmetries. In this section we will, quantitatively, construct these EFTs involving gauge fields, their associated would-be-Goldstone bosons, and the Goldstone bosons associated to global symmetries.
While the would-be-Goldstone bosons can display either derivative or pseudo-scalar couplings to fermions, since ultimately this depends on the fermion parametrization (as we have discussed before), the Goldstone bosons of the global symmetries will have to be kept with local derivative couplings to fermions. Strictly speaking, from a path integral point of view, these details of the model are not needed to perform the main part of the computation, namely forming the operator basis and evaluating the loop integrals after regularising them. The symmetry aspects of the model only matter at the very last stage, when matching a UV theory onto its EFT. In this section, we briefly review the core techniques for calculating the Wilson coefficients of EFT higher-dimensional operators at the one-loop level using the functional approach. Since our interest lies in the anomaly structure of specific QFTs, we concentrate, in a general way, on the task of integrating out chiral fermions, or fields which interact chirally with gauge fields. We will also remind the reader how anomalies arise depending on how the one-loop effective action is regularised.
\subsection{Evaluation of the fermionic effective action}\label{effective_action}
We consider a generic UV theory containing a heavy Dirac fermion $\Psi$ of mass $M$ interacting bilinearly with a light field $\phi$, which is encapsulated inside the background function $X[\phi]$\footnote{For simplicity we will consider $\Psi$ and $\phi$ as singlets, but the following procedure is more general and it is still possible to treat them as multiplets.}. The matter Lagrangian of this generic UV theory can be written as follows,
\begin{align}
\mathcal{L}_{\rm UV}^{\rm fermion}\big[ \Psi, \phi \big] &\supset \bar{\Psi} \bigg[ P_{\mu}\gamma^{\mu} - M + X[\phi] \bigg]\Psi = \bar{\Psi} \, \mathcal{Q}_{\rm UV}[\phi] \, \Psi \, ,
\label{uolea: UV-Lagrangian}
\end{align}
where $P_{\mu} = i\partial_{\mu}$, and where we have introduced the fermionic quadratic operator $\mathcal{Q}_{\rm UV}[\phi]$. The background function $X[\phi]$ that we will consider throughout this paper is
\begin{align}
X[\phi] & = V_{\mu}[\phi]\gamma^{\mu} - A_{\mu}[\phi]\gamma^{\mu}\gamma^5 - W_1[\phi]i\gamma^5 \, ,
\end{align}
where we decompose $X[\phi]$ in terms of vector $V_{\mu}[\phi]$, axial-vector $A_{\mu}[\phi]$ and pseudo-scalar $W_1[\phi]$ structures\footnote{We note that $V_{\mu}[\phi]$, $A_{\mu}[\phi]$ and $W_1[\phi]$ do not contain any Dirac matrices or momentum variables $q_{\mu}$. The structures $V_{\mu}[\phi]$ and $A_{\mu}[\phi]$ can include gauge fields or local derivatives of scalar fields.}, which are all the different types of interactions we will need to match our ``axion motivated'' UV theory to an EFT. In order to obtain the fermionic one-loop effective action, the light field $\phi$ is treated classically; integrating out the fermion field $\Psi$ yields\footnote{The quantity $S_{\rm EFT}^{\rm 1loop}$ corresponds to the fermion 1PI action and is formally divergent.
We will discuss its gauge variation and its regularization in the following.} \begin{align} e^{iS_{\rm EFT}^{\rm 1loop}[\phi]} &= \int \mathcal{D}\overline{\Psi}\mathcal{D}\Psi \, e^{iS_{\rm UV}[\Psi , \, \phi]} \nonumber\\ &\simeq e^{iS_{\rm UV}[ \Psi_c , \, \phi ]} \int \mathcal{D}\bar{\eta} \, \mathcal{D}\eta \, e^{i\int d^4 x \, \bar{\eta} \mathcal{Q}_{\rm UV}[\phi] \eta } = e^{iS_{\rm UV}[ \Psi_c , \, \phi ]} \, \det \mathcal{Q}_{\rm UV}[\phi] \nonumber \\ & = e^{iS_{\rm UV}[\Psi_c , \, \phi ]} e^{ \text{Tr} \ln \mathcal{Q}_{\rm UV}[\phi] } \, , \label{uolea: 1PI-action} \end{align} where in the second line of Eq.~\eqref{uolea: 1PI-action} we have expanded the fermion fields around their classical background values, $\Psi = \Psi_c + \eta$ and performed the integration over the quantum fluctuations $\eta$. Eventually, we have traded the functional determinant for the functional trace, $``\text{Tr}"$, running over the functional space and internal indices of the quadratic operator, $\mathcal{Q}_{\rm UV}[\phi]$. We therefore arrive at the one-loop effective action arising from integrating out a fermion: \begin{align} S_{\rm EFT}^{\rm 1loop} &= -i \, \text{Tr}\ln ( \slashed{P} - M + V_{\mu}[\phi]\gamma^{\mu} - A_{\mu}[\phi]\gamma^{\mu}\gamma^5 - W_1[\phi]i\gamma^5) \, . \label{S1LEFT} \end{align} Generally, in the functional space, one can write the quadratic operator as a function of position, $\hat{x}$, and momentum, $\hat{p}$, operators. Projecting onto position space, these operators become $\hat{x}=x$ and $\hat{p}_{\mu}=i\partial_{\mu}$. The standard initial step is to evaluate the trace over functional space by inserting the momentum eigenstate basis together with employing the canonical quantum mechanical trick of inserting the identity matrix, $\int d^4x \ket{x}\bra{x} = \mathds{1}$~\footnote{For the reader who would like to investigate in details the whole computation steps, we recommend Refs.\cite{Henning:2014wua,Henning:2016lyp,Fuentes-Martin:2016uol,Cohen:2019btp}.}, \begin{align} S_{\rm EFT}^{\rm 1loop} &= -i\, \int \dfrac{d^4q}{(2\pi)^4} \bra{q} \text{tr} \ln \mathcal{Q}_{UV}(\hat{x},\hat{p}_{\mu}) \ket{q} \nonumber \\ &= -i\, \int d^4x \int \dfrac{d^4q}{(2\pi)^4} \braket{q}{x}\bra{x} \text{tr} \ln \mathcal{Q}_{UV}(\hat{x},\hat{p}_{\mu}) \ket{q} \nonumber \\ &= -i\, \int d^4x \int \dfrac{d^4q}{(2\pi)^4} \, e^{iq\cdot x} \, \text{tr} \ln \mathcal{Q}_{UV}(x, i\partial_{\mu}) e^{-iq\cdot x} \nonumber \\ &= \int d^4x \int \dfrac{d^4q}{(2\pi)^4} (-i) \, \text{tr} \ln \mathcal{Q}_{UV}(x, i\partial_{\mu} - q_{\mu}) \, , \label{Tr-Q} \end{align} where ``tr" now denotes the trace over spinor and internal symmetry indices only. Here the $\bra{x}$ denotes the eigenstate of local operator in position space, e.g. $\bra{x}\mathcal{Q}_{UV}(\hat{x},\hat{p})= \mathcal{Q}_{UV}(x,i\partial_{\mu})\bra{x}$, and the convention for inner product is $\braket{x}{q}=e^{-iq\cdot x}$. An ``open" derivative from the kinematic operator will get shifted due to $e^{iq\cdot x} i\partial_{\mu} e^{-iq\cdot x} = i\partial_{\mu} +q_{\mu}$. We perform also a conventional change of integration variable $q \rightarrow -q$. As we will study later, we emphasize that in the case where one has to deal with a local derivative of a bosonic field, e.g. $\big[\partial_{\mu}\pi(x) \big]$, this term will not be shifted under the sandwich of $e^{iq\cdot x}\big[ \partial_{\mu}\pi(x) \big]e^{-iq\cdot x}$ since the partial derivative of this coupling is ``closed". 
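Explicitly, acting on an arbitrary test function $f(x)$, the two behaviours contrasted above read \begin{align} e^{iq\cdot x}\,\big(i\partial_{\mu}\big)\, e^{-iq\cdot x} f(x) = \big(i\partial_{\mu} + q_{\mu}\big) f(x) \, , \qquad e^{iq\cdot x}\,\big[\partial_{\mu}\pi(x)\big]\, e^{-iq\cdot x} f(x) = \big[\partial_{\mu}\pi(x)\big]\, f(x) \, , \end{align} so that only the ``open'' kinetic derivative picks up the loop momentum, while the ``closed'' derivative of a background field behaves as a mere multiplicative insertion. 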
Therefore, on the computational side, depending on the vector or axial-vector nature of the local derivative couplings, one can absorb these terms into the vector ($V_{\mu}[\phi]$) and axial-vector ($A_{\mu}[\phi]$) structures of the UV quadratic operator~\footnote{This underlines the practical usefulness of our initial choice of parametrisation made in Eq.~\eqref{S1LEFT}.}. Ultimately, the expansion of the logarithm in terms of a series of local operators suppressed by the fermion mass scale can be performed by a variety of techniques, \begin{align} \mathcal{L}_{\rm EFT}^{\rm 1loop} &= -i \, \text{Tr}\ln ( \slashed{P} - \slashed{q} - M + X[\phi]) \nonumber \\ &= i \, \text{tr} \sum_{n=1}^{\infty} \dfrac{1}{n} \int \dfrac{d^4q}{(2\pi)^4} \left[ \, \dfrac{-1}{\slashed{q}+M} \bigg( -\slashed{P} -V_{\mu}[\phi]\gamma^{\mu} + A_{\mu}[\phi] \gamma^{\mu}\gamma^5 + W_1[\phi]i\gamma^5 \bigg) \right]^n \, . \label{Tr-Q-expansion} \end{align} The remarkable point at this stage is that the $q$-momentum integration can be factorized out from the generic operator structures. Indeed, regardless of the method used to evaluate the logarithm expansion, it can be done once and for all, and the result is the same and universal in the sense that the final expression is independent of the details of the UV Lagrangian, which remain encapsulated in the $X$ matrix of light fields, the covariant derivative $P_\mu$, and the mass matrix $M$. This leads to the so-called Universal One-Loop Effective Action (UOLEA) (see Refs.~\cite{Drozd:2015rsp,Zhang:2016pja,Ellis:2016enq,Ellis:2017jns,Ellis:2020ivx}). Note that in our calculations we will deal with multiple vector, axial-vector and pseudo-scalar interactions, so we will consider in full generality \begin{align} V_{\mu}[\phi] \equiv g_{_V}^i V_{\mu}^i[\phi^i]\, , ~ A_{\mu}[\phi] \equiv g_{_A}^i A_{\mu}^i[\phi^i]\, , ~ W_1[\phi] \equiv g_{_{W_1}}^i W_1^i[\phi^i] \, , \end{align} with an implicit summation over the $i$ index. \subsection{Ambiguities and regularisation of the functional trace}\label{Ambiguous_traces} The evaluation of the one-loop effective Lagrangian Eq.~\eqref{Tr-Q-expansion} usually encounters divergent integrals, and we use dimensional regularisation~\cite{tHooft:1972tcz} to evaluate them along with the $\overline{MS}$ scheme for renormalisation. The traces over Dirac matrices have to be performed in $d=4-\epsilon$ dimensions, and the $\epsilon$-terms resulting from the contractions with the metric tensor (satisfying then $g^{\mu\nu}g_{\mu\nu}=d$) must be kept in the computations. These $\epsilon$-terms will then multiply the $(1/\epsilon)$ pole of the divergent integrals and yield finite contributions. We emphasise that depending on the regularisation scheme for $\gamma^5$ in $d$ dimensions, different results for the $\epsilon$-terms in Dirac traces will emerge (see for example Refs.~\cite{Chanowitz:1979zu,Novotny:1994yx,Belusca-Maito:2020ala}). We will come back shortly to describe in detail the prescription we use to evaluate ill-defined Dirac traces involving $\gamma^5$ matrices in dimensional regularisation. We now come back to the ambiguities arising in some of our integrals in four-dimensional space. Usually, when computing one-loop divergent triangle Feynman diagrams (corresponding to the Adler–Bell–Jackiw anomaly \cite{Adler:1969gk,Bell:1969ts}), it is well-known that, in $d=4$ dimensions, an ambiguity of the loop integral arises. 
It corresponds to an arbitrariness in the chosen integration variables (see Ref.~\cite{Weinberg:1996kr}): there can be surface terms that do depend on the chosen momentum routing. Those surface terms then contribute to the divergence of the vector and axial-vector currents, and all the naive Ward identities cannot be satisfied simultaneously: at least some of them will be anomalous. The important point is that the arbitrariness of the integration variable can be parametrized in terms of free parameters (see the standard Refs.~\cite{Weinberg:1996kr,Bilal:2008qx} and the more recent Refs.~\cite{Quevillon:2019zrd,Bonnefoy:2020gyh}). By tuning the values of those free parameters, one can decide which symmetries are broken at the quantum level, and which ones are kept intact. Evidently, to obtain the correct physical results, all the gauge symmetries must be preserved. When switching to $d$-dimensional space, the ambiguity of the loop integrals no longer arises from the dependence on the chosen momentum routing, but is instead inherent to the Dirac algebra sector. Indeed, not all the usual properties of the Dirac matrices can be maintained in $d>4$ dimensions, essentially because $\gamma^5$ and the anti-symmetric tensor $\epsilon^{\mu\nu\rho\sigma}$ are intrinsically four-dimensional objects. Whatever the chosen definition, there is no way to consistently preserve both the anticommutation property of the $\gamma^5$ matrix, i.e. $\{\gamma^{\mu},\gamma^5\}=0$, and the cyclicity of the trace in $d>4$ dimensions. In their original work, 't Hooft and Veltman~\cite{tHooft:1972tcz} noted that the momentum routing ambiguity is replaced by an ambiguity in the location of $\gamma^5$ in the Dirac traces. Using their prescriptions for the Dirac algebra in $d>4$ dimensions (see Refs.~\cite{tHooft:1972tcz,Breitenlohner:1977hr}), it is then possible to introduce free parameters keeping track of all the possible $\gamma^5$ locations in a given string of Dirac matrices~\cite{Elias:1982ea}. As before, one can then tune these parameters to choose which symmetries are broken anomalously, and which ones have to be preserved. This is the strategy we will employ to calculate the ambiguous Dirac traces in Eq.~\eqref{Tr-Q-expansion}. \subsection{Evaluation of the anomaly-related operators}\label{GCS+Goldstone} We now concentrate on the derivation of the operators which ultimately involve a mixture of three gauge fields and Goldstone bosons with a derivative acting on them. With our parametrization, they arise from combinations of the generic vector $V_{\mu}[\phi]$ and axial-vector $A_{\mu}[\phi]$ fields. Due to the presence of $\gamma^5$ Dirac matrices in their Wilson coefficients, they are truly ambiguous in dimensional regularisation. We will then proceed with the evaluation of operators involving one Goldstone boson (without any derivative acting on it), namely the $W_1[\phi]$ field in our generic parametrization. These operators have been evaluated using the usual Feynman diagram technique (see Refs.~\cite{Quevillon:2019zrd,Bonnefoy:2020gyh,Anastasopoulos:2006cz}). Since those computations are subtle and can lead to confusion, it is legitimate to wonder how one would perform them from a different point of view, such as within the path integral formalism, which is what we present now. \subsubsection{Evaluation of the ambiguous terms} We start with the exercise of computing the divergent terms that naturally arise when evaluating Eq.~\eqref{Tr-Q-expansion}. 
The generic form of these operators is \begin{align} G_{\mu}^i G_{\nu}^j \tilde{F}_{\mu\nu}^k = G_{\mu}^i G_{\nu}^j \bigg( \dfrac{1}{2}\epsilon^{\mu\nu\rho\sigma}\partial_{_{[\rho}}G_{_{\sigma]}}^k \bigg) \, , \label{GCS form} \end{align} where we use the notation $G_{\mu}^i$ to denote a generic gauge field, so as to avoid confusion with the vector and axial-vector structures in Eq.~\eqref{Tr-Q-expansion}. We also introduce the upper indices $i$,$j$,$k$ to keep the computation as general as possible and to offer us the possibility to apply these computations to multiple gauge field configurations later on. Since, starting with Eq.~\eqref{Tr-Q-expansion}, we chose to deal with vector and axial-vector structures, in order to reconstruct the ambiguous operators in the EFT we need: \begin{itemize} \item One insertion of $P_{\mu}$ to account for the partial derivative and thus allow us to form a field strength tensor. \item Several combinations of vector and axial-vector structures. It is clear that to generate the anti-symmetric tensor $\epsilon^{\mu\nu\rho\sigma}$ the product of Dirac matrices must involve an odd number of $\gamma^5$ matrices. There exist only two possibilities: either an ``$AVV$'' contribution with one $\gamma^5$ or an ``$AAA$'' contribution with three $\gamma^5$ matrices. \end{itemize} When evaluating the one-loop effective Lagrangian of Eq.~\eqref{Tr-Q-expansion}, the various contributions to the ambiguous effective interactions arise from the $n=4$ polynomial terms \begin{align} \mathcal{L}_{\rm EFT}^{\rm 1loop} &\supset i \, \text{tr} \, \dfrac{1}{4} \int \dfrac{d^dq}{(2\pi)^d} \left[ \, \dfrac{-1}{\slashed{q}+M} \bigg( -P_{\mu}\gamma^{\mu} -V_{\mu}[\phi]\gamma^{\mu} + A_{\mu}[\phi] \gamma^{\mu}\gamma^5 + W_1[\phi]i\gamma^5 \bigg) \right]^4 \nonumber \\ &\supset \sum_N f_{_N}^{^{AVV}}\mathbb{O}^{(PAVV)} + f_{_N}^{^{AAA}}\mathbb{O}^{(PAAA)} \, , \label{GCS: EFT expansion} \end{align} where $\mathbb{O}^{(PAVV)}$ denotes the class of operators containing one $\gamma^5$ matrix and $\mathbb{O}^{(PAAA)}$ the one containing three $\gamma^5$ matrices. \paragraph*{Evaluation of the $\mathbb{O}^{(PAVV)}$ structures.} There are three different types of combinations that contribute to the $\mathbb{O}^{(PAVV)}$ structure, namely $\mathcal{O}(VVPA),\,\mathcal{O}(VAPV),\,\mathcal{O}(AVPV)$. Each type of combination contains four universal structures, which are related to one another via trace cyclicity. At this stage, due to the ambiguity of the $\gamma^5$-positions, one should not use trace cyclicity to minimise the number of universal structures that need to be evaluated. 
To present in detail the evaluation procedure of the Dirac trace and its regularisation, let us focus on one explicit example out of 12 universal structures included in Eq.~\eqref{GCS: EFT expansion} \begin{align} \mathcal{O}(VVPA) &\supset \dfrac{1}{4} \int \dfrac{d^dq}{(2\pi)^d} \, \text{tr} \bigg[ \dfrac{-1}{\slashed{q}+M} V_{\mu}\gamma^{\mu} \dfrac{-1}{\slashed{q}+M} V_{\nu}\gamma^{\nu} \dfrac{-1}{\slashed{q}+M} P_{\rho}\gamma^{\rho} \dfrac{-1}{\slashed{q}+M} A_{\sigma} \gamma^{\sigma}\gamma^5 \bigg] \nonumber \\ &= \dfrac{i}{4} \bigg[ -4M^4 \mathcal{I}_i^4 + 16M^2 \mathcal{I}[q^2]_i^4 \bigg] \text{tr} \bigg( \epsilon^{\mu\nu\rho\sigma} V_{\mu} V_{\nu} P_{\rho} A_{\sigma} \bigg) \nonumber \\ & + \dfrac{1}{4} \, \mathcal{I}[q^4]_i^4 \bigg[ g^{ab}g^{cd} + g^{ac}g^{bd} + g^{ad}g^{bc} \bigg] \text{tr} \bigg( \gamma_a \gamma^{\mu} \gamma_b \gamma^{\nu} \gamma_c \gamma^{\rho} \gamma_d \gamma^{\sigma} \gamma^5 \bigg) \bigg( V_{\mu} V_{\nu} P_{\rho} A_{\sigma} \bigg) \, , \label{example: avv-CDE-technique} \end{align} where the fermion propagators are decomposed into $\dfrac{-1}{\slashed{q}+M} = \dfrac{M}{q^2-M^2} + \dfrac{-\slashed{q}}{q^2-M^2}$. For the tensorial integrals, we use \begin{align} \int \dfrac{d^dq}{(2\pi)^d} \dfrac{q^{\mu_1}\cdots q^{\mu_{2n_c}}}{(q^2-m_i^2)^{n_i}(q^2-m_j^2)^{n_j}\cdots } &= g^{\mu_1 \cdots \mu_{2n_c}} \mathcal{I}[q^{2n_c}]^{n_i n_j \cdots}_{i j \cdots } \, , \label{formula: master-Integrals} \end{align} where $g^{\mu_1 \cdots \mu_{2n_c}}$ is the completely symmetric tensor, e.g. $g^{\mu\nu\rho\sigma} = g^{\mu\nu}g^{\rho\sigma}+g^{\mu\rho}g^{\nu\sigma}+g^{\mu\sigma}g^{\nu\rho}$, and we denote the master integrals as $\mathcal{I}[q^{2n_c}]^{n_i n_j \cdots}_{i j \cdots }$. The explicit expression and the value of some useful master integrals are derived in the Appendix \ref{Appendix:master_integrals}. In the second line of Eq.~\eqref{example: avv-CDE-technique}, all the loop integrals are finite, one can then evaluate the various Dirac traces in the usual naive scheme. The last line of Eq.~\eqref{example: avv-CDE-technique} contains divergent integrals, $\mathcal{I}[q^4]_i^4$, which have to be regularised. Let us show how to evaluate such an ambiguous quantity as $\text{tr} \big(\gamma_a \gamma^{\mu}\gamma_b\gamma^{\nu}\gamma_c\gamma^{\rho}\gamma_d\gamma^{\sigma}\gamma^5 \big)$ of Eq.~\eqref{example: avv-CDE-technique}. We follow the procedure described earlier in the section \ref{Ambiguous_traces}. 
Before evaluating the Dirac trace in $d$-dimension, we first write down all possible structures that are equivalent to the original Dirac string by naively anti-commuting $\gamma^5$, \begin{align} \text{tr} \big(\gamma_a \gamma^{\mu}\gamma_b\gamma^{\nu}\gamma_c\gamma^{\rho}\gamma_d\gamma^{\sigma} \gamma^5 \big) &\rightarrow \bar{a}_1 \text{tr} \big(\gamma_a \gamma^{\mu}\Red{\gamma^5} \gamma_b\gamma^{\nu}\gamma_c\gamma^{\rho}\gamma_d\gamma^{\sigma} \big) + \bar{a}_2 \text{tr} \big(\gamma_a \gamma^{\mu}\gamma_b\gamma^{\nu}\Red{\gamma^5}\gamma_c\gamma^{\rho}\gamma_d\gamma^{\sigma} \big) \nonumber \\ &\quad + \bar{a}_3 \text{tr} \big(\gamma_a \gamma^{\mu}\gamma_b\gamma^{\nu}\gamma_c\gamma^{\rho}\Red{\gamma^5}\gamma_d\gamma^{\sigma} \big) + \bar{a}_4 \text{tr} \big(\gamma_a \gamma^{\mu}\gamma_b\gamma^{\nu}\gamma_c\gamma^{\rho}\gamma_d\gamma^{\sigma}\Red{\gamma^5} \big) \, , \label{example: avv ambiguous traces} \end{align} where we introduce the four free parameters, $\bar{a}_i$, to keep track of the position of the $\gamma_5$ matrix in Eq.~\eqref{example: avv ambiguous traces}. Let us briefly comment on the fact that \begin{itemize} \item In $d=4$ dimensions, all Dirac structures on the R.H.S of Eq.~\eqref{example: avv ambiguous traces} are equivalent. \item In $d=4-\epsilon$ dimensions, by using the Breitenlohner-Maison-’t Hooft-Veltman (BMHV) scheme (see Refs.\cite{tHooft:1972tcz,Breitenlohner:1977hr}), the $\gamma^5$ matrix does not anti-commute anymore with Dirac $\gamma^{\mu}$ matrices. Therefore, each Dirac trace will give a different result due to different position of $\gamma^5$ matrix. The free parameters $\bar{a}_i$ is a device to keep track of the $\gamma^5$-positions. \item Enforcing a consistent result in $d=4$ and $d=4-\epsilon$ dimensions requires that $\sum_{i=1}^4\bar{a}_i=1$. \end{itemize} After plugging Eq.~\eqref{example: avv ambiguous traces} into Eq.~\eqref{example: avv-CDE-technique}, one obtains \begin{align} \mathcal{O}(VVPA) &\supset \dfrac{1}{4} \bigg[ 4M^4\mathcal{I}_i^4 - 16M^2 \mathcal{I}[q^2]_i^4 - 24\,\epsilon\,(-\bar{a}_1 + \bar{a}_2 - \bar{a}_3 + \bar{a}_4)\, \mathcal{I}[q^4]_i^4 \, \bigg] \text{tr} \bigg( \epsilon^{\mu\nu\rho\sigma} V_{\mu} V_{\nu} P_{\rho} A_{\sigma} \bigg) \nonumber \\ &= \dfrac{1}{32\pi^2} \bigg[ -1 -\bar{a}_1 + \bar{a}_2 - \bar{a}_3 + \bar{a}_4 \, \bigg] \text{tr} \bigg[ V_{\mu}^iV_{\nu}^j \tilde{F}_{\mu\nu}^{A^k} \bigg] \, , \label{example: avv-Tr-D-dim} \end{align} where in the last line of Eq.~\eqref{example: avv-Tr-D-dim}, we replace the vector and axial-vector structures by $V_{\mu} \equiv g_{_V}^iV_{\mu}^i\,,~ A_{\mu} \equiv g_{_A}^iA_{\mu}^i \,$, we also omitted the gauge couplings to simplify the expression of \eqref{example: avv-Tr-D-dim} and highlight the final value of loop integrals and Dirac traces. We remind the reader that $g_{_V}^i$ and $g_{_A}^i$ will only appear when it is necessary. Also, keep in mind that in Eq.~\eqref{example: avv-Tr-D-dim} the $\epsilon$-terms will hit the pole $\frac{1}{\epsilon}$ of the divergence integral, $\mathcal{I}[q^4]_i^4$, and generate finite contributions. We then apply the same method for the other contributions in $\mathbb{O}^{PAVV}$. One should note that since in Eq.~\eqref{Tr-Q-expansion}, $P_{\mu}=i\partial_{\mu}$, is the ``open" derivative one can therefore omit the operator structures which start with a $P_{\mu}$ since they lead to inert boundary terms. 
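A quick way to see this is that, when the open derivative sits leftmost in the trace, the whole contribution is a total derivative: denoting schematically by $\mathcal{O}^{\mu}(x)$ the string of background insertions standing to the right of the open derivative, one has \begin{align} \int d^4x \; \text{tr}\Big[ P_{\mu}\, \mathcal{O}^{\mu}(x) \Big] = i \int d^4x \; \partial_{\mu}\, \text{tr}\Big[ \mathcal{O}^{\mu}(x) \Big] = 0 \, , \end{align} up to boundary terms, which we discard. 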
We underline one more time that, at this stage, one cannot use the cyclicity property of the trace to reduce the number of terms that need to be computed. Adding all the different contributions together gives \begin{align} \mathcal{L}_{\rm EFT}^{\rm 1loop} &\supset i\big( \, 24\epsilon\,\bar{a}_{V^iV^jA^k} \, \mathcal{I} [q^4]_i^4 \,\big) \text{tr} \bigg[ V_{\mu}^i V_{\nu}^j \tilde{F}_{\mu\nu}^{A^k} \bigg] \nonumber \\ &+ i\big( -4M^4\mathcal{I}_i^4 + 16M^2\mathcal{I}[q^2]_i^4 + 24\epsilon\,\bar{a}_{V^jA^kV^i} \,\mathcal{I}[q^4]_i^4 \,\big) \text{tr} \bigg[ V_{\mu}^j A^k_{\nu} \tilde{F}_{\mu\nu}^{V^i} \bigg] \nonumber\\ &+ i\big( 4M^4\mathcal{I}_i^4 - 16M^2\mathcal{I}[q^2]_i^4 + 24\epsilon\,\bar{a}_{A^kV^iV^j} \,\mathcal{I}[q^4]_i^4 \,\big) \text{tr} \bigg[ A^k_{\mu} V_{\nu}^i \tilde{F}_{\mu\nu}^{V^j} \bigg] \, . \label{AVV: CDE-result} \end{align} Since the $\bar{a}_{i}$ coefficients are basically free, there is no reason to give any physical meaning to the different contributions. For each operator structure, we redefine the total values of the $\bar{a}_i$ in terms of new free parameters, e.g. $\bar{a}_{V^iV^jA^k}\,,~ \bar{a}_{V^jA^kV^i}\,,~ \bar{a}_{A^kV^iV^j}$. Reading out the values of the loop integrals, the above equation reduces to \begin{align} \mathcal{L}_{\rm EFT}^{\rm 1loop} &\supset \dfrac{1}{8\pi^2} \bar{a}_{V^iV^jA^k} \text{tr} \bigg[ V_{\mu}^i V_{\nu}^j \tilde{F}_{\mu\nu}^{A^k} \bigg] + \dfrac{1}{8\pi^2} \bar{a}_{V^jA^kV^i} \text{tr} \bigg[ V_{\mu}^j A^k_{\nu} \tilde{F}_{\mu\nu}^{V^i} \bigg] + \dfrac{1}{8\pi^2} \bar{a}_{A^kV^iV^j} \text{tr} \bigg[ A^k_{\mu} V_{\nu}^i \tilde{F}_{\mu\nu}^{V^j} \bigg] \, . \label{AVV: CDE-result2} \end{align} The three operators of Eq.~\eqref{AVV: CDE-result2} are not independent: by using integration by parts one always ends up with two independent operators, and thus two free parameters. As we will see later, in practice one decides which operator to remove by integration by parts based on which symmetries are to be preserved, since the operators are not all invariant under the same vector or axial symmetries. As an example, if one supposes that, within our notation, the $V^i$ current might be anomalous, one may integrate by parts the first operator of Eq.~\eqref{AVV: CDE-result}, $\text{tr} \big(V_{\mu}^i V_{\nu}^j \tilde{F}_{\mu\nu}^{A^k} \big)$, and after discarding the total derivative operator, and redefining the free parameters, one obtains \begin{align} \mathcal{L}_{\rm EFT}^{\rm 1loop} &\supset \dfrac{1}{8\pi^2} \bar{a}_{V^jA^kV^i} \text{tr} \bigg[ V_{\mu}^j A^k_{\nu} \tilde{F}_{\mu\nu}^{V^i} \bigg] + \dfrac{1}{8\pi^2}\bar{a}_{A^kV^iV^j} \text{tr} \bigg[ A^k_{\mu} V_{\nu}^i \tilde{F}_{\mu\nu}^{V^j} \bigg] \, . \label{AVV: CDE-result3} \end{align} At this point, one should comment on the fact that, had one used the BMHV scheme without performing the decomposition of Eq.~\eqref{example: avv ambiguous traces}, one would have found each Wilson coefficient of the operators in Eq.~\eqref{AVV: CDE-result} to vanish. This is ultimately due to the fact that, by default, vector currents cannot be anomalous when strictly following the BMHV procedure. Even if one could have expected to be able to write down effective operators such as those displayed in Eq.~\eqref{AVV: CDE-result2} from first principles, we have rigorously shown how to obtain them in dimensional regularisation: the ``AVV'' interaction can be described by two independent operators whose two Wilson coefficients are ambiguous, i.e. free. 
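For instance, in the simplest situation where the background fields commute (abelian structures, so that the ordering inside the trace is immaterial), the statement that only two of the three structures are independent follows from the integration-by-parts identity \begin{align} V_{\mu}^i V_{\nu}^j \tilde{F}_{\mu\nu}^{A^k} + V_{\mu}^j A^k_{\nu} \tilde{F}_{\mu\nu}^{V^i} + A^k_{\mu} V_{\nu}^i \tilde{F}_{\mu\nu}^{V^j} = \epsilon^{\mu\nu\rho\sigma}\partial_{\mu}\big( V^i_{\nu} V^j_{\rho} A^k_{\sigma} \big) \, , \end{align} so that, once the total derivative is discarded, any one of the three operators can be traded for the other two. An analogous relation holds for the three ``$AAA$'' structures encountered below. 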
\paragraph*{Evaluation of the $\mathbb{O}^{(PAAA)}$ structures.} We now turn to the second class of operator, $\mathbb{O}^{(PAAA)}$, that contains three $\gamma^5$ matrices. Similarly to the previous case with $\mathbb{O}^{(PAVV)}$, we start here by giving an explicit example for an operator that belongs to this class, \begin{align} \mathbb{O}^{AAPA} &\supset \dfrac{1}{4} \int \dfrac{d^dq}{(2\pi)^d} \, \text{tr} \left[ \dfrac{-1}{\slashed{q}+M} A_{\mu} \gamma^{\mu} \gamma^5 \dfrac{-1}{\slashed{q}+M} A_{\nu} \gamma^{\nu}\gamma^5 \dfrac{-1}{\slashed{q}+M} P_{\rho}\gamma^{\rho} \dfrac{-1}{\slashed{q}+M} A_{\sigma} \gamma^{\sigma}\gamma^5 \right]^4 \nonumber \\ &= i\, \dfrac{1}{4} \bigg[ 4M^4 \mathcal{I}_i^4 + 16M^2 \mathcal{I}[q^2]_i^4 \bigg] \text{tr} \bigg( \epsilon^{\mu\nu\rho\sigma} A_{\mu} A_{\nu} P_{\rho} A_{\sigma} \bigg) \nonumber \\ & + \dfrac{1}{4} \, \mathcal{I}[q^4]_i^4 \bigg[ g^{ab}g^{cd} + g^{ac}g^{bd} + g^{ad}g^{bc} \bigg] \text{tr} \bigg( \gamma_a \gamma^{\mu} \gamma^5 \gamma_b \gamma^{\nu} \gamma^5 \gamma_c \gamma^{\rho} \gamma_d \gamma^{\sigma} \gamma^5 \bigg) \bigg( A_{\mu} A_{\nu} P_{\rho} A_{\sigma} \bigg) \, , \label{example: aaa Tr-ambiguous} \end{align} we then parameterize the ambiguous Dirac trace, $\text{tr} \big(\gamma_a \gamma^{\mu}\gamma^5\gamma_b\gamma^{\nu}\gamma^5\gamma_c\gamma^{\rho}\gamma_d\gamma^{\sigma} \gamma^5 \big)$, by using \begin{align} \text{tr} \big(\gamma_a \gamma^{\mu}\gamma^5\gamma_b\gamma^{\nu}\gamma^5\gamma_c\gamma^{\rho}\gamma_d\gamma^{\sigma} \gamma^5 \big) &\rightarrow \bar{b}_1 \text{tr} \big(\gamma_a \gamma^{\mu}\Red{\gamma^5} \gamma_b\gamma^{\nu}\gamma_c\gamma^{\rho}\gamma_d\gamma^{\sigma} \big) + \bar{b}_2 \text{tr} \big(\gamma_a \gamma^{\mu}\gamma_b\gamma^{\nu}\Red{\gamma^5}\gamma_c\gamma^{\rho}\gamma_d\gamma^{\sigma} \big) \nonumber \\ &\quad + \bar{b}_3 \text{tr} \big(\gamma_a \gamma^{\mu}\gamma_b\gamma^{\nu}\gamma_c\gamma^{\rho}\Red{\gamma^5}\gamma_d\gamma^{\sigma} \big) + \bar{b}_4 \text{tr} \big(\gamma_a \gamma^{\mu}\gamma_b\gamma^{\nu}\gamma_c\gamma^{\rho}\gamma_d\gamma^{\sigma} \Red{\gamma^5} \big) \, . \label{example: aaa ambiguous traces} \end{align} Afterwards, evaluating in $d=4-\epsilon$ dimensions with BMHV's scheme, we obtain \begin{align} \mathbb{O}^{AAPA} &\supset \dfrac{i}{4} \bigg[ 4M^4\mathcal{I}_i^4 + 16M^2 \mathcal{I}[q^2]_i^4 + 24\,\epsilon\, \big(-\bar{b}_1 + \bar{b}_2 -\bar{b}_3 + \bar{b}_4 \big)\, \mathcal{I}[q^4]_i^4 \, \bigg] \text{tr} \bigg( \epsilon^{\mu\nu\rho\sigma} A_{\mu} A_{\nu} P_{\rho} A_{\sigma} \bigg) \nonumber \\ &= \dfrac{1}{32\pi^2}\bigg[ \dfrac{1}{3} +\bar{b}_1 - \bar{b}_2 +\bar{b}_3 - \bar{b}_4 \bigg] \text{tr} \bigg[ A_{\mu}^iA_{\nu}^j\tilde{F}_{\mu\nu}^{A^k} \bigg] \, , \label{example: aaa-Tr-D-dim} \end{align} where in the last step of the computation we evaluate the value of loop integrals, express $A_{\mu}\equiv A_{\mu}^i$. We also note that $g_{_A}^i$ will appear when it is necessary. 
The computations for the other operators belonging to $\mathbb{O}^{(PAAA)}$ are similar and the full result reads \begin{align} \mathcal{L}_{\rm EFT}^{\rm 1loop} &\supset \big( i\, 24\epsilon \bar{b}_{A^iA^jA^k} \, \mathcal{I}[q^4]_i^4 \, \big) \text{tr} \bigg[ A^i_{\mu}A^j_{\nu} \tilde{F}_{\mu\nu}^{A^k} \bigg] + \big( i \, 24\epsilon \bar{b}_{A^jA^kA^i} \, \mathcal{I}[q^4]_i^4 \,\big) \text{tr} \bigg[ A^j_{\mu}A^{k}_{\nu} \tilde{F}_{\mu\nu}^{A^i} \bigg] \nonumber\\ &\quad + \big( i \, 24\epsilon \bar{b}_{A^kA^iA^j} \, \mathcal{I}[q^4]_i^4 \,\big) \text{tr} \bigg[ A^{k}_{\mu}A^i_{\nu} \tilde{F}_{\mu\nu}^{A^j} \bigg] \, , \end{align} which essentially reduces to \begin{align} \mathcal{L}_{\rm EFT}^{\rm 1loop} &\supset \dfrac{1}{8\pi^2} \bar{b}_{A^iA^jA^k} \text{tr} \bigg[ A^i_{\mu}A^j_{\nu} \tilde{F}_{\mu\nu}^{A^k} \bigg] +\dfrac{1}{8\pi^2} \bar{b}_{A^jA^kA^i} \text{tr} \bigg[ A^j_{\mu}A^k_{\nu} \tilde{F}_{\mu\nu}^{A^i} \bigg] + \dfrac{1}{8\pi^2} \bar{b}_{A^kA^iA^j} \text{tr} \bigg[ A^k_{\mu}A^{i}_{\nu} \tilde{F}_{\mu\nu}^{A^j} \bigg] \, . \label{AAA: CDE-result3} \end{align} These three operators in Eq.~\eqref{AAA: CDE-result3} are not independent, and one is free to remove one of them by the use of integration by parts. Consequently, in dimensional regularisation, the ``AAA'' interaction can be described by two independent operators attached to two free Wilson coefficients, reflecting the ambiguities in the evaluation of such interactions. \subsubsection{Evaluation of the pseudo-scalar unambiguous terms} We now evaluate operators involving a pseudo-scalar $\phi$ (without a local partial derivative acting on it) and two field strength tensors. The generic operator form is given by \begin{align} \phi \, F_{\mu\nu}^j\tilde{F}_{\mu\nu}^k &= \phi \, \dfrac{1}{2}\, \epsilon^{\mu\nu\rho\sigma} \big( \partial_{_{[\mu}} G_{_{\nu]}}^j \big) \big( \partial_{_{[\rho}} G_{_{\sigma]}}^k \big) \, . \end{align} To reconstruct the pseudo-scalar terms from the expansion of Eq.~\eqref{Tr-Q-expansion}, we need \begin{itemize} \item Two insertions of $P_{\mu}$ to account for the two partial derivatives, thus forming the field strength tensors. \item One insertion of $W_1[\phi]$ to account for the pseudo-scalar field $\phi$. \item To account for the two gauge fields, we need $VV$ and $AA$ structures. Since $W_1[\phi]$ contains a $\gamma^5$, the combination with an $AV$ structure will not contribute to the final result. \end{itemize} We collect the relevant classes of operators that contribute to the Wilson coefficients of these pseudo-scalar terms, \begin{align} \mathcal{L}_{\rm EFT}^{\rm 1loop} &\supset i \, \text{tr} \, \dfrac{1}{5} \int \dfrac{d^dq}{(2\pi)^d} \left[ \, \dfrac{-1}{\slashed{q}+M} \bigg( -P_{\mu}\gamma^{\mu} -V_{\mu}[\phi]\gamma^{\mu} + A_{\mu}[\phi] \gamma^{\mu}\gamma^5 + W_1[\phi]i\gamma^5 \bigg) \right]^5 \nonumber \\ &\supset \sum_N f_{_N}^{^{\pi VV}}\mathbb{O}^{(P^2V^2W_1)} + f_{_N}^{^{\pi AA}}\mathbb{O}^{(P^2A^2W_1)} \, . \label{Goldstone: EFT expansion} \end{align} The evaluation of the classes of operators $\mathbb{O}^{P^2V^2W_1}$ and $\mathbb{O}^{P^2A^2W_1}$ can be done very efficiently by using the Universal One-Loop Effective Action (UOLEA)~\footnote{These operators have been explicitly evaluated and are available in the fermionic UOLEA of Ref.~\cite{Ellis:2020ivx}.}. 
One obtains \begin{align} \mathcal{L}_{\rm EFT}^{\rm 1loop} &\supset -\dfrac{1}{8\pi^2 M} \text{tr} \, \epsilon^{\mu\nu\rho\sigma} \bigg( W_1[P_{\mu},V_{\nu}][P_{\rho},V_{\sigma}] + \dfrac{1}{3}W_1[P_{\mu},A_{\nu}][P_{\rho},A_{\sigma}] \bigg) \nonumber \\ &= \dfrac{1}{16\pi^2 M} \text{tr} \bigg( W_1^i F^{V^j}_{\mu\nu}\tilde{F}^{V^k}_{\mu\nu} + \dfrac{1}{3} W_1^i F^{A^j}_{\mu\nu}\tilde{F}^{A^k}_{\mu\nu} \bigg) \, , \label{uolea: pseudo-scalar-gg} \end{align} where we form the field strength tensors by using \begin{align} \epsilon^{\mu\nu\rho\sigma}[P_{\mu},V^j_{\nu}][P_{\rho},V^k_{\sigma}] = \dfrac{i^2}{4}\epsilon^{\mu\nu\rho\sigma} \big( \partial_{[\mu}V^j_{\nu]} \big)\big( \partial_{[\rho}V^k_{\sigma]} \big) = -\dfrac{1}{2} F^{V^j}_{\mu\nu}\tilde{F}_{\mu\nu}^{V^k} \, , \end{align} and similarly for the axial currents. We note that if $j\neq k$, one need to sum over the exchange of $j,k$ indices to avoid the factor 2 problem. \subsection{Summary and master formula} We summarise the computations and the main outcome of Section~\ref{UOLEA}. Starting with a massive fermion which bilinearly involves some, yet undetermined, vector $V_{\mu}[\phi]$, axial vector $A_{\mu}[\phi]$ and pseudo scalar $W_1[\phi]$ interactions, \begin{align} \mathcal{L}_{\rm UV}^{\rm fermion}\big[ \Psi, \phi \big] &\supset \bar{\Psi} \bigg[ i\gamma^{\mu}\partial_{\mu} - M + V_{\mu}[\phi]\gamma^{\mu} - A_{\mu}[\phi]\gamma^{\mu}\gamma^5 - W_1[\phi]i\gamma^5 \bigg]\Psi \,, \label{LUVsum} \end{align} one obtains after integrating out the fermion field i.e evaluate the one loop effective action by expanding the functional trace with CDE techniques, \begin{align} \mathcal{L}_{\rm EFT}^{\rm 1loop} &= i \, \text{tr} \sum_{n=1}^{\infty} \dfrac{1}{n} \int \dfrac{d^4q}{(2\pi)^4} \left[ \, \dfrac{-1}{\slashed{q}+M} \bigg( -i\partial_{\mu}\gamma^{\mu} -V_{\mu}\gamma^{\mu} + A_{\mu}\gamma^{\mu}\gamma^5 + W_1 i\gamma^5 \bigg) \right]^n \, , \end{align} where in practices, the vector, axial-vector and pseudo-scalar structures are expressed as \begin{align} V_{\mu}[\phi] \equiv g_{_V}^i V_{\mu}^i[\phi^i]\, , ~ A_{\mu}[\phi] \equiv g_{_A}^i A_{\mu}^i[\phi^i]\, , ~ W_1[\phi] \equiv g_{_{W_1}}^i W_1^i[\phi^i] \, , \end{align} with an implicit summation over the $i$ index. One can proceed and form the low energy effective operators and evaluate their associated Wilson coefficients which are regularised in dimensional regularisation. After regularisation, it is important to identify ambiguities of some Wilson coefficients resulting from the fact that the gauge or anomalous aspects of the symmetries have not been addressed yet. 
The generic one-loop effective Lagrangian, still involving redundant operators as well as the ambiguous $\bar{a}$ and $\bar{b}$ coefficients, reads \begin{tcolorbox}[height=3.8cm, valign=center, colback=white] \begin{align} \mathcal{L}_{\rm EFT}^{\rm 1loop} & \supset \dfrac{1}{8\pi^2} \bar{a}_{V^iV^jA^k} \text{tr} \bigg[ V_{\mu}^i V_{\nu}^j \tilde{F}_{\mu\nu}^{A^k} \bigg] + \dfrac{1}{8\pi^2} \bar{a}_{V^jA^kV^i} \text{tr} \bigg[ V_{\mu}^j A^k_{\nu} \tilde{F}_{\mu\nu}^{V^i} \bigg] + \dfrac{1}{8\pi^2} \bar{a}_{A^kV^iV^j} \text{tr} \bigg[ A^k_{\mu} V_{\nu}^i \tilde{F}_{\mu\nu}^{V^j} \bigg] \nonumber \\ & + \dfrac{1}{8\pi^2} \bar{b}_{A^iA^jA^k} \text{tr} \bigg[ A^i_{\mu}A^j_{\nu} \tilde{F}_{\mu\nu}^{A^k} \bigg] +\dfrac{1}{8\pi^2} \bar{b}_{A^jA^kA^i} \text{tr} \bigg[ A^j_{\mu}A^k_{\nu} \tilde{F}_{\mu\nu}^{A^i} \bigg] + \dfrac{1}{8\pi^2} \bar{b}_{A^kA^iA^j} \text{tr} \bigg[ A^k_{\mu}A^{i}_{\nu} \tilde{F}_{\mu\nu}^{A^j} \bigg] \nonumber \\ & + \dfrac{1}{16\pi^2 M} \text{tr} \bigg( W_1^{i} F^{V^j}_{\mu\nu}\tilde{F}^{V^k}_{\mu\nu} + \dfrac{1}{3} W_1^{i} F^{A^j}_{\mu\nu}\tilde{F}^{A^k}_{\mu\nu} \bigg) \, . \label{masterf} \end{align} \end{tcolorbox} This master formula is generic and encapsulates all the needed computations. Indeed, at this stage, imposing the EFT to respect specific gauge invariance relations will link several of these operators together and allow one to fix the ambiguities of the free Wilson coefficients in a very simple and elegant way. Since doing so presupposes having a concrete model or set of symmetries in mind, we now turn to more phenomenological investigations where this master formula is applied to various models. \section{Application to axions}\label{section: application} In this section we use the results obtained in section~\ref{UOLEA} to build EFTs involving would-be-Goldstone bosons of spontaneously broken symmetries and Goldstone bosons of global symmetries. As a first application, we apply the master formula of Eq.~\eqref{masterf} to concretely build the intuited EFT of Eq.~\eqref{initial4op} from the toy model presented in section~\ref{toymodel}. We will then concentrate on more realistic constructions, e.g. building EFTs involving the SM gauge fields and an axion or ALP. This task precisely implies integrating out chiral fermions which obtain their mass when the electroweak gauge symmetry is spontaneously broken, while the global PQ symmetry is both spontaneously and anomalously broken. We provide a simple expression adapted to the SM gauge groups, and make explicit use of it to derive axion couplings to massive gauge fields in the original 2HDM setup as proposed by Peccei and Quinn, and in a more phenomenologically relevant version of it, the invisible axion DFSZ model~\cite{Dine:1981rt,Zhitnitsky:1980tq}. \subsection{A chiral toy model} So far, we have evaluated the operators involving three vector structures (which can also incorporate derivative couplings) and the operators involving a pseudo-scalar field which couples to two field strength tensors. We now give an example of how to use the results of the previous section to derive the EFT resulting from integrating out the chiral fermion of the toy model of section~\ref{toymodel}. 
We remind the fermionic quadratic operator of this toy model, \begin{align} \mathcal{L}_{\rm UV}^{\rm \text{toy-model}} = \bar{\Psi}\bigg[ i\partial_{\mu}\gamma^{\mu} - M + V_{\mu}\gamma^{\mu} - A_{\mu}\gamma^{\mu}\gamma^5 - W_1 i\gamma^5 \bigg] \Psi \label{toymodel2} \end{align} where the vector, axial-vector, and pseudo-scalar structures decompose as \begin{align} & V_{\mu} = \bigg\{ V_{\mu},\, \bigg[ S_{\mu}-\dfrac{\partial_{\mu}\pi_S}{2v_S} \bigg] \bigg\} \, , ~ A_{\mu} = \bigg\{ A_{\mu},\, \bigg[ P_{\mu}-\dfrac{\partial_{\mu}\pi_P}{2v_P} \bigg] \bigg\} \, , ~ W_{1} = M\dfrac{\pi_A}{v_A} \, , \end{align} and the gauge couplings are omitted for simplicity. Making use of our master formula given in Eq.~\eqref{masterf}, one can straightforwardly obtain \begin{align} \mathcal{L}_{\rm EFT}^{\rm 1loop} &= \omega_{_{VAV}} \left[ S_{\mu}-\dfrac{\partial_{\mu}\pi_{S}}{2v_{S}}\right] A_{\nu} \tilde{F}_{_V}^{\mu\nu} + \omega_{_{AAA}} \left[ P_{\mu}-\dfrac{\partial_{\mu}\pi_{P}}{2v_{P}}\right] A_{\nu} \tilde{F}_{_A}^{\mu\nu} \nonumber \\ & + \eta_{_{ASV}} \bigg[ \dfrac{\pi_A}{v_A} F_{_S,\,\mu\nu} \tilde{F}_{_V}^{\mu\nu} \bigg] + \eta_{_{APA}} \bigg[ \dfrac{\pi_A}{v_A} F_{_P,\,\mu\nu} \tilde{F}_{_A}^{\mu\nu} \bigg] . \label{EFTtoymodel2} \end{align} At this stage, $\omega_{_{VAV}}$, $\omega_{_{AAA}}$ and $\eta_{_{ASV}}$, $\eta_{_{APA}}$ read, \begin{align} \omega_{_{VAV}} = \dfrac{1}{8\pi^2}\big(1-\bar{b}\big) \, , ~ \omega_{_{AAA}} = -\dfrac{1}{8\pi^2} \bar{a} \, ; \quad \eta_{_{ASV}} = \dfrac{1}{8\pi^2} \, , ~ \eta_{_{APA}} = \dfrac{1}{24\pi^2} \, , \end{align} with $\bar{a}$ and $\bar{b}$ the two free parameters. As presented in section 2, we now implement the consistency between the UV model of Eq.~\eqref{toymodel2} and the associated EFT of Eq.~\eqref{EFTtoymodel2} by fixing the nature of each symmetries i.e gauge or anomalous. We identify the precise value of the parameters $\bar{a}$ and $\bar{b}$ by requiring axial gauge invariance (leaving then the possibility that the other transformations, only, could be anomalous) \begin{align} & \delta_A\bigg( \omega_{_{VAV}} \left[ S_{\mu}-\dfrac{\partial_{\mu}\pi_{S}}{2v_{S}}\right] A_{\nu} \tilde{F}_{_V}^{\mu\nu} + \dfrac{1}{8\pi^2 } \dfrac{\pi_A}{v_A} F_{_S,\,\mu\nu} \tilde{F}_{_V}^{\mu\nu} \bigg) = 0 \, , ~ \text{and } \nonumber \\ & \delta_A\bigg( \omega_{_{AAA}} \left[ P_{\mu}-\dfrac{\partial_{\mu}\pi_{P}}{2v_{P}}\right] A_{\nu} \tilde{F}_{_A}^{\mu\nu} + \dfrac{1}{24\pi^2 } \dfrac{\pi_A}{v_A} F_{_P,\,\mu\nu} \tilde{F}_{_A}^{\mu\nu} \bigg) = 0 \, , \label{deltaA} \end{align} where we perform the gauge variation of the axial current, $\delta_A A_{\mu} = \partial_{\mu}\theta_A$, the would-be-Goldstone, $\delta_A \pi_A = 2 v_A \theta_A$, integrate by parts and combine the various contributions proportional to $(\partial_{\mu}\theta_A)P_{\nu}\tilde{F}_{_A}^{\mu\nu}$ and $(\partial_{\mu}\theta_A)S_{\nu}\tilde{F}_{_V}^{\mu\nu}$. This straightforwardly leads to, \begin{align} \omega_{_{VAV}}&= -\dfrac{1}{2\pi^2} ~ \Leftrightarrow ~ \bar{b} =5 ~; \quad \omega_{_{AAA}}= -\dfrac{1}{6\pi^2} ~ \Leftrightarrow ~ \bar{a} = \dfrac{4}{3} \, . 
\end{align} Finally, one can set to zero the artificial vector fields $S_{\mu}$ and $P_{\mu}$ and write the non-ambiguous dimension-five bosonic operators simply as \begin{align} \mathcal{L}_{\rm EFT}^{\rm 1loop} &= \dfrac{1}{2\pi^2}\dfrac{\partial^{\mu}\pi_S}{2v_S}A^{\nu}\tilde{F}_{_V,\,\mu\nu} + \dfrac{1}{6\pi^2}\dfrac{\partial^{\mu}\pi_P}{2v_P}A^{\nu}\tilde{F}_{_A,\,\mu\nu} = -\dfrac{1}{8\pi^2}\dfrac{\pi_S}{v_S} F_{_A}^{\mu\nu}\tilde{F}_{_V,\mu\nu} - \dfrac{1}{24\pi^2}\dfrac{\pi_P}{v_P} F_{_A}^{\mu\nu}\tilde{F}_{_A,\mu\nu} \, . \label{toy model: main-result} \end{align} These are the one-loop contributions to the EFT Lagrangian obtained by integrating out a massive chiral fermion in our toy model. To obtain the full EFT Lagrangian, one must add the Jacobian terms given by Eq.~\eqref{UVJactot} to the one-loop terms of Eq.~\eqref{toy model: main-result}, \begin{align} \mathcal{L}_{\rm EFT} &= \dfrac{1}{16\pi^{2}}\dfrac{\pi_{P}}{v_P}\left( F_{_{V},\mu\nu} \tilde{F}^{\mu\nu}_{_V} + \dfrac{1}{3} F_{_{A},\mu\nu}\tilde{F}_{_A}^{\mu\nu}\right) \, . \end{align} We note that, when integrating out the fermion in a one-dimensional representation starting from Eq.~(\ref{toymodel2}), the $\pi_S$ derivative interaction induces EFT operators that precisely cancel the Jacobian term in Eq.~(\ref{UVJactot}), as expected for an abelian gauge theory and as discussed earlier; this provides a non-trivial check of our calculation. We should also remark that the $\pi_{P}VV$ coupling entirely arises from the Jacobian term, as predicted by the Sutherland-Veltman theorem. However, the $\pi_{P}AA$ coupling does not, and displays an additional factor of $1/3$ due to the one-loop contribution. We now move on to more concrete axion models for which we will compute one-loop induced effective couplings between axions and gauge bosons, with a particular interest in those involving massive gauge fields~\footnote{These results should and will reproduce those derived, using different techniques, in Refs.~\cite{Quevillon:2019zrd,Bonnefoy:2020gyh}.}. \subsection{Axion couplings to gauge fields} The axion field is a relic of the spontaneous symmetry breaking of a global $U(1)_{PQ}$ symmetry. In a realistic model, the QCD axion or an ALP, a pseudo-scalar field $a(x)$, couples to fermions (of the SM or beyond) which have to be charged under the global $U(1)_{PQ}$ group but also under other abelian or non-abelian groups, such as those of the SM. 
For a massive chiral fermion, its bilinear form, after gauge symmetry breaking, generically reads \begin{align} \mathcal{L}_{\rm UV} = \mathcal{L}_{\rm UV}^{\rm Jac} + \bar{\Psi}\bigg[ i\partial_{\mu}\gamma^{\mu} - M + V_{\mu}\gamma^{\mu} - A_{\mu}\gamma^{\mu}\gamma^5 - W_1 i\gamma^5 \bigg]\Psi \, , \label{Appl: Lagrangian-UV-generic} \end{align} where the vector, axial-vector and pseudo-scalar structures include\footnote{Note that for convenience we have used a different normalisation convention for the PQ charges than the one used for gauge charges.} % \begin{align} V_{\mu} = \big\{ g_{_V}^i V_{\mu}^i \,,\, g_{_V}^{PQ}\big(\partial_{\mu}a-V_{\mu}^{PQ}\big) \big\} \, , ~ A_{\mu} = \big\{ g_{_A}^i A_{\mu}^i \, , \, g_{_A}^{PQ}\big(\partial_{\mu}a-A_{\mu}^{PQ}\big) \big\} \, , ~ W_1 = M \dfrac{\pi_{_A}^i}{v_A} \, , \end{align} where $V_{\mu}^i,\,A_{\mu}^i$ stand for the vector and axial-vector components of a generic chiral gauge field $G_{\mu}^i$, and $\pi_{_A}^i(x)$ stands for the would-be-Goldstone boson in the case where $G_{\mu}^i$ obtains its mass from spontaneous gauge symmetry breaking. $V_{\mu}^{PQ}$ and $A_{\mu}^{PQ}$ are the fictitious auxiliary gauge fields associated to the global PQ symmetry. Writing Eq.~\eqref{Appl: Lagrangian-UV-generic} presupposes a chiral fermion reparametrisation which induces a Jacobian term, $\mathcal{L}_{\rm UV}^{\rm Jac}$. This contribution, before spontaneous gauge symmetry breaking, reads \begin{align} \mathcal{L}_{\rm UV}^{\rm Jac} &= \dfrac{1}{16\pi^2f_a} \mathcal{N}_{PQ} \, a(x) F_{\mu\nu}^i \tilde{F}^{i,\,\mu\nu} \, , \label{Appl: Jacobian-generic} \end{align} where the $i$-index runs only over the gauge field strength tensors. The anomaly coefficient can be generally expressed as \begin{align} \mathcal{N}_{PQ} = \sum_{\Psi=\Psi_R,\Psi_L^{\dagger}} \text{tr} \bigg[PQ(\Psi) \otimes G(\Psi) \otimes G(\Psi) \bigg] \, , \label{formula: anomaly coef.} \end{align} with $PQ(\Psi)$ and $G(\Psi)$ the PQ and gauge charges of the chiral fermion $\Psi$. Integrating out the chiral fermion and making use of the master formula Eq.~\eqref{masterf}, one obtains \begin{align} \mathcal{L}_{\rm EFT}^{\rm 1loop} &= \omega_{_{VAV}} \bigg[\, g_{_V}^{PQ}g_{_A}^ig_{_V}^j\, \big(\partial_{\mu}a - V_{\mu}^{PQ}\big)A_{\nu}^i\tilde{F}^{V^j,\mu\nu} \bigg] + \omega_{_{AAA}}\bigg[\,g_{_A}^{PQ}g_{_A}^ig_{_A}^j\, \big(\partial_{\mu}a - A_{\mu}^{PQ} \big)A_{\nu}^i\tilde{F}^{A^j,\mu\nu} \bigg] \nonumber \\ & + \dfrac{1}{8\pi^2} \big(g_{_V}^{PQ} g_{_V}^j \big) \dfrac{\pi_{_A}^i}{v_A} F_{\mu\nu}^{V^{PQ}} \tilde{F}^{V^j,\mu\nu} + \dfrac{1}{24\pi^2} \big(g_{_A}^{PQ} g_{_A}^j \big) \dfrac{\pi_{_A}^i}{v_A} F_{\mu\nu}^{A^{PQ}} \tilde{F}^{A^j,\mu\nu} \nonumber \\ &= -\dfrac{1}{4\pi^2} \big(\, g_{_V}^{PQ}g_{_A}^ig_{_V}^j \big)\, a\,F_{\mu\nu}^{A^i}\tilde{F}^{V^j,\mu\nu} - \dfrac{1}{12\pi^2}\big(\,g_{_A}^{PQ}g_{_A}^ig_{_A}^j\big)\, a\,F_{\mu\nu}^{A^i}\tilde{F}^{A^j,\mu\nu} \, . \label{Appl: One-loop-terms} \end{align} In order to get the last line of the above equation, we imposed the crucial axial gauge invariance, used integration by parts and the Bianchi identity, neglected the surface terms, and at the end of the computation we removed the fictitious fields $V_{\mu}^{PQ}$ and $A_{\mu}^{PQ}$. 
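Putting the two generic pieces together (this is a mere rewriting of Eq.~\eqref{Appl: Jacobian-generic} and of the last line of Eq.~\eqref{Appl: One-loop-terms}, with implicit sums over the $i,j$ indices), their sum takes the compact form \begin{align} \mathcal{L}_{\rm UV}^{\rm Jac} + \mathcal{L}_{\rm EFT}^{\rm 1loop} = \dfrac{\mathcal{N}_{PQ}}{16\pi^2 f_a}\, a\, F_{\mu\nu}^i \tilde{F}^{i,\,\mu\nu} - \dfrac{1}{4\pi^2} \big(g_{_V}^{PQ}g_{_A}^ig_{_V}^j\big)\, a\,F_{\mu\nu}^{A^i}\tilde{F}^{V^j,\mu\nu} - \dfrac{1}{12\pi^2}\big(g_{_A}^{PQ}g_{_A}^ig_{_A}^j\big)\, a\,F_{\mu\nu}^{A^i}\tilde{F}^{A^j,\mu\nu} \, . \end{align} 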
This is the generic axion-bosonic effective Lagrangian, $\mathcal{L}_{\rm EFT} = \mathcal{L}_{\rm UV}^{\rm Jac} + \mathcal{L}_{\rm EFT}^{\rm 1loop}$, whose two pieces are given in general by Eqs.\,\eqref{Appl: Jacobian-generic} and \eqref{Appl: One-loop-terms}. \subsubsection{SM gauge and PQ symmetries} We now present two examples where the axion field couples to the SM gauge fields. Our first example will be the original Peccei and Quinn scenario, in which the axion is the pseudo-scalar component of a Two Higgs Doublet Model (2HDM). Our second application will be to consider the so-called DFSZ axion model~\cite{Dine:1981rt,Zhitnitsky:1980tq}. To illustrate the results and properties discussed in the previous sections, we will integrate out only one generation of quarks, let us say $\big( u ~ d \big)$. This computation was performed in Ref.~\cite{Quevillon:2019zrd} using the Feynman diagram technique together with Pauli-Villars regularization. We will recover some of its results by using the functional method for one-loop matching. We begin with the Jacobian terms which induce tree-level axion couplings to the SM gauge fields, \begin{align} \mathcal{L}_{\rm UV}^{\rm Jac} &= \dfrac{1}{16\pi^2f_a}\, \bigg( g_s^2\mathcal{N}_C \, a G_{\mu\nu}\tilde{G}^{\mu\nu} + g^2\mathcal{N}_L\, a W_{\mu\nu}^i\tilde{W}^{i,\mu\nu} + g'^2\mathcal{N}_Y\, a B_{\mu\nu}\tilde{B}^{\mu\nu} \bigg) \, , \label{jacobian: PQ-SM} \end{align} with the anomaly coefficients $\mathcal{N}_i$ computable as follows, \begin{align} & \mathcal{N}_C = \sum_{\Psi=q_L^{\dagger},u_R,d_R} C_{_{SU(3)_c}}(\Psi)\, d_{_{SU(2)_L}}(\Psi) \, PQ(\Psi) \, , \nonumber \\ & \mathcal{N}_L = \sum_{\Psi=q_L^{\dagger};\,l_L^{\dagger}} d_{_{SU(3)_c}}(\Psi)\, C_{_{SU(2)_L}}(\Psi) \, PQ(\Psi) \, , \nonumber \\ & \mathcal{N}_Y = \sum_{\Psi=q_L^{\dagger},u_R,d_R;\,l_L^{\dagger},e_R} d_{_{SU(3)_c}}(\Psi)\, d_{_{SU(2)_L}}(\Psi) C_{_{U(1)_Y}}(\Psi) \, PQ(\Psi) \, , \label{anomaly coeff: PQ-SM} \end{align} where we closely follow the conventions and notations of Ref.~\cite{Quevillon:2019zrd}: $d_{_{SU(3)_c}}(\Psi)$, $d_{_{SU(2)_L}}(\Psi)$ and $C_{_{SU(3)_c}}(\Psi)$, $C_{_{SU(2)_L}}(\Psi)$ are respectively the $SU(3)_c$ and $SU(2)_L$ dimensions and quadratic Casimir invariants of the representation carried by the chiral fermion field $\Psi$. Besides, $PQ(\Psi)$ is the PQ charge of the fermion $\Psi$, which is model-dependent. We will come back to these PQ charges when discussing specific axion models. The one-loop effective Lagrangian resulting from integrating out a SM chiral fermion is \begin{align} \mathcal{L}_{\rm EFT}^{\rm 1loop} \supset \sum_f \dfrac{-1}{4\pi^2} \bigg[ & \big( g_{_V}^{PQ}g_{_A}^Zg_{_V}^Z \big)^f \, \bigg( a\, F_{\mu\nu}^{A^Z} \tilde{F}^{V^Z,\mu\nu} \bigg) + \dfrac{1}{3}\big( g_{_A}^{PQ}g_{_A}^Zg_{_A}^Z \big)^f \, \bigg( a\, F_{\mu\nu}^{A^Z}\tilde{F}^{A^Z,\mu\nu} \bigg) \nonumber \\ + & \big( g_{_V}^{PQ}g_{_A}^Wg_{_V}^W \big)^f \, \bigg( a\, F_{\mu\nu}^{A^W} \tilde{F}^{V^W,\mu\nu} \bigg) + \dfrac{1}{3}\big( g_{_A}^{PQ}g_{_A}^Wg_{_A}^W \big)^f \, \bigg( a\, F_{\mu\nu}^{A^W} \tilde{F}^{A^W,\mu\nu} \bigg) \nonumber \\ + & \big( g_{_V}^{PQ}g_{_A}^Zg_{_V}^{\gamma} \big)^f \, \bigg( a\, F_{\mu\nu}^{A^Z} \tilde{F}^{V^{\gamma},\mu\nu} \bigg) \bigg] \, , \label{loopEFT: PQ-SM-gauge} \end{align} where $g_{_V}^{PQ},\,g_{_A}^{PQ}$ are the axion-fermion-fermion couplings defined through the corresponding Dirac bilinears. 
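As a simple cross-check of Eq.~\eqref{loopEFT: PQ-SM-gauge}, note that the gluon and the photon couple to the SM fermions in a purely vectorial way (cf. Table~\ref{tab:SM-gauge-fermion}), \begin{align} \big(g_{_{A}}^{\,g}\big)^f = \big(g_{_{A}}^{\,\gamma}\big)^f = 0 \quad \Longrightarrow \quad \mathcal{L}_{\rm EFT}^{\rm 1loop} \not\supset a\, G_{\mu\nu}\tilde{G}^{\mu\nu}\, , \; a\, F_{\mu\nu}\tilde{F}^{\mu\nu} \, , \end{align} since every one-loop operator of Eq.~\eqref{Appl: One-loop-terms} requires at least one axial structure. The axion couplings to gluons and photons therefore arise entirely from the Jacobian of Eq.~\eqref{jacobian: PQ-SM}, in line with the Sutherland-Veltman theorem discussed above. 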
A summary of the gauge charges of SM fermions can be found in Table\,\ref{tab:SM-gauge-fermion}. \begin{table}[h!] \centering \begin{tabular}{c|cccc} $i$ & g & W & $\gamma$ & Z \\ \hline \T\B $\big(g_{_{V}}^{\,i}\big)^f$ & $g_s T_a^f$ & $\dfrac{g}{\sqrt{2}}T_3^f$ & $eQ^f$ & $\dfrac{g}{2\cos\theta_w} (T_3^f - 2\sin^2\theta_w Q^f)$ \\ \hline \T\B $\big(g_{_{A}}^{\,i}\big)^f$ & 0 & $\dfrac{g}{\sqrt{2}}T_3^f$ & 0 & $\dfrac{g}{2\cos\theta_w}T_3^f$ \end{tabular} \caption{SM fermion couplings to the SM gauge fields, where $T_a^f,\,T_3^f,\,Q^f,\,\theta_w$ are respectively the $SU(3)_C$ generators, the eigenvalue of the isospin operator, the electromagnetic charge and the weak mixing angle. } \label{tab:SM-gauge-fermion} \end{table} The only quantities that remain to be determined in Eqs.~\eqref{jacobian: PQ-SM},\,\eqref{anomaly coeff: PQ-SM},\,\eqref{loopEFT: PQ-SM-gauge} are the fermion PQ charges, which we now discuss for several axion models. \subsubsection{PQ axion model} We first consider the original PQ scenario, where the QCD axion is identified as the state orthogonal to the would-be-Goldstone of the Z boson in a 2HDM model (see Refs.~\cite{Peccei:1977hh,Peccei:1977ur}). The starting point is a fermion-Higgs Yukawa interaction, which we assume to be of type II, and which can be written as \begin{align} \mathcal{L}_{\rm Yukawa}^{\rm 2HDM} &= -\bigg[ Y_u\bar{u}_R\, \Phi_1\, q_L + Y_d\bar{d}_R\,\Phi_2^{\dagger}\,q_L \bigg] - Y_e\bar{e}_R\,\Phi_2^{\dagger}\,l_L + \text{h.c.} \, . \label{Yukawa: 2HDM} \end{align} The two complex scalar fields can be written as \begin{align} \Phi_1 = \dfrac{1}{\sqrt{2}} e^{i\frac{\eta_1}{v_1}} \begin{pmatrix} 0 \\v_1 \end{pmatrix} \, , ~ \Phi_2 = \dfrac{1}{\sqrt{2}} e^{i\frac{\eta_2}{v_2}} \begin{pmatrix} 0 \\v_2 \end{pmatrix} \, , \label{2HDM: Higgs-doublets} \end{align} where $\eta_1,\,\eta_2$ are the Goldstone bosons of the scalar fields $\Phi_1$ and $\Phi_2$. The vacuum expectation values of the scalar fields, $v_1$ and $v_2$, are related by $v_1^2 + v_2^2 \equiv v^2 \simeq \big(246\,\text{GeV}\big)^2$, and one usually introduces the $\beta$ angle such that $v_1 = v\sin\beta$, $v_2=v\cos\beta$ and $v_2 / v_1 = \big(1 / \tan\beta\big) \equiv x$. The next step is to separate the would-be-Goldstone boson (that generates the mass of the Z boson) from its orthogonal state, which then defines the axion. One has the following relations \begin{align} \begin{pmatrix} G^0 \\ a\, \end{pmatrix} = \begin{pmatrix} \cos\beta & \sin\beta \\ -\sin\beta & \cos\beta \end{pmatrix} \begin{pmatrix} \eta_2 \\ \eta_1 \end{pmatrix} \, . \label{diagonal Goldstone-axion} \end{align} The Higgs doublets can be re-written as \begin{align} \Phi_1 = \dfrac{1}{\sqrt{2}}e^{i\frac{G^0}{v_1}}e^{i\,x\frac{a\, }{v}} \begin{pmatrix} 0 \\ v_1 \end{pmatrix} \, , ~ \Phi_2 = \dfrac{1}{\sqrt{2}}e^{i\frac{G^0}{v_2}}e^{i\,\big(-\frac{1}{x}\big)\frac{a\, }{v}} \begin{pmatrix} 0 \\ v_2 \end{pmatrix} \, , \label{2HDM: Higgs-doublet-axion} \end{align} where $G^0$ is PQ neutral and the Higgs doublets carry the following PQ charges, $PQ(\Phi_1)=x$ and $PQ(\Phi_2)=-1/x$. In order to identify the PQ axion model with Eq.~\eqref{Appl: Lagrangian-UV-generic}, we first make the Yukawa Lagrangian PQ-invariant by performing the chiral rotation \begin{align} \Psi \rightarrow e^{iPQ(\Psi)\frac{a}{v}} \Psi \, . \end{align} The PQ charges for one generation of quarks $\big(u ~ d \big)$ are assigned as \begin{align} PQ\big(q_L;u_R,d_R\big) = \bigg(\alpha;\alpha+x,\alpha+\dfrac{1}{x}\bigg) \, . 
\label{PQ charges: LR-2HDM} \end{align} $\alpha$ is a free parameter that corresponds to the conservation of the baryon number\footnote{For a general setup including also the lepton sector see Refs.\cite{Quevillon:2019zrd,Quevillon:2020hmx}.}. The chiral rotation leads to the derivative coupling of axion with SM fermions as defined in Eq.~\eqref{Appl: Lagrangian-UV-generic} and the axion couplings to fermions read \begin{align} & \big( g_{_V}^{PQ} \big)^u = -\dfrac{1}{2v}(2\alpha + x) \, , ~ \big( g_{_A}^{PQ} \big)^u = \dfrac{1}{2v} x \, ; \quad \big( g_{_V}^{PQ} \big)^d = -\dfrac{1}{2v}\bigg(2\alpha + \dfrac{1}{x}\bigg) \, , ~ \big( g_{_A}^{PQ} \big)^d = \dfrac{1}{2v}\bigg( \dfrac{1}{x} \bigg) \, . \label{PQ charges: VA-2HDM} \end{align} Plugging Eq.~\eqref{PQ charges: LR-2HDM} into Eq.~\eqref{jacobian: PQ-SM} and rotating the electroweak gauge fields from their interaction basis to their physical mass basis using $W_{\mu}^3=c_wZ_{\mu}+s_wA_{\mu},\, B_{\mu}=-s_wZ_{\mu}+c_wA_{\mu}$ along with $e=gs_w=g'c_w$, one obtains the following Lagrangian for the Jacobian contribution \begin{align} \mathcal{L}_{\rm Jac}^{\{u,d\}} = \dfrac{1}{16\pi^2v} \bigg( & \dfrac{g_s^2}{2}\bigg[x+\dfrac{1}{x}\bigg] a\,G_{\mu\nu}^a \tilde{G}^{a,\mu\nu} + e^2N_c \bigg[ \dfrac{4}{9}x + \dfrac{1}{9x} \bigg] a\,F_{\mu\nu}\tilde{F}^{\mu\nu} \nonumber \\ & -\big[ g^2N_c \, \alpha \big]a\, W_{\mu\nu}^+\tilde{W}^{-,\mu\nu} - \dfrac{2e^2}{c_ws_w}N_c\bigg[ \dfrac{1}{2}\alpha + s_w^2\bigg( \dfrac{4}{9}x + \dfrac{1}{9x} \bigg) \bigg]a\, Z_{\mu\nu}\tilde{F}^{\mu\nu} \nonumber \\ & + \dfrac{e^2}{c_w^2s_w^2}N_c\bigg[ -(1-2s_w^2)\dfrac{\alpha}{2} + s_w^4\bigg( \dfrac{4}{9}x + \dfrac{1}{9x} \bigg) \bigg]a\, Z_{\mu\nu}\tilde{Z}^{\mu\nu} \bigg) \, , \end{align} where $N_c = 3$. Plugging Eq.~\eqref{PQ charges: VA-2HDM} into Eq.~\eqref{loopEFT: PQ-SM-gauge} and performing the same electroweak rotation lead to the following one-loop effective Lagrangian, \begin{align} \mathcal{L}_{\rm EFT}^{\rm 1loop-\{u,d\}} &= \dfrac{1}{16\pi^2 v} \bigg( g^2N_c \bigg[ \alpha + \dfrac{1}{6}\bigg(x+\dfrac{1}{x}\bigg) \bigg]a\, W_{\mu\nu}^+\tilde{W}^{-,\mu\nu} + \dfrac{e^2}{c_ws_w}N_c\bigg[ \alpha + \bigg(\dfrac{1}{3}x+\dfrac{1}{6x}\bigg) \bigg]a\, Z_{\mu\nu}\tilde{F}^{\mu\nu} \nonumber \\ &\qquad\qquad~\, + \dfrac{e^2}{c_w^2s_w^2}N_c\bigg[ (1-2s_w^2)\dfrac{\alpha}{2} + \dfrac{1}{12}\bigg(x+\dfrac{1}{x}\bigg) - s_w^2\bigg(\dfrac{1}{3}x+\dfrac{1}{6x}\bigg) \bigg]a\, Z_{\mu\nu}\tilde{Z}^{\mu\nu} \bigg) \, . \end{align} The effective axion-bosonic Lagrangian is obtained by adding $\mathcal{L}_{\rm Jac}^{\{u,d\}}$ and $\mathcal{L}_{\rm EFT}^{\rm 1loop-\{u,d\}}$ and gives the compact result, \begin{align} \mathcal{L}_{\rm EFT}^{\rm a-bosonic} &= \dfrac{1}{16\pi^2v} \bigg( \dfrac{g_s^2}{2}\bigg[x+\dfrac{1}{x}\bigg] a\,G_{\mu\nu}^a \tilde{G}^{a,\mu\nu} + e^2N_c \bigg[ \dfrac{4}{9}x + \dfrac{1}{9x} \bigg] a\,F_{\mu\nu}\tilde{F}^{\mu\nu} \nonumber \\ & \, + g^2N_c \,\dfrac{1}{6}\bigg[x+\dfrac{1}{x}\bigg] a\, W_{\mu\nu}^+\tilde{W}^{-,\mu\nu} + \dfrac{e^2}{c_w s_w}N_c\bigg[ \bigg(\dfrac{1}{3}x + \dfrac{1}{6x}\bigg) - 2s_w^2\bigg(\dfrac{4}{9}x+\dfrac{1}{9x} \bigg) \bigg]a\, Z_{\mu\nu}\tilde{F}^{\mu\nu} \nonumber \\ & \, + \dfrac{e^2}{c_w^2s_w^2}N_c\bigg[ \dfrac{1}{12}\bigg(x+\dfrac{1}{x}\bigg) - s_w^2\bigg(\dfrac{1}{3}x+\dfrac{1}{6x}\bigg) + s_w^4\bigg( \dfrac{4}{9}x + \dfrac{1}{9x} \bigg) \bigg] a\, Z_{\mu\nu}\tilde{Z}^{\mu\nu} \bigg) \, . 
\label{a-bosonicEFT: 2HDM} \end{align} \subsubsection{DFSZ axion model} In the case of the more realistic DFSZ axion model~\cite{Dine:1981rt,Zhitnitsky:1980tq}, the Yukawa couplings are the same as in the 2HDM, but the scalar potential is modified. The 2HDM is extended by a gauge-singlet complex scalar field $\phi$, with the scalar potential \begin{align} V_{\rm DFSZ} &= V_{\rm 2HDM} + V_{\phi 2HDM} + V_{\phi PQ} + V_{\phi} \, , \end{align} where we have \begin{align} \begin{cases} V_{\phi 2HDM} = a_1\big(\phi^{\dagger}\phi \big)\big(\Phi^{\dagger}_1\Phi_1 \big) + a_2\big(\phi^{\dagger}\phi \big)\big(\Phi^{\dagger}_2\Phi_2 \big) \, , \\ V_{\phi PQ} = \lambda_{12}\big(\phi^{\dagger}\phi \big)\Phi_1^{\dagger}\Phi_2 + \text{h.c.} \, , \\ V_{\phi} = \mu^2\big(\phi^{\dagger}\phi \big) + \lambda\big(\phi^{\dagger}\phi \big)^2 \, . \end{cases} \end{align} Similarly to the $\Phi_i$ of Eq.~\eqref{2HDM: Higgs-doublet-axion}, one can also write the new complex scalar field $\phi$, which is a gauge singlet, as \begin{align} \phi = \dfrac{f_a}{\sqrt{2}}\, e^{i\frac{\eta_a}{f_a}} \, . \end{align} In summary, for the DFSZ axion model, one obtains the PQ charges and the PQ-symmetry breaking scale by rescaling their values in the PQ axion model, simply as follows, \begin{align} x \rightarrow \dfrac{2x^2}{x^2+1} \, , ~ \dfrac{1}{x} \rightarrow \dfrac{2}{x^2+1} \, , ~ v \rightarrow f_a \, . \label{DFSZ rescale} \end{align} The effective DFSZ axion-bosonic Lagrangian, obtained by adding $\mathcal{L}_{\rm Jac}^{\{u,d\}}$ and $\mathcal{L}_{\rm EFT}^{\rm 1loop-\{u,d\}}$, is given by \begin{align} \mathcal{L}_{\rm EFT}^{\rm a-bosonic} = \dfrac{1}{16\pi^2f_a} \bigg( & g_s^2\, a\,G_{\mu\nu}^a \tilde{G}^{a,\mu\nu} + e^2N_c \dfrac{8x^2+2}{\,9(x^2+1)} a\,F_{\mu\nu}\tilde{F}^{\mu\nu} \nonumber \\ & + \dfrac{g^2N_c}{3} a\, W_{\mu\nu}^+\tilde{W}^{-,\mu\nu} + \dfrac{e^2}{c_w s_w}N_c \dfrac{3+6x^2-4s_w^2(4x^2+1)}{9(x^2+1)} a\, Z_{\mu\nu}\tilde{F}^{\mu\nu} \nonumber \\ & + \dfrac{e^2}{c_w^2s_w^2}N_c\bigg[ \dfrac{1}{6} - s_w^2\dfrac{2x^2+1}{3(x^2+1)} + s_w^4 \dfrac{8x^2+2}{9(x^2+1)} \bigg] a\, Z_{\mu\nu}\tilde{Z}^{\mu\nu} \bigg) \, . \label{a-bosonicEFT: DFSZ} \end{align} These results agree with those derived in Ref.~\cite{Quevillon:2019zrd} using the more traditional approach of Feynman diagram computations. It is certainly a good moment to pause and appreciate the difference in strategy with respect to this last reference. The main and obvious distinction is that in this work we favored the path integral method to evaluate one-loop processes. However, we believe that another elegant and insightful feature of this axionic EFT derivation is the direct and consistent way in which gauge and anomalous symmetries are dealt with. Indeed, one need not use the anomalous Ward identities to remove the ambiguities inherent to anomalies in QFT. Equivalently, one can use the interplay between higher-dimensional operators involving the axion and the would-be-Goldstone bosons in order to consistently and easily derive axion EFTs. This offers a neat method to explore other sectors of axion EFTs as well. \section{Conclusion\label{Ccl}} In this work, we have considered the task of building EFTs by integrating out fermions charged under both local and global symmetries. These symmetries can be spontaneously broken, and the global ones might also be anomalously broken. 
\section{Conclusion\label{Ccl}} In this work, we have considered the task of building EFTs by integrating out fermions charged under both local and global symmetries. These symmetries can be spontaneously broken, and the global ones might also be anomalously broken. This setting is typically that encountered in axion models, where a new global but anomalous symmetry, $U(1)_{PQ}$, is spontaneously broken, so as to generate a Goldstone boson, the axion, coupled to gluons. The main novelties of our approach are twofold. First, the heavy fermion to be integrated out is allowed to have chiral charges for both the local and global symmetries. The analysis is then much more intricate because of the presence of anomalies in various currents, and because the fermion can only have a mass when all the chiral symmetries are spontaneously broken. Second, we perform our analysis in a functional approach, by systematically building EFTs using an inverse mass expansion, that is, identifying leading operators and calculating their Wilson coefficients with the help of the Covariant Derivative Expansion. In more detail, our main results are the following: \begin{itemize} \item There are many motivations to introduce Goldstone bosons of global symmetries using a polar representation. Once this choice has been made, we have identified an appropriate parametrization of the fermionic part of the UV Lagrangian. Essentially, for the purpose of an inverse mass expansion, if one wants to perform an exact computation without truncating the initial UV theory, it is desirable to write the fermion mass term as a quantity invariant under the various symmetries, even for a chiral fermion. This requires some fermion field redefinitions. Only then can one clearly identify the fermion bilinear operator to be inverted. \item Usually, Ward identities are used to enforce the desired gauge symmetries. When dealing with anomalous quantities, these constraints are crucial to remove the ambiguities that creep in through the regularization process. But in our functional approach, this cannot be immediately implemented because the leading operators in the EFT end up being automatically gauge invariant. The only way forward is to perturb the theory to upset this automatic gauge invariance. This is done with the help of background fields, in a way very similar to that of Ref.~\cite{Bonnefoy:2020gyh}. Then, the necessary Ward identity constraints can be recovered thanks to EFT operators involving the would-be-Goldstone bosons of the exact gauge symmetries. \item The parametrization of the fermion bilinear operator involves derivative interactions with scalar and pseudoscalar fields. To our knowledge, a precise description of how to perform the calculation of the determinant of such operators has never been presented. It should be noted that in that calculation, regularization is necessary. For that, we adopt dimensional regularization and follow the 't Hooft-Veltman prescription. We show that the two-parameter ambiguities, well known in the context of triangle Feynman diagrams, can be recovered. Those are crucial to allow one to enforce all the gauge constraints in a consistent way. \item We recover in the functional context the results of Refs.~\cite{Quevillon:2019zrd, Quevillon:2020hmx, Bonnefoy:2020gyh}, that is, that the derivative couplings of the Goldstone boson $\pi$ to the fermions, $\bar{\Psi}(\partial_{\mu}\pi\gamma^{\mu}\gamma^{5})\Psi$ and $\bar{\Psi}(\partial_{\mu}\pi\gamma^{\mu})\Psi$, do not necessarily vanish in the infinite mass limit. They do contribute to the leading EFT operators $\pi VA$ and $\pi AA$, but not to $\pi VV$. In other words, this last coupling satisfies the Sutherland-Veltman theorem and is fully driven by the anomaly, whereas the other two are not.
\end{itemize} In this paper we have presented how to deal with scenarios combining both spontaneous and anomalous symmetry breaking. When building an EFT by integrating out chiral fermions charged under these various symmetries, it is legitimate to keep local partial derivative interactions instead of traditional pseudo-scalar ones, but this has a cost. Now the anomaly is spread into several contributions which have to be recombined with great care when evaluating the S-matrix (see also Refs.~\cite{Quevillon:2019zrd, Quevillon:2020hmx, Bonnefoy:2020gyh}). We have integrated out these peculiar fermions in the elegant and minimal functional approach and have shown how to remove the ambiguities one faces when evaluating the functional trace in dimensional regularization. Inevitably, this corresponds to implementing the anomalous Ward identities in a consistent way within the path integral formalism. We did so by introducing fictitious vector fields associated with the global symmetries, so that one can cure potential ambiguities undermining the theory while enforcing gauge invariance. More generally, this work shows a possible, neat and systematic path to follow in order to consistently build an entire EFT involving anomalous symmetries, and it should be applied to derive other higher-dimensional operators. All in all, this procedure allowed us to compute, in a transparent and very generic way, the Wilson coefficients of higher-dimensional operators involving Goldstone bosons; this is encapsulated in the master formula of Eq.~\eqref{masterf}. Furthermore, we showed how to apply this master formula to the case of SM gauge interactions. Ultimately, we applied these results to the axion Goldstone boson (in the general sense, i.e., either the QCD axion or simply an ALP). We obtained in closed form the higher-dimensional operators involving the axion and SM gauge fields and collected them so that one can recover the non-intuitive physical couplings between axions and massive SM gauge fields which were recently derived by some of us in Ref.~\cite{Quevillon:2019zrd}. These couplings are of particular phenomenological interest for collider ALP searches, but also for their imprints in the early universe. \subsubsection*{Acknowledgments} We thank Quentin Bonnefoy and Christophe Grojean for helpful discussions. J.Q. and P.N.H.V. have benefited from fruitful discussions and collaborations with Sebastian Ellis, Tevong You and Zhengkang Zhang on several past works on functional matching. This work is supported by the IN2P3 Master project ``Axions from Particle Physics to Cosmology''. J.Q.'s work is also supported by the IN2P3 Master project Théorie-BSMGA and the IN2P3 Master project UCMN. J.Q. acknowledges support by Institut Pascal at Université Paris-Saclay during the Paris-Saclay Astroparticle Symposium 2021, with the support of the P2IO Laboratory of Excellence (program “Investissements d’avenir” ANR-11-IDEX-0003-01 Paris-Saclay and ANR-10-LABX-0038), the P2I axis of the Graduate School Physics of Université Paris-Saclay, as well as IJCLab, CEA, IPhT, APPEC and EuCAPT.
\section{Principle of Operation} \label{sec:workingPrinc} \begin{figure*}[ht!] \centering \includegraphics[width=0.99\textwidth]{PhaseCam_principle_ver6} \caption{A schematic layout of the new camera. The quarter-wave plate, Pockels cell and polarizing beamsplitter form an optical switch that intensity modulates the beam incident on the sCMOS camera. Spatially-resolved magnitude and phase maps of the heterodyne beat between a reference field and a signal field that is frequency shifted from the reference field are calculated using four camera images acquired with modulation phases separated by $\pi/2$.} \label{fig:PhaseCamConcept_Layout} \end{figure*} To illustrate the operation of the new camera we consider a beam consisting of two components: a reference field $E_r(x,y) \exp[i(\omega_r t + \varphi_r(x,y))]$ and a signal field $E_s(x,y)\exp[i(\omega_s t + \varphi_s(x,y))]$. We wish to determine the spatial distribution of the amplitude and phase of the signal field relative to a reference field, which is phase-locked to and perhaps frequency offset from the input carrier field, as shown in magenta in Fig.~\ref{Fig:ifo_fields}. Measuring this composite field using a photodetector would yield a voltage \begin{equation} \begin{split} V(x,y) &\propto E_r(x,y)^2 + E_s(x,y)^2 \\ & \hspace{0.5cm} + 2 E_r(x,y) E_s(x,y) \sin \left(\Omega t+ \varphi(x,y) \right) \label{eq:voltage} \end{split} \end{equation} where $\Omega = \omega_r -\omega_s$ and $\varphi(x,y)= \varphi_r(x,y) - \varphi_s(x,y)$. However, the frequency of the heterodyne beat is much larger than the bandwidth of a typical pixelated camera and would not be measurable. Thus, we synchronously amplitude modulate the field incident on each pixel as shown in Fig.~\ref{Fig:WorkingPrinciple}. In this example, a square-wave amplitude modulation is applied to the beam at a frequency $\Omega$, with a phase $\phi=\varphi$ that yields the largest signal. For in-phase modulation the pixel detector observes intensities that are greater than the unmodulated intensity, resulting in a DC output $(V_r + V_s)/2 + \delta V$, where the $V_{r/s}$ are due to the $E_{r/s}(x,y)^2$ terms in Eq.~\ref{eq:voltage} and $\delta V$ is due to the RMS average of the $E_r(x,y) E_s(x,y)$ term. Similarly, for the modulation phase $\phi=\varphi + \pi$, the pixel observes intensities that are less than the unmodulated intensity, $(V_r + V_s)/2 - \delta V$. Subtraction of these provides $2\delta V \propto E_r(x,y) E_s(x,y)$~\cite{supl:1}. The optimum demodulation phase $\phi$ is not known a priori. Thus, we record four camera images, $V_\phi$ at $\phi = \{0, \pi/2, \pi, 3\pi/2\}$ for example. Combining these images yields the magnitude and phase of the heterodyne beat: \begin{align} \mathbf{I} &\equiv V_0 - V_\pi \\ \mathbf{Q} &\equiv V_{3\pi/2} - V_{\pi/2} \\ | E_r(x,y) E_s(x,y)| & \propto \sqrt{\left(\mathbf{I}\right)^2 + \left(\mathbf{Q}\right)^2 } ,\label{eq:amplitudeMap} \\ \varphi & = \arctan \left(-\frac{\mathbf{Q}}{\mathbf{I}}\right),\label{eq:phaseMap} \end{align} where we refer to $\mathbf{I}$ and $\mathbf{Q}$ as the ``in-phase'' and ``quadrature'' signals. The heterodyne beat has thus been demodulated to baseband by the optical switching, and hence the analogy to a lock-in amplifier.
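A minimal numerical sketch of this four-phase demodulation is given below (the synthetic reference and signal fields, the gating model and all variable names are purely illustrative; in practice the four $V_\phi$ arrays are simply the recorded camera frames):
\begin{verbatim}
import numpy as np

# Synthetic test: Gaussian reference field and a weak, odd-parity signal field.
N = 128
xg, yg = np.meshgrid(np.linspace(-3, 3, N), np.linspace(-3, 3, N))
E_r = np.exp(-(xg**2 + yg**2)/2)                  # reference amplitude
E_s = 0.05*xg*np.exp(-(xg**2 + yg**2)/2)          # weak signal amplitude (TEM10-like)
phi = 0.7                                         # relative phase, taken uniform here

Omega = 2*np.pi*15.4e6                            # beat (demodulation) frequency
t = np.linspace(0.0, 200/15.4e6, 20001)           # average over many beat periods

def frame(phi_mod):
    """Time-averaged, square-wave-gated intensity seen by each pixel."""
    gate = (np.sin(Omega*t + phi_mod) > 0).astype(float)
    dc = gate.mean()                              # ~1/2 for a 50% duty cycle
    beat = np.mean(gate*np.sin(Omega*t + phi))    # demodulated beat factor
    return dc*(E_r**2 + E_s**2) + 2*E_r*E_s*beat

V0, V90, V180, V270 = (frame(p) for p in (0.0, np.pi/2, np.pi, 3*np.pi/2))
I = V0 - V180                                     # "in-phase" image
Q = V270 - V90                                    # "quadrature" image
magnitude = np.sqrt(I**2 + Q**2)                  # proportional to |E_r E_s|
phase = np.arctan2(-Q, I)                         # recovers phi where the signal is non-zero
\end{verbatim}
In this synthetic example the recovered phase flips by $\pi$ across the nodal line of the signal field, reflecting the sign change of $E_s(x,y)$.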
A schematic of a practical realization is shown in Fig.~\ref{fig:PhaseCamConcept_Layout}. The composite beam is first filtered using a polarizer and then circularly polarized using a quarter-wave plate. It then passes through a Pockels cell (PC) driven with a half-wave voltage that switches the polarization of the beam between \textit{s} and \textit{p} linear polarization. The polarizing beamsplitter converts this polarization modulation into an amplitude modulation. Typical camera images and the result of processing using Eq.~\ref{eq:amplitudeMap} and \ref{eq:phaseMap} are shown in Fig.~\ref{fig:PhaseCamConcept_Layout}. The maximum image rate could in principle be doubled by recording both the transmitted and reflected beams simultaneously. In practice, it is difficult to overlap the images from both cameras to enable an accurate subtraction. Additional differential effects, such as variation in the responsivity of the sCMOS arrays and aberrations in the polarizing beamsplitter, also reduce the performance in dual camera operation. \section{Optical Lock-in cameras for gravitational wave detectors}\label{sec:gw} The interferometer shown in Fig.~\ref{Fig:ifo_fields} uses two sets of phase-modulation sidebands at 9~MHz and 45~MHz to control the length and alignment of the interferometer cavities~\cite{Izumi_2016}. The reflected RF fields are used to control the positions of the mirrors so that (a) the carrier is resonant in the power recycling cavity (PRC) and arm cavities, (b) the 9~MHz sidebands are resonant in the PRC, and (c) the 45~MHz sidebands transmit through the PRC and are resonant in the SRC. Ideally, the upper and lower sidebands within each pair have the same spatial distribution and amplitude. However, as discussed earlier, differential wavefront distortion upsets this balance and degrades this ideal resonant condition. Locations for phase cameras that could be used to investigate the sideband fields are also shown in Fig.~\ref{Fig:ifo_fields}. In the simplest operating mode, a phase camera would analyze the heterodyne beat of the sampled carrier and a sideband field. An independent frequency-offset reference field could be used to diagnose the carrier and sideband fields individually. Imaging these simultaneously would require additional optical lock-in cameras. Alternatively, the fields could be imaged sequentially with one camera which has a fixed demodulation frequency and its own frequency shifter. The frequency of the reference can then be changed to pick which RF field is demodulated and imaged. The balance of the 9~MHz sideband pair and the mode-matching into the PRC can be analyzed using the \textit{Pick-off camera} and \textit{Reflection camera}, respectively. The balance of the 45~MHz sideband pair could be analyzed using the \textit{Anti-symmetric camera}. Additionally, the differential wavefront distortion leads to 9~MHz sideband fields in the SRC, resulting in a 36~MHz heterodyne beat. The high spatial resolution and sampling speed of the optical lock-in camera could thus be used to measure the spatial distribution and amplitudes of individual sideband fields. These images can then be used to investigate the effect of differential wavefront distortion on the interferometer control, optimize thermal compensation systems, and investigate the effect of any high-spatial-frequency wavefront distortions. Lastly, the field circulating within each arm cavity could be analyzed using the \textit{X and Y transmission cameras}. This will enable imaging of unexpected higher-order mode content in the arm cavities, for example, from parametric instabilities which produce sidebands at $\approx$10--100~kHz around the carrier.
The optical lock-in camera can image these fields and form part of future active control schemes to identify and suppress such instabilities~\cite{PhysRevD.91.092001, Ma_2017}. \section{Test setup}\label{sec:expt} We follow the approach used by~\citet{Goda:04} to demonstrate the operation and sensitivity of the optical lock-in camera. A schematic of the test system is shown in Fig.~\ref{fig:PhaseCamExperiment_Layout}. It consists of two parts: a test field generator that produces a reference and a signal field, and the lock-in camera itself to image them. The test field consists of a large-amplitude TEM$_{00}$ mode and a higher-order mode of a high-finesse ($\approx 4000$) ring cavity that has a free spectral range of 540~MHz. The TEM$_{00}$ mode is produced by phase-locking a Nd:YAG NPRO to a TEM$_{00}$ mode of the ring cavity using the Pound-Drever-Hall technique~\cite{Black:2001} and the electro-optic phase modulator EOM1. Higher-order modes are excited in the cavity by misaligning the incident beam using M1 and M2 and phase-modulating the beam at the cavity offset frequency using EOM2. The odd number of mirrors in the ring cavity breaks the resonance degeneracy between odd- and even-parity optical modes due to the odd-parity modes accumulating an additional $\pi$ phase shift during each round trip \cite{Waldman:2009,Arai:2013}. In our cavity, the TEM$_{30}$ and TEM$_{12}$ Hermite-Gauss modes resonate closest to the TEM$_{00}$ mode, at offset frequencies of 15.7~MHz and 15.3~MHz, respectively. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{PhaseCamDiagram_Future2.pdf} \caption{Schematic of the optical system used to demonstrate the camera. The test field generator shown in the red box is used to produce a beam consisting of a reference and signal field.} \label{fig:PhaseCamExperiment_Layout} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.4\textwidth]{Result_Singleframe_ver3} \caption{Comparison between camera measurements and the predictions from a \textsc{Finesse} simulation. The digitized pixel values are given in units of thousands of digital-numbers (kDN) and plotted using the false-color scale bars.} \label{Fig:Result_singleframe} \end{figure} For the test described here, we chose to drive EOM2 at 15.4~MHz as it enabled the excitation of both modes. The beam emitted by the ring cavity therefore consists of a large-amplitude TEM$_{00}$ reference field with frequency $\omega_r$, and a smaller-amplitude TEM$_{30}$ and TEM$_{12}$ signal field oscillating mostly at the 15.4~MHz-shifted frequency, $\omega_s$. The performance of the camera is affected by the sCMOS properties. A high dynamic range, bit depth, and linearity are crucial as we must subtract images to remove the offset due to the high-power carrier. A high frame rate is also important, both because four frames are needed to produce each intensity and phase image and because it allows averaging of shot noise, provided this does not result in an unacceptable reduction in dynamic range or spatial resolution. In this work we use a Zyla~4.2 sCMOS camera, which has a sensor size of $2048\times2048$ pixels, a dynamic range of 89~dB, a 16-bit readout, a maximum frame rate of 100~fps and a quantum efficiency of 5\% at 1064~nm. The camera window was anti-reflection coated for 1064~nm. The rolling shutter of this camera does not affect the measurement process as the demodulation phases for each pixel are still separated by $\pi/2$.
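Because the heterodyne term is recovered from differences of frames, pixel shot noise sets the residual in the absence of a signal, and this residual is reduced by frame averaging, as quantified in the next section. The scaling can be anticipated with a simple synthetic estimate (the photoelectron numbers below are purely illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 256*256
mean_counts = 3.0e4               # photoelectrons per pixel per frame (illustrative)

def residual_rms(n_ave):
    """RMS of the averaged V0 - Vpi difference when no beat signal is present."""
    diff = np.zeros(n_pixels)
    for _ in range(n_ave):
        diff += rng.poisson(mean_counts, n_pixels) - rng.poisson(mean_counts, n_pixels)
    return np.std(diff/n_ave)

for n in (1, 4, 16, 64):
    print(n, residual_rms(n))     # decreases roughly as 1/sqrt(n)
\end{verbatim}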
\section{Results} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{Noisefloor_Fig_ver6.pdf} \caption{Typical images of (a) a $V_0$ or $V_\pi$ frame, and (b) $\log_{10}(|V_0-V_\pi|)$ for a single pair of images. (c) Shows how the RMS of $|V_0-V_\pi|$ decreases with averaging. (d, e) Maps of the magnitude of the heterodyne beat for $N_\mathrm{ave}=1$ and $N_\mathrm{ave}=20$. (g, h) Maps of the phase of the heterodyne beat for $N_\mathrm{ave}=1$ and $N_\mathrm{ave}=20$. Images (e) and (h) were taken with $2\times2$ pixel binning. (f) Plot of the magnitude variation along the center of (d) and (e). (i) Plot of the phase variation along the center of (g) and (h).} \label{Fig:Noisefloor} \end{figure*} Typical $\mathbf{I}$ and $\mathbf{Q}$ images and the result of a numerical simulation of the test-field generator using \textsc{Finesse}\cite{Finesse} are shown in Fig.~\ref{Fig:Result_singleframe}. In this case, the TEM$_{30}$ mode is apparent in the $\mathbf{Q}$ demodulation while the TEM$_{12}$ mode occurs mostly in the $\mathbf{I}$ demodulation. Only the two central maxima of the TEM$_{30}$ mode are observed in this demonstration as the amplitude of the TEM$_{00}$ reference field is much smaller at the location of the outer maxima. The \textsc{Finesse} simulation used plausible misalignments and included shot noise to reproduce the outputs of the optical system. For the simulation shown in Fig.~\ref{Fig:Result_singleframe}, the ratio of the power in the higher-order modes to that in the TEM$_{00}$ was 14\% for the TEM$_{30}$ and 8\% for the TEM$_{12}$ mode; thus the magnitude is dominated by the TEM$_{30}$ mode but the phase shows some influence of the weaker TEM$_{12}$ mode, which degrades the spatial resolution we are able to demonstrate below. The sensitivity of the optical lock-in camera was investigated by removing the 15.4~MHz modulation from EOM2 and recording frames with the demodulation phase alternating between 0 and $\pi$. An image typical of individual $V_0$ and $V_\pi$ frames is shown in Fig.~\ref{Fig:Noisefloor}(a). The magnitude of a typical $V_0-V_\pi$ image is shown in Fig.~\ref{Fig:Noisefloor}(b). The RMS average of the residual values can be decreased by averaging multiple $V_0-V_\pi$ pairs as shown in Fig.~\ref{Fig:Noisefloor}(c). It is also apparent from Fig.~\ref{Fig:Noisefloor}(c) that the decrease in the RMS is $\propto 1/\sqrt{N_\text{ave}}$, where $N_\text{ave}$ is the number of pairs in the average, thereby showing that the residuals in Fig.~\ref{Fig:Noisefloor}(b) are due to pixel shot noise. The improvement in sensitivity due to averaging was demonstrated by reinstating the 15.4~MHz modulation of EOM2 and recording twenty frames at each of the four demodulation phases. The magnitude and phase of the beat with $N_\text{ave}=1$ and $N_\text{ave}=20$ are shown in Fig.~\ref{Fig:Noisefloor}(d) and (e), and (g) and (h), respectively. Averaging over 20 frames improves the signal-to-noise ratio in the maps as seen in Fig.~\ref{Fig:Noisefloor}(f) and (i). In addition to the averaging, pixel binning can also be employed for further SNR improvements without sacrificing speed---as was done for the $N_\text{ave}=20$ cases above, where $2\times2$ binning was used. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{ModeSeparation_PhaseSweep_4.pdf} \caption{The measured and simulated demodulated signal mode content. $\phi=0^\circ, 85^\circ, 135^\circ$ are shown on the left with the corresponding simulation showing the individual modes.
The data and model have been scale normalized.} \label{Fig:PhaseCam_sweep} \end{figure*} The minimum detectable signal power can be estimated from the ratio of the digital number~(DN) noise on the central peaks in Fig.~\ref{Fig:Noisefloor}(g), approximately 0.1~kDN, to the DN of the reference field in Fig.~\ref{Fig:Noisefloor}(a), approximately 60~kDN: since $2E_sE_r/E_r^2 \approx 0.1/60$, the signal power $(E_s/E_r)^2$ is $\approx 62$~dB below the power in the reference field, a 12~dB improvement on that reported by~\citet{Goda:04}. The relatively poor signal-to-noise associated with the outer maxima of the TEM$_{30}$ signal field is due to the small diameter of the TEM$_{00}$ reference field in the test system. It could be improved by using a larger-diameter reference field that is frequency-offset locked to the signal field, or by using a liquid crystal attenuator or spatial light modulator~\cite{Nayar2003,ZHONGDONG201431,MANNAMI2007359}. To analyze the output of phase cameras, it will be important to extract the relative phase of the higher-order modes in a beam. Fig.~\ref{Fig:PhaseCam_sweep} shows how the modal content extracted from the in-phase and quadrature images varies with demodulation phase. We can see that the TEM$_{12}$ mode is out-of-phase with the carrier at 85$^\circ$ and the TEM$_{30}$ at 135$^\circ$---this phase relationship agrees well with that predicted by the \textsc{Finesse} model. \section{Conclusion} In this work we have introduced a new type of phase camera, the optical lock-in camera, and demonstrated its ability to produce high-spatial-resolution maps of the phase and intensity of a coherent light field. This is achieved with a higher acquisition rate and resolution than previous phase camera implementations. The camera is also more compact and does not rely on any mechanically moving parts, thus reducing scattered light and enabling operation during scientific observations in gravitational wave interferometers. The phase and intensity of a specific frequency component of a beam are imaged by creating a heterodyne beat with a reference field and synchronously amplitude modulating it. The key element is the Pockels cell, which acts as a fast optical switch to provide the amplitude modulation. By switching over the entire field optically, rather than electronically, and imaging with an sCMOS array, each pixel can behave as an optical lock-in amplifier. The results of our proof-of-principle measurements are in excellent agreement with the predictions of a theoretical \textsc{Finesse} model of our test system. We also demonstrate that the sensitivity is limited purely by shot noise and can be improved by simple averaging, resulting in a noise floor of $-62$~dBc from data recorded in 2~s. The performance can be easily improved by using faster or more sensitive cameras, such as InGaAs arrays which can achieve $>100$~Hz frame rates, or by sacrificing spatial resolution for faster acquisition rates on dense sCMOS arrays, by region-of-interest extraction or pixel binning. Various applications of this camera in advanced gravitational wave detectors have been highlighted. The additional information provided by these cameras should enable better diagnostics of high-spatial-frequency effects within an interferometer. This will provide a new tool for improving both their duty cycle and sensitivity. This will be particularly important for the thermal compensation systems as ever-increasing stored optical power is used in current and future generations of detectors.
These cameras also offer the ability to image physical processes such as parametric instabilities, providing a new method to monitor them or to act as a sensor in an active suppression scheme. This project was funded by Australian Research Council grants DP150103359 and CE170100004. \bibliographystyle{unsrtnat}
\section{Introduction}\label{Intro} Most types of binary stars observed today in a long-term interacting phase of their evolution --- such as cataclysmic variables, X-ray binaries and contact binaries --- and other types that are no longer actively transferring mass, are survivors of a dynamical phase of evolution in a more violent past. Such rapid evolutionary phases accompany the onset of mass transfer that occurs when angular momentum losses or stellar evolutionary processes bring one component of a detached system into contact with its Roche lobe. It is important to investigate the onset and stability of the ensuing mass transfer in order to understand the origin and the evolutionary links between the various types of binary systems that we observe today. The stability of a given binary system upon contact depends on the system's mass ratio, the structure of the donor star, and how mass and angular momentum are redistributed in the binary during the ensuing mass-transfer event. By making reasonable assumptions that encapsulate the essential physics while simplifying the governing equations, analytical treatments can be devised that provide insight not only into the stability, but also into the long-term evolutionary behavior of mass-transferring, semi-detached binaries \citep{SoPh97, HaWe99, GPF}. But because rapid mass-transfer events, in particular, are intrinsically complex --- involving, for example, nonlinear supersonic flows and a gravitational field that changes in a nontrivial way in response to the dynamical exchange of matter between the stars --- the rapid phases through which many binaries evolve are not likely to be well understood until they are modeled hydrodynamically with the inclusion of as much physics as possible. In this paper, we demonstrate how large-scale numerical, hydrodynamic simulations can be employed to better understand the stability and the long-term evolutionary behavior of binaries as they undergo rapid phases of mass transfer. Note that in this paper the term ``dynamical'' is used to mean processes that result in significant changes in the binary parameters and mass-transfer rates in a few orbital periods. Among the several possible mechanisms driving binary evolution, the emission of gravitational waves stands out because close binaries inevitably lose orbital energy and angular momentum through it. Double degenerate white dwarfs (DWDs) are of particular interest since their expected numbers and rates of emission of gravitational waves are such that they are likely to constitute an important source of background noise for the LISA mission \citep{HiBe00}. Details of the time-evolutionary behavior of the gravitational-wave signal that is emitted from any given system depend on the details of mass and angular momentum flow in the binary. Reliable estimates for the rates of emission and the time-frequency characteristics of the emitted signal (the ``template'') depend on having a good understanding of the above processes. While some progress can be made with analytical prescriptions, theoretically constructed templates are not likely to be useful for the extraction from the background of signals from individual sources until detailed hydrodynamical models can accurately simulate dynamical mass-transfer events in DWDs. Furthermore, it has been realized recently that many of the theoretically predicted and abundant detached DWDs that are driven to contact in a Hubble time would start mass transfer in a direct impact mode \citep{Maet04}.
In this mode, the accretion stream hits the accretor's surface and dissipates without forming an accretion disk. Therefore the orbital angular momentum carried by the stream serves to spin up the accretor and is generally not returned by tides to the orbit. One would like to study these processes without the approximations usually adopted for mathematical convenience. Since relatively little is known about these direct impact binaries, it is also of interest to investigate the structure and properties of these flows and explore possible observational signatures. The above considerations have motivated us to undertake a long-term program that utilizes detailed numerical hydrodynamic simulations to model dynamical phases of mass transfer that can arise when binaries of different types reach a semi-detached state. We have developed a hydrodynamical code that is capable of following such binaries through more than 30 orbits while conserving mass and angular momentum to a high level of precision. As a first step, we studied the evolution of detached polytropic binaries with components of equal and unequal mass (Motl, Tohline \& Frank 2002, hereafter MTF). In the present paper we describe some important improvements that have been made to the code described by MTF and report on our first results of simulations that follow dynamical mass-transfer events in a fully self-consistent manner. We follow the evolution of two different polytropic binary systems whose properties were selected to illustrate the capabilities of our code and to provide some overlap with earlier, related studies. In particular, one of our models has been chosen specifically for comparison with a smooth particle hydrodynamics (SPH) simulation that was presented a decade ago by \citet{RS95}. To our knowledge, this work has not been repeated with an Eulerian code in the intervening years. Much of the related numerical simulation work that has preceded ours (see \S\ref{PreviousSimulations} for elaboration) has concentrated on stars having relatively stiff equations of state and on systems that encounter and evolve through a tidal instability. The hydrodynamics of the coalescence, including brief phases of mass-transfer, has been modeled when necessary as a prelude to an almost inevitable merger. Our focus is, instead, on binaries whose components obey a relatively soft equation of state. Our objective is to investigate in more detail the hydrodynamics of mass transfer, the structures arising from this transfer, and the role mass transfer plays during the dynamical phase, sometimes leading the binary closer to a tidal instability and merger, but sometimes saving it from merger and leading it to a slower evolutionary phase. \section{Theoretical Background}\label{secBackground} Our present investigation focuses on the dynamical stability of semi-detached, polytropic binary star systems in circular (or very nearly circular) orbits. A polytrope is a star whose equation of state is governed by the relation, \begin{eqnarray} p &=& K\rho^{1+1/n} \, , \end{eqnarray} where $p$ is pressure, $\rho$ is mass density, $K$ is a constant that defines the specific entropy of the gas, and $n$ is the so-called polytropic index that specifies the degree of compressibility of the gas. 
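For reference, the structure of such a polytrope follows from the Lane--Emden equation; the short numerical sketch below (an illustration only, assuming the standard \texttt{scipy} integrator and using the textbook reduction of the Lane--Emden solution to a mass--radius relation) recovers the $n=3/2$ scaling and proportionality constant quoted in Eq.~(\ref{PolytropicM_R}) below:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

n = 1.5                                        # polytropic index

def lane_emden(xi, y):
    theta, dtheta = y
    return [dtheta, -max(theta, 0.0)**n - 2.0*dtheta/xi]

# integrate outward from a small offset to avoid the coordinate singularity at xi = 0
sol = solve_ivp(lane_emden, [1e-6, 10.0], [1.0, 0.0], rtol=1e-10, atol=1e-12,
                dense_output=True, events=lambda xi, y: y[0])
xi1 = sol.t_events[0][0]                       # first zero of theta:     ~3.654
omega = -xi1**2*sol.sol(xi1)[1]                # -xi^2 dtheta/dxi there:  ~2.714

# textbook reduction: M = k * R**(-3) for n = 3/2, with k in units of (K/G)**3
k = 4.0*np.pi*omega*xi1**3*(5.0/(8.0*np.pi))**3
print(xi1, omega, k)                           # k comes out close to 13.1
\end{verbatim}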
Each initial configuration is uniquely defined by specifying: the mass of each star, $M_{\rm d}$ (the donor) and $M_{\rm a}$ (the accretor); the orbital separation $a$ ({\it i.e.}, the distance between the centers of mass of the two stars); the polytropic index $n$; and each star's polytropic constant, $K_{\rm d}$ and $K_{\rm a}$. Throughout this presentation, we also will frequently refer to the system mass ratio, \begin{eqnarray} q \equiv \frac{M_{\rm d}}{M_{\rm a}} \, . \label{massRatioDefined} \end{eqnarray} Although our tools are capable of evolving binaries with any polytropic index and even more general equations of state, here we will only be considering polytropic binaries in which $n=3/2$. This particular choice is of interest because key dynamical properties of mass-transferring binary systems that contain low-mass main-sequence (MS) stars or white dwarfs (WD) can be realistically modeled using an $n=3/2$ polytropic equation of state, if an appropriate choice is made for the polytropic constants of the two binary components. For spherically symmetric polytropic stars, the radius of the star $R$ is uniquely determined once the three parameters $n$, $M$, and $K$ have been specified (cf., Chandrasekhar 1958). For a given polytropic index, the star's mass-radius relationship is uniquely defined as well; specifically, \begin{eqnarray}\label{PolytropicM_R} M &=& k_n R^{(3-n)/(1-n)} \, , \end{eqnarray} where the proportionality constant $k_n$ depends on $K$ and the gravitational constant $G$ through an expression that is determined from a solution of the relevant Lane-Emden equation (cf., Chandrasekhar 1958), for example, $k_{3/2} = 13.1 [K/G]^{3}$. In close binaries, both stars generally are rotationally flattened and tidally distorted. Hence, their geometric shape cannot be accurately characterized by a single radius. Nevertheless, an ``effective'' radius $R_{\rm d}$ and $R_{\rm a}$ of the donor and accretor, respectively, can be defined as the radius of a sphere having a volume $V_{\rm d}$ or $V_{\rm a}$ that is filled by each distorted star. The proportionality constant that relates the mass to the effective radius is different from the value of $k_n$ that holds for spherical polytropes, but the power-law in the mass-radius relationship is approximately the same. For example, if the structure of the donor is well described by a polytrope of index $n=3/2$, the donor's effective radius increases as it loses mass according to the relation, \begin{eqnarray} R_{\rm d} \propto M_{\rm d}^{-1/3} \, . \label{MassRadiusRelationship} \end{eqnarray} \subsection{Concepts derived from the point-mass approximation} For reference we summarize here, using the notation introduced in this paper, some well-known results concerning mass transfer in binaries. For more details, the interested reader may refer to reviews by \citet{Pac71, RaJoWe82, HWe87, VeRa88, Ki88, SoPh97, HaWe99, FKR}. If, for the moment, we assume that the stars are point masses in a circular orbit, then three parameters ($M_{\rm d}$, $M_{\rm a}$, and $a$) are sufficient to uniquely define the binary system's orbital frequency, \begin{eqnarray} \Omega &=& \biggl[\frac{G(M_{\rm a}+M_{\rm d})}{a^3}\biggr]^{1/2} \, , \end{eqnarray} and its total, purely orbital, angular momentum, \begin{eqnarray} J_\mathrm{tot}=J_\mathrm{orb} &=& \frac{M_{\rm d} M_{\rm a}}{M_{\rm a}+M_{\rm d}}~ a^2\Omega = M_{\rm d} M_{\rm a} \biggl(\frac{G a}{M_{\rm a}+M_{\rm d}}\biggr)^{1/2} \, . 
\label{jorb} \end{eqnarray} For a given total mass, $M_\mathrm{tot} = (M_a + M_d)$, and angular momentum, $J_\mathrm{tot}$, then, it is easy to show that the binary system will have its minimum separation when $M_{\rm d} = M_{\rm a}$, that is, when $q=1$. More generally, in a binary whose orbital evolution is driven by systemic angular momentum losses $(\dot{J})_\mathrm{sys}$ such as gravitational radiation, the separation will evolve according to \begin{equation} \frac{\dot a}{2a} = \left(\frac{\dot J}{J_{\rm orb}}\right)_{\rm sys} - \frac{\dot M_{\rm d}}{M_{\rm d}}\left(1-q\right)\, , \label{adotpoint} \end{equation} where the dots indicate differentiation with respect to time, and we have assumed $\dot M_{\rm a}= -\dot M_{\rm d}$, so that mass is conserved. For a point-mass binary, the critical Roche surface can be defined analytically in implicit form ({\it e.g.}, Frank et al. 2002). The effective radius $R_L$ of the Roche lobe around the donor is also well-defined. As shown by \citet{Eggleton83}, the ratio $R_L/a$ is only a function of the mass ratio, $q$, and it is fairly accurately given by the approximate expression, \begin{eqnarray} \frac{R_L}{a} \approx \frac{0.49 q^{2/3}}{0.6q^{2/3} + \ln(1+q^{1/3})} \, . \label{Eggleton} \end{eqnarray} A simpler, but somewhat cruder, approximation, correct to within 6\% in the range $0<q<4$, is due to Paczy\'nski (1971): \begin{eqnarray} \frac{R_L}{a} \approx 0.4622 \left(\frac{q}{1+q}\right)^{1/3} \, . \label{Pacynski} \end{eqnarray} With these ideas in mind, if angular momentum and mass are conserved during a mass-transfer event (fully conservative mass transfer), the binary separation is smallest when $q=1$, and the Roche lobe is smallest when $q\approx 5/6$. When mass transfer occurs in binaries where the donor may be approximated as an $n = 3/2$ polytrope, the donor expands when it loses mass according to the radius-mass relationship given by expression (\ref{MassRadiusRelationship}). The rate of mass transfer in a semi-detached binary in which the donor slightly overfills its Roche lobe depends mainly on the depth of contact, which is proportional to $R_{\rm d}-R_L$. Clearly, this rate increases or decreases according to whether the depth of contact itself increases or decreases. With the above assumptions, it is easy to show that \begin{eqnarray} \frac{\dot{R}_{\rm d}-\dot{R}_L}{R_{\rm d}} \approx \frac{\dot{R}_{\rm d}}{R_{\rm d}}-\frac{\dot{R}_L}{R_L}= \frac{-2\dot{M}_{\rm d}}{M_{\rm d}}(q-{2\over3}) \, . \label{LinearStability} \end{eqnarray} Since $\dot{M}_{\rm d}$ is negative, the depth of contact increases upon fully conservative mass transfer if $q>q_{\mathrm{stable}}=2/3$, or decreases if $q<q_{\mathrm{stable}}=2/3$. For a more complete discussion of the stability of mass transfer in double-degenerate binaries, see \citet{HaWe99} and references cited therein. It therefore proves instructive to divide our discussion of stability into two broad regimes: $q>2/3$ and $q<2/3$.
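These point-mass statements are straightforward to verify numerically. The short sketch below (an illustration only; units and the donor-radius normalization are arbitrary, and the donor is taken to follow the $n=3/2$ mass--radius relation of expression \ref{MassRadiusRelationship}) steps along a fully conservative sequence at fixed total mass and orbital angular momentum, recovering the minimum separation at $q=1$, the minimum Roche-lobe radius near $q\approx 5/6$, and the change of sign of the depth-of-contact growth at $q=2/3$ when the approximation of Eq.~(\ref{Pacynski}) is used:
\begin{verbatim}
import numpy as np

# Fully conservative point-mass evolution: fixed M_tot and J_orb (G = M_tot = J = 1),
# with the donor obeying the n = 3/2 relation R_d ~ M_d**(-1/3).
M_d = np.linspace(0.05, 0.95, 20001)
M_a = 1.0 - M_d
q = M_d/M_a

a   = 1.0/(M_d*M_a)**2                          # from J = M_d M_a (a/M_tot)**0.5
f_L = 0.4622*(q/(1.0 + q))**(1.0/3.0)           # Paczynski approximation for R_L/a
R_L = a*f_L                                     # donor Roche-lobe radius
R_d = M_d**(-1.0/3.0)                           # donor radius (arbitrary constant)

print("separation minimised at q =", q[np.argmin(a)])     # ~1
print("Roche lobe minimised at q =", q[np.argmin(R_L)])   # ~5/6

# contact deepens as M_d is lost wherever d ln(R_d/R_L)/dM_d < 0, i.e. for q > 2/3
s = np.gradient(np.log(R_d/R_L), M_d)
print("stability boundary at q =", q[np.argmin(np.abs(s))])  # ~2/3
\end{verbatim}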
\subsection{Expectations}\label{SecExpectations} \subsubsection{$q>2/3$} For $q > 2/3$, the donor expands more rapidly than the Roche lobe, which may actually contract if $q>5/6$. The binary separation itself expands if $q<1$ or contracts if $q>1$. Thus, if the donor is initially more massive than the accretor, the orbital separation will decrease with time and the effective radius of the Roche lobe will decrease as well. Since the donor expands upon mass loss, mass transfer will clearly be unstable as the Roche lobe encroaches on the donor, even in the absence of driving. On the other hand, for systems with $q_{\mathrm{stable}} < q \leq 5/6$, the donor expands faster upon mass loss than its Roche lobe can expand, and the mass-transfer rate will still grow with time. Of these systems, if $q$ is initially only slightly above $q_{\mathrm{stable}}$, unstable mass transfer may proceed until the mass ratio falls below the stability limit. What happens thereafter depends on whether or not the system falls prey to tidal instabilities. If it survives, the mass transfer decays in the absence of driving or evolves toward stable mass transfer if steady driving is present. Systems with an initial mass ratio significantly higher than $q_\mathrm{stable}$ are likely to be unstable and to merge through a common envelope phase, or the donor may be tidally disrupted. \subsubsection{$q<2/3$} When $q < 2/3$, the donor is initially less massive than the accretor. Hence, in the point-mass approximation, as mass is transferred from the donor to the accretor, the orbital separation will increase with time, and $R_L$ will increase as well since $q<5/6$. This will tend to stabilize the system because the Roche lobe expands away from the original surface of the donor. Even as the donor expands, in the absence of driving the depth of contact will decrease, and so will the mass-transfer rate. Thus, in the absence of driving, the mass transfer would ultimately decay to zero, while it would tend to a stable value if driving is present. Our discussion above assumes that the total angular momentum is purely orbital and that mass transfer is fully conservative. This is adequate provided that the accretion stream has sufficient angular momentum to form a disk around the accretor. In that case, tidal torques on the disk will return the angular momentum advected by the stream back to the orbit. As discussed further below, even if mass transfer is fully conservative, when the mass transfer stream directly impacts the accretor and no disk forms, the advected angular momentum spins up the accretor at the expense of the orbital angular momentum, reducing $q_{\mathrm{stable}}$ to values below $2/3$. \subsubsection{Direct Impact and Other Finite-Size Effects}\label{SecDirectImpact} When the finite size of the stars is taken into account, the total angular momentum must be written as the sum of the orbital angular momentum, $J_{\rm orb}$ as given by Eq.~(\ref{jorb}), and the spin angular momenta of the donor $J_{\rm d}$ and the accretor $J_{\rm a}$. Angular momentum may be exchanged between the orbit and the binary components by advection and by tides. We cite here without proof an equation that describes in an approximate fashion the rate of change of the binary separation under the effects of a systemic loss, direct impact and possible spin evolution due to tidal interactions (see Marsh et al. 2004 or Gokhale et al. 2005 for details): \begin{equation} \frac{\dot a}{2a} = \left(\frac{\dot J}{J_{\rm orb}}\right)_{\rm sys}- \left(\frac{\dot J_{\rm a} + \dot J_{\rm d}}{J_{\rm orb}}\right)_{\rm tides} - \frac{\dot M_{\rm d}}{M_{\rm d}}\left(1-q-\sqrt{(1+q)r_h}\right)\, .
\label{adot} \end{equation} Following \citet{Maet04}, the specific angular momentum that is carried by the stream has been expressed in terms of the circularization radius $r_h \equiv R_{\rm circ}/a$, and the term on the right-hand-side of this equation that contains $r_h$ accounts for the rate at which angular momentum is transferred by the stream to the accretor. The second term on the right-hand-side represents the effects of purely tidal changes in the spin angular momenta. As has been emphasized by \citet{MaSt02} and \citet{Maet04}, the consequential loss of orbital angular momentum that accompanies direct impact accretion (compare the last term on the right-hand-sides of Eqs. \ref{adotpoint} and \ref{adot}) acts to destabilize mass transfer. Recent semi-analytic work suggests that this effect alone can reduce the stability limit from $2/3$ to a value $q_\mathrm{stable} \approx 0.22$ \citep{GPF}. However, at the onset of mass transfer in a DWD --- especially if, initially, $q \ge q_{\mathrm{stable}}$ --- the mass accretion rate $\dot{M}_a$ may well exceed the critical rate that yields the Eddington luminosity and the excess mass may be blown away. This effect will act to slightly increase $q_{\mathrm{stable}}$ \citep{HaWe99}. In this work, we relax the assumptions required to treat the mass-transfer semi-analytically and instead investigate the stability of $n=3/2$ polytropic binaries through direct hydrodynamical simulations. Thus we are able to follow the internal flow of mass and angular momentum via the stream and tides without any of the assumptions usually adopted for mathematical convenience. It should be kept in mind, however, that we do not include in this investigation a self-consistent treatment of thermal relaxation, and the effects of radiative transfer are ignored. \subsection{Previous, Related, Hydrodynamical Simulations} \label{PreviousSimulations} The equilibrium and stability of polytropic binary sequences in nearly circular orbits, including the synchronous and irrotational cases, has been comprehensively discussed in a series of papers by Lai, Rasio \& Shapiro \citep{LRS1, LRS2, LRS3, LRS4, LRS5} for systems having various polytropic indexes and mass ratios. Their results were confirmed and extended in a series of papers by Rasio \& Shapiro (1992, 1994, 1995 --- henceforth RS92, RS94, RS95 respectively). Using a relaxation method they constructed synchronously rotating equilibrium binaries for various polytropic indexes, mass ratios and initial separations, and followed their hydrodynamic evolution using smoothed-particle hydrodynamics (SPH). In most cases the evolution led to coalescence of the binary, although in one case that was meant to represent binary neutron stars with a stiff equation of state ($n=1/2$, $q= q_\mathrm{RS} = 0.5$) the model binary returned to a new stable configuration after a phase of mass transfer (RS94). Of particular interest to us in the context of this paper are the results of RS95 who investigated the equilibrium and stability properties of binaries with a variety of initial mass ratios\footnote{In RS95, the parameter $q$ was defined as the ratio of the less massive star to the more massive star so that $q\leq 1$ in all cases. 
In order to avoid confusion, we will use $q_\mathrm{RS}$ when referring to the mass-ratios quoted in RS95 then, in order to be consistent with the definition given here in Eq.~(\ref{massRatioDefined}), we will set $q = 1/q_\mathrm{RS}$ when the donor is initially more massive than the accretor.} and with polytropic index $n = 3/2$. For equal-mass binaries, both MS and WD binaries were represented in RS95 by setting the polytropic constants of the two components to be equal, that is, $K_{\rm d} = K_{\rm a}$. A sequence of equilibrium configurations was constructed for a range of separations, $a$, specified by the parameter $r \equiv a/R_{1}$, where $R_{1}$ was the unperturbed radius of the more massive star (primary). These $q= q_\mathrm{RS} = 1$, equilibrium binaries were then evolved in time using the SPH method and systems with $r\lesssim 2.45$ were found to suffer a dynamical instability. For MS binaries with $q_\mathrm{RS} < 1$ (polytropic constants were adjusted so as to obtain the MS mass-radius relation) it is the more massive star that overflows its Roche lobe first so, consistent with the expectations discussed above, RS95 found that the resulting mass transfer tended to be unstable. Systems with $q_\mathrm{RS} \lesssim 0.4$ (that is, $q = 1/q_\mathrm{RS} \gtrsim 2.5$) were found by RS95 to be secularly unstable even before the primary star filled its Roche lobe, while a dynamical instability was encountered before the Roche limit in binaries with $q_\mathrm{RS} \lesssim 0.25$ (that is, $q = 1/q_\mathrm{RS} \gtrsim 4$). Finally, $q < 1$ systems with WD components (the polytropic constants are set to be equal in order to realize the appropriate WD mass-radius relationship, given here by expression \ref{MassRadiusRelationship}) were found by RS95 to remain secularly and dynamically stable until $r$ was small enough for the donor to overflow its Roche lobe. The WD binary, $q = 0.5$, was found to be unstable to mass transfer and the SPH simulation led to tidal disruption of the donor star and final merger after five orbital periods. \subsection{In the Context of this Paper} In \S\S\ref{sec_0.843_UB} and \ref{sec_mdot_results} of this paper, we present the results of nine separate nonlinear hydrodynamical simulations that we have conducted in an effort to better understand mass-transfer instabilities in close binary systems and, at the same time, to better understand the capabilities and limitations of our numerical tools. The initial models for these simulations were all unequal-mass, synchronously rotating, $n=3/2$ polytropic binaries in circular orbit. As is illustrated in Figure \ref{equipotentials}, we have examined systems having three different initial mass ratios: $q_0=0.843$, $q_0 = 1.323$, and $q_0= 0.500$. The first of these ($q_0=0.843$) was designed to provide a comparison with the benchmark ``model UB'' evolution that was discussed by MTF. The initial model with $q_0=1.323$ (that is, $q_\mathrm{RS} = 1/q \approx 0.76$) was designed to represent a MS binary in which the more massive star makes first contact with the critical Roche surface. The model with $q_0 = q_\mathrm{RS} = 0.5$ was designed to represent a low-mass, DWD binary in which the less massive star has the larger radius and therefore makes first contact with its Roche lobe. Results from these new simulations will be compared with the expectations that have been drawn from earlier analytical and semi-analytical investigations, as well as with the nonlinear simulations presented by RS95. 
A particularly detailed comparison will be made of the $q_0=0.5$ WD binary system since the parameters defining the initial state of our model were chosen to match as closely as possible the initial state that was investigated by RS95. \section{Overview of the Computational Tools}\label{code_review} The computational tools used in our present study are the same as those used in MTF except for two modifications, which we describe in detail in \S\ref{sec_corr_code}, below. In summary, we employ: A self-consistent-field (SCF) code to construct each initial binary model; and a three-dimensional, finite-difference hydrodynamics code to evolve each initial model forward in time. Both codes utilize a cylindrical computational grid with $R, \phi$ and $z$ denoting the radial, azimuthal and vertical coordinates, respectively. As stated earlier, our present study is confined to initial states in which the fluid in both stars obeys an $n=3/2$ polytropic equation of state, but in general $K_{\rm d} \neq K_{\rm a}$. During the hydrodynamical evolutions, the state variables of every fluid element vary in such a way that they follow adiabats having a ratio of specific heats, $\gamma = 1+1/n = 5/3$. Our SCF method is based on the iterative technique developed by Hachisu (1986; see also Hachisu et al. 1986). In the past, this technique has been used to construct initial conditions for hydrodynamical studies of the relative stability of equal-mass binary systems \citep{New97, SWC}. It also has been used to construct a wide variety of unequal-mass, detached and semi-detached binaries (MTF). Here we use the SCF code to build models of synchronously rotating, unequal-mass binaries in a frame that is corotating with the binary's initial orbital frequency $\Omega_0$, so the two stars are stationary in this frame. The axis of rotation is taken to be parallel to the $z$-axis of the coordinate grid. As input to the code, we must specify: three boundary points on the stars where the mass-density, $\rho$, must vanish; the maximum density $\rho^\mathrm{max}$ of each star; and an initial guess for the density distribution, $\rho(R,\phi,z)$. The three boundary points lie along the line that joins the centers of the two stars; they correspond to the inner and outer edges of one star (usually the accretor), and the inner edge for the companion star. The SCF code then iteratively solves for the initial equilibrium configuration through the following steps. For the given density distribution, the resulting Newtonian gravitational potential, $\Phi(R,\phi,z)$, is calculated by solving Poisson's equation (\ref{PoissonEq}). Using the value of $\Phi$ at the three boundary points, the code determines the angular frequency of the binary orbit and two integration constants. With these data, the enthalpy $H(R,\phi,z)$ is computed within each star. Using $H$ and the prescribed polytropic equation of state, an improved ``guess'' for the density distribution is calculated. The iteration repeats until a prescribed convergence criterion is met. The hydrodynamical code has been designed to solve the equations that govern the flow of inviscid, compressible, self-gravitating fluids in a frame of reference that is rotating with an arbitrarily chosen angular velocity (including zero, in which case it would be an inertial reference frame). Throughout this paper all the simulations have been conducted in a frame that is rotating with the binary system's {\em initial} orbital frequency, $\Omega_0$, as derived from the SCF code. 
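To make the structure of the SCF iteration described above more concrete, the following stripped-down, spherically symmetric analogue (a one-dimensional toy written in Python, not the actual cylindrical-grid implementation) carries out the same guess--solve--update cycle for a single $n=3/2$ polytrope with a prescribed maximum density and boundary point:
\begin{verbatim}
import numpy as np

G, n = 1.0, 1.5                       # gravitational constant and polytropic index
rho_max = 1.0                         # prescribed maximum density
r = np.linspace(0.0, 1.2, 2001)       # radial grid
r_b = 1.0                             # prescribed boundary point (stellar surface)
ib = np.searchsorted(r, r_b)

def potential(rho):
    """Spherical Poisson solve:
    Phi(r) = -4 pi G [ (1/r) int_0^r rho x^2 dx + int_r^R rho x dx ]."""
    dm = 0.5*(rho[1:]*r[1:]**2 + rho[:-1]*r[:-1]**2)*np.diff(r)
    ds = 0.5*(rho[1:]*r[1:] + rho[:-1]*r[:-1])*np.diff(r)
    m_in = np.concatenate(([0.0], np.cumsum(dm)))
    s_cum = np.concatenate(([0.0], np.cumsum(ds)))
    inner = np.divide(m_in, r, out=np.zeros_like(r), where=r > 0)
    return -4.0*np.pi*G*(inner + (s_cum[-1] - s_cum))

rho = np.where(r <= r_b, rho_max*(1.0 - (r/r_b)**2), 0.0)   # initial guess
for iteration in range(200):
    Phi = potential(rho)                                    # gravitational potential
    H = Phi[ib] - Phi                                       # enthalpy from the boundary constant
    rho_new = rho_max*(np.clip(H, 0.0, None)/H[0])**n       # invert the polytropic EOS
    if np.max(np.abs(rho_new - rho)) < 1e-8*rho_max:
        break
    rho = rho_new
\end{verbatim}
In the real two-star problem, the same cycle is carried out on the cylindrical grid, with the orbital frequency and two integration constants determined from the potential at the three prescribed boundary points.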
The primary variables that are evolved forward in time by the hydrodynamical code are the volume-densities of five conserved quantities: the mass density $\rho$, the radial momentum density, $S$, the vertical momentum density, $T$, the angular momentum density, $A$, and an entropy tracer, $\tau$. These quantities are advanced in time via a conservative formulation and an explicit integration of five, first-order hyperbolic partial differential equations (\ref{EulerEqns1})-(\ref{entropyTracerEq}): the three components of Euler's equation, and two continuity equations (one for the mass density and one for the entropy tracer). As described in \S 4 of MTF, the hydrodynamical time loop consists of applying the source, advection and artificial viscosity operators in a sequence that ensures a nearly second-order accurate time-integration. This is in addition to enforcing the boundary conditions and solving Poisson's equation for the gravitational potential so that the fluid is accelerated in a self-consistent Newtonian gravitational field. In mass-transfer systems, flow across the L1 Lagrange point is generally expected to be transonic and, thereafter, the mass-transfer stream can quickly acquire Mach numbers $\gtrsim 10$ as it falls toward the accretor. It is therefore not surprising that in the mass-transfer simulations presented here, relatively strong shocks develop as the accretion stream obliquely impacts the surface of the accretor. With this in mind, it is worth reviewing how shocks are handled in our hydrocode (see MTF for more details). In the vicinity of a shock, our (normally) second-order-accurate advection scheme (using Van Leer monotonic interpolation) is reduced to a first-order-accurate scheme to ensure numerical stability, and artificial viscosity is introduced to mediate the shock, spreading the (ideally, infinitesimally thin) shock front over a small number of computational grid zones. (The extra ``artificial viscosity'' source terms that are introduced into the three components of Euler's equation to accomplish this task are not shown in our \S3.2 summary of the equations, but they are enumerated in MTF.) As a result, momentum is properly conserved across all shocks. Mass is also properly conserved, as the equation of continuity (Eq.~18) remains unchanged in the presence of shocks. Finally, energy will be conserved in an adiabatic shock only if the specific entropy of material increases as it moves through the shock. We have not added source terms to the energy equation in our hydrocode (Eq.~19) to account for this generation of entropy. As a result, material in the accretion stream retains its ``pre-shock'' specific entropy (it remains on precisely the same $\gamma = 5/3$ adiabat as it moves from one side of the shock to the other); and post-shock densities and pressures are somewhat higher than would be expected if energy were conserved and the so-called ``adiabatic shock jump conditions'' \citep{LaLi} were realized. With this implementation of the energy equation, we are effectively assuming that the post-shock gas instantaneously radiates away the ``extra'' heat that should have been generated by the shock. While one might argue that it is unreasonable to expect radiation to cool the gas back down to precisely its pre-shock entropy condition, it also seems unreasonable to expect that material immediately behind a realistic accretion shock will not be subject to some amount of radiative cooling. 
Our handling of the energy equation in the presence of a shock effectively provides a measure of cooling, and does so in a manner that is straightforward to implement numerically and readily reproducible by others who might choose to use our simulations as a benchmark for further work. \subsection{Recent Modifications to the Hydrodynamical Code}\label{sec_corr_code} The task of self-consistently modeling the dynamical evolution of a binary system that is undergoing mass transfer is computationally challenging, as can be understood from the following considerations. The vast majority of material in the hydrodynamical simulation is gravitationally confined within the two stars and is nearly at rest (as viewed from the rotating reference frame). Mild deviations from this state occur in response to the exchange of mass and momentum between the two stars on a timescale that is on the order of the orbital period. At the same time, the dynamics of the mass-accretion stream --- which generally will contain supersonic flows that are confined to a relatively small volume of the computational domain --- must be accurately resolved and may severely limit the size of the time step that is permitted by the explicit integration scheme. The evolutionary code must maintain the approximate force balance within and between both stars to a high degree of precision in order to permit an accurate treatment of the response of the binary to Roche-lobe overflow. In this context, it is important to emphasize that, as is traditional in the astrophysical fluid dynamics community, our hydrodynamics code has been developed around a conservative formulation of the dynamical equations. This, in itself, ensures that the advection operators preserve the integral of conserved densities to machine precision. However, we are solving a more complex problem where the fluid flow is coupled to the gravitational field through Poisson's equation and, in the presence of the related gravitational source terms, the code, in its entirety, is no longer strictly conservative. Two modifications to the algorithm described by MTF have been crucial to the recent success of our Roche-lobe overflow simulations. As is detailed in the following two subsections, both have involved subtle modifications to the source terms in Euler's equation. \subsubsection{Treatment of Pressure Gradients} As has been described by MTF, the inertial-frame source terms in Euler's equation were originally written as the gradient of an effective potential that includes the fluid enthalpy. This formulation of the source terms is consistent with the manner in which initial equilibrium structures are constructed in the SCF code. However, when fluid that is tidally stripped from the donor falls directly onto the surface of the accretor, this formulation produces incorrect pressure gradients if the material from the donor has a different specific entropy from the material in the accretor. Accordingly, we have modified the source terms in Euler's equation to properly account for gradients in the fluid's specific entropy. The hydrocode now explicitly calculates ${\bf \nabla}p$ instead of the product $\rho{\bf \nabla}H$, as was indicated in the MTF code description. Implications of this change are enumerated in later sections of this paper. 
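The distinction matters whenever material with different specific entropies is juxtaposed, as happens when stream material settles onto the surface of the accretor. A simple one-dimensional illustration (with arbitrary numbers) shows that, for two fluids in pressure balance but with different polytropic constants, the enthalpy-gradient form produces a spurious force at the interface while the pressure gradient does not:
\begin{verbatim}
import numpy as np

gamma = 5.0/3.0
x = np.linspace(0.0, 1.0, 401)

# two fluids in pressure balance but with different specific entropy (different K)
K = np.where(x < 0.5, 1.0, 2.0)
p = np.ones_like(x)                     # uniform pressure: the true net force vanishes
rho = (p/K)**(1.0/gamma)                # density jumps at the entropy discontinuity
H = gamma/(gamma - 1.0)*p/rho           # polytropic enthalpy, gamma/(gamma-1) K rho^(gamma-1)

grad_p = np.gradient(p, x)              # pressure-gradient source term: identically zero
rho_grad_H = rho*np.gradient(H, x)      # enthalpy-gradient form: spurious spike at x = 0.5
print(np.max(np.abs(grad_p)), np.max(np.abs(rho_grad_H)))
\end{verbatim}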
\subsubsection{Center-of-Mass Motion} \label{commotion} By following the evolution of an unequal-mass, detached binary system (model ``UB'') through just over five orbits, MTF showed that the code conserves angular momentum to an accuracy $\Delta J_z/J_z \approx 10^{-4}$ per orbit. (See the first row of our Table 3 for a summary of related measurements reported by MTF for model UB.) They also showed that, because linear momentum was not precisely conserved, the center of mass of the binary system slowly wandered away from its initial position (the center of the cylindrical coordinate grid). As Figure 15 of MTF illustrates, by the end of the UB model evolution (just over five orbits), the center of mass of the system had moved a distance $\approx \Delta R/4$ away from the center, where $\Delta R$ was the radial size of one grid zone. As MTF pointed out, this level of momentum conservation is excellent when compared to the results of other groups who have performed simulations of comparable (or even less) complexity using finite-difference hydrodynamical schemes. As we embarked upon our present project to model mass transfer in semi-detached binary systems, we were concerned that even very slow motion of the center of mass away from the center of the computational grid might cause problems in evolutions that were followed through significantly more than five orbits. Most importantly, we were concerned that the surface of one or both stars might run into the outer edge of the computational grid as the center of mass of the system wandered farther and farther from the center of the grid. In an effort to confine the center of mass of the system to a region very close to the center of the coordinate grid, we have added a small ``artificial'' acceleration, \begin{eqnarray} {\bf a}^\mathrm{art} = {\bf e}_R a_R^\mathrm{art}+{\bf e}_\phi a_\phi^\mathrm{art}+{\bf e}_z a_z^\mathrm{art} \end{eqnarray} to the source terms of Euler's equation that is designed to counteract the empirically measured rate at which the center of mass was otherwise being accelerated. As viewed from an inertial reference frame, it is easy to explain how the requisite size and direction of this artificial acceleration vector should be estimated. Simply ``measure'' the size and direction of the residual (and unphysical) center-of-mass acceleration ${\bf [\ddot{r}}_\mathrm{com}]_\mathrm{inertial}$ at any point in time, then set ${\bf a}^\mathrm{art} = - [\ddot{\bf r}_\mathrm{com}]_\mathrm{inertial}$. 
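The bookkeeping behind this correction can be summarized with a short sketch. The Python fragment below (Cartesian components, uniform cells, purely illustrative function names) is schematic only: it estimates the mass-weighted center-of-mass velocity from the evolved momentum densities, differences it in time to obtain the residual acceleration, and returns the uniform counter-acceleration. The production implementation instead evaluates the rotating-frame expression given below (Eq.~\ref{comAcceleration}) on the cylindrical grid, as detailed in Appendix~\ref{comAppendix}.
\begin{verbatim}
import numpy as np

# Schematic center-of-mass correction (Cartesian, inertial-frame form);
# the hydrocode itself evaluates the rotating-frame expression on the
# cylindrical grid (see the Appendix).
def com_velocity(rho, momenta, dV):
    """Mass-weighted mean velocity of the fluid on the grid.
    rho: density array; momenta: tuple of momentum-density arrays;
    dV: volume of one grid cell."""
    total_mass = rho.sum() * dV
    return np.array([m.sum() * dV / total_mass for m in momenta])

def artificial_acceleration(v_com_now, v_com_prev, dt):
    """Uniform acceleration applied to every cell, chosen to oppose the
    empirically measured acceleration of the center of mass."""
    return -(v_com_now - v_com_prev) / dt
\end{verbatim}
A fresh value of this vector is computed once per integration step, and the same vector is then added to the acceleration of every fluid element, exactly as described below.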
Because our simulations have been performed in a frame of reference that is rotating with a constant angular velocity ${\bf \Omega} = {\bf e}_z \Omega_0$, we have used the standard reference frame transformation, \begin{eqnarray} [\ddot{\bf r}]_\mathrm{inertial} &=& [\ddot{\bf r} + 2{\bf \Omega\times \dot{r}} + {\bf \Omega\times}({\bf \Omega\times r})]_\mathrm{rotating} \, , \end{eqnarray} and determined ${\bf a}^\mathrm{art}$ from the expression, \begin{eqnarray} {\bf a}^\mathrm{art} &=& -~[\ddot{\bf r}_\mathrm{com} + 2{\bf \Omega\times \dot{r}}_\mathrm{com} + {\bf \Omega\times}({\bf \Omega\times r}_\mathrm{com})]_\mathrm{rotating} \, \nonumber \\ &=& - ~{\bf e}_R[a_{R,\mathrm{com}} - 2\Omega_0 v_{\phi,\mathrm{com}} - \Omega_0^2 R_\mathrm{com}] -~{\bf e}_\phi [a_{\phi,\mathrm{com}} + 2\Omega_0 v_{R,\mathrm{com}}] -~{\bf e}_z [a_{z,\mathrm{com}}] \, , \label{comAcceleration} \end{eqnarray} where it is understood in the last expression that all components of the center-of-mass position, velocity, and acceleration are measured in the rotating frame. Appendix \ref{comAppendix} details how, in practice, each of the terms in Eq.~(\ref{comAcceleration}) is evaluated in the hydrodynamic code. We emphasize that, at each time step, the same vector ${\bf a}^\mathrm{art}$ was included in the calculation of the acceleration of every fluid element, but a new value of ${\bf a}^\mathrm{art}$ was determined every integration time step. Although this was not a particularly sophisticated way to correct for center-of-mass motions in our simulations, it proved to be quite successful. For example, Figure \ref{Q0.5 xyRcom} shows that the motion of the center of mass of a binary with $q_0=0.5$ was confined to within approximately one radial zone of the center of the grid for over 30 orbits for the longest of the three runs presented. \subsection{Summary of Equations} \label{equations} In contrast to the form of the three components of Euler's equation that was presented in MTF, our present simulations of mass-transferring binary star systems have utilized the following equations: \begin{eqnarray} \frac{\partial S}{\partial t} + \mbox{\boldmath$\nabla$} \cdot (S \mbox{\boldmath$\upsilon$} ) &=& - \frac{\partial p}{\partial R} - \rho \frac{\partial}{\partial R} \left[ \Phi - \frac{1}{2} \Omega_0^{2} R^{2} \right] + \frac{A^{2}}{\rho R^{3}} + 2 \Omega_0 \frac{A}{R} \nonumber \\ && -\rho [a_{R,\mathrm{com}} -2\Omega_0 v_{\phi,\mathrm{com}} -\Omega_0^2 R_\mathrm{com}] \, , \label{EulerEqns1} \\ \frac{\partial T}{\partial t} + \mbox{\boldmath$\nabla$} \cdot (T \mbox{\boldmath$\upsilon$} ) &=& - \frac{\partial p}{\partial z} - \rho \frac{\partial \Phi}{\partial z} -\rho[a_{z,\mathrm{com}}] \, , \label{EulerEqns2} \\ \frac{\partial A}{\partial t} + \mbox{\boldmath$\nabla$} \cdot (A \mbox{\boldmath$\upsilon$} ) &=& - \frac{\partial p}{\partial \phi} - \rho \frac{\partial \Phi}{\partial \phi} - 2 \Omega_0 S R - \rho R[a_{\phi,\mathrm{com}} +2\Omega_0 v_{R,\mathrm{com}}] \, . \label{EulerEqns3} \end{eqnarray} The statements of mass and entropy conservation, and the Poisson equation remain as presented by MTF, namely, \begin{eqnarray} \frac{\partial \rho}{\partial t} + \mbox{\boldmath$\nabla$} \cdot (\rho \mbox{\boldmath$\upsilon$} ) &=& 0 \, , \label{continuityEq} \\ \frac{\partial \tau}{\partial t} + \mbox{\boldmath$\nabla$} \cdot (\tau \mbox{\boldmath$\upsilon$} ) &=& 0 \, , \label{entropyTracerEq} \\ \nabla^2\Phi &=& 4\pi G \rho \, . 
\label{PoissonEq} \end{eqnarray} \begin{deluxetable}{lcclcc} \tabletypesize{\scriptsize} \tablecaption{Initial Parameters for Model Q0.8\tablenotemark{\dag}} \tablewidth{0pt} \tablehead{ \colhead{System} & \colhead{Initial} & ~ & \colhead{Component} & ~ & ~ \\ \colhead{Parameter} & \colhead{SCF Value} & ~ & \colhead{Parameter} & \colhead{Donor} & \colhead{Accretor} } \startdata $q_0$ & 0.843 & ~~~~~~~~~~ & $M_i$ & 0.0223 & 0.0265 \\ $a_0$ & 0.9707 & ~~~~~~~~~~ & $\rho^\mathrm{max}_i$ & 0.8800 & 1.0000 \\ $\Omega_0$ & 0.2327 & ~~~~~~~~~~ & $K_i$ & 0.0387 & 0.0417 \\ $J_\mathrm{tot}$ & $2.93\times 10^{-3}$ & ~~~~~~~~~~ & $V_i$ & 0.1577 & 0.1634 \\ ${R}_\mathrm{com}$ & $2.28\times 10^{-6}$ & ~~~~~~~~~~ & $V^\mathrm{RL}_i$ & 0.1788 & 0.2273 \\ \enddata \tablenotetext{\dag}{Parameter values are given here in dimensionless polytropic units. To scale these numbers to other (e.g., cgs) units, see the discussion in Appendix~\ref{unitsAppendix}.} \label{table1_0.843} \end{deluxetable} \section{Results of detached binary simulations}\label{sec_0.843_UB} The two hydrocode modifications described in \S\ref{sec_corr_code} were introduced to improve the accuracy and long-term stability of our simulations so that we would be able to follow the dynamical evolution of close binary systems through many orbits. In order to assess the effects of these changes, we have carefully analyzed the evolution of a binary system whose initial structure matched as closely as possible the unequal-mass, detached binary system with $q_0 = 0.843$ that was discussed in detail by MTF. In MTF, this ``benchmark'' configuration was referred to as model ``UB'' (for ``unequal-mass binary''); we will henceforth refer to our closely related initial configuration as model ``Q0.8,'' reflecting the value of the model's initial mass-ratio. For analysis purposes, this same initial model, Q0.8, was evolved through approximately 5.3 orbits {\it four separate times} using slightly different versions of the hydrodynamical code. In what follows we first detail the properties of this initial model, then we describe the results of the four separate evolutions. \begin{deluxetable}{crrrcc} \tabletypesize{\scriptsize} \tablecaption{Computational Grid Parameters} \tablewidth{0pt} \tablehead{ \colhead{Model} & \colhead{$N_R$} & \colhead{$N_\phi$} & \colhead{$N_z$} & \colhead{$R_\mathrm{grid}$} & \colhead{$\Delta R$\tablenotemark{a}} } \startdata Q0.8 & 130 & 256 & 98 & 1.000 & $7.87\times 10^{-3}$ \\ Q1.3 & 162 & 256 & 98 & 1.251 & $7.87\times 10^{-3}$ \\ Q0.5 & 162 & 256 & 98 & 1.251 & $7.87\times 10^{-3}$ \enddata \tablenotetext{a}{In all cases, $\Delta R = R_\mathrm{grid}/(N_R - 3)$; $\Delta z = \Delta R$; and $\Delta\phi = 2\pi/N_\phi$.} \label{ComputationalGrid} \end{deluxetable} Figure \ref{equipotentials}$a$ shows the critical Roche surface (dashed curve) and the equatorial-plane density contours (solid curves) that trace the surface of the two stars in model Q0.8, as constructed by our SCF code. The inner (L1) Lagrange point on the critical Roche surface is also identified. Table \ref{table1_0.843} lists the numerical values of various physical parameters that define this initial model. (Where appropriate, the subscript ``0'' has been used to emphasize that these are initial parameter values derived from the SCF code. All of these physical variables were permitted to vary with time during the course of the hydrodynamic evolutions.) 
These include parameters for the binary system as a whole --- $q_0$, $a_0$, $\Omega_0$, $J_\mathrm{tot}$, and $R_\mathrm{com}$ (as defined by Eq.~\ref{RcomDefined}) --- and parameters that define the structure of the individual polytropic stars --- $M_i$, $K_i$, $\rho^\mathrm{max}_i$, the volume occupied by each star $V_i$, and the volume of the associated Roche lobe $V^\mathrm{RL}_i$, where the subscript ``$i$'' refers either to the donor or the accretor. We note that, by volume, the less-massive star (destined to become the donor) initially filled $88\%$ of its Roche lobe and the more-massive star (the accretor) initially filled $72\%$ of its Roche lobe. We note as well that, initially, the center of mass of this model was positioned almost exactly at the center of the coordinate grid; specifically, relative to the initial binary separation, $R_\mathrm{com}/a_0 = 2.35\times 10^{-6}$ initially. All of the parameter values in Table \ref{table1_0.843} are given in units such that $G = R_\mathrm{SCF} =\rho^\mathrm{max}_{\rm a}(t=0) = 1$, where $R_\mathrm{SCF}$ is the outer edge of the cylindrical grid that was used to generate the model in the SCF code. Appendix~\ref{unitsAppendix} provides expressions that can be used to scale the values of these dimensionless parameters to more meaningful ({\it e.g.}, cgs) units. \begin{figure}[!ht] \centering \includegraphics[scale=0.7]{f1.ps} \caption{A slice through the equatorial-plane is shown for (a) model Q0.8, (b) model Q1.3, and (c) model Q0.5. In each panel, the solid, isodensity contours denote the surfaces of the two stars; they are drawn at a level, $\rho / \rho^{\mathrm{max}}_{a} = 10^{-5}$. The ``donor'' star is located on the left in each panel. The dashed lines trace selected equipotential contours of the self-consistently calculated Roche potential for each model. The outermost circle drawn in panels (b) and (c) identifies the radial edge, $R_\mathrm{grid}$, of the computational grid that was used to evolve these initial models hydrodynamically. The location of the inner ($L_1$) Lagrange point is identified for each system; for models Q1.3 and Q0.5, two additional ($L_2$ and $L_3$) Lagrange points are identified because they fall at positions $R < R_\mathrm{grid}$.} \label{equipotentials} \end{figure} As is detailed in the first row of Table \ref{ComputationalGrid}, model Q0.8 was evolved on a uniform grid that had a resolution of (130, 256, 98) zones in $(R, \phi, z)$, respectively. The hydrocode grid had the same radial extent, $R_\mathrm{grid}$, and the same size grid zones, $\Delta z = \Delta R = R_\mathrm{grid}/(N_R-3)$ as the grid that was used by the SCF code to construct the initial model, that is, $R_\mathrm{grid} = R_\mathrm{SCF} = 1$ and $\Delta R = 7.87\times 10^{-3}$. As was mentioned above, we evolved model Q0.8 through approximately 5.3 orbits, four separate times, using slightly different versions of the hydrodynamical code. The versions of the code differed, as follows (see \S\ref{sec_corr_code} for additional clarification): \begin{itemize} \item Evolution Q0.8-H: The pressure source term in Euler's equation was calculated using $\rho{\nabla} H$, and no correction was made to limit the motion of the system's center-of-mass. (This is identical to the manner in which source terms were handled in the ``benchmark'' UB model evolution published by MTF.) 
\item Evolution Q0.8-P: The pressure source term in Euler's equation was calculated using $\nabla p$, instead of $\rho{\nabla} H$, but no correction was made to limit the motion of the system's center-of-mass. \item Evolution Q0.8-HC: A very small, artificial acceleration was applied in an attempt to minimize the motion of the system's center-of-mass, but the pressure source term in Euler's equation was calculated using $\rho{\nabla} H$. \item Evolution Q0.8-PC: The pressure source term in Euler's equation was calculated using $\nabla p$, instead of $\rho{\nabla} H$, and a very small, artificial acceleration was applied in an attempt to minimize the motion of the system's center-of-mass. \end{itemize} Columns 2 and 3 of Table \ref{TableUB} highlight the differences between these four model evolutions: a ``yes'' in column 2 means that $\nabla p$ was used in place of $\rho\nabla H$; and a ``yes'' in column 3 means that ${\bf a}^\mathrm{art}$ was applied in an effort to correct for the small, but undesired center-of-mass motion. Although both stars initially filled a large fraction of their respective Roche volumes, neither star was actually in contact with its Roche lobe initially. Hence, the system was not initially susceptible to a mass-transfer instability and nothing was done during these four simulations to artificially excite the instability. As has been explained by MTF, the evolution of this ``benchmark'' model serves to illustrate how well the fully dynamical hydrocode can preserve the detailed structure of a complex, equilibrium configuration that is close to, but has not exceeded, contact. Column 4 of Table \ref{TableUB} records the length of time --- in units of the initial orbital period, $P_0 = 2\pi/\Omega_0$ --- that model Q0.8 was evolved in each of these benchmark simulations. In accordance with our expectations, throughout all four model evolutions, the binary system remained detached, the two stars moved along very nearly circular orbits at a separation $a(t)$ that deviated only slightly from the initial separation, and both stars individually preserved their initial detailed, force-balanced structures to a high degree of precision. The top three panels of Figure \ref{Q0.8 plots} show how well key system parameters were preserved throughout all four evolutions: (a) the $z$-component of the total angular momentum $J_{z}(t)$, measured relative to the initial value, $J_\mathrm{tot}$; (b) the binary separation $a(t)$, normalized to the initial separation $a_0$; and (c) the change in total system mass $\delta M(t) \equiv \{M(t) - M_\mathrm{tot}\}$, measured relative to the initial total mass $M_\mathrm{tot}$, where $M(t)$ is the total system mass at time $t$. The separation $a(t)$ reported in all the simulations in this paper is the distance between the centers of mass of the two components. In detail, we identify material belonging to each star by dividing the computational grid into two regions with a plane perpendicular to the binary axis going through the approximate location of the inner Lagrange point. Individual grid cells contribute to the center of mass summation if they are more strongly bound to their respective star than the surface layer (a constant density surface). Finally, panel (d) shows the small drift of the center of mass away from the center of the grid as a function of time for the various runs. 
Note that in all cases the center of mass remained within the first radial cell ($\Delta R=7.87\times 10^{-3}$) and that the drift is significantly reduced by the center of mass corrections for both Q0.8-PC and Q0.8-HC, being almost invisible for the latter. As the curves in Figure \ref{Q0.8 plots} show, over the course of approximately 5 orbits in all four model Q0.8 simulations, the binary lost a very small fraction of its mass, it slowly gained a very small amount of angular momentum and, at the same time, it experienced a slow, very slight secular decay of the orbit. In addition to the slight orbital decay, a small-amplitude oscillation occurs in the binary separation with a period approximately equal to the initial orbital period of the binary. This ``epicyclic'' motion probably simply reflects that the orbit is not precisely circular, in which case the mean amplitude of the epicyclic oscillation, $(\Delta a/a)_\mathrm{epicyclic}$, can be interpreted as the eccentricity of the orbit. The epicyclic motion probably arises because, when the initial model was inserted into the hydrodynamical code, its assigned angular velocity, $\Omega_0$, was slightly different from the value that would have been required to place the stars into a perfectly circular orbit. Columns 5 - 8 of Table \ref{TableUB} list, respectively, the secular rate of change per orbit of $J_{z}$, $a$ and the total system mass $M$, as well as the epicyclic amplitude $(\Delta a/a)_\mathrm{epicyclic}$ that resulted from each of our four model Q0.8 simulations; also listed in the first row of the table are the corresponding values from MTF's UB simulation.\footnote{We report here a typographical error in the values quoted by MTF for $\Delta M_{1}/M_{\rm tot}$ and $\Delta M_{2}/M_{\rm tot}$. The correct values are $\Delta M_{1}/M_{\rm tot} \approx -3.0 \times 10^{-6}$ and $\Delta M_{2}/M_{\rm tot} \approx -1.1 \times 10^{-5}$} \begin{figure}[!ht] \centering \includegraphics[scale=0.8,viewport=12 16 540 650,clip]{f2.ps} \caption{Results from the evolution of model Q0.8 using four slightly different versions of the hydrocode (evolution H -- dotted curve; HC -- dash-dot curve; P -- dashed curve; PC -- solid curve). From top to bottom: (a) The $z$-component of the system's angular momentum $J_z$, normalized to its initial value $J_\mathrm{tot}$, is plotted as a function of time $t$, normalized to the system's initial orbital period $P_0 = 2\pi/\Omega_0$. (b) The binary separation $a$, normalized to its initial value $a_0$, is plotted as a function of $t/P_0$. (c) The difference between the total mass of the binary system and its initial value, $\delta M \equiv M(t)-M_\mathrm{tot}$, normalized to the initial mass, is plotted as a function of $t/P_0$. (d) The distance in code units $R_{\rm com}$ from the axis of the grid to the instantaneous center of mass as a function of $t/P_0$. 
Note that the boundary of the innermost radial cell in these units is at $R_{\rm com}=0.008$.} \label{Q0.8 plots} \end{figure} \begin{deluxetable}{lccccccc} \tabletypesize{\scriptsize} \tablecaption{Results from Detached Binary Evolutions (Model Q0.8)} \tablewidth{0pt} \tablehead{ \colhead{Simulation} & \colhead{$\nabla P$} & \colhead{COM} & \colhead{$t/P_0$} & \colhead{$\Delta J_z/J_\mathrm{tot}$} & \colhead{$(\Delta a/a)_\mathrm{secular}$} & \colhead{$(\Delta M/M_\mathrm{tot})$} & \colhead{$(\Delta a/a)_\mathrm{epicyclic}$} \\ \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} & \colhead{(6)} & \colhead{(7)} & \colhead{(8)} } \startdata UB & no & no & 5.178 & $+1.5\times 10^{-4}$ & $-1.9\times 10^{-4}$ & $-1.4\times 10^{-5}$ & $2.2\times 10^{-4}$ \\ Q0.8-H & no & no & 5.263 & $+1.4\times 10^{-4}$ & $-1.5\times 10^{-4}$ & $-8.6\times 10^{-6}$ & $2.0\times 10^{-4}$ \\ Q0.8-HC & no & yes & 5.265 & $+1.4\times 10^{-4}$ & $-1.5\times 10^{-4}$ & $-8.5\times 10^{-6}$ & $2.0\times 10^{-4}$ \\ Q0.8-P & yes & no & 5.269 & $+4.7\times 10^{-5}$ & $-6.1\times 10^{-4}$ & $-1.6\times 10^{-5}$ & $2.2\times 10^{-3}$ \\ Q0.8-PC & yes & yes & 5.261 & $+4.4\times 10^{-5}$ & $-4.6\times 10^{-4}$ & $-1.6\times 10^{-5}$ & $2.0\times 10^{-3}$ \\ \enddata \label{TableUB} \end{deluxetable} In comparing the detailed results of the four Q0.8 evolutions, as summarized in Table \ref{TableUB}, we notice that simulations that use the gradient of the pressure conserve angular momentum slightly better (to a level of 5 parts in $10^5$, per orbit, instead of 14 parts in $10^5$) than simulations in which the gradient of the enthalpy was used. At the same time, however, the amplitude of the epicyclic motion was roughly an order of magnitude larger (the orbital eccentricity was $\approx 2\times 10^{-3}$ instead of $\approx 2\times 10^{-4}$) and the rate of mass loss was approximately twice as large (mass was conserved to a level of $16$ parts in $10^6$, per orbit, instead of $8$ parts in $10^6$) in the simulations that used the gradient of the pressure. As was noted in \S\ref{sec_corr_code}, expressing the pressure source term in Euler's equation in terms of the gradient of the pressure provides the more physically correct description of the evolution of unequal-mass binaries undergoing mass transfer. This may be the reason evolutions Q0.8-P and Q0.8-PC conserved angular momentum better than simulations that used the gradient of the enthalpy. In the context of the difference in the epicyclic amplitude for the two implementations we recall that the SCF code uses the gradient of the enthalpy, rather than the pressure, to construct the initial state. When this initial configuration is placed into the version of the hydrocode that implements the $\rho\nabla H$ source term (evolutions Q0.8-H and Q0.8-HC), no resultant forces are introduced and the equilibrium of the initial model is therefore well preserved. However, this will not be the case for the version of the hydrocode that evolves the fluid with $\nabla p$. Although, strictly speaking, $\nabla p = \rho\nabla H$ in an isentropic fluid (such as the fluid in either one of the stars in this detached binary), in practice the finite-difference expressions for these two gradients produce slightly different values of the source term. 
We suspect that the $\nabla p$ implementation introduces very small, but nonzero accelerations throughout the interior of both stars that cause the two stars to oscillate slightly about their own, ideal equilibrium configurations, and that it is the coupling between these stellar oscillations and the orbital dynamics of the system as a whole that excites the larger epicyclic motions in evolutions Q0.8-P and Q0.8-PC. The effect of the center of mass correction was less dramatic than implementing the gradient of the pressure or enthalpy; it slightly improved the conservation of angular momentum and also slightly reduced the amplitude of the epicyclic motion. However, the total mass of the binary, on the scale presented here, remained virtually unchanged for simulations with or without the center of mass correction. The three principal spurious effects --- the slow decay of the orbit, the slow gain of angular momentum, and the slow loss of mass from the system --- seen in MTF's UB simulation are still present in the simulations presented here. However, depending on the version of the hydrodynamical code we choose, our simulations show that the size of these effects can be modified somewhat. The criterion for choosing one version over another rests on the type of binary system we want to simulate. For detached/semi-detached binaries having similar components with respect to the polytropic constant, $\nabla H$ will correctly describe the evolution even if mass transfer occurs in the system. On the other hand, if the components have different polytropic constants ({\it i.e.}, the fluid in the two stars has different specific entropies) then the $\nabla p$ scheme needs to be used, particularly for binary systems that undergo mass transfer. The disadvantage of using the $\nabla p$ scheme is that a relatively large epicyclic motion is induced in the orbit, and mass is conserved to a slightly lower degree of accuracy than when the $\nabla H$ scheme is utilized. The former can be reduced by modifying the angular velocity to place the initial model in a nearly circular orbit or by ``tweaking'' the initial state so as to balance the forces which arise from implementing the $\nabla p$ scheme. In all five of the mass-transfer simulations that we present below in \S\ref{sec_mdot_results}, the ``PC'' version of the hydrocode has been used (consistent with the set of equations that has been summarized above in \S\ref{equations}) because this version is best suited to follow evolutions through many orbits when the material residing in the donor has a specific entropy that differs from the material initially making up the accretor. The number recorded in column 7 of the last row of Table \ref{TableUB} provides a reasonable estimate of the lowest mass-transfer rate that we will be able to resolve with our existing hydrocode. Specifically, we should expect to only be able to resolve mass-transfer rates $|\dot{M}_d|(P_0/M_\mathrm{tot}) > \dot{M}_\mathrm{min} = 1.6\times 10^{-5}$ because, as has just been demonstrated, even a detached binary system like our model Q0.8 slowly loses mass at this rate. In order to accommodate this restriction in our simulations of mass-transferring binary systems, we will be required to ``drive'' the donor into sufficiently deep contact with its Roche lobe that mass transfer proceeds at a rate that exceeds $\dot{M}_\mathrm{min}$. 
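Since $\dot{M}_\mathrm{min}$ is expressed in units of $M_\mathrm{tot}/P_0$, translating it into physical units for a particular class of binaries only requires multiplying by that system's total mass and dividing by its orbital period. The short sketch below performs this rescaling for the two representative systems quoted in the following paragraph; the masses and periods used are the representative values adopted there, and the function name is purely illustrative.
\begin{verbatim}
# Rescale the dimensionless resolution limit, Mdot_min = 1.6e-5 Mtot/P0,
# into solar masses per year for a system of given total mass and period.
MDOT_MIN_CODE = 1.6e-5        # in units of Mtot per initial orbital period
DAYS_PER_YEAR = 365.25

def mdot_min_physical(m_tot_msun, p0_days):
    """Minimum resolvable mass-transfer rate in Msun/yr."""
    return MDOT_MIN_CODE * m_tot_msun / (p0_days / DAYS_PER_YEAR)

print(mdot_min_physical(5.0, 3.0))         # Algol-like system:  ~1e-2 Msun/yr
print(mdot_min_physical(1.0, 0.3 / 24.0))  # AM CVn-like system: ~0.4  Msun/yr
\end{verbatim}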
For MS Algol-type binaries ($M_\mathrm{tot}\approx 5 M_\odot, P_0 \approx 3~\mathrm{days}$) \citep{BRM95}, this limit is equivalent to $\sim 10^{-2} M_\odot/\mathrm{yr}$; and for AM CVn-type DWD binaries ($M_\mathrm{tot}\approx 1 M_\odot, P_0 \approx 0.3~\mathrm{hr}$) \citep{PHS93, NPVY01}, this limit is equivalent to $\sim 0.4 M_\odot/\mathrm{yr}$. Both of these limits are orders of magnitude larger than the stable mass-transfer rates that are observed in Algol-type or AM CVn-type binary systems. Hence, as we remarked in the introductory section of this paper, we are unable to model long-term, stable evolutionary phases of mass-transfer in such systems. However, during phases of unstable mass-transfer --- which are the focus of our present study --- the rate of mass-transfer can easily climb to levels above $\dot{M}_\mathrm{min}$ and can be satisfactorily modelled with our hydrocode. Note that the minimum resolvable mass transfer $\dot{M}_\mathrm{min}$ is on the order of the Eddington critical rate for both white dwarfs and main-sequence donors, but to treat radiative forces is beyond the scope of this paper. Analytic considerations \citep{HaWe99} suggest that the effect of mass loss occurring at super-Eddington mass-transfer rates is stabilizing, thus ignoring the radiative forces at this stage provides a more stringent test of stability. Ideally one would like to be able to follow the evolution with all relevant physical effects included and to resolve realistic mass transfer rates, but this is not possible with the computational resources available today. \section{Mass-Transfer Simulations}\label{sec_mdot_results} We now present results from mass-transfer simulations of two semi-detached, $n=3/2$ polytropic binary systems with initial mass ratios, $q_0 = 1.323$ and $q_0 =0.5$; these will henceforth be referred to as models Q1.3 and Q0.5, respectively. Paralleling the information provided in Table \ref{table1_0.843} for model Q0.8, Tables \ref{table1_1.3} and \ref{table0.5} list system parameters as well as information about the structure of the component stars for these two initial models, and Figures \ref{equipotentials}b and \ref{equipotentials}c illustrate their initial equatorial-plane structures. In both panels, the donor is the star on the left which nearly fills its critical Roche surface. The outermost circle in Figures \ref{equipotentials}b and \ref{equipotentials}c identifies the edge of the computational grid on which both of these models were evolved. In contrast to our simulations of model Q0.8, and as is summarized in Table \ref{ComputationalGrid}, the edge of the hydrocode grid ($N_R = 162; R_\mathrm{grid} = 1.251$) was extended beyond the grid that was used in the SCF code ($N_R = 130; R_\mathrm{SCF} = 1.000$) in order to ensure that the L2 and L3 Lagrange points both were included in the computational domain. Material that flows radially outward across either one of these two saddle points in the effective potential is unlikely to return to either one of the stars on a dynamical timescale, but the dynamical motions of material that lies inside the L2 and L3 locations should be fully included in the hydrodynamical simulations. These particular models were chosen for this investigation, in part, because their values of $q_0$ fall into two separate stability regimes, as outlined above in \S \ref{SecExpectations}. For model Q1.3, $q_0 > 1$ so the binary is expected to be violently unstable to mass transfer. 
For model Q0.5, $q_0$ falls below the value $q_\mathrm{stable} = 2/3$ at which binaries with $n=3/2$ polytropic structures should become stable against mass transfer \citep{HaWe99}, if the effect of direct-impact accretion is ignored. In addition, the properties of the stars in these two models were specified in such a way that they permit us to examine two distinctly different, but astrophysically interesting evolutionary scenarios: In model Q1.3 (see Table \ref{table1_1.3}), the ratio of the effective stellar radii $R_d/R_a = (V_d/V_a)^{1/3} \approx M_d/M_a$, which mimics the mass-radius relationship of MS stars; and in model Q0.5 (see Table \ref{table0.5}), $K_d = K_a$ and $(R_d/R_a)^3 = (V_d/V_a) \approx M_a/M_d$, which represents well the structural properties and mass-radius relationship of stars in a low-mass DWD system. Finally, in the $q <2/3$ parameter regime, we specifically selected an initial model with $q_0 = 0.5$ in order to permit a direct comparison with one of the SPH simulations that was reported by RS95. In both models, as they were generated by the SCF code, the donor star slightly underfills its Roche lobe initially. Specifically, for model Q1.3 (see Table \ref{table1_1.3}), $R_d/R^\mathrm{RL}_d = (V_{\mathrm{d}}/V^{\mathrm{\rm RL}}_{\mathrm{d}})^{1/3} = 0.989$, and for model Q0.5 (see Table \ref{table0.5}), $R_d/R^\mathrm{RL}_d = (V_{\mathrm{d}}/V^{\mathrm{\rm RL}}_{\mathrm{d}})^{1/3} = 0.965$. This means that neither model was actually undergoing mass transfer when it was introduced into the hydrocode. In the context of the SCF technique, it is possible to iteratively refine the converged model configuration to produce semi-detached binaries that come progressively closer to filling their self-consistently determined Roche volume, up to some limiting volume-filling factor $\lesssim 1.00$ that is set by the finite grid resolution of the SCF code. But the SCF technique will not converge to a model where stellar material extends beyond the Roche lobe, as this material cannot be in hydrostatic equilibrium. Even if the SCF code were able to generate models where the donor star completely fills its critical Roche surface, the models would have little additional practical utility over models, such as Q1.3 and Q0.5, where $R_d$ is $97 - 99\%$ of $R_d^\mathrm{RL}$. This is because such models would only be marginally unstable to mass transfer, so the time that would be required for $\dot{M}_d$ to grow to levels that are of interest (or even resolvable) in the present investigation would be prohibitively long for a time-explicit hydrodynamics scheme such as ours. It is therefore necessary for us to employ some means of artificially ``driving'' each binary model into deep enough contact with its Roche surface to ensure that, early in the hydrodynamical evolution, the mass transfer rate climbs to a rate that is resolvable, that is, to a rate $\gtrsim \dot{M}_\mathrm{min}$. After this has been achieved, the driving is turned off, except in one of the Q0.5 runs, in which driving was applied throughout the entire evolution. One can imagine several possible mechanisms for driving a binary system into contact. For example, RS95 began with two widely separated, spherical stars and slowly damped the orbital velocity with an artificial friction term until the donor began to lose particles through the inner Lagrange point. This technique is particularly well adapted to the SPH method, as it is free of a computational grid. 
We instead begin with tidally deformed, rotationally flattened equilibrium models that are already very near contact. To proceed from this point, we could simply impart an inward radial kick to the donor, but this would certainly cause the orbit to immediately become noncircular. Alternatively, we could attempt to mimic natural processes by (a) forcing the donor to slowly expand to fill its Roche lobe --- a process that will occur in MS binaries as the more massive star evolves off the main sequence --- or (b) slowly removing orbital angular momentum, thereby forcing the Roche lobe to contract around the donor --- as occurs in DWD systems as angular momentum is lost from the system via gravitational radiation. Ideally, the outcome of a given mass-transfer instability should be insensitive to the evolutionary process that has brought the donor star into contact with its critical Roche surface. In our present investigation, we have explored both of these more natural processes as a means of driving the donor into sufficiently deep contact with its critical Roche surface so that $\dot{M}_d$ climbs above $\dot{M}_\mathrm{min}$. Of course, because the hydrodynamical evolutions have been followed with an explicit time-integration scheme, we have found it necessary to ``drive'' the donor into contact at a rate that far exceeds the natural thermal expansion rate of evolving MS stars or the natural rate at which angular momentum is lost from DWD binaries. The rates we have employed are, nevertheless, slow enough that during the early (and generally brief) phase of artificial driving, the two stars individually, as well as the system as a whole, remain very near the initial equilibrium state as generated by the SCF code. Thus, as desired, the artificial driving introduces only secular, rather than dynamical, changes in the system. In order to initiate mass transfer in model Q0.5 (see \S \ref{sec_0.5_results}), we drained orbital angular momentum from the system at a rate of 1\% of $J_\mathrm{tot}$ per orbit for varying amounts of time in order to achieve varying depths of contact and levels of mass transfer. These simulations will henceforth be collectively referred to as Q0.5-D and individually as Q0.5-Da, -Db, and -Dc in order of increasing driving time (see \S\ref{sec_0.5_results} for details). For model Q1.3 (see \S \ref{sec_1.3_results}), we tried both mechanisms: In simulation Q1.3-D, we drained orbital angular momentum from the system at a rate of 1\% of $J_\mathrm{tot}$ per orbit for two orbits; in simulation Q1.3-E, we forced a slow expansion of the donor by increasing $K_d$ at a rate of 1.67\% of its initial value per orbit for the first two orbits (see Appendix~\ref{drivingAppendix} for details). A detailed comparison of simulations Q1.3-D and Q1.3-E confirms that the outcome of the instability is insensitive to the process by which the donor has been brought into contact with its critical Roche surface. 
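Both driving mechanisms therefore amount to a slow, prescribed secular change imposed during the first few orbits. The precise manner in which these changes are folded into the hydrocode is given in Appendix~\ref{drivingAppendix}; the fragment below is only a schematic restatement of the quoted rates (1\% of $J_\mathrm{tot}$ per orbit for the angular-momentum driving, and 1.67\% of the initial $K_d$ per orbit for the donor-expansion driving), with function names invented purely for illustration.
\begin{verbatim}
# Schematic driving schedules; the actual hydrocode implementation is
# described in the Appendix.
def delta_j_drained(j_tot_initial, dt, p_orb, rate=0.01):
    """Orbital angular momentum removed during a time step dt, such that
    the loss amounts to `rate` (1%) of the initial J_tot per orbit."""
    return -rate * j_tot_initial * dt / p_orb

def donor_polytropic_constant(k_d_initial, t, p_orb, rate=0.0167,
                              n_driving_orbits=2.0):
    """Donor K inflated by `rate` (1.67%) of its initial value per orbit,
    applied only during the first `n_driving_orbits` orbits (run Q1.3-E)."""
    return k_d_initial * (1.0 + rate * min(t / p_orb, n_driving_orbits))
\end{verbatim}
Because $R\propto K$ for an $n=3/2$ polytrope of fixed mass, the second schedule corresponds to an increase in the donor's effective radius of roughly 1.67\% per orbit, consistent with the discussion in \S\ref{sec_1.3_results}.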
\subsection{Simulations with $M_{\mathrm{d}} = 1.323 M_{\mathrm{a}}$ Initially} \label{sec_1.3_results} \subsubsection{Evolution Q1.3-D} \begin{deluxetable}{lcclcc} \tabletypesize{\scriptsize} \tablecaption{Initial Parameters for Model Q1.3\tablenotemark{\dag}} \tablewidth{0pt} \tablehead{ \colhead{System} & \colhead{Initial} & ~ & \colhead{Component} & ~ & ~ \\ \colhead{Parameter} & \colhead{SCF Value} & ~ & \colhead{Parameter} & \colhead{Donor} & \colhead{Accretor} } \startdata $q_0$ & 1.323 & ~~~~~~~~~~ & $M_i$ & 0.0176 & 0.0133 \\ $a_0$ & 0.8882 & ~~~~~~~~~~ & $\rho_i^\mathrm{max}$ & 0.6000 & 1.0000 \\ $\Omega_0$ & 0.2113 & ~~~~~~~~~~ & $K_i$ & 0.0372 & 0.0264 \\ $J_\mathrm{tot}$ & $1.40\times 10^{-3}$ & ~~~~~~~~~~ & $V_i$ & 0.1810 & 0.0799 \\ ${R}_\mathrm{com}$ & $7.11\times 10^{-5}$ & ~~~~~~~~~~ & $V_i^\mathrm{RL}$ & 0.1869 & 0.1261 \\ \enddata \tablenotetext{\dag}{Parameter values are given here in dimensionless polytropic units. To scale these numbers to other (e.g., cgs) units, see the discussion in Appendix~\ref{unitsAppendix}.} \label{table1_1.3} \end{deluxetable} Figure \ref{Q1.3-D images} presents a sequence of images showing the structure of the binary at selected times during the Q1.3-D evolution. Each image is a three-dimensional rendering of the mass-density distribution as viewed looking down on the equatorial plane of the binary from a frame of reference rotating with the initial orbital frequency, $\Omega_0$. The four nested iso-density surfaces have been drawn at levels $\rho/ \rho^{\mathrm{\rm max}}_{a} = 0.5$ (green), 0.1 (yellow), $10^{-3}$ (red) and $10^{-5}$ (blue), where $\rho^{\mathrm{\rm max}}_{a}$ is given in Table \ref{table1_1.3}. The eight images presented in Figure \ref{Q1.3-D images} have been taken from an animation sequence that includes approximately 1500 frames (120 frames per orbital period) and illustrates in much more detail the dominant structures that developed during the Q1.3-D evolution. The first seven images displayed here are equally spaced in time at intervals of $2P_0$. The gradual, counter-clockwise shift in the position angle of the line that connects the centers of the two stars reflects the gradual, monotonic increase in the system's orbital angular velocity $\Omega$, which in turn reflects a gradual decrease in the system's orbital separation, $a$. \begin{figure}[!ht] \centering \includegraphics[scale=0.8,viewport=67 498 521 719,clip]{f3.ps} \caption{A three-dimensional rendering of the mass density at selected time slices during the Q1.3-D evolution that demonstrate direct impact accretion. Times are shown on the top right of each frame in units of the initial orbital period. Each image is a snapshot as viewed by an observer looking down on the equatorial plane while rotating with the angular velocity $\Omega_0$. The four colored transparent shells are iso-density surfaces shown at levels $\rho/ \rho^{\mathrm{\rm max}}_{\rm a}$ = 0.5 (green), 0.1 (yellow), $10^{-3}$ (red) and $10^{-5}$ (blue) where $\rho^{\mathrm{\rm max}}_{\rm a}$ is the maximum density of the accretor --- the star on the right in the initial frame (t = 0) of the figure. 
In the online version this figure is supplemented by a mpeg animation showing the full evolution.} \label{Q1.3-D images} \end{figure} \begin{figure}[!ht] \centering \includegraphics[scale=0.8,viewport=12 16 465 592,clip]{f4.ps} \caption{Top: The $z$-component of the total angular momentum with respect to an inertial frame centered at the center of mass, $J_{z}$, normalized to its initial value for the Q1.3-D (solid curve). The initial loss rate of angular momentum at 1\% per orbit is denoted by the dotted line. The driving for Q1.3-D is stopped after two orbits and this can be seen when the solid curve becomes horizontal. Middle and bottom: evolution of the orbital angular momentum and the spin angular momenta of the components all normalized to the total initial angular momentum for the Q1.3-D run.} \label{Q1.3-D J plots} \end{figure} By the end of the first two orbits, an accretion stream has begun to develop. We quickly appreciate that, for the selected mass-radius relationship of this binary system, the accretor has a sufficiently large radius that the mass-transfer stream must directly impact, rather than go into orbit around, the accretor. The material is moving supersonically as it hits the accretor's surface. This creates a standing shock wave of relatively high thermal-pressure material on the accretor's surface (visible as a ridge of material in the images shown at $t/P_0 =$ 6, 8, and 10) that balances the ram pressure of the in-falling donor material. The stream clearly strikes the surface of the accretor at an oblique angle. Hence, while the stream's motion perpendicular to the surface is abruptly halted by the standing shock front, its motion tangent to the surface continues unabated. In this manner, angular momentum is added to the spin of the accretor. It is precisely this complex, three-dimensional flow --- along with the tidal torques generated by the perturbed density distribution --- that characterizes direct-impact accretion and necessitates a fully three-dimensional hydrodynamic treatment to quantify its role. Throughout the Q1.3-D evolution, the accretion stream steadily grows thicker, reflecting a steady increase in the rate at which material from the donor is being transferred to the accretor. By the time the system has completed twelve orbits, it can no longer be described as two stars in nearly circular orbits that are loosely tied together by a mass-transfer stream. Instead, the two stars have begun to plunge toward one another and, only one quarter of an orbit later (see Figure \ref{Q1.3-D images}) the central cores of the two stars have merged. This final, short merger phase of the evolution results from what \citet{LRS3} have identified as a tidal instability. At late times a fairly steady stream of very low density material (identified by the blue iso-density surface) also emerges from the trailing edge of the less massive star, that is, the accretor, flows out through the $L_2$ Lagrange point, and accumulates along the boundary of the computational grid. In this manner, $ \sim 0.1\%$ of the total system mass is lost from the system during the first 12 orbits --- a remarkably high level of conservation through the mass transfer event, given the very high mass-transfer rate attained prior to merger. 
For purposes of further discussion it is productive to divide the Q1.3-D evolution into three phases that are contiguous in time: the ``driving'' phase ($0\leq t/P_0 \leq 2$); the ``mass-transfer'' phase ($2\leq t/P_0 \leq 12$); and the final ``merger'' phase ($12 \leq t/P_0 \leq 12.25$). More quantitative descriptions of the first two of these evolutionary phases are provided by Figures \ref{Q1.3-D J plots} and \ref{Q1.3-D-E Mqa plots}. Figure \ref{Q1.3-D J plots} shows the time-dependent behavior of (top panel, solid line) the $z$-component of the system's total angular momentum, (middle panel) the system's orbital angular momentum, and (bottom panel) the spin angular momentum of both the donor (dot-dashed curve) and the accretor (dashed curve). In all three panels of Figure \ref{Q1.3-D J plots}, the quantity being plotted is normalized to the system's initial total angular momentum, $J_\mathrm{tot}$, and the curves have not been extended beyond $t/P_0 = 12$ because at later times it becomes difficult to distinguish between the individual stellar components. The solid curves in Figure \ref{Q1.3-D-E Mqa plots} display the time-dependent behavior of the mass transfer rate $\dot{M}_d$ normalized to the ratio $\dot M_{\rm ref}\equiv M_d(t=0)/P_0$ (top panel), the system mass ratio $q$ (middle panel), and the orbital separation $a$ normalized to $a_0$ (bottom panel). For reasons that will become apparent, below, in Figure \ref{Q1.3-D-E Mqa plots} the time coordinate (horizontal axis) has been shifted to $t_\mathrm{merge}\equiv (t_{\rm D} - 12P_0)$, where $t_{\rm D}$ is the evolutionary time recorded during the Q1.3-D simulation, so that the onset of the final merger phase occurs at the origin ({\it i.e.}, at $t_\mathrm{merge} = 0$). \begin{figure}[!ht] \centering \includegraphics[scale=0.8,viewport=12 16 465 592,clip]{f5.ps} \caption{The evolution of the mass-transfer rate (top), the mass-ratio $q$, and the separation $a$ for the same initial binary driven to contact by removing angular momentum (Q1.3-D, solid curves) and by expansion of the donor (Q1.3-E, dotted curves). The times shown are times to `merger' measured in initial binary periods. The dashed line in the top panel indicates the minimum level of mass transfer ${\dot M}_{\rm min}$ that is resolvable in our simulations (see text). The separation $a$ is normalized to its initial value $a_0$. The mass transfer rate is normalized to the reference value $\dot M_{\rm ref}\equiv M_{\rm d} (0)/P_0$ (initial donor mass divided by initial orbital period). For most of the evolution $q>1$, so $a$ decreases as a result of mass transfer even in the absence of driving. The dots mark the points at which driving was stopped.} \label{Q1.3-D-E Mqa plots} \end{figure} As was explained earlier, during the ``driving'' phase of the Q1.3-D evolution, angular momentum was artificially extracted from the system at a rate of 1\% of $J_\mathrm{tot}$ per orbit. This is directly reflected in the top panel of Figure \ref{Q1.3-D J plots}, where the behavior of $J_z(t)$ can be compared with a (dotted) line whose slope is precisely $0.01 J_\mathrm{tot}/P_0$. During this phase of the evolution, angular momentum was extracted from the system in such a way that the spin angular momenta of the two stellar components $J_d$ and $J_a$ (bottom panel of Figure \ref{Q1.3-D J plots}) remained essentially unchanged. (See Appendix~\ref{drivingAppendix} for a precise description of this extraction method.) 
In practice, then, the extraction of angular momentum resulted in a steady drop in the system's orbital angular momentum over the first two orbits (see the middle panel of Figure \ref{Q1.3-D J plots}) as, in a terminology that is consistent with the discussion associated with Eq.~(\ref{adot}), the system experienced a systemic loss of angular momentum $(\dot{J}/J_\mathrm{orb})_\mathrm{sys} \approx 0.01/P_0$. According to Eq.~(\ref{adot}), therefore, it is no surprise that during the driving phase of this evolution the orbital separation decreased at a rate of $\dot{a}/a_0 \approx 0.02/P_0$ (see the bottom panel of Figure \ref{Q1.3-D-E Mqa plots}), that is, to a value $a/a_0 \approx 0.96$ after two orbits. This, in turn, produced an $\approx 4\%$ reduction in the effective Roche-lobe radii, which produced the desired result of bringing the donor's Roche lobe into a sufficiently deep contact with the stellar surface of the donor that the mass-transfer event was initiated. As is shown in the top panel of Figure \ref{Q1.3-D-E Mqa plots}, at the end of this ``driving'' phase of the evolution (marked by the solid dot), $|\dot{M}_d| \approx 10^{-3} M_d/P_0$. This mass-transfer rate was sufficiently large (compared to $\dot{M}_\mathrm{min}$, for example) that it produced a recognizable and spatially resolvable accretion stream (Figure \ref{Q1.3-D images}). At the end of the driving phase of the evolution, we stopped extracting angular momentum from the system; in effect, the external driving that had been producing a systemic loss of angular momentum $(\dot{J}/ J_\mathrm{orb})_\mathrm{sys}$ was set to zero. Throughout the remainder of the Q1.3-D evolution, therefore, the system's total angular momentum was conserved to a high level of precision (see the top panel of Figure \ref{Q1.3-D J plots}, where the solid curve is perfectly horizontal), and variations in the orbital separation could only be attributed to the internal {\it redistribution} of angular momentum associated with mass transfer and tides, as is approximately described by the last two terms on the right-hand-side of Eq.~(\ref{adot}). A few key features are identifiable in the plot of $a(t)$ during the ``mass-transfer'' phase of the Q1.3-D evolution (solid curve in the bottom panel of Figure \ref{Q1.3-D-E Mqa plots}). First, the separation slowly, but steadily decreases in step with the slow, steady decrease of the system mass ratio $q$ (solid curve in the middle panel of Figure \ref{Q1.3-D-E Mqa plots}). This appears to be in accord with the explicit $q$-dependence of the last term on the right-hand-side of Eq.~(\ref{adot}). Second, as was observed in the benchmark evolutions of model Q0.8, $a(t)$ displays a low-amplitude oscillation with a period $\approx P_0$. This ``epicyclic'' oscillation reflects the fact that the orbit is slightly noncircular. Although there is evidence that this epicyclic motion was amplified somewhat during the ``driving'' phase of this evolution, it is perhaps significant that the eccentricity of the orbit did not noticeably grow during the mass-transfer phase. Third, in association with the onset of a tidal instability, $a(t)$ begins to decrease rapidly as $t_\mathrm{merge}$ approaches zero. As the solid curve in the top panel of Figure \ref{Q1.3-D-E Mqa plots} shows, during the driving phase of the Q1.3-D evolution the accretion rate remains low as the surface of the donor is being brought closer to, and then into deeper contact with its Roche lobe. 
However, after the mass-transfer phase begins in earnest, the accretion rate steadily increases. This is as expected because, although the mass ratio $q$ steadily decreases throughout the mass-transfer phase of the evolution, it remains larger than unity until the rapid plunge and merger phase gets underway. (As shown in the middle panel of Figure \ref{Q1.3-D-E Mqa plots}, $q > 1$ until $t/P_0 = 11.884$.) Hence, throughout most of this evolution, $R_d^\mathrm{RL}$ steadily decreases while $R_d$ steadily increases, so a larger and larger fraction of the donor's envelope rises above the critical Roche surface. This, in itself, is a formula for disaster and is sufficient to explain why merger was the inevitable outcome of this $q_0 > 1$ mass-transfer evolution. It should be noted as well that, as discussed in \S \ref{SecDirectImpact} above, the phenomenon of direct-impact accretion (Marsh \& Steeghs 2002; Marsh et al. 2004) must also have acted to further destabilize mass transfer in this system. Throughout the mass-transfer phase of this evolution, $\dot{M}_d$ exhibits a low-amplitude oscillation on top of its steady, secular rise. This oscillation has a period $\approx P_0$, strongly suggesting that it is associated with the epicyclic oscillation seen in the plot of $a(t)$. This association is substantiated by the realization that, during each oscillation when $\dot{M}_d$ reaches a local maximum, the orbital separation $a$ is at a local minimum. That is, the simulation permits us to see variations in the mass-transfer rate that are directly associated with the slightly non-circular shape of the orbit. Because the eccentricity of the orbit ({\it i.e.}, the amplitude of the epicyclic oscillations) does not noticeably increase during the evolution, whereas the mass-transfer rate does steadily increase, it is understandable that the relative amplitude of the oscillations that are seen in the plot of $\dot{M}_d(t)$ decreases with time. During the final two orbits of this evolution, the mass-transfer rate climbs rapidly to a very high level, reaching $\gtrsim 10\%$ of $M_d/P_0$ shortly before the final merger phase. This behavior of $\dot{M}_d(t)$ during the last portion of the mass-transfer phase of evolution Q1.3-D strongly resembles the behavior that has been predicted by the approximate analytical description of an unstable mass-transfer evolution presented by \citet{WebIben}. The information plotted in Figure \ref{Q1.3-D J plots} allows us to examine, in part, the role that ``direct impact'' accretion plays in driving the evolution of this particular binary system. Throughout most of the mass-transfer phase of the evolution, the accretor gradually spins up (the dashed curve labelled $J_a$ in the bottom panel) as material from the stream is deposited onto its surface at an oblique impact angle. It is significant that the curves for $J_a(t)$ and $J_\mathrm{orb}(t)$ are practically mirror images of one another, while the curve for $J_d(t)$ is almost perfectly flat until the final ``plunge'' occurs. This means that the spin-up of the accretor occurs almost entirely at the expense of the orbit. 
That is, at the time of impact, the specific angular momentum of the accretion stream material does not reflect the specific angular momentum of the star from which it originated ({\it i.e.}, the donor) but instead reflects the amount by which the time-varying torque applied by the system's complex (and time-varying) gravitational field has been able to transfer angular momentum from the orbit to the stream as the stream material ``falls'' from the vicinity of the $L_1$ Lagrange point to the surface of the accretor. This result provides strong support for the arguments that have led \citet{Maet04} and \citet{GPF} to account for the effects of direct-impact accretion in their semi-analytical models of mass-transferring binary systems through a term that is built around the relatively simple concept of a circularization radius, as described above in the context of Eq.~(\ref{adot}). At the very end of the ``mass-transfer'' phase of the Q1.3-D evolution, the binary loses orbital angular momentum to the spin of both stellar components catastrophically, as the stars plunge towards one another and eventually merge. The above discussion suggests the following interpretation: initially and during most of the evolution, while the angular momentum of the donor remains flat, the system is evolving as a result of the mass transfer instability since $q>q_{\rm stable}$. Only near the end, tidal effects on the donor further reduce the orbital angular momentum causing the final merger. Thus this final phase should be considered the proper tidal instability. Binary systems in which direct impact occurs will have a harder time than disk systems avoiding the tidal instability since $q_{\rm stable}<2/3$. \subsubsection{Evolution Q1.3-E} In an attempt to ascertain to what extent the outcome of a mass-transfer event depends on the manner in which the donor star is brought into contact with its critical Roche surface, we performed a second simulation in which the initial state was defined by model Q1.3 (with the associated parameters shown in Table \ref{table1_1.3}) but, instead of artificially extracting angular momentum from the system, during the ``driving'' phase of the evolution we gradually increased the specific entropy of the gas inside the donor star. This caused a slow, secular increase in the effective radius of the donor and, within the first two orbits, brought the surface of the donor into sufficiently deep contact with its critical Roche surface that the mass-transfer event was initiated. More specifically, for this Q1.3-E evolution the polytropic constant $K_d$ for the material inside the donor was increased at a rate of 1.67\% of its initial value per orbit for 2 orbits. (See Appendix~\ref{drivingAppendix} for a precise description of how this ``driving'' technique was implemented in the hydrocode.) Because $R\propto K$ in an $n=3/2$ polytropic star (see, for example, the discussion in \S\ref{secBackground} associated with Eq.~\ref{PolytropicM_R}), this level of driving should have caused the effective radius of the donor to increase by approximately 3.3\% by the end of the driving phase of the Q1.3-E evolution. As was anticipated, the results of this Q1.3-E simulation were very similar to the results of the Q1.3-D simulation. 
The accretion stream that had become well-defined by the end of the ``driving'' phase grew steadily thicker throughout the ``mass-transfer'' phase and, shortly after the system mass-ratio dropped below unity, the cores of the two stars catastrophically merged into a single object. There were, however, quantifiable differences between the two model evolutions. Most noticeably, the stars in the Q1.3-E evolution merged at an earlier evolutionary time. We expected that the two evolutions would be somewhat offset in time from one another. Given that two distinctly different mechanisms were employed to drive the system into contact, it was unlikely that, for example, $\dot{M}_d$ would be precisely the same in both simulations at the end of the driving phase (beginning of the mass-transfer phase) hence, it was unlikely that a basic system parameter such as $q$ would be precisely the same at a given time, $t$, because its value depends on the {\it integral} over $\dot{M}_d$ up to the time $t$. Before making further comparisons between these two simulations, we decided to synchronize their evolutionary times at the end, rather than at the beginning, of the mass-transfer phase. More specifically, we took a plot of $\log{|\dot{M}_d|}$ versus time from the Q1.3-D simulation (solid curve in the top panel of Figure \ref{Q1.3-D-E Mqa plots}) and slid it horizontally across the analogous plot from the Q1.3-E simulation --- normalized in the same manner as the top panel of Figure \ref{Q1.3-D-E Mqa plots} --- until a reasonable match between the two time-dependent functions was achieved at late times. In this manner we were able to identify a ``merger time'' for the Q1.3-E evolution that could be reasonably well associated with the merger time $t_\mathrm{merge}$ that was defined earlier for the Q1.3-D evolution. For the Q1.3-E evolution, a time offset of $5.9 P_0$ was required to align the origin in time with the end of the mass-transfer phase, that is, $t_\mathrm{merge}\equiv (t_E - 5.9P_0)$, where $t_E$ is the evolutionary time recorded during the Q1.3-E simulation. By comparison, a time offset of $12.0 P_0$ was required to arrange the same alignment for the Q1.3-D evolution. The dotted curves in the top, middle, and bottom panels of Figure \ref{Q1.3-D-E Mqa plots} display the behavior of, respectively, $\log(|\dot{M}_d|/\dot M_{\rm ref})$, $q$, and $a/a_0$ as a function of $t_\mathrm{merge}/P_0$ from simulation Q1.3-E. The mass-transfer rate and the function $q(t_\mathrm{merge})$ for this simulation match the mass-transfer rate and the function $q(t_\mathrm{merge})$ for simulation Q1.3-D very well over the entire ``mass-transfer'' phase of its evolution. The dotted curve for $\dot{M}_d(t_\mathrm{merge})$ exhibits lower-amplitude oscillations than the corresponding solid curve, indicating that the binary orbit has remained more nearly circular throughout the Q1.3-E evolution. In the plot of the time-dependent behavior of the binary separation (bottom panel of Figure \ref{Q1.3-D-E Mqa plots}), the solid curve lies below the dotted curve at all times. This is understandable because, due to the manner in which the system was initially driven into contact, the model in simulation Q1.3-D (solid curve) has a slightly smaller orbital angular momentum than the model in simulation Q1.3-E (dotted curve) at all times. 
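The synchronization described above, sliding one $\log|\dot{M}_d|$ curve in time until its late-time portion matches the other, is also straightforward to automate. The Python sketch below is one way to do so and is purely illustrative (the function name and arguments are invented for this example); the offsets of $5.9\,P_0$ and $12.0\,P_0$ quoted above were obtained by direct comparison of the plotted curves.
\begin{verbatim}
import numpy as np

# Illustrative alignment of two sampled log|Mdot_d(t)| curves: find the time
# offset that minimizes their mismatch over a late-time window.
def best_time_offset(t_ref, logmdot_ref, t_other, logmdot_other,
                     trial_offsets, window):
    """Return the trial offset giving the smallest late-time mismatch.
    window = (t_min, t_max) selects the late-time interval of t_ref."""
    mask = (t_ref >= window[0]) & (t_ref <= window[1])
    best_offset, best_err = None, np.inf
    for dt in trial_offsets:
        shifted = np.interp(t_ref[mask], t_other + dt, logmdot_other)
        err = np.sum((shifted - logmdot_ref[mask]) ** 2)
        if err < best_err:
            best_offset, best_err = dt, err
    return best_offset
\end{verbatim}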
The top panel of Figure \ref{Q1.3-D-E Mqa plots} explicitly shows that, at the end of the driving phase of simulation Q1.3-E ($t_E = 2P_0$, hence $t_\mathrm{merge} = - 3.9 P_0$), the mass-transfer rate was higher than it was at the end of the driving phase of simulation Q1.3-D ($t_D = 2 P_0$, hence $t_\mathrm{merge} = -10 P_0$). As was forecast, above, this in itself explains why the Q1.3-E simulation merged more quickly. Had we stopped ``driving'' the expansion of the donor somewhat sooner --- say, after only 1.5 orbits instead of after 2 orbits --- $\dot{M}_d$ would have been smaller at the onset of the mass-transfer phase of simulation Q1.3-E and it would have taken longer for the two stars to merge. Nevertheless, the mass-transfer phases of the two separate Q1.3 model simulations are remarkably similar and certainly the outcome of the evolutions (catastrophic merger) is the same. This supports our expectation that the endpoint of a mass-transfer evolution will be insensitive to the manner in which the binary system is initially brought into contact. \begin{figure}[!ht] \centering \includegraphics[scale=0.8,viewport=67 573 521 719,clip]{f6.ps} \caption{Two evolutions with $q_0=1.3$, in which mass transfer is initiated by angular momentum losses (Q1.3-D, top row) and by expansion of the donor (Q1.3-E, bottom row), lead to an almost identical merger. The entire Q1.3-E evolution is depicted in a mpeg animation in the online version of this article. The time shown in the movie is $t_{\rm E}$.} \label{Q1.3-D-E images compared} \end{figure} The top half of Figure \ref{Q1.3-D-E images compared} contains six frames from the animation sequence (mentioned earlier) that was generated to visually illustrate the complex fluid-dynamical flows that developed during simulation Q1.3-D. Four of these images are identical to ones that were presented earlier in Figure \ref{Q1.3-D images}, but here the evolutionary time $t_\mathrm{merge}$ has been used to identify when each image was drawn from the simulation. For comparison, images from simulation Q1.3-E are displayed in the bottom half of Figure~\ref{Q1.3-D-E images compared} at the same synchronized times, $t_\mathrm{merge}$. Each of the last two images in the bottom row of Figure~\ref{Q1.3-D-E images compared} is remarkably similar to its corresponding image in the top row of the Figure, given that they represent highly distorted structures that have developed from complex, nonlinear, mass-transfer evolutions and given that they have arisen at two quite different ``absolute'' times in the separate numerical simulations. (At earlier times, {\it e.g.}, $t_\mathrm{merge} = - 1 P_0$, the binary structures are also quite similar, although there is an identifiable orbital phase discrepancy due to the fact that the orbital separations and, hence, the orbital frequencies are slightly different in the two simulations.) The correspondence between these highly distorted structures at late times in these two separate simulations provides perhaps the strongest empirical evidence that the endpoint of a mass-transfer evolution will be insensitive to the manner in which a binary system is initially brought into contact. At the same time, the quantitative similarity between the late-time results of evolutions Q1.3-D and Q1.3-E serves as a strong, affirmative convergence test for our hydrocode. 
\subsection{Simulations with $M_{\rm d} = 0.5 M_{\rm a}$ Initially}\label{sec_0.5_results} As an example of mass transfer in binaries with $q<2/3$, we now discuss the results of three Q0.5-D simulations in which driving was applied for different times and compare the results with those obtained by RS95 for a DWD binary of identical mass ratio (hereafter, referred to as simulation Q0.5-RS). Building on our detailed analysis and description of the model Q1.3 evolutions, the Q0.5-D simulations can be described fairly concisely. In particular, Figures \ref{Q0.5a images}-\ref{Q0.5c images} display selected results from our Q0.5-D evolutions in the same manner in which results from simulation Q1.3-D were presented in Figures \ref{Q1.3-D images}-\ref{Q1.3-D-E Mqa plots}. \begin{figure}[!ht] \centering \includegraphics[scale=0.8,viewport=67 498 521 719,clip]{f7.ps} \caption{Same as Figure \ref{Q1.3-D images}, but for our Q0.5-Da evolution; images are separated in time by four initial orbital periods. Even after 32 orbits, the system continues to undergo mass transfer and there is no indication that the system will merge. The entire evolution is depicted in the mpeg animation accompanying the online article.} \label{Q0.5a images} \end{figure} \begin{deluxetable}{lcclcc} \tabletypesize{\scriptsize} \tablecaption{Initial Parameters for Model Q0.5\tablenotemark{\dag}} \tablewidth{0pt} \tablehead{ \colhead{System} & \colhead{Initial} & ~ & \colhead{Component} & ~ & ~ \\ \colhead{Parameter} & \colhead{SCF Value} & ~ & \colhead{Parameter} & \colhead{Donor} & \colhead{Accretor} } \startdata $q_0$ & 0.500 & ~~~~~~~~~~ & $M_i$ & $3.073\times 10^{-3}$ & $6.143\times 10^{-3}$ \\ $a_0$ & 0.8764 & ~~~~~~~~~~ & $\rho_i^\mathrm{max}$ & 0.235 & 1.0000 \\ $\Omega_0$ & 0.1174 & ~~~~~~~~~~ & $K_i$ & 0.016 & 0.016 \\ $J_\mathrm{tot}$ & $1.97\times 10^{-4}$ & ~~~~~~~~~~ & $V_i$ & 0.0814 & 0.0370 \\ ${R}_\mathrm{com}$ & $1.04\times 10^{-5}$ & ~~~~~~~~~~ & $V_i^\mathrm{RL}$ & 0.0906 & 0.2380 \\ \enddata \tablenotetext{\dag}{Parameter values are given here in dimensionless polytropic units. To scale these numbers to other (e.g., cgs) units, see the discussion in Appendix~\ref{unitsAppendix}.} \label{table0.5} \end{deluxetable} Initially, the donor star in our model Q0.5-D simulations was slightly detached from its Roche lobe: $V_{\rm d}/V^{\mathrm{\rm RL}}_{\mathrm{d}} = 0.90$ (Table \ref{table0.5}). In the first of our three Q0.5-D runs, henceforth simulation Q0.5-Da, in order to bring the system into contact and initiate the mass-transfer event, angular momentum was drained from the binary --- the ``driving'' phase of the simulation --- at a rate of 1\% per orbital period over 2.7 periods. The system was then allowed to evolve in the absence of driving --- the ``mass-transfer'' phase of the evolution --- since there was no driving present in the simulations of RS95 and because our primary objective was to see if mass transfer, once initiated, is unstable. In a second Q0.5-D simulation, subsequently referred to as Q0.5-Db, we extended the driving phase to 5.3 orbits at the same level of 1\% per orbit, and then let it evolve without driving. This allowed us to investigate the effects of deeper initial contact and a higher mass-transfer rate on the evolution of the binary. Finally, in a third simulation, referred to as Q0.5-Dc, we applied driving at the rate of 1\% per orbit throughout the evolution. 
The length of time over which angular momentum was extracted from the system in all three of these Q0.5-D simulations is reflected in panel (a) of Figure~\ref{Q0.5 J plots}. During the driving phase the angular momentum follows closely the dotted line, turning horizontal when the driving stops for evolutions Q0.5-Da (blue curve) and Q0.5-Db (green curve). The Q0.5-Dc evolution (red curve) is driven throughout, so it follows the dotted line to the end. As expected, the longer we applied driving, the shorter the time it took for the binary to evolve to a comparable mass transfer level and evolutionary stage. Consequently, the longest of these three simulations was Q0.5-Da (32 orbits), while Q0.5-Dc was the shortest (8.5 orbits). \begin{figure}[!ht] \centering \includegraphics[scale=0.8,viewport=12 16 465 592,clip]{f8.ps} \caption{The same quantities as in Figure \ref{Q1.3-D J plots}, but for the Q0.5-D evolutions: Q0.5-Da (blue), Q0.5-Db (green), and Q0.5-Dc (red). Note that the vertical scale for panel (d) showing $J_{\rm d}/J_{\rm tot}$ has been expanded for clarity.} \label{Q0.5 J plots} \end{figure} We focus first on Figure \ref{Q0.5a images}, which displays images of the mass-density distribution at selected time slices during the Q0.5-Da evolution, and on the blue curve in the bottom panel of Figure \ref{Q0.5 Mqa plots}, which shows the time-evolutionary behavior of the binary separation for this same model evolution. During the brief ``driving'' phase of the evolution, the separation decreases at a rate that is consistent with the rate at which angular momentum is being extracted from the system. An epicyclic oscillation with a period $\approx P_0$ and an amplitude of just over 1\% is present in the $a(t)$ curve; as before, we suspect this arises from initial conditions and driving. In addition there is another oscillation with a period of about $3P_0$ which appears to be related to a slow meandering of the position of the center of mass of the system (see the discussion, below, related to Figure \ref{Q0.5 xyRcom}). We note that these oscillations also appear --- although at a somewhat reduced amplitude --- in evolutions Q0.5-Db (green curve) and Q0.5-Dc (red curve), and they are mirrored in the mass transfer rates shown in the top panel of Figure \ref{Q0.5 Mqa plots}. After driving has been turned off in evolution Q0.5-Da, the separation hovers around a value $a \approx 0.95 a_0$ for more than twenty orbits, then the binary begins to separate and the mass-transfer rate (blue curve in the top panel of Figure \ref{Q0.5 Mqa plots}) levels off. The images in Figure \ref{Q0.5a images} reflect this $a(t)$ behavior. Initially the binary advances in the corotating frame as its orbital frequency increases. At late times, as the separation increases, the binary slows down, stalls, and, after some hesitation due to the various oscillations described above, moves in a retrograde sense when the orbital frequency falls below $\Omega_0$ at $t\approx 31 P_0$. (This behavior is quite clear in the $\approx 3800$-frame animation sequence from which the individual images shown here were extracted.) In summary, we have followed the Q0.5-Da evolution through more than 30 orbits in the presence of a steady mass-transfer stream and there is no indication that the system is going to merge. This result is significantly different from the Q0.5-RS evolution, which was violently unstable to mass transfer and led to tidal disruption of the donor within $\sim 5$ orbits.
\begin{figure}[!ht] \centering \includegraphics[scale=0.8,viewport=12 16 465 592,clip]{f9.ps} \caption{Same as Figure \ref{Q1.3-D-E Mqa plots} but for the Q0.5-D evolutions: Q0.5-Da (blue), Q0.5-Db (green), and Q0.5-Dc (red). All quantities are plotted versus the evolutionary time measured in units of the initial orbital period of the system. For evolutions Q0.5-Da and Q0.5-Db we also show in the top panel, as dashed lines, a 3-orbit boxcar average of the mass transfer rate.} \label{Q0.5 Mqa plots} \end{figure} In an effort to understand why our Q0.5-Da evolution differed from the Q0.5-RS evolution, we extended the initial ``driving'' phase to $5.3$ orbits (model Q0.5-Db) in order to bring the Roche lobe into deeper contact with the donor and bring the system to a higher mass-transfer rate as it entered the ``mass-transfer'' phase of its evolution. As the green curve in the bottom panel of Figure \ref{Q0.5 Mqa plots} shows, the orbital separation steadily decreased (as expected) during the extended ``driving'' phase of this evolution, but after the driving ceased the separation $a(t)$ evolved in a slightly different manner from the Q0.5-Da evolution. Specifically, the system did not hover at its minimum separation as long; it began separating and its mass-transfer rate began to level off after only $\approx 11 P_0$. We can understand this difference in behaviors if we refer to Eq.~(\ref{adot}). In the absence of tidal effects, Eq.~(\ref{adot}) predicts that the orbital separation should increase as soon as driving is interrupted. [Note that this result holds even when there is a significant consequential transfer of orbital angular momentum to the spin of the accretor --- see panels (b) and (c) of Figure \ref{Q0.5 J plots} where, as was seen in the Q1.3-D evolution, $J_\mathrm{a}(t)$ is practically a mirror image of $J_\mathrm{orb}(t)$.] However, after the driving phase has ended in evolution Q0.5-Da, the separation does not immediately begin to increase; in fact, a careful examination of the blue curve in the bottom panel of Figure \ref{Q0.5 Mqa plots} shows that the separation continues to decrease, albeit at a much reduced rate. This deviation from the ``expected'' behavior must be attributed to the tidal terms in Eq.~(\ref{adot}). The blue curve in panel (d) of Figure \ref{Q0.5 J plots} provides evidence that tides are at work: over the first approximately 14 orbits of evolution Q0.5-Da, the donor is also being spun up, storing part of the orbital angular momentum and thereby allowing the separation to continue to decrease slowly. This is readily understandable because the initial driving rapidly shrinks the binary, thereby increasing the orbital frequency. The spin of the donor lags initially and recovers gradually as it is spun up by tides. Some time after $J_{\rm d}$ peaks at $t\sim 14 P_0$, the binary separation finally begins to increase at $t\sim 18-20 P_0$, and the mass-transfer rate levels off. In evolution Q0.5-Db, the separation stops decreasing almost immediately after the driving phase has ended (see the green curve in Figure \ref{Q0.5 Mqa plots}). It increases slowly at first, then much more rapidly at the end of the simulation. Because the driving phase lasted longer in this evolution than in Q0.5-Da, the mass-transfer rate was able to climb to a sufficiently high level by the time driving was stopped to permit the ``$\dot{M}_d$'' term in Eq.~(\ref{adot}) to exceed the negative tidal terms.
At the end of both the Q0.5-Da and Q0.5-Db simulations, not only is the binary separation increasing, but the mass-transfer rate has leveled off. We speculate that if we were able to follow these evolutions significantly farther in time in the absence of driving, the magnitude of $\dot{M}_d$ would steadily decrease and we would find that the binary eventually detaches and mass-transfer ceases. We also conjecture that the mass of the remnant donor would be a decreasing function of the level and duration of the original phase of driving. Confirmation of these conjectures must await further improvements in our simulation tools (see further discussion, below). In order to investigate how an even higher mass-transfer rate might affect the evolution of this $q_0 = 0.5$ binary system, we performed a third simulation (model Q0.5-Dc) with continuous ``driving'' at a rate of 1\% per orbit. Because driving was never turned off, systemic angular momentum losses played a dominant role in dictating how the orbital parameters of the system varied throughout most of this evolution. As the red curve in the bottom panel of Figure \ref{Q0.5 Mqa plots} shows, the orbital separation steadily decreased for $\approx 8 P_0$ at a rate that would be predicted by the first term alone on the right-hand-side of Eq.~(\ref{adot}). As a consequence, the Roche lobe sank quite deep into the envelope of the donor and, as depicted by the red curve in the top panel of Figure \ref{Q0.5 Mqa plots}, the mass-transfer rate steadily grew throughout the evolution, reaching a level that was an order of magnitude higher than the maximum rate acquired in evolution Q0.5-Db and roughly two orders of magnitude higher than the maximum rate acquired in evolution Q0.5-Da. Even under these extreme conditions, however, the orbital separation eventually reached a minimum (at $t/P_0 \approx 8$) and the system began separating (red curve in the bottom panel of Figure \ref{Q0.5 Mqa plots}). Presumably this reversal occurred because the mass-transfer rate became large enough for the ``$\dot{M}_d$'' term in Eq.~(\ref{adot}) to finally dominate over systemic angular momentum losses. As the images in Figure \ref{Q0.5c images} illustrate, through approximately $7 P_0$, model Q0.5-Dc evolved through configurations that resemble those seen in the Q0.5-Da evolution (see Figure \ref{Q0.5a images}). But late in the evolution, the accreted material has formed a much more prominent, time-dependent, nonaxisymmetric disk-like structure around the accretor; and at the end of the simulation, the donor is being tidally ripped apart, even as the measured orbital separation is increasing. This evolution was terminated when a significant fraction of material from the tidally elongated donor hit the outer edge of the computational domain. Interestingly, the evolution of this ``driven'' system during its final 2-3 orbits bears a strong resemblance to the final 2-3 orbits of the published Q0.5-RS evolution. \begin{figure}[!ht] \centering \includegraphics[scale=0.8,viewport=12 480 600 700,clip]{f11.ps} \caption{Same as Figure \ref{Q0.5a images}, but for our Q0.5-Dc evolution; images are separated in time by one initial orbital period, except the last image. 
Since this binary is continuously driven, the separation and the orbital period decrease throughout, except perhaps at the very end.} \label{Q0.5c images} \end{figure} For evolutions Q0.5-Da and Q0.5-Db, we think that our results are more aptly described as the donor being gradually stripped of its mass and partially disrupted by tides rather than as a catastrophic merger of the binary. We suspect that, if these evolutions could be followed accurately beyond what the present version of our code is able to do, a sizeable portion of the donor would survive (final mass ratio $q\sim 0.3$) in an elliptical orbit. In fact, even in evolution Q0.5-Dc it appears that although the donor is largely tidally disrupted, a small remnant may survive at a larger radius. Unfortunately, because of the growing drift of the center of mass (see Figure~\ref{Q0.5 xyRcom}), we cannot follow evolutions Q0.5-Da and Q0.5-Db far enough to unambiguously show that a long-lived remnant of the donor survives the tidal disruption. Despite these limitations, our results are significantly different from the Q0.5-RS evolution, which was violently unstable to mass transfer and led to tidal disruption of the donor within $\sim 5$ orbits. As was briefly forecast in \S\ref{SecExpectations}, and as has been discussed in earlier paragraphs of this section of the paper, the results we have obtained are consistent with the behavior expected from Eq.~(\ref{adot}). They also are consistent with the results of integrations of orbit-averaged evolution equations to be described elsewhere \citep{GPF}. A possible reason that the Q0.5-RS binary was found by RS95 to be unstable to mass transfer is that the donor may have been in deeper contact with its critical Roche surface at the start of their simulation. To test this we have calculated the values of the separation $r$ (as defined in RS95) at the point where driving is terminated in our simulations and have compared them with the corresponding values in RS95. In case Q0.5-Da, $r\approx 4.0$, which is greater than the initial separation, $r = 3.9$, reported by RS95 for evolution Q0.5-RS (note that in RS95 the donor is in contact with its Roche lobe at the outset of the simulation). It appears as though the two stars are initially closer to each other in simulation Q0.5-RS and, hence, that the donor is initially in deeper contact with its critical Roche surface. In turn, this implies that the mass-transfer simulation conducted by RS95 started with a higher accretion rate. From Figure 13a of RS95, $\log{\vert \dot{M}_{\rm d}\vert}\approx -1$ after only 4 orbits, which is higher than the mass transfer rate we observed at any stage of our Q0.5-Da simulation. In case Q0.5-Db, $r\approx 3.8$, very close to the initial conditions of RS95, and yet the accretion rate appears to peak below $\log{\vert \dot{M}_{\rm d}\vert}\approx -1$. Finally, in run Q0.5-Dc, the separation appears to turn around at the end when $r\approx 3.6$, above the value at which we expect the tidal instability to set in. Unfortunately, because the donor bumps up against the edge of our computational domain at the end of our simulation, we cannot follow model Q0.5-Dc far enough to deduce its ultimate fate. However, the accretion rate is still growing rapidly while the orbital angular momentum is plummeting, suggesting a full tidal disruption and perhaps eventual merger. Determining whether the donor survives in this case remains an obvious goal for future simulations.
We also note that in the Q0.5-RS evolution, the distance between the centers of mass of the two stars decreases while the density maxima of the two components separate. This behavior seems odd, but can perhaps be understood if the mass in the stream is large enough to significantly shift the center of mass of the distorted donor in the downstream direction. In our Q0.5-D simulations the mass in the stream is always a small fraction of the donor's mass. As a result, both the distance between the centers of mass of the binary components and the separation of their density maxima increase at late times thus saving the donor from tidal disruption, except perhaps in the Q0.5-Dc case. It should be emphasized that, in our Q0.5-D simulations, the adopted initial driving rate was orders of magnitude larger than what one would expect in a realistic DWD binary. A milder driving favors stability since all effects discussed above will also be milder. With driving applied throughout the entire Q0.5-Dc evolution at the initial rate of 1\% per orbit, a tidal disruption may well be the final outcome, and it remains to be seen if any fraction of the donor survives. The behavior observed in this case comes closest qualitatively to the evolution reported by RS95 for the Q0.5-RS simulation. Finally, Figure \ref{Q0.5 xyRcom} shows the time-dependent meandering of the position of the center of mass of the Q0.5 binary system throughout our Q0.5-D simulations. Its radial distance from the cylindrical coordinate axis, $R_\mathrm{com}$ (bottom panel), and its associated equatorial-plane Cartesian coordinates ($x_\mathrm{com}$, top panel; and $y_\mathrm{com}$, middle panel) as viewed from a frame of reference rotating with the initial orbital frequency, $\Omega_0$, are plotted as a function of $t/P_0$. As was described by MTF and discussed in \S \ref{commotion} above, there is a tendency for an unequal-mass binary system to drift away from the cylindrical coordinate axis during a simulation. If left unchecked, some part of the binary would hit the boundary of the grid in a time $\sim 10 P_0$. During our Q0.5-D simulations the corrections described in \S \ref{commotion} have succeeded in confining the center of mass to within $\sim 1$ zone of the computational grid up to the time when the binary begins to separate rapidly and the donor approaches the boundary. In run Q0.5-Da the simulation remains well-behaved for over 30 orbits. This is significantly longer than any other self-consistent hydrodynamic simulation of a binary evolution with or without mass-transfer that we are aware of. Despite this success, however, there is still room for improvement. For example, it appears that the residual center-of-mass motion shown in Figure \ref{Q0.5 xyRcom} has been reflected in an undesirable way in other dynamical features of simulation Q0.5-D. Most noticeably, the oscillation that is seen in all three panels of Figure \ref{Q0.5 xyRcom} with a period $\sim 3 P_0$ and an amplitude $\sim \Delta R/a_0 = 0.9\%$ appears to be modulating the natural epicyclic oscillations that appear in the functions $a(t)$ and $\dot{M}_{\rm d}(t)$ throughout the mass-transfer phase of the evolution (see the bottom and top panels of Figure \ref{Q0.5 Mqa plots}). In addition, we have observed that, with the correction in place, as the binary begins to separate rapidly, a numerical instability can eventually cause the center of mass to rapidly spiral outward. We have stopped the Q0.5-D simulations before this instability sets in. 
\begin{figure}[!ht] \centering \includegraphics[scale=0.8,viewport=12 16 465 592,clip]{f10.ps} \caption{Drift of the center of mass during the Q0.5-D simulations: Q0.5-Da (blue), Q0.5-Db (green), and Q0.5-Dc (red). From top to bottom: the $x$ and $y$ components of the position of the center of mass in the corotating frame, and the cylindrical radial distance to the center of mass $R_{\rm com}$, as functions of $t/P_0$. The dotted lines at $\pm \Delta R$ show the extent of the innermost radial grid zone. The center of mass stays very nearly within this one radial grid zone throughout these evolutions.} \label{Q0.5 xyRcom} \end{figure} \section{Discussion and Conclusions} \label{Conclusions} This is the second in a series of papers that describe results of direct simulations of the dynamical evolution of unequal-mass binaries using a three-dimensional, finite-difference hydrodynamics technique. Our long-term goal is to gain a better understanding of the origin and survival of various classes of binaries --- including double white dwarfs, contact binaries and direct-impact accretors --- by accurately simulating hydrodynamical flows that arise when binaries of various kinds become semi-detached. In contrast to this, much of the related analytic and numerical work that has been carried out over the past decade has placed an emphasis on the onset of the tidal instability and on the coalescence and merger of nearly identical neutron stars. In these earlier studies, mass transfer events were modeled when necessary as a prelude to the almost inevitable merger. In fact, the inevitability of mergers seems to have been accepted also for double white dwarf binaries. While this may well be the case with massive white dwarfs, the results presented here already suggest that low-mass WD binaries may escape merger and survive as AM CVn systems \citep{TY96, NPVY01}. In the present paper, after describing improvements that have been made in the hydrodynamics code that was developed by and described in MTF, we have taken the first steps toward elucidating the role that is played by mass transfer in the outcome of the dynamical phase of evolution following first contact. We have been especially concerned with those binary systems in which the mass ratios and equations of state are such that Roche lobe contact occurs before a tidal instability, so we have focused on binaries with $n=3/2$ ($\gamma=5/3$) polytropic components \citep{RS95, UE1, UE2}. We have presented two evolutions of a system having an initial mass ratio $q_0 = 1.323$ that was dynamically unstable to mass transfer. In one evolution, mass transfer was initiated by removing angular momentum from the binary (Q1.3-D) while in the other evolution we expanded the donor star causing it to overflow its Roche lobe (Q1.3-E). While there were subtle differences in the structure of the binary and the depth of contact by the time driving was terminated, in both cases the binary was dynamically unstable and a merger was the final result. In fact, we demonstrated that if one looks at the last two or three orbital periods before the merger, the behavior of both evolutions is consistent with being the same. We therefore conclude that, for unstable systems, it does not matter how the system gets into contact; once the instability kicks in, the behavior is the same. 
This result has also served as a convergence test for our hydrodynamics code, since two evolutions with identical initial states that were brought into contact in different ways, and therefore had different detailed histories, ended up looking indistinguishable during the final stages leading to a merger. At the very end of the ``mass-transfer'' phase of both the Q1.3-D and Q1.3-E evolutions, the binary transferred orbital angular momentum to the spin of both components catastrophically, as the stars plunged towards one another and eventually merged. Overall the behavior we observed during the final few orbits agrees with the predictions of the tidal instability \citep{LRS1, LRS2, LRS3, LRS4, LRS5}. This suggests that one can recognize two epochs during the evolution of a $q_0 > q_{\rm stable}$ binary system that undergoes an episode of mass-transfer: over an extended period of time, the rate of mass transfer grows while the orbital angular momentum changes very slowly. This first instability is driven by an ever increasing depth of contact and thus can be considered a mass-transfer instability. Once the separation has been reduced sufficiently, the second instability sets in, during which the orbital angular momentum drops rapidly while the spin angular momenta increase, driven by tides. We have also presented detailed results from three mass-transfer evolutions (simulations Q0.5-Da, Db, and Dc) of a polytropic binary with an initial mass ratio $q_0=0.5$ in which the two components had the same specific entropy. Thus this initial state could represent a DWD system with components having the same composition. It also corresponds to one system that was simulated by \citet{RS95} using an SPH technique. In each of our simulations, the less massive star was driven into contact with its Roche lobe via the slow removal of orbital angular momentum, but in simulations Q0.5-Da and Q0.5-Db this artificial driving was turned off shortly after the mass-transfer event was initiated (after 2.7 and 5.3 initial orbital periods, respectively). Our results differ in some important ways from the results reported by RS95. Most significantly, in the two simulations that were evolved for an extended period of time in the absence of systemic angular momentum losses, we observe neither a merger nor a tidal disruption; in evolution Q0.5-Da (Db) our binary survives for more than 30 (14) orbital periods and at the end it is separating while the mass-transfer rate has leveled off. Via these extended simulations, we have demonstrated that our numerical tools permit us to accurately model accretion flows in dynamically evolving mass-transfer systems. We can, for example, analyze how angular momentum is exchanged between the orbit and the spin of the two stars during a phase of direct-impact accretion, and we can analyze how the structure of the accretor dynamically readjusts as relatively high specific angular momentum material is deposited onto its equatorial region. Studies of this type should assist in the determination of which simplifying assumptions are justified -- as well as which are not -- in models that attempt to describe extended phases of mass-transfer evolutions semi-analytically \citep{WebIben, Maet04}. By conducting a variety of related simulations, we ultimately hope to be able to determine what the critical mass ratio $q_\mathrm{stable}$ is that defines which binary systems are stable or unstable against mass-transfer.
We speculate that the outcomes of our Q0.5-Da and Q0.5-Db simulations were different from the Q0.5-RS simulation because the evolution presented in \citet{RS95} was started from a significantly deeper initial contact, and thus transferred a larger fraction of the mass before separating. In RS95, the maximum density of the donor appears to be moving away and yet the separation calculated as the distance between the centers of mass of the material in their respective Roche lobes seems to be decreasing. Therefore, our results could be described as reproducing some of the initial features of the SPH simulation by \citet{RS95} at a slower pace. Evidently, if the mass ratio of the binary is such that initially the system is unstable to mass transfer, the final fate of the binary will depend on whether during the initial phase of mass transfer the separation decreases sufficiently for the tidal instability to take over. In our Q0.5-Dc simulation, in which driving remained on throughout the evolution, it appears as though the system encounters the tidal instability; the last 2-3 orbits of this evolution resemble fairly closely the evolutionary behavior of the published Q0.5-RS simulation. However, in our Q0.5-Da and Q0.5-Db simulations, once the driving is cut off the tidal effects only succeed in delaying the tendency of the binary to separate as mass transfer proceeds by temporarily storing some orbital angular momentum in the spin of the donor. Once the separation begins to increase, tides become ineffective and the binary avoids the merger. Given an equation of state for the binary components, the mode of mass transfer, and the expected mass loss, if any, it is possible to make predictions about the evolutionary outcome after contact. However, our simulations suggest that the eventual fate of such a binary depends not only on the initial mass ratio $q_0$, but also on how far $q_0$ is above the appropriate $q_{\rm stable}$, and even on the rate of driving unless this is very slow. In other words, the detailed outcome depends on the non-linear development of the mass-transfer and tidal instabilities. Only when $q_0$ is well above $q_{\rm stable}$ will the evolution of mass transfer proceed rapidly enough to resemble qualitatively the predictions of the analytic solution derived in \citet{WebIben}. When $q_0$ is only slightly above $q_{\rm stable}$, non-linear effects come into play that make it possible for the system to survive the mass-transfer instability and avoid merger. We shall discuss these questions further in two forthcoming papers \citep{GPF, MDTF}. As has already been mentioned, the tools we have developed will also enable us in the future to investigate in more detail the hydrodynamics of mass transfer and the structures arising from this transfer, transient flows, oscillations, mixing and convection. We see already some of these features in our simulations and we think they warrant further investigation. \acknowledgements This work has been supported in part by NSF grants AST 04-07070 and PHY 03-26311, and in part through NASA's ATP program grants NAG5-8497 and NAG5-13430. The computations were performed primarily at NCSA through grant MCA98N043, which allocated resources on the Tungsten cluster, and on the SuperMike and SuperHelix clusters at LSU, which are operated by the Center for Computation and Technology (CCT). J. E. T. 
acknowledges support from the NSF-sponsored Institute for Pure and Applied Mathematics at UCLA, which provided an environment in May, 2005 that was conducive to writing significant portions of this manuscript. We thank the referee for many insightful comments and for encouraging us to perform some additional simulations.
\part{Elements of functional analysis} \section{Norms and seminorms} \label{norms, seminorms} \setcounter{equation}{0} Let $V$ be a vector space over the real numbers ${\bf R}$ or complex numbers ${\bf C}$. A nonnegative real-valued function $N(v)$ on $V$ is said to be a \emph{seminorm} on $V$ if \begin{equation} N(t \, v) = |t| \, N(v) \end{equation} for every $v \in V$ and $t \in {\bf R}$ or ${\bf C}$, as appropriate, and \begin{equation} N(v + w) \le N(v) + N(w) \end{equation} for every $v, w \in V$. Here $|t|$ denotes the absolute value of $t$ when $t$ is a real number, and the usual modulus of $t$ when $t$ is a complex number. A seminorm $N(v)$ on $V$ is said to be a \emph{norm} if $N(v) > 0$ for every $v \in V$ with $v \ne 0$. Of course, the absolute value defines a norm on ${\bf R}$, and the modulus defines a norm on ${\bf C}$. As a basic class of examples, let $E$ be a nonempty set, and let $V$ be the vector space of real or complex-valued functions on $E$, with respect to pointwise addition and scalar multiplication. If $x \in E$ and $f \in V$, then \begin{equation} N_x(f) = |f(x)| \end{equation} defines a seminorm on $V$. Let $\ell^\infty(E)$ be the linear subspace of $V$ consisting of bounded functions on $E$, which may be denoted $\ell^\infty(E, {\bf R})$ or $\ell^\infty(E, {\bf C})$ to indicate whether the functions are real or complex-valued. It is easy to see that \begin{equation} \label{||f||_infty = sup_{x in E} |f(x)|} \|f\|_\infty = \sup_{x \in E} |f(x)| \end{equation} defines a norm on $\ell^\infty(E)$. \section{Norms and metrics} \label{norms, metrics} \setcounter{equation}{0} Let $V$ be a vector space over the real or complex numbers, and let $\|v\|$ be a norm on $V$. It is easy to see that \begin{equation} d(v, w) = \|v - w\| \end{equation} defines a metric on $V$, using the corresponding properties of a norm. More precisely, $d(v, w)$ is a nonnegative real-valued function defined for $v, w \in V$ which is equal to $0$ if and only if $v = w$, $d(v, w)$ is symmetric in $v$ and $w$, and \begin{equation} d(v, z) \le d(v, w) + d(w, z) \end{equation} for every $v, w, z \in V$. Thus open and closed subsets of $V$, convergence of sequences, and so on may be defined as in the context of metric spaces. Moreover, one can check that the topology on $V$ determined by the metric associated to the norm is compatible with the algebraic structure corresponding to the vector space operations. This means that addition of vectors is continuous as a mapping from the Cartesian product of $V$ with itself into $V$, and that scalar multiplication is continuous as a mapping from the Cartesian product of ${\bf R}$ or ${\bf C}$ with $V$ into $V$. This can also be described in terms of the convergence of a sum of two convergent sequences in $V$, and the convergence of a product of a convergent sequence in ${\bf R}$ or ${\bf C}$ with a convergent sequence in $V$. \section{Seminorms and topologies} \label{seminorms, topologies} \setcounter{equation}{0} Let $V$ be a real or complex vector space, and let $\mathcal{N}$ be a collection of seminorms on $V$. A set $U \subseteq V$ is said to be open with respect to $\mathcal{N}$ if for each $u \in U$ there are finitely many seminorms $N_1, \ldots, N_l \in \mathcal{N}$ and positive real numbers $r_1, \ldots, r_l$ such that \begin{equation} \{v \in V : N_j(u - v) < r_j, \, j = 1, \ldots, l\} \subseteq U. \end{equation} It is easy to see that this defines a topology on $V$.
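To spell out one step of this last claim, suppose that $U$, $U'$ are open with respect to $\mathcal{N}$ and that $u \in U \cap U'$. By definition, there are finitely many seminorms $N_1, \ldots, N_l, N_1', \ldots, N_m' \in \mathcal{N}$ and positive real numbers $r_1, \ldots, r_l, r_1', \ldots, r_m'$ such that the corresponding conditions hold for $U$ and for $U'$ separately, and hence
\begin{equation}
\{v \in V : N_j(u - v) < r_j, \, N_k'(u - v) < r_k', \ j = 1, \ldots, l, \ k = 1, \ldots, m\} \subseteq U \cap U',
\end{equation}
which is again a condition of the required form, involving only finitely many elements of $\mathcal{N}$. Arbitrary unions of open sets are open for trivial reasons, as are $\emptyset$ and $V$ itself.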
If $u \in V$, $N \in \mathcal{N}$, and $r > 0$, then one can check that the corresponding ball \begin{equation} \{v \in V : N(u - v) < r\} \end{equation} is an open set in $V$, using the triangle inequality. By construction, the collection of these open balls is a subbase for the topology on $V$ associated to $\mathcal{N}$. Let us say that $\mathcal{N}$ is \emph{nice} if for every $v \in V$ with $v \ne 0$ there is an $N \in \mathcal{N}$ such that $N(v) > 0$. This is equivalent to the condition that $\{0\}$ be a closed set in $V$ with respect to the topology associated to $\mathcal{N}$, which is to say that $V \backslash \{0\}$ is an open set in this topology. If $\mathcal{N}$ is nice, then the topology on $V$ associated to $\mathcal{N}$ is Hausdorff. If $\|v\|$ is a norm on $V$, then the collection of seminorms on $V$ consisting only of $\|v\|$ is nice, and the corresponding topology on $V$ is the same as the one determined by the metric associated to $\|v\|$, as in the previous section. If $\mathcal{N}$ is any collection of seminorms on $V$, then addition of vectors defines a continuous mapping from $V \times V$ into $V$, and scalar multiplication defines a continuous mapping from ${\bf R} \times V$ or ${\bf C} \times V$, as appropriate, into $V$. Thus $V$ is a \emph{topological vector space}, at least when $\mathcal{N}$ is nice, since it is customary to ask that $\{0\}$ be a closed set in a topological vector space. In particular, a vector space with a norm is a topological vector space, with respect to the topology determined by the metric associated to the norm, as in the previous section. If $V$ is the space of real or complex-valued functions on a nonempty set $E$, and if $\mathcal{N}$ is the collection of seminorms of the form $N_x(f) = |f(x)|$, $x \in E$, as in Section \ref{norms, seminorms}, then $\mathcal{N}$ is a nice collection of seminorms on $V$. In this case, $V$ can be identified with a Cartesian product of copies of ${\bf R}$ or ${\bf C}$, indexed by $E$, and the topology on $V$ associated to $\mathcal{N}$ is the same as the product topology. \section{Convergent sequences} \label{convergent sequences} \setcounter{equation}{0} Remember that a sequence of elements $\{x_j\}_{j = 1}^\infty$ of a topological space $X$ is said to converge to an element $x$ of $X$ if for every open set $U$ in $X$ with $x \in U$ there is an $L \ge 1$ such that \begin{equation} x_j \in U \end{equation} for each $j \ge L$. If the topology on $X$ is determined by a metric $d(x, y)$, then this is equivalent to the condition that \begin{equation} \lim_{j \to \infty} d(x_j, x) = 0. \end{equation} Similarly, if $V$ is a real or complex vector space with a norm $\|\cdot \|$, and if $\{v_j\}_{j = 1}^\infty$ is a sequence of elements of $V$, then $\{v_j\}_{j = 1}^\infty$ converges to another element $v$ of $V$ when \begin{equation} \lim_{j \to \infty} \|v_j - v\| = 0. \end{equation} If instead the topology on $V$ is determined by a collection $\mathcal{N}$ of seminorms on $V$, then $\{v_j\}_{j = 1}^\infty$ converges to $v$ when \begin{equation} \lim_{j \to \infty} N(v_j - v) = 0 \end{equation} for every $N \in \mathcal{N}$. In these last two cases, $\{v_j\}_{j = 1}^\infty$ converges to $v$ if and only if $\{v_j - v\}_{j = 1}^\infty$ converges to $0$. 
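As a concrete example, let $V$ be the space of real or complex-valued functions on a nonempty set $E$ again, with the collection of seminorms $N_x(f) = |f(x)|$, $x \in E$. A sequence $\{f_j\}_{j = 1}^\infty$ of functions on $E$ converges to $f$ with respect to the associated topology exactly when
\begin{equation}
\lim_{j \to \infty} |f_j(x) - f(x)| = 0
\end{equation}
for every $x \in E$, which is to say that $\{f_j\}_{j = 1}^\infty$ converges to $f$ pointwise on $E$. This is consistent with the identification of this topology with the product topology mentioned in Section \ref{seminorms, topologies}.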
A topological space $X$ has a countable local base for the topology at $x \in X$ if there is a sequence $U_1(x), U_2(x), \ldots$ of open subsets of $X$ such that $x \in U_l(x)$ for each $l$, and for each open set $U \subseteq X$ with $x \in U$ there is an $l \ge 1$ such that $U_l(x) \subseteq U$. In this case, one can also ask that $U_{l + 1}(x) \subseteq U_l(x)$ for each $l$, by replacing $U_l(x)$ with the intersection of $U_1(x), \ldots, U_l(x)$ if necessary. Under this condition, if $x$ is in the closure of a set $E \subseteq X$, then there is a sequence of elements of $E$ that converges to $x$. Otherwise, one may have to use nets or filters instead of sequences. Of course, the limit of a convergent sequence of elements of a set $E \subseteq X$ is in the closure of $E$ in any topological space $X$. If $X$ has a countable local base for the topology at each point, then the closed subsets of $X$ can be characterized in terms of convergent sequences, as in the previous paragraph. Equivalently, the topology on $X$ is determined by convergence of sequences. If the topology on $X$ is defined by a metric, then $X$ automatically satisfies this condition, with $U_l(x)$ equal to the open ball centered at $x$ with radius $1/l$. In particular, this applies to a real or complex vector space $V$ with a norm. Suppose that the topology on $V$ is given by a nice collection $\mathcal{N}$ of seminorms. If $\mathcal{N}$ consists of only finitely many seminorms $N_1, \ldots, N_l$, then \begin{equation} \|v\| = \max_{1 \le j \le l} N_j(v) \end{equation} is a norm on $V$, and the topology on $V$ associated to $\mathcal{N}$ is the same as the one associated to $\|v\|$. If $\mathcal{N}$ consists of an infinite sequence $N_1, N_2, \ldots$ of seminorms and $v \in V$, then \begin{equation} U_l(v) = \{w \in V : N_1(v - w), \ldots, N_l(v - w) \le 1/l\} \end{equation} is a countable local base for the topology of $V$ at $v$. Conversely, suppose that $U_1, U_2, \ldots$ is a sequence of open subsets of $V$ such that $0 \in U_l$ for each $l$, and for each open set $U$ in $V$ with $0 \in U$ there is an $l \ge 1$ such that $U_l \subseteq U$. By the definition of the topology on $V$ associated to $\mathcal{N}$, for each $l \ge 1$ there are finitely many seminorms $N_{l, 1}, \ldots, N_{l, n_l} \in \mathcal{N}$ and positive real numbers $r_{l, 1}, \ldots, r_{l, n_l}$ such that \begin{equation} \{v \in V : N_{l, j}(v) < r_{l, j}, \ j = 1, \ldots, n_l\} \subseteq U_l. \end{equation} If $\mathcal{N}'$ is the collection of seminorms of the form $N_{l, j}$, $1 \le j \le n_l$, $l \ge 1$, then $\mathcal{N}'$ is a subset of $\mathcal{N}$ with only finitely or countably many elements. One can also check that the topology on $V$ determined by $\mathcal{N}'$ is the same as the topology on $V$ determined by $\mathcal{N}$. \section{Metrizability} \label{metrizability} \setcounter{equation}{0} Let $X$ be a set, and let $\rho(x, y)$ be a nonnegative real-valued function defined for $x, y \in X$. We say that $\rho(x, y)$ is a \emph{semimetric} on $X$ if it satisfies the same conditions as a metric, except that $\rho(x, y)$ may be equal to $0$ even when $x \ne y$. Thus $\rho(x, y)$ is a semimetric if $\rho(x, x) = 0$ for each $x \in X$, \begin{equation} \rho(x, y) = \rho(y, x) \end{equation} for every $x, y \in X$, and \begin{equation} \rho(x, z) \le \rho(x, y) + \rho(y, z) \end{equation} for every $x, y, z \in X$. 
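A simple example is given by $X = {\bf R}^2$ and $\rho(x, y) = |x_1 - y_1|$, where $x = (x_1, x_2)$ and $y = (y_1, y_2)$. This satisfies all of the conditions just mentioned, but it is not a metric, because $\rho(x, y) = 0$ whenever $x$ and $y$ have the same first coordinate.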
If $N$ is a seminorm on a real or complex vector space $V$, then \begin{equation} \label{rho(v, w) = N(v - w)} \rho(v, w) = N(v - w) \end{equation} defines a semimetric on $V$. If $\rho(x, y)$ is a semimetric on a set $X$ and $t$ is a positive real number, then \begin{equation} \label{rho_t(x, y) = min (rho(x, y), t)} \rho_t(x, y) = \min (\rho(x, y), t) \end{equation} is also a semimetric on $X$. The main point is that $\rho_t(x, y)$ also satisfies the triangle inequality, since $\rho(x, y)$ does. If $\rho(x, y)$ is a metric on $X$, then $\rho_t(x, y)$ is too, and they determine the same topology on $X$. Let $V$ be a real or complex vector space, and let $\mathcal{N}$ be a nice collection of seminorms on $V$. If $\mathcal{N}$ consists of only finitely many seminorms, then their maximum is a norm on $V$ which determines the same topology on $V$ as $\mathcal{N}$, as in the preceding section. If $\mathcal{N}$ consists of an infinite sequence of seminorms $N_1, N_2, \ldots$, then \begin{equation} d(v, w) = \max_{l \ge 1} \min(N_l(v - w), 1/l) \end{equation} defines a metric on $V$ that determines the same topology on $V$ as $\mathcal{N}$. More precisely, if $v = w$, then $N_l(v - w) = 0$ for each $l$, and so $d(v, w) = 0$. If $v \ne w$, then $N_j(v - w) > 0$ for some $j$, because $\mathcal{N}$ is nice, and \begin{equation} \min(N_l(v - w), 1/l) \le 1/l < N_j(v - w) \end{equation} for all but finitely many $l$, so that the maximum in the definition of $d(v, w)$ always exists. This also shows that $d(v, w) > 0$ when $v \ne w$, and $d(v, w)$ is obviously symmetric in $v$ and $w$. It is not difficult to check that $d(v, w)$ satisfies the triangle inequality, using the fact that \begin{equation} \min (N_l(v - w), 1/l) \end{equation} satisfies the triangle inequality for each $l$, as in the previous paragraphs. If $r$ is a positive real number, then $d(v, w) < r$ if and only if $N_l(v - w) < r$ when $l \le 1/r$, and one can use this to show that $d(v, w)$ determines the same topology on $V$ as $\mathcal{N}$. Suppose now that $\mathcal{N}$ is a nice collection of seminorms on $V$, and that there is a countable local base for the topology on $V$ associated to $\mathcal{N}$ at $0$. This implies that there is a subset $\mathcal{N}'$ of $\mathcal{N}$ with only finitely or countably many elements that determines the same topology on $V$, as in the preceding section. It follows that there is a metric on $V$ that determines the same topology on $V$, as in the previous paragraph. Note that this metric is invariant under translations on $V$, since it depends only on $v - w$. \section{Comparing topologies} \label{comparing topologies} \setcounter{equation}{0} Let $V$ be a real or complex vector space, and let $\mathcal{N}$, $\mathcal{N}'$ be collections of seminorms on $V$. Suppose that every open set in $V$ with respect to $\mathcal{N}'$ is also an open set with respect to $\mathcal{N}$. If $N' \in \mathcal{N}'$, then it follows that the open unit ball with respect to $N'$ is an open set with respect to $\mathcal{N}$. This implies that there are finitely many seminorms $N_1, \ldots, N_l \in \mathcal{N}$ and positive real numbers $r_1, \ldots, r_l$ such that \begin{equation} \label{{v in V : N_j(v) < r_j, j = 1, ldots, l} subseteq {v in V : N'(v) < 1}} \{v \in V : N_j(v) < r_j, \ j = 1, \ldots, l\} \subseteq \{v \in V : N'(v) < 1\}, \end{equation} since $0$ is an element of the open unit ball corresponding to $N'$. 
Equivalently, \begin{equation} N'(v) < 1 \quad\hbox{when}\quad \max_{1 \le j \le l} r_j^{-1} \, N_j(v) < 1, \end{equation} and so \begin{equation} N'(v) \le \max_{1 \le j \le l} r_j^{-1} \, N_j(v) \end{equation} for every $v \in V$. This implies in turn that \begin{equation} \label{N'(v) le C max_{1 le j le l} N_j(v)} N'(v) \le C \max_{1 \le j \le l} N_j(v) \end{equation} for every $v \in V$, where $C$ is the maximum of $r_1^{-1}, \ldots, r_l^{-1}$. Conversely, if for every $N' \in \mathcal{N}'$ there are finitely many seminorms $N_1, \ldots, N_l \in \mathcal{N}$ such that (\ref{N'(v) le C max_{1 le j le l} N_j(v)}) holds for some $C \ge 0$, then every open set with respect to $\mathcal{N}'$ is also open with respect to $\mathcal{N}$. Of course, one can interchange the roles of $\mathcal{N}$ and $\mathcal{N}'$, so that they determine the same topology on $V$ if and only if $\mathcal{N}$ and $\mathcal{N}'$ both satisfy this condition relative to the other. Let us apply this to the case where $\mathcal{N}'$ consists of a single norm $\|v\|$. If every open set in $V$ with respect to this norm is also an open set with respect to $\mathcal{N}$, then there are finitely many seminorms $N_1, \ldots, N_l \in \mathcal{N}$ such that \begin{equation} \|v\| \le C \max_{1 \le j \le l} N_j(v) \end{equation} for some $C > 0$ and every $v \in V$. In particular, \begin{equation} \|v\|' = \max_{1 \le j \le l} N_j(v) \end{equation} is also a norm on $V$ in this case. Similarly, if every open set in $V$ with respect to $\mathcal{N}$ is also an open set with respect to $\|v\|$, then for each $N \in \mathcal{N}$ there is a $C(N) \ge 0$ such that \begin{equation} N(v) \le C(N) \, \|v\| \end{equation} for every $v \in V$. If the topologies on $V$ associated to $\mathcal{N}$ and $\|v\|$ are the same, then $\|v\|'$ also determines the same topology on $V$. As a basic class of examples, let $V$ be the vector space of real or complex-valued functions on a nonempty set $E$, and let $\mathcal{N}$ be the collection of seminorms on $V$ of the form $N_x(f) = |f(x)|$, $x \in E$. If there is a norm $\|v\|$ on $V$ such that the open unit ball in $V$ with respect to $\|v\|$ is an open set with respect to $\mathcal{N}$, then it follows that the maximum of finitely many elements of $\mathcal{N}$ is a norm on $V$, as in the previous paragraphs. This implies that $E$ has only finitely many elements. Conversely, if $E$ has only finitely many elements, then the maximum of $N_x(f)$, $x \in E$, is a norm on $V$ that determines the same topology. Note that the topology on $V$ is metrizable if and only if $E$ has only finitely or countably many elements, as in the preceding section. Now let $E$ be the set ${\bf Z}_+$ of positive integers, and let $V$ be the vector space of real or complex-valued functions on ${\bf Z}_+$ that are rapidly decreasing in the sense that $f(j)$ is bounded by a constant multiple of $j^{-k}$ for each nonnegative integer $k$. Put \begin{equation} \label{N_k(f) = sup_{j ge 1} j^k |f(j)|} N_k(f) = \sup_{j \ge 1} j^k \, |f(j)| \end{equation} for each $k \ge 0$, which is a norm on $V$ that reduces to the $\ell^\infty$ norm when $k = 0$ and is monotone increasing in $k$. It is easy to see that the topology on $V$ associated to this collection of norms is not determined by finitely many of these norms. Hence the topology on $V$ associated to this collection of norms is not determined by any single norm at all. However, this topology is metrizable, as in the preceding section. 
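To make the first of these statements explicit, fix $k \ge 0$, and for each positive integer $m$ let $f_m$ be the function on ${\bf Z}_+$ defined by $f_m(m) = m^{-k - 1}$ and $f_m(j) = 0$ when $j \ne m$. Then
\begin{equation}
N_i(f_m) = m^{i - k - 1} \le m^{-1} \quad\hbox{for } i = 0, 1, \ldots, k,
\end{equation}
while $N_{k + 1}(f_m) = 1$ for every $m$. Thus $N_{k + 1}$ is not bounded by a constant multiple of $\max_{0 \le i \le k} N_i$, and so, by the criterion described at the beginning of this section, the topology on $V$ determined by $N_0, \ldots, N_k$ is strictly weaker than the topology determined by the whole collection.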
\section{Continuous linear functionals} \label{continuous linear functionals} \setcounter{equation}{0} Let $V$ be a real or complex vector space with a nice collection of seminorms $\mathcal{N}$. As usual, a linear functional on $V$ is a linear mapping from $V$ into the real or complex numbers, as appropriate. Let $V^*$ be the space of linear functionals on $V$ that are continuous with respect to the topology on $V$ determined by $\mathcal{N}$. This may be described as the topological dual of $V$, to distinguish it from the algebraic dual of all linear functionals on $V$. These dual spaces are also vector spaces over the real or complex numbers, as appropriate, using pointwise addition and scalar multiplication of functions. If $\lambda \in V^*$, then the set of $v \in V$ such that $|\lambda(v)| < 1$ is open, because $\lambda$ is continuous. Of course, $0$ is an element of this set, because $\lambda(0) = 0$. It follows that there are finitely many seminorms $N_1, \ldots, N_l \in \mathcal{N}$ and positive real numbers $r_1, \ldots, r_l$ such that \begin{equation} \{v \in V : N_j(v) < r_j, \ j = 1, \ldots, l\} \subseteq \{v \in V : |\lambda(v)| < 1\}. \end{equation} As in the previous section, this implies that \begin{equation} |\lambda(v)| \le \max_{1 \le j \le l} r_j^{-1} \, N_j(v) \end{equation} for every $v \in V$. In particular, if $C$ is the maximum of $r_1^{-1}, \ldots, r_l^{-1}$, then \begin{equation} \label{|lambda(v)| le C max_{1 le j le l} N_j(v)} |\lambda(v)| \le C \max_{1 \le j \le l} N_j(v) \end{equation} for every $v \in V$. Conversely, suppose that $\lambda$ is a linear functional on $V$ for which there are finitely many seminorms $N_1, \ldots, N_l \in \mathcal{N}$ and a nonnegative real number $C$ such that (\ref{|lambda(v)| le C max_{1 le j le l} N_j(v)}) holds. In this case, \begin{equation} \label{|lambda(v) - lambda(w)| = ... le C max_{1 le j le l} N_j(v - w)} |\lambda(v) - \lambda(w)| = |\lambda(v - w)| \le C \max_{1 \le j \le l} N_j(v - w) \end{equation} for every $v, w \in V$, because $\lambda$ is linear. It is easy to see that $\lambda$ is continuous on $V$ with respect to the topology associated to $\mathcal{N}$ under these conditions. More precisely, for each $v \in V$ and $\epsilon > 0$, we have that \begin{equation} |\lambda(v) - \lambda(w)| < \epsilon \end{equation} for every $w \in V$ such that $N_j(v - w) < C^{-1} \, \epsilon$ for $j = 1, \ldots, l$. Remember that open balls defined in terms of seminorms in $\mathcal{N}$ are automatically open sets with respect to $\mathcal{N}$, as in Section \ref{seminorms, topologies}. If the topology on $V$ is determined by a single norm $\|v\|$, then the previous discussion can be simplified. If $\lambda$ is a continuous linear functional on $V$, then $|\lambda(v)| < 1$ on an open ball around $0$ in $V$. As before, this implies that there is a nonnegative real number $C$ such that \begin{equation} \label{|lambda(v)| le C ||v||} |\lambda(v)| \le C \, \|v\| \end{equation} for every $v \in V$. Conversely, if $\lambda$ is a linear functional on $V$ that satisfies (\ref{|lambda(v)| le C ||v||}) for some $C \ge 0$, then \begin{equation} |\lambda(v) - \lambda(w)| = |\lambda(v - w)| \le C \, \|v - w\| \end{equation} for every $v, w \in V$, because of linearity. This clearly implies that $\lambda$ is continuous with respect to the metric $d(v, w) = \|v - w\|$ associated to $V$, as in Section \ref{norms, metrics}. 
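As a simple example, let $V = \ell^\infty(E)$ for a nonempty set $E$, with the norm $\|f\|_\infty$ from Section \ref{norms, seminorms}. For each $x \in E$, the evaluation functional $\lambda_x(f) = f(x)$ is linear and satisfies
\begin{equation}
|\lambda_x(f)| = |f(x)| \le \|f\|_\infty
\end{equation}
for every $f \in \ell^\infty(E)$, so that $\lambda_x$ is a continuous linear functional on $\ell^\infty(E)$, with $C = 1$ in (\ref{|lambda(v)| le C ||v||}).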
\section{${\bf R}^n$ and ${\bf C}^n$} \label{R^n, C^n} \setcounter{equation}{0} Let $n$ be a positive integer, and let ${\bf R}^n$, ${\bf C}^n$ be the space of $n$-tuples of real and complex numbers, respectively. As usual, these are vector spaces with respect to coordinatewise addition and scalar multiplication. Put \begin{equation} \|v\|_\infty = \max_{1 \le j \le n} |v_j| \end{equation} for each $v = (v_1, \ldots, v_n) \in {\bf R}^n$ or ${\bf C}^n$. It is easy to see that this defines a norm on ${\bf R}^n$, ${\bf C}^n$, for which the corresponding topology is the standard topology. The latter is the same as the product topology on ${\bf R}^n$, ${\bf C}^n$ as the Cartesian product of $n$ copies of ${\bf R}$, ${\bf C}$, with their standard topologies. Another simple norm on ${\bf R}^n$, ${\bf C}^n$ is given by \begin{equation} \|v\|_1 = \sum_{j = 1}^n |v_j|. \end{equation} Note that \begin{equation} \|v\|_\infty \le \|v\|_1 \end{equation} for every $v \in {\bf R}^n$ or ${\bf C}^n$. Similarly, \begin{equation} \|v\|_1 \le n \, \|v\|_\infty \end{equation} for each $v \in {\bf R}^n$, ${\bf C}^n$. It follows that $\|v\|_1$ also determines the standard topology on ${\bf R}^n$, ${\bf C}^n$. If $a_1, \ldots, a_n$ are real or complex numbers, then \begin{equation} \label{lambda(v) = sum_{j = 1}^n a_j v_j} \lambda(v) = \sum_{j = 1}^n a_j \, v_j \end{equation} defines a linear functional on ${\bf R}^n$ or ${\bf C}^n$, as appropriate. It is easy to see that $\lambda$ is continuous with respect to the standard topology on ${\bf R}^n$ or ${\bf C}^n$. Of course, every linear functional on ${\bf R}^n$, ${\bf C}^n$ is of this form. More precisely, if $\lambda$ is any linear functional on ${\bf R}^n$ or ${\bf C}^n$, then $\lambda$ can be expressed as in (\ref{lambda(v) = sum_{j = 1}^n a_j v_j}), with \begin{equation} \label{a_j = lambda(e_j)} a_j = \lambda(e_j) \end{equation} for each $j$, where $e_1, \ldots, e_n$ are the standard basis vectors in ${\bf R}^n$, ${\bf C}^n$. These are defined by taking the $l$th component of $e_j$ equal to $1$ when $j = l$ and $0$ otherwise, so that \begin{equation} v = \sum_{j = 1}^n v_j \, e_j \end{equation} for each $v \in {\bf R}^n$ or ${\bf C}^n$. If $N$ is any seminorm on ${\bf R}^n$ or ${\bf C}^n$, then \begin{equation} N(v) = N\Big(\sum_{j = 1}^n v_j \, e_j\Big) \le \sum_{j = 1}^n N(e_j) \, |v_j|. \end{equation} This implies that \begin{equation} \label{N(v) le (sum_{j = 1}^n N(e_j)) ||v||_infty} N(v) \le \Big(\sum_{j = 1}^n N(e_j)\Big) \, \|v\|_\infty \end{equation} and \begin{equation} \label{N(v) le (max_{1 le j le n} N(e_j)) ||v||_1} N(v) \le \Big(\max_{1 \le j \le n} N(e_j)\Big) \, \|v\|_1 \end{equation} for every $v \in {\bf R}^n$ or ${\bf C}^n$. Thus $N$ is automatically bounded by constant multiples of the basic norms $\|v\|_\infty$, $\|v\|_1$. Using the triangle inequality, we get that \begin{equation} N(v) - N(w) \le N(v - w) \end{equation} and \begin{equation} N(w) - N(v) \le N(v - w) \end{equation} for every $v, w \in {\bf R}^n$ or ${\bf C}^n$, as appropriate. It follows that \begin{equation} \label{|N(v) - N(w)| le N(v - w)} |N(v) - N(w)| \le N(v - w) \end{equation} for every $v$, $w$. Combining this with the estimates in the previous paragraph, we get that $N$ is continuous as a real-valued function on ${\bf R}^n$ or ${\bf C}^n$. Suppose now that $N$ is a norm on ${\bf R}^n$ or ${\bf C}^n$. The set of $v \in {\bf R}^n$ or ${\bf C}^n$ with $\|v\|_\infty = 1$ is closed and bounded, and hence compact, with respect to the standard topology. 
Because $N$ is continuous, it attains its minimum on this set, which is therefore positive. Hence there is a positive real number $c$ such that \begin{equation} N(v) \ge c \end{equation} when $\|v\|_\infty = 1$, which implies that \begin{equation} \label{N(v) ge c ||v||_infty} N(v) \ge c \, \|v\|_\infty \end{equation} for every $v \in {\bf R}^n$ or ${\bf C}^n$, as appropriate, by homogeneity. We already know from (\ref{N(v) le (sum_{j = 1}^n N(e_j)) ||v||_infty}) that $N(v)$ is bounded from above by a constant multiple of $\|v\|_\infty$, and we may now conclude that the topology on ${\bf R}^n$ or ${\bf C}^n$ determined by $N$ is the same as the standard topology. Let $\mathcal{N}$ be any nice collection of seminorms on ${\bf R}^n$ or ${\bf C}^n$, and let us check that the topology on ${\bf R}^n$ or ${\bf C}^n$ associated to $\mathcal{N}$ is the same as the standard topology. Let $N_1$ be an element of $\mathcal{N}$ that is not identically zero. If $N_1$ is a norm, then we stop, and otherwise we choose $N_2 \in \mathcal{N}$ such that $N_2(v) > 0$ for some $v \in {\bf R}^n$ or ${\bf C}^n$ with $v \ne 0$ and $N_1(v) = 0$. Note that the set of $v \in {\bf R}^n$ or ${\bf C}^n$ such that $N_1(v) = 0$ is a proper linear subspace of ${\bf R}^n$ or ${\bf C}^n$. If this linear subspace contains a nonzero element, then the set of $v \in {\bf R}^n$ or ${\bf C}^n$ such that $N_1(v) = N_2(v) = 0$ is a proper linear subspace of it. By repeating the process, we get finitely many seminorms $N_1, \ldots, N_l \in \mathcal{N}$ with $l \le n$ whose maximum defines a norm on ${\bf R}^n$ or ${\bf C}^n$, as appropriate. The topology on ${\bf R}^n$ or ${\bf C}^n$ associated to this norm is the same as the standard topology, as before. It follows that the topology on ${\bf R}^n$ or ${\bf C}^n$ associated to $\mathcal{N}$ is the same as the standard topology, since every seminorm on ${\bf R}^n$, ${\bf C}^n$ is bounded by a constant multiple of the usual norms $\|v\|_\infty$,$\|v\|_1$. \section{Weak topologies} \label{weak topologies} \setcounter{equation}{0} Let $V$ be a real or complex vector space. If $\lambda$ is any linear functional on $V$, then \begin{equation} \label{N_lambda(v) = |lambda(v)|} N_\lambda(v) = |\lambda(v)| \end{equation} defines a seminorm on $V$. Let $\Lambda$ be a collection of linear functionals on $V$, and let $\mathcal{N}(\Lambda)$ be the corresponding collection of seminorms $N_\lambda$, $\lambda \in \Lambda$. If $\Lambda$ is nice in the sense that for each $v \in V$ with $v \ne 0$ there is a $\lambda \in \Lambda$ such that $\lambda(v) \ne 0$, then $\mathcal{N}(\Lambda)$ is a nice collection of seminorms on $V$. This leads to a topology on $V$, as in Section \ref{seminorms, topologies}, which is the weak topology associated to $\Lambda$. Under these conditions, each element of $\Lambda$ is a continuous linear functional on $V$ with respect to the weak topology associated to $\Lambda$. This implies that any finite linear combination of elements of $\Lambda$ is also continuous with respect to this topology. Conversely, if $\lambda$ is a continuous linear functional on $V$ with respect to the weak topology associated to $\Lambda$, then there are finitely many elements $\lambda_1, \ldots, \lambda_n$ of $\Lambda$ and a nonnegative real number $C$ such that \begin{equation} \label{|lambda(v)| le C max_{1 le j le n} |lambda_j(v)|} |\lambda(v)| \le C \max_{1 \le j \le n} |\lambda_j(v)| \end{equation} for every $v \in V$. 
In particular, $\lambda(v) = 0$ when $\lambda_j(v) = 0$ for $j = 1, \ldots, n$, and an elementary argument in linear algebra shows that $\lambda$ can be expressed as a linear combination of the $\lambda_j$'s. One may wish to reduce first to the case where the $\lambda_j$'s are linearly independent, by discarding any that are linear combinations of the rest. Let $E$ be a nonempty set, and let $V$ be the vector space of real or complex-valued functions on $E$. Note that $\lambda_x(f) = f(x)$ is a linear functional on $V$ for each $x \in E$. This defines a nice collection of linear functionals on $V$, for which the corresponding collection of seminorms has been mentioned previously. It follows from the discussion in the previous paragraph that a linear functional $\lambda$ on $V$ is continuous with respect to the topology associated to this collection of seminorms if and only if it is a finite linear combination of $\lambda_x$'s, $x \in E$. Let $V$ be any real or complex vector space, and let $\mathcal{N}$ be a nice collection of seminorms on $V$. This leads to the corresponding dual space $V^*$ of continuous linear functionals on $V$. If $v \in V$ and $v \ne 0$, then there is a $\lambda \in V^*$ such that $\lambda(v) \ne 0$. This follows from the Hahn--Banach theorem, as in the next section. Thus $V^*$ is itself a nice collection of linear functionals on $V$, which determines a weak topology on $V$ as before, also known as the weak topology associated to $\mathcal{N}$. Note that every open set in $V$ with respect to this weak topology is also an open set with respect to the topology associated to $\mathcal{N}$, because the elements of $V^*$ are continuous with respect to the topology associated to $\mathcal{N}$. Every element of $V^*$ is automatically continuous with respect to the weak topology on $V$, and conversely every continuous linear functional on $V$ with respect to the weak topology is continuous with respect to the topology associated to $\mathcal{N}$. Hence $V^*$ is also the space of continuous linear functionals on $V$ with respect to the weak topology, which follows from the earlier discussion for the weak topology associated to any collection of linear functionals on $V$ as well. \section{The Hahn--Banach theorem} \label{hahn--banach theorem} \setcounter{equation}{0} Let $V$ be a real or complex vector space, and let $N$ be a seminorm on $V$. Also let $\lambda$ be a linear functional on a linear subspace $W$ of $V$ such that \begin{equation} \label{|lambda(v)| le C N(v)} |\lambda(v)| \le C \, N(v) \end{equation} for some $C \ge 0$ and every $v \in W$. The Hahn--Banach theorem states that there is an extension of $\lambda$ to a linear functional on $V$ that satisfies (\ref{|lambda(v)| le C N(v)}) for every $v \in V$, with the same constant $C$. We shall not go through the proof here, but we would like to mention some aspects of it, and some important consequences. Sometimes the Hahn--Banach theorem is stated only in the case where $N$ is a norm on $V$. This does not really matter, because essentially the same proof works for seminorms. Alternatively, if $N$ is a seminorm on $V$, then \begin{equation} \label{Z = {v in V : N(v) = 0}} Z = \{v \in V : N(v) = 0\} \end{equation} is a linear subspace of $V$. One can begin by extending $\lambda$ to the linear span of $W$ and $Z$ by setting \begin{equation} \lambda(w + z) = \lambda(w) \end{equation} for every $w \in W$ and $z \in Z$, which makes sense because $\lambda(v) = 0$ when $v$ is in $W \cap Z$, by (\ref{|lambda(v)| le C N(v)}).
One can then reduce to the case of norms by passing to the quotient of $V$ by $Z$. To prove the Hahn--Banach theorem in the real case, one first shows that $\lambda$ can be extended to the linear span of $W$ and any element of $V$, while maintaining (\ref{|lambda(v)| le C N(v)}). If $W$ has finite codimension in $V$, then one can apply this repeatedly to extend $\lambda$ to $V$. If $N$ is a norm on $V$ and $V$ has a countable dense set, then one can apply this repeatedly to extend $\lambda$ to a dense linear subspace of $V$, and then extend $\lambda$ to all of $V$ using continuity. Otherwise, the extension of $\lambda$ to $V$ is obtained using the axiom of choice, through Zorn's lemma or the Hausdorff maximality principle. The complex case can be reduced to the real case, by treating the real part of $\lambda$ as a linear functional on $W$ as a real vector space, and then complexifying the extension to $V$ afterwards. As an application, let $\mathcal{N}$ be a nice collection of seminorms on $V$, let $u \in V$ with $u \ne 0$ be given, and choose $N \in \mathcal{N}$ such that $N(u) > 0$. We can define $\lambda$ on the one-dimensional subspace $W$ of $V$ spanned by $u$ by \begin{equation} \label{lambda(t u) = t N(u)} \lambda(t \, u) = t \, N(u) \end{equation} for each $t \in {\bf R}$ or ${\bf C}$, as appropriate. This satisfies (\ref{|lambda(v)| le C N(v)}) with $C = 1$, and the Hahn--Banach theorem implies that there is an extension of $\lambda$ to $V$ that also satisfies (\ref{|lambda(v)| le C N(v)}) with $C = 1$. In particular, this extension is a continuous linear functional on $V$ with respect to the topology associated to $\mathcal{N}$ such that $\lambda(u) \ne 0$, as in the previous section. One can also use the Hahn--Banach theorem to show that a closed linear subspace of $V$ with respect to the topology associated to $\mathcal{N}$ is also closed with respect to the weak topology. \section{Dual norms} \label{dual norms} \setcounter{equation}{0} Let $V$ be a real or complex vector space with a norm $\|v\|$. Remember that a linear functional $\lambda$ on $V$ is continuous with respect to the topology associated to $\|v\|$ if and only if there is a nonnegative real number $C$ such that \begin{equation} |\lambda(v)| \le C \, \|v\| \end{equation} for every $v \in V$. In this case, the dual norm $\|\lambda\|_*$ of $\lambda$ is defined by \begin{equation} \|\lambda\|_* = \sup \{|\lambda(v)| : v \in V, \ \|v\| \le 1\}. \end{equation} This is the same as the smallest value of $C$ for which the previous inequality holds. It is not difficult to check that $\|\lambda\|_*$ defines a norm on the dual space $V^*$ of continuous linear functionals on $V$. If $v \in V$ and $v \ne 0$, then there is a $\lambda \in V^*$ such that $\|\lambda\|_* = 1$ and \begin{equation} \lambda(v) = \|v\|. \end{equation} This uses the Hahn--Banach theorem, as in the previous section. More precisely, the argument in the previous section shows that $\|\lambda\|_* \le 1$, and equality holds because of the value of $\lambda(v)$. Suppose that $V = {\bf R}^n$ or ${\bf C}^n$ for some positive integer $n$. If $a \in V$, then \begin{equation} \label{lambda_a(v) = sum_{j = 1}^n a_j v_j} \lambda_a(v) = \sum_{j = 1}^n a_j \, v_j \end{equation} defines a linear functional on $V$, and every linear functional on $V$ is of this form. 
Note that \begin{equation} |\lambda_a(v)| \le \Big(\sum_{j = 1}^n |a_j|\Big) \max_{1 \le j \le n} |v_j| = \|a\|_1 \, \|v\|_\infty \end{equation} for every $a, v \in V$, where $\|a\|_1$, $\|v\|_\infty$ are as in Section \ref{R^n, C^n}. This shows that the dual norm of $\lambda_a$ on $V$ with respect to $\|v\|_\infty$ is less than or equal to $\|a\|_1$. If one chooses $v \in V$ such that $\|v\|_\infty = 1$ and $a_j \, v_j = |a_j|$ for each $j$, then one gets that \begin{equation} |\lambda_a(v)| = \|a\|_1, \end{equation} and hence the dual norm of $\lambda_a$ with respect to $\|v\|_\infty$ is equal to $\|a\|_1$. Similarly, \begin{equation} |\lambda_a(v)| \le \|a\|_\infty \, \|v\|_1 \end{equation} for every $a, v \in V$. This shows that the dual norm of $\lambda_a$ on $V$ with respect to $\|v\|_1$ is less than or equal to $\|a\|_\infty$, and one can check that the dual norm is equal to $\|a\|_\infty$ using standard basis vectors for $v$ to get equality in the previous inequality. \section{Topological vector spaces} \label{topological vector spaces} \setcounter{equation}{0} A \emph{topological vector space} is basically a vector space with a topology that is compatible with the vector space operations. More precisely, let $V$ be a vector space over the real or complex numbers, and suppose that $V$ is also equipped with a topological structure. In order for $V$ to be a topological vector space, the vector space operations of addition and scalar multiplication ought to be continuous. Addition of vectors corresponds to a mapping from the Cartesian product $V \times V$ of $V$ with itself into $V$, and continuity of addition means that this mapping should be continuous, where $V \times V$ is equipped with the product topology associated to the given topology on $V$. Similarly, scalar multiplication corresponds to a mapping from ${\bf R} \times V$ or ${\bf C} \times V$ into $V$, depending on whether $V$ is a real or complex vector space. Continuity of scalar multiplication means that this mapping is continuous when ${\bf R} \times V$ or ${\bf C} \times V$ is equipped with the product topology associated to the standard topology on ${\bf R}$ or ${\bf C}$ and the given topology on $V$. It is customary to ask that topological vector spaces also satisfy a separation condition, which will be mentioned in a moment. Note that continuity of addition implies that the translation mapping \begin{equation} \label{tau_a(v) = a + v} \tau_a(v) = a + v \end{equation} is continuous as a mapping from $V$ into itself for every $a \in V$. This implies that $\tau_a$ is actually a homeomorphism from $V$ onto itself for each $a \in V$, since $\tau_a$ is a one-to-one mapping from $V$ onto itself whose inverse is $\tau_{-a}$, which is also continuous for the same reason. In the same way, the dilation mapping \begin{equation} \label{delta_t(v) = t cdot v} \delta_t(v) = t \cdot v \end{equation} is a continuous mapping on $V$ for every $t \in {\bf R}$ or ${\bf C}$, as appropriate, because of continuity of scalar multiplication. If $t \ne 0$, then $\delta_t$ is a one-to-one mapping from $V$ onto itself, with inverse equal to $\delta_{1/t}$, and hence a homeomorphism. The additional separation condition for $V$ to be a topological vector space is that the set $\{0\}$ consisting of the additive identity element $0$ in $V$ be a closed set in $V$. This implies that every subset of $V$ with exactly one element is closed, because of the continuity of the translation mappings. 
One can also use continuity of addition at $0$ to show that $V$ is Hausdorff under these conditions. It is easy to see that ${\bf R}^n$ and ${\bf C}^n$ are topological vector spaces with respect to their standard topologies. If a real or complex vector space $V$ is equipped with a norm $N$, then $V$ is a topological vector space with respect to the topology determined by the metric associated to $N$ as in Section \ref{norms, metrics}. If instead $V$ is equipped with a nice collection $\mathcal{N}$ of seminorms, then $V$ is a topological vector space with respect to the topology defined in Section \ref{seminorms, topologies}. In particular, the requirement that $\mathcal{N}$ be nice corresponds exactly to the separation condition discussed in the previous paragraph. In linear algebra, one is often interested in linear mappings between vector spaces. Similarly, in topology, one is often interested in continuous mappings between topological spaces. In the context of topological vector spaces, one is often interested in continuous linear mappings between topological vector spaces. This includes continuous linear functionals from a topological vector space into the real or complex numbers, as appropriate. Thus the topological dual $V^*$ of a topological vector space $V$ may be defined as the space of continuous linear functionals on $V$, as in Section \ref{continuous linear functionals}. Let $V$ and $W$ be topological vector spaces, both real or both complex. If $\phi$ is a one-to-one linear mapping from $V$ onto $W$, then the inverse mapping $\phi^{-1}$ is a one-to-one linear mapping from $W$ onto $V$ as well. If $\phi : V \to W$ and $\phi^{-1} : W \to V$ are also continuous, so that $\phi$ is a homeomorphism from $V$ onto $W$, then $\phi$ is said to be an \emph{isomorphism} between $V$ and $W$ as topological vector spaces. It can be shown that a finite-dimensional real or complex topological vector space of dimension $n$ is isomorphic to ${\bf R}^n$ or ${\bf C}^n$, as appropriate, with its standard topology. A topological vector space $V$ is said to be \emph{locally convex} if there is a local base for the topology of $V$ at $0$ consisting of convex open subsets of $V$. If the topology on $V$ is determined by a nice collection of seminorms, then it is easy to see that $V$ is locally convex. Conversely, if $V$ is locally convex, then one can show that the topology on $V$ may be described by a nice collection of seminorms. In any topological space, a necessary condition for the existence of a metric that describes the same topology is that there be a countable local base for the topology at each point. If a topological vector space $V$ has a countable local base for the topology at $0$, then it has a countable local base for the topology at every point, because the topology is invariant under translations. In this case, it can be shown that there is a metric on $V$ that describes the same topology and which is invariant under translations. If $V$ has a countable local base for the topology at $0$ and the topology on $V$ is determined by a nice collection of seminorms, then only finitely or countably many seminorms are necessary to describe the topology, as in Section \ref{convergent sequences}, and one can get a translation-invariant metric as in Section \ref{metrizability}. The definition of a Cauchy sequence can be extended to topological vector spaces, as follows.
A sequence $\{v_j\}_{j = 1}^\infty$ of elements of a topological vector space $V$ is said to be a \emph{Cauchy sequence} if for every open set $U$ in $V$ with $0 \in U$ there is a positive integer $L$ such that \begin{equation} \label{v_j - v_l in U} v_j - v_l \in U \end{equation} for every $j, l \ge L$. If $d(v, w)$ is a metric on $V$ that determines the given topology on $V$, and if $d(v, w)$ is invariant under translations on $V$ in the sense that \begin{equation} d(v - z, w - z) = d(v, w) \end{equation} for every $v, w, z \in V$, then it is easy to see that the usual definition of a Cauchy sequence in $V$ with respect to $d(v, w)$ is equivalent to the preceding condition using the topological vector space structure. Remember that a metric space $X$ is said to be \emph{complete} if every Cauchy sequence of elements of $X$ converges to another element of $X$. Similarly, let us say that a topological vector space $V$ is \emph{sequentially complete} if every Cauchy sequence of elements of $V$ as in the previous paragraph converges to an element of $V$. If there is a countable local base for the topology of $V$ at $0$, then this is equivalent to completeness of $V$ with respect to any translation-invariant metric that determines the same topology on $V$. Otherwise, one can also consider Cauchy conditions for nets or filters on $V$. \section{Summable functions} \label{summable functions} \setcounter{equation}{0} Let $E$ be a nonempty set, and let $f(x)$ be a real or complex valued function on $E$. We say that $f$ is \emph{summable} on $E$ if the sums \begin{equation} \label{sum_{x in A} |f(x)|} \sum_{x \in A} |f(x)| \end{equation} over finite subsets $A$ of $E$ are uniformly bounded. Of course, this holds trivially when $E$ has only finitely many elements, since we can take $A = E$. If $E$ is the set ${\bf Z}_+$ of positive integers, then this is equivalent to saying that $\sum_{j = 1}^\infty |f(j)|$ converges, which means that $\sum_{j = 1}^\infty f(j)$ converges absolutely. We would like to define the sum \begin{equation} \sum_{x \in E} f(x) \end{equation} when $f$ is a summable function on $E$. Again this is trivial when $E$ has only finitely many elements. If $E = {\bf Z}_+$, then the sum may be considered as a convergent infinite series, since it converges absolutely. If $E$ is a countably infinite set, then one can reduce to the case where $E = {\bf Z}_+$ using an enumeration of $E$. Different enumerations lead to the same value of the sum, because the sum of an absolutely convergent series is invariant under rearrangements. If $f$ is a summable function on any infinite set $E$, then one can check that the set of $x \in E$ such that $|f(x)| \ge \epsilon$ has only finitely many elements for each $\epsilon > 0$. This implies that the set of $x \in E$ such that $f(x) \ne 0$ has only finitely or countably many elements, so that the definition of the sum can be reduced to the previous case. Alternatively, if $f$ is a nonnegative real-valued summable function on $E$, then one can define the sum over $E$ to be the supremum of the subsums (\ref{sum_{x in A} |f(x)|}) over all finite subsets of $E$. If $f$ is any summable function on $E$, then $f$ can be expressed as a linear combination of nonnegative real-valued summable functions, so that the definition of the sum can be reduced to that case. It is easy to see that this approach is compatible with the one in the previous paragraph. 
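For instance, if $E = {\bf Z}_+$ and $f(j) = 2^{-j}$, then \begin{equation} \sum_{j \in A} 2^{-j} \le \sum_{j = 1}^{n} 2^{-j} = 1 - 2^{-n} < 1 \end{equation} for every nonempty finite set $A \subseteq {\bf Z}_+$, where $n$ is the largest element of $A$. Thus $f$ is summable on ${\bf Z}_+$ in this case, and the sum of $f$ over ${\bf Z}_+$ is equal to $1$, as for the corresponding absolutely convergent series. By contrast, $f(j) = 1/j$ is not summable on ${\bf Z}_+$, because the partial sums of the harmonic series are not bounded.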
The space of summable functions on $E$ is denoted $\ell^1(E)$, or more precisely $\ell^1(E, {\bf R})$ or $\ell^1(E, {\bf C})$ to indicate whether real or complex-valued functions on $E$ are being used. One can check that this is a vector space with respect to pointwise addition and scalar multiplication, and that the sum over $E$ defines a linear functional on $\ell^1(E)$. If $f \in \ell^1(E)$, then put \begin{equation} \|f\|_1 = \sum_{x \in E} |f(x)|. \end{equation} One can check that this is a norm on $\ell^1(E)$, and that \begin{equation} \biggl|\sum_{x \in E} f(x)\biggr| \le \|f\|_1. \end{equation} Let us say that a function $f$ on $E$ has \emph{finite support} if $f(x) = 0$ for all but finitely many $x \in E$. It is not difficult to show that these functions are dense in $\ell^1$, by considering finite sets $A \subseteq E$ for which $\sum_{x \in A} |f(x)|$ approximates $\|f\|_1$. Of course, $\sum_{x \in E} f(x)$ reduces to a finite sum when $f$ has finite support on $E$. This gives another way to look at the sum of an arbitrary summable function on $E$. Namely, it is the unique continuous linear functional on $\ell^1(E)$ that is equal to the ordinary finite sum on the dense linear subspace of functions with finite support. \section{$c_0(E)$} \label{c_0(E)} \setcounter{equation}{0} Let $E$ be a nonempty set, and let us say that a real or complex-valued function $f(x)$ on $E$ \emph{vanishes at infinity} if for each $\epsilon > 0$ the set of $x \in E$ such that $|f(x)| \ge \epsilon$ has only finitely many elements. The space of these functions is denoted $c_0(E)$, or $c_0(E, {\bf R})$, $c_0(E, {\bf C})$ to indicate whether real or complex-valued functions are being used. It is easy to see that these functions are bounded, and that they form a closed linear subspace of $\ell^\infty(E)$ with respect to the $\ell^\infty$ norm. Moreover, functions on $E$ with finite support are dense in $c_0(E)$, and $c_0(E)$ is the closure of the linear subspace of functions on $E$ with finite support in $\ell^\infty(E)$. If $f$ is a bounded function on $E$ and $g$ is a summable function on $E$, then $f \, g$ is a summable function on $E$, and \begin{equation} \|f \, g\|_1 \le \|f\|_\infty \, \|g\|_1. \end{equation} In particular, \begin{equation} \lambda_g(f) = \sum_{x \in E} f(x) \, g(x) \end{equation} is well-defined and satisfies \begin{equation} |\lambda_g(f)| \le \|f\|_\infty \, \|g\|_1. \end{equation} This shows that $\lambda_g$ defines a continuous linear functional on $\ell^\infty(E)$, whose dual norm with respect to the $\ell^\infty$ norm is less than or equal to $\|g\|_1$. The dual norm of $\lambda_g$ with respect to the $\ell^\infty$ norm is actually equal to $\|g\|_1$, because one can choose $f \in \ell^\infty(E)$ so that $\|f\|_\infty = 1$ and $f(x) \, g(x) = |g(x)|$ for each $x \in E$. We can also restrict $\lambda_g$ to $c_0(E)$, to get a continuous linear functional on $c_0(E)$ whose dual norm is less than or equal to $\|g\|_1$. The dual norm of $\lambda_g$ on $c_0(E)$ with respect to the $\ell^\infty$ norm is still equal to $\|g\|_1$, but we have to do a bit more to show that. The problem is that the function $f$ mentioned at the end of the previous paragraph may not vanish at infinity on $E$. To fix that, we can choose for each nonempty finite set $A \subseteq E$ a function $f_A(x)$ such that $f_A(x) = 0$ when $x \in E \backslash A$, $f_A(x) \, g(x) = |g(x)|$ when $x \in A$, and $\|f_A\|_\infty = 1$. Thus $f_A$ has finite support on $E$, and hence vanishes at infinity. 
By construction, \begin{equation} \lambda_g(f_A) = \sum_{x \in A} |g(x)|. \end{equation} This shows that the dual norm of $\lambda_g$ on $c_0(E)$ is greater than or equal to $\sum_{x \in A} |g(x)|$ for every nonempty finite set $A \subseteq E$, and it follows that the dual norm is equal to $\|g\|_1$, by taking the supremum over $A$. Suppose now that $\lambda$ is any continuous linear functional on $c_0(E)$. If $x \in E$, then let $\delta_x(y)$ be the function on $E$ equal to $1$ when $x = y$ and to $0$ otherwise. Thus $\delta_x \in c_0(E)$, and we can put \begin{equation} \label{g(x) = lambda(delta_x)} g(x) = \lambda(\delta_x) \end{equation} for each $x \in E$. If $f$ is a function on $E$ with finite support, then $f$ can be expressed as a linear combination of $\delta$'s, and we get that \begin{equation} \lambda(f) = \sum_{x \in E} f(x) \, g(x), \end{equation} by linearity. Using functions like $f_A$ in the previous paragraph, we get that \begin{equation} \sum_{x \in A} |g(x)| \le \|\lambda\|_* \end{equation} for every nonempty finite set $A \subseteq E$, where $\|\lambda\|_*$ is the dual norm of $\lambda$ on $c_0(E)$. Hence $g \in \ell^1(E)$, and $\|g\|_1 \le \|\lambda\|_*$. We have already seen that $\lambda(f) = \lambda_g(f)$ when $f$ has finite support on $E$, and it follows that this holds for every $f \in c_0(E)$, since functions with finite support are dense in $c_0(E)$, and both $\lambda$ and $\lambda_g$ are continuous on $c_0(E)$. \section{The dual of $\ell^1$} \label{dual of ell^1} \setcounter{equation}{0} Let $E$ be a nonempty set, and suppose that $f$ is a summable function on $E$, and that $g$ is a bounded function on $E$. As in the previous section, $f \, g$ is a summable function on $E$, and \begin{equation} \|f \, g\|_1 \le \|f\|_1 \, \|g\|_\infty. \end{equation} Hence \begin{equation} \lambda_g(f) = \sum_{x \in E} f(x) \, g(x) \end{equation} is well-defined and satisfies \begin{equation} |\lambda_g(f)| \le \|f\|_1 \, \|g\|_\infty. \end{equation} Thus $\lambda_g$ defines a continuous linear functional on $\ell^1(E)$, with dual norm less than or equal to $\|g\|_\infty$. One can check that the dual norm of $\lambda_g$ is actually equal to $\|g\|_\infty$, using functions $f$ on $E$ that are equal to $1$ at one point and $0$ elsewhere. Conversely, suppose that $\lambda$ is a bounded linear functional on $\ell^1(E)$. Let $\delta_x$ be as in the previous section, and put $g(x) = \lambda(\delta_x)$ for each $x \in E$, as before. Thus \begin{equation} |g(x)| = |\lambda(\delta_x)| \le \|\lambda\|_* \, \|\delta_x\|_1 = \|\lambda\|_* \end{equation} for each $x \in E$, where $\|\lambda\|_*$ is the dual norm of $\lambda$ on $\ell^1(E)$. This shows that $g \in \ell^\infty(E)$, and that $\|g\|_\infty \le \|\lambda\|_*$. In particular, $\lambda_g$ is a continuous linear functional on $\ell^1(E)$, as in the preceding paragraph. If $f$ has finite support on $E$, then $f$ can be expressed as a linear combination of $\delta$'s, and hence $\lambda(f) = \lambda_g(f)$. It follows that this holds for every $f \in \ell^1(E)$, because functions with finite support are dense in $\ell^1(E)$, and $\lambda$, $\lambda_g$ are continuous on $\ell^1$. Suppose now that $E$ is an infinite set, and let $c(E)$ be the space of real or complex-valued functions $f(x)$ on $E$ that have a limit at infinity. 
This means that there is a real or complex number $a$, as appropriate, such that for each $\epsilon > 0$, \begin{equation} \label{|f(x) - a| < epsilon} |f(x) - a| < \epsilon \end{equation} for all but finitely many $x \in E$. Equivalently, $f \in c(E)$ if there is an $a \in {\bf R}$ or ${\bf C}$ such that $f(x) - a \in c_0(E)$. As usual, this space may also be denoted $c(E, {\bf R})$ or $c(E, {\bf C})$, to indicate whether real or complex-valued functions are being used. It is easy to see that these functions are bounded, and that they form a closed linear subspace of $\ell^\infty(E)$ with respect to the $\ell^\infty$ norm. If $f$, $a$ are as in the previous paragraph, then put \begin{equation} \lim_{x \to \infty \atop x \in E} f(x) = a. \end{equation} It is easy to see that this limit is unique when it exists, and that \begin{equation} \biggl|\lim_{x \to \infty \atop x \in E} f(x)\biggr| \le \|f\|_\infty. \end{equation} Thus the limit defines a continuous linear functional on $c(E)$. The Hahn--Banach theorem implies that there is a continuous linear functional $L$ on $\ell^\infty(E)$ with dual norm equal to $1$ such that $L(f)$ is equal to this limit when $f \in c(E)$. However, one can also check that there is no $g \in \ell^1(E)$ such that $\lambda_g(f) = L(f)$ for every $f \in c(E)$, where $\lambda_g$ is as in the previous section. \section{Filters} \label{filters} \setcounter{equation}{0} A nonempty collection $\mathcal{F}$ of nonempty subsets of a set $X$ is said to be a \emph{filter} if \begin{equation} A \cap B \in \mathcal{F} \hbox{ for every } A, B \in \mathcal{F}, \end{equation} and \begin{eqnarray} && E \in \mathcal{F} \hbox{ for every } E \subseteq X \hbox{ for which} \\ && \hbox{there is an } A \in \mathcal{F} \hbox{ such that } A \subseteq E. \nonumber \end{eqnarray} Suppose that $X$ is a topological space, and that $p$ is an element of $X$. A filter $\mathcal{F}$ on $X$ is said to \emph{converge} to $p$ if \begin{equation} U \in \mathcal{F} \end{equation} for every open set $U$ in $X$ with $p \in U$. If $X$ is Hausdorff, then the limit of a convergent filter on $X$ is unique. A filter $\mathcal{F}'$ on a set $X$ is said to be a \emph{refinement} of another filter $\mathcal{F}$ on $X$ if $\mathcal{F} \subseteq \mathcal{F}'$, as collections of subsets of $X$. Suppose that $X$ is a topological space, and let $p$ be an element of $X$. Remember that $\overline{A}$ denotes the closure in $X$ of a subset $A$ of $X$. If $\mathcal{F}$ is a filter on $X$ and \begin{equation} p \in \bigcap_{A \in \mathcal{F}} \overline{A}, \end{equation} then there is a refinement of $\mathcal{F}$ that converges to $p$. To see this, let $\mathcal{F}'$ be the collection of subsets $E$ of $X$ such that $A \cap U \subseteq E$ for some $A \in \mathcal{F}$ and open set $U \subseteq X$ with $p \in U$. By hypothesis, $p \in \overline{A}$, and hence $A \cap U \ne \emptyset$ under these conditions. One can check that the intersection of two elements of $\mathcal{F}'$ is also an element of $\mathcal{F}'$, because of the corresponding properties of $\mathcal{F}$ and open neighborhoods of $p$. We also have that $E \in \mathcal{F}'$ for every $E \subseteq X$ for which there is a $B \in \mathcal{F}'$ such that $B \subseteq E$, by construction. Thus $\mathcal{F}'$ is a filter on $X$, which is clearly a refinement of $\mathcal{F}$, since we can take $U = X$. If $U$ is any open set in $X$ that contains $p$, then $U \in \mathcal{F}'$, since $A \cap U \subseteq U$ for every $A \in \mathcal{F}$.
This shows that $\mathcal{F}'$ is a filter which is a refinement of $\mathcal{F}$ that converges to $p$, as desired. Conversely, suppose that $\mathcal{F}'$ is a refinement of $\mathcal{F}$ that converges to $p$. If $U$ is an open set in $X$ that contains $p$, then $U \in \mathcal{F}'$, and so $A \cap U \in \mathcal{F}'$ for every $A \in \mathcal{F}'$. Hence $A \cap U \ne \emptyset$, and this holds in particular for every $A \in \mathcal{F}$, because $\mathcal{F}'$ is a refinement of $\mathcal{F}$. It follows that $p \in \overline{A}$, since this works for every open neighborhood $U$ of $p$ in $X$. Thus $p \in \overline{A}$ for every $A \in \mathcal{F}$, as before. Now let $V$ be a real or complex topological vector space. A filter $\mathcal{F}$ on $V$ satisfies the \emph{Cauchy condition} if for every open set $U$ in $V$ with $0 \in U$ there is an $A \in \mathcal{F}$ such that \begin{equation} A - A \subseteq U, \end{equation} where \begin{equation} A - A = \{v - w : v, w \in A\}. \end{equation} It is easy to see that convergent filters on $V$ satisfy the Cauchy condition, using the continuity of \begin{equation} \label{(v, w) mapsto v - w} (v, w) \mapsto v - w \end{equation} as a mapping from $V \times V$ into $V$. One can say that $V$ is complete if every Cauchy filter on $V$ converges, as in Section \ref{topological vector spaces}. \section{Compactness} \label{compactness} \setcounter{equation}{0} Remember that a topological space $X$ is \emph{compact} if for every collection $\{U_\alpha\}_{\alpha \in A}$ of open subsets of $X$ such that \begin{equation} X = \bigcup_{\alpha \in A} U_\alpha, \end{equation} there are finitely many indices $\alpha_1, \ldots, \alpha_n \in A$ such that \begin{equation} X = U_{\alpha_1} \cup \cdots \cup U_{\alpha_n}. \end{equation} A collection $\{E_i\}_{i \in I}$ of closed subsets of $X$ is said to have the \emph{finite intersection property} if \begin{equation} E_{i_1} \cap \cdots \cap E_{i_n} \ne \emptyset \end{equation} for every collection $i_1, \ldots, i_n$ of finitely many indices in $I$. If $X$ is compact and $\{E_i\}_{i \in I}$ is a collection of closed subsets of $X$ with the finite intersection property, then \begin{equation} \bigcap_{i \in I} E_i \ne \emptyset. \end{equation} Otherwise, if $\bigcap_{i \in I} E_i = \emptyset$, then $U_i = X \backslash E_i$ would be an open covering of $X$ with no finite subcovering. Conversely, if $X$ is not compact, then there is an open covering $\{U_\alpha\}_{\alpha \in A}$ of $X$ for which there is no finite subcovering. If $E_\alpha = X \backslash U_\alpha$ for each $\alpha \in A$, then it is easy to see that $\{E_\alpha\}_{\alpha \in A}$ is a collection of closed subsets of $X$ with the finite intersection property. However, the intersection of all of the $E_\alpha$'s is empty, because $\{U_\alpha\}_{\alpha \in A}$ is an open covering of $X$. This shows that $X$ is compact when the intersection of any collection of closed subsets of $X$ with the finite intersection property is nonempty. Let $\mathcal{F}$ be a filter on $X$. As in the previous section, $\mathcal{F}$ has a refinement that converges to an element of $X$ if and only if \begin{equation} \label{bigcap_{A in mathcal{F}} overline{A} ne emptyset} \bigcap_{A \in \mathcal{F}} \overline{A} \ne \emptyset. 
\end{equation} Note that $\{\overline{A} : A \in \mathcal{F}\}$ has the finite intersection property, because of the definition of a filter and the elementary fact that \begin{equation} \overline{A \cap B} \subseteq \overline{A} \cap \overline{B} \end{equation} for every $A, B \subseteq X$. If $X$ is compact, then it follows that every filter on $X$ has a refinement that converges to an element of $X$. Conversely, let $\{E_i\}_{i \in I}$ be a collection of closed subsets of $X$ with the finite intersection property. Let $\mathcal{F}$ be the set of all $A \subseteq X$ such that \begin{equation} \label{E_{i_1} cap cdots cap E_{i_n} subseteq A} E_{i_1} \cap \cdots \cap E_{i_n} \subseteq A \end{equation} for some finite collection of indices $i_1, \ldots, i_n \in I$. It is easy to see that $\mathcal{F}$ is a filter on $X$. If there is a refinement of $\mathcal{F}$ that converges to an element of $X$, then it follows that $\bigcap_{i \in I} E_i \ne \emptyset$. Thus $X$ is compact when every filter on $X$ has a refinement that converges to an element of $X$. \section{Ultrafilters} \label{ultrafilters} \setcounter{equation}{0} A maximal filter on a set $X$ is said to be an \emph{ultrafilter}. More precisely, a filter $\mathcal{F}$ on $X$ is an ultrafilter if the only filter on $X$ that is a refinement of $\mathcal{F}$ is itself. If $p \in X$ and $\mathcal{F}_p$ is the collection of subsets $A$ of $X$ such that $p \in A$, then it is easy to see that $\mathcal{F}_p$ is an ultrafilter on $X$. One can show that every filter has a refinement that is an ultrafilter, using the axiom of choice through Zorn's lemma or the Hausdorff maximality principle. If $X$ is a compact topological space and $\mathcal{F}$ is an ultrafilter on $X$, then it follows that $\mathcal{F}$ converges to an element of $X$. More precisely, $\mathcal{F}$ has a refinement that converges, as in the previous section, and this refinement is the same as $\mathcal{F}$, since $\mathcal{F}$ is an ultrafilter. Conversely, if every ultrafilter on a topological space $X$ converges, then $X$ is compact. This is because every filter on $X$ has a refinement which is an ultrafilter, and hence converges. Suppose that $\mathcal{F}$ is a filter on a set $X$, and that $B$ is a subset of $X$ such that $A \cap B \ne \emptyset$ for every $A \in \mathcal{F}$. Let $\mathcal{F}_B$ be the collection of subsets $E$ of $X$ for which there is an $A \in \mathcal{F}$ such that \begin{equation} A \cap B \subseteq E. \end{equation} It is easy to see that $\mathcal{F}_B$ is a filter on $X$ that is a refinement of $\mathcal{F}$. If $\mathcal{F}$ is an ultrafilter on $X$, then it follows that $\mathcal{F}_B = \mathcal{F}$, and hence that $B \in \mathcal{F}$. Conversely, suppose that $\mathcal{F}$ is a filter on $X$ such that $B \in \mathcal{F}$ for every $B \subseteq X$ such that $A \cap B \ne \emptyset$ for every $A \in \mathcal{F}$. If $\mathcal{F}'$ is a filter on $X$ that is a refinement of $\mathcal{F}$, and if $B \in \mathcal{F}'$, then $A \cap B \in \mathcal{F}'$ for every $A \in \mathcal{F} \subseteq \mathcal{F}'$, and hence $A \cap B \ne \emptyset$ for every $A \in \mathcal{F}$. It follows that every $B \in \mathcal{F}'$ is also in $\mathcal{F}$, which means that $\mathcal{F}' = \mathcal{F}$. Thus $\mathcal{F}$ is an ultrafilter under these conditions. Let $\mathcal{F}$ be an ultrafilter on a set $X$, and let $B$ be a subset of $X$. 
If $A \cap B = \emptyset$ for some $A \in \mathcal{F}$, then $A \subseteq X \backslash B$, and hence $X \backslash B \in \mathcal{F}$. Otherwise, if $A \cap B \ne \emptyset$ for every $A \in \mathcal{F}$, then $B \in \mathcal{F}$, as in the previous paragraph. This shows that for every $B \subseteq X$, either \begin{equation} B \in \mathcal{F} \quad\hbox{or}\quad X \backslash B \in \mathcal{F} \end{equation} when $\mathcal{F}$ is an ultrafilter. Conversely, if $\mathcal{F}$ is a filter on $X$ with this property, then $\mathcal{F}$ is an ultrafilter. To see this, let $\mathcal{F}'$ be a filter on $X$ that is a refinement of $\mathcal{F}$. If $B \in \mathcal{F}'$ and $X \backslash B \in \mathcal{F} \subseteq \mathcal{F}'$, then we get a contradiction, since $B \cap (X \backslash B) = \emptyset$. Thus each $B \in \mathcal{F}'$ is an element of $\mathcal{F}$, which implies that $\mathcal{F}' = \mathcal{F}$, as desired. Let $X$, $Y$ be sets, and let $f$ be a mapping from $X$ into $Y$. If $\mathcal{F}$ is a filter on $X$, then one can check that \begin{equation} f_*(\mathcal{F}) = \{A \subseteq Y : f^{-1}(A) \in \mathcal{F}\} \end{equation} is a filter on $Y$. If $\mathcal{F}$ is an ultrafilter on $X$, then $f_*(\mathcal{F})$ is an ultrafilter on $Y$. To see this, let $B$ be a subset of $Y$, and note that $f^{-1}(Y \backslash B) = X \backslash f^{-1}(B)$, so that $f^{-1}(B)$ or $f^{-1}(Y \backslash B)$ is an element of $\mathcal{F}$. Thus $B$ or $Y \backslash B$ is an element of $f_*(\mathcal{F})$, which implies that $f_*(\mathcal{F})$ is an ultrafilter on $Y$, as in the previous paragraph. \section{Tychonoff's theorem} \label{tychonoff's theorem} \setcounter{equation}{0} Let $\{X_i\}_{i \in I}$ be a collection of compact topological spaces, and let $X = \prod_{i \in I} X_i$ be their Cartesian product. A famous theorem of Tychonoff states that $X$ is also compact with respect to the product topology. There is a well-known proof of this using ultrafilters, as follows. Let $\mathcal{F}$ be an ultrafilter on $X$, and let us show that $\mathcal{F}$ converges. Let $p_i$ be the standard coordinate projection from $X$ onto $X_i$ for each $i \in I$. As before, $(p_i)_*(\mathcal{F})$ is an ultrafilter on $X_i$ for each $i \in I$, and hence converges to an element $x_i$ of $X_i$, by compactness. If $x \in X$ satisfies $p_i(x) = x_i$ for each $i$, then we would like to check that $\mathcal{F}$ converges to $x$. Let $U$ be an open set in $X$ such that $x \in U$. By the definition of the product topology, there are open sets $U_i \subseteq X_i$ for each $i \in I$ such that $x_i \in U_i$ for each $i$, $U_i = X_i$ for all but finitely many $i$, and \begin{equation} \prod_{i \in I} U_i \subseteq U. \end{equation} Because $(p_i)_*(\mathcal{F})$ converges to $x_i$ for each $i$, we get that $U_i \in (p_i)_*(\mathcal{F})$ for each $i$, which means that $p_i^{-1}(U_i) \in \mathcal{F}$ for each $i$. Of course, \begin{equation} \prod_{i \in I} U_i = \bigcap_{i \in I} p_i^{-1}(U_i). \end{equation} This is the same as the intersection of $p_i^{-1}(U_i)$ over finitely many $i \in I$, since $U_i = X_i$ and hence $p_i^{-1}(U_i) = X$ for all but finitely many $i$. It follows that this intersection is an element of $\mathcal{F}$, which implies that $U$ is an element of $\mathcal{F}$ as well, as desired. \section{The weak$^*$ topology} \label{weak^* topology} \setcounter{equation}{0} Let $V$ be a real or complex topological vector space, and let $V^*$ be the dual space of continuous linear functionals on $V$.
If $v \in V$, then \begin{equation} L_v(\lambda) = \lambda(v) \end{equation} defines a linear functional on $V^*$. This is automatically a nice collection of linear functionals on $V^*$ in the sense of Section \ref{weak topologies}, since $\lambda = 0$ in $V^*$ when $\lambda(v) = 0$ for each $v \in V$. The weak topology on $V^*$ corresponding to this collection of linear functionals is known as the \emph{weak$^*$ topology}. Suppose now that $V$ is equipped with a norm $\|v\|$ that determines the given topology on $V$, and let $\|\lambda\|_*$ be the corresponding dual norm on $V^*$. Observe that $L_v$ is a continuous linear functional on $V^*$ with respect to $\|\lambda\|_*$ for each $v \in V$. More precisely, \begin{equation} |L_v(\lambda)| \le \|\lambda\|_* \, \|v\| \end{equation} for every $v \in V$ and $\lambda \in V^*$, by definition of $\|\lambda\|_*$. This shows that the dual norm of $L_v$ as a continuous linear functional on $V^*$ with respect to $\|\lambda\|_*$ is less than or equal to $\|v\|$ for each $v \in V$. The dual norm of $L_v$ on $V^*$ is actually equal to $\|v\|$, since for each $v \in V$ there is a $\lambda \in V^*$ such that $\|\lambda\|_* = 1$ and $\lambda(v) = \|v\|$, by the Hahn--Banach theorem. Note that every open set in $V^*$ with respect to the weak$^*$ topology is also an open set with respect to the dual norm. This follows from the fact that $L_v$ is continuous on $V^*$ with respect to the dual norm for each $v \in V$, as in the previous paragraph. Consider the closed unit ball $B^*$ in $V^*$ with respect to the dual norm, which consists of all $\lambda \in V^*$ with $\|\lambda\|_* \le 1$. This is the same as the set of $\lambda \in V^*$ such that $|\lambda(v)| \le 1$ for every $v \in V$ with $\|v\| \le 1$. It follows easily from this description that $B^*$ is a closed set in the weak$^*$ topology. The Banach--Alaoglu theorem states that $B^*$ is a compact set with respect to the weak$^*$ topology. The basic idea is to show that $B^*$ is homeomorphic to a closed subset of a product of closed intervals in the real case, or a product of closed disks in the complex case, and then use Tychonoff's theorem. \section{Filters on subsets} \label{filters, subsets} \setcounter{equation}{0} Let $X$ be a nonempty set, and let $E$ be a nonempty subset of $X$. If $\mathcal{F}_0$ is a filter on $E$, then there is a natural filter $\mathcal{F}_1$ on $X$ associated to it, given by \begin{equation} \mathcal{F}_1 = \{B \subseteq X : B \cap E \in \mathcal{F}_0\}. \end{equation} In particular, $\mathcal{F}_0 \subseteq \mathcal{F}_1$. Equivalently, if $f : E \to X$ is the inclusion mapping that sends every element of $E$ to itself as an element of $X$, then $\mathcal{F}_1 = f_*(\mathcal{F}_0)$. Conversely, if $\mathcal{F}_1$ is a filter on $X$ such that $E \in \mathcal{F}_1$, then \begin{equation} \label{mathcal{F}_0 = {A subseteq E : A in mathcal{F}_1}} \mathcal{F}_0 = \{A \subseteq E : A \in \mathcal{F}_1\} \end{equation} is a filter on $E$. It is easy to see that this transformation between filters is the inverse of the one described in the previous paragraph. Thus we get a one-to-one correspondence between filters on $E$ and filters on $X$ that contain $E$ as an element. Moreover, refinements of filters on $E$ correspond exactly to refinements of filters on $X$ that contain $E$ as an element in this way. Of course, any refinement of a filter on $X$ that contains $E$ as an element also contains $E$ as an element, and hence corresponds to a filter on $E$ too. 
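As a simple example of this correspondence, let $p$ be an element of $E$, and let $\mathcal{F}_0$ be the collection of all subsets $A$ of $E$ such that $p \in A$, which is a filter on $E$, as in Section \ref{ultrafilters}. The corresponding filter $\mathcal{F}_1$ on $X$ is given by \begin{equation} \mathcal{F}_1 = \{B \subseteq X : p \in B\}, \end{equation} because $B \cap E \in \mathcal{F}_0$ exactly when $p \in B \cap E$, which is the same as saying that $p \in B$, since $p \in E$.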
Suppose now that $X$ is a topological space. It is easy to check that a filter $\mathcal{F}_1$ on $X$ that contains $E$ as an element converges to a point $p \in E$ if and only if the corresponding filter $\mathcal{F}_0$ on $E$ converges to $p$ with respect to the induced topology on $E$. Using the remarks about refinements in the previous paragraph, it follows that $\mathcal{F}_1$ has a refinement on $X$ that converges to an element of $E$ if and only if $\mathcal{F}_0$ has a refinement on $E$ that converges to an element of $E$ with respect to the induced topology. Hence $E$ is compact if and only if every filter $\mathcal{F}_1$ on $X$ that contains $E$ as an element has a refinement that converges to an element of $E$. In the same way, ultrafilters on $E$ correspond exactly to ultrafilters on $X$ that contain $E$ as an element, and $E$ is compact if and only if every ultrafilter on $X$ that contains $E$ as an element converges to an element of $E$. \section{Bounded linear mappings} \label{bounded linear mappings} \setcounter{equation}{0} Let $V$ and $W$ be vector spaces, both real or both complex, and equipped with norms $\|v\|_V$, $\|w\|_W$, respectively. A linear mapping $T : V \to W$ is said to be \emph{bounded} if there is a nonnegative real number $A$ such that \begin{equation} \|T(v)\|_W \le A \, \|v\|_V \end{equation} for every $v \in V$. Because of linearity, this implies that \begin{equation} \label{||T(v) - T(v')||_W le A ||v - v'||_V} \|T(v) - T(v')\|_W \le A \, \|v - v'\|_V \end{equation} for every $v, v' \in V$, and hence that $T$ is uniformly continuous with respect to the metrics on $V$ and $W$ associated to their norms. Conversely, if $T : V \to W$ is continuous at $0$, then there is a $\delta > 0$ such that \begin{equation} \label{||T(v)||_W < 1} \|T(v)\|_W < 1 \end{equation} for every $v \in V$ with $\|v\|_V < \delta$. It is easy to see that this implies that $T$ is bounded, with $A = 1/\delta$. If $T$ is a bounded linear mapping from $V$ into $W$, then the \emph{operator norm} $\|T\|_{op}$ of $T$ is defined by \begin{equation} \|T\|_{op} = \sup \{\|T(v)\|_W : v \in V, \, \|v\|_V \le 1\}. \end{equation} The boundedness of $T$ says exactly that the supremum is finite, and is less than or equal to the nonnegative real number $A$ mentioned in the previous paragraph. Equivalently, $T$ satisfies the boundedness condition in the previous paragraph with $A = \|T\|_{op}$, and this is the smallest value of $A$ with this property. Note that $\|T\|_{op} = 0$ if and only if $T = 0$. Let $T$ be a bounded linear mapping from $V$ into $W$, and let $a$ be a real or complex number, as appropriate. Of course, the product $a \, T$ of $a$ and $T$ is the linear mapping that sends $v \in V$ to $a \, T(v)$ in $W$. It is easy to see that $a \, T$ is also a bounded linear mapping, and that \begin{equation} \|a \, T\|_{op} = |a| \, \|T\|_{op}. \end{equation} Similarly, if $R$ is another bounded linear mapping from $V$ into $W$, then the sum $R + T$ is defined as the linear mapping that sends $v \in V$ to $R(v) + T(v)$ in $W$. It is easy to see that $R + T$ is also a bounded linear mapping from $V$ into $W$, and that \begin{equation} \|R + T\|_{op} \le \|R\|_{op} + \|T\|_{op}. \end{equation} Let $\mathcal{BL}(V, W)$ be the space of bounded linear mappings from $V$ into $W$. 
It follows from the remarks in the preceding paragraph that $\mathcal{BL}(V, W)$ is a real or complex vector space, as appropriate, with respect to pointwise addition and scalar multiplication of linear mappings, and that the operator norm defines a norm on this vector space. If $W$ is the one-dimensional vector space of real or complex numbers, as appropriate, then $\mathcal{BL}(V, W)$ is the same as the dual space $V^*$ of bounded linear functionals on $V$, and the operator norm is the same as the dual norm on $V^*$. Suppose that $W$ is complete as a metric space with respect to the metric associated to the norm, so that $W$ is a \emph{Banach space}. In this case, the space $\mathcal{BL}(V, W)$ of bounded linear mappings from $V$ into $W$ is also complete with respect to the operator norm, and thus a Banach space. To see this, let $\{T_j\}_{j = 1}^\infty$ be a Cauchy sequence in $\mathcal{BL}(V, W)$. This means that for each $\epsilon > 0$ there is an $L(\epsilon) \ge 1$ such that \begin{equation} \label{||T_j - T_l||_{op} le epsilon} \|T_j - T_l\|_{op} \le \epsilon \end{equation} for every $j, l \ge L(\epsilon)$. Equivalently, \begin{equation} \label{||T_j(v) - T_l(v)||_W le epsilon ||v||_V} \|T_j(v) - T_l(v)\|_W \le \epsilon \, \|v\|_V \end{equation} for every $j, l \ge L(\epsilon)$ and $v \in V$, so that $\{T_j(v)\}_{j = 1}^\infty$ is a Cauchy sequence in $W$ for every $v \in V$. Because $W$ is complete, it follows that $\{T_j(v)\}_{j = 1}^\infty$ converges in $W$ for every $v \in V$. Put \begin{equation} T(v) = \lim_{j \to \infty} T_j(v) \end{equation} for every $v \in V$. It is easy to see that $T$ is a linear mapping from $V$ into $W$, because of the linearity of the $T_j$'s. Observe that \begin{equation} \label{||T_j(v) - T(v)||_W le epsilon ||v||} \|T_j(v) - T(v)\|_W \le \epsilon \, \|v\| \end{equation} for every $j \ge L(\epsilon)$ and $v \in V$, by taking the limit as $l \to \infty$ in (\ref{||T_j(v) - T_l(v)||_W le epsilon ||v||_V}). In particular, \begin{equation} \|T(v)\|_W \le \|T_j(v)\|_W + \epsilon \, \|v\| \end{equation} when $j \ge L(\epsilon)$. Applying this to $\epsilon = 1$ and $j = L(1)$, we get that \begin{equation} \label{||T(v)||_W le ... le ||T_{L(1)}||_{op} ||v|| + ||v||} \|T(v)\|_W \le \|T_{L(1)}(v)\|_W + \|v\| \le \|T_{L(1)}\|_{op} \, \|v\| + \|v\|. \end{equation} This implies that $T$ is bounded, with $\|T\|_{op} \le \|T_{L(1)}\|_{op} + 1$. Using (\ref{||T_j(v) - T(v)||_W le epsilon ||v||}), we also get that \begin{equation} \label{||T_j - T||_{op} le epsilon} \|T_j - T\|_{op} \le \epsilon \end{equation} when $j \ge L(\epsilon)$. Thus $\{T_j\}_{j = 1}^\infty$ converges to $T$ with respect to the operator norm. This shows that every Cauchy sequence in $\mathcal{BL}(V, W)$ converges to an element of $\mathcal{BL}(V, W)$ when $W$ is complete, as desired. In particular, we can apply this to $W = {\bf R}$ or ${\bf C}$, as appropriate, to get that the dual space $V^*$ of bounded linear functionals on $V$ is complete with respect to the dual norm, since the real and complex numbers are complete with respect to their standard metrics. Suppose now that $V_1$, $V_2$, and $V_3$ are vector spaces, all real or all complex, and equipped with norms. Let $T_1$ be a bounded linear mapping from $V_1$ into $V_2$, and let $T_2$ be a bounded linear mapping from $V_2$ into $V_3$. As usual, the composition $T_2 \circ T_1$ is the linear mapping from $V_1$ into $V_3$ that sends $v \in V_1$ to $T_2(T_1(v))$. 
It is easy to see that $T_2 \circ T_1$ is also bounded, and that \begin{equation} \|T_2 \circ T_1\|_{op, 13} \le \|T_1\|_{op, 12} \, \|T_2\|_{op, 23}. \end{equation} Here the subscripts in the operator norms are included to indicate the vector spaces and norms being used. \section{Topological vector spaces, continued} \label{topological vector spaces, continued} \setcounter{equation}{0} Let $V$ be a topological vector space over the real or complex numbers. If $v \in V$ and $A \subseteq V$, then put \begin{equation} v + A = \{v + a : a \in A\}. \end{equation} If $A$ is an open or closed set in $V$, then $v + A$ has the same property, because translations determine homeomorphisms on $V$. Similarly, if $A, B \subseteq V$, then put \begin{equation} A + B = \{a + b : a \in A, b \in B\}. \end{equation} Equivalently, \begin{equation} \label{A + B = bigcup_{a in A} (a + B) = bigcup_{b in B} (b + A)} A + B = \bigcup_{a \in A} (a + B) = \bigcup_{b \in B} (b + A), \end{equation} which shows that $A + B$ is an open set in $V$ as soon as either $A$ or $B$ is open, since it is the union of a collection of open sets. If $A \subseteq V$ and $t$ is a real or complex number, as appropriate, then we put \begin{equation} t \, A = \{t \, a : a \in A\}. \end{equation} If $t \ne 0$ and $A$ is an open or closed set in $V$, then $t \, A$ has the same property, because multiplication by $t$ defines a homeomorphism on $V$. Of course, $t \, A = \{0\}$ when $t = 0$ and $A \ne \emptyset$. If $t = -1$, then $t \, A$ may be expressed simply as $- A$. Suppose that $U$ is an open set in $V$ that contains $0$. Continuity of addition at $0$ in $V$ implies that there are open sets $U_1, U_2 \subseteq V$ that contain $0$ and satisfy \begin{equation} U_1 + U_2 \subseteq U. \end{equation} If $v \in V$ and $v \ne 0$, then one can apply this to $U = V \backslash \{v\}$, to get that \begin{equation} U_1 \cap (v - U_2) = \emptyset. \end{equation} Here $v - U_2 = v + (- U_2)$, which is an open set in $V$ that contains $v$. This shows that $V$ is Hausdorff, using also translation-invariance of the topology on $V$. Let $U$ be an open set in $V$ that contains $0$ again. Continuity of scalar multiplication at $0$ implies that there is an open set $U_0 \subseteq V$ that contains $0$ and a positive real number $\delta > 0$ such that \begin{equation} t \, U_0 \subseteq U \end{equation} for every $t \in {\bf R}$ or ${\bf C}$, as appropriate, with $|t| < \delta$. Consider \begin{equation} \label{widetilde{U}_0 = bigcup_{|t| < delta} t U_0} \widetilde{U}_0 = \bigcup_{|t| < \delta} t \, U_0, \end{equation} where more precisely the union is taken over all real or complex numbers $t$ such that $|t| < \delta$, as appropriate. Equivalently, \begin{equation} \widetilde{U}_0 = \bigcup_{0 < |t| < \delta} t \, U_0, \end{equation} since $0 \in U_0$, and hence $0 \in \widetilde{U}_0$. This shows that $\widetilde{U}_0$ is an open set in $V$, because it is a union of open sets. A set $E \subseteq V$ is said to be \emph{balanced} if \begin{equation} r \, E \subseteq E \end{equation} for every $r \in {\bf R}$ or ${\bf C}$, as appropriate, with $|r| \le 1$. Thus every nonempty balanced set contains $0$ automatically. It is easy to see that the set $\widetilde{U}_0$ described in the previous paragraph is balanced by construction. This shows that for every open set $U \subseteq V$ with $0 \in U$ there is a nonempty balanced open set $\widetilde{U}_0 \subseteq U$. 
To put it another way, the nonempty balanced open sets in $V$ form a local base for the topology of $V$ at $0$. Let $U$ be an open set in $V$ that contains $0$ again, and let $v \in V$ be given. Because $0 \, v = 0 \in U$, continuity of scalar multiplication at $v$ implies that there is a $\delta(v, U) > 0$ such that \begin{equation} \label{t v in U} t \, v \in U \end{equation} for every $t \in {\bf R}$ or ${\bf C}$, as appropriate, with $|t| < \delta(v, U)$. Let $A$ be any subset of $V$, and let $U \subseteq V$ be an open set that contains $0$. If $v$ is an element of the closure $\overline{A}$ of $A$ in $V$, then \begin{equation} \label{(v - U) cap A ne emptyset} (v - U) \cap A \ne \emptyset, \end{equation} since $v - U$ is an open set in $V$ that contains $v$. Equivalently, \begin{equation} v \in A + U. \end{equation} It follows that \begin{equation} \label{overline{A} subseteq A + U} \overline{A} \subseteq A + U. \end{equation} \section{Bounded sets} \label{bounded sets} \setcounter{equation}{0} Let $V$ be a topological vector space over the real or complex numbers. A set $E \subseteq V$ is said to be \emph{bounded} if for every open set $U \subseteq V$ with $0 \in U$ there is a real or complex number $t_1$, as appropriate, such that \begin{equation} \label{E subseteq t_1 U} E \subseteq t_1 \, U. \end{equation} If $U$ is balanced, then it follows that \begin{equation} \label{E subseteq t U} E \subseteq t \, U \end{equation} for every $t \in {\bf R}$ or ${\bf C}$, as appropriate, such that $|t| \ge |t_1|$. If $U$ is any open set in $V$ that contains $0$, then there is a nonempty balanced open set $U' \subseteq U$, as in the previous section. In order to check that a set $E \subseteq V$ is bounded, it is therefore enough to consider nonempty balanced open sets in $V$, instead of arbitrary neighborhoods of $0$. If $U$ is an arbitrary neighborhood of $0$ in $V$, then we also get that (\ref{E subseteq t U}) holds for all real or complex numbers $t$, as appropriate, for which $|t|$ is sufficiently large. This follows by applying the stronger form of boundedness to a nonempty balanced open subset of $U$. If $E \subseteq V$ has only finitely many elements, then it is easy to see that $E$ is bounded, using the property (\ref{t v in U}) of neighborhoods of $0$ in $V$. It is also easy to see that the union of finitely many bounded subsets of $V$ is bounded, using the stronger form of boundedness described in the previous paragraphs. If the topology on $V$ is determined by a norm $\|v\|$, then a set $E \subseteq V$ is bounded if and only if $\|v\|$ is bounded on $E$. Similarly, if the topology on $V$ is determined by a nice collection of seminorms $\mathcal{N}$, then one can check that a set $E \subseteq V$ is bounded if and only if each seminorm $N \in \mathcal{N}$ is bounded on $E$. Of course, subsets of bounded sets are also bounded. If $U$ is a neighborhood of $0$ in $V$, then \begin{equation} \bigcup_{n = 1}^\infty n \, U = V, \end{equation} because of (\ref{t v in U}). If $K \subseteq V$ is compact, then it follows that \begin{equation} \label{K subseteq n_1 U cup cdots cup n_l U} K \subseteq n_1 \, U \cup \cdots \cup n_l \, U \end{equation} for some finite collection $n_1, \ldots, n_l$ of positive integers. If $U$ is also balanced, then (\ref{K subseteq n_1 U cup cdots cup n_l U}) implies that \begin{equation} \label{K subseteq n U} K \subseteq n \, U, \end{equation} where $n$ is the maximum of $n_1, \ldots, n_l$.
This shows that compact subsets of $V$ are bounded, since it suffices to check boundedness with respect to nonempty balanced open subsets of $V$, as before. Suppose that $E_1, E_2 \subseteq V$ are bounded, and let us check that $E_1 + E_2$ is bounded as well. Let $U$ be a neighborhood of $0$ in $V$, and let $U_1$, $U_2$ be neighborhoods of $0$ such that $U_1 + U_2 \subseteq U$, as in the previous section. Thus \begin{equation} \label{E_j subseteq t U_j} E_j \subseteq t \, U_j \end{equation} when $t$ is a real or complex number, as appropriate, for which $|t|$ is sufficiently large, and $j = 1, 2$. This implies that \begin{equation} E_1 + E_2 \subseteq t \, U_1 + t \, U_2 \subseteq t \, U \end{equation} when $t \in {\bf R}$ or ${\bf C}$, as appropriate, and $|t|$ is sufficiently large, as desired. In particular, it follows that translations of bounded sets are bounded, since sets with only one element are bounded. Let us show now that the closure $\overline{E}$ of a bounded set $E \subseteq V$ is bounded. Let $U$ be a neighborhood of $0$ in $V$ again, and let $U_1$, $U_2$ be neighborhoods of $0$ such that $U_1 + U_2 \subseteq U$. Because $E$ is bounded, there is a nonzero real or complex number $t$, as appropriate, such that \begin{equation} E \subseteq t \, U_1. \end{equation} We also have that \begin{equation} \label{overline{E} subseteq E + t U_2} \overline{E} \subseteq E + t \, U_2, \end{equation} as in (\ref{overline{A} subseteq A + U}), since $t \, U_2$ is a neighborhood of $0$ in $V$. Hence \begin{equation} \label{overline{E} subseteq E + t U_2 subseteq t U_1 + t U_2 subseteq t U} \overline{E} \subseteq E + t \, U_2 \subseteq t \, U_1 + t \, U_2 \subseteq t \, U, \end{equation} as desired. If $E \subseteq V$ is bounded and $r$ is a real or complex number, then it is easy to see that $r \, E$ is bounded too. More generally, suppose that $W$ is another topological vector space over the real or complex numbers, depending on whether $V$ is real or complex. If $T$ is a continuous linear mapping from $V$ into $W$ and $E \subseteq V$ is bounded, then it is easy to see that $T(E)$ is bounded in $W$ as well. \section{Uniform boundedness} \label{uniform boundedness} \setcounter{equation}{0} Let $M$ be a complete metric space, and let $\mathcal{E}$ be a nonempty collection of continuous nonnegative real-valued functions on $M$. Suppose that $\mathcal{E}$ is bounded pointwise on $M$, in the sense that \begin{equation} \label{mathcal{E}(x) = {f(x) : x in M}} \mathcal{E}(x) = \{f(x) : f \in \mathcal{E}\} \end{equation} is a bounded set of real numbers for each $x \in M$. Put \begin{equation} \mathcal{E}_n = \{x \in M : f(x) \le n \hbox{ for every } f \in \mathcal{E}\} \end{equation} for each positive integer $n$. Thus $\mathcal{E}_n$ is a closed set in $M$ for every $n$, because the elements of $\mathcal{E}$ are supposed to be continuous functions on $M$, and \begin{equation} \bigcup_{n = 1}^\infty \mathcal{E}_n = M, \end{equation} by the hypothesis that $\mathcal{E}$ be bounded pointwise on $M$. The Baire category theorem implies that $\mathcal{E}_n$ has nonempty interior for some $n$, so that $\mathcal{E}$ is uniformly bounded on a nonempty open set in $M$. Now let $V$ be a real or complex vector space with a norm $\|v\|$, and let $\Lambda$ be a nonempty collection of continuous linear functionals on $V$. Suppose that $\Lambda$ is bounded pointwise on $V$, so that \begin{equation} \Lambda(v) = \{\lambda(v) : \lambda \in \Lambda\} \end{equation} is a bounded set of real or complex numbers, as appropriate, for every $v \in V$. 
If $V$ is also complete, then it follows from the argument in the previous paragraph that $\Lambda$ is uniformly bounded on a nonempty open set in $V$. Using the linearity of the elements of $\Lambda$, one can show that $\Lambda$ is actually bounded on the unit ball in $V$, which means that the dual norms of the elements of $\Lambda$ are uniformly bounded. This is a version of the Banach--Steinhaus theorem. Let $V^*$ be the dual space of continuous linear functionals on $V$, as usual. Thus $V^*$ is equipped with the dual norm $\|\lambda\|_*$, as in Section \ref{dual norms}, and also the weak$^*$ topology, as in Section \ref{weak^* topology}. It is easy to see that every bounded set in $V^*$ with respect to the dual norm is also bounded with respect to the weak$^*$ topology, in the sense described in the previous section. Conversely, if $V$ is complete, then every bounded set in $V^*$ with respect to the weak$^*$ topology is also bounded with respect to the dual norm, by the principle of uniform boundedness described in the previous paragraph. Similarly, we can consider $V$ equipped with the weak topology associated to the collection of all continuous linear functionals on $V$ with respect to the norm, as in Section \ref{weak topologies}. If $E \subseteq V$ is bounded with respect to the norm, then it is easy to see that $E$ is also bounded with respect to the weak topology on $V$. Conversely, suppose that $E$ is bounded with respect to the weak topology on $V$. This means that \begin{equation} \label{E(lambda) = {lambda(v) : v in E}} E(\lambda) = \{\lambda(v) : v \in E\} \end{equation} is a bounded set of real or complex numbers, as appropriate, for each $\lambda \in V^*$. As in Section \ref{weak^* topology}, \begin{equation} \label{L_v(lambda) = lambda(v)} L_v(\lambda) = \lambda(v) \end{equation} defines a continuous linear functional on $V^*$ with respect to the dual norm $\|\lambda\|_*$ for every $v \in V$. Let $V^{**} = (V^*)^*$ be the space of continuous linear functionals on $V^*$ with respect to the dual norm on $V^*$. Thus $V^{**}$ is also equipped with a weak$^*$ topology, as the dual of $V^*$. Consider \begin{equation} \mathcal{L} = \{L_v : v \in E\}, \end{equation} as a subset of $V^{**}$. It is easy to see that $\mathcal{L}$ is bounded with respect to the weak$^*$ topology on $V^{**}$, because $E$ is bounded with respect to the weak topology on $V$. We also know that $V^*$ is complete with respect to the dual norm, as in Section \ref{bounded linear mappings}. It follows from the discussion in the previous paragraph that $\mathcal{L}$ is bounded with respect to the dual norm on $V^{**}$ associated to the dual norm on $V^*$. As in Section \ref{weak^* topology}, the dual norm of $L_v$ as a continuous linear functional on $V^*$ is equal to the norm of $v$ as an element of $V$ for every $v \in V$, by the Hahn--Banach theorem. This implies that $E$ is bounded with respect to the norm on $V$. \section{Bounded linear mappings, continued} \label{bounded linear mappings, continued} \setcounter{equation}{0} Let $V$, $W$ be topological vector spaces, both real or both complex. A linear mapping $T : V \to W$ is said to be \emph{bounded} if for every bounded set $E \subseteq V$, $T(E)$ is a bounded set in $W$. It is easy to see that continuous linear mappings are bounded in this sense, as mentioned at the end of Section \ref{bounded sets}. Conversely, if the topology on $V$ is determined by a norm and $T : V \to W$ is bounded, then $T$ is continuous. 
More precisely, if there is an open set $U \subseteq V$ that contains $0$ such that $T(U)$ is bounded in $W$, then it is not difficult to check that $T$ is continuous. In particular, this condition holds when $T : V \to W$ is bounded and there is a bounded neighborhood $U$ of $0$ in $V$. If the topology on $V$ is determined by a norm, then one can simply take $U$ to be the open unit ball in $V$. Let $V$ be a vector space over the real or complex numbers equipped with a norm. As in the previous section, the uniform boundedness principle implies that every bounded set in $V$ with respect to the weak topology is also bounded with respect to the norm. Equivalently, the identity mapping on $V$ is bounded as a mapping from $V$ with the weak topology into $V$ with the norm topology. However, the identity mapping on $V$ is not continuous as a mapping from $V$ with the weak topology into $V$ with the norm topology, unless $V$ is finite-dimensional. This is because the open unit ball in $V$ with respect to the norm is not an open set with respect to the weak topology when $V$ is infinite-dimensional, since an open set in $V$ with respect to the weak topology that contains $0$ also contains a linear subspace of $V$ of finite codimension. Similarly, if $V$ is complete, then every bounded set in $V^*$ with respect to the weak$^*$ topology is bounded with respect to the dual norm, as in the previous section. This implies that the identity mapping on $V^*$ is bounded as a mapping from $V^*$ with the weak$^*$ topology into $V^*$ with the topology determined by the dual norm. As in the preceding paragraph, this mapping is not continuous when $V$ is infinite-dimensional. Note that $V^*$ is infinite-dimensional when $V$ is, by the Hahn--Banach theorem. Let $V$, $W$ be topological vector spaces again, both real or both complex. If $T : V \to W$ is a bounded linear mapping and $a$ is a real or complex number, as appropriate, then $a \, T$ is also a bounded linear mapping from $V$ into $W$. This follows from the fact that a scalar multiple of a bounded set in a topological vector space is bounded as well. Similarly, if $R : V \to W$ is another bounded linear mapping, then the sum $R + T$ is bounded too. This uses the boundedness of the sum of two bounded subsets of a topological vector space. Now let $V_1$, $V_2$, and $V_3$ be topological vector spaces, all real or all complex. If $T_1 : V_1 \to V_2$ and $T_2 : V_2 \to V_3$ are bounded linear mappings, then it is easy to see that their composition $T_2 \circ T_1$ is a bounded linear mapping from $V_1$ into $V_3$, directly from the definition of a bounded linear mapping. \section{Bounded sequences} \label{bounded sequences} \setcounter{equation}{0} Let $V$ be a topological vector space over the real or complex numbers. A sequence $\{v_j\}_{j = 1}^\infty$ of elements of $V$ is said to be bounded if the set of $v_j$'s is bounded in $V$. If $\{v_j\}_{j = 1}^\infty$ converges to an element $v$ of $V$, then it is easy to see that the set $K$ consisting of the $v_j$'s and $v$ is compact, which works as well in any topological space. This implies that convergent sequences are bounded, since compact sets are bounded. One can also show this more directly from the definitions, which is especially simple when $\{v_j\}_{j = 1}^\infty$ converges to $0$. Similarly, one can check that Cauchy sequences are bounded in $V$. 
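If the topology on $V$ is determined by a norm $\|v\|$, then a sequence of elements of $V$ is bounded exactly when the corresponding sequence of norms is bounded. For instance, if $v \in V$ and $v \ne 0$, then \begin{equation} v_j = (-1)^j \, v \end{equation} defines a bounded sequence in $V$ that is neither convergent nor Cauchy, so that the converses of the statements in the previous paragraph do not hold. 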
If $\{v_j\}_{j = 1}^\infty$ is a bounded sequence in $V$, and $\{t_j\}_{j = 1}^\infty$ is a sequence of real or complex numbers, as appropriate, that converges to $0$, then it is easy to see that $\{t_j \, v_j\}_{j = 1}^\infty$ converges to $0$ in $V$. Suppose now that there is a countable local base for the topology of $V$ at $0$. This means that there is a sequence $U_1, U_2, \ldots$ of open subsets of $V$ that contain $0$ with the property that if $U$ is any other open set in $V$ containing $0$, then $U_l \subseteq U$ for some $l$. As in Section \ref{topological vector spaces, continued}, we can also take the $U_l$'s to be balanced subsets of $V$. We may as well ask that $U_{l + 1} \subseteq U_l$ for each $l$ too, since otherwise we can replace $U_l$ with $U_1 \cap \cdots \cap U_l$ for each $l$. Let $\{v_j\}_{j = 1}^\infty$ be a sequence of elements of $V$ that converges to $0$, and let us show that there is a sequence of positive real numbers $\{r_j\}_{j = 1}^\infty$ such that $r_j \to \infty$ as $j \to \infty$ and $\{r_j \, v_j\}_{j = 1}^\infty$ converges to $0$ in $V$. Because $\{v_j\}_{j = 1}^\infty$ converges to $0$ and $l^{-1} \, U_l$ is an open set in $V$ that contains $0$ for each $l$, there is a positive integer $N_l$ for each $l$ such that \begin{equation} v_j \in l^{-1} \, U_l \end{equation} when $j \ge N_l$. We may as well ask that $N_{l + 1} > N_l$ for every $l$ too, by increasing the $N_l$'s if necessary. Put \begin{equation} r_j = l \quad\hbox{when } N_l \le j < N_{l + 1}, \end{equation} and $r_j = 1$ when $1 \le j < N_1$ if $N_1 > 1$. Thus $r_j \to \infty$ as $j \to \infty$, and \begin{equation} r_j \, v_j \in U_l \quad\hbox{when } N_l \le j < N_{l + 1}. \end{equation} This implies that $r_j \, v_j \in U_l$ when $j \ge N_l$, since $U_{l + 1} \subseteq U_l$ for each $l$. It follows that $\{r_j \, v_j\}_{j = 1}^\infty$ converges to $0$ in $V$, as desired. In particular, $\{r_j \, v_j\}_{j = 1}^\infty$ is a bounded sequence in $V$. Let $W$ be another topological vector space, which is real if $V$ is real and complex if $V$ is complex. If $T$ is a bounded linear mapping from $V$ into $W$ and $V$ has a countable local base for its topology at $0$, then a well known theorem states that $T$ is continuous. To see this, it suffices to show that if $\{v_j\}_{j = 1}^\infty$ is a sequence of elements of $V$ that converges to $0$, then $\{T(v_j)\}_{j = 1}^\infty$ converges to $0$ in $W$. Let $\{r_j\}_{j = 1}^\infty$ be a sequence of positive real numbers such that $r_j \to +\infty$ as $j \to \infty$ and $\{r_j \, v_j\}_{j = 1}^\infty$ converges to $0$ in $V$, as in the previous paragraphs. Thus $\{r_j \, v_j\}_{j = 1}^\infty$ is bounded in $V$, which implies that $\{T(r_j \, v_j)\}_{j = 1}^\infty$ is bounded in $W$, since $T : V \to W$ is bounded by hypothesis. It follows that $T(v_j) = r_j^{-1} \, T(r_j \, v_j)$ converges to $0$ as $j \to \infty$ in $W$, because $\{r_j^{-1}\}_{j = 1}^\infty$ converges to $0$ in the real line. This shows that $T$ is sequentially continuous at $0$, and hence that $T$ is continuous at $0$, since $V$ has a countable local base for its topology at $0$. Of course, a linear mapping between topological vector spaces is continuous at every point as soon as it is continuous at $0$. 
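To illustrate the choice of the $r_j$'s in the argument above, suppose that the topology on $V$ is determined by a norm $\|v\|$, and that $\{v_j\}_{j = 1}^\infty$ converges to $0$ in $V$. In this case, one can simply take \begin{equation} r_j = \min(j, \|v_j\|^{-1/2}) \end{equation} when $v_j \ne 0$, and $r_j = j$ when $v_j = 0$. It is easy to see that $r_j \to \infty$ as $j \to \infty$, because $\|v_j\| \to 0$, and that \begin{equation} \|r_j \, v_j\| \le \|v_j\|^{1/2} \end{equation} for each $j$ with $v_j \ne 0$, so that $\{r_j \, v_j\}_{j = 1}^\infty$ converges to $0$ in $V$ as well. 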
\section{Bounded linear functionals} \label{bounded linear functionals} \setcounter{equation}{0} If $V$ is a topological vector space over the real or complex numbers, then we can restrict our attention in the previous section to the case where $W$ is the one-dimensional vector space of real or complex numbers, as appropriate. Thus a bounded linear functional on $V$ is a linear functional on $V$ that is bounded as a linear mapping into ${\bf R}$ or ${\bf C}$. Suppose now that $V$ is equipped with a norm $\|v\|$, so that a linear functional on $V$ is bounded if and only if it is continuous, as in the previous section. Let $V^*$ be the dual space of bounded linear functionals on $V$, which is equipped with the dual norm $\|\lambda\|_*$, as in Section \ref{dual norms}. Let $V^{**}$ be the space of bounded linear functionals on $V^*$, which is equipped with a dual norm $\|L\|_{**}$ associated to the dual norm $\|\lambda\|_*$ on $V^*$. As in Section \ref{weak^* topology}, each $v \in V$ determines a bounded linear functional $L_v$ on $V^*$, defined by \begin{equation} L_v(\lambda) = \lambda(v), \end{equation} and we also have that \begin{equation} \|L_v\|_{**} = \|v\|. \end{equation} This defines an isometric linear embedding $v \mapsto L_v$ of $V$ into $V^{**}$. A Banach space $V$ is said to be \emph{reflexive} if every bounded linear functional on $V^*$ is of the form $L_v$ for some $v \in V$. It is easy to see that finite-dimensional Banach spaces are automatically reflexive. If $E$ is a nonempty set, then we have seen in Section \ref{c_0(E)} that the dual of $c_0(E)$ may be identified with $\ell^1(E)$, and we have seen in Section \ref{dual of ell^1} that the dual of $\ell^1(E)$ may be identified with $\ell^\infty(E)$. In this case, the natural embedding of $c_0(E)$ into $c_0(E)^{**}$ described in the previous paragraph corresponds exactly to the standard inclusion of $c_0(E)$ in $\ell^\infty(E)$ as a linear subspace. If $E$ has infinitely many elements, then $c_0(E)$ is a proper linear subspace of $\ell^\infty(E)$, and it follows that $c_0(E)$ is not reflexive. If $V$ is a real or complex vector space equipped with a norm $\|v\|$, then every subset of $V^*$ that is bounded with respect to the dual norm is also bounded with respect to the weak$^*$ topology. This implies that every bounded linear functional on $V^*$ with respect to the weak$^*$ topology is also bounded with respect to the dual norm. Conversely, if $V$ is also complete with respect to the norm, then every bounded subset of $V^*$ with respect to the weak$^*$ topology is also bounded with respect to the dual norm, as in Section \ref{uniform boundedness}. This implies that every bounded linear functional on $V^*$ with respect to the dual norm is also bounded with respect to the weak$^*$ topology. However, a linear functional on $V^*$ is continuous with respect to the weak$^*$ topology if and only if it is of the form $L_v$ for some $v \in V$, as in Section \ref{weak topologies}. \section{Uniform boundedness, continued} \label{uniform boundedness, continued} \setcounter{equation}{0} Let $V$ be a topological vector space over the real or complex numbers, and let $\Lambda$ be a nonempty collection of continuous linear functionals on $V$. Suppose that $\Lambda$ is bounded pointwise on $V$, in the sense that \begin{equation} \Lambda(v) = \{\lambda(v) : \lambda \in \Lambda\} \end{equation} is a bounded set of real or complex numbers, as appropriate, for each $v \in V$. 
This is equivalent to asking that $\Lambda$ be bounded with respect to the weak$^*$ topology on the dual space $V^*$ of continuous linear functionals on $V$. If the topology on $V$ is determined by a norm, and if $V$ is complete with respect to this norm, then $\Lambda$ is bounded with respect to the dual norm on $V^*$, as in Section \ref{uniform boundedness}. Suppose now that $V$ is metrizable and complete, even if the topology on $V$ may not be determined by a norm. If $\Lambda \subseteq V^*$ is bounded with respect to the weak$^*$ topology on $V^*$, and hence bounded pointwise on $V$, then it follows from the Baire category theorem that there is a nonempty open set $U_1 \subseteq V$ on which $\Lambda$ is uniformly bounded, as before. If $u_1 \in U_1$, then $U = U_1 - u_1$ is an open set in $V$ that contains $0$, and $\Lambda$ is also uniformly bounded on $U$, because the elements of $\Lambda$ are linear and $\Lambda$ is bounded pointwise at $u_1$. This is another version of the theorem of Banach and Steinhaus. Let us restrict our attention now to the case where the topology on $V$ is determined by a nice collection of seminorms $\mathcal{N}$. More precisely, we ask that $\mathcal{N}$ have only finitely or countably many elements, so that $V$ is metrizable, and we still ask that $V$ be complete. If $\Lambda \subseteq V^*$ is bounded pointwise on $V$, then $\Lambda$ is uniformly bounded on a neighborhood of $0$, as in the previous paragraph. In this case, this implies that there are finitely many seminorms $N_1, \ldots, N_l \in \mathcal{N}$ and a nonnegative real number $C$ such that \begin{equation} |\lambda(v)| \le C \, \max_{1 \le j \le l} N_j(v) \end{equation} for every $\lambda \in \Lambda$ and $v \in V$. This is analogous to the discussion in Section \ref{continuous linear functionals}. Of course, if there are finitely many seminorms $N_1, \ldots, N_l \in \mathcal{N}$ and a $C \ge 0$ such that the preceding condition holds for every $\lambda \in \Lambda$ and $v \in V$, then $\Lambda$ is bounded pointwise on $V$. In this situation, the choice of $N_1, \ldots, N_l$ is part of the uniform boundedness condition. \section{Another example} \label{another example} \setcounter{equation}{0} Let $V$ be the vector space of real or complex-valued functions on the set ${\bf Z}_+$ of positive integers. If $f \in V$ and $\rho$ is a positive real-valued function on ${\bf Z}_+$, then put \begin{equation} \label{B_rho(f) = {g in V : |f(l) - g(l)| < rho(l) for every l in {bf Z}_+}} B_\rho(f) = \{g \in V : |f(l) - g(l)| < \rho(l) \hbox{ for every } l \in {\bf Z}_+\}. \end{equation} Let us say that a set $U \subseteq V$ is an open set if for every $f \in U$ there is a positive real-valued function $\rho$ on ${\bf Z}_+$ such that \begin{equation} B_\rho(f) \subseteq U. \end{equation} It is easy to see that this defines a topology on $V$, and that $B_\rho(f)$ is an open set in $V$ with respect to this topology for every $f \in V$ and positive function $\rho$ on ${\bf Z}_+$. Equivalently, $V$ is the same as the Cartesian product of a sequence of copies of the real or complex numbers, and this topology on $V$ corresponds to the ``strong product topology'', generated by arbitrary products of open subsets of ${\bf R}$ or ${\bf C}$. One can also check that \begin{equation} (f, g) \mapsto f + g \end{equation} defines a continuous mapping from $V \times V$ into $V$, using the product topology on $V \times V$ determined by the topology just described on $V$. 
Similarly, \begin{equation} \label{f mapsto t f} f \mapsto t \, f \end{equation} is continuous as a mapping from $V$ into itself for each $t \in {\bf R}$ or ${\bf C}$, as appropriate. However, if $f(l) \ne 0$ for infinitely many $l \in {\bf Z}_+$, then \begin{equation} \label{t mapsto t f} t \mapsto t \, f \end{equation} is not continuous as a mapping from the real or complex numbers with the standard topology into $V$, and so $V$ is not a topological vector space. Let $V_0$ be the linear subspace of $V$ consisting of functions $f$ such that $f(l) = 0$ for all but finitely many $l \in {\bf Z}_+$. It is not difficult to verify that $V_0$ is a topological vector space with respect to the topology induced by the one just defined on $V$. In particular, if $f \in V_0$, then (\ref{t mapsto t f}) is continuous as a mapping from the real or complex numbers into $V$. Let us check that $V_0$ is a closed set in $V$. Let $f \in V \backslash V_0$ be given, and let $\rho$ be defined on ${\bf Z}_+$ by \begin{equation} \label{rho(l) = |f(l)|} \rho(l) = |f(l)| \end{equation} when $f(l) \ne 0$, and $\rho(l) = 1$ otherwise. Thus $\rho(l) > 0$ for every $l \in {\bf Z}_+$. If $g \in B_\rho(f)$, then \begin{equation} \label{|f(l) - g(l)| < rho(l) = |f(l)|} |f(l) - g(l)| < \rho(l) = |f(l)| \end{equation} when $f(l) \ne 0$, which implies that $g(l) \ne 0$ for infinitely many $l \in {\bf Z}_+$. This shows that \begin{equation} B_\rho(f) \subseteq V \backslash V_0, \end{equation} and hence that $V \backslash V_0$ is an open set in $V$, as desired. Let $U_1, U_2, \ldots$, be a sequence of relatively open sets in $V_0$ containing $0$. By construction, there is a sequence $\rho_1, \rho_2, \ldots$ of positive functions on ${\bf Z}_+$ such that \begin{equation} B_{\rho_j}(0) \cap V_0 \subseteq U_j \end{equation} for each $j$. Put \begin{equation} \label{rho(j) = frac{rho_j(j)}{2}} \rho(j) = \frac{\rho_j(j)}{2} \end{equation} for each $j \in {\bf Z}_+$, so that $\rho(j)$ is another positive function on ${\bf Z}_+$. Thus $B_\rho(0) \cap V_0$ is another relatively open set in $V_0$ that contains $0$, and \begin{equation} \label{B_{rho_j}(0) cap V_0 not subseteq B_rho(0) cap V_0} B_{\rho_j}(0) \cap V_0 \not\subseteq B_\rho(0) \cap V_0 \end{equation} for each $j$, because $\rho(j) < \rho_j(j)$ for each $j$. This implies that \begin{equation} U_j \not\subseteq B_\rho(0) \cap V_0 \end{equation} for each $j$, and it follows that $V_0$ does not have a countable local base for its topology at $0$. Let $E$ be a subset of $V_0$, and let $L(E)$ be the set of $l \in {\bf Z}_+$ for which there is an $f \in E$ such that $f(l) \ne 0$. Also let $\rho$ be a positive function on ${\bf Z}_+$ such that \begin{equation} \rho(l) = \frac{|f_l(l)|}{l} \end{equation} for some $f_l \in E$ with $f_l(l) \ne 0$ when $l \in L(E)$. Thus \begin{equation} f_l \not\in t \, B_\rho(0) \end{equation} when $l \in L(E)$ and $t \in {\bf R}$ or ${\bf C}$ satisfies $|t| \le l$. If $L(E)$ has infinitely many elements, then it follows that \begin{equation} E \not\subseteq t \, B_\rho(0) \end{equation} for any real or complex number $t$, as appropriate. This shows that $L(E)$ has only finitely many elements when $E$ is a bounded set in $V_0$. Let $V_{0, n}$ be the $n$-dimensional linear subspace of $V_0$ consisting of functions $f$ on ${\bf Z}_+$ such that $f(l) = 0$ when $l > n$, for each positive integer $n$. Note that \begin{equation} \bigcup_{n = 1}^\infty V_{0, n} = V_0. 
\end{equation} If $E \subseteq V_0$ is bounded, then $E \subseteq V_{0, n}$ for some $n$, as in the previous paragraph. In this case, $E$ is also bounded as a subset of $V_{0, n}$ in the usual sense, which is to say that \begin{equation} E_j = \{f(j) : f \in E\} \end{equation} is bounded in ${\bf R}$ or ${\bf C}$, as appropriate, for each $j \le n$. Conversely, if $E \subseteq V_{0, n}$ and $E_j$ is bounded for each $j \le n$, then $E$ is bounded in $V_{0, n}$, and hence in $V_0$. Let $\tau$ be a positive real-valued function on ${\bf Z}_+$, and consider the norm $N_\tau$ on $V_0$ defined by \begin{equation} N_\tau(f) = \max_{j \ge 1} |f(j)| \, \tau(j). \end{equation} If $\mathcal{N}$ is the collection of all of these norms $N_\tau$ on $V_0$, then it is not difficult to check that the topology on $V_0$ associated to $\mathcal{N}$ is the same as the topology on $V_0$ induced from the one on $V$ as before. To see this, observe that the open unit ball in $V_0$ with respect to $N_\tau$, \begin{equation} \{f \in V_0 : N_\tau(f) < 1\}, \end{equation} is the same as the set of $f \in V_0$ for which there is a positive real number $r < 1$ such that \begin{equation} |f(j)| < r \, \tau(j)^{-1} \end{equation} for each $j \in {\bf Z}_+$. This is contained in $B_\rho(0)$ with $\rho = 1/\tau$, and more precisely it is equal to \begin{equation} \label{bigcup_{0 < r < 1} B_{r rho}(0)} \bigcup_{0 < r < 1} B_{r \, \rho}(0), \end{equation} which is close enough to show that the topologies are the same. Similarly, if $\sigma$ is a positive real-valued function on ${\bf Z}_+$, then \begin{equation} \label{N'_sigma(f) = sum_{j = 1}^infty |f(j)| sigma(j)} N'_\sigma(f) = \sum_{j = 1}^\infty |f(j)| \, \sigma(j) \end{equation} defines a norm on $V_0$. Clearly \begin{equation} N_\sigma(f) \le N'_\sigma(f) \end{equation} for every $f \in V_0$. In the other direction, if we put \begin{equation} \tau(j) = j^2 \, \sigma(j) \end{equation} for each $j \in {\bf Z}_+$, then \begin{equation} N'_\sigma(f) \le \Big(\sum_{j = 1}^\infty \frac{1}{j^2}\Big) \, N_\tau(f) \end{equation} for every $f \in V_0$. If $\mathcal{N}'$ is the collection of all of these norms $N'_\sigma$ on $V_0$, then it follows that $\mathcal{N}'$ determines the same topology on $V_0$ as $\mathcal{N}$ does. Hence the topology on $V_0$ associated to $\mathcal{N}'$ is also the same as the one induced on $V_0$ by the topology on $V$ defined at the beginning of the section. Let $N$ be any seminorm on $V_0$, and let $\delta_j(l)$ be the function on ${\bf Z}_+$ equal to $1$ when $j = l$ and to $0$ otherwise. If \begin{equation} N(\delta_j) \le \sigma(j) \end{equation} for each $j \in {\bf Z}_+$, then we get that \begin{equation} N(f) \le N'_\sigma(f) \end{equation} for every $f \in V_0$. More precisely, if $f \in V_{0, n}$, then $f = \sum_{j = 1}^n f(j) \, \delta_j$, and hence \begin{equation} N(f) \le \sum_{j = 1}^n |f(j)| \, N(\delta_j) \le N'_\sigma(f). \end{equation} This implies that open balls with respect to $N$ are also open sets in $V_0$. Let $h$ be a real or complex-valued function on ${\bf Z}_+$, as appropriate, and consider \begin{equation} \label{lambda_h(f) = sum_{j = 1}^infty f(j) h(j)} \lambda_h(f) = \sum_{j = 1}^\infty f(j) \, h(j) \end{equation} for $f \in V_0$. This defines a linear functional on $V_0$, and every linear functional on $V_0$ is of this form. If \begin{equation} |h(j)| \le \sigma(j) \end{equation} for each $j \in {\bf Z}_+$, then it follows that \begin{equation} |\lambda_h(f)| \le N'_\sigma(f) \end{equation} for every $f \in V_0$. 
Thus $\lambda_h$ is continuous on $V_0$, and hence every linear functional on $V_0$ is continuous. \part{Algebras of functions} \section{Homomorphisms} \label{homomorphisms} \setcounter{equation}{0} Let $X$ be a set, and let $\mathcal{F}$ be an ultrafilter on $X$. If $f$ is a real or complex-valued function on $X$, then $f_*(\mathcal{F})$ is an ultrafilter on ${\bf R}$ or ${\bf C}$, as appropriate, as in Section \ref{ultrafilters}. If $f$ is bounded on $X$, then one can check that $f_*(\mathcal{F})$ converges to an element of ${\bf R}$ or ${\bf C}$. In this case, it is a bit simpler to think of $f$ as taking values in a compact subset $K$ of ${\bf R}$ or ${\bf C}$, so that $f$ maps $\mathcal{F}$ to an ultrafilter on $K$, which therefore converges. Although this is not quite the same as $f_*(\mathcal{F})$ as an ultrafilter on ${\bf R}$ or ${\bf C}$, they are almost the same, and converge to the same limit, as in Section \ref{filters, subsets}. Let $L_\mathcal{F}(f)$ denote the limit of $f_*(\mathcal{F})$, which may also be described as the limit of $f$ along $\mathcal{F}$. If $p \in X$ and $\mathcal{F}$ is the ultrafilter $\mathcal{F}_p$ based at $p$, consisting of all subsets of $X$ that contain $p$, then $L_\mathcal{F}(f) = f(p)$ for every bounded function $f$ on $X$. It is easy to see that every ultrafilter on $X$ is of this type when $X$ has only finitely many elements. Otherwise, if $X$ is an infinite set, then the collection of subsets $A$ of $X$ such that $X \backslash A$ has only finitely many elements is a filter on $X$. Any ultrafilter on $X$ which is a refinement of this filter is not the same as $\mathcal{F}_p$ for any $p \in X$. Observe that \begin{equation} L_\mathcal{F}(f) \in \overline{f(X)} \end{equation} for every $f \in \ell^\infty(X)$. In particular, \begin{equation} |L_\mathcal{F}(f)| \le \|f\|_\infty. \end{equation} If $f$ is a constant function on $X$, then $L_\mathcal{F}(f)$ is equal to this constant value. One can also check that \begin{equation} L_\mathcal{F}(f + g) = L_\mathcal{F}(f) + L_\mathcal{F}(g) \end{equation} and \begin{equation} L_\mathcal{F}(f \, g) = L_\mathcal{F}(f) \, L_\mathcal{F}(g) \end{equation} for every $f, g \in \ell^\infty(X)$. This is analogous to standard facts about the limits of a sum and product being equal to the corresponding sum or product of limits. If $X$ is an infinite set and $\mathcal{F} \ne \mathcal{F}_p$ for any $p \in X$, then one can check that $L_\mathcal{F}(f) = 0$ for every $f \in c_0(X)$. Similarly, $L_\mathcal{F}(f)$ is the same as the limit of $f(x)$ at infinity when $f \in c(X)$, as in Section \ref{dual of ell^1}. Remember that the limit of $f(x)$ at infinity defines a continuous linear functional on $c(X)$, with dual norm equal to $1$ with respect to the $\ell^\infty$ norm. Thus $L_\mathcal{F}(f)$ is an extension of this linear functional on $c(X)$ to a continuous linear functional on $\ell^\infty(X)$, also with dual norm equal to $1$. The existence of such an extension was mentioned before, as a consequence of the Hahn--Banach theorem. \section{Homomorphisms, continued} \label{homomorphisms, continued} \setcounter{equation}{0} Let $X$ be a nonempty set, and note that the product of two bounded real or complex-valued functions on $X$ is bounded as well. Suppose that $L$ is a linear functional on $\ell^\infty(X)$ which is a homomorphism with respect to multiplication of functions, in the sense that \begin{equation} L(f \, g) = L(f) \, L(g) \end{equation} for every $f, g \in \ell^\infty(X)$. 
If $L(f) = 0$ for every $f \in \ell^\infty(X)$, then $L$ satisfies these conditions trivially, and so we suppose that $L(f) \ne 0$ for at least one $f \in \ell^\infty(X)$. This implies that \begin{equation} L({\bf 1}_X) = 1, \end{equation} where ${\bf 1}_X$ is the constant function equal to $1$ on $X$, since ${\bf 1}_X \, f = f$ and hence \begin{equation} L(f) = L({\bf 1}_X \, f) = L({\bf 1}_X) \, L(f). \end{equation} We would like to show that $L$ is associated to an ultrafilter on $X$, as in the previous section. If $A \subseteq X$, then let ${\bf 1}_A(x)$ be the indicator function on $X$ associated to $A$, equal to $1$ when $x \in A$ and to $0$ when $x \in X \backslash A$. Thus ${\bf 1}_A^2 = {\bf 1}_A$, which implies that \begin{equation} L({\bf 1}_A) = L({\bf 1}_A^2) = L({\bf 1}_A)^2, \end{equation} and hence $L({\bf 1}_A) = 0$ or $1$. Because ${\bf 1}_A + {\bf 1}_{X \backslash A} = {\bf 1}_X$, \begin{equation} L({\bf 1}_A) + L({\bf 1}_{X \backslash A}) = L({\bf 1}_X) = 1, \end{equation} so that exactly one of $L({\bf 1}_A)$ and $L({\bf 1}_{X \backslash A})$ is equal to $1$. If $A, B \subseteq X$, then ${\bf 1}_A \, {\bf 1}_B = {\bf 1}_{A \cap B}$, and so \begin{equation} L({\bf 1}_{A \cap B}) = L({\bf 1}_A) \, L({\bf 1}_B). \end{equation} This shows that $L({\bf 1}_{A \cap B}) = 1$ when $L({\bf 1}_A) = L({\bf 1}_B) = 1$. Similarly, if $A \subseteq B$ and $L({\bf 1}_A) = 1$, then $A \cap B = A$, and we get that $L({\bf 1}_B) = 1$. Of course, ${\bf 1}_A = 0$ when $A = \emptyset$, so that $L({\bf 1}_A) = 0$. If \begin{equation} \mathcal{F}_L = \{A \subseteq X : L({\bf 1}_A) = 1\}, \end{equation} then it follows that $\mathcal{F}_L$ is a filter on $X$. More precisely, $\mathcal{F}_L$ is an ultrafilter on $X$, since $A$ or $X \backslash A$ is in $\mathcal{F}_L$ for each $A \subseteq X$. It is easy to see that $L({\bf 1}_A)$ is the same as the limit of ${\bf 1}_A$ along $\mathcal{F}_L$ as in the previous section. This implies that $L(f)$ is equal to the limit of $f$ along $\mathcal{F}_L$ when $f$ is a finite linear combination of indicator functions of subsets of $X$, by linearity. One can also check that finite linear combinations of indicator functions of subsets of $X$ are dense in $\ell^\infty(X)$. We already know that the limit along an ultrafilter defines a continuous linear functional on $\ell^\infty(X)$, as in the previous section, and we would like to check that $L$ is also a continuous linear functional on $\ell^\infty(X)$. This would imply that $L(f)$ is equal to the limit of $f$ along $\mathcal{F}_L$ for every $f \in \ell^\infty(X)$, by continuity and density. Suppose that $f$ is a bounded function on $X$ such that $f(x) \ne 0$ for every $x \in X$ and $1/f$ is also bounded. Thus \begin{equation} L(f) \, L(1/f) = L({\bf 1}_X) = 1, \end{equation} and hence $L(f) \ne 0$ in particular. Equivalently, $0 \in \overline{f(X)}$ when $L(f) = 0$. This implies that \begin{equation} L(f) \in \overline{f(X)} \end{equation} for every $f \in \ell^\infty(X)$, since one can reduce to the case where $L(f) = 0$ by subtracting $L(f) \, {\bf 1}_X$ from $f$, using the fact that $L({\bf 1}_X) = 1$. In particular, \begin{equation} |L(f)| \le \|f\|_\infty \end{equation} for every $f \in \ell^\infty(X)$, which implies that $L$ is a continuous linear functional on $\ell^\infty(X)$ with dual norm equal to $1$, as desired. 
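As for the density of finite linear combinations of indicator functions used in this argument, let $f$ be a bounded real-valued function on $X$, and let $\epsilon > 0$ be given. If \begin{equation} A_j = \{x \in X : (j - 1) \, \epsilon \le f(x) < j \, \epsilon\} \end{equation} for each integer $j$, then only finitely many of the $A_j$'s are nonempty, because $f$ is bounded, and \begin{equation} g = \sum_j (j - 1) \, \epsilon \, {\bf 1}_{A_j} \end{equation} is a finite linear combination of indicator functions of subsets of $X$ that satisfies $\|f - g\|_\infty \le \epsilon$. The complex case can be handled by applying this to the real and imaginary parts of $f$. 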
\section{Bounded continuous functions} \label{bounded continuous functions} \setcounter{equation}{0} Let $X$ be a topological space, and let $C_b(X)$ be the space of bounded real or complex-valued continuous functions on $X$. As usual, this may also be denoted $C_b(X, {\bf R})$ or $C_b(X, {\bf C})$, to indicate whether real or complex-valued functions are being used. Of course, $C_b(X)$ is the same as $\ell^\infty(X)$ when $X$ is equipped with the discrete topology. If $X$ is compact, then continuous functions are automatically bounded on $X$. Constant functions on $X$ are always continuous, and the existence of nonconstant continuous functions on $X$ depends on the behavior of $X$. Remember that sums and products of continuous functions are continuous. Similarly, sums and products of bounded functions are bounded, so that sums and products of bounded continuous functions are bounded and continuous. It follows that $C_b(X)$ is a vector space with respect to pointwise addition and scalar multiplication, and a commutative algebra with respect to multiplication of functions. The supremum norm on $C_b(X)$ is defined by \begin{equation} \label{||f||_{sup} = sup_{x in X} |f(x)|} \|f\|_{sup} = \sup_{x \in X} |f(x)|, \end{equation} and it is easy to see that this is indeed a norm. Moreover, \begin{equation} \|f \, g\|_{sup} \le \|f\|_{sup} \, \|g\|_{sup} \end{equation} for every $f, g \in C_b(X)$. Suppose that $\phi$ is a linear functional on $C_b(X)$ which is also a homomorphism with respect to multiplication of functions, in the sense that \begin{equation} \phi(f \, g) = \phi(f) \, \phi(g) \end{equation} for every $f, g \in C_b(X)$. If $\phi(f) = 0$ for each $f \in C_b(X)$, then $\phi$ satisfies these conditions trivially, and so we also ask that $\phi(f) \ne 0$ for some $f \in C_b(X)$. As before, this implies that \begin{equation} \phi({\bf 1}_X) = 1, \end{equation} where ${\bf 1}_X$ is the constant function equal to $1$ at every point in $X$. Of course, \begin{equation} \phi_p(f) = f(p) \end{equation} has these properties for every $p \in X$. If $f$ is a bounded continuous function on $X$ such that $f(x) \ne 0$ for every $x \in X$, then $1/f$ is also a continuous function on $X$. If $1/f$ is bounded as well, then \begin{equation} \phi(f) \, \phi(1/f) = \phi({\bf 1}_X) = 1, \end{equation} which implies that $\phi(f) \ne 0$. If $f$ is any bounded continuous function on $X$ such that $\phi(f) = 0$, then it follows that $0 \in \overline{f(X)}$, since otherwise $1/f \in C_b(X)$. This implies that \begin{equation} \label{phi(f) in overline{f(X)}} \phi(f) \in \overline{f(X)} \end{equation} for every $f \in C_b(X)$, by applying the previous statement to $f - \phi(f) \, {\bf 1}_X$. In particular, \begin{equation} \label{|phi(f)| le ||f||_{sup}} |\phi(f)| \le \|f\|_{sup} \end{equation} for every $f \in C_b(X)$, so that $\phi$ is a continuous linear functional on $C_b(X)$. The dual norm of $\phi$ with respect to the supremum norm is equal to $1$, since $\phi({\bf 1}_X) = 1$. In the complex case, (\ref{phi(f) in overline{f(X)}}) implies that $\phi(f) \in {\bf R}$ when $f$ is real-valued. In both the real and complex cases, we get that \begin{equation} \label{phi(f) ge 0} \phi(f) \ge 0 \end{equation} for every bounded nonnegative real-valued continuous function $f$ on $X$. If $A \subseteq X$ is both open and closed, then the corresponding indicator function ${\bf 1}_A$ is continuous on $X$, and $\phi({\bf 1}_A)$ is either $0$ or $1$. 
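Note that a bounded continuous function $f$ on $X$ may satisfy $f(x) \ne 0$ for every $x \in X$ without $1/f$ being bounded. If $X = {\bf R}$ with the standard topology, for instance, then \begin{equation} f(x) = \frac{1}{1 + x^2} \end{equation} is a bounded continuous function on ${\bf R}$ such that $f(x) \ne 0$ for every $x \in {\bf R}$, while $1/f$ is continuous but not bounded on ${\bf R}$. In this case, $0 \in \overline{f({\bf R})}$, and the argument described earlier does not exclude the possibility that $\phi(f) = 0$. 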
Let $B^*$ be the closed unit ball in the dual of $C_b(X)$, with respect to the dual norm associated to the supremum norm on $C_b(X)$. Thus multiplicative homomorphisms on $C_b(X)$ are elements of $B^*$, because of (\ref{|phi(f)| le ||f||_{sup}}). It is easy to see that the set of multiplicative homomorphisms on $C_b(X)$ is closed with respect to the weak$^*$ topology, since $\phi(f)$, $\phi(g)$, and $\phi(f \, g)$ are continuous functions of $\phi \in C_b(X)^*$ with respect to the weak$^*$ topology for every $f, g \in C_b(X)$. The set of nonzero multiplicative homomorphisms on $C_b(X)$ is also closed in the weak$^*$ topology, since it can be described by the additional condition $\phi({\bf 1}_X) = 1$, and $\phi({\bf 1}_X)$ is a continuous function of $\phi$ with respect to the weak$^*$ topology. Hence the set of nonzero multiplicative homomorphisms on $C_b(X)$ is compact with respect to the weak$^*$ topology, because it is a closed subset of $B^*$, which is compact by the Banach--Alaoglu theorem. If $p \in X$, then $\phi_p(f) = f(p)$ is a nonzero multiplicative homomorphism on $C_b(X)$, as before. Thus $p \mapsto \phi_p$ defines a mapping from $X$ into $B^*$. It is easy to see that this mapping is continuous with respect to the weak$^*$ topology on $B^*$, since $\phi_p(f) = f(p)$ is continuous on $X$ for every $f \in C_b(X)$. If $X$ is equipped with the discrete topology, then $p \mapsto \phi_p$ is a one-to-one mapping of $X$ into $B^*$, and the topology induced on the set \begin{equation} \{\phi_p : p \in X\} \end{equation} by the weak$^*$ topology is the same as the discrete topology. If $X$ is infinite, then of course this set is not compact. Let $\phi \in B^*$ be a limit point of this set with respect to the weak$^*$ topology, which is therefore not in the set. If $f \in c(X)$, then one can check that $\phi(f)$ is equal to the limit of $f(x)$ at infinity on $X$. This is another way to get homomorphisms on $\ell^\infty(X)$ extending the limit at infinity on $c(X)$. \section{Compact spaces} \label{compact spaces} \setcounter{equation}{0} Let $X$ be a compact topological space, and let $C(X)$ be the space of continuous real or complex-valued functions on $X$. This may also be denoted $C(X, {\bf R})$ or $C(X, {\bf C})$, to indicate whether real or complex-valued functions are being used. As before, continuous functions on compact spaces are automatically bounded, so that $C(X) = C_b(X)$. Let $\phi$ be a nonzero multiplicative homomorphism on $C(X)$, as in the previous section. We would like to show that there is a $p \in X$ such that $\phi(f) = f(p)$ for every $f \in C(X)$. Suppose for the sake of a contradiction that for each $p \in X$ there is a continuous function $f_p$ on $X$ such that $\phi(f_p) \ne f_p(p)$. We may as well ask also that $\phi(f_p) = 0$, since otherwise we can replace $f_p$ with $f_p - \phi(f_p) \, {\bf 1}_X$, using the fact that $\phi({\bf 1}_X) = 1$. Thus $f_p(p) \ne 0$. Similarly, we may suppose that $f_p$ is a nonnegative real-valued function on $X$ for each $p \in X$, by replacing $f_p$ with $|f_p|^2$ if necessary. More precisely, in the real case, \begin{equation} \phi(|f_p|^2) = \phi(f_p^2) = \phi(f_p)^2 = 0, \end{equation} while in the complex case, \begin{equation} \phi(|f_p|^2) = \phi(f_p \overline{f_p}) = \phi(f_p) \, \phi(\overline{f_p}) = 0. \end{equation} Of course, we also get that $f_p(p) > 0$ after this substitution. Consider \begin{equation} U(p) = \{x \in X : f_p(x) > 0\}. 
\end{equation} This is an open set in $X$ for each $p \in X$, because $f_p$ is continuous, and $p \in U(p)$ by construction. Thus $U(p)$, $p \in X$, is an open covering of $X$, and so there are finitely many elements $p_1, \ldots, p_n$ of $X$ such that \begin{equation} X = \bigcup_{j = 1}^n U(p_j), \end{equation} by compactness. If $f = \sum_{j = 1}^n f_{p_j}$, then $f$ is continuous on $X$, $\phi(f) = 0$, and $f(x) > 0$ for every $x \in X$. This is a contradiction, because $1/f$ is also a continuous function on $X$, which implies that $\phi(f) \ne 0$, as in the previous section. \section{Closed ideals} \label{closed ideals} \setcounter{equation}{0} Let $X$ be a topological space, and let $C(X)$ be the space of continuous real or complex-valued functions on $X$. As usual, this is a vector space with respect to pointwise addition and scalar multiplication, and a commutative algebra with respect to pointwise multiplication of functions. A linear subspace $\mathcal{I}$ of $C(X)$ is said to be an \emph{ideal} if for every $a \in C(X)$ and $f \in \mathcal{I}$ we have that $a \, f \in \mathcal{I}$. In this section, we shall restrict our attention to compact Hausdorff spaces $X$, so that continuous functions on $X$ are automatically bounded. We shall also be especially interested in ideals that are closed subsets of $C(X)$ with respect to the supremum norm. If $E \subseteq X$, then let $\mathcal{I}_E$ be the collection of $f \in C(X)$ such that $f(x) = 0$ for every $x \in E$. It is easy to see that this is a closed ideal in $C(X)$, directly from the definitions. If $\overline{E}$ is the closure of $E$ in $X$, then \begin{equation} \label{mathcal{I}_{overline{E}} = mathcal{I}_E} \mathcal{I}_{\overline{E}} = \mathcal{I}_E, \end{equation} because any continuous function that vanishes on $E$ automatically vanishes on the closure of $E$ as well. Thus we may as well restrict our attention to closed subsets $E$ of $X$. We would like to show that any closed ideal $\mathcal{I}$ in $C(X)$ is of the form $\mathcal{I}_E$ for some closed set $E \subseteq X$. If $\mathcal{I}$ is any subset of $C(X)$, then \begin{equation} E = \{x \in X : f(x) = 0 \hbox{ for every } f \in \mathcal{I}\} \end{equation} is a closed set in $X$. This is because the set where a continuous function is equal to $0$ is a closed set, and $E$ is the same as the intersection of the zero sets associated to the elements of $\mathcal{I}$. By construction, \begin{equation} \mathcal{I} \subseteq \mathcal{I}_E. \end{equation} We would like to show that equality holds when $\mathcal{I}$ is a closed ideal in $C(X)$. If $\phi$ is a real or complex-valued function on $X$, then the \emph{support} of $\phi$ is denoted $\mathop{\rm supp} \phi$ and is defined to be the closure of the set of $x \in X$ such that $\phi(x) \ne 0$. Suppose that $\phi$ is a continuous function on $X$ whose support is contained in the complement of $E$ in $X$. If $p \in \mathop{\rm supp} \phi$, so that $p \not\in E$, then there is an $f_p \in \mathcal{I}$ such that $f_p(p) \ne 0$. Note that \begin{equation} U(p) = \{x \in X : f_p(x) \ne 0\} \end{equation} is an open set in $X$, because $f_p$ is continuous. Thus $U(p)$, $p \in \mathop{\rm supp} \phi$, is an open covering of $\mathop{\rm supp} \phi$ in $X$, since $p \in U(p)$ for each $p$. We also know that $\mathop{\rm supp} \phi$ is compact, because it is a closed set in a compact space. It follows that there are finitely many elements $p_1, \ldots, p_n$ of $\mathop{\rm supp} \phi$ such that \begin{equation} \mathop{\rm supp} \phi \subseteq \bigcup_{j = 1}^n U(p_j). 
\end{equation} Observe that \begin{equation} \sum_{l = 1}^n |f_{p_l}(x)|^2 > 0 \end{equation} for every $x \in \mathop{\rm supp} \phi$. Put \begin{equation} \psi(x) = \phi(x) \, \Big(\sum_{l = 1}^n |f_{p_l}(x)|^2\Big)^{-1}, \end{equation} which is interpreted as being $0$ when $x \not\in \mathop{\rm supp} \phi$. This is a continuous function on $X$, because it is equal to $0$ on a neighborhood of every $x \in X \backslash \mathop{\rm supp} \phi$, and because it is a quotient of continuous functions with nonzero denominator on a neighborhood of every $x \in \mathop{\rm supp} \phi$. In the real case, we have that \begin{equation} \phi(x) = \sum_{j = 1}^n (\psi(x) \, f_{p_j}(x)) \, f_{p_j}(x) \end{equation} for every $x \in X$, and in the complex case we have that \begin{equation} \phi(x) = \sum_{j = 1}^n (\psi(x) \, \overline{f_{p_j}(x)}) \, f_{p_j}(x), \end{equation} where $\overline{f_{p_j}(x)}$ is the complex conjugate of $f_{p_j}(x)$. This implies that $\phi \in \mathcal{I}$, since $f_{p_j} \in \mathcal{I}$ for each $j$, $\psi \, f_{p_j} \in C(X)$ in the real case and $\psi \, \overline{f_{p_j}} \in C(X)$ in the complex case, and $\mathcal{I}$ is an ideal. Now let $f$ be a continuous function on $X$ such that $f(x) = 0$ for every $x \in E$, and let $\epsilon > 0$ be given. Thus \begin{equation} K(\epsilon) = \{x \in X : |f(x)| \ge \epsilon\} \end{equation} is a closed set in $X$ contained in the complement of $E$. Let $V(\epsilon)$ be an open set in $X$ such that $K(\epsilon) \subseteq V(\epsilon)$ and $\overline{V(\epsilon)} \subseteq X \backslash E$. By Urysohn's lemma, there is a continuous real-valued function $\theta_\epsilon$ on $X$ such that $\theta_\epsilon(x) = 1$ for every $x \in K(\epsilon)$, $\theta_\epsilon(x) = 0$ when $x \not\in V(\epsilon)$, and $0 \le \theta_\epsilon(x) \le 1$ for every $x \in X$. In particular, the support of $\theta_\epsilon$ is contained in $\overline{V(\epsilon)}$, which is contained in the complement of $E$. Of course, the support of $\theta_\epsilon \, f$ is contained in the support of $\theta_\epsilon$. Hence \begin{equation} \theta_\epsilon \, f \in \mathcal{I} \end{equation} for each $\epsilon > 0$, by the discussion in the previous paragraph. Moreover, \begin{equation} |\theta_\epsilon(x) \, f(x) - f(x)| = (1 - \theta_\epsilon(x)) \, |f(x)| < \epsilon \end{equation} for every $x \in X$, because $1 - \theta_\epsilon(x) = 0$ when $x \in K(\epsilon)$, $|f(x)| < \epsilon$ when $x \in X \backslash K(\epsilon)$, and $0 \le \theta_\epsilon(x) \le 1$ for every $x \in X$. This implies that $\theta_\epsilon \, f \to f$ uniformly on $X$ as $\epsilon \to 0$. Thus $f \in \mathcal{I}$ when $\mathcal{I}$ is closed with respect to the supremum norm, since $\theta_\epsilon \, f \in \mathcal{I}$ for each $\epsilon > 0$. This shows that $\mathcal{I} = \mathcal{I}_E$ when $\mathcal{I}$ is a closed ideal in $C(X)$ and $E$ is associated to $\mathcal{I}$ as before. Let $E$ be any closed set in $X$, and consider the corresponding closed ideal $\mathcal{I}_E$. In particular, $\mathcal{I}_E$ is a linear subspace of $C(X)$, and the quotient space $C(X) / \mathcal{I}_E$ can be defined as a real or complex vector space, as appropriate. By standard arguments in abstract algebra, there is a natural operation of multiplication on the quotient, so that the quotient mapping from $C(X)$ onto $C(X) / \mathcal{I}_E$ is a multiplicative homomorphism, because $\mathcal{I}_E$ is an ideal in $C(X)$. 
If $E = \emptyset$, then $\mathcal{I}_E = C(X)$, and $C(X) / \mathcal{I}_E = \{0\}$, and so we suppose from now on that $E \ne \emptyset$. We also have a homomorphism $R_E : C(X) \to C(E)$, defined by sending a continuous function $f$ on $X$ to its restriction $R_E(f)$ to $E$. The kernel of this homomorphism is equal to $\mathcal{I}_E$, which leads to a one-to-one homomorphism $r_E : C(X) / \mathcal{I}_E \to C(E)$. By the Tietze extension theorem, every continuous function on $E$ has an extension to a continuous function on $X$. This says exactly that $R_E(C(X)) = C(E)$, and hence that $r_E$ maps $C(X) / \mathcal{I}_E$ onto $C(E)$. \section{Locally compact spaces} \label{locally compact spaces} \setcounter{equation}{0} Let $X$ be a locally compact Hausdorff topological space, and let $C(X)$ be the space of continuous real or complex-valued functions on $X$, as usual. If $K \subseteq X$ is nonempty and compact, then the corresponding \emph{supremum seminorm} is defined on $C(X)$ by \begin{equation} \|f\|_K = \sup_{x \in K} |f(x)|. \end{equation} Of course, every continuous function $f$ on $X$ is bounded on $K$, because $f(K)$ is a compact set in ${\bf R}$ or ${\bf C}$, as appropriate. It is easy to see that this is indeed a seminorm on $C(X)$, and that \begin{equation} \|f \, g\|_K \le \|f\|_K \, \|g\|_K \end{equation} for every $f, g \in C(X)$. It follows that multiplication of functions is continuous as a mapping from $C(X) \times C(X)$ into $C(X)$ with respect to the topology on $C(X)$ determined by the collection of supremum seminorms associated to nonempty compact subsets of $X$. If $X$ is compact, then we can take $X = K$, and simply use the supremum norm on $X$. Thus we shall focus on the case where $X$ is not compact in this section. Suppose that $X$ is \emph{$\sigma$-compact}, so that there is a sequence $K_1, K_2, \ldots$ of compact subsets of $X$ such that $X = \bigcup_{l = 1}^\infty K_l$. We may also ask that $K_l \ne \emptyset$ and $K_l \subseteq K_{l + 1}$ for each $l$, by replacing $K_l$ with the union of $K_1, \ldots, K_l$ if necessary. Moreover, we can enlarge these compact sets in such a way that $K_l$ is contained in the interior of $K_{l + 1}$ for each $l$. This uses the local compactness of $X$, to get that any compact set in $X$ is contained in the interior of another compact set. In particular, it follows that the union of the interiors of the $K_l$'s is all of $X$ under these conditions. If $H$ is any compact set in $X$, then the interiors of the $K_l$'s form an open covering of $H$, for which there is a finite subcovering. This implies that $H$ is contained in the interior of $K_l$ for some $l$, since the $K_l$'s are increasing. Hence $H \subseteq K_l$ for some $l$, which implies that the supremum seminorms associated to the $K_l$'s determine the same topology on $C(X)$ as the collection of supremum seminorms corresponding to all nonempty compact subsets of $X$. Therefore this topology on $C(X)$ is metrizable in this case. If $E \subseteq X$ and $\mathcal{I}_E$ is the collection of $f \in C(X)$ such that $f(x) = 0$ for every $x \in E$, then $\mathcal{I}_E$ is a closed ideal in $C(X)$, as in the previous section. We also have that $\mathcal{I}_{\overline{E}} = \mathcal{I}_E$, where $\overline{E}$ is the closure of $E$ in $X$. If $\mathcal{I}$ is any subset of $C(X)$ and $E$ is the set of $x \in X$ such that $f(x) = 0$ for every $f \in \mathcal{I}$, then $E$ is a closed set in $X$, and $\mathcal{I} \subseteq \mathcal{I}_E$. 
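Note that this inclusion may be proper when $\mathcal{I}$ is not closed. If $X = {\bf R}$ with the standard topology, for instance, then \begin{equation} \mathcal{I} = \{f \in C({\bf R}) : f = 0 \hbox{ on a neighborhood of } 0\} \end{equation} is an ideal in $C({\bf R})$, and the corresponding closed set $E$ is equal to $\{0\}$ in this case. However, $f(x) = x$ is an element of $\mathcal{I}_E$ that does not belong to $\mathcal{I}$, so that $\mathcal{I} \ne \mathcal{I}_E$. 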
We would like to show that $\mathcal{I} = \mathcal{I}_E$ when $\mathcal{I}$ is a closed ideal in $C(X)$, as before. Let $f$ be a continuous function on $X$ such that $f(x) = 0$ for every $x \in E$, and let us check that $f \in \mathcal{I}$. If $f$ has compact support contained in the complement of $E$, then one can show that $f \in \mathcal{I}$ in the same way as in the previous section. Otherwise, it suffices to show that $f$ can be approximated by continuous functions with compact support contained in $X \backslash E$ in the topology of $C(X)$. Let $K$ be a nonempty compact set in $X$, and let $\epsilon > 0$ be given. Thus \begin{equation} \label{K(epsilon) = {x in K : |f(x)| ge epsilon}} K(\epsilon) = \{x \in K : |f(x)| \ge \epsilon\} \end{equation} is a compact set in $X$, since it is the intersection of the compact set $K$ with the closed set where $|f(x)| \ge \epsilon$. Also, $K(\epsilon) \subseteq X \backslash E$, because $f = 0$ on $E$ by hypothesis. Let $V(\epsilon)$ be an open set in $X$ such that $K(\epsilon) \subseteq V(\epsilon)$, $\overline{V(\epsilon)}$ is compact, and $\overline{V(\epsilon)} \subseteq X \backslash E$. This is possible, because $X$ is locally compact and Hausdorff. By Urysohn's lemma, there is a continuous real-valued function $\theta_\epsilon$ on $X$ which satisfies $\theta_\epsilon(x) = 1$ when $x \in K(\epsilon)$, $\theta_\epsilon(x) = 0$ when $x \in X \backslash V(\epsilon)$, and $0 \le \theta_\epsilon \le 1$ on all of $X$. In particular, the support of $\theta_\epsilon$ is contained in $\overline{V(\epsilon)}$, which is a compact subset of $X \backslash E$. Hence $\theta_\epsilon \, f$ is a continuous function with compact support in $X \backslash E$, which implies that $\theta_\epsilon \, f \in \mathcal{I}$. We also have that \begin{equation} |\theta_\epsilon(x) \, f(x) - f(x)| = (1 - \theta_\epsilon(x)) \, |f(x)| < \epsilon \end{equation} for every $x \in K$, because $1 - \theta_\epsilon(x) = 0$ when $x \in K(\epsilon)$, $|f(x)| < \epsilon$ when $x \in K \backslash K(\epsilon)$, and $0 \le \theta_\epsilon(x) \le 1$ for every $x \in X$. This shows that $f$ can be approximated by elements of $\mathcal{I}$ in the topology of $C(X)$, which implies that $f \in \mathcal{I}$, as desired, since $\mathcal{I}$ is supposed to be closed in $C(X)$. \section{Locally compact spaces, continued} \label{locally compact spaces, continued} \setcounter{equation}{0} Let $X$ be a locally compact Hausdorff space, and let $C_{com}(X)$ be the space of continuous real or complex-valued functions on $X$ with compact support. As usual, this may be denoted $C_{com}(X, {\bf R})$ or $C_{com}(X, {\bf C})$, to indicate whether real or complex-valued functions are being used. If $K \subseteq X$ is compact, then there is an open set $V$ in $X$ such that $K \subseteq V$ and $\overline{V}$ is compact, because $X$ is locally compact. Urysohn's lemma implies that there is a continuous real-valued function $\theta$ on $X$ such that $\theta(x) = 1$ when $x \in K$, $\theta(x)= 0$ when $x \in X \backslash V$, and $0 \le \theta(x) \le 1$ for every $x \in X$. Thus the support of $\theta$ is contained in $\overline{V}$, and hence is compact. If $f$ is any continuous function on $X$, then $\theta \, f$ is a continuous function on $X$ with compact support that is equal to $f$ on $K$. In particular, this implies that $C_{com}(X)$ is dense in $C(X)$ with respect to the topology determined by supremum seminorms associated to nonempty compact subsets of $X$. 
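If $X = {\bf R}$ with the standard topology, for instance, then one can take \begin{equation} \theta_n(x) = \max(0, \min(1, n + 1 - |x|)) \end{equation} for each positive integer $n$, which defines a continuous real-valued function on ${\bf R}$ equal to $1$ on $[-n, n]$ and equal to $0$ outside of $[-n - 1, n + 1]$. If $f$ is any continuous function on ${\bf R}$, then $\theta_n \, f$ has compact support for each $n$, and $\theta_n \, f = f$ on $[-n, n]$, so that $\theta_n \, f \to f$ with respect to this topology as $n \to \infty$. 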
Suppose that $\lambda$ is a continuous linear functional on $C(X)$ with respect to this topology. This implies that there is a nonempty compact set $K \subseteq X$ and a nonnegative real number $A$ such that \begin{equation} \label{|lambda(f)| le A ||f||_K} |\lambda(f)| \le A \, \|f\|_K \end{equation} for every $f \in C(X)$. In this context, it is not necessary to take the maximum of finitely many seminorms on the right side of this inequality, because the union of finitely many compact subsets of $X$ is also compact. Note that $\lambda(f) = 0$ when $f(x) = 0$ for every $x \in K$, so that $\lambda(f)$ depends only on the restriction of $f$ to $K$. Let $X^*$ be the one-point compactification of $X$. Thus $X^*$ is a compact Hausdorff space consisting of the elements of $X$ and an additional element ``at infinity'', for which the induced topology on $X$ as a subset of $X^*$ is the same as its given topology. By construction, a set $K \subseteq X$ is closed as a subset of $X^*$ if and only if it is compact in $X$. In this case, Tietze's extension theorem implies that every continuous function on $K$ can be extended to a continuous function on $X^*$, and to a continuous function on $X$ in particular. If $X$ is already compact, then one can simply use $X$ instead of $X^*$. If $\lambda$ is a continuous linear functional on $C(X)$ that satisfies (\ref{|lambda(f)| le A ||f||_K}), then it follows that $\lambda$ corresponds to a continuous linear functional $\lambda_K$ on $C(K)$ in a natural way. More precisely, if $g$ is a continuous function on $K$, then there is a continuous function $f$ on $X$ such that $f = g$ on $K$, and we put \begin{equation} \lambda_K(g) = \lambda(f). \end{equation} This does not depend on the particular extension $f$ of $g$, by the earlier remarks. By construction, $\lambda_K$ satisfies the same continuity condition on $C(K)$ as $\lambda$ does on $C(X)$, with the same constant $A$. Let $C_0(X)$ be the space of continuous functions $f$ on $X$ that vanish at infinity. This means that for every $\epsilon > 0$ there is a compact set $K_\epsilon \subseteq X$ such that \begin{equation} \label{|f(x)| < epsilon} |f(x)| < \epsilon \end{equation} for every $x \in X \backslash K_\epsilon$. This space may also be denoted $C_0(X, {\bf R})$ or $C_0(X, {\bf C})$, to indicate whether real or complex-valued functions are being used. If $X$ is compact, then one can take $K_\epsilon = X$ for each $\epsilon$, and $C_0(X) = C(X)$. If $X$ is not compact, then $f \in C_0(X)$ if and only if $f$ has a continuous extension to the one-point compactification $X^*$ of $X$ which is equal to $0$ at the point at infinity. Note that continuous functions on $X$ that vanish at infinity are automatically bounded, so that $C_0(X) \subseteq C_b(X)$. It is not difficult to check that $C_0(X)$ is a closed linear subspace of $C_b(X)$, with respect to the supremum norm. Of course, continuous functions with compact support automatically vanish at infinity, so that $C_{com}(X) \subseteq C_0(X)$. One can also check that $C_0(X)$ is the same as the closure of $C_{com}(X)$ in $C_b(X)$ with respect to the supremum norm, using functions $\theta$ as before. This is all trivial when $X$ is compact, in which case these spaces are all the same as $C(X)$. Suppose that $X$ is not compact, let $f$ be a real or complex-valued continuous function on $X$, and let $a$ be a real or complex number, as appropriate. 
We say that $f(x) \to a$ as $x \to \infty$ in $X$ if for each $\epsilon > 0$ there is a compact set $K_\epsilon \subseteq X$ such that \begin{equation} |f(x) - a| < \epsilon \end{equation} for every $x \in X \backslash K_\epsilon$. Thus $f$ vanishes at infinity if and only if this holds with $a = 0$. If $a$ is any real or complex number, then $f(x) \to a$ as $x \to \infty$ in $X$ if and only if $f(x) - a$ vanishes at infinity. It is easy to see that the limit $a$ is unique when it exists. Similarly, $f(x) \to a$ as $x \to \infty$ in $X$ if and only if $f$ has a continuous extension to the one-point compactification $X^*$ of $X$ which is equal to $a$ at the point at infinity. Note that $f$ is bounded when $f$ has a limit at infinity. One can also check that the collection of continuous functions on $X$ which have a limit at infinity is a closed linear subspace of $C_b(X)$ with respect to the supremum norm. \section{$\sigma$-Compactness} \label{sigma-compactness} \setcounter{equation}{0} Let $X$ be a topological space, and let $\{U_\alpha\}_{\alpha \in A}$ be a collection of open subsets of $X$ such that $\bigcup_{\alpha \in A} U_\alpha = X$, which is to say an open covering of $X$. Suppose that $X$ is $\sigma$-compact, so that there is a sequence $K_1, K_2, \ldots$ of compact subsets of $X$ such that $X = \bigcup_{l = 1}^\infty K_l$. Because $\{U_\alpha\}_{\alpha \in A}$ is an open covering of $K_l$ for each $l$ and $K_l$ is compact, there is a finite set of indices $A_l \subseteq A$ such that $K_l \subseteq \bigcup_{\alpha \in A_l} U_\alpha$. If $B = \bigcup_{l = 1}^\infty A_l$, then $B$ has only finitely or countably many elements, and $\bigcup_{\alpha \in B} U_\alpha = X$. Conversely, if $X$ is locally compact and every open covering of $X$ can be reduced to a subcovering with only finitely or countably many elements, then $X$ is $\sigma$-compact. This follows by using local compactness to cover $X$ by open sets that are contained in compact sets. In particular, if $X$ is locally compact and there is a base for the topology of $X$ with only finitely or countably many elements, then $X$ is $\sigma$-compact, since every open covering of $X$ can be reduced to a subcovering with only finitely or countably many elements in this case. Suppose that the topology on $X$ is determined by a metric. It is well known that there is a base for the topology of $X$ with only finitely or countably many elements if and only if $X$ is separable, in the sense that there is a dense set in $X$ with only finitely or countably many elements. Compact metric spaces are separable, and it follows that $X$ is separable when $X$ is $\sigma$-compact. Urysohn's famous metrization theorem states that a regular Hausdorff topological space is metrizable when there is a countable base for its topology. Note that locally compact Hausdorff spaces are automatically regular. Suppose now that $X$ is a locally compact Hausdorff topological space which is $\sigma$-compact. As before, this implies that there is a sequence $K_1, K_2, \ldots$ of compact subsets of $X$ such that $X = \bigcup_{l = 1}^\infty K_l$ and $K_l$ is contained in the interior of $K_{l + 1}$ for each $l$. By Urysohn's lemma, there is a continuous real-valued function $\theta_l$ on $X$ for each positive integer $l$ such that $\theta_l(x) > 0$ when $x \in K_l$, $0 \le \theta_l(x) \le 1$ for every $x \in X$, and the support of $\theta_l$ is contained in $K_{l + 1}$.
Let $a_1, a_2, \ldots$ be a sequence of positive real numbers such that $\sum_{l = 1}^\infty a_l$ converges, and consider \begin{equation} f(x) = \sum_{l = 1}^\infty a_l \, \theta_l(x). \end{equation} This series converges everywhere on $X$, by the comparison test. The partial sums of this series converge uniformly on $X$, as in Weierstrass' $M$-test. Thus $f$ is a continuous function on $X$, which also vanishes at infinity, because $\theta_l$ has compact support for each $l$. Moreover, $f(x) > 0$ for every $x \in X$, by construction. \section{Homomorphisms, revisited} \label{homomorphisms, revisited} \setcounter{equation}{0} Let $X$ be a locally compact Hausdorff topological space, and let $C(X)$ be the algebra of real or complex-valued continuous functions on $X$. Also let $\phi$ be a linear functional on $C(X)$ which is a homomorphism with respect to multiplication. If $\phi(f) \ne 0$ for some $f \in C(X)$, then it follows that $\phi({\bf 1}_X) = 1$, where ${\bf 1}_X$ is the constant function equal to $1$ on $X$, as before. Let us suppose from now on that this is the case. If $f$ is a continuous function on $X$ such that $f(x) \ne 0$ for every $x \in X$, then $1/f$ defines a continuous function on $X$ as well. This implies that $\phi(f) \ne 0$, since \begin{equation} \phi(f) \, \phi(1/f) = \phi({\bf 1}_X) = 1. \end{equation} If $f$ is any continuous function on $X$ and $c$ is a real or complex number, as appropriate, such that $c \not\in f(X)$, then $g = f - c \, {\bf 1}_X$ is a continuous function on $X$ such that $g(x) \ne 0$ for every $x \in X$, so that $\phi(g) \ne 0$. Thus $\phi(f) \ne c$, and hence \begin{equation} \phi(f) \in f(X). \end{equation} In particular, if $C(X)$ is the algebra of complex-valued continuous functions on $X$, and $f$ happens to be real-valued, then it follows that $\phi(f) \in {\bf R}$. Suppose now that $\phi$ is continuous with respect to the topology on $C(X)$ determined by the supremum seminorms corresponding to nonempty compact subsets of $X$. This means that there is a nonempty compact set $K \subseteq X$ and a nonnegative real number $A$ such that \begin{equation} |\phi(f)| \le A \, \|f\|_K \end{equation} for every $f \in C(X)$, as in Section \ref{locally compact spaces, continued}. In particular, $\phi(f) = 0$ when $f(x) = 0$ for every $x \in K$, so that $\phi(f)$ depends only on the restriction of $f$ to $K$. As in Section \ref{locally compact spaces, continued} again, every continuous real or complex-valued function on $K$ has a continuous extension to $X$, so that $\phi$ determines a continuous linear functional $\phi_K$ on $C(K)$. It is easy to see that $\phi_K$ is also a homomorphism with respect to multiplication on $C(K)$. Hence there is a $p \in K$ such that $\phi_K(f) = f(p)$ for every $f \in C(K)$, as in Section \ref{compact spaces}. This implies that \begin{equation} \phi(f) = f(p) \end{equation} for every $f \in C(X)$. Alternatively, consider \begin{equation} \mathcal{I}_\phi = \{f \in C(X) : \phi(f) = 0\}. \end{equation} It is easy to see that this is a closed ideal in $C(X)$ when $\phi$ is a continuous homomorphism on $C(X)$. As in Section \ref{locally compact spaces}, there is a closed set $E \subseteq X$ such that $\mathcal{I}_\phi = \mathcal{I}_E$, where $\mathcal{I}_E$ consists of $f \in C(X)$ such that $f(x) = 0$ for every $x \in E$. Note that $\mathcal{I}_\phi$ has codimension $1$ as a linear subspace of $C(X)$, since it is the same as the kernel of the nonzero linear functional $\phi$.
Using this, one can check that $E$ has exactly one element, which may be denoted $p$. Thus $\phi(f) = 0$ for every $f \in C(X)$ such that $f(p) = 0$. If $f$ is any continuous function on $X$, then $f - f(p) \, {\bf 1}_X$ is equal to $0$ at $p$, and hence $\phi(f - f(p) \, {\bf 1}_X) = 0$. This implies that $\phi(f) = f(p)$ for every $f \in C(X)$, since $\phi({\bf 1}_X) = 1$. Remember that the same conclusion holds for every nonzero homomorphism $\phi$ on $C(X)$ when $X$ is compact, without the additional hypothesis of continuity, as in Section \ref{compact spaces}. Suppose now that $X$ is a locally compact Hausdorff space which is not compact but $\sigma$-compact, and that $\phi$ is a nonzero homomorphism on $C(X)$. Let $X^*$ be the one-point compactification of $X$, and note that the space $C(X^*)$ of continuous functions on $X^*$ can be identified with the subalgebra of $C(X)$ consisting of functions with a limit at infinity, as in Section \ref{locally compact spaces, continued}. The restriction of $\phi$ to this subalgebra determines a homomorphism on $C(X^*)$, which is nonzero because it sends constant functions to their constant values. It follows that there is a $p \in X^*$ such that $\phi(f) = f(p)$ when $f \in C(X)$ has a limit at infinity, as in Section \ref{compact spaces}. If $p$ is the point at infinity in $X^*$, then $f(p)$ refers to the limit of $f$ at infinity on $X$. Let us check that $p$ cannot be the point at infinity in $X^*$ when $X$ is $\sigma$-compact. In this case, there is a continuous real-valued function $f$ on $X$ that vanishes at infinity such that $f(x) > 0$ for every $x \in X$, as in the previous section. Because $\phi$ is defined on all of $C(X)$, we also have that $\phi(f) \ne 0$, as discussed at the beginning of the section. If $p$ were the point at infinity, then we would have that $\phi(f) = 0$, since $f \in C_0(X)$. Thus $p \in X^*$ is not the point at infinity, which means that $p \in X$. If $g$ is any bounded continuous function on $X$, then $f \, g \in C_0(X)$, which implies that \begin{equation} \phi(f \, g) = f(p) \, g(p), \end{equation} and so \begin{equation} \phi(f) \, \phi(g) = f(p) \, g(p), \end{equation} because $\phi$ is a homomorphism on $C(X)$. Since $f$ has a limit at infinity, $\phi(f) = f(p)$, and $f(p) > 0$, so that $\phi(g) = g(p)$ for every bounded continuous function $g$ on $X$. If $h$ is any continuous function on $X$ and $\epsilon > 0$, then \begin{equation} h_\epsilon = \frac{h}{1 + \epsilon \, |h|^2} \end{equation} is a bounded continuous function on $X$, and so $\phi(h_\epsilon) = h_\epsilon(p)$. One can also check that \begin{equation} \phi(h_\epsilon) = \frac{\phi(h)}{1 + \epsilon \, |\phi(h)|^2} \end{equation} for every $\epsilon > 0$, because $\phi$ is a homomorphism. Hence \begin{equation} \frac{\phi(h)}{1 + \epsilon \, |\phi(h)|^2} = \frac{h(p)}{1 + \epsilon \, |h(p)|^2} \end{equation} for every $\epsilon > 0$, which implies that $\phi(h) = h(p)$ for every $h \in C(X)$. \section{$\sigma$-Compactness, continued} \label{sigma-compactness, continued} \setcounter{equation}{0} Let $X$ be a locally compact Hausdorff topological space which is $\sigma$-compact, and let $K_1, K_2, \ldots$ be a sequence of compact subsets of $X$ such that $X = \bigcup_{l = 1}^\infty K_l$ and $K_l$ is contained in the interior of $K_{l + 1}$ for each $l$.
By Urysohn's lemma, there is a continuous real-valued function $\theta_l$ on $X$ for each positive integer $l$ such that $\theta_l(x) = 1$ for every $x$ in a neighborhood of $K_l$, $0 \le \theta_l(x) \le 1$ for every $x \in X$, and the support of $\theta_l$ is contained in $K_{l + 1}$. In particular, $\theta_l(x) \le \theta_{l + 1}(x)$ for each $x \in X$ and $l \ge 1$. It will be convenient to also put $K_0 = \emptyset$ and $\theta_0 = 0$. Let $b_1, b_2, \ldots$ be a sequence of nonnegative real numbers, and consider \begin{equation} \label{B(x) = ...} B(x) = b_1 \, \theta_1(x) + \sum_{l = 2}^\infty b_l \, (\theta_l(x) - \theta_{l - 2}(x)). \end{equation} Note that $\theta_l(x) - \theta_{l - 2}(x) = 0$ for every $x$ in a neighborhood of $K_{l - 2}$, and when $x \in X \backslash K_{l + 1}$, for $l \ge 2$. This implies that at most three terms on the right side of (\ref{B(x) = ...}) are different from $0$ for any $x \in X$, and more precisely that every $x \in X$ has a neighborhood on which at most three terms on the right side of (\ref{B(x) = ...}) are different from $0$, so that $B$ is continuous on $X$. We also have that \begin{equation} B(x) \ge b_1 \, \theta_1(x) \ge b_1 \end{equation} when $x \in K_1$, and \begin{equation} B(x) \ge b_l \, (\theta_l(x) - \theta_{l - 2}(x)) \ge b_l \end{equation} when $x \in K_l \backslash K_{l - 1}$, $l \ge 2$. Suppose that $E$ is a bounded subset of the space $C(X)$ of real or complex-valued continuous functions on $X$ with respect to the collection of supremum seminorms associated to nonempty compact subsets of $X$. Thus the elements of $E$ are uniformly bounded on compact subsets of $X$, and so for each positive integer $l$ there is a nonnegative real number $b_l$ such that \begin{equation} |f(x)| \le b_l \end{equation} for every $f \in E$ and $x \in K_l$. This implies that \begin{equation} \label{|f(x)| le B(x)} |f(x)| \le B(x) \end{equation} for every $f \in E$ and $x \in X$, where $B$ is as in the previous paragraph. Now let $\phi$ be a linear functional on $C(X)$ which is a homomorphism with respect to multiplication, and which satisfies $\phi(f) \ne 0$ for some $f \in C(X)$. As in the previous section, $\phi(f) \in f(X)$ for every $f \in C(X)$, and in particular $\phi(f) \ge 0$ when $f$ is a nonnegative real-valued continuous function on $X$. If $f \in C(X)$ satisfies (\ref{|f(x)| le B(x)}), then it follows that \begin{equation} \label{|phi(f)| le phi(B)} |\phi(f)| \le \phi(B). \end{equation} More precisely, if $f$ is real-valued, then $B(x) \pm f(x) \ge 0$ for each $x \in X$, and so \begin{equation} \phi(B) \pm \phi(f) = \phi(B \pm f) \ge 0. \end{equation} Similarly, if $f$ is complex-valued, then one can use the fact that $\mathop{\rm Re} \alpha \, f(x) \le B(x)$ for each $x \in X$ and $\alpha \in {\bf C}$ with $|\alpha| = 1$ to get that \begin{equation} \mathop{\rm Re} \alpha \, \phi(f) = \phi(\mathop{\rm Re} \alpha \, f) \le \phi(B), \end{equation} which implies (\ref{|phi(f)| le phi(B)}). This shows that $\phi$ is uniformly bounded on every bounded set $E \subseteq C(X)$ with respect to the collection of supremum seminorms associated to nonempty compact subsets of $X$. There is also a countable local base for the topology at $0$ in $C(X)$ with respect to this collection of seminorms, because $X$ is $\sigma$-compact. It follows that $\phi$ is continuous with respect to this topology on $C(X)$, by the result discussed in Section \ref{bounded sequences}.
This gives another way to show that there is a point $p \in X$ such that $\phi(f) = f(p)$ for every $f \in C(X)$, by reducing to the case of continuous homomorphisms, as in the previous section. \section{Holomorphic functions} \label{holomorphic functions} \setcounter{equation}{0} Let $U$ be a nonempty open set in the complex plane ${\bf C}$, and let $C(U)$ be the algebra of continuous complex-valued functions on $U$. Of course, $U$ is locally compact with respect to the topology inherited from the standard topology on ${\bf C}$, and it is also $\sigma$-compact, because it is a separable metric space, and hence has a countable base for its topology. As usual, $C(U)$ gets a nice topology from the collection of supremum seminorms associated to nonempty compact subsets of $U$. Remember that a complex-valued function $f(z)$ on $U$ is said to be complex-analytic or holomorphic if the complex derivative \begin{equation} \label{f'(z) = lim_{h to 0} frac{f(z + h) - f(z)}{h}} f'(z) = \lim_{h \to 0} \frac{f(z + h) - f(z)}{h} \end{equation} exists at every point $z$ in $U$. In particular, the existence of the limit implies that $f$ is continuous, so that the space $\mathcal{H}(U)$ of holomorphic functions on $U$ is contained in $C(U)$. More precisely, $\mathcal{H}(U)$ is a linear subspace of $C(U)$, which is actually a subalgebra, because the product of two holomorphic functions is holomorphic as well. Note that constant functions on $U$ are automatically holomorphic, since they have derivative equal to $0$ at every point. It is well known that $\mathcal{H}(U)$ is closed in $C(U)$, with respect to the topology determined by the collection of supremum seminorms associated to nonempty compact subsets of $U$. This is equivalent to the statement that if $\{f_j\}_{j = 1}^\infty$ is a sequence of holomorphic functions on $U$ that converges uniformly on compact subsets of $U$ to a function $f$ on $U$, then $f$ is also holomorphic on $U$. To see this, one can use the Cauchy integral formula to show that the sequence of derivatives $\{f'_j\}_{j = 1}^\infty$ converges uniformly on compact subsets of $U$, and that the limit is equal to the derivative $f'$ of $f$. Let $\phi$ be a linear functional on $\mathcal{H}(U)$ which is a homomorphism with respect to multiplication. As before, if $\phi(f) \ne 0$ for some $f \in \mathcal{H}(U)$, then $\phi({\bf 1}_U) = 1$, where ${\bf 1}_U$ is the constant function on $U$ equal to $1$. Let us suppose from now on that this is the case. If $f$ is a holomorphic function on $U$ such that $f(z) \ne 0$ for every $z \in U$, then it is well known that $1/f$ is holomorphic on $U$ too. This implies that \begin{equation} \phi(f) \, \phi(1/f) = \phi({\bf 1}_U) = 1, \end{equation} and hence $\phi(f) \ne 0$. If $c$ is a complex number such that $c \not\in f(U)$, then we can apply this to $f - c \, {\bf 1}_U$ to get that $\phi(f) \ne c$. Thus $\phi(f) \in f(U)$, as in the context of continuous functions. In particular, this holds when $f(z) = z$ for every $z \in U$, which is holomorphic with derivative equal to $1$ at every point. If $\phi(f)$ is denoted $p$ when $f(z) = z$ for every $z \in U$, then it follows that $p \in U$. We would like to show that \begin{equation} \phi(g) = g(p) \end{equation} for every $g \in \mathcal{H}(U)$. If $g(p) = 0$, then $g$ can be expressed as \begin{equation} g(z) = (z - p) \, h(z) \end{equation} for some $h \in \mathcal{H}(U)$, by standard results in complex analysis. 
This implies that $\phi(g) = 0$, by the definition of $p$ and the fact that $\phi$ is a homomorphism. If $g(p) \ne 0$, then one can reduce to the case where $g(p) = 0$ by subtracting a constant from $g$. \section{The disk algebra} \label{disk algebra} \setcounter{equation}{0} Let $U$ be the open unit disk in the complex plane ${\bf C}$, \begin{equation} U = \{z \in {\bf C} : |z| < 1\}. \end{equation} Thus the closure $\overline{U}$ of $U$ is the closed unit disk, \begin{equation} \overline{U} = \{z \in {\bf C} : |z| \le 1\}, \end{equation} and the boundary $\partial U$ of $U$ is the same as the unit circle, \begin{equation} \partial U = \{z \in {\bf C} : |z| = 1\}. \end{equation} Let $C(\overline{U})$ be the algebra of continuous complex-valued functions on $\overline{U}$, equipped with the supremum norm. Let $\mathcal{A}$ be the collection of $f \in C(\overline{U})$ such that the restriction of $f$ to $U$ is holomorphic. Thus $\mathcal{A}$ is a subalgebra of $C(\overline{U})$, since sums and products of holomorphic functions are also holomorphic, which is known as the \emph{disk algebra}. Note that constant functions on $\overline{U}$ are elements of $\mathcal{A}$, and that $\mathcal{A}$ is a closed set in $C(\overline{U})$ with respect to the supremum norm, for the same reasons as in the previous section. If $f \in \mathcal{A}$ and $f(z) \ne 0$ for every $z \in \overline{U}$, then $1/f$ is continuous on $\overline{U}$ and holomorphic on $U$, and hence is in $\mathcal{A}$ too. If $f \in C(\overline{U})$ and $0 \le r < 1$, then \begin{equation} f_r(z) = f(r \, z) \end{equation} is an element of $C(\overline{U})$ as well. Note that $f$ is automatically uniformly continuous on $\overline{U}$, because $f$ is continuous on $\overline{U}$ and $\overline{U}$ is a compact set in a metric space. Using this, it is easy to see that $f_r \to f$ uniformly on $\overline{U}$ as $r \to 1$. If $f$ is a holomorphic function on the open unit disk $U$, then \begin{equation} f(z) = \sum_{j = 0}^\infty a_j \, z^j \end{equation} for some complex numbers $a_0, a_1, \ldots$ and every $z \in U$. More precisely, $z^j$ is interpreted as being equal to $1$ for every $z$ when $j = 0$, and the convergence of the series when $|z| < 1$ is part of the conclusion. The series actually converges absolutely for every $z \in U$, and the partial sums converge uniformly on compact subsets of $U$. If $0 \le r < 1$, then \begin{equation} f_r(z) = f(r \, z) = \sum_{j = 0}^\infty a_j r^j \, z^j \end{equation} for every $z \in \overline{U}$. Under these conditions, the series converges absolutely when $|z| \le 1$, and the partial sums converge uniformly on $\overline{U}$, by the remarks in the previous paragraph. If $f \in \mathcal{A}$, then $f$ can be approximated uniformly by $f_r$ as $r \to 1$, and $f_r$ is approximated uniformly by partial sums of its series expansion for each $r < 1$. It follows that $f$ can be approximated uniformly by polynomials in $z$ on $\overline{U}$ when $f \in \mathcal{A}$. Let $\phi$ be a linear functional on $\mathcal{A}$ which is a homomorphism with respect to multiplication. As usual, we suppose that $\phi(f) \ne 0$ for some $f \in \mathcal{A}$, so that $\phi$ sends constant functions on $\overline{U}$ to their constant values. If $f \in \mathcal{A}$ and $f(z) \ne 0$ for every $z \in \overline{U}$, then $1/f \in \mathcal{A}$, and we get that $\phi(f) \ne 0$. This implies that \begin{equation} \phi(f) \in f(\overline{U}) \end{equation} for every $f \in \mathcal{A}$, as before. 
In particular, \begin{equation} \label{|phi(f)| le sup_{|z| le 1} |f(z)|} |\phi(f)| \le \sup_{|z| \le 1} |f(z)| \end{equation} for every $f \in \mathcal{A}$, so that $\phi$ is continuous with respect to the supremum norm on $\mathcal{A}$. Of course, $f(z) = z$ defines an element of $\mathcal{A}$, and we can put $\phi(f) = p$ for this choice of $f$. Note that $p \in \overline{U}$, by the previous remarks. If $g$ is a polynomial in $z$, then \begin{equation} \phi(g) = g(p), \end{equation} because $\phi$ is a homomorphism. This also works for every $g \in \mathcal{A}$, because polynomials are dense in $\mathcal{A}$ with respect to the supremum norm, and because $\phi$ is continuous on $\mathcal{A}$ with respect to the supremum norm. If $f \in \mathcal{A}$, then \begin{equation} \sup_{|z| = 1} |f(z)| = \sup_{|z| \le 1} |f(z)|, \end{equation} by the maximum modulus principle. In particular, if $f(z) = 0$ for every $z \in \partial U$, then $f(z) = 0$ for every $z \in \overline{U}$. This implies that $f$ is determined on the closed disk $\overline{U}$ by its restriction to the unit circle $\partial U$. Using this, one can identify the disk algebra with a closed subalgebra of the algebra of continuous complex-valued functions on the unit circle. \section{Bounded holomorphic functions} \label{bounded holomorphic functions} \setcounter{equation}{0} Let $U$ be the open unit disk in the complex plane again, and let $C_b(U)$ be the algebra of bounded continuous complex-valued functions on $U$, equipped with the supremum norm. Also let $\mathcal{B}$ be the collection of bounded holomorphic functions on $U$, which is the same as the intersection of $C_b(U)$ with $\mathcal{H}(U)$. As usual, this is a closed subalgebra of $C_b(U)$ with respect to the supremum norm. Let $\phi$ be a linear functional on $\mathcal{B}$ which is a homomorphism with respect to multiplication. Suppose also that $\phi(f) \ne 0$ for some $f \in \mathcal{B}$, which implies that $\phi$ sends constant functions on $U$ to their constant values. If $f \in \mathcal{B}$ and $|f(z)| \ge \delta$ for some $\delta > 0$ and every $z \in U$, then $1/f$ is also a bounded holomorphic function on $U$, and it follows that $\phi(f) \ne 0$, because $\phi(f) \, \phi(1/f) = 1$. This implies that \begin{equation} \phi(f) \in \overline{f(U)} \end{equation} for every $f \in \mathcal{B}$, as in the previous situations, and hence that \begin{equation} |\phi(f)| \le \sup_{|z| < 1} |f(z)|. \end{equation} Thus $\phi$ is a continuous linear functional on $\mathcal{B}$ with respect to the supremum norm, with dual norm equal to $1$, since $\phi$ sends constants to themselves. Each element $p$ of $U$ determines a nonzero homomorphism $\phi_p$ on $\mathcal{B}$, given by evaluation at $p$, or \begin{equation} \phi_p(f) = f(p). \end{equation} The collection of nonzero homomorphisms on $\mathcal{B}$ is contained in the unit ball of the dual of $\mathcal{B}$ with respect to the supremum norm, as in the previous paragraph, and it is also a closed set with respect to the weak$^*$ topology, as in Section \ref{bounded continuous functions}. Hence the collection of nonzero homomorphisms on $\mathcal{B}$ is compact with respect to the weak$^*$ topology on the dual of $\mathcal{B}$, by the Banach--Alaoglu theorem. Of course, the restriction of any nonzero homomorphism on $C_b(U)$ is a nonzero homomorphism on $\mathcal{B}$, which includes evaluation at elements of $U$. Suppose that $z_1, z_2, \ldots$ is a sequence of elements of $U$ such that $|z_j| \to 1$ as $j \to \infty$. 
Also let $L$ be a nonzero homomorphism on $\ell^\infty({\bf Z}_+)$ which is equal to $0$ on $c_0({\bf Z}_+)$. This determines a nonzero homomorphism on $C_b(U)$, by applying $L$ to $f(z_j)$ as a bounded function on ${\bf Z}_+$ for each $f \in C_b(U)$. If $w_1, w_2, \ldots$ is another sequence of elements of $U$ such that $|w_j| \to 1$ as $j \to \infty$, then we can apply $L$ to $f(w_j)$ to get another homomorphism on $C_b(U)$. If $z_j \ne w_l$ for every $j, l \ge 1$, then it is easy to see that these are distinct homomorphisms on $C_b(U)$, because one can choose a bounded continuous function $f$ on $U$ such that $f(z_j) = 0$ and $f(w_l) = 1$ for each $j$, $l$. If $f$ is a bounded holomorphic function on $U$, then one can check that there is a $C \ge 0$ such that \begin{equation} \sup_{|z| < 1} (1 - |z|) \, |f'(z)| \le C \, \sup_{|z| < 1} |f(z)|. \end{equation} This follows from the Cauchy integral formula for $f'(z)$ applied to the disk centered at $z$ with radius $(1 - |z|) / 2$, for instance. Suppose that $z_j$, $w_l$ are as before, and satisfy the additional property that \begin{equation} \lim_{j \to \infty} \frac{|z_j - w_j|}{(1 - |z_j|)} = 0. \end{equation} If $f$ is a bounded holomorphic function on $U$, then \begin{equation} \lim_{j \to \infty} (f(z_j) - f(w_j)) = 0. \end{equation} This follows from the fact that $(1 - |z|) |f'(z)|$ is bounded on $U$, as in the previous paragraph. If $L$ is a nonzero homomorphism on $\ell^\infty({\bf Z}_+)$ that vanishes on $c_0({\bf Z}_+)$, then $L$ applied to $f(z_j) - f(w_j)$ is equal to $0$, so that $L$ applied to $f(z_j)$ is the same as $L$ applied to $f(w_j)$. This shows that distinct homomorphisms on $C_b(U)$ may determine the same homomorphism on $\mathcal{B}$. A sequence $\{z_j\}_{j = 1}^\infty$ of points in $U$ is said to be an \emph{interpolating sequence} if for every bounded sequence of complex numbers $\{a_j\}_{j = 1}^\infty$ there is a bounded holomorphic function $f$ on $U$ such that $f(z_j) = a_j$ for each $j$. Equivalently, $\{z_j\}_{j = 1}^\infty$ is an interpolating sequence in $U$ if \begin{equation} \label{f mapsto {f(z_j)}_{j = 1}^infty} f \mapsto \{f(z_j)\}_{j = 1}^\infty \end{equation} maps $\mathcal{B}$ onto $\ell^\infty({\bf Z}_+)$. Of course, (\ref{f mapsto {f(z_j)}_{j = 1}^infty}) defines a bounded linear mapping from $\mathcal{B}$ into $\ell^\infty({\bf Z}_+)$ for any sequence $\{z_j\}_{j = 1}^\infty$ of elements of $U$, and is also a homomorphism with respect to pointwise multiplication. A famous theorem of Carleson characterizes interpolating sequences in $U$. In particular, there are plenty of them. \section{Density} \label{density} \setcounter{equation}{0} Let $X$ be a topological space, and let $\psi$ be a nonzero homomorphism from $C_b(X)$ into the real or complex numbers, as appropriate. As in Section \ref{bounded continuous functions}, $\psi$ is automatically a bounded linear functional on $C_b(X)$, and thus an element of the dual space $C_b(X)^*$. If $p \in X$, then let $\phi_p(f) = f(p)$ be the corresponding point evaluation homomorphism on $C_b(X)$, as usual. We would like to show that $\psi$ can be approximated by point evaluations with respect to the weak$^*$ topology on $C_b(X)^*$, so that point evaluations are dense in the set of nonzero homomorphisms on $C_b(X)$ with respect to the weak$^*$ topology on $C_b(X)^*$.
More precisely, we would like to show that for any finite collection of bounded continuous functions $f_1, \ldots, f_n$ on $X$ and any $\epsilon > 0$ there is a $p \in X$ such that \begin{equation} |\psi(f_j) - \phi_p(f_j)| = |\psi(f_j) - f_j(p)| < \epsilon \end{equation} for $j = 1, \ldots, n$. Otherwise, there are $f_1, \ldots, f_n \in C_b(X)$ and $\epsilon > 0$ such that \begin{equation} \label{max_{1 le j le n} |psi(f_j) - f_j(p)| ge epsilon} \max_{1 \le j \le n} |\psi(f_j) - f_j(p)| \ge \epsilon \end{equation} for every $p \in X$. We may as well ask also that $\psi(f_j) = 0$ for each $j$, since this can always be arranged by subtracting $\psi(f_j)$ as a constant function on $X$ from $f_j$. In this case, (\ref{max_{1 le j le n} |psi(f_j) - f_j(p)| ge epsilon}) reduces to \begin{equation} \label{max_{1 le j le n} |f_j(p)| ge epsilon} \max_{1 \le j \le n} |f_j(p)| \ge \epsilon \end{equation} for each $p \in X$. If \begin{equation} g(p) = \sum_{j = 1}^n |f_j(p)|^2, \end{equation} then $g$ is a bounded continuous function on $X$, and $g(p) \ge \epsilon^2$ for each $p \in X$, by (\ref{max_{1 le j le n} |f_j(p)| ge epsilon}). Thus $1/g$ is also a bounded continuous function on $X$, which implies that $\psi(g) \ne 0$, as in Section \ref{bounded continuous functions}. Of course, $g$ can also be expressed as \begin{equation} g = \sum_{j = 1}^n f_j^2 \end{equation} in the real case, and as \begin{equation} g = \sum_{j = 1}^n f_j \, \overline{f_j} \end{equation} in the complex case, where $\overline{f_j}$ is the complex conjugate of $f_j$. In both cases, this implies that $\psi(g) = 0$, a contradiction, because $\psi(f_j) = 0$ for each $j$, and $\psi$ is a homomorphism. Let $\mathcal{B}$ be the algebra of bounded holomorphic functions on the open unit disk $U$, as in the preceding section. Carleson's corona theorem states that every nonzero homomorphism $\psi$ on $\mathcal{B}$ can be approximated by point evaluations $\phi_p(f) = f(p)$, $p \in U$, with respect to the weak$^*$ topology on the dual of $\mathcal{B}$. As before, if this were not the case, then there would be bounded holomorphic functions $f_1, \ldots, f_n$ on $U$ and $\epsilon > 0$ such that $\psi(f_j) = 0$ for $j = 1, \ldots, n$ and (\ref{max_{1 le j le n} |f_j(p)| ge epsilon}) holds. However, the previous argument does not work, because $\overline{f_j}$ is not holomorphic on $U$ unless $f_j$ is constant. Instead, one can try to show that there are bounded holomorphic functions $g_1, \ldots, g_n$ on $U$ such that \begin{equation} \sum_{j = 1}^n f_j(p) \, g_j(p) = 1 \end{equation} for every $p \in U$, which would give a contradiction as before. \section{Mapping properties} \label{mapping properties} \setcounter{equation}{0} Let $X$ be a topological space, and let us use $\mathop{\rm Hom}(X)$ to denote the set of nonzero homomorphisms from $C_b(X)$ into ${\bf R}$ or ${\bf C}$, as appropriate. In situations in which other types of algebras are considered as well, this may be denoted more precisely as $\mathop{\rm Hom}(C_b(X))$, to avoid confusion. As in Section \ref{bounded continuous functions}, $\mathop{\rm Hom}(X)$ is a compact subset of $C_b(X)^*$ with respect to the weak$^*$ topology. If $p \in X$, then $\phi_p(f) = f(p)$ is an element of $\mathop{\rm Hom}(X)$, and we let $\mathop{\rm Hom}_1(X)$ be the subset of $\mathop{\rm Hom}(X)$ consisting of homomorphisms on $C_b(X)$ of this form. Thus $\mathop{\rm Hom}(X) = \mathop{\rm Hom}_1(X)$ when $X$ is compact, as in Section \ref{compact spaces}.
Otherwise, $\mathop{\rm Hom}_1(X)$ is dense in $\mathop{\rm Hom}(X)$ with respect to the weak$^*$ topology on $C_b(X)^*$ for any $X$, as in the previous section. We have also seen in Section \ref{bounded continuous functions} that $p \mapsto \phi_p$ is continuous as a mapping from $X$ into $C_b(X)^*$ with the weak$^*$ topology. By definition, this mapping sends $X$ onto $\mathop{\rm Hom}_1(X)$ in $C_b(X)^*$. If $X$ is compact, then it follows that $\mathop{\rm Hom}_1(X)$ is compact with respect to the weak$^*$ topology on $C_b(X)^*$, and hence closed. This gives another way to show that $\mathop{\rm Hom}_1(X) = \mathop{\rm Hom}(X)$ when $X$ is compact, since $\mathop{\rm Hom}(X)$ is the same as the closure of $\mathop{\rm Hom}_1(X)$ with respect to the weak$^*$ topology on $C_b(X)^*$ for any $X$. Note that $p \mapsto \phi_p$ is a one-to-one mapping of $X$ into $C_b(X)^*$ exactly when continuous functions separate points on $X$. If $X$ is completely regular, then it is easy to see that $p \mapsto \phi_p$ is a homeomorphism from $X$ onto $\mathop{\rm Hom}_1(X)$ with respect to the topology on $\mathop{\rm Hom}_1(X)$ induced by the weak$^*$ topology on $C_b(X)^*$. In particular, if $X$ is compact and Hausdorff, then $p \mapsto \phi_p$ is a homeomorphism from $X$ onto $\mathop{\rm Hom}(X)$ with respect to the topology on $\mathop{\rm Hom}(X)$ induced by the weak$^*$ topology on $C_b(X)^*$. Remember that compact Hausdorff topological spaces are normal and hence completely regular. Now let $Y$ be another topological space, and let $\rho$ be a continuous mapping from $X$ into $Y$. This leads to a linear mapping $T_\rho : C_b(Y) \to C_b(X)$, defined by \begin{equation} \label{T_rho(f) = f circ rho} T_\rho(f) = f \circ \rho \end{equation} for each $f \in C_b(Y)$. Observe that \begin{equation} \label{||T_rho(f)||_{sup, X} le ||f||_{sup, Y}} \|T_\rho(f)\|_{sup, X} \le \|f\|_{sup, Y} \end{equation} for every $f \in C_b(Y)$, where the subscripts $X$, $Y$ indicate on which space the supremum norm is taken. This shows that $T_\rho$ is a bounded linear mapping from $C_b(Y)$ into $C_b(X)$ with respect to the supremum norm, with operator norm less than or equal to $1$, and the operator norm is actually equal to $1$, because $T_\rho({\bf 1}_Y) = {\bf 1}_X$. If $\rho(X)$ is dense in $Y$, then $T_\rho$ is an isometric embedding of $C_b(Y)$ into $C_b(X)$ with respect to their supremum norms. Let $T_\rho^* : C_b(X)^* \to C_b(Y)^*$ be the dual mapping associated to $T_\rho$. This sends a bounded linear functional $\lambda$ on $C_b(X)$ to the bounded linear functional $\mu = T_\rho^*(\lambda)$ defined by \begin{equation} \mu(f) = \lambda(T_\rho(f)) = \lambda(f \circ \rho) \end{equation} for each $f \in C_b(Y)$. The fact that $\mu = T_\rho^*(\lambda)$ is a bounded linear functional on $C_b(Y)$ uses the fact that $T_\rho$ is a bounded linear mapping from $C_b(Y)$ into $C_b(X)$, as well as the boundedness of $\lambda$ on $C_b(X)$. Similarly, it is easy to see that $T_\rho^*$ is bounded as a linear mapping from $C_b(X)^*$ into $C_b(Y)^*$ with respect to the corresponding dual norms. It is also easy to see that $T_\rho^*$ is continuous as a mapping from $C_b(X)^*$ into $C_b(Y)^*$ with respect to their corresponding weak$^*$ topologies. Observe that $T_\rho$ is a homomorphism from $C_b(Y)$ into $C_b(X)$, in the sense that \begin{equation} \label{T_rho(f g) = T_rho(f) T_rho(g} T_\rho(f \, g) = T_\rho(f) \, T_\rho(g) \end{equation} for every $f, g \in C_b(Y)$. 
If $\lambda$ is a homomorphism from $C_b(X)$ into the real or complex numbers, as appropriate, then it follows that $T_\rho^*(\lambda)$ is a homomorphism on $C_b(Y)$ too. If $\lambda$ is a nonzero homomorphism on $C_b(X)$, so that $\lambda({\bf 1}_X) = 1$, then $T_\rho^*(\lambda)$ is nonzero on $C_b(Y)$ too, because \begin{equation} T_\rho^*(\lambda)({\bf 1}_Y) = \lambda(T_\rho({\bf 1}_Y)) = \lambda({\bf 1}_Y \circ \rho) = \lambda({\bf 1}_X) = 1. \end{equation} Thus $T_\rho^*(\mathop{\rm Hom}(X)) \subseteq \mathop{\rm Hom}(Y)$. If $q \in Y$, then let $\psi_q(f) = f(q)$ be the corresponding point evaluation on $C_b(Y)$. Observe that \begin{equation} T_\rho^*(\phi_p) = \psi_{\rho(p)} \end{equation} for each $p \in X$, since \begin{equation} T_\rho^*(\phi_p)(f) = \phi_p(T_\rho(f)) = \phi_p(f \circ \rho) = f(\rho(p)) = \psi_{\rho(p)}(f) \end{equation} for every $f \in C_b(Y)$. Thus $T_\rho^*(\mathop{\rm Hom}_1(X)) \subseteq \mathop{\rm Hom}_1(Y)$. If $\rho(X)$ is dense in $Y$, then it follows that $T_\rho^*(\mathop{\rm Hom}_1(X))$ is dense in $\mathop{\rm Hom}_1(Y)$ with respect to the weak$^*$ topology on $C_b(Y)^*$, because $q \mapsto \psi_q$ is a continuous mapping from $Y$ into $C_b(Y)^*$ with respect to the weak$^*$ topology on $C_b(Y)^*$. This implies that $T_\rho^*(\mathop{\rm Hom}_1(X))$ is dense in $\mathop{\rm Hom}(Y)$ with respect to the weak$^*$ topology on $C_b(Y)^*$ when $\rho(X)$ is dense in $Y$, since $\mathop{\rm Hom}_1(Y)$ is dense in $\mathop{\rm Hom}(Y)$ with respect to the weak$^*$ topology on $C_b(Y)^*$. If $\rho(X)$ is dense in $Y$, then we also get that \begin{equation} \label{T_rho^*(Hom(X)) = Hom(Y)} T_\rho^*(\mathop{\rm Hom}(X)) = \mathop{\rm Hom}(Y). \end{equation} Remember that $\mathop{\rm Hom}(X)$ is compact in $C_b(X)^*$ with respect to the weak$^*$ topology, which implies that $T_\rho^*(\mathop{\rm Hom}(X))$ is compact in $C_b(Y)^*$ with respect to the weak$^*$ topology, because $T_\rho^*$ is a continuous mapping from $C_b(X)^*$ into $C_b(Y)^*$ with respect to their weak$^*$ topologies. Hence $T_\rho^*(\mathop{\rm Hom}(X))$ is a closed set in $C_b(Y)^*$ with respect to the weak$^*$ topology. This implies that $\mathop{\rm Hom}(Y)$ is contained in $T_\rho^*(\mathop{\rm Hom}(X))$, because $T_\rho^*(\mathop{\rm Hom}_1(X)) \subseteq T_\rho^*(\mathop{\rm Hom}(X))$ is dense in $\mathop{\rm Hom}(Y)$ with respect to the weak$^*$ topology on $C_b(Y)^*$ when $\rho(X)$ is dense in $Y$, as in the previous paragraph. Therefore (\ref{T_rho^*(Hom(X)) = Hom(Y)}) holds, since $T_\rho^*(\mathop{\rm Hom}(X))$ is contained in $\mathop{\rm Hom}(Y)$ automatically. Suppose now that $Y$ is compact and Hausdorff, so that $\mathop{\rm Hom}_1(Y) = \mathop{\rm Hom}(Y)$, and $q \mapsto \psi_q$ defines a homeomorphism from $Y$ onto $\mathop{\rm Hom}(Y)$ with respect to the topology on $\mathop{\rm Hom}(Y)$ induced by the weak$^*$ topology on $C_b(Y)^*$. In this case, the restriction of $T_\rho^*$ to $\mathop{\rm Hom}(X)$ can be identified with a mapping into $Y$. If $\rho(X)$ is dense in $Y$, then we get a mapping from $\mathop{\rm Hom}(X)$ onto $Y$, as in the previous paragraph. If $X$ is completely regular, so that $p \mapsto \phi_p$ defines a homeomorphism from $X$ onto $\mathop{\rm Hom}_1(X)$ with respect to the topology induced on $\mathop{\rm Hom}_1(X)$ by the weak$^*$ topology on $C_b(X)^*$, then the restriction of $T_\rho^*$ to $\mathop{\rm Hom}(X)$ is basically an extension of $\rho$.
If $X$ is compact and Hausdorff, then the restriction of $T_\rho^*$ to $\mathop{\rm Hom}(X)$ is essentially the same as $\rho$ itself. \section{Discrete sets} \label{discrete sets} \setcounter{equation}{0} Let $X$ be a nonempty set, and let $\beta X$ be the set of all ultrafilters on $X$. As in Sections \ref{homomorphisms} and \ref{homomorphisms, continued}, there is a natural one-to-one correspondence between $\beta X$ and the set of all nonzero homomorphisms on $\ell^\infty(X)$. If $X$ is equipped with the discrete topology, then $\ell^\infty(X)$ is the same as $C_b(X)$, and the set of nonzero homomorphisms on $\ell^\infty(X)$ is the same as the set $\mathop{\rm Hom}(X)$ discussed in the previous section. In this section, we shall see how properties of $\mathop{\rm Hom}(X)$ can be described more directly in terms of ultrafilters on $X$. If $A \subseteq X$, then let $\widehat{A} \subseteq \beta X$ be the set of ultrafilters $\mathcal{F}$ on $X$ such that $A \in \mathcal{F}$. Thus $\widehat{X} = \beta X$, and there is a natural one-to-one correspondence between $\widehat{A}$ and $\beta A$ for any $A$, in which an ultrafilter on $A$ is extended to an ultrafilter on $X$ that contains $A$ as an element, as in Section \ref{filters, subsets}. It is easy to see that \begin{equation} \widehat{A \cap B} = \widehat{A} \cap \widehat{B} \end{equation} for every $A, B \subseteq X$. Moreover, \begin{equation} \widehat{X \backslash A} = \widehat{X} \backslash \widehat{A} = \beta X \backslash \widehat{A} \end{equation} for every $A \subseteq X$, because any ultrafilter $\mathcal{F}$ on $X$ contains exactly one of $A$ and $X \backslash A$ as an element. It follows that \begin{equation} \widehat{A \cup B} = \widehat{A} \cup \widehat{B} \end{equation} for every $A, B \subseteq X$. Let us define a topology on $\beta X$ by saying that a subset of $\beta X$ is an open set if it can be expressed as a union of subsets of the form $\widehat{A}$, $A \subseteq X$. Equivalently, $\widehat{A}$ is an open set in $\beta X$ for each $A \subseteq X$, and these open subsets of $\beta X$ form a base for the topology of $\beta X$. It is easy to see that the intersection of two open subsets of $\beta X$ is also open, so that this does define a topology on $\beta X$, because of the fact about intersections mentioned in the previous paragraph. The fact about complements mentioned in the previous paragraph implies that $\widehat{A}$ is both open and closed for every $A \subseteq X$. If $\mathcal{F}$ is an ultrafilter on $X$, then let $L_\mathcal{F}$ be the corresponding homomorphism on $\ell^\infty(X)$, as in Section \ref{homomorphisms}. Let $A$ be a subset of $X$, and let ${\bf 1}_A$ be the indicator function on $X$ corresponding to $A$, so that ${\bf 1}_A(x) = 1$ when $x \in A$ and ${\bf 1}_A(x) = 0$ when $x \in X \backslash A$. It is easy to check that \begin{eqnarray} \label{L_mathcal{F}({bf 1}_A) = ...} L_\mathcal{F}({\bf 1}_A) & = & 1 \hbox{ when } A \in \mathcal{F} \\ & = & 0 \hbox{ when } X \backslash A \in \mathcal{F}, \nonumber \end{eqnarray} directly from the definition of $L_\mathcal{F}$. Remember that $\mathcal{F} \mapsto L_\mathcal{F}$ defines a one-to-one correspondence between $\beta X$ and the set $\mathop{\rm Hom}(X)$ of nonzero homomorphisms on $\ell^\infty(X) = C_b(X)$. Using (\ref{L_mathcal{F}({bf 1}_A) = ...}), one can check that $\widehat{A}$ corresponds to a relatively open subset of $\mathop{\rm Hom}(X)$ with respect to the weak$^*$ topology on $\ell^\infty(X)^*$ for each $A \subseteq X$.
This implies that every open set in $\beta X$ with respect to the topology described earlier corresponds to a relatively open set in $\mathop{\rm Hom}(X)$ with respect to the weak$^*$ topology on $\ell^\infty(X)^*$. Conversely, one can show that relatively open subsets of $\mathop{\rm Hom}(X)$ with respect to the weak$^*$ topology on $\ell^\infty(X)^*$ correspond to open subsets of $\beta X$. This uses the facts that finite linear combinations of indicator functions of subsets of $X$ are dense in $\ell^\infty(X)$, and that homomorphisms on $\ell^\infty(X)$ have bounded dual norm. In particular, $\beta X$ should be compact and Hausdorff with respect to the topology defined before, because of the corresponding properties of $\mathop{\rm Hom}(X)$ with respect to the topology induced by the weak$^*$ topology on $\ell^\infty(X)^*$. Let us check these properties directly from the definition of the topology on $\beta X$. If $\mathcal{F}$, $\mathcal{F}'$ are distinct ultrafilters on $X$, then there is a set $A \subseteq X$ such that $A \in \mathcal{F}$ and $X \backslash A \in \mathcal{F}'$. Hence $\mathcal{F} \in \widehat{A}$ and $\mathcal{F}' \in \widehat{X \backslash A}$, so that $\mathcal{F}$, $\mathcal{F}'$ are contained in disjoint open subsets of $\beta X$, which implies that $\beta X$ is Hausdorff. To show that $\beta X$ is compact, let $\mathcal{U}$ be an arbitrary ultrafilter on $\beta X$, and let us show that $\mathcal{U}$ converges to an element of $\beta X$. Let $\mathcal{F}$ be the collection of subsets $A$ of $X$ such that $\widehat{A} \in \mathcal{U}$. It is easy to see that $\mathcal{F}$ is a filter on $X$, because $\mathcal{U}$ is a filter on $\beta X$. If $A \subseteq X$, then either $\widehat{A}$ or $\widehat{X \backslash A} = \beta X \backslash \widehat{A}$ is an element of $\mathcal{U}$, because $\mathcal{U}$ is an ultrafilter on $\beta X$. This implies that either $A$ or $X \backslash A$ is an element of $\mathcal{F}$ for every $A \subseteq X$, and hence that $\mathcal{F}$ is an ultrafilter on $X$. It remains to check that $\mathcal{U}$ converges to $\mathcal{F}$ as an element of $\beta X$. By definition, this means that every neighborhood of $\mathcal{F}$ in $\beta X$ should be an element of $\mathcal{U}$. Because the sets $\widehat{A}$, $A \subseteq X$, form a base for the topology of $\beta X$, it suffices to have $\widehat{A} \in \mathcal{U}$ for every $A \subseteq X$ such that $A \in \mathcal{F}$, which follows from the definition of $\mathcal{F}$. If $p \in X$, then the collection $\mathcal{F}_p$ of $A \subseteq X$ with $p \in A$ is an ultrafilter on $X$. Thus $p \mapsto \mathcal{F}_p$ defines a natural embedding of $X$ into $\beta X$. It is easy to see that the set of ultrafilters $\mathcal{F}_p$, $p \in X$, is dense in $\beta X$ with respect to the topology defined earlier. One can also check that the homomorphism $L_{\mathcal{F}_p}$ on $\ell^\infty(X)$ corresponding to $\mathcal{F}_p$ is the same as evaluation at $p$. Let $Y$ be a compact Hausdorff topological space, and let $\rho$ be a mapping from $X$ into $Y$. If $\mathcal{F}$ is an ultrafilter on $X$, then we can define $\rho_*(\mathcal{F})$ as usual as the collection of sets $E \subseteq Y$ such that $\rho^{-1}(E) \in \mathcal{F}$. In particular, we have seen that $\rho_*(\mathcal{F})$ is an ultrafilter on $Y$. It follows that $\rho_*(\mathcal{F})$ converges to a unique element of $Y$, because $Y$ is compact and Hausdorff.
Let $\widehat{\rho}(\mathcal{F})$ be the limit of $\rho_*(\mathcal{F})$ in $Y$, which defines $\widehat{\rho}$ as a mapping from $\beta X$ into $Y$. If $p \in X$, then it is easy to see that $\widehat{\rho}(\mathcal{F}_p) = \rho(p)$. Thus $\widehat{\rho}$ is basically an extension of $\rho$ to a mapping from $\beta X$ into $Y$. Let us check that $\widehat{\rho}$ is continuous as a mapping from $\beta X$ into $Y$. Let $\mathcal{F}$ be an ultrafilter on $X$, and let $W$ be an open set in $Y$ that contains $\widehat{\rho}(\mathcal{F})$ as an element. Because $Y$ is compact and Hausdorff, it is regular, which implies that there is an open set $V$ in $Y$ such that $\widehat{\rho}(\mathcal{F}) \in V$ and the closure $\overline{V}$ of $V$ in $Y$ is contained in $W$. Remember that $\rho_*(\mathcal{F})$ converges to $\widehat{\rho}(\mathcal{F})$ in $Y$, which implies that $V \in \rho_*(\mathcal{F})$. This implies in turn that $\rho^{-1}(V) \in \mathcal{F}$, by the definition of $\rho_*(\mathcal{F})$. Put $A = \rho^{-1}(V)$, so that $\widehat{A}$ is an open set in $\beta X$ that contains $\mathcal{F}$ as an element. Let $\mathcal{F}'$ be any other ultrafilter on $X$ that is an element of $\widehat{A}$. This means that $\rho^{-1}(V) = A \in \mathcal{F}'$, and hence that $A \in \rho_*(\mathcal{F}')$. By construction, $\rho_*(\mathcal{F}')$ converges to $\widehat{\rho}(\mathcal{F}')$ in $Y$, which implies that $\widehat{\rho}(\mathcal{F}') \in \overline{V}$. This shows that $\widehat{\rho}(\mathcal{F}') \in \overline{V} \subseteq W$ for every $\mathcal{F}' \in \widehat{A}$, and hence that $\widehat{\rho}$ is continuous at $\mathcal{F}$ for every $\mathcal{F} \in \beta X$, as desired. \section{Locally compact spaces, revisited} \label{locally compact spaces, revisited} \setcounter{equation}{0} Let $X$ be a locally compact Hausdorff topological space which is not compact, and let $X^*$ be the one-point compactification of $X$, as in Section \ref{locally compact spaces, continued}. Also let $C_{lim}(X)$ be the space of continuous real or complex-valued functions on $X$ which have a limit at infinity, as in Section \ref{locally compact spaces, continued}. As usual, this may also be denoted $C_{lim}(X, {\bf R})$ or $C_{lim}(X, {\bf C})$, to indicate whether real or complex-valued functions are being used. As in Section \ref{locally compact spaces, continued}, $C_{lim}(X)$ is a closed subalgebra of the algebra $C_b(X)$ of bounded continuous functions on $X$ with respect to the supremum norm, and $C_{lim}(X)$ is the same as the linear span in $C_b(X)$ of the subspace $C_0(X)$ of functions that vanish at infinity on $X$ and the constant functions on $X$. Equivalently, $C_{lim}(X)$ is the same as the space of continuous functions on $X$ that have a continuous extension to $X^*$. Thus a nonzero homomorphism $\phi$ from $C_{lim}(X)$ into the real or complex numbers, as appropriate, is basically the same as a nonzero homomorphism on $C(X^*)$. As in Section \ref{compact spaces}, every nonzero homomorphism on $C(X^*)$ can be represented by evaluation at a point in $X^*$, because $X^*$ is compact. This point in $X^*$ is either an element of $X$, or the point at infinity in $X^*$. This implies that either there is a $p \in X$ such that \begin{equation} \phi(f) = f(p) \end{equation} for every $f \in C_{lim}(X)$, or that \begin{equation} f(x) \to \phi(f) \hbox{ as } x \to \infty \end{equation} for every $f \in C_{lim}(X)$. Suppose now that $\phi$ is a nonzero homomorphism on $C_b(X)$. 
The restriction of $\phi$ to $C_{lim}(X)$ is a nonzero homomorphism on $C_{\lim}(X)$, since $\phi({\bf 1}_X) = 1$. If $\phi(f) = f(p)$ for some $p \in X$ and every $f \in C_{lim}(X)$, then we would like to check that this also holds for every $f \in C_b(X)$. To see this, we can use Urysohn's lemma to get a continuous function $\theta$ with compact support on $X$ such that $\theta(p) = 1$. Let $f$ be a bounded continuous function on $X$, and observe that $\theta \, f \in C_{lim}(X)$, because it has compact support on $X$. This implies that \begin{equation} \phi(\theta \, f) = (\theta \, f)(p) = \theta(p) \, f(p) = f(p), \end{equation} since $\theta(p) = 1$. Similarly, \begin{equation} \phi((1 - \theta) \, f) = \phi(1 - \theta) \, \phi(f) = (1 - \theta(p)) \, \phi(f) = 0. \end{equation} More precisely, this uses the hypothesis that $\phi$ is a homomorphism on $C_b(X)$ in the first step, and then the fact that $1 - \theta \in C_{lim}(X)$ to get that $\phi(1 - \theta)$ is equal to $1 - \theta(p)$. Combining these two equations, we get that $\phi(f) = f(p)$, as desired. Let $\rho$ be the standard embedding of $X$ into $X^*$, which sends each $p \in X$ to itself as an element of $X^*$. As in Section \ref{mapping properties}, this leads to a mapping $T_\rho$ from $C(X^*)$ into $C_b(X)$, which sends $C(X^*)$ onto $C_{lim}(X)$ in this case. The corresponding dual mapping $T_\rho^*$ sends the set $\mathop{\rm Hom}(X)$ of nonzero homomorphisms on $C_b(X)$ into the analogous set $\mathop{\rm Hom}(X^*)$ for $X^*$, which can be identified with $X^*$, because $X^*$ is compact and Hausdorff. Remember that $\mathop{\rm Hom}_1(X) \subseteq \mathop{\rm Hom}(X)$ is the set of homomorphisms on $C_b(X)$ defined by evaluation at elements of $X$, and that $T_\rho^*$ maps $\mathop{\rm Hom}_1(X)$ to the point evaluations on $C(X^*)$ that correspond to elements of $X$. The discussion in the previous paragraph implies that $T_\rho^*$ sends every other element of $\mathop{\rm Hom}(X)$ to the point evaluation on $C(X^*)$ that corresponds to the point at infinity in $X^*$. \section{Mapping properties, continued} \label{mapping properties, continued} \setcounter{equation}{0} Let $U$ be the open unit disk in the complex plane, so that $\overline{U}$ is the closed unit disk. Also let $\rho$ be the standard embedding of $U$ into $\overline{U}$, which sends each $z \in U$ to itself as an element of $\overline{U}$. This leads to a mapping $T_\rho$ from $C(\overline{U})$ into $C_b(U)$, as in Section \ref{mapping properties}, which sends a continuous function $f$ on $\overline{U}$ to its restriction to $U$. The dual mapping $T_\rho^* : C_b(U)^* \to C(\overline{U})^*$ sends the set $\mathop{\rm Hom}(U)$ of nonzero homomorphisms on $C_b(U)$ into the analogous set $\mathop{\rm Hom}(\overline{U})$ for $\overline{U}$, as before. If $\phi$ is a nonzero homomorphism on $C_b(U)$, then $T_\rho^*(\phi)$ is basically the same as the restriction of $\phi$ to $C(\overline{U})$, which is identified with a subalgebra of $C_b(U)$. Each nonzero homomorphism on $C(\overline{U})$ can be represented as a point evaluation, as in Section \ref{compact spaces}. If there is a $p \in U$ such that $\phi(f) = f(p)$ for every $f \in C(\overline{U})$, then the same relation holds for every $f \in C_b(U)$, as in the previous section. If $\phi \in \mathop{\rm Hom}(U)$ does not correspond to evaluation at a point in $U$, then it follows that the restriction of $\phi$ to $C(\overline{U})$ corresponds to evaluation at a point in $\partial U$. 
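For a concrete example of this phenomenon, with the particular sequence $z_j = 1 - 1/j$ chosen here only for definiteness, let $L$ be a nonzero homomorphism on $\ell^\infty({\bf Z}_+)$ that vanishes on $c_0({\bf Z}_+)$, and let $\phi$ be the nonzero homomorphism on $C_b(U)$ obtained by applying $L$ to $\{f(z_j)\}_{j = 1}^\infty$ for each $f \in C_b(U)$, as in Section \ref{bounded holomorphic functions}. If $f \in C(\overline{U})$, then $f(z_j) \to f(1)$ as $j \to \infty$, by continuity, so that $\{f(z_j)\}_{j = 1}^\infty$ differs from the constant sequence with value $f(1)$ by an element of $c_0({\bf Z}_+)$, and hence \begin{equation} \phi(f) = L(\{f(z_j)\}_{j = 1}^\infty) = f(1) \, L(\{1\}_{j = 1}^\infty) = f(1), \end{equation} because $L$ is linear, vanishes on $c_0({\bf Z}_+)$, and sends the constant sequence equal to $1$ to $1$. Thus the restriction of $\phi$ to $C(\overline{U})$ corresponds to evaluation at the boundary point $1 \in \partial U$. However, $\phi$ is not given by evaluation at any point $p \in U$, because one can choose a bounded continuous function $f$ on $U$ such that $f(p) = 0$ and $f(z_j) = 1$ for all but at most one $j$, and then $\phi(f) = 1 \ne 0 = f(p)$.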
Let $\mathcal{A}$ be the algebra of continuous complex-valued functions on $\overline{U}$ that are holomorphic on $U$, as in Section \ref{disk algebra}, and let $\mathcal{B}$ be the algebra of bounded holomorphic functions on $U$, as in Section \ref{bounded holomorphic functions}. If $f \in \mathcal{A}$, then the restriction of $f$ to $U$ is an element of $\mathcal{B}$, and $f$ is determined on $\overline{U}$ by its restriction to $U$, by continuity. Thus we can identify $\mathcal{A}$ with a subalgebra of $\mathcal{B}$. Let $\mathop{\rm Hom}(\mathcal{A})$, $\mathop{\rm Hom}(\mathcal{B})$ denote the sets of nonzero homomorphisms from $\mathcal{A}$, $\mathcal{B}$ into the complex numbers, respectively. As in Sections \ref{disk algebra} and \ref{bounded holomorphic functions}, these are subsets of the duals of $\mathcal{A}$, $\mathcal{B}$, and we are especially interested in the topologies induced on $\mathop{\rm Hom}(\mathcal{A})$, $\mathop{\rm Hom}(\mathcal{B})$ by the weak$^*$ topologies on the corresponding dual spaces. If $p \in \overline{U}$, then $\phi_p(f) = f(p)$ defines a homomorphism on $\mathcal{A}$, and we have seen in Section \ref{disk algebra} that every nonzero homomorphism on $\mathcal{A}$ is of this form. Of course, $\phi_p(f) = f(p)$ is a continuous function on $\overline{U}$ for every $f \in \mathcal{A}$, by definition of $\mathcal{A}$, which implies that $p \mapsto \phi_p$ is continuous as a mapping from $\overline{U}$ into $\mathop{\rm Hom}(\mathcal{A})$ with respect to the weak$^*$ topology on $\mathcal{A}^*$. If $f_1$ is the element of $\mathcal{A}$ defined by $f_1(z) = z$ for each $z \in \overline{U}$, then $\phi_p(f_1) = p$ for each $p \in \overline{U}$, so that $\phi \mapsto \phi(f_1)$ inverts $p \mapsto \phi_p$ and is continuous with respect to the weak$^*$ topology on $\mathcal{A}^*$. This shows that $p \mapsto \phi_p$ is actually a homeomorphism from $\overline{U}$ onto $\mathop{\rm Hom}(\mathcal{A})$ with respect to the topology induced on $\mathop{\rm Hom}(\mathcal{A})$ by the weak$^*$ topology on $\mathcal{A}^*$. Similarly, if $p \in U$, then $\phi_p(f) = f(p)$ defines a nonzero homomorphism on $\mathcal{B}$, and $p \mapsto \phi_p$ defines a continuous mapping from $U$ into $\mathop{\rm Hom}(\mathcal{B})$ with respect to the weak$^*$ topology on $\mathcal{B}^*$. Let $\mathop{\rm Hom}_1(\mathcal{B})$ be the set of homomorphisms on $\mathcal{B}$ of this form. If $f_1$ is the element of $\mathcal{B}$ defined by $f_1(z) = z$ for each $z \in U$, then $\phi_p(f_1) = p$ for each $p \in U$. This implies that $p \mapsto \phi_p$ is a homeomorphism from $U$ onto $\mathop{\rm Hom}_1(\mathcal{B})$ with respect to the topology induced on $\mathop{\rm Hom}_1(\mathcal{B})$ by the weak$^*$ topology on $\mathcal{B}^*$. If $\phi$ is a nonzero homomorphism on $\mathcal{B}$, then the restriction of $\phi$ to $\mathcal{A}$ is a nonzero homomorphism on $\mathcal{A}$. This defines a natural mapping from $\mathop{\rm Hom}(\mathcal{B})$ into $\mathop{\rm Hom}(\mathcal{A})$. It is easy to see that this mapping is continuous with respect to the topologies induced on $\mathop{\rm Hom}(\mathcal{A})$, $\mathop{\rm Hom}(\mathcal{B})$ by the weak$^*$ topologies on $\mathcal{A}^*$, $\mathcal{B}^*$, respectively. Let $f_1$ be the element of $\mathcal{B}$ defined by $f_1(z) = z$ for each $z \in U$ again. Also let $\phi$ be a nonzero homomorphism on $\mathcal{B}$, and put $p = \phi(f_1)$. Note that $p \in \overline{U}$, since $\phi$ has dual norm equal to $1$ with respect to the supremum norm on $\mathcal{B}$, as in Section \ref{bounded holomorphic functions}.
If $f \in \mathcal{A}$, then $\phi(f) = f(p)$, by the arguments in Section \ref{disk algebra} applied to the restriction of $\phi$ to $\mathcal{A}$. Suppose that $p \in U$, and let us check that $\phi(f) = f(p)$ for every $f \in \mathcal{B}$. Any holomorphic function $f$ on $U$ can be expressed as \begin{equation} f(z) = f(p) + (z - p) \, g(z) \end{equation} for some holomorphic function $g$ on $U$, and $g$ is also bounded on $U$ when $f$ is. This implies that $\phi(f) = f(p)$ for every $f \in \mathcal{B}$, because $\phi$ applied to $z - p$ is equal to $0$, by definition of $p$. If $\phi \in \mathop{\rm Hom}(\mathcal{B}) \backslash \mathop{\rm Hom}_1(\mathcal{B})$, then it follows that $p \in \partial U$. This is analogous to the situation for bounded continuous functions on $U$ mentioned at the beginning of the section. Let us take $C_b(U)$ to be the algebra of bounded continuous complex-valued functions on $U$, so that $\mathcal{B}$ is a subalgebra of $C_b(U)$. Let us also use $\mathop{\rm Hom}(C_b(U))$ to denote the set of nonzero homomorphisms on $C_b(U)$, to be more consistent with the notation for $\mathcal{B}$. If $\phi$ is a nonzero homomorphism on $C_b(U)$, then the restriction of $\phi$ to $\mathcal{B}$ is a nonzero homomorphism on $\mathcal{B}$. This defines a natural mapping $R$ from $\mathop{\rm Hom}(C_b(U))$ into $\mathop{\rm Hom}(\mathcal{B})$, which is easily seen to be continuous with respect to the topologies induced by the weak$^*$ topologies on $C_b(U)^*$ and $\mathcal{B}^*$, respectively. By construction, $R$ sends $\mathop{\rm Hom}_1(C_b(U))$ onto $\mathop{\rm Hom}_1(\mathcal{B})$. We also know that $\mathop{\rm Hom}(C_b(U))$ is compact with respect to the weak$^*$ topology on $C_b(U)^*$, which implies that $R(\mathop{\rm Hom}(C_b(U)))$ is compact with respect to the weak$^*$ topology on $\mathcal{B}^*$. In particular, $R(\mathop{\rm Hom}(C_b(U)))$ is closed with respect to the weak$^*$ topology on $\mathcal{B}^*$. As in Section \ref{density}, Carleson's corona theorem states that $\mathop{\rm Hom}_1(\mathcal{B})$ is dense in $\mathop{\rm Hom}(\mathcal{B})$ with respect to the weak$^*$ topology on $\mathcal{B}^*$. It follows that $R$ maps $\mathop{\rm Hom}(C_b(U))$ onto $\mathop{\rm Hom}(\mathcal{B})$, so that every nonzero homomorphism on $\mathcal{B}$ is the restriction to $\mathcal{B}$ of a nonzero homomorphism on $C_b(U)$. \section{Banach algebras} \label{banach algebras} \setcounter{equation}{0} A vector space $\mathcal{A}$ over the real or complex numbers is said to be an (associative) algebra if every $a, b \in \mathcal{A}$ has a well-defined product $a \, b \in \mathcal{A}$ which is linear in $a$ and $b$ separately and satisfies the associative law \begin{equation} (a \, b) \, c = a \, (b \, c) \quad\hbox{for every } a, b, c \in \mathcal{A}. \end{equation} We shall be primarily concerned here with commutative algebras, so that \begin{equation} a \, b = b \, a \end{equation} for each $a, b \in \mathcal{A}$. We also ask that there be a nonzero multiplicative identity element $e$ in $\mathcal{A}$, which means that $e \ne 0$ and \begin{equation} e \, a = a \, e = a \end{equation} for every $a \in \mathcal{A}$. We have seen several examples of algebras of functions in the previous sections, for which the multiplicative identity element is the constant function equal $1$. Suppose that $\mathcal{A}$ is equipped with a norm $\|a\|$. 
This norm should also be compatible with multiplication on $\mathcal{A}$, in the sense that $\|e\| = 1$ and \begin{equation} \label{||a b|| le ||a|| ||b||} \|a \, b\| \le \|a\| \, \|b\| \end{equation} for every $a, b \in \mathcal{A}$. We say that $\mathcal{A}$ is a Banach algebra if it is also complete as a metric space with respect to the metric $d(a, b) = \|a - b\|$ associated to the norm. The algebra of bounded continuous functions on any topological space is a Banach algebra with respect to the supremum norm. Closed subalgebras of Banach algebras are also Banach algebras, such as the disk algebra and the algebra of bounded holomorphic functions on the unit disk. Suppose that $\mathcal{A}$ is any Banach algebra, and let $a$ be an element of $\mathcal{A}$. If $n$ is a positive integer, then $a^n$ is the product $a \, a \, \cdots a$ of $n$ $a$'s in $\mathcal{A}$, which can also be described by $a^n = a$ when $n = 1$, and $a^{n + 1} = a \, a^n$ for every $n$. This is interpreted as being equal to the multiplicative identity element $e$ when $n = 0$. Observe that \begin{equation} \label{||a^n|| le ||a||^n} \|a^n\| \le \|a\|^n \end{equation} for each $n \ge 0$, where again the right side is interpreted as being equal to $1$ when $n = 0$. An element $a$ of $\mathcal{A}$ is said to be \emph{invertible} if there is another element $a^{-1}$ of $\mathcal{A}$ such that \begin{equation} a \, a^{-1} = a^{-1} \, a = e. \end{equation} It is easy to see that the inverse $a^{-1}$ of $a$ is unique when it exists. If $a$, $b$ are invertible elements of $\mathcal{A}$, then their product $a \, b$ is also invertible, with \begin{equation} (a \, b)^{-1} = b^{-1} \, a^{-1}. \end{equation} If $x$ is an invertible element of $\mathcal{A}$ and $y$ is another element of $\mathcal{A}$ that commutes with $x$, so that $x \, y = y \, x$, then $y$ also commutes with $x^{-1}$, \begin{equation} \label{y x^{-1} = x^{-1} y} y \, x^{-1} = x^{-1} \, y. \end{equation} If $a$, $b$ are commuting elements of $\mathcal{A}$ whose product $a \, b$ is invertible, then $a$, $b$ are also invertible, with \begin{equation} a^{-1} = b \, (a \, b)^{-1}, \quad b^{-1} = (a \, b)^{-1} \, a. \end{equation} This uses the fact that $a$, $b$ commute with $(a \, b)^{-1}$, since they commute with $a \, b$. Note that these statements do not involve the norm on $\mathcal{A}$. If $a \in \mathcal{A}$ and $n$ is a positive integer, then \begin{equation} (e - a) \, \Big(\sum_{j = 0}^n a^j\Big) = \Big(\sum_{j = 0}^n a^j\Big) \, (e - a) = e - a^{n + 1}. \end{equation} This is basically the same as for real or complex numbers. If $\|a\| < 1$, then \begin{equation} \lim_{n \to \infty} a^n = 0 \end{equation} in $\mathcal{A}$, since $\|a^n\| \le \|a\|^n \to 0$ as $n \to \infty$. Similarly, \begin{equation} \sum_{j = 0}^\infty \|a^j\| \le \sum_{j = 0}^\infty \|a\|^j = \frac{1}{1 - \|a\|}. \end{equation} As in the context of real or complex numbers, the convergence of $\sum_{j = 0}^\infty \|a^j\|$ means that $\sum_{j = 0}^\infty a^j$ converges absolutely. More precisely, this implies that the partial sums $\sum_{j = 0}^n a^j$ of $\sum_{j = 0}^\infty a^j$ form a Cauchy sequence in $\mathcal{A}$, which converges when $\mathcal{A}$ is complete. It follows that \begin{equation} (e - a) \, \Big(\sum_{j = 0}^\infty a^j\Big) = \Big(\sum_{j = 0}^\infty a^j\Big) \, (e - a) = e \end{equation} when $a \in \mathcal{A}$, $\|a\| < 1$, and $\mathcal{A}$ is a Banach algebra. 
Thus $e - a$ is invertible in $\mathcal{A}$ under these conditions, with \begin{equation} (e - a)^{-1} = \sum_{j = 0}^\infty a^j. \end{equation} We also get that \begin{equation} \|(e - a)^{-1}\| \le \frac{1}{1 - \|a\|}. \end{equation} If $b$ is any invertible element of $\mathcal{A}$ and $\|a\| \, \|b^{-1}\| < 1$, then $b - a$ is also invertible in $\mathcal{A}$, because \begin{equation} b - a = (e - a \, b^{-1}) \, b \end{equation} and $e - a \, b^{-1}$ is invertible by the previous argument. This shows that the invertible elements in a Banach algebra $\mathcal{A}$ form an open set in $\mathcal{A}$ with respect to the metric associated to the norm. Let $\mathcal{A}$ be a real or complex algebra, and let $\phi$ be a linear functional on $\mathcal{A}$, which is to say a linear mapping from $\mathcal{A}$ into the real or complex numbers, as appropriate. We say that $\phi$ is a \emph{homomorphism} on $\mathcal{A}$ if \begin{equation} \phi(a \, b) = \phi(a) \, \phi(b) \end{equation} for every $a, b \in \mathcal{A}$. Of course, $\phi$ satisfies this condition trivially when $\phi(a) = 0$ for every $a \in \mathcal{A}$, and we are primarily interested in the nonzero homomorphisms $\phi$, which means that $\phi(a) \ne 0$ for some $a \in \mathcal{A}$. This implies that \begin{equation} \phi(e) = 1, \end{equation} because $\phi(a) = \phi(e) \, \phi(a)$, since $a = e \, a$. If $b$ is any invertible element of $\mathcal{A}$, then we get that \begin{equation} \phi(b) \, \phi(b^{-1}) = \phi(b \, b^{-1}) = \phi(e) = 1, \end{equation} and hence $\phi(b) \ne 0$. Suppose now that $\mathcal{A}$ is a Banach algebra again, and let $\phi$ be a nonzero homomorphism on $\mathcal{A}$. If $a \in \mathcal{A}$ and $\|a\| < 1$, then $e - a$ is invertible, and so \begin{equation} \phi(e - a) \ne 0, \end{equation} which means that $\phi(a) \ne 1$. By the same argument, $\phi(t \, a) = t \, \phi(a) \ne 1$ for every $t \in {\bf R}$ or ${\bf C}$, as appropriate, such that $|t| < 1$. This implies that $|\phi(a)| < 1$ when $a \in \mathcal{A}$ satisfies $\|a\| < 1$. Hence \begin{equation} |\phi(a)| \le \|a\| \end{equation} for every $a \in \mathcal{A}$, which shows that $\phi$ is a continuous linear functional on $\mathcal{A}$ with dual norm less than or equal to $1$. The dual norm of $\phi$ is actually equal to $1$, because $\phi(e) = 1$. It is easy to see that the collection of nonzero homomorphisms on $\mathcal{A}$ is closed with respect to the weak$^*$ topology on the dual of $\mathcal{A}$. It follows that the collection of nonzero homomorphisms on $\mathcal{A}$ is compact with respect to the weak$^*$ topology, by the Banach--Alaoglu theorem. A linear subspace $\mathcal{I}$ of a real or complex algebra $\mathcal{A}$ is said to be an \emph{ideal} in $\mathcal{A}$ if $a \, x$ and $x \, a$ are contained in $\mathcal{I}$ for every $a \in \mathcal{A}$ and $x \in \mathcal{I}$. Of course, $\mathcal{A}$ itself and the trivial subspace $\{0\}$ are ideals in $\mathcal{A}$, and an ideal $\mathcal{I}$ in $\mathcal{A}$ is said to be proper if $\mathcal{I} \ne \mathcal{A}$. If $\mathcal{I}$ is an ideal in $\mathcal{A}$ and $\mathcal{I}$ contains the identity element $e$, or any invertible element $x$, then $\mathcal{I} = \mathcal{A}$. If $\mathcal{A}$ is a Banach algebra and $\mathcal{I}$ is an ideal in $\mathcal{A}$, then it is easy to see that the closure $\overline{\mathcal{I}}$ of $\mathcal{I}$ with respect to the norm on $\mathcal{A}$ is also an ideal in $\mathcal{A}$. 
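As a simple example, let $\mathcal{A}$ be the algebra $C([0, 1])$ of continuous real or complex-valued functions on the closed unit interval, equipped with the supremum norm, and put \begin{equation} \mathcal{I} = \{f \in \mathcal{A} : f(0) = 0\}. \end{equation} It is easy to see that $\mathcal{I}$ is a proper ideal in $\mathcal{A}$, and that $\mathcal{I}$ is closed with respect to the supremum norm, since uniform convergence implies pointwise convergence at $0$. Note that $\mathcal{I}$ is the kernel of the nonzero homomorphism $f \mapsto f(0)$ on $\mathcal{A}$, and that $\mathcal{I}$ has codimension $1$ as a linear subspace of $\mathcal{A}$.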
If $\mathcal{I}$ is a proper ideal in a Banach algebra $\mathcal{A}$, then $e \not\in \overline{\mathcal{I}}$. This is because elements of $\mathcal{A}$ sufficiently close to $e$ are invertible, as before. Thus the closure of a proper ideal in a Banach algebra is still proper. A proper ideal $\mathcal{I}$ in an algebra $\mathcal{A}$ is said to be maximal if $\mathcal{A}$ and $\mathcal{I}$ are the only ideals that contain $\mathcal{I}$. It is easy to see that the kernel of a nonzero homomorphism on $\mathcal{A}$ is maximal, since it has codimension $1$. A maximal ideal $\mathcal{I}$ in a Banach algebra $\mathcal{A}$ is automatically closed, because its closure $\overline{\mathcal{I}}$ is a proper ideal that contains $\mathcal{I}$, and hence is equal to $\mathcal{I}$. Using the axiom of choice, one can show that every proper ideal in an algebra with nonzero multiplicative identity element is contained in a maximal ideal. More precisely, one can use Zorn's lemma or the Hausdorff maximality principle, by checking that the union of a chain of proper ideals is a proper ideal. To get properness, one uses the fact that the ideals do not contain the identity element. If $\mathcal{A}$ is a commutative algebra and $a \in \mathcal{A}$, then \begin{equation} \mathcal{I}_a = \{a \, b : b \in \mathcal{A}\} \end{equation} is an ideal in $\mathcal{A}$. Moreover, $\mathcal{I}_a$ is a proper ideal in $\mathcal{A}$ if and only if $a$ is not invertible in $\mathcal{A}$. Suppose from now on that $\mathcal{A}$ is a complex Banach algebra. Let $a \in \mathcal{A}$ be given, and suppose that $t \, e - a$ is invertible in $\mathcal{A}$ for every $t \in {\bf C}$. If $\lambda$ is a continuous linear functional on $\mathcal{A}$, then one can show that \begin{equation} f_\lambda(t) = \lambda((t \, e - a)^{-1}) \end{equation} is a holomorphic function on the complex plane ${\bf C}$. One can also check that $(t \, e - a)^{-1} \to 0$ in $\mathcal{A}$ as $|t| \to \infty$, so that $f_\lambda(t) \to 0$ as $|t| \to \infty$ for each $\lambda$. This implies that $f_\lambda(t) = 0$ for every $t \in {\bf C}$ and continuous linear functional $\lambda$ on $\mathcal{A}$, by standard results in complex analysis, namely Liouville's theorem. Using the Hahn--Banach theorem, it follows that $(t \, e - a)^{-1} = 0$ for every $t \in {\bf C}$, contradicting the fact that invertible elements of $\mathcal{A}$ are not zero. This is a brief sketch of the well-known fact that for each $a \in \mathcal{A}$ there is a $t \in {\bf C}$ such that $t \, e - a$ is not invertible. Suppose that every nonzero element of $\mathcal{A}$ is invertible. If $a \in \mathcal{A}$, then there is a $t \in {\bf C}$ such that $t \, e - a$ is not invertible, as in the previous paragraph. In this case, it follows that $a = t \, e$, so that $\mathcal{A}$ is isomorphic to the complex numbers. If $\mathcal{A}$ is an algebra and $\mathcal{I}$ is an ideal in $\mathcal{A}$, then the quotient $\mathcal{A} / \mathcal{I}$ defines an algebra in a natural way, so that the corresponding quotient mapping is a homomorphism from $\mathcal{A}$ onto $\mathcal{A} / \mathcal{I}$ with kernel equal to $\mathcal{I}$. If $\mathcal{A}$ has a nonzero multiplicative identity element and $\mathcal{I}$ is proper, then $\mathcal{A} / \mathcal{I}$ also has a nonzero multiplicative identity element. If $\mathcal{I}$ is a maximal ideal, then $\mathcal{A} / \mathcal{I}$ contains no nontrivial proper ideals.
If $\mathcal{A}$ is commutative and $\mathcal{I}$ is maximal, then it follows that every nonzero element of $\mathcal{A} / \mathcal{I}$ is invertible in the quotient. If $\mathcal{A}$ is a Banach algebra and $\mathcal{I}$ is a proper closed ideal in $\mathcal{A}$, then $\mathcal{A} / \mathcal{I}$ is also a Banach algebra, with respect to the usual quotient norm. If $\mathcal{A}$ is a complex commutative Banach algebra and $\mathcal{I}$ is a maximal ideal in $\mathcal{A}$, then it follows that $\mathcal{A} / \mathcal{I}$ is isomorphic to the complex numbers. This implies that every maximal ideal in a commutative complex Banach algebra $\mathcal{A}$ is the kernel of a homomorphism from $\mathcal{A}$ onto the complex numbers. If $\mathcal{A}$ is a commutative complex Banach algebra and $a \in \mathcal{A}$ is not invertible, then $a$ is contained in a maximal ideal in $\mathcal{A}$, and hence there is a nonzero homomorphism $\phi : \mathcal{A} \to {\bf C}$ such that $\phi(a) = 0$. \section{Ideals and filters} \label{ideals, filters} \setcounter{equation}{0} Let $E$ be a nonempty set, and let $\mathcal{A}$ be the algebra of all real or complex-valued functions on $E$. Put \begin{equation} Z(f) = \{x \in E : f(x) = 0\} \end{equation} for each $f \in \mathcal{A}$. Thus \begin{equation} Z(f) \cap Z(g) \subseteq Z(f + g) \end{equation} and \begin{equation} Z(f) \cup Z(g) = Z(f \, g) \end{equation} for every $f, g \in \mathcal{A}$. If $f$, $g$ are nonnegative real-valued functions on $E$, then \begin{equation} Z(f) \cap Z(g) = Z(f + g). \end{equation} If $\mathcal{F}$ is a filter on $E$, then put \begin{equation} \mathcal{I}(\mathcal{F}) = \{f \in \mathcal{A} : Z(f) \in \mathcal{F}\}. \end{equation} It is easy to see that this is an ideal in $\mathcal{A}$, using the properties of the zero sets of sums and products of functions mentioned in the previous paragraph. More precisely, $\mathcal{I}(\mathcal{F})$ is a proper ideal in $\mathcal{A}$, since the elements of a filter are nonempty sets. As a special case, suppose that $A \subseteq E$ is not empty, and let $\mathcal{F}^A$ be the collection of subsets $B$ of $E$ such that $A \subseteq B$. This is a filter on $E$, and the corresponding ideal $\mathcal{I}(\mathcal{F}^A)$ is the same as \begin{equation} \mathcal{I}_A = \{f \in \mathcal{A} : f(x) = 0 \hbox{ for every } x \in A\}. \end{equation} In this case, the quotient $\mathcal{A} / \mathcal{I}_A$ can be identified with the algebra of real or complex-valued functions on $A$, as appropriate. In particular, if $A$ consists of a single point, then the quotient is isomorphic to the real or complex numbers, as appropriate. Conversely, if $\mathcal{I}$ is a proper ideal in $\mathcal{A}$, then put \begin{equation} \mathcal{F}(\mathcal{I}) = \{Z(f) : f \in \mathcal{I}\}. \end{equation} It is easy to check that this is a filter on $E$. In connection with this, note that \begin{equation} Z(|f|) = Z(f) \end{equation} for every $f \in \mathcal{A}$, and that $|f| \in \mathcal{I}$ when $f \in \mathcal{I}$. This implies that $\mathcal{F}(\mathcal{I})$ is the same as the collection of zero sets of nonnegative real-valued functions on $E$ in $\mathcal{I}$. Observe also that \begin{equation} \mathcal{F}(\mathcal{I}(\mathcal{F})) = \mathcal{F} \end{equation} for every filter $\mathcal{F}$ on $E$, and that \begin{equation} \mathcal{I}(\mathcal{F}(\mathcal{I})) = \mathcal{I} \end{equation} for every proper ideal $\mathcal{I}$ in $\mathcal{A}$. 
This shows that every proper ideal $\mathcal{I}$ in $\mathcal{A}$ is of the form $\mathcal{I}(\mathcal{F})$ for some filter $\mathcal{F}$ on $E$. If $\mathcal{F}$, $\mathcal{F}'$ are filters on $E$, then it is easy to see that \begin{equation} \mathcal{I}(\mathcal{F}) \subseteq \mathcal{I}(\mathcal{F}') \end{equation} if and only if $\mathcal{F} \subseteq \mathcal{F}'$, which is to say that $\mathcal{F}'$ is a refinement of $\mathcal{F}$. It follows that ultrafilters on $E$ correspond exactly to maximal ideals in $\mathcal{A}$. In particular, if $\mathcal{F}$ is an ultrafilter on $E$, then $\mathcal{A} / \mathcal{I}(\mathcal{F})$ is a field. One can also see this more directly, as follows. Suppose that $f \in \mathcal{A}$ and $f \not\in \mathcal{I}(\mathcal{F})$, so that the element of the quotient $\mathcal{A} / \mathcal{I}(\mathcal{F})$ corresponding to $f$ is not zero. Thus $Z(f) \not\in \mathcal{F}$, by definition of $\mathcal{I}(\mathcal{F})$, and so $E \backslash Z(f) \in \mathcal{F}$, because $\mathcal{F}$ is an ultrafilter. If $g \in \mathcal{A}$ satisfies $f(x) \, g(x) = 1$ for every $x \in E \backslash Z(f)$, then $f \, g - 1 \in \mathcal{I}(\mathcal{F})$, which means that the product of the elements of the quotient $\mathcal{A} / \mathcal{I}(\mathcal{F})$ corresponding to $f$, $g$ is equal to the multiplicative identity element in the quotient, as desired. \section{Closure} \label{closure} \setcounter{equation}{0} Let $X$ be a topological space, and remember that $C_b(X)$ is the algebra of bounded continuous real or complex-valued functions on $X$, equipped with the supremum norm. Put \begin{equation} \label{Z_epsilon(f) = {x in X : |f(x)| le epsilon}} Z_\epsilon(f) = \{x \in X : |f(x)| \le \epsilon\} \end{equation} for every $f \in C_b(X)$ and $\epsilon > 0$, which is a closed set in $X$, since $f$ is continuous. Note that $Z_\epsilon(f) = \emptyset$ for some $\epsilon > 0$ if and only if $f$ is invertible in $C_b(X)$. If $\mathcal{F}$ is a filter on $X$, then let $\mathcal{I}(\mathcal{F})$ be the collection of $f \in C_b(X)$ such that $f_*(\mathcal{F})$ converges to $0$ in ${\bf R}$ or ${\bf C}$, as appropriate. Equivalently, \begin{equation} \mathcal{I}(\mathcal{F}) = \{f \in C_b(X) : Z_\epsilon(f) \in \mathcal{F} \hbox{ for every } \epsilon > 0\}. \end{equation} This is analogous to, but different from, the definition in the previous section. It is not difficult to check that $\mathcal{I}(\mathcal{F})$ is a proper closed ideal in $C_b(X)$ under these conditions. This uses the fact that \begin{equation} Z_{\epsilon/2}(f) \cap Z_{\epsilon/2}(g) \subseteq Z_\epsilon(f + g) \end{equation} for every $f, g \in C_b(X)$ and $\epsilon > 0$, and that \begin{equation} Z_{\epsilon/k}(f) \subseteq Z_\epsilon(f \, g) \end{equation} when $|g(x)| \le k$ for each $x \in X$ and $k > 0$. Let $\overline{\mathcal{F}}$ be the collection of subsets $B$ of $X$ for which there is an $A \in \mathcal{F}$ such that $\overline{A} \subseteq B$. One can check that $\overline{\mathcal{F}}$ is also a filter on $X$, and that $\mathcal{I}(\overline{\mathcal{F}}) = \mathcal{I}(\mathcal{F})$. Thus one might as well restrict one's attention to filters on $X$ generated by closed subsets of $X$. As a special case, if $A \subseteq X$ is nonempty and $\mathcal{F}^A$ is the filter consisting of $B \subseteq X$ such that $A \subseteq B$, then $\overline{\mathcal{F}^A} = \mathcal{F}^{\overline{A}}$. 
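As a simple illustration of these definitions, suppose that $A$ is a nonempty subset of $X$, and let $\mathcal{F}^A$ be the filter just mentioned, consisting of the $B \subseteq X$ such that $A \subseteq B$. In this case, $Z_\epsilon(f) \in \mathcal{F}^A$ for every $\epsilon > 0$ if and only if $|f(x)| \le \epsilon$ for every $x \in A$ and $\epsilon > 0$, so that \begin{equation} \mathcal{I}(\mathcal{F}^A) = \{f \in C_b(X) : f(x) = 0 \hbox{ for every } x \in A\}. \end{equation} By continuity, this is the same as the collection of $f \in C_b(X)$ such that $f(x) = 0$ for every $x \in \overline{A}$, which is consistent with the facts that $\overline{\mathcal{F}^A} = \mathcal{F}^{\overline{A}}$ and $\mathcal{I}(\overline{\mathcal{F}^A}) = \mathcal{I}(\mathcal{F}^A)$.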
Now let $\mathcal{I}$ be a proper ideal in $C_b(X)$, and put \begin{equation} \mathcal{F}(\mathcal{I}) = \{A \subseteq X : Z_\epsilon(f) \subseteq A \hbox{ for some } f \in \mathcal{I} \hbox{ and } \epsilon > 0\}. \end{equation} Again this is analogous to, but different from, the corresponding definition in the previous section. One can also check that $\mathcal{F}(\mathcal{I})$ is a filter on $X$ under these conditions. This uses the fact that $Z_\epsilon(f) \ne \emptyset$ for each $f \in \mathcal{I}$ and $\epsilon > 0$, because $\mathcal{I}$ is proper. If $f \in \mathcal{I}$, then $|f|^2 \in \mathcal{I}$, and \begin{equation} Z_{\epsilon^2}(|f|^2) = Z_\epsilon(f), \end{equation} which means that one can restrict one's attention to nonnegative real-valued functions in $\mathcal{I}$. If $f$, $g$ are nonnegative real-valued functions on $X$ and $\epsilon > 0$, then \begin{equation} Z_\epsilon(f + g) \subseteq Z_\epsilon(f) \cap Z_\epsilon(g). \end{equation} This implies that $A \cap B \in \mathcal{F}(\mathcal{I})$ for every $A, B \in \mathcal{F}(\mathcal{I})$. Note that $\mathcal{F}(\mathcal{I})$ is automatically generated by closed subsets of $X$. One can also check that $\mathcal{F}(\mathcal{I})$ is the same as the filter associated to the closure of $\mathcal{I}$ in $C_b(X)$, with respect to the supremum norm. This uses the fact that \begin{equation} Z_{\epsilon/2}(f) \subseteq Z_\epsilon(g) \end{equation} when $|f(x) - g(x)| \le \epsilon/2$ for every $x \in X$. By construction, $\mathcal{I} \subseteq \mathcal{I}(\mathcal{F}(\mathcal{I}))$. We have seen that $\mathcal{I}(\mathcal{F})$ is closed in $C_b(X)$ for any filter $\mathcal{F}$ on $X$, and so $\overline{\mathcal{I}} \subseteq \mathcal{I}(\mathcal{F}(\mathcal{I}))$. In order to show that \begin{equation} \overline{\mathcal{I}} = \mathcal{I}(\mathcal{F}(\mathcal{I})), \end{equation} let $f \in \mathcal{I}(\mathcal{F}(\mathcal{I}))$ and $\epsilon > 0$ be given. By definition of $\mathcal{I}(\mathcal{F}(\mathcal{I}))$, there are a $g \in \mathcal{I}$ and a $\delta > 0$ such that \begin{equation} Z_\delta(g) \subseteq Z_\epsilon(f). \end{equation} Put \begin{equation} f_\eta = f \, \frac{|g|^2}{|g|^2 + \eta^2} \end{equation} for each $\eta > 0$. Thus $f_\eta \in C_b(X)$ for each $\eta$, and in fact $f_\eta \in \mathcal{I}$, because $g \in \mathcal{I}$. We would like to check that \begin{equation} |f(x) - f_\eta(x)| = |f(x)| \, \frac{\eta^2}{|g(x)|^2 + \eta^2} \le \epsilon \end{equation} for every $x \in X$ when $\eta$ is sufficiently small. If $x \in Z_\epsilon(f)$, then this holds for every $\eta > 0$, since $|f(x)| \le \epsilon$ and $\eta^2/(|g(x)|^2 + \eta^2) \le 1$. If $x \not\in Z_\epsilon(f)$, then $x \not\in Z_\delta(g)$, so that $|g(x)| > \delta$, and the desired estimate holds when $\eta$ is sufficiently small, because $f$ is bounded. \section{Regular topological spaces} \label{regular spaces} \setcounter{equation}{0} Remember that a topological space $X$ is said to be \emph{regular}, or equivalently to satisfy the third separation condition, if it has the following two properties. First, $X$ should satisfy the first separation condition, so that subsets of $X$ with exactly one element are closed. Second, for each $x \in X$ and closed set $E \subseteq X$ with $x \not\in E$, there should be disjoint open subsets $U$, $V$ of $X$ such that $x \in U$ and $E \subseteq V$. In particular, this implies that $X$ is Hausdorff, since one can take $E = \{y\}$ when $y \in X$ and $y \ne x$. 
Sometimes the term ``regular'' is used for topological spaces with the second property just mentioned, and then the third separation condition is defined to be the combination of regularity with the first separation condition. We shall include the first separation condition in the definition of regularity here for the sake of simplicity. As in Section \ref{sigma-compactness}, it is well known that locally compact Hausdorff topological spaces are regular. Equivalently, $X$ is regular if it satisfies the first separation condition and for each $x \in X$ and open set $W \subseteq X$ with $x \in W$ there is an open set $U \subseteq X$ such that $x \in U$ and $\overline{U} \subseteq W$. This corresponds to the previous definition with $W = X \backslash E$. Let $\mathcal{F}$ be a filter on $X$, and let $\overline{\mathcal{F}}$ be the filter on $X$ generated by the closures of the elements of $\mathcal{F}$, as in the previous section. If $\mathcal{F}$ converges to a point $x \in X$ and $X$ is regular, then it is easy to see that $\overline{\mathcal{F}}$ also converges to $x$ in $X$. For if $W$ is an open set in $X$ that contains $x$ and $U$ is an open set in $X$ that contains $x$ and satisfies $\overline{U} \subseteq W$, then $U \in \mathcal{F}$, because $\mathcal{F}$ converges to $x$, and hence $W \in \overline{\mathcal{F}}$. Now let $x \in X$ be given, and let $\mathcal{F}(x)$ be the collection of subsets $A$ of $X$ for which there is an open set $U \subseteq X$ such that $x \in U$ and $U \subseteq A$. This is a filter on $X$ that converges to $x$, by construction. The corresponding filter $\overline{\mathcal{F}(x)}$, generated by the closures of the elements of $\mathcal{F}(x)$, is the same as the collection of subsets $B$ of $X$ for which there is an open set $U \subseteq X$ such that $x \in U$ and $\overline{U} \subseteq B$. If $\overline{\mathcal{F}(x)}$ converges to $x$, then for each open set $W \subseteq X$ with $x \in W$ there is an open set $U \subseteq X$ such that $x \in U$ and $\overline{U} \subseteq W$. It follows that $X$ is regular if it satisfies the first separation condition and $\overline{\mathcal{F}(x)}$ converges to $x$ for every $x \in X$. Of course, metric spaces are regular as topological spaces. Real and complex topological vector spaces are also regular as topological spaces. To see this, remember that if $U$ is an open set in a topological vector space $V$ that contains $0$, then there are open subsets $U_1$, $U_2$ of $V$ that contain $0$ and satisfy \begin{equation} U_1 + U_2 \subseteq U, \end{equation} as in Section \ref{topological vector spaces, continued}. Moreover, \begin{equation} \overline{U_1} \subseteq U_1 + U_2, \end{equation} as in (\ref{overline{A} subseteq A + U}). Hence $\overline{U_1} \subseteq U$, which implies that $V$ is regular, because of the translation-invariance of the topology on $V$. \section{Closed sets} \label{closed sets} \setcounter{equation}{0} Let $X$ be a topological space, and let us say that a nonempty collection $\mathcal{E}$ of nonempty closed subsets of $X$ is a \emph{C-filter} if $A \cap B \in \mathcal{E}$ for every $A, B \in \mathcal{E}$, and if $E \in \mathcal{E}$ whenever $E \subseteq X$ is a closed set such that $A \subseteq E$ for some $A \in \mathcal{E}$. This is the same as a filter on $X$, except that we restrict our attention to closed subsets of $X$. If $\mathcal{F}$ is a filter on $X$ and $\mathcal{E}(\mathcal{F})$ is the collection of closed subsets of $X$ that are elements of $\mathcal{F}$, then $\mathcal{E}(\mathcal{F})$ is a C-filter.
This can also be described as the collection of closures of elements of $\mathcal{F}$, since the closure of an element of $\mathcal{F}$ is a closed set in $X$ that is contained in $\mathcal{F}$. A C-filter $\mathcal{E}$ on $X$ also generates an ordinary filter $\mathcal{F}(\mathcal{E})$ on $X$, consisting of the subsets $B$ of $X$ that contain an element of $\mathcal{E}$ as a subset. If $\mathcal{F}$ is any filter on $X$, and $\mathcal{E}(\mathcal{F})$ is the C-filter obtained from it as in the preceding paragraph, then the filter generated by $\mathcal{E}(\mathcal{F})$ is the same as the filter $\overline{\mathcal{F}}$ defined previously. However, if $\mathcal{E}$ is any C-filter on $X$, and $\mathcal{F}(\mathcal{E})$ is the ordinary filter generated by $\mathcal{E}$, then the C-filter of closed sets in $\mathcal{F}(\mathcal{E})$ is the same as $\mathcal{E}$. Let us say that a C-filter $\mathcal{E}$ on $X$ converges to a point $x \in X$ if for every open set $U \subseteq X$ with $x \in U$ there is an $E \in \mathcal{E}$ such that $E \subseteq U$. This is equivalent to saying that $U \in \mathcal{F}(\mathcal{E})$ for every open set $U \subseteq X$ with $x \in U$, so that $\mathcal{E}$ converges to $x$ if and only if $\mathcal{F}(\mathcal{E})$ converges to $x$. If $X$ is Hausdorff, then the limit of a convergent C-filter on $X$ is unique, for the same reasons as for ordinary filters. If $\mathcal{F}$ is an ordinary filter on $X$ that converges to a point $x \in X$ and $X$ is regular, then the corresponding C-filter $\mathcal{E}(\mathcal{F})$ also converges to $x$, for the same reasons as in the preceding section. Let $A$ be a nonempty subset of $X$, and let $\mathcal{E}^A$ be the collection of closed sets $B \subseteq X$ that contain $A$. This is a C-filter on $X$, and it is easy to see that $\mathcal{E}^{\overline{A}} = \mathcal{E}^A$ for every $A \subseteq X$. Note that $A \in \mathcal{E}^A$ if and only if $A$ is a closed set in $X$. If $A = \{p\}$ for some $p \in X$ and $X$ satisfies the first separation condition, then $\{p\} \in \mathcal{E}^A$, and $\mathcal{E}^A$ converges to $p$ in $X$. Suppose that $\mathcal{E}$ is a C-filter on $X$ that converges to a point $p \in X$, and let $A \in \mathcal{E}$ be given. If $U$ is an open set in $X$ that contains $p$, then there is an $E \in \mathcal{E}$ such that $E \subseteq U$, by definition of convergence. This implies that $A \cap U \ne \emptyset$, because $A \cap E$ is contained in $A \cap U$ and nonempty, since it is an element of $\mathcal{E}$. It follows that $p \in A$ for every $A \in \mathcal{E}$, because every $A \in \mathcal{E}$ is a closed set in $X$. Let $\mathcal{E}$ be a C-filter on $X$, and suppose that $B \subseteq X$ satisfies $A \cap B \ne \emptyset$ for every $A \in \mathcal{E}$. Let $\mathcal{E}_B$ be the collection of closed subsets $E$ of $X$ such that $A \cap B \subseteq E$ for some $A \in \mathcal{E}$. It is easy to see that this is also a C-filter on $X$, which is a refinement of $\mathcal{E}$ in the sense that $\mathcal{E} \subseteq \mathcal{E}_B$ as collections of subsets of $X$. If $B$ is a closed set in $X$, then $A \cap B \in \mathcal{E}_B$ for every $A \in \mathcal{E}$. A C-filter $\mathcal{E}$ on $X$ may be described as a \emph{C-ultrafilter} if it is maximal with respect to inclusion. More precisely, $\mathcal{E}$ is a C-ultrafilter if for every C-filter $\mathcal{E}'$ such that $\mathcal{E} \subseteq \mathcal{E}'$, we have that $\mathcal{E} = \mathcal{E}'$. 
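For instance, if $X$ is the real line with its standard topology, then the collection of all closed sets $E \subseteq {\bf R}$ such that $[0, 1] \subseteq E$ is a C-filter on ${\bf R}$, but it is not a C-ultrafilter, because it is properly contained in the C-filter consisting of all closed sets $E \subseteq {\bf R}$ such that $0 \in E$.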
Using Zorn's lemma or the Hausdorff maximality principle, one can show that every C-filter has a refinement which is a C-ultrafilter, just as for ordinary ultrafilters. For each $p \in X$, let $\mathcal{E}_p$ be the C-filter consisting of all closed subsets of $X$ that contain $p$ as an element. This is the same as $\mathcal{E}^A$ with $A = \{p\}$, as before. If $X$ satisfies the first separation condition, then $\{p\}$ is a closed set in $X$, $\{p\} \in \mathcal{E}_p$, and it is easy to see that $\mathcal{E}_p$ is a C-ultrafilter on $X$. If $\mathcal{E}$ is any C-filter on $X$ and $p \in E$ for each $E \in \mathcal{E}$, then $\mathcal{E} \subseteq \mathcal{E}_p$, and hence $\mathcal{E} = \mathcal{E}_p$ when $\mathcal{E}$ is a $C$-ultrafilter. In particular, this holds when $\mathcal{E}$ converges to $p$. If $\mathcal{E}$ is a C-filter on $X$ and $X$ is compact, then $\bigcap_{E \in \mathcal{E}} E \ne \emptyset$, because $\mathcal{E}$ has the finite intersection property. If $\mathcal{E}$ is a C-ultrafilter, then it follows that $\mathcal{E} = \mathcal{E}_p$ for some $p \in X$. Let $\mathcal{E}$ be a C-filter on $X$, and suppose that $B$ is a closed set in $X$ such that $A \cap B \ne \emptyset$ for every $A \in \mathcal{E}$. This implies that $\mathcal{E} \subseteq \mathcal{E}_B$, where $\mathcal{E}_B$ is the C-filter generated by the intersections $A \cap B$ with $A \in \mathcal{E}$, as before. If $\mathcal{E}$ is a C-ultrafilter, then it follows that $\mathcal{E} = \mathcal{E}_B$, and hence $B \in \mathcal{E}$. Conversely, a C-filter $\mathcal{E}$ is a C-ultrafilter when $B \in \mathcal{E}$ for every closed set $B \subseteq X$ such that $A \cap B \ne \emptyset$ for every $A \in \mathcal{E}$. For if $\mathcal{E}'$ is a C-filter on $X$ such that $\mathcal{E} \subseteq \mathcal{E}'$, then $A \cap B$ is contained in $\mathcal{E}'$ and is therefore nonempty for every $A \in \mathcal{E}$ and $B \in \mathcal{E}'$. Let $X$, $Y$ be topological spaces, and let $f$ be a continuous mapping from $X$ into $Y$. Thus $f^{-1}(B)$ is a closed set in $X$ for every closed set $B$ in $Y$. Also let $\mathcal{E}$ be a C-filter on $X$, and let $f_*(\mathcal{E})$ be the collection of closed sets $B \subseteq Y$ such that $f^{-1}(B) \in \mathcal{E}$. It is easy to see that $f_*(\mathcal{E})$ is a C-filter on $Y$. Note that the closure of $f(A)$ in $Y$ is an element of $f_*(\mathcal{E})$ for each $A \in \mathcal{E}$. Suppose that $Y$ is compact, so that $\bigcap_{B \in f_*(\mathcal{E})} B \ne \emptyset$, and let $q$ be an element of the intersection. Thus $q$ is contained in the closure of $f(A)$ in $Y$ for every $A \in \mathcal{E}$. If $V$ is any open set in $Y$ that contains $q$, then $f(A) \cap V \ne \emptyset$ for every $A \in \mathcal{E}$, and hence $A \cap f^{-1}(V) \ne \emptyset$. Let $\mathcal{E}'$ be the collection of closed sets $E$ in $X$ such that $A \cap f^{-1}(V) \subseteq E$ for some $A \in \mathcal{E}$ and open set $V \subseteq Y$ with $q \in V$. It is easy to see that $\mathcal{E}'$ is a C-filter on $X$ that is a refinement of $\mathcal{E}$, and that $A \cap f^{-1}(\overline{V}) \in \mathcal{E}'$ for every open set $V \subseteq Y$ with $q \in V$. In particular, $f^{-1}(\overline{V}) \in \mathcal{E}'$ under these conditions, which means that $\overline{V} \in f_*(\mathcal{E}')$. If $Y$ is also Hausdorff, and hence regular, then it follows that $f_*(\mathcal{E}')$ converges to $q$ in $Y$. 
If $\mathcal{E}$ is a C-ultrafilter on $X$, then $\mathcal{E} = \mathcal{E}'$, and $f_*(\mathcal{E})$ converges to $q$ in $Y$. \section{Multi-indices} \label{multi-indices} \setcounter{equation}{0} Let $n$ be a positive integer, which will be kept fixed throughout this section. A \emph{multi-index} $\alpha = (\alpha_1, \ldots, \alpha_n)$ is an $n$-tuple of nonnegative integers. The sum of two multi-indices is defined coordinatewise, and we put \begin{equation} |\alpha| = \alpha_1 + \cdots + \alpha_n. \end{equation} If $\alpha$ is a multi-index and $x = (x_1, \ldots, x_n) \in {\bf R}^n$, then the corresponding monomial $x^\alpha$ is defined by the product \begin{equation} \label{x^alpha} x^\alpha = x_1^{\alpha_1} \cdots x_n^{\alpha_n}. \end{equation} More precisely, $x_j^{\alpha_j}$ is interpreted as being equal to $1$ for every $x_j \in {\bf R}$ when $\alpha_j = 0$, so that $x^\alpha = 1$ for every $x \in {\bf R}^n$ when $\alpha = 0$. Note that $|\alpha|$ is the same as the degree of the monomial $x^\alpha$, and a polynomial on ${\bf R}^n$ is the same as a linear combination of finitely many monomials. Moreover, \begin{equation} x^{\alpha + \beta} = x^\alpha \, x^\beta \end{equation} for all multi-indices $\alpha$, $\beta$ and $x \in {\bf R}^n$. If $l$ is a positive integer, then $l!$ is ``$l$ factorial'', the product of $1, \ldots, l$. It is customary to include $l = 0$ by setting $0! = 1$. If $\alpha$ is a multi-index, then we put \begin{equation} \alpha! = \alpha_1! \cdots \alpha_n!. \end{equation} If $\alpha$ is a multi-index and $x, y \in {\bf R}^n$, then \begin{equation} (x + y)^\alpha = \sum_{\alpha = \beta + \gamma} \frac{\alpha!}{\beta! \, \gamma!} \, x^\beta \, y^\gamma, \end{equation} where the sum is taken over all multi-indices $\beta$, $\gamma$ such that $\alpha = \beta + \gamma$. This follows from the binomial theorem applied to $(x_j + y_j)^{\alpha_j}$ for $j = 1, \ldots, n$. Let $\partial_j = \partial / \partial x_j$ be the usual partial derivative in $x_j$, $1 \le j \le n$. If $\alpha$ is a multi-index, then the corresponding differential operator $\partial^\alpha$ is defined by \begin{equation} \label{partial^alpha} \partial^\alpha = \partial_1^{\alpha_1} \cdots \partial_n^{\alpha_n}. \end{equation} Here $\partial_j^{\alpha_j}$ is interpreted as being the identity operator when $\alpha_j = 0$, so that $\partial^\alpha$ reduces to the identity operator when $\alpha = 0$. Observe that \begin{equation} \partial^{\alpha + \beta} = \partial^\alpha \, \partial^\beta \end{equation} for all multi-indices $\alpha$, $\beta$. \section{Smooth functions} \label{smooth functions} \setcounter{equation}{0} Let $U$ be a nonempty open set in ${\bf R}^n$ for some positive integer $n$, and let $C^\infty(U)$ be the space of real or complex-valued functions on $U$ that are smooth in the sense that they are continuously-differentiable of all orders. As usual, this may also be denoted $C^\infty(U, {\bf R})$ or $C^\infty(U, {\bf C})$, to indicate whether real or complex-valued functions are being used. It is well known that $C^\infty(U)$ is a commutative algebra with respect to pointwise addition and multiplication, since sums and products of smooth functions are smooth. If $\alpha$ is a multi-index and $K \subseteq U$ is a nonempty compact set, then \begin{equation} \label{||f||_{alpha, K} = sup_{x in K} |partial^alpha f(x)|} \|f\|_{\alpha, K} = \sup_{x \in K} |\partial^\alpha f(x)| \end{equation} defines a seminorm on $C^\infty(U)$. 
This is the same as the supremum seminorm $\|f\|_K$ of $f$ over $K$ when $\alpha = 0$, and otherwise this is the same as the supremum seminorm of $\partial^\alpha f$ over $K$. The collection of all of these seminorms defines a topology on $C^\infty(U)$, as in Section \ref{seminorms, topologies}. Of course, $U$ is a locally compact Hausdorff topological space with respect to the topology induced by the standard topology on ${\bf R}^n$, and one can also check that $U$ is $\sigma$-compact. As in Section \ref{locally compact spaces}, there is a sequence of compact subsets $K_1, K_2, \ldots$ of $U$ such that every compact set $H \subseteq U$ is contained in $K_l$ for some $l$. It follows that the seminorms $\|f\|_{\alpha, K_l}$ are sufficient to determine the same topology on $C^\infty(U)$ as the one that was just described, where $\alpha$ is a multi-index and $l$ is a positive integer. In particular, this collection of seminorms on $C^\infty(U)$ is countable, since there are only countably many multi-indices. If $f$, $g$ are smooth functions on $U$ and $\alpha$ is a multi-index, then \begin{equation} \label{partial^alpha (f g) = ...} \partial^\alpha (f \, g) = \sum_{\alpha = \beta + \gamma} \frac{\alpha!}{\beta! \, \gamma!} \, (\partial^\beta f) \, (\partial^\gamma g), \end{equation} where the sum is taken over all multi-indices $\beta$, $\gamma$ such that $\alpha = \beta + \gamma$. This can be derived from the usual product rule for first derivatives, starting with the $n = 1$ case. Using this identity, it is easy to check that multiplication of functions is continuous as a mapping from $C^\infty(U) \times C^\infty(U)$ into $C^\infty(U)$, with respect to the topology on $C^\infty(U)$ defined in the previous paragraph. Let $\phi$ be a homomorphism from $C^\infty(U)$ into the real or complex numbers, as appropriate. As usual, we suppose also that $\phi$ is nontrivial in the sense that $\phi(f) \ne 0$ for some $f \in C^\infty(U)$. This implies that $\phi({\bf 1}_U) = 1$, where ${\bf 1}_U$ is the constant function on $U$ equal to $1$ at every point. If $f$ is a smooth function on $U$ such that $f(x) \ne 0$ for every $x \in U$, then $1/f(x)$ is also a smooth function on $U$, and it follows that \begin{equation} \phi(f) \, \phi(1/f) = \phi({\bf 1}_U) = 1. \end{equation} In particular, $\phi(f) \ne 0$ when $f(x) \ne 0$ for every $x \in U$. Equivalently, if $f$ is any smooth function on $U$ and $\phi(f) = 0$, then $f(x) = 0$ for some $x \in U$. If $f$ is any smooth function on $U$ and $\phi(f) = c$, then there is an $x \in U$ such that $f(x) = c$, since one can apply the previous statement to $f - c \, {\bf 1}_U$. Let $f_j$ be the smooth function on $U$ defined by $f_j(x) = x_j$, $j = 1, \ldots, n$, and put $p_j = \phi(f_j)$. We would like to check that \begin{equation} p = (p_1, \ldots, p_n) \in U. \end{equation} Consider the smooth function on $U$ given by \begin{equation} f(x) = \sum_{j = 1}^n (x_j - p_j) \, (x_j - \overline{p_j}) = \sum_{j = 1}^n |x_j - p_j|^2, \end{equation} where $\overline{p_j} = p_j$ in the case of real-valued functions, so that this reduces to $\sum_{j = 1}^n (x_j - p_j)^2$. Equivalently, \begin{equation} f = \sum_{j = 1}^n (f_j - p_j \, {\bf 1}_U) \, (f_j - \overline{p_j} \, {\bf 1}_U), \end{equation} and so \begin{equation} \phi(f) = \sum_{j = 1}^n (\phi(f_j) - p_j) \, (\phi(f_j) - \overline{p_j}) = 0, \end{equation} because $\phi$ is a homomorphism. Hence $f(x) = 0$ for some $x \in U$, as in the previous paragraph, which is only possible if $x_j = p_j$ for each $j$. In particular, each $p_j$ is a real number, and $p \in U$.
If $g$ is a smooth function on $U$ and $U$ is convex, then \begin{eqnarray} g(x) - g(p) & = & \int_0^1 (\partial / \partial t) g(t \, x + (1 - t) \, p) \, dt \\ & = & \sum_{j = 1}^n (x_j - p_j) \, \int_0^1 (\partial_j g)(t \, x + (1 - t) \, p) \, dt. \nonumber \end{eqnarray} Hence there are smooth functions $g_1, \ldots, g_n$ on $U$ such that \begin{equation} \label{g(x) = g(p) + sum_{j = 1}^n (x_j - p_j) g_j(x)} g(x) = g(p) + \sum_{j = 1}^n (x_j - p_j) \, g_j(x). \end{equation} This also works when $g$ is the restriction to $U$ of a smooth function on a convex open set that contains $U$, such as ${\bf R}^n$ itself. In particular, this works when $g(x) = 0$ on the complement of a closed ball contained in $U$. Otherwise, if $g(x) = 0$ for every $x$ in a neighborhood of $p$, then we can simply take $g_j(x)$ to be $(x_j - p_j) / |x - p|^2$ times $g(x)$, where $|x - p|^2 = \sum_{l = 1}^n (x_l - p_l)^2$, and $g_j(p) = 0$. Any smooth function on $U$ can be expressed as the sum of a smooth function supported on a closed ball in $U$ and a smooth function that vanishes on a neighborhood of $p$, using standard cut-off functions. It follows that every smooth function $g$ on $U$ can be expressed as in (\ref{g(x) = g(p) + sum_{j = 1}^n (x_j - p_j) g_j(x)}) for some smooth functions $g_1, \ldots, g_n$ on $U$. Using this representation, we get that $\phi(g) = g(p)$ for every $g \in C^\infty(U)$. Of course, $\phi_p(g) = g(p)$ defines a homomorphism on $C^\infty(U)$ for every $p \in U$. \section{Polynomials} \label{polynomials} \setcounter{equation}{0} Let $\mathcal{P}({\bf R}^n)$ be the space of polynomials on ${\bf R}^n$ with real coefficients, which can be expressed as finite linear combinations of the monomials $x^\alpha$, where $\alpha$ is a multi-index. This is an algebra in a natural way, corresponding to pointwise addition and multiplication of functions. If $p \in {\bf R}^n$, then $\phi_p(f) = f(p)$ defines a homomorphism on $\mathcal{P}({\bf R}^n)$, as usual. Conversely, if $\phi$ is a homomorphism on $\mathcal{P}({\bf R}^n)$ which is not identically $0$, then $\phi = \phi_p$ for some $p \in {\bf R}^n$. As in the previous section, $p = (p_1, \ldots, p_n)$ is given by $p_j = \phi(f_j)$, where $f_j(x) = x_j$. In this case, the fact that $\phi(f) = f(p)$ for every polynomial $f$ on ${\bf R}^n$ follows from simple algebra. There are analogous statements for polynomials on ${\bf C}^n$ with complex coefficients, which can be expressed as finite linear combinations of monomials $z^\alpha = z_1^{\alpha_1} \cdots z_n^{\alpha_n}$, $z = (z_1, \ldots, z_n) \in {\bf C}^n$. \section{Continuously-differentiable functions} \label{C^1 functions} \setcounter{equation}{0} A real or complex-valued function $f$ on the closed unit interval $[0, 1]$ is said to be \emph{continuously differentiable} if it satisfies the following three conditions. First, the derivative $f'(x)$ of $f$ should exist at every $x$ in the open unit interval $(0, 1)$. Second, the appropriate one-sided derivatives should exist at the endpoints $0$, $1$, which will also be denoted $f'(0)$, $f'(1)$ for simplicity. Third, the resulting function $f'(x)$ should be continuous on $[0, 1]$. Of course, differentiability of $f$ implies that $f$ is continuous on $[0, 1]$. Equivalently, a continuous function $f$ on $[0, 1]$ is continuously differentiable if it is differentiable on $(0, 1)$, and if the derivative can be extended to a continuous function on $[0, 1]$, also denoted $f'$. 
More precisely, one can check that the one-sided derivatives of $f$ exist at the endpoints, and are given by the extension of $f'$ to $0$, $1$. This follows from the fact that \begin{equation} f(y) - f(x) = \int_x^y f'(t) \, dt \end{equation} when $0 \le x \le y \le 1$. The space of continuously-differentiable functions on $[0, 1]$ may be denoted $C^1([0, 1])$, or by $C^1([0, 1], {\bf R})$, $C^1([0, 1], {\bf C})$ to indicate whether real or complex-valued functions are being used. As usual, $C^1([0, 1])$ is an algebra with respect to pointwise addition and multiplication of functions. If \begin{equation} \|f\|_{sup} = \sup_{0 \le x \le 1} |f(x)| \end{equation} is the supremum norm of a bounded function on $[0, 1]$, then \begin{equation} \|f\|_{C^1} = \|f\|_{C^1([0, 1])} = \|f\|_{sup} + \|f'\|_{sup} \end{equation} is a natural choice of norm on $C^1([0, 1])$. In particular, \begin{equation} \|f \, g\|_{C^1} \le \|f\|_{C^1} \, \|g\|_{C^1} \end{equation} for every $f, g \in C^1([0, 1])$. To see this, remember that \begin{equation} \|f \, g\|_{sup} \le \|f\|_{sup} \, \|g\|_{sup}, \end{equation} so that \begin{equation} \|f \, g\|_{C^1} = \|f \, g\|_{sup} + \|(f \, g)'\|_{sup} \le \|f\|_{sup} \, \|g\|_{sup} + \|(f \, g)'\|_{sup}. \end{equation} The product rule implies that \begin{equation} \|(f \, g)'\|_{sup} = \|f' \, g + f \, g'\|_{sup} \le \|f'\|_{sup} \|g\|_{sup} + \|f\|_{sup} \, \|g'\|_{sup}, \end{equation} and hence \begin{eqnarray} \|f \, g\|_{C^1} & \le & \|f\|_{sup} \, \|g\|_{sup} + \|f'\|_{sup} \, \|g\|_{sup} + \|f\|_{sup} \, \|g'\|_{sup} \\ & \le & (\|f\|_{sup} + \|f'\|_{sup}) \, (\|g\|_{sup} + \|g'\|_{sup})\nonumber\\ & = & \|f\|_{C^1} \, \|g\|_{C^1}. \nonumber \end{eqnarray} Note that the $C^1$ norm of a constant function is the same as the absolute value or modulus of the corresponding real or complex number. One can also check that $C^1([0, 1])$ is complete with respect to the $C^1$ norm, so that $C^1([0, 1])$ is a Banach algebra. Remember that continuous functions on $[0, 1]$ can be approximated uniformly by polynomials, by Weierstrass' approximation theorem. Using this, one can show that continuously-differentiable functions on $[0, 1]$ can be approximated by polynomials in the $C^1$ norm. More precisely, in order to approximate a continuously-differentiable function $f$ on $[0, 1]$ by polynomials in the $C^1$ norm, one can integrate polynomials that approximate $f'$ uniformly on $[0, 1]$. One can choose the constant terms of these approximations to $f$ to be equal to $f(0)$, so that the approximation of $f$ follows from the approximation of $f'$. Let $\phi$ be a homomorphism from $C^1([0, 1])$ into the real or complex numbers, as appropriate. Suppose also that $\phi(f) \ne 0$ for some $f \in C^1([0, 1])$, so that $\phi$ takes the constant function equal to $1$ on $[0, 1]$ to $1$, by the usual argument. If $f$ is a continuously-differentiable function on $[0, 1]$ such that $f(x) \ne 0$ for every $x \in [0, 1]$, then $1/f$ is also a continuously-differentiable function on $[0, 1]$, and hence $\phi(f) \ne 0$. This implies that $\phi(f) \in f([0, 1])$ for every $f \in C^1([0, 1])$, as in previous situations. In particular, it follows that \begin{equation} \label{|phi(f)| le ||f||_{sup} le ||f||_{C^1}} |\phi(f)| \le \|f\|_{sup} \le \|f\|_{C^1} \end{equation} for every $f \in C^1([0, 1])$. Of course, $f_0(x) = x$ is a continuously-differentiable function on $[0, 1]$. Put $p = \phi(f_0)$, so that $p \in f_0([0, 1]) = [0, 1]$.
It follows that \begin{equation} \phi(f) = f(p) \end{equation} when $f$ is a polynomial, by simple algebra. The same relation holds for every $f \in C^1([0, 1])$, because polynomials are dense in $C^1([0, 1])$ with respect to the supremum norm. We do not need the stronger fact that polynomials are dense in $C^1([0, 1])$ with respect to the $C^1$ norm here, because $\phi$ is continuous with respect to the supremum norm, by (\ref{|phi(f)| le ||f||_{sup} le ||f||_{C^1}}). Alternatively, we can use the continuity of $\phi$ with respect to the supremum norm to extend $\phi$ to a homomorphism on $C([0, 1])$, since $C^1([0, 1])$ is dense in $C([0, 1])$ with respect to the supremum norm. This permits us to use the results about homomorphisms on $C(X)$ when $X$ is compact, as in Section \ref{compact spaces}. This approach has the advantage of working in more abstract situations, such as on compact manifolds. The same type of arguments as in Section \ref{compact spaces} can also be used directly in these situations. At any rate, every nonzero homomorphism on $C^1([0, 1])$ can be represented as $\phi(f) = f(p)$ for some $p \in [0, 1]$. Of course, $\phi_p(f) = f(p)$ is a homomorphism on $C^1([0, 1])$ for every $p \in [0, 1]$. \section{Spectral radius} \label{spectral radius} \setcounter{equation}{0} Let $(\mathcal{A}, \|\cdot \|)$ be a Banach algebra over the real or complex numbers with nonzero multiplicative identity element $e$. If $x \in \mathcal{A}$ satisfies $\|x\| < 1$, then $e - x$ is invertible in $\mathcal{A}$, as in Section \ref{banach algebras}. The same conclusion also holds when $\|x^n\| < 1$ for some positive integer $n$. One way to see this is to use the previous result to get that $e - x^n$ is invertible, and then observe that \begin{equation} \label{(e - x) (sum_{j = 1}^{n - 1} x^j) = ... = e - x^n} (e - x) \Big(\sum_{j = 0}^{n - 1} x^j\Big) = \Big(\sum_{j = 0}^{n - 1} x^j\Big) (e - x) = e - x^n. \end{equation} This shows that the product of $e - x$ with an element of $\mathcal{A}$ that commutes with it is invertible, which implies that $e - x$ is invertible too, as in Section \ref{banach algebras}. Alternatively, one can check that $\sum_{j = 0}^\infty \|x^j\|$ converges when $\|x^n\| < 1$ for some $n$, and then argue as in Section \ref{banach algebras} that $\sum_{j = 0}^\infty x^j$ converges in $\mathcal{A}$, and that the sum is the inverse of $e - x$. To do this, note first that every positive integer $j$ can be represented as $l \, n + r$ for some nonnegative integers $l$, $r$ with $r < n$. This leads to the estimate \begin{equation} \label{||x^j|| le ||x^n||^l ||x||^r} \|x^j\| \le \|x^n\|^l \, \|x\|^r, \end{equation} which implies the convergence of $\sum_{j = 0}^\infty \|x^j\|$ when $\|x^n\| < 1$. If $x$ is any element of $\mathcal{A}$, then put \begin{equation} r(x) = \inf_{n \ge 1} \|x^n\|^{1/n}, \end{equation} where more precisely the infimum is taken over all positive integers $n$. Thus $e - x$ is invertible in $\mathcal{A}$ when $r(x) < 1$, as in the previous paragraph. Observe also that \begin{equation} r(t \, x) = |t| \, r(x) \end{equation} for every real or complex number $t$, as appropriate. It follows that $e - t \, x$ is invertible in $\mathcal{A}$ when $|t| \, r(x) < 1$. Equivalently, $t \, e - x$ is invertible in $\mathcal{A}$ when $|t| > r(x)$. Let us check that \begin{equation} \lim_{j \to \infty} \|x^j\|^{1/j} = r(x), \end{equation} where the existence of the limit is part of the conclusion.
Because of the way that $r(x)$ is defined, it suffices to show that \begin{equation} \limsup_{j \to \infty} \|x^j\|^{1/j} \le r(x), \end{equation} which is the same as saying that \begin{equation} \label{limsup_{j to infty} ||x^j||^{1/j} le ||x^n||^{1/n}} \limsup_{j \to \infty} \|x^j\|^{1/j} \le \|x^n\|^{1/n} \end{equation} for each $n \ge 1$. As before, each positive integer $j$ can be represented as $l \, n + r$ for some nonnegative integers $l$, $r$ with $r < n$, and (\ref{||x^j|| le ||x^n||^l ||x||^r}) implies that \begin{equation} \|x^j\|^{1/j} \le (\|x^n\|^{1/n})^{l n / j} \, \|x\|^{r / j} = (\|x^n\|^{1/n})^{1 - (r/j)} \, \|x\|^{r/j}. \end{equation} It is not difficult to derive (\ref{limsup_{j to infty} ||x^j||^{1/j} le ||x^n||^{1/n}}) from this estimate, using the fact that $a^{1/j} \to 1$ as $j \to \infty$ for every positive real number $a$. This is trivial when $x^n = 0$, since $x^j$ is then equal to $0$ for each $j \ge n$. As a basic class of examples, suppose that $\mathcal{A}$ is the algebra $C_b(X)$ of bounded continuous functions on a topological space $X$, equipped with the supremum norm. In this case, it is easy to see that \begin{equation} \|f^n\|_{sup} = \|f\|_{sup}^n \end{equation} for every $f \in C_b(X)$ and $n \ge 1$, and hence that \begin{equation} r(f) = \|f\|_{sup}. \end{equation} Suppose now that $\mathcal{A}$ is the algebra $C^1([0, 1])$ of continuously-differentiable functions on the unit interval, as in the previous section. Thus $\|f\|_{C^1} \ge \|f\|_{sup}$, and hence \begin{equation} r(f) \ge \|f\|_{sup} \end{equation} for every $f \in C^1([0, 1])$. In the other direction, \begin{eqnarray} \|f^n\|_{C^1} & = & \|f^n\|_{sup} + \|(f^n)'\|_{sup} \\ & = & \|f\|_{sup}^n + \|n \, f' \, f^{n - 1}\|_{sup} \nonumber \\ & \le & \|f\|_{sup}^n + n \, \|f'\|_{sup} \, \|f\|_{sup}^{n - 1}.\nonumber \end{eqnarray} for each $n$. Using this, it is not too difficult to show that \begin{equation} r(f) = \lim_{n \to \infty} \|f^n\|_{C^1}^{1/n} = \|f\|_{sup}. \end{equation} This also uses the fact that $(a + b \, n)^{1/n} \to 1$ as $n \to \infty$ for any two positive real numbers $a$, $b$. Let $\mathcal{A}$ be a complex Banach algebra, and put \begin{equation} \label{R(x) = sup {|t| : t in {bf C} and t e - x is not invertible}} R(x) = \sup \{|t| : t \in {\bf C} \hbox{ and } t \, e - x \hbox{ is not invertible}\} \end{equation} for every $x \in \mathcal{A}$. We have already seen that $t \, e - x$ is invertible in $\mathcal{A}$ when $|t| > r(x)$, which works for both real and complex Banach algebras. If $\mathcal{A}$ is a complex Banach algebra, then for each $x \in \mathcal{A}$ there is a $t \in {\bf C}$ such that $t \, e - x$ is not invertible, as in Section \ref{banach algebras}. Thus the supremum in the definition of $R(x)$ makes sense, and $R(x) \le r(x)$. A well-known theorem states that $r(x) \le R(x)$ for every $x \in \mathcal{A}$ when $\mathcal{A}$ is a complex Banach algebra, and hence $r(x) = R(x)$. To see this, note that $t \, e - x$ is invertible when $t \in {\bf C}$ satisfies $|t| > R(x)$, which implies that $e - t \, x$ is invertible when $|t| \, R(x) < 1$. As in Section \ref{banach algebras}, the basic idea is to look at \begin{equation} f(t) = (e - t \, x)^{-1} \end{equation} as a holomorphic function on the disk where $|t| \, R(x) < 1$ with values in $\mathcal{A}$. In particular, the composition of $f$ with a continuous linear functional on $\mathcal{A}$ defines a complex-valued function on this disk which is holomorphic in the usual sense. 
We also know that $f(t)$ is given by the power series $\sum_{j = 0}^\infty t^j \, x^j$ when $|t|$ is sufficiently small, as in Section \ref{banach algebras}. By standard arguments in complex analysis, one can estimate the size of the coefficients of this power series in $t$ in terms of the behavior of $f(t)$ on any circle $|t| = a$ with $a \, R(x) < 1$. Note that $f(t)$ is bounded on any circle of this type, because the circle is compact and $f(t)$ is continuous on it. More precisely, one can show that for each positive real number $a$ with $a \, R(x) < 1$, there is a $C(a) \ge 0$ such that \begin{equation} a^j \, \|x^j\| \le C(a) \end{equation} for every $j \ge 1$. Equivalently, $a \, \|x^j\|^{1/j} \le C(a)^{1/j}$ for each $j$, which implies that $a \, r(x) \le 1$ when $a \, R(x) < 1$, by taking the limit as $j \to \infty$. Thus $r(x) \le R(x)$, as desired. \section{Topological algebras} \label{topological algebras} \setcounter{equation}{0} Let $\mathcal{A}$ be an associative algebra over the real or complex numbers, as in Section \ref{banach algebras}. Suppose that $\mathcal{A}$ is also equipped with a topology which makes it into a topological vector space, as in Section \ref{topological vector spaces}. In the same way, one can ask that multiplication in $\mathcal{A}$ be continuous as a mapping from $\mathcal{A} \times \mathcal{A}$ into $\mathcal{A}$, using the product topology on $\mathcal{A} \times \mathcal{A}$ associated to the given topology on $\mathcal{A}$. Under these conditions, we can say that $\mathcal{A}$ is a \emph{topological algebra}. As before, we are especially interested here in the case where multiplication on $\mathcal{A}$ is commutative. Of course, Banach algebras are topological algebras, with respect to the topology associated to the norm. If $X$ is a locally compact Hausdorff topological space, then the algebra of continuous functions on $X$ is a topological algebra with respect to the topology determined by the collection of supremum seminorms corresponding to nonempty compact subsets of $X$, as in Section \ref{locally compact spaces}. If $U$ is a nonempty open set in ${\bf R}^n$, then the algebra of smooth functions on $U$ is a topological algebra with respect to the collection of supremum seminorms of derivatives of $f$ over nonempty compact subsets of $U$, as in Section \ref{smooth functions}. As in the case of Banach algebras, one may wish to look at topological algebras $\mathcal{A}$ that are complete as topological vector spaces. If $\mathcal{A}$ has a countable local base for its topology at $0$, then this can be defined in terms of convergence of Cauchy sequences, as usual. Otherwise, one can consider more general Cauchy conditions for nets or filters on $\mathcal{A}$. It is not too difficult to show that the examples of topological algebras of continuous and smooth functions mentioned in the previous paragraph are complete. If $U$ is a nonempty open set in the complex plane, then the algebra $\mathcal{H}(U)$ of holomorphic functions on $U$ may be considered as a subalgebra of the algebra $C(U)$ of continuous complex-valued functions on $U$. More precisely, we have seen that $\mathcal{H}(U)$ is a closed subalgebra of $C(U)$ with respect to the topology associated to the collection of supremum seminorms over nonempty compact subsets of $U$. Of course, $\mathcal{H}(U)$ is also a topological algebra with respect to the topology determined by this collection of seminorms, and it follows that $\mathcal{H}(U)$ is complete as well, because $C(U)$ is complete. 
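As a simple example of the difference between this setting and that of Banach algebras, let $\mathcal{A}$ be the algebra $C({\bf R})$ of continuous functions on the real line, with the topology determined by the collection of supremum seminorms associated to nonempty compact subsets of ${\bf R}$. For each positive integer $j$, let $f_j$ be a continuous function on ${\bf R}$ such that $f_j(x) = 1$ when $|x| \le j$ and $f_j(j + 1) = 0$. Thus $f_j$ is not invertible in $C({\bf R})$ for any $j$, because $f_j$ has a zero on ${\bf R}$, while $f_j$ converges to the constant function equal to $1$ on ${\bf R}$ as $j \to \infty$ with respect to this topology, since $f_j$ is equal to $1$ on any fixed compact set when $j$ is sufficiently large. It follows that the set of invertible elements of $C({\bf R})$ is not an open set in this topology, in contrast with the situation for Banach algebras in Section \ref{banach algebras}.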
\section{Fourier series} \label{fourier series} \setcounter{equation}{0} Let ${\bf T}$ be the unit circle in the complex plane, consisting of the $z \in {\bf C}$ with $|z| = 1$. It is well known that \begin{equation} \label{int_{bf T} z^j |dz| = 0} \int_{\bf T} z^j \, |dz| = 0 \end{equation} for each nonzero integer $j$, where $|dz|$ is the element of arc length along ${\bf T}$. This integral is the same as $-i$ times the line integral \begin{equation} \label{oint_{bf T} z^{j - 1} dz} \oint_{\bf T} z^{j - 1} \, dz, \end{equation} the vanishing of which when $j \ne 0$ is a basic fact in complex analysis. More precisely, the relationship between these two integrals follows from identifying $i \, z$ with the unit tangent vector to ${\bf T}$ at $z$ in the positive orientation. Note that \begin{equation} \label{overline{(int_{bf T} z^j |dz|)} = int_{bf T} z^{-j} |dz|} \overline{\Big(\int_{\bf T} z^j \, |dz|\Big)} = \int_{\bf T} z^{-j} \, |dz|, \end{equation} since $\overline{z} = z^{-1}$ when $|z| = 1$, and so it suffices to verify (\ref{int_{bf T} z^j |dz| = 0}) when $j$ is a positive integer. If $j = 0$, then $z^j$ is interpreted as being equal to $1$ for each $z$, so that the integral in (\ref{int_{bf T} z^j |dz| = 0}) is equal to the length $2 \, \pi$ of ${\bf T}$. If $f$ is a continuous complex-valued function on ${\bf T}$ and $j$ is an integer, then the $j$th Fourier coefficient of $f$ is defined by \begin{equation} \label{widehat{f}(j) = frac{1}{2 pi} int_{bf T} f(w) w^{-j} |dw|} \widehat{f}(j) = \frac{1}{2 \pi} \int_{\bf T} f(w) \, w^{-j} \, |dw|. \end{equation} The corresponding Fourier series is given by \begin{equation} \label{sum_{j = -infty}^infty widehat{f}(j) z^j} \sum_{j = -\infty}^\infty \widehat{f}(j) \, z^j. \end{equation} For the moment, this should be considered as a formal sum, without regard to convergence. If $f(z) = z^l$ for some integer $l$, then $\widehat{f}(j)$ is equal to $1$ when $j = l$ and to $0$ when $j \ne l$, as in the previous paragraph. Thus the Fourier series (\ref{sum_{j = -infty}^infty widehat{f}(j) z^j}) reduces to $f(z)$ in this case, and also when $f(z)$ is a linear combination of $z^l$ for finitely many integers $l$. Suppose that $f(z)$ is a continuous complex-valued function on the closed unit disk in ${\bf C}$ which is holomorphic on the open unit disk. By standard results in complex analysis, $f(z)$ can be represented by an absolutely convergent power series \begin{equation} f(z) = \sum_{j = 0}^\infty a_j \, z^j \end{equation} on the open unit disk, which is to say for $z \in {\bf C}$ with $|z| < 1$. In this case, \begin{equation} a_j = \widehat{f}(j) \end{equation} for each $j \ge 0$, where $\widehat{f}(j)$ is the $j$th Fourier coefficient of the restriction of $f$ to the unit circle. This follows from the usual Cauchy integral formulae, where one integrates over the unit circle. Normally one might integrate over circles of radius $r < 1$ when dealing with holomorphic functions on the open unit disk, but one can pass to the limit $r \to 1$ when $f$ extends to a continuous function on the closed unit disk. Under these conditions, we also have that $\widehat{f}(j) = 0$ when $j < 0$. This can be derived from Cauchy's theorem for line integrals of holomorphic functions, starting with integrals over circles of radius $r < 1$, and then passing to the limit $r \to 1$ as in the previous paragraph. 
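For example, let $f$ be the continuous function on the unit circle defined by $f(z) = (z + z^{-1})/2$, which is the same as the real part of $z$ when $|z| = 1$. In this case, $\widehat{f}(1) = \widehat{f}(-1) = 1/2$, and $\widehat{f}(j) = 0$ when $j \ne 1, -1$, by the computations at the beginning of the section. In particular, $\widehat{f}(-1) \ne 0$, and so $f$ is not the restriction to the unit circle of a continuous function on the closed unit disk that is holomorphic on the open unit disk.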
Conversely, if $f$ is a continuous function on the unit circle with $\widehat{f}(j) = 0$ when $j < 0$, then it can be shown that $f$ has a continuous extension to the closed unit disk which is holomorphic on the open unit disk. More precisely, the holomorphic function on the open unit disk is given by the power series defined by the Fourier coefficients of $f$, as before. The remaining point is to show that the combination of this holomorphic function on the open unit disk with the given function $f$ on the unit circle is continuous on the closed unit disk, which will be discussed in Section \ref{poisson kernel}. \section{Absolute convergence} \label{absolute convergence} \setcounter{equation}{0} Let $\ell^1({\bf Z})$ be the space of doubly-infinite sequences $a = \{a_j\}_{j = -\infty}^\infty$ of complex numbers such that \begin{equation} \|a\|_1 = \sum_{j = -\infty}^\infty |a_j| \end{equation} converges. This is equivalent to the definition in Section \ref{summable functions} with $E = {\bf Z}$, but in this case it is a bit simpler to think of a sum over ${\bf Z}$ as a combination of two ordinary infinite series, corresponding to sums over $j \ge 0$ and $j < 0$. In particular, if $a \in \ell^1({\bf Z})$, then $\sum_{j = 0}^\infty a_j$ and $\sum_{j = 1}^\infty a_{-j}$ converge absolutely, so that their sum $\sum_{j = -\infty}^\infty a_j$ is well-defined, and satisfies \begin{equation} \biggl|\sum_{j = -\infty}^\infty a_j\biggr| \le \|a\|_1. \end{equation} As before, it is easy to see that $\|a\|_1$ defines a norm on $\ell^1({\bf Z})$. If $a \in \ell^1({\bf Z})$, $z \in {\bf C}$, and $|z| = 1$, then put \begin{equation} \widehat{a}(z) = \sum_{j = -\infty}^\infty a_j \, z^j, \end{equation} which is the Fourier transform of $a$. This makes sense, because \begin{equation} \sum_{j = -\infty}^\infty |a_j \, z^j| = \sum_{j = -\infty}^\infty |a_j| \end{equation} converges. Moreover, \begin{equation} \sup_{z \in {\bf T}} |\widehat{a}(z)| \le \|a\|_1. \end{equation} The partial sums $\sum_{j = -n}^n a_j \, z^j$ are continuous functions that converge to $\widehat{a}(z)$ uniformly on the unit circle, by Weierstrass' M-test, and so $\widehat{a}(z)$ is a continuous function on ${\bf T}$. It is easy to see that \begin{equation} \widehat{(\widehat{a})}(j) = \frac{1}{2 \pi}\int_{\bf T} \widehat{a}(z) \, z^{-j} \, |dz| = a_j \end{equation} for each $j$, using the uniform convergence of the partial sums to reduce to the identities discussed in the previous section. The convolution $a * b$ of $a, b \in \ell^1({\bf Z})$ is defined by \begin{equation} (a * b)_j = \sum_{l = -\infty}^\infty a_{j - l} \, b_l. \end{equation} The sum on the right converges absolutely as soon as one of $a$, $b$ is summable and the other is bounded, and in particular when both $a$, $b$ are summable. We also have that \begin{equation} |(a * b)_j| \le \sum_{l = -\infty}^\infty |a_{j - l}| \, |b_l|, \end{equation} which implies that \begin{equation} \sum_{j = -\infty}^\infty |(a * b)_j| \le \sum_{j = -\infty}^\infty \sum_{l = -\infty}^\infty |a_{j - l}| \, |b_l|. \end{equation} Interchanging the order of summation, we get that \begin{equation} \sum_{j = -\infty}^\infty |(a * b)_j| \le \sum_{l = -\infty}^\infty \sum_{j = -\infty}^\infty |a_{j - l}| \, |b_l|. \end{equation} Of course, \begin{equation} \sum_{j = -\infty}^\infty |a_{j - l}| = \sum_{j = -\infty}^\infty |a_j| \end{equation} for each $l$, by making the change of variables $j \mapsto j + l$.
Thus \begin{equation} \sum_{j = -\infty}^\infty |(a * b)_j| \le \Big(\sum_{j = -\infty}^\infty |a_j|\Big) \, \Big(\sum_{l = -\infty}^\infty |b_l|\Big), \end{equation} so that $a * b \in \ell^1({\bf Z})$ when $a, b \in \ell^1({\bf Z})$. Equivalently, \begin{equation} \|a * b\|_1 \le \|a\|_1 \, \|b\|_1. \end{equation} If $a, b \in \ell^1({\bf Z})$, $z \in {\bf C}$, and $|z| = 1$, then \begin{equation} \widehat{(a * b)}(z) = \sum_{j = -\infty}^\infty \Big(\sum_{l = -\infty}^\infty a_{j - l} \, b_l\Big) z^j. \end{equation} This is the same as \begin{equation} \sum_{j = -\infty}^\infty \sum_{l = -\infty}^\infty a_{j - l} \, z^{j - l} \, b_l \, z^l, \end{equation} which is equal to \begin{equation} \sum_{l = -\infty}^\infty \sum_{j = -\infty}^\infty a_{j - l} \, z^{j - l} \, b_l \, z^l, \end{equation} by interchanging the order of summation. This uses the absolute summability shown in the previous paragraph. As before, we can make the change of variables $j \mapsto j + l$, to get that \begin{equation} \sum_{j = -\infty}^\infty a_{j - l} \, z^{j - l} = \sum_{j = -\infty}^\infty a_j \, z^j = \widehat{a}(z) \end{equation} for each $l$. Substituting this into the previous double sum, we get that \begin{equation} \label{widehat{(a * b)}(z) = widehat{a}(z) widehat{b}(z)} \widehat{(a * b)}(z) = \widehat{a}(z) \, \widehat{b}(z) \end{equation} for every $z \in {\bf T}$. Let $\delta(n) = \{\delta_j(n)\}_{j = -\infty}^\infty$ be defined for each integer $n$ by putting $\delta_j(n) = 1$ when $j = n$ and $\delta_j(n) = 0$ when $j \ne n$, so that $\|\delta(n)\|_1 = 1$ for each $n$. It is easy to see that \begin{equation} \delta(n) * \delta(r) = \delta(n + r) \end{equation} for every $n, r \in {\bf Z}$, and that \begin{equation} \delta(0) * a = a * \delta(0) = a \end{equation} for every $a \in \ell^1({\bf Z})$. One can also check that \begin{equation} a * b = b * a \end{equation} and \begin{equation} (a * b) * c = a * (b * c) \end{equation} for every $a, b, c \in \ell^1({\bf Z})$, directly from the definition of convolution, or using the fact that linear combinations of the $\delta(n)$'s are dense in $\ell^1({\bf Z})$. It is well known and not too difficult to show that $\ell^1({\bf Z})$ is complete with respect to the $\ell^1$ norm $\|a\|_1$. It follows that $\ell^1({\bf Z})$ is a commutative Banach algebra, with convolution as multiplication and $\delta(0)$ as the multiplicative identity element. Suppose that $\phi$ is a linear functional on $\ell^1({\bf Z})$ that is also a homomorphism with respect to convolution, so that $\phi(a * b) = \phi(a) \, \phi(b)$ for every $a, b \in \ell^1({\bf Z})$. If $\phi(a) \ne 0$ for some $a \in \ell^1({\bf Z})$, then $\phi(\delta(0)) = 1$, and $\phi$ is a continuous linear functional on $\ell^1({\bf Z})$ with dual norm $1$, as in Section \ref{banach algebras}. We would like to show that \begin{equation} \phi(a) = \widehat{a}(z) \end{equation} for some $z \in {\bf T}$ and every $a \in \ell^1({\bf Z})$. Of course, we have already seen that $\phi_z(a) = \widehat{a}(z)$ defines a homomorphism on $\ell^1({\bf Z})$ for every $z \in {\bf T}$. If $z = \phi(\delta(1))$, then $|z| \le 1$, because $\|\delta(1)\|_1 = 1$ and $\phi$ has dual norm $1$. We also know that $\delta(-1) * \delta(1) = \delta(0)$, which implies that $\phi(\delta(-1)) \, \phi(\delta(1)) = 1$. Thus $z \ne 0$, $z^{-1} = \phi(\delta(-1))$, and hence $|z^{-1}| \le 1$, because $\|\delta(-1)\|_1 = 1$ and $\phi$ has dual norm $1$. It follows that $|z| = 1$, and that $\phi(\delta(n)) = z^n$ for each $n \in {\bf Z}$. 
Equivalently, $\phi(a) = \widehat{a}(z)$ when $a = \delta(n)$ for some $n$. This also works when $a$ is a finite linear combination of $\delta(n)$'s, by linearity. Therefore $\phi(a) = \widehat{a}(z)$ for every $a \in \ell^1({\bf Z})$, because linear combinations of the $\delta(n)$'s are dense in $\ell^1({\bf Z})$. \section{The Poisson kernel} \label{poisson kernel} \setcounter{equation}{0} Let $f(z)$ be a continuous complex-valued function on the unit circle ${\bf T}$. Note that the Fourier coefficients of $f$ are bounded, with \begin{equation} |\widehat{f}(j)| \le \frac{1}{2 \pi} \int_{\bf T} |f(w)| \, |dw| \le \sup_{w \in {\bf T}} |f(w)| \end{equation} for each $j \in {\bf Z}$. Put \begin{equation} \phi(z) = \sum_{j = 0}^\infty \widehat{f}(j) \, z^j + \sum_{j = 1}^\infty \widehat{f}(-j) \, \overline{z}^j \end{equation} for each $z \in {\bf C}$ with $|z| < 1$, where $z^j$ is interpreted as being equal to $1$ for each $z$ when $j = 0$, as usual. These two infinite series converge absolutely when $|z| < 1$, because $\widehat{f}(j)$ is bounded. If $|z| = 1$, then $\overline{z} = z^{-1}$, and the sum of these two series is formally the same as the Fourier series (\ref{sum_{j = -infty}^infty widehat{f}(j) z^j}) associated to $f$. Equivalently, $\phi = \phi_1 + \phi_2$, where \begin{equation} \phi_1(z) = \sum_{j = 0}^\infty \widehat{f}(j) \, z^j, \quad \phi_2(z) = \sum_{j = 1}^\infty \widehat{f}(-j) \, \overline{z}^j. \end{equation} Of course, $\phi_1$ is a holomorphic function on the open unit disk, and $\phi_2$ is the complex conjugate of a holomorphic function on the open unit disk. It is well known that a holomorphic function $h(z)$ is harmonic, meaning that it satisfies Laplace's equation \begin{equation} \frac{\partial^2 h}{\partial x^2} + \frac{\partial^2 h}{\partial y^2} = 0 \end{equation} when we identify the complex plane ${\bf C}$ with ${\bf R}^2$, and where $x$, $y$ correspond to the real and imaginary parts of $z \in {\bf C}$. More precisely, Laplace's equation applies to the real and imaginary parts of $h(z)$ separately, both of which are harmonic. Thus the complex conjugate of a holomorphic function is also harmonic, and hence $\phi$ is a harmonic function on the open unit disk. The Poisson kernel is defined by \begin{equation} \label{P(z, w) = ...} P(z, w) = \frac{1}{2 \pi} \Big(\sum_{j = 0}^\infty z^j \, \overline{w}^j + \sum_{j = 1}^\infty \overline{z}^j \, w^j\Big) \end{equation} for $z, w \in {\bf C}$ with $|z| < 1$ and $|w| = 1$. Of course, these series converge absolutely under these conditions, and their partial sums converge uniformly on the set where $|z| \le r$ and $|w| = 1$ for every $r < 1$. This implies that \begin{equation} \label{phi(z) = int_{bf T} P(z, w) f(w) |dw|} \phi(z) = \int_{\bf T} P(z, w) \, f(w) \, |dw| \end{equation} for every $z$ in the open unit disk, using uniform convergence for $w \in {\bf T}$ to interchange the order of summation and integration. In particular, \begin{equation} \label{int_{bf T} P(z, w) |dw| = 1} \int_{\bf T} P(z, w) \, |dw| = 1 \end{equation} for every $z$ in the open unit disk, because $\phi(z) = 1$ for each $z$ when $f$ is the constant function equal to $1$ on the unit circle. Observe that \begin{equation} \sum_{j = 1}^\infty \overline{z}^j \, w^j = \overline{\Big(\sum_{j = 1}^\infty z^j \, \overline{w}^j\Big)}, \end{equation} and hence \begin{equation} P(z, w) = \frac{1}{2 \pi} \Big(2 \mathop{\rm Re} \sum_{j = 0}^\infty z^j \, \overline{w}^j - 1\Big) \end{equation} for all $z$, $w$ as before. 
Here $\mathop{\rm Re} a$ denotes the real part of a complex number $a$, and we are using the simple fact that $a + \overline{a} = 2 \mathop{\rm Re} a$. Summing the geometric series, we get that \begin{equation} \label{sum_{j = 0}^infty z^j overline{w}^j = ...} \sum_{j = 0}^\infty z^j \, \overline{w}^j = \frac{1}{1 - z \, \overline{w}} = \frac{1 - \overline{z} \, w}{|1 - z \, \overline{w}|^2} \end{equation} when $|z| < 1$ and $|w| = 1$. Thus \begin{equation} P(z, w) = \frac{1}{2 \pi} |1 - z \, \overline{w}|^{-2} (2 - 2 \mathop{\rm Re} z \, \overline{w} - |1 - z \, \overline{w}|^2). \end{equation} We can expand $|1 - z \, \overline{w}|^2$ into $(1 - z \, \overline{w}) (1 - \overline{z} \, w)$, which reduces to $1 - 2 \mathop{\rm Re} z \, \overline{w} + |z|^2$ when $|w| = 1$. It follows that \begin{equation} \label{P(z, w) = ..., 2} P(z, w) = \frac{1}{2 \pi} \frac{1 - |z|^2}{|1 - z \, \overline{w}|^2} = \frac{1}{2 \pi} \frac{1 - |z|^2}{|w - z|^2}, \end{equation} using $|w| = 1$ again in the second step. In particular, $P(z, w) > 0$. If $z_0, w \in {\bf T}$ and $z_0 \ne w$, then $P(z, w) \to 0$ as $z \to z_0$, where the limit is restricted to $z$ in the open unit disk. This is an immediate consequence of (\ref{P(z, w) = ..., 2}), which also shows that we have uniform convergence for $w \in {\bf T}$ that satisfy $|w - z_0| \ge \delta$ for some $\delta > 0$. Note that \begin{equation} \phi(z) - f(z_0) = \int_{\bf T} P(z, w) \, (f(w) - f(z_0)) \, |dw| \end{equation} for every $z_0 \in {\bf T}$ and $z$ in the open unit disk, because of (\ref{int_{bf T} P(z, w) |dw| = 1}), and hence \begin{equation} |\phi(z) - f(z_0)| \le \int_{\bf T} P(z, w) \, |f(w) - f(z_0)| \, |dw|. \end{equation} Using this and the continuity of $f$, one can check that $\phi(z) \to f(z_0)$ as $z \to z_0$ in the open unit disk. More precisely, $f(w) - f(z_0)$ is small when $w$ is close to $z_0$, while $P(z, w)$ is small when $w$ is not too close to $z_0$ and $z$ is very close to $z_0$. It follows that the function defined on the closed unit disk by taking $\phi$ on the open unit disk and $f$ on the unit circle is continuous. In particular, if $\widehat{f}(j) = 0$ when $j < 0$, then $\phi = \phi_1$ is holomorphic, as mentioned at the end of Section \ref{fourier series}. \section{Cauchy products} \label{cauchy products} \setcounter{equation}{0} If $\sum_{j = 0}^\infty a_j \, z^j$, $\sum_{l = 0}^\infty b_l \, z^l$ are power series with complex coefficients, then \begin{equation} \Big(\sum_{j = 0}^\infty a_j \, z^j\Big) \, \Big(\sum_{l = 0}^\infty b_l \, z^l\Big) = \sum_{n = 0}^\infty c_n \, z^n \end{equation} formally, where \begin{equation} c_n = \sum_{j = 0}^n a_j \, b_{n - j}. \end{equation} In particular, \begin{equation} \label{(sum_{j = 0}^infty a_j) (sum_{l = 0}^infty b_l) = sum_{n = 0}^infty c_n} \Big(\sum_{j = 0}^\infty a_j\Big) \, \Big(\sum_{l = 0}^\infty b_l\Big) = \sum_{n = 0}^\infty c_n \end{equation} formally. These identities clearly hold when $a_j = b_l = 0$ for all but finitely many $j$, $l$, for instance. If $a_j$, $b_l$ are nonnegative real numbers, then it is easy to see that \begin{equation} \sum_{n = 0}^N c_n \le \Big(\sum_{j = 0}^N a_j\Big) \, \Big(\sum_{l = 0}^N b_l\Big) \end{equation} for every nonnegative integer $N$. Similarly, \begin{equation} \Big(\sum_{j = 0}^N a_j\Big) \, \Big(\sum_{l = 0}^N b_l\Big) \le \sum_{n = 0}^{2 N} c_n.
\end{equation} Hence $\sum_{n = 0}^\infty c_n$ converges and satisfies (\ref{(sum_{j = 0}^infty a_j) (sum_{l = 0}^infty b_l) = sum_{n = 0}^infty c_n}) when $\sum_{j = 0}^\infty a_j$, $\sum_{l = 0}^\infty b_l$ converge. If $a_j$, $b_l$ are arbitrary real or complex numbers, then \begin{equation} |c_n| \le \sum_{j = 0}^n |a_j| \, |b_{n - j}| \end{equation} for each $n$. If $\sum_{j = 0}^\infty a_j$, $\sum_{l = 0}^\infty b_l$ converge absolutely, then it follows that $\sum_{n = 0}^\infty c_n$ converges absolutely too, by the remarks in the previous paragraph. In this case, one can check that (\ref{(sum_{j = 0}^infty a_j) (sum_{l = 0}^infty b_l) = sum_{n = 0}^infty c_n}) holds, by expressing these series as linear combinations of convergent series of nonnegative real numbers, and using the remarks in the previous paragraph. Alternatively, one can approximate these series by ones with only finitely many nonzero terms, and estimate the remainders using absolute convergence. Suppose now that $\sum_{j = 0}^\infty a_j \, z^j$, $\sum_{l = 0}^\infty b_l \, z^l$ are power series that converge when $|z| < 1$, and hence converge absolutely when $|z| < 1$, by standard results. Thus $\sum_{n = 0}^\infty c_n \, z^n$ converges absolutely when $|z| < 1$, and is equal to the product of the other two series. The partial sums of these series also converge uniformly for $|z| \le r$ when $r < 1$, by standard results. Put $f(z) = \sum_{j = 0}^\infty a_j \, z^j$, $g(z) = \sum_{l = 0}^\infty b_l \, z^l$, and $h(z) = \sum_{n = 0}^\infty c_n \, z^n$ when $|z| < 1$, so that \begin{equation} f(z) \, g(z) = h(z), \end{equation} as in the preceding paragraph. If $f(z)$, $g(z)$ have continuous extensions to the closed unit disk, then it follows that $h(z)$ does as well. Note that \begin{equation} a_j \, r^j = \frac{1}{2 \pi} \int_{\bf T} f(r \, z) \, z^{-j} \, |dz| \end{equation} for each $j \ge 0$ and $0 < r < 1$, and similarly for $g$, $h$. This is because $f(r \, z)$ is defined by an absolutely convergent Fourier series, so that we can reduce to the usual identities for the integral of a power of $z$ on the unit circle by interchanging the order of integration and summation. If $f$ extends continuously to the closed unit disk, then this formula also holds with $r = 1$. If $\sum_{j = -\infty}^\infty a_j$, $\sum_{l = -\infty}^\infty b_l$ are doubly-infinite series of complex numbers, then we have again that \begin{equation} \label{sum_{n = -infty}^infty c_n = ...} \Big(\sum_{j = -\infty}^\infty a_j\Big) \, \Big(\sum_{l = -\infty}^\infty b_l\Big) = \sum_{n = -\infty}^\infty c_n \end{equation} with $c_n = \sum_{j = -\infty}^\infty a_j \, b_{n - j}$, and similarly \begin{equation} \label{sum_{n = -infty}^infty c_n z^n = ...} \Big(\sum_{j = -\infty}^\infty a_j \, z^j\Big) \, \Big(\sum_{l = -\infty}^\infty b_l \, z^l\Big) = \sum_{n = -\infty}^\infty c_n \, z^n, \end{equation} at least formally. As before, there is no problem with these identities when $a_j = b_l = 0$ for all but finitely many $j$, $l$. Otherwise, even the definition of $c_n$ requires some convergence conditions. If the $a_j$'s are absolutely summable and the $b_l$'s are bounded, or vice-versa, then the series defining $c_n$ converges absolutely, and \begin{equation} |c_n| \le \sum_{j = -\infty}^\infty |a_j| \, |b_{n - j}| \end{equation} for each $n$.
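For instance, if $a_j = b_j = 1$ when $j \ge 0$ and $a_j = b_j = 0$ when $j < 0$, then the series defining $c_n$ has only finitely many nonzero terms for each $n$, with $c_n = n + 1$ when $n \ge 0$ and $c_n = 0$ when $n < 0$. In this case, (\ref{sum_{n = -infty}^infty c_n z^n = ...}) corresponds to the familiar identity \begin{equation} \Big(\sum_{j = 0}^\infty z^j\Big)^2 = \sum_{n = 0}^\infty (n + 1) \, z^n = \frac{1}{(1 - z)^2} \end{equation} when $|z| < 1$, which can also be obtained by differentiating the geometric series term by term.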
If both the $a_j$'s and $b_l$'s are absolutely summable, then it is easy to see that the $c_n$'s are absolutely summable too, with \begin{equation} \sum_{n = -\infty}^\infty |c_n| \le \Big(\sum_{j = -\infty}^\infty |a_j|\Big) \, \Big(\sum_{l = -\infty}^\infty |b_l|\Big). \end{equation} This follows from the previous estimate for $|c_n|$ by interchanging the order of summation. One can also check that (\ref{sum_{n = -infty}^infty c_n = ...}) holds under these conditions, in the same way as in the earlier situation for sums over nonnegative integers. Of course, this implies that (\ref{sum_{n = -infty}^infty c_n z^n = ...}) holds as well when $|z| = 1$, which is basically the same as (\ref{widehat{(a * b)}(z) = widehat{a}(z) widehat{b}(z)}). \section{Inner product spaces} \label{inner products} \setcounter{equation}{0} Let $V$ be a vector space over the real or complex numbers. An \emph{inner product} on $V$ is a function $\langle v, w \rangle$ defined for $v, w \in V$ with values in ${\bf R}$ or ${\bf C}$, as appropriate, that satisfies the following three conditions. First, \begin{equation} \lambda_w(v) = \langle v, w \rangle \end{equation} is linear as a function of $v$ for each $w \in V$. Second, \begin{equation} \langle w, v \rangle = \langle v, w \rangle \end{equation} for every $v, w \in V$ in the real case, and \begin{equation} \langle w, v \rangle = \overline{\langle v, w \rangle} \end{equation} for every $v, w \in V$ in the complex case. This implies that $\langle v, w \rangle$ is linear in $w$ in the real case, and conjugate-linear in $w$ in the complex case. It also implies that \begin{equation} \langle v, v \rangle = \overline{\langle v, v \rangle} \in {\bf R} \end{equation} for every $v \in V$ in the complex case. The third condition is that $\langle v, v \rangle \ge 0$ for every $v \in V$ in both the real and complex cases, with equality only when $v = 0$. Put \begin{equation} \|v\| = \langle v, v \rangle^{1/2} \end{equation} for every $v \in V$. This satisfies the positivity and homogeneity requirements of a norm, and we would like to show that it also satisfies the triangle inequality. Observe that \begin{eqnarray} 0 \le \|v + t \, w\|^2 & = & \langle v, v \rangle + t \, \langle v, w \rangle + t \, \langle w, v \rangle + t^2 \langle w, w \rangle \\ & = & \|v\|^2 + 2 \, t \, \langle v, w \rangle + t^2 \, \|w\|^2 \nonumber \end{eqnarray} for every $v, w \in V$ and $t \in {\bf R}$ in the real case, and similarly \begin{eqnarray} 0 \le \|v + t \, w\|^2 & = & \langle v, v \rangle + \overline{t} \, \langle v, w \rangle + t \, \langle w, v \rangle + |t|^2 \, \langle w, w \rangle \\ & = & \|v\|^2 + \overline{t} \, \langle v, w \rangle + t \, \overline{\langle v, w\rangle} + |t|^2 \, \|w\|^2 \nonumber\\ & = & \|v\|^2 + 2 \mathop{\rm Re} \overline{t} \, \langle v, w \rangle + |t|^2 \, \|w\|^2 \nonumber \end{eqnarray} for every $v, w \in V$ and $t \in {\bf C}$ in the complex case. In both cases, we get that \begin{equation} 0 \le \|v\|^2 - 2 \, r \, |\langle v, w \rangle| + r^2 \|w\|^2 \end{equation} for every $v, w \in V$ and $r \ge 0$, by taking $t = - r \, \overline{\alpha}$, where $|\alpha| = 1$, $\overline{\alpha} = \alpha$ in the real case, and \begin{equation} \alpha \, \langle v, w \rangle = |\langle v, w \rangle|. \end{equation} Equivalently, \begin{equation} 2 \, r \, |\langle v, w \rangle| \le \|v\|^2 + r^2 \, \|w\|^2 \end{equation} for every $v, w \in V$ and $r \ge 0$, and hence \begin{equation} |\langle v, w \rangle| \le \frac{1}{2} \, (r^{-1} \, \|v\|^2 + r \, \|w\|^2) \end{equation} when $r > 0$.
If $v, w \ne 0$, then we can take $r = \|v\| / \|w\|$ to get that \begin{equation} |\langle v, w \rangle| \le \|v\| \, \|w\|. \end{equation} This is the \emph{Cauchy--Schwarz inequality}, which also holds trivially when $v = 0$ or when $w = 0$. As before, \begin{equation} \|v + w\|^2 = \|v\|^2 + 2 \, \langle v, w \rangle + \|w\|^2 \end{equation} for every $v, w \in V$ in the real case, and \begin{equation} \|v + w\|^2 = \|v\|^2 + 2 \, \mathop{\rm Re} \langle v, w \rangle + \|w\|^2 \end{equation} for every $v, w \in V$ in the complex case. In both cases, \begin{eqnarray} \|v + w\|^2 & \le & \|v\|^2 + 2 \, |\langle v, w \rangle| + \|w\|^2 \\ & \le & \|v\|^2 + 2 \, \|v\| \, \|w\| + \|w\|^2 = (\|v\| + \|w\|)^2, \nonumber \end{eqnarray} using the Cauchy--Schwarz inequality in the second step. This implies that \begin{equation} \|v + w\| \le \|v\| + \|w\| \end{equation} for every $v, w \in V$, so that $\|v\|$ defines a norm on $V$, as desired. The standard inner products on ${\bf R}^n$ and ${\bf C}^n$ are given by \begin{equation} \langle v, w \rangle = \sum_{j = 1}^n v_j \, w_j \end{equation} and \begin{equation} \langle v, w \rangle = \sum_{j = 1}^n v_j \, \overline{w_j}, \end{equation} respectively. In both cases, the corresponding norm is given by \begin{equation} \|v\| = \Big(\sum_{j = 1}^n |v_j|^2\Big)^{1/2}. \end{equation} This is the standard Euclidean norm on ${\bf R}^n$, ${\bf C}^n$, for which the corresponding topology is the standard topology. \section{$\ell^2(E)$} \label{ell^2(E)} \setcounter{equation}{0} Let $E$ be a nonempty set, and let $\ell^2(E)$ be the space of real or complex-valued functions $f(x)$ on $E$ such that $|f(x)|^2$ is a summable function on $E$, as in Section \ref{summable functions}. As usual, this may also be denoted $\ell^2(E, {\bf R})$ or $\ell^2(E, {\bf C})$, to indicate whether real or complex-valued functions are being used. Remember that \begin{equation} a \, b \le \frac{a^2 + b^2}{2} \end{equation} for every $a, b \ge 0$, since \begin{equation} 0 \le (a - b)^2 = a^2 - 2 \, a \, b + b^2. \end{equation} If $f, g \in \ell^2(E)$, then it follows that \begin{eqnarray} |f(x) + g(x)|^2 & \le & (|f(x)| + |g(x)|)^2 \\ & = & |f(x)|^2 + 2 \, |f(x)| \, |g(x)| + |g(x)|^2 \nonumber\\ & \le & 2 \, |f(x)|^2 + 2 \, |g(x)|^2 \nonumber \end{eqnarray} for every $x \in E$. Hence $f + g \in \ell^2(E)$, because $|f(x)|^2$, $|g(x)|^2$ are summable on $E$ by hypothesis. Similarly, \begin{equation} |f(x)| \, |g(x)| \le \frac{1}{2} \, |f(x)|^2 + \frac{1}{2} \, |g(x)|^2 \end{equation} is a summable function on $E$ when $f, g \in \ell^2(E)$. Put \begin{equation} \langle f, g \rangle = \sum_{x \in E} f(x) \, g(x) \end{equation} in the real case, and \begin{equation} \langle f, g \rangle = \sum_{x \in E} f(x) \, \overline{g(x)} \end{equation} in the complex case. Thus \begin{equation} \langle f, f \rangle = \sum_{x \in E} |f(x)|^2 \end{equation} in both cases. It is easy to see that $\ell^2(E)$ is a vector space with respect to pointwise addition and scalar multiplication, and that $\langle f, g \rangle$ defines an inner product on $\ell^2(E)$. The norm associated to this inner product is denoted $\|f\|_2$. If $f \in \ell^1(E)$, then $f$ is bounded, and $\|f\|_\infty \le \|f\|_1$. This implies that \begin{equation} \sum_{x \in E} |f(x)|^2 \le \|f\|_\infty \, \sum_{x \in E} |f(x)| = \|f\|_\infty \, \|f\|_1 \le \|f\|_1^2, \end{equation} so that $f \in \ell^2(E)$ and \begin{equation} \|f\|_2 \le \|f\|_1.
\end{equation} Similarly, if $f \in \ell^2(E)$, then $f$ is bounded on $E$, and \begin{equation} \|f\|_\infty \le \|f\|_2. \end{equation} One can also check that $f \in c_0(E)$, for the same reasons as for summable functions, and hence \begin{equation} \ell^1(E) \subseteq \ell^2(E) \subseteq c_0(E). \end{equation} As in the case of $\ell^1(E)$, one can show that functions with finite support on $E$ are dense in $\ell^2(E)$. If $(V, \langle v, w \rangle)$ is a real or complex inner product space, then $\lambda_w(v) = \langle v, w \rangle$ defines a continuous linear functional on $V$ for every $w \in V$. This uses the Cauchy--Schwarz inequality, which implies that the dual norm of $\lambda_w$ is less than or equal to the norm of $w$. The dual norm of $\lambda_w$ is actually equal to the norm of $w$, as one can check by taking $v = w$. If $V = \ell^2(E)$ with the inner product defined before, then one can show that every continuous linear functional is of this form, using arguments like those in Sections \ref{c_0(E)} and \ref{dual of ell^1}. An inner product space $(V, \langle v, w \rangle)$ is said to be a \emph{Hilbert space} if $V$ is complete as a metric space with respect to the metric determined by the norm associated to the inner product. It is well known that $\ell^2(E)$ is complete with respect to the $\ell^2$ norm, and hence is a Hilbert space. Conversely, it can be shown that every Hilbert space is isometrically equivalent to $\ell^2(E)$ for some set $E$. This is simpler when $V$ is separable, in the sense that it has a countable dense set, in which case $E$ has only finitely or countably many elements. One can also show more directly that every continuous linear functional on a Hilbert space can be expressed as $\lambda_w(v)$ for some $w \in V$. \section{Orthogonality} \label{orthogonality} \setcounter{equation}{0} Let $(V, \langle v, w \rangle)$ be a real or complex inner product space. We say that $v, w \in V$ are \emph{orthogonal} if \begin{equation} \langle v, w \rangle = 0, \end{equation} which implies that \begin{equation} \|v + w\|^2 = \|v\|^2 + \|w\|^2. \end{equation} A collection of vectors $v_1, \ldots, v_n \in V$ is said to be \emph{orthonormal} if $v_j$ is orthogonal to $v_l$ when $j \ne l$, and $\|v_j\| = 1$ for each $j$. This implies that \begin{equation} \bigg\langle \sum_{j = 1}^n a_j \, v_j, \sum_{l = 1}^n b_l \, v_l \bigg\rangle = \sum_{j = 1}^n a_j \, b_j \end{equation} for every $a_1, \ldots, a_n, b_1, \ldots, b_n \in {\bf R}$ in the real case, and \begin{equation} \bigg\langle \sum_{j = 1}^n a_j \, v_j, \sum_{l = 1}^n b_l \, v_l \bigg\rangle = \sum_{j = 1}^n a_j \, \overline{b_j} \end{equation} for every $a_1, \ldots, a_n, b_1, \ldots, b_n \in {\bf C}$ in the complex case. Suppose that $v_1, \ldots, v_n \in V$ are orthonormal, and put \begin{equation} P(v) = \sum_{j = 1}^n \langle v, v_j \rangle v_j \end{equation} for each $v \in V$. Thus $P(v)$ is an element of the linear span of $v_1, \ldots, v_n$ for each $v \in V$, and $P(v) = v$ when $v$ is in the linear span of $v_1, \ldots, v_n$. Moreover, \begin{equation} \langle P(v), v_l \rangle = \langle v, v_l \rangle \end{equation} for every $v \in V$ and $l = 1, \ldots, n$, which implies that \begin{equation} \langle v - P(v), v_l \rangle = 0 \end{equation} for $l = 1, \ldots, n$. Hence $v - P(v)$ is orthogonal to every element of the linear span of $v_1, \ldots, v_n$. In particular, $v - P(v)$ is orthogonal to $P(v)$, which implies that \begin{equation} \label{||v||^2 = ...
= ||v - P(v)||^2 + sum_{j = 1}^n |langle v, v_j rangle|^2} \|v\|^2 = \|v - P(v)\|^2 + \|P(v)\|^2 = \|v - P(v)\|^2 + \sum_{j = 1}^n |\langle v, v_j \rangle|^2. \end{equation} Let $w$ be any element of the linear span of $v_1, \ldots, v_n$. Thus $v - P(v)$ is orthogonal to $w$, and hence $v - P(v)$ is orthogonal to $P(v) - w$. This implies that \begin{equation} \label{||v - w||^2 = ||v - P(v)||^2 + ||P(v) - w||^2 ge ||v - P(v)||^2} \|v - w\|^2 = \|v - P(v)\|^2 + \|P(v) - w\|^2 \ge \|v - P(v)\|^2, \end{equation} so that $P(v)$ is the element of the linear span of $v_1, \ldots, v_n$ closest to $v$. Let $A$ be a nonempty set, and suppose that for each $\alpha \in A$ we have a vector $v_\alpha \in V$ such that $\|v_\alpha\| = 1$ and $v_\alpha$ is orthogonal to $v_\beta$ when $\beta \in A$ and $\alpha \ne \beta$. Thus $v_\alpha$, $\alpha \in A$, is an orthonormal family of vectors in $V$. If $v \in V$ and $\alpha_1, \ldots, \alpha_n$ are distinct elements of $A$, then (\ref{||v||^2 = ... = ||v - P(v)||^2 + sum_{j = 1}^n |langle v, v_j rangle|^2}) implies that \begin{equation} \sum_{j = 1}^n |\langle v, v_{\alpha_j}\rangle|^2 \le \|v\|^2. \end{equation} It follows that $\langle v, v_\alpha \rangle$ is an element of $\ell^2(A)$ as a function of $\alpha$, with \begin{equation} \sum_{\alpha \in A} |\langle v, v_\alpha\rangle|^2 \le \|v\|^2. \end{equation} If $v$ is in the closure of the linear span of the $v_\alpha$'s, $\alpha \in A$, with respect to the norm associated to the inner product on $V$, then one can check that \begin{equation} \label{sum_{alpha in A} |langle v, v_alpha rangle|^2 = ||v||^2} \sum_{\alpha \in A} |\langle v, v_\alpha \rangle|^2 = \|v\|^2. \end{equation} \section{Parseval's formula} \label{parseval} \setcounter{equation}{0} Let $C({\bf T})$ be the space of continuous complex-valued functions on the unit circle. It is easy to see that \begin{equation} \label{langle f, g rangle = frac{1}{2 pi} int_{bf T} f(z) overline{g(z)} |dz|} \langle f, g \rangle = \frac{1}{2 \pi} \int_{\bf T} f(z) \, \overline{g(z)} \, |dz| \end{equation} defines an inner product on $C({\bf T})$, for which the corresponding norm is given by \begin{equation} \|f\| = \Big(\frac{1}{2 \pi} \int_{\bf T} |f(z)|^2 \, |dz|\Big)^{1/2}. \end{equation} As in Section \ref{fourier series}, the functions on ${\bf T}$ of the form $z^j$, $j \in {\bf Z}$, are orthonormal with respect to this inner product. The Fourier coefficients of a continuous function $f$ on ${\bf T}$ can also be expressed as \begin{equation} \widehat{f}(j) = \langle f, z^j \rangle. \end{equation} \emph{Parseval's formula} states that \begin{equation} \sum_{j = -\infty}^\infty |\widehat{f}(j)|^2 = \frac{1}{2 \pi} \int_{\bf T} |f(z)|^2 \, |dz|. \end{equation} That the sum on the left is less than or equal to the integral on the right follows immediately from the orthonormality of $z^j$, $j \in {\bf Z}$, as in the previous section. In order to show that equality holds, it suffices to check that $f$ can be approximated by finite linear combinations of the $z^j$'s with respect to the norm associated to the inner product. In fact, a continuous function $f$ on the unit circle can be approximated uniformly by finite linear combinations of the $z^j$'s, $j \in {\bf Z}$. To see this, one can use the function $\phi(z)$ on the open unit disk discussed in Section \ref{poisson kernel}. Remember that $\phi$ extends to a continuous function on the closed unit disk, which is equal to $f$ on the unit circle.
It follows that $\phi(r \, z)$ converges uniformly to $f(z)$ for $z \in {\bf T}$ as $r \to 1$, because continuous functions on compact sets are uniformly continuous. It is easy to see that $\phi(r \, z)$ can be approximated uniformly on ${\bf T}$ by a finite linear combination of the $z^j$'s for each $r < 1$, because of the absolute convergence of the series defining $\phi(r \, z)$ when $r < 1$. This implies that $f$ can be approximated uniformly by finite linear combinations of the $z^j$'s on ${\bf T}$, as desired. \section{$\ell^p(E)$} \label{ell^p(E)} \setcounter{equation}{0} Let $E$ be a nonempty set, and let $p$ be a positive real number. A real or complex-valued function $f(x)$ on $E$ is said to be \emph{$p$-summable} if $|f(x)|^p$ is a summable function on $E$. The space of $p$-summable functions on $E$ is denoted $\ell^p(E)$, or $\ell^p(E, {\bf R})$, $\ell^p(E, {\bf C})$ to indicate whether real or complex-valued functions are being used. This is consistent with previous definitions when $p = 1, 2$. Observe that \begin{equation} (a + b)^p \le (2 \max (a, b))^p = 2^p \max(a^p, b^p) \le 2^p \, (a^p + b^p) \end{equation} for any pair of nonnegative real numbers $a$, $b$. If $f$, $g$ are $p$-summable functions on $E$, then it follows that $f + g$ is also $p$-summable, with \begin{eqnarray} \sum_{x \in E} |f(x) + g(x)|^p & \le & \sum_{x \in E} (|f(x)| + |g(x)|)^p \\ & \le & 2^p \sum_{x \in E} |f(x)|^p + 2^p \sum_{x \in E} |g(x)|^p. \nonumber \end{eqnarray} This implies that $\ell^p(E)$ is a vector space with respect to pointwise addition and scalar multiplication over the real or complex numbers, as appropriate. If $f$ is a $p$-summable function on $E$, then we put \begin{equation} \|f\|_p = \Big(\sum_{x \in E} |f(x)|^p\Big)^{1/p}. \end{equation} It is easy to see that $f$ vanishes at infinity on $E$, as in the $p = 1$ case. In particular, $f$ is bounded, and we have that \begin{equation} \|f\|_\infty \le \|f\|_p. \end{equation} This implies that $f$ is $q$-summable when $p \le q < \infty$, since \begin{equation} \sum_{x \in E} |f(x)|^q \le \|f\|_\infty^{q - p} \sum_{x \in E} |f(x)|^p. \end{equation} More precisely, we get that \begin{equation} \|f\|_q^q \le \|f\|_\infty^{q - p} \, \|f\|_p^p \le \|f\|_p^q, \end{equation} and hence \begin{equation} \label{||f||_q le ||f||_p} \|f\|_q \le \|f\|_p. \end{equation} If $0 < p \le 1$, then \begin{equation} a + b \le (a^p + b^p)^{1/p} \end{equation} for every $a, b \ge 0$. This follows from (\ref{||f||_q le ||f||_p}) with $q = 1$, using a set $E$ with two elements. Equivalently, \begin{equation} (a + b)^p \le a^p + b^p. \end{equation} If $f$, $g$ are $p$-summable functions on $E$, then we get that \begin{eqnarray} \sum_{x \in E} |f(x) + g(x)|^p & \le & \sum_{x \in E} (|f(x)| + |g(x)|)^p \\ & \le & \sum_{x \in E} |f(x)|^p + \sum_{x \in E} |g(x)|^p. \nonumber \end{eqnarray} Thus \begin{equation} \label{||f + g||_p^p le ||f||_p^p + ||g||_p^p} \|f + g\|_p^p \le \|f\|_p^p + \|g\|_p^p. \end{equation} This is a bit better than what we had before, since there is no longer an extra factor of $2^p$. Note that $\|f\|_p$ does not satisfy the ordinary triangle inequality when $0 < p < 1$ and $E$ has at least two elements, and hence is not a norm on $\ell^p(E)$. However, $\|f - g\|_p^p$ defines a metric on $\ell^p(E)$ when $0 < p \le 1$, by (\ref{||f + g||_p^p le ||f||_p^p + ||g||_p^p}). \section{Convexity} \label{comvexity} \setcounter{equation}{0} It is well known that $\phi_p(r) = r^p$ defines a convex function of $r \ge 0$ when $p \ge 1$.
Therefore \begin{equation} (t \, a + (1 - t) \, b)^p \le t \, a^p + (1 - t) \, b^p \end{equation} for every $a, b \ge 0$ and $0 \le t \le 1$ when $p \ge 1$. In particular, if we take $t = 1/2$, then we get that \begin{equation} (a + b)^p \le 2^{p - 1} \, (a^p + b^p). \end{equation} This improves an inequality in the previous section by a factor of $2$. If $f$, $g$ are $p$-summable functions on a set $E$, $0 \le t \le 1$, and $p \ge 1$, then it follows that \begin{eqnarray} \sum_{x \in E} |t \, f(x) + (1 - t) \, g(x)|^p & \le & \sum_{x \in E} (t \, |f(x)| + (1 - t) \, |g(x)|)^p \\ & \le & t \sum_{x \in E} |f(x)|^p + (1 - t) \sum_{x \in E} |g(x)|^p. \nonumber \end{eqnarray} Equivalently, \begin{equation} \label{||t f + (1 - t) g||_p^p le t ||f||_p^p + (1 - t) ||g||_p^p} \|t \, f + (1 - t) \, g\|_p^p \le t \, \|f\|_p^p + (1 - t) \, \|g\|_p^p. \end{equation} \emph{Minkowski's inequality} states that \begin{equation} \|f + g\|_p \le \|f\|_p + \|g\|_p \end{equation} for every $f, g \in \ell^p(E)$ when $p \ge 1$. This implies that $\|f\|_p$ is a norm on $\ell^p(E)$ when $p \ge 1$, because $\|f\|_p$ satisfies the positivity and homogeneity conditions of a norm for every $p > 0$. To prove Minkowski's inequality, we may as well suppose that neither $f$ nor $g$ is identically $0$ on $E$, since it is trivial otherwise. Put $f' = f/\|f\|_p$, $g' = g/\|g\|_p$, so that $\|f'\|_p = \|g'\|_p = 1$. Thus \begin{equation} \label{||t f' + (1 - t) g'||_p le 1} \|t \, f' + (1 - t) \, g'\|_p \le 1 \end{equation} when $0 \le t \le 1$, by (\ref{||t f + (1 - t) g||_p^p le t ||f||_p^p + (1 - t) ||g||_p^p}). If \begin{equation} \label{t = frac{||f||_p}{(||f||_p + ||g||_p)}} t = \frac{\|f\|_p}{(\|f\|_p + \|g\|_p)}, \end{equation} then $1 - t = \|g\|_p / (\|f\|_p + \|g\|_p)$, and Minkowski's inequality follows from (\ref{||t f' + (1 - t) g'||_p le 1}). Remember that a subset $A$ of a vector space $V$ is said to be \emph{convex} if \begin{equation} t \, v + (1 - t) \, w \in A \end{equation} for every $v, w \in A$ and $0 \le t \le 1$. If $N(v)$ is a seminorm on $V$, then it is easy to see that the corresponding closed unit ball \begin{equation} \label{B = {v in V : N(v) le 1}} B = \{v \in V : N(v) \le 1\} \end{equation} is a convex set in $V$. Conversely, if a nonnegative real-valued function $N(v)$ on $V$ satisfies the homogeneity condition of a seminorm and $B$ is convex, then one can check that $N(v)$ is a seminorm on $V$. This is basically the same as the argument in the previous paragraph for $\|f\|_p$, at least when $N(v)$ satisfies the positivity condition of a norm. Otherwise, some minor adjustments are needed to deal with $v \in V$ such that $N(v) = 0$ but $v \ne 0$. \section{H\"older's inequality} \label{holder's inequality} \setcounter{equation}{0} Let $1 < p, q < \infty$ be conjugate exponents, in the sense that \begin{equation} \frac{1}{p} + \frac{1}{q} = 1. \end{equation} If $E$ is a nonempty set, $f \in \ell^p(E)$, and $g \in \ell^q(E)$, then \emph{H\"older's inequality} states that $f \, g \in \ell^1(E)$, and \begin{equation} \|f \, g\|_1 \le \|f\|_p \, \|g\|_q. \end{equation} This also works when $p = 1$ and $q = \infty$, or the other way around, and is much simpler. The $p = q = 2$ case can be reduced to the Cauchy--Schwarz inequality. Using the convexity of the exponential function, one can check that \begin{equation} a \, b \le \frac{a^p}{p} + \frac{b^q}{q} \end{equation} for every $a, b \ge 0$.
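More precisely, if $a, b > 0$, then \begin{equation} a \, b = \exp\Big(\frac{1}{p} \, \log a^p + \frac{1}{q} \, \log b^q\Big) \le \frac{1}{p} \, \exp(\log a^p) + \frac{1}{q} \, \exp(\log b^q) = \frac{a^p}{p} + \frac{b^q}{q}, \end{equation} since $1/p + 1/q = 1$ and the exponential function is convex, while the case where $a = 0$ or $b = 0$ is trivial.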
Applying this to $a = |f(x)|$, $b = |g(x)|$, and summing over $x \in E$, we get that \begin{equation} \label{sum_{x in E} |f(x)| |g(x)| le ...} \sum_{x \in E} |f(x)| \, |g(x)| \le p^{-1} \sum_{x \in E} |f(x)|^p + q^{-1} \sum_{x \in E} |g(x)|^q. \end{equation} In particular, $f \, g \in \ell^1(E)$, and \begin{equation} \label{||f g||_1 le p^{-1} ||f||_p^p + q^{-1} ||g||_q^q} \|f \, g\|_1 \le p^{-1} \, \|f\|_p^p + q^{-1} \, \|g\|_q^q, \end{equation} which implies H\"older's inequality in the special case where $\|f\|_p = \|g\|_q = 1$. If $f$ and $g$ are not identically $0$ on $E$, then one can reduce to this case, by considering $f' = f / \|f\|_p$, $g' = g/\|g\|_q$. Otherwise, if $f$ or $g$ is identically $0$ on $E$, then the result is trivial. If $f \in \ell^p(E)$, $g \in \ell^q(E)$, then put \begin{equation} \lambda_g(f) = \sum_{x \in E} f(x) \, g(x). \end{equation} H\"older's inequality implies that \begin{equation} |\lambda_g(f)| \le \|f\|_p \, \|g\|_q, \end{equation} so that $\lambda_g(f)$ defines a continuous linear functional on $\ell^p(E)$ for each $g \in \ell^q(E)$, with dual norm less than or equal to $\|g\|_q$. One can check that the dual norm of $\lambda_g$ on $\ell^p(E)$ is actually equal to $\|g\|_q$, by choosing $f$ such that \begin{equation} \label{f(x) g(x) = |f(x)|^p = |g(x)|^q} f(x) \, g(x) = |f(x)|^p = |g(x)|^q \end{equation} for every $x \in E$. These conditions on $f$ are consistent with each other, because $p$ and $q$ are conjugate exponents. Conversely, if $\lambda$ is a continuous linear functional on $\ell^p(E)$, then one can show that $\lambda = \lambda_g$ for some $g \in \ell^q(E)$. As usual, one can start by putting $g(x) = \lambda(\delta_x)$, where $\delta_x$ is the function on $E$ equal to $1$ at $x$ and to $0$ elsewhere. This permits $\lambda_g(f)$ to be defined as in the previous paragraph when $f$ has finite support on $E$, in which case it agrees with $\lambda(f)$, by linearity. The next step is to show that \begin{equation} \Big(\sum_{x \in A} |g(x)|^q\Big)^{1/q} \end{equation} is bounded by the dual norm of $\lambda$ on $\ell^p(E)$ when $A$ is a finite subset of $E$. This can be done by choosing $f$ such that (\ref{f(x) g(x) = |f(x)|^p = |g(x)|^q}) holds when $x \in A$, and $f(x) = 0$ when $x \in E \backslash A$. This implies that $g \in \ell^q(E)$, and that $\|g\|_q$ is less than or equal to the dual norm of $\lambda$ on $\ell^p(E)$. The remaining point is that $\lambda(f) = \lambda_g(f)$ for every $f \in \ell^p(E)$. We already know that this holds when $f$ has finite support on $E$, which implies that it holds for every $f \in \ell^p(E)$, because functions with finite support are dense in $\ell^p(E)$, and because $\lambda$ and $\lambda_g$ are continuous on $\ell^p(E)$. \section{$p < 1$} \label{p < 1} \setcounter{equation}{0} Let $E$ be a nonempty set, and let $p$ be a positive real number strictly less than $1$. As in Section \ref{ell^p(E)}, \begin{equation} d_p(f, g) = \|f - g\|_p^p \end{equation} defines a metric on $\ell^p(E)$. It is easy to see that addition and scalar multiplication are continuous with respect to the topology associated to this metric, so that $\ell^p(E)$ becomes a topological vector space. If $E$ has only finitely many elements, then $\ell^p(E)$ can be identified with ${\bf R}^n$ or ${\bf C}^n$, as appropriate, where $n$ is the number of elements of $E$, and the topology on $\ell^p(E)$ determined by this metric corresponds exactly to the standard topology on ${\bf R}^n$ or ${\bf C}^n$.
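For instance, if $E = \{1, 2\}$, $p = 1/2$, and $f$, $g$ are the functions on $E$ corresponding to $(1, 0)$ and $(0, 1)$, respectively, then $\|f\|_{1/2} = \|g\|_{1/2} = 1$, while \begin{equation} \|f + g\|_{1/2} = (1 + 1)^2 = 4 > 2 = \|f\|_{1/2} + \|g\|_{1/2}, \end{equation} so that the ordinary triangle inequality fails in this case. However, \begin{equation} d_{1/2}(f + g, 0) = \|f + g\|_{1/2}^{1/2} = 2 = d_{1/2}(f, 0) + d_{1/2}(g, 0), \end{equation} which is consistent with the fact that $d_{1/2}(f, g)$ is a metric on $\ell^{1/2}(E)$.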
If $f \in \ell^p(E)$ and $g \in \ell^\infty(E)$, then $f \, g \in \ell^p(E) \subseteq \ell^1(E)$, and we can put \begin{equation} \lambda_g(f) = \sum_{x \in E} f(x) \, g(x). \end{equation} Moreover, \begin{equation} |\lambda_g(f)| \le \|f\|_1 \, \|g\|_\infty \le \|f\|_p \, \|g\|_\infty. \end{equation} Using this estimate, it is easy to see that $\lambda_g$ is a continuous linear functional on $\ell^p(E)$ with respect to the topology associated to the metric defined in the previous paragraph. Conversely, suppose that $\lambda$ is a continuous linear functional on $\ell^p(E)$. This implies that there is a $\delta > 0$ such that \begin{equation} |\lambda(f)| \le 1 \end{equation} for all $f \in \ell^p(E)$ such that $d_p(f, 0) = \|f\|_p^p < \delta$. Equivalently, there is a $C \ge 0$ such that \begin{equation} |\lambda(f)| \le C \, \|f\|_p \end{equation} for every $f \in \ell^p(E)$, because of linearity. Put $g(x) = \lambda(\delta_x)$ for each $x \in E$, where $\delta_x$ is the function on $E$ equal to $1$ at $x$ and to $0$ elsewhere. Thus $|g(x)| \le C$ for every $x \in E$, because $\|\delta_x\|_p = 1$. This permits us to define $\lambda_g$ as in the preceding paragraph. By construction, $\lambda(f) = \lambda_g(f)$ when $f$ has finite support on $E$. It is easy to see that functions with finite support on $E$ are dense in $\ell^p(E)$, for basically the same reasons as when $1 \le p < \infty$. Hence $\lambda(f) = \lambda_g(f)$ for every $f \in \ell^p(E)$, since $\lambda$, $\lambda_g$ are both continuous on $\ell^p(E)$. If $E$ has at least two elements, then the unit ball in $\ell^p(E)$ is not convex, unlike the situation when $p \ge 1$. If $E$ has infinitely many elements, then the convex hull of the unit ball in $\ell^p(E)$ is not even bounded with respect to $\|f\|_p$, since it contains all functions $f$ on $E$ with finite support such that $\|f\|_1 \le 1$, for instance. However, if $f, g \in \ell^p(E)$, $0 \le t \le 1$, and $h$ is another function on $E$ that satisfies \begin{equation} |h(x)| \le |f(x)|^t \, |g(x)|^{1 - t} \end{equation} for every $x \in E$, then $h \in \ell^p(E)$, and \begin{equation} \|h\|_p \le \|f\|_p^t \, \|g\|_p^{1 - t}. \end{equation} This follows from H\"older's inequality, and works for all $p > 0$. In particular, $\|h\|_p \le 1$ when $\|f\|_p, \|g\|_p \le 1$, which is a multiplicative convexity property of the unit ball in $\ell^p(E)$. \section{Bounded linear mappings, revisited} \label{bounded linear mappings, revisited} \setcounter{equation}{0} Let $V$ be a real or complex vector space with a norm $\|v\|_V$, and consider the space $\mathcal{BL}(V) = \mathcal{BL}(V, V)$ of bounded linear mappings from $V$ into itself. This is an associative algebra, with composition of linear operators as multiplication, and the identity operator $I$ on $V$ as the multiplicative identity element. Note that $\|I\|_{op} = 1$, except in the trivial case where $V$ consists of only the zero element. If $V$ is complete, then $\mathcal{BL}(V)$ is also complete with respect to the operator norm, as in Section \ref{bounded linear mappings}. Thus $\mathcal{BL}(V)$ is a Banach algebra when $V$ is a Banach space and $V \ne \{0\}$. If $V$ is finite-dimensional, then $\mathcal{BL}(V)$ is the same as the algebra of all linear transformations on $V$. In particular, $\mathcal{BL}(V)$ is not commutative when the dimension of $V$ is greater than or equal to $2$. 
This includes the case where $V$ is infinite-dimensional, since the Hahn--Banach theorem may be used to get plenty of bounded linear operators on $V$ with finite rank. As an example, let $V$ be the space of real or complex-valued continuous functions on $[0, 1]$, equipped with the supremum norm. If $f$ is a continuous function on $[0, 1]$, then let $T(f)$ be the function defined on $[0, 1]$ by \begin{equation} \label{T(f)(x) = int_0^x f(y) dy} T(f)(x) = \int_0^x f(y) \, dy. \end{equation} Note that $T(f)$ is continuously-differentiable on $[0, 1]$, with derivative equal to $f$. In particular, $T(f)$ is continuous on $[0, 1]$. Moreover, \begin{equation} |T(f)(x)| \le \int_0^x |f(y)| \, dy \le \int_0^1 |f(y)| \, dy \le \|f\|_{sup} \end{equation} for every $f \in C([0, 1])$ and $x \in [0, 1]$, which implies that \begin{equation} \|T(f)\|_{sup} \le \int_0^1 |f(y)| \, dy \le \|f\|_{sup}. \end{equation} It follows that $T$ is a bounded linear mapping from $C([0, 1])$ into itself, with operator norm less than or equal to $1$. It is easy to see that $\|T\|_{op} = 1$, by considering the case where $f$ is the constant function equal to $1$ on $[0, 1]$. Let $n$ be a positive integer, and let $T^n = T \circ \cdots \circ T$ be the $n$-fold composition of $T$. This can be expressed by the $n$-fold integral \begin{equation} \label{T^n(f)(x) = ...} T^n(f)(x) = \int_0^x \int_0^{y_n} \cdots \int_0^{y_2} f(y_1) \, dy_1 \cdots dy_{n - 1} \, dy_n. \end{equation} Thus \begin{eqnarray} |T^n(f)(x)| & \le & \int_0^x \int_0^{y_n} \cdots \int_0^{y_2} |f(y_1)| \, dy_1 \cdots dy_{n - 1} \, dy_n \\ & \le & \int_0^1 \int_0^{y_n} \cdots \int_0^{y_2} |f(y_1)| \, dy_1 \cdots dy_{n - 1} \, dy_n. \nonumber \end{eqnarray} If \begin{equation} \sigma(n) = \int_0^1 \int_0^{y_n} \cdots \int_0^{y_2} dy_1 \cdots dy_{n - 1} \, dy_n, \end{equation} then we get that \begin{equation} \|T^n(f)\|_{sup} \le \sigma(n) \, \|f\|_{\sup}. \end{equation} This shows that the operator norm of $T^n$ on $C([0, 1])$ is less than or equal to $\sigma(n)$, and it is again easy to see that $\|T^n\|_{op} = \sigma(n)$, by considering the case where $f$ is the constant function equal to $1$ on $[0, 1]$. In fact, if ${\bf 1}_{[0, 1]}$ denotes the constant function equal to $1$ on $[0, 1]$, then it is easy to check that \begin{equation} T^n({\bf 1}_{[0, 1]})(x) = \frac{x^n}{n!}, \end{equation} using induction on $n$. In particular, \begin{equation} \label{sigma(n) = T^n({bf 1}_{[0, 1]})(1) = frac{1}{n!}} \sigma(n) = T^n({\bf 1}_{[0, 1]})(1) = \frac{1}{n!}. \end{equation} Alternatively, $\sigma(n)$ is the same as the $n$-dimensional volume of the $n$-dimensional simplex \begin{equation} \label{def of Sigma(n)} \Sigma(n) = \{y \in {\bf R}^n : 0 \le y_1 \le y_2 \le \cdots \le y_{n - 1} \le y_n \le 1\}. \end{equation} That the volume of $\Sigma(n)$ is equal to $1/n!$ can also be seen geometrically, by decomposing the unit cube in ${\bf R}^n$ into $n!$ copies of $\Sigma(n)$ with disjoint interiors. These copies of $\Sigma(n)$ are obtained by permuting the standard coordinates of ${\bf R}^n$, using the $n!$ permutations on the set $\{1, \ldots, n\}$. Each copy of $\Sigma(n)$ has the same $n$-dimensional volume as $\Sigma(n)$, and the intersection of any two distinct copies has measure $0$. Thus the sum of the volumes of all of these copies of $\Sigma(n)$ is equal to $n!$ times the volume of $\Sigma(n)$, and is also equal to the volume of the unit cube, which is equal to $1$. Observe that $n! 
\ge k^{n - k + 1}$ for each positive integer $k$ when $n \ge k$, so that \begin{equation} \label{(n!)^{-1/n} le k^{(k - 1)/n - 1}} (n!)^{-1/n} \le k^{(k - 1)/n - 1} \end{equation} when $n \ge k$. In particular, \begin{equation} (n!)^{-1/n} \le k^{-1/2} \end{equation} when $n \ge 2 k$, which implies that \begin{equation} \lim_{n \to \infty} (n!)^{-1/n} = 0, \end{equation} since the previous statement works for every positive integer $k$. It follows that \begin{equation} \lim_{n \to \infty} \|T^n\|_{op}^{1/n} = 0, \end{equation} because $\|T^n\|_{op} = \sigma(n) = 1/n!$. Equivalently, this shows that $r(T) = 0$, in the notation of Section \ref{spectral radius}. This would be trivial if $T^n = 0$ for some positive integer $n$, which is clearly not the case in this example. Let $\mathcal{A}$ be an associative algebra over the real or complex numbers with a multiplicative identity element, such as the algebra of bounded linear operators on a vector space with a norm. If $x \in \mathcal{A}$, then let $\mathcal{A}(x)$ be the subalgebra of $\mathcal{A}$ generated by $x$, consisting of linear combinations of the multiplicative identity element and positive powers of $x$. It is easy to see that this is a commutative subalgebra of $\mathcal{A}$, even if $\mathcal{A}$ is not commutative. If $\mathcal{A}$ is a topological algebra, then the closure of a commutative subalgebra of $\mathcal{A}$ is also commutative. If $\mathcal{A}$ is a Banach algebra, then closed subalgebras of $\mathcal{A}$ are Banach algebras too. \section{Involutions} \label{involutions} \setcounter{equation}{0} Let $\mathcal{A}$ be an associative algebra over the real or complex numbers. A mapping \begin{equation} \label{x mapsto x^*} x \mapsto x^* \end{equation} on $\mathcal{A}$ is said to be an \emph{involution} if it satisfies the following three conditions. First, (\ref{x mapsto x^*}) should be linear in the real case, and conjugate-linear in the complex case. This means that \begin{equation} (x + y)^* = x^* + y^* \end{equation} for every $x, y \in \mathcal{A}$ in both cases, \begin{equation} (t \, x)^* = t \, x^* \end{equation} for every $x \in \mathcal{A}$ and $t \in {\bf R}$ in the real case, and \begin{equation} (t \, x)^* = \overline{t} \, x^* \end{equation} in the complex case. Second, (\ref{x mapsto x^*}) should be compatible with multiplication in $\mathcal{A}$, in the sense that \begin{equation} \label{(x y)^* = y^* x^*} (x \, y)^* = y^* \, x^* \end{equation} for every $x, y \in \mathcal{A}$. Of course, (\ref{(x y)^* = y^* x^*}) is the same as \begin{equation} \label{(x y)^* = x^* y^*} (x \, y)^* = x^* \, y^* \end{equation} when $\mathcal{A}$ is commutative. The third condition is that \begin{equation} (x^*)^* = x \end{equation} for every $x \in \mathcal{A}$. In particular, this implies that (\ref{x mapsto x^*}) is a one-to-one mapping of $\mathcal{A}$ onto itself. If $\mathcal{A}$ has a multiplicative identity element $e$, then it follows from the multiplicativity condition (\ref{(x y)^* = y^* x^*}) that \begin{equation} \label{e^* = e} e^* = e. \end{equation} If $\mathcal{A}$ is equipped with a norm, then one normally asks also that the involution be isometric, so that \begin{equation} \label{||x^*|| = ||x||} \|x^*\| = \|x\| \end{equation} for every $x \in \mathcal{A}$. If $\mathcal{A}$ is the algebra of continuous complex-valued functions on a topological space, then \begin{equation} \label{f(p) mapsto overline{f(p)}} f(p) \mapsto \overline{f(p)} \end{equation} defines an involution on $\mathcal{A}$.
This would not work for holomorphic functions, because the complex conjugate of a holomorphic function $f$ is also holomorphic only when $f$ is constant. If $\mathcal{A}$ is the algebra of $n \times n$ matrices of real numbers with respect to matrix multiplication, then the transpose of a matrix defines an involution on $\mathcal{A}$. If instead $\mathcal{A}$ is the algebra of $n \times n$ matrices of complex numbers with respect to matrix multiplication, then one can get an involution on $\mathcal{A}$ by taking the complex conjugates of the entries of the transpose of a matrix. If $(V, \langle v, w \rangle)$ is a real or complex Hilbert space and $T$ is a bounded linear operator on $V$, then it is well known that there is a unique bounded linear operator $T^*$ on $V$ such that \begin{equation} \langle T(v), w \rangle = \langle v, T^*(w) \rangle \end{equation} for every $v, w \in V$, known as the \emph{adjoint} of $T$. It is easy to see that this defines an involution on the algebra $\mathcal{BL}(V)$ of bounded linear operators on $V$. The adjoint of $T$ corresponds exactly to the transpose of a real matrix or the complex conjugate of the transpose of a complex matrix when $T$ is represented by a matrix with respect to an orthonormal basis for $V$. Using the definition of the norm associated to an inner product and the Cauchy--Schwarz inequality, one can check that \begin{equation} \|T\|_{op} = \sup \{|\langle T(v), w \rangle| : v, w \in V, \, \|v\|, \|w\| \le 1\} \end{equation} for every bounded linear operator $T$ on $V$. This implies that \begin{equation} \|T^*\|_{op} = \|T\|_{op} \end{equation} for every $T \in \mathcal{BL}(V)$, using the symmetry properties of the inner product and interchanging the roles of $v$ and $w$ in the previous expression for the operator norm of $T^*$. Moreover, \begin{equation} \label{||T^* circ T||_{op} = ||T||_{op}^2} \|T^* \circ T\|_{op} = \|T\|_{op}^2. \end{equation} Of course, \begin{equation} \|T^* \circ T\|_{op} \le \|T^*\|_{op} \, \|T\|_{op} = \|T\|_{op}^2, \end{equation} and so it suffices to show the opposite inequality. Observe that \begin{equation} \langle T^*(T(v)), v \rangle = \langle T(v), T(v) \rangle = \|T(v)\|^2, \end{equation} by the definition of the adjoint operator $T^*$. This implies that \begin{equation} \|T(v)\|^2 \le \|T^*(T(v))\| \, \|v\| \le \|T^* \circ T\|_{op} \, \|v\|^2, \end{equation} by the Cauchy--Schwarz inequality and the definition of the operator norm. Thus \begin{equation} \|T\|_{op}^2 \le \|T^* \circ T\|_{op}, \end{equation} as desired. A Banach algebra $(\mathcal{A}, \|x\|)$ equipped with an isometric involution $x \mapsto x^*$ is said to be a \emph{$C^*$ algebra} if \begin{equation} \label{||x^* x|| = ||x||^2} \|x^* \, x\| = \|x\|^2 \end{equation} for every $x \in \mathcal{A}$. This includes the algebras of bounded linear operators on real or complex Hilbert spaces, as in the previous paragraphs. This also includes the algebra of real or complex-valued bounded continuous functions on a topological space $X$ with respect to the supremum norm, where the involution is given by complex conjugation as in (\ref{f(p) mapsto overline{f(p)}}) in the complex case, and by the identity operator in the real case. The same involutions are defined and isometric on the algebras of real and complex-valued continuously-differentiable functions on the unit interval, as in Section \ref{C^1 functions}, but the $C^1$ norm does not satisfy the $C^*$ condition (\ref{||x^* x|| = ||x||^2}).
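As a simple example, let $V$ be ${\bf C}^2$ with the standard inner product, and let $T$ be the bounded linear operator defined by $T(v_1, v_2) = (v_2, 0)$, so that $T^*(w_1, w_2) = (0, w_1)$. In this case, $\|T\|_{op} = 1$ and $(T^* \circ T)(v_1, v_2) = (0, v_2)$, so that \begin{equation} \|T^* \circ T\|_{op} = 1 = \|T\|_{op}^2, \end{equation} which is consistent with (\ref{||T^* circ T||_{op} = ||T||_{op}^2}). Note that $T \circ T = 0$, so that $\|T \circ T\|_{op} = 0 < \|T\|_{op}^2$, and that $T$ does not commute with $T^*$, since $(T \circ T^*)(v_1, v_2) = (v_1, 0)$. This is in contrast with the situation for self-adjoint and normal elements discussed below.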
Suppose that $\tau$ is a continuous involution on a topological space $X$, which is to say a continuous mapping from $X$ into itself such that \begin{equation} \tau(\tau(p)) = p \end{equation} for every $p \in X$. Equivalently, $\tau$ is its own inverse, and hence a homeomorphism from $X$ onto itself. Under these conditions, \begin{equation} f(p) \mapsto f(\tau(p)) \end{equation} is an involution on the algebra of real-valued continuous functions on $X$, and \begin{equation} f(p) \mapsto \overline{f(\tau(p))} \end{equation} is an involution on the algebra of complex-valued continuous functions on $X$. These involutions also preserve the supremum norms of bounded continuous functions on $X$. However, the $C^*$ condition (\ref{||x^* x|| = ||x||^2}) does not work when $\tau$ is not the identity mapping on $X$, at least when $X$ is sufficiently regular to have enough continuous functions. As a variant of this, let $U$ be the open unit disk in the complex plane. If $f(z)$ is a holomorphic function on $U$, then it is well known that \begin{equation} \label{overline{f(overline{z})}} \overline{f(\overline{z})} \end{equation} is also holomorphic on $U$. It is easy to see that this defines an involution on the algebra of holomorphic functions on $U$, which preserves the supremum norm of bounded holomorphic functions on $U$. However, if \begin{equation} f(z) = z + i, \end{equation} then the supremum norm of $f$ on $U$ is equal to $2$, and the supremum norm of \begin{equation} \label{overline{f(overline{z})} f(z) = (z - i) (z + i) = z^2 + 1} \overline{f(\overline{z})} \, f(z) = (z - i) (z + i) = z^2 + 1 \end{equation} is equal to $2$ as well. Thus the $C^*$ condition (\ref{||x^* x|| = ||x||^2}) does not work in this case either, when we restrict our attention to bounded holomorphic functions on $U$, since the supremum norm of (\ref{overline{f(overline{z})} f(z) = (z - i) (z + i) = z^2 + 1}) on $U$ is strictly less than the square of the supremum norm of $f$. Let $(\mathcal{A}, \|x\|, x^*)$ be a real or complex $C^*$ algebra, and suppose that $x \in \mathcal{A}$ satisfies \begin{equation} x^* = x. \end{equation} In this case, the $C^*$ condition (\ref{||x^* x|| = ||x||^2}) reduces to \begin{equation} \|x^2\| = \|x\|^2. \end{equation} If $l$ is a positive integer, then \begin{equation} (x^l)^* = (x^*)^l = x^l, \end{equation} and so we can apply the previous statement to $x^l$ to get that \begin{equation} \|x^{2 l}\| = \|x^l\|^2. \end{equation} Applying this repeatedly, we get that \begin{equation} \|x^{2^n}\| = \|x\|^{2^n} \end{equation} for each positive integer $n$. Of course, \begin{equation} \|x^l\| \le \|x\|^l \end{equation} for any positive integer $l$, by the submultiplicative property of the norm. If we choose a positive integer $n$ such that $l \le 2^n$, then we get that \begin{equation} \|x\|^{2^n} = \|x^{2^n}\| \le \|x^l\| \, \|x\|^{2^n - l}, \end{equation} using the submultiplicative property of the norm again. This implies that \begin{equation} \|x\|^l \le \|x^l\|, \end{equation} and hence that \begin{equation} \|x^l\| = \|x\|^l \end{equation} for each positive integer $l$. If $y$ is any element of $\mathcal{A}$, then $x = y^* \, y$ satisfies $x^* = x$. Thus we get \begin{equation} \|(y^* \, y)^l\| = \|y^* \, y\|^l = \|y\|^{2 \, l} \end{equation} for each positive integer $l$. Suppose that $y^*$ commutes with $y$, so that \begin{equation} (y^* \, y)^l = (y^*)^l \, y^l = (y^l)^* \, y^l \end{equation} for each $l$, and hence \begin{equation} \|(y^* \, y)^l\| = \|(y^l)^* \, y^l\| = \|y^l\|^2.
\end{equation} This implies that \begin{equation} \|y^l\| = \|y\|^l \end{equation} for each positive integer $l$, as before. \part{Several variables} \section{Power series} \label{power series} \setcounter{equation}{0} Let $n$ be a positive integer, and let \begin{equation} \sum_\alpha a_\alpha \, z^\alpha \end{equation} be a power series in $n$ complex variables. More precisely, the sum is taken over all multi-indices $\alpha = (\alpha_1, \ldots, \alpha_n)$, $z^\alpha = z_1^{\alpha_1} \cdots z_n^{\alpha_n}$ is the corresponding monomial, and the coefficients $a_\alpha$ are complex numbers. Let $A$ be the set of $z = (z_1, \ldots, z_n) \in {\bf C}^n$ for which this series converges absolutely, in the sense that $a_\alpha \, z^\alpha$ is a summable function of $\alpha$ on the set of multi-indices. Thus $0 \in A$ trivially, and $w \in A$ whenever there is a $z \in A$ such that $|w_j| \le |z_j|$ for $j = 1, \ldots, n$, by the comparison test. Let $\sum_\alpha b_\alpha \, z^\alpha$ be another power series, and let $B$ be the set of $z \in {\bf C}^n$ on which this series converges absolutely, as before. Note that \begin{equation} \sum_\alpha (a_\alpha + b_\alpha) \, z^\alpha \end{equation} converges absolutely for every $z \in A \cap B$. The product of these two power series can be expressed formally as \begin{equation} \label{(sum a_alpha z^alpha) (sum b_beta z^beta) = sum c_gamma z^gamma} \Big(\sum_\alpha a_\alpha \, z^\alpha\Big) \, \Big(\sum_\beta b_\beta \, z^\beta\Big) = \sum_\gamma c_\gamma \, z^\gamma, \end{equation} where \begin{equation} \label{c_gamma = sum_{alpha + beta = gamma} a_alpha b_beta} c_\gamma = \sum_{\alpha + \beta = \gamma} a_\alpha \, b_\beta. \end{equation} More precisely, the sum on the right is taken over all multi-indices $\alpha$, $\beta$ such that $\alpha + \beta = \gamma$, of which there are only finitely many. If $z \in A \cap B$, then one can check that $\sum_\gamma c_\gamma \, z^\gamma$ converges absolutely, and that the sum satisfies (\ref{(sum a_alpha z^alpha) (sum b_beta z^beta) = sum c_gamma z^gamma}). As a first step, one can verify that \begin{equation} \label{sum |c_gamma| |z^gamma| le ...} \sum_\gamma |c_\gamma| \, |z^\gamma| \le \Big(\sum_\alpha |a_\alpha| \, |z^\alpha|\Big) \, \Big(\sum_\beta |b_\beta| \, |z^\beta|\Big), \end{equation} by estimating the sum over finitely many $\gamma$'s in terms of the product of sums over finitely many $\alpha$'s and $\beta$'s. This implies that $\sum_\gamma c_\gamma \, z^\gamma$ converges absolutely when $z \in A \cap B$, and one can show that (\ref{(sum a_alpha z^alpha) (sum b_beta z^beta) = sum c_gamma z^gamma}) holds by approximating infinite sums by sums with only finitely many nonzero terms. It suffices to consider the case where $z = (1, \ldots, 1)$, since otherwise the monomials in $z$ can be absorbed into the coefficients. One can also use linearity to reduce to the case where the coefficients are nonnegative real numbers, and estimate products of sums of finitely many $a_\alpha$'s and $b_\beta$'s in terms of sums of finitely many $c_\gamma$'s. Let us return to a single power series $\sum_\alpha a_\alpha \, z^\alpha$, and suppose that $w, z \in A$ and $u \in {\bf C}^n$ satisfy \begin{equation} \label{|u_j| le |w_j|^t |z_j|^{1 - t}} |u_j| \le |w_j|^t \, |z_j|^{1 - t} \end{equation} for some $t \in {\bf R}$, $0 < t < 1$, and each $j = 1, \ldots, n$. Hence \begin{equation} |u^\alpha| \le |w^\alpha|^t \, |z^\alpha|^{1 - t} \end{equation} for each multi-index $\alpha$. 
The convexity of the exponential function on the real line implies that \begin{equation} \label{k^t l^{1 - t} le t k + (1 - t) l} k^t \, l^{1 - t} \le t \, k + (1 - t) \, l \end{equation} for every $k, l \ge 0$. Applying this to $k = |w^\alpha|$, $l = |z^\alpha|$ and summing over $\alpha$, we get that $u \in A$, because \begin{equation} \sum_\alpha |a_\alpha| \, |u^\alpha| \le t \sum_\alpha |a_\alpha| \, |w^\alpha|^t + (1 - t) \sum_\alpha |a_\alpha| \, |z^\alpha|^{1 - t}. \end{equation} \section{Power series, continued} \label{power series, continued} \setcounter{equation}{0} Let $n$ be a positive integer, and let $\sum_\alpha a_\alpha \, z^\alpha$ be a power series with complex coefficients in $z = (z_1, \ldots, z_n)$. If $l$ is a nonnegative integer, then \begin{equation} p_l(z) = \sum_{|\alpha| = l} a_\alpha \, z^\alpha \end{equation} is a homogeneous polynomial of degree $l$ in $z$, where more precisely the sum is taken over the finitely many multi-indices $\alpha$ such that $|\alpha| = l$. Of course, \begin{equation} \label{sum_{l = 0}^infty p_l(z) = sum_alpha a_alpha z^alpha} \sum_{l = 0}^\infty p_l(z) = \sum_\alpha a_\alpha \, z^\alpha \end{equation} formally, which gives another way to look at the convergence of $\sum_\alpha a_\alpha \, z^\alpha$. In particular, if $\sum_\alpha a_\alpha \, z^\alpha$ converges absolutely for some $z \in {\bf C}^n$, then $\sum_{l = 0}^\infty p_l(z)$ converges absolutely, and the two sums are the same. This uses the fact that \begin{equation} |p_l(z)| \le \sum_{|\alpha| = l} |a_\alpha| \, |z^\alpha| \end{equation} for each $l$. Let $\sum_\alpha b_\alpha \, z^\alpha$ be another power series, with the corresponding polynomials \begin{equation} q_l(z) = \sum_{|\alpha| = l} b_\alpha \, z^\alpha. \end{equation} Thus $p_l(z) + q_l(z)$ are the polynomials associated to $\sum_\alpha (a_\alpha + b_\alpha) \, z^\alpha$. Suppose that $\sum_\gamma c_\gamma \, z^\gamma$ is the power series obtained by formally multiplying $\sum_\alpha a_\alpha \, z^\alpha$ and $\sum_\beta b_\beta \, z^\beta$, so that \begin{equation} c_\gamma = \sum_{\alpha + \beta = \gamma} a_\alpha \, b_\beta. \end{equation} It is easy to check that the corresponding polynomials \begin{equation} r_l = \sum_{|\gamma| = l} c_\gamma \, z^\gamma \end{equation} are also given by \begin{equation} r_l = \sum_{j = 0}^l p_j(z) \, q_{l - j}(z). \end{equation} This shows that $r_l$ is the Cauchy product of the $p_j$'s and $q_k$'s. Note that \begin{equation} \label{sum_{l = 0}^infty p_l(t z) = sum_{l = 0}^infty t^l p_l(z)} \sum_{l = 0}^\infty p_l(t \, z) = \sum_{l = 0}^\infty t^l \, p_l(z) \end{equation} may be considered as an ordinary power series in $t \in {\bf C}$ for each $z \in {\bf C}^n$. This gives another way to look at the Cauchy product in the preceding paragraph, as the coefficients of the product of two power series in $t$. If $\sum_{l = 0}^\infty p_l(z)$ converges for some $z \in {\bf C}^n$, then $\{p_l(z)\}_{l = 1}^\infty$ converges to $0$, and hence $\{p_l(z)\}_{l = 1}^\infty$ is bounded. This implies that (\ref{sum_{l = 0}^infty p_l(t z) = sum_{l = 0}^infty t^l p_l(z)}) converges absolutely when $|t| < 1$, by the comparison test. Consider \begin{equation} p^*(z) = \limsup_{l \to \infty} |p_l(z)|^{1/l}, \end{equation} which takes values in $[0, \infty]$. Observe that \begin{equation} \label{p^*(t z) = |t| rho(z)} p^*(t \, z) = |t| \, p^*(z) \end{equation} for each $t \in {\bf C}$ and $z \in {\bf C}^n$, because $p_l(z)$ is homogeneous of degree $l$. 
The right side of (\ref{p^*(t z) = |t| rho(z)}) should be interpreted as being $0$ when $t = 0$, even when $p^*(z) = +\infty$, because $p^*(0) = 0$. The \emph{root test} states that $\sum_{l = 0}^\infty p_l(z)$ converges absolutely when $p^*(z) < 1$, and diverges when $p^*(z) > 1$. It follows that the radius of convergence of (\ref{sum_{l = 0}^infty p_l(t z) = sum_{l = 0}^infty t^l p_l(z)}) as a power series in $t$ is equal to $1/p^*(z)$. \section{Linear transformations} \label{linear transformations} \setcounter{equation}{0} Let $n$ be a positive integer, and let $T$ be a one-to-one linear transformation from ${\bf C}^n$ onto itself. Consider the mapping $\rho_T$ acting on complex-valued functions on ${\bf C}^n$ defined by \begin{equation} \rho_T(f)(z) = f(T^{-1}(z)). \end{equation} Thus \begin{equation} \rho_T(f + g) = \rho_T(f) + \rho_T(g) \end{equation} and \begin{equation} \rho_T(f \, g) = \rho_T(f) \, \rho_T(g). \end{equation} for any pair of functions $f$, $g$ on ${\bf C}^n$. If $f$ is a polynomial on ${\bf C}^n$, then it is easy to see that $\rho_T(f)$ is a polynomial too. If $f$ is a homogeneous polynomial, then $\rho_T(f)$ is a homogeneous polynomial as well, of the same degree. Of course, $\rho_T(f) = f$ for every function $f$ on ${\bf C}^n$ when $T$ is the identity transformation on ${\bf C}^n$. If $R$, $T$ are arbitrary invertible linear transformation on ${\bf C}^n$, then \begin{eqnarray} \rho_R(\rho_T(f))(z) & = & \rho_T(f)(R^{-1}(z)) = f(T^{-1}(R^{-1}(z))) \\ & = & f((R \circ T)^{-1}(z)) = \rho_{R \circ T}(f)(z). \nonumber \end{eqnarray} In particular, $\rho_{T^{-1}} = (\rho_T)^{-1}$. Let $GL({\bf C}^n)$ be the group of invertible linear transformations on ${\bf C}^n$, with composition of mappings as the group operation. It follows that $T \mapsto \rho_T$ is a homomorphism from $GL({\bf C}^n)$ into the group of invertible linear transformations on the space of functions on ${\bf C}^n$, which is to say a representation of $GL({\bf C}^n)$ on the space of functions on ${\bf C}^n$. Let $f(z) = \sum_\alpha a_\alpha \, z^\alpha$ be a formal power series with complex coefficients. This can also be expressed as $\sum_{l = 0}^\infty p_l(z)$, where $p_l(z)$ is a homogeneous polynomial of degree $l$ for each $l \ge 0$. If $T$ is an invertible linear transformation on ${\bf C}^n$, then we can take $\rho_T(f)$ to be the formal power series that corresponds to $\sum_{l = 0}^\infty \rho_T(p_l)$. It is easy to see that this preserves sums and products of power series, just as for ordinary functions. In particular, this defines a representation of $GL({\bf C}^n)$ on the space of formal power series. If $\sum_{l = 0}^\infty p_l(z)$ converges for some $z \in {\bf C}^n$, then $\sum_{l = 0}^\infty \rho_T(p_l)(T(z))$ converges and has the same sum, because it is the same series of complex numbers. If $\sum_{l = 0}^\infty \rho_l(z)$ converges for every $z \in {\bf C}^n$, then $\sum_{l = 0}^\infty \rho_T(p_l)(T(z))$ converges for every $z \in {\bf C}^n$, and has the same sum. Hence the formal and pointwise definitions of $\rho(f)$ are consistent with each other in this case. \section{Abel summability} \label{abel summability} \setcounter{equation}{0} Let $\sum_{j = 0}^\infty a_j$ be an infinite series of complex numbers, and put \begin{equation} \label{A(r) = sum_{j = 0}^infty a_j r^j} A(r) = \sum_{j = 0}^\infty a_j \, r^j \end{equation} when $0 \le r < 1$. 
More precisely, we suppose that the sum on the right converges for each $r < 1$, which implies that $\{a_j \, r^j\}_{j = 0}^\infty$ converges to $0$ for each $r < 1$, and hence that $\{a_j \, r^j\}_{j = 0}^\infty$ is bounded for each $r < 1$. Conversely, if $\{a_j \, r^j\}_{j = 0}^\infty$ is bounded for each $r < 1$, then $\sum_{j = 0}^\infty a_j \, t^j$ converges absolutely for each $t < 1$, as one can see by taking $t < r < 1$ and using the comparison test, since $\sum_{j = 0}^\infty (t/r)^j$ is a convergent geometric series under these conditions. The expressions $A(r)$ are known as the \emph{Abel sums} associated to $\sum_{j = 0}^\infty a_j$, and we say that $\sum_{j = 0}^\infty a_j$ is \emph{Abel summable} if \begin{equation} \lim_{r \to 1-} A(r) \end{equation} exists. If $\sum_{j = 0}^\infty a_j$ converges in the usual sense, then it is Abel summable. To see this, let \begin{equation} \label{s_n = sum_{j = 0}^n a_j} s_n = \sum_{j = 0}^n a_j \end{equation} be the $n$th partial sum of $\sum_{j = 0}^\infty a_j$ when $n \ge 0$, and put $s_{-1} = 0$. Thus $a_j = s_j - s_{j - 1}$ for each $j \ge 0$, and hence \begin{equation} A(r) = \sum_{j = 0}^\infty (s_j - s_{j - 1}) \, r^j = \sum_{j = 0}^\infty s_j \, r^j - \sum_{j = 0}^\infty s_{j - 1} \, r^j \end{equation} when $0 \le r < 1$. There is no problem with the convergence of the series on the right, because the convergence of $\sum_{j = 0}^\infty a_j$ implies that $\{a_j\}_{j = 0}^\infty$ converges to $0$ and is therefore bounded, which implies that $s_n = O(n)$. Of course, \begin{equation} \sum_{j = 0}^\infty s_{j - 1} \, r^j = \sum_{j = 1}^\infty s_{j - 1} \, r^j = \sum_{j = 0}^\infty s_j \, r^{j + 1}, \end{equation} because $s_{-1} = 0$, which implies that \begin{equation} A(r) = \sum_{j = 0}^\infty s_j (r^j - r^{j + 1}) = (1 - r) \sum_{j = 0}^\infty s_j \, r^j \end{equation} when $r < 1$. We would like to show that \begin{equation} \lim_{r \to 1-} A(r) = \lim_{j \to \infty} s_j \end{equation} when the limit on the right side exists. Put $s = \lim_{j \to \infty} s_j$, let $\epsilon > 0$ be given, and choose $L \ge 0$ such that \begin{equation} \label{|s_j - s| < frac{epsilon}{2}} |s_j - s| < \frac{\epsilon}{2} \end{equation} for every $j \ge L$. Observe that \begin{equation} A(r) - s = (1 - r) \sum_{j = 0}^\infty (s_j - s) \, r^j \end{equation} when $r < 1$, because $(1 - r) \sum_{j = 0}^\infty r^j = 1$. It follows that \begin{eqnarray} |A(r) - s| & \le & (1 - r) \sum_{j = 0}^\infty |s_j - s| \, r^j \\ & < & (1 - r) \sum_{j = 0}^{L - 1} |s_j - s| \, r^j + (1 - r) \sum_{j = L}^\infty (\epsilon/2) \, r^j \nonumber \\ & \le & (1 - r) \sum_{j = 0}^{L - 1} |s_j - s| \, r^j + \frac{\epsilon}{2} \nonumber \end{eqnarray} for each $r < 1$. If $r$ is sufficiently close to $1$, then \begin{equation} (1 - r) \sum_{j = 0}^{L - 1} |s_j - s| \, r^j \le (1 - r) \sum_{j = 0}^{L - 1} |s_j - s| < \frac{\epsilon}{2} \end{equation} so that $|A(r) - s| < \epsilon / 2 + \epsilon / 2 = \epsilon$, as desired. If $a \in {\bf C}$ satisfies $|a| = 1$, then \begin{equation} \sum_{j = 0}^\infty a^j \, r^j = \frac{1}{1 - a \, r} \end{equation} when $0 \le r < 1$. Hence $\sum_{j = 0}^\infty a^j$ is Abel summable when $a \ne 1$, with the sum equal to $(1 - a)^{-1}$. Let $\sum_{j = 0}^\infty a_j$, $\sum_{j = 0}^\infty b_j$ be infinite series of complex numbers with Abel sums $A(r)$, $B(r)$, respectively, and note that $\sum_{j = 0}^\infty (a_j + b_j)$ has Abel sums given by $A(r) + B(r)$. 
If $\sum_{j = 0}^\infty a_j$, $\sum_{j = 0}^\infty b_j$ are Abel summable, then it follows that $\sum_{j = 0}^\infty (a_j + b_j)$ is Abel summable, with the Abel sum of the latter equal to the sum of the Abel sums of the first two series. Suppose now that $c_n = \sum_{j = 0}^n a_j \, b_{n - j}$ is the Cauchy product of the $a_j$'s and $b_j$'s, and let $C(r)$ be the corresponding Abel sums. As in Section \ref{cauchy products}, \begin{equation} \label{C(r) = A(r) B(r)} C(r) = A(r) \, B(r) \end{equation} when $0 \le r < 1$. More precisely, if the series defining $A(r)$, $B(r)$ converge absolutely, then the series defining $C(r)$ also converges absolutely, and satisfies (\ref{C(r) = A(r) B(r)}). The existence of the Abel sums for $\sum_{j = 0}^\infty a_j$, $\sum_{j = 0}^\infty b_j$ for each $r < 1$ implies that this condition holds for every $r < 1$, as discussed at the beginning of this section. If $\sum_{j = 0}^\infty a_j$, $\sum_{j = 0}^\infty b_j$ are Abel summable, then it follows that $\sum_{n = 0}^\infty c_n$ is Abel summable, and that the Abel sum of the latter equal to the product of the Abel sums of the former. \section{Multiple Fourier series} \label{multiple fourier series} \setcounter{equation}{0} Let $n$ be a positive integer, and let ${\bf T}^n$ be the $n$-dimensional torus, consisting of $z = (z_1, \ldots, z_n) \in {\bf C}^n$ such that $|z_j| = 1$ for $j = 1, \ldots, n$. If $\alpha = (\alpha_1, \ldots, \alpha_n)$ is an $n$-tuple of integers, then put \begin{equation} z^\alpha = z_1^{\alpha_1} \cdots z_n^{\alpha_n}, \end{equation} with the usual convention that $z_j^{\alpha_j} = 1$ when $\alpha_j = 0$. Thus \begin{equation} \frac{1}{(2 \pi)^n} \int_{{\bf T}^n} z^\alpha \, |dz| = \prod_{j = 1}^n \frac{1}{2 \, \pi} \int_{\bf T} z_j^{\alpha_j} \, |dz_j| \end{equation} is equal to $0$ when $\alpha \ne 0$, and is equal to $1$ when $\alpha = 0$. Here $|dz|$ is the $n$-dimensional element of integration on ${\bf T}^n$ corresponding to the element $|dz_j|$ of arc length in each variable. If $f$ is a continuous complex-valued function on ${\bf T}^n$ and $\alpha \in {\bf Z}^n$, then we put \begin{equation} \label{widehat{f}(alpha) = frac{1}{(2 pi)^n} int_{T^n} f(z) z^{-alpha} |dz|} \widehat{f}(\alpha) = \frac{1}{(2 \pi)^n} \int_{{\bf T}^n} f(z) \, z^{-\alpha} \, |dz|. \end{equation} The corresponding Fourier series is given by \begin{equation} \label{sum_{alpha in {bf Z}^n} widehat{f}(alpha) z^alpha} \sum_{\alpha \in {\bf Z}^n} \widehat{f}(\alpha) \, z^\alpha. \end{equation} For example, if $f(z) = z^\beta$ for some $\beta \in {\bf Z}^n$, then $\widehat{f}(\alpha) = 1$ when $\alpha = \beta$ and is equal to $0$ otherwise. Thus (\ref{sum_{alpha in {bf Z}^n} widehat{f}(alpha) z^alpha}) reduces to $f$ in this case, or when $f$ is a finite linear combination of $z^\beta$'s. Note that \begin{equation} \label{|widehat{f}(alpha)| le frac{1}{(2 pi)^n} int_{{bf T}^n} |f(z)| |dz|} |\widehat{f}(\alpha)| \le \frac{1}{(2 \pi)^n} \int_{{\bf T}^n} |f(z)| \, |dz| \end{equation} for any continuous function $f$ on ${\bf T}^n$ and $\alpha \in {\bf Z}^n$. Let $U^n$ be the $n$-dimensional open unit polydisk, consisting of $z \in {\bf C}^n$ with $|z_j| < 1$ for $j = 1, \ldots, n$. The $n$-dimensional Poisson kernel $P_n(z, w)$ can be defined for $z \in U^n$ and $w \in {\bf T}^n$ by \begin{equation} P_n(z, w) = \prod_{j = 1}^n P(z_j, w_j), \end{equation} where $P(z_j, w_j)$ is the ordinary Poisson kernel evaluated at $z_j$, $w_j$, as in Section \ref{poisson kernel}. 
If $f$ is a continuous function on ${\bf T}^n$, then its Poisson integral is defined on $U^n$ by \begin{equation} \phi(z) = \int_{{\bf T}^n} P_n(z, w) \, f(w) \, |dw|. \end{equation} As before, one can show that $\phi(z) \to f(z_0)$ as $z \in U^n$ tends to $z_0 \in {\bf T}^n$, but one can also do more than this. Let $\overline{U}^n$ be the $n$-dimensional closed unit polydisk, consisting of $z \in {\bf C}^n$ such that $|z_j| \le 1$ for each $j$. Of course, this is the same as the closure of $U^n$ in ${\bf C}^n$. The boundary $\partial U^n$ of $U^n$ in ${\bf C}^n$ consists of $z \in {\bf C}^n$ such that $|z_j| \le 1$ for each $j$ and $|z_j| = 1$ for at least one $j$. In particular, ${\bf T}^n \subseteq \partial U^n$, but ${\bf T}^n$ is a relatively small subset of $\partial U^n$ when $n > 1$. More precisely, $U^n$ has complex dimension $n$ and hence real dimension $2 \, n$, $\partial U^n$ has real dimension $2 n - 1$, and ${\bf T}^n$ has real dimension $n$. One can extend $\phi(z)$ to $z \in \overline{U}^n$ in the following way. If $z \in {\bf T}^n$, then we simply put $\phi(z) = f(z)$. If $z \in \partial U^n \backslash {\bf T}^n$, then $|z_j| < 1$ for at least one $j$, and we define $\phi(z)$ by taking the Poisson integral of $f$ in the $j$th variable when $|z_j| < 1$, and simply evaluating $f$ at $z_j$ in the $j$th variable when $|z_j| = 1$. It is not too difficult to show that this defines a continuous function on $\overline{U}^n$. If $z \in {\bf C}^n$ and $\alpha \in {\bf Z}^n$, then put \begin{equation} \label{widetilde{z}^alpha = ...} \widetilde{z}^\alpha = \widetilde{z}_1^{\alpha_1} \cdots \widetilde{z}_n^{\alpha_n}, \end{equation} where $\widetilde{z}_j^{\alpha_j} = z_j^{\alpha_j}$ when $\alpha_j \ge 0$ and $\widetilde{z}_j^{\alpha_j} = \overline{z_j}^{-\alpha_j}$ when $\alpha_j < 0$. Thus $\widetilde{z}^\alpha = z^\alpha$ when $z \in {\bf T}^n$, and \begin{equation} \label{|widetilde{z}^alpha| = |z_1|^{|alpha_1|} cdots |z_n|^{|alpha_n|}} |\widetilde{z}^\alpha| = |z_1|^{|\alpha_1|} \cdots |z_n|^{|\alpha_n|} \end{equation} for every $z \in {\bf C}^n$. If $z \in U^n$, then it is easy to see that \begin{equation} \label{phi(z) = sum_{alpha in {bf Z}^n} widehat{f}(alpha) widetilde{z}^alpha} \phi(z) = \sum_{\alpha \in {\bf Z}^n} \widehat{f}(\alpha) \, \widetilde{z}^\alpha, \end{equation} using the analogous expansion for the Poisson kernel in one variable. Note that this series converges absolutely for every $z \in U^n$, since the Fourier coefficients $\widehat{f}(\alpha)$ are bounded, as in (\ref{|widehat{f}(alpha)| le frac{1}{(2 pi)^n} int_{{bf T}^n} |f(z)| |dz|}). If $z \in {\bf T}^n$ and $0 \le r < 1$, then put \begin{equation} \label{f_r(z) = phi(r z) = ...} f_r(z) = \phi(r \, z) = \sum_{\alpha \in {\bf Z}^n} \widehat{f}(\alpha) \, r^{|\alpha|} \, z^\alpha, \end{equation} where $|\alpha| = |\alpha_1| + \cdots + |\alpha_n|$. One can check that $f_r \to f$ as $r \to 1$ uniformly on ${\bf T}^n$, using the fact that continuous functions on compact sets are uniformly continuous. The sum on the right side of (\ref{f_r(z) = phi(r z) = ...}) can be approximated by finite subsums uniformly on ${\bf T}^n$ for each $r < 1$, as in Weierstrass' M-test. It follows that every continuous function $f$ on ${\bf T}^n$ can be approximated uniformly by finite linear combinations of $z^\alpha$'s, $\alpha \in {\bf Z}^n$. Observe that $\phi(z)$ is ``polyharmonic'', in the sense that it is harmonic as a function of $z_j$ on the set where $|z_j| < 1$ for each $j$. 
This follows from the remarks about harmonic functions of one complex variable in Section \ref{poisson kernel}. In addition, \begin{equation} \label{sup_{z in overline{U}^n} |phi(z)| = sup_{z in {bf T}^n} |f(z)|} \sup_{z \in \overline{U}^n} |\phi(z)| = \sup_{z \in {\bf T}^n} |f(z)|. \end{equation} More precisely, the right side of (\ref{sup_{z in overline{U}^n} |phi(z)| = sup_{z in {bf T}^n} |f(z)|}) is less than or equal to the left side because ${\bf T}^n \subseteq \overline{U}^n$ and $\phi = f$ on ${\bf T}^n$. To get the opposite inequality, one can use the fact that the Poisson kernel is positive and has integral equal to $1$. If $f$, $g$ are continuous complex-valued functions on ${\bf T}^n$, then put \begin{equation} \label{langle f, g rangle = (2 pi)^{-n} int_{T^n} f(z) overline{g(z)} |dz|} \langle f, g \rangle = \frac{1}{(2 \pi)^n} \int_{{\bf T}^n} f(z) \, \overline{g(z)} \, |dz|. \end{equation} This defines an inner product on the vector space $C({\bf T}^n)$ of continuous complex-valued functions on ${\bf T}^n$, for which the corresponding norm is given by \begin{equation} \label{||f|| = (frac{1}{(2 pi)^n} int_{{bf T}^n} |f(z)|^2 |dz|)^{1/2}} \|f\| = \Big(\frac{1}{(2 \pi)^n} \int_{{\bf T}^n} |f(z)|^2 \, |dz|\Big)^{1/2}. \end{equation} It is easy to see that the functions $z^\alpha$, $\alpha \in {\bf Z}^n$ are orthonormal with respect to this inner product, and that the Fourier coefficients of a continuous function $f$ on ${\bf T}^n$ can be expressed by \begin{equation} \label{widehat{f}(alpha) = langle f, z^alpha rangle} \widehat{f}(\alpha) = \langle f, z^\alpha \rangle. \end{equation} The $n$-dimensional version of Parseval's formula states that \begin{equation} \label{sum_{alpha in {bf Z}^n} |widehat{f}(alpha)|^2 = ...} \sum_{\alpha \in {\bf Z}^n} |\widehat{f}(\alpha)|^2 = \frac{1}{(2 \pi)^n} \int_{{\bf T}^n} |f(z)|^2 \, |dz|, \end{equation} where the summability of the sum on the left is part of the conclusion. This follows from the orthonormality of the $z^\alpha$'s and the fact that their finite linear combinations are dense in $C({\bf T})$, as in the one-dimensional case. Suppose that $f$, $g$ are continuous functions on ${\bf T}^n$, and let us check that \begin{equation} \label{widehat{(f g)}(alpha) = ...} \widehat{(f \, g)}(\alpha) = \sum_{\beta \in {\bf Z}^n} \widehat{f}(\alpha - \beta) \, \widehat{g}(\beta). \end{equation} This can be derived formally by multiplying the Fourier series for $f$, $g$ and collecting terms. To make this rigorous, observe first that $\widehat{f}(\alpha - \beta) \, \widehat{g}(\beta)$ is summable in $\beta$, because $\widehat{f}, \widehat{g} \in \ell^2({\bf Z}^n)$, as in the previous paragraph. If $g(z) = z^\gamma$ for some $\gamma \in {\bf Z}^n$, then it is easy to see that both sides of (\ref{widehat{(f g)}(alpha) = ...}) are equal to $\widehat{f}(\alpha - \gamma)$. It follows that (\ref{widehat{(f g)}(alpha) = ...}) holds when $g$ is a finite linear combination of $z^\gamma$'s, and the same conclusion for an arbitrary continuous function $g$ on ${\bf T}^n$ can be obtained by approximation by linear combinations of $z^\gamma$'s. If $a(\alpha)$, $b(\alpha)$ are summable functions on ${\bf Z}^n$, then their convolution can be defined by \begin{equation} \label{(a * b)(alpha) = sum_{beta in {bf Z}^n} a(alpha - beta) b(beta)} (a * b)(\alpha) = \sum_{\beta \in {\bf Z}^n} a(\alpha - \beta) \, b(\beta), \end{equation} as in the one-dimensional case. 
More precisely, $a * b$ is also summable on ${\bf Z}^n$, and satisfies \begin{equation} \label{||a * b||_1 le ||a||_1 ||b||_1} \|a * b\|_1 \le \|a\|_1 \, \|b\|_1, \end{equation} where $\|a\|_1$ is the $\ell^1$ norm of $a$ on ${\bf Z}^n$. This follows by interchanging the order of summation, as before, and one can also check that $\ell^1({\bf Z}^n)$ is a commutative Banach algebra with respect to convolution. The Fourier transform of $a$ in $\ell^1({\bf Z}^n)$ is defined by \begin{equation} \label{widehat{a}(z) = sum_{alpha in {bf Z}^n} a(alpha) z^alpha} \widehat{a}(z) = \sum_{\alpha \in {\bf Z}^n} a(\alpha) \, z^\alpha \end{equation} for $z \in {\bf T}^n$. The sum on the right is absolutely summable for each $z \in {\bf T}^n$, because $a(\alpha)$ is summable, and can be approximated by finite subsums uniformly on ${\bf T}^n$, as in Weierstrass' M-test. This implies that $\widehat{a}(z)$ is continuous on ${\bf T}^n$, and it is easy to see that \begin{equation} \label{widehat{(a * b)}(z) = widehat{a}(z) widehat{b}(z), n dimensions} \widehat{(a * b)}(z) = \widehat{a}(z) \, \widehat{b}(z) \end{equation} for every $a, b \in \ell^1({\bf Z}^n)$ and $z \in {\bf T}^n$, as before. Conversely, every nonzero multiplicative homomorphism on $\ell^1({\bf Z}^n)$ with respect to convolution can be represented as $a \mapsto \widehat{a}(z)$ for some $z \in {\bf T}^n$, as in the one-dimensional situation. Note that the Fourier coefficients of $\widehat{a}$ are given by $a(\alpha)$ for every $a \in \ell^1({\bf Z}^n)$, because of the orthogonality properties of the $z^\alpha$'s. Observe also that \begin{equation} \label{sum_{alpha in {bf Z}^n} a(alpha) widetilde{z}^alpha} \sum_{\alpha \in {\bf Z}^n} a(\alpha) \, \widetilde{z}^\alpha \end{equation} is absolutely summable for every $z \in \overline{U}^n$, which is the analogue of the function $\phi$ discussed earlier. As usual, (\ref{sum_{alpha in {bf Z}^n} a(alpha) widetilde{z}^alpha}) can be approximated by finite subsums uniformly on $\overline{U}^n$ under these conditions, which implies more directly that it defines a continuous function on $\overline{U}^n$ than in the earlier discussion. \section{Functions of analytic type} \label{functions of analytic type} \setcounter{equation}{0} Let $A({\bf T}^n)$ be the collection of continuous functions $f$ on ${\bf T}^n$ such that \begin{equation} \label{widehat{f}(alpha) = 0} \widehat{f}(\alpha) = 0 \end{equation} when $\alpha \in {\bf Z}^n$ satisfies $\alpha_j < 0$ for some $j$. If $f, g \in A({\bf T}^n)$, then it is easy to see from (\ref{widehat{(f g)}(alpha) = ...}) that their product $f \, g$ is in $A({\bf T}^n)$ too. Note that the sum on the right side of (\ref{widehat{(f g)}(alpha) = ...}) has only finitely many nonzero terms in this situation. It follows that $A({\bf T}^n)$ is a subalgebra of $C({\bf T}^n)$, since the former is clearly a linear subspace of the latter. If $f \in A({\bf T}^n)$, then (\ref{phi(z) = sum_{alpha in {bf Z}^n} widehat{f}(alpha) widetilde{z}^alpha}) reduces to \begin{equation} \phi(z) = \sum_\alpha \widehat{f}(\alpha) \, z^\alpha, \end{equation} where now the sum is taken over all multi-indices $\alpha$, which is to say $\alpha \in {\bf Z}^n$ such that $\alpha_j \ge 0$ for each $j$. Thus we get an ordinary power series in this case, in the sense that the $z^\alpha$'s are the usual monomials, instead of the modified monomials $\widetilde{z}^\alpha$ that may include complex conjugation. 
In particular, this implies that $f$ can be approximated uniformly on ${\bf T}^n$ by a finite linear combinations of $z^\alpha$'s, where the $\alpha$'s are multi-indices, by the same type of argument as in the previous section. Of course, $z^\alpha \in A({\bf T}^n)$ for every multi-index $\alpha$, and $A({\bf T}^n)$ is a closed set in $C({\bf T}^n)$ with respect to the supremum norm. It follows that $A({\bf T}^n)$ is the same as the closure in $C({\bf T}^n)$ of the linear span of the $z^\alpha$'s, where $\alpha$ is a multi-index. Let $\phi_f$ be the continuous function $\phi$ on the closed unit polydisk $\overline{U}^n$ associated to $f \in C({\bf T}^n)$ as in the previous section. If $f, g \in A({\bf T}^n)$, then \begin{equation} \label{phi_{f g} = phi_f phi_g} \phi_{f g} = \phi_f \, \phi_g. \end{equation} This follows by multiplying the series expansions for $\phi_f$, $\phi_g$ in the previous paragraph and collecting terms, as in (\ref{(sum a_alpha z^alpha) (sum b_beta z^beta) = sum c_gamma z^gamma}) and (\ref{c_gamma = sum_{alpha + beta = gamma} a_alpha b_beta}). This also uses the formula (\ref{widehat{(f g)}(alpha) = ...}) for the Fourier coefficients of the product $f \, g$. The main point is that \begin{equation} z^\alpha \, z^\beta = z^{\alpha + \beta} \end{equation} while $\widetilde{z}^\alpha \, \widetilde{z}^\beta$ is not necessarily the same as $\widetilde{z}^{\alpha + \beta}$. More precisely, this argument works on the open unit polydisk $U^n$, where the series expansions for $\phi_f$, $\phi_g$ are absolutely summable. This implies that (\ref{phi_{f g} = phi_f phi_g}) holds on $\overline{U}^n$, by continuity. Observe that $A({\bf T}^n)$ is a commutative Banach algebra with respect to the supremum norm, since it is a closed subalgebra of $C({\bf T}^n)$ that contains the constant functions, and hence the multiplicative identity element. If $p \in \overline{U}^n$, then $f \mapsto \phi_f(p)$ defines a nonzero homomorphism from $A({\bf T}^n)$ into the complex numbers. Conversely, suppose that $h$ is a nonzero homomorphism on $A({\bf T}^n)$, and let us show that there is a $p \in \overline{U}^n$ such that $h(f) = \phi_f(p)$ for every $f \in A({\bf T}^n)$. As usual, $h({\bf 1}_{{\bf T}^n}) = 1$, where ${\bf 1}_{{\bf T}^n}$ is the constant function equal to $1$ on ${\bf T}^n$, and \begin{equation} \label{|h(f)| le sup_{z in {bf T}^n} |f(z)|} |h(f)| \le \sup_{z \in {\bf T}^n} |f(z)| \end{equation} for every $f \in A({\bf T}^n)$. Consider $f_j(z) = z_j$, $j = 1, \ldots, n$, as an element of $A({\bf T}^n)$. If $p_j = h(f_j)$, then $|p_j| \le 1$ for each $j$, by (\ref{|h(f)| le sup_{z in {bf T}^n} |f(z)|}). Hence $p = (p_1, \ldots, p_n) \in \overline{U}^n$. By construction, $h(f) = \phi_f(p)$ when $f = f_j$ for some $j$, and it follows that this also holds when $f$ is a polynomial, because $h$ is a homomorphism. Using (\ref{|h(f)| le sup_{z in {bf T}^n} |f(z)|}) again, we get that $h(f) = \phi_f(p)$ for every $f \in A({\bf T}^n)$, because polynomials are dense in $A({\bf T}^n)$. Let $\ell^1_A({\bf Z}^n)$ be the set of $a \in \ell^1({\bf Z}^n)$ such that $a(\alpha) = 0$ whenever $\alpha \in {\bf Z}^n$ satisfies $\alpha_j < 0$ for some $j$. It is easy to see that this is a closed subalgebra of $\ell^1({\bf Z}^n)$ with respect to convolution. 
If $a \in \ell^1_A({\bf Z}^n)$, then (\ref{sum_{alpha in {bf Z}^n} a(alpha) widetilde{z}^alpha}) reduces to an ordinary power series \begin{equation} \label{sum_alpha a(alpha) z^alpha} \sum_\alpha a(\alpha) \, z^\alpha, \end{equation} where the sum is taken over all multi-indices $\alpha$. If $b \in \ell^1_A({\bf Z}^n)$ too, then \begin{equation} \Big(\sum_\alpha a(\alpha) \, z^\alpha\Big) \, \Big(\sum_\beta b(\beta) \, z^\beta\Big) = \sum_\gamma (a * b)(\gamma) \, z^\gamma \end{equation} for every $z \in \overline{U}^n$, which is basically the same as (\ref{(sum a_alpha z^alpha) (sum b_beta z^beta) = sum c_gamma z^gamma}) again. Thus the mapping from $a$ to (\ref{sum_alpha a(alpha) z^alpha}) defines a homomorphism from $\ell^1({\bf Z}^n)$ into the complex numbers for each $z \in \overline{U}^n$, using convolution as multiplication on $\ell^1_A({\bf Z}^n)$. Conversely, let us check that any nonzero homomorphism $h$ from $\ell^1_A({\bf Z}^n)$ into the complex numbers is of this form. If $\alpha \in {\bf Z}^n$, then let $\delta_\alpha$ be the function on ${\bf Z}^n$ defined by $\delta_\alpha(\beta) = 1$ when $\alpha = \beta$ and $\delta_\alpha(\beta) = 0$ otherwise. Thus $\delta_\alpha \in \ell^1_A({\bf Z}^n)$ when $\alpha_j \ge 0$ for each $j$. In particular, $\delta_0 \in \ell^1_A({\bf Z}^n)$, which is the multiplicative identity element for $\ell^1({\bf Z}^n)$, and hence for $\ell^1_A({\bf Z}^n)$. It follows that $h(\delta_0) = 1$, and we also have that \begin{equation} \label{|h(a)| le ||a||_1} |h(a)| \le \|a\|_1 \end{equation} for every $a \in \ell^1_A({\bf Z}^n)$, since $\ell^1_A({\bf Z}^n)$ is a Banach algebra. Let $\alpha(l)$ be the element of ${\bf Z}^n$ with $l$th component equal to $1$ and other components equal to $0$, for $l = 1, \ldots, n$. Put $z_l = h(\delta_{\alpha(l)})$, so that $|z_l| \le 1$, since $\|\delta_{\alpha(l)}\|_1 = 1$. Thus $z = (z_1, \ldots, z_n) \in \overline{U}^n$, and $h(\delta_\alpha) = z^\alpha$ for every $\alpha \in {\bf Z}^n$ with $\alpha_j \ge 0$ for each $j$, because $h$ is a homomorphism with respect to convolution on $\ell^1_A({\bf Z}^n)$. More precisely, this uses the fact that \begin{equation} \delta_\alpha * \delta_\beta = \delta_{\alpha + \beta} \end{equation} for every $\alpha, \beta \in {\bf Z}^n$. This implies that $h(a)$ is equal to (\ref{sum_alpha a(alpha) z^alpha}) for every $a$ in $\ell^1_A({\bf Z}^n)$ and this choice of $z$, by the linearity and continuity of $h$. If $h$ were a homomorphism on all of $\ell^1({\bf Z}^n)$, then we would have (\ref{|h(a)| le ||a||_1}) for every $a \in \ell^1({\bf Z}^n)$, which would imply that $|z_l| = 1$ for each $l$. This is because $\delta_{\alpha(l)} * \delta_{-\alpha(l)} = \delta_0$, so that \begin{equation} z_l \, h(\delta_{-\alpha(l)}) = h(\delta_{\alpha(l)}) \, h(\delta_{-\alpha(l)}) = 1, \end{equation} while $|h(\delta_{-\alpha(l)})| \le 1$ by (\ref{|h(a)| le ||a||_1}). In this case, we would get that $h(a)$ is equal to $\widehat{a}(z)$ as in (\ref{widehat{a}(z) = sum_{alpha in {bf Z}^n} a(alpha) z^alpha}) for every $a \in \ell^1({\bf Z}^n)$, by essentially the same argument as before. Of course, $a \mapsto \widehat{a}(z)$ defines a homomorphism on $\ell^1({\bf Z}^n)$ for every $z \in {\bf T}^n$, as in the previous section. Similarly, if $h$ is a nonzero homomorphism on all of $C({\bf T}^n)$ and $f_j(z) = z_j$, then $|h(f_j)| = 1$ for each $j$, because $z_j^{-1}$ is also a continuous function on ${\bf T}^n$ with supremum norm equal to $1$. 
Using this, one can show that $h(f) = f(p)$ for every $f \in C({\bf T}^n)$, where $p = (p_1, \ldots, p_n) \in {\bf T}^n$ is defined by $p_j = h(f_j)$, in the same way as before. Although this is a special case of the results discussed in Section \ref{compact spaces}, the present approach has the advantage of making the relationship with $A({\bf T}^n)$ more clear. \section{The maximum principle} \label{maximum principle} \setcounter{equation}{0} Let $D$ be a nonempty bounded connected open set in the complex plane ${\bf C}$. If $f$ is a continuous complex-valued function on the closure $\overline{D}$ of $D$ in ${\bf C}$, then the extreme value theorem implies that the $|f(z)|$ attains its maximum on $\overline{D}$. If $f$ is also holomorphic on $D$, then the \emph{maximum modulus principle} implies that the maximum of $|f(z)|$ on $\overline{D}$ is attained on the boundary $\partial D$ of $D$. More precisely, if $|f(z)|$ has a local maximum on $D$, then $f$ is constant. This follows from the fact that a nonconstant holomorphic function on a connected open set in ${\bf C}$ is an open mapping, in the sense that it maps open sets to open sets. Alternatively, suppose that $z \in D$, and that the closed disk centered at $z$ with radius $r > 0$ is contained in $D$. If $f$ is holomorphic on $D$, then \begin{equation} \label{f(z) = frac{1}{2 pi r} int_{|w - z| = r} f(w) |dw|} f(z) = \frac{1}{2 \pi r} \int_{|w - z| = r} f(w) \, |dw|, \end{equation} by the Cauchy integral formula. If $|f|$ has a local maximum at $z$, then one can use this to show that $f(w) = f(z)$ when $|w - z|$ is sufficiently small, and hence that $f$ is constant on $D$ when $D$ is connected. This identity is known as the ``mean value property'', since it says that the value of $f$ at $z$ is given by the average of $f$ on the circle $|w - z| = r$. This also works for harmonic functions on $D$, and the analogous statement for harmonic functions on open subsets of ${\bf R}^n$ holds for every $n$. In particular, if $f$ is a harmonic function on a connected open set $D \subseteq {\bf R}^n$, and if $|f|$ has a local maximum on $D$, then one can show that $f$ is constant on $D$. Similarly, if $f$ is a real-valued harmonic function on $D$ with a local maximum on $D$, then $f$ is constant on $D$. A holomorphic function of several complex variables is holomorphic in each variable separately. In particular, such a function is harmonic, but one can get stronger versions of the maximum principle by considering restrictions of the function to complex lines, or even ``analytic disks'' that do not have to be flat. Suppose for instance that $D$ is the unit polydisk $U^n$. If $f$ is a continuous complex-valued function on $\overline{U}^n$ that is holomorphic on $U^n$, then one can show that the maximum of $|f(z)|$ on $\overline{U}^n$ is actually attained on ${\bf T}^n$. This is the same as the boundary of $U^n$ when $n = 1$, but otherwise is significantly smaller, as mentioned previously. This version of the maximum principle was implicitly given already in (\ref{sup_{z in overline{U}^n} |phi(z)| = sup_{z in {bf T}^n} |f(z)|}), using Poisson integrals. This also works for functions that are polyharmonic instead of holomorphic, which is to say harmonic in $z_j$ for $j = 1, \ldots, n$. This can also be derived from the maximum principle for the unit disk, by looking at restrictions of the function to disks in which all but one variable is constant. 
\section{Convex hulls} \label{convex hulls} \setcounter{equation}{0} Let $A$ be a nonempty subset of ${\bf R}^n$ for some positive integer $n$. The \emph{convex hull} of $A$ is denoted $\mathop{\rm Con}(A)$ and is defined to be the set of $x \in {\bf R}^n$ for which there are finitely many elements $y_1, \ldots, y_l$ of $A$ and nonnegative real numbers $t_1, \ldots, t_l$ such that $\sum_{j = 1}^l t_j = 1$ and \begin{equation} x = \sum_{j = 1}^l t_j \, y_j. \end{equation} It is easy to see that $\mathop{\rm Con}(A)$ is a convex set in ${\bf R}^n$, and that $\mathop{\rm Con}(A) \subseteq B$ whenever $A \subseteq B$ and $B \subseteq {\bf R}^n$ is convex. Thus $\mathop{\rm Con}(A)$ is the smallest convex set in ${\bf R}^n$ that contains $A$. It is well known that every element of $\mathop{\rm Con}(A)$ can be expressed as a convex combination of less than or equal to $n + 1$ elements of $A$. This uses the fact that ${\bf R}^n$ is an $n$-dimensional real vector space, while the definition of the convex hull and the other remarks in the previous paragraph would work just as well in any real vector space. Using this, one can show that $\mathop{\rm Con} (A)$ is compact when $A \subseteq {\bf R}^n$ is compact. Otherwise, the \emph{closed convex hull} of $A$ is defined to be the closure of the convex hull of $A$, and is automatically convex, because the closure of any convex set in ${\bf R}^n$ is also convex. This is the smallest closed convex set that contains $A$, because any closed convex set that contains $A$ also contains $\mathop{\rm Con}(A)$ and hence $\overline{\mathop{\rm Con}(A)}$. If $E$ is a nonempty closed convex set in ${\bf R}^n$ and $x \in {\bf R}^n \backslash E$, then a well-known separation theorem states that there is a linear function $\lambda(y)$ on ${\bf R}^n$ such that \begin{equation} \sup_{y \in E} \lambda(y) < \lambda(x). \end{equation} To see this, observe first that there is an element $u$ of $E$ that minimizes the distance to $x$ with respect to the standard Euclidean metric, so that \begin{equation} \sum_{j = 1}^n (y_j - x_j)^2 \ge \sum_{j = 1}^n (u_j - x_j)^2 \end{equation} for every $y \in E$. This follows immediately from the extreme value theorem when $E$ is compact, and otherwise one can reduce to that case by considering the intersection of $E$ with a closed ball centered at $x$ with sufficiently large radius. Without loss of generality, we may suppose that $u = 0$, since otherwise we can translate everything by $-u$ to reduce to this case. Thus the previous inequality becomes \begin{equation} \sum_{j = 1}^n (y_j - x_j)^2 \ge \sum_{j = 1}^n x_j^2, \end{equation} which holds for every $y \in E$. Equivalently, \begin{equation} \sum_{j = 1}^n y_j^2 \ge \sum_{j = 1}^n y_j \, x_j \end{equation} for every $y \in E$. Because $u = 0 \in E$ and $E$ is convex, $t \, y \in E$ for every $y \in E$ and $t \in [0, 1]$. Hence \begin{equation} \sum_{j = 1}^n (t \, y_j)^2 \ge \sum_{j = 1}^n (t \, y_j) \, x_j \end{equation} for every $y \in E$ and $0 \le t \le 1$. This implies that \begin{equation} t \sum_{j = 1}^n y_j^2 \ge \sum_{j = 1}^n y_j \, x_j \end{equation} when $y \in E$ and $0 < t \le 1$. Taking the limit as $t \to 0$, we get that \begin{equation} \sum_{j = 1}^n y_j \, x_j \le 0 \end{equation} for every $y \in E$. Put $\lambda(y) = \sum_{j = 1}^n y_j \, x_j$, so that $\lambda(y) \le 0$ for every $y \in E$, by the preceding inequality. Note that $x \ne u = 0$, because $x \not\in E$ and $u \in E$. Thus we also have that $\lambda(x) = \sum_{j = 1}^n x_j^2 > 0$, as desired. 
Let $A$ be a nonempty set in ${\bf R}^n$, and let $\lambda$ be a linear function on ${\bf R}^n$. Suppose that $x \in \mathop{\rm Con}(A)$, so that there are $y_1, \ldots y_l \in A$ and $t_1, \ldots, t_l \ge 0$ such that $\sum_{j = 1}^j t_j = 1$ and $x = \sum_{j = 1}^l t_j \, y_j$. In particular, \begin{equation} \lambda(x) = \sum_{j = 1}^l t_j \, \lambda(y_j) \le \max_{1 \le j \le l} \lambda(y_j). \end{equation} This implies that \begin{equation} \label{lambda(x) le sup_{y in A} lambda(y)} \lambda(x) \le \sup_{y \in A} \lambda(y), \end{equation} where the supremum on the right side may be $+\infty$, in which case the inequality is trivial. If $x \in \overline{\mathop{\rm Con}(A)}$, then it is easy to see that (\ref{lambda(x) le sup_{y in A} lambda(y)}) also holds, by continuity. However, if $x \in {\bf R}^n \backslash \overline{\mathop{\rm Con}(A)}$, then there is a linear function $\lambda$ on ${\bf R}^n$ for which (\ref{lambda(x) le sup_{y in A} lambda(y)}) does not hold, as in the previous paragraph. Thus the closed convex hull of $A$ is the same as the set of $x \in {\bf R}^n$ such that (\ref{lambda(x) le sup_{y in A} lambda(y)}) holds for every linear function $\lambda$ on ${\bf R}^n$. \section{Polynomial hulls} \label{polynomial hulls} \setcounter{equation}{0} Let $E$ be a nonempty subset of ${\bf C}^n$ for some positive integer $n$. The \emph{polynomial hull} of $E$ in ${\bf C}^n$ is denoted $\mathop{\rm Pol}(E)$ and defined to be the set of $z \in {\bf C}^n$ such that \begin{equation} \label{|p(z)| le sup_{w in E} |p(w)|} |p(z)| \le \sup_{w \in E} |p(w)| \end{equation} for every polynomial $p$ on ${\bf C}^n$. More precisely, to say that $p$ is a polynomial on ${\bf C}^n$ means that $p$ can be expressed as \begin{equation} p(w) = \sum_{|\alpha| \le N} a_\alpha \, w^\alpha \end{equation} for some nonnegative integer $N$, where the sum is taken over all multi-indices $\alpha$ with $|\alpha| \le N$, and $a_\alpha \in {\bf C}$ for each $\alpha$. If $E$ is unbounded, then $p$ may be unbounded on $E$, so that the supremum in (\ref{|p(z)| le sup_{w in E} |p(w)|}) is $+\infty$, and the inequality is trivial. Of course, \begin{equation} E \subseteq \mathop{\rm Pol}(E) \end{equation} by definition. If $E_1 \subseteq E_2 \subseteq {\bf C}^n$, then \begin{equation} \mathop{\rm Pol}(E_1) \subseteq \mathop{\rm Pol}(E_2). \end{equation} It is easy to see that $\mathop{\rm Pol}(E)$ is always a closed set in ${\bf C}^n$, because polynomials are continuous. Similarly, \begin{equation} \label{pol(E) = pol(overline{E})} \mathop{\rm Pol}(E) = \mathop{\rm Pol}(\overline{E}), \end{equation} and so we may as well restrict our attention to closed sets $E \subseteq {\bf C}^n$. If $E$ is any nonempty subset of ${\bf C}^n$ and $p$ is a polynomial on ${\bf C}^n$, then \begin{equation} \sup_{z \in \mathop{\rm Pol}(E)} |p(z)| = \sup_{w \in E} |p(w)|. \end{equation} More precisely, the right side is less than or equal to the left side because $E \subseteq \mathop{\rm Pol}(E)$, while the opposite inequality follows from the definition of $\mathop{\rm Pol}(E)$. If $\zeta \in \mathop{\rm Pol}(\mathop{\rm Pol}(E))$, then we get that \begin{equation} |p(\zeta)| \le \sup_{z \in \mathop{\rm Pol}(E)} |p(z)| = \sup_{w \in E} |p(w)| \end{equation} for every polynomial $p$ on ${\bf C}^n$, which implies that $\zeta \in \mathop{\rm Pol}(E)$. 
Thus $\mathop{\rm Pol}(\mathop{\rm Pol}(E))$ is contained in $\mathop{\rm Pol}(E)$, and hence \begin{equation} \mathop{\rm Pol}(\mathop{\rm Pol}(E)) = \mathop{\rm Pol}(E), \end{equation} because $\mathop{\rm Pol}(E) \subseteq \mathop{\rm Pol}(\mathop{\rm Pol}(E))$ automatically. As an example, let us check that \begin{equation} \mathop{\rm Pol}({\bf T}^n) = \overline{U}^n. \end{equation} If $z \in \overline{U}^n$, then $z \in \mathop{\rm Pol}({\bf T}^n)$, as in Section \ref{maximum principle}, and so $\overline{U}^n \subseteq \mathop{\rm Pol}({\bf T}^n)$. However, if $z \in {\bf C}^n \backslash \overline{U}^n$, then $|z_j| > 1$, and one can check that $z \not\in \mathop{\rm Pol}({\bf T}^n)$, by taking $p(w) = w_j$. Thus $\mathop{\rm Pol}({\bf T}^n) \subseteq \overline{U}^n$, as desired. If $E$ is any nonempty bounded subset of ${\bf C}^n$, then \begin{equation} \label{pol(E) subseteq overline{con(E)}} \mathop{\rm Pol}(E) \subseteq \overline{\mathop{\rm Con}(E)}. \end{equation} To see this, we identify ${\bf C}^n$ with ${\bf R}^{2n}$ as a real vector space, so that the results in the previous section are applicable. If $z \in {\bf C}^n \backslash \overline{\mathop{\rm Con}(E)}$, then there is a real-valued real-linear function $\lambda$ on ${\bf C}^n \cong {\bf R}^{2 n}$ such that \begin{equation} \sup_{w \in E} \lambda(w) < \lambda(z), \end{equation} as in the previous section. Equivalently, $\lambda$ can be expressed as the real part of a complex-linear function $\mu$ on ${\bf C}^n$, and \begin{equation} \label{sup_{w in E} re mu(w) < re mu(z)} \sup_{w \in E} \mathop{\rm Re} \mu(w) < \mathop{\rm Re} \mu(z). \end{equation} We would like to show that \begin{equation} \label{sup_{w in E} |1 + t mu(w)| < |1 + t mu(z)|} \sup_{w \in E} |1 + t \, \mu(w)| < |1 + t \, \mu(z)| \end{equation} when $t$ is a sufficiently small positive real number, so that $z \not\in \mathop{\rm Pol}(E)$. Note that \begin{eqnarray} \, |1 + t \, \mu(w)|^2 & = & (1 + t \, \mathop{\rm Re} \mu(w))^2 + t^2 \, (\mathop{\rm Im} \mu(w))^2 \\ & = & 1 + 2 \, t \, \mathop{\rm Re} \mu(w) + t^2 \, (\mathop{\rm Re} \mu(w))^2 + t^2 \, (\mathop{\rm Im} \mu(w))^2 \nonumber \end{eqnarray} for every $t \in {\bf R}$ and $w \in {\bf C}^n$. Because $E$ is bounded, \begin{equation} |\mu(w)|^2 = (\mathop{\rm Re} \mu(w))^2 + (\mathop{\rm Im} \mu(w))^2 \le C \end{equation} for some $C \ge 0$ and every $w \in E$. Thus \begin{equation} \label{sup_{w in E} |1 + t mu(w)|^2 le 1 + 2 t sup_{w in E} re mu(w) + C t^2} \sup_{w \in E} |1 + t \, \mu(w)|^2 \le 1 + 2 \, t \, \sup_{w \in E} \mathop{\rm Re} \mu(w) + C \, t^2 \end{equation} for every $t > 0$. Using this and (\ref{sup_{w in E} re mu(w) < re mu(z)}), it is easy to see that \begin{equation} \label{sup_{w in E} |1 + t mu(w)|^2 < |1 + t mu(z)|^2} \sup_{w \in E} |1 + t \, \mu(w)|^2 < |1 + t \, \mu(z)|^2 \end{equation} when $t > 0$ is sufficiently small, as desired. If $n = 1$ and $E \subseteq {\bf C}$ is unbounded, then every nonconstant polynomial $p$ on ${\bf C}$ is unbounded on $E$. Thus $\mathop{\rm Pol}(E) = {\bf C}$ in this case. In particular, $\mathop{\rm Pol}(E)$ may not be contained in $\overline{\mathop{\rm Con}(E)}$ when $E$ is unbounded. Suppose that $E$ is a nonempty set in ${\bf C}^n$ with only finitely many elements. 
If $z \in {\bf C}^n \backslash E$, then it is easy to see that there is a polynomial $p$ on ${\bf C}^n$ such that $p(w) = 0$ for each $w \in E$ and $p(z) \ne 0$, by taking a product of affine functions that vanish at the elements of $E$, one at a time, and are nonzero at $z$. This implies that $z \not\in \mathop{\rm Pol}(E)$, so that $\mathop{\rm Pol}(E) \subseteq E$. Hence $\mathop{\rm Pol}(E) = E$ when $E$ has only finitely many elements, since $E \subseteq \mathop{\rm Pol}(E)$ automatically. By contrast, the convex hull of a finite set may be much larger. \section{Algebras and homomorphisms} \label{algebras, homomorphisms} \setcounter{equation}{0} Let $E$ be a nonempty compact set in ${\bf C}^n$, and let $C(E)$ be the algebra of continuous complex-valued functions on $E$. Let $PC(E)$ be the subalgebra of $C(E)$ consisting of the restrictions to $E$ of polynomials on ${\bf C}^n$, and let $AC(E)$ be the closure of $PC(E)$ in $C(E)$ with respect to the supremum norm. Thus $AC(E)$ is a closed subalgebra of $C(E)$, and hence a commutative Banach algebra with respect to the supremum norm, since $C(E)$ is. Of course, the constant function equal to $1$ on $E$ is the multiplicative identity element in $C(E)$, and is contained in $PC(E) \subseteq AC(E)$. Suppose that $h$ is a nonzero homomorphism from $AC(E)$ into the complex numbers. Let $f_j$ be the function on $E$ defined by $f_j(w) = w_j$ for $j = 1, \ldots, n$, so that $f_j \in PC(E) \subseteq AC(E)$ for each $j$. Put \begin{equation} z_j = h(f_j) \end{equation} for each $j$, and consider $z = (z_1, \ldots, z_n) \in {\bf C}^n$. If $p$ is any polynomial on ${\bf C}^n$, and $\widetilde{p}$ is the restriction of $p$ to $E$, then $\widetilde{p} \in PC(E) \subseteq AC(E)$, and \begin{equation} h(\widetilde{p}) = p(z). \end{equation} As in Section \ref{banach algebras}, \begin{equation} |h(f)| \le \sup_{w \in E} |f(w)| \end{equation} for every $f \in AC(E)$. It follows that \begin{equation} \label{|p(z)| = |h(widetilde{p})| le ... = sup_{w in E} |p(w)|} |p(z)| = |h(\widetilde{p})| \le \sup_{w \in E} |\widetilde{p}(w)| = \sup_{w \in E} |p(w)| \end{equation} for every polynomial $p$ on ${\bf C}^n$. Thus $z \in \mathop{\rm Pol}(E)$. Conversely, suppose that $z \in \mathop{\rm Pol}(E)$, so that \begin{equation} \label{|p(z)| le sup_{w in E} |p(w)| = sup_{w in E} |widetilde{p}(w)|} |p(z)| \le \sup_{w \in E} |p(w)| = \sup_{w \in E} |\widetilde{p}(w)| \end{equation} for every polynomial $p$ on ${\bf C}^n$. In particular, if $p(w) = 0$ for every $w \in E$, then $p(z) = 0$. This implies that \begin{equation} \label{h_z(widetilde{p}) = p(z)} h_z(\widetilde{p}) = p(z) \end{equation} is well-defined on $PC(E)$, and in fact it is a homomorphism from $PC(E)$ into the complex numbers. Moreover, (\ref{|p(z)| le sup_{w in E} |p(w)| = sup_{w in E} |widetilde{p}(w)|}) implies that $h_z$ is a continuous linear functional on $PC(E)$ with respect to the supremum norm, so that $h_z$ has a unique extension to a continuous linear functional on $AC(E)$. It is easy to see that this extension is also a homomorphism with respect to multiplication. The argument in the preceding paragraph would work just as well if \begin{equation} |p(z)| \le C \, \sup_{w \in E} |p(w)| \end{equation} for some $C \ge 0$ and every polynomial $p$ on ${\bf C}^n$. Note that $p^l$ is also a polynomial on ${\bf C}^n$ for every polynomial $p$ and positive integer $l$. Applying the previous condition to $p^l$, we get that \begin{equation} |p(z)|^l \le C \, \sup_{w \in E} |p(w)|^l. 
\end{equation} Equivalently, \begin{equation} |p(z)| \le C^{1/l} \, \sup_{w \in E} |p(w)| \end{equation} for each $l \ge 1$ and polynomial $p$ on ${\bf C}^n$. Taking the limit as $l \to \infty$, it follows that the initial inequality holds with $C = 1$. Hence this apprently weaker condition implies that $z \in \mathop{\rm Pol}(E)$. This could also be derived from the earlier discussion, but this approach is more direct. \section{The exponential function} \label{exponential function} \setcounter{equation}{0} Put \begin{equation} \label{E(z) = sum_{j = 0}^infty frac{z^j}{j!}} E(z) = \sum_{j = 0}^\infty \frac{z^j}{j!} \end{equation} for each $z \in {\bf C}$, where $j!$ is ``$j$ factorial'', the product of $1, \ldots, j$. As usual, this is interpreted as being equal to $1$ when $j = 0$. It is easy to see that this series converges absolutely for every $z \in {\bf C}$, by the ratio test, for instance. If $z, w \in {\bf C}$, then \begin{equation} \label{E(z) E(w) = ...} E(z) \, E(w) = \Big(\sum_{j = 0}^\infty \frac{z^j}{j!}\Big) \, \Big(\sum_{l = 0}^\infty \frac{w^l}{l!}\Big) = \sum_{n = 0}^\infty \Big(\sum_{j = 0}^n \frac{z^j \, w^{n - j}}{j! \, (n - j)!}\Big), \end{equation} as in Section \ref{cauchy products}. This uses the absolute convergence of the series defining $E(z)$ and $E(w)$. The binomial theorem states that \begin{equation} \label{sum_{j = 0}^n frac{n!}{j! (n - j)!} z^j w^{n - j} = (z + w)^n} \sum_{j = 0}^n \frac{n!}{j! \, (n - j)!} \, z^j \, w^{n - j} = (z + w)^n, \end{equation} so that \begin{equation} \label{E(z) E(w) = sum_{n = 0}^infty frac{(z + w)^n}{n!} = E(z + w)} E(z) \, E(w) = \sum_{n = 0}^\infty \frac{(z + w)^n}{n!} = E(z + w). \end{equation} In particular, \begin{equation} E(z) \, E(-z) = E(0) = 1 \end{equation} for every $z \in {\bf C}$. Equivalently, $E(z) \ne 0$ for every $z \in {\bf C}$, and $1/E(z) = E(-z)$. If $x$ is a nonnegative real number, then it is clear from the definition of $E(x)$ that $E(x) \in {\bf R}$ and $E(x) \ge 1$. It follows that $E(x)$ is a positive real number for every $x \in {\bf R}$, and that $E(x) \le 1$ when $x \le 0$. Similarly, it is easy to see from the definition that $E(x)$ is strictly increasing when $x \ge 0$, and one can extend this to the whole real line using the fact that $E(-x) = 1/E(x)$. Observe that \begin{equation} \overline{E(z)} = E(\overline{z}) \end{equation} for every $z \in {\bf C}$, by the definition of $E(z)$. This implies that \begin{equation} |E(z)|^2 = E(z) \, \overline{E(z)} = E(z) \, E(\overline{z}) = E(z + \overline{z}) = E(2 \mathop{\rm Re} z), \end{equation} and hence \begin{equation} \label{|E(z)| = E(re z)} |E(z)| = E(\mathop{\rm Re} z) \end{equation} for every $z \in {\bf C}$. If $z = i \, y$ for some $y \in {\bf R}$, then (\ref{|E(z)| = E(re z)}) implies that \begin{equation} |E(i \, y)| = 1. \end{equation} It is well known that \begin{equation} E(i \, y) = \cos y + i \sin y \end{equation} for every $y \in {\bf R}$. One way to see this is to use the standard power series expansions for the sine and cosine. Alternatively, \begin{equation} \frac{d}{dy} E(i \, y) = i \, E(i \, y), \end{equation} as one can check using the series expansion for $E(i \, y)$. We already know that $E(i \, y)$ maps the real line into the unit circle ${\bf T}$ and sends $y = 0$ to $1$. This formula for the derivative of $E(i \, y)$ shows that it goes around the circle at unit speed in the positive orientation. 
It follows that the real and imaginary parts of $E(i \, y)$ are given by the cosine and sine, respectively, by the geometric definitions of the cosine and sine. \section{Entire functions} \label{entire functions} \setcounter{equation}{0} Suppose that \begin{equation} \label{sum_alpha a_alpha z^alpha} \sum_\alpha a_\alpha \, z^\alpha \end{equation} is a power series with complex coefficients that is absolutely summable for every $z \in {\bf C}^n$, and let $f(z)$ be the sum of this series. If $E$ is a nonempty bounded set in ${\bf C}^n$, then \begin{equation} |f(z)| \le \sup_{w \in E} |f(w)| \end{equation} for every $z \in \mathop{\rm Pol}(E)$. This follows from the definition of the polynomial hull, and the fact that $f$ can be approximated uniformly by polynomials corresponding to finite subsums of (\ref{sum_alpha a_alpha z^alpha}) on bounded subsets of ${\bf C}^n$. Let $u = (u_1, \ldots, u_n) \in {\bf C}^n$ be given, and put \begin{equation} \mu(z) = \sum_{j = 1}^n u_j \, z_j. \end{equation} Observe that $f_\mu(z) = E(\mu(z))$ can be expressed as \begin{equation} \label{prod_{j = 1}^n E(u_j z_j) = sum_alpha frac{u^alpha}{alpha!} z^alpha} \prod_{j = 1}^n E(u_j \, z_j) = \sum_\alpha \frac{u^\alpha}{\alpha!} \, z^\alpha, \end{equation} where $\alpha! = \alpha_1! \cdots \alpha_n!$. In particular, this power series is absolutely summable for every $z \in {\bf C}^n$. Moreover, \begin{equation} |f_\mu(z)| = E(\mathop{\rm Re} \mu(z)), \end{equation} as in the previous section. If $E$ is a nonempty subset of ${\bf C}^n$ and $z \in {\bf C}^n \backslash \overline{\mathop{\rm Con}(E)}$, then there is a complex-linear function $\mu$ on ${\bf C}^n$ such that \begin{equation} \sup_{w \in E} \mathop{\rm Re} \mu(w) < \mathop{\rm Re} \mu(z), \end{equation} as in Section \ref{polynomial hulls}. Hence \begin{equation} \sup_{w \in E} |f_\mu(w)| < |f_\mu(z)|. \end{equation} This has the advantage of working for both bounded and unbounded sets $E$, in exchange for allowing a larger class of functions than polynomials, as before. \section{The three lines theorem} \label{three lines theorem} \setcounter{equation}{0} Let $D$ be the open unit strip in the complex plane, \begin{equation} D = \{z \in {\bf C} : 0 < \mathop{\rm Re} z < 1\}, \end{equation} so that the closure of $D$ is the closed unit strip, \begin{equation} \overline{D} = \{z \in {\bf C} : 0 \le \mathop{\rm Re} z \le 1\}. \end{equation} Also let $f$ be a continuous complex-valued function on $\overline{D}$ which is holomorphic on $D$, and suppose that $A_0$, $A_1$ are positive real numbers such that \begin{equation} |f(x + i \, y)| \le A_x \end{equation} for $x = 0, 1$ and every $y \in {\bf R}$. If $f$ is also bounded on $D$, then the \emph{three lines theorem} states that \begin{equation} \label{|f(x + i y)| le A_0^{1 - x} A_1^x} |f(x + i \, y)| \le A_0^{1 - x} \, A_1^x \end{equation} when $0 < x < 1$ and $y \in {\bf R}$. To do this, we would first like to show that $f$ satisfies the maximum principle, so that \begin{equation} \label{|f(x + i y)| le max (A_0, A_1)} |f(x + i \, y)| \le \max (A_0, A_1) \end{equation} when $0 < x < 1$ and $y \in {\bf R}$. However, $D$ is not bounded, $\overline{D}$ is not compact, and so we cannot use the ordinary maximum principle in quite the usual way. Let us begin with the case where $f$ satisfies the additional condition that $f(z) \to 0$ uniformly on $\overline{D}$ as $|\mathop{\rm Im} z| \to \infty$. Put \begin{equation} D_R = \{z \in D : |\mathop{\rm Im} z| < R\} \end{equation} for each $R > 0$. 
Thus $D_R$ is bounded, and we can apply the maximum principle to $f$ on $D_R$. If \begin{equation} B_R = \sup \{|f(x + i \, y)| : 0 \le x \le 1, y = \pm R\}, \end{equation} then we get that \begin{equation} |f(x + i \, y)| \le \max (A_0, A_1, B_R) \end{equation} when $0 < x < 1$ and $|y| < R$. By hypothesis, $B_R \to 0$ as $R \to \infty$, and so (\ref{|f(x + i y)| le max (A_0, A_1)}) follows easily in this case. If $f$ is bounded but does not necessarily tend to $0$ at infinity, then we can approximate it by functions that do. Consider \begin{equation} f_\epsilon(z) = f(z) \, E(\epsilon \, z^2) \end{equation} for each $\epsilon > 0$. Observe that \begin{equation} \label{|f_epsilon(z)| = ... = |f(z)| E(epsilon (x^2 - y^2))} |f_\epsilon(z)| = |f(z)| \, |E(\epsilon \, z^2)| = |f(z)| \, E(\epsilon \, (x^2 - y^2)), \end{equation} where $z = x + i \, y$, so that $\mathop{\rm Re} z^2 = x^2 - y^2$. Thus $f_\epsilon(z)$ is continuous on $\overline{D}$, holomorphic on $D$, and tends to $0$ uniformly as $|y| \to \infty$ for each $\epsilon > 0$, because $f$ is bounded on $\overline{D}$ by hypothesis and $E(\epsilon \, y^2) \to +\infty$ as $y \to + \infty$ for each $\epsilon > 0$. We also have that \begin{equation} |f_\epsilon(i \, y)| \le A_0, \quad |f_\epsilon(1 + i \, y)| \le A_1 \, E(\epsilon) \end{equation} for each $y \in {\bf R}$, since $E(-\epsilon \, y^2) \le 1$ for every $y \in {\bf R}$. This permits us to use the version of the maximum principle in the previous paragraph, to get that \begin{equation} \label{|f_epsilon(x + i y)| le max(A_0, A_1 E(epsilon))} |f_\epsilon(x + i \, y)| \le \max(A_0, A_1 \, E(\epsilon)) \end{equation} when $0 < x < 1$ and $y \in {\bf R}$. Of course, $f_\epsilon(z) \to f(z)$ for each $z \in \overline{D}$ as $\epsilon \to 0$, and it follows that $f$ satisfies (\ref{|f(x + i y)| le max (A_0, A_1)}), by taking the limit as $\epsilon \to 0$ in (\ref{|f_epsilon(x + i y)| le max(A_0, A_1 E(epsilon))}). Note that $E(t) \ge 1 + t$ for every nonnegative real number $t$, by the definition of $E(t)$. Thus $E(t) \to +\infty$ as $t \to +\infty$, as in the previous paragraph, and hence $E(-t) = 1/E(t) \to 0$ as $t \to +\infty$. If $A$ is a positive real number, then it follows that there is a unique real number $\log A$ such that $E(\log A) = A$, because $E(t)$ is a strictly increasing continuous function on the real line. Put $A^z = E(z \, \log A)$, and observe that $|A^z| = A^{\mathop{\rm Re} z}$, by the properties of the exponential function. In order to get (\ref{|f(x + i y)| le A_0^{1 - x} A_1^x}), consider \begin{equation} g(z) = f(z) \, A_0^{z - 1} \, A_1^{-z}. \end{equation} This is a bounded continuous function on $\overline{D}$ which is holomorphic on $D$ and satisfies \begin{equation} \label{|g(x + i y)| le 1} |g(x + i \, y)| \le 1 \end{equation} when $x = 0, 1$ and $y \in {\bf R}$, by the corresponding properties of $f$. The analogue of (\ref{|f(x + i y)| le max (A_0, A_1)}) for $g$ implies that (\ref{|g(x + i y)| le 1}) holds for every $0 < x < 1$ and $y \in {\bf R}$, which is the same as (\ref{|f(x + i y)| le A_0^{1 - x} A_1^x}). \section{Completely circular sets} \label{completely circular sets} \setcounter{equation}{0} A set $E \subseteq {\bf C}^n$ is said to be \emph{completely circular} if \begin{equation} (u_1 \, z_1, \ldots, u_n \, z_n) \in E \end{equation} for every $z = (z_1, \ldots, z_n) \in E$ and $u = (u_1, \ldots, u_n) \in {\bf C}^n$ such that $|u_j| \le 1$ for each $j$.
Equivalently, $w \in E$ whenever $w \in {\bf C}^n$ satisfies $|w_j| \le |z_j|$ for some $z \in E$ and each $j$. In particular, $0 \in E$ when $E \ne \emptyset$. Suppose that $w, z \in E$, $0 < t < 1$, and $v \in {\bf C}^n$ satisfy \begin{equation} |v_j| \le |z_j|^t \, |w_j|^{1 - t} \end{equation} for each $j$. We would like to show that $v \in \mathop{\rm Pol}(E)$ when $E$ is completely circular. Thus we would like to show that \begin{equation} |p(v)| \le \sup_{\zeta \in E} |p(\zeta)| \end{equation} for every polynomial $p$ on ${\bf C}^n$. To do this, we shall use the version of the maximum principle discussed in the previous section. By hypothesis, we can express $v$ as \begin{equation} v_j = u_j \, |z_j|^t \, |w_j|^{1 - t}, \end{equation} where $|u_j| \le 1$ for each $j$. Put \begin{equation} g_j(\tau) = u_j \, |z_j|^\tau \, |w_j|^{1 - \tau} \end{equation} for each $\tau \in {\bf C}$ and $1 \le j \le n$. This uses the definition of $A^\tau$ for any positive real number $A$ and complex number $\tau$ as $E(\tau \, \log A)$, as in the previous section, and we put $A^\tau = 0$ for every $\tau \in {\bf C}$ when $A = 0$. Note that \begin{equation} |g_j(\tau)| \le |z_j|^{\mathop{\rm Re} \tau} \, |w_j|^{1 - \mathop{\rm Re} \tau} \end{equation} for each $\tau$ and $j$, and hence \begin{equation} g(\tau) = (g_1(\tau), \ldots, g_n(\tau)) \in E \end{equation} when $\mathop{\rm Re} \tau = 0, 1$. We also have that $g(t) = v$, by construction. Let $p$ be a polynomial on ${\bf C}^n$, and consider \begin{equation} f(\tau) = p(g(\tau)). \end{equation} This is a holomorphic function on the complex plane ${\bf C}$, and in particular it is a holomorphic function on the open unit strip $D$ that extends continuously to the closure $\overline{D}$. Moreover, $f$ is bounded on $\overline{D}$, because $g$ is bounded on $\overline{D}$, and $p$ is bounded on bounded subsets of ${\bf C}^n$. It follows that \begin{equation} |f(t)| \le \sup \{|f(\tau)| : \tau \in {\bf C}, \ \mathop{\rm Re} \tau = 0, 1\}, \end{equation} as in the previous section. This is the same as saying that \begin{equation} |p(v)| \le \sup \{|p(g(\tau))| : \tau \in {\bf C}, \ \mathop{\rm Re} \tau = 0, 1\}, \end{equation} which is exactly what we wanted, since $g(\tau) \in E$ when $\mathop{\rm Re} \tau = 0, 1$. \section{Completely circular sets, continued} \label{completely circular sets, continued} \setcounter{equation}{0} Let $E$ be a nonempty bounded completely circular set in ${\bf C}^n$, and let $z \in {\bf C}^n$ be given. Suppose that $z \ne 0$, and let $I$ be the set of $j = 1, \ldots, n$ such that $z_j \ne 0$. Let $\alpha(I) = (\alpha_1(I), \ldots, \alpha_n(I))$ be the multi-index defined by $\alpha_j(I) = 1$ when $j \in I$, and $\alpha_j(I) = 0$ otherwise. Thus $z^{\alpha(I)} \ne 0$, and if $w^{\alpha(I)} = 0$ for every $w \in E$, then $z \not\in \mathop{\rm Pol}(E)$. Let $E_I$ be the set of $w \in E$ such that $w_j \ne 0$ when $j \in I$, and suppose from now on in this section that $E_I \ne \emptyset$. Also let ${\bf R}^I$ be the set of real-valued functions on $I$, which is basically the same as ${\bf R}^l$, where $l$ is the number of elements of $I$. Thus $\log |w_j|$, $j \in I$, determines an element of ${\bf R}^I$ for each $w \in E_I$, and we let $A_I$ be the subset of ${\bf R}^I$ corresponding to elements of $E_I$ in this way. If $r \in {\bf R}^I$, $t \in A_I$, and \begin{equation} \label{r_j le t_j} r_j \le t_j \end{equation} for each $j \in I$, then $r \in A_I$, because $E$ is completely circular. 
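As a simple illustration, if $E$ is the closed unit polydisk $\{w \in {\bf C}^n : |w_j| \le 1 \hbox{ for each } j\}$, then for any such $I$ we have that \begin{equation} A_I = \{r \in {\bf R}^I : r_j \le 0 \hbox{ for each } j \in I\}, \end{equation} which is closed and convex, and satisfies the condition just mentioned.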
Let $\zeta$ be the element of ${\bf R}^I$ given by $\zeta_j = \log |z_j|$ for $j \in I$, and suppose that $\zeta$ is not an element of the closure of the convex hull of $A_I$ in ${\bf R}^I$. This implies that there is a linear function $\lambda$ on ${\bf R}^I$ such that \begin{equation} \sup_{r \in A_I} \lambda(r) < \lambda(\zeta). \end{equation} More precisely, $\lambda(r)$ can be given as \begin{equation} \lambda(r) = \sum_{j \in I} \lambda_j \, r_j \end{equation} for some $\lambda_j \in {\bf R}$, $j \in I$, and every $r \in {\bf R}^I$. If $\lambda_j < 0$ for some $j \in I$, then there are $r \in A_I$ for which $\lambda(r)$ is arbitrarily large, because $A_I$ is nonempty and satisfies the condition mentioned at the end of the previous paragraph. Thus \begin{equation} \lambda_j \ge 0 \end{equation} for each $j \in I$, since $\lambda(r)$ is bounded from above for $r \in A_I$. If $z \in \mathop{\rm Pol}(E)$, then \begin{equation} |z^\alpha| \le \sup_{w \in E} |w^\alpha| \end{equation} for every multi-index $\alpha$. Let us restrict our attention to multi-indices $\alpha$ such that $\alpha_j \ge 1$ when $j \in I$ and $\alpha_j = 0$ otherwise, so that $w^\alpha = 0$ when $w \in E \backslash E_I$. In this case, the previous inequality reduces to \begin{equation} \sum_{j \in I} \alpha_j \log |z_j| \le \sup_{w \in E_I} \sum_{j \in I} \alpha_j \, \log |w_j|. \end{equation} Equivalently, \begin{equation} \sum_{j \in I} \alpha_j \, \zeta_j \le \sup_{r \in A_I} \sum_{j \in I} \alpha_j \, r_j. \end{equation} This inequality holds for arbitrary positive integers $\alpha_j$, $j \in I$, and hence for arbitrary positive rational numbers $\alpha_j$, by dividing both sides by a positive integer. It follows that this inequality also holds for arbitrary nonnegative real numbers, by approximation. This uses the hypothesis that $E$ be bounded, so that $r_j$ has an upper bound for each $j \in I$ and $r \in A_I$, and more precisely one should approximate nonnegative real numbers $\alpha_j$ by positive rational numbers $\alpha_j'$ such that $\alpha_j \le \alpha_j'$ for each $j$. Combining this with the discussion in the previous paragraph, we get that $\zeta$ is in the closure of the convex hull of $A_I$ in ${\bf R}^I$ when $z \in \mathop{\rm Pol}(E)$. \section{The torus action} \label{torus action} \setcounter{equation}{0} Let ${\bf T}^n$ be the set of $t = (t_1, \ldots, t_n) \in {\bf C}^n$ such that $|t_j| = 1$ for each $j$, as usual. If $t \in {\bf T}^n$ and $z \in {\bf C}^n$, then put \begin{equation} T_t(z) = (t_1 \, z_1, \ldots, t_n \, z_n), \end{equation} so that $T_t$ is an invertible linear transformation on ${\bf C}^n$ for each $t \in {\bf T}^n$. Note that ${\bf T}^n$ is a commutative group with respect to coordinatewise multiplication, and that $t \mapsto T_t$ is a homomorphism from ${\bf T}^n$ into the group of invertible linear transformations on ${\bf C}^n$. Suppose that $E$ is a nonempty subset of ${\bf C}^n$ such that \begin{equation} \label{T_t(E) subseteq E} T_t(E) \subseteq E \end{equation} for every $t \in {\bf T}^n$. This implies that \begin{equation} \label{T_t(E) = E} T_t(E) = E \end{equation} for each $t \in {\bf T}^n$, because $T_t^{-1}(E) = T_{t^{-1}}(E) \subseteq E$, where $t^{-1} = (t_1^{-1}. \ldots, t_n^{-1})$. Suppose that $z \in \mathop{\rm Pol}(E)$, so that \begin{equation} |p(z)| \le \sup_{w \in E} |p(w)| \end{equation} for every polynomial $p$ on ${\bf C}^n$. 
If $p_t(w) = p(T_t(w))$, then $p_t$ is also a polynomial on ${\bf C}^n$ for each $t \in {\bf T}^n$, and hence \begin{equation} \label{|p_t(z)| le sup_{w in E} |p_t(w)|} |p_t(z)| \le \sup_{w \in E} |p_t(w)|. \end{equation} We also have that \begin{equation} \sup_{w \in E} |p_t(w)| = \sup_{w \in E} |p(w)| \end{equation} for every $t \in {\bf T}^n$, because of (\ref{T_t(E) = E}). Thus \begin{equation} \label{|p(T_t(z))| = |p_t(z)| le sup_{w in E} |p_t(w)| = sup_{w in E} |p(w)|} |p(T_t(z))| = |p_t(z)| \le \sup_{w \in E} |p_t(w)| = \sup_{w \in E} |p(w)| \end{equation} for every polynomial $p$ on ${\bf C}^n$ and $t \in {\bf T}^n$, which implies that $T_t(z) \in \mathop{\rm Pol}(E)$ for every $t \in {\bf T}^n$. This shows that $T_t(\mathop{\rm Pol}(E)) \subseteq \mathop{\rm Pol}(E)$ for every $t \in {\bf T}^n$, and hence $T_t(\mathop{\rm Pol}(E)) = \mathop{\rm Pol}(E)$ for every $t \in {\bf T}^n$, as before. Let $U^n$ be the open unit polydisk in ${\bf C}^n$, consisting of $u \in {\bf C}^n$ such that $|u_j| < 1$ for each $j$. If $p$ is a polynomial on ${\bf C}^n$, $u \in U^n$, and $w \in {\bf C}^n$, then \begin{equation} |p(u_1 \, w_1, \ldots, u_n \, w_n)| \le \sup_{t \in {\bf T}^n} |p(t_1 \, w_1, \ldots, t_n \, w_n)|. \end{equation} More precisely, we can think of $p(u_1 \, w_1, \ldots, u_n \, w_n)$ as a polynomial in $u$ for each $w \in {\bf C}^n$, and apply the maximum principle as in Section \ref{maximum principle}. If $z \in \mathop{\rm Pol}(E)$ and $u \in U^n$, then we get that \begin{equation} \label{|p(u_1 z_1, ldots, u_n z_n)| le ... le sup_{w in E} |p(w)|} |p(u_1 \, z_1, \ldots, u_n \, z_n)| \le \sup_{t \in {\bf T}^n} |p_t(z)| \le \sup_{w \in E} |p(w)| \end{equation} for every polynomial $p$ on ${\bf C}^n$, where the second step is as in the previous paragraph. Thus \begin{equation} \label{(u_1 z_1, ldots, u_n z_n) in pol(E)} (u_1 \, z_1, \ldots, u_n \, z_n) \in \mathop{\rm Pol}(E) \end{equation} for every $z \in \mathop{\rm Pol}(E)$ and $u \in U^n$, and it follows that $\mathop{\rm Pol}(E)$ is completely circular in this case. \section{Another condition} \label{another condition} \setcounter{equation}{0} Let $E$ be a nonempty completely circular closed set in ${\bf C}^n$ such that \begin{equation} \label{E^* = {w in E : w_j ne 0 hbox{ for each } j}} E^* = \{w \in E : w_j \ne 0 \hbox{ for each } j\} \end{equation} is dense in $E$. This happens when $E$ is the closure of a nonempty completely circular open set in ${\bf C}^n$, for instance. Put \begin{equation} A = \{y \in {\bf R}^n : y_j = \log |w_j| \hbox{ for some } w \in E^* \hbox{ and each } j\}, \end{equation} so that \begin{equation} \quad E^* = \{w \in {\bf C}^n : w_j \ne 0 \hbox{ for each } j, \hbox{ and } (\log |w_1|, \ldots, \log |w_n|) \in A\}, \end{equation} because $E$ is completely circular. Thus $E = \overline{E^*}$ is uniquely determined by $A$ under these conditions. If $x \in {\bf R}^n$, $y \in A$, and \begin{equation} x_j \le y_j \end{equation} for each $j$, then we also have that $x \in A$, since $E$ is completely circular. Let $I$ be a nonempty subset of $\{1, \ldots, n\}$, and let ${\bf R}^I$ be the set of real-valued functions on $I$, as before. There is a natural projection $\pi_I$ from ${\bf R}^n$ onto ${\bf R}^I$, in which one keeps the coordinates corresponding to $j \in I$ and drops the others. Let $E_I$ be the set of $w \in E$ such that $w_j \ne 0$ when $j \in I$, and let $A_I$ be the subset of ${\bf R}^I$ whose elements correspond to $\log |w_j|$, $j \in I$, with $w \in E_I$.
Observe that \begin{equation} \pi_I(A) \subseteq A_I \subseteq \overline{\pi_I(A)}, \end{equation} because $E^* \subseteq E_I$ and $E^*$ is dense in $E$. If $z \in \mathop{\rm Pol}(E)$, then \begin{equation} |z^\alpha| \le \sup_{w \in E} |w^\alpha| \end{equation} for every multi-index $\alpha$. Moreover, \begin{equation} \sup_{w \in E^*} |w^\alpha| = \sup_{w \in E} |w^\alpha|, \end{equation} since $E^*$ is dense in $E$. Hence \begin{equation} \label{sup_{w in E_I} |w^alpha| = sup_{w in E} |w^alpha|} \sup_{w \in E_I} |w^\alpha| = \sup_{w \in E} |w^\alpha| \end{equation} for any $I \subseteq \{1, \ldots, n\}$, because $E^* \subseteq E_I \subseteq E$. Of course, (\ref{sup_{w in E_I} |w^alpha| = sup_{w in E} |w^alpha|}) is trivial when $\alpha_j \ge 1$ for each $j \in I$, so that $w^\alpha = 0$ when $w \in E \backslash E_I$. Let us consider some examples in ${\bf C}^2$ where $E^*$ is not dense in $E$. If \begin{equation} E = ({\bf C} \times \{0\}) \cup (\{0\} \times {\bf C}), \end{equation} then $E$ is closed and completely circular, and $E^* = \emptyset$. Equivalently, \begin{equation} E = \{z = (z_1, z_2) \in {\bf C}^2 : z_1 \, z_2 = 0\}, \end{equation} and it is easy to see that $\mathop{\rm Pol}(E) = E$ in this case. Put \begin{equation} D(r) = \{\zeta \in {\bf C} : |\zeta| \le r\} \end{equation} for each $r > 0$, and consider \begin{equation} E = (D(r_1) \times \{0\}) \cup (\{0\} \times D(r_2)) \end{equation} for some $r_1, r_2 > 0$. Thus $E$ is closed and completely circular again, $E^* = \emptyset$, and $E$ is also bounded in this case. One can check that $\mathop{\rm Pol}(E) = E$ as well, using the polynomials $p_1(w) = w_1$, $p_2(w) = w_2$, and $p(w) = w_1 \, w_2$. If $0 < r < R$ and \begin{equation} E(r, R) = (D(r) \times {\bf C}) \cup (D(R) \times \{0\}), \end{equation} then $E(r, R)$ is closed and completely circular, and \begin{equation} \overline{E(r, R)^*} = D(r) \times {\bf C}. \end{equation} If $p$ is a polynomial on ${\bf C}^2$ that is bounded on $E(r, R)$, then $p(w_1, w_2)$ is bounded as a polynomial in $w_2$ for each $w_1 \in D(r)$. This implies that $p(w_1, w_2)$ is constant in $w_2$ for each $w_1 \in D(r)$, and hence for every $w_1 \in {\bf C}$. Thus $p(w_1, w_2)$ reduces to a polynomial in $w_1$, and one can use this to show that the polynomial hull of $E(r, R)$ is equal to $D(R) \times {\bf C}$. Put \begin{equation} E(r) = (D(r) \times {\bf C}) \cup ({\bf C} \times \{0\}) \end{equation} for $r > 0$, which is the analogue of $E(r, R)$ with $R = +\infty$. As before, $E(r)$ is closed and completely circular, and \begin{equation} \overline{E(r)^*} = D(r) \times {\bf C}. \end{equation} If $p$ is a polynomial on ${\bf C}^2$ that is bounded on $E(r)$, then $p$ is constant, as in the previous paragraph, so that $\mathop{\rm Pol}(E(r)) = {\bf C}^2$. Of course, \begin{equation} E = D(r) \times {\bf C} \end{equation} is closed and completely circular for each $r > 0$, and satisfies $\overline{E^*} = E$. It is also easy to see that $\mathop{\rm Pol}(E) = E$ in this case, using the polynomial $p(w) = w_1$. If $r_1, r_2, R > 0$ and $r_1 < R$, then put \begin{equation} E(r_1, r_2, R) = (D(r_1) \times D(r_2)) \cup (D(R) \times \{0\}). \end{equation} Thus $E(r_1, r_2, R)$ is closed, bounded, and completely circular, and \begin{equation} \overline{E(r_1, r_2, R)^*} = D(r_1) \times D(r_2). \end{equation} If $z = (z_1, z_2) \in \mathop{\rm Pol}(E(r_1, r_2, R))$, then it is easy to see that $|z_1| \le R$ and $|z_2| \le r_2$, using the polynomials $p_1(w) = w_1$ and $p_2(w) = w_2$.
If $z_2 \ne 0$, then one can show that $|z_1| \le r_1$, using the polynomials $q_n(w) = w_1^n \, w_2$ for each positive integer $n$. More precisely, \begin{equation} |q_n(z)| \le \sup \{|q_n(w)| : w \in E(r_1, r_2, R)\} \end{equation} implies that $|z_1|^n \, |z_2| \le r_1^n \, r_2$ for each $n$, and hence that \begin{equation} |z_1| \, |z_2|^{1/n} \le r_1 \, r_2^{1/n}. \end{equation} If $z_2 \ne 0$, then we can take the limit as $n \to \infty$ to get that $|z_1| \le r_1$, as desired. Thus $z \in E(r_1, r_2, R)$, which implies that the polynomial hull of $E(r_1, r_2, R)$ is itself. If $E$ is a closed bi-disk \begin{equation} D(r_1) \times D(r_2) \end{equation} for some $r_1, r_2 > 0$, then $E$ is completely circular, $\overline{E^*} = E$, and $\mathop{\rm Pol}(E) = E$. If $E$ is the union of two closed bi-disks \begin{equation} (D(r_1) \times D(r_2)) \cup (D(t_1) \times D(t_2)) \end{equation} for some $r_1, r_2, t_1, t_2 > 0$, then $E$ is completely circular and $\overline{E^*} = E$ again. Of course, this reduces to the single bi-disk $D(t_1) \times D(t_2)$ when $r_1 \le t_1$ and $r_2 \le t_2$, and to $D(r_1) \times D(r_2)$ when $t_1 \le r_1$ and $t_2 \le r_2$. Otherwise, $E \ne \mathop{\rm Pol}(E)$, because $E$ is not multiplicatively convex. Note that all of the other examples mentioned in this section are multiplicatively convex. \section{Multiplicative convexity} \label{multiplicative convexity} \setcounter{equation}{0} Suppose that $E \subseteq {\bf C}^n$ has the property that \begin{equation} \label{(t_1 z_1, ldots, t_n z_n) in E} (t_1 \, z_1, \ldots, t_n \, z_n) \in E \end{equation} when $z = (z_1, \ldots, z_n) \in E$, $t = (t_1, \ldots, t_n) \in {\bf C}^n$, and $|t_j| = 1$ for each $j$, so that $t \in {\bf T}^n$. Let us say that $E$ is \emph{multiplicatively convex} if for each $v, w \in E$ and $a \in (0, 1)$, we have that $u \in E$ whenever $u \in {\bf C}^n$ and \begin{equation} |u_j| = |v_j|^a \, |w_j|^{1 - a} \end{equation} for each $j$. Similarly, if $E$ is completely circular and multiplicatively convex, then $u \in E$ whenever \begin{equation} |u_j| \le |v_j|^a \, |w_j|^{1 - a} \end{equation} for some $v, w \in E$, $0 < a < 1$, and each $j$. If $E$ is completely circular and convex, then $E$ is multiplicatively convex, because \begin{equation} \label{|v_j|^a |w_j|^{1 - a} le a |v_j| + (1 - a) |w_j|} |v_j|^a \, |w_j|^{1 - a} \le a \, |v_j| + (1 - a) \, |w_j| \end{equation} when $0 < a < 1$, by the convexity of the exponential function. More precisely, if $E$ is invariant under the usual action of ${\bf T}^n$, as in (\ref{(t_1 z_1, ldots, t_n z_n) in E}), and $E$ is also nonempty and convex, then it is easy to see that $0 \in E$. This implies that $r \, z \in E$ when $z \in E$ and $0 < r < 1$, and hence that $E$ is completely circular. We have also seen examples of sets that are completely circular and multiplicatively convex, but not convex. If $E \subseteq {\bf C}^n$ is completely circular and $\mathop{\rm Pol}(E) = E$, then $E$ is multiplicatively convex, as in Section \ref{completely circular sets}. Conversely, if $E$ is closed, bounded, completely circular, and multiplicatively convex, then $\mathop{\rm Pol}(E) = E$. This basically follows from the discussion in Section \ref{completely circular sets, continued}, with a few extra details. The main point is that the sets $A_I$ considered there are closed and convex in this case. The convexity of the $A_I$'s corresponds exactly to the multiplicative convexity of $E$. 
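To make the last correspondence more explicit, observe that if $v, w \in E_I$ and $0 < a < 1$, then \begin{equation} \log (|v_j|^a \, |w_j|^{1 - a}) = a \, \log |v_j| + (1 - a) \, \log |w_j| \end{equation} for each $j \in I$, so that multiplicative convex combinations of elements of $E_I$ correspond to ordinary convex combinations of the associated elements of $A_I$.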
To see that each $A_I$ is closed, one can use the fact that $E$ is closed, and that for each $y \in A_I$ there is a $w = w(y) \in E$ such that $w_j > 0$ and $\log w_j = y_j$ when $j \in I$, and $w_j = 0$ when $j \not\in I$. This also uses the complete circularity of $E$, and otherwise there is a standard argument based on the compactness of $E$. Although the boundedness of $E$ is not necessary for this step, it is important for the approximation argument in Section \ref{completely circular sets, continued}. Note that the polynomial hull of a bounded set $E \subseteq {\bf C}^n$ is also bounded. More precisely, if $|w_j| \le r_j$ for some $r_j \ge 0$ and each $w \in E$, then $|z_j| \le r_j$ for each $z \in \mathop{\rm Pol}(E)$, as one can see by considering the polynomial $p_j(w) = w_j$. \section{Coefficients} \label{coefficients} \setcounter{equation}{0} Let \begin{equation} p(z) = \sum_{|\alpha| \le N} a_\alpha \, z^\alpha \end{equation} be a polynomial with complex coefficients on ${\bf C}^n$, where the sum is taken over all multi-indices $\alpha$ with $|\alpha| \le N$ for some $N$. Thus \begin{equation} p(t_1 \, z_1, \ldots, t_n \, z_n) = \sum_{|\alpha| \le N} a_\alpha \, t^\alpha \, z^\alpha \end{equation} for each $t \in {\bf T}^n$, and so \begin{equation} a_\beta \, z^\beta = \frac{1}{(2 \pi)^n} \int_{{\bf T}^n} p(t_1 \, z_1, \ldots, t_n \, z_n) \, t^{-\beta} \, |dt| \end{equation} for every multi-index $\beta$, as in Section \ref{multiple fourier series}. In particular, \begin{eqnarray} \label{|a_beta| |z^beta| le ... le sup_{t in T^n} |p(t_1 z_1, ldots, t_n z_n)|} |a_\beta| \, |z^\beta| & \le & \frac{1}{(2 \pi)^n} \int_{{\bf T}^n} |p(t_1 \, z_1, \ldots, t_n \, z_n)| \, |dt| \\ & \le & \sup_{t \in {\bf T}^n} |p(t_1 \, z_1, \ldots, t_n \, z_n)|. \nonumber \end{eqnarray} Let $E$ be a nonempty subset of ${\bf C}^n$ which is completely circular, or at least invariant under the usual action of ${\bf T}^n$. If $p(z)$ is bounded on $E$, then it follows from the discussion in the previous paragraph that each term $a_\beta \, z^\beta$ in $p(z)$ is bounded on $E$. Equivalently, the monomial $z^\beta$ is bounded on $E$ whenever its coefficient $a_\beta$ in $p(z)$ is not equal to $0$. If $E$ is unbounded, then it may be that $z^\beta$ is not bounded on $E$ for any nonzero multi-index $\beta$. This implies that the only polynomials on ${\bf C}^n$ that are bounded on $E$ are constant, and hence that $\mathop{\rm Pol}(E) = {\bf C}^n$. As a nice family of examples in ${\bf C}^2$, consider \begin{equation} E(b) = \{(z_1, z_2) \in {\bf C}^2 : |z_1|^b \, |z_2| \le 1\}, \end{equation} where $b$ is a positive real number. Thus $E(b)$ is closed, completely circular, and multiplicatively convex for each $b > 0$, and \begin{equation} E(b)^* = \{(z_1, z_2) \in {\bf C}^2 : 0 < |z_1|^b \, |z_2| \le 1\} \end{equation} is dense in $E(b)$ for each $b$ as well, as in Section \ref{another condition}. If $b$ is rational, so that $b = \beta_1 / \beta_2$ for some positive integers $\beta_1$, $\beta_2$, then \begin{equation} E(b) = \{(z_1, z_2) \in {\bf C}^2 : |z_1|^{\beta_1} \, |z_2|^{\beta_2} \le 1\} = \{z \in {\bf C}^2 : |z^\beta| \le 1\}, \end{equation} where $\beta = (\beta_1, \beta_2)$. In this case, it is easy to see that $\mathop{\rm Pol}(E(b)) = E(b)$, using the polynomial $p(z) = z^\beta$.
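More explicitly, if $z \in \mathop{\rm Pol}(E(b))$ in this case, then \begin{equation} |z^\beta| \le \sup_{w \in E(b)} |w^\beta| = 1, \end{equation} so that $z \in E(b)$, and hence $\mathop{\rm Pol}(E(b)) \subseteq E(b)$.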
Otherwise, if $b$ is irrational, then one can check that $z^\beta$ is unbounded on $E(b)$ for every nonzero multi-index $\beta$, which implies that every nonconstant polynomial on ${\bf C}^n$ is unbounded on $E(b)$, as before, and hence that $\mathop{\rm Pol}(E(b)) = {\bf C}^2$. \section{Polynomial convexity} \label{polynomial convexity} \setcounter{equation}{0} A set $E \subseteq {\bf C}^n$ is said to be \emph{polynomially convex} if $\mathop{\rm Pol}(E) = E$. Thus $E$ has to be closed in this case, since the polynomial hull of any set is closed. Of course, $E \subseteq \mathop{\rm Pol}(E)$ automatically, and so $E$ is polynomially convex when $\mathop{\rm Pol}(E)$ is contained in $E$. We have seen before that finite subsets of ${\bf C}^n$ are polynomially convex, as are compact convex sets. A closed, bounded, and completely circular set is polynomially convex if and only if it is multiplicatively convex, as in Section \ref{multiplicative convexity}. The polynomial hull of any set $E \subseteq {\bf C}^n$ is polynomially convex, because $\mathop{\rm Pol}(\mathop{\rm Pol}(E)) = \mathop{\rm Pol}(E)$. If $p$ is a polynomial on ${\bf C}^n$ and $k$ is a nonnegative real number, then it is easy to see that \begin{equation} \label{E(p, k) = {z in {bf C}^n : |p(z)| le k}} E(p, k) = \{z \in {\bf C}^n : |p(z)| \le k\} \end{equation} is polynomially convex. In particular, one can take $k = 0$, so that the zero set of any polynomial is polynomially convex. If $E_\alpha$, $\alpha \in A$, is any collection of subsets of ${\bf C}^n$, then \begin{equation} \mathop{\rm Pol}\Big(\bigcap_{\alpha \in A} E_\alpha\Big) \subseteq \bigcap_{\alpha \in A} \mathop{\rm Pol}(E_\alpha), \end{equation} because $\bigcap_{\alpha \in A} E_\alpha \subseteq E_\beta$ for each $\beta \in A$, so that $\mathop{\rm Pol}\Big(\bigcap_{\alpha \in A} E_\alpha\Big) \subseteq \mathop{\rm Pol}(E_\beta)$ for each $\beta \in A$. If $E_\alpha$ is polynomially convex for each $\alpha \in A$, then we get that \begin{equation} \mathop{\rm Pol}\Big(\bigcap_{\alpha \in A} E_\alpha\Big) \subseteq \bigcap_{\alpha \in A} \mathop{\rm Pol}(E_\alpha) = \bigcap_{\alpha \in A} E_\alpha. \end{equation} This implies that $\bigcap_{\alpha \in A} E_\alpha$ is also polynomially convex, since it is automatically contained in its polynomial hull, as in the previous paragraph. The polynomial hull of any set $E \subseteq {\bf C}^n$ may be described as the intersection of all sets $E(p, k)$ such that \begin{equation} E \subseteq E(p, k), \end{equation} where $p$ is a polynomial on ${\bf C}^n$ and $k$ is a nonnegative real number, as before. It follows that $E$ is polynomially convex if and only if it can be expressed as the intersection of some collection of sets of the form $E(p, k)$, since these sets are all polynomially convex, and the intersection of any collection of polynomially convex sets is also polynomially convex. Alternatively, to avoid technical problems with unbounded sets, one can expand the definition to say that a closed set $E \subseteq {\bf C}^n$ is polynomially convex if for every compact set $K \subseteq E$ we have that $\mathop{\rm Pol}(K) \subseteq E$. Of course, this still implies that $\mathop{\rm Pol}(E) = E$ when $E$ is compact. With this expanded definition, it is easy to see that every closed convex set in ${\bf C}^n$ is polynomially convex, for essentially the same reasons as before. 
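Indeed, if $E$ is a closed convex set in ${\bf C}^n$ and $K \subseteq E$ is compact, then \begin{equation} \mathop{\rm Pol}(K) \subseteq \overline{\mathop{\rm Con}(K)} \subseteq E, \end{equation} using the fact that the polynomial hull of a bounded set is contained in the closure of its convex hull, together with the hypotheses that $E$ is convex and closed.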
Similarly, a closed completely circular set $E \subseteq {\bf C}^n$ is polynomially convex in this expanded sense if and only if it is multiplicatively convex. \section{Entire functions, revisited} \label{entire functions, revisited} \setcounter{equation}{0} Let $E$ be a nonempty subset of ${\bf C}^n$, and let $\mathop{\rm Hol}(E)$ be the set of $z \in {\bf C}^n$ such that \begin{equation} \label{|f(z)| le sup_{w in E} |f(w)|} |f(z)| \le \sup_{w \in E} |f(w)| \end{equation} for every complex-valued function $f$ on ${\bf C}^n$ that can be expressed as \begin{equation} \label{f(w) = sum_alpha a_alpha w^alpha} f(w) = \sum_\alpha a_\alpha \, w^\alpha. \end{equation} More precisely, the $a_\alpha$'s are supposed to be complex numbers, and the sum is taken over all multi-indices $\alpha$ and is supposed to be absolutely convergent for every $w \in {\bf C}^n$. This includes the case of polynomials, for which $a_\alpha = 0$ for all but finitely many $\alpha$, and so \begin{equation} \mathop{\rm Hol}(E) \subseteq \mathop{\rm Pol}(E). \end{equation} If $E$ is bounded, then $f$ can be approximated uniformly on $E$ by finite subsums of (\ref{f(w) = sum_alpha a_alpha w^alpha}), which are polynomials, and hence \begin{equation} \mathop{\rm Hol}(E) = \mathop{\rm Pol}(E). \end{equation} If $E$ is not bounded, then $f$ may be unbounded on $E$, so that the supremum in (\ref{|f(z)| le sup_{w in E} |f(w)|}) is $+\infty$, and (\ref{|f(z)| le sup_{w in E} |f(w)|}) holds vacuously. Of course, \begin{equation} E \subseteq \mathop{\rm Hol}(E) \end{equation} automatically. If $E_1 \subseteq E_2$, then \begin{equation} \mathop{\rm Hol}(E_1) \subseteq \mathop{\rm Hol}(E_2). \end{equation} Note that functions on ${\bf C}^n$ expressed by absolutely summable power series are continuous, because of uniform convergence on compact sets, and continuity of polynomials. This implies that $\mathop{\rm Hol}(E)$ is always a closed set in ${\bf C}^n$, and that \begin{equation} \mathop{\rm Hol}(\overline{E}) = \mathop{\rm Hol}(E). \end{equation} As in the case of polynomial hulls, one can check that \begin{equation} \mathop{\rm Hol}(\mathop{\rm Hol}(E)) = \mathop{\rm Hol}(E). \end{equation} Using exponential functions as in Section \ref{entire functions}, one also gets that \begin{equation} \mathop{\rm Hol}(E) \subseteq \overline{\mathop{\rm Con}(E)}. \end{equation} More precisely, this works for both bounded and unbounded sets $E$. If $E$ is invariant under the torus action, as in Section \ref{torus action}, then it is easy to see that $\mathop{\rm Hol}(E)$ is too, as before. One can also use the maximum principle to show that $\mathop{\rm Hol}(E)$ is completely circular in this case, as before. One can use the three lines theorem to show that $\mathop{\rm Hol}(E)$ is multiplicatively convex in this situation as well. However, there are many examples where $E$ is closed, completely circular, and multiplicatively convex, but $\mathop{\rm Hol}(E) \ne E$. This uses the same type of arguments as in Sections \ref{another condition} and \ref{coefficients}, and of course it is important that $E$ be unbounded in these examples. If $E_\alpha$, $\alpha \in A$, is any collection of subsets of ${\bf C}^n$, then \begin{equation} \mathop{\rm Hol}\Big(\bigcap_{\alpha \in A} E_\alpha\Big) \subseteq \bigcap_{\alpha \in A} \mathop{\rm Hol}(E_\alpha), \end{equation} as in the previous section. 
If $\mathop{\rm Hol}(E_\alpha) = E_\alpha$ for each $\alpha \in A$, then it follows that \begin{equation} \mathop{\rm Hol}\Big(\bigcap_{\alpha \in A} E_\alpha\Big) = \bigcap_{\alpha \in A} E_\alpha, \end{equation} as before. Let $g(w)$ be a complex-valued function on ${\bf C}^n$ that can be expressed by a power series that is absolutely summable for each $w \in {\bf C}^n$, and put \begin{equation} E(g, k) = \{w \in {\bf C}^n : |g(w)| \le k\} \end{equation} for each nonnegative real number $k$. As in the previous section, it is easy to see that \begin{equation} \mathop{\rm Hol}(E(g, k)) = E(g, k). \end{equation} One can also check that $\mathop{\rm Hol}(E)$ is the same as the intersection of all sets of the form $E(g, k)$ such that $E \subseteq E(g, k)$ for any $E \subseteq {\bf C}^n$, as before. \section{Power series expansions} \label{power series expansions} \setcounter{equation}{0} Let $R$ be a positive real number, and put \begin{equation} D(R) = \{w \in {\bf C} : |w| < R\}, \end{equation} as before. Suppose that $f(w)$ is a holomorphic function on $D(R)$, which one can take to mean that $f(w)$ is continuously-differentiable and satisfies the Cauchy--Riemann equations. Of course, it is well known that one can also start with significantly weaker regularity conditions on $f$. If $|z| < r < R$, then Cauchy's integral formula implies that \begin{equation} \label{f(z) = frac{1}{2 pi i} oint_{partial D(r)} frac{f(w)}{w - z} dw} f(z) = \frac{1}{2 \pi i} \oint_{\partial D(r)} \frac{f(w)}{w - z} \, dw. \end{equation} More precisely, this uses an oriented contour integral over the circle centered at $0$ with radius $r$, which is the boundary of the corresponding disk $D(r)$. Let us briefly review the standard argument for obtaining a power series expansion for $f(z)$ from (\ref{f(z) = frac{1}{2 pi i} oint_{partial D(r)} frac{f(w)}{w - z} dw}). If $|z| < r = |w|$, then \begin{equation} \label{frac{1}{w - z} = ... = w^{-1} sum_{j = 0}^infty w^{-j} z^j} \frac{1}{w - z} = \frac{1}{w \, (1 - w^{-1} \, z)} = w^{-1} \sum_{j = 0}^\infty w^{-j} \, z^j, \end{equation} where the series on the right is an absolutely convergent geometric series under these conditions. The partial sums of this series also converge uniformly as a function of $w$ on $\partial D(r)$ for each $z \in D(r)$, by Weierstrass' M-test. This permits us to interchange the order of summation and integration in (\ref{f(z) = frac{1}{2 pi i} oint_{partial D(r)} frac{f(w)}{w - z} dw}), to get that \begin{equation} \label{f(z) = sum_{j = 0}^infty a_j z^j} f(z) = \sum_{j = 0}^\infty a_j \, z^j, \end{equation} where $|z| < r < R$ and \begin{equation} \label{a_j = frac{1}{2 pi i} oint_{partial D(r)} f(w) w^{- j - 1} dw} a_j = \frac{1}{2 \pi i} \oint_{\partial D(r)} f(w) \, w^{- j - 1} \, dw \end{equation} for each $j \ge 0$. Although this expression for $a_j$ implicitly depends on $r$, different choices of $r < R$ lead to the same value of $a_j$. This is an immediate consequence of Cauchy's theorem, and one can also observe that $a_j$ is equal to $1/j!$ times the $j$th derivative of $f$ at $0$, which obviously does not depend on $r$. Alternatively, once one has this power series expansion for $f$ on $D(r)$, one can use it to evaluate integrals of $f$ over circles of radius less than $r$. In particular, the coefficients of the power series are given by the corresponding integrals over circles of radius less than $r$, because of the usual orthogonality properties of the $w^j$'s with respect to integration over the unit circle. 
This also uses the fact that the partial sums of the power series converge uniformly on compact subsets of $D(r)$, to interchange the order of integration and summation. Note that \begin{equation} \label{|a_j| le frac{1}{2 pi r^{j + 1}} int_{partial D(r)} |f(w)| |dw|} |a_j| \le \frac{1}{2 \pi r^{j + 1}} \int_{\partial D(r)} |f(w)| \, |dw| \end{equation} for each $j$, where the integral is now taken with respect to the element of arc length $|dw|$. In particular, \begin{equation} \label{|a_j| le r^{-j} (sup_{|w| = r} |f(w)|)} |a_j| \le r^{-j} \, \Big(\sup_{|w| = r} |f(w)|\Big). \end{equation} This works for each $r < R$, since $a_j$ does not depend on $r$, as in the previous paragraph. \section{Power series expansions, continued} \label{power series expansions, continued} \setcounter{equation}{0} Let $n$ be a positive integer, and let $R = (R_1, \ldots, R_n)$ be an $n$-tuple of positive real numbers. Also let \begin{equation} D_n(R) = D(R_1) \times \cdots \times D(R_n) \end{equation} be the corresponding polydisk in ${\bf C}^n$. To say that a complex-valued function $f(w)$ on $D_n(R)$ is holomorphic, we mean that $f(w)$ is continuously-differentiable on $D_n(R)$ and holomorphic as a function of $w_j$ for each $j$, which is to say that $f(w)$ satisfies the Cauchy--Riemann equations as a function of $w_j$ for each $j$. As in the one-variable case, one can start with weaker regularity conditions on $f$, but we shall not pursue this here. One might at least note that it would be sufficient in this section to ask that $f$ be continuous on $D_n(R)$ and holomorphic in each variable separately. If $z \in D_n(R)$ and $|z_1| < r_1 < R_1$, then we can apply Cauchy's integral formula to $f(w)$ as a holomorphic function of $w_1$ to get that \begin{equation} f(z) = \frac{1}{2 \pi i} \oint_{\partial D(r_1)} \frac{f(w_1, z_2, \ldots, z_n)}{w_1 - z_1} \, dw_1, \end{equation} as in the previous section. Repeating the process, if $|z_j| < r_j < R_j$ for each $j$, then we get that \begin{equation} \label{f(z) = frac{1}{(2 pi i)^n} oint ... dw_1 cdots dw_n} \quad f(z) = \frac{1}{(2 \pi i)^n} \oint_{\partial D(r_1)} \cdots \oint_{\partial D(r_n)} f(w) \, \Big(\prod_{j = 1}^n (w_j - z_j)^{-1}\Big) \, dw_1 \cdots dw_n, \end{equation} which is an $n$-dimensional version of Cauchy's integral formula. Let us pause for a moment to consider ``multiple geometric series''. If $\zeta \in {\bf C}^n$ and $|\zeta_j| < 1$ for each $j$, then \begin{equation} \label{prod_{j = 1}^n (1 - zeta_j)^{-1} = ... = sum_alpha zeta^alpha} \prod_{j = 1}^n (1 - \zeta_j)^{-1} = \prod_{j = 1}^n \Big(\sum_{\ell_j = 0}^\infty \zeta_j^{\ell_j}\Big) = \sum_\alpha \zeta^\alpha, \end{equation} where the last sum is taken over all multi-indices $\alpha$, and $\zeta^\alpha = \zeta_1^{\alpha_1} \cdots \zeta_n^{\alpha_n}$ is the usual monomial. All of these sums converge absolutely under these conditions. If $|z_j| < r_j = |w_j|$ for each $j$, then \begin{equation} \label{prod_{j = 1}^n (w_j - z_j)^{-1} = sum_alpha w^{-alpha - 1} z^alpha} \prod_{j = 1}^n (w_j - z_j)^{-1} = \prod_{j = 1}^n w_j^{-1} \, (1 - w_j^{-1} \, z_j)^{-1} = \sum_\alpha w^{-\alpha - 1} \, z^\alpha, \end{equation} where $w^{-\alpha - 1} = w_1^{- \alpha_1 - 1} \cdots w_n^{- \alpha_n - 1}$. As usual, this sum is absolutely convergent under these conditions, and is uniformly approximated by finite subsums as a function of $w$ on $\partial D(r_1) \times \cdots \times \partial D(r_n)$ for each $z \in D_n(r)$, $r = (r_1, \ldots, r_n)$.
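To quantify the absolute convergence, note that \begin{equation} \sum_\alpha |w^{-\alpha - 1}| \, |z^\alpha| = \prod_{j = 1}^n \frac{1}{r_j} \, \Big(1 - \frac{|z_j|}{r_j}\Big)^{-1} \end{equation} when $|z_j| < r_j = |w_j|$ for each $j$, by summing a geometric series in each variable separately.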
If $r_j < R_j$ for each $j$, then put \begin{equation} \label{a_alpha = frac{1}{(2 pi i)^n} oint cdots dw_1 cdots dw_n} a_\alpha = \frac{1}{(2 \pi i)^n} \oint_{\partial D(r_1)} \cdots \oint_{\partial D(r_n)} f(w) \, w^{- \alpha - 1} \, dw_1 \cdots dw_n \end{equation} for each multi-index $\alpha$. Thus \begin{equation} \label{|a_alpha| le frac{r^{-alpha - 1}}{(2 pi)^n} int ... |dw_1| cdots |dw_n|} |a_\alpha| \le \frac{r^{-\alpha - 1}}{(2 \pi)^n} \int_{\partial D(r_1)} \cdots \int_{\partial D(r_n)} |f(w)| \, |dw_1| \cdots |dw_n|, \end{equation} where $r^{-\alpha - 1}$ is as in the previous paragraph, and hence \begin{equation} \label{|a_alpha| le r^{-alpha} sup {|f(w)| : |w_j| = r_j for j = 1, ldots, n}} |a_\alpha| \le r^{-\alpha} \sup \{|f(w)| : |w_j| = r_j \hbox{ for } j = 1, \ldots, n\} \end{equation} for each $\alpha$. If $|z_j| < r_j < R_j$ for each $j$, then we get that \begin{equation} \label{f(z) = sum_alpha a_alpha z^alpha} f(z) = \sum_\alpha a_\alpha \, z^\alpha. \end{equation} More precisely, it is easy to see that the sum on the right converges absolutely under these conditions, by comparison with a convergent multiple geometric series. To get (\ref{f(z) = sum_alpha a_alpha z^alpha}), one can plug (\ref{prod_{j = 1}^n (w_j - z_j)^{-1} = sum_alpha w^{-alpha - 1} z^alpha}) into (\ref{f(z) = frac{1}{(2 pi i)^n} oint ... dw_1 cdots dw_n}), and interchange the order of summation and integration. This uses the fact that the sum in (\ref{prod_{j = 1}^n (w_j - z_j)^{-1} = sum_alpha w^{-alpha - 1} z^alpha}) can be approximated uniformly by finite subsums for $w \in \partial D(r_1) \times \cdots \times \partial D(r_n)$. As in the previous section, the coefficients $a_\alpha$ do not depend on the choice of $r = (r_1, \ldots, r_n)$, as long as $0 < r_j < R_j$ for each $j$. Thus (\ref{f(z) = sum_alpha a_alpha z^alpha}) holds on all of $D_n(R)$, with absolute convergence of the sum for every $z \in D_n(R)$. \section{Holomorphic functions, revisited} \label{holomorphic functions, revisited} \setcounter{equation}{0} Let us say that a complex-valued function $f(z)$ on a nonempty open set $U$ in ${\bf C}^n$ is holomorphic if it is continuously-differentiable in the real-variable sense and holomorphic in each variable separately. As in the previous section, this implies that $f$ can be represented by an absolutely convergent power series on a neighborhood of any point in $U$. In particular, $f$ is automatically continuously-differentiable of all orders on $U$. This would also work if we only asked that $f$ be continuous on $U$ and holomorphic in each variable separately, but we shall not try to deal with weaker regularity conditions here. Let $C(U)$ be the algebra of continuous complex-valued functions on $U$, and let $\mathcal{H}(U)$ be the subspace of $C(U)$ consisting of holomorphic functions. More precisely, $\mathcal{H}(U)$ is a subalgebra of $C(U)$, because the sum and product of two holomorphic functions on $U$ are also holomorphic. Remember that there is also a natural topology on $C(U)$, defined by the supremum seminorms associated to nonempty compact subsets of $U$. As in the one-variable case, one can check that $\mathcal{H}(U)$ is closed in $C(U)$ with respect to this topology, using the $n$-dimensional version of the Cauchy integral formula. Let $f$ be a holomorphic function on $U$, and let $U_0$ be the set of $p \in U$ such that $f = 0$ at every point in a neighborhood of $p$, so that $U_0$ is an open set in $U$, by construction.
If $Z$ is the set of $p \in U$ such that $f$ and all of its derivatives are equal to $0$ at $p$, then $Z$ is relatively closed in $U$, because $f$ and its derivatives are continuous on $U$. Clearly $U_0 \subseteq Z$, and $Z \subseteq U_0$ because of the local power series representation of $f$ at each point in $U$. Thus $U_0 = Z$ is both open and relatively closed in $U$. It follows that $U_0 = U$ when $U_0 \ne \emptyset$ and $U$ is connected. Suppose that $h$ is a continuous complex-valued function on a closed disk in the complex plane which is holomorphic in the interior and not equal to $0$ at any point on the boundary. Let $a$ be the number of points in the interior at which $h$ is equal to $0$, counted with their appropriate multiplicity. The argument principle implies that $a$ is the same as the winding number of the boundary values of $h$ around $0$ in the range. This winding number is not changed by small perturbations of $h$ on the boundary with respect to the supremum norm, and hence $a$ is not changed by small perturbations, with respect to the supremum norm, of $h$ as a continuous function on the closed disk that is holomorphic in the interior. This implies that a holomorphic function $f$ in $n \ge 2$ complex variables cannot have isolated zeros, by considering $f$ as a continuous family of holomorphic functions in one variable parameterized by the other $n - 1$ variables. \section{Laurent expansions} \label{laurent expansions} \setcounter{equation}{0} Let $R$, $T$ be nonnegative real numbers with $R < T$, and let \begin{equation} \label{A(R, T) = {z in {bf C} : R < |w| < T}} A(R, T) = \{w \in {\bf C} : R < |w| < T\} \end{equation} be the open annulus in the complex plane with inner radius $R$ and outer radius $T$. If $f(w)$ is a holomorphic function on $A(R, T)$ and $R < r < |z| < t < T$, then Cauchy's integral formula implies that \begin{equation} f(z) = \frac{1}{2 \pi i} \oint_{\partial A(r, t)} \frac{f(w)}{w - z} \, dw. \end{equation} The boundary of $A(r, t)$ consists of the circles centered at $0$ with radii $r$, $t$ and opposite orientations, and the integral over $\partial A(r, t)$ may be re-expressed as \begin{equation} \oint_{|w| = t} \frac{f(w)}{w - z} \, dw - \oint_{|w| = r} \frac{f(w)}{w - z} \, dw, \end{equation} where these circles have their usual positive orientations in both integrals. As in Section \ref{power series expansions}, \begin{equation} \frac{1}{2 \pi i} \oint_{|w| = t} \frac{f(w)}{w - z} \, dw = \sum_{j = 0}^\infty a_j \, z^j, \end{equation} where \begin{equation} a_j = \frac{1}{2 \pi i} \oint_{|w| = t} f(w) \, w^{- j - 1} \, dw. \end{equation} Note that \begin{equation} |a_j| \le \frac{1}{2 \pi t^{j + 1}} \oint_{|w| = t} |f(w)| \, |dw| \le t^{-j} \Big(\sup_{|w| = t} |f(w)|\Big) \end{equation} for each $j \ge 0$, so that $\sum_{j = 0}^\infty a_j \, z^j$ converges absolutely when $|z| < t$. The other term is a bit different, because $|z| > |w| = r$. This time we use \begin{equation} \frac{-1}{w - z} = \frac{1}{z \, (1 - z^{-1} \, w)} = z^{-1} \sum_{j = 0}^\infty z^{-j} \, w^j \end{equation} to get that \begin{equation} - \frac{1}{2 \pi i} \oint_{|w| = r} \frac{f(w)}{w - z} \, dw = \sum_{j = -1}^{-\infty} a_j \, z^j, \end{equation} where \begin{equation} a_j = \frac{1}{2 \pi i} \oint_{|w| = r} f(w) \, w^{- j - 1} \, dw \end{equation} for $j \le -1$.
Thus \begin{equation} |a_j| \le \frac{1}{2 \pi r^{j + 1}} \int_{|w| = r} |f(w)| \, |dw| \le r^{-j} \Big(\sup_{|w| = r} |f(w)|\Big) \end{equation} for each $j \le -1$, so that $\sum_{j = -1}^{-\infty} a_j \, z^j$ converges absolutely when $|z| > r$. Combining these two series, we get that \begin{equation} \label{f(z) = sum_{j = -infty}^infty a_j z^j} f(z) = \sum_{j = -\infty}^\infty a_j \, z^j \end{equation} when $r < |z| < t$, where the coefficients $a_j$ are given as above for $j \ge 0$ and $j \le -1$, respectively. As in Section \ref{power series expansions}, these coefficients do not actually depend on the choices of radii $r, t \in (R, T)$. \section{Laurent expansions, continued} \label{laurent expansions, continued} \setcounter{equation}{0} Let $R$, $T$ be nonnegative real numbers with $R < T$, and let $V$ be a nonempty open set in ${\bf C}^{n - 1}$ for some $n \ge 2$. If $z = (z_1, z_2, \ldots, z_n) \in {\bf C}^n$, then we put $z' = (z_2, \ldots, z_n) \in {\bf C}^{n - 1}$, and identify $z$ with $(z_1, z') \in {\bf C} \times {\bf C}^{n - 1}$, so that \begin{equation} U = A(R, T) \times V \end{equation} is identified with an open set in ${\bf C}^n$. Let $f$ be a holomorphic function on $U$, and let $z$ be an element of $U$, with $r < |z_1| < t$ for some $r, t \in (R, T)$. Applying the discussion in the previous section to $f(z_1, z')$ as a function of $z_1$, we get that \begin{equation} f(z_1, z') = \sum_{j = -\infty}^\infty a_j(z') \, z_1^j, \end{equation} where \begin{equation} a_j(z') = \frac{1}{2 \pi i} \oint_{|w| = t} f(w, z') \, w^{- j - 1} \, dw \end{equation} when $j \ge 0$, and \begin{equation} a_j(z') = \frac{1}{2 \pi i} \oint_{|w| = r} f(w, z') \, w^{- j - 1} \, dw \end{equation} when $j \le -1$. It follows from these expressions that $a_j(z')$ is holomorphic as a function of $z'$ on $V$ for each $j$, because $f$ is holomorphic. Suppose that $V_1$ is a nonempty open subset of $V$, and that $f$ is actually a holomorphic function on the open set \begin{equation} \label{(A(R, T) times V) cup (D(T) times V_1)} (A(R, T) \times V) \cup (D(T) \times V_1) \end{equation} in ${\bf C}^n$. Thus $f(w, z')$ is holomorphic as a function of $w$ on the open disk $D(T)$ for each $z' \in V_1$. This implies that \begin{equation} a_j(z') = 0 \end{equation} when $z' \in V_1$ and $j \le -1$, by Cauchy's theorem. If $V$ is connected, then it follows that the same conclusion holds for every $z' \in V$ and $j \le -1$, because $a_j(z')$ is holomorphic as a function of $z'$ on $V$ for each $j$. Under these conditions, we get that \begin{equation} f(z_1, z') = \sum_{j = 0}^\infty a_j(z') \, z_1^j \end{equation} for every $z = (z_1, z')$ in (\ref{(A(R, T) times V) cup (D(T) times V_1)}). This series actually converges absolutely when $|z_1| < T$ and $z' \in V$, as one can see by choosing $t$ such that $|z_1| < t < T$, and applying the estimate for $|a_j|$ in the previous section. Similarly, the partial sums of this series converge uniformly on compact subsets of $D(T) \times V$. The partial sums are also holomorphic in $z_1$ and $z'$, and it follows that the series defines a holomorphic function on $D(T) \times V$. Thus $f$ extends to a holomorphic function on $D(T) \times V$ in this case. \section[\ Completely circular domains]{Completely circular domains} \label{completely circular domains} \setcounter{equation}{0} Let $U$ be a nonempty completely circular open subset of ${\bf C}^n$.
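Simple examples include open polydisks centered at $0$ and the open unit ball \begin{equation} B = \Big\{z \in {\bf C}^n : \sum_{j = 1}^n |z_j|^2 < 1\Big\}, \end{equation} since membership in these sets depends only on the moduli of the coordinates, and is preserved when the moduli are decreased.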
If $z \in U$, then there is an $n$-tuple $R = (R_1, \ldots, R_n)$ of positive real numbers such that \begin{equation} z \in D_n(R) \subseteq U, \end{equation} where $D_n(R) = D(R_1) \times \cdots \times D(R_n)$ is the polydisk associated to $R$, as before. Thus $U$ can be expressed as a union of open polydisks. As in Section \ref{another condition}, let $U^*$ be the set of $w \in U$ such that $w_j \ne 0$ for each $j$, and let $A$ be the set of $y \in {\bf R}^n$ for which there is a $w \in U^*$ such that $y_j = \log |w_j|$ for each $j$. Note that $A$ is an open set in ${\bf R}^n$, and that for each $z \in U$ there is a $w \in U^*$ such that $|z_j| < |w_j|$ for each $j$, because $U$ is an open set in ${\bf C}^n$. As before, if $x \in {\bf R}^n$, $y \in A$, and $x_j \le y_j$ for each $j$, then $x \in A$, because $U$ is completely circular. Similarly, if $\zeta \in {\bf C}^n$, $x \in A$, and $|\zeta_j| \le \exp x_j$ for each $j$, then $\zeta \in U$. Conversely, for each $\zeta \in U$ there is an $x \in A$ with this property, so that $U$ is completely determined by $A$ under these conditions. Let $f$ be a holomorphic function on $U$. If $R$ is an $n$-tuple of positive real numbers such that $D_n(R) \subseteq U$, then $f$ can be represented by a power series on $D_n(R)$, as in Section \ref{power series expansions, continued}. More precisely, there are complex numbers $a_\alpha$ for each multi-index $\alpha$ such that \begin{equation} f(z) = \sum_\alpha a_\alpha \, z^\alpha \end{equation} for each $z \in D_n(R)$, where the sum converges absolutely. The coefficients $a_\alpha$ can be given by the derivatives of $f$ at $0$ in the usual way, since \begin{equation} \frac{\partial^{|\alpha|} f}{\partial z^\alpha}(0) = \alpha ! \cdot a_\alpha, \end{equation} where $\alpha ! = \alpha_1 ! \cdots \alpha_n !$. In particular, the coefficients $a_\alpha$ do not depend on $R$, and so this power series representation for $f(z)$ holds for every $z \in U$. Remember that $\mathop{\rm Con}(A)$ denotes the convex hull of $A$ in ${\bf R}^n$, which is an open set in ${\bf R}^n$ in this case, because $A$ is open. Similarly, if $x \in {\bf R}^n$, $y \in \mathop{\rm Con}(A)$, and $x_j \le y_j$ for each $j$, then $x \in \mathop{\rm Con}(A)$, because of the corresponding property of $A$. Consider \begin{eqnarray} V & = & \{\zeta \in {\bf C}^n : \hbox{ there is an } x \in \mathop{\rm Con}(A) \hbox{ such that } \\ & & \qquad\qquad\quad |\zeta_j| \le \exp x_j \hbox{ for } j = 1, \ldots, n\}. \nonumber \end{eqnarray} It is easy to see that $V$ is open, completely circular, and multiplicatively convex under these conditions. We also have that $U \subseteq V$, with $U = V$ exactly when $U$ is multiplicatively convex. As in Section \ref{power series}, the set of $z \in {\bf C}^n$ for which $\sum_\alpha a_\alpha \, z^\alpha$ is absolutely summable is completely circular and multiplicatively convex. It is not difficult to check that this happens for each $z \in V$, so that $f$ extends to a holomorphic function on $V$. \section[\ Convex sets]{Convex sets} \label{convex sets} \setcounter{equation}{0} Let $A$ be a nonempty convex set in ${\bf R}^n$. As in Section \ref{convex hulls}, if $x \in {\bf R}^n \backslash \overline{A}$, then there is a linear function $\lambda$ on ${\bf R}^n$ such that \begin{equation} \label{sup_{y in A} lambda(y) < lambda(x)} \sup_{y \in A} \lambda(y) < \lambda(x). \end{equation} More precisely, we can express $\lambda$ as \begin{equation} \lambda(y) = \sum_{j = 1}^n a_j \, y_j \end{equation} for some $a \in {\bf R}^n$.
Of course, $a \ne 0$, and we can normalize $a$ so that \begin{equation} \label{max_{1 le j le n} |a_j| = 1} \max_{1 \le j \le n} |a_j| = 1, \end{equation} by multiplying $a$ by a positive real number. Suppose now that $x \in \partial A$, and let us show that there is a nonzero linear functional $\lambda$ on ${\bf R}^n$ such that \begin{equation} \label{lambda(y) le lambda(x)} \lambda(y) \le \lambda(x) \end{equation} for every $y \in A$. By hypothesis, there is a sequence $\{x(l)\}_{l = 1}^\infty$ of elements of ${\bf R}^n \backslash \overline{A}$ that converges to $x$. As in the previous paragraph, for each $l$ there is an $a(l) \in {\bf R}^n$ such that \begin{equation} \max_{1 \le j \le n} |a_j(l)| = 1 \end{equation} and $\lambda_l(y) = \sum_{j = 1}^n a_j(l) \, y_j$ satisfies \begin{equation} \sup_{y \in A} \lambda_l(y) < \lambda_l(x(l)). \end{equation} Passing to a subsequence if necessary, we may suppose that $\{a(l)\}_{l = 1}^\infty$ converges to some $a \in {\bf R}^n$, which also satisfies (\ref{max_{1 le j le n} |a_j| = 1}). If $\lambda$ is the linear functional on ${\bf R}^n$ corresponding to $a$ as before, then it is easy to see that $\lambda$ satisfies (\ref{lambda(y) le lambda(x)}), as desired. If in addition $A$ is an open set in ${\bf R}^n$, then we get that \begin{equation} \lambda(y) < \lambda(x) \end{equation} for every $y \in A$. Otherwise, if $\lambda(y) = \lambda(x)$ for some $y \in A$, then one can use the facts that $A$ is open and $\lambda \ne 0$ to get that $\lambda(z) > \lambda(x)$ for some $z \in A$. As another special case, suppose that $A$ has the property that for each $u \in {\bf R}^n$ and $y \in A$ with $u_j \le y_j$ for each $j$ we have that $u \in A$ too. If $\lambda(y) = \sum_{j = 1}^n a_j \, y_j$ satisfies (\ref{lambda(y) le lambda(x)}), then $a_j \ge 0$ for each $j$. \section[\ Completely circular domains, continued]{Completely circular domains, continued} \label{completely circular domains, continued} \setcounter{equation}{0} Let $U$ be a nonempty open subset of ${\bf C}^n$ that is also completely circular and multiplicatively convex, and let $w$ be an element of the boundary of $U$. Note that $w \ne 0$, because $0 \in U$. Let $I$ be the set of $j = 1, \ldots, n$ such that $w_j \ne 0$, and let $U_I$ be the set of $z \in U$ such that $z_j \ne 0$ when $j \in I$. Also let ${\bf R}^I$ be the set of real-valued functions on $I$, and let $A_I$ be the set of elements of ${\bf R}^I$ of the form $\log |z_j|$, $j \in I$, with $z \in U_I$. If $u \in {\bf R}^I$, $v \in A_I$, and $u_j \le v_j$ for each $j \in I$, then $u \in A_I$ too, because $U$ is completely circular. It is easy to see that $A_I$ is open and convex in ${\bf R}^I$, because $U$ is open and multiplicatively convex. One can also check that $\log |w_j|$, $j \in I$, corresponds to an element of the boundary of $A_I$ in ${\bf R}^I$ under these conditions. As in the previous section, there is an $a \in {\bf R}^I$ such that $a_j \ge 0$ for each $j \in I$, $\max_{j \in I} a_j = 1$, and \begin{equation} \label{sum_{j in I} a_j v_j < sum_{j in I} a_j log |w_j|} \sum_{j \in I} a_j \, v_j < \sum_{j \in I} a_j \, \log |w_j| \end{equation} for each $v \in A_I$. If $j \in I$ and $l$ is a positive integer, then let $\alpha_j(l)$ be the smallest positive integer such that \begin{equation} a_j \, l \le \alpha_j(l). \end{equation} Put $\alpha_j(l) = 0$ when $j \not\in I$, so that $\alpha(l) = (\alpha_1(l), \ldots, \alpha_n(l))$ is a multi-index for each positive integer. 
By construction, $a_{j_0} = 1$ for some $j_0 \in I$, which implies that $\alpha_{j_0}(l) = l$ for each $l$. In particular, the multi-indices $\alpha(l)$ are all distinct. Consider \begin{equation} \label{f_w(z) = sum_{l = 1}^infty w^{-alpha(l)} z^{alpha(l)}} f_w(z) = \sum_{l = 1}^\infty w^{-\alpha(l)} \, z^{\alpha(l)}. \end{equation} This is a power series in $z$, with coefficients $w^{-\alpha(l)} = \prod_{j \in I} w_j^{-\alpha_j(l)}$, and we would like to show that it converges absolutely when $z \in U$. If $z \in U \backslash U_I$, so that $z_j = 0$ for some $j \in I$, then $z^{\alpha(l)} = 0$ for each $l$, because $\alpha_j(l) \ge 1$ for every $j \in I$ and $l \ge 1$ by construction. Thus we may as well suppose that $z \in U_I$, so that $\log |z_j|$, $j \in I$, determines an element of $A_I$, and hence \begin{equation} \sum_{j \in I} a_j \, \log |z_j| < \sum_{j \in I} a_j \, \log |w_j|. \end{equation} Equivalently, \begin{equation} \label{prod_{j in I} |z_j|^{a_j} < prod_{j in I} |w_j|^{a_j}} \prod_{j \in I} |z_j|^{a_j} < \prod_{j \in I} |w_j|^{a_j}. \end{equation} Observe that \begin{equation} 0 \le \alpha_j(l) - a_j \, l \le 1 \end{equation} for each $j \in I$ and $l \ge 1$. Remember that $\alpha_j(l)$ is the smallest positive integer greater than or equal to $a_j \, l$, so that $\alpha_j(l) - a_j \, l \ge 0$ in particular. If $a_j > 0$, then $\alpha_j(l) - a_j \, l < 1$ for each $l$. Otherwise, if $a_j = 0$, then $\alpha_j(l) = 1$ for each $l$. Of course, \begin{equation} |w^{-\alpha(l)}| \, |z^{\alpha(l)}| = \prod_{j \in I} \Big(\frac{|z_j|}{|w_j|}\Big)^{\alpha_j(l)}. \end{equation} Using the observation in the previous paragraph, we get that \begin{equation} \prod_{j \in I} \Big(\frac{|z_j|}{|w_j|}\Big)^{\alpha_j(l) - a_j \, l} \le C \end{equation} for some $C \ge 0$, where $C$ depends on $w$ and $z$ but not $l$. Hence \begin{equation} \label{|w^{-alpha(l)}| |z^{alpha(l)}| le C prod_{j in I} ...} |w^{-\alpha(l)}| \, |z^{\alpha(l)}| \le C \,\prod_{j \in I} \Big(\frac{|z_j|}{|w_j|}\Big)^{a_j \, l} \end{equation} for each $l$. Equivalently, \begin{equation} \label{|w^{-alpha(l)}| |z^{alpha(l)}| le C (prod_{j in I} ....)^l} |w^{-\alpha(l)}| \, |z^{\alpha(l)}| \le C \, \Big(\prod_{j \in I} \frac{|z_j|^{a_j}}{|w_j|^{a_j}}\Big)^l \end{equation} for each $l$. Note that the quantity in parentheses on the right side is strictly less than $1$, by (\ref{prod_{j in I} |z_j|^{a_j} < prod_{j in I} |w_j|^{a_j}}). It follows that the series in (\ref{f_w(z) = sum_{l = 1}^infty w^{-alpha(l)} z^{alpha(l)}}) converges absolutely when $z \in U_I$, by comparison with a convergent geometric series, as desired. Thus $f_w(z)$ defines a holomorphic function of $z$ on $U$. If $z = w$, then the series in (\ref{f_w(z) = sum_{l = 1}^infty w^{-alpha(l)} z^{alpha(l)}}) does not converge, because every term in the series is equal to $1$. It is easy to see that $t \, w \in U$ when $t$ is a nonnegative real number strictly less than $1$, because $w \in \partial U$ and $U$ is completely circular. In this case, \begin{equation} f_w(t \, w) = \sum_{l = 1}^\infty t^{|\alpha(l)|}, \end{equation} which tends to $+\infty$ as $t \to 1$. It follows that $f_w(z)$ does not have a holomorphic extension to a neighborhood of $w$, since it is not even bounded on $U$ near $w$. \section[\ Convex domains]{Convex domains} \label{convex domains} \setcounter{equation}{0} Let $U$ be a nonempty convex open set in ${\bf C}^n$, and let $w$ be an element of the boundary of $U$.
As in Section \ref{convex sets}, there is a complex-linear function $\mu$ on ${\bf C}^n$ such that \begin{equation} \mathop{\rm Re} \mu(z) < \mathop{\rm Re} \mu(w) \end{equation} for every $z \in U$. In particular, \begin{equation} \mu(z) \ne \mu(w) \end{equation} for every $z \in U$. It follows that \begin{equation} g_w(z) = \frac{1}{\mu(z) - \mu(w)} \end{equation} is a holomorphic function on $U$ that is unbounded on the intersection of $U$ with any neighborhood of $w$, and hence does not have a holomorphic extension to the union of $U$ with any neighborhood of $w$. \section[\ Planar domains]{Planar domains} \label{Planar domains} \setcounter{equation}{0} Let $U$ be a nonempty open set in the complex plane, and let $w$ be an element of the boundary of $U$. Observe that \begin{equation} h_w(z) = \frac{1}{z - w} \end{equation} is a holomorphic function on $U$ that is unbounded on the intersection of $U$ with any neighborhood of $w$, and hence cannot be extended to a holomorphic function on the union of $U$ with any neighborhood of $w$. In particular, holomorphic functions in one complex variable can have isolated zeros, and thus isolated singularities. We have seen before that holomorphic functions in two or more complex variables cannot have isolated zeros, and they also cannot have isolated singularities, by the earlier discussion about Laurent expansions. \part{Convolution} \section[\ Convolution on ${\bf T}^n$]{Convolution on ${\bf T}^n$} \label{convolution on T^n} \setcounter{equation}{0} Let $f$, $g$ be continuous complex-valued functions on the $n$-dimensional torus ${\bf T}^n$. The \emph{convolution} $f * g$ is the function defined on ${\bf T}^n$ by \begin{equation} \label{(f * g)(z) = frac{1}{(2 pi)^n} int_{T^n} f(z diamond w^{-1}) g(w) |dw|} (f * g)(z) = \frac{1}{(2 \pi)^n} \int_{{\bf T}^n} f(z \diamond w^{-1}) \, g(w) \, |dw|. \end{equation} As before, $|dw|$ is the $n$-dimensional element of integration on ${\bf T}^n$ corresponding to the element $|dw_j|$ of arc length in each variable. Alternatively, $|dw|$ represents the appropriate version of Lebesgue measure on ${\bf T}^n$. If $z = (z_1, \ldots, z_n)$ and $w = (w_1, \ldots, w_n)$ are elements of ${\bf T}^n$, then we put \begin{equation} w^{-1} = (w_1^{-1}, \ldots, w_n^{-1}) \end{equation} and \begin{equation} z \diamond w = (z_1 \, w_1, \ldots, z_n \, w_n), \end{equation} so that $z \diamond w^{-1}$ is also defined. It is easy to see that $f * g$ is also a continuous function on ${\bf T}^n$ when $f$, $g$ are continuous, using the fact that continuous functions on ${\bf T}^n$ are uniformly continuous, since ${\bf T}^n$ is compact. Observe that \begin{equation} \label{f * g = g * f} f * g = g * f, \end{equation} as one can see using the change of variables $w \mapsto w^{-1} \diamond z$ in (\ref{(f * g)(z) = frac{1}{(2 pi)^n} int_{T^n} f(z diamond w^{-1}) g(w) |dw|}). More precisely, this also uses the fact that the measure on ${\bf T}^n$ is invariant under the mappings $w \mapsto w^{-1}$ and $w \mapsto u \diamond w$ for each $u \in {\bf T}^n$. Similarly, one can check that \begin{equation} (f * g) * h = f * (g * h) \end{equation} for all continuous functions $f$, $g$, and $h$ on ${\bf T}^n$. If $\alpha = (\alpha_1, \ldots, \alpha_n)$ is an $n$-tuple of integers, then the corresponding Fourier coefficient of a continuous function $f$ on ${\bf T}^n$ is defined as usual by \begin{equation} \widehat{f}(\alpha) = \frac{1}{(2 \pi)^n} \int_{{\bf T}^n} f(z) \, z^{-\alpha} \, |dz|. 
\end{equation} It is easy to check that \begin{equation} \label{widehat{(f * g)}(alpha) = widehat{f}(alpha) widehat{g}(alpha)} \widehat{(f * g)}(\alpha) = \widehat{f}(\alpha) \, \widehat{g}(\alpha) \end{equation} for all continuous functions $f$, $g$ on ${\bf T}^n$ and $\alpha \in {\bf Z}^n$. More precisely, if we substitute the definition of $f * g$ into the definition of the Fourier coefficient, then we get a double integral in $z$ and $w$. This double integral can be evaluated by integrating in $z$ first, using the change of variables $z \mapsto z \diamond w$ and the fact that \begin{equation} (z \diamond w)^{-\alpha} = z^{-\alpha} \, w^{-\alpha} \end{equation} for all $z, w \in {\bf T}^n$ and $\alpha \in {\bf Z}^n$. The double integral then splits into a product of integrals over $z$ and $w$ separately, which leads to (\ref{widehat{(f * g)}(alpha) = widehat{f}(alpha) widehat{g}(alpha)}). Note that the convolution $f * g$ can be defined as before when $f$ is continuous on ${\bf T}^n$ and $g$ is Lebesgue integrable, and satisfies \begin{equation} \label{sup_{z in {bf T}^n} |(f * g)(z)| le ...} \sup_{z \in {\bf T}^n} |(f * g)(z)| \le \Big(\sup_{z \in {\bf T}^n} |f(z)|\Big) \, \Big(\frac{1}{(2 \pi)^n} \int_{{\bf T}^n} |g(w)| \, |dw|\Big). \end{equation} In this case, it is easy to see that $f * g$ is still continuous, because $f$ is uniformly continuous on ${\bf T}^n$. Of course, the analogous statements also hold when the roles of $f$ and $g$ are reversed, because convolution is commutative. If $f$ is bounded and measurable on ${\bf T}^n$ and $g$ is integrable, then the convolution $(f * g)(z)$ can be defined in the same way for each $z \in {\bf T}^n$, and satisfies (\ref{sup_{z in {bf T}^n} |(f * g)(z)| le ...}). The convolution $f * g$ is actually continuous in this case as well, as one can show by approximating $g$ by continuous functions with respect to the $L^1$ norm on ${\bf T}^n$, so that $f * g$ is approximated uniformly by continuous functions on ${\bf T}^n$ by (\ref{sup_{z in {bf T}^n} |(f * g)(z)| le ...}) and the previous remarks. Suppose that $f$, $g$ are nonnegative real-valued integrable functions on ${\bf T}^n$. In this case, \begin{eqnarray} \label{frac{1}{(2 pi)^n} int_{{bf T}^n} (f * g)(z) |dz| = ...} \lefteqn{\frac{1}{(2 \pi)^n} \int_{{\bf T}^n} (f * g)(z) \, |dz| =} \\ & & \Big(\frac{1}{(2 \pi)^n} \int_{{\bf T}^n} f(z) \, |dz|\Big) \Big(\frac{1}{(2 \pi)^n} \int_{{\bf T}^n} g(w) \, |dw|\Big). \nonumber \end{eqnarray} To see this, one can substitute the definition of $(f * g)(z)$ into the integral on the left, which leads to a double integral in $w$ and $z$. One can then interchange the order of integration and use the change of variable $z \mapsto z \diamond w$ to split the double integral into a product of integrals in $z$ and $w$, as before. In particular, it follows that $(f * g)(z)$ is finite for almost every $z \in {\bf T}^n$. Now let $f$, $g$ be integrable complex-valued functions on ${\bf T}^n$. Observe that \begin{equation} \int_{{\bf T}^n} |f(z \diamond w^{-1})| \, |g(w)| \, |dw| < \infty \end{equation} for almost every $z \in {\bf T}^n$, by the argument in the previous paragraph applied to $|f|$, $|g|$. Thus $(f * g)(z)$ is defined for almost every $z \in {\bf T}^n$, and satisfies \begin{equation} \label{|(f * g)(z)| le ...} |(f * g)(z)| \le \frac{1}{(2 \pi)^n} \int_{{\bf T}^n} |f(z \diamond w^{-1})| \, |g(w)| \, |dw|.
\end{equation} Using Fubini's theorem, one may conclude that $f * g$ is an integrable function on ${\bf T}^n$, and that \begin{eqnarray} \lefteqn{\frac{1}{(2 \pi)^n} \int_{{\bf T}^n} |(f * g)(z)| \, |dz| \le} \\ & & \Big(\frac{1}{(2 \pi)^n} \int_{{\bf T}^n} |f(z)| \, |dz|\Big) \Big(\frac{1}{(2 \pi)^n} \int_{{\bf T}^n} |g(w)| \, |dw|\Big). \nonumber \end{eqnarray} One can also check that convolution is commutative and associative on $L^1({\bf T}^n)$, as before. If $f$ is an integrable function on ${\bf T}^n$, then the Fourier coefficients $\widehat{f}(\alpha)$ can be defined in the usual way, and satisfy \begin{equation} |\widehat{f}(\alpha)| \le \frac{1}{(2 \pi)^n} \int_{{\bf T}^n} |f(z)| \, |dz| \end{equation} for each $\alpha \in {\bf Z}^n$. If $f$ and $g$ are integrable functions on ${\bf T}^n$, so that their convolution $f * g$ is also integrable, as in the preceding paragraph, then the Fourier coefficients of $f * g$ are equal to the product of the Fourier coefficients of $f$ and $g$, as in (\ref{widehat{(f * g)}(alpha) = widehat{f}(alpha) widehat{g}(alpha)}). This follows from Fubini's theorem, as before. \section[\ Convolution on ${\bf R}^n$]{Convolution on ${\bf R}^n$} \label{convolution on R^n} \setcounter{equation}{0} Let $f$ and $g$ be nonnegative real-valued integrable functions on ${\bf R}^n$, and put \begin{equation} \label{(f * g)(x) = int_{{bf R}^n} f(x - y) g(y) dy} (f * g)(x) = \int_{{\bf R}^n} f(x - y) \, g(y) \, dy, \end{equation} where $dy$ denotes Lebesgue measure on ${\bf R}^n$, as usual. It is easy to see that \begin{equation} \label{int_{{bf R}^n} (f * g)(x) dx = ...} \int_{{\bf R}^n} (f * g)(x) \, dx = \Big(\int_{{\bf R}^n} f(x) \, dx\Big) \Big(\int_{{\bf R}^n} g(y) \, dy\Big), \end{equation} by interchanging the order of integration and using the change of variables $x \mapsto x + y$, as in the previous section. Thus $f * g$ is integrable on ${\bf R}^n$ under these conditions, and finite almost everywhere on ${\bf R}^n$ in particular. If $f$ and $g$ are arbitrary real or complex-valued integrable functions on ${\bf R}^n$, then it follows that \begin{equation} \int_{{\bf R}^n} |f(x - y)| \, |g(y)| \, dy < \infty \end{equation} for almost every $x \in {\bf R}^n$, by applying the preceding argument to $|f|$ and $|g|$. This shows that the definition (\ref{(f * g)(x) = int_{{bf R}^n} f(x - y) g(y) dy}) of $(f * g)(x)$ also makes sense in this case for almost every $x \in {\bf R}^n$, and satisfies \begin{equation} \label{|(f * g)(x)| le int_{{bf R}^n} |f(x - y)| |g(y)| dy} |(f * g)(x)| \le \int_{{\bf R}^n} |f(x - y)| \, |g(y)| \, dy. \end{equation} One can also check that $f * g$ is measurable, using Fubini's theorem. Integrating in $x$ as before, we get that \begin{equation} \label{int_{{bf R}^n} |(f * g)(x)| dx le ...} \int_{{\bf R}^n} |(f * g)(x)| \, dx \le \Big(\int_{{\bf R}^n} |f(x)| \, dx\Big) \Big(\int_{{\bf R}^n} |g(y)| \, dy\Big), \end{equation} and that $f * g$ is integrable in particular. As in the previous section, it is easy to see that \begin{equation} f * g = g * f, \end{equation} using the change of variables $y \mapsto x - y$. Similarly, one can verify that \begin{equation} \label{(f * g) * h = f * (g * h)} (f * g) * h = f * (g * h) \end{equation} for any integrable functions $f$, $g$, and $h$ on ${\bf R}^n$. 
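As an informal numerical illustration (not part of the argument), the following minimal Python sketch approximates the convolution of two integrable functions on the real line by a Riemann sum on a truncated grid, and checks that $f * g = g * f$ and that the $L^1$ norm of $f * g$ is at most the product of the $L^1$ norms, up to discretization and truncation error. The particular functions, the length of the grid, and the mesh size are arbitrary choices made only for this demonstration.

\begin{verbatim}
import numpy as np

# Grid on a truncated piece of the real line; h is the mesh size.
L, N = 20.0, 4001
x = np.linspace(-L, L, N)
h = x[1] - x[0]

f = np.exp(-np.abs(x))                     # an integrable function
g = np.where(np.abs(x) <= 1.0, 0.5, 0.0)   # another one (half the indicator of [-1, 1])

# (f * g)(x) ~ sum over y of f(x - y) g(y) h, computed for all grid points at once.
conv_fg = h * np.convolve(f, g, mode="same")
conv_gf = h * np.convolve(g, f, mode="same")

print(np.max(np.abs(conv_fg - conv_gf)))                   # commutativity: essentially 0
print(h * np.sum(np.abs(conv_fg)))                         # approximately ||f * g||_1
print((h * np.sum(np.abs(f))) * (h * np.sum(np.abs(g))))   # approximately ||f||_1 ||g||_1
\end{verbatim}

The first printed number should be essentially zero, and the second should not exceed the third, in accordance with the inequalities above.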
If $f$ is an integrable function on ${\bf R}^n$ and $g$ is bounded and measurable, then the convolution $f * g$ can be defined using (\ref{(f * g)(x) = int_{{bf R}^n} f(x - y) g(y) dy}) as before, and satisfies \begin{equation} \label{sup_{x in {bf R}^n} |(f * g)(x)| le ...} \sup_{x \in {\bf R}^n} |(f * g)(x)| \le \Big(\int_{{\bf R}^n} |f(x)| \, dx\Big) \Big(\sup_{y \in {\bf R}^n} |g(y)|\Big). \end{equation} One can also check that $f * g$ is uniformly continuous under these conditions, as follows. If $f$ is a continuous function on ${\bf R}^n$ with compact support, then $f$ is uniformly continuous, and it is easy to see that $f * g$ is uniformly continuous directly from the definitions. Otherwise, if $f$ is any integrable function on ${\bf R}^n$, then it is well known that $f$ can be approximated by continuous functions on ${\bf R}^n$ with compact support in the $L^1$ norm. This implies that $f * g$ can be approximated by uniformly continuous functions on ${\bf R}^n$ with respect to the supremum norm, and hence that $f * g$ is uniformly continuous as well. \section[\ The Fourier transform]{The Fourier transform} \label{fourier transform} \setcounter{equation}{0} If $f$ is an integrable complex-valued function on ${\bf R}^n$, then the \emph{Fourier transform} $\widehat{f}$ of $f$ is defined by \begin{equation} \label{widehat{f}(xi) = int_{{bf R}^n} f(x) exp (- i xi cdot x) dx} \widehat{f}(\xi) = \int_{{\bf R}^n} f(x) \, \exp (- i \xi \cdot x) \, dx. \end{equation} Here $\xi \in {\bf R}^n$, and $\xi \cdot x$ is the usual dot product, given by \begin{equation} \xi \cdot x = \sum_{j = 1}^n \xi_j \, x_j. \end{equation} Also, $\exp (- i \xi \cdot x)$ refers to the complex exponential function, which satisfies $|\exp (i t)| = 1$ for every $t \in {\bf R}$. Thus the integrand in (\ref{widehat{f}(xi) = int_{{bf R}^n} f(x) exp (- i xi cdot x) dx}) is an integrable function, and \begin{equation} \label{|widehat{f}(xi)| le int_{{bf R}^n} |f(x)| dx} |\widehat{f}(\xi)| \le \int_{{\bf R}^n} |f(x)| \, dx \end{equation} for every $\xi \in {\bf R}^n$. Let $R$ be a positive real number, and put $f_R(x) = f(x)$ when $|x| \le R$ and $f_R(x) = 0$ when $|x| > R$. Thus \begin{equation} \widehat{f}_R(\xi) = \int_{|x| \le R} f(x) \, \exp (- i \xi \cdot x) \, dx, \end{equation} and \begin{equation} \label{|widehat{f}(xi) - widehat{f}_R(xi)| le int_{|x| > R} |f(x)| dx} |\widehat{f}(\xi) - \widehat{f}_R(\xi)| \le \int_{|x| > R} |f(x)| \, dx \end{equation} for every $\xi \in {\bf R}^n$ and $R > 0$. In particular, $\widehat{f}_R \to \widehat{f}$ uniformly on ${\bf R}^n$ as $R \to \infty$. It is easy to see that $\widehat{f}_R(\xi)$ is uniformly continuous on ${\bf R}^n$ for each $R > 0$, using the fact that $\exp (i t)$ is uniformly continuous on the real line. It follows that $\widehat{f}(\xi)$ is also uniformly continuous on ${\bf R}^n$, since it is the uniform limit of uniformly continuous functions on ${\bf R}^n$. Now let $f$, $g$ be integrable functions on ${\bf R}^n$, so that their convolution $f * g$ is also integrable, as in the previous section. The Fourier transform of $f * g$ is given by \begin{eqnarray} \widehat{(f * g)}(\xi) & = & \int_{{\bf R}^n} (f * g)(x) \, \exp (- i \xi \cdot x) \, dx \\ & = & \int_{{\bf R}^n} \int_{{\bf R}^n} f(x - y) \, g(y) \, \exp (- i \xi \cdot x) \, dy \, dx.
\nonumber \end{eqnarray} This is the same as \begin{equation} \int_{{\bf R}^n} \int_{{\bf R}^n} f(x) \, g(y) \, \exp (- i \xi \cdot (x + y)) \, dx \, dy, \end{equation} by interchanging the order of integration and using the change of variables $x \mapsto x + y$. Because $\exp (i (r + t)) = \exp (i r) \, \exp (i t)$ for every $r, t \in {\bf R}$, this double integral reduces to \begin{equation} \Big(\int_{{\bf R}^n} f(x) \, \exp (- i \xi \cdot x) \, dx\Big) \Big(\int_{{\bf R}^n} g(y) \, \exp (- i \xi \cdot y) \, dy\Big). \end{equation} Thus \begin{equation} \widehat{(f * g)}(\xi) = \widehat{f}(\xi) \, \widehat{g}(\xi) \end{equation} for every $\xi \in {\bf R}^n$. \section[\ Holomorphic extensions]{Holomorphic extensions} \label{holomorphic extensions} \setcounter{equation}{0} Let $L^1({\bf R}^n)$ be the space of Lebesgue integrable functions on ${\bf R}^n$ equipped with the $L^1$ norm \begin{equation} \label{||f||_1 = int_{{bf R}^n} |f(x)| dx} \|f\|_1 = \int_{{\bf R}^n} |f(x)| \, dx, \end{equation} as usual. Let us say that $f \in L^1({\bf R}^n)$ has support contained in a closed set $E \subseteq {\bf R}^n$ if $f(x) = 0$ almost everywhere on ${\bf R}^n \backslash E$. The space $L^1_{com}({\bf R}^n)$ of $f \in L^1({\bf R}^n)$ with compact support is a dense linear subspace of $L^1({\bf R}^n)$ which is closed under convolution, in the sense that $f * g \in L^1_{com}({\bf R}^n)$ for every $f$, $g$ in $L^1_{com}({\bf R}^n)$. If $f \in L^1_{com}({\bf R}^n)$ is supported in a compact set $K$, then the Fourier transform $\widehat{f}(\xi)$ extends to a holomorphic function $\widehat{f}(\zeta)$ on ${\bf C}^n$, given by \begin{equation} \label{widehat{f}(zeta) = int_K f(x) exp (- i zeta cdot x) dx} \widehat{f}(\zeta) = \int_K f(x) \, \exp (- i \zeta \cdot x) \, dx. \end{equation} Here $\zeta \in {\bf C}^n$ may be expressed as $\xi + i \eta$, with $\xi, \eta \in {\bf R}^n$, and \begin{equation} \zeta \cdot x = \sum_{j = 1}^n \zeta_j \, x_j, \end{equation} as before. Thus (\ref{widehat{f}(zeta) = int_K f(x) exp (- i zeta cdot x) dx}) reduces to (\ref{widehat{f}(xi) = int_{{bf R}^n} f(x) exp (- i xi cdot x) dx}) when $\zeta = \xi \in {\bf R}^n$, and otherwise it is easy to check that $\widehat{f}(\zeta)$ is a holomorphic function on ${\bf C}^n$, since the exponential function is holomorphic. In addition, \begin{equation} \label{widehat{(f * g)}(zeta) = widehat{f}(zeta) widehat{g}(zeta)} \widehat{(f * g)}(\zeta) = \widehat{f}(\zeta) \, \widehat{g}(\zeta) \end{equation} for every $f, g \in L^1_{com}({\bf R}^n)$ and $\zeta \in {\bf C}^n$, for the same reasons as in the previous section. Let $L^1_+({\bf R})$ be the space of $f \in L^1({\bf R})$ that are supported in $[0, \infty)$, and let $L^1_-({\bf R})$ be the space of $f \in L^1({\bf R})$ that are supported in $(-\infty, 0]$. These are closed linear subspaces of $L^1({\bf R})$ that are closed under convolution, in the sense that $f * g \in L^1_+({\bf R})$ when $f, g \in L^1_+({\bf R})$, and similarly for $L^1_-({\bf R})$. Let $H_+$, $H_-$ be the upper and lower open half-planes in the complex plane, consisting of complex numbers with positive and negative imaginary parts, respectively. Thus their closures $\overline{H}_+$, $\overline{H}_-$ are the upper and lower closed half-planes in ${\bf C}$, consisting of complex numbers with imaginary part greater than or equal to $0$ and less than or equal to $0$, respectively. 
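Before continuing, here is a quick numerical sanity check of (\ref{widehat{(f * g)}(zeta) = widehat{f}(zeta) widehat{g}(zeta)}), given as a minimal Python sketch that is not part of the text. It takes $f = g$ equal to the indicator function of $[0, 1]$, which has compact support, approximates $f * g$ by a Riemann sum, and compares the value of the extended Fourier transform of $f * g$ at an arbitrarily chosen complex point $\zeta$ with the product $\widehat{f}(\zeta) \, \widehat{g}(\zeta)$. The point $\zeta$ and the grid are arbitrary choices made for the demonstration.

\begin{verbatim}
import numpy as np

zeta = 0.7 - 1.3j                  # an arbitrary complex point for the demo

x = np.linspace(0.0, 2.0, 4001)    # a grid containing the supports of f and f * f
h = x[1] - x[0]
f = np.where(x <= 1.0, 1.0, 0.0)   # the indicator function of [0, 1]

# (f * f)(x) approximated by a Riemann sum; it is supported in [0, 2].
conv = h * np.convolve(f, f)[:x.size]

# The defining integrals, evaluated at the complex point zeta.
fhat = h * np.sum(f * np.exp(-1j * zeta * x))
convhat = h * np.sum(conv * np.exp(-1j * zeta * x))

print(convhat)      # should be close to ...
print(fhat ** 2)    # ... the product of the extended transforms
\end{verbatim}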
If $f \in L^1_+({\bf R})$, then \begin{equation} \label{widehat{f}(zeta) = int_0^infty f(x) exp (- i zeta x) dx = ...} \widehat{f}(\zeta) = \int_0^\infty f(x) \, \exp (- i \zeta \, x) \, dx = \int_0^\infty f(x) \, \exp (- i \xi \, x + \eta \, x) \, dx \end{equation} is defined for all $\zeta = \xi + i \, \eta \in \overline{H}_-$. In this case, $\eta \le 0$, so that \begin{equation} |\exp (- i \xi \, x + \eta \, x)| = \exp (\eta \, x) \le 1 \end{equation} for every $x \ge 0$, and hence \begin{equation} |\widehat{f}(\zeta)| \le \|f\|_1 \end{equation} for every $\zeta \in \overline{H}_-$. As in the previous section, one can check that $\widehat{f}(\zeta)$ is uniformly continuous on $\overline{H}_-$. This uses the fact that $\exp (- i \zeta \, x)$ is uniformly continuous as a function of $\zeta$ on $\overline{H}_-$ for each $x \ge 0$, and it is easier to first consider the case where $f$ has compact support in $[0, \infty)$, and then get the same conclusion for any $f \in L^1_+({\bf R})$ by approximation. One can also check that $\widehat{f}(\zeta)$ is holomorphic on $H_-$, using the holomorphicity of the exponential function and the integrability of the expressions in (\ref{widehat{f}(zeta) = int_0^infty f(x) exp (- i zeta x) dx = ...}). If $f, g \in L^1_+({\bf R})$, then \begin{equation} \widehat{(f * g)}(\zeta) = \widehat{f}(\zeta) \, \widehat{g}(\zeta) \end{equation} for every $\zeta \in \overline{H}_-$, for the same reasons as before. In the same way, the Fourier transform of a function in $L^1_-({\bf R})$ has a natural extension to a bounded uniformly continuous function on $\overline{H}_+$ that is holomorphic on $H_+$, and with the analogous property for convolutions. Let $\epsilon = (\epsilon_1, \ldots, \epsilon_n)$ be an $n$-tuple with $\epsilon_j \in \{1, -1\}$ for each $j$, which is to say an element of $\{1, -1\}^n$. Put \begin{equation} \label{Q_{n, epsilon} = ...} Q_{n,\epsilon} = \{x \in {\bf R}^n : \epsilon_j \, x_j \ge 0 \hbox{ for } j = 1, \ldots, n\}, \end{equation} which is the closed ``quadrant'' in ${\bf R}^n$ associated to $\epsilon$. Let $L^1_\epsilon({\bf R}^n)$ be the set of $f \in L^1({\bf R}^n)$ which are supported in $Q_{n, \epsilon}$. It is easy to see that $L^1_\epsilon({\bf R}^n)$ is a closed linear subspace of $L^1({\bf R}^n)$ that is closed with respect to convolution, as before. Consider \begin{equation} \label{H_{n, epsilon} = ...} H_{n, \epsilon} = \{\zeta = \xi + i \, \eta \in {\bf C}^n : \epsilon_j \, \eta_j > 0 \hbox{ for } j = 1, \ldots, n\}, \end{equation} so that the closure $\overline{H}_{n, \epsilon}$ of $H_{n, \epsilon}$ consists of the $\zeta = \xi + i \, \eta \in {\bf C}^n$ with $\eta \in Q_{n, \epsilon}$. If $f \in L^1_\epsilon({\bf R}^n)$, then \begin{eqnarray} \label{widehat{f}(zeta), f in L^1_epsilon({bf R}^n)} \widehat{f}(\zeta) & = & \int_{Q_{n, \epsilon}} f(x) \, \exp (- i \zeta \cdot x) \, dx \\ & = & \int_{Q_{n, \epsilon}} f(x) \, \exp (- i \xi \cdot x + \eta \cdot x) \, dx \nonumber \end{eqnarray} is defined for every $\zeta = \xi + i \, \eta \in \overline{H}_{n, -\epsilon}$, where $-\epsilon = (-\epsilon_1, \ldots, -\epsilon_n)$. In this case, $\eta \cdot x \le 0$ for every $x \in Q_{n, \epsilon}$, so that \begin{equation} |\exp (- i \xi \cdot x + \eta \cdot x)| = \exp (\eta \cdot x) \le 1, \end{equation} and hence $|\widehat{f}(\zeta)| \le \|f\|_1$ for every $\zeta \in \overline{H}_{n, - \epsilon}$. As before, one can check that $\widehat{f}(\zeta)$ is uniformly continuous on $\overline{H}_{n, - \epsilon}$, and holomorphic on $H_{n, -\epsilon}$.
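The boundedness of these extensions can also be illustrated numerically. The following minimal Python sketch (an illustration only, not part of the text) takes $n = 2$, $\epsilon = (1, 1)$, and $f$ equal to the indicator function of the unit square $[0, 1]^2 \subseteq Q_{2, \epsilon}$, evaluates the defining integral at an arbitrarily chosen point $\zeta \in \overline{H}_{2, -\epsilon}$ by a two-dimensional Riemann sum, and checks that $|\widehat{f}(\zeta)| \le \|f\|_1 = 1$.

\begin{verbatim}
import numpy as np

# f = indicator of [0, 1]^2, supported in the quadrant Q_{2, epsilon}, epsilon = (1, 1).
m = 801
t = np.linspace(0.0, 1.0, m)
h = t[1] - t[0]
X, Y = np.meshgrid(t, t, indexing="ij")

# An arbitrary point zeta = xi + i eta with eta_1, eta_2 < 0, so zeta lies in H_{2, -epsilon}.
zeta1, zeta2 = 1.0 - 0.5j, -2.0 - 1.5j

integrand = np.exp(-1j * (zeta1 * X + zeta2 * Y))
fhat = (h ** 2) * np.sum(integrand)   # Riemann sum for the defining integral over [0, 1]^2

print(abs(fhat))   # should be at most ||f||_1 = 1, the area of the unit square
\end{verbatim}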
If $f, g \in L^1_\epsilon({\bf R}^n)$, then the extension of the Fourier transform of $f * g$ to $H_{n, - \epsilon}$ is equal to the product of the extensions of the Fourier transforms of $f$ and $g$ to $H_{n, - \epsilon}$, as usual. \section[\ The Riemann--Lebesgue lemma]{The Riemann--Lebesgue lemma} \label{riemann--lebesgue lemma} \setcounter{equation}{0} If $a$, $b$ are real numbers with $a < b$, then the Fourier transform of the indicator function ${\bf 1}_{[a, b]}$ of the interval $[a, b]$ in the real line is equal to \begin{equation} \label{int_a^b exp (- i xi x) dx = i xi^{-1} (exp (- i xi b) - exp (- i xi a))} \int_a^b \exp (- i \xi \, x) \, dx = i \xi^{-1} \, (\exp (- i \, \xi \, b) - \exp (- i \xi \, a)) \end{equation} when $\xi \ne 0$, and to $b - a$ when $\xi = 0$. In particular, this tends to $0$ as $|\xi| \to \infty$. If $f \in L^1({\bf R})$, then the Riemann--Lebesgue lemma states that \begin{equation} \label{lim_{|xi| to infty} widehat{f}(xi) = 0} \lim_{|\xi| \to \infty} \widehat{f}(\xi) = 0. \end{equation} This follows immediately from the remarks in the previous paragraph when $f$ is a step function, which is to say a finite linear combination of indicator functions of intervals in the real line. Otherwise, any integrable function $f$ on the real line can be approximated by step functions in the $L^1$ norm, which leads to an approximation of the Fourier transform $\widehat{f}$ of $f$ by Fourier transforms of step functions in the supremum norm, by (\ref{|widehat{f}(xi)| le int_{{bf R}^n} |f(x)| dx}). This permits one to derive (\ref{lim_{|xi| to infty} widehat{f}(xi) = 0}) for $f$ from the corresponding statement for step functions. This also works for integrable functions on ${\bf R}^n$. In this case, we can start with a rectangular box $B$ in ${\bf R}^n$, which is to say the Cartesian product of $n$ intervals in the real line. The indicator function of $B$ on ${\bf R}^n$ is the same as the product of the $n$ indicator functions of the corresponding intervals in ${\bf R}$, as functions of $x_1, \ldots, x_n$. Thus the Fourier transform of the indicator function of $B$ is the same as the product of the $n$ one-dimensional Fourier transforms of these indicator functions of intervals in ${\bf R}$, as functions of $\xi_1, \ldots, \xi_n$. This implies that the Fourier transform of the indicator function of $B$ tends to $0$ at infinity, as before. Hence the Fourier transform of any finite linear combination of indicator functions of rectangular boxes in ${\bf R}^n$ also tends to $0$ at infinity. Any integrable function $f$ on ${\bf R}^n$ can be approximated by a finite linear combination of indicator functions of rectangular boxes in the $L^1$ norm, which implies (\ref{lim_{|xi| to infty} widehat{f}(xi) = 0}) as in the one-dimensional case. As in the previous section, the Fourier transform of the indicator function ${\bf 1}_{[a, b]}$ of an interval $[a, b]$ in the real line extends to a holomorphic function on the complex plane, given by \begin{equation} \int_a^b \exp (- i \zeta \, x) \, dx = i \, \zeta^{-1} \, (\exp (- i \zeta \, b) - \exp (- i \zeta \, a)) \end{equation} when $\zeta \ne 0$, and equal to $b - a$ when $\zeta = 0$. If $a, b \ge 0$, then it is easy to see that this tends to $0$ as $|\zeta| \to \infty$ when $\zeta$ is in the closed lower half-plane $\overline{H}_-$. 
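To make these formulas concrete, here is a small Python sketch (an illustration only, not part of the text) that evaluates the closed-form expression for the Fourier transform of ${\bf 1}_{[a, b]}$ with $a = 0$ and $b = 1$, and prints its modulus along the real axis and along a ray in the closed lower half-plane; in both cases the values decay as the parameter grows, as described above. The sample points are arbitrary choices.

\begin{verbatim}
import numpy as np

a, b = 0.0, 1.0   # note that a, b >= 0

def indicator_hat(zeta):
    # closed form for the transform of the indicator of [a, b], valid for zeta != 0
    return 1j * (np.exp(-1j * zeta * b) - np.exp(-1j * zeta * a)) / zeta

# decay along the real axis (the Riemann--Lebesgue lemma for a single interval)
for xi in [10.0, 100.0, 1000.0]:
    print(xi, abs(indicator_hat(xi)))

# decay along a ray in the closed lower half-plane
for R in [10.0, 100.0, 1000.0]:
    print(R, abs(indicator_hat(R * (1.0 - 1.0j) / np.sqrt(2.0))))
\end{verbatim}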
If $f \in L^1_+({\bf R})$, so that the Fourier transform of $f$ has a natural extension $\widehat{f}(\zeta)$ to $\zeta \in \overline{H}_-$, as in the preceding section, then one can use this to show that $\widehat{f}(\zeta) \to 0$ as $|\zeta| \to \infty$ in $\overline{H}_-$, by approximating $f$ by step functions as before. Of course, there is an analogous statement for the extension to the closed upper half-plane $\overline{H}_+$ of the Fourier transform of a function in $L^1_-({\bf R})$. There is also an analogous statement for the extension to $\overline{H}_{n, -\epsilon}$ of the Fourier transform of a function in $L^1_\epsilon({\bf R}^n)$, as in the previous section. \section[\ Translation and multiplication]{Translation and multiplication} \label{translation, multiplication} \setcounter{equation}{0} If $f \in L^1({\bf R}^n)$ and $t \in {\bf R}^n$, then let $T_t(f)$ be the function on ${\bf R}^n$ obtained by translating $f$ by $t$, so that \begin{equation} \label{T_t(f)(x) = f(x - t)} T_t(f)(x) = f(x - t). \end{equation} Thus $T_t(f) \in L^1({\bf R}^n)$ too, and $\|T_t(f)\|_1 = \|f\|_1$. It is easy to see that \begin{equation} \label{widehat{(T_t(f))}(xi) = exp (- i xi cdot t) widehat{f}(xi)} \widehat{(T_t(f))}(\xi) = \exp (- i \xi \cdot t) \, \widehat{f}(\xi), \end{equation} for each $\xi \in {\bf R}^n$, using the change of variable $x \mapsto x + t$ in the definition of $\widehat{T_t(f)}$. Similarly, if $f$ has compact support in ${\bf R}^n$, then $T_t(f)$ does too, and the natural extension of the Fourier transform of $T_t(f)$ to a holomorphic function on ${\bf C}^n$ satisfies \begin{equation} \label{widehat{(T_t(f))}(zeta) = exp (- i zeta cdot t) widehat{f}(zeta)} \widehat{(T_t(f))}(\zeta) = \exp (- i \zeta \cdot t) \, \widehat{f}(\zeta) \end{equation} for each $\zeta \in {\bf C}^n$. Suppose now that $\epsilon \in \{1, -1\}^n$, and that $f \in L^1_\epsilon({\bf R}^n)$, as in Section \ref{holomorphic extensions}. Thus $f$ is supported in the ``quadrant'' $Q_{n, \epsilon}$ defined in (\ref{Q_{n, epsilon} = ...}). If $t \in Q_{n, \epsilon}$, then it is easy to see that $T_t(f)$ is supported in $Q_{n, \epsilon}$ as well, so that $T_t(f) \in L^1_\epsilon({\bf R}^n)$. As in Section \ref{holomorphic extensions}, the Fourier transforms of $f$ and $T_t(f)$ have natural extensions to $\overline{H}_{n, -\epsilon}$, which are related by the same expression (\ref{widehat{(T_t(f))}(zeta) = exp (- i zeta cdot t) widehat{f}(zeta)}) as in the previous paragraph. Note that \begin{equation} \label{|exp (- i zeta cdot t)| le 1} |\exp (- i \zeta \cdot t)| \le 1 \end{equation} for each $\zeta \in H_{n, -\epsilon}$ and $t \in Q_{n, \epsilon}$, as in Section \ref{holomorphic extensions}. If $w \in {\bf R}^n$ and $f \in L^1({\bf R}^n)$, then let $M_w(f)$ be the function on ${\bf R}^n$ defined by multiplying $f$ by $\exp (i w \cdot x)$, so that \begin{equation} \label{(M_w(f))(x) = exp (i w cdot x) f(x)} (M_w(f))(x) = \exp (i w \cdot x) \, f(x). \end{equation} Thus $M_w(f) \in L^1({\bf R}^n)$ and $\|M_w(f)\|_1 = \|f\|_1$, since $|\exp (i w \cdot x)| = 1$ for every $x, w \in {\bf R}^n$. It is easy to see that \begin{equation} \label{widehat{(M_w(f))}(xi) = widehat{f}(xi - w)} \widehat{(M_w(f))}(\xi) = \widehat{f}(\xi - w) \end{equation} for every $\xi, w \in {\bf R}^n$, directly from the definition of the Fourier transform.
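The identities (\ref{widehat{(T_t(f))}(xi) = exp (- i xi cdot t) widehat{f}(xi)}) and (\ref{widehat{(M_w(f))}(xi) = widehat{f}(xi - w)}) can be checked numerically as well; the following Python sketch (not part of the text) approximates the relevant Fourier transforms on the real line by Riemann sums on a truncated grid, for an arbitrarily chosen integrable function and arbitrary real parameters $t$, $w$, and $\xi$.

\begin{verbatim}
import numpy as np

# Numerical check of the translation and modulation identities on the real line.
L, N = 40.0, 80001
x = np.linspace(-L, L, N)
h = x[1] - x[0]

f = np.exp(-np.abs(x))      # an integrable function (arbitrary choice)
t, w, xi = 1.3, 0.7, 2.1    # arbitrary real parameters for the demo

def hat(values, freq):
    # Riemann sum for the defining integral of the Fourier transform
    return h * np.sum(values * np.exp(-1j * freq * x))

Tf = np.exp(-np.abs(x - t))         # T_t(f)(x) = f(x - t)
Mf = np.exp(1j * w * x) * f         # M_w(f)(x) = exp(i w x) f(x)

print(hat(Tf, xi), np.exp(-1j * xi * t) * hat(f, xi))   # translation identity
print(hat(Mf, xi), hat(f, xi - w))                      # modulation identity
\end{verbatim}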
If $w \in {\bf C}^n$, then we can still define $M_w(f)$ for $f \in L^1({\bf R}^n)$ by (\ref{(M_w(f))(x) = exp (i w cdot x) f(x)}), and $M_w(f)$ will be locally integrable on ${\bf R}^n$, but it may not be integrable on ${\bf R}^n$. However, if $f$ has compact support in ${\bf R}^n$, then $M_w(f)$ also has compact support in ${\bf R}^n$ for every $w \in {\bf C}^n$, and $M_w(f)$ is integrable on ${\bf R}^n$ for every $w \in {\bf C}^n$. In this case, the Fourier transform of $f$ extends to a holomorphic function on ${\bf C}^n$, as in Section \ref{holomorphic extensions}, and the Fourier transform of $M_w(f)$ is defined and extends to a holomorphic function on ${\bf C}^n$ for each $w \in {\bf C}^n$. As before, we have that \begin{equation} \label{widehat{(M_w(f))}(zeta) = widehat{f}(zeta - w)} \widehat{(M_w(f))}(\zeta) = \widehat{f}(\zeta - w) \end{equation} for every $\zeta, w \in {\bf C}^n$ when $f \in L^1_{com}({\bf R}^n)$. Let $\epsilon$ be an element of $\{1, -1\}^n$ again, and suppose that $f \in L^1_\epsilon({\bf R}^n)$. As before, $M_w(f)$ is a locally integrable function on ${\bf R}^n$ with support contained in $Q_{n, \epsilon}$ for every $w \in {\bf C}^n$. If $w \in \overline{H}_{n, \epsilon}$, then $|\exp (i w \cdot x)| \le 1$ for every $x \in Q_{n, \epsilon}$, and hence $M_w(f) \in L^1_\epsilon({\bf R}^n)$, with $\|M_w(f)\|_1 \le \|f\|_1$. As in Section \ref{holomorphic extensions}, the Fourier transforms of $f$ and $M_w(f)$ have natural extensions to $\overline{H}_{n, -\epsilon}$ under these conditions, and one can check that they are related as in (\ref{widehat{(M_w(f))}(zeta) = widehat{f}(zeta - w)}) for each $\zeta \in \overline{H}_{n, -\epsilon}$. Note that $\widehat{f}(\zeta - w)$ is defined in this case, because $-w$ and hence $\zeta - w$ is in $\overline{H}_{n, -\epsilon}$. \section[\ Some examples]{Some examples} \label{some examples} \setcounter{equation}{0} Let $a$ be a positive real number, and put \begin{equation} q_{a, +}(x) = \exp (- a \, x) \end{equation} when $x \ge 0$, and $q_{a, +}(x) = 0$ when $x < 0$. Thus $q_{a, +} \in L^1_+({\bf R})$, and so the Fourier transform of $q_{a, +}$ should have a natural extension to the closed lower half-plane in ${\bf C}$, as in Section \ref{holomorphic extensions}. More precisely, \begin{equation} \label{widehat{q_{a, +}}(zeta) = ...} \widehat{q_{a, +}}(\zeta) = \int_0^\infty \exp (- a \, x - i \zeta \, x) \, dx = \frac{-1}{- a - i \zeta} = \frac{1}{a + i \zeta} \end{equation} for every $\zeta \in \overline{H}_-$. Note that $\mathop{\rm Re} (a + i \zeta) \ge a > 0$ when $\zeta \in \overline{H}_-$ and $a > 0$. Similarly, put \begin{equation} q_{a, -}(x) = \exp (a \, x) = \exp (- a \, |x|) \end{equation} when $x \le 0$, and $q_{a, -}(x) = 0$ when $x > 0$. In this case, $q_{a, -} \in L^1_-({\bf R})$, so that the Fourier transform of $q_{a, -}$ should have a natural extension to the closed upper half-plane in ${\bf C}$, as in Section \ref{holomorphic extensions}. Indeed, \begin{equation} \label{widehat{q_{a, -}}(zeta) = ...} \widehat{q_{a, -}}(\zeta) = \int_{-\infty}^0 \exp (a \, x - i \zeta \, x) \, dx = \frac{1}{a - i \zeta} \end{equation} for every $\zeta \in \overline{H}_+$. As before, $\mathop{\rm Re} (a - i \zeta) \ge a > 0$ when $\zeta \in \overline{H}_+$ and $a > 0$. Now let $a = (a_1, \ldots, a_n)$ be an $n$-tuple of positive real numbers, and let $\epsilon$ be an element of $\{1, -1\}^n$.
Put \begin{equation} \label{q_{n, a, epsilon}(x) = ...} q_{n, a, \epsilon}(x) = \exp \Big(- \sum_{j = 1}^n a_j \, \epsilon_j \, x_j\Big) = \exp \Big(- \sum_{j = 1}^n a_j |x_j|\Big) \end{equation} when $x \in Q_{n, \epsilon}$, and $q_{n, a, \epsilon}(x) = 0$ when $x \in {\bf R}^n \backslash Q_{n, \epsilon}$. Equivalently, \begin{equation} \label{q_{n, a, epsilon}(x) = prod_{j = 1}^n q_{a_j, epsilon_j}(x_j)} q_{n, a, \epsilon}(x) = \prod_{j = 1}^n q_{a_j, \epsilon_j}(x_j), \end{equation} where the subscript $\epsilon_j$ on the right should be interpreted as $+$ when $\epsilon_j = 1$ and as $-$ when $\epsilon_j = -1$. Of course, $q_{n, a, \epsilon} \in L^1_\epsilon({\bf R}^n)$, and so its Fourier transform should have a natural extension to $\overline{H}_{n, -\epsilon}$, as usual. In fact, the Fourier transform of $q_{n, a, \epsilon}$ can be given as the product of the one-dimensional Fourier transforms of the factors $q_{a_j, \epsilon_j}$, so that \begin{equation} \widehat{q_{n, a, \epsilon}}(\zeta) = \prod_{j = 1}^n \widehat{q_{a_j, \epsilon_j}}(\zeta_j) = \prod_{j = 1}^n \frac{1}{(a_j + i \epsilon_j \, \zeta_j)} \end{equation} for each $\zeta \in \overline{H}_{n, -\epsilon}$. \section[\ Some examples, continued]{Some examples, continued} \label{some examples, continued} \setcounter{equation}{0} Let $a$ be a positive real number, and put \begin{equation} \label{p_a(x) = exp (- a |x|) = q_{a, +}(x) + q_{a, -}(x)} p_a(x) = \exp (- a \, |x|) = q_{a, +}(x) + q_{a, -}(x). \end{equation} This defines an integrable function on the real line, whose Fourier transform is given by \begin{equation} \label{widehat{p_a}(xi) = ...} \widehat{p_a}(\xi) = \widehat{q_{a, +}}(\xi) + \widehat{q_{a, -}}(\xi) = \frac{1}{a + i \xi} + \frac{1}{a - i \xi} = 2 \mathop{\rm Re} \Big(\frac{1}{a + i \xi}\Big) \end{equation} for each $\xi \in {\bf R}$. Of course, \begin{equation} \label{frac{1}{a + i xi} = ...} \frac{1}{a + i \xi} = \frac{a - i \xi}{(a + i \xi) (a - i \xi)} = \frac{a - i \xi}{a^2 + \xi^2}, \end{equation} and so (\ref{widehat{p_a}(xi) = ...}) is the same as \begin{equation} \label{widehat{p_a}(xi) = frac{2 a}{a^2 + xi^2}} \widehat{p_a}(\xi) = \frac{2 a}{a^2 + \xi^2}. \end{equation} It follows easily from (\ref{widehat{p_a}(xi) = frac{2 a}{a^2 + xi^2}}) that $\widehat{p_a}(\xi)$ is an integrable function of $\xi$ on the real line. In order to compute its integral, observe that \begin{equation} \label{int_{bf R} widehat{p_a}(xi) d xi = ...} \int_{\bf R} \widehat{p_a}(\xi) \, d\xi = \lim_{R \to \infty} \int_{-R}^R \widehat{p_a}(\xi) \, d\xi = \lim_{R \to \infty} 2 \mathop{\rm Re} \int_{-R}^R \frac{1}{a + i \xi} \, d\xi. \end{equation} Using the change of variables $\xi \mapsto R \, \xi$, we get that \begin{equation} \int_{-R}^R \frac{1}{a + i \xi} \, d\xi = \int_{-1}^1 \frac{1}{a + i \, R \, \xi} \, R \, d\xi = \int_{-1}^1 \frac{1}{a \, R^{-1} + i \xi} \, d\xi \end{equation} for each $R > 0$. Hence \begin{equation} \int_{\bf R} \widehat{p_a}(\xi) \, d\xi = \lim_{r \to 0+} 2 \mathop{\rm Re} \int_{-1}^1 \frac{1}{r + i \xi} \, d\xi. \end{equation} Let $\log z$ be the principal branch of the logarithm. Remember that this is a holomorphic function defined on the set of $z \in {\bf C}$ such that $z$ is not a real number less than or equal to $0$, which agrees with the ordinary natural logarithm of $z$ when $z$ is a positive real number, and whose derivative is equal to $1/z$.
Thus \begin{equation} \int_{-1}^1 \frac{1}{r + i \xi} \, i d\xi = \log (r + i) - \log (r - i) \end{equation} for each $r > 0$, which implies that \begin{eqnarray} 2 \mathop{\rm Re} \int_{-1}^1 \frac{1}{r + i \xi} \, d\xi & = & 2 \mathop{\rm Im} \int_{-1}^1 \frac{1}{r + i \xi} \, i d\xi \\ & = & 2 \mathop{\rm Im} (\log (r + i) - \log (r - i)). \nonumber \end{eqnarray} Taking the limit as $r \to 0+$, we get that \begin{equation} \label{int_{bf R} widehat{p_a}(xi) d xi = 2 im (log i - log (-i)) = 2 pi} \int_{\bf R} \widehat{p_a}(\xi) \, d\xi = 2 \mathop{\rm Im} (\log i - \log (-i)) = 2 \pi, \end{equation} since $\log i = (\pi/2) i$ and $\log (-i) = - (\pi/2) i$. Similarly, if $a = (a_1, \ldots, a_n)$ is an $n$-tuple of positive real numbers, then \begin{equation} p_{n, a}(x) = \prod_{j = 1}^n p_{a_j}(x_j) = \exp \Big(- \sum_{j = 1}^n a_j \, |x_j|\Big) \end{equation} is an integrable function on ${\bf R}^n$. The Fourier transform of $p_{n, a}$ is the product of the one-dimensional Fourier transforms of its factors, given by \begin{equation} \widehat{p_{n, a}}(\xi) = \prod_{j = 1}^n \widehat{p_{a_j}}(\xi_j) = \prod_{j = 1}^n \frac{2 \, a_j}{(a_j^2 + \xi_j^2)}. \end{equation} The integral of this is equal to the product of the one-dimensional integrals of its factors, so that \begin{equation} \int_{{\bf R}^n} \widehat{p_{n, a}}(\xi) \, d\xi = (2 \pi)^n. \end{equation} \section[\ The multiplication formula]{The multiplication formula} \label{multiplication formula} \setcounter{equation}{0} Let $f$, $g$ be integrable functions on ${\bf R}^n$. The \emph{multiplication formula} states that \begin{equation} \label{int_{R^n} widehat{f}(xi) g(xi) d xi = int_{R^n} f(x) widehat{g}(x) dx} \int_{{\bf R}^n} \widehat{f}(\xi) \, g(\xi) \, d\xi = \int_{{\bf R}^n} f(x) \, \widehat{g}(x) \, dx. \end{equation} Note that both sides of this equation make sense, because the Fourier transforms of $f$ and $g$ are bounded. Equivalently, (\ref{int_{R^n} widehat{f}(xi) g(xi) d xi = int_{R^n} f(x) widehat{g}(x) dx}) states that \begin{eqnarray} \lefteqn{\int_{{\bf R}^n} \Big(\int_{{\bf R}^n} f(x) \, \exp (- i \xi \cdot x) \, dx\Big) \, g(\xi) \, d\xi} \\ & = & \int_{{\bf R}^n} \Big(\int_{{\bf R}^n} g(\xi) \, \exp (- i x \cdot \xi) \, d\xi\Big) f(x) \, dx, \nonumber \end{eqnarray} which follows from Fubini's theorem. Let $h$ be an integrable function on ${\bf R}^n$, and let $w$ be an element of ${\bf R}^n$. If \begin{equation} g(\xi) = \exp (i \xi \cdot w) \, h(\xi), \end{equation} then \begin{equation} \widehat{g}(x) = \widehat{h}(x - w), \end{equation} as in Section \ref{translation, multiplication}. If $f \in L^1({\bf R}^n)$, then the multiplication formula implies that \begin{equation} \label{int_{R^n} widehat{f}(xi) exp (i xi cdot w) h(xi) = ...} \int_{{\bf R}^n} \widehat{f}(\xi) \, \exp (i \xi \cdot w) \, h(\xi) \, d\xi = \int_{{\bf R}^n} f(x) \, \widehat{h}(x - w) \, dx. \end{equation} The right side is similar to $(f * \widehat{h})(w)$, but not quite the same. If $h_1(\xi) = h(-\xi)$, then \begin{eqnarray} \label{widehat{h_1}(x) = ...} \widehat{h_1}(x) & = & \int_{{\bf R}^n} h(-\xi) \, \exp (- i \xi \cdot x) \, d\xi \\ & = & \int_{{\bf R}^n} h(\xi) \, \exp (i \xi \cdot x) \, d\xi = \widehat{h}(-x), \nonumber \end{eqnarray} using the change of variable $\xi \mapsto -\xi$. Hence \begin{equation} \label{int_{R^n} f(x) widehat{h}(x - w) dx = ... = (f * widehat{h_1})(w)} \int_{{\bf R}^n} f(x) \, \widehat{h}(x - w) \, dx = \int_{{\bf R}^n} f(x) \, \widehat{h_1}(w - x) \, dx = (f * \widehat{h_1})(w).
\end{equation} Suppose now that $h$ is an even function on ${\bf R}^n$, so that $h_1 = h$. Thus $\widehat{h}$ is even too, by (\ref{widehat{h_1}(x) = ...}). In this case, (\ref{int_{R^n} widehat{f}(xi) exp (i xi cdot w) h(xi) = ...}) reduces to \begin{equation} \label{int_{R^n} widehat{f}(xi) exp (i xi cdot w) h(xi) d xi = ..., 2} \int_{{\bf R}^n} \widehat{f}(\xi) \, \exp (i \xi \cdot w) \, h(\xi) \, d\xi = (f * \widehat{h})(w). \end{equation} \section[\ Convergence]{Convergence} \label{convergence} \setcounter{equation}{0} Let $a = (a_1, \ldots, a_n)$ be an $n$-tuple of positive real numbers, and put \begin{equation} P_{n, a}(x) = \pi^{-n} \prod_{j = 1}^n \frac{a_j}{(a_j^2 + x_j^2)}. \end{equation} Thus $P_{n, a}$ is a nonnegative integrable function on ${\bf R}^n$ that satisfies \begin{equation} \int_{{\bf R}^n} P_{n, a}(x) \, dx = 1 \end{equation} for each $a$, as in Section \ref{some examples, continued}. Let $f$ be a bounded continuous function on ${\bf R}^n$. By standard arguments, \begin{equation} \label{lim_{a to 0} (P_{n, a} * f)(x) = f(x)} \lim_{a \to 0} (P_{n, a} * f)(x) = f(x) \end{equation} for each $x \in {\bf R}^n$. Because $f$ is uniformly continuous on compact subsets of ${\bf R}^n$, one also gets uniform convergence on compact subsets of ${\bf R}^n$ in (\ref{lim_{a to 0} (P_{n, a} * f)(x) = f(x)}). If $f$ is uniformly continuous on ${\bf R}^n$, then one gets uniform convergence on ${\bf R}^n$. If $f$ is a continuous function on ${\bf R}^n$ with compact support, then $f$ is bounded and uniformly continuous in particular, so that $P_{n, a} * f \to f$ uniformly on ${\bf R}^n$ as $a \to 0$, as in the previous paragraph. In this case, it is easy to check that $P_{n, a} * f \to f$ as $a \to 0$ in the $L^1$ norm on ${\bf R}^n$ too. If $f$ is any integrable function on ${\bf R}^n$, then \begin{equation} \|P_{n, a} * f\|_1 \le \|P_{n, a}\|_1 \, \|f\|_1 = \|f\|_1 \end{equation} for each $a$, as in Section \ref{convolution on R^n}. One can also check that $P_{n, a} * f \to f$ as $a \to 0$ in the $L^1$ norm on ${\bf R}^n$, since this holds on a dense subset of $L^1({\bf R}^n)$, as in the preceding paragraph. \section[\ Inversion]{Inversion} \label{inversion} \setcounter{equation}{0} If $f$ is an integrable function on ${\bf R}^n$, then \begin{equation} \label{inversion formula} \int_{{\bf R}^n} \widehat{f}(\xi) \, \exp (i \xi \cdot w) \, \exp \Big(- \sum_{j = 1}^n a_j \, |\xi_j|\Big) \, d\xi = (2 \pi)^n \, (P_{n, a} * f)(w) \end{equation} for every $n$-tuple $a = (a_1, \ldots, a_n)$ of positive real numbers and $w \in {\bf R}^n$. This follows from (\ref{int_{R^n} widehat{f}(xi) exp (i xi cdot w) h(xi) d xi = ..., 2}), with $h$ equal to $p_{n, a}$ from Section \ref{some examples, continued}. This also uses the fact that $p_{n, a}$ is even and satisfies \begin{equation} \label{widehat{p}_{n, a} = (2 pi)^n P_{n, a}} \widehat{p}_{n, a} = (2 \pi)^n \, P_{n, a}, \end{equation} where $P_{n, a}$ is as in the previous section. If $\widehat{f}$ is also integrable on ${\bf R}^n$, then it is easy to see that \begin{eqnarray} \lefteqn{\lim_{a \to 0} \int_{{\bf R}^n} \widehat{f}(\xi) \, \exp (i \xi \cdot w) \, \exp \Big(- \sum_{j = 1}^n a_j \, |\xi_j|\Big) \, d\xi} \\ & = & \int_{{\bf R}^n} \widehat{f}(\xi) \, \exp (i \xi \cdot w) \, d\xi \nonumber \end{eqnarray} for every $w \in {\bf R}^n$. 
More precisely, \begin{equation} \label{widehat{f}(xi) exp (- sum_{j = 1}^n a_j |xi_j|) to widehat{f}(xi)} \widehat{f}(\xi) \, \exp \Big(- \sum_{j = 1}^n a_j \, |\xi_j|\Big) \to \widehat{f}(\xi) \end{equation} as $a \to 0$ in the $L^1$ norm on ${\bf R}^n$, so that one has uniform convergence in $w$ in the previous statement. This can be derived from the dominated convergence theorem, but one can also use the same type of argument a bit more directly. The main points are that \begin{equation} \exp \Big(- \sum_{j = 1}^n a_j \, |\xi_j|\Big) \le 1 \end{equation} for every $a$ and $\xi$, and that \begin{equation} \exp \Big(- \sum_{j = 1}^n a_j \, |\xi_j|\Big) \to 1 \end{equation} as $a \to 0$ uniformly on compact subsets of ${\bf R}^n$. It follows that \begin{equation} \label{int_{R^n} widehat{f}(xi) exp (i xi cdot w) d xi = (2 pi)^n f(w)} \int_{{\bf R}^n} \widehat{f}(\xi) \, \exp (i \xi \cdot w) \, d\xi = (2 \pi)^n \, f(w) \end{equation} for almost every $w \in {\bf R}^n$ when $f$ and $\widehat{f}$ are integrable functions on ${\bf R}^n$, since $P_{n, a} * f \to f$ in $L^1({\bf R}^n)$ as $a \to 0$, as in the preceding section. In particular, $f = 0$ almost everywhere on ${\bf R}^n$ when $\widehat{f} = 0$. \section[\ Measures on ${\bf T}^n$]{Measures on ${\bf T}^n$} \label{measures on T^n} \setcounter{equation}{0} There are two basic ways to think about Borel measures on ${\bf T}^n$. The first is as countably-additive real or complex-valued functions on the $\sigma$-algebra of Borel subsets of ${\bf T}^n$. The second way is to look at continuous linear functionals on the space $C({\bf T}^n)$ of continuous real or complex-valued functions on ${\bf T}^n$, with respect to the supremum norm on $C({\bf T}^n)$. If $\mu$ is a countably-additive real or complex Borel measure on ${\bf T}^n$, then there is a finite nonnegative Borel measure $|\mu|$ on ${\bf T}^n$ associated to it, known as the total variation measure corresponding to $\mu$. This is characterized by the fact that \begin{equation} \label{|mu(E)| le |mu|(E)} |\mu(E)| \le |\mu|(E) \end{equation} for every Borel set $E \subseteq {\bf T}^n$, and that $|\mu|$ is the smallest nonnegative Borel measure on ${\bf T}^n$ with this property. More precisely, if $\nu$ is a nonnegative Borel measure on ${\bf T}^n$ such that $|\mu(E)| \le \nu(E)$ for every Borel set $E \subseteq {\bf T}^n$, then $|\mu|(E) \le \nu(E)$ for every Borel set $E \subseteq {\bf T}^n$. If $f$ is a real or complex-valued Borel measurable function on ${\bf T}^n$ which is integrable with respect to $|\mu|$, then the integral of $f$ with respect to $\mu$ can also be defined, and satisfies \begin{equation} \label{|int_{{bf T}^n} f d mu| le int_{{bf T}^n} |f| d |mu|} \biggl|\int_{{\bf T}^n} f \, d\mu\biggr| \le \int_{{\bf T}^n} |f| \, d|\mu|. \end{equation} In particular, this applies to any bounded Borel measurable function $f$ on ${\bf T}^n$, in which case we get that \begin{equation} \label{|int_{{bf T}^n} f d mu| le (sup_{z in {bf T}^n} |f(z)|) |mu|({bf T}^n)} \biggl|\int_{{\bf T}^n} f \, d\mu\biggr| \le \Big(\sup_{z \in {\bf T}^n} |f(z)|\Big) \, |\mu|({\bf T}^n). \end{equation} Continuous functions on ${\bf T}^n$ are obviously Borel measurable, so that \begin{equation} \label{lambda_mu(f) = int_{{bf T}^n} f d mu} \lambda_\mu(f) = \int_{{\bf T}^n} f \, d\mu \end{equation} defines a bounded linear functional on $C({\bf T}^n)$, with dual norm less than or equal to $|\mu|({\bf T}^n)$ with respect to the supremum norm on $C({\bf T}^n)$.
Conversely, a version of the Riesz representation theorem states that every continuous linear functional $\lambda$ on $C({\bf T}^n)$ can be expressed as (\ref{lambda_mu(f) = int_{{bf T}^n} f d mu}) for a unique Borel measure $\mu$ on ${\bf T}^n$. The dual norm of $\lambda$ with respect to the supremum norm on $C({\bf T}^n)$ is also equal to $|\mu|({\bf T}^n)$. Normally one asks that $\mu$ be Borel regular, which means by definition that $|\mu|$ is Borel regular, but this is automatic in this case, because open subsets of ${\bf T}^n$ are $\sigma$-compact. An important advantage of looking at measures on ${\bf T}^n$ in terms of continuous linear functionals on $C({\bf T}^n)$ is that we can use the weak$^*$ topology on the dual of $C({\bf T}^n)$, as in Section \ref{weak^* topology}. \section[\ Convolution of measures]{Convolution of measures} \label{convolution of measures} \setcounter{equation}{0} Let $\mu$, $\nu$ be real or complex Borel measures on ${\bf T}^n$. Their convolution $\mu * \nu$ may be defined as the Borel measure on ${\bf T}^n$ given by \begin{equation} \label{(mu * nu)(E) = ...} (\mu * \nu)(E) = (\mu \times \nu)(\{(z, w) \in {\bf T}^n \times {\bf T}^n : z \diamond w \in E\}). \end{equation} Here $z \diamond w = (z_1 \, w_1, \ldots, z_n \, w_n)$, as in Section \ref{convolution on T^n}, and $\mu \times \nu$ is the product measure on ${\bf T}^n \times {\bf T}^n$ associated to $\mu$, $\nu$. Note that \begin{equation} \label{{(z, w) in {bf T}^n times {bf T}^n : z diamond w in E}} \{(z, w) \in {\bf T}^n \times {\bf T}^n : z \diamond w \in E\} \end{equation} is a relatively open set in ${\bf T}^n \times {\bf T}^n$ when $E$ is a relatively open set in ${\bf T}^n$, because $(z, w) \mapsto z \diamond w$ is continuous as a mapping from ${\bf T}^n \times {\bf T}^n$ into ${\bf T}^n$. This implies that (\ref{{(z, w) in {bf T}^n times {bf T}^n : z diamond w in E}}) is a Borel set in ${\bf T}^n \times {\bf T}^n$ when $E$ is a Borel set in ${\bf T}^n$. Equivalently, if $f$ is a bounded Borel measurable function on ${\bf T}^n$, then \begin{equation} \label{int_{{bf T}^n} f d(mu * nu) = ...} \int_{{\bf T}^n} f \, d(\mu * \nu) = \int_{{\bf T}^n \times {\bf T}^n} f(z \diamond w) \, d(\mu \times \nu)(z, w). \end{equation} It is easy to see that \begin{equation} \mu * \nu = \nu * \mu, \end{equation} and that \begin{equation} (\mu * \nu) * \rho = \mu * (\nu * \rho) \end{equation} for any three Borel measures $\mu$, $\nu$, and $\rho$ on ${\bf T}^n$. Observe that \begin{equation} |(\mu * \nu)(E)| \le (|\mu| * |\nu|)(E) \end{equation} for every Borel set $E \subseteq {\bf T}^n$, and hence \begin{equation} |\mu * \nu|(E) \le (|\mu| * |\nu|)(E). \end{equation} This implies that \begin{equation} |\mu * \nu|({\bf T}^n) \le (|\mu| * |\nu|)({\bf T}^n) = |\mu|({\bf T}^n) \, |\nu|({\bf T}^n). \end{equation} Of course, $\|\mu\| = |\mu|({\bf T}^n)$ is a natural norm on the space of Borel measures on ${\bf T}^n$, also known as the total variation of $\mu$. If one looks at measures on ${\bf T}^n$ in terms of continuous linear functionals on $C({\bf T}^n)$, then convolution can be defined more directly, basically using (\ref{int_{{bf T}^n} f d(mu * nu) = ...}). To do this, the product $\lambda_1 \times \lambda_2$ of two continuous linear functionals $\lambda_1$, $\lambda_2$ on $C({\bf T}^n)$ should first be defined as a continuous linear functional on $C({\bf T}^n \times {\bf T}^n)$. This is not too difficult to do, but there are some details to be checked. 
If $f(z, w)$ is a continuous function on ${\bf T}^n \times {\bf T}^n$, then one can apply $\lambda_1$ to $f(z, w)$ as a function of $z$ for each $w \in {\bf T}^n$, to get a function of $w$ on ${\bf T}^n$. It is easy to see that this is a continuous function of $w$, using the fact that $f(z, w)$ is uniformly continuous on ${\bf T}^n \times {\bf T}^n$, because ${\bf T}^n$ and hence ${\bf T}^n \times {\bf T}^n$ is compact, and using the continuity of $\lambda_1$ on $C({\bf T}^n)$. Thus one can apply $\lambda_2$ to the resulting function of $w$, to get a real or complex number, as appropriate. This defines $\lambda_1 \times \lambda_2$ as a linear functional on $C({\bf T}^n \times {\bf T}^n)$. By construction, \begin{equation} \label{|(lambda_1 times lambda_2)(f)| le ...} |(\lambda_1 \times \lambda_2)(f)| \le \|\lambda_1\|_* \, \|\lambda_2\|_* \, \Big(\sup_{z, w \in {\bf T}^n} |f(z, w)|\Big), \end{equation} where $\|\lambda_1\|_*$, $\|\lambda_2\|_*$ are the dual norms of $\lambda_1$, $\lambda_2$ with respect to the supremum norm on $C({\bf T}^n)$. This shows that $\lambda_1 \times \lambda_2$ is continuous with respect to the supremum norm on $C({\bf T}^n \times {\bf T}^n)$, with the dual norm less than or equal to $\|\lambda_1\|_* \, \|\lambda_2\|_*$. If $f(z, w) = f_1(z) \, f_2(w)$ for some continuous functions $f_1$, $f_2$ on ${\bf T}^n$, then it follows directly from the definition of $\lambda_1 \times \lambda_2$ that \begin{equation} \label{(lambda_1 times lambda_2)(f) = lambda_1(f_1) lambda_2(f_2)} (\lambda_1 \times \lambda_2)(f) = \lambda_1(f_1) \, \lambda_2(f_2). \end{equation} This implies that the dual norm of $\lambda_1 \times \lambda_2$ on $C({\bf T}^n \times {\bf T}^n)$ is equal to $\|\lambda_1\|_* \, \|\lambda_2\|_*$. Every continuous function on ${\bf T}^n \times {\bf T}^n$ can be approximated uniformly by a finite sum of products of continuous functions of $z$ and $w$ on ${\bf T}^n$, and hence $\lambda_1 \times \lambda_2$ may be characterized as the unique continuous linear functional on $C({\bf T}^n \times {\bf T}^n)$ that satisfies (\ref{(lambda_1 times lambda_2)(f) = lambda_1(f_1) lambda_2(f_2)}) for all $f_1, f_2 \in C({\bf T}^n)$. In particular, suppose that $\lambda_1 \times \lambda_2$ was defined instead by first applying $\lambda_2$ to a continuous function $f(z, w)$ on ${\bf T}^n \times {\bf T}^n$ as a function of $w$ for each $z \in {\bf T}^n$, and then applying $\lambda_1$ to the resulting function of $z$. This would also determine a continuous linear functional on $C({\bf T}^n \times {\bf T}^n)$ that satisfies (\ref{(lambda_1 times lambda_2)(f) = lambda_1(f_1) lambda_2(f_2)}), and which would therefore be equivalent to the previous definition of $\lambda_1 \times \lambda_2$. \section[\ Functions and measures]{Functions and measures} \label{functions, measures} \setcounter{equation}{0} If $g$ is a real or complex-valued function on ${\bf T}^n$ which is integrable with respect to Lebesgue measure, then \begin{equation} \label{mu_g(E) = frac{1}{(2 pi)^n} int_E g(z) |dz|} \mu_g(E) = \frac{1}{(2 \pi)^n} \int_E g(z) \, |dz| \end{equation} defines a Borel measure on ${\bf T}^n$. As usual, \begin{equation} \label{int_{{bf T}^n} f d mu_g = frac{1}{(2 pi)^n} int_{T^n} f(z) g(z) |dz|} \int_{{\bf T}^n} f \, d\mu_g = \frac{1}{(2 \pi)^n} \int_{{\bf T}^n} f(z) \, g(z) \, |dz| \end{equation} for every bounded measurable function $f$ on ${\bf T}^n$.
It is also well known that $|\mu_g| = \mu_{|g|}$, and hence \begin{equation} \label{||mu_g|| = |mu_g|({bf T}^n) = frac{1}{(2 pi)^n} int_{T^n} |g(z)| |dz|} \|\mu_g\| = |\mu_g|({\bf T}^n) = \frac{1}{(2 \pi)^n} \int_{{\bf T}^n} |g(z)| \, |dz|. \end{equation} If $h$ is another Lebesgue integrable function on ${\bf T}^n$, then the convolution $g * h$ is also defined as a Lebesgue integrable function on ${\bf T}^n$, as in Section \ref{convolution on T^n}. It is not difficult to check that this is compatible with the definition of convolution of measures in the previous section, in the sense that \begin{equation} \label{mu_g * mu_h = mu_{g * h}} \mu_g * \mu_h = \mu_{g * h}. \end{equation} If $\nu$ is a real or complex Borel measure on ${\bf T}^n$, then the convolution of $\mu_g$ and $\nu$ can be defined as a measure on ${\bf T}^n$ as in the previous section. Alternatively, $g * \nu$ can be defined as a Lebesgue integrable function on ${\bf T}^n$ by \begin{equation} \label{(g * nu)(z) = int_{{bf T}^n} g(z diamond w^{-1}) d nu(w)} (g * \nu)(z) = \int_{{\bf T}^n} g(z \diamond w^{-1}) \, d\nu(w), \end{equation} where $w^{-1} = (w_1^{-1}, \ldots, w_n^{-1})$, as before. The existence of this integral for almost every $z \in {\bf T}^n$ with respect to Lebesgue measure uses Fubini's theorem, as in Section \ref{convolution on T^n}. More precisely, if $g$ and $\nu$ are nonnegative and real-valued, then Fubini's theorem implies that \begin{equation} \label{frac{1}{(2 pi)^n} int_{{bf T}^n} (g * nu)(z) |dz| = ...} \frac{1}{(2 \pi)^n} \int_{{\bf T}^n} (g * \nu)(z) \, |dz| = \Big(\frac{1}{(2 \pi)^n} \int_{{\bf T}^n} g(z) \, |dz|\Big) \, \nu({\bf T}^n). \end{equation} In particular, $(g * \nu) (z) < \infty$ for almost every $z \in {\bf T}^n$ with respect to Lebesgue measure. Otherwise, if $g$ and $\nu$ are real or complex-valued, then one can apply this to $|g|$ and $|\nu|$. This implies that the integral in (\ref{(g * nu)(z) = int_{{bf T}^n} g(z diamond w^{-1}) d nu(w)}) makes sense for almost every $z \in {\bf T}^n$ with respect to Lebesgue measure, and that \begin{equation} \label{frac{1}{(2 pi)^n} int_{{bf T}^n} |(g * nu)(z)| |dz| le ...} \frac{1}{(2 \pi)^n} \int_{{\bf T}^n} |(g * \nu)(z)| \, |dz| \le \Big(\frac{1}{(2 \pi)^n} \int_{{\bf T}^n} |g(z)| \, |dz|\Big) \, |\nu|({\bf T}^n). \end{equation} Of course, if $\nu = \mu_h$ for some Lebesgue integrable function $h$ on ${\bf T}^n$, then this definition of $g * \nu$ reduces to the earlier definition of $g * h$. Similarly, if $\nu$ is any Borel measure on ${\bf T}^n$, then this definition of $g * \nu$ is compatible with the definition of convolution of measures in the previous section, in the sense that \begin{equation} \mu_g * \nu = \mu_{g * \nu}. \end{equation} If $g$ is continuous on ${\bf T}^n$, and hence uniformly continuous, then it is easy to see that $g * \nu$ also defines a continuous function on ${\bf T}^n$. In this case, we also have that \begin{equation} \sup_{z \in {\bf T}^n} |(g * \nu)(z)| \le \Big(\sup_{z \in {\bf T}^n} |g(z)|\Big) \, |\nu|({\bf T}^n). \end{equation} \section[\ Fourier coefficients]{Fourier coefficients} \label{fourier coefficients} \setcounter{equation}{0} Let $\mu$ be a complex Borel measure on ${\bf T}^n$. If $\alpha \in {\bf Z}^n$, then the corresponding Fourier coefficient of $\mu$ is defined by \begin{equation} \label{widehat{mu}(alpha) = int_{{bf T}^n} z^{-alpha} d mu(z)} \widehat{\mu}(\alpha) = \int_{{\bf T}^n} z^{-\alpha} \, d\mu(z). 
\end{equation} This reduces to the earlier definition of the Fourier coefficients of a Lebesgue integrable function $g$ on ${\bf T}^n$ when $\mu = \mu_g$, as in the previous section. The Fourier coefficients of any complex Borel measure $\mu$ on ${\bf T}^n$ are bounded, with \begin{equation} \label{|widehat{mu}(alpha)| le |mu|({bf T}^n)} |\widehat{\mu}(\alpha)| \le |\mu|({\bf T}^n) \end{equation} for each $\alpha \in {\bf Z}^n$. If $\nu$ is another complex Borel measure on ${\bf T}^n$, then it is easy to see that \begin{equation} \label{widehat{(mu * nu)}(alpha) = widehat{mu}(alpha) widehat{nu}(alpha)} \widehat{(\mu * \nu)}(\alpha) = \widehat{\mu}(\alpha) \, \widehat{\nu}(\alpha) \end{equation} for every $\alpha \in {\bf Z}^n$. Let $U^n$ be the open unit polydisk in ${\bf C}^n$, and let $\widetilde{z}^\alpha$ be defined for $\alpha \in {\bf Z}^n$ and $z \in {\bf C}^n$ as in Section \ref{multiple fourier series}. If $\mu$ is a complex Borel measure on ${\bf T}^n$ and $z \in U^n$, then put \begin{equation} \label{phi_mu(z) = sum_{alpha in Z^n} widehat{mu}(alpha) widetilde{z}^alpha} \phi_\mu(z) = \sum_{\alpha \in {\bf Z}^n} \widehat{\mu}(\alpha) \, \widetilde{z}^\alpha. \end{equation} As in Section \ref{multiple fourier series}, the sum converges absolutely for every $z \in U^n$, because of the boundedness of the Fourier coefficients of $\mu$. This can also be expressed as \begin{equation} \label{phi_mu(z) = (2 pi)^n int_{{bf T}^n} P_n(z, w) d mu(w)} \phi_\mu(z) = (2 \pi)^n \int_{{\bf T}^n} P_n(z, w) \, d\mu(w), \end{equation} where $P_n(z, w)$ is the $n$-dimensional Poisson kernel, discussed in Section \ref{multiple fourier series}. More precisely, \begin{equation} \label{(2 pi)^n P_n(z, w) = sum_{alpha in Z^n} widetilde{z}^alpha w^{-alpha}} (2 \pi)^n \, P_n(z, w) = \sum_{\alpha \in {\bf Z}^n} \widetilde{z}^\alpha \, w^{-\alpha} \end{equation} for each $z \in U^n$ and $w \in {\bf T}^n$. This sum can be approximated uniformly by finite subsums as a function of $w \in {\bf T}^n$ for each $z \in U^n$, which permits one to interchange the order of summation and integration in (\ref{phi_mu(z) = sum_{alpha in Z^n} widehat{mu}(alpha) widetilde{z}^alpha}) to get (\ref{phi_mu(z) = (2 pi)^n int_{{bf T}^n} P_n(z, w) d mu(w)}). Of course, the extra factor of $(2 \pi)^n$ here simply comes from slightly different normalizations being used. If $r \in [0, 1)^n$ and $z, w \in {\bf T}^n$, then $r \diamond z \in U^n$, and \begin{equation} (2 \pi)^n \, P_n(r \diamond z, w) = (2 \pi)^n \, P_n(r, w \diamond z^{-1}). \end{equation} Put \begin{equation} \rho_{n, r}(w) = (2 \pi)^n \, P_n(r, w) \end{equation} for each $r \in [0, 1)^n$ and $w \in {\bf T}^n$. It is easy to see that \begin{equation} \label{rho_{n, r}(w^{-1}) = rho_{n, r}(w)} \rho_{n, r}(w^{-1}) = \rho_{n, r}(w), \end{equation} using the change of variables $\alpha \mapsto - \alpha$ in (\ref{(2 pi)^n P_n(z, w) = sum_{alpha in Z^n} widetilde{z}^alpha w^{-alpha}}). It follows that \begin{equation} \label{phi_mu(r diamond z) = (rho_{n, r} * mu)(z)} \phi_\mu(r \diamond z) = (\rho_{n, r} * \mu)(z) \end{equation} for every $r \in [0, 1)^n$ and $z \in {\bf T}^n$, by (\ref{phi_mu(z) = (2 pi)^n int_{{bf T}^n} P_n(z, w) d mu(w)}). Note that $\rho_{n, r}(w) \ge 0$ and \begin{equation} \label{frac{1}{(2 pi)^n} int_{{bf T}^n} rho_{n, r}(w) |dw| = 1} \frac{1}{(2 \pi)^n} \int_{{\bf T}^n} \rho_{n, r}(w) \, |dw| = 1 \end{equation} for each $r \in [0, 1)^n$, by the corresponding properties of the Poisson kernel.
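As a small numerical illustration (not part of the text), one can truncate the series for $\rho_{1, r}$ in one variable and check the nonnegativity and the normalization (\ref{frac{1}{(2 pi)^n} int_{{bf T}^n} rho_{n, r}(w) |dw| = 1}) just stated. The Python sketch below assumes the convention that $\widetilde{z}^\alpha$ reduces to $r^{|\alpha|}$ when $n = 1$ and $z = r$ is a real number in $[0, 1)$; the value of $r$, the truncation order, and the sampling of the circle are arbitrary choices.

\begin{verbatim}
import numpy as np

# Truncated series for rho_{1, r}(w) = sum over k of r^{|k|} w^{-k}, with w on the unit circle.
r, K, M = 0.8, 200, 4000
theta = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
w = np.exp(1j * theta)

rho = np.zeros(M, dtype=complex)
for k in range(-K, K + 1):
    rho += (r ** abs(k)) * w ** (-k)

print(np.min(rho.real), np.max(np.abs(rho.imag)))   # real and nonnegative, up to truncation
print(np.mean(rho.real))                            # (1 / 2 pi) times the integral, close to 1
\end{verbatim}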
If $\mu = \mu_g$ for some continuous function $g$ on ${\bf T}^n$, then \begin{equation} \label{phi_mu(r diamond z) to g(z)} \phi_\mu(r \diamond z) \to g(z) \end{equation} as $r \to (1, \ldots, 1)$ for each $z \in {\bf T}^n$, as in previous discussions of Poisson integrals. As usual, the convergence is also uniform over $z \in {\bf T}^n$, because $g$ is uniformly continuous on ${\bf T}^n$. If $\mu = \mu_g$ for a Lebesgue integrable function $g$ on ${\bf T}^n$, then one can show that there is convergence in the $L^1$ norm on ${\bf T}^n$. More precisely, this follows by approximating $g$ by continuous functions on ${\bf T}^n$ in the $L^1$ norm, and using uniform bounds for the $L^1$ norm of $\phi_\mu(r \diamond z)$ as a function of $z \in {\bf T}^n$ over $r \in [0, 1)^n$. If $\mu$ is any complex Borel measure on ${\bf T}^n$ and $f$ is a continuous function on ${\bf T}^n$, then \begin{equation} \frac{1}{(2 \pi)^n} \int_{{\bf T}^n} \phi_\mu(r \diamond z) \, f(z) \, |dz| = \int_{{\bf T}^n} \rho_{n, r} * f \, d\mu, \end{equation} by Fubini's theorem and (\ref{rho_{n, r}(w^{-1}) = rho_{n, r}(w)}). Hence \begin{equation} \frac{1}{(2 \pi)^n} \int_{{\bf T}^n} \phi_\mu(r \diamond z) \, f(z) \, |dz| \to \int_{{\bf T}^n} f \, d\mu \end{equation} as $r \to (1, \ldots, 1)$, because $\rho_{n, r} * f \to f$ uniformly on ${\bf T}^n$ as $r \to (1, \ldots, 1)$, as before. This says that the measure on ${\bf T}^n$ associated to $\phi_\mu(r \diamond z)$ as in the preceding section converges to $\mu$ in the weak$^*$ topology on the dual of $C({\bf T}^n)$ as $r \to (1, \ldots, 1)$, when we identify Borel measures on ${\bf T}^n$ with continuous linear functionals on $C({\bf T}^n)$. \section[\ Measures on ${\bf R}^n$]{Measures on ${\bf R}^n$} \label{measures on R^n} \setcounter{equation}{0} Let $\mu$ be a real or complex Borel measure on ${\bf R}^n$, which is to say a countably-additive real or complex valued function on the $\sigma$-algebra of Borel sets in ${\bf R}^n$. As before, there is a finite nonnegative Borel measure $|\mu|$ on ${\bf R}^n$ associated to $\mu$ such that \begin{equation} |\mu(E)| \le |\mu|(E) \end{equation} for every Borel set $E \subseteq {\bf R}^n$, and which is less than or equal to every other nonnegative Borel measure on ${\bf R}^n$ with this property. If $f$ is a Borel measurable function on ${\bf R}^n$ which is integrable with respect to $|\mu|$, then the integral of $f$ with respect to $\mu$ can also be defined, and satisfies \begin{equation} \label{|int_{{bf R}^n} f d mu| le int_{{bf R}^n} |f| d |mu|} \biggl|\int_{{\bf R}^n} f \, d\mu\biggr| \le \int_{{\bf R}^n} |f| \, d|\mu|. \end{equation} In particular, this works when $f$ is a bounded Borel measurable function on ${\bf R}^n$, for which we have that \begin{equation} \label{|int_{{bf R}^n} f d mu| le (sup_{x in {bf R}^n} |f(x)|) |mu|({bf R}^n)} \biggl|\int_{{\bf R}^n} f \, d\mu\biggr| \le \Big(\sup_{x \in {\bf R}^n} |f(x)|\Big) \, |\mu|({\bf R}^n). \end{equation} Of course, continuous functions on ${\bf R}^n$ are Borel measurable, and so \begin{equation} \label{lambda_mu(f) = int_{{bf R}^n} f d mu} \lambda_\mu(f) = \int_{{\bf R}^n} f \, d\mu \end{equation} defines a bounded linear functional on the space $C_b({\bf R}^n)$ of bounded continuous functions on ${\bf R}^n$ with respect to the supremum norm, with dual norm less than or equal to $|\mu|({\bf R}^n)$. 
The restriction of $\lambda_\mu$ to the space $C_0({\bf R}^n)$ of continuous functions on ${\bf R}^n$ that vanish at infinity is also bounded with respect to the supremum norm, with dual norm less than or equal to $|\mu|({\bf R}^n)$. Conversely, a version of the Riesz representation theorem states that every bounded linear functional $\lambda$ on $C_0({\bf R}^n)$ corresponds to a unique Borel measure $\mu$ in this way, where the dual norm of $\lambda$ with respect to the supremum norm on $C_0({\bf R}^n)$ is equal to $\|\mu\| = |\mu|({\bf R}^n)$. Normally one also asks $\mu$ to satisfy some additional regularity conditions, but these hold automatically on ${\bf R}^n$, since open sets in ${\bf R}^n$ are $\sigma$-compact. A small part of this theorem implies that a bounded linear functional $\lambda$ on $C_0({\bf R}^n)$ has a natural extension to $C_b({\bf R}^n)$. This extension is characterized by the following additional continuity condition, which is a mild version of the dominated convergence theorem. Namely, if $\{f_j\}_{j = 1}^\infty$ is a sequence of bounded continuous functions on ${\bf R}^n$ that are uniformly bounded on ${\bf R}^n$ and converge uniformly on compact subsets to a function $f$ on ${\bf R}^n$, then $\{\lambda(f_j)\}_{j = 1}^\infty$ converges to $\lambda(f)$. Note that $f$ is bounded and continuous under these conditions, and that any bounded continuous function on ${\bf R}^n$ is the limit of a uniformly bounded sequence of continuous functions with compact support on ${\bf R}^n$ that converges uniformly on compact subsets of ${\bf R}^n$. Hence the extension of $\lambda$ to $C_b({\bf R}^n)$ is uniquely determined by $\lambda$ on $C_0({\bf R}^n)$ when the extension satisfies this additional continuity condition. If $\lambda$ is a bounded linear functional on $C_0({\bf R}^n)$ with compact support, so that $\lambda(f)$ only depends on the restriction of $f$ to a compact set in ${\bf R}^n$, then this extension of $\lambda$ to $C_b({\bf R}^n)$ is basically trivial. Otherwise, it is not too difficult to show that a bounded linear functional $\lambda$ on $C_0({\bf R}^n)$ can be approximated by bounded linear functionals on $C_0({\bf R}^n)$ with compact support with respect to the dual norm. One can then use this approximation to show more directly that $\lambda$ can be extended to a bounded linear functional on $C_b({\bf R}^n)$ that satisfies the additional continuity condition mentioned in the previous paragraph. Note that the dual norm of the extension of $\lambda$ to $C_b({\bf R}^n)$ with respect to the supremum norm on $C_b({\bf R}^n)$ is equal to the dual norm of $\lambda$ on $C_0({\bf R}^n)$. \section[\ Convolution of measures, continued]{Convolution of measures, continued} \label{convolution of measures, continued} \setcounter{equation}{0} If $\mu$, $\nu$ are real or complex Borel measures on ${\bf R}^n$, then their convolution $\mu * \nu$ may be defined as a Borel measure on ${\bf R}^n$ by \begin{equation} \label{(mu * nu)(E) = (mu times nu)({(x, y) in R^n times R^n : x + y in E}} (\mu * \nu)(E) = (\mu \times \nu)(\{(x, y) \in {\bf R}^n \times {\bf R}^n : x + y \in E\}), \end{equation} where $\mu \times \nu$ is the product measure on ${\bf R}^n \times {\bf R}^n$ corresponding to $\mu$, $\nu$.
Note that \begin{equation} \label{{(x, y) in {bf R}^n times {bf R}^n : x + y in E}} \{(x, y) \in {\bf R}^n \times {\bf R}^n : x + y \in E\} \end{equation} is an open set in ${\bf R}^n \times {\bf R}^n$ for every open set $E \subseteq {\bf R}^n$, by continuity of addition, which implies that (\ref{{(x, y) in {bf R}^n times {bf R}^n : x + y in E}}) is a Borel set in ${\bf R}^n \times {\bf R}^n$ when $E$ is a Borel set in ${\bf R}^n$. If $f$ is a bounded Borel measurable function on ${\bf R}^n$, then we get that \begin{equation} \label{int_{{bf R}^n} f d(mu * nu) = ...} \int_{{\bf R}^n} f \, d(\mu * \nu) = \int_{{\bf R}^n \times {\bf R}^n} f(x + y) \, d (\mu \times \nu)(x, y). \end{equation} As usual, \begin{equation} \label{nu * mu = mu * nu and (mu * nu) * rho = mu * (nu * rho)} \nu * \mu = \mu * \nu \quad\hbox{and}\quad (\mu * \nu) * \rho = \mu * (\nu * \rho) \end{equation} for any Borel measures $\mu$, $\nu$ and $\rho$ on ${\bf R}^n$. As before, \begin{equation} |(\mu * \nu)(E)| \le (|\mu| * |\nu|)(E) \end{equation} for any Borel set $E$ in ${\bf R}^n$. This implies that \begin{equation} |\mu * \nu|(E) \le (|\mu| * |\nu|)(E) \end{equation} for every Borel set $E \subseteq {\bf R}^n$. In particular, \begin{equation} \|\mu * \nu\| \le \|\mu\| \, \|\nu\|, \end{equation} where $\|\mu\| = |\mu|({\bf R}^n)$, as in the previous section. One can also look at convolution in terms of bounded linear functionals on spaces of continuous functions, as in Section \ref{convolution of measures}. If $\lambda_1$, $\lambda_2$ are bounded linear functionals on $C_0({\bf R}^n)$, then the product linear functional $\lambda_1 \times \lambda_2$ can be defined as a bounded linear functional on $C_0({\bf R}^n \times {\bf R}^n)$, in basically the same way as before. In order to define the convolution $\lambda_1 * \lambda_2$ as a bounded linear functional on $C_0({\bf R}^n)$, one would like to let $\lambda_1 * \lambda_2$ act on $f(x + y)$ as a continuous function on ${\bf R}^n \times {\bf R}^n$, where $f$ is a continuous function on ${\bf R}^n$ that vanishes at infinity. However, $f(x + y)$ does not vanish at infinity on ${\bf R}^n \times {\bf R}^n$ unless $f \equiv 0$, and so it is better to use the natural extension of $\lambda_1 \times \lambda_2$ to bounded continuous functions on ${\bf R}^n \times {\bf R}^n$, as in the preceding section. \section[\ Functions and measures, continued]{Functions and measures, continued} \label{functions, measures, continued} \setcounter{equation}{0} If $g$ is a real or complex-valued function on ${\bf R}^n$ that is integrable with respect to Lebesgue measure, then \begin{equation} \label{mu_g(E) = int_E g(x) dx} \mu_g(E) = \int_E g(x) \, dx \end{equation} defines a Borel measure on ${\bf R}^n$. As before, \begin{equation} \int_{{\bf R}^n} f \, d\mu_g = \int_{{\bf R}^n} f(x) \, g(x) \, dx \end{equation} for every bounded measurable function $f$ on ${\bf R}^n$. Also, $|\mu_g| = \mu_{|g|}$, so that $\|\mu_g\|$ is the same as the $L^1$ norm of $g$ on ${\bf R}^n$. If $h$ is another integrable function on ${\bf R}^n$, then one can check that \begin{equation} \mu_g * \mu_h = \mu_{g * h}, \end{equation} where $g * h$ is the integrable function on ${\bf R}^n$ defined as in Section \ref{convolution on R^n}. If $\nu$ is a real or complex Borel measure on ${\bf R}^n$, then the convolution of $g$ and $\nu$ can be defined as a Lebesgue integrable function on ${\bf R}^n$ by \begin{equation} \label{(g * nu)(x) = int_{{bf R}^n} g(x - y) d nu(y)} (g * \nu)(x) = \int_{{\bf R}^n} g(x - y) \, d\nu(y).
\end{equation} As usual, the existence of this integral almost everywhere on ${\bf R}^n$ uses Fubini's theorem. If $g$ and $\nu$ are nonnegative and real-valued, then \begin{equation} \label{int_{{bf R}^n} (g * nu)(x) dx = (int_{{bf R}^n} g(x) dx) nu({bf R}^n)} \int_{{\bf R}^n} (g * \nu)(x) \, dx = \Big(\int_{{\bf R}^n} g(x) \, dx\Big) \, \nu({\bf R}^n), \end{equation} and in particular $(g * \nu)(x) < \infty$ for almost every $x \in {\bf R}^n$ with respect to Lebesgue measure. Otherwise, if $g$ and $\nu$ are real or complex-valued, then one can apply this to $|g|$ and $|\nu|$, to get that the integral in (\ref{(g * nu)(x) = int_{{bf R}^n} g(x - y) d nu(y)}) makes sense for almost every $x \in {\bf R}^n$ with respect to Lebesgue measure, and that \begin{equation} \label{int_{R^n} |(g * nu)(x)| dx le (int_{R^n} |g(x)| dx) |nu|(R^n)} \int_{{\bf R}^n} |(g * \nu)(x)| \, dx \le \Big(\int_{{\bf R}^n} |g(x)| \, dx\Big) \, |\nu|({\bf R}^n). \end{equation} If $\nu = \mu_h$ for some integrable function $h$ on ${\bf R}^n$, then $g * \nu$ reduces to the usual definition of $g * h$, while if $\nu$ is any Borel measure on ${\bf R}^n$, then $\mu_g * \nu = \mu_{g * \nu}$. If $\nu$ is a real or complex Borel measure on ${\bf R}^n$ and $g$ is a bounded continuous function on ${\bf R}^n$, then $(g * \nu)(x)$ is defined for every $x \in {\bf R}^n$, and satisfies \begin{equation} \label{sup_{x in R^n} |(g * nu)(x)| le (sup_{x in R^n} |g(x)|) |nu|(R^n)} \sup_{x \in {\bf R}^n} |(g * \nu)(x)| \le \Big(\sup_{x \in {\bf R}^n} |g(x)|\Big) \, |\nu|({\bf R}^n), \end{equation} as before. One can also check that $g * \nu$ is continuous on ${\bf R}^n$, using the dominated convergence theorem. If $g$ is bounded and uniformly continuous, then it is easy to see that $g * \nu$ is uniformly continuous too. Alternatively, to show that $g * \nu$ is continuous when $g$ is bounded and continuous, one can use the fact that $g$ is uniformly continuous on compact sets, and approximate $\nu$ by measures with compact support. If $g$ and $\nu$ have compact support in ${\bf R}^n$, then it is easy to see that $g * \nu$ has compact support as well. If $g$ is a continuous function on ${\bf R}^n$ that vanishes at infinity and $\nu$ has compact support, then it is easy to check that $g * \nu$ vanishes at infinity on ${\bf R}^n$ too. This also works when $\nu$ does not have compact support, by approximating $\nu$ by measures with compact support on ${\bf R}^n$. \section[\ The Fourier transform, continued]{The Fourier transform, continued} \label{fourier transform, continued} \setcounter{equation}{0} The Fourier transform of a complex Borel measure $\mu$ on ${\bf R}^n$ can be defined by \begin{equation} \label{widehat{mu}(xi) = int_{{bf R}^n} exp (- i xi cdot x) d mu(x)} \widehat{\mu}(\xi) = \int_{{\bf R}^n} \exp (- i \xi \cdot x) \, d\mu(x) \end{equation} for each $\xi \in {\bf R}^n$. This coincides with the earlier definition for an integrable function $f$ on ${\bf R}^n$ when $\mu = \mu_f$. As before, \begin{equation} \label{|widehat{mu}(xi)| le |mu|({bf R}^n)} |\widehat{\mu}(\xi)| \le |\mu|({\bf R}^n) \end{equation} for every $\xi \in {\bf R}^n$, and one can also check that $\widehat{\mu}(\xi)$ is uniformly continuous on ${\bf R}^n$. This is easier to do when $\mu$ has compact support in ${\bf R}^n$, and otherwise one can approximate $\mu$ by measures with compact support. 
If $\nu$ is another complex Borel measure on ${\bf R}^n$, then it is easy to see that \begin{equation} \label{widehat{(mu * nu)}(xi) = widehat{mu}(xi) widehat{nu}(xi)} \widehat{(\mu * \nu)}(\xi) = \widehat{\mu}(\xi) \, \widehat{\nu}(\xi) \end{equation} for every $\xi \in {\bf R}^n$. The analogue of the multiplication formula in this context states that \begin{equation} \label{int_{R^n} widehat{mu}(xi) d nu(xi) = int_{R^n} widehat{nu}(x) d mu(x)} \int_{{\bf R}^n} \widehat{\mu}(\xi) \, d\nu(\xi) = \int_{{\bf R}^n} \widehat{\nu}(x) \, d\mu(x) \end{equation} for any pair of complex Borel measures $\mu$, $\nu$ on ${\bf R}^n$. This follows from Fubini's theorem, as before. In particular, \begin{equation} \label{int_{R^n} widehat{mu}(xi) g(xi) d xi = int_{R^n} widehat{g}(x) d mu(x)} \int_{{\bf R}^n} \widehat{\mu}(\xi) \, g(\xi) \, d\xi = \int_{{\bf R}^n} \widehat{g}(x) \, d\mu(x) \end{equation} for every Lebesgue integrable function $g$ on ${\bf R}^n$. As in Section \ref{multiplication formula}, this implies that \begin{equation} \label{int_{{bf R}^n} widehat{mu}(xi) exp (i xi cdot w) h(xi) d xi = ...} \int_{{\bf R}^n} \widehat{\mu}(\xi) \, \exp (i \xi \cdot w) \, h(\xi) \, d\xi = \int_{{\bf R}^n} \widehat{h}(x - w) \, d\mu(x) \end{equation} for every Lebesgue integrable function $h$ on ${\bf R}^n$ and every $w \in {\bf R}^n$. If $h$ is an even function on ${\bf R}^n$, then this reduces to \begin{equation} \label{int_{{bf R}^n} widehat{mu}(xi) exp (i xi cdot w) h(xi) d xi = ..., 2} \int_{{\bf R}^n} \widehat{\mu}(\xi) \, \exp (i \xi \cdot w) \, h(\xi) \, d\xi = (\widehat{h} * \mu)(w). \end{equation} Let $a = (a_1, \ldots, a_n)$ be an $n$-tuple of positive real numbers, and let $P_{n, a}(x)$ be the function on ${\bf R}^n$ discussed in Section \ref{convergence}. Also let $f$ be a continuous function on ${\bf R}^n$ that vanishes at infinity, and observe that \begin{equation} \label{int_{R^n} (P_{n, a} * mu)(w) f(w) dw = int_{R^n} P_{n, a} * f d mu} \int_{{\bf R}^n} (P_{n, a} * \mu)(w) \, f(w) \, dw = \int_{{\bf R}^n} P_{n, a} * f \, d\mu \end{equation} by Fubini's theorem, using also the fact that $P_{n, a}$ is an even function on ${\bf R}^n$. It follows that \begin{equation} \lim_{a \to 0} \int_{{\bf R}^n} (P_{n, a} * \mu)(w) \, f(w) \, dw = \int_{{\bf R}^n} f \, d\mu, \end{equation} because $P_{n, a} * f \to f$ uniformly on ${\bf R}^n$ as $a \to 0$, as in Section \ref{convergence}. This says that the measure on ${\bf R}^n$ associated to $P_{n, a} * \mu$ converges to $\mu$ as $a \to 0$ with respect to the weak$^*$ topology on the dual of $C_0({\bf R}^n)$ when we identify complex Borel measures on ${\bf R}^n$ with bounded linear functionals on $C_0({\bf R}^n)$. As in Section \ref{inversion}, we have that \begin{equation} \label{inversion formula, 2} \quad \int_{{\bf R}^n} \widehat{\mu}(\xi) \, \exp (i \xi \cdot w) \, \exp \Big(- \sum_{j = 1}^n a_j \, |\xi_j|\Big) \, d\xi = (2 \pi)^n \, (P_{n, a} * \mu)(w) \end{equation} for each $w \in {\bf R}^n$, by applying (\ref{int_{{bf R}^n} widehat{mu}(xi) exp (i xi cdot w) h(xi) d xi = ..., 2}) with $h = p_{n, a}$ as in Section \ref{some examples, continued}. This converges to $(2 \pi)^n \, \mu$ as $a \to 0$ with respect to the weak$^*$ topology on the dual of $C_0({\bf R}^n)$, as in the previous paragraph. In particular, $\mu = 0$ when $\widehat{\mu} = 0$.
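As a simple illustration, if $\mu$ is a nonnegative finite Borel measure on ${\bf R}^n$, then $\widehat{\mu}(0) = \mu({\bf R}^n) = \|\mu\|$, so that the bound (\ref{|widehat{mu}(xi)| le |mu|({bf R}^n)}) is attained at $\xi = 0$. Similarly, if $d\mu(x) = \exp(- |x|^2 / 2) \, dx$, then a standard computation gives \begin{equation} \widehat{\mu}(\xi) = (2 \pi)^{n / 2} \, \exp(- |\xi|^2 / 2), \end{equation} which is consistent with the boundedness and uniform continuity of $\widehat{\mu}$ discussed earlier in this section.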
\section[\ Holomorphic extensions, continued]{Holomorphic extensions, continued} \label{holomorphic extensions, continued} \setcounter{equation}{0} Let us say that a complex Borel measure $\mu$ on ${\bf R}^n$ has support contained in a closed set $E \subseteq {\bf R}^n$ if \begin{equation} \mu({\bf R}^n \backslash E) = 0. \end{equation} If $\mu$ has support contained in a compact set $K$ in ${\bf R}^n$, then the Fourier transform $\widehat{\mu}(\xi)$ extends to a holomorphic function $\widehat{\mu}(\zeta)$ on ${\bf C}^n$, given by \begin{equation} \widehat{\mu}(\zeta) = \int_K \exp (- i \zeta \cdot x) \, d\mu(x), \end{equation} as in Section \ref{holomorphic extensions}. If $\mu$, $\nu$ are compactly supported complex Borel measures on ${\bf R}^n$, then one can check that $\mu * \nu$ also has compact support, and that \begin{equation} \widehat{(\mu * \nu)}(\zeta) = \widehat{\mu}(\zeta) \, \widehat{\nu}(\zeta) \end{equation} for each $\zeta \in {\bf C}^n$. Similarly, let $\epsilon \in \{-1, 1\}^n$ be given, let $Q_{n, \epsilon}$ be the closed ``quadrant'' in ${\bf R}^n$ associated to $\epsilon$ as before, and let $H_{n, \epsilon}$ be the corresponding region in ${\bf C}^n$. If $n = 1$, then $Q_{n, \epsilon}$ is a closed half-line in ${\bf R}$, and $H_{n, \epsilon}$ is the open upper or lower half-plane in ${\bf C}$, as appropriate. If $\mu$ is a complex Borel measure on ${\bf R}^n$ with support contained in $Q_{n, \epsilon}$, then the Fourier transform of $\mu$ extends naturally to a bounded uniformly continuous function on $\overline{H}_{n, - \epsilon}$ that is holomorphic on $H_{n, - \epsilon}$, for basically the same reasons as for integrable functions. If $\mu$, $\nu$ are complex Borel measures on ${\bf R}^n$ supported on $Q_{n, \epsilon}$, then one can check that $\mu * \nu$ is also supported on $Q_{n, \epsilon}$, and that the natural extension of the Fourier transform of $\mu * \nu$ to $\overline{H}_{n, - \epsilon}$ is equal to the product of the corresponding extensions of the Fourier transforms of $\mu$, $\nu$. \section[\ Approximation and support]{Approximation and support} \label{approximation, support} \setcounter{equation}{0} Let $X$ be a locally compact Hausdorff topological space, and let $\lambda$ be a bounded linear functional on the space $C_0(X)$ of continuous functions on $X$ that vanish at infinity, with respect to the supremum norm. If $\phi$ is a bounded continuous function on $X$, then \begin{equation} \label{lambda_phi(f) = lambda(phi f)} \lambda_\phi(f) = \lambda(\phi \, f) \end{equation} is also a bounded linear functional on $C_0(X)$, and \begin{equation} \label{||lambda_phi||_* le ||phi||_{sup} ||lambda||_*} \|\lambda_\phi\|_* \le \|\phi\|_{sup} \, \|\lambda\|_*. \end{equation} Here $\|\phi\|_{sup}$ denotes the supremum norm of $\phi$ on $X$, and $\|\lambda\|_*$ is the dual norm of $\lambda$ with respect to the supremum norm on $C_0(X)$. Note that $\phi \, f \in C_0(X)$ when $f \in C_0(X)$ and $\phi \in C_b(X)$. Let $\psi$ be another bounded continuous function on $X$, and let us check that \begin{equation} \label{||lambda_phi||_* + ||lambda_psi||_* le ...} \|\lambda_\phi\|_* + \|\lambda_\psi\|_* \le \sup_{x \in X} (|\phi(x)| + |\psi(x)|) \, \|\lambda\|_*. \end{equation} Let $a$, $b$ be real or complex numbers, as appropriate, and let $f$, $g$ be continuous functions on $X$ that vanish at infinity, with \begin{equation} |a|, |b|, \|f\|_{sup}, \|g\|_{sup} \le 1. 
\end{equation} Observe that \begin{equation} a \, \lambda_\phi(f) + b \, \lambda_\psi(g) = \lambda(a \, \phi \, f + b \, \psi \, g), \end{equation} and hence \begin{equation} |a \, \lambda_\phi(f) + b \, \lambda_\psi(g)| \le \|a \, \phi \, f + b \, \psi \, g\|_{sup} \, \|\lambda\|_*. \end{equation} Our hypotheses on $a$, $b$, $f$, and $g$ imply that \begin{equation} \|a \, \phi \, f + b \, \psi \, g\|_{sup} \le \sup_{x \in X} (|\phi(x)| + |\psi(x)|), \end{equation} so that \begin{equation} |a \, \lambda_\phi(f) + b \, \lambda_\psi(g)| \le \sup_{x \in X} (|\phi(x)| + |\psi(x)|) \, \|\lambda\|_*. \end{equation} Using suitable choices of $a$ and $b$, we get that \begin{equation} |\lambda_\phi(f)| + |\lambda_\psi(g)| \le \sup_{x \in X} (|\phi(x)| + |\psi(x)|) \, \|\lambda\|_*, \end{equation} which implies (\ref{||lambda_phi||_* + ||lambda_psi||_* le ...}), by taking the supremum over $f$ and $g$. Suppose now that $\phi$ is a bounded real-valued continuous function on $X$ such that $0 \le \phi(x) \le 1$ for each $x \in X$. If we take $\psi = 1 - \phi$ in (\ref{||lambda_phi||_* + ||lambda_psi||_* le ...}), then we get that \begin{equation} \|\lambda_\phi\|_* + \|\lambda_{1 - \phi}\|_* \le \|\lambda\|_*. \end{equation} Of course, \begin{equation} \|\lambda\|_* \le \|\lambda_\phi\|_* + \|\lambda_{1 - \phi}\|_*, \end{equation} because $\lambda_\phi + \lambda_{1 - \phi} = \lambda$, and so \begin{equation} \label{||lambda_phi||_* + ||lambda_{1 - phi}||_* = ||lambda||_*} \|\lambda_\phi\|_* + \|\lambda_{1 - \phi}\|_* = \|\lambda\|_*. \end{equation} Let $\epsilon > 0$ be given, and let $f$ be a continuous function on $X$ that vanishes at infinity such that $\|f\|_{sup} \le 1$ and \begin{equation} |\lambda(f)| > \|\lambda\|_* - \epsilon. \end{equation} We may also ask $f$ to have compact support in $X$, since continuous functions with compact support are dense in $C_0(X)$. Let $\phi$ be a continuous real-valued function on $X$ with compact support such that $\phi(x) = 1$ for every $x$ in the support of $f$ and $0 \le \phi(x) \le 1$ for every $x \in X$, which exists by Urysohn's lemma. Thus $\lambda_\phi(f) = \lambda(f)$, so that \begin{equation} \|\lambda_\phi\|_* > \|\lambda\|_* - \epsilon. \end{equation} This implies that \begin{equation} \|\lambda - \lambda_\phi\|_* = \|\lambda_{1 - \phi}\|_* < \epsilon, \end{equation} by (\ref{||lambda_phi||_* + ||lambda_{1 - phi}||_* = ||lambda||_*}). \section[\ Extensions to $C_b(X)$]{Extensions to $C_b(X)$} \label{extensions to C_b(X)} \setcounter{equation}{0} Let $X$ be a locally compact Hausdorff topological space, and let $\lambda$ be a bounded linear functional on $C_0(X)$. As in Section \ref{measures on R^n}, there is a natural extension of $\lambda$ to a bounded linear functional on $C_b(X)$ with some additional continuity properties. Of course, this is trivial when $X$ is compact, and so we may as well suppose that $X$ is not compact. Remember that there is a natural topology on the space $C(X)$ of all continuous real or complex-valued functions on $X$, which is determined by the collection of supremum seminorms associated to nonempty compact subsets of $X$. If $X$ is $\sigma$-compact, as in the case of $X = {\bf R}^n$, then we have seen that it suffices to consider the supremum seminorms corresponding to a sequence of compact subsets of $X$, which implies that this topology on $C(X)$ is metrizable. If $L$ is a nonnegative real number, then let $C_{b, L}(X)$ be the space of continuous functions $f$ on $X$ such that $|f(x)| \le L$ for every $x \in X$. 
Similarly, let $C_{0, L}(X)$ be the intersection of $C_0(X)$ and $C_{b, L}(X)$, consisting of all continuous functions $f$ that vanish at infinity and satisfy $\|f\|_{sup} \le L$. It is easy to see that $C_{0, L}(X)$ is dense in $C_{b, L}(X)$ with respect to the topology induced on $C_{b, L}(X)$ by the one on $C(X)$ described in the preceding paragraph. More precisely, for each bounded continuous function $f$ on $X$ with $\|f\|_{sup} \le L$ and every nonempty compact set $K \subseteq X$ there is a continuous function $g$ with compact support on $X$ such that $\|g\|_{sup} \le L$ and $g(x) = f(x)$ for every $x \in K$. To see this, one can take $g = \theta \, f$, where $\theta$ is a continuous real-valued function on $X$ with compact support such that $\theta(x) = 1$ for every $x \in K$ and $0 \le \theta(x) \le 1$ for every $x \in X$. Thus we are actually interested in extending $\lambda$ to a bounded linear functional on $C_b(X)$ with the additional property that the restriction of $\lambda$ to $C_{b, L}(X)$ is continuous with respect to the topology induced by the one on $C(X)$ described before for each $L \ge 0$. This extension would be unique, because $C_{0, L}(X)$ is dense in $C_{b, L}(X)$ with respect to the topology induced by the one on $C(X)$. If $X$ is $\sigma$-compact, then this additional continuity condition is equivalent to asking that $\{\lambda(f_j)\}_{j = 1}^\infty$ converges to $\lambda(f)$ for each uniformly bounded sequence $\{f_j\}_{j = 1}^\infty$ of continuous functions on $X$ that converges uniformly on compact subsets of $X$ to a function $f$ on $X$. Of course, a necessary condition for the existence of an extension of $\lambda$ to $C_b(X)$ with this additional continuity property is that the restriction of $\lambda$ to $C_{0, L}(X)$ be continuous with respect to the topology induced by the one on $C(X)$ for each $L \ge 0$. It is easy to see that $\lambda$ satisfies this condition, using the approximation of $\lambda$ by bounded linear functionals on $C_0(X)$ with compact support, as in the previous section. The existence of the extension of $\lambda$ to $C_b(X)$ with this additional continuity property can be obtained by approximating a bounded continuous function $f$ on $X$ by uniformly bounded continuous functions $g$ on $X$ with compact support, as before, and choosing $\lambda(f)$ so that it is approximated by the $\lambda(g)$'s. This is analogous to the fact that a uniformly continuous real or complex-valued function on a dense subset of a metric space $M$ has a unique extension to a uniformly continuous function on $M$. Alternatively, let $\{\phi_j\}_{j = 1}^\infty$ be a sequence of uniformly bounded continuous functions on $X$ with compact support such that the corresponding linear functionals $\lambda_{\phi_j}$ converge to $\lambda$ with respect to the dual norm associated to the supremum norm on $C_0(X)$, as in the previous section. Each $\lambda_{\phi_j}$ has an obvious extension to $C_b(X)$, and one can check that these extensions converge as $j \to \infty$ to a bounded linear functional on $C_b(X)$. One can then take the desired extension of $\lambda$ to $C_b(X)$ to be the limit of this sequence, which amounts to approximating $\lambda(f)$ for $f \in C_b(X)$ by $\lambda(g)$ with uniformly bounded continuous functions $g$ on $X$ with compact support, as before.
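As a simple example of this type of extension, one can take $X = {\bf R}$ and \begin{equation} \lambda(f) = \sum_{j = 1}^\infty 2^{-j} \, f(j), \end{equation} which defines a bounded linear functional on $C_0({\bf R})$ with $\|\lambda\|_* = 1$, corresponding to the measure that assigns mass $2^{-j}$ to the point $j$ for each positive integer $j$. The truncated functionals $\lambda_N(f) = \sum_{j = 1}^N 2^{-j} \, f(j)$ have compact support and satisfy $\|\lambda - \lambda_N\|_* = 2^{-N}$, which gives an explicit instance of the approximation discussed in the previous section, and the natural extension of $\lambda$ to $C_b({\bf R})$ is given by the same series, so that $\lambda(f) = 1$ when $f \equiv 1$ on ${\bf R}$.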
\section[\ Delta functions]{Delta functions} \label{delta functions} \setcounter{equation}{0} A Dirac delta function is not really a function in the usual sense, but can easily be interpreted as a measure on ${\bf R}^n$. Thus if $u \in {\bf R}^n$, then the corresponding measure $\delta_u$ is defined on ${\bf R}^n$ by \begin{eqnarray} \delta_u(E) & = & 1 \quad\hbox{when } u \in E \\ & = & 0 \quad\hbox{when } u \in {\bf R}^n \backslash E. \nonumber \end{eqnarray} Equivalently, \begin{equation} \int_{{\bf R}^n} f \, d\delta_u = f(u) \end{equation} for any function $f$ on ${\bf R}^n$. The Fourier transform of $\delta_u$ is given by \begin{equation} \widehat{\delta_u}(\xi) = \exp (- i \xi \cdot u) \end{equation} for every $\xi \in {\bf R}^n$. In particular, $\widehat{\delta_0}(\xi) = 1$ for every $\xi \in {\bf R}^n$, and $|\widehat{\delta_u}(\xi)| = 1$ for every $u, \xi \in {\bf R}^n$. This shows that the analogue of the Riemann--Lebesgue lemma for measures instead of integrable functions does not work. As in Section \ref{holomorphic extensions, continued}, there is a natural extension of $\widehat{\delta_u}$ to a holomorphic function on ${\bf C}^n$, given by $\widehat{\delta_u}(\zeta) = \exp (- i \zeta \cdot u)$. If $\epsilon \in \{-1, 1\}^n$ and $u \in Q_{n, \epsilon}$, then $|\widehat{\delta_u}(\zeta)| \le 1$ for every $\zeta \in \overline{H}_{n, -\epsilon}$, as before. If $\mu$ is a real or complex Borel measure on ${\bf R}^n$, then \begin{equation} \label{(mu * delta_u)(E) = mu(E - u)} (\mu * \delta_u)(E) = \mu(E - u) \end{equation} for every Borel set $E \subseteq {\bf R}^n$, where $E - u$ is the set of points in ${\bf R}^n$ of the form $x - u$ with $x \in E$. In particular, \begin{equation} \label{mu * delta_0 = mu} \mu * \delta_0 = \mu \end{equation} for every Borel measure $\mu$, and \begin{equation} \label{delta_u * delta_v = delta_{u + v}} \delta_u * \delta_v = \delta_{u + v} \end{equation} for every $u, v \in {\bf R}^n$. If $f$ is a suitable function on ${\bf R}^n$, then \begin{equation} (f * \delta_u)(x) = f(x - u). \end{equation}
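Note that this is consistent with (\ref{widehat{(mu * nu)}(xi) = widehat{mu}(xi) widehat{nu}(xi)}) and (\ref{delta_u * delta_v = delta_{u + v}}), since \begin{equation} \widehat{\delta_u}(\xi) \, \widehat{\delta_v}(\xi) = \exp (- i \xi \cdot u) \, \exp (- i \xi \cdot v) = \exp (- i \xi \cdot (u + v)) = \widehat{\delta_{u + v}}(\xi) \end{equation} for every $\xi \in {\bf R}^n$, and we have seen that a complex Borel measure on ${\bf R}^n$ is uniquely determined by its Fourier transform.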
\section{Introduction} \vspace{-0.1cm} \label{sec:introduction} 3D human pose estimation from still RGB images is a challenging task due to changes in lighting conditions, cluttered backgrounds, occlusions, inter- and intra-subject pose variability, as well as ill-posed depth ambiguity. Moreover, accurate annotation of captured data is not a trivial task and most available datasets are captured under controlled environments \cite{h36m_2014, sigal2010humaneva,mehta2017vnect}. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{images/motivation.png} \vspace{-0.7cm}\caption{Illustration of the proposed model output. Given 1) an input RGB image, 2) an initial estimation of 3D joint locations is obtained with a given CNN (a volumetric stacked hourglass network (SHN) \cite{newell2016stacked} in this case). Green lines show ground truth and red lines show estimated 3D joints. 3) Our denoising autoencoder model is able to recover the pose from structured error. Finally, 4) the body mesh is rendered by SMPL based on the proposed SMPL reverse strategy.} \vspace{-0.3cm} \label{fig:motivation} \end{figure} One common representation of human pose is the set of 3D joint locations. However, 3D joints alone do not convey the morphology of the body. Estimating body shape or a mesh along with the joints enables a wide range of applications, including movie editing, soft-biometric body measurements and cloth retexturing, among others. Besides, such a dense body representation may help to achieve more accurate estimations of 3D joints. Available body models range from simple geometrical objects, like compositions of cylinders and spheres, to complex parametric statistical models such as SCAPE~\cite{anguelov2005scape} and SMPL~\cite{loper2015smpl}. SMPL~\cite{bogo2016keep,lassner2017unite,pavlakos2018learning,kanazawa2018end,omran2018neural} generates realistic body meshes based on PCA shape components along with relative axis-angle rotations of joints. Rotations form the body pose in a defined kinematic tree and are computed for joints with respect to their parent nodes. The goal is to estimate SMPL parameters from an RGB image such that the generated body mesh fits the visible person in the image as closely as possible. This can be done by fitting generative models~\cite{sigal2008combined,ivekovivc2008human,guan2009estimating,bogo2016keep,lassner2017unite} or training discriminative deep models~\cite{pavlakos2018learning,kanazawa2018end,omran2018neural,varol18_bodynet}. On the one hand, classical generative optimization solutions have been shown to be sensitive to noise and need a careful and complex design of objective functions. On the other hand, while deep models have shown superior performance over the former solutions, they are data-hungry approaches. In both cases, a direct regression of SMPL parameters is a complex task because: 1) SMPL is a many-to-one complex function\footnote{SMPL details are given in Section \ref{sec:smpl}.} which is sensitive to data noise (i.e. optimizing SMPL parameters may converge to invalid values), and 2) accurate image annotation with SMPL pose and shape parameters in large in-the-wild datasets is infeasible. Therefore, researchers have developed their solutions based on available 2D joints, applying intermediate representations~\cite{pavlakos2018learning,omran2018neural} or adversarial training~\cite{kanazawa2018end} for 3D inference. However, it is known that estimation of 3D data from 2D is ill-posed and can lead to sub-optimal solutions.
In this paper, given an RGB image, we first estimate 3D joints and a sparse set of landmarks placed along the body surface. Then, we use them to regress SMPL pose and shape parameters. The output is a detailed body mesh. We call this procedure SMPL reverse (SMPLR). One can imagine SMPLR as an autoencoder where the latent embeddings are the pose and shape components. We define the encoder as a number of Multi-layer Perceptron (MLP) networks, while the decoder is SMPL. By first estimating 3D joints and landmarks as an intermediate representation, 1) we avoid a direct regression of SMPL parameters, which easily yields non-realistic body meshes, 2) we can safely train SMPLR, even end-to-end, in a simple way without explicit constraints on SMPL, and 3) we provide flexibility to the design, i.e. SMPLR can be trained independently using millions of generated mocap-like samples, thus allowing cross-dataset generalization. When 3D ground truth data is available for RGB images, any state-of-the-art CNN can be used to estimate 3D joints and landmarks. However, such ground truth data is not available for in-the-wild datasets. Besides, estimated 3D joints can have structured error due to depth ambiguity or occlusions. To handle these cases we design a denoising autoencoder (DAE) network \cite{vincent2010stacked} as an extra module between the CNN and SMPLR, able to lift 2D joints to 3D and/or recover from structured error. We show the proposed model output in Fig. \ref{fig:motivation}. In summary, our main contributions are as follows: \begin{itemize}\vspace{-0.3cm} \item We build a denoising autoencoder that learns to recover input data from structured error. The model transforms 2D/3D joints into more human-consistent 3D joint predictions, enforcing symmetry and proportion constraints on bone lengths. \vspace{-0.3cm} \item We design a two-branch MLP network to regress SMPL parameters from the 3D joints and landmarks given by the DAE. We refer to the combination of DAE+MLP+SMPL as SMPLR. This allows the inference of a human body mesh from a sparse point representation. Finally, we gain an improvement over the chosen CNN by end-to-end training with SMPLR. \vspace{-0.3cm} \item Throughout our experiments, we demonstrate that it is possible to obtain an accurate human body model from a set of joint and landmark predictions. We obtain state-of-the-art results for SMPL-like architectures on the Human3.6M \cite{h36m_2014} and SURREAL \cite{varol17_surreal} datasets. \end{itemize} \vspace{-0.1cm} \section{Related work} \label{sec:related} \vspace{-0.1cm} In this section, we review state-of-the-art works on 3D human pose estimation from still RGB images. \textbf{Lifting 2D to 3D. } Depth regression from 2D is an ill-posed problem where several 3D poses can be projected to the same 2D joints. Chen and Ramanan \cite{chen20173d} show that copying depth from 3D mocap data can provide a fair estimation when a nearest 2D matching is given. However, Moreno \cite{moreno20173d} shows that the distance between random pairs of poses is more ambiguous in Cartesian space than in the Euclidean distance matrix representation. Recent works show that directly using simple \cite{martinez2017baseline} or cascade \cite{hoang2018cascade} MLP networks can be more accurate. Additionally, 2D joints can be wrongly estimated, making previous solutions sub-optimal. Yang \etal \cite{yang20183d} use adversarial training and benefit from available 3D data along with 2D data to infer depth information.
In our case, the proposed denoising autoencoder is used to lift 2D pose to 3D in the absence of paired image and 3D ground truth data. \textbf{Direct regression} refers to regressing 3D pose directly from an RGB image. Due to the nonlinear nature of the human pose, 3D pose regression without modeling the correlation among joints is not a trivial task. Brau and Jiang \cite{brau20163d} estimate 3D joints and camera parameters without direct supervision on them. Instead, they use several loss functions for projected 2D joints, bone sizes and independent Fisher priors. Sun \etal \cite{sun2017compositional} propose a compositional loss function based on relative joints with respect to a defined kinematic tree. They separate 2D joints and depth estimation in the loss. We avoid relying on such complex losses by using 3D joints and landmarks as an intermediate representation. \begin{figure*}[!ht] \centering \includegraphics[width=\linewidth]{images/smplr2.pdf} \vspace{-0.7cm} \caption{System pipeline. A CNN estimates volumetric heatmaps. Soft argmax converts heatmaps to joint locations and feeds them to the denoising autoencoder module. Soft argmax is differentiable, thus gradients can be backpropagated. Finally, we compute normalized relative distances $\mathbf{B}$ (eq. \ref{eq:bone}) and normalized relative joints $\mathbf{N}$ (eq. \ref{eq:N}), which are fed to two independent networks designed to regress SMPL parameters. At the end, SMPL is responsible for rendering a realistic body mesh. $\dashedrightarrow$ shows where the loss is applied.} \label{fig:pipeline}\vspace{-0.3cm} \end{figure*} \textbf{Probability maps} are joint likelihoods computed for each pixel/voxel. From 2D joint heatmaps, different solutions are applied to infer the third dimension. Tome \etal \cite{tome2017lifting} iteratively minimize a function based on a 2D belief map and a learnt 3D model to find the most likely 3D pose. Since probability maps are dense predictions, fully convolutional networks are usually applied. Luo \etal \cite{luo2018orinet} extend stacked hourglass (SHN) \cite{newell2016stacked} to estimate 2D joint heatmaps along with a limb orientation map in each stack. Pavlakos \etal \cite{pavlakos2017coarse} extend SHN to output 3D volumetric data by a coarse-to-fine architecture where at each stack the third dimension is linearly increased with the 2D heatmaps. Nibali \etal \cite{nibali20183d} propose marginal heatmaps from different axis viewpoints. \textbf{Pose and shape estimation. } In order to compute a detailed body model, it is common to estimate body volume or mesh along with 3D pose. Early proposals were mainly based on the combination of simple geometric objects \cite{stoll2011fast,sigal2012loose}, while recent approaches rely on PCA-based parametric models like SMPL \cite{loper2015smpl}. Bogo \etal \cite{bogo2016keep} were the first to apply SMPL for body pose and shape recovery. Their method was based on regular optimization procedures given an objective function with several constraints, minimizing the distance between projected model joints and pre-estimated 2D joints. Such a complex objective design is critical for the success of optimization procedures, since SMPL is a many-to-one function and sensitive to noise. Lassner \etal \cite{lassner2017unite} extended the previous work by including a bi-directional distance between the projected mesh and the body silhouette. Recent works embed SMPL within deep models. Tung \etal \cite{tung2017self} regressed SMPL pose and shape along with camera parameters.
They trained the model with supervision on synthetic data and fine-tuned without supervision at inference time using 2D joints, silhouette and motion losses. Similarly, Kanazawa \etal \cite{kanazawa2018end} also regressed SMPL and camera parameters, but in addition applied adversarial training. Predictions are fed to a discriminator network which classifies them as real/fake with respect to real body scans. Similar to our work, \cite{pavlakos2018learning,omran2018neural,zanfir2018deep} estimate pose and shape parameters from intermediate information, like body segments \cite{omran2018neural}, 2D joint heatmaps and body mask \cite{pavlakos2018learning} or 3D joints \cite{zanfir2018deep}. They include SMPL to obtain 3D joints and mesh, which are used to compute the loss either in 2D (back-projected from 3D) or in 3D. However, this process is ill-posed and sub-optimal because of the loss of depth information (in the case of \cite{pavlakos2018learning,omran2018neural}) or the ambiguity in joint orientations and shape parameters (in the case of \cite{zanfir2018deep}). We show that 3D joints and surface landmarks can better deal with this problem, outperforming the aforementioned solutions. Recently, Varol \etal \cite{varol18_bodynet} proposed a multi-tasking approach to estimate body 2D/3D pose, pixel segments and volumetric shape. They use SMPL to generate ground truth body volumes and do not embed the SMPL function within the network. In this paper, we propose an approach to estimate 3D body pose and shape by the use of intermediate information and the SMPL model. We can benefit from end-to-end training. Besides, our method can be adapted to 2D-to-3D solutions when just 2D ground truth data is available. \vspace{-0.1cm}\section{Methodology}\vspace{-0.1cm} \label{sec:methodology} We estimate 3D joints $\mathbf{J}=\{j\}_1^K$ and surface vertices $\mathbf{T}=\{t\}_1^C$ from a single RGB image $\mathit{I}$, where $j,t\in\mathbb{R}^3$, and $K$ and $C$ are the number of body joints and surface points, respectively. We define $\mathbf{L}\subset\mathbf{T}$ as a set of sparse surface landmarks and $\mathbf{JL}$ as a concatenation of the two matrices. In order to compute a detailed mesh, we use the SMPL model \cite{loper2015smpl}. Our goal is to estimate SMPL parameters from image $\mathit{I}$ using deep learning without directly regressing them. This way, we avoid possible artifacts while keeping the architecture flexible in the absence or presence of 3D ground truth data. Our network contains three main modules shown in Fig. \ref{fig:pipeline}. First, joint and landmark locations are estimated by any chosen CNN ($\mathbf{JL}_{CNN}$). Afterwards, DAE filters structured error or lifts 2D to 3D ($\mathbf{JL}_{DAE}$). Finally, SMPLR recovers pose and shape from the predictions of the previous module and reconstructs a detailed body mesh and 3D joints. Next, we explain DAE and SMPLR in detail. \vspace{-0.1cm}\subsection{SMPL review}\label{sec:smpl}\vspace{-0.1cm} SMPL is a statistical parametric function $\mathit{M}(\beta,\theta;\Phi)$ which maps shape parameters $\beta$ and axis-angle pose parameters $\theta$ into vertices $\mathbf{T}$, given learnt model parameters $\Phi$.
Given a template average body mesh with vertices $\mathbf{T}^*$ and a dataset of scanned bodies, two sets of principal components $\mathcal{S}=[\mathbf{S}_1,...,\mathbf{S}_{|\beta|} ]\in \mathbb{R}^{3C\times|\beta|}$ and $\mathcal{P}=[\mathbf{P}_1,...,\mathbf{P}_{9K} ]\in \mathbb{R}^{3C\times9K}$ are learnt to form model parameters $\Phi$ (where $|\beta|=10$, $C=6890$ and $K=24$). Then, template shape vertices $\mathbf{T}^*$ can be morphed to $\mathbf{T}^{*}_{s}$ by ${vec}^{-1}_{3, C}(\mathcal{S}\times\beta)+\mathbf{T}^{*}$, where ${vec}^{-1}(.)$ is a reshaping operator. Bases $\mathbf{P}_i$ are responsible for small pose-based displacements due to body soft-tissue behavior and have a small contribution to the shape deformation. Given a kinematic tree (e.g. Fig. \ref{fig:landmarks}), a set of relative rotation matrices $\mathcal{R}=[\mathbf{R}_1,...,\mathbf{R}_{K}]$, with $\mathbf{R}_i\in \mathbb{R}^{3\times3}$, is computed, one for each joint with respect to its parent. Each $\mathbf{R}_i$ is a function of $\theta_i\in \mathbb{R}^3$ and is computed based on the Rodrigues formula. These rotation matrices are used for two main reasons: i) to pose the mesh by rotating body parts relative to each other along the kinematic tree, and ii) to update the template shape in rest pose $\theta^{*}$ through the bases $\mathbf{P}_i$. Rotations are applied around joints. However, joint locations are a function of body shape, therefore they need to be computed before posing the model. This is done by a regressor matrix $\mathcal{J}$ (as part of parameters $\Phi$) from the updated vertices $\mathbf{T}^{*}_{s}$. Please refer to \cite{loper2015smpl} for a more detailed explanation of SMPL. The SMPL model has several characteristics. First, it is differentiable, which makes it possible to use it along with deep networks. Secondly, it does not constrain invalid pose and shape values and, thus, it is a many-to-one function. This means that given an RGB image, end-to-end training of a CNN from scratch with SMPL attached on top may converge to a non-optimal solution. One of the main reasons for this is the use of the Rodrigues formulation and axis-angle pose parameters, which are known not to be unique (periodicity of $\theta$). A possible solution is to directly use rotation matrices as proposed in \cite{lassner2017unite,omran2018neural}. Finally, SMPL is a generative model which allows us to generate mocap-like data for free. Also, it can be used to generate synthetic realistic images \cite{varol17_surreal}. \vspace{-0.1cm}\subsection{SMPL reverse}\vspace{-0.1cm} A natural way of embedding SMPL in a deep network is to estimate $\beta$ and $\theta$ given image $\mathit{I}$ and feed them to SMPL. However, this is a challenging task because of the aforementioned many-to-one property, plus the noise sensitivity of the model. Besides, direct regression of SMPL parameters may generate artifacts \cite{kanazawa2018end,tan2017indirect}. Instead, researchers use intermediate representations like 2D joints and body silhouette \cite{pavlakos2018learning} or body segments \cite{omran2018neural} to regress SMPL parameters. Although such data is easier to annotate from RGB images than SMPL data, it provides a sub-optimal mapping to SMPL parameters, because 1) estimating 3D from 2D is an ill-posed problem, and 2) the loss is computed from noisy back-projected 2D estimations. In this paper, we instead propose an autoencoder-like scheme, i.e. the input 3D data is recovered in the output, while pose and shape are obtained in the encoder and SMPL is taken as the decoder.
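Since SMPL acts here as a fixed, differentiable decoder, the following simplified sketch summarizes its shape-blend, joint-regression and kinematic-posing steps (illustrative code only: array names and shapes are our own assumptions, and pose-dependent blend shapes and linear-blend skinning are omitted, so this is not the reference SMPL implementation):
\begin{verbatim}
import numpy as np

def rodrigues(theta_i):
    # Axis-angle vector (3,) -> rotation matrix (3, 3).
    angle = np.linalg.norm(theta_i) + 1e-8
    axis = theta_i / angle
    Kx = np.array([[0, -axis[2], axis[1]],
                   [axis[2], 0, -axis[0]],
                   [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * Kx \
        + (1 - np.cos(angle)) * Kx @ Kx

def smpl_like_forward(beta, theta, T_star, S, J_reg, parents):
    # beta: (10,), theta: (K, 3), T_star: (C, 3) template,
    # S: (C, 3, 10) shape basis, J_reg: (K, C) joint
    # regressor, parents: (K,) kinematic tree, parents[0] = -1.
    # 1) Shape blend: morph the template with the PCA basis.
    T_shaped = T_star + np.einsum('cdk,k->cd', S, beta)
    # 2) Regress rest-pose joints from the shaped template.
    J_rest = J_reg @ T_shaped
    # 3) Pose the kinematic chain with relative rotations.
    K = len(parents)
    R = [rodrigues(theta[i]) for i in range(K)]
    G = [None] * K                    # global rotations
    J_posed = np.zeros_like(J_rest)   # posed joint locations
    G[0], J_posed[0] = R[0], J_rest[0]
    for i in range(1, K):
        p = parents[i]
        G[i] = G[p] @ R[i]
        J_posed[i] = J_posed[p] + G[p] @ (J_rest[i] - J_rest[p])
    # Return shaped rest-pose vertices and posed 3D joints.
    return T_shaped, J_posed
\end{verbatim}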
We refer to this model as SMPL reverse (SMPLR, see Fig. \ref{fig:pipeline}). This design has several benefits: SMPLR can be trained 1) without the need for constraints on SMPL, 2) independently of RGB data, using millions of generated 3D mocap-like samples, and 3) end-to-end with a CNN. All of these provide simplicity and flexibility in the design and training of the entire network. In the results section we show that the SMPLR formulation can generate more accurate estimations than state-of-the-art SMPL-based alternatives. Furthermore, SMPLR acts as a strong regularizer on the CNN model when trained end-to-end, enhancing the internal coherence among joints in the CNN predictions. In Sec. \ref{sec:end-to-end ablation} we propose an effective incremental training strategy for this task. We model the SMPLR encoder with deep MLP networks. We design two independent networks $\mathcal{R}=\Omega(\mathbf{N};\phi_p)$ and $\beta=\Psi(\mathbf{B};\phi_s)$ with the same architecture (see Fig. \ref{fig:pipeline} for details) for pose and shape estimation, respectively, where $\phi_.$ corresponds to network parameters, $\mathbf{N}$ is a vector of normalized relative joints and $\mathbf{B}$ is a vector of normalized relative distances. A reason for the choice of two networks is that $\mathcal{R}$ and $\beta$ are independent variables. Besides, we want the encoder to be cross-dataset applicable and explainable w.r.t. the pose and shape parameters. In available datasets, while pose parameters have a high variability, there is not much variability in shape parameters. For instance, the Human3.6M dataset \cite{h36m_2014} only contains 11 subjects, and training $\Psi$ is not feasible by relying just on this dataset. In the results section we show the contribution of each network to the final joint estimates. Since we define SMPLR as an autoencoder, its input must be $\mathbf{J}$ and $\mathbf{T}$. However, $\mathbf{T}$ is a high-dimensional vector and not all vertices contribute equally to the computation of pose and shape parameters, so network capacity is wasted if all of them are considered. To cope with this issue, we empirically select a subset of points as landmarks $\mathbf{L}\subset\mathbf{T}$, which represent body shape and complement $\mathbf{J}$. Without landmarks, the network converges to the average body fatness and the problem remains ill-posed due to the ambiguity of joint orientations. Landmarks help to cope with these two problems. Besides, it is cheaper to gather landmarks in mocap datasets than to scan the whole body. We show the 18 selected landmarks and their assigned kinematic tree in Fig. \ref{fig:landmarks}. Next, we explain the network details. \begin{figure}[!t] \centering \begin{subfigure}[b]{0.4\textwidth} \includegraphics[width=\textwidth]{images/smpl_kintree1_2.png} \end{subfigure}\vspace{-0.3cm} \caption{Landmarks and assigned kinematic tree in red (blue is the original SMPL kinematic tree). Pelvis is set as root.} \label{fig:landmarks}\vspace{-0.4cm} \end{figure} $\mathbf{J}$ and $\mathbf{L}$ are concatenated to form $\mathbf{JL}$, which has been estimated beforehand by the DAE (i.e. $\mathbf{JL}_{DAE}$). For ease of reading, we omit the subscript $._{DAE}$ from $\mathbf{JL}_{DAE}$ in the following formulations.
Given a kinematic tree $\kappa\in\mathbb{R}^{42}$, we define $\mathbf{N}$ as: \vspace{-0.1cm} \begin{equation}\label{eq:N} \mathbf{N}_i=\frac{\mathbf{JL}_{i} - \mathbf{JL}_{\kappa(i)}}{\| \mathbf{JL}_{i} - \mathbf{JL}_{\kappa(i)} \|_2}, \text{ for } i\in[2..42],\vspace{-0.1cm} \end{equation} where $\kappa(i)$ gives the parent index of node $i$. The reason for this normalization is that, in order to compute the relative joint rotations $\mathbf{R}$, we do not need to know relative distances. This frees network capacity from unnecessary data variability. Such relative distances are embedded in the computation of shape parameters. Thus, given relative distances $\mathbf{B}^*$ computed from template joints and landmarks $\mathbf{JL}^*$, we define $\mathbf{B}$ as: \vspace{-0.1cm} \begin{equation} \mathbf{B}_i^*=\| \mathbf{JL}_{i}^* - \mathbf{JL}_{\kappa(i)}^* \|_2 , \text{ for } i\in[2..42], \vspace{-0.1cm} \end{equation} \begin{equation} \mathbf{B}_i=\| \mathbf{JL}_{i} - \mathbf{JL}_{\kappa(i)} \|_2 - \mathbf{B}_i^*, \text{ for } i\in[2..42]. \label{eq:bone}\vspace{-0.1cm} \end{equation} SMPL originally provides two different models for male and female bodies. Selecting a proper gender model for each sample has a crucial impact on the accuracy at inference time. Therefore, we also include gender as an extra term in the shape parameters $\beta$. We assign gender a value from the set $\{-1,+1\}$ and learn it as a regression problem along with the other shape parameters. \textbf{Loss function.} The $\mathcal{L}_2$ loss has been commonly applied in recent regression problems \cite{tan2017indirect,pavlakos2018learning}. However, we found the $\mathcal{L}_2$ loss to have problems in convergence and generalization in the case of noisy inputs. Instead, we use an $\mathcal{L}_1$ loss on $\mathcal{R}$ and $\beta$, called $L_R$ and $L_\beta$, to supervise the $\Omega$ and $\Psi$ networks. Firstly, this is done in isolation from SMPL, which means no back-propagation is applied through SMPL. This is important for stable training and fast convergence. Then, for performance gains, we fine-tune the networks adding an $\mathcal{L}_1$ loss on the SMPL output, called $L_{SMPL}$. The SMPL output contains $\mathbf{J}$ and $\mathbf{T}$. However, to have the SMPLR architecture resemble an autoencoder, we compute $L_{SMPL}$ on the landmarks rather than the whole $\mathbf{T}$. The final SMPLR loss is $L_R+L_\beta+L_{SMPL}$. \vspace{-0.1cm} \subsection{Denoising autoencoder} \vspace{-0.1cm} Joints estimated by any CNN may have structured noise. For instance, in the case of occluded joints the error is higher due to their ambiguity and the lack of visual evidence. Visible joint predictions also have structured error, following a Gaussian distribution. Such structured or Gaussian noise can be detected and learnt explicitly, helping to further improve the initial estimation $\mathbf{JL}_{CNN}$ to be fed into the SMPLR module. Denoising autoencoder networks \cite{vincent2010stacked} are useful tools for such a scenario, being able to learn structured patterns of the input better than ordinary autoencoders. In this paper, we propose a DAE network as a bridge between the CNN backbone and SMPLR. With the proposed DAE we are able to denoise 3D joints and landmarks. This procedure can be critical for error-prone CNNs, such as shallow networks. Moreover, it can be detached from the CNN and trained independently given a large amount of mocap or synthetic SMPL-generated data. However, the DAE may not generalize well to the noisy test data if it is trained with noise-free ground truth data.
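A minimal sketch of one possible way to corrupt ground truth joints and landmarks for this purpose is given below (illustrative only: the noise magnitudes and the occlusion handling are our own assumptions, not the exact settings used in our experiments):
\begin{verbatim}
import numpy as np

def corrupt_JL(JL_gt, occluded, sigma_vis=10.0, max_occ=80.0):
    # JL_gt: (N, 3) ground truth joints+landmarks in mm.
    # occluded: (N,) boolean mask of occluded points.
    # Visible points get small Gaussian noise; occluded
    # points get larger uniform offsets, mimicking the
    # structured error of a CNN (magnitudes are assumed).
    gauss = np.random.normal(0.0, sigma_vis, JL_gt.shape)
    unif = np.random.uniform(-max_occ, max_occ, JL_gt.shape)
    m = occluded[:, None].astype(float)
    noisy = JL_gt + (1.0 - m) * gauss + m * unif
    # For the 2D-to-3D setting, the depth channel of the
    # input can additionally be zeroed out.
    return noisy
\end{verbatim}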
Therefore, it is important to train DAE with adversarial noise in this scenario for generalization purposes. In the section \ref{sec:dae} we show it is possible to train DAE with constrained uniform or Gaussian noise mimicking structured error without loss of generalization. It is also possible that only 2D joints are annotated in a given dataset. In sections \ref{sec:dae} and \ref{sec:sota} we also show DAE can lift 2D estimations to 3D. This could also be done by the use of mocap or synthetic 3D data projected to 2D and adversarial noise. The architecture of DAE is shown in Fig. \ref{fig:pipeline}. We apply two dropouts, one right after the input and the other after last encoder layer. By applying skip connections between encoder and decoder layers we force the network to learn noise structure in a fast and stable way. The input to DAE is the initial estimation of $\mathbf{JL}_{CNN}$ and the output is denoised $\mathbf{JL}_{DAE}$. We apply $\mathcal{L}_1$ loss (so called $L_{DAE}$) on $\mathbf{JL}_{DAE}$ to train this network. In order to force the awareness of adjacent joints correlations, we applied an $\mathcal{L}_1$ loss on the relative joints ($\mathbf{B}$ in Eq. \ref{eq:bone}) as well. However, we observed no significant impact on the results. \begin{table*}[!t]\renewcommand{\arraystretch}{0.9} \centering \begin{tabular}{|l?p{1.7cm}|p{0.9cm}|p{0.9cm}|p{0.9cm}|p{0.9cm}|p{0.9cm}|p{0.9cm}|p{0.9cm}|p{0.9cm}|p{0.9cm}|p{0.8cm}|p{0.8cm}|} \hline & \multicolumn{1}{c|}{Model} & \multicolumn{1}{c|}{Hd} & \multicolumn{1}{c|}{Ts} & \multicolumn{1}{c|}{Sr} & \multicolumn{1}{c|}{Ew} & \multicolumn{1}{c|}{Wt} & \multicolumn{1}{c|}{Hp} & \multicolumn{1}{c|}{Kn} & \multicolumn{1}{c|}{Ft} & \multicolumn{1}{c|}{Avg.} & \multicolumn{1}{c|}{Avg.} & \multicolumn{1}{c|}{Avg.} \\ & & & & & & & & & & Jt & Lm & Bn \\\hline \hline \parbox[t]{2mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{CNN}}} & ${Alexnet}$ & 100.0 & 41.6 & 99.0 & 179.9 & 246.9 & 34.6 & 138.5 & 217.3 & 133.0 & - & 31.9 \\ \cline{2-13} & ${SHN}_{nL}$ & 47.5 & 23.0 & 44.2 & 77.2 & 112.0 & 16.3 & 61.9 & 102.7 & 62.8 & - & 10.4 \\ \cline{2-13} & $SHN$ & 46.2 & 23.0 & 43.0 & 75.5 & 110.2 & 15.3 & 61.2 & 102.2 & 59.9 & 61.5 & 9.3 \\ \cline{2-13} & ${SHN}_{e2e}$ & 45.1 & 22.2 & 43.3 & 74.4 & 108.2 & 16.0 & 57.9 & 94.2 & 57.8 & 59.6 & \textbf{9.0} \\ \cline{2-13} & ${SHN}^{final}$ & \textbf{40.8} & \textbf{20.9} & \textbf{38.0} & \textbf{66.8} & \textbf{93.4} & \textbf{14.3} & \textbf{55.7} & \textbf{92.9} & \textbf{53.0} & \textbf{54.3} & 9.7 \\ \hline \hline \parbox[t]{2mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{DAE}}} & ${DAE}_{Alexnet}$ & 89.3 & 37.1 & 87.2 & 160.8 & 230.8 & 29.6 & 131.7 & 205.9 & 121.5 & - & 22.7 \\ \cline{2-13} & ${DAE}_{SHN}$ & \textbf{45.8} & \textbf{22.2} & \textbf{42.2} & \textbf{75.4} & \textbf{108.5} & \textbf{14.4} & \textbf{61.8} & \textbf{103.2} & \textbf{59.2} & \textbf{61.1} & \textbf{9.5} \\ \cline{2-13} & ${DAE}_{SHN}^{2d}$ & 51.4 & 23.1 & 46.2 & 83.7 & 121.7 & 15.1 & 66.6 & 115.4 & 65.2 & 66.1 & 11.8 \\ \hline \hline \parbox[t]{2mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{SMPLR}}} & $\Psi$ & 16.5 & 9.1 & 13.8 & 17.3 & 19.9 & 5.8 & 11.7 & 21.3 & 14.4 & 11.5 & 6.6 \\ \cline{2-13} & $\Omega$ & 55.1 & 22.3 & 48.8 & 85.8 & 127.1 & 13.4 & 68.6 & 122.3 & 67.8 & - & - \\ \cline{2-13} & $\Omega_{smpl}$ & \textbf{50.1} & \textbf{20.1} & \textbf{44.9} & \textbf{83.6} & \textbf{123.6} & \textbf{12.4} & \textbf{63.9} & \textbf{111.9} & \textbf{63.8} & - & - \\ \hline \hline 
\parbox[t]{2mm}{\multirow{2}{*}{\rotatebox[origin=c]{90}{ALL}}} & $ALL$ & 57.9 & \textbf{24.0} & 52.8 & 92.7 & 140.4 & \textbf{15.8} & 67.8 & 115.0 & 70.8 & 73.8 & 8.4 \\ \cline{2-13} & ${ALL}_{Proc}$ & \textbf{53.7} & 26.1 & \textbf{49.7} & \textbf{86.2} & \textbf{129.9} & 21.6 & \textbf{67.0} & \textbf{109.2} & \textbf{67.8} & \textbf{70.6} & \textbf{7.7} \\ \cline{2-13} \hline \end{tabular}\vspace{-0.2cm} \caption{Ablation study of model components. Error in mm. Hd:Head, Ts:Torso, Sr:Shoulder, Ew:Elbow, Wt:Wrist, Hp:Hip, Kn:Knee, Ft:Foot, Jt:Joints, Lm:Landmarks and Bn:Bone length, $\{.\}_{nL}$: training without landmarks, $\{.\}^{final}$: training with limb heatmaps and data augmentation, $\{.\}_{Alexnet}$: $Alexnet$ estimations as input, $\{.\}_{SHN}$: $SHN$ estimations as input, $\{.\}^{2d}$: input depth is set to 0, $\{.\}_{smpl}$: training with $L_R+L_{SMPL}$ loss, $\{.\}_{e2e}$: model after end-to-end training, $\{.\}_{Proc}$: results after Procrustes mapping. Best results are bolded.\vspace{-0.4cm} } \label{tab:ablation} \end{table*} \vspace{-0.1cm} \section{Experiments} \label{sec:experiments} \vspace{-0.1cm} This section describes the training details, datasets, and evaluation protocol. Then, we perform an ablation study of the different model components and compare our model against state-of-the-art alternatives. \vspace{-0.1cm} \subsection{Training details} \label{sec:setup} \vspace{-0.1cm} We build our backbone CNN based on the well-known stacked hourglass network (SHN) \cite{newell2016stacked} using 5 stacks. We extend the final layers of each stack to volumetric heatmaps, i.e., including an extra dimension to discretize the depth of the joints into 16 bins. The output of each stack is a tensor of size $64\times64\times16\times41$, where 64 is the size of the X-Y axes and 41 ($=23+18$) is the number of joints and landmarks. We train this network with a softmax cross-entropy loss. All models and experiments were implemented in TensorFlow and trained on a GTX 1080 Ti. We used the Adam optimizer in all the experiments, with a learning rate of $0.01$ for SHN and $0.001$ for the DAE, $\Omega$ and $\Psi$ networks. All networks are trained from scratch using a Xavier initializer. SHN converged in $150$-$250$ epochs with a batch size of $6$-$10$ samples. The rest of the networks in the ablation analysis were trained with a batch size of $256$. We used a keep probability of 0.8 for the dropout layers in DAE. \textbf{Preprocessing.} Images are cropped to a square. To do so, we assume the camera focal length and the object distance to the camera are available beforehand. First, the corners of a $2.5\times2.5$m grid, centered at the average joint location and perpendicular to the camera axis, are projected to the image plane and define the cropping area. Then, cropped images are scaled to the network input size ($256\times256$). This enforces proportionality between pixel and real-world sizes; larger people also appear bigger in image space. In those cases where the crops land outside the frame, a random image from the VOC Pascal dataset is used for padding. Following \cite{varol18_bodynet} we use ground truth focal length and object distance to the camera in all experiments, both at training and inference time. However, to study the impact of scale ambiguity on the 3D joint prediction, we also estimate the cropping area in the Human3.6M \cite{h36m_2014} dataset and show the results (see Section \ref{sec:sota}). \textbf{End-to-end training.} We applied incremental training. First, all networks (i.e. 
SHN, DAE, $\Omega$ and $\Psi$) were trained independently and then the whole network was fine-tuned end-to-end. In the ablation study we analyze the effect of different combinations of modules in the training. \vspace{-0.1cm} \subsection{Datasets} \vspace{-0.1cm} \textbf{UP-3D \cite{lassner2017unite}.} This dataset was designed by fitting a gender-neutral SMPL model into images from the LSP, LSP-extended and MPII-HumanPose datasets, keeping the samples with better estimates. This yields a total of $8515$ labeled images in the wild, split into $5703$ for training, $1423$ for validation and $1389$ for test. Every sample is provided with 2D joint annotations and SMPL parameters. \textbf{SURREAL \cite{varol17_surreal}.} A synthetic dataset of humans generated with the SMPL model, containing exact annotations. It is composed of $68$K videos containing SMPL-generated humans moving on top of random backgrounds. For sampling, we skip a frame if the average joint distance is lower than $5$cm w.r.t. the last sampled frame. This results in $2.8$M training, $27$K validation and $665$K test samples. \textbf{Human3.6M \cite{h36m_2014}.} Human3.6M is a large dataset offering high-precision 3D data thanks to MoCap sensors and calibrated cameras. It is composed of RGB videos of $11$ subjects performing $15$ actions twice while being recorded from $4$ different viewpoints. It contains around $3.6$ million frames. We sampled 1 of every 5 frames, ending with $312$K training and $110$K validation samples. Following state-of-the-art works, we use subjects S1, S5, S6, S7 and S8 for training, and S9 and S11 for testing. We generated ground truth SMPL parameters from the 3D data and body scans available in the dataset. Body scans allow an accurate estimation of shape parameters, computed only once per subject (shape does not change in short periods of time). Afterwards, we empirically defined a correspondence between SMPL joints and the available 3D MoCap data. This matching is not perfect for some joints, which are empirically weighted within $[0.25, 0.75]$ to provide good estimations. This matching allows the optimization of pose through an $\mathcal{L}_2$ loss. Finally, the correspondence of back joints is not accurate, so instead we use a loss that penalizes unrealistic back bends by correcting the back's pose parameters that land outside an empirically defined symmetrical range centered at 0. \vspace{-0.1cm} \subsection{Evaluation protocol} \vspace{-0.1cm} We evaluate the models by the mean per joint position error (MPJPE) in millimeters (mm). The same metric is extended to surface points to report the error of the generated body meshes. Following related works, we apply two protocols: \textbf{Protocol 1}, where the root joint is subtracted from all joints/points, and \textbf{Protocol 2}, where estimated joints are aligned with the ground truth through Procrustes analysis. We also report the mean Intersection over Union (IoU) on the body silhouette after mesh projection to the image plane. \vspace{-0.1cm} \subsection{Ablation study} \vspace{-0.1cm} In this section, we study different components of the proposed model on the SURREAL validation set. For this task we subsample the training dataset into 89K frames such that every pair of samples has at least one joint displaced 150mm w.r.t. each other, thus enforcing a uniform distribution over the whole dataset. We use the setup in Sec. \ref{sec:setup} to train each component. We explored several combination strategies during training to see the impact of each on the validation set. 
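For reference, the two evaluation protocols defined above amount to a root-centered and a Procrustes-aligned MPJPE. The following is a minimal NumPy sketch of both protocols; the function names and the inclusion of the scaling term are illustrative choices rather than our exact evaluation code:
\begin{verbatim}
import numpy as np

def mpjpe(pred, gt):
    # Mean per joint position error; inputs of shape (num_joints, 3).
    return np.linalg.norm(pred - gt, axis=-1).mean()

def protocol1(pred, gt, root=0):
    # Protocol 1: subtract the root joint before computing MPJPE.
    return mpjpe(pred - pred[root], gt - gt[root])

def protocol2(pred, gt):
    # Protocol 2: align pred to gt with a similarity Procrustes transform.
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    P, G = pred - mu_p, gt - mu_g
    U, S, Vt = np.linalg.svd(P.T @ G)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.array([1.0, 1.0, d])
    R = (U * D) @ Vt                       # proper rotation (no reflection)
    scale = (S * D).sum() / (P ** 2).sum()
    return mpjpe(scale * (P @ R) + mu_g, gt)
\end{verbatim}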
Except for end-to-end training, all building blocks are trained in isolation from the rest. We show the results and a description of each module in Tab. \ref{tab:ablation}. \vspace{-0.1cm} \subsubsection{CNN backbone} \vspace{-0.1cm} We first evaluate the performance of the CNN backbones. The results are shown in Tab. \ref{tab:ablation} under the \textbf{CNN} row. We first train a baseline $Alexnet$ to regress 3D joints (without landmarks) using an $L_2$ loss, the Adam optimizer, a learning rate of 0.01 and a batch size of 32. We chose $Alexnet$ for two reasons: 1) to compare the results with the proposed volumetric SHN, and 2) it is a shallow network prone to structured error, so that we can also study the impact of DAE. As expected, $Alexnet$ does not perform well at directly regressing 3D joints. We then train the volumetric SHN to predict $\mathbf{JL}$ and $\mathbf{J}$ (so-called ${SHN}$ and ${SHN}_{nL}$, respectively). As a result, landmarks help ${SHN}$ gain a 3mm improvement over ${SHN}_{nL}$. Next, we explain our contributions to the default volumetric SHN. \textbf{Final volumetric heatmap model.} To evaluate our method against the state of the art, we extend the default volumetric SHN to include limb heatmaps in the output and train the model using data augmentation. Limb heatmaps are $4$ additional volumetric heatmaps in the outputs of ${SHN}$. These heatmaps correspond to limb representations, created by composing segments from joint to joint (see Fig. \ref{fig:heatmap}). By fitting these heatmaps, we expect to encourage the model to learn spatial relationships among joints and improve generalization. Besides regular data augmentation (including random color noise, flipping and rotation), two extra methods are applied: random background and artificial occlusion. By using the binary masks for subjects provided at each frame, we remove the default background and replace it with a random image from the VOC Pascal dataset. Similarly, we place random objects from VOC Pascal at random locations of the image to artificially create occlusions \cite{sarandi2018augmentation}. In both cases we do not use images containing humans. We call this model $SHN^{final}$. The results displayed in Tab. \ref{tab:ablation} show how these simple strategies indeed enhance the performance of the default ${SHN}$ by about 7mm in average joint error. \vspace{-0.1cm} \subsubsection{Denoising autoencoder} \label{sec:dae} \vspace{-0.1cm} We also evaluate DAE trained with different inputs in several scenarios. Results are shown in Tab. \ref{tab:ablation} under the \textbf{DAE} row. \textbf{Could we train DAE independently of SHN?} Since DAE appears after SHN in the pipeline, it receives estimations from SHN. To answer this question, we train DAE with two kinds of inputs: i) 3D ground truth joints plus uniform noise with adapted bounds for each joint, and ii) 3D joints estimated by ${SHN}$ (so-called ${DAE}_{noise}$ and ${DAE}_{SHN}$). We then evaluate both models with ${SHN}$ estimations as input at test time. As a result, ${DAE}_{noise}$ has an average error of 61.7mm (not shown in the table), which is similar to ${DAE}_{SHN}$ (61.9mm). This shows the generalization ability of DAE. \textbf{Is DAE able to recover from structured noise?} Besides ${DAE}_{SHN}$, we also train and test DAE with $Alexnet$ estimations (called ${DAE}_{Alexnet}$). For $Alexnet$ predictions, DAE improves the error by 11mm, while for $SHN$ the improvement is 0.7mm. This shows the ability of DAE to learn structured error. 
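To make the preceding description more concrete, the following is a minimal TensorFlow sketch of a DAE of the kind used here: dropout right after the input and after the last encoder layer, a skip connection between encoder and decoder, an $\mathcal{L}_1$ training loss, and training inputs obtained by corrupting ground truth joints with bounded uniform noise. The layer widths and noise bounds are illustrative and do not correspond to our exact configuration:
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

def build_dae(n_points=41, dim=3, width=512, keep_prob=0.8):
    # Input: flattened noisy 3D joints and landmarks (JL_CNN).
    inp = layers.Input(shape=(n_points * dim,))
    x = layers.Dropout(1.0 - keep_prob)(inp)        # dropout after the input
    e1 = layers.Dense(width, activation='relu')(x)  # encoder
    e2 = layers.Dense(width // 2, activation='relu')(e1)
    b = layers.Dropout(1.0 - keep_prob)(e2)         # dropout after last encoder layer
    d1 = layers.Dense(width, activation='relu')(b)  # decoder
    d1 = layers.Add()([d1, e1])                     # encoder-decoder skip connection
    out = layers.Dense(n_points * dim)(d1)          # denoised JL_DAE
    return tf.keras.Model(inp, out)

def corrupt(jl_gt, bounds):
    # Mimic structured error with bounded uniform noise (per-joint bounds).
    noise = tf.random.uniform(tf.shape(jl_gt), -1.0, 1.0) * bounds
    return jl_gt + noise

dae = build_dae()
dae.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss='mae')  # L1 (L_DAE)
\end{verbatim}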
\textbf{Is DAE able to lift 2D joints to 3D?} To answer this question, we train and test DAE, following \cite{martinez2017baseline}, with ${SHN}$ estimations while the depth is set to 0 (called ${DAE}_{SHN}^{2d}$). In fact, we want to test how DAE performs in the absence of 3D ground truth data. As a result, the average error is slightly higher than 65mm. Although the average error is 3mm higher than that of $SHN$, it shows that DAE can lift 2D pose to 3D with successful results. We note that training ${DAE}_{SHN}^{2d}$ converges considerably more slowly than ${DAE}_{SHN}$. \begin{figure}[!t] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{images/heatmap.png} \end{subfigure}\vspace{-0.3cm} \caption{Sample volumetric heatmap of joints (middle) and limbs (right), each limb coded with a different color.} \label{fig:heatmap} \end{figure} \vspace{-0.1cm} \subsubsection{SMPLR} \vspace{-0.1cm} In this section, we evaluate different components of SMPLR using ${SHN}$ estimations as input at both training and test time. The results are shown in Tab. \ref{tab:ablation} under the \textbf{SMPLR} row. We first evaluate the impact of shape and pose estimations in isolation from each other within SMPLR. At test time, shape and pose estimations are fed into SMPL to evaluate the final joint error. \textbf{Shape estimation.} We train the $\Psi$ network with the $L_\beta$ loss. During testing, we feed the estimated $\beta$ along with the ground truth \textbf{$\mathcal{R}$} to SMPL. The results are shown as $\Psi$ in Tab. \ref{tab:ablation}. As one can see, shape estimation has a low impact on the final error (around 14mm avg. joint error). \textbf{Pose estimation.} We train the $\Omega$ network first with the $L_R$ loss and then fine-tune it with the $L_R+L_{SMPL}$ loss. The results are shown as $\Omega$ and $\Omega_{smpl}$ in Tab. \ref{tab:ablation}, respectively. During testing, we feed the estimated $\mathcal{R}$ along with the ground truth \textbf{$\beta$} to SMPL. As a result, we gain a 4mm improvement in pose estimation by applying the $L_R+L_{SMPL}$ loss. In general, the main source of error in SMPLR lies in the pose parameters rather than the shape. \textbf{Impact of landmarks.} Landmarks provide more visual evidence to the CNN when they are available in the dataset. Comparing $SHN$ to ${SHN}_{nL}$ in Tab. \ref{tab:ablation}, one can see that landmarks improve head, arm and hip estimations. We also train the $\Psi$ network with and without landmarks. Some qualitative results are shown in Fig.~\ref{fig:abc}(a). \textbf{Gender.} We evaluate the accuracy of gender estimation in $\Psi$ and achieve 89.5\% accuracy. Such high accuracy is critical for SMPL rendering, since a given vector of shape parameters is interpreted differently by each gender model, i.e., a correctly estimated shape combined with a wrong gender estimate produces a wrong mesh, introducing a high error in the SMPL output. \begin{figure*}[!ht] \centering \begin{subfigure}{.33\textwidth} \includegraphics[width=5.5cm,height=3.1cm]{images/Landmarks_shape.png} \label{fig:landmarks_shape} \caption{}\vspace{-0.4cm} \end{subfigure} \begin{subfigure}{.33\textwidth} \includegraphics[width=5.5cm,height=3.1cm]{images/E2E_training.png} \label{fig:e2e_train} \caption{}\vspace{-0.4cm} \end{subfigure} \begin{subfigure}{.33\textwidth} \includegraphics[width=5.5cm,height=3.1cm]{images/gender_procrustes.pdf} \label{fig:gender_procrustes} \caption{}\vspace{-0.4cm} \end{subfigure} \caption{Qualitative results of the ablation analysis. a) Visualization of the improvement on shape estimation due to landmarks. Left: image. 
Middle: estimation without landmarks. Right: estimation with landmarks. b) Prediction improvement due to end-to-end training. Left: image. Middle: prediction before end-to-end training. Right: prediction after end-to-end training. Green and red skeletons correspond to ground truth and predictions, respectively. c) Random samples with mistaken gender or viewpoint. Left: image. Middle: SMPL mesh. Right: SMPL mesh after Procrustes.}\label{fig:abc}\vspace{-0.1cm} \end{figure*} \setlength{\tabcolsep}{1pt} \begin{table}[!t]\centering \begin{tabular}{l|c|c|l|c|c|} \hline \multicolumn{1}{|l|}{} & \multicolumn{1}{l|}{Prot. 1} & \multicolumn{1}{l|}{Prot. 2} & & \multicolumn{1}{l|}{Prot. 1} & \multicolumn{1}{l|}{Prot. 2} \\ \hline \multicolumn{1}{|l|}{Bogo \cite{bogo2016keep}} & \multicolumn{1}{c|}{-} & 82.3 & Tome \cite{tome2017lifting} & 88.4 & 70.7 \\ \hline \multicolumn{1}{|l|}{Lassner \cite{lassner2017unite}} & \multicolumn{1}{c|}{-} & 80.7 & Pavlakos \cite{pavlakos2017coarse} & 71.9 & 51.9 \\ \hline \multicolumn{1}{|l|}{Tung \cite{tung2017self}${^*}$} & 98.4 & \multicolumn{1}{c|}{-} & Zhou \cite{zhou2017towards} & 64.9 & - \\ \hline \multicolumn{1}{|l|}{Pavlakos \cite{pavlakos2018learning}} & \multicolumn{1}{c|}{-} & 75.9 & Martinez \cite{martinez2017baseline} & 62.9 & 47.7 \\ \hline \multicolumn{1}{|l|}{Omran \cite{omran2018neural}} & \multicolumn{1}{c|}{-} & 59.9 & Sun \cite{sun2017compositional} & 59.1 & 48.3 \\ \hline \multicolumn{1}{|l|}{Kanazawa \cite{kanazawa2018end}} & \multicolumn{1}{c|}{87.9} & 56.8 & Sun \cite{sun2018integral}${^{**}}$ & 64.1 & - \\ \hline \hline \multicolumn{6}{|l|}{Errors when the cropping area is estimated} \\ \hline \multicolumn{1}{|l|}{$ALL$} & \multicolumn{1}{c|}{68.7} & 52.5 & $SHN^{final}$ & 62.2 & 50.6 \\ \hline \multicolumn{1}{|l|}{${ALL}_{Proc}$} & \multicolumn{1}{c|}{63.5} & 52.5 & $SHN_{e2e}^{final}$ & 57.4 & 46.8 \\ \hline \hline \multicolumn{6}{|l|}{Errors with the ground truth cropping area} \\ \hline \multicolumn{1}{|l|}{$ALL$} & \multicolumn{1}{c|}{67.9} & 52.2 & $SHN^{final}$ & 61.3 & 50.1 \\ \hline \multicolumn{1}{|l|}{${ALL}_{Proc}$} & \multicolumn{1}{c|}{\textbf{62.6}} & \textbf{52.2} & $SHN_{e2e}^{final}$ & \textbf{56.5} & \textbf{46.3} \\ \hline \end{tabular}\vspace{-0.2cm} \caption{MPJPE error in mm on Human3.6M for both protocols. Best results are bolded. Left columns: comparison with SMPL-based methods. Right columns: comparison with state-of-the-art 3D pose estimation. SMPLR outperforms all SMPL-based methods, and the simple proposed SHN updates show state-of-the-art results after end-to-end training without using any extra data. SMPL surface errors on this dataset are 88.2 and 81.3mm for $ALL$ and $ALL_{Proc}$, respectively. * Tung \etal \cite{tung2017self} use 32 joints. ** Sun \etal \cite{sun2018integral} report results with and without extra training data. For a fair comparison, we report the result obtained without extra data.}\vspace{-0.1cm} \label{tab:sota-h36m} \end{table} \begin{table}[!t]\centering \begin{tabular}{|l|c|c|} \hline & SMPL surface & 3D joints \\ \hline Tung \cite{tung2017self} & 74.5 & 64.4 \\ \hline Varol (independent) \cite{varol18_bodynet} & 74.5 & 46.1$^*$ \\ \hline Varol (multi-task) \cite{varol18_bodynet} & 65.8 & \textbf{40.8}$^{*}$ \\ \hline \hline $SHN^{final}$ & - & 42.8$^*$ \\ \hline $SHN_{e2e}^{final}$ & - & \textbf{40.8}$^{*}$ \\ \hline ${ALL}$ & 66.0 & 50.6 \\ \hline ${ALL}_{Proc}$ & \textbf{62.3} & 48.2 \\ \hline \end{tabular}\vspace{-0.2cm} \caption{Errors (mm) on the SURREAL dataset (protocol 1). 
Best results are bolded. Values marked with $^{*}$ are intermediate estimated 3D joints used to predict the SMPL surface.} \label{tab:sota-surreal} \end{table} \begin{table}[!t]\centering \begin{tabular}{|l|c|c|c|} \hline & SURREAL & Human3.6M & UP-3D \\ \hline Varol (multi-task) \cite{varol18_bodynet} & - & - & 0.73 \\ \hline ${ALL}_{Proc}$ & \textbf{0.75} & \textbf{0.71} & \textbf{0.77} \\ \hline \end{tabular}\vspace{-0.2cm} \caption{Silhouette IoU on three datasets.} \label{tab:iou} \end{table} \begin{table}[h!] \centering \begin{tabular}{|l|l|l|l|l|} \hline & 2D SHN & Vol. SHN & DAE & SMPLR \\ \hline Train & 6 / 2.8 & 6 / 16.8 & 256 / 0.054 & 256 / 2.5 \\ \hline Test & 6 / 0.35 & 6 / 3.1 & 256 / 0.011 & 256 / 0.87 \\ \hline Test & 1 / 0.32 & 1 / 0.36 & 1 / 0.007 & 1 / 0.32 \\ \hline \end{tabular}\vspace{-0.3cm}\caption{Processing time (batch size / time in sec.) of 1 step with 41 heatmaps/points. 2D SHN is the default stacked hourglass network for 2D joint estimation. Both 2D and vol. SHN have 5 stacks. Vol. SHN has 16 depth bins. SMPLR includes SMPL on top.} \label{tab:time}\vspace{-0.3cm} \end{table} \begin{figure*}[ht!] \centering \includegraphics[width=\textwidth]{images/qualitative2.pdf} \vspace{-0.6cm} \caption{Qualitative results. Top: SURREAL. Bottom: Human3.6M. The last sample of each row shows a failure case due to inaccurate pose estimation, produced mainly by occlusion and/or background confusion.} \label{fig:qualitative1}\vspace{-0.1cm} \end{figure*} \begin{figure*}[ht!] \centering \includegraphics[width=17.3cm,height=3.5cm]{images/UP3D_collage.png}\vspace{-0.3cm} \caption{UP-3D results. Based on estimated 2D joints (middle) we compute 3D joints and render the mesh (right).}\vspace{-0.4cm} \label{fig:qualitative2} \end{figure*} \vspace{-0.1cm} \subsection{End-to-end training}\label{sec:end-to-end ablation} \vspace{-0.1cm} Here, we describe how the end-to-end training was performed. Thanks to soft-argmax, the model is differentiable and trainable end-to-end. We first explore regular end-to-end training by stacking the already trained models $SHN$, $DAE_{SHN}$, $\Psi$ and $\Omega_{smpl}$, with SMPL on top. The loss is a summation of all intermediate losses. The SHN loss is several orders of magnitude smaller than the other losses; therefore, without proper balancing, the weights of ${SHN}$ vanish after a few training steps. We empirically set this balance to be around $1e$-$5$. Fine-tuning is performed with a low learning rate, empirically set to $1e$-$4$, to ensure learning stability. We observed that this model does not show improvements. Therefore, we propose the following procedure for end-to-end training. We first train $DAE$, $\Psi$ and $\Omega_{smpl}$ with ground truth 3D joints until they overfit on the training data. Once trained, they are frozen and appended to ${SHN}$. Fine-tuning is performed with the same loss balancing and learning rate as before. The network shows improvement after the first training epoch, and after an additional epoch it fully converges. To ensure that this improvement is the result of the proposed end-to-end training, we trained ${SHN}$ alone for more than $10$ additional epochs without observing any improvement. The results of $SHN$ after end-to-end training in Tab. \ref{tab:ablation} (called ${SHN}_{e2e}$) show a more than 2mm improvement. Some qualitative results are shown in Fig.~\ref{fig:abc}(b). \textbf{Recover SMPLR error.} We stack all trained models to report final SMPL predictions (row \textbf{ALL} in Tab. \ref{tab:ablation}). 
SMPLR naturally has a generalization error. A wrongly estimated gender is one source of error in SMPLR; a global rotation error embedded in $\Omega$ can also degrade the results. Fortunately, the mesh can be partially corrected by an affine transformation as a post-processing step. To do so, we apply a Procrustes mapping from the SMPLR output to its input $\mathbf{JL}$ and update the mesh accordingly. The results in Tab. \ref{tab:ablation} show a 4mm error recovery of $ALL_{Proc}$ vs. $ALL$. Some qualitative examples are shown in Fig.~\ref{fig:abc}(c). Note that for this post-processing step, no ground truth information is required, as we align the SMPLR output with the SHN output (both model predictions), knowing that SHN is more accurate. This is different from protocol 2, where final predictions are aligned with the ground truth. \vspace{-0.1cm} \subsection{State-of-the-art comparison}\label{sec:sota} \vspace{-0.1cm} \textbf{Human3.6M.} We compare our results to the state of the art on Human3.6M in Tab. \ref{tab:sota-h36m}, split in two sets: SMPL-based solutions vs. 3D pose recovery only. In the former, we outperform the state of the art for both evaluation protocols, especially in protocol 1, improving over \cite{kanazawa2018end} by more than 25mm. We note that we use the $\Psi$ network trained on the SURREAL dataset to estimate shape parameters, since Human3.6M contains just 5 subjects in the training set (therefore, only 5 shapes). The results in the second set show that our simple modifications to SHN achieve state-of-the-art results after end-to-end training (${SHN}_{e2e}^{final}$). Compared to \cite{pavlakos2017coarse}, a fixed small depth resolution of 16 bins in the volumetric heatmap works better than a coarse-to-fine setup. As we mentioned earlier, we also want to study the results when image cropping is not based on ground truth data. Therefore, we estimate the camera focal length and the object distance to the camera following \cite{sarandi2018augmentation} and use them to compute the cropping. The resulting error on the cropping area is less than 5px at each corner. To be robust against this error, we fine-tune ${SHN}^{final}$ with an additional random scaling augmentation. As a result, the 3D joint error increases by less than 1mm on average, which is marginal on this dataset. Fig. \ref{fig:qualitative1} shows some qualitative results. \textbf{SURREAL.} The competitors on this dataset are \cite{varol18_bodynet} and \cite{tung2017self}. Note that \cite{varol18_bodynet} relies on all ground truth data in a multi-task setup to generate a volumetric body shape, which limits applicability; moreover, the volumetric body shape is a coarse, non-parametric representation. The results in Tab. \ref{tab:sota-surreal} show that we outperform \cite{varol18_bodynet} by 3.5mm in SMPL surface error. Interestingly, our ${SHN}_{e2e}^{final}$ achieves the same 3D joint estimation error ($40.8$ mm) as \cite{varol18_bodynet} without performing multi-task learning. We show some qualitative results in Fig. \ref{fig:qualitative1}. \textbf{UP-3D.} We use this dataset to show results in an in-the-wild scenario, lifting 2D joints to 3D. To do so, we fine-tune ${SHN}^{final}_{e2e}$ pre-trained on SURREAL. Note that we do not train this model end-to-end on the UP-3D dataset. Since this dataset is of small size, we include a subset of 18K images from SURREAL for training. During training, the input to DAE is ground truth joints plus uniform noise, with depth set to 0. During testing, SHN estimations (with depth set to 0) are fed to DAE. 
The inputs to SMPLR are the DAE estimations. The SMPL errors in the test set before and after Procrustes mapping are around 91mm and 87mm, respectively, below the 100.5mm error reported in \cite{pavlakos2018learning}. Fig. \ref{fig:qualitative2} shows some qualitative results. \textbf{Silhouette IoU.} To check the quality of the rendered body, we also compute the silhouette IoU of the estimated mesh after projection to the image plane. The results can be seen in Tab. \ref{tab:iou}. We achieve a high IoU (more than 0.7) on all datasets without explicitly training the network for this task. \vspace{-0.1cm} \subsection{Time complexity} \vspace{-0.2cm} We show the processing time of each module of the proposed network in Tab. \ref{tab:time}. Experiments were conducted on a GTX 1080 Ti. We compare a default 2D SHN with our proposed volumetric SHN (both SHNs have 5 stacks). As expected, volumetric SHN is 6 times slower than 2D SHN in training for a batch size of 6. However, there is not much difference between them at inference time for a batch size of 1. We must note that 2D SHN fits in GPU memory with a batch size of 12, while volumetric SHN allows at most a batch size of 6. Our SMPLR implementation runs at 3 FPS for a batch size of 1 at inference time. \vspace{-0.1cm} \section{Conclusions} \label{conclu} \vspace{-0.2cm} We proposed a deep learning-based framework to recover 3D pose and shape from a still RGB image. Our model is composed of an SHN backbone followed by a DAE and a network capable of reversing SMPL from sparse data. Such a model, trainable end-to-end, is able to accurately reconstruct the human body mesh. We experimentally found that processing SHN output joints with DAE removes structured error. We have also shown that the SMPL model can be reversed and used to recover 3D pose and shape. Finally, we exploit SMPLR capabilities in the training of deep learning networks by backpropagating SMPL-related errors through the SHN. We evaluated our proposal on the SURREAL and Human3.6M datasets and improved over SMPL-based state-of-the-art alternatives by 3.5 and 25 mm, respectively. \section*{Acknowledgements} This work has been partially supported by the Spanish project TIN2016-74946-P (MINECO/FEDER, UE) and CERCA Programme / Generalitat de Catalunya. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPU used for this research. This work is partially supported by ICREA under the ICREA Academia programme. {\small \bibliographystyle{ieee}
\section{Introduction} The methods and approaches developed by the similarity searching community are used by a wide range of scientific fields, both within computer science and beyond. While some applications require provable accuracy guarantees, verifiable algorithms, or support for complex similarity functions, many of the similarity searching problems emerging in various areas of research do not have such strict formal constraints. Often, the representations of data or their similarity functions are inherently unreliable, so the idea of a perfect similarity search is unattainable regardless of the quality of the search method. Other times, the problems are too computationally complex, and instead of a perfect answer that would take too long to compute, a best-effort answer in a reasonable amount of time is preferable (perhaps to be refined later using more costly methods). Since the relevance of research on similarity searching is determined by its ability to solve current problems, it should strive to employ the widest possible array of problem-solving tools. There are multiple similarity search indexes, falling under the umbrella of approximate searching, capable of adjusting to such use cases by lowering their accuracy thresholds or returning partial results~\cite{vadicamo2021generalizing}. However, in recent years, an entirely new approach has begun to gain traction -- along with most other areas of computer science, the area of data retrieval has started to incorporate various machine learning approaches. These approaches have the potential to not only expand the available toolbox of problem-solving instruments, but also to offer a new way of thinking about the problems themselves. Notably, in 2018, Kraska et al.~\cite{Kraska2018} suggested that all conventional index structures could be viewed as models of data distributions, implying that machine and deep learning models could be used in their place. Even though the idea was originally proposed and tested on structured data, this reframing of the problem has already inspired similar work in the realm of unstructured datasets~\cite{LMI2021}\cite{hunemorder2021}\cite{tian2022learned}. To investigate the potential of these approaches further, we have chosen to examine the problem of 3D protein structure similarity search. This is an important open problem in biochemical and medical research, which can be viewed as an instance of similarity searching in non-vector datasets, because similarity between a pair of protein structures is usually calculated using a series of non-trivial, computationally expensive operations. Additionally, the amount of 3D protein structure data is currently exploding due to a recent breakthrough in the field~\cite{alphafold2}, and the demand for versatile similarity searching approaches is likely to grow in the near future. In this paper, we demonstrate that even a relatively complex interdisciplinary problem such as 3D protein structure retrieval can be tackled with fast and lightweight solutions. We present a simple pipeline where protein structures are first transformed into short vectors and used to train multiple partitioning and classification models -- these are linked together to form a learned index structure. 
The index then answers queries by returning several candidate leaf nodes and filtering the objects stored therein using basic vector (similarity) functions\footnote{The entire functionality is publicly available as a search engine prototype at \url{https://disa.fi.muni.cz/demos/lmi-app-protein/}}. This approach, while based on probabilities rather than mathematical guarantees, provides a reasonable quality of results at a fraction of the computational costs required by previous methods. In addition, its modularity allows us to change algorithms or their parameters for various trade-offs between complexity and accuracy, depending on the particular use case. \section{Related work} \textit{Learned indexing} was first introduced in~\cite{Kraska2018} with the core idea of learning a cumulative distribution function (CDF) to map a key to a record position. This proposition challenged the long-standing paradigm of building indexes solely with classic data structures such as B-trees and allowed for a reduction in searching costs compared to traditional methods. To allow for indexing of large data collections, the authors introduced the \textit{Recursive model index (RMI)} -- a hierarchical tree index structure of simple interconnected machine learning models, each learning the mapping on a subset of the dataset. RMI is, however, limited to sorted datasets of structured data, and cannot accommodate multi-dimensional data. The generalization of the learned indexing concept to spatial and low-dimen\-sional data was explored primarily by the \textit{Z-order model}~\cite{wang2019learned}, which makes use of the space-filling curve encoding to produce a 1D representation of the original data, and the \textit{ML-index}~\cite{davitkova2020ml}, which achieves the same with the use of \textit{iDistance}~\cite{jagadish2005idistance}. \textit{RSMI}~\cite{qi2020effectively} introduced a recursive partitioning strategy, building on the Z-order model. Furthermore, \textit{LISA}~\cite{li2020lisa} and \textit{Flood}~\cite{nathan2020learning} both partition the data into grid cells based on the data distribution, improving the range and kNN performance of prior approaches. These approaches were recently directly compared to (and surpassed by) \textit{LIMS}~\cite{tian2022learned}, which generalizes to metric spaces and establishes a new state-of-the-art performance on datasets of up to 64 dimensions. Indexing solutions for approximate, rather than precise, queries were explored by Chiu et al.~\cite{chiu2019learning}, who introduce a probability-based ranking model able to learn the neighborhood relationships within the data, which can be integrated with existing \textit{product quantization (PQ)} methods. This method was tested on billion-scale datasets with approximately 100 dimensions and has been shown to boost the performance of traditional PQ methods. Following the architectural design of RMI, we proposed the \textit{Learned metric index (LMI)}~\cite{LMI2021}, which can use a series of arbitrary machine learning models to solve the classification problem by learning a pre-defined partitioning scheme. This was later extended to a fully unsupervised (data-driven) version introduced in~\cite{slaninakova2021data}, which is utilized in this work. Finally, the ideas of creating an indexing solution with machine learning have found their use in many different domains, e.g., in trajectory similarity search~\cite{ramadhan2022x} or information retrieval, using a single transformer model~\cite{tay2022transformer}. 
\subsection*{Protein representation} To enable computational approaches to the problem of uncovering functional properties of proteins, a great amount of research attention has been directed to creating representative (numerical) embeddings of protein structures. There are two distinct categories of embeddings based on the input data -- those that operate with \textit{sequences} and those working with \textit{3D structures}. Although they serve a similar purpose, these two categories are completely distinct in terms of the technical approaches they employ. Specifically, sequence embeddings use techniques such as hidden Markov models~\cite{koski2001hidden} or various natural language processing techniques~\cite{asgari2015continuous} to derive meaning from protein sequences, treating them as encoded sequences of characters -- this approach is not applicable to our research. Embeddings representing protein 3D structures are generally less elaborate, since the information content is more robust to begin with. The most common encoding is a protein distance map, which produces a symmetric 2D matrix of pairwise distances between either atoms, groups of atoms or amino acid residues. This distance map can be transformed into a protein contact map, which is a binary image where each pixel indicates whether two residues are within a certain distance from one another or not. Contact maps have been used in conjunction with machine learning techniques for prediction of protein structure~\cite{cheng2007improved}; another studied problem is the reconstruction of 3D structure based on information in contact maps~\cite{vendruscolo1997recovery}. While these techniques are related to our own approach, we produce embeddings that are considerably more compact and reflective of our similarity searching use case, as will be shown in the following sections. \section{Data domain} We have chosen to test our approach on 3D protein structures for several reasons. First, while protein structure data is very widely used, and the study of this data is vital for almost every area of biochemical research, the issue of efficient search and comparison of protein structures is still unresolved to some extent, with many databases still relying on time-consuming brute-force linear search~\cite{mic2021similarity}. This data is also publicly available in a single database, called the Protein Data Bank (PDB), which is used by the majority of protein researchers and widely agreed upon as the standard. Even though this database is large enough to require efficient search methods, its size (\mbox{$\sim500,000$} structures as of 2022) still makes it possible to download the entire database and scan it exhaustively for ground truth answers if necessary. Just as importantly, it is clear that the issue of efficient search within this data will only become more crucial and challenging in the next few years. Firstly, the common dataset of empirically solved protein structures continues to grow exponentially~\cite{burley2021rcsb} -- its current contents constitute a mere fraction of all the protein structures in nature, and the complex laboratory procedures needed to obtain these structures are being refined every year. 
Secondly, the computational prediction of protein structures from their sequences has recently seen rapid improvement with the release of AlphaFold 2 in 2021~\cite{alphafold2} -- this has already resulted in hundreds of thousands of new reliable 3D structures that are of great interest to researchers, with tens of millions more to be added in the coming months~\cite{varadi2022alphafold}. Interestingly, the question of efficient protein structure search is not only important for storage and retrieval purposes -- for instance, researchers often discern the function of unknown proteins by comparing them to other, better-known proteins. Since the function of a protein is entirely dictated by its 3D structure, this is equivalent to searching a database for the most similar protein structures. However, the specific needs of this type of research can vary -- while some researchers are looking for extremely specific deviations among a group of very similar proteins (e.g., when studying mutations or conformational changes), others might be looking for much broader patterns of similarity between distant protein families. Structurally, a protein chain consists of a linear sequence of interconnected building blocks called amino acids. Within the right biochemical environment, this linear sequence folds in on itself to form a complex 3D structure. Protein structures are sometimes cited as a typical example of complex unstructured data, since they cannot be meaningfully ordered according to any objective criteria (any search method needs to rely on pairwise similarity), and the similarity of two protein structures often cannot be determined by a single vector operation. Typically, protein molecule data are stored using the three-dimensional coordinates of each of their atoms, with the protein randomly oriented in space. In order to compare a pair of protein structures, they first need to be properly spatially aligned in terms of translation and rotation, and a subset of atoms must be selected for alignment. This typically involves gradual optimization of a spatial distance metric (such as the root-mean-square deviation of all the atom coordinates), which is a computationally expensive process that cannot be directly mapped to a simple vector operation. One commonly-used measure of protein similarity is the $Q_{score}$~\cite{protein_distance}, which is calculated by dividing the number of aligned amino acids in both protein chains by the spatial deviation of this alignment and the total number of amino acids in both chains. Even though this measure is imperfect and not appropriate for all use cases, it is used in several prominent applications, including the PDB's own search engine. Note that two identical structures have a $Q_{score}$ of 1, and completely different structures have a Q-score of 0: as a result, the score needs to be inverted in order to be used as a distance metric ($d(x,y) = 1 - Q_{score}(x,y)$). In the following sections, we will refer to this inverted value as $Q_{distance}$. While there are a few types of protein structure embeddings that are invariant to the spatial alignment of the molecules (see the Related Work section), these were not developed for the purpose of fast data retrieval. As a result, they tend to be too detailed and cumbersome to serve as simple data descriptors. 
By contrast, the embedding presented in this work has been designed specifically to contain the optimal amount of information for efficient and accurate similarity searching, as will be shown in the following sections. \section{Fast searching in proteins} \begin{figure}[t] \centering \begin{center} \includegraphics[trim={.5cm .2cm 1.3cm .2cm}, clip, width=1\textwidth]{img/proteins-pipeline.png} \end{center} \vspace{-5mm} \caption{A diagram of the proposed solution.} \label{fig:proteins_pipeline} \vspace{-2mm} \end{figure} We present a pipeline (visualized in Figure~\ref{fig:proteins_pipeline}) consisting of three separate components: (i) a simple embedding technique for protein data in the PDB format, (ii) the use of a machine-learning-based index -- the Learned Metric Index (LMI) -- to locate a candidate set of similar protein structures, and (iii) fast filtering to produce the final query answer. The embedding we propose divides the protein sequence into $N$ consecutive sections -- the positions of the atoms within each section are averaged, and the section is subsequently treated as a single point in space. We then calculate the distances between each pair of these sections, creating a distance matrix. In this matrix, we prune all the values exceeding a cutoff, and normalize the rest. The matrix is symmetrical and all the diagonal values are 0. Half of this matrix (omitting the main diagonal) is then flattened into a single row of an $M \times (\frac{N^2-N}{2})$ matrix, where $M$ is the number of proteins in the entire dataset (see Figure~\ref{fig:proteins_pipeline}); a short code sketch of this computation is given below. This produces a very compact embedding for all the proteins, and the entire dataset can be represented by a file that is up to two orders of magnitude smaller than the original database -- see Table~\ref{tab:descriptor_file_sizes}. \begin{table}[t] \centering \begin{tabular}{ |c|r|r|r| } \hline Embedding Size & File Size & Index Build Time (256-64) & Index Build Time (128-128) \Tstrut\Bstrut \\ \hline $5\times5$ & $16$ MB & 246s & 184s \Tstrut\Bstrut \\ $10\times10$ & $51$ MB & 350s & 270s \Tstrut\Bstrut \\ $30\times30$ & $456$ MB & 927s & 655s \Tstrut\Bstrut \\ $50\times50$ & $1275$ MB & 2391s & 1814s \Tstrut\Bstrut \\ \hline \end{tabular} \caption{File size of the protein dataset (518,576 protein chains) stored using protein embeddings, and build times of two different LMI architectures. Note that the size of the original database is 8.2 GB.} \label{tab:descriptor_file_sizes} \vspace{-5mm} \end{table} To reduce the search space to a small number of candidate protein structures, we used the Learned Metric Index (LMI), a tree index structure where each internal node is a learned model trained on a sub-section of the data assigned to it by its parent node~\cite{LMI2021}. Specifically, we used the data-driven version of LMI, where the partitioning is determined in an unsupervised manner. We explored different architectural setups -- both in terms of the number of nodes at each level (index breadth) and the number of levels (index depth). As the learned models, we explored K-Means, Gaussian Mixture Models, and K-Means in combination with Logistic regression (see~\cite{slaninakova2021data} for details regarding the model setups). 
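For illustration, the embedding described above can be computed with a few lines of NumPy. The following is a simplified sketch rather than our exact implementation; the section count, the distance cutoff and all names are illustrative, and the pruning is realized here by clamping to the cutoff before normalization:
\begin{verbatim}
import numpy as np

def protein_embedding(atom_coords, n_sections=10, cutoff=20.0):
    # atom_coords: (num_atoms, 3) array of 3D atom coordinates of one chain.
    # Returns a vector of length n_sections * (n_sections - 1) / 2.

    # 1) Split the chain into consecutive sections and average each of them.
    sections = np.array_split(atom_coords, n_sections)
    centers = np.stack([s.mean(axis=0) for s in sections])

    # 2) Pairwise distances between section centers (symmetric matrix,
    #    zeros on the diagonal).
    diff = centers[:, None, :] - centers[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)

    # 3) Prune values exceeding the cutoff and normalize the rest to [0, 1].
    dist = np.minimum(dist, cutoff) / cutoff

    # 4) Keep only the upper triangle (without the diagonal) as a flat vector.
    iu = np.triu_indices(n_sections, k=1)
    return dist[iu]
\end{verbatim}
Stacking these vectors for all proteins in the dataset yields the $M \times (\frac{N^2-N}{2})$ matrix described above.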
For the sake of compactness, in the experimental evaluation we only present the results achieved with the best-performing setup -- a two-level LMI structure with an arity of 256 on level 1 and 64 on level 2 (i.e., 256 root descendants, each of them with 64 child nodes), with K-Means chosen as the partitioning algorithm. After the LMI is built, we search within it using a query protein structure and retrieve a candidate set of objects; the size of the candidate set is determined by a pre-selected stop condition (for instance, a stop condition of 1\% of the dataset corresponds to $\sim5,000$ candidate answers per query). In the final step, we filter the candidate set according to a particular distance function. In our experiments, we have examined filtering based on the Euclidean distance as well as the cosine distance of the vector embeddings, but the filtering step could theoretically be performed using any distance metric, or even the original $Q_{score}$ similarity of the full protein structures. The filtering step returns a subset of the candidate set based on the specified criteria (i.e., kNN or range). \section{Experimental evaluation} We evaluated our approach using range queries, with 512 randomly chosen protein chains from the dataset used as query objects. In order to compare our results against the ground truth, we needed to know the $Q_{distance}$ values (based on $Q_{score}$) between the 512 protein chains and all the other chains in the database -- these distances were kindly provided by the researchers behind~\cite{mic2021similarity}, where the same 512 objects were used as the pivots for their search engine. The objects were chosen uniformly at random with respect to protein chain length, which ensures that even very long proteins are represented among our queries (despite constituting a relatively small portion of the dataset). \begin{figure}[t] \begin{center} \includegraphics[width=0.32\textwidth]{img/recall_time_0.1.png} \includegraphics[width=0.32\textwidth]{img/recall_time_0.3.png} \includegraphics[width=0.32\textwidth]{img/recall_time_0.5.png} \end{center} \vspace{-5mm} \caption{Evaluation of range queries after LMI search and before filtering, using the K-Means method and a 256-64 LMI architecture: (left) Range=0.1, (middle) Range=0.3, (right) Range=0.5.} \label{fig:recall_265_64} \vspace{-2mm} \end{figure} We expected the performance of our method to deteriorate as the range of the queries expands, since a wider search range would require the method to correctly identify more objects which are less similar. To examine this effect, we have chosen three representative query ranges of 0.1, 0.3 and 0.5 -- in a real scenario, the range would be chosen by a domain expert based on the particular use case. As a rule of thumb, a range of 0.1 represents a high degree of similarity, while a range of 0.5 represents low (but still biologically significant) similarity; the biological relevance of answers drops sharply beyond this range~\cite{mic2021similarity}. First, we evaluated the performance of the LMI before the filtering step. The recall shown in Figure \ref{fig:recall_265_64} pertains to the entire candidate set of objects (i.e., how much of the ground truth answer is contained in the 1\%/5\%/10\% of the dataset returned by the LMI for further filtering)\footnote{Precision is not evaluated in this step -- at this point in the pipeline, it is very low and not particularly relevant.}. 
This figure presents us with two important pieces of information. Firstly, it is clear that LMI can reach very high recall even when trained on the smaller $10\times10$ embedding, which makes this embedding a natural choice for further evaluation, since it is effective while significantly reducing the memory and CPU costs of training compared with the larger embeddings. It can also be seen that, especially in the lower query ranges which are of most interest to us, the 1\% stop condition represents a sensible trade-off between recall and search time, returning relatively few candidate objects (\mbox{$\sim5,000$}) while minimizing the number of false negatives. \begin{figure*}[t] \centering \begin{minipage}[b]{0.35\textwidth} \includegraphics[width=1\textwidth]{img/bucket_size_distribution.png} \caption{Distribution of objects in the LMI leaf nodes. (A~completely balanced structure would hold \mbox{$\sim30$} objects in each bucket).} \label{fig:bucket_size_distribution} \end{minipage} \hspace{3mm} \begin{minipage}[b]{0.6\textwidth} \centering \includegraphics[trim={0 2cm 0 0cm},width=1\textwidth]{img/correlation_euclidean.png} \caption{Correlation between $Q_{distance}$ and the Euclidean distance used for filtering: blue points represent proteins returned by the LMI; orange points represent proteins which are present in the ground truth answer but were not returned by the LMI.} \label{fig:correlation} \end{minipage} \vspace{-5mm} \end{figure*} In addition to recall, we also need to investigate the distribution of objects in the index, to ensure that the occupancy of leaf nodes (i.e., data buckets) is not overly imbalanced -- an extremely imbalanced index would achieve high recall, but it would also be more likely to return overly large candidate sets, which would be detrimental to the filtering step. The size distribution of the buckets is shown in Figure~\ref{fig:bucket_size_distribution}, and it confirms that the distribution is not overly skewed towards large buckets, even when the embeddings are quite small, as is the case with $N=10$. However, the embeddings can only be condensed up to a point -- the $5\times5$ embedding (which transforms each protein structure into a vector of only 10 values) causes a large portion of the objects to concentrate in a single bucket, as the LMI can no longer distinguish among groups of objects. During the filtering step, recall naturally decreases (since the method occasionally filters out relevant answers), while precision should improve as the portion of relevant objects in the candidate set increases. This effect will be different depending on the distance metric used for filtering, as well as the dataset -- we show the effects for two distance metrics (Euclidean distance and cosine distance) in Figure~\ref{fig:filtering}. 
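The filtering step itself amounts to a simple vector operation over the candidate set. The following is a minimal sketch, assuming NumPy arrays and an empirical re-scaling factor between $Q_{distance}$ and the Euclidean distance of the embeddings (the re-scaling is discussed in the following paragraph); the function and variable names are illustrative:
\begin{verbatim}
import numpy as np

def filter_candidates(query_vec, cand_ids, cand_vecs, query_range, scale=1.5):
    # query_vec:   embedding of the query protein, shape (d,)
    # cand_ids:    identifiers of the candidate objects returned by the LMI
    # cand_vecs:   embeddings of the candidates, shape (len(cand_ids), d)
    # query_range: Q_distance range of the query (e.g., 0.1, 0.3 or 0.5)
    # scale:       empirical Q_distance-to-Euclidean re-scaling factor
    dists = np.linalg.norm(cand_vecs - query_vec, axis=1)
    cutoff = scale * query_range      # e.g., range 0.5 -> Euclidean cutoff 0.75
    return [cid for cid, d in zip(cand_ids, dists) if d <= cutoff]
\end{verbatim}
For kNN queries, the same distances can simply be sorted and only the $k$ best-ranking candidates kept.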
By analyzing the correlation between $Q_{distance}$ and each of the two distance metrics, we have determined that the Euclidean distance is the better filtering function for this dataset.\footnote{Note that the actual $Q_{distance}$ function scales differently from the Euclidean distance used to filter answers (see Figure~\ref{fig:correlation}) -- this requires simple re-scaling (e.g., to find range=0.5 queries, we set the Euclidean distance cutoff at 0.75).} \begin{figure}[t] \begin{center} \includegraphics[width=0.32\textwidth]{img/filtering_10x10_0.1_256_64.png} \includegraphics[width=0.32\textwidth]{img/filtering_10x10_0.3_256_64.png} \includegraphics[width=0.32\textwidth]{img/filtering_10x10_0.5_256_64.png} \end{center} \vspace{-5mm} \caption{Effects of filtering on the recall and precision of the candidate set of objects (relative to the ground truth answer).} \label{fig:filtering} \vspace{-2mm} \end{figure} Table~\ref{tab:pipeline_evaluation} shows the final results of the range queries with the best-performing configuration of parameters: embedding size $N=10$, the K-Means clustering model, the 256-64 LMI architecture, and filtering after the 1\% stop condition using the Euclidean distance metric. The results, especially in the lower query ranges, are very encouraging, although the filtering stage seems to introduce a surprisingly large number of false negatives by filtering out parts of the correct answer. It is likely that the filtering metric we have chosen was slightly too na\"ive, and the filtering step could have benefited from a different distance function, or at least a different weighting of the vectors before calculating their Euclidean distance. In the future, this presents a natural point of focus to improve our results even further. \begin{table}[t] \centering \begin{tabular}{ |l|r|r|r| } \hline & Range 0.1 & Range 0.3 & Range 0.5 \\ & \scriptsize{Mean \# of objects: 83} & \scriptsize{Mean \# of objects: 236} & \scriptsize{Mean \# of objects: 519} \\ \hline LMI Recall & 0.973 (1.000) & 0.895 (0.999) & 0.755 (0.867) \Tstrut\Bstrut \\ \hline Recall after filtering & 0.742 (0.878) & 0.649 (0.711) & 0.530 (0.637) \\ \hline F1 after filtering & 0.712 (0.855) & 0.669 (0.766) & 0.592 (0.673) \Tstrut\Bstrut \\ \hline \end{tabular} \caption{Overall evaluation of protein range queries: the average values, as well as the median values (in parentheses), are shown.} \label{tab:pipeline_evaluation} \vspace{-5mm} \end{table} Figure~\ref{fig:correlation} further illustrates the strengths and weaknesses of our approach by showing the relationship between the $Q_{distance}$ metric and the Euclidean distances of the vector embeddings. There is a clear correlation between these two metrics, making a strong case for the simpler one, and LMI is much more successful at finding objects that are more similar to the query (left side of the graph) than it is at finding less similar objects (right side of the graph). It should be restated that even though we use the $Q_{distance}$ metric as the ``ground truth'', it is merely a subjective similarity metric, and the relevance of results that are close to the edge of the query range is debatable (i.e., an object with a $Q_{distance}$ of 0.499 is not necessarily more relevant than an object with a $Q_{distance}$ of 0.501). Distance computations on long proteins pose a considerable challenge for similarity searching methods, typically requiring a disproportionate amount of computing time~\cite{mic2021similarity}. 
This does not apply to our embedding approach, which transforms all the proteins into fixed-length vectors using the same algorithm, regardless of their length. It would therefore intuitively follow that such an approach should achieve lower recall on the longer protein queries, since more information is lost in the embedding. However, in practice, that is not the case, as can be seen in Figure~\ref{fig:protein_lengths_recall}. \begin{figure}[t] \begin{center} \includegraphics[trim={.4cm 0 .5cm 0}, clip, width=.32\textwidth]{img/protein_lengths_0.1.png} \includegraphics[trim={.4cm 0 .5cm 0}, clip, width=.32\textwidth]{img/protein_lengths_0.3.png} \includegraphics[trim={.4cm 0 .5cm 0}, clip, width=.32\textwidth]{img/protein_lengths_0.5.png} \end{center} \vspace{-5mm} \caption{Distribution of recall for different protein chain lengths (left to right: the shortest 10\% of chains, each quartile from shortest to longest, and the longest 10\% of chains).} \label{fig:protein_lengths_recall} \vspace{-2mm} \end{figure} This is probably due to the simple fact that the distribution of chain lengths in protein databases is not uniform -- in fact, the long chains are much less numerous. As a result, losing a lot of information about the long chains is not a problem, since they are still relatively easy to find. This gives the fixed-length embedding approach a significant performance advantage -- in a dataset where there are relatively few long protein chains, it would be a waste of resources to calculate and store an excessive amount of data about them, the way many other methods do. As has been mentioned in the previous section, in addition to variable chain lengths, the proteins also have a variable number of neighbors in any given range. This can, again, call into question the reliability of the results based on recall, since recall is a relative measure, and the size of the error will be different based on the size of the actual answer. For instance, if all range 0.1 query answers only consisted of a single easy-to-find object, evaluating recall on its own would give us a biased idea of the searching efficiency. Figure \ref{fig:heatmap_total_query_results} shows how many neighbors each protein structure has according to the ground truth, and how many of these neighbors have been found using our approach -- while there are significantly more proteins with fewer than 10 neighbors in the lower query ranges, these are not the majority, and the errors are distributed evenly relative to query answer size. \begin{figure}[t] \begin{center} \includegraphics[trim={.4cm 0 .5cm 0}, clip, width=.32\textwidth]{img/heatmap_0.1.png} \includegraphics[trim={.4cm 0 .5cm 0}, clip, width=.32\textwidth]{img/heatmap_0.3.png} \includegraphics[trim={.4cm 0 .5cm 0}, clip, width=.32\textwidth]{img/heatmap_0.5.png} \end{center} \vspace{-5mm} \caption{The number of results returned by the LMI compared to the ground truth. The X-axis shows the absolute number of objects in a given query answer (according to the ground truth), while the Y-axis shows the portion of the objects that were found by LMI (i.e., the recall). Each blue point in the scatter plot represents one query, with the larger and more warmly-colored points representing multiple queries with identical graph coordinates.} \label{fig:heatmap_total_query_results} \vspace{-2mm} \end{figure} Finally, to provide broader context for the pipeline's performance, we have evaluated it against a more conventional approach. 
While there is no analogous method for searching 3D protein structures using range queries, there is a similar, recently published method for nearest neighbor retrieval in the same database of 3D protein structures~\cite{mic2021similarity}. This method uses a three-stage search engine which compares bit-strings in the Hamming space (``sketches'') to approximate the distance of protein chains. To allow for a relatively fair comparison of these two methods, we modified our search parameters to more closely match the search results presented in~\cite{mic2021similarity}. Specifically, since their similarity queries were mainly 30NN queries limited by the range 0.5, we have performed range queries in the same range (even though our method's performance is substandard in this range), and in the filtering step, we filtered out all objects beyond the 30 best-ranking answers. Since the sketch-based method was originally compared with the linear search of the PDB database, we present this benchmark as well -- naturally, this is always the slowest method by far, but it requires no index and always finds the exact answer. All of these results can be found in Table~\ref{tab:state_of_the_art} -- while our method clearly does not match the high accuracy of the sketch-based method in this experimental setup, it is faster by at least an order of magnitude, occasionally exceeding 4 orders of magnitude since it does not suffer from an extreme ``tail'' of worst-case search times caused by evaluation of long proteins. \begin{table}[t] \centering \begin{tabular}{ |l|r|r|r| } \hline & LMI + Filtering \tablefootnote{Before the filtering step, the candidate set returned by LMI has a median recall of 1.0 and an average recall of 0.87; however, since the candidate set is much larger than the final answer ($\sim5,000$ objects), the accuracy is insignificant and has thus been omitted from the table.} & Sketch-based method & PDB Engine \Tstrut\Bstrut\\ \hline Accuracy (median) & $0.660$ & $1.0$ & $1.0$ \Tstrut\Bstrut \\ Accuracy (mean) & $0.626$ & $0.937$ & $1.0$ \Tstrut\Bstrut \\ \hline Time (median) & $0.094$ s & $2.5$ s & $183$ s \Tstrut\Bstrut \\ Time (max) & $0.145$ s & $6109$ s & $14321$ s \Tstrut\Bstrut \\ \hline Index size\tablefootnote{The size of the internal structure of the index, excluding raw protein data.} & $87$ MB & $178$ MB \tablefootnote{495,085 * (320b (sketches) + 1024b (sketches) + 4*6*64b (PPP-codes)) + 512 * 16kB (pivots)} & N/A \Tstrut\Bstrut \\ \hline\end{tabular} \caption{The accuracy, search times, and memory requirements of 30NN protein search queries with a maximum distance radius of 0.5.} \label{tab:state_of_the_art} \vspace{-5mm} \end{table} \section{Summary and Conclusions} In an effort to investigate the potential of new data retrieval techniques in the field of similarity searching, we have developed and evaluated a novel approach to the problem of protein structure search, resulting in a short pipeline consisting of a concise vector embedding, learned indexing, and distance-based answer filtering. By successfully applying this approach to a well-established database of 3D protein structures, we have shown that even in a domain that may, at first, seem poorly suited to simple vector-based transformations, a surprising amount of information can be discerned by learned models. One advantage of our modular approach is that every part of the pipeline can be evaluated separately, allowing experts to identify the weakest spots and alter them based on the current use case and dataset.
The experiments presented in this paper serve as a good example -- after evaluating each part of our own pipeline, it is clear that we have chosen an overly simplistic filtering method for our data. In the future, we plan to investigate more sophisticated options for vector-based filtering, as well as a completely different approach to reducing the size of query answers. While it is difficult to compare our work with the state of the art (since there are no direct analogues to our method in the chosen data domain), we have made an effort to modify our method for the fairest possible comparison with a recent, more conventional similarity searching approach in the same domain. In this comparison, our solution, although coming up short in terms of accuracy, is consistently faster by multiple orders of magnitude, and maintains much lower memory requirements. Our work aims to make a case for less conventional solutions to similarity searching problems -- ones that rely on learned approaches, rather than traditional indexing models, to discern the natural distribution of data. This is by no means an argument for universal adoption of machine-learning-based techniques in all similarity searching applications. The approach in this paper is problem-specific, and has several potential drawbacks that need to be weighed against its strengths. Nevertheless, we provide some insight as to how the trend of machine-learned pattern recognition, which has already caused minor and major revolutions in most fields of computer science, could be applied in practical solutions to current similarity searching problems. Based on these results, we remain convinced that learned indexing approaches (such as the Learned Metric Index used in this work) will play an integral part in shaping the future of similarity searching. \bibliographystyle{splncs04}
\section{Introduction}\label{intro} Surface acoustic waves (SAW's) \cite{Farnell78,Mayer95} provide a useful tool for experimental studies of the two-dimensional electron gas (2DEG) in GaAs/AlGaAs-heterostructures. In particular, SAW's have been used in recent years in investigations of the integer \cite{Rampton92,Wixforth86,Wixforth89,Schenstrom88,Esslinger94} and the fractional \cite{Esslinger94,Willet90,Willet94} quantum Hall regimes. Due to the quantum Hall effect, the interaction of the SAW with the charge carriers can lead to strong oscillations in the attenuation and the velocity of the sound waves as a function of the applied magnetic field. Quantum oscillations have also been reported for the sound-induced currents and voltages \cite{Esslinger94,Shilton95}. Previous theoretical descriptions of these experiments have been based essentially on classical models for the propagation of SAW's \cite{McFee66,Tucker72}. According to these models, which were originally derived for systems in the absence of an applied magnetic field, the sound attenuation is expressed in terms of the electrical dc conductivity. This relation is derived under the assumptions that $ql \ll 1$ and \mbox{$\omega_q \tau \ll 1$} (`local regime'), where \mbox{\boldmath $q$} and $\omega_q$ are the wave vector and the frequency of the sound, respectively, and $l$ and $\tau$ are the mean free path and the scattering time of the conduction electrons, respectively. If a (classical) magnetic field is applied, the first condition has to be replaced by $qR_c \ll 1$, where $R_c$ is the cyclotron radius\cite{Tucker72}. It is much more difficult, however, to determine under which conditions the above-mentioned theories are valid when the 2DEG is subject to a quantizing magnetic field. In this case, the electron system is characterized by (at least) two more length scales, namely the magnetic length $l_B=\sqrt{c\hbar/eB}$ and the localization length \cite{Jansen94} $\xi$. While $ql_B \ll 1$ is always fulfilled under typical experimental conditions, the localization length can be much larger than the surface acoustic wavelength $2\pi/q$. A series of experiments \cite{Rampton92,Wixforth86,Wixforth89,Schenstrom88,Esslinger94,Willet90,Willet94} has shown a reasonable agreement with the predictions of the classical models in a wide range of frequencies $\omega_q$ and magnetic field strengths. On the other hand, some deviations have also been detected. For example, deviations of the SAW attenuation from the classically predicted behavior with increasing frequency have been reported\cite{Wixforth89}. These were attributed to nonlocal effects of the interaction between the SAW and the 2DEG which should occur when the sound wavelength becomes of the order of or smaller than a characteristic length scale of the electron gas. In the fractional quantum Hall regime, an anomaly in the absorption coefficient for filling factor $\bar{\nu}=1/2$ was found \cite{Willet90,Willet94}. This anomaly was discussed in the framework of the composite Fermion model of Ref.\ \onlinecite{Halperin93}. According to this approach, the electrons are replaced by composite Fermions moving in an effective magnetic field of zero average (at $\bar{\nu}=1/2$). Then, the sound absorption due to these particles is described by classical formulae, except that the dc conductivity is replaced by the wave-vector-dependent nonlocal conductivity which, however, represents a very important difference (see Ref.\ \onlinecite{Halperin93} for details).
In this paper we study the propagation of SAW's in the integer quantum Hall regime. The calculation of the SAW attenuation is carried out for filling factors near 1/2 and is based on a percolation approach to the electronic states in a very strong magnetic field. From this point of view we may anticipate a nonlocal behavior of the attenuation arising from the large characteristic length scales (e.~g., the size of a percolation cluster $\gg q^{-1}$) inherent in that framework. The effect of electron-electron interaction is taken into account by the screening of the electron-phonon coupling. The same problem has also been studied in Ref.\ \onlinecite{Aleiner94}. These authors calculated the nonlocal conductivity due to variable-range hopping between pairs of localized states. Then, in the spirit of the classical description of sound absorption, this conductivity is related to the SAW attenuation coefficient. A comparison with our results will be given in Sec.\ VI. The system considered is a 2DEG, subject to a very strong magnetic field $B$ and a smooth random potential $V$. The potential can be characterized by its amplitude $\Delta$ and its correlation length $\Lambda$. The amplitude also determines the width of the Landau levels. The correlation length is of the order of the spacer layer that separates the 2DEG from the dopant layers. Under the assumption that $l_B \ll \Lambda$ a quasiclassical description of the electron states can be applied. That is, one considers the drift motion of the guiding center of an electron on the equipotential lines (EL's) of $V$ separately from the rapid motion relative to it \cite{Jansen94}. The drift velocity is given by $v_D=c|\nabla V|/eB$. Using $|\nabla V| \simeq \Delta/\Lambda$, the drift velocity can be estimated to be $v_D \simeq l_B^2 \Delta / \hbar \Lambda$. Depending on the ratio between the correlation length $\Lambda$ and the sound wavelength, two regimes can be distinguished. For $q\Lambda \gg 1$, the electron-phonon interaction can be considered locally neglecting the global structure of the EL's \cite{Zhao93,Heinonen84}. [For single-phonon absorption and emission processes to occur, one has also to require that the sound velocity $v_s$ is smaller than the drift velocity. This is usually referred to as $\check{\mbox{\rm C}}$erenkov condition\cite{Iordansky96}.] This regime is valid for, e.\ g., thermal phonons\cite{Zhao93}. However, SAW's have a much larger wavelength, and hence $q\Lambda \ll 1$ is typically fulfilled. In this case, the local absorption and emission of phonons is exponentially small, and the EL as a whole has to be considered. It becomes important that the motion of the guiding center on an extended EL [with a length $\gg \Lambda$] resembles a random walk, with a diffusion coefficient $D=v_D \Lambda$. Since the ratio $v_D/v_s$, for real systems, is not very different from unity, one deals with the limit $\omega_q \gg Dq^2$ of this diffusion process. [To be precise\cite{Iordansky96}, the parameter $v_D/v_s$ has to lie in the range $q\Lambda \ll v_D/v_s \ll (q\Lambda)^{-3/4}$.] Indeed, for $B=10$T ($l_B=8$nm), $\Delta=1$meV, $\Lambda\approx 50$nm, $q=10^4 $cm${}^{-1}$, and $v_s\approx 3\times 10^5$cms${}^{-1}$ we find $q\Lambda= 0.05$ and $v_D\approx 0.7v_s$. It is this particular (diffusive) regime which will be addressed in this paper. In the same regime, the electron life-time and the energy relaxation time due to interaction with 3D bulk phonons have been calculated in Ref.~\onlinecite{Iordansky96}. 
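As an aside, the estimates above are easy to reproduce numerically. The following minimal sketch (Gaussian units; parameter values exactly as quoted in the text) evaluates $l_B$, $q\Lambda$, and $v_D/v_s$:

\begin{verbatim}
# Order-of-magnitude check of the estimates quoted in the text
# (Gaussian units): B = 10 T, Delta = 1 meV, Lambda = 50 nm,
# q = 1e4 cm^-1, v_s = 3e5 cm/s.
hbar = 1.055e-27           # erg s
c    = 3.0e10              # cm/s
e    = 4.80e-10            # esu

B     = 1.0e5              # gauss (10 T)
Delta = 1.602e-15          # 1 meV in erg
Lam   = 50e-7              # cm
q     = 1.0e4              # cm^-1
v_s   = 3.0e5              # cm/s

l_B = (hbar * c / (e * B)) ** 0.5      # magnetic length
v_D = l_B**2 * Delta / (hbar * Lam)    # drift velocity estimate
print(l_B * 1e7)                       # ~ 8 (nm)
print(q * Lam)                         # ~ 0.05
print(v_D / v_s)                       # ~ 0.7
\end{verbatim}

These values confirm that the parameters of typical experiments indeed fall into the diffusive regime described above.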
The quantum mechanical calculation of the attenuation coefficient (as well as of other quantities associated with SAW's) requires the knowledge of the Hamiltonians which describe the interaction of electrons with acoustic surface phonons. As far as we know, these interaction Hamiltonians have not yet been derived. Instead, many theoretical investigations have addressed the interaction of 2D electrons with 3D (bulk) or 2D phonon systems. The latter one, a single layer of vibrating atoms, represents merely a theoretical construction. Three-dimensional phonons do not provide an appropriate approach when the 2DEG is located near a free crystal surface \cite{Badalyan88}. This implies that it is by no means clear that the interaction of a SAW with the 2DEG is described well by the formulae which are valid in the case of bulk phonons. In fact, we find that the interaction vertices appearing in the general electron-phonon interaction Hamiltonian [see Eq.\ (\ref{hdefdet})] differ from those for 3D phonons not only by numerical constants but also in the phonon wave vector dependence and the relative phase between the deformation potential and the piezoelectric interactions. The paper is organized as follows. The interaction vertices are derived and discussed in the next section. In Sec.\ III, we describe the quasiclassical electron states of a 2DEG in a strong magnetic field and a random potential $V$. We show that the absorption of the SAW and the dielectric function depend crucially on the occupation and the properties of electron states which correspond to very long EL's. The structure of these EL's is deduced from 2D percolation theory. The matrix elements for transitions between different electron states are given in Sec.\ IV. The screening of the electron-phonon interaction due to the 2DEG is accounted for by a dielectric function $\varepsilon(\omega_q, q)$ which is calculated in Sec.\ V. Based on these results, the SAW attenuation coefficient is obtained in Sec.\ VI. Its dependence on the filling factor (or the Fermi energy), the SAW frequency, and the temperature is discussed. A short summary is given in Sec.\ VII. \section{Interaction Hamiltonians} \subsection{The displacement field} To simplify the calculations, we make the following assumptions. Since the SAW wavelength $2\pi /q$ is much longer than the lattice constant, the crystal can be approximated by a continuous medium. Its elastic properties are assumed to be isotropic. Furthermore, we disregard the fact that the GaAs-substrate is coated with layers which differ slightly in their elastic properties. The overall thickness of these layers \cite{Wixforth89} $d \simeq 100$nm is much smaller than the wavelength of sound. It has been shown \cite{Mayer95} that for $qd \ll 1$ the deviations of the wave propagation resulting from a thin overlayer coating a homogeneous substrate can be accounted for by a systematic expansion in this small parameter. In our case $qd \le 10^{-1}$, i.~e., these corrections are indeed negligible. Thus, we end up with the standard problem of sound waves which are propagated in an isotropic medium bounded by a plane \cite{Landau70,Mayer95}. (Effects resulting from the anisotropy of the lattice become important for $qd \approx 1$, see Ref.\ \onlinecite{Simon96}.) Let the surface be in the $x$-$y$-plane and the medium in the half space $z \ge 0$.
The longitudinal and transversal components of the displacement field $\mbox{\boldmath $u$}(\mbox{\boldmath ${r}$},t)$, $\mbox{\boldmath $r$}=(x,y,z)\equiv (\mbox{\boldmath $R$},z)$, obey the wave equations \begin{equation}\label{waveeq} \frac{\partial^2 \mbox{\boldmath $u$}_{l,t}}{\partial t^2} -c_{l,t}^2 \Delta \mbox{\boldmath $u$}_{l,t} =0 \, , \end{equation}% where $c_{l,t}$ are the corresponding sound velocities. By definition, {\rm curl}$\, \mbox{\boldmath $u$}_l=0$ and {\rm div}$\, \mbox{\boldmath $u$}_t=0$. Surface waves are composed of particular solutions of Eqs.\ (\ref{waveeq}) that decay exponentially with increasing distance from the surface. In addition, these solutions have to satisfy the boundary conditions at the free surface $z=0$, namely, the normal components of the stress tensor should vanish there. It turns out that these boundary conditions can only be fulfilled by a linear combination of $\mbox{\boldmath $u$}_l$ and $\mbox{\boldmath $u$}_t$, i.~e., pure longitudinal or transversal surface waves do not exist \cite{Landau70}. The full displacement field for a mode with a two-dimensional wave vector \mbox{\boldmath $q$} can be written as \begin{mathletters}\label{uq} \begin{equation}\label{uq1} \mbox{\boldmath $u$}_{\mbox{\boldmath $q$}} (\mbox{\boldmath $r$},t) = C_q e^{i(\mbox{\boldmath $qR$} -\omega_q t)} \mbox{\boldmath $v$}_{\mbox{\boldmath $q$}} (z) + {\rm c.c.} \, , \end{equation}% with \begin{equation}\label{uq2} \mbox{\boldmath $v$}_{\mbox{\boldmath $q$}} (z) = -i \hat{\mbox{\boldmath $q$}} (e^{-\kappa_l q z} - f \kappa_t e^{-\kappa_t qz}) + \hat{\mbox{\boldmath $z$}} (\kappa_l e^{-\kappa_l q z} -f e^{-\kappa_t qz}). \end{equation}% \end{mathletters}% That is, the displacement $\mbox{\boldmath $u$}_{\mbox{\boldmath $q$}}$ is polarized in the sagittal plane which is spanned by the propagation direction $\hat{\mbox{\boldmath $q$}}=\mbox{\boldmath $q$}/q$ and the surface normal $\hat{\mbox{\boldmath $z$}}$. The decay of the displacements into the interior of the medium is described by \begin{equation}\label{kappa} \kappa_l(\alpha) =\sqrt{1- \alpha \xi^2} \qquad {\rm and} \qquad \kappa_t (\alpha)= \sqrt{1- \xi^2} \, , \end{equation}% where $\alpha= c_t^2/c_l^2$ and $\xi$ is a root of an algebraic equation of sixth order containing the parameter $\alpha$ only (see p.~104 in Ref.~\onlinecite{Landau70}). $\xi$ enters the dispersion relation of the surface waves in the form \begin{equation}\label{omega} \omega_q = \xi c_t q \equiv v_s q \, , \end{equation} where $v_s$ is the SAW velocity. Finally, the factor $f$ is given by \begin{equation}\label{factor} f(\alpha)= \frac{1+\kappa_t^2}{2\kappa_t} = \sqrt{\frac{\kappa_l}{\kappa_t}} . \end{equation}% In order to quantize the displacement field (\ref{uq}), the normalization constant $C_q$ of each individual mode has first to be fixed in an appropriate way. That is, the energy associated with the mode $\mbox{\boldmath $u$}_{\mbox{\boldmath $q$}} (\mbox{\boldmath $r$},t)$ in the normalization volume has to coincide with the energy $\hbar \omega_q$ of the corresponding phonon. Since the wave is propagated freely along the surface, the energy is normalized with respect to a large but finite square of area $L^2$ in the $x$-$y$-plane. On the contrary, no such restriction is necessary with respect to the $z$-coordinate because $\mbox{\boldmath $u$}_{\mbox{\boldmath $q$}}$ decays exponentially with increasing distance from the surface. 
Thus, the normalization volume can be extended from $z=0$ to $z = \infty$ under the chosen surface area. Adding a kinetic energy term to the potential energy \cite{Landau70} associated with a displacement field $\mbox{\boldmath $u$}$, the total energy can be written as \begin{equation}\label{energy} E(\mbox{\boldmath $u$})= \frac{1}{2} \rho \int \! d^3 \mbox{\boldmath $r$} \, \left[ (\partial \mbox{\boldmath $u$} / \partial t)^2 + (c_l^2 -2 c_t^2 ) ({\rm div} \mbox{\boldmath $u$})^2 + 2 c_t^2 \sum\limits_{i,k} (u_{ik})^2 \right] , \end{equation}% where $\rho$ is the mass density of the medium and \begin{equation}\label{strain} u_{ik} =\frac{1}{2} \left( \frac{ \partial u_i}{\partial x_k} + \frac{ \partial u_k}{\partial x_i} \right) \qquad i,k=x,y,z \end{equation}% is the strain tensor. Inserting $\mbox{\boldmath $u$}_{\mbox{\boldmath $q$}}$, Eq.\ (\ref{uq}), into this formula and imposing the condition $E(\mbox{\boldmath $u$}_{\mbox{\boldmath $q$}}) = \hbar \omega_q$ determines the normalization as \begin{mathletters}\label{norm} \begin{equation}\label{norm1} C_q \equiv C = \frac{1}{L} \sqrt{\frac{\hbar}{\rho v_s a}} \, , \end{equation} with a numerical factor \begin{eqnarray}\label{norm2} a(\alpha)& = & f^3-2f+\frac{1}{\kappa_l}-\frac{\alpha^2 \xi^2}{\kappa_l} \\ && +\frac{1}{\xi^2} \left[ \frac{(1+\kappa_l^2)^2}{2\kappa_l} +\kappa_l(1+f^2) -2f(1+\kappa_l\kappa_t) \right] \, . \nonumber \end{eqnarray}% \end{mathletters}% Equations (\ref{norm}) show that the normalization leads merely to a constant prefactor, i.~e., in contrast to the case of bulk phonons, $C$ does not introduce a further dependence on the wave number $q$. We are now in a position to quantize the displacement field $\mbox{\boldmath $u$}_{\mbox{\boldmath $q$}}$ [Eqs.\ (\ref{uq})] of a SAW. According to the familiar rules, we define the phonon annihilation and creation operators $b_{\mbox{\boldmath $q$}}$ and $b_{\mbox{\boldmath $q$}}^\dagger$ and find for the complete wave field the expression \begin{equation}\label{uqcomplete} \mbox{\boldmath $u$}(\mbox{\boldmath $r$},t) = C \sum_{\mbox{\boldmath $q$}} \left[ b_{\mbox{\boldmath $q$}} e^{i(\mbox{\boldmath $qR$} -\omega_q t)} \mbox{\boldmath $v$}_{\mbox{\boldmath $q$}}(z) +h.c. \right] . \end{equation}% \subsection{Deformation potential interaction} The deformation potential is proportional to the change in volume, ${\rm div} \mbox{\boldmath $u$}$, which an infinitesimal volume element undergoes due to the wave \cite{Gantmakher87}. Introducing an electron-phonon interaction constant $\Xi$, the Hamiltonian of the deformation potential can be written as \begin{equation}\label{hdefgen} H_{DA} = \Xi {\rm div} \mbox{\boldmath $u$}(\mbox{\boldmath $r$},t) \, . \end{equation}% The spread of the transversal component of the electron wave function as well as the distance $d$ of the 2DEG from the surface are small compared to $q^{-1}$. Thus, in evaluating Eq.\ (\ref{hdefgen}), one can set all exponentials in $\mbox{\boldmath $v$}_{\mbox{\boldmath $q$}}(z)$, Eq.\ (\ref{uq2}), equal to 1. Conveniently, the electron-phonon interaction Hamiltonian can be written in the general form \begin{equation}\label{hdefdet} H = \frac{1}{L} \sum_{\mbox{\boldmath $q$}} \gamma_{\mbox{\boldmath $q$}} e^{i\mbox{\boldmath $q$}\mbox{\boldmath $R$}} b_{\mbox{\boldmath $q$}} + h.c. \, . 
\end{equation}% For a deformation potential interaction we derive from Eqs.\ (\ref{uqcomplete}) and (\ref{hdefgen}) the interaction vertex \begin{equation}\label{vertdp} \gamma_{\mbox{\boldmath $q$}}^{DA} = \sqrt{\frac{\hbar}{\rho v_s a}} \, \alpha \xi^2 \Xi q \, . \end{equation}% Following a notation introduced in Ref.\ \onlinecite{Gantmakher87}, the electron-phonon interaction constant $\Xi$ can be replaced by a nominal scattering time $\tau_{DA}$. This gives \begin{equation}\label{tauda} (\gamma_{\mbox{\boldmath $q$}}^{DA})^2 = a_{DA} \frac{\hbar^2 v_s q^2}{p_\circ^3 \tau_{DA}} \, , \end{equation}% where $\hbar p_\circ = (2m^* \hbar \omega_\circ)^{1/2}$ and $a_{DA}=2\pi\alpha \xi^2/a$. $\omega_\circ$ is the frequency of longitudinal optical phonons and $m^* $ is the effective mass of the electrons. \subsection{Piezoelectric interaction} Along with the deformation potential interaction, the piezoelectric electron-phonon interaction appears in crystals which lack a center of symmetry, cf.\ for example Ref.~\onlinecite{Gantmakher87}. In this case, an elastic wave leads to a polarization \mbox{\boldmath $P$} of the lattice, \begin{equation}\label{pj} P_j = \sum\limits_{k,l} \tilde{\beta}_{jkl} u_{kl} \, , \end{equation}% where $\tilde{\beta}_{jkl}$ is the tensor of the piezoelectric moduli. The corresponding interaction Hamiltonian follows from the electric potential $\varphi(\mbox{\boldmath $r$}, t)$ associated with the polarization and reads \begin{equation}\label{hpiezo} H_{PA} = e \varphi( \mbox{\boldmath $r$},t) \, . \end{equation}% The polarization and the electric potential are related to one another via Poisson's equation \begin{equation}\label{poisson} {\rm div } \mbox{\boldmath $D$} = \varepsilon_\circ {\rm div} (4 \pi \mbox{\boldmath $P$} -{\rm grad} \varphi) = 0 \, , \end{equation}% where $\mbox{\boldmath $D$}$ is the dielectric displacement and $\varepsilon_\circ \approx 12.8$ is the dielectric constant of GaAs. In the case of interest here, the general expression (\ref{pj}) is simplified because the GaAs-samples used in experiments are cubic crystals and a crystal cut is chosen [the (100) surface] where the surface is spanned by two lattice axes \cite{Wixforth89}. Then, the tensor $\tilde{\beta}_{jkl}$ has only components in which all three indices $j$, $k$, $l$ differ from each other and all components are equal to $\beta/8\pi$. Hence, Eq.\ (\ref{pj}) reduces to \begin{equation}\label{pola} P_x=(4\pi)^{-1}\beta u_{yz}, \quad P_y=(4\pi)^{-1}\beta u_{zx} , \quad P_z=(4\pi)^{-1}\beta u_{xy} \, . \end{equation}% Substituting the displacement field (\ref{uqcomplete}) into Eq.\ (\ref{strain}) for the strain tensor yields the polarization (\ref{pola}). Then the Poisson equation (\ref{poisson}) for $\varphi$ can be solved most easily by a Fourier transform in the $x$-$y$-plane, leading to \begin{equation}\label{poft} \left[ \frac{\partial^2}{\partial z^2} -q^2 \right] \varphi(z,t) = \beta C q_x q_y e^{-i\omega_q t} \left[ -3\kappa_l e^{-\kappa_l qz} + f(1+2 \kappa_t^2) e^{-\kappa_t qz} \right] + c.c. \, . \end{equation}% The solution of this inhomogeneous differential equation can be constructed in the usual way. 
Discarding the exponentially increasing term $e^{qz}$, one obtains that every mode with a wave vector \mbox{\boldmath $q$} is associated with an electric potential \begin{equation}\label{phiq} \varphi_{\mbox{\boldmath $q$}}(\mbox{\boldmath $r$},t) = \beta \xi^{-2} C \hat{q}_x \hat{q}_y e^{i(\mbox{\boldmath $q$}\mbox{\boldmath $R$} - \omega_q t)} \left\{ 3\kappa_l \alpha^{-1} e^{-\kappa_l q z} - f(1+2\kappa_t^2) e^{-\kappa_t q z} +c_1 e^{-qz} \right\} + c.c. , \end{equation}% where $\hat{q}_x = \mbox{\boldmath $q$}\hat{\mbox{\boldmath $x$}}/q$ and $\hat{q}_y = \mbox{\boldmath $q$}\hat{\mbox{\boldmath $y$}}/q$. We note that for the geometry under consideration the total number of decay lengths for the elastic displacements and the electric potential is three, cf.\ Ref.~\onlinecite{Mayer95} for comments on the general case. $c_1$ is a constant of integration which is determined by the boundary conditions at the surface $z=0$. In view of the experiments, we assume that the surface of the crystal is an `electrically free' boundary \cite{Farnell78} to vacuum. That is, the normal component of the dielectric displacement, Eq.\ (\ref{poisson}), and the parallel components of the electric field are continuous at the surface, \begin{mathletters}\label{bcond} \begin{equation}\label{bcond1} 4\pi\varepsilon_\circ P_z -\varepsilon_\circ \frac{\partial}{\partial z} \varphi|_{z=+0} = -\frac{\partial}{\partial z} \varphi|_{z=-0} \, , \end{equation}% \begin{equation}\label{bcond2} \frac{\partial}{\partial \mbox{\boldmath $R$}} \varphi|_{z=+0} = \frac{\partial}{\partial \mbox{\boldmath $R$}} \varphi|_{z=-0} \, . \end{equation}% \end{mathletters}% Note that the boundary conditions (\ref{bcond}) differ from the requirement $\varphi_{z=0} =0$ for a sample which is covered with a thin metallic film. An appropriate ansatz for the electric potential outside of the crystal ($z<0$) is $\varphi=c_2 e^{i(\mbox{\boldmath $q$}\mbox{\boldmath $R$}- \omega_q t)} e^{qz}$. Substituting this ansatz and the general solution (\ref{phiq}) for $z>0$ into the boundary conditions (\ref{bcond}) yields that the constant of integration is \begin{equation}\label{coninte} c_1 = \frac{1}{2\bar\varepsilon} \left[ -3 \kappa_l \alpha^{-1} (1+ \kappa_l \varepsilon_\circ ) + f(1+2 \kappa_t^2)(1+\kappa_t \varepsilon_\circ ) - \varepsilon_\circ \xi^2 (1-f\kappa_t) \right] , \end{equation}% where $\bar\varepsilon=(\varepsilon_\circ +1 )/2$ is the average of the dielectric constants of GaAs and the space above the sample surface (vacuum), respectively. For large values of $\varepsilon_\circ$, $\varepsilon_\circ \gg 1$, this result coincides with the one which follows from the approximate boundary condition $\frac{\partial}{\partial z} \varphi|_{z=-0} =0$, cf.\ Eq.\ (\ref{bcond1}). The electric potential (\ref{phiq}) associated with a single displacement mode is now completely determined. 
Assigning the amplitudes $b_{\mbox{\boldmath $q$}}$ and $b_{\mbox{\boldmath $q$}}^\dagger$ to the first and second term in Eq.\ (\ref{phiq}), respectively, summing over all wave vectors, and introducing the result into the Hamiltonian (\ref{hpiezo}), the piezoelectric vertex [see Eq.\ (\ref{hdefdet})] becomes \begin{equation}\label{vertpi} \gamma_{\mbox{\boldmath $q$}}^{PA} = \sqrt{\frac{\hbar}{\rho v_s a}} \, \beta e \xi^{-2} \frac{\varepsilon_\circ}{2\bar\varepsilon} \hat{q}_x\hat{q}_y \left[ 3 \kappa_l \alpha^{-1} (1-\kappa_l) -f(1+2 \kappa_t^2)(1-\kappa_t) -\xi^2 (1-f\kappa_t) \right] , \end{equation}% where we set $z=0$ in $\varphi_{\mbox{\boldmath $q$}}(\mbox{\boldmath $r$},t)$, Eq.\ (\ref{phiq}). Obviously, the strongest piezoelectric interaction occurs when the SAW is propagated along a diagonal direction ($\hat{q}_x\hat{q}_y = \pm \frac{1}{2}$). In the experiments just this piezoelectric active direction [$\mbox{\boldmath $q$} \| [011]$] is chosen. In terms of a nominal time $\tau_{PA}$ [cf.\ Eq.\ (\ref{tauda})] the interaction vertex reads \begin{equation}\label{taupa} (\gamma_{\mbox{\boldmath $q$}}^{PA})^2 = a_{PA} (\hat{q}_x\hat{q}_y)^2 \frac{ \hbar^2 v_s}{p_\circ \tau_{PA}} \, , \end{equation}% where all the numerical quantities are absorbed in the prefactor $a_{PA}$. \subsection{Discussion of the interaction vertices} Let us compare the results for the interaction vertices $\gamma_{\mbox{\boldmath $q$}}$ in the Hamiltonian (\ref{hdefdet}) with those valid for 3D bulk phonons (or a fictitious 2D phonon system). There are two significant differences. First, the interaction vertices for SAW's have a different dependence on the wave vector: $|\gamma_{SAW}|^2$ exhibits an additional factor $q$ compared to $|\gamma_{bulk}|^2$. This applies to both the deformation potential and the piezoelectric interaction. Consequently, the use of the SAW interaction vertices in calculations of various physical quantities can give rise to results which deviate from those which are based on the assumption of two- or three-dimensional phonon systems. Second, for surface phonons, the deformation potential and the piezoelectric interaction are {\it in} phase. This is in contrast to the case of bulk phonons where these vertices are {\it out} of phase, i.~e.\ they contribute additively to $|\gamma_{bulk}|^2= |\gamma_{bulk}^{DA} + \gamma_{bulk}^{PA}|^2 = |\gamma_{bulk}^{DA}|^2 + |\gamma_{bulk}^{PA}|^2$, see for example Ref.\ \onlinecite{Gantmakher87}. The absolute value squared of $\gamma_{\mbox{\boldmath $q$}}$ is the relevant quantity which determines the total electron-phonon interaction. Clearly, the `out of phase' or the `in phase' property is of importance only when the interaction vertices for the deformation potential and the piezoelectric interaction are of the same order of magnitude. This depends on the wavelength of the SAW because $\gamma^{PA}$, Eq.\ (\ref{vertpi}), does not depend on the magnitude of $q$ whereas $\gamma^{DA}$, Eq.\ (\ref{vertdp}), increases linearly with $q$. In the case of GaAs, the relative strength of these two interaction mechanisms is thus $\gamma^{DA}/\gamma^{PA} \approx q 10^{-7} {\rm cm} $ where we have used the numerical values given below. 
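This estimate is easily made quantitative. The following short sketch (it uses the standard sixth-order Rayleigh equation referred to in Sec.~II and anticipates the GaAs values collected in the next paragraph) determines the root $\xi$ and evaluates the ratio of the two couplings for a typical SAW wave number:

\begin{verbatim}
import numpy as np

# Rayleigh root xi = v_s/c_t for an isotropic half space with
# alpha = (c_t/c_l)^2; sixth-order equation in xi:
#   xi^6 - 8 xi^4 + 8(3 - 2 alpha) xi^2 - 16(1 - alpha) = 0.
alpha = 0.36
coeffs = [1, 0, -8, 0, 8 * (3 - 2 * alpha), 0, -16 * (1 - alpha)]
roots = np.roots(coeffs)
xi = min(r.real for r in roots if abs(r.imag) < 1e-9 and 0 < r.real < 1)
print(xi)   # ~ 0.91, i.e. v_s = xi * c_t

# Relative strength of the couplings, using the vertex prefactors
# quoted below: gamma_DA ~ 5.6e-17 * q eV cm^2, gamma_PA ~ 3.7e-10 eV cm.
q = 1.0e4                       # cm^-1, a typical SAW wave number
print(5.6e-17 * q / 3.7e-10)    # ~ 1.5e-3
\end{verbatim}

For $q \simeq 10^4\,$cm$^{-1}$ the ratio is of order $10^{-3}$, consistent with the estimate $\gamma^{DA}/\gamma^{PA} \approx q\, 10^{-7}\,$cm.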
Thus, for the range of wavelengths used in recent experiments on the attenuation of a SAW in GaAs-samples (see, for example, Refs.~\onlinecite{Rampton92,Wixforth86,Wixforth89,Schenstrom88,Willet90} and \onlinecite{Guillion91}), the deformation potential scattering can be neglected in comparison with the piezoelectric interaction, except for propagation along $\left\langle 100 \right\rangle$ directions. This result corroborates very well with the fact that the experimental findings could be explained in terms of the piezoelectric electron-phonon coupling alone \cite{Wixforth89}. For easy reference, we summarize the numerical values for the parameters appearing in the interaction vertices $\gamma_{\mbox{\boldmath $q$}}$. For GaAs, $c_l\approx 5\times 10^5$cm/s, $c_t \approx 3\times 10^5$cm/s and, hence, $\alpha=0.36$. The corresponding solution of the algebraic equation \cite{Landau70} for $\xi$ is $\xi\approx 0.9$. Substituting these values in Eq.\ (\ref{norm}) yields $a=1.4$. Using $\tau_{DA}=4$ps, $\tau_{PA}=8$ps, $\hbar \omega_\circ=421$K (this corresponds to $\Xi=7.4 $eV, $e\beta=2.4 \times 10^7$eV/cm), and $\rho = 5.3$g/cm${}^3$ (see Ref.\ \onlinecite{Gantmakher87}), we obtain $\gamma_{\mbox{\boldmath $q$}}^{DA} =5.6\times 10^{-17}q\,$eVcm${}^2$ and $\gamma_{\mbox{\boldmath $q$}}^{PA} = 3.7 \hat{q}_x\hat{q}_y 10^{-10}$eV cm. The above calculations are restricted, with respect to the piezoelectric interaction, to cubic crystals and a particular crystal cut. It is straightforward, however, to perform calculations for different crystals or surfaces along the same lines. \section{Electron states and percolation theory} Consider a 2DEG in a strong magnetic field $B$ perpendicular to the plane of the 2DEG and in a smooth potential $V(\mbox{\boldmath $R$})$ (see, for instance, the paper by Trugman\cite{Trugman83} and references therein). The potential $V(\mbox{\boldmath $R$})$ is assumed to vary slowly on the scale of the magnetic length $l_B=\sqrt{c\hbar/ eB}$. Electron-electron interactions are neglected. The wave function $\Psi(\mbox{\boldmath $R$})$ of a state with energy $\epsilon$ is appreciable only in the vicinity of an equipotential line (EL) of the potential $V(\mbox{\boldmath $R$})=\epsilon$. The width of the wave functions perpendicular to the EL is $l_B$. Explicitly, the electron states of the $n$-th Landau level (LL) can be approximated, in the limit $B \rightarrow \infty$, by \begin{equation}\label{Psi} \Psi(\mbox{\boldmath $R$}) \equiv \Psi(u,v) = [{\cal T}v_D(u,v)]^{-1/2} \chi_n(v) e^{i\varphi(u,v)} \, . \end{equation}% (We have omitted the part $\Psi(z)$ of the wave function which corresponds to the lowest occupied subband perpendicular to the plane of the 2DEG.) The orthogonal variables $u$ and $v$ parametrize the distances along and perpendicular to the EL, respectively. The function $\chi_n(v)$ is the $n$th harmonic oscillator function. Below, we shall restrict ourselves to the lowest Landau level, i.~e.\ $n=0$. In Eq.\ (\ref{Psi}), $\varphi(u,v)$ is a gauge-dependent phase, and ${\cal T}$ is the period associated with one revolution around the EL of an electron moving with the drift velocity $v_D$. That is, \begin{equation}\label{periodt} {\cal T} = \oint \! du \, \frac{1}{v_D(u,v)} \, , \qquad v_D = |\nabla V|l_B^2/\hbar \, . \end{equation}% For the wave function (\ref{Psi}) to be single-valued, $\varphi(u,v)$ has to change by an integral multiple of $2\pi$ around an EL. This condition leads to the quantization of the allowed constant-energy lines. 
In other words, only a discrete sequence of EL's corresponds to the electron eigenstates. Two adjacent eigenstates enclose an area $2\pi l_B^2$ and are, on the average, a distance $\Delta v \simeq l_B^2/{\cal L}$ apart, where ${\cal L}$ is the length of one of the EL's. An important quantity is the difference $\hbar \omega_{{\cal T}}$ of the corresponding eigenenergies, where the frequency $\omega_{{\cal T}}$ is determined by \begin{equation}\label{omtdef} \omega_{{\cal T}} = 2 \pi / {\cal T} \, . \end{equation}% The quasi-classical description of the electron states that we have outlined above is a valid approximation when\cite{Trugman83} \begin{equation}\label{cond} l_B/r_c \ll 1 \quad \mbox{\rm and } \quad m^* |\nabla V(\mbox{\boldmath $R$})| l_B^3/ \hbar^2 \ll 1 \, , \end{equation}% where $r_c$ is the local radius of curvature of the EL and $m^*$ is the effective electron mass. The first condition is related to the smoothness of the potential, while the second one prevents the mixing of different LL's. Additionally, one should keep in mind that quantum tunneling\cite{Fertig87} between classical EL's is important when $|\epsilon|$ is smaller than $\Delta (l_B/\Lambda)^2$. In what follows $V(\mbox{\boldmath $R$})$ is a smooth {\it random} potential. The potential is assumed to be Gaussian, with \begin{equation}\label{correlator} \left\langle V(\mbox{\boldmath $R$}) V(0) \right \rangle =\Delta^2\phi(R/\Lambda) \, , \end{equation}% where $\Delta=\sqrt{\left\langle V^2 \right \rangle}$ defines its amplitude and $\Lambda$ its correlation length. ($\Delta$ determines also the broadening of the LL.) The zero of energy is chosen such that $\left\langle V(\mbox{\boldmath $R$}) \right \rangle =0$, i.~e., the energy $\epsilon$ is measured from the center of the lowest LL. Using $\Delta$ and $\Lambda$, we can rewrite the conditions (\ref{cond}) in the form \begin{equation}\label{condr} l_B/\Lambda \ll 1 \quad \mbox{\rm and } \quad \frac{\Delta}{\hbar \omega_c} \, \frac{l_B}{\Lambda} \ll 1 \, , \end{equation}% where $\omega_c = eB/cm^*$ is the cyclotron frequency. These conditions are fulfilled, for example, for the following experimental values: $B=10$T ($l_B=8$nm, $\omega_c= 2\pi\times 4.2$THz), $\Delta=1$meV and $\Lambda= 50$nm. The separation between two consecutive LL's is much larger than their broadening, $\hbar\omega_c/\Delta \approx 60$. As discussed in Ref.\ \onlinecite{Iordansky96}, most EL's with $\epsilon$ in the tail of the LL, i.~e. $|\epsilon|\gg \Delta$, have diameters ${\cal D}$ which are small compared to $\Lambda$ and their length ${\cal L}$ is of the order of ${\cal D}$. When the energy approaches the center of the LL, the size of the EL's grows \cite{Mehr88}. In particular, for $|\epsilon|\simeq \Delta$, ${\cal L}\simeq{\cal D}\simeq\Lambda$ holds for most of the EL's. Such EL's will be denoted as `standard'. A further reduction of $|\epsilon|$ does not lead to an increase in the size of almost all EL's, i.~e.\ most of them remain standard ones. However, a minority of the EL's merges and forms large extended EL's with diameters ${\cal D}\gg\Lambda$. The structure of these extended EL's is described by percolation theory \cite{Stauffer92} when the energies $|\epsilon|$ are near the percolation threshold $\epsilon=0$ (or $|\epsilon|/\Delta \ll 1$). The subsequent calculations show that just this range of energies is of interest justifying the use of the percolation picture. 
An extended EL has a fractal structure which is reflected in the relation between its length and diameter \cite{Stauffer92} \begin{eqnarray} {\cal L}\simeq \Lambda ({\cal D}/\Lambda)^{2/\alpha} \, , \label{ldrela} \end{eqnarray} where $\alpha=8/7$ is the scaling exponent. (For the definition of the perimeter of discrete percolation clusters and the transition to continuum percolation see, e.~g., Refs.\ \onlinecite{Grossman86} and \onlinecite{Mehr88}.) An extended EL can be viewed as a self-avoiding random walk path with steps of length $\Lambda$. Indeed, $2/\alpha =7/4$ is close to the value 2 which applies to a simple random walk. The exponent is less than 2 due to the self-avoiding nature of the EL. The distribution of extended EL's of a given energy $\epsilon$ is described by one scale, the so-called critical diameter \begin{eqnarray} {\cal D}_{c}(\epsilon)\simeq \Lambda ({|\epsilon|/ \Delta})^{-\nu} \, , \label{dcrit} \end{eqnarray} which is considered to be the localization length in the semiclassical theory. The scaling index is $\nu=4/3$. EL's with diameters ${\cal D}\gg {\cal D}_{c}(\epsilon) $ are exponentially rare, while the probability to find an extended EL with a diameter $\Lambda \ll {\cal D}\ll {\cal D}_{c}(\epsilon) $ is proportional to ${\cal D}^{-\rho}$, where $\rho=3$. An electron that moves on an extended EL (${\cal L} \gg \Lambda$) experiences different regions of the random potential. During one revolution, the drift velocity $v_D(u,v)$ [see Eq.\ (\ref{periodt})] follows the varying slope of the potential $V(\mbox{\boldmath $R$})$ and takes on many different values. In other words, the motion on an extended EL corresponds to an averaging process with respect to $v_D(u,v)$. It is therefore reasonable to introduce an average drift velocity\cite{Iordansky96} $\bar v_D$, defined by ${\cal T}={\cal L}/\bar v_D$, that is assumed to be independent of the length of the EL under consideration. The dependence on the energy $\epsilon$ can be generally excluded since $V(\mbox{\boldmath $R$})$ and $\nabla V(\mbox{\boldmath $R$}) \sim v_D(\mbox{\boldmath $R$})$ are statistically independent for a Gaussian distribution \cite{Longuet57}. Consequently, the energy level spacing (\ref{omtdef}) associated with the extended EL's is a function of ${\cal L}$ alone and Eq.\ (\ref{omtdef}) can be written conveniently in the form \begin{equation}\label{omegat} \hbar \omega_{\cal T} ({\cal L }) = \hbar \Omega \frac{\Lambda }{{\cal L }} \, , \end{equation}% where \begin{equation}\label{Omega} \hbar\Omega= \hbar \frac{2\pi \bar v_D}{\Lambda} \simeq \frac{\Delta l_B^2}{\Lambda^2} \, . \end{equation}% The frequency $\Omega$ gives by order of magnitude the level spacing for standard EL's since it is associated with the revolution around an EL with ${\cal L} \simeq \Lambda$. The corresponding frequencies for the extended EL's are lower. The lowest frequencies belong to the longest EL's which have the critical length ${\cal L}_c$ corresponding to the critical diameter ${\cal D}_c$ (\ref{dcrit}). From Eqs.\ (\ref{ldrela}) and (\ref{dcrit}) \begin{equation}\label{areac} {\cal L }_c (\epsilon) \simeq \Lambda |\Delta/ \epsilon|^{2 \nu/\alpha} \, . \end{equation}% Below, we shall use the distribution of the EL's with respect to their lengths. Let $L^2f_{\epsilon}({\cal L})d{\cal L}$ be the number of EL's with energy $\epsilon$ and a length between ${\cal L}$ and ${\cal L}+d{\cal L}$. 
The normalization of this distribution can be found by equating the total average length of the EL's in an area of size $L^2$ to the result given in the literature (see Sec.\ III.A in Ref.\ \onlinecite{Longuet57} or Ref.\ \onlinecite{Isichenko92}) \begin{equation}\label{normfe} L^2 \int_{0}^{\infty}d{\cal L}{\cal L}f_{\epsilon}({\cal L})= \frac{L^2}{2 \Lambda} [- \phi''(0)]^{1/2} \exp[-{\epsilon^2\over 2\Delta^2}] \, , \end{equation} where $\phi$ is defined in Eq.\ (\ref{correlator}). While the distribution of the standard EL's is not really known, percolation theory gives the following ansatz \cite{Stauffer92} for the distribution of extended EL's \begin{equation}\label{feext} f_{\epsilon}({\cal L})d{\cal L} = C_\epsilon \left( \frac{{\cal L}}{\Lambda} \right)^{-[{\alpha\over 2}(\rho-1)+1]} G\left({{\cal L}\over{\cal L}_{c}(\epsilon)}\right)d{\cal L} \, , \qquad {\cal L}\gg \Lambda \, , \end{equation} where $C_\epsilon$ is the normalization constant and $G(\zeta)$ is a function which is exponentially small for $\zeta\gg 1$ and of order unity for $\zeta\ll 1$. Hence, $G$ yields a smooth cut-off of the distribution for ${\cal L} > {\cal L}_c$, where ${\cal L}_c$ is defined in Eq.\ (\ref{areac}). An additional, energy-independent cut-off appears in a finite sample. Here, the size $L$ of the system restricts the critical diameter ${\cal D}_c$ (\ref{dcrit}) to values such that ${\cal D}_c \lesssim L$. This translates into ${\cal L} \lesssim \Lambda (L/\Lambda)^{2/\alpha}$, using Eq.\ (\ref{ldrela}). Thus, in a finite system, the critical length ${\cal L}_c(\epsilon)$ in Eq.\ (\ref{feext}) should be replaced by min$\{ {\cal L}_c(\epsilon), \Lambda (L/\Lambda)^{2/\alpha}\}$. To find the normalization constant $C_\epsilon$ let us decompose the normalization integral (\ref{normfe}) into $\int_0^\Lambda + \int_\Lambda^\infty$ and estimate both terms of this decomposition. The second integral can be estimated from the distribution (\ref{feext}) of extended EL's. With the value ${\alpha\over 2}(\rho-1)+1=15/7$, this integral is determined by its lower limit $\Lambda$ and is of the order $(L \Lambda)^2$. Using a reasonable ansatz for the distribution of standard and short EL's (for example $f_{\epsilon}({\cal L})=const.$), the first integral is determined by its upper limit $\Lambda$ and is again of the order $(L \Lambda)^2$. Thus, the total normalization integral is also of this order. Up to a factor of order unity, the normalization constant $C_\epsilon$ is then given by $\Lambda^{-3}$. The numerical factor can be absorbed in $G$ leading to the following distribution function for extended EL's \begin{equation}\label{fel} f_{\epsilon}({\cal L})d{\cal L}= {1\over\Lambda^3} \left({{\cal L}\over \Lambda}\right) ^{-[{\alpha\over 2}(\rho-1)+1]} G\left({{\cal L}\over{\cal L}_{c}(\epsilon)}\right)d{\cal L} \, . \end{equation}% We note that the above estimates confirm that the majority of EL's belongs to the standard ones with ${\cal L} \simeq \Lambda$, since these EL's are relevant in the normalization integral. \section{Matrix elements} Emission and absorption of phonons are associated with electronic transitions with energy transfer $\hbar \omega_q$. We have seen in the previous section that the separation in energy between two consecutive EL's is given by $\hbar \omega_{\cal T}$ [Eq.\ (\ref{omtdef})]. Thus, {\it real} transitions are generally restricted to EL's for which $\omega_{\cal T} \le \omega_q$. 
For the parameters used above for conditions (\ref{condr}), the frequency $\Omega$, Eq.\ (\ref{Omega}), is about $2\pi \times 10$GHz, whereas the frequencies of the SAW's used in experiments vary typically in the range $\omega_q=2\pi\times 1$MHz $\div$ 1GHz. We therefore conclude that only extended EL's for which $\omega_{\cal T} = \Omega \Lambda/{\cal L} \ll \Omega$ [see Eq.\ (\ref{omegat})] contribute to the sound absorption. Thus, the matrix elements of the interaction Hamiltonian (\ref{hdefdet}) \begin{equation}\label{mael} {\cal M}_{if}^{\pm \mbox{\boldmath $q$}}= \frac{1}{L} \gamma_{\mbox{\boldmath $q$}} M_{if}^{\pm \mbox{\boldmath $q$}} \equiv\frac{1}{L} \gamma_{\mbox{\boldmath $q$}} \left\langle f | e^{\pm i \mbox{\boldmath $q$} \mbox{\boldmath $R$}} | i \right\rangle \, , \end{equation}% where $|\left. \! i\right\rangle$ and $|\left. \! f\right\rangle$ denote the initial and final wave functions of the form (\ref{Psi}), have to be calculated for extended trajectories. This calculation has been performed in Ref.\ \onlinecite{Iordansky96}. The matrix element, averaged over all trajectories with the same period ${\cal T}$ and the same energy $\epsilon$, reads \begin{equation}\label{me1} \left\langle |M_{if}^{\pm\mbox{\boldmath $q$}}|^2 \right\rangle_{\epsilon, {\cal T}} = c q^2 \Lambda^2 (\hbar\Omega)^{\alpha} \frac{\hbar\omega_{\cal T}}{|\epsilon_f - \epsilon_i|^{\alpha+1}} \quad \mbox{\rm for} \quad |\epsilon_f - \epsilon_i| \lesssim \hbar\Omega \, , \end{equation}% where $c$ is a numerical factor of order unity. The matrix element is valid under the assumptions $q\Lambda \ll v_D/v_s \ll (q\Lambda)^{-3/4}$, where the exponent $3/4$ follows from $(2-\alpha)/\alpha$ with $\alpha=8/7$, cf.\ Eq.\ (\ref{ldrela}). Clearly, these inequalities imply $q\Lambda \ll 1$. For $|\epsilon_f - \epsilon_i| \gg \hbar\Omega$, the matrix element $\left\langle |M_{if}^{\pm\mbox{\boldmath $q$}}|^2 \right\rangle_{\epsilon, {\cal T}}$ is exponentially small. This implies that transitions occur only within the lowest LL and that transitions to other LL's can be neglected ($\hbar\omega_c \gg \Delta \gg \hbar\Omega$). It is also assumed that the initial and final states are close to one another in real space: In order that $(\chi_0)_i$ and $(\chi_0)_f$ will overlap, the separation in real space, $\Delta v$, should satisfy $\Delta v \lesssim l_B$. The condition $|\epsilon_f - \epsilon_i| \lesssim \hbar\Omega$ is even more restrictive. This can be seen in the following way. As mentioned above, the mean distance in real space between two adjacent EL's is given by $l_B^2/{\cal L}$. Hence, the distance between the two states $i$ and $f$ is of order $(l_B^2/{\cal L}) |\epsilon_f - \epsilon_i|/\hbar \omega_{\cal T}$. The maximum of this expression is found for the largest allowed energy difference $|\epsilon_f - \epsilon_i| \simeq \hbar \Omega$. Using the definition of $\omega_{\cal T}$ in Eq.\ (\ref{omegat}) and the estimate for $\Omega$ in Eq.\ (\ref{Omega}), the corresponding maximum distance in real space is found to be $l_B^2/\Lambda \ll l_B$, cf.\ the inequalities (\ref{condr}). While the sound absorption is due to transitions between extended states, the calculation of the dielectric function (see the next section) necessitates also matrix elements between standard EL's. (Transitions between a standard EL and an extended EL are exponentially rare due to their large separation in space.) 
Since the majority of the EL's belongs to the standard ones, one might even expect that the standard EL's dominate the dielectric function. This is not the case, as is shown below. Noting that for typical phonon wave vectors (e.~g.\ $q\approx 10^4$cm${}^{-1}$), one has $q\Lambda \ll 1 $, the matrix element for standard EL's with ${\cal L} \simeq \Lambda$ can be approximated by \begin{equation}\label{meshort1} \left\langle f | e^{i \mbox{\boldmath $q$} \mbox{\boldmath $R$}} | i \right\rangle = e^{i \mbox{\boldmath $q$} \mbox{\boldmath $R$}_i} \left\langle f | e^{i \mbox{\boldmath $q$} (\mbox{\boldmath $R$}-\mbox{\boldmath $R$}_i)} | i \right\rangle \approx e^{i \mbox{\boldmath $q$} \mbox{\boldmath $R$}_i} i\mbox{\boldmath $q$} \left\langle f | \mbox{\boldmath $R$}-\mbox{\boldmath $R$}_i | i \right\rangle \, , \end{equation}% where the zero-order term in the expansion of the exponential function disappears due to the orthogonality of the two states $i$ and $f$. The vector $\mbox{\boldmath $R$}_i$ denotes some point in the vicinity of the $i$th EL. The matrix element on the right-hand-side of Eq.\ (\ref{meshort1}) is of order $\Lambda$. Hence, for transitions between two standard EL's \begin{equation}\label{meshort2} \left\langle |M_{if}^{\pm\mbox{\boldmath $q$}}|^2 \right\rangle_{\epsilon, {\cal T}} \approx q^2\Lambda^2 \, , \end{equation}% where it is understood that the EL's $i$ and $f$ are very close in real space and in energy; otherwise the matrix element is exponentially small. The first condition guarantees the overlap of the wave functions $\chi_0^{i(f)}$, see Eq.\ (\ref{Psi}). The second one is necessary to ensure that the integrand of the $u$-integration along the perimeter of the EL's is not a fast oscillating function. Result (\ref{meshort2}) agrees essentially with Eq.\ (\ref{me1}) replacing there the energy difference $|\epsilon_f-\epsilon_i|$ and the level spacing $\hbar \omega_{\cal T}$ by the value $\hbar \Omega$ appropriate for standard EL's. \section{The dielectric function} The matrix element (\ref{mael}) includes the screening of the electron-phonon interaction due to the lattice [Eq.\ (\ref{vertpi})]. The screening arising from the 2DEG can be accounted for by renormalizing the matrix element \begin{equation}\label{renor} |{\cal M}_{if}^{\pm \mbox{\boldmath $q$}}|^2 \rightarrow \frac{ |{\cal M}_{if}^{\pm \mbox{\boldmath $q$}}|^2 }{|\varepsilon(\omega_q, q)|^2} \, , \end{equation}% where $\varepsilon(\omega,q)$ is the dielectric function of the 2DEG. For a nearly half-filled LL, the dielectric function can be calculated assuming linear screening\cite{Wulf88}. That is, the change in the electron density resulting from a small applied potential is proportional to the strength of the perturbing potential. Indeed, one can estimate that for the SAW intensities used in experiments the electron density oscillates only weakly around its average value, see for example Ref.\ \onlinecite{Esslinger94}. The assumption of linear screening leads to the general expression \begin{equation}\label{dfdef} \varepsilon(\omega, q)= 1 + \frac{2 \pi e^2}{\bar\varepsilon q} \Pi(\omega, q) \, , \end{equation}% where \begin{equation}\label{dencor} \Pi(\omega, q)= \frac{1}{L^2} \sum\limits_{i \neq f} \frac{f(\epsilon_i) -f(\epsilon_f)}{\epsilon_f -\epsilon_i -\hbar \omega -i0} |M_{if}^{\mbox{\boldmath $q$}}|^2 \, , \end{equation}% and $\bar\varepsilon$ is defined in Eq.\ (\ref{coninte}). 
To evaluate $\Pi$ explicitly, we transform the sum $\sum_{i\neq f}$ in Eq.\ (\ref{dencor}) into $\sum_{i < f}$, where $i<f$ means $\epsilon_i <\epsilon_f$. Let us first focus on the case of zero temperature, i.~e.\ all levels below the Fermi energy $\epsilon_F$ are occupied, $f(\epsilon_i)=1$, whereas all levels above $\epsilon_F$ are empty, $f(\epsilon_f)=0$. Then \begin{equation}\label{pitzero} \Pi(\omega, q)=\frac{2}{ L^2} \sum\limits_{i<f} \frac{\epsilon_f-\epsilon_i} {(\epsilon_f -\epsilon_i)^2 -(\hbar \omega +i0)^2} |M_{if}^{\mbox{\boldmath $q$}} |^2 \, . \end{equation}% In order to yield an appreciable matrix element, the EL's corresponding to $\epsilon_i$ and $\epsilon_f$ must be close (in real space and in energy) to an EL with $\epsilon=\epsilon_{F}$. This `Fermi' EL (FEL) need not to be an electron state. We can therefore represent the summation over states in Eq.\ (\ref{pitzero}) as a sum over EL's near a certain FEL and then sum over all FEL's. In the first sum the states are distributed nearly equidistantly with an energy spacing $\hbar\omega_{{\cal T}}\approx const.$, cf.\ below. In the summation over FEL's we may first sum over the FEL's with the same period ${\cal T}$. Since these EL's are situated in different regions of the random potential, this summation is equivalent to an average of the matrix element over FEL's with the same period. Thus, the averaged matrix elements (\ref{me1}) and (\ref{meshort2}) for extended and standard EL's, respectively, can be substituted in Eq.\ (\ref{pitzero}). We begin with the contribution of the extended EL's to $\Pi$. As we shall see, this is the dominant contribution. It is easy to see that the energy spacing for the relevant states near a fixed FEL is given by the value of $\hbar \omega_{\cal T}$ at the Fermi energy. To this end, we have to calculate the change in $\omega_{\cal T}$ arising from a change in the energy of the EL by at most $\hbar\Omega$ [see Eq.\ (\ref{me1})]. Since the frequency $\omega_{\cal T}$ for the extended EL's is merely a function of ${\cal L}$, one has $\Delta \omega_{\cal T}/\omega_{\cal T} \simeq \Delta {\cal L}/{\cal L} \simeq \Delta {\cal A}/{\cal A}$, where ${\cal A}$ is the area enclosed by the EL. To get the second equality, we have used ${\cal L} \simeq \Lambda ({\cal A}/\Lambda^2 )^\lambda$, with\cite{Stauffer92} $\lambda=12/13$. The change in the enclosed area is given by $2\pi l_B^2 \Omega/\omega_{\cal T}$ and thus $\Delta \omega_{\cal T}/\omega_{\cal T} \simeq (l_B^2/\Lambda^2) (\Lambda/{\cal L})^{1/\lambda-1} \ll 1$. Consequently, the sum over EL's which are near a given FEL can be simplified by introducing an explicit representation for the energies \begin{equation}\label{energyeps} \epsilon_f - \epsilon_i = (m-n) \hbar\omega_{\cal T} \, . \end{equation}% The integers $m$ and $n$ are subject to the restrictions $|m-n| \lesssim \Omega/\omega_{\cal T} $ [see Eq.\ (\ref{me1})] and $m-n \neq 0$. Using the representation (\ref{energyeps}) and the matrix element (\ref{me1}), the double sum over states near one FEL in Eq.\ (\ref{pitzero}) can be reduced to a sum over $s=m-n$ and one obtains \begin{equation}\label{pzero1} \sum_{i<f} \frac {1}{(\epsilon_f-\epsilon_i)^2-(\hbar\omega+i0)^2} \frac {\hbar\omega_{{\cal T}}}{|\epsilon_f-\epsilon_i|^{\alpha}}= \left( \frac{x}{\hbar\omega} \right)^{\alpha+1} S(x) \end{equation}% where $x=\omega/\omega_{{\cal T}}$ and \begin{equation}\label{sums} S(x)\equiv \sum_{s=1}^{\infty} {1\over s^2-(x+i0)^2}{1\over s^{\alpha-1}} \, . 
\end{equation} We have replaced the upper limit in the sum by infinity, since the relevant $s$ are of order $x\ll \Omega/\omega_{{\cal T}} $ and the above mentioned restriction for $|m-n|$ can be neglected. In other words, the EL's which contribute significantly are separated in energy by $\hbar\omega$. For extended states, using Eq.\ (\ref{omegat}), $x=(\omega/\Omega)({\cal L}/\Lambda) $, and hence the contribution to $\Pi(\omega, q)$ from a FEL is a function of its length alone. As a result the total $\Pi(\omega, q)$ can be written as a sum over all lengths. Using the distribution function $f_\epsilon({\cal L})$ given in Eq.\ (\ref{fel}), we find \begin{eqnarray}\label{pzero2} \Pi(\omega, q; \epsilon_F)& = & 2c \, \frac{(q\Lambda)^2 \Omega^\alpha}{\hbar \omega^{\alpha+1}} \int d{\cal L} \, f_{\epsilon_F}({\cal L}) x^{\alpha+1}S(x) \nonumber\\ & = & 2c \, \frac{q^2}{\hbar \omega} \, H(y_F) \, , \end{eqnarray} where \begin{equation}\label{yfdef} y_F= \frac{\Omega \Lambda}{\omega {\cal L}_c(\epsilon_F)} = \left| \frac{\epsilon_F}{\epsilon_\omega} \right|^{2\nu/\alpha} \qquad {\rm and} \qquad \epsilon_\omega= \Delta \left( \frac{\omega}{\Omega} \right)^{\alpha/2\nu} \, . \end{equation}% In a finite system of size $L$, $y_F$ has to be replaced by the maximum of $y_F$ and $y_L\equiv(\Omega/\omega)(\Lambda/L)^{2/\alpha}$, see the discussion following Eq.\ (\ref{feext}). Using the explicit form for the distribution function $f$ with $\rho=3$ yields \begin{equation}\label{fh} H(y)=\int_{0}^{\infty}dxG(xy)S(x) \, . \end{equation} The quantity $\epsilon_\omega$ has a very intuitive interpretation: it is the energy at which the energy level spacing $\hbar\omega_{\cal T}({\cal L}_c(\epsilon))$ for an EL with the critical length is equal to the phonon energy $\hbar\omega$ of the SAW. In other words, $\epsilon_\omega$ determines the absorption threshold in the sense that real transitions occur only for $|\epsilon_F| \lesssim \epsilon_\omega$. This is reflected in the imaginary part of $\Pi$, calculated below. The real and imaginary parts of the function $H$ are given by \begin{mathletters}\label{hf} \begin{equation}\label{hfreal} {\rm Re} H(y)=\int_0^{\infty}dx \, G(xy) \sum_{s=1}^{\infty}\frac{1}{s^{\alpha-1}} \frac{P}{s^2-x^2} \, , \end{equation}% \begin{equation}\label{hfimag} {\rm Im} H(y)=\frac{\pi}{2}\sum_{s=1}^{\infty}\frac{1}{s^{\alpha}} G(sy) \, , \end{equation}% \end{mathletters}% where $P$ denotes the principal part of the integral. The behavior of $H(y)$ in an infinite system is shown in Fig.\ 1. The $y$-axis in Fig.\ 1 has been scaled in terms of $y^{3/7}$ corresponding to the dependence of $H$ on the Fermi energy, see Eq.\ (\ref{yfdef}). The limiting behaviors for large and small arguments are given by \begin{equation}\label{limith} \begin{array}{rclrclrcl} {\rm Re} H & \approx & \zeta(1+\alpha)/y =1.5/y, & \qquad {\rm Im}H & \approx & (\pi/2)G(y), & \qquad y& \gg &1\, , \\ {\rm Re} H & \simeq & y^{\alpha-1}, & \qquad {\rm Im}H & \simeq & 1-y^{\alpha-1}, & \qquad y &\ll &1\, , \\ {\rm Re} H & = & 0, & \qquad {\rm Im}H & = & (\pi/2)\zeta(\alpha)=11.9, & \qquad y& =&0 \, , \end{array} \end{equation} where $\zeta(x)=\sum_{s=1}^\infty s^{-x}$. In order to discuss the analytic expressions of $H(y)$ we note that the sum in Eq.\ (\ref{hfimag}) and the integral in Eq.\ (\ref{hfreal}) are truncated at $s$ or $x$ of order $1/y$, as implied by $G$, Eq.\ (\ref{feext}). The imaginary part (\ref{hfimag}) is therefore exponentially small $\sim G(y)$ for $y\gg 1$ and of order unity in the opposite case. 
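The numerical constants quoted in Eq.\ (\ref{limith}) are simply values of the Riemann zeta function and can be checked directly (a minimal sketch, using SciPy):

\begin{verbatim}
from math import pi
from scipy.special import zeta

alpha = 8 / 7
print(zeta(1 + alpha))        # zeta(15/7) ~ 1.5  (Re H ~ 1.5/y for y >> 1)
print(pi / 2 * zeta(alpha))   # (pi/2) zeta(8/7) ~ 11.9  (Im H at y = 0)
\end{verbatim}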
Thus, the sum over $s$ increases with decreasing $y$ and approaches its maximum as $y$ goes to zero. In a finite system, $y$ is restricted from below by $y_L$ imposing an upper limit $1/y_L$ on the sum. This leads to a smaller maximum value of Im$H$. As for the real part of $H(y)$, we are able to study the limiting cases. For both $y \gg 1$ and $y \ll 1$, Re$H(y)$ goes to zero according to a power law. In the intermediate region $y \lesssim 1$ but not $y \ll 1$, Re$H$ is slowly varying and of order unity. In a finite system, the real part approaches a small but finite value as $y$ goes to $y_L$. Up to this point, the evaluation of $\Pi$, Eq.\ (\ref{dencor}), has been performed for zero temperature, $T=0$. The calculation of $\Pi$ for finite temperatures such that $T \gg \hbar\omega$ can be done along the same lines. Therefore, we give only a brief description. The calculations for $T=0$ have shown that the real and the imaginary parts of $H$ reach a value of order unity when the Fermi energy becomes of order $\epsilon_\omega$. This energy range corresponds to the contribution of EL's with a length ${\cal L}_\omega \equiv {\cal L}_c(\epsilon_\omega) \simeq \Lambda (\Omega/\omega)$ and an energy level spacing of order $\hbar\omega$ to the transition processes. Therefore these extended EL's give the dominant contributions to $\Pi$, Eq.\ (\ref{dencor}). This remains true for finite temperatures. However, now the states $i$ and $f$ need not be in the immediate vicinity of the FEL's. Instead, we can fix some initial state $i$ and consider transitions to final states above, $\epsilon_f > \epsilon_i$, and below, $\epsilon_f < \epsilon_i$, the chosen one (essentially in a range $T$ around $\epsilon_F$). To do this, we use the representation (\ref{energyeps}) where $\omega_{\cal T}$ now refers to the energy $\epsilon_i$. Expanding the Fermi distribution $f(\epsilon_f)$ in Eq.\ (\ref{dencor}) around $\epsilon_i$ leads to $-\partial f(\epsilon_i)/\partial \epsilon_i$ (for the relevant transitions $|\epsilon_f-\epsilon_i|$ is much smaller than $T$). The sum over the energies $\epsilon_f$ is reduced to twice the expression (\ref{sums}). Hence \begin{equation}\label{tnz2} \Pi(\omega,q)= 2c\left( \frac{q\Lambda}{L} \right)^2 (\hbar \Omega)^\alpha \sum\limits_i \left(- \frac{\partial f}{\partial \epsilon_i} \right) (\hbar \omega_{\cal T})^{-\alpha} S\left(\frac{\omega+i0}{\omega_{\cal T}}\right) \, . \end{equation}% The sum over the initial states $i$ comprises a summation over all EL's with the same energy $\epsilon_i$ but with different lengths ${\cal L}$, and a summation over $\epsilon_i$. The first sum can be again replaced by an integral using the distribution function $f_\epsilon({\cal L})$, Eq.\ (\ref{fel}). Then, the order of summation and integration is inverted to obtain \begin{equation}\label{tnonzero} \sum\limits_{\epsilon_i} \int_0^\infty d{\cal L} \quad \rightarrow \quad \int_0^\infty d{\cal L} \sum\limits_{|\epsilon_i| \lesssim \epsilon_c({\cal L})} \quad \rightarrow \quad \int_0^\infty d{\cal L} \int_{-\infty}^{\infty} \frac{d\epsilon_i}{\hbar\omega_{\cal T}({\cal L})} \, . \end{equation}% The condition $|\epsilon_i| \lesssim \epsilon_c({\cal L})$ ensures that only those initial states which possess a critical length ${\cal L}_c$ equal to or larger than ${\cal L}_c(\epsilon_i)$ are included. Here $\epsilon_c({\cal L}) \simeq \Delta (\Lambda/{\cal L})^{\alpha/2\nu}$, see Eq.\ (\ref{areac}). 
However, the dominant contributions to $\sum_{\epsilon_i}$ follow from a particular group of EL's, rendering this condition unnecessary. Taking also into account that the relevant EL's have an energy level spacing which is small compared to the thermal energy $\hbar\omega/T \ll 1$, and that the number of states per energy interval is given by $(\hbar\omega_{\cal T})^{-1}$, the sum over $\epsilon_i$ can be replaced by the integral given on the right-hand-side of Eq.\ (\ref{tnonzero}). The limits of integration have been extended with negligible error. The resulting integral over ${\cal L}$ coincides with the right-hand-side of Eq.\ (\ref{pzero2}) except for the value of the energy: instead of the fixed Fermi energy $\epsilon_F$ there appears now the variable $\epsilon_i$. Thus, we finally arrive at \begin{equation}\label{pzerot} \Pi(\omega,q)= \int_{-\infty}^{\infty} d\epsilon \, \left(- \frac{\partial f}{\partial \epsilon} \right) \Pi(\omega,q; \epsilon) \, , \end{equation}% where we have dropped the index $i$ of $\epsilon_i$. This equation shows that a finite temperature leads to an average of the $T=0$ result over energies within an interval of order $T$ around the Fermi level. Since the function $\Pi(\omega,q; \epsilon)$ varies on the scale $\epsilon_\omega$, finite temperature effects are negligible if $T \ll \epsilon_\omega$. This is the condition for Eq.\ (\ref{pzero2}) to hold. For $T \gtrsim \epsilon_\omega$, the width of $\Pi(\omega,q)$ as function of the Fermi energy increases with temperature, i.~e.\ the behavior of $\Pi$ deviates substantially from the $T=0$ result. In the following we assume $T < \epsilon_\omega$. Substituting the $T=0$ result for $\Pi(\omega,q)$ [Eq.\ (\ref{pzero2}), which is based on transitions between extended EL's] in Eq.\ (\ref{dfdef}) yields for the dielectric function \begin{equation}\label{dfgen} \varepsilon(\omega,q)= 1+\frac{2\pi e^2}{\bar\varepsilon}\frac{q}{\hbar\omega} 2cH(y_F) \end{equation} and, for the renormalization of the matrix element in Eq.\ (\ref{renor}), \begin{equation}\label{dfspec} \varepsilon(\omega_q,q)= 1+\frac{e^2}{\bar\varepsilon \hbar v_s}\, 4\pi c H(y_F) \, . \end{equation} The contribution to the dielectric function resulting from standard EL's is derived below, see Eq.\ (\ref{chishort}). The comparison of that result with Eqs.\ (\ref{dfgen}) and (\ref{dfspec}) shows that the dielectric function is essentially given by the contribution due to extended EL's, whereas the influence of transitions between standard EL's can be neglected. We consider therefore Eq.\ (\ref{dfspec}) as the final result for $\varepsilon(\omega_q,q)$. The dielectric function (\ref{dfspec}) renormalizes the matrix element (\ref{renor}) via the expression $|\varepsilon(\omega_q, q)|^2$. The dependence of $|\varepsilon|^2$ on the Fermi energy is given by $|H|^2$, Eqs.\ (\ref{hf}). The latter is of order unity for $|\epsilon_F| \lesssim \epsilon_\omega$ and decreases for larger values of the Fermi energy according to the power law $(\epsilon_\omega/|\epsilon_F|)^{4\nu/\alpha}$, $4\nu/\alpha=14/3$. Thus, the magnitude of $\varepsilon$, Eq.\ (\ref{dfspec}), is determined by the large dimensionless parameter $e^2/\bar\varepsilon v_s \hbar$ ($\approx 110$ for GaAs). This is the ratio of the electrostatic energy of two electrons a distance $q^{-1}$ apart and the energy of a surface phonon, $(e^2 q/\bar\varepsilon)(\hbar \omega_q)^{-1}$. Let us now show that the contribution of the standard EL's to the dielectric function is negligible. 
We start afresh from the expression (\ref{pitzero}) for $\Pi$, substituting the matrix element (\ref{meshort2}) for transitions between standard EL's. The energy difference between two standard EL's is of order $\hbar\Omega$, Eq.\ (\ref{omegat}), i.~e.\ much larger than $\hbar\omega$. The latter can thus be neglected in comparison with $\epsilon_f -\epsilon_i$ in Eq.\ (\ref{pitzero}). Then the imaginary term $i0$ can be dropped, as no real transitions can occur. The sum over all final states $f$ leads merely to a factor of order one, since only a few EL's in the immediate vicinity of the initial EL contribute to the matrix element (\ref{meshort2}). The remaining sum over the initial states counts the standard EL's which are just below the Fermi level. The number of these EL's is essentially the number of all FEL's, because the number of very short (${\cal L} \ll \Lambda$) and very long (${\cal L} \gg \Lambda$) EL's is negligibly small for Fermi energies near the center of the LL. The required quantity may therefore be deduced from Eq.\ (\ref{normfe}) which states that the total length of all EL's is, up to a numerical factor, equal to $L^2/\Lambda=\Lambda (L^2/\Lambda^2)$. Since the mean length of all EL's is of order $\Lambda$, the number of FEL's in a system of size $L$ is of order $L^2/\Lambda^2$. Collecting these results, we obtain the contribution due to standard EL's \begin{equation}\label{chishort} \Pi(\omega, q) \simeq \frac{q^2}{\hbar \Omega} \qquad {\rm and} \qquad \varepsilon(\omega, q) -1 \simeq \frac{e^2 q}{\bar\varepsilon \hbar\Omega} \, . \end{equation}%
It can be shown that this estimate is valid independent of the ratio $\hbar\Omega/T$ as long as max$\{ \hbar \Omega, T \} \ll \Delta$. The comparison of Eq.\ (\ref{chishort}) with Eq.\ (\ref{dfgen}) shows that the contribution of the standard EL's to the dielectric function is smaller by a factor $\omega/\Omega$ than the term resulting from the extended EL's. It is instructive to consider briefly an alternative derivation of the dielectric function which reproduces correctly the order of magnitude $|\varepsilon| \simeq e^2/\bar\varepsilon v_s \hbar$. This derivation relies on the fact that the motion of an electron on a fractal trajectory can be considered as a self-avoiding random walk with single steps of length $\Lambda$. In fact, the relation between the diameter and the length of an extended EL is similar to what one would expect for a simple random walk, cf.\ Eq.\ (\ref{ldrela}). For the diffusive regime, the density correlator $\Pi$, Eq.\ (\ref{dencor}), is given by \begin{equation}\label{dencorr} \Pi(\omega, q) = g_F \frac{Dq^2}{-i\omega +Dq^2} \, , \end{equation}%
where $D$ is the diffusion constant and $g_F$ the density of states at the Fermi level. In our case, we can assume $D \simeq v_D \Lambda$. The density of states of the LL is given by $g_F \simeq (\Delta l_B^2)^{-1}$ for $\epsilon_F \ll \Delta$. Substituting these quantities into Eq.\ (\ref{dfdef}) yields \begin{equation}\label{rpafinal} \varepsilon(\omega_q, q) - 1 \simeq i \frac{e^2}{\bar\varepsilon v_s \hbar} \, , \end{equation}%
where the term $Dq^2$ has been neglected compared to $-i\omega_q$ in the denominator of the density correlator. Interestingly, this approach predicts an essentially imaginary result for $\varepsilon -1$ which agrees with the behavior of Eq.\ (\ref{dfspec}) for $|\epsilon_F| \rightarrow 0$, i.~e.\ in the case when some of the EL's become arbitrarily long.
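To make the behavior of $H(y)$ easy to reproduce, the following minimal numerical sketch evaluates Re$H$ and Im$H$ from Eqs.\ (\ref{hf}). It is not part of the original analysis: it assumes a simple exponential cut-off $G(z)=e^{-z}$ (the actual form is fixed by Eq.\ (\ref{feext})) and the illustrative value $\alpha=8/7$, chosen to be consistent with the quoted numbers $(\pi/2)\zeta(\alpha)\approx 11.9$ and $4\nu/\alpha=14/3$. The principal value in Eq.\ (\ref{hfreal}) is handled by subtracting $G(sy)$ from $G(xy)$, which is permissible because the principal value of $\int_0^\infty dx/(s^2-x^2)$ vanishes.

\begin{verbatim}
import numpy as np
from scipy.special import zeta

# Minimal numerical sketch of the screening function H(y), Eqs. (hf).
# Assumptions (not fixed by this excerpt): exponential cut-off G(z) = exp(-z)
# and alpha = 8/7, consistent with the quoted values (pi/2)*zeta(alpha) ~ 11.9
# and 4*nu/alpha = 14/3.
alpha = 8.0 / 7.0
G = lambda z: np.exp(-z)

def im_H(y, s_max=20000):
    """Im H(y) = (pi/2) sum_s s^(-alpha) G(s*y), Eq. (hfimag)."""
    s = np.arange(1, s_max + 1)
    return 0.5 * np.pi * np.sum(s ** (-alpha) * G(s * y))

def re_H(y, s_max=500, x_max=100.0, nx=10000):
    """Re H(y), Eq. (hfreal).  The principal value is handled by subtracting
    G(s*y) from G(x*y); this is allowed since PV int_0^inf dx/(s^2-x^2) = 0."""
    dx = x_max / nx
    x = (np.arange(nx) + 0.5) * dx          # midpoints never coincide with x = s
    total = 0.0
    for s in range(1, s_max + 1):
        total += s ** (1.0 - alpha) * np.sum((G(x * y) - G(s * y)) / (s**2 - x**2)) * dx
    return total

print("Im H(0) = (pi/2) zeta(alpha) =", 0.5 * np.pi * zeta(alpha))  # ~11.9 in the text
for y in (0.1, 0.5, 1.0, 3.0, 10.0):
    print(f"y = {y:5.2f}   Re H = {re_H(y):7.3f}   Im H = {im_H(y):7.3f}")
\end{verbatim}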
\section{Surface acoustic wave attenuation} The intensity of the SAW decreases with the distance $x$ as $\exp(-\Gamma x)$ due to the absorption of phonons by the 2DEG. The attenuation coefficient $\Gamma$ can be expressed in terms of the sound velocity $v_s$ [cf.\ Eq.\ (\ref{omega})] and the lifetime $\tau(\mbox{\boldmath $q$})$ as $\Gamma= (v_s \tau(\mbox{\boldmath $q$}))^{-1}$, where $\tau(\mbox{\boldmath $q$})$ is defined by the rate equation \begin{equation}\label{rate1} \dot{N}_{\mbox{\boldmath $q$}} = - \frac{1}{\tau(\mbox{\boldmath $q$})} N_{\mbox{\boldmath $q$}} \, . \end{equation}%
Here $N_{\mbox{\boldmath $q$}}$ is the phonon occupation number. The net change in $N_{\mbox{\boldmath $q$}}$ is given by \begin{eqnarray} \label{rate2} \dot{N}_{\mbox{\boldmath $q$}} & =& \frac{2 \pi}{\hbar |\varepsilon(\omega_q,q)|^2} \,\, \sum_{i \neq f} f(\epsilon_i)(1-f(\epsilon_f)) \nonumber \\ && \times [ |{\cal M}_{if}^{-\mbox{\boldmath $q$}}|^2 (N_{\mbox{\boldmath $q$}} +1) \delta(\epsilon_i - \epsilon_f -\hbar \omega_q ) - |{\cal M}_{if}^{+\mbox{\boldmath $q$}}|^2 N_{\mbox{\boldmath $q$}} \delta(\epsilon_i - \epsilon_f +\hbar \omega_q )] , \end{eqnarray}%
where $f(\epsilon)$ is the Fermi distribution function and ${\cal M}_{if}^{\mp \mbox{\boldmath $q$}}$ are the unscreened matrix elements (\ref{mael}) for emission or absorption of a phonon with wave vector $\mbox{\boldmath $q$}$. For a SAW induced by interdigital transducers, the phonon occupation number $N_{\mbox{\boldmath $q$}}$ is macroscopically large. The difference between $N_{\mbox{\boldmath $q$}}+1$ and $N_{\mbox{\boldmath $q$}}$ is therefore negligible. Combining Eqs.\ (\ref{rate1}) and (\ref{rate2}) yields \begin{equation}\label{rate3} \frac{1}{\tau(\mbox{\boldmath $q$})}= \frac{2 \pi}{\hbar |\varepsilon(\omega_q,q)|^2} \sum_{i \neq f} |{\cal M}_{if}^{\mbox{\boldmath $q$}}|^2 [f(\epsilon_i)-f(\epsilon_f)] \delta(\epsilon_i - \epsilon_f +\hbar \omega_q ) \, . \end{equation}%
Replacing the $\delta$-function by the imaginary part of $-\pi^{-1} [\epsilon_i - \epsilon_f +\hbar \omega_q +i0]^{-1}$, we find \begin{equation}\label{rate4} \frac{1}{\tau(\mbox{\boldmath $q$})}= \frac{2}{\hbar} \frac{|\gamma_{{\mbox{\boldmath $q$}}}|^2}{|\varepsilon(\omega_q,q)|^2} {\rm Im} \Pi(\omega_q,q)_{\omega_q>0} \, , \end{equation}%
where $\Pi$ is defined by Eq.\ (\ref{dencor}). Using the zero-temperature results for $\Pi$ and the dielectric function, Eqs.\ (\ref{pzero2}) and (\ref{dfspec}), respectively, as well as the relation between the lifetime $\tau(\mbox{\boldmath $q$})$ and the attenuation coefficient, we find \begin{equation}\label{gscr} \Gamma = \Gamma_q \Phi(y_F) \, , \qquad y_F=|\epsilon_F/\epsilon_\omega|^{2\nu/\alpha} \, , \end{equation}%
with \begin{equation}\label{gq} \Gamma_q=\frac{1}{4\pi^2c {\rm Im}H(0)} |\gamma_{\mbox{\boldmath $q$}}|^2 \frac{q \bar\varepsilon^2}{e^4} \, , \qquad {\rm and} \qquad \Phi(y)= {\rm Im}H(0) \frac{{\rm Im}H(y)}{|H(y)|^2} \, , \end{equation}%
where the term 1 in the expression (\ref{dfspec}) for $\varepsilon$ has been neglected. The function $\Phi(y)$ is defined such that $\Phi(0)=1$ [since Re$H(0)=0$, cf.\ Eqs.\ (\ref{limith})], i.~e., $\Gamma_q$ coincides with the attenuation coefficient at the center of the LL, $\Gamma(\epsilon_F=0)=\Gamma_q$. We begin with the discussion of the magnitude of $\Gamma_q$ and consider the function $\Phi(y)$ afterwards.
Substituting the expressions (\ref{tauda}) and (\ref{taupa}) for the interaction vertices in Eqs.\ (\ref{gq}) yields \begin{mathletters}\label{gmag} \begin{equation}\label{gmagda} (\Gamma_q)_{DA} = \frac{a_{DA} }{4\pi^2c {\rm Im}H(0)} \frac{q^3}{v_s p_\circ^3 \tau_{DA}} \left( \frac{\bar\varepsilon v_s \hbar}{e^2} \right)^2 = 2.6\times 10^{-21} q^3 \mbox{\rm cm}^2 \, , \end{equation}%
\begin{equation}\label{gmagpa} (\Gamma_q)_{PA} = \frac{a_{PA} }{4\pi^2c {\rm Im}H(0)} \frac{q}{v_s p_\circ \tau_{PA}} \left( \frac{\bar\varepsilon v_s \hbar}{e^2} \right)^2 = 8.0\times 10^{-6} q \, . \qquad \,\, {} \end{equation}%
\end{mathletters}%
That is, despite the fractal structure of the extended EL's on which these results are based, the frequency dependence of the magnitude of $\Gamma$ is simple and is not characterized by scaling exponents. Moreover, $\Gamma_q$ is independent of the magnetic field and the parameters $\Lambda$ and $\Delta$ of the random potential. The numerical values on the right-hand-side of Eqs.\ (\ref{gmag}) have been calculated replacing the parameters $p_\circ, \tau_{DA},$ etc.\ by their values given in Sec.\ II and assuming $c=1$. For a finite system, Im$H(0)$ [see Eqs.\ (\ref{limith})] has to be replaced by Im$H(y_L)<{\rm Im}H(0)$, see the discussion following Eq.\ (\ref{yfdef}), leading to an increase of the attenuation coefficient at the center of the LL. The function $\Phi(y_F)$ in Eq.\ (\ref{gscr}) accounts for the dependence of $\Gamma$ on the Fermi energy (or the filling factor $\bar\nu$ or the magnetic field $B$). This dependence is determined by the ratio of $|\epsilon_F|$ and the energy $\epsilon_\omega=\Delta (\omega_q/\Omega)^{\alpha/2\nu}$ as follows. The absorption of the SAW is very small when the Fermi energy is far from the center of the LL, $|\epsilon_F| \gg \epsilon_\omega$, i.~e.\ $y_F\gg 1$. A strong increase of $\Phi$ and, hence, of $\Gamma$ occurs when $|\epsilon_F|$ is reduced to $|\epsilon_F| \approx \epsilon_\omega$. In this region the number of occupied extended EL's with an energy level spacing $\hbar \omega_{\cal T}({\cal L}) \lesssim \hbar \omega_q$, Eq.\ (\ref{omegat}), undergoes the change from an exponentially small quantity to some power-law function of ${\cal L}^{-1}$. (Nevertheless, the number of these states is negligible compared to the majority of EL's with ${\cal L} \simeq \Lambda$.) A further rise of the absorption is prevented by the enhanced screening $\sim ({\rm Im}H)^2$ at $|\epsilon_F|\ll \epsilon_\omega$, which even reduces $\Gamma$ as the Fermi energy goes to zero. This results in a shallow double-peak structure with a cusp at the center of the LL. In fact, if we use the limiting forms of $H(y\ll 1)$, Eqs.\ (\ref{limith}), we find $d\Gamma/d\epsilon_F \simeq {\rm sgn}(\epsilon_F)/|\epsilon_F|^{2\nu(2-\alpha)/\alpha}$. We believe therefore that the double-peak structure of $\Gamma(\epsilon_F)$ is independent of the function $G(z)$ used to describe the exponential cut-off of the extended EL's, see the discussion following Eq.\ (\ref{feext}). The maxima of $\Gamma(\epsilon_F)$ are located near $\pm \epsilon_\omega$, see Fig.\ 2. It is clear, however, that such a particular feature as the cusp has to be considered with caution, for it is exclusively based on the quasiclassical model for the electron states. Quantum tunneling between critical trajectories may modify this result.
It is worth noting that the tunneling 'band' of width \cite{Fertig87} $\Omega$ [Eq.\ (\ref{Omega})] around $\epsilon=0$ [cf.\ the discussion after Eq.\ (\ref{cond})] is narrow compared to the characteristic energy range $\epsilon_\omega$; indeed, $\Omega/\epsilon_\omega \approx (l_B/\Lambda)\sqrt{\omega/\Delta} \ll 1$. One may speculate that the absorption coefficient is only weakly affected by quantum tunneling. Indeed, most of the EL's contributing to $\Gamma$ cannot be connected by low saddle points with transmission coefficients of order unity, and so quantum tunneling between them is insignificant. To simplify the estimates, we rewrite $\epsilon_\omega$ in the form \begin{equation}\label{epoest} \epsilon_\omega= 0.3 {\rm meV} \left( \frac{ \Delta}{{\rm 1meV}} \right)^{4/7} \left( \frac{\omega_q}{2\pi\times 1{\rm GHz}} \right)^{3/7} \left( \frac{\Lambda}{50{\rm nm}} \right)^{6/7} \left( \frac{ B}{5{\rm T}} \right)^{3/7} \, . \end{equation}%
As discussed in Sec.\ V, the zero-temperature result (\ref{pzero2}) for $\Pi$ remains valid for finite temperatures such that $T \ll \epsilon_\omega$. This is also the condition for Eq.\ (\ref{gscr}) to hold. Using the values for $\Delta$ and $\Lambda$ given above and $\omega_q=2\pi\times 100$MHz, we obtain $\epsilon_\omega \approx 1$K. For temperatures of the order of or larger than $\epsilon_\omega$, the attenuation coefficient is found from Eq.\ (\ref{rate4}) using expression (\ref{pzerot}) in the calculation of the dielectric function and Im$\Pi$. Two results for $\Gamma$ at finite temperatures are shown in Fig.\ 2. With increasing temperature the minimum of the attenuation near the center of the LL is reduced and the absorption peak becomes broader. The increasing magnitude of $\Gamma$ results from the significant broadening of the imaginary part of $\Pi$ and the reduced screening, see Eq.\ (\ref{rate4}). For $T \gtrsim \epsilon_\omega$, the magnitude of $\Gamma$ and the width of the absorption region are strongly temperature dependent. The dependence of $\Gamma$ on the SAW frequency $\omega_q$ is shown in Fig.\ 3. The curve is calculated for the low temperature regime $T \ll \epsilon_\omega$ and the piezoelectric electron-phonon interaction. The attenuation coefficient has been written in the form $\Gamma=\Gamma_F (\omega_q/\omega_F) \Phi(\omega_F/\omega_q)$, with $\Gamma_F=(\Gamma_q/q)(\omega_F/v_s)$, $\omega_F=\Omega |\epsilon_F/\Delta|^{2\nu/\alpha}$. (Note that $\Gamma_q/q$ does not depend on frequency.) The Fermi energy is fixed to some value $\epsilon_F \ll \Delta$ and defines the smallest level spacing $\hbar\omega_F$ for extended EL's. Consequently, $\omega_q \simeq \omega_F$ marks the onset of strong SAW attenuation. For high frequencies $\omega_q \gg \omega_F$, the attenuation coefficient increases linearly with frequency. This is just the behavior predicted by the classical description of sound absorption for piezoelectric interaction, see Eq.\ (\ref{gsig}) below. For $T\ll\epsilon_\omega$, the width of the absorption region is determined by $|\epsilon_F|\simeq \epsilon_\omega$. This is merely the condition for real transitions to occur and is neither associated with the interaction vertices $\gamma_{\mbox{\boldmath $q$}}$ nor is it a consequence of the matrix element (\ref{me1}), whose derivation is based on the particular assumption $\bar{\nu} \approx 1/2$. We believe therefore that this result applies to other half-integer filling factors $\bar{\nu}$ as well.
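As a quick numerical illustration of Eq.\ (\ref{epoest}), the short sketch below evaluates $\epsilon_\omega$ and expresses it as a temperature. The default parameter values are simply the reference values appearing in Eq.\ (\ref{epoest}); they are placeholders and should be replaced by the actual sample parameters of Sec.\ II.

\begin{verbatim}
# Sketch: evaluate eps_omega from Eq. (epoest) and express it in kelvin.
# The defaults (Delta = 1 meV, Lambda = 50 nm, B = 5 T) are the reference
# values of Eq. (epoest), used here only as placeholders.
def eps_omega_meV(Delta_meV=1.0, f_GHz=1.0, Lambda_nm=50.0, B_T=5.0):
    return (0.3 * Delta_meV**(4/7) * f_GHz**(3/7)
                * (Lambda_nm / 50.0)**(6/7) * (B_T / 5.0)**(3/7))

k_B_meV_per_K = 0.08617          # Boltzmann constant in meV/K
for f in (0.1, 1.0):             # SAW frequency omega_q/(2*pi) in GHz
    e = eps_omega_meV(f_GHz=f)
    print(f"f = {f} GHz:  eps_omega = {e:.3f} meV = {e / k_B_meV_per_K:.1f} K")
\end{verbatim}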
To express the relation $|\epsilon_F|\simeq \epsilon_\omega$ in terms of the filling factor $\bar\nu=2\pi l_B^2 n$, we write the electron density $n$ as an integral over the Gaussian density of states, \begin{equation}\label{gauss} g(\epsilon)=(2\pi)^{-3/2} (l_B^2\Delta)^{-1} \exp(-\epsilon^2/2\Delta^2) \, , \end{equation}
and the Fermi distribution function $f$, \begin{equation}\label{nuefint} \bar \nu(\epsilon_F)= \frac{1}{\sqrt{2\pi} \Delta} \int_{-\infty}^\infty d\epsilon \, e^{-\epsilon^2/2\Delta^2} f(\epsilon - \epsilon_F) \, , \end{equation}%
and expand around the center of the LL with respect to $|\epsilon_F|/\Delta \ll 1$. This gives for $T\ll \Delta$ \begin{equation}\label{nuef} \Delta \bar\nu(\epsilon_F) = \bar\nu(\epsilon_F) - \bar\nu(0) = \frac{\epsilon_F}{\sqrt{2\pi} \Delta} \, . \end{equation}%
Then, the width of the absorption region is obtained as \begin{equation}\label{dnu} |\Delta \bar{\nu}| \simeq \left( \frac{\omega_q}{\Omega} \right)^{\alpha/2\nu} \simeq \left(q \Lambda \frac{v_s}{\bar{v}_D}\right)^{\alpha/2\nu} \, . \end{equation}%
The exponent is given by $\alpha/2 \nu=\sigma/\lambda=3/7\approx0.42$. This value agrees with the exponent $\kappa$ which determines the shrinking of the peaks in the longitudinal conductivity \cite{Wei92,Aleiner94} $\sigma_{xx}$ as the temperature $T$ goes to zero, $|\Delta \bar{\nu}| \sim T^\kappa$. In our case the broadening of the absorption peak arises from the frequency $\omega_q$. In this sense, $\hbar\omega_q$ may be considered as an effective temperature which replaces the real temperature $T$. Frequency scaling in the integer quantum Hall regime has been observed in microwave experiments \cite{Engel93}. For spin-split LL's, the width of the peaks in Re$\sigma_{xx}$ corresponding to different LL's was found to scale as $|\Delta \bar{\nu}| \sim\omega^\kappa$, with $\kappa\approx 0.41$. Due to the drift velocity $\bar{v}_D$, the width (\ref{dnu}) depends weakly on the absolute value of the magnetic field. In terms of the filling factor, Eq.\ (\ref{dnu}) can be written in the form $|\Delta \bar{\nu}| \approx (n\Lambda^2\hbar \omega_q/\Delta)^{\alpha/2\nu} (\bar{\nu})^{-\alpha/2\nu}$. Thus, $|\Delta \bar{\nu}|$ is smaller for higher (half-integer) filling factors $\bar{\nu}$. The width of the absorption region scales with the phonon wave vector as $|\Delta \bar{\nu}| \sim q^{\alpha/2\nu}$. In contrast, in the fractional quantum Hall regime, the width increases linearly with $q$ for $\bar{\nu}=1/2$. This linear dependence is derived within the composite Fermion model\cite{Halperin93} and is well confirmed experimentally \cite{Willet90,Willet94}. The absorption of SAW's in the integer quantum Hall regime has also been studied in Ref.\ \onlinecite{Aleiner94}. These authors determine first the ac-conductivity of the 2DEG which is then related to the attenuation coefficient using the equation \begin{equation}\label{gsig} \Gamma = \frac{1}{2} K_{eff}^2 \frac{q\sigma'}{(1+\sigma'')^2+(\sigma')^2} \, , \end{equation}%
where $K_{eff}^2$ represents the effective piezoelectric coupling constant ($=6.4\times 10^{-4}$ for GaAs\cite{Wixforth89}) and $\sigma'={\rm Re} \sigma_{xx}(\omega_q,q)/\sigma_M$, $\sigma''={\rm Im} \sigma_{xx}(\omega_q,q)/\sigma_M$ and $\sigma_M=v_s\bar\varepsilon/2\pi$. [Note that Eq.\ (\ref{gsig}) can be obtained from Eq.\ (\ref{gscr}) writing the dielectric function of the 2DEG in the form $\varepsilon(\omega,q)=1+i\sigma_{xx}(\omega_q,q)/\sigma_M$.]
Assuming that $|\sigma_{xx}| \ll \sigma_M$ for sufficiently high frequencies\cite{Aleiner94}, Eq.\ (\ref{gsig}) reduces to $\Gamma(\omega_q,q) \sim q {\rm Re} \sigma_{xx}(\omega_q,q)$. That is, in this case, the sound absorption and the longitudinal conductivity are related such that the width and the shape of their peaks as function of $\bar{\nu}$ are identical. The calculation of the ac-conductivity in Ref.\ \onlinecite{Aleiner94} is based on the concept of variable-range hopping between pairs of localized states. For $\hbar\omega_q \gg T$, the absorption of SAW's is due to resonant phononless transitions of the electrons from one site of a pair to the other. This mechanism is strongly affected by the electron-electron interaction \cite{Aleiner94}. The width of the absorption peak at half-integer filling factors was found to be \begin{equation}\label{nuas} |\Delta \bar{\nu}| \simeq (q\xi_\circ)^{1/\gamma} \, , \end{equation}% where $\gamma\approx 2.3$ is the scaling exponent of the localization length \begin{equation}\label{xizero} \xi \simeq \xi_\circ |\bar\nu - \bar\nu(0)|^{-\gamma} \, , \end{equation}% and $\xi_\circ$ is assumed to be of the order of the magnetic length. [Note the differences between the last equation and the semiclassical definition of the localization length in Eq.\ (\ref{dcrit}).] The result of Ref.\ \onlinecite{Aleiner94}, Eq.\ (\ref{nuas}), agrees with our result, Eq.\ (\ref{dnu}), in both the numerical value of the exponent and the dependence on $q$. However, the width $|\Delta \bar{\nu}|$ in Eq.\ (\ref{nuas}) exhibits a different dependence on the magnetic field, namely $|\Delta \bar{\nu}| \sim B^{-1/2\gamma}$ in contrast to $|\Delta \bar{\nu}| \sim B^{\alpha/2\nu}$ predicted by Eq.\ (\ref{dnu}). The authors of Ref.\ \onlinecite{Aleiner94} did not give a definite description of the shape of the absorption peak but rather suggested two scenarios which eventually lead to a flat peak with a broad maximum or a double-peak, respectively. Our results support the latter one, see Fig.\ 2. \section{Summary} We have calculated the dielectric function $\varepsilon(\omega,q)$ and the attenuation coefficient $\Gamma$ of a surface acoustic wave for a 2DEG in a smooth random potential (with amplitude $\Delta$ and correlation length $\Lambda$) and a strong magnetic field corresponding to a filling factor $\bar\nu$ close to $1/2$. Both quantities become independent of temperature as the temperature is reduced below a frequency-dependent value $\epsilon_\omega= \Delta (\omega/\Omega)^{\alpha/2\nu}$, where $\alpha/2\nu=3/7$, $\Omega=2\pi \bar v_D/\Lambda$ and $\bar v_D$ is the average drift velocity of the electrons on the equipotential lines of the random potential. In this low temperature, high frequency regime (e.~g.\ $\epsilon_\omega \simeq 1$K for $\omega=2\pi \times 100$MHz), Im$\varepsilon(\omega,q)$ and $\Gamma$ are only appreciable when $\epsilon_F$ is within a narrow region around the center of the Landau level, and Re$\varepsilon(\omega,q)$ decreases according to a power law with increasing distance from the center. In particular, the attenuation of the SAW is exponentially small except for a region whose width $|\Delta\bar\nu| \sim \omega^{\alpha/2\nu}$. This scaling is non-universal because $|\Delta\bar\nu|$ depends on the absolute value of the magnetic field, see Eq.\ (\ref{dnu}). The dependence of $\Gamma$ on the Fermi energy (or the filling factor) yields a double-peak which is centered at the filling factor $\bar\nu=1/2$, cf.\ Fig.\ 2. 
The minimum of the absorption at $\bar\nu=1/2$ results from the enhanced screening due to the 2DEG, i.~e., from the large magnitude of the dielectric function $|\varepsilon(\omega_q,q)| \simeq e^2/\bar\varepsilon v_s \hbar$, where $\bar\varepsilon$ is the average of the dielectric constants of GaAs and vacuum, and $v_s$ is the sound velocity. The double-peak in $\Gamma$ is most pronounced for an infinite system where the critical diameter ${\cal D}_c$, Eq.\ (\ref{dcrit}), of the equipotential lines of the random potential is allowed to take on arbitrarily large values. A real system of size $L$ restricts the diameter to ${\cal D}_c \lesssim L$ resulting in an increase of the attenuation coefficient near the center of the Landau level. While this effect is weak for a macroscopic sample size, a similar but more pronounced effect may arise from a non-uniform electron density associated with a spatially varying filling factor. In the high temperature, low frequency regime, the dielectric function decreases with rising temperature leading to an increase of the magnitude of the attenuation coefficient and a significant increase of the width of the absorption region around $\bar\nu=1/2$. \section*{Acknowledgement} Financial support by the German-Israeli Foundation is gratefully acknowledged. We thank J.\ Hajdu and D.\ Polyakov for valuable discussions and comments. One of us (A.~K.) thanks the Deutsche Forschungsgemeinschaft for financial support and B.~Zingermann for a discussion of some properties of Gaussian distributions.
\section{Introduction} With the expansion of human activities in the oceans towards more extreme environments, state-of-the-art maritime technologies have progressively become less suited to coping with the increased degree of complexity of their missions. As an example, the offshore oil industry is more and more involved in operating in deeper waters and needs to acquire baseline and ongoing surveys throughout the life history of submerged infrastructures and their interaction with the surrounding ecosystems. Currently, operations of this kind rely heavily on expensive and slow human divers because traditional robots are not as well suited to acquiring in-situ measurements in very close proximity to submerged structures or living organisms. Aerial and marine animals achieve remarkable feats of maneuvering and efficiency by changing their body shape to generate unsteady fluid forces. For example, birds execute precise maneuvers, such as banking, braking, takeoff and landing, all with minimal power expended \citep{Provini2014}. This is in stark contrast to current ``flight-type'' marine and aerial vehicles with fixed wings which have a fixed minimum operating speed and slow response time, or ``hover-type'' vehicles with multiple thrusters which have limited mission lives due to their inefficiency. Starting with the seminal work of \cite{Lighthill1960}, which mathematically formulated how fish produce large forces and high efficiency with undulatory motion, there has been significant research in studying shape-changing unsteady biological flows and exploiting them in maritime engineering designs. While fish swimming itself has now been well studied \citep{Triantafyllou2000} and applied to small robotic vehicles \citep{Triantafyllou1994}, the mechanical complexities make it difficult to adapt fish propulsion to broader applications. In this manuscript, we review some recent work on biologically inspired mechanisms which generate strong forces, are highly efficient, and are achieved with relatively simple actuation methods, all of which makes them potentially well-suited to maritime applications. \section{Heaving and pitching foils}\label{sec:flap} The first biologically inspired force-producing device was certainly a flapping wing, dating at least as far back as Da Vinci ca. 1485 \citep{Mccurdy1941}. Modern research has revitalized this concept, showing that lifting surfaces which are actuated to dynamically heave and/or pitch have potential advantages over either fixed lifting surfaces or standard propellers. Studies on the thrust forces generated by an oscillating foil have shown the potential for impressive thrust coefficients (maximum of $C_T=2.4$) and efficiency regions of 50-60\% \citep{Read2003}. It has also been shown that an oscillating foil can be used to manipulate incoming vorticity for energy extraction, with efficiencies at and above 45\% \citep{Simpson2008}. However, there is a wide range of observed efficiencies and force magnitudes, and these parameters vary with oscillation type, planform and flexibility of the foil. This section reviews two studies on actuated rigid foils which demonstrate large force production at high efficiency levels with simple kinematics. \subsection{Tandem flapping foils to balance forces and utilize wake energy} A fundamental issue with implementing a flapping foil as a marine propulsor on an otherwise conventional ship or underwater vehicle is the large variation in thrust and side force.
Additionally, propulsive efficiency in the range of 50-60\% is not optimum, indicating that mechanical power is being wasted in energizing the wake. A recent study by \cite{Epps2016} investigated the use of tandem flapping foils to mitigate the unbalanced forces and potentially increase efficiency by utilizing energy in the wake of the forward foil. \begin{figure} \centering \subfloat[Single]{ \includegraphics[width=0.5\textwidth] {Figures/single.png} \label{fig:single}} \subfloat[Tandem]{ \includegraphics[width=0.5\textwidth] {Figures/tandem.png} \label{fig:tandem}} \caption{Simulated streaklines for the two-dimensional flow past a single flapping foil and tandem flapping foils. The streaklines are visualized by continuously releasing tracer particles on either side of the foil at the quarter-chord. The tandem case has phase lag $\phi=1.75\pi$, and spacing $s=2c$.} \label{fig:foils} \end{figure} In this study, the foils undergo prescribed harmonic heave $h$ and pitch $\theta$, defined as \begin{align} & h_f(t) = c \sin(\omega t), \quad h_b(t) = c\sin(\omega t+\phi) \\ & \theta_f(t) = \frac \pi 4 \cos(\omega t), \quad \theta_b(t) = \frac \pi 4 \cos(\omega t+\phi) \end{align} where $c$ is the chord length, $\omega$ is the flapping frequency and $\phi$ is the phase lag between the foils, and the $f,b$ subscripts refer to the front and back foils, respectively. The frequency is set to achieve a Strouhal number of $St = 4\pi\omega c / U = 0.4$, known to be at the upper end of the range resulting in high thrust for a single foil \citep{Read2003}. The flow speed $U$ is set to achieve a Reynolds number of $Re=Uc/\nu=10^4$. This flow was studied using the Lily Pad computational fluid dynamics software. As discussed in \cite{Weymouth2015b}, Lily Pad is a two-dimensional Cartesian-grid flow solver that uses the Boundary Data Immersion Method \citep[see][]{Maertens2015} and has been extensively validated for unsteady fluid-body interaction problems. For these simulations, a grid spacing of $h=c/64$ and a domain size of $16c \times 8c$ was used. Figure~\ref{fig:foils} shows a set of Lily Pad results for the flow around single and tandem flapping foils. Streaklines in Figure~\ref{fig:single} show that the characteristic reverse K\'arm\'an street has formed, accelerating the flow behind the single foil. Figure~\ref{fig:tandem} shows a set of streaklines for a tandem case where the leading edge of the back foil is spaced $s=2c$ behind the trailing edge of the front foil and the motion is lagged by $\phi=1.75\pi$. The wake in the tandem case has narrowed and lengthened compared to the single foil case, indicating greater speed and possibly efficiency. \begin{figure} \centering \subfloat[Thrust]{ \includegraphics[width=0.3\textwidth]{Figures/thrust.png} } \subfloat[Lift]{ \includegraphics[width=0.3\textwidth]{Figures/lift.png} } \subfloat[Power]{ \includegraphics[width=0.3\textwidth]{Figures/power.png} } \caption{Performance coefficients for the tandem foils shown in Figure~\ref{fig:tandem}; \textcolor[rgb]{0.8,0,0}{front foil}, \textcolor[rgb]{0,0.5,0.5}{back foil}; dashed lines are the mean values over the cycle.} \label{fig:foilsforces} \end{figure} A set of performance metrics is shown in Figure~\ref{fig:foilsforces}. The thrust $T$ and lift $L$ are defined as the integrated fluid forces inline with and perpendicular to the oncoming flow, as usual.
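Before turning to the force and power metrics, we note that the prescribed kinematics above are straightforward to reproduce. The following minimal sketch (independent of the Lily Pad solver) tabulates the heave and pitch signals of both foils over one flapping period using the parameter values quoted above; the flapping frequency is set to unity since only the wave forms are of interest here.

\begin{verbatim}
import numpy as np

# Sketch: prescribed heave/pitch kinematics of the front (f) and back (b) foils.
# Parameters follow the text: chord c, phase lag phi = 1.75*pi; the flapping
# frequency omega is set to 1 purely for illustration.
c, omega, phi = 1.0, 1.0, 1.75 * np.pi

def kinematics(t):
    h_f  = c * np.sin(omega * t)                    # front-foil heave
    h_b  = c * np.sin(omega * t + phi)              # back-foil heave
    th_f = 0.25 * np.pi * np.cos(omega * t)         # front-foil pitch
    th_b = 0.25 * np.pi * np.cos(omega * t + phi)   # back-foil pitch
    return h_f, th_f, h_b, th_b

t = np.linspace(0.0, 2.0 * np.pi / omega, 201)      # one flapping period
h_f, th_f, h_b, th_b = kinematics(t)
print("max |heave| =", np.max(np.abs(h_f)), " max |pitch| (rad) =", np.max(np.abs(th_f)))
\end{verbatim}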
The general equation for the power transferred from the body to the fluid is \begin{equation}\label{eq:p} P = \oint_S \left(\vec f(s,t) \cdot \vec u(s,t)\right)\ ds \end{equation} where $\vec f$ is the local fluid force per unit area on the body surface, $\vec u$ is the local body surface velocity, $\oint_S\ ds$ is an integral over the body surface. This formula automatically accounts for both the pitch and heave motion and is also valid for the flexible and deforming bodies used in the next sections. Another key performance metric is the efficiency, which is \textit{the rate of useful work done per unit power consumed}. As such, the hydrodynamic efficiency of a propulsive actuator operating at a steady forward speed is simply \begin{equation}\label{eq:etaT} \eta_{t} = \frac{TU}{P} \end{equation} where $TU$ is the rate of work done in the inline direction. The results in Figure~\ref{fig:foilsforces} are for the tandem case, but the performance of the front foil is essentially independent of the back foil for $s>c$. The front foil results compare well to those presented in the literature for single flapping foils, with a mean thrust coefficient of $C_{T,f}= T_f/(\frac 12 \rho U^2 c) = 0.52$, mean lift of zero, and mean power coefficient of $ C_{P,f}=P_f/(\frac 12 \rho U^3 c) = 1.04$. Therefore the efficiency for this simple choice of kinematics is 50\%. The back foil undergoes the same motion as the front, but operates in its wake, which significantly changes the response. Most noticeable is the large increase this enables in the back foil thrust, $ C_{T,b}=1.02$, twice the value of the front foil. In other words, adding a second foil has not doubled the total thrust, but instead tripled it. This is due to the positive wake interference of the two foils. Negative interference is also possible, and \cite{Epps2016} develops a relationship between the spacing and phase to characterize this interference. In addition, the peak forces on the hind foil are phase shifted by $\phi$ relative to the front foil. By properly setting the spacing and phase, \cite{Epps2016} shows that a tandem foil propulsion system would be capable of greatly reducing the variation in the thrust force compared to a single flapping foil. It is also possible to reduce the variation in lift, but because the thrust peaks are twice as frequent, two foils cannot perfectly cancel both thrust and lift variation. Finally, the increased thrust on the back foil shown in Figure~\ref{fig:foilsforces} does require increased power, but not disproportionally. In fact, the efficiency of the tandem foil system overall is $\eta_t=53\%$, slightly better than that of the front foil alone. \subsection{Rapid pitch-up for impulsive stopping force} One of the most striking advantages of flying animals over fixed-wing aircraft is their ability to come to a complete and controlled landing in only a few body lengths; even large gliding birds such as an eagle \citep{Carruthers2007}. Like aircraft, flight-type underwater vehicles have a minimum operating speed to maintain their depth, and because maritime vessels are proportionally much heavier than aircraft, they are even slower to stop. \cite{Polet2015} studied a simple model of wing kinematics during perching and found that very large dynamic lift and drag forces are produced - and these forces could potentially be utilized to impulsively stop heavy and streamlined maritime vehicles. 
\cite{Polet2015} focused on one key kinematic characteristic of bird perching, the rapid increase in pitch of the wings during deceleration. Lily Pad simulations ($Re=2000$) and experiments ($Re=22000$) were performed in which the foil speed and pitch angle were varied during the maneuver as \begin{align} U(t) &= U_0 (1-t^*) \label{eq:stop}\\ \theta(t) &= \theta_{final} \left(t^*-\frac{\sin(2\pi t^*)}{2\pi}\right) \label{eq:turn} \end{align} where $U_0$ is the initial velocity, $\theta_{final}=\frac \pi 2$ is the final pitch position, and $t^*= t/\tau$ is time scaled by the period of the maneuver $\tau$ up to $\theta=\pi/2$. A NACA0012 foil section was used and the center of rotation was set $c/6$ from the leading edge. We quantify the impulsiveness of the maneuver using the shape-change number \begin{equation}\label{eq:Xi} \Xi=V/U_0 \end{equation} where $V$ is the speed of the shape-change \citep{Weymouth2013JFM}. For this maneuver we choose $V=c/\tau$, the average cross-flow velocity of the trailing edge. \begin{figure} \subfloat[Kinematics]{ \includegraphics[width=0.3\textwidth]{Figures/polet_kin} } \subfloat[Lift coefficient]{ \includegraphics[width=0.3\textwidth]{Figures/polet_lift} } \subfloat[Drag coefficient]{ \includegraphics[width=0.3\textwidth]{Figures/polet_drag} } \caption{Kinematics and force coefficients (scaled by $U_0$) on a foil with rapidly increasing pitch during deceleration, reproduced from \cite{Polet2015}. The force coefficients from two-dimensional simulations at $Re=2000$, experiments at $Re=22000$, and an inviscid flow model are given over three maneuver speeds.} \label{fig:polet} \end{figure} Figure~\ref{fig:polet} shows the resulting forces from the simulations, experiments, and an inviscid flow model described in \cite{Polet2015}. Forces increased with increasing shape-change number, and at $\Xi=1/2$ the values are ten times larger than the lift and drag at the corresponding static pitch angle, which would help birds maintain lift and come to a controlled stop. However, the drag forces are negative at the end of the maneuver, which decreases the average stopping force. \cite{Polet2015} postulate that the unwanted thrust generation is due to the prescribed constant rate of deceleration in equation~\ref{eq:stop}, which does not match the natural fluid-structure interaction in true perching. To test this theory and to determine the applicability of pitching foils to maritime vehicles, we next carry out free-running simulations of a stopping maneuver. The vehicle is set to be a neutrally buoyant ellipsoid with uniform density, diameter $c$ and length $8c$, Figure~\ref{fig:body_pics}. A NACA0012 foil with span $s$ is mounted on either side of the body center and the pitch relative to the body is given by equation~\ref{eq:turn}. We set $Re=U_0 c/\nu=22000$. The dynamics of the vehicle are modeled as \begin{align} \ddot x = \frac{D-\frac 12 C_x \rho A_x \dot x|\dot x|}{m+m_{xx}},\quad \ddot y = \frac{L-\frac 12 C_y \rho A_y \dot y|\dot y|}{m+m_{yy}},\quad \ddot \psi = \frac{M}{I+m_{\psi\psi}} \end{align} where $x,y$ are the body centroid location, $\psi$ is the heading, $m$ is the mass, $I$ is the moment of inertia, and $C_a, A_a, m_{aa}$ are the drag coefficient \citep[taken from][]{Hoerner1965}, projected area, and potential flow added-mass in the $a$-direction. Note that while the fluid forces on the body are modeled analytically, $D,L,M$ are the measured drag, lift and moment of the foil in the coupled simulation.
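The reduced-order vehicle model above is simple to advance in time once the foil loads are available. The sketch below illustrates this for the surge equation only, using a forward-Euler step; it is not the coupled Lily Pad set-up, the foil drag history $D(t)$ is a placeholder standing in for the force returned by the flow solver, and the mass, added mass, drag coefficient and frontal area are illustrative values rather than those of the ellipsoid described above.

\begin{verbatim}
import numpy as np

# Sketch of the free-running surge equation
#   xdd = (D - 0.5*Cx*rho*Ax*xd*|xd|) / (m + mxx)
# advanced with forward Euler.  All numerical values are illustrative
# placeholders; in the paper D(t) comes from the coupled Lily Pad simulation
# and Cx, Ax, mxx follow Hoerner and potential-flow added-mass estimates.
rho, c = 1.0, 1.0
m   = 1.0                              # body mass (placeholder)
mxx = 0.05 * m                         # surge added mass (placeholder)
Cx, Ax = 0.1, np.pi * (0.5 * c)**2     # drag coefficient and frontal area (placeholders)

def integrate_surge(D_of_t, U0, dt=1e-3, T=5.0):
    """Advance x, xd from an initial speed U0 under a foil-drag history D(t)."""
    n = int(T / dt)
    x, xd = 0.0, U0
    path = np.empty((n, 2))
    for i in range(n):
        D = D_of_t(i * dt)             # foil load (from CFD in practice)
        xdd = (D - 0.5 * Cx * rho * Ax * xd * abs(xd)) / (m + mxx)
        xd += dt * xdd
        x  += dt * xd
        path[i] = x, xd
    return path

# Example: a crude stand-in foil-drag pulse opposing the motion for 0 < t < 1
path = integrate_surge(lambda t: -0.8 if t < 1.0 else 0.0, U0=1.0)
print("final speed:", path[-1, 1], "  distance travelled:", path[-1, 0])
\end{verbatim}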
\begin{figure} \begin{minipage}{0.47\textwidth} \centering \subfloat[$t^*=2.5$]{ \includegraphics[trim={0 3cm 0 15mm},clip,width=\textwidth] {Figures/body_t3.png} } \\ \subfloat[$t^*=1$]{ \includegraphics[trim={0 3cm 0 3cm},clip,width=\textwidth] {Figures/body_t2.png} } \\ \subfloat[$t^*=0$]{ \includegraphics[trim={0 3cm 0 3cm},clip,width=\textwidth] {Figures/body_t1.png} } \caption{Foil vorticity field for free-running simulations of an ellipsoid undergoing a stopping maneuver by rapidly pitching foils with $\Xi=1/2$, $\theta_{final} = \pi/2$.} \label{fig:body_pics} \end{minipage} \hspace{2mm} \begin{minipage}{0.47\textwidth} \centering \subfloat[Centroid path]{ \includegraphics[width=\textwidth] {Figures/body_path.png} } \\ \subfloat[Drag coefficient]{ \includegraphics[width=0.5\textwidth] {Figures/body_drag.png} } \subfloat[Power coefficient]{ \includegraphics[width=0.5\textwidth] {Figures/body_power.png} } \caption{Results for four stopping maneuver cases; $\{\Xi,\theta_{final}\}$ = \textcolor[rgb]{0.5,0,0}{$\{1/2, \pi/2\}$}, \textcolor[rgb]{0,0.5,0}{$\{1/4, \pi/2\}$}, \textcolor[rgb]{0,0.5,0.5}{$\{1/2, \pi\}$}, \textcolor[rgb]{0.5,0,0.5}{$\{1/4, \pi\}$}. Points in (a) show increments of $tU_0/c=1$.} \label{fig:body} \end{minipage} \end{figure} The results of the maneuvering simulations are shown in Figure~\ref{fig:body}. Increasing the shape-change rate increases the forces, and the peak drag magnitudes are similar to the prescribed deceleration case results in Figure~\ref{fig:polet}. However, the free-running case results in only positive drag force, verifying the discussion in \cite{Polet2015}, and helping the vehicle stop. The resulting trajectories show that the pitch-up maneuver is capable of stopping the body's forward motion in $1.6c$, only 20\% of the body length. Figure~\ref{fig:body} also shows two cases where the final pitch has been increased to $\pi$ in equation~\ref{eq:turn}, i.e.\ the foil keeps pitching until it faces backwards. This motion ensures that it is the dynamic forces that are responsible for stopping the body - not just bluff-body drag on the sideways foil. The results show the body not only stops, but fully reverses, and does so with relatively little vertical drift. \section{Size and shape-changing bodies}\label{sec:size} In contrast to rigid body kinematics, such as flapping, little research has been devoted to explosive size and shape-change despite its prevalence in nature. For example, many animals use ``burst and coast'' gaits when performing maneuvers to reduce the cost of transport by as much as 50\% \citep{Weihs1984,Chung2009}. Extreme shape change is also often used in ``survival'' hydrodynamics, i.e. to help an animal hunt or evade attack where extreme accelerations are required \citep{Triantafyllou2016}. In this section we review two series of recent studies on using size-change as a novel form of force generation. Surprisingly, the `ballistic' nature of these novel actuation methods often makes them simpler to implement than the controlled kinematics of the previous section. And advances in soft robotics are enabling the first tests of these size- and shape-changing devices. \subsection{Span-wise retraction to shed vorticity} When birds and marine mammals perform ``burst and coast'' maneuvers they rapidly pull their wings or flippers against their bodies - causing them to effectively `vanish' from the flow. Classic studies such as \cite{Taylor1953} showed that this sudden disappearance would leave a significant vortex in the fluid, generating large forces.
\cite{Wibawa2012} attempted to experimentally and numerically study this vanishing phenomenon by quickly retracting a foil along its span while towing it forward. Retraction is much simpler and less power-consuming than flapping and could be easily used in practical maritime designs. The study used a foil with a rectangular planform, square tip, and NACA0012 cross-section. The foil was towed along the tank at $Re=Uc/\nu=14000$ at a $10^\circ$ angle of attack and was retracted a distance $1.4c$ with an average speed of $6U$. Experimental results showed that while some circulation was shed, it was less than half of the bound circulation before retracting, and it decayed so quickly that it couldn't be feasibly used to generate maneuvering forces. \begin{figure} \includegraphics[width=\textwidth]{Figures/steele_sims} \caption{Simulations of the retracting foil at $Re=1000$ for three foil geometries. $t^*=tU/c=0$ is when the foil crosses the PIV plane (green line). The left and right of each panel show $\lambda_2$ and $\omega_z$ iso-surfaces, respectively. Reproduced from \cite{Steele2016b}.} \label{fig:steele_sims} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{Figures/steele_exp} \caption{Experiment with dye injection of the retracting open hollow foil at $Re=13700$. The orange dye is from inside the foil, while the green is from the outside. The left image is around $t^*=0.15$, right is around $t^*=1.15$. Reproduced from \cite{Steele2016b}.} \label{fig:steele_exp} \end{figure} Three-dimensional simulations were performed to visualize the complete flow. Figure~\ref{fig:steele_sims}(a) shows a similar simulation, but run at $Re=1000$ to clarify the vorticity structures. The wake structures were found to be highly complex because the impulsive retraction of the foil generated its own wake, which mixed and disturbed the shedding of the bound vorticity. Again, this limits the amount of useful work that the maneuver can achieve. In a follow-up study, \cite{Steele2016b} showed that the shape of the foil geometry can be easily adjusted to achieve different kinds of fluid response. Figure~\ref{fig:steele_sims} shows the result of the same retraction maneuver on two other foil shapes: a foil with a streamlined and rounded wing tip, and a foil which is hollow and open on the wing tip to allow fluid to pass through. Figure~\ref{fig:steele_exp} shows the result of using dye visualization in an experimental test of the retracting hollow foil. The results show that because the hollow foil does not need to pull fluid up to fill the wake of its retraction, the vorticity is shed in two large clear vortex structures which could be used to induce dynamic roll moments on trailing control surfaces. \subsection{Shrinking to recover added-mass energy and cancel drag} \begin{figure} \centering \includegraphics[angle=90, width=0.8\textwidth]{Figures/g-s_lam2} \caption{Evolution of the $\lambda_2$ vortex criterion during one oscillation after attainment of the zero-damping regime in response to the sharp and smooth radius variations (see Figure~\ref{fig:g-s}). Reproduced from \cite{Giorgio-Serchi2016}.} \label{fig:lambda2} \end{figure} The streamlined foil result in Figure~\ref{fig:steele_sims} is entirely different from that of the hollow foil. Consider the cross-section of the streamlined foil as it retracts through the PIV plane. This is not a `vanishing' body, but a shrinking one.
The key difference, as shown in \cite{Weymouth2012JFM}, is that the shrinking body pulls in fluid to fill the void left by its retraction, while a vanishing hollow body does not. In both cases, the reduced size of the body means a corresponding reduction in the fluid added-mass. However, the resulting dynamics of the fluid, and its force on the body, could not be more different. For a vanishing body, the surplus fluid kinetic energy goes into the generation of shed vortical structures as shown in Figure~\ref{fig:steele_sims}(c). For a shrinking body, two related effects were found: \begin{enumerate} \item The rapid motion of the boundary generates a layer of vorticity which can cancel the boundary layer vorticity for high shape-change numbers. This is demonstrated by the small amount of shed vorticity in Figure~\ref{fig:steele_sims}(b). \item The cancellation of bound vorticity enables the transfer of the fluid added-mass energy back into the body, resulting in significant instantaneous forces. \end{enumerate} In the case of an inviscid fluid, the bound vorticity cancellation is perfect, and the resulting force is simply \begin{equation}\label{eq:f ma} F = -\frac{\partial}{\partial t}\left(m_{xx} U \right) = -\dot m_{xx} U - m_{xx} \dot U \end{equation} where $\dot m_{xx}$ is the rate of change of the added-mass. The final term is the standard added-mass force due to the body acceleration, but the first term is due to the recovery of added-mass energy by the body. For large shape-change numbers $\dot m_{xx} U$ could be sufficient to completely cancel the body drag force. \begin{figure}[t] \centering \subfloat[Time history]{ \includegraphics[width=0.9\textwidth]{Figures/g-s_hist} } \\ \subfloat[Frequency dependence]{ \includegraphics[trim={0 6cm 7cm 6cm}, clip, height=6cm]{Figures/g-s_spring2} } \subfloat[Amplitude dependence]{ \includegraphics[trim={0 6cm 0 6cm}, clip, height=6cm] {Figures/g-s_amp2} } \caption{Transverse response $x$ of the oscillating sphere, with and without volume-change excitation. Sharp excitation refers to the `saw tooth' pattern at the top of (a), while smooth excitation is a simple sine wave pattern. The grey lines in (c) assume 100\% efficient added-mass energy recovery, while the dark lines assume $\eta=0.9$. Reproduced from \cite{Giorgio-Serchi2016}.} \label{fig:g-s} \end{figure} \cite{Giorgio-Serchi2016} used a volume-changing oscillator to test this method of drag cancellation. They simulated the flow on a spherical body with radius $r$ connected to a spring and immersed in water. If this body is released from a large displacement, say $x_0=r$, it will oscillate with a natural frequency $\omega_n$, but the amplitude quickly decays to nothing due to the drag of the fluid, Figure~\ref{fig:g-s}(a, non-excited). However, if the radius of the sphere changes in time with amplitude $a$, then added-mass energy will transfer back and forth between body and fluid, exciting oscillation, Figures~\ref{fig:lambda2} and \ref{fig:g-s}(a, excited). This is called a parametric oscillator, and just like a child on a swing changing their center of effort, this can lead to sustained large amplitude oscillations if the oscillator is pumped near the natural frequency. But while a swing would work underwater, the amplitude would be tiny due to drag. By shrinking and growing, the sphere cancels its large bluff-body drag force, enabling oscillation amplitudes up to $X=4.7a$ and $3.5r_0$, Figure~\ref{fig:g-s}.
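A minimal time-stepping sketch of such a volume-change-excited oscillator is given below. It is not the model of \cite{Giorgio-Serchi2016}: it simply combines equation~\ref{eq:f ma} with a linear spring and a quadratic drag term, and the body mass, spring stiffness, drag coefficient and radius-pumping schedule are all illustrative assumptions. It can be used to explore how the late-time amplitude depends on the pumping frequency.

\begin{verbatim}
import numpy as np

# Sketch: sphere of time-varying radius r(t) on a spring, using Eq. (f ma)
# for the added-mass force, i.e.  d/dt[(m_b + m_a(t)) xd] = -k x - drag.
# Every numerical value below is an illustrative assumption.
rho   = 1000.0                      # fluid density
r0, a = 0.05, 0.015                 # mean radius and pumping amplitude (assumed)
m_b   = rho * 4/3 * np.pi * r0**3   # body mass (neutrally buoyant at r = r0)
k     = 50.0                        # spring stiffness (assumed)
Cd    = 0.5                         # bluff-body drag coefficient (assumed)

m_a = lambda r: 0.5 * rho * 4/3 * np.pi * r**3      # sphere added mass

def respond(omega_r, x0=0.05, dt=2e-4, T=40.0):
    """Integrate the transverse response x(t) for a radius-pumping frequency
    omega_r; omega_r = 0 recovers the unexcited, decaying oscillation."""
    n = int(T / dt)
    x, xd, xmax = x0, 0.0, 0.0
    for i in range(n):
        t  = i * dt
        r  = r0 + a * np.sin(omega_r * t)
        rd = a * omega_r * np.cos(omega_r * t)
        ma, mad = m_a(r), 1.5 * rho * 4/3 * np.pi * r**2 * rd  # m_a and d(m_a)/dt
        drag = 0.5 * rho * Cd * np.pi * r**2 * xd * abs(xd)
        xdd = (-k * x - drag - mad * xd) / (m_b + ma)          # from Eq. (f ma)
        xd += dt * xdd
        x  += dt * xd
        if i > n // 2:                    # track the late-time amplitude
            xmax = max(xmax, abs(x))
    return xmax

omega_n = np.sqrt(k / (m_b + m_a(r0)))    # small-amplitude natural frequency
for w in (0.0, omega_n, 2 * omega_n):
    print(f"pumping at {w:5.2f} rad/s -> late-time |x|_max = {respond(w):.4f} m")
\end{verbatim}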
Figure~\ref{fig:g-s} also compares the results to an analytic parametric-oscillator model developed in \cite{Giorgio-Serchi2016} using equation \ref{eq:f ma}. While the frequency match is excellent, the model overpredicts $X$ for large $a$ because of the imperfect recovery of added-mass energy. Indeed, Figure~\ref{fig:lambda2} shows that the simulated flow features large-scale vortex shedding - indicating that at least some portion of the energy is spent stirring up the fluid. To quantify how much energy is wasted, we need to revisit the definition of efficiency. Unlike for an isolated propulsor, the useful work is ill-defined for a self-propelled body. As discussed in \cite{Maertens2015BB}, this is because the net force on a steady self-propelled body is zero by definition and the power lost to the environment depends sensitively on the propulsion method. Instead, we must use the quasi-propulsive efficiency \begin{equation}\label{eq:eqp} \eta_{QP} = \frac{P_{tow}}{P_{self}} \end{equation} where $P_{tow}$ is the power lost to the fluid when \textit{towing the rigid body at its operating condition}, and $P_{self}$ is the power usage measured in the self-propelled test. This is, in fact, the standard measure of efficiency used in ship design. In the case of a propeller-driven ship at steady-ahead conditions, equation~\ref{eq:eqp} becomes \begin{equation} \eta_{QP} = \frac{RU}{Q\omega} \end{equation} where the towed resistance $R$ times the speed $U$ is the towed power loss, and the propeller shaft torque $Q$ times the rotation rate $\omega$ is the self-propelled power usage. When using equation~\ref{eq:eqp}, the towed body should be rigid and bare (no propulsor) but otherwise operated at the same conditions as the self-propelled test. Applied to the case of the volume-changing and oscillating sphere, we first select a self-propelled case, say $a/r_0=0.35$ and $\omega = \omega_n$, which achieved $X/a=4.7$ using the smooth profile. We then repeat this case with a rigid sphere towed at the same frequency and amplitude of motion. After using equation~\ref{eq:p} to measure the power used in both cases, the quasi-propulsive efficiency is found to be $\eta_{QP}=0.91$. This was found to be a representative value for the resonant smooth profile cases.\footnote{ Note that the power transfer during sharp inflation is infinite, making this a rather poor choice for energy efficiency. Even using a slightly smoothed profile, the extreme magnitude of the power peaks made computing a meaningful average impossible.} Using this value, the analytic prediction can be corrected and agrees well with the simulations, Figure~\ref{fig:g-s}. If drag cancellation with 90\% efficiency seems too good to be true, it may be explained (or perhaps rationalized) by considering that the growing and shrinking of a shape in water induces a completely irrotational fluid motion. Unlike the rotation of a propeller or the flapping of a foil, then, an inflate-deflate cycle is perfectly reversible, resulting in a zero net transfer of energy to the fluid over the cycle. As the maturing field of soft robotics enables designs with highly deformable parts \citep{Giorgio-Serchi2013}, such efficiencies may soon be realized experimentally. \subsection{Deflating to power an ultra-fast start} Cephalopods, such as the squid and octopus, greatly increase their size by filling with water, before ejecting the water in a propulsive jet, reducing their size and helping them make a quick escape \citep{Huffard2006}.
As a final example of biologically-inspired force production, we review a series of studies that investigated a jet-propelled shrinking vehicle as a model of this system both analytically and experimentally. \cite{Weymouth2013JFM} consider three types of jet-propelled bodies: a rocket in the vacuum of space, a rigid 5:1 ellipsoidal torpedo in water, and an octopus-like vehicle which shrinks from a sphere to a 5:1 ellipsoid as it jets. The acceleration of all three is governed by the simple equation \begin{equation} \ddot x = \frac{F-\dot m U_J}{m}=\frac{\sum F}{m} \end{equation} where $\sum F$ is the total force, which is the fluid force $F$ plus the jet thrust $T_J=-\dot m U_J$, with $-\dot m$ the rate of mass loss and $U_J$ the jet exit velocity. \begin{figure} \includegraphics[width=\textwidth]{Figures/rocket_theory} \caption{Comparison of three jet-propelled rocket fast-start maneuvers using equation~\ref{eq:f ma} to model the fluid reaction force.} \label{fig:rocket_theory} \end{figure} Figure~\ref{fig:rocket_theory} shows the results for all three cases when jetting from rest until 96\% of the initial mass $m_0$ has been expelled, keeping $U_J$ constant for the majority of the maneuver. \begin{itemize} \item In a vacuum, $F=0$ and the net force $\sum F$ equals $T_J$. The rocket accelerates at an increasing rate due to decreased inertia, accelerating far beyond the jet velocity. \item If we model the fluid reaction force on the rigid torpedo with equation~\ref{eq:f ma}, then the body experiences no drag, but will have an ever-increasing added-mass force such that $\sum F\ll T_J$. In essence, the torpedo's added mass is an additional payload which it never sheds, limiting its acceleration. \item The octopus-like vehicle starts as a sphere, meaning its inertia is initially 50\% greater than the rocket in space. However, unlike the rigid torpedo, this is not payload: it is additional propellant! As the body shrinks, the added-mass energy is recovered in the form of thrust by equation~\ref{eq:f ma}. In the second half of the maneuver, when the inertia is reduced, this results in $\sum F\gg T_J$. \end{itemize} The final result is that the octopus-like body accelerates to speeds above $3.5U_J$, much faster than the rigid torpedo, and even faster than a rocket in the vacuum of space. As discussed above, the successful recovery of added-mass energy requires that the energy is not lost to shed vorticity. \cite{Weymouth2015} studied this process and developed an analytic parameter to predict the recovery efficiency. As the octopus-like vehicle shrinks, it induces a normal velocity which draws in the boundary layer fluid. If this inward velocity is strong enough to overcome the diffusion of the boundary layer, then the vorticity can be annihilated and the flow energy recovered. \cite{Weymouth2015} liken this to the application of suction on a rigid boundary layer. In analogy to a suction parameter, they define a shrinking parameter \begin{equation} \sigma^* = V\sqrt{\frac L{U\nu}} = \Xi \sqrt{Re} \end{equation} where $V$ is the cross-flow velocity of the deforming body, in this case the rate of change of the minor-axis radius. This modification of the shape-change number includes the rate of boundary layer diffusion, and axisymmetric boundary-layer theory suggests that $\sigma^* > 9$ should be a threshold value for delayed separation and energy recovery. Note that this threshold is easier to achieve at larger Reynolds numbers, and therefore larger body sizes.
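Returning to the idealised comparison of Figure~\ref{fig:rocket_theory}, the qualitative ordering of the three bodies can be reproduced with a short numerical sketch. The masses, added-mass values and jet parameters below are made-up, normalised numbers (not those of \cite{Weymouth2013JFM}); the only difference between the cases is how the added mass $m_{xx}$ varies as propellant is expelled, with the fluid reaction modelled by equation~\ref{eq:f ma}.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical normalised values chosen only to illustrate the trends
m0, m_f, U_J, T = 1.0, 0.04, 1.0, 10.0      # 96% of the mass expelled at a constant rate
mdot = -(m0 - m_f) / T                      # mass-loss rate (negative)
MXX = {"rocket":        (0.00, 0.00),       # (initial, final) added mass
       "rigid torpedo": (0.06 * m0, 0.06 * m0),
       "octopus-like":  (0.50 * m0, 0.002 * m0)}

def rhs(t, y, case):
    x, U = y
    m = max(m0 + mdot * t, m_f)
    frac = (m0 - m) / (m0 - m_f)            # fraction of propellant expelled so far
    mxx0, mxx1 = MXX[case]
    mxx = mxx0 + (mxx1 - mxx0) * frac
    dmxx = (mxx1 - mxx0) / T if m > m_f else 0.0
    thrust = -mdot * U_J if m > m_f else 0.0
    # (m + m_xx) dU/dt = T_J - (dm_xx/dt) U ; fluid reaction -d(m_xx U)/dt, no drag
    dU = (thrust - dmxx * U) / (m + mxx)
    return [U, dU]

for case in MXX:
    sol = solve_ivp(rhs, (0.0, T), [0.0, 0.0], args=(case,), max_step=1e-3)
    print(f"{case:14s} final speed / U_J = {sol.y[1, -1] / U_J:.2f}")
\end{verbatim}
With these assumed values the shrinking body ends fastest and the rigid torpedo slowest, mirroring the trends in Figure~\ref{fig:rocket_theory}.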
Based on this, \cite{Weymouth2015} designed a prototype soft-robotic vehicle to maximize $\sigma^*$ during a jet-propelled fast-start maneuver. The octopus-inspired vehicle consists of a rigid neutrally buoyant skeleton with an elastic membrane stretched around it to form the outer hull, Figure~\ref{fig:arfm}(a). As with the mantle of the octopus, this membrane can be inflated, giving it an initially bluff shape and storing sufficient energy to power its escape. The fully deflated hull shape is approximately a 5:1 ellipsoid, and is sufficiently streamlined to allow the body to coast dozens of body lengths. The body length is $L=26$~cm and the volume when fully deflated is 1030~cm$^3$, so the `payload' mass accelerated by the maneuver is $m_f = 1.03$~kg. \begin{figure} \includegraphics[width=\textwidth]{Figures/ARFM} \caption{Results of the self-propelled test of the octopus-inspired vehicle. Reproduced from \cite{Triantafyllou2016}, where $F_j=T_J$ is the jet thrust.} \label{fig:arfm} \end{figure} Once inflated, the robot is released from a mount allowing it to accelerate forward in open water under its own power. The resulting fast-start maneuver performance is measured using high-speed cameras at 150 frames/second. Figure~\ref{fig:arfm} shows the rapid acceleration and deflation of the shrinking robot from a self-propelled run. The velocity peaks above $10L/s$ or 2.7~m/s around $t=0.95$~s after release. Based on this and the scale of deformation of the body, we have $\Xi_{ave}=1/24$, $Re_{ave}=350000$ and the shrinking parameter is $\sigma^*>77$ throughout the maneuver, well above the threshold. This high shrinking parameter indicates we should have efficient energy recovery. To quantify this, the outline of the body is measured from the images to determine the mass, mass flux, net force ($m\ddot x$), and jet thrust ($-\dot m U_J$) during the maneuver. Figure~\ref{fig:arfm} shows that the peak net force is 30\% greater than $T_J$, similar to the analytic predictions. We can also measure the payload kinetic energy and the integrated power delivered by the jet: \begin{equation} KE = \frac 12 m_f U^2, \quad P\Delta t = \frac 12 \int_0^\tau \dot m U_J^2 dt \end{equation} Figure~\ref{fig:arfm} shows the ratio of these values, which peaks at $53\%$. This is on par with the theoretical propulsive efficiency of a rocket accelerating from rest in a vacuum, which peaks at $65\%$ \citep{Ivey1947}. However, this is \textit{not} the quasi-propulsive efficiency of the prototype. The integrated $P_{tow}$ is the change in kinetic energy \textbf{plus} the integral of $RU$ when towing the deflated body through the same maneuver. Using the conservative values $C_D=0.05$ and $m_{xx}=m_f/10$ for the deflated shape gives a quasi-propulsive efficiency of $\eta_{QP}=68\%$, better than a rocket in space. \section{Discussion and Conclusions} Vorticity generation is the key to all fluid force generation. It is textbook knowledge that increasing the speed of a body will generally generate more vorticity and increase the force. Slightly less well known is that added-mass in a viscous fluid is also based on vorticity generation on the body surface, making this a uniting theme in fluid dynamics \citep{Wu2007book}.
In this context, one characteristic stands out in the biologically-inspired studies above: \begin{quote} \textit{Unsteady biologically-based propulsors optimize the generation of vorticity by coordinating their kinematics and shape-change with the state of the flow.} \end{quote} The additional degrees of freedom in biologically-based systems give them the potential to generate vorticity when and where it will be most useful, and this can be utilized to efficiently produce large forces for maritime applications. \begin{itemize} \item In the case of tandem flapping foils, proper phase and distance gaps between the foils enable positive interference to double the thrust on the back foil, or to reduce the variation in lift and thrust. \item A foil pitched up rapidly is capable of generating large vorticity if the shape-change number $\Xi$ is increased, and can bring a streamlined body to a complete stop in 20\% of its length. \item Spanwise retraction of a hollow foil minimizes the generation of new vorticity, freeing the bound vorticity to do other useful work. \item On the other hand, retracting a foil with a streamlined planform generates opposite-sign vorticity on the boundary, annihilating the bound vorticity. \item This annihilation enables a body to recover the fluid's added-mass kinetic energy in the form of a large unsteady force. If timed with the natural frequency, this can be used to cancel drag on a size-changing sphere with 91\% efficiency. \item Finally, by treating the added-mass as additional propellant, stored up initially and released throughout, a shrinking underwater vehicle can achieve an ultra-fast start. \end{itemize} This recovery of fluid energy in the form of thrust is especially interesting, and occurs readily as long as the shrinking rate $\sigma^*$ overcomes viscous diffusion. As this number increases with Reynolds number, even greater quasi-propulsive efficiency may soon be realized experimentally. \section*{Acknowledgements} This work was performed in collaboration with excellent research groups worldwide, including Michael Triantafyllou's group at MIT, David Rival's group at Queen's University, Brenden Epp's group at Dartmouth University, and Bharathram Ganapathisubramani's group at University of Southampton. \bibliographystyle{plainnat}
\section{Introduction} \label{sec:intro} Protoplanetary disks form in the chaotic environment of molecular cloud cores and in their early stages they are massive enough to have a non-negligible effect on the evolution of the overall system. The disk self-gravity may influence the disk dynamics through the propagation of density waves that lead to the formation of prominent structures in the form of one or more spiral arms. These morphologies have been detected by ALMA and VLT-SPHERE in both Class 0/I and Class II systems, and they are usually assumed to originate from embedded companions (e.g. HD135344B, \citealt{veronesi19}, MWC 758, \citealt{calcino20}) or from self-gravity (e.g. Elias 2-27, \citealt{perezelias227,huang18c}). In the second case, density waves are thought to provide a non-negligible contribution to the angular momentum transport and may have a crucial role in the formation of planetesimals through dust trapping at the location of the spirals and the subsequent direct fragmentation of spiral overdensities into bound objects \citep{rice04,rice06,kratter16}. Being able to give an estimate of the disk mass is the first step in order to put the other pieces in the puzzle of planet formation and to understand the origin of the observed spirals \citep{marel20,veronesi19,bergin18a}. But how can we determine the mass of these systems? First, dust masses are typically inferred using the optically thin approximation at millimeter wavelengths. It is worth noting that, although it may seem trivial, this estimate still carries a high level of uncertainty, due to the assumed optical depth of the dust at (sub-)mm wavelengths (e.g. the dust opacity and the level of dust growth, \citealt{bergin18a}). Once the dust mass is known, one needs to convert this into a total disk mass by assuming some gas/dust ratio, which is generally assumed to be equal to 100 \citep{draine03}, although this number is highly uncertain (see e.g. \citealt{macias21}). On the other hand, it is more difficult to quantify the disk mass from direct gas tracers. A common procedure is to use observations of CO in its various isotopologues (such as $^{13}$CO and C$^{18}$O) as a proxy for the gas mass. But since the conversion of the observed CO mass into total gas mass is not well understood \citep{williams14a,bergin18a}, this is not straightforward. Another issue adding complexity to the problem is that the properties of different molecules also vary spatially and temporally, depending on the models \citep{ilee17,quenard18}. Indeed, estimates derived from CO observations result in very low disk masses compared to dust estimates \citep{pascucci16a,ansdell16a,miotello17,long17a}. \cite{miotello16a} associated this trend with multiple possible processes: carbon depletion in the disk \citep{favre13,bosman18,cleeves18}, photodissociation in the upper layers, freeze-out at the disk midplane or, in general, other isotope-selective processes. Another aspect that should be considered comes from observations of far-IR HD lines \citep{bergin13,trapman17}, which suggest that the gas-to-dust ratio measurement is affected by the fact that the emitting regions of various gas tracers differ from each other and in turn differ from the regions where the dust is observed. We can also estimate the total disk mass in a dynamical way, by using the disk rotation curve and detecting deviations from the expected Keplerian curve.
This method has been widely used with galaxies \citep{barbieri05}, and it has sometimes been used also to estimate the mass of AGN disks \citep{lodato03}. Usually protoplanetary disks are assumed to be Keplerian, since typically the stellar mass dominates over the disk mass, and their rotation curve can be sufficiently well described by the stellar contribution alone. Instead, when the disk contribution is significant, we may be able to fit the observed rotation curve and give an independent disk mass estimate \citep{bertin99}. For relatively massive disks, this dynamical estimate is now possible since we have access to a large amount of gas kinematic data with high (angular and velocity) resolution and high sensitivity. From these data, we can infer the geometry of the disk and recover the height and the velocity of the emitting gas layer \citep{pinte18}. One of the most interesting observed spiral structures is the one hosted by the protoplanetary disk orbiting around Elias 2-27. Elias 2-27 is a young 0.8 Myr M0 star \citep{andrews09} located at a distance of $\sim$115 pc \citep{gaia18} in the Ophiuchus star-forming region \citep{luhman99}. The surrounding disk is unusually large and massive, with a disk-to-star mass ratio of $\sim$ 0.3 \citep{andrews09,perezelias227}, as estimated by converting dust mass into total disk mass with the usual gas/dust ratio of 100. ALMA observations of this system detected two large-scale spiral arms \citep{perezelias227}, which have been confirmed in the DSHARP survey at higher resolution \citep{andrews18a}. Together with these spiral arms, a 14 au wide inner gap, located at 69 au from the star \citep{huang18b,huang18c}, has been observed. Recent studies have confirmed that a possible origin for the spiral arms is the development of gravitational instabilities \citep{paneque20,hall20,halletal2018,forgan18a,meruetal2017,bae18b}. However, this physical mechanism does not explain the origin of the dust gap, which could have been carved by a companion of $\sim$0.1 $M_{\rm Jup}$ as constrained from hydrodynamical simulations by \cite{zhang18}. Moreover, localized deviations from Keplerian motion at the location of this dust gap have been found recently, reinforcing the hypothesis of a planetary-mass companion \citep{pinte20}. Yet, it has been shown that a low-mass inner companion would be able to explain the gap but not the origin of the observed spiral arms \citep{meruetal2017}. With this background in mind, we decided to take a closer look at the rotation curve of this system in order to provide a dynamical mass estimate of the disk independent of dust-CO measurements and to test the viability of gravitational instabilities as the origin of the observed grand-design spiral structure. In this paper we study the rotation curve of the protoplanetary disk orbiting around Elias 2-27 by comparing two competing models: a Keplerian disk model and a self-gravitating disk model \citep{bertin99,lodato03}. The gravitational field has been computed by solving the Poisson equation including the central point-like object and the disk contribution \citep{bertin99}. We fit the two models to the rotation curve obtained in \cite{paneque20} (following the method proposed by \citealt{pinte18} to derive the height of the CO emitting layer) from the gas CO observations.
\section{The rotation curve of a protoplanetary disk} \label{sec:style} For a cool, slowly accreting disk, the centrifugal balance requires: \begin{equation} \Omega^{2} = \frac{1}{R} \frac{\mathrm{d} \Phi_{\sigma}}{\mathrm{d} R}(R,z) +\frac{\mathcal{G} M_{\star}}{(R^{2}+z(R)^2)^{3/2}} + \frac{1}{R}\frac{1}{\rho}\frac{{\rm d}P}{{\rm d}R} \label{eq:rotation} \end{equation} where $M_{\star}$ is the mass of the central object, $\Phi_{\sigma}$ is the disk contribution to the gravitational potential and where we also consider the pressure gradient (under the assumption of a barotropic disk). However, we expect the contribution of the pressure gradient to the rotation curve to be negligible when compared to the disk self-gravity contribution. Indeed, for a marginally stable self-gravitating disk the disk contribution is of the order of $H/R$, while the pressure term is $O(H^2/R^2)$, where $H$ is the pressure scale height \citep{kratter16}. To compute the pressure gradient we consider a disk temperature profile $T(R)\propto R^{-q}$, with $q=0.5$ (with $T=25$ K at $R=60$ au, corresponding to a disk aspect ratio at this location of $H/R=0.11$, \citealt{perezelias227}). We consider two models for the rotation curve of the disk orbiting around Elias 2-27: a Keplerian disk model and a self-gravitating disk model. Usually the Keplerian model is considered when $M_{\rm disk}\ll M_{\star}$, since in this case the contribution of the disk to the gravitational field is negligible \citep{pringle81}. The Keplerian model has also been used by \cite{paneque20} to estimate the stellar mass. In polar cylindrical coordinates, the radial gravitational field generated by the disk can be written as: \begin{equation} \frac{\partial \Phi_{\sigma}}{\partial R}(R, z) = \frac{\mathcal{G}}{R} \int_{0}^{\infty} \Bigg[ K(k)-\frac{1}{4}\left(\frac{k^{2}}{1-k^{2}}\right) \times \left(\frac{R^{\prime}}{R}-\frac{R}{R^{\prime}}+\frac{z^{2}}{R R^{\prime}}\right) E(k)\Bigg] \sqrt{\frac{R^{\prime}}{R}} k \sigma \left(R^{\prime}\right) d R^{\prime} \label{eq:sgpot} \end{equation} where $K(k)$ and $E(k)$ are complete elliptic integrals of the first and second kind, respectively, and $k^2 = 4RR^{\prime}/[(R + R^{\prime})^2 + z^2]$ (see \citealt{gradshteyn80}). \cite{bertin99} computed the field ${\rm d}\Phi_{\sigma}/{\rm d}R$ in the equatorial plane by taking the limit $z \rightarrow 0$. Instead, we are interested in computing the rotation curve for the gas at a given height. The vertical position $z(R)$ has been determined by \cite{paneque20}, tracing the emitting layers in the CO-isotopologue channel maps with the method outlined in \cite{pinte18}, and has been parameterized as: \begin{equation} z(R)=z_{0}\left(\frac{R}{R_{0}}\right)^{\psi}+z_{1}\left(\frac{R}{R_{0}}\right)^{\varphi}\,, \label{eq:z} \end{equation} where $z_0$, $z_1$, $\psi$ and $\varphi$ are fitting parameters reported in Table 2 of \cite{paneque20} and $R_0$ is equal to 115.88 au. Note that a major finding of \citet{paneque20} is that the West and East side of the disk show an asymmetry in the height of the gas layer, so the fitting parameters differ for the two sides of the disk. Furthermore, the two isotopologues considered ($^{13}$CO and C$^{18}$O) trace different vertical layers of the disk, and thus will have distinct fitting parameters.
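To illustrate how the disk term in Eq.~\ref{eq:rotation} can be evaluated in practice, the sketch below computes the integral in Eq.~\ref{eq:sgpot} with standard library routines (note that scipy's elliptic integrals take the parameter $m=k^2$), adopting the tapered power-law surface density introduced below in Eq.~\ref{eq:sigma}. The masses, evaluation radius and emitting height are placeholder values used only for illustration; this is not the fitting code employed in our analysis.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk, ellipe

G, Msun, au = 6.674e-8, 1.989e33, 1.496e13      # cgs units

Rc, p = 200 * au, 1.0                           # truncation radius and power-law index
Mstar, Mdisk = 0.46 * Msun, 0.08 * Msun         # placeholder masses
Sigma_c = Mdisk / (2 * np.pi * Rc**2)           # normalisation for p = 1

def sigma(Rp):                                  # tapered power-law surface density
    return Sigma_c * (Rp / Rc)**(-p) * np.exp(-(Rp / Rc)**(2 - p))

def disk_field(R, z):
    # radial derivative of the disk potential; scipy's ellipk/ellipe take m = k^2
    def f(Rp):
        k2 = 4 * R * Rp / ((R + Rp)**2 + z**2)
        term = (ellipk(k2) - 0.25 * (k2 / (1 - k2))
                * (Rp / R - R / Rp + z**2 / (R * Rp)) * ellipe(k2))
        return term * np.sqrt(Rp / R) * np.sqrt(k2) * sigma(Rp)
    val, _ = quad(f, 1e-2 * au, 2e3 * au, points=[R], limit=200)
    return G / R * val

def v_rot(R, z):   # rotation speed from the centrifugal balance (pressure term neglected)
    omega2 = disk_field(R, z) / R + G * Mstar / (R**2 + z**2)**1.5
    return R * np.sqrt(omega2)

R, z = 100 * au, 10 * au                        # sample point on the emitting layer
print("star + disk:", v_rot(R, z) / 1e5, "km/s")
print("star only  :", R * np.sqrt(G * Mstar / (R**2 + z**2)**1.5) / 1e5, "km/s")
\end{verbatim}
The integrand peaks sharply at $R^{\prime}=R$, which is why the quadrature interval is split at that point.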
We also take into account the vertical position of the gas $z(R)$ when computing the Keplerian gravitational field in Eq.~\ref{eq:rotation}, as \begin{equation} \Omega_{\rm Kep}^2=\frac{\mathcal{G}M_{\star}}{(R^2+z(R)^2)^{3/2}}\,, \label{eq:kepot} \end{equation} where $z(R)$ is defined in Eq.~\ref{eq:z}. The total disk surface density profile has been chosen after \cite{perezelias227} and \cite{andrews09} as, \begin{equation} \Sigma(R)=\Sigma_{c}\left(\frac{R}{R_{c}}\right)^{-p} \exp \left[-\left(\frac{R}{R_{c}}\right)^{2-p}\right]\,, \label{eq:sigma} \end{equation} where $\Sigma_c$ is a normalisation constant assumed to be a free parameter of the model, while $R_c=200$ au is the truncation radius and the power-law index is fixed at $p=1$. We choose these values for the parameters to match the parameterization adopted by \cite{perezelias227,paneque20}. \section{Results} \label{sec:results} Rather than performing a complete analysis of the channel maps, as done in \cite{paneque20}, we here take their constraints on the rotation curve and directly fit this rotation curve with two competing analytical models, the self-gravitating (see Eqs.~\ref{eq:rotation} and~\ref{eq:sgpot}) and the Keplerian one (see Eq.~\ref{eq:kepot}), using an MCMC algorithm as implemented in \textsc{emcee} \citep{emcee}. We choose 300 walkers and 3500 steps (convergence is already reached after $\sim$2500 steps). We also compare the data with a simple power-law fit, given by \begin{equation} f(R)=p_1 \cdot R^{-p_2}\, \label{eq:powerfit} \end{equation} where $p_1$ is a normalisation constant and $p_2$ the power-law slope. An exponent $p_2=0.5$ would point to a Keplerian disk, while $p_2<0.5$ to the presence of a self-gravitating disk. Instead, an exponent $p_2>0.5$ could suggest a warp or the presence of chaotic accretion from the cloud (at large scales). In the analysis presented below, we do not fit the data points in the inner 60 au, since this region is strongly affected by the observed dust gap \citep{huang18b,paneque20}, and shows noisier data. However, we have also performed the fit including this region, obtaining results similar to those shown in the following sections. A detailed description of the velocity data used here can be found in \citet{paneque20}, along with the procedure used to obtain the height of the CO emitting layer, and we refer the reader to that paper for details. Here we just point out that, due to their higher signal-to-noise ratio, the error bars are much smaller for the $^{13}$CO data than for the C$^{18}$O data. Also note that we do not radially bin the data points obtained by \citet{paneque20} and that different velocity points related to the same radius arise from different azimuthal angles, highlighting the intrinsic non-axisymmetry of the disk. Still, our fitting model is by construction axisymmetric, since we are interested in the overall gravitational field of the disk, and we thus expect some non-negligible residuals to our fitting procedure due to this. \begin{figure*} \includegraphics[scale=0.6]{final_ALL.png} \caption{Rotation curve for the $^{13}$CO and C$^{18}$O isotopologues for different models. The fitting procedure has been done simultaneously for the East (right column) and West (left column) side velocity data (plotted as grey markers), and for the two CO-isotopologues (top: $^{13}$CO; bottom: C$^{18}$O).
The black solid line corresponds to the self-gravitating fit, the red dashed line to the Keplerian fit and the blue dashed one to the Keplerian curve obtained with the star mass from the SG fit. } \label{fig:all_fit} \end{figure*} \subsection{Combined fit}\label{subsec:all} In Fig.~\ref{fig:all_fit} we show the results obtained with the Keplerian (see Eq.~\ref{eq:kepot}, red and blue dashed lines) and self-gravitating (see Eqs.~\ref{eq:rotation} and~\ref{eq:sgpot}, black solid lines) models, when simultaneously fitting all the data points (but considering the height profiles separately) for both CO isotopologues and for both sides of the disk, shown in separate panels for clarity, with the East and West side of the disk on the right and left columns, respectively, and the two CO-isotopologues (top: $^{13}$CO; bottom: C$^{18}$O). The red line corresponds to the Keplerian best-fit model, while the blue line shows the rotation curve for a Keplerian disk where the star mass has been fixed to the one found with the self-gravitating best fit. The values obtained for the fit parameters are reported in the third column of Table \ref{tab:kepmodel}. The East data points, especially for the C$^{18}$O, tend to lie above the best-fit curves, since in this combined fit the model naturally tends to reproduce the lower-uncertainty $^{13}$CO data. If we first look at the power-law model, where the power-law index is left free, the best-fit value of the exponent $p_2$ is smaller than $0.5$ ($p_2 = 0.43\pm 0.03$) by more than $2\sigma$. This already suggests that the data are better reproduced by a self-gravitating model. In such a model, the disk mass obtained from the combined fit is $M_{\rm disk}=0.08\pm 0.04\,M_{\odot}$ with a star mass $M_{\star}=0.46\pm 0.03\,M_{\odot}$. Note that we obtain a non-zero measurement of the disk mass at the $\sim 2\sigma$ level. Instead, in the Keplerian case, the star mass is $M_{\star}=0.49\pm0.01\,M_{\odot}$. For both models, the stellar mass is in agreement with previous estimates \citep{paneque20}. \begin{deluxetable}{cccc} \label{tab:kepmodel} \tablecaption{Parameters obtained with the Keplerian, self-gravitating and power-law models for each CO isotopologue (first two columns) and for a combined fit (both sides and both CO-isotopologues, third column). We also show the reduced $\chi^2$ difference between the Keplerian and self-gravitating fit, as $\lambda = \Delta (\chi^2_{\rm red})$.} \tablewidth{0pt} \tablehead{ & \colhead{$^{13}\mathrm{CO}$} & \colhead{$\mathrm{C}^{18}\mathrm{O}$} & \colhead{Combined fit} \\ } \startdata \textbf{Keplerian fit} & & & \\ \hline $M_{\star}\,[M_{\odot}]$ & 0.50$^{+0.01}_{-0.01}$ & 0.46$^{+0.03}_{-0.03}$ & 0.49$^{+0.01}_{-0.01}$ \\ \hline \hline \textbf{Self-gravitating fit} & & & \\ \hline $M_{\star}\,[M_{\odot}]$ & $0.45^{+0.03}_{-0.03}$ & $0.43^{+0.05}_{-0.07}$ & $0.46^{+0.03}_{-0.03}$ \\ $M_{\rm disk}\,[M_{\odot}]$ & $0.1^{+0.05}_{-0.04}$ & $0.08^{+0.08}_{-0.05}$ & $0.08^{+0.04}_{-0.04}$ \\ \hline \hline \textbf{$\lambda = \Delta$($\chi^2_{\rm red}$)} & 2.16 & -0.19 & 1.38 \\ \hline \hline \textbf{Power-law fit} & & & \\ \hline $p_1$ & $13.46^{+2.39}_{-2.07}$ & $25.31^{+14.67}_{-9.39}$ & $13.95^{+2.39}_{-2.06}$ \\ $p_2$ & $0.43^{+0.03}_{-0.03}$ & $0.54^{+0.09}_{-0.1}$ & $0.43^{+0.03}_{-0.03}$ \\ \hline \enddata \end{deluxetable} \begin{figure*} \includegraphics[scale=0.6]{final_BOTH.png} \caption{Rotation curve for the $^{13}$CO and C$^{18}$O isotopologues for different models.
The fitting procedure has been done simultaneously for the East (right column) and West (left column) side velocity data (plotted as grey markers). The black solid line corresponds to the self-gravitating fit, the red dashed line to the Keplerian fit and the blue dashed one to the Keplerian curve obtained with the star mass from the SG fit. } \label{fig:both_sides} \end{figure*} \subsection{Individual isotopologues fit}\label{subsec:res_both} We also performed a fit separately for the two CO isotopologues. The result is shown in Fig.~\ref{fig:both_sides}, where the upper and lower panels correspond to the $^{13}$CO and C$^{18}$O isotopologues, respectively, with the West side shown in the left panels and the East side in the right panels. The parameters obtained from the best-fit models are shown in the first two columns of Table~\ref{tab:kepmodel}. Also in this case, we start by looking at the power-law model. For the $^{13}$CO data, the best-fit value of the exponent $p_2$ is again smaller than $0.5$ ($p_2 = 0.43\pm0.03$), meaning that the self-gravitating model should be preferred to reproduce the data. In contrast, for the C$^{18}$O the best-fit power-law index is $p_2=0.54^{+0.09}_{-0.1}$, where the value of 0.5 is inside the uncertainties, meaning that a purely Keplerian model is consistent with the C$^{18}$O data, given the larger uncertainty in the velocity points in this case. We also note that the obtained value $>0.5$ could suggest the presence of a warp or chaotic accretion from the cloud. By considering the results for the $^{13}$CO, best fitted by a self-gravitating model, the disk mass is $M_{\rm disk} = 0.1^{+0.05}_{-0.04}\,M_{\odot}$, with a stellar mass of $M_{\star}=0.45\pm0.03\,M_{\odot}$. \section{Discussion} \label{sec:discussion} Having performed fits for the self-gravitating and the Keplerian model, we now compare which one provides a better fit to the data. To do so, we compute the reduced $\chi^2$ ($\chi^2_{\rm red}$) for each model and each CO isotopologue. We then compute the likelihood ratio $\lambda$, defined as the difference between the Keplerian and self-gravitating minimum reduced $\chi^2$: \begin{equation} \lambda = (\chi^2_{\rm red})_{\rm min, Kep} - (\chi^2_{\rm red})_{\rm min, SG}\,. \end{equation} For Gaussian, independent measurements, this function is distributed like a $\chi^2$ with $n$ degrees of freedom, where $n$ is the number of new parameters in the more general case ($n=1$), under the hypothesis that the less general model is correct (i.e. the Keplerian model). The computed values are presented in Table~\ref{tab:kepmodel}. We obtain $\lambda\simeq 2.16$ for the $^{13}$CO fit, and $1.38$ for the combined fit, which means that the Keplerian model is rejected with respect to the self-gravitating one. If we consider only the $\mathrm{C}^{18}\mathrm{O}$ data, instead, the likelihood ratio tends to slightly prefer a simple Keplerian model (see Table~\ref{tab:kepmodel}). This means that in this case the two models are indistinguishable, possibly because the errors are larger with respect to the $^{13}\mathrm{CO}$ case. In summary, the best-fitting model for the combined set of data, including both available CO isotopologues, is a non-Keplerian one, with a disk mass $M_{\rm disk}=0.08\pm0.04\,M_{\odot}$, and a star mass $M_{\star}=0.46\pm0.03\,M_{\odot}$. Considering only the $^{13}$CO data (that are of better quality with respect to the C$^{18}$O ones), we obtain a disk mass of $M_{\rm disk}=0.1^{+0.05}_{-0.04}\,M_{\odot}$ and a star mass $M_{\star}=0.45\pm0.03\,M_{\odot}$.
In both cases, we obtain a non-zero disk mass within $2\sigma$. Instead, the C$^{18}$O alone might be compatible with a purely Keplerian rotation curve, even though the self-gravitating fit returns a non-zero disk mass to within the 1$\sigma$ uncertainty, $M_{\rm disk}= 0.08^{+0.08}_{-0.05}\,M_{\odot}$, and a star mass of $0.43^{+0.05}_{-0.07}\,M_{\odot}$. Thus, assuming a total disk mass equal to $0.08-0.1$ $M_{\odot}$ (and a star mass of $0.46-0.45$ $M_{\odot}$) as obtained from the fits above, we get a disk-to-star mass ratio of $\sim 0.17-0.22$. Gravitational instabilities arise when the disk-to-star mass ratio becomes of the order of the disk aspect ratio $H/R$, which is typically $\approx 0.1$ for protostellar disks. The disk mass we derive from the rotation curve is thus in the correct range to produce gravitational instabilities, and hence the observed spiral structure. In particular, the observed two-armed grand-design structure is strongly suggestive of an internal origin due to gravitational instabilities. We note that from the relation between the disk-to-star mass ratio and the number of spiral arms $M_{\rm disk}/M_\star\propto 1/m$ \citep{lodato04,CLC09,dong15} the obtained disk mass would point to high $m$ modes, while just two spiral arms are observed through ALMA. However, \cite{dipierro14} have demonstrated that, even if the density structure has an intrinsic $m>2$ spiral, smaller-scale arms can be washed out by the limited resolution of the instrument, leaving only the lowest $m$ modes in ALMA dust continuum observations. The disk mass obtained in this work is consistent with those used in the hydrodynamical simulations that reproduce the observed spirals, as performed by \cite{paneque20}, where they employ a disk-to-star mass ratio in the range $q = 0.1-0.3$, and in the simulations of \cite{cadman20}, with a slightly larger value of $q = 0.27$. Having obtained a dynamical estimate of the total disk mass, and assuming a dust disk mass of $10^{-3}\,M_{\odot}$ \citep{paneque20,perezelias227}, we can put interesting constraints on the gas-to-dust ratio, which turns out to be of the order of $\approx 80-100$ (for the combined fit and the $^{13}$CO isotopologue), which in the first case is a factor of $\sim1.2$ smaller than the usually assumed value of 100. Note that the gas-to-dust ratio estimate obtained in this way depends strongly on the dust mass derivation and thus should be considered with care. For this derivation we assumed a dust mass of $10^{-3}\,M_{\odot}$, but \cite{paneque20} showed that, since the disk is optically thick with a low spectral index, scattering could be important, leading to a dust mass estimate up to 2 times larger than previously considered. \begin{figure*}[ht!] \centering \includegraphics[scale=0.55]{residual_ALL.png} \caption{Residuals obtained for the combined fit. The top panels show residuals for the $^{13}$CO (blue points and dashed line), while the bottom ones for the C$^{18}$O (red points and dashed line). The left panels correspond to the West side, while the right ones to the East side. Points represent the difference between the velocity data and a Keplerian model where the star mass $M_{\star,{\rm sg}}$ has been obtained through the self-gravitating model. The dashed line is the difference between the self-gravitating model and the above mentioned stellar contribution, $M_{\star,{\rm sg}}$.
} \label{fig:residual_all} \end{figure*} As a further analysis of the results obtained in the combined fit, we show in Fig.~\ref{fig:residual_all} the residuals for both disk sides (left: West; right: East) and both CO isotopologues (blue: $^{13}$CO; red: C$^{18}$O). In particular, the points are the difference between the velocity data and a Keplerian model where the star mass $M_{\star,{\rm sg}}$ has been obtained through the self-gravitating model. The dashed line is the difference between the self-gravitating model and the Keplerian velocity with the stellar mass $M_{\star,{\rm sg}}$. From these results it appears that the residuals, especially those for the West side in the $^{13}$CO, do require a significant disk contribution to the gravitational potential. Indeed, the data residuals present an increasing trend, following the disk contribution model, in particular for the West side. Instead, for the C$^{18}$O there is still some large scatter in both directions. This aspect is particularly interesting: indeed, \cite{paneque20} find that there is an important asymmetry between the East and West side data (see their Discussion section). The main characteristic of this asymmetry is that the West side is more compact and brighter than the East side, which is more extended and cloud-contaminated. The rotation curve on the East side should then be considered with care, since the disk can be contaminated by chaotic accretion from the cloud. This infall of material could in principle change the centrifugal balance, increasing the complexity of the system. For this reason, we decided to repeat the fit procedure for the West side only. The obtained results are described in Appendix~\ref{sec:singleside}. We obtain an even stronger indication in favor of a self-gravitating fit, with the Keplerian fit rejected with 80\% confidence for the combined fit and with 97\% confidence considering the $^{13}$CO data only. The resulting disk mass in this case is $M_{\rm disk}=0.16\pm0.06\,M_{\odot}$ with a stellar mass $M_{\star}=0.41\pm0.04\,M_{\odot}$, corresponding to a disk-to-star mass ratio of $\sim0.40$. Finally, it has to be noted that small-scale gas turbulence could contribute to deviations from Keplerian motion, but this generally amounts to no more than $0.1\, c_{\rm s} \sim 20$ m/s \citep{flaherty20}, and is thus smaller than the observed deviation (the disk contribution is $\sim 50-100$ m/s, see Fig.~\ref{fig:residual_all}). \section{Conclusions} In this paper we have looked for deviations from Keplerian rotation in the disk orbiting around the Elias 2-27 system, providing for the first time a dynamical measurement of the total mass of a planet-forming disk, by fitting its rotation curve as derived from CO emission with a model including both the stellar and the disk contribution to the gravitational field. We performed three different fit procedures, that is, a combined fit considering both disk sides and both CO isotopologues, an individual fit to the data points for the separate isotopologues, and a third fit considering only the less cloud-contaminated West side of the disk; the last case is described in Appendix~\ref{sec:singleside}. The outcome of these analyses is that the $^{13}$CO data, and in particular the West side of the disk, are better reproduced by a self-gravitating disk model than by a purely Keplerian one.
The same is true also considering both isotopologues, although with smaller confidence, with a resulting disk mass of $M_{\rm disk}=0.08\pm0.04\,M_{\odot}$ and a stellar mass $M_{\star}=0.46\pm0.03\,M_{\odot}$ (where the stellar mass is compatible with previous estimates). We point out that we obtain a non-zero measurement of the disk mass within the 2$\sigma$ uncertainty, both in the combined fit and in the fit for the $^{13}$CO isotopologue alone. Assuming these values for the disk and star mass, and assuming a dust disk mass of $10^{-3}\,M_{\odot}$ \citep{paneque20,perezelias227}, we obtain a disk-to-star mass ratio of $\simeq$ 0.17 and a gas-to-dust ratio of $\simeq 80$. These results highlight the fact that Elias 2-27 should be considered as a self-gravitating disk, reinforcing the internal gravitational instability interpretation for the observed spiral structures. This result is more evident when fitting the $^{13}$CO data on the West side of the disk, which are less affected by cloud contamination and possible infall \citep{paneque20}, and for which we obtain a disk-to-star mass ratio of $\sim 0.40$, with a disk mass of $0.16\,M_{\odot}$ and a star mass of $0.41\,M_{\odot}$. We point out that the lower confidence level obtained in the combined isotopologues fit is due to the relatively lower quality of the C$^{18}$O data, which can be improved in future observations. Finally, we remark that this method to estimate the disk mass can be applied to other protoplanetary disks (such as, for example, IM Lup and WaOph 6 from the DSHARP sample, \citealt{huang18c}, and RU Lup, \citealt{huang20}, that also show a prominent spiral structure), aiming to give better constraints (independent of CO or dust to H$_2$ conversion) on the disk mass. Such measurements can also be used to calibrate the conversion factors between dust and total mass, at least for these systems. \acknowledgments The authors want to thank the referee for constructive comments that improved this manuscript. This paper makes use of the following ALMA data: \#2013.1.00498.S, \#2016.1.00606.S and \#2017.1.00069.S. ALMA is a partnership of ESO (representing its member states), NSF (USA), and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO, and NAOJ. This work has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 823823 (RISE DUSTBUSTERS project). L.M.P.\ acknowledges support from ANID project Basal AFB-170002 and from ANID FONDECYT Iniciaci\'on project \#11181068. We used the \textsc{emcee} algorithm \citep{emcee}, the \textsc{corner} package \citep{corner} to produce corner plots and the \textsc{python}-based \textsc{matplotlib} package \citep{matplotlib} for all the other figures.
\section{Introduction} Highly excited many-particle states in many-body systems can be presented as ``chaotic'' superpositions of shell-model basis states - see the recent calculations for complex atoms \cite{Ce}, multicharged ions \cite{ions}, nuclei \cite{nuclei} and spin systems \cite{Nobel,spins}. Indeed, the number of combinations to distribute $n$ particles over $m$ orbitals is exponentially large ($m!/[n!(m-n)!]$ in a Fermi system). Therefore, the interval between the many-body levels $D$ is exponentially small and the residual interaction between the particles mixes a huge number of the mean-field basis states (Slater determinants) when forming eigenstates. The number of principal basis components in an eigenstate can be estimated as $N_p \sim \Gamma/D$, where $\Gamma$ is the spreading width of a typical component, which can be estimated using the Fermi Golden Rule. In such chaotic eigenstates any weak external perturbation is exponentially enhanced. The enhancement factor is $\sim \sqrt{N_p} \propto 1/\sqrt{D}$ - see e.g. \cite{enhancement} and references therein. This huge enhancement has been observed in numerous experiments studying parity violation effects in compound nuclei - see e.g. review \cite{W} and references therein. In a recent work \cite{S} the consideration of many-body chaos has been extended to quantum computers \cite{feynman,shor1,shor2,steane1,steane2,zoller,nmr1,nmr2,vagner,kane,loss,cooper,lattice,monroe}. Any model of a quantum computer is somewhat similar to that of a spin system. In Ref.~\cite{S} the authors modelled a quantum computer by a random Hamiltonian, \begin{equation} \label{hamil} H = \sum_{i} \epsilon_i \sigma_{i}^z + \sum_{i<j} J_{ij} \sigma_{i}^x \sigma_{j}^x, \end{equation} where the $\sigma_{i}$ are the Pauli matrices for the qubit $i$ and the second sum runs over nearest-neighbor qubit pairs. The energy spacing between the two states of a qubit was represented by $\epsilon_i$, which was uniformly distributed in the interval $[0.5 \Delta_0, 1.5\Delta_0]$. Here $\epsilon_i$ can be viewed as the splitting of nuclear spin levels in a local magnetic field, as discussed in recent experimental proposals \cite{vagner,kane}. The different values of $\epsilon_i$ are needed to prepare a specific initial state by electromagnetic pulses in nuclear magnetic resonance. In this case the couplings $J_{ij}$ will represent the interactions between the spins, which are needed for multi-qubit operations in the quantum computer. The total number of states in this system is $N=2^n$, and the typical interval between the nearby energies of multiqubit states is $\sim \Delta_0 n 2^{-n}$. A rough estimate for the boundary of chaos in the quantum computer eigenstates is $J_c \sim \Delta_0/qn$, where $qn$ is the number of interacting qubit pairs ($qn=2n$ in a 2D square array of ``spins'' with only short-range interactions). This follows from a simple perturbation theory argument: the mixing is strong when the perturbation is larger than the minimal energy interval between the basis states which can be directly mixed by this perturbation (see detailed discussion of the boundary of chaos in many-body systems in Refs. \cite{Altshuler,FI97,FI99}). Numerical simulations in \cite{S} have shown that the boundary of chaos in the quantum computer eigenstates is $J_c \simeq 0.4 \Delta_0/n$. Above this point they observed a transition from Poissonian to Wigner-Dyson statistics for the intervals between the energy levels.
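For orientation, these spectral properties can be checked directly by exact diagonalization for a modest number of qubits. The sketch below (in Python) builds the Hamiltonian (\ref{hamil}) for a one-dimensional chain (a simplification of the two-dimensional array of Ref. \cite{S}), assuming couplings $J_{ij}$ drawn uniformly from $[-J,J]$, and computes the entropy of the eigenstates in the non-interacting basis; all parameter values are illustrative only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def embed(op, site, n):       # single-qubit operator acting on 'site' of n qubits
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, op if k == site else I2)
    return out

def hamiltonian(n, Delta0, J, pairs):
    # H = sum_i eps_i sz_i + sum_(i<j) J_ij sx_i sx_j, eps_i in [0.5, 1.5]*Delta0
    eps = rng.uniform(0.5 * Delta0, 1.5 * Delta0, n)
    H = sum(eps[i] * embed(sz, i, n) for i in range(n))
    for i, j in pairs:
        H += rng.uniform(-J, J) * embed(sx, i, n) @ embed(sx, j, n)
    return H

n, Delta0 = 10, 1.0
pairs = [(i, i + 1) for i in range(n - 1)]       # 1D chain of coupled qubit pairs
for J in (0.05 * Delta0 / n, 0.4 * Delta0 / n, 4.0 * Delta0 / n):
    E, V = np.linalg.eigh(hamiltonian(n, Delta0, J, pairs))
    W = V**2                                     # |C_f^(k)|^2 in the qubit-product basis
    S = -(W * np.log2(W + 1e-30)).sum(axis=0)    # entropy of each eigenstate
    mid = S[len(S) // 4: 3 * len(S) // 4]        # middle of the spectrum
    print(f"J*n/Delta0 = {J * n / Delta0:4.2f}  mean eigenstate entropy = {mid.mean():.2f}")
\end{verbatim}
Even in this small toy version, the eigenstate entropy should grow sharply once $Jn/\Delta_0$ exceeds order unity, in line with the chaos boundary quoted above.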
For $J < J_c$ one eigenstate is formed by one or a few basis states built from the non-interacting qubits (products of ``up'' and ``down'' states). For $J > J_c$ a huge number of basis states is required. Because of the exponential laws it is convenient to study the entropy $S$ of the eigenstates (in many-body systems the entropy $S \simeq \ln{N_p}$, see e.g. Ref. \cite{FI97}). In Ref. \cite{S} they observed a dramatic increase of the eigenstate entropy in the transition from $J<J_c$ to $J> J_c$; in fact, they defined $J_c$ as the point where $S=1$. In Ref. \cite{S} this process of chaotization of the eigenstates with the increase of $J$, or of the number of qubits $n$, was termed a melting of the quantum computer and was assumed to lead to the destruction of its operability. The authors stress that this destruction of operability takes place in an isolated (closed) system without any external decoherence process (one could complement this picture by the $\sqrt{N_p}$ enhancement of any weak external perturbation acting on the quantum computer). This straightforward conclusion may be misleading. ``Theoretically'', this picture is similar to that observed in nuclei and atoms. However, the ``experimental'' situation is very different. In nuclei and atoms, experiments have resolved particular many-body energy levels. Therefore, the description of the systems based on a consideration of the eigenstates was an adequate one. In quantum computers the energy interval between the eigenstates is extremely small. The authors of Ref. \cite{S} estimated that the average interval between the multi-qubit eigenstates for 1000 qubits, the minimum number for which Shor's algorithm \cite{shor1} becomes useful \cite{steane2}, is $D \sim 10^{-298}$ K (for a realistic $\Delta_0 \sim 1$ K). Therefore, in the case of a quantum computer it is impossible to resolve multiqubit energy levels. Temperature, or the finite time of the process $\tau$, gives an uncertainty in energy $\delta E \gg D$. In this case the picture with chaotic eigenstates is not an adequate one and we should consider the time evolution of the quantum computer wave function and entropy. Quantum chaos in the eigenstates allows us to apply a statistical approach to this consideration. \section{Time evolution of the chaotic many-body state} Exact (``compound'') eigenstates $\left| k\right\rangle$ of the Hamiltonian $H$ can be expressed in terms of simple shell-model basis states $\left| f\right\rangle $ in many-body systems or products of qubits in a computer: \begin{equation} \label{slat}\left| k\right\rangle =\sum\limits_f C_f^{(k)}\left| f\right\rangle \,;\,\,\,\,\,\,\,\,\left| f\right\rangle =a_{f_1}^{+}...a_{f_n}^{+}\left| 0\right\rangle. \end{equation} These compound eigenstates $\left| k\right\rangle $ are formed by the residual interaction $J$; $a_s^{+}$ are creation or spin-raising operators (if the ground state $\left| 0\right\rangle$ corresponds to spins down). Consider now the time evolution of the system. Assume that initially ($t=0$) the system is in a basis state $\left| i\right\rangle $ (quantum computer in a state with certain spins ``up'') which can be presented as a sum over exact eigenstates: \begin{equation} \label{in}\left| i\right\rangle =\sum\limits_kC_i^{(k)}\left| k\right\rangle. \end{equation} Then the time-dependent wave function is equal to \begin{equation} \label{psit}\Psi (t) =\sum\limits_{k,f}C_i^{(k)}C_f^{(k)}\left| f\right\rangle \exp(-i E^{(k)}t).
\end{equation} The sum is taken over the eigenstates $k$ and basis states $f$; we put $\hbar=1$. The probability $W_i=|A_i|^2 =|\left\langle i|\Psi(t)\right\rangle|^2$ to find the initial state in this wave function is determined by the amplitude \begin{equation} \label{ampli} A_i= \left\langle i|\exp(-iHt)|i\right\rangle= \sum\limits_k|C_i^{(k)}|^2\exp(-i E^{(k)}t) \simeq \int dE P_i(E)\exp(-i Et). \end{equation} Here we replaced the summation over a very large number of the eigenstates by an integration over their energies $E \equiv E^{(k)}$ and introduced the ``strength function'' $P_i(E)$, which is also known in the literature as the ``local spectral density of states'', \begin{equation} \label{strength}P_i(E)\equiv \overline{|C_i^{(k)}|^2}\rho (E), \end{equation} where $\rho (E)$ is the density of the eigenstates. In chaotic systems the strength function is given by a Breit-Wigner-type formula \cite{BM,FI99}: \begin{equation} \label{FfBW} P_i(E) = \frac{1}{2\pi}\frac{\Gamma_i(E)}{(E_i+\delta_i-E)^2 + (\Gamma_i(E)/2)^2} , \end{equation} \begin{equation} \label{GammaH}\Gamma _i(E)\simeq 2\pi \overline{\left| H_{if}\right| ^2}\rho _f(E) \sim J^2 qn/\Delta_0 . \end{equation} Here $\delta_i$ is the correction to the unperturbed energy level $E_i$ due to the residual interaction $J$, and $\rho_f(E) \sim qn/\Delta_0$ is the density of the ``final'' basis states directly connected by the interaction matrix element $H_{if}$ with the initial state $\left| i\right\rangle$. We see from the equations above that the time dynamics is determined by the structure of the eigenstates. It is easy to find $W_i(t)$ for a small time $t$. Let us separate the energy of the initial state $E_i\equiv H_{ii}$ in the exponent and make a second-order expansion in $H-E_i$ or $E-E_i$ in eq. (\ref{ampli}). The result is: \begin{equation} \label{At2} A_i= \exp(-i E_i t)(1- (\Delta E)^2 t^2/2) , \end{equation} \begin{equation} \label{Wt2} W_i(t)= (1- (\Delta E)^2 t^2) , \end{equation} \begin{equation} \label{DeltaE} (\Delta E)^2= \sum\limits_{f \neq i} H_{if}^2 = \sum\limits_{i<j}J_{ij}^2 = q n J_r^2 . \end{equation} Here $(\Delta E)^2$ is the second moment of the strength function, and $J_r$ is the r.m.s. value of the interaction strength, $J_r^2\equiv\overline{J_{ij}^2}$. The first moment is equal to $E_i=H_{ii}$; see e.g. \cite{FI97,FI99}, where one can also find the calculations of $(\Delta E)^2$ and the spreading width $\Gamma(E)$ for many-body systems. Note that in the special case of a very strong residual interaction $J \gg \Delta_0$ this short-time dependence can be extended to a longer time using the exact solution for the case of $\Delta_0=0$ (in this case it is easy to calculate $\exp(-iHt)$): \begin{equation} \label{Alarge} A_i=\prod\limits_{i<j} \cos{J_{ij}t} \simeq \prod\limits_{i<j}(1-(J_{ij}t)^2/2) \simeq \exp(-(\Delta E)^2 t^2/2), \end{equation} \begin{equation} \label{Wlarge} W_i(t)\simeq\exp(- (\Delta E)^2 t^2). \end{equation} The strength function and density of states in this limit are also described by Gaussian functions with variance $\sigma^2=(\Delta E)^2$: \begin{equation} \label{PGauss} P_i(E) = \frac{1}{\sqrt{2\pi \sigma^2}} \exp(-\frac{E^2}{2\sigma^2}), \end{equation} \begin{equation} \label{rhoGauss} \rho(E) = \frac{2^n}{\sqrt{2\pi \sigma^2}} \exp(-\frac{E^2}{2\sigma^2}). \end{equation} The density of states remains Gaussian for $\Delta_0 \neq 0$, with $\sigma^2= n \overline{\epsilon^2} + (\Delta E)^2$, if there is no gap in the single-qubit spectra (in Ref.
\cite{S} the ``up'' and ``down'' spectra were separated by a gap equal to $\Delta_0$). In general the unperturbed density of states ($J=0$) can be presented as a sum of Gaussian functions (one should separate classes of states with a certain number of spins ``up''). The interaction $J$ in the Hamiltonian (\ref{hamil}) mixes these classes and makes the density closer to the single Gaussian function. The limit at large time in the chaotic case can be obtained by calculating the integral in eq. (\ref{ampli}) in the complex $E$ plane. We should close the contour of integration in the lower half of the complex plane ($Im(E)<0$) to provide a vanishing contribution at infinity. The limit at large time $t$ is given by the pole of the strength function (\ref{FfBW}) closest to the real $E$ axis. If $\Gamma$ and $\delta_i$ do not depend on $E$ the integration gives the usual exponential decay $W_i=\exp(-\Gamma t)$ \cite{BM}. However, the dependence of the spreading width on energy $E$ is necessary to provide the finite second moment $(\Delta E)^2$ of the strength function (note that in many-body systems the dependence $\Gamma (E)$ can be approximated by a Gaussian function, since the density of final states $\rho_f(E_f)$ in eq. (\ref{GammaH}) is usually close to Gaussian \cite{FI99}). If $\Gamma < \Delta E$ the closest pole is given by $\tilde\Gamma= - 2 Im(E_p)$, where $E_p$ is a solution of the equation $E_p=E_i +\delta_i(E_p) - i \Gamma(E_p)/2$ with a minimal imaginary part. If $\Gamma \ll \Delta E$ we have $\tilde\Gamma=\Gamma$. As a result we obtain an exponential dependence for large $t$: \begin{equation} \label{Wtinf} W_i(t) \sim \exp(- \tilde\Gamma t). \end{equation} It is useful to have a simple extrapolation formula (valid for $\Gamma < \Delta E$) between the cases of small time, eq. (\ref{Wt2}), and large time, eq. (\ref{Wtinf}): \begin{equation} \label{Wint} W_i(t)=\exp\left(\frac{\Gamma^2}{2 (\Delta E)^2}- \sqrt{\frac{\Gamma^4}{4 (\Delta E)^4} +\Gamma^2t^2} \right) . \end{equation} Now we can estimate the probabilities of the other components $W_f$. For small time or small interaction $J$, other components can be populated due to direct transitions from the initial state only: \begin{eqnarray} \label{Wf} W_f= |\left\langle f|\exp(-iHt)|i\right\rangle|^2 \simeq |H_{if}|^2\left|\int\limits_0^t |A_i(t)| \exp(i\omega_{if}t)dt\right|^2 \nonumber \\ \simeq\frac{|H_{if}|^2}{\omega_{if}^2+\Gamma^2/4}\left|\exp{(i\omega_{if}-\Gamma/2)t} -1\right|^2 . \end{eqnarray} Here $\omega_{if}=E_f-E_i$. We stress again that this approximate equation does not contain transitions between the small components. For example, it does not contain the width of the state $f$; the width $\Gamma$ stands only to indicate some increase of the denominator and to clarify the ``small time'' condition, which should include small $\omega_{if}t$, $\Gamma t$ or $\Delta E t$. For small time $W_f=|H_{if}|^2 t^2$. Here $H_{if}$ is equal to one of the $J_{ij}$ that produces a change of the state of a pair of ``spins'' (qubits), transferring the initial state $i$ to another state $f$. The result at larger times is different for the perturbative and chaotic regimes. In the perturbative regime, $J \ll \Delta_0/qn$, eq. (\ref{Wf}) is the final one. In the chaotic regime we can find the asymptotic expression for large times. The projection of $\Psi (t)$ in eq.
(\ref{psit}) to the component $f$ gives \begin{equation} \label{sfluct} W_f(t) =W_f^s + W_f^{fluct}(t), \end{equation} \begin{equation} \label{ws} W_f^s =\sum\limits_k|C_i^{(k)}|^2|C_f^{(k)}|^2 \simeq \int \frac{dE}{\rho(E)} P_i(E) P_f(E) \simeq \frac{1}{2\pi\rho}\frac{\Gamma_t}{(E_i-E_f)^2 + (\Gamma_t/2)^2} . \end{equation} Here $\Gamma_t \simeq \Gamma_i+\Gamma_f \simeq 2\Gamma$. \begin{equation} \label{fluc} W_f^{fluct}(t)=\sum\limits_{k,p;k\neq p}C_i^{(k)}C_f^{(k)}C_i^{(p)}C_f^{(p)} \exp(i(E^{(k)}-E^{(p)})t). \end{equation} At large time $t$, the different terms in $ W_f^{fluct}(t)$ rapidly oscillate and we can put $ \overline{W_f^{fluct}(t)}=0 $. Thus, asymptotically the distribution of the components in the time-dependent wave function is close to that in the chaotic eigenstates (see eqs. (\ref{strength},\ref{FfBW})) with a doubled spreading width. \section{Entropy increase} It is convenient to define the entropy of a many-body state as a sum over the basis components (a comparison with other definitions can be found, e.g., in Ref. \cite{FI97}): \begin{equation} \label{entropy} S = -\sum\limits_s W_s \log_2 W_s= - W_i \log_2 W_i -\sum\limits_{f\neq i} W_f \log_2 W_f . \end{equation} Initially, we have only one component, $W_i=1$, and the entropy is equal to zero. It is easy to obtain a small-time estimate for the entropy using eqs. (\ref{Wt2}, \ref{DeltaE}, \ref{Wf}): \begin{equation} \label{Ssmall} S \simeq (\Delta E)^2 t^2 \log_2(qn/(\Delta E)^2 t^2) = qn J_r^2 t^2 \log_2(1/J_r^2 t^2) . \end{equation} We see that the initial increase of the entropy is relatively small ($\sim t^2$); however, it is proportional to the number of qubits $n$. The criterion of a quantum computer ``melting'' used in Ref. \cite{S} is the entropy $S=1$. We can extend the small-time consideration to include this point. For small time we have some decrease of the initial component and population of the components directly coupled to the initial one. The number of such small components is equal to the number of interacting pairs ($qn$) in the Hamiltonian eq. (\ref{hamil}), since each pair can change its state due to the interaction and this leads to a different many-body state. Using the normalization condition $\sum_s W_s=1$ we obtain an estimate $\overline{W_f}=(1-W_i)/n_f$, where $n_f$ is the ``principal'' number of the final components. Initially $n_f=qn$. This gives us the following approximate expression for the entropy: \begin{eqnarray} \label{entropyi} S = - W_i \log_2 W_i - \log_2((1-W_i)/n_f) \sum\limits_{f\neq i} W_f \nonumber \\ =- W_i \log_2 W_i - (1-W_i) \log_2((1-W_i)/n_f) \simeq (1-W_i) \log_2(n_f) . \end{eqnarray} The last approximate expression is an estimate with logarithmic accuracy, assuming $\log_2(n_f)$ is large. The condition $S=1$ combined with eq. (\ref{Wint}) for $W_i(t)$ and eq. (\ref{entropyi}) for the entropy $S(t)$ gives \begin{equation} \label{melt} W_i(t)=\exp\left(\frac{\Gamma^2}{2 (\Delta E)^2}- \sqrt{\frac{\Gamma^4}{4 (\Delta E)^4} +\Gamma^2t^2}\right)= 1- 1/ \log_2(n_f). \end{equation} This means that the ``melting'' happens when the probability to be in the initial state $W_i$ is still close to 1 (since $\log_2(n_f)$ is large). The loss of operability of the quantum computer is due to the admixture of a large number of the small components (``wrong'' basis states). We should note that, strictly speaking, the argument of the $\log_2$ may differ from $n_f=qn$, since the point $t_c$ can be outside the small-time approximation. However, the estimate in eq.
(\ref{melt}) with $\log_2 n_f \simeq \log_2 n$ is valid with logarithmic accuracy (for example, a more accurate estimate in the case of $\Gamma \ll \Delta E$ is $n_f \simeq qn \Gamma/\Delta_0$; this follows from eq. (\ref{Wf})). Equation (\ref{melt}) allows us to obtain a simple estimate for the maximal operational time $t_c$: \begin{equation} \label{tcc} t_c\simeq\frac{\hbar}{\Gamma \log_2(n)}\sqrt{1+\frac{\Gamma^2 \log_2n} {(\Delta E)^2}}. \end{equation} In the case of $\Gamma \ll \Delta E$ we have \begin{equation} \label{tc} t_c\simeq\frac{\hbar}{\Gamma \log_2(n)}= \frac{\tau_0}{n \log_2(n)} \end{equation} Here $\tau_0= \hbar/\Gamma_0$ is the ``lifetime'' related to a single qubit, $\Gamma_0 = \Gamma/n$; recall that $\Gamma$ is proportional to the number of qubits $n$. A more accurate result can be obtained numerically using the expressions for $W_i$ and $W_f$ presented above. At this point we can say something about the effects of the environment. They also lead to ``depolarization'' of a qubit, which means a nonzero probability of the opposite spin state. If this probability is small we can speak about the probabilities of the population of $n$ many-qubit basis states. Each admixed basis state in this case has one of the qubit states different from the initial state. To account for this effect one may use a real (experimental) qubit lifetime $\tau_0$ in the estimate (\ref{tc}). For $t > t_c$ the higher orders in the $H_{if}^2 t^2$ expansion become important and the number of the small components increases exponentially: each state generates $qn$ new states. This corresponds to an approximately linear increase in the entropy. At $t \gg t_c$ we can use the asymptotic form (\ref{ws}) of the component distribution. It is two times broader ($\Gamma_t= 2 \Gamma$) than the basis component distribution of chaotic stationary states. This means that the asymptotic number of the principal components is equal to $N_p(t)=2 N_p^{(k)}$, where $N_p^{(k)} \sim \Gamma/D$ is the number of principal components in a chaotic eigenstate. It is easy to calculate the entropy in this case. From the normalization condition $\sum_s W_s=1$, it follows that $\overline{W_s}=1/N_p$. Then \begin{equation} \label{entropyinf} S = -\sum\limits_s W_s \log_2 W_s \simeq \log_2 N_p\sum\limits_s W_s= \log_2 N_p. \end{equation} Thus, the asymptotic value of the entropy is $S(t \gg t_c)=\log_2(2 N_p^{(k)})= S^{(k)} +1$, where $S^{(k)}= \log_2 N_p^{(k)}$ is the entropy of a chaotic eigenstate. Note that it is smaller than the maximal possible entropy $S_{max}=\log_2 2^n= n$. This is due to localization of the wave function within the energy shell centered at the energy of the initial state $E_i$ with the width $2\Gamma$. \section{Conclusion} The time dependence of the closed quantum computer wave function is different in the non-chaotic and chaotic regimes. In the non-chaotic case $J \ll \Delta_0/n$, the number of principal components is $N_p \simeq 1$ and the wave function remains localized near the initial state (as was pointed out in \cite{S}, the energy level density of the many-qubit states can be exponentially high even in this case). An increase in the number of qubits $n$ leads to a transition to a chaotic regime where $J > \Delta_0/n$. In this case one can operate the quantum computer within a limited time $t < t_c=\tau_0/(n \log_2 n)$, where $\tau_0$ is the ``lifetime'' of one qubit.
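As a simple numerical illustration of these estimates, the sketch below evaluates the survival probability from the interpolation formula (\ref{Wint}) and the entropy from eq. (\ref{entropyi}), and solves $S(t_c)=1$ for the melting time, comparing it with the estimate $t_c\simeq\hbar/(\Gamma\log_2 n_f)$. The values of $n$, $q$, $J$ and $\Delta_0$ are arbitrary illustrative numbers (units with $\hbar=1$ and energies in units of $\Delta_0$).
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Illustrative parameters (hbar = 1, energies in units of Delta0)
n, q, Delta0, J = 1000, 2, 1.0, 1.0e-3
Gamma = 2 * np.pi * J**2 * q * n / Delta0     # spreading width, Gamma ~ 2 pi J^2 rho_f
dE2 = q * n * J**2                            # (Delta E)^2 = q n J_r^2
n_f = q * n                                   # number of directly coupled basis states

def W_i(t):                                   # survival probability (interpolation formula)
    return np.exp(Gamma**2 / (2 * dE2)
                  - np.sqrt(Gamma**4 / (4 * dE2**2) + (Gamma * t)**2))

def S(t):                                     # entropy estimate, S ~ (1 - W_i) log2(n_f)
    w = W_i(t)
    p = max(1.0 - w, 1e-300)                  # guard against log(0) at very small t
    return -w * np.log2(w) - p * np.log2(p / n_f)

t_c = brentq(lambda t: S(t) - 1.0, 1e-6, 1e3)  # melting time from S(t_c) = 1
print("t_c (numerical)     =", round(t_c, 2))
print("1/(Gamma log2 n_f)  =", round(1.0 / (Gamma * np.log2(n_f)), 2))
\end{verbatim}
For these parameters the two numbers agree to within a few tens of percent, as expected from the logarithmic accuracy of the analytic formula.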
For $t > t_c$ it is hardly possible to operate the quantum computer, since in this case one faces a hopeless struggle against the second law of thermodynamics: increase of the entropy $S(t)$ and very fast exponential increase of the number of ``wrong'' states $N_p(t)= 2^{S(t)}$. The asymptotic value of the entropy is then close to that for chaotic eigenstates. A similar picture for the entropy increase is expected in other many-body systems. For example, one can consider the decay of a single-electron wave function in a many-electron quantum dot. In this case $t_c \sim \tau /\log_2 n_f$, where $n_f$ is the effective number of final states that contribute to the decay width $\Gamma=\hbar/\tau$. One may also speculate about the ``entropy'' increase for the decay of a single-particle wave function in a chaotic quantum billiard or a disordered system, using an expansion of this wave function in the plane wave basis or the orbital angular momentum basis. This work was supported by the Australian Research Council. The author is grateful to M.Yu. Kuchiev for valuable discussions and to A.S. Dzurak for careful reading of the manuscript.
\section{Introduction} In this introduction, we briefly discuss some notions which will be explained in more detail later, and we describe the structure of the paper. It is a classical problem of approximation theory to determine when continuous functions on a compact subset of the complex plane can be uniformly approximated by rational functions. As an extension of this problem, we often consider the algebra of those continuous functions which can be approximated in this way. Examples of compact subsets of the complex plane on which this algebra has interesting properties are obtained by deleting a sequence of carefully chosen open disks from a closed disk. We call such sets {\em Swiss cheese sets}. We are usually interested in Swiss cheese sets where the sum of the radii of the deleted open disks is finite. In this case, if the deleted open disks and the complement of the closed disk have pairwise disjoint closures, then we call the set a {\em classical} Swiss cheese set. In the literature, a Swiss cheese traditionally refers to a compact subset of the plane; whereas, in \cite{feinheath2010,FMY,mason2010} and the current paper, we consider a Swiss cheese to be an underlying object to which we associate a Swiss cheese set. In general, a Swiss cheese set might have some undesirable topological properties, such as the existence of isolated points. However, Feinstein and Heath \cite{feinheath2010} described a process by which, under certain conditions, we can improve the topological properties of the set. We call this process ``classicalisation''. We discuss the desirable properties of classical Swiss cheese sets in Section \ref{Swiss_cheese_sec}. In \cite{feinheath2010}, Feinstein and Heath considered a {\em Swiss cheese} to be a pair consisting of a closed disk and a countable collection of open disks. A Swiss cheese set is associated to such an object by deleting the union of the collection of open disks from the closed disk. They then used Zorn's lemma to prove the Feinstein-Heath classicalisation theorem. In \cite{mason2010}, Mason used transfinite induction to prove the classicalisation theorem. His proof used maps, which he called ``disk assignment functions'', which can also be used to describe Swiss cheese sets. In \cite{FMY}, the current authors considered sequences of pairs, consisting of a complex number and a non-negative real number, each of which corresponds to the centre and radius of a disk in the plane. Using these sequences, which we call ``abstract Swiss cheeses'', we gave a topological proof of the Feinstein-Heath classicalisation theorem, and some related results. In this paper we describe numerous examples of Swiss cheese sets from the literature, and outline some classicalisation results from \cite{feinheath2010,FMY,mason2010}. We also give a short comparison of the various abstract objects used to describe and manipulate Swiss cheese sets and the methods used to prove the Feinstein-Heath classicalisation theorem. We then give a proof of a ``semiclassicalisation'' theorem, where the open disks and the complement of the closed disk of the final Swiss cheese set are pairwise disjoint. This proof uses an inductive construction which terminates at the first infinite ordinal. We conclude with the construction of a classical Swiss cheese set which serves as a counterexample to a conjecture of S. E. Morris, and improves the example in \cite{feinstein2004}. 
\section{Preliminaries} We say {\em compact space} to mean a non-empty, compact, Hausdorff topological space and {\em compact plane set} to mean a non-empty, compact subset of $\C$. For a commutative, unital Banach algebra $A$ we denote the character space of $A$ by $\Phi_A$. Given the Gelfand topology, $\Phi_A$ is a compact space. Let $X$ be a compact space. We denote the algebra of continuous, complex-valued functions on $X$ by $C(X)$ and we give $C(X)$ the {\em uniform norm} on $X$ defined by \[ \abs f_X:=\sup_{x\in X}\abs{f(x)}\qquad (f\in C(X)). \] A {\em uniform algebra on $X$} is a closed subalgebra $A$ of $C(X)$ which contains all constant functions and for each $x,y\in X$ with $x\neq y$ there is a function $f\in A$ such that $f(x)\neq f(y)$. We say a uniform algebra $A$ is {\em trivial} if $A=C(X)$. We say $A$ is {\em natural} if every character on $A$ corresponds to a {\em point evaluation} $\pe x(f):=f(x)$ ($f\in A$) for some $x\in X$. In general, we shall often identify the point $x$ with the point evaluation $\pe x$. For general background on uniform algebras we refer the reader to \cite{browder1969,gamelin1984,stout1971} and for general background on Banach algebras we refer the reader to \cite{dales2000}. Let $X$ be a compact plane set. We denote the set of all rational functions with poles off $X$ by $R_0(X)$ and we denote its closure in $C(X)$ by $R(X)$. We denote the set of all functions $f\in C(X)$ with $f|_{\inte X}$ analytic by $A(X)$. It is easy to see that $R(X)$ and $A(X)$ are uniform algebras on $X$, \[R_0(X)\subseteq R(X)\subseteq A(X)\subseteq C(X),\] and $A(X)=C(X)$ if and only if $\inte X$ is empty. It is standard (see, for example, \cite[Chapter II]{gamelin1984}) that $R(X)$, $A(X)$ and $C(X)$ are natural uniform algebras on $X$. \begin{definition} Let $A$ be a commutative$,$ unital Banach algebra. Then $A$ is {\em regular} if for each closed $E\subseteq\Phi_A$ and $\varphi\in\Phi_A\setminus E$ there exists $\hat f\in\hat A,$ the Gelfand transform of $A,$ such that $\hat f(\varphi)=1$ and $\hat f(E)\subseteq\{0\}$. Similarly$,$ $A$ is {\em normal} if for each pair of disjoint$,$ closed sets $E,F\subseteq\Phi_A$ there exists $\hat f\in\hat A$ such that $\hat f(E)\subseteq\{1\}$ and $\hat f(F)\subseteq\{0\}$. \end{definition} It is standard that regularity and normality are equivalent for commutative, unital Banach algebras (see, for example, \cite[Proposition~4.1.18]{dales2000}). \begin{definition} Let $A$ be a commutative Banach algebra and $E$ a commutative Banach $A$-bimodule. A {\em derivation} $D:A\to E$ is a linear map such that \[ D(ab)=a\cdot D(b)+D(a)\cdot b\qquad(a,b\in A). \] A commutative Banach algebra is {\em weakly amenable} if there are no non-zero continuous derivations from $A$ into any commutative Banach $A$-bimodule. \end{definition} We refer the reader to \cite{dales2000} for the definition of Banach $A$-bimodules and further details. It is known (\cite{bade1987}) that a commutative Banach algebra $A$ is weakly amenable if and only if there are no non-zero, continuous derivations from $A$ into $A'$, the dual module of $A$. \begin{definition} Let $A$ be a commutative, unital Banach algebra and let $\varphi$ be a character on $A$. A {\em point derivation} at $\varphi$ is a linear functional $d$ on $A$ such that \[ d(ab)=\varphi(a)d(b)+d(a)\varphi(b)\qquad(a,b\in A). 
\] A {\em point derivation} of order $n\in \mathbb N$ (respectively, $\infty$) at $\varphi$ is a sequence $d_0,d_1,\cdots$ of linear functionals with $d_0=\varphi,$ satisfying \[ d_j(ab) = \sum\limits_{k=0}^jd_k(a)d_{j-k}(b)\qquad(a,b\in A), \] for $j=1,2,\dotsb,n$ (respectively, $j=1,2,\cdots$). A point derivation is continuous if $d_j$ is continuous for $j\leq n$ (respectively, all $n$). \end{definition} Let $A$ be an algebra of functions on a compact space $X$. A point $x\in X$ is a {\em peak point} for $A$ if there exists $f\in A$ such that $f(x)=1$ and $\abs{f(y)}<1$ for all $y\in X$ with $y\neq x$. It is standard that if $x$ is a peak point for $A$ then there are no non-zero point derivations at $x$, see \cite[Corollary 1.6.7]{browder1969}. \begin{definition} A uniform algebra $A$ on a compact space $X$ is {\em essential} if, for each non-empty, proper, closed subset $F\subseteq X,$ there is a function $f\in C(X)\setminus A$ such that $f|_F=0$. \end{definition} It is standard (\cite[Theorem~2.8.1]{browder1969}) that $A$ is essential if and only if the union of all of the supports for annihilating measures for $A$ on $X$ is dense in $X$. \begin{definition} Let $A$ be a uniform algebra on a compact space $X$. Then $A$ is {\em antisymmetric} if every real valued function in $A$ is constant. \end{definition} We remark that every antisymmetric uniform algebra is essential, but the converse is false (\cite[Page~147]{browder1969}). A number of the constructions in this paper involve finding a compact subset of a given compact plane set so that the subset has better topological properties. The following proposition, from \cite{feinheath2010}, lists some properties of $R(X)$ which are preserved when we consider a subset $Y$ of $X$. \begin{proposition}\label{subsetprops} Let $X$ and $Y$ be compact plane sets with $Y\subseteq X$. Then$:$ \begin{enumerate} \item if $R(X)$ is trivial then so is $R(Y);$ \item if $R(X)$ does not have any non-zero bounded point derivations then neither does $R(Y);$ \item if $R(X)$ is regular then so is $R(Y)$. \end{enumerate} \end{proposition} \section{Swiss cheeses} \label{Swiss_cheese_sec} Some of the properties that a uniform algebra can possess depend on the nature of its character space. Swiss cheese sets (as described in the introduction) are often constructed so that $R(X)$ has various combinations of desirable properties. We discuss various such constructions in Section \ref{swisscheeseexamples}. In \cite{feinheath2010,mason2010} a {\em Swiss cheese} object was associated to a Swiss cheese set to allow the manipulation of the closed disk and each open disk. These objects are a pair consisting of a closed disk and a countable collection of open disks, where the associated Swiss cheese set is obtained by deleting each open disk in the collection from the closed disk. These objects can be associated to compact plane sets which previously may not have been considered Swiss cheese sets. In fact, without some additional conditions, every compact plane set is a Swiss cheese set. We usually insist the sum of the radii of the open disks is finite to limit the sets we can obtain from Swiss cheeses. Another way of studying Swiss cheese sets was later developed (in \cite{FMY}) in which a sequence of pairs, consisting of a complex number and a non-negative real number, are associated to a Swiss cheese set. The elements in each pair of these sequences correspond to the centres and radii, respectively, of the disks used to construct a Swiss cheese set. 
In this way the space of {\em abstract Swiss cheeses} can be given a topology which can be used to prove various results about the associated Swiss cheese sets. \begin{notation}For $a\in \C$ and $r> 0$ we denote the open disk centred at $a$ of radius $r$ by $\ob{a}{r}$ and the corresponding closed disk by $\cb{a}{r}$. We also set $\ob a0=\emptyset$ and $\cb a0=\{a\}$. We denote by $\No$ the set of non-negative integers, $\N$ the set of positive integers and $\Rp$ the set of non-negative real numbers. \end{notation} As in \cite{FMY}, we define the space $\scs:=(\C\times\Rp)^\No$ with the product topology. \begin{definition}\label{absSCdef} Let $A=((a_n,r_n))_{n=0}^\infty\in\scs$. We call $A$ an {\em abstract Swiss cheese }$,$ and we define the following. \begin{enumerate} \item The \emph{significant index set} of $A$ is $S_A:=\{n\in \mathbb N:r_n>0\}$. We say that $A$ is {\em finite} if $S_A$ is a finite set, otherwise it is \emph{infinite}. \item The {\em associated Swiss cheese set} $X_A$ is given by \begin{equation}\label{absScs}X_A:=\cb{a_0}{r_0}\setminus\left( \bigcup\limits_{n=1}^\infty\ob{a_n}{r_n}\right). \end{equation} \item We say that $A$ is {\em semiclassical} if $\sum_{n=1}^\infty r_n<\infty,$ $r_0>0$ and for all $k\in S_A$ the following holds$:$ \begin{enumerate} \item $\ob{a_k}{r_k}\subseteq\ob{a_0}{r_0};$ \item whenever $\ell\in S_A$ has $\ell\neq k,$ we have $\ob{a_k}{r_k}\cap\ob{a_\ell}{r_\ell}=\emptyset$. \end{enumerate} \item We say that $A$ is {\em classical} if $\sum_{n=1}^\infty r_n<\infty,$ $r_0>0$ and for all $k\in S_A$ the following holds$:$ \begin{enumerate} \item $\cb{a_k}{r_k}\subseteq\ob{a_0}{r_0};$ \item whenever $\ell\in S_A$ with $\ell\neq k,$ we have $\cb{a_k}{r_k}\cap\cb{a_\ell}{r_\ell}=\emptyset$. \end{enumerate} \item We say $A$ is {\em annular} if $a_0=a_1$ and $r_0>r_1>0$. \item The (classical) {\em error set} $E(A)$ of $A$ is defined to be the set \[ \bigcup\limits_{\substack{m,n\in S_A\\m\neq n}}\bigg(\cb{a_m}{r_m}\cap \cb{a_n}{r_n}\bigg)\cup\bigcup\limits_{n\in S_A}((\C\setminus\ob{a_0}{r_0})\cap \cb{a_n}{r_n}). \] \end{enumerate} For $\alpha\geq 1$ we define the {\em discrepancy function of order $\alpha,$} $\delta_\alpha:\scs\to[-\infty,\infty),$ by \begin{equation}\label{gendiscr}\delta_\alpha(A):=r_0^\alpha -\sum\limits_{n=1}^\infty r_n^\alpha\qquad(A=((a_n,r_n))_{n\geq 0}\in\scs). \end{equation} The {\em annular discrepancy function} $\delta_{\rm ann}:\scs\to[-\infty,\infty)$ is defined by \[ \delta_{\rm ann}(A):=r_0-r_1-2\sum\limits_{n=2}^\infty r_n\qquad(A=((a_n,r_n))_{n\geq 0}\in\scs). \] \end{definition} We usually denote an abstract Swiss cheese by $A=((a_n,r_n))$, where it is understood in context that the sequence is indexed by the non-negative integers. We shall often say {\em annular Swiss cheese} to mean an annular, abstract Swiss cheese. We say a Swiss cheese set $X$ is {\em semiclassical} ({\em classical}) if there is a semiclassical (classical) abstract Swiss cheese $A$ such that $X=X_A$. We note that the annular discrepancy function is defined for all abstract Swiss cheeses, but is only really considered when dealing with annular Swiss cheeses. As in \cite{FMY}, we introduce the following functions on $\mathcal F$. \begin{definition} The {\em radius sum function} is the map $\rho:\scs\to[0,\infty]$ given by \[ \rho(A):=\sum\limits_{n=1}^\infty r_n\qquad(A=((a_n,r_n))\in\scs). \] The {\em centre bound function} is the map $\mu:\scs\to[0,\infty]$ given by \[ \mu(A):=\sup_{n\in\N}{\abs{a_n}}\qquad(A=((a_n,r_n))\in\scs). 
\] The {\em annular radius sum function} is the map $\ar:\scs\to[0,\infty]$ given by \[ \ar(A):=\sum\limits_{n=2}^\infty r_n\qquad(A=((a_n,r_n))\in\scs). \] \end{definition} We often impose conditions on the centres and radii. It is easy to see that, for an abstract Swiss cheese $A$, we have $\delta_1(A)>-\infty$ if and only if $\rho(A)<\infty$. Thus the condition $\delta_1(A)>-\infty$ in the definition of a classical Swiss cheese from \cite{feinheath2010} is equivalent to the condition $\rho(A)<\infty$ in Definition \ref{absSCdef}. We denote the collection of all abstract Swiss cheeses $A=((a_n,r_n))$ with $\rho(A)<\infty$ and $(r_n)_{n=1}^\infty$ non-increasing by $\mathcal N$. In addition, for each $M>0$, we denote the set of all those abstract Swiss cheeses $A\in\mathcal N$ such that $\rho(A)\leq M$ by $\mathcal N_M$. It is often useful to consider those abstract Swiss cheeses where there are no obvious ``redundant'' open disks. For example, open disks contained in, or equal to, any other open disk. To this end, we make the following definition. \begin{definition} Let $A=((a_n,r_n))$ be an abstract Swiss cheese. Then $A$ is {\em redundancy-free}, if for all $k\in S_A$ we have $\ob{a_k}{r_k}\cap\cb{a_0}{r_0}\neq\emptyset$, and for all $\ell\in S_A$ with $k\neq \ell$ we have $\ob{a_k}{r_k}\not\subseteq\ob{a_\ell}{r_\ell}$. \end{definition} Let $U\subseteq \mathbb{C}$ be an open set and let $A=((a_n,r_n))$ be an abstract Swiss cheese. We denote by $H_A(U)$ the set of all $n\in S_A$ for which $\cb{a_n}{r_n}\cap U\neq\emptyset$, and we denote by $\rho_U(A)$ the sum $\sum_{n\in H_A(U)}r_n$. The following result is essentially \cite[Lemma~4.3]{FMY}. \begin{lemma}\label{nonredundantcheese} Let $A=((a_n,r_n))\in\mathcal F$ with $\rho(A)<\infty$. Then there exists a redundancy-free abstract Swiss cheese $B=((b_n,s_n))\in\mathcal N_{\rho(A)}$ with $X_B=X_A$ and $\cb{b_0}{s_0}=\cb{a_0}{r_0}$ such that$,$ for all open sets $U\subseteq\C,$ $\rho_U(B)\leq \rho_U(A)$. \end{lemma} In the above lemma, the conditions $\cb{b_0}{s_0}=\cb{a_0}{r_0}$ and $\rho(B)\leq\rho(A)$ together imply that $\delta_1(B)\geq\delta_1(A)$. An application of Fatou's lemma shows that, for each $\alpha\geq1$, the function $\delta_\alpha$ is upper semicontinuous on $\mathcal N$. Furthermore, for each $M>0$ and $\alpha>1$, the dominated convergence theorem can be used to show that $\delta_\alpha$ is continuous on $\mathcal N_M$. Semiclassical (and hence classical) Swiss cheese sets have desirable topological properties such as being rectifiably path connected and locally rectifiably path connected. In fact, in \cite{dalefein2010}, it was proved that for any two points $z,w$ in a classical Swiss cheese set $X$, there is a rectifiable path $\gamma$ in $X$ joining $z$ and $w$ with length at most $\pi\abs{z-w}$; thus, $X$ is {\em uniformly regular} (as defined in \cite{dalefein2010}). In fact, it is easy to see that the same proof works for semiclassical Swiss cheese sets and the constant $\pi$ can be improved to $\pi/2$. These properties imply, in particular, that no semiclassical Swiss cheese set has any isolated points. Note that, for an arbitrary abstract Swiss cheese $A\in\mathcal N$, $X_A$ may have isolated points. Also, as noted in \cite{feinheath2010}, as a consequence of a theorem of \cite{whyburn1958} all classical Swiss cheese sets with empty interior are homeomorphic to the Sierpi\'nski carpet. It was proved in \cite{feinheath2010} that, for any semiclassical Swiss cheese set $X$, the algebra $R(X)$ is essential. 
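To make Definition~\ref{absSCdef} concrete, the following short Python sketch (an illustration of ours, not part of the formal development, and restricted to finite abstract Swiss cheeses) computes the discrepancy $\delta_\alpha$ and tests the classical condition; the function names are ours, and the data at the end is the nine-hole configuration of Figure~\ref{counterexample_to_FH_Mason_equiv} (centres with real and imaginary parts in $\{0,\pm 1/4\}$ and radius $3/32$).
\begin{verbatim}
# Illustrative sketch only: a finite abstract Swiss cheese is a list of
# (centre, radius) pairs, the first pair giving the closed disk.
def discrepancy(cheese, alpha=1.0):
    (a0, r0) = cheese[0]
    return r0**alpha - sum(r**alpha for (_, r) in cheese[1:])

def is_classical(cheese):
    (a0, r0) = cheese[0]
    holes = [(a, r) for (a, r) in cheese[1:] if r > 0]   # significant indices only
    if r0 <= 0:
        return False
    if any(abs(a - a0) + r >= r0 for (a, r) in holes):   # closed D(a,r) inside open D(a0,r0)
        return False
    for i, (a, r) in enumerate(holes):                   # closed disks pairwise disjoint
        for (b, s) in holes[i + 1:]:
            if abs(a - b) <= r + s:
                return False
    return True

centres = [complex(x, y) for x in (-0.25, 0.0, 0.25) for y in (-0.25, 0.0, 0.25)]
cheese = [(0j, 1.0)] + [(c, 3.0 / 32.0) for c in centres]
print(discrepancy(cheese), is_classical(cheese))         # 0.15625 True
\end{verbatim}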
We comment that there are slight differences in definitions of Swiss cheese/abstract Swiss cheese used in \cite{feinheath2010,mason2010} and here. However, all three papers use the same definition for classical/semiclassical Swiss cheese sets. We refer the reader to Section~\ref{Comparison} for a comparison of various notions of a Swiss cheese object. Due to an argument of Allard (outlined in \cite[p.~163-164]{browder1969}), any classical Swiss cheese set has positive $2$-dimensional Lebesgue measure. With minor adjustments this argument shows that a semiclassical Swiss cheese set must also have positive $2$-dimensional Lebesgue measure (area). An alternative argument uses a theorem of Wesler \cite{wesler1960}. \section{Examples of Swiss cheese sets} \label{swisscheeseexamples} We now look at a selection of examples of Swiss cheese sets and their properties from the literature. The Swiss cheese sets discussed here may or may not be classical. The Swiss cheese sets which we describe as classical were introduced by Alice Roth in \cite{roth1938} (see also \cite{AliceRoth2005}) as examples of compact plane sets $X$ where $R(X)\neq A(X)=C(X)$. Later these examples were discovered independently by Mergelyan \cite{mergelyan1952}. Since then there have been numerous constructions of compact plane sets, fitting our definition of a Swiss cheese set, designed so that $R(X)$ has various properties. Prior to the work in \cite{feinheath2010} the focus was on the Swiss cheese set, rather than the underlying (abstract) Swiss cheese. We shall often use the language of abstract Swiss cheeses to describe these examples. The following is a selection of notable Swiss cheese constructions from the literature. \subsection{Bounded point derivations} The following construction of a classical Swiss cheese set $X$ such that $R(X)$ admits no non-zero bounded point derivations is due to Wermer \cite{wermer1967}. His construction relies on the following observation. Let $X$ be a compact plane set and let $x\in X$; then there exists a non-zero bounded point derivation on $R(X)$ at $x$ if and only if there exists a constant $M>0$ such that $\abs{f'(x)}\leq M\abs f_X$ for each $f\in R_0(X)$. We now briefly outline some key steps of the construction from \cite{wermer1967}. Let $X_0$ be a finite, classical Swiss cheese set (where only finitely many open disks have been deleted) and let $M,\varepsilon>0$ with $\varepsilon<1/2$. For each $n\in\N$ and for each $j,k\in\Z$, let $D_{jk}^n$ denote the open disk centred at $(j+ki)/n$ of radius $r_n=\varepsilon/n^2$. For a fixed $n\in\N$, we let $S$ denote the set of all $(j,k)\in\Z^2$ such that $\overline{D_{jk}^n}$ lies in the interior of $X_0$. The key to the construction in \cite{wermer1967} is the following lemma. \begin{lemma}\label{wermerlemma} There exists an integer $N=N(\varepsilon,M,X_0)$ such that \[ \sum\limits_{(j,k)\in S}\frac{\varepsilon/N^2}{\abs{z-(j+ki)/N}^2}>M \] and \[ \sum\limits_{(j,k)\in S}\frac{\varepsilon/N^2}{\abs{z-(j+ki)/N}}\leq 50 \] for all $z\in X':=X_0\setminus\bigcup_{(j,k)\in S}D_{jk}^N$. \end{lemma} Wermer's construction of a classical Swiss cheese set $X$ such that $R(X)$ admits no non-zero bounded point derivations is as follows. Choose a suitable sequence $(\varepsilon_n)$ and let $X_0$ be the closed unit disk. Construct each finite, classical Swiss cheese set $X_{n}$ ($n\geq 1$) inductively by applying Lemma \ref{wermerlemma} to $X_0=X_{n-1},$ $\varepsilon=\varepsilon_n$ and $M=n,$ and setting $X_n=X'$.
Let $X$ be the intersection of the sets $X_n$, which is a classical Swiss cheese set. The choice of sequence $(\varepsilon_n)$ allows us to make the sum of the radii of all open disks arbitrarily small. For each $z\in X$, the construction yields a sequence $(f_n)$ of rational functions on $X$ such that $\abs{f_n'(z)}>n$ (from Lemma \ref{wermerlemma}) and $\abs{f_n}_X\leq 50$ for each $n\in\N$. It then follows that $R(X)$ admits no non-zero bounded point derivations. This construction was adapted by O'Farrell \cite{o1973isolated} to construct a Swiss cheese set $X$ for which $R(X)$ admits a non-zero bounded point derivation at exactly one point of $X$. From the proofs in \cite{wermer1967} and \cite{o1973isolated}, we can distill the following proposition. \begin{proposition}\label{Anulus_class_no_bpd} Let $\varepsilon >0$, and $s_0>s_1\geq 0$. Then there exists a classical abstract Swiss cheese $A = ((a_n,r_n))$ with $r_j = s_j$ for $j=0,1$ and $\ar(A)<\varepsilon$ such that $R(X_A)$ has no non-zero bounded point derivations. \end{proposition} In fact, Wermer's construction can be applied to any finite, classical Swiss cheese set $X_0$. Proposition \ref{Anulus_class_no_bpd} is obtained by choosing $X_0$ to be either a closed disk or a closed annulus. The following result, which is due to Browder, is given in \cite{wermer1967}. \begin{proposition} Let $A=((a_n,r_n))$ be a classical abstract Swiss cheese and suppose that $\sum_{n=1}^\infty r_n\log(1/r_n)<\infty.$ Then there exist non-zero bounded point derivations on $R(X_A)$ at almost every point of $X_A$ with respect to Lebesgue measure. \end{proposition} In particular, there exist classical Swiss cheese sets $X$ such that $R(X)$ admits non-zero bounded point derivations at almost every point, and classical Swiss cheese sets $Y$ such that $R(Y)$ admits no non-zero bounded point derivations. Note that a theorem of Hallstrom \cite{hallstrom1969} gives necessary and sufficient conditions for the existence of non-zero bounded point derivations on $R(X)$ at a point $x\in X$, where $X$ is a compact plane set, in terms of analytic capacity. Browder's result is based on a more elementary sufficient condition. \subsection{Regular uniform algebras} McKissick \cite{mckissick1963nontrivial} constructed a Swiss cheese set $X$ such that $R(X)$ is regular and non-trivial. In fact, this is the first known example of a non-trivial, regular uniform algebra. His original construction, which relies on a construction of Beurling outlined in \cite[p.~349-355]{stout1971}, was greatly simplified by K\"orner \cite{korner1986cheaper}; it is K\"orner's version, adjusted by translation and scaling, which is presented below. \begin{lemma}\label{kornerlemma} Let $\varepsilon>0,$ let $\Delta=\ob ar$ be an open disk and let $E=\C\setminus\Delta$. There exists a sequence of complex numbers $(a_n),$ a sequence of positive real numbers $(r_n)$ and a sequence of rational functions $(f_n)$ such that$,$ if we let $U=\bigcup_{k=1}^\infty \ob{a_k}{r_k},$ then the following hold$:$ \begin{enumerate} \item $\sum_{n=1}^\infty r_n<\varepsilon;$ \item the poles of $f_n$ lie in $U$ for each $n;$ \item the sequence $(f_n)$ converges uniformly on $\C\setminus U$ to a function $f$ such that $f\left(E\setminus U\right) = \{0\}$ and $0\notin f(\Delta\setminus U);$ \item $U\subseteq\{z\in\C:r-\varepsilon<\abs{z-a}<r\};$ \item for each $m,n\in\N$ with $m\neq n$ we have $\ob{a_m}{r_m}\cap\ob{a_n}{r_n}=\emptyset$.
\end{enumerate} \end{lemma} Let $\overline{\Delta}$ be a closed disk in $\mathbb C$ with positive radius, and let $(D_m)_{m=1}^\infty$ be an enumeration of open disks in $\C$ with rational centre (a point with rational real and imaginary parts) and rational (non-zero) radii. Then we may apply Lemma~\ref{kornerlemma} to each $D_m$ with $\varepsilon_m =1/2^m$ to obtain $(\Delta_m^{(k)})_{k=1}^\infty$. In this way we obtain a Swiss cheese set $X=\overline{\Delta} \setminus \bigcup_{m,k} \Delta_m^{(k)}$, where $R(X)\neq C(X)$ and $R(X)$ is regular. To see that $R(X)$ is regular, let $K\subseteq X$ be closed and let $z\in X\setminus K$; then there exists $m\in\N$ with $z\in D_m\cap X$ and $D_m\cap K = \emptyset$. From the construction there exists $f\in R(X)$ with $f(z)\neq 0$ and such that $f$ is identically zero on $K$. McKissick \cite{mckissick1963nontrivial} used the collection of all open disks with rational centres and radii from the plane to give a Swiss cheese set $X$ such that $R(X)$ is regular. In fact, choosing every open disk with rational centre and rational radius is more than is required. In \cite{o1979regular}, O'Farrell uses those open disks whose centre $a$ has rational real and imaginary parts and whose radius $r$ is rational, and which satisfy either $0<\abs a\leq 1$, $r<\abs{a}/2$ and $r<1-\abs{a}$, or $a=0$ and $r=2^{-n}$ for some $n\in\N$. Applying Lemma~\ref{kornerlemma} to this collection of open disks to obtain the Swiss cheese set $X$ (starting from the closed unit disk) ensures that $R(X)$ is regular and $0\in X$. With careful control over the sum of the radii of the deleted open disks, O'Farrell showed that $R(X)$ admits a bounded point derivation of infinite order at $0$. Using the method from \cite{mckissick1963nontrivial}, along with Proposition \ref{subsetprops} (see also \cite{o1979regular}), the following can be proved. \begin{proposition}\label{mckissickconstruction} Let $b_0=b_1\in\C$ and $s_0>s_1\geq 0$ and let $\varepsilon>0$. There exists an abstract Swiss cheese $A=((a_n,r_n))$ with $a_j=b_j,$ $r_j=s_j$ for $j=0,1$ and $\ar(A)<\varepsilon$ such that $R(X_A)$ is regular. \end{proposition} Note that Swiss cheese sets obtained by this method are not classical in general. For instance, in McKissick's construction, any deleted open disk contains a sequence of (redundant) deleted open disks. K\"orner's lemma has been used or adapted to construct a number of examples of Swiss cheese sets $X$ for which $R(X)$ has various combinations of properties. For example, in \cite{feinstein1991}, the first author modified the lemma to construct a Swiss cheese set $X$, obtained using a construction similar to that of O'Farrell, such that $R(X)$ has a prime ideal whose closure is not prime. In \cite{wang1975}, Wang uses McKissick's lemma to give an example of a Swiss cheese set $X$ such that $R(X)$ is strongly regular at a non-peak point. In \cite{feinstein2001}, the first author gave an example of a Swiss cheese set $X$ such that $R(X)$ has no non-trivial Jensen measures yet is not regular. In \cite{feinstein2004}, the same author gave an example of a Swiss cheese set $X$, using Wermer's construction (Proposition \ref{Anulus_class_no_bpd}), such that $R(X)$ has no non-zero bounded point derivations but is not weakly amenable. This construction was improved by Heath \cite{Heath2005}, who gave an example of a compact plane set $Y$ such that $R(Y)$ is regular and admits no non-zero bounded point derivations but is not weakly amenable.
However, in this example disks were deleted from a square shaped compact set in the plane rather than a closed disk. \subsection{Other examples} There are other examples based on the construction of suitable Swiss cheese sets. In \cite{steen1966}, Steen gave a construction which can be used to give a classical Swiss cheese set $X$ such that $R(X)$ is not antisymmetric. There are several examples of classical Swiss cheese sets (with our current definition) given in \cite{gamelin1984}. These include the {\em roadrunner set} (p.~52), the {\em string of beads} (p.~146), the {\em stitched disk} (Example 9.3), the {\em Champagne bubbles} (p.~227), along with Examples 9.1 and 9.2. Example 9.2 of \cite{gamelin1984} was also used in \cite{dalefein2010}. These examples, unlike most of the above examples, have non-empty interior. Swiss cheese sets, and similar constructions, have also been used to construct interesting Banach algebras of functions, for example in \cite{brennan1973} and \cite{Brennan2013}, and in harmonic approximation, for example in \cite{browder1969}. The latter example is a ``square Swiss cheese'', obtained by deleting from a closed, square shaped, compact plane set $K$ a sequence of open, square shaped, plane sets. \section{Classicalisation theorems} We have seen that semiclassical and classical Swiss cheeses have desirable topological properties. In \cite{feinheath2010}, Feinstein and Heath gave a sufficient condition to find a classical Swiss cheese whose associated plane set is a subset of the original. In fact they proved the following, stated here in the language of abstract Swiss cheeses. \begin{proposition}[Classicalisation theorem]\label{classicalisationtheorem} Let $A$ be an abstract Swiss cheese with $\delta_1(A)>0$. Then there exists a classical abstract Swiss cheese $B$ such that $\delta_1(B)\geq\delta_1(A)$ and $X_B\subseteq X_A$. \end{proposition} This theorem was later proved using a transfinite induction by Mason \cite{mason2010}. In \cite{FMY}, the current authors proved the classicalisation theorem by considering a compact set $\mathcal S$ in the topological space $\scs$. The classical abstract Swiss cheese is obtained by first maximising $\delta_1$ on $\mathcal S$ and then minimising $\delta_2$ on the resulting compact subset of $\mathcal S$. These functions are upper-semicontinuous and continuous, respectively on $\mathcal S$. The compact set $\mathcal S$ depends on the initial abstract Swiss cheese, and consists of those abstract Swiss cheeses which are ``good candidates'' for the final, classical abstract Swiss cheese; see \cite{FMY} for formal definition of $\mathcal S$. In the same paper a similar result (below) for annular Swiss cheeses was proved by this topological method. This result can also be proved using transfinite induction. \begin{proposition}[Annular classicalisation theorem]\label{annularclassicalisation} Let $A=((a_n,r_n))$ be an annular Swiss cheese with $\delta_{\rm ann}(A)>0$. Then there exists a classical$,$ annular Swiss cheese $B=((b_n,s_n))$ with $b_0=a_0$ such that $\delta_{\rm ann}(B)\geq\delta_{\rm ann}(A)$ and $X_B\subseteq X_A$. \end{proposition} For a non-empty compact set $K$ and $M>0,$ we define \[ U(K,M):=\{z\in\C:\dist(z,K)<M\}. \] Let $I$ be a non-empty subset of $\mathbb{N}$, and let $(K_n)_{n\in I}$ be a countable collection of non-empty, compact sets and $(M_n)_{n\in I}$ a countable collection of positive real numbers; we write $U_n$ for $U(K_n,M_n)$ in what follows. 
We say the countable collection of pairs $((K_n,M_n))_{n\in I}$ is {\em admissible} (with respect to $A$) if $\rho_{U_n}(A)<M_n/2$ for all $n\in I,$ ${U_m}\cap {U_n}=\emptyset$ for all $m,n\in I$ with $m\neq n,$ and $\overline{U_n}\subseteq\ob{a_0}{r_0}$ for all $n\in I$. By constructing a suitable compact subset of $\scs,$ the following was proved. Note that this result can also be proved using transfinite induction. \begin{proposition}[Controlled classicalisation theorem]\label{localclassicalisation} Let $A=((a_n,r_n))$ be a redundancy-free abstract Swiss cheese with $\rho(A)<\infty$ and let $I$ be a non-empty subset of $\mathbb N$. Suppose that $((K_n,M_n))_{n\in I}$ is an admissible collection of pairs with $E(A)\subseteq\bigcup_{n\in I} K_n$. Then there exists a classical abstract Swiss cheese $B=((b_n,s_n))$ with $\delta_1(B)\geq\delta_1(A),$ $X_B\subseteq X_A,$ and the following hold$:$ \begin{enumerate} \item for all $k\in S_A\setminus\bigcup_{n\in I}H_A(U_n)$ there exists $\ell \in S_B$ with $\ob{b_\ell}{s_\ell} = \ob{a_k}{r_k};$ \item $\rho_{U_n}(B)\leq \rho_{U_n}(A)$ for all $n\in I$. \end{enumerate} \end{proposition} By combining Propositions \ref{annularclassicalisation} and \ref{localclassicalisation} we obtain, through a sequence of approximations described in \cite{FMY}, the following improvement of Proposition \ref{mckissickconstruction}. \begin{proposition}\label{classicalmckissick} Let $b_0=b_1\in\C$ and $s_0>s_1\geq 0$ and let $\varepsilon>0$. There exists a classical abstract Swiss cheese $A=((a_n,r_n))$ with $a_j = b_j,$ $r_j=s_j$ for $j=0,1$ and $\sum_{n=2}^\infty r_n<\varepsilon$ such that $R(X_A)$ is regular. \end{proposition} \section{Comparison of Swiss cheeses} \label{Comparison} At present there are two distinct abstract notions of a Swiss cheese: a Swiss cheese in the sense of \cite{feinheath2010,mason2010} and the abstract Swiss cheese as in Definition~\ref{absSCdef}. Recall that a Swiss cheese (as in \cite{feinheath2010,mason2010}) is a pair consisting of a closed disk and a countable collection of open disks. In this definition, all disks are assumed to be non-degenerate (have positive radius). We can describe any Swiss cheese set using a Swiss cheese or an abstract Swiss cheese. There is also a related notion of a disk assignment map, introduced in \cite{mason2010}, which can likewise be used to describe Swiss cheese sets. We now explain the relationship between these different notions. In \cite{FMY}, it was described how, given an abstract Swiss cheese $A=((a_n,r_n))$, we can obtain an associated Swiss cheese $\mathbf D_A$ by setting \[ \mathbf D_A=(\cb{a_0}{r_0},\{\ob{a_n}{r_n}:n\in S_A\}). \] In this way we can obtain any Swiss cheese. The mapping of the collection of all abstract Swiss cheeses onto the collection of all Swiss cheeses, described above, is a surjection (necessarily many-to-one) and preserves the associated Swiss cheese set. We also have $\delta(\mathbf D_A)\geq \delta_1(A)$, where $\delta(\mathbf D_A)$ is defined by \begin{equation}\label{FHDiscrepancy} \delta(\mathbf D):=r(\overline{\Delta})-\sum_{D\in\mathcal D}r(D)\qquad(\mathbf D=(\overline{\Delta},\mathcal D)) \end{equation} and $r(D)$ denotes the radius of the disk $D$. (The quantity $\delta(\mathbf D)$ in \eqref{FHDiscrepancy} is called the {\em discrepancy} of $\mathbf D$.) However, we cannot obtain every abstract Swiss cheese from a Swiss cheese.
For instance, in a Swiss cheese, the collection of open disks may not contain any repetitions, whereas an abstract Swiss cheese can contain repeated pairs. Moreover, an abstract Swiss cheese can contain degenerate pairs (where $r_n=0$), which is not allowed in the definition of Swiss cheeses. In \cite{mason2010}, Mason considered {\em disk assignment functions} $d:S\to\mathcal O$, where $S\subseteq\No$ with $0\in S$, $\mathcal O$ denotes the collection of all open disks and complements of closed disks in the plane, and $\mathbf E_d:=\{\mathbb C \setminus d(0),d(S\setminus \{0\})\}$ is a Swiss cheese. Note that disk assignment functions allow for repeated disks, whereas the Feinstein-Heath Swiss cheese does not. All disks considered in \cite{mason2010} were assumed to have positive radius. Such a function satisfies the {\em Feinstein-Heath condition} if the Swiss cheese $\mathbf E_d$ (as shown above) satisfies $\delta(\mathbf E_d)>0$, with $\delta(\mathbf E_d)$ given by \eqref{FHDiscrepancy}. There is a surjection from the set of all abstract Swiss cheeses $A=((a_n,r_n))$ with $r_0>0$ onto the set of all disk assignment functions. To construct this map, take $S=S_A\cup\{0\}$ and use the obvious map. In particular, this map preserves the radius sum and the associated Swiss cheese set. Each Swiss cheese set which can be obtained from an abstract Swiss cheese $A$ with $\delta_1(A)>0$ can also be obtained from a Swiss cheese $\mathbf D=(\overline{\Delta},\mathcal D)$ with $\delta(\mathbf D)>0$ (as defined in \eqref{FHDiscrepancy}). In addition, we can also obtain such a Swiss cheese set from a disk assignment function $d$ satisfying the Feinstein-Heath condition. There are notable differences between the three methods used to prove the Feinstein-Heath classicalisation theorem, which is stated as Proposition~\ref{classicalisationtheorem}. In \cite{feinheath2010}, Feinstein and Heath constructed a partial order on a suitable collection and used Zorn's lemma to obtain a maximal object with respect to this partial order, which was associated with a classical Swiss cheese. Mason \cite{mason2010} used transfinite induction to construct a family of self maps on the set $H$ of all disk assignment functions with the Feinstein-Heath condition, which eventually stabilised at a disk assignment function associated with a classical Swiss cheese. In \cite{FMY}, a compact collection of abstract Swiss cheeses was constructed so that maximising the discrepancy function $\delta_1$ and then minimising the function $\delta_2$ yields a classical abstract Swiss cheese. The classical Swiss cheese obtained by Mason's method need not be maximal in the sense of the Feinstein-Heath partial order, maximise the function $\delta_1,$ or minimise the function $\delta_2$. We refer the reader to the respective papers for more details on these approaches. Consider the following example.
\begin{figure}[htbp] \begin{tikzpicture} \draw( 0, 0) circle [radius= 2.5]; \draw (-0.625,-0.625) circle [radius=0.234]; \draw ( 0,-0.625) circle [radius=0.234]; \draw (0.625,-0.625) circle [radius=0.234]; \draw (-0.625, 0) circle [radius=0.234]; \draw ( 0, 0) circle [radius=0.234]; \draw (0.625, 0) circle [radius=0.234]; \draw (-0.625,0.625) circle [radius=0.234]; \draw ( 0,0.625) circle [radius=0.234]; \draw (0.625,0.625) circle [radius=0.234]; \draw[dashed] ( 0, 0) circle [radius=1.125]; \draw[dashed] ( 0, 0) circle [radius=2.11]; \end{tikzpicture} \caption{A classical Swiss cheese set where the discrepancy can be improved.} \label{counterexample_to_FH_Mason_equiv} \end{figure} Let $\overline{\Delta}$ be the closed unit disk, and let $\varepsilon>0$ be small. Let $z_1,\dotsc,z_9$ be the distinct points whose real and imaginary parts are either $0$ or $\pm1/4$. Take $r=3/32$. Let $D_k$ be the open disks centred at $z_k$ of radius $r$ for $k=1,2,\dotsc,9$, as shown in Figure \ref{counterexample_to_FH_Mason_equiv}. Then the $\overline{D_k}$ are pairwise disjoint, the resulting Swiss cheese (or abstract Swiss cheese) is classical, and it satisfies the conditions of the Feinstein-Heath classicalisation theorem. Applying Mason's construction will yield the same Swiss cheese, but both the Feinstein-Heath approach and the abstract Swiss cheese approach will yield new Swiss cheeses; however, these new (abstract) Swiss cheeses can be different from each other. From the definition of the partial order in \cite{feinheath2010}, it is easy to see that if an abstract Swiss cheese $A$ maximises $\delta_1$ and minimises $\delta_2$ on a suitable compact collection of Swiss cheeses (see \cite{FMY}), then the corresponding Swiss cheese must be maximal in the Feinstein-Heath partial order. The example above, illustrated by Figure \ref{counterexample_to_FH_Mason_equiv}, can be used to show that a Swiss cheese which is maximal in the Feinstein-Heath partial order need not maximise discrepancy. The dashed lines (in Figure \ref{counterexample_to_FH_Mason_equiv}) show the maximum and minimum disks which could be used to replace the collection of smaller disks. If, for example, we form two Swiss cheeses by replacing the collection by the maximum and minimum disks, then these Swiss cheeses are not comparable in the Feinstein-Heath partial order. The (abstract) Swiss cheese obtained by replacing by the minimal disk maximises discrepancy, but replacing by the maximal disk does not change the discrepancy. \section{Semiclassicalisation} Let $A$ be an abstract Swiss cheese with positive discrepancy. By the Feinstein-Heath theorem, we can find a classical abstract Swiss cheese $B$ such that $X_B\subseteq X_A$; in particular, we can obtain a semiclassical abstract Swiss cheese. We describe an inductive process where at each step we seek to increase the discrepancy of the abstract Swiss cheese by combining overlapping open disks and/or pulling in the closed disk. We show that this process produces a sequence which converges to a semiclassical abstract Swiss cheese. This process was originally described in the third author's MSc dissertation at the University of Nottingham. The following elementary lemmas are very minor modifications of the lemmas in \cite{FMY}, and we omit the details of the proofs; they are illustrated in Figure~\ref{elementary_lemma_fig}. \begin{lemma}\label{combinediscs} Let $a_1,a_2\in \mathbb C$ and $r_1$ and $r_2$ be positive real numbers such that $\ob{a_1}{r_1}\cap \ob{a_2}{r_2}\neq \emptyset$.
Then there exists a unique pair $(a,r)\in \mathbb{C}\times (0,\infty)$ such that $r< r_1+r_2,$ $\ob{a_1}{r_1}\cup \ob{a_2}{r_2}\subseteq \ob{a}{r},$ and $r$ is minimal. \end{lemma} \begin{lemma} \label{outsidedisc} Let $a_1,a_2\in \mathbb C$ and $r_1,r_2>0$ be such that $\ob{a_1}{r_1}\nsubseteq \ob{a_2}{r_2}$ and $\ob{a_2}{r_2}\nsubseteq \ob{a_1}{r_1}$. Then there exists a unique pair $(a,r)\in \mathbb{C}\times (0,\infty)$ such that $r>r_1-r_2$, $\cb{a}{r}\subseteq \cb{a_1}{r_1}$, $\cb{a}{r}\cap \ob{a_2}{r_2}=\emptyset$, and $r$ is maximal. \end{lemma} \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.45\textwidth}\centering \begin{tikzpicture} \draw[dashed] (-0.5, 0) circle [radius= 1.5]; \draw[dashed] ( 1.5, 0.5) circle [radius=1.25]; \draw[dashed] (0.379,0.22) circle [radius=2.41]; \node at (-1.12,-0.219) {$\ob{a_1}{r_1}$}; \node at (1.78,0.43) {$\ob{a_2}{r_2}$}; \node at (-0.136,1.89) {$\ob{a}{r}$}; \end{tikzpicture} \caption{Combining open disks.} \label{combininglem} \end{subfigure} ~ \begin{subfigure}[b]{0.45\textwidth} \centering \begin{tikzpicture} \draw(-0.5, 0) circle [radius= 2]; \draw[dashed] ( 1.8, 0) circle [radius=1.35]; \draw (-1.03, 0) circle [radius=1.47]; \node at (-1.49,-0.0409) {$\cb{a}{r}$}; \node at (2.31,-0.00585) {$\ob{a_2}{r_2}$}; \node at (1.6,-1.73) {$\cb{a_1}{r_1}$}; \end{tikzpicture} \caption{Pulling in of the closed disk.} \label{pullinginlem} \end{subfigure} \caption{Elementary lemmas for combining and pulling in disks.} \label{elementary_lemma_fig} \end{figure} We call the open disk $\ob{a}{r}$ in Lemma \ref{combinediscs} the \emph{minimal radius open disk} containing $\ob{a_1}{r_1}\cup\ob{a_2}{r_2}$, and we call the closed disk $\cb{a}{r}$ in Lemma \ref{outsidedisc} the \emph{maximal radius closed subdisk} of $\cb{a_1}{r_1}\setminus\ob{a_2}{r_2}$. Let $D_1$ and $D_2$ be open disks in $\C$ with $D_1\cap D_2 \neq \emptyset$. If we draw a line through the centres of these two disks (if the centres coincide, we just draw any line through the centre), we define the length of the line segment in $D_1\cap D_2$ to be the \emph{length of overlap}. See Figure~3 for an illustration. Let $A=((a_n,r_n))$ be an abstract Swiss cheese with finite discrepancy, and assume for some $k,\ell\in S_A$ we have $\ob{a_k}{r_k}\cap \ob{a_\ell}{r_\ell}\neq \emptyset$. Then by replacing these two open disks by the minimal radius open disk containing them, we can increase the discrepancy of $A$ by half the length of overlap of these two open disks. A more formal formulation of replacing (and also discarding) disks is to be made in the proof of Theorem~\ref{Semiclassicalisation}. If there exist pairs of open disks in $A$ which have non-empty intersection, there exists a pair of them with maximal length of overlap. To see this, first notice that the supremum length of overlap is finite because $\delta_1(A)$ is finite, and we denote it by $M$. If $M=0$ then there is nothing to prove. Otherwise, there are only finitely many pairs of open disks with length of overlap larger than $M/3$, because $\delta_1(A)$ is finite. Then among these pairs of open disks, we can find one pair that achieves maximal length of overlap. Let $B_1$, $B_2$, $D_1$ and $D_2$ be open disks in $\C$ with $B_1\subseteq D_1$, $B_2\subseteq D_2$ and $B_1\cap B_2 \neq \emptyset$. We claim that the length of overlap of $B_1$ and $B_2$ is no larger than the length of overlap of $D_1$ and $D_2$. The proof is elementary and we leave the details to the reader. 
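To make the two elementary lemmas and the length of overlap concrete, the following short Python sketch (an illustration of ours, not part of the proofs) computes the minimal radius open disk of Lemma~\ref{combinediscs} and the length of overlap of two overlapping disks; with the data of the disk-combining subfigure of Figure~\ref{elementary_lemma_fig} it reproduces the dashed covering disk shown there, and the radius saved, $r_1+r_2-r$, equals half the length of overlap, in agreement with the increase in discrepancy described above.
\begin{verbatim}
def min_covering_disk(a1, r1, a2, r2):
    # centre and radius of the smallest open disk containing D(a1,r1) and D(a2,r2)
    d = abs(a2 - a1)
    if d + r2 <= r1:                  # D(a2,r2) already lies inside D(a1,r1)
        return a1, r1
    if d + r1 <= r2:                  # D(a1,r1) already lies inside D(a2,r2)
        return a2, r2
    r = (r1 + r2 + d) / 2.0           # spans the far sides of both disks
    u = (a2 - a1) / d                 # unit vector from a1 towards a2
    return a1 + (r - r1) * u, r

def length_of_overlap(a1, r1, a2, r2):
    # length of the segment of the line through the centres lying in both disks
    d = abs(a2 - a1)
    return max(0.0, min(r1, d + r2) - max(-r1, d - r2))

a1, r1, a2, r2 = -0.5 + 0j, 1.5, 1.5 + 0.5j, 1.25   # data of the disk-combining subfigure
print(min_covering_disk(a1, r1, a2, r2))            # approx (0.379+0.220j), 2.406
print(length_of_overlap(a1, r1, a2, r2) / 2.0)      # approx 0.344 = (r1 + r2) - r
\end{verbatim}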
Let $\overline{D_0}$ be a closed disk and $D_1$ be an open disk in $\C$, both with positive radii, such that $D_0\nsubseteq D_1$ and $D_1\nsubseteq D_0$. If we draw a line through the centres of these two disks, we define the length of the line segment contained in $D_1\setminus \overline{D_0}$ to be the \emph{extrusion length} of $D_1$ from $\overline{D_0}$. See Figure~3 for an illustration. Let $A=((a_n,r_n))$ be an abstract Swiss cheese with positive discrepancy, and assume for some $k\in S_A$ we have $\ob{a_k}{r_k}\nsubseteq \cb{a_0}{r_0}$. Then by replacing $\cb{a_0}{r_0}$ by the maximal radius closed subdisk of $\cb{a_0}{r_0}\setminus\ob{a_k}{r_k}$ and discarding $\ob{a_k}{r_k}$, we can increase the discrepancy of $A$ by half the extrusion length of $\ob{a_k}{r_k}$ from $\cb{a_0}{r_0}$. If there exists $k\in\N$ with $\ob{a_k}{r_k}\nsubseteq\cb{a_0}{r_0}$ then there exists $\ell\in\N$ such that the length of extrusion of $\ob{a_\ell}{r_\ell}$ from $\cb{a_0}{r_0}$ is maximal. (The proof is similar to that for the maximal length of overlap above.) Let $\overline{B_0}$, $\overline{D_0}$ be closed disks and $B_1$ and $D_1$ be open disks in $\C$, with $\overline{B_0}\subseteq \overline{D_0}$, $D_1\subseteq B_1$ and $D_1\nsubseteq \overline{D_0}$. Then the extrusion length of $D_1$ from $\overline{D_0}$ is no larger than the extrusion length of $B_1$ from $\overline{B_0}$. \begin{figure}[htbp] \label{length} \centering \begin{subfigure}[b]{0.45\textwidth}\centering \begin{tikzpicture} \draw[dashed] (0, 0) circle [radius= 1]; \draw[dashed] ( 2, 0) circle [radius=1.5]; \draw[dotted] (0,0) -- (2, 0) ; \draw (0.5,0)--(1,0); \node at (0,0.5) {$D_1$}; \node at (2,0.5) {$D_2$}; \end{tikzpicture} \caption{Length of overlap.} \label{length_of_overlapping} \end{subfigure} ~ \begin{subfigure}[b]{0.45\textwidth} \centering \begin{tikzpicture} \draw[dashed] (0, 0) circle [radius= 1]; \draw ( 2, 0) circle [radius=1.5]; \draw[dotted] (-1,0) -- (2, 0) ; \draw (-1,0)--(0.5,0); \node at (0,0.5) {$D_1$}; \node at (2,0.5) {$\overline{D_0}$}; \end{tikzpicture} \caption{Extrusion length.} \label{length_of_exterior} \end{subfigure} \caption{Length of overlap and extrusion length shown by the unbroken line.} \end{figure} \begin{theorem}[Semiclassicalisation] \label{Semiclassicalisation} Let $A=((a_\ell,r_\ell))$ be an abstract Swiss cheese with $\delta_1(A)>0$. Then there exists a semiclassical abstract Swiss cheese $B$ with $X_B\subseteq X_A$ and $\delta_1(B)\geq \delta_1(A)$. \end{theorem} We remark that Theorem~\ref{Semiclassicalisation} is a corollary of the Feinstein-Heath classicalisation theorem (Proposition~\ref{classicalisationtheorem}). However, the proof given here uses an inductive construction that terminates at the first countable ordinal. In contrast, the proof given in \cite{mason2010} used a transfinite induction which terminates before the first uncountable ordinal. \begin{proof} We construct a suitable sequence of abstract Swiss cheeses $(A^{(m)})_{m=1}^\infty$ by induction such that $\delta_1(A^{(m)})$ is non-decreasing and $X_{A^{(m+1)}}\subseteq X_{A^{(m)}}$. In this construction, we apply, where appropriate, Lemmas \ref{combinediscs} and \ref{outsidedisc} alternately. Let $A^{(0)}=A$, and assume we have constructed $A^{(2m)}$ for some $m\geq 0$. If, for all distinct $k,\ell\in S_{A^{(2m)}}$, we have $\ob{a^{(2m)}_k}{r^{(2m)}_k}\cap \ob{a^{(2m)}_\ell}{r^{(2m)}_\ell}=\emptyset$, then let $A^{(2m+1)}=A^{(2m)}$. 
Otherwise we can find $k,\ell\in S_{A^{(2m)}}$ with $k<\ell$ such that $\ob{a^{(2m)}_k}{r^{(2m)}_k}\cap \ob{a^{(2m)}_\ell}{r^{(2m)}_\ell}\neq \emptyset$ and they achieve maximal length of overlap. In this case we set $A^{(2m+1)}=((a^{(2m+1)}_n,r^{(2m+1)}_n))$ where $a^{(2m+1)}_n=a^{(2m)}_n$ and $r^{(2m+1)}_n = r^{(2m)}_n$ for $n \neq k,\ell$; $a^{(2m+1)}_k, r^{(2m+1)}_k$ equal to the centre and radius of the minimal radius open disk containing $\ob{a^{(2m)}_k}{r^{(2m)}_k}\cup \ob{a^{(2m)}_\ell}{r^{(2m)}_\ell}$; and $a^{(2m+1)}_\ell=0$, $r^{(2m+1)}_\ell=0$. (Note that the technique of allocating the minimal radius open disk to the smaller index is due to Mason \cite{mason2010}.) If, for all $k\in S_{A^{(2m+1)}}$, we have $\ob{a^{(2m+1)}_k}{r^{(2m+1)}_k}\subseteq \cb{a^{(2m+1)}_0}{r^{(2m+1)}_0}$, we set $A^{(2m+2)}=A^{(2m+1)}$. Otherwise, we can find $k\in S_{A^{(2m+1)}}$ such that \[ \ob{a^{(2m+1)}_k}{r^{(2m+1)}_k}\nsubseteq \cb{a^{(2m+1)}_0}{r^{(2m+1)}_0} \] and the extrusion length of $\ob{a^{(2m+1)}_k}{r^{(2m+1)}_k}$ from $\cb{a^{(2m+1)}_0}{r^{(2m+1)}_0}$ is maximal. In this case let $A^{(2m+2)} = ((a^{(2m+2)}_n, r^{(2m+2)}_n))$ where $a^{(2m+2)}_n = a^{(2m+1)}_n$, $r^{(2m+2)}_n = r^{(2m+1)}_n$ for $n \neq 0,k$; $a^{(2m+2)}_0, r^{(2m+2)}_0$ equal to the centre and radius of the maximal radius closed subdisk of $\cb{a^{(2m+1)}_0}{r^{(2m+1)}_0}\setminus \ob{a^{(2m+1)}_k}{r^{(2m+1)}_k}$; $a^{(2m+2)}_k=0$ and $r^{(2m+2)}_k=0$. In this way we have constructed a sequence of abstract Swiss cheeses $(A^{(m)})_{m\geq 1}$. Let $m\in\No$ and $k\in\N$. If $\ob{a_k^{(2m+1)}}{r_k^{(2m+1)}}$ is not equal to $\ob{a_k^{(2m)}}{r_k^{(2m)}}$ then either $\ob{a_k^{(2m)}}{r_k^{(2m)}}\subseteq\ob{a_k^{(2m+1)}}{r_k^{(2m+1)}}$ or \begin{equation}\label{masontrick} \ob{a_k^{(2m)}}{r_k^{(2m)}}\subseteq\ob{a_\ell^{(2m+1)}}{r_\ell^{(2m+1)}}\quad\text{for some $\ell\in\N$ with $\ell<k$} \end{equation} and $r_k^{(m')}=0$ for all $m'\geq 2m+1$. If $\ob{a_k^{(2m+1)}}{r_k^{(2m+1)}}\neq\ob{a_k^{(2m+2)}}{r_k^{(2m+2)}}$ then we must have $\ob{a_k^{(2m+1)}}{r_k^{(2m+1)}}\subseteq\C \setminus\cb{a_0^{(2m+2)}}{r_0^{(2m+2)}}$ and $r_k^{(m')}=0$ for all $m'\geq 2m+2$. We notice that the sequence of closed disks $(\cb{a^{(m)}_0}{r^{(m)}_0})_m$ is nested decreasing; thus we have $a_0^{(m)}\to b_0\in\C$ and $r_0^{(m)}\to s_0\geq 0$. Such limits exist according to \cite[Proposition~2.3]{feinheath2010}. For each $k\geq 1$, we observe that the sequence of open disks $(\ob{a^{(m)}_k}{r^{(m)}_k})_{m\geq 1}$ is either bounded and nested increasing, or there exists $N\geq 1$ such that, for all $m\geq N$, we have $a^{(m)}_k=r^{(m)}_k=0$. In both cases, we see that the sequences $(a^{(m)}_k)_{m}$ and $(r^{(m)}_k)_{m}$ converge, and we denote the limits by $b_k$ and $s_k$, respectively. Letting $B = ((b_n,s_n))$ we have constructed the limit abstract Swiss cheese. We see that $A^{(m)}\to B$ in $\mathcal{F}$ with the product topology. From the construction it is clear that $\delta_1(A^{(m)})\geq \delta_1(A)$ and $(\delta_1({A^{(m)}}))$ is non-decreasing. Since $\delta_1$ is upper semicontinuous and $A^{(m)}\to B$ we see that $\delta_1(B)\geq \delta_1(A)>0$. In particular, we have $s_0>0$. We show that $X_B\subseteq X_A$. It is clear that $\cb{b_0}{s_0}\subseteq \cb{a_0}{r_0}$. Let $z\in\C\setminus X_{A}$; we show $z\notin X_B$. If $z\notin\cb{b_0}{s_0}$ then $z\in\C\setminus X_B$. Otherwise, we can find $k\in S_{A}$ such that $z\in\ob{a_k}{r_k}\cap\cb{b_0}{s_0}$. Suppose there exists $N\in \mathbb N$ such that $r_k^{(m)}=0$ for all $m\geq N$.
Since $\cb{a_0^{(m)}}{r_0^{(m)}}$ has non-empty intersection with $\ob{a_k}{r_k}$, by \eqref{masontrick}, there exists a non-increasing sequence $(\ell_m)_{m\geq N}$ of integers with $1\leq\ell_m<k$ such that $\ob{a_k}{r_k}\subseteq\ob{a_{\ell_m}^{(m)}}{r_{\ell_m}^{(m)}}$ for all $m\geq N$. This sequence $(\ell_m)_{m\geq N}$ is eventually constant, say $\ell_m=\ell$ for all $m\geq N_1$. Thus $\ob{a_k}{r_k}\subseteq\ob{a_\ell^{(m)}}{r_\ell^{(m)}}$ for all $m\geq N_1$ and it follows that $\ob{a_k}{r_k}\subseteq\ob{b_\ell}{s_\ell}$. Hence $z\notin X_B$ as required. We show that $B$ is a semiclassical abstract Swiss cheese. Assume towards a contradiction that $B$ is not semiclassical. There are two cases. The first case is that there exist $k,\ell\in S_B$ such that $\ob{b_k}{s_k}\cap \ob{b_\ell}{s_\ell} \neq \emptyset$. Then there exists $N\geq 1$ such that for all $m\geq N$ we have $\ob{a^{(m)}_k}{r^{(m)}_k}\cap \ob{a^{(m)}_\ell}{r^{(m)}_\ell} \neq \emptyset$. Let $L_m$ be the length of overlap of the disks $\ob{a^{(m)}_k}{r^{(m)}_k}$ and $\ob{a^{(m)}_\ell}{r^{(m)}_\ell}$, which is positive and non-decreasing. From the inductive construction, it is clear that $\delta_1(A^{(2m+1)})-\delta_1(A^{(2m)})\geq L_{2m}/2$ for all $m\geq N$, which contradicts $\delta_1(A^{(m)})\leq r_0$ for all $m$. The second case is that there exists $k\in S_B$ such that $\ob{b_k}{s_k}$ is not contained in $\cb{b_0}{s_0}$. Then there exists $N\geq 1$ such that $\ob{a^{(m)}_k}{r^{(m)}_k}$ is not contained in $\cb{a^{(m)}_0}{r^{(m)}_0}$ for all $m\geq N$. Let $I_m$ be the extrusion length of $\ob{a^{(m)}_k}{r^{(m)}_k}$ from $\cb{a^{(m)}_0}{r^{(m)}_0}$. Clearly $I_m>0$ and $(I_m)$ is non-decreasing. For all $m\geq N$ we have $\delta_1(A^{(2m+2)})-\delta_1(A^{(2m+1)})\geq I_{2m+1}/2$, which is a contradiction since $\delta_1(A^{(m)})\leq r_0$ for all $m$. \end{proof} \section{A classical counterexample to the conjecture of S. E. Morris} In \cite{feinstein2004}, Feinstein gave a counterexample to a conjecture of S. E. Morris by constructing a Swiss cheese set $X$ where $R(X)$ has no non-zero, bounded point derivations but $R(X)$ is not weakly amenable. What he proved is the following. \begin{theorem} \label{Joel_Morris_Cheese} For each $C>0$ there is a compact plane set $X$ obtained by deleting from the closed unit disk a countable union of open disks such that the unit circle $\mathbb{T}$ is a subset of $X,$ $R(X)$ has no non-zero$,$ bounded point derivations$,$ but for all $f,g$ in $R_0(X),$ \begin{equation} \label{cont_deri_dual} \left\lvert\int_\mathbb{T} f'(z)g(z)dz\right\rvert \leq C \abs{f}_X \abs{g}_X. \end{equation} \end{theorem} The existence of a non-zero bounded derivation from $R(X)$ to its dual space is an easy consequence of the estimate in (\ref{cont_deri_dual}) (see also \cite[p. 2390]{feinstein2004}). The theorem follows from the following lemma, which is also proved in \cite{feinstein2004}. \begin{lemma}\label{Joel_Cauchy} Let $(D_n)$ be a sequence of open disks in $\mathbb{C}$ (not necessarily pairwise disjoint) whose closures are contained in the open unit disk. Set $X=\overline{\Delta}\backslash \bigcup_{n=1}^\infty D_n$, where $\overline{\Delta}$ is the closed unit disk. Let $d_n$ be the distance from $D_n$ to $\mathbb{T}$ and let $r_n$ be the radius of $D_n$. Let $f$ and $g$ be in $R_0(X)$. Then \[ \left\lvert\int_\mathbb{T} f'(z)g(z)dz\right\rvert\leq 4 \pi \abs{f}_X \abs{g}_X \sum_{n=1}^\infty \frac{r_n}{d_n^2}.
\] \end{lemma} In this section we prove a new version of Theorem \ref{Joel_Morris_Cheese} where the Swiss cheese set $X$ is classical. This follows our general classicalisation scheme as discussed in \cite{FMY}. The main theorem of this section is the following. \begin{theorem} \label{SEMorris} For each $C>0,$ there exists a classical abstract Swiss cheese $B=((b_n,s_n))$ such that $R(X_B)$ has no non-zero bounded point derivations, $s_0=1,$ the unit circle $\mathbb{T}\subseteq X_B,$ and $\sum_{n=1}^\infty s_n/d_n^2 \leq C,$ where $d_n$ is the distance from the disk $\ob{b_n}{s_n}$ to $\mathbb{T}$ if $s_n>0$ and $d_n=1$ if $s_n=0$. \end{theorem} \begin{proof} We construct a classical abstract Swiss cheese $B=((b_n,s_n))$ such that $R(X_B)$ has no nonzero bounded point derivations and \[\sum_{n=1}^\infty \frac{s_n}{d_n^2}<C.\] For each $n\in \mathbb{N}$, let $A^{(n)}=((a_m^{(n)},r_m^{(n)}))$ be a classical annular Swiss cheese with $r_0^{(n)}=(n+1)/(n+2)$, $r_1^{(n)}=n/(n+1)$, $a_0^{(n)}=a_1^{(n)}=0$, \[ \ar(A^{(n)})<\min \left\{ \frac C {2^{n+3}(n+3)^2}, \frac 1{12(n+3)^2} \right\}, \] and such that $R(X_{A^{(n)}})$ has no non-zero bounded point derivations. For each $n\geq 1$, set \[ K_n = \left\{ z\in \mathbb{C}: \abs{z}\in \left[ \frac {n+1}{n+2}-\frac 1{4(n+3)^2}, \frac{n+1}{n+2}+\frac{1}{4(n+3)^2}\right]\right\}.\] We also choose, for each $n\geq 1$, a sequence of open disks such that the annular Swiss cheese $ B^{(n)}=((b_m^{(n)},s_m^{(n)}))$ with $b_0^{(n)}=b_1^{(n)}=0$, \[ s_0^{(n)}=\frac{n+1}{n+2}+\frac 1{4(n+3)^2},\] \[ s_1^{(n)}=\frac{n+1}{n+2}-\frac 1{4(n+3)^2},\] \[ \ar(B^{(n)})<\min \left\{ \frac C{2^{n+3}(n+3)^2}, \frac 1{12(n+3)^2}\right\},\] is classical, and such that $R(X_{B^{(n)}})$ has no non-zero bounded point derivations. We construct an abstract Swiss cheese $A=((a_m,r_m))$ such that $a_0=0$, $r_0=1$, $(r_m)$ is an enumeration of $(r_m^{(n)})_{n\geq 1, m\geq 2}$ and $(s_m^{(n)})_{n\geq 1, m\geq 2}$, and $(a_m)$ is the enumeration of $(a_m^{(n)})_{n\geq 1,m\geq 2}$ and $(b_m^{(n)})_{n\geq 1,m\geq 2}$ corresponding to $(r_m)$. By Lemma~\ref{nonredundantcheese}, there exists a redundancy-free abstract Swiss cheese $A'=((a'_m,r'_m)) \in \mathcal N$ such that $X_{A'} = X_A$ and $\rho_U(A')\leq \rho_U(A)$ for all open subset $U$ of $\mathbb C$. Notice that for fixed $n$, both disks $\ob{a_m^{(n)}}{r_m^{(n)}}$ and $\ob{b_m^{(n)}}{s_m^{(n)}}$ are contained in the disk $\ob{0}{(n+2)/(n+3)}$. We have \[ \sum_{m=1}^\infty \frac {r'_m}{(d'_m)^2} \leq \sum_{n=1}^\infty \left( (n+3)^2 \sum_{m=2}^\infty (r_m^{(n)} + s_m^{(n)})\right) \leq \frac C 4,\] where $d'_m$ is the distance from the disk $\ob{a'_m}{r'_m}$ to $\mathbb T$ if $r'_m>0$ and $d'_m=1$ otherwise. Set $M_n = 1/(4(n+3)^2)$, and let $U_n=\{z\in \mathbb{C} : \dist(z,K_n)< M_n\}.$ We observe that \begin{align} \label{8_1} \rho_{U_n}(A')\leq \rho_{U_n}(A) &\leq \ar(A^{(n)})+\ar(A^{(n+1)})+\ar(B^{(n)}) \\ \nonumber &<\min\left\{\frac 1{4(n+3)^2}, \frac{3C}{2^{n+3}(n+3)^2}\right\}. \nonumber \end{align} Then $((K_n,M_n))_{n\in \mathbb N}$ is an admissible collection of pairs for the abstract Swiss cheese $A'$, which satisfies the conditions in Proposition~\ref{localclassicalisation}. See Figure \ref{semorrisconjdiagram} for an illustration of a resulting pair $(K_n,U_n)$. 
Thus, by Proposition \ref{localclassicalisation}, there exists a classical abstract Swiss cheese $B=((b_n,s_n))$ such that $\delta_1(B)\geq\delta_1(A')$, $X_B\subseteq X_{A'}$, $b_0=0$, $s_0=1$ and $\rho_{U_n}(B)\leq \rho_{U_n}(A')$ for all $n\in \mathbb N$.
\begin{figure}[htbp] \centering \begin{tikzpicture} \draw (4.8296,-1.2941) arc [radius=5, start angle=-15, end angle= 15]; \node at (4.8296,-1.7) {$\frac{n}{n+1}$}; \draw (6.7615,-1.8117) arc [radius=7, start angle=-15, end angle= 15]; \node at (6.7615,-2.2) {$\frac{n+1}{n+2}$}; \draw (8.5001,-2.2776) arc [radius=8.8, start angle=-15, end angle= 15]; \node at (8.5001,-2.67) {$\frac{n+2}{n+3}$}; \draw (7.2444,-1.9411) arc [radius=7.5, start angle=-15, end angle= 15]; \draw (6.2785,-1.6823) arc [radius=6.5, start angle=-15, end angle= 15]; \node at (6.4,1.95) {$K_n$}; \draw[dashed] (5.7956,-1.5529) arc [radius=6, start angle=-15, end angle= 15]; \draw[dashed] (7.7274,-2.0706) arc [radius=8, start angle=-15, end angle= 15]; \node at (7.5,2.2) {$U_n$}; \end{tikzpicture} \caption{A pair $(K_n,U_n)$ as in the proof of Theorem~\ref{SEMorris}.} \label{semorrisconjdiagram} \end{figure}
We have \[ \sum_{n=1}^\infty \frac{s_n}{d_n^2} = \sum_{n\in S_1} \frac{s_n}{d_n^2} + \sum_{n\in S_2} \frac{s_n}{d_n^2}, \] where $S_2 := \bigcup_{n=1}^\infty H_B( U_n)$ and $S_1= \mathbb{N}\backslash S_2$; note here that $H_B(U_m)\cap H_B(U_n) = \emptyset$ for all $m\neq n$. For all $n\in S_1$ we have $\ob{b_n}{s_n}=\ob{a'_m}{r'_m}$ for some $m\geq 1$. Then we have \begin{equation} \label{summation_1} \sum_{n\in S_1} \frac{s_n}{d_n^2} \leq \sum_{n=1}^\infty \frac{r'_n}{(d'_n)^2}\leq \frac C4.\end{equation} On the other hand, from the construction we have \[ \sum_{n \in S_2} \frac{s_n}{d_n^2} = \sum_{n=1}^\infty \left( \sum_{m\in S_2^{(n)}} \frac{s_m}{d_m^2}\right),\] where $S_2^{(n)} := H_A(U_n).$ For each $m\in S_2^{(n)}$, since \[s_m\leq \rho_{U_n}(B)\leq \rho_{U_n}(A')<1/(4(n+3)^2)\] by \eqref{8_1}, we have $\cb{b_m}{s_m}\subseteq \ob{0}{(n+2)/(n+3)}$, so $d_m>1/(n+3)$. Again by \eqref{8_1} we observe that \[ \sum_{m\in S_2^{(n)}} s_m = \rho_{U_n}(B)\leq \rho_{U_n}(A')<\frac {3C}{2^{n+3}(n+3)^2}.\] Therefore we have \[ \sum_{m\in S_2} \frac{s_m}{d_m^2} = \sum_{n=1}^\infty \left(\sum_{m\in S_2^{(n)}} \frac{s_m}{d_m^2}\right) \leq \sum_{n=1}^\infty \sum_{m\in S_2^{(n)}}(n+3)^2s_m <\frac C2.\] Combining with \eqref{summation_1} we conclude that \[ \sum_{n=1}^\infty \frac{s_n}{d_n^2}<C.\] This concludes the proof. \end{proof}
\section{Open questions} We raise the following open questions. Let $X$ be a compact plane set. \begin{question} Let $B$ be the classical abstract Swiss cheese constructed in Theorem~$\ref{SEMorris}$. Can $R(X_B)$ be regular? Must $R(X_B)$ be regular? \end{question} \begin{question} If $R(X)$ has no non-zero bounded point derivations, must $R(X)$ be regular$?$ \end{question} \begin{question} If $R(X)$ is weakly amenable, must $R(X)$ be trivial? The same question is open for uniform algebras. \end{question}
\section{Introduction} In any theory in which the fermions have interactions mediated by heavy (scalar and/or vector) bosons, the low-energy consequences can be conveniently parametrized by four-fermion interactions. Hence, precise low-energy tests of processes involving fermions constitute a window through which one may peek into the nature of the interactions at higher energies. This procedure is, in principle, cleaner in the lepton sector where it is not obscured by hadronization. Furthermore, one usually expects the scalar-mediated interactions to have the fermionic vertices proportional to the fermion masses, thus making the tau decays the ideal system for their study. The exciting new experimental results in leptonic tau decays reported recently \cite{ALEPH:95,ARGUS:95,ARGUS:95b} provide most of the missing pieces of information and warrant for the first time a complete analysis of the lepton sector. In the Standard Model (SM), the quantum numbers of the fermions under SU(2)$_L$ are judiciously chosen in order to obtain a low-energy ``(V$-$A) $\otimes$ (V$-$A)'' four-fermion structure, correctly describing the dominant features of the experiments in $\beta$ and $\mu$ decays. One now has the opportunity to test this scheme in tau decays. Should any difference arise, it would be a sign of Physics Beyond the SM. In most extensions of the SM, these new effects arise through differences in the couplings to the $W$ and $Z$ bosons, or through the exchange of new intermediate bosons. The new four-fermion interactions thus obtained will typically be dominated by a single intermediate boson; either the one with the smallest mass or that whose couplings to the leptons are especially large. In any case, important relations exist between the low-energy parameters. This program is undertaken in what follows. In section 2 we set up the analysis in terms of the helicity projection form of the four-fermion interaction, pointing out the most salient model-independent features. We do this for completeness and to set up the notation for the subsequent sections. In section 3 we summarize the experimental situation and in section 4 we discuss the universality tests. Section 5 is devoted to the analysis of non-standard charged intermediate bosons and section 6 to lepton-flavour-changing neutral-boson interactions. Section 7 contains a summary of some features of our analysis and resulting information on the opportunities for physics beyond the SM. Finally, we draw our conclusions in section 8. The appendix is devoted to the development of relations relevant for the analysis of lepton-flavour-changing neutral bosons and a detailed discussion of the consequences of the unobservability of the final-state neutrinos. We also discuss there the complementary information extractable from neutrinoless charged-lepton decays.
\section{The four-fermion hamiltonian} Let us consider the leptonic decays $l^-\to\nu_l l'^-\bar\nu_{l'}$, where the lepton pair ($l$,$l^\prime $) may be ($\mu$,$e$), ($\tau$,$e$), or ($\tau$,$\mu$). The most general derivative-free, lepton-number conserving, four-lepton interaction hamiltonian, consistent with locality and Lorentz invariance, can be written as \cite{scheck} \begin{equation} {\cal H} = 4 \frac{G_{l'l}}{\sqrt{2}} \sum_{\epsilon,\omega = R,L}^{n = S,V,T} g^n_{l'_\epsilon l^{\phantom{'}}_\omega} \left[ \overline{l'_\epsilon} \Gamma^n {(\nu_{l'})}_\sigma \right]\, \left[ \overline{({\nu_l})_\lambda} \Gamma_n l_\omega \right]\ .
\label{eq:hamiltonian} \end{equation} The label $n$ refers to the type of interaction, namely \begin{equation} \Gamma^S = 1\ , \hspace{10mm} \Gamma^V = \gamma^\mu\ , \hspace{10mm} \Gamma^T = \frac{1}{\sqrt{2}} \,\sigma^{\mu \nu} \equiv \frac{i}{2 \sqrt{2}} \, ( \gamma^\mu \gamma^\nu - \gamma^\nu \gamma^\mu)\ , \end{equation} for the scalar, vector and tensor interactions, respectively. The neutrino chiralities, $\sigma$ and $\lambda$, are uniquely determined once $n$ and the charged-lepton chiralities, $\epsilon$ and $\omega$, are chosen. Thus, one has 19 real constants, since there are only two non-zero tensor terms and one global phase may be taken away. In any reasonable model, these couplings are the low-energy limit of scalar and/or vector-boson mediated transitions. In general, several such contributions will exist and we write \begin{equation} g^n_{l'_\epsilon l^{\phantom{'}}_\omega} = w^n_{l'_\epsilon l^{\phantom{'}}_\omega} + a^n_{l'_\epsilon l^{\phantom{'}}_\omega} + b^n_{l'_\epsilon l^{\phantom{'}}_\omega} + \cdots\ \ , \end{equation} where the letter $w$ is reserved for the known $W$ boson, and each letter ($a$, $b$, \ldots) refers to couplings originating from a given additional intermediate boson. {}From $W$ decays, as well as from $\beta$ and $\mu$ decays, we know that the $W$ vertices with leptons will necessarily give the dominant contribution to the $\tau$ and $\mu$ leptonic decays. The SM predicts that this is the only contribution, and, moreover, that there are only couplings to left-handed leptons. Hence, in the SM \begin{equation} g^V_{l'_L l^{\phantom{'}}_L} \equiv w^V_{l'_L l^{\phantom{'}}_L} = 1\ , \end{equation} and all other couplings are predicted to vanish. Of course, what one measures in these decays is the sum $g^n_{l'_\epsilon l^{\phantom{'}}_\omega}$ of all the different contributions with the same chiral structure and these may interfere constructively or destructively. For an initial lepton-polarization ${\cal P}_l$, the final charged lepton distribution in the decaying lepton rest frame is usually parametrized in the form \cite{BM:57,KS:57} \begin{eqnarray}\label{eq:spectrum} {d^2\Gamma(x,\cos\theta) \over dx\, d\cos\theta} &\!\!\! = &\!\!\! {m_l\omega^4 \over 2\pi^3}(G_{l'l}^2 N) \sqrt{x^2-x_0^2}\, \Biggl\{ x (1 - x) + {2\over 9} \rho \left(4 x^2 - 3 x - x_0^2 \right) + \eta\, x_0 (1-x) \Biggr.\nonumber\\ & & \Biggl. - {1\over 3}{\cal P}_l \, \xi \, \sqrt{x^2-x_0^2} \cos{\theta} \left[ 1 - x + {2\over 3} \delta \left( 4 x - 4 + \sqrt{1-x_0^2} \right)\right] \Biggr\} \, , \quad \end{eqnarray} where $\theta$ is the angle between the $l^-$ spin and the final charged-lepton momentum, $\, \omega \equiv (m_l^2 + m_{l'}^2)/2 m_l \, $ is the maximum $l'^-$ energy for massless neutrinos, $x \equiv E_{l'^-} / \omega$ is the reduced energy and $x_0\equiv m_{l'}/\omega$. For unpolarized $l$'s, the distribution is characterized by the so-called Michel \cite{MI:50} parameter $\rho$ and the low-energy parameter $\eta$. Two more parameters, $\xi$ and $\delta$, can be determined when the initial lepton polarization is known. If the polarization of the final charged lepton is also measured, 5 additional independent parameters \cite{PDG:94} ($\xi'$, $\xi''$, $\eta''$, $\alpha'$, $\beta'$) appear. To determine the constraints on Physics Beyond the SM, it is convenient to express the Michel parameters in terms of their deviation from the SM values \cite{mursula}.
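Before doing so, it may help to see how the shape parameters enter the distribution~(\ref{eq:spectrum}) in practice. The following Python sketch is purely our own illustration (the function name and default arguments are ours); it evaluates only the expression in curly brackets, dropping the constant prefactor, with the SM values $\rho=\delta=3/4$, $\eta=0$, $\xi=1$ as defaults.
\begin{verbatim}
# Shape of the charged-lepton spectrum, up to the overall factor
# m_l omega^4 (G^2 N) / (2 pi^3).  All inputs are dimensionless;
# x0 = m_l'/omega and pol is the parent polarization P_l.
import math

def michel_shape(x, cos_theta, rho=0.75, eta=0.0, xi=1.0, delta=0.75,
                 pol=0.0, x0=0.0):
    root = math.sqrt(max(x * x - x0 * x0, 0.0))
    isotropic = (x * (1.0 - x)
                 + (2.0 / 9.0) * rho * (4.0 * x * x - 3.0 * x - x0 * x0)
                 + eta * x0 * (1.0 - x))
    anisotropic = root * cos_theta * (
        1.0 - x + (2.0 / 3.0) * delta * (4.0 * x - 4.0
                                         + math.sqrt(1.0 - x0 * x0)))
    return root * (isotropic - (pol * xi / 3.0) * anisotropic)

# SM shape at the end point x = 1 for a fully polarized parent:
print(michel_shape(1.0, -1.0, pol=1.0))
\end{verbatim}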
One obtains, \begin{eqnarray} \rho - \frac{3}{4} & = & - \frac{3}{4 N} \left[ {|g^V_{LR}|}^2 + {|g^V_{RL}|}^2 + 2 {|g^T_{LR}|}^2 + 2{|g^T_{RL}|}^2 + \mbox{\rm Re}(g^S_{LR} g^{T \ast}_{LR} + g^S_{RL} g^{T \ast}_{RL}) \right]\ , \nonumber\\ \eta & = & \frac{1}{2 N} \mbox{\rm Re}\left[ g^S_{LL} g^{V \ast}_{RR} + g^S_{LR} g^{V \ast}_{RL} + g^S_{RL} g^{V \ast}_{LR} +g^S_{RR} g^{V \ast}_{LL} + 6(g^V_{LR} g^{T \ast}_{RL} +g^V_{RL} g^{T \ast}_{LR}) \right]\ , \nonumber\\ \xi - 1 & = & - \frac{1}{2 N} \left[ {|g^S_{LR}|}^2 + {|g^S_{RR}|}^2 + 4 (-{|g^V_{LR}|}^2 + 2 {|g^V_{RL}|}^2 + {|g^V_{RR}|}^2) \right. \nonumber\\ & & \hspace{10mm} \left. - 4 {|g^T_{LR}|}^2 + 16 {|g^T_{RL}|}^2 - 8 {\rm Re}(g^S_{LR} g^{T \ast}_{LR} - g^S_{RL} g^{T \ast}_{RL}) \right]\ , \\ ({\xi}\delta) - \frac{3}{4} & = & - \frac{3}{4 N} \left[ \frac{1}{2} ({|g^S_{LR}|}^2 + {|g^S_{RR}|}^2) + ({|g^V_{LR}|}^2 + {|g^V_{RL}|}^2 + 2 {|g^V_{RR}|}^2) \right. \nonumber\\ & & \hspace{10mm} \left. + 2 ({2 |g^T_{LR}|}^2 + {|g^T_{RL}|}^2) - \mbox{\rm Re}(g^S_{LR} g^{T \ast}_{LR} - g^S_{RL} g^{T \ast}_{RL}) \right]\ . \nonumber \label{eq:michel} \end{eqnarray} We set the overall normalization factor \begin{eqnarray} N & \equiv & \frac{1}{4} ({|g^S_{LL}|}^2 + {|g^S_{LR}|}^2 + {|g^S_{RL}|}^2 + {|g^S_{RR}|}^2) + ({|g^V_{LL}|}^2 +{|g^V_{LR}|}^2 + {|g^V_{RL}|}^2 + {|g^V_{RR}|}^2) \nonumber\\ & & + 3 ({|g^T_{LR}|}^2 + {|g^T_{RL}|}^2)\ , \label{eq:N} \end{eqnarray} to 1, as it is frequently done. This may always be done\footnote{ Alternatively, one may absorb $N$ for the ($\mu$,$e$) pair, say, use a common $G_F$ in Eq.~\ref{eq:hamiltonian}, and keep the normalization factor $N$ for the other two decays. However, care must then be taken when using published bounds for the coupling constants $g^n_{l'_\epsilon l^{\phantom{'}}_\omega}$, since the normalization in Eq.~(\ref{eq:N}) is usually adopted. } by absorbing it in the definition of $G^2_{l'l}$. We note that the parameters $\eta$ and $G^2_{l'l}$ are the only ones linear in the new-physics contributions. Namely, they have terms proportional to \begin{equation} \eta \sim \frac{1}{2} \mbox{\rm Re}(1 \times g^{S \ast}_{RR})\ , \label{eq:lineareta} \end{equation} and \begin{equation} G^2_{l'l} \propto 1 + 2 \mbox{\rm Re}(1 \times \Delta g^{V \ast}_{LL})\ , \label{eq:linearN} \end{equation} where we have used the fact that the SM contribution to $g^V_{LL}$ is approximately 1, and new contributions to $g^V_{LL}$ have been parametrized by $\Delta g^V_{LL}$. Clearly this last type of variation is only detectable if it is non-universal. It is convenient to introduce \cite{FGJ:86} the probabilities $Q_{\epsilon\omega}$ for the decay of a $\omega$-handed $l^-$ into an $\epsilon$-handed daughter lepton, \begin{eqnarray}\label{eq:Q_LL} Q_{LL} &\!\!\! = &\!\!\! {1 \over 4} |g^S_{LL}|^2 \, + \, |g^V_{LL}|^2 \;\; \phantom{+ \, 3 |g^T_{LR}|^2} = {1 \over 4}\left( -3 +{16\over 3}\rho -{1\over 3}\xi +{16\over 9}\xi\delta +\xi'+\xi'' \right) , \qquad\\ \label{eq:Q_RR} Q_{RR} &\!\!\! = &\!\!\! {1 \over 4} |g^S_{RR}|^2 \, + \, |g^V_{RR}|^2 \; \phantom{+ \, 3 |g^T_{LR}|^2} = {1 \over 4}\left( -3 +{16\over 3}\rho +{1\over 3}\xi -{16\over 9}\xi\delta -\xi'+\xi'' \right) , \quad\\ \label{eq:Q_LR} Q_{LR} &\!\!\! = &\!\!\! {1 \over 4} |g^S_{LR}|^2 \, + \, |g^V_{LR}|^2 \, + \, 3 |g^T_{LR}|^2 = {1 \over 4}\left( 5 -{16\over 3}\rho +{1\over 3}\xi -{16\over 9}\xi\delta +\xi'-\xi'' \right) , \quad\;\\ \label{eq:Q_RL} Q_{RL} &\!\!\! = &\!\!\! 
{1 \over 4} |g^S_{RL}|^2 \, + \, |g^V_{RL}|^2 \, + \, 3 |g^T_{RL}|^2 = {1 \over 4}\left( 5 -{16\over 3}\rho -{1\over 3}\xi +{16\over 9}\xi\delta -\xi'-\xi'' \right) . \quad\; \end{eqnarray} Upper bounds on any of these (positive-semidefinite) probabilities translate into corresponding limits for all couplings with the given chiralities. The total decay rate is given by \begin{equation}\label{eq:gamma} \Gamma\, = \, {m_l^5 G_{l'l}^2\over 192 \pi^3}\, \left\{ f\!\left({m_{l'}^2\over m_l^2}\right) + 4\eta\, {m_{l'}\over m_l}\, g\!\left({m_{l'}^2\over m_l^2}\right) \right\} r_{\mbox{\rm\scriptsize RC}} \, , \end{equation} where \begin{eqnarray}\label{eq:f_g} f(z) &\!\!\! = &\!\!\! 1 - 8 z + 8 z^3 - z^4 - 12 z^2 \ln{z} \, , \\ g(z) &\!\!\! = &\!\!\! 1 + 9 z - 9 z^2 - z^3 + 6 z (1+z) \ln{z} \, . \end{eqnarray} Thus, the normalization $G_{e \mu}$ corresponds to the Fermi coupling $G_F$, measured in $\mu$ decay. The factor \begin{equation}\label{eq:r_RC} r_{\mbox{\rm\scriptsize RC}} \, = \, \left[1 + {\alpha(m_l) \over 2 \pi } \left({25 \over 4} - \pi^2 \right) \right] \, \left[ 1 + { 3 \over 5 } {m_l^2 \over M_W^2} - 2 {m_{l'}^2 \over M_W^2}\right] \, , \end{equation} takes into account radiative corrections not included in the Fermi coupling constant $G_F$, and the non-local structure of the $W$ propagator. These effects \cite{MS:88} are quite small: $r_{\mbox{\rm\scriptsize RC}}^{\tau\to\mu,e} = 0.9960$; $r_{\mbox{\rm\scriptsize RC}}^{\mu\to e} = 0.9958$. Notice that we are adopting the usual procedure of taking the radiative corrections within the Standard Model. Since we assume that the Standard Model provides the dominant contribution to the decay rate, any additional higher-order correction beyond the effective four-fermion Hamiltonian (\ref{eq:hamiltonian}) would be a subleading effect. The kinematical integrations have been done assuming massless neutrinos. The numerical correction induced by a non-zero $\nu_l$ mass, $r_{\nu_l} \equiv 1 + \delta_{\nu_l}\approx 1 - 8 (m_{\nu_l}/m_l)^2$, is quite small. The present experimental upper limits \cite{PDG:94,ALEPH:95b} on the neutrino masses imply: $|\delta_{\nu_\mu}^{\mu\to e}| < 5 \times 10^{-5}$ (90\% CL), $|\delta_{\nu_\mu}^{\tau\to\mu}| < 2 \times 10^{-7}$ (90\% CL), $|\delta_{\nu_\tau}^{\tau\to\mu,e}| < 1.4 \times 10^{-3}$ (95\% CL). It is fortunate that the two parameters which are linear in the new-physics contributions, $\eta$ and $G^2_{l'l}$, are precisely the ones which survive in the total decay width. One can then study them with non-universality searches which already provide very precise tests of the lepton sector.
\section{Experimental summary} \label{sec:exp} For $\mu$-decay, where precise measurements of the polarizations of both $\mu$ and $e$ have been performed, there exist \cite{FGJ:86} upper bounds on $Q_{RR}$, $Q_{LR}$ and $Q_{RL}$, and a lower bound on $Q_{LL}$. They imply corresponding upper bounds on the 8 couplings $|g^n_{RR}|$, $|g^n_{LR}|$ and $|g^n_{RL}|$. The measurements of the $\mu^-$ and the $e^-$ do not allow us to determine $|g^S_{LL}|$ and $|g^V_{LL}|$ separately \cite{FGJ:86,JA:66}. Nevertheless, since the helicity of the $\nu_\mu$ in pion decay is experimentally known to be $-1$, a lower limit on $|g^V_{LL}|$ is obtained \cite{FGJ:86} from the inverse muon decay $\nu_\mu e^-\to\mu^-\nu_e$. The present (90\% CL) bounds \cite{PDG:94} on the $\mu$-decay couplings are given in Table~\ref{tab:mu_couplings}.
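For orientation, Eqs.~(\ref{eq:michel}) and the normalization (\ref{eq:N}) are easily evaluated numerically for any assumed set of couplings. The following Python sketch is our own illustration (the function and dictionary conventions are ours, not part of the quoted analyses); with pure V$-$A input it reproduces the SM values $\rho=\xi\delta=3/4$, $\xi=1$, $\eta=0$, and a small $g^S_{RR}$ admixture exhibits the linear sensitivity of $\eta$ noted in Eq.~(\ref{eq:lineareta}).
\begin{verbatim}
# Michel parameters from the effective couplings; couplings are complex
# numbers keyed by (type, chirality), e.g. ("V", "LL"); missing entries
# are treated as zero.
def michel_parameters(g):
    def c(n, e):
        return complex(g.get((n, e), 0.0))
    def a2(n, e):
        return abs(c(n, e)) ** 2
    st = lambda e: (c("S", e) * c("T", e).conjugate()).real  # Re(g^S g^T*)

    N = 0.25 * (a2("S", "LL") + a2("S", "LR") + a2("S", "RL") + a2("S", "RR")) \
        + a2("V", "LL") + a2("V", "LR") + a2("V", "RL") + a2("V", "RR") \
        + 3.0 * (a2("T", "LR") + a2("T", "RL"))

    rho = 0.75 - (0.75 / N) * (a2("V", "LR") + a2("V", "RL")
                               + 2.0 * a2("T", "LR") + 2.0 * a2("T", "RL")
                               + st("LR") + st("RL"))

    eta = (0.5 / N) * (c("S", "LL") * c("V", "RR").conjugate()
                       + c("S", "LR") * c("V", "RL").conjugate()
                       + c("S", "RL") * c("V", "LR").conjugate()
                       + c("S", "RR") * c("V", "LL").conjugate()
                       + 6.0 * (c("V", "LR") * c("T", "RL").conjugate()
                                + c("V", "RL") * c("T", "LR").conjugate())).real

    xi = 1.0 - (0.5 / N) * (a2("S", "LR") + a2("S", "RR")
                            + 4.0 * (-a2("V", "LR") + 2.0 * a2("V", "RL")
                                     + a2("V", "RR"))
                            - 4.0 * a2("T", "LR") + 16.0 * a2("T", "RL")
                            - 8.0 * (st("LR") - st("RL")))

    xid = 0.75 - (0.75 / N) * (0.5 * (a2("S", "LR") + a2("S", "RR"))
                               + a2("V", "LR") + a2("V", "RL")
                               + 2.0 * a2("V", "RR")
                               + 2.0 * (2.0 * a2("T", "LR") + a2("T", "RL"))
                               - (st("LR") - st("RL")))

    return {"N": N, "rho": rho, "eta": eta, "xi": xi, "xi_delta": xid}

print(michel_parameters({("V", "LL"): 1.0}))                    # SM limit
print(michel_parameters({("V", "LL"): 1.0, ("S", "RR"): 0.1}))  # eta ~ 0.05
\end{verbatim}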
These limits show nicely that the bulk of the $\mu$-decay transition amplitude is indeed of the predicted V$-$A type. \begin{table}[hbt] \centering \begin{tabular}{|l|l|l|} \hline $|g^S_{e_L \mu_L}| < 0.55$ & $|g^V_{e_L \mu_L}| > 0.96$ & \hfil -- \hfil \\ $|g^S_{e_R \mu_R}| < 0.066$ & $|g^V_{e_R \mu_R}| < 0.033$ & \hfil -- \hfil \\ $|g^S_{e_L \mu_R}| < 0.125$ & $|g^V_{e_L \mu_R}| < 0.060$ & $|g^T_{e_L \mu_R}| < 0.036$\\ $|g^S_{e_R \mu_L}| < 0.424$ & $|g^V_{e_R \mu_L}| < 0.110$ & $|g^T_{e_R \mu_L}| < 0.122$\\ \hline \end{tabular} \caption{90\% CL experimental limits \protect\cite{PDG:94} for the $\mu$-decay $g^n_{e_\epsilon \mu_\omega}$ couplings.} \label{tab:mu_couplings} \end{table} The experimental analysis of the $\tau$-decay parameters is necessarily different from the one applied to the muon, because of the much shorter $\tau$ lifetime. The measurement of the $\tau$ polarization and the parameters $\xi$ and $\delta$ is still possible due to the fact that the spins of the $\tau^+\tau^-$ pair produced in $e^+e^-$ annihilation are strongly correlated [14--23]. However, the polarization of the charged lepton emitted in the $\tau$ decay has never been measured. In principle, this could be done for the decay $\tau^-\to\mu^-\bar\nu_\mu\nu_\tau$ by stopping the muons and detecting their decay products \cite{FE:90}. The measurement of the inverse decay $\nu_\tau l^-\to\tau^-\nu_l$ looks far out of reach. The present experimental status on the $\tau$-decay Michel parameters is shown in Table~\ref{tab:tau_michel}, which gives the world-averages of all published \cite{ALEPH:95,ARGUS:95,ARGUS:95b,PDG:94} measurements. The improved accuracy of the most recent experimental analyses has brought an enhanced sensitivity to the different shape parameters, allowing the first measurements of $\eta_{\tau\to\mu}$ \cite{ALEPH:95,ARGUS:95}, $\xi_{\tau\to e}$, $\xi_{\tau\to\mu}$, $(\xi\delta)_{\tau\to e}$ and $(\xi\delta)_{\tau\to\mu}$ \cite{ALEPH:95}. (The ARGUS measurement \cite{ARGUS:95b} of $\xi_{\tau\to l}$ and $(\xi\delta)_{\tau\to l}$ assumes identical couplings for $l=e,\mu$. A measurement of $\sqrt{\xi_{\tau\to e}\xi_{\tau\to\mu}}$ was published previously \cite{ARGUS:93}). \begin{table}[htb] \centering \begin{tabular}{|c|c|c|c|c|} \hline Parameter & $\tau^-\to\mu^-$ & $\tau^-\to e^-$ & With Lepton-Universality & SM \\ \hline $\rho$ & $0.738\pm 0.038$ & $0.736\pm 0.028$ & $0.733\pm 0.022$ & 0.75 \\ $\eta$ & $-0.14\pm 0.23\phantom{-}$ & -- & $-0.01\pm 0.14\phantom{-}$ & 0 \\ $\xi$ & $1.23\pm 0.24$ & $1.03\pm 0.25$ & $1.06\pm 0.11$ & 1 \\ $\xi\delta$ & $0.71\pm 0.15$ & $ 1.11\pm 0.18$ & $ 0.76\pm 0.09$ & 0.75 \\ \hline \end{tabular} \caption{Experimental averages of the $\tau$-decay Michel parameters \protect\cite{ALEPH:95,ARGUS:95,ARGUS:95b,PDG:94}. The fourth column assumes lepton universality.} \label{tab:tau_michel} \end{table} The determination of the $\tau$-polarization parameters \cite{ALEPH:95,ARGUS:95b,RO:95}, allows us to bound the total probability for the decay of a right-handed $\tau$, \begin{equation}\label{eq:Q_R} Q_{\tau_R} \equiv Q_{l'_R\tau^{\phantom{'}}_R} + Q_{l'_L\tau^{\phantom{'}}_R} = \frac{1}{2}\, \left[ 1 + \frac{\xi}{3} - \frac{16}{9} (\xi\delta)\right] \; . \end{equation} One finds (ignoring possible correlations among the measurements): \begin{eqnarray} Q_{\tau_R}^{\tau\to\mu} &\!\!\! =&\!\!\! \phantom{-}0.07\pm 0.14 \; < \, 0.28 \quad (90\%\;\mbox{\rm CL})\, , \\ Q_{\tau_R}^{\tau\to e} &\!\!\! =&\!\!\! 
-0.32\pm 0.17 \; < \, 0.14 \quad (90\%\;\mbox{\rm CL})\, , \\ Q_{\tau_R}^{\tau\to l} &\!\!\! =&\!\!\! \phantom{-}0.00\pm 0.08 \; < \, 0.14 \quad (90\%\;\mbox{\rm CL})\, , \end{eqnarray} where the last value refers to the $\tau$-decay into either $l=e$ or $\mu$, assuming universal leptonic couplings. Since these probabilities are positive semidefinite quantities, they imply corresponding limits on all $|g^n_{l_R\tau_R}|$ and $|g^n_{l_L\tau_R}|$ couplings. The quoted 90\% CL have been obtained adopting a Bayesian approach for one-sided limits \cite{PDG:94}. Table~\ref{table:g_tau_bounds} gives the implied bounds on the $\tau$-decay couplings. \begin{table}[hbt] \centering \begin{tabular}{||l|l||l||} \hline \hfil $\tau\to\mu$\hfil &\hfil $\tau\to e$ \hfil & \hfil $\tau\to l$ \hfil \\\hline\hline $|g^S_{\mu_R\tau_R}| < 1.05$ & $|g^S_{e_R\tau_R}| < 0.75^*$ & $|g^S_{l_R\tau_R}| < 0.74$ \\ $|g^S_{\mu_L\tau_R}| < 1.05$ & $|g^S_{e_L\tau_R}| < 0.75^*$ & $|g^S_{l_L\tau_R}| < 0.74$ \\ \hline $|g^V_{\mu_R\tau_R}| < 0.53$ & $|g^V_{e_R\tau_R}| < 0.38^*$ & $|g^V_{l_R\tau_R}| < 0.37$ \\ $|g^V_{\mu_L\tau_R}| < 0.53$ & $|g^V_{e_L\tau_R}| < 0.38^*$ & $|g^V_{l_L\tau_R}| < 0.37$ \\ \hline $|g^T_{\mu_L\tau_R}| < 0.30$ & $|g^T_{e_L\tau_R}| < 0.22^*$ & $|g^T_{l_L\tau_R}| < 0.21$ \\ \hline \end{tabular} \caption{90\% CL limits for the $\tau_R$-decay $g^n_{l_\epsilon \tau_R}$ couplings. The numbers with an asterisk use the measured value of $(\xi\delta)_e$; the meaning of the assigned confidence level could be doubtful in this case (see text).} \label{table:g_tau_bounds} \end{table} Notice, however, that the central value of $Q_{\tau_R}^{\tau\to e}$ turns out to be negative at the $2\sigma$ level; i.e.~, there is only a 3\% probability to have a positive value of $Q_{\tau_R}^{\tau\to e}$. Therefore, the limits on $|g^n_{e_R\tau_R}|$ and $|g^n_{e_L\tau_R}|$ should be taken with some caution, since the meaning of the assigned confidence level is not at all clear. The problem clearly comes from the measured value of $(\xi\delta)_e$. In order to get a positive probability $Q_{\tau_R}$, one needs $(\xi -1) > \frac{16}{3} [(\xi\delta) -\frac{3}{4}]$. Thus, $(\xi\delta)$ can only be made larger than $3/4$ at the expense of making $\xi$ correspondingly much larger than one. Hence, if the current values of the Michel parameters for the decay of tau into electron and neutrinos were to be confirmed, one would have to go beyond the effective hamiltonian of Eq.~(\ref{eq:hamiltonian}): {\it the combined observations for $\xi_{\tau\to e}$ and $(\xi\delta)_{\tau\to e}$ are not consistent with an effective four-fermion interaction of the form in Eq.~(\ref{eq:hamiltonian})}. That is to say that no flavour-conserving, derivative-free, four-lepton interaction can be found, satisfying both these results simultaneously. Further, since lepton-flavour violations have no measurable effect if the final neutrinos are massless and unobserved \cite{langacker}, and derivative couplings would be suppressed by $m_\tau^2/M_W^2 \sim 5 \times 10^{-4}$, a sizeable effect not included in the effective Hamiltonian (\ref{eq:hamiltonian}) seems very unlikely\footnote{ The alternative is to go beyond the four-fermion Hamiltonian (\protect\ref{eq:hamiltonian}), allowing, for example, the decay of the tau into an electron and two (unobserved) neutral scalars, such as Majorons \protect\cite{SBP:87} or supersymmetric scalar neutrinos.}. 
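The $Q_{\tau_R}$ values quoted above follow directly from Eq.~(\ref{eq:Q_R}) and the Michel parameters of Table~\ref{tab:tau_michel}. A minimal sketch of this arithmetic (ours, with naive Gaussian error propagation and, as stated above, no correlations):
\begin{verbatim}
# Q_tauR = (1/2) [ 1 + xi/3 - (16/9) xi*delta ]
import math

def q_tau_r(xi, dxi, xidelta, dxidelta):
    q = 0.5 * (1.0 + xi / 3.0 - (16.0 / 9.0) * xidelta)
    dq = 0.5 * math.hypot(dxi / 3.0, (16.0 / 9.0) * dxidelta)
    return q, dq

for label, vals in [("tau->mu", (1.23, 0.24, 0.71, 0.15)),
                    ("tau->e",  (1.03, 0.25, 1.11, 0.18)),
                    ("tau->l",  (1.06, 0.11, 0.76, 0.09))]:
    q, dq = q_tau_r(*vals)
    print(label, round(q, 3), "+-", round(dq, 3))
# Compare with the central values and errors quoted in the text.
\end{verbatim}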
Hence, based solely on theoretical grounds, one can draw the conclusion that, within a four-fermion hamiltonian, either $(\xi\delta)_{\tau\to e}$ comes into agreement with the SM, or $\xi_{\tau\to e}$ must move by a factor close to $2$ (a most unreasonable proposition). Table~\ref{tab:parameters} gives the world-average values of $m_\tau$, $\tau_\tau$, $B_l\equiv\mbox{\rm Br}(\tau^-\to\nu_\tau l^-\bar\nu_l)$, $B_\pi\equiv\mbox{\rm Br}(\tau^-\to\nu_\tau\pi^-)$, $B_K\equiv\mbox{\rm Br}(\tau^-\to\nu_\tau K^-)$ and $B_h\equiv\mbox{\rm Br}(\tau^-\to\nu_\tau\pi^- + \nu_\tau K^-)$. In view of the significant improvements achieved with the most recent data, updated numbers including preliminary results reported in the last $\tau$ Workshop \cite{MO:94} are also given.
\begin{table}[thb] \centering \begin{tabular}{|c|c|c|} \hline Parameter & PDG 94 & Montreux 94 \\ \hline $m_\tau$ & $(1777.1^{+0.4}_{-0.5})$ MeV & $(1777.0\pm 0.3)$ MeV \\ $\tau_\tau$ & $(295.6\pm 3.1)\times 10^{-15}$ s & $(291.6\pm 1.6)\times 10^{-15}$ s \\ $B_e$ & $(18.01\pm 0.18)\% $ & $(17.79\pm 0.09)\% $ \\ $B_\mu$ & $(17.65\pm 0.24)\% $ & $(17.33\pm 0.09)\% $ \\ $B_\pi$ & $(11.7\pm 0.4)\% $ & $(11.09\pm 0.15)\% $ \\ $B_K$ & $(0.67\pm 0.23)\% $ & $(0.68\pm 0.04)\% $ \\ $B_h$ & $(12.88\pm 0.34)\% $ & $(11.77\pm 0.14)\% $ \\ \hline \end{tabular} \caption{World-average values \protect\cite{PDG:94} of the $\tau$ mass, lifetime, leptonic branching ratios and $\mbox{\rm Br}(\tau^-\to\nu_\tau\pi^-/K^-)$. The updated numbers in the third column include {\it preliminary} results reported in the last $\tau$ Workshop \protect\cite{MO:94}.} \label{tab:parameters} \end{table}
\section{Universality tests} \label{sec:universality} The universality of the leptonic couplings can be tested through the ratios of the measured leptonic-decay widths: \begin{eqnarray} \label{eq:Gmu/Ge} {\Gamma_{\tau\to\mu}\over\Gamma_{\tau\to e}} & \Longrightarrow & \left|{\widehat G_{\mu\tau} \over \widehat G_{e\tau} }\right| = \left\{ \begin{array}{cc} 1.0038\pm 0.0087 \quad & \mbox{\rm (PDG 94)} \\ 1.0008 \pm 0.0036 \quad & \mbox{\rm (Montreux 94)} \end{array}\right. , \\ \label{eq:Gtau/Gmu} {\Gamma_{\tau\to\mu}\over\Gamma_{\mu\to e}} & \Longrightarrow & \left|{\widehat G_{\mu\tau} \over \widehat G_{e\mu} }\right| = \left\{ \begin{array}{cc} 0.9970\pm 0.0073 \quad &\mbox{\rm (PDG 94)} \\ 0.9979 \pm 0.0037 \quad & \mbox{\rm (Montreux 94)} \end{array}\right. , \end{eqnarray} where \begin{equation}\label{eq:Ghat_def} \widehat G_{l'l} \,\equiv\, G_{l'l} \, \sqrt{1 + 4\,\eta_{l\to l'}\, {m_{l'}\over m_l}\, {g\!\left( m_{l'}^2/ m_l^2 \right)\over f\!\left( m_{l'}^2/ m_l^2 \right)}} \, . \end{equation} An important point, emphatically stressed by Fetscher and Gerber \cite{fgreview}, concerns the extraction of $G_{e \mu}$ from $\mu$ decays, whose uncertainty is dominated by the uncertainty in $\eta$. In models where $\eta=0$, $\widehat G_{l'l} = G_{l'l}$; then the limits (\ref{eq:Gmu/Ge}) and (\ref{eq:Gtau/Gmu}) strongly constrain possible deviations from universality. To first-order in new physics, $G_{l'l}\propto 1 + \mbox{\rm Re}(\Delta g_{LL}^V)$. Therefore, at 90\% CL, $-0.005 \ (-0.010) < \mbox{\rm Re}(\Delta g_{\mu_L\tau_L}^V- \Delta g_{e_L\tau_L}^V) < 0.007$ (0.018) and $-0.008 \ (-0.015) < \mbox{\rm Re}(\Delta g_{\mu_L\tau_L}^V- \Delta g_{e_L\mu_L}^V) < 0.004$ (0.009), using the Montreux 94 (PDG~94) data. Conversely, if lepton universality is assumed (i.e. $G_{l'l} = G_F$, $\, g^n_{l'_\epsilon l^{\phantom{'}}_\omega} \!
= g^n_{\epsilon\omega}$), the leptonic decay ratios (\ref{eq:Gmu/Ge}) and (\ref{eq:Gtau/Gmu}) provide limits on the low-energy parameter $\eta$. The best sensitivity \cite{stahl} comes from $\widehat G_{\mu\tau}$, where the term proportional to $\eta$ is not suppressed by the small $m_e/m_l$ factor. The measured $B_\mu/B_e$ ratio then implies: \begin{equation}\label{eq:eta_univ} \eta \, = \, \left\{ \begin{array}{cc} 0.034 \pm 0.076 \quad &\mbox{\rm (PDG 94)} \\ 0.007\pm 0.033 \quad & \mbox{\rm (Montreux 94)} \end{array}\right. . \end{equation} This determination is more accurate than the one in Table~\ref{tab:tau_michel}, obtained from the shape of the energy distribution, and is comparable to the value measured in $\mu$-decay: $\eta_{\mu\to e} = -0.007\pm 0.013$ \cite{PDG:94}. A non-zero value of $\eta$ would show that there are at least two different couplings with opposite chiralities for the charged leptons. Since we assume the V$-$A coupling $g_{LL}^V$ to be dominant, the second coupling would be \cite{FE:90} a Higgs-type coupling $g^S_{RR}$ [$\eta\approx\mbox{\rm Re}(g^S_{RR})/2$, to first-order in new-physics contributions]. Thus, Eq.~(\ref{eq:eta_univ}) gives the (90\% CL) bound: $-0.09 \, (-0.18) <\mbox{\rm Re}(g^S_{RR}) < 0.12$ (0.32), using the Montreux 94 (PDG 94) data. Finally, in models in which the new physics couples exclusively to the lepton sector (so that the CKM matrix is unitary), further information may be found by comparing $G_{l'l}$ with $G_F$ as extracted from the combination of $\beta$ and $K_{e3}$ decays \cite{barroso}. Indeed, the usually quoted values for the CKM angles are extracted assuming that the coupling constant $g$, which couples the $W$ to fermions, is the same for quarks and leptons. Thus, if there are new contributions affecting only the lepton couplings, any deviation from unitarity in the first row of the CKM matrix reflects a deviation of $g_\mu$ from the SM value.
\subsection{$W$-exchange model} The universality constraints are commonly presented assuming that the leptonic decays proceed exclusively through the SM V$-$A interaction. In that case the $\widehat G_{l'l}$ ratios reduce to the corresponding ratios of leptonic $W$-couplings: $|\widehat G_{\mu\tau}/\widehat G_{e\tau}| = |g_\mu/g_e|$; $|\widehat G_{\mu\tau}/\widehat G_{e\mu}| = |g_\tau/g_e|$. Eq.~(\ref{eq:Gmu/Ge}) should then be compared with the more accurate value \cite{BR:92,CZ:93} \begin{equation}\label{eq:univ_pi} \left| {g_{\mu} \over g_e} \right| = 1.0017 \pm 0.0015 \, , \end{equation} obtained from the ratio $R_{e/\mu}\equiv\Gamma(\pi^-\to e^-\bar\nu_e)/ \Gamma(\pi^-\to\mu^-\bar\nu_\mu)$. The decay modes $\tau^-\to\nu_\tau\pi^-$ and $\tau^-\to\nu_\tau K^-$ can also be used to test universality through the ratios \begin{eqnarray}\label{eq:R_tp} R_{\tau/\pi} & \!\!\!\equiv &\!\! {\Gamma(\tau^-\to\nu_\tau\pi^-) \over \Gamma(\pi^-\to \mu^-\bar\nu_\mu)} = \left| {g_\tau\over g_\mu}\right|^2 {m_\tau^3\over 2 m_\pi m_\mu^2} \, {(1-m_\pi^2/ m_\tau^2)^2\over (1-m_\mu^2/ m_\pi^2)^2} \left( 1 + \delta R_{\tau/\pi}\right) , \qquad \\ \label{eq:R_tk} R_{\tau/K} &\!\!\! \equiv &\!\!\! {\Gamma(\tau^-\to\nu_\tau K^-) \over \Gamma(K^-\to \mu^-\bar\nu_\mu)} = \left| {g_\tau\over g_\mu}\right|^2 {m_\tau^3\over 2 m_K m_\mu^2} {(1-m_K^2/m_\tau^2)^2\over (1-m_\mu^2/ m_K^2)^2} \left( 1 + \delta R_{\tau/K}\right) , \qquad \end{eqnarray} where the dependence on the hadronic matrix elements (the so-called decay constants $f_{\pi,K}$) factors out.
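As a cross-check of the leptonic entries above, the ratio in Eq.~(\ref{eq:Gmu/Ge}) follows from the branching ratios of Table~\ref{tab:parameters} and the phase-space function $f(z)$ of Eq.~(\ref{eq:f_g}). A minimal sketch (ours, neglecting the small $\eta$ term of Eq.~(\ref{eq:Ghat_def}); $r_{\mbox{\rm\scriptsize RC}}$ cancels in the ratio):
\begin{verbatim}
import math

def f(z):
    return 1.0 - 8.0*z + 8.0*z**3 - z**4 - 12.0*z*z*math.log(z)

m_tau, m_mu, m_e = 1.7770, 0.105658, 0.000511   # GeV

def coupling_ratio(B_mu, B_e):
    # |G_mu_tau / G_e_tau| from Gamma(tau->mu)/Gamma(tau->e)
    return math.sqrt((B_mu / B_e) * f((m_e/m_tau)**2) / f((m_mu/m_tau)**2))

print(coupling_ratio(0.1733, 0.1779))   # Montreux 94 -> ~1.0008
print(coupling_ratio(0.1765, 0.1801))   # PDG 94      -> ~1.0038
\end{verbatim}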
Owing to the different energy scales involved, the radiative corrections to the $\tau^-\to\nu_\tau\pi^-/K^-$ amplitudes are, however, not the same as the corresponding effects in $\pi^-/K^-\to\mu^-\bar\nu_\mu$. The size of the relative correction has been estimated by Marciano and Sirlin \cite{MS:93} to be $\delta R_{\tau/\pi} = (0.67\pm 1.0)\% $, where the 1\% error is due to the missing long-distance contributions to the tau decay rate. A recent evaluation of those long-distance corrections \cite{DF:94} quotes the more precise values \begin{equation}\label{eq:dR_tp_tk} \delta R_{\tau/\pi} = (0.16\pm 0.14)\% , \qquad\qquad \delta R_{\tau/K} = (0.90\pm 0.22)\% . \end{equation} Using these numbers, the measured $\tau^-\to\pi^-\nu_\tau$ and $\tau^-\to K^-\nu_\tau$ decay rates imply \begin{equation}\label{eq:g_tau_mu/pi_k} \left| {g_\tau\over g_\mu}\right|_\pi = \left\{ \begin{array}{c} 1.027\pm 0.018 \\ 1.006 \pm 0.008 \end{array}\right. ; \quad \left| {g_\tau\over g_\mu}\right|_K = \left\{ \begin{array}{cc} 0.96\pm 0.17 \quad & \mbox{\rm (PDG 94)} \\ 0.972\pm 0.029 \quad & \mbox{\rm (Montreux 94)} \end{array}\right. . \end{equation} The inclusive sum of both decay modes, i.e. $\Gamma[\tau^-\to h^-\nu_\tau]$ with $h=\pi,K$, provides a slightly more accurate determination: \begin{equation}\label{eq:g_tau_mu/pi/k} \left| {g_\tau\over g_\mu}\right|_{\pi/K} = \left\{ \begin{array}{cc} 1.043\pm 0.015 \qquad & \mbox{\rm (PDG 94)} \\ 1.004\pm 0.007 \qquad & \mbox{\rm (Montreux 94)} \end{array}\right. . \end{equation} An independent test of lepton universality has been obtained at the $p$-$\bar p$ colliders by comparing the ratios of the $\sigma \cdot B$ partial production cross-sections for the various $\, W^- \to l^- \bar\nu_l \, $ decay modes. The results of these analyses \cite{UA1:89,UA2:91,CDF:92} are however less precise: \begin{equation} \label{eq:univIV} \left| {g_{\mu} \over g_e } \right| = 1.00 \pm 0.08 \, , \quad \left| {g_{\tau} \over g_e } \right| = 0.99 \pm 0.04 \, . \end{equation} Thus, the present data verify the universality of the leptonic charged-current couplings to the 0.16\% ($e/\mu$) and 0.37\% ($\tau/\mu$) level. The precision of the most recent $\tau$-decay measurements is becoming competitive with the more accurate $\pi$-decay determination. It is important to realize the complementarity of the different universality tests. The pure leptonic decay modes probe the charged-current couplings of a transverse $W$. In contrast, the decays $\pi/K\to l\bar\nu$ and $\tau\to\nu_\tau\pi/K$ are only sensitive to the longitudinal $W$ couplings. One can easily imagine new-physics scenarios which would modify the two types of leptonic couplings differently \cite{MA:94}. For instance, in the usual two-Higgs-doublet model, the charged-scalar exchange generates a correction to the ratio $B_\mu/B_e$, but the pion-decay ratio $R_{e/\mu}$ remains unaffected. Similarly, lepton mixing between the $\nu_\tau$ and a hypothetical heavy neutrino would not modify the ratios $B_\mu/B_e$ and $R_{e/\mu}$, but would certainly correct the relation between $\Gamma(\tau^-\to\nu_\tau l^-\bar\nu_l)$ and $\Gamma(\mu^-\to\nu_\mu e^-\bar\nu_e)$.
\section{Constraints on new charged bosons} In this section we assume that the interactions are mediated by charged vectors and/or charged scalars; therefore, there are no tensor couplings and Eqs.~(\ref{eq:michel}) become simpler.
In particular, the quantities $(1-\frac{4}{3}\rho)$ and $(1-\frac{4}{3}\xi\delta)$ reduce to sums of $|g^n_{l'_\epsilon l_\omega}|^2$, which are positive semidefinite; i.e.~, in the absence of tensor couplings, $\rho\leq\frac{3}{4}$ and $\xi\delta\leq\frac{3}{4}$. This allows us to extract direct bounds on several couplings. The measured values of $\rho_{\mu\to e}$, $\rho_{\tau\to\mu}$, $\rho_{\tau\to e}$ and $\rho_{\tau\to l}$ ($l=e,\mu$) imply: \begin{equation}\label{eq:rho_bounds_CH} \begin{array}{cccc} |g^V_{e_L\mu_R}|^2 + |g^V_{e_R\mu_L}|^2 \, &=\, -0.0024 \pm 0.0035 \; &< \; 0.0045 \quad & (90\%\;\mbox{\rm CL})\, , \\ |g^V_{\mu_L\tau_R}|^2 + |g^V_{\mu_R\tau_L}|^2 \, &=\, \phantom{-0}0.016 \pm 0.051\phantom{0} \; &< \; 0.094 \quad & (90\%\;\mbox{\rm CL})\, , \\ |g^V_{e_L\tau_R}|^2 + |g^V_{e_R\tau_L}|^2 \, &=\, \phantom{-0}0.019 \pm 0.037\phantom{0} \; &< \; 0.074\quad & (90\%\;\mbox{\rm CL})\, , \\ |g^V_{l_L\tau_R}|^2 + |g^V_{l_R\tau_L}|^2 \, &=\, \phantom{-0}0.023 \pm 0.029\phantom{0} \; &< \; 0.064\quad & (90\%\;\mbox{\rm CL})\, . \end{array} \end{equation} Except for $|g^V_{e_L\mu_R}|$, these limits are stronger than the general ones in Tables~\ref{tab:mu_couplings} and \ref{table:g_tau_bounds}. Similarly, one gets from the different $\xi\delta$ measurements: \begin{equation}\label{eq:delta_bounds_CH} \begin{array}{cl} \!\!\!\! |g^V_{e_L\mu_R}|^2 + |g^V_{e_R\mu_L}|^2 + 2 |g^V_{e_R\mu_R}|^2 + \frac{1}{2} |g^S_{e_L\mu_R}|^2 + \frac{1}{2} |g^S_{e_R\mu_R}|^2 &=\, -0.0017 \pm 0.0096 \\ & < \; 0.015 \quad (90\%\;\mbox{\rm CL})\, , \quad \\ \!\!\!\! |g^V_{\mu_L\tau_R}|^2 + |g^V_{\mu_R\tau_L}|^2 + 2 |g^V_{\mu_R\tau_R}|^2 + \frac{1}{2} |g^S_{\mu_L\tau_R}|^2 + \frac{1}{2} |g^S_{\mu_R\tau_R}|^2 &=\, 0.05 \pm 0.20 \\ & < \; 0.36 \quad (90\%\;\mbox{\rm CL})\, , \\ \!\!\!\! |g^V_{e_L\tau_R}|^2 + |g^V_{e_R\tau_L}|^2 + 2 |g^V_{e_R\tau_R}|^2 + \frac{1}{2} |g^S_{e_L\tau_R}|^2 + \frac{1}{2} |g^S_{e_R\tau_R}|^2 &=\, -0.48 \pm 0.24 \\ & < \; 0.20 \quad (90\%\;\mbox{\rm CL})\, , \\ \!\!\!\! |g^V_{l_L\tau_R}|^2 + |g^V_{l_R\tau_L}|^2 + 2 |g^V_{l_R\tau_R}|^2 + \frac{1}{2} |g^S_{l_L\tau_R}|^2 + \frac{1}{2} |g^S_{l_R\tau_R}|^2 &=\, -0.01 \pm 0.12 \\ & < \; 0.19 \quad (90\%\;\mbox{\rm CL})\, . \end{array} \end{equation} The limits on the $(\mu,e)$ couplings are weaker than the ones in Table~\ref{tab:mu_couplings}. The bounds on the vector LR and RL couplings are also worse than the ones coming from Eq.~(\ref{eq:rho_bounds_CH}). However, the resulting limits on the other couplings are stronger than the ones in Table~\ref{table:g_tau_bounds}. The constraint from $(\xi\delta)_{\tau\to e}$ shows explicitly that it is not possible to accommodate a value larger than $3/4$ with charged-boson (vector or/and scalar) exchanges. In the absence of tensor couplings, we can combine the information on $\xi$ and $\rho$ to obtain another positive-semidefinite combination of couplings: $(1-\frac{4}{3}\rho) + \frac{1}{2} (1-\xi)$. 
The present data imply: \begin{equation}\label{eq:xi_rho} \begin{array}{cl} 3 |g^V_{e_R\mu_L}|^2 + |g^V_{e_R\mu_R}|^2 + \frac{1}{4} |g^S_{e_L\mu_R}|^2 + \frac{1}{4} |g^S_{e_R\mu_R}|^2 &=\, -0.0039 \pm 0.0053 \\ & < \; 0.0067 \quad (90\%\;\mbox{\rm CL})\, , \\ 3 |g^V_{\mu_R\tau_L}|^2 + |g^V_{\mu_R\tau_R}|^2 + \frac{1}{4} |g^S_{\mu_L\tau_R}|^2 + \frac{1}{4} |g^S_{\mu_R\tau_R}|^2 &=\,\phantom{00} -0.10 \pm 0.13\phantom{00} \\ & < \; 0.16 \quad (90\%\;\mbox{\rm CL})\, , \\ 3 |g^V_{e_R\tau_L}|^2 + |g^V_{e_R\tau_R}|^2 + \frac{1}{4} |g^S_{e_L\tau_R}|^2 + \frac{1}{4} |g^S_{e_R\tau_R}|^2 &=\,\phantom{-00} 0.00 \pm 0.13\phantom{00} \\ & < \; 0.21 \quad (90\%\;\mbox{\rm CL})\, , \\ 3 |g^V_{l_R\tau_L}|^2 + |g^V_{l_R\tau_R}|^2 + \frac{1}{4} |g^S_{l_L\tau_R}|^2 + \frac{1}{4} |g^S_{l_R\tau_R}|^2 &=\,\phantom{00} -0.01 \pm 0.06\phantom{00} \\ & < \; 0.10 \quad (90\%\;\mbox{\rm CL})\, . \end{array}\end{equation} The resulting limits on $|g^V_{e_R\mu_L}|$, $|g^V_{\mu_R\tau_L}|$, $|g^V_{\mu_R\tau_R}|$, $|g^S_{\mu_L\tau_R}|$, $|g^S_{\mu_R\tau_R}|$, $|g^V_{e_R\tau_L}|$ and $|g^V_{l_R\tau_L}|$ are stronger than the ones obtained before. Combining the different limits, one gets the bounds shown in Table~\ref{tab:coup_CH}. The numbers with an asterisk have been derived from $(\xi\delta)_e$. If this information is not used, one finds the weaker limits: $|g^S_{e_R\tau_R}|<0.92$, $|g^S_{e_L\tau_R}|<0.92$ and $|g^V_{e_R\tau_R}|<0.46$. \begin{table}[hbt] \centering \begin{tabular}{||l||l|l|l||l||} \hline & \hfil $\mu\to e$\hfil & \hfil $\tau\to\mu$\hfil &\hfil $\tau\to e$ \hfil & \hfil $\tau\to l$ \hfil \\\hline\hline $|g^S_{LL}|$ & $<0.55$ & $\leq 2$ & $\leq 2$ & $\leq 2$ \\ $|g^S_{RR}|$ & $<0.066$ & $<0.80$ & $<0.63^*$ & $<0.62$ \\ $|g^S_{LR}|$ & $<0.125$ & $<0.80$ & $<0.63^*$ & $<0.62$ \\ $|g^S_{RL}|$ & $<0.424$ & $\leq 2$ & $\leq 2$ & $\leq 2$ \\ \hline $|g^V_{LL}|$ & $>0.96$ & $\leq 1$ & $\leq 1$ & $\leq 1$ \\ $|g^V_{RR}|$ & $<0.033$ & $<0.40$ & $<0.32^*$ & $<0.31$ \\ $|g^V_{LR}|$ & $<0.060$ & $<0.31$ & $<0.27$ & $<0.25$ \\ $|g^V_{RL}|$ & $<0.047$ & $<0.23$ & $<0.27$ & $<0.18$ \\ \hline \end{tabular} \caption{90\% CL limits for the couplings $g^n_{\epsilon\omega}$, assuming that there are no tensor couplings. The numbers with an asterisk use the measured value of $(\xi\delta)_e$.} \label{tab:coup_CH} \end{table} Up to now, our only assumption has been the absence of tensor couplings. However, in many extensions of the SM, the bounds we have derived on the couplings can be improved due to additional knowledge of the underlying dynamics. Such is the case with any model whose deviations from the SM in the lepton sector are dominated by one intermediate state. This will typically occur with the least massive gauge boson, if its couplings are not suppressed by some approximate symmetry. In the following, we study the constraints associated with the addition of one `dominant' intermediate boson to the SM. \subsection{Factorization} Let us assume that the interactions are mediated by a single charged boson (either vector or scalar). Then, the previous limits are improved due to additional relations among the couplings. Indeed, the factorization thus implied yields \cite{mursula}, \begin{equation} \alpha^n_{LR}\ \alpha^n_{RL} = \alpha^n_{LL}\ \alpha^n_{RR} \ , \label{eq:factorization} \end{equation} where we have used $\alpha$ (standing for $w$, $a$, $b$, etc.) 
to stress that these equations relate four-fermion effective couplings originating from the {\it same} boson intermediate state; $n=S$ for scalar-mediated decays, and $n=V$ for vector-mediated decays. These relations hold within any of the three channels, $(\mu, e)$, $(\tau, e)$, and $(\tau, \mu)$. Moreover, there are additional equations relating different processes, such as \begin{eqnarray}\label{eq:cross} \alpha^n_{\mu_L \tau_L}\ \alpha^n_{e_L \tau_R} & = & \alpha^n_{\mu_L \tau_R}\ \alpha^n_{e_L \tau_L}\ , \nonumber\\*[3mm] \alpha^n_{\mu_L \tau_L}\ \alpha^{n\ast}_{e_L \mu_R} & = & \alpha^n_{\mu_R \tau_L}\ \alpha^{n\ast}_{e_L \mu_L}\ , \\*[3mm] \alpha^n_{e_L \tau_L}\ \alpha^n_{e_R \mu_L} & = & \alpha^n_{e_R \tau_L}\ \alpha^n_{e_L \mu_L} \nonumber\ , \end{eqnarray} and \begin{equation} \mbox{\rm Im}\left( \alpha^n_{e_\epsilon \mu_\lambda}\ \alpha_{e_\epsilon \tau_\gamma}^{n\ast}\ \alpha^n_{\mu_\lambda \tau_\gamma} \right) = 0 \label{eq:imaggeneral}\ , \end{equation} for any chosen set of chiralities ($\epsilon,\lambda,\gamma$). Other similar equations may be obtained from these with the help of Eq.~(\ref{eq:factorization}). Most of these relations constrain pairs of variables to the space below a hyperbola.
\subsection{Non-standard $W$ interactions} In this case we consider only $W$-mediated interactions, while admitting the possibility that the $W$ couples non-universally to leptons of any chirality. Then, \begin{equation} g^V_{\epsilon \omega} \equiv w^V_{\epsilon \omega}\ , \end{equation} while all other couplings vanish, leading to $\eta =0$. The normalization condition $N=1$ implies strong (90\% CL) lower bounds on the $g^V_{LL}$ couplings: \begin{equation} |g^V_{e_L\mu_L}| > 0.997\ ; \hspace{3mm} |g^V_{\mu_L\tau_L}| > 0.83\ ; \hspace{3mm} |g^V_{e_L\tau_L}| > 0.87^* \ (0.80) \ ; \hspace{3mm} |g^V_{l_L\tau_L}| > 0.90 \label{eq:WA}\ . \end{equation} The two $|g^V_{e_L\tau_L}|$ limits correspond to the results obtained using the $(\xi\delta)_e$ measurement ($\ast$), or ignoring it (number within brackets). Since in this case the lower bounds of Eq.~(\ref{eq:WA}) are direct limits on the couplings, $w^V_{LL}$, of the intermediate boson under study, we can use the factorization equation (\ref{eq:factorization}), rewritten in the form, \begin{equation} \left| w^V_{e_R \mu_R} \right|= \left| \frac{w^V_{e_L \mu_R}\ w^V_{e_R \mu_L}}{w^V_{e_L \mu_L}} \right| < 0.0028 \label{eq:usefulsame}\ , \end{equation} to improve the bound on $g^V_{e_R \mu_R} = w^V_{e_R \mu_R}$ by an order of magnitude. For the $(\tau, \mu)$ channel, we can use the lower bound on $g^V_{\mu_L \tau_L}$, together with the factorization relations among the couplings of different channels, to get the improved (90\% CL) limits: \begin{equation} \left| w^V_{\mu_R \tau_L} \right| = \left| \frac{w^V_{\mu_L \tau_L}\ w^{V \ast}_{e_L \mu_R}} {w^{V \ast}_{e_L \mu_L}} \right| < 0.060 \ ; \hspace{20mm} \left| w^V_{\mu_R \tau_R}\right| = \left| \frac{w^V_{\mu_L \tau_R}\ w^{V \ast}_{e_L \mu_R}} {w^{V \ast}_{e_L \mu_L}}\right| < 0.019 \ . \label{eq:usefulcrossB} \end{equation} Similarly, for the $(\tau, e)$ channel we find \begin{equation} \left| w^V_{e_R \tau_L} \right| = \left|\frac{w^V_{e_L \tau_L}\ w^V_{e_R \mu_L}}{w^V_{e_L \mu_L}} \right| < 0.047 \ ; \hspace{20mm} \left| w^V_{e_R \tau_R} \right|= \left| \frac{w^V_{e_L \tau_R}\ w^V_{e_R \mu_L}}{w^V_{e_L \mu_L}}\right| < 0.013 \ . \label{eq:usefulcrossC} \end{equation} Notice that no information on $(\xi\delta)_e$ has been used here.
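The numbers quoted in Eqs.~(\ref{eq:usefulsame})--(\ref{eq:usefulcrossC}) are simple arithmetic once the $\mu$-decay limits of Table~\ref{tab:mu_couplings}, the $|g^V_{LR}|$ limits of the $\tau$ channels, and $|w^V_{l_L\tau_L}|\leq 1$ are inserted; a short sketch of this arithmetic (our own illustration):
\begin{verbatim}
# Factorization bounds for a non-standard W.
mu = {"LL_min": 0.997, "LR_max": 0.060, "RL_max": 0.047}   # Table 1
tau_mu_LR_max, tau_e_LR_max = 0.31, 0.27    # |g^V_LR| limits, tau channels

print("mu->e   |w_RR| <", mu["LR_max"] * mu["RL_max"] / mu["LL_min"])   # 0.0028
print("tau->mu |w_RL| <", 1.0 * mu["LR_max"] / mu["LL_min"])            # 0.060
print("tau->mu |w_RR| <", tau_mu_LR_max * mu["LR_max"] / mu["LL_min"])  # 0.019
print("tau->e  |w_RL| <", 1.0 * mu["RL_max"] / mu["LL_min"])            # 0.047
print("tau->e  |w_RR| <", tau_e_LR_max * mu["RL_max"] / mu["LL_min"])   # 0.013
\end{verbatim}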
Thus, for the case of non-standard $W$-mediated interactions, the relations among channels developed above allow us to improve the limits on some couplings by one order of magnitude. Using the bounds (\ref{eq:usefulcrossB}) and (\ref{eq:usefulcrossC}), the normalization condition $N=1$ allows us to further improve the (90\% CL) lower limits on the $w^V_{l_L\tau_L}$ couplings \begin{equation} \left| w^V_{\mu_L\tau_L}\right| > 0.95 \ ; \qquad \left| w^V_{e_L\tau_L}\right| > 0.96 \ , \end{equation} where the last bound is now independent of the $(\xi\delta)_e$ measurement. Table~\ref{tab:W_couplings} summarizes the limits on $W$-mediated interactions. \begin{table}[hbt] \centering \begin{tabular}{||l||l|l|l||} \hline & \hfil $\mu\to e$\hfil & \hfil $\tau\to\mu$\hfil &\hfil $\tau\to e$ \hfil \\\hline\hline $|w^V_{LL}|$ & $>0.997$ & $>0.95$ & $>0.96$ \\ $|w^V_{RR}|$ & $<0.0028$ & $<0.019$ & $<0.013$ \\ $|w^V_{LR}|$ & $<0.060$ & $<0.31$ & $<0.27$ \\ $|w^V_{RL}|$ & $<0.047$ & $<0.060$ & $<0.047$ \\ \hline \end{tabular} \caption{90\% CL limits for the $w^V_{\epsilon \omega}$ couplings, assuming that any additional interactions are negligible.} \label{tab:W_couplings} \end{table} \subsection{SM plus Charged Vector} If in addition to the SM $W$ boson ($w^V_{LL} \neq 0$, and all others zero), there is another vector boson with a mass not too large, then its presence will be constrained by the effective vector couplings ($a^V_{\epsilon \omega}$) that it generates. In particular, we have seen in Sect.~\ref{sec:universality} that differences of \begin{equation} g^V_{LL} = w^V_{LL} + a^V_{LL} \end{equation} corresponding to different channels are well constrained by universality tests. The general analysis follows the one of the previous case, except for the fact that Eqs.~(\ref{eq:usefulsame}), (\ref{eq:usefulcrossB}) and (\ref{eq:usefulcrossC}) do not provide upper bounds on the single couplings on the left hand side. Indeed, the lower bound on $g^V_{e_L \mu_L}$, which affects the sum of the SM with the new contribution, does not translate into a lower bound for $a^V_{e_L \mu_L}$. This is just a reflection of the fact that the experiments are consistent with the inexistence of a contribution from a new vector boson. Of course, those relations are still useful in the form of Eqs.~(\ref{eq:cross}), to limit products of couplings. What we cannot do in this case, is use these relations, together with the lower bound on $g^V_{e_L \mu_L}$, to place limits on a single coupling. For instance, \begin{eqnarray} |a^V_{e_L \mu_L}\ g^V_{e_R \mu_R}| = |g^V_{e_L \mu_R}\ g^V_{e_R \mu_L}| &<& 0.0028 \quad (90\%\;\mbox{\rm CL})\ , \nonumber\\ |a^V_{\mu_L \tau_L}\ g^V_{\mu_R \tau_R}| = |g^V_{\mu_L \tau_R}\ g^V_{\mu_R \tau_L}| & < & 0.071 \;\;\quad (90\%\;\mbox{\rm CL})\ , \\ |a^V_{e_L \tau_L}\ g^V_{e_R \tau_R}| = |g^V_{e_L \tau_R}\ g^V_{e_R \tau_L}| \, & < & 0.073 \;\;\quad (90\%\;\mbox{\rm CL})\ . \nonumber \end{eqnarray} These equations establish non-trivial constraints since they involve $a^V_{LL}$, to which we do not have direct experimental access. So, in addition to direct bounds on individual magnitudes, we have also constrained the allowed values to the space below a hyperbola, in the respective plane. Of course, there are many such constraints. Here we just want to illustrate their existence and point out that these constraints translate into non-trivial information and might be especially useful in specific models that have a small number of parameters. 
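To make the hyperbola constraint concrete, the following two-line check (ours; the values assumed for the unobservable $a^V_{LL}$ are purely illustrative) compares the implied limit on $|g^V_{e_R\mu_R}|$ with the direct limit of Table~\ref{tab:mu_couplings}:
\begin{verbatim}
# |a^V_LL| * |g^V_RR| < 0.0028 for the (mu,e) channel versus the direct
# limit |g^V_RR| < 0.033; the product bound wins once |a_LL| > ~0.085.
def g_rr_limit(a_ll, product_bound=0.0028, direct_bound=0.033):
    return min(product_bound / abs(a_ll), direct_bound) if a_ll else direct_bound

for a_ll in (0.02, 0.05, 0.10, 0.20):   # assumed illustrative values only
    print(a_ll, "->", round(g_rr_limit(a_ll), 4))
\end{verbatim}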
\subsection{SM plus Charged Scalar} In this case $\rho = 3/4$ and \begin{equation} \begin{array}{c} 2\ \eta = \mbox{\rm Re}(w^V_{LL}\ g^{S\ast}_{RR}) \sim \mbox{\rm Re}(g^S_{RR})\ , \\ 2(1-\xi) = 2 \left(1-\frac{4}{3}\xi\delta\right) = \left|g^S_{LR}\right|^2 + \left|g^S_{RR}\right|^2 \, . \end{array} \end{equation} The positivity of $(1-\xi)$ leads now to slightly improved (90\% CL) bounds for the scalar couplings, \begin{equation}\label{eq:cs_coup} \begin{array}{cccc} \left|g^S_{e_L\mu_R}\right|^2 + \left|g^S_{e_R\mu_R}\right|^2 & =& -0.006\pm 0.016 &< \, 0.023 \ , \\*[2mm] \left|g^S_{\mu_L\tau_R}\right|^2 + \left|g^S_{\mu_R\tau_R}\right|^2 & =& -0.46\pm 0.48 &<\, 0.56 \ , \\*[2mm] \left|g^S_{e_L\tau_R}\right|^2 + \left|g^S_{e_R\tau_R}\right|^2 & =& -0.06\pm 0.50 &<\, 0.79 \ , \\*[2mm] \left|g^S_{l_L\tau_R}\right|^2 + \left|g^S_{l_R\tau_R}\right|^2 & =& -0.12\pm 0.22 &<\, 0.30 \ . \end{array} \end{equation} The limits on the $(\mu, e)$ couplings are still weaker than the ones in Table~\ref{tab:mu_couplings}, but the others are stronger than the ones in Eqs.~(\ref{eq:delta_bounds_CH}) and (\ref{eq:xi_rho}). Only the bound obtained from $(\xi\delta)_e$ is better. The information on the low-energy parameter $\eta$ gives the (90\% CL) limits: \begin{equation}\label{eq:eta_bounds_s} \begin{array}{rcl} -0.057 & < \mbox{\rm Re}\left( w^V_{e_L\mu_L} g^{S\ast}_{e_R\mu_R}\right) < & 0.029 \ , \\ -1.03 & < \mbox{\rm Re}\left( w^V_{\mu_L\tau_L} g^{S\ast}_{\mu_R\tau_R}\right) < & 0.47 \ . \end{array}\end{equation} Assuming lepton universality, Eq.~(\ref{eq:eta_univ}) yields a much better bound on the $\tau \to l$ couplings: \begin{equation}\label{eq:eta_univ_tau} \left( \begin{array}{c} -0.18 \\ -0.09 \end{array}\right) < \mbox{\rm Re}\left( w^V_{l_L\tau_L} g^{S\ast}_{l_R\tau_R}\right) < \left( \begin{array}{c} 0.32 \\ 0.12 \end{array}\right) \qquad\quad \left(\begin{array}{c} \mbox{\rm PDG 94} \\ \mbox{\rm Montreux 94}\end{array}\right) \ , \end{equation} which, however, is still worse than the limit obtained from $\eta_{\mu\to e}$. Using the factorization relations, one gets additional limits, such as \begin{eqnarray} |g^S_{e_L \mu_R}\ g^S_{e_R \mu_L}| = |g^S_{e_L \mu_L}\ g^S_{e_R \mu_R}| &<& 3.6 \times 10^{-2}\ , \nonumber\\ |g^S_{e_R \mu_L}\ g^S_{\mu_R \tau_R}| = |g^S_{e_R \mu_R}\ g^S_{\mu_L \tau_R}| &<& 0.050 \ . \end{eqnarray} improving the limits on the products in the left-hand side of the equations over the bounds obtainable directly from Table~\ref{tab:coup_CH}. At present, the $\tau_L$ couplings are only constrained by the normalization condition: $|g^S_{l_\epsilon\tau_L}|<2$ and $|g^V_{l_L\tau_L}|<1$. \section{Constraints on new neutral bosons} In this section we study the possible existence of neutral bosons violating the leptonic $l$ and $l^\prime$ numbers. For example, in models with heavier leptons with non-canonical quantum number assignments, there are non-diagonal $Z^0$ interactions induced by the mixing of the standard leptons with exotic ones. In other models, similar couplings with new neutral scalars arise naturally at levels close to the current experimental values \cite{barroso}. Of course, such interactions will also contribute to the well constrained flavour-violating decays into three charged leptons, such as $\mu \rightarrow eee$. The $l^-\to\nu_l l'^- \bar\nu_{l'}$ decays involve two charged-lepton and two neutrino couplings to the intermediate boson, while decays of the type $l^- \to l_1^- l_2^+ l_3^-$ involve four charged-lepton couplings. 
Therefore, the two types of decay provide complementary information. Note, however, that in many models the neutrino and charged-lepton couplings are related; in such cases, the constraints from the $l^- \to l_1^- l_2^+ l_3^-$ decays are usually much stronger than those obtained from the $l^-\to\nu_l l'^- \bar\nu_{l'}$ spectra. It is easily shown that if, as we are assuming, the final neutrinos are massless and not observed, one falls back on an effective hamiltonian like that of Eq.~(\ref{eq:hamiltonian}), even in the presence of lepton-number nonconservation \cite{langacker}. In the appendix, this is shown explicitly for the case of neutral boson mediated interactions. We also include there the derivation of some formulae useful in this section, and a discussion of bounds from neutrinoless charged-lepton decays. In the cases studied here there are no relations among different channels. \subsection{SM plus Neutral Vector} When the decay is mediated by neutral vector bosons, all the LR and RL couplings vanish and $\rho = 3/4$. Since there are no tensor couplings, the relevant bounds on Table~\ref{tab:coup_CH} are also valid in this case. Moreover, $(1-\xi)$ is now a positive-semidefinite quantity, which gives the additional (90\% CL) limits, \begin{equation}\label{eq:nv_coup} \begin{array}{ll} \frac{1}{2} \left|g^S_{e_R\mu_R}\right|^2 + 2 \left|g^V_{e_R\mu_R}\right|^2 &< 0.011 \ , \\*[2mm] \frac{1}{2} \left|g^S_{\mu_R\tau_R}\right|^2 + 2 \left|g^V_{\mu_R\tau_R}\right|^2 &< 0.28 \ , \\*[2mm] \frac{1}{2} \left|g^S_{e_R\tau_R}\right|^2 + 2 \left|g^V_{e_R\tau_R}\right|^2 &< 0.39 \ , \\*[2mm] \frac{1}{2} \left|g^S_{l_R\tau_R}\right|^2 + 2 \left|g^V_{l_R\tau_R}\right|^2 &< 0.15 \ . \end{array} \end{equation} The limits on the $(\mu, e)$ couplings are weaker than the ones in Table~\ref{tab:mu_couplings}, but the others are stronger than the ones in Eqs.~(\ref{eq:delta_bounds_CH}) and (\ref{eq:xi_rho}). Only the bound obtained from $(\xi\delta)_e$ is better. As usual, we distinguish the SM $W$ and the neutral vector boson contributions to $g^V_{LL}$ by the letters $w$ and $a$, respectively. Hence, \begin{equation} g^V_{LL} = w^V_{LL} + a^V_{LL}\ . \end{equation} As shown in the appendix A.1, the new contributions satisfy the relation \begin{equation} a^S_{LL}\ a^S_{RR} = 4\ a^V_{LL}\ a^V_{RR}\ , \label{eq:neutralscalar} \end{equation} which yields the 90\% CL bounds \begin{eqnarray} |a^V_{e_L \mu_L}\ g^V_{e_R \mu_R}| = \frac{1}{4}\ |g^S_{e_L \mu_L}\ g^S_{e_R \mu_R}| &<& 9.1 \times 10^{-3}\ , \nonumber\\ |a^V_{\mu_L \tau_L}\ g^V_{\mu_R \tau_R}| = \frac{1}{4}\ |g^S_{\mu_L \tau_L}\ g^S_{\mu_R \tau_R}| &<& 0.37\ , \\ |a^V_{e_L \tau_L}\ g^V_{e_R \tau_R}| = \frac{1}{4}\ |g^S_{e_L \tau_L}\ g^S_{e_R \tau_R}| \, &<& 0.32^* \; (0.44)\ . \nonumber \end{eqnarray} Again, these relations yield constraints on $a^V_{LL}$, to which there is no direct experimental access. \begin{table}[hbt] \centering \begin{tabular}{||c||c|c|c||} \hline & \hfil $\mu\to e$\hfil & \hfil $\tau\to\mu$\hfil &\hfil $\tau\to e$ \hfil \\\hline\hline $|\alpha_{l' l}|^2\ \sum_{m,n} |\theta_{mn}|^2$ & $<1.1 \times 10^{-3}$ & $<0.14$ & $<0.20$ \\ $|\beta_{l' l}|^2\ \sum_{m,n} |\theta_{mn}|^2$ & $<7.6 \times 10^{-2}$ & ----- & ----- \\ \hline \end{tabular} \caption{90\% CL limits on products of quadratic polynomials in the lepton and neutrino couplings. If one uses the measured value of $(\xi\delta)_e$, the number on the last column will read instead $0.10$. 
} \label{tab:neutvectg} \end{table} Assuming that the neutrinos are not detected, the neutral-vector-induced effective couplings may be written as \begin{eqnarray} a^V_{l'_R l^{\phantom{'}}_R} = \alpha_{l' l} \left[ \sum_{m,n} |\theta_{mn}|^2 \right]^{1/2} \hspace{4mm} & , & \hspace{4mm} a^S_{l'_R l^{\phantom{'}}_R} = -2\ \alpha_{l' l} \left[ \sum_{m,n} |\sigma_{mn}|^2 \right]^{1/2} \ , \nonumber\\ a^V_{l'_L l^{\phantom{'}}_L} = \beta_{l' l} \left[ \sum_{m,n} |\sigma_{mn}|^2 \right]^{1/2} \hspace{4mm} & , & \hspace{4mm} a^S_{l'_L l^{\phantom{'}}_L} = -2\ \beta_{l' l} \left[ \sum_{m,n} |\theta_{mn}|^2 \right]^{1/2} \ , \label{eq:fundneutvect} \end{eqnarray} where $\alpha$ ($\beta$) is the hermitian coupling matrix of the right- (left-) handed charged leptons to the neutral vector and $\theta$ ($\sigma$) is the coupling matrix of the right- (left-) handed neutrinos, in appropriate units (see appendix A.1). The experimental limits on the effective four-fermion couplings constrain then these combinations of the original vector couplings. We summarize these results in Table~\ref{tab:neutvectg}. The bounds on the first line remain the same with $\theta$ substituted by $\sigma$ and the missing numbers on the second line are due to the lack of experimental access to $Q_{\tau_L}$. \subsection{SM plus Neutral Scalars} Finally, we consider the case in which there is a neutral scalar contribution to $\mu$ and $\tau$ leptonic decays, in addition to the SM contribution. These new contributions vanish for the LL and RR couplings and satisfy the relations \begin{equation} a^V_{LR} = a^S_{LR} = 2 a^T_{LR} \hspace{7mm} ; \hspace{7mm} a^V_{RL} = a^S_{RL} = 2 a^T_{RL}\ . \label{eq:VSneutralscalar} \end{equation} This allows us to express everything in terms of the vector couplings. One gets then the positive definite quantities: \begin{equation} 1-{4\over 3}\rho = 1-{4\over 3}\xi\delta = 2 \left( |g^V_{LR}|^2 + |g^V_{RL}|^2 \right) \ , \end{equation} and \begin{equation} \left(1-{4\over 3}\rho\right) + {1\over 2} \left( 1-\xi\right) = 6 |g^V_{RL}|^2 \ . \end{equation} The $N=1$ constraint provides the additional relation \begin{equation} 1 = |g^V_{LL}|^2 + 2 \left( |g^V_{LR}|^2 + |g^V_{RL}|^2 \right) \ . \end{equation} Thus, $1-{4\over 3}\rho = 1-{4\over 3}\xi\delta = \left( 1 - |g^V_{LL}|^2\right)$, which gives lower bounds on all $g^V_{LL}$ couplings. The resulting 90\% CL limits are given in Table~\ref{tab:ns_coup}. \begin{table}[hbt] \centering \begin{tabular}{||l||l|l|l||l||} \hline & \hfil $\mu\to e$\hfil & \hfil $\tau\to\mu$\hfil &\hfil $\tau\to e$ \hfil & \hfil $\tau\to l$ \hfil \\\hline\hline $|g^V_{LL}|$ & $>0.998$ & $>0.95$ & $>0.96$ & $>0.97$ \\ $|g^V_{LR}|$ & $<0.047$ & $<0.22$ & $<0.19$ & $<0.18$ \\ $|g^V_{RL}|$ & $<0.033$ & $<0.16$ & $<0.19$ & $<0.13$ \\ \hline \end{tabular} \caption{90\% CL limits for the $g^n_{\epsilon\omega}$ couplings, taking $g^n_{RR}=0$, $g^S_{LL}=0$, $g^V_{LR}=g^S_{LR}=2 g^T_{LR}$ and $g^V_{RL}=g^S_{RL}=2 g^T_{RL}$.} \label{tab:ns_coup} \end{table} In addition we have the constraints from $\eta$, which at 90\% CL give \begin{equation}\label{eq:eta_bounds_ns} \begin{array}{rcl} -0.007 & < \mbox{\rm Re}\left( g^V_{e_L \mu_R}\ g^{V \ast}_{e_R \mu_L} \right) < & 0.004 \ , \\ -0.13 & < \mbox{\rm Re}\left( g^V_{\mu_L \tau_R}\ g^{V \ast}_{\mu_R \tau_L} \right) < & 0.06 \ . 
\end{array}\end{equation} These effective couplings may be written in terms of the ones in the original lagrangian as \begin{equation} a^V_{l'_R l^{\phantom{'}}_L} = A_{l' l} \left[ \sum_{m,n} |B_{mn}|^2 \right]^{1/2} \ ,\qquad a^V_{l'_L l^{\phantom{'}}_R} = A^\ast_{l l'} \left[ \sum_{m,n} |B_{mn}|^2 \right]^{1/2}\ , \end{equation} where $A$ ($B$) is the coupling matrix of the charged leptons (neutrinos) to the neutral scalar, in appropriate units (see appendix A.2). So, the previous limits contain combined information from the two sectors. It is important to emphasize that, within the philosophy we adopt of discarding intermediate tensor particles (for they hardly appear in any reasonable model beyond the SM), this is the only possible source of tensorial terms. This fact has interesting consequences which we will explore in the next section. \section{Opportunities for Physics Beyond the SM} In Table~\ref{tab:summary} we present a summary of the theoretical constraints imposed on the measured quantities, for the various cases under study. There, SM denotes that the Standard Model results are recovered, and AS indicates that any sign is allowed. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & {\it SM + Charged} & {\it SM + Charged} & {\it SM + Neutral} & {\it SM + Neutral} \\ & {\it Vector} & {\it Scalar} & {\it Vector} & {\it Scalar} \\ & {\it Nonstandard W} & & & \\ \hline $\rho - 3/4$ & $< 0$ & SM & SM & $< 0$ \\ \hline $\xi - 1$ & AS & $< 0$ & $< 0$ & AS \\ \hline $(\delta \xi) - 3/4$ & $< 0$ & $< 0$ & $< 0$ & $< 0$ \\ \hline $\eta$ & SM & AS & AS & AS \\ \hline \end{tabular} \caption{Theoretical constraints on the Michel parameters}\label{tab:summary} \end{table} It is immediately apparent that $\rho \leq 3/4$ and $(\delta \xi) < 3/4$ in all cases that we have studied. Thus one can have new physics while $\rho$ remains equal to its SM value. In fact, any interaction consisting of an arbitrary combination of $g^S_{\epsilon \omega}$'s and $g^V_{\gamma \gamma}$'s yields this result \cite{FE:90}. On the other hand, $(\delta \xi)$ will be different from $3/4$ in any of the cases above, providing, in principle, a better opportunity for the detection of Physics Beyond the SM. The above features are easy to understand by looking back at Eqs.~(\ref{eq:michel}) and recalling that the tensor couplings can only be generated by neutral scalar interactions (violating individual lepton flavours), in which case they are proportional to the scalar couplings. It is easy to see that having two such neutral scalars will not alter the situation. Indeed, to obtain $\rho > 3/4$ or $(\delta \xi) > 3/4$ one will also need the presence of a charged scalar. Let us then assume that we have a neutral scalar with couplings \begin{equation} a^V_{LR} = a^S_{LR} = 2\ a^T_{LR} \hspace{7mm} ; \hspace{7mm} a^V_{RL} = a^S_{RL} = 2\ a^T_{RL} \ , \end{equation} and a charged scalar with couplings $b^S_{\epsilon \omega}$. We obtain, \begin{eqnarray} \rho - \frac{3}{4} & = & - \frac{3}{4} \left[ 2 {|a^S_{LR}|}^2 + 2 {|a^S_{RL}|}^2 + \frac{1}{2}{\rm Re}(a^S_{LR} b^{S \ast}_{LR} + a^S_{RL} b^{S \ast}_{RL}) \right]\ , \nonumber\\ ({\xi}\delta) - \frac{3}{4} & = & - \frac{3}{4} \left[ \frac{1}{2}{|b^S_{RR}|}^2 + \frac{1}{8} {|a^S_{LR} - b^S_{LR}|}^2 + \frac{3}{8} {|a^S_{LR} + b^S_{LR}|}^2 \right. \nonumber\\ & & \quad\left. + \frac{3}{2} {|a^S_{LR}|}^2 + 2 {|a^S_{RL}|}^2 + \frac{1}{2} {\rm Re}(a^S_{RL} b^{S \ast}_{RL}) \right]\ .
\end{eqnarray} The first equation shows that $\rho$ might exceed $3/4$ provided that \begin{equation} {\rm Re} \left( \frac{b^S_{LR}}{a^S_{LR}} \right) < - 4\ , \end{equation} or \begin{equation} {\rm Re} \left( \frac{b^S_{RL}}{a^S_{RL}} \right) < - 4\ . \end{equation} As for $(\delta \xi)$, it can only exceed the SM value through $RL$ couplings, and only if the last equation is satisfied. Then, detecting $\rho$ greater than the SM value would mean that there were at least a charged scalar and a neutral scalar in action. A measurement of $(\delta \xi)$ greater than $3/4$ would then discriminate between $RL$ and $LR$ couplings. However, as pointed out before, a measurement of $(\delta \xi) > 3/4$ must, in general, be accompanied by a measurement of $\xi > 1$. If the contrary were to become well established, we would have detected physics beyond the four-fermion hamiltonian. \section{Conclusions} We have used the recent measurements of the Michel parameters in tau decays to perform a complete, model-independent analysis of the constraints implied for scalar and neutral bosons, as they exist in most models beyond the SM. If the new contributions are dominated by the effect of one such new intermediate boson, relations among the different couplings arise. In the case of charged intermediate bosons, these relations involve couplings from different decays. If the most important new feature is the coupling of the usual $W$ boson with right-handed leptons, then the data from muon neutrino scattering off electrons can be used to improve some of the limits on couplings in tau decays by an order of magnitude. In the other cases, it constrains products of couplings of different channels to the space below a hyperbola. This information will be particularly useful for models in which these couplings are functions of the same parameters of the original theory. In case the dominant new features are provided by the exchange of flavour-violating neutral scalars, there are no relations among the different channels. The relations within each channel were derived assuming that the final neutrinos are massless and not observed. This shows explicitly that the analysis based on the common four-fermion hamiltonian is still valid in this case. It is shown in the appendix that, given the current experimental situation, the bounds obtained from the Michel parameters only compete with those provided by the decays $l^- \to l_1^- l_2^+ l_3^-$, in theories where the charged lepton couplings to the intermediate particle carrying flavour are suppressed by some (exact or approximate) symmetry. \section*{Acknowledgements} This work has been supported in part by CICYT (Spain) under grant No. AEN-93-0234. The work of J.P.S. was funded by the E.U. under the Human Capital and Mobility Program. He is indebted to G.C. Branco, L. Lavoura and Instituto Superior T\'ecnico for their kind hospitality, and to J. Raab and A. Stahl for useful discussions.
\section*{Supplementary Information for ``Tuning nucleation kinetics via nonequilibrium chemical reactions''} \section{A nonequilibrium three-state lattice-gas model} \subsection{Kinetic model of a nonequilibrium lattice gas} We extend the two-dimensional square lattice-gas model by incorporating two particle internal states: a bonding state (B) and an inert state (I). A particle in the bonding state interacts with nearest-neighbor bonding-state particles with bonding strength $\epsilon < 0$. On the other hand, an inert-state particle is isoenergetic to a vacant lattice site and thus does not interact with nearest-neighbor particles. We consider an open system in contact with a particle reservoir. The fugacities in the reservoir are $z_\text{B}$ and $z_\text{I}$ for the B and I internal states, respectively. Utilizing the framework of stochastic thermodynamics, we model the kinetics of particle insertion, removal, and reactions between internal states using Markovian transitions that obey local detailed balance. Particle insertion into an empty (E) lattice site occurs with rates $Dz_\text{B}$ and $Dz_\text{I}$ for the B and I internal states, respectively, where $D$ is a rate related to particle exchange between the open system and the reservoir (or the diffusive transport rate in a closed system). Particle removal occurs with rate $D e^{\beta u}$, where $\beta \equiv (k_{\text{B}}T)^{-1}$ and $u$ is the local potential energy that arises from the nearest-neighbor interactions at that lattice site. Lastly, reactions between the B and I states occur with forward and backward rates $k_{\text{BI}}$ and $k_{\text{IB}}$. For notational simplicity, we write the dimensionless ratios between the reaction rates and the particle exchange rate as $k_{\text{B}\rightarrow\text{I}} \equiv D^{-1}k_{\text{BI}}$ and $k_{\text{I}\rightarrow\text{B}} \equiv D^{-1}k_{\text{IB}}$, respectively, in what follows. Since the transitions at any particular lattice site involve a single-cycle transition network among E, B, and I states, we can define a nonequilibrium drive $\Delta\mu$ along the cycle in the B-to-I direction, \begin{equation}\label{eq:∆µ_def} \beta\Delta\mu = \ln\left[\dfrac{z_\text{B} k_{\text{B}\rightarrow\text{I}}}{e^{\beta u}z_\text{I}k_{\text{I}\rightarrow\text{B}}}\right]. \end{equation} Nonzero $\Delta\mu$ results in a nonzero net probability current. Rearranging \eqref{eq:∆µ_def} into the ratio of internal reaction rates gives the local detailed balance condition for $\text{I}\rightleftharpoons\text{B}$ reactions in terms of $\Delta\mu$, \begin{align} \dfrac{k_{\text{B}\rightarrow\text{I}}}{k_{\text{I}\rightarrow\text{B}}} &= \left(\dfrac{z_\text{I}}{z_\text{B}}\right)e^{\beta u + \beta\Delta\mu}. \label{eq:local detailed balance} \end{align} \subsection{Model specification and simulation parameters} \label{sec:simulation_condition} In order to conduct numerical simulations, we consider two specific choices for the backward reaction rate, $k_{\text{I}\rightarrow\text{B}}$: \begin{equation} \label{eq:kIB_def} k_{\text{I}\rightarrow\text{B}} = \begin{cases} k^\circ & (\text{homogeneous})\\ k^\circ\min[1,\exp(-\beta u - \beta\Delta f_\text{res}-\beta\Delta\mu)] & (\text{inhomogeneous}), \end{cases} \end{equation} where the constant $k^\circ$ sets the relative timescale between the reaction rates and the particle exchange rate with the reservoir. Note that $k_{\text{B}\rightarrow\text{I}}$ follows from the local detailed balance condition, \eqref{eq:local detailed balance}. 
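As an illustration of these definitions, the following minimal Python sketch (our own, not part of the simulation code; the function name and parameter values are ours) evaluates the single-site transition rates for the two choices of $k_{\text{I}\rightarrow\text{B}}$ in \eqref{eq:kIB_def} and checks that the affinity around the $\text{E}\rightarrow\text{B}\rightarrow\text{I}\rightarrow\text{E}$ cycle reproduces $\beta\Delta\mu$, as required by \eqref{eq:∆µ_def}. All rates are quoted in units of $D$ and all energies in units of $k_{\text{B}}T$.
\begin{verbatim}
import math

def site_rates(u, z_B, z_I, dmu, k0, model="homogeneous"):
    """Single-site transition rates, in units of the exchange rate D.

    u       : local potential energy felt by a bonding-state particle (units of kT)
    z_B,z_I : reservoir fugacities of the B and I states
    dmu     : nonequilibrium drive Delta mu (units of kT)
    k0      : bare reaction-rate scale k_circ
    """
    df_res = -math.log(z_B / z_I)             # beta * Delta f_res
    rates = {
        "E->B": z_B,                           # insertion of a bonding-state particle
        "E->I": z_I,                           # insertion of an inert-state particle
        "B->E": math.exp(u),                   # removal of a bonding-state particle
        "I->E": 1.0,                           # removal of an inert-state particle
    }
    # backward (I -> B) reaction rate, following Eq. (kIB_def)
    if model == "homogeneous":
        k_IB = k0
    else:
        k_IB = k0 * min(1.0, math.exp(-u - df_res - dmu))
    # forward (B -> I) rate fixed by local detailed balance
    k_BI = k_IB * (z_I / z_B) * math.exp(u + dmu)
    rates.update({"B->I": k_BI, "I->B": k_IB})
    return rates

# the cycle affinity should equal dmu for any local environment u
r = site_rates(u=2 * (-2.95), z_B=0.02, z_I=0.03, dmu=1.87, k0=0.1,
               model="inhomogeneous")
print(math.log(r["E->B"] * r["B->I"] * r["I->E"] /
               (r["B->E"] * r["I->B"] * r["E->I"])))   # ~1.87
\end{verbatim}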
In the \textit{homogeneous} model, $k_{\text{I}\rightarrow\text{B}}$ is independent of $u$, while in the \textit{inhomogeneous} model, $k_{\text{I}\rightarrow\text{B}}$ depends on $u$. The motivation for studying $u$-independent and $u$-dependent models is described in detail below. We simulate the stochastic evolution of the system via the kinetic Monte Carlo (kMC) simulation method \cite{gillespie2007stochastic} using the first-order transition rates described above. For convenience, we define intrinsic fugacities $\lambda_\text{B}$ and $\lambda_\text{I}$, which are related to the reservoir fugacities by $z_\text{B} \equiv \lambda_\text{B}\exp(\beta\Delta\mu)$ and $z_\text{I} \equiv \lambda_\text{I}$, so that we can tune the chemical drive while holding the equilibrium free-energy difference between the B and I states constant. Unless specified otherwise, all simulation data are obtained from $64 \times 64$ two-dimensional square lattices with periodic boundaries, interaction strength $\beta\epsilon = -2.95$, and relative timescale $k^\circ = 10^{-1}$. The intrinsic fugacities are set such that the total particle density (i.e., the number density of particles in either state) in the vapor phase is $\rho_\text{v} = 0.05$ at the coexistence point. The parameters $\lambda_\text{B}$, $\lambda_\text{I}$, and $\Delta\mu$ are then held constant throughout the course of a simulation. \subsection{Fixed Local Environment approXimation (FLEX)} \label{sec:flex} To motivate our discussion of inhomogeneous chemical reactions, we introduce a simplified theoretical model that we refer to as the Fixed Local Environment approXimation (FLEX). Within FLEX, we assume that particle exchange between the open system and the reservoir relaxes to the steady state more rapidly than any change in the local configuration, or environment, around a given lattice site. For the two-dimensional square lattice-gas model, the local configuration comprises the four nearest-neighbor lattice sites, which determine the local potential energy $u$ when the lattice site is occupied by a bonding-state particle. Following the convention that $W_{ij}$ indicates the first-order transition rate from state $i$ to state $j$, we write the transition matrix for a single tagged lattice site with a fixed local environment as \begin{equation}\label{eq:transition matrix} D^{-1}W = \begin{bmatrix} -(z_\text{B}+z_\text{I}) & z_\text{B} & z_\text{I}\\ e^{\beta u} & -(e^{\beta u}+D^{-1}k_\text{BI}) & D^{-1}k_\text{BI}\\ 1 & D^{-1}k_\text{IB} & -(1+D^{-1}k_\text{IB}) \end{bmatrix}, \end{equation} where the lattice-site states are ordered $(\text{E}, \text{B}, \text{I})$. FLEX allows us to predict the distribution among the three lattice-site states at steady state under a constant $u$ determined from the fixed local environment. 
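A minimal numerical check of this construction (again our own sketch, not the production code; parameter values are illustrative only) builds the matrix of \eqref{eq:transition matrix} for a fixed $u$, with $k_{\text{BI}}$ fixed by local detailed balance, and solves $pW=0$ directly. The resulting ratios $p_\text{B}/p_\text{E}$ and $p_\text{I}/p_\text{E}$ can then be compared with the closed-form expressions given below.
\begin{verbatim}
import numpy as np

def flex_steady_state(u, z_B, z_I, k_IB, dmu):
    """Steady state of one lattice site with a fixed local environment (states E, B, I).

    Rates are in units of D and energies in units of kT; the forward rate k_BI
    is obtained from the local detailed balance condition.
    """
    k_BI = k_IB * (z_I / z_B) * np.exp(u + dmu)
    eu = np.exp(u)
    W = np.array([[-(z_B + z_I),      z_B,           z_I          ],
                  [ eu,              -(eu + k_BI),   k_BI         ],
                  [ 1.0,              k_IB,         -(1.0 + k_IB) ]])
    # solve p W = 0 subject to the normalization sum(p) = 1
    A = np.vstack([W.T, np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(("E", "B", "I"), p))

# liquid-like site (u = 4*eps) with illustrative parameters
p = flex_steady_state(u=4 * (-2.95), z_B=0.02, z_I=0.03, k_IB=0.1, dmu=1.87)
print(p["B"] / p["E"], p["I"] / p["E"])   # compare with z_B'*exp(-u) and z_I' below
\end{verbatim}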
Solving the master equation at steady state, $0 = \dot{p} = pW$, leads to \begin{alignat}{4} &\dfrac{p_\text{B}}{p_\text{E}} &&= \dfrac{z_\text{B}+k_{\text{I}\rightarrow\text{B}}(z_\text{B}+z_\text{I})}{e^{\beta u}(1+k_{\text{I}\rightarrow\text{B}})+k_{\text{B}\rightarrow\text{I}}} &&= z_\text{B}\left[\dfrac{1+k_{\text{I}\rightarrow\text{B}}(1+e^{\beta\Delta f_\text{res}})}{1+k_{\text{I}\rightarrow\text{B}}(1+e^{\beta\Delta f_\text{res}+\beta\Delta\mu})}\right]e^{-\beta u} && \equiv z_\text{B}'e^{-\beta u}\label{eq:z_B'}\\ &\dfrac{p_\text{I}}{p_\text{E}} &&= \dfrac{e^{\beta u}z_\text{I} + k_{\text{B}\rightarrow\text{I}}(z_\text{B}+z_\text{I})}{e^{\beta u}(1+k_{\text{I}\rightarrow\text{B}})+k_{\text{B}\rightarrow\text{I}}} &&= z_\text{I} \left[\dfrac{1+k_{\text{I}\rightarrow\text{B}} (1+e^{\beta\Delta f_\text{res}})e^{\beta\Delta\mu}}{1+k_{\text{I}\rightarrow\text{B}}(1+e^{\beta\Delta f_\text{res}+\beta\Delta\mu})}\right] && \equiv z_\text{I}'\label{eq:z_I'}, \end{alignat} where $p_i$ is the steady-state probability of being in state $i$, and $z_\text{B}'$ and $z_\text{I}'$ are fugacity-like values that may depend on $u$, according to the specific functional form of $k_{\text{I}\rightarrow\text{B}}$. However, at equilibrium, $z_\text{B}'=z_\text{B}$ and $z_\text{I}'=z_\text{I}$ regardless of $k_{\text{I}\rightarrow\text{B}}$. The apparent internal free-energy difference in the open system, $\beta\Delta f \equiv -\ln(z_\text{B}'/z_\text{I}')$, is related to the free-energy difference in the reservoir, $\beta\Delta f_\text{res} \equiv -\ln(z_\text{B} / z_\text{I})$, by \begin{equation} \beta\Delta f = \beta\Delta f_\text{res} + \ln\left[\dfrac{1+k_{\text{I}\rightarrow\text{B}}(1+e^{\beta\Delta f_\text{res}})e^{\beta\Delta\mu}}{1+k_{\text{I}\rightarrow\text{B}} (1+e^{\beta\Delta f_\text{res}})}\right]. \label{eq:∆f FLEX} \end{equation} Within FLEX, it is therefore possible to map the nonequilibrium steady state for a single lattice site to an ``effective equilibrium'' model that has the same steady-state density distribution (but different probability currents). We discuss the implications of this insight in the next section, and we describe the predictions of FLEX for nonequilibrium nucleation kinetics in \secref{sec:flex_predictions}. \subsection{Effect of inhomogeneous chemical reactions} \label{sec:inhomogeneous} We now consider the effect of the internal reaction kinetics on the thermodynamics of a fluid that phase separates into dilute (vapor) and condensed (liquid) phases. Within FLEX, \eqref{eq:∆f FLEX} shows that under nonequilibrium conditions ($\Delta\mu \ne 0$), the effective internal free-energy difference between B and I states in the open system, $\Delta f$, may be different from the free-energy difference in the reservoir, $\Delta f_{\text{res}}$. However, as long as $k_{\text{I}\rightarrow\text{B}}$ is the same in both the vapor and liquid phases, a single effective equilibrium model provides a common description of the steady-state distribution in both phases. We refer to such a system as having \textit{homogeneous} internal-state chemical reactions. If, by contrast, $k_{\text{I}\rightarrow\text{B}}$ depends on the local potential energy, which is on average higher in the vapor phase than in the liquid phase, then the effective equilibrium descriptions must be different in the two phases. As a result, a system with a $u$-dependent $k_{\text{I}\rightarrow\text{B}}$ exhibits \textit{inhomogeneous} internal-state chemical reactions.
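The practical consequence of \eqref{eq:∆f FLEX} is easiest to see numerically. The short sketch below (ours; the parameter values are purely illustrative and all energies are in units of $k_{\text{B}}T$) evaluates the apparent free-energy difference at a vapor-like site ($u=0$) and a liquid-like site ($u=4\epsilon$) for both choices of $k_{\text{I}\rightarrow\text{B}}$ in \eqref{eq:kIB_def}; only the inhomogeneous choice yields a phase-dependent $\Delta f$.
\begin{verbatim}
import math

def k_IB(u, k0, df_res, dmu, model):
    # backward reaction rate, Eq. (kIB_def); energies in units of kT
    if model == "homogeneous":
        return k0
    return k0 * min(1.0, math.exp(-(u + df_res + dmu)))

def delta_f(u, k0, df_res, dmu, model):
    # apparent free-energy difference of Eq. (Delta f FLEX) at fixed local energy u
    k = k_IB(u, k0, df_res, dmu, model)
    return df_res + math.log(
        (1.0 + k * (1.0 + math.exp(df_res)) * math.exp(dmu))
        / (1.0 + k * (1.0 + math.exp(df_res))))

eps, k0, df_res, dmu = -2.95, 0.1, 0.4, 1.87   # illustrative parameters only
for model in ("homogeneous", "inhomogeneous"):
    df_v = delta_f(0.0, k0, df_res, dmu, model)       # vapor-like site, u = 0
    df_l = delta_f(4 * eps, k0, df_res, dmu, model)   # liquid-like site, u = 4*eps
    print(model, df_v, df_l, df_l - df_v)             # Delta Delta f vanishes only
                                                      # for the homogeneous model
\end{verbatim}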
The steady-state density distribution in a phase-separated system with inhomogeneous chemical reactions cannot be described by a common effective equilibrium. Our numerical results indicate that these insights provided by FLEX are also useful for analyzing the full lattice model. Motivated by the FLEX analysis, we quantify the extent of inhomogeneous chemical reactions by estimating the effective free-energy difference between the two particle internal states, $\Delta f$, from simulations of each bulk phase. In order to calculate $\Delta f$ for a system at a nonequilibrium steady state (NESS), we start from a grand-canonical model with particle fugacities $z'_\text{B}$ and $z'_\text{I}$. A lattice configuration is defined by the identities of all the lattice sites, $\{c(\bm{r})\}$, where $c\in\{\text{E},\text{B},\text{I}\}$. In a translationally symmetric system, we can assume that a tagged particle is located at the origin, $\bm{r}=0$. Including the empty lattice site as an ``internal state,'' E, with $z_{\text{E}}' = 1$, we can write the equilibrium probability of the tagged particle being in internal state $i$ as \begin{align} p_{i(\bm{r}=0)}^{\text{eq}} &= \Xi^{-1} \sum_{\{c(\bm{r})\}} \delta\left[c(\bm{r}\!=\!0) = i\right] \; \prod_{\bm{r}} z_{c(\bm{r})}' \exp\left\{ - \beta \sideset{}{'}\sum_{\bm{r},\bm{r'}} u[c(\bm{r}),c(\bm{r'})] \right\} \\ &= \left(\frac{z_i'}{z_j'}\right) \Xi^{-1} \sum_{\{c(\bm{r})\}} \exp\left\{-\beta \sideset{}{''}\sum \{u[i,c(\bm{r'})] - u[j,c(\bm{r'})]\}\right\} \nonumber \\ &\qquad \times \delta\left[c(\bm{r}\!=\!0) = j\right] \; \prod_{\bm{r}}z_{c(\bm{r})}' \exp\left\{ - \beta \sideset{}{'}\sum_{\bm{r},\bm{r'}} u[c(\bm{r}),c(\bm{r'})] \right\} \\ &= \left(\frac{z_i'}{z_j'}\right) \left\langle \exp\left\{-\beta \sideset{}{''}\sum \{u[i,c(\bm{r'})] - u[j,c(\bm{r'})]\}\right\} \right\rangle_{j(\bm{r}=0)} p_{j(\bm{r}=0)}^{\text{eq}}, \label{eq:semigrand-bennet} \end{align} where primed summation is over nearest-neighbor pairs (counting each unique bond once), double-primed summation is over the nearest-neighbor sites $\bm{r'}$ of $\bm{r}=0$, and the angle brackets indicate an ensemble average conditioned on the particle at $\bm{r}=0$ being in the indicated internal state. \eqref{eq:semigrand-bennet} can be viewed as a Bennett acceptance ratio in the semi-grand ensemble or a generalization of the Widom insertion method. Finally, we use \eqref{eq:semigrand-bennet} to calculate the effective $\beta\Delta f \equiv -\ln(z_\text{B}'/z_\text{I}')$ in a NESS by substituting $p^{\text{eq}}$ with the NESS distribution, $p$, and averaging over the ensemble of lattice configurations at steady state: \begin{align} \label{eq:deltaf1} \begin{split} \beta\Delta f &= -\ln \left( \frac{p_{\text{B}}}{p_{\text{I}}} \right) + \ln \left\langle \exp\left\{-\beta \sideset{}{''}\sum \{u[\text{B},c(\bm{r'})] - u[\text{I},c(\bm{r'})]\}\right\} \right\rangle_{\text{I}(\bm{r}=0)}\\ &= -\ln \left( \frac{p_{\text{B}}}{p_{\text{I}}} \right) + \ln \left\langle \exp\left\{-\beta \sideset{}{''}\sum u[\text{B},c(\bm{r'})] \right\} \right\rangle_{\text{I}(\bm{r}=0)}. \end{split} \end{align} In the last step, we have exploited the isoenergeticity of the inert state. Note that \eqref{eq:deltaf1} reduces to \eqref{eq:∆f FLEX} if we assume a fixed local environment around the tagged lattice site.
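In practice, \eqref{eq:deltaf1} can be evaluated directly from stored lattice snapshots. The sketch below is our own illustration (the array layout, helper names, and the randomly generated input are ours; real input would come from the kMC trajectories): it averages the Widom-like Boltzmann factor over all inert sites of each snapshot, exploiting translational invariance.
\begin{verbatim}
import numpy as np

EMPTY, BOND, INERT = 0, 1, 2

def delta_f_estimate(snapshots, eps, beta=1.0):
    """Estimate beta*Delta f from Eq. (deltaf1) for a list of LxL integer lattices."""
    n_B = n_I = 0
    boltz = []   # exp(-beta * sum_nn u[B, c]) evaluated at every inert site
    for s in snapshots:
        n_B += np.count_nonzero(s == BOND)
        n_I += np.count_nonzero(s == INERT)
        # number of bonding-state neighbors of each site (periodic boundaries)
        nb = sum(np.roll(s == BOND, shift, axis)
                 for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)))
        boltz.append(np.exp(-beta * eps * nb[s == INERT]))
    boltz = np.concatenate(boltz)
    return -np.log(n_B / n_I) + np.log(boltz.mean())

# toy usage with random snapshots (illustration only)
rng = np.random.default_rng(1)
snaps = [rng.choice([EMPTY, BOND, INERT], size=(64, 64), p=[0.9, 0.05, 0.05])
         for _ in range(10)]
print(delta_f_estimate(snaps, eps=-2.95))
\end{verbatim}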
\begin{figure}[b] \centering \includegraphics[width=\textwidth]{fig_Dfv2.pdf}\vskip-1.5ex \caption{\textbf{Quantification of inhomogeneous chemical reactions at a NESS.} The internal free-energy differences $\Delta f_\text{v}$ (open squares) in the vapor and $\Delta f_\text{l}$ in the liquid (filled circles) for \textbf{(A)} nonequilibrium inhomogeneous and \textbf{(B)} homogeneous models. Solid and dotted lines show the FLEX predictions for $\Delta f_\text{v}$ and $\Delta f_\text{l}$, respectively. All data shown correspond to the coexistence conditions specified in \secref{sec:simulation_condition}.} \label{fig:fig_Df} \end{figure} We can numerically test the effect of applying nonequilibrium drive to the models with homogeneous and inhomogeneous chemical reactions, \eqref{eq:kIB_def}, by measuring the effective internal free-energy differences in coexisting liquid and vapor phases, $\Delta f_\text{l}$ and $\Delta f_\text{v}$, respectively. \figref{fig:fig_Df} shows the free-energy differences calculated according to \eqref{eq:deltaf1} using the NESS distribution obtained from simulations in each phase. The measured values of the effective internal free-energy differences, $\Delta f_\text{l}$ and $\Delta f_\text{v}$, for nonequilibrium homogeneous systems are identical in the liquid and vapor phases as expected. By contrast, the effective internal free-energy difference in the dilute phase, $\Delta f_\text{v}$, exhibits a monotonic decrease with respect to the nonequilibrium drive in inhomogeneous systems. The FLEX predictions for $\Delta f_\text{v}$ and $\Delta f_\text{l}$ obtained by evaluating \eqref{eq:∆f FLEX} at fixed $u=0$ and $u=4\epsilon$, respectively, show the same trends for both nonequilibrium models. It is important to note that the existence of an effective equilibrium that is the same in both phases does not mean that a driven, homogeneous system corresponds to a true equilibrium, because the entropy production rate is always positive due to particle exchange with the reservoir. \section{Phase coexistence calculations} \subsection{Nonequilibrium Umbrella Sampling (NEUS)} We use a form of nonequilibrium umbrella sampling (NEUS) \cite{warmflash2007umbrella} to obtain the steady-state distribution of the bonding-state particle density, $\rho_\text{B}$. For this purpose, we divide the entire range of $\rho_B$ into non-overlapping boxes and focus on the transition flux between the boxes at steady state. Importantly, transitions are only possible between adjacent boxes in our simulations because each kMC move can insert or remove at most one bonding-state particle. Detailed balance between the boxes always holds under this restriction, regardless of whether the system is driven out of equilibrium, as long as the system is at steady state. The left (L) and right (R) box-boundary crossing fluxes $f_\text{L}(b)$ and $f_\text{R}(b)$ therefore satisfy $f_\text{R}(b) = f_\text{L}(b+1)$ and $f_\text{L}(b) = f_\text{R}(b-1)$ for each box index $b$. (We write the box indices only as necessary in what follows.) The fluxes can be related to the transition probabilities $\{p_{ij}\}$ of reaching the $j$-side boundary starting from the $i$-side boundary of each box, where $i,j\in\{\text{L,R}\}$. By definition, $p_\text{LL} + p_\text{LR} = p_\text{RL} + p_\text{RR} = 1$. Our NEUS algorithm is based on launching trajectories in each box based on the incoming fluxes, and then matching the steady-state distribution between adjacent boxes by enforcing detailed balance. 
Equating the incoming and outgoing fluxes at the $i$-side boundary of a box leads to $f_i = f_i p_{ii} + f_{i'} p_{i'i}$ and $f_\text{L}/f_\text{R} = p_\text{RL} / p_\text{LR}$, where $i'$ indicates the opposite side of the box from side $i$. At each iteration of the algorithm, we use the previously collected ensembles of configurations at the box boundaries and the current estimate of $\{p_{ij}\}$ to launch new trajectories with probabilities $f_\text{L}/(f_\text{L}+f_\text{R}) = p_\text{RL}/(p_\text{RL}+p_\text{LR})$ and $f_\text{R}/(f_\text{L}+f_\text{R}) = p_\text{LR}/(p_\text{RL}+p_\text{LR})$ from the left and the right boundaries, respectively, of each box. We then record the average time, $t(b)$, until each of the launched trajectories exits box $b$; the average time $t(\rho_\text{B};b)$ that a trajectory spends at $\rho_\text{B}$ within box $b$; and the probability that a trajectory exits through the $i$-side boundary, $p_i = (p_{\text{L}i}f_\text{L} + p_{\text{R}i}f_\text{R})/(f_\text{L}+f_\text{R}) = p_{i'i}/(p_\text{RL}+p_\text{LR})$, of box $b$. We take $p(\rho_\text{B};b) = t(\rho_\text{B};b)/t(b)$ as the steady-state distribution within box $b$. The fluxes associated with trajectories originating within box $b$ and exiting via boundary $i$, $g_i(b) \equiv p_i(b) / t(b)$, are related to the overall fluxes by $g_i(b) = f_i(b) w(b)$, where $w(b)$ is the fraction of time that a steady-state trajectory spends in box $b$. By applying the detailed balance condition between boxes $b$ and $b+1$, $f_\text{R}(b) = f_\text{L}(b+1)$, we can self-consistently determine the box weights, $w(b+1)/w(b) = g_\text{R}(b)/g_\text{L}(b+1)$, and thus solve for the steady-state distribution over the complete range of $\rho_\text{B}$, $p(\rho_\text{B}) = p(\rho_\text{B};b)\times w(b)/\sum_b{w(b)}$. The algorithm is implemented by iteratively obtaining a new ensemble of configurations at each boundary of every box while calculating the steady-state distribution within each box. The initial configurations at each boundary are sampled from brute-force simulations inside of each box, rejecting any kMC events that would allow the system to cross the box boundaries. At each subsequent iteration of the algorithm, we first obtain the transition probabilities, $\{p_{ij}\}$, in each box starting from the current ensemble of configurations at the box boundaries. We then compute the steady-state distribution within each box using trajectories launched from the current ensemble of configurations and the calculated $\{p_{ij}\}$. We save the configurations from these trajectories that exit the box to form the ensemble for the next iteration of the algorithm. Finally, we average the steady-state distribution and the transition probabilities within each box over the successive iterations and reconstruct the steady-state distribution over the complete range of $\rho_\text{B}$ as described above. We apply this algorithm to measure the dimensionless thermodynamic driving force between the bulk phases, $\beta\Delta\Phi \equiv L^{-2} \ln \left( p_\text{l} / p_\text{v} \right)$, as discussed in the main text. Representative results of this algorithm are shown in \figref{fig:fig_NEUS}. The quick decay of $\Delta\Phi$ during the initial iterations in \figref{fig:fig_NEUS}A indicates that the system rapidly relaxes, after which the ensemble of trajectories remains in the steady state. 
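The stitching step of this procedure is compact enough to sketch explicitly. The following Python fragment (ours, with made-up per-box statistics purely for illustration) combines the per-box histograms, exit probabilities, and mean exit times into the global steady-state distribution and a value of $\beta\Delta\Phi$; in this toy example the first and last bins stand in for the vapor- and liquid-like densities.
\begin{verbatim}
import numpy as np

def stitch_boxes(t_hist, p_exit_L, p_exit_R, t_mean):
    """Combine per-box NEUS statistics into a global steady-state distribution.

    t_hist   : list of arrays, average time spent at each rho_B value inside box b
    p_exit_L : probability that a trajectory launched in box b exits to the left
    p_exit_R : probability that it exits to the right (p_exit_L + p_exit_R = 1)
    t_mean   : mean time to exit box b
    """
    n_box = len(t_hist)
    g_L = np.asarray(p_exit_L) / np.asarray(t_mean)   # g_i(b) = p_i(b) / t(b)
    g_R = np.asarray(p_exit_R) / np.asarray(t_mean)
    logw = np.zeros(n_box)
    for b in range(n_box - 1):                        # w(b+1)/w(b) = g_R(b)/g_L(b+1)
        logw[b + 1] = logw[b] + np.log(g_R[b] / g_L[b + 1])
    w = np.exp(logw - logw.max())
    pieces = [w[b] * t_hist[b] / t_hist[b].sum() for b in range(n_box)]
    p = np.concatenate(pieces)
    return p / p.sum()

# toy usage: three boxes with made-up statistics
t_hist = [np.array([5.0, 3.0]), np.array([2.0, 2.0]), np.array([1.0, 4.0])]
p_glob = stitch_boxes(t_hist, p_exit_L=[0.9, 0.6, 0.1], p_exit_R=[0.1, 0.4, 0.9],
                      t_mean=[1.0, 2.0, 1.5])
beta_dPhi = np.log(p_glob[-1] / p_glob[0]) / 64**2    # per-site driving force, L = 64
print(p_glob, beta_dPhi)
\end{verbatim}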
To verify that NEUS converges to the correct steady-state distribution, we also performed NEUS for an equilibrium lattice gas and confirmed that the results match a cluster expansion \cite{hansen2013simpleliquids} up to the third order in $z_\text{B}$ and $z_\text{I}$ (\figref{fig:fig_NEUS}B). When performing systematic calculations at different lattice sizes, we observe a linear dependence of $\Delta\Phi$ with respect to the inverse of the system size, $L^{-1}$, in the case of nonequilibrium inhomogeneous systems (\figref{fig:fig_NEUS}C). By contrast, we do not observe any system-size dependence for the nonequilibrium homogeneous and equilibrium systems (\figref{fig:fig_NEUS}B,D). Thus, in the case of inhomogeneous systems, we calculate the supersaturation in the thermodynamic limit by extrapolating the values obtained from simulations performed in finite systems to the infinite system size (\figref{fig:fig_NEUS}C). We attribute this system-size dependence to the broken particle--hole symmetry between the liquid and the vapor phases induced by the inhomogeneous chemical reactions. We further note that the steady-state distributions for the nonequilibrium homogeneous systems and their equivalent equilibrium systems are identical (\figref{fig:fig_NEUS}D), which is consistent with the prediction of an effective equilibrium for homogeneous systems discussed in \secref{sec:inhomogeneous}. \begin{figure} \centering \includegraphics[scale=0.9]{fig_NEUSv3.pdf} \caption{\textbf{Representative results of Nonequilibrium Umbrella Sampling.} \textbf{(A)} Relaxation to the steady state. The value of $\beta\Delta\Phi$ at each iteration is calculated from the average steady-state distribution over the previous 200 iterations. The solid line and the shaded region show the average and the range of $\beta\Delta\Phi$ observed among four independent NEUS trials, respectively. \textbf{(B)} Test of NEUS (circles) against a third-order cluster expansion (open squares) for equilibrium systems. \textbf{(C)} Analysis of the finite-size effect for nonequilibrium inhomogeneous and \textbf{(D)} homogeneous systems whose coexistence conditions are $\beta\Delta\mu_{\text{coex}} = 1.87$. The open squares in (D) are obtained from a third-order cluster expansion for the corresponding effective equilibrium systems specified by $z_\text{B}'$ and $z_\text{I}'$, as defined in \eqref{eq:z_B'} and \eqref{eq:z_I'}, respectively.} \label{fig:fig_NEUS} \end{figure} \subsection{Validation of phase coexistence via direct coexistence simulations} Phase coexistence is established at a NESS when the net flux of particles, the net flux of thermal energy, and the pressure difference between two phases are zero. These conditions are analogous to the equilibrium phase-coexistence criteria of equal chemical potentials, temperatures, and pressures. In our model, coupling to a single particle reservoir regardless of which phase is currently occupying the lattice, along with the local detailed balance condition governing the reaction rates, ensures that the net particle and heat fluxes vanish between the two phases. In order to satisfy mechanical equilibrium, we propose that the nonequilibrium potential difference defined above, $\Delta\Phi$, should be set equal to zero. This choice is motivated by analogy to equilibrium statistical mechanics, in which case $\Delta\Phi$ is equal to the difference between the grand potential densities of the two phases. 
Because the grand potential is proportional to the pressure at equilibrium, setting $\Delta\Phi$ equal to zero guarantees mechanical balance at equilibrium. Under nonequilibrium conditions, however, the relation between the nonequilibrium grand potential and the pressure does not hold. Nonetheless, $\Delta\Phi = 0$ still implies that the steady-state probabilities of the two phases occupying a given volume are equal at a NESS. In our lattice model, this condition also means that a long trajectory spends an equal amount of time with each phase completely occupying the lattice. We therefore propose that this condition can be used to determine bulk phase coexistence in the thermodynamic limit at a NESS. \begin{figure}[b] \centering \includegraphics[scale=0.9]{interface_diffusion.pdf} \caption{ \textbf{Stochastic direct coexistence simulations verify that $\Delta\Phi = 0$ guarantees mechanical balance between coexisting phases.} Both the nonequilibrium homogeneous (blue) and inhomogeneous (orange) cases are simulated at $\beta\Delta\mu=1.87$, where NEUS indicates that $\beta\Delta\Phi = 0$ in the thermodynamic limit. The right orange curve is plotted versus $\beta\Delta\Phi(L=16)$ obtained directly from NEUS simulations, while the left orange curve is plotted versus $\beta\Delta\Phi(L=\infty)$ obtained by extrapolating the NEUS simulation results to the thermodynamic limit (see \figref{fig:fig_NEUS}). We estimate that direct coexistence simulations performed in an infinitely large system would lie within the shaded region in between these curves.} \label{fig:fig_interface_diffusion} \end{figure} We verify that our definition of phase coexistence based on $\Delta\Phi = 0$ coincides with mechanical balance between the phases, and thus is a proper extension of the equilibrium concept, by performing stochastic direct coexistence simulations \cite{noya2008}. We simulate two bulk phases in direct contact at a flat interface and allow the system to evolve at steady state until the lattice is fully occupied by either of the bulk phases. Using a slab geometry on a $16\times64$ lattice, we initialize these simulations with half of the lattice in the liquid phase and the other half in the vapor phase. If the bulk phases are in mechanical balance, then the interface should diffuse in either direction without any bias and thus reach either of the absorbing states with equal probability. We therefore measure the probability for a system to reach the liquid phase from the initial condition, $P_\text{l}$, to verify that unbiased diffusion coincides with $\Delta\Phi = 0$. \figref{fig:fig_interface_diffusion} shows that in the homogeneous case, $P_\text{l} = \nicefrac{1}{2}$ at $\Delta\Phi = 0$ as expected. Due to the finite-size effects described above, the results of these simulations in the inhomogeneous case depend on both the longitudinal and transverse dimensions of the lattice. From the difference between $\beta\Delta\Phi(L=16)$ and $\beta\Delta\Phi(L=\infty)$ shown in \figref{fig:fig_NEUS}C, we are able to establish that the $P_\text{l}(\beta\Delta\Phi)$ curve lies within the bounds shown in \figref{fig:fig_interface_diffusion}. These results are thus consistent with $\Delta\Phi = 0$ corresponding to $P_\text{l} = \nicefrac{1}{2}$ in the inhomogeneous case. 
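The comparison with $P_\text{l}=\nicefrac{1}{2}$ amounts to a simple binomial test. As a schematic illustration (ours; the counts below are placeholders rather than measured data), a standard error can be attached to the measured $P_\text{l}$ as follows.
\begin{verbatim}
import math

def p_liquid_estimate(n_liquid, n_total):
    """Estimate P_l and its binomial standard error from direct-coexistence outcomes."""
    p = n_liquid / n_total
    err = math.sqrt(p * (1.0 - p) / n_total)
    return p, err

p, err = p_liquid_estimate(n_liquid=53, n_total=100)   # placeholder counts
print(f"P_l = {p:.2f} +/- {err:.2f}; "
      f"deviation from 1/2 is {abs(p - 0.5) / err:.1f} standard errors")
\end{verbatim}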
We note that the region of uncertainty in $\beta\Delta\Phi$ shown in \figref{fig:fig_interface_diffusion} (approximately $\pm0.01$) is much smaller than the magnitude of $\beta\Delta\Phi$ in all simulations used to test the applicability of classical nucleation theory ($\beta\Delta\Phi \geq 0.06$). Taken together, these results demonstrate that our definition of nonequilibrium phase coexistence appropriately identifies the conditions for mechanical balance in both homogeneous and inhomogeneous nonequilibrium systems. \section{Homogeneous nucleation calculations} \subsection{Forward Flux Sampling (FFS)} We utilize the forward flux sampling (FFS) rare-event simulation method \cite{allen2009forward} to calculate the nucleation rate starting from the vapor phase. Using the largest cluster of bonding-state particles, $n$, as the reaction coordinate, we perform FFS using $M=64$ milestones from $n_0 = 6$ to $n_{63} = 1100$, with the spacing between consecutive milestones increasing monotonically from 3 to 50, as we advance the milestones. We first determine the flux across the initial milestone, $\Phi_0$, from a steady-state trajectory in the vapor phase. Likewise, the initial ensemble of configurations at $n_0$ is obtained by randomly selecting 1000 configurations at $n=n_0$ from a steady-state trajectory in the vapor phase. We then calculate the probability $P(n_{i+1} | n_i)$ that a trajectory launched from milestone $n_i$ reaches milestone $n_{i+1}$ before returning to the vapor phase. To this end, we launch trajectories from each milestone $n_i$ until we obtain 1000 configurations at $n_{i+1}$. We halt the simulation when the probability $P$ reaches unity. Based on these probabilities and the initial flux measurement, the FFS expression for the nucleation rate, $I$, is \begin{equation} \label{eq:rate_FFS} I = \Phi_0\prod_{j=0}^M P(n_{j+1} | n_j). \end{equation} \subsection{Committing probability and Zeldovich factor} We compute the Zeldovich factor directly from FFS simulations by analyzing the committing probability, $\phi(n)$. The quantity $\phi(n)$ represents the probability that a system with nucleus size $n$ successfully completes the phase transformation into the stable liquid phase before returning to the metastable vapor phase. We calculate $\phi(n_i)$ at each FFS milestone $n_i$ based on the milestone probabilities $P(n_{i+1}|n_i)$, \begin{equation} \phi(n_i) = \prod_{j=i}^M P(n_{j+1} | n_j). \end{equation} The critical nucleus size $n^*$ is found where $\phi(n^*)=\nicefrac{1}{2}$, which is interpolated from the values of $\phi(n_i)$. In the diffusive limit, the critical nucleus size coincides with the location of the top of the barrier on the (nonequilibrium) landscape $F(n) \equiv -\beta^{-1}\ln p(n)$, where $p(n)$ is the steady-state probability of observing a nucleus of size $n$ \cite{hummer2004}. In this limit, $\phi(n)$ is given by \begin{equation}\label{eq:committor_def} \phi(n) = \int_{n_\text{v}}^n dn'\; e^{\beta F(n')} \bigg/ \int_{n_\text{v}}^{n_\text{l}} dn'\; e^{\beta F(n')}, \end{equation} where $n = n_\text{v}$ and $n = n_\text{l}$ mark the boundaries of the transition region between the vapor and liquid phases. 
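Before specializing to the high-barrier limit, we note that combining the measured quantities is straightforward; the following sketch (ours, with placeholder milestone probabilities in place of the $\sim$63 values from an actual run) implements \eqref{eq:rate_FFS} together with the product formula for $\phi(n_i)$ given above.
\begin{verbatim}
import numpy as np

def ffs_rate(phi0, P):
    """Nucleation rate from FFS: I = Phi_0 * prod_j P(n_{j+1} | n_j)."""
    return phi0 * np.prod(P)

def committor(P):
    """phi(n_i) = prod_{j >= i} P(n_{j+1} | n_j) for each milestone i."""
    return np.cumprod(np.asarray(P)[::-1])[::-1]

# placeholder milestone probabilities (illustration only)
P = [0.2, 0.3, 0.5, 0.7, 0.9, 0.99, 1.0]
phi = committor(P)
print(ffs_rate(phi0=1e-4, P=P))            # nucleation rate in the same units as Phi_0
print(phi, np.argmin(np.abs(phi - 0.5)))   # milestone closest to the critical nucleus
\end{verbatim}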
In the high-barrier limit, the exponential integrands in (\ref{eq:committor_def}) are dominated by the barrier height $\beta F(n^*)$, and we can make a saddle-point approximation around $n=n^*$, \begin{equation} e^{\beta F(n)} \approx e^{\beta F(n^*)}\exp\left[\dfrac{\beta F''(n^*)}{2}(n-n^*)^2\right], \end{equation} which leads to an approximate form of $\phi(n)$: \begin{equation} \phi(n) \approx \dfrac{\erf[\sqrt{-\beta F''(n^*) / 2}(n-n^*)] + \erf[\sqrt{-\beta F''(n^*) / 2}(n^*-n_\text{v})]}{\erf[\sqrt{-\beta F''(n^*) / 2}(n_\text{l}-n^*)] + \erf[\sqrt{-\beta F''(n^*) / 2}(n^*-n_\text{v})]}. \end{equation} The diffusive-limit condition $\phi(n^*) = 1/2$ and the liquid-phase boundary condition $\phi(\infty) = 1$ further simplify the committing probability to \begin{equation}\label{eq:commit_approx} \phi(n) \approx \dfrac{1}{2}\erf\left[\sqrt{-\dfrac{\beta F''(n^*)}{2}}(n-n^*)\right] + \dfrac{1}{2}. \end{equation} Note that this approximate form of $\phi(n)$ agrees with the vapor-phase boundary condition $\phi(n_\text{v}) \approx 0$ as long as $n_\text{v} \ll n^*$. To compute the Zeldovich factor, we evaluate the second derivative $F''(n^*)$ by fitting the committing probabilities at the FFS milestones to \eqref{eq:commit_approx} in the region where $0.25 \leq \phi \leq 0.75$. The Zeldovich factor, \begin{equation} \Gamma \equiv \sqrt{-\frac{\beta F''(n^*)}{2\pi}}, \end{equation} is then calculated from the fitted value of $F''(n^*)$. \figref{fig:Zeldovich}A shows that \eqref{eq:commit_approx} works well inside the fitting region. Furthermore, \figref{fig:Zeldovich}B shows that the fitted value of the Zeldovich factor and the calculated value determined using the classical nucleation theory (CNT) line tension (see \eqref{eq:CNT_rate} below) match one another, so that \eqref{eq:commit_approx} is consistent with CNT. \begin{figure} \centering \includegraphics[scale=0.85]{Figures/fig_Z1.pdf} \caption{\textbf{The relationship between the committing probability, $\phi(n)$, and the Zeldovich factor, $\Gamma$.} (A)~Committing probabilities, $\phi(n)$, interpolated from FFS results (marks) and fit to \eqref{eq:commit_approx} (solid lines) at $S = 1.27$. (B) The Zeldovich factor, $\Gamma$, determined from the committing probability (marks) and from the CNT kinetics equation, \eqref{eq:CNT_rate}, using the analytical value $\beta\sigma_\text{eq}=1.023$ for the equilibrium line tension obtained from \eqref{eq:sigma_analytical} (gray line) and the fitted line tension $\beta\sigma = 0.856$ using \eqref{eq:CNT_rate} (orange line). Data are shown for nonequilibrium homogeneous (blue) and inhomogeneous (orange) systems whose coexistence conditions are $\beta\Delta\mu_{\text{coex}} = 1.87$.} \label{fig:Zeldovich} \end{figure} \subsection{Determination of the nonequilibrium line tension} Ref.~\cite{ryu2010validity} extensively tested CNT for the equilibrium two-dimensional square lattice model and validated the CNT expressions for the free-energy landscape along the nucleus-size reaction coordinate, $n$, and the nucleation rate, $I$, \begin{align} \label{eq:CNT_landscape} \beta F(n) &= \beta\sigma\sqrt{4\pi n} - n\beta\Delta\Phi + 1.25\ln n + d\\ \label{eq:CNT_rate_factors} I &= \rho_1D^*\Gamma\exp[-\beta\Delta F^*]. \end{align} In \eqref{eq:CNT_landscape}, $d$ is a constant chosen to equate the gas-phase monomer number density, $\rho_1$, and $\exp[-\beta F(1)]$, such that $\Delta F^* \equiv F(n^*) - F(1)$.
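The extraction of $\Gamma$ from the committing probability described above is a standard two-parameter nonlinear least-squares problem. A minimal sketch (ours; the milestone committor values below are placeholders standing in for FFS output) based on \texttt{scipy.optimize.curve\_fit} is the following.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def phi_model(n, n_star, curv):
    """Eq. (commit_approx) with curv = -beta F''(n*) > 0."""
    return 0.5 * erf(np.sqrt(curv / 2.0) * (n - n_star)) + 0.5

# placeholder (n_i, phi_i) pairs standing in for the FFS milestone committors
n = np.array([40, 60, 80, 100, 120, 140, 160], dtype=float)
phi = np.array([0.10, 0.22, 0.38, 0.52, 0.66, 0.78, 0.88])

mask = (phi >= 0.25) & (phi <= 0.75)               # fitting window used in the text
(n_star, curv), _ = curve_fit(phi_model, n[mask], phi[mask],
                              p0=[100.0, 1e-3], bounds=(0, np.inf))
Gamma = np.sqrt(curv / (2.0 * np.pi))              # Zeldovich factor
print(n_star, curv, Gamma)
\end{verbatim}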
Rearranging the expression for the CNT nucleation rate, we obtain an expression for the nucleation barrier height at a given $\Delta\Phi$, \begin{equation} \ln\left(\dfrac{I}{\rho_1D^*\Gamma}\right) = -\left[\beta\sigma\sqrt{4\pi}({n^*}^{1/2}-1) - \beta\Delta\Phi(n^*-1) + 1.25\ln n^*\right]. \label{eq:CNT_rate} \end{equation} The critical nucleus size $n^*$ follows from the condition $F'(n^*) = 0$, \begin{equation} n^* = 25/(-\beta\sigma\sqrt{4\pi} + \sqrt{4\pi\beta^2\sigma^2 + 20\beta\Delta\Phi})^2. \label{eq:critical_nucleus_size} \end{equation} We determine the apparent line tension in nonequilibrium simulations by fitting \eqref{eq:CNT_rate} over a range of $\beta\Delta\Phi$, using independent measurements of $\ln(I/\rho_1 D^* \Gamma)$, and using $\sigma$ as the sole fitting parameter. We find that we can apply \eqref{eq:CNT_rate} to nonequilibrium systems without modifying the functional form of \eqref{eq:CNT_landscape}, although the value of the line tension may differ from the equilibrium value (see Fig.~2B in the main text). \begin{figure} \centering \includegraphics[width=\textwidth]{fig_factor.pdf} \caption{\textbf{The line tension is the dominant factor governing the nucleation kinetics.} \textbf{(A)} The various factors contributing to the CNT expression for the nucleation rate, \eqref{eq:CNT_rate_factors}, in nonequilibrium inhomogeneous and \textbf{(B)} homogeneous cases. Shown here are the diffusion coefficient $D^*$ (squares), the monomer density $\rho_1$ (triangles), the Zeldovich factor $\Gamma$ (open circles), and the apparent barrier height $\ln (I/\rho_1D^*\Gamma)$ (filled circles) at $S = 1.27$.} \label{fig:fig_factor} \end{figure} In \figref{fig:fig_factor}, we demonstrate that the line tension, $\sigma$, is the most important variable in determining the nucleation kinetics at a NESS, as expected from equilibrium systems. Within the framework of CNT, there are three independent variables affecting the nucleation kinetics at a fixed supersaturation: $\rho_1$, $D^*$, and $\sigma$, where the last variable governs both the nucleation barrier and the Zeldovich factor. In \figref{fig:fig_factor}, direct comparisons among $\rho_1$, $D^*$, $\Gamma$, and the apparent barrier height $\ln (I/\rho_1D^*\Gamma)$ at a fixed value of the supersaturation show that the barrier term is indeed the dominant term for both the homogeneous and inhomogeneous nonequilibrium models. Furthermore, in contrast to the barrier term, $\Gamma$ only shows a weak dependence on $\Delta\mu$. This demonstrates that the effect of the nonequilibrium line tension on the nucleation kinetics primarily originates from the nucleation barrier rather than the Zeldovich factor. \section{Theoretical predictions for thermodynamics and kinetics at a NESS} \label{sec:flex_predictions} In this section, we describe how the approximate theoretical analysis (FLEX) introduced in \secref{sec:flex} can be applied to predict the nucleation kinetics at a NESS. We first coarse-grain the two non-interacting isoenergetic states, I and E, into a single inert state in order to map our three-state model onto an effective two-state model. To this end, we sum up the FLEX steady-state distributions for the I and E lattice states in \eqref{eq:z_B'} and \eqref{eq:z_I'} assuming a fixed value of the local potential energy, $u$. We then apply results for the equilibrium lattice-gas model to our effective equilibrium model. 
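Both the line-tension fit just described and the FLEX-based estimates developed below reduce to evaluating the CNT expressions \eqref{eq:CNT_rate} and \eqref{eq:critical_nucleus_size}. A minimal sketch of the one-parameter fit of $\sigma$ (our own illustration; the ``measured'' values of $\beta\Delta\Phi$ and $\ln(I/\rho_1 D^*\Gamma)$ below are placeholders) is the following.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def n_star(sigma, dPhi):
    """Critical nucleus size, Eq. (critical_nucleus_size); sigma, dPhi in units of kT."""
    return 25.0 / (-sigma * np.sqrt(4 * np.pi)
                   + np.sqrt(4 * np.pi * sigma**2 + 20.0 * dPhi))**2

def log_rate_factor(sigma, dPhi):
    """Right-hand side of Eq. (CNT_rate): ln(I / (rho_1 D* Gamma))."""
    ns = n_star(sigma, dPhi)
    return -(sigma * np.sqrt(4 * np.pi) * (np.sqrt(ns) - 1.0)
             - dPhi * (ns - 1.0) + 1.25 * np.log(ns))

# placeholder measurements of beta*DeltaPhi and ln(I / rho_1 D* Gamma)
dPhi_data = np.array([0.06, 0.08, 0.10, 0.12])
y_data = np.array([-55.0, -40.0, -31.0, -25.0])

def residual(sigma):
    return np.sum((log_rate_factor(sigma, dPhi_data) - y_data)**2)

fit = minimize_scalar(residual, bounds=(0.1, 3.0), method="bounded")
print(fit.x)   # apparent dimensionless line tension beta*sigma
\end{verbatim}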
\subsection{FLEX predictions of phase coexistence and entropy production rates} In the equilibrium two-dimensional two-state lattice-gas model, the coexistence condition is given by $\mu = 2\epsilon$, where $\mu$ is the chemical potential of the particle \cite{pathria1996statistical}. This condition reflects the particle--hole symmetry of the two-state lattice-gas model. Within FLEX, we approximate the conditions for phase coexistence by assuming that particle--hole symmetry still applies to the effective equilibrium model. We define the FLEX approximation for the supersaturation, $S_\text{FLEX}$, by considering a lattice site surrounded by precisely two bonding-state particles, such that the fixed local environment has $u=2\epsilon$, \begin{equation} S_\text{FLEX} \equiv \left.\dfrac{p_\text{B}}{p_\text{E}+p_\text{I}}\right|_{u=2\epsilon} = \left.\dfrac{z_\text{B}'e^{-\beta u}}{1+z_\text{I}'}\right|_{u=2\epsilon}, \label{eq:S_FLEX} \end{equation} where $z_\text{B}'$ and $z_\text{I}'$ are defined in \eqref{eq:z_B'} and \eqref{eq:z_I'}, respectively. Similar to the true equilibrium model, the effective equilibrium approximation given by \eqref{eq:S_FLEX} predicts that phase coexistence occurs when the steady-state probabilities of the bonding and the coarse-grained inert states are equal at a lattice site with $u = 2\epsilon$. FLEX predicts the following expression for the entropy production rate density, $\dot{\Sigma}_\text{FLEX}$, \begin{equation} \label{eq:FLEX_entropy_production_rate} k_B^{-1}\dot{\Sigma}_\text{FLEX} = j\times\beta\Delta\mu = \dfrac{z_\text{I}'k_{\text{I}\rightarrow\text{B}}(e^{\beta\Delta\mu}-1)\beta\Delta\mu}{[1+z_\text{B}'e^{-\beta u}+z_\text{I}'][1+k_{\text{I}\rightarrow\text{B}}(1+e^{\beta\Delta f_\text{res}})e^{\beta\Delta\mu}]}, \end{equation} where $j\equiv p_\text{B}W_\text{BI} - p_\text{I}W_\text{IB}$ is the net transition flux in the B to I direction. \eqref{eq:FLEX_entropy_production_rate} indicates that $\dot{\Sigma}$ is always positive unless the system is at equilibrium ($\beta\Delta\mu=0$). Furthermore, the entropy production always depends on $u$, regardless of the functional form of $k_{\text{I}\rightarrow\text{B}}$, which implies that $\dot{\Sigma}$ should in general differ between the vapor and liquid phases. Within FLEX, we estimate the entropy production rate density in the vapor phase, $\dot{\Sigma}_\text{v}$, and in the liquid phase, $\dot{\Sigma}_\text{l}$, by fixing $u=0$ and $u=4\epsilon$, respectively. \begin{figure} \centering \includegraphics[width=\textwidth]{fig_S.pdf} \caption{\textbf{FLEX predictions of thermodynamic quantities.} \textbf{(A)} Comparison between the FLEX predictions and simulation results for the supersaturation and \textbf{(B)} the entropy production rate density difference $\Delta\dot{\Sigma} \equiv \dot{\Sigma}_\text{v} - \dot{\Sigma}_\text{l}$. Data shown are for nonequilibrium homogeneous (blue) and inhomogeneous (orange) systems, both of whose coexistence conditions are $\beta\Delta\mu_{\text{coex}} = 1.87$. FLEX predictions are shown for the same conditions.} \label{fig:fig_FLEX_thermodynamics} \end{figure} \figref{fig:fig_FLEX_thermodynamics} shows that these predictions agree qualitatively with the simulation results. The supersaturation, $S$, is calculated from NEUS, and the steady-state entropy production rate density, $\dot{\Sigma}$, is calculated from the simulated trajectory as described in Ref.~\cite{van2015ensemble}.
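For concreteness, the FLEX curves in \figref{fig:fig_FLEX_thermodynamics} can be reproduced from \eqref{eq:S_FLEX} and \eqref{eq:FLEX_entropy_production_rate} with a few lines of code. The sketch below (ours; the parameter values are illustrative only, and all energies are in units of $k_\text{B}T$) evaluates the predicted supersaturation and the vapor--liquid entropy-production difference for both models.
\begin{verbatim}
import math

def k_IB(u, k0, df_res, dmu, model):
    # Eq. (kIB_def); all energies in units of kT
    if model == "homogeneous":
        return k0
    return k0 * min(1.0, math.exp(-(u + df_res + dmu)))

def primed_fugacities(u, z_B, z_I, k0, df_res, dmu, model):
    # Eqs. (z_B') and (z_I')
    k = k_IB(u, k0, df_res, dmu, model)
    denom = 1.0 + k * (1.0 + math.exp(df_res + dmu))
    zBp = z_B * (1.0 + k * (1.0 + math.exp(df_res))) / denom
    zIp = z_I * (1.0 + k * (1.0 + math.exp(df_res)) * math.exp(dmu)) / denom
    return zBp, zIp

def S_flex(eps, z_B, z_I, k0, df_res, dmu, model):
    # Eq. (S_FLEX), evaluated at u = 2*eps
    zBp, zIp = primed_fugacities(2 * eps, z_B, z_I, k0, df_res, dmu, model)
    return zBp * math.exp(-2 * eps) / (1.0 + zIp)

def entropy_rate(u, z_B, z_I, k0, df_res, dmu, model):
    # Eq. (FLEX_entropy_production_rate), in units of k_B * D
    zBp, zIp = primed_fugacities(u, z_B, z_I, k0, df_res, dmu, model)
    k = k_IB(u, k0, df_res, dmu, model)
    num = zIp * k * (math.exp(dmu) - 1.0) * dmu
    den = ((1.0 + zBp * math.exp(-u) + zIp)
           * (1.0 + k * (1.0 + math.exp(df_res)) * math.exp(dmu)))
    return num / den

eps, z_B, z_I, k0, dmu = -2.95, 0.02, 0.03, 0.1, 1.87   # illustrative values only
df_res = -math.log(z_B / z_I)
for model in ("homogeneous", "inhomogeneous"):
    S = S_flex(eps, z_B, z_I, k0, df_res, dmu, model)
    dSig = (entropy_rate(0.0, z_B, z_I, k0, df_res, dmu, model)
            - entropy_rate(4 * eps, z_B, z_I, k0, df_res, dmu, model))
    print(model, S, dSig)   # supersaturation and Delta Sigma_dot = Sigma_v - Sigma_l
\end{verbatim}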
The FLEX predictions perfectly match the simulation results for the nonequilibrium homogeneous case, while the inhomogeneous case shows systematic deviations; however, there is a clear linear relation between the predictions and the simulation results even when the system is driven far from equilibrium. \subsection{FLEX prediction of the nonequilibrium line tension} We use FLEX to predict the line tension contribution to the CNT rate equation, \eqref{eq:CNT_rate}, by considering the attachment of a single bonding-state particle to a flat liquid--vapor interface at steady state. We perform this calculation at the FLEX-predicted coexistence point, as we expect the line tension to be independent of the supersaturation. We further assume that the interface can be described using the restricted solid-on-solid (RSOS) model \cite{saito1996statistical}. At coexistence, our approximation for the supersaturation, $S_\text{FLEX}$, implies that the formation of a kink at the interface (\figref{fig:fig_line_tension}A) incurs no (effective) free-energy penalty in the effective equilibrium model. We can therefore predict the line tension from the (effective) free-energy cost of attaching a single bonding-state adatom to the interface in the RSOS model. We take the negative of this free-energy cost as the new interaction strength $\beta\tilde\epsilon$ based on the particle--hole symmetry in the effective equilibrium model and predict the line tension using the equilibrium expression \cite{shneidman1999analytical} \begin{equation} \label{eq:sigma_analytical} \beta\sigma(\tilde\epsilon) = \left\{\dfrac{4\tilde\epsilon}{\pi\chi(\beta)}\int_{\beta_c}^\beta K'\left[\dfrac{8(\cosh(\beta'\tilde\epsilon) -1)}{(\cosh(\beta'\tilde\epsilon) +1)^2}\right] \left(\dfrac{\cosh(\beta'\tilde\epsilon)-3}{\sinh(\beta'\tilde\epsilon)}\right) d\beta'\right\}^{1/2}, \end{equation} where $K'$ is the elliptic integral of the first kind, $\chi(\beta) = \left[1-\sinh^{-4}(\beta\tilde\epsilon/2)\right]^{1/8}$, and $\beta_c$ is the (inverse) critical temperature given by $\beta_c\tilde\epsilon = 2\ln(1+\sqrt{2})$. We find that this RSOS approximation qualitatively explains the decreasing trend of the line tension with respect to the nonequilibrium drive for the inhomogeneous model (see Fig.~3a in the main text). By contrast, we do not observe any change in the predicted line tension in the homogeneous case because the interfacial properties are described by an effective equilibrium model that is common to both phases. \begin{figure} \centering \includegraphics[width=\textwidth]{fig_line_tension.pdf} \caption{\textbf{FLEX prediction of the nonequilibrium line tension.} \textbf{(A)} Schematic of a single-layer configuration at a flat vapor--liquid interface. We coarse-grain the inert particle (blue) and unoccupied (empty) lattice sites and thus map the upper configuration to the lower configuration with coarse-grained inert lattice sites (gray). From the steady-state distribution calculated using FLEX, we obtain an effective bonding energy, $\beta\tilde\epsilon$, for an isolated adatom as shown. \textbf{(B)} The relationship between the thermodynamic inhomogeneity, $\Delta\Delta f$, at coexistence and the deviation of the line tension from equilibrium, $\Delta\sigma$. \textbf{(C)}~The dependence of $\Delta\Delta f$ on the relative timescale, $k^\circ$, at the same conditions shown in the inset of Fig.~3a in the main text. Solid lines show the FLEX predictions, and marks report simulation results. 
Data are shown for nonequilibrium homogeneous (blue) and inhomogeneous (orange) models.} \label{fig:fig_line_tension} \end{figure} Based on this analysis, we postulate that if the inferred nonequilibrium line tension differs from the equilibrium value, then the coexisting phases at a NESS must be thermodynamically inhomogeneous and thus described by different effective equilibrium models. Both FLEX and our simulation results support this postulated relationship between the thermodynamic inhomogeneity and the interfacial properties. In \figref{fig:fig_line_tension}B, the degree of inhomogeneity, $\Delta\Delta f = \Delta f_\text{l} - \Delta f_\text{v}$, is calculated from simulation trajectories obtained in each phase at steady state using \eqref{eq:deltaf1}, while the FLEX predictions are calculated using \eqref{eq:∆f FLEX} and assuming that $u=4\epsilon$ for $\Delta f_\text{l}$ and $u=0$ for $\Delta f_\text{v}$. Both simulation and theory are consistent with our prediction that a deviation in the line tension ($\Delta\sigma \neq 0$) implies a nonzero $\Delta\Delta f$. At the same time, simulation and theory both show that the converse does not necessarily hold, as nonzero values of $\Delta\Delta f$ may not result in nonzero values of $\Delta\sigma$. FLEX suggests that this latter relationship is dependent on the precise functional form of $k_{\text{I}\rightarrow\text{B}}$. For the inhomogeneous model, we find that the maximum of $\beta\Delta\Delta f$ occurs when $k^\circ\approx 1$, which is also when the line tension deviates furthest from the equilibrium value (\figref{fig:fig_line_tension}C; see also the inset of Fig.~3a). This observation further supports our hypothesis that changes in the interfacial properties are only possible when the two coexisting nonequilibrium phases do not share a common effective equilibrium description. We note, however, that the two limits $k^\circ\rightarrow0$ and $k^\circ\rightarrow\infty$ do not correspond to the same steady-state distribution. In the limit $k^\circ\rightarrow0$, $k_{\text{I}\rightarrow\text{B}}$ also approaches $0$ regardless of its functional form, and the system reverts to a true equilibrium with the change of variable $\lambda_\text{B}\rightarrow\lambda_\text{B}\exp(\beta\Delta\mu)$, so that $z_\text{B}'=z_\text{B}$ and $z_\text{I}'=z_\text{I}$ (\eqref{eq:z_B'} and \eqref{eq:z_I'}). In the limit $k_{\text{I}\rightarrow\text{B}}\rightarrow\infty$, however, the fugacities in the system and the reservoir are not identical unless the system is at equilibrium ($\beta\Delta\mu = 0$). \subsection{FLEX prediction of the nonequilibrium nucleation kinetics} We can derive approximate expressions for the various factors governing the nucleation kinetics, $\rho_1$, $D^*$, $\Gamma$, and $\beta\Delta F^*$, within the FLEX framework. For the monomer density in the vapor phase, $\rho_1$, we assume that the bonding-state particles are sparsely distributed and thus approximate the local potential energy $u$ as zero for all lattice sites.
Then $\rho_1$ and the total particle density, $\rho_\text{v}$, are approximated as \begin{align} \rho_1 &= \left.\dfrac{p_\text{B}}{p_\text{B}+p_\text{I}+p_\text{E}}\right|_{u=0} = \left.\dfrac{z_\text{B}'}{z_\text{B}'+z_\text{I}'+1}\right|_{u=0}\\ \rho_\text{v} &= \left.\dfrac{p_\text{B}+p_\text{I}}{p_\text{B}+p_\text{I}+p_\text{E}}\right|_{u=0} = \left.\dfrac{z_\text{B}'+z_\text{I}'}{z_\text{B}'+z_\text{I}'+1}\right|_{u=0}, \end{align} where the steady-state distributions $p_\text{B}$, $p_\text{I}$, and $p_\text{E}$ are given by \eqref{eq:z_B'} and \eqref{eq:z_I'}. We approximate the diffusion coefficient at the top of the nucleation barrier, $D^*$, as the rate of attaching a bonding-state adatom to a circular nucleus of size $n^*$, which is given by \eqref{eq:critical_nucleus_size}. We assume that the adatom interacts only with the critical nucleus and that the local environment can therefore be described by $u = \epsilon$. Under this condition, the mean time to insert a bonding-state adatom into an unoccupied lattice site, $\tilde{T}$, is given by \begin{equation} \tilde{T} = \left.\dfrac{(1+z_\text{I}')+k_{\text{I}\rightarrow\text{B}}}{z_\text{B}'+k_{\text{I}\rightarrow\text{B}}(z_\text{B}'+z_\text{I}')}\right|_{u=\epsilon}. \end{equation} We approximate the attachment as a first-order transition and define the bonding-state adatom attachment rate, $w_+$, to be the inverse of the mean time, \begin{equation} w_+ \equiv \tilde{T}^{-1} = \left[z_\text{B}' + z_\text{I}'\times\dfrac{k_{\text{I}\rightarrow\text{B}}-z_\text{B}'}{k_{\text{I}\rightarrow\text{B}}+(1+z_\text{I}')}\right]_{u=\epsilon}. \end{equation} Finally, we approximate the diffusion coefficient as the product of the perimeter of a circular critical nucleus, $\sqrt{4\pi n^*}$, and the bonding-state adatom attachment rate per lattice site, $w_+$: \begin{equation} D^* = \sqrt{4\pi n^*}w_+. \end{equation} Given the nonequilibrium line tension $\sigma$ estimated from the RSOS approximation described above, the barrier height $\Delta F^* = F(n^*) - F(1)$ is calculated from \eqref{eq:CNT_landscape}, and the Zeldovich factor $\Gamma$ is given by \begin{equation} \Gamma = \sqrt{-\dfrac{\beta F''(n^*)}{2\pi}} = \sqrt{\dfrac{1}{8\pi}\left[\dfrac{\beta\sigma\sqrt{4\pi}}{(n^*)^{1.5}}+\dfrac{5}{(n^*)^2}\right]}, \end{equation} where the critical nucleus size $n^*$ is estimated from \eqref{eq:critical_nucleus_size}. Comparisons between these FLEX predictions and simulation results for both equilibrium and nonequilibrium systems are shown in \figref{fig:fig_factor_FLEX}. Overall, we find qualitative agreement for all four factors in the CNT rate equation. Importantly, FLEX qualitatively predicts the enhanced kinetics for nonequilibrium systems shown in Fig.~3B in the main text: Homogeneous systems show substantial enhancement only for the diffusion coefficient, while the most substantial contribution to the enhanced kinetics in the inhomogeneous case results from the apparent nucleation barrier, as indicated in \figref{fig:fig_factor}A. \begin{figure}[!h] \centering \includegraphics[scale=0.8]{fig_factor_FLEX.pdf} \caption{\textbf{FLEX predictions for the factors governing the nucleation kinetics at a NESS.} Comparison of the ratio between nonequilibrium and equilibrium values for \textbf{(A)} the monomer density $\rho_1$, \textbf{(B)} the diffusion coefficient $D^*$, \textbf{(C)} the Zeldovich factor $\Gamma$, and \textbf{(D)} the apparent nucleation barrier height $\beta\Delta F^*$ at $S = 1.27$.
FLEX predictions are made at the same $\beta\Delta\mu$ and $\rho_\text{v}$ at $S=1.27$ for each simulation point. Nonequilibrium homogeneous (blue) and inhomogeneous (orange) systems share the common coexistence condition $\beta\Delta\mu_{\text{coex}} = 1.87$.} \label{fig:fig_factor_FLEX} \end{figure} \end{document}
\section{Introduction} In 2010, antihydrogen atoms were trapped at CERN \cite{andr:10a,andr:11a,andr:11b,gabr:12}. Since then, CERN's ALPHA collaboration has reported initial experimental results on the two most commonly discussed symmetry tests with antihydrogen: spectral tests of CPT \cite{amol:12a}; and gravity freefall tests of the weak equivalence principle \cite{amol:13}. Future experiments are expected to obtain much more precise results \cite{kell:08,char:11a,ashk:12,amol:13,zhmo:13,cesa:09a,hami:14}. A less commonly discussed test of fundamental symmetries concerns the electric charge of antihydrogen. Assuming that atomic hydrogen (H) is itself charge neutral, CPT demands that antihydrogen also be charge neutral. While there do not appear to be extraordinarily precise measurements on H itself, other normal-matter atoms and molecules are known to be neutral to remarkable precision \cite{bres:11}: to about $10^{-21}e$ for diverse species such as He, H$_2$, and SF$_6$, where $e$ is the elementary charge. The methods used in these studies are unsuitable for antihydrogen as they require macroscopic quantities of atoms or molecules; to date, only about $500$ antihydrogen atoms have been trapped and detected, and there are no prospects for trapping macroscopic quantities. Charge neutrality of antimatter atoms, as well as of matter atoms, is also expected from the condition for quantum anomaly cancellation, which is required for theoretical consistency in quantum field theory \cite{quig:97}. ALPHA recently reported \cite{amol:14a} a bound on the antihydrogen charge $Qe$ of $Q=(\QQ\pm\samperr\pm\syserr)\times 10^{-8}$, where the first error arises from counting statistics, and the second error is estimated based on systematic effects. This bound was based on a search for the deflection of putatively charged antihydrogen atoms by an applied electric field. Here we describe a different technique, related to stochastic acceleration \cite{stur:66,tsus:09}, to measure the charge. This technique uses randomly time-varying electric fields to eject putatively charged antihydrogen atoms from an ALPHA-style trap. Current measurements using this technique \cite{amol:14a} set a bound on $Q$ of about $2\times 10^{-7}$, an order of magnitude looser than that found by the deflection technique, but this stochastic acceleration technique can easily be extended to much better precision, perhaps ultimately reaching $10^{-12}$. \section{Apparatus} ALPHA traps antihydrogen atoms by producing and capturing them in a minimum-B trap \cite{prit:83}. The trap confines those anti-atoms whose magnetic moment $\muHbarb$ is aligned such that they are attracted to the minimum of the trap magnetic field $\mathbf{B}$, and whose kinetic energy is below the trap well depth, $\muHbar(|\mathbf{B}|_\mathrm{Wall}-|\mathbf{B}|_\mathrm{Center})$. In ALPHA (see Fig.~\ref{Fig1SchematicFields}a), this magnetic minimum is created by an octupole magnet which produces transverse fields of magnitude $1.54\,$T at the trap wall ($\Rw=22.3\,$mm), and two mirror coils which produce axial fields of $1\,$T at their centers. The mirror coil centers are symmetrically located at distances $z=\pm 137\,$mm from the trap center at $z=0$ (see Fig.~\ref{Fig1SchematicFields}b). These fields are superimposed on a uniform axial field of 1T produced by an external solenoid \cite{bert:06,andr:08b}; taken together, they create a trap of depth $540\,$mK \cite{amol:14a}. \begin{figure}[tb!] 
\centerline{\includegraphics[width=3.35in]{Fig1_SchematicFields.pdf}} \caption{{\bf Experimental summary} (a) A schematic of the antihydrogen production and trapping region of the ALPHA apparatus, showing the cryogenically cooled cylindrical Penning-Malmberg trap electrodes, and the mirror and octupole magnet coils. The ALPHA positron source (not shown) is towards the right, and the Antiproton Decelerator (not shown) is towards the left. (b) The on-axis magnetic field $B$ as a function of $z$. (c) Typical on-axis electrostatic potentials used in the prior \cite{amol:14a} deflection-style experiments. (d) Values of the electrostatic potential at $r=0$ (black solid line) and at $r=0.6\Rw=13.6\,$mm (black dashed line) for a possible stochastic acceleration measurement. Here, biases alternating between $\pm 350\,$V are applied to consecutive electrodes in the region between the magnetic field maxima; all other electrodes are kept at $0\,$V. Also shown are the $r=0$ and $r=0.6\Rw$ potentials when pairs of contiguous electrodes are joined and alternated at $\pm 350\,$V (orange dashed line and orange dashed-dotted line). (e) Axial component of the electric field for the potentials in (d). (f) Radial component of the electric fields for the potentials in (d), evaluated at $r=0.6\Rw$. Graphs (e) and (f) use the same line identification scheme as (d).} \label{Fig1SchematicFields} \end{figure} The general methods by which anti-atoms were produced from antiprotons and positrons, and then captured in the ALPHA trap, are described in Refs.~\citenum{andr:10a,andr:11a,andr:11b,andr:11d}; in this article we concentrate only on the last two phases of the ALPHA experiment, where anti-atoms were first held in static magnetic fields for times up to $1000\,$s, and then released from the minimum-B trap by gradually turning off the octupole and mirror fields. The escaping anti-atoms were then detected with about $60$\% efficiency \cite{andr:12} when they annihilated on the trap wall. \section{Charged Particle Deflection Experiments} In the previously reported deflection experiments \cite{amol:14a}, the anti-atoms were subjected to electric fields similar to those derived from the potential shown in Fig.~\ref{Fig1SchematicFields}c. Together with the magnetic field, these electric fields form a well in which the on-axis potential energy of an anti-atom with a putative charge $Qe$ is given by \begin{equation} U(z)=\muHbar B(z)-\frac{QeE}{k_b}z, \label{potential} \end{equation} where all energies are specified in units of kelvins, $\muHbar=0.67\,$K/T is the normalized antihydrogen magnetic moment \cite{amol:12a}, $k_b$ is the Boltzmann constant, and where we approximate the electric field inside the trap by a constant value $E$. As $B(z)$ has a minimum at $z=0$, this well also has a minimum at $z=0$ when $Q=0$. Consequently, the annihilations that result from the last-phase magnet shutoff will be centered around $z=0$. If $Q\neq 0$, then the well minimum will shift \cite{amol:14a} by an amount \begin{equation} \avzD \propto QE/\beta, \label{DeflectionScaling} \end{equation} where we approximate the magnetic field around the minimum as $B(z)=B_0+\beta z^2$. To set the deflection-based $Q$ bound quoted above \cite{amol:14a}, ALPHA used measurements of the experimental $\avzD$, coupled with extensive computer simulations to determine Eq.~(\ref{DeflectionScaling})'s scaling constant. The scaling in Eq.~(\ref{DeflectionScaling}) suggests three methods of tightening the bounds on $Q$: (i) Increasing $E$. 
Unfortunately, arcing and other experimental concerns limit any increase in $E$ to factors in the range of 2 to 4. (ii) Decreasing $\beta$. The increase in $B(z)$ going from the trap center to the trap axial ends, which is proportional to $\beta$, sets the trap depth. Currently, the trap depth of $540\,$mK cannot be lowered without anti-atoms escaping because many of the anti-atoms are only shallowly trapped \cite{andr:11a,amol:12}. Laser cooling of the trapped anti-atoms \cite{donn:13} has the potential to lower the anti-atom temperature to about $20\,$mK, which would allow us to lower the post-cooling trap well depth to perhaps $30\,$mK without losing too many anti-atoms; $\beta$ would decrease correspondingly. (iii) Obtaining a better experimental determination of the measured $\avzD$. The error in $\avzD$ is set by counting statistics. ALPHA utilized a sample of $386$ anti-atoms collected over two years for its determination \cite{amol:14a} of $Q$. As the statistical error scales as the inverse square root of the number of samples, it would be difficult to decrease this error without significantly increasing the trapping rate. Taken together, these improvements have the potential to tighten the bound on $Q$ by less than a factor of one hundred. Consequently, it is worth investigating other methods to determine $Q$. \section{Stochastic Acceleration Methodology} \subsection{Charged Particle Ejection} The electric fields $E$ effectively remove any inadvertently mirror-trapped antiprotons from the system (cf.\ Table~1 of Ref.~\citenum{amol:12}). However, other particles, such as putatively charged antihydrogen atoms, can also be ejected, even for charges much smaller than the unit charge. For the potential in Fig.~\ref{Fig1SchematicFields}c, the wells predicted by Eq.~(\ref{potential}) will cease to exist for any anti-atom with $|Q|\gtrsim 2\times 10^{-6}$. Further, anti-atoms with a charge below that required for the well to disappear, i.e., $0<|Q|<2\times 10^{-6}$, will still be accelerated by the application of the electric fields. They can be ejected if the extra increment of energy they gain is sufficient for them to climb over the trap walls. In the previously reported experiment \cite{amol:14a}, the fields were not static; they cycled between potentials similar to the two shown in Fig.~\ref{Fig1SchematicFields}c. Altogether, the fields underwent nine transitions. (The first four transitions used half-strength fields.) The transition timescales ($\tstep\approx 2\,$ms) were short compared to the anti-atom orbit timescales (typically $\lesssim 10\,$ms), and the orbit timescales were comparable to the time intervals between transitions ($12\,$ms). Thus, the accelerations at each transition were non-adiabatic, and, at each transition, the anti-atoms received a kinetic energy ``kick.'' These kicks were effectively random; an individual kick might have increased or decreased the anti-atom's energy. Depending on $Q$ and the vagaries of the stochastic process, the kicks might give the anti-atoms enough energy to escape the well. This process is illustrated by the simulation results in Fig.~\ref{EjectionSim}. Large numbers of anti-atoms were ejected for large $Q$, small numbers for small $Q$, and very few anti-atoms were lost when $Q=0$.
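To build intuition for this kick process, a highly simplified random-walk sketch (in Python) is given below. It is not the simulation used for Fig.~\ref{EjectionSim}; the potential change per transition, the number of kicks, and the initial energy distribution are assumed, illustrative choices only.
\begin{verbatim}
# Toy random-walk model of stochastic heating (illustrative only).
import random

K_B = 1.381e-23          # Boltzmann constant, J/K
E_CHARGE = 1.602e-19     # elementary charge, C

def fraction_ejected(Q, delta_phi_volts, n_kicks, n_atoms=2000,
                     well_depth_kelvin=0.54, seed=0):
    """Fraction of anti-atoms whose energy exceeds the well depth after
    n_kicks random kicks of size |Q| e Delta_phi (expressed in kelvin)."""
    rng = random.Random(seed)
    kick = abs(Q) * E_CHARGE * delta_phi_volts / K_B   # kick size in kelvin
    ejected = 0
    for _ in range(n_atoms):
        energy = rng.uniform(0.0, well_depth_kelvin)   # assumed initial energy
        for _ in range(n_kicks):
            energy += kick if rng.random() < 0.5 else -kick
            energy = max(energy, 0.0)                  # energy cannot go negative
            if energy > well_depth_kelvin:
                ejected += 1
                break
    return ejected / n_atoms

# Illustrative only: nine transitions and an assumed ~170 V potential change.
print(fraction_ejected(Q=2e-7, delta_phi_volts=170.0, n_kicks=9))
print(fraction_ejected(Q=0.0,  delta_phi_volts=170.0, n_kicks=9))
\end{verbatim}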
Since approximately half of the simulated anti-atoms were lost at $Q=2\times 10^{-7}$, while experimentally ALPHA observed trapped anti-atoms at the end of these cycles, ALPHA set an experimental limit in the neighborhood of $|Q|<2\times 10^{-7}$ for the anti-atom charge \cite{amol:14a}. However, the absence of an absolute trapping rate measurement makes setting a precise limit impossible for this current dataset using this methodology. \subsection{Experimental Design} We propose \cite{amol:14a,baqu:13} to remedy this problem, and make an improved determination of $Q$, by measuring the trapping rate with and without the application of stochastic electric fields. Specifically, we would measure the number of anti-atoms remaining in the trap (by turning off the trapping magnetic fields) per trapping attempt, or trial, after a set of acceleration cycles were applied for a time $\THeat$, or else after the trap was kept quiescent (no accelerating fields) for the same time $\THeat$. By ensuring that both types of trials hold the anti-atoms for equal times, we would take into account any effects from vacuum annihilation or other anti-atom loss mechanisms. To ameliorate the effects of long-term drifts, we would alternate the two types of trials. If we were to observe that the acceleration-on rate was lower than the acceleration-off rate, we could use analytic or numerical means to estimate the $Q$ that would cause the observed difference. If, as is more likely, we observe no statistically significant difference between these rates, we could use these same means to calculate the $Q$ that would have caused a measurable difference; a good criterion would be to find the critical $Q$ which would cause half the anti-atoms to be lost. In either case, this stochastic acceleration methodology would place a value or bound on $Q$. \begin{figure}[h] \centerline{\includegraphics[width=3.35in]{Fig2_EjectionSim.pdf}} \caption{{\bf Clearing simulation results} Simulated axial annihilation locations $z$ versus annihilation time for different values of $Q$: (a) $Q=2\times 10^{-7}$, (b) $Q=-3\times 10^{-8}$, and (c) $Q=0$. Each dot represents one annihilation. The colored vertical lines indicate the start time for different manipulations: the half-strength clearing cycles begin at the leftmost green line, the full-strength clearing cycles begin at the middle blue line, and the final $E$ field (used in the deflection measurement) is applied at the rightmost magenta line. The multiple clumps in (a) and (b) reflect multiple oscillations of the anti-atoms in the potential well of Eq.~(\ref{potential}). The very small losses in (c) are due to the gradual depopulation of the quasi-trapped states \cite{coak:05,andr:10a,amol:12}. In (a), 54\% of the charge is lost from $t=0$ to the end of the plot; in (b), 2.4\% of the charge is lost; in (c), 0.3\% of the charge is lost.} \label{EjectionSim} \end{figure} \subsection{Approximate Determination of $Q$} Let us take $\pm\Deltapot$ to be the typical electrical potential change experienced by an anti-atom during one kick. Such a potential change would result in a center-of-mass kinetic energy change of $\DeltaEk \sim \pm Qe\Deltapot$ for an anti-atom with a charge $Q$. Following the classic random walk scaling, the typical total energy gained after $N$ kicks would be on the order of $|Q|e\Deltapot\sqrt{N}$.
If this energy exceeds the trap depth $\Depth$, a condition approximately met if \begin{equation} |Q| \gtrsim \frac{\Depth}{e\Deltapot}\sqrt{\frac{1}{N}}, \label{StochasticLimit} \end{equation} then the kicks will drive a typical anti-atom of charge $Q$ out of the trap. \subsection{Advantages of Stochastic Acceleration} This stochastic acceleration methodology has several advantages over the deflection methodology employed in Refs.~\citenum{amol:14a,baqu:13} and could ultimately lead to a much better bound. First, instead of searching for a small average deflection, which requires hundreds of anti-atoms, the stochastic acceleration methodology relies on the simpler observation that anti-atoms have survived the acceleration cycles; thus, the test can reach statistical significance with only a few tens of anti-atoms. Second, the deflection methodology requires that the electric field everywhere point in the same direction over the entire length of the trap. The stochastic acceleration methodology has no such requirement, and the potential can be inverted over a short distance, resulting in much larger electric fields. For instance, if the potential oscillated between that shown in Fig.~\ref{Fig1SchematicFields}d, and its inverse, the average change in the field would be approximately $10\,\mathrm{V}/\mathrm{mm}$, a 20-fold increase over the fields obtained from the potential in Fig.~\ref{Fig1SchematicFields}c. Third, and most important, we can improve the sensitivity of the measurement by taking advantage of the $\sqrt{N}$ scaling of Eq.~(\ref{StochasticLimit}) on the number of acceleration cycles. As anti-atoms can be held for a very long time \cite{andr:11a}, $N$ can be very large. We note, however, that this methodology does not yield the sign of any observed putative charge. \subsection{Numerical Determination of $Q$} While we can analytically estimate the critical $Q$ from Eq.~\ref{StochasticLimit}, our estimate would rely on a parameter, $\Deltapot$, which can only be approximated analytically. Furthermore, the random walk in energy-space model on which Eq.~\ref{StochasticLimit} is based has faults, most obviously that it would allow energies to become negative. Thus, we use numeric simulations to obtain a more precise estimate of the critical $Q$. These simulations model the anti-atom equation of motion, \begin{equation} M\frac{d^2\boldsymbol{\pos}}{d t^2}=\nabla [\muHbarb\cdot\mathbf{B}(\boldsymbol{\pos},t)] + Qe[\mathbf{E}(\boldsymbol{\pos},t)+\dot{\boldsymbol{\pos}}\times\mathbf{B}(\boldsymbol{\pos},t)], \label{EQM} \end{equation} where $\boldsymbol{\pos}$ is the center of mass position of the anti-atom, and $E(\boldsymbol{\pos},t)$ and $B(\boldsymbol{\pos},t)$ are the position and time dependent electric and magnetic fields. For the low-field-seeking anti-atoms modeled here, the magnetic moment $\muHbarb$ and $\mathbf{B}$ are anti-aligned. Detailed descriptions of similar simulations and various benchmarking tests have been given in prior publications \cite{amol:12, amol:13,amol:14a}. The results of a typical simulation are shown in Fig~\ref{Simulation}. The simulation ran for $\THeat=100\,$s for anti-atoms with a putative charge of $Q=5\times 10^{-10}$, and with the $\DeltaPhi=\pm 350\,$V accelerating potentials shown in Fig.~\ref{Fig1SchematicFields}d. On average, the potentials were inverted every $\tstep=0.3\,$ms, but these switching times were randomized with a standard deviation of $\tsigma=60\,\mu$s. 
(The reason for the randomization, which follows a uniform distribution, will be discussed later.) During the $100\,$s that the simulation ran, approximately $93$\% of the anti-atoms were forced out of the trap and annihilated on the trap wall. Figure~\ref{Simulation}a shows the $z$ locations of these annihilations. Most of the anti-atoms annihilated near the axial ends of the trap; either on the axial step present at the end, or in the potential well ``holes'' (see Fig.~4 of Ref.~\cite{faja:04} or Ref~\cite{bert:05}) created by the interaction between the octupole and mirror coils. As expected, Fig.~\ref{Simulation}b shows that deeply trapped anti-atoms (anti-atoms with relatively little initial kinetic energy) take the longest to acquire enough energy to escape the trap. Figure~\ref{Simulation}c plots the cumulative escaped fraction as a function of time. At about $\thalf=11.6\,$s, half the anti-atoms have escaped. As the number of simulated anti-atoms is large ($\sim 3000$), the error in determining $\thalf$ is not large; using Greenwood's formula \cite{lawl:03,empi:13}, we can calculate the interval, $10.4\,{\rm s}<\thalf<12.8\,{\rm s}$, where the true $\thalf$ can be found with at least 95\% confidence. \begin{figure}[h] \centerline{\includegraphics[width=3.4in]{Fig3_Simulation.pdf}} \caption{{\bf Typical simulation results} Stochastic acceleration simulation results for $Q=5\times 10^{-10}$, $\DeltaPhi=\pm 350\,$V, $\tstep=0.3\,$ms, and $\tsigma/\tstep=0.2$. (a) The axial location $z$ of the annihilations shows that anti-atoms tend to escape near the octupole ends; particles shown at $t=100\,$s correspond to anti-atoms that remain trapped after the end of the acceleration cycles. Plot (b) shows the initial (blue pluses) and final (red x's) energies of the anti-atoms when they annihilate on the trap wall. Plot (c) shows the cumulative distribution function (CDF) of the probability of escape of the anti-atoms.} \label{Simulation} \end{figure} As shown in Fig.~\ref{EscapeFract}, a large number of simulations similar to those in Fig~\ref{Simulation} can be compiled into a parameter scan showing the fraction of anti-atoms that have escaped as a function of their charge $Q$ for a fixed $\THeat$. Alternately, a simulation parameter survey can be used to find $\thalf$ as a function of $Q$, and the results compiled into Fig.~\ref{StochasticScaling}. \begin{figure}[h] \centerline{\includegraphics[width=3.3in]{Fig4_EscapeFraction.pdf}} \caption{{\bf Escaped Fraction} Fraction of anti-atoms with assumed charge Q that have escaped for $\THeat=500\,$s (black circles) and $\THeat=100\,$s (dark red squares.) The error bars are determined from the CDF bounds in Fig.~\ref{Simulation}c and establish a 95\% confidence interval for the true escape probability. The other simulation parameters were $\DeltaPhi=\pm 350\,$V, $\tstep=0.3\,$ms, and $\tsigma/\tstep=0.2$.} \label{EscapeFract} \end{figure} \begin{figure}[h] \centerline{\includegraphics[width=3.3in]{Fig5_StochHeatPlot.pdf}} \caption{{\bf Stochastic scaling} The acceleration time $\thalf$ for which half the anti-atoms would be expelled as a function of $Q$, as found by simulation. 
Simulations are shown for our standard trap conditions (black solid lines and diamonds; $\Depth=540\,$mK, $\tstep=0.3\,$ms), with laser cooling (red dashed lines and circles; $\Depth=30\,$mK, $\tstep=1.3\,$ms and an anti-atom temperature $T=20\,$mK), and with laser and adiabatic cooling (blue dot-dashed lines and squares; $\Depth=3\,$mK, $\tstep=4\,$ms and an anti-atom temperature $T=2\,$mK). In all cases, the other simulation parameters were $\DeltaPhi=\pm 350\,$V and $\tsigma/\tstep=0.2$, and the statistical error at each point is smaller than the point symbol. Also shown are lines scaling as $Q\propto (\thalf)^{-0.56}$, which describe the relationship between $Q$ and $\thalf$. The critical $Q$ can be found by setting $\thalf$ to the total acceleration time $\THeat$, and then finding the corresponding $Q$.} \label{StochasticScaling} \end{figure} From Fig.~\ref{StochasticScaling}, we find numerically that $Q$ and $\thalf$ scale as $Q\propto (\thalf)^{-0.56}$. This differs slightly from the diffusive prediction of Eq.~\ref{StochasticLimit}, which suggests $Q\propto (\thalf)^{-0.50}$. Part of the difference between these two scaling relations stems from the hard upper energy cutoff assumed in Eq.~\ref{StochasticLimit}; in the analytic calculation, anti-atoms are assumed to annihilate immediately on reaching the magnetic well depth $\Depth$. The simulations, however, show that there exist quasi-stable orbits with total energy greater than $\Depth$. Anti-atoms on these orbits will remain trapped for some time \cite{coak:05,andr:11a, amol:12}. Evidence for these ``quasitrapped'' anti-atoms can be seen in Fig.~\ref{Simulation}b, which shows that the final energies of many simulated anti-atoms are well above the trap depth of $\Depth=540\,$mK. If the simulations are prematurely halted when the anti-atoms' energies exceed $\Depth$ rather than when they annihilate on the trap wall, the simulation scaling changes to $Q\propto (\thalf)^{-0.52}$, significantly closer to the analytic prediction \cite{baqu:13}. The remaining difference may be due to the lack of a zero bound in the analytic calculation, and the existence of quasi-periodic orbits in the simulation. \subsection{Comparison of Stochastic Acceleration with Resonant and Autoresonant Acceleration} Since our stochastic acceleration scales with the square root of the number of acceleration cycles, it is not as efficient as resonant or autoresonant (swept frequency) acceleration \cite{faja:01a}. Unfortunately, we cannot use either of the latter approaches. Resonant acceleration, i.e.\ simply driving the anti-atoms with an electric field that oscillates at their predicted bounce frequency, requires that a single frequency be resonant with all anti-atoms. While the anti-atoms do undergo an approximately sinusoidal oscillation in $z$ (see Fig.~C1a in Ref.~\citenum{amol:12}), the frequency of this oscillation is not unique. Numerical simulations show that the oscillation frequency increases with energy; the system is ``stiff.'' This would appear to make the system a candidate for an autoresonant drive. However, autoresonance is best at capturing and exciting initially stationary particles \cite{faja:99d}. In our case, the anti-atoms are already excited and few would be captured. Moreover, because of exchanges between parallel and perpendicular energy, the axial oscillation frequency exhibits drifts and shifts (see Fig.~\ref{AxialFreq} for examples in an undriven system).
Any anti-atoms that had been captured in a trapping bucket would quickly escape. While we have made no formal study, the longitudinal motion generally appears to be chaotic in the long term. \begin{figure}[h] \centerline{\includegraphics[width=3.3in]{Fig6_AxialFrequency.pdf}} \caption{{\bf Axial oscillation frequency} Typical axial oscillation frequencies for undriven anti-atoms. Note that while the frequency generally appears chaotic, there are periods of stability. This is particularly true for the frequency of low energy anti-atoms, such as that shown by the bottom (green) curve in this graph. } \label{AxialFreq} \end{figure} \subsection{Stochasticity} Stochastic acceleration is essentially a random walk process, and thus requires an element of randomness in the relation between the drive frequency and oscillation frequency. In many cases, the already-present axial frequency shifts are sufficient. However, Fig.~\ref{AxialFreq} shows that there can be long-lasting periods of frequency stability, particularly for low energy anti-atoms, and the anti-atoms may not heat during these periods. To introduce additional randomness into the system, we modulate $\tstep$ by a random function with uniform distribution and standard deviation $\tsigma$. Figure~\ref{Sigma-Escape} plots the relation between the median escape time, $\thalf$, and $\tsigma$. It shows that while stochastic acceleration occurs even in the absence of this modulation, the escape time $\thalf$ decreases as the modulation $\tsigma$ increases, reaching a plateau once $\tsigma$ is approximately 20\% of the switching time $\tstep$. Note that once randomization is introduced in the switching times, it is not necessary to randomize the voltage levels between which the electrode potentials switch. On average, doing so only reduces $\DeltaPhi$, thereby decreasing the energy kicks that the anti-atoms receive, and reducing the acceleration. \begin{figure}[h] \centerline{\includegraphics[width=3.3in]{Fig7_sigma.pdf}} \caption{{\bf Switching time randomization} Median escape time $\thalf$ as a function of normalized drive randomization time $\tsigma/\tstep$, for $Q=1\times 10^{-9}$, $\DeltaPhi=\pm 350\,$V, $\tstep=0.3\,$ms.} \label{Sigma-Escape} \end{figure} \subsection{Switching Time} The switching time $\tstep$ must be optimized to obtain the shortest mean escape times and hence the best bound on $Q$. In an experiment running for a fixed total time $\THeat$, the number of inversions $N$ is inversely proportional to $\tstep$, thus suggesting a short $\tstep$. However, if $\tstep$ is too short, anti-atoms will not have time to respond to the electric field before the electric field is inverted, and the size of the energy kick for each inversion will diminish. The optimal switching time is expected to be comparable to the time it takes an anti-atom to traverse the distance between two oppositely biased electrodes. For an anti-atom trapped near the top of a $\Depth=540\,$mK well and for the standard electrode configuration, this time is about $0.3\,$ms. Note that this timescale is much shorter than the orbital periods predicted by Fig.~\ref{AxialFreq}, as here the anti-atoms need only traverse the distance between field reversals, typically between adjacent electrodes, not the entire trap. (In the experiments to date, discussed in the Introduction, the fields did extend across the entire trap, and the relevant time scale was the orbital bounce period.)
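As a rough cross-check of this traversal-time argument, the short Python sketch below evaluates the transit time of an anti-atom near the top of the well across a single electrode; the electrode length used ($\sim 2.5\,$cm) is an assumed, illustrative value, not an ALPHA specification.
\begin{verbatim}
# Rough transit-time estimate; the electrode length is an assumed value.
import math

K_B = 1.381e-23      # Boltzmann constant, J/K
M_HBAR = 1.674e-27   # antihydrogen mass, kg

def transit_time(well_depth_kelvin, electrode_length_m=0.025):
    speed = math.sqrt(2.0 * K_B * well_depth_kelvin / M_HBAR)  # m/s
    return electrode_length_m / speed                          # s

print(transit_time(0.540))   # roughly 0.3 ms for the 540 mK well
print(transit_time(0.003))   # a few ms for the 3 mK well
\end{verbatim}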
This optimal time is confirmed in Fig.~\ref{Tstep}, which graphs the relation between $\thalf$ and $\tstep$. The optimal (shortest) $\thalf$ depends on the experimental configuration. For $\Depth=540\,$mK and the standard electrode configuration, the optimal $\tstep$ is approximately $0.3\,$ms. As expected, electrically joining adjacent electrodes (see Fig.~\ref{Fig1SchematicFields}d) doubles the optimal $\tstep$. Cooling the anti-atoms by a factor of $100$ increases the optimal $\tstep$ by the expected factor of around $10$. This is unavoidable, but unfortunate because it decreases the number $N$ of kicks that fit into a fixed acceleration time $\THeat$. However, the benefits from the lower trapping potential outweigh the disadvantages of the decreased number of kicks, and cooling is, on net, beneficial (see Fig.~\ref{StochasticScaling}). Fortunately, for a particular configuration, one does not have to hit the optimum exactly. For $\Depth=540\,$mK and the standard electrode configuration, for example, the escape time $\thalf$ varies by an acceptable factor of $2$ over a mean switching time $\tstep$ range that varies by a factor of around $3$. \begin{figure}[h] \centerline{\includegraphics[width=3.3in]{Fig8_SwitchTime.pdf}} \caption{{\bf Switching time optimization} Median escape time $\thalf$ as a function of $\tstep$, for anti-atoms heated with $\DeltaPhi=\pm 350\,$V, and $\tsigma/\tstep=0.2$ for three experimental configurations. The black circles correspond to $Q=10^{-9}$ and to potentials alternating between contiguous electrodes like those shown in Fig.~\ref{Fig1SchematicFields}d by the black line; this is the default configuration generally used elsewhere in this paper. The orange squares correspond to the same $Q$, but with contiguous electrodes joined together and potentials varying between pairs of electrodes, like those shown in Fig.~\ref{Fig1SchematicFields}d by the orange dashed-dotted line. In both these cases, the trap depth is $\Depth=540\,$mK. The blue diamonds assume colder anti-atoms ($T=2\,$mK) in a shallower trap ($\Depth = 3\,$mK), and $Q=5\times 10^{-11}$.} \label{Tstep} \end{figure} \section{Conclusion} In earlier work, ALPHA established \cite{andr:11a} that antihydrogen can be trapped for at least $1000\,$s; with expected improvements to the vacuum, it is not unreasonable to assume that anti-atoms could be trapped for $10,000\,$s. This would allow time for $N\approx 10^7$ acceleration cycles; simulations (see Fig.~\ref{StochasticScaling}) then suggest that we could bound $|Q|$ to $3\times 10^{-11}$. With laser-cooled anti-atoms at $20\,$mK \cite{donn:13} we could reduce the trapping potential to perhaps $\Depth=30\,$mK, while still retaining most of the anti-atoms, and the bound would drop to $3\times 10^{-12}$. Adiabatic expansion cooling of the anti-atoms might reduce their temperature by a further factor of ten, yielding a bound around $ 10^{-12}$. This bound approaches the limit where antihydrogen polarization effects, studied in Ref.~\citenum{baqu:13}, need to be taken into account. Other systematic effects are likely to be small, as the relevant experimental parameters (the applied electric potentials and magnetic fields) are well controlled, and, from Eq.~(\ref{StochasticLimit}), expected to enter into the result only linearly. We note that the conducting tubes in which an anti-atom gravity experiment would take place would exhibit anomalous ``patch'' electric fields \cite{camp:91,amol:14a}, and this could cause a significant systematic error \cite{witt:68}.
Indeed, a measurement of $Q$ on the order suggested here may be necessary for future precision gravity measurements \cite{kell:08,char:11a,zhmo:13,hami:14}. We also note that the charge anomaly of the antiproton, $\big||q_{\pbar}|-e\big|/e$, is known \cite{beri:12,hori:11,gabr:99a} to be less than $7\times 10^{-10}$ by measurements \cite{hori:06} on $\pbar{\mathrm{He}}^{+}$, while the charge anomaly of the positron \cite{fee:93,beri:12} is less well known: $|(q_{e^+}-e)/e|< 2.5 \times 10^{-8}$, determined by measurements of the positron cyclotron frequency and the positronium Rydberg constant \cite{hugh:92}. Thus, under the assumption that the positron and antiproton charges add exactly to form the charge of the antihydrogen atom, an improved measurement of the antihydrogen charge would improve the bound on the positron charge anomaly. \section{Acknowledgements} This work was supported by the DOE, NSF, LBNL-LDRD (USA); the experimental data was acquired by the ALPHA collaboration with additional support from: CNPq, FINEP/RENAFAE (Brazil); ISF (Israel); FNU (Denmark); VR (Sweden); NSERC, NRC/TRIUMF, AITF, FQRNT (Canada); and EPSRC, the Royal Society and the Leverhulme Trust (UK). The ALPHA collaboration is grateful for the efforts of the CERN AD team, without which the experimental data could not have been collected. This article constitutes part of the Ph.D work of MB-R.
\section{Introduction} The Muon $g$-2 Experiment \cite{TDR}, currently in its commissioning phase at Fermi National Accelerator Laboratory (Fermilab), aims to measure the value of the anomalous magnetic moment of the muon to unprecedented accuracy. This experiment, and other similar experiments involving precessions due to magnetic dipole moments and possibly electric dipole moments, rely on a highly polarized beam for success. Hence, understanding the effects of phase space variables and field perturbations on polarization is necessary for these high precision experiments. In this work, the Muon $g$-2 beam delivery system at Fermilab is used as an example and the results of computer simulations of the effects of beam optics on polarization are examined. To begin, we note that the time-dependent behavior of particle spin under the influence of electric and magnetic fields can be described by the Thomas-BMT equation \cite{Thomas}, \cite{BMT}: \begin{equation} \frac{d \vec{S}}{dt} = \frac{e}{\gamma_r m} \vec{S} \times \left[ (1 + a \gamma_r) \vec{B}_{\perp} + (1 + a) \vec{B}_{\parallel} + \left(a \gamma_r + \frac{\gamma_r}{\gamma_r + 1} \right) \frac{\vec{E} \times \vec{\beta}}{c} \right] \label{eq:ThomasBMT} \end{equation} where $a$ is the anomalous magnetic moment, defined as \begin{equation} a = \frac{g-2}{2} \end{equation} and the values of the fields are those in the laboratory frame, whereas the spin vector $\vec{S}$ is in the rest frame of the particle. Here, $g$ is the gyromagnetic factor which, for a pure Dirac particle, would have a value of $g=2$. If we examine the special case where there is no electric field and the magnetic field is perpendicular to the motion of the charged particle, we can reduce Eq.~\ref{eq:ThomasBMT} to \begin{equation} \frac{d \vec{S}}{dt} = \frac{e}{\gamma_r m} (1 + a \gamma_r) \vec{S} \times \vec{B}_{\perp}. \label{e:bmt} \end{equation} In a high energy beam transport system composed of magnetic elements with predominantly transverse guiding and focusing fields, we can use this result to make approximations of effects on beam polarization due to several common realistic beam characteristics such as transverse beam emittance, momentum spread, and error fields due to magnet misalignments and mispowerings. These can be compared to the natural polarization created when the muons emerge from pion decay during the production of the beam to be used in the experiment. \section{Effects on Polarization} Using a classical description as presented in Eq.~\ref{e:bmt}, one can imagine the spin of a particle being aligned in a particular direction in space and the degree of polarization being the degree to which the spins of an ensemble of particles are aligned with each other. With this picture in mind, we will examine below three particular properties that can affect the spread in spin directions of the particles, and finally we compare these effects to the polarization, or spread in spin directions, of the muon beam produced from pion decay. While the discussions below will generate analytical estimates of the magnitudes of these effects, computer simulations of particle transport along the muon delivery system for the Muon $g$-2 experiment were also performed, using G4beamline \cite{Roberts}. This code can include particle decays as well as keep track of particle polarization.
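As a minimal illustration of Eq.~\ref{e:bmt} (and not a substitute for the G4beamline simulations), the short Python sketch below integrates the reduced precession equation for a single muon in a uniform transverse field and compares the resulting spin rotation with the orbit bending angle; the field value and integration time are illustrative choices.
\begin{verbatim}
# Minimal single-particle sketch of the reduced Thomas-BMT precession.
import numpy as np

E_CHARGE = 1.602176634e-19       # C
M_MU = 1.883531627e-28           # muon mass, kg
A_MU = 1.16592e-3                # muon magnetic anomaly (approximate)

def spin_vs_bend(gamma=30.0, B=1.0, t_total=2.0e-8, n_steps=100000):
    """Integrate dS/dt = (e/(gamma m)) (1 + a*gamma) S x B for uniform B
    along z, and return (spin rotation angle, orbit bending angle)."""
    omega_c = E_CHARGE * B / (gamma * M_MU)          # bending (cyclotron) rate
    S = np.array([1.0, 0.0, 0.0])                    # spin along initial momentum
    Bvec = np.array([0.0, 0.0, B])
    dt = t_total / n_steps
    pref = (E_CHARGE / (gamma * M_MU)) * (1.0 + A_MU * gamma)
    for _ in range(n_steps):
        S = S + dt * pref * np.cross(S, Bvec)
        S /= np.linalg.norm(S)                       # keep |S| = 1
    spin_angle = abs(np.arctan2(S[1], S[0]))
    return spin_angle, omega_c * t_total

# The spin angle exceeds the bend angle by roughly a*gamma*theta.
print(spin_vs_bend())
\end{verbatim}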
The simulation used a distribution of particles (protons, muons and pions) produced by targeting 8 GeV (kinetic energy) protons onto the Fermilab AP0 target, and then transported them through the M2 and M3 beam lines to the Delivery Ring, where they circulate 4 times, allowing the pions to decay to muons and the muons to separate from protons by time-of-flight. The resulting muon beam is then extracted and transported to the experiment (in MC-1) via the M4 and M5 lines. A schematic of this roughly 3~km-long system is provided in Fig.~\ref{fig:Beamlines}. \begin{figure}[htb] \begin{center} \epsfig{file=muoncampus.png,height=2.5in} \caption{Diagram of the beam lines for the muon delivery system at Fermilab. Courtesy Brian Drendel.} \label{fig:Beamlines} \end{center} \end{figure} Below, we will examine how momentum spread, beam emittance, and magnet misalignments contribute to the depolarization of a beam and estimate the magnitude of the contribution of each effect on the beam's polarization. Comparisons will then be made with the polarization generated during the creation of the muons from pion decay. \subsection{Momentum Spread} From Eq.~\ref{eq:ThomasBMT} we can see that the spin vector of a particle passing through a bending magnet of length $\ell$ and field strength $B$ will precess through an angle in the bending plane of amount \begin{equation} \Delta \phi = \frac{d\vec{S}}{dt}\cdot\frac{\ell}{v} = \frac{eB\ell}{\gamma_r mv} \cdot (1+a \gamma_r) = (1+a\gamma_r)\theta \label{eq:momentum} \end{equation} where $a$ is the anomalous magnetic moment, $\gamma_r$ is the relativistic gamma factor, and $\theta$ is the bending angle of the magnet. We assume here small precession angles anticipating that $a$ and $\theta$ are small quantities (note: for the muon, $a \approx 10^{-3}$). Since the central trajectory of the particle beam bends through an angle $\theta$, then relative to this ideal trajectory the {\em additional} rotation of the spin vector is $\Delta\phi = a\gamma_r\theta$. From now on we will only keep track of these additional rotations with respect to the central trajectory in our analyses.\\ The constituents of a particle beam will have a small range of momenta and hence they will be bent by differing amounts through a bending element, as will their spins. A particle beam with a spread in momenta $\Delta p/p$ will thus have a spread in spin rotation angles upon passing through the bending magnet given by \begin{equation} \Delta \phi_{rms} = a \gamma_r \theta \left( \frac{d \gamma_r}{\gamma_r} \right)_{rms} = a \gamma_r \theta \left( \frac{\Delta p}{p} \right)_{rms} \end{equation} where, for the case of a highly relativistic beam, $d \gamma_r / \gamma_r~\approx~dp / p$. In the case of the $g$-2 M2M3 beamline, we can take $a \approx 0.001$, $\gamma_r \approx 30$, and $\frac{dp}{p} \approx 1.5 \% $. The total bending angle of the beam line is $\theta \approx 10^{\circ}$, which corresponds to approximately 0.2 rad. This allows us to estimate that the rms spread in spin directions will be characterized by $\Delta \phi_{rms} \approx 0.1$~mrad upon passage through these beam lines. Performing the same estimation with the Delivery Ring (changing the value of $\theta$ to $8 \pi$ radians) gives $\Delta \phi_{rms} \approx 11$ mrad. \subsection{Emittance} The size of the beam is another source of depolarizing effects, due to the field strength of focusing quadrupoles varying with distance from the ideal path.
A larger beam contains particles further from the center, and thus, those particles experience a stronger correcting field and greater precession. The general solution for the transverse position of a particle traveling along a beamline is given in terms of the amplitude function, or $\beta$ function, according to \begin{equation} x (s) = A \sqrt{\beta (s)} \sin (\psi + \delta) \label{eq:gensolution} \end{equation} where $\psi(s)$ is the phase advance, related to $\beta(s)$ by $d\psi(s)/ds = 1/\beta(s)$. It is common to describe the beam ensemble in terms of the particle's coordinates $x$ and $x' = dx/ds$ in $x-x'$ phase space. The elliptical paths in this phase space as the particles travel down the beam line can be reduced to circular phase space trajectories by the transformation \begin{align} x &= x \\ y &= \beta x' + \alpha x \end{align} with $\alpha \equiv -\frac12 d\beta/ds$. In our new phase space coordinates the motion can be described as a rotation along with an appropriate scaling factor given by the values of $\beta$ at the starting and ending points as generated by the optical system. This is illustrated in Fig.~\ref{fig:emittance_circles}, and expressed by \begin{equation} \left( \begin{array}{c} x \\ y \end{array} \right) = \sqrt{\frac{\beta}{\beta_0}} \left( \begin{array}{cc} \cos \psi & \sin \psi \\ - \sin \psi & \cos \psi \end{array} \right) \left( \begin{array}{c} x_0 \\ y_0 \end{array} \right) \label{eq:circemit} \end{equation} where $\psi$ represents the phase advance between the two end points. \begin{figure} \centering \includegraphics[width=0.4\linewidth]{Fig4Andrew1.jpg} ~~~~~~~ \includegraphics[width=0.4\textwidth]{Fig4Andrew2.pdf} \caption[Emittance of circular distribution]{The figures above denote the changes in phase for a distribution under the transformation $y=\alpha x + \beta x'$. Under this transformation, the circular distribution rotates (noted by the change in the angle $\psi$) as the beam traverses the lattice.} \label{fig:emittance_circles} \end{figure} As a particle oscillates through the focusing system its angle $x'$ with respect to the ideal trajectory will change in accordance to Eq.~\ref{eq:circemit} and hence the total change in angle in going from location $s_0$ to location $s$ will be $\Delta x' = x'(s) - x'(s_0)$. Using Eq.~\ref{eq:momentum} we see that the spin direction of the particle will rotate through an angle \begin{equation} \Delta \phi = (1+a \gamma_r) \Delta x'. \end{equation} Note that the reference trajectory is not altered by the purely focusing elements, and so the leading factor in this last equation is $1+a\gamma_r$ and not $a\gamma_r$. Using our new transformation to solve for $x'$ in terms of initial conditions $x_0$ and $y_0 = \beta_0 x'_0 + \alpha_0 x_0$, we find that \begin{equation} x' = \frac{1}{\sqrt{\beta_0 \beta}} \left[ -(\sin \psi + \alpha \cos \psi) x_0 + (\cos \psi - \alpha \sin \psi) y_0 \right]. \end{equation} So, in going from location $s_0$ to location $s$, the angle of the particle's trajectory relative to the design trajectory will change by an amount \begin{equation} \Delta x' = x' - x_0' = \frac{1}{\sqrt{\beta_0 \beta}} \left[ - (\sin \psi + \alpha \cos \psi - r \alpha_0 ) x_0 + (\cos \psi - \alpha \sin \psi - r) y_0 \right] \end{equation} where $r \equiv \sqrt{\beta / \beta_0}$. 
By squaring this result and averaging over the ensemble of particles it follows that \begin{multline} \Delta x_{rms}' = \sqrt{\frac{\langle x_0^2 \rangle}{\beta_0} \frac{1}{\beta} \left[ 1 + \alpha^2 - 2r(1 + \alpha_0 \alpha) \cos \psi + r^2 (1 + \alpha_0^2) + 2r(\alpha - \alpha_0) \sin \psi \right]} \end{multline} where we note that $\langle x^2 \rangle = \langle y^2 \rangle$ in our chosen cylindrically symmetric coordinate system. We can immediately identify the quantity $\pi \langle x_0^2\rangle/\beta_0$ as the transverse emittance of the beam in the $x$ degree of freedom. In addition, we can also identify the quantity $(1+\alpha^2)/\beta \equiv \gamma$; the quantities $\beta$, $\alpha$, and $\gamma$ as defined above are known as the Courant-Snyder parameters \cite{Courant}. Thus, the rms value of the changes in particle spin angles when transported between two points in our beam lines can be written as \begin{equation} \Delta \phi_{rms} = (1 + a_{\mu} \gamma_r) \sqrt{\frac{2 \epsilon_{rms}}{\pi}} \sqrt{\left[ \frac{\gamma + \gamma_0}{2} - \frac{1 + \alpha_0 \alpha}{\sqrt{\beta_0 \beta}} \cos \psi + \frac{\alpha - \alpha_0}{\sqrt{\beta_0 \beta}} \sin \psi \right]}. \label{e:rmsSpin} \end{equation} To summarize, a particle transported between locations $s_0$ and $s$ will have its spin direction altered by an amount $\Delta \phi = \phi-\phi_0$. To the extent that the change in $\phi$ is independent of its initial orientation, Eq.~\ref{e:rmsSpin} shows the extent to which the beam will depolarize due to the fact that the particles with larger betatron amplitudes will be steered more strongly by the focusing quadrupoles and hence will have larger precessions. Note that if the transport system begins and ends with identical Courant-Snyder parameters, then our expression reduces to \begin{equation} \Delta\phi_{rms} = 2(1+a_\mu\gamma_r)\sqrt{\epsilon_{rms}\gamma_0/\pi} \;\sin(\psi/2) \end{equation} and if the phase advance through this transport system is $\psi = 2\pi$, then the polarization of the beam returns to its initial value. For our M2M3 delivery line, we can use Eq.~\ref{e:rmsSpin} to estimate the rms spin spread based on the input parameters $a_\mu \approx 0.001$, $\gamma \approx 30$, $\beta_0 = 2.488~$m, $\beta = 5.0328~$m, $\alpha_0~=~0.175$, $\alpha = -0.72738$, and an rms emittance of 7$\pi$ mm$\cdot$mrad (assuming a $40\pi$ (95\%) emittance for a Gaussian beam (\cite{TDR} p.200)); this gives $\Delta \phi_{rms} \approx 2.7$~mrad. Statistical analysis of a sample of 500 muons (with decays turned off) through the M2M3 delivery line gave a value of $\Delta \phi_{rms} \approx 2.8$~mrad, which is close to our numerical estimate. \subsection{Misalignments} Similar to emittance effects, there are consequences to the rms spread of spin due to magnet misalignments. Analytically, we can treat each misalignment separately and sum their effects (for a first-order approximation). This allows us to write the change in slope, $\Delta x'$, as the ratio of the displacement of the magnet to its focal length. \begin{equation} \Delta x' = \frac{d}{F} \end{equation} Summing displacements along a lattice gives the result \begin{equation} \Delta x' = \sum_{i=1}^N \frac{d_i}{F_i}(\cos \psi_i - \alpha_f \sin \psi_i) \sqrt{\frac{\beta_i}{\beta_f}} \end{equation} where $\psi_i$ is the phase advance from the $i$-th misaligned element to the end of the beam transport system.
By squaring and summing terms, it follows that our random displacements would generate an rms change in transverse angle given approximately by \begin{equation} \Delta x_{rms}' = d_{rms} \sqrt{ \left< \frac{\beta}{F^2} \right> \frac{1}{\beta_f} \left( \frac{1}{2} + \frac{\alpha_f^2}{2} \right)} \sqrt{N} = d_{rms} \sqrt{\gamma_f \left< \frac{\beta}{F^2} \right>} \sqrt{\frac{N}{2}} \end{equation} Thus, the spread in the change of spin directions is estimated to be \begin{equation} \Delta \phi_{rms} = (1 + a \gamma_r) d_{rms} \sqrt{\gamma_f \left< \frac{\beta}{F^2} \right>} \sqrt{\frac{N}{2}} \end{equation} Once more using $a \approx 0.001$ and $\gamma_r \approx 30$, the known values for the M2M3 beamline ($\beta = 5~$m and $\alpha = -0.73$), an rms displacement of 0.25 mm, and a typical M2M3 value of $\frac{\beta}{F^2} = 1.3~\mathrm{m}^{-1}$ for 60 quadrupoles gives $\Delta \phi_{rms} \approx 1$ mrad. \subsection{Particle Decays} When pions decay into muons, a preference exists for the spin of the muon to be aligned with the momentum vector \cite{Garwin}. Combley and Picasso \cite{Combley} have shown that in terms of the momentum ratio $x = p_\parallel / p_\pi$, where $p_\parallel$ is the component of the muon's momentum that is in the direction of the decayed pion, the resulting longitudinal polarization ($\Sigma_L$) can be described by the equation \begin{equation} \Sigma_L = \cos \phi = \frac{x (1 + b^2) - 2b^2}{x (1 - b^2)} \label{e:Combley} \end{equation} where $b$ represents the mass ratio of the decay product (muon) to the decay parent (pion): \begin{equation} b = \frac{m_\mu}{m_\pi} = 0.757. \end{equation} We may use this result to estimate the polarization, and hence the spread in the spin angles for comparison with our earlier results. The pions selected from the target station have a typical momentum of approximately 3.09 GeV/$c$. Likewise, upon decay, the resulting muon beam will have a similar though slightly lower momentum. The spread in momenta, however, is dictated by the momentum acceptance of the beam line system. For the Muon g-2 beam delivery system the momentum acceptance is approximately $\pm 2$\%. Thus, the accepted muons should have a momentum spread on the order of $\pm 2$\% of the average momentum of the pion beam. Hence, for our case, we can assume a typical value of $x \approx 0.98$. The results of a plot of Eq.~\ref{e:Combley} are shown in Fig.~\ref{fig:phi_decays}; $x \approx 0.98$ corresponds to an approximate spread of spins of about 0.2 ($\pm 0.1$) rad. This result is also borne out through tracking simulations using G4beamline that include particle decays. \begin{figure}[htb] \begin{center} \epsfig{file=Decays.png,height=3.5in} \caption{The graph shows the relationship between the momentum ratio of pions and the beam polarization. The black line represents the longitudinal beam polarization as found in Eq.~\ref{e:Combley}, while the blue line represents the corresponding spin angle (in radians).} \label{fig:phi_decays} \end{center} \end{figure} \subsection{Simulations} The simulations used five sets of random misalignments for the magnets on the M2M3 delivery system and the Delivery Ring, varying from 100 to 500 $\mu$m for the rms quadrupole magnet misalignment. Particle decays are present in each case. The rms spread of the spin (in radians) is plotted against the rms of the displacement for the $x$ direction. Similar results are obtained for displacements in $y$ as well as for combinations of both.
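(As a cross-check, the analytic estimates above can be reproduced with the short Python sketch below, using the parameter values quoted in the text; the betatron phase advance $\psi$ is not specified above and is left as a free parameter here, so the emittance number is only indicative.)
\begin{verbatim}
# Numerical check of the analytic spin-spread estimates quoted in the text.
import math

a, gamma_r = 0.001, 30.0

# (i) Momentum spread: dphi_rms = a * gamma * theta * (dp/p)
dphi_mom_line = a * gamma_r * 0.2 * 0.015             # M2M3, theta ~ 0.2 rad
dphi_mom_ring = a * gamma_r * 8.0 * math.pi * 0.015   # Delivery Ring, theta = 8*pi

# (ii) Emittance, using Eq. (rmsSpin) with the quoted M2M3 lattice values.
beta0, beta, alpha0, alpha = 2.488, 5.0328, 0.175, -0.72738
gamma0, gamma_f = (1 + alpha0**2) / beta0, (1 + alpha**2) / beta
eps_rms = 7.0e-6 * math.pi                             # m*rad

def dphi_emittance(psi):
    """The phase advance psi is not quoted in the text; it is a free parameter."""
    bracket = (gamma_f + gamma0) / 2.0 \
        - (1 + alpha0 * alpha) / math.sqrt(beta0 * beta) * math.cos(psi) \
        + (alpha - alpha0) / math.sqrt(beta0 * beta) * math.sin(psi)
    return (1 + a * gamma_r) * math.sqrt(2 * eps_rms / math.pi) * math.sqrt(bracket)

# (iii) Misalignments: d_rms = 0.25 mm, <beta/F^2> = 1.3 1/m, N = 60 quadrupoles.
dphi_misalign = (1 + a * gamma_r) * 0.25e-3 * math.sqrt(gamma_f * 1.3) * math.sqrt(60 / 2)

print(dphi_mom_line, dphi_mom_ring, dphi_misalign)     # ~1e-4, ~0.011, ~1e-3 rad
print(dphi_emittance(math.pi / 2), dphi_emittance(math.pi))
\end{verbatim}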
\begin{figure} \centering \includegraphics[height=3.5in]{M2M3_x.png} \caption[M2M3 Polarization in X direction]{Spin spread in the $x$ direction at end of M2M3 Line.} \label{fig:M2M3_X} \end{figure} \begin{figure} \centering \includegraphics[height=3.5in]{DR_x.png} \caption[DR Polarization in X direction]{Spin spread in the $x$ direction at end of each turn in the delivery ring. The error bars are associated with the nearest same-color data points.} \label{fig:DR_X} \end{figure} As shown in Figs.~\ref{fig:M2M3_X} and \ref{fig:DR_X}, the magnitude of the displacements used in the simulations has very little effect on the rms spread in spin. At most, the displacement effects are seen to be on the order of 0.25 mrad for a displacement of $250~\mu$m. The magnitude ($\pm$ 100~mrad) of the final rms spin spread from the simulations is in line with estimates due to particle decays. For Fig.~\ref{fig:DR_X}, the increase in rms spin spread is evident as the number of revolutions (turns) about the Delivery Ring increases from one to four, both due to the additional rotations caused by the momentum spread and by the effects of passing through the misaligned magnets a greater number of times. The natural polarization coming from pion to muon decay is still the predominant effect. \section{Concluding Remarks} \begin{table} \begin{center} \begin{tabular}{|c|c|} \hline Source & $\phi_{rms}$\\ \hline \hline Emittance & $\approx 2.7$~mrad\\ \hline Misalignments & $\approx 1$~mrad\\ \hline Momentum spread & $\approx 11$~mrad \\ \hline Particle decays & $\approx 200~(\pm 100)$~mrad \\ \hline \end{tabular} \caption[Summary of Estimates]{The table illustrates the magnitude of the effects on the rms of the spin spread due to the individual factors.} \label{tab:Results} \end{center} \end{table} In Table~\ref{tab:Results}, we can see a comparison of the magnitude of the effects on the rms spread in the spin orientations of a particle beam. For the purposes of the $g$-2 experiment at Fermilab, the resulting final polarization is overwhelmingly due to the decay process that produces muons from pions. The other effects studied are small in comparison. More generally speaking, however, two important correlations are noted. First, a correlation between momentum and final polarization exists since higher momentum particles precess more in a magnetic field than do lower momentum particles. Second, there will be a correlation between the amplitudes of betatron oscillations and the spin direction, since particles further away from the ideal path experience a stronger corrective focusing in a quadrupole field and thus a greater precession. These two results will be important in the final analysis of the Muon g-2 experimental results. Additionally, other particles such as protons and ions have much larger anomalous magnetic moments and hence will have greater precession in a magnetic field, and so the insights gained here will be more important in future research focusing on highly polarized hadron beams for precision physics, such as EDM searches. \bigskip The authors would like to acknowledge the assistance of Diktys Stratakis and Brian Drendel. \\ This work was supported by the National Science Foundation Grant 1623691.
\section{Introduction} Given a training set $T= \{ (x_i,y_i): x_i \in \mathbb{R}^n, y_i \in \mathbb{R},~ i=1,2,\ldots,l~ \}$, the problem of regression is concerned with finding a function $f(x)$ which estimates the conditional mean of $y$ given $x$. However, the estimation of the conditional mean function alone is not enough to give a full description of the stochastic relationship between the input and response variables. Therefore, in many applications, we are interested in the estimation of the conditional quantile functions $f_{\tau}(x)$ as well. The quantile regression problem was initially studied in 1978 by Koenker and Bassett \cite{quantile1} and was later detailed and discussed in (Koenker, \cite{quantile2}). Koenker and Bassett \cite{quantile1} proposed the use of the pinball loss function for the estimation of the conditional quantile function $f_{\tau}(x)$. The pinball loss function is an asymmetric loss function which, for a given quantile $\tau \in (0,1)$, is defined as \begin{equation} P_{\tau}(u) ~=~ \begin{cases} \tau u ~~~~~~~~~~\mbox{if}~~ u > 0, \\ (\tau-1)u~~~ \mbox{otherwise}. \end{cases} \label{pinballloss} \end{equation} Takeuchi et al. \cite{quantile3} were the first to study the quantile regression problem in a non-parametric framework. Their work also establishes that a minimizer of the pinball loss function (\ref{pinballloss}) asymptotically converges to the true quantile function under very general conditions. Their formulation consists of the minimization of the pinball loss function (\ref{pinballloss}) along with a regularization term for the estimation of the conditional quantile function $f_{\tau}(x)$. Like the Support Vector Regression (SVR) model (Vapnik et al., \cite{svr1}; Drucker et al., \cite{svr2}; Gunn, \cite{GUNNSVM}), their proposed Support Vector Quantile Regression (SVQR) model is also consistent with the Structural Risk Minimization (SRM) principle (Vapnik, \cite{statistical_learning_theory}). It is well known that sparsity is a very desirable property in a regression model. A sparse regression model uses only a few training data points for the construction of the regression function and is very time-efficient in predicting the responses of test data points. Unlike the $\epsilon$-SVR model, the SVQR model lacks sparsity, as all of the training data points contribute to the empirical risk in the pinball loss function. Therefore, an SVQR model which can use the $\epsilon$-insensitive approach efficiently is required to increase the generalization ability and bring sparsity back into the model. \begin{figure} \centering \subfloat[] {\includegraphics[width=2.5in,height=1.65in]{pinlossnoncrosstau03}} \subfloat[] {\includegraphics[width=2.5in,height=1.65in]{pinlossnoncross}} \caption{Symmetric $\epsilon$-insensitive pinball loss function described in (Takeuchi and Furuhashi, \cite{noncrossqsvr}) for (a) $\tau=0.3$ and (b) $\tau=0.5$ with $\epsilon=5$.} \label{pinlossnoncross} \end{figure} The idea of using an $\epsilon$-insensitive tube in the SVQR model seems obvious and has been described in a number of places in the literature. However, we have not found any formulation which extends the idea of the $\epsilon$-insensitive tube to the SVQR model in its true sense.
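For reference, the standard pinball loss function (\ref{pinballloss}), which the formulations discussed below modify, can be written as the following short Python sketch (a minimal illustration only; the sample residuals are arbitrary):
\begin{verbatim}
# Standard pinball (check) loss, vectorized with NumPy.
import numpy as np

def pinball_loss(u, tau):
    """P_tau(u) = tau*u if u > 0, and (tau - 1)*u otherwise."""
    u = np.asarray(u, dtype=float)
    return np.where(u > 0, tau * u, (tau - 1.0) * u)

# Empirical risk used in SVQR (up to the regularization term):
# the mean of pinball_loss(y - f(x), tau) over the training set.
residuals = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(pinball_loss(residuals, tau=0.3))
\end{verbatim}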
Takeuchi and Furuhashi considered the $\epsilon$-insensitive pinball loss function for the estimation of non-crossing quantiles in their work (Takeuchi and Furuhashi, \cite{noncrossqsvr}). They combined the symmetric $\epsilon$-insensitive tube with the asymmetric pinball loss function by considering the following loss function \begin{equation} \phi_{\tau}^{\epsilon}(u)= \begin{cases} (1-\tau)|u|, ~~\mbox{if}~~u < -\epsilon \\ 0, ~~~~~~~~~~\mbox{if}~~ |u| \leq \epsilon \\ \tau|u|,~~~~~\mbox{if} ~~~~u> \epsilon \end{cases} \end{equation} for the estimation of the non-crossing quantile functions. However, they also admitted there that the introduction of the $\epsilon$-tube is unfavorable for the estimation of the conditional quantile function. One possible reason for this could be the symmetry of the $\epsilon$-tube, i.e., the $\epsilon$-tube is symmetric around the estimated function. We have also plotted the $\epsilon$-insensitive pinball loss function described in (Takeuchi and Furuhashi, \cite{noncrossqsvr}) in Figure~\ref{pinlossnoncross} and found that the proposed loss function is not convex and hence cannot be properly minimized using any convex program. Further, the given loss function does not reduce to the Vapnik $\epsilon$-insensitive loss function for $\tau=0.5$. Hu et al. also considered a similar kind of $\epsilon$-insensitive pinball loss function in their work (Hu et al., \cite{onlinesvqr}) for the estimation of quantiles, but here also the $\epsilon$-tube was symmetric around the estimated function. Seok et al. \cite{sparsequantile} also attempted to extend the idea of the $\epsilon$-insensitive approach to the SVQR model. For this, they proposed the following asymmetric e-insensitive pinball loss function for quantile estimation. \begin{equation} h_{\tau}(u)= \begin{cases} 0, ~~~~~~~~~~~~~~~~~~~\mbox{if}~~~ \frac{\tau}{\tau-1}\epsilon \leq u \leq \frac{1-\tau}{\tau}\epsilon,\\ \tau u -(1-\tau)\epsilon, ~~~~\mbox{if}~~~u \geq \frac{1-\tau}{\tau}\epsilon, \\ (\tau-1)u-\tau\epsilon,~~~~\mbox{if} ~~~~u\leq \frac{\tau}{\tau-1}\epsilon. \end{cases} \end{equation} Their resulting formulation was termed the `Sparse Support Vector Quantile Regression' (Sparse SVQR) model. The Sparse SVQR model was able to obtain a sparse solution. The e-insensitive pinball loss function of Seok et al. \cite{sparsequantile} can make an asymmetric $\epsilon$-insensitive zone around the estimated function. However, there is still a major problem with it. The width of the $\epsilon$-insensitive zone in the e-insensitive pinball loss function of Seok et al. \cite{sparsequantile} varies with the $\tau$ value, whereas it should ideally vary with the variance present in the response values of the training data. It also makes the selection of a good $\epsilon$ value difficult in practice. Further, the Sparse SVQR model requires the tuning of different choices of $\epsilon$ for the prediction of different conditional quantile functions for a given training set. Park and Kim \cite{quantilerkhs} also proposed a similar kind of improvement to the $\epsilon$-insensitive pinball loss function for the quantile regression model in a reproducing kernel Hilbert space. They proposed the following loss function \begin{eqnarray} \rho_{\tau}^{\epsilon}(u)= \max(0, P_{\tau}(u)-\epsilon) ~~ = \begin{cases} P_{\tau}(u)-\epsilon, ~~~\mbox{if}~~P_{\tau}(u) > \epsilon, \\ 0, ~~~~~~~~~~~~~~~ \mbox{otherwise}.
\\ \end{cases} \end{eqnarray} Similar to the e-insensitive pinball loss function of Seok et al.\cite{sparsequantile}, the width of the $\epsilon$-insensitive zone in the loss function of Park and Kim \cite{quantilerkhs} also varies with the $\tau$ values. We have realized the need of developing an $\epsilon$-insensitive pinball loss function which can extend the $\epsilon$-insensitive approach in pinball loss function in true sense for the quantile estimation. For this, we have proposed a novel asymmetric $\epsilon$-insensitive pinball loss function to be used for quantile estimation in this paper. For a given $\tau \in (0,1)$ , the proposed asymmetric $\epsilon$-insensitive pinball loss function is given by \begin{equation} L_{\tau}^{\epsilon}(u)= max(~-(1-\tau)(u+\tau\epsilon),~0~,~\tau(u-(1-\tau)\epsilon)~) \end{equation} For the problem of the quantile regression and given quantile $\tau \in (0,1)$, it can be better understood in the following form. \begin{equation} L_{\tau}^{\epsilon}(y_i,x_i,w,b)= \begin{cases} -(1-\tau)(y_i-(w^Tx_i+b)+ \tau\epsilon), ~if~~y_i-(w^Tx_i+b) < -\tau\epsilon. \\ 0, ~~~~~~~~~~~~~~~~~~~~~~~~if~~ -\tau\epsilon \leq y_i-(w^Tx_i+b)\leq (1-\tau)\epsilon. \\ \tau(y_i-(w^Tx_i+b)-(1-\tau)\epsilon),~if ~y_i-(w^Tx_i+b) >(1-\tau)\epsilon. \end{cases} \end{equation} Unlike other $\epsilon$-insensitive loss functions, the overall width of the $\epsilon$-zone in the proposed asymmetric $\epsilon$-insensitive pinball loss function does not vary for different values of $\tau$ rather, the division of the $\epsilon$-insensitive zone along the regressor is dependent on the specific $\tau$ value. The expected number of training points lying above and below the estimated function decides the length of the $\epsilon$-insensitive zone assigned to below and above the estimated function. In this way, the proposed asymmetric $\epsilon$-insensitive pinball loss function incorporates the concept of the $\epsilon$-insensitive zone in existing SVQR model in true sense. \begin{figure} \centering \subfloat[] {\includegraphics[width=3.0in,height=1.65in]{./lossfunctiontau01eps1.jpg}} \subfloat[] {\includegraphics[width=0.65\linewidth]{./einsensitvelosstau01eps1.jpg}}\\ \subfloat[] {\includegraphics[width=0.65\linewidth]{./lossfunctiontau02.jpg}} \subfloat[] {\includegraphics[width=0.65\linewidth]{./einsensitvelosstau02eps1.jpg}}\\ \subfloat[] {\includegraphics[width=3.0in,height=1.75in]{./lossfunctiontau05eps1.jpg}} \subfloat[] {\includegraphics[width=0.65\linewidth]{./einsensitvelosstau05eps1.jpg}}\\ \subfloat[] {\includegraphics[width=0.65\linewidth]{./lossfunctiontau08eps1.jpg}} \subfloat[] {\includegraphics[width=0.65\linewidth]{./einsensitvelosstau08eps1.jpg}} \caption{ Comparison of the proposed asymmetric $\epsilon$-pinball loss function (left) and e-insensitive pinball loss function of Seok et al.\cite{sparsequantile} (right) for (a) $\tau =0.1$ (b) $\tau= 0.2$ (c) $\tau = 0.5$ and (d) $\tau = 0.8$ with fixed $\epsilon$=1.} \label{ourloss}r \end{figure} Figure \ref{ourloss} shows the comparison of the proposed asymmetric $\epsilon$-pinball loss function and existing e-insensitive loss function of Seok et al.\cite{sparsequantile} for different values of $\tau$ with the fixed values of $\epsilon=1$. From this figure, it can be observed that, unlike the e-insensitive loss function, the total width of the $\epsilon$-insensitive zone is fixed in the proposed $\epsilon$-pinball loss function in all cases. 
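This fixed-width property is also easy to check numerically. The following is a minimal NumPy sketch of the proposed loss; the function name and the grid-based check are our own illustrative choices.
\begin{verbatim}
import numpy as np

def asym_eps_pinball(u, tau, eps):
    """Proposed asymmetric epsilon-insensitive pinball loss
    L_tau^eps(u) = max(-(1-tau)*(u + tau*eps), 0, tau*(u - (1-tau)*eps))."""
    u = np.asarray(u, dtype=float)
    return np.maximum.reduce([-(1.0 - tau) * (u + tau * eps),
                              np.zeros_like(u),
                              tau * (u - (1.0 - tau) * eps)])

# The insensitive zone is [-tau*eps, (1-tau)*eps]: its total width is always eps,
# only its split around zero changes with tau.
u = np.linspace(-3, 3, 6001)
for tau in (0.1, 0.5, 0.8):
    zone = u[asym_eps_pinball(u, tau, eps=1.0) == 0.0]
    print(tau, zone.min(), zone.max())  # approximately -tau and 1-tau
\end{verbatim}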
However, the division of the $\epsilon$-insensitive zone is not symmetric and depends on the specific $\tau$ value chosen. The underlying logic behind this division of the $\epsilon$-insensitive zone is that it should be based on the expected number of training points lying above and below the estimated regressor. Further, it can be observed that the total width of the $\epsilon$-insensitive zone is not fixed to 1 in the existing e-insensitive loss function and depends on the $\tau$ value, which may lead to inaccurate results. For example, for $\tau=0.1$, the e-insensitive loss function assigns a very large insensitive zone above the estimated regressor, which makes it ignore most of the training points lying above the estimated regressor and distorts the generalization ability of the estimated regressor. This is a major drawback of the e-insensitive loss function and of the resulting Sparse SVQR model \cite{sparsequantile}, and it is handled well in our proposed $\epsilon$-SVQR model. \begin{figure} \centering \subfloat[] {\includegraphics[width=2.5in,height=1.65in]{./lossfunctionParkKimtau01.png}} \subfloat[] {\includegraphics[width=2.5in,height=1.65in]{./lossfunctionParkKimtau09.png}}\\ \caption{ The $\epsilon$-pinball loss function proposed by Park and Kim \cite{quantilerkhs} for (a) $\tau =0.1$ (b) $\tau= 0.9$ with fixed $\epsilon$=1.} \label{Parkloss} \end{figure} Figure \ref{Parkloss} shows the plot of the $\epsilon$-pinball loss function proposed by Park and Kim \cite{quantilerkhs} for $\tau =0.1$ and $0.9$ with fixed $\epsilon=1$. Like the e-insensitive pinball loss function of Seok et al. \cite{sparsequantile}, the overall width of the $\epsilon$-insensitive zone in the loss function of Park and Kim \cite{quantilerkhs} also varies with the $\tau$ value. This makes the loss function of Park and Kim \cite{quantilerkhs} less suitable in practice. Further, we minimize the proposed asymmetric $\epsilon$-insensitive pinball loss function along with a regularization term in a regression model for quantile estimation. We term the resulting regression model the `$\epsilon$-Support Vector Quantile Regression' ($\epsilon$-SVQR) model. The proposed $\epsilon$-SVQR model extends the $\epsilon$-insensitive approach in the SVQR model in its true sense. The proposed $\epsilon$-SVQR model considers an asymmetric $\epsilon$-insensitive zone around the quantile regressor and ignores data points which lie in this zone. Only the data points which lie outside of the $\epsilon$-insensitive zone are allowed to participate in the construction of the regression function. In this way, the proposed $\epsilon$-SVQR model brings sparsity back into the SVQR model. Unlike the Sparse SVQR model, the proposed $\epsilon$-SVQR model can obtain a major improvement in prediction by tuning the width of the $\epsilon$-tube. Extensive experiments with several artificial and real-world benchmark datasets show the efficacy of the proposed $\epsilon$-SVQR model. The rest of this paper is organized as follows. Section-2 briefly describes the standard Support Vector Quantile Regression model \cite{quantile3} and the Sparse Support Vector Quantile Regression model \cite{sparsequantile}. Section-3 presents the formulation of the proposed $\epsilon$-Support Vector Quantile Regression model. Section-4 contains the numerical results obtained by extensive experiments carried out on several artificial and UCI datasets.
These results empirically show the advantages of the proposed $\epsilon$-SVQR model over other existing SVQR models. Section-5 concludes our work. \section{Support Vector Quantile Regression models} For the given training set $T$ and the quantile $\tau \in (0,1)$, the SVQR model estimates the conditional quantile function $f_{\tau}(x) = w^T\phi(x)+ b$ in the feature space, where $\phi:\mathbb{R}^n \rightarrow \mathcal{H}$ is a mapping from the input space to a higher dimensional feature space $\mathcal{H}$. \subsection{Standard Support Vector Quantile Regression model} The standard SVQR model minimizes \begin{eqnarray} \min_{w,b}~ \frac{1}{2}||w||^2 + C.\sum_{i=1}^{l}P_\tau({y_i-(w^Tx_i+b)}), \end{eqnarray} which can be equivalently converted to the following Quadratic Programming Problem (QPP) \begin{eqnarray} \min_{(w,b,\xi,\xi^*)}~~ \frac{1}{2}||w||^2 + C.\sum_{i=1}^{l}(\tau\xi_i+ (1-\tau)\xi_i^{*}) \nonumber \\ & \hspace{-105mm}\mbox{subject to,}\nonumber\\ & \hspace{-70mm}y_i- (w^T\phi(x_i)+b) \leq \xi_i, \nonumber\\ & \hspace{-70mm}(w^T\phi(x_i)+b)-y_i \leq \xi_i^{*} , \nonumber\\ & \hspace{-60mm}\xi_i \geq 0,~~\xi_i^{*} \geq 0, ~~~ i =1,2,...l. \label{SVQR_primal} \end{eqnarray} Here $C \geq 0$ is a user defined parameter which is used to find a good trade-off between the empirical risk and the flatness of the regressor. To solve the primal problem (\ref{SVQR_primal}) efficiently, we derive its corresponding Wolfe dual problem and obtain the following QPP \begin{eqnarray} \min_{(\alpha, \alpha^*)} ~\frac{1}{2} \sum_{i=1}^{l}\sum_{j=1}^{l}(\alpha_i- \alpha_i^*)K(x_i,x_j)(\alpha_j-\alpha_j^*) - \sum_{i=1}^{l}y_i(\alpha_i-\alpha_i^{*}) \nonumber \\ & \hspace*{-180mm}\mbox{subject to,} \nonumber \\ & \hspace*{-140mm} 0 \leq \alpha_i \leq \tau C, \nonumber \\ & \hspace*{-105mm} 0 \leq \alpha_i^{*} \leq (1-\tau) C, ~~~~~ i=~1,2,...l. \label{SVQR_dual} \end{eqnarray} After obtaining the optimal values of $\alpha_i$ and $\alpha_i^{*}$ from the dual problem (\ref{SVQR_dual}), the quantile regression function $f_\tau(x)$, for any test data point $x \in \mathbb{R}^n$, is estimated as \begin{equation} f_\tau(x) = \sum_{i=1}^{l}(\alpha_i-\alpha_i^{*})K(x,x_i) + b. \end{equation} The value of the bias $b$ can be computed by using the KKT conditions for the primal problem (\ref{SVQR_primal}), as is done in the traditional $\epsilon$-SVR model. \subsection{Sparse Support Vector Quantile Regression model} The Sparse SVQR model \cite{sparsequantile} uses the e-insensitive loss function $h_{\tau}(u)$ to measure the empirical risk along with the regularization term for the estimation of the quantile function. It seeks the solution of the optimization problem \begin{eqnarray} \min_{w,b} \frac{1}{2}||w||^2 + C.\sum_{i=1}^{l}h_\tau({y_i-(w^Tx_i+b)}), \label{sparsesvqrprimal} \end{eqnarray} which has been converted to the following constrained optimization problem by Seok et al. in \cite{sparsequantile} \begin{eqnarray} \min_{(w,b,\xi,\xi^*)} \frac{1}{2}||w||^2 + C.\sum_{i=1}^{l}(\tau\xi_i+ (1-\tau)\xi_i^{*}) \nonumber \\ & \hspace{-110mm}\mbox{subject to,}\nonumber\\ & \hspace{-70mm}y_i- (w^T\phi(x_i)+b) \leq \xi_i + \frac{1-\tau}{\tau}\epsilon, \nonumber\\ & \hspace{-70mm}(w^T\phi(x_i)+b)-y_i \leq \xi_i^{*}+ \frac{\tau}{1-\tau}\epsilon , \nonumber\\ & \hspace{-60mm}\xi_i \geq 0,~~\xi_i^{*} \geq 0, ~~~ i =1,2,...l. \label{Sparse_SVQR_primal} \end{eqnarray} Here $\epsilon \geq 0$ and $C \geq 0$ are user defined parameters.
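For reference, the e-insensitive loss $h_{\tau}(u)$ used in this subsection can equivalently be written as a maximum of three affine pieces, $h_{\tau}(u)=\max(0,\ \tau u-(1-\tau)\epsilon,\ (\tau-1)u-\tau\epsilon)$; this reformulation is our own observation and the short NumPy sketch below (with illustrative names) is not taken from \cite{sparsequantile}.
\begin{verbatim}
import numpy as np

def seok_eps_pinball(u, tau, eps):
    """e-insensitive pinball loss of Seok et al.: zero on
    [tau/(tau-1)*eps, (1-tau)/tau*eps], slopes tau and (tau-1) outside."""
    u = np.asarray(u, dtype=float)
    return np.maximum.reduce([np.zeros_like(u),
                              tau * u - (1.0 - tau) * eps,
                              (tau - 1.0) * u - tau * eps])
\end{verbatim}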
Like the standard SVQR model, the Wolfe dual problem corresponding to the primal problem (\ref{Sparse_SVQR_primal}) can also be solved to obtain the desired quantile estimate. \section{Proposed $\epsilon$-Support Vector Quantile Regression model} In this section, we present our proposed $\epsilon$-Support Vector Quantile Regression ($\epsilon$-SVQR) model, which uses the proposed asymmetric $\epsilon$-pinball loss function for measuring the empirical risk. The proposed $\epsilon$-SVQR model minimizes \begin{eqnarray} \min_{(w,b)} ~\frac{1}{2}||w||^2 + C. \sum_{i=1}^{l}L_{\tau}^{\epsilon}(y_i,x_i,w,b) \nonumber \\ & \hspace{-60mm} = \min_{(w,b)}~ \frac{1}{2}||w||^2 + C. \sum_{i=1}^{l}\max(-(1-\tau)(y_i-(w^Tx_i+b)+\tau\epsilon), \nonumber \\ & \hspace {-05mm}0, \tau(y_i-(w^Tx_i+b)-(1-\tau)\epsilon)) \end{eqnarray} which can be further rewritten as \begin{eqnarray} \min_{(w,b)} ~\frac{1}{2}||w||^2 + C\sum_{i=1}^{l}\max(~0,~\tau(y_i-(w^Tx_i+b)-(1-\tau)\epsilon)~)\nonumber \\ & \hspace{-110mm} ~+ ~ C\sum_{i=1}^{l}\max(~-(1-\tau)(y_i-(w^Tx_i+b)+\tau\epsilon),~0~). \label{esvqr1} \end{eqnarray} After introducing the variables $\xi_i = \max(~0,~\tau(y_i-(w^Tx_i+b)-(1-\tau)\epsilon)~)$ and $\xi^{*}_i = \max(~-(1-\tau)(y_i-(w^Tx_i+b)+\tau\epsilon),~0~)$, for $i~=1,2,...l$, the problem (\ref{esvqr1}) can be converted to the following QPP \begin{eqnarray} \min_{(w,b,\xi,\xi^*)}~~ \frac{1}{2}||w||^2 + C.\sum_{i=1}^{l}(\xi_i+ \xi_i^{*}) \nonumber \\ & \hspace{-95mm}\mbox{subject to,}\nonumber\\ & \hspace{-50mm} \xi_i \geq \tau(y_i-(w^Tx_i+b)-(1-\tau)\epsilon) , \nonumber\\ & \hspace{-50mm} \xi_i^{*} \geq -(1-\tau)(y_i-(w^Tx_i+b)+\tau\epsilon) , \nonumber\\ & \hspace{-60mm}\xi_i \geq 0,~~\xi_i^{*} \geq 0, ~~~ i =1,2,...l, \label{esvqr2} \end{eqnarray} which can be written in the standard form as follows \begin{eqnarray} \min_{(w,b,\xi,\xi^*)} ~~\frac{1}{2}||w||^2 + C.\sum_{i=1}^{l}(\xi_i+ \xi_i^{*}) \nonumber \\ & \hspace{-95mm}\mbox{subject to,}\nonumber\\ & \hspace{-50mm}y_i- (w^T\phi(x_i)+b) \leq (1-\tau)\epsilon + \frac{\xi_i}{\tau}, \nonumber\\ & \hspace{-57mm}(w^T\phi(x_i)+b)-y_i \leq \tau\epsilon + \frac{\xi_i^{*}}{1-\tau} , \nonumber\\ & \hspace{-60mm}\xi_i \geq 0,~~\xi_i^{*} \geq 0, ~~~ i =1,2,...l. \label{esvqr3} \end{eqnarray} After the substitution $\xi_i:= \frac{\xi_i}{\tau}$ and $\xi_i^{*}:=\frac{\xi_i^{*}}{1-\tau}$ in the above problem (\ref{esvqr3}), the primal problem of the proposed $\epsilon$-SVQR model is obtained as \begin{eqnarray} \min_{(w,b,\xi,\xi^*)} \frac{1}{2}||w||^2 + C.\sum_{i=1}^{l}(\tau\xi_i+ (1-\tau)\xi_i^{*}) \nonumber \\ & \hspace{-110mm}\mbox{subject to,}\nonumber\\ & \hspace{-50mm}y_i- (w^T\phi(x_i)+b) \leq (1-\tau)\epsilon + \xi_i, \nonumber\\ & \hspace{-60mm}(w^T\phi(x_i)+b)-y_i \leq \tau\epsilon + \xi_i^{*} , \nonumber\\ & \hspace{-60mm}\xi_i \geq 0,~~\xi_i^{*} \geq 0, ~~~ i =1,2,...l. \label{esvqr_primal} \end{eqnarray} Here $\epsilon \geq 0$ is a user defined parameter. It is notable that with $\epsilon=0$, the proposed $\epsilon$-SVQR model reduces to the SVQR model (Takeuchi et al., \cite{quantile3}). To solve the primal problem (\ref{esvqr_primal}) efficiently, we derive its Wolfe dual problem.
The Lagrangian function for the primal problem (\ref{esvqr_primal}) is obtained as \begin{eqnarray} & \hspace{-160mm} L(w, b, \xi_i ,\xi_i^{*}, \alpha_i,\beta_i,\gamma_i,\lambda_i) = ~~~~ \frac{1}{2}||w||^2 + C.\sum_{i=1}^{l}(\tau\xi_i+ (1-\tau)\xi_i^{*}) \nonumber \\ +\sum_{i=1}^{l}\alpha_i(y_i- (w^T\phi(x_i)+b)- (1-\tau)\epsilon - \xi_i)+ \sum_{i=1}^{l}\beta_i((w^T\phi(x_i)+b)-y_i - \tau\epsilon - \xi_i^{*})\nonumber \\ &\hspace{-220mm}-\sum_{i=1}^{l}\gamma_i\xi_i- \sum_{i=1}^{l}\lambda _i\xi_i^{*}. \end{eqnarray} We can now note the KKT conditions for (\ref{esvqr_primal}) as follows \begin{eqnarray} & \hspace{-90mm}\frac{\partial L}{\partial w} = w+ \sum_{i=1}^{l}(\beta_i-\alpha_i)\phi(x_i)=0 \implies w = \sum_{i=1}^{l}(\alpha_i-\beta_i)\phi(x_i) \label{kkt1}\\ & \hspace{-152mm}\frac{\partial L}{\partial b}= \sum_{i=1}^{l}(\beta_i-\alpha_i) = 0. \label{kkt2}\\ & \hspace{-134mm} \frac{\partial L}{\partial \xi_i}= C\tau - \alpha_i -\gamma_i = 0, ~~ i=1 ,2,...,l.\label{kkt3} \\ & \hspace{-125mm}\frac{\partial L}{\partial \xi_i^*}= C(1-\tau) - \beta_i -\lambda_i = 0, ~~ i=1 ,2,...,l.\label{kkt4}\\ & \hspace{-105mm} \alpha_i(y_i- (w^T\phi(x_i)+b)- (1-\tau)\epsilon - \xi_i) =0, ~~ i=1 ,2,...,l. \label{kkt5}\\ \hspace{-8mm}\beta_i((w^T\phi(x_i)+b)-y_i - \tau\epsilon - \xi_i^{*}) = 0, ~~ i=1 ,2,...,l.~~~~~~~\label{kkt6}\\ & \hspace{-140mm} \gamma_i\xi_i = 0,~~ \lambda _i\xi_i^{*} = 0,~~ i=1 ,2,...,l.\label{kkt7}\\ & \hspace{-115mm} y_i- (w^T\phi(x_i)+b) \leq (1-\tau)\epsilon + \xi_i, ~~ i=1 ,2,...,l.\label{kkt8} \\ & \hspace{-118mm} (w^T\phi(x_i)+b)-y_i \leq \tau\epsilon + \xi_i^{*}, ~~ i=1 ,2,...,l. \label{kkt9}\\ &\hspace{ -140mm}\xi_i \geq 0,~~\xi_i^{*} \geq 0, ~~ i=1 ,2,...,l.\label{kkt10} \end{eqnarray} Making use of the above KKT conditions, the Wolfe dual problem of the primal problem (\ref{esvqr_primal}) can be obtained as follows \begin{eqnarray} \min_{\alpha,\beta} \frac{1}{2}\sum_{i=1}^{l}\sum_{j=1}^{l}(\alpha_i- \beta_i)K(x_i,x_j)(\alpha_j-\beta_j) - \sum_{i=1}^{l}(\alpha_i-\beta_i)y_i + \sum_{i=1}^{l}((1-\tau)\epsilon\alpha_i+\tau\epsilon\beta_i) \nonumber\\ & \hspace*{-240mm}\mbox{subject to,} \nonumber \\ &\hspace{-190mm} \sum_{i=1}^{l}(\alpha_i-\beta_i)= 0, \nonumber \\ & \hspace{-180mm} 0 \leq \alpha_i \leq C\tau,~~ i=~1,2,...l, \nonumber \\ & \hspace{-172mm} 0 \leq \beta_i \leq C(1-\tau), ~~i=~1,2,...l. \label{esvqrdual} \end{eqnarray} The KKT conditions (\ref{kkt1})-(\ref{kkt10}) help us discover various characteristics of the proposed $\epsilon$-SVQR model. First, we state the following proposition. \newline \textbf{Proposition 1.} For $\epsilon > 0$, $\alpha_i\beta_i=0$ holds for all $i=1,2,...,l$.\\ Proof: Suppose, for the sake of contradiction, that there exists an index $i$ such that $\alpha_i\beta_i \neq 0$ holds. This implies that $\alpha_i\neq 0$ and $\beta_i \neq 0$. Therefore, from the KKT conditions (\ref{kkt5}) and (\ref{kkt6}) we obtain \begin{eqnarray} (y_i- (w^T\phi(x_i)+b)- (1-\tau)\epsilon - \xi_i) =0 \label{11}\\ \mbox{and}~~~~~~~~~~~~~~~~~~~~~~ ((w^T\phi(x_i)+b)-y_i - \tau\epsilon - \xi_i^{*}) =0. \label{22} \end{eqnarray} Adding equations (\ref{11}) and (\ref{22}) gives $\xi_i^{*}+ \xi_i = -\epsilon$, which is possible only when either $\xi_i < 0$ or $\xi_i^{*} < 0$. But the KKT condition (\ref{kkt10}) requires $\xi_i \geq 0$ and $\xi_i^{*} \geq 0$ for $i=1 ,2,...,l$, which yields a contradiction. This proves the proposition.
Further, let us locate the training points with the help of their obtained Lagrangian multiplier values $\alpha_i$ and $\beta_i$. For this, we consider the following three disjoint sets \begin{eqnarray} S_1=\{i: 0<\alpha_i < C\tau ~~or~~ 0<\beta_i < C (1-\tau) \}, \nonumber \\ & \hspace{-86 mm} S_2=\{i: \alpha_i = C\tau ~~or~~ \beta_i = C(1-\tau) \}, \nonumber \\ & \hspace{-96 mm} S_3=\{i: \alpha_i = 0 ~~and~~ \beta_i = 0 \}. \nonumber \end{eqnarray} For all training data points $x_i \in S_1$ we have $\gamma_i > 0$ (or $\lambda_i >0$) from KKT condition (\ref{kkt3}) (or (\ref{kkt4})). This implies that $\xi_i=0$ (or $\xi_i^*=0$) from KKT condition (\ref{kkt7}). Moreover, since $\alpha_i > 0$ (or $\beta_i > 0$), condition (\ref{11}) (or (\ref{22})) must be satisfied, which consequently implies that \begin{eqnarray} (y_i- (w^T\phi(x_i)+b)- (1-\tau)\epsilon ) =0 \label{112}\\ (\mbox{or}~~~~~~~ ((w^T\phi(x_i)+b)-y_i - \tau\epsilon ) =0) \label{122} \end{eqnarray} holds true. It means that all training data points $x_i \in S_1$ are located on the boundary of the $\epsilon$-insensitive zone. Further, a data point which satisfies $0<\alpha_i < C\tau$ lies on the upper boundary of the asymmetric $\epsilon$-insensitive zone, and a data point which satisfies $0<\beta_i < C(1-\tau)$ lies on the lower boundary of the asymmetric $\epsilon$-insensitive zone. For all training data points $x_i \in S_2$, we have $\gamma_i = 0$ (or $\lambda_i = 0$) from KKT condition (\ref{kkt3}) (or (\ref{kkt4})), which allows $\xi_i \geq 0$ (or $\xi_i^{*} \geq 0$). Using the KKT condition (\ref{kkt5}) (or (\ref{kkt6})), we obtain \begin{eqnarray} y_i- (w^T\phi(x_i)+b) > (1-\tau)\epsilon ~~(~\mbox{or}~~ (w^T\phi(x_i)+b)-y_i > \tau\epsilon~~). \label{123} \end{eqnarray} It means that these training data points lie outside of the asymmetric $\epsilon$-insensitive zone. Further, a data point for which $ \alpha_i = C\tau $ lies above the asymmetric $\epsilon$-insensitive zone, and a data point for which $ \beta_i = C(1-\tau) $ lies below the asymmetric $\epsilon$-insensitive zone. All training data points $x_i \in S_3$ lie inside the $\epsilon$-insensitive zone and do not contribute to the errors. These data points are ignored and do not contribute to the construction of the regression function. Like the $\epsilon$-SVR model, only the data points which lie outside of the $\epsilon$-insensitive zone or on its boundary contribute to the estimated regressor. However, the proportion of their contributions is not equal in the proposed $\epsilon$-SVQR model. It depends upon the location of the data point as well as the $\tau$ value. In the proposed $\epsilon$-SVQR model, the data points which lie above and below the asymmetric $\epsilon$-insensitive zone contribute to the final quantile regressor in the ratio of $\tau$ and $(1-\tau)$, respectively. For example, for $\tau < 0.5$, the data points lying below the asymmetric $\epsilon$-insensitive zone are more important than the data points lying above the asymmetric $\epsilon$-insensitive zone in the construction of the regressor. This is because, for $\tau < 0.5$, only a few data points are expected to lie below the asymmetric $\epsilon$-insensitive zone while more data points are expected to lie above it.
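To make the formulation concrete, the dual problem (\ref{esvqrdual}) can be handed to an off-the-shelf solver; our experiments (described in the next section) use MATLAB's quadprog for this purpose. Purely as an illustration, the following is a minimal Python sketch based on scipy.optimize for small training sets; all function and variable names, as well as the default parameter values, are our own illustrative choices and not part of the proposed method itself.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def rbf(X1, X2, q=1.0):
    # RBF kernel exp(-||x - y||^2 / q), as used in the experiments
    d2 = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-d2 / q)

def fit_eps_svqr(X, y, tau, C=4.0, eps=0.5, q=1.0):
    l = len(y)
    K = rbf(X, X, q)

    def obj(z):                       # dual objective of (esvqrdual)
        a, b = z[:l], z[l:]
        d = a - b
        return (0.5 * d @ K @ d - y @ d
                + (1.0 - tau) * eps * a.sum() + tau * eps * b.sum())

    def grad(z):
        a, b = z[:l], z[l:]
        g = K @ (a - b)
        return np.concatenate([g - y + (1.0 - tau) * eps,
                               -(g - y) + tau * eps])

    bounds = [(0.0, tau * C)] * l + [(0.0, (1.0 - tau) * C)] * l
    cons = [{'type': 'eq', 'fun': lambda z: np.sum(z[:l]) - np.sum(z[l:])}]
    res = minimize(obj, np.zeros(2 * l), jac=grad, bounds=bounds,
                   constraints=cons, method='SLSQP',
                   options={'maxiter': 500})
    alpha, beta = res.x[:l], res.x[l:]
    coef = alpha - beta

    # bias from boundary points (set S_1), i.e. eqs. (112)/(122)
    f_wo_b = K @ coef
    tol = 1e-6
    b_vals = []
    for i in range(l):
        if tol < alpha[i] < tau * C - tol:
            b_vals.append(y[i] - f_wo_b[i] - (1.0 - tau) * eps)
        if tol < beta[i] < (1.0 - tau) * C - tol:
            b_vals.append(y[i] - f_wo_b[i] + tau * eps)
    b = np.mean(b_vals) if b_vals else 0.0
    return coef, b

def predict_eps_svqr(X_train, coef, b, X_test, q=1.0):
    return rbf(X_test, X_train, q) @ coef + b
\end{verbatim}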
After obtaining the solution of the dual problem (\ref{esvqrdual}), the quantile regression function $f_\tau(x)$, for any test data point $x \in \mathbb{R}^n$, is estimated as \begin{equation} f_\tau(x) = \sum_{i=1}^{l}(\alpha_i-\beta_i)K(x,x_i) + b. \end{equation} For obtaining the optimal value of the bias term $b$, we can pick the training data points in $S_1$ and compute the value of $b$ from equation (\ref{112}) or (\ref{122}). In practice, for every $i$ with $0 < \alpha_i < C\tau$, we compute the value of $b$ from equation (\ref{112}), for every $i$ with $0 < \beta_i < C(1-\tau)$, we compute the value of $b$ from equation (\ref{122}), and we use the average of these values as the final value of $b$. Further, like the $\epsilon$-SVR model described in (Gunn, \cite{GUNNSVM}), if the kernel contains a bias term then the $\epsilon$-SVQR dual problem (\ref{esvqrdual}) can be solved without the equality constraint and the quantile regression function is simply estimated by \begin{equation} f_\tau(x) = \sum_{i=1}^{l}(\alpha_i-\beta_i)K(x,x_i). \label{r12} \end{equation} \section{Experimental Results} In this section, we perform extensive experiments to verify the efficacy of the proposed $\epsilon$-SVQR model. For this, we first describe our experimental setup. We have performed all experiments in the MATLAB 16.0 environment (http://in.mathworks.com/) on an Intel XEON processor with 16.0 GB of RAM. Since the proposed $\epsilon$-SVQR model is basically an improvement over the standard SVQR model, we only consider existing SVQR models in the experiments. The QPPs of the proposed $\epsilon$-SVQR and Sparse SVQR models have been solved by the quadprog function with the interior-point convex algorithm available in the MATLAB 16.0 environment. It is also noteworthy that the SVQR model is a special case of the proposed $\epsilon$-SVQR and Sparse SVQR models with the particular choice $\epsilon=0$. For all of the experiments, we have used the RBF kernel function $exp(\frac{-||x-y||^2}{q})$, where $q$ is the kernel parameter, and the quantile regression function is estimated by (\ref{r12}). The proposed $\epsilon$-SVQR model and the Sparse SVQR model involve three parameters, namely the RBF kernel parameter $q$, $C$, and $\epsilon$. These parameters have been tuned with the exhaustive search method (Hsu and Lin, \cite{Exhaustivesearch}). The parameters $q$ and $C$ have been searched in the set $\{ 2^i: i=-15,-9,\hdots,9,15\} $. The parameter $\epsilon$ has been searched in the set $\{ 0,0.1,0.2,\hdots,2,2.5,3,\hdots,5\}$. \subsection{\textbf{Performance Criteria}} \label{Perform_criteria} For comparison of the efficacy of the SVQR models, we have used evaluation criteria which are also mentioned in (Xu Q et al., \cite{Weighted_QSVR}). Given the training set $T= \{ (x_i,y_i): x_i \in \mathbb{R}^n, y_i \in \mathbb{R},~ i=1,2...,l~ \}$ and the true $\tau$-th conditional quantile function $Q_{\tau}(y/x)$, we list the evaluation criteria as follows. \begin{enumerate} \item[(i)] $ RMSE$: It is the Root Mean Squared Error.\\ ~~It is given by $\sqrt{ \frac{1}{l}\sum_{i=1}^{l}( Q_{\tau}(y_i/x_i)- f_\tau(x_i))^2}$. \item[(ii)] $ MAE$: It is the Mean Absolute Error. \\ ~~~It is given by ${ \frac{1}{l}\sum_{i=1}^{l}|( Q_{\tau}(y_i/x_i)- f_\tau(x_i))|}$. \item[(iii)] $ TheilU$: It is a measure used for the quantile regression estimate. It is given by $\sqrt{ \frac{\frac{1}{l}\sum_{i=2}^{l}( Q_{\tau}(y_i/x_i)- f_\tau(x_i))^2 / Q_{\tau}(y_i/x_i) } { (\frac{1}{l}\sum_{i=2}^{l}( Q_{\tau}(y_{i-1}/x_i)- f_\tau(x_i)))^2 / Q_{\tau}(y_i/x_i)}}$.
If its value is less than 1, then the used quantile regression estimate is better than guessing. \item[(iv)] Error $E_\tau$: It is the measure which is used when the true quantile function is unknown. It is given by $E_\tau ~=~ |p_\tau -\tau|$, where $p_{\tau} = P(y_i \leq f_{\tau}(x_i))$ is the coverage probability. For the experiments on real-world UCI datasets, we use this measure. We compute the coverage probability $p_{\tau}$ by obtaining the estimated $\tau$ value in 100 random trials. \item[(v)] Sparsity(u) = $\frac{\#(u=0)}{\#(u)}$, where $\#(r)$ denotes the number of components of the vector $r$. \end{enumerate} \subsection{Artificial Datasets} We need to observe the role of the $\epsilon$-insensitive zone in SVQR models and prove the efficacy of the proposed $\epsilon$-SVQR model over the Sparse SVQR model empirically. For this, we have considered artificial datasets with different types of noise. We have generated the training set $T$ where $x_i$ is drawn from the univariate uniform distribution on $[-4,4]$. The response variable $y_i$ is obtained by polluting a nonlinear function of $x_i$ with different types of noise as follows. \begin{eqnarray} & \hspace{5mm} \mbox{AD1:}~~y_i = (1-x_i+2x_i^{2})e^{-0.5x_{i}^2} + \xi_i, \nonumber \mbox{~~~~~where $\xi_i$ is from N(0, $\sigma$).} \nonumber\\ & \hspace{0mm} \mbox{AD2:}~~y_i = (1-x_i+2x_i^{2})e^{-0.5x_{i}^2} + \xi_i, \mbox{~~~~~where $\xi_i$ is from} ~\chi^2(3). \nonumber \end{eqnarray} The artificial datasets AD1 and AD2 contain 200 training points. The true quantile function $Q_\tau(y_i/x_i)$ in these artificial datasets can be obtained as \begin{eqnarray} Q_\tau(y_i/x_i) = (1-x_i+2x_i^{2})e^{-0.5x_{i}^2} + F_{\xi}^{-1}(\tau), \nonumber \end{eqnarray} where $F_{\xi}^{-1}(\tau)$ is the $\tau$th quantile of the random error $\xi_i$. We have evaluated the SVQR models in 100 independent trials by generating 1000 testing points in each trial. A one-run simulation of the artificial dataset AD1 with $\sigma =0.2$ is plotted in Figure \ref{plot}, along with the true quantiles and the quantile estimates predicted by the proposed $\epsilon$-SVQR model for several $\tau$ values. Figure (\ref{plot2}) shows the performance of the proposed $\epsilon$-SVQR model with $\epsilon = 1.5$ on the artificial dataset AD1 with $\sigma= 1.5$ for different $\tau$ values. It also shows how the proposed asymmetric $\epsilon$-insensitive pinball loss function obtains an asymmetric $\epsilon$-insensitive zone around the data for different $\tau$ values. The width of the $\epsilon$-insensitive zone remains fixed for different $\tau$ values, but its division varies with the $\tau$ value. This enables the proposed $\epsilon$-SVQR model to obtain a better estimate irrespective of the $\tau$ value. To show the efficacy of the proposed $\epsilon$-SVQR model over the existing Sparse SVQR model, we have tested both of them on the artificial dataset AD1 with different noise levels $\sigma$. We have listed the RMSE values obtained by the $\epsilon$-SVQR model and the Sparse SVQR model with different $\sigma$ and $\epsilon$ values for $\tau$= 0.1 in Table \ref{table1} and Table \ref{table2}, respectively. We can draw the following observations from these tables. \begin{enumerate} \item[(i)] The numerical values listed in the first column of Table \ref{table1} and Table \ref{table2} are the same, i.e., the Sparse SVQR model and the $\epsilon$-SVQR model obtain the same RMSE values with $\epsilon=0$.
This is because both the Sparse SVQR and $\epsilon$-SVQR models reduce to the standard SVQR model when $\epsilon$=0. Further, it can also be observed that the proposed $\epsilon$-SVQR model obtains better RMSE values for non-zero values of $\epsilon$. The Sparse SVQR model also obtains better RMSE values for non-zero values of $\epsilon$ in several cases. It confirms that the concept of the $\epsilon$-insensitive zone is quite relevant in SVQR models. \item[(ii)] It can be observed from the RMSE values listed in Table \ref{table1} that the proposed $\epsilon$-SVQR model can obtain a major improvement over the SVQR model by tuning its parameter $\epsilon$. Figure (\ref{plot112}) shows the percentage of improvement in RMSE obtained by the proposed $\epsilon$-SVQR model over the standard SVQR model. Also, in the proposed $\epsilon$-SVQR model, the optimal choice of $\epsilon$ increases with the noise level $\sigma$ present in the responses of the training dataset. This fact is also well depicted in Figure \ref{sigmavseps11}. \item[(iii)] The Sparse SVQR model struggles to obtain good RMSE values within the given range of $\epsilon$ values. This is because the width of the $\epsilon$-insensitive zone in the Sparse SVQR model depends on the $\tau$ value. Further, it can also be observed that the Sparse SVQR model obtains its optimal RMSE values at $\epsilon$=0 for the noise levels $\sigma$ = 0, 0.1, 0.2, 0.3, 0.4. It means that the Sparse SVQR model may fail to utilize the concept of the $\epsilon$-insensitive zone in certain cases. \end{enumerate} We have listed the optimal RMSE values obtained by the Sparse SVQR model and the proposed $\epsilon$-SVQR model with different values of the noise level $\sigma$ for $\tau$ = 0.1 and $\tau$ = 0.9 in Table \ref{table3} and Table \ref{table4}, respectively. It can be observed that in most cases, the proposed $\epsilon$-SVQR model obtains better RMSE values than the existing Sparse SVQR model.
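For completeness, the artificial dataset AD1 and its true conditional quantile can be generated along the following lines; this is a Python sketch under our own naming, and the true quantile uses $F_{\xi}^{-1}(\tau)$ of $N(0,\sigma)$ since the noise in AD1 is additive Gaussian.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def make_ad1(l=200, sigma=0.2, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-4.0, 4.0, size=l)
    f = (1.0 - x + 2.0 * x**2) * np.exp(-0.5 * x**2)
    y = f + rng.normal(0.0, sigma, size=l)
    return x.reshape(-1, 1), y

def true_quantile_ad1(x, tau, sigma=0.2):
    # Q_tau(y|x) = f(x) + tau-quantile of N(0, sigma)
    x = np.ravel(x)
    f = (1.0 - x + 2.0 * x**2) * np.exp(-0.5 * x**2)
    return f + sigma * norm.ppf(tau)
\end{verbatim}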
\begin{figure} \centering \subfloat[] {\includegraphics[width=2.5in,height=1.5in]{./ad1sigma15eps15tau03_1.png}} \subfloat[] {\includegraphics[width=2.5in,height=1.5in]{./ad1sigma15eps15tau05.png}}\\ \subfloat[] {\includegraphics[width=2.5in,height=1.5in]{./ad1sigma15eps15tau07.png}} \subfloat[] {\includegraphics[width=2.5in,height=1.5in]{./ad1sigma15eps15tau08.png}} \caption{Performance of the proposed $\epsilon$-QSVR model with $\epsilon=1.5$ for (a) $\tau=0.3$ (b) $\tau=0.5$ (c) $\tau=0.7$ and (d) $\tau=0.8$} \label{plot2} \end{figure} \begin{figure} \centering \subfloat[] {\includegraphics[width=0.6\linewidth]{plottau01.png}} \subfloat[] {\includegraphics[width=0.6\linewidth]{plottau02.png}}\\ \subfloat[] {\includegraphics[width= 0.6\linewidth]{plottau03.png}} \subfloat[] {\includegraphics[width=0.6\linewidth]{plottau05.png}}\\ \subfloat[] {\includegraphics[width=0.6\linewidth]{plottau08.png}} \subfloat[] {\includegraphics[width=0.6\linewidth]{plottau09.png}} \caption{One run simulation of the proposed $\epsilon$-QSVR model with different $\tau$ values.} \label{plot} \end{figure} \begin{table}[h] {\footnotesize \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $\sigma/\epsilon$ & 0.0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1 \\ \hline 0.1 & 0.043 & 0.027 & 0.035 & 0.058 & 0.077 & 0.116 & 0.165 & 0.214 & 0.264 & 0.311 & 0.365 \\ \hline 0.2 & 0.083 & 0.055 & 0.054 & 0.059 & 0.071 & 0.082 & 0.113 & 0.134 & 0.148 & 0.187 & 0.230 \\ \hline 0.3 & 0.124 & 0.100 & 0.077 & 0.080 & 0.083 & 0.094 & 0.107 & 0.111 & 0.134 & 0.167 & 0.184 \\ \hline 0.4 & 0.165 & 0.134 & 0.113 & 0.103 & 0.106 & 0.104 & 0.122 & 0.129 & 0.144 & 0.146 & 0.158 \\ \hline 0.5 & 0.204 & 0.174 & 0.153 & 0.128 & 0.129 & 0.131 & 0.129 & 0.144 & 0.156 & 0.155 & 0.177 \\ \hline 0.6 & 0.243 & 0.214 & 0.202 & 0.167 & 0.155 & 0.155 & 0.157 & 0.154 & 0.165 & 0.178 & 0.194 \\ \hline 0.7 & 0.282 & 0.256 & 0.232 & 0.210 & 0.185 & 0.182 & 0.184 & 0.184 & 0.182 & 0.181 & 0.200 \\ \hline \end{tabular}} \caption{RMSE obtained by the proposed $\epsilon$-SVQR model with different $\sigma$ and $\epsilon$ values for $\tau$=0.1. 
} \label{table1} \end{table} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{./comp_svqrs_sigma_tau01.jpg} \caption{Comparisons of minimum RMSE obtained by proposed $\epsilon$-SVQR with other SVQR models after tunning its parameter $\epsilon$ for different variance of noise $\sigma$ for $\tau=0.1$ } \label{comp_svqrs_tau01} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\linewidth, height= 0.3\textheight]{percentofacc} \caption{Percentage of improvement in RMSE obtained by proposed $\epsilon$-SVQR over SVQR model after tunning its parameter $\epsilon$ for different variance of noise $\sigma$ for $\tau=0.1$ } \label{plot112} \end{figure} \begin{table}[h] {\footnotesize \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $\sigma \setminus\epsilon$ & 0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1 \\ \hline 0.1 & 0.043 & 0.369 & 0.832 & 1.108 & 1.108 & 1.107 & 1.106 & 1.106 & 1.105 & 1.105 & 1.105 \\ \hline 0.2 & 0.083 & 0.242 & 0.701 & 1.029 & 1.028 & 1.027 & 1.026 & 1.025 & 1.024 & 1.023 & 1.022 \\ \hline 0.3 & 0.124 & 0.208 & 0.588 & 0.955 & 0.953 & 0.952 & 0.951 & 0.950 & 0.949 & 0.948 & 0.947 \\ \hline 0.4 & 0.165 & 0.176 & 0.466 & 0.847 & 0.893 & 0.892 & 0.890 & 0.888 & 0.887 & 0.886 & 0.884 \\ \hline 0.5 & 0.204 & 0.180 & 0.406 & 0.708 & 0.835 & 0.833 & 0.832 & 0.830 & 0.828 & 0.827 & 0.825 \\ \hline 0.6 & 0.243 & 0.189 & 0.408 & 0.609 & 0.787 & 0.784 & 0.782 & 0.780 & 0.777 & 0.775 & 0.773 \\ \hline 0.7 & 0.282 & 0.173 & 0.345 & 0.579 & 0.750 & 0.747 & 0.744 & 0.742 & 0.739 & 0.737 & 0.734 \\ \hline \end{tabular}} \caption{RMSE obtained by the Sparse SVQR model with different $\sigma$ and $\epsilon$ values for $\tau$=0.1. } \label{table2} \end{table} \begin{table}[h] {\footnotesize \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $\sigma \setminus\epsilon$ & 0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1 \\ \hline 0.1 & 0.043 & 0.425 & 0.911 & 1.103 & 1.103 & 1.103 & 1.103 & 1.103 & 1.103 & 1.103 & 1.103 \\ \hline 0.2 & 0.083 & 0.289 & 0.774 & 1.014 & 1.013 & 1.013 & 1.013 & 1.013 & 1.013 & 1.013 & 1.013 \\ \hline 0.3 & 0.124 & 0.203 & 0.647 & 0.934 & 0.932 & 0.932 & 0.932 & 0.932 & 0.932 & 0.932 & 0.932 \\ \hline 0.4 & 0.165 & 0.194 & 0.519 & 0.862 & 0.857 & 0.855 & 0.857 & 0.860 & 0.862 & 0.862 & 0.862 \\ \hline 0.5 & 0.204 & 0.180 & 0.416 & 0.792 & 0.798 & 0.792 & 0.790 & 0.791 & 0.795 & 0.803 & 0.805 \\ \hline 0.6 & 0.243 & 0.191 & 0.380 & 0.651 & 0.745 & 0.742 & 0.737 & 0.736 & 0.738 & 0.744 & 0.752 \\ \hline 0.7 & 0.282 & 0.217 & 0.399 & 0.552 & 0.696 & 0.698 & 0.698 & 0.694 & 0.693 & 0.697 & 0.704 \\ \hline \end{tabular}} \caption{RMSE obtained by the Park and Kim SVQR model with different $\sigma$ and $\epsilon$ values for $\tau$=0.1. 
} \label{table3} \end{table} \begin{table}[h] {\footnotesize \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $\sigma/\epsilon$ & 0.0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1 \\ \hline 0.1 & 0.059 & 0.055 & 0.046 & 0.064 & 0.077 & 0.104 & 0.114 & 0.142 & 0.176 & 0.209 & 0.207 \\ \hline 0.2 & 0.112 & 0.119 & 0.109 & 0.090 & 0.091 & 0.105 & 0.118 & 0.134 & 0.142 & 0.162 & 0.160 \\ \hline 0.3 & 0.166 & 0.169 & 0.171 & 0.159 & 0.141 & 0.136 & 0.137 & 0.151 & 0.163 & 0.175 & 0.189 \\ \hline 0.4 & 0.213 & 0.223 & 0.221 & 0.226 & 0.213 & 0.193 & 0.182 & 0.183 & 0.183 & 0.196 & 0.205 \\ \hline 0.5 & 0.263 & 0.279 & 0.259 & 0.263 & 0.259 & 0.257 & 0.250 & 0.235 & 0.232 & 0.227 & 0.234 \\ \hline 0.6 & 0.317 & 0.330 & 0.307 & 0.290 & 0.293 & 0.299 & 0.299 & 0.299 & 0.289 & 0.276 & 0.279 \\ \hline 0.7 & 0.372 & 0.372 & 0.347 & 0.337 & 0.339 & 0.345 & 0.344 & 0.339 & 0.346 & 0.344 & 0.335 \\ \hline \end{tabular}} \caption{RMSE obtained by the proposed $\epsilon$-SVQR model with different $\sigma$ and $\epsilon$ values for $\tau$=0.9. } \label{table11} \end{table} \begin{table}[h] {\footnotesize \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $\sigma \setminus\epsilon$ & 0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1 \\ \hline 0.1 & 0.059 & 0.243 & 0.237 & 0.232 & 0.226 & 0.221 & 0.217 & 0.213 & 0.209 & 0.207 & 0.204 \\ \hline 0.2 & 0.112 & 0.193 & 0.223 & 0.217 & 0.212 & 0.207 & 0.203 & 0.199 & 0.196 & 0.193 & 0.190 \\ \hline 0.3 & 0.166 & 0.214 & 0.264 & 0.258 & 0.251 & 0.246 & 0.240 & 0.235 & 0.231 & 0.226 & 0.223 \\ \hline 0.4 & 0.213 & 0.231 & 0.314 & 0.318 & 0.312 & 0.305 & 0.299 & 0.293 & 0.287 & 0.282 & 0.277 \\ \hline 0.5 & 0.263 & 0.258 & 0.359 & 0.373 & 0.366 & 0.359 & 0.352 & 0.345 & 0.339 & 0.333 & 0.327 \\ \hline 0.6 & 0.317 & 0.294 & 0.400 & 0.439 & 0.431 & 0.424 & 0.417 & 0.410 & 0.403 & 0.397 & 0.390 \\ \hline 0.7 & 0.372 & 0.353 & 0.427 & 0.490 & 0.493 & 0.485 & 0.478 & 0.471 & 0.464 & 0.457 & 0.450 \\ \hline \end{tabular}} \caption{RMSE obtained by the Sparse SVQR model with different $\sigma$ and $\epsilon$ values for $\tau$=0.9. } \label{table22} \end{table} \begin{table}[h] {\footnotesize \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $\sigma \setminus\epsilon$ & 0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1 \\ \hline 0.1 & 0.059 & 0.204 & 0.198 & 0.223 & 0.271 & 0.332 & 0.398 & 0.469 & 0.543 & 0.618 & 0.212 \\ \hline 0.2 & 0.112 & 0.190 & 0.198 & 0.237 & 0.294 & 0.361 & 0.433 & 0.509 & 0.586 & 0.665 & 0.171 \\ \hline 0.3 & 0.166 & 0.198 & 0.215 & 0.257 & 0.317 & 0.381 & 0.452 & 0.528 & 0.606 & 0.686 & 0.207 \\ \hline 0.4 & 0.213 & 0.214 & 0.250 & 0.270 & 0.328 & 0.399 & 0.466 & 0.542 & 0.622 & 0.704 & 0.234 \\ \hline 0.5 & 0.263 & 0.242 & 0.281 & 0.287 & 0.326 & 0.391 & 0.468 & 0.545 & 0.626 & 0.703 & 0.271 \\ \hline 0.6 & 0.317 & 0.276 & 0.342 & 0.323 & 0.342 & 0.393 & 0.463 & 0.546 & 0.623 & 0.698 & 0.321 \\ \hline 0.7 & 0.372 & 0.326 & 0.398 & 0.362 & 0.365 & 0.399 & 0.456 & 0.528 & 0.611 & 0.691 & 0.372 \\ \hline \end{tabular}} \caption{RMSE obtained by the Park and Kim SVQR model with different $\sigma$ and $\epsilon$ values for $\tau$=0.9. 
} \label{table33} \end{table} \begin{table}[] \centering \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline $\sigma$ & 0.100 & 0.200 & 0.300 & 0.400 & 0.500 & 0.600 & 0.700 \\ \hline SVQR & 0.059 & 0.112 & 0.166 & 0.213 & 0.263 & 0.317 & 0.372 \\ \hline Park and Kim SVQR & 0.059 & 0.112 & 0.166 & 0.213 & 0.242 & 0.276 & 0.326 \\ \hline Sparse SVQR & 0.059 & 0.112 & 0.166 & 0.213 & 0.258 & 0.294 & 0.353 \\ \hline $\epsilon$-SVQR & 0.046 & 0.090 & 0.136 & 0.182 & 0.227 & 0.276 & 0.335 \\ \hline \end{tabular} \caption{Minimum RMSE obtained by the different SVQR models with different values of $\sigma$ for $\tau=0.9$.} \label{table4} \end{table} \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & RMSE & MAE & TheilU & (C,s,$\epsilon$) \\ \hline \multicolumn{5}{|l|}{ $\tau$=0.1 } \\ \hline SVQR & 0.2002$\pm$0.0166 & 0.1609$\pm$0.0152 & 0.2244$\pm$0.0224 & (4,1,0) \\ \hline Sparse SVQR & 0.2052$\pm$0.0195 & 0.1683$\pm$0.0160 & 0.2247$\pm$0.0244 & (4,1,0.1) \\ \hline $\epsilon$-SVQR & 0.1483$\pm$0.0091 & 0.1242$\pm$0.0075 & 0.1337$\pm$0.0087 & (4,1,5) \\ \hline \multicolumn{5}{|l|}{ $\tau$=0.3 } \\ \hline SVQR & 0.3545$\pm$0.0113 & 0.2869$\pm$0.0159 & 0.3304$\pm$0.0132 & (4,1,0) \\ \hline Sparse SVQR & 0.2881$\pm$0.0283 & 0.2400$\pm$0.0215 & 0.2787$\pm$0.0286 & (4,1,0.4) \\ \hline $\epsilon$-SVQR & 0.2641$\pm$0.0147 & 0.2181$\pm$0.0115 & 0.2482$\pm$0.0129 & (4,1,1.5) \\ \hline \multicolumn{5}{|l|}{ $\tau$=0.7 } \\ \hline SVQR & 0.7864$\pm$0.0180 & 0.6912$\pm$0.0130 & 0.7665$\pm$0.0239 & (4,1,0) \\ \hline Sparse SVQR & 0.7051$\pm$0.0128 & 0.6149$\pm$0.0100 & 0.6827$\pm$0.0207 & (4,1,0.4) \\ \hline $\epsilon$-SVQR & 0.6556$\pm$0.0344 & 0.5345$\pm$0.0173 & 0.6289$\pm$0.0358 & (4,1,1.3) \\ \hline \multicolumn{5}{|l|}{ $\tau$=0.9 } \\ \hline SVQR & 1.3056$\pm$0.0953 & 1.1127$\pm$0.0604 & 1.2542$\pm$0.0967 & (4,1,0) \\ \hline Sparse SVQR & 0.9470$\pm$0.0278 & 0.8176$\pm$0.0204 & 0.9016$\pm$0.0326 & (4,1,0.4) \\ \hline $\epsilon$-SVQR & 0.9222$\pm$0.0422 & 0.8123$\pm$0.0260 & 0.8855$\pm$0.0445 & (4,1,4) \\ \hline \end{tabular} \caption{Comparison of the SVQR, Sparse SVQR and proposed $\epsilon$-SVQR models on the AD2 artificial dataset} \label{table5} \end{table} \begin{figure} \centering \includegraphics[width=0.8\linewidth]{./sigmavseps1} \caption{ Plot of the optimal $\epsilon$ values corresponding to the minimum RMSE obtained by the proposed $\epsilon$-SVQR model as a function of $\sigma$ for $\tau =0.1$. } \label{sigmavseps11} \end{figure} We have also compared the performance of the Sparse SVQR model with the proposed $\epsilon$-SVQR model on the artificial dataset AD2 with different evaluation criteria. The AD2 artificial dataset contains asymmetric noise from $\chi^2(3)$. Table \ref{table5} shows the optimal performance of the standard SVQR, Sparse SVQR and proposed $\epsilon$-SVQR models using different evaluation criteria for different $\tau$ values, along with the tuned values of the parameters. It can be observed that the Sparse SVQR and the proposed $\epsilon$-SVQR models obtain better generalization ability than the standard SVQR model. It means that the use of the $\epsilon$-insensitive zone in the SVQR model helps it achieve better generalization ability. Further, it can also be observed that the proposed $\epsilon$-SVQR model has better generalization ability than the Sparse SVQR model. \subsection{UCI Datasets} We have also considered UCI datasets to show the efficacy of the proposed $\epsilon$-SVQR model. For this, we have downloaded the Servo (167$\times$5), Boston Housing (506$\times$14) and Triazines (186$\times$61) datasets from the UCI repository \cite{UCIbenchmark}.
We used $80\%$ of the data points for training the regression model, whereas the remaining $20\%$ of the data points were used for testing the regression estimate. Since the actual true quantile functions for these UCI datasets are unknown, we have computed the Error $E_{\tau}$ by computing the coverage probability described in Section \ref{Perform_criteria} in 100 trials for the evaluation of the performance of the SVQR models. It is noteworthy that the standard SVQR model is equivalent to the proposed $\epsilon$-SVQR model with $\epsilon$ ~=~0. Table \ref{servo_esvqr} lists the Error $E_{\tau}$ obtained by the proposed $\epsilon$-SVQR model with different $\epsilon$ values for different $\tau$ values on the Servo dataset. It can be observed that there exist several choices of non-zero values of $\epsilon$ for which the proposed $\epsilon$-SVQR model outperforms the existing standard SVQR model. This is well depicted in Figure (\ref{fig:servoeps}). It can be seen that the proposed $\epsilon$-SVQR model is a better substitute for the SVQR model. Figure (\ref{servo_esvqr_spar}) shows the plot of the sparsity obtained by the $\epsilon$-SVQR model against different $\epsilon$ values for different quantiles. It can be observed that, irrespective of the value of $\epsilon$, the proposed $\epsilon$-SVQR model can obtain a sparse solution, and the sparsity increases with the $\epsilon$ value. Table \ref{servo_spasevqr} lists the Error $E_{\tau}$ obtained by the Sparse SVQR model with different $\epsilon$ values for different $\tau$ values on the Servo dataset. The Sparse SVQR model can also obtain an improvement over the SVQR model by tuning the $\epsilon$ value, but only for a few $\tau$ values. For $\tau$ = 0.1, 0.2, 0.3, the Sparse SVQR model hardly improves over the SVQR model. The possible reason behind this fact is that the Sparse SVQR model fails to control the effective width of the $\epsilon$-insensitive zone for lower values of $\tau$. Figure (\ref{servo11}) compares the Sparse SVQR model and the proposed $\epsilon$-SVQR model in terms of the percentage of improvement in Error obtained by these models over the SVQR model. It can be observed that the proposed $\epsilon$-SVQR model is always a better choice than the Sparse SVQR model. Tables \ref{boston-hosuing} and \ref{traizines} list the performance of the proposed $\epsilon$-SVQR model with different $\epsilon$ values in the estimation of different quantiles for the Boston Housing (506$\times$14) and Triazines (186$\times$61) datasets, respectively. Similar kinds of observations can also be drawn from these tables.
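The evaluation used for the UCI datasets can be summarized in a few lines; the following is an illustrative Python sketch with names of our own choosing.
\begin{verbatim}
import numpy as np

def coverage_error(y_true, f_tau_pred, tau):
    # p_tau = P(y_i <= f_tau(x_i)) estimated on the test split; E_tau = |p_tau - tau|
    p_tau = np.mean(y_true <= f_tau_pred)
    return abs(p_tau - tau)

def sparsity(coef, tol=1e-6):
    # fraction of training points whose dual coefficient alpha_i - beta_i is
    # (numerically) zero, i.e. points ignored by the estimated regressor
    coef = np.asarray(coef)
    return np.mean(np.abs(coef) <= tol)
\end{verbatim}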
\begin{table}[] \centering {\fontsize{10}{10} \selectfont \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline $\epsilon$/$\tau$ & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 \\ \hline 0 & 0.040 & 0.055 & 0.067 & 0.071 & 0.078 & 0.084 & 0.073 & 0.069 \\ \hline 0.05 & 0.041 & 0.053 & 0.065 & 0.069 & 0.081 & 0.085 & 0.074 & 0.067 \\ \hline 0.10 & 0.037 & 0.054 & 0.064 & 0.071 & 0.081 & 0.091 & 0.080 & 0.069 \\ \hline 0.15 & 0.038 & 0.056 & 0.060 & 0.074 & 0.080 & 0.092 & 0.087 & 0.068 \\ \hline 0.20 & 0.039 & 0.056 & 0.059 & 0.070 & 0.083 & 0.093 & 0.084 & 0.067 \\ \hline 0.25 & 0.037 & 0.056 & 0.060 & 0.069 & 0.083 & 0.093 & 0.085 & 0.065 \\ \hline 0.30 & 0.039 & 0.069 & 0.059 & 0.066 & 0.081 & 0.088 & 0.082 & 0.066 \\ \hline 0.35 & 0.042 & 0.064 & 0.062 & 0.067 & 0.079 & 0.079 & 0.080 & 0.065 \\ \hline 0.40 & 0.045 & 0.060 & 0.065 & 0.070 & 0.076 & 0.065 & 0.075 & 0.062 \\ \hline 0.45 & 0.044 & 0.054 & 0.064 & 0.068 & 0.075 & 0.062 & 0.069 & 0.061 \\ \hline 0.50 & 0.043 & 0.054 & 0.064 & 0.065 & 0.072 & 0.066 & 0.063 & 0.059 \\ \hline 0.55 & 0.044 & 0.059 & 0.069 & 0.064 & 0.069 & 0.067 & 0.056 & 0.058 \\ \hline 0.60 & 0.043 & 0.064 & 0.072 & 0.064 & 0.069 & 0.070 & 0.058 & 0.059 \\ \hline 0.65 & 0.043 & 0.071 & 0.071 & 0.064 & 0.071 & 0.074 & 0.057 & 0.059 \\ \hline 0.70 & 0.044 & 0.074 & 0.067 & 0.068 & 0.069 & 0.074 & 0.059 & 0.058 \\ \hline 0.75 & 0.044 & 0.072 & 0.061 & 0.069 & 0.077 & 0.081 & 0.070 & 0.055 \\ \hline 0.8 & 0.043 & 0.070 & 0.062 & 0.073 & 0.093 & 0.089 & 0.080 & 0.055 \\ \hline 0.85 & 0.040 & 0.072 & 0.068 & 0.077 & 0.103 & 0.094 & 0.083 & 0.056 \\ \hline 0.90 & 0.040 & 0.072 & 0.073 & 0.084 & 0.109 & 0.101 & 0.084 & 0.056 \\ \hline 0.95 & 0.041 & 0.076 & 0.073 & 0.098 & 0.119 & 0.106 & 0.089 & 0.055 \\ \hline 1.00 & 0.040 & 0.074 & 0.073 & 0.110 & 0.131 & 0.119 & 0.096 & 0.055 \\ \hline \end{tabular}} \caption{Error obtained by the proposed $\epsilon$-SVQR model with different $\epsilon$ values for different $\tau$ values on Servo dataset } \label{servo_esvqr} \end{table} \begin{figure} \centering \includegraphics[width = 6.0in,height=4.0in]{servoeps} \caption{Error obtained by proposed $\epsilon$-SVQR model on Servo dataset with different $\epsilon$ values for $\tau = 0.4$ , $\tau=0.6$ ,$\tau=0.7$ and $\tau=0.8$ respectively. 
} \label{fig:servoeps} \end{figure} \begin{figure} \centering \includegraphics[width = 4.5in,height=2.5in]{sparsity_servo1.png} \caption{Sparsity obtained by proposed $\epsilon$-SVQR model on Servo dataset with different $\epsilon$ values for different $\tau$ values.} \label{servo_esvqr_spar} \end{figure} \begin{table}[] \centering {\fontsize{10}{10} \selectfont \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline $\epsilon$/$\tau$ & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 \\ \hline 0 & 0.040 & 0.055 & 0.067 & 0.071 & 0.078 & 0.084 & 0.073 & 0.069 \\ \hline 0.05 & 0.051 & 0.069 & 0.070 & 0.077 & 0.081 & 0.087 & 0.073 & 0.059 \\ \hline 0.1 & 0.056 & 0.089 & 0.076 & 0.079 & 0.083 & 0.083 & 0.063 & 0.057 \\ \hline 0.15 & 0.058 & 0.112 & 0.101 & 0.078 & 0.081 & 0.071 & 0.061 & 0.056 \\ \hline 0.2 & 0.059 & 0.115 & 0.124 & 0.081 & 0.076 & 0.060 & 0.065 & 0.055 \\ \hline 0.25 & 0.057 & 0.119 & 0.129 & 0.084 & 0.072 & 0.077 & 0.083 & 0.055 \\ \hline 0.3 & 0.055 & 0.120 & 0.145 & 0.073 & 0.069 & 0.090 & 0.096 & 0.055 \\ \hline 0.35 & 0.051 & 0.118 & 0.147 & 0.075 & 0.069 & 0.101 & 0.108 & 0.056 \\ \hline 0.4 & 0.051 & 0.117 & 0.142 & 0.071 & 0.093 & 0.122 & 0.111 & 0.058 \\ \hline 0.45 & 0.052 & 0.114 & 0.133 & 0.066 & 0.109 & 0.151 & 0.115 & 0.056 \\ \hline 0.5 & 0.090 & 0.111 & 0.115 & 0.067 & 0.131 & 0.168 & 0.117 & 0.056 \\ \hline 0.55 & 0.095 & 0.104 & 0.096 & 0.076 & 0.153 & 0.178 & 0.118 & 0.056 \\ \hline 0.6 & 0.100 & 0.098 & 0.092 & 0.094 & 0.176 & 0.191 & 0.118 & 0.058 \\ \hline 0.65 & 0.100 & 0.092 & 0.077 & 0.114 & 0.196 & 0.196 & 0.118 & 0.060 \\ \hline 0.7 & 0.100 & 0.090 & 0.070 & 0.129 & 0.204 & 0.199 & 0.119 & 0.069 \\ \hline 0.75 & 0.100 & 0.087 & 0.067 & 0.158 & 0.214 & 0.201 & 0.121 & 0.071 \\ \hline 0.8 & 0.100 & 0.081 & 0.072 & 0.180 & 0.226 & 0.206 & 0.123 & 0.071 \\ \hline 0.85 & 0.100 & 0.075 & 0.075 & 0.200 & 0.242 & 0.209 & 0.128 & 0.072 \\ \hline 0.9 & 0.100 & 0.071 & 0.086 & 0.217 & 0.252 & 0.214 & 0.129 & 0.076 \\ \hline 0.95 & 0.100 & 0.070 & 0.110 & 0.230 & 0.258 & 0.214 & 0.129 & 0.079 \\ \hline 1 & 0.100 & 0.074 & 0.137 & 0.244 & 0.265 & 0.215 & 0.129 & 0.081 \\ \hline \end{tabular}} \caption{Error obtained by the proposed Sparse SVQR model with different $\epsilon$ for different $\tau$ values on Servo dataset } \label{servo_spasevqr} \end{table} \begin{figure} \centering \includegraphics[width=0.8\linewidth]{./improvement_servo} \caption{ Percentage of the improvement in RMSE obtained by proposed $\epsilon$-SVQR and Sparse SVQR model over standard SVQR model for different value of $\tau$. 
} \label{servo11} \end{figure} \begin{table}[] \centering {\fontsize{10}{10} \selectfont \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline $\epsilon$/$\tau$ & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 \\ \hline 0 & 0.040 & 0.055 & 0.067 & 0.071 & 0.078 & 0.084 & 0.073 & 0.069 \\ \hline 0.05 & 0.043 & 0.068 & 0.060 & 0.070 & 0.083 & 0.093 & 0.085 & 0.067 \\ \hline 0.1 & 0.038 & 0.067 & 0.065 & 0.071 & 0.076 & 0.064 & 0.065 & 0.059 \\ \hline 0.15 & 0.036 & 0.076 & 0.064 & 0.062 & 0.069 & 0.071 & 0.060 & 0.055 \\ \hline 0.2 & 0.044 & 0.060 & 0.072 & 0.076 & 0.093 & 0.093 & 0.090 & 0.055 \\ \hline 0.25 & 0.068 & 0.060 & 0.091 & 0.117 & 0.131 & 0.128 & 0.109 & 0.055 \\ \hline 0.3 & 0.116 & 0.120 & 0.160 & 0.173 & 0.176 & 0.161 & 0.112 & 0.058 \\ \hline 0.35 & 0.196 & 0.203 & 0.212 & 0.219 & 0.204 & 0.185 & 0.116 & 0.056 \\ \hline 0.4 & 0.055 & 0.289 & 0.258 & 0.252 & 0.226 & 0.198 & 0.118 & 0.056 \\ \hline 0.45 & 0.068 & 0.327 & 0.296 & 0.287 & 0.252 & 0.201 & 0.118 & 0.056 \\ \hline 0.5 & 0.095 & 0.352 & 0.330 & 0.303 & 0.265 & 0.206 & 0.119 & 0.056 \\ \hline 0.55 & 0.100 & 0.386 & 0.374 & 0.334 & 0.275 & 0.212 & 0.123 & 0.056 \\ \hline 0.6 & 0.100 & 0.413 & 0.396 & 0.344 & 0.290 & 0.215 & 0.129 & 0.056 \\ \hline 0.65 & 0.100 & 0.438 & 0.418 & 0.361 & 0.298 & 0.216 & 0.129 & 0.056 \\ \hline 0.7 & 0.100 & 0.389 & 0.437 & 0.372 & 0.303 & 0.216 & 0.129 & 0.056 \\ \hline 0.75 & 0.100 & 0.314 & 0.447 & 0.384 & 0.309 & 0.216 & 0.129 & 0.056 \\ \hline 0.8 & 0.100 & 0.207 & 0.457 & 0.396 & 0.312 & 0.216 & 0.129 & 0.056 \\ \hline 0.85 & 0.100 & 0.064 & 0.461 & 0.396 & 0.316 & 0.219 & 0.129 & 0.056 \\ \hline 0.9 & 0.100 & 0.070 & 0.461 & 0.397 & 0.316 & 0.220 & 0.129 & 0.056 \\ \hline 0.95 & 0.100 & 0.108 & 0.442 & 0.400 & 0.316 & 0.223 & 0.129 & 0.056 \\ \hline 1 & 0.100 & 0.140 & 0.420 & 0.402 & 0.316 & 0.221 & 0.129 & 0.056 \\ \hline \end{tabular}} \caption{Error obtained by the proposed Park and Kim SVQR model with different $\epsilon$ for different $\tau$ values on Servo dataset } \label{servo_spasevqr} \end{table} \begin{table}[] \centering \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline $\tau$ & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 \\ \hline SVQR & 0.040 & 0.055 & 0.067 & 0.071 & 0.078 & 0.084 & 0.073 & 0.069 \\ \hline $\epsilon$-SVQR & 0.037 & 0.053 & 0.059 & 0.064 & 0.069 & 0.062 & 0.056 & 0.055 \\ \hline Sparse SVQR & 0.040 & 0.055 & 0.067 & 0.066 & 0.069 & 0.060 & 0.061 & 0.055 \\ \hline Park and Kim SVQR & 0.036 & 0.055 & 0.060 & 0.062 & 0.069 & 0.064 & 0.060 & 0.055 \\ \hline \end{tabular} \caption{Minimum RMSE obtained by different SVQR model for different $\tau$ values.} \end{table} \begin{table}[] \begin{tabular}{|l|l|l|l|l|} \hline & & Error & Sparsity & CPU time (s) \\ \hline & $\epsilon$=0 & 0.0287$~\pm~$0.0223 & 0.00 & 12.44 \\ \cline{2-5} $\tau$=0.1 & $\epsilon$=0.8 & 0.0223$~\pm~$0.0196 & 2.20 & 13.65 \\ \cline{2-5} & $\epsilon$=0.9 & 0.0225$~\pm~$0.0190 & 2.51 & 13.39 \\ \cline{2-5} & $\epsilon$=1 & 0.0226$~\pm~$0.0185 & 2.89 & 13.93 \\ \hline & $\epsilon$=0 & 0.0447$~\pm~$0.0363 & 0.00 & 13.02 \\ \cline{2-5} $\tau$=0.5 & $\epsilon$= 0.5 & 0.0435$~\pm~$0.0360 & 5.79 & 13.25 \\ \cline{2-5} & $\epsilon$ =0.6 & 0.0433$~\pm~$0.0359 & 7.14 & 13.94 \\ \cline{2-5} & $\epsilon$= 0.7 & 0.0436$~\pm~$0.0366 & 4.54 & 13.10 \\ \hline & $\epsilon$=0 & 0.0393$~\pm~$0.0278 & 0.00 & 14.60 \\ \cline{2-5} & $\epsilon$=3 & 0.0315$~\pm~$0.0227 & 12.34 & 14.08 \\ \cline{2-5} $\tau$=0.9 & $\epsilon$= 4 & 0.0317$~\pm~$0.0209 & 16.84 & 14.42 \\ \cline{2-5} & $\epsilon$=5 & 0.0286$~\pm~$0.0198 & 21.17 & 
15.07 \\ \hline \end{tabular} \caption{Performance of the proposed $\epsilon$-SVQR model on the Boston Housing dataset for different values of $\tau$} \label{boston-hosuing} \end{table} \begin{table}[] \begin{tabular}{|l|l|l|l|l|} \hline & & Error & Sparsity & CPU time (s) \\ \hline & $\epsilon$=0 & 0.0481$~\pm~$0.0430 & 0.00 & 2.54 \\\cline{2-5} $\tau$=0.1 & $\epsilon$=0.2 & 0.0450$~\pm~$0.0351 & 29.55 & 2.63 \\ \cline{2-5} & $\epsilon$=0.3 & 0.0433$~\pm~$ 0.0280 & 48.64 & 2.55 \\ \hline $\tau$=0.3 & $\epsilon$=0 & 0.0722$~\pm~$0.0518 & 0.00 & 2.59 \\ \cline{2-5} & $\epsilon$=0.1 & 0.0679$~\pm~$0.0473 & 27.68 & 2.51 \\ \hline & $\epsilon$=0 & 0.0699$~\pm~$0.0602 & 0.00 & 2.49 \\ \cline{2-5} $\tau$=0.7 & $\epsilon$=0.1 & 0.0623$~\pm~$0.0479 & 39.41 & 2.54 \\ \cline{2-5} & $\epsilon$=0.2 & 0.0598$~\pm~$0.0494 & 62.38 & 2.43 \\ \hline $\tau$=0.8 & $\epsilon$=0 & 0.0549$~\pm~$0.0459 & 0.00 & 2.56 \\ \cline{2-5} & $\epsilon$=0.1 & 0.0537$~\pm~$0.0407 & 33.61 & 2.57 \\ \hline \end{tabular} \caption{Performance of the proposed $\epsilon$-SVQR model on the Triazines dataset for different values of $\tau$} \label{traizines} \end{table} \section{Conclusion} This paper reviews the development of Support Vector Quantile Regression (SVQR) models along the lines of the popular $\epsilon$-Support Vector Regression model and finds that the existing pinball loss functions fail to incorporate the $\epsilon$-insensitive zone in the true sense. Further, this paper proposes a novel asymmetric $\epsilon$-insensitive pinball loss function for measuring the empirical risk in the support vector quantile regression model. The proposed asymmetric $\epsilon$-insensitive pinball loss function divides the fixed-width $\epsilon$-insensitive zone using the $\tau$ value and thus presents a suitable asymmetric $\epsilon$-insensitive zone for each $\tau$ value. The resulting model, termed the `$\epsilon$-Support Vector Quantile Regression' ($\epsilon$-SVQR) model, ignores data points which lie inside the asymmetric $\epsilon$-insensitive zone, which makes the proposed model a sparse regression model. The optimal choice of the value of $\epsilon$ in the proposed $\epsilon$-SVQR model depends upon the variance present in the responses of the training data points. The $\epsilon$-SVQR model improves the prediction of the existing SVQR model significantly and also enjoys sparsity. The detailed experiments carried out on various artificial and real-world UCI datasets using various evaluation criteria show that the proposed $\epsilon$-SVQR model has better generalization ability than other SVQR models. The choice of the value of $\epsilon$ is crucial in the proposed $\epsilon$-SVQR model. Therefore, we would like to develop techniques to obtain the optimal choice of $\epsilon$ for a given training set. \section*{Acknowledgment} We would like to acknowledge the Ministry of Electronics and Information Technology, Government of India, which has funded this work under the Visvesvaraya PhD Scheme for Electronics and IT, Order No. Phd-MLA/4(42)/2015-16. \section* {Conflict of Interest} The authors hereby declare that they do not have any conflict of interest regarding the content of this manuscript.
\section{Control Approach} \label{sec:strategy} Section \ref{background_bf} extends \cite{lindemann2018control} to single-agent systems with discontinuous inputs. Based on this, Section \ref{sec:controller} proposes a decentralized feedback control law for multi-agent systems. Sections \ref{background_bf} and \ref{sec:controller} assume that the control barrier function satisfies some conditions that account for $\phi_k$. These conditions are addressed in Section \ref{sec:construction_rules}. \subsection{Control Barrier Functions for Single-Agent Systems} \label{background_bf} For now, assume a single-agent system ($M=1$) given by \begin{align}\label{eq:simplified_dynamics} \dot{\boldsymbol{x}}=f(\boldsymbol{x},t)+g(\boldsymbol{x},t)\boldsymbol{u}(\boldsymbol{x},t)+c(\boldsymbol{x},t) \end{align} where $\boldsymbol{u}(\boldsymbol{x},t)$ may be a discontinuous function, while $f:\mathbb{R}^n\times\mathbb{R}_{\ge 0}\to\mathbb{R}^n$, $g:\mathbb{R}^n\times\mathbb{R}_{\ge 0}\to\mathbb{R}^{n\times m}$, and $c:\mathbb{R}^n\times\mathbb{R}_{\ge 0}\to\mathbb{R}^n$ are sufficiently regular, and where again there exists $C\ge 0$ such that $\|c(\boldsymbol{x},t)\|\le C$ for all $(\boldsymbol{x},t)\in\mathbb{R}^n\times\mathbb{R}_{\ge 0}$. This section does not require Assumption \ref{ass1}. Assume that \eqref{eq:simplified_dynamics} is subject to a formula $\phi$ of the form \eqref{eq:phi_class}. Consider a function $\mathfrak{b}:\mathbb{R}^n\times[s_j,s_{j+1})\to\mathbb{R}$ that is continuously differentiable on $\mathbb{R}^n\times(s_j,s_{j+1})$ and let $\boldsymbol{x}:[t_j,t_{j+1}]\to\mathbb{R}^n$ be a Filippov solution to \eqref{eq:simplified_dynamics} under $\boldsymbol{u}(\boldsymbol{x},t)$ and with $t_j=s_j$. We distinguish between $t_{j+1}$ and $s_{j+1}$ since we want to ensure closed-loop properties over $[s_j,s_{j+1}]$, while $\boldsymbol{x}(t)$ may only be defined for $t_{j+1}<s_{j+1}$. Also, define $ \mathfrak{C}(t):=\{\boldsymbol{x}\in \mathbb{R}^n | \mathfrak{b}(\boldsymbol{x},t)\ge 0\}$ and consider an open set $\mathfrak{D}\subseteq\mathbb{R}^n$ such that $\mathfrak{D}\supset\mathfrak{C}(t)$ for all $t\in[s_j,s_{j+1})$. \begin{definition}[Control Barrier Function]\label{cCBF} The function $\mathfrak{b}:\mathbb{R}^n\times[s_j,s_{j+1})\to \mathbb{R}$ is a candidate control barrier function (cCBF) for $[s_j,s_{j+1})$ if, for each $\boldsymbol{x}_{0}\in\mathfrak{C}(s_j)$, there exists an absolutely continuous function $\boldsymbol{x}:[s_j,s_{j+1})\to\mathbb{R}^n$ with $\boldsymbol{x}(s_j):=\boldsymbol{x}_0$ such that $\boldsymbol{x}(t)\in\mathfrak{C}(t)$ for all $t\in [s_j,s_{j+1})$. A cCBF $\mathfrak{b}(\boldsymbol{x},t)$ for $[s_j,s_{j+1})$ is a valid control barrier function (vCBF) for $[s_j,s_{j+1})$ and for \eqref{eq:simplified_dynamics} under $\boldsymbol{u}(\boldsymbol{x},t)$ if, for each function $c:\mathbb{R}^n\times\mathbb{R}_{\ge 0}\to\mathbb{R}^n$ with $\|c(\boldsymbol{x},t)\|\le C$, $\boldsymbol{x}_0\in\mathfrak{C}(t_j)$ implies, for each Filippov solution $\boldsymbol{x}:[t_j,t_{j+1}]\to\mathbb{R}^n$ to \eqref{eq:simplified_dynamics} under $\boldsymbol{u}(\boldsymbol{x},t)$ with $t_j=s_j$ and $\boldsymbol{x}(t_j)=\boldsymbol{x}_0$, $\boldsymbol{x}(t)\in\mathfrak{C}(t)$ for all $t\in[t_j,\min(t_{j+1},s_{j+1}))$. \end{definition} \begin{theorem}\label{theorem:disc_barrier} Assume that $\mathfrak{b}(\boldsymbol{x},t)$ is a cCBF for $[s_j,s_{j+1})$.
If $\boldsymbol{u}(\boldsymbol{x},t)$ is locally bounded and measurable and there is an extended locally Lipschitz continuous class $\mathcal{K}$ function $\alpha$ s.t. \begin{align}\label{eq:barrier_ineq} \min \mathcal{L}_{F[f+g\boldsymbol{u}]}\mathfrak{b}(\boldsymbol{x},t) \ge -\alpha(\mathfrak{b}(\boldsymbol{x},t))+\big\|\frac{\partial \mathfrak{b}(\boldsymbol{x},t)}{\partial \boldsymbol{x}}\big\|C \end{align} for all $(\boldsymbol{x},t)\in \mathfrak{D}\times (s_j,s_{j+1})$, then $\mathfrak{b}(\boldsymbol{x},t)$ is a vCBF for $[s_j,s_{j+1})$ and for \eqref{eq:simplified_dynamics} under $\boldsymbol{u}(\boldsymbol{x},t)$. \begin{proof} Note that $-\frac{\partial \mathfrak{b}(\boldsymbol{x},t)}{\partial \boldsymbol{x}}c(\boldsymbol{x},t)\le |{\frac{\partial \mathfrak{b}(\boldsymbol{x},t)}{\partial \boldsymbol{x}}}c(\boldsymbol{x},t)|\le \|{\frac{\partial \mathfrak{b}(\boldsymbol{x},t)}{\partial \boldsymbol{x}}}\|\|c(\boldsymbol{x},t)\|\le \|\frac{\partial \mathfrak{b}(\boldsymbol{x},t)}{\partial \boldsymbol{x}}\|C$ so that \eqref{eq:barrier_ineq} implies \begin{align}\label{eq:theorem1_ineq} \min \mathcal{L}_{F[f+g\boldsymbol{u}]}\mathfrak{b}(\boldsymbol{x},t) \oplus {\frac{\partial \mathfrak{b}(\boldsymbol{x},t)}{\partial \boldsymbol{x}}}c(\boldsymbol{x},t)\ge -\alpha(\mathfrak{b}(\boldsymbol{x},t)) \end{align} for each function $c:\mathbb{R}^n\times\mathbb{R}_{\ge 0}\to\mathbb{R}^n$ with $\|c(\boldsymbol{x},t)\|\le C$. Assume that $\boldsymbol{x}(t_j)\in\mathfrak{C}(t_j)$ and note that $\dot{\mathfrak{b}}(\boldsymbol{x}(t),t)\in\mathcal{L}_{F[f+g\boldsymbol{u}+c]}\mathfrak{b}(\boldsymbol{x}(t),t)$ for almost all $t\in(t_j,\min(t_{j+1},s_{j+1}))$ and consequently also for almost all $t\in[t_j,\min(t_{j+1},s_{j+1}))$. Due to \eqref{eq:theorem1_ineq} it holds that $\min \mathcal{L}_{F[f+g\boldsymbol{u}]}\mathfrak{b}(\boldsymbol{x}(t),t) + {\frac{\partial \mathfrak{b}(\boldsymbol{x}(t),t)}{\partial \boldsymbol{x}}}c(\boldsymbol{x}(t),t)\ge -\alpha(\mathfrak{b}(\boldsymbol{x}(t),t))$ and according to Lemma \ref{lemma_liederivate}, it holds that $\min \mathcal{L}_{F[f+g\boldsymbol{u}+c]}\mathfrak{b}(\boldsymbol{x}(t),t)\ge \min\{ \mathcal{L}_{F[f+g\boldsymbol{u}]}\mathfrak{b}(\boldsymbol{x}(t),t) \oplus \hat{\mathcal{L}}_{F[c]}\mathfrak{b}(\boldsymbol{x}(t),t)\}=\min \mathcal{L}_{F[f+g\boldsymbol{u}]}\mathfrak{b}(\boldsymbol{x}(t),t)\oplus{\frac{\partial \mathfrak{b}(\boldsymbol{x}(t),t)}{\partial \boldsymbol{x}}}c(\boldsymbol{x}(t),t)$ since $\hat{\mathcal{L}}_{F[c]}\mathfrak{b}(\boldsymbol{x}(t),t)=\{\frac{\partial \mathfrak{b}(\boldsymbol{x}(t),t)}{\partial \boldsymbol{x}}c(\boldsymbol{x}(t),t)\}$ (due to \cite[Thm.~1]{paden1987calculus} and since $\mathfrak{b}(\boldsymbol{x},t)$ is continuously differentiable). It then holds that $\dot{\mathfrak{b}}(\boldsymbol{x}(t),t)\ge \min \mathcal{L}_{F[f+g\boldsymbol{u}+c]}\mathfrak{b}(\boldsymbol{x}(t),t) \ge -\alpha(\mathfrak{b}(\boldsymbol{x}(t),t))$. By \cite[Lem.~2]{glotfelter2017nonsmooth} it follows that $\mathfrak{b}(\boldsymbol{x}(t),t)\ge 0$ for all $t\in[t_j,\min(t_{j+1},s_{j+1}))$. \end{proof} \end{theorem} In \cite{lindemann2018control}, a connection between $\mathfrak{b}(\boldsymbol{x},t)$ and the STL semantics of $\phi$ was established. In particular, conditions on $\mathfrak{b}(\boldsymbol{x},t)$ are imposed so that $\mathfrak{b}(\boldsymbol{x}(t),t)\ge 0$, i.e., $\boldsymbol{x}(t)\in\mathfrak{C}(t)$, for all $t\ge 0$ implies $\boldsymbol{x}\models \phi$. We shortly recall the main idea of \cite{lindemann2018control}. 
In order to use conjunctions in $\phi$, a smooth under-approximation of the min-operator is used. For a number of $\tilde{p}$ cCBF's $\mathfrak{b}_l(\boldsymbol{x},t)$, note that $\min_{l\in\{1,\hdots,\tilde{p}\}}\mathfrak{b}_l(\boldsymbol{x},t)\approx -\frac{1}{\eta}\ln\big(\sum_{l=1}^{\tilde{p}}\exp(-\eta \mathfrak{b}_l(\boldsymbol{x},t))\big)$ with $\eta>0$ and where the accuracy of this approximation increases as $\eta$ increases. In fact, it holds that $\lim_{\eta\to\infty}-\frac{1}{\eta}\ln\big(\sum_{l=1}^{\tilde{p}} \exp(-\eta\mathfrak{b}_l(\boldsymbol{x},t))\big)=\min_{l\in\{1,\hdots,\tilde{p}\}} \mathfrak{b}_l(\boldsymbol{x},t)$. Regardless of the choice of $\eta$, we have \begin{align}\label{eq:under_approx} -\frac{1}{\eta}\ln\big(\sum_{l=1}^{\tilde{p}} \exp(-\eta\mathfrak{b}_l(\boldsymbol{x},t))\big)\le \min_{l\in\{1,\hdots,\tilde{p}\}} \mathfrak{b}_l(\boldsymbol{x},t) \end{align} Now, $\mathfrak{b}(\boldsymbol{x},t)\ge 0$ implies $\mathfrak{b}_l(\boldsymbol{x},t)\ge 0$ for each $l\in\{1,\hdots,\tilde{p}\}$, i.e., the conjunction operator can be encoded. The conditions imposed on $\mathfrak{b}(\boldsymbol{x},t)$ are summarized in three steps (Steps A, B, and C). For negations on predicates $\neg \mu$ as in $\eqref{eq:psi_class}$, let the corresponding predicate function be $-h(\boldsymbol{x})$. Furthermore, let $h_1(\boldsymbol{x})$, $h_2(\boldsymbol{x})$, $h_3(\boldsymbol{x})$, and $h_4(\boldsymbol{x})$ correspond to the atomic propositions $\mu_1$, $\mu_2$, $\mu_3$, and $\mu_4$, respectively. \textbf{Step A)} Consider single temporal operators in \eqref{eq:phi_class} that \emph{do not} contain conjunctions, i.e., $G_{[a,b]}\mu_1$, $F_{[a,b]} \mu_1$, and $\mu_1 \until{a}{b} \mu_2$. For $G_{[a,b]}\mu_1$, select $\mathfrak{b}(\boldsymbol{x},t)$ so that $\mathfrak{b}(\boldsymbol{x},t^\prime)\le h_1(\boldsymbol{x})$ for all $t^\prime\in[a,b]$. For $F_{[a,b]} \mu_1$, select $\mathfrak{b}(\boldsymbol{x},t)$ so that $\mathfrak{b}(\boldsymbol{x},t^\prime)\le h_1(\boldsymbol{x})$ for some $t^\prime\in[a,b]$. For $\mu_1 \until{a}{b} \mu_2$, select $\mathfrak{b}(\boldsymbol{x},t):=-\frac{1}{\eta}\ln\big(\exp(-\eta\mathfrak{b}_1(\boldsymbol{x},t))+\exp(-\eta\mathfrak{b}_2(\boldsymbol{x},t))\big)$ so that $\mathfrak{b}_2(\boldsymbol{x},t^\prime)\le h_2(\boldsymbol{x})$ for some $t^\prime\in[a,b]$ and $\mathfrak{b}_1(\boldsymbol{x},t^{\prime\prime})\le h_1(\boldsymbol{x})$ for all $t^{\prime\prime}\in[a,t^\prime]$. \textbf{Step B)} Consider single temporal operators in \eqref{eq:phi_class} that \emph{do} contain conjunctions, i.e., $G_{[a,b]}\psi_1$, $F_{[a,b]} \psi_1$, and $\psi_1 \until{a}{b} \psi_2$ where $\psi_1$ and $\psi_2$ may contain a conjunction of predicates as in \eqref{eq:psi_class}. Assume, without loss of generality, $\psi_1:=\mu_1\wedge\mu_2$ and $\psi_2:=\mu_3\wedge\mu_4$. For $G_{[a,b]}\psi_1$, select $\mathfrak{b}(\boldsymbol{x},t):=-\frac{1}{\eta}\ln\big(\exp(-\eta\mathfrak{b}_1(\boldsymbol{x},t))+\exp(-\eta\mathfrak{b}_2(\boldsymbol{x},t))\big)$ so that $\mathfrak{b}_1(\boldsymbol{x},t^\prime)\le h_1(\boldsymbol{x})$ and $\mathfrak{b}_2(\boldsymbol{x},t^\prime)\le h_2(\boldsymbol{x})$ for all $t^\prime\in[a,b]$. For $F_{[a,b]}\psi_1$, select $\mathfrak{b}(\boldsymbol{x},t):=-\frac{1}{\eta}\ln\big(\exp(-\eta\mathfrak{b}_1(\boldsymbol{x},t))+\exp(-\eta\mathfrak{b}_2(\boldsymbol{x},t))\big)$ so that $\mathfrak{b}_1(\boldsymbol{x},t^\prime)\le h_1(\boldsymbol{x})$ and $\mathfrak{b}_2(\boldsymbol{x},t^\prime)\le h_2(\boldsymbol{x})$ for some $t^\prime\in[a,b]$. 
For $\psi_1 \until{a}{b} \psi_2$, select $\mathfrak{b}(\boldsymbol{x},t):=-\frac{1}{\eta}\ln\big(\sum_{l=1}^4\exp(-\eta\mathfrak{b}_l(\boldsymbol{x},t))\big)$ so that $\mathfrak{b}_3(\boldsymbol{x},t^\prime)\le h_3(\boldsymbol{x})$ and $\mathfrak{b}_4(\boldsymbol{x},t^\prime)\le h_4(\boldsymbol{x})$ for some $t^{\prime}\in[a,b]$ and $\mathfrak{b}_1(\boldsymbol{x},t^{\prime\prime})\le h_1(\boldsymbol{x})$ and $\mathfrak{b}_2(\boldsymbol{x},t^{\prime\prime})\le h_2(\boldsymbol{x})$ for all $t^{\prime\prime}\in[a,t^\prime]$. \textbf{Step C)} Consider conjunctions of temporal operators. For instance, consider $(G_{[a_1,b_1]}\psi_1) \wedge (F_{[a_2,b_2]} \psi_2) \wedge (\psi_3 \until{a_3}{b_3} \psi_4)$. Let $\mathfrak{b}(\boldsymbol{x},t):=-\frac{1}{\eta}\ln\big(\sum_{l=1}^3\exp(-\eta\mathfrak{b}_l(\boldsymbol{x},t))\big)$ where $\mathfrak{b}_1(\boldsymbol{x},t)$, $\mathfrak{b}_2(\boldsymbol{x},t)$, and $\mathfrak{b}_3(\boldsymbol{x},t)$ are associated with one temporal operator each and constructed as in Steps A and B. We integrate $\mathfrak{o}_l:\mathbb{R}_{\ge 0}\to\{0,1\}$ into $\mathfrak{b}(\boldsymbol{x},t):=-\frac{1}{\eta}\ln\big(\sum_{l=1}^p\mathfrak{o}_l(t)\exp(-\eta\mathfrak{b}_l(\boldsymbol{x},t))\big)$ to reduce conservatism as in \cite{lindemann2018control}; $p$ is the total number of functions $\mathfrak{b}_l(\boldsymbol{x},t)$ obtained from Steps A, B, and C and each $\mathfrak{b}_l(\boldsymbol{x},t)$ corresponds to either an always, eventually, or until operator with a corresponding time interval $[a_l,b_l]$. To reduce conservatism, we remove single functions $\mathfrak{b}_l(\boldsymbol{x},t)$ from $\mathfrak{b}(\boldsymbol{x},t)$ when the corresponding always, eventually, or until operator is satisfied. For each temporal operator, the associated $\mathfrak{b}_l(\boldsymbol{x},t)$ is removed at $t=b_l$, i.e., $\mathfrak{o}_l(t)=1$ if $t<b_l$ and $\mathfrak{o}_l(t):=0$ if $t\ge b_l$. The function $\mathfrak{b}(\boldsymbol{x},t)$ is now piecewise continuous in $t$. We denote the switching sequence by $\{s_0:=0,s_1,\hdots,s_q\}$ with $q\in\mathbb{N}$ as the total number of switches. This sequence is known due to knowledge of $[a_l,b_l]$. At time $t\ge s_j$ we have $s_{j+1}:= \text{argmin}_{b_l\in\{b_1,\hdots,b_p\}}\zeta(b_l,t)$ where $\zeta(b_l,t):=b_l-t$ if $b_l-t> 0$ and $\zeta(b_l,t):=\infty$ otherwise. To guarantee Filippov solutions $\boldsymbol{x}:[t_0,t_1]\to\mathbb{R}^n$ with $t_1\ge s_q$, we finally require that $\boldsymbol{x}(t)\in\mathfrak{C}(t)$ implies $\boldsymbol{x}(t)\in\mathfrak{D}'\subset\mathfrak{D}$ where $\mathfrak{D}'$ is some compact set. This requirement is not restrictive and can be achieved by adding a function $\mathfrak{b}_{p+1}(\boldsymbol{x},t)$ to $\mathfrak{b}(\boldsymbol{x},t):=-\frac{1}{\eta}\ln\big(\sum_{l=1}^{p+1}\mathfrak{o}_l(t)\exp(-\eta\mathfrak{b}_l(\boldsymbol{x},t))\big)$ where $\mathfrak{b}_{p+1}(\boldsymbol{x},t)=D-\|\boldsymbol{x}\|$ for a suitably selected $D\ge 0$, i.e., $D$ is such that $\mathfrak{D}':=\{\boldsymbol{x}\in\mathbb{R}^{n}|\|\boldsymbol{x}\|\le D\}\subset \mathfrak{D}$. \begin{corollary}\label{corollary_phi_sat} Consider $\phi$ of the form \eqref{eq:phi_class} and the system \eqref{eq:simplified_dynamics}. Let $\mathfrak{b}(\boldsymbol{x},t)$ satisfy the conditions in Steps A, B, and C for $\phi$ and be a cCBF for each time interval $[s_j,s_{j+1})$. 
If $\boldsymbol{u}(\boldsymbol{x},t)$ is locally bounded, measurable, and such that \eqref{eq:barrier_ineq} holds for all $(\boldsymbol{x},t)\in\mathfrak{D}\times(s_j,s_{j+1})$, then it follows that $\boldsymbol{x}\models \phi$ for each Filippov solution to \eqref{eq:simplified_dynamics} under $\boldsymbol{u}(\boldsymbol{x},t)$. \begin{proof} It holds that $\lim_{\tau\to s_j^-}\mathfrak{C}(\tau)\subseteq\mathfrak{C}(s_j)$ where $\lim_{\tau\to s_j^-}\mathfrak{C}(\tau)$ denotes the left-sided limit of $\mathfrak{C}(t)$ at $t=s_j$. It is hence sufficient to ensure forward invariance of $\mathfrak{C}(t)$ for each $[s_j,s_{j+1})$ separately. There exist Filippov solutions $\boldsymbol{x}:[t_0,t_1]\to\mathbb{R}^n$ to \eqref{eq:simplified_dynamics} from each $\boldsymbol{x}(t_0)\in\mathfrak{C}(t_0)$. Due to Theorem~\ref{theorem:disc_barrier}, it follows that $\boldsymbol{x}(t)\in\mathfrak{C}(t)$ for all $t\in[t_0,\min(t_1,s_1))$; $\boldsymbol{x}(t)\in\mathfrak{C}(t)$ implies $\boldsymbol{x}(t)\in\mathfrak{D}'\subset\mathfrak{D}$, i.e., $\boldsymbol{x}(t)$ remains in the compact set $\mathfrak{D}'$, which implies $t_1\ge s_1$ by \ree{\cite[Ch.~2.7]{filippov2013differential}}. The same reasoning can be applied for the intervals $[s_1,s_2)$, $\hdots$, $[s_{q-1},s_q)$. It follows that $\boldsymbol{x}\models\phi$. \end{proof} \end{corollary} \subsection{Control Barrier Functions for Multi-Agent Systems} \label{sec:controller} Consider again $M$ agents subject to \eqref{system_noise} and $K$ formulas $\phi_k$ of the form \eqref{eq:phi_class}. For $j_1,\hdots,j_{|{\mathcal{V}}_k|}\in{\mathcal{V}}_k$, let $\bar{\boldsymbol{x}}_k:=\begin{bmatrix} {\boldsymbol{x}_{j_1}}^T & \hdots & {\boldsymbol{x}_{j_{|{\mathcal{V}}_k|}}}^T \end{bmatrix}^T\in\mathbb{R}^{\bar{n}_k}$ and $\bar{g}_k(\bar{\boldsymbol{x}}_k,t):=\text{diag}({g_{j_1}(\boldsymbol{x}_{j_1},t)}, \hdots , {g_{j_{|\mathcal{V}_k|}}(\boldsymbol{x}_{j_{|\mathcal{V}_k|}},t)})$ where $\bar{n}_k:=n_{j_1}+\hdots+n_{j_{|{\mathcal{V}}_k|}}$. Let $\mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)$ be the cCBF corresponding to $\phi_k$ and accounting for Steps A, B, and C. Define $\mathfrak{C}_k(t):=\{\bar{\boldsymbol{x}}_k\in\mathbb{R}^{\bar{n}_k} | \mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)\ge 0\}$ and let $\{s_0^k:=0,s_1^k,\hdots,s_{q_k}^k\}$ be the switching sequence corresponding to $\mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)$ as discussed in the previous section. We next consider cases where ${\frac{\partial \mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)}{\partial \bar{\boldsymbol{x}}_k}}\bar{g}_k(\bar{\boldsymbol{x}}_k,t)= {\boldsymbol{0}}^T$ for $(\bar{\boldsymbol{x}}_k,t)\in \mathfrak{D}_k\times (s_j^k,s^k_{j+1})$ where $\mathfrak{D}_k$ is an open and bounded set such that $\mathfrak{D}_k\supset \mathfrak{C}_k(t)$ for all $t\ge 0$. These occurrences mean that $\mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)$, although possibly being a cCBF for $[s_j^k,s^k_{j+1})$, may not be a vCBF for $[s_j^k,s^k_{j+1})$ and for \eqref{system_noise} under any control law since \eqref{eq:theorem1_ineq} might fail to hold. Due to Assumption \ref{ass1}, it holds that ${\frac{\partial \mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)}{\partial \bar{\boldsymbol{x}}_k}}\bar{g}_k(\bar{\boldsymbol{x}}_k,t)= {\boldsymbol{0}}^T$ if and only if $\frac{\partial \mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)}{\partial \bar{\boldsymbol{x}}_k}=\boldsymbol{0}$.
\begin{assumption}\label{ass3} For an extended locally Lipschitz continuous class $\mathcal{K}$ function $\alpha_k$ and for $(\bar{\boldsymbol{x}}_k,t)\in\mathfrak{D}_k\times(s^k_j,s^k_{j+1})$ with $\frac{\partial \mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)}{\partial \bar{\boldsymbol{x}}_k}= \boldsymbol{0}$ it holds that $\frac{\partial \mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)}{\partial t}> -\alpha_k(\mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t))$. \end{assumption} \begin{theorem}\label{theorem1} Consider a multi-agent system consisting of $M$ agents that are subject to the dynamics in \eqref{system_noise} satisfying Assumption \ref{ass1} and $K$ formulas $\phi_k$ of the form \eqref{eq:phi_class} satisfying Assumption \ref{ass:com}. Assume further that each $\mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)$ accounts for the STL semantics of $\phi_k$ according to Steps A, B, and C, and that $\mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)$ is a cCBF for each time interval $[s^k_j,s^k_{j+1})$ satisfying Assumption \ref{ass3}. If each agent $i\in{\mathcal{V}}_k$ applies the control law $\boldsymbol{u}_i^*(\bar{\boldsymbol{x}}_k,t):=\boldsymbol{u}_i$ where $\boldsymbol{u}_i$ is given by \begin{small} \begin{subequations}\label{eq:qp_conv} \begin{align} &\hspace{-0.1cm}\min_{\boldsymbol{u}_i}\; \boldsymbol{u}_i^T\boldsymbol{u}_i\\ \begin{split} \hspace{-0.1cm}\text{s.t. }& \hspace{-0.1cm}\frac{\partial \mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)}{\partial \boldsymbol{x}_i}(f_i(\boldsymbol{x}_i,t)+g_i(\boldsymbol{x}_i,t)\boldsymbol{u}_i)\ge\|{\frac{\partial \mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)}{\partial \boldsymbol{x}_i}}\|\hat{n}C\\ & \hspace{-0.1cm}-\mathfrak{N}_i(\bar{\boldsymbol{x}}_k,t)\big(\frac{\partial \mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)}{\partial t} +\alpha_k(\mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t))\big),\label{eq:const_qp} \end{split} \end{align} \end{subequations} \end{small}where $\hat{n}:=\sqrt{\bar{n}_k\max(n_1,\hdots,n_M)}$ and where \begin{small} \begin{align*} \mathfrak{N}_i(\bar{\boldsymbol{x}}_k,t):= \begin{cases} \frac{\| \frac{\partial \mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)}{\partial \boldsymbol{x}_i}\|}{\sum_{d\in{\mathcal{V}}_k}\| \frac{\partial \mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)}{\partial \boldsymbol{x}_d}\|} &\hspace{-0.3cm}\text{if } \sum_{d\in{\mathcal{V}}_k}\| \frac{\partial \mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)}{\partial \boldsymbol{x}_d}\|\neq 0\\ 1 &\hspace{-0.3cm}\text{otherwise,} \end{cases} \end{align*} \end{small}then $\bar{\boldsymbol{x}}_k\models\phi_k$ for each Filippov solution to \eqref{system_noise}. \begin{proof} We only provide a sketch of the proof due to space limitations. It can be shown that $\boldsymbol{u}_i^*(\bar{\boldsymbol{x}}_k,t)$ is locally bounded and measurable so that Filippov solutions exist.
It can then be shown that \eqref{eq:const_qp} is always feasible and implies \begin{small} \begin{align}\label{eq:barrier_cond_phik} \begin{split} &\min \tilde{\mathcal{L}}_{F[\bar{f}_k+\bar{g}_k\bar{\boldsymbol{u}}_k^*]}\mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t) \ge\\ &\hspace{1cm} -\alpha_k(\mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t))+\big\|\frac{\partial \mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)}{\partial \bar{\boldsymbol{x}}_k}\big\|\sqrt{\bar{n}_k}C \end{split} \end{align} \end{small}where $\bar{f}_k(\bar{\boldsymbol{x}}_k,t):=\begin{bmatrix}{f_{j_1}(\boldsymbol{x}_{j_1},t)}^T & \hdots & {f_{j_{|\mathcal{V}_k|}}(\boldsymbol{x}_{j_{|\mathcal{V}_k|}},t)}^T\end{bmatrix}^T$ and $\bar{\boldsymbol{u}}_k^*(\bar{\boldsymbol{x}}_k,t):=\begin{bmatrix} {\boldsymbol{u}_{j_1}^*}^T & \hdots & {\boldsymbol{u}_{j_{|{\mathcal{V}}_k|}}^*}^T \end{bmatrix}^T$ for $j_1,\hdots,j_{|{\mathcal{V}}_k|}\in{\mathcal{V}}_k$ so that Corollary \ref{corollary_phi_sat} can be used (note that $\sqrt{\bar{n}_k}C$ is needed here instead of only $C$).\end{proof} \end{theorem} The control law $\boldsymbol{u}_i^*(\bar{\boldsymbol{x}}_k,t)$ is discontinuous at points $(\bar{\boldsymbol{x}}_k,t)$ that lie at the intersection of the closures of the regions defined by the following three cases: case 1 where $\sum_{d\in{\mathcal{V}}_k}\|\frac{\partial \mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)}{\partial \boldsymbol{x}_d}\|= 0$, case 2 where $\sum_{d\in{\mathcal{V}}_k}\|\frac{\partial \mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)}{\partial \boldsymbol{x}_d}\|\neq 0$ and $\frac{\partial \mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)}{\partial \boldsymbol{x}_i}=\boldsymbol{0}$, and case 3 where $\sum_{d\in{\mathcal{V}}_k}\|\frac{\partial \mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)}{\partial \boldsymbol{x}_d}\|\neq 0$ and $\frac{\partial \mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)}{\partial \boldsymbol{x}_i}\neq\boldsymbol{0}$. The load sharing function $\mathfrak{N}_i(\bar{\boldsymbol{x}}_k,t)$ distributes the work needed to satisfy \eqref{eq:barrier_cond_phik} among agents; \eqref{eq:qp_conv} is a computationally tractable convex quadratic program. Computation of $\boldsymbol{u}_i^*(\bar{\boldsymbol{x}}_k,t)$ is decentralized and information does not need to be sent from local sensors to a central control unit and back to the actuators of each agent. Information only needs to be sent from local sensors to the local controllers. \subsection{Explicit Control Barrier Function Construction} \label{sec:construction_rules} Since the construction procedure is the same for each $\phi_k$, we omit the index $k$ for readability reasons. First, define the function $\gamma_l:\mathbb{R}_{\ge 0}\to \mathbb{R}$ as $\gamma_l(t):=(\gamma_{l,0}-\gamma_{l,\infty})\exp(-\mathfrak{l}_lt)+\gamma_{l,\infty}$ where $\gamma_{l,0},\gamma_{l,\infty}\in \mathbb{R}$ and $\mathfrak{l}_l\in\mathbb{R}_{\ge 0}$. The function $\gamma_l(t)$ is associated with $h_l(\boldsymbol{x})$, which in turn corresponds to $\mu_l$. We require that $h^{\text{opt}}_l\ge 0$ where $h^{\text{opt}}_l:=\sup_{\boldsymbol{x}\in\mathbb{R}^n} h_l(\boldsymbol{x})$ ($\mu_l$ needs to be satisfiable) and aim to satisfy $\phi$ with robustness $r\in\mathbb{R}_{\ge 0}$.
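Before stating the construction steps, the following minimal Python sketch illustrates the role of $\gamma_l(t)$ as a time-varying lower bound on $h_l(\boldsymbol{x}(t))$. The parameter values are purely illustrative and are not the ones prescribed by the construction; the systematic choice of $\gamma_{l,0}$, $\gamma_{l,\infty}$, and $\mathfrak{l}_l$ is given in Steps 1 and 2 below.
\begin{verbatim}
import numpy as np

def gamma(t, gamma0, gamma_inf, ll):
    """Exponential bound gamma_l(t) = (gamma_{l,0} - gamma_{l,inf}) exp(-l t) + gamma_{l,inf}."""
    return (gamma0 - gamma_inf) * np.exp(-ll * t) + gamma_inf

# Illustrative values only (the systematic choice follows in Steps 1 and 2):
gamma0, gamma_inf, ll = -1.0, 0.6, 0.55   # gamma0 < h_l(x(0)), gamma_inf >= r
r = 0.5                                   # desired robustness

t = np.linspace(0.0, 10.0, 1001)
g = gamma(t, gamma0, gamma_inf, ll)

# b_l(x,t) = -gamma_l(t) + h_l(x) >= 0 enforces h_l(x(t)) >= gamma_l(t);
# since gamma_l(t) increases towards gamma_inf >= r, robustness r is
# enforced for all times after gamma_l(t) first exceeds r.
print("gamma_l exceeds r for t >= %.2f" % t[np.argmax(g >= r)])
\end{verbatim}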
The construction of $\mathfrak{b}(\boldsymbol{x},t)$ is discussed in two steps (Steps 1 and 2) and is based on the conditions given in Steps A, B, and C in Section \ref{background_bf} leading to a function \re{$\mathfrak{b}(\boldsymbol{x},t):=-\frac{1}{\eta}\ln\big(\sum_{l=1}^p\mathfrak{o}_l(t)\exp(-\eta\mathfrak{b}_l(\boldsymbol{x},t))\big)$} where each $\mathfrak{b}_l(\boldsymbol{x},t)$ is associated with either $F_{[a_l,b_l]}\mu_l$ or $G_{[a_l,b_l]}\mu_l$. Recall that each $\mu_{l_1} \until{a_l}{b_l} \mu_{l_2}$ is encoded as $F_{[b_l,b_l]}\mu_{l_1}\wedge G_{[a_l,b_l]}\mu_{l_2}$ as discussed in Steps A, B, and C. \textbf{Step 1)} Assume $p=1$ and consider $\phi:=G_{[a_l,b_l]}\mu_l$ and $\phi:=F_{[a_l,b_l]}\mu_l$. It is first required that $r\le h_l^\text{opt}$. Then, set \begin{align}\label{eq:t_star} t^*_l:=\begin{cases} b_l &\text{if}\;\;\; F_{[a_l,b_l]} \mu_l\\ a_l &\text{if}\;\;\; G_{[a_l,b_l]} \mu_l, \end{cases} \end{align} which reflects the requirement that $\mu_l$ holds at least once between $[a_l,b_l]$ for $F_{[a_l,b_l]} \mu_l$ or at all times within $[a_l,b_l]$ for $G_{[a_l,b_l]} \mu_l$. If $t^*_l=0$, it is naturally required that $h_l(\boldsymbol{x}(0))\ge r$. Let now $\mathfrak{b}_l(\boldsymbol{x},t):=-\gamma_l(t)+h_l(\boldsymbol{x})$ with parameters \begin{subequations}\label{eq:gamma} \begin{align} \gamma_{l,0} &\in \begin{cases} \big(-\infty,h_l(\boldsymbol{x}(0))\big) &\text{if}\;\;\; t^*_l>0\\ \big[r,h_l(\boldsymbol{x}(0))\big) &\text{otherwise} \end{cases}\label{eq:gamma0}\\ \gamma_{l,\infty} &\in(\re{\max(r,\gamma_{l,0})},h^{\text{opt}}_l)\label{eq:gammainfty}\\ \mathfrak{l}_l & \in \begin{cases} -\ln\big(\frac{r-\gamma_{l,\infty}}{\gamma_{l,0}-\gamma_{l,\infty}}\big)/t^*_l &\text{if}\;\;\; \gamma_{l,0}<r\\ 0 &\text{otherwise}\label{eq:l}. \end{cases} \end{align} \end{subequations} Note that by the choice of $\gamma_{l,0}$, it holds that $\mathfrak{b}_l(\boldsymbol{x}(0),0)> 0$ and it furthermore holds that $\mathfrak{b}_l(\boldsymbol{x}(0),0)\le h_l(\boldsymbol{x}(0))-r$ if $t^*_l=0$ such that a satisfaction with a robustness of $r$ is possible. By the choice of $\gamma_{l,\infty}$ and $\mathfrak{l}_l$, it is ensured that $\mathfrak{b}_l(\boldsymbol{x}(t^\prime),t^\prime)\le h_l(\boldsymbol{x}(t^\prime))-r$ for all $t^\prime\ge t^*_l$. Hence, if now $\mathfrak{b}_l(\boldsymbol{x}(t^\prime),t^\prime)\ge 0$ for all $t^\prime\ge t^*_l$, then it follows that $ h_l(\boldsymbol{x}(t^\prime))-r\ge 0$, which implies $h_l(\boldsymbol{x}(t^\prime))\ge r$ leading to $\rho^{\phi}(\boldsymbol{x},0)\ge r$ by the choice of $t^*_l$ and $r$. \textbf{Step 2)} For $p>1$, a more elaborate procedure is needed. Let, similarly to Step~1, $\mathfrak{b}_l(\boldsymbol{x},t):=-\gamma_l(t)+h_l(\boldsymbol{x})$ with $\gamma_l(t)$ according to \eqref{eq:t_star} and \eqref{eq:gamma}. To ensure that $\mathfrak{b}(\boldsymbol{x},t)$ is a cCBF satisfying Assumption \ref{ass3}, we pose the following assumption. \begin{assumption}\label{ass4} Each predicate function contained in $\phi$, denoted by $h_l(\boldsymbol{x}):\mathbb{R}^n\to\mathbb{R}$ with $l\in\{1,\hdots,p\}$, is concave. \end{assumption} \begin{lemma}\label{lemma:concave} Under Assumption~\ref{ass4} and for a fixed $t'$, \begin{small}$\mathfrak{b}(\boldsymbol{x},t'):=-\frac{1}{\eta}\ln\big(\sum_{l=1}^p\mathfrak{o}_l(t')\exp(-\eta\mathfrak{b}_l(\boldsymbol{x},t'))\big)$\end{small} is concave. \begin{proof} Omitted due to space limitations. 
\end{proof} \end{lemma} Compared to Step 1, it is not enough to select $\gamma_{l,0}$ as in \eqref{eq:gamma0} to ensure $\mathfrak{b}(\boldsymbol{x}(0),0)\ge 0$ due to \eqref{eq:under_approx}. To see this, consider $\mathfrak{b}(\boldsymbol{x},t):=-\frac{1}{\eta}\ln\big(\exp(-\eta\mathfrak{b}_1(\boldsymbol{x},t))+\exp(-\eta\mathfrak{b}_2(\boldsymbol{x},t))\big)$. If $\mathfrak{b}_1(\boldsymbol{x}(0),0)> 0$ and $\mathfrak{b}_2(\boldsymbol{x}(0),0)> 0$ (both of which are ensured by \eqref{eq:gamma0}), then it does not necessarily hold that $\mathfrak{b}(\boldsymbol{x}(0),0)\ge 0$ depending on the value of $\eta$. Therefore, $\eta$ now needs to be selected sufficiently large, hence increasing the accuracy of the approximation. More importantly, $\gamma_{l,\infty}$, which has to be selected according to \eqref{eq:gammainfty}, and $r$ need to be selected so that for all $t\in[s_0,s_{q}]$ there exists $\boldsymbol{x}\in\mathbb{R}^n$ so that $\mathfrak{b}(\boldsymbol{x},t)\ge 0$. These objectives can be achieved by appropriately selecting $\eta$, $r$, $\gamma_{l,0}$, and $\gamma_{l, \infty}$. We formulate this parameter selection as an optimization problem that can be solved offline. Define next $\boldsymbol{\gamma}_{0}:=\begin{bmatrix} \gamma_{1,0} & \hdots & \gamma_{p,0}\end{bmatrix}^T$, $\boldsymbol{\gamma}_{\infty}:=\begin{bmatrix} \gamma_{1,\infty} & \hdots & \gamma_{p,\infty}\end{bmatrix}^T$, and $\boldsymbol{\mathfrak{l}}:=\begin{bmatrix} \mathfrak{l}_1 & \hdots & \mathfrak{l}_p\end{bmatrix}^T$ that contain the parameters $\gamma_{l,0}$, $\gamma_{l,\infty}$, and $\mathfrak{l}_l$ for each eventually- and always-operator encoded in $\mathfrak{b}_l(\boldsymbol{x},t)$. Define also $\boldsymbol{\xi}_1,\hdots,\boldsymbol{\xi}_q\in \mathbb{R}^n$ and let $\boldsymbol{\xi}:=\begin{bmatrix} {\boldsymbol{\xi}_1}^T & \hdots & {\boldsymbol{\xi}_q}^T \end{bmatrix}^T$. As argued in Section \ref{background_bf}, there also needs to exist a compact set $\mathfrak{D}'$, which is realized by including an additional barrier function $\mathfrak{b}_{p+1}(\boldsymbol{x},t)=D-\|\boldsymbol{x}\|$ for a suitably selected $D$ into $\mathfrak{b}(\boldsymbol{x},t)$. Next, select the parameters $\eta$, $r$, $D$, $\boldsymbol{\gamma}_{0}$, $\boldsymbol{\gamma}_{\infty}$, and $\boldsymbol{\mathfrak{l}}$ according to the solution of the following optimization problem \begin{subequations}\label{eq:cbf_selection} \begin{align} &\underset{\eta,r,D,\boldsymbol{\gamma}_{0},\boldsymbol{\gamma}_{\infty},\boldsymbol{\mathfrak{l}},\boldsymbol{\xi}}{\operatorname{argmax}} r\label{optim_a}\\ \text{s.t.} \; &\mathfrak{b}(\boldsymbol{x}(0),0)\ge \delta \label{optim_b}\\ &\lim_{\tau\to s_j^-}\mathfrak{b}(\boldsymbol{\xi}_j,\tau)\ge \delta \;\;\; \text{for each } j\in\{1,\hdots,q \} \label{optim_c}\\ & \gamma_{l,0} \text{ as in } \eqref{eq:gamma0} \text{ for each } l\in\{1,\hdots,p+1\}\label{optim_d}\\ & \gamma_{l,\infty} \text{ as in } \eqref{eq:gammainfty} \text{ for each } l\in\{1,\hdots,p+1\}\label{optim_e}\\ & \mathfrak{l}_l \text{ as in } \eqref{eq:l} \text{ for each } l\in\{1,\hdots,p+1\}\label{optim_f}\\ & \eta> 0 \;\; \text{and} \;\; r> 0,\label{optim_g} \end{align} \end{subequations} where $\delta\ge 0$. Note that $\lim_{\tau\to s_j^-}\mathfrak{b}(\boldsymbol{\xi}_j,\tau)$ can easily be evaluated since $\mathfrak{o}_l(t)$ is piecewise continuous. The optimization problem \eqref{eq:cbf_selection} is nonconvex, but can be solved offline.
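The role of $\eta$ in constraints \eqref{optim_b} and \eqref{optim_c} can also be seen from a small numerical example: the under-approximation \eqref{eq:under_approx} can be negative even when every component is positive unless $\eta$ is sufficiently large. The following Python sketch illustrates this for two illustrative component values.
\begin{verbatim}
import numpy as np

def smooth_min(b, eta):
    """Log-sum-exp under-approximation of min(b_1,...,b_p)."""
    b = np.asarray(b, dtype=float)
    return -np.log(np.sum(np.exp(-eta * b))) / eta

b_components = [0.3, 0.5]      # b_1(x,t) > 0 and b_2(x,t) > 0
for eta in (1.0, 5.0, 20.0, 100.0):
    print("eta = %6.1f   smooth min = %+.4f   true min = %+.4f"
          % (eta, smooth_min(b_components, eta), min(b_components)))
# For small eta the approximation is negative although both components are
# positive; increasing eta recovers min(b_1, b_2) from below.
\end{verbatim}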
If maximization of $r$ is not of interest, then a feasibility program with constraints \eqref{optim_b}-\eqref{optim_g} can be solved instead. \begin{lemma}\label{lemma:cCBF} Under Assumption \ref{ass4}, the function $\mathfrak{b}(\boldsymbol{x},t)$ obtained by the solution of \eqref{eq:cbf_selection} is a cCBF for each $[s_{j},s_{j+1})$. \begin{proof} Omitted due to space limitations. \end{proof} \end{lemma} \begin{lemma}\label{lemma:vCBF} If \eqref{eq:cbf_selection} is solved for $\delta>0$, then $\alpha$ can be selected such that $\mathfrak{b}(\boldsymbol{x},t)$ satisfies Assumption \ref{ass3}. \begin{proof} \re{For each $t'\in[s_j,s_{j+1})$, $\mathfrak{b}(\boldsymbol{x}^*_{t'},t')>\mathfrak{b}(\boldsymbol{x},t')$ for all $\boldsymbol{x}\neq\boldsymbol{x}^*_{t'}$ where $\boldsymbol{x}^{*}_{t'}:=\text{argmax}_{\boldsymbol{x}\in\mathbb{R}^n}\mathfrak{b}(\boldsymbol{x},t')$ and $\frac{\partial \mathfrak{b}(\boldsymbol{x},t')}{\partial \boldsymbol{x}}=\boldsymbol{0}$ if and only if $\boldsymbol{x}=\boldsymbol{x}^{*}_{t'}$ due to Lemma \ref{lemma:concave}. Hence, $\mathfrak{b}(\boldsymbol{x}^{*}_{t'},t')\ge \delta>0$ for each $t'\in[s_0,s_{q}]$ due to \eqref{optim_b}, \eqref{optim_c}, and since $\gamma_l(t)$ is increasing so that $0<\delta\le \mathfrak{b}_l(\boldsymbol{x}^*_{t'},t')$ if $\mathfrak{o}_l(t')=1$ due to \eqref{eq:under_approx}. There also exists $\mathfrak{b}_l^\text{max}\ge 0$ such that $\mathfrak{b}_l(\boldsymbol{x}^*_{t'},t')\le \mathfrak{b}_l^\text{max}$ if $\mathfrak{o}_l(t')=1$ due to continuity of $\mathfrak{b}_l(\boldsymbol{x},t')$ on $\mathfrak{C}(t')$. Let $\mathfrak{b}^\text{max}:=\max(\mathfrak{b}_1^\text{max},\hdots,\mathfrak{b}_{p+1}^\text{max})$ and $\Delta_l:=\sup_{t\ge 0}|\frac{\partial \mathfrak{b}_l(\boldsymbol{x},t)}{\partial t}|=\mathfrak{l}_l(\gamma_{l,\infty}-\gamma_{l,0})$. Hence, we can deduce that $\frac{\partial \mathfrak{b}(\boldsymbol{x}^*_{t'},t')}{\partial t}\ge \frac{-\exp(-\eta\delta)\Delta_l}{\exp(-\eta\mathfrak{b}^\text{max})}=:\zeta$ (details omitted due to space limitations; note, however, that $\frac{\partial \mathfrak{b}_l(\boldsymbol{x},t)}{\partial t}$ is non-positive). If it is now guaranteed that $\zeta> -\alpha(\delta)$, then it holds that $\frac{\partial \mathfrak{b}(\boldsymbol{x}^*_{t'},t')}{\partial t}> -\alpha(\mathfrak{b}(\boldsymbol{x}^*_{t'},t'))$ for all $t'\in[s_0,s_q]$ so that Assumption \ref{ass3} holds. By the specific choice of $\alpha(\delta)=\kappa\delta$, we can select $\kappa> \frac{-\zeta}{\delta}$ such that this is the case.} \end{proof} \end{lemma} \begin{corollary}\label{corr3} Consider the same assumptions as in Theorem \ref{theorem1}. If each $\phi_k$ additionally satisfies Assumption \ref{ass4}, $\mathfrak{b}^k(\bar{\boldsymbol{x}}_k,t)$ is the solution of \eqref{eq:cbf_selection} for $\delta>0$, and $\alpha_k:=\kappa\delta$ with $\kappa> -\frac{\zeta}{\delta}$, then $\rho^{\phi_k}(\bar{\boldsymbol{x}}_k,0)\ge r_k> 0$ where $r_k$ is obtained by the solution of \eqref{eq:cbf_selection} for each $k\in\{1,\hdots,K\}$. \begin{proof} Follows by Theorem \ref{theorem1} and Lemmas \ref{lemma:cCBF} and \ref{lemma:vCBF}. \end{proof} \end{corollary} \section{Preliminaries and Problem Formulation} \label{sec:backgound} True and false are $\top$ and $\bot$, respectively. Let $\boldsymbol{0}$ be a vector of appropriate size containing only zeros. An extended class $\mathcal{K}$ function $\alpha:\mathbb{R}\to\mathbb{R}$ is a continuous and strictly increasing function with $\alpha(0)=0$.
The partial derivative of a function $\mathfrak{b}(\boldsymbol{x},t):\mathbb{R}^n\times\mathbb{R}_{\ge 0}\to\mathbb{R}$ evaluated at $(\boldsymbol{x}',t')$ is abbreviated by $\frac{\partial \mathfrak{b}(\boldsymbol{x}',t')}{\partial \boldsymbol{x}}:=\frac{\partial \mathfrak{b}( \boldsymbol{x},t)}{\partial \boldsymbol{x}}\Bigr|_{\substack{\boldsymbol{x}=\boldsymbol{x}'\\t=t'}}$ and assumed to be a row vector. For two sets $S_1$ and $S_2$, let $S_1\oplus S_2:=\{\boldsymbol{s}_1+\boldsymbol{s}_2|\boldsymbol{s}_1\in S_1, \boldsymbol{s}_2\in S_2\}$ denote the Minkowski sum. \subsection{Discontinuous Systems and Nonsmooth Analysis} Assume the system $\dot{\boldsymbol{x}}=f(\boldsymbol{x},t)$ where $f:\mathbb{R}^n\times\mathbb{R}_{\ge 0}\to\mathbb{R}^n$ is locally bounded and measurable. We consider Filippov solutions \cite{paden1987calculus} and define the Filippov set-valued map as \begin{align*} F[f](\boldsymbol{x},t)&:=\overline{\text{co}}\{\lim_{i\to\infty}f(\boldsymbol{x}_i,t)|\boldsymbol{x}_i\to \boldsymbol{x},\boldsymbol{x}_i\notin N\cup N_f \} \end{align*} where $\overline{\text{co}}$ denotes the convex closure; $N_f$ denotes the set of Lebesgue measure zero where $f(\boldsymbol{x},t)$ is discontinuous, while $N$ denotes an arbitrary set of Lebesgue measure zero. A Filippov solution of $\dot{\boldsymbol{x}}=f(\boldsymbol{x},t)$ is an absolutely continuous function $\boldsymbol{x}:[t_0,t_1]\to\mathbb{R}^n$ that satisfies $\dot{\boldsymbol{x}}(t)\in F[f](\boldsymbol{x},t)$ for almost all $t\in[t_0,t_1]$. Due to \cite[Prop.~3]{cortes2008discontinuous} it holds that there exists a Filippov solution to $\dot{\boldsymbol{x}}=f(\boldsymbol{x},t)$ if $f:\mathbb{R}^n\times\mathbb{R}_{\ge 0}\to\mathbb{R}^n$ is locally bounded and measurable. Consider a continuously differentiable function $\mathfrak{b}(\boldsymbol{x},t)$ so that Clarke's generalized gradient of $\mathfrak{b}(\boldsymbol{x},t)$ coincides with the gradient of $\mathfrak{b}(\boldsymbol{x},t)$ \cite[Prop.~6]{cortes2008discontinuous}, denoted by $\nabla \mathfrak{b}(\boldsymbol{x},t):=\begin{bmatrix} {\frac{\partial \mathfrak{b}(\boldsymbol{x},t)}{\partial \boldsymbol{x}}}^T & {\frac{\partial \mathfrak{b}(\boldsymbol{x},t)}{\partial t}} \end{bmatrix}^T$. The set-valued Lie derivative of $\mathfrak{b}(\boldsymbol{x},t)$ is then defined as \begin{align*} \mathcal{L}_{F[f]}\mathfrak{b}(\boldsymbol{x},t)&:={\{\nabla \mathfrak{b}(\boldsymbol{x},t)}^T\begin{bmatrix}{\boldsymbol{\zeta}}^T & 1\end{bmatrix}^T|\boldsymbol{\zeta}\in F[f](\boldsymbol{x},t)\}. \end{align*} According to \cite[Thm.~2.2]{shevitz1994lyapunov}, it holds that $\dot{\mathfrak{b}}(\boldsymbol{x}(t),t)\in\mathcal{L}_{F[f]}\mathfrak{b}(\boldsymbol{x}(t),t)$ for almost all $t\in[t_0,t_1]$. Let $\hat{\mathcal{L}}_{F[f]}\mathfrak{b}(\boldsymbol{x},t):=\big\{\frac{\partial \mathfrak{b}(\boldsymbol{x},t)}{\partial \boldsymbol{x}}\boldsymbol{\zeta}|\boldsymbol{\zeta}\in F[f](\boldsymbol{x},t)\big\}$, the set-valued Lie derivative is then equivalent to $ \mathcal{L}_{F[f]}\mathfrak{b}(\boldsymbol{x},t)=\hat{\mathcal{L}}_{F[f]}\mathfrak{b}(\boldsymbol{x},t)\oplus \big\{\frac{\partial \mathfrak{b}(\boldsymbol{x},t)}{\partial t}\big\}$. \begin{lemma}\label{lemma_liederivate} Consider $\dot{\boldsymbol{x}}=f_1(\boldsymbol{x},t)+f_2(\boldsymbol{x},t)$ where $f_1:\mathbb{R}^n\times\mathbb{R}_{\ge 0}\to\mathbb{R}^n$ and $f_2:\mathbb{R}^n\times\mathbb{R}_{\ge 0}\to\mathbb{R}^n$ are locally bounded and measurable. 
It then holds that \begin{small} \begin{align*} \mathcal{L}_{F[f_1+f_2]}\mathfrak{b}(\boldsymbol{x},t)\subseteq \hat{\mathcal{L}}_{F[f_1]}\mathfrak{b}(\boldsymbol{x},t)\oplus \hat{\mathcal{L}}_{F[f_2]}\mathfrak{b}(\boldsymbol{x},t)\oplus \Big\{\frac{\partial \mathfrak{b}(\boldsymbol{x},t)}{\partial t}\Big\}. \end{align*} \end{small}\begin{proof} Holds by definition of $\mathcal{L}_{F[f_1+f_2]}\mathfrak{b}(\boldsymbol{x},t)$ since {\begin{small}$F[f_1+f_2](\boldsymbol{x},t)\subseteq F[f_1](\boldsymbol{x},t)\oplus F[f_2](\boldsymbol{x},t)$\end{small}} \cite[Thm. 1]{paden1987calculus}. \end{proof} \end{lemma} \subsection{Signal Temporal Logic (STL)} Signal temporal logic \cite{maler2004monitoring} is based on predicates $\mu$ that are obtained after evaluation of a continuously differentiable predicate function $h:\mathbb{R}^n\to\mathbb{R}$ as $\mu:= \begin{cases} \top \text{ if } h(\boldsymbol{\zeta})\ge 0\\ \bot \text{ if } h(\boldsymbol{\zeta})< 0 \end{cases}$ for $\boldsymbol{\zeta}\in\mathbb{R}^n$. The STL syntax is then given by \begin{align*} \phi \; ::= \; \top \; | \; \mu \; | \; \neg \phi \; | \; \phi' \wedge \phi'' \; | \; \phi' \until{a}{b} \phi''\;, \end{align*} where $\phi'$, $\phi''$ are STL formulas with $a\le b<\infty$ for $\until{a}{b}$ (until operator). Also define $F_{[a,b]}\phi:=\top \until{a}{b} \phi$ (eventually operator) and $G_{[a,b]}\phi:=\neg F_{[a,b]}\neg \phi$ (always operator). Let $\boldsymbol{x}\models \phi$ denote that the signal $\boldsymbol{x}:\mathbb{R}_{\ge 0}\to\mathbb{R}^n$ satisfies $\phi$. These STL semantics are defined in \cite{maler2004monitoring}; $\phi$ is satisfiable if $\exists \boldsymbol{x}:\mathbb{R}_{\ge 0}\to\mathbb{R}^n$ such that $\boldsymbol{x}\models \phi$. Robust semantics determine how robustly a signal $\boldsymbol{x}$ satisfies the formula $\phi$ at time $t$. The robust semantics for STL \cite[Def. 3]{donze2} are defined by: \begin{small} \begin{align*} \rs{\mu}& := h(\boldsymbol{x}(t)), \hspace{0.2cm} \rs{\neg\phi} := -\rs{\phi},\\ \rs{\phi' \wedge \phi''} &:= \min(\rs{\phi'},\rs{\phi''}),\\ \rs{\phi' \until{a}{b} \phi''} &:= \underset{\bar{t}\in [t+a,t+b]}{\max} \min(\rss{\phi''}{\bar{t}}, \underset{\underline{t}\in[t,\bar{t}]}{\min}\rss{\phi'}{\underline{t}} ), \\ \rs{F_{[a,b]} \phi} &:= \underset{\bar{t}\in[t+a,t+b]}{\max}\rss{\phi}{\bar{t}},\\ \rs{G_{[a,b]} \phi} &:= \underset{\bar{t}\in[t+a,t+b]}{\min}\rss{\phi}{\bar{t}}. \end{align*} \end{small}Furthermore, it holds that $\boldsymbol{x}\models \phi$ if $\rho^\phi(\boldsymbol{x},0)>0$. \subsection{Coupled Multi-Agent Systems} Consider $M$ agents modeled by an undirected graph $\mathcal{G}:=(\mathcal{V},\mathcal{E})$ where $\mathcal{V}:=\{1,\hdots,M\}$ while $\mathcal{E}\subseteq \mathcal{V}\times \mathcal{V}$ indicates communication links. Let $\boldsymbol{x}_i\in\mathbb{R}^{n_i}$ and $\boldsymbol{u}_i\in \mathbb{R}^{m_i}$ be the state and input of agent $i$. Also let $\boldsymbol{x}:=\begin{bmatrix} {\boldsymbol{x}_1}^T & \hdots & {\boldsymbol{x}_M}^T\end{bmatrix}^T\in\mathbb{R}^n$ with $n:=n_1+\hdots+n_M$ and \begin{align}\label{system_noise} \dot{\boldsymbol{x}}_i&=f_i(\boldsymbol{x}_i,t)+g_i(\boldsymbol{x}_i,t)\boldsymbol{u}_i+c_i(\boldsymbol{x},t) \end{align} where $f_i:\mathbb{R}^{n_i}\times \mathbb{R}_{\ge 0}\to\mathbb{R}^{n_i}$, $g_i:\mathbb{R}^{n_i}\times \mathbb{R}_{\ge 0}\to\mathbb{R}^{n_i\times m_i}$, and $c_i:\mathbb{R}^{n}\times \mathbb{R}_{\ge 0}\to\mathbb{R}^{n_i}$ are sufficiently regular.
By sufficiently regular, we mean locally Lipschitz continuous and measurable in the first and second argument, respectively; $c_i(\boldsymbol{x},t)$ may model \emph{given dynamical couplings} such as those induced by a mechanical connection between agents; $c_i(\boldsymbol{x},t)$ may also describe unmodeled dynamics or process noise. We assume that $c_i(\boldsymbol{x},t)$ is bounded, but otherwise unknown so that no knowledge of $\boldsymbol{x}$ and $c_i(\boldsymbol{x},t)$ is required by agent $i$ for the control design. In other words, there exists $C\ge 0$ such that $\|c_i(\boldsymbol{x},t)\|\le C$ for all $(\boldsymbol{x},t)\in\mathbb{R}^{n}\times\mathbb{R}_{\ge 0}$. \begin{assumption}\label{ass1} The function $g_i(\boldsymbol{x}_i,t)$ has full row rank for all $(\boldsymbol{x}_i,t)\in\mathbb{R}^{n_i}\times\mathbb{R}_{\ge 0}$. \end{assumption} \begin{remark}\label{rem:1} Assumption \ref{ass1} implies $m_i\ge n_i$. Since $\boldsymbol{x}$ and $c_i(\boldsymbol{x},t)$ are not known by agent~$i$, the system \eqref{system_noise} is \emph{not} feedback equivalent to $\dot{\boldsymbol{x}}_i=\boldsymbol{u}_i$. Canceling $f_i(\boldsymbol{x}_i,t)$ may also induce high control inputs, whereas we derive a minimum-norm controller in Section \ref{sec:controller}. Collision avoidance, consensus, formation control, or connectivity maintenance can be achieved through a secondary controller $f_i^\text{u}$. Let therefore $\mathcal{V}_i^\text{u}\subseteq\mathcal{V}$ be a set of agents that \emph{induce dynamical couplings}, and let $\boldsymbol{x}_i^\text{u}:=\begin{bmatrix} {\boldsymbol{x}_{j_1}}^T & \hdots & {\boldsymbol{x}_{j_{|\mathcal{V}_i^\text{u}|}}}^T\end{bmatrix}^T$ and $n_i^\text{u}:=n_{j_1}+\hdots+n_{j_{|\mathcal{V}_i^\text{u}|}}$ for $j_1,\hdots,j_{|\mathcal{V}_i^\text{u}|}\in\mathcal{V}_i^\text{u}$. By using $\boldsymbol{u}_i:={g_i(\boldsymbol{x}_i,t)}^T(g_i(\boldsymbol{x}_i,t){g_i(\boldsymbol{x}_i,t)}^T)^{-1}f_i^\text{u}(\boldsymbol{x}_i^\text{u},t)+\boldsymbol{v}_i$ the dynamics $\dot{\boldsymbol{x}}_i=f_i(\boldsymbol{x}_i,t)+f_i^\text{u}(\boldsymbol{x}_i^\text{u},t)+g_i(\boldsymbol{x}_i,t)\boldsymbol{v}_i+c_i(\boldsymbol{x},t)$ resemble \eqref{system_noise} if $f_i^\text{u}:\mathbb{R}^{n_i^\text{u}}\times\mathbb{R}_{\ge 0}\to\mathbb{R}^{n_i}$ is sufficiently regular. \end{remark} \section{Conclusion} \label{sec:conclusion} We proposed decentralized control barrier functions for multi-agent systems under STL tasks. We also presented a procedure to construct control barrier functions for a particular class of STL tasks. Extending this last step to a more general class of STL tasks is the subject of future work. \subsection{Problem Formulation} In this paper, we consider the STL fragment \begin{subequations}\label{eq:subclass} \begin{align} \psi \; &::= \; \top \; | \; \mu \; | \; \neg \mu \; | \; \psi' \wedge \psi'' \label{eq:psi_class}\\ \phi \; &::= \; G_{[a,b]}\psi \; | \; F_{[a,b]} \psi \;|\; \psi' \until{a}{b} \psi'' \; | \; \phi' \wedge \phi''\label{eq:phi_class} \end{align} \end{subequations} where $\psi'$, $\psi''$ are formulas of class $\psi$ in \eqref{eq:psi_class}, whereas $\phi'$, $\phi''$ are formulas of class $\phi$ in \eqref{eq:phi_class}. Consider $K$ formulas $\phi_1,\hdots,\phi_K$ of the form \eqref{eq:phi_class} and let the satisfaction of $\phi_k$ for $k\in\{1,\hdots,K\}$ depend on the set of agents ${\mathcal{V}}_k\subseteq \mathcal{V}$.
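To make the robust semantics concrete for the fragment \eqref{eq:phi_class}, the following Python sketch (an offline illustration on a sampled signal, not part of the control design) evaluates $\rho^{G_{[a,b]}\mu}(\boldsymbol{x},0)$ and $\rho^{F_{[a,b]}\mu}(\boldsymbol{x},0)$ for a scalar predicate function $h$; the trajectory and predicate are illustrative.
\begin{verbatim}
import numpy as np

def rho_always(h_vals, t, a, b):
    """rho for G_[a,b] mu: worst-case predicate value over [a,b]."""
    mask = (t >= a) & (t <= b)
    return np.min(h_vals[mask])

def rho_eventually(h_vals, t, a, b):
    """rho for F_[a,b] mu: best-case predicate value over [a,b]."""
    mask = (t >= a) & (t <= b)
    return np.max(h_vals[mask])

# Illustrative scalar trajectory and predicate h(x) = 1 - |x - 2|:
t = np.linspace(0.0, 20.0, 2001)
x = 2.0 + 1.5 * np.exp(-0.3 * t)
h = 1.0 - np.abs(x - 2.0)

print("rho for G_[5,10] mu :", rho_always(h, t, 5.0, 10.0))
print("rho for F_[0,10] mu :", rho_eventually(h, t, 0.0, 10.0))
# Positive values certify satisfaction of the corresponding formula.
\end{verbatim}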
\begin{assumption}\label{ass:com} For each $\phi_k$ with $k\in\{1,\hdots,K\}$, it holds that $(i_1,i_2)\in\mathcal{E}$ for all $i_1,i_2\in{\mathcal{V}}_k$. \end{assumption} Assume further that the sets of agents ${\mathcal{V}}_1,\hdots, {\mathcal{V}}_K\subseteq\mathcal{V}$ are disjoint, i.e., ${\mathcal{V}}_{k_1}\cap{\mathcal{V}}_{k_2}=\emptyset$ for all $k_1,k_2\in\{1,\hdots,K\}$ with $k_1\neq k_2$, and such that ${\mathcal{V}}_1\cup\hdots\cup{\mathcal{V}}_K=\mathcal{V}$. \begin{problem}\label{prob1} Consider $K$ formulas $\phi_k$ of the form \eqref{eq:phi_class}. Derive a control law $\boldsymbol{u}_i$ for each agent $i\in\mathcal{V}$ so that $0<r\le \rho^{\phi_1 \wedge \hdots \wedge \phi_K}(\boldsymbol{x},0)$ where $r$ is maximized. \end{problem} \section{Introduction} \label{sec:introduction} Control of multi-agent systems deals with achieving tasks such as consensus \cite{ren2005consensus}, formation control \cite{tanner2003stable}, and connectivity maintenance \cite{zavlanos2008distributed}. More recently, ideas from formal verification have been used where the task is formulated as a temporal logic formula. Signal temporal logic (STL) \cite{maler2004monitoring} allows the formulation of time and space constraints and is based on continuous-time signals. It is hence suited to impose continuous-time tasks on continuous-time systems. STL also admits robust semantics \cite{donze2} that state how robustly an STL formula is satisfied. For discrete-time single-agent systems, \cite{raman1} and \cite{lindemann2016robust} propose optimization-based methods where in particular \cite{raman1} is based on a computationally expensive mixed integer linear program; \cite{raman1} has been extended to discrete-time multi-agent systems in \cite{liu2017distributed} and is hence subject to similar computational burdens. These methods fail to provide continuous-time guarantees. A first step in this direction, i.e., for continuous-time systems, was made in \cite{pant2018fly} and \cite{lindemann2018decentralized}; \cite{pant2018fly}, however, is limited by the need to solve a non-convex optimization problem, while the STL fragment in \cite{lindemann2018decentralized} is limited; \cite{lindemann2018control} establishes a connection between the semantics of an STL task and time-varying control barrier functions and derives a computationally efficient feedback control law for single-agent systems. Control barrier functions were first proposed in \cite{wieland2007constructive} and guarantee the existence of a control law that renders a desired set forward invariant; \cite{ames2017control} presents control barrier functions tailored for safe robot navigation, while \cite{wang2017safety} presents decentralized control barrier functions for safe multi-robot navigation. Nonsmooth and time-varying control barrier functions have been proposed in \cite{glotfelter2017nonsmooth} and \cite{xu2018constrained}, respectively; \cite{srinivasan2018control} uses finite-time control barrier functions for single-agent systems under linear temporal logic tasks, hence not allowing explicit timing constraints. We propose a decentralized control barrier function-based feedback control law for continuous-time multi-agent systems under a set of STL tasks. This control law is discontinuous. Therefore, we first extend our previous work on time-varying control barrier functions for single-agent systems, proposed in \cite{lindemann2018control}, to systems with discontinuous control inputs.
We provide a barrier condition that, if satisfied, results in the satisfaction of an associated STL task. This barrier condition induces a certain load that needs to be carried by the control input. For multi-agent systems, we propose to share this load among agents by means of a discontinuous load sharing function resulting in a decentralized and discontinuous control law. Notably, each agent is subject to unknown dynamical couplings with other agents as well as noise. In the last step, we propose explicit construction rules for the control barrier functions that account for the STL semantics of a fragment of STL tasks. The construction of this barrier function can conveniently be carried out offline and maximizes the robustness by which the STL task is satisfied. The proposed decentralized feedback control law, which is calculated online, is obtained by solving a computationally tractable convex quadratic program. Sec. \ref{sec:backgound} presents the problem formulation, while our proposed problem solution is stated in Sec. \ref{sec:strategy}. Simulations are presented in Sec. \ref{sec:simulations} followed by conclusions in Sec. \ref{sec:conclusion}. \section{Simulations} \label{sec:simulations} \begin{figure*}[tbh] \centering \begin{subfigure}{0.32\textwidth} \input{figures/1}\caption{$\mathfrak{b}^1(\boldsymbol{x}_{\phi_1}(t),t)$ and $\mathfrak{b}^2(\boldsymbol{x}_{\phi_2}(t),t)$}\label{fig:1} \end{subfigure} \begin{subfigure}{0.32\textwidth} \input{figures/2}\caption{Agent trajectories from $0$-$10$ time units}\label{fig:2} \end{subfigure} \begin{subfigure}{0.32\textwidth} \input{figures/3}\caption{Agent trajectories from $10$-$20$ time units}\label{fig:3} \end{subfigure} \caption{Barrier function evolution and agent trajectories for the simulated example.} \vspace{-10 pt} \end{figure*} Consider $M:=4$ agents with $\boldsymbol{x}_i:=\begin{bmatrix}x_{i,1} & x_{i,2} \end{bmatrix}^T\in\mathbb{R}^2$. The dynamics are $\dot{\boldsymbol{x}}_i=\boldsymbol{u}_i+c_i(\boldsymbol{x},t)$ where $c_i(\boldsymbol{x},t):=f_i^\text{c}(\boldsymbol{x})+\boldsymbol{w}_i(t)$ with $f_i^\text{c}(\boldsymbol{x}):=0.5\text{sat}_{1}(\boldsymbol{x}_4-\boldsymbol{x}_i)$ for $i\in\{1,2,3\}$ and $f_4^\text{c}(\boldsymbol{x}):=0.25(\text{sat}_{1}(\boldsymbol{x}_1-\boldsymbol{x}_4)+\text{sat}_{1}(\boldsymbol{x}_2-\boldsymbol{x}_4))$ are \emph{given dynamical couplings}, $\boldsymbol{w}_i\in\{\boldsymbol{w}_i\in\mathbb{R}^2|\|\boldsymbol{w}_i\|\le 0.1\}$ is noise, and where, for $\boldsymbol{\zeta}:=\begin{bmatrix} \zeta_1 & \zeta_2 \end{bmatrix}^T\in\mathbb{R}^2$, $\text{sat}_1(\boldsymbol{\zeta}):=\begin{bmatrix}\bar{\zeta}_1 & \bar{\zeta}_2 \end{bmatrix}^T$ with $\bar{\zeta}_c=\zeta_c$ if $|\zeta_c|\le 1$, $\bar{\zeta}_c=1$ if $\zeta_c> 1$, and $\bar{\zeta}_c=-1$ if $\zeta_c<-1$ for $c\in\{1,2\}$. We use \emph{induced dynamical couplings} to avoid collisions and set $\boldsymbol{u}_i:=f_i^\text{u}(\boldsymbol{x})+\boldsymbol{v}_i$ where \re{$f_i^\text{u}(\boldsymbol{x}):=\sum_{\mathfrak{j}=1}^{3}\frac{\boldsymbol{x}_i-\boldsymbol{x}_{\mathfrak{j}}}{\|\boldsymbol{x}_i-\boldsymbol{x}_{\mathfrak{j}}\|+0.01}$ for $i\in\{1,2,3\}$ and $f_4^\text{u}(\boldsymbol{x}):=\boldsymbol{0}$. We also rewrite $(\|\boldsymbol{x}_i\|_\infty\le 1)$ as $(x_{i,1}\le 1)\wedge (-x_{i,1}\le 1)\wedge (x_{i,2}\le 1)\wedge (-x_{i,2}\le 1)$}.
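For reference, the given and induced couplings of this example can be summarized by the following Python sketch (a simplified reproduction of the expressions above, without the noise term $\boldsymbol{w}_i(t)$ and without the control inputs $\boldsymbol{v}_i$ obtained from \eqref{eq:qp_conv}):
\begin{verbatim}
import numpy as np

def sat1(z):
    """Component-wise saturation of z to [-1, 1]."""
    return np.clip(z, -1.0, 1.0)

def given_couplings(x):
    """Given couplings f_i^c(x); x has shape (4, 2), rows are agents 1-4."""
    f = np.zeros_like(x)
    for i in range(3):                                  # agents 1, 2, 3
        f[i] = 0.5 * sat1(x[3] - x[i])
    f[3] = 0.25 * (sat1(x[0] - x[3]) + sat1(x[1] - x[3]))
    return f

def induced_couplings(x):
    """Induced couplings f_i^u(x) used for collision avoidance (f_4^u = 0)."""
    f = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            d = x[i] - x[j]
            f[i] += d / (np.linalg.norm(d) + 0.01)
    return f

x = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
print(given_couplings(x))
print(induced_couplings(x))
\end{verbatim}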
Let $\phi_1:=G_{[5,10]}(\|\boldsymbol{x}_1-\boldsymbol{p}_A\|_\infty\le 0.5)\wedge \big((\|\boldsymbol{x}_2-\begin{bmatrix} x_{1,1}-1 & x_{1,2}+1 \end{bmatrix}^T\|_\infty\le 0.5)\wedge (\|\boldsymbol{x}_3-\begin{bmatrix} x_{1,1}-1 & x_{1,2}-1 \end{bmatrix}^T\|_\infty\le 0.5)\big)U_{[10,20]}(\|\boldsymbol{x}_1-\boldsymbol{p}_B\|_\infty\le 0.5)$ with $\boldsymbol{p}_A:=\begin{bmatrix} 2.5 & 7 \end{bmatrix}^T$, $\boldsymbol{p}_B:=\begin{bmatrix} 8 & 6 \end{bmatrix}^T$, i.e., agent~$1$ should always be in region $\boldsymbol{p}_A$ within $5$-$10$ time units, e.g., to prepare an object for transportation, while agents $2$ and $3$ form a formation with agent $1$ from $10$ time units onwards until agent $1$ reaches region $\boldsymbol{p}_B$, e.g., collaborative transportation of this object. Let $\phi_2:=F_{[5,10]}(\|\boldsymbol{x}_4-\boldsymbol{p}_C\|_\infty\le 1)\wedge G_{[0,10]} (x_{4,1}\ge 8) \wedge F_{[15,20]}(\|\boldsymbol{x}_4-\boldsymbol{p}_D\|_\infty\le 1) \wedge G_{[10,20]} (x_{4,2}\le 2)$ with $\boldsymbol{p}_C:=\begin{bmatrix} 9 & 1 \end{bmatrix}^T$, $\boldsymbol{p}_D:=\begin{bmatrix} 1 & 1 \end{bmatrix}^T$, e.g., a surveillance task for agent~$4$. We obtain $\mathfrak{b}^1(\bar{\boldsymbol{x}}_1,t)$ and $\mathfrak{b}^2(\bar{\boldsymbol{x}}_2,t)$ for $\phi_1$ and $\phi_2$, respectively; \eqref{eq:cbf_selection} has been solved as a feasibility problem with computation times of $50$ and $13$ seconds, respectively, on a two-core $1.8$ GHz CPU with 4 GB of RAM. Computing \eqref{eq:qp_conv} took, on average, $0.006$ seconds. The control barrier functions are plotted in Fig. \ref{fig:1}, while Figs. \ref{fig:2} and \ref{fig:3} show the agent trajectories from $0$-$10$ and from $10$-$20$ time units, respectively. In Fig. \ref{fig:2} agents $1$, $2$, $3$, and $4$ first approach each other due to $f_i^\text{c}(\boldsymbol{x})$. At $2.6$, $2.4$, $3.4$, and $1.7$ time units, respectively, the barrier functions force the agents not to approach each other any further and instead work towards satisfying $\phi_1$ and $\phi_2$. At $10$ time units, agent $1$ reaches region $\boldsymbol{p}_A$ while agents $2$ and $3$ form a formation. Fig. \ref{fig:3} shows how this formation is maintained while agent $1$ approaches $\boldsymbol{p}_B$. Agents do not get too close to each other due to $f_i^\text{u}(\boldsymbol{x})$. It holds that $\rho^{\phi_1\wedge \phi_2}(\boldsymbol{x},0)\ge 0.005$.
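The reported computation time for \eqref{eq:qp_conv} reflects that, for a single scalar constraint, the quadratic program admits a closed-form minimum-norm solution. The Python sketch below is an illustration of this closed form only; it assumes the gradient terms are available locally and that the constraint is feasible, as guaranteed in the proof of Theorem \ref{theorem1}.
\begin{verbatim}
import numpy as np

def qp_control(db_dxi, fi, gi, db_dt, b_val, alpha, Ni, nhat, C):
    """Minimum-norm u_i satisfying  db/dx_i (f_i + g_i u_i) >= rhs
    for a single scalar constraint; returns 0 if the constraint is inactive."""
    a = db_dxi @ gi
    rhs = (np.linalg.norm(db_dxi) * nhat * C
           - Ni * (db_dt + alpha(b_val))
           - db_dxi @ fi)
    if rhs <= 0.0:
        return np.zeros(gi.shape[1])        # u_i = 0 is already feasible
    return a * rhs / (a @ a)                # active constraint, closed form

# Illustrative single-integrator agent (f_i = 0, g_i = I):
u = qp_control(db_dxi=np.array([1.0, -0.5]), fi=np.zeros(2), gi=np.eye(2),
               db_dt=-1.0, b_val=0.1, alpha=lambda b: 1.0 * b, Ni=1.0,
               nhat=1.0, C=0.1)
print(u)
\end{verbatim}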
\section{Scattered Light} \label{s:BackScatter} The sensitivity required to make gravitational wave measurements can also be expressed as a sensitivity to optical \textit{phase} fluctuations. The current state of the art is approximately $10^{-11}$ radians/$\sqrt{\rm Hz}$ and is nominally limited by quantum mechanics and the available laser power (cf. \Cref{s:sqz}). Practically, however, all of the high-sensitivity interferometers in the past 30 years have been limited at some frequencies by the non-fundamental phase fluctuations due to scattered light. This phenomenon of excess phase noise due to scattered light has been known for decades~\cite{Schilling:1981} but the methods by which to combat it have been developing steadily since then and involve the highest levels of experimental artistry in the field of laser interferometry. Due to imperfections in the optics, for example, a small fraction of the laser light escapes from the main interferometer beam path. This light then interacts with something (e.g. a piece of the vacuum system or some other suspended optic or beam dump) and then recombines with the circulating laser field within the interferometer~\cite{Kip:Scatter95, Kip:Baffles1989, Sam:Scatter2012, Stefano:Scatter, vinet1997scattered, fritschel1998high}. The recombination may occur at virtually any point within the system: inside of the Fabry-Perot arm cavities, at the beamsplitter, or even at the final photodetector which records the GW strain signal. Scattered-light-induced phase noise in the sensing of auxiliary degrees of freedom can couple into the strain channel through the relevant feedback control systems. \begin{figure} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\textwidth]{Figures/ETMphasemap.png} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\textwidth]{Figures/ETMpsd.pdf} \end{subfigure} \caption{(left) Surface figure of an uncoated LIGO mirror (ETM07) with the spherical curvature and tilt subtracted. The ripple pattern at a radius of 3\,--\,15\,cm from the center is due to the planetary coating technique. (right) 1D power spectrum of the surface. Also plotted is the RMS surface roughness integrated from high to low spatial frequency.} \label{fig:PhaseMap} \end{figure} \subsection{Case I: The Beam Tube Baffles} The case of scattering from the multi-km beam tubes connecting the ends of the interferometer has been studied by Whitcomb, Weiss, Thorne, and Flanagan (for LIGO) and by Braccini, Vinet, et al. (for Virgo); see references above. Here we provide a condensed version of their arguments so as to construct a noise budget for backscatter. Let us assume in this case that the field scatters from the test mass surface, reflects from a piece of multi-km long beamtube, returns to the same test mass mirror, and then recombines into the circulating cavity mode. The resulting phase fluctuation of the electric field will be \begin{equation} \delta u_{s} = \frac{4 \pi}{\lambda} x_{s}(f) \sqrt{P_s} \end{equation} For comparison, the field fluctuation due to mirror motion will be \begin{equation} \delta u_{m} = \frac{4 \pi}{\lambda} x_{m}(f) \sqrt{P_0} \end{equation} where $x_s$ is the motion of the beamtube in the direction of the mirror, $x_m$ is the motion of the interferometer mirror, $\lambda$ is the laser wavelength, $P_0$ is the power stored in the arm cavity, and $P_s$ is the power which recombines into the circulating cavity mode.
Ignoring for a moment the radiation pressure effects, we can see that the phase noise due to this type of backscatter can be simply expressed (in units of mirror displacement) using the ratios of the stored power and the backscattered power which recombines with the cavity mode: \begin{equation} x_{m}(f) = x_{s}(f) \sqrt{\frac{P_s}{2 P_0}} \end{equation} where the factor of 2 comes from only including phase fluctuations of the backscattered field. So in order to estimate $x_m$, we will have to compute $P_s$. This can be written down by tracing the path of the beam and considering the relevant scattering cross sections at each step~\cite{Kip:Baffles1989}: \begin{enumerate} \item The probability for the main beam to scatter into some angle towards a potential backscatterer. \item The probability for that backscatterer to scatter light back towards the mirror. \item The probability for that backscattered field to recombine into the cavity mode. \end{enumerate} \begin{equation} P_s = P_0 \frac{\lambda^2}{d^2} \delta \Omega_{ms} \frac{dP}{d\Omega_{ms}} \frac{dP}{d\Omega_{bm}} \frac{dP}{d\Omega_{sm}} \label{eq:ScatProb1} \end{equation} In the above equation, the scatter probability is used; this is related to the more commonly used Bidirectional Reflectance Distribution Function (BRDF) by $\mathrm{BRDF}(\theta) \times \cos(\theta) = dP/d\Omega$, where $\theta = 0$ corresponds to normal incidence. The scattering in/out of the main cavity mode has time-reversal symmetry, and so the first and last probabilities in \eqref{eq:ScatProb1} are equal: \begin{equation} P_s = P_0 \frac{\lambda^2}{d^2} \delta \Omega_{ms} \bigg(\frac{dP}{d\Omega_{ms}}\bigg)^2 \frac{dP}{d\Omega_{bm}} \label{eq:ScatProb2} \end{equation} \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{Figures/ETM_scatter.pdf} \caption{Schematic diagram of a LIGO interferometer mirror and surrounding vacuum chambers responsible for backscattered light.} \label{fig:ETMscat} \end{figure} Based on the dimensions in Figure~\ref{fig:ETMscat}, we can see that we should consider angles of $\sim r_{\rm tube}/d_{\rm scat}$, where $r_{\rm tube} \simeq 0.5$\,m and $d_{\rm scat} \sim$10\,--\,4000\,m for the mirror BRDF. For light scattered into these angles ($\theta_{scat} = \lambda / \lambda_{rough} \simeq$ 0.1\,--\,50\,mrad), we need to consider spatial scales on the mirror of (cf. Figure~\ref{fig:PhaseMap}) $\lambda_{rough} \sim $ 0.001\,--\,1\,cm. \subsection{Case II: The Test Mass Chambers} During the operation of the first generation of kilometer-scale interferometers, it became clear that the scattered light from the test mass mirrors was not all concentrated into small angles (as predicted by theory for super-polished optics). To calculate the total power scattered into wide angles, one can use the equivalent of Fermi's `Golden Rule' (\Cref{eq:GoldenRule}) for optical scattering. Combining the measurement result from \Cref{fig:PhaseMap} with \Cref{eq:GoldenRule}, we can see that the power radiated into angles greater than 1\,degree should be less than 0.1\,ppm. This underestimates the true number by at least 2 orders of magnitude. Based on cavity ringdown and resonant reflectivity tests (cf.~\cite{isogai2013loss}), the round trip (arm cavity) loss in the initial LIGO interferometers ranged from $\sim$60\,--\,140\,ppm. For two of the Advanced LIGO interferometers, the loss is in the $\sim$80\,--\,120\,ppm range.
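As an aside, the Case I expressions above can be combined into a rough numerical budget. The Python sketch below evaluates \eqref{eq:ScatProb2} and the equivalent mirror motion $x_m$; all numbers are illustrative placeholders chosen only to show the arithmetic, since the real budget uses measured BRDFs, the baffle geometry, and the seismic motion of the beam tube.
\begin{verbatim}
import numpy as np

# Illustrative placeholder values (not measured LIGO parameters):
lam     = 1064e-9     # laser wavelength [m]
d       = 2000.0      # mirror-to-scatterer distance [m]
P0      = 100e3       # circulating arm power [W]
dP_ms   = 10.0        # mirror dP/dOmega towards the scatterer [1/sr]
dP_bm   = 0.1         # scatterer dP/dOmega back towards the mirror [1/sr]
dOmega  = 1e-7        # solid angle subtended by the scatterer [sr]
x_s     = 1e-7        # beam-tube motion amplitude spectral density [m/rtHz]

# Recombining power; time-reversal symmetry gives the squared mirror term:
Ps = P0 * (lam / d)**2 * dOmega * dP_ms**2 * dP_bm

# Equivalent mirror displacement from the phase quadrature of the field:
x_m = x_s * np.sqrt(Ps / (2.0 * P0))

print("P_s = %.2e W,  x_m = %.2e m/rtHz" % (Ps, x_m))
\end{verbatim}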
Wide-angle \textit{in situ} measurements (using calibrated photodetectors and CCD cameras) show that the total loss for $\theta_{scat} \gtrsim 1^\circ$ is $\sim$\,15\,ppm. However, it is not at all clear that the angular distribution in that range is that of a Lambertian surface. Measurements taken at angles of $\sim$ (0.5 m)/(4000 m) $\approx 0.1$\,mrad show a large azimuthal dependence (i.e. the BRDF is not symmetric around the beam axis). Images, such as the one in Figure~\ref{fig:2kETMy}, implicate point-like structures. Repeated cleaning of the surface using various techniques reveals a largely unchanging pattern, implying that the points are defects in the dielectric mirror coatings, rather than surface contaminants. Although the details of the story are still being investigated, it seems possible that the defects are due to artifacts of the ion beam sputtering process~\cite{reid2016development} as well as crystal formation~\cite{Qin:ks} in the amorphous thin films during the post-deposition annealing. \subsection{Backscatter from the Dark Port} Another location which requires much care in design is the (dark) detection port of the interferometer. Since it is in this location that the very small optical phase measurements are done, any small fields at this location can be potentially troublesome. In particular, most of the past and current interferometers have $\sim$5\,--\,50\,mW of light going from the beamsplitter in the direction of the GW detection photodiodes. Some of this light is unwanted (coming from the contrast defect of the light from the two arms) and some is there intentionally (a local oscillator field at either the main laser frequency or offset by tens of MHz in the case of interferometers with heterodyne readout). As with all sources of backscatter, the fields which scatter back into the interferometer from the dark port produce both amplitude and phase modulation. In order to reduce the backscatter noise from the dark port to an acceptable level, a series of measures are generally taken: \begin{itemize} \item an Output Mode Cleaner~\cite{Rob:2008,Tobin:DC} cavity with a ring (3 or 4 mirror) cavity design so as to limit the scattering into the counter-propagating mode. \item high-quality optics (for low scatter) in the Output Mode Cleaner. \item a Faraday isolator between the interferometer and the output detection systems. \item appropriate angling of the photodetectors so as to minimize both the specular reflection and the backscattering into the interferometer. \end{itemize} One promising approach would be to limit the amount of carrier light which is transmitted by the interferometer to the dark port. The idea of making a balanced homodyne readout~\cite{fritschel2014balanced} utilizing a local oscillator field from an auxiliary beam is being pursued as a future upgrade of Advanced LIGO. \subsection{Basic optical requirements} \label{s:CoatReqs} We begin with a brief review of the basic requirements for the test mass HR coatings. \begin{itemize} \item \pgfmathparse{int(\GwincVal{Blue.Materials.MassRadius_cm}*2)}\pgfmathresult\,cm{} diameter: the coatings must extend across the full diameter of the silicon test masses.
\item High reflectivity on ETM (Transmittance T = \SI[round-mode=figures,round-precision=3,scientific-notation=true]{\GwincVal{Blue.Optics.ETM.Transmittance}}{})
\item High reflectivity on ITM (T = \pgfmathparse{round(10000*\GwincVal{Blue.Optics.ITM.Transmittance})/100}\pgfmathresult\%)
\item Low scatter loss ($\le$ \SI[round-mode=figures,round-precision=3,scientific-notation=true]{\GwincVal{Blue.Optics.Loss}}{} per bounce)
\item Cancellation of thermo-optic noise \cite{Evans2008,Hong2013}.
\item Reduction of Brownian noise by a factor of 4 or 5 from Advanced LIGO levels
\item At most 1\,ppm absorption, set by the heat budget of the test mass, see \Cref{s:Cryo}.
\end{itemize}

\subsection{Brownian noise}
\label{s:Brownian}
\begin{table}[tbhp]
\centering
\begin{tabular}{lcccc}
\toprule
Parameter & Detector & Material & Loss-angle & Refractive Index \\
 & & & ($\phi$) & $n$ \\
\midrule
Low index & aLIGO (300\,K) & SiO$_2$ & $4.0\times10^{-5}$ & 1.45 \\
High index & aLIGO (300\,K) & Ta$_2$O$_5$ & $2.3\times10^{-4}$ & 2.07 \\
\midrule
Low index & Voyager (123\,K)& SiO$_2$ & $1.0\times10^{-4}$ & 1.436 \\
High index & Voyager (123\,K)& $\alpha$-Si & $\le1.0\times10^{-5}$~\cite{Hellman:2014} & 3.5 \\
\bottomrule
\end{tabular}
\caption[Summary of the coating material parameters]
{Summary of the coating material parameters. Note that, due to the low-temperature mechanical dissipation peak in amorphous glasses, the loss-angle for the SiO$_2$ increases at cryogenic temperatures~\cite{Martin2014}.}
\label{tab:coating_parameters}
\end{table}
As described in \cite{Evans2008}, Brownian noise in the coating is the dominant residual noise source, particularly when thermo-optic noise is minimized. Brownian noise is driven by mechanical dissipation, where the relation between the dissipation and the noise is described by Callen's Fluctuation-Dissipation Theorem~\cite{CaWe1951, Kubo:FDT, Callen:1959}:
\begin{equation}
S_x(f) = \frac{k_B T}{\pi^2 f^2} \left| Re \big[ Y(f) \big]\right|,
\label{eq:FDT}
\end{equation}
where $S_x(f)$ is defined as the power spectral density of physical quantity $x$, $T$ is the temperature of the mirror, and $Y(f) \equiv \dot{x}(f)/F(f)$ is the complex mechanical admittance (the inverse of the mechanical impedance) associated with a force applied with the Gaussian intensity profile of the laser beam. For a single layer coating, it can be shown that the Brownian noise spectrum is proportional to the mechanical loss angle, $\phi$, of the layer. The Brownian noise of a multi-layer coating involves an appropriately weighted sum of the loss angles of all the layers.

\subsubsection{Amorphous Silicon}
Although almost all amorphous thin films suffer from a high level of internal friction, one film has been shown to be nearly free of such loss: amorphous silicon ($\alpha$-Si)~\cite{Hellman:2014, Pohl:RMP}. Recent measurements~\cite{Reid2018a} have shown that amorphous silicon can be grown with both very low mechanical loss and low optical absorption at \SI[round-mode=figures]{2}{\micro\meter}. \Cref{tab:coating_parameters} compares the loss angles for the Advanced LIGO and Voyager coating materials. Note that the loss angle, $\phi$, for $\alpha$-Si is more than a factor of 20 lower than that of the high index material used in Advanced LIGO.
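To illustrate how \Cref{eq:FDT} connects a loss angle to displacement noise, the sketch below evaluates it for a single mechanical mode with structural damping. This is only a toy model (the actual coating calculation requires the elastic energy weighting of each layer, as in \cite{Hong2013}), and all of the numerical values are placeholders.
\begin{verbatim}
# Toy illustration of Eq. (FDT): thermal displacement noise of a single
# mechanical mode with structural (loss-angle) damping.  NOT the full
# multi-layer coating calculation; all values are placeholders.
import numpy as np

kB  = 1.380649e-23   # Boltzmann constant [J/K]
T   = 123.0          # temperature [K]
f0  = 10e3           # resonance frequency of the mode [Hz] (assumed)
m   = 1.0            # effective mass of the mode [kg] (assumed)
phi = 1e-5           # structural loss angle (cf. alpha-Si in the table)

f  = np.logspace(1, 3, 201)          # 10 Hz -- 1 kHz
w  = 2 * np.pi * f
w0 = 2 * np.pi * f0
k  = m * w0**2

# admittance Y = v/F, with x = F / (k(1 + i*phi) - m w^2) and v = i w x
Y  = 1j * w / (k * (1 + 1j * phi) - m * w**2)
Sx = kB * T / (np.pi**2 * f**2) * np.abs(Y.real)   # Eq. (FDT), [m^2/Hz]

print(f"sqrt(Sx) at 100 Hz: {np.sqrt(np.interp(100, f, Sx)):.2e} m/rtHz")
# Below resonance, Sx ~ 2 kB T phi / (pi f k): the noise amplitude scales
# as sqrt(T * phi), which is why a cold, low-loss coating helps.
\end{verbatim}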
\begin{figure} \centering \begin{subfigure}[b]{0.75\textwidth} \includegraphics[width=1\linewidth]{figures/ETM_aSi_123_Layers.pdf} \caption{} \label{fig:aSiCoatingDesign} \end{subfigure} \begin{subfigure}[b]{0.75\textwidth} \includegraphics[width=1\linewidth]{figures/ETM_aSi_123_R.pdf} \caption{} \label{fig:aSiCoatingSpectrum} \end{subfigure} \caption{(a) Layer structure for the $\alpha$-Si:SiO$_2$ HR coating for end test masses. This coating design was optimized to minimize Brownian noise, meet the 5\,ppm transmission goal, and minimize first order sensitivity to coating thickness and index of refraction errors. (b) Reflection and transmission calculations for the $\alpha$-Si:SiO$_2$ HR coating.} \end{figure} Using the material parameters for $\alpha$-Si:SiO$_2$ at \GwincVal{Blue.Materials.Substrate.Temp}\,K{} found in the literature, we have numerically optimized the layer structure so as to minimize the overall displacement noise while maintaining a low sensitivity to layer thickness variations (details of this technique can be found in \cite{Hong2013}). The result is an ETM coating with 5\,ppm transmission. \Cref{fig:aSiCoatingDesign} shows the coating structure (notice that the design is close to, but not exactly, a simple stack of layers of $\lambda/4$ thickness). The transmission and reflection spectra are shown in \Cref{fig:aSiCoatingSpectrum}. Finally, \Cref{fig:aSiCoatingNoise} shows the Brownian and thermo-optic noises for Advanced LIGO and LIGO Voyager{}; Brownian noise is the limiting coating noise source for both, but it is more than 4 times lower for Voyager compared to aLIGO. It is noteworthy that, unlike in today's gravitational wave detectors, the contribution to the Brownian noise from the high refractive index ($\alpha$-Si) layers is so small that the low index (SiO$_2$) layers become the dominant contributor to the noise. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{figures/Coating.pdf} \caption[$\alpha$-Si coating noise]{Mirror thermal noise sources for the LIGO Voyager and Advanced LIGO.} \label{fig:aSiCoatingNoise} \end{figure} \subsubsection{Crystalline coatings} \label{s:crystalCoatings} Crystalline coatings such as AlAs:GaAs~\cite{Cole:2016ud} and AlGaP:GaP~\cite{Angie:2013} have been shown to have a higher mechanical Q than amorphous dielectric coatings and, as such, are a favorable technology to pursue for high precision optical cavities. The thermo-optic noise of these coatings is generally high, but it can be mitigated by careful design of layer thicknesses~\cite{Chalermsongsak_2016}. Both crystalline coating options show promise as candidates for LIGO Voyager{} but require significant further development. AlGaP:GaP is lattice matched to silicon and could therefore be epitaxially grown directly onto a test mass substrate, but the absorption must be reduced to the 1\,ppm level. AlAs:GaAs is not lattice matched to silicon so must be grown on a GaAs substrate and then lifted off~\cite{Yablonovitch1990} and affixed to the silicon test mass face, a technique yet to be demonstrated for 30\,cm diameter coating stacks. While an $\alpha$-Si:SiO$_2$ coating is the current choice for LIGO Voyager{}, breakthrough results on crystalline coatings could lead to a switch in design. \subsection{Optical absorption} \label{s:opticalAbsorption} The design of LIGO Voyager{} allows for 1\,ppm absorption in the coatings of the test masses, stemming from the need to keep the core optics at cryogenic temperatures (see \Cref{s:Cryo}). 
Much research has been performed in the last few years with the aim of lowering the optical absorption, although an $\alpha$-Si coating with absorption of less than 1\,ppm at \SI{2000}{\nano\meter} and \GwincVal{Blue.Materials.Substrate.Temp}\,K has yet to be demonstrated. However, it appears likely that this can be achieved, based on two recent results:
\begin{itemize}
\item The absorption in $\alpha$-Si coatings was consistently measured to be approximately 7 times lower at \SI{2000}{\nano\meter} than at \SI{1550}{\nano\meter}~\cite{Steinlechner:2017}, and it improves further with cooling.
\item Using a novel ion-beam deposition method, Birney et al.~\cite{Birney:2017dt} were able to produce an $\alpha$-Si coating with absorption of 7.6\,ppm at \SI{1550}{\nano\meter} and room temperature.
\end{itemize}
\noindent Taken together, these two results suggest that an $\alpha$-Si coating with less than 1\,ppm absorption is feasible.

\subsection{Heat Loads}
The heat budget for the test mass includes several significant sources, which must be managed so as not to exceed the available radiative cooling power:
\begin{enumerate}
\item absorption of the laser beam in the high reflectivity mirror coatings
\item absorption of the laser beam in the bulk of the ITM silicon substrate
\item thermal radiation from the room temperature, 4\,km beam tube
\item thermal radiation from the vacuum chambers near the test masses
\item thermal radiation from nearby optics (reaction mass, compensation plate, arm cavity transmission monitor)
\end{enumerate}

\subsubsection{Absorption of the laser beam}
Even 1\,ppm of absorption in the high reflectivity coatings of the test masses will deliver significant heating, due to the large circulating power in the Fabry-Perot arm cavities. Assuming a circulating arm power $P_{\mathrm{cav}} = 3\,\mathrm{MW}$, and coating absorption $\alpha_{\mathrm{C}} = 1\,\mathrm{ppm}$, the heat deposited into each test mass is
\begin{equation}
P_{\mathrm{coating}} = P_\mathrm{cav}\,\alpha_{\mathrm{C}} = 3\,\mathrm{W}
\end{equation}
The input test masses of the arm cavities are also traversed by the power circulating in the power recycling cavity. Assuming a power incident on the beamsplitter $P_{\mathrm{BS}} = 3\,\mathrm{kW}$, and substrate absorption $\alpha_{\mathrm{S}} = 20\,\mathrm{ppm}/\mathrm{cm}$ in a test mass of depth $h_{\mathrm{TM}} = 55\,\mathrm{cm}$, the heat deposited into each test mass is
\begin{equation}
P_{\mathrm{substrate}} = P_{\mathrm{BS}}\,\alpha_{\mathrm{S}}\,h_{\mathrm{TM}} = 3.3\,\mathrm{W}
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figures/EX-render-3b.pdf}
\caption[Drawing of end station vacuum system]
{Rendering of end station vacuum system. Outer shield, mirror shield, reaction chain, and suspension cage structure \emph{not shown} for clarity.}
\label{fig:endstationvac}
\end{figure}

\subsubsection{Ambient environmental heating of the test mass}
Cold windows in the arm cavities would prevent the mirrors from being exposed to the room temperature vacuum beam tube, but cannot be included, for several reasons. First, the Fresnel reflections from even the best AR coatings would be in excess of the acceptable arm cavity loss of 10\,ppm. Second, the beam heating of the window from the 3\,MW of circulating power would introduce a large thermal lens, which would change as the circulating power is varied. Finally, the Brownian and thermo-optic noise of a window in the arm cavity would exceed the noise in the test masses.
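Before considering the radiative contributions, a minimal numerical check of the two laser-absorption loads estimated above, using the same assumed parameters as in the text:
\begin{verbatim}
# Check of the laser-absorption heat loads quoted above, using the same
# assumed parameters as in the text.
P_cav   = 3e6      # circulating arm power [W]
alpha_C = 1e-6     # HR coating absorption
P_coating = P_cav * alpha_C
print(f"coating load per test mass: {P_coating:.1f} W")     # -> 3.0 W

P_BS    = 3e3      # power incident on the beamsplitter [W]
alpha_S = 20e-6    # substrate absorption [1/cm]
h_TM    = 55.0     # ITM thickness [cm]
P_substrate = P_BS * alpha_S * h_TM
print(f"substrate load per ITM:     {P_substrate:.1f} W")   # -> 3.3 W
\end{verbatim}
Both loads are of order a few watts, which sets the scale for the radiative cooling discussed below.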
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/ETMdiagram4.pdf}
\caption[Suspension Cryogenics schematic with a cold PUM]
{Layout of the ETM suspension in the cold penultimate mass configuration with the cryogenic cooling elements. The test mass and reaction mass are cooled radiatively with a two layer heat shield system. The inner shield requires simple vibration isolation to mitigate scattered light noise. Flexible thermal straps thermally link the inner and outer shields to the cold head of the cryo cooler.}
\label{fig:cryoETM_coldPUM}
\end{figure}
The radiant heating of the test mass can be largely mitigated by a cylindrical cold shield, extending out from the test mass to limit the solid angle at room temperature that the test mass `sees'. However, this shield cannot extend farther than the final gate valve separating the arm volume from the end station volume, at a distance of $\sim$\,10\,m, as illustrated in Figure \ref{fig:cryoETM_coldPUM}. The residual heating is given by the Stefan-Boltzmann law multiplied by the fraction of the full sphere subtended by the opening of the cylinder:
\begin{equation}
P_\mathrm{beamtube} = \sigma\, T^4\, \pi\, r_\mathrm{TM}^2 \,\frac{\pi r_\mathrm{snout}^2}{4 \pi L_\mathrm{snout}^2} = 6\,\mathrm{mW},
\end{equation}
assuming that the length of the shield is $L_\mathrm{snout} = 15$\,m and the radius is $r_\mathrm{snout} = 0.25$\,m. This must be corrected for the non-black body emissivity of the HR surface. With these parameters, the heat load from the 300\,K beam tube is negligible.
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{figures/ITM-ColdShield-CP.pdf}
\caption{Cutaway view of thermal finite element model of the input test mass. The model includes heating from the main laser beam in the coating and substrate, as well as radiative heating/cooling from the surroundings.}
\label{fig:radcooling}
\end{figure}

\subsection{Radiative cooling of the test mass}
The effect of radiative cooling of the test mass into a 60\,K environment has been estimated using a finite element model (see~\Cref{fig:radcooling}). The model presumes that the HR and AR surfaces have emissivity $\varepsilon_{\mathrm{face}} = 0.5$, and the barrel has an emissivity of $\varepsilon_{\mathrm{barrel}} = 0.9$. At \GwincVal{Blue.Materials.Substrate.Temp}\,K, the test mass can radiate $\sim$10\,W.

\subsubsection{Cold Shields}
To minimize the radiative heat load from the 300\,K beam tube, the radiation shield will need to include a cylindrical piece which extends into the beam tube. The inside of the shield will require baffles, as in the KAGRA design, to reduce multiple reflection paths from the 300\,K environment~\cite{kagra_snout}. The inside of the long shield should be coated with a high emissivity black coating to maximize the radiative coupling to the test mass. However, one must also consider the 2\,$\mu$m light scattered from the arm cavity onto the shield and then scattered back into the arm cavity. This will be a source of amplitude and phase noise, and it is important that the high emissivity coating also has low BRDF so that scattered light noise is insignificant. Such an effect might be mitigated through the use of a combination of specular baffling and broadband absorption. A second shield will be used outside of these blackened inner shields to reduce the large heat load from the 300\,K environment.
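As a cross-check of the beam-tube term and of the overall budget, the sketch below evaluates the expression above (emissivity corrections ignored, and a 22.5\,cm test mass radius assumed) and compares the summed loads against the $\sim$10\,W radiative cooling capacity quoted above.
\begin{verbatim}
# Radiative heat load from the 300 K beam tube and a rough total budget.
# r_TM is assumed to be 0.225 m; emissivity corrections are ignored.
import math

sigma   = 5.670e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]
T_tube  = 300.0      # beam tube temperature [K]
r_TM    = 0.225      # test mass radius [m] (assumed)
r_snout = 0.25       # cold shield ("snout") radius [m]
L_snout = 15.0       # cold shield length [m]

P_beamtube = (sigma * T_tube**4 * math.pi * r_TM**2
              * (math.pi * r_snout**2) / (4 * math.pi * L_snout**2))
print(f"beam tube load: {1e3 * P_beamtube:.0f} mW")   # ~5 mW, consistent with the ~6 mW above

# crude ITM total against the ~10 W radiative cooling capacity at 123 K
P_total = 3.0 + 3.3 + P_beamtube   # coating + substrate + beam tube [W]
print(f"total (ITM, ignoring other chamber loads): {P_total:.1f} W of ~10 W available")
\end{verbatim}
Even with these conservative assumptions, the laser-absorption loads dominate the budget and the beam tube contribution is negligible, as stated above.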
Both of the shields can be cooled conductively using soft thermal straps, which, in turn, are connected to Gifford-McMahon cryo-coolers outside of the vacuum system. These closed cycle cryo-coolers can cool the shields to approximately 50\,--\,\SI{60}{K}, and their vibrations can be isolated from the heat shields using simple spring mass assemblies.

\subsection{Laser requirements}
The requirements for the LIGO Voyager{} laser are summarized in \Cref{tab:laser_requirements}.
\begin{table}[h]
\begin{center}
\begin{tabular}{lcc}
\toprule
{Type} & {Requirement} & {Comment} \\
\midrule
Wavelength & \SI{1900}{\nano\meter}\,--\,\SI{2100}{\nano\meter} & Single-frequency \\
Power & \SI[round-mode=figures,round-precision=2]{\GwincVal{Blue.Laser.PMCInput}}{W}{} & CW operation \\
Polarization* & horizontal, $>$100:1 ratio \cite{T050036} & \\
Spatial mode* & $\ge$97\% TEM$_{00}$ \cite{T050036} & \\
HOM content* & $<$ 3\% \cite{T050036} & \\
Intensity noise (RIN)* & $\le 10^{-6}$ Hz$^{-1/2}$ & \SI{10}{\hertz} $\le f \le$ \SI{5}{\kilo\hertz} \cite{Heurs:04} \\
 & $\le 2\times 10^{-7}$ Hz$^{-1/2}$ & \SI{10}{\kilo\hertz} $\le f \le$ \SI{10}{\mega\hertz} \cite{T050036} \\
 & $\approx$ shot-noise limited & $f \ge$ \SI{10}{\mega\hertz} \cite{T050036} \\
Freq.~noise (free-running)* & $\le \left(10\, \mathrm{kHz}/f\right)$ Hz\,Hz$^{-1/2}$ & \SI{1}{\hertz} $\le f \le$ \SI{5}{\kilo\hertz} \cite{Kwee:12}\\
Freq.~actuation bandwidth* & \SI{200}{\kilo\hertz} \cite{Hall2017} & \\
Operation & stable 365/24/7 & no maintenance required\\
Lifetime & 10+ years & continuous operation\\
\bottomrule
\end{tabular}
\caption{Provisional list of laser requirements. Those requirements marked with asterisks (*) are based on equivalent requirements or performance for the Advanced LIGO laser. Although linewidth is a popular specification, we specifically do not use it for characterizing frequency noise requirements.}
\label{tab:laser_requirements}
\end{center}
\end{table}

\subsubsection{Power}
The laser beam must deliver approximately \SI[round-mode=figures,round-precision=2]{\GwincVal{Blue.Laser.Power}}{W}{} of stabilized single frequency power at \SI{2}{\micro\meter} to the power-recycling mirror (PRM), as illustrated in \Cref{fig:IFO_schematic}. Given that the total transmission of the input optics between the laser and PRM is approximately 70\%, the required output of the laser is approximately \SI[round-mode=figures,round-precision=2]{\GwincVal{Blue.Laser.PMCInput}}{W}{}. The current laser design is broadly similar to the existing Advanced LIGO laser \cite{Kwee:12}, based around three stages of increasing power:
\begin{itemize}
\item Low power stage: requires a single-frequency, linearly polarized, CW, \SI{2}{\micro\meter} master oscillator laser with low intensity and phase noise, output power of approximately \SI{1}{\watt}, and good beam quality with ${\rm TEM}_{00}$ mode content preferably {\textgreater}97\%.
\item Medium power stage: the laser enters a medium power second stage in which it is amplified to \SI{35}{\watt}.
\item High power stage: the last stage of the laser system amplifies the output to \SI[round-mode=figures,round-precision=2]{\GwincVal{Blue.Laser.PMCInput}}{W}{}.
\end{itemize}
\noindent The medium and high power stages must maintain the same low noise characteristics and good beam profile as the master oscillator.
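The power budget through the input optics amounts to a two-line calculation; the power at the PRM used below is an illustrative placeholder, not the design value set by the GWINC parameters above.
\begin{verbatim}
# Rough laser power budget: power required at the PRM vs. laser output,
# assuming ~70% transmission of the input optics (from the text).
# P_prm is an illustrative placeholder, not the design value.
P_prm   = 150.0    # power needed at the power-recycling mirror [W] (assumed)
T_input = 0.70     # transmission of the input optics

P_laser = P_prm / T_input
print(f"required laser output: {P_laser:.0f} W")   # ~210 W for these assumptions
\end{verbatim}
This is consistent with the 200\,--\,400\,W class of sources discussed later in this section.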
\subsubsection{Remaining requirements}
Requirements for the higher-order mode (HOM) content and for the frequency and intensity noises are to be derived using a closed-loop, higher spatial order, opto-electronic feedback model of LIGO Voyager{} that includes realistic assumptions about absorption, thermal lensing and compensation, based upon experience with Advanced LIGO. At this time, this model is still under development. We use the Advanced LIGO laser requirements as a guide for upper limits to these requirements; they are listed in \Cref{tab:laser_requirements} and are expected to be refined as realistic models of LIGO Voyager{} are developed.

The LIGO Voyager{} laser is expected to require a free-running frequency noise at least as low as that of Advanced LIGO. The frequency actuation bandwidth of the entire laser system must be approximately \SI{200}{\kilo\hertz} \cite{Hall2017} in order to be able to sufficiently stabilize frequency fluctuations to the low-noise reference of the interferometer itself. It is not necessary for each stage (master oscillator, medium power stage and high power stage) to be individually capable of providing this bandwidth, provided that at least one component in the system can.

The relative intensity noise (RIN) requirements, extending into the RF, are based on the intensity noise of the PSL in Advanced LIGO \cite{T050036}. Once integrated into LIGO Voyager{}, the laser intensity noise will also be suppressed to a level similar to that of Advanced LIGO.

The laser is expected to run continuously without requiring major maintenance for the lifetime of the LIGO Voyager{} project, at least 10 years.

\subsection{Laser candidate technologies and examples}
There are two rare-earth dopants suitable for direct lasing at \SI{2}{\micro\meter}: thulium and holmium, which can provide optical amplification in the \SI{1900}{\nano\meter}--\SI{2040}{\nano\meter} \cite{ThFiberLaserReview} and \SI{2040}{\nano\meter}--\SI{2170}{\nano\meter} \cite{HoLasersReview2014} bands, respectively. Two basic laser architectures are available: optical fiber \cite{Hemming:15} and free-space \cite{Ganija:16} lasers. These architectures are not necessarily incompatible, and the final system may contain a low-power free-space master oscillator (MO) followed by some combination of power oscillators and fiber amplifiers. The following subsections detail examples of thulium and holmium lasers that are expected to meet the majority of the requirements for the LIGO Voyager{} laser.

\subsubsection{Single-frequency, low-noise source}
\label{s:SFlaser}
The full laser system begins with a master oscillator stage that is a low-noise, single-frequency source. Fiber laser master oscillators use short lengths of doped silica fiber with spectrally-matched distributed-Bragg-reflector (DBR) fiber gratings \cite{Fu:17} spliced onto each end. The gratings are fabricated to suit the required lasing wavelength of the Tm or Ho dopant; a broad range of wavelengths is therefore possible, and modifying the wavelength of an otherwise suitable MO to satisfy other interferometer requirements should be feasible. However, achieving a stable, narrow-linewidth MO will require careful thermal and vibration isolation of the fiber from the environment. Given that different wavelengths can be achieved, we must next consider the frequency noise.
Determining if a commercial laser meets the frequency noise requirements from specifications alone is typically not possible as these lasers usually quote linewidth rather than frequency noise in their specifications. The single-frequency \SI{10}{\watt} Q-Peak Firebow CW10-500, which has a linewidth of $<$\SI{1}{\mega\hertz} \cite{Firebow:2018}, is a possible candidate, but the stability and frequency noise would need to be measured to verify compatibility with LIGO Voyager{} requirements. Alternatively, the free-space single-frequency non-planar ring oscillator (NPRO) architecture has been demonstrated to have low frequency noise, for example at \SI{1064}{\nano\meter} \cite{Willke:00}. This architecture uses a crystalline gain medium and thus only a few wavelengths are possible: a \SI{400}{\milli\watt} Tm:YAG NPRO at \SI{2013}{\nano\meter} \cite{Lin2009} and a \SI{7.3}{\watt} NPRO at \SI{2090}{\nano\meter} \cite{Yao:08} have been reported but frequency noise spectra were not available at the time of writing. Additionally, lasing of cryogenic Tm:YAG at \SI{1880}{\nano\meter} has been demonstrated \cite{Johnson1965} and is expected to be suitable for use in an NPRO. Marginally outside the \SI{1900}{\nano\meter}$\--$\SI{2100}{\nano\meter} range is a \SI{2128}{\nano\meter} laser. Lasers at this wavelength do not yet meet all the requirements for LIGO Voyager{} but could leverage existing \SI{1064}{\nano\meter} components for frequency stabilization (when doubled and locked to the existing aLIGO lasers). The existence of single-frequency, narrow linewidth, \SI{2}{\micro\meter} lasers is encouraging but more work needs to be done to determine if the frequency noise of these lasers meets the requirements of LIGO Voyager{}. \subsubsection{High power} \label{s:HPlaser} High power lasers that could serve as amplifiers have been demonstrated. For example, a multi-mode, CW, Ho-doped fiber laser at \SI{2100}{\nano\meter} has been demonstrated with power up to \SI{400}{\watt} \cite{Hemming:13, Simakov:14}. Reviews of recent work in fiber lasers are provided by Hemming \cite{HoLasersReview2014} and Fu \cite{Fu:17}. High power free-space CW oscillators have also been demonstrated, including a \SI{200}{\watt} Tm:YAG laser [ref] and a \SI{65}{\watt} cryogenic Ho:YAG laser that produced a \SI{100}{\watt} output at \SI{2097}{\nano\meter} with good beam quality \cite{Ganija:17}. The closest example of an existing high-power low-noise laser is the \SI{600}{\watt}, \SI{2040}{\nano\meter} single-frequency single-mode thulium fiber laser demonstrated by Goodno~et~al.~\cite{Goodno:09} that amplifies a \SI{5}{\mega\hertz} linewidth distributed feedback laser diode from \SI{3}{\milli\watt} to greater than \SI{600}{\watt} and maintains the low linewidth of the source. For this laser, stimulated Brillouin scattering (SBS) was demonstrated to be negligible below \SI{250}{\watt} output power and, as such, is not expected to be an issue for the LIGO Voyager{} laser. Some early indications from high-power laser work at \SI{2}{\micro\meter} suggest that there may be power-dependent excess relative intensity noise at radio frequencies (RF) and this remains an active area of investigation. \subsection{Summary of laser prospects} Most of the constituent requirements of a pre-stabilized laser around \SI{2}{\micro\meter} (master oscillator, intermediate amplification and high power stages) have been demonstrated at or near this wavelength. 
Special emphasis needs to be placed upon acquiring frequency and intensity noise measurements on low-noise master oscillators soon. It is clear that full confirmation of a \SI{2}{\micro\meter} laser source with sufficiently low frequency and intensity noise has yet to be performed. We are confident that this can be achieved, as the requirements for the LIGO Voyager{} laser are not substantially beyond specifications already demonstrated at other wavelengths \cite{Kwee:12}. Subsequent development and engineering work are still needed to integrate all these parts into a single system; however, no fundamental reasons preclude the production of such a system.

\subsection{Compensation plate material}

\subsection{Balanced Homodyne Readout}

\subsection{Noise requirements}
We now discuss the laser noise requirements, including angular beam pointing jitter, intensity fluctuations, frequency noise and polarization fluctuations.

\subsubsection{Jitter noise/Pointing requirements}
We begin with a description of how jitter noise couples to DARM and include a calculation of the corresponding requirement. The angular jitter of the laser beam can be written as a component in the ${\rm TEM}_{10}$ mode, and misalignment of the interferometer optics will couple this mode into the ${\rm TEM}_{00}$ mode and hence into the GW signal quadrature. An analysis of the jitter noise requirement (noise on the ${\rm TEM}_{10}$ input beam) requires an estimate of the residual misalignment of the IFO optics, the resonance conditions of the ${\rm TEM}_{10}$ mode in the recycling cavities, and the attenuation of any input jitter noise by the input optics. The jitter noise requirement for Voyager is shown in Figure \ref{fig:jitter_noise_req}; for comparison, we also show the requirement for aLIGO (see T0900142 for the aLIGO pointing requirements).
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/aLIGO_jitter_noise_req.pdf}
\caption{Jitter noise requirement for Voyager and, for comparison, for aLIGO (from T0900142).}
\label{fig:jitter_noise_req}
\end{figure}

\subsubsection{Intensity noise}
Intensity noise couples to DARM both directly and through radiation pressure acting on the core optics; the intensity noise requirement is derived from these two couplings and is shown in Figure \ref{fig:RIN_noise_req}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/Voy_PSL_RIN_requirement.pdf}
\caption{Relative intensity noise requirement (preliminary). The high-frequency portion is dominated by RF sidebands reaching the detection photodiodes; this coupling is presently estimated, so the location of the kink, as well as the 10\,--\,100\,Hz region, is subject to revision.}
\label{fig:RIN_noise_req}
\end{figure}

\subsubsection{Frequency noise}
A short derivation of the frequency noise requirement will be given here. The frequency noise requirement is shown in Figure \ref{fig:freq_noise_req}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/aLIGO_freq_noise.png}
\caption{Frequency noise requirement (currently the aLIGO figure, shown as a placeholder).}
\label{fig:freq_noise_req}
\end{figure}

\subsubsection{Polarization noise}
A matrix calculation that propagates $s$- and $p$-polarized inputs and estimates the coupling of the input polarization noise to the output ${\rm TEM}_{00}$ signal is needed; the laser beam polarization noise requirement will be derived from it. Monocrystalline silicon is a cubic crystal and is therefore optically isotropic, with no intrinsic birefringence.
However, residual stress, dislocations and impurities can break the symmetry, resulting in a small residual birefringence. This birefringence~\cite{SiliconBirefringence2016} can play an important role in this design because it is likely to vary throughout the volume of the material. Thus any motion of the silicon substrate, or of the polarization of the light, is likely to result in phase noise as the light propagates through the substrate.

\subsection{Auxiliary optics}
In this subsection we consider the impact of operating at \GwincVal{Blue.Laser.Wavelength_nm}\,nm on Faraday isolators, AOMs and EOMs.

\subsubsection{Isolators}
Faradays -- see Florida [Aidan]

\subsubsection{Electro-optics}
Modulators -- see Florida [Aidan]

Many of the electro-optical materials that work in the visible and at 1\,micron also have low optical absorption in the \GwincVal{Blue.Laser.Wavelength_nm}\,nm wavelength region. \Cref{tab:EOM} lists several of them and their optical properties.
\begin{table}
\centering
\begin{tabular}{ |c| c| c| c| }
\hline
Material & Transmission Range [nm] & EO Coefficient & Something Else (TM) \\
\hline
ADP & 185--1450?? & 1 & 2 \\
KDP & 176--1420?? & 1 & 2 \\
KTP & 350--4500 & 1 & 2\\
LiNbO$_3$ & 0.500--5000 & 1 & 2\\
LiIO$_3$ & 0.380--5500 & 1 & 2 \\
KNbO$_3$ & 0.450--5000 & 1 & 2 \\
LBO & 0.170--2500 & 1 & 2 \\
BBO & 0.205--3000 & 1 & 2 \\
\hline
\end{tabular}
\caption{Common electro-optical materials with low optical absorption in the \GwincVal{Blue.Laser.Wavelength_nm}\,nm region.}
\label{tab:EOM}
\end{table}

\subsection{Prospects for a Fiber Laser for Voyager}
Emission lines in thulium doped fibers span the 1.6\,--\,2.05\,$\mu$m wavelength band \cite{TmLasersReview,Richardson2010}, while holmium doped fibers span the range from 1.9\,--\,2.1\,$\mu$m \cite{HoLasersReview2014,Richardson2010,Nilsson2004}. This raises the prospect of sources that span this whole region \cite{HoLasersReview2014}. GW interferometers have been using diode-laser-pumped, free-space 1064\,nm lasers because of their low amplitude and frequency noise, but this wavelength is incompatible with silicon core optics. Moreover, the higher laser power required (\GwincVal{Blue.Laser.Power}\,W) is more compatible with the capabilities of fiber lasers. Current prospects for \GwincVal{Blue.Laser.Wavelength_nm}\,nm lasers are promising, following the Advanced LIGO architecture of a low-power, low-noise master oscillator stage, a mid-power amplifier stage (35\,W) and a high-power stage (300\,--\,400\,W):
\begin{itemize}
\item Master oscillator stage/low power stages: single-frequency holmium fiber lasers have been demonstrated with 50\,kHz linewidth, and are available up to 5\,W (NP Photonics).
\item Medium/high power stage: holmium fiber lasers have been demonstrated at 200\,W with multiple longitudinal modes (reportedly up to 400\,W) by the DST group in Adelaide. Injection seeding/locking of the high power stage is TBD. \cite{HoLasersReview2014}
\end{itemize}

\subsubsection{Fiber Lasers}
\textit{Laser diode pumping} of fiber lasers uses semiconductor diode lasers similar to the ones used for pumping the solid state lasers described above, but restricted to high brightness, fiber-coupled devices suitable for coupling into the cladding of the fiber. Several single stripe pump lasers can be spliced in to increase the pump power, making use of the large acceptance of the cladding.
Alternatively, a brightness-converting solid-state laser can sometimes be used if a suitable match of materials and wavelengths can be found. \textcolor{red}{Brightness converting SSL Ref}

\textit{Polarization maintaining fiber} is required for our application. While a regular fiber is cylindrically symmetric with no preferred polarization, polarization maintaining fibers have a built-in birefringence formed during manufacture by one of several methods, including the use of stress rods, elliptical cores and so-called Panda designs. They all introduce a preferred axis for the propagation and amplification of linearly polarized light \textcolor{red}{Pol maintaining fibers Ref}.

\paragraph{Stimulated Brillouin Scattering}
The classical picture of Brillouin scattering is a photon scattered and Doppler shifted from a moving acoustic density wave. In SBS it is the interference of the forward propagating laser beam and the backward scattered beam that forms an acoustic grating, due to electrostriction in the dielectric material. In other words, the forward and counter-propagating fields interfere with each other to produce an electric field that distorts the material, resulting in variations in the refractive index. The pump laser continues to build the strength of this moving grating until it is strong enough to operate as a reflector of the pump, frequency shifted by the moving grating. The threshold for SBS occurs when the product of the pump intensity $I$ (W/cm$^2$), the SBS gain coefficient $g_0$ (cm/W) and the interaction length $l$ (cm) reaches $I g_0 l \sim 25$ \textcolor{red}{Zeldovich 1985 Ref}; a more refined calculation for optical fibers finds that the threshold gain--length product for SBS depends on the fiber length and varies from about 25 for short fibers to less than 5 for very long fibers \textcolor{red}{Kovalev 2007 Ref}. For a single frequency, long coherence length laser this corresponds to a laser power of about 5\,--\,10\,W in a few meters of fiber.

To achieve higher powers, techniques to increase the threshold for SBS must be used; they rely on decreasing the intensity by using large mode area (LMA) fibers, and on decreasing the effective interaction length by disrupting the coherence of the acoustic wave while maintaining the coherence length of the optical wave. Both of these approaches have been investigated with good success in the last decade. Disruption of the acoustic interaction length has been accomplished by introducing a longitudinal temperature gradient \textcolor{red}{SBS Temp grad Ref} and by introducing radial discontinuities in the material doping, thus giving rise to different acoustic modes interfering destructively with each other \textcolor{red}{Reference}. For example, using an LMA fiber 12\,m long and with a core diameter of 30\,$\mu$m, an SBS threshold of 40\,W was increased to greater than 150\,W by using the same LMA fiber but with a graded Ge/Al co-doped core designed to disrupt the acoustic properties of the fiber \cite{Li2007}. In subsequent work the SBS threshold was increased to 500\,W \textcolor{red}{REFERENCE}, and most numerical models predicted that one kW should be attainable from a narrow band, single mode fiber. In practice this was not achieved because of the onset of very strong mode instabilities, resulting in a rapid degradation of the beam quality at powers above 500\,W \textcolor{red}{REFERENCE}, to be discussed below.
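The scaling above can be turned into a rough threshold-power estimate. In the sketch below the Brillouin gain coefficient and mode-field radius are assumed (typical silica-fiber values), so the result is only an order-of-magnitude illustration of why a single-frequency signal of a few tens of watts already reaches the SBS threshold in a standard LMA fiber.
\begin{verbatim}
# Order-of-magnitude SBS threshold estimate, P_th ~ 25 * A_eff / (g0 * L),
# following the threshold criterion I*g0*l ~ 25 quoted above.
# g0 and the mode radius are assumed, typical-of-silica values.
import math

g0     = 5e-11    # Brillouin gain coefficient [m/W] (assumed, narrow-line pump)
radius = 15e-6    # mode-field radius of a 30 um core LMA fiber [m] (assumed)
L      = 12.0     # fiber length [m]

A_eff = math.pi * radius**2
P_th  = 25 * A_eff / (g0 * L)
print(f"SBS threshold ~ {P_th:.0f} W")   # a few tens of watts, cf. the 40 W quoted above
\end{verbatim}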
\paragraph{Single Frequency Fiber Lasers for GW Detectors}
High power fiber lasers for gravitational wave detectors have been investigated and may possibly be used later on in the Advanced Virgo detector \textcolor{red}{VIRGO Fiber laser Ref}. The fibers used were commercially available LMA fibers or photonic crystal rods, and low-noise amplification at 1\,$\mu$m was obtained up to a power of 100\,W. No degradation or increased phase noise was measured \textcolor{red}{Nary Man et al 2010 Ref}. However, current indications are that when left on continually, the power output decreases due to photo-darkening in the fiber. This has been investigated by others and may be due to very small traces of other rare earth element impurities. Traces of thulium as small as parts per billion have been found to lead to photo-darkening, attributed to UV emission from excited state absorption in these impurities \textcolor{red}{Photo darkening Ref}. This photo-darkening can be prevented by operating at lower powers, and there are some reports that the effect is not permanent. This will need to be resolved.

One proposed design for Advanced Virgo uses two of these fiber amplifiers coherently combined in parallel to achieve the necessary power, but additional work on the characterization, lifetime and engineering of the fiber lasers remains to be done. In other preliminary work, using a commercially available fiber laser amplifier to amplify the output of an NPRO to 50\,W for gravitational wave research at the ACIGA facility, spatial variations in the beam were observed, possibly due to mode instabilities \textcolor{red}{Mode instabilities Ref}. However, these problems may be surmountable, and fiber lasers may well be developed for third generation GW detectors as described below.

\subsection{Lasers for Third Generation Detectors and Beyond}
The design discussed here will require higher powers \textcolor{red}{PowerSection} and longer wavelengths \textcolor{red}{WavelengthSection}. Thus laser designs in the 200\,--\,400\,W range are of interest, as are wavelengths in the \GwincVal{Blue.Laser.Wavelength_nm}\,nm range, which lie well beyond the absorption edge of silicon \textcolor{red}{2PhotonAbsorptionSection}, the material selected for the test masses in Voyager (\textcolor{red}{SiliconSection}). In what follows we provide a glimpse of what today appears to be attractive possibilities for future detectors.

\subsubsection{Fiber Laser Amplifier}
As discussed above, the current limitation on single frequency, low noise fiber amplifiers appears to be the onset of SBS close to 100\,W. Further detailed studies of phase noise in LMA fiber amplifiers need to be undertaken while incorporating techniques to suppress SBS and transverse mode instabilities. Furthermore, fiber laser amplifiers using different materials and dopants need to be investigated to assess the available wavelength range of these devices and the prevention of photo-darkening and other possible non-linear effects. At the time of writing, 100\,W was the maximum power that had been achieved in a polarization maintaining fiber at a single frequency \textcolor{red}{SingleFreqPowerReference} and at low noise consistent with gravitational wave detectors, but with insufficient long term reliability \textcolor{red}{ManBrilletReference}. More recent works reporting higher powers have not usually reported simultaneous achievement of the single frequency, single mode and low phase noise required for our purpose.
Nonetheless, considerable advances have been made in high power fiber lasers in general (\textcolor{red}{Review Jauregui 2013 Ref}) and for fibers potentially useful for gravitational wave detectors. Thus single-frequency, SBS-free, polarization-maintaining fiber amplifiers have been developed to beyond 400\,W \textcolor{red}{Jeung 2007 Ref}, and acoustically segmented photonic fibers to nearly 500\,W \textcolor{red}{Robin 2011 Ref}, but with imperfect beam quality, whereas 203\,W was estimated to be in \(TEM_{00}\) \textcolor{red}{Karow 2012 Ref} but with no phase noise measurements. In all cases, either the onset of SBS or transverse mode instabilities limit the maximum power attainable. The mode instabilities were observed to occur with a distinct threshold at about 275\,W for an LMA, 63\,$\mu$m core fiber amplifier \textcolor{red}{Eidam 2011 Ref} and have been analyzed by many authors since then. Thus \textcolor{red}{Smith 2011 Ref}\textcolor{red}{Ward 2012 Ref} analyzed the thermal origin of the instabilities and traced them to a thermo-optic grating created by the interference of the fundamental and a higher-order mode in the fiber, allowed by the large core diameter. Much progress has been made in improving the understanding of the origin of the mode instabilities \textcolor{red}{Jauregui 2012 Ref} and in increasing the power threshold for the onset of the instability, concentrating primarily on reducing the gain available to the higher order modes by more efficient gain saturation, choice of pump wavelengths, and choice of fibers and fiber geometries. The theoretical understanding is not complete, and much work on increasing the threshold for the mode instabilities is being done worldwide, both analytically and experimentally. However, for our purposes it seems reasonable to assume that reliable fiber amplifiers can be made in the power regime required, but this needs to be demonstrated for single frequency and low noise operation. Long term reliability and approaches to prevent possible photo-darkening also need to be studied. Due to the highly competitive market in fiber lasers, it is quite possible that progress on these topics has already been made, but not published in the open literature.

Fiber laser amplifiers are also attractive because their long interaction lengths allow efficient lasing using other dopants. Thus fiber lasers have been developed to emit at 1550\,nm using Er doping, and very efficient high power fiber lasers have been demonstrated using thulium at $\sim$1900\,nm and holmium at $\sim$2000\,nm. \textcolor{red}{Thulium doping Ref} \textcolor{red}{Holmium doping Ref} These laser hosts have not yet been developed for single frequency, low noise operation, and for use in gravitational wave detectors, reliable master oscillators must also be developed. NPROs using these dopants have been developed in YAG hosts (see below), but the resultant wavelengths are not necessarily suitable for use in glass fiber lasers. Thus it appears that the use of high power fiber laser amplifiers for gravitational wave detection is promising, but will require research to combine the approaches for mitigating SBS and mode instabilities with the simultaneous requirements for reliable single frequency, polarized, low noise operation, and/or make use of coherent combination of individual lower power devices. Finally, the devices need to be optimized for the preferred wavelength.
\subsection{Parametric Instabilities}
The optical cavities and the interferometer mirrors both have high quality factors, which allow for highly amplified resonances in the system. An accidental overlap between these optical and acoustic resonances, combined with the high circulating power, can lead to parametric instabilities~\cite{}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/PI-Loop.pdf}
\caption{Feedback loop diagram of opto-mechanical parametric instabilities}
\label{fig:PI}
\end{figure}

\subsection{Squeezed vacuum generation for 2000\,nm}
\label{s:2umsqz}
By employing a coherent control scheme~\cite{Henning:PRL2006}, as typically done to produce squeezing at \SI{1064}{\nano\meter} in the audio frequency regime, high levels of squeezing down to 10\,Hz should, in principle, be obtainable at different wavelengths. Indeed, high levels of squeezing at \SI{1550}{\nano\meter} have already been demonstrated in the MHz regime (12.3 dB~\cite{Mehmet:2011je}) by pumping PPKTP at \SI{775}{\nano\meter}. At the moment, no new technical difficulties peculiar to \SI{2000}{\nano\meter} are anticipated, and the best achieved squeezing to date (at or near \SI{2000}{\nano\meter}) is 4\,dB in the 1\,--\,40\,kHz band, demonstrated with a laser source at \SI{1984}{\nano\meter}~\cite{Mansell:2018}. This is illustrated in \Cref{fig:SqueezeHist} along with the history of squeezed light generation at different wavelengths (provided for reference).
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/SqueezeHistory.pdf}
\caption[Squeezing History]{Development of squeezed light sources over the years. The diameter of the circles is proportional to the log of the frequency at which the squeezing was demonstrated (i.e. small circles are for low audio frequencies) \cite{Slusher:1985,Kimble:1986,Vah2008}}
\label{fig:SqueezeHist}
\end{figure}

\subsection{Filter Cavities for Input Squeezing}
\label{sec:filter_cavities}
Quantum noise appears in two forms: shot noise and radiation pressure (back-action) noise. Frequency-independent squeezed vacuum injection yields a reduction of high frequency quantum shot noise and a corresponding increase of low frequency quantum radiation pressure noise. In this form, squeezed vacuum injection (as in~\cite{GEO:Squeezing, H1:Squeezing}) will not be suitable for LIGO Voyager{}. However, squeezed vacuum can be manipulated to generate frequency-dependent squeezing by rotating the squeezed field relative to the interferometer field in a frequency dependent way. This can be achieved by reflecting the squeezed beam from a high finesse, detuned {\it filter cavity} before injection into the interferometer~\cite{KLMTV2001}. Filter cavities and their properties have been extensively studied theoretically~\cite{Harms:2003wv, Kha2010, evans2013realistic}. The performance of a filter cavity can be characterized in terms of its intra-cavity losses per unit length. The lower the losses per unit length, the better the filter cavity is able to rotate the squeezing ellipse without degrading it. Direct measurements report round-trip losses of 10\,ppm (5\,ppm per bounce) for beam spot sizes in the 1\,--\,3\,mm range (corresponding to confocal lengths in the 5\,--\,25\,m range), giving losses per unit length of 0.5\,ppm/m with a 20\,m long filter cavity~\cite{Iso13}. Frequency dependent squeezing at \SI{1064}{\nano\meter} has been experimentally demonstrated with rotation of the squeezing quadrature taking place around 1\,kHz and squeezing levels of 5.4\,dB and 2.6\,dB observed at high and low frequency, respectively \cite{PhysRevLett.116.041102}.
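Whatever the squeezing source, the level observed at the readout is ultimately set by optical losses, which are discussed next. As a reminder of that scaling, the sketch below computes the detected squeezing for a given injected level and total path efficiency; the numbers are illustrative only.
\begin{verbatim}
# Detected squeezing vs. optical loss: a path with power efficiency eta
# mixes unsqueezed vacuum into the squeezed field.  The injected level
# and the efficiencies below are illustrative placeholders.
import math

def detected_squeezing_db(injected_db, eta):
    """Detected squeezing (dB) after a path with power efficiency eta."""
    var = eta * 10**(-injected_db / 10.0) + (1.0 - eta)
    return -10.0 * math.log10(var)

for eta in (0.6, 0.8, 0.95):
    print(f"eta = {eta:.2f}: {detected_squeezing_db(12.0, eta):.1f} dB "
          f"(from 12 dB injected)")
# eta = 0.60 -> ~3.6 dB  (comparable to the GEO600 example in the text)
# eta = 0.95 -> ~9.6 dB  (the kind of efficiency needed for ~10 dB)
\end{verbatim}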
Technical noises (optical loss, phase noise) have recently been calculated in order to estimate realistic performance of a filter cavity~\cite{Kwee:2017}. The experimental characterization of the noise coupling mechanisms which limit the filter cavity performance is the next necessary milestone before validating this technology for application in gravitational wave detectors. The LIGO Scientific Collaboration has a program in place to achieve these goals for \SI{1064}{\nano\meter}. A similar program needs to be established for \SI{2000}{\nano\meter}. A time scale of 3 years seems adequate to finalize a filter cavity design for LIGO Voyager{}, informed by the outcome of the on-going effort for \SI{1064}{\nano\meter}.

\subsection{Loss Control: General}
In table top experiments, squeezing levels higher than 10\,dB have been measured~\cite{Vah2008, Stefszky:Balanced2012, Lisa:RPP:2018}. However, the measured squeezing in GW detectors is strongly dependent on the total loss that the squeezed beam encounters in the path from the squeezed light source to the measurement photodetector. In practice, in existing gravitational wave detectors, reducing these optical losses below 20\% is non-trivial due to the large number of optical loss sources. For example, GEO600 reports~\cite{dooley2016geo} up to 4\,dB of detected squeezing, corresponding to 40\% total losses. LIGO Voyager{} will have to contend with the same practical issues. Every optical loss in the path from the squeezed light source to the final photo-detector contributes vacuum fluctuations that degrade the squeezed state. The list of optical loss sources includes: squeezer optical parametric oscillator (OPO) internal losses, mode-mismatch, Faraday rotators and associated elements, signal- and arm-cavity losses, output mode cleaner (OMC) throughput and photodetector quantum efficiency (QE). To achieve \pgfmathparse{int(round(\GwincVal{Blue.Squeezer.InjectionLoss}*100))}\pgfmathresult\% injection loss and \pgfmathparse{int(100- round(\GwincVal{Blue.Optics.PhotoDetectorEfficiency}*100))}\pgfmathresult\% readout loss, as currently assumed in the LIGO Voyager{} baseline curve, all of these loss sources need to be of the order of 0.5\%\,--\,2\%. Active mode-matching systems are currently in development for Advanced LIGO, and we plan to continue their development for LIGO Voyager{}. Continued effort is required to develop low-loss small optics for the OMC, polarizing components, OPO and Faraday isolators.

\input{QuantumEfficiency}

\subsection{Conclusion}
The parts required for 10\,dB of audio-band frequency-dependent squeezing at \SI{2000}{\nano\meter} have yet to be demonstrated. Analogous demonstrations at other wavelengths and the rate of technological development of squeezing over the last ten years, coupled with the absence of fundamental obstacles, lead us to conclude that the LIGO Voyager{} squeezing design is achievable.

\subsection{Loss Control: Quantum efficiency}
\label{s:QE}
One of the most challenging loss considerations at \SI{2000}{\nano\meter} is the QE of photodiodes. The QE of the photodetectors used at the GW signal extraction ports must be {\textgreater}99\% with a goal of 99.5\%. Additionally, the high-QE photodiodes will need to remain linear and low-noise with approximately \SI{10}{\milli\watt} of optical power incident on them. Several options are available for detectors: extended InGaAs, mercury cadmium telluride (MCT or HgCdTe), and InAsSb, and these are discussed below.
At this time, none of these options meets the high-QE, linearity, and low noise requirements, and significant development will be required on all these technologies to achieve better than 99\% QE while simultaneously coping with the large amount of incident optical power.

\subsubsection{Extended InGaAs photodetectors}
Current extended InGaAs photodetectors typically have low QE ($\sim$75\%) around \SI{2000}{\nano\meter}, although Laser Components Inc.~has a series of photodiodes that have QE up to 87\%~\cite{LaserComponentsExtendedInGaAs}. Extended InGaAs photodiodes achieve a broader spectral response by varying the relative amounts of InAs and GaAs in the semiconductor alloy to increase the cut-off wavelength of the photodetector. Unfortunately, this leads to lattice spacing mismatch within the material that, in turn, results in significantly increased $1/f$ noise and, indirectly, lower QE. It is an active area of research to determine if QE can be increased in extended InGaAs without introducing catastrophic levels of low-frequency dark noise.

\subsubsection{HgCdTe (MCT) photodetectors}
Mercury cadmium telluride (MCT or HgCdTe) detectors are commonly used for infrared astronomy. They have a strong response in the mid-IR from 1.5\,\si{\micro\meter}, with cut-off wavelengths of 2.5\,\si{\micro\meter}, 5\,\si{\micro\meter} or longer, depending on the construction. MCT detectors with a broadband AR coating have measured QE of approximately 94\% \cite{Teledyne:2017}. As most MCT detectors are p-n junction based, they rely on diffusion of electrons and holes across the active region, which is a (relatively) slow process and can lead to recombination of holes and electrons before they reach the junction. Recombination results in an effective loss of QE, as those charge carriers are ultimately not converted into photocurrent. MCT photodiodes could be promising in configurations other than the p-n junction.

\subsubsection{InAsSb photodetectors}
InAsSb detectors have matured in the past two decades \cite{Martyniuk:2014}. Traditionally suffering from low QE and high noise (when used at room temperature), InAsSb detectors have improved in recent years by exploiting different junction architectures \cite{Kilpstein:2011,Soibel2011, Steenbergen:2011}. They are currently a promising candidate for further consideration.

\subsection{Sidles-Sigg Instability}
\label{s:SiggSidles}
The optical power circulating in the arm cavities applies a torque on the mirrors and changes the dynamics of the suspended mirrors~\cite{Sidles:2006un, Hirose:10, Dooley:ASC:2013}. The magnitude of this radiation pressure induced optical torque depends upon the optical power and $g$-factors of the cavities. The circulating power acts as a spring with either positive or negative stiffness. The sign of the feedback depends on the misalignment mode. In the case where the two test masses have equal radii of curvature, a tilt of the optical axis produces a restoring torque; if the optical axis shifts, then the radiation pressure torque tends to further misalign the mirrors. In one case the torque induced by radiation pressure makes the suspension mode stiffer (hard), while in the other case it tends to make the mode less stiff (soft). \Cref{fig:HardSoftModes} shows the eigenfrequencies of hard and soft modes for different power levels. Here the nominal laser wavelengths of \SI{2000}{\nano\meter} for Voyager and \SI{1064}{\nano\meter} for Advanced LIGO are assumed. When the optical power is high enough, the soft mode becomes unstable.
A robust feedback control loop should have enough bandwidth to suppress the instability. Simulations show that if the frequency of the unstable mode is $f_{\rm soft}$, then the bandwidth of the control loop needs to be $\sim 3 f_{\rm soft}$, and significant filtering of the sensing noise ($\sim60$\,dB) can be achieved at $\sim10 f_{\rm soft}$. Since the frequency of the soft mode is less than 1\,Hz for the Voyager design at \SI{3}{\mega\watt}, sensing noise from angular loops should not limit the sensitivity. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/SoftHardModes.pdf} \caption[Angular Instability v. wavelength and g-factor] {Frequencies of the hard and soft modes vs.~arm power for various $g$-factors. The radii of curvature of the input and end test masses are set to be the same in this simulation. The larger rotational moments of inertia for LIGO Voyager{} remove the possibility of angular instabilities (a major controls problem in Advanced LIGO).} \label{fig:HardSoftModes} \end{figure} The hard/soft frequencies are functions only of the cavity $g$-factors, and not explicitly the laser wavelength. However, if the laser beam spot size on the mirrors is kept to a maximum value, $\omega_{max}$, due to clipping losses, then the cavity $g$-factor will be smaller for a longer wavelength. Stated another way, if the beam spot size is maximized to reduce the thermal noise, the longer wavelength results in a more stable interferometer. \subsection{Material} We have chosen \GwincVal{Blue.Materials.Substrate.Temp}{}\,K crystalline silicon as the test mass material for LIGO Voyager. \Cref{fig:SiThermal} shows the thermal noise strain curves from crystalline silicon test masses held at \GwincVal{Blue.Materials.Substrate.Temp}{}\,K, where it can be seen that neither Brownian nor thermo-optic substrate noises should limit detector sensitivity. To justify this material and temperature choice, we compare its thermal noise performance with three other materials that are currently used or proposed for use in GW interferometers: fused silica~\cite{aLIGODetectorRef, VirgoDetectorRef}, sapphire~\cite{PhysRevD.89.062003}, and 10\,K silicon~\cite{HiEA2009}. Thermal noise in a fused silica test mass is limited by Brownian motion, which is related to mechanical loss through the fluctuation-dissipation theorem~\cite{CaWe1951, Kubo:FDT, Callen:1959}. In fused silica, the mechanical loss has a broad peak below room temperature. Thus its thermal noise does not benefit from cryogenic cooling~\cite{schroeter2007}. Silicon has lower mechanical loss, and consequently lower Brownian noise than fused silica without any loss peaks at low temperatures~\cite{NumataYamamotoCryogenics}. % \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/silicaSapphire.pdf} \caption{Strain noise from thermally induced noise sources in the LIGO Voyager \ \GwincVal{Blue.Materials.Substrate.Temp}{}\,K crystalline silicon test masses. } \label{fig:SiThermal} \end{figure} Sapphire, like silicon, is free of cryogenic loss peaks. However, thermo-elastic noise is an important noise mechanism in these two crystalline materials, due to their high thermal conductivity. Thermodynamic fluctuations of heat inside the material are the source of this noise. The fluctuations are converted to mirror surface displacement through the coefficient of thermal expansion $\alpha$. 
The displacement power spectral density is $S_{\rm TE}(f) \sim \kappa \alpha^2 T^2$, where $\kappa$ is the thermal conductivity~\cite{BGV1999,LiTh2000}. Thermo-elastic noise can be mitigated by holding the test mass at a temperature near absolute zero ($\sim$20\,K is sufficient for sapphire), where $\alpha$ must vanish due to the Nernst heat theorem. Silicon has the unusual property that its thermo-elastic noise is also eliminated at an elevated temperature, \GwincVal{Blue.Materials.Substrate.Temp}{}\,K, where $\alpha$ crosses through zero~\cite{Kim1992} (see \Cref{fig:magicSi123K}). To operate an interferometer at temperatures in the 10\,--\,20\,K regime requires imposing an austere heat budget on the test mass, which in turn makes it difficult to achieve high circulating power in the arms~\cite{somiya2012detector, HiEA2009}. By contrast, at \GwincVal{Blue.Materials.Substrate.Temp}{}\,K, the test mass heat budget is compatible with the use of megawatts of circulating power. This advance in optical power handling is what will allow us to also reduce the quantum noise, so as to realize the benefit of the improved thermal noise in \GwincVal{Blue.Materials.Substrate.Temp}{}\,K silicon across a broad band of frequencies. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figures/SiliconMagic123.pdf} \caption{Coefficient of thermal expansion (CTE) vs temperature for silicon (blue). The substrate thermo-elastic noise at 100Hz (solid red) is minimized at 123\,K when the CTE crosses zero~\cite{Kim1992}. Shown for reference is the quantum noise (QN) at 100\,Hz (dashed red), corresponding to a fixed \SI{3}{\mega\watt} in the arms (for simplicity, we have deliberately ignored secondary effects that can cause the stored arm power to vary with temperature, such as temperature-dependent variations in power-handling in the mirrors).} \label{fig:magicSi123K} \end{figure} \subsection{Size and composition} Large, high purity silicon crystals will be required for the LIGO Voyager{} test masses. The size of the test mass affects the sensitivity in two ways. First, larger mirror surfaces enable larger optical spot sizes, thus reducing the coating thermal noise. Second, heavier masses suffer less disturbance from radiation pressure forces. Impurities in the silicon can degrade the sensitivity. The most stringent known requirement derives from the production of free carriers (unbound electrons and holes) by these impurities and, ultimately, the impact this has on the cryogenic cooling system. To couple light into the arm cavities, a high-power beam must transmit through each input test mass. Some of the light interacts with free carriers inside the silicon substrate and is absorbed, heating up the test mass. High purity silicon is needed so that the heating due to free carrier absorption does not exceed the radiative cooling. The size and composition of silicon crystals available to us are dictated by the commercially viable processes for crystal growth. Crystals with ultra-low contamination are produced using the float zone technique, but this process has not been scaled up to sizes greater than 20\,cm in diameter. The magnetically stabilized Czochralski (MCZ) technique, on the other hand, yields 45\,cm crystals that are somewhat less pure than float zone silicon~\cite{SiliconMaterialsSemiconductorHandbook}. MCZ silicon is the most promising candidate for producing test masses of the size needed for LIGO Voyager. Oxygen is by far the most abundant impurity in MCZ silicon. 
It enters by diffusion from the fused silica crucible that holds the molten silicon, and is typically present at the level of \SI{1E17}{\per\cubic\centi\metre} or even higher. Most of this oxygen is interstitial to the lattice of silicon atoms, and does not affect the free carrier density. However, oxygen also forms complexes, referred to as ``thermal donors'', that add free electrons. Rapid annealing may offer a way to disrupt oxygen complexes and eliminate some of the free carriers they contribute, which would otherwise be the dominant population in undoped MCZ silicon~\cite{OxygenInSiliconChap7, doi:10.1063/1.368586, Kissinger2015}. Other impurities include carbon, boron, and phosphorus. Carbon, typically found at \SI{1E15}{\per\cubic\centi\metre}, has little effect on the free carrier density. Boron and phosphorus are used as dopants to manipulate the carrier density, and they are found even in undoped silicon as contaminants with concentrations $\sim$\SI{1E12}{\per\cubic\centi\metre}.
\subsection{Absorption}
Noteworthy absorption processes in silicon include inter-band absorption, two-photon absorption, and free carrier absorption. Due to the choice of wavelength and power density in the optics, the inter-band and two-photon absorption are found to be unimportant for LIGO Voyager~\cite{doi:10.1063/1.4923379, doi:10.1063/1.2737359}. In the Drude model of free carrier response, the free carrier absorption is calculated as~\cite{Soref1987}: \begin{equation} \label{eq:FreeCarrierAbsorption} \alpha_{\rm FC} = \frac{e^3 \lambda^2}{4 \pi^2 \epsilon_0 n c^3} \frac{n_c}{m_*^2 \mu} \end{equation} with $\lambda$ the optical wavelength, $e$ the elementary charge, $n$ the refractive index, $n_c$ the density of free carriers, $m_*$ the carrier effective mass, and $\mu$ the carrier mobility. (Note that the carrier density, mass, and mobility are different for electrons and holes.) According to \Cref{eq:FreeCarrierAbsorption}, absorption of roughly 1\,ppm/cm would be expected, if the level of residual boron and phosphorus doping available in MCZ silicon is the limiting factor. Absorption as low as 4.3\,ppm/cm has been measured at 1550\,nm in float zone silicon~\cite{Degallaix:13}. This result was in excess of the Drude model prediction, possibly due to the existence of an absorption band near 2300\,nm in $n$-type silicon~\cite{PhysRev.108.268}. Absorption measurements and annealing experiments on MCZ silicon samples are in progress, to better understand the mechanisms that limit absorption, and how thoroughly the contribution of oxygen can be suppressed.
\subsection{Phase noise}
The dominant phase noise term in the substrate is expected to be thermo-refractive noise. Like thermo-elastic noise, this noise is sourced by thermodynamic fluctuations of heat inside the material. The fluctuations are converted to refractive index fluctuations through the coefficient $\beta = dn/dT$. The resulting phase noise is imposed on the light in the signal recycling cavity. The power spectral density of this noise has been estimated as~\cite{Braginsky:2004fp}: \begin{equation} S_{\rm TR}(f) = \frac{4 a \beta^2}{\pi^3 w^4 f^2} \frac{\kappa k_{\rm B} T^2}{\rho^2 C^2} \end{equation} in units of signal recycling cavity displacement, for a Gaussian beam of radius $w$, traversing an infinite plate with thickness $a$, where $\rho$ is the density, $C$ is the specific heat capacity, and $\kappa$ is the thermal conductivity.
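As a rough orientation, the following numerical sketch (which is not part of the formal noise model presented in this paper) evaluates the expression above at $f = 100$\,Hz. The substrate thickness, beam radius, and 123\,K material properties used here are illustrative assumptions rather than the design values fixed elsewhere in this article.
\begin{verbatim}
# Illustrative evaluation of the thermo-refractive PSD above at f = 100 Hz.
# All parameter values below are assumed, Voyager-like numbers, not the
# official design values.
import numpy as np

kB    = 1.380649e-23  # Boltzmann constant [J/K]
T     = 123.0         # substrate temperature [K]
a     = 0.55          # test mass thickness [m]              (assumed)
beta  = 9.5e-5        # dn/dT of silicon near 123 K [1/K]    (assumed)
w     = 0.059         # beam radius on the ITM [m]           (assumed)
kappa = 500.0         # thermal conductivity [W/(m K)]       (assumed)
rho   = 2330.0        # density of silicon [kg/m^3]
C     = 330.0         # specific heat near 123 K [J/(kg K)]  (assumed)
f     = 100.0         # Fourier frequency [Hz]

S_TR = (4*a*beta**2/(np.pi**3 * w**4 * f**2)) * (kappa*kB*T**2/(rho**2 * C**2))
print("sqrt(S_TR) ~ %.1e m/rtHz (signal recycling cavity displacement)"
      % np.sqrt(S_TR))   # ~1e-18 m/rtHz with these numbers
\end{verbatim}
Translating such a displacement into an equivalent detector strain involves the interferometer response, in particular the arm cavity gain.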
For LIGO Voyager, thermo-refractive noise is expected to be below the coating and quantum noise terms, but still within a factor of a few of limiting the sensitivity, as shown in \Cref{fig:SiThermal}. Analogously, the density of free carriers in silicon has an effect on the refractive index, so that thermodynamic fluctuations of the carrier density $n_{\rm c}$ impose phase noise. The magnitude of this effect is described by a carrier dispersion coefficient $\gamma_{\rm c} = dn/dn_{\rm c}$ (different for electrons and holes). The carrier density noise was estimated as~\cite{CDNoiseBruns2020}: \begin{equation} S_{\rm CD}(f) = \frac{2 n_{\rm c} \gamma_{\rm c}^2 a l_{\rm D}^2}{\pi w^2 D_{\rm c}} \end{equation} referred to signal recycling cavity displacement, where $D_{\rm c}$ is the carrier diffusion coefficient (also different for electrons and holes), and $l_{\rm D}$ is the Debye length. Although this noise has yet to be experimentally validated, the noise level was estimated to be less than $10^{-28} /\sqrt{\rm Hz}$, and thus is expected to be negligible for LIGO Voyager. \subsection{Scattering} The absorption, refractive index, birefringence, and surface profile of the test masses should all be uniform spatially, as far as possible. Any spatial inhomogeneity leads to scattering of the light that interacts with the test mass. Scattering is problematic because the loss of light can limit the buildup of optical power in the cavities. Even worse, scattered light often finds a path to return to the interferometer, thus contaminating the output with the ambient noise of all surfaces it encountered along the way. The specific requirements on these characteristics will be determined as part of the detailed optical design of LIGO Voyager{}. The tolerable amount of scattered light will be smaller than specified for Advanced LIGO~\cite{T000127}. However, LIGO Voyager{} will also be less prone to wide-angle scattering, as discussed in \cref{s:scatter}. Spatial gradients in the atomic impurities discussed above are one likely source of inhomogeneity in MCZ silicon crystals. Another is microscopic crystal defects, such as voids, stacking faults, and SiO$_2$ precipitates~\cite{Vanhellemont2015}. Impurity and defect populations can be manipulated during the crystal growth process, and also to some extent by annealing of the finished crystal. If we suppose that voids are the predominant defect population, approximated as spheres with a characteristic radius of 100\,nm, then we can compute their scattering cross-section due to Mie scattering at wavelength 2000\,nm. For a void concentration of \SI{1e3}{\per\cubic\centi\metre}, the resulting loss is estimated as 10\,ppm per round trip through a LIGO Voyager{} test mass. Measurements to check the level of scatter loss in MCZ silicon crystals are underway~\cite{G1700998}. \subsection{Thermal lensing and active wavefront control} \label{s:thermalLensing} GW interferometers suffer from the detrimental effects of thermal gradients and distortion due to absorption of optical power~\cite{Brooks:16,Winkler:TCS} in the surface and substrates of the core optics. LIGO Voyager{} is no exception, but the high thermal conductivity of silicon at cryogenic temperatures helps to mitigate this issue. 
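A convenient figure of merit for substrate thermal lensing is the defocus produced per watt of absorbed power; using the approximate expression quoted later in this paper, $S \simeq (dn/dT)\,P_{\rm abs}/(\pi \kappa w^2)$, this is proportional to $(dn/dT)/\kappa$. The short sketch below (an order-of-magnitude comparison only) evaluates this ratio with the material parameters listed in \Cref{tab:TCS}; the factor of roughly six between silicon at 300\,K and at 123\,K underlies the compensation plate arguments that follow.
\begin{verbatim}
# Rough comparison (illustrative only) of substrate thermal-lens response per
# absorbed watt, proportional to (dn/dT)/kappa.  Values as quoted in tab:TCS.
materials = {
    #  name                 dn/dT [1/K]   kappa [W/(m K)]
    "fused silica, 300 K": (8.6e-6,       1.38),
    "silicon, 123 K":      (9.5e-5,       500.0),
    "silicon, 300 K":      (1.8e-4,       150.0),
}
fom = {name: dndT / kappa for name, (dndT, kappa) in materials.items()}
for name, val in fom.items():
    print("%-20s (dn/dT)/kappa = %.1e m/W" % (name, val))
print("ratio Si(300 K)/Si(123 K) = %.1f"
      % (fom["silicon, 300 K"] / fom["silicon, 123 K"]))
# -> ~6: a room-temperature silicon plate lenses about six times more per
#    absorbed watt than the 123 K test mass it is meant to compensate.
\end{verbatim}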
Analogous to the Advanced LIGO thermal compensation system \cite{Brooks:16}, in LIGO Voyager{} there are two room-temperature silicon compensation plates in the recycling cavities, as illustrated in \Cref{fig:IFO_schematic}, to which thermal actuation can be applied to correct for lensing in the substrates of the core optics. Room-temperature silicon is preferable to fused silica for the compensation plates for two reasons: \begin{itemize} \item Silicon's thermal lensing per watt is a factor of six greater at \SI{300}{\kelvin} than at \SI{123}{\kelvin}. Consequently there is much more actuation per watt in the compensation plate than distortion per watt in the test mass, yielding a comfortable measure of control on the thermal lensing. \item Due to the increased absorption of mid-IR wavelengths (particularly at longer wavelengths), self-heating in a fused silica compensation plate may produce a larger thermal lens than the one to be corrected in the test mass (see \Cref{s:fused_silica_absorption} for details). Absorption in the room-temperature silicon compensation plates is expected to be comparable to that found in the test masses~\cite{PhysRev.108.268, Degallaix_2014}. \end{itemize} Point absorbers on the reflective surface of the test mass have impaired the performance of Advanced LIGO~\cite{PointAbsorbers2019}. However, LIGO Voyager{} will not suffer from this problem, as the coefficient of thermal expansion of the silicon test mass is effectively zero at \SI{123}{\kelvin}. The surface deformation from point absorbers at full operating power is expected to be at least $1000$ times smaller than in Advanced LIGO. Finally, the current design has not specified a way to tune the radius of curvature of the test masses (in Advanced LIGO this tuning relies on a non-zero coefficient of thermal expansion). Unless such an actuator can be devised, the curvature error tolerance will be tighter than in Advanced LIGO. The curvature tolerance will be computed using a full simulation/model of the interferometer that includes the effects of control loops and higher order modes. This is still under development and beyond the scope of this design paper. However, we indicate here the considerations that will impact the tolerance specification: \begin{itemize} \item optimizing the mode-matching between the two arm cavities (to minimize the differential loss), \item optimizing the mode-matching to the two recycling cavities (to maximize power build up, signal bandwidth and squeezing efficiency), \item ensuring the overall design of the arms is such that no higher order modes are close to resonant in the arms, and \item ensuring the arm cavities are designed to minimize the number of parametric instabilities that have to be damped, see \Cref{s:PI}. \end{itemize} \subsection{Introduction} The LIGO Voyager{} suspension system will have much in common with the Advanced LIGO suspension~\cite{Aston:2012}. The basic quadruple pendulum design will be used. The upper two masses and their suspensions will be made from steel, and the lower two masses and suspension elements between them are made from a single material (silica in Advanced LIGO and silicon in LIGO Voyager{}). Hydroxide-catalysis bonding~\cite{Rowan:1998,vanVeggel:2009} or optical contacting will be used to assemble the final monolithic stage. The three-stage seismic isolation system used in Advanced LIGO~\cite{Wen:2014hm, Matichard:2015hb} will be reused for LIGO Voyager{}, with minor engineering modifications to accommodate the heavier payload. 
There are, however, two major differences between the Advanced LIGO suspensions and those of LIGO Voyager{}: (i) The silica cylindrical fiber final stage suspension will be replaced with silicon ribbons, and (ii) silicon cantilever blade springs for vertical isolation will be added to the final stage. \begin{figure}[b] \centering \includegraphics[width=0.5\columnwidth]{figures/cryo_suspension_3Dmodel.pdf} \caption{Conceptual model of the LIGO Voyager{} silicon monolithic suspension. The plates surrounding the masses represent a cut-away view of the thermal shields. } \label{fig:suspension} \end{figure} \subsection{Suspension design} The lower two masses of the LIGO Voyager{} suspension will be cooled radiatively (\Cref{fig:suspension}). The silicon test mass will be suspended by four silicon ribbons, via silicon vertical-spring blades attached to the silicon penultimate mass. This section between the test mass and the penultimate mass is conductively cooled by the cold masses. The cold section and the other upper masses are suspended with steel wires from the upper stages. The mass distribution and suspension lengths have been designed to minimize the quadrature sum of the modeled seismic noise and suspension thermal noise at \SI{12}{\hertz}, as described in \Cref{tab:suspension_mass_distribution}. The current seismic platform is able to support a payload of up to \SI{1150}{\kilo\gram}. In our design, \SI[round-mode=figures,round-precision=3]{\GwincVal{Blue.Suspension.Stage(4).CummulativeMass}}{}~\SI{}{\kilo\gram} was assigned to the main suspension chain, reserving \SI{630}{\kilo\gram} for the reaction chain, the cage structure, and balancing mass. The total length of the main suspension chain from the top suspension point to the optical height of the test mass remains the same as Advanced LIGO. The resulting overall isolation of the suspension is shown in \Cref{fig:suspension_isolation}. For a given total length of the suspension chain, the best vibration isolation above the pendulum resonant frequencies is realized with equal length stages~\cite{T1300786}. However, we have chosen to make the bottom two stages as long as possible, so as to reduce the thermal noise from the penultimate stage (cf.~\Cref{sec:sus-thermal-noise}). To maintain the $\sim$\SI{10}{\hertz} seismic wall, the noise of the seismic platform can be improved through lower noise seismometers for the feedback control. 
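To illustrate the stage-length trade-off mentioned above, the toy calculation below treats the quadruple suspension as a chain of ideal point-mass pendulums, for which the horizontal transmissibility well above all resonances falls off as $\prod_i (f_i/f)^2$ with $f_i = \sqrt{g/L_i}/2\pi$. Prefactors set by the mass distribution, as well as damping, blade springs, and cross-couplings, are ignored here since they do not affect the comparison between stage-length choices; for a fixed total length the product is smallest when the stages are of equal length.
\begin{verbatim}
# Toy model (illustrative only): high-frequency horizontal transmissibility of
# a four-stage pendulum, |T(f)| ~ prod_i (f_i/f)^2, f_i = sqrt(g/L_i)/(2*pi).
# Mass-distribution prefactors, damping and cross-couplings are neglected.
import numpy as np

g, f  = 9.81, 10.0                       # [m/s^2], frequency of interest [Hz]
L_tot = 1.642                            # total length of the chain [m]

def transmissibility(lengths, f):
    f_i = np.sqrt(g / np.asarray(lengths)) / (2 * np.pi)
    return np.prod((f_i / f) ** 2)

equal   = [L_tot / 4] * 4                # four equal-length stages
unequal = [0.422, 0.277, 0.341, 0.602]   # aLIGO-like lengths, same total
print("equal stages   |T(10 Hz)| ~ %.2e" % transmissibility(equal, f))
print("unequal stages |T(10 Hz)| ~ %.2e" % transmissibility(unequal, f))
# Equal stages minimize the product for a fixed total length (AM-GM inequality);
# Voyager instead lengthens the lowest stages to reduce the thermal noise from
# the penultimate stage, accepting slightly poorer isolation.
\end{verbatim}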
\begin{table}[b] \centering \begin{tabular}{lccccc} \toprule &\multicolumn{2}{c}{aLIGO} &\hspace*{10pt}& \multicolumn{2}{c}{LIGO Voyager{}} \\ Parameters & mass (\SI{}{\kilo\gram}) & length (\SI{}{\meter}) && mass (\SI{}{\kilo\gram}) & length (\SI{}{\meter}) \\ \midrule Payload total & 124 & 1.642 && \SI[round-mode=figures,round-precision=3]{\GwincVal{Blue.Suspension.Stage(4).CummulativeMass}}{} & 1.642 \\ Top mass & 22 & 0.422 && \SI[round-mode=figures,round-precision=3]{\GwincVal{Blue.Suspension.Stage(4).Mass}}{} & \SI[round-mode=figures,round-precision=3]{\GwincVal{Blue.Suspension.Stage(4).Length}}{} \\ Second mass & 22 & 0.277 && \SI[round-mode=figures,round-precision=3]{\GwincVal{Blue.Suspension.Stage(3).Mass}}{} & \SI[round-mode=figures,round-precision=3]{\GwincVal{Blue.Suspension.Stage(3).Length}}{} \\ Penultimate mass & 40 & 0.341 && \SI[round-mode=figures,round-precision=3]{\GwincVal{Blue.Suspension.Stage(2).Mass}}{} & \SI[round-mode=figures,round-precision=3]{\GwincVal{Blue.Suspension.Stage(2).Length}}{} \\ Final mass & 40 & 0.602 && \SI[round-mode=figures,round-precision=3]{\GwincVal{Blue.Suspension.Stage(1).Mass}}{} & \SI[round-mode=figures,round-precision=3]{\GwincVal{Blue.Suspension.Stage(1).Length}}{} \\ \bottomrule \end{tabular} \caption[Summary of the suspension parameters] {Summary of the suspension parameters for the quadruple suspensions for Advanced LIGO and LIGO Voyager{}. Here the length of each stage refers to the wire (ribbon) length between that stage and the one above it. The total length refers to the total length of the suspension chain from the top suspension point to the optic center.} \label{tab:suspension_mass_distribution} \end{table} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/sus_horiz_isolation_tf.pdf} \caption[Vibration isolation of the suspension] {Horizontal vibration transmission of the quadruple suspensions in Advanced LIGO and LIGO Voyager{}.} \label{fig:suspension_isolation} \end{figure} \subsection{Fabrication of a monolithic silicon final stage} The final stage suspension will use silicon ribbons to suspend the silicon test mass from silicon vertical-spring blades bonded to the silicon penultimate mass. Crystalline silicon is the preferred material for the suspension, considering the thermal noise and the material matching with the mirror. Silica fibers like those of the second generation detectors are not suitable, because of their increased mechanical dissipation at low temperature~\cite{ANDERSON:1955cb, Fine:2004hr, Marx:2004dr, McSkimin:2004dh}. Not all of the engineering design of the monolithic stage has been determined. However, as discussed in \Cref{sec:sus-thermal-noise}, the thermal noise in this stage does not limit the sensitivity of the interferometer. It leaves plenty of room to relax the design requirements regarding the thermal noise to make the construction of this stage feasible. \subsubsection{Production of silicon ribbons} Silicon ribbons can be manufactured by cutting and etching a long silicon boule or a large wafer~\cite{Cumming:2014iz}. In the LIGO Voyager{} design, the ribbons have a width of \SI{10}{\milli\meter} and a thickness of \SI{0.5}{\milli\meter}. Since the test mass is cooled by radiation, the ribbon dimensions are determined purely by the tensile strength of silicon, and heat conduction is irrelevant. A review of silicon's tensile strength can be found in~\cite{Cumming:2014iz}. 
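As a quick check of the loading implied by these ribbon dimensions, the sketch below evaluates the static tensile stress in the four ribbons assuming a nominal \SI{200}{\kilo\gram} test mass (the value quoted in the design overview); this is a back-of-the-envelope estimate only, and the actual numbers follow from the design parameters used elsewhere in this paper.
\begin{verbatim}
# Back-of-the-envelope check: static stress in four silicon ribbons of the
# cross-section quoted above, carrying an assumed 200 kg test mass.
g = 9.81                   # [m/s^2]
m_test = 200.0             # test mass [kg]  (nominal value, assumed here)
n_ribbons = 4
width, thickness = 10e-3, 0.5e-3          # ribbon cross-section [m]

stress = (m_test * g / n_ribbons) / (width * thickness)
print("static ribbon stress ~ %.0f MPa" % (stress / 1e6))
# ~98 MPa, i.e. close to the conservative 100 MPa working strength assumed in
# the following paragraph (itself well below most measured breaking strengths).
\end{verbatim}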
In that review, the measured tensile strengths range from \SI{200}{\mega\pascal} to \SI{8.8}{\giga\pascal}, depending on the dimensions of the ribbons, and the importance of the surface and edge quality is emphasized. Recent results for various surface treatments are reported in~\cite{Cumming:2014iz} and~\cite{Birney:2017dt}, with average tensile strengths ranging from \SI{100}{\mega\pascal} to \SI{400}{\mega\pascal}. We have assumed a tensile strength of \SI{100}{\mega\pascal} to provide a safety factor, although stronger and thinner ribbons should become possible as the development progresses.
\subsubsection{Hydroxide-catalysis bonding of the final stage}
Instead of the laser welding used for fused silica suspensions, hydroxide-catalysis bonding (HCB) can be used for the assembly of the LIGO Voyager{} suspensions. HCB of oxide materials~\cite{Rowan:1998} was used in Gravity Probe B~\cite{Buchman:1996fl}, and also to bond some glass parts to aLIGO test masses~\cite{Aston:2012}. The same technique has been demonstrated to work on silicon~\cite{vanVeggel:2009}. The upper limit of the mechanical loss associated with the bonded silicon was reported to be $\phi < (5 \pm 2) \times 10^{-3}$~\cite{Prokhorov:2017jd}. The effect of this mechanical loss is not included in the thermal noise calculations here, and will have to be calculated by FEA. As with Advanced LIGO, we expect to only estimate the thermal noise using FEA and mechanical Q measurements, since the direct measurement of the suspension's thermal noise is challenging even with the extremely low displacement noise of a gravitational wave interferometer.
\subsubsection{Vertical suspension isolation}
Vertical-spring blades are designed to lower the vertical resonant frequency and reduce the amount of vertical seismic and thermal noise coupling to the horizontal motion of the mirror. The bottom stage vertical springs will be made of silicon, for cryogenic compatibility, and they will be HCB-bonded to the penultimate mass and to the ribbon, as conceptually shown in \Cref{fig:suspension} (cf.~the sapphire blades for KAGRA~\cite{2016JPhCS.716a2017K}). As with silicon ribbons, the dimensions of the silicon vertical-spring blades will be determined by the breaking stress of silicon. If a breaking strength of \SI{100}{\mega\pascal} is assumed, a \SI{400}{\milli\meter}-long, \SI{80}{\milli\meter}-wide triangular blade with a thickness of \SI{12}{\milli\meter} can sustain \SI{50}{\kilo\gram} of load. This blade would have a vertical spring constant of \SI{6.5e4}{\newton / \meter}, which yields a rather high resonant frequency of \SI{5.7}{\hertz}. Further surface treatments of the blades should allow us to increase the breaking strength and lower the vertical frequency. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{figures/BQuad_Thermal.pdf} \caption[Suspension Thermal Noise] {Estimated total suspension thermal noise (for a single test mass), and the contribution from losses localized at each mass.} \label{fig:susnoise} \end{figure}
\subsection{Suspension thermal noise} \label{sec:sus-thermal-noise}
\Cref{fig:susnoise} shows the current thermal noise estimate for a single test mass suspension, along with a breakdown of the losses associated with each suspension stage. This rough thermal noise model includes the bulk mechanical loss of the silicon and steel for the ribbons, wires, and blades, thermoelastic noise, and the surface loss effect of silicon.
It does not include more detailed effects such as the mechanical loss associated with bonding, the shape factor of the ribbons and blades, couplings between the mechanical modes, etc. An increase in the suspension thermal noise relative to our rough estimates will result in a small increase in the overall LIGO Voyager{} noise at $\sim10$\,Hz, and a negligible impact on astrophysical metrics~\cite{VoyScienceCase}. The total suspension thermal noise below \SI{20}{\hertz} is dominated by the penultimate stage wire, particularly its upper end, which is attached to the upper intermediate mass. This is due to the high mechanical loss ($\phi = \SI{2e-4}{}$) of the steel wire, and the warm temperature of the mass. This noise cannot be mitigated by, for example, changing the wire material to fused silica or silicon. The penultimate stage has different temperatures for the upper and lower joints, and these alternative materials would suffer from excessive thermal noise at either the upper or the lower joint. To filter the noise from the penultimate stage, the lengths of the final two stages were made as long as possible (leaving \SI{0.15}{\meter} as the minimum length of the upper stages). The noise of the final stage dominates above \SI{20}{\hertz}, but is very low due to the low mechanical loss and negligible thermal expansion of silicon at \GwincVal{Blue.Materials.Substrate.Temp}{}\,K. Another notable feature is the violin modes seen around \SI{120}{\hertz}, whose frequencies are set by the dimensions of the silicon ribbons. Further improvement of the tensile strength and optimization of the ribbon size are desirable, to move the violin modes higher.
\subsection{Thermal lensing}
Light absorbed in the coating and substrate of the test mass causes thermo-elastic distortion of the mirror surface (figure errors) and thermal lensing in the substrate, both of which can degrade interferometer performance~\cite{TCSsomething}. For silica test masses, both effects are significant. Maintaining silicon test masses at the zero CTE point results in low thermo-elastic distortion, while thermal lensing in the substrate remains significant. The defocusing power $S$ (the reciprocal of the focal length) of the induced thermal lens may be approximated as \begin{equation} S = \frac{dn/dT}{\pi\kappa w^2}P_\mathrm{abs}, \end{equation} where $dn/dT$ is the temperature coefficient of refractive index of the substrate, $\kappa$ is the thermal conductivity of the substrate, $w$ is the $1/e^2$ intensity radius of the laser beam, and $P_\mathrm{abs}$ is the absorbed optical power in the test mass and coating. In the Advanced LIGO interferometers, thermal lens compensation is achieved using radiative heating coils near the edge of the mirror surface, as well as compensation plates (CPs) heated by a CO$_2$ laser projected in a pattern which creates a negative thermal lens. The heating coils mainly act to reduce surface deformations of the arm-cavity optics, while the heated compensation plates remove the thermal lens of the mirror substrate. Such a system would also be necessary for a cryogenic interferometer. In order to optimally control thermal lensing, we require: (a) a {\it test mass} in which absorbed interferometer power (and therefore thermal lensing) is minimized, (b) a {\it compensation plate} in which absorbed interferometer power (and therefore thermal lensing) is minimized, and (c) a compensation plate in which the thermal lensing response per watt of absorbed power is maximized.
In this third point, we are implicitly referring to power absorbed in the CP from sources other than the interferometer. Table \ref{tab:TCS} shows the amount of self-induced thermal lensing expected from absorbed interferometer power for the Advanced LIGO test mass, a cryogenic silicon test mass, a room-temperature fused silica compensation plate and a room-temperature silicon compensation plate. \begin{table} \centering \begin{tabular}{ |c| c| c| c| c | } \hline & aLIGO & Si-TM & SiO$_2$-CP & Si-CP \\ \hline \hline Temperature, T (K) & 300 & 123 & 300 & 300 \\ \hline Absorbed power, $P_\textrm{abs}$ (W) & 0.4 & 3 & 0.25 & xx \\ \hline Thermo-refractive coeff., $dn/dT$ (K$^{-1}$) & 8.6E-6 & 9.5E-5 & 8.6E-6 & 1.8E-4 \\ \hline Thermal conductivity, $\kappa$ (W/(m\,K)) & 1.38 & 500 & 1.38 & 150\\ \hline Beam size, $w$ (cm) & 5.4 & \GwincVal{Blue.Optics.ITM.BeamRadius_cm} & \GwincVal{Blue.Optics.ITM.BeamRadius_cm} & 5.9 \\ \hline \hline Defocus, $S$ ($\upmu$D) & 272 & 66 & 170 & xx \\ \hline \end{tabular} \caption{Total substrate lens self-induced by each optic.} \label{tab:TCS} \end{table} A fused silica compensation plate produces roughly three times as much self-induced thermal lensing as the test mass it is there to compensate. Although the total thermal lens to correct (66\,$\mu$D + 170\,$\mu$D = 236\,$\mu$D) is of the order of the aLIGO thermal lens (272\,$\mu$D), this is still a sub-optimal starting point. As discussed in \Cref{s:thermalLensing}, a comparison of the thermal-lens response of candidate materials to applied heating power, together with their absorption at the interferometer wavelength, indicates that room-temperature silicon is the optimum compensation plate material.
\subsubsection{Actuator configuration}
Active wavefront control actuation is foreseen for the following optics:
\begin{itemize} \item Test masses: no \item Compensation plates: yes \item Beam splitter: yes \item Power/signal recycling core optics: yes ($\times$2) \item Input optics: yes ($\times$2) \item SRC to OMC path: yes ($\times$2) \item Squeezer to filter cavity path: yes ($\times$2) \item Filter cavity to IFO path: yes ($\times$2) \end{itemize}
\subsection{Static lens correction}
A future interferometer will require significant control over the mode-matching between the input mode cleaner, the interferometer (with its four internal cavities), the injected squeezed field, the filter cavity and the output mode cleaner.
\subsection{Higher order modes and PI control}
Active wavefront control in the signal recycling cavity has also been proposed as a way to aid the control of parametric instabilities (see \Cref{s:PI}).
\subsection{Quantum limits}
For a fixed arm cavity power, the shot noise limited strain sensitivity at high frequencies degrades proportionally with the square root of the laser wavelength. Conversely, the radiation pressure limited strain sensitivity at low frequencies improves with increasing wavelength. From a quantum noise standpoint, increasing the laser wavelength by a factor of 2 is equivalent to lowering the arm cavity power by a factor of 2, all else being equal. However, the available arm cavity power is also constrained by other factors, primarily the coating absorption effect discussed above.
\subsubsection{Photodetector quantum efficiency} \label{s:QEpre}
High photodetector quantum efficiency (QE) is essential to make good use of high levels of squeezing. $\mathrm{QE} > 99\%$ will be required for LIGO Voyager{}. At the time of writing, the QE of InGaAs photodetectors at \SI{1550}{\nano\meter} is already sufficient to meet this requirement~\cite{Mehmet:2011je}. At \SI{2000}{\nano\meter}, $\mathrm{QE} \gtrsim 90\%$ has yet to be demonstrated for InGaAs. Currently, \SI{1550}{\nano\meter} is a better choice of wavelength from the perspective of QE. However, we know of no fundamental obstacle to achieving near-unity QE in photodetectors around \SI{2000}{\nano\meter}.
Photodetectors for \SI{2000}{\nano\meter} are discussed in more detail in \Cref{s:QE}.
\subsection{Noise Sources}
\subsubsection{Coating thermal noise}
The coating layer structure and thickness depend upon the wavelength. In general, a longer operating wavelength requires a proportionally thicker coating, and so the coating thermal noise increases roughly as the square root of the wavelength. This implies a $\sim 14\%$ degradation in coating thermal noise at 2000\,nm, relative to 1550\,nm. Amorphous silicon remains the best coating material available for NIR operations (from a thermal noise standpoint); however, the performance of the low-index layers in the bilayer stack could be improved by changing from SiO$_2$ to either alumina (Al$_2$O$_3$) or SiN, which do not have low-temperature mechanical loss peaks.
\subsubsection{Optical scatter loss and noise} \label{s:scatter}
For a mirror with a given roughness, the total power scattered into wide angles scales as $1/\lambda^2$~\cite{adhikari2019integrated}. We expect approximately 66\% more loss via wide-angle scattering at \SI{1550}{\nano\meter} than at \SI{2000}{\nano\meter}\footnote{It is assumed that the roughness of the mirror coating is independent of the detailed coating layer structure}. Advantages of reducing the scatter loss include: \begin{itemize} \item a higher power recycling gain due to lower loss in the arm cavities \item lower loss in the high-finesse, squeezing filter cavity \item reduced backscattering noise (currently limiting all ground based detectors) \end{itemize} These in turn lead to reduced requirements on the input laser power, the length of the filter cavity, and scattered light beam baffles, respectively.
\subsubsection{Residual gas noise}
The phase noise due to residual gas in the main beam tubes~\cite{Zucker:Gas, TAMA:gas} arises mainly from H$_2$ and N$_2$, which have a negligible wavelength dependence in the NIR band. At atmospheric pressure, there are wide absorption bands~\footnote{\url{https://www.gemini.edu/sciops/telescopes-and-sites/observing-condition-constraints/ir-transmission-spectra}} near 2000\,nm due to water vapor. At UHV pressures, however, it can be assumed that there is no broadening of resonance linewidths due to particle collisions, but the distribution of particle velocities will create a Doppler resonance profile. The measured pressure for H$_2$O in the LIGO beamtubes is 10$^{-10}$\,Torr; at this level any particular resonances can be avoided by tuning the main laser frequency by several GHz. The atmospheric absorption is not an issue for the main interferometer, but could be an issue for some of the high power, in-air, laser systems. This issue would drive the laser wavelength higher (e.g.~to 2128\,nm), where the atmospheric absorption is minimal.
\subsection{Absorption and Impact on Cryogenics}
\subsubsection{Absorption in the HR coatings}
At \GwincVal{Blue.Materials.Substrate.Temp}{}\,K, radiative cooling can extract at most \SI{10}{\watt} of heat from the test masses, as described in \Cref{s:Cryo}. To keep the heat budget in balance, we can tolerate no more than \SI{3}{\watt} of absorbed power in the coating. With \SI{3}{\mega\watt} incident on the optical surfaces, absorption in the coatings must be very low ($\lesssim 1$\,ppm) in order to maintain cryogenic temperature. Measurements of absorption in amorphous silicon coatings show strong wavelength dependence, with the absorption being much higher at \SI{1550}{\nano\meter} than at \SI{2000}{\nano\meter} \cite{Steinlechner:2017}.
The physical mechanism for this is not well understood. However, at present it appears that \SI{2000}{\nano\meter} will be the superior choice of wavelength to reach the objective of high power cryogenic operation.
\subsubsection{Absorption in the test mass substrate}
Substrate absorption is largely determined by the purity of the silicon material and its thermal history, as described in \Cref{s:SiliconMasses}. According to \Cref{eq:FreeCarrierAbsorption}, the absorption is expected to scale with $\lambda^2$, being $\sim66\%$ higher at \SI{2000}{\nano\meter} than at \SI{1550}{\nano\meter}. Substrate absorption is an important component of the heat budget for the input test masses, and the arm cavity finesse in LIGO Voyager{} will be substantially higher than in Advanced LIGO in order to manage this heat source. With the nominal design parameters (cf.~\Cref{tab:params}), the heat load in the substrate of the ITM will be about a factor of three less than that due to the coating absorption. Since the coating absorption therefore dominates the heat budget, it is the coating rather than the substrate absorption that ultimately drives the design towards longer wavelengths.
\subsubsection{Absorption in auxiliary fused silica components} \label{s:fused_silica_absorption}
Fused silica will likely be the substrate material for all optics other than the test masses and compensation plates. Even in the absence of OH in the glass, absorption of optical power in fused silica still occurs due to an intrinsic multi-phonon absorption process associated with the Si-O bonds; this absorption increases strongly around \SI{2000}{nm}~\cite{Tropf95, Kitamura:07}. Absorption of optical power in these optics, most notably the beam-splitter (BS), will cause thermal lensing and loss of power unless mitigated by thermal compensation~\cite{Brooks:16}. For comparison, the estimates of the theoretical limits for absorption in fused silica~\cite{thomas_optical_2006} are approximately: \begin{itemize} \item $<$\,1\,ppm/cm at \SI{1550}{\nano\meter}, ($\approx$ \SI{0.03}{\watt} absorbed in BS) \item 20\,ppm/cm at \SI{1900}{\nano\meter}, ($\approx$ \SI{0.6}{\watt} absorbed in BS) \item 40\,ppm/cm at \SI{2000}{\nano\meter}, ($\approx$ \SI{1.1}{\watt} absorbed in BS) \item 90\,ppm/cm at \SI{2100}{\nano\meter}, ($\approx$ \SI{2.5}{\watt} absorbed in BS) \item 120\,ppm/cm at \SI{2128}{\nano\meter}, ($\approx$ \SI{3.3}{\watt} absorbed in BS) \end{itemize} versus $<$\,0.06\,ppm/cm at \SI{1064}{\nano\meter} (where the BS is \SI{9}{\centi\meter} thick and the substrate sees half of the \SI[round-mode=figures,round-precision=2]{\GwincVal{Blue.Laser.BSPower_Watts}}{W}{} in the PRC). The elevated absorption at the longer end of the wavelength range could present significant engineering challenges (strong thermal lenses, increased losses, power imbalance between the arms leading to increased technical noise couplings, increased contrast defect, etc.). It may be possible to decrease the absorption by transitioning to glass made of a material with a heavier molecular mass, such as a fluoride glass~\cite{Lines1998}. The technical challenges presented by wavelength dependent absorption are an active area of research requiring a full interferometer model to analyze the effects in a quantitative way. The results of this will impact the final choice of wavelength. The absorption in fused silica opens up an intriguing prospect for an alternative thermal compensation design.
Recent work in optical fibers \cite{Dragic:17} has demonstrated that by doping SiO$_2$ with P$_2$O$_5$, which has a negative thermo-refractive coefficient equal to \SI[round-mode=figures,round-precision=3,scientific-notation=true]{-13.3E-6}{\per\kelvin}, it is possible to tune the $dn/dT$ of the resulting phosphosilicate glass. If we were to use fused silica compensation plates (instead of room-temperature silicon), the absorption of the interferometer laser in the glass coupled with a precisely tuned $dn/dT$ could be made to significantly cancel the thermal lens in the substrate of the test mass, thereby rendering the interferometer (mostly) thermally self-correcting.
\subsection{Radiation Pressure Instabilities}
\subsubsection{Opto-Mechanical Angular Instability}
\input{SiggSidles}
\subsubsection{Parametric Instabilities}
\input{PI}
\subsection{Summary}
The wavelength considerations for LIGO Voyager{} are summarized in \Cref{tab:wavelength_summary}. The cell colors run from red through orange and yellow to green, corresponding to increasingly favorable situations. As stated at the top of this section, and visually indicated in this table, the coating absorption favors a longer wavelength (around \SI{2000}{\nano\meter}), with absorption in fused silica potentially excluding longer wavelengths if it cannot be mitigated. \begin{table}[b] \centering \begin{tabular}{lcccc} \toprule {Consideration} &\multicolumn{4}{c}{{Wavelength}} \\ & \SI{1550}{\nano\meter} & \SI{1900}{\nano\meter} & \SI{2000}{\nano\meter} & \SI{2128}{\nano\meter} \\ \midrule Photodiode Q.E. & \cellcolor{GoodGreen} $>99\%$ & \multicolumn{3}{c}{ \cellcolor{NotSureOrng} $\approx87$\%. Promising trajectory (\Cref{s:QE}). } \\ Coating thermal noise & \cellcolor{GoodGreen} Low & \multicolumn{3}{c}{ \cellcolor{OKYellow} $\approx$14\% larger } \\ Optical scatter loss & \cellcolor{NotSureOrng} 66\% larger & \multicolumn{3}{c}{ \cellcolor{GoodGreen} Low } \\ Residual gas noise & \cellcolor{GoodGreen} low H$_2$O & \cellcolor{NotSureOrng} some H$_2$O & \multicolumn{2}{c}{ \cellcolor{GoodGreen} low H$_2$O } \\ Coating absorption & \cellcolor{BadRed} High & \multicolumn{3}{c}{ \cellcolor{OKYellow} Medium } \\ Si substrate absorption &\multicolumn{4}{c}{\cellcolor[rgb]{0.75,0.75,0.75} Increases as $\lambda^2$ but not dominant effect} \\ SiO$_{2}$ substrate absorption & \cellcolor{GoodGreen} $<1$ ppm/cm & \cellcolor{OKYellow} 20 ppm/cm & \cellcolor{NotSureOrng} $40$ ppm/cm & \cellcolor{BadRed}120 ppm/cm \\ Angular instability & \cellcolor{OKYellow} Less stable & \multicolumn{3}{c}{ \cellcolor{GoodGreen} More stable arm cavity } \\ Parametric instability &\multicolumn{4}{c}{\cellcolor[rgb]{0.75,0.75,0.75} Very little change with wavelength} \\ \bottomrule \end{tabular} \caption[Summary of wavelength considerations] {Summary of wavelength considerations} \label{tab:wavelength_summary} \end{table}
\section{Conclusion} \markboth{}{} \label{s:conclusion}
We have described LIGO Voyager{}, a design concept for the next generation of ground based gravitational wave detectors. The design takes advantage of large silicon mirrors, operated at high optical power and cryogenic temperatures, with quantum assisted metrology. This instrument will extract the full potential of the existing LIGO facilities. Nearly all of the existing infrastructure (including the complex vibration isolation systems) will be re-used, greatly reducing the cost and complexity of the upgrade.
Much of the R\&D required for LIGO Voyager{} has been ongoing for several years to support the cryogenic KAGRA and Einstein Telescope designs, and will also be applicable to the Cosmic Explorer design~\cite{CosmicExplorerarXiv}. We anticipate that LIGO Voyager{} will open the next chapter of major discoveries in gravitational wave astronomy~\cite{VoyScienceCase}. The upgraded detectors will find thousands of binary neutron stars, and detect stellar-mass binary black holes from throughout the cosmological era in which such mergers are believed to have taken place. The nearest sources will be detected with unprecedented clarity, providing highly sensitive probes of the behavior of ultra-dense matter and the nature of gravity itself. This work was supported in part by the National Science Foundation under the LIGO cooperative agreement PHY-0757058. This paper has been assigned LIGO document number LIGO-P1800072. \section{Introduction} \markboth{Introduction}{} \input{intro} \clearpage \section{Test Masses} \markboth{Test Masses}{} \input{SiliconMasses} \clearpage \section{Optical Coatings} \markboth{Optical Coatings}{} \input{Coatings} \clearpage \section{Choice of Laser Wavelength} \markboth{Choice of Laser Wavelength}{} \input{Wavelength} \clearpage \section{Quantum Noise} \markboth{Quantum Noise}{} \input{Quantum} \clearpage \section{Suspensions} \markboth{Suspensions}{} \input{Suspensions} \clearpage \section{Laser Technology} \markboth{Laser Technology}{} \input{Lasers} \clearpage \section{Configurations} \markboth{Configurations}{} \input{configurations} \input{conclusion} \clearpage \appendices \subsection{Justification} The most significant design changes in LIGO Voyager{} versus Advanced LIGO can be traced to the need to reduce the quantum noise in tandem with the mirror thermal noise. \begin{itemize} \item Quantum noise will be reduced by increasing the optical power stored in the arms. In Advanced LIGO, the stored power is limited by thermally induced wavefront distortion effects in the fused silica test masses. These effects will be alleviated by choosing a test mass material with a high thermal conductivity, such as silicon. \item The test mass temperature will be lowered to \GwincVal{Blue.Materials.Substrate.Temp}{}\,K, to mitigate thermo-elastic noise. This species of thermal noise is especially problematic in test masses that are good thermal conductors. Fortunately, in silicon at \GwincVal{Blue.Materials.Substrate.Temp}{}\,K, the thermal expansion coefficient crosses zero, which eliminates thermo-elastic noise. (Other plausible material candidates, such as sapphire, require cooling to near 0\,K to be free of this noise.) \item The thermal noise of the mirror coating will be reduced by switching to low dissipation amorphous silicon based coatings, and by reducing the temperature. Achieving low optical absorption in the amorphous silicon coatings requires an increased laser wavelength. \end{itemize} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/IFO_schematic.pdf} \caption[Schematic layout of Voyager]{A simplified schematic layout of LIGO Voyager{}. Dual-recycled Fabry-Perot Michelson (DRFPMI) with frequency dependent squeezed light injection. The beam from a 2\si{\micro\meter} pre-stabilized laser (PSL) passes through an input mode cleaner (IMC) and is injected into the DRFPMI via the power-recycling mirror (PRM). Signal bandwidth is shaped via the signal recycling mirror (SRM).
A squeezed vacuum source (SQZ) injects squeezed vacuum into the DRFPMI via an output Faraday isolator (OFI) after it is reflected off a filter-cavity to provide frequency dependent squeezing. A Faraday isolator (FCFI) facilitates this coupling to the filter cavity. The output from the DRFPMI is incident on a balanced homodyne detector, which employs two output mode cleaner cavities (OMC1 and OMC2) and the local oscillator light picked off from the DRFPMI. Cold shields surround the input and end test masses in both the X and Y arms (ITMX, ITMY, ETMX and ETMY) to maintain a temperature of \GwincVal{Blue.Materials.Substrate.Temp}{}\,K in these optics. The high-reflectivity coatings of the test masses are made from amorphous silicon. } \label{fig:IFO_schematic} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures/BlueBird5.pdf} \caption[Voyager Noise Curve w/ Comparisons] {LIGO Voyager{} noise curve compared to Advanced LIGO during O3, and the Advanced LIGO and A+ design goals.} \label{fig:noise_comparison} \end{figure} \subsection{Design overview} The LIGO Voyager{} design is illustrated in \Cref{fig:IFO_schematic}, with critical parameters called out in \Cref{tab:params}. The dual-recycled, Fabry-Perot Michelson topology is similar to Advanced LIGO and A+, with the following additional upgrades. Optical coatings on the cryogenically-cooled (\GwincVal{Blue.Materials.Substrate.Temp}{}\,K) test masses will be made from amorphous silicon, with the lower coating mechanical loss and cryogenic operation reducing the coating thermal noise. The 200\,kg test-masses will be made of crystalline silicon (rather than fused silica). The absorption spectrum of the test mass materials requires us to choose a longer wavelength laser. The longer wavelength will also significantly reduce optical scattering from the mirrors, lowering losses and allowing for higher finesse arm cavities. The quantum noise (shot noise and radiation pressure) will be reduced by a combination of frequency-dependent squeezing, heavier test masses, and higher stored power in the arms. Finally, the environmentally produced Newtonian gravitational noise~\cite{Harms2015} will be reduced using seismometer arrays combined with adaptive noise regression~\cite{Cella2000, PhysRevD.86.102001}. The LIGO Voyager{} noise budget and resulting design sensitivity are shown in \Cref{fig:noise_comparison}. Horizon distances for astrophysical sources are illustrated in \Cref{fig:RedshiftRange} and \Cref{fig:horizon_donut}, showing the improvement over the Advanced LIGO design. Although most optical components will need to be changed to handle the new wavelength, we plan on reusing the Advanced LIGO hardware and infrastructure wherever possible (for example, the seismic isolation platforms, vacuum systems, and electronics). \input{paramtable} \subsection{Article overview} This article presents a detailed description of the LIGO Voyager{} design with the goals of (a) investigating the feasibility of all the required technology, largely illustrated in \Cref{fig:IFO_schematic}, and highlighting those technological areas that require further research and (b) describing all the key noise contributions illustrated in the noise budget in \Cref{fig:noise_comparison} (and thus determining the LIGO Voyager{} sensitivity). The structure of the paper is as follows.
In \Cref{s:SiliconMasses}, we examine the feasibility of using large, cryogenically-cooled (\GwincVal{Blue.Materials.Substrate.Temp}{}\,K) silicon test masses and identify the substrate thermo-refractive noise, shown in the noise budget, as the limiting noise source associated with the test mass. \Cref{s:Coatings} describes an amorphous-silicon based coating design that delivers the coating Brownian noise curve shown in the noise budget and also identifies coating absorption as a key obstacle that must be overcome. The numerous factors that enter into the choice of \SI{2000}{\nano\meter} as the laser wavelength are described in detail in \Cref{s:Wavelength}. Quantum noise as a limiting noise source and the feasibility of injecting \GwincVal{Blue.Squeezer.AmplitudedB}\,dB of frequency-dependent squeezed vacuum at \GwincVal{Blue.Laser.Wavelength_nm}\,nm are considered in \Cref{s:Quantum}. The suspension thermal noise (associated with the use of silicon blades and ribbons) is described in \Cref{s:Suspensions}. This section also explores the practicality of manufacturing these silicon blades and ribbons. In \Cref{s:Lasers}, we review the development of mid-IR laser sources and find no significant impediment to producing a thulium- or holmium-based \SI[round-mode=figures,round-precision=2]{\GwincVal{Blue.Laser.PMCInput}}{W}{}, low-noise, single-frequency, \SI{2000}{\nano\meter} laser within the next 10 years. \Cref{s:configs} explores configurations of LIGO Voyager{} that are optimized for high-frequency astrophysical sources, given the considerable tunability of the quantum noise curve and interferometer optical configuration. Finally, cryogenic considerations are discussed in \Cref{s:Cryo}. \begin{figure} \centering \begin{subfigure}[b]{0.65\textwidth} \includegraphics[width=1\linewidth]{figures/BlueBird5_z.pdf} \caption{} \label{fig:RedshiftRange} \end{subfigure} \begin{subfigure}[b]{0.65\textwidth} \includegraphics[width=1\linewidth]{figures/horizon_donut.pdf} \caption{} \label{fig:horizon_donut} \end{subfigure} \caption[Redshift Range]{(A) Distance at which an optimally oriented, equal mass, binary black hole merger can be detected (with SNR = 8) as a function of the total mass of the binary (in the source frame). (B) Donut visualization of the horizon distance of LIGO Voyager{}, aLIGO, and A+, shown with a population of binary neutron star mergers (yellow) and 30--30 $M_\odot$ binary black hole mergers (gray). This assumes a Madau-Dickinson star formation rate~\cite{MDSFR} and a typical merger time of 100 Myr.} \end{figure}
\section{Introduction} The idea that the early universe underwent a period of accelerating expansion i.e. inflation, is an attractive one. Such a period of inflation would explain the observed smoothness and flatness of today's universe. But it might also explain the inhomogeneities present in the universe. During inflation, quantum mechanical vacuum fluctuations in various fields would have been amplified and stretched to macroscopic length scales, to later seed the growth of large scale structures like those we see today. However, the theoretical foundations of inflation are still unclear. There is no compelling connection between the field needed to drive inflation and fundamental theory. The initial conditions are usually imposed by hand, or via a somewhat handwaving appeal to primordial chaos. The problem of the initial singularity is not addressed. And the usual calculation of the quantum mechanical perturbations relies on a rough argument that the perturbation modes should be close to the Minkowski space-time ground state when their wavelength is far beneath the Hubble radius during inflation~\cite{muk}. This argument is reasonable but hardly rigorous since the wavelengths of interest today are typically sub-Planckian during the early stages of inflation. An alternative to the usual approach is to confront the problem of initial conditions head on by making an ansatz for the initial quantum state of the universe. The Euclidean no boundary proposal due to Hartle and Hawking~\cite{HH} represents one such attempt, and we shall implement this proposal in the current work, albeit with slightly different emphasis. The main idea is that the path integral may be used to define its own initial conditions, if we `round off' all Lorentzian four-geometries on Euclidean compact four-geometries. This is appealing because it is a natural generalization of the imaginary time formalism well established in statistical physics, and in some sense represents an unbiased sum over possible initial quantum states. One might also add that in field theories and in the theory of random geometries in general the only nonperturbative (i.e. lattice) formulations are framed in Euclidean terms. The Euclidean approach to quantum gravity is therefore probably the most conservative approach to quantum gravity, building upon techniques which are well proven in other fields. Recently, Hawking and one of us found a class of singular but finite-action Euclidean instantons which can be used to define the initial conditions for realistic inflationary universes \cite{HT}. For a generic inflationary scalar potential there exists a one-parameter family of singular but finite-action instantons, each allowing an analytic continuation to a real open Lorentzian universe. If these instantons are allowed, then open inflation occurs generically and not just for potentials with contrived false vacua as was previously believed \cite{bucher}. In this paper, we investigate the spectrum of perturbations about such singular instantons as well as the more conventional non-singular Coleman--De Luccia instantons \cite{coldel} previously used to describe open inflation. We shall be particularly interested in determining whether there are any observable differences between the perturbation spectra produced in these two cases. The instanton solution provides the classical background with respect to which the quantum fluctuations are defined. 
In the Euclidean path integral approach, one can in principle compute correlators of the quantum fluctuations perturbatively to any desired order in $\hbar$. Unlike the usual approach to inflation, there is no ambiguity in the choice of initial conditions at all, because the Euclidean Green function is unique. We shall compute the Euclidean Green function here for instantons of the singular type, as well as for regular Coleman-De Luccia instantons such as occur in theories with a false vacuum. The question of whether singular instantons are allowed has provoked some debate in the literature (see \cite{Vilenkin}, \cite{Linde}, \cite{NTcon} and references therein). In this paper we add to the evidence in their favour by showing that the spectrum of fluctuations is well defined in the presence of the singularity. The Euclidean action {\it by itself} specifies the allowed perturbation modes without any extra input. In a parallel paper~\cite{tom} it is shown that the same is true for tensor perturbations. These calculations demonstrate that at least to first order in $\hbar$ the quantum fluctuations about singular instantons appear healthy. In principle, the present framework also offers a method for computing higher order corrections, although one expects that the non-renormalizability of gravity would introduce new free parameters describing the coefficients of the higher order counterterms. There already exists an extensive literature on the problem of fluctuations in open inflation. There have been a number of difficulties in obtaining precise results and to our knowledge there are no calculations yet available for generic potentials which do not make one approximation or another (see e.g. \cite{cohn}, \cite{garriganew}, \cite{LindeSas}). There are also a number of unresolved ambiguities which arise in precisely which modes to allow (see the discussion in \cite{xavi} and references therein). We believe we have isolated and overcome these problems in the present work and reduced the problem to a complete prescription which may be numerically implemented. The main idea of our method is to compute real-space correlators in the Euclidean region and analytically continue them to the Lorentzian region. In contrast, previous work has treated the perturbation modes of each wavenumber separately. This can lead to confusion since the naive open universe modes provide only a partial description (the `sub-curvature' piece) of the relevant correlators. The remaining `super-curvature' piece is not expressible in terms of these modes. In our method, both pieces are automatically included, and the connection between them is thereby clarified. In the Euclidean real-space approach the problem of sub-Planckian modes does not occur as directly as in the usual approach, since the initial conditions are defined in a manner that makes no mention of the mode decomposition. In analogy with black hole physics, there is reason to hope that the results obtained are likely to be insensitive to short distance sub-Planckian physics. Euclidean quantum gravity is not a complete theory. As well as being non-renormalizable, it suffers from the well known conformal factor problem, the Euclidean Einstein action being unbounded below. This problem does not affect the calculations reported here because the spatially {\it inhomogeneous} perturbations have positive Euclidean action. It is only the spatially {\it homogeneous} modes which suffer from the conformal factor problem. 
Even for these modes we think that there are no grounds for pessimism. For pure gravity, or for gravity with a cosmological constant, the conformal factor problem disappears when the gauge fixing procedures are carefully followed. The technology for scalar fields coupled to gravity has not yet been worked out, but as far as we know there is no insuperable obstacle to doing so. Although the simplest (e.g. $\phi^2$ or $\phi^4$) inflationary potentials yield in this approach \cite{HT} a most probable universe which is much too open to be compatible with observation, there are other scalar potentials which yield acceptable values closer to unity~\cite{Wiseman}. In this paper we do not address this question, but formulate our results so that they apply for an arbitrary scalar potential. Finally, we note that in a generic inflationary theory there also exist singular instantons yielding real Lorentzian closed inflating universes. We shall investigate the quantum fluctuations about such instantons in a future publication. \section{Fluctuations from the Euclidean Path Integral} In this section we discuss the principles of our method, postponing the technicalities to the next section. We choose to frame the Euclidean no boundary proposal in the following form. We take it to be an essentially topological prescription for the lower limit of the functional integral. We write the quantum mechanical amplitude for the state described by three-metric $g^3$ and matter fields $\phi$ as \begin{eqnarray} \Psi[g^3,\phi]\sim \int^{g^3,\phi} \[{\cal D} g \] \[{\cal D} \phi\] e^{{i {\cal S}( g,\phi) \over \hbar}} \labeq{psi} \end{eqnarray} and our prescription is to integrate over all Euclidean/Lorentzian four-geometries $g$ of the form shown in Figure \ref{fig:simp}, with associated matter fields $\phi$, bounded by the three-geometry $\Sigma_f$ and matter fields present in the final state. The Euclidean region is essential because there is no way to `round off' a Lorentzian manifold without introducing a boundary. We sum over all matching surfaces $\Sigma$ and Euclidean regions bounded by $\Sigma$. Our prescription differs from that of Hartle and Hawking \cite{HH} in that we shall not impose regularity of the four-geometries summed over. Such a prescription seems to us to be at odds with the basic principles of path integration. Regularity of the background geometry and the fluctuation modes should emerge as an output of the path integral rather than be input, and we shall see this occur in our calculations below. The expression (\ref{eq:psi}) is only formal, and part of our investigation will be to see whether we can calculate it in a perturbative expansion around saddle point solutions i.e. instantons. The relevant instantons are real solutions of the Euclidean field equations which possess a surface $\Sigma$ on which they may be analytically continued to a real Lorentzian spacetime. The condition for this to be possible is that the normal derivatives of the three-metric and matter fields should be zero on $\Sigma$. Note that regularity of the four-geometry of the instanton, needed here so that analytic continuation is possible, is a consequence of the instanton being a saddle point in the sum over all four-geometries and therefore a solution of a partial differential equation. Such analytic four-geometries are of course a set of measure zero in the original path integral. 
\begin{figure} \centerline{\psfig{file=manif.eps,width=2.in}} \caption{The quantum amplitude for a three-geometry $\Sigma_f$ is given as the path integral over all Lorentzian geometries matched on a three-surface $\Sigma$ to a compact Euclidean four-manifold. All geometries of this type are to be integrated over.} \labfig{simp} \end{figure} We wish to calculate correlators of physical observables in the Lorentzian region. Each instanton provides a zeroth order approximation, giving us a classical background within which quantum fluctuations propagate. To first order in $\hbar$ the quantum fluctuations are specified by a Gaussian integral. One can perform the integral in the Euclidean region and then analytically continue in the coordinates of the background solution to find the quantum correlators in the Lorentzian region. As we shall now see, the analytic continuation is guaranteed to give real-valued Lorentzian fields and momenta if the background solution is real in both the Euclidean and Lorentzian regions. The quantum mechanical amplitude for the fluctuations about a particular background solution $B$ is given from (\ref{eq:psi}) by (henceforth we set $\hbar=1$) \begin{eqnarray} \Psi_B[g^3,\phi]\sim e^{iS_B (g_B,\phi_B)} \int^{\delta g^3,\delta \phi} \[{\cal D} \delta g \] \[{\cal D} \delta \phi\] e^{i {\cal S}_2 ( \delta g, \delta \phi)} \labeq{HH} \end{eqnarray} where the metric $g=g_B +\delta g$ and fields $\phi=\phi_B +\delta \phi$. ${\cal S}_2$ is the action for second and higher order fluctuations. Let ${\cal P}(x_1)$ and ${\cal Q}(x_2)$ be two observables at $x_1$ and $x_2$ on $\Sigma_f$. Then their correlator is given by integrating $|\Psi|^2$ times ${\cal P}(x_1) {\cal Q}(x_2)$ over a complete set of observables ${\cal O}(x)$ on a final state three-surface $\Sigma_f$ containing $x_1$ and $x_2$, \begin{eqnarray} \langle {\cal P}(x_1) {\cal Q}(x_2)\rangle = {\cal N} \int dB \int \[{\cal D} {\cal O} (\Sigma_f)\] \Psi_B\[{\cal O}\]^* \Psi_B\[{\cal O}\] {\cal P}(x_1) {\cal Q}(x_2) \labeq{HHa} \end{eqnarray} with ${\cal N}$ an appropriate normalization constant. The full quantum correlator involves the sum over all background solutions $B$. If no other constraints are imposed, the solution with the lowest Euclidean action will dominate. The insertion of the sum over a complete set of states means that the result (\ref{eq:HHa}) may be viewed as a single `doubled' path integral. The `initial' state is established on the Euclidean region bounded by the first version of $\Sigma$. There is then a Lorentzian region running forward with weighting factor $e^{iS}$ to $\Sigma_f$ on which the operators of interest are located. Then there is a region running back to a second version of $\Sigma$, with weighting factor $e^{-iS}$, and finally the geometry closes on another Euclidean compact four-manifold. In the instanton approximation, the two Euclidean regions are actually copies of the same half-instanton. But the quantum fluctuations on the upper and lower halves are independent - the wavefunction $\Psi$ and its conjugate $\Psi^*$ involve independent integrations over the perturbations. In formula (\ref{eq:HHa}) we can now continue $x_1$ and $x_2$ back into the Euclidean region of the background solution. If the Lorentzian continuation of the instanton is real, then as mentioned the Lorentzian part of the path integral involves a factor of $e^{iS}$ coming from $\Psi$ and a factor of $e^{-iS}$ from $\Psi^*$. These cancel exactly, allowing us to deform $\Sigma_f$ back towards the Euclidean region.
Upon reaching the Euclidean region however, we discover that both $\Psi$ and $\Psi^*$ involve $e^{-S_E}$ so there is no cancellation. In the correlator we are left with an overall $e^{-2S_E}$ where $S_E$ is the half-instanton action, and the action for the fluctuations is just the Euclidean action evaluated over the doubled half-instanton. Correlators calculated in the Euclidean region and continued to the Lorentzian region are therefore equal to those computed from the Lorentzian path integral with Euclidean no boundary initial conditions only if the background solution is real. Note that the cancellation of $e^{iS}$ and $e^{-iS}$ occurs for any three-surface $\Sigma_f$ in the Lorentzian region. There is no requirement that $\Sigma_f$ be spacelike. In fact, there is no reason for a complete surface $\Sigma_f$ to exist at all. If one computes correlators in the Euclidean region as we shall, all that matters is that a smooth continuation of {\it local} observables exists into the Lorentzian region. If there are singularities, they can be avoided by choosing an appropriate continuation contour. The above construction is closely analogous to the imaginary time formalism for thermal field theory, where real-time correlators can be calculated by analytic continuation of Euclidean ones. In the present context, the compact nature of the Euclidean instantons has the same effect on correlators that the periodicity in imaginary time does in thermal field theory. The periodicity of the instanton solution introduces thermal weighting factors corresponding to the Hawking temperature of the background spacetime. \section{Instantons and Analytic Continuation} Let us briefly recall the form of the instantons we are interested in. We consider Einstein gravity coupled to a single scalar field $\phi$ with potential energy $V(\phi)$. We seek finite-action solutions of the Euclidean equations of motion. If $V(\phi)$ has a positive extremum, there is a solution in the form of a four-sphere. This has the maximal symmetry allowed in four dimensions, namely $O(5)$. In general however, no such instanton exists, and the highest symmetry possible is $O(4)$. The instantons are described by the line element \begin{equation} ds^2= d\sigma^2 +b^2(\sigma)d \Omega_3^2 =d\sigma^2 +b^2(\sigma)\(d \psi^2+\sin^2 \psi \, d \Omega_2^2 \) \labeq{emetric} \end{equation} with $b(\sigma)$ the radius of the three-sphere. The Einstein and scalar field equations take the form \begin{equation} \phi_{,\sigma\sigma}+3{b_{,\sigma}\over b}\phi_{,\sigma} =V_{,\phi},\qquad b_{,\sigma \sigma} = -{\kappa \over 3} b \( \phi_{,\sigma}^2 +V\) \labeq{EEs} \end{equation} where $\kappa \equiv 8 \pi G$, $\phi_{,\sigma} \equiv \partial_\sigma \phi$ and $V_{,\phi}\equiv \frac{dV}{d\phi}$. The point of \cite{HT} was that for a gently sloping potential of the kind needed for inflation, there exists a one-parameter family of solutions to these equations labelled by the value of the scalar field at the north pole of the instanton. At the north pole the metric and scalar field are regular. At the south pole, which is singular, the scale factor goes to zero as $\(\sigma_m-\sigma\)^{\frac{1}{3}}$, and the scalar field diverges logarithmically in $\sigma$. These singular instantons are in general not stationary points of the action and they should be interpreted as constrained instantons, where the constraint is imposed on a small three-surface surrounding the singularity \cite{NTcon}. 
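As an illustration of how this one-parameter family can be constructed in practice, the following is a minimal numerical sketch (not part of the original analysis): it integrates the background equations (\ref{eq:EEs}) away from the regular pole, assuming for definiteness a quadratic potential $V={1\over 2}m^2\phi^2$, units $\kappa=1$, and purely illustrative parameter values.
\begin{verbatim}
# Minimal sketch (assumptions as stated above): integrate the O(4)
# background equations away from the regular pole at sigma = 0.
import numpy as np
from scipy.integrate import solve_ivp

m, kappa = 1.0, 1.0                       # illustrative choices
V  = lambda phi: 0.5 * m**2 * phi**2
dV = lambda phi: m**2 * phi

def rhs(sigma, y):
    phi, dphi, b, db = y
    return [dphi,
            dV(phi) - 3.0 * (db / b) * dphi,           # scalar field equation
            db,
            -(kappa / 3.0) * b * (dphi**2 + V(phi))]   # equation for b

def instanton(phi0, sigma_max=50.0, eps=1e-6):
    # Regular-pole behaviour: b ~ sigma, phi' ~ V_{,phi}(phi0) sigma / 4.
    y0 = [phi0, 0.25 * dV(phi0) * eps, eps, 1.0]
    hit_pole = lambda s, y: y[2]                       # b returning to zero
    hit_pole.terminal, hit_pole.direction = True, -1
    return solve_ivp(rhs, (eps, sigma_max), y0, events=hit_pole,
                     rtol=1e-10, atol=1e-12)

sol = instanton(phi0=3.0)
print("integration stopped at sigma =", sol.t[-1])
\end{verbatim}
The single free parameter is the field value at the regular pole; depending on it, the integration either reaches a second zero of $b$ (a regular, Coleman-De Luccia type solution) or the field gradient grows without bound as $\sigma\rightarrow\sigma_m$ (a singular solution of the type described above).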
\begin{figure} \centerline{\psfig{file=milne.eps,width=2.in}} \caption{The Euclidean/Lorentzian manifold in the vicinity of the regular pole. } \labfig{milne} \end{figure} The various analytic continuations are fixed by the fact that the Euclidean/Lorentzian geometry in the neighbourhood of the north pole is locally flat. The coordinates we use for the instanton and its continuation reduce in this neighbourhood to those used in mapping flat space onto the Milne universe, see Figure \ref{fig:milne}. There is then an obvious choice of regular coordinates which allows one to uniquely fix the required continuations. Consider flat Minkowski space with line element $-dT^2+ dR^2+R^2 d\Omega_2^2$. The analytic continuation to Euclidean time is performed by setting $T_E=iT$. Under this continuation the Lorentzian action $S$ yields the positive Euclidean action, $iS=-S_{E}$. We may describe the situation in O(4) invariant coordinates $T_E= \sigma \cos \Omega$, $R=\sigma \sin \Omega$, with $\Omega$ the polar angle on the three-sphere. These coordinates correspond to those used for our O(4) invariant instantons. As $\Omega$ runs from $\pi$ to zero, $T_E$ runs from $-\sigma$ to $\sigma$. For fixed $\sigma$, the Euclidean action involves the integral $\int_{-\sigma}^{\sigma} dT_E$. The continuation to the Lorentzian region is described by distorting the $T_E$ contour into the complex plane. The new contour runs from $-\sigma$ to zero, up the positive imaginary axis where the $e^{-S_E}$ factor gives the $e^{iS}$ needed for the wavefunction $\Psi$ in (\ref{eq:HHa}). The continuation then runs back down the imaginary axis, giving $e^{-iS}$ and along the real axis from zero to $+\sigma$. If we translate $T_E$ into $\Omega$, the first two segments of the contour run from $\Omega= \pi$ to $\Omega= \pi/2$ and then down the line $\Omega=(\pi/2)-it'$, with $t'$ real and positive. The latter segment gives us the $e^{iS}$ term in the `doubled' path integral. Following this continuation on the surface $T_E=0$, from the Euclidean region we obtain a Lorentzian spacetime with timelike coordinate $t'$. We have $T=\sigma \sinh t'$ and $R= \sigma \cosh t'$. Since these formulae imply $R^2-T^2 >0$, these coordinates cover only part of the Lorentzian region, the exterior of the light cone emanating from the origin of spherical coordinates at $T=0$. We call this region II. We can then perform a second continuation $t'=\chi-i{\pi\over 2}$ by setting $T=t \cosh \chi$ and $R= t \sinh \chi$ to obtain the metric in the interior of the light cone, which we call region I. In the vicinity of the regular pole, these new coordinates are those of the Milne universe. Globally, region I is the inflating open universe. Hence the required continuations are \begin{eqnarray} &&{\rm Euclidean \rightarrow I:} \qquad \sigma=i t, \qquad \Omega=-i\chi, \cr &&{\rm Euclidean \rightarrow II:} \qquad \Omega={\pi\over 2} -it', \labeq{continuation} \end{eqnarray} yielding the following line elements \begin{eqnarray} &&{\rm Euclidean:} \qquad d\sigma^2 +b^2(\sigma) d\Omega_3^2,\cr &&{\rm Region~I:} \qquad -dt^2 +a^2(t)(d\chi^2+\sinh^2 \chi d\Omega_2^2), \cr &&{\rm Region~II:} \qquad d\sigma^2 +b^2(\sigma)(-{dt'}^2 +\cosh^2 t'd\Omega_2^2). \labeq{continuationmet} \end{eqnarray} The function $b(\sigma) \rightarrow \sigma$ as $\sigma \rightarrow 0$, and $a(t) \rightarrow t$ as $t \rightarrow 0$, so it follows from~(\ref{eq:continuation}) that $b=ia$. 
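As a consistency check (a step not spelled out explicitly above), substituting the region I continuation $\sigma=it$, $\Omega=-i\chi$, together with $b(it)=ia(t)$, into the Euclidean line element reproduces the open universe metric just quoted: \begin{eqnarray} && d\sigma^2 = -dt^2, \qquad d\Omega_3^2=d\Omega^2+\sin^2 \Omega \, d\Omega_2^2 \rightarrow -\(d\chi^2+\sinh^2 \chi \, d\Omega_2^2\), \cr && b^2(\sigma) \rightarrow -a^2(t), \qquad {\rm so~that} \qquad d\sigma^2 +b^2(\sigma)\, d\Omega_3^2 \rightarrow -dt^2 +a^2(t)\(d\chi^2+\sinh^2 \chi \, d\Omega_2^2\). \end{eqnarray}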
The Euclidean action is expressed as an integral over the coordinates $\sigma$ and $\Omega$, and then regarded as a four dimensional contour integral. We may then distort the integration contour into the regions of the complex coordinate space corresponding to the Lorentzian regions I and II. This procedure uniquely defines the path integral measure for Lorentzian correlators. \begin{figure} \centerline{\psfig{file=tpath2.eps,width=2.in}} \caption{Contour for $X\(t\)$.} \labfig{tpath} \end{figure} In what follows, it will be very useful to work in terms of a conformal spatial coordinate in the Euclidean region and in region II, defined by \begin{eqnarray} X\equiv\int_{\sigma}^{\sigma_m}\frac{d\sigma'}{b\(\sigma'\)}. \labeq{xdef} \end{eqnarray} For singular instantons, $X=0$ corresponds to the singular pole and $X\rightarrow\infty$ corresponds to the regular pole. For regular (Coleman-De Luccia) instantons, the second regular pole is at $X\rightarrow-\infty$. It is also useful to work in a conformal time coordinate $\tau$ in the open universe, defined to obey $dt=a(t) d\tau$. For many purposes we shall need to relate $\tau$ to $X$. To do this we extend our integral definition of $X$ in equation~(\ref{eq:xdef}) into the complex $\sigma$-plane. The required integration contour is shown in Figure~\ref{fig:tpath}. One has \begin{eqnarray} X\equiv \int_{it}^{\sigma_m}\frac{1}{b\(\sigma\)}d\sigma = -\tau-i{\pi \over 2} \labeq{confx} \end{eqnarray} where we define the lower limit of the conformal time $\tau$ via \begin{eqnarray} -\tau \equiv \lim_{\epsilon\rightarrow 0}\(\int_{\epsilon}^{\sigma_m}\frac{d\sigma}{b\(\sigma\)} - \int_{\epsilon}^{t}\frac{dt'}{a\(t'\)}\), \labeq{conftime} \end{eqnarray} so that $\tau$ runs from $-\infty$ at the beginning of the open universe to an approximate constant when inflation is well underway, and then finally diverges logarithmically to $\infty$ at late times in the open universe when $a\(t\) \sim t$. The conformal structure of the Lorentzian region is shown in Figure~\ref{fig:construc}. As the diagram indicates, the singularity is timelike and visible from within the open universe region, labelled I. It is interesting to ask when the singularity first becomes visible to an observer in region I. Any such observer can by symmetry be placed at the origin of spherical coordinates. To find the null geodesics, we first change coordinates to conformal time in region I, defined in equation (\ref{eq:conftime}). Null geodesics incident on the origin of spherical coordinates, $\chi=0$, at a conformal time $\tau_0$ obey \begin{eqnarray} \chi= \tau_0-\tau. \labeq{null} \end{eqnarray} With the above continuations (\ref{eq:continuation}) the conformal space coordinate $X$ in region II obeys \begin{eqnarray} X= t'-\tau_0. \labeq{nullx} \end{eqnarray} The singularity is located at $X=0$, and the transition from the Lorentzian to the Euclidean region is at $t'=0$. It follows that the singularity first becomes visible in the open universe when $\tau_0=0$. As mentioned above, with this definition of $\tau$, inflation ends at negative conformal time. For example, for a ${1\over 2} m^2\phi^2$ potential with 60 efolds of inflation, we find the end of inflation occurs at $\tau=- 1.70$ in units where the space curvature is minus one. Evolving forward into the late universe, one finds that the singularity is first visible at late times when the universe is becoming curvature dominated. 
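For completeness, the step leading to (\ref{eq:nullx}) may be made explicit (it is implicit in the text): under the continuations (\ref{eq:continuation}) the radial coordinate of region I continues into region II as $\chi=t'+i{\pi\over 2}$, while (\ref{eq:confx}) gives $\tau=-X-i{\pi\over 2}$, so along the null ray (\ref{eq:null}) \begin{eqnarray} && t'+i{\pi\over 2}=\chi=\tau_0-\tau=\tau_0+X+i{\pi\over 2} \qquad \Rightarrow \qquad X=t'-\tau_0. \end{eqnarray}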
\begin{figure} \centerline{\psfig{file=construc.eps,height=2.5in}} \caption{Conformal spacetime structure of the classical background solution.} \labfig{construc} \end{figure} Although the singularity is visible within the open universe, and has a definite observable effect, as we shall see the quantum fluctuations are nevertheless well defined in its presence. The singularity acts as a reflecting boundary \cite{garriga}, and nothing enters the universe from it. \section{The Path Integral for Scalar Perturbations} In this section we derive the action appropriate for fluctuations about the instantons described above. Since the background solution satisfies the field equations (including an appropriate constraint if that is required \cite{NTcon}), the leading term in the action occurs at second order in the perturbations. We shall calculate this second order term and then perform the Gaussian path integral determining the quantum correlators to first order in $\hbar$. We compute the relevant action in the open universe region, where a Hamiltonian treatment is straightforward. Our discussion follows the notation of ref.~\cite{xavi}, although we shall work strictly from the path integral point of view. The action for scalar perturbations reduces to that for a single gauge-invariant variable $q$, related to the Newtonian potential $\Psi_N$ standardly used in the analysis of inflationary quantum fluctuations. We analytically continue the variable $q$ into the Euclidean region of the instanton, where its action is positive. The real-space correlator of $q$ is computed in the Euclidean region and finally analytically continued back into the Lorentzian region. Following standard notation we write the perturbed line element and scalar field as \begin{eqnarray} ds^2 & = & a^2\(\tau\)\(-\(1+2A\)d\tau^2 + S_idx^id\tau +\(\gamma_{ij} +h_{ij}\)dx^i dx^j\), \nonumber \\ \phi & = & \phi_0\(\tau\)+\delta\phi \labeq{metricpert} \end{eqnarray} where $\gamma_{ij}$ is the metric on the background three-space, $\tau$ is the conformal time, $\phi_0\(\tau\)$ is the background scalar field and $a(\tau)$ is the background scale factor. We decompose $S_i$ and $h_{ij}$ as follows (see~\cite{stew}) \begin{eqnarray} h_{ij} & = & -2\psi\gamma_{ij}+2E_{|ij} + 2F_{\(i|j\)}+t_{ij}, \nonumber \\ S_i & = & B_{|i}+V_i. \labeq{decomp} \end{eqnarray} Here $|$ denotes the covariant derivative on the background three-space, $\psi$, $B$ and $E$ are scalars, $V_i$ and $F_i$ are divergenceless vectors, and $t_{ij}$ is a transverse traceless tensor. In general, with a suitable asymptotic decay condition, the above decomposition is unique up to $B\rightarrow B+$constant, $F_i\rightarrow F_i + K_i$, with $K_i$ a Killing vector. For compact 3-spaces there is in addition an ambiguity since the metric is unchanged under $\psi\rightarrow \psi-\zeta$, $E\rightarrow E+\zeta$, with $\(\Delta+3\)\zeta=0$. However for the compact Euclidean instantons we consider all modes obeying $\(\Delta+3\)\zeta=0$ are actually pure gauge and we shall in any case have to project them out. 
Under an infinitesimal scalar coordinate transformation $x^{\mu}\rightarrow x^{\mu}+\lambda^{\mu}$, where $\lambda^{\mu}=\(\lambda^0,\lambda^{|i}\)$, the perturbations in (\ref{eq:decomp}) transform as \begin{eqnarray} \psi\rightarrow\psi-{\cal H}\lambda^0, & ~~~~ & B\rightarrow B+\lambda'-\lambda^0, \nonumber \\ A\rightarrow A+{\lambda^{0}}'+{\cal H}\lambda^0, & ~~~~ & E\rightarrow E+\lambda, ~~~~ \delta\phi \rightarrow \delta\phi+\phpr \lambda^0 \labeq{gts} \end{eqnarray} where ${\cal H} \equiv a'/a$, and here and below prime denotes derivative with respect to conformal time. The action is invariant under these transformations. We must pick a gauge in order to fix this invariance and obtain a unique result. As is well known, the computation of inflationary perturbations from a single scalar field is simplest in conformal Newtonian gauge, defined by setting $B=E=0$. This completely fixes the gauge freedom. Equivalently, one can define the gauge-invariant variable~\cite{muk} \begin{eqnarray} \Psi_N=\psi -{\cal H}\(B-E'\). \labeq{psidef} \end{eqnarray} As long as there are no anisotropic stresses, $\Psi_N $ is governed by a second order differential equation in time, and all perturbations are determined from $\Psi_N $ with the use of the Einstein constraint equations. An approximately-conserved quantity $\chi$ can be constructed from $\Psi_N $ which may be used to match the super-Hubble radius perturbations across the reheating surface and into the late universe relevant for observation (see e.g.~\cite{bucher}). In this section we derive the path integral appropriate for computing $\Psi_N$ correlators in the open universe. We shall do so in a manifestly gauge-invariant manner. The first point to note is that $\Psi_N$ involves a field velocity and in the path integral formalism the first step is to convert this to a canonical momentum. This requires us to use the Hamiltonian (first order) form of the path integral. A second merit of this formalism is that integration over the nondynamical lapse and shift fields imposes the Einstein constraint equations as delta functionals, enabling further integrations to be performed. Our discussion parallels that of Appendix B of reference~\cite{xavi}, but is slightly more concise. We shall also be careful to keep certain surface terms which determine the allowed fluctuation modes about singular instantons. Our starting point is the action for gravity plus a scalar field \begin{eqnarray} S=\frac{1}{2\kappa}\int d^4x \sqrt{-g}\(R-\frac{1}{2}\nabla_{\mu}\phi\nabla^{\mu}\phi-V\(\phi\)\) - \frac{1}{\kappa} \int d^3x \sqrt{\gamma} K, \labeq{corac} \end{eqnarray} where $K$ is the trace of the extrinsic curvature of the boundary three-surface. The surface term is needed to remove second derivatives from the action, so that fluctuation variables are constrained on the boundary but their derivatives are not. The decomposition~(\ref{eq:decomp}) is substituted into equation~(\ref{eq:corac}), keeping all terms to second order. The scalar, vector and tensor components decouple. The vector perturbations are uninteresting because the Einstein constraints force them to be zero. The tensor perturbations are discussed in the parallel paper~\cite{tom}. 
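Before proceeding to the second order action, note as a quick check (not spelled out above) that the combination (\ref{eq:psidef}) is indeed invariant under the transformations (\ref{eq:gts}): \begin{eqnarray} && B-E' \rightarrow \(B+\lambda'-\lambda^0\)-\(E+\lambda\)' = \(B-E'\)-\lambda^0, \cr && \Psi_N \rightarrow \(\psi-{\cal H}\lambda^0\) -{\cal H}\[\(B-E'\)-\lambda^0\] = \psi-{\cal H}\(B-E'\) = \Psi_N. \end{eqnarray}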
The second order action for the scalar perturbations reads \begin{eqnarray} S_2 & = & \frac{1}{2\kappa} \int d\tau d^3 x a^2 \sqrt{\gamma} \bigg\{ -6\psi'^2-12{\cal H} A\psi'+2\Delta\psi\(2A-\psi\)-2\({\cal H}'+2{\cal H}^2\)A^2 \nonumber \\ & & +\kappa\(\delta\phi'^2+\delta\phi\Delta\delta\phi-a^2 V_{,\phi\phi}\delta\phi^2\) +2\kappa\(3\phpr\psi'\delta\phi-\phpr\delta\phi'A-a^2V_{,\phi}A\delta\phi\) \nonumber \\ & & +{\cal K}\(-6\psi^2+2A^2+12\psi A + 2\(B-E'\)\Delta\(B-E'\)\) \nonumber \\ & & +4\Delta\(B-E'\) (\frac{\kappa}{2} \phpr\delta\phi-\psi'-{\cal H}A) \bigg\} \labeq{2ac} \end{eqnarray} where $\Delta$ is the Laplacian on the three-space. Integrations by parts in the spatial directions may be freely used in anticipation of the fact that the fluctuations are determined in the Euclidean region where the three-space is an $S^3$ so there are no surface terms. We shall eventually need to be more careful about integrating by parts with respect to $\tau$, at least in the case of singular instantons, because $\tau$ continues to the coordinate $X$ in the Euclidean region which terminates on the singular boundary. However, even here we may integrate by parts freely until we have expressed our action density in Hamiltonian form for the observable of interest, namely $\Psi_N$. After this last stage we need to be careful to retain all surface terms. As we shall see, after continuation to the Euclidean region these surface terms determine the set of allowed fluctuation modes. We introduce the momenta canonically conjugate to $\psi$, $E$, and $\delta\phi$ as \begin{eqnarray} \Pi_\psi & = & \frac{2a^2\sqrt{\gamma}}{\kappa}\(-3\psi'+3\frac{\kappa}{2}\phpr\delta\phi-3{\cal H} A-\Delta\(B-E'\)\), \nonumber \\ \Pi_E &=& \frac{2a^2\sqrt{\gamma}\Delta}{\kappa}\(\psi'-\frac{\kappa}{2}\phpr\delta\phi+{\cal H} A-{\cal K}\(B-E'\)\), \nonumber \\ \Pi_{\delta\phi} &=& a^2\sqrt{\gamma}\(\delta\phi'-\phpr A\). \end{eqnarray} We shall only consider modes for which $-\Delta-3{\cal K}$ is positive, and in this case $\Pi_{\psi}$ and $\Pi_E$ are independent and we can solve for the velocities in terms of fields and momenta. As an aside, we mention a subtlety associated with the modes known in the literature as `bubble wall' fluctuation modes \cite{garrigabub}, \cite{bellidobub}. These modes have $\Delta+3{\cal K}=0$. Such perturbation modes are possible in the open universe in spite of the fact that they possess the `wrong sign' for the Laplacian and therefore grow exponentially with comoving radius. However, if one expands in harmonics, for $l<2$ such modes may be gauged away. And modes with $l \geq 2$ are singular when continued into the Euclidean region, since with $\Delta_E\equiv-\(n^2-1\)=-3$ the regular eigenfunctions all have $l<n=2$. So there are no physical fluctuation modes with $\Delta+3{\cal K}=0$. However, our gauge-invariant action will be zero for the $\Delta+3{\cal K}=0$ gauge modes, and we will need to project out their contribution at various stages of the calculation. 
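For orientation, we record the standard harmonic eigenvalues that will appear repeatedly below (well known facts, quoted here for convenience): on the unit $S^3$ the scalar harmonics obey $\Delta_E \, Q_n=-\(n^2-1\)Q_n$ with $n=1,2,3,\dots$, while on the unit $H^3$ relevant to the open universe the continuum modes obey $\Delta \, Q_p=-\(p^2+1\)Q_p$ with $p\geq 0$. With ${\cal K}=-1$ this gives \begin{eqnarray} && -\(\Delta+3{\cal K}\) = p^2+4 \quad {\rm (open~universe)}, \qquad -\Delta_3-3 = n^2-4 \quad {\rm (Euclidean~region)}, \end{eqnarray} which is the origin of the combinations $p^2+4$ and $n^2-4$ appearing in the eigenvalue equations and Green functions below.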
\vfill\eject Proceeding with this caveat, we rewrite the action~(\ref{eq:2ac}) in first order form \begin{eqnarray} S_2 & = & \int d\tau d^3 x \Bigg\{ \Pi_\psi \psi' + \Pi_E E' + \Pi_{\delta\phi} \delta\phi' \nonumber \\ & & -\frac{\kappa}{4a^2\sqrt{\gamma}\(\Delta+3{\cal K}\)}\(-{\cal K}\Pi_\psi^2+2\Pi_\psi\Pi_E + \frac{3}{\Delta}\Pi_E^2 +\frac{2\(\Delta+3{\cal K}\)}{\kappa}\Pi_{\delta\phi}^2\) - \frac{\kappa}{2}\phpr\Pi_\psi\delta\phi \nonumber \\ & & -\frac{a^2\sqrt{\gamma}}{\kappa} \(\(\Delta+3{\cal K}\)\psi^2 -\frac{\kappa}{2}\(\Delta+3{\cal K} -{\cal H}^2-{\cal H}'+\frac{\phi'''_0}{\phpr}\)\delta\phi^2\) \nonumber \\ & & -B\Pi_E - A \(-{\cal H}\Pi_\psi+\phi'_0\Pi_{\delta\phi}+\frac{2a^2\sqrt{\gamma}}{\kappa}\(-\(\Delta+3{\cal K}\)\psi+\frac{\kappa}{2}\({\cal H}\phpr-\phi''_0\)\delta\phi\)\) \Bigg\}. \end{eqnarray} As stated above, we would like to evaluate the correlator for the gauge-invariant variable $\Psi_N$. We may express $\Psi_N$ as a dynamical variable in terms of fields and momenta as follows \begin{eqnarray} \Psi_N=\psi+\frac{{\cal H}\kappa}{2a^2\sqrt{\gamma}} \(\frac{\Delta\Pi_{\psi}+3\Pi_E}{\Delta\(\Delta+3{\cal K}\)}\). \end{eqnarray} This is singular for $\Delta=0$ so our discussion will only be valid for the inhomogeneous perturbations, which is all we shall calculate here. The homogeneous perturbation modes require a separate treatment because the Euclidean action is not positive and to cure this the conformal factor must be decoupled from the physical degrees of freedom. We do not address this problem here, but shall simply project out the homogeneous modes on the $S^3$ or $H^3$ at each stage of the calculation. We now add a source term $-i\int J_N \Psi_N$ to the quadratic action, and perform the path integral over all fields and momenta. The non-dynamical variables $A$ and $B$ only occur linearly in $S_2$, and functional integration over these fields gives us delta functionals imposing the $G_{0i}$ and $G_{00}$ Einstein constraints. We use these delta functionals to perform the $\Pi_E$ and $\Pi_{\delta\phi}$ integrals. This sets $\Pi_E$ to zero in the term multiplying the source. There is then no residual $E$ dependence in the action and the functional integral over $E$ is ignored as an infinite gauge orbit volume. There is still a residual gauge freedom in the action, corresponding to $\lambda^0$ reparametrizations (see above). However the action density can be expressed, up to a total derivative, in terms of $\Psi_N$ and its canonically conjugate gauge-invariant variable \begin{eqnarray} \Pi_N &=& {1\over 2} \Pi_{\psi} - {a^2 \sqrt{\gamma} \over \kappa {\cal H} } \(\Delta+3{\cal K}\) \psi - {2a^2 \sqrt{\gamma} \over \kappa \phi_0'} \(\Delta+3{\cal K}\) \delta\phi. \end{eqnarray} The action density is now independent of $\delta\phi$, so this too can be integrated out as a gauge orbit volume. \vfill\eject It is convenient to rescale the coordinate $q= {2 a \over \kappa \phi_0'} \Psi_N$, the momentum $p= {\kappa \phi_0'\over 2a} \Pi_N$, and the source $J_q=\frac{\kappa \phi_0'}{2a} J_N$. We then perform an integration by parts to represent the action in canonical form $S= \int \[p q' -H(p,q)\]$. From now on, all surface terms must be kept. 
Finally, we perform the Gaussian integral over the momentum variable $p$ to obtain the reduced action for $q$, \begin{eqnarray} i S_2\(J\) & = & -{i\over 2} \int d\tau d^3x \sqrt{\gamma} \Bigg\{ \(\Delta+3{\cal K}\)q\(\hat{O}+\Delta+3{\cal K}\)q \cr &+& \(\Delta+3{\cal K}\)\left[ qq'+\(\frac{\kappa\phi'^{2}_0}{4{\cal H}}+\frac{\phi''_0}{\phi'_0}\)q^2\right]' \Bigg\} +\int J_q q \labeq{qac} \end{eqnarray} where \begin{eqnarray} \hat{O} \equiv -\frac{d^2}{d\tau ^2} +\frac{\kappa}{2} (\phpr)^2 + \phpr \(\frac{1}{\phpr}\)''. \labeq{odef} \end{eqnarray} The result for the bulk term agrees, up to a sign, with that obtained in \cite{xavi}. We are now ready to analytically continue to the Euclidean region and perform the Gaussian integral to obtain the $qq$ correlator. \section{The Euclidean Green Function} The continuation to the Euclidean region is performed as described above, in equation (\ref{eq:continuation}). The Laplacian $\Delta$ continues to $-\Delta_3$, where $\Delta_3$ is the Laplacian on $S^3$ and the constant ${\cal K}$ continues to itself. We set ${\cal K}=-1$ from now on. Finally, the variable $q$ continues to itself. The Euclidean action is given by \begin{eqnarray} i S_2\(J\) &=& -S_2^E\(J\) = -{1\over 2} \int dX d^3\Omega_3 \Bigg\{ \(-\Delta_3-3\)q\(\hat{O}-\Delta_3-3\)q \cr &+& \(-\Delta_3-3\)\left[ qq'+\(\frac{\kappa\phi'^{2}_0}{4{\cal H}}+\frac{\phi''_0}{\phi'_0}\)q^2\right]' \Bigg\} +\int J_q q \labeq{qace} \end{eqnarray} where the volume element on the three-sphere is $d^3\Omega_3$ and now \begin{eqnarray} \hat{O} \equiv -\frac{d^2}{d X ^2} +\frac{\kappa}{2} (\phpr)^2 + \phpr \(\frac{1}{\phpr}\)''\equiv -\frac{d^2}{d X ^2} +U(X). \labeq{odefe} \end{eqnarray} where primes now denote derivatives with respect to $X$. The bulk term in the action is positive definite for the inhomogeneous modes of interest, as was noted by Lavrelashvili \cite{lav}. The path integral over $q$ is Gaussian and is performed by solving the classical field equation for $q(J)$ and substituting back. After the appropriate normalization, we obtain the generating functional for $q$ correlators, exp$({1\over 2} \int \int JG_E J)$, where $G_E$ is the Euclidean Green function. The two point correlator is then $\langle q\({\bf X}\) q\({\bf X'}\)\rangle = G_E({\bf X}, {\bf X'})$. The Euclidean Green function is the solution to the equation \begin{eqnarray} \(-\Delta_3-3\)\(\hat{O} -\Delta_3-3\) G_E = g^{-{1\over2}} \delta^{(4)}({\bf X}-{\bf X'}) -\delta(X-X')\frac{\sin 2\om}{\pi^2 \sin \om}. \labeq{ge2} \end{eqnarray} Here ${\bf X}=(X,x^i)$, and $g$ is the determinant of the metric. $\om$ is the angle between the two points $x$ and $x'$ on the three-sphere. The second term on the right hand side of equation~(\ref{eq:ge2}) is present because we project out the $-\Delta_3=3$ gauge modes as mentioned above. Its coefficient is fixed by the requirement that the right hand side be orthogonal to the $-\Delta_3=3$ modes under integration over the three-sphere. We shall also project out the $\Delta=0$ homogeneous mode, but for the sake of brevity we shall not write out the relevant terms explicitly. Equation~(\ref{eq:ge2}) is solved by expressing both $\delta(X-X')$ and $G_E$ as sums over a complete set of eigenmodes of $\hat{O}$ and equating coefficients. $\hat{O}$ is a Schr\"{o}dinger operator, and its eigenfunctions obey \begin{eqnarray} \hat{O} \psi_p(X) = \(-{d^2 \over dX^2} +U(X)\) \psi_p(X) = (p^2+4)\psi_p(X), \labeq{sch} \end{eqnarray} where the potential $U(X)$ is given in equation~(\ref{eq:odefe}). 
Near the regular pole of the instanton, i.e.\ $X \rightarrow \infty$, we have $\phi_{0,\sigma} \sim \sigma \sim e^{-X}$, so that $\phpr \sim e^{-2X}$, and $U\rightarrow 4$. There is therefore a positive continuum starting at $p^2=0$. For gently sloping inflationary potentials, there is also generally a single bound state with $-1<p^2<0$. For singular instantons, near $X=0$ the potential term is repulsive and diverges as $\frac{3}{4X^2}$, as noted by Garriga \cite{garriga}. There are two sets of eigenmodes of $\hat{O}$ for each $p$, behaving as $X^{-1/2}$ or as $X^{3/2}$ for small $X$ respectively. Now the importance of keeping the surface terms in the Euclidean action (\ref{eq:qace}) becomes clear. The surface terms are infinite and positive for the divergent modes. Thus the Euclidean action by itself completely determines the allowed spectrum of fluctuations. Regularity of the eigenmodes does not need to be imposed as an additional, external condition. Only fluctuations vanishing at the singularity are allowed, so in effect the singularity enforces Dirichlet boundary conditions. Now consider for fixed real $p$ the solution to the Schr\"{o}dinger equation~(\ref{eq:sch}) which behaves as $X^{\frac{3}{2}}$ for $X\rightarrow 0$, which we shall denote $\psi_p (X)$. Being the solution to a differential equation with finite coefficients, this is analytic for all finite $p$ in the complex $p$-plane~\cite{JJ}. It is also useful to define the Jost function $g_p(X)$ as the solution which tends to $e^{ipX}$ as $X\rightarrow \infty$. This is analytic in the upper half complex $p$-plane, as seen by iterating the integral equation~\cite{newton} \begin{eqnarray} g_p (X) = e^{ipX} +\frac{1}{p}\int_X^{\infty} \sin p(Y-X) \, \left(U(Y)-4\right) \, g_p (Y) \,dY, \labeq{inteq} \end{eqnarray} where $U(Y)-4$ is the potential of (\ref{eq:odefe}) measured relative to its large-$X$ value. The two solutions are related via \begin{eqnarray} \psi_p (X) = a_p g_p (X) + b_p g_{-p} (X). \labeq{psidec} \end{eqnarray} Since the differential equation is even in $p$, $\psi_{-p}(X)=\psi_p (X)$ and $b_p=a_{-p}$. Completeness of the eigenfunctions then allows us to write the following representation of the delta function, \begin{eqnarray} \delta \(X-X'\)=\int_{-\infty}^{\infty} \frac{\psi_p\(X\) \psi_p\(X'\)}{4\pi a_p a_{-p}} dp + \psi^N_{i\Lambda}\(X\)\psi^N_{i\Lambda}\(X'\), \labeq{comp1} \end{eqnarray} where we have included the contribution of a single assumed normalized bound state wavefunction $\psi_{i\Lambda}^N (X)$. One may check the normalization of the continuum contribution in equation~(\ref{eq:comp1}) as follows. For $X$ and $X'$ large, substituting in (\ref{eq:psidec}) one sees that the terms going as $e^{\pm ip (X-X')}$ integrate to give the correctly normalized delta function. It is also instructive to see how the right hand side vanishes for any $X\neq X'$. For definiteness let us take $X>X'$. Substituting the decomposition~(\ref{eq:psidec}) into the integral for $\psi_p (X)$, we have \begin{eqnarray} \int_{-\infty}^{\infty} \frac{\psi_p\(X\) \psi_p\(X'\)}{4\pi a_p a_{-p}} dp & = & \int_{-\infty}^{\infty} \frac{\( a_p g_p (X) + a_{-p} g_{-p} (X)\) \psi_p\(X'\)}{4\pi a_p a_{-p}} dp \nonumber \\ & =& \int_{-\infty}^{\infty} \frac{g_p\(X\) \psi_p\(X'\)}{2\pi a_{-p}} dp. \labeq{randoma} \end{eqnarray} We now distort the integral onto a semicircle at infinity in the upper half $p$-plane. The contour at infinity gives no contribution since $g_p (X) \psi_p (X')$ decays exponentially, as $e^{ip(X\pm X')}$. 
Since $\psi_p(X')$ is analytic for all $p$, and $g_p (X)$ is analytic in the upper half $p$-plane, the only contribution to the integral comes from zeroes of $a_{-p}$ in the upper half $p$-plane. These zeroes correspond to bound states as may be seen from the expression (\ref{eq:psidec}). If $a_{-p}$ vanishes, the corresponding solution decays exponentially at large $X$. Thus for each zero of $a_{-p}$ we have a normalizable bound state with eigenvalue $p^2$. But the Schr\"{o}dinger equation only has real eigenvalues, so this is only possible for imaginary $p=i\Lambda$, with $\Lambda$ real. Next we compute the residue of the pole, which we assume to be simple. From the Schr\"{o}dinger equation one obtains the following identity \begin{eqnarray} \(\psi\frac{\partial\psi'}{\partial p}-\frac{\partial \psi}{\partial p}\psi'\)'=-2p\psi^2. \labeq{random} \end{eqnarray} We integrate both sides over $X$ from zero to infinity. The left hand side yields a difference of surface terms. Only the surface term at infinity is non-zero, and we evaluate it using the asymptotic form for $\psi_p$. Near $p =i\Lambda$, $a_{-p}$ vanishes as $\(p-i\Lambda\)\,\partial_p a_{-p}|_{p=i\Lambda}$. Then the integrated equation~(\ref{eq:random}) gives us $\partial_p a_{-p}|_{p=i\Lambda}=-\frac{i\mathcal{N}}{a_{i\Lambda}}$, where the normalization constant ${\cal N} \equiv \int_0^{\infty} \psi_{i\Lambda}^2 dX$. So the contribution from the zero of $a_{-p}$ to the integral~(\ref{eq:randoma}) is just $\frac{-a_{i\Lambda}^2}{\mathcal{N}} g_{i\Lambda} (X) g_{i\Lambda} (X')$. But $\frac{a_{i\Lambda}}{\sqrt{\mathcal{N}}} g_{i\Lambda} (X)$ is precisely the normalized bound state wavefunction $\psi^N_{i\Lambda} (X)$, so the contribution from the zero of $a_{-p}$ is exactly cancelled by the bound state term in~(\ref{eq:comp1}). Hence the right hand side is zero as required. All of this is simply understood. Imagine deforming our potential $U(X)$ to one possessing no bound state. Then the integral in (\ref{eq:randoma}), with $p$ running along the real axis, would give the delta function with no bound state contribution being needed. Now as one deforms the potential back to $U(X)$, a zero in $a_{-p}$ corresponding to a bound state would appear first at $p=0$. This yields a pole in the integrand of (\ref{eq:randoma}) but the contour may be deformed above it since the rest of the integrand is analytic in the upper half $p$-plane. As the potential becomes more and more negative, the zero of $a_{-p}$ moves up the imaginary $p$ axis to $p=i\Lambda$. But the integral still equals $\delta(X-X')$ as long as one takes the contour above the pole (see Figure \ref{fig:cont}). If one deforms the contour back to the real $p$ axis, the bound state contribution discussed above is produced from the residue of the pole. \begin{figure} \centerline{\psfig{file=cont2.eps,width=3in}} \caption{Contour of integration avoiding the bound state pole.} \labfig{cont} \end{figure} In the case of non-singular instantons, a similar procedure may be followed. Here, since $b(\sigma)$ goes linearly to zero at each pole, $X$ should be defined as $\int_{\sigma}^{\sigma_t}\frac{d\sigma'}{b\(\sigma'\)}$, where $\sigma_t$ is, say, the value of $\sigma$ for which $b(\sigma)$ is a maximum. Then $X$ ranges from $-\infty$ to $\infty$, and for each value of $p^2$ we need two linearly independent mode functions. 
These may be taken to be $g_p^{\mathrm{left}}\(X\)$, defined to tend to $e^{-ipX}$ as $X\rightarrow-\infty$, and $g_p^{\mathrm{right}}\(X\)$, defined to tend to $e^{ipX}$ as $X\rightarrow \infty$. These can be shown to be orthogonal, and analytic in the upper half $p$-plane. As $X\rightarrow \infty$, if we write $g_p^{\mathrm{left}}\(X\)\rightarrow c_p e^{ipX}+d_p e^{-ipX}$, then as $X\rightarrow-\infty$, $g_p^{\mathrm{right}}\(X\) \rightarrow d_{p} e^{ipX}-c_{-p} e^{-ipX}$. Finally, we may express $\delta \(X-X'\)$ as $\int_{-\infty}^{\infty} \frac{g_p^{\mathrm{right}}\(X\) g_p^{\mathrm{left}}\(X'\)}{2\pi d _p} dp + \psi^N_{i\Lambda}\(X\)\psi^N_{i\Lambda}\(X'\)$ in close analogy with the singular case. Returning to our discussion of the singular case, from the representation (\ref{eq:comp1}) of the delta function we may now construct the following ansatz for the Green function \begin{eqnarray} G_E =\int_{-\infty}^{\infty}C_p\(\Omega\)\frac{\psi_p\(X\) \psi_p\(X'\) } {4\pi a_p a_{-p}} dp + C_{i\Lambda}\(\Omega\) \psi^N_{i\Lambda}\(X\)\psi^N_{i\Lambda}\(X'\) \labeq{cdef} \end{eqnarray} where as above $\Omega$ is the angle between the two points $x^i$ and ${x^{i}}'$ on the three-sphere. Substituting (\ref{eq:cdef}) into (\ref{eq:ge2}) we obtain an equation governing the `universal' part of the correlator, $C_p\(\Omega\)$. This reads \begin{equation} (\tilde{\Delta} +3) (\tilde{\Delta} -1-p^2) C_p\(\Omega\)= \delta^3(\Omega)-\frac{\sin 2\om}{\pi^2 \sin \om}, \labeq{a2} \end{equation} where $\tilde{\Delta}\equiv \partial_\Omega^2 +2\cot \Omega \partial_\Omega$. We first find the particular integral for the term on the far right and then the four linearly independent solutions of the homogeneous equation. The latter are \begin{equation} \frac{\cos2\om}{\sin\om},\frac{\sin2\om}{\sin\om}, \frac{\cosh p\om}{\sin\om}, \mbox{~and~} \frac{\sinh p\om}{\sin\om}. \end{equation} We combine the 2 solutions singular at $\om=0$ to cancel their leading singularities, and arrange that the term linear in $\Omega$ at small $\Omega$ has the correct coefficient to match the delta function in (\ref{eq:a2}). Then we choose the coefficient of $\sinh p\om / \sin\om$ to make $C_p$ finite at $\Omega=\pi$ (i.e. for antipodal points on the $S^3$). Finally we choose the coefficient of the $\sin2\om / \sin\om$ term to make the projection of $C_p$ onto the gauge modes zero. Hence we find \begin{eqnarray} C_p\(\Omega\) = \frac{1}{4\pi\(p^2+4\)}\frac{\sinh p\(\om-\pi\)}{\sinh p\pi \sin \om}+ \frac{\(\pi-\om\)\cos 2\om}{4 \pi^2 \(p^2+4\) \sin \om} + \nonumber \\ \(\frac{1}{\pi^2\(p^2+4\)^2}-\frac{1}{16 \pi^2 \(p^2+4\)}\)\frac{\sin 2\om}{\sin \om}, \labeq{cp} \end{eqnarray} where the first term gives us the contribution one might expect by analogy with the usual Euclidean scalar field vacuum. We shall be interested in the behaviour of $C_p\(\Omega\)$ in the complex $p$-plane. Here, the role of the extra terms is just to remove the double pole at $p=2i$. Had we written out the terms involving the homogeneous mode on the three-sphere, we would find that their effect would be to remove the pole at $p=i$. We now have a convergent expression for $G_E$ in the Euclidean region. Note that whilst $G_E$ is perhaps most naturally expressed as a sum of regular eigenmodes, discrete because the space is compact, we have instead expressed it as an integral. 
The integral formula is more useful for analytic continuation since it is already close to an expression of the form we desire in the open universe, namely an integral over a continuum of modes. Nevertheless it is interesting to see how the Euclidean Green function appears as a discrete sum. For $X>X'$, we may close the $p$ contour above to obtain an infinite sum, convergent in the Euclidean region, \begin{eqnarray} \sum^{\infty}_{n=3}\frac{1}{4\pi^2 \(n^2-4\)}\frac{\sin n\om}{\sin \om} { g_{ni}(X) \psi_{ni}(X')\over a_{-ni}}, \labeq{sumeq} \end{eqnarray} where $g_{ni}(X) \sim e^{-nX}$ at large $X$. This demonstrates that our Green function is analytic in proper distance $\sigma \sim e^{-X}$ at the north pole of the instanton, as it should be. For non-singular instantons the argument generalizes to show that the Green function is analytic at the south pole too. \section{The Lorentzian Green Function} We now wish to continue our integral formula for the Green function given by (\ref{eq:cdef}), (\ref{eq:cp}) into the open universe region. This involves setting $\om=-i\chi$ and continuing the conformal coordinate as described in equation (\ref{eq:confx}). To perform the continuation we take $X>X'$ and write $G_E$ as \begin{eqnarray} \int_{\cal C} \frac{g_p(X)\psi_p(X')}{2\pi a_{-p}} C_p(\om) dp \labeq{start} \end{eqnarray} where the contour ${\cal C}$ for the $p$ integral has been deformed above the bound state pole as described above (see Figure \ref{fig:cont}). We can perform the analytic continuation $\Omega=-i\chi$ immediately. However, the $X$ continuation is more subtle, because $g_p \sim e^{ipX}$ at large $X$, and unless we are careful terms like $\sim e^{p \pi}$ occur which cause the $p$ integral to diverge. We circumvent this problem as follows. For $X-X'>0$ we have \begin{eqnarray} \int_{C} dp \frac{g_p(X)\psi_p(X')}{2\pi a_{-p}} {e^{ip\chi}\over \(p^2+4\)} = {e^{-2 \chi} \over 16 \pi^2} \frac{g_{2i}(X)\psi_{2i}(X')}{a_{-2i}} \labeq{ident} \end{eqnarray} since the integrand is analytic in the upper half $p$-plane and the integral may be closed above. By inserting ${\sinh} p\pi / {\sinh} p\pi=1$ under the integral one sees that the integral (\ref{eq:ident}) with a factor $e^{p\pi} /{\sinh} p\pi$ inserted is equal to that with a factor $e^{-p\pi} /{\sinh} p\pi$ inserted, plus the remaining term on the right hand side. We use this identity to re-express the term from (\ref{eq:cp}) behaving as $e^{p\pi+i\chi}$. The term on the right hand side of (\ref{eq:ident}) may then be combined with the analytic continuation of the term involving $\pi \cos 2\Omega$ on the right hand side of (\ref{eq:cp}) to produce a term proportional to $\sinh 2\chi /\sinh \chi$. The remaining terms in the correlator arising from the second and third terms in (\ref{eq:cp}) are proportional to $\chi \cosh 2\chi /\sinh \chi$ and $\sinh 2\chi /\sinh \chi$. These, and the constant terms we have not written explicitly, are zero modes of the operators $\Delta$, $\Delta+3{\cal K}$ or $(\Delta+3{\cal K})^2$ and are therefore homogeneous modes or gauge modes which should be ignored. 
After these simplifications, we rewrite our partially-continued correlator as \begin{eqnarray} \int_{\cal C} {dp \over 8 \pi^2 (p^2+4)}{{\sin} p \chi \over {\sinh} \chi} {e^{-p\pi} \over {\sinh} p\pi } {g_p(X) \over a_{-p}} \psi_p(X') \labeq{rewr} \end{eqnarray} and insert the expression (\ref{eq:psidec}) to obtain \begin{eqnarray} \int_{\cal C} {dp \over 8 \pi^2 (p^2+4)} {{\sin} p \chi \over {\sinh} \chi }{e^{-p\pi} \over {\sinh} p\pi } \[g_p(X) g_{-p}(X')+ {a_p \over a_{-p}} g_{p}(X) g_{p}(X')\]. \labeq{rewri} \end{eqnarray} We are now ready to perform the analytic continuation $X= -i{\pi\over 2} -\tau$ under the integral. Under this substitution, we have \begin{eqnarray} g_{\pm p}(X) \rightarrow e^{\pm p \pi \over 2} g^L_{\pm p}(\tau) \labeq{jostcont} \end{eqnarray} where the Lorentzian Jost function $g_p^L(\tau)$ is defined to be the solution to the Lorentzian perturbation equation $\hat{O} g_p^L(\tau) = (p^2+4) g_p^L(\tau)$ obeying $g_p^L(\tau) \rightarrow e^{-i p\tau}$ as $\tau \rightarrow -\infty$. Equation (\ref{eq:jostcont}) follows by matching at large $X$. Like $g_{p}(X)$, the Lorentzian Jost function $g_p^L(\tau)$ is analytic in the upper half $p$-plane. The correlator now reads \begin{eqnarray} \int_{\cal C} {dp \over 8 \pi^2 (p^2+4)} {{\sin} p \chi \over {\sinh} \chi} {1 \over {\sinh} p\pi } \[e^{-p\pi} g_p^L(\tau) g_{-p}^L(\tau') + {a_p \over a_{-p}} g_{p}^L(\tau) g_{p}^L(\tau')\]. \labeq{rewrii} \end{eqnarray} We want to distort the contour of integration ${\cal C}$ back to the real $p$ axis. This is only possible provided $g_{-p}$ is analytic in the region through which the contour is moved. Analyticity of $g_{-p}$ in the strip $0<\Im (p)<1$ is proven as follows. One may re-express the Lorentzian perturbation equation $\hat{O}q = (p^2+4)q$ in terms of Lorentzian proper time $t$. The equation then takes the form $t^2 \ddot{q} + A(t)t \dot{q} +B(t)q=-p^2 q$. The coefficients $A(t)$ and $B(t)$ have Taylor expansions in $t$ about $t=0$. One now seeks power series solutions $q\sim t^{s}(1+q_1t+q_2t^2+...)$ and finds the appropriate indicial equation for $s$ and recursion relation for the coefficients. Here we obtain $s=\pm ip$, and a recursion relation that is non-singular as long as $p$ is not an integer times $i$. In this situation, one is guaranteed that the series for $q(t)$ converges within a circle extending to the nearest singularity of the differential equation \cite{WW}, and even then the solution may be defined by analytic continuation around that singularity. In our case the first singularity occurs on the real axis when the background scalar field velocity $\dot{\phi}_0$ is zero, when inflation is over and reheating has begun. Matching across that singularity is accomplished by switching from $q$ to $\Psi_N$, and will not present any difficulties. \vfill\eject Since $g_{-p}$ is analytic in the desired region, we may now distort the contour ${\cal C}$ back to the real $p$ axis, recovering the bound state term from the simple pole at $p=i\Lambda$. Once the integral is along the real axis we use symmetry under $p \rightarrow -p$ to rewrite it as \begin{eqnarray} \int_{-\infty}^\infty && {dp \over 16 \pi^2 (p^2+4)} {{\sin} p \chi \over {\sinh} \chi } \Bigg( {\rm coth} p\pi \[ g_p^L(\tau) g_{-p}^L(\tau') +g_{-p}^L(\tau) g_{p}^L(\tau')\] \cr -&& \[ g_p^L(\tau) g_{-p}^L(\tau') -g_{-p}^L(\tau) g_{p}^L(\tau')\] + {1\over {\sinh} p\pi} \[ {a_p \over a_{-p}} g_p^L(\tau) g_{p}^L(\tau')+ {a_{-p} \over a_{p}} g_{-p}^L(\tau) g_{-p}^L(\tau')\] \Bigg). 
\labeq{rewriii} \end{eqnarray} Note that for real $p$, $g^L_{-p}(\tau)$ is the complex conjugate of $g_{p}^L(\tau)$ and $a_{-p}$ is the complex conjugate of $a_{p}$, so the second term in square brackets is imaginary, but the first and third terms are real. We now have the correlator in the Lorentzian region. Recall that the derivation assumed $X-X'>0$. This translates to $\tau'>\tau$, and we have calculated the Feynman (time ordered) correlator for all $\chi$ subject to this condition. For cosmological applications we are usually interested in the expectation value of some quantity squared, like the microwave background multipole moments or the Fourier modes of the density field. For this purpose, all that matters is the symmetrized correlator $\langle \{q (\chi,\tau) , q (0,\tau')\} \rangle$ which is just the real part of the Feynman correlator. The symmetrized correlator also represents the `classical' part of the two point function and in the situations of interest it will be much larger than the imaginary piece. The second term in (\ref{eq:rewriii}) is pure imaginary and does not contribute to the symmetrized correlator. The other terms combine to give our final result \begin{eqnarray} \langle \{ \Psi_N(\chi,\tau), \Psi_N(0,\tau') \} \rangle&& = {4\over \kappa^2 \dot{\phi_0}(\tau) \dot{\phi_0}(\tau')} \times \cr \Bigg[ \int_{0}^{\infty}\frac{dp}{4\pi^2\(p^2+4\)}&& \frac{\sin p\chi}{\sinh \chi} \Re \( \coth p\pi g_p^L(\tau) g_{-p}^L(\tau')+ \frac{1}{\sinh p\pi} \( \frac{a_p}{a_{-p}} g_p^L\(\tau\)g_{p}^L(\tau')\)\) \cr &&+ \frac{1}{4\pi\(4-\Lambda^2\)} \frac{\sinh \Lambda\chi}{\sinh \chi} \frac{a_{i\Lambda}^2}{\mathcal{N}} \frac{ g^{L}_{i\Lambda}\(\tau\) g^{L}_{i\Lambda}\(\tau'\)}{\sin \Lambda \pi} \Bigg] \labeq{lorgreen} \end{eqnarray} where the bound state pole arises as mentioned above from distorting the contour across the pole at $p=i\Lambda$ and we have converted the final integral along the real axis to one from $0$ to $\infty$. We have also converted from the $q$ variable to $\Psi_N$ by multiplying by the appropriate factor; dots denote derivatives with respect to Lorentzian proper time. Also recall that units here are such that the comoving curvature scale is unity (${\cal K}=-1$). The first term in this formula is essentially identical to that derived by \cite{bucher} and \cite{cohn}, although in those derivations it was obtained only as an approximation. The second term, representing the reflection amplitude for waves incident from the regular pole, is only important at low $p$, since the $\sinh p \pi$ denominator suppresses it exponentially at high $p$. Finally, the bound state term produces long range correlations beyond the curvature scale. For large $p$ we recover the usual scale-invariant spectrum of inflationary quantum fluctuations. To see this note that at large $p$, $|g_p^L\(\tau\)|\rightarrow 1$. Then according to~(\ref{eq:lorgreen}) there are equal contributions to the variance $\langle \Psi_N^2(0,\tau) \rangle$ from each logarithmic interval in $p$ at large $p$. As mentioned in the introduction, one of our main concerns is to establish the differences between perturbations about singular and non-singular instantons. The above derivation was for singular instantons. It is straightforward to follow the argument through for non-singular instantons, where the coordinate $X$ now runs from $-\infty$ instead of from zero. 
The only change in the final formula (\ref{eq:lorgreen}) is that the phase factor $a_p/a_{-p}$ is replaced by $c_p/d_p$, which is the reflection amplitude for waves incident on the potential from $X=+\infty$. \section{Conclusions} We have derived a general formula~(\ref{eq:lorgreen}) for the time dependent correlator of the Newtonian potential in open inflationary universes resulting from Euclidean cosmological instantons. The phase shifts and mode normalizations in the Euclidean region can be calculated numerically, and the Lorentzian Jost functions can be evolved numerically until they pass outside of the Hubble radius during inflation and freeze out. In future work we shall obtain the associated primordial matter power spectra and cosmic microwave anisotropies for a variety of potentials, considering both singular Hawking--Turok and non-singular Coleman--De Luccia instantons ~\cite{ght}. To summarise, we have computed from first principles the spectrum of density perturbations in an inflationary open universe including gravitational effects. We found that the Euclidean path integral coupled to the no boundary proposal gives a well defined, unique fluctuation spectrum, obtained via analytic continuation of the real-space Euclidean correlator. \medskip \centerline{\bf Acknowledgements} We wish to thank Martin Bucher, Rob Crittenden, Stephen Hawking, Thomas Hertog and Harvey Reall for valuable comments and continuous encouragement. We also thank J. Garriga and X. Montes for informative discussions of their work. Using different methods they have independently derived results related to our final formula (\ref{eq:lorgreen}) \cite{garriganew}. This work was supported by a PPARC (UK) rolling grant and a PPARC studentship.
\section{Introduction} The work of Bekenstein \cite{Bekenstein:1973ur} and Hawking \cite{Hawking:1974sw} has spurred extensive research on the so-called `information paradox'. The premise of the paradox is that in a collapse of matter forming a black hole, the intermediate state post-collapse is a black hole that can be characterized by a small number of physical parameters (mass, charge, angular momentum, etc.). A semi-classical calculation such as the one Hawking originally performed, however, suggests that black holes radiate as black bodies, namely with a thermal spectrum. This seems to suggest a gross violation of unitary evolution as all information about the exact in-state that went into forming the black hole appears to have been lost after its evaporation. \\ The emergence of string theory, holography \cite{'tHooft:1996tq,'tHooft:1984re,Susskind:1994vu} and gauge-gravity duality \cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj} has shed significant light on this problem. In fact, it is often claimed that if one were to believe gauge-gravity duality, the paradox is solved `in-principle' as the boundary theory is unitary by construction and the duality states an equivalence (at the level of partition functions) between the gravitational and boundary field theories. Nevertheless, the strength of this claim is questionable \cite{Mathur,Mathur:2009hf} and even within the best understood examples of gauge-gravity duality, there is no general consensus on the exact process of information retrieval. Furthermore, the best understood examples of the said duality, while providing for a very useful toolbox, typically involve bulk space-times with a negative cosmological constant and are far from the real world. The technology required to reliably understand more realistic space-times is, at this stage, far from established. Additionally, why intricate details of string theory or the duality may be absolutely necessary for our understanding of the evolution of general gravitational dynamics is not apparent. While the fuzzball program \cite{Mathur:2005zp,Bena:2007kg,Balasubramanian:2008da,Skenderis:2008qn,Mathur:2008nj} provides some arguments for why stringy details may be important, it is fair to say that there is no general consensus on the matter. \\ Years before gauge-gravity duality was proposed and was seen as a possible resolution to the information paradox, there was an alternative suggestion by 't Hooft \cite{'tHooft:1996tq,'tHooft:1991bd,'tHooft:1992zk}. The proposal was to consider particles of definite momenta `scattering' off a black-hole horizon. These particles were to impact the out-going Hawking quanta owing to their back-reaction on the geometry. With the knowledge that the black hole is made out of a large, yet \textit{finite}, number of in-states, one may scatter particles of varying momenta repeatedly, until all in-states that may have made up the black hole have been exhausted. This led to the construction of an S-Matrix that maps in-states to out-states. This matrix was shown to be unitary. A further advancement for spherically symmetric horizons was made recently \cite{Hooft:2015jea,Hooft:2016itl,Hooft:2016cpw}, where a partial wave expansion allowed for an explicit writing of the S-Matrix for each spherical harmonic. However, this construction has its own shortcomings. 
It presumes that the S-Matrix can be split as \begin{equation}\label{eqn:SMatrixsplit} S_{\text{total}} ~ = ~ S_{-\infty} ~ S_{\text{horizon}} ~ S_{+\infty} \, , \end{equation} where $S_{\pm\infty}$ correspond to matrices that map asymptotic in-states to in-going states near the horizon and outgoing states near the horizon to asymptotic out-states respectively, and $S_{\text{horizon}}$ is the S-Matrix that captures all the dynamics of the horizon. Whether such an arbitrarily near-horizon region captures all the dynamics of the black hole is not entirely clear. The construction is also done in a `probe-limit' in that the back-reaction is not taken to impact the mass of the black hole. Only its effect on outgoing particles is captured. Furthermore, throwing a particle into a black hole is not an exactly spherically symmetric process. While a non-equilibrium process initiated by the in-going particle does break spherical symmetry, it may be expected that the black hole settles down into a slightly larger, spherically symmetric solution after some characteristic time-scale that depends on the interactions between the various degrees of freedom that make up the black hole. This scattering can be decomposed into partial waves, and the different waves are assumed to evolve independently. However, one expects that the partial waves are not independent and that they indeed `interact' in a generic evolutionary process; it is not clear how one may incorporate this interaction in this construction. Furthermore, the scattering algebra would possibly need modification in a more general setup. Another important limitation is that the back-reaction calculations ignore transverse effects \cite{Aichelburg:1970dh,Dray:1984ha} which become increasingly important as we approach Planckian scales. Finally, while it may not be a fundamental difficulty, the splitting of the wave function via \eqref{eqn:SMatrixsplit} needs further investigation. \\ As is evident from the above, it is surprisingly easy to criticize even the most promising approaches to quantum black hole physics. In this article, we seek to address some of the criticisms of the S-Matrix approach to quantum black holes. Inspired by old ideas from non-critical 2d string theory \cite{Klebanov:1991qa,Moore1992b,Schoutens:1993hu,Verlinde:1993sg,Alexandrov:2002fh,Karczmarek:2004bw,Friess:2004tq,Maldacena:2005he}, we construct a theory\textemdash of a collection of quantum mechanics models with inverted harmonic oscillator potentials\textemdash that exactly reproduces the S-Matrix of 't Hooft for every partial wave; the inverted potentials arise naturally to allow for scattering states, as opposed to bound states in a conventional harmonic oscillator. The intrinsically quantum nature of the model dispenses with the critique that the S-Matrix of 't Hooft is a `classical' one. With any toy model, it may be hard to establish the validity of its applicability to black hole physics. However, in our construction, all the observables (S-Matrix elements) are exactly identical to those of 't Hooft's S-Matrix; thereby avoiding any ambiguity about its validity. Furthermore, we observe that in-states must contain an approximately constant number density over a wide range of frequencies in order for the scattered out-states to appear (approximately) thermal; this condition was also noted in the 2d string theory literature. Finally, and perhaps most significantly, we show that our model captures an exponentially growing degeneracy of states. 
\\ It may be added that aside from the approaches mentioned earlier, there have been many attempts to construct toy-models to study black hole physics \cite{Callan:1992rs,Gibbons:1998fa,Kazakov2002,Magan:2016ojb,Jansen:2016zai,Banerjee:2016mhh}. The hope is that `good' toy models teach us certain universal features of the dynamics of black hole horizons. \\ This article is organized as follows. In section \ref{sec:macroSMatrix}, we briefly review gravitational back-reaction and 't Hooft's S-Matrix construction along with its partial wave expansion. Our derivation is slightly different from the one of 't Hooft \cite{Hooft:2015jea} in that it relies only on the algebra associated to the scattering problem. Therefore, the `boundary conditions' of the effective bounce, as imposed in 't Hooft's construction, are built in from the start via the back-reaction algebra \eqref{eqn:macroalgebra}. In section \ref{sec:micromodel}, we present our model and compute the corresponding scattering matrix to show that it explicitly matches the one of 't Hooft. In section \ref{sec:collapse}, we make an estimate of the high energy behaviour of the total density of states to argue that the model indeed describes the existence of an intermediate black-hole state. We conclude with a discussion and some future perspectives in section \ref{sec:discussion}. \\ \paragraph{A brief summary of results: } There are two main results of this work: one is a re-writing of the degrees of freedom associated to 't Hooft's black hole S-Matrix in terms of inverted harmonic oscillators; this allows us to write down the corresponding Hamiltonian of evolution explicitly. The second, related result is an identification of a connection to 2d string theory which in turn allows us to show that there is an exponential degeneracy of how a given total initial energy may be distributed among many partial waves of the 4d black hole; much as is expected from the growth of states associated to black hole entropy. At various points in Sections \ref{sec:micromodel} and \ref{sec:collapse}, we review some aspects of matrix models and 2d string theory in detail. While we expect some consequences for these theories based on our current work, we do not have any new results within the framework of 2d black holes or matrix models in this paper. \section{Back-reaction and the Black Hole S-Matrix}\label{sec:macroSMatrix} Consider a vacuum solution to Einstein's equations of the form: \begin{equation}\label{eqn:genericmetric} ds^2 ~ = ~ 2 A\left(u^+,u^-\right) du^+ \, du^- + g\left(u^+,u^-\right) \, h\left(\Omega\right) d\Omega^2 \, , \end{equation} where $u^+, u^-$ are light-cone coordinates, $A\left(u^+,u^-\right)$ and $g\left(u^+,u^-\right)$ are generic smooth functions of those coordinates and $h\left(\Omega\right)$ is the metric tensor depending on only the $(d-2)$ transverse coordinates $\Omega$. It was shown in \cite{Dray:1984ha} that an in-going massless particle with momentum $p^-$ induces a shock-wave at its position specified by $\Omega$ and $u^-=0$. The shock-wave was shown to change geodesics such that out-going massless particles feel a kick\textemdash of the form $u^- \rightarrow u^- + 8 \pi G \, p^-_{\text{in}} \hat{f}\left(\Omega,\Omega^\prime\right)$\textemdash in their trajectories at $u^-=0$, where $\hat{f}$ depends on the spacetime in question. If we were to associate a putative S-Matrix to the dynamics of the black hole, the said back-reaction may be attributed to this S-Matrix in the following manner. 
Consider a generic in-state $\Ket{\text{in}_0}$ that collapsed into a black hole and call the corresponding out-state after the complete evaporation of the black hole $\Ket{\text{out}_0}$. The S-Matrix maps one into the other via: $S \, \Ket{\text{in}_0} = \Ket{\text{out}_0}$. Now the back-reaction effect may be treated as a tiny modification of the in-state as $\Ket{\text{in}_0} \rightarrow \Ket{\text{in}_0 + \delta p^-_{\text{in}}\left(\Omega\right)}$, where $\delta p^-_{\text{in}}\left(\Omega\right)$ is the momentum of an in-going particle at position $\Omega$ on the horizon. Consequently, the action of the S-Matrix on the modified in-state results in a different out-state which is acted upon by an operator that yields the back-reacted displacement: \begin{equation} S \, \Ket{\text{in}_0 + \delta p^-_{\text{in}}\left(\Omega\right)} ~ = ~ e^{- i \delta p^+_{\text{out}}\left(\Omega^\prime\right) \delta u^-_{\text{out}}} \Ket{\text{out}_0} \, , \end{equation} where the operator acting on the out-state above is the `displacement' operator written in Fourier modes. Now, we may repeat this modification arbitrarily many times. This results in a cumulative effect arising from all the radially in-going particles with a distribution of momenta on the horizon. Therefore, writing the new in- and out-states\textemdash with all the modifications included\textemdash as $\Ket{\text{in}}$ and $\Ket{\text{out}}$ respectively, we have \begin{equation} \Bra{\text{out}}S\Ket{\text{in}} ~ = ~ \Bra{\text{out}_0}S\Ket{\text{in}_0} \, \exp \left[- i 8 \pi G \int d^{d-2}\Omega^\prime \, p^+_{\text{out}}\left(\Omega^\prime\right) \, \hat{f}\left(\Omega,\Omega^\prime\right) \, p^-_{\text{in}}\left(\Omega\right)\right] \, . \end{equation} Should we now \textit{assume} that the Hilbert space of states associated to the black-hole is completely spanned by the in-going momenta and that the Hawking radiation is entirely spanned by the out-state momenta, we are naturally led to a unitary S-Matrix given by \begin{equation} \Bra{p^+_\text{out}}S\Ket{p^-_\text{in}} ~ = ~ \exp \left[- i 8 \pi G \int d^{d-2}\Omega^\prime \, p^+_{\text{out}}\left(\Omega^\prime\right) \, \hat{f}\left(\Omega,\Omega^\prime\right) \, p^-_{\text{in}}\left(\Omega\right)\right] \, . \end{equation} There is an overall normalization factor (vacuum to vacuum amplitude) that is undetermined in this construction. The assumption that the black hole Hilbert space of states is spanned entirely by the in-state momenta $p^-_{\text{in}}$ is equivalent to postulating that the said collection of radially in-going, gravitationally back-reacting particles collapses into a black hole. While this may seem a reasonable assumption, it is worth emphasizing that there is no evidence for this at the level of the discussion so far. We have not modeled a collapsing problem. We will see in Section \ref{sec:collapse} that our proposed model in Section \ref{sec:micromodel} provides for a natural way to study this further. And significantly, we give non-trivial evidence that the derived S-Matrix possibly models a collapsing black-hole. \subsection{Derivation of the S-Matrix} We now return to the back-reaction effect at a semi-classical level in order to derive an explicit S-Matrix using a partial wave expansion in a spherically symmetric problem.
For the back-reacted metric\textemdash after incorporating the shift $u^- \rightarrow u^- + f\left(\Omega,\Omega^\prime\right)$ into \eqref{eqn:genericmetric}\textemdash to still satisfy Einstein's equations of motion, the following conditions need to hold at $u^-=0$ \cite{Dray:1984ha}: \begin{align}\label{eqn:backreactionconds.} \dfrac{A\left(u^{+,-}\right)}{g\left(u^{+,-}\right)} \bigtriangleup_\Omega f\left(\Omega,\Omega^\prime\right) - \left(\dfrac{d-2}{2}\right)\dfrac{\partial_{u^+} \partial_{u^-} g\left(u^{+,-}\right)}{g\left(u^{+,-}\right)} f\left(\Omega,\Omega^\prime\right) ~ &= ~ 8 \, \pi \, p^-_{\text{in}} \, A\left(u^{+,-}\right)^2 \, \delta^{(d-2)}\left(\Omega,\Omega^\prime\right) \nonumber \\ \partial_{u^-} A\left(u^{+,-}\right) ~ &= ~ 0 ~ = ~ \partial_{u^-} g\left(u^{+,-}\right) \, , \end{align} where $\bigtriangleup_\Omega$ is the Laplacian on the $(d-2)$-dimensional metric $h\left(\Omega\right)$. We concern ourselves with the Schwarzschild black-hole, written in Kruskal-Szekeres coordinates as \begin{equation}\label{eqn:metricinKruskal} ds^2 ~ = ~ - \dfrac{32 \, G^3 \, m^3}{r} e^{-r/2Gm} du^+ \, du^- + r^2 d\Omega^2 \, . \end{equation} For the above metric \eqref{eqn:metricinKruskal}, at the horizon $r = R = 2 G m$, the conditions \eqref{eqn:backreactionconds.} were shown \cite{Dray:1984ha} to reduce to \begin{equation}\label{eqn:laplacianonshift} \bigtriangleup_S \left(\Omega\right) f\left(\Omega,\Omega^\prime\right) ~ \coloneqq ~ \left(\bigtriangleup_\Omega - 1\right) f\left(\Omega,\Omega^\prime\right) ~ = ~ - \kappa \, \delta^{(d-2)}\left(\Omega,\Omega^\prime\right) \, , \end{equation} with the implicit dependence of $r$ on $u^+$ and $u^-$ given by \begin{equation} u^+ \, u^- ~ = ~ \left(1 - \dfrac{r}{2 G m}\right) e^{-r/2 G m} \, , \end{equation} and $\kappa = 2^4 \, \pi \, e^{-1} \, G \, R^2 \, p^-_{\text{in}}$. These seemingly ugly coefficients may easily be absorbed into the stress-tensor on the right hand side of the Einstein's equations. Now, the cumulative shift experienced by an out-going particle, say $u^-_{\text{out}}$, is given by a distribution of in-going momenta on the horizon \begin{align}\label{eqn:cumulativeshift1} u^-_{\text{out}}\left(\Omega\right) ~ &= ~ 8 \pi G R^2 \, \int d^{d-2} \Omega^\prime \, \tilde{f}\left(\Omega,\Omega^\prime\right) \, p^-_{\text{in}}\left(\Omega^\prime\right) \, , \end{align} where $\kappa \, \tilde{f}\left(\Omega,\Omega^\prime\right) = f\left(\Omega,\Omega^\prime\right)$. Similarly, we have the complementary relation for the momentum of the out-going particle, say $p^+_{\text{out}}$ given in terms of the position $u^+_{\text{in}}$ of the in-going particle: \begin{align}\label{eqn:cumulativeshift2} u^+_{\text{in}}\left(\Omega\right) ~ &= ~ -8 \pi G R^2 \, \int d^{d-2} \Omega^\prime \, \tilde{f}\left(\Omega,\Omega^\prime\right) \, p^+_{\text{out}}\left(\Omega^\prime\right) \end{align} The expressions \eqref{eqn:cumulativeshift1} and \eqref{eqn:cumulativeshift2} may be seen as `boundary conditions' of an effective bounce off the horizon. However, this intuition is rather misleading and we will refrain from this line of thought. Nevertheless, what is striking to note is that the momentum of the in-state is encoded in the out-going position of the Hawking radiation while the position of the in-state is encoded in the momentum of the out-going Hawking state! However, so far, the quantities $u^\pm_{\text{in/out}}$ are dimensionless while $p^\mp_{\text{in/out}}$ are densities of momenta with mass dimensions four. 
Therefore, to appropriately interpret these as positions and momenta, we rescale them as $u^\pm_{\text{in/out}} \rightarrow R u^\pm_{\text{in/out}}$ and $p^\mp_{\text{in/out}} \rightarrow R^{-3} p^\mp_{\text{in/out}}$ \cite{Hooft:2016itl}. Notwithstanding this rescaling, we continue to use the same labels for the said quantities in order to avoid clutter of notation. Now, using the canonical commutation relations, respectively, for the out and in particles\footnote{To avoid clutter in notation, we drop the in/out labels on positions and momenta of particles. $u^+$ and $u^-$ always refer to ingoing/outgoing positions, respectively. Consequently, $p^-$ and $p^+$ are always associated with ingoing/outgoing momenta, respectively.} \begin{equation} \left[\hat{u}^-\left(\Omega\right),\hat{p}^+\left(\Omega^\prime\right)\right] ~ = ~ \left[\hat{u}^+\left(\Omega\right),\hat{p}^-\left(\Omega^\prime\right)\right] ~ = ~ i \, \delta^{(d-2)}\left(\Omega-\Omega^\prime\right) \, , \end{equation} we may derive the algebra associated to the black hole scattering. We do this in a partial wave expansion\textemdash in four dimensions\textemdash as \begin{equation} \hat{u}^\pm \left(\Omega\right) ~ = ~ \sum_{lm} \hat{u}^\pm_{lm} \, Y_{lm} \left(\Omega\right) \quad \text{and} \quad \hat{p}^\pm \left(\Omega\right) ~ = ~ \sum_{lm} \hat{p}^\pm_{lm} \, Y_{lm} \left(\Omega\right) \, . \end{equation} Working with these eigenfunctions of the two-sphere Laplacian and using \ref{eqn:laplacianonshift} we can write the back-reaction equations \eqref{eqn:cumulativeshift1} and \eqref{eqn:cumulativeshift2} as \begin{equation}\label{eqn:partialwavebackreaction} \hat{u}^\pm_{lm} ~ = ~ \mp \dfrac{8 \pi G}{R^2\left(l^2 + l + 1\right)} \hat{p}^\pm_{lm} ~ \eqqcolon ~ \mp \lambda \, \hat{p}^\pm_{lm} \, . \end{equation} In terms of these partial waves, we may now write the scattering algebra as \begin{align}\label{eqn:macroalgebra} \left[\hat{u}^\pm_{lm},\hat{p}^\mp_{l^\prime m^\prime}\right] ~ &= ~ i \delta_{l l^\prime} \delta_{m m^\prime} \\ \left[\hat{u}^+_{lm},\hat{u}^-_{l^\prime m^\prime}\right] ~ &= ~ i \, \lambda \, \delta_{l l^\prime} \delta_{m m^\prime} \\ \left[\hat{p}^+_{lm},\hat{p}^-_{l^\prime m^\prime}\right] ~ &= ~ - \dfrac{i}{\lambda} \, \delta_{l l^\prime} \delta_{m m^\prime} \end{align} A few comments are now in order. Since the different spherical harmonics do not couple in the algebra, we will drop the subscripts of $l$ and $m$ from here on. Furthermore, we see that the shift-parameter $\lambda$ `morally' plays the role of Planck's constant $\hbar$, but one that is now $l$ dependent. Moreover, we see that wave-functions described in terms of four phase-space variables are now pair-wise related owing to the back-reaction \eqref{eqn:partialwavebackreaction}. Finally, it is important to note that each partial wave does not describe a single particle but a specific profile of a density of particles. For instance, the $s$-wave with $l=0$ describes a spherically symmetric density of particles. \\ Since the operators $\hat{u}^\pm$ and $\hat{p}^\pm$ obey commutation relations associated to position and momentum operators, we see that the algebra may be realized with $\hat{u}^- = - i \lambda \partial_{u^+}$ in the $u^+$ basis and $\hat{u}^+ = i \lambda \partial_{u^-}$ in the $u^-$ basis. A similar realization is evident for the momentum operators. 
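As a small consistency check of this operator realization, the commutator $\left[\hat{u}^+,\hat{u}^-\right] = i \lambda$ can be verified symbolically; the following minimal sketch (assuming Python with sympy; it is not part of the derivation and the names are purely illustrative) acts with both orderings on a test function in the $u^+$ basis:
\begin{verbatim}
import sympy as sp

lam = sp.symbols('lambda', positive=True)
u = sp.symbols('u_plus', real=True)
f = sp.Function('f')(u)

# In the u+ basis: u-hat^+ acts by multiplication, u-hat^- = -i*lambda*d/du+ .
U_plus = lambda g: u * g
U_minus = lambda g: -sp.I * lam * sp.diff(g, u)

commutator = sp.simplify(U_plus(U_minus(f)) - U_minus(U_plus(f)))
print(commutator)   # -> I*lambda*f(u_plus), i.e. [u+, u-] = i*lambda

# With the back-reaction relations p+ = -u+/lambda and p- = +u-/lambda,
# [p+, p-] = -(1/lambda**2) [u+, u-] = -i/lambda follows algebraically.
\end{verbatim}
The remaining brackets in \eqref{eqn:macroalgebra} then follow algebraically from the back-reaction relations \eqref{eqn:partialwavebackreaction}.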
Moreover, we may now define the following inner-products on the associated Hilbert space of states that respect the above algebra: \begin{align} \Braket{u^\pm|p^\mp} ~ &= ~ \dfrac{1}{\sqrt{2 \pi}} \exp\left(i u^\pm p^\mp\right) \\ \Braket{u^+|u^-} ~ &= ~ \dfrac{1}{\sqrt{2 \pi \lambda}} \exp\left(i \frac{u^+ u^-}{\lambda}\right) \label{eqn:upmscattering}\\ \Braket{p^+|p^-} ~ &= ~ \sqrt{\dfrac{\lambda}{2 \pi}} \exp\left(i \lambda p^+ p^-\right) \end{align} Using \eqref{eqn:upmscattering}, for instance, we may write the out-going wave-function\textemdash travelling along the coordinate $u^-$ after scattering\textemdash in terms of the in-going one travelling along $u^+$ as \begin{equation}\label{eqn:scatwavefn} \Braket{u^-|\psi} ~ \eqqcolon ~ \psi^{\text{out}}\left(u^-\right) ~ = ~ \int_{-\infty}^{\infty} \, \dfrac{d u^+}{\sqrt{2 \pi \lambda}} \, \exp\left(-i \frac{u^+ u^-}{\lambda}\right) \, \psi^{\text{in}}\left(u^+\right) \, . \end{equation} One can immediately see that this map is unitary, since it is simply a Fourier transform. To derive another useful form of the S-Matrix associated to the scattering, we first move to Eddington-Finkelstein coordinates: \begin{equation} u^+ ~ = ~ \alpha^+ \, e^{\rho^+} \, ,\quad u^- ~ = ~ \alpha^- \, e^{\rho^-} \, , \quad \, p^+ ~ = ~ \beta^+ \, e^{\omega^+} \quad \text{and} \quad p^- ~ = ~ \beta^- \, e^{\omega^-} \end{equation} where $\alpha^\pm = \pm 1$ and $\beta^\pm = \pm 1$ to account for both positive and negative values of the phase space coordinates $u^+$, $u^-$, $p^+$ and $p^-$. The normalization of the wave-function as \begin{align} 1 ~ &= ~ \int_{-\infty}^\infty \left|\psi\left(u^+\right)\right|^2 \, d u^+ \nonumber \\ &= ~ \int_{-\infty}^0 \left|\psi\left(u^+\right)\right|^2 \, du^+ ~ + ~ \int_{0}^{\infty} \left|\psi\left(u^+\right)\right|^2 \, d u^+ \nonumber \\ &= ~ - \int_{\infty}^{-\infty} \left|\psi^+\left(- e^{\rho^+}\right)\right|^2 e^{\rho^+} \, d\rho^+ ~ + ~ \int_{-\infty}^{\infty} \left|\psi^+\left(+ e^{\rho^+}\right)\right|^2 e^{\rho^+} \, d\rho^+ \nonumber \\ &= ~ \sum_{\alpha = \pm} \, \int_{-\infty}^{\infty} \left|\psi^+\left(\alpha e^{\rho^+}\right)\right|^2 e^{\rho^+} \, d\rho^+ \end{align} suggests the following redefinitions for the wave-function in position and momentum spaces \begin{align} \psi^\pm\left(\alpha^\pm e^{\rho^\pm}\right) ~ &= ~ e^{-\rho^\pm/2} \, \phi^\pm\left(\alpha^\pm,\rho^\pm\right) \quad \& \quad \tilde{\psi}^\pm\left(\beta^\pm e^{\omega^\pm}\right) ~ = ~ e^{-\omega^\pm/2} \, \tilde{\phi}^\pm\left(\beta^\pm,\omega^\pm\right) \, . \end{align} Therefore, using \eqref{eqn:scatwavefn}, we may write $\phi^{\text{out}}\left(\alpha^-, \rho^-\right)$ as: \begin{align}\label{eqn:fourier1} \phi^\text{out}\left(\alpha^-, \rho^-\right) ~ &= ~ \dfrac{1}{\sqrt{2 \pi \lambda}} \int_{-\infty}^{\infty} du^+ \, e^{\frac{\rho^+ + \rho^-}{2}} \, \exp\left(-i \frac{u^+ u^-}{\lambda}\right) \, \phi^{\text{in}}\left(\alpha^+,\rho^+\right) \nonumber \\ &= ~ \sum_{\alpha^+ = \pm} \int_{-\infty}^{\infty} \dfrac{du^+ }{\sqrt{2 \pi}} e^{\frac{\rho^+ + \rho^- - \log\lambda}{2}} \exp\left(- i \alpha^+ \alpha^- e^{\rho^+ + \rho^- - \log\lambda}\right) \, \phi^{\text{in}}\left(\alpha^+,\rho^+\right) \nonumber \\ &= \sum_{\alpha^+ = \pm} \int_{-\infty}^{\infty} \dfrac{dx}{\sqrt{2 \pi}} \, \exp\left(\frac{x}{2} - i \alpha^+ \alpha^- e^x\right) \, \phi^{\text{in}}\left(\alpha^+,x + \log\lambda - \rho^-\right) \, , \end{align} where in the last line, we introduced $x\coloneqq \rho^+ + \rho^- - \log\lambda$.
This equation may be written in matrix form as \begin{equation}\label{eqn:SMatrix1} \left( \begin{array}{c} \phi^{\text{out}}\left(+,\rho^-\right) \\ \phi^{\text{out}}\left(-,\rho^-\right) \end{array} \right) ~ = ~ \int_{-\infty}^\infty \, dx \left( \begin{array}{cc} A\left(+,+,x\right) & A\left(+,-,x\right) \\ A\left(-,+,x\right) & A\left(-,-,x\right) \end{array} \right) \left( \begin{array}{c} \phi^{\text{in}}\left(+,x + \log\lambda - \rho^-\right) \\ \phi^{\text{in}}\left(-,x + \log\lambda - \rho^-\right) \end{array} \right) \end{equation} where we have defined the quantity \begin{equation}\label{eqn:Ax} A\left(\gamma,\delta,x\right) ~ \coloneqq ~ \dfrac{1}{\sqrt{2\pi}} \, \exp\left(\frac{x}{2} - i \, \gamma \, \delta \, e^x\right) \, , \end{equation} with $\gamma = \pm$ and $\delta = \pm$. This integral equation may further be simplified by moving to Rindler plane waves: \begin{align} \phi^{\text{out}}\left(\pm,\rho^-\right) ~ &= ~ \dfrac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} dk_- \, \phi^{\text{out}}\left(\pm,k_-\right) \, e^{i k_- \rho^-} \\ \phi^{\text{in}}\left(\pm,x + \log\lambda - \rho^-\right) ~ &= ~ \dfrac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} \, d k_{\tilde{x}} \, \phi^{\text{in}}\left(\pm,k_{\tilde{x}}\right) e^{-ik_{\tilde{x}} \left(x + \log\lambda - \rho^-\right)} \\ A\left(\gamma,\delta,x\right) ~ &= ~ \dfrac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} \, dk_x \, A\left(\gamma,\delta,k_x\right) e^{i k_x \, x} \end{align} This allows us to write the above matrix equation \eqref{eqn:SMatrix1} as \begin{equation} \left( \begin{array}{c} \phi^{\text{out}}\left(+,k\right) \\ \phi^{\text{out}}\left(-,k\right) \end{array} \right) ~ = ~ e^{-ik \log\lambda} \left( \begin{array}{cc} A\left(+,+,k\right) & A\left(+,-,k\right) \\ A\left(-,+,k\right) & A\left(-,-,k\right) \end{array} \right) \left( \begin{array}{c} \phi^{\text{in}}\left(+,k\right) \\ \phi^{\text{in}}\left(-,k\right) \end{array} \right) \end{equation} where $A\left(\gamma,\delta,k\right)$ can be computed from the inverse Fourier transform of \eqref{eqn:Ax} using a coordinate change $y = e^x$ and the identity \begin{equation} \int_0^{\infty} \, dy \, e^{i\sigma y} y^{-ik-\frac{1}{2}} ~ = ~ \Gamma\left(\dfrac{1}{2} - ik\right) \, e^{i\sigma\frac{\pi}{4}} \, e^{k\sigma\frac{\pi}{2}} \, , \quad \text{where} \quad \sigma = \pm \, . \end{equation} Carrying out this computation, we find the following S-Matrix: \begin{align}\label{eqn:SMatrix2} S\left(k_l,\lambda_l\right) ~ &= ~ e^{-ik_l \log\lambda_l} \left( \begin{array}{cc} A\left(+,+,k_l\right) & A\left(+,-,k_l\right) \\ A\left(-,+,k_l\right) & A\left(-,-,k_l\right) \end{array} \right) \nonumber \\ &= ~ \dfrac{1}{\sqrt{2\pi}} \Gamma\left(\dfrac{1}{2} - ik_l\right) e^{-ik_l \log\lambda_l} \left( \begin{array}{cc} e^{-i\frac{\pi}{4}} \, e^{-k_l\frac{\pi}{2}} & e^{i\frac{\pi}{4}} \, e^{k_l\frac{\pi}{2}} \\ e^{i\frac{\pi}{4}} \, e^{k_l\frac{\pi}{2}} & e^{-i\frac{\pi}{4}} \, e^{-k_l\frac{\pi}{2}} \end{array} \right) \end{align} In this expression, we have reinstated a subscript on $k$ and $\lambda$ to signify that they depend on the specific partial wave in question. One may additionally diagonalize this matrix by noting that \begin{equation} A\left(+,+,k\right) ~ = ~ A\left(-,-,k\right) \quad \text{and} \quad A\left(+,-,k\right) ~ = ~ A\left(-,+,k\right) \, . 
\end{equation} With this observation, we see that the diagonalization of the S-Matrix is achieved via the redefinitions \begin{align} \phi^+_1\left(k\right) ~ &= ~ \phi^+\left(+,k\right) + \phi^+\left(-,k\right) \, , ~~ \qquad \phi^+_2\left(k\right) ~ &&= ~ \phi^+\left(+,k\right) - \phi^+\left(-,k\right) \nonumber \\ A_1\left(k\right) ~ &= ~ A\left(+,+,k\right) + A\left(+,-,k\right) \,, \qquad A_2\left(k\right) ~ &&= ~ A\left(+,+,k\right) - A\left(+,-,k\right) \, . \end{align} It may be additionally checked that this matrix is unitary. As already mentioned, while it may not be clear whether this matrix is applicable to the formation and evaporation of a physical black hole, a conservative statement that can be made with certainty is the following: all information that is thrown into a large black hole is certainly recovered in its entirety, at least when the degrees of freedom in question are positions and momenta. It would be interesting to generalize this to degrees of freedom carrying additional conserved quantities like electric charge, etc. On the other hand, there is a certain property of the S-Matrix that may be puzzling at first sight. Positive Rindler energies $k$ imply that the off-diagonal elements in the S-Matrix are dominant with exponentially suppressed diagonal elements, while negative Rindler energies reverse the roles. One way to interpret this feature is to think of an eternal black hole where dominant off-diagonal elements suggest that information about in-going matter from the right exterior is carried mostly by out-going matter from the left exterior. However, in a physical collapse, there is only one exterior. It has been suggested by 't Hooft that one must make an antipodal mapping between the two exteriors to make contact with the one-sided physical black hole; we discuss this issue in Section \ref{sec:discussion}. \section{The model}\label{sec:micromodel} Asking two simple questions allows us to almost entirely determine a quantum mechanical model that corresponds to the black hole scattering matrix of the previous section. The first question is `what kind of a quantum mechanical potential allows for scattering states?' The answer is quite simply that it must be an unstable potential. The second question is `what quantum mechanical model allows for energy eigenstates that resemble those of Rindler space?' The answer, as we will show in this section, is a model of waves scattering off an inverted harmonic oscillator potential. Using this intuition, we will now construct the model and show that it explicitly reproduces the desired S-Matrix. Having constructed the model, we will then proceed to compare it to 2d string theory models. The construction of our model and intuition gained from a comparison to 2d string theory/matrix quantum mechanics models \cite{Moore1992b,Alexandrov:2002fh,Maldacena:2005he} allow us to study time delays and degeneracy of states in the next section. \\ Inverted quadratic potentials, at a classical level, fill up phase space with hyperbolas as opposed to ellipses as in the case of standard harmonic oscillator potentials. Since we have a tower of 4d partial waves in the black hole picture, each of them results in a phase space of position and momentum and consequently a collection of inverted harmonic oscillators, one for each partial wave. Since the black hole scattering of 't Hooft mixes positions and momenta, we are naturally led to consider the description of scattering in phase space.
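Before constructing the model, we record a small numerical check of the partial-wave S-Matrix \eqref{eqn:SMatrix2} whose unitarity was noted above. The following is a minimal sketch (assuming Python with numpy and scipy; the helper name is ours and purely illustrative) that builds the matrix and verifies both $S S^\dagger = \mathbf{1}$ and that the squared moduli of its diagonal and off-diagonal entries take the thermal-looking forms $1/(1+e^{2\pi k})$ and $1/(1+e^{-2\pi k})$:
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def s_matrix(k, lam):
    # Partial-wave matrix of eq. (SMatrix2) for Rindler energy k and shift lam.
    pref = gamma(0.5 - 1j * k) * np.exp(-1j * k * np.log(lam)) / np.sqrt(2 * np.pi)
    diag = np.exp(-1j * np.pi / 4) * np.exp(-np.pi * k / 2)
    off  = np.exp( 1j * np.pi / 4) * np.exp( np.pi * k / 2)
    return pref * np.array([[diag, off], [off, diag]])

for k in (-2.0, 0.3, 1.5):
    S = s_matrix(k, lam=0.7)
    print(np.allclose(S @ S.conj().T, np.eye(2)),                        # unitarity
          np.isclose(abs(S[0, 0])**2, 1 / (1 + np.exp( 2 * np.pi * k))), # |diagonal|^2
          np.isclose(abs(S[0, 1])**2, 1 / (1 + np.exp(-2 * np.pi * k)))) # |off-diagonal|^2
\end{verbatim}
The same check applies verbatim to the S-Matrix derived from the model below, since the two will turn out to be identical.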
\subsection{Construction of the model} We first start with a phase space parametrized by variables $x_{l m}$ and $p_{l m}$. To implement the appropriate scattering off the horizon, we start with the same black hole scattering algebra: $[\hat x_{l m} , \hat p_{l' m'} ]= i \lambda \delta_{m m'} \delta_{l l'}$, with $\lambda = c/\left(l^2+l+1\right)$, where $c=8 \pi G/R^2$. We will return to how this parameter might naturally arise in a microscopic setting in Section \ref{sec:discussion}. Standard bases of orthonormal states are $|x; l, m \rangle$ and $|p; l, m \rangle$; these are coordinate and momentum eigenstates respectively, with \begin{equation} \langle l, m ; x |p ; l', m' \rangle ~ = ~ \dfrac{1}{\sqrt{2 \pi \lambda}} \, e^{ipx/ \lambda} \, \delta_{m m'} \delta_{l l'} \, . \end{equation} Since our interest is in the scattering of massless particles, it will turn out to be convenient to use light-cone bases $|u^\pm ; l, m \rangle$ which are orthonormal eigenstates of the light-cone operators: \begin{equation} \hat u^\pm_{l m} ~ = ~ \dfrac{\hat{p}_{l m} \pm \hat{x}_{l m}}{\sqrt 2} \qquad \text{and} \qquad \left[\hat u^+_{l m} ,\hat u^-_{l' m'} \right] ~ = ~ i \lambda \delta_{l l'}\delta_{m m'} \, . \end{equation} While they look similar to creation and annihilation operators of the ordinary harmonic oscillator, $\hat u^\pm$ are in truth hermitian operators themselves and are not hermitian conjugates of each other. Therefore, the states $|u^\pm ; {l, m}\rangle$ are reminiscent of coherent states. These plus and minus bases will be useful in describing the in and outgoing states of the upside-down harmonic oscillator. For definiteness, we will choose the ingoing states to be described in terms of the $u^+_{l, m}$ basis and the outgoing ones in terms of the $u^-_{l, m}$ basis. As in the previous section, we will work in the simplification where different oscillators (partial waves) do not interact and will therefore omit the partial wave labels in all places where they do not teach us anything new. Furthermore, as before, from the commutation relations we may define the following inner product on the Hilbert space of states \begin{equation}\label{eqn:fourierkernel} \langle u^+ |u^- \rangle ~ = ~ \dfrac{1}{\sqrt{ 2\pi \lambda}} \, \exp\left( \frac{iu^+ u^-}{\lambda} \right) \, , \end{equation} that expresses the Fourier transform kernel between the two bases. We may again realize the algebra if $\hat u^-$ acts on $ \langle u^+ |u^- \rangle $ and $\langle u^+ |x \rangle $ as $-i \lambda \partial_{u^+}$ while $\hat u^+$ acts on $\langle u^-|u^+\rangle $ and $ \langle u^-|x\rangle$ as $i \lambda \partial_{u^-}$. To endow the model with dynamics, we now turn to the Hamiltonian for each oscillator/partial wave \begin{align} H_{l m} ~ &= ~ \, \dfrac{1}{2} \left(p_{l m}^2 - x_{l m}^2\right) \nonumber \\ &= ~ \, \dfrac{1}{2}(u^+_{l m} u^-_{l m} + u^-_{l m} u^+_{l m}) \, , \end{align} which may also be written as \begin{eqnarray} H ~ = ~ \mp \, i \, \lambda \left( u^\pm \partial_{u^\pm} + \dfrac{1}{2} \right) \end{eqnarray} in the $u^\pm$ bases where we drop the $l , m$ indices. Physically, the wave-function can be taken to correspond to a wave coming from the right which after scattering splits into a transmitted piece that moves on to the left and a reflected piece that returns to the right. The other wave function can be obtained from this one by a reflection $x \to -x$.
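As a quick check of this light-cone rewriting (a minimal sketch assuming Python with sympy; not part of the construction), one may verify both the operator identity $\frac{1}{2}\left(p^2 - x^2\right) = \frac{1}{2}\left(u^+ u^- + u^- u^+\right)$ and that, in the $u^+$ basis with $\hat{u}^- = -i\lambda\partial_{u^+}$, this Hamiltonian indeed acts as $-i\lambda\left(u^+\partial_{u^+} + \frac{1}{2}\right)$:
\begin{verbatim}
import sympy as sp

# (i) Operator identity: the cross terms cancel irrespective of the commutator.
x, p = sp.symbols('x p', commutative=False)
u_plus, u_minus = (p + x) / sp.sqrt(2), (p - x) / sp.sqrt(2)
print(sp.expand((u_plus * u_minus + u_minus * u_plus) / 2 - (p**2 - x**2) / 2))  # -> 0

# (ii) Action in the u+ basis, where u-hat^- = -i*lambda*d/du+ .
lam, u = sp.symbols('lambda u', positive=True)
f = sp.Function('f')(u)
H_on_f = sp.Rational(1, 2) * (u * (-sp.I * lam * sp.diff(f, u))
                              + (-sp.I * lam * sp.diff(u * f, u)))
print(sp.simplify(H_on_f - (-sp.I * lam * (u * sp.diff(f, u) + f / 2))))          # -> 0
\end{verbatim}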
The light-cone coordinates describe these left/right movers and simplify the description of scattering since the Schr\"{o}dinger equation becomes a first order partial differential equation. Moreover, the energy eigenfunctions are simply monomials of $u^\pm$ while in the $x$ representation the energy eigenfunctions are more complicated parabolic cylinder functions. In particular, for each partial wave the Schr\"{o}dinger equation in light-cone coordinates is: \begin{equation} i \, \lambda \, \partial_t \psi_{\pm}\left(u^\pm , t\right) ~ = ~\mp i \, \lambda \, \left(u^\pm \partial_{u^\pm} + 1/2 \right) \psi_{\pm}\left(u^\pm, t\right) \, \end{equation} with solutions \begin{equation} \psi_{\pm}\left(u^\pm , t\right) ~ = ~ e^{\mp t/2} \, \psi_{\pm}^0 \left(e^{\mp t} u^{\pm}\right) \, . \end{equation} This can also be written in bra/ket notation as: \begin{equation} \langle u^\pm | \psi^\pm (t) \rangle ~ = ~ \langle u^\pm | e^{\frac{i}{\lambda} \hat H t} | \Psi^\pm_0 \rangle ~ = ~ e^{\mp \frac{t}{2}} \langle e^{\mp t} u^\pm | \Psi^\pm_0 \rangle \, . \end{equation} The time evolution for the basis states is given by \begin{align} e^{\frac{i}{\lambda} Ht} |u^\pm \rangle ~ &= ~ e^{\pm{t\over 2}} |e^{\pm t} u^\pm \rangle \cr \langle u^\pm| e^{\frac{i}{\lambda} Ht} ~ &= ~ e^{\mp {t\over 2}} \langle e^{\mp t} u^\pm | \cr \langle u^+| e^{\frac{i}{\lambda} H t}|u^- \rangle ~ &= ~ { 1 \over \sqrt{ 2 \pi \lambda} } e^{-{t\over 2}}\exp\left( \frac{i}{\lambda} u^+ u^- e^{-t}\right) \end{align} \begin{figure}[t] \centering \includegraphics[width=70mm]{phasespace.pdf} \caption{The scattering diagram.} \label{fig:scatteringdiagram} \end{figure} In the conventions of Figure \ref{fig:scatteringdiagram}, it is easy to see that ingoing states can be labelled by the $u^+$ axis while the outgoing ones by the $u^-$ axis. Since the potential is unbounded, the Hamiltonian has a continuous spectrum. In the $u^+$ representation the energy eigenstates with eigenvalue $\epsilon$ are \begin{equation} \dfrac{1}{\sqrt{2 \pi \lambda}}(u^+)^{i \frac{\epsilon}{\lambda} - \frac{1}{2}} \nonumber \, . \end{equation} The singularity at $u^+=0$ leads to a two fold doubling of the number of states. This is understood to be arising from the existence of the two regions (I - II) in the scattering diagram. From now on we use $|\epsilon, \alpha^+ \rangle_{\text{in}}$ and $|\epsilon, \alpha^- \rangle_{\text{out}}$ for the in and outgoing energy eigenstates with the labels $\alpha^+ = \pm$ , $\alpha^- = \pm$ to denote the regions I and II. While we have four labels, we are still only describing waves in the two quadrants (I-II) with two of them for ingoing waves and two for outgoing ones. The in-states may be written as \begin{align} &\langle u^+|\epsilon, + \rangle_{\text{in}}= \begin{cases}{1 \over \sqrt{2\pi \lambda}}{(u^+)}^{i\frac{\epsilon}{\lambda} - \frac{1}{2}}& u^+>0\cr 0 & u^+<0\end{cases} \quad &\langle u^+|\epsilon, - \rangle_{\text{in}} = \begin{cases}0& u^+>0\cr {1 \over \sqrt{2\pi \lambda}}(-u^+)^{i\frac{\epsilon}{\lambda}-\frac{1}{2}} & u^+<0\end{cases} \nonumber \end{align} describing left and right moving ingoing waves for the regions I and II respectively.
Similarly, the natural out basis is written as \begin{align} &\langle u^-|\epsilon, + \rangle_{\text{out}}= \begin{cases}{1 \over \sqrt{2\pi \lambda}}{(u^-)}^{-i\frac{\epsilon}{\lambda}-\frac{1}{2}}& u^- >0\cr 0 & u^- <0\end{cases} \quad &\langle u^-|\epsilon, - \rangle_{\text{out}}= \begin{cases} 0& u^- >0\cr {1 \over \sqrt{2\pi \lambda}}(-u^-)^{-i\frac{\epsilon}{\lambda}-\frac{1}{2}} & u^-<0 \end{cases} \nonumber \end{align} to describe the right and left moving outgoing waves for the regions I and II respectively. Therefore, time evolution of the energy eigenstates \begin{equation} \langle u^+|\epsilon, + \rangle_{\text{in}} (t) ~ = ~ \dfrac{1}{\sqrt{2 \pi \lambda}} e^{- i \frac{\epsilon}{\lambda} t} (u^+)^{i \frac{\epsilon}{\lambda} -\frac{1}{2}} ~ = ~ \dfrac{1}{\sqrt{2 \pi \lambda}} e^{-\frac{\rho^+}{2}} e^{- i \frac{\epsilon}{\lambda} t} e^{i \frac{\epsilon}{\lambda} \rho^+} \end{equation} implies that they correspond to the Rindler relativistic plane-waves\footnote{Normalised in the $u^\pm$ basis.} moving with the speed of light in the tortoise-coordinates if we identify the quantum mechanical time with Rindler time $t=\tau$ and the inverted harmonic oscillator energy with the Rindler momentum via $\kappa \lambda = \epsilon$. This means that the energy of the eigenstates of the non-relativistic inverted oscillator, when multiplied by $\lambda$, can also be interpreted as the energy/momentum of the Rindler relativistic plane waves of the previous section. This allows us to write down any ingoing state in terms of these Rindler plane waves. As we have seen, the unitary operator relating the $u^\pm$ representations is given by the Fourier kernel \eqref{eqn:fourierkernel} on the whole line that acts on a state as \begin{equation} \psi_{\text{out}} (u^-) ~ = ~ \left[ \hat{\mathcal S} \psi_{\text{in}} \right] (u^-)= \int_{-\infty}^\infty \dfrac{du^+}{\sqrt{2 \pi \lambda}} e^{-i u^+ u^- / \lambda} \psi_{\text{in}}(u^+). \end{equation} It is now clear that repeating the calculations of the previous section results in the same S-Matrix, rather trivially. However, to make the connection to the eigenstates of the inverted harmonic oscillator transparent, we will derive it in a more conventional manner. To represent the action of the kernel on energy eigenstates, we split it into a $2\times 2$ matrix that relates them as follows: \begin{equation} \begin{pmatrix} |\epsilon, + \rangle_{\text{out}} \\ |\epsilon, - \rangle_{\text{out}} \end{pmatrix} ~ = ~ \hat{\mathcal S} \begin{pmatrix}|\epsilon, + \rangle_{\text{in}} \\ |\epsilon, - \rangle_{\text{in}} \end{pmatrix} \end{equation} The fastest method to find each entry is to compute the in-going energy eigenstates in the out-going position basis and vice versa using the insertion of a complete set of states of the form \begin{equation} \langle u^- | \epsilon \rangle_{\text{in}} ~ = ~ \int_{-\infty}^\infty du^+ \langle u^- | u^+ \rangle \langle u^+ | \epsilon \rangle_{\text{in}} \, .
\end{equation} The results are \begin{align} \langle u^-|\epsilon, \pm \rangle_{\text{in}} ~ &= ~ \lambda^{i \epsilon / \lambda} e^{\mp i \pi / 4} e^{\pm \frac{\pi \epsilon}{2 \lambda}} \Gamma\left(\dfrac{1}{2} + i \dfrac{\epsilon}{\lambda} \right) \dfrac{(\alpha^-| u^-|)^{-i\frac{\epsilon}{\lambda} - \frac{1}{2}}}{\sqrt{2\pi \lambda}} \\ \langle u^+|\epsilon, \pm \rangle_{\text{out}} ~ &= ~ \lambda^{-i \epsilon / \lambda} e^{\pm i \pi / 4} e^{\pm\frac{\pi \epsilon}{2 \lambda}} \Gamma\left(\dfrac{1}{2} - i \dfrac{\epsilon}{\lambda} \right) \dfrac{(\alpha^+ |u^+|)^{i\frac{\epsilon}{\lambda} - \frac{1}{2}}}{\sqrt{2\pi \lambda}} \, . \end{align} Each of these equations gives two results for each sign\footnote{For negative signs, one makes use of $(-1)^{i\epsilon/\lambda - 1/2} = e^{- i \pi/2} e^{- \pi \epsilon/\lambda}$.} to yield: \begin{align} \mathcal{S} ~ &= ~ \dfrac{1}{\sqrt{2 \pi}} \exp\left(-i \, \dfrac{\epsilon}{\lambda} \, \log \lambda\right) \Gamma\left(\dfrac{1}{2} - i \dfrac{\epsilon}{\lambda}\right) \begin{pmatrix} e^{- i \frac{\pi}{4}} \, e^{- \frac{\pi \epsilon}{2 \lambda}} & e^{i \frac{\pi}{4}} \, e^{\frac{\pi \epsilon}{2 \lambda}} \\ e^{i \frac{\pi}{4}} \, e^{\frac{\pi \epsilon}{2 \lambda}} & e^{- i \frac{\pi}{4}} \, e^{- \frac{\pi \epsilon}{2 \lambda}} \end{pmatrix} \nonumber \\ &= ~ e^{i\Phi(\epsilon)} \, \exp\left(-i \, \dfrac{\epsilon}{\lambda} \, \log \lambda\right) \begin{pmatrix} \dfrac{e^{-i \pi/4}}{\sqrt{1 + e^{2\pi \epsilon/\lambda}}} & \dfrac{e^{i \pi/4}}{\sqrt{1 + e^{-2\pi \epsilon/\lambda}}} \\ \dfrac{e^{i \pi/4}}{\sqrt{1 + e^{-2\pi \epsilon/\lambda}}} & \dfrac{e^{-i \pi/4}}{\sqrt{1 + e^{2\pi \epsilon/\lambda}}} \end{pmatrix} \, , \end{align} with the scattering phase $\Phi(\epsilon)$ being defined via \begin{equation} e^{i\Phi(\epsilon)} ~ = ~ \sqrt{\dfrac{\Gamma\left(\frac{1}{2} - i \frac{\epsilon}{\lambda} \right)}{\Gamma\left(\frac{1}{2} + i \frac{\epsilon}{\lambda}\right)}} \, . \end{equation} Identifying parameters as $k_l \, \lambda_l = \epsilon_l$, we see that this precisely reproduces the S-Matrix derived in the previous section for every partial wave. In this model, it is clear that the competition between reflection and transmission coefficients is owed to the energy of the scattered waves being larger than the tip of the inverted potential. \\ \subsection{A projective light-cone construction} Although we had good reason to expect such an inverse harmonic oscillator realization of the black hole S-Matrix, there is, in fact, another way to derive it\textemdash using what is called a projective light-cone construction. This construction was first studied by Dirac; \cite{Rychkov:2016iqz,Weinberg:2010fx} provide a good modern introduction to the topic. The essential idea is to embed a null hyper-surface inside Minkowski space to study how linear Lorentz symmetries induce non-linearly realized conformal symmetries on a (Euclidean) section of the embedded surface. This allows us to relate the Rindler Hamiltonian (which can then be related directly to the Hamiltonian of the quantum mechanics model that describes the scattering) with the Dilatation operator on the horizon. In a black hole background this construction is of course expected to hold only locally in the near horizon region.
We first introduce $X=(x^\mu, x^{d-1}, x^{d} )$ with $\mu = 1, \ldots, d-2$ (note that $\mu$ is a Euclidean index), where the light-cone coordinates are defined as $x^\pm = x^{d} \pm x^{d-1}$. Here, $x^{d}$ serves as the time coordinate\footnote{The null cone is described by the equation $X^2=0$ and a Euclidean section can be given as $x^+=f(x^\mu)$.}. The Minkowski metric $\eta_{MN}$ in these coordinates is given as \begin{equation} ds^2 ~ = ~ - dx^+ dx^- + dx_\mu dx^\mu \, , \end{equation} which has an $SO(d-1,1)$ Lorentz symmetry. There is an isomorphism between the corresponding Lorentz algebra and the Euclidean conformal algebra in $d-2$ dimensions. To state this isomorphism, we first label the $(d-2)$-dimensional Euclidean conformal group generators as: \begin{align} P_\mu ~ &= ~ i\partial_\mu \quad &&\text{corresponding to translations,} \nonumber \\ M_{\mu\nu} ~ &= ~ i \left(x_\mu\partial_\nu-x_\nu\partial_\mu\right) \quad &&\text{to rotations,} \nonumber \\ D ~ &= ~ i x^\mu\partial_\mu \quad &&\text{to dilatations, and} \nonumber \\ K_\mu ~ &= ~ i \left(2x_\mu \left(x^\nu\partial_\nu\right) - x^2\partial_\mu\right) \quad &&\text{to special conformal transformations} \, . \end{align} The identification is now given as follows: \begin{align} J_{\mu\nu} ~ = ~ M_{\mu\nu} \, , \quad J_{\mu+} ~ = ~ P_\mu \, , \quad J_{\mu-} ~ = ~ K_\mu \, , \quad J_{+-} ~ = ~ D \, , \end{align} where the $SO(3,1)$ Lorentz generators $J_{MN}$ are given by \begin{equation} J_{M N} ~ = ~ x_M p_N - x_N p_M \, . \end{equation} These satisfy the $SO(3, 1)\backsimeq SL(2,\mathbb{C})$ algebra. In particular, the Dilatation operator on the two-dimensional horizon is \begin{equation} D ~ = ~ J_{+ -} ~ = ~ x_+ p_- - x_- p_+ ~ = ~ \dfrac{1}{\lambda} \left(u^+ u^- + u^- u^+ \right) ~ = ~ \dfrac{1}{\lambda} H \, , \end{equation} where in the second equality we used $u^\pm = x_\pm$ to connect to the light-cone coordinates of the previous sub-section and in the third equality we made use of the back-reaction relations \eqref{eqn:partialwavebackreaction}. Interestingly enough, we see that an appropriately scaled Dilatation operator together with the back-reaction relations gives us exactly the Hamiltonian of the inverted oscillator. The scaling is also neatly realized in the relation between the quantum mechanical energy $\epsilon$ and the Rindler energy $\kappa$ that relates the two S-Matrices. \\ This construction via the light-cone projection could possibly shed more light on the relation between the black hole S-Matrix and string theoretic amplitudes. In the early papers on black hole scattering \cite{'tHooft:1991bd,'tHooft:1992zk,'tHooft:1996tq}, a striking similarity between the S-Matrix and stringy amplitudes was observed. The role of the string worldsheet was attributed to the horizon itself. It was noted that the string tension was imaginary. In the construction above, we found that the induced conformal symmetry on the horizon is Euclidean and that the Dilatation operator is mapped to the time-evolution operator (Rindler Hamiltonian) of the 4d Lorentzian theory. This led us to the unstable potential of the inverse harmonic oscillator. It may well be that the apparently misplaced factor of $i$ in the string tension is owed to the Euclidean nature of the conformal algebra on the horizon.
It would also be interesting to understand the role of possible infinite-dimensional local symmetries on the horizon/worldsheet \cite{Hawking:2015qqa,Hawking:2016msc} from the point of view of the quantum mechanics model, elaborating on the null cone construction. We leave this study to future work. \\ While the model is seemingly very simple, this is not the first time that such a model has been considered to be relevant for black hole physics \cite{Friess:2004tq,Maldacena2005}. However, previous considerations have found that these models do not correspond to 2d black hole formation owing to an insufficient density of states in the spectrum. Refining these considerations with the intuition that each oscillator as considered in this section corresponds to a partial wave of a 4d black hole, we find that our model may indeed be directly related to 4d black holes formed by physically collapsing matter. We provide evidence for this in Section \ref{sec:collapse}. Before doing so, however, it will be very useful to review the 2d string theory considerations of the past; this is what we now turn to. \subsection{Relation to matrix models and 2-d string theory} Hermitian Matrix Quantum Mechanics (MQM, henceforth) in the inverted harmonic oscillator potential was studied in connection with $c=1$ matrix models and string theory in two dimensions. For more details, we refer the reader to \cite{Klebanov:1991qa,Martinec:2004td}. Here, we briefly review these results in order to point out various similarities and differences with our work. The Lagrangian of MQM is of the form \begin{equation} L ~ = ~ \dfrac{1}{2} \, {\mbox{Tr}}\, \left[ \left(D_t M\right)^2 + M^2 \right] \qquad \text{with} \qquad D_t ~ = ~ \partial_t - i A_t \, , \end{equation} where $A_t$ is a non-dynamical gauge field. The $N \times N$ Hermitian matrices transform under $U(N)$ as $M \rightarrow U^\dagger M U$. The role of the non-dynamical gauge field is to project out the non-singlet states in the path integral. Diagonalization of the matrices results in a Vandermonde factor in the path integral measure: \begin{equation} \mathcal{D} M ~ = ~ \mathcal{D} U \, \prod_i \, d x_i \, \prod_{i<j} (x_i - x_j)^2 \, . \end{equation} This indicates a natural fermionic redefinition of the wave-functions into Slater determinants (in a first quantised description). The Hamiltonian of the system is, therefore, in terms of $N$ free fermions: \begin{equation} \hat H \, \tilde{\Psi} ~ = ~ - \sum_{i=1}^N \left(\dfrac{\hbar^2}{2} \partial_{x_i}^2 + \dfrac{1}{2} x_i^2 \right) \, \tilde{\Psi} \qquad \text{with} \qquad \tilde{\Psi}(x_i)= \prod_{i<j} (x_i - x_j) \Psi(x_i) \, , \end{equation} with $\tilde{\Psi}(x_i)$ being the redefined fermionic wave-functions. Filling up the `Fermi-sea' up to a level $\mu$ allows for a definition of the vacuum. Clearly, all fermions are subject to the same chemical potential $\mu$ that is typically considered to be below the tip of the inverted oscillator. A smooth string world-sheet was argued to be produced out of these matrices in a double-scaling limit $\mu \rightarrow 0 , \hbar \rightarrow 0$ with a fixed inverse string-coupling defined by the ratio $\mu/\hbar \sim 1/g_s $. In this double-scaling limit, this theory describes string theory on a 2d linear dilaton background with coordinates described by time $t$ and the Liouville field $\phi$.
The matrix model/harmonic oscillator coordinate $x$ is conjugate to the target space Liouville field via a non-local integral transformation \cite{Moore1992a}. In contrast to this picture, owing to a one-one correspondence between the 2d harmonic oscillators and 4d partial waves in our model, this integral transform is unnecessary. However, it has been argued in string theory that only the quadratic tip is relevant in this double-scaling limit, even in the presence of a generic inverted potential, emphasizing the universality of the quadratic tip. Whilst we do not have a similar stringy argument, we expect the quadratic potential to be similarly universal in our construction owing to the ubiquitous presence of the Rindler horizon in physical black holes formed from collapsing matter. A modern discourse with emphasis on the target space interpretation of the matrix model as the effective action of $N$ $D0$ branes may be found in \cite{McGreevy2004}. A natural second quantized string field theory description of the system where the fermionic wave-functions are promoted to fermionic fields may be found in \cite{Ginsparg1992,Klebanov:1991qa,Nakayama2004} and references therein. A satisfactory picture of free fermionic scattering in the matrix model was given in \cite{Moore1992b} via the following S-Matrix relation: \begin{equation} \hat S ~ = ~ i_{b \rightarrow f} \circ \hat S_{ff} \circ i_{f \rightarrow b} \end{equation} where even though the asymptotic tachyonic states are bosonic, one is instructed to first fermionize, then scatter the fermions in the inverted quadratic potential and then to bosonize again. The total S-matrix is unitary if the fermionic scattering is unitary and the bosonization spans all possible states. The logic of this expression resembles that of 't Hooft's S-matrix, where one first expands a generic asymptotic state into partial waves, expresses them in terms of near-horizon Rindler parameters, scatters them with the given S-matrix (which is similar to the one of 2d string theory), and then transforms back to the original asymptotic coordinates. At this level of the discussion, it may already be noted that one important difference between the 2d string-theoretic interpretation of the matrix model and our 4d partial wave one is the nature of the transformations that relate asymptotic states to the eigenstates of the inverted harmonic oscillator. Additionally, and perhaps more importantly, in our construction, we have an entire collection of such harmonic oscillators/matrix models parametrised by $l, m$ that conspire to make up a 4d black hole. We present concrete evidence for this by studying time-delays and degeneracy of states in Section \ref{sec:collapse}. There are further differences between the 2d string theories and our construction; to present them, we first need to study the spectrum of states in our model, which enables us to compare the growth of states in the two models. Finally, we also comment on a possible second quantization and appropriate MQM interpretation of our model in Section \ref{sec:discussion}. \section{Combining the oscillators (partial waves)}\label{sec:collapse} On the side of the macroscopic black hole in Section \ref{sec:macroSMatrix}, the calculation was done in an approximation where there is a pre-existing black hole into which degrees of freedom are thrown (as positions and momenta).
It was then evident that the information that was sent into the black hole is completely recovered since the S-Matrix is unitary. Furthermore, the back-reaction computation told us exactly how this information is retrieved: in-going positions as out-going momenta and in-going momenta as out-going positions. However, one may reasonably object that this is not enough to tell us whether the physical collapse and subsequent complete evaporation of a black hole is a unitary process. The calculation has not modeled a collapsing problem.\\ The picture to have in a realistic collapse is that of an initial state that evolves in time to collapse into an intermediate black hole state which then subsequently evaporates to result in a final state that is related to the initial one by a unitary transformation. Naturally, the corresponding macroscopic picture is that of a strongly time-dependent metric. Heuristically, one may think of the total S-Matrix of this process as being split as \begin{equation} \hat S ~ = ~ \hat S_{\mathscr{I}^- \rightarrow \text{hor}^-} ~ \hat S_{\text{hor}^- \rightarrow \text{hor}^+} ~ \hat S_{\text{hor}^+ \rightarrow \mathscr{I}^+ } \end{equation} where $\hat S_{\mathscr{I}^- \rightarrow \text{hor}^-}$ corresponds to evolution from asymptotic past to a (loosely defined) point in time when gravitational interactions are strong enough for the collapse to begin, $\hat S_{\text{hor}^- \rightarrow \text{hor}^+}$ to the piece where all the `action'\textemdash insofar as collapse and evaporation are concerned\textemdash takes place, and finally $\hat S_{\text{hor}^+ \rightarrow \mathscr{I}^+ }$ represents the evolution of the evaporated states to future infinity. The horizon\textemdash being a teleological construction that can be defined only if one knows the global structure of spacetime\textemdash has a time-dependent size and location in a collapse/evaporation scenario but for us will nevertheless comprise the locus of spacetime points where the backreaction effects are important. Therefore, we use subscripts $\text{hor}^\pm$ to refer to it, at different points in time, in the above heuristic split. Thinking of the total evolution this way, it is clear that the most important contribution arises from the part of the matrix that refers to the region in space-time where gravitational back-reaction cannot be ignored. The other pieces are fairly well-approximated by quantum field theory on an approximately fixed background. Nevertheless, in the intermediate stage, the metric is strongly time-dependent. \\ At the outset, let it be stated that we will not get as far as being able to derive this metric from the quantum mechanics model. We may nevertheless ask whether generic features of the black hole that we have come to learn from semi-classical analyses can also be seen in this model. We will focus on two important qualitative aspects of (semi-classical) black holes: \paragraph{Time-delay} A physical black hole is not expected to instantaneously radiate information that has been thrown into it. There is a time-delay between the time at which radiation begins to be received by a distant observer and the time at which one may actually recover in-going information. In particular, given an in-state that collapses into a black hole, we expect that the time-scale associated to the scattering process is `long'.
In previous studies of 2d non-critical string theory, it was found that with a single inverted harmonic oscillator, the associated time-delay is not long enough to have formed a black hole \cite{Schoutens:1993hu,Karczmarek:2004bw,Friess:2004tq}. However, with the recognition that each oscillator corresponds to a partial wave and that a collection of oscillators represents a 4d black hole, we see that the black hole degeneracy of states arises from the entire collection while the time-delay associated to each oscillator is the time spent by an in-going mode in the scattering region; the latter being more reminiscent of what one might call `scrambling time'. \paragraph{Approximate thermality} As Hawking famously showed \cite{Hawking:1974sw}, the spectrum of radiation looks largely thermal for a wide range of energies. One way to probe this feature is via the number operator\textemdash which, for a finite-temperature system, can be written as $\langle \hat N (\omega)\rangle = \rho(\omega) f(\omega)$ with $\rho(\omega)$ being the density of states and $f(\omega)$ the appropriate thermal distribution for Fermi/Bose statistics. Given that the S-Matrix is unitary, we know that this notion of temperature and thermality of the spectrum is only approximate. Notwithstanding this, a detector at future infinity should register this approximately thermal distribution for a large frequency range. \\ In what follows, we will study whether the S-Matrix corresponding to our collection of oscillators in the model presented in Section \ref{sec:micromodel} displays both these properties. \subsection{Time delays and degeneracy of states} We have seen that the total scattering matrix associated to four-dimensional gravity can be seen as arising via a collection of inverted harmonic oscillators, each with a different algebra differentiated by $\lambda_l$ in, say, \eqref{eqn:macroalgebra}. One canonical way to study life-times in scattering problems in quantum mechanics is via the time-delay matrix, which is defined as: \begin{equation}\label{eqn:timedelay} \Delta t_{ab} ~ = ~ \Re\left(i \, \sum_c \, S^\dagger(k_l,\lambda_l)_{ac} \left(\dfrac{d S(k_l,\lambda_l)}{d k}\right)_{cb}\right) \, . \end{equation} Each matrix element above encodes the time spent by a wave of energy $k_l$ in the scattering region in the corresponding channel. The trace of this matrix, called Wigner's time delay $\tau_l$, captures the total characteristic time-scale associated to the entire scattering process. Said another way, should we start with a generic in-state that undergoes scattering and is then retrieved in the asymptotic future as some out-state, the trace of the above matrix associates a life-time to the intermediate state \cite{deBianchi,deCarvalho200283}. For large energies $k_l$, using $S\left(k_l,\lambda_l\right)$ in \eqref{eqn:SMatrix2}, the Wigner time-delay associated to the scattering of a single oscillator can be calculated to scale as $\tau \sim \log\left(\lambda_l \, k_l\right)$. This is the same result as was found in the 2d string theory literature \cite{Schoutens:1993hu,Karczmarek:2004bw,Friess:2004tq} and was argued to not be long enough for black hole formation. Based on these black hole non-formation results in the matrix quantum mechanics, it was suggested that studying the non-singlet sectors would shed light on 2d black hole formation \cite{Kazakov2002,Banks2015}.
Despite some efforts to relate the adjoint representations to long-string states \cite{Maldacena2005}, a satisfactory Lorentzian description is still missing. Anticipating our result, our model does not suffer from these difficulties as it describes a 4d black hole with a collection of oscillators. The $s$-wave oscillator alone in our model would mimic the singlet sector in matrix quantum mechanics\footnote{It would be very interesting if higher $l$ modes can be described as non-singlets of a matrix model.}.\\ The above time delay $\tau$ may also be interpreted as a density of states associated to the system. The inverted potential under consideration implies a continuous spectrum. In order to discretize it and derive the density of states, the system must be stabilized\textemdash by putting it in a box of size $\Lambda$, for instance. Demanding that the wavefunctions vanish at the wall and regulating the result by subtracting any cut-off dependent quantities, the density of states may be computed from the scattering phase $\Phi$ defined via $\mathcal{S}\left(k_l,\lambda_l\right) = \exp\left[i \, \Phi\left(k_l,\lambda_l\right)\right]$ as $\rho(\epsilon_l)= d \Phi/d \epsilon_l$ \cite{Kazakov:1990ue}. The result is exactly the same as what we get from computing the time delay using the scattering matrix \eqref{eqn:SMatrix2} and the time-delay equation \eqref{eqn:timedelay}, and involves the digamma function $\psi^{(0)}$: \begin{align}\label{eqn:densityofstates} \rho\left(\epsilon_l\right) ~ = ~ \tau_l ~ &= ~ \, \dfrac{2}{\lambda_l} \Re \left[\psi ^{(0)}\left(\dfrac{1}{2} - i \dfrac{\epsilon_l}{\lambda_l}\right) + \log\left(\lambda_l\right)\right] \nonumber \\ &= ~ \Re \left[\sum_{n=0}^\infty ~ \dfrac{2}{i \epsilon_l - \lambda_l \left(n+\frac{1}{2}\right)} + \dfrac{2}{\lambda_l} \log\left(\lambda_l\right)\right] \, . \end{align} This density of states may be used to define a partition function for each partial wave (with Hamiltonian $\hat{H}_{l m}$), where the energy eigenstates contributing to the partition function will have been picked out by the poles of the density $\rho\left(\epsilon_l\right)$. However, in our model, we see that there are many oscillators in question. Should we start with an in-state made of a collection of all oscillators instead of a single partial wave, we may first write down the total S-Matrix as a product of the individual oscillators as \begin{equation}\label{eqn:totSMatrix} \mathcal{S}_{\text{tot}} ~ = ~ \prod_{l=0}^\infty \, \mathcal{S}\left(k_l, \lambda_l\right) \, , \end{equation} assuming that different partial waves do not interact. One may correct for this by adding interaction terms between different oscillators. Computing the time-delay associated to the scattering of some in-state specified by a given total energy involves an appropriately defined Wigner time-delay matrix: \begin{equation}\label{eqn:totTime} \tau_{\text{tot}} ~ = ~ \text{Tr} \left[\Re\left(- i \left(\mathcal{S}^\dagger_{\text{tot}}\right)_{ac} \left(\dfrac{d \mathcal{S}_{\text{tot}}}{dE_{\text{tot}}}\right)_{cb}\right)\right] \, , \end{equation} where this equation makes sense only if we have defined a common time evolution and unit of energy for the total system/collection of partial waves. We will elaborate on this shortly. Now, even in the spherically symmetric approximation, to write the total S-Matrix as a function of merely one coarse-grained energy $E_{\text{tot}}$ is not a uniquely defined procedure.
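As an aside, a small numerical illustration of the single-oscillator formula \eqref{eqn:densityofstates} may be helpful here; the following minimal sketch (assuming Python with the mpmath library; the values of $\lambda_l$ and $\epsilon_l$ are arbitrary) evaluates it directly and confirms that the time delay grows only logarithmically with the energy, in line with the scaling quoted above:
\begin{verbatim}
import mpmath as mp

def rho(eps, lam):
    # Single-oscillator density of states / time delay, eq. (densityofstates):
    # (2/lam) * Re[ digamma(1/2 - i*eps/lam) + log(lam) ]
    return 2 / lam * mp.re(mp.digamma(mp.mpf('0.5') - 1j * eps / lam) + mp.log(lam))

lam = 0.7
for eps in (10, 100, 1000, 10000):
    print(eps, rho(eps, lam), 2 / lam * mp.log(eps))   # the two columns converge
\end{verbatim}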
Nevertheless, our intuition that each partial wave may be thought of as a single-particle oscillator allows us to compute the density of states in a combinatorial fashion. We will see that the degeneracy of states associated to an intermediate long-lived thermal state arises from the various ways in which one might distribute a given total energy among the many available oscillators. Given a total energy $E_{\text{tot}}$, we now have the freedom to describe many states, each with a different distribution of energies into the various available oscillators. From the poles in the density defined in \eqref{eqn:densityofstates}, we see that each oscillator has energies quantized as\footnote{The seemingly disconcerting factor of $i$ is just owed to the fact that we have scattering states as opposed to bound ones.} \begin{align} \epsilon_l ~ &= ~ i \lambda_l \left(n_l + \dfrac{1}{2}\right) \, . \end{align} This allows us to measure energies in units of $c$, where $c$ is defined implicitly via $\lambda_l \left(l^2 + l + 1\right) = c$. Therefore, in these units, the energies are `quantized' as \begin{equation} \dfrac{\epsilon_l}{i \, c} ~ = ~ \dfrac{1}{l^2 + l + 1} \left(n_l + \dfrac{1}{2}\right) \, . \end{equation} Now, given some total energy $E_{\text{tot}}$, we see that any oscillator may be populated with a single particle state carrying energy such that $n_l = E_{\text{tot}} \left(l^2 + l + 1\right)$, where we leave out the half-integer piece for simplicity. Importantly, we see that there exist `special' states coming from very large $l$-modes even for very small energies. For example, an energy of $1$ could arise from a very large $l$-mode with the excitation given by $n_l = \left(l^2 + l + 1\right)$. This is rather unsatisfactory, for one expects that it costs a lot of energy to create such states. Moreover, there is an interplay between the log term in the growth of states and the behaviour of the digamma function that we are unable to satisfactorily take into account. There is an additional problem: the energy of each partial wave is measured in different, $l$-dependent units; this means that they also evolve with different times. We thus conclude that this is not the correct way to combine the different oscillators. \\ There is a rather beautiful way to resolve all three of these problems via a simple change of variables, to which we turn next. It will allow us to interpret the above cost of energy as relative shifts of energies with respect to a common ground state. Additionally, these relative shifts also cure the above interplay; there will simply be no log term in the density of states. Finally, this will also introduce a canonical time evolution for the entire system, resulting in one common unit of energy. \subsection{Exponential degeneracy for the collection of oscillators} In order to combine the different oscillators and define a Hamiltonian for the total system, we need to get rid of the $l$ dependence in the units of energy used for different oscillators. It turns out that this is possible by rewriting the black hole algebra. Moreover, using these new variables, the relation between 't Hooft's black hole S-Matrix for an individual partial wave and the one of 2d string theory of type II \cite{Moore1992b} can be made manifest.
To make this connection transparent, we again start with a collection of inverted harmonic oscillators and the following Hamiltonian for the total system \begin{align} H_{tot} ~ &= ~ \sum_{l,m} \, \dfrac{1}{2} \left(\tilde{p}_{l m}^2 - \tilde{x}_{l m}^2\right) \nonumber \\ &= ~ \sum_{l,m} \, \dfrac{1}{2} \left(\tilde{u}^+_{l m} \tilde{u}^-_{l m} + \tilde{u}^-_{l m} \tilde{u}^+_{l m}\right) \, , \end{align} but this time imposing the usual $\lambda$-independent commutation relations $[\tilde{u}^+_{l m} , \tilde{u}^-_{l' m'}]= i \delta_{l l'} \delta_{m m'}$. The $\lambda$ dependence will come through via the assignment of a chemical potential $\mu(\lambda)$ to each oscillator; this assignment is to be thought of as a different vacuum energy for each partial wave. Following \cite{Moore1992b}, one may then derive an S-Matrix for this theory. To match it to 't Hooft's S-Matrix for any given partial wave, one must identify the chemical potential and energy parameters as $\mu =1 / \lambda$ and the Rindler energy $k=\omega+ \mu =\omega + 1 / \lambda$. It is worth noting that in the reference cited above, only energies below the tip of the inverted potential were considered, resulting in a dominant reflection coefficient. In contrast, 't Hooft's partial waves carry energies higher than the one set by the tip of the potential. Consequently, to make an appropriate identification of 2d string theory with the partial wave S-Matrix, the reflection and transmission coefficients must be interchanged. From a matrix quantum mechanics point of view, it may additionally be noted that the partial wave parameter $\lambda$ may be absorbed into either Planck's constant or the chemical potential, leaving the string coupling of each partial wave fixed as $g_s \sim \hbar/\mu \sim c/(l^2+l+1)$. This indicates that as we increase the size of the black hole, or consider higher $l$ partial waves, the corresponding string coupling becomes perturbatively small. \\ Writing out the energies of the various partial waves with the above identification, we have \begin{equation} k_l ~ = ~ \omega_l + \frac{l^2+l+1}{c} \, , \qquad \text{and} \qquad E^{\text{Rindler}}_{\text{tot}} ~ = ~ \sum_l k_l \, . \end{equation} At this stage, the labels $\omega_l$ are continuous energies. However, discretizing the spectrum as before, by putting the system in a box, we arrive at discrete energies\footnote{Note again the relative factor of $i$, which indicates that the harmonic oscillator levels have to do with decaying/scattering states, while the $l$'s correspond to bound states.} \begin{equation}\label{eqn:totenergy} c \, E_{\text{tot}} ~ = ~ \sum_l \, \left[ i \, c \, \left( n_l + \frac{1}{2} \right) + l^2 + l + 1\right] \, , \end{equation} where the discretization applies to every individual oscillator. Without a detour into the 2d string theory literature, we could alternatively have arrived at this spectrum from the quantum mechanics model of Section~\ref{sec:micromodel} via the following identifications: \begin{equation} \epsilon_l ~ \longrightarrow ~ 1 + \lambda_l \, \omega_l \qquad \text{and} \qquad \lambda_l ~ \longrightarrow ~ \dfrac{1}{\mu_l} \, . \end{equation} \begin{wrapfigure}[22]{r}{0.5\textwidth} \centering \vspace{-10pt} \includegraphics[width=0.48\textwidth]{levels.png} \caption{Spectrum of the collection of oscillators.
The red curve indicates the potential, and the horizontal solid lines indicate the various available energy levels.} \label{fig:levels} \end{wrapfigure} While the model presented in Section \ref{sec:micromodel} makes the algebra manifest, the above identification of parameters, relating it to the model with a $\lambda$-independent algebra, makes the physical interpretation of the relative energy shifts between the partial waves manifest and allows for a consistent definition of time and energy for the total system. \\ This allows us to rewrite our S-Matrix $S\left(\epsilon_l/\lambda_l\right)$ as a function of the two variables $\omega_l$ and $\mu_l$, namely $S\left(\omega_l,\mu_l\right)$. With this change of variables, we recover exactly the S-Matrix of the 2d matrix models discussed in the literature, which now allows us to interpret $\mu$ as a chemical potential of the theory. However, since $\mu_l$ is $l$ dependent in our collection of models, it gives us a natural way to interpret how the combined system of oscillators behaves. To excite a very large $l$ oscillator, one first has to provide an energy equal to $\mu_l \sim \left(l^2 + l + 1\right)$. Therefore, we naturally see that exciting a large $l$-oscillator costs energy! The physical spectrum may be depicted as in Figure \ref{fig:levels}, where an arbitrarily chosen ground-state energy is set at $E = 0$, each oscillator is labelled by $l$, and excitations above it by $n_l$. The various oscillators are shifted by a chemical potential, and the vacuum is defined to be the one with all Rindler energies $k_l$ set to zero. Now, given an initial state carrying a total energy of $E_{\text{tot}}$, we are left with a degeneracy of states that may be formed by distributing this energy among the many available oscillators. The larger this energy, the more oscillators we may distribute it into, and hence the larger the degeneracy. The degeneracy associated to equation \eqref{eqn:totenergy}, without the chemical potential shift, is merely the number of sets of integers $\{n_l\}$ that add up to $E_{\text{tot}}$. These are the celebrated partitions of an integer, whose number\textemdash as Hardy and Ramanujan showed\textemdash grows exponentially. Clearly, for large total energy, our degeneracy grows similarly at leading order. However, the chemical potential shift slows down the growth polynomially compared to the partitioning into integers, owing to the fact that for a given $E_{\text{tot}}$ only approximately $\sqrt{E_{\text{tot}}}$ oscillators are available. It is worthwhile to note that, in this simplistic analysis, we have ignored the degeneracy arising from the $m$ quantum number; accounting for it clearly increases the growth of states. Therefore, we already see that the model allows for collapse, in that it supports an exponential growth of the density of states! This bears a striking resemblance to the Hagedorn growth of the density of states of black holes. \\ As a conservative estimate, we may start with some total energy $E_{\text{tot}}$ and a fixed set of oscillators that are allowed to contribute to it. We can then sum over the contributions arising from the $\left(l^2 + l + 1\right)c^{-1}$ pieces in \eqref{eqn:totenergy} and be left with a subtracted total energy $\tilde{E}_{\text{tot}}$ that is to be distributed among the $n_l$ excitations of the available oscillators.
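As a quick illustration of this counting, the following minimal sketch (with an arbitrary cutoff and purely illustrative numbers, not tied to any particular black hole) compares exact integer-partition counts with the exponential asymptotic quoted below; the asymptotic captures only the exponential growth and omits the polynomial prefactor, so it overshoots the exact values:
\begin{verbatim}
# Minimal sketch: exact partition counts p(n) versus exp(pi*sqrt(2n/3)).
from math import pi, sqrt, exp

def partition_numbers(nmax):
    """p(0..nmax) via the standard dynamic-programming recurrence."""
    p = [1] + [0] * nmax
    for k in range(1, nmax + 1):      # allow parts of size up to k
        for n in range(k, nmax + 1):
            p[n] += p[n - k]
    return p

p = partition_numbers(100)
for n in (10, 50, 100):
    print(n, p[n], exp(pi * sqrt(2 * n / 3)))
\end{verbatim}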
The number of such distributions clearly grows exponentially in the subtracted energy $\tilde{E}_{\text{tot}}$, much as the number of partitions of an integer does. The latter is given by the famous Hardy-Ramanujan formula for the growth of partitions of integers: \begin{equation} p\left(n\right) ~ \sim ~ \exp\left(\pi \sqrt{\dfrac{2 \, n}{3}}\right) \, . \end{equation} Identifying $n$ with the integer part of $\tilde{E}_{\text{tot}}$, we see the desired exponential growth. Moreover, the fact that the same total energy may be obtained by starting from different sets of contributing oscillators increases this degeneracy even further. While imposing the antipodal identification of 't Hooft\textemdash which we discuss in Section \ref{sec:discussion}\textemdash reduces this degeneracy, the exponential growth of states remains. Deriving the Schwarzschild entropy from this degeneracy requires a truly microscopic understanding of the parameter $\lambda$. We suggest a way forward towards the end of this article but leave a careful study to future work. \section{Discussion}\label{sec:discussion} In this article, we have constructed a quantum mechanics model that reproduces 't Hooft's black hole S-Matrix for every partial wave. Using this model, we provided non-trivial evidence that it describes a black hole S-Matrix; one associated to a black hole that can be formed in a time-dependent collapse process, owing to the appropriate density of states. Several questions, though, remain unanswered. The only degrees of freedom in question were the momenta and positions of ingoing modes. One may add various standard model charges, spin, etc.\ to see how information may be retrieved by the asymptotic observer. \\ Dynamically speaking, gravitational evolution is expected to be very complicated in real-world scenarios. We have merely approximated it by an evolution in which different spherical harmonics do not interact. While incorporating these interactions may be very difficult to conceive in gravity, they are rather straightforward to implement in the quantum mechanical model; one merely introduces interaction terms coupling different oscillators. The precise nature of these interactions is still left open. \\ The complete dynamics of the black hole includes a change in its mass during the scattering process. In this article, we chose to work in an approximation where this is ignored. The corresponding approximation in the inverted oscillators is that the potential is not affected by the scattering waves. In reality, of course, the quadratic potential changes due to the waves that scatter off it. The change in the form of the inverted potential due to a scattering mode can be calculated \cite{Asplund:2010xh}. We hope to work on this in the future, as we think it gives us a natural way to incorporate the changes to the mass of the black hole. Another possible avenue for future work is to realise a truly microscopic description of the S-matrix, either in the form of a matrix model or a non-local spin model with a finite-dimensional Hilbert space from the outset, where the inverted harmonic potential or emergent $SL(2,\mathbb{R})$ symmetries are expected to arise after an averaging over the interactions between the microscopic degrees of freedom. Some models with these properties can be found in \cite{Kitaev,Anninos:2014ffa,Anninos:2015eji,Maldacena:2016hyu}.
\paragraph{Antipodal entanglement} Unitarity of the S-Matrix demands that both the left and right exteriors in the two-sided Penrose diagram be accounted for; they capture the transmitted and reflected pieces of the wave-function, respectively. In the quantum mechanics model, there appears to be an ambiguity in how to associate the two regions I and II of the scattering diagram in Fig.~\ref{fig:scatteringdiagram} with the two exteriors of the Penrose diagram. We saw, in the previous section, that the quantum mechanical model appears to support the creation of physical black holes by exciting appropriate oscillators. Therefore, in this picture there is necessarily only one physical exterior. To resolve the issue of two exteriors, it was proposed that one must make an antipodal identification on the Penrose diagram \cite{Hooft:2016itl}; see Figure~\ref{fig:2sided}. Unitarity is arguably a better physical consistency condition than demanding the maximal analytic extension. The precise identification is given by $x \rightarrow J x$ with\footnote{Note that the simpler mapping of identifying points in $I , II$ via $(u^+, u^-, \theta , \phi) \leftrightarrow (-u^+, -u^-, \theta, \phi)$ is singular on the axis $u^+, u^- =0$.} \begin{equation}\label{eqn:antipodalidentification} J: \quad (u^+ , u^- , \theta , \phi) ~\longleftrightarrow ~ \left(- u^+ , - u^- ,\pi - \theta, \pi + \phi\right) \, . \end{equation} \begin{figure}[t] \centering \includegraphics[width=0.7 \textwidth]{2sided.pdf} \caption{The Penrose diagram. Each point in the conformal diagram originally corresponds to a different sphere. After the antipodal identification, the points $(u^+, u^-)$ and $(-u^+, -u^-)$ correspond to antipodal points on a common sphere, as given in \eqref{eqn:antipodalidentification}. The red lines indicate the arrows of time.} \label{fig:2sided} \end{figure} Note that $J$ has no fixed points and is also an involution, in that $J^2=1$. Such an identification implies that spheres at antipodal points of the Penrose diagram are identified with each other. In particular, this means \begin{equation} u^\pm \left(\theta, \phi\right) ~ = ~ - u^\pm \left(\pi - \theta, \pi + \phi\right) \qquad \text{and} \qquad p^\pm \left(\theta, \phi\right) ~ = ~ - p^\pm \left(\pi - \theta, \pi + \phi\right) \, . \end{equation} Therefore, noting that the spherical harmonics obey $Y_{l,m} \left(\pi - \theta, \pi + \phi\right) = \left(-1\right)^l Y_{l,m} \left(\theta,\phi\right)$, we see that only the modes with odd $l$ contribute. However, since the S-Matrix is valid only in the near-horizon region of space-time, this identification is presumably also valid only in this region. Global identifications of the two exteriors have been considered in the past \cite{Gibbons:1986dd,Sanchez:1986qn,Parikh:2002py}. The physics of the scattering, with this identification, is now clear. In-going wave-packets move towards the horizon, where gravitational back-reaction is strongest according to an asymptotic observer. Most of the information then passes through to the antipodal region and a small fraction is reflected back. Turning on quantum mechanics implies that the ingoing position is imprinted on the outgoing momenta; consequently, a highly localised ingoing wave-packet transforms into two outgoing pieces\textemdash a transmitted and a reflected one\textemdash both having highly localised momenta. Their positions, however, are highly de-localised.
This is how large-wavelength Hawking particles are produced out of short-wavelength wave-packets, and an IR-UV connection seems to be at play. Interestingly, the maximal entanglement between the antipodal out-going modes suggests a wormhole connecting each pair \cite{Maldacena:2013xja}; the geometric wormhole connects the reflected and transmitted Hilbert spaces. Furthermore, as the study of the Wigner time-delay showed, the reflected and transmitted pieces arrive with a time-delay that scales logarithmically in the energy of the in-going wave. This behaviour appears to be very closely related to the scrambling time (not the lifetime of the black hole), and we leave a more detailed investigation of this feature to the future. One may also wonder why the transmitted pieces dominate the reflected ones. It may be that the attractive nature of gravity is the actor behind the scenes. \paragraph{Approximate thermality} We now turn to the issue of thermality of the radiated spectrum. Given a number density, say $N^{\text{in}}(k)$ as a function of the energy $k$, we know that there is a unitary matrix that relates it to the radiated spectrum. This unitary matrix is precisely the S-Matrix of the theory. The relation between the in and out spectra is given by $N^{\text{out}}(k)=S^{\dagger}N^{\text{in}}(k)S$. Using the explicit expression for the S-Matrix \eqref{eqn:SMatrix2}, we find \begin{align} N_{++}^{\text{out}}(k)~ &= ~ \frac{N_{++}^{\text{in}}(k)}{1+e^{2\pi k}}+\frac{N_{--}^{\text{in}}(k)}{1+e^{-2\pi k}}\\ N_{--}^{\text{out}}(k) ~ &= ~ \frac{N_{--}^{\text{in}}(k)}{1+e^{2\pi k}}+\frac{N_{++}^{\text{in}}(k)}{1+e^{-2\pi k}} \, , \end{align} where $N^{\text{in}}_{++}$ and $N^{\text{in}}_{--}$ are the in-going number densities from either side of the potential. We see that the scattered pulse indeed emerges with thermal factors $1+e^{\pm 2\pi k}$. (Note that $\left(1+e^{2\pi k}\right)^{-1}+\left(1+e^{-2\pi k}\right)^{-1}=1$, so the total number density is conserved, as required by unitarity.) For most of the radiated spectrum to actually be thermal, $N^{\text{in}}_{++}$ and $N^{\text{in}}_{--}$ must be constant over a large range of energies. This was observed to be the case in the context of 2d string theory, starting from a coherent pulse, seen as an excitation over an appropriate Fermi-sea vacuum \cite{Schoutens:1993hu,Karczmarek:2004bw,Friess:2004tq}. In our context, since we do not yet have a first-principles construction of the appropriate second quantised theory, this in-state may be chosen freely. For instance, a simple pulse with a wide rectangular shape would suffice. One may hope to create such a pulse microscopically, by going to the second quantised description and creating a coherent state. Alternatively, one may hope to find a matrix quantum mechanics model that realizes a field theory in the limit of a large number of particles. After all, we know that each oscillator in our model really corresponds to a partial wave and not to a single particle in the four-dimensional black hole picture. \paragraph{Second Quantization v/s Matrix Quantum Mechanics} Given the quantum mechanical model we have studied in this article, we may naively promote the wave-functions $\psi_{lm}$ to fields to obtain a second quantized Lagrangian: \begin{equation} \mathcal{L} ~ = ~ \sum_{l,m} \, \int_{-\infty}^{\infty} \, du^\pm \, \psi_{lm}^\dagger\left(u^\pm,t\right) \left[i \partial_t + \dfrac{i}{2} \left(u^\pm \partial_{u^\pm} + \partial_{u^\pm} u^\pm \right) + \mu_l \right] \, \psi_{lm} (u^\pm,t) \, .
\end{equation} With a change of variables to Rindler coordinates, \begin{align} &\psi_{lm}^{(\text{in/out})}(\alpha^\pm ,\rho^\pm,t) ~ = ~ e^{\rho^\pm/2} \psi_{lm}(u^\pm = \alpha^\pm e^{\rho^\pm},t) \, , \end{align} the Lagrangian becomes relativistic \begin{align} \mathcal{L} ~ &= ~ \sum_{l,m} \, \int_{-\infty}^{\infty} \, d \rho^\pm \, \sum_{\alpha^\pm=1,2} \, \Psi_{lm}^{\dagger (\text{in/out})} \left(\alpha^\pm ,\rho^\pm,t\right) \left(i \partial_t - i \partial_{\rho^\pm} + \mu_l\right) \Psi_{lm}^{(\text{in/out})}\left(\alpha^\pm ,\rho^\pm,t\right) \, , \end{align} where the label `in' (`out') corresponds to the $+$ ($-$) sign. The Lagrangian being first order in derivatives indicates that the Rindler fields are naturally fermionic. In this description we have a collection of different species of fermionic fields labelled by the $\{l,m\}$ indices, and interactions between different harmonics would correspond to interactions between fermions of this kind. The conceptual trouble with this approach is that each ``particle'' to be promoted to a field is in reality a partial wave, as can be seen from the four-dimensional picture. Therefore, second quantizing this model may not be straightforward \cite{Hooft:2016cpw}. It appears more appealing to think of each partial wave as actually arising from an $N$-particle matrix quantum mechanics model which, in the large-$N$ limit, yields a second quantized description. Since $N$ counts the number of degrees of freedom, it is naturally related to $c$ via \begin{equation} \dfrac{1}{N^2} ~ \sim ~ c ~ = ~ \dfrac{8 \pi G}{R^2} ~ \sim ~ \dfrac{l^2_P}{R^2} \, . \end{equation} Therefore, $N$ appears to count the truly microscopic Planckian degrees of freedom that the black hole is composed of. The collection of partial waves describing the Schwarzschild black hole would then be a collection of such $N$-particle matrix quantum mechanics models. Another possibility is to describe the total system in terms of a single matrix model, but including higher representations/non-singlet states to describe the higher $l$ modes. This seems promising because, if one fixes the ground state energy of the lowest $l=0$ (or $l=1$ after the antipodal identification) oscillator, the higher $l$ oscillators have missing poles in their density of states compared to the $l=0$ one, much like what was found for the adjoint and higher representations in \cite{Boulatov:1991xz}. Finally, we note that we can combine the chemical potential with the oscillator Hamiltonian to get \begin{equation} \hat{H}_{\text{tot}} ~ = ~ \sum_{l,m} \, \left[\dfrac{1}{2} \left(\hat{p}_{l m}^2 - \hat{x}_{l m}^2\right) + \dfrac{R^2}{8 \pi G} \left(\hat{L}^2+1\right)\right] \, , \end{equation} with $\hat{L}^2=\sum_i \hat{L}^2_i$ giving the magnitude of the angular momentum of each harmonic. One can then perform a matrix regularisation of the spherical harmonics following \cite{deWit:1988wri,Taylor:2001vb}, which replaces the spherical harmonics $Y_{l m}(\theta, \phi)$ with $N \times N$ matrices $\mathbb{Y}_{l m}$, where $l \leq N -1$. This naturally sets a cut-off on the spherical harmonics from the outset. To sharpen any microscopic statements about the S-matrix, one might first need to derive an MQM model that regulates Planckian effects. \section*{Acknowledgements} We are indebted to Gerard 't Hooft for passing on to us his infectious enthusiasm for black hole physics, for many extremely important comments on various drafts of this article, and for several encouraging discussions.
We are grateful to Jan Manschot for very illuminating correspondence and discussions. We also take pleasure in thanking Umut G\"{u}rsoy, Javi Martinez-Magan, Phil Szepietowski and Stefan Vandoren for their insightful comments. \\ This work is supported by the Netherlands Organisation for Scientific Research (NWO) under the VIDI grant 680-47-518, the VICI grant 680-47-603, and the Delta-Institute for Theoretical Physics (D-ITP) that is funded by the Dutch Ministry of Education, Culture and Science (OCW).
\section{Introduction}\label{sec1} A Hamilton decomposition of a graph or digraph $G$ is a set of edge-disjoint Hamilton cycles which together cover all the edges of~$G$. The topic has a long history but some of the main questions remain open. In 1892, Walecki showed that the edge set of the complete graph $K_n$ on $n$ vertices has a Hamilton decomposition if $n$ is odd (see e.g.~\cite{abs,lucas} for the construction). If $n$ is even, then $n$ is not a factor of ${n \choose 2}$, so clearly $K_n$ does not have such a decomposition. Walecki's result implies that a complete digraph $G$ on $n$ vertices has a Hamilton decomposition if $n$ is odd. More generally, Tillson~\cite{till} proved that a complete digraph $G$ on $n$ vertices has a Hamilton decomposition if and only if $n \not = 4,6$. A tournament is an orientation of a complete graph. We say that a tournament is {\it regular} if every vertex has equal in- and outdegree. Thus regular tournaments contain an odd number $n$ of vertices and each vertex has in- and outdegree $(n-1)/2$. The following beautiful conjecture of Kelly~(see e.g.~\cite{bang,bondy,moon}), which has attracted much attention, states that every regular tournament has a Hamilton decomposition: \begin{conj}[Kelly] \label{kelly} Every regular tournament on $n$ vertices can be decomposed into $(n-1)/2$ edge-disjoint Hamilton cycles. \end{conj} In this paper we prove an approximate version of Kelly's conjecture. \begin{thm}\label{main1} For every $\eta >0$ there exists an integer $n_0$ so that every regular tournament on $n \geq n_0$ vertices contains at least $(1/2-\eta)n$ edge-disjoint Hamilton cycles. \end{thm} In fact, we prove the following stronger result, where we consider orientations of almost complete graphs which are almost regular. An \emph{oriented graph} is obtained from an undirected graph by orienting its edges. So it has at most one edge between every pair of vertices, whereas a digraph may have an edge in each direction. \begin{thm}\label{main} For every $\eta_1 >0$ there exist $n_0= n_0 (\eta_1)$ and $\eta _2=\eta _2 (\eta_1) >0$ such that the following holds. Suppose that $G$ is an oriented graph on $n\geq n_0$ vertices such that every vertex in $G$ has in- and outdegree at least $(1/2-\eta _2) n$. Then $G$ contains at least $(1/2-\eta_1)n$ edge-disjoint Hamilton cycles. \end{thm} The \emph{minimum semidegree} $\delta^0 (G)$ of an oriented graph $G$ is the minimum of its minimum outdegree and its minimum indegree. So the minimum semidegree of a regular tournament on $n$ vertices is $(n-1)/2$. Most of the previous partial results towards Kelly's conjecture have been obtained by giving bounds on the minimum semidegree of an oriented graph which guarantees a Hamilton cycle. This approach was first used by Jackson~\cite{jackson}, who showed that every regular tournament on at least 5 vertices contains a Hamilton cycle and a Hamilton path which are edge-disjoint. Zhang~\cite{zhang} then showed that every such tournament contains two edge-disjoint Hamilton cycles. Improved bounds on the value of $\delta^0 (G)$ which forces a Hamilton cycle were then found by Thomassen~\cite{tom1}, H\"aggkvist~\cite{HaggkvistHamilton}, H\"aggkvist and Thomason~\cite{haggtom} as well as Kelly, K\"uhn and Osthus~\cite{kelly}. Finally, Keevash, K\"uhn and Osthus~\cite{kko} showed that every sufficiently large oriented graph~$G$ on $n$ vertices with $\delta ^0 (G) \geq (3n-4)/8$ contains a Hamilton cycle. 
This bound on $\delta^0(G)$ is best possible and confirmed a conjecture of H\"aggkvist~\cite{HaggkvistHamilton}. Note that this result implies that every sufficiently large regular tournament on $n$ vertices contains at least $n/8$ edge-disjoint Hamilton cycles. This was the best bound so far towards Kelly's conjecture. Kelly's conjecture has also been verified for $n \le 9$ by Alspach (see the survey~\cite{bt}). A result of Frieze and Krivelevich~\cite{fk} states that Theorem~\ref{main} holds for `quasi-random' tournaments. As indicated below, we will build on some of their ideas in the proof of Theorem~\ref{main}. It turns out that Theorem~\ref{main} can be generalized even further: any large almost regular oriented graph on $n$ vertices whose in- and outdegrees are all a little larger than $3n/8$ can almost be decomposed into Hamilton cycles. The corresponding modifications to the proof of Theorem~\ref{main} are described in Section~\ref{38}. We also discuss some further open questions in that section. Jackson~\cite{jackson} also introduced the following bipartite version of Kelly's conjecture (both versions are also discussed e.g.~in the Handbook article by Bondy~\cite{bondy}). A \emph{bipartite tournament} is an orientation of a complete bipartite graph. \begin{conj}[Jackson] \label{kellybip} Every regular bipartite tournament has a Hamilton decomposition. \end{conj} An undirected version of Conjecture~\ref{kellybip} was proved independently by Auerbach and Laskar~\cite{auerbach}, as well as Hetyei~\cite{hetyei}. However, a bipartite version of Theorem~\ref{main} does not hold, because there are almost regular bipartite tournaments which do not even contain a single Hamilton cycle. (Consider for instance the following `blow-up' of a 4-cycle: the vertices are split into 4 parts $A_0,\dots,A_3$ whose sizes are almost but not exactly equal, and we have all edges from $A_i$ to $A_{i+1}$, with indices modulo 4.) Kelly's conjecture has been generalized in several directions. For instance, given an oriented graph $G$, define its \emph{excess} by $$ {\rm ex}(G):=\sum_{v \in V(G)} \max \{ d^+(v)-d^-(v),0 \}, $$ where $d^+(v)$ denotes the number of outneighbours of the vertex $v$, and $d^-(v)$ the number of its inneighbours. Pullman (see e.g.~Conjecture~8.25 in~\cite{bondy}) conjectured that if $G$ is an oriented graph such that $d^+(v)+d^-(v)=d$ for all vertices $v$ of $G$, where $d$ is odd, then $G$ has a decomposition into ${\rm ex}(G)$ directed paths. To see that this would imply Kelly's conjecture, let $G$ be the oriented graph obtained from a regular tournament by deleting a vertex. Another generalization was made by Bang-Jensen and Yeo~\cite{bangyeo}, who conjectured that every $k$-edge-connected tournament has a decomposition into $k$ spanning strong digraphs. In~\cite{tom1}, Thomassen also formulated the following weakening of Kelly's conjecture. \begin{conj}[Thomassen] \label{thomconj} If $G$ is a regular tournament on $2k+1$ vertices and $A$ is any set of at most $k-1$ edges of $G$, then $G-A$ has a Hamilton cycle. \end{conj} In~\cite{chvatal}, we proved a result on the existence of Hamilton cycles in `robust expander digraphs' which implies Conjecture~\ref{thomconj} for large tournaments (see~\cite{chvatal} for details). \cite{tom1} also contains the related conjecture that for any $\ell \ge 2$, there is an $f(\ell)$ so that every strongly $f(\ell)$-connected tournament contains $\ell$ edge-disjoint Hamilton cycles. 
Further support for Kelly's conjecture was also provided by Thomassen~\cite{thom2}, who showed that the edges of every regular tournament on $n$ vertices can be covered by $12n$ Hamilton cycles. In~\cite{cyclesurvey} the first two authors observed that one can use Theorem~\ref{main} to reduce this to $(1/2+o(1))n$ Hamilton cycles. A discussion of further recent results about Hamilton cycles in directed graphs can be found in the survey~\cite{cyclesurvey}. It seems likely that the techniques developed in this paper will also be useful in solving further problems. In fact, Christofides, K\"uhn and Osthus~\cite{cko} used similar ideas to prove approximate versions of the following two long-standing conjectures of Nash-Williams~\cite{nash1, nash2}:% \begin{conj}[Nash-Williams~\cite{nash1}] Let $G$ be a $2d$-regular graph on at most $4d+1$ vertices, where $d \ge 1$. Then $G$ has a Hamilton decomposition. \end{conj} \begin{conj}[Nash-Williams~\cite{nash2}] \label{nwconj2} Let $G$ be a graph on $n$ vertices with minimum degree at least $n/2$. Then $G$ contains $n/8+o(n)$ edge-disjoint Hamilton cycles. \end{conj} (Actually, Nash-Williams initially formulated Conjecture~\ref{nwconj2} with the term $n/8$ replaced by $n/4$, but Babai found a counterexample to this.) Another related problem was raised by Erd\H{o}s (see~\cite{tom1}), who asked whether almost all tournaments $G$ have at least $\delta^0(G)$ edge-disjoint Hamilton cycles. Note that an affirmative answer would not directly imply that Kelly's conjecture holds for almost all regular tournaments, which would of course be an interesting result in itself. There are also a number of corresponding questions for random undirected graphs (see e.g.~\cite{fk}). After giving an outline of the argument in the next section, we will state a directed version of the Regularity lemma and some related results in Section~\ref{3}. Section~\ref{4} contains statements and proofs of several auxiliary results, mostly on (almost) $1$-factors in (almost) regular oriented graphs. The proof of Theorem~\ref{main} is given in Section~\ref{5}. A generalization of Theorem~\ref{main} to oriented graphs with smaller degrees is discussed in Section~\ref{38}. \section{Sketch of the proof of Theorem~\ref{main}} \label{sketch} Suppose we are given a regular tournament $G$ on $n$ vertices and our aim is to `almost' decompose it into Hamilton cycles. One possible approach might be the following: first remove a spanning regular oriented subgraph $H$ whose degree $\gamma n$ satisfies $\gamma \ll 1$. Let $G'$ be the remaining oriented subgraph of $G$. Now consider a decomposition of $G'$ into $1$-factors $F_1,\dots,F_r$ (which clearly exists). Next, try to transform each $F_i$ into a Hamilton cycle by removing some of its edges and adding some suitable edges of $H$. This is of course impossible if many of the $F_i$ consist of many cycles. However, an auxiliary result of Frieze and Krivelevich in~\cite{fk} implies that we can `almost' decompose $G'$ so that each $1$-factor $F_i$ consists of only a few cycles. If $H$ were a `quasi-random' oriented graph, then (as in~\cite{fk}) one could use it to successively `merge' the cycles of each $F_i$ into Hamilton cycles using a `rotation-extension' argument: delete an edge of a cycle $C$ of $F_i$ to obtain a path $P$ from $a$ to $b$, say. If there is an edge of $H$ from $b$ to another cycle $C'$ of $F_i$, then extend $P$ to include the vertices of $C'$ (and similarly for $a$). Continue until there is no such edge. 
Then (in $H$) the current endvertices of the path $P$ have many neighbours on $P$. One can use this together with the quasi-randomness of $H$ to transform $P$ into a cycle with the same vertices as $P$. Now repeat this, until we have merged all the cycles into a single (Hamilton) cycle. Of course, one has to be careful to maintain the quasi-randomness of $H$ in carrying out this `rotation-extension' process for the successive $F_i$ (the fact that $F_i$ contains only few cycles is important for this). The main problem is that $G$ need not contain such a spanning `quasi-random' subgraph $H$. So instead, in Section~\ref{applyDRL} we use Szemer\'edi's regularity lemma to decompose $G$ into quasi-random subgraphs. We then choose both our $1$-factors $F_i$ and the graph $H$ according to the structure of this decomposition. More precisely, we apply a directed version of Szemer\'edi's regularity lemma to obtain a partition of the vertices of $G$ into a bounded number of clusters $V_i$ so that almost all of the bipartite subgraphs spanned by ordered pairs of clusters are quasi-random (see Section~\ref{3.3} for the precise statement). This then yields a reduced digraph $R$, whose vertices correspond to the clusters, with an edge from one cluster $U$ to another cluster $W$ if the edges from $U$ to $W$ in $G$ form a quasi-random graph. (Note that $R$ need not be oriented.) We view $R$ as a weighted digraph whose edge weights are the densities of the corresponding ordered pair of clusters. We then obtain an unweighted multidigraph $R_m$ from $R$ as follows: given an edge $e$ of $R$ joining a cluster $U$ to $W$, replace it with $K=K(e)$ copies of $e$, where $K$ is approximately proportional to the density of the ordered pair $(U,W)$. It is not hard to show that $R_m$ is approximately regular (see Lemma~\ref{multimin}). If $R_m$ were regular, then it would have a decomposition into $1$-factors, but this assumption may not be true. However, we can show that $R_m$ can `almost' be decomposed into `almost' $1$-factors. In other words, there exist edge-disjoint collections $\mathcal F_1, \dots , \mathcal F_r$ of vertex-disjoint cycles in $R_m$ such that each $\mathcal F_i$ covers almost all of the clusters in $R_m$ (see Lemma~\ref{multifactor1}). Now we choose edge-disjoint oriented spanning subgraphs $C_1,\dots,C_r$ of $G$ so that each $C_i$ corresponds to $\mathcal F_i$. For this, consider an edge $e$ of $R$ from $U$ to $W$ and suppose for example that $\mathcal F_1$, $\mathcal F_2$ and $\mathcal F_8$ are the only $\mathcal F_i$ containing copies of $e$ in $R_m$. Then for each edge of $G$ from $U$ to $W$ in turn, we assign it to one of $C_1$, $C_2$ and $C_8$ with equal probability. Then with high probability, each $C_i$ consists of bipartite quasi-random oriented graphs which together form a disjoint union of `blown-up' cycles. Moreover, we can arrange that all the vertices have degree close to $\beta m$ (here $m$ is the cluster size and $\beta$ a small parameter which does not depend on $i$). We now remove a small proportion of the edges from $G$ (and thus from each $C_i$) to form oriented subgraphs $H_1^+,H_1^-,H_2,H_{3,i},H_4,H_{5,i}$ of $G$, where $1 \le i \le r$. Ideally, we would like to show that each $C_i$ can almost be decomposed into Hamilton cycles. Since the $C_i$ are edge-disjoint, this would yield the required result. 
One obvious obstacle is that the $C_i$ need not be spanning subgraphs of $G$ (because of the exceptional set $V_0$ returned by the regularity lemma and because the $\mathcal F_i$ are not spanning.) So in Section~\ref{sec:incorp} we add suitable edges between $C_i$ and the leftover vertices to form edge-disjoint oriented spanning subgraphs $G_i$ of $G$ where every vertex has degree close to $\beta m$. (The edges of $H_1^-$ and $H_1^+$ are used in this step.) But the distribution of the edges added in this step may be somewhat `unbalanced', with some vertices of $C_i$ sending out or receiving too many of them. In fact, as discussed at the beginning of Section~\ref{skel}, we cannot even guarantee that $G_i$ has a single $1$-factor. We overcome this new difficulty by adding carefully chosen further edges (from $H_2$ this time) to each $G_i$ which compensate the above imbalances. Once these edges have been added, in Section~\ref{nicefactor} we can use the max-flow min-cut theorem to almost decompose each $G_i$ into $1$-factors $F_{i,j}$. (This is one of the points where we use the fact that the $C_i$ consist of quasi-random graphs which form a union of blown-up cycles.) Moreover, (i) the number of cycles in each of these $1$-factors is not too large and (ii) most of the cycles inherit the structure of $\mathcal F_i$. More precisely, (ii) means that most vertices $u$ of $C_i$ have the following property: let $U$ be the cluster containing $u$ and let $U^+$ be the successor of $U$ in $\mathcal F_i$. Then the successor $u^+$ of $u$ in $F_{i,j}$ lies in $U^+$. In Section~\ref{4.6} we can use (i) and (ii) to merge the cycles of each $F_{i,j}$ into a $1$-factor $F'_{i,j}$ consisting only of a bounded number of cycles -- for each cycle $\mathcal C$ of $\mathcal F_i$, all the vertices of $G_i$ which lie in clusters of $\mathcal C$ will lie in the same cycle of $F_{i,j}'$. We will apply a rotation-extension argument for this, where the additional edges (i.e.~those not in $F_{i,j}$) come from $H_{3,i}$. Finally, in Section~\ref{merging} we will use the fact that $R_m$ contains many short paths to merge each $F'_{i,j}$ into a single Hamilton cycle. The additional edges will come from $H_4$ and $H_{5,i}$ this time. \section{Notation and the Diregularity lemma} \label{3} \subsection{Notation} Throughout this paper we omit floors and ceilings whenever this does not affect the argument. Given a graph $G$, we denote the degree of a vertex $x \in V(G)$ by $d_G (x)$ and the maximum degree of $G$ by $\Delta (G)$. Given two vertices $x$ and $y$ of a digraph $G$, we write $xy$ for the edge directed from $x$ to $y$. We denote by $N^+ _G (x)$ the set of all outneighbours of~$x$. So $N^+ _G (x)$ consists of all those $y \in V(G)$ for which $xy \in E(G)$. We have an analogous definition for $N^-_G (x).$ Given a multidigraph $G$, we denote by $N^+ _{G} (x)$ the {\emph{multiset}} of vertices where a vertex $y \in V(G)$ appears $k$ times in $N_G ^+ (x)$ if $G$ contains precisely $k$ edges from~$x$ to~$y$. Again, we have an analogous definition for $N^-_G (x)$. We will write $N^+ (x)$ for example, if this is unambiguous. Given a vertex $x$ of a digraph or multidigraph $G$, we write $d^+ _G (x):=|N^+(x)|$ for the outdegree of $x$, $d^- _G(x):=|N^-(x)|$ for its indegree and $d(x):=d^+(x)+d^-(x)$ for its degree. The maximum of the maximum outdegree $\Delta ^+ (G)$ and the maximum indegree $\Delta ^- (G) $ is denoted by $\Delta ^0 (G)$. 
The \emph{minimum semidegree} $\delta^0 (G)$ of $G$ is the minimum of its minimum outdegree $\delta ^+ (G)$ and its minimum indegree $\delta ^- (G)$. Throughout the paper we will use $d^{\pm} _G (x)$, $\delta ^{\pm} (G)$ and $N^{\pm} _G (x)$ as `shorthand' notation. For example, $\delta ^{\pm} (G) \geq \delta ^{\pm} (H)/2$ is read as $\delta ^+ (G) \geq \delta ^+ (H)/2$ and $\delta ^- (G) \geq \delta ^- (H)/2$. A \emph{1-factor} of a multidigraph~$G$ is a collection of vertex-disjoint cycles in~$G$ which together cover all the vertices of~$G$. Given $A,B \subseteq V(G)$, we write $e_G (A,B)$ to denote the number of edges in $G$ with startpoint in $A$ and endpoint in $B$. Similarly, if $G$ is an undirected graph, we write $e_G (A,B)$ for the number of all edges between $A$ and~$B$. Given a multiset $X$ and a set $Y$ we define $X \cap Y$ to be the multiset where $x$ appears as an element precisely $k$ times in $X \cap Y$ if $x \in X$, $x \in Y$ and $x$ appears precisely $k$ times in $X$. We write $a=b\pm \varepsilon$ for $a\in [b-\varepsilon,b+\varepsilon]$. \subsection{A Chernoff bound} We will often use the following Chernoff bound for binomial and hypergeometric distributions (see e.g.~\cite[Corollary 2.3 and Theorem 2.10]{Janson&Luczak&Rucinski00}). Recall that the binomial random variable with parameters $(n,p)$ is the sum of $n$ independent Bernoulli variables, each taking value $1$ with probability $p$ or $0$ with probability $1-p$. The hypergeometric random variable $X$ with parameters $(n,m,k)$ is defined as follows. We let $N$ be a set of size $n$, fix $S \subseteq N$ of size $|S|=m$, pick a uniformly random $T \subseteq N$ of size $|T|=k$, then define $X=|T \cap S|$. Note that $\mathbb{E}X = km/n$. \begin{prop}\label{chernoff} Suppose $X$ has binomial or hypergeometric distribution and $0<a<3/2$. Then $\mathbb{P}(|X - \mathbb{E}X| \ge a\mathbb{E}X) \le 2 e^{-\frac{a^2}{3}\mathbb{E}X}$. \end{prop} \subsection{The Diregularity lemma} \label{3.3} In the proof of Theorem~\ref{main} we will use the directed version of Szemer\'edi's Regularity lemma. Before we can state it we need some more notation and definitions. The \emph{density} of an undirected bipartite graph $G$ with vertex classes~$A$ and~$B$ is defined to be $$d_G(A,B):=\frac{e_G(A,B)}{|A||B|}.$$ We will write $d(A,B)$ if this is unambiguous. Given any $\varepsilon, \varepsilon '>0$, we say that $G$ is \emph{$[\varepsilon,\varepsilon']$-regular} if for all sets $X \subseteq A$ and $Y \subseteq B$ with $|X|\ge \varepsilon |A|$ and $|Y|\ge \varepsilon |B|$ we have $|d(A,B)-d(X,Y)|< \varepsilon '$. In the case when $\varepsilon =\varepsilon '$ we say that $G$ is \emph{$\varepsilon$-regular}. Given $d \in [0,1)$ we say that $G$ is \emph{$(\varepsilon,d)$-super-regular} if all sets $X \subseteq A$ and $Y \subseteq B$ with $|X|\ge \varepsilon |A|$ and $|Y|\ge \varepsilon |B|$ satisfy $d(X,Y)= d\pm \varepsilon$ and, furthermore, if $ d_G (a)= (d\pm \varepsilon)|B|$ for all $a \in A$ and $ d_G (b)=(d\pm \varepsilon )|A|$ for all $b \in B$. Note that this is a slight variation of the standard definition. Given disjoint vertex sets~$A$ and~$B$ in a digraph~$G$, we write $(A,B)_G$ for the oriented bipartite subgraph of~$G$ whose vertex classes are~$A$ and~$B$ and whose edges are all the edges from~$A$ to~$B$ in~$G$. We say $(A,B)_G$ is \emph{$[\varepsilon , \varepsilon ']$-regular and has density~$d'$} if this holds for the underlying undirected bipartite graph of $(A,B)_G$. 
(Note that the ordering of the pair $(A,B)_G$ is important here.) In the case when $\varepsilon = \varepsilon '$ we say that \emph{$(A,B)_G$ is $\varepsilon$-regular and has density $d'$}. Similarly, given $d \in [0,1)$ we say $(A,B)_G$ is \emph{$(\varepsilon ,d) $-super-regular} if this holds for the underlying undirected bipartite graph. The Diregularity lemma is a variant of the Regularity lemma for digraphs due to Alon and Shapira~\cite{alon}. Its proof is similar to the undirected version. We will use the degree form of the Diregularity lemma which can be derived from the standard version in the same manner as the undirected degree form (see~\cite{survey} for a sketch of the latter). \begin{lemma}[Degree form of the Diregularity lemma]\label{dilemma} For every $\varepsilon\in (0,1)$ and every integer~$M'$ there are integers~$M$ and~$n_0$ such that if~$G$ is a digraph on $n\ge n_0$ vertices and $d\in[0,1]$ is any real number, then there is a partition of the vertex set of~$G$ into $V_0,V_1,\ldots,V_L$ and a spanning subdigraph~$G'$ of~$G$ such that the following holds: \begin{itemize} \item $M'\le L\leq M$, \item $|V_0|\leq \varepsilon n$, \item $|V_1|=\dots=|V_L|=:m$, \item $d^\pm_{G'}(x)>d^\pm_G(x)-(d+\varepsilon)n$ for all vertices $x\in V(G)$, \item for all $i=1,\dots,L$ the digraph $G'[V_i]$ is empty, \item for all $1\leq i,j\leq L$ with $i\neq j$ the pair $(V_i,V_j)_{G'}$ is $\varepsilon$-regular and has density either~$0$ or at least~$d$. \end{itemize} \end{lemma} We call $V_1, \dots, V_L$ \emph{clusters}, $V_0$ the \emph{exceptional set} and the vertices in~$V_0$ \emph{exceptional vertices}. We refer to~$G'$ as the \emph{pure digraph}. The last condition of the lemma says that all pairs of clusters are $\varepsilon$-regular in both directions (but possibly with different densities). The {\it reduced digraph~$R$ of~$G$ with parameters $\varepsilon$, $d$ and~$M'$} is the digraph whose vertices are $V_1, \dots , V_L$ and in which $V_i V_j$ is an edge precisely when $(V_i,V_j)_{G'}$ is $\varepsilon$-regular and has density at least~$d$. The next result shows that we can partition the set of edges of an $\varepsilon$-(super)-regular pair into edge-disjoint subgraphs such that each of them is still (super)-regular. \begin{lemma}\label{split} Let $0< \varepsilon \ll d_0 \ll 1$ and suppose $K\ge 1$. Then there exists an integer $m_0=m_0 (\varepsilon,d_0, K)$ such that for all $d\ge d_0$ the following holds. \begin{itemize} \item[{\rm (i)}] Suppose that $G=(A,B)$ is an $\varepsilon$-regular pair of density $d$ where $|A|=|B|=m\geq m_0$. Then there are $\lfloor K\rfloor$ edge-disjoint spanning subgraphs $S_1, \dots, S_{\lfloor K\rfloor}$ of $G$ such that each $S_i$ is $[\varepsilon, 4\varepsilon /K]$-regular of density $(d\pm 2\varepsilon)/K$. \item[{\rm (ii)}] If $K=2$ and $G=(A,B)$ is $(\varepsilon,d)$-super-regular with $|A|=|B|=m\geq m_0$. then there are two edge-disjoint spanning subgraphs $S_1$ and $S_2$ of $G$ such that each $S_i$ is $(2\varepsilon,d/2)$-super-regular. \end{itemize} \end{lemma} \removelastskip\penalty55\medskip\noindent{\bf Proof. } We first prove~(i). Suppose we have chosen $m_0$ sufficiently large. Initially set $E(S_i)= \emptyset$ for each $i=1, \dots , \lfloor K \rfloor$. We consider each edge of $G$ in turn and add it to each $E(S_i)$ with probability $1/K$, independently of all other edges of~$G$. So the probability that $xy$ is added to none of the $S_i$ is $1-\lfloor K\rfloor/K$. Moreover, $\mathbb E (e(S_i)) = e(G)/K=d m^2/K$. 
Given $X \subseteq A$ and $Y \subseteq B$ with $|X|,|Y|\geq \varepsilon m$ we have that $|d_G(X,Y)-d|<\varepsilon$. Thus $$ \frac{1}{K}(d- \varepsilon)|X||Y|<\mathbb E(e_{S_i} (X,Y)) < \frac{1}{K}(d+ \varepsilon)|X||Y|$$ for each $i$. Proposition~\ref{chernoff} for the binomial distribution implies that with high probability $(d - 2\varepsilon)|X||Y|/K < e_{S_i} (X,Y)<(d + 2\varepsilon)|X||Y| /K$ for each $i \leq \lfloor K \rfloor$ and every $X \subseteq A$ and $Y \subseteq B$ with $|X|,|Y|\geq \varepsilon m$. Such~$S_i$ are as required in~(i).\COMMENT{We are looking at roughly $(2^m)^2=4^m$ pairs of sets $X,Y$ here (for each $i=1, \dots , \lfloor K \rfloor $). So in total using Chernoff $\leq K 4^m $ times. For each $Z:=e_{S_i} (X,Y)$, we have that $\mathbb P (|Z-\mathbb E Z|\geq \varepsilon \mathbb E Z)\leq 2e^{-\frac{\varepsilon ^2}{3} \mathbb E Z} \leq 2 e^{-cm^2}$ for some $c>0$ (since $\mathbb E Z\geq (d-\varepsilon)\varepsilon ^2 m^2/K$). Now $2K4^m e^{-cm^2} \ll 1$ as $m$ sufficiently large, as desired.} The proof of~(ii) is similar. Indeed, as in (i) one can show that with high probability any $X\subseteq A$ and $Y \subseteq B$ with $|X|,|Y|\geq \varepsilon m$ satisfy $d_{S_i} (X,Y)=d/2 \pm 2 \varepsilon$ (for $i=1,2$). Moreover, each vertex $a\in A$ satisfies $\mathbb E(d_{S_i}(a))=d_G(a)/2=(d\pm \varepsilon)m/2$ (for $i=1,2$) and similarly for the vertices in~$B$. So again Proposition~\ref{chernoff} for the binomial distribution implies that with high probability $d_{S_i}(a)=(d/2\pm 2\varepsilon)m$ for all $a\in A$ and $d_{S_i}(b)=(d/2\pm 2\varepsilon)m$ for all $b\in B$. Altogether this shows that with high probability both $S_1$ and $S_2$ are $(2\varepsilon,d/2)$-super-regular.\COMMENT{Same as in case (i). We have to consider the $2m$ degrees as well. Now $\mathbb P (|d_{S_i} (x)-\mathbb E d_{S_i} (x)|\geq \varepsilon\mathbb E d_{S_i} (x) ) \leq 2 e ^{-c'm} $ for some $c'>0$. But as $4^{m+1} e^{-cm^2} +4m e^{-c'm} \ll 1$ we are fine.} \noproof\bigskip Suppose $0<1/M'\ll \varepsilon \ll \beta \ll d \ll 1$ and let $G$ be a digraph. Let $R$ and $G'$ denote the reduced digraph and pure digraph respectively, obtained by applying Lemma~\ref{dilemma} to $G$ with parameters $\varepsilon, d$ and $M'$. For each edge $V_iV_j$ of~$R$ we write $d_{i,j}$ for the density of $(V_i, V_j)_{G'}$. (So $d_{i,j}\geq d$.) The \emph{reduced multidigraph} $R_m$ of $G$ with parameters $\varepsilon, \beta, d$ and $M'$ is obtained from $R$ by setting $V(R_m):=V(R)$ and adding $\lfloor d_{i,j} /\beta \rfloor$ directed edges from $V_i$ to $V_j$ whenever $V_iV_j\in E(R)$. We will always consider the reduced multidigraph $R_m$ of a digraph $G$ whose order is sufficiently large in order to apply Lemma~\ref{split} to any pair $(V_i, V_j)_{G'}$ of clusters with $V_iV_j\in E(R)$. Let $K:= d_{i,j}/\beta$ and $S_{i,j,1}, \dots, S_{i,j,\lfloor K\rfloor}$ be the spanning subgraphs of $(V_i,V_j)_{G'}$ obtained from Lemma~\ref{split}. (So each $S_{i,j,k}$ is $\varepsilon$-regular of density $\beta\pm \varepsilon$.) Let $(V_iV_j)_1, \dots , (V_iV_j)_{\lfloor K\rfloor}$ denote the directed edges from $V_i$ to $V_j$ in $R_m$. We associate each $(V_iV_j)_k$ with the edges in $S_{i,j,k}$. \begin{lemma}\label{multimin} Let $0 < 1/M' \ll \varepsilon \ll \beta \ll d \ll c_1 \leq c_2 <1$ and let $G$ be a digraph of sufficiently large order $n$ with $\delta ^0 (G) \geq c_1n$ and $\Delta ^0 (G) \leq c_2n$. 
Apply Lemma~\ref{dilemma} with parameters $\varepsilon, d$ and $M'$ to obtain a pure digraph $G'$ and a reduced digraph $R$ of $G$. Let $R_m$ denote the reduced multidigraph of $G$ with parameters $\varepsilon, \beta, d$ and $M'$. Then $$\delta ^0 (R_m) > (c_1-3d)\frac{|R_m|}{\beta} \text{ and } \Delta ^0 (R_m)< (c_2 +2 \varepsilon)\frac{|R_m|}{\beta}.$$ \end{lemma} \noindent Note the corresponding upper bound would not hold if we considered $R$ instead of $R_m$ here. \removelastskip\penalty55\medskip\noindent{\bf Proof. } Given any $V_i , V_j \in V(R)$, let $d_{i,j}$ denote the density of $(V_i,V_j)_{G'}$. Then \begin{align}\label{first} (c_1 -2d)|R| \leq \frac{(c_1-2d)nm}{m^2} \leq \frac{\sum_{v\in V_i} \left( d^+_{G'}(v)-|V_0|\right)}{m^2}\le \sum _{V_j \in V(R)} d_{i,j} \end{align} by Lemma~\ref{dilemma}. Thus \begin{align*} d^+ _{R_m} (V_i) & = \sum _{V_j \in V(R_m)} \left\lfloor \frac{d_{i,j}}{\beta} \right\rfloor \geq \frac{1}{\beta} \sum _{V_j \in V(R)} d_{i,j} -|R_m| \stackrel{(\ref{first})}{\ge} (c_1-2d -\beta)\frac{|R_m|}{\beta}\\ & >(c_1-3d)\frac{|R_m|}{\beta}. \end{align*} So indeed $\delta ^+ (R_m) >(c_1-3d)|R_m|/\beta$. Similar arguments can be used to show that $\delta ^- (R_m) >(c_1-3d)|R_m|/\beta$ and $\Delta ^0 (R_m)< (c_2 +2 \varepsilon)|R_m|/\beta$.% \COMMENT{Indeed, given any $V_i \in V(R)$, $ c_2 nm\ge \sum_{v\in V_i}d^+_{G'}(v) \ge m^2\sum _{V_j \in V(R)} d_{i,j}$. So $\sum _{V_j \in V(R)} d_{i,j}< (c_2 +2 \varepsilon )|R|$. Thus $d^+ _{R_m} (V_i)= \sum _{V_j \in V(R_m)} \lfloor \frac{d_{i,j}}{\beta} \rfloor \leq \frac{1}{\beta} \sum _{V_j \in V(R)} d_{i,j}< (c_2 +2 \varepsilon)|R_m|/\beta$.} \noproof\bigskip We will also need the well-known fact that for any cycle $C$ of the reduced multigraph $R_m$ we can delete a small number of vertices from the clusters in~$C$ in order to ensure that each edge of $C$ corresponds to a super-regular pair. We include a proof for completeness. \begin{lemma}\label{superreg} Let $C=V_{j_1}\dots V_{j_s}$ be a cycle in the reduced multigraph~$R_m$ as in Lemma~\ref{multimin}. For each $t=1,\dots,s$ let $(V_{j_t}V_{j_{t+1}})_{k_t}$ denote the edge of $C$ which joins $V_{j_t}$ to~$V_{j_{t+1}}$ (where $V_{j_{s+1}}:=V_{j_1}$). Then we can choose subclusters $V'_{j_t}\subseteq V_{j_t}$ of size $m':=(1-4\varepsilon)m$ such that $(V'_{j_t},V'_{j_{t+1}})_{S_{j_t,j_{t+1},k_t}}$ is $(10\varepsilon,\beta)$-super-regular (for each $t=1,\dots,s$). \end{lemma} \removelastskip\penalty55\medskip\noindent{\bf Proof. } Recall that for each $t=1,\dots, s$ the digraph $S_{j_t,j_{t+1},k_t}$ corresponding to the edge $(V_{j_t}V_{j_{t+1}})_{k_t}$ of $C$ is $\varepsilon$-regular and has density $\beta\pm \varepsilon$. So $V_{j_t}$ contains at most $2\varepsilon m$ vertices whose outdegree in $S_{j_t,j_{t+1},k_t}$ is either at most $(\beta-2\varepsilon)m$ or at least $(\beta+2\varepsilon)m$. Similarly, there are at most $2\varepsilon m$ vertices in $V_{j_t}$ whose indegree in $S_{j_{t-1},j_{t},k_{t-1}}$ is either at most $(\beta-2\varepsilon)m$ or at least $(\beta+2\varepsilon)m$. Let $V'_{j_t}$ be a set of size $m'$ obtained from $V_{j_t}$ by deleting all these vertices (and some additional vertices if necessary). It is easy to check that $V'_{j_1},\dots,V'_{j_t}$ are subclusters as required. \noproof\bigskip Finally, we will use the following crude version of the fact that every $[\varepsilon,\varepsilon']$-regular pair contains a subgraph of given maximum degree $\Delta$ whose average degree is close to $\Delta$. 
\begin{lemma} \label{boundmax} Suppose that $0<1/n \ll \varepsilon', \varepsilon \ll d_0 \le d_1 \ll 1$ and that $(A,B)$ is an $[\varepsilon,\varepsilon']$-regular pair of density $d_1$ with $n$ vertices in each class. Then $(A,B)$ contains a subgraph $H$ whose maximum degree is at most $d_0n$ and whose average degree is at least $d_0n/8$. \end{lemma} \removelastskip\penalty55\medskip\noindent{\bf Proof. } Let $A'' \subseteq A$ be the set of vertices of degree at least $2d_1n$ and define $B''$ similarly. Then $|A''|,|B''| \le \varepsilon n$. Let $A':=A \setminus A''$ and $B':=B \setminus B''$. Then $(A',B')$ is still $[2\varepsilon,2\varepsilon']$-regular of density at least $d_1/2$. Now consider a spanning subgraph $H$ of $(A',B')$ which is obtained from $(A',B')$ by including each edge with probability $d_0/3d_1$. So the expected degree of every vertex is at most $2d_0n/3$ and the expected number of edges of $H$ is at least $d_0 (n-\varepsilon n)^2/6$. Now apply the Chernoff bound on the binomial distribution in Proposition~\ref{chernoff} to each of the vertex degrees and to the total number of edges in $H$ to see that with high probability $H$ has the desired properties.\COMMENT{If a vertex in $(A',B')$ already has degree $\leq d_0 n$ then don't need to apply Chernoff as they already have desired property. Apply Chernoff for the remaining vertices (i.e. doing this $\leq 2n \ll e^n$ times) and the edge set. Note that we are using $\mathbb P(d_H (x) \geq (1+\varepsilon)2d_0n/3 ) \leq \mathbb P (|d_{H} (x)- \mathbb E(d_H (x))| \geq \varepsilon \mathbb E (d_H (x))) \leq 2 e^{-\frac{\varepsilon^2}{3} d_0 ^2 n/(3d_1)} \leq 2e^ {-c n}$ for some constant $c>0$ (since $\mathbb E (d_H (x)) \geq d_0 ^2 n/(3d_1)$). Further $\mathbb P (e (H) \leq (1-\varepsilon)d_0 (n-\varepsilon n )^2 /6) \leq \mathbb P (e (H) \leq (1-\varepsilon) \mathbb E (e(H)))\leq P (|e(H)- \mathbb E (e(H))| \geq \varepsilon \mathbb E (e(H)) \leq 2e^{-cn^2}$. (Since $\mathbb E (e(H)) \geq d_0 (n-\varepsilon n)^2/6$.)} \noproof\bigskip \section{Useful results} \label{4} \subsection{$1$-factors in multidigraphs} Our main aim in this subsection is to show that the reduced multidigraph $R_m$ contains a collection of `almost' 1-factors which together cover almost all the edges of~$R_m$ (see Lemma~\ref{multifactor1}). To prove this we will need the following result which implies $R_m$ contains many edges between any two sufficiently large sets. The second part of the lemma will be used in Section~\ref{sec:shifted}. \begin{lemma}\label{keevashmult} Let $0 <1/n \ll 1/M'\ll \varepsilon \ll \beta \ll \eta \ll d \ll c,d'\ll 1$. Suppose that $G$ is an oriented graph of order $n$ with $\delta ^0 (G)\geq (1/2-\eta)n$. Let $R$ and $R_m$ denote the reduced digraph and the reduced multidigraph of $G$ obtained by applying Lemma~\ref{dilemma} (with parameters $\varepsilon,d, M'$ and $\varepsilon, \beta ,d, M'$ respectively). Let $L:=|R|=|R_m|$. Then the following properties hold. \begin{itemize} \item[{\rm (i)}] Let $X \subseteq V(R_m)$ be such that $\delta ^0 (R_m [X]) \geq (1/2-c)|X|/\beta$. Then for all (not necessarily disjoint) subsets $A$ and $B$ of $X$ of size at least $(1/2-c)|X|$ there are at least $|X|^2/(60 \beta )$ directed edges from $A$ to $B$ in~$R_m$. \item[{\rm (ii)}] Let $R'$ denote the spanning subdigraph of $R$ obtained by deleting all edges which correspond to pairs of density at most $d'$ (in the pure digraph $G'$). 
Then $\delta^0(R')\ge (1/2-2d')L$ and for all (not necessarily disjoint) subsets $A$ and $B$ of $V(R')$ of size at least $(1/2-c)L$ there are at least $L^2/60$ directed edges from $A$ to $B$ in~$R'$. \end{itemize} \end{lemma} \removelastskip\penalty55\medskip\noindent{\bf Proof. } We first prove~(i). Recall that for every edge $V_iV_j$ of $R$ there are precisely $\lfloor d_{i,j} /\beta \rfloor$ edges from $V_i $ to $V_j$ in $R_m$, where $d_{i,j}$ denotes the density of $(V_i, V_j )_{G'}$. But $d_{i,j}+d_{j,i} \leq 1 $ since $G$ is oriented and so $R_m$ contains at most $1/\beta$ edges between $V_i$ and $V_j$ (here we count the edges in both directions). By deleting vertices from $A$ and $B$ if necessary we may assume that $|A|=|B|=(1/2-c)|X|$. We will distinguish two cases. Suppose first that $|A\cap B|>|X|/5$ and let $Y:=A\cap B$. Define $\overline{Y}:=X \backslash Y$ and $\overline{A \cup B}:=X\backslash (A\cup B)$. Then \begin{eqnarray*} 2e(A,B)& \ge & 2e(Y)=\sum_{V\in Y} d_{R_m [X]}(V)-e(Y,\overline{Y})-e(\overline{Y},Y)\\ & {\ge} & |Y|(1-2c)|X|/\beta -|Y|(|X|-|Y|)/\beta =|Y|(|Y|-2c|X|)/\beta\ge |X|^2/(30\beta). \end{eqnarray*} So suppose next that $|A\cap B|\le |X|/5$. Then $|\overline{A\cup B}|\le |X|-|A|-|B|+|A\cap B|\le (1/5+2c)|X|$. Therefore, \begin{eqnarray*} e(A,B) & \ge & \sum_{V\in A} d^+_{R_m [X]}(V)-e(A,\overline{A\cup B})-e(A)\\ & {\ge} & |A|(1/2-c)|X|/\beta-|A||\overline{A\cup B}|/\beta-|A|^2/(2\beta)\\ & \ge & |A|[(1/2-c)-(1/5+2c)-(1/2-c)/2]|X|/\beta\ge |X|^2/(60\beta), \end{eqnarray*} as required. To prove~(ii) we consider the weighted digraph $R'_w$ obtained from $R'$ by giving each edge $V_iV_j$ of $R'$ weight $d_{i,j}$. Given a cluster $V_i$, we write $w^+(V_i)$ for the sum of the weights of all edges sent out by~$V_i$ in~$R'_w$. We define $w^-(V_i)$ similarly and write $w^0(R'_w)$ for the minimum of $\min\{w^+(V_i),w^-(V_i)\}$ over all clusters~$V_i$. Note that $\delta^0(R')\ge w^0(R'_w)$. Moreover, Lemma~\ref{dilemma} implies that $d^{\pm} _{G'\backslash V_0} (x) > (1/2 -2d)n \text{ \ for all } x \in V(G' \backslash V_0)$. Thus each $V_i \in V(R')$ satisfies $$ (1/2-2d)nm \leq e_{G'}(V_i, V(G')\backslash V_0) \leq m^2 w^+ (V_i) +(d'm^2)L $$ and so $w^+(V_i) \geq (1/2 -2d-d')L>(1/2-2d')L$. Arguing in the same way for inweights gives us $\delta^0(R')\ge w^0 (R'_w) > (1/2-2d')L$. Let $A,B\subseteq V(R')$ be as in~(ii). Similarly as in~(i) (setting $\beta:=1$ and $X:=V(R')$ in the calculations) one can show that the sum of all weights of the edges from~$A$ to $B$ in~$R'_w$ is at least $L^2/60$. But this implies that $R'$ contains at least $L^2/60$ edges from~$A$ to~$B$. \noproof\bigskip \begin{lemma}\label{multifactor1} Let $0 <1/n \ll 1/M'\ll \varepsilon \ll \beta \ll \eta \ll d \ll c \ll 1$. Suppose that $G$ is an oriented graph of order $n$ with $\delta ^0 (G)\geq (1/2-\eta)n$. Let $R_m$ denote the reduced multidigraph of $G$ with parameters $\varepsilon, \beta ,d$ and $M'$ obtained by applying Lemma~\ref{dilemma}. Let $r:= (1/2-c) |R_m|/\beta$. Then there exist edge-disjoint collections $\mathcal F_1, \dots , \mathcal F_r$ of vertex-disjoint cycles in $R_m$ such that each $\mathcal F_i$ covers all but at most $c|R_m|$ of the clusters in $R_m$. \end{lemma} \removelastskip\penalty55\medskip\noindent{\bf Proof. } Let $L:=|R_m|$. Since $\Delta ^0 (G) \leq n-\delta ^0 (G) \leq (1/2+\eta )n$, Lemma~\ref{multimin} implies that \begin{align}\label{multideg} \delta ^0 (R_m) \geq (1/2-4d)\frac{L}{\beta} \text{ \ \ and \ \ }\Delta ^0 (R_m) \leq (1/2+2\eta)\frac{L}{\beta}. 
\end{align} First we find a set of clusters $X \subseteq V(R)$ with the following properties: \begin{itemize} \item $|X|= cL$, \item $|N^{\pm} _{R_m} (V_i) \cap X| = (1/2\pm 5d)\frac{cL}{\beta}$ for all $V_i \in V(R_m)$. \end{itemize} We obtain $X$ by choosing a set of $cL$ clusters uniformly at random. Then each cluster $V_i$ satisfies $$ \mathbb E (|N^{\pm} _{R_m} (V_i) \cap X|) = c |N^{\pm} _{R_m} (V_i)| \stackrel{(\ref{multideg})}{=} c(1/2\pm 4 d)\frac{L}{\beta}.$$ Proposition~\ref{chernoff} for the hypergeometric distribution now implies that with nonzero probability $X$ satisfies our desired conditions. (Recall that $N^{+} _{R_m} (V_i)$ is a multiset. Formally Proposition~\ref{chernoff} does not apply to multisets. However, for each $j=1, \dots , 1/\beta$ we can apply Proposition~\ref{chernoff} to the set of all those clusters which appear at least $j$ times in $N^+ _{R_m} (V_i)$, and similarly for $N^- _{R_m}(V_i)$.)\COMMENT{We have $2L$ sets $N^{\pm} _{R_m} (V_i)$. For each set we apply Chernoff at most $1/ \beta $ times. So we use Chernoff $\leq 2L/\beta \ll e^L$ times. Let $N^+ _j (V_i)$ denote the set of all clusters that appear $\geq j$ times in $N^+ _{R_m} (V_i)$ (similarly define $N^- _j (V_i)$). If $|N^{\pm} _j (V_i)|\leq \varepsilon cL$ don't need to apply Chernoff. Otherwise set $X^+ _{ij} :=X\cap N^+ _j (V_i)$. Then $\mathbb P (X^+ _{ij} \geq (1+\varepsilon)c |N^+ _j (V_i)| \text{ or } X^+ _{ij} \leq (1-\varepsilon)c |N^+ _j (V_i)|) \leq \mathbb P(|X^+ _{ij} -\mathbb E (X^+ _{ij})| \geq \varepsilon \mathbb E(X^+ _{ij}) )\leq 2e^{-\frac{\varepsilon ^3}{3} c^2 L}$. So whp $|N^+ _{R_m} (V_i)\cap X| =(1 \pm \varepsilon )c |N^+ _{R_m} (V_i)| \pm \varepsilon c L / \beta =(1/2\pm 5d)\frac{cL}{\beta}$ as desired.} Note that $$ d^{\pm} _{R_m \backslash X} (V_i) = \left( \frac{1}{2}- \frac{c}{2}\pm 5d\right)\frac{L}{\beta}$$ for each $V_i \in V(R_m\backslash X)$. We now add a small number of \emph{temporary edges} to $R_m \backslash X$ in order to turn it into an $r'$-regular multidigraph where $r':=( \frac{1}{2}- \frac{c}{2}+5d)\frac{L}{\beta}$. We do this as follows. As long as $R_m \backslash X$ is not $r'$-regular there exist $V_i , V_j \in V(R_m \backslash X)$ such that $V_i$ has outdegree less than $r'$ and $V_j$ has indegree less than $r'$. In this case we add an edge from $V_i$ to $V_j$. (Note we may have $i=j$, in which case we add a loop.) We decompose the edge set of $R_m \backslash X $ into $r'$ 1-factors $\mathcal F'_1, \dots, \mathcal F'_{r'}$. (To see that we can do this, consider the bipartite multigraph $H$ where both vertex classes $A,B$ consist of a copy of $V(R_m \backslash X)$ and we have $s$ edges between $a \in A$ and $b \in B$ if there are precisely $s$ edges from $a$ to~$b$ in $R_m \backslash X$, including the temporary edges. Then $H$ is regular and so has a perfect matching.% \COMMENT{We need Hall for bipartite multigraphs here. But the proof of Hall still works.} This corresponds to a $1$-factor $\mathcal F'_1$. Now remove the edges of $\mathcal F'_1$ from~$H$ and continue to find $\mathcal F'_2,\dots,\mathcal F'_{r'}$ in the same way.) Since at each cluster we added at most $20d \frac{L}{\beta}$ temporary edges, all but at most $20 \sqrt{d}\frac{L}{\beta}$ of the $\mathcal F'_i$ contain at most $\sqrt{d} L$ temporary edges. By relabeling if necessary we may assume that $\mathcal F'_1, \dots , \mathcal F'_{r}$ are such $1$-factors. 
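As an illustrative aside (not part of the proof), the decomposition into $1$-factors described in the parenthetical remark above is entirely constructive: one repeatedly extracts perfect matchings via augmenting paths, and since removing a perfect matching from a regular bipartite multigraph leaves it regular, Hall's condition guarantees that the next matching always exists. The following Python sketch shows this on a toy multiplicity matrix (not on the actual multidigraph $R_m\setminus X$).
\begin{verbatim}
# Illustrative sketch only: decompose a regular bipartite multigraph into
# perfect matchings, mirroring the 1-factorisation step described above.
# mult[a][b] = number of edges from vertex a of class A to vertex b of class B.

def find_perfect_matching(mult):
    """Kuhn's augmenting-path algorithm on the support of mult."""
    n = len(mult)
    match_of_b = [-1] * n                      # match_of_b[b] = partner of b in A

    def try_augment(a, seen):
        for b in range(n):
            if mult[a][b] > 0 and not seen[b]:
                seen[b] = True
                if match_of_b[b] == -1 or try_augment(match_of_b[b], seen):
                    match_of_b[b] = a
                    return True
        return False

    for a in range(n):
        if not try_augment(a, [False] * n):
            return None                        # cannot happen if mult is regular
    return [(match_of_b[b], b) for b in range(n)]

def one_factorise(mult):
    """Strip off perfect matchings until no edges remain."""
    factors = []
    while any(any(row) for row in mult):
        matching = find_perfect_matching(mult)
        assert matching is not None            # Hall's condition for regular multigraphs
        for a, b in matching:
            mult[a][b] -= 1                    # remove the matching; regularity is preserved
        factors.append(matching)
    return factors

# Toy 2-regular bipartite multigraph on 3+3 vertices (with a double edge).
example = [[2, 0, 0],
           [0, 1, 1],
           [0, 1, 1]]
print(one_factorise(example))                  # two perfect matchings, i.e. two 1-factors
\end{verbatim}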
We now remove the temporary edges from each of these $1$-factors, though we still refer to the digraphs obtained in this way as $\mathcal F'_1, \dots , \mathcal F'_{r}$. So each $\mathcal F'_i$ spans $R_m \backslash X$ and consists of cycles and at most $\sqrt {d} L$ paths. Our aim is to use the clusters in $X$ to piece up these paths into cycles in order to obtain edge-disjoint directed subgraphs $\mathcal F_1, \dots , \mathcal F_{r}$ of $R_m$ where each $\mathcal F_i$ is a collection of vertex-disjoint cycles and $\mathcal F'_i \subseteq \mathcal F_i$.% \COMMENT{note we could have 2-cycles.} Let $P'_1, \dots, P'_\ell$ denote all the paths lying in one of $\mathcal F'_1, \dots , \mathcal F'_{r}$ (so $\ell \leq \sqrt{d}L r\leq \sqrt {d} L^2/\beta$). Our next task is to find edge-disjoint paths and cycles $P_1, \dots, P_\ell$ of length~$5$ in $R_m$ with the following properties. \begin{itemize} \item[(i)] If $P'_j$ consists of a single cluster $V_{j'} \in V(R)$ then $P_j$ is a cycle consisting of $4$ clusters in $X$ as well as $V_{j'}$. \item[(ii)] If $P'_j$ is a path of length $\geq 1$ then $P_j$ is a path whose startpoint is the endpoint of $P'_j$. Similarly the endpoint of $P_j$ is the startpoint of $P'_j$. \item[(iii)] If $P'_j$ is a path of length $\ge 1$ then the internal clusters in the path $P_j$ lie in~$X$. \item[(iv)] If $P'_{j_1}$ and $P' _{j_2}$ lie in the same $\mathcal F'_i$ then $P_{j_1}$ and $P _{j_2}$ are vertex-disjoint. \end{itemize} So conditions (i)--(iii) imply that $P'_j \cup P_j$ is a directed cycle for each $1 \leq j \leq \ell$. Assuming we have found such paths and cycles $P_1, \dots, P_\ell$, we define $\mathcal F_1 , \dots , \mathcal F_{r}$ as follows. Suppose $P'_{j_1}, \dots ,P'_{j_t}$ are the paths in $\mathcal F'_i$. Then we obtain $\mathcal F_i$ from $\mathcal F'_i$ by adding the paths and cycles $P_{j_1}, \dots ,P_{j_t}$ to $\mathcal F'_i$. Condition~(iv) ensures that the $\mathcal F_i$ are indeed collections of vertex-disjoint cycles. It remains to show the existence of $P_1, \dots, P_\ell$. Suppose that for some $j\le \ell$ we have already found $P_1, \dots, P_{j-1}$ and now need to define $P_j$. Consider $P'_j$ and suppose it lies in $\mathcal F'_i$. Let $V_{a}$ denote the startpoint of $P'_j$ and $V_{b}$ its endpoint. We call an edge $(V_{i_1}V_{i_2})_k$ in $R_m$ {\emph{free}} if it has not been used in one of $P_1, \dots ,P_{j-1}$. Let $B$ be the set of all those clusters $V\in X$ for which at least $c|X|/\beta$ of the edges at $V$ in $R_m [X]$ are not free. Our next aim is to show that $B$ is small. More precisely, $$|B|\leq d^{1/4} L.$$ To see this, note that $3(j-1)\leq 3\ell\leq 3 \sqrt{d}\frac{L^2}{\beta}$ edges of $R_m[X]$ lie in one of $P_1, \dots , P_{j-1}$. Thus, $2\cdot 3\sqrt{d}\frac{L^2}{\beta} \geq \frac{c|X|}{\beta}|B|=\frac{c^2L|B|}{\beta}$. (The extra factor of~2 comes from the fact that we may have counted edges at the vertices in~$B$ twice.) Since $c\gg d$ this implies that $|B|\leq d^{1/4} L$, as desired. We will only use clusters in $X':=X\backslash B$ when constructing $P_j$. Note that $V_{a}$ receives at most $|B|/\beta \leq d^{1/4}L/\beta$ edges from $B$ in $R_m$. Since we added at most $20dL/\beta $ temporary edges to $R_m \backslash X$ per cluster, $V_{a}$ can be the startpoint or endpoint of at most $20dL/\beta $ of the paths $P'_1, \dots, P'_{j-1}$. Thus $V_{a}$ lies in at most $20dL/\beta$ of the paths and cycles $P_1, \dots , P_{j-1}$. 
In particular, at most $40dL/\beta$ edges at $V_a $ in $R_m$ are not free.% \COMMENT{40 instead of 20 since each $P_s$ containing $V_a$ could be a cycle} We will avoid such edges when constructing $P_j$. For each of $P_1, \dots , P_{j-1}$ we have used $4$ clusters in $X$. Let $P'_{j_1}, \dots , P'_{j_t}$ denote the paths which lie in $\mathcal F'_i$ (so $t \leq \sqrt{d}L$). Thus at most $4\sqrt{d}L$ clusters in $X$ already lie in the paths and cycles $P_{j_1}, \dots ,P_{j_t}$. So for $P_j$ to satisfy (iv), the inneighbour of $V_{a}$ on $P_j$ must not be one of these clusters. Note that $V_{a}$ receives at most $4\sqrt{d}L/\beta$ edges in $R_m$ from these clusters. Thus in total we cannot use $d^{1/4}L/\beta+40dL/\beta +4\sqrt{d}L/\beta \leq 2d^{1/4}L/\beta$ of the edges which $V_{a}$ receives from $X$ in $R_m$. But $|N_ {R_m} ^- (V_{a}) \cap X|\ge (\frac{1}{2}-5d)cL/\beta\gg 2d^{1/4}L/\beta$ and so we can still choose a suitable cluster $V_{a^-}$ in $N_ {R_m} ^- (V_{a}) \cap X$ which will play the role of the inneighbour of $V_{a}$ on $P_j$. Let $(V_{a^-}V_{a})_{k_5}$ denote the corresponding free edge in $R_m$ which we will use in $P_j$. A similar argument shows that we can find a cluster $V_{b^+} \not = V_{a^-}$ to play the role of the outneighbour of $V_{b}$ on $P_j$. So $V_{b^+} \in X'$, $V_{b^+}$ does not lie on any of $P_{j_1}, \dots, P_{j_t}$ and there is a free edge $(V_{b}V_{b^+})_{k_1}$ in $R_m$. We need to choose the outneighbour $V_{b^{++}}$ of $V_{b^+}$ on $P_j$ such that $V_{b^{++}}\in X'\setminus \{V_{a^-}\}$, $V_{b^{++}}$ has not been used in $P_{j_1},\dots,P_{j_t}$ and there is a free edge from $V_{b^+}$ to $V_{b^{++}}$ in $R_m$. Let $A_1$ denote the set of all clusters in $X'$ which satisfy these conditions. Since $V_{b^+}\in X'$ at most $c|X|/\beta $ edges at $V_{b^+}$ in $R_m [X]$ are not free. So $V_{b^+}$ sends out at least $(1/2-5d)\frac{|X|}{\beta}-c\frac{|X|}{\beta}-\frac{|B\cup \{V_{a^-}\}|}{\beta}\geq (1/2-2c)\frac{|X|}{\beta}$ free edges to $X'\setminus \{V_{a^-}\}$ in $R_m$. On the other hand, as before one can show that $V_{b^+}$ sends at most $4\sqrt{d}L/\beta$ edges to clusters in $X'$ which already lie in $P_{j_1}, \dots, P_{j_t}$. Hence, $|A_1|\geq \beta[(1/2-2c)|X|/\beta- 4\sqrt{d}L/\beta]\geq (1/2-3c)|X|$. Similarly we need to choose the inneighbour $V_{a^{--}}$ of $V_{a^-}$ on $P_j$ such that $V_{a^{--}}\in X'\setminus \{V_{b+}\}$, $V_{a^{--}}$ has not been used in $P_{j_1}, \dots, P_{j_t}$ and so that $R_m$ contains a free edge from $V_{a^{--}}$ to $V_{a^-}$. Let $A_2$ denote the set of all clusters in $X'$ which satisfy these conditions. As before one can show that $|A_2| \geq (1/2-3c)|X|$. Recall that $\delta ^0 (R_m [X])\geq (1/2-5d)|X|/\beta$ by our choice of~$X$. Thus Lemma~\ref{keevashmult}(i) implies that $R_m [X]$ contains at least $|X|^2/(60\beta)= c^2L^2/(60 \beta)$ edges from $A_1$ to $A_2$. Since all but at most $5\ell\le 5\sqrt{d}L^2/\beta$ edges of $R_m$ are free, there is a free edge $(V_{b^{++}}V_{a^{--}})_{k_3}$ from $A_1$ to $A_2$. Let $(V_{b^+}V_{b^{++}})_{k_2}$ be a free edge from $V_{b^+}$ to $V_{b^{++}}$ in $R_m$ and let $(V_{a^{--}}V_{a^{-}})_{k_4}$ be a free edge from $V_{a^{--}}$ to $V_{a^{-}}$ (such edges exist by definition of $A_1$ and $A_2$). We take $P_j$ to be the directed path or cycle which consists of the edges $(V_{b}V_{b^+})_{k_1}$, $(V_{b^{+}}V_{b^{++}})_{k_2}$, $(V_{b^{++}} V_{a^{--}})_{k_3}$, $(V_{a^{--}} V_{a^-})_{k_4}$ and $(V_{a^-}V_a)_{k_5}$. 
\noproof\bigskip \subsection{Spanning subgraphs of super-regular pairs} Frieze and Krivelevich~\cite{fk} showed that every $(\varepsilon,\beta)$-super-regular pair $\Gamma$ contains a regular subgraph $\Gamma'$ whose density is almost the same as that of $\Gamma$. The following lemma is an extension of this, where we can require $\Gamma'$ to have a given degree sequence, as long as this degree sequence is almost regular. \begin{lemma}\label{fandk} Let $0 < 1/m\ll\varepsilon \ll \beta \ll \alpha ' \ll \alpha \ll 1$. Suppose that $\Gamma =(U,V)$ is an $(\varepsilon , \beta+\varepsilon)$-super-regular pair where $|U|=|V|=m$. Define $\tau:= (1- \alpha)\beta m$. Suppose we have a non-negative integer $x_i \leq \alpha ' \beta m$ associated with each $u_i \in U$ and a non-negative integer $y_i \leq \alpha ' \beta m$ associated with each $v_i \in V$ such that $\sum _{u_i \in U} x_i= \sum _{v_i \in V} y_i$. Then $\Gamma$ contains a spanning subgraph $\Gamma '$ in which $c_i:= \tau-x_i$ is the degree of $u_i \in U$ and $d_i:= \tau-y_i$ is the degree of $v_i \in V$. \end{lemma} \removelastskip\penalty55\medskip\noindent{\bf Proof. } We first obtain a directed network $N$ from $\Gamma$ by adding a source $s$ and a sink $t$. We add an edge $su_i$ of capacity $c_i$ for each $u_i \in U$ and an edge $v_i t$ of capacity $d_i$ for each $v_i \in V$. We give all the edges in $\Gamma$ capacity $1$ and direct them from $U$ to~$V$. Our aim is to show that the capacity of any cut is at least $\sum _{u_i \in U} c_i = \sum _{v_i \in V} d_i$. By the max-flow min-cut theorem this would imply that $N$ admits a flow of value $\sum _{u_i \in U} c_i$, which by construction of $N$ implies the existence of our desired subgraph~$\Gamma'$. So consider any $(s,t)$-cut $(S,\bar{S})$ where $S=\{s\} \cup S_1 \cup S_2$ with $S_1 \subseteq U$ and $S_2 \subseteq V$. Let $\bar S_1:=U \backslash S_1$ and $\bar S_2:=V \backslash S_2.$ The capacity of this cut is $$\sum _{u_i \in \bar S_1} c_i + \sum _{v_i \in S_2} d_i + e(S_1,\bar S_2) $$ and so our aim is to show that \begin{align}\label{aim} e(S_1,\bar S_2) \geq \sum _{u_i \in S_1} c_i - \sum _{v_i \in S_2} d_i. \end{align} Now \begin{align}\label{aim1}\sum _{u_i \in S_1} c_i - \sum _{v_i \in S_2} d_i \leq |S_1|(1- \alpha)\beta m -|S_2|(1-\alpha -\alpha ')\beta m \end{align} and similarly \begin{align}\label{aim2} \sum _{u_i \in S_1} c_i - \sum _{v_i \in S_2} d_i =\sum _{v_i \in \bar S_2} d_i -\sum _{u_i \in \bar S_1} c_i \leq |\bar S_2|(1- \alpha)\beta m -|\bar S_1|(1-\alpha -\alpha ')\beta m. \end{align} By (\ref{aim1}) we may assume that $|S_1| \geq (1-2 \alpha ')|S_2|$. (Since otherwise $\sum _{u_i \in S_1} c_i - \sum _{v_i \in S_2} d_i <0$ and thus (\ref{aim}) is satisfied.) Similarly by (\ref{aim2}) we may assume that $|\bar S_2| \geq (1-2 \alpha ')|\bar S_1|$. Let $\alpha ^*:= \alpha '/ \alpha$. We now consider several cases. \medskip \noindent {\bf Case 1.} $|S_1|, |\bar S_2| \geq \varepsilon m$ and $|S_1| \geq (1+\alpha^*)|S_2|.$ \smallskip \noindent Since $\Gamma$ is $(\varepsilon, \beta +\varepsilon)$-super-regular we have that \begin{align*} e (S_1, \bar S_2) &\geq \beta |S_1|(m -|S_2|) \geq \beta m (|S_1|-|S_2|) \\ & = \left( |S_1|(1- \alpha)\beta m -|S_2|(1-\alpha -\alpha ')\beta m \right)+ \alpha \beta m |S_1| -(\alpha +\alpha ')\beta m|S_2| \\ & \geq |S_1|(1- \alpha)\beta m -|S_2|(1-\alpha -\alpha ')\beta m. \end{align*} (The last inequality follows since $\alpha |S_1| \geq (\alpha +\alpha ')|S_2|$.) Together with~(\ref{aim1}) this implies (\ref{aim}). 
\medskip \noindent {\bf Case 2.} $|S_1|, |\bar S_2| \geq \varepsilon m$, $|S_1| <(1+\alpha^*)|S_2|$ and $|S_2| \leq (1- \alpha ^*)m.$ \smallskip \noindent Again since $\Gamma$ is $(\varepsilon, \beta +\varepsilon)$-super-regular we have that \begin{align}\label{eqaim} e (S_1, \bar S_2) \geq \beta |S_1|(m -|S_2|)=\beta |S_1||\bar S_2|. \end{align} As before, to prove~(\ref{aim}) we will show that $$e (S_1, \bar S_2) \geq |S_1|(1- \alpha)\beta m -|S_2|(1-\alpha -\alpha ')\beta m.$$ Thus by (\ref{eqaim}) it suffices to show that $\alpha m| S_1|- |S_1||S_2| +(1-\alpha-\alpha ')m|S_2| \geq 0$. We know that $|S_2| (1- \alpha -\alpha ') \geq |S_1|(1- \alpha - \alpha ^*)$ since $(1+\alpha^*)|S_2|>|S_1|.$% \COMMENT{This follows as $1+\alpha ^* \leq (1- \alpha -\alpha ')/(1- \alpha - \alpha ^*)$ as $(1+\alpha ^*)(1- \alpha - \alpha ^*)=1- \alpha- \alpha \alpha ^* - (\alpha^*)^2 = 1- \alpha -\alpha ' - (\alpha^*)^2 \leq 1- \alpha -\alpha '.$} Hence, $\alpha |S_1|-|S_1|(1- \alpha ^*)+|S_2|(1-\alpha-\alpha ') \geq 0$. So $\alpha m| S_1|- |S_1||S_2| +(1-\alpha-\alpha ')m|S_2| \geq 0$ as $|S_2| \leq (1- \alpha ^*)m.$ So indeed (\ref{aim}) is satisfied. \medskip \noindent {\bf Case 3.} $|S_1|, |\bar S_2| \geq \varepsilon m$, $|S_1| <(1+\alpha^*)|S_2|$ and $|S_2| > (1- \alpha ^*)m.$ \smallskip \noindent By~(\ref{aim2}) in order to prove~(\ref{aim}) it suffices to show that $$e (S_1, \bar S_2) \geq |\bar S_2|(1- \alpha)\beta m -|\bar S_1|(1-\alpha -\alpha ')\beta m.$$ Since~(\ref{eqaim}) also holds in this case, this means that it suffices to show that $\alpha | \bar S_2|m- |\bar S_1||\bar S_2| +(1-\alpha-\alpha ')|\bar S_1| m\geq 0$.% \COMMENT{since we want to show $| S_1||\bar S_2| \geq (1- \alpha ) |\bar S_2 |m-(1-\alpha-\alpha ')|\bar S_1| m$} Since $|S_1|\geq (1- 2\alpha ')|S_2|$ and $|S_2| > (1- \alpha ^*)m$ we have that $|S_1|> (1-\alpha )m$. Thus $\alpha |\bar S_2|m \geq |\bar S_1 ||\bar S_2|$ and so indeed (\ref{aim}) holds. \medskip \noindent {\bf Case 4.} $|S_1| < \varepsilon m \text{ and } |\bar S_2| \geq \varepsilon m.$ \smallskip \noindent Since $|S_1| \geq (1-2 \alpha ')|S_2|$ we have that $|S_2| \leq 2 \varepsilon m$. Hence, $$e(S_1, \bar S_2) \geq \beta m|S_1|-|S_1||S_2| \geq (\beta-2 \varepsilon)m|S_1| \geq (1- \alpha)\beta m |S_1|$$ and so by (\ref{aim1}) we see that (\ref{aim}) is satisfied, as desired. \medskip \noindent {\bf Case 5.} $|S_1| \ge \varepsilon m \text{ and } |\bar S_2|< \varepsilon m$. \smallskip \noindent Similarly as in Case~4 it follows that $e(S_1, \bar S_2) \geq (1- \alpha)\beta m |\bar S_2|$ and so by (\ref{aim2}) we see that (\ref{aim}) is satisfied, as desired. \medskip \noindent Note that we have considered all possible cases since we cannot have that $|S_1|,|\bar S_2| < \varepsilon m$. Indeed, if $|S_1|,|\bar S_2| < \varepsilon m$ then $|S_2| \geq (1- \varepsilon)m$ and as $|S_1|\geq (1- 2 \alpha ')|S_2|$ this implies $|S_1|\geq (1- 2 \alpha ')(1- \varepsilon)m$, a contradiction. \noproof\bigskip \subsection{Special $1$-factors in graphs and digraphs} It is easy to see that every regular oriented graph $G$ contains a $1$-factor. The following result states that if $G$ is also dense, then (i) we can guarantee a $1$-factor with few cycles. Such $1$-factors have the advantage that we can transform them into a Hamilton cycle by adding/deleting a comparatively small number of edges. (ii) implies that even if $G$ contains a sparse `bad' subgraph $H$, then there will be a $1$-factor which does not contain `too many' edges of $H$. 
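Before stating this result we recall, as a purely illustrative aside which is not used in any of the proofs, the correspondence behind the observation above (and behind the proof of Lemma~\ref{1factororiented} below): $1$-factors of an oriented graph $G$ correspond to perfect matchings of the bipartite graph whose vertex classes are two copies of $V(G)$, i.e.~to permutations of $V(G)$ mapping every vertex to one of its outneighbours, and the cycles of such a permutation are precisely the cycles of the corresponding $1$-factor. The Python sketch below (toy data only) makes this explicit.
\begin{verbatim}
# Illustrative sketch only (toy data): 1-factors of an oriented graph viewed as
# permutations succ with succ[x] an out-neighbour of x; the cycles of succ are
# the cycles of the 1-factor.
from itertools import permutations

# Toy 2-regular oriented graph on Z_5: i -> i+1 and i -> i+2 (mod 5).
out_nbrs = {i: {(i + 1) % 5, (i + 2) % 5} for i in range(5)}
vertices = sorted(out_nbrs)

def one_factors(out_nbrs, vertices):
    """Brute force over all permutations of V(G) (fine for a toy example)."""
    for perm in permutations(vertices):
        succ = dict(zip(vertices, perm))
        if all(succ[x] in out_nbrs[x] for x in vertices):
            yield succ

def cycles_of(succ, vertices):
    """Split the permutation succ into its cycles."""
    seen, cycles = set(), []
    for v in vertices:
        if v not in seen:
            cycle, x = [], v
            while x not in seen:
                seen.add(x)
                cycle.append(x)
                x = succ[x]
            cycles.append(cycle)
    return cycles

for succ in one_factors(out_nbrs, vertices):
    print(cycles_of(succ, vertices))   # two 1-factors here, each a single 5-cycle
\end{verbatim}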
\begin{lemma}\label{1factororiented} Let $0< \theta_1 ,\theta _2 , \theta _3 <1/2$ and $\theta _1 /\theta _3 \ll \theta _2$. Let $G$ be a $\rho$-regular oriented graph whose order $n$ is sufficiently large and where $\rho:=\theta _3 n$. Suppose $A_1, \dots , A_{5n}$ are sets of vertices in $G$ with $a_i:=|A_i|\ge n^{1/2}$. Let $H$ be an oriented subgraph of $G$ such that $d^{\pm} _H (x) \leq \theta _1 n$ for all $x \in A_i$ (for each $i$). Then $G$ has a $1$-factor $F$ such that \begin{itemize} \item[(i)] $F$ contains at most $n/(\log n)^{1/5}$ cycles; \item[(ii)] For each $i$, at most $\theta _2 a_i$ edges of $H\cap F$ are incident to $A_i$. \end{itemize} \end{lemma} To prove this result we will use ideas similar to those used by Frieze and Krivelevich~\cite{fk}. In particular, we will use the following bounds on the number of perfect matchings in a bipartite graph. \begin{thm}\label{matchingbounds} Suppose that $B$ is a bipartite graph whose vertex classes have size $n$ and $d_1,\dots , d_n$ are the degrees of the vertices in one of these vertex classes. Let $\mu (B)$ denote the number of perfect matchings in $B$. Then \begin{align*} \mu (B) \leq \prod _{k=1} ^n (d_k !)^{1/d_k}. \end{align*} Furthermore, if $B$ is $\rho$-regular then \begin{align*} \mu (B) \ge\left(\frac{\rho}{n}\right) ^n n!. \end{align*} \end{thm} The upper bound in Theorem~\ref{matchingbounds} was proved by Br\'egman~\cite{3}. The lower bound is a consequence of the Van der Waerden conjecture which was proved independently by Egorychev~\cite{eg} and Falikman~\cite{fa}. We will deduce (i) from the following result in~\cite{randmatch}, which in turn is similar to Lemma~2 in~\cite{fk}. \begin{lemma} \label{2factor} For all $\theta \le 1$ there exists $n_0=n_0(\theta)$ such that the following holds. Let $B$ be a $\theta n$-regular bipartite graph whose vertex classes $U$ and $W$ satisfy $|U|=|W|=:n\ge n_0$. Let $M_1$ be any perfect matching from $U$ to $W$ which is disjoint from $B$. Let $M_2$ be a perfect matching chosen uniformly at random from the set of all perfect matchings in $B$. Let $F=M_1 \cup M_2$ be the resulting $2$-factor. Then the probability that $F$ contains more than $n/(\log n)^{1/5}$ cycles is at most $e^{-n}$. \end{lemma} \medskip \noindent {\bf Proof of Lemma~\ref{1factororiented}.} Consider the $\rho$-regular bipartite graph $B$ whose vertex classes $V_1,V_2$ are copies of $V(G)$ and where $x \in V_1$ is joined to $y \in V_2$ if $xy$ is a directed edge in $G$. Note that every perfect matching in $B$ corresponds to a $1$-factor of $G$ and vice versa. Let $\mu (B)$ denote the number of perfect matchings of $B$. Then \begin{align}\label{pmatch1} \mu (B) \geq \left( \frac{\rho}{n} \right) ^n n!\geq \left( \frac{\rho}{n} \right) ^n \left( \frac{n}{e} \right) ^n = \left( \frac{\rho}{e} \right) ^n \end{align} by Theorem~\ref{matchingbounds}. Here we have also used Stirling's formula which implies that for sufficiently large $m$, \begin{align}\label{stirling} \left( \frac{m}{e} \right) ^m \leq m! \leq \left( \frac{m}{e} \right) ^{m+1}. \end{align} We now count the number $\mu _i (G)$ of $1$-factors of $G$ which contain more than $\theta _2 a_i$ edges of $H$ which are incident to $A_i$. Note that \begin{align}\label{mured1} \mu _{i} (G) \leq \binom{2a_i}{\theta _2 a_i}(\theta _1 n)^{\theta _2 a_i} (\rho !)^ {(n-\theta _2 a_i)/\rho}. 
\end{align} Indeed, the term $\binom{2a_i}{\theta _2 a_i}(\theta _1 n)^{\theta _2 a_i}$ in~(\ref{mured1}) gives an upper bound for the number of ways we can choose $\theta _2 a_i$ edges from $H$ which are incident to $A_i$ such that no two of these edges have the same startpoint and no two of these edges have the same endpoint. The term $(\rho !)^ {(n-\theta _2 a_i)/\rho}$ in~(\ref{mured1}) uses the upper bound in Theorem~\ref{matchingbounds} to give a bound on the number of $1$-factors in $G$ containing $\theta _2 a_i $ fixed edges. Now \begin{align}\label{rhobounda} (\rho !)^{(n- \theta _2 a_i)/ \rho} \stackrel{(\ref{stirling})}{\leq} \left( \frac{\rho}{e} \right) ^{(1+1/\rho)(n- \theta _2 a_i)} \leq \left( \frac{\rho}{e} \right)^{n- \theta _2 a_i +1/\theta _3} \end{align} since $\rho = \theta _3 n$ and \begin{align}\label{rhobound2a} \left( \frac{e}{\rho} \right)^{ \theta _2 a_i -1/\theta _3} \leq \left( \frac{2e}{\theta _3 n} \right)^{\theta _2 a_i} \end{align} since $a_i \geq n^{1/2}$. Furthermore, \begin{align}\label{binomial1} \binom{2a_i}{\theta _2 a_i} \leq \frac{(2a_i)^{\theta _2 a_i} }{(\theta _2 a_i)!} \stackrel{(\ref{stirling})}{\leq} \left( \frac{2 e}{ \theta _2} \right) ^{\theta _2 a_i}. \end{align} So by (\ref{mured1}) we have that \begin{align*} \mu _{i} (G) &\stackrel{(\ref{rhobounda}),(\ref{binomial1})}{\leq} \left(\frac{2e}{\theta _2}\right)^ {\theta _2 a_i} (\theta _1 n)^{\theta _2 a_i} \left( \frac{\rho}{e} \right)^{n- \theta _2 a_i + 1/\theta _3}\\ & \ \ \stackrel{(\ref{rhobound2a})}{\leq} \left( \frac{2e}{\theta _2} \theta _1 n \frac{2 e}{\theta _3 n} \right)^{\theta _2 a_i} \left( \frac{\rho}{e}\right) ^{n} \stackrel{(\ref{pmatch1})}{\leq}\left( \frac{4e^2 \theta _1}{\theta _2 \theta _3} \right)^{\theta _2 a_i} \mu (B) \ll \frac{\mu(B)}{5n} \end{align*} since $\theta _1 / \theta _3 \ll \theta _2$, $ a_i \geq n^{1/2}$ and $n$ is sufficiently large. Now we apply Lemma~\ref{2factor} to $B$ where $M_1$ is the identity matching (i.e.~every vertex in $V_1$ is matched to its copy in $V_2$). Then a cycle of length $2\ell$ in $M_1 \cup M_2$ corresponds to a cycle of length $\ell$ in $G$. So, since $n$ is sufficiently large, the number of $1$-factors of $G$ containing more than $n/(\log n)^{1/5}$ cycles is at most $e^{-n} \mu (B)$. So there exists a $1$-factor $F$ of $G$ which satisfies (i) and (ii). \noproof\bigskip \subsection{Rotation-Extension lemma} The following lemma will be a useful tool when transforming $1$-factors into Hamilton cycles. Given such a $1$-factor $F$, we will obtain a path $P$ by cutting up and connecting several cycles in $F$ (as described in the proof sketch in Section~\ref{sketch}). We will then apply the lemma to obtain a cycle $C$ containing precisely the vertices of $P$.% \begin{lemma}\label{rotationlemma} Let $0 < 1/m \ll \varepsilon \ll \gamma <1$. Let $G$ be an oriented graph on $n \geq 2m$ vertices. Suppose that $U$ and $V$ are disjoint subsets of $V(G)$ of size $m$ with the following property: \begin{align}\label{label} \text{If }S \subseteq U, \ T \subseteq V \text{ are such that }|S|,|T| \geq \varepsilon m \text{ then }e_G (S,T) \geq \gamma |S||T|/2. \end{align} Suppose that $P=u_1 \dots u_k$ is a directed path in $G$ where $u_1 \in V$ and $u_k \in U$. Let $X$ denote the set of inneighbours $u_i$ of $u_1$ which lie on $P$ so that $u_i \in U$ and $u_{i+1} \in V$. Similarly let $Y$ denote the set of outneighbours $u_i$ of $u_k$ which lie on $P$ so that $u_i \in V$ and $u_{i-1} \in U$. Suppose that $|X|,|Y| \geq \gamma m$. 
Then there exists a cycle $C$ in $G$ containing precisely the vertices of $P$ such that $|E(C)\backslash E(P)|\leq 5$. Furthermore, $E(P)\backslash E(C)$ consists of edges from~$X$ to~$X^+$ and edges from~$Y^-$ to~$Y$. (Here $X^+$ is the set of successors of vertices in~$X$ on $P$ and $Y^-$ is the set of predecessors of vertices in~$Y$ on~$P$.) \end{lemma} \removelastskip\penalty55\medskip\noindent{\bf Proof. } Clearly we may assume that $u_k u_1 \not \in E(G)$. Let $X_1$ denote the set of the first $\gamma m/2$ vertices in $X$ along $P$ and $X_2$ the set of the last $\gamma m/2$ vertices in $X$ along $P$. We define $Y_1$ and $Y_2$ analogously. So $X_1, X_2 \subseteq U$ and $Y_1, Y_2 \subseteq V$. We have two cases to consider. \medskip \noindent {\bf{Case 1.}} All the vertices in $X_1$ precede those in $Y_2$ along $P$. \smallskip \noindent Partition $X_1 = X_{11} \cup X_{12}$ where $X_{11}$ denotes the set of the first $\gamma m/4$ vertices in $X_1$ along $P$. We partition $Y_2$ into $Y_{21}$ and $Y_{22}$ analogously. Let $X_{12}^+$ denote the set of successors on $P$ of the vertices in $X_{12}$ and $Y^-_{21}$ the set of predecessors of the vertices in $Y_{21}$. So $X^+_{12} \subseteq V$ and $Y^-_{21} \subseteq U$. Further define \begin{itemize} \item $X'_{11} := \{ u_i \ | \ u_{i-1} \in X_{11} \text{ and } \exists \ \text{edge from $u_{i-1}$ to $X^+_{12}$} \} $ and \item $Y'_{22} := \{ u_i \ | \ u_{i+1} \in Y_{22} \text{ and } \exists \ \text{edge from } Y^-_{21} \text{ to }u_{i+1} \} $. \end{itemize} So $X'_{11} \subseteq V$ and $Y'_{22} \subseteq U$. From (\ref{label}) it follows that $|X'_{11}|\geq \frac{(\gamma /2)(\gamma m/4)|X^+_{12}|}{|X^+_{12}|} \geq \varepsilon m$ and similarly $|Y'_{22}| \geq \varepsilon m$. Since $X'_{11} \subseteq V$ and $Y'_{22} \subseteq U$, by (\ref{label}) $G$ contains an edge $u_{i'}u_{i}$ from $ Y'_{22}$ to $X'_{11}$. Since $u_{i} \in X'_{11}$, by definition of $X'_{11}$ it follows that $G$ contains an edge $u_{i-1}u_{j}$ for some $u_{j} \in X^+_{12}$. Likewise, since $u_{i'} \in Y'_{22}$, there is an edge $u_{j'} u_{i'+1}$ for some $u_{j'}\in Y^-_{21}$. Furthermore, $u_{j-1}u_1$ and $u_k u_{j'+1}$ are edges of $G$ by definition of $X^+_{12}$ and $Y^-_{21}$. It is easy to check that the cycle $$C=u_1 \dots u_{i-1} u_{j} u_{j+1} \dots u_{j'} u_{i'+1}u_{i'+2}\dots u_k u_{j'+1} u_{j'+2} \dots u_{i'} u_{i}u_{i+1} \dots u_{j-1} u_1$$ has the required properties (see Figure~1). For example, $E(P)\backslash E(C)$ consists of the edges $u_{i-1}u_i$, $u_{j-1}u_j$, $u_{j'}u_{j'+1}$ and $u_{i'}u_{i'+1}$. The former two edges go from~$X$ to~$X^+$ and the latter two from~$Y^-$ to~$Y$. \begin{figure}[htb!] \label{fig:rotation} \begin{center}\footnotesize \psfrag{1}[][]{\normalsize $u_1$} \psfrag{2}[][]{\normalsize $u_{i-1}$} \psfrag{3}[][]{\normalsize $u_{i}$} \psfrag{4}[][]{\normalsize $u_{j-1}$} \psfrag{5}[][]{\normalsize \ \ \ \ \ \ \ \ $u_{j} \in X_{12}^+$} \psfrag{6}[][]{\normalsize $u_{j'}$} \psfrag{7}[][]{\normalsize $u_{j'+1}$} \psfrag{8}[][]{\normalsize $u_{i'}$} \psfrag{9}[][]{\normalsize $u_{i'+1}$} \psfrag{10}[][]{\normalsize $u_k$} \includegraphics[width=0.70\columnwidth]{rotation3.eps} \caption{The cycle $C$ from Case~1} \end{center} \end{figure} \medskip \noindent {\bf Case 2.} All the vertices in $Y_1$ precede those in $X_2$ along $P$. \smallskip \noindent Let $Y^- _1$ be the predecessors of the vertices in $Y_1$ and $X^+ _2$ the successors of the vertices in $X_2$ on $P$. 
So $|Y^-_1|=|X^+_2|=\gamma m/2$ and $Y^- _1 \subseteq U$ and $X_2 ^+ \subseteq V$. Thus by~(\ref{label}) there exists an edge $u_{i}u_{j}\in E(G)$ from $Y^-_1$ to $X_2 ^+$. Again, it is easy to check that the cycle $$C=u_1 \dots u_{i} u_{j} u_{j+1} \dots u_k u_{i+1} u_{i+2} \dots u_{j-1} u_1$$ has the desired properties. \noproof\bigskip \subsection{Shifted walks}\label{sec:shifted} Suppose $R$ is a digraph and $F$ is a collection of vertex-disjoint cycles with $V(F) \subseteq V(R)$. A \emph{closed shifted walk $W$ in~$R$ with respect to $F$} is a walk in $R \cup F$ of the form $$W=c_1 ^+ C_1 c_1 c_2 ^+ C_2 c_2 \dots c^+ _{s-1} C_{s-1} c_{s-1} c^+ _s C_s c_s c_1 ^+,$$ where \begin{itemize} \item $\{C_1, \dots , C_s\}$ is the set of all cycles in $F$; \item $c_i$ lies on $C_i$ and $c^+ _i$ is the successor of $c_i$ on $C_i$ for each $1 \le i \le s$; \item $c_i c_{i+1} ^+$ is an edge of $R$ (here $c^+ _{s+1}:=c_1 ^+$). \end{itemize} Note that the cycles $C_1, \dots , C_s$ are not necessarily distinct. If a cycle $C_i$ in $F$ appears exactly $t$ times in $W$ we say that $C_i$ is \emph{traversed $t$ times}. Note that a closed shifted walk $W$ has the property that for every cycle $C$ of $F$, every vertex of $C$ is visited the same number of times by $W$. The next lemma will be used in Section~\ref{merging} to combine cycles of $G$ which correspond to different cycles of $F$ into a single (Hamilton) cycle. Shifted walks were introduced in~\cite{kelly}, where they were used for a similar purpose. \begin{lemma}\label{shiftedwalk} Let $0 <1/n \ll 1/M'\ll \varepsilon \ll \eta \ll d \ll c\ll d' \ll 1$. Suppose that $G$ is an oriented graph of order $n$ with $\delta ^0 (G)\geq (1/2-\eta)n$. Let $R$ denote the reduced digraph of $G$ with parameters $\varepsilon ,d$ and $M'$ obtained by applying Lemma~\ref{dilemma}. Let $L:=|R|$. Let $R'$ denote the spanning subgraph of $R$ obtained by deleting all edges which correspond to pairs of density at most $d'$ in the pure digraph~$G'$. Let $F$ be a collection of vertex-disjoint cycles with $V(F) \subseteq V(R')$ and $|V(F)| \ge (1-c)L$. Then $R'$ contains a closed shifted walk with respect to $F$ so that each cycle $C$ in $F$ is traversed at most $3L$ times. \end{lemma} \removelastskip\penalty55\medskip\noindent{\bf Proof. } Let $C_1, \dots , C_t$ denote the cycles of $F$. We construct our closed shifted walk $W$ as follows: for each cycle $C_i$, choose an arbitrary vertex $a_i$ lying on $C_i$ and let $a_i^+$ denote its successor on~$C_i$. Let $U_i:=N^+_{R'}(a_i) \cap V(F)$ and let $U_i^-$ be the set of predecessors of $U_i$ on $F$. Similarly, let $V_i:=N^-_{R'}(a_i^+) \cap V(F)$ and let $V_i^+$ be the set of successors of $V_i$ on $F$. Since $\delta^0(R')\ge (1/2-2d')L$ by Lemma~\ref{keevashmult}(ii), we have $|U_i^-|=|U_i| \ge (1/2-3d')L$ and $|V_i^+| =|V_i|\ge (1/2-3d')L$. So by Lemma~\ref{keevashmult}(ii) there is an edge $u_i^-v_{i+1}^+$ from $U_i^-$ to $V_{i+1}^+$ in $R'$. Then we obtain a walk $W_i$ from $a^+_i$ to $a^+_{i+1}$ by first traversing $C_i$ to reach $a_i$, then using the edge from $a_i$ to the successor $u_i$ of $u^-_i$, then traversing the cycle in~$F$ containing $u_i$ as far as $u_i^-$, then using the edge $u_i^-v_{i+1}^+$, then traversing the cycle in~$F$ containing $v_{i+1}^+$ as far as $v_{i+1}$, and finally using the edge $v_{i+1}a^+_{i+1}$. (Here $a^+_{t+1}:=a^+_1$.) $W$ is obtained by concatenating the $W_i$.
\noproof\bigskip \section{Proof of Theorem~\ref{main}} \label{5} \subsection{Applying the Diregularity lemma}\label{applyDRL} Without loss of generality we may assume that $0<\eta _1 \ll 1$. Define further constants satisfying \begin{align}\label{hier}0< 1/M'\ll \varepsilon \ll \beta \ll \eta _2 \ll d\ll c \ll c' \ll \gamma _1 \ll\gamma _2 \ll \gamma _3 \ll \gamma _4 \ll \gamma_5 \ll d' \ll \gamma \ll \eta_1. \end{align} Let $G$ be an oriented graph of order $n\gg M'$ such that $\delta ^0 (G) \geq (1/2-\eta _2)n$. Apply the Diregularity lemma (Lemma~\ref{dilemma}) to $G$ with parameters $\varepsilon, d$ and $M'$ to obtain clusters $V_1, \dots,V_L$ of size $m$, an exceptional set $V_0$, a pure digraph $G'$ and a reduced digraph $R$ (so $L=|R|$). Let $R'$ be the spanning subdigraph of~$R$ whose edges correspond to pairs of density at least~$d'$. So $V_iV_j$ is an edge of~$R'$ if $(V_i,V_j)_{G'}$ has density at least~$d'$. Let $R_m$ denote the reduced multidigraph of $G$ with parameters $\varepsilon, \beta, d$ and $M'$. For each edge $V_iV_j$ of $R$ let $d_{i,j}$ denote the density of the $\varepsilon$-regular pair $(V_i,V_j)_{G'}$. Recall that each edge $(V_iV_j)_k \in E(R_m)$ is associated with the $k$th spanning subgraph $S_{i,j,k}$ of $(V_i,V_j)_{G'}$ obtained by applying Lemma~\ref{split} with parameters $\varepsilon, d_{i,j}$ and $K:=d_{i,j}/\beta$. Each $S_{i,j,k}$ is $\varepsilon$-regular with density $\beta\pm \varepsilon$. Lemma~\ref{multimin} implies that \begin{align}\label{Rmdeg} \delta ^0 (R_m) \geq (1/2-4d)\frac{L}{\beta} \text{ \ \ and \ \ }\Delta ^0 (R_m) \leq (1/2+2\eta_2)\frac{L}{\beta}. \end{align} (The second inequality holds since $\Delta^0(G)\le n-\delta^0(G)\le (1/2+\eta_2)n$.) Apply Lemma~\ref{multifactor1} to $R_m$ in order to obtain% \COMMENT{note we could choose more $1$-factors, but it is needed for later calculations that this $\gamma$ is `big'} \begin{align}\label{rdef} r:= (1-\gamma)L/2\beta \end{align} edge-disjoint collections $\mathcal F_1, \dots , \mathcal F_r$ of vertex-disjoint cycles in $R_m$ such that each $\mathcal F_i$ contains all but at most $cL$ of the clusters in $R_m$.% \COMMENT{We could have $2$-cycles here. I don't think this will be a problem but if it is we can do a random split of clusters- double length of cycles...} Let $V_{0,i}$ denote the set of all those vertices in $G$ which do not lie in clusters covered by~$\mathcal F_i$. So $V_0 \subseteq V_{0,i}$ for all $1 \le i \le r$ and $|V_{0,i}|\leq |V_0|+cLm\leq (\varepsilon+c)n$. We now apply Lemma~\ref{superreg} to each cycle in $\mathcal F_i$ to obtain subclusters of size $m':=(1-4\varepsilon)m$ such that the edges of $\mathcal F_i$ now correspond to $(10\varepsilon,\beta)$-super-regular pairs. By removing one extra vertex from each cluster if necessary we may assume that $m'$ is even. All vertices not belonging to the chosen subclusters of $\mathcal F_i$ are added to $V_{0,i}$. So now \begin{align}\label{v0} |V_{0,i}|\leq 2cn. \end{align} We refer to the chosen subclusters as the clusters of $\mathcal F_i$ and still denote these clusters by $V_1, \dots , V_L$. (This is a slight abuse of notation since the clusters of $\mathcal F_i$ might be different from those of $\mathcal F_{i'}$.) Thus an edge $(V_{j_1}V_{j_2})_k$ in $\mathcal F_i$ corresponds to the $(10\varepsilon , \beta)$-super-regular pair $S' _{j_1, j_2 , k}:=(V_{j_1},V_{j_2})_{S_{j_1,j_2,k}}$. 
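Roughly speaking, the splittings provided by Lemma~\ref{split} (used here to define the $S_{i,j,k}$, and used again below for the slices of $H_2$ and for the partitions in Section~\ref{randomsplit}) can be pictured as giving every edge of a dense regular pair an independent, uniformly random label from $\{1,\dots,K\}$: with high probability each slice then inherits roughly a $1/K$-fraction of the density, also between large sub-pairs, which is what the Chernoff-based arguments in this section exploit. The following Python sketch is purely illustrative; the parameters $n$, $d$ and $K$ are toy values and not those of the proof.
\begin{verbatim}
# Illustrative sketch only: randomly split a dense bipartite pair into K slices.
import random

random.seed(0)
n, d, K = 400, 0.3, 5                          # toy parameters, not those of the proof

edges = [(a, b) for a in range(n) for b in range(n) if random.random() < d]
slices = [[] for _ in range(K)]
for e in edges:
    slices[random.randrange(K)].append(e)      # independent uniform label per edge

def density(edge_list, A, B):
    A, B = set(A), set(B)
    return sum((a in A) and (b in B) for a, b in edge_list) / (len(A) * len(B))

X = random.sample(range(n), n // 4)            # a large sub-pair, as in a regularity check
Y = random.sample(range(n), n // 4)
for k, sl in enumerate(slices):
    print(k,
          round(density(sl, range(n), range(n)), 4),   # close to d/K = 0.06
          round(density(sl, X, Y), 4))                 # similar density on (X, Y)
\end{verbatim}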
Let $C_i$ denote the oriented subgraph of $G$ whose vertices are all those vertices belonging to clusters in $\mathcal F_i$ such that for each $(V_{j_1}V_{j_2})_k \in E(\mathcal F_i)$ the edges between $V_{j_1}$ and $V_{j_2}$ are precisely all the edges in $S' _{j_1,j_2,k}$. Clearly $C_1, \dots , C_{r}$ are edge-disjoint. We now define `random' edge-disjoint oriented subgraphs $H^+_1$, $H^-_1$, $H_2$, $H_{3,i}$, $H_4$ and $H_{5,i}$ of $G$ (for each $i=1,\dots,r$). $H^+_1$ and $H^-_1$ will be used in Section~\ref{sec:incorp} to incorporate the exceptional vertices in $V_{0,i}$ into~$C_i$. $H_2$ will be used to choose the skeleton walks in Section~\ref{skel}. The $H_{3,i}$ will be used in Section~\ref{4.6} to merge certain cycles. $H_4$ and the $H_{5,i}$ will be used in Section~\ref{merging} to find our almost decomposition into Hamilton cycles. We will choose these subgraphs to satisfy the following properties: \medskip {\noindent\bf{Properties of $H^+_1$ and $H^-_1$.}} \begin{itemize} \item $H^+_1$ is a spanning oriented subgraph of $G$. \item For all $x \in V(H^+_1)$, $\gamma _1 n \leq d^{\pm} _{H^+_1} (x) \leq 2 \gamma _1 n$. \item For all $x \in V(H^+_1)$ and each $1 \leq i \leq r$, $|N ^{\pm} _{H^+_1} (x) \cap V_{0,i}| \leq 4 \gamma _1 |V_{0,i}|$. \item $H^-_1$ satisfies analogous properties. \end{itemize} \medskip {\noindent\bf{Properties of $H_2$.}}% \begin{itemize} \item The vertex set of $H_2$ consists of precisely all those vertices of $G$ which lie in a cluster of $R$ (i.e.~$V(H_2)=V(G)\setminus V_0$). \item For each edge $(V_{j_1}V_{j_2})_k$ of $R_m$, $H_2$ contains a spanning oriented subgraph of $S_{j_1, j_2, k}$ which forms an $\varepsilon$-regular pair of density at least $\gamma _2 \beta$. \item All edges of $H_2$ belong to one of these $\varepsilon$-regular pairs. \item For all $x \in V(H_2)$, $d^{\pm}_{H_{2}} (x) \leq 2 \gamma_2 n$. \end{itemize} \medskip {\noindent\bf{Properties of each $H_{3,i}$.}} \begin{itemize} \item The vertex set of $H_{3,i}$ consists of precisely all those vertices of $G$ which lie in a cluster of $\mathcal F_i$ (i.e.~$V(H_{3,i})=V(G)\setminus V_{0,i}$). \item For each edge $(V_{j_1}V_{j_2})_k$ of $\mathcal F_i$, $H_{3,i}$ contains a spanning oriented subgraph of $S' _{j_1, j_2, k}$ which forms a $(\sqrt{\varepsilon}/2, 2\gamma _3 \beta )$-super-regular pair. \item All edges in $H_{3,i}$ belong to one of these pairs. \item Let $H_3$ denote the union of all the oriented graphs $H_{3,i}$. The last two properties together with~(\ref{rdef}) imply that $d^{\pm}_{H_{3}}(x) \leq 3 \gamma_3 n$ for all $x \in V(H_3)$. \end{itemize} \medskip {\noindent\bf{Properties of $H_4$.}} \begin{itemize} \item The vertex set of $H_4$ consists of precisely all those vertices of $G$ which lie in a cluster of $R'$ (i.e.~$V(H_4)=V(G)\setminus V_0$). \item For each edge $V_{j_1}V_{j_2}$ of $R'$, $(V_{j_1},V_{j_2})_{H_4}$ is $\varepsilon$-regular of density at least $\gamma _4 d'$. \item All edges in $H_4$ belong to one of these $\varepsilon$-regular pairs. \item For all $x \in V(H_4)$, $d^{\pm}_{H_{4}} (x) \leq 2 \gamma_4 n$. \end{itemize} \medskip {\noindent\bf{Properties of each $H_{5,i}$.}} \begin{itemize} \item The vertex set of $H_{5,i}$ consists of precisely all those vertices of $G$ which lie in a cluster of $\mathcal F_i$. \item For each edge $(V_{j_1}V_{j_2})_k$ of $\mathcal F_i$, $H_{5,i}$ contains a spanning oriented subgraph of $S' _{j_1, j_2, k}$ which forms a $(\sqrt{\varepsilon}/2, 2\gamma _5 \beta )$-super-regular pair. 
\item All edges in $H_{5,i}$ belong to one of these pairs. \item Let $H_5$ denote the union of all the oriented graphs $H_{5,i}$. The last two properties together with~(\ref{rdef}) imply that $d^{\pm}_{H_{5}}(x) \leq 3 \gamma_5 n$ for all $x \in V(H_5)$. \end{itemize} \medskip {\noindent\bf{Properties of each $S'_{i,j,k}$}.} \begin{itemize} \item For each edge $(V_{j_1}V_{j_2})_k$ of $\mathcal F_i$ the oriented subgraph obtained from $S'_{j_1,j_2,k}$ by removing all the edges in $H^+_1,H^-_1,H_2,\dots,H_5$ is $(\varepsilon ^{1/3}, \beta _1)$-super-regular for some $\beta_1$ with% \COMMENT{Phrased in this way so that def of $\beta_1$ not too technical whilst still ensuring can't have vertices of large degree in this super-reg pair-- I think this is important later on} \begin{align}\label{beta1} (1-\gamma)\beta \leq \beta _1 \leq \beta. \end{align} \end{itemize} The existence of $H^+_1$, $H^-_1$, $H_2$, $H_{3,i}$, $H_4$ and $H_{5,i}$ can be shown by considering suitable random subgraphs of $G$ and applying the Chernoff bound in Proposition~\ref{chernoff}. For example, to show that $H^+_1$ exists, consider a random subgraph of $G$ which is obtained by including each edge of $G$ with probability $3\gamma_1$.% \COMMENT{Since the degrees in $G$ are about $n/2$ the expected degrees in~$H^+_1$ are about $3\gamma _1 n/2$} Similarly, to define $H_2$ choose every edge in $S_{j_1,j_2,k}$ with probability $3\gamma_2/2$ (for all $S_{j_1,j_2,k}$) and argue as in the proof of Lemma~\ref{split}. Note that since $H_4$ only consists of edges between pairs of clusters $V_{j_1},V_{j_2}$ which form an edge in $R'$, the oriented subgraphs obtained from the $S'_{j_1,j_2,k}$ by deleting all the edges in $H^+_1,H^-_1,H_2,\dots,H_5$ may have densities which differ too much from each other. Indeed, if $V_{j_1}V_{j_2} \notin E(R')$, then the corresponding density will be larger. However, for such pairs we can delete approximately a further $\gamma_4$-proportion of the edges to ensure this property holds. Again, the deletion is done by considering a random subgraph obtained by deleting edges with probability $\gamma_4$.% \COMMENT{We define our `random' graphs one at a time. Firstly we define $H^+ _1$: Each edge of $G$ is added to $H^+ _i$ with probability $3 \gamma _1 /2$. So $\mathbb E (d^{\pm} _{H^+ _1} (x))=\frac{3\gamma_1}{2}(1/2\pm \eta _2)n$ for all $x \in V(G)$. Clearly applying Chernoff we get that whp $\gamma _1 n \leq d^{\pm} _{H^+ _1} (x) \leq 2 \gamma _1 n$. Given any $x \in V(G)$, if $|N^{\pm} _G (x) \cap V_{0,i}| \leq 4 \gamma _1 |V_{0,i}|$ then clearly $|N^{\pm} _{H^+ _1} (x) \cap V_{0,i}| \leq 4 \gamma _1 |V_{0,i}|$. So consider the case when $|N^{\pm} _G (x) \cap V_{0,i}| \geq 4 \gamma _1 |V_{0,i}|\geq 4 \gamma _1 \varepsilon n$ (since $|V_{0,i}| \geq 4\varepsilon mL (1-c)$ as we sliced vertices off of each cluster to make super-regular). Let $Z := |N^{\pm} _{H^+ _1} (x) \cap V_{0,i}|$. So $6 \gamma _1 ^2 \varepsilon n \leq \mathbb E(Z) \leq 3 \gamma _1 |V_{0,i}|$. Thus $\mathbb P (Z \geq (1+ \varepsilon )3 \gamma _1 |V_{0,i}|)\leq 2 e^{-2 \varepsilon ^3 \gamma _1 ^2 n}.$ Note that $4nr e^{-2 \varepsilon ^3 \gamma _1 ^2 n}\ll 1$ so whp get 3rd condition of $H^+ _1$ satisfied. Consider each $S_{i,j,k}$. It was initally $\varepsilon$-regular of denisty $\beta \pm \varepsilon$. 
Removing the edges of $H^+ _1$ from $S_{i,j,k}$ we have the following: Given any $X \subseteq V_i$ and $Y \subseteq V_j$ such that $|X|,|Y| \geq \varepsilon m$ then $\mathbb E (Z)=(1-\frac{3\gamma _1}{2} )(\beta \pm 2 \varepsilon)|X||Y| \geq \beta \varepsilon ^2 m^2/2$ where $Z=e(X,Y)$. So $\mathbb P(|Z-\mathbb E(Z)| \geq \varepsilon \mathbb E(Z)) \leq 2 e^{-\frac{\varepsilon^2}{3} \mathbb E(Z)} \leq 2 e^{-cm^2}$ for some $c>0$. Note that there at most $(2^m)^2=4^m$ such pairs $X,Y$ for each $S_{i,j,k}$. Note that $\frac{L^2}{\beta}4^m 2e^{-cm^2} \ll1$. So whp have that each $S_{i,j,k}$ is still $6\varepsilon$-regular with density $(1-3\gamma_1 /2)\beta \pm 3 \varepsilon$. Similarly we can argue as above to show whp each $S'_{i,j,k}$ is $(20 \varepsilon, (1- 3\gamma _1 /2)\beta)$-super-regular. We can then argue as above to obtain $H^- _1$. (Modify each $S_{i,j,k}$ and $S'_{i,j,k}$ accordingly.) To obtain each of the remaining random subgraphs we will repeatedly slice a small proportion of the edges off of each $S_{i,j,k}$ (even if we don't add all these edges into the respective random subgraph). This will ensure each $S_{i,j,k}$ remains regular and each $S'_{i,j,k}$ super-regular. More precisely: For each $S_{i,j,k}$ we add an edge from $S_{i,j,k}$ to $H_2$ with probability $3 \gamma_2 /2$. Argue similarly to above to get $H_2$ as desired. Again each $S_{i,j,k}$ will be $\varepsilon ^* $-regular with density $(1- \gamma ^*)\beta \pm \varepsilon ^*$ where $\varepsilon ^*$ significantly bigger than $\varepsilon$ and $\gamma _2 < \gamma ^* \ll \gamma$. Also $S'_{i,j,k}$ will be $(\varepsilon ^*, (1-\gamma^*)\beta)$-super-regular. For each $S_{i,j,k}$ we remove an edge with probability $2\gamma_3 \beta/[(1- \gamma ^*)\beta]$. Those edges corresponding to an edge $(V_i,V_j)_k$ in $\mathcal F_i$ get assigned to $H_{3,i}$. As before edges are distributed roughly as expected so conditions for each $H_{3,i}$ hold. We then define each $H_{5,i}$ in an analogous way. Finally we create $H_4$. Even though $H_4$ only contains edges corresponding to edges in $R'$, we still take roughly a $\gamma _4$ slice from each $S_{i,j,k}$. Only the relevant edges go into $H_4$, the remaining edges get thrown away. At every step above we slice off roughly the same amount of edges from each $S_{i,j,k}$ and $S'_{i,j,k}$. So this will ensure each $S'_{i,j,k}$ is $(\varepsilon ^{1/3}, \beta _1)$-super-regular at the end of the process.} We now remove the edges in $H^+_1,H^-_1,H_2,\dots,H_5$ from each $C_i$. We still refer to the subgraphs of $C_i$ and $S'_{j_1,j_2,k}$ thus obtained as $C_i$ and $S'_{j_1,j_2,k}$. \subsection{Incorporating $V_{0,i}$ into $C_i$}\label{sec:incorp} Our ultimate aim is to use each of the $C_i$ as a `framework' to piece together roughly $\beta _1 m'$ Hamilton cycles in $G$. In this section we will incorporate the vertices in $V_{0,i}$, together with some edges incident to these vertices, into $C_i$. For each $i=1,\dots,r$, let $G_i$ denote the oriented spanning subgraph of $G$ obtained from $C_i$ by adding the vertices of $V_{0,i}$. So initially $G_i$ contains no edges with a start- or endpoint in $V_{0,i}$. We now wish to add edges to $G_i$ so that \begin{itemize} \item[(i)] $d^{\pm} _{G_i} (x) \geq(1-\sqrt{c} ) \beta _1 m'$ where $x$ has neighbours only in $C_i$, for all $x \in V_{0,i}$; \item [(ii)] $|N^{\pm} _{G_i} (y) \cap V_{0,i}| \leq \sqrt{c} \beta _1 m'$ for all $y \in V(C_i)$; \item[(iii)] $G_1, \dots ,G_{r}$ are edge-disjoint. 
\end{itemize} For each $x \in V(G)$ we define $\mathcal L_x := \{ i \ | \ x \in V_{0,i}\}$ and let $L_x := |\mathcal L_x|$. To satisfy~(i), we need to find roughly $L_x\beta_1 m'$ edges sent out by~$x$ (as well as $L_x\beta_1 m'$ edges received by~$x$) such that none of these edges already lies in any of the~$C_i$. It is not hard to check that such edges exist (c.f.~(\ref{eq:freeedges}) below). However, if $L_x$ is small then there is not much choice to which $G_i$ with $i\in \mathcal{L}_x$ we add each of these edges and so it might not be possible to guarantee~(ii). For this reason we reserved $H^+_1$ and $H^-_1$ in advance and for all those $x$ for which $L_x$ is small we will use the edges at $x$ lying in these two graphs. More precisely, let $$B':= \left\{ x \in V(G) \ | \ L_ x \geq \frac{\gamma _1 n}{2\beta _1 m'}\right\}.$$ As indicated above, we now consider the vertices in $B'$ and $V(G)\backslash B'$ separately. First consider any $x\in V(G)\setminus B'$. Let $p:=2\beta _1 m'/ \gamma _1 n$ and consider each edge $e$ sent out by $x$ in $H^+_1$. With probability $L_x p \leq 1$ we will assign $e$ to exactly one of the $G_i$ with $i \in \mathcal L_x$. More precisely, for each $i \in \mathcal L_x$ we assign $e$ to $G_i$ with probability $p$. So the probability $e$ is not assigned to any of the $G_i$ is $1-L_x p \geq 0$. We randomly distribute the edges of~$H^-_1$ received by $x$ in an analogous way amongst all the $G_i$ with $i\in \mathcal L_x$. We proceed similarly for all the vertices in $V(G)\setminus B'$, with the random choices being independent for different such vertices. Since $H^+_1$ and $H^-_1$ are edge-disjoint from each other and from all the $C_i$, the oriented graphs obtained from $G_1,\dots,G_r$ in this way will still be edge-disjoint. Moreover, $\mathbb E (d^{\pm} _{G_i} (x))\geq \gamma _1 n p $ and $\mathbb E (d^{\pm} _{G_i [V_{0,i}]} (x)) \leq |V_{0,i}|p \leq 2 cn p$ for every $x \in V(G)\setminus B'$ and each $i\in \mathcal L_x$. Thus \begin{align}\label{job} \mathbb E(|N^{\pm} _{G_i} (x) \cap V(C_i)|) \geq (\gamma _1 - 2c)n p \geq \beta _1 m'. \end{align} Let $B_i:= V_{0,i} \cap B'$ and $\bar{B}_i:=V_{0,i}\backslash B'$. Since $|N^{\pm} _{H^+_1\cup H^-_1} (y) \cap V_{0,i}| \leq 8 \gamma _1 |V_{0,i}|$ for every $y \in V(C_i)$ (by definition of $H^+_1$ and $H^-_1$) we have that \begin{align}\label{ctov} \mathbb E (|N^{\pm} _{G_i} (y) \cap \bar{B}_i| ) \leq 8\gamma _1 |V_{0,i}| p \stackrel{(\ref{v0})}{\leq} 32 c \beta _1 m'. \end{align} Applying the Chernoff bound in Proposition~\ref{chernoff} (for the binomial distribution) for each~$i$ and summing up the error probabilities for all~$i$ we see that with nonzero probability the following properties hold: \begin{itemize} \item (\ref{job}) implies that $|N^{\pm} _{G_i} (x) \cap V(C_i)| \geq (1-\sqrt{c})\beta _1 m' $ for every $x \in \bar{B}_i$. \item (\ref{ctov}) implies that $|N^{\pm} _{G_i} (y) \cap \bar{B}_i| \leq \sqrt{c}\beta_1 m'/2 $ for every $y \in V(C_i)$. \end{itemize} \COMMENT{Given a vertex $x \in V(G) \backslash B'$ and $i \in \mathcal L_x$ let $Z:= |N^+ _{G_i} (x) \cap V(C_i)|$. So $\mathbb P (Z\geq (1- \varepsilon )\beta _1 m' ) \leq \mathbb P (|Z-\mathbb E(Z)| \geq \varepsilon \mathbb E(Z))\leq 2 e^{-\frac{\varepsilon^2}{3 }\beta_1 m'}.$ Do this for every $x \in V(G) \backslash B'$ and $i \in \mathcal L_x$. So we use Chernoff at most $nr$ times for these cases. (We also repeat for inneighbourhoods.) Note $4nr e^{-\frac{\varepsilon ^2}{3} \beta _1 m'} \ll 1$, as desired. 
Given any $i$ and $y \in V(C_i)$ if $|N^{-} _{H^+ _1} (y)\cap \bar{B}_i| \leq \sqrt{c} \beta _1 m'/2$ then clearly we don't need to apply Chernoff. (Likewise if $|N^{-} _{H^+ _1} (y)\cap \bar{B}_i| \leq \sqrt{c} \beta _1 m'/2$.) Otherwise we apply Chernoff: Let $Z:=|N^- _{G_i} (y) \cap \bar{B}_i|$. So $\mathbb P( Z\geq (1+\varepsilon) 32c \beta _1 m') \leq \mathbb P (|Z-\mathbb E(Z)| \geq \varepsilon \mathbb E (Z)) \leq 2 e^{-\frac{\varepsilon^2}{6} \sqrt{c}\beta _1 m'}$. We apply Chernoff for each such $y \in V(C_i)$ in each $G_i$. We also act analogously for $|N^+ _{G_i} (y) \cap \bar{B}_i|$. Notice that $4 nr e^{-\frac{\varepsilon^2}{6} \sqrt{c}\beta _1 m'}\ll 1$, so we have that whp our two conditions are satified.} For each~$i$ we delete all the edges with both endpoints in~$V_{0,i}$ from~$G_i$.% \COMMENT{Have to do this already now since otherwise $(\ref{freedeg})$ and $(\ref{freedeg2})$ might not hold} Having dealt with the vertices in $V(G)\setminus B'$, let us now consider any $x\in B'$. We call each edge of $G$ with startpoint $x$ \emph{free} if it does not lie in any of $C_i$, $H^+_1,H^-_1,H_2,\dots,H_5$ (for all $i=1,\dots,r$) and if the endpoint is not in $B'$. Note that $$ |B'| \frac{\gamma _1 n}{2 \beta_1 m'} \leq \sum ^{r} _{i=1} |V_{0,i}| \stackrel{(\ref{v0})}{\leq} 2c r n \stackrel{(\ref{rdef})}{\leq} cn \frac{L}{\beta},$$ and so $|B'| \leq \frac{2 c n}{\gamma _1}.$ So the number of free edges sent out by $x$ is at least \begin{align}\label{eq:freeedges} & ({1/2}-\eta _2)n -(\beta _1 + \varepsilon^{1/3})m' (r-L_x)- 4\gamma _1 n -2\gamma _2 n -3\gamma _3 n - 2\gamma _4 n-3\gamma_5 n-|B'| \nonumber\\ & \stackrel{(\ref{rdef})}{\geq}({1/2}-\eta _2)n -(\beta + \varepsilon^{1/3})m'(1- \gamma )\frac{L}{2 \beta} + L_x \beta _1 m'-4\gamma _5 n -\frac{2 c n}{\gamma _1} \nonumber\\ & \stackrel{(\ref{hier})}{\geq} ({1/2}-\eta _2)n- \left( \frac{ \varepsilon ^{1/3} n}{2 \beta}+\frac{n}{2} \right) + \frac{\gamma n}{4}+ L_x \beta _1 m' -5 \gamma _5 n \stackrel{(\ref{hier})}{\geq} L_x \beta _1 m'. \end{align} We consider $L_x \beta _1 m'$ of these free edges sent out by $x$ and distribute them randomly amongst all the $G_i$ with $i \in \mathcal L_x$. More precisely, each such edge is assigned to $G_i$ with probability $1/L_x$ (for each $i \in \mathcal L_x$). So for each $i \in \mathcal L_x$, \begin{align}\label{freedeg} \mathbb E (d^+ _{G_i} (x))= \beta_1 m' \end{align} and \begin{align}\label{freedeg2} \mathbb E (d^+ _{G_i[V_{0,i}]} (x))\leq |V_{0,i}| \frac{1}{L_x} \stackrel{(\ref{v0})}{\leq} 2cn\left( \frac{2 \beta_1 m'}{\gamma_1 n}\right) =\frac{4c \beta_1 m'}{\gamma_1} \ll \sqrt{c} \beta _1 m'/4. \end{align} We can introduce an analogous definition of a free edge at $x$ but for edges whose endpoint is $x$. As above we randomly distribute $L_x \beta _1 m'$ such edges amongst all the $G_i$ with $i \in \mathcal L_x$. Thus for each $i \in \mathcal L_x$, \begin{align}\label{freein} \mathbb E (d^- _{G_i} (x))= \beta _1 m' \text{ \ \ and \ \ } \mathbb E (d^- _{G_i[V_{0,i}]} (x)) \ll \sqrt{c} \beta _1 m'/4. \end{align} We proceed similarly for all vertices in $B'$, with the random choices being independent for different vertices $x\in B'$. (Note that every edge of $G$ is free with respect to at most one vertex in~$B'$.) 
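The random distribution just described can be summarised as follows (a toy illustration of the concentration behind (\ref{freedeg}), not part of the argument): each of the $L_x\beta_1 m'$ selected free out-edges of $x$ is assigned to a uniformly random $G_i$ with $i\in\mathcal L_x$, so every such $G_i$ receives close to $\beta_1 m'$ of them. The values of $L_x$ and $\tau=\beta_1 m'$ in the Python sketch below are toy numbers.
\begin{verbatim}
# Illustrative sketch only (toy numbers): distribute L_x * tau free out-edges of
# a vertex x in B' among the graphs G_i with i in L_x, each edge independently
# assigned to a uniformly random index; the resulting degrees concentrate around tau.
import random
from collections import Counter

random.seed(1)
L_x, tau = 40, 500                              # toy values for |L_x| and beta_1 * m'
indices = range(L_x)

assigned = Counter(random.choice(indices) for _ in range(L_x * tau))
degrees = [assigned[i] for i in indices]
print(min(degrees), max(degrees), sum(degrees) // L_x)   # all close to tau = 500
\end{verbatim}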
Then using the lower bound on $L_x$ for all $x\in B'$ we have \begin{align}\label{inC} \mathbb E (|N^{\pm} _{G_i} (y) \cap B_i| ) \leq |V_{0,i}| \frac{2\beta_1 m'}{\gamma_1 n} \stackrel{(\ref{v0})}{\leq} \sqrt{c} \beta _1 m' /4 \end{align} for each $i=1,\dots,r$ and all $y \in V(C_i)$. As before, applying the Chernoff type bound in Proposition~\ref{chernoff} for each $i$ and summing up the failure probabilities over all~$i$ shows that with nonzero probability the following properties hold: \begin{itemize} \item (\ref{freedeg})--(\ref{freein}) imply that $|N^{\pm} _{G_i} (x) \cap V(C_i)|\geq (1- \sqrt{c}) \beta_1 m' $ for each $x \in B_i$. \item (\ref{inC}) implies that $|N^{\pm} _{G_i} (y) \cap B_i|\leq \sqrt{c} \beta_1 m' /2$ for each $y \in V(C_i)$. \end{itemize} \COMMENT{As before we get that whp the 1st condition is satisfied. Second condition is similar to before. Again we split into two cases: If the number of free edges from vertices in $B_i$ to a vertex $y \in V(C_i)$ is $\leq \sqrt{c} \beta _1 m' /2$ then clearly we will have that $|N^- _{G_i}(y) \cap B_i| \leq \sqrt{c} \beta _1 m' /2$. Otherwise we apply Chernoff as before.} Together with the properties of $G_i$ established after choosing the edges at the vertices in $V(G)\setminus B'$ it follows that $|N^{\pm} _{G_i} (x) \cap V(C_i)|\geq (1- \sqrt{c}) \beta _1m' $ for every $x\in V_{0,i}$ and $|N^{\pm} _{G_i} (y) \cap V_{0,i}|\leq \sqrt{c} \beta _1 m' $ for every $y \in V(C_i)$. Furthermore, $G_1,\dots,G_r$ are still edge-disjoint since when dealing with the vertices in $B'$ we only added free edges. By discarding any edges assigned to $G_i$ which lie entirely in $V_{0,i}$ we can ensure that~(i) holds. So altogether (i)--(iii) are satisfied, as desired. \subsection{Randomly splitting the $G_i$}\label{randomsplit} As mentioned in the previous section we will use each of the $G_i$ to piece together roughly $\beta _1 m'$ Hamilton cycles of $G$. We will achieve this by firstly adding some more special edges to each $G_i$ (see Section~\ref{skel}) and then almost decomposing each $G_i$ into $1$-factors. However, in order to use these $1$-factors to create Hamilton cycles we will need to ensure that no $1$-factor contains a $2$-path with start- and endpoint in $V_{0,i}$, and midpoint in $C_i$. Unfortunately $G_i$ might contain such paths. To avoid them, we will `randomly split' each~$G_i$. We start by considering a random partition of each $V \in V(\mathcal F_i)$. Using the Chernoff bound in Proposition~\ref{chernoff} for the hypergeometric distribution one can show that there exists a partition of $V$ into subclusters $V'$ and $V''$ so that the following conditions hold: \begin{itemize} \item $|V'|,|V''|=m'/2$ for each $V \in V(\mathcal F_i)$. \item $|N^{\pm} _{G_i} (x) \cap \mathcal V'| \geq (1/2-\sqrt{c}) \beta _1 m'$ and $|N^{\pm} _{G_i} (x) \cap \mathcal V''| \geq (1/2-\sqrt{c}) \beta _1 m'$ for each $x \in V_{0,i}$. (Here $\mathcal V ':=\bigcup _{V \in V(\mathcal F_i)} V'$ and $\mathcal V '':=\bigcup _{V \in V(\mathcal F_i)} V''$.) \end{itemize} \COMMENT{Consider any $x \in V_{0,i}$ and some $V \in V (\mathcal F_i)$. Set $Z:=|N^+ _{G_i}(x) \cap V|$. If $Z < \varepsilon m'/L$ then we won't apply Chernoff (note at most $\varepsilon m'$ vertices in $G_i$ lie in such clusters $V$). So suppose that $Z \geq \varepsilon m'/L$. Let $Z':=|N^+ _{G_i} (x) \cap V'|$ and $Z''$ similarly. So $\mathbb E (Z')=\mathbb E (Z'')= Z/2 \geq \varepsilon m'/(2L)$. 
Thus $\mathbb P (Z' \leq (1-\varepsilon) Z/2) \leq \mathbb P (|Z'-\mathbb E(Z')|\geq \varepsilon \mathbb E(Z')) \leq 2 e^{- \frac{\varepsilon^2}{3}\mathbb E(Z')} \leq 2 e^{-\frac{\varepsilon ^3}{6L}m'}$ (and similarly for $Z''$). We do this for each pair $x \in V_{0,i}$ and each $V \in V(\mathcal F_i)$. Note that $|V_{0,i}| L \leq 2cnL$ and thus $8cnL e^{-\frac{\varepsilon ^3}{6L}m'} \ll 1$. We deal with inneighbourhoods $N^- _{G_i} (x)$ analogously. So whp we have that $|N^{\pm} _{G_i} (x) \cap \mathcal V'| \geq (1/2- \varepsilon/2)[(1-\sqrt{c})\beta _1 m' - \varepsilon m'] \geq (1/2- \sqrt{c} ) \beta _1 m'$ (and similarly for $\mathcal V''$).} Recall that each edge $(V_{j_1}V_{j_2})_k \in E (\mathcal F_i)$ corresponds to the $(\varepsilon ^{1/3}, \beta _1)$-super-regular pair $S'_{j_1,j_2,k}$. Let $\beta _2 := \beta _1/2$. So \begin{align}\label{beta2} (1/2-\gamma)\beta \stackrel{(\ref{beta1})}{\le} \beta _2 \stackrel{(\ref{beta1})}{\le} \beta /2. \end{align} Apply Lemma~\ref{split}(ii) to obtain a partition $E'_{j_1,j_2,k},E''_{j_1,j_2,k}$ of the edge set of $S'_{j_1,j_2,k}$ so that the following condition holds: \begin{itemize} \item The edges of $E'_{j_1,j_2,k}$ and $E''_{j_1,j_2,k}$ both induce an $(\varepsilon ^{1/4}, \beta _2)$-super-regular pair which spans $S'_{j_1,j_2,k}$. \end{itemize} We now partition $G_i$ into two oriented spanning subgraphs $G'_i$ and $G''_i$ as follows. \begin{itemize} \item The edge set of $G'_i$ is the union of all $E'_{j_1,j_2,k}$ (over all edges $(V_{j_1}V_{j_2})_k$ of $\mathcal F_i$) together with all the edges in $G_i$ from $V_{0,i}$ to $\mathcal V'$, and all edges in~$G_i$ from $\mathcal V''$ to $V_{0,i}$. \item The edge set of $G''_i$ is the union of all $E''_{j_1,j_2,k}$ (over all edges $(V_{j_1}V_{j_2})_k$ of $\mathcal F_i$) together with all the edges in $G_i$ from $V_{0,i}$ to $\mathcal V''$, and all edges in~$G_i$ from $\mathcal V'$ to $V_{0,i}$. \end{itemize} Note that neither $G'_i$ nor $G''_i$ contains the type of $2$-paths we wish to avoid. For each $i=1,\dots,r$ we use Lemma~\ref{split}(ii) to partition the edge set of each $H_{3,i}$ to obtain edge-disjoint oriented spanning subgraphs $H'_{3,i}$ and $H''_{3,i}$ so that the following condition holds: \begin{itemize} \item For each edge $(V_{j_1}V_{j_2})_k$ in $\mathcal F_i$, both $H'_{3,i}$ and $H''_{3,i}$ contain a spanning oriented subgraph of $S' _{j_1, j_2, k}$ which is $(\sqrt{\varepsilon}, \gamma _3 \beta)$-super-regular. Moreover, all edges in $H'_{3,i}$ and $H''_{3,i}$ belong to one of these pairs. \end{itemize} Similarly we partition the edge set of each $H_{5,i}$ to obtain edge-disjoint oriented spanning subgraphs $H'_{5,i}$ and $H''_{5,i}$ so that the following condition holds: \begin{itemize} \item For each edge $(V_{j_1}V_{j_2})_k$ in $\mathcal F_i$, both $H'_{5,i}$ and $H''_{5,i}$ contain a spanning oriented subgraph of $S' _{j_1, j_2, k}$ which is $(\sqrt{\varepsilon}, \gamma _5 \beta)$-super-regular. Moreover, all edges in $H'_{5,i}$ and $H''_{5,i}$ belong to one of these pairs. \end{itemize} We pair $H'_{3,i}$ and $H'_{5,i}$ with $G'_i$ and pair $H''_{3,i}$ and $H''_{5,i}$ with $G''_i$. We now have $2r$ edge-disjoint oriented subgraphs of $G$, namely $G'_1, G''_1, \dots , G'_r,G''_r$. To simplify notation, we relabel these oriented graphs as $G_1, \dots, G_{r'}$ where \begin{align}\label{r'} r':=2r\stackrel{(\ref{rdef})}{=}(1-\gamma)L/\beta. 
\end{align} We similarly relabel the oriented graphs $H'_{3,1}, H''_{3,1}, \dots , H'_{3,r}, H''_{3,r}$ as $ H_{3,1}, \dots , H_{3,r'}$ and relabel $H'_{5,1}, H''_{5,1}, \dots , H'_{5,r}, H''_{5,r}$ as $ H_{5,1}, \dots , H_{5,r'}$ in such a way that $H_{3,i}$ and $H_{5,i}$ are the oriented graphs which we paired with $G_i$. For each $i$ we still use the notation $\mathcal F_i$, $C_i$ and $V_{0,i}$ in the usual way. Now (i) from Section~\ref{sec:incorp} becomes \begin{itemize} \item[(i$'$)] $d^{\pm} _{G_i} (x) \geq(1/2-\sqrt{c} ) \beta _1 m'$ where $x$ has neighbours only in $C_i$, for all $x \in V_{0,i}$, \end{itemize} while (ii) and (iii) remain valid. \subsection{Adding skeleton walks to the $G_i$}\label{skel} Note that all vertices (including the vertices of $V_{0,i}$) in each $G_i$ now have in- and outdegree close to $\beta_2m'$. In Section~\ref{nicefactor} our aim is to find a $\tau$-regular oriented subgraph of $G_i$, where \begin{align}\label{tau} \tau:= (1-\gamma)\beta _2 m'. \end{align} However, this may not be possible: suppose for instance that $V_{0,i}$ consists of a single vertex $x$, $\mathcal F_i$ consists of 2 cycles $C$ and $C'$ and that all outneighbours of $x$ lie on $C$ and all inneighbours lie on $C'$. Then $G_i$ does not even contain a $1$-factor. A similar problem arises if for example $V_{0,i}$ consists of a single vertex $x$, $\mathcal F_i$ consists of a single cycle $C=V_1 \dots V_t$, all outneighbours of $x$ lie in the cluster $V_2$ and all inneighbours in the cluster $V_8$. Note that in both situations, the edges between $V_{0,i}$ and $C_i$ are not `well-distributed' or `balanced'. To overcome this problem, we add further edges to $C_i$ which will `balance out' the edges between $C_i$ and $V_{0,i}$ which we added previously. These edges will be part of the skeleton walks which we define below. To motivate the definition of the skeleton walks it may be helpful to consider the second example above: Suppose that we add an edge $e$ from $V_1$ to $V_9$. Then $G_i$ now has a $1$-factor. In general, we cannot find such an edge, but it will turn out that we can find a collection of 5 edges fulfilling the same purpose. A {\emph{skeleton walk}} $ S$ in $G$ with respect to $G_i$ is a collection of distinct edges $x_1x_2$, $x^-_2x_3$, $x_3^-x_4$, $x_4^-x_5$ and $x_5^-x_1$ of $G$ with the following properties: \begin{itemize} \item $x_1 \in V_{0,i}$ and all vertices in $ V(S) \backslash \{ x_1 \}$ lie in $C_i$. \item Given some $2\le j\le 5$, let $V\in V(\mathcal F_i)$ denote the cluster in~$\mathcal F_i$ containing $x_j$ and let $C$ denote the cycle in $\mathcal F_i$ containing $V$. Then $x_j^- \in V^-$, where $V^- $ is the predecessor of $V$ on $C$. \end{itemize} Note that whenever $\mathcal S$ is a union of edge-disjoint skeleton walks and $V $ is a cluster in $\mathcal F_i$, the number of edges in $\mathcal S$ whose endpoint is in $V$ is the same as the number of edges in $\mathcal S$ whose startpoint is in $V^-$. As indicated above, this `balanced' property will be crucial when finding a $\tau$-regular oriented subgraph of $G_i$ in Section~\ref{nicefactor}. The 2nd, 3rd and 4th edge of each skeleton walk~$S$ with respect to~$G_i$ will lie in the `random' graph~$H_2$ chosen in Section~\ref{applyDRL}. More precisely, each of these three edges will lie in a `slice' $H_{2,i}$ of~$H_2$ assigned to~$G_i$. We will now partition~$H_2$ into these `slices' $H_{2,1},\dots,H_{2,r'}$. 
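
As a small illustrative aside (it is not used anywhere in the proof), the obstruction in the first example above can be checked mechanically: a $1$-factor of an oriented graph is exactly a perfect matching between out-copies and in-copies of its vertex set, so its existence can be tested with any bipartite matching routine. The following Python sketch builds a toy instance of that example (two blown-up cycles consisting of three clusters of size two each, together with one exceptional vertex $x$ whose outneighbours all lie on the first cycle and whose inneighbours all lie on the second), confirms that no $1$-factor exists, and confirms that adding a single `balancing' edge restores one. All names and parameters in the sketch are ad hoc choices made for this aside only.

\begin{verbatim}
# Toy check (not part of the proof): 1-factors as perfect bipartite matchings.
from itertools import product

def has_one_factor(vertices, edges):
    # Kuhn's augmenting-path algorithm: match the out-copy of every vertex
    # (left side) to a distinct in-copy (right side).
    adj = {v: [w for (u, w) in edges if u == v] for v in vertices}
    match = {}                      # in-copy -> out-copy currently matched to it
    def augment(v, seen):
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                if w not in match or augment(match[w], seen):
                    match[w] = v
                    return True
        return False
    return all(augment(v, set()) for v in vertices)

def blown_up_cycle(name, k, size):
    # k clusters of `size` vertices each; all edges go from cluster j to cluster j+1
    clusters = [[f"{name}{j}_{s}" for s in range(size)] for j in range(k)]
    edges = [(u, w) for j in range(k)
             for (u, w) in product(clusters[j], clusters[(j + 1) % k])]
    return clusters, edges

C, eC = blown_up_cycle("C", 3, 2)   # the cycle containing all outneighbours of x
D, eD = blown_up_cycle("D", 3, 2)   # the cycle containing all inneighbours of x
verts = [v for cl in C + D for v in cl] + ["x"]
edges = eC + eD + [("x", v) for v in C[0]] + [(v, "x") for v in D[0]]

print(has_one_factor(verts, edges))   # False: the edges at x are not balanced
edges.append((C[2][0], D[1][0]))      # one balancing edge restores a 1-factor
print(has_one_factor(verts, edges))   # True
\end{verbatim}

The balancing edge starts in the predecessor cluster of the cluster receiving the out-edge of $x$ and ends in the successor cluster of the cluster sending the in-edge to $x$; this mirrors the `balanced' property of skeleton walks noted above.
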
To do this, recall that any edge $(V_{j_1}V_{j_2})_k$ in $R_m$ corresponds to an $\varepsilon$-regular pair of density at least $\gamma _2 \beta$ in $H_2$. Here $V_{j_1}$ and $V_{j_2}$ are viewed as clusters in $R_m$, so $|V_{j_1}|=|V_{j_2}|=m$. Apply Lemma~\ref{split}(i) to each such pair of clusters to find edge-disjoint oriented subgraphs $H_{2,1}, \dots , H_{2,r'}$ of $H_2$ so that for each $H_{2,i}$ all the edges $(V_{j_1}V_{j_2})_k$ in $R_m$ correspond to $[\varepsilon,5\beta \varepsilon /L]$-regular pairs with density at least $(\gamma _2 \beta -2\varepsilon)\beta /L\geq \gamma _2 \beta ^2 /2L$ in $H_{2,i}$. Recall that by~(i$'$) in Section~\ref{randomsplit} each vertex $ x \in V_{0,i}$ has at least $ (1/2-\sqrt{c})\beta _1m' \geq \tau$ outneighbours in $C_i$ and at least $(1/2-\sqrt{c})\beta _1m'$ inneighbours in $C_i$. We pair $\tau$ of these outneighbours $x^+$ with distinct inneighbours $x^-$. For each of these $\tau$ pairs $x^+ , x^-$ we wish to find a skeleton walk with respect to~$G_i$ whose $1$st edge is $xx^+$ and whose $5$th edge is $x^-x$. We denote the union of these $\tau$ pairs $xx^+, x^-x$ of edges over all $x\in V_{0,i}$ by $\mathcal T_i$. In Section~\ref{randomsplit} we partitioned each cluster $V \in V(\mathcal F_i)$ into subclusters $V'$ and~$V''$. We next show how to choose the skeleton walks for all those $G_i$ for which each edge in $G_i$ with startpoint in $V_{0,i}$ has its endpoint in $\mathcal V'$ (and so each edge in $G_i$ with endpoint in $V_{0,i}$ has startpoint in $\mathcal V''$). The other case is similar, one only has to interchange $\mathcal V'$ and $\mathcal V''$. \begin{claim}\label{skelG} We can find a set $\mathcal S_i$ of $\tau |V_{0,i}|$ skeleton walks with respect to~$G_i$, one for each pair of edges in $\mathcal T_i$, such that $\mathcal S_i$ has the following properties: \begin{itemize} \item[(i)] For each skeleton walk in $\mathcal S_i$, its $2$nd, $3$rd and $4$th edge all lie in $H_{2,i}$ and all these edges have their startpoint in $\mathcal V''$ and endpoint in $\mathcal V'$. \item[(ii)] Any two of the skeleton walks in $\mathcal S_i$ are edge-disjoint. \item[(iii)] Every $y \in V(C_i)$ is incident to at most $c^{1/5}\beta _2 m'$ edges belonging to the skeleton walks in $\mathcal S_i$. \end{itemize} \end{claim} Note that $|\mathcal S_i|=|\mathcal T_i|= \tau |V_{0,i}| \le 2c\beta _2 m'n$ by (\ref{v0}) and (\ref{tau}). To find $\mathcal S_i$, we will first find so-called shadow skeleton walks (here the internal edges are edges of $R_m$ instead of $G$). More precisely, a \emph{shadow skeleton walk} $S'$ with respect to $G_i$ is a collection of two edges $x_1x_2$, $x_5^-x_1$ of $G$ and three edges $(X_2^-X_3)_{k_2}$, $(X_3^-X_4)_{k_3}$, $(X_4^-X_5)_{k_4}$ of $R_m$ with the following properties: \begin{itemize} \item $x_1x_2$, $x_5^-x_1$ is a pair in $\mathcal T_i$. \item $x_2 \in X_2$, $x_5^- \in X_5^-$ and each $X_j$ is a vertex of a cycle in $\mathcal F_i$ and $X_j^-$ is the predecessor of $X_j$ on that cycle. \end{itemize} Note that in the second condition we slightly abused the notation: as $X_j$ is a cluster in~$R_m$, it only corresponds to a cluster in~$\mathcal F_i$ (which has size~$m'$ and is a subcluster of the one in~$R_m$). However, in order to simplify our exposition, we will use the same notation for a cluster in~$R_m$ as for the cluster in~$\mathcal F_i$ corresponding to it. We refer to the edge $(X^- _j X _{j+1})_{k_j}$ as the $j$th edge of the shadow skeleton walk $S'$. 
Given a collection $\mathcal S'$ of shadow skeleton walks (with respect to $G_i$) we say an edge of $R_m$ is {\emph{bad}} if it is used at least $B:=c^{1/4} \beta^2 (m')^2/L$ times in $\mathcal S'$, and {\emph{very bad}} if it is used at least $10B$ times in $\mathcal S'$. We say an edge from $V$ to $U$ in $R_m$ is \emph{$(V,+)$-bad} if it is used at least $B$ times as a $2$nd edge in the shadow skeleton walks of $\mathcal S'$. An edge from $W$ to $V$ in $R_m$ is \emph{$(V,-)$-bad} if it is used at least $B$ times as a $4$th edge in the shadow skeleton walks of~$\mathcal S'$. To prove Claim~\ref{skelG} we will first prove the following result. \begin{claim}\label{shadow} We can find a collection $\mathcal S'_i$ of $\tau |V_{0,i}|$ shadow skeleton walks with respect to $G_i$, one for each pair in $\mathcal T_i$, such that the following condition holds: \begin{itemize} \item For each $2 \le j \le 4$, every edge in $R_m$ is used at most $B$ times as a $j$th edge of some shadow skeleton walk in $\mathcal S'_i$. In particular no edge in $R_m$ is very bad. \end{itemize} \end{claim} \removelastskip\penalty55\medskip\noindent{\bf Proof. } Suppose that we have already found $\ell < \tau |V_{0,i}|$ of our desired shadow skeleton walks for $G_i$. Let $xx^+, x^-x$ be a pair in $\mathcal T_i$ for which we have yet to define a shadow skeleton walk. We will now find such a shadow skeleton walk $S'$. Suppose $x^+ \in V^+$ and $x^- \in W^-$, where $V^+ , W^- \in V (\mathcal F_i )$. Let $V$ denote the predecessor of $V^+$ in $\mathcal F_i$ and $W$ the successor of $W^-$ in $\mathcal F_i$. We define $\mathcal V^+$ to consist of all those clusters $U \in V(\mathcal F_i)$ for which there exists an edge from $V$ to $U$ in $R_m$ which is not $(V,+)$-bad. By definition of $G_i$ (condition~(ii) in Section~\ref{sec:incorp}), each $y \in V(C_i)$ has at most $\sqrt{c} \beta _1 m'$ inneighbours in $V_{0,i}$ in $G_i$. So the number of $(V,+)$-bad edges is at most $$ \frac{\sqrt{c}\beta _1 (m')^2}{B}=\frac{\sqrt{c}\beta _1 (m')^2}{c^{1/4} \beta ^2 (m')^2/L} = \frac{c^{1/4} \beta_1 L}{\beta^2} \stackrel{(\ref{beta1})}{\leq } \frac{c^{1/4}L}{\beta}. $$ Together with~(\ref{Rmdeg}) this implies that $$|\mathcal V^+|\geq (1/2 - 4d- c^{1/4})L \geq (1/2 -2c^{1/4})L.$$ Similarly we define $\mathcal W^-$ to consist of all those clusters $U \in V(\mathcal F_i)$ for which there exists an edge from $U$ to $W$ in $R_m$ which is not $(W,-)$-bad. Again, $|\mathcal W^-|\geq (1/2-2c^{1/4})L$. Let $\mathcal V$ denote the set of those clusters which are the predecessors in $\mathcal F_i$ of a cluster in $\mathcal V^+$. Similarly let $\mathcal W$ denote the set of those clusters which are the successors in $\mathcal F_i$ of a cluster in $\mathcal W^-$. So $|\mathcal V|=|\mathcal V^+|$ and $|\mathcal W| = |\mathcal W^-|$. By Lemma~\ref{keevashmult}(i) applied with $X=V(R_m)$ there exist at least $L^2/60 \beta$ edges in $R_m$ from $\mathcal V$ to $\mathcal W$. On the other hand, the number of bad edges is at most $$ \frac{3\tau |V_{0,i}|}{B}\stackrel{(\ref{v0}),(\ref{tau})}{\leq} \frac{6\beta _2 m'cn}{c^{1/4} \beta ^2 (m')^2/L} \leq \frac{7 c^{3/4} \beta_2 L^2}{\beta^2 } \stackrel{(\ref{beta2})}{\leq} \frac{7c^{3/4}L^2}{\beta}. $$ So we can choose an edge $(XY)_k$ from $\mathcal V$ to $\mathcal W$ in $R_m$ which is not bad. Let $X^+$ denote the successor of $X$ in $\mathcal F_i$ and $Y^-$ the predecessor of $Y$ in $\mathcal F_i$.
Thus $X^+ \in \mathcal V^+$ and $Y^- \in \mathcal W^-$ and so there is an edge $(VX^+)_{k'}$ in $R_m$ which is not $(V,+)$-bad and an edge $(Y^-W)_{k''}$ which is not $(W,-)$-bad. Let $S'$ be the shadow skeleton walk consisting of the edges $xx^+$, $(VX^+)_{k'}$, $(XY)_k$, $(Y^-W)_{k''}$, and $x^-x$. Then we can add $S'$ to our collection of $\ell$ skeleton walks that we have found already. \noproof\bigskip We now use Claim~\ref{shadow} to prove Claim~\ref{skelG}. {\removelastskip\penalty55\medskip\noindent{\bf Proof of Claim~\ref{skelG}.}} We apply Claim~\ref{shadow} to obtain a collection $\mathcal S'_i$ of shadow skeleton walks. We will replace each edge of $R_m$ in these shadow skeleton walks with a distinct edge of $H_{2,i}$ to obtain our desired collection $\mathcal S_i$ of skeleton walks. Recall that each edge $(VW)_k$ in $R_m$ corresponds to an $[\varepsilon, 5\varepsilon\beta /L]$-regular pair of density at least $\gamma _2 \beta ^2 /2L$ in $H_{2,i}$. Thus in $H_{2,i}$ the edges from $V''$ to $W'$ induce a $[3\varepsilon, 10\varepsilon\beta /L]$-regular pair% \COMMENT{3 instead of 2 since we first go to subclusters of size $m'$ and then partition each of these into two} of density $d_1 \geq \gamma _2 \beta ^2 /3L$. (Here $V',V''$ and $W',W''$ are the partitions of $V$ and $W$ chosen in Section~\ref{randomsplit}.) Let $d_0:=80B/(m'/2)^2$ and note that $d_0 \le d_1$. So we can now apply Lemma~\ref{boundmax} to $(V'',W')_{H_{2,i}}$ to obtain a subgraph $H'_{2,i}[V'',W']$ with maximum degree at most $d_0 m'/2 $ and at least $d_0 (m'/2)^2/8 = 10B $ edges. We do this for all those edges in $R_m$ which are used in a shadow skeleton walk in $\mathcal S'_i$. Since no edge in $R_m$ is very bad, for each $S' \in \mathcal S'_i$ we can replace an edge $(VW)_k$ in $S'$ with a distinct edge $e$ from $V''$ to $W'$ lying in $H'_{2,i}[V'',W']$. Thus we obtain a collection $\mathcal S_i$ of skeleton walks which satisfy properties (i) and (ii) of Claim~\ref{skelG}. Note that by the construction of $\mathcal S_i$ every vertex $y \in V(C_i)$ is incident to at most $d_0 m'L/(2\beta ) \ll c^{1/5} \beta _2 m'/2$ edges which play the role of a $2$nd, $3$rd or $4$th edge in a skeleton walk in $\mathcal S_i$. Condition~(ii) in Section~\ref{sec:incorp} implies that $y$ is incident to at most $2 \sqrt{c}\beta _1 m'$ edges which play the role of a 1st or 5th edge in a skeleton walk in $\mathcal S_i$. So in total $y$ is incident to at most $c^{1/5} \beta _2 m'/2 +2 \sqrt{c}\beta _1 m' \leq c^{1/5} \beta _2 m'$ edges of the skeleton walks in $\mathcal S_i$. Hence (iii) and thus the entire claim is satisfied. \noproof\bigskip We now add the edges of the skeleton walks in $\mathcal S_i$ to $G_i$. Moreover, for each $x\in V_{0,i}$ we delete all those edges at~$x$ which do not lie in a skeleton walk in~$\mathcal S_i$. \subsection{Almost decomposing the $G_i$ into $1$-factors}\label{nicefactor} Our aim in this section is to find a suitable collection of 1-factors in each~$G_i$ which together cover almost all the edges of~$G_i$. In order to do this, we first choose a $\tau$-regular spanning oriented subgraph $G^*_i$ of $G_i$ and then apply Lemma~\ref{1factororiented} to~$G^*_i$. We will refer to all those edges in $G_i$ which lie in a skeleton walk in $\mathcal S_i$ as \emph{red}, and all other edges in $G_i$ as {\emph{white}}. Given $V \in V(\mathcal F_i)$ and $x \in V$, we denote by $N^+ _{w} (x) $ the set of all those vertices which receive a white edge from~$x$ in~$G_i$. 
Similarly we denote by $N^- _{w} (x) $ the set of all those vertices which send out a white edge to $x$ in $G_i$. So $N^{+} _w (x)\subseteq V^+$ and $N^- _w (x) \subseteq V^-$, where $V^+$ and $V^-$ are the successor and the predecessor of $V$ in $\mathcal F_i$. Note that $G_i$ has the following properties: \begin{itemize} \item[$(\alpha _1)$] $d^{\pm} _{G_i} (x) = \tau $ for each $x \in V_{0,i}$. Moreover, $x$ does not have any in- or outneighbours in $V_{0,i}$. \item[$(\alpha _2)$] Every path in~$G_i$ consisting of two red edges has its midpoint in~$V_{0,i}$. \item[$(\alpha _3)$] For each $(V_jV^+ _j)_k\in E(\mathcal F_i)$ the white edges in $G_i$ from $V_j$ to $V^+ _j$ induce a $( \varepsilon ^{1/4}, \beta _2)$-super-regular pair $(V_j , V^+ _j)_{G_i}$. \item[$(\alpha _4)$] Every vertex $u \in V(C_i)$ receives at most $c^{1/5} \beta _2 m'$ red edges and sends out at most $c^{1/5} \beta _2 m' $ red edges in $G_i$. \item[$(\alpha _5)$] In total, the vertices in $G_i$ lying in a cluster $V_j \in V(\mathcal F_i)$ send out the same number of red edges as the vertices in $V^+ _j$ receive. \end{itemize} In order to find our $\tau$-regular spanning oriented subgraph of $G_i$, consider any edge $(V_jV^+ _j)_k \in E(\mathcal F_i)$. Given any $u_\ell \in V_j$, let $x_\ell$ denote the number of red edges sent out by $u_\ell$ in $G_i$. Similarly given any $v_\ell \in V^+ _j$, let $y_\ell$ denote the number of red edges received by $v_\ell$ in $G_i$. By $(\alpha _4)$ we have that $x_\ell,y_\ell \leq c^{1/5} \beta _2 m'$ and by $(\alpha _5)$ we have that $$\sum _{u_\ell \in V_j} x_\ell= \sum _{v_\ell \in V^+ _j} y_\ell.$$ Thus we can apply Lemma~\ref{fandk} to obtain an oriented spanning subgraph of $(V_j , V^+ _j)_{G_i}$ in which each $u_\ell$ has outdegree $\tau-x_\ell$ and each $v_\ell$ has indegree $\tau-y_\ell$. We apply Lemma~\ref{fandk} to each $(V_jV^+ _j)_k \in E(\mathcal F_i)$. The union of all these oriented subgraphs together with the red edges in $G_i$ clearly yield a $\tau$-regular oriented subgraph $G^* _i$ of $G_i$, as desired. We will use the following claim to almost decompose $G^* _i$ into $1$-factors with certain useful properties. \begin{claim}\label{nice1factors} Let $G^*$ be a spanning $\rho$-regular oriented subgraph of $G_i$ where $\rho \geq \gamma \beta _2 m'$. Then $G^*$ contains a $1$-factor $F^*$ with the following properties: \begin{itemize} \item[{\rm(i)}] $F^*$ contains at most $n/(\log n)^{1/5}$ cycles. \item[{\rm(ii)}] For each $V_j \in V(\mathcal F_i)$, $F^*$ contains at most $c' m'$ red edges incident to vertices in $V_j$. \item[{\rm(iii)}] Let $F^* _{red}$ denote the set of vertices which are incident to a red edge in $F^*$. Then $|F^* _{red} \cap N^{\pm} _{H_{3,i}} (x)| \leq 2c'\gamma _3 \beta m'$ for each $x \in V(C_i)$. \item[{\rm(iv)}] $|F^* _{red} \cap N^{\pm} _{w} (x)| \leq 2c' \beta _2 m'$ for each $x \in V(C_i)$. \end{itemize} \end{claim} \removelastskip\penalty55\medskip\noindent{\bf Proof. } A direct application of Lemma~\ref{1factororiented} to $G^*$ proves the claim. Indeed, we apply the lemma with $\theta _1 = (c^{1/5} \beta _2 m')/n $, $\theta _2 = c'$, $\theta _3 =\rho/n \geq (\gamma \beta _2 m')/n$ and with the oriented spanning subgraph of $G^*$ whose edge set consists precisely of the red edges in $G^*$ playing the role of $H$. Furthermore, the clusters in $V(\mathcal F_i)$ together with the sets $N^{\pm} _w (x)$ and $N^{\pm} _{H_{3,i}} (x)$ (for each $x \in V(C_i)$) play the role of the $A_j$. 
\noproof\bigskip Repeatedly applying Claim~\ref{nice1factors} we obtain edge-disjoint $1$-factors $F_{i,1}, \dots , F_{i, \psi}$ of $G_i$ satisfying conditions (i)--(iv) of the claim, where \begin{align}\label{psi} \psi := (1-2\gamma )\beta_2 m'. \end{align} Our aim is now to transform each of the $F_{i,j}$ into a Hamilton cycle using the edges of $H_{3,i}$, $H_4$ and $H_{5,i}$. \subsection{Merging the cycles in $F_{i,j}$ into a bounded number of cycles}\label{4.6} Let $D_1, \dots , D_{\xi}$ denote the cycles in $\mathcal F_i$ and define $V_G(D_k)$ to be the set of vertices in $G_i$ which lie in clusters in the cycle $D_k$. In this subsection, for each $i$ and $j$ we will merge the cycles in $F_{i,j}$ to obtain a $1$-factor $F'_{i,j}$ consisting of at most $\xi$ cycles. Recall from Section~\ref{nicefactor} that we call the edges of $G_i$ which lie on a skeleton walk in $\mathcal S_i$ red and the non-red edges of $G_i$ white. We call the edges of the `random' oriented graph $H_{3,i}$ defined in Section~\ref{applyDRL} \emph{green}. (Recall that $H_{3,i}$ was modified in Section~\ref{randomsplit}.) We will use the edges from $H_{3,i}$ to obtain $1$-factors $F'_{i,1}, \dots , F' _{i , \psi}$ for each $G_i$ with the following properties:% \COMMENT{Previously $(\beta_4)$ was: Let $V \in V(\mathcal F_i)$ and $x \in V$. If $x$ lies on a white edge $xy$ in $F_{i,j}$ then $x$ either lies on the same edge $xy$ in $F'_{i,j}$ or $x$ lies on a green edge $xz \in E(H_{3,i})$ in $F'_{i,j}$. In the latter case $y$ is the endpoint of a green edge lying on the same cycle as $x$ in $F'_{i,j}$.} \begin{itemize} \item[$(\beta _1)$] If $i\not = i'$ or $j \not = j'$ then $F'_{i,j}$ and $F'_{i',j'}$ are edge-disjoint. \item[$(\beta _2)$] For each $V\in V( \mathcal F_i) $ all $x\in V$ which send out a white edge in $F_{i,j}$ lie on the same cycle $C$ in $F'_{i,j}$. \item[$(\beta _3)$] $|E(F'_{i,j})\backslash E(F_{i,j})|\le 6 n/(\log n)^{1/5}$ for all $i$ and $j$. Moreover, $E(F'_{i,j})\backslash E(F_{i,j})$ consists of green and white edges only. \item[$(\beta _4)$] For every edge in $F_{i,j}$ both endvertices lie on the same cycle in~$F'_{i,j}$. \item[$(\beta _5)$] All the red edges in $F_{i,j}$ still lie in $F'_{i,j}$. \end{itemize} Before showing the existence of $1$-factors satisfying ($\beta_1$)--($\beta_5$), we will derive two further properties ($\beta_6$) and ($\beta_7$) from them which we will use in the next subsection. So suppose that $F'_{i,j}$ is a $1$-factor satisfying the above conditions. Consider any cluster $V\in V(\mathcal F_i)$. Claim~\ref{nice1factors}(ii) implies that $F_{i,j}$ contains at most $c ' m'$ red edges with startpoint in $V$. So the cycle $C$ in $F'_{i,j}$ which contains all vertices $x\in V $ sending out a white edge in $F_{i,j}$ must contain at least $(1- c ')m'$ such vertices $x$. In particular there are at least $(1- c ')m'>c 'm'$ vertices $y \in V^+$ which lie on $C$. So some of these vertices $y$ send out a white edge in $F_{i,j}$. But by $(\beta _2)$ this means that $C$ contains all those vertices $y \in V^+$ which send out a white edge in $F_{i,j}$. Repeating this argument shows that $C$ contains all vertices in $V(D_k)$ which send out a white edge in $F_{i,j}$ (here $D_k$ is the cycle on $\mathcal F_i$ that contains $V$). Furthermore, by property $(\beta _4)$, $C$ contains all vertices in $V(D_k)$ which receive a white edge in $F_{i,j}$. 
By property~$(\alpha_2)$ in Section~\ref{nicefactor} no vertex of $C_i$ is both the startpoint of a red edge in $G_i$ and the endpoint of a red edge in $G_i$. This implies that all vertices in $V_G(D_k)$ lie on $C$. Thus if we obtain $1$-factors $F'_{i,1}, \dots , F'_{i,\psi}$ satisfying $(\beta _1)$--$(\beta _5)$ then the following conditions also hold: \begin{itemize} \item[$(\beta _6)$] For each $j=1,\dots,\psi$ and each $k=1,\dots,\xi$ all the vertices in $V_G(D_k)$ lie on the same cycle in $F'_{i,j}$. \item[$(\beta_7)$] For each $V\in V(\mathcal F_i)$ and each $j=1,\dots, \psi$ at most $c'm'$ vertices in~$V$ lie on a red edge in~$F'_{i,j}$. \end{itemize} (Condition~$(\beta_7)$ follows from Claim~\ref{nice1factors}(ii) and the `moreover' part of~$(\beta_3)$.) For every $i$, we will define the 1-factors $F'_{i,1}, \dots , F' _{i , \psi}$ sequentially. Initially, we let $F'_{i,j}=F_{i,j}$. So the $F'_{i,j}$ satisfy all conditions except $(\beta_2)$. Next, we describe how to modify $F'_{i,1}$ so that it also satisfies $(\beta_2)$. Recall from Section~\ref{randomsplit} that for each edge $(VV^+)_k$ of $\mathcal F_i$ the pair $(V,V^+)_{H_{3,i}}$ is $(\sqrt{\varepsilon},\gamma_3\beta)$-super-regular and thus $\delta^{\pm} (H_{3,i}) \geq (\gamma _3 \beta-\sqrt{\varepsilon}) m' \ge \gamma _3 \beta m'/2$. Furthermore, whenever $V \in V(\mathcal F_i)$ and $x \in V$, the outneighbourhood of $x$ in $H_{3,i}$ lies in $V^+$ and the inneighbourhood of $x$ in $H_{3,i}$ lies in $V^-$. Let $H'_{3,i}$ denote the oriented spanning subgraph of $H_{3,i}$ whose edge set consists of those edges $xy$ of $H_{3,i}$ for which $x$ is not a startpoint of a red edge in our current $1$-factor $F'_{i,1}$ and $y$ is not an endpoint of a red edge in $F'_{i,1}$. Consider a white edge $xy$ in $F'_{i,1}$. Claim~\ref{nice1factors}(iii) implies that $x$ sends out at most $2c' \gamma _3 \beta m'$ green edges $xz$ in $H_{3,i}$ which do not lie in $H'_{3,i}$. So $d^+ _{H'_{3,i}}(x) \geq (1/2-2c') \gamma _3 \beta m'$. Similarly, $d^- _{H'_{3,i}}(y) \geq (1/2-2c') \gamma _3 \beta m'$. (However, if $uv$ is a red edge in $F'_{i,1}$ then $d^+ _{H'_{3,i}}(u) =d^- _{H'_{3,i}}(v) =0$.) Thus we have the following properties of $H_{3,i}$ and $H'_{3,i}$: \begin{itemize} \item[($\gamma _1$)] For each $V\in V(\mathcal F_i)$ all the edges in~$H_{3,i}$ sent out by vertices in~$V$ go to~$V^+$. \item[($\gamma _2$)] If $xy$ is a white edge in $F'_{i,1}$ then $ d^+ _{H'_{3,i}}(x) ,d^- _{H'_{3,i}}(y)\geq \gamma _3 \beta m'/3$. \item[($\gamma _3$)] Consider any $V \in V(\mathcal F_i)$. Let $S \subseteq V$ and $T \subseteq V^+$ be such that $|S|,|T|\geq \sqrt{\varepsilon} m' $. Then $e_{H_{3,i}} (S,T) \geq \gamma _3 \beta |S||T|/2$. \end{itemize} If $F'_{i,1}$ does not satisfy ($\beta_2$), then it contains cycles $C\neq C^*$ such that there is a cluster $V \in V(\mathcal F_i)$ and white edges $xy$ on~$C$ and $x^*y^*$ on~$C^*$ with $x,x^* \in V$ and $y,y ^* \in V^+$. We have 3 cases to consider. Firstly, we may have a green edge $xz \in E(H'_{3,i})$ such that $z$ lies on a cycle $C'\neq C$ in $F'_{i,1}$. Then $z \in V^+$ and $z$ is the endpoint of a white edge in $F'_{i,1}$ (by $(\gamma_1)$ and the definition of $H'_{3,i}$). Secondly, there may be a green edge $wy^* \in E(H'_{3,i})$ such that $w$ lies on a cycle $C'\neq C^*$ in $F'_{i,1}$. So here $w \in V$ is the startpoint of a white edge in $F'_{i,1}$. If neither of these cases holds, then $N^+ _{H'_{3,i}} (x)$ lies on $C$ and $N^- _{H'_{3,i}}(y^*)$ lies on $C^*$.
Since $d^+ _{H'_{3,i}} (x) ,d^- _{H'_{3,i}}(y^*)\geq \gamma _3 \beta m'/3$ by~($\gamma _2$), we can use~($\gamma _3$) to find a green edge $x'y'$ from $N^- _{H'_{3,i}}(y^*)$ to $N^+ _{H'_{3,i}} (x)$. Then $x' \in V$, $y' \in V^+$, $x'$ is the startpoint of a white edge in $F'_{i,1}$ and $y'$ is the endpoint of a white edge in $F'_{i,1}$. We will only consider the first of these 3 cases. The other cases can be dealt with analogously: In the second case $w$ plays the role of $x$ and $y^*$ plays the role of $z$. In the third case $x'$ plays the role of $x$ and $y'$ plays the role of $z$. So let us assume that the first case holds, i.e.~there is a green edge $xz \in E(H'_{3,i})$ such that $z$ lies on a cycle $C'\neq C$ in $F'_{i,1}$ and $z$ lies on a white edge $wz$ on $C'$. Let $P$ denote the directed path $(C \cup C' \cup \{xz \} )\backslash \{ xy,wz \}$ from $y\in V^+$ to $w\in V$. Suppose that the endpoint $w$ of $P$ lies on a green edge $wv\in E(H'_{3,i})$ such that $v$ lies outside~$P$. Then $v \in V^+$ is the endpoint of a white edge $uv$ lying on the cycle $C''$ in $F'_{i,1}$ which contains~$v$. We extend $P$ by replacing $P$ and $C''$ with $(P \cup C'' \cup \{wv\}) \backslash \{ uv\}$. We make similar extensions if the startpoint $y$ of $P$ has an inneighbour in $H'_{3,i}$ outside $P$. We repeat this `extension' procedure as long as we can. Let $P$ denote the path obtained in this way, say $P$ joins $a \in V^+$ to $b \in V$. Note that $a$ must be the endpoint of a white edge in $F'_{i,1}$ and $b$ the startpoint of a white edge in $F'_{i,1}$. We will now apply a `rotation' procedure to close $P$ into a cycle. By ($\gamma _2$) $a$ has at least $\gamma _3 \beta m' /3$ inneighbours in $H'_{3,i}$ and $b$ has at least $\gamma _3 \beta m' /3$ outneighbours in $H'_{3,i}$ and all these in- and outneighbours lie on $P$ since we could not extend $P$ any further. Let $X:=N^- _{H'_{3,i}}(a)$ and $Y:=N^+ _{H'_{3,i}}(b)$. So $|X|,|Y| \geq \gamma_3 \beta m' /3$ and $X \subseteq V$ and $Y \subseteq V^+$ by~$(\gamma_1)$. Moreover, whenever $c\in X$ and $c^+$ is the successor of~$c$ on~$P$, then either $cc^+$ was a white edge in $F'_{i,1}$ or $cc^+\in E(H'_{3,i})$. Thus in both cases $c^+\in V^+$. So the set $X^+$ of successors in~$P$ of all the vertices in~$X$ lies in~$V^+$ and no vertex in~$X$ sends out a red edge in~$P$. Similarly one can show that the set $Y^-$ of predecessors in~$P$ of all the vertices in~$Y$ lies in~$V$ and no vertex in~$Y$ receives a red edge in~$P$. Together with~($\gamma _3$) this shows that we can apply Lemma~\ref{rotationlemma} with $P\cup H_{3,i}$ playing the role of $G$ and $V^+$ playing the role of~$V$ and $V$ playing the role of $U$ to obtain a cycle $\hat{C}$ containing precisely the vertices of $P$ such that $|E(\hat{C})\backslash E(P)|\leq 5$, $E(\hat{C})\backslash E(P)\subseteq E(H_{3,i})$ and such that $E(P)\backslash E(\hat{C})$ consists of edges from $X$ to~$X^+$ and edges from~$Y^-$ to~$Y$. Thus $E(P)\backslash E(\hat{C})$ contains no red edges. Replacing $P$ with $\hat{C}$ gives us a $1$-factor (which we still call $F'_{i,j}$) with fewer cycles. Also note that if the number of cycles is reduced by $\ell$, then we use at most $\ell+5 \le 6\ell$ edges in $H_{3,i}$ to achieve this. So $F'_{i,j}$ still satisfies all requirements with the possible exception of ($\beta_2$). If it still does not satisfy ($\beta_2$), we will repeatedly apply this `rotation-extension' procedure until the current $1$-factor $F'_{i,1}$ also satisfies ($\beta_2$). 
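
As another illustrative aside (not part of the argument), the elementary operation underlying this rotation--extension procedure is easy to state in isolation: view a $1$-factor as a successor map; if $x$ and $w$ lie on different cycles, then deleting the factor edges leaving $x$ and $w$ and re-routing $x$ to the old successor of $w$ and $w$ to the old successor of $x$ joins the two cycles into one. In the argument above typically only one of the two required re-routing edges is available as a green edge of $H'_{3,i}$, which is why a cycle is first absorbed into a path and the path is then closed up again using Lemma~\ref{rotationlemma}. The following short Python sketch, with entirely made-up data, records the basic double swap.

\begin{verbatim}
# Illustrative aside: merging two cycles of a 1-factor by a double swap.
def cycles(succ):
    # list the cycles of the 1-factor given by the successor map `succ`
    seen, out = set(), []
    for v in succ:
        if v not in seen:
            cyc, u = [], v
            while u not in seen:
                seen.add(u)
                cyc.append(u)
                u = succ[u]
            out.append(cyc)
    return out

def merge(succ, x, w):
    # assumes x and w lie on different cycles of `succ`; swapping their
    # successors turns those two cycles into a single one
    new = dict(succ)
    new[x], new[w] = succ[w], succ[x]
    return new

f = {1: 2, 2: 3, 3: 1, 4: 5, 5: 6, 6: 4}            # two triangles
print(len(cycles(f)), len(cycles(merge(f, 1, 4))))  # prints: 2 1
\end{verbatim}
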
However, we need to be careful since we do not want to use edges of $H_{3,i}$ several times in this process. Simply deleting the edges we use may not work as ($\gamma_2$) might fail later on (when we will repeat the above process for $F'_{i,j}$ with $j>1$). So each time we modify $F'_{i,1}$, we also modify $H_{3,i}$ as follows. All the edges from $H_{3,i}$ which are used in $F'_{i,1}$ are removed from $H_{3,i}$. All the edges which are removed from $F'_{i,1}$ in the rotation-extension procedure are added to $H_{3,i}$. (Note that by~$(\beta_5)$ we never add red edges to $H_{3,i}$.) When we refer to $H_{3,i}$, we always mean the `current' version of $H_{3,i}$, not the original one. Furthermore, at every step we still refer to an edge of $H_{3,i}$ as green, even if initially the edge did not lie in $H_{3,i}$. Similarly at every step we refer to the non-red edges of our current $1$-factor as white, even if initially they belonged to $H_{3,i}$. Note that if we added a green edge $xz$ into $F'_{i,1}$, then $x$ lost an outneighbour in $H_{3,i}$, namely $z$. However, $(\beta _5)$ implies that we also moved some (white) edge $xy$ of $F'_{i,1}$ to $H_{3,i}$, where $y$ lies in the same cluster $V^+ \in V(\mathcal F_i)$ as $z$ (here $x\in V$). So we still have that $\delta^{+} (H_{3,i}) \geq \gamma _3 \beta m'/3$. Similarly, at any stage $\delta^{-} (H_{3,i}) \geq \gamma _3 \beta m'/3$. When $H_{3,i}$ is modified, then $H'_{3,i}$ is modified accordingly. This will occur if we add some white edges to $H_{3,i}$ whose start or endpoint lies on a red edge in $F'_{i,1}$. However, Claim~\ref{nice1factors}(iv) implies that at any stage we still have $$ d^+_{H'_{3,i}}(x) ,d^-_{H'_{3,i}}(y)\geq (1/2-2c')\gamma _3 \beta m'- 2c' \beta _2 m'\geq\gamma _3\beta m'/3. $$ Also note that by ($\beta_3$), the modified version of $H_{3,i}$ still satisfies \begin{equation} \label{h3i} e_{H_{3,i}} (S,T) \geq (\gamma _3 \beta-\sqrt{\varepsilon}) |S||T| -6n/(\log n)^{1/5} \geq \gamma _3 \beta |S||T|/2. \end{equation} So $H_{3,i}$ and $H'_{3,i}$ will satisfy ($\gamma_1$)--($\gamma_3$) throughout and thus the above argument still works. So after at most $n/(\log n)^{1/5}$ steps $F'_{i,1}$ will also satisfy ($\beta_2$). Suppose that for some $1< j \le \psi$ we have found $1$-factors $F'_{i,1}, \dots , F'_{i, j-1}$ satisfying $(\beta _1)$--$(\beta _5)$. We can now carry out the rotation-extension procedure for $F'_{i,j}$ in the same way as for $F'_{i,1}$ until $F'_{i,j}$ also satisfies ($\beta_2$). In the construction of $F'_{i,j}$, we do not use the original $H_{3,i}$, but the modified version obtained in the construction of $F'_{i,j-1}$. We then introduce the oriented spanning subgraph $H'_{3,i}$ of $H_{3,i}$ similarly as before (but with respect to the current $1$-factor $F'_{i,j}$). Then all the above bounds on these graphs still hold, except that in the middle expression of~(\ref{h3i}) we multiply the term $6n/(\log n)^{1/5}$ by $j$ to account for the total number of edges removed from $H_{3,i}$ so far. But this does not affect the next inequality. So eventually, all the $F'_{i,j}$ will satisfy ($\beta_1$)--($\beta_5$). \subsection{Merging the cycles in $F'_{i,j}$ to obtain Hamilton cycles}\label{merging} Our final aim is to piece together the cycles in $F'_{i,j}$, for each $i$ and $j$, to obtain edge-disjoint Hamilton cycles of $G$. 
Since we have $\psi$ $1$-factors $F'_{i,1}, \dots , F'_{i,\psi}$ for each $G_i$, in total we will find \begin{eqnarray*} \psi r' & \stackrel{(\ref{r'}),(\ref{psi})}{=} & (1-2\gamma)\beta _2 m'(1-\gamma)L/\beta \stackrel{(\ref{beta2})}{\geq}(1-2\gamma)(1-\gamma)(1/2-\gamma)m' L\\ & \stackrel{(\ref{hier})}{\geq } &(1/2- \eta _1 )n \end{eqnarray*} edge-disjoint Hamilton cycles of $G$, as desired. Recall that $R'$ was defined in Section~\ref{applyDRL}. Given any $i$, apply Lemma~\ref{shiftedwalk} to obtain a closed shifted walk $$W_i= U^+ _1 D'_1 U_1 U_2 ^+ D'_2 U_2 \dots U^+ _{s-1} D'_{s-1}U_{s-1} U^+ _s D' _s U_s U^+ _1$$ in $R'$ with respect to~$\mathcal F_i$ such that each cycle in $\mathcal F_i$ is traversed at most $3L$ times. So $\{D'_1,\dots,D'_s\}$ is the set of all cycles in $\mathcal F_i$, $U^+ _k$ is the successor of $U_k$ on $D'_{k}$ and $U_k U_{k+1} ^+\in E(R')$ for each $k=1,\dots, s$ (where $U_{s+1}:=U_1$). Moreover, \begin{align}\label{s} s \leq 3L^2. \end{align} For each $1$-factor $F'_{i,j}$ we will now use the edges of $H_4$ and $H_{5,i}$ to obtain a Hamilton cycle $C_{i,j}$ with the following properties: \begin{itemize} \item[(i)] If $i \not = i'$ or $j \not = j'$ then $C_{i,j}$ and $C_{i',j'}$ are edge-disjoint. \item[(ii)] $E(C_{i,j})$ consists of edges from $F'_{i,j}$, $H_4$ and $H_{5,i}$ only. \item[(iii)] There are at most $3L^2$ edges from $H_4$ lying in $C_{i,j}$. \item[(iv)] There are at most $3L^2+5$ edges from $H_{5,i}$ lying in $C_{i,j}$. \end{itemize} For each $j$, we will use $W_{i}$ to `guide' us how to merge the cycles in $F'_{i,j}$ into the Hamilton cycle $C_{i,j}$. Suppose that we have already defined $\ell < \psi r'$ of the Hamilton cycles $C_{i',j'}$ satisfying (i)--(iv), but have yet to define $C_{i,j}$. We remove all those edges which have been used in these $\ell$ Hamilton cycles from both~$H_4$ and $H_{5,i}$. For each $V \in V(\mathcal F_i)$, we denote by $V_w$ the subcluster of $V$ containing all those vertices which do not lie on a red edge in $F'_{i,j}$. We refer to $V_w$ as the \emph{white subcluster of $V$}. Thus $|V_w|\geq (1-c')m'$ by property~$(\beta_7)$ in Section~\ref{4.6}. Note that the outneighbours of the vertices in $V_w$ on $F'_{i,j}$ all lie in $V^+$ while their inneighbours lie in $V^-$. For each $k=1,\dots,s$ we will denote the white subcluster of a cluster $U_k$ by $U_{k,w}$. We use similar notation for $U^+ _k$ and $U^- _k$. Consider any $UV\in E(R')$. Recall that $U$ and $V$ are viewed as clusters of size $m$ in $R'$, but when considering $\mathcal F_i$ we are in fact considering subclusters of $U$ and $V$ of size $m'$. When viewed as clusters in $R'$, $UV$ initially corresponded to an $\varepsilon$-regular pair of density at least $\gamma_4 d'$ in $H_4$. Thus when viewed as clusters in $\mathcal F_i$, $UV$ initially corresponded to a $2\varepsilon$-regular pair of density at least $\gamma _4 d'/2$ in $H_4$. Moreover, initially the edges from $U_w$ to $V_w$ in $H_4$ induce a $3\varepsilon$-regular pair of density at least $\gamma _4 d'/3$. However, we have removed all the edges lying in the $\ell$ Hamilton cycles $C_{i',j'}$ which we have defined already. Property~(iii) implies that we have removed at most $3L^2 \ell\leq 3L^2n$ edges from $H_4$. Thus we have the following property: \begin{itemize} \item[$(\delta _1)$] Given any $UV \in E(R')$, let $S\subseteq U_w$, $T\subseteq V_w$ be such that $|S|,|T|\geq 3\varepsilon m'$. Then $e_{H_4} (S,T) \geq \gamma _4 d'|S||T|/4$. 
\end{itemize} When constructing $C_{i,j}$ we will remove at most $3L^2$ more edges from $H_4$. But since $(\delta _1)$ is far from being tight, it will hold throughout the argument below. Similarly, the initial definition of $H_{5,i}$ (c.f.~Section~\ref{randomsplit}) and~(iv) together imply the following property: \begin{itemize} \item[$(\delta _2)$] Consider any edge $V V^+\in E(\mathcal F_i)$. Let $S \subseteq V$ and $T \subseteq V^+$ be such that $|S|,|T|\geq \sqrt{\varepsilon} m'$. Then $e_{H_{5,i}} (S,T) \geq \gamma _5 \beta |S||T|/2$. \end{itemize} We now construct $C_{i,j}$ from $F'_{i,j}$. Condition~$(\beta _6)$ in Section~\ref{4.6} implies that, for each $k=1,\dots,s$, every vertex in $V_G(D'_k)$ lies on the same cycle, $C'_k$ say, in $F'_{i,j}$. Let $x_1 \in U_{1,w}$ be such that $x_1$ has at least $\gamma _4 d'|U^+_{2,w}|/4\ge \gamma _4 d' m'/5$ outneighbours in $H_4$ which lie in $U_{2,w} ^+$. By~$(\delta _1)$ all but at most $3\varepsilon m'$ vertices in $U_{1,w}$ have this property. Note that the outneighbour in $F'_{i,j}$ of any such vertex lies in $U^+ _1$. However, by~$(\delta _2)$ all but at most $\sqrt{\varepsilon }m'$ vertices in $U^+ _1$ have at least $\gamma _5 \beta |U_{1,w}|/2\ge \gamma _5 \beta m'/3$ inneighbours in $H_{5,i}$ which lie in $U_{1,w}$. Thus we can choose $x_1$ with the additional property that its outneighbour $y_1\in U^+_1$ in $F'_{i,j}$ has at least $\gamma _5 \beta m'/3$ inneighbours in $H_{5,i}$ which lie in~$U_{1,w}$. Let $P$ denote the directed path $C'_1-x_1 y_1$ from~$y_1$ to $x_1$. We now have two cases to consider. \medskip \noindent {\bf{Case 1.}} $C'_1 \not = C'_2$. \smallskip \noindent Note that $x_1$ has at least $\gamma _4 d' m'/5-c'm'\gg \gamma _4 d' m'/6$ outneighbours $y'_2\in U_{2,w} ^+$ in $ H_4$ such that the inneighbour of $y'_2$ in~$F'_{i,j}$ lies in $U_{2,w}$. However, by~$(\delta_1)$ all but at most $3\varepsilon m'$ vertices in $U_{2,w}$ have at least $\gamma _4 d' m'/5$ outneighbours in $H_4$ which lie in $U^+ _{3,w}$. Thus we can choose an outneighbour $y'_2\in U_{2,w} ^+$ of $x_1$ in $H_4$ such that the inneighbour $x'_2$ of $y'_2$ in~$F'_{i,j}$ lies in $U_{2,w}$ and $x'_2$ has at least $\gamma _4 d' m'/5$ outneighbours in $H_4$ which lie in $U^+ _{3,w}$. We extend $P$ by replacing it with $(P \cup C'_2 \cup \{ x_1 y'_2\}) \backslash \{x'_2 y'_2\}$. \medskip \noindent {\bf{Case 2.}} $C'_1 = C'_2$. \smallskip \noindent In this case the vertices in $V_G(D'_2)$ already lie on $P$. We will use the following claim to modify $P$. \begin{claim}\label{rotateclaim} There is a vertex $y_2 \in U^+ _{2,w}$ such that: \begin{itemize} \item $x_1 y_2 \in E(H_4)$. \item The predecessor $x_2$ of $y_2$ on $P$ lies in $U_{2,w}$. \item There is an edge $x_2 y'_2$ in $H_{5,i}$ such that $y'_2 \in U^+ _{2,w}$ and $y_2$ precedes $y'_2$ on $P$ (but need not be its immediate predecessor). \item The predecessor $x'_2$ of $y'_2$ on $P$ lies in $U_{2,w}$. \item $x'_2$ has at least $\gamma _4 d' m'/5$ outneighbours in $H_4$ which lie in $U^+ _{3,w}$. \end{itemize} \end{claim} \begin{figure}[htb!] \begin{center}\footnotesize \psfrag{1}[][]{\normalsize $y_1$} \psfrag{2}[][]{\normalsize $x_{2}$} \psfrag{3}[][]{\normalsize $y_{2}$} \psfrag{4}[][]{\normalsize $x' _{2}$} \psfrag{5}[][]{\normalsize $y' _{2}$} \psfrag{6}[][]{\normalsize $x_{1}$} \includegraphics[width=0.65\columnwidth]{rotation2.eps} \caption{The modified path $P$ in Case~2} % \end{center} \end{figure} \removelastskip\penalty55\medskip\noindent{\bf Proof. 
} Since $x_1 $ has at least $\gamma _4 d' m'/5$ outneighbours in $H_4$ which lie in $U^+ _{2,w}$, at least $\gamma _4 d'm' /5- c' m'-3\varepsilon m' \geq \gamma _4 d'm'/6$ of these outneighbours $y$ are such that the predecessor $x$ of $y$ on $P$ lies in $U_{2,w}$ and at least $\gamma _4 d'm' /5$ outneighbours of $x$ in $H_4$ lie in $U^+ _{3,w}$. This follows since all such vertices $y$ have their predecessor on $P$ lying in $U_2$ (since $y \in U^+ _{2,w}$), since $|U_{2,w}| \geq (1- c' )m'$ and since by~$(\delta_1)$ all but at most $3\varepsilon m'$ vertices in $U_{2,w}$ have at least $\gamma _4 d'm' /5$ outneighbours in $U^+ _{3,w}$. Let $Y_2$ denote the set of all such vertices $y$, and let $X_2$ denote the set of all such vertices $x$. So $|X_2|=|Y_2|\geq \gamma _4 d' m' /6$, $X_2 \subseteq U_{2,w}$, $Y_2 \subseteq U^+ _{2,w} \cap N^+ _{H_4} (x_1)$. Let $X^* _2 $ denote the set of the first $\gamma _4 d' m' /12$ vertices in $X_2$ on $P$ and $Y^* _2$ the set of the last $\gamma _4 d' m'/12$ vertices in $Y_2$ on $P$. Then~$(\delta_2)$ implies the existence of an edge $x_2y'_2$ from $X^*_2$ to $Y^*_2$ in~$H_{5,i}$. Then the successor $y_2$ of $x_2$ on $P$ satisfies the claim. \noproof\bigskip Let $x_2 , y_2, x'_2$ and $y'_2$ be as in Claim~\ref{rotateclaim}. We modify $P$ by replacing $P$ with $$(P\cup\{ x_1 y_2, x_2 y'_2\}) \backslash \{ x_2 y_2 ,x'_2 y'_2\}$$ (see Figure~2).% In either of the above cases we obtain a path $P$ from $y_1$ to some vertex $x'_2 \in U_{2,w}$ which has at least $\gamma _4 d' m'/5$ outneighbours in $H_4$ lying in~$U^+ _{3,w}$. We can repeat the above process: If $C'_3 \not = C'_1,C'_2$ then we extend $P$ as in Case~1. If $C'_3 =C'_1$ or $C'_3 =C'_2$ then we modify $P$ as in Case~2. In both cases we obtain a new path $P$ which starts in $y_1$ and ends in some $x'_3 \in U_{3,w}$ that has at least $\gamma _4 d' m'/5$ outneighbours in $H_4$ lying in $U^+ _{4,w}$. We can continue this process, for each $C'_k$ in turn, until we obtain a path $P$ which contains all the vertices in $C'_1, \dots , C'_s$ (and thus all the vertices in $G$), starts in $y_1$ and ends in some $x'_s \in U_{s,w}$ having at least $\gamma _4 d' m'/5$ outneighbours in $H_4$ which lie in $U^+ _{1,w}$. \begin{claim}\label{rotateclaim2} There is a vertex $y'_1 \in U^+ _{1}\setminus\{y_1\}$ such that: \begin{itemize} \item $x'_s y'_1 \in E(H_4)$. \item The predecessor $x'_1$ of $y'_1$ on $P$ lies in $U_{1,w}$. \item There is an edge $x'_1 y''_1$ in $H_{5,i}$ such that $y''_1 \in U^+ _{1,w}$ and $y'_1$ precedes $y''_1$ on $P$. \item The predecessor $x''_1$ of $y''_1$ on $P$ lies in $U_{1,w}$. \item $x''_1$ has at least $\gamma _5\beta m'/3$ outneighbours in $H_{5,i}$ which lie in $U^+ _{1,w}$. \end{itemize} \end{claim} \removelastskip\penalty55\medskip\noindent{\bf Proof. } The proof is almost identical to that of Claim~\ref{rotateclaim} except that we apply $(\delta_2)$ to ensure that $x''_1$ has at least $\gamma _5\beta m'/3$ outneighbours in $H_{5,i}$ which lie in $U^+ _{1,w}$. \noproof\bigskip Let $x'_1,y'_1, x''_1$ and $y''_1$ be as in Claim~\ref{rotateclaim2}. We modify $P$ by replacing it with the path $$(P\cup \{x'_sy'_1, x'_1 y''_1\} )\backslash \{x'_1 y'_1 , x''_1 y''_1\}$$ from $y_1$ to $x''_1$. So $P$ is a Hamilton path in~$G$ which is edge-disjoint from the $\ell$ Hamilton cycles $C_{i',j'}$ already defined. In each of the $s$ steps in our construction of $P$ we have added at most one edge from each of $H_4$ and $H_{5,i}$. 
So by~(\ref{s}) $P$ contains at most $3L^2$ edges from $H_4$ and at most $3L^2$ edges from~$H_{5,i}$. All other edges of $P$ lie in $F'_{i,j}$. Recall that $y_1$ has at least $\gamma _5 \beta m'/3$ inneighbours in $H_{5,i}$ which lie in $U_{1,w}$ and $x''_1$ has at least $\gamma _5\beta m'/3$ outneighbours in $H_{5,i}$ which lie in $U^+ _{1,w}$. Thus we can apply Lemma~\ref{rotationlemma} to $P\cup H_{5,i}$ with $U^+_{1}$ playing the role of $V$ and $U_{1}$ playing the role of $U$ to obtain a Hamilton cycle $C_{i,j}$ in $G$ where $|E(C_{i,j})\backslash E(P)|\leq 5$. By construction, $C_{i,j}$ satisfies (i)--(iv). Thus we can indeed find $(1/2-\eta _1)n $ Hamilton cycles in $G$, as desired. \section{Almost decomposing oriented regular graphs with large semidegree} \label{38} In this section, we describe how Theorem~\ref{main} can be extended to `almost regular' oriented graphs whose minimum semidegree is larger than $3n/8$. More precisely, we say that an oriented graph $G$ on $n$ vertices is \emph{$(\alpha \pm \eta )n$-regular} if $\delta^0(G)\ge (\alpha - \eta) n$ and $\Delta^0(G) \le (\alpha + \eta) n$. \begin{thm}\label{main38} For every $ \gamma >0$ there exist $n_0=n_0 (\gamma )$ and $\eta =\eta(\gamma)>0$ such that the following holds. Suppose that $G$ is an $(\alpha \pm \eta)n$-regular oriented graph on $n\geq n_0$ vertices where $3/8+ \gamma \leq \alpha <1/2 $. Then $G$ contains at least $(\alpha -\gamma)n$ edge-disjoint Hamilton cycles. \end{thm} Theorem~\ref{main38} is best possible in the sense that there are almost regular oriented graphs whose semidegrees are all close to $3n/8$ but which do not contain a Hamilton cycle. These were first found by H\"aggkvist~\cite{HaggkvistHamilton}. However, we believe that if one requires $G$ to be completely regular, then one can actually obtain a Hamilton decomposition of $G$. Note this would be a significant generalization of Kelly's conjecture. \begin{conj} For every $ \gamma >0$ there exists $n_0=n_0 (\gamma )$ such that for all $n \ge n_0$ and all $r\ge (3/8+\gamma)n$ each $r$-regular oriented graph on $n$ vertices has a decomposition into Hamilton cycles. \end{conj} At present we do not even have any examples to rule out the possibility that one can reduce the constant $3/8$ in the above conjecture: \begin{question} Is there a constant $c<3/8$ such that for every sufficiently large $n$ every $cn$-regular oriented graph $G$ on $n$ vertices has a Hamilton decomposition or at least a set of edge-disjoint Hamilton cycles covering almost all edges of $G$? \end{question} It is clear that we cannot take $c <1/4$ since there are non-Hamiltonian $k$-regular oriented graphs on $n$ vertices with $k=n/4-1/2$ (consider a union of 2 regular tournaments). \medskip \noindent {\bf Sketch proof of Theorem~\ref{main38}.} The proof of Theorem~\ref{main38} is similar to that of Theorem~\ref{main}. A detailed proof of Theorem~\ref{main38} can be found in~\cite{treglownphd}. The main use of the assumption of high minimum semidegree in our proof of Theorem~\ref{main} was that for any pair $A$, $B$ of large sets of vertices, we could assume the existence of many edges between $A$ and $B$ (see Lemma~\ref{keevashmult}). This enabled us to prove the existence of very short paths, shifted walks and skeleton walks between arbitrary pairs of vertices. Lemma~\ref{keevashmult} does not hold under the weaker degree conditions of Theorem~\ref{main38}. 
However, (e.g.~by Lemma~4.1 in~\cite{kelly}) these degree conditions are strong enough to imply the following `expansion property': for any set $S$ of vertices, we have that $|N^+_G(S)| \ge |S|+ \gamma n/2$ (provided $|S|$ is not too close to $n$). Lemma~3.2 in~\cite{kelly} implies that this expansion property is also inherited by the reduced graph. So in the proof of Lemma~\ref{multifactor1}, this expansion property can be used to find paths of length $O(1/\gamma)$ which join up given pairs of vertices. Similarly, in Lemma~\ref{shiftedwalk} we find closed shifted walks so that each cycle $C$ in $F$ is traversed $O(1/\gamma)$ times instead of just $3$ times (such a result is proved explicitly in Corollary~4.3 of~\cite{kelly}). Finally, in the proof of Claim~\ref{shadow} we now find shadow skeleton walks whose length is $O(1/\gamma)$ instead of 5. In each of these cases, the increase in length does not affect the remainder of the proof. \noproof\bigskip \section{Acknowledgement} We would like to thank Demetres Christofides for helpful discussions.
\section{Introduction} Let $G$ be a non-trivial group. An equation in $t$ over $G$ is an expression of the form $$ s(t)=g_{1}t^{\epsilon_{1}}g_{2}t^{\epsilon_{2}} \cdots g_{n}t^{\epsilon_{n}}=1 \ (g_{i} \in G,\ \epsilon_i=\pm 1) $$ such that $\epsilon_{i}+\epsilon_{i+1}=0$ implies $g_{i+1} \neq 1$ in $G$. We say that the equation $s(t) = 1$ has a solution over $G$ if and only if $s(h) = 1$ for some element $h$ of a group $H$ containing $G$. The study of equations over groups was initiated by B.H. Neumann \cite{N}. Levin \cite{levin} showed that a solution always exists provided $n \geq 1$ and $\epsilon_1 , \cdots , \epsilon_{n} > 0$. An equation $s(t)=1$ is called singular if $\displaystyle\sum_{i=1}^{n} \; \epsilon_{i}=0$ and is called non-singular otherwise. It has been conjectured \cite{LS} that any non-singular equation is solvable. Moreover, Levin \cite{levin} conjectured that every equation is solvable over a torsion free group. Significant work has been done to verify these conjectures \cite{levin,E,IK,EJ}. In \cite{P}, Prishchepov used results of Brodskii and Howie \cite{BH} to show that Levin's conjecture is true for $n \leq 6$. In \cite{BE}, Bibi and Edjvet proved that the conjecture holds for $n =7$. The authors considered a non-singular equation of length $8$ in \cite{ABIA} and proved that the conjecture holds. The proofs in \cite{BE} and \cite{ABIA} are given by considering all possible conditions on elements of the group $G$. The number of such cases becomes extremely large as the length of the equation increases. It is, however, the only method available so far which can be applied to equations of arbitrary length. The method depends on applying either the weight test or the curvature distribution method \cite{BE} to each case of an equation. In \cite{ABA}, it was proved that certain equations of arbitrary length have a solution over torsion free groups provided a single relation among elements of $G$ holds. In this paper we consider equations of arbitrary length and show that the Levin conjecture holds under some mild conditions on elements of $G$. These results significantly reduce the number of cases in \cite{ABIA}. The results will also be used to solve equations of length greater than or equal to $8$. Our main results are the following theorems. \begin{theo*} Let $\displaystyle s(t)=a_{1}t a_{2}t \; \cdots \; a_{k_{1}-3}E_{k_1}a_{k_{1}}E_{k_2}a_{k_{2}} \; \cdots \; a_{k_{m-1}}E_{k_m}a_{k_{m}}t \; \cdots \; ta_{n}t$ such that $a_{j} \in G$ for all $j$ and $E_{k_i}=t a_{k_{i}-2} t^{-1}a_{k_{i}-1}t$ for all $i \geq 1$. If $a_{k_{i}-2}=a_{k_{1}-2}$ and $a_{k_{i}-1}=a_{k_{1}-1}$ for all $i \geq 2$ then $s(t)=1$ has a solution over $G$. \end{theo*} \begin{theo*} Let $\displaystyle s(t)=a_{1}t a_{2}t \; \cdots \; a_{k_{1}-2}E_{k_1}a_{k_{1}+1}E_{k_2}a_{k_{2}+1} \; \cdots \; a_{k_{m-1}}E_{k_m}a_{k_{m}+1}t \; \cdots \; ta_{n}t$ such that $a_{j} \in G$ for all $j$ and $E_{k_i}=t^{-1} a_{k_{i}-1} ta_{k_{i}}t$ for all $i \geq 1$. If $a_{k_{i}-1}=a_{k_{1}-1}$ and $a_{k_{i}}=a_{k_{1}}$ for all $i \geq 2$ then $s(t)=1$ has a solution over $G$. \end{theo*} \begin{theo*} Let $\displaystyle s(t)=a_{1}t a_{2}t \; \cdots \; a_{k_{1}-2}E_{k_1}a_{k_{1}+1}E_{k_2}a_{k_{2}+1} \; \cdots \; a_{k_{m-1}}E_{k_m}a_{k_{m}+1}t^{-1}a_{k_{m}+2}t \; \cdots \; ta_{n}t$ such that $a_{j} \in G$ for all $j$ and $E_{k_i}=t a_{k_{i}-1} t a_{k_{i}}t^{-1}$ for all $i \geq 1$. If $a_{k_{i}-1}=a_{k_{1}-1}$ and $a_{k_{i}}=a_{k_{1}}$ for all $i \geq 2$ then $s(t)=1$ has a solution over $G$.
\end{theo*} \begin{rem*} We would like to point out that the above results can suitably be generalized to higher powers of $t$ (i.e. replace $t$ by $t^{m}$ for some positive integer $m$). The proofs are similar to the ones given here, so we omit the details. \end{rem*} \section{Preliminaries} A relative group presentation is a presentation of the form $\mathcal{P}=\langle G, \; x \; | \; r \rangle$ where $r$ is a set of cyclically reduced words in $G * \langle x \rangle$. If the relative presentation is orientable and aspherical then the natural map from $G$ to $\langle G, \; x \; | \; r \rangle$ is injective. In our case $x$ and $r$ consist of the single elements $t$ and $s(t)$ respectively, so $\mathcal{P}$ is orientable and asphericity implies that $s(t)=1$ is solvable. In this paper we use the weight test to show that $\mathcal{P}$ is aspherical \cite{BP}. The star graph $\Gamma$ of $\mathcal{P}$ has vertex set $x \cup x^{-1}$ and edge set $r^{*}$, where $r^{*}$ is the set of all cyclic permutations of the elements of $r \cup r^{-1}$ which begin with an element of $x \cup x^{-1}$. For $R \in r^{*}$ write $R=Sg$ where $g \in G$, and $S$ begins and ends with $x$ symbols. Then $\mathfrak{i}(R)$ is the inverse of the last symbol of $S$, $\tau(R)$ is the first symbol of $S$ and $\lambda(R)=g$. A weight function $\theta$ on $\Gamma$ is a real-valued function on the set of edges of $\Gamma$ which satisfies $\theta(Sh)=\theta(S^{-1}h^{-1})$. A weight function $\theta$ is called aspherical if the following three conditions are satisfied: \begin{enumerate} \item Let $R \in r^{*}$ with $R=x_{1}^{\epsilon_{1}}g_{1} \; \cdots \; x_{n}^{\epsilon_{n}}g_{n}$. Then $$\sum_{i=1}^{n}\; (1-\theta(x_{i}^{\epsilon_{i}}g_{i} \; \cdots \; x_{n}^{\epsilon_{n}}g_{n}x_{1}^{\epsilon_{1}} g_{1} \; \cdots \; x_{i-1}^{\epsilon_{i-1}} g_{i-1})) \geq 2. $$ \item Each admissible cycle in $\Gamma$ has weight at least $2$ (where admissible means having a label trivial in $G$). \item Each edge of $\Gamma$ has a non-negative weight. \end{enumerate} If $\Gamma$ admits an aspherical weight function then $\mathcal{P}$ is aspherical \cite{BP}. The following lemma \cite{K} tells us that we can apply the asphericity test in $k$ steps. \begin{lem*} Let the relative presentation $P = \langle H, x : r \rangle$ define a group $G$ and let $Q = \langle G, t : s \rangle$ be another relative presentation. If $Q$ and $P$ are both aspherical, then the relative presentation $R = \langle H, x \cup t : r \cup \tilde{s} \rangle$ is aspherical, where $\tilde{s}$ is an element of $H * F(x) * F(t)$ obtained from $s$ by lifting. \end{lem*} It is clear from our definition of a group equation that if $g_{i}$ is a coefficient between a negative and a positive power of $t$ then $g_{i}$ is not trivial in $G$. This fact will be used in all subsequent proofs without reference. \section{Main Results} The following lemma demonstrates the result of the first theorem for equations involving only two negative powers of $t$. \begin{lem}\label{lsgtype1} The equation $g_1tg_2t \cdots tg_{i-2}t^{-1}g_{i-1} t g_i t g_{i+1}t^{-1} g_{i+2}t g_{i+3}t \cdots t g_n t =1$ is solvable if $g_{i+1} =g_{i-2}$ and $g_{i+2} =g_{i-1}$. \end{lem} \begin{proof} Let $$\mathcal{P}=\langle A, t~|~ g_1tg_2t \cdots tg_{i-2}t^{-1}g_{i-1} t g_i t g_{i-2}t^{-1} g_{i-1}t g_{i+3}t \cdots t g_n t \rangle$$ be the relative presentation corresponding to the given equation. Following \cite{BP}, it is sufficient to show that the presentation $\mathcal{P}$ is aspherical.
Substitute $x=t g_{i-2} t^{-1}g_{i-1}t$ to obtain $$\mathcal{P}=\langle A, t~|~ g_1t \cdots tg_{i-3} x g_{i} x g_{i+3} t \cdots t g_n t =1=tg_{i-2}t^{-1} g_{i-1}tx^{-1} \rangle.$$ We use the weight test to show that $\mathcal{P}$ is aspherical. The star graph $\Gamma$ for $\mathcal{P}$ is given by Figure \ref{sgtype1}. \begin{figure}[H] \begin{center} \includegraphics[trim={0 12cm 0 4cm}, width=0.95\textwidth, clip]{sg2_1.pdf} \caption{Star graph $\Gamma$} \label{sgtype1} \end{center} \end{figure} In this case, the weight function $\theta$ is defined by $\theta(g_{i-1})=\theta(g_{i-2})=\theta(g_{i-3})= \theta(g_{i+3})=0$. Remaining edges are assigned a weight $1$. The star graph clearly indicates that all admissible cycles of weight less than two imply that the group is a torsion group. Hence $\mathcal{P}$ is aspherical over a torsion free group. \end{proof} \begin{theo} Let $\displaystyle s(t)=a_{1}t a_{2}t \; \cdots \; a_{k_{1}-3}E_{k_1}a_{k_{1}}E_{k_2}a_{k_{2}} \; \cdots \; a_{k_{m-1}}E_{k_m}a_{k_{m}}t \; \cdots \; ta_{n}t$ such that $a_{j} \in G$ for all $j$ and $E_{k_i}=t a_{k_{i}-2} t^{-1}a_{k_{i}-1}t$ for all $i \geq 1$. If $a_{k_{i}-2}=a_{k_{1}-2}$ and $a_{k_{i}-1}=a_{k_{1}-1}$ for all $i \geq 2$ then $s(t)=1$ has a solution over $G$. \end{theo} \begin{proof} Let $$\mathcal{P}=\langle A, t~|~ a_{1}t a_{2}t \; \cdots \; a_{k_{1}-3}E_{k_1}a_{k_{1}}E_{k_2}a_{k_{2}} \; \cdots \; a_{k_{m-1}}E_{k_m}a_{k_{m}}t \; \cdots \; ta_{n}t \rangle$$ be the relative presentation corresponding to the given equation. It is sufficient to show that the presentation $\mathcal{P}$ is aspherical. Substituting $x=t a_{k_{1}-2} t^{-1}a_{k_{1}-1}t$ yields $$\mathcal{P}=\langle A, t~|~ a_1t \; \cdots \; ta_{k_{1}-3} x a_{k_{1}} x a_{k_{2}} x \; \cdots \; a_{k_{m-1}} x a_{k_{m}}t \; \cdots \; t a_{n} t =1=t a_{k_{1}-2} t^{-1}a_{k_{1}-1}tx^{-1} \rangle.$$ We use the weight test to show that $\mathcal{P}$ is aspherical. The star graph $\Gamma$ for $\mathcal{P}$ is given by Figure \ref{sgtype2_7}. \begin{figure}[H] \begin{center} \includegraphics[trim={0 12cm 0 4cm}, width=0.95\textwidth, clip]{sg2_7.pdf} \caption{Star graph $\Gamma$} \label{sgtype2_7} \end{center} \end{figure} In this case, the weight function $\theta$ is defined by $\theta(a_{k_{1}-1})=\theta(a_{k_{1}-2})=\theta(a_{k_{1}-3})= \theta(a_{k_{m}})=0$. Remaining edges are assigned a weight $1$. The star graph clearly indicates that all admissible cycles of weight less than two imply that the group is a torsion group. Hence $\mathcal{P}$ is aspherical over a torsion free group. \end{proof} The following lemma gives a special case of second theorem where equation contains only three negative powers of $t$. \begin{lem}\label{lsgtype1_1} The equation $g_1tg_2t \cdots t g_{i-2} t^{-1} g_{i-1} t g_{i} t g_{i+1} t^{-1} g_{i+2} t g_{i+3} t g_{i+4}t^{-1} g_{i+5} t g_{i+6} t g_{i+7}t \cdots t g_n t =1$ is solvable if $g_{i+5}=g_{i+2} =g_{i-1}$ and $g_{i+6}=g_{i+3} =g_{i}$. \end{lem} \begin{proof} Let $$\mathcal{P}=\langle A, t~|~ g_1t \cdots t g_{i-2} t^{-1}g_{i-1}tg_{i} t g_{i+1} t^{-1} g_{i-1}t g_{i}t g_{i+4}t^{-1} g_{i-1} t g_{i} t g_{i+7}t \cdots t g_n t \rangle$$ be the relative presentation corresponding to the given equation. It is sufficient to show that the presentation $\mathcal{P}$ is aspherical. Substitute $x=t^{-1} g_{i-1} t g_{i}t$ to get $$\mathcal{P}=\langle A, t~|~ g_1t \cdots tg_{i-2} x g_{i+1} x g_{i+4} x g_{i+7} t \cdots t g_n t =1=t^{-1}g_{i-1}t g_{i}tx^{-1} \rangle.$$ We use the weight test to show that $\mathcal{P}$ is aspherical. 
The star graph $\Gamma$ for $\mathcal{P}$ is given by Figure \ref{sgtype1_1_4}. \begin{figure}[H] \begin{center} \includegraphics[trim={0 11cm 0 4cm}, width=0.95\textwidth, clip]{sg2_4.pdf} \caption{Star graph $\Gamma$} \label{sgtype1_1_4} \end{center} \end{figure} In this case, the weight function $\theta$ is defined by $\theta(g_{i-1})=\theta(g_{i-2})=\theta(g_{i+7})=0$. Furthermore the weight of the edge $x \rightarrow t^{-1}$ with label $1$ is also zero. Remaining edges are assigned a weight $1$. The star graph clearly indicates that all admissible cycles of weight less than two imply that the group is a torsion group. Hence $\mathcal{P}$ is aspherical over a torsion free group. \end{proof} \begin{theo} Let $\displaystyle s(t)=a_{1}t a_{2}t \; \cdots \; a_{k_{1}-2}E_{k_1}a_{k_{1}+1}E_{k_2}a_{k_{2}+1} \; \cdots \; a_{k_{m-1}}E_{k_m}a_{k_{m}+1}t \; \cdots \; ta_{n}t$ such that $a_{j} \in G$ for all $j$ and $E_{k_i}=t^{-1} a_{k_{i}-1} ta_{k_{i}}t$ for all $i \geq 1$. If $a_{k_{i}-1}=a_{k_{1}-1}$ and $a_{k_{i}}=a_{k_{1}}$ for all $i \geq 2$ then $s(t)=1$ has a solution over $G$. \end{theo} \begin{proof} Let $$\mathcal{P}=\langle A, t~|~ a_{1}t a_{2}t \; \cdots \; a_{k_{1}-2}E_{k_1}a_{k_{1}+1}E_{k_2}a_{k_{2}+1} \; \cdots \; a_{k_{m-1}}E_{k_m}a_{k_{m}+1}t \; \cdots \; ta_{n}t \rangle$$ be the relative presentation corresponding to the given equation. It is sufficient to show that the presentation $\mathcal{P}$ is aspherical. By substituting $x=t^{-1} a_{k_{1}-1} t a_{k_{1}} t$ we get $$\mathcal{P}=\langle A, t~|~ a_1t \; \cdots \; ta_{k_{1}-2} x a_{k_{1}+1} x a_{k_{2}+1} x \; \cdots \; a_{k_{m-1}+1} x a_{k_{m}+1}t \; \cdots \; t a_{n} t =1=t^{-1} a_{k_{1}-1} ta_{k_{1}}tx^{-1} \rangle.$$ The weight test will be used to show that $\mathcal{P}$ is aspherical. The star graph $\Gamma$ for $\mathcal{P}$ is given by Figure \ref{sgtype2_8}. \begin{figure}[H] \begin{center} \includegraphics[trim={0 11cm 0 4cm}, width=0.95\textwidth, clip]{sg2_8.pdf} \caption{Star graph $\Gamma$} \label{sgtype2_8} \end{center} \end{figure} In this case, the weight function $\theta$ is defined by $\theta(a_{k_{1}-1})=\theta(a_{k_{1}-2})= \theta(a_{k_{m}+1})=0$. Furthermore the weight of the edge $x \rightarrow t^{-1}$ with label $1$ is also zero. Remaining edges are assigned a weight $1$. The star graph clearly indicates that all admissible cycles of weight less than two imply that the group is a torsion group. Hence $\mathcal{P}$ is aspherical over a torsion free group. \end{proof} We now give a special case of the third theorem, where the equation contains only two negative powers of $t$. \begin{lem}\label{lsgtype1_1_2} The equation $g_1tg_2t \cdots g_{i-2} tg_{i-1}tg_{i} t^{-1} g_{i+1} t g_{i+2}t g_{i+3}t^{-1} g_{i+4}t^{-1}g_{i+5}t \cdots t g_n t =1$ is solvable if $g_{i+2} =g_{i-1}$ and $g_{i+3} =g_{i}$. \end{lem} \begin{proof} Let $$\mathcal{P}=\langle A, t~|~ g_1tg_2t \cdots g_{i-2} tg_{i-1}tg_{i} t^{-1} g_{i+1} t g_{i-1}t g_{i}t^{-1} g_{i+4}t^{-1} g_{i+5}t \cdots t g_n t \rangle$$ be the relative presentation corresponding to the given equation. It is sufficient to show that the presentation $\mathcal{P}$ is aspherical. Substituting $x=t g_{i-1} t g_{i}t^{-1}$ yields $$\mathcal{P}=\langle A, t~|~ g_1t \cdots tg_{i-2} x g_{i+1} x g_{i+4} t^{-1} g_{i+5}t \cdots t g_n t =1=tg_{i-1}t g_{i}t^{-1}x^{-1} \rangle.$$ We use the weight test to show that $\mathcal{P}$ is aspherical. The star graph $\Gamma$ for $\mathcal{P}$ is given by Figure \ref{sgtype1_1_5}. 
\begin{figure}[H] \begin{center} \includegraphics[trim={0 12cm 0 4cm}, width=0.95\textwidth, clip]{sg2_5.pdf} \caption{Star graph $\Gamma$} \label{sgtype1_1_5} \end{center} \end{figure} In this case, the weight function $\theta$ is defined by $\theta(g_{i})=\theta(g_{i-2})=\theta(g_{i+5})=0$. Furthermore the weight of the edge $t \rightarrow x^{-1}$ with label $1$ is also zero. Remaining edges are assigned a weight $1$. The star graph clearly indicates that all admissible cycles of weight less than two imply that the group is a torsion group. Hence $\mathcal{P}$ is aspherical over a torsion free group. \end{proof} \begin{theo} Let $\displaystyle s(t)=a_{1}t a_{2}t \; \cdots \; a_{k_{1}-2}E_{k_1}a_{k_{1}+1}E_{k_2}a_{k_{2}+1} \; \cdots \; a_{k_{m-1}}E_{k_m}a_{k_{m}+1}t^{-1}a_{k_{m}+2}t \; \cdots \; ta_{n}t$ such that $a_{j} \in G$ for all $j$ and $E_{k_i}=t a_{k_{i}-1} t a_{k_{i}}t^{-1}$ for all $i \geq 1$. If $a_{k_{i}-1}=a_{k_{1}-1}$ and $a_{k_{i}}=a_{k_{1}}$ for all $i \geq 2$ then $s(t)=1$ has a solution over $G$. \end{theo} \begin{proof} Let $$\mathcal{P}=\langle A, t~|~ a_{1}t a_{2}t \; \cdots \; a_{k_{1}-2}E_{k_1}a_{k_{1}+1}E_{k_2}a_{k_{2}+1} \; \cdots \; a_{k_{m-1}}E_{k_m}a_{k_{m}+1}t^{-1}a_{k_{m}+2}t \; \cdots \; ta_{n}t \rangle$$ be the relative presentation corresponding to the given equation. It is sufficient to show that the presentation $\mathcal{P}$ is aspherical. Substitute $x=t a_{k_{1}-1} t a_{k_{1}} t^{-1}$ to get $$\mathcal{P}=\langle A, t~|~ a_1t \; \cdots \; ta_{k_{1}-2} x a_{k_{1}+1} x a_{k_{2}+1} x \; \cdots \; a_{k_{m-1}+1} x a_{k_{m}+1}t^{-1} a_{k_{m}+2}t \; \cdots \; t a_{n} t =1=t a_{k_{1}-1} ta_{k_{1}}t^{-1}x^{-1} \rangle.$$ We use the weight test to show that $\mathcal{P}$ is aspherical. The star graph $\Gamma$ for $\mathcal{P}$ is given by Figure \ref{sgtype2_9}. \begin{figure}[H] \begin{center} \includegraphics[trim={0 11cm 0 4cm}, width=0.95\textwidth, clip]{sg2_9.pdf} \caption{Star graph $\Gamma$} \label{sgtype2_9} \end{center} \end{figure} In this case, the weight function $\theta$ is defined by $\theta(a_{k_{1}})=\theta(a_{k_{1}-2})= \theta(a_{k_{m}+2})=0$. Furthermore the weight of the edge $t \rightarrow x^{-1}$ with label $1$ is also zero. Remaining edges are assigned a weight $1$. The star graph clearly indicates that all admissible cycles of weight less than two imply that the group is a torsion group. Hence $\mathcal{P}$ is aspherical over a torsion free group. \end{proof}
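To illustrate the shape of the equations covered by the results above we record a small worked instance; the specific equation below is our own illustrative choice and is not taken from the cited references. \begin{rem*} Taking $n=7$ and $i=4$ in Lemma \ref{lsgtype1}, the equation $$g_1tg_2t^{-1}g_3tg_4tg_2t^{-1}g_3tg_7t=1$$ (so that $g_5=g_2$ and $g_6=g_3$, with $g_3$ nontrivial in accordance with our convention on group equations) is solvable over a torsion-free group. Indeed, the substitution $x=tg_2t^{-1}g_3t$ transforms the corresponding relative presentation into $$\mathcal{P}=\langle A, t~|~ g_1 x g_4 x g_7 t =1= tg_2t^{-1}g_3tx^{-1} \rangle,$$ which is exactly the presentation analysed in the proof of Lemma \ref{lsgtype1}. \end{rem*}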
\section{Introduction} The motivation for studying obstacle problems has roots in many applications. There are examples in physics and in mechanics, and many prime examples can be found in \cite{Chipot,Fried,KO,KS,Rod}. The classical obstacle problem consists in finding the minimizer of the Dirichlet energy in a domain $\Omega$, among all functions $v$, with fixed boundary data, constrained to lie above a given obstacle $\psi$. Active areas of research related to this problem include both studying properties of minimizers and analyzing the regularity of the boundary of the coincidence set between minimizer and obstacle. In the '60s the obstacle problem was introduced within the study of variational inequalities. In the last fifty years much effort has been made to understand the problem: a wide variety of issues has been analyzed and new mathematical ideas have been introduced. Caffarelli in \cite{Caf80} introduced the so-called method of \emph{blow-up}, imported from geometric measure theory for the study of minimal surfaces, to prove some local properties of solutions. Alt-Caffarelli-Friedman \cite{ACF}, Weiss \cite{Weiss} and Monneau \cite{Monneau} introduced monotonicity formulae to show the blow-up property and to obtain the free-boundary regularity in various problems. See \cite{Caf77,CS05,Fried,KS,PSU,Rod} for more detailed references and historical developments. Recently many authors have improved the classical results, replacing the Dirichlet energy by a more general variational functional and weakening the regularity of the obstacle (see \cite{CFS04,FS07,Foc10,Foc12,FGS,MP11,Mon09,Wang00,Wang02}). In this context, we aim to minimize the following energy \begin{equation} \mathcal{E}(v):=\int_\Omega \big(\langle \mathbb{A}(x)\nabla v(x),\nabla v(x)\rangle +2f(x)v(x)\big)\,dx, \end{equation} among all nonnegative functions with fixed boundary data, where $\Omega\subset \mathbb{R}^n$ is a smooth, bounded and open set, $n\geq 2$, $\mathbb{A}:\Omega\to \mathbb{R}^{n\times n}$ is a matrix-valued field and $f:\Omega\to \mathbb{R}$ is a function satisfying: \begin{itemize} \item[$(H1)$]$\mathbb{A}\in W^{1+s,p}(\Omega;\mathbb{R}^{n\times n})$ with $s>\frac{1}{p}\,$ and $\,p>\frac{n^2}{n(1+s)-1} \wedge n$, where the symbol $\wedge$ indicates the minimum of the surrounding quantities; \item[$(H2)$] $\mathbb{A}(x)=\left(a_{ij}(x)\right)_{i,j=1,\dots,n}$ is symmetric, continuous and coercive, that is, $a_{ij}=a_{ji}$ $\mathcal{L}^n$-a.e. on $\Omega$ and, for some $\Lambda\geq 1$, \begin{equation}\label{A coerc cont} \Lambda^{-1}|\xi|^2\leq \langle \mathbb{A}(x)\xi, \xi\rangle \leq \Lambda|\xi|^2 \qquad\qquad \mathcal{L}^n \,\,\textrm{a.e.}\,\,\Omega, \,\,\forall \xi\in \mathbb{R}^n; \end{equation} \item[$(H3)$] $f$ is Dini continuous, that is, the modulus of continuity $\omega(t)=\sup_{|x-y|\leq t} |f(x)-f(y)|$ of $f$ satisfies the integrability condition \begin{equation}\label{e:Dini continuity} \int_0^1 \frac{\omega(t)}{t}\,dt < \infty, \end{equation} and there exists $c_0>0$ such that $f\geq c_0$. \item[$(H4)$] $f$ satisfies a double Dini-type condition: letting $\omega(t)=\sup_{|x-y|\leq t} |f(x)-f(y)|$ be the modulus of continuity of $f$, for some $a\geq 1$ the following integrability condition holds \begin{equation}\label{H3'} \int_0^1 \frac{\omega(r)}{r}\,|\log r|^a\, dr < \infty, \end{equation} and there exists $c_0>0$ such that $f\geq c_0$. \end{itemize} In Remark \ref{choice (H1)} we will justify the choice of $p$ in hypothesis $(H1)$. 
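For orientation, we point out a simple admissible choice of the exponents in $(H1)$; the specific values below are our own and serve only as an illustration. For $n=3$, $s=\frac{1}{2}$ and $p=3$ we have \begin{equation*} \frac{n^2}{n(1+s)-1}\wedge n=\frac{9}{7/2}\wedge 3=\frac{18}{7}<3=p \qquad\text{and}\qquad sp=\frac{3}{2}<3=n, \end{equation*} so a matrix-valued field $\mathbb{A}\in W^{3/2,3}(\Omega;\mathbb{R}^{3\times 3})$ is admissible in $(H1)$ although it need not be Lipschitz continuous; this is the kind of regime we are mainly interested in.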
We note that we have reduced to the $0$ obstacle case, so that $f=-\mathrm{div}(\mathbb{A}\nabla \psi)$. In the paper we prove that the unique minimizer, which we will denote by $u$, is the solution of an elliptic differential equation in divergence form. With classical PDE regularity theory we deduce that $u$ and $\nabla u$ are H\"{o}lder continuous and $\nabla^2 u$ is integrable. To prove the regularity of the free-boundary $\Gamma_u=\partial \{u=0\}\cap \Omega$, we apply the method of blow-up introduced by Caffarelli \cite{Caf80}. For every point $x_0$ of the free-boundary $\Gamma_u=\partial \{u=0\}\cap \Omega$ we introduce a sequence of rescaled functions and, through a $C^{1,\gamma}$ estimate of the rescaled functions (for a suitable $\gamma\in (0,1)$), we prove the existence of limits along subsequences; these limits are called blow-ups. To classify the blow-ups and to prove the uniqueness of the limit, for all points of $\Gamma_u$, we introduce a technical tool: the quasi-monotonicity formulae. To simplify the notation we introduce, for all $x_0\in\Gamma_u$, a suitable change of variables for which, without loss of generality, we can suppose: \begin{equation}\label{abuso_notazione} x_0=\underline{0}\in \Gamma_u,\qquad \mathbb{A}(\underline{0})=I_n,\qquad f(\underline{0})=1. \end{equation} As in \cite{FGS} we introduce the auxiliary energy ``à la Weiss'' \begin{equation*} \Phi(r):=\int_{B_1}\big(\langle \mathbb{A}(rx)\nabla u_r(x),\nabla u_r(x)\rangle +2f(rx)u_r(x)\big)\,dx - 2\int_{\partial B_1}\left\langle \mathbb{A}(rx)\frac{x}{|x|},\frac{x}{|x|}\right\rangle u_r^2(x) \,d\mathcal{H}^{n-1}(x), \end{equation*} and prove the main results of the paper: \begin{teo}[Weiss' quasi-monotonicity formula]\label{Weiss} Assume that $(H1)$-$(H3)$ and \eqref{abuso_notazione} hold. There exist nonnegative constants $\bar{C_3}$ and $C_4$ independent of $r$ such that the function \begin{equation*} r\mapsto \Phi(r)\,e^{\bar{C_3}r^{1-\frac{n}{\Theta}}} + C_4\,\int_0^r \left(t^{-\frac{n}{\Theta}}+\frac{\omega(t)}{t}\right)e^{\bar{C_3}t^{1-\frac{n}{\Theta}}}\,dt \end{equation*} with the constant $\Theta$ given in equation \eqref{B}, is nondecreasing on the interval $(0,\frac{1}{2}\mathrm{dist}(\underline{0},\partial\Omega)\wedge 1)$. More precisely, the following estimate holds true for $\mathcal{L}^1$-a.e. $r$ in such an interval: \begin{equation}\label{dis monotonia Weiss} \begin{split} \frac{d}{dr}\bigg(\Phi(r)\,e^{\bar{C_3}r^{1-\frac{n}{\Theta}}} &+ C_4\,\int_0^r \left(t^{-\frac{n}{\Theta}}+\frac{\omega(t)}{t}\right)e^{\bar{C_3}t^{1-\frac{n}{\Theta}}}\,dt\bigg)\\ &\geq \frac{2e^{\bar{C_3}r^{1-\frac{n}{\Theta}}}}{r^{n+2}}\int_{\partial B_r}\mu \Big(\langle \mu^{-1} \mathbb{A}\nu, \nabla u \rangle - 2\frac{u}{r} \Big)^2\,d\mathcal{H}^{n-1}. \end{split} \end{equation} In particular, the limit $\Phi(0^+):=\lim_{r\to 0^+}\Phi(r)$ exists and it is finite and there exists a constant $c>0$ such that \begin{equation}\label{weiss stima Phir} \begin{split} &\Phi(r)-\Phi(0^+)\\ &\phantom{A}\geq \Phi(r)\,e^{\bar{C_3}r^{1-\frac{n}{\Theta}}} + C_4\,\int_0^r \left(t^{-\frac{n}{\Theta}}+\frac{\omega(t)}{t}\right)e^{\bar{C_3}t^{1-\frac{n}{\Theta}}}\,dt -\Phi(0^+) - c\,\left(r^{1-\frac{n}{\Theta}}+\int_0^r\frac{\omega(t)}{t}\,dt\right). \end{split} \end{equation} \end{teo} \begin{teo}[Monneau's quasi-monotonicity formula]\label{Monneau} Assume $(H1)$, $(H2)$ and $(H4)$ with $a\geq 1$ and \eqref{abuso_notazione}. Let $u$ be the minimizer of $\mathcal{E}$ on $K$, with $\underline{0}\in Sing(u)$ (i.e. 
\eqref{pto sing1} holds), and $v$ be a $2$-homogeneous, positive, polynomial function, solution of $\Delta v=1$ on $\mathbb{R}^n$. Then, there exists a positive constant $C_5=C_5(\l,\|\mathbb{A}\|_{W^{s,p}})$ such that \begin{equation}\label{funz monotona Monneau} r \longmapsto \int_{\partial B_1} (u_r-v)^2\,d\mathcal{H}^{n-1} + C_5\,\left(r^{1-\frac{n}{\Theta}}+\int_0^r\frac{\omega(t)}{t}\,dt + \int_0^r\frac{dt}{t}\int_0^t\frac{\omega(s)}{s}\,ds\right) \end{equation} is nondecreasing on $(0,\frac{1}{2}\mathrm{dist}(\underline{0},\partial\Omega)\wedge 1)$. More precisely, $\mathcal{L}^1$-a.e. on such an interval \begin{equation}\label{dis monotonia Monneau} \begin{split} \frac{d}{dr}\bigg(\int_{\partial B_1}(u_r-v)^2\,&d\mathcal{H}^{n-1} + C_5\,\left(r^{1-\frac{n}{\Theta}}+\int_0^r\frac{\omega(t)}{t}\,dt + \int_0^r\frac{dt}{t}\int_0^t\frac{\omega(s)}{s}\,ds \right)\bigg)\\ &\geq \frac{2}{r}\bigg(e^{\bar{C_3}\,r^{1-\frac{n}{\Theta}}}\Phi(r) + C_4 \int_0^r e^{\bar{C_3}t^{1-\frac{n}{\Theta}}}\left(t^{-\frac{n}{\Theta}}+\frac{\omega(t)}{t}\right)\,dt-\Psi_v(1)\bigg), \end{split} \end{equation} where $\Psi_v(1)=\int_{B_1}\big(|\nabla v|^2 + 2v\big)\,dx -2\int_{\partial B_1} v^2\,d\mathcal{H}^{n-1}$. \end{teo} These theorems generalize the results of Weiss \cite{Weiss} and Monneau \cite{Monneau}. The Weiss monotonicity formula was proved by Weiss in \cite{Weiss} for the case $\mathbb{A}\equiv I_n$ and $f\equiv 1$; in the same paper he proved the celebrated epiperimetric inequality (see Theorem \ref{epip Weiss}) and gave a new way of approaching the problem of the regularity of the free-boundary. In \cite{PS07} Petrosyan and Shahgholian proved the monotonicity formula for $\mathbb{A}\equiv I_n$ and $f$ with a double Dini modulus of continuity (but for obstacle problems with no sign condition on the solution). Lederman and Wolanski \cite{LW07} provided a local monotonicity formula for the perturbed problem to achieve the regularity of Bernoulli and Stefan free-boundary problems, while Ma, Song and Zhao \cite{MSZ10} showed the formula for elliptic and parabolic systems in the case in which $\mathbb{A}\equiv I_n$ and the equations contain a first-order nonlinear term. Garofalo and Petrosyan in \cite{GP09} proved the formula for the thin obstacle problem with a smooth obstacle. Garofalo, Petrosyan and Smit Vega Garcia in \cite{GPS16} proved the result for Signorini's problem under the hypotheses $\mathbb{A}\in W^{1,\infty}$ and $f\in L^\infty$. Focardi, Gelli and Spadaro in \cite{FGS} proved the formula for the classical obstacle problem for $\mathbb{A}\in W^{1,\infty}$ and $f\in C^{0,\alpha}$ for $\alpha\in (0,1)$. In the same paper (under the same hypotheses on the coefficients) the three authors proved a generalization of the monotonicity formula introduced by Monneau \cite{Monneau} to analyze the behaviour near the singular points (see Definition \ref{reg sing}). In \cite{Mon09} Monneau improved his result; he showed that his monotonicity formula holds under the hypotheses that $\mathbb{A}\equiv I_n$ and $f$ has a Dini modulus of continuity in an $L^p$ sense. In \cite{GP09} Garofalo and Petrosyan showed the formula of Monneau for the thin obstacle problem with a regular obstacle.\\ In our work (inspired by \cite{FGS}) we prove the quasi-monotonicity formulae under the hypotheses $(H1)$-$(H4)$, improving the results available in the current literature. As we will see in Theorem \ref{cor Morrey fraz}, if $ps>n$ the embedding $W^{1+s,p}\hookrightarrow W^{1,\infty}$ holds true. 
Consequently, we assume $sp\leq n$ and we obtain an original result not covered by \cite{FGS} if $p>\frac{n^2}{n(1+s)-1}\wedge n$. (We can observe that $(\frac{n^2}{n(1+s)-1}\wedge n )< \frac{n}{s}$ for every $s>0$.)\\ Weiss' quasi-monotonicity formula allows us first to deduce that blow-ups are homogeneous of degree $2$, and second (using also the nondegeneracy of the solution proven in an even more general setting by Blank and Hao \cite{Blank}) to show that the blow-ups are nonzero. Thanks to a $\Gamma$-convergence argument and to Caffarelli's classification of blow-ups in the classical case (see \cite{Caf77,Caf80,Caf98}), we can classify the blow-up types and so distinguish the points in $\Gamma_u$ as regular and singular (respectively $Reg(u)$ and $Sing(u)$, see Definition \ref{reg sing}). Following the energetic approach by Focardi, Gelli and Spadaro \cite{FGS} we prove the uniqueness of blow-ups for both the regular and the singular cases. In the classical framework, the uniqueness of the blow-ups can be derived, a posteriori, from the regularity properties of the free-boundary (see Caffarelli \cite{Caf80}). In our setting we distinguish two cases: $x_0\in Sing(u)$ and $x_0\in Reg(u)$. In the first case, through the two quasi-monotonicity formulae and an argument by contradiction, we prove the uniqueness of blow-ups, providing a uniform decay estimate for all points in a compact subset of $Sing(u)$. In the second case, we need to introduce an assumption, probably of a technical nature, on the modulus of continuity of $f$: $(H4)$ with $a>2$ (more restrictive than double Dini continuity, which is equivalent to $(H4)$ with $a=1$, see \cite[Definition 1.1]{Mon09}). So, thanks to the epiperimetric inequality of Weiss \cite{Weiss} we obtain a uniform decay estimate for the convergence of the rescaled functions with respect to their blow-up limits. We recall that Weiss \cite{Weiss} proved the uniqueness for regular points in the case $\mathbb{A}\equiv I_n$ and $f\equiv 1$. Focardi, Gelli and Spadaro \cite{FGS} had already proved the same result for $\mathbb{A}$ Lipschitz continuous and $f$ H\"{o}lder continuous. Monneau \cite{Mon09} proved the uniqueness of blow-ups both for regular points and for singular points with $\mathbb{A}\equiv I_n$ and $f$ with Dini continuous modulus of mean oscillation in $L^p$. Moreover, without further hypotheses in the regular case, and adding a double Dini continuity condition on the modulus of the mean oscillation, Monneau gave a very accurate pointwise decay estimate, providing an explicit modulus of continuity for the solution. These results allow us to prove the regularity of the free-boundary: \begin{teo} We assume the hypotheses $(H1)$-$(H3)$. The free-boundary decomposes as $\Gamma_u=Reg(u)\cup Sing(u)$ with $Reg(u)\cap Sing(u)=\emptyset$. \begin{itemize} \item[(i)] Assume $(H4)$ with $a>2$. Then $Reg(u)$ is relatively open in $\partial \{u=0\}$ and for every point $x_0\in Reg(u)$ there exists $r=r(x_0)>0$ such that $\Gamma_u\cap B_r(x_0)$ is a $C^1$ hypersurface whose normal vector $\varsigma$ is absolutely continuous, with a modulus of continuity depending on the function $\rho$ defined in \eqref{rho}. In particular, if $f$ is H\"{o}lder continuous there exists $r=r(x_0)>0$ such that $\Gamma_u\cap B_r(x_0)$ is a $C^{1,\b}$ hypersurface for some universal exponent $\b \in (0,1)$. \item[(ii)] Assume $(H4)$ with $a\geq 1$. 
$Sing(u)=\cup_{k=0}^{n-1} S_k$ (see Definition \ref{d:singular stratum}) and for all $x\in S_k$ there exists $r$ such that $S_k\cap B_r(x)$ is contained in a regular $k$-dimensional submanifold of $\mathbb{R}^n$. \end{itemize} \end{teo} In order to justify the choice of regularity of the coefficients of $\mathbb{A}$ and $f$ we discuss the hypotheses $(H1)$ and $(H3)$. The hypothesis $(H3)$ turns out to be the best condition to obtain the uniqueness of blow-up (we need $(H4)$ with $a\geq 1$ in the case of singular points and $a>2$ in the case of regular points). In fact, when condition \eqref{e:Dini continuity} is not satisfied, Blank gave in \cite{Bl01} an example of nonuniqueness of the blow-up limit at a regular point. Monneau observed in \cite{Monneau} that using the symmetry $x \mapsto - x$, it is easy to transform the result of Blank into an example of nonuniqueness of the blow-up limit at a singular point when condition \eqref{e:Dini continuity} is not satisfied. Before taking into account hypothesis $(H1)$ we need to clarify the relationship between the regularity of the coefficients $\mathbb{A}$, $f$ and the regularity of the free-boundary. Caffarelli \cite{Caf77} and Kinderlehrer and Nirenberg \cite{KN77} proved that for smooth coefficients of $\mathbb{A}$ and for $f\in C^1$ the regular points form a $C^{1,\alpha}$-manifold for all $\a\in (0,1)$; for $f\in C^{m,\alpha}$, $Reg(u)$ is a $C^{m+1,\alpha}$-manifold with $\alpha\in (0,1)$; and if $f$ is analytic so is $Reg(u)$. In \cite{Bl01} Blank proved that, in the Laplacian case with $f$ Dini continuous, the set of regular points is a $C^1$-manifold, while if $f$ is $C^0$ but not Dini continuous, then $Reg(u)$ is Reifenberg vanishing, but not necessarily smooth. In \cite{FGS} Focardi, Gelli and Spadaro proved that if $\mathbb{A}\in W^{1,\infty}$ and $f\in C^{0,\alpha}$ with $\alpha\in (0,1)$, then $Reg(u)$ is a $C^{1,\beta}$-manifold with $\beta\in (0,\alpha)$. A careful inspection of the proof of \cite[Theorem 4.12]{FGS} shows that in the case of $\mathbb{A}\in W^{1,\infty}$ and $f\equiv 1$ the regular set turns out to be a $C^{1,\beta'}$-manifold with $\beta'\in (0,\frac{1}{2})$, so, despite the linear term being constant, the regularity improves slightly but remains in the same class. Blank and Hao in \cite{BH15} proved that if $a_{ij},f\in VMO$, any compact set $K\subset\subset Reg(u)\cap B_\frac{1}{2}$ is relatively Reifenberg vanishing with respect to $Reg(u)\cap B_\frac{1}{2}$. So the regularity of the regular part of the free-boundary turns out to be strictly related to the regularity of the coefficients of the matrix $\mathbb{A}$ and of the linear term $f$. Under the hypotheses $(H1)$ and $(H2)$ we prove that if $f$ is H\"{o}lder continuous then the regular part of the free-boundary is a $C^{1,\beta}$-manifold for some $\beta$, while if $f$ satisfies hypothesis $(H4)$ with $a>2$ then $Reg(u)$ is a $C^1$-manifold. So the process of weakening the regularity of the coefficients goes along two directions: obtaining a strong or a weak regularity of the regular part of the free-boundary. Our work follows the first direction: with the technical hypothesis $(H4)$ with $a>2$ for $f$, which is weaker than H\"{o}lder continuity, and with hypothesis $(H1)$ on the matrix $\mathbb{A}$, we improve on the current literature. The best regularity for $\mathbb{A}$ that allows us to have a strong regularity of $Reg(u)$ still remains, to our knowledge, an open problem. 
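To make the comparison with H\"{o}lder continuity concrete, we note that, for instance (the specific modulus below is our own illustrative choice), a function $f$ whose modulus of continuity is $\omega(t)=(1+|\log t|)^{-4}$ satisfies the integrability condition in $(H4)$ for every $a<3$, and in particular for some $a>2$, since \begin{equation*} \int_0^1 \frac{\omega(t)}{t}\,|\log t|^a\,dt=\int_0^{+\infty}\frac{u^a}{(1+u)^{4}}\,du<\infty \qquad \text{whenever } a<3, \end{equation*} while such an $f$ need not be H\"{o}lder continuous of any exponent.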
Regarding the best regularity for $f$, from \cite{Bl01} we know that it is the Dini continuity; we do not reach it but we improve the already investigated condition of H\"{o}lder continuity. The natural sequel of these results is the study of obstacle problems for nonlinear energies. The future developments aim in the same direction as \cite{FGerS}, where the author, Focardi and Spadaro provide an exhaustive analysis of the free-boundary for nonlinear variational energies as the outcome of analogous results for the classical obstacle problem for quadratic energies with Lipschitz coefficients.\\ To conclude, the paper is organized as follows. In Section $2$ we fix the notation of fractional Sobolev spaces. In Section $3$ we prove the existence, the uniqueness and the regularity of the minimizer $u$. In Section $4$ we introduce the sequence of rescaled functions, prove the existence of blow-ups and state a property of nondegeneracy of the solution of the obstacle problem. In Sections $5$ and $7$ we prove, respectively, the quasi-monotonicity formulae of Weiss and Monneau. In Section $6$ we prove the $2$-homogeneity and the nonzero value property of blow-ups, classify the blow-ups and distinguish the points of the free-boundary into regular and singular ones. In Section $8$ we deduce the uniqueness of blow-ups in the case of regular and singular points. In Section $9$ we state the regularity properties of the free-boundary. \section{Preliminaries: Fractional Sobolev spaces} In order to fix the notation, we report the definition of the fractional Sobolev spaces. See \cite{NPV,LM} for more detailed references. \begin{defi} For any real $\l\in (0,1)$ and for all $p\in (0,\infty)$ we define the space \begin{equation} W^{\l,p}(\Omega):=\left\{v\in L^p(\Omega) : \frac{|v(x)-v(y)|}{|x-y|^{\frac{n}{p}+\l}}\in L^p(\Omega\times\Omega) \right\}, \end{equation} i.e., an intermediate Banach space between $L^p(\Omega)$ and $W^{1,p}(\Omega)$, endowed with the norm \begin{equation*} \|v\|_{W^{\l,p}(\Omega)}=\left(\int_\Omega |v|^p\,dx +\iint_{\Omega\times\Omega} \frac{|v(x)-v(y)|^p}{|x-y|^{n+\l p}}\,dx\,dy\right)^\frac{1}{p}. \end{equation*} If $\l>1$ is not an integer we denote by $\lfloor \l \rfloor$ its integer part and by $\sigma=\l - \lfloor \l \rfloor$ its fractional part. In this case the space $W^{\l,p}$ consists of the functions $v\in W^{\lfloor \l \rfloor,p}$ such that the distributional derivatives $D^\a v\in W^{\sigma,p}$ with $|\a|=\lfloor \l \rfloor$: \begin{equation*} W^{\l,p}(\Omega):=\left\{v\in W^{\lfloor \l \rfloor,p}(\Omega) : \frac{|D^\a v(x)-D^\a v(y)|}{|x-y|^{\frac{n}{p}+\sigma}}\in L^p(\Omega\times\Omega),\quad \forall\a \,\,\textit{such that}\,\, |\a|=\lfloor \l \rfloor \right\}. \end{equation*} $W^{\l,p}(\Omega)$ is a Banach space with the norm \begin{equation*} \|v\|_{W^{\l,p}(\Omega)}=\left(\|v\|^p_{W^{\lfloor \l \rfloor,p}(\Omega)} +\sum_{|\a|=\lfloor \l \rfloor}\|D^\a v\|^p_{W^{\sigma,p}(\Omega)}\right)^\frac{1}{p}. \end{equation*} \end{defi} We state three results on fractional Sobolev spaces which will be useful in what follows. Theorems \ref{cor imm fraz} and \ref{toerema-traccia-fraz} are proved, respectively, in \cite{Mu} and \cite{Sch} for Besov spaces; thanks to \cite[Remark 3.6]{Sch} and \cite[Theorem 14.40]{Leoni} we can reformulate these results in our notation. Theorem \ref{cor Morrey fraz} is obtained by combining the classical Morrey theorem, \cite[Theorem 8.3]{NPV} and Theorem \ref{cor imm fraz}. 
\begin{teo}[Embedding Theorem {\cite[Theorem 9]{Mu} and \cite[Theorem 6.5]{NPV}}]\label{cor imm fraz} Let $v\in W^{\l,p}(\Omega)$ with $\l>0$, $1 <p<\infty$, $p\l<n$ and $\Omega \subset \mathbb{R}^n$ be a bounded open set of class $C^{0,1}$. Then for all $0<t<\l$ there exists a constant $C=C(n,\l,p,t,\Omega)$ for which \begin{equation*} \|v\|_{W^{t,\frac{np}{n-(\l-t)p}}(\Omega)} \leq C\, \|v\|_{W^{\l,p}(\Omega)}. \end{equation*} If $t=0$ there exists a constant $C=C(n,\l,p,\Omega)$ for which \begin{equation*} \|v\|_{L^{\frac{np}{n-\l p}}(\Omega)} \leq C\, \|v\|_{W^{\l,p}(\Omega)}. \end{equation*} \end{teo} \begin{teo}[Embedding Theorem]\label{cor Morrey fraz} Let $p\in [1,\infty)$ and $\l>0$ be such that $\l p>n$, and let $\Omega\subset\mathbb{R}^n$ be an extension domain for $W^{\l,p}$. Then there exists a positive constant $C=C(n,p,\l,\Omega)$ for which \begin{equation} \|v\|_{C^{h,\a}(\Omega)}\leq C\|v\|_{W^{\l,p}(\Omega)}, \end{equation} for all $v\in W^{\l,p}(\Omega)$, where $h\geq 0$ is an integer and $\a\in (0,1)$ satisfy $h+\a\leq \l-\frac{n}{p}$. \end{teo} \begin{teo}[Trace Theorem {\cite[Theorem 3.16]{Sch}}]\label{toerema-traccia-fraz} Let $n\geq 2$, $0<p<\infty$, $\l>\frac{1}{p}$ and let $U\subset\mathbb{R}^n$ be a bounded $C^k$ domain with $k>\l$. Then there exists a bounded operator \begin{equation} \c_0:W^{\l,p}(U)\longrightarrow W^{\l-\frac{1}{p},p}(\partial U; \mathcal{H}^{n-1}), \end{equation} such that $\c_0(v)=v_{|\partial U}$ for all functions $v\in W^{\l,p}(U)\cap C(\overline{U})$. $\c_0$ is called the \emph{trace operator}. \end{teo} \begin{oss}\label{oss stime imm e traccia} Let $p,\l$ be exponents as in Theorem \ref{toerema-traccia-fraz}, with $p\l<n$, and let $\sigma:=\l-\lfloor \l \rfloor$. For $U=B_r$, we examine how the constant of the trace operator changes when the radius $r$ changes. By taking into account Theorems \ref{toerema-traccia-fraz} and \ref{cor imm fraz} we have the following embeddings \begin{equation*} W^{\l,p}(B_r)\hookrightarrow W^{\l-\frac{1}{p},p}(\partial B_r; \mathcal{H}^{n-1}) \hookrightarrow L^1(\partial B_r; \mathcal{H}^{n-1}). \end{equation*} Then, setting $v_r(y)=v(ry)$ \begin{equation*} \begin{split} \int_{\partial B_r}&|\c_0(v)(x)|\,d\mathcal{H}^{n-1} \stackrel{x=ry}{=}r^{n-1}\int_{\partial B_1}|\c_0(v_r)(y)|\,d\mathcal{H}^{n-1}\\ &\leq c(1) r^{n-1}\bigg(\sum_{|\a|\leq \lfloor \l \rfloor}\int_{B_1}|D^\a v_r(y)|^p\,dy + \iint_{B_1\times B_1}\frac{|v_r(y)-v_r(z)|^p}{|y-z|^{n+\sigma p}}\,dy\,dz\bigg)^\frac{1}{p}\\ &\stackrel{y=\frac{x}{r}}{=} c(1) r^{n-1}\,r^{-\frac{n}{p}} \bigg(\sum_{|\a|\leq \lfloor \l \rfloor}\int_{B_r}|D^\a v(x)|^p\,dx+r^{\sigma p}\iint_{B_r\times B_r}\frac{|v(x)-v(w)|^p}{|x-w|^{n+\sigma p}}\,dx\,dw\bigg)^\frac{1}{p}\\ &\leq C r^{n-1}\,r^{-\frac{n}{p}} \|v\|_{W^{\l,p}(B_r)}. \end{split} \end{equation*} Therefore \begin{equation}\label{stima imm 1} \|\c_0(v)\|_{L^1(\partial B_r; H^{n-1})}\leq C\,r^{n-1}\,r^{-\frac{n}{p}} \|v\|_{W^{\l,p}(B_r)}. \end{equation} If $p\leq n$, let $\frac{n-(\l-t)p}{np}<t<\l$, i.e. $\frac{n-\l p}{(n-1)p}<t<\l$, and note that $\l>\frac{n-\l p}{(n-1)p}$ if and only if $\l>\frac{1}{p}$. We infer by Theorem \ref{cor imm fraz} and by Theorem \ref{toerema-traccia-fraz} the following \begin{equation*} W^{\l,p}(B_r)\hookrightarrow W^{t,\frac{np}{n-(\l-t)p}}(B_r)\hookrightarrow W^{t-\frac{n-(\l-t)p}{np},\frac{np}{n-(\l-t)p}}(\partial B_r; \mathcal{H}^{n-1}) \hookrightarrow L^1(\partial B_r; \mathcal{H}^{n-1}). 
\end{equation*} Applying the same reasoning used to deduce \eqref{stima imm 1}, we achieve in particular \begin{equation}\label{stima imm q} \|\c_0(v)\|_{L^1(\partial B_r; H^{n-1})}\leq C\, r^{n-1}\,r^{-\frac{n-(\l-t)p}{p}} \|v\|_{W^{t,\frac{np}{n-(\l-t)p}}(B_r)}. \end{equation} \end{oss} \section{The classical obstacle problem} Let $\Omega\subset \mathbb{R}^n$ be a smooth, bounded and open set, $n\geq 2$, let $\mathbb{A}:\Omega\to \mathbb{R}^{n\times n}$ be a matrix-valued field and $f:\Omega\to \mathbb{R}$ be a function satisfying assumptions $(H1)$-$(H3)$ seen in the Introduction. \begin{oss} By \eqref{A coerc cont}, we immediately deduce that $\mathbb{A}$ is bounded. In particular, $\|\mathbb{A}\|_{L^\infty(\Omega)}\leq \Lambda$. \end{oss} We define, for every open $A\subset \Omega$ and for each function $v\in H^1(\Omega)$, the following energy: \begin{equation} \mathcal{E}[v,A]:=\int_A \left(\langle \mathbb{A}(x)\nabla v(x),\nabla v(x)\rangle +2f(x)v(x)\right)\,dx, \end{equation} with $\mathcal{E}[v,\Omega]:=\mathcal{E}[v]$. \begin{prop} We consider the following minimum problem with obstacle: \begin{equation}\label{problema ad ostacolo} \inf_K \mathcal{E}[\cdot], \end{equation} where $K\subset H^1(\Omega)$ is the weakly closed convex set given by \begin{equation} K:=\{v\in H^1(\Omega)\, |\, v\geq 0\, \mathcal{L}^n\textit{-a.e. on}\,\, \Omega,\, \c_0(v)=g\,\textit{on}\,\, \partial\Omega \}, \end{equation} with $g\in H^\frac{1}{2}(\partial\Omega)$ being a nonnegative function. Then there exists a unique solution of the minimum problem \eqref{problema ad ostacolo}. \end{prop} \begin{proof} The hypotheses $(H1)$-$(H3)$ imply that the energy $\mathcal{E}$ is coercive and strictly convex on $K$; moreover, $\mathcal{E}$ is lower semicontinuous for the weak topology of $H^1(\Omega)$. Hence there exists a unique minimizer which, as we stated in the introduction, we will denote by $u$. \end{proof} Now, we can fix the notation for the \emph{coincidence set}, \emph{non-coincidence set} and the \emph{free-boundary} by defining the following: \begin{equation}\label{notation free-boundary set} \Lambda_u:=\{u=0\},\qquad N_u:=\{u>0\},\qquad \Gamma_u=\partial\Lambda_u\cap\Omega. \end{equation} The minimizer $u$ satisfies a partial differential equation both in the distributional sense and a.e. on $\Omega$, and therefore enjoys good regularity properties: \begin{prop}\label{prop PDE_u} Let $u$ be the minimum of $\mathcal{E}$ in $K$. Then \begin{equation}\label{PDE_u} \mathrm{div}(\mathbb{A}(x)\nabla u(x))=f(x)\chi_{\{u>0\}}(x) \qquad \textit{a.e. on $\Omega$ and in $\mathcal{D}'(\Omega)$}. \end{equation} Therefore, \begin{itemize} \item[(i)] if $ps<n$, setting $p^*=p^*(s,p):=\frac{np}{n-sp}$, we have $u\in W^{2,p^*}\cap C^{1,1-\frac{n}{p^*}}(\Omega)$; \item[(ii)] if $ps=n$ we have $u\in W^{2,q}\cap C^{1,1-\frac{n}{q}}(\Omega)$ for all $1<q<\infty$. \end{itemize} \end{prop} \begin{proof} For the first part of the proof we refer to \cite[Proposition 2.2]{FGS} and \cite[Proposition 3.2]{FGerS}. Now, based on equation \eqref{PDE_u}, we can prove (i), the regularity of $u$ if $ps<n$. From Theorem \ref{cor imm fraz} $W^{1+s,p}(\Omega) \hookrightarrow W^{1,p^*}(\Omega)$ with $p^*=\frac{np}{n-sp}$. We also note that by hypothesis $(H1)$ we have $p^*>n$, so by the Morrey theorem $\mathbb{A}\in C^{0,1-\frac{n}{p^*}}(\Omega)$. Since $u$ is the solution of \eqref{PDE_u}, and thanks to \cite[Theorem 3.13]{HL}, $u\in C_{loc}^{1,1-\frac{n}{p^*}}(\Omega)$. 
We consider the equation \begin{align}\label{PDE u Miranda} \mathrm{Tr}(\mathbb{A}\nabla^2 v) = f\chi_{\{u>0\}}-\sum_{j}\mathrm{div}(a^j)\frac{\partial u}{\partial x_j}=:\varphi, \end{align} where $\mathrm{Tr}$ denotes the \emph{trace} of the matrix $\mathbb{A}\nabla^2 v$ and $a^j$ denotes the $j$-th column of $\mathbb{A}$. Since $\nabla u\in L^\infty_{loc}(\Omega)$ and $\mathrm{div}(a^j)\in L^{p^*}(\Omega)$ for all $j\in\{1,\dots,n\}$, we have $\varphi\in L^{p^*}_{loc}(\Omega)$. So, from \cite[Corollary 9.18]{GT} there exists a unique $v\in W_{loc}^{2,p^*}(\Omega)$ solution of \eqref{PDE u Miranda}. We observe that the identity $\mathrm{Tr}(\mathbb{A}\nabla^2 v)=\mathrm{div}(\mathbb{A}\nabla v)-\sum_{j}\mathrm{div}(a^j)\frac{\partial v}{\partial x_j}$ is verified. So, if we rewrite \eqref{PDE u Miranda} as follows \begin{equation} \mathrm{div}(\mathbb{A}\nabla v)-\sum_{j}\mathrm{div}(a^j)\frac{\partial v}{\partial x_j} =\varphi, \end{equation} we have that $u$ and $v$ are two solutions of this equation. Then by \cite[Theorem 8.3]{GT} we obtain $u=v$ and the thesis follows. If instead $ps=n$, from \cite[Theorem 6.10]{NPV} we have $\mathbb{A}\in W^{1,q}$ for all $1<q<\infty$, and so $u\in W^{2,q}\cap C^{1,1-\frac{n}{q}}(\Omega)$ for all $1<q<\infty$. Applying the same reasoning used to deduce item (i), we obtain item (ii) of the thesis. \end{proof} We note that thanks to the continuity of $u$ the sets defined in \eqref{notation free-boundary set} are pointwise defined and we can equivalently write $\Gamma_u=\partial N_u\cap\Omega$. \begin{oss} The assumption $f\geq c_0>0$ in $(H3)$ is not necessary in order to prove the regularity of $u$ and that the minimum $u$ satisfies the equation~\eqref{PDE_u} (cf. \cite[Proposition~3.2, Theorem~3.4, Corollary~3.5]{FGerS}). \end{oss} \section{The blow-up method: Existence of blow-ups and nondegeneracy of the solution} In this section we shall investigate the existence of blow-ups. In this connection, we need to introduce for any point $x_0\in \Gamma_u$ a sequence of rescaled functions: \begin{equation}\label{u_x_0 r} u_{x_0,r}:=\frac{u(x_0+rx)}{r^2}. \end{equation} We want to prove the existence of limits (in a strong sense) of this sequence as $r\to 0^+$; these limits are called \emph{blow-ups}. We start by observing that the rescaled functions satisfy an appropriate PDE and enjoy uniform $W^{2,p^*}$ estimates. We can prove this thanks to the regularity theory for elliptic equations. \begin{prop}\label{prop u_r limitata W2p} Let $u$ be the solution to the obstacle problem \eqref{problema ad ostacolo} and $x_0\in \Gamma_u$. Then, for every $R>0$ there exists a constant $C>0$ such that, for every $r\in (0,\frac{\mathrm{dist}(x_0, \partial\Omega)}{4R})$ \begin{equation}\label{u_r limitata W2p} \|u_{x_0,r}\|_{W^{2,p^*}(B_R(x_0))}\leq C. \end{equation} In particular, the functions $u_{x_0,r}$ are equibounded in $C^{1,\c'}$ for $\c'\leq \c:=1-\frac{n}{p^*}$. \end{prop} \begin{proof} From \eqref{u_x_0 r} and Proposition \ref{prop PDE_u} we have \begin{equation}\label{PDE_ur} \mathrm{div}(\mathbb{A}(x_0+rx)\nabla u_{x_0,r}(x))=f(x_0+rx)\chi_{\{u_{x_0,r}>0\}}(x) \quad \textit{a.e. on $B_{4R}(x_0)$ and in $\mathcal{D}'(B_{4R}(x_0))$}, \end{equation} and $u_{x_0,r}\in W^{2,p^*}\cap C^{1,\c}(B_{4R}(x_0))$. Since $x_0\in \Gamma_u$, we have $u_{x_0,r}(\underline{0})=0$. Since $u_{x_0,r}\geq 0$, from \cite[Theorems $8.17$ and $8.18$]{GT} we have \begin{equation}\label{u_r leq f} \|u_{x_0,r}\|_{L^\infty(B_{4R}(x_0))}\leq C(R,x_0)\|f\|_{L^\infty(B_{4R}(x_0))}. 
\end{equation} Thanks to \cite[Theorem 8.32]{GT} and \eqref{u_r leq f} we obtain \begin{equation}\label{u_r C1 hold <f} \|u_{x_0,r}\|_{C^{1,\c}(B_{2R}(x_0))}\leq C\, \big(\|u_{x_0,r}\|_{L^\infty(B_{4R}(x_0))} + \|f\|_{L^\infty(B_{4R}(x_0))}\big)\leq C' \|f\|_{L^\infty(B_{4R}(x_0))}. \end{equation} We observe that, as in Proposition \ref{prop PDE_u}, $u_{x_0,r}$ is a solution to \begin{equation}\label{PDE u_r Miranda} \mathrm{Tr}\left(\mathbb{A}(x_0+rx)\nabla^2u_{x_0,r}(x)\right) = f(x_0+rx)\chi_{\{u_{x_0,r}>0\}} - r \sum_{j}\mathrm{div}\left(a^j(x_0+rx)\right)\frac{\partial u_{x_0,r}}{\partial x_j}(x)=:\varphi_r(x), \end{equation} with $\varphi_r\in L^{p^*}(B_{2R}(x_0))$. Then from \cite[Theorem 9.11]{GT} \begin{equation} \|u_{x_0,r}\|_{W^{2,p^*}\left(B_R(x_0)\right)}\leq C \left(\|u_{x_0,r}\|_{L^{p^*}(B_{2R}(x_0))} + \|\varphi_r\|_{L^{p^*}(B_{2R}(x_0))}\right). \end{equation} We define $\mathrm{div}(\mathbb{A}):=(\mathrm{div}(a^j))_j$, namely the vector of the divergences of the columns of $\mathbb{A}$. Then by \eqref{u_r C1 hold <f} \begin{equation*} \begin{split} \|\varphi_r\|^{p^*}_{L^{p^*}(B_{2R}(x_0))}&=\int_{B_{2R}(x_0)} |f(rx)\chi_{\{u_r>0\}} - r\langle\mathrm{div}\mathbb{A}(rx),\nabla u_r(x)\rangle|^{p^*}\,dx\\ &\leq C\,\|f\|^{p^*}_{L^\infty(B_{4R}(x_0))}\bigg(1 + r^{p^*-n}\int_{B_{2rR}(x_0)} |\mathrm{div}\mathbb{A}(y)|^{p^*}\,dy\bigg)\\ &\leq C\,\|f\|^{p^*}_{L^\infty(B_{4R}(x_0))}\bigg(1 + \Big(\frac{\mathrm{dist}\left(x_0, \partial\Omega\right)}{4R}\Big)^{p^*-n}\|\mathrm{div}\mathbb{A}\|^{p^*}_{L^{p^*}(\Omega)}\bigg). \end{split} \end{equation*} So $\|u_{x_0,r}\|_{W^{2,p^*}(B_R(x_0))}\leq C$, where $C$ does not depend on $r$. \end{proof} \begin{cor}[Existence of blow-ups] Let $x_0\in \Gamma_u$ with $u$ the solution of \eqref{problema ad ostacolo}. Then for every sequence $r_k\downarrow 0$ there exists a subsequence $(r_{k_j})_j\subset (r_k)_k$ such that the rescaled functions $(u_{x_0,r_{k_j}})_j$ converge in $C^{1,\gamma}$. We call these limits \emph{blow-ups}. \end{cor} \begin{proof} The proof is an easy consequence of Proposition \ref{prop u_r limitata W2p} and the Ascoli-Arzelà Theorem. \end{proof} \begin{oss} Recalling that $x_0\in \Gamma_u$, we have $u(x_0)=0$ and $\nabla u(x_0)=0$, so \begin{equation}\label{u e grad u limitate} \|u\|_{L^\infty(B_r(x_0))}\leq C\, r^2 \qquad \mathrm{and} \qquad \|\nabla u\|_{L^\infty(B_r(x_0))}\leq C\, r. \end{equation} We note that the constant in \eqref{u e grad u limitate} only depends on the constant $C$ in \eqref{u_r limitata W2p} and is therefore uniformly bounded for points $x_0\in \Gamma_u\cap K$ for each compact set $K\subset \Omega$. \end{oss} As in the classical case, the solution $u$ has quadratic growth. The lack of regularity of the coefficients does not allow us to use the classical approach by Caffarelli \cite{Caf98}, also used by Focardi, Gelli and Spadaro in \cite[Lemma 4.3]{FGS}. The main problem is that $\mathrm{div}(a^j)$, which is merely an $L^{p^*}$ function, is not a priori pointwise defined, so the classical argument fails. We use a more general result of Blank and Hao in \cite[Chapter 3]{Blank}. \begin{prop}[{\cite[Theorem 3.9]{Blank}}]\label{prop crescita basso} Let $x_0\in \Gamma_u$, and $u$ be the minimum of \eqref{problema ad ostacolo}. Then, there exists a constant $\theta>0$ such that \begin{equation} \sup_{\partial B_r(x_0)} u \geq \theta\,r^2. \end{equation} \end{prop} To proceed in the analysis of the blow-ups we shall prove a monotonicity formula. 
This will be a key ingredient to prove the $2$-homogeneity of blow-ups and the fact that blow-ups are nonzero; it therefore allows us to classify the blow-ups. This result will be the focus of Section \ref{s:classification}, while the quasi-monotonicity formula will be the topic of Section \ref{s:Weiss}. \section{Weiss’ quasi-monotonicity formula}\label{s:Weiss} In this section we show that the monotonicity formulae established by Weiss \cite{Weiss} and Monneau \cite{Monneau} in the Laplace case ($\mathbb{A}\equiv I_n$) and by Focardi, Gelli and Spadaro \cite{FGS} in the $\mathbb{A}$ Lipschitz continuous and $f$ H\"{o}lder continuous case, hold in our case as well. As in \cite{FGS} we proceed by fixing the coordinate system: let $x_0\in \Gamma_u$ be any point of the free-boundary; then the affine change of variables \begin{equation} x \longmapsto x_0 + f(x_0)^{-\frac{1}{2}}\mathbb{A}^{\frac{1}{2}}(x_0)x = x_0 + \mathbb{L}(x_0)x \end{equation} leads to \begin{equation*} \mathcal{E}[u,\Omega]=f^{1-\frac{n}{2}}(x_0)\, \textrm{det}(\mathbb{A}^{\frac{1}{2}}(x_0))\,\, \mathcal{E}_{\mathbb{L}(x_0)}[u_{\mathbb{L}(x_0)},\Omega_{\mathbb{L}(x_0)}], \end{equation*} with the following notations: \begin{equation}\label{notazioni trasformate} \begin{split} \mathcal{E}_{\mathbb{L}(x_0)}[v,A]&:=\int_A \left(\langle \mathbb{C}_{x_0}\nabla v, \nabla v \rangle + 2\frac{f_{\mathbb{L}(x_0)}}{f(x_0)}v\right)\,dx \qquad \forall\,A\subset \Omega_{\mathbb{L}(x_0)},\\ \Omega_{\mathbb{L}(x_0)}&:=\mathbb{L}(x_0)^{-1}(\Omega - x_0),\\ u_{\mathbb{L}(x_0)}(x)&:=u(x_0+\mathbb{L}(x_0)x),\\ f_{\mathbb{L}(x_0)}&:=f(x_0+\mathbb{L}(x_0)x),\\ \mathbb{C}_{x_0}&:=\mathbb{A}^{-\frac{1}{2}}(x_0) \mathbb{A}(x_0+\mathbb{L}(x_0)x) \mathbb{A}^{-\frac{1}{2}}(x_0),\\ u_{\mathbb{L}(x_0),r}(y)&:=\frac{u(x_0+r\mathbb{L}(x_0)y)}{r^2}. \end{split} \end{equation} We observe that the image of the free-boundary in the new coordinates is: \begin{equation} \Gamma_{u_{\mathbb{L}(x_0)}}=\mathbb{L}(x_0)^{-1}(\Gamma_u - x_0) \end{equation} and we see that the energy $\mathcal{E}$ is minimized by $u$ if and only if the energy $\mathcal{E}_{\mathbb{L}(x_0)}$ is minimized by $u_{\mathbb{L}(x_0)}$. Therefore, for a fixed base point $x_0\in \Gamma_u$, we change the coordinate system and, as we stated before, \begin{equation*} \underline{0}\in \Gamma_{u_{\mathbb{L}(x_0)}} \qquad\qquad \mathbb{C}_{x_0}(\underline{0})=I_n\qquad\qquad f_{\mathbb{L}(x_0)}(\underline{0})=f(x_0). \end{equation*} The point of this change of variables is that, in a neighborhood of $\underline{0}$, the functional $\mathcal{E}_{\mathbb{L}(x_0)}[v,\Omega]$ is a perturbation of $\int_{\Omega}(|\nabla v|^2 +2v)\,dx$, which is the functional associated with the classical Laplacian case. We identify the two spaces in this section to simplify the ensuing calculations; then, with a slight abuse of notation, we reduce to \eqref{abuso_notazione}: \begin{equation*} x_0=\underline{0}\in \Gamma_u,\qquad \mathbb{A}(\underline{0})=I_n,\qquad f(\underline{0})=1. \end{equation*} We note that with this convention $\underline{0}\in\Omega$. In the new coordinate system we define \begin{equation}\label{def mu} \nu(x):=\frac{x}{|x|}\quad \textit{for}\,\, x\neq\underline{0}, \qquad\qquad\qquad \mu(x):=\left\{ \begin{array}{ll} \langle \mathbb{A}(x)\nu(x), \nu(x)\rangle & \quad \textrm{if} \,\, x\neq\underline{0}\\ 1 & \quad \textrm{otherwise}. \end{array} \right. \end{equation} We note that $\mu\in C^0(\Omega)$ by $(H1)$ and \eqref{abuso_notazione}. 
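Before stating the regularity of $\mu$, it may be useful to record the elementary computation, included here only for orientation, which shows why the normalization \eqref{abuso_notazione} makes $\mu$ continuous at the origin even though $\nu$ is discontinuous there: for $x\neq\underline{0}$, since $|\nu(x)|=1$, \begin{equation*} |\mu(x)-\mu(\underline{0})|=\big|\langle(\mathbb{A}(x)-\mathbb{A}(\underline{0}))\nu(x),\nu(x)\rangle\big|\leq|\mathbb{A}(x)-\mathbb{A}(\underline{0})|, \end{equation*} and the right-hand side vanishes as $x\to\underline{0}$ by the continuity of $\mathbb{A}$ in $(H2)$. The next lemma upgrades this observation to H\"{o}lder continuity and Sobolev regularity of $\mu$ on the whole of $\Omega$.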
We prove the following result: \begin{lem}\label{lem reg mu} Let $\mathbb{A}$ be a matrix-valued field. Assume that $(H1)$, $(H2)$ and \eqref{abuso_notazione} hold, then \begin{equation} \mu\in W^{1,q}\cap C^{0,1-\frac{n}{p^*}}(\Omega) \qquad\qquad \forall q<p^*, \end{equation} and \begin{equation}\label{mu limitata} \Lambda^{-1}\leq \mu(x)\leq \Lambda \qquad\qquad \forall x\in\Omega. \end{equation} \end{lem} \begin{proof} We prove that $\mu\in W^{1,q}$ for any $q<p^*$. We use a characterization of Sobolev spaces (see \cite[Proposition IX.3]{Brezis}): $\mu \in W^{1,q}(\Omega)$ if and only if there exists a constant $C>0$ such that for every open $\omega\subset\subset\Omega$ and for any $h\in \mathbb{R}^n$ with $|h|<\mathrm{dist}(\omega, \partial\Omega)$ it holds \begin{equation*} \|\tau_h \mu - \mu\|_{L^q(\omega)}\leq C\,|h|. \end{equation*} By the convexity of the function $|\cdot|^q$, remembering that $\mathbb{A}$ is $(1-\frac{n}{p^*})$-H\"{o}lder continuous and that for all $x\in \mathbb{R}^n$ and $h\neq -x$ the inequality $\left|(x+h)\frac{|x|}{|x+h|} - x \right|\leq 2|h|$ holds, we have \begin{equation*} \begin{split} \|\tau_h \mu - &\mu\|_{L^q(\omega)}^q = \int_\omega \left|\langle\mathbb{A}(x+h)\frac{x+h}{|x+h|},\frac{x+h}{|x+h|}\rangle - \langle\mathbb{A}(x)\frac{x}{|x|},\frac{x}{|x|}\rangle\right|^q\,dx\\ &=\int_\omega \bigg|\langle(\mathbb{A}(x+h)-\mathbb{A}(x))\frac{x+h}{|x+h|},\frac{x+h}{|x+h|}\rangle + \langle\left(\mathbb{A}(x)-\mathbb{A}(\underline{0})\right)\left(\frac{x+h}{|x+h|}+\frac{x}{|x|}\right),\left(\frac{x+h}{|x+h|}-\frac{x}{|x|}\right)\rangle\bigg|^q\,dx\\ &\leq 2^{q-1}\bigg(\int_\omega |\mathbb{A}(x+h)-\mathbb{A}(x)|^q\,dx + \int_\omega 2^q \bigg|\frac{\mathbb{A}(x)-\mathbb{A}(\underline{0})}{|x|}\bigg|^q\,\,\bigg| (x+h)\frac{|x|}{|x+h|} - x \bigg|^q\,dx \bigg)\\ &\leq 2^{q-1}\bigg(c |h|^q + 4^q |h|^q \int_\omega \frac{1}{|x|^{n\frac{q}{p^*}}}\,dx\bigg) = C\,|h|^q, \end{split} \end{equation*} where in the last equality we rely on $|x|^{-\frac{nq}{p^*}}$ being integrable if and only if $q<p^*$. By the Sobolev embedding theorem, we have $\mu\in C^{0,1-\frac{n}{q}}$ for any $n<q<p^*$. Thanks to the structure of $\mu$ we can gain more regularity: in fact $\mu\in C^{0,\c}$ with $\c=1-\frac{n}{p^*}$. We start by proving the inequality when one of the two points is $\underline{0}$: \begin{align*} |\mu(x)-\mu(\underline{0})|&=\bigg|\langle \mathbb{A}(x)\frac{x}{|x|}, \frac{x}{|x|}\rangle - 1\bigg| = \bigg|\langle\mathbb{A}(x)\frac{x}{|x|}, \frac{x}{|x|}\rangle - \langle\frac{x}{|x|}, \frac{x}{|x|}\rangle\bigg|\\ & =\bigg|\langle (\mathbb{A}(x)-\mathbb{A}(\underline{0}))\frac{x}{|x|}, \frac{x}{|x|}\rangle\bigg| \leq [\mathbb{A}]_{C^{0,\c}} \, |x|^\c. \end{align*} Let us assume now that $x,y\neq \underline{0}$ and prove the inequality in the remaining case. Let $z=|y|\frac{x}{|x|}$; then \begin{equation*} |\mu(x)-\mu(y)|\leq |\mu(x)-\mu(z)| + |\mu(z)-\mu(y)|. 
\end{equation*} As $\frac{z}{|z|}=\frac{x}{|x|}$, \begin{equation*} |\mu(x)-\mu(z)|=\bigg|\langle (\mathbb{A}(x)-\mathbb{A}(z))\frac{x}{|x|}, \frac{x}{|x|}\rangle\bigg| \leq [\mathbb{A}]_{C^{0,\c}} \, |x-z|^\c, \end{equation*} while, since $|z|=|y|=:r$, \begin{equation*} \begin{split} |\mu(z)&-\mu(y)|=\bigg|\langle\mathbb{A}(z)\frac{z}{r},\frac{z}{r}\rangle - \langle\mathbb{A}(y)\frac{y}{r},\frac{y}{r}\rangle\bigg| \leq \bigg|\langle(\mathbb{A}(z)-\mathbb{A}(y))\frac{z}{r},\frac{z}{r}\rangle\bigg| + \bigg|\langle\mathbb{A}(y)\frac{z}{r},\frac{z}{r}\rangle - \langle\mathbb{A}(y)\frac{y}{r},\frac{y}{r}\rangle\bigg| \\ &\leq [\mathbb{A}]_{C^{0,\c}}|z-y|^\c + \bigg|\langle (\mathbb{A}(y)-\mathbb{A}(\underline{0}))\frac{z+y}{r},\frac{z-y}{r}\rangle\bigg| \leq [\mathbb{A}]_{C^{0,\c}}\bigg(|z-y|^\c + 2 \frac{|z-y|^{1-\c}}{r^{1-\c}} |z-y|^\c \bigg)\\ &\leq [\mathbb{A}]_{C^{0,\c}}\bigg(|z-y|^\c + 2^{1-\c}|z-y|^\c \bigg)\leq C [\mathbb{A}]_{C^{0,\c}}|z-y|^\c. \end{split} \end{equation*} Therefore, since $|x-z|=| |x|-|y||\leq |x-y|$ and $|z-y|\leq |z-x|+|x-y|\leq 2|x-y|$ we have the thesis \begin{equation*} |\mu(x)-\mu(y)|\leq C [\mathbb{A}]_{C^{0,\c}}|x-y|^\c. \end{equation*} \end{proof} We introduce the rescaled volume and boundary energies \begin{equation} \begin{split}\label{Er} \mathcal{E}(r)&:=\mathcal{E}[u,B_r]=\int_{B_r}\left(\langle \mathbb{A}(x)\nabla u(x),\nabla u(x)\rangle +2f(x)u(x)\right)\,dx \\ &=r^{n+2}\int_{B_1}\left(\langle \mathbb{A}(rx)\nabla u_r(x),\nabla u_r(x)\rangle +2f(rx)u_r(x)\right)\,dx \end{split} \end{equation} \begin{equation}\label{Hr} \mathscr{H}(r):=\int_{\partial B_r}\mu(x)u^2(x)\,d\mathcal{H}^{n-1}=r^{n+3}\int_{\partial B_1}\mu(rx)u_r^2(x)\,d\mathcal{H}^{n-1}. \end{equation} We now introduce an energy ``à la Weiss'' combining and rescaling the terms above: \begin{equation}\label{Phir} \Phi(r):=r^{-n-2}\mathcal{E}(r) - 2\,r^{-n-3}\mathscr{H}(r). \end{equation} \begin{oss} By \eqref{abuso_notazione}, \eqref{Er}, \eqref{Hr} and Proposition \ref{prop u_r limitata W2p} we have \begin{equation}\label{E H-O_grande} \begin{split} \mathcal{E}(r)&=\int_{B_r}(|\nabla u|^2 + 2u)\,dx + O\big(r^{n+2}(r^{\c}+\omega(r))\big)\stackrel{\eqref{u e grad u limitate}}{=} O(r^{n+2}), \\ \mathscr{H}(r)&=\int_{\partial B_r}u^2\,d\mathcal{H}^{n-1} + O(r^{n+3+\c})\stackrel{\eqref{u e grad u limitate}}{=} O(r^{n+3}). \end{split} \end{equation} Hence the choice of the normalizing factors in \eqref{Phir}. \end{oss} To complete the notation in \eqref{notazioni trasformate} we show the transformed versions of \eqref{def mu} and \eqref{Phir}: \begin{equation}\label{notazioni LL e energie} \begin{split} \mu_{\mathbb{L}(x_0)}(y):=&\langle \mathbb{C}_{x_0}(y)\nu(y),\nu(y)\rangle \qquad y\neq \underline{0}, \qquad\qquad \mu_{\mathbb{L}(x_0)}(\underline{0}):=1,\\ \Phi_{\mathbb{L}(x_0)}(r):=&\int_{B_1}\big(\langle\mathbb{C}_{x_0}(ry)\nabla u_{\mathbb{L}(x_0),r}(y), \nabla u_{\mathbb{L}(x_0),r}(y) \rangle +2\,\frac{f_{\mathbb{L}(x_0)}(ry)}{f(x_0)}u_{\mathbb{L}(x_0),r}\big)\,dy\\ & -2\int_{\partial B_1}\mu_{\mathbb{L}(x_0)}(ry)\,u_{\mathbb{L}(x_0),r}^2(y)\,d\mathcal{H}^{n-1} \end{split} \end{equation} \begin{oss} By the definition above and in view of Lemma~\ref{lem reg mu} we note that $\Lambda^{-2}\leq \mu_{\mathbb{L}(x_0)}(y)\leq \Lambda^2$ and $\mu_{\mathbb{L}(x_0)}\in C^{0,\c}(\Omega)$. \end{oss} \subsection{Estimate of the derivatives of $\mathcal{E}$ and $\mathscr{H}$} To estimate the derivative of the auxiliary energy $\Phi$ we estimate the derivatives of the two terms $\mathcal{E}$ and $\mathscr{H}$. 
We start with $\mathcal{E}$. For this purpose, following Focardi, Gelli and Spadaro \cite{FGS}, we use a generalization of the Rellich–Necas identity due to Payne–Weinberger (see \cite[Lemma 3.4]{FGS}) in order to compute the derivative. \begin{prop}\label{E'} There exists a constant $C_1>0$, $C_1=C_1(\l,C,\|\mathbb{A}\|_{W^{1+s,p}(\Omega)})$, such that for $\mathcal{L}^1$-a.e. $r\in(0,\mathrm{dist}(\underline{0},\partial\Omega))$, \begin{equation}\label{estimate E'} \begin{split} \mathcal{E}'(r)&=2\int_{\partial B_r}\mu^{-1}\langle \mathbb{A}\nu, \nabla u \rangle^2\,d\mathcal{H}^{n-1} + \frac{1}{r}\int_{B_r}\langle\mathbb{A}\nabla u,\nabla u\rangle\,\mathrm{div}(\mu^{-1}\mathbb{A} x)\,dx\\ &-\frac{2}{r}\int_{B_r}f \langle\mu^{-1}\mathbb{A} x,\nabla u\rangle\,dx - \frac{2}{r}\int_{B_r}\langle\mathbb{A}\nabla u, \nabla^T(\mu^{-1}\mathbb{A} x)\nabla u\rangle\, dx\\ &+2\int_{\partial B_r} f u\, d\mathcal{H}^{n-1} + \varepsilon(r), \end{split} \end{equation} with $|\varepsilon(r)|\leq C_1 \mathcal{E}(r)\,r^{-\frac{n}{p^*}}$. \end{prop} \begin{proof} For the details of the proof we refer to \cite[Proposition 3.5]{FGS}. The difference consists in the regularity of $\mathrm{div}\mathbb{A}$, which in our case is only $L^{p^*}$ instead of $L^\infty$. In the estimate of $\varepsilon(r)$, obtained through the H\"{o}lder inequality, the factor $r^{-\frac{n}{p^*}}$ appears; this is unbounded, but it is integrable in $r$, and this will be crucial in proving the quasi-monotonicity formula. \end{proof} \begin{oss} The term $\varepsilon(r)$ is exactly: \begin{equation*} \varepsilon(r)=\frac{1}{r}\int_{B_r}\mu^{-1}\, (\nabla\mathbb{A} : \mathbb{A} x \otimes \nabla u\otimes \nabla u)\,dx. \end{equation*} In \cite[Proposition 3.5]{FGS}, under the hypothesis $\mathbb{A}\in W^{1,\infty}(\Omega,\mathbb{R}^{n\times n})$, Focardi, Gelli and Spadaro showed the equality \eqref{estimate E'} with $\varepsilon(r)$ as above, and proved that $|\varepsilon(r)|\leq C\, \mathcal{E}(r)$. \end{oss} The next step is to estimate the derivative of $\mathscr{H}(r)$. By definition $\mathscr{H}(r)$ is a boundary integral; we follow the strategy of \cite[Proposition 3.6]{FGS}, which consists in rewriting it as a volume integral using the Divergence Theorem and then differentiating through the Coarea formula. The difficulty is that we have to integrate the function $\mathrm{div}\mathbb{A}$ on $\partial B_r$, but by $(H1)$ $\mathrm{div}\mathbb{A}$ is a function in $W^{s,p}(\Omega)$ with $s>\frac{1}{p}$, and it is not, a priori, well defined on $\partial B_r$. Therefore, using the notion of \emph{trace}, we can prove a corollary of the Coarea formula. \begin{prop}\label{coarea-traccia} Let $\varphi\in W^{\l,p}(B_1)$ with $\l>\frac{1}{p}$. Then, denoting by $\c_0(\varphi)$ the trace of $\varphi$ (see Theorem \ref{toerema-traccia-fraz}), for $\mathcal{L}^1$-a.e. $r\in (0,1)$ it holds \begin{equation} \frac{d}{dr}\bigg(\int_{B_r} \varphi\,dx\bigg) = \int_{\partial B_r} \c_0(\varphi)\,d\mathcal{H}^{n-1}. \end{equation} \end{prop} \begin{proof} Let $(\varphi_j)_j\subset C^\infty(\overline{B_1})$ be such that $\varphi_j\to \varphi$ in $W^{\l,p}(B_1)$. For each function $\varphi_j$, by the Coarea formula, for $\mathcal{L}^1$-a.e. $r\in (0,1)$ it holds that \begin{equation}\label{coarea_j} \frac{d}{dr}\bigg(\int_{B_r} \varphi_j\,dx\bigg) = \int_{\partial B_r} \varphi_j\,d\mathcal{H}^{n-1}. 
\end{equation} By the continuity of the trace operator and Lebesgue's dominated convergence theorem we have \begin{equation}\label{cont-j-traccia} \lim_j \int_{\partial B_r} \varphi_j\,d\mathcal{H}^{n-1} = \int_{\partial B_r} \c_0(\varphi)\,d\mathcal{H}^{n-1}. \end{equation} Let us now prove that $\lim_j\frac{d}{dr}\bigg(\int_{B_r} \varphi_j\,dx\bigg)=\frac{d}{dr}\bigg(\int_{B_r} \varphi\,dx\bigg)$. In this connection we define the function $G(r):=\int_{B_r} \varphi\,dx$ and the sequence $G_j(r):=\int_{B_r} \varphi_j\,dx$; we can prove that $G_j\to G$ in $W^{1,1}((0,1))$. We recall that, by a well-known characterization, the functions in $W^{1,1}$ on an interval are absolutely continuous functions. In order to deduce that $G,G_j\in W^{1,1}$ we have to prove that for any $\varepsilon>0$ there exists a $\d>0$ such that for any finite sequence of disjoint intervals $(a_k,b_k)\subset (0,1)$ the condition $\sum_k |G_j(b_k)-G_j(a_k)|<\varepsilon$ holds if $\sum_k |b_k-a_k|<\d$. Therefore, we estimate as follows \begin{align*} \sum_k |G_j(b_k)-G_j(a_k)|=\sum_k \bigg|\int_{B_{b_k}\setminus B_{a_k}} \varphi_j\,dx\bigg|\leq \int_{\bigcup_k (B_{b_k}\setminus B_{a_k})} |\varphi_j|\,dx<\varepsilon, \end{align*} where in the last inequality we use the absolute continuity of the integral and \begin{equation*} \mathcal{L}^n\Big(\bigcup_k (B_{b_k}\setminus B_{a_k})\Big)=n\omega_n\int_{\bigcup_k (a_k,b_k)} r^{n-1}\,dr\leq n\omega_n\int_{1-\d}^1 r^{n-1}\,dr \leq n\omega_n \d. \end{equation*} The previous argument holds for $G$ as well. Thus $G$ and $G_j$ are differentiable $\mathcal{L}^1$-a.e. on~$(0,1)$. On the other hand, by the Coarea formula, we can represent the weak derivatives of $G_j$ and $G$ in the following way: \begin{equation*} G_j'(r)=\int_{\partial B_r} \varphi_j\,d\mathcal{H}^{n-1}, \quad G'(r)=\int_{\partial B_r} \varphi\,d\mathcal{H}^{n-1} \qquad\qquad \mathcal{L}^1\textit{-a.e.}\,\,r\in(0,1). \end{equation*} Thus \begin{align*} \|G_j-G\|_{L^1((0,1))}&=\int_0^1 \Big|\int_{B_r}(\varphi_j-\varphi)\,dx\Big|\,dr\leq \|\varphi_j-\varphi\|_{L^1(B_1)}\xrightarrow{j\to \infty} 0,\\ \|G_j'-G'\|_{L^1((0,1))}&=\int_0^1 \Big|\int_{\partial B_r}(\varphi_j-\varphi)\,d\mathcal{H}^{n-1}\Big|\,dr\leq \|\varphi_j-\varphi\|_{L^1(B_1)}\xrightarrow{j\to \infty} 0, \end{align*} therefore, up to a subsequence, $G_j'\to G'$ $\mathcal{L}^1$-a.e. on $(0,1)$; then, combining this with \eqref{coarea_j} and \eqref{cont-j-traccia}, we have the thesis. \end{proof} We now estimate the derivative of $\mathscr{H}(r)$.\\ We define an exponent $\Theta=\Theta(s,p,n,t_0)$, with $t_0\in \left(\frac{n-sp}{p(n-1)}, s\right)$ as in Remark \ref{oss stime imm e traccia}, for which the term $r^{-\frac{n}{\Theta}}$ is integrable. For this purpose we define: \begin{equation}\label{B} \Theta=\Theta(s,p,n,t_0)=\left\{ \begin{array}{ll} p & \quad \textrm{if} \,\, p>n\\ \frac{np}{n-(s-t_0)p} & \quad \textrm{if} \,\, p\leq n. \end{array} \right. \end{equation} \begin{oss}\label{choice (H1)} If $p>n$ the condition $\Theta>n$, which guarantees the integrability of $r^{-\frac{n}{\Theta}}$, is trivially satisfied. If instead $p\leq n$, the condition $\frac{np}{n-(s-t_0)p}>n$ is equivalent to $t_0 < s+1-\frac{n}{p}$. Now such a $t_0$ exists if and only if $\frac{n-sp}{p(n-1)}<s + 1-\frac{n}{p}$, which is equivalent to requiring that $p>\frac{n^2}{n(1+s)-1}$. This explains the choice of the condition $(H1)$. \end{oss} \begin{prop}\label{H'} There exists a positive constant $C_2=C_2(\|\mathbb{A}\|_{W^{1+s,p}})$ such that for $\mathcal{L}^1$-a.e. 
$r\in(0,\mathrm{dist}(\underline{0},\partial \Omega))$ the following holds \begin{equation}\label{estimate H'} \mathscr{H}'(r) = \frac{n-1}{r}\mathscr{H}(r) + 2\int_{\partial B_r} u \langle \mathbb{A}\nu,\nabla u \rangle\, d\mathcal{H}^{n-1} + h(r), \end{equation} with $|h(r)|\leq C_2 \mathscr{H}(r)r^{-\frac{n}{\Theta}}$, where the constant $\Theta$ is given in \eqref{B}. \end{prop} \begin{proof} From the Divergence Theorem we write $\mathscr{H}(r)$ as volume integral \begin{align*} \mathscr{H}(r)& = \frac{1}{r}\int_{\partial B_r} u^2(x)\langle\mathbb{A}(x)x,\nu \rangle\,d\mathcal{H}^{n-1}= \frac{1}{r}\int_{B_r} \mathrm{div}\big(u^2(x)\mathbb{A}(x)x\big)\,dx \\ &=\frac{2}{r}\int_{B_r}u\nabla u\cdot \mathbb{A}(x)x\,dx + \frac{1}{r} \int_{B_r}u^2(x)\mathrm{Tr}\mathbb{A}\,dx + \frac{1}{r} \int_{B_r}u^2(x)\mathrm{div}\mathbb{A}(x)\cdot x\,dx. \end{align*} By taking Coarea formula and Proposition \ref{coarea-traccia} into account, we have \begin{align*} \mathscr{H}'(r)= &-\frac{1}{r}\mathscr{H}(r) +2\int_{\partial B_r}\!\!u \langle \mathbb{A}\nu, \nabla u\rangle \,d\mathcal{H}^{n-1} + \frac{1}{r} \int_{\partial B_r}\!\!u^2\,\mathrm{Tr}\mathbb{A} \,d\mathcal{H}^{n-1} + \frac{1}{r} \int_{\partial B_r}\!\!u^2\c_0\big(\mathrm{div}\mathbb{A}(x)\big)\cdot x\,d\mathcal{H}^{n-1}\\ =&\,\frac{n-1}{r}\mathscr{H}(r) + 2\int_{\partial B_r}\mu \langle \mathbb{A}\nu,\nabla u \rangle\, d\mathcal{H}^{n-1} + h(r), \end{align*} with \begin{equation}\label{hr} h(r)=\frac{1}{r} \int_{\partial B_r}u^2\,\big(\mathrm{Tr}\mathbb{A} - n\mu\big) \,d\mathcal{H}^{n-1} + \frac{1}{r} \int_{\partial B_r}u^2\c_0\big(\mathrm{div}\mathbb{A}(x)\big)\cdot x\,d\mathcal{H}^{n-1}=:I+II. \end{equation} We estimate separately the two terms. For the first term let us recall that the H\"{o}lder continuity of $\mathbb{A}$ and $\mu$, the condition \eqref{u e grad u limitate} and the fact that $\mathbb{A}(\underline{0})=I_n$ and $\mu(\underline{0})=1$ hold, we have: \begin{equation}\label{1add-H'} \begin{split} |I|\, &=\frac{1}{r} \bigg|\int_{\partial B_r}u^2\,\sum_i \big(a_{ii}(x)- \mu(x)\big) \,d\mathcal{H}^{n-1}\bigg| \leq\frac{1}{r} \int_{\partial B_r}\!\!\!\!u^2\,\sum_i \big(|a_{ii}(x)-a_{ii}(\underline{0})|+ |\mu(\underline{0}) - \mu(x)|\big) \,d\mathcal{H}^{n-1}\\ &\leq C'r^{n+3-\frac{n}{p^*}} \leq C'\mathscr{H}(r)\, r^{-\frac{n}{p^*}}, \end{split} \end{equation} where in the last inequality we use \eqref{E H-O_grande}. For the second term from the H\"{o}lder inequality, by \eqref{u e grad u limitate} and recalling Remark \ref{oss stime imm e traccia} according to which $\c_0(\mathrm{div}\mathbb{A})\in L^1(\partial B_r,\mathbb{R}^n;\mathcal{H}^{n-1})$ we have: \begin{equation}\label{2add-H'} \begin{split} |II|\,&\leq \frac{1}{r} \int_{\partial B_r} u^2\big|\c_0\big(\mathrm{div}\mathbb{A}\big)(x)\big|\,|x|\,d\mathcal{H}^{n-1} \leq C'r^4\||\c_0(\mathrm{div}\mathbb{A})|\|_{L^1(\partial B_r,\mathbb{R}^n;\mathcal{H}^{n-1})}. \end{split} \end{equation} We can now analyze separately the two cases $p>n$ and $p\leq n$. We start with the case $p>n$. We use \eqref{stima imm 1}, \eqref{E H-O_grande} in \eqref{2add-H'} to obtain \begin{equation}\label{2add H' p} \begin{split} |II|\,\leq C\, \|\mathrm{div}\mathbb{A}\|_{W^{s,p}(B_r,\mathbb{R}^n;\mathcal{H}^{n-1})}\,r^{n+3}\, r^{-\frac{n}{p}} \leq C\,\|\mathrm{div}\mathbb{A}\|_{W^{p,s}(\Omega,\mathbb{R}^n;\mathcal{H}^{n-1})}\,\mathscr{H}(r)\,r^{-\frac{n}{p}} \leq C\,\mathscr{H}(r)\,r^{-\frac{n}{p}}. 
\end{split} \end{equation} If $p\geq n$ by \eqref{stima imm q} we have \begin{equation*} \|\c_0(\mathrm{div}\mathbb{A})\|_{L^1(\partial B_r,\mathbb{R}^n; H^{n-1})}\leq C\, r^{n-1}\,r^{-\frac{n-(s-t_0)p}{p}} \|\mathrm{div}\mathbb{A}\|_{W^{t_0,\frac{np}{n-(s-t_0)p}}(B_r,\mathbb{R}^n;\mathcal{H}^{n-1})}. \end{equation*} Hence, recalling \eqref{2add-H'} and \eqref{E H-O_grande} \begin{equation}\label{2add H' q} \begin{split} |II|\,&\leq C\, \|\mathrm{div}\mathbb{A}\|_{W^{t_0,\frac{np}{n-(s-t_0)p}}(B_r,\mathbb{R}^n;\mathcal{H}^{n-1})}\,r^{n+3}\, r^{-\frac{n-(s-t_0)p}{p}}\\ &\leq C\,\|\mathrm{div}\mathbb{A}\|_{W^{t_0,\frac{np}{n-(s-t_0)p}}(\Omega,\mathbb{R}^n;\mathcal{H}^{n-1})}\,\mathscr{H}(r)\,r^{-\frac{n-(s-t_0)p}{p}}\\ &\leq C\,\|\mathrm{div}\mathbb{A}\|_{W^{s,p}(\Omega,\mathbb{R}^n;\mathcal{H}^{n-1})}\,\mathscr{H}(r)\,r^{-\frac{n-(s-t_0)p}{p}} \leq C\,\mathscr{H}(r)\,r^{-\frac{n-(s-t_0)p}{p}} \end{split} \end{equation} So, assuming the notation introduced in \eqref{B}, by combining together \eqref{1add-H'}, \eqref{2add H' q} and \eqref{2add H' p}, and recalling that $\Theta<p^*$, we have \begin{equation*} |h(r)|\leq C'\mathscr{H}(r)\, r^{-\frac{n}{p*}} + \bar{C}\mathscr{H}(r)\,r^{-\frac{n}{\Theta}} \leq C_2\mathscr{H}(r)\,r^{-\frac{n}{\Theta}}. \end{equation*} \end{proof} \begin{oss} In \cite[Proposition 3.6]{FGS} Focardi, Gelli and Spadaro showed the estimate \eqref{estimate H'} with $h(r)$ given in \eqref{hr}. Under the hypothesis $\mathbb{A}\in W^{1,\infty}(\Omega,\mathbb{R}^n\times~\mathbb{R}^n)$, the term $\mathrm{div}\mathbb{A}(x)\in L^\infty(\Omega, \mathbb{R}^n)$, so they obtained $|h(r)|\leq C \mathscr{H}(r)$. \end{oss} \subsection{Proof of Weiss’s quasi-monotonicity formula} In this section we prove a Weiss’ quasi-monotonicity formula that is one of the main results of the paper. The plan of proof is the same as \cite[Theorem 3.7]{FGS}. The difference, due to the lack of regularity of the coefficients, consists in the presence of additional unbounded factors. These factors are produced in Proposition \ref{E'}, Proposition \ref{H'}, and from a freezing argument, and they include: $r^{-\frac{n}{p^*}}$, $r^{-\frac{n}{\Theta}}$ and $\frac{\omega(r)}{r}$. The key observation is that, for our hypotheses, these terms are integrable, so we are able to obtain the formula. For completeness we report the proof with all the details. \begin{proof}[Proof of Theorem \ref{Weiss}] Assume the definition of $\Phi(r)$ by \eqref{Phir}: \begin{equation*} \Phi(r):=r^{-n-2}\mathcal{E}(r) - 2\,r^{-n-3}\mathscr{H}(r). \end{equation*} Then for $\mathcal{L}^1$-a.e. $r\in \mathrm{dist}(\underline{0},\partial\Omega)$ we have \begin{equation} \Phi'(r)=\frac{\mathcal{E}'(r)}{r^{n+2}}-(n+2)\frac{\mathcal{E}(r)}{r^{n+3}} - 2\frac{\mathscr{H}'(r)}{r^{n+3}} +2(n+3)\frac{\mathscr{H}(r)}{r^{n+4}}. 
\end{equation} By Proposition \ref{E'} we have \begin{equation*} \begin{split} &\frac{\mathcal{E}'(r)}{r^{n+2}}-(n+2)\frac{\mathcal{E}(r)}{r^{n+3}}\geq \frac{2}{r^{n+2}}\int_{\partial B_r}\mu^{-1}\langle \mathbb{A}\nu, \nabla u \rangle^2\,d\mathcal{H}^{n-1} + \frac{1}{r^{n+3}}\int_{B_r}\langle\mathbb{A}\nabla u,\nabla u\rangle\,\mathrm{div}(\mu^{-1}\mathbb{A} x)\,dx\\ &- \frac{2}{r^{n+3}}\int_{B_r}f \langle\mu^{-1}\mathbb{A} x,\nabla u\rangle\,dx - \frac{2}{r^{n+3}}\int_{B_r}\langle\mathbb{A}\nabla u, \nabla^T(\mu^{-1}\mathbb{A} x)\nabla u\rangle\, dx + \frac{2}{r^{n+2}}\int_{\partial B_r} f u\, d\mathcal{H}^{n-1}\\ &-\frac{C_1}{r^{n+2}}\frac{\mathcal{E}(r)}{r^{\frac{n}{p^*}}} - \frac{n+2}{r^{n+3}}\int_{B_r}\langle\mathbb{A}\nabla u, \nabla u\rangle\, dx - \frac{2(n+2)}{r^{n+3}}\int_{B_r} f u\,dx. \end{split} \end{equation*} Then, integrating by parts and given \eqref{PDE_u}: \begin{equation}\label{int-part1-f.mon-Weiss} \begin{split} \int_{B_r}\langle\mathbb{A}\nabla u, \nabla u\rangle\, dx + \int_{B_r} f u\,dx &= \int_{B_r}\langle\mathbb{A}\nabla u, \nabla u\rangle\, dx + \int_{B_r} u\mathrm{div}(\mathbb{A}\nabla u)\,dx = \int_{\partial B_r} u \langle \mathbb{A}\nu, \nabla u \rangle \, d\mathcal{H}^{n-1}. \end{split} \end{equation} Thus, applying \eqref{int-part1-f.mon-Weiss} in four occurrences, we deduce \begin{equation}\label{E'-E} \begin{split} &\frac{\mathcal{E}'(r)}{r^{n+2}}-(n+2)\frac{\mathcal{E}(r)}{r^{n+3}}\geq - \frac{C_1}{r^{n+2}}\mathcal{E}(r)\,r^{-\frac{n}{p^*}} + \frac{2}{r^{n+2}}\int_{\partial B_r}\mu^{-1}\langle \mathbb{A}\nu, \nabla u \rangle^2\,d\mathcal{H}^{n-1}\\ &+ \frac{1}{r^{n+3}}\int_{B_r}\!\!\!\!\Big(\langle\mathbb{A}\nabla u,\nabla u\rangle\,\mathrm{div}(\mu^{-1}\mathbb{A} x) - 2\langle\mathbb{A}\nabla u, \nabla^T(\mu^{-1}\mathbb{A} x)\nabla u\rangle - (n-2)\langle\mathbb{A}\nabla u, \nabla u\rangle\Big)\,dx\\ & - \frac{2}{r^{n+3}}\int_{B_r}\!\!f \langle\mu^{-1}\mathbb{A} x,\nabla u\rangle\,dx + \frac{2}{r^{n+2}}\int_{\partial B_r}\!\! f u\, d\mathcal{H}^{n-1} -\frac{4}{r^{n+3}}\int_{\partial B_r}\!\! u \langle \mathbb{A}\nu, \nabla u \rangle \, d\mathcal{H}^{n-1} - \frac{2n}{r^{n+3}}\int_{B_r}\!\! f u\,dx. \end{split} \end{equation} Instead the Proposition \ref{H'} leads to \begin{equation}\label{H'-H} \begin{split} -2\frac{\mathscr{H}'(r)}{r^{n+3}} + 2(n+3)\frac{\mathscr{H}(r)}{r^{n+4}}\geq - \frac{2 C_2}{r^{n+3}} \mathscr{H}(r)r^{-\frac{n}{\Theta}} + \frac{8}{r^{n+4}}\mathscr{H}(r) - \frac{4}{r^{n+3}}\int_{\partial B_r} u \langle \mathbb{A}\nu,\nabla u \rangle\, d\mathcal{H}^{n-1}. 
\end{split} \end{equation} By combining together \eqref{E'-E} and \eqref{H'-H} and since $p^*\geq \Theta$ we finally infer that \begin{equation}\label{Phi' + Phi} \begin{split} &\Phi'(r) + (C'_1 \vee C_2) \Phi(r) r^{-\frac{n}{\Theta}}\geq \frac{2}{r^{n+2}}\int_{\partial B_r}\mu^{-1}\langle \mathbb{A}\nu, \nabla u \rangle^2\,d\mathcal{H}^{n-1}\\ &+ \frac{1}{r^{n+3}}\int_{B_r}\!\!\!\!\Big(\langle\mathbb{A}\nabla u,\nabla u\rangle\,\mathrm{div}(\mu^{-1}\mathbb{A} x) - 2\langle\mathbb{A}\nabla u, \nabla^T(\mu^{-1}\mathbb{A} x)\nabla u\rangle - (n-2)\langle\mathbb{A}\nabla u, \nabla u\rangle\Big)\,dx\\ & - \frac{2}{r^{n+3}}\int_{B_r}f \langle\mu^{-1}\mathbb{A} x,\nabla u\rangle\,dx + \frac{2}{r^{n+2}}\int_{\partial B_r} f u\, d\mathcal{H}^{n-1} -\frac{4}{r^{n+3}}\int_{\partial B_r} u \langle \mathbb{A}\nu, \nabla u \rangle \, d\mathcal{H}^{n-1}\\ & - \frac{2n}{r^{n+3}}\int_{B_r} f u\,dx + \frac{8}{r^{n+4}}\mathscr{H}(r) - \frac{4}{r^{n+3}}\int_{\partial B_r} u \langle \mathbb{A}\nu,\nabla u \rangle\, d\mathcal{H}^{n-1}\\ &= \frac{2}{r^{n+2}}\int_{\partial B_r}\Big(\mu^{-1}\langle \mathbb{A}\nu, \nabla u \rangle^2 + 4\frac{u^2}{r^2}\mu - 4\frac{u}{r}\langle \mathbb{A}\nu, \nabla u \rangle\Big)\,d\mathcal{H}^{n-1}\\ &+ \frac{1}{r^{n+3}}\int_{B_r}\!\!\!\!\Big(\langle\mathbb{A}\nabla u,\nabla u\rangle\,\mathrm{div}(\mu^{-1}\mathbb{A} x) - 2\langle\mathbb{A}\nabla u, \nabla^T(\mu^{-1}\mathbb{A} x)\nabla u\rangle - (n-2)\langle\mathbb{A}\nabla u, \nabla u\rangle\Big)\,dx\\ & - \frac{2}{r^{n+3}}\bigg(\int_{B_r}f \big(\langle\mu^{-1}\mathbb{A} x,\nabla u\rangle + nu\big)\,dx - r\int_{\partial B_r} f u\,d\mathcal{H}^{n-1}\bigg) =: R_1 + R_2 + R_3. \end{split} \end{equation} We can estimate the three addenda separately. \begin{equation} \begin{split}\label{R_1} R_1&= \frac{2}{r^{n+2}}\int_{\partial B_r}\mu \Big(\mu^{-2}\langle \mathbb{A}\nu, \nabla u \rangle^2 + 4\frac{u^2}{r^2} - 4\mu^{-1}\frac{u}{r}\langle \mathbb{A}\nu, \nabla u \rangle\Big)\,d\mathcal{H}^{n-1}\\ &=\frac{2}{r^{n+2}}\int_{\partial B_r}\mu \Big(\langle \mu^{-1} \mathbb{A}\nu, \nabla u \rangle - 2\frac{u}{r} \Big)^2\,d\mathcal{H}^{n-1}. 
\end{split} \end{equation} Since $n= \mathrm{div}(x)$ by \eqref{u e grad u limitate} we have \begin{equation*} \begin{split} |R_2|&= \frac{1}{r^{n+3}}\left|\int_{B_r}\!\!\!\!\Big(\langle\mathbb{A}\nabla u,\nabla u\rangle\,\mathrm{div}(\mu^{-1}\mathbb{A} x) - 2\langle\mathbb{A}\nabla u, \nabla^T(\mu^{-1}\mathbb{A} x)\nabla u\rangle - (n-2)\langle\mathbb{A}\nabla u, \nabla u\rangle\Big)\,dx\right|\\ &= \left|\frac{1}{r^{n+3}}\int_{B_r}\!\!\!\!\Big(\langle\mathbb{A}\nabla u,\nabla u\rangle\,\mathrm{div}(\mu^{-1}\mathbb{A} x- x) - 2\langle\mathbb{A}\nabla u, \nabla^T(\mu^{-1}\mathbb{A} x - x)\nabla u\rangle \Big)\,dx\right|\\ &\leq \frac{C^2\,\Lambda}{r^{n+1}}\int_{B_r}\!\!\!\!\Big(|\mathrm{div}(\mu^{-1}\mathbb{A} x- x) + 2|\nabla(\mu^{-1}\mathbb{A} x - x)|\Big)\,dx \leq \frac{C'\,\Lambda}{r^{n+1}}\int_{B_r}\!\!\!\!\Big(|\nabla(\mu^{-1}\mathbb{A} x - x)|\Big)\,dx, \end{split} \end{equation*} We estimate $|\nabla(\mu^{-1}\mathbb{A} x - x)|$: \begin{equation*} \begin{split} |\nabla(\mu^{-1}\mathbb{A} x - x)|&= \left|\nabla\left((\mu^{-1}\mathbb{A} - id)x\right)\right|=|\nabla(\mu^{-1}\mathbb{A} - id)x + (\mu^{-1}\mathbb{A} - id)|\\ &=|\nabla(\mu^{-1})\otimes \mathbb{A} x + \mu^{-1} \nabla\mathbb{A}\, x + (\mu^{-1}\mathbb{A} - I_n)| \leq \Lambda r (|\nabla \mathbb{A}|+|\nabla \mu|) + C\,r^{1-\frac{n}{p^*}}, \end{split} \end{equation*} where in the last inequality, we have used the $\c$-Holder continuity of $\mathbb{A}-\mu I_n$. Thus, from Lemma \ref{lem reg mu} \begin{equation*} \begin{split} |R_2|&\leq \frac{C'\Lambda}{r^{n+1}}\int_{B_r}\Big(|\nabla(\mu^{-1}\mathbb{A} x - x)|\Big)\,dx \leq \frac{C''}{r^{n+1}}\int_{B_r}\left(r^{1-\frac{n}{p^*}}+r(|\nabla \mathbb{A}|+|\nabla \mu|)\right)\,dx \\ &\leq C'''r^{-\frac{n}{p^*}} + \frac{C''}{r^n}\bigg(\Big(\int_{B_r}|\nabla\mathbb{A}|^{p^*}\,dx\Big)^\frac{1}{p*}(\omega_n r^n)^{1-\frac{1}{p^*}}+\Big(\int_{B_r}|\nabla\mu|^{q}\,dx\Big)^\frac{1}{q}(\omega_n r^n)^{1-\frac{1}{q}}\bigg) \leq \bar{C}r^{-\frac{n}{q}}, \end{split} \end{equation*} for each $n<\Theta<q<p^*$, whence \begin{equation*} |R_2|\leq c\, \frac{\mathcal{E}(r)}{r^{n+2}}r^{-\frac{n}{q}}. \end{equation*} Moreover, from \eqref{Hr} and \eqref{mu limitata} \begin{equation*} 0\leq \frac{\mathscr{H}(r)}{r^{n+3}}\leq c\|u_r\|_{L^\infty}^2\leq c, \end{equation*} with a certain constant $c$ independent from $r$, then \begin{equation}\label{R_2} |R_2|\leq c\,\left(\frac{\mathcal{E}(r)}{r^{n+2}}-2\frac{\mathscr{H}(r)}{r^{n+3}}\right)r^{-\frac{n}{q}} + 2c\,\frac{\mathscr{H}(r)}{r^{n+3}}\, r^{-\frac{n}{q}}\leq c\,\Phi(r)r^{-\frac{n}{\Theta}}+c\,r^{-\frac{n}{\Theta}}. 
\end{equation} Finally, assuming that $n=\mathrm{div} x$ and using the following identity, which is consequence of the divergence theorem \begin{equation} \int_{B_r}\left(\langle x,\nabla u\rangle + u\,\mathrm{div} x\right)\,dx = r\int_{\partial B_r} u\,d\mathcal{H}^{n-1}, \end{equation} we have \begin{equation*} \begin{split} R_3=& - \frac{2}{r^{n+3}}\bigg(\int_{B_r}f \big(\langle\mu^{-1}\mathbb{A} x,\nabla u\rangle + nu\big)\,dx - r\int_{\partial B_r} f u\,d\mathcal{H}^{n-1}\bigg) \\ =& - \frac{2}{r^{n+3}}\bigg(\int_{B_r}(f(x)-f(\underline{0})) \big(\langle\mu^{-1}\mathbb{A} x,\nabla u\rangle +nu\big)\,dx + f(\underline{0})\int_{B_r} \big(\langle\mu^{-1}\mathbb{A} x,\nabla u\rangle - u\,\mathrm{div} x\big)\,dx\\ &- r\int_{\partial B_r} (f(x)-f(\underline{0})) u\,d\mathcal{H}^{n-1} - r\,f(\underline{0})\int_{\partial B_r} u\,d\mathcal{H}^{n-1} \bigg) \\ =& - \frac{2}{r^{n+3}}\bigg(\int_{B_r}(f(x)-f(\underline{0})) \big(\langle\mu^{-1}\mathbb{A} x,\nabla u\rangle +nu\big)\,dx + f(\underline{0})\int_{B_r} \big(\langle\mu^{-1}\mathbb{A} x - x,\nabla u\rangle \big)\,dx\\ &- r\int_{\partial B_r} (f(x)-f(\underline{0})) u\,d\mathcal{H}^{n-1} \bigg). \end{split} \end{equation*} Thus \begin{equation}\label{R_3} \begin{split} |R_3|=& \frac{2}{r^{n+3}}\bigg| f(\underline{0})\int_{B_r}\big(\langle\mu^{-1}\mathbb{A} x - x,\nabla u\rangle \big)\,dx\\ &+\int_{B_r}(f(x)-f(\underline{0})) \big(\langle\mu^{-1}\mathbb{A} x,\nabla u\rangle +nu\big)\,dx - r\int_{\partial B_r} (f(x)-f(\underline{0})) u\,d\mathcal{H}^{n-1} \bigg|\\ \leq& \frac{c}{r^{n+1}} \bigg(\int_{B_r}|\mathbb{A}-\mu I_n|\,dx + \int_{B_r}|f(x)-f(\underline{0})|\,dx + r\int_{\partial B_r} |f(x)-f(\underline{0})|\,d\mathcal{H}^{n-1}\bigg)\\ \leq& \frac{c}{r^{n+1}}(r^{n+1-\frac{n}{p^*}}+r^n\,\omega(r))\leq c \left(r^{-\frac{n}{p^*}}+ \frac{\omega(r)}{r}\right). \end{split} \end{equation} Now by combining together \eqref{Phi' + Phi}, \eqref{R_1}, \eqref{R_2} and \eqref{R_3} we have \begin{equation} \begin{split} \Phi'(r) + C_3 \Phi(r)\,r^{-\frac{n}{\Theta}} + C_4\, \left(r^{-\frac{n}{\Theta}}+\frac{\omega(r)}{r}\right)\geq \frac{2}{r^{n+2}}\int_{\partial B_r}\mu \Big(\langle \mu^{-1} \mathbb{A}\nu, \nabla u \rangle - 2\frac{u}{r} \Big)^2\,d\mathcal{H}^{n-1}. \end{split} \end{equation} Multiplying the inequality by the integral factor $e^{\bar{C_3}r^{1-\frac{n}{\Theta}}}$ with $\bar{C_3}= \frac{C_3}{1-\frac{n}{\Theta}}$ we get \begin{equation*} \left(\Phi(r)\,e^{\bar{C_3}r^{1-\frac{n}{\Theta}}}\right)' + C_4\,\left(r^{-\frac{n}{\Theta}}+\frac{\omega(r)}{r}\right)e^{\bar{C_3}r^{1-\frac{n}{\Theta}}} \geq \frac{2e^{\bar{C_3}r^{1-\frac{n}{\Theta}}}}{r^{n+2}}\int_{\partial B_r}\mu \Big(\langle \mu^{-1} \mathbb{A}\nu, \nabla u \rangle - 2\frac{u}{r} \Big)^2\,d\mathcal{H}^{n-1} \end{equation*} whence \begin{equation} \begin{split} \frac{d}{dr}\bigg(\Phi(r)\,e^{\bar{C_3}r^{1-\frac{n}{\Theta}}} &+ C_4\!\!\!\int_0^r \!\left(t^{-\frac{n}{\Theta}}+\frac{\omega(t)}{t}\right)e^{\bar{C_3}t^{1-\frac{n}{\Theta}}}\,dt\bigg) \geq \frac{2e^{\bar{C_3}r^{1-\frac{n}{\Theta}}}}{r^{n+2}}\int_{\partial B_r}\!\!\!\mu \Big(\langle \mu^{-1} \mathbb{A}\nu, \nabla u \rangle - 2\frac{u}{r} \Big)^2\,d\mathcal{H}^{n-1}. \end{split} \end{equation} In particular, the quantity under the sign of the derivative, bounded by construction, is also monotonic, therefore its limit exists as $r\to 0^+$. It follows that $\Phi(0^+):=\lim_{r\to 0^+}\Phi(r)$ exists and is bounded. 
Finally \begin{equation} \begin{split} \Phi(r)-\Phi(0^+)\geq& -|\Phi(r)\,e^{\bar{C_3}r^{1-\frac{n}{\Theta}}}-\Phi(r)| + \Phi(r)\,e^{\bar{C_3}r^{1-\frac{n}{\Theta}}} + C_4\,\int_0^r \left(t^{-\frac{n}{\Theta}}+\frac{\omega(t)}{t}\right)e^{\bar{C_3}t^{1-\frac{n}{\Theta}}}\,dt\\ &-\Phi(0^+) - C_4\,\int_0^r \left(t^{-\frac{n}{\Theta}}+\frac{\omega(t)}{t}\right)e^{\bar{C_3}t^{1-\frac{n}{\Theta}}}\,dt\\ \geq& -|\Phi(r)|\,c'\,r^{1-\frac{n}{\Theta}} + \Phi(r)\,e^{\bar{C_3}r^{1-\frac{n}{\Theta}}} + C_4\,\int_0^r \left(t^{-\frac{n}{\Theta}}+\frac{\omega(t)}{t}\right)e^{\bar{C_3}t^{1-\frac{n}{\Theta}}}\,dt\\ &-\Phi(0^+) - c'\,\left(r^{1-\frac{n}{\Theta}}+\int_0^r\frac{\omega(t)}{t}\,dt\right)\\ \geq &\Phi(r)\,e^{\bar{C_3}r^{1-\frac{n}{\Theta}}} + C_4\,\int_0^r \left(t^{-\frac{n}{\Theta}}+\frac{\omega(t)}{t}\right)e^{\bar{C_3}t^{1-\frac{n}{\Theta}}}\,dt -\Phi(0^+) - c\,\left(r^{1-\frac{n}{\Theta}}+\int_0^r\frac{\omega(t)}{t}\,dt\right), \end{split} \end{equation} where in the last inequality, we used the boundedness of $\Phi(r)$. \end{proof} \begin{oss} In \cite[Theorem 3.7]{FGS}, under the hypotheses $\mathbb{A}\in W^{1,\infty}(\Omega,\mathbb{R}^n\times~\mathbb{R}^n)$ and $f\in C^{0,\a}(\Omega)$, Focardi, Gelli and Spadaro proved that the following estimate holds true for $\mathcal{L}^1$-a.e. $r$ in $(0,\frac{1}{2}\mathrm{dist}(\underline{0},\partial\Omega)\wedge~1)$: \begin{equation*} \frac{d}{dr}\bigg(\Phi(r)\,e^{\bar{C_3}r} + C_4\!\!\!\int_0^r \!t^{\a-1}e^{\bar{C_3}t}\,dt\bigg) \geq \frac{2e^{\bar{C_3}r}}{r^{n+2}}\int_{\partial B_r}\!\!\!\mu \Big(\langle \mu^{-1} \mathbb{A}\nu, \nabla u \rangle - 2\frac{u}{r} \Big)^2\,d\mathcal{H}^{n-1}. \end{equation*} \end{oss} \begin{oss}\label{oss cost bdd in cpt} We note that from Proposition \ref{prop u_r limitata W2p} the uniform boundedness of the sequence $(u_{x_0,r})_r$ in $C^{0,\c}(\mathbb{R}^n)$ follows. Moreover, for base points $x_0$ in a compact set of $\Omega$, the $C^{0,\c}$ norms, and thus the constants in the monotonicity formulae, are uniformly bounded. Indeed, as pointed out in the corresponding statements they depend on $\|\mathbb{A}\|_{W^{s,p}(\Omega)}$ and $\mathrm{dist}(x_0,\partial\Omega)$. \end{oss} \section{The blow-up method: Classification of blow-ups}\label{s:classification} In this section we proceed with the analysis of blow-ups showing the consequences of Theorem \ref{Weiss}. The first consequence is that the blow-ups are $2$-homogeneous, i.e. $v(tx)=t^2v(x)$ for all $t>0$ and for all $x\in \mathbb{R}^n$, as it is possible to deduce from the second member of \ref{dis monotonia Weiss} where, according to Euler's homogeneous function Theorem\footnote{Let $v:\mathbb{R}^n\to \mathbb{R}$ a differentiable function, then $v$ is $k$-homogeneous with $k>0$ if and only if $k\,v(x)=\langle \nabla v(x), x \rangle$.}, the integral represents a distance to a $2$-homogeneous function set. For a proof of the following result we refer to \cite[Proposition 4.2]{FGS}. \begin{prop}[$2$-homogeneity of blow-ups]\label{prop blow-up 2omo} Let $x_0\in \Gamma_u$ and $(u_{x_0,r})_r$ as in (\ref{u_x_0 r}). Then, for every sequence $(r_j)_j\downarrow 0$ there exists a subsequence $(r_{j_k})_k\subset(r_j)_j$ such that the sequence $(u_{x_0,r_{j_k}})_k$ converges in $C^{1,\c}(\mathbb{R}^n)$ to a function $v(y)=w(\mathbb{L}^{-1}(x_0)y)$, where $w$ is $2$-homogeneous. \end{prop} As a second consequence, remembering Proposition \ref{prop crescita basso} we can obtain that the blow-ups are nonzero. 
\begin{cor}\label{cor w non 0} Let $v(y)=w(\mathbb{L}^{-1}(x_0)y)$ be a limit of $C^{1,\c}$ a converging sequence of rescalings $(u_{x_0, r_j})_j$ in a free-boundary point $x_0\in \Gamma_u$, then $\underline{0}\in \Gamma_w$, i.e. $w\not\equiv 0$ in any neighborhood of $\underline{0}$. \end{cor} \begin{proof} Due to Proposition \ref{prop crescita basso} for any $j\in \mathbb{N}$, there exists a $\nu_j\in \mathbb{S}^{n-1}$ such that $u_{x_0,r_j}(\nu_j)\geq \theta$. From the compactness of $\mathbb{S}^{n-1}$ we can extract a subsequence $(\nu_{j_k})_k$ such that $\nu_{j_k}\to \nu\in \mathbb{S}^{n-1}$. Due to the convergence in $C^{1,\c}$ we have that $v(\nu)\geq \theta$, if we define $\xi:=\mathbb{L}^{-1}(x_0)\nu$, we get $w(\xi)\geq \theta$. As noticed in Proposition \ref{prop blow-up 2omo} $w$ is $2$-homogeneous, then in any neighborhood of $\underline{0}$ there exists a point on the direction $\xi$ on which $w$ is strictly positive, so for any $\d>0$ we have $w(\d\xi)=\d^2 w(\xi)\geq \d^2 \theta$, and thus this Corollary is verified. \end{proof} Finally, it is possible to give a classification of blow-ups. We begin by recalling the result in the classical case established by Caffarelli \cite{Caf77,Caf80,Caf98}. \begin{defi} A \emph{global solution} to the obstacle problem is a positive function $w\in C^{1,1}_{loc}(\mathbb{R}^n)$ solving \eqref{PDE_u} in the case $\mathbb{A}\equiv I_n$ and $f\equiv 1$. \end{defi} The following result occurs: \begin{teo}\label{teo Caffarelli} Every global solution $w$ is convex. Moreover, if $w\not\equiv 0$ and $2$-homogeneous, then one of the following two cases occurs: \begin{itemize} \item [(A)] $w(y)=\frac{1}{2}\big(\langle y, \nu\rangle \vee 0\big)^2$ for some $\nu\in \mathbb{S}^{n-1}$, where the symbol $\vee$ denotes the maximum of the surrounding quantities; \item [(B)] $w(y)=\langle \mathbb{B}y, y\rangle$ with $\mathbb{B}$ a symmetric, positive semidefinite matrix satisfying and $\mathrm{Tr}\mathbb{B}=\frac{1}{2}$. \end{itemize} \end{teo} Having this result at hand, a complete classification of the blow-up limits, for the obstacle problem \eqref{problema ad ostacolo}, follows as in the classical context. Up to minimal difference the result looks like \cite[Proposition 4.5]{FGS} to which we refer for the proof. The main ingredients of the proof are the quasi-monotonicity formula by Weiss and a $\Gamma$-convergence argument: \begin{prop}[Classification of blow-ups]\label{prop Gamma conv} Every blow-up $v_{x_0}$ at a free-boundary point $x_0\in \Gamma_u$ is of the form $v_{x_0}=w(\mathbb{L}^{-1}(x_0)y)$, with $w$ a non-trivial, $2$-homogeneous global solution. \end{prop} According to Theorem \ref{teo Caffarelli} we shall call a global solution of type $(A)$ or of type $(B)$. The above proposition allows us to formulate a simple criterion to distinguish between regular and singular free-boundary points. \begin{defi}\label{reg sing} A point $x_0\in \Gamma_u$ is a \emph{regular} free-boundary point, and we write $x_0\in Reg(u)$ if there exists a blow-up of $u$ at $x_0$ of type $(A)$. Otherwise, we say that $x_0$ is \emph{singular} and write $x_0\in Sing(u)$. \end{defi} \begin{oss}\label{oss conto energia} Simple calculations show that $\Psi_w(1)=\theta$ for every global solution of type $(A)$ and $\Psi_w(1)=2\theta$ for every global solution of type $(B)$, where $\Psi_w$ is the energy defined in \eqref{Psi_v} and $\theta$ is a dimensional constant. 
\end{oss} \begin{oss} We observe that for every sequence $r_j\searrow0$ for which $u_{\mathbb{L}(x_0),r_j}\to w$ in $C^{1,\c}(B_1)$ with $w$ being a $2$-homogeneous global solution then \begin{equation*} \lim_{r_j\to 0} \Phi_{\mathbb{L}(x_0)}(r_j)=\Psi_w(1). \end{equation*} From Weiss’ quasi-monotonicity the uniqueness of the limit follows, so $\Phi_{\mathbb{L}(x_0)}(0)=\Psi_w(1)$ for every $w$ that is the limit of the sequence $(u_{\mathbb{L}(x_0),r})_{r}$. It follows that if $x_0\in \Gamma_u$ is a regular point then $\Phi_{\mathbb{L}(x_0)}(0)=\theta$ or, equivalently every blow-up at $x_0$ is of type $(A)$. \end{oss} \section{Monneau’s quasi-monotonicity formula} In this section we prove a Monneau type quasi-monotonicity formula (see \cite{Monneau}) for singular free-boundary points. The plan of proof follows \cite[Theorem 3.8]{FGS}. The additional difficulty is the same as Theorem \ref{Weiss} so for completeness we report the whole proof. Let $v$ be a $2$-homogeneous positive polynomial, solving \begin{equation}\label{laplaciano v=1} \Delta v = 1 \qquad\qquad \mathrm{on}\,\, \mathbb{R}^n. \end{equation} Let \begin{equation}\label{Psi_v} \Psi_v(r):=\frac{1}{r^{n+2}}\int_{B_r}\big(|\nabla v|^2 + 2v\big)\,dx -\frac{2}{r^{n+3}}\int_{\partial B_r} v^2\,d\mathcal{H}^{n-1}. \end{equation} We note that the expression of $\Psi_v(r)$ is analogous to those of $\Phi$ with coefficients frozen in $\underline{0}$ (recalling \eqref{Phir}). An integration by parts, \eqref{Psi_v} and the $2$-homogeneity of $v$ yields \begin{equation*} \begin{split} \frac{1}{r^{n+2}}\int_{B_r}|\nabla v|^2\,dx=&\frac{1}{r^{n+2}}\int_{B_r}\big(\mathrm{div}(v\nabla v) - v\,\Delta v\big)\,dx =\frac{1}{r^{n+3}}\int_{\partial B_r} \langle\nabla v,x\rangle\,d\mathcal{H}^{n-1} - \frac{1}{r^{n+2}}\int_{B_r}v\,dx\\ =&\frac{1}{r^{n+3}}\int_{\partial B_r} v^2\,d\mathcal{H}^{n-1} - \frac{1}{r^{n+2}}\int_{B_r}v\,dx =\int_{\partial B_1} v^2\,d\mathcal{H}^{n-1} - \int_{B_1}v\,dx \end{split} \end{equation*} and therefore \begin{equation}\label{Psi_v = int v} \Psi_v(r)=\Psi_v(1)=\int_{B_1}v\,dx. \end{equation} In the next theorem we give a monotonicity formula for solutions of the obstacle problem such that $\underline{0}$ is a point of the free-boundary and \begin{equation}\label{pto sing1} \Phi(0^+)=\Psi_v(1) \quad \textrm{for some}\, v\, 2\textrm{-homogeneous solution of \eqref{laplaciano v=1}}. \end{equation} As explained in Definition \ref{reg sing}, formula \eqref{pto sing1} characterizes the singular part of the free boundary. \begin{proof}[Proof of Theorem \ref{Monneau}] Set $w_r=u_r-v$. As $v$ is $2$-homogenus we have that $w_r(x)=\frac{w(rx)}{r^2}$. 
Assuming that from \eqref{abuso_notazione} $\mathbb{A}(\underline{0})=I_n$, due to the Divergence Theorem and Euler's homogeneous function Theorem we find \begin{equation*} \begin{split} \frac{d}{dr}\int_{\partial B_1} w_r^2\,&d\mathcal{H}^{n-1} = \int_{\partial B_1} w_r\,\frac{d}{dr}\left(\frac{w(rx)}{r^2}\right)\,d\mathcal{H}^{n-1}\\ =& \frac{2}{r}\int_{\partial B_1} w_r (\langle\nabla w_r, x\rangle - 2w_r)\,d\mathcal{H}^{n-1} = \frac{2}{r}\int_{\partial B_1} w_r (\langle\nabla u_r, x\rangle - 2u_r)\,d\mathcal{H}^{n-1}\\ =& \frac{2}{r}\int_{\partial B_1} w_r (\langle \mathbb{A}(rx)\nabla u_r, x\rangle - 2u_r)\,d\mathcal{H}^{n-1} +\frac{2}{r}\int_{\partial B_1} w_r \langle (\mathbb{A}(\underline{0})-\mathbb{A}(rx))\nabla u_r, x\rangle \,d\mathcal{H}^{n-1}\\ \geq& \frac{2}{r}\int_{\partial B_1} w_r (\langle \mathbb{A}(rx)\nabla u_r, x\rangle - 2u_r)\,d\mathcal{H}^{n-1} - C\, \|\nabla u_r\|_{L^2(\partial B_1)}\, \|w_r\|_{L^2(\partial B_1)}\, [\mathbb{A}]_{0,\c}\,r^{-\frac{n}{p^*}}, \end{split} \end{equation*} thus by \eqref{u e grad u limitate} \begin{equation}\label{monneau1} \begin{split} \frac{d}{dr}\int_{\partial B_1} w_r^2\,d\mathcal{H}^{n-1} \geq \frac{2}{r}\,\int_{\partial B_1} w_r (\langle \mathbb{A}(rx)\nabla u_r, x\rangle - 2u_r)\,d\mathcal{H}^{n-1} - C\,r^{-\frac{n}{p^*}}. \end{split} \end{equation} Using an integration by parts, and \eqref{laplaciano v=1} we can rewrite the first term on the right as \begin{equation} \begin{split} \int_{\partial B_1}& w_r (\langle \mathbb{A}(rx)\nabla u_r, x\rangle - 2u_r)\,d\mathcal{H}^{n-1} \\ \stackrel{\eqref{PDE_u}}{=}& \int_{B_1}\big(\langle \mathbb{A}(rx)\nabla u_r, \nabla w_r\rangle + w_r\,f(rx)\chi_{\{u_r>0\}}(x)\big)\,dx -\int_{\partial B_1} 2\,w_r\,u_r\,d\mathcal{H}^{n-1} \\ =& \int_{B_1}\big(\langle \mathbb{A}(rx)\nabla u_r, \nabla u_r\rangle + u_r\,f(rx)\chi_{\{u_r>0\}}(x)\big)\,dx -\int_{\partial B_1} 2\,u_r^2\,d\mathcal{H}^{n-1}\\ &- \int_{B_1}\big(\langle \mathbb{A}(rx)\nabla u_r, \nabla v\rangle + v\,f(rx)\chi_{\{u_r>0\}}(x)\big)\,dx +\int_{\partial B_1} 2\,v\,u_r\,d\mathcal{H}^{n-1} \\ =& \Phi(r) - \int_{B_1} f(rx)\big(u_r + v\,\chi_{\{u_r>0\}}(x)\big)\,dx + 2\int_{\partial B_1} \big(\mu(rx)-\mu(\underline{0})\big)u_r^2\,d\mathcal{H}^{n-1}\\ &- \int_{B_1}\langle \mathbb{A}(rx)\nabla u_r, \nabla v\rangle\,dx +2\int_{\partial B_1} v\,u_r\,d\mathcal{H}^{n-1}\\ \geq& \Phi(r) - \int_{B_1}(u_r + v\,\chi_{\{u_r>0\}}(x))\,dx -\int_{B_1}\langle \nabla u_r, \nabla v\rangle\,dx - \int_{B_1} \big(f(rx)-f(\underline{0})\big)(u_r + v\,\chi_{\{u_r>0\}}(x))\,dx\\ &- \int_{B_1}\langle \big(\mathbb{A}(rx)-\mathbb{A}(\underline{0})\big)\nabla u_r, \nabla v\rangle\,dx + 2\int_{\partial B_1} \big(\mu(rx)-\mu(\underline{0})\big)u_r^2\,d\mathcal{H}^{n-1} +2\int_{\partial B_1} v\,u_r\,d\mathcal{H}^{n-1}. 
\end{split} \end{equation} Recalling the $\c$-H\"{o}lder continuity of $\mathbb{A}$ and $\mu$, from the Divergence Theorem, we obtain \begin{equation}\label{monneau2} \begin{split} \int_{\partial B_1}& w_r (\langle \mathbb{A}(rx)\nabla u_r, x\rangle - 2u_r)\,d\mathcal{H}^{n-1} \\ \geq& \Phi(r) - \int_{B_1}(u_r + v)\,dx -\int_{B_1}\langle \nabla u_r, \nabla v\rangle\,dx +2\int_{\partial B_1} v\,u_r\,d\mathcal{H}^{n-1} -c\,\left(r^\c + \omega(r)\right)\\ \stackrel{\eqref{Psi_v = int v}}{=}& \Phi(r) -\Psi_v(1) - \int_{B_1}(u_r\,\Delta v)\,dx -\int_{B_1}\langle \nabla u_r, \nabla v\rangle\,dx +2\int_{\partial B_1} v\,u_r\,d\mathcal{H}^{n-1} -c'\,\left(r^\c + \omega(r)\right)\\ =& \Phi(r) -\Psi_v(1) - \int_{B_1}\mathrm{div}(u_r\,\nabla v)\,dx +2\int_{\partial B_1} v\,u_r\,d\mathcal{H}^{n-1} -c'\,\left(r^\c + \omega(r)\right)\\ =& \Phi(r) -\Psi_v(1) + \int_{\partial B_1}\!\!\!u_r\big(2v - \langle\nabla v, x\rangle\big)\,d\mathcal{H}^{n-1} -c'\left(r^\c + \omega(r)\right) = \Phi(r) -\Psi_v(1) -c'\left(r^\c + \omega(r)\right).\\ \end{split} \end{equation} So, combining together \eqref{monneau1} and \eqref{monneau2}, and assuming that $\c:= 1-\frac{n}{p^*}$ we deduce \begin{equation*} \begin{split} \frac{d}{dr}\int_{\partial B_1} w_r^2\,d\mathcal{H}^{n-1} \geq \frac{2}{r} \big(\Phi(r) -\Psi_v(1)\big) -c'\,\left(r^{-\frac{n}{p^*}}+\frac{\omega(r)}{r}\right). \end{split} \end{equation*} from inequality \eqref{weiss stima Phir} we deduce \begin{equation*} \begin{split} \frac{d}{dr}\int_{\partial B_1}\!\!\!\!\!\! w_r^2\,d\mathcal{H}^{n-1} \geq& \frac{2}{r} \Big(\Phi(r)e^{\bar{C_3}r^{1-\frac{n}{\Theta}}} + C_4\,\int_0^r\!\!\! \left(t^{-\frac{n}{\Theta}}+\frac{\omega(t)}{t}\right)e^{\bar{C_3}t^{1-\frac{n}{\Theta}}}\,dt\\ &\phantom{AAAAAAAAA}- c\,\left(r^{1-\frac{n}{\Theta}} + \int_0^r\frac{\omega(t)}{t}\,dt\right)-\Psi_v(1)\Big) -c'\,\left(r^{-\frac{n}{p^*}}+\frac{\omega(r)}{r}\right)\\ \geq& \frac{2}{r} \Big(\Phi(r)e^{\bar{C_3}r^{1-\frac{n}{\Theta}}} + C_4\,\int_0^r\!\!\! \left(t^{-\frac{n}{\Theta}}+\frac{\omega(t)}{t}\right)e^{\bar{C_3}t^{1-\frac{n}{\Theta}}}\,dt -\Psi_v(1)\Big)\\ &\phantom{AAAAAAAAA}-c''\,\left(r^{-\frac{n}{\Theta}}+\frac{\omega(r)}{r}+\frac{1}{r}\int_0^r\frac{\omega(t)}{t}\,dt\right)\\ \end{split} \end{equation*} and then set $C_5=\frac{c''}{1-\frac{n}{\Theta}}$ \begin{equation*} \begin{split} &\frac{d}{dr}\bigg(\int_{\partial B_1}\!\!\!\!\!\! w_r^2\,d\mathcal{H}^{n-1} + C_5\,\left(r^{1-\frac{n}{\Theta}}+\int_0^r\frac{\omega(t)}{t}\,dt + \int_0^r\frac{dt}t\int_0^t\frac{\omega(s)}{s}\,ds\right)\bigg)\\ &\phantom{AAAAAAAAA}\geq \frac{2}{r} \Big(\Phi(r)e^{\bar{C_3}r^{1-\frac{n}{\Theta}}} + C_4\,\int_0^r\!\!\! \left(t^{-\frac{n}{\Theta}}+\frac{\omega(t)}{t}\right)e^{\bar{C_3}t^{1-\frac{n}{\Theta}}}\,dt -\Psi_v(1)\Big). \end{split} \end{equation*} \end{proof} \begin{oss} In \cite[Theorem 3.8]{FGS}, under hypotheses $\mathbb{A}\in W^{1,\infty}(\Omega,\mathbb{R}^n\times~\mathbb{R}^n)$ and $f\in C^{0,\a}(\Omega)$, Focardi, Gelli and Spadaro proved that the following estimate holds true for $\mathcal{L}^1$-a.e. $r$ in $(0,\frac{1}{2}\mathrm{dist}(\underline{0},\partial\Omega)\wedge~1)$: \begin{equation*} \frac{d}{dr}\bigg(\int_{\partial B_1}(u_r-v)^2\,d\mathcal{H}^{n-1} + C_5\,r^\a\bigg) \geq \frac{2}{r}\bigg(e^{C_3\,r}\Phi(r) + C_4 \int_0^r e^{C_3t}t^{\a-1}\,dt-\Psi_v(1)\bigg). 
\end{equation*} \end{oss} \section{The blow-up method: Uniqueness of blow-ups} The last remarks show that the blow-up limits at the free-boundary points must be of a unique type: nevertheless, this does not imply the uniqueness of the limit itself. In this paragraph we prove the property of uniqueness of blow-ups. In view of Proposition \ref{prop Gamma conv}, if $x\in \Gamma_u$ the blow-up in $x$ is unique with form \begin{equation*} v_x(y) = \left\{ \begin{array}{ll} \vspace{0.2cm} \frac{1}{2}\big(\langle \mathbb{L}^{-1}(x)\varsigma(x), y\rangle \vee 0\big)^2 & \quad x\in Reg(u)\\ \langle\mathbb{L}^{-1}(x)\mathbb{B}_x\mathbb{L}^{-1}(x) y, y\rangle & \quad x\in Sing(u). \end{array} \right. \end{equation*} where $\varsigma(x)\in\mathbb{S}^{n-1}$ is the blow-up direction at $x\in Reg(u)$ and $\mathbb{B}_x$ is symmetric matrix such that $\mathrm{Tr}\,\mathbb{B}_x=\frac{1}{2}$.\\ We start with the case of singular points. Therefore, from Weiss’ and Monneau’s quasi-monotonicity formulae it follows that: \begin{prop}[{\cite[Proposition 4.11]{FGS}}]\label{unicita blowup + mod cont} For every point $x\in Sing(u)$ there exists a unique blow-up limit $v_x(y)=w(\mathbb{L}^{-1}(x)y)$. Moreover, if $K\subset Sing(u)$ is a compact subset, then, for every point $x\in K$ \begin{equation}\label{modulo di continuita K pti sing} \left\|u_{\mathbb{L}(x),r} - w \right\|_{C^1(B_1)}\leq \sigma_K(r) \qquad \forall r\in (0, r_K), \end{equation} for some modulus of continuity $\sigma_K :\mathbb{R}^+\to\mathbb{R}^+$ and a radius $r_K>0$. \end{prop} Next, we proceed with the case of the regular points.\\ We extend the energy defined in \eqref{Psi_v} from $2$-homogeneous functions to each function $\xi\in W^{1,2}(B_1)$ by \begin{equation*} \Psi_\xi(1)=\int_{B_1}\big(|\nabla \xi|^2 + 2\xi\big)\,dx -\int_{\partial B_1} \xi^2\,d\mathcal{H}^{n-1}. \end{equation*} We state Weiss' celebrated epiperimetric inequality \cite[Theorem 1]{Weiss} (recently a variational proof for the thin obstacle problem has been given in \cite{FS}, and with the same approach, for the lower dimensional obstacle problem has been given in \cite{Geraci}): \begin{teo}[Weiss’ epiperimetric inequality]\label{epip Weiss} There exist $\d>0$ and $k\in (0,1)$ such that, for every $\phi\in H^1(B_1)$, $2$-homogeneous function, with $\|\phi-w\|_{H^1(B_1)}\leq \d$ for some global solution $w$ of type $(A)$, there exists a function $\xi\in H^1(B_1)$ such that $\xi_{|\partial B_1}=\phi_{|\partial B_1}$, $\xi\geq 0$ and \begin{equation}\label{dis epip} \Psi_\xi(1)-\theta\leq (1-k)\left(\Psi_\phi(1)-\theta\right), \end{equation} where $\theta=\Psi_w(1)$ is the energy of any global solution of type $(A)$. \end{teo} As in \cite{FGS} we prove a technical lemma that will be the key ingredient in the proof of uniqueness. With respect to \cite[Lemma 4.8]{FGS} the lack of regularity of $\mathbb{A}$ and $f$ in $(H1)$-$(H3)$ does not allow us to use the final dyadic argument; for this reason we introduce a technical hypothesis $(H4)$ with $a>2$. For a clearer comprehension on behalf of the reader, we report the whole proof: \begin{lem}\label{lem tecn unic blow-up} Let $u$ be the solution of \eqref{problema ad ostacolo} and we assume $(H4)$ with $a>2$ and \eqref{abuso_notazione}. 
If there exist radii $0\leq \varrho_0<r_0<1$ such that \begin{equation}\label{cond inf lemma tecn} \inf_{w}\|{u_r}_{|\partial B_1}-w\|_{H^1(\partial B_1)}\leq \d \qquad\qquad \forall\,\,\varrho_0\leq r\leq r_0, \end{equation} where the infimum is taken on all global solutions $w$ of type $(A)$ and $\d>0$ is the constant of Theorem \ref{epip Weiss}, then for each pair of radii $\varrho,t$ such that $\varrho_0\leq \varrho<t\leq r_0$ we have \begin{equation}\label{cond int lemma tecn} \int_{\partial B_1} |u_t - u_\varrho|\, d\mathcal{H}^{n-1}\leq C_7\, \rho(t), \end{equation} with $C_7$ a positive constant independent of $r$ and $\varrho$, while $\rho(t)$ is a growing function vanishing in $0$. \end{lem} \begin{proof} From the Divergence Theorem, \eqref{E H-O_grande} and \eqref{H'-H} we can compute the derivative of $\Phi'(r)$ in the following way: \begin{equation*} \begin{split} \Phi'(r)&=\frac{\mathcal{E}'(r)}{r^{n+2}}-(n+2)\frac{\mathcal{E}(r)}{r^{n+3}} - 2\frac{\mathscr{H}'(r)}{r^{n+3}} +2(n+3)\frac{\mathscr{H}(r)}{r^{n+4}}\\ \geq &\frac{1}{r^{n+2}}\int_{\partial B_r}(\langle\mathbb{A}\nabla u,\nabla u\rangle +2\,fu)\,d\mathcal{H}^{n-1} -(n+2)\frac{\mathcal{E}(r)}{r^{n+3}} + \frac{8}{r^{n+4}}\mathscr{H}(r)\\ &- \frac{4}{r^{n+3}}\int_{\partial B_r} u \langle \mathbb{A}\nu,\nabla u \rangle\, d\mathcal{H}^{n-1} -C\,r^{-\frac{n}{\Theta}}\\ \geq& \frac{1}{r^{n+2}}\int_{\partial B_r}(|\nabla u|^2 +2\,u)\,d\mathcal{H}^{n-1} -\frac{(n+2)}{r}\Phi(r) - \frac{2(n-2)}{r^{n+4}}\int_{\partial B_r} u^2\,d\mathcal{H}^{n-1}\\ &- \frac{4}{r^{n+3}}\int_{\partial B_r} u \langle \nu,\nabla u \rangle\, d\mathcal{H}^{n-1} -C\,\left(r^{-\frac{n}{\Theta}}+\frac{\omega(r)}{r}\right)\\ =& -\frac{(n+2)}{r}\Phi(r) + \frac{1}{r}\int_{\partial B_1} \Big(\big(\langle\nu,\nabla u_r\rangle-2u_r\big)^2 + |\partial_\tau u_r|^2 +2u_r - 2n\,u_r^2\Big)\,d\mathcal{H}^{n-1}\\ &- C\left(r^{-\frac{n}{\Theta}}+\frac{\omega(r)}{r}\right), \end{split} \end{equation*} where we denote by $\partial_\tau u_r$, the tangential derivative of $u_r$ along $\partial B_1$. Let $w_r$ be the $2$-homogeneous extension of ${u_r}_{|\partial B_1}$. We note that if $\varphi$ is a $2$-homogeneous function, then we have \begin{equation}\label{cambio variabili w_r} \begin{split} \int_{B_1} \varphi(x)\,dx&=\int_0^1\int_{\partial B_t} \varphi(y)\,d\mathcal{H}^{n-1}(y)\,dt=\int_0^1 t^{n+1}\int_{\partial B_1} \varphi(y)\,d\mathcal{H}^{n-1}(y)\\ &=\frac{1}{n+2}\int_{\partial B_1} \varphi(y)\,d\mathcal{H}^{n-1}(y). \end{split} \end{equation} Then a simple integration in polar coordinates, thanks to Euler's homogeneous function Theorem and \eqref{cambio variabili w_r} which give \begin{equation*} \begin{split} \int_{\partial B_1}\big(|\partial_\tau & u_r|^2 +2u_r - 2n\,u_r^2\big)\,d\mathcal{H}^{n-1}=\int_{\partial B_1}\!\!\!\!\big(|\partial_\tau w_r|^2 +2w_r +4w_r^2- 2(n+2)\,w_r^2\big)\,d\mathcal{H}^{n-1}\\ =&\int_{\partial B_1}\big(|\nabla w_r|^2 +2w_r\big) - 2(n+2)\int_{\partial B_1}w_r^2\,d\mathcal{H}^{n-1}\\ =&(n+2)\int_{B_1} (|\nabla w_r|^2 + 2w_r)\,d\mathcal{H}^{n-1}- 2(n+2)\int_{\partial B_1}w_r^2\,d\mathcal{H}^{n-1}= (n+2)\Psi_{w_r}(1). \end{split} \end{equation*} Therefore, we conclude that \begin{equation}\label{Phi'> in lem} \Phi'(r)\geq \frac{(n+2)}{r}\big(\Psi_{w_r}(1)-\Phi(r)\big) + \frac{1}{r}\int_{\partial B_1} \Big(\big(\langle\nu,\nabla u_r\rangle-2u_r\big)^2\,d\mathcal{H}^{n-1} - C\,\left(r^{-\frac{n}{\Theta}}+\frac{\omega(r)}{r}\right). 
\end{equation} We can also note that, being $w_r$ the $2$-homogeneous extension of ${u_r}_{|\partial B1}$, thanks to \eqref{cambio variabili w_r} and \eqref{cond inf lemma tecn}, there exists a global solution $w$ of type $(A)$ such that \begin{equation*} \|w_r-w\|_{H^1(B_1)}\leq \frac{1}{\sqrt{n+2}}\|{w_r}_{\partial B_1} - w\|_{H^1(\partial B_1)} \leq \d. \end{equation*} Hence, we can apply the epiperimetric inequality \eqref{dis epip} to $w_r$ and find a function $\xi\in w_r + H^1_0(B_1)$ such that \begin{equation}\label{epip in lem} \Psi_\xi(1)-\theta\leq (1-k)\big(\Psi_{w_r}(1)-\theta \big). \end{equation} Moreover, we can assume without loss of generality (otherwise we substitute $\xi$ with $u_r$) that $\Psi_\xi(1)\leq \Psi_{u_r}(1)$. Then, from the minimality of $u_r$ in $\mathcal{E}$ with respect to its boundary conditions \eqref{abuso_notazione} and lemma \ref{lem reg mu} we have that \begin{equation}\label{Psi_xi > in lem} \begin{split} \Psi_\xi(1)=& \int_{B_1}\big(|\nabla \xi|^2 + 2\xi\big)\,dx -\int_{\partial B_1} \xi^2\,d\mathcal{H}^{n-1}\\ \geq& \int_{B_1}\big(\langle \mathbb{A}(rx)\nabla \xi,\nabla \xi\rangle + 2\,f(rx)\xi\big)\,dx -\int_{\partial B_1} \mu(rx)\,\xi^2\,d\mathcal{H}^{n-1}\\ &-C\,\left(r^{1-\frac{n}{p^*}}+\omega(r)\right) \int_{B_1}\big(|\nabla \xi|^2 + 2\xi\big)\,dx - C\,r^\c \int_{\partial B_1} \xi^2\,d\mathcal{H}^{n-1}\\ \geq& \Phi(r) - C\,\left(r^{1-\frac{n}{p^*}}+\omega(r)\right) \int_{B_1}\big(|\nabla \xi|^2 + 2\xi\big)\,dx - C\,r^\c \int_{\partial B_1} \xi^2\,d\mathcal{H}^{n-1}\\ \geq& \Phi(r) - C\,\left(r^{1-\frac{n}{p^*}}+\omega(r)\right). \end{split} \end{equation} From \eqref{epip in lem} and \eqref{Psi_xi > in lem} we get \begin{equation}\label{Psi-Phi in lem} \begin{split} \Psi_{w_r}(1)-\Phi(r)&\geq \frac{1}{1-k}\left(\Phi(r)-\theta-C\left(r^{1-\frac{n}{p^*}}+\omega(r)\right)\right) + \theta - \Phi(r) \\ &= \frac{k}{1-k} (\Phi(r)-\theta)- C\left(r^{1-\frac{n}{p^*}}+\omega(r)\right). \end{split} \end{equation} Then from \eqref{Phi'> in lem} and \eqref{Psi_xi > in lem} \begin{equation}\label{lemmatecn Phi'>} \Phi'(r)\geq \frac{n+2}{r}\frac{k}{1-k} (\Phi(r)-\theta) - C\,\left(r^{-\frac{n}{\Theta}}+\frac{\omega(r)}{r}\right). \end{equation} Let now $\widetilde{C}_6\in (0,\,(1-\frac{n}{\Theta}) \wedge (n+2)\frac{k}{1-k})$, then \begin{equation}\label{Phi-theta)'} \bigg((\Phi(r)-\theta)\,r^{-\widetilde{C}_6}\bigg)'\geq - C\,\left(r^{-\frac{n}{\Theta}-\widetilde{C}_6}+\frac{\omega(r)}{r^{1+\widetilde{C}_6}}\right). \end{equation} Indeed, by taking into account \eqref{lemmatecn Phi'>} \begin{equation*} \begin{split} \Big((&\Phi(r)-\theta)\,r^{-\widetilde{C}_6}\Big)' = \Phi'(r)r^{-\widetilde{C}_6} - \widetilde{C}_6\,(\Phi(r)-\theta)\,r^{-\widetilde{C}_6-1}\\ &\geq \bigg(\frac{n+2}{r}\frac{k}{1-k} (\Phi(r)-\theta) - C\,\left(r^{-\frac{n}{\Theta}}+\frac{\omega(r)}{r}\right)\bigg)r^{-\widetilde{C}_6} - \widetilde{C}_6\,(\Phi(r)-\theta)\,r^{-\widetilde{C}_6-1}\\ &\geq(\Phi(r)-\theta)r^{-\widetilde{C}_6-1} \bigg((n+2)\frac{k}{1-k}-\widetilde{C}_6\bigg) - C\,\left(r^{-\frac{n}{\Theta}}+\frac{\omega(r)}{r}\right)r^{-\widetilde{C}_6}\\ &\geq - C\,\left(r^{-\frac{n}{\Theta}-\widetilde{C}_6}+\frac{\omega(r)}{r^{1+\widetilde{C}_6}}\right). 
\end{split} \end{equation*} By integrating \eqref{Phi-theta)'} in $(t,r_0)$ with $t\in (s_0,r_0)$ and multiplying by $t^{\widetilde{C}_6}$ we finally get \begin{align*} &t^{\widetilde{C}_6}\,\bigg[(\Phi(r)-\theta)\,r^{-\widetilde{C}_6}\bigg]_t^{r_0}\geq - C\,t^{\widetilde{C}_6}\,\int_t^{r_0}\left(r^{-\frac{n}{\Theta}-\widetilde{C}_6}+\frac{\omega(r)}{r^{1+\widetilde{C}_6}}\right)\,dr \end{align*} whence \begin{equation}\label{Phi - theta} \begin{split} \Phi(t)-\theta &\leq C\bigg(\int_t^{r_0}\left(r^{-\frac{n}{\Theta}-\widetilde{C}_6}+\frac{\omega(r)}{r^{1+\widetilde{C}_6}}\right)\,dr + 1\bigg)t^{\widetilde{C}_6}\\ &\leq C \bigg(r^{1-\frac{n}{\Theta}}+ t^{\widetilde{C}_6} + t^{\widetilde{C}_6} \int_t^{r_0}\frac{\omega(r)}{r^{1+\widetilde{C}_6}}\,dr \bigg) \leq C\, t^{\widetilde{C}_6} \left(\int_t^{r_0}\frac{\omega(r)}{r^{1+\widetilde{C}_6}}\,dr + 1\right). \end{split} \end{equation} Consider now $\varrho_0<\varrho<r_0$ and estimate as follows \begin{equation*} \begin{split} \int_{\partial B_1}|u_t-&u_\varrho|\,d\mathcal{H}^{n-1}=\int_{\partial B_1}\left|\int_\varrho^t\frac{d}{dr}\left(\frac{u(rx)}{r^2}\right)\,dr\right|\,d\mathcal{H}^{n-1}\\ \leq& \int_\varrho^t\!\! r^{-2}\int_{\partial B_1}\!\!\! \left|\langle\nabla u(rx),x \rangle - 2\frac{u(rx)}{r}\right|\,d\mathcal{H}^{n-1}\,dr = \int_\varrho^t\!\! r^{-1}\int_{\partial B_1} \!\!\left|\langle\nabla u_r(x),x \rangle - 2\,u_r(x)\right|\,d\mathcal{H}^{n-1}\,dr\\ \leq& \sqrt{n\omega_n} \int_\varrho^t r^{-\frac{1}{2}}\left(r^{-1}\int_{\partial B_1}\left|\langle\nabla u_r(x),x \rangle - 2\,u_r(x)\right|^2\,d\mathcal{H}^{n-1}\right)^\frac{1}{2}\,dr. \end{split} \end{equation*} Combining \eqref{weiss stima Phir}, \eqref{Phi'> in lem}, \eqref{Psi-Phi in lem}, \eqref{Phi - theta} and the H\"{o}lder inequality we have \begin{equation}\label{u_t-u_s in lem} \begin{split} \int_{\partial B_1} |u_t &-u_\varrho|\,d\mathcal{H}^{n-1}\leq C\int_\varrho^t r^{-\frac{1}{2}}\bigg(\Phi'(r)+ C\left(r^{-\frac{n}{\Theta}}+\frac{\omega(r)}{r}\right)\bigg)^\frac{1}{2}\,dr\\ &\leq C \left(\log\frac{t}{\varrho}\right)^\frac{1}{2}\,\left(\Phi(t)-\Phi(\varrho) + C\left(t^{1-\frac{n}{\Theta}}-\varrho^{1-\frac{n}{\Theta}}+ \int_\varrho^t\frac{\omega(r)}{r}\,dr\right)\right)^\frac{1}{2}\\ &\leq C \left(\log\frac{t}{\varrho}\right)^\frac{1}{2}\,\left((\Phi(t)-\theta) +(\theta-\Phi(\varrho)) + C\left(t^{1-\frac{n}{\Theta}}+\int_\varrho^t\frac{\omega(r)}{r}\,dr\right)\right)^\frac{1}{2}\\ &\leq C \left(\log\frac{t}{\varrho}\right)^\frac{1}{2}\,\left(t^{\widetilde{C}_6} + t^{\widetilde{C}_6} \int_t^{r_0}\frac{\omega(r)}{r^{1+\widetilde{C}_6}}\,dr + \int_0^t\frac{\omega(r)}{r}\,dr + \int_\varrho^t\frac{\omega(r)}{r}\,dr \right)^\frac{1}{2}. \end{split} \end{equation} The function $|\log t|^{a}$ is decreasing if $t\in (0,1]$ and it is easy to prove that $t^{\widetilde{C}_6}|\log t|^{a}\searrow 0$. If $r_0<<1$ then we have \begin{equation*} t^{\widetilde{C}_6} \int_t^{r_0}\frac{\omega(r)}{r^{1+\widetilde{C}_6}}\,dr + \int_0^t\frac{\omega(r)}{r}\,dr\leq |\log t|^{-a}\int_0^{r_0}\frac{\omega(r)|\log t|^{a}}{r}\,dr. \end{equation*} Therefore thanks to the hypothesis $(H4)$ with $a>2$, if $r_0<<1$ for every $0\leq t\leq r_0$, then we achieve $(\omega(t)\vee t^{C_6})\leq |\log t|^{-a}$. 
Then the following holds \begin{equation}\label{u_t-u_s < log in lem} \begin{split} \int_{\partial B_1} |u_t -u_\varrho|\,d\mathcal{H}^{n-1} &\leq C \left(\log\frac{t}{\varrho}\right)^\frac{1}{2}\, |\log t|^{-\frac{a}{2}}\left(1 +\int_0^{r_0}\frac{\omega(r)\,|\log r|^a}{r}\,dr\right)^\frac{1}{2} \leq C \left(\log\frac{t}{\varrho}\right)^\frac{1}{2}\,|\log t|^{-\frac{a}{2}}. \end{split} \end{equation} A simple dyadic decomposition argument then leads to the conclusion. If $\varrho\in [2^{-k},2^{-k+1})$ and $t\in [2^{-h},2^{-h+1})$ with $h<k$, applying \eqref{u_t-u_s < log in lem} \begin{equation*} \int_{\partial B_1} |u_t-u_\varrho|\,d\mathcal{H}^{n-1}\leq C\,\sum_{j=h}^k \log(2^{j})^{-\frac{a}{2}}\leq C_7\, \sum_{j=h}^\infty \frac{1}{j^{\frac{a}{2}}}=: C_7\, \rho(t), \end{equation*} with \begin{equation}\label{rho} \rho(t):=\sum_{j=h}^\infty \frac{1}{j^{\frac{a}{2}}}\quad \textrm{if} \quad t\in [2^{-h},2^{-h+1}). \end{equation} By taking \eqref{H3'} into account we have $a>2$, therefore, the function $\rho(t)$ is growing and infinitesimal in $0$, from which the conclusion of the lemma follows. \end{proof} Checking the hypothesis of Lemma \ref{lem tecn unic blow-up} it is possible to prove the uniqueness of the blow-ups at regular points of the free-boundary: \begin{prop}[{\cite[Proposition 4.10]{FGS}}]\label{prop unicita blow-up pti regolari} Let $u$ be a solution to the obstacle problem \eqref{PDE_u} with $f$ that satisfies $(H4)$ with $a>2$ and $x_0\in Reg(u)$. Then, there exist constants $r_0=r_0(x_0), \eta_0=\eta_0(x_0)$ such that every $x\in \Gamma_u\cap B_{\eta_0}(x_0)$ is a regular point and, denoting by $v_{x}=w(\mathbb{L}^{-1}(x)y)$ any blow-up of $u$ in $x$ we have \begin{equation}\label{stima unicità blow-up pti reg} \int_{\partial B_1}|u_{\mathbb{L}(x),r}-w|\,d\mathcal{H}^{n-1}(y)\leq C_7\,\rho(r) \qquad \forall\,r\in (0,r_0), \end{equation} where $C_7$ is an independent constant from $r$ and $\rho(r)$ a growing, infinitesimal function in $0$. In particular, the blow-up limit $v_x$ is unique. \end{prop} \begin{oss} If $f$ is $\a$-H\"{o}lder we can prove Lemma \ref{lem tecn unic blow-up} and Proposition \ref{prop unicita blow-up pti regolari} with $\rho(t)=t^{C_6}$ where $C_6:=\frac{\bar{C}_6\wedge \a}{2}$. \end{oss} \section{Regularity of the free-boundary} In this last section we state some regularity results of the free-boundary of $u$, the solution of \eqref{problema ad ostacolo}. If the matrix $\mathbb{A}$ satisfies the hypotheses $(H1)$-$(H2)$ and the linear term $f$ satisfies the hypothesis $(H4)$ with $a>2$ we obtain differentiability of the free-boundary in a neighborhood of any point $x\in Reg(u)$. In particular if $f$ is H\"{o}lder we establish the $C^{1,\b}$ regularity as in \cite{FGS} where $\mathbb{A}$ is Lipschitz continuous. Since the arguments involved are those used in \cite{FGS} together with the preliminary assumption developed in the previous sections we do not provide any proof. \begin{teo}[{\cite[Theorem 4.12]{FGS}}]\label{teo regolarità pti reg} Assume hypotheses $(H1)$, $(H2)$ and $(H4)$ with $a>2$ hold. Let $x\in Reg(u)$. Then, there exists $r>0$ such that $\Gamma_u\cap B_r(x)$ is hypersurface $C^1$ and $n$ its normal vector is absolutely continuous with modulus of continuity depending on $\rho$ defined in \eqref{rho}. In particular if $f$ is H\"{o}lder continuous there exists $r>0$ such that $\Gamma_u\cap B_r(x)$ is hypersurface $C^{1,\b}$ for some universal exponent $\b\in (0,1)$. \end{teo} We are able to say less on the set of singular points. 
We know that under the hypotheses $(H1)$, $(H2)$ and $(H4)$ wiht $a\geq 1$, the set $Sing(u)$ is contained in the union of $C^1$ submanifold. \begin{defi}\label{d:singular stratum} The singular stratum $S_k$ of dimension $k$ for $k=0,1,\dots,n-1$ is the subset of points $x\in Sing(u)$ for which $\mathrm{Ker}(\mathbb{B}_x)=k$. \end{defi} In the following theorem we show that the set $Sing(u)$ has a stronger regularity property than rectifiabilty: we show that the singular stratum $S_k$ is locally contained in a single submanifold. Moreover that $\cup_{k=l}^{n-1} S_k$ is a closed set for every $l=0,1,\dots,n-1$. \begin{teo}[{\cite[Theorem 4.14]{FGS}}]\label{teo regolarita pti sing} Assume hypotheses $(H1)$, $(H2)$ and $(H4)$ with $a\geq 1$. Let $x\in S_k$. Then there exists $r$ such that $S_k\cap B_r(x)$ is contained in regular $k$-dimensional submanifold of $\mathbb{R}^n$. \end{teo} \subsection*{Acknowledgments} The author is grateful and would like to thank Professor Matteo Focardi for his suggestion of the problem and for his constant support and encouragement. The author is partially supported by project GNAMPA $2015$ ``Regolarità per problemi di analisi geometrica e del calcolo delle variazioni''. The author is member of the Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). {\footnotesize
1,108,101,562,597
arxiv
\section{Introduction} Graphene, which is a single layer of sp$^2$ bonded carbon atoms, shows promise for use in a variety of technological applications. For instance, it has an optical transmittance of over 97\% throughout the visible range and is a semi-metal, which makes it an ideal material for use as a transparent conducting medium for optical displays or solar cells \cite{wang2008transparent, 5347313820100801}. Its compatibility with conventional Si-based processing techniques and its intrinsic mobility of over 200,000 cm$^2$/V$\cdot$s make graphene a potential candidate for use in high speed electronic devices \cite{Bolotin2008351, Lin10062011}. In addition, the sheet thermal conductivity of graphene has been measured to be 5 $\times$ 10$^3$ W/m$\cdot$K, which is considerably higher than the thermal conductivity of bulk Cu (400~W/m$\cdot$K) \cite{doi:10.1021/nl0731872}. This ability of graphene to efficiently dissipate heat is especially important if it is to be implemented in high-density integrated circuits. Although graphene shows great promise for use in several technological applications, it has not seen widespread adoption within the semiconductor industry. The primary reason for this is that a low-cost technique for producing large areas of graphene with a defect density that is low enough for most applications has not yet been developed. Although techniques for producing high quality graphene on SiC substrates have been developed, the cost of the SiC substrates is prohibitively high. Chemical vapor deposition (CVD) of graphene on copper foil substrates is one of the most promising techniques for producing large area graphene films since the substrate cost is relatively low and the graphene thickness is self-limited to a single layer of graphene when low pressure conditions are used \cite{Li05062009}. However, controlling the defect density of graphene grown by CVD has been a challenge. Because of this, a wide range of carrier mobilities have been reported for graphene films grown by CVD on Cu foil substrates \cite{C0JM02126A, guermoune2011chemical}. Imperfections in the graphene films can result from several different sources. For instance, graphene must be transferred from the Cu substrate to an insulating or semiconducting substrate for characterization of its mobility, which can result in structural damage during the transfer process. Chemical residues from the etching of the Cu substrate will also adversely affect the transport properties. However, most of the variability of the transport properties of graphene grown by CVD most likely results from the wide range of vacuum conditions, temperature profiles, and gas purities used during the growth process \cite{C0JM02126A}. For growth conditions that produce large graphene grains with a low number of structural defects and impurities, the transport properties have been shown to approach those of graphene mechanically exfoliated from graphite \cite{doi:10.1021/nl101629g}. On the other hand, graphene films composed of small grains that are rotationally misaligned with respect to each other are expected to have a reduced mobility and thermal conductivity owing to the scattering of carriers and phonons at grain boundaries. In addition, structural imperfections at grain boundaries and within the grain will have a much higher reactivity towards atmospheric contaminants. This will result in a time dependent degradation of the transport properties. 
There are several factors that will affect the graphene grain size and defect density during the growth by CVD. For instance, the size and orientation of the Cu substrate grains are expected to influence the rotational orientation and size of the graphene grains. Since the interaction of graphene and Cu is weak, the rotational orientation of the graphene grains during the initial nucleation is expected to be preserved as the graphene grains grow laterally. Step edges and point defects at the surface of the Cu substrate typically have a much higher catalytic activity towards the decomposition of the hydrocarbon precursor and a higher interaction strength with carbon atoms. Therefore, these sites are expected to be the primary nucleation sites for graphene at lower temperatures. At higher temperatures, the interaction of the carbon atoms with the ordered sites on the terraces between the Cu substrate step edges should be comparable to the interaction at the step edges. Once this happens, nucleation on the terraces is expected to dominate. Since the terrace sites have a well-defined symmetry, the graphene grains will typically have a few preferred orientations with respect to the substrate. The two lowest energy surface terminations of Cu, which is a face-centered cubic (FCC) crystal, are the (111) and (100) orientations. For cold-rolled foils of Cu, the grains within the Cu are known to predominantly reorient with a (100) surface termination at the elevated temperatures used during CVD growth of graphene \cite{doi:10.1021/ja109793s, robinson:011401}. However, thin films of Cu deposited on substrates such as sapphire or SiO$_2$ often have a (111) texture \cite{citeulike:9349235}. Because both graphene and the Cu(111) surface have hexagonal symmetry and the lattice mismatch between them is only -3.5\%, it should be possible to grow overlayers of graphene on this surface with a single rotational orientation if nucleation occurs on the substrate terraces. In fact, previous studies of graphene growth in ultra-high vacuum (UHV) on Cu(111) single crystals have shown that single-domain epitaxial films of graphene can be grown \cite{PhysRevB.84.155425, robinson2012argon}. Because the (100) surface has square symmetry, whereas the graphene lattice is hexagonal, graphene grown on this surface will have a minimum of two preferred orientations. Somewhat conflicting results have been reported for the rotational alignment of graphene films on Cu(100) single crystals. Rasool \emph{et al}. used scanning tunneling microscopy (STM) to measure the atomic structure of graphene films grown in a tube furnace on Cu(100) single crystals and found that the graphene grains had nucleated in a wide range of orientations on this surface \cite{doi:10.1021/ja200245p}. In contrast to those results, Zhao \emph{et al}. used \emph{in-situ} STM and low energy electon diffraction (LEED) to analyze the crystal struture of graphene films grown on a Cu(100) single-crystal in their UHV system and reported that a two-domain overlayer had formed \cite{Zhao2011509}. However, the presence of broad arcs instead of spots in their LEED patterns indicates that the graphene grains within their films also had considerable rotational disorder. Woford \emph{et al}. 
used \emph{in-situ} low energy electron microscopy (LEEM) to analyze the crystal structure of graphene grains grown by carbon deposition in UHV on Cu foil substrates that had recrystallized with a (100) texture and found a similar result: a two-domain overlayer with a mosaic spread of $\pm 7.5^\circ$ \cite{doi:10.1021/nl102788f}. Therefore, it is still an open question as to whether it is possible to grow a graphene overlayer where each domain is rotationally aligned within a few degrees of one of the two high symmetry directions of the substrate. As mentioned above, one of the most popular substrates for graphene growth is cold-rolled Cu foil. For growth of graphene by CVD on foils, an important factor that can influence the growth rate and the defect density of graphene is the presence of copper oxide within the foil and oxygen-containing molecules such as O$_2$ and H$_2$O in the chamber and/or gas stream during growth. The copper oxide can potentially affect both the growth rate of the grains within the Cu foil and the catalytic activity of the surface. Oxygen in the gas stream will etch the graphene at elevated temperatures by forming CO$_2$ and CO. During conventional CVD growth of graphene, the Cu foil substrates are annealed in flowing hydrogen before graphene growth to reduce the copper oxide on the surface and within the bulk of the foil. The annealing process also results in an increase in the size of the grains within the Cu substrate. To initiate graphene growth, a hydrocarbon precursor gas is introduced into the growth chamber, usually mixed with hydrogen to prevent copper oxide formation during the graphene growth process. After the growth process is complete, the sample is cooled to room temperature in a flow of the hydrocarbon precursor mixed with hydrogen. The most common chambers used for graphene growth are hot-walled reactors (\emph{i.e.}, tube furnaces). For atmospheric pressure CVD reactors, a flow of an inert gas such as argon is often used to maintain low levels of impurity gases in the reactor and to dilute the hydrocarbon precursor. Low pressure CVD reactors are generally cleaner than atmospheric pressure reactors. However, the base pressure of most of these systems is in the mTorr region, so these systems are still far from the purity that can be achieved with UHV-based growth chambers. One of the first studies of the influence of oxygen on the rate at which a hydrocarbon precursor decomposes on a Cu single crystal substrate was published by Alstrup \emph{et al}. in 1992 \cite{Alstrup199295}. In this study, the decomposition of methane (CH$_4$) on both the clean and oxygen pre-dosed Cu(100) surface was performed in a UHV chamber. The decomposition rate of the methane was monitored with \emph{in-situ} X-ray photoelectron spectroscopy (XPS). Because of the exceedingly low dissociative chemisorption probability, extreme care was taken to reduce impurities in the methane gas by using a bakeable gas inlet system that incorporated a cooled molecular sieve and a nickel catalyst. The decomposition studies were performed with a methane pressure of 10 Torr over the temperature range of 800-1000 K for the clean surface and 700-800 K for the oxygen pre-dosed surface. The oxygen pre-dosed surface was prepared by exposing the surface to 1000 L of molecular oxygen at 500 K, which resulted in a saturation coverage of 0.5 monolayers (ML) of chemisorbed oxygen on the surface. 
It was determined that the activation energy for dissociative adsorption was 201 $\pm 4$ kJ/mol and 123 $\pm 27$ kJ/mol on the clean surface and the oxygen pre-dosed surface, respectively. Interestingly, they also determined that the saturation coverage of carbon on the Cu(100) surface was $\sim$2.4~ML, which corresponds to a single atomic layer of graphene ($\sigma _{Cu(100)} = 1.53 \times 10^{15}$ cm$^{-2}$ and $\sigma _{graphene} = 3.82 \times 10^{15}$ cm$^{-2}$). Therefore, they were apparently the first group to determine that graphene growth self-terminates at a single atomic layer on a Cu substrate when using a methane precursor. In order to study the effect that oxygen has on the growth of graphene on Cu substrates, we have grown graphene films in an UHV chamber by catalytic decomposition of ethylene (C$_2$H$_4$) on a Cu(100) single crystal substrate with and without a chemisorbed oxygen layer on the surface and monitored the crystal structure of the graphene with \emph{in-situ} LEED and the growth morphology with \emph{ex-situ} scanning electron microscopy (SEM). The Cu(100) surface was chosen because the grains within Cu foils used for graphene growth typically recrystallize with a (100) surface termination. By performing this study in an UHV system on a Cu(100) single crystal, effects due to contamination within the substrate, the chamber, and the gas stream can be minimized. In addition, the surface normal of the Cu(100) crystal used in this study was oriented within $\pm 0.1 ^\circ$ of the [100] direction to help reduce the influence of step edges on the nucleation of the graphene grains. \section{Experimental} Graphene growth was carried out on a Cu(100) single-crystal (99.999\% purity) in an UHV chamber with a base pressure of $1 \times 10^{-10}$~Torr. The Cu(100) crystal was cut in a top-hat design, and the surface was polished to within $\pm 0.1 ^\circ$, which results in an average terrace width of 0.1~$\mu$m. The substrate was heated with an oxygen series button heater (HeatWave Labs, Inc., P/N 101275-28) that was mounted on a custom-made sample holder, as shown in Figure \ref{CustomSampleHolder}. The sample holder was attached to a stainless steel dewar that was connected to a differentially pumped rotary motion feedthru that was attached to an \emph{x,y,z} manipulator. The crystal was held in place with a Ta cap. A spacer ring made from Mo foil was used to center the crystal in the Ta cap. Ta foil was mounted on the side and back of the button heater to reduce radiative heat losses. This allowed annealing of the crystal at temperatures as high as $\sim$1000 $^\circ$C, which is close to the melting point of Cu (1085 $^\circ$C). The front surface of the crystal was left exposed so that the surface could be cleaned by Ar ion sputtering and could be characterized with \emph{in-situ} LEED. A chromel-alumel thermocouple was spot-welded onto the Mo ring, and was used to calibrate a disappearing-filament pyrometer that was used to measure the temperature of the front surface of the Cu crystal. LEED was the primary characterization technique used for these experiments and was done \emph{in-situ} with a Princeton Research Instruments (PRI), four-grid, rear-view LEED. SEM was also used to characterize the sample, and those measurements were done \emph{ex-situ} in a LEO 1550 SEM. The gas doser systems used for the ethylene, oxygen, and argon each consisted of a bakeable variable leak valve, stainless steel tubing, stainless steel regulator, and lecture bottle. 
The grade of ethylene was ultra-high purity (99.95\%); the grade of oxygen was Matheson purity (99.997\%); and the grade of the argon was Matheson purity (99.9995\%). The stainless steel tubing of each doser system was baked into a mini turbo-molecular pump for 24 hours before backfilling with the ethylene, oxygen, or argon. Ethylene pressures as high as 10 mTorr were used for the growth of the graphene films. In this pressure range, gas molecules are readily ionized by large electric fields, which will produce a corona discharge. In addition, ethylene will dissociate when passed across a hot filament. To prevent inadvertent dissociation of the ethylene, the ion gauge was turned off once the chamber pressure exceeded 10$^{-5}$ Torr. A UHV capacitive manometer that has a measurement range of $1 \times 10^{-1}$~Torr to $1 \times 10^{-5}$~Torr was used to measure the ethylene pressure during graphene growth. The capacitive manometer has no high voltages or hot filaments that could result in a corona discharge or the dissociation of the ethylene molecules. The capacitive manometer is also an absolute pressure gauge, which has the advantage that it does not require calibration for specific gas types. With this unique UHV-based experimental setup, the growth of graphene by CVD can be performed with precursor pressures as high as 100 mTorr and temperatures up to 1000~$^\circ$C. Because of the extreme cleanliness of the gas dosing system and the UHV components, the chamber pressure typically returns to the 10$^{-10}$~Torr range within a few minutes of pumping out the precursor gas. This allows the growth of the graphene films in an environment that minimizes the presence of impurities at the surface of the crystal and within the precursor gas, while allowing the \emph{in-situ} characterization of the films with LEED. The Cu(100) single crystal was cleaned by cycles of sputtering with 1~keV Ar ions followed by annealing. Initial attempts to clean the crystal by sputtering followed by annealing at 650~$^\circ$C resulted in a clean surface after a few sputter-anneal cycles, as determined by LEED patterns with sharp spots, low diffuse background, and the square symmetry expected for the Cu(100) surface. However, subsequent annealing of the crystal at 900~$^\circ$C resulted in a complex LEED pattern from diffusion of impurities to the surface, presumably S, which is a common impurity in Cu. In order to sufficiently clean the crystal for high temperature graphene growth, it was necessary to sputter the crystal while annealing at 650~$^\circ$C. After a few cycles of high temperature sputtering, it was found that annealing in UHV at 900~$^\circ$C resulted in sharp LEED spots with the correct square symmetry associated with the Cu(100) surface. After the bulk impurities were removed from the Cu(100) crystal, a few cycles of room temperature sputtering and annealing were done to ensure a relatively smooth surface for graphene growth since high temperature sputtering can result in surface roughening. A LEED pattern of the Cu(100) surface after this cleaning procedure is shown in Figure \ref{CleanCu100}a. A faint ring structure is also observed in the pattern just inside the four primary LEED spots of the Cu(100) substrate. This is caused by reflection of light from the back of the electron gun and is not related to diffraction from the sample surface. 
The growth of a chemisorbed oxygen layer on the Cu(100) surface was achieved by dosing molecular oxygen on the surface at a pressure of $1~\times~10^{-6}$~Torr for 5~min (300~L) while maintaining the temperature of the crystal at 300~$^\circ$C. This resulted in a sharp ($\sqrt{2}\times2\sqrt{2}$)~R45$^\circ$ LEED pattern (Figure \ref{OxygenCu100}a), which corresponds to 0.5 monolayers of chemisorbed oxygen \cite{Wuttig1989103}. Since the Cu(100) surface has square symmetry and the oxygen reconstruction has rectangular symmetry, two different orientations of the reconstruction are observed in the figure, rotated by 90$^\circ$ with respect to each other. \section{Results} The initial attempts to grow graphene on the Cu(100) surface used a technique where the substrate was heated to the growth temperature and the ethylene was then backfilled into the UHV chamber with the gate valves to the ion pump and turbo pump closed. Growth temperatures of 700~$^\circ$C, 800~$^\circ$C and 900~$^\circ$C were attempted with ethylene pressures ranging from 1 to 10~mTorr. In each attempt, the Cu substrate was held at the growth temperature in ethylene for 10 minutes, then the power to the button heater was ramped down, and the gate valve to the turbo pump was opened. The initial cooling rate was $\sim$70$^\circ$C/min, and the chamber pressure dropped to the low 10$^{-9}$ Torr range within about a minute of opening the turbo pump gate valve. For each of these growth attempts, no graphene was detected with LEED after the Cu(100) crystal had cooled to room temperature. Graphene growth was also attempted with the gate valve to the turbo pump open to produce a gas-flow environment. This also did not result in graphene growth on the Cu(100). These results are similar to our previously published attempts to grow graphene on the Cu(111) substrate by heating the crystal and backfilling with ethylene \cite{robinson2012argon}. The suppression of graphene growth on these surfaces is attributed to the high vapor pressure of Cu at the graphene growth temperature. For instance, the vapor pressure of the Cu(100) surface is $4 \times 10^{-6}$~Torr at 900~$^\circ$C \cite{vaporpressure}, which results in a sublimation rate of 0.4~ML/s in UHV. A growth technique that involved first backfilling the chamber with ethylene while the crystal is at room temperature, followed by heating the substrate to the growth temperature was then attempted. For each growth, the crystal was held at the growth temperature for 10~min, after which the gate valve to the turbo pump was opened, and power to the button heater was ramped down. This technique resulted in graphene formation on the Cu(100) substrate. A LEED pattern taken after the growth of graphene on the Cu(100) at an ethylene pressure of 5~mTorr and an anneal temperature of 800~$^\circ$C is shown in Figure \ref{CleanCu100}b. The four first-order diffraction spots from the Cu(100) surface can be seen in the figure, as well as a ring just outside the Cu(100) spots with approximately 24 broad regions of increased intensity. This corresponds to graphene domains with four different primary rotational alignments with respect to the Cu(100) substrate lattice. The arcs indicate that the alignment of each graphene domain has considerable rotational disorder. The same growth conditions were repeated at 900~$^\circ$C and resulted in the corresponding LEED image shown in Figure \ref{CleanCu100}c. 
For this growth condition, the graphene has a much higher degree of rotational order, as indicated by the formation of spots instead of broad arcs. The 12 spots correspond to two predominant orientations of the graphene with respect to the Cu(100) substrate. Each domain corresponds to graphene grains with one of their lattice vectors aligned with one of the Cu(100) surface lattice vectors. Since the Cu(100) surface has square symmetry but graphene has hexagonal symmetry, this results in a two-domain growth. In addition, there are faint arcs corresponding to a small amount of graphene that is mis-oriented with respect to the substrate by $\pm$15$^\circ$. Once the procedure for growing graphene on the Cu(100) surface was established, the effect that pre-dosing chemisorbed oxygen on the Cu(100) surface has on the growth of graphene was studied. Before each graphene growth was attempted, the Cu(100) surface was cleaned by sputtering and annealing, and a chemisorbed oxygen layer was formed on the Cu(100) surface by dosing oxygen while annealing the crystal at 300~$^\circ$C. The graphene growth procedure used after the chemisorbed oxygen layer was formed was identical to the procedure used for the clean Cu(100) surface. The LEED patterns for growths done at 800~$^\circ$C and 900~$^\circ$C in 5~mTorr of ethylene for 10 min are shown in Figure \ref{OxygenCu100}b and \ref{OxygenCu100}c, respectively. In both cases, a ring structure consisting of 12 broad arcs is observed. The 12 arcs correspond to graphene with one of its lattice vectors rotated $\pm$15$^\circ$ with respect to one of the Cu(100) surface lattice vectors. The primary difference between the LEED patterns for growth at 800~$^\circ$C and 900~$^\circ$C is that the intensity of the arcs after growth at 900~$^\circ$C is stronger than for growth at 800~$^\circ$C. In addition, the Cu(100) spots are weaker after growth at 900~$^\circ$C, which indicates that the graphene overlayer has a higher coverage at this temperature. For both the graphene grown on the clean Cu(100) surface and the surface with a chemisorbed oxygen layer, circumferential intensity profiles were measured, as seen in Figure~\ref{Cu100_bothGraphene}. The intensity profile for the clean Cu(100) surface has four sharp peaks, indicating a well-ordered surface. For the lower plot, which is from graphene that was grown on the clean Cu(100) surface, there are 12 primary peaks due to the two different rotational orientations of the 6-fold symmetric hexagonal lattice of the graphene. These peaks are also sharp, indicating a small amount of rotational disorder. In the middle plot, which is from graphene that was grown on the Cu(100) surface that was pre-dosed with chemisorbed oxygen, 12 broad peaks can be seen that are out of phase with the Cu(100) spots. Since Alstrup \emph{et al}. \cite{Alstrup199295} had observed that pre-dosing oxygen on the Cu(100) surface reduced the activation energy for the dissociation of methane, growths at 700~$^\circ$C for 10 min at an ethylene pressure of 5 mTorr on both the oxygen pre-dosed and clean Cu(100) surfaces were attempted. Trace amounts of graphene formed on both surfaces, as indicated by the very weak arcs observed in the LEED pattern for growth on the clean Cu(100) surface and a very weak ring structure for the oxygen pre-dosed surface. 
Since the LEED intensity associated with the graphene overlayer was approximately the same for both surface preparations, the pre-dosing of oxygen does not seem to significantly lower the activation energy for dissociation of ethylene at the Cu(100) surface. To determine the thermal stability of the chemisorbed oxygen layer, temperature-dependent LEED analysis was done on the oxygen pre-dosed Cu(100) surface, as shown in Figure~\ref{CookAndLook}. A chemisorbed oxygen layer was first adsorbed on the Cu(100) surface, resulting in a sharp ($\sqrt{2}\times2\sqrt{2}$)~R45$^\circ$ LEED pattern (Figure~\ref{CookAndLook}b). The sample was then heated while simultaneously performing LEED analysis. At $\sim$400~$^\circ$C, the LEED pattern was observed to convert from a ($\sqrt{2}\times2\sqrt{2}$)~R45$^\circ$ pattern to a ($\sqrt{2}\times\sqrt{2}$)~R45$^\circ$ pattern. At 500~$^\circ$C, the LEED spots corresponding to the oxygen overlayer had almost completely disappeared and the Cu(100) substrate spots were very weak (Figure~\ref{CookAndLook}c). However, upon cooling the sample back down to 100~$^\circ$C, the sharp ($\sqrt{2}\times2\sqrt{2}$)~R45$^\circ$ LEED pattern returned (Figure~\ref{CookAndLook}d). This indicates that the chemisorbed oxygen overlayer undergoes a surface melting transition at $\sim$500~$^\circ$C and then reorders into the ($\sqrt{2}\times2\sqrt{2}$)~R45$^\circ$ overlayer structure upon cooling. The heating sequence was then repeated, but to a temperature of 600~$^\circ$C. At 600~$^\circ$C, the LEED spots corresponding to the oxygen overlayer had completely disappeared and the spots corresponding to the Cu(100) substrate were very weak. Upon cooling back down to 100~$^\circ$C, the sharp ($\sqrt{2}\times2\sqrt{2}$)~R45$^\circ$ LEED pattern returned, but with weaker intensity than was observed after the first anneal at 500~$^\circ$C. This indicates that there was a loss of some of the oxygen atoms from the surface. The heating sequence was then repeated again, but to a temperature of 700~$^\circ$C. At this temperature, the LEED spots corresponding to the oxygen overlayer and the first order Cu(100) spots had completely disappeared (Figure~\ref{CookAndLook}e). Upon cooling back down to room temperature (RT), the LEED pattern of the clean Cu(100) surface was observed (Figure~\ref{CookAndLook}f). Therefore, annealing at 700~$^\circ$C was sufficient to remove the oxygen from the surface region of the Cu(100) crystal. To determine the growth morphology of the graphene on both the clean and oxygen pre-dosed Cu(100) surfaces, \emph{ex-situ} SEM analysis was performed. For each surface, the graphene overlayer was grown by backfilling the UHV chamber with 5~mTorr of ethylene, followed by ramping the temperature to 900~$^\circ$C and annealing for 5~min. After venting, the crystal was transferred to the SEM so that the surface coverage and nucleation density could be determined. The LEED images that were taken before venting the chamber indicate that a well-ordered, two-domain, epitaxial graphene overlayer had formed for growth on the clean Cu(100) surface and that a two-domain overlayer with considerable rotational disorder had formed for growth on the oxygen pre-dosed surface. SEM images of the graphene overlayers are shown in Figure \ref{SEM}. For the graphene grown on the clean Cu(100) surface, the graphene regions appear as dark patches (Figure \ref{SEM}a). 
For the graphene grown on the oxygen pre-dosed surface, the graphene regions have a dark patch at the outer edge of the graphene island (Figure \ref{SEM}b). The dark contrast is attributed to oxidation of the Cu atoms below the graphene. The lateral size of the graphene islands grown on the clean surface is $\sim$0.1~$\mu$m, whereas the lateral size of the graphene islands grown on the oxygen pre-dosed surface is $\sim$1.5~$\mu$m. Since the islands on the oxygen pre-dosed surface are considerably larger than the islands grown on the clean surface, the Cu oxidation is only present under the perimeter of the islands grown on the oxygen pre-dosed surface. For both substrate preparations, graphene islands formed randomly on the substrate surface. For growth on the clean Cu(100) surface, the islands were irregularly shaped. The graphene coverage is estimated to be 45\% and the nucleation density to be $\sim$3~islands per $\mu$m$^2$. The exact nucleation density is difficult to estimate due to the coalescence of some of the islands. For growth on the oxygen pre-dosed surface, the islands are somewhat square in shape. The graphene coverage is 20\%, and the nucleation density is $\sim$0.1~islands per $\mu$m$^2$. Therefore, the much larger size of the graphene islands on the oxygen pre-dosed surface is a result of the much lower nucleation density. The lower absolute coverage probably results from the reaction of some of the surface oxygen with carbon atoms to form volatile CO and CO$_2$. \section{Discussion} The observation that graphene would not grow when exposing the Cu crystal to ethylene after heating it to the growth temperature but could be grown when the substrate temperature was ramped from RT to the growth temperature in an ethylene atmosphere can be explained by the temperature dependence of the ethylene dissociation process. If the dissociation of the ethylene molecules begins in the temperature range where the vapor pressure of the Cu is relatively low, a non-graphitic carbon layer will form on the surface before Cu sublimation becomes appreciable. Once the temperature of the substrate reaches a point where the Cu vapor pressure is high, the carbon on the surface will act as a diffusion barrier for the subliming Cu atoms. Interestingly, we reported in a previous publication that when graphene growth was attempted on the Cu(111) surface by heating the crystal from RT to the growth temperature in ethylene, only trace amounts of graphene could be formed \cite{robinson2012argon}. This indicates that the catalytic activity of the Cu(100) surface is higher than that of the Cu(111) surface for the decomposition of ethylene. Because the atoms at the (100) surface of FCC metals have a lower coordination than the atoms at the close-packed (111) surface, the (100) surface is generally more reactive than the (111) surface for most metals. For graphene growth on the clean Cu(100) substrate, four rotational orientations of the graphene grains with respect to the substrate are observed for growth at both 700~$^\circ$C and 800~$^\circ$C. A schematic of the four different rotational orientations of graphene and the corresponding LEED pattern are shown in Figure \ref{model}. As the growth temperature is increased to 900~$^\circ$C, two domains are observed: each with one of the reciprocal lattice vectors of the hexagonal lattice of the graphene aligned with one of the reciprocal lattice vectors of the square lattice of the Cu(100) surface. 
The observation of four domains at lower temperatures indicates that there are two nucleation orientations that are almost equivalent in energy. As the temperature is increased, the thermal energy will eventually become sufficient for all of the carbon atoms to access the lowest energy orientation (\emph{i.e.}, the orientation where one of the graphene lattice vectors is aligned with one of the surface lattice vectors of the Cu(100)). Two rotational orientations of the graphene grains are also observed for growth on the oxygen pre-dosed Cu(100) surface. However, each orientation has one of its reciprocal lattice vectors weakly aligned $\pm$45$^\circ$ with respect to one of the Cu(100) reciprocal lattice vectors. Because of the six-fold symmetry of the hexagonal lattice, this results in the observed $\pm$15$^\circ$ rotation of the graphene diffraction arcs with respect to the Cu(100) diffraction spots (Figure \ref{OxygenCu100}c). The change in preferred alignment and the increase in rotational disorder of the graphene grains for growth on the oxygen pre-dosed Cu(100) surface indicate that during the initial nucleation of the graphene, oxygen is suppressing the formation of grains that have one of their lattice vectors aligned with one of the lattice vectors of the Cu(100) surface. In other words, the presence of oxygen at the surface is most likely blocking the nucleation of graphene grains on the Cu(100) terrace sites. Since the temperature-dependent LEED analysis provides evidence that most of the oxygen has left the surface region of the Cu(100) substrate by $\sim$700~$^\circ$C, this would imply that the initial nucleation of graphene is occurring below 700~$^\circ$C. This is further corroborated by the observation that trace amounts of graphene formed on both the clean and oxygen pre-dosed surfaces when heated to 700~$^\circ$C in ethylene. For graphene growth on the clean Cu(100) surface, an increase in graphene grains aligned with the Cu(100) substrate was observed as the growth temperature was raised from 800~$^\circ$C to 900~$^\circ$C. This was not observed for growth on the oxygen pre-dosed surface. This is probably because the size of the graphene islands for the oxygen pre-dosed surface is about an order of magnitude larger than for the clean Cu(100) surface. Therefore, the islands may be too large to rotate into one of the two equivalent low energy orientations at elevated temperatures. It may also be that the graphene islands that are nucleating on the oxygen pre-dosed surface are composed of grains with multiple orientations. Therefore, these graphene islands will not have a preferred orientation with respect to the Cu(100) substrate. Since the electron beam of our conventional LEED system has a diameter of $\sim$1~mm, it is probing the crystallographic orientation of several thousand graphene islands simultaneously. To probe the crystallographic orientation of each individual graphene island, a $\mu$-LEED analysis would need to be done using LEEM. The role that oxygen plays in graphene growth is important because most CVD reactors operate in either the low pressure or atmospheric pressure regime where some oxygen contamination will be present. Most groups use a flow of hydrogen in their reactor to reduce the copper oxide that is in the bulk or forms at the surface of the substrate. The reduction process will convert the copper oxide into metallic Cu and water vapor. 
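For reference, and assuming the oxide is present as Cu$_2$O and/or CuO (the text does not specify the oxidation state), the reduction chemistry referred to here is
\[
\mathrm{Cu_2O + H_2 \longrightarrow 2\,Cu + H_2O}, \qquad \mathrm{CuO + H_2 \longrightarrow Cu + H_2O}.
\]
More broadly, the quantitative figures that enter the growth arguments above (the 300~L oxygen exposure, the Cu sublimation rate of $\sim$0.4~ML/s at 900~$^\circ$C, and the equivalence of one graphene layer to $\sim$2.4~ML of carbon referenced to the Cu(100) areal density) can be checked with a short back-of-the-envelope calculation. The sketch below is only illustrative: the Hertz-Knudsen flux expression, the physical constants, and the variable names are our own assumptions and are not taken from the paper's analysis.
\begin{verbatim}
import math

# Oxygen exposure: 1e-6 Torr for 5 min; 1 Langmuir (L) = 1e-6 Torr*s
exposure_L = (1e-6 * 5 * 60) / 1e-6                  # = 300 L

# Hertz-Knudsen flux Phi = P / sqrt(2*pi*m*kB*T) for Cu at 900 C
TORR_TO_PA = 133.322
kB, amu = 1.380649e-23, 1.66054e-27                   # J/K, kg
P = 4e-6 * TORR_TO_PA                                 # Cu vapor pressure quoted in the text
T = 900.0 + 273.15                                    # K
m_Cu = 63.55 * amu                                    # kg

flux = P / math.sqrt(2 * math.pi * m_Cu * kB * T)     # atoms / (m^2 s)
flux_cm2 = flux * 1e-4                                # atoms / (cm^2 s)

sigma_Cu100 = 1.53e15                                 # Cu(100) areal density, cm^-2 (from the text)
sigma_graphene = 3.82e15                              # graphene areal density, cm^-2 (from the text)

print(f"O2 exposure: {exposure_L:.0f} L")                           # 300 L
print(f"Cu sublimation rate: {flux_cm2 / sigma_Cu100:.2f} ML/s")    # ~0.34 ML/s
print(f"1 graphene layer = {sigma_graphene / sigma_Cu100:.1f} ML")  # ~2.5 ML of C on Cu(100)
\end{verbatim}
The $\sim$0.3~ML/s obtained this way agrees with the $\sim$0.4~ML/s quoted earlier to within the uncertainty in the vapor-pressure value, and the final ratio is consistent with the $\sim$2.4~ML carbon saturation coverage reported by Alstrup \emph{et al}. 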
However, as shown above, even sub-monolayer oxygen contamination on the surface of the Cu(100) substrate can drastically affect the subsequent graphene growth. For instance, our results show an increase in rotational disorder, a change in the preferred orientation of the graphene grains, and a reduction in the nucleation rate of the graphene islands for growth on a substrate with a half monolayer of oxygen atoms chemisorbed to the surface. These results could help explain why different groups have measured large differences in the transport properties for graphene films grown on Cu foil substrates even though the reported growth conditions are similar. Since a minimum of two rotational orientations are expected for graphene growth on the Cu(100) surface, grain boundaries will always be present once the graphene islands that have nucleated on the surface coalesce into a single monolayer film. For perfect two-domain epitaxial growth on the Cu(100) surface, the grains are rotated 90$^\circ$ with respect to each other. Yazyev and Louie have predicted that the reflection probability of carriers in graphene depends strongly on the misorientation angle between adjacent grains \cite{5388152720101001}. For a 90$^\circ$ misorientation, a transport gap of 1.1 eV is predicted. Therefore, this particular orientation of adjacent grains is expected to result in a high probability for reflection of charge carriers at the grain boundary. Since the graphene grains grown on a Cu(100) surface that has been pre-dosed with chemisorbed oxygen have a much larger distribution of misalignment angles, a wide range of scattering probabilities at grain boundaries will result. Therefore, weak rotational alignment of the graphene grains with respect to the substrate may produce better transport properties than a well-aligned two-domain epitaxial overlayer. The actual transport properties measured for a particular graphene film will depend on both the misorientation angle(s) of the grains and the average size of the graphene grains. \section{Conclusions} Our results show that a well-ordered two-domain epitaxial graphene film can be grown on the clean Cu(100) surface at 900~$^\circ$C. On the other hand, the presence of a chemisorbed oxygen layer on the Cu(100) surface before graphene growth results in graphene overlayers with considerable rotational disorder. The difference in growth morphology of the graphene films grown on the clean and oxygen pre-dosed Cu(100) surfaces is attributed to the interaction of oxygen with the ethylene molecules during the initial nucleation of graphene islands. Because Cu foils are one of the most common substrates for large area graphene growth and they typically recrystallize with a (100) texture during graphene growth, our results for the growth of graphene on both the clean and oxygen pre-dosed Cu(100) surfaces could be used to develop techniques for growing graphene films with better transport properties. \acknowledgement This research project was supported by the National Science Foundation (DMR-1006411). P. T. would like to thank NSF for her financial support, and T. R. M. would like to thank SEMATECH for his financial support. Z. R. R. would like to thank the American Society for Engineering Education for his post-doctoral support. \bibliographystyle{achemso}
\section{Geometric and topological models} A natural geometric context for studying the global structure of ${\rm{GL}}(n,{\Z})$\ is provided by the symmetric space $X$ of positive-definite, real symmetric matrices of determinant 1 (see \cite{Sou04} for a nice introduction to this subject). This is a non-positively curved manifold diffeomorphic to $\mathbb R^d$, where $d={\frac 1 2 n(n+1)}-1$. ${\rm{GL}}(n,{\Z})$\ acts properly by isometries on $X$ with a quotient of finite volume. Each $A\in X$ defines an inner product on $\mathbb R^n$ and hence a Riemannian metric $\nu$ of constant curvature and volume 1 on the $n$-torus $T^n= \mathbb R^n/\mathbb Z^n$. One can recover $A$ from the metric $\nu$ and an ordered basis for $\pi_1T^n$. Thus $X$ is homeomorphic to the space of equivalence classes of {\em marked} Euclidean tori $(T^n,\nu)$ of volume 1, where a {\em marking} is a homotopy class of homeomorphisms $\rho:T^n\to (T^n,\nu)$ and two marked tori are considered equivalent if there is an isometry $i:(T_1^n,\nu_1)\to (T_2^n,\nu_2)$ such that $\rho_2^{-1}\circ i\circ\rho_1$ is homotopic to the identity. The natural action of $\hbox{GL}(n,\mathbb Z)={\rm{Out}}(\mathbb Z^n)$ on $T^n=K(\mathbb Z^n,1)$ twists the markings on tori, and when one traces through the identifications this is the standard action on $X$. If one replaces $T^n$ by $S_g$ and follows exactly this formalism with marked metrics of constant curvature\footnote{if $g\ge 2$ then the curvature will be negative} and fixed volume, then one arrives at the definition of {\em Teichm\"uller space} and the natural action of ${\rm{Mod}}^{\pm }(S_g)$$={\rm{Out}}(\pi_1S_g)$ on it. Teichm\"uller space is again homeomorphic to a Euclidean space, this time $\mathbb R^{6g-6}$. In the case of ${\rm{Out}}(F_n)$\ there is no canonical choice of classifying space $K(F_n,1)$ but rather a finite collection of natural models, namely the finite graphs of genus $n$ with no vertices of valence less than 3. Nevertheless, one can proceed in essentially the same way: one considers metrics of fixed volume (sum of the lengths of edges =1) on the various models for $K(F_n,1)$, each equipped with a marking, and one makes the obvious identifications as the homeomorphism type of a graph changes with a sequence of metrics that shrink an edge to length zero. The space of marked metric structures obtained in this case is Culler and Vogtmann's Outer space \cite{CulVog86}, which is stratified by manifold subspaces corresponding to the different homeomorphism types of graphs that arise. This space is not a manifold, but it is contractible and its local homotopical structure is a natural generalization of that for a manifold (cf.~\cite{Vog90}). One can also learn a great deal about the group ${\rm{GL}}(n,{\Z})$\ by examining its actions on the Borel-Serre compactification of the symmetric space $X$ and on the spherical Tits building, which encodes the asymptotic geometry of $X$. Teichm\"uller space and Outer space both admit useful bordifications that are closely analogous to the Borel-Serre bordification \cite{Har88, Iva87, BesFei00}. And in place of the spherical Tits building for ${\rm{GL}}(n,{\Z})$\ one has the complex of curves \cite{Har81} for ${\rm{Mod}}^{\pm }(S_g)$, which has played an important role in recent advances concerning the large scale geometry of ${\rm{Mod}}^{\pm }(S_g)$. For the moment this complex has no well-established counterpart in the context of ${\rm{Out}}(F_n)$. 
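To make the identification of $X$ with marked flat tori described at the beginning of this section concrete (a standard computation, recorded here only to fix conventions, which vary between sources), a matrix $A\in X$ corresponds to the flat metric on $T^n=\mathbb R^n/\mathbb Z^n$ given by
\[
\nu_A \;=\; \sum_{i,j=1}^{n} A_{ij}\, dx^i\, dx^j, \qquad {\rm vol}(T^n,\nu_A)\;=\;\sqrt{\det A}\;=\;1,
\]
and changing the marking by $g\in{\rm{GL}}(n,\mathbb Z)$ corresponds (with the column-vector convention) to the congruence action $A\mapsto g^{T}Ag$ on $X$. 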
These closely parallel descriptions of geometries for the three families of groups have led mathematicians to try to push the analogies further, both for the geometry and topology of the ``symmetric spaces" and for purely group-theoretic properties that are most naturally proved using the geometry of the symmetric space. For example, the symmetric space for ${\rm{GL}}(n,{\Z})$\ admits a natural equivariant deformation retraction onto an $n(n-1)/2$-dimensional cocompact subspace, the {\it well-rounded retract} \cite{Ash84}. Similarly, both Outer space and the Teichm\"uller space of a punctured or bounded orientable surface retract equivariantly onto cocompact simplicial spines \cite{CulVog86, Har88}. In all these cases, the retracts have dimension equal to the virtual cohomological dimension of the relevant group. For closed surfaces, however, the question remains open: \begin{question}\label{spine} Does the Teichm\"uller space for $S_g$ admit an equivariant deformation retraction onto a cocompact spine whose dimension is equal to $4g-5$, the virtual cohomological dimension of ${\rm{Mod}}^{\pm }(S_g)$? \end{question} Further questions of a similar nature are discussed in (\ref{BC}). The issues involved in using these symmetric space analogs to prove purely group theoretic properties are illustrated in the proof of the Tits alternative, which holds for all three classes of groups. A group $\Gamma$ is said to satisfy the Tits alternative if each of its subgroups either contains a non-abelian free group or else is virtually solvable. The strategy for proving this is similar in each of the three families that we are considering: inspired by Tits's original proof for linear groups (such as ${\rm{GL}}(n,{\Z})$), one attempts to use a ping-pong argument on a suitable boundary at infinity of the symmetric space. This strategy ultimately succeeds but the details vary enormously between the three contexts, and in the case of ${\rm{Out}}(F_n)$\ they are particularly intricate (\cite{BesFeiHan00, BesFeiHan97} versus \cite{BirLubMcC83}). One finds that this is often the case: analogies between the three classes of groups can be carried through to theorems, and the architecture of the expected proof is often a good guide, but at a more detailed level the techniques required vary in essential ways from one class to the next and can be of completely different orders of difficulty. Let us return to problems more directly phrased in terms of the geometry of the symmetric spaces. The symmetric space for ${\rm{GL}}(n,{\Z})$\ has a left-invariant metric of non-positive curvature, the geometry of which is relevant to many areas of mathematics beyond geometric group theory. Teichm\"uller space has two natural metrics, the Teichm\"uller metric and the Weil-Petersson metric, and again the study of each is a rich subject. In contrast, the metric theory of Outer space has not been developed, and in fact there is no obvious candidate for a natural metric. Thus, the following question has been left deliberately vague: \begin{question} Develop a metric theory of Outer space. \end{question} The elements of infinite order in ${\rm{GL}}(n,{\Z})$\ that are diagonalizable over $\mathbb C$ act as loxodromic isometries of $X$. When $n=2$, these elements are the hyperbolic matrices; each fixes two points at infinity in $X=\mathbb H^2$, one a source and one a sink. The analogous type of element in ${\rm{Mod}}^{\pm }(S_g)$\ is a pseudo-Anosov, and in ${\rm{Out}}(F_n)$\ it is an {\em iwip} (irreducible with irreducible powers). 
In both cases, such elements have two fixed points at infinity (i.e. in the natural boundary of the symmetric space analog), and the action of the cyclic subgroup generated by the element exhibits the north-south dynamics familiar from the action of hyperbolic matrices on the closure of the Poincar\'e disc \cite{LevLus03}, \cite{Iva02}. In the case of ${\rm{Mod}}^{\pm }(S_g)$\ this cyclic subgroup leaves invariant a unique geodesic line in Teichm\"uller space, i.e. pseudo-Anosovs are axial like the semi-simple elements of infinite order in ${\rm{GL}}(n,{\Z})$. Work of Handel and Mosher \cite{HanMos05} shows that in the case of {\em iwips} one cannot hope to have an axis in precisely the same metric sense, but leaves open the possibility that there may be a reasonable notion of ``quasi-axis" for such automorphisms. \begin{question} Find a useful description of a quasi-axis for an iwip acting on Outer Space, with limit set the fixed points of the iwip at infinity. \end{question} \section{Actions of ${\rm{Aut}}(F_n)$\ and ${\rm{Out}}(F_n)$\ on other spaces} Some of the questions that we shall present are more naturally stated in terms of ${\rm{Aut}}(F_n)$\ than ${\rm{Out}}(F_n)$, while some are natural for both. To avoid redundancy, we shall state only one form of each question. \subsection{Baum-Connes and Novikov conjectures}\label{BC} Two famous conjectures relating topology, geometry and functional analysis are the Novikov and Baum-Connes conjectures. The Novikov conjecture for closed oriented manifolds with fundamental group $\Gamma$ says that certain {\it higher signatures\ } coming from $H^*(\Gamma;\mathbb Q)$ are homotopy invariants. It is implied by the Baum-Connes conjecture, which says that a certain {\it assembly map\ } between two $K$-theoretic objects associated to $\Gamma$ is an isomorphism. Kasparov \cite{Kasp88} proved the Novikov conjecture for ${\rm{GL}}(n,{\Z})$, and Guentner, Higson and Weinberger proved it for all linear groups \cite{GHW05}. The Baum-Connes conjecture for ${\rm{GL}}(n,{\Z})$\ is open when $n\ge 4$ (cf.~\cite{Laff98}). Recently Storm \cite{Sto05} pointed out that the Novikov conjecture for mapping class groups follows from results that have been announced by Hamenst\"adt \cite{Ham04} and Kato \cite{Kat00}, leaving open the following: \begin{question} Do mapping class groups or ${\rm{Out}}(F_n)$\ satisfy the Baum-Connes conjecture? Does ${\rm{Out}}(F_n)$\ satisfy the Novikov conjecture? \end{question} An approach to proving these conjectures is given by work of Rosenthal \cite{Ros05}, generalizing results of Carlsson and Pedersen \cite{CarPed95}. A contractible space on which a group $\Gamma$ acts properly and for which the fixed point sets of finite subgroups are contractible is called an $\underbar{E}\Gamma$. Rosenthal's theorem says that the Baum-Connes map for $\Gamma $ is split injective if there is a cocompact $\underbar{E}\Gamma=E$ that admits a compactification $X$, such that \begin{enumerate} \item the $\Gamma$-action extends to $X$; \item $X$ is metrizable; \item $X^G$ is contractible for every finite subgroup $G$ of $\Gamma$; \item $E^G$ is dense in $X^G$ for every finite subgroup $G$ of $\Gamma$; \item compact subsets of $E$ become small near $Y = X\smallsetminus E$ under the $\Gamma$-action: for every compact $K \subset E$ and every neighborhood $U\subset X$ of $y\in Y$, there exists a neighborhood $V\subset X$ of $y$ such that $\gamma K \cap V \neq\emptyset$ implies $\gamma K \subset U$. 
\end{enumerate} The existence of such a space $E$ also implies the Novikov conjecture for $\Gamma$. For ${\rm{Out}}(F_n)$\ the spine of Outer space mentioned in the previous section is a reasonable candidate for the required $\underbar {E}\Gamma$, and there is a similarly defined candidate for ${\rm{Aut}}(F_n)$. For mapping class groups of punctured surfaces the complex of arc systems which fill up the surface is a good candidate (note that this can be identified with a subcomplex of Outer space, as in \cite{HatVog96}, section 5). \begin{question} Does there exist a compactification of the spine of Outer space satisfying Rosenthal's conditions? Same question for the complex of arc systems filling a punctured surface. \end{question} In all of the cases mentioned above, the candidate space $E$ has dimension equal to the virtual cohomological dimension of the group. G.~Mislin \cite{Mis04} has constructed a cocompact $\underbar{E}G$ for the mapping class group of a closed surface, but it has much higher dimension, equal to the dimension of the Teichm\"uller space. This leads us to a slight variation on Question \ref{spine}. \begin{question} Can one construct a cocompact $\underbar{E}G$ with dimension equal to the virtual cohomological dimension of the mapping class group of a closed surface? \end{question} \subsection{Properties (T) and FA} A group has Kazhdan's property (T) if every action of the group by affine isometries on a Hilbert space has a fixed point. Kazhdan proved that ${\rm{GL}}(n,{\Z})$\ has property (T) for $n\ge 3$. \begin{question} For $n>3$, does ${\rm{Aut}}(F_n)$\ have property (T)? \end{question} The corresponding question for mapping class groups is also open. If ${\rm{Aut}}(F_n)$\ were to have Property (T), then an argument of Lubotzky and Pak \cite{LubPak01} would provide a conceptual explanation of the apparently unreasonable effectiveness of certain algorithms in computer science, specifically the Product Replacement Algorithm of Leedham-Green {\em{et al}}. If a group has Property (T) then it has Serre's property FA: every action of the group on an $\mathbb R$-tree has a fixed point. When $n\ge 3$, ${\rm{GL}}(n,{\Z})$\ has property FA, as do ${\rm{Aut}}(F_n)$\ and ${\rm{Out}}(F_n)$, and mapping class groups in genus $\ge 3$ (see \cite{CulVog96}). In contrast, McCool \cite{McC89} has shown that ${\rm{Aut}}(F_3)$ has a subgroup of finite index with positive first Betti number, i.e. a subgroup which maps onto $\mathbb Z$. In particular this subgroup acts by translations on the line and therefore does not have property FA or (T). Since property (T) passes to finite-index subgroups, it follows that ${\rm{Aut}}(F_3)$ does not have property (T). \begin{question}\label{betti} For $n>3$, does ${\rm{Aut}}(F_n)$\ have a subgroup of finite index with positive first Betti number? \end{question} Another finite-index subgroup of ${\rm{Aut}}(F_3)$ mapping onto $\mathbb Z$ was constructed by Alex Lubotzky, and was explained to us by Andrew Casson. Regard $F_3$ as the fundamental group of a graph $R$ with one vertex. The single-edge loops provide a basis $\{a,b,c\}$ for $F_3$. Consider the 2-sheeted covering $\hat R\to R$ with fundamental group $\langle a,b,c^2,cac^{-1},cbc^{-1} \rangle$ and let $G\subset{\rm{Aut}}(F_3)$ be the stabilizer of this subgroup. $G$ acts on $H_1(\hat R,\mathbb Q)$ leaving invariant the eigenspaces of the involution that generates the Galois group of the covering. The eigenspace corresponding to the eigenvalue $-1$ is two-dimensional with basis $\{a-cac^{-1},\, b-cbc^{-1}\}$. 
The action of $G$ with respect to this basis gives an epimorphism $G\to{\rm{GL}}(2,\mathbb Z)$. Since ${\rm{GL}}(2,\mathbb Z)$ has a free subgroup of finite index, we obtain a subgroup of finite index in ${\rm{Aut}} (F_3)$ that maps onto a non-abelian free group. One can imitate the essential features of this construction with various other finite-index subgroups of $F_n$, thus producing subgroups of finite index in ${\rm{Aut}}(F_n)$\ that map onto ${\rm{GL}}(m,\mathbb Z)$. In each case one finds that $m\ge n-1$. \begin{question} If there is a homomorphism from a subgroup of finite index in ${\rm{Aut}}(F_n)$\ onto a subgroup of finite index in ${\rm{GL}}(m,\mathbb Z)$, then must $m\ge n-1$? \end{question} Indeed one might ask: \begin{question} If $m<n-1$ and $H\subset$${\rm{Aut}}(F_n)$\ is a subgroup of finite index, then does every homomorphism $H\to{\rm{GL}}(m,\mathbb Z)$ have finite image? \end{question} Similar questions are interesting for the other groups in our families (cf.~section 3). For example, if $m<n-1$ and $H\subset$${\rm{Aut}}(F_n)$\ is a subgroup of finite index, then does every homomorphism $H\to{\rm{Aut}}(F_m)$ have finite image? A positive answer to the following question would answer Question \ref{betti}; a negative answer would show that ${\rm{Aut}}(F_n)$\ does not have property (T). \begin{question} For $n\ge 4$, do subgroups of finite index in ${\rm{Aut}}(F_n)$\ have Property FA? \end{question} A promising approach to this last question breaks down because we do not know the answer to the following question. \begin{question} Fix a basis for $F_n$ and let $A_{n-1}\subset {\rm{Aut}}(F_n)$ be the copy of ${\rm{Aut}}(F_{n-1})$ corresponding to the first $n-1$ basis elements. Let $\phi: {\rm{Aut}}(F_n)\to G$ be a homomorphism of groups. If $\phi(A_{n-1})$ is finite, must the image of $\phi$ be finite? \end{question} Note that the obvious analog of this question for ${\rm{GL}}(n,{\Z})$\ has a positive answer and plays a role in the foundations of algebraic $K$-theory. A different approach to establishing Property (T) was developed by Zuk \cite{Zuk96}. He established a combinatorial criterion on the links of vertices in a simply connected $G$-complex which, if satisfied, implies that $G$ has property (T): one must show that the smallest positive eigenvalue of the discrete Laplacian on links is sufficiently large. In addition to the ${\rm{Aut}}(F_n)$\ analog of the spine of Outer space, there are several other simply-connected complexes on which ${\rm{Aut}}(F_n)$\ acts, and these might be used to test Zuk's criterion. Hand-worked experiments enable one to get arbitrarily close to the critical value in Zuk's criterion as $n\to\infty$, but it is not clear how to interpret this evidence. \subsection{Actions on CAT$(0)$ spaces} An $\mathbb R$-tree may be defined as a complete CAT$(0)$ space of dimension\footnote{topological covering dimension} 1. Thus one might generalize property FA by asking, for each $d\in\mathbb N$, which groups must fix a point whenever they act by isometries on a complete CAT$(0)$ space of dimension $\le d$. \begin{question} What is the least integer $\delta$ such that ${\rm{Out}}(F_n)$\ acts without a global fixed point on a complete CAT$(0)$ space of dimension $\delta$? And what is the least dimension for the mapping class group ${\rm{Mod}}^{\pm }(S_g)$? 
\end{question} The action of ${\rm{Out}}(F_n)$\ on the first homology of $F_n$ defines a map from ${\rm{Out}}(F_n)$ to ${\rm{GL}}(n,\mathbb Z)$ and hence an action of ${\rm{Out}}(F_n)$\ on the symmetric space for ${\rm{GL}}(n,\mathbb R)$, which is a complete CAT$(0)$ space of dimension $\frac1 2{n(n+1)}-1$. This action does not have a global fixed point and hence we obtain a quadratic upper bound $n(n+1)/2$ on $\delta$. On the other hand, since ${\rm{Out}}(F_n)$\ has property FA, $\delta\ge 2$. In fact, motivated by work of Farb on ${\rm{GL}}(n,\mathbb Z)$, Bridson \cite{Brid05} has shown that using a Helly-type theorem and the structure of finite subgroups in ${\rm{Out}}(F_n)$, one can obtain a lower bound on $\delta$ that grows as a linear function of $n$. Note that a lower bound of $3n-3$ on $\delta$ would imply that Outer Space did not support a complete ${\rm{Out}}(F_n)$-equivariant metric of non-positive curvature. If $X$ is a CAT$(0)$ polyhedral complex with only finitely many isometry types of cells (e.g. a finite dimensional cube complex), then each isometry of $X$ is either elliptic (fixes a point) or hyperbolic (has an axis of translation) \cite{Bri99}. If $n\ge 4$ then a variation on an argument of Gersten \cite{Ger94} shows that in any action of ${\rm{Out}}(F_n)$\ on $X$, no Nielsen generator can act as a hyperbolic isometry. \begin{question} If $n\ge 4$, then can ${\rm{Out}}(F_n)$\ act without a global fixed point on a finite-dimensional {\rm{CAT$(0)$}} cube complex? \end{question} \subsection{Linearity} Formanek and Procesi \cite{ForPro92} proved that ${\rm{Aut}}(F_n)$\ is not linear for $n\geq 3$ by showing that ${\rm{Aut}}(F_3)$ contains a ``poison subgroup", i.e. a subgroup which has no faithful linear representation. Since ${\rm{Aut}}(F_n)$ embeds in ${\rm{Out}}(F_{n+1})$, this settles the question of linearity for ${\rm{Out}}(F_n)$\ as well, except when $n=3$. \begin{question} Does $\rm{Out}(F_3)$ have a faithful representation into $\rm{GL}(m,\mathbb C)$ for some $m\in\mathbb N$? \end{question} Note that braid groups are linear \cite{Big01} but it is unknown if mapping class groups of closed surfaces are. Brendle and Hamidi-Tehrani \cite{BreHam01} showed that the approach of Formanek and Procesi cannot be adapted directly to the mapping class groups. More precisely, they prove that the type of ``poison subgroup" described above does not arise in mapping class groups. The fact that the above question remains open is an indication that ${\rm{Out}}(F_3)$ can behave differently from ${\rm{Out}}(F_n)$\ for $n$ large; the existence of finite index subgroups mapping onto $\mathbb Z$ was another instance of this, and we shall see another in our discussion of automatic structures and isoperimetric inequalities. \section{Maps to and from ${\rm{Out}}(F_n)$} A particularly intriguing aspect of the analogy between ${\rm{GL}}(n,{\Z})$\ and the two other classes of groups is the extent to which the celebrated rigidity phenomena for lattices in higher rank semisimple groups transfer to mapping class groups and ${\rm{Out}}(F_n)$. Many of the questions in this section concern aspects of this rigidity; questions 9 to 11 should also be viewed in this light. Bridson and Vogtmann \cite{BriVog03:2} showed that any homomorphism from ${\rm{Aut}}(F_n)$\ to a group $G$ has finite image if $G$ does not contain the symmetric group $\Sigma_{n+1}$; in particular, any homomorphism ${\rm{Aut}}(F_n)\to {\rm{Aut}}(F_{n-1})$ has image of order at most 2. 
\begin{question} If $n\ge 4$ and $g\ge 1$, does every homomorphism from ${\rm{Aut}}(F_n)$\ to ${\rm{Mod}}^{\pm }(S_g)$\ have finite image? \end{question} By \cite{BriVog03:2}, one cannot obtain homomorphisms with infinite image unless ${\rm{Mod}}^{\pm }(S_g)$\ contains the symmetric group $\Sigma_{n+1}$. For large enough genus, one can realize any symmetric group; but the order of a finite group of symmetries of $S_g$ is at most $84(g-1)$, so here one needs $84(g-1)\geq (n+1)!$. There are no {\em injective} maps from ${\rm{Aut}}(F_n)$ to mapping class groups. This follows from the result of Brendle and Hamidi-Tehrani that we quoted earlier. For certain $g$ one can construct homomorphisms ${\rm{Aut}}(F_3)\to$${\rm{Mod}}^{\pm }(S_g)$\ with infinite image, but we do not know the minimal such $g$. \begin{question} Let $\Gamma$ be an irreducible lattice in a semisimple Lie group of $\mathbb R$-rank at least 2. Does every homomorphism from $\Gamma$ to ${\rm{Out}}(F_n)$\ have finite image? \end{question} This is known for non-uniform lattices (see \cite{BriFar01}; it follows easily from the Kazhdan-Margulis finiteness theorem and the fact that solvable subgroups of ${\rm{Out}}(F_n)$\ are virtually abelian \cite{BesFeiHan04}). Farb and Masur provided a positive answer to the analogous question for maps to mapping class groups \cite{FarMas98}. The proof of their theorem was based on results of Kaimanovich and Masur \cite{KaiMas96} concerning random walks on Teichm\"uller space. (See \cite{Iva02} and, for an alternative approach, \cite{BesFuj02}.) \begin{question} Is there a theory of random walks on Outer space similar to that of Kaimanovich and Masur for Teichm\"uller space? \end{question} Perhaps the most promising approach to Question 17 is via bounded cohomology, following the template of Bestvina and Fujiwara's work on subgroups of the mapping class group \cite{BesFuj02}. \begin{question} If a subgroup $G\subset$${\rm{Out}}(F_n)$\ is not virtually abelian, then is $H^2_b(G;\mathbb R)$ infinite dimensional? \end{question} If $m\ge n$ then there are obvious embeddings ${\rm{GL}}(n,\mathbb Z)\to {\rm{GL}}(m,\mathbb Z)$ and ${\rm{Aut}}(F_n)\to{\rm{Aut}}(F_m)$, but there are no obvious embeddings ${\rm{Out}}(F_n)$\ $\to{\rm{Out}}(F_m)$. Bogopolski and Puga \cite{BogPug04} have shown that, for $m = 1 + (n-1)kn$, where $k$ is an arbitrary natural number coprime to $n-1$, there is in fact an embedding, by restricting automorphisms of $F_n$ to a suitable characteristic subgroup that is free of rank $m$. \begin{question} For which values of $m$ does ${\rm{Out}}(F_n)$\ embed in ${\rm{Out}}(F_m)$? What is the minimal such $m$, and does such an embedding exist for all sufficiently large $m$? \end{question} Hatcher and Vogtmann \cite{HatVog04} showed that when $n$ is sufficiently large with respect to $i$, the homology group $H_i({\rm{Out}}(F_n),\mathbb Z)$ is independent of $n$. \begin{question} Is there a map ${\rm{Out}}(F_n)$\ $ \to{\rm{Out}}(F_m)$ that induces an isomorphism on homology in the stable range? \end{question} A number of the questions in this section and (2.2) ask whether certain quotients of ${\rm{Out}}(F_n)$\ or ${\rm{Aut}}(F_n)$\ are necessarily finite. The following quotients arise naturally in this setting: define $Q(n,m)$ to be the quotient of ${\rm{Aut}}(F_n)$\ by the normal closure of $\lambda^m$, where $\lambda$ is the Nielsen move defined on a basis $\{a_1,\dots,a_n\}$ by $a_1\mapsto a_2a_1$. (All such Nielsen moves are conjugate in ${\rm{Aut}}(F_n)$, so the choice of basis does not alter the quotient.) 
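To fix conventions (a routine computation, for $n=3$ and with matrices acting on column vectors): under the abelianization map ${\rm{Aut}}(F_3)\to{\rm{GL}}(3,\mathbb Z)$ the Nielsen move $\lambda$ is sent to the elementary matrix
\[
\lambda:\ a_1\mapsto a_2a_1,\ a_2\mapsto a_2,\ a_3\mapsto a_3
\qquad\longmapsto\qquad
\left(\begin{array}{ccc} 1 & 0 & 0\\ 1 & 1 & 0\\ 0 & 0 & 1 \end{array}\right),
\]
whose $m$-th power is the elementary matrix with the single off-diagonal entry $m$. 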
The image of a Nielsen move in ${\rm{GL}}(n,{\Z})$\ is an elementary matrix and the quotient of ${\rm{GL}}(n,{\Z})$\ by the normal subgroup generated by the $m$-th powers of the elementary matrices is the finite group ${\rm{GL}}(n,\mathbb Z/m)$. But Bridson and Vogtmann \cite{BriVog03:2} showed that if $m$ is sufficiently large then $Q(n,m)$ is infinite because it has a quotient that contains a copy of the free Burnside group $B(n-1,m)$. Some further information can be gained by replacing $B(n-1,m)$ with the quotients of $F_n$ considered in subsection 39.3 of A.Yu.~Ol'shanskii's book \cite{Ols91}. But we know very little about the groups $Q(n,m)$. For example: \begin{question} For which values of $n$ and $m$ is $Q(n,m)$ infinite? Is $Q(3,5)$ infinite? \end{question} \begin{question} Can $Q(n,m)$ have infinitely many finite quotients? Is it residually finite? \end{question} \section{Individual elements and mapping tori}\label{growth} Individual elements $\alpha\in{\rm{GL}}(n,\mathbb Z)$ can be realized as diffeomorphisms $\hat\alpha$ of the $n$-torus, while individual elements $\psi\in$ ${\rm{Mod}}^{\pm }(S_g)$\ can be realized as diffeomorphisms $\hat\psi$ of the surface $S_g$. Thus one can study $\alpha$ via the geometry of the torus bundle over $\mathbb S^1$ with holonomy $\hat\alpha$ and one can study $\psi$ via the geometry of the 3-manifold that fibres over $\mathbb S^1$ with holonomy $\hat\psi$. (In each case the manifold depends only on the conjugacy class of the element.) The situation for ${\rm{Aut}}(F_n)$\ and ${\rm{Out}}(F_n)$\ is more complicated: the natural choices of classifying space $Y=K(F_n,1)$ are finite graphs of genus $n$, and no element of infinite order $\phi\in$${\rm{Out}}(F_n)$\ is induced by the action on $\pi_1(Y)$ of a homeomorphism of $Y$. Thus the best that one can hope for in this situation is to identify a graph $Y_\phi$ that admits a homotopy equivalence inducing $\phi$ and that has additional structure well-adapted to $\phi$. One would then form the mapping torus of this homotopy equivalence to get a good classifying space for the algebraic mapping torus $F_n\rtimes_\phi\mathbb Z$. The {\em train track technology} of Bestvina, Feighn and Handel \cite{BesHan92, BesFeiHan00, BesFeiHan97} is a major piece of work that derives suitable graphs $Y_\phi$ with additional structure encoding key properties of $\phi$. This results in a decomposition theory for elements of ${\rm{Out}}(F_n)$\ that is closely analogous to (but more complicated than) the Nielsen-Thurston theory for surface automorphisms. Many of the results mentioned in this section are premised on a detailed knowledge of this technology and one expects that a resolution of the questions will be too. There are several natural ways to define the {\em growth} of an automorphism $\phi$ of a group $G$ with finite generating set $A$; in the case of free, free-abelian, and surface groups these are all asymptotically equivalent. The most easily defined growth function is $\gamma_\phi(k)$ where $\gamma_\phi(k):=\max\{d(1,\phi^k(a))\mid a\in A\}$. If $G=\mathbb Z^n$ then $\gamma_\phi(k)\simeq k^d$ for some integer $d\le n-1$, or else $\gamma_\phi(k)$ grows exponentially. If $G$ is a surface group, the Nielsen-Thurston theory shows that only bounded, linear and exponential growth can occur. If $G=F_n$ and $\phi\in{\rm{Aut}}(F_n)$ then, as in the abelian case, $\gamma_\phi(k)\simeq k^d$ for some integer $d\le n-1$ or else $\gamma_\phi(k)$ grows exponentially. 
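As a toy illustration of these growth functions (a minimal sketch in Python; the two example automorphisms are standard choices of our own and nothing here is drawn from the train-track machinery cited above), one can iterate an automorphism of $F_2$ given as a substitution on a basis and record the maximal reduced word length of the images of the generators:
\begin{verbatim}
def inverse(w):
    """Inverse of a word: reverse it and swap the case of each letter."""
    return w[::-1].swapcase()

def reduce_word(w):
    """Freely reduce a word over {a, A, b, B, ...}; an upper-case letter
    denotes the inverse of the corresponding lower-case generator."""
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def apply_aut(phi, w):
    """Apply the substitution phi (given on lower-case generators) to a word."""
    return reduce_word("".join(
        phi[c] if c.islower() else inverse(phi[c.lower()]) for c in w))

def growth(phi, gens, k):
    """gamma_phi(k): max over the generators a of the reduced length of phi^k(a)."""
    words = list(gens)
    for _ in range(k):
        words = [apply_aut(phi, w) for w in words]
    return max(len(w) for w in words)

phi_exp = {"a": "ab", "b": "a"}   # exponentially growing automorphism of F_2
phi_lin = {"a": "ab", "b": "b"}   # linearly (polynomially) growing automorphism

print([growth(phi_exp, "ab", k) for k in range(1, 9)])  # 2, 3, 5, 8, 13, ... (Fibonacci-like)
print([growth(phi_lin, "ab", k) for k in range(1, 9)])  # 2, 3, 4, 5, 6, ...  (linear in k)
\end{verbatim}
The first automorphism exhibits exponential growth of $\gamma_\phi(k)$, while the second grows linearly, in line with the dichotomy described above. 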
\begin{question} Can one detect the growth of a surface or free-group automorphism by its action on the homology of a characteristic subgroup of finite index? \end{question} Notice that one has to pass to a subgroup of finite index in order to have any hope because automorphisms of exponential growth can act trivially on homology. A.~Piggott \cite{Pig04} has answered the above question for free-group automorphisms of polynomial growth, and linear-growth automorphisms of surfaces are easily dealt with, but the exponential case remains open in both settings. Finer questions concerning growth are addressed in the on-going work of Handel and Mosher \cite{HanMos05}. They explore, for example, the implications of the following contrast in behaviour between surface automorphisms and free-group automorphisms: in the surface case the exponential growth rate of a pseudo-Anosov automorphism is the same as that of its inverse, but this is not the case for iwip free-group automorphisms. For mapping tori of automorphisms of free abelian groups $G=\mathbb Z^n\rtimes_\phi\mathbb Z$, the following conditions are equivalent (see \cite{BriGer96}): $G$ is automatic; $G$ is a CAT$(0)$ group\footnote{this means that $G$ acts properly and cocompactly by isometries on a CAT$(0)$ space}; $G$ satisfies a quadratic isoperimetric inequality. In the case of mapping tori of surface automorphisms, all mapping tori satisfy the first and last of these conditions and one understands exactly which $S_g\rtimes\mathbb Z$ are CAT$(0)$ groups. Brady, Bridson and Reeves \cite{BraBriRee05} show that there exist mapping tori of free-group automorphisms $F\rtimes\mathbb Z$ that are not automatic, and Gersten showed that some are not CAT$(0)$ groups \cite{Ger94}. On the other hand, many such groups do have these properties, and we have already noted that they all satisfy a quadratic isoperimetric inequality. \begin{question} Classify those $\phi\in{\rm{Aut}}(F_n)$ for which $F_n\rtimes_\phi\mathbb Z$ is automatic and those for which it is {\rm{CAT}}$(0)$. \end{question} Of central importance in trying to understand mapping tori is: \begin{question} Is there an algorithm to decide isomorphism among groups of the form $F\rtimes\mathbb Z$? \end{question} In the purest form of this question one is given the groups as finite presentations, so one has to address issues of how to find the decomposition $F\rtimes\mathbb Z$ and one has to combat the fact that this decomposition may not be unique. But the heart of any solution should be an answer to: \begin{question} Is the conjugacy problem solvable in ${\rm{Out}}(F_n)$? \end{question} Martin Lustig posted a detailed outline of a solution to this problem on his web page some years ago \cite{Lus99}, but neither this proof nor any other has been accepted for publication. This problem is of central importance to the field and a clear, compelling solution would be of great interest. The conjugacy problem for mapping class groups was shown to be solvable by Hemion \cite{Hem79}, and an effective algorithm for determining conjugacy, at least for pseudo-Anosov mapping classes, was given by Mosher \cite{Mos84}. The isomorphism problem for groups of the form $S_g\rtimes\mathbb Z$ can be viewed as a particular case of the solution to the isomorphism problem for fundamental groups of geometrizable 3-manifolds \cite{Sel95}. The solvability of the conjugacy problem for {\rm{GL}}$(n,\mathbb Z)$ and of the isomorphism problem among groups of the form $\mathbb Z^n\rtimes\mathbb Z$ is classical.
\section{Cohomology} In each of the series of groups ${\{\Gamma_n\}}$ we are considering, the $i$th homology of $\Gamma_n$ has been shown to be independent of $n$ for $n$ sufficiently large. For ${\rm{GL}}(n,{\Z})$\ this is due to Charney \cite{Cha79}, for mapping class groups to Harer \cite{Har90}, and for ${\rm{Aut}}(F_n)$\ and ${\rm{Out}}(F_n)$\ to Hatcher and Vogtmann \cite{HatVog98:3, HatVog04}. With trivial rational coefficients, the stable cohomology of ${\rm{GL}}(n,{\Z})$\ was computed in the 1970's by Borel \cite{Bor74}, and the stable rational cohomology of the mapping class group computed by Madsen and Weiss in 2002 \cite{MadWei04}. The question for ${\rm{Out}}(F_n)$\ remains open: \begin{question} What is the stable rational cohomology of ${\rm{Out}}(F_n)$? \end{question} No non-trivial stable rational cohomology classes have been found, and the standard conjecture is that the stable rational cohomology is in fact trivial. The exact stable range for trivial rational coefficients is known for ${\rm{GL}}(n,{\Z})$\ and for mapping class groups of punctured surfaces. For ${\rm{Out}}(F_n)$\ the best known result is that the $i$th homology is independent of $n$ for $n>5i/4$, but the exact range is unknown: \begin{question} Where precisely does the rational homology of ${\rm{Out}}(F_n)$\ stabilize? And for ${\rm{Aut}}(F_n)$? \end{question} Since the stable rational cohomology of mapping class groups and ${\rm{GL}}(n,{\Z})$\ is non-trivial, there is a natural impulse to use these maps to try to detect cohomology in ${\rm{Out}}(F_n)$. However, a result of Igusa's \cite{Igu02} shows that this does not work for ${\rm{GL}}(n,{\Z})$: the map from ${\rm{Out}}(F_n)$\ to ${\rm{GL}}(n,{\Z})$\ is trivial on homology in the stable range. Wahl \cite{Wah04} has studied the mapping class group case and shown that the map is an infinite loop space map. But the following question remains open. \begin{question} Do the natural maps from the mapping class groups of non-closed surfaces to ${\rm{Out}}(F_n)$\ induce the zero map on stable rational homology? \end{question} In fact, there are only two known non-trivial classes in the rational homology of ${\rm{Out}}(F_n)$\ \cite{HatVog98:2, ConVog04}, both below the stable range. However, Morita \cite{Mor99} has defined an infinite series of cycles, by using work of Kontsevich which identifies the cohomology of ${\rm{Out}}(F_n)$\ with the homology of a certain infinite-dimensional Lie algebra. The first of these cycles coincides with the only previously known cohomology class, in $H_4( {\rm{Out}}(F_4);\mathbb Q)$, and Conant and Vogtmann \cite{ConVog04} showed that the second also gives a non-trivial class, in $H_8({\rm{Out}}(F_6);\mathbb Q)$. Both Morita and Conant-Vogtmann also defined more general cycles, parametrized by odd-valent graphs. \begin{question} Are Morita's original cycles non-trivial in homology? Are the generalizations due to Morita and to Conant and Vogtmann non-trivial in homology? \end{question} We note that Morita has identified several conjectural relationships between his cycles and various other interesting objects, including the image of the Johnson homomorphism, the group of homology cobordism classes of homology cylinders, and the motivic Lie algebra associated to the algebraic mapping class group (see Morita's article in this volume). 
It is interesting to note that $H_8({\rm{GL}}(6,\mathbb Z);\mathbb Q)\cong \mathbb Q$ \cite{ElbGanSou02}; this leads naturally to the question \begin{question} Is the image of the second Morita class in $H_8({\rm{GL}}(6,\mathbb Z);\mathbb Q)$ non-trivial? \end{question} Finally, it is worth noting that some of Conant and Vogtmann's generalizations of the Morita cycles lie in the stable range. \section{Generators and Relations} The groups we are considering are all finitely generated. In each case, the most natural set of generators consists of a single orientation-reversing generator of order two, together with a collection of simple infinite-order special automorphisms. For ${\rm{Out}}(F_n)$, these special automorphisms are the Nielsen automorphisms, which multiply one generator of $F_n$ by another and leave the rest of the generators fixed; for ${\rm{GL}}(n,{\Z})$\ these are the elementary matrices; and for mapping class groups they are Dehn twists around a small set of non-separating simple closed curves. These generating sets have a number of important features in common. First, implicit in the description of each is a choice of generating set for the group $B$ on which $\Gamma$ is acting. In the case of ${\rm{Mod}}^{\pm }(S_g)$\ this ``basis" can be taken to consist of $2g+1$ simple closed curves representing the standard generators $a_1,b_1,a_2,b_2, \ldots,a_g,b_g,$ of $\pi_1(S_g)$ together with $z=a_2^{-1}b_{3}a_{3}b_{3}^{-1}$. In the case of ${\rm{Out}}(F_n)$\ and ${\rm{GL}}(n,{\Z})$, the generating set is a basis for $F_n$ and $\mathbb Z^n$ respectively. Note that in the cases $\Gamma=$${\rm{Out}}(F_n)$\ or ${\rm{GL}}(n,{\Z})$, the universal property of the underlying free objects $B=F_n$ or $\mathbb Z^n$ ensures that $\Gamma$ acts transitively on the set of preferred generating sets (bases). In the case $B=\pi_1S_g$, the corresponding result is that any two collections of simple closed curves with the same pattern of intersection numbers and complementary regions are related by a homeomorphism of the surface, hence (at the level of $\pi_1$) by the action of $\Gamma$. If we identify $\mathbb Z^n$ with the abelianization of $F_n$ and choose bases accordingly, then the action of ${\rm{Out}}(F_n)$\ on the abelianization induces a homomorphism $\hbox{${\rm{Out}}(F_n)$}\to\hbox{${\rm{GL}}(n,{\Z})$}$ that sends each Nielsen move to the corresponding elementary matrix (and hence is surjective). Correspondingly, the action of ${\rm{Mod}}^{\pm }(S_g)$\ on the abelianization of $\pi_1S_g$ yields a homomorphism onto the symplectic group $Sp(2g,\mathbb Z)$ sending the generators of ${\rm{Mod}}^{\pm }(S_g)$\ given by Dehn twists around the $a_i$ and $b_i$ to transvections. Another common feature of these generating sets is that they all have linear growth (see section \ref{growth}). Smaller (but less transparent) generating sets exist in each case. Indeed B.H.~Neumann \cite{Neu32} proved that ${\rm{Aut}}(F_n)$\ (hence its quotients ${\rm{Out}}(F_n)$\ and ${\rm{GL}}(n,{\Z})$) is generated by just 2 elements when $n\ge 4$. Wajnryb \cite{Waj96} proved that this is also true of mapping class groups. In each case one can also find generating sets consisting of finite order elements, and in fact of involutions. Zucca showed that ${\rm{Aut}}(F_n)$\ can be generated by 3 involutions, two of which commute \cite{Zuc97}, and Kassabov, building on work of Farb and Brendle, showed that mapping class groups of large enough genus can be generated by 4 involutions \cite{Kas03}.
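The homomorphism to ${\rm{GL}}(n,{\Z})$\ described above is concrete enough to compute by hand, but a short script can serve as a sanity check. The sketch below (an illustration of ours, using the same string encoding of words as the earlier sketch) abelianizes an automorphism of $F_n$ given on a basis and returns the induced integer matrix; for the Nielsen move $a_1\mapsto a_2a_1$ it produces the corresponding elementary matrix.
\begin{verbatim}
import numpy as np

def abelianized_matrix(phi, gens):
    """Matrix of the induced map on Z^n; column j records the image of the
    j-th basis vector (capital letters count as -1)."""
    index = {g: i for i, g in enumerate(gens)}
    M = np.zeros((len(gens), len(gens)), dtype=int)
    for j, g in enumerate(gens):
        for x in phi[g]:
            M[index[x.lower()], j] += 1 if x.islower() else -1
    return M

nielsen = {'a': 'ba', 'b': 'b', 'c': 'c'}     # a -> ba on F_3
M = abelianized_matrix(nielsen, 'abc')
print(M)                                      # identity plus one off-diagonal 1
print(round(np.linalg.det(M)))                # determinant 1, so M lies in GL(3,Z)
\end{verbatim}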
Our groups are also all finitely presented. For ${\rm{GL}}(n,{\Z})$, or more precisely for ${\rm{SL}}(n,{\Z})$, there are the classical Steinberg relations, which involve commutators of the elementary matrices. For the special automorphisms ${\rm{SAut}}(F_n)$, Gersten gave a presentation in terms of corresponding commutator relations of the Nielsen generators \cite{Ger84}. Finite presentations of the mapping class groups are more complicated. The first was given by Hatcher and Thurston, and worked out explicitly by Wajnryb \cite{Waj83}. \begin{question} Is there a set of simple Steinberg-type relations for the mapping class group? \end{question} There is also a presentation of ${\rm{Aut}}(F_n)$\ coming from the action of ${\rm{Aut}}(F_n)$\ on the subcomplex of Auter space spanned by graphs of degree at most 2. This is simply-connected by \cite{HatVog98:3}, so Brown's method \cite{Bro84} can be used to write down a presentation. The vertex groups are stabilizers of marked graphs, and the edge groups are the stabilizers of pairs consisting of a marked graph and a forest in the graph. The quotient of the subcomplex modulo ${\rm{Aut}}(F_n)$\ can be computed explicitly, and one finds that ${\rm{Aut}}(F_n)$\ is generated by the (finite) stabilizers of seven specific marked graphs. In addition, all of the relations except two come from the natural inclusions of edge stabilizers into vertex stabilizers, i.e. either including the stabilizer of a pair (graph, forest) into the stabilizer of the graph, or into the stabilizer of the quotient of the graph modulo the forest. Thus the whole group is almost (but not quite) a pushout of these finite subgroups. In the terminology of Haefliger (see \cite{BriHae99}, II.12), the complex of groups is not simple. \begin{question} Can ${\rm{Out}}(F_n)$\ and ${\rm{Mod}}^{\pm }(S_g)$\ be obtained as a pushout of a finite subsystem of their finite subgroups, i.e. is either the fundamental group of a developable simple complex of finite groups on a 1-connected base? \end{question} \subsection{IA automorphisms} We conclude with a well-known problem about the kernel ${\rm{IA}}(n)$ of the map from ${\rm{Out}}(F_n)$ to ${\rm{GL}}(n,Z)$. The notation ``IA" stands for {\it identity on the abelianization}; these are (outer) automorphisms of $F_n$ which are the identity on the abelianization $Z^n$ of $F_n$. Magnus showed that this kernel is finitely generated, and for $n=3$ Krstic and McCool showed that it is not finitely presentable \cite{KrsMcC97}. It is also known that in some dimension the homology is not finitely generated \cite{SmiVog87}. But that is the extent of our knowledge of basic finiteness properties. \begin{question} Establish finiteness properties of the kernel ${\rm{IA}}(n)$ of the map from ${\rm{Out}}(F_n)$ to ${\rm{GL}}(n,\mathbb Z)$. In particular, determine whether ${\rm{IA}}(n)$ is finitely presentable for $n>3$. \end{question} The subgroup ${\rm{IA}}(n)$ is analogous to the Torelli subgroup of the mapping class group of a surface, which also remains quite mysterious in spite of having been extensively studied. \section{Automaticity and Isoperimetric Inequalities} In the foundational text on automatic groups \cite{EpEtAl92}, Epstein gives a detailed account of Thurston's proof that if $n\ge 3$ then ${\rm{GL}}(n,{\Z})$\ is not automatic. 
The argument uses the geometry of the symmetric space to obtain an exponential lower bound on the $(n-1)$-dimensional isoperimetric function of ${\rm{GL}}(n,{\Z})$; in particular the Dehn function of ${\rm{GL}}(3,\mathbb Z)$ is shown to be exponential. Bridson and Vogtmann \cite{BriVog95}, building on this last result, proved that the Dehn functions of ${\rm{Aut}}(F_3)$ and ${\rm{Out}}(F_3)$ are exponential. They also proved that for all $n\ge 3$, neither ${\rm{Aut}}(F_n)$\ nor ${\rm{Out}}(F_n)$\ is biautomatic. In contrast, Mosher proved that mapping class groups are automatic \cite{Mosh95} and Hamenst\"adt \cite{Ham04} proved that they are biautomatic; in particular these groups have quadratic Dehn functions and satisfy a polynomial isoperimetric inequality in every dimension. Hatcher and Vogtmann \cite{HatVog96} obtain an exponential upper bound on the isoperimetric function of ${\rm{Aut}}(F_n)$\ and ${\rm{Out}}(F_n)$\ in every dimension. An argument sketched by Thurston and expanded upon by Gromov \cite{Gromov93}, \cite{Gromov00} (cf.~\cite{Drutu04}) indicates that the Dehn function of ${\rm{GL}}(n,{\Z})$\ is quadratic when $n\ge 4$. More generally, the isoperimetric functions of ${\rm{GL}}(n,{\Z})$\ should parallel those of Euclidean space in dimensions $m\le n/2$. \begin{question} What are the Dehn functions of ${\rm{Aut}}(F_n)$\ and ${\rm{Out}}(F_n)$\ for $n>3$? \end{question} \begin{question} What are the higher-dimensional isoperimetric functions of ${\rm{GL}}(n,{\Z})$, ${\rm{Aut}}(F_n)$ and ${\rm{Out}}(F_n)$? \end{question} \begin{question} Is ${\rm{Aut}}(F_n)$\ automatic for $n>3$? \end{question} \bibliographystyle{siam}
\section{Introduction} When ultrarelativistic charged particles pass through amorphous matter, they undergo multiple, essentially uncorrelated scattering on atoms, typically through small angles. If the target is not too thick, the longitudinal momentum of a high-energy particle may be regarded as conserved. Then, the transport equation depends only on the particle deflection angles, and is exactly solvable by means of Fourier transformation \cite{Bothe-Wentzel}. However, conditions of multiple Coulomb scattering on screened atomic nuclei may require one to separately treat hard and soft contributions to the distribution function, as was first pointed out by Williams \cite{Williams}. That separation was cast in the form of a large-thickness expansion by Moli\`{e}re \cite{MoliereFirst,Moliere}, subsequently reviewed by Bethe \cite{Bethe}, and is nowadays recognized as a standard procedure (see \cite{Scott,Mott-Massey,Uch-Zol}). In modern practice, a further simplified approach is applied at times, retaining only the Gaussian component, with the root mean square angle inferred from Gaussian fits \cite{Highland-Lynch-Dahl} or derived analytically from the Moli\`{e}re theory \cite{Bond-Shul}. But in high-statistics experiments, non-Gaussian ``wings" are noticeable even for rather thick targets. Although Moli\`{e}re's expansion provides a formal background for the theory, from the physical point of view it is not completely satisfactory. It is known that, in principle, it does not converge (see, e.g., \cite{Bielajew}), and, moreover, it is composed of oscillatory functions of the deflection angles, which do not admit an independent probabilistic interpretation. At the same time, there have been recent phenomenological indications that beyond the central Gaussian region, the distribution function does not immediately switch to the asymptotic power law corresponding to single scattering, but exhibits some transient behavior over a sizable range of angles \cite{Taratin,Arleo}. If such a transition region does exist, the best option for computing the distribution function within it would be to resum all the non-Gaussian (at least, power-law) contributions through all orders, as is done in other physical problems (see, e.g., \cite{Bondarenco-paper}). That typically leads to integral representations for resummed quantities. But in the present case, one can employ to this end a method \cite{Bethe,Scott} in which the original Fourier integral representation for the distribution function is extended into the complex plane, and two principally different (vertical and horizontal) parts of the integration path are distinguished. So far, that method has never been developed into a procedure superior to Moli\`{e}re's expansion; nonetheless, with some improvements, it can be raised to that status and provide a different view of the behavior of the angular distribution beyond the central region. The key notion here is that the integrals over the mentioned parts of the path appear to be positive, and therefore may be interpreted as hard and soft scattering components, coexisting at any scattering angle. Comparison of those components will allow us to determine the width of the transition region between the Gaussian and Rutherford regions in the aggregate distribution, and to assess the significance of resumming all the plural hard-scattering contributions.
\section{Preliminary considerations} \subsection{Fourier-Bessel solution of the transport equation} The probability distribution of fast particles scattered through small angles $\theta$ in an amorphous medium, $f(\theta,l)=\frac{dw}{d^2\theta}$, is governed by the transport equation \begin{equation}\label{transport-eq} \frac{\partial f}{\partial l}= n\int d\sigma(\chi)\left[f(\bm\theta-\bm\chi,l)-f(\theta,l)\right], \end{equation} where $d\sigma(\chi)=d^2\chi\frac{d\sigma}{d^2\chi}$ is the differential cross-section of particle scattering on one atom through angle $\chi$, $n$ is the density of atoms in the medium, and $l$ the traversed target thickness. Equation~(\ref{transport-eq}) conserves the normalization: \begin{equation}\label{norm=1} \int d^2\theta f(\theta,l)=1. \end{equation} Solution of Eq.~(\ref{transport-eq}) satisfying the initial condition $f(\theta,0)=\delta(\bm\theta)$ is obtained by means of Fourier-Bessel transformation: \begin{subequations}\label{2DFourier} \begin{eqnarray} f(\theta,l)&=&\int \frac{d^2\rho}{(2\pi)^2} e^{i\bm{\rho}\cdot\bm{\theta}-nl\int d\sigma(\chi) (1-e^{-i\bm{\rho}\cdot\bm{\chi}})}\label{}\\ &\equiv&\frac{1}{2\pi}\int_0^{\infty} d\rho\rho J_0(\rho\theta) e^{-nl\int d\sigma(\chi) \left[1-J_0(\rho\chi)\right]}.\label{3b} \end{eqnarray} \end{subequations} In some applications, one may rather be concerned with the projected angle distribution, which is given by a 1-dimensional Fourier transformation: \begin{eqnarray}\label{f-proj-def} f(\theta_x,l)&=&\int_{-\infty}^{\infty} d\theta_yf(\theta,l)\nonumber\\ &=&\int_{-\infty}^{\infty} \frac{d\xi}{2\pi} e^{i\xi\theta_x-nl\int d\sigma(\chi) \left[1-J_0(\xi\chi)\right]}. \end{eqnarray} We shall denote distribution functions (\ref{2DFourier}) and (\ref{f-proj-def}) by the same letter $f$, distinguishing them just by the notation of their angle arguments. \subsection{Thick targets: Moli\`{e}re's theory} At significant target thickness, the random walk in the plane of deflection angles (which may be viewed as transverse vectors) must reduce to diffusion. In generic integral representations (\ref{2DFourier}) and (\ref{f-proj-def}), that comes about as follows: at large $nl$, the exponential in their integrands is rapidly decreasing; therefore the contributing $\rho$ or $\xi$ are small, permitting one to expand the exponent to leading order in their values. However, the naive expansion $1-J_0(\rho\chi)\simeq{\rho^2\chi^2}/{4}$ in the integrand gives a logarithmically diverging variance $\int d\sigma(\chi)\chi^2$, given that the physical differential cross-section of fast charged particle scattering on one atom through large angles obeys the Rutherford asymptotics \begin{equation}\label{Ruth-dsigma} \frac{d\sigma}{d\chi}\underset{\chi/\chi'_a\to\infty}\simeq \frac{8\pi Z^2\alpha^2}{p^2\chi^3}, \end{equation} with $p$ being the particle momentum, $Z$ the nucleus charge, and $\alpha$ the fine structure constant. A more accurate calculation \cite{Bethe} shows that the small-$\rho$ asymptotics of the exponent in (\ref{2DFourier}), (\ref{f-proj-def}) involves a factor logarithmically depending on $\rho$: \begin{equation}\label{J0-square-log} nl\int d\sigma(\chi) \left[1-J_0(\rho\chi)\right]\underset{\rho\chi'_a\to0}\simeq\frac{\chi_c^2\rho^2}2\ln\frac{2}{\chi'_a\rho}, \end{equation} which spoils the Gaussianity of the Fourier-Bessel integral.
Here $\chi_c^2(l)=4\pi nl Z^2\alpha^2/p^2$, and the screening angle $\chi'_a\sim1/R_{a}p$, with $R_a$ being the atomic radius, characterizes the scale of angles at which the singularity in (\ref{Ruth-dsigma}) is tamed.\footnote{In terms of the exact scattering differential cross-section $\frac{d\sigma}{d\chi}=\frac{8\pi Z^2\alpha^2}{p^2\chi^3}q(\chi)$, with $\chi^{-4}q(\chi)\underset{\chi/\chi'_a\to0}\to\text{const}>0$ and $q(\chi)\underset{\chi/\chi'_a\to\infty}\to1$, the screening angle expresses as \begin{equation}\label{dsigma-J0->ln} \ln\chi'_a=\int dq(\chi)\ln\chi+\gamma_{\text{E}}-1, \end{equation} where $\gamma_{\text{E}}$ is Euler's constant. This definition \cite{Bethe} differs from the more conventional $\chi_a$ \cite{Moliere} by the term $\gamma_{\text{E}}-1/2=0.077$, but numerically, the difference is small. With definition (\ref{dsigma-J0->ln}), the right-hand side (rhs) of Eq.~(\ref{J0-square-log}) takes its shortest form, facilitating the following calculations. Note, too, that while Eq.~(\ref{J0-square-log}) was written for pure elastic scattering, inelastic contributions can also be incorporated there \cite{Fano}, just by redefining $\chi'_a$ and $\chi_c$.} Thus, the diffusion here is anomalous, but only marginally, in the sense that the anomaly is logarithmic instead of a power law. That implies that the distribution function does \emph{not} approach a L\'{e}vy distribution \cite{Uch-Zol}, although it is not strictly Gaussian either. The ratio ${\chi_c^2}/{\chi'^2_a}$ essentially measures the target thickness in units of the radiation length $X_0$: \[ \frac{\chi_c^2}{\chi'^2_a}=\frac{\pi}{\alpha \gamma^2\chi'^2_a\ln\frac{\text{const}}{2\gamma\chi'_a}}\frac{l}{X_0}, \] with $\text{const}\sim1$, and $\gamma\chi'_a$ expressible in terms of $X_0$, as well [see, e.g., \cite{Bond-Shul}, Eq.~(42)]. For instance, the ratio ${\chi_c}/{\chi'_a}=10^2$ corresponds to solid targets a few millimeters thick. In what follows, we will measure the target thickness in a $Z$-independent fashion, merely in units of ${\chi_c^2}/{\chi'^2_a}$. Approximation (\ref{J0-square-log}) appreciably simplifies the structure of integrals (\ref{2DFourier}), (\ref{f-proj-def}), but their evaluation still involves non-trivial aspects. Intuitively, it is clear that the diffusion, at least at typical angles, must be close to Gaussian, although with possible logarithmic deviations. To tackle those, Moli\`{e}re \cite{Moliere} assumed that the typical deflection angle is $\chi_c\sqrt{B}$, with $B$ such that the difference of logarithmically large parameters $B-\ln B-\ln\frac{\chi_c^2}{\chi'^2_a}$ is a constant of the order of unity (conventionally set to be zero). Therewith, $B(\chi_c^2/\chi'^2_a)$ is a Lambert (or product logarithm) function, asymptotically equal to $B\underset{\chi_c\gg\chi'_a}\simeq\ln\left(\frac{\chi_c^2}{\chi'^2_a}\ln\frac{\chi_c^2}{\chi'^2_a}\right)$, and the rhs of (\ref{J0-square-log}) rewrites as \[ \frac{\chi_c^2\rho^2}2\ln\frac{2}{\chi'_a\rho}=\frac{u^2}{4}-\frac{u^2}{4B}\ln\frac{u^2}{4}, \] where $u=\chi_c\sqrt{B}\rho$.
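Since $B$ enters all subsequent estimates, it is worth noting how easily it is evaluated in practice; the short sketch below (ours, purely illustrative) solves $B-\ln B=\ln(\chi_c^2/\chi'^2_a)$ by fixed-point iteration and compares the result with the asymptotic formula quoted above.
\begin{verbatim}
import numpy as np

def moliere_B(x, tol=1e-12):
    """Solve B - ln B = ln x for x = chi_c^2/chi_a'^2 (fixed-point iteration)."""
    target = np.log(x)
    B = target + np.log(target)             # asymptotic seed, B ~ ln(x ln x)
    for _ in range(100):
        B_new = target + np.log(B)
        if abs(B_new - B) < tol:
            break
        B = B_new
    return B

for ratio in (10.0, 1.0e2, 1.0e3):          # chi_c/chi_a'
    x = ratio**2
    print(ratio, moliere_B(x), np.log(x * np.log(x)))
\end{verbatim}
For $\chi_c/\chi'_a=10^2$ this gives $B\approx11.7$, while the asymptotic formula gives about $11.4$; the difference is immaterial for the estimates below.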
As long as the logarithmic dependence on the rescaled integration variable $u$ in the exponent appears to be inversely proportional to the large parameter $B$, that suggests expanding this part of the exponential into power series and formally integrating termwise: \begin{equation}\label{Moliere-expansion} f(\theta,l)=\frac1{2\pi\chi_c^2 B}\sum_{k=0}^{\infty}\frac1{B^k}f^{(k)}\left(\frac{\theta}{\chi_c\sqrt{B}}\right), \end{equation} with \begin{equation}\label{fk-Moliere-def} f^{(k)}(\Theta)=\frac1{k!}\int_0^{\infty}duuJ_0(\Theta u)e^{-u^2/4}\left(\frac{u^2}4\ln\frac{u^2}4\right)^k. \end{equation} Note that the expansion parameter $B^{-1}$ here is only logarithmically small, but for $B\geq 4.5$, i.e., $\chi_c\gg 10\chi'_a$, expansion (\ref{Moliere-expansion}) is reported to work reasonably well \cite{Moliere,Bethe}. An important consequence of (\ref{fk-Moliere-def}) is that for all $k\geq1$, \begin{equation}\label{int-fk-Moliere=0} \int d^2\theta f^{(k)}(\theta)\equiv0. \end{equation} Hence, functions $f^{(k)}$ at $k\geq1$ are not everywhere positive, and do not admit probabilistic interpretation. Analyzing integrals (\ref{fk-Moliere-def}), one finds that at large $\Theta$, components of (\ref{Moliere-expansion}) behave as $f^{(0)}(\Theta)=2e^{-\Theta^2}$, which corresponds to a perfect Gaussian, and $f^{(1)}(\Theta)\sim \Theta^{-4}$, which reflects the Rutherford asymptotics $f(\theta)\simeq \frac{nl}{2\pi\theta}\frac{d\sigma}{d\theta}$. For $k\geq2$, $f^{(k)}(\Theta)\sim \Theta^{-2-2k}$ times logarithmic factors (which will be determined below). Further analysis reveals that, in fact, functions $f^{(k)}$ for $k\geq1$ make several oscillations,\footnote{That owes to the fact that as $k$ increases, factor $e^{-u^2/4}\left(\frac{u^2}4\ln\frac{u^2}4\right)^k$ in the integrand of (\ref{fk-Moliere-def}) becomes sharply peaking at $u\sim2\sqrt{k}$. Therewith, at fixed $\Theta$ and increasing $k$, integral (\ref{fk-Moliere-def}) tends to \begin{equation}\label{ln^kk} f^{(k)}(\Theta)\sim2\ln^k k J_0(2\sqrt{k}\Theta). \end{equation} For $\Theta=0$, the latter scaling law was quoted in \cite{Bielajew}. } which are much stronger than the asymptotic power-law ``tails". At moderate $\chi_c/\chi'_a$, they may cause a spurious warp in between the Gaussian and Rutherford regions. Yet, despite the factor $k!$ in the denominator in the rhs of (\ref{fk-Moliere-def}), functions $f^{(k)}$ \emph{grow} with $k$ faster than exponentially [see Eq.~(\ref{ln^kk})]. Therefore, in principle, series (\ref{Moliere-expansion}) diverges, though it may still serve as an asymptotic expansion in the limit $nl\to\infty$. \subsection{Thin targets: Power and logarithmic corrections to the Rutherford asymptotics}\label{subsec:Glauber-expansion} Even though at typical angles the number of scatterings in any macroscopic target is very large, at significant deflection angles the distribution function may be determined by just a few hard scatterings. It can thus be useful to expand the distribution function into perturbation series \begin{equation}\label{series-f-thetax} f(\theta_x,l)=\sum_{k=1}^{\infty}(nl)^kf_k(\theta_x), \end{equation} and study the behavior of its components $f_k(\theta_x)$ at large $\theta_x$. 
The lowest-order terms of (\ref{series-f-thetax}) are \begin{eqnarray}\label{f1x-Ruth-asympt} f_1(\theta_x)&=&\frac{1}{2\pi}\int_{-\infty}^{\infty} d\xi\cos(\xi\theta_x) \int d\sigma(\chi) \left[J_0(\xi\chi)-1\right]\nonumber\\ &\equiv&\frac{1}{2\pi}\int_{-\infty}^{\infty} d\xi e^{i\xi\theta_x} \int_{-\infty}^{\infty} d\chi_x\frac{d\sigma}{d\chi_x} \left(e^{-i\xi\chi_x}-1\right)\nonumber\\ &=&\frac{d\sigma}{d\theta_x}-\sigma\delta(\theta_x)\underset{\theta_x/\chi'_a\to\infty}\sim \frac{\chi_c^2}{2nl\theta_x^3}, \end{eqnarray} and \begin{eqnarray} f_2(\theta_x)&=&\frac{1}{4\pi}\int_{-\infty}^{\infty} d\xi e^{i\xi\theta_x} \left[\int_{-\infty}^{\infty} d\chi_x\frac{d\sigma}{d\chi_x} \left(e^{-i\xi\chi_x}-1\right)\right]^2\nonumber\\ &=&\frac12\int_{-\infty}^{\infty} d\chi_x\frac{d\sigma}{d\chi_x}\frac{d\sigma}{d(\theta_x-\chi_x)}-\sigma\frac{d\sigma}{d\theta_x}+\frac{\sigma^2}2\delta(\theta_x).\nonumber\\ \label{proj-f2} \end{eqnarray} The dominant contribution to the integral term in (\ref{proj-f2}) comes from neighborhoods of two points: $\chi_x=0$, where $\frac{d\sigma}{d(\theta_x-\chi_x)}$ may be approximated by a constant, and $\chi_x=\theta_x$, where $\frac{d\sigma}{d\chi_x}\simeq \frac{d\sigma}{d\theta_x}$. The corresponding asymptotics of the integral thus equals $\frac12\int_{-\infty}^{\infty} d\chi_x\frac{d\sigma}{d\chi_x}\frac{d\sigma}{d(\theta_x-\chi_x)}\simeq \sigma\frac{d\sigma}{d\theta_x}$, but it is exactly canceled by the second term of (\ref{proj-f2}). Therefore, to determine the asymptotics of $f_2$, one has to expand the slowly varying factors in the integrand to higher orders: \begin{eqnarray} f_2(\theta_x)\!&\underset{\theta_x/\chi'_a\to\infty}\simeq&\!\int d\chi_x\frac{d\sigma}{d\chi_x}\!\left(\!-\chi_x\frac{d}{d\theta_x}\frac{d\sigma}{d\theta_x}+\frac{\chi_x^2}2\frac{d^2}{d\theta_x^2}\frac{d\sigma}{d\theta_x}\right)\nonumber\\ &=&\frac{1}2\frac{d^2}{d\theta_x^2}\frac{d\sigma}{d\theta_x}\int d\chi_x\chi_x^2\frac{d\sigma}{d\chi_x}.\label{} \end{eqnarray} Here $\frac{d^2}{d\theta_x^2}\frac{d\sigma}{d\theta_x}\simeq\frac{6\chi_c^2}{nl\theta_x^5}$, and $\int_{\sim-\theta_x}^{\sim \theta_x} d\chi_x\chi_x^2\frac{d\sigma}{d\chi_x}\simeq \frac{\chi_c^2}{nl}\ln\frac{\theta_x}{\chi'_a}$, wherewith \begin{equation}\label{} (nl)^2f_2(\theta_x)\underset{\theta_x/\chi'_a\to\infty}\simeq\frac{3\chi_c^4}{\theta_x^5}\ln\frac{\theta_x}{\chi'_a}. \end{equation} Hence, if one considers a ``form factor" $\theta_x^3 f(\theta_x)$, which vanishes at $\theta_x=0$, and tends to a constant as $\theta_x/\chi'_a\to\infty$, it appears to be a nonmonotonic function of $\theta_x$, and overshoots the latter constant at some intermediate $\theta_x$. That salient feature of the multiple Coulomb scattering angular distribution was confirmed experimentally (see \cite{Hanson,Bethe}). Similarly, it can be proven that higher-order terms in (\ref{series-f-thetax}) are all positive and asymptotically scale as \begin{equation}\label{fk-as-x} (nl)^k f_k(\theta_x)\underset{\theta_x/\chi'_a\to\infty}\simeq\frac{k(2k-1)!!\chi_c^{2k}}{2\theta_x^{1+2k}}\ln^{k-1}\frac{\theta_x}{\chi'_a}.
\end{equation} For the polar angle distribution \begin{equation}\label{series-f-theta} f(\theta,l)=\sum_{k=1}^{\infty}(nl)^k f_k(\theta), \end{equation} the asymptotics of the leading terms of the expansion is \begin{equation}\label{f1-Ruth} f_1(\theta)=\frac{d\sigma}{d^2\theta}-\sigma\delta(\bm\theta)\underset{\theta/\chi'_a\to\infty}\simeq\frac{\chi_c^2}{\pi nl\theta^4}, \end{equation} \begin{equation}\label{f2-asympt} f_2(\theta)\underset{\theta/\chi'_a\to\infty}\simeq\frac14\triangle_{\theta}\frac{d\sigma}{d^2\theta}\int_0^{\sim\theta/2} \!d\chi\chi^2\frac{d\sigma}{d\chi}\simeq\frac{8\chi_c^4}{\pi(nl)^2\theta^6}\!\ln\frac{\theta}{\chi'_a}, \end{equation} and generally \begin{equation}\label{fk-as} (nl)^k f_k(\theta)\underset{\theta/\chi'_a\to\infty}\simeq \frac{kk!2^{k-1}\chi_c^{2k}}{\pi\theta^{2+2k}}\ln^{k-1}\frac{\theta}{\chi'_a}. \end{equation} Note that the coefficients of the logarithms in Eqs.~(\ref{fk-as-x}), (\ref{fk-as}) turn out to be sizable already at $k=1$, and grow with $k$ factorially. Thus, at moderately large $\theta$, it would be advantageous to sum such contributions through all orders. Resummations of that kind are usually carried out via Borel transformation \cite{Borel-summ}. But in our case, construction of a new integral representation is unnecessary, as long as the original integral representation (\ref{2DFourier}) or (\ref{f-proj-def}) is already well suited for that purpose. Below we will derive corresponding resumming expressions directly from integrals (\ref{2DFourier}) and (\ref{f-proj-def}). \section{Analysis in the complex plane} Since we are interested in the case when the number of collisions is high, the exponent in integrals (\ref{2DFourier}), (\ref{f-proj-def}) will generally assume large values. The modern approach to deriving asymptotics of such integrals consists in extending the integral into the complex plane. With an appropriate choice of the integration path, the integrand can be made non-oscillatory, which substantially simplifies the derivation of the asymptotics of the integral. In application to multiple Coulomb scattering distributions, such a deformation procedure was first suggested by Bethe (see Appendix A in \cite{Bethe}, and also \cite{Scott}), but served mainly for the purpose of deriving the coefficients of large-angle power asymptotic terms \cite{Scott}, or combining just a few such terms into an expression, which still worked only in a limited domain of $\theta$ (at large $\theta$) \cite{Bethe}. Here we are going to handle the entire sequence of asymptotic terms simultaneously, but in order to make it applicable \emph{everywhere}, the definition of the integration path must be improved. The path extension problem appears to be technically simpler for the projected angle distribution, which was not considered in \cite{Bethe} at all, and which we consider here first. \subsection{Projected angle distribution}\label{subsec:proj-angle} The diffusion approximation to Eq.~(\ref{f-proj-def}) reads (see footnote 1) \begin{equation}\label{1DFourier} f(\theta_x,l)\underset{\chi_c/\chi'_a\to\infty}\simeq\frac1{\pi\chi_c}\mathfrak{Re}\int_0^{\sim\chi_c/\chi'_a} d\kappa e^{i\frac{\theta_x}{\chi_c}\kappa+\frac{\kappa^2}2 \ln\frac{\chi'_a\kappa}{2\chi_c}}, \end{equation} where we set $\kappa=\xi\chi_c$.
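Before deforming the contour, it is convenient to have a direct numerical benchmark for Eq.~(\ref{1DFourier}). The following sketch (ours, purely illustrative; the units and the specific ratio $\chi_c/\chi'_a=10^2$ are arbitrary choices) evaluates the oscillatory integral by brute-force quadrature, which is adequate at moderate $\theta_x$ and will serve as a reference for the hard/soft decomposition constructed below.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def f_projected(theta_x, chi_c, chi_a):
    """Brute-force quadrature of Eq. (1DFourier); all angles in the same units."""
    def integrand(kappa):
        if kappa <= 0.0:
            return 1.0                      # limiting value of the integrand
        damping = np.exp(0.5 * kappa**2 * np.log(chi_a * kappa / (2.0 * chi_c)))
        return np.cos(theta_x / chi_c * kappa) * damping
    val, _ = quad(integrand, 0.0, chi_c / chi_a, limit=400)
    return val / (np.pi * chi_c)

chi_c, chi_a = 1.0, 1.0e-2                  # chi_c/chi_a' = 10^2, chi_c as the angle unit
for t in (0.0, 1.0, 3.0, 10.0):             # theta_x in units of chi_c
    print(t, f_projected(t * chi_c, chi_c, chi_a))
\end{verbatim}
Over the integration range $\kappa\lesssim\chi_c/\chi'_a$ the real part of the exponent is negative, so the damping factor keeps the quadrature well behaved despite the oscillating cosine.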
When extending this integral to the plane of complex $\kappa$, it is found that its integrand has a single saddle point obeying the equation \begin{equation}\label{saddle-point-eq-x} \frac{\partial}{\partial\kappa}\left(i\frac{\theta_x}{\chi_c}\kappa+\frac{\kappa^2}2 \ln\frac{\chi'_a\kappa}{2\chi_c}\right)\Bigg|_{\kappa=\kappa_0}=i\frac{\theta_x}{\chi_c}+\kappa_0\left(\ln\frac{\kappa_0\chi'_a}{2\chi_c}+\frac12\right)=0. \end{equation} Since Eq.~(\ref{saddle-point-eq-x}) is transcendental, only an approximate solution can be expressed explicitly, which, however, will suffice at the present stage. We can choose an approximation to the solution of (\ref{saddle-point-eq-x}) which is strictly imaginary: \begin{equation}\label{lambda0-x-log-approx} \kappa_0=i\nu_0, \qquad \nu_0\approx\frac{\theta_x}{\chi_c\ln\left(\frac{2\chi_c^2}{\chi'_a\theta_x}\ln\frac{2\chi_c^2}{\chi'_a\theta_x}\right)}, \end{equation} with the proviso that this formula is good only for $\chi_c/\chi'_a\gg10$ (and $\theta_x<2\chi_c^2/\chi'_a$, which is usually fulfilled in practice). To illustrate the accuracy of approximation (\ref{lambda0-x-log-approx}), in Fig.~\ref{fig:lambda0-proj-angle} it is plotted along with the exact solution of equation \begin{equation}\label{approx-saddle-point-eq-proj} \frac{\theta_x}{\chi_c}+\nu_0\left(\ln\frac{\chi'_a\nu_0}{2\chi_c}+\frac12\right)=0, \end{equation} obtained from (\ref{saddle-point-eq-x}) by neglecting $\ln i$. It clearly indicates that approximation (\ref{lambda0-x-log-approx}) begins to fail for $\chi_c/\chi'_a\sim10$. \begin{figure} \includegraphics{lambda0-proj-angle} \caption{\label{fig:lambda0-proj-angle} Solid curves, behavior of the solution of the corner point equation for the projected angle distribution [Eq.~(\ref{approx-saddle-point-eq-proj})]. Dashed curves, approximation (\ref{lambda0-x-log-approx}). Red, for $\chi_c/\chi'_a=10$; green, for $\chi_c/\chi'_a=10^2$; blue, for $\chi_c/\chi'_a=10^3$.} \end{figure} The logarithmic factor in the exponent in (\ref{1DFourier}) induces a singularity of the integrand at the origin, coinciding with the lower endpoint of the integration interval. The steepest descent path must then start at the origin, and go toward the saddle point. For simplicity of the resulting integral, though, we direct it strictly along the imaginary axis, rewriting the integration variable as $\kappa=i\nu$. After reaching a point $\kappa_0$ defined by Eq.~(\ref{approx-saddle-point-eq-proj}), the path must turn to the right and proceed along the steepest descent path, but again, for simplicity, we just direct it parallel to the real axis (see Fig.~\ref{fig:path-proj-angle}). Ultimately, the distribution function splits into a sum of two real-variable integrals: \begin{equation}\label{fthetax=fhard+fsoft} f(\theta_x,l)=f_{h}(\theta_x,l)+f_{s}(\theta_x,l), \end{equation} where \begin{equation}\label{fx-hard-def} f_{h}(\theta_x,l)=\frac1{\pi\chi_c}\int_0^{\nu_0(\theta_x)} d\nu e^{-\frac{\theta_x}{\chi_c}\nu+\frac{\nu^2}2 \ln\frac{2\chi_c}{\chi'_a\nu}}\sin\frac{\pi\nu^2}{4}, \end{equation} \begin{equation}\label{fx-soft-def} f_{s}(\theta_x,l)=\frac1{\pi\chi_c}\mathfrak{Re}\int_{i\nu_0(\theta_x)}^{\sim\chi_c/\chi'_a} d\kappa e^{i\frac{\theta_x}{\chi_c}\kappa+\frac{\kappa^2}2 \ln\frac{\chi'_a\kappa}{2\chi_c}}.
\end{equation} \begin{figure} \includegraphics{contour-proj-angle} \caption{\label{fig:path-proj-angle} Gradient plot of function $\mathfrak{Re}\left(i\frac{\theta_x}{\chi_c}\kappa+\frac{\kappa^2}2 \ln\frac{\chi'_a\kappa}{2\chi_c}\right)$ [the real part of the exponent in Eq.~(\ref{1DFourier})] in the upper half-plane of complex integration variable $\kappa$, for exemplary values of $\chi_c$ and $\theta_x$. The deformed integration path is drawn by the black line, with $\nu_0$ evaluated by Eq.~(\ref{lambda0-x-log-approx}). } \end{figure} Below we will show that in spite of the admitted simplification of the integration path, integrals (\ref{fx-hard-def}), (\ref{fx-soft-def}) can be robustly interpreted as hard and soft scattering components. Our task now is to investigate their properties. \paragraph{Hard component} Component $f_{h}$ proves to be positive everywhere, even for an approximate solution of the saddle-point equation, insofar as typical contributing $\nu$ in Eq.~(\ref{fx-hard-def}) are always $\lesssim1$, entailing $\sin\frac{\pi\nu^2}4>0$. Furthermore, almost everywhere it is tolerable to replace in (\ref{fx-hard-def}) $\sin\frac{\pi\nu^2}4\approx\frac{\pi\nu^2}4$. That is strictly justified in limits of either large or small $\theta_x/\chi_c$: If $\theta_x/\chi_c\ll1$, that becomes possible because the upper integration limit tends to zero, leaving \begin{eqnarray}\label{fhard-x-lowthetax} f_{h}(\theta_x,l)&\underset{\theta_x/\chi_c\ll1}\simeq& \frac1{4\chi_c}\int_0^{\frac{\theta_x}{\chi_c\ln\left(\frac{2\chi_c^2}{\chi'_a\theta_x}\ln\frac{2\chi_c^2}{\chi'_a\theta_x}\right)}}d\nu\nu^2\nonumber\\ &=&\frac{\theta_x^3}{12\chi_c^4\ln^3\left(\frac{2\chi_c^2}{\chi'_a\theta_x}\ln\frac{2\chi_c^2}{\chi'_a\theta_x}\right)}. \end{eqnarray} If $\theta_x/\chi_c\to\infty$, the sine in (\ref{fx-hard-def}) can be linearized by virtue of the rapid decrease of factor $e^{-\frac{\theta_x}{\chi_c}\nu}$ in the integrand. Therewith, expansion of the rest of the exponential into Maclaurin series yields the Rutherford law (\ref{f1x-Ruth-asympt}), along with power corrections to it (beyond the leading logarithmic accuracy): \begin{eqnarray} f_{h}(\theta_x,l)&\underset{\theta_x/\chi_c\to\infty}\simeq&\frac1{4\chi_c}\int_0^{\infty}\! d\nu\nu^2 e^{-\frac{\theta_x}{\chi_c}\nu}\!\left(\!1+\frac{\nu^2}2 \ln\frac{2\chi_c}{\chi'_a\nu}\right)\nonumber\\ &=&\frac{\chi_c^2}{2\theta_x^3}+3\frac{\chi_c^4}{\theta_x^5}\left[\ln\frac{2\theta_x}{\chi'_a}-\psi(5)\right],\label{Ruth-x+corr} \end{eqnarray} with $\psi(z)=\Gamma'(z)/\Gamma(z)$ being the digamma function. Clearly, integral (\ref{fx-hard-def}) resums also all the higher power corrections to the Rutherford asymptotics. \begin{figure} \includegraphics{f-x-hard} \caption{\label{fig:f-x-hard} Hard component of the projected angle distribution function at $\chi_c/\chi'_a=10^2$, built by Eqs.~(\ref{fx-hard-def}), (\ref{approx-saddle-point-eq-proj}) (solid black curve), and by Eqs.~(\ref{fx-hard-def}), (\ref{lambda0-x-log-approx}) (solid red curve). Dashed curve, Rutherford asymptotics (\ref{f1x-Ruth-asympt}). Dot-dashed, Rutherford asymptotics with the first power correction, Eq.~(\ref{Ruth-x+corr}). Dotted, low-$\theta_x$ asymptotics (\ref{fhard-x-lowthetax}).} \end{figure} The fact that the component $f_{h}(\theta_x)$ vanishes in both extremes $\theta_x/\chi_c\to0$ and $\theta_x/\chi_c\to\infty$ implies that it must peak somewhere in between [see Fig.~\ref{fig:f-x-hard}]. 
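The position and height of this maximum can be explored with a few lines of code. The sketch below (ours; it relies on the explicit approximation (\ref{lambda0-x-log-approx}) for the corner point, so it is only meaningful for $\chi_c/\chi'_a\gg10$) evaluates integral (\ref{fx-hard-def}) by quadrature and prints it next to the bare Rutherford term for comparison.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def nu0_projected(theta_x, chi_c, chi_a):
    """Approximate corner point, Eq. (lambda0-x-log-approx)."""
    X = 2.0 * chi_c**2 / (chi_a * theta_x)
    return theta_x / (chi_c * np.log(X * np.log(X)))

def f_hard_x(theta_x, chi_c, chi_a):
    """Hard component of the projected angle distribution, Eq. (fx-hard-def)."""
    def integrand(nu):
        if nu <= 0.0:
            return 0.0
        return (np.exp(-theta_x / chi_c * nu
                       + 0.5 * nu**2 * np.log(2.0 * chi_c / (chi_a * nu)))
                * np.sin(np.pi * nu**2 / 4.0))
    val, _ = quad(integrand, 0.0, nu0_projected(theta_x, chi_c, chi_a), limit=200)
    return val / (np.pi * chi_c)

chi_c, chi_a = 1.0, 1.0e-2
for t in (1.0, 2.0, 3.0, 5.0, 10.0):        # theta_x in units of chi_c
    rutherford = chi_c**2 / (2.0 * (t * chi_c)**3)
    print(t, f_hard_x(t * chi_c, chi_c, chi_a), rutherford)
\end{verbatim}
In accordance with Fig.~\ref{fig:f-x-hard}, the tabulated hard component exceeds the Rutherford asymptote at moderate $\theta_x$ and approaches it from above at large $\theta_x$.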
From the analysis of integral (\ref{fx-hard-def}), one generally concludes that the summit of $f_{h}(\theta_x)$ must be reached when $\nu_0\sim\chi_c/\theta_x$, i.e., $\theta_x\sim\chi_c\sqrt{B(\chi_c^2/\chi'^2_a)}$, which is nothing but Moli\`{e}re's typical angle. More precisely, that corresponds to the rising slope of the peak, while the maximum is located at a somewhat greater $\theta_x$ (see Fig.~\ref{fig:f-x-hard}). The end of the region where resummation effects are strong may be assessed from equating the Rutherford asymptotic term to the \textit{doubled} next-to-leading-order power correction in (\ref{Ruth-x+corr}): $\frac{\chi_c^2}{2\theta_x^3}=2\times3\frac{\chi_c^4}{\theta_x^5}\left[\ln\frac{2\theta_x}{\chi'_a}-\psi(5)\right]$, i.e., $\theta_x=\chi_c\sqrt{B_1}$, where $B_1=6B(24e^{-2\psi(5)}\chi_c^2/\chi'^2_a)$. Due to the sizable numerical coefficients involved therein, the interval \[ \chi_c\sqrt{B}<\theta_x<\chi_c\sqrt{B_1}\qquad \text{(semihard region)} \] appears to be even wider than the soft central region $0<\theta_x<\chi_c\sqrt{B}$. Besides that, it is noteworthy that $f_{h}(\theta_x)$ does not lie between its asymptotes (in particular, it goes well above the Rutherford asymptote). This (or rather the corresponding feature for $f_{h}(\theta)$ proven in the next subsection) may be responsible for the empirical controversies mentioned in the Introduction. \paragraph{Soft component} \begin{figure} \includegraphics{f-x-soft} \caption{\label{fig:f-x-soft} Soft component of the projected angle distribution function at $\chi_c/\chi'_a=10^2$, built by Eqs.~(\ref{fx-soft-def}) and (\ref{approx-saddle-point-eq-proj}) (solid curve). Dashed curve, the same evaluated for the corner point defined by (\ref{lambda0-x-log-approx}). Dot-dashed, the quasi-Gaussian approximation, Eqs.~(\ref{fsoft-x-Gauss}), (\ref{g0x}), with $C=2.2$. Dotted, Moli\`{e}re's $f^{(0)}$ for the projected angle distribution.} \end{figure} Next, we inspect the soft component, which is defined by integral (\ref{fx-soft-def}). This integral is close to Gaussian form, so its fastest dependence on $\theta_x$ stems from the value of the exponential at the endpoint: \[ e^{-\frac{\theta_x}{\chi_c}\nu_0+\frac{\nu_0^2}2\ln\frac{2\chi_c}{i\chi'_a\nu_0}}\simeq e^{-\frac{\theta_x}{2\chi_c}\nu_0}, \] where we used the saddle point equation (\ref{saddle-point-eq-x}) within the accuracy to which we neglected $\ln i$ in Eq.~(\ref{approx-saddle-point-eq-proj}). To account for the rest of the $\theta_x$-dependence, the simplest way might be to replace in the relation \begin{equation}\label{f=expg} f_{s}(\theta_x,l)=e^{-\frac{\theta_x}{2\chi_c}\nu_0(\theta_x,l)}g(\theta_x,l) \end{equation} the relatively slowly varying factor $g(\theta_x,l)$ by its value at the origin, \begin{equation}\label{g0x} g(0,l)=f(0,l)=\frac1{\pi\chi_c}\int_0^{\sim\chi_c/\chi'_a} d\kappa e^{-\frac{\kappa^2}2 \ln\frac{2\chi_c}{\chi'_a\kappa}}. \end{equation} More precisely, the width of $g$ is $\theta_x\sim\chi_c\ln\frac{2\chi_c^2}{\chi'_a\theta_x}$, whereas that of $f_s$ is $\theta_x\sim\chi_c\sqrt{\ln\frac{2\chi_c^2}{\chi'_a\theta_x}}$, which is narrower, but not by a very large factor. So, in practice it would certainly be worth also taking into account the slope of $g(\theta_x,l)$ at the origin.
That can be incorporated into the structure of the leading exponential in Eq.~(\ref{f=expg}) by approximating \begin{equation}\label{gx-approx} g(\theta_x,l)\to g(0,l)e^{-\frac{\theta_x^2\ln C}{2\chi_c^2\ln^2\frac{2\chi_c^2}{\chi'_a\theta_x}}}, \end{equation} with $C\approx2.2$. Combining (\ref{f=expg}), (\ref{lambda0-x-log-approx}) and (\ref{gx-approx}), we obtain a quasi-Gaussian structure \begin{equation}\label{fsoft-x-Gauss} f_{s}(\theta_x,l)\approx f(0,l)e^{-\frac{\theta_x^2}{2\chi_c^2\ln\left[\frac{2\chi_c^2}{C\chi'_a\theta_x}\ln \frac{2\chi_c^2}{\chi'_a\theta_x}\right]}}. \end{equation} It resembles the zeroth-order approximation $f^{(0)}(\theta/\chi_c\sqrt{B})$ of Moli\`{e}re's expansion (applied to the projected angle distribution), but has a more precise normalization (\ref{g0x}), and yet involves $\theta_x$ under the logarithm in the denominator of the exponent. Due to the latter dependence, (\ref{fsoft-x-Gauss}) is narrower than Moli\`{e}re's $f^{(0)}$ at $\theta_x>\chi_c$, i.e., in fact, at typical angles (see Fig.~\ref{fig:f-x-soft}). A narrowing of that kind was empirically found in \cite{Hanson}. Besides that, the integral of (\ref{fsoft-x-Gauss}) over $\theta_x$, in contrast to the integral of the zeroth component of Moli\`{e}re's expansion, is somewhat less than unity, leaving a part of the probability for $f_{h}$. \begin{figure} \includegraphics{proj-angle} \caption{\label{fig:proj-angle} Relative contributions of the hard [dashed curve, Eqs.~(\ref{fx-hard-def}), (\ref{lambda0-x-log-approx})] and soft [dot-dashed curve, Eq.~(\ref{fsoft-x-Gauss})] components to the aggregate projected angle distribution [solid curve, Eq.~(\ref{1DFourier})], for $\chi_c/\chi'_a=10^2$. The sum of the thus computed hard and soft components is virtually indistinguishable from the solid curve. The dotted curve shows the Rutherford asymptotics (\ref{f1x-Ruth-asympt}).} \end{figure} \paragraph{Aggregate distribution} The circumstance that components (\ref{fx-hard-def}), (\ref{fx-soft-def}) in decomposition (\ref{fthetax=fhard+fsoft}) peak at different $\theta_x$ might potentially lead to the appearance of a secondary bump in the aggregate distribution. To check whether this happens in reality, let us first assess the scale at which $f_{s}(\theta_x)$ and $f_{h}(\theta_x)$ become commensurable. For large $\chi_c/\chi'_a$, that occurs at relatively large $\theta_x$, allowing one, oversimplistically, to employ the Rutherford asymptotics for $f_{h}$, and equate it to the Gaussian approximation for $f_{s}$. Solving the equation in the leading logarithmic approximation yields $\theta_x\sim\chi_c\sqrt{2\ln\frac{2\chi_c}{\chi'_a}}$, which is of the order of the scale $\chi_c\sqrt{B}$ at which $f_{h}(\theta_x)$ reaches its maximum. Therefore, around its maximum, $f_{h}(\theta_x)$ is commensurable with $f_{s}(\theta_x)$, and consequently, the sum (\ref{fthetax=fhard+fsoft}) need not develop a secondary peak or bump. That is what actually happens in practice, and is physically natural, because a diffusion process tends to smear out all the features of the probability distribution. (But for the rescaled distribution $\theta_x^3f(\theta_x)$, as was mentioned in Sec.~\ref{subsec:Glauber-expansion}, such a bump does exist \cite{Hanson,Bethe}.) Figure~\ref{fig:proj-angle} shows the shape of the aggregate distribution, along with contributions to it from different mechanisms, for $\chi_c/\chi'_a=10^2$.
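In numerical form, the quasi-Gaussian piece entering this comparison is trivial to tabulate. The sketch below (ours; the working value $C=2.2$ follows the text, while the units and the cutoff of the $\theta_x$ integration are our choices) evaluates Eq.~(\ref{fsoft-x-Gauss}) with the normalization (\ref{g0x}) and integrates it over $\theta_x$, illustrating that the soft component indeed carries somewhat less than unit probability, the remainder being left for the hard component.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

chi_c, chi_a, C = 1.0, 1.0e-2, 2.2          # chi_c/chi_a' = 10^2

def f_at_zero():
    """Normalization f(0,l) of the soft component, Eq. (g0x)."""
    def integrand(kappa):
        return np.exp(-0.5 * kappa**2 * np.log(2.0 * chi_c / (chi_a * kappa)))
    val, _ = quad(integrand, 1e-12, chi_c / chi_a, limit=400)
    return val / (np.pi * chi_c)

F0 = f_at_zero()

def f_soft_x(theta_x):
    """Quasi-Gaussian soft component, Eq. (fsoft-x-Gauss)."""
    if theta_x == 0.0:
        return F0
    X = 2.0 * chi_c**2 / (chi_a * theta_x)
    return F0 * np.exp(-theta_x**2 / (2.0 * chi_c**2 * np.log(X * np.log(X) / C)))

print([round(f_soft_x(t * chi_c), 5) for t in (0.0, 1.0, 2.0, 4.0, 6.0)])
w_soft, _ = quad(f_soft_x, 0.0, 20.0 * chi_c, limit=400)   # tail beyond is negligible
print("soft-component probability:", 2.0 * w_soft)         # somewhat below unity
\end{verbatim}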
The figure demonstrates that the aggregate distribution (solid curve) considerably exceeds the sum of soft and pure Rutherford components (dot-dashed and dotted curves, correspondingly). To account for this excess, one has to employ the resummed hard component (dashed curve) instead of a single-scattering contribution. So, the issue of resummation of plural hard scattering contributions is quite essential in practice. Effectively, it slows down the transition from a Gaussian to Rutherford regime, so that over a substantial angular interval it may mimic a law intermediate between Gaussian and Rutherford decrease, such as a simple exponential law (cf. \cite{Taratin}), or a power law with an index greater than that for the lowest Born approximation, as is the case, e.g., for hard scattering of hadrons (which are themselves composite objects) \cite{Arleo}. \paragraph{Probabilistic interpretation} Granted the positivity of both functions $f_{s}(\theta_x)$ and $f_{h}(\theta_x)$, in conjunction with the normalization condition $\int^{\infty}_{-\infty} d\theta_x f_{s}+\int^{\infty}_{-\infty} d\theta_x f_{h}=1$, it is tempting further to interpret them independently as partial probability distributions. Specifically, since $f_{h}(\theta_x)$ incorporates all the power-law contributions, it might be regarded as the probability distribution of hard-scattered particles, and $f_{s}(\theta_x)$, since it is nearly Gaussian, should be interpreted as the probability distribution of soft-scattered particles. That, inevitably, involves an element of arbitrariness, as long as there is no sharp physical boundary between soft- and hard-scattered particles. Besides that, there are regions at sufficiently large $\theta_x$, where $f_s(\theta_x)$ as evaluated by Eq.~(\ref{fx-soft-def}) becomes slightly negative [though that is immaterial for practice, because there it is already overtaken by $f_h(\theta_x)$]. For those reasons, it is more appropriate to term the encountered functions \textit{pseudo}-probability distributions. The mentioned arbitrariness then manifests itself as the residual slight freedom in the choice of the location of the integration path corner. \begin{figure} \includegraphics{wx-hard} \caption{\label{fig:wx-hard} Total percentage of hard-scattered particles in the projected angle distribution, calculated by Eqs.~(\ref{whard-int}), (\ref{fx-hard-def}), (\ref{approx-saddle-point-eq-proj}) (black solid curve), and by Eqs.~(\ref{whard-int}), (\ref{fx-hard-def}), (\ref{lambda0-x-log-approx}) (red solid curve). Dashed curve, approximation (\ref{whard-log}). } \end{figure} Accepting the partial (pseudo-)probability interpretation, let us assess the corresponding total probability for a particle to belong to the projected hard component: \begin{equation}\label{whard-int} w_{h\text{-}x}(l)=2\int_0^{\infty}d\theta_x f_{h}(\theta_x,l). 
\end{equation} At large $\chi_c/\chi'_a$, inserting (\ref{fx-hard-def}) into (\ref{whard-int}) and interchanging the order of integrations leads to \begin{subequations} \begin{eqnarray} w_{h\text{-}x}&=&\frac{2}{\pi\chi_c}\int_0^{\infty}d\nu\sin\frac{\pi\nu^2}4e^{\frac{\nu^2}2\ln\frac{2\chi_c}{\chi'_a\nu}}\nonumber\\ &\,&\qquad\quad\times\int_{\nu\chi_c\ln\frac{2\chi_c}{\nu\chi'_a}}^{\infty}d\theta_x e^{-\frac{\theta_x}{\chi_c}\nu}\nonumber\\ &\underset{\chi_c\gg\chi'_a}\simeq&\frac{1}{2}\int_0^{\sim\chi_c/\chi'_a}d\nu\nu e^{-\frac{\nu^2}2\ln\frac{2\chi_c}{\chi'_a\nu}}.\label{whard-integral} \end{eqnarray} The latter single integral can be evaluated by expanding $e^{\frac{\nu^2}2\ln\frac{\nu}2}\simeq1+\frac{\nu^2}2\ln\frac{\nu}2$, integrating termwise, and then reassembling the result into a single fraction within the given accuracy: \begin{equation}\label{whard-log} w_{h\text{-}x} \underset{\chi_c\gg\chi'_a}\simeq\frac1{\ln\left(\frac{\chi_c^2}{\chi'^2_a}\ln\frac{\chi_c^2}{\chi'^2_a}\right)-\psi(2)}. \end{equation} \end{subequations} That means that essentially, $w_{h\text{-}x}\simeq1/B$. Formula (\ref{whard-log}) shows that the fraction of hard-scattered particles \emph{decreases} with the increase of the target thickness, as the inverse of its logarithm. The physical reason for this is that the boundary beyond which particles must be regarded as hard-scattered moves outwards with the increase of the target thickness, due to the expanding Gaussian component. In contrast, identity (\ref{int-fk-Moliere=0}) in the Moli\`{e}re expansion does not grant direct access to the number of particles in the non-Gaussian component. The exact behavior of $w_{h\text{-}x}$ as a function of $\chi_c/\chi'_a$ is plotted in Fig.~\ref{fig:wx-hard} by the solid curve, along with approximation (\ref{whard-log}) plotted by the dashed curve. It appears that (\ref{whard-log}) gives a fair approximation for $w_{h\text{-}x}$ at $\chi_c/\chi'_a\gtrsim10^2$. It may also be mentioned that the excess of total probability $w_{s\text{-}x}+w_{h\text{-}x}-1$, for $w_{s\text{-}x}=2\int_0^{\infty}d\theta_xf_{s}(\theta_x)$ and $f_{s}(\theta_x)$ evaluated by approximation (\ref{fsoft-x-Gauss}), (\ref{g0x}), with $C=2.2$, is positive but small compared with $w_{h\text{-}x}$: \[ w_{s\text{-}x}+w_{h\text{-}x}-1\sim 2\times 10^{-3}. \] That corroborates the self-consistency of our approximations. \subsection{Polar angle distribution} Let us next turn to the somewhat subtler case of the polar angle distribution, which is given by the Bessel integral (\ref{3b}). To appropriately extend the corresponding diffusion approximation \begin{equation}\label{f-theta-small-imp-par} f(\theta,l)\underset{\chi_c/\chi'_a\to\infty}\simeq\frac1{2\pi\chi_c^2}\int_0^{\sim\chi_c/\chi'_a}d\kappa\kappa J_0\left(\frac{\theta}{\chi_c}\kappa\right)e^{\frac{\kappa^2}2\ln\frac{\chi'_a\kappa}{2\chi_c}} \end{equation} to the complex plane of\footnote{In \cite{Bethe}, $\kappa$ was denoted as $y$, but we keep the same notation as for the projected angle distribution.} $\kappa=\chi_c\rho$, one needs to substitute $J_0\left(\frac{\theta}{\chi_c}\kappa\right)=\mathfrak{Re}H_0^{(1)}\left(\frac{\theta}{\chi_c}\kappa\right)$ in the integrand, and exploit the exponential decrease of the Hankel function $H_0^{(1)}(z)$ in the upper half-plane of complex $z$.
It is also preferable in the integrand of (\ref{f-theta-small-imp-par}) \emph{not} to include the factor $\kappa$ (physically arising as a part of the integration element $\kappa d\kappa=d\kappa^2/2$) in the expression for which the saddle point is sought. Therewith, the saddle point equation reads \begin{equation}\label{saddle-point-eq-fullangle} \frac{\partial}{\partial\kappa}\left[\ln H_0^{(1)}\left(\frac{\theta}{\chi_c}\kappa\right)+\frac{\kappa^2}2\ln\frac{\chi'_a\kappa}{2\chi_c} \right]\Bigg|_{\kappa=\kappa_0}=0, \end{equation} and as in the previous subsection, its solution at large $\chi_c/\chi'_a$ must be predominantly imaginary\footnote{That owes to the fact that $H_0^{(1)}(z)$, like $e^{iz}$, is an even function of $\mathfrak{Re}z$. This would not be the case if the saddle point was sought for the integrand including the factor $\kappa$. The emerging integral representations for $f_{h}$ and $f_{s}$ would then be too cumbersome.}. Searching for a purely imaginary approximation, i.e., letting $\kappa_0=i\nu_0$, utilizing the relation $H_0^{(1)}(iz)=\frac{2}{i\pi}K_0(z)$, and neglecting the imaginary term $\ln i$ compared to the large real logarithm, leads to the real equation \begin{equation}\label{approx-saddle-point-eq-fullangle} \frac{\theta}{\chi_c}\frac{K_1\left(\frac{\theta}{\chi_c}\nu_0\right)}{K_0\left(\frac{\theta}{\chi_c}\nu_0\right)}+\nu_0\left(\ln\frac{\chi'_a\nu_0}{2\chi_c}+\frac12\right)=0. \end{equation} \begin{figure} \includegraphics{lambda0-fullangle} \caption{\label{fig:lambda0-fullangle} Behavior of the solution of the corner point equation for the polar angle distribution [Eq.~(\ref{approx-saddle-point-eq-fullangle})], for $\chi_c/\chi'_a=10^2$ (solid curve). Dashed curve, approximation (\ref{lambda0-high-asympt}). Dotted curve, approximation (\ref{lambda0-low-asympt}). Dot-dashed curve, Bethe's choice for the corner point, Eq.~(\ref{Bethe-corner-point}).} \end{figure} Unfortunately, Eq.~(\ref{approx-saddle-point-eq-fullangle}) is now difficult to solve by analytic means even approximately, since it requires an approximation for $K_0(z)$ applicable at any positive $z$. Simple approximations exist only for large $z$, where $\frac{K_1\left(z\right)}{K_0\left(z\right)}\underset{z\to\infty}\to1$, implying \begin{equation}\label{lambda0-high-asympt} \nu_0\underset{\theta/\chi_c\to\infty}\sim \frac{\theta/\chi_c}{\ln \left(\frac{2\chi_c^2}{\chi'_a\theta}\ln\frac{2\chi_c^2}{\chi'_a\theta}\right)-1/2} \end{equation} [similar to Eq.~(\ref{lambda0-x-log-approx}), and different from Bethe's choice\footnote{In \cite{Bethe}, the saddle point was actually sought only for part of the integrand, $K_0\left(\frac{\theta}{\chi_c}\nu\right)e^{\frac{\nu^2}2\ln\frac{2\theta}{\chi'_a k}}\approx e^{-\frac{\theta}{\chi_c}\nu+\frac{\nu^2}2\ln\frac{2\theta}{\chi'_a k}}$ (at real $\nu$, corresponding to purely imaginary $\kappa$). Eq.~(\ref{Bethe-corner-point}) corresponds to effectively replacing $\nu$ under the logarithm by $\chi_c/\theta$ rather than $\theta/\chi_c$, as is suggested by Eq.~(\ref{lambda0-high-asympt}). That still works when dealing with large-angle asymptotics of the angular distribution, but not when one aims to find a uniform approximation for all deflection angles. In the latter case, the saddle point must be sought for the entire integrand, and the path corner point be chosen as near as possible to it, as is done in the present paper.
Moreover, even Eq.~(\ref{lambda0-high-asympt}) may not be a perfect approximation over the entire range of $\theta$ (as we will see below), so, generally, it seems best to solve the corner point equation numerically. } \begin{equation}\label{Bethe-corner-point} \nu_0=\frac{\theta}{\chi_c\ln\frac{2\theta}{\chi'_a k}}, \end{equation} with $k\sim5$], and at small $z$, where $\frac{K_1\left(z\right)}{K_0\left(z\right)}\sim\frac1{z\ln \frac1z}$, giving in the leading logarithmic approximation \begin{equation}\label{lambda0-low-asympt} \nu_0\underset{\theta/\chi_c\to0}\sim\frac1{\sqrt{\ln\frac{\chi_c}{\theta}\ln\frac{2\chi_c}{\chi'_a}}}. \end{equation} The behavior of the solution of Eq.~(\ref{approx-saddle-point-eq-fullangle}) along with its asymptotes (\ref{lambda0-high-asympt}), (\ref{lambda0-low-asympt}) is illustrated in Fig.~\ref{fig:lambda0-fullangle}. \begin{figure} \includegraphics{f-hard} \caption{\label{fig:f-hard} The shape of the hard scattering component (\ref{f-hard-def}), (\ref{approx-saddle-point-eq-fullangle}), at $\chi_c/\chi'_a=10^2$ (black solid curve). Red curve, the same for approximate solution (\ref{lambda0-high-asympt}) of the path corner point equation. Dashed curve, Rutherford asymptotics (\ref{f1-Ruth}); dot-dashed curve, Rutherford asymptotics with the first power correction, Eq.~(\ref{R-theta-corr}). Dotted curve, small-angle asymptotics (\ref{fhard-smalltheta}).} \end{figure} Once the solution to Eq.~(\ref{approx-saddle-point-eq-fullangle}) is found, choosing the integration path similarly to that of Fig.~\ref{fig:path-proj-angle} leads to a decomposition \begin{equation}\label{ftheta=fhard+fsoft} f(\theta,l)=f_{h}(\theta,l)+f_{s}(\theta,l), \end{equation} with \begin{equation}\label{f-hard-def} f_{h}(\theta,l)=\frac1{\pi^2\chi_c^2}\int_0^{\nu_0(\theta)}d\nu\nu K_0\left(\frac{\theta}{\chi_c}\nu\right)e^{\frac{\nu^2}2\ln\frac{2\chi_c}{\chi'_a\nu}}\sin\frac{\pi\nu^2}{4}, \end{equation} and \begin{equation}\label{f-soft-def} f_{s}(\theta,l)=\frac1{2\pi\chi_c^2}\mathfrak{Re}\int_{i\nu_0(\theta)}^{\sim\chi_c/\chi'_a}d\kappa\kappa H_0^{(1)}\left(\frac{\theta}{\chi_c}\kappa\right)e^{\frac{\kappa^2}2\ln\frac{\chi'_a\kappa}{2\chi_c}}. \end{equation} Again, we interpret them as partial pseudoprobability distributions for a particle to undergo hard or soft scattering. Let us now analyze the behavior of those components, and compare them with the corresponding projected angle distributions. \begin{figure} \includegraphics{w-polar} \caption{\label{fig:w-polar} Total percentage of hard scattered particles, calculated by Eqs.~(\ref{f-hard-def}), (\ref{approx-saddle-point-eq-fullangle}) (solid curve). Dashed curve, interpolation (\ref{wh-heuristic}). The red curve shows the probability deficit $1-w_{h}-w_{s}$, for $f_{h}(\theta)$ evaluated by Eqs.~(\ref{f-hard-def}), (\ref{approx-saddle-point-eq-fullangle}), and $f_{s}(\theta)$ by Eqs.~(\ref{f-soft-def}), (\ref{fsoft-Gauss-0}) with $C=5$. } \end{figure} First of all, similarly to the previous subsection, the function $f_{h}(\theta,l)$ proves to be everywhere positive, because the typical $\nu$ contributing at any $\theta$ are less than unity, so the sine in the integrand is positive and we effectively have an integral of a positive definite function.
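In practice, the numerical solution of the corner point equation is straightforward. The following minimal sketch (assuming NumPy and SciPy are available; the function and variable names are ours, chosen for illustration) brackets and refines the root of Eq.~(\ref{approx-saddle-point-eq-fullangle}) for $\chi_c/\chi'_a=10^2$, with $\theta$ measured in units of $\chi_c$:
\begin{verbatim}
# Sketch: solve the corner point equation for nu_0(theta) at chi_c/chi'_a = 100.
import numpy as np
from scipy.optimize import brentq
from scipy.special import k0e, k1e   # exponentially scaled K_0, K_1

R = 1.0e2                            # chi_c / chi'_a

def corner_eq(nu, t):
    """Left-hand side of the corner point equation; t = theta/chi_c."""
    z = t * nu
    return t * k1e(z) / k0e(z) + nu * (np.log(nu / (2.0 * R)) + 0.5)

def nu0_of_theta(t):
    """Bracket the first sign change on a log grid, then refine with brentq."""
    grid = np.logspace(-6, np.log10(2.0 * R), 1000)
    vals = np.array([corner_eq(nu, t) for nu in grid])
    i = np.flatnonzero(vals < 0)[0]  # a sign change exists for these parameters
    return brentq(corner_eq, grid[i - 1], grid[i], args=(t,))

for t in (0.1, 1.0, 3.0, 10.0):
    print(t, nu0_of_theta(t))
\end{verbatim}
For $\theta/\chi_c\gg1$ the values so obtained approach asymptote (\ref{lambda0-high-asympt}), while at small $\theta/\chi_c$ they level off in accordance with (\ref{lambda0-low-asympt}).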
Using the unlimited growth of $\nu_0$ with $\theta$, it is straightforward to derive the Rutherford asymptotics for integral (\ref{f-hard-def}), along with its next-to-leading order power correction: \begin{eqnarray}\label{R-theta-corr} f_{h}(\theta,l)&\underset{\theta/\chi_c\to\infty}\simeq&\frac1{4\pi\chi_c^2}\int_0^{\infty}d\nu\nu^3 K_0\left(\frac{\theta}{\chi_c}\nu\right)\nonumber\\ &\,&\qquad\qquad\quad\times\left(1+\frac{\nu^2}2\ln\frac{2\chi_c}{\chi'_a\nu}\right)\nonumber\\ &=&\!\!\frac{\chi_c^2}{\pi\theta^4}+\frac{8\chi_c^4}{\pi\theta^6}\left(\ln\frac{\theta}{\chi'_a}+\gamma_{\text{E}}-\frac32\right). \end{eqnarray} The coefficient of the correction term here is in agreement with the leading log calculation (\ref{f2-asympt}). In the opposite limit $\theta/\chi_c\to0$, using (\ref{lambda0-low-asympt}), the function $f_{h}(\theta)$ can be shown to decrease much more slowly than in the case of the projected angle distribution (\ref{fhard-x-lowthetax}): \begin{eqnarray}\label{fhard-smalltheta} f_{h}(\theta,l)&\underset{\theta/\chi_c\to0}\sim&\frac{\nu_0^4}{16\pi\chi_c^2}\ln\frac{\chi_c}{\theta\nu_0}\nonumber\\ &\sim&\frac1{16\pi\chi_c^2 \ln\frac{\chi_c}{\theta}\ln^2\frac{2\chi_c}{\chi'_a}}, \end{eqnarray} but it still tends to zero. Hence, it must reach a maximum at some finite, nonzero $\theta$. Figure~\ref{fig:f-hard} plots function (\ref{f-hard-def}) with $\nu_0$ evaluated numerically from Eq.~(\ref{approx-saddle-point-eq-fullangle}). In contrast to the case of the projected angle distribution, it now appears that the use of Eq.~(\ref{lambda0-high-asympt}) does \emph{not} give a good approximation for $f_{h}(\theta)$ simultaneously for all typical $\theta$ -- because (\ref{lambda0-high-asympt}) is a much poorer approximation for the solution of Eq.~(\ref{approx-saddle-point-eq-fullangle}) itself. This is demonstrated by Fig.~\ref{fig:f-hard}, where the red curve corresponding to approximation (\ref{lambda0-high-asympt}) falls well below the calculation with the exact solution of Eq.~(\ref{approx-saddle-point-eq-fullangle}). This signals that, for the polar angle distribution, it is much more reliable to solve the corner point equation numerically. Similarly to the previous subsection, we can find that the support region for the function $f_{h}(\theta)$ is concentrated at \begin{equation}\label{} \chi_c\sqrt{B}<\theta<\chi_c\sqrt{B_2},\qquad \text{(semihard region)} \end{equation} where $B_2=8B(8e^{2\gamma_{\text{E}}-3}\chi_c^2/\chi'^2_a)$. \begin{figure} \includegraphics{soft-polar} \caption{\label{fig:soft-polar} The shape of the soft component (\ref{f-soft-def}) at $\chi_c/\chi'_a=10^2$, built by Eqs.~(\ref{f-soft-def}) and (\ref{approx-saddle-point-eq-fullangle}) (solid curve). Dashed curve, the same evaluated for the corner point defined by Eq.~(\ref{lambda0-high-asympt}). Dot-dashed, the quasi-Gaussian approximation, Eqs.~(\ref{fsoft-Gauss}), (\ref{fsoft-Gauss-0}), with $C=5$. Dotted, Moli\`{e}re's $f^{(0)}$. } \end{figure} Equation (\ref{approx-saddle-point-eq-fullangle}) also does not permit expressing $\theta$ through $\nu$, which hampers analytic computation of the total pseudoprobability of hard scattering $w_{h}=2\pi\int_0^{\infty}d\theta\theta f_{h}(\theta,l)$ by interchanging the order of integrations. Numerically, of course, that presents no difficulty, and is illustrated in Fig.~\ref{fig:w-polar}.
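As an illustration of how little is involved, the sketch below (again assuming NumPy/SciPy, and reusing \texttt{R} and \texttt{nu0\_of\_theta} from the previous snippet) evaluates $f_h(\theta)$ of Eq.~(\ref{f-hard-def}) and the total weight $w_h$ by direct quadrature; the Rutherford decay of the integrand makes the tail beyond $\theta\sim30\chi_c$ contribute only at the $10^{-3}$ level:
\begin{verbatim}
# Sketch: f_h(theta) of Eq. (f-hard-def) and w_h = 2*pi*int theta f_h dtheta,
# by direct quadrature (theta in units of chi_c).  Reuses R and nu0_of_theta()
# from the previous snippet.
from scipy.integrate import quad
from scipy.special import k0

def f_hard(t):
    nu0 = nu0_of_theta(t)
    g = lambda nu: nu * k0(t * nu) * np.exp(0.5 * nu**2 * np.log(2.0 * R / nu)) \
                      * np.sin(np.pi * nu**2 / 4.0)
    return quad(g, 1e-12, nu0)[0] / np.pi**2

w_h = 2.0 * np.pi * quad(lambda t: t * f_hard(t), 1e-3, 30.0, limit=200)[0]
# cf. the heuristic interpolation quoted in the text below
print(w_h, 3.0 / (np.log(R**2 * np.log(R**2)) - 1.5))
\end{verbatim}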
Qualitatively, the function $w_h(\chi_c/\chi'_a)$ exhibits a behavior similar to that of $w_{h\text{-}x}(\chi_c/\chi'_a)$ in Sec.~\ref{subsec:proj-angle}, but is about 3 times greater, so it cannot even be regarded as small. A satisfactory heuristic approximation of the same structure as Eq.~(\ref{whard-log}) may be written as \begin{equation}\label{wh-heuristic} w_{h}\approx\frac{3}{\ln\left(\frac{\chi_c^2}{\chi'^2_a}\ln\frac{\chi_c^2}{\chi'^2_a}\right)-1.5}. \end{equation} As for $f_{s}(\theta,l)$, physically it is expected to exhibit a behavior similar to Eq.~(\ref{fsoft-x-Gauss}). Indeed, approximation \begin{equation}\label{fsoft-Gauss} f_{s}(\theta,l)=f(0,l)e^{-\frac{\theta^2}{2\chi_c^2\ln\left(\frac{2\chi_c^2}{C\chi'_a\theta}\ln \frac{2\chi_c^2}{\chi'_a\theta}\right)}} \end{equation} with \begin{equation}\label{fsoft-Gauss-0} f(0,l)\simeq\frac1{2\pi\chi_c^2}\int_0^{\sim\chi_c/\chi'_a}d\kappa\kappa e^{-\frac{\kappa^2}2\ln\frac{2\chi_c}{\chi'_a\kappa}} \end{equation} [an integral similar to (\ref{whard-integral})] and $C\approx5$ works reasonably well (see Figs.~\ref{fig:soft-polar}, \ref{fig:full-angle}). The total pseudoprobability corresponding to this approximation equals $w_{s}=2\pi\int_0^{\infty} d\theta\theta f_{s}(\theta)=1-w_{h}-\Delta w$, with $\Delta w\sim 2\times 10^{-2}$ (see Fig.~\ref{fig:w-polar}, red curve), which is still tolerably small. We will content ourselves with this level of accuracy here. \begin{figure} \includegraphics{full-angle} \caption{\label{fig:full-angle} Relative contributions of the hard [dashed curve, Eqs.~(\ref{f-hard-def}), (\ref{approx-saddle-point-eq-fullangle})] and soft [dot-dashed curve, Eqs.~(\ref{fsoft-Gauss}), (\ref{fsoft-Gauss-0}) with $C=5$] components to the full-angle distribution [Eq.~(\ref{2DFourier}), solid curve], for $\chi_c/\chi'_a=10^2$. The sum of thus computed hard and soft components is virtually indistinguishable from the solid curve. The dotted curve represents the Rutherford asymptotics (\ref{f2-asympt}).} \end{figure} When comparing the results of this subsection with those of Sec.~\ref{subsec:proj-angle}, it should be borne in mind that\footnote{Note that in the lhs and in the rhs of Eqs.~(\ref{neq}), (\ref{neq-s}), the letter $f$ represents different functions.} \begin{equation}\label{neq} f_{h}(\theta_x,l)<\int_{-\infty}^{\infty}d\theta_y f_{h}(\theta,l)\big|_{\theta=\sqrt{\theta_x^2+\theta_y^2}}, \end{equation} and correspondingly, \begin{equation}\label{neq-s} f_{s}(\theta_x,l)>\int_{-\infty}^{\infty}d\theta_y f_{s}(\theta,l)\big|_{\theta=\sqrt{\theta_x^2+\theta_y^2}}. \end{equation} This is clear because the integral of a positive function in the rhs of (\ref{neq}) cannot vanish at $\theta_x\to0$, whereas the left-hand side (lhs) does vanish. It is also natural physically, because hard collisions which are nearly in the $y$-direction are not treated as hard when computing the projected distribution in $\theta_x$. But in the large-$\theta$ asymptotics, (\ref{neq}) holds as an equality for all the terms of the descending power series, by virtue of the identity \begin{equation}\label{} \frac{2^k k!}{\pi}\int_{-\infty}^{\infty}\frac{d\theta_y}{\left(\theta_x^2+\theta_y^2\right)^{1+k}}=\frac{(2k-1)!!}{\theta_x^{1+2k}} \end{equation} and its derivatives with respect to the index $k$, which generate the logarithmic factors. In turn, inequality (\ref{neq-s}) explains why the constant $C$ in approximation (\ref{fsoft-Gauss}) is greater than that in approximation (\ref{fsoft-x-Gauss}).
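The above identity is elementary to verify; a quick numerical check (again assuming NumPy/SciPy) for a few values of $k$ and $\theta_x$ reads:
\begin{verbatim}
# Sketch: numerical check of the identity
#   (2^k k!/pi) * int_{-inf}^{inf} dtheta_y (theta_x^2+theta_y^2)^(-1-k)
#   = (2k-1)!! / theta_x^(1+2k)
import numpy as np
from math import factorial
from scipy.integrate import quad

odd_double_factorial = lambda k: int(np.prod(np.arange(1, 2 * k, 2)))

for k in range(4):
    for tx in (0.5, 1.0, 2.0):
        lhs = (2**k * factorial(k) / np.pi) * \
            quad(lambda ty: (tx**2 + ty**2) ** (-(1 + k)), -np.inf, np.inf)[0]
        rhs = odd_double_factorial(k) / tx ** (1 + 2 * k)
        print(k, tx, lhs, rhs)   # lhs and rhs agree to quadrature accuracy
\end{verbatim}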
\section{Summary} The main conclusions of our paper can be summarized as follows. A continuation into the complex plane allows one to present the angular probability distribution of particles scattered in amorphous matter as a sum of hard- and soft-scattering components, with no restriction on the number of scatterings. In this decomposition, the hard component incorporates all the plural-scattering power-law corrections to the Rutherford single-scattering contribution, while the soft component is nearly Gaussian, but is narrower than Moli\`{e}re's $f^{(0)}$. Due to their positivity almost everywhere, those components admit an independent (pseudo)probabilistic interpretation. The corresponding total percentage of hard-scattered particles (which does not appear naturally in the Moli\`{e}re theory) typically amounts to $w_{h\text{-}x}\sim10\%$ for the projected angle distribution and $w_{h}\sim25\%$ for the polar angle distribution, and sets the accuracy limit for Gauss-like approximations for the soft component. The second conclusion is that in the aggregate distribution of scattered particles, there is a significant transition region between multiple soft and single hard scattering, in which scattering is multiple but hard. Physically, it is tied to the fact that at significant target thickness there always exists a range of angles where the probability of several hard rescatterings is non-negligible. The resummed hard-scattering component peaks at a non-zero deflection angle, and around its maximum (in the so-called semihard region), it exceeds the single hard scattering (Rutherford) contribution by a significant factor. Nonetheless, no bump emerges in the aggregate distribution around this angle, since in the semihard region the hard component is comparable with the soft one. From the practical point of view, if a single approximation is desired within the central region of scattering angles only, it may be reasonable to employ Moli\`{e}re's $f^{(0)}$; but if the hard ``tail'' needs to be described as well, it is advantageous to use the separation $f_{h}+f_{s}$ introduced herein. Even after approximating $f_s$ by a quasi-Gaussian, the sum $f_{h}+f_{s}$ is still numerically more accurate than the first few terms of the Moli\`{e}re expansion. Besides that, it should be remembered that the separation of $f_{h}$ and $f_{s}$ depends somewhat on the choice of the corner point for the integration path (which does not coincide exactly with the saddle point of the integrand), and thus may involve slight ambiguity. Analytic solutions of the corner point equation provide insight into qualitative dependencies of the particle distribution function on the total deflection angle and the target thickness, but may sometimes be insufficiently accurate, so, if better precision is required, the saddle point equation is to be solved numerically. Thus, depending on the needs of the study, the proposed construction may be used either for analytic or for numerical purposes.
\section{Introduction} \label{intro} Quantum cryptography, where the security is based on the laws of quantum physics, is a remarkable application of quantum mechanics in the field of information theory. In 1984, Bennett and Brassard first used quantum resources to complete a cryptographic task, generating a secret key between two parties; this was the first quantum key distribution (QKD) protocol. It is called the BB84 protocol~\cite{bennett2020quantum}, where the parties use a sequence of single photons randomly prepared in the rectilinear basis ($\{\ket{0},\ket{1}\}$) and the diagonal basis ($\{\ket{+},\ket{-}\}$) to produce a random secret key. Since then, various QKD protocols have been proposed by many researchers, such as Ekert's protocol~\cite{ekert1991quantum}, the B92 protocol~\cite{bennett1992quantum1}, the BBM92 protocol~\cite{bennett1992quantum}, the SARG04 protocol~\cite{scarani2004quantum} and so on~\cite{long2002theoretically,xue2002conditional,deng2004bidirectional,lo2012measurement}. In 2002, quantum secure direct communication (QSDC), a new concept of communicating messages securely over a quantum channel without any shared key, was first proposed by Long et al.~\cite{long2002theoretically}. This is a process of secure communication without any cryptographic encryption or decryption. Here the sender encodes the message on some qubits by using some predefined encoding rule and sends these qubits to the receiver through a quantum channel. Since its inception, QSDC has drawn a lot of attention and has become an active topic of research~\cite{beige2001secure,bostrom2002ping,deng2003two,deng2004secure,wang2005quantum,wang2005multi,wang2006quantum,zhang2017quantum,das2021quantum}. A bidirectional QSDC protocol, called quantum dialogue (QD), was first proposed by Nguyen in 2004~\cite{nguyen2004quantum}. There is now a large collection of QD protocols, for example Refs.~\cite{zhang2004deterministic,zhong2005quantum,xia2006quantum,xin2006secure,gao2010two,maitra2017measurement,das2020two}. QSDC protocols for three or more parties are discussed in~\cite{gao2005deterministic,jin2006three,ting2005simultaneous,tan2014multi,das2021secure}. Recently, Niu et al. proposed a measurement-device-independent (MDI) QSDC protocol using Einstein-Podolsky-Rosen (EPR) pairs~\cite{niu2018measurement}. They then generalized this one-way communication to a bidirectional one and proposed an MDI-QD protocol. In their protocols, the two legitimate parties prepare two sets of EPR pairs at their own sites and send the partner qubits of their EPR pairs to an untrusted third party, since the condition for being an MDI protocol is that all the measurements during the communication process should be performed by an untrusted third party (who may be an eavesdropper). Here we analyze these protocols and point out that the secret messages are not transmitted securely in either protocol. We show that fifty percent of the information about the secret message bits is leaked in both protocols. In other words, from the perspective of information theory and cryptography, these protocols are not secure. Security loopholes of this type, involving information leakage in various QSDC and QD protocols, are discussed in~\cite{zhong2007improvement,gao2008comment,gao2008revisiting,tan2008classical,fei2008teleportation,wang2011information,gao2014information,das2020cryptanalysis}. We also propose modifications of these protocols to improve their security. The rest of the paper is organized as follows.
In the next section, we briefly describe the MDI-QSDC and MDI-QD protocols proposed by Niu et al.~\cite{niu2018measurement}. In Section~\ref{sec3:analysis}, we analyze the security loophole of the above protocols, and then our proposed remedy is given in Section~\ref{sec4:modified protocol}. Finally, Section~\ref{conclusion} concludes our work. \section{Brief Review of Niu et al.'s Protocols \cite{niu2018measurement}} \label{sec:review} In this section, we briefly describe the MDI-QSDC and MDI-QD protocols proposed by Niu et al. in 2018. \subsection{MDI-QSDC protocol }\label{MDI-QSDC protocol} There are three parties in this protocol, namely, Alice, Bob and Charlie, where Alice wants to send some message to Bob, and Charlie is an untrusted third party, who performs all the measurements. They use the EPR pairs $\ket{\Phi^{+}}, \ket{\Phi^{-}}, \ket{\Psi^{+}}, \ket{\Psi^{-}}$ for sending the message bits, where \begin{equation} \ket{\Phi^{\pm}}=\frac{1}{\sqrt{2}}(\ket{00} \pm \ket{11}),\\ \ket{\Psi^{\pm}}=\frac{1}{\sqrt{2}}(\ket{01} \pm \ket{10}). \end{equation} The steps of the protocol are as follows: \begin{enumerate} \item \label{step1}Alice prepares $n$ EPR pairs randomly in the $\ket{\Psi^{+}}$ and $\ket{\Psi^{-}}$ states and creates two sequences $S_{A_1}$ and $S_{A_2}$ of single photons, such that for $1 \leq i \leq n$, the $i$-th qubits of $S_{A_1}$ and $S_{A_2}$ are partners of each other in the $i$-th EPR pair. Similarly, Bob also prepares $S_{B_1}$ and $S_{B_2}$ from his $n$ EPR pairs randomly chosen from $\ket{\Psi^{+}}$ and $\ket{\Psi^{-}}$. Alice (Bob) also chooses $m$ single qubit states randomly from $\{\ket{0},\ket{1},\ket{+}=\frac{1}{\sqrt{2}}(\ket{0}+\ket{1}),\ket{-}=\frac{1}{\sqrt{2}}(\ket{0}-\ket{1})\}$ and inserts these qubits at random positions of $S_{A_2}$ ($S_{B_2}$); let the new sequence, containing $(n+m)$ single qubit states, be $C_{A_2}$ ($C_{B_2}$). \item \label{step2}Alice (Bob) sends the sequence $C_{A_2}$ ($C_{B_2}$) to Charlie and keeps $S_{A_1}$ ($S_{B_1}$) in her (his) lab. \item Charlie makes a Bell measurement on each pair of $C_{A_2}$ and $C_{B_2}$ (i.e., the $i$-th Bell measurement on the $i$-th qubit of $C_{A_2}$ and the $i$-th qubit of $C_{B_2}$, $1\leq i \leq n+m$) and announces the results. \label{1st_measure} \item \label{step4}Alice and Bob announce the positions of the single qubit states in the sequences $C_{A_2}$ and $C_{B_2}$ respectively. For $1\leq i \leq n+m$, four cases may arise. \begin{enumerate} \item \label{after_1st_measure}If the $i$-th qubit of $C_{A_2}$ and the $i$-th qubit of $C_{B_2}$ are from $S_{A_2}$ and $S_{B_2}$ respectively, then as a result of quantum entanglement swapping~\cite{zukowski1993event}, the Bell measurement causes the corresponding partner qubits of $S_{A_1}$ and $S_{B_1}$ to become an EPR pair, as shown in Equation~\eqref{ent_swp}.
\begin{equation} \label{ent_swp} \begin{aligned} \ket{\Psi^{+}}_{A_1A_2}\ket{\Psi^{+}}_{B_1B_2}={} & \frac{1}{2}(\ket{\Psi^{+}}_{A_1B_1}\ket{\Psi^{+}}_{A_2B_2}-\ket{\Psi^{-}}_{A_1B_1}\ket{\Psi^{-}}_{A_2B_2}+\\ & \ket{\Phi^{+}}_{A_1B_1}\ket{\Phi^{+}}_{A_2B_2}-\ket{\Phi^{-}}_{A_1B_1}\ket{\Phi^{-}}_{A_2B_2}),\\ \ket{\Psi^{-}}_{A_1A_2}\ket{\Psi^{+}}_{B_1B_2}={} & \frac{1}{2}(\ket{\Psi^{-}}_{A_1B_1}\ket{\Psi^{+}}_{A_2B_2}-\ket{\Psi^{+}}_{A_1B_1}\ket{\Psi^{-}}_{A_2B_2}+\\ & \ket{\Phi^{-}}_{A_1B_1}\ket{\Phi^{+}}_{A_2B_2}-\ket{\Phi^{+}}_{A_1B_1}\ket{\Phi^{-}}_{A_2B_2}),\\ \ket{\Psi^{+}}_{A_1A_2}\ket{\Psi^{-}}_{B_1B_2}={} & \frac{1}{2}(\ket{\Psi^{+}}_{A_1B_1}\ket{\Psi^{-}}_{A_2B_2}-\ket{\Psi^{-}}_{A_1B_1}\ket{\Psi^{+}}_{A_2B_2}+\\ & \ket{\Phi^{-}}_{A_1B_1}\ket{\Phi^{+}}_{A_2B_2}-\ket{\Phi^{+}}_{A_1B_1}\ket{\Phi^{-}}_{A_2B_2}),\\ \ket{\Psi^{-}}_{A_1A_2}\ket{\Psi^{-}}_{B_1B_2}={} & \frac{1}{2}(\ket{\Psi^{-}}_{A_1B_1}\ket{\Psi^{-}}_{A_2B_2}-\ket{\Psi^{+}}_{A_1B_1}\ket{\Psi^{+}}_{A_2B_2}+\\ & \ket{\Phi^{+}}_{A_1B_1}\ket{\Phi^{+}}_{A_2B_2}-\ket{\Phi^{-}}_{A_1B_1}\ket{\Phi^{-}}_{A_2B_2}).\\ \end{aligned} \end{equation} \item If the $i$-th qubit of $C_{A_2}$ is from $S_{A_2}$ and the $i$-th qubit of $C_{B_2}$ is any single qubit from the set $\{\ket{0},\ket{1},\ket{+},\ket{-}\}$, then Alice and Bob discard the $i$-th Bell measurement result. \item If the $i$-th qubit of $C_{A_2}$ is a single qubit from the set $\{\ket{0},\ket{1},\ket{+},\ket{-}\}$ and the $i$-th qubit of $C_{B_2}$ is from $S_{B_2}$, then also Alice and Bob discard the $i$-th Bell measurement result. \item If both the $i$-th qubits of $C_{A_2}$ and $C_{B_2}$ are from the set $\{\ket{0},\ket{1},\ket{+},\ket{-}\}$, then Alice and Bob exchange the basis information of their single qubits. If the bases are different, then they discard the $i$-th Bell measurement result. Else it is used for security checking. A pair of single qubits with identical bases can be written as: \begin{equation} \label{sinle_bell} \begin{aligned} \ket{0}_{A_2}\ket{0}_{B_2}= \frac{1}{\sqrt{2}}(\ket{\Phi^{+}}_{A_2B_2}+\ket{\Phi^{-}}_{A_2B_2}), \\ \ket{1}_{A_2}\ket{1}_{B_2}= \frac{1}{\sqrt{2}}(\ket{\Phi^{+}}_{A_2B_2}-\ket{\Phi^{-}}_{A_2B_2}), \\ \ket{0}_{A_2}\ket{1}_{B_2}= \frac{1}{\sqrt{2}}(\ket{\Psi^{+}}_{A_2B_2}+\ket{\Psi^{-}}_{A_2B_2}), \\ \ket{1}_{A_2}\ket{0}_{B_2}= \frac{1}{\sqrt{2}}(\ket{\Psi^{+}}_{A_2B_2}-\ket{\Psi^{-}}_{A_2B_2});\\ \end{aligned} \end{equation} and \begin{equation} \label{sinle_bell_+} \begin{aligned} \ket{+}_{A_2}\ket{+}_{B_2}= \frac{1}{\sqrt{2}}(\ket{\Phi^{+}}_{A_2B_2}+\ket{\Psi^{+}}_{A_2B_2}), \\ \ket{-}_{A_2}\ket{-}_{B_2}= \frac{1}{\sqrt{2}}(\ket{\Phi^{+}}_{A_2B_2}-\ket{\Psi^{+}}_{A_2B_2}).\\ \ket{+}_{A_2}\ket{-}_{B_2}= \frac{1}{\sqrt{2}}(\ket{\Phi^{-}}_{A_2B_2}-\ket{\Psi^{-}}_{A_2B_2}), \\ \ket{-}_{A_2}\ket{+}_{B_2}= \frac{1}{\sqrt{2}}(\ket{\Phi^{-}}_{A_2B_2}+\ket{\Psi^{-}}_{A_2B_2}). \\ \end{aligned} \end{equation} Using the relations~\eqref{sinle_bell} and~\eqref{sinle_bell_+}, Alice and Bob estimate the error in the channel and decide to continue the protocol or not. \end{enumerate} \item \label{step5}Alice and Bob discard the qubits, which are not entangled, from their sequences $S_{A_1}$ and $S_{B_1}$, and make the new sequences $M_A$ and $M_B$ respectively. Let the number of discarded qubits from each set be $\delta$, and then each new sequence contains $(n-\delta)$ single qubits. Alice performs the unitary operation $\sigma_z$~\cite{nielsen2002quantum}, on the qubits of $M_A$, whose initial states were $\ket{\Psi^{+}}$. 
This process is equivalent to Alice having prepared all the initial EPR pairs in the $\ket{\Psi^{-}}$ state. Now, only Bob knows the actual state of the qubit pairs $({M_A}_i,{M_B}_i)$ for $1 \leq i \leq n-\delta$, where ${M_A}_i$ and ${M_B}_i$ are the $i$-th qubits of the sequences $M_A$ and $M_B$ respectively. Due to quantum entanglement swapping, $({M_A}_i,{M_B}_i)$ is in a Bell state (see Equation~\eqref{ent_swp}). \item \label{step6}Message encoding: Alice inserts some random checking bits at random positions of her message. She applies one of the four unitary operators (Pauli matrices~\cite{nielsen2002quantum}), $I$, $\sigma_x$, $i\sigma_y$ and $\sigma_{z}$, on the qubits of $M_A$, to encode the information $00$, $01$, $10$, and $11$ respectively. To make the protocol secure against the intercept-and-resend attack, Bob randomly applies $I$ or $\sigma_{z}$ on the qubits of $M_B$. \item Alice (Bob) sends the sequence $M_{A}$ ($M_{B}$) to Charlie, who measures each pair of qubits of $M_A$ and $M_B$ in the Bell basis and announces the results. From the measurement results, Bob decodes the message of Alice. Then Alice announces the positions and values of the random checking bits, and from this information, they can check the integrity of the message. A non-negligible error implies the existence of some eavesdropper in the channel. \label{2nd_measure} \end{enumerate} \subsection{MDI-QD protocol } This is a simple generalization of the previous MDI-QSDC protocol. The first five steps are the same as above. To encode their messages, Alice and Bob divide the pair of sequences $(M_A,M_B)$ into two disjoint parts $(M_{A}^1,M_{B}^1)$ and $(M_{A}^2,M_{B}^2)$. One part is used for sending the message from Alice to Bob and the other part is used for sending a message from Bob to Alice. \section{Security loophole of the MDI-QSDC protocol~\cite{niu2018measurement}}\label{sec3:analysis} In this section, we explicitly analyze the above MDI-QSDC protocol discussed in Section~\ref{MDI-QSDC protocol}. After Charlie has performed the first set of Bell measurements on the qubit pairs of $S_{A_2}$ and $S_{B_2}$ in Step~\ref{1st_measure}, the qubit pairs of $S_{A_1}$ and $S_{B_1}$ become entangled due to entanglement swapping (Step~\ref{after_1st_measure}). Now from Equation~\eqref{ent_swp}, we can see that, if the Bell measurement results of the qubit pairs of $S_{A_2}$ and $S_{B_2}$ are $\ket{\Phi^{+}}_{A_2B_2}$ or $\ket{\Phi^{-}}_{A_2B_2}$, then the states of the qubit pairs of $S_{A_1}$ and $S_{B_1}$ are also $\ket{\Phi^{+}}_{A_1B_1}$ or $\ket{\Phi^{-}}_{A_1B_1}$. Similarly, the state of the qubit pair $(A_2,B_2)=\ket{\Psi^{\pm}}_{A_2B_2}$ implies the state of the qubit pair $(A_1,B_1)=\ket{\Psi^{\pm}}_{A_1B_1}$ or $\ket{\Psi^{\mp}}_{A_1B_1}$. After security checking, Alice and Bob discard the qubits which are not entangled from their sequences $S_{A_1}$ and $S_{B_1}$, and form the new sequences $M_A$ and $M_B$ respectively. So, from the Bell measurement results of the qubit pairs $(A_2,B_2)$, Charlie knows that the states of the qubit pairs $(A_1,B_1)$ are either $\ket{\Phi^{\pm}}_{A_1B_1}$ or $\ket{\Psi^{\pm}}_{A_1B_1}$. That is, for $1 \leq i \leq n-\delta$, Charlie knows exactly whether the qubit pair $({M_A}_i,{M_B}_i)$ is in the set $\varPhi=\{\ket{\Phi^{+}},\ket{\Phi^{-}}\}$ or in the set $\varPsi=\{\ket{\Psi^{+}},\ket{\Psi^{-}}\}$. Now Alice applies $\sigma_z$ on the qubits of $M_A$, whose corresponding initial states were $\ket{\Psi^{+}}$.
It is easy to check that, if Alice applies $\sigma_z$ on $M_{A_i}$ for some $i$, then the state of the qubit pair $({M_A}_i,{M_B}_i)$ changes from either $\ket{\Phi^{\pm}}$ to $\ket{\Phi^{\mp}}$ or $\ket{\Psi^{\pm}}$ to $\ket{\Psi^{\mp}}$. Thus Charlie's knowledge about the state of $({M_A}_i,{M_B}_i)$ remains same. Then Alice encodes her message on the qubits of $M_A$ by using the unitary operations $I$, $\sigma_x$, $i\sigma_y$ and $\sigma_{z}$ corresponding the message bits $00$, $01$, $10$, and $11$ respectively. That is, the unitary operators $I$ and $\sigma_z$ are used to encode the message bits $bb$, and the unitary operators $\sigma_x$ and $i\sigma_y$ are used to encode the message bits $b\bar{b}$, where $b \in \{0,1\}$ and $\bar{b}$ = bit complement of $b$. Bob also randomly applies $I$ or $\sigma_{z}$ on the qubits of $M_B$. They send $M_A$ and $M_B$ to Charlie, who measures each pair of qubits $({M_A}_i,{M_B}_i)$ in Bell basis, and announces the results. All the different cases are given in Table~\ref{cases of qsdc}. \begin{table}[] \centering \renewcommand*{\arraystretch}{1.7} \caption{Different cases of MDI-QSDC~\cite{niu2018measurement}.} \setlength{\tabcolsep}{10pt} \resizebox{1.05\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|} \hline \textbf{State of $\mathbf{(M_{A_i},M_{B_i})}$} & \textbf{Message bits} & \textbf{Alice's unitary} & \textbf{Bob's unitary} & \textbf{State of $\mathbf{(M_{A_i},M_{B_i})}$} \\ \textbf{ before encoding} & \textbf{of Alice} & \textbf{operation on $\mathbf{M_{A_i}}$} & \textbf{operation on $\mathbf{M_{B_i}}$} & \textbf{after encoding} \\ \hline \multirow{8}{*}{$\ket{\Phi^{+}}$} & \multirow{2}{*}{00} & \multirow{2}{*}{$I$} & $I$ & $\ket{\Phi^{+}}$ \\ \cline{4-5} & & & $\sigma_z$ & $\ket{\Phi^{-}}$ \\ \cline{2-5} & \multirow{2}{*}{01} & \multirow{2}{*}{$\sigma_x$} & $I$ & $\ket{\Psi^{+}}$ \\ \cline{4-5} & & & $\sigma_z$ & $\ket{\Psi^{-}}$ \\ \cline{2-5} & \multirow{2}{*}{10} & \multirow{2}{*}{$i\sigma_y$} & $I$ & $\ket{\Psi^{-}}$ \\ \cline{4-5} & & & $\sigma_z$ & $\ket{\Psi^{+}}$ \\ \cline{2-5} & \multirow{2}{*}{11} & \multirow{2}{*}{$\sigma_z$} & $I$ & $\ket{\Phi^{-}}$ \\ \cline{4-5} & & & $\sigma_z$ & $\ket{\Phi^{+}}$ \\ \hline \multirow{8}{*}{$\ket{\Phi^{-}}$} & \multirow{2}{*}{00} & \multirow{2}{*}{$I$} & $I$ & $\ket{\Phi^{-}}$ \\ \cline{4-5} & & & $\sigma_z$ & $\ket{\Phi^{+}}$ \\ \cline{2-5} & \multirow{2}{*}{01} & \multirow{2}{*}{$\sigma_x$} & $I$ & $\ket{\Psi^{-}}$ \\ \cline{4-5} & & & $\sigma_z$ & $\ket{\Psi^{+}}$ \\ \cline{2-5} & \multirow{2}{*}{10} & \multirow{2}{*}{$i\sigma_y$} & $I$ & $\ket{\Psi^{+}}$ \\ \cline{4-5} & & & $\sigma_z$ & $\ket{\Psi^{-}}$ \\ \cline{2-5} & \multirow{2}{*}{11} & \multirow{2}{*}{$\sigma_z$} & $I$ & $\ket{\Phi^{+}}$ \\ \cline{4-5} & & & $\sigma_z$ & $\ket{\Phi^{-}}$ \\ \hline \multirow{8}{*}{$\ket{\Psi^{+}}$} & \multirow{2}{*}{00} & \multirow{2}{*}{$I$} & $I$ & $\ket{\Psi^{+}}$ \\ \cline{4-5} & & & $\sigma_z$ & $\ket{\Psi^{-}}$ \\ \cline{2-5} & \multirow{2}{*}{01} & \multirow{2}{*}{$\sigma_x$} & $I$ & $\ket{\Phi^{+}}$ \\ \cline{4-5} & & & $\sigma_z$ & $\ket{\Phi^{-}}$ \\ \cline{2-5} & \multirow{2}{*}{10} & \multirow{2}{*}{$i\sigma_y$} & $I$ & $\ket{\Phi^{-}}$ \\ \cline{4-5} & & & $\sigma_z$ & $\ket{\Phi^{+}}$ \\ \cline{2-5} & \multirow{2}{*}{11} & \multirow{2}{*}{$\sigma_z$} & $I$ & $\ket{\Psi^{-}}$ \\ \cline{4-5} & & & $\sigma_z$ & $\ket{\Psi^{+}}$ \\ \hline \multirow{8}{*}{$\ket{\Psi^{-}}$} & \multirow{2}{*}{00} & \multirow{2}{*}{$I$} & $I$ & $\ket{\Psi^{-}}$ \\ \cline{4-5} & & & $\sigma_z$ & $\ket{\Psi^{+}}$ \\ 
\cline{2-5} & \multirow{2}{*}{01} & \multirow{2}{*}{$\sigma_x$} & $I$ & $\ket{\Phi^{-}}$ \\ \cline{4-5} & & & $\sigma_z$ & $\ket{\Phi^{+}}$ \\ \cline{2-5} & \multirow{2}{*}{10} & \multirow{2}{*}{$i\sigma_y$} & $I$ & $\ket{\Phi^{+}}$ \\ \cline{4-5} & & & $\sigma_z$ & $\ket{\Phi^{-}}$ \\ \cline{2-5} & \multirow{2}{*}{11} & \multirow{2}{*}{$\sigma_z$} & $I$ & $\ket{\Psi^{+}}$ \\ \cline{4-5} & & & $\sigma_z$ & $\ket{\Psi^{-}}$ \\ \hline \end{tabular} } \label{cases of qsdc} \end{table} We now show that, in the MDI-QSDC protocol~\cite{niu2018measurement}, the untrusted third party Charlie (or any eavesdropper) can get partial information about the secret without any active attack. For this, we need to discuss the effects of the encoding rules in this MDI-QSDC protocol. Without loss of generality, suppose the joint state of ${M_A}_i$, ${M_B}_i$ before encoding is $\ket{\Phi^{+}}$; then Charlie knows that the joint state is in the set $\varPhi$. After Charlie measures $({M_A}_i,{M_B}_i)$ in the Bell basis, if the measurement result is in the set $\varPhi$, then from Table~\ref{cases of qsdc} Charlie concludes that the secret information is either $00$ or $11$. Again, if the measurement result is in the set $\varPsi$, then from Table~\ref{cases of qsdc} Charlie concludes that the secret information is either $01$ or $10$. Similarly, in the other cases, Charlie knows exactly whether the secret information is of the form $bb$ or $b\bar{b}$. In both cases, Charlie can guess the exact secret information with probability $1/2$; thus the Shannon entropy, which measures the amount of uncertainty, is equal to $-\sum_{j=1}^{2}\frac{1}{2}\log\frac{1}{2}=1$ bit. This means that only one of the two bits of secret information is unknown to Charlie. One may note that, from the viewpoint of information theory, this is equivalent to Charlie knowing the exact value of one of the two secret bits and having no knowledge about the other. Thus we can say that, in this MDI-QSDC protocol, only fifty percent of the secret message is communicated securely. By the same argument, we can say that the MDI-QD protocol proposed in~\cite{niu2018measurement} is also not secure against information leakage, and in this protocol too, only fifty percent of the secret messages are communicated securely. Now we find the root of this information leakage problem in these protocols. For some~$i$, let $M_{A_i} \in M_A$ and $M_{B_i} \in M_B$, and suppose that after Alice and Bob apply their unitary operators the states $M_{A_i}$ and $M_{B_i}$ become $N_{A_i}$ and $N_{B_i}$, respectively. If the joint state $(M_{A_i},M_{B_i})\in \varPhi$ or $\varPsi$, then after applying $I$ or $\sigma_z$ on $M_{A_i}$ (or $M_{B_i}$), the joint state $(N_{A_i},M_{B_i})$ (or $(M_{A_i},N_{B_i})$) remains in the same set $\varPhi$ or $\varPsi$ respectively. In other words, both $I$ and $\sigma_z$, whether applied on $M_{A_i}$, on $M_{B_i}$, or on both, map the set $\varPhi$ to $\varPhi$ and the set $\varPsi$ to $\varPsi$. That is, for both mappings the domain and range sets are the same, and if the joint states $(M_{A_i},M_{B_i})$ and $(N_{A_i},N_{B_i})$ both belong to the same subset of the Bell states, $\varPhi$ or $\varPsi$, then Charlie concludes that the message bits are $bb$. Otherwise, when $(M_{A_i},M_{B_i})$ and $(N_{A_i},N_{B_i})$ belong to different subsets, Charlie concludes that the message bits are $b\bar{b}$ (i.e., Alice applied $\sigma_x$ or $i\sigma_y$ on $M_{A_i}$).
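This leak is easy to exhibit mechanically. The following minimal sketch (assuming NumPy; the qubit ordering is $M_{A_i}\otimes M_{B_i}$, and all names are ours, chosen for illustration) runs over every initial Bell state, every message of Alice, and both of Bob's cover operations from $\{I,\sigma_z\}$, and confirms that the $\varPhi$/$\varPsi$ class of the final state coincides with the initial class exactly when Alice's two bits are equal, as read off from Table~\ref{cases of qsdc}:
\begin{verbatim}
# Sketch: with Bob's cover set {I, sigma_z}, the Phi/Psi class of the final
# Bell state equals the initial class iff Alice's two message bits are equal,
# so Charlie learns b1 XOR b2 (one of the two secret bits).
import numpy as np
from itertools import product

I2 = np.eye(2); sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]]); sz = np.diag([1.0, -1.0])
bell = {'Phi+': [1, 0, 0, 1], 'Phi-': [1, 0, 0, -1],
        'Psi+': [0, 1, 1, 0], 'Psi-': [0, 1, -1, 0]}
bell = {k: np.array(v) / np.sqrt(2) for k, v in bell.items()}
encode = {'00': I2, '01': sx, '10': 1j * sy, '11': sz}   # Alice's encoding rule
cover = {'I': I2, 'sz': sz}                              # Bob's cover rule

label = lambda psi: max(bell, key=lambda k: abs(np.vdot(bell[k], psi)))

for init, bits, bob in product(bell, encode, cover):
    final = label(np.kron(encode[bits], cover[bob]) @ bell[init])
    assert (final[:3] == init[:3]) == (bits[0] == bits[1])
print("class is preserved  <=>  Alice's bits are equal: one-bit leak confirmed")
\end{verbatim}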
So, the main problem with this encoding rule is that Bob's random unitary operations cannot reduce Charlie's information about the secret message. In the next section, we propose a remedy to overcome this security flaw. \section{Proposed modification of MDI-QSDC protocol}\label{sec4:modified protocol} In this section, we modify the MDI-QSDC protocol to make it secure against information leakage. To resolve the problem discussed in Section~\ref{sec3:analysis}, Bob needs to apply random unitary operators on $M_{B_i}$ such that the union of the range sets of his unitary operators becomes the whole set of Bell states, i.e., for each $(M_{A_i},M_{B_i}) \in \varPhi$ or $\varPsi$ and $(N_{A_i},N_{B_i}) \in \varPhi \cup \varPsi$, all four possibilities of Alice's two-bit message $b_1b_2$ ($b_1,b_2\in \{0,1\}$) remain possible. The modified protocol is almost the same as the original one. In our modified MDI-QSDC protocol, Step~\ref{step1} to Step~\ref{step5} and Step~\ref{2nd_measure} are the same as in the MDI-QSDC protocol discussed in Section~\ref{MDI-QSDC protocol}. In Step 6, the encoding process of Alice is the same as the previous one, and Bob randomly applies $\sigma_x$ or $I$ on the qubits of $M_B$ (instead of $\sigma_z$ or $I$ in the original one). All the different cases of the states of the qubit pairs of $M_A$ and $M_B$ before and after encoding are given in Table~\ref{mod_table}. We will now show that this modified protocol is secure against information leakage. Again without loss of generality, suppose the joint state of ${M_A}_i$, ${M_B}_i$ before encoding is $\ket{\Phi^{+}}$; then Charlie knows that the joint state is either $\ket{\Phi^{+}}$ or $\ket{\Phi^{-}}$. From Table~\ref{mod_table}, it is easy to check that, if the joint state before encoding is $\ket{\Phi^{\pm}}$, then all four Bell states can arise after encoding any two message bits $b_1b_2$. Thus Charlie's knowledge about the joint state before encoding does not help him to extract any information about the secret bits. Similarly, in the other cases too, Charlie cannot get any secret information about the message bits. We can also modify the MDI-QD protocol of \cite{niu2018measurement} with a similar approach, i.e., the receiver applies the unitary operators $I$ or $\sigma_x$ randomly on his (her) state at the time of encoding. \subsection{Other Pauli operators to fix the issue} One can ask what happens if Bob chooses any other pair of Pauli matrices as his random unitary operators. To check this, we consider two sets of linear transformations $\mathcal{F}_1=\{I, \sigma_z\}$ and $\mathcal{F}_2=\{\sigma_x, i\sigma_y\}$ (note that every matrix is a linear transformation), where both the domain and the range of these linear transformations are the sets $\varPhi$ and $\varPsi$. Then, $f \in \mathcal{F}_1$ implies that $f$ maps the set $\varPhi$ to $\varPhi$ and the set $\varPsi$ to $\varPsi$ (ignoring the global phase of the Bell states). Again, $f \in \mathcal{F}_2$ implies that $f$ maps the set $\varPhi$ to $\varPsi$ and the set $\varPsi$ to $\varPhi$. For any mapping $f$, let $\mathcal{D}(f)$ and $\mathcal{R}(f)$ be the domain and range of $f$ respectively.
If Bob uses both of his unitary operators from the same set $\mathcal{F}_1$ or $\mathcal{F}_2$ (i.e., his unitary operators $f_1$ and $f_2$ satisfy $\mathcal{D}(f_1)=\mathcal{D}(f_2)=\mathcal{D}$ (say) and $\mathcal{R}(f_1)=\mathcal{R}(f_2)=\mathcal{R}$ (say), where both $\mathcal{D}$ and $\mathcal{R}$ are either $\varPhi$ or $\varPsi$), then $(N_{A_i},N_{B_i}) \in \mathcal{R} \Longrightarrow~(N_{A_i},M_{B_i}) \in \mathcal{D}$. As Charlie knows exactly whether the state $(M_{A_i},M_{B_i})$ belongs to the set $\varPhi$ or to the set $\varPsi$, from the knowledge that $(N_{A_i},M_{B_i}) \in \mathcal{D}$ he learns whether the two bits of Alice's message are equal or not. Now let the two unitary operators of Bob be $f_1$ and $f_2$, where $f_1 \in \mathcal{F}_1$ and $f_2 \in \mathcal{F}_2$. Then $\mathcal{D}(f_1)=\mathcal{D}(f_2)=\mathcal{D}$ (say) implies that $\mathcal{R}(f_1)$ and $\mathcal{R}(f_2)$ are disjoint. Since $\varPhi$ and $\varPsi$ form a partition of the set of all two-qubit Bell states, $\mathcal{R}(f_1) \cup \mathcal{R}(f_2)$ contains all the Bell states. As Bob chooses randomly between $f_1$ and $f_2$, Charlie cannot determine from the exact state of $(N_{A_i},N_{B_i})$ the set to which the state $(N_{A_i},M_{B_i})$ belongs. For example, if Charlie knows $(M_{A_i},M_{B_i})\in \varPhi$, then for any of Alice's messages $b_1b_2$, all four Bell states can occur as the state of $(N_{A_i},N_{B_i})$. So in this case, the protocol is secure against information leakage. Hence the collection of all possible choices of Bob's pair of random unitary operators, from the set of Pauli matrices, is $\{(f_1,f_2): f_1 \in \mathcal{F}_1 \text{ and } f_2 \in \mathcal{F}_2\}$, i.e., there are four options for Bob to choose his pair of unitary operators and they are: $I$ and $\sigma_x$; $I$ and $i\sigma_y$; $\sigma_z$ and $\sigma_x$; $\sigma_z$ and $i\sigma_y$. One can easily check that, if Bob uses any one pair from the above set as his random unitary operators, then both protocols avoid the information leakage problem.
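This characterization can also be confirmed by an exhaustive check. The sketch below (assuming NumPy, with the same conventions as the previous snippet) enumerates every unordered pair of distinct Pauli covers for Bob and counts, for each class Charlie may know beforehand and each Bell outcome he may measure, how many of Alice's four messages remain consistent; a pair is leak-free exactly when that count is always four:
\begin{verbatim}
# Sketch: which of Bob's cover pairs remove the leak?
import numpy as np
from itertools import combinations, product

I2 = np.eye(2); sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]]); sz = np.diag([1.0, -1.0])
paulis = {'I': I2, 'sx': sx, 'isy': 1j * sy, 'sz': sz}
bell = {'Phi+': [1, 0, 0, 1], 'Phi-': [1, 0, 0, -1],
        'Psi+': [0, 1, 1, 0], 'Psi-': [0, 1, -1, 0]}
bell = {k: np.array(v) / np.sqrt(2) for k, v in bell.items()}
encode = {'00': I2, '01': sx, '10': 1j * sy, '11': sz}
label = lambda psi: max(bell, key=lambda k: abs(np.vdot(bell[k], psi)))

for pair in combinations(paulis, 2):
    worst = 4
    for known_class in ('Phi', 'Psi'):
        consistent = {}                  # Bell outcome -> set of consistent messages
        for s0, bits, b in product([k for k in bell if k.startswith(known_class)],
                                   encode, pair):
            out = label(np.kron(encode[bits], paulis[b]) @ bell[s0])
            consistent.setdefault(out, set()).add(bits)
        worst = min(worst, *(len(v) for v in consistent.values()))
    print(pair, 'leak-free' if worst == 4 else 'leaks')
\end{verbatim}
Only the four mixed pairs -- one operator from $\{I,\sigma_z\}$ and one from $\{\sigma_x,i\sigma_y\}$ -- come out leak-free, in agreement with the argument above.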
\begin{table}[] \centering \renewcommand*{\arraystretch}{1.7} \caption{Different cases of modified MDI-QSDC.} \setlength{\tabcolsep}{10pt} \resizebox{1.05\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|} \hline \textbf{State of $\mathbf{(M_{A_i},M_{B_i})}$} & \textbf{Message bits} & \textbf{Alice's unitary} & \textbf{Bob's unitary} & \textbf{State of $\mathbf{(M_{A_i},M_{B_i})}$} \\ \textbf{ before encoding} & \textbf{of Alice} & \textbf{operation on $\mathbf{M_{A_i}}$} & \textbf{operation on $\mathbf{M_{B_i}}$} & \textbf{after encoding} \\ \hline \multirow{8}{*}{$\ket{\Phi^{+}}$} & \multirow{2}{*}{00} & \multirow{2}{*}{$I$} & $I$ & $\ket{\Phi^{+}}$ \\ \cline{4-5} & & & $\sigma_x$ & $\ket{\Psi^{+}}$ \\ \cline{2-5} & \multirow{2}{*}{01} & \multirow{2}{*}{$\sigma_x$} & $I$ & $\ket{\Psi^{+}}$ \\ \cline{4-5} & & & $\sigma_x$ & $\ket{\Phi^{+}}$ \\ \cline{2-5} & \multirow{2}{*}{10} & \multirow{2}{*}{$i\sigma_y$} & $I$ & $\ket{\Psi^{-}}$ \\ \cline{4-5} & & & $\sigma_x$ & $\ket{\Phi^{-}}$ \\ \cline{2-5} & \multirow{2}{*}{11} & \multirow{2}{*}{$\sigma_z$} & $I$ & $\ket{\Phi^{-}}$ \\ \cline{4-5} & & & $\sigma_x$ & $\ket{\Psi^{-}}$ \\ \hline \multirow{8}{*}{$\ket{\Phi^{-}}$} & \multirow{2}{*}{00} & \multirow{2}{*}{$I$} & $I$ & $\ket{\Phi^{-}}$ \\ \cline{4-5} & & & $\sigma_x$ & $\ket{\Psi^{-}}$ \\ \cline{2-5} & \multirow{2}{*}{01} & \multirow{2}{*}{$\sigma_x$} & $I$ & $\ket{\Psi^{-}}$ \\ \cline{4-5} & & & $\sigma_x$ & $\ket{\Phi^{-}}$ \\ \cline{2-5} & \multirow{2}{*}{10} & \multirow{2}{*}{$i\sigma_y$} & $I$ & $\ket{\Psi^{+}}$ \\ \cline{4-5} & & & $\sigma_x$ & $\ket{\Phi^{+}}$ \\ \cline{2-5} & \multirow{2}{*}{11} & \multirow{2}{*}{$\sigma_z$} & $I$ & $\ket{\Phi^{+}}$ \\ \cline{4-5} & & & $\sigma_x$ & $\ket{\Psi^{+}}$ \\ \hline \multirow{8}{*}{$\ket{\Psi^{+}}$} & \multirow{2}{*}{00} & \multirow{2}{*}{$I$} & $I$ & $\ket{\Psi^{+}}$ \\ \cline{4-5} & & & $\sigma_x$ & $\ket{\Phi^{+}}$ \\ \cline{2-5} & \multirow{2}{*}{01} & \multirow{2}{*}{$\sigma_x$} & $I$ & $\ket{\Phi^{+}}$ \\ \cline{4-5} & & & $\sigma_x$ & $\ket{\Psi^{+}}$ \\ \cline{2-5} & \multirow{2}{*}{10} & \multirow{2}{*}{$i\sigma_y$} & $I$ & $\ket{\Phi^{-}}$ \\ \cline{4-5} & & & $\sigma_x$ & $\ket{\Psi^{-}}$ \\ \cline{2-5} & \multirow{2}{*}{11} & \multirow{2}{*}{$\sigma_z$} & $I$ & $\ket{\Psi^{-}}$ \\ \cline{4-5} & & & $\sigma_x$ & $\ket{\Phi^{-}}$ \\ \hline \multirow{8}{*}{$\ket{\Psi^{-}}$} & \multirow{2}{*}{00} & \multirow{2}{*}{$I$} & $I$ & $\ket{\Psi^{-}}$ \\ \cline{4-5} & & & $\sigma_x$ & $\ket{\Phi^{-}}$ \\ \cline{2-5} & \multirow{2}{*}{01} & \multirow{2}{*}{$\sigma_x$} & $I$ & $\ket{\Phi^{-}}$ \\ \cline{4-5} & & & $\sigma_x$ & $\ket{\Psi^{-}}$ \\ \cline{2-5} & \multirow{2}{*}{10} & \multirow{2}{*}{$i\sigma_y$} & $I$ & $\ket{\Phi^{+}}$ \\ \cline{4-5} & & & $\sigma_x$ & $\ket{\Psi^{+}}$ \\ \cline{2-5} & \multirow{2}{*}{11} & \multirow{2}{*}{$\sigma_z$} & $I$ & $\ket{\Psi^{+}}$ \\ \cline{4-5} & & & $\sigma_x$ & $\ket{\Phi^{+}}$ \\ \hline \end{tabular} } \label{mod_table} \end{table} \section{Conclusion}\label{conclusion} In this paper, we have analyzed Niu et al.'s MDI quantum communication protocols and observed some security issues in both the protocols. We have shown that these protocols are not secure against information leakage, and one bit among two bits of information is always leaked without any active attack. Then we have proposed a modification of these protocols, which are secure against such information leakage problem. We also characterize the set of Pauli operators, which can alternatively be used to bypass the security flaws. 
\section*{Authors' note} After we submitted the present work to arXiv.org (arXiv:2006.05263v1), the authors of Ref.~\cite{niu2018measurement} independently corrected the flaw in Ref.~\cite{niu2020security} by replacing the cover operation set $\{I, \sigma_z\}$ with $\{I, \sigma_x, \sigma_y, \sigma_z\}$. They also simplified the protocol by preparing all the EPR pairs in the state $\ket{\Psi^{-}}$. In addition to proposing a correction, however, the present work also discusses and analyzes the underlying information leakage problem. \bibliographystyle{unsrt}
\section{Introduction} Magnetic reconnection\cite{PF00, *birn:2007:book, *zweibel:2009, *yamada:2010} frequently occurs at and around magnetic null points: locations where the magnetic field strength equals zero.\cite{cowley:1973, fukao:1975, greene:1988, lau:1990, parnell:1996} Magnetospheric null points have been identified using multipoint \emph{in situ} measurements as the nulls pass through the spacecraft constellation.\cite{xiao:2006, *xiao:2007, *he:2008, *wendel:2013, *rguo:2013, *fu:2015, *olshevsky:2015} Null points in the solar atmosphere have been identified through extrapolation of the photospheric magnetic field and morphology in coronal emission.\cite{filippov:1999, *aulanier:2000, *zhao:2008, *longcope:2009, *freed:2015, *edwards:2015, *masson:2009, *demoulin:1994, *barnes:2007, *Close:2004, *Regnier:2008} Numerical simulations of magnetic reconnection and plasma turbulence at low guide fields frequently show the formation and evolution of null points,\cite{servidio:2009, *servidio:2010} as do numerical experiments of typical solar events such as flux emergence.\cite{Maclean:2009, *Parnell:2010B} Two-dimensional, non-degenerate magnetic null points are classified as X-type or O-type depending on the local magnetic field structure. If we define \M\ as the Jacobian matrix of the magnetic field at the null point, then a null point will be X-type if $\det\M<0$, O-type if $\det\M>0$, and degenerate if $\det\M=0$\@. Magnetic reconnection in two dimensions can only occur at null points.\cite[e.g.,][]{priest:2003, pontin:2011:review} In three dimensions, the structure of non-degenerate magnetic null points is significantly more complex.\cite{cowley:1973, fukao:1975, greene:1988, lau:1990, parnell:1996} Null lines and null planes are structurally unstable and unlikely to exist in real systems.\cite[e.g.,][]{greene:1988, hornig:1996} The magnetic field structure around a linear three-dimensional null point includes separatrix surfaces (or fans) of infinitely many field lines that originate (or terminate) at the null, and two spine field lines that end (or begin) at the null. A negative (or type A) null point has separatrix surface field lines heading inward toward the null point with spine field lines heading outward from the null point. In contrast, a positive (or type B) null point has separatrix surface field lines heading outward away from the null point and spine field lines heading inward toward the null point. Separators (also known as X-lines by some in the magnetospheric community) are magnetic field lines that connect two nulls. Separators that include a spine field line are not structurally stable, so separators in real systems will almost always be given by the intersection of two separatrix surfaces. Null points, separatrix surfaces, spines, and separators are the topological boundaries that divide the magnetic field into distinct domains and are therefore preferred locations for magnetic reconnection.\cite{longcope:2005, haynes:2010, Parnell:2010A, Parnell:2010B} Three-dimensional magnetic reconnection can also occur without nulls,\cite{Hesse:1988, Schindler:1988, priest:1992B, *aulanier:2006, *janvier:2013:III, pontin:2011:review} especially in regions such as quasi-separatrix layers where the magnetic connectivity changes quickly. Motion of magnetic null points and reconnection regions occurs during any realistic occurrence of magnetic reconnection. 
In Earth's magnetosphere, X-line retreat has been observed in the magnetotail\cite{forbes:1981, *hasegawa:2008, *oka:2011, *xcao:2012} and poleward of the cusp.\cite{wilder:2014} At the dayside magnetopause \cite{swisdak:2003, *phan:2013} and in tokamaks,\cite{rogers:1995, *beidler:2011} the combination of a plasma-pressure gradient and a guide field leads to diamagnetic drifting of the reconnection site that can suppress reconnection. Laboratory experiments frequently show reconnection site motion and asymmetry, often due to geometry or the Hall effect.\cite{inomoto:counter, *yoo:2014, murphy:mrx, lukin:2011} During solar flares, the reconnection site often rises with time as the flare loops grow and can also show transverse motions.\cite[e.g.,][]{forbes:1996, *savage:2010} Theoretical models of magnetic reconnection often assume symmetry such that each magnetic null coincides with a flow stagnation point in the reference frame of the system. When asymmetry is introduced, there is in general a separation between these two points,\cite{cassak:asym, *cassak:hall, *cassak:dissipation, murphy:mrx, murphy:asym, murphy:retreat, oka:2008, murphy:partialasym} and in some cases a stagnation point might not even exist near a null point.\cite{murphy:double} In all of these situations, there will generally be plasma flow across the magnetic null and the null will change position. Interestingly, the velocity of a null point will generally not equal the plasma flow velocity at the null point.\cite{oka:2008, murphy:retreat, murphy:partialasym, murphy:double} This effect is similar to the flow-through mode of reconnection.\cite{siscoe:2002, *maynard:2012} During asymmetric magnetic reconnection in partially ionized plasmas, there may exist neutral flow through the current sheet from the weak magnetic field (high neutral pressure) side to the strong magnetic field (low neutral pressure) side due to the neutral pressure gradient.\cite{murphy:partialasym} In previous work,\cite{murphy:retreat} we derived an exact expression for the motion of an X-line when its location is constrained to one dimension by symmetry. In resistive magnetohydrodynamics (MHD), X-line motion results from a combination of advection by the bulk plasma flow and resistive diffusion of the normal component of the magnetic field. In this work, we present exact expressions for the motion of linear null points in three dimensions and discuss the typical properties of the bifurcations of degenerate magnetic null points. Section \ref{linear} contains a derivation of the motion of linear null points in a vector field. Section \ref{magnetic} uses the results from Section \ref{linear} to describe the motion of magnetic null points. Section \ref{bifurcation} considers the local bifurcation properties of magnetic null points and provides three examples. Section \ref{discussion} contains a summary and discussion of this work. \section{Motion of Linear Null Points in an Arbitrary Vector Field\label{linear}} We define $\xn(t)$ as the time-dependent position of an isolated null point in a vector field $\B(\mathbf{x},t)$\@. We define $\mathbf{B}_n(\xn(t),t)$ as the value of the vector field at the null; while $\B_n\equiv 0$ for all time, $\dbdtxnA \equiv \dbdtxnB \neq 0$ when the null point is moving. We define \U\ to be the velocity of this null, \begin{equation} \U \equiv \frac{\dif \xn}{\dif t}. 
\label{udef} \end{equation} The local structure of a non-degenerate null point can be found by taking a Taylor expansion and keeping the linear terms.\cite{cowley:1973, fukao:1975, greene:1988, lau:1990, parnell:1996} The linear structure is then given by \begin{equation} \B = \M\cdot\r, \label{BMr} \end{equation} where $\r\equiv\x-\xn$. The elements of the Jacobian matrix \M\ evaluated at the null are given by \begin{equation} M_{ij} = \partial_j B_i, \label{jacobiandef} \end{equation} where $i$ is the row index and $j$ is the column index. The trace of \M\ equals zero when $\nabla\cdot\B=0$, and $\M=(\nabla\B)^T$. Next we take the derivative following the motion of the null, \begin{equation} \dbdtxn + \left. \U \cdot \nabla \B \right|_{\xn} = 0. \label{convectiveder} \end{equation} This expression gives the total derivative of the magnetic field at the null point using the null's velocity in an arbitrary reference frame. This derivative equals zero because the magnetic field at the null by definition does not deviate from zero as we are following it. By solving for \U\ in Eq.\ \ref{convectiveder}, we arrive at the most general expression for the velocity of the null point\cite{ [][{. A similar expression for null point motion is given by Eq.\ 15 of this reference; however, this expression contains a sign error.}]greene:1993, [][{. See Eq.\ 79.}]lindeberg:1994, klein:2007} \begin{equation} \U = - \Minv\, \dbdtxn , \label{Ubasic} \end{equation} which is valid for vector fields of arbitrary dimension. This derivation provides an exact result as long as \M\ is non-singular. An alternate derivation for Eq.\ \ref{Ubasic} starts from the first order Taylor series expansion of \B\ with respect to time and space about a magnetic null point, \begin{equation} \B \left( \delta \x, \delta t \right) = \M \cdot \delta \x + \dbdtxn \delta t + \mathcal{O}(\|\delta\x\|^2, \delta t^2) . \label{timedep} \end{equation} This first order expansion is valid in the limit of small $\delta t$ and $\left|\delta\x\right|$. We define $\delta \xn$ as the position of the null point at $\delta t$. Setting $\B \left( \delta \xn, \delta t \right)=0$ provides a unique solution for $\U \equiv \delta \xn / \delta t$, and we again arrive at Eq.\ \ref{Ubasic}. Unlike the previous paragraph, this derivation uses the linearization approximation. Eq.\ \ref{Ubasic} may also be derived from the implicit function theorem. Equation \ref{Ubasic} shows that a null point will move along the path for which \B\ and \dbdtxn\ are oppositely directed. The null point will move faster if the vector field is changing quickly in time or varying slowly in space along this path. This exact result for \U\ can be applied to find the velocity of linear null points in any time-varying vector field with continuous first derivatives in time and space about the null point. A unique velocity \U\ exists as long as \M\ is non-singular. If \M\ is non-singular, then there exists exactly one radial path away from the null for which the vector field is pointed in a particular direction. \section{Motion of Magnetic Null Points\label{magnetic}} We next consider the case where \B\ is a magnetic field rather than just any vector field. The derivation of Eq.\ \ref{Ubasic} does not invoke any of Maxwell's equations. We now introduce Faraday's law, \begin{equation} \dbdt = - \nabla \times \E, \label{faraday} \end{equation} where \E\ is the electric field. 
By combining Eqs.\ \ref{Ubasic} and \ref{faraday}, we arrive at the relation \begin{equation} \U = \Minv \, (\nabla \times \E), \label{Ufaraday} \end{equation} which additionally requires continuous first derivatives of the electric field in space about the null point. This expression does not depend on any particular Ohm's law, and indeed can be applied in situations where there is no Ohm's law. Next we consider the resistive MHD Ohm's law, \begin{equation} \E + \V \times \B = \eta \J, \label{ohms_rmhd} \end{equation} where \V\ is the plasma flow velocity and \J\ is the current density. The resistivity $\eta$ is assumed to be uniform for simplicity. Eq.\ \ref{Ufaraday} then becomes \begin{equation} \U = \V - \eta \Minv \nabla^2 \B, \label{Urmhd} \end{equation} where all quantities on the right hand side are evaluated at the magnetic null. This expression requires that \B\ has continuous first derivatives in time and continuous second derivatives in space about the null point. Null point motion in resistive MHD results from a combination of advection by the bulk plasma flow and resistive diffusion of the magnetic field. Even in the absence of flow, null points may still move in resistive situations. The plasma flow velocity \emph{at} the null point does not equal the velocity of the null point itself.\cite{murphy:retreat} A schematic showing null point motion due to resistive diffusion is presented in Fig.\ \ref{mechanism}. \begin{figure} \includegraphics[width=8.5cm]{fig1.png} \caption{A two-dimensional example showing the motion of an X-type null point dominated by resistive diffusion of $B_z$ along the $z$ direction. Above and below the null, $B_z<0$. The negative $B_z$ diffuses along the $z$ direction into the immediate vicinity of the null point. At a slightly later time, the magnetic field at the current position of the null point will have $B_z<0$. The negative $B_z$ diffusion cancels out positive $B_z$ to the right of the null point, so the resulting null point motion is to the right. Reproduced with permission from Ref.\ \onlinecite{murphy:retreat}. Copyright 2010 American Institute of Physics.} \label{mechanism} \end{figure} Equation \ref{Ufaraday} can also be evaluated using an Ohm's law containing additional terms. For example, we can choose our Ohm's law to be \begin{equation} \E + \Vi \times \B = \eta \J + \frac{\J\times\B}{en_e} - \frac{\nabla p_e}{en_e} \label{hallohms}, \end{equation} where $\V_i$ is the bulk ion velocity, $n_e$ is the electron density, $e$ is the elementary charge, and $p_e$ is a scalar electron pressure. For $\J=en_e\left(\Vi - \Ve\right)$, Eq.\ \ref{Ufaraday} becomes \begin{equation} \U = \Ve - \eta\,\Minv\,\nabla^2\B + \Minv\left(\frac{\nabla n_e \times \nabla p_e}{n_e^2 e}\right) \label{Utwofl} \end{equation} where quantities are again evaluated at the null point. The first term on the right hand side corresponds to the magnetic field being carried with the electron flow velocity, \Ve, rather than the bulk plasma flow, the second term corresponds to the resistive diffusion of the magnetic field at the null, and the third term corresponds to the Biermann battery. \section{The appearance and disappearance of magnetic null points\label{bifurcation}} We next consider the emergence and disappearance of magnetic null points, with an emphasis on the instantaneous velocity of separation or convergence of the bifurcating null-null pair. 
The local approach taken here complements global bifurcation studies.\cite{Priest:1996A, *DBrown:1999B, *DBrown:2001} Thus far we have only considered non-degenerate null points for which the local magnetic field can be described by Eq.\ \ref{BMr} using only the linear terms in the Taylor series expansion. As long as \M\ is non-singular at the null, then there exists a unique velocity corresponding to the motion of that null point. Non-degenerate null points are therefore structurally stable and cannot disappear unless \M\ becomes singular.\cite{greene:1993} In contrast, degenerate null points are structurally unstable and generally exist instantaneously as a transition between different topological states.\cite{greene:1988, hornig:1996} Null points must appear or disappear in oppositely signed pairs during a bifurcation because the overall topological degree of the region cannot change unless a null point enters or leaves the domain across a boundary.\cite{deimling:1985} In most situations of physical interest, degenerate three-dimensional magnetic null points will have $\mathop{\mathrm{rank}}\M=2$ and $\mathop{\mathrm{nullity}}\M=1$ (e.g., Ref.\ \onlinecite{greene:1988}). The null space (or kernel) of \M\ will then be one-dimensional and corresponds to the eigenvector of \M\ with eigenvalue zero. The three eigenvalues must sum to zero because of the divergence constraint,\cite{fukao:1975} which implies that the two non-zero eigenvalues must either be both real and of opposite sign, or both complex and of opposite sign.\cite{parnell:1996} Although the linear representation in Eq.\ \ref{BMr} can describe the magnetic structure surrounding a degenerate null point (e.g., Ref.~\onlinecite{parnell:1996}) the region around a bifurcating null-null pair requires higher-order terms. Third-order terms need to be considered only when the first- and second-order derivatives both vanish at the null, so usually a second-order expansion will suffice. The Taylor series expansion of the magnetic field about a three-dimensional null point to second-order in time and space is \begin{equation} \begin{split} \B(\delta\x,\delta t) = \M\cdot\delta\x + \dbdtxn \delta t + \frac{1}{2} \left[ \begin{array}{c} \delta\x^T \, \Hx \, \delta\x \\ \delta\x^T \, \Hy \, \delta\x \\ \delta\x^T \, \Hz \, \delta\x \end{array} \right] \hspace{12mm} \\ \hspace{4mm} + \frac{\delta t}{2} \left[ \begin{array}{c} (\delta\x\cdot\nabla ) \partial_t B_x \\ (\delta\x\cdot\nabla ) \partial_t B_y \\ (\delta\x\cdot\nabla ) \partial_t B_z \\ \end{array} \right] + \frac{\delta t^2}{2} \frac{\partial^2 \B_n}{\partial t^2} + \mathcal{O}(\|\delta\x\|^3,\delta t^3), \label{hessiantensor} \end{split} \end{equation} where the Jacobian matrix, Hessian matrices, and derivatives are evaluated at the null point. The elements of the Hessian matrices are given by $\H_{k,ij} = \partial_i \partial_j B_k$ for $i,j,k \in \{x,y,z\}$. If the magnetic field is locally continuous, then the partial derivative operators will be commutative and the Hessian matrices will be symmetric. When $\dot{\B}$ is constant in time and space, then the fourth and fifth terms on the right hand side of Eq.\ \ref{hessiantensor} vanish. The positions of the null points for a given $\delta t$ may be found by setting $\B(\delta\x_n,\delta t) =0$ in Eq.~\ref{hessiantensor} (or the full expression of the magnetic field) and then solving for $\delta \x_n$. 
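As a concrete numerical illustration of this procedure (and of Eq.~\ref{Ubasic}), consider the sketch below, which assumes NumPy and SciPy and uses a hypothetical divergence-free toy field of our own choosing, $\B=(x^2-t,\,-xy,\,-xz)$; the nulls are located by root finding on $\B=0$, and their tracked motion is compared with $\U=-\Minv\,\dbdtxn$:
\begin{verbatim}
# Sketch: locate the nulls of the toy solenoidal field B = (x^2 - t, -x y, -x z)
# by solving B = 0, and compare their tracked motion with U = -M^(-1) dB/dt.
import numpy as np
from scipy.optimize import fsolve

def B(x, t):
    return np.array([x[0]**2 - t, -x[0] * x[1], -x[0] * x[2]])

def M(x):                       # Jacobian M_ij = dB_i/dx_j
    return np.array([[2 * x[0], 0, 0],
                     [-x[1], -x[0], 0],
                     [-x[2], 0, -x[0]]])

t, dt = 0.25, 1e-5
dBdt = np.array([-1.0, 0.0, 0.0])                  # dB/dt at fixed position
for guess in ([0.6, 0.0, 0.0], [-0.6, 0.0, 0.0]):  # the pair born at t = 0
    xn  = fsolve(lambda x: B(x, t), guess)
    xn2 = fsolve(lambda x: B(x, t + dt), xn)
    print(xn, (xn2 - xn) / dt, -np.linalg.solve(M(xn), dBdt))
\end{verbatim}
Both estimates give $U_x=\pm1/(2\sqrt{t})$ for the nulls at $x=\pm\sqrt{t}$; for this toy field the speed of separation grows without bound as $t\to0^{+}$, where the pair merges into a degenerate null at the origin.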
The instantaneous velocity of convergence or separation of a bifurcating null-null pair may be infinite, finite, or zero (see Section \ref{bifex} for examples). Suppose that there exists a degenerate three-dimensional null point with $\mathop{\mathrm{rank}}{\M}=2$ and that \M\ has three linearly independent eigenvectors. The first-order directional derivatives of each component of \B\ are zero along the one-dimensional null space of \M\@. Under most realistic circumstances, the second-order directional derivative of \B\ along the null space of \M\ will be nonzero; consequently, except in special circumstances, the component of the bifurcating null-null pair's velocity along the null space of \M\ will be instantaneously infinite. Next, consider the two-dimensional subspace that is orthogonal to the null space of \M\@. The restriction of \M\ to this subspace will be invertible at the null point. Consequently, there exists a unique finite velocity within this subspace, as for a non-degenerate two-dimensional null point. Thus, in general, the instantaneous component of velocity along the null space of a bifurcating null-null pair will be infinite while the components of velocity orthogonal to the null space will be finite or zero. Next we consider separators that may exist and connect a bifurcating null-null pair. Because these bifurcations cannot change the topological degree of the system, the null-null pair will include one negative null and one positive null.\cite{deimling:1985, greene:1988} Define $\ensuremath{\Sigma_{\Aorminus}}$ and $\ensuremath{\Sigma_{\Borplus}}$ as the separatrix surfaces and $\ensuremath{\gamma_{\Aorminus}}$ and $\ensuremath{\gamma_{\Borplus}}$ as the spine field lines of the negative and positive null points, respectively. The field lines in $\ensuremath{\Sigma_{\Aorminus}}$ and $\ensuremath{\gamma_{\Borplus}}$ approach the null, while the field lines in $\ensuremath{\Sigma_{\Borplus}}$ and $\ensuremath{\gamma_{\Aorminus}}$ recede from the null. In the neighborhood of a linear null, the separatrix surface is given by the plane spanned by the two eigenvectors associated with eigenvalues that have the same sign for their real part, and the two spine field lines are along the remaining eigenvector.\cite{parnell:1996} Separators that exist in real systems will almost always be given by the intersection of two separatrix surfaces.\cite{greene:1988, haynes:2010} Spine-spine separators may exist if \ensuremath{\gamma_{\Aorminus}}\ and \ensuremath{\gamma_{\Borplus}}\ include the same field line. Though spine-spine separators may occur in some symmetric systems, they are not structurally stable and thus can generally be ignored.\cite{haynes:2010} As explained above, during the bifurcation of a degenerate null point a positive and negative null are formed; hence, spine-fan separators can never connect a bifurcating null-null pair because such separators connect either two positive or two negative nulls. Additionally, not all bifurcating null-null pairs will be connected by a separator. In most realistic situations, there will not exist a straight line separator connecting a bifurcating null-null pair, as one might intuitively expect (see also Refs.\ \onlinecite{greene:1988} and \onlinecite{lau:1990}). Typically, there will be a finite angle between the separatrix surfaces of the two null points at times surrounding the bifurcation.
Equivalently, each pair of eigenvectors associated with the separatrix surface of each null will usually be changing in time, and, in general, this evolution will be different for each separatrix surface. A straight line separator may only be created under special circumstances, such as when certain symmetries are present. For example, a straight line separator will occur if the two nulls from the bifurcating null-null pair are both improper nulls (not spiral) and they both share the same fan eigenvector, which is parallel to the direction of motion of the bifurcating nulls. \subsection{Bifurcation Examples\label{bifex}} \newcommand\nulldet{\ensuremath{a^2+bc}} \newcommand\sqrtnulldet{\ensuremath{\sqrt{\nulldet}}} \newcommand\eone{{\ensuremath{\mathbf{e}_1}}} \newcommand\etwo{{\ensuremath{\mathbf{e}_2}}} Let us consider a prototypical null point bifurcation of the form \begin{equation} \B\left(\x,t\right) = \begin{bmatrix} (a-z)x + by \\ cx - (a+z)y \\ z^2 \end{bmatrix} + \delta\B(\x,t) , \label{bifurcationexample} \end{equation} where $a$, $b$, and $c$ are arbitrary real constants with $\nulldet \ne 0$. We assume that $\delta\B(\x,0)=0$ so that there exists a degenerate null point at the origin with $\mathop{\mathrm{rank}}\M=2$ at $t=0$. The null space of \M\ at the degenerate null point is spanned by $\zhat$, which is the eigenvector of \M\ corresponding to eigenvalue zero. The remaining eigenvectors of the degenerate null point are in the $x$-$y$ plane and given by \begin{equation} \eone \equiv \begin{bmatrix} -b \\ a+\sqrtnulldet \\ 0 \end{bmatrix} \mbox{and } \etwo \equiv \begin{bmatrix} -b \\ a-\sqrtnulldet \\ 0 \end{bmatrix} , \end{equation} which correspond to eigenvalues $-\sqrtnulldet$ and $\sqrtnulldet$, respectively. We only consider time and space close to the bifurcation such that $\left|\delta B_z\right| < \left|\nulldet\right|$. These examples will elucidate many of the properties of null point bifurcations discussed earlier in this section. \subsubsection{First bifurcation example\label{bifexone}} Suppose that $\delta\B(t)=-\zhat\mathop{\mathrm{sgn}}(t)\left|t\right|^\alpha$ in Eq.\ \ref{bifurcationexample}, with $\alpha>1$. For $t>0$, the third component of \B\ reduces to $z^2-t^\alpha$ and two null points exist, but when $t<0$, there are no null points. At $t=0$, a single second-order null point appears at the origin as the system undergoes a saddle-node bifurcation. For $t>0$, the two null points are at $\xn = \left[0,0,\pm t^{\alpha/2}\right]^\top$. The null point with $z_n>0$ will be a positive null if $a^2+bc>0$ and a negative null otherwise. The null points have velocities of $\dotxn = \left[0,0,\pm \frac{\alpha}{2}t^{\alpha/2-1} \right]^\top$. When $1<\alpha<2$, the velocity of separation diverges to infinity as $t\rightarrow 0^+$. For the critical case when $\alpha=2$, the null-point velocities are constant: $\dotxn = \left[0,0,\pm 1\right]^\top$. When $\alpha>2$, the null-point velocities asymptotically approach zero as $t\rightarrow 0^+$. Since the null point velocities are purely in the $\zhat$ direction and this is also the direction of an eigenvector of the fan planes of each of the nulls, the resulting separator is a straight line along $x=y=0$ for which $B_z<0$. Finely tuned examples such as this one are unlikely to occur in nature, but show that instantaneous velocities that are infinite, finite, or zero are mathematically allowable during the bifurcation of a degenerate null point.
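The three velocity regimes can be confirmed with a few lines of computer algebra (a sketch for illustration only; the representative values of $\alpha$ are arbitrary choices within each regime):
\begin{verbatim}
import sympy as sp

t = sp.symbols('t', positive=True)
# One representative alpha per regime: 1 < alpha < 2, alpha = 2, alpha > 2
for alpha in (sp.Rational(3, 2), sp.Integer(2), sp.Integer(3)):
    zn = t**(alpha/2)                      # upper null of the pair
    v = sp.diff(zn, t)                     # its velocity along z
    print(alpha, sp.limit(v, t, 0, '+'))   # -> oo, 1, 0
\end{verbatim}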
\subsubsection{Second bifurcation example\label{bifextwo}} Suppose that $\delta\B(t)= \dot{\B}\hspace{0.1mm}t$ in Eq.\ \ref{bifurcationexample}, where $\dot{B}_x$, $\dot{B}_y$, and $\dot{B}_z$ are real constants with $\dot{B}_z<0$. By setting $\B(\xn,t)=0$, we arrive at this expression for the null point positions for $t\geq 0$, \begin{equation} \mathbf{x}_{n}(t) = \begin{bmatrix} \dfrac{-a \ensuremath{\dot{B}_x} t - b \ensuremath{\dot{B}_y} t \mp \ensuremath{\dot{B}_x}\sqrt{-\ensuremath{\dot{B}_z}}t^{3/2}}{\nulldet + \ensuremath{\dot{B}_z} t} \vspace{1.6mm}\\ \dfrac{-c\ensuremath{\dot{B}_x} t + a\ensuremath{\dot{B}_y} t \mp \ensuremath{\dot{B}_y} \sqrt{-\ensuremath{\dot{B}_z}}t^{3/2}}{\nulldet + \ensuremath{\dot{B}_z} t} \vspace{1.6mm}\\ \pm \sqrt{-\ensuremath{\dot{B}_z} t} \end{bmatrix}. \label{nullposbif} \end{equation} The instantaneous velocities of the bifurcating null points at $t=0$ are given by \begin{equation} \lim_{t\rightarrow 0^+} \dot{\x}_{n}(t) = \begin{bmatrix} \dfrac{- a\dot{B}_x - b\dot{B}_y}{a^2+bc} \vspace{1.6mm}\\ \dfrac{- c\dot{B}_x + a\dot{B}_y}{a^2+bc} \vspace{1.6mm}\\ \pm \infty \end{bmatrix}. \end{equation} The velocity in the $x$-$y$ plane will be finite except under the special circumstance when $\ensuremath{\dot{B}_x}=\ensuremath{\dot{B}_y}=0$ in which case the velocity in the $x$-$y$ plane will be zero. The instantaneous component of velocity along the null space of \M\ is infinite. \begin{table*} \caption{\label{eigentable}The eigenvectors, eigenvalues, and direction normal to the fan for the null points in the second bifurcation example.} \begin{ruledtabular} \begin{tabular}{lcccc} & \multicolumn{2}{c}{Case 1: $\nulldet>z_n^2>0$} & \multicolumn{2}{c}{Case 2: $\nulldet<0$} \\ & Pos.\ null ($z_n>0$) & Neg.\ null ($z_n<0$) & Pos.\ null ($z_n<0$) & Neg.\ null ($z_n>0$) \\ \hline Fan eigenvector, eigenvalue & \zhat, $2|z_n|$ & \zhat, $-2|z_n|$ & \eone, $|z_n|+\sqrtnulldet$ & \eone, $-|z_n|-\sqrtnulldet$ \\ Fan eigenvector, eigenvalue & \etwo, $-|z_n|+\sqrtnulldet$ & \eone, $|z_n|-\sqrtnulldet$ & \etwo, $|z_n|-\sqrtnulldet$ & \etwo, $-|z_n|+\sqrtnulldet$\\ Spine eigenvector, eigenvalue & \eone, $-|z_n|-\sqrtnulldet$ & \etwo, $|z_n|+\sqrtnulldet$ & \zhat, $-2|z_n|$ & \zhat, $2|z_n|$\\ Direction normal to fan & $[-a+\sqrtnulldet,-b,0]^\top$ & $[-a-\sqrtnulldet,-b,0]^\top$ & \zhat & \zhat \end{tabular} \end{ruledtabular} \end{table*} \begin{figure} \includegraphics[width=8.5cm]{fig2.png} \caption{ Two improper null points (red and blue spheres) resulting from the bifurcation of a degenerate null point with $a^2+bc>z_n^2>0$. The fan surfaces of the two nulls (denoted by salmon field lines for the positive null at $z_n>0$ and light blue field lines for the negative null at $z_n<0$) intersect to yield a curved separator field line (green). The spine field lines are orange/purple for the positive/negative null. In this example, $(a,b,c)=(2,-1,3)$, $\dot{\mathbf{B}}=[1,-1,-1]^\top$, and $t=0.2$. }\label{realnulls} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{fig3.png} \caption{ Two spiral null points (red and blue spheres) resulting from the bifurcation of a degenerate null point with $a^2+bc<0$. The spine field lines (thick red/dark blue lines for the positive/negative nulls) wrap around each other instead of intersecting. The fan surfaces (salmon/light blue lines for the positive/negative nulls) are parallel to each other and the $x$-$y$ plane so do not intersect. Hence, no separator connects this bifurcating null-null pair. 
In this example, $(a,b,c)=(1,1,-2)$, $\dot{\mathbf{B}}=[1,1,-2]^\top$, and $t=0.1$. }\label{spiralnulls_nosep} \end{figure} The eigenvectors, eigenvalues, and direction normal to the fan for the null points resulting from the bifurcation are shown in Table \ref{eigentable}. The eigenvectors associated with each null are not functions of time for this example, but the eigenvalues are. The structure of the resulting null-null pair depends on the value of \nulldet. When $\nulldet>z_n^2>0$ (Case 1), all eigenvalues are real so the bifurcation results in a positive improper null point with $z_n>0$ and a negative improper null point with $z_n<0$. The separatrix surfaces cannot be parallel in such a case, so a separator exists between the two nulls as the curved intersection of the two separatrix surfaces (see Figure~\ref{realnulls} for a typical example). When $\nulldet<0$ (Case 2), each null has two complex conjugate eigenvalues so the bifurcation results in a positive spiral null point with $z_n<0$ and a negative spiral null point with $z_n>0$. The field lines in the separatrix surfaces of both nulls are parallel to the $x$-$y$ plane, so the separatrix surfaces do not intersect to yield a separator. A spine-spine separator can exist under special conditions (e.g., when $\ensuremath{\dot{B}_x}=\ensuremath{\dot{B}_y}=0$), but under generic conditions no separator will exist to connect these two newly formed null points. Figure~\ref{spiralnulls_nosep} shows an example where the spine field lines of each null twist around each other before approaching the fan of the other null and spiraling away. A separator will exist as a straight line between the two bifurcating null points if and only if $\ensuremath{\dot{B}_x}=\ensuremath{\dot{B}_y}=0$ in both Case 1 and Case 2\@. \subsubsection{Third bifurcation example\label{bifexthree}} In contrast to the first two examples, we now consider a magnetic field perturbation that is a function of both time and space. We define $\delta\B(\x,t) = \dot{\B}(\x)\hspace{0.1mm}t$ in Eq.\ \ref{bifurcationexample}, where \ensuremath{\dot{B}_x}\ and \ensuremath{\dot{B}_y}\ are real constants and $\ensuremath{\dot{B}_z}(y)=3y-2$ is a linear function of $y$. For the particular case shown in Figure~\ref{spiralnulls_sep}, $(a,b,c)=(1,1,-2)$, $\ensuremath{\dot{B}_x}=1$, and $\ensuremath{\dot{B}_y}=0$. A separator is created connecting the bifurcating null-null pair because both the eigenvalues and the eigenvectors of the nulls now evolve in time. The separators formed in this case, between two bifurcating spiral nulls, are typically long and highly spiralled. Solving ${\mathbf{B}}=0$ using the above values in Eq.\ \ref{bifurcationexample} gives the null point locations as a function of $t$, \begin{equation} \mathbf{x}_{n}(t) = \begin{bmatrix} -\frac{1}{2}\left(1\pm\sqrt{2t-3ty_n}\right)y_n \vspace{0.6mm} \\ y_n \vspace{0.6mm} \\ \pm \sqrt{2t-3ty_n} \end{bmatrix}, \label{nullposbif2} \end{equation} where $y_n = \left[\left(1+2t\right)-\sqrt{28t^2+4t+1}\right]/(6t)$. In the limit as $t\rightarrow 0$, we must consider the quadratic equation satisfied by $y_n$, \begin{equation} 3ty_n^2 -(1+2t)y_n -2t = 0, \end{equation} which implies $y_n\rightarrow 0$ (and, hence, $x_n,z_n\rightarrow 0$) as $t\rightarrow 0$. Differentiating this quadratic with respect to $t$ and then taking the limit $t\rightarrow 0$ reveals that $\dot{y}_n=-2$. It can then be shown that in the limit $t\rightarrow0$, the instantaneous velocities of the bifurcating null points are $\dotxn=[1,-2,\pm\infty]^\top$.
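The limiting behavior quoted above is readily verified with computer algebra (a sketch for illustration; it selects the root of the quadratic that remains bounded as $t\rightarrow 0^+$ and differentiates the null coordinates of Eq.~\ref{nullposbif2}):
\begin{verbatim}
import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.symbols('y')

# Quadratic satisfied by y_n in the third bifurcation example
quad = 3*t*y**2 - (1 + 2*t)*y - 2*t
roots = sp.solve(quad, y)
yn = next(r for r in roots if sp.limit(r, t, 0, '+') == 0)   # bounded root

zn = sp.sqrt(2*t - 3*t*yn)              # upper null (+ sign)
xn = -sp.Rational(1, 2)*(1 + zn)*yn     # x_n from Eq. (nullposbif2)

for coord in (xn, yn, zn):
    print(sp.limit(sp.diff(coord, t), t, 0, '+'))   # -> 1, -2, oo
\end{verbatim}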
\begin{figure} \includegraphics[width=8.5cm]{fig4.png} \caption{ As Figure~\ref{spiralnulls_nosep}, but here the time derivative of the $z$-component of the magnetic field, $\dot{B_z}$, is linear in $y$. The addition of a second-order term in space and time means that the fan surfaces (salmon/light blue lines for the positive/negative nulls) tilt towards each other the instant after bifurcation creating a separator which connects this bifurcating null-null pair. In this example, $(a,b,c)=(1,1,-2)$, $t=0.1$, and $\delta{\B}(\x,t) = [t,0,3yt-2t]^\top$. }\label{spiralnulls_sep} \end{figure} \section{Discussion\label{discussion}} In this paper, we derive an exact expression for the motion of linear null points in a vector field and apply this expression to magnetic null points. Resistive diffusion and other effects in the generalized Ohm's law allow for non-ideal flows across magnetic null points. In resistive MHD, null point motion results from a combination of advection by the bulk plasma flow and resistive diffusion of the magnetic field. These results are particularly relevant to studies of null point magnetic reconnection, especially when asymmetries are present. Analytical models of asymmetric reconnection must necessarily satisfy these expressions. Non-ideal flows at null points allow the transfer of plasma across topological boundaries. Just as we must be careful when describing the motion of magnetic field lines,\cite{newcomb:1958, *stern:1966, *vasyliunas:1972} we must also be careful when describing the motion of magnetic null points. Null points are not objects. A null point is not permanently affixed to a parcel of plasma except in ideal or certain perfectly symmetric cases. Null points cannot be pushed directly by plasma pressure gradients or other forces on the plasma, but there will generally be indirect coupling between the momentum equation and Faraday's law that contributes to null-point motion. The motion of a null point is determined intrinsically by local quantities evaluated at the null point. However, global dynamics help set the local conditions that determine null-point motion. In addition to providing insight into the physics of non-ideal flows at magnetic null points and constraining models of asymmetric reconnection, the expressions for null-point motion have several practical applications. Locating nulls of vector fields in three dimensions is non-trivial,\cite{greene:1992, *haynes:2007:trilinear} but if the null-point positions are found for one time, then these expressions provide a method for estimating the positions of null points at future times. When there exists a cluster of several null points, these expressions provide a method for identifying which null points correspond to each other at different times. A practical limitation is that these expressions will often require evaluating derivatives of noisy or numerical data (cf.\ Ref.\ \onlinecite{cullum:1971, *chartrand:2011}). However, these expressions provide a test of numerical convergence and can be used to estimate the effective numerical resistivity in simulations of null-point reconnection (compare to Ref.\ \onlinecite{edmondson:2010a, *shen:2011}). Linear magnetic null points appear and disappear in pairs associated with the bifurcation of a degenerate magnetic null point. The null space of \M\ in these degenerate nulls will typically be one-dimensional. 
Second or higher order terms in the Taylor series expansion are necessary to describe the structure of a degenerate null point and the region between a bifurcating null-null pair. Except in special circumstances, the instantaneous velocity of convergence or separation of a null-null pair will typically be infinite along the null space of \M\ but with finite components of velocity in the orthogonal directions. This means that null-null pairs that have just appeared or are just about to disappear will not lie next to one another, but will always have a finite separation no matter how small a time step between frames is taken and regardless of whether the field is known numerically on a grid or analytically everywhere within a domain. Just before or after a bifurcation, a straight line separator connecting the null-null pair will generally not exist. Furthermore, a separator, curved or straight, generic or non-generic, will not necessarily connect a null-null pair just before or after bifurcation if the nulls involved are of spiral type and their separatrix surfaces are parallel. The structures of second-order nulls and separators that exist very near a bifurcation remain important problems for future work. In resistive MHD, null points must resistively diffuse in and out of existence. In the reference frame of the moving plasma, a necessary condition for a degenerate null point to form is that the resistive term in the induction equation, $\eta\nabla^2\B$, be antiparallel to the magnetic field at the location of the impending degenerate null. This places physics-based geometric constraints on when and where bifurcations are allowed to happen. We may consider whether or not a similar local analysis can be performed to describe the motion of separators. Consider a separator that connects two magnetic null points. Suppose that a segment of this separator exhibits non-ideal evolution. Along the remainder of its length, the magnetic field in the vicinity of the separator evolves ideally. At a slightly later time, the field line that was the separator will, in general, not continue to be the separator between these two null points despite the locally ideal evolution. The motion of separators therefore cannot be described using solely local parameters. However, it may be possible to derive an expression for the motion of a separator by taking into account plasma flow and connectivity changes along its entire length. Such an approach would provide insight into the structural stability of separators and separator bifurcations,\cite{DBrown:1999A} as well as the nature of plasma flows across topological boundaries. This latter aspect is fundamental to the basic physics of three-dimensional reconnection; indeed, an early definition\cite{vasyliunas:1975} states that reconnection is ``the process whereby plasma flows across a surface that separates regions containing topologically different magnetic field lines'' (see also Ref.\ \onlinecite{Schindler:1988}). We are investigating the problem of separator motion in two and three dimensions in ongoing work. There exist numerous additional opportunities for future work. Our results take a local approach; consequently, numerical simulations are needed to investigate the interplay between local and global scales during null-point motion and bifurcations. Numerical simulations can be used to investigate how null points diffuse in and out of existence in non-ideal plasmas and how separators behave during bifurcations. 
If the flow field and magnetic field are well diagnosed in space or laboratory plasmas, the expressions for null-point motion may be used to provide constraints on magnetic field dissipation. Equation \ref{Urmhd} offers another opportunity to measure the plasma resistivity in the collisional limit. Dedicated laboratory experiments offer an opportunity to investigate plasma flow across null points (and other topological boundaries) as well as null point and separator bifurcations. Finally, many results exist in the literature outside of plasma physics on bifurcations of vector fields and topology-based visualization of vector fields. While communication across disciplines is hindered by differences in terminology,\footnote{Null points are also known as neutral points, fixed points, stationary points, equilibrium points, critical points, singular points, and singularities. Separators are also known as saddle connectors and separation/attachment lines. Separatrix surfaces are also known as fans and separation surfaces.} the application of this external knowledge to plasma physics will likely lead to improved physics-based understanding of these processes. \begin{acknowledgments} The authors thank A.\ Aggarwal, A.\ Bhattacharjee, A.\ Boozer, P.\ Cassak, J.\ Dorelli, T.\ Forbes, L.\ Guo, Y.-M.\ Huang, J.\ Lin, V.\ Lukin, M.\ Oka, E.\ Priest, J.\ Raymond, K.\ Reeves, C.\ Shen, V.\ Titov, D.\ Wendel, and E.\ Zweibel for helpful discussions. The authors thank A.\ Wilmot-Smith and D.\ Pontin for a discussion that helped elucidate why the motion of separators cannot be described locally. N.A.M.\ acknowledges support from NASA grants {{NNX11AB61G}}, {{NNX12AB25G}}, and {{NNX15AF43G}}; NASA contract {{NNM07AB07C}}; and NSF SHINE grants {{AGS-1156076}} and {{AGS-1358342}} to SAO\@. C.E.P.\ acknowledges support from the St Andrews 2013 STFC Consolidated grant. This research has made use of NASA's Astrophysics Data System Bibliographic Services. The authors thank the journal for considering a manuscript containing only null results. \end{acknowledgments}
\section{String homology in degree zero}\label{sec:string} In this section, we introduce the degree $0$ string homology $H_0^{\rm string}(K)$. The discussion of string homology here is only a first approximation to the more precise approach in Section~\ref{sec:string-ref}, but is much less technical and suffices for the comparison to the cord algebra. We then give several formulations of the cord algebra $\operatorname{Cord}(K)$ and use these to prove that $H_0^{\rm string}(K) \cong \operatorname{Cord}(K)$ and that string homology detects the unknot. Throughout this section, $K$ denotes an oriented framed knot in some oriented $3$-manifold $Q$. \subsection{A string topology construction}\label{ss:string0} Here we define $H_0^{\rm string}(K)$ for an oriented knot $K \subset Q$. Let $N$ be a tubular neighborhood of $K$. For this definition we do not need a framing for the knot $K$; later, when we identify $H_0^{\rm string}(K)$ with the cord algebra, it will be convenient to fix a framing, which will in turn fix an identification of $N$ with $S^1 \times D^2$. Any tangent vector $v$ to $Q$ at a point on $K$ has a tangential component parallel to $K$ and a normal component lying in the disk fiber; write $v^{\text{normal}}$ for the normal component of $v$. Fix a base point $x_0\in\partial N$ and a unit tangent vector $v_0\in T_{x_0}\partial N$. \begin{figure} \labellist \small\hair 2pt \pinlabel ${\color{blue} x_0}$ at 82 -6 \pinlabel ${\color{blue} v_0}$ at 127 2 \pinlabel ${\color{blue} s_1}$ at 120 19 \pinlabel ${\color{red} s_2}$ at 161 95 \pinlabel ${\color{blue} s_3}$ at 136 144 \pinlabel ${\color{red} s_4}$ at 88 111 \pinlabel ${\color{blue} s_5}$ at 46 13 \pinlabel $K$ at 222 27 \endlabellist \centering \includegraphics[width=0.7\textwidth]{figures/bcstringrev} \caption{A broken closed string with $4$ switches. Here, as in subsequent figures, we draw the knot $K$ in black, $Q$-strings ($s_2,s_4$) in red, and $N$-strings ($s_1,s_3,s_5$) in blue (dashed for clarity to distinguish from the red $Q$-strings).} \label{fig:1} \end{figure} \begin{definition}\label{def:broken_string1} A {\em broken (closed) string with $2\ell$ switches} on $K$ is a tuple $s=(a_1,\dots,a_{2\ell+1};s_1,\dots,s_{2\ell+1})$ consisting of real numbers $0=a_0<a_1<\dots<a_{2\ell+1}$ and $C^1$ maps $$ s_{2i+1}:[a_{2i},a_{2i+1}]\to N,\quad s_{2i}:[a_{2i-1},a_{2i}]\to Q $$ satisfying the following conditions: \begin{enumerate}[(i)] \item $s_1(0)=s_{2\ell+1}(a_{2\ell+1})=x_0$ and $\dot s_1(0)=\dot s_{2\ell+1}(a_{2\ell+1})=v_0$; \item for $j=1,\dots, 2\ell$, $s_j(a_j)=s_{j+1}(a_j) \in K$; \item for $i=1,\dots,\ell$, \begin{align*} (\dot s_{2i}(a_{2i}))^{\text{normal}} &= -(\dot s_{2i+1}(a_{2i}))^{\text{normal}} \\ (\dot s_{2i-1}(a_{2i-1}))^{\text{normal}} &=(\dot s_{2i}(a_{2i-1}))^{\text{normal}}. \end{align*} \end{enumerate} We will refer to the $s_{2i}$ and $s_{2i+1}$ as {\em Q-strings} and {\em N-strings}, respectively. Denote by $\Sigma^\ell$ the set of broken strings with $2\ell$ switches. \end{definition} \noindent The last condition, involving normal components of the tangent vectors to the ends of the $Q$- and $N$-strings, models the boundary behavior of holomorphic disks in this context (see Subsections~\ref{ss:series} and \ref{ss:brokenstring} for more on this point). A typical picture of a broken string is shown in Figure~\ref{fig:1}.
We call a broken string $s=(s_1,\dots s_{2\ell+1})$ {\em generic} if none of the derivatives $\dot s_i(a_{i-1})$, $\dot s_i(a_i)$ is tangent to $K$ and no $s_i$ intersects $K$ away from its end points. We call a smooth 1-parameter family of broken strings $s^\lambda=(s_1^\lambda,\dots s_{2\ell+1}^\lambda)$, $\lambda\in[0,1]$, {\em generic} if $s^0$ and $s^1$ are generic strings, none of the derivatives $\dot s_i^\lambda(a_{i-1}^\lambda),\dot s_i^\lambda(a_i^\lambda)$ is tangent to $K$, and for each $i$ the family $s_i^\lambda$ intersects $K$ transversally in the interior. The boundary of this family is given by $$ \partial\{s^\lambda\} := s^1-s^0. $$ \begin{figure} \labellist \small\hair 2pt \pinlabel ${\color{blue} \lambda=1}$ at 107 493 \pinlabel ${\color{blue} \lambda=0}$ at 160 528 \pinlabel ${\color{blue} s^\lambda}$ at 338 302 \pinlabel ${\color{blue} 1}$ at 88 400 \pinlabel ${\color{blue} 2}$ at 88 330 \pinlabel ${\color{blue} 3}$ at 14 367 \pinlabel $K$ at 353 525 \pinlabel $K$ at 715 525 \pinlabel $\delta_N$ at 414 417 \pinlabel ${\color{red} \lambda=1}$ at 107 205 \pinlabel ${\color{red} \lambda=0}$ at 160 240 \pinlabel ${\color{red} s^\lambda}$ at 338 14 \pinlabel ${\color{red} 1}$ at 88 112 \pinlabel ${\color{red} 2}$ at 88 42 \pinlabel ${\color{red} 3}$ at 98 79 \pinlabel $K$ at 353 237 \pinlabel $K$ at 715 237 \pinlabel $\delta_Q$ at 414 129 \endlabellist \centering \includegraphics[width=0.9\textwidth]{figures/deltaNQrev} \caption{The definition of $\delta_N$ and $\delta_Q$. The two configurations shown have sign $\varepsilon=1$. If the orientation of the $1$-parameter family $s^\lambda$ is switched, i.e., the $\lambda=0$ and $\lambda=1$ ends are interchanged, then $\delta_N$ and $\delta_Q$ are still as shown, but with sign $\varepsilon = -1$. The coordinate axes denote orientations chosen on $N$ (top) and $Q$ (bottom). } \label{fig:2} \end{figure} We define {\em string coproducts} $\delta_Q$ and $\delta_N$ as follows, cf.\ Section~\ref{ss:string-op}. Fix a family of bump functions (which we will call {\em spikes}) ${\mathfrak s}_\nu:[0,1] \to D^2$ for $\nu\in D^2$ such that ${\mathfrak s}_\nu^{-1}(0)=\{0,1\}$, $\dot{\mathfrak s}_\nu(0)=\nu$ and $\dot{\mathfrak s}_\nu(1)=-\nu$; for each $\nu$, ${\mathfrak s}_\nu$ lies in the line joining $0$ to $\nu$. For a generic $1$-parameter family of broken strings $\{s^\lambda\}$ denote by $\lambda^j,b^j$ the finitely many values for which $s_{2i}^{\lambda^j}(b^j)\in K$ for some $i=i(j)$. For each $j$, let ${\mathfrak s}^j={\mathfrak s}_{\nu^j}(\cdot-b^j):[b^j,b^j+1]\to N$ be a shift of the spike associated to the normal derivative $\nu^j:=-(\dot\sigma_{2i}^{\lambda^j}(b_j))^{\rm normal}$, with constant value $s_{2i}^{\lambda^j}(b^j)$ along $K$; interpret this as an $N$-string in the normal disk to $K$ at the point $s_{2i}^{\lambda^j}(b^j)$, traveling along the line joining $0$ to $\nu^j \in D^2$. Now set $$ \delta_Q\{s^\lambda\} := \sum_{j}\varepsilon^j\Bigl(s_1^{\lambda^j},\dots, s^{\lambda^j}_{2i}|_{[a_{2i-1},b^j]},{\mathfrak s}^j, \hat s^{\lambda^j}_{2i}|_{[b^j,a_{2i}]}, \dots, \hat s^{\lambda^j}_{2\ell+1}\Bigr), $$ where the hat means shift by $1$ in the argument, and $\varepsilon^j=\pm 1$ are signs defined as in Figure~\ref{fig:2}.\footnote{Regarding the signs: from our considerations of orientation bundles in Section~\ref{sec:trans}, we can assign the same sign (which we have chosen to be $\varepsilon=1$) to both configurations shown in Figure~\ref{fig:2}, provided we choose orientations on $Q$ and $N$ appropriately. 
More precisely, at a point $p$ on $K$, if $(v_1,v_2,v_3)$ is a positively oriented frame in $Q$ where $v_1$ is tangent to $K$ and $v_2,v_3$ are normal to $K$, then we need $(v_1,Jv_2,-Jv_3)$ to be a positively oriented frame in $N$, where $J$ is the almost complex structure that rotates normal directions in $Q$ to normal directions in $N$. As a result, if we give $Q$ any orientation and view $N$ as the subset of $Q$ given by a tubular neighborhood of $K$, then we assign the \textit{opposite} orientation to $N$.} Loosely speaking, $\delta_Q$ inserts an {\em $N$-spike} at all points where some $Q$-string meets $K$, in such a way that (iii) still holds. The operation $\delta_N$ is defined analogously, inserting a {\em $Q$-spike} where an $N$-string meets $K$ (and defining $\nu^j$ without the minus sign). Denote by $C_0(\Sigma^\ell)$ and $C_1(\Sigma^\ell)$ the free ${\mathbb{Z}}$-modules generated by generic broken strings and generic 1-parameter families of broken strings with $2\ell$ switches, respectively, and set $$ C_i(\Sigma) := \bigoplus_{\ell=0}^\infty C_i(\Sigma^\ell),\qquad i=0,1. $$ Concatenation of broken strings at the base point gives $C_0(\Sigma)$ the structure of a (noncommutative but strictly associative) algebra over ${\mathbb{Z}}$. The operations defined above yield linear maps $$ \partial:C_1(\Sigma^\ell)\to C_0(\Sigma^\ell)\subset C_0(\Sigma),\qquad \delta_N,\delta_Q:C_1(\Sigma^\ell)\to C_0(\Sigma^{\ell+1})\subset C_0(\Sigma). $$ Define the degree zero {\em string homology} of $K$ as $$ H_0^{\rm string}(K) = H_0(\Sigma) := C_0(\Sigma)/\im(\partial+\delta_N+\delta_Q). $$ Since $\partial+\delta_N+\delta_Q$ commutes with multiplication by elements in $C_0(\Sigma)$, its image is a two-sided ideal in $C_0(\Sigma)$. Hence degree zero string homology inherits the structure of an algebra over ${\mathbb{Z}}$. By definition, $H_0^{\rm string}(K)$ is an isotopy invariant of the oriented knot $K$ (the framing was used only for convenience but is not really needed for the construction, cf.~Remark~\ref{rem:framing} below). Considering 1-parameter families consisting of generic strings (on which $\delta_N$ and $\delta_Q$ vanish), we see that for the computation of $H_0^{\rm string}(K)$ we may replace the algebra $C_0(\Sigma)$ by its quotient under homotopy of generic strings. 
On the other hand, if $\{s^\lambda\}$ is a generic $1$-parameter family of strings that consists of generic strings except for an $N$-string (resp.~a $Q$-string) that passes through $K$ exactly once, then $\delta_N$ (resp.~$\delta_Q$) contributes a term to $(\partial+\delta_N+\delta_Q)$, and setting $(\partial+\delta_N+\delta_Q)(\{s^\lambda\})=0$ in these two cases yields the following ``skein relations'': \begin{enumerate}[(a)] \item \hspace{2ex} \label{eq:delta1} $0 \hspace{1ex} = \hspace{1ex} {\labellist \small\hair 2pt \pinlabel $K$ at 87 87 \endlabellist \raisebox{-3ex}{\includegraphics[width=7ex]{figures/deltaN2rev}}} \hspace{2ex} - \hspace{1ex} {\labellist \small\hair 2pt \pinlabel $K$ at 87 87 \endlabellist \raisebox{-3ex}{\includegraphics[width=7ex]{figures/deltaN1rev}}} \hspace{2ex} + \hspace{1ex} {\labellist \small\hair 2pt \pinlabel $K$ at 87 87 \endlabellist \raisebox{-3ex}{\includegraphics[width=7ex]{figures/deltaN3rev}}}$ \vspace{1ex} \item \hspace{2ex} $0 \hspace{1ex} = \hspace{1ex} {\labellist \small\hair 2pt \pinlabel $K$ at 87 87 \endlabellist \raisebox{-3ex}{\includegraphics[width=7ex]{figures/deltaQ2}}} \hspace{2ex} - \hspace{1ex} {\labellist \small\hair 2pt \pinlabel $K$ at 87 87 \endlabellist \raisebox{-3ex}{\includegraphics[width=7ex]{figures/deltaQ1}}} \hspace{2ex} + \hspace{1ex} {\labellist \small\hair 2pt \pinlabel $K$ at 87 87 \endlabellist \raisebox{-3ex}{\includegraphics[width=7ex]{figures/deltaQ3rev}}}$ \hspace{1ex}. \label{eq:delta2} \end{enumerate} Since any generic $1$-parameter family of broken closed strings can be divided into $1$-parameter families each of which crosses $K$ at most once, we have proved the following result. \begin{prop}\label{prop:iso} Let $\mathcal{B}$ be the quotient of $C_0(\Sigma)$ by homotopy of generic broken strings and let $\mathcal{J}\subset\mathcal{B}$ be the two-sided ideal generated by the skein relations \eqref{eq:delta1} and \eqref{eq:delta2}. Then \[ H_0^{\rm string}(K) \cong \mathcal{B}/\mathcal{J}. \] \end{prop} \begin{remark}\label{rem:framing} Degree zero string homology $H_0^{\rm string}$ (as well as its higher degree version defined later) is an invariant of an oriented knot $K \subset Q$. Reversing the orientation of $K$ has the result of changing the signs of $\delta_N$ and $\delta_Q$ but not of $\partial$ and gives rise to isomorphic $H_0^{\rm string}$. More precisely, if $-K$ is $K$ with the opposite orientation, the map $C_0(\Sigma) \to C_0(\Sigma)$ given by multiplication by $(-1)^\ell$ on the summand $C_0(\Sigma^\ell)$ intertwines the differentials $\partial+\delta_N+\delta_Q$ for $K$ and $-K$ and induces an isomorphism $H_0^{\rm string}(K) \to H_0^{\rm string}(-K)$. Similarly, mirroring does not change $H_0^{\rm string}$ up to isomorphism: if $\bar K$ is the mirror of $K$, then the mirror (reflection) map induces a map $C_0(\Sigma) \to C_0(\bar\Sigma)$, and composing with the above map $C_0(\Sigma) \to C_0(\Sigma)$ gives a chain isomorphism $C_0(\Sigma) \to C_0(\bar\Sigma)$. In Sections~\ref{ss:cordalg} through~\ref{ss:groupring}, we will ``improve'' $H_0^{\rm string}$ from an abstract ring to one that canonically contains the ring ${\mathbb{Z}}[\lambda^{\pm 1},\mu^{\pm 1}]$. This requires a choice of framing of $K$ (though for $Q={\mathbb{R}}^3$, there is a canonical choice given by the Seifert framing).
In the improved setting, $H_0^{\rm string}$ changes under orientation reversal of $K$ by replacing $(\lambda,\mu)$ by $(\lambda^{-1},\mu^{-1})$; under framing change by $f\in{\mathbb{Z}}$ by replacing $(\lambda,\mu)$ by $(\lambda\mu^f,\mu)$; and under mirroring by replacing $(\lambda,\mu)$ by $(\lambda,\mu^{-1})$. In particular, the improved $H_0^{\rm string}$ is very sensitive to framing change and mirroring. For a related discussion, see \cite[\S 4.1]{Ng:1}. \end{remark} \smallskip {\bf A modified version of string homology. } The choice of the base point in $N$ rather than $Q$ in the definition of string homology $H^{\str}_0(K)$ is dictated by the relation to Legendrian contact homology. However, from the perspective of string topology we could equally well pick the base point in $Q$, as we describe next. \begin{figure} \labellist \small\hair 2pt \pinlabel ${\color{red} x_0}$ at 26 109 \pinlabel ${\color{red} v_0}$ at 6 60 \pinlabel ${\color{red} s_1}$ at 65 40 \pinlabel ${\color{blue} s_2}$ at 229 40 \pinlabel ${\color{red} s_3}$ at 302 72 \pinlabel ${\color{blue} s_4}$ at 243 116 \pinlabel ${\color{red} s_5}$ at 97 135 \pinlabel $K$ at 351 2 \endlabellist \centering \includegraphics[width=0.6\textwidth]{figures/bcstring-modified} \caption{In the alternate definition that produces modified string homology, a broken closed string with $4$ switches. As usual, $Q$-strings are in red, $N$-strings in (dashed) blue.} \label{fig:bcstring-modified} \end{figure} Choose a base point $x_0\in Q\setminus K$ and a tangent vector $v_0 \in T_{x_0}Q$. Modify the definition of a broken string with $2\ell$ switches to $s=(a_0,\dots, a_{2\ell+1};s_0,\dots,s_{2\ell})$, where now $$ s_{2i}:[a_{2i},a_{2i+1}]\to Q,\quad s_{2i-1}:[a_{2i-1},a_{2i}]\to N, $$ and we require that $s_0(a_0)=s_{2\ell}(a_{2\ell+1})=x_0$, $\dot s_0(a_0)=\dot s_{2\ell}(a_{2\ell+1})=v_0$ and conditions (ii) and (iii) of Definition~\ref{def:broken_string1} hold. See Figure~\ref{fig:bcstring-modified}. Let $\hat{C}_0(\Sigma)$ denote the ring generated as a ${\mathbb{Z}}$-module by generic broken strings with base point $x_0\in Q$. (As usual, the product operation on $\hat{C}_0(\Sigma)$ is given by string concatenation.) We can define string coproducts $\delta_N$, $\delta_Q$ as before, and then define the degree $0$ \textit{modified string homology} of $K$ as $$ \hat{H}^{\str}_0(K) = \hat{C}_0(\Sigma)/\im(\partial+\delta_N+\delta_Q). $$ We have the following analogue of Proposition~\ref{prop:iso}. \begin{prop} Let $\hat{\mathcal{B}}$ be the quotient of $\hat{C}_0(\Sigma)$ by homotopy of generic broken strings and let $\hat{\mathcal{J}} \subset \hat{\mathcal{B}}$ be the two-sided ideal generated by the skein relations \eqref{eq:delta1} and \eqref{eq:delta2}. Then $$ \hat{H}^{\str}_0(K) \cong \hat{\mathcal{B}}/\hat{\mathcal{J}}. $$ \end{prop} There is one key difference between $\hat{H}^{\str}_0$ and $H^{\str}_0$. Since any element in $\pi_1(Q\setminus K,x_0)$ can be viewed as a pure $Q$-string, we have a canonical map ${\mathbb{Z}}\pi_1(Q\setminus K,x_0) \to \hat{H}^{\str}_0(K)$. In fact, we will see in Proposition~\ref{prop:groupring} that this is a ring isomorphism. The same is not the case for $H^{\str}_0(K)$. \subsection{The cord algebra}\label{ss:cordalg} The definition of $H^{\str}_0(K)$ in Section~\ref{ss:string0} is very similar to the definition of the cord algebra of a knot \cite{Ng:2b,Ng:1,Ngsurvey}. 
Here we review the cord algebra, or more precisely, present a noncommutative refinement of it, in which the ``coefficients'' $\lambda,\mu$ do not commute with the ``cords''. Let $K \subset Q$ be an oriented knot equipped with a framing, and let $K'$ be a parallel copy of $K$ with respect to this framing. Choose a base point $\ast$ on $K$ and a corresponding base point $\ast$ on $K'$ (in fact only the base point on $K'$ will be needed). \begin{definition} A \textit{(framed) cord} of $K$ is a continuous map $\gamma :\thinspace [0,1] \to Q$ such that $\gamma([0,1]) \cap K = \emptyset$ and $\gamma(0),\gamma(1) \in K'\setminus\{\ast\}$. Two framed cords are \textit{homotopic} if they are homotopic through framed cords. \end{definition} We now construct a noncommutative unital ring $\AA$ as follows: as a ring, $\AA$ is freely generated by homotopy classes of cords and four extra generators $\lambda^{\pm 1},\mu^{\pm 1}$, modulo the relations \[ \lambda\cdot\lambda^{-1} = \lambda^{-1}\cdot\lambda = \mu\cdot\mu^{-1} = \mu^{-1}\cdot\mu = 1, \hspace{3ex} \lambda\cdot\mu = \mu\cdot\lambda. \] Thus $\AA$ is generated as a ${\mathbb{Z}}$-module by (noncommutative) words in homotopy classes of cords and powers of $\lambda$ and $\mu$ (and the powers of $\lambda$ and $\mu$ commute with each other, but not with any cords). \begin{definition} The \textit{cord algebra} of $K$ is the quotient ring \label{def:cordalg} \[ \operatorname{Cord}(K) = \AA/\mathcal{I}, \] where $\mathcal{I}$ is the two-sided ideal of $\AA$ generated by the following ``skein relations'': \begin{enumerate} \item \label{it:cord1} $\raisebox{-3ex}{\includegraphics[height=7ex]{figures/skein1}} = 1-\mu$ \item \label{it:cord0} $\raisebox{-3ex}{\includegraphics[height=7ex]{figures/skein3iwrapped}} = \mu \cdot \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skein3i}}$ \hspace{3ex} and \hspace{3ex} $\raisebox{-3ex}{\includegraphics[height=7ex]{figures/skein3cwrapped}} =\raisebox{-3ex}{\includegraphics[height=7ex]{figures/skein3c}} \cdot \mu$ \item \label{it:cord2} $\raisebox{-3ex}{\includegraphics[height=7ex]{figures/skein2a}} = \lambda \cdot \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skein2b}}$ \hspace{3ex} and \hspace{3ex} $\raisebox{-3ex}{\includegraphics[height=7ex]{figures/skein2c}} = \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skein2d}} \cdot \lambda$ \item \label{it:cord3} $\raisebox{-3ex}{\includegraphics[height=7ex]{figures/skein3a}} - \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skein3b}} = \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skein3c}} \cdot \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skein3d}}$. \end{enumerate} Here $K$ is depicted in black and $K'$ parallel to $K$ in gray, and cords are drawn in red. \end{definition} \begin{remark} The skein relations in Definition~\ref{def:cordalg} depict cords in space that agree outside of the drawn region (except in (\ref{it:cord3}), where either of the two cords on the left hand side of the equation splits into the two on the right). Thus (\ref{it:cord0}) states that appending a meridian to the beginning or end of a cord multiplies that cord by $\mu$ on the left or right, and (\ref{it:cord3}) is equivalent to: \[ \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skein3f}} - \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skein3g}} = \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skein3h}} \cdot \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skein3i}}. 
\] \end{remark} \begin{remark} \label{rmk:commute} Our stipulation that $\lambda,\mu$ not commute with cords necessitates a different normalization of the cord algebra of $K \subset Q$ from previous definitions \cite{Ng:1,Ngsurvey}. In the definition from \cite{Ngsurvey} (\cite{Ng:1} is the same except for a change of variables), $\lambda,\mu$ commute with cords, and the parallel copy $K'$ is not used. Instead, cords are defined to be paths that begin and end on $K$ with no interior point lying on $K$, and the skein relations are suitably adjusted, with the key relation, the equivalent of (\ref{it:cord3}), being: \[ \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skein31a}} - \mu \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skein31b}} = \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skein31c}} \cdot \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skein31d}}. \] Let $\operatorname{Cord}'(K)$ denote the resulting version of cord algebra. If we take the quotient of the cord algebra $\operatorname{Cord}(K)$ from Definition~\ref{def:cordalg} where $\lambda,\mu$ commute with everything, then the result is a ${\mathbb{Z}}[\lambda^{\pm 1},\mu^{\pm 1}]$-algebra isomorphic to $\operatorname{Cord}'(K)$, as long as we take the Seifert framing ($\operatorname{lk}(K,K') = 0$). The isomorphism is given as follows: given a framed cord $\gamma$, extend $\gamma$ to an oriented closed loop $\widetilde{\gamma}$ in $Q\setminus K$ by joining the endpoints of $\gamma$ along $K'$ in a way that does not pass through the base point $\ast$, and map $\gamma$ to $\mu^{-\operatorname{lk}(\widetilde{\gamma},K)} \gamma$. This is a well-defined map on $\operatorname{Cord}(K)$ and sends the relations for $\operatorname{Cord}(K)$ to the relations for $\operatorname{Cord}'(K)$. See also the proof of Theorem 2.10 in \cite{Ng:1}. \end{remark} We now show that the cord algebra is exactly equal to degree $0$ string homology. This follows from the observation that the $Q$-strings in a generic broken closed string are each a framed cord of $K$, once we push the endpoints of the $Q$-string off of $K$; and thus a broken closed string can be thought of as a product of framed cords. \begin{proposition} Let $K \subset Q$ be a framed oriented knot. Then we have a ring isomorphism \label{prop:HS0cord} \[ \operatorname{Cord}(K) \cong H^{\str}_0(K). \] \end{proposition} \begin{proof} Choose a normal vector field $v$ along $K$ defining the framing and let $K'$ be the pushoff of $K$ in the direction of $v$, placed so that $K'$ lies on the boundary of the tubular neighborhood $N$ of $K$. Fix a base point $p \neq \ast$ on $K$, and let $p'$ be the corresponding point on $K'$, so that $v(p)$ is mapped to $p'$ under the diffeomorphism between the normal bundle to $K$ and $N$. Identify $p'$ with $x_0\in \partial N$ from Definition~\ref{def:broken_string1} (the definition of broken closed string). We can homotope any cord of $K$ so that it begins and ends at $p'$, by pushing the endpoints of the cord along $K'$, away from $\ast$, until they reach $p'$. Every generator of $\operatorname{Cord}(K)$ as a ${\mathbb{Z}}$-module has the form $\alpha_1 x_1 \alpha_2 x_2 \cdots x_\ell \alpha_{\ell+1}$, where $\ell \geq 0$, $x_1,\ldots,x_\ell$ are cords of $K$, and $\alpha_1,\ldots,\alpha_{\ell+1}$ are each of the form $\lambda^a \mu^b$ for $a,b\in{\mathbb{Z}}$. We can associate a broken closed string with $2\ell$ switches as follows. Assume that each cord $x_1,\ldots,x_\ell$ begins and ends at $p'$. 
Fix paths $\gamma_Q,\tilde{\gamma}_Q$ in $Q$ from $p,p'$ to $p',p$ respectively, and paths $\gamma_N,\tilde{\gamma}_N$ in $N$ from $p,p'$ to $p',p$ respectively, as shown in Figure~\ref{fig:cordtostring}: these are chosen so that the derivative of $\gamma_Q,\tilde{\gamma}_Q,\gamma_N,\tilde{\gamma}_N$ at $p$ is $-v(p),-v(p),v(p),-v(p)$, respectively. For $k=1,\ldots,\ell$, let $\overline{x}_k$ be the $Q$-string with endpoints at $p$ given by the concatenation $\gamma_Q \cdot x_k \cdot \tilde{\gamma}_Q$ (more precisely, smoothen this string at $p'$). Similarly, for $k=1,\ldots,\ell+1$, identify $\alpha_k \in \pi_1(\partial N) = \pi_1(T^2)$ with a loop in $\partial N$ with basepoint $p'$ representing this class; then define $\overline{\alpha}_k$ to be the $N$-string $\gamma_N \cdot \alpha_k \cdot \tilde{\gamma}_N$ for $k=2,\ldots,\ell$, $\alpha_1 \cdot \tilde{\gamma}_N$ for $k=1$, and $\gamma_N \cdot \alpha_{\ell+1}$ for $k=\ell+1$. (If $\ell=0$, then $\overline{\alpha}_1 = \alpha_1$.) Then the concatenation \[ \overline{\alpha}_1 \cdot \overline{x}_1 \cdot \overline{\alpha}_2 \cdot \overline{x}_2 \cdots \overline{x}_\ell \cdot \overline{\alpha}_{\ell+1} \] is a broken closed string with $2\ell$ switches. \begin{figure} \labellist \small\hair 2pt \pinlabel $K$ at 104 24 \pinlabel $K$ at 230 24 \pinlabel $K$ at 392 24 \pinlabel $K$ at 524 24 \pinlabel $K'$ at 104 51 \pinlabel $K'$ at 230 51 \pinlabel $K'$ at 392 51 \pinlabel $K'$ at 524 51 \pinlabel $p$ at 34 14 \pinlabel $p$ at 169 14 \pinlabel $p$ at 330 14 \pinlabel $p$ at 456 14 \pinlabel $p'$ at 36 61 \pinlabel $p'$ at 180 61 \pinlabel $p'$ at 327 61 \pinlabel $p'$ at 467 61 \pinlabel ${\color{red} x_k}$ at 84 82 \pinlabel ${\color{red} x_k}$ at 126 82 \pinlabel ${\color{red} \gamma_Q}$ at 70 10 \pinlabel ${\color{red} \tilde{\gamma}_Q}$ at 183 40 \pinlabel ${\color{blue} \alpha_k}$ at 378 82 \pinlabel ${\color{blue} \alpha_k}$ at 409 82 \pinlabel ${\color{blue} \gamma_N}$ at 346 34 \pinlabel ${\color{blue} \tilde{\gamma}_N}$ at 471 40 \endlabellist \centering \includegraphics[width=0.9\textwidth]{figures/cordtostringrev} \caption{Turning an element of the cord algebra into a broken closed string.} \label{fig:cordtostring} \end{figure} Extend this map from generators of $\operatorname{Cord}(K)$ to broken closed strings to a map on $\operatorname{Cord}(K)$ by ${\mathbb{Z}}$-linearity. We claim that this induces the desired isomorphism $\phi :\thinspace \operatorname{Cord}(K) \to H^{\str}_0(K)$. Recall that $\operatorname{Cord}(K)$ is defined by skein relations (\ref{it:cord1}), (\ref{it:cord0}), (\ref{it:cord2}), (\ref{it:cord3}) from Definition~\ref{def:cordalg}, while $H^{\str}_0(K)$ is defined by skein relations \eqref{eq:delta1}, \eqref{eq:delta2} from Proposition~\ref{prop:iso}. To check that $\phi$ is well-defined, we need the skein relations (\ref{it:cord1}), (\ref{it:cord0}), (\ref{it:cord2}), (\ref{it:cord3}) to be preserved by $\phi$. Indeed, (\ref{it:cord1}) maps under $\phi$ to \[ \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skeina1arev}} = \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skeina1crev}} - \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skeina1drev}}, \] which holds in $H^{\str}_0(K)$ since both sides are equal to $\raisebox{-3ex}{\includegraphics[height=7ex]{figures/skeina1brev}}$: the left hand side by rotating the end of the red $Q$-string and the beginning of the blue $N$-string around $K$ at their common endpoint, the right hand side by skein relation \eqref{eq:delta1}.
Skein relation (\ref{it:cord3}) maps under $\phi$ to \[ \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skeina4a}} - \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skeina4b}} = \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skeina4crev}}, \] which holds by \eqref{eq:delta2}. Finally, (\ref{it:cord0}) and (\ref{it:cord2}) map to homotopies of broken closed strings: for instance, the left hand relation in (\ref{it:cord0}) maps to \[ \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skeina2arev}} = \raisebox{-3ex}{\includegraphics[height=7ex]{figures/skeina2brev}}. \] To show that $\phi$ is an isomorphism, we simply describe the inverse map from broken closed strings to the cord algebra. Given any broken closed string, homotope it so that the switches all lie at $p$, and so that the tangent vector to the endpoint of all strings ending at $p$ is $-v(p)$; then the result is in the image of $\phi$ by construction. There is more than one way to homotope a broken closed string into this form, but any such form gives the same element of the cord algebra: moving the switches along $K$ to $p$ in a different way gives the same result by (\ref{it:cord2}), while moving the tangent vectors to $-v(p)$ in a different way gives the same result by (\ref{it:cord0}). The two skein relations \eqref{eq:delta1} and \eqref{eq:delta2} are satisfied in the cord algebra because of (\ref{it:cord1}) and (\ref{it:cord3}). \end{proof} As mentioned in the Introduction, when $Q={\mathbb{R}}^3$, it is an immediate consequence of Theorem~\ref{thm:main} and Proposition~\ref{prop:HS0cord} that the cord algebra is isomorphic to degree $0$ knot contact homology: \[ H^{\rm contact}_0(K) \cong H^{\str}_0(K) \cong \operatorname{Cord}(K). \] This recovers a result from the literature (see Theorem~\ref{thm:cordHC0}), modulo one important point. Recall (or see Section~\ref{sec:leg}) that $H^{\rm contact}_0(K)$ is the degree $0$ homology of a differential graded algebra $(\AA,\partial)$. In much of the literature on knot contact homology, cf.\ \cite{EENStransverse,Ng:1,Ngtransverse}, this DGA is an algebra over the coefficient ring ${\mathbb{Z}}[\lambda^{\pm 1},\mu^{\pm 1}]$ (or ${\mathbb{Z}}[\lambda^{\pm 1},\mu^{\pm 1},U^{\pm 1}]$, but in this paper we set $U=1$): $\AA$ is generated by a finite collection of noncommuting generators (Reeb chords) along with powers of $\lambda,\mu$ that commute with Reeb chords. By contrast, in this paper $(\AA,\partial)$ is the \textit{fully noncommutative} DGA in which the coefficients $\lambda,\mu$ commute with each other but not with the Reeb chords; see \cite{EENS,Ngsurvey}. The isomorphism $\operatorname{Cord}(K) \cong H^{\rm contact}_0(K)$ in Theorem~\ref{thm:cordHC0} is stated in the existing literature as an isomorphism of ${\mathbb{Z}}[\lambda^{\pm 1},\mu^{\pm 1}]$-algebras, i.e., the coefficients $\lambda,\mu$ commute with everything for both $H^{\rm contact}_0(K)$ and $\operatorname{Cord}(K)$. However, an inspection of the proof of Theorem~\ref{thm:cordHC0} from \cite{EENS,Ng:2b,Ng:1} shows that it can be lifted to the fully noncommutative setting, in which $\lambda,\mu$ do not commute with Reeb chords (for $H^{\rm contact}_0(K)$) or cords (for $\operatorname{Cord}(K)$). We omit the details here, and simply note that our results give a direct proof of Theorem~\ref{thm:cordHC0} in the fully noncommutative setting. 
\begin{remark} \label{rmk:commute1} Besides being more natural from the viewpoint of string homology, the stipulation that $\lambda,\mu$ do not commute with cords (in the cord algebra) or Reeb chords (in the DGA) is essential for our construction, in Section~\ref{ss:groupring} below, of a map from degree $0$ homology to the group ring of $\pi$, the fundamental group of the knot complement. This in turn is what allows us to (re)prove that knot contact homology detects the unknot, among other things. If we pass to the quotient where $\lambda,\mu$ commute with everything, then there is no well-defined map to ${\mathbb{Z}}\pi$. \end{remark} \begin{remark} As already mentioned in the introduction, in \cite{BMSS} Basu, McGibbon, Sullivan and Sullivan have given a string topology description of a version of the cord algebra for a codimension 2 submanifold $K \subset Q$ of some ambient manifold $Q$, proving a theorem which formally looks quite similar to Proposition~\ref{prop:HS0cord}. In the language we use here, the main difference in their work is the absence of $N$-strings, so that for knots $K \subset {\mathbb{R}}^3$ the version of $H^{\str}_0(K)$ they define only recovers the specialization at $\lambda=1$ of (the commutative version of) $\operatorname{Cord}(K)$. \end{remark} \subsection{Homotopy formulation of the cord algebra}\label{ss:htpy} We now reformulate the cord algebra in terms of fundamental groups, more precisely the knot group and its peripheral subgroup, along the lines of the Appendix to \cite{Ng:2b}. In light of Proposition~\ref{prop:HS0cord}, we will henceforth denote the cord algebra as $H^{\str}_0(K)$. We first introduce some notation. Let $K$ be an oriented knot in an oriented $3$-manifold $Q$ (in fact we only need an orientation and coorientation of $K$). Let $N$ be a tubular neighborhood of $K$; as suggested by the notation, we will identify this neighborhood with the conormal bundle $N \subset T^*Q$ via the tubular neighborhood theorem. We write \begin{align*} \pi &= \pi_1(Q\setminus K) \\ \hat{\pi} &= \pi_1(\partial N); \end{align*} note that the inclusion $\partial N \hookrightarrow Q\setminus K$ induces a map $\hat{\pi} \to \pi$, which is typically an injection. Let ${\mathbb{Z}}\pi$, ${\mathbb{Z}}\hat{\pi}$ denote the group rings of $\pi,\hat{\pi}$. We fix a framing on $K$; this, along with the orientation and coorientation of $K$, allows us to specify two elements $\mu,\lambda \in \hat{\pi}$ corresponding to the meridian and longitude, and to write \[ {\mathbb{Z}}\hat{\pi} = {\mathbb{Z}}[\lambda^{\pm 1},\mu^{\pm 1}]. \] The group ring ${\mathbb{Z}}\pi$ and the cord algebra $H^{\str}_0(K)$ both have natural maps from ${\mathbb{Z}}[\lambda^{\pm 1},\mu^{\pm 1}]$ (which are injective unless $K$ is the unknot). This motivates the following definition, where ``NC'' stands for ``noncommutative''. \begin{definition} Let $R$ be a ring. An \textit{$R$-NC-algebra} is a ring $S$ equipped with a ring homomorphism $R \to S$. Two $R$-NC-algebras $S_1,S_2$ are \textit{isomorphic} if there is a ring isomorphism $S_1\to S_2$ that commutes with the maps $R\to S_1$, $R\to S_2$. \end{definition} Note that when $R$ is commutative, the notion of an $R$-NC-algebra differs from the usual notion of an $R$-algebra; for example, an $R$-algebra $S$ requires $s_1(rs_2) = rs_1s_2$ for $r\in R$ and $s_1,s_2\in S$, while an $R$-NC-algebra does not. (One can quotient an $R$-NC-algebra by commutators involving elements of $R$ to obtain an $R$-algebra.)
If $R$ and $S$ are both commutative, however, then the notions agree. Also note that any $R$-NC-algebra is automatically an $R$-bimodule, where $R$ acts on the left and on the right by multiplication. By the construction of the cord algebra $\operatorname{Cord}(K)$ from Section~\ref{ss:cordalg}, $H^{\str}_0(K)$ is a ${\mathbb{Z}}\hat{\pi}$-NC-algebra. We now give an alternative definition of $H^{\str}_0(K)$ that uses $\pi$ and $\hat{\pi}$ in place of cords. A \textit{broken word} in $\pi,\hat{\pi}$ is a nonempty word in elements of $\pi$ and $\hat{\pi}$ whose letters alternate between elements in $\pi$ and $\hat{\pi}$. For clarity, we use Roman letters for elements in $\pi$ and Greek for $\hat{\pi}$, and enclose elements in $\pi,\hat{\pi}$ by square and curly brackets, respectively. Thus examples of broken words are $\{\alpha\}$, $[x]$, $[x]\{\alpha\}$, and $\{\alpha_1\}[x_1]\{\alpha_2\}[x_2]\{\alpha_3\}$. Consider the ${\mathbb{Z}}$-module freely generated by broken words in $\pi,\hat{\pi}$, divided by the following \textit{string relations}: \begin{enumerate} \item \label{it:str1} $\cdots_1 [x\alpha_1]\{\alpha_2\} \cdots_2 = \cdots_1 [x]\{\alpha_1\alpha_2\} \cdots_2 $ \item \label{it:str2} $\cdots_1 \{\alpha_1\}[\alpha_2 x] \cdots_2 = \cdots_1 \{\alpha_1\alpha_2\}[x] \cdots_2$ \item \label{it:str3} $(\cdots_1 [x_1x_2] \cdots_2) - (\cdots_1 [x_1\mu x_2] \cdots_2) = \cdots_1 [x_1]\{1\}[x_2] \cdots_2$ \item \label{it:str4} $(\cdots_1 \{\alpha_1\alpha_2\} \cdots_2) - (\cdots_1 \{\alpha_1\mu\alpha_2\} \cdots_2 ) = \cdots_1 \{\alpha_1\}[1]\{\alpha_2\} \cdots_2$. \end{enumerate} Here $\cdots_1$ is understood to represent the same (possibly empty) subword each time it appears, as is $\cdots_2$. We denote the resulting quotient by $S(\pi,\hat\pi)$. The ${\mathbb{Z}}$-module $S(\pi,\hat \pi)$ splits into a direct sum corresponding to the four possible beginnings and endings for broken words: \[ S(\pi,\hat\pi) = S^{\hat{\pi}\hat{\pi}}(\pi,\hat\pi) \oplus S^{\hat{\pi}\pi}(\pi,\hat\pi) \oplus S^{\pi\hat\pi}(\pi,\hat\pi) \oplus S^{\pi\pi}(\pi,\hat\pi), \] where the superscripts denote which of $\pi$ and $\hat{\pi}$ contain the first and last letters in the broken word. Thus $S^{\hat{\pi}\hat{\pi}}(\pi,\hat\pi)$ is generated by broken words beginning and ending with curly brackets (elements of $\hat{\pi}$)--- $\{\alpha\}$, $\{\alpha_1\}[x]\{\alpha_2\}$, etc.---while $S^{\pi\pi}(\pi,\hat\pi)$ is generated by $[x]$, $[x]\{\alpha\}[y]$, etc. We think of these broken words as broken strings with base point on $N\cap Q$ beginning and ending with $N$-strings (for $S^{\hat{\pi}\hat{\pi}}(\pi,\hat\pi)$) or $Q$-strings (for $S^{\pi\pi}(\pi,\hat\pi)$). The other two summands $S^{\hat{\pi}\pi}(\pi,\hat\pi), S^{\pi\hat\pi}(\pi,\hat\pi)$ can similarly be interpreted in terms of broken strings, but we will not consider them further. On $S^{\hat{\pi}\hat{\pi}}(\pi,\hat\pi)$ and $S^{\pi\pi}(\pi,\hat\pi)$, we can define multiplications by \[ (\cdots_1 \{\alpha_1\})(\{\alpha_2\}\cdots_2) = \cdots_1 \{\alpha_1\alpha_2\} \cdots_2 \] and \[ (\cdots_1 [x_1])([x_2] \cdots_2) = \cdots_1 [x_1x_2] \cdots_2, \] respectively. These turn $S^{\hat{\pi}\hat{\pi}}(\pi,\hat\pi)$ and $S^{\pi\pi}(\pi,\hat\pi)$ into rings. Note for future reference that $S^{\hat{\pi}\hat{\pi}}(\pi,\hat\pi)$ is generated as a ring by $\{\alpha\}$ and $\{1\}[x]\{1\}$ for $\alpha \in \hat\pi$ and $x\in\pi$. 
\begin{proposition} $S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi)$ is a ${\mathbb{Z}}\hat{\pi}$-NC-algebra, while $S^{\pi\pi}(\pi,\hat \pi)$ is a ${\mathbb{Z}}\pi$-NC-algebra and hence a ${\mathbb{Z}}\hat{\pi}$-NC-algebra as well. Both $S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi)$ and $S^{\pi\pi}(\pi,\hat \pi)$ are knot invariants as NC-algebras. \end{proposition} \begin{proof} We only need to specify the ring homomorphisms ${\mathbb{Z}}\hat{\pi} \to S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi)$ and ${\mathbb{Z}}\pi \to S^{\pi\pi}(\pi,\hat \pi)$; these are given by $\alpha \mapsto \{\alpha\}$ and $x \mapsto [x]$, respectively. \end{proof} \begin{remark} View ${\mathbb{Z}}\pi$ as a ${\mathbb{Z}}\hat{\pi}$-bimodule via the map $\hat{\pi} \to \pi$. Then $S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi)$ and $S^{\pi\pi}(\pi,\hat \pi)$ can alternatively be defined as follows. Let $\mathcal{A},\hat{\mathcal{A}}$ be defined by \begin{align*} \mathcal{A} &= {\mathbb{Z}}\hat{\pi} \oplus {\mathbb{Z}}\pi \oplus ({\mathbb{Z}}\pi \otimes_{{\mathbb{Z}}\hat{\pi}} {\mathbb{Z}}\pi) \oplus ({\mathbb{Z}}\pi \otimes_{{\mathbb{Z}}\hat{\pi}} {\mathbb{Z}}\pi \otimes_{{\mathbb{Z}}\hat{\pi}} {\mathbb{Z}}\pi) \oplus \cdots \\ \hat{\mathcal{A}} &= {\mathbb{Z}}\pi \oplus ({\mathbb{Z}}\pi \otimes_{{\mathbb{Z}}\hat{\pi}} {\mathbb{Z}}\pi) \oplus ({\mathbb{Z}}\pi \otimes_{{\mathbb{Z}}\hat{\pi}} {\mathbb{Z}}\pi \otimes_{{\mathbb{Z}}\hat{\pi}} {\mathbb{Z}}\pi) \oplus \cdots. \end{align*} Each of $\mathcal{A},\hat{\mathcal{A}}$ has a multiplication operation given by concatenation (e.g. $a \cdot (b\otimes c) = a\otimes b\otimes c$); multiplying by an element of ${\mathbb{Z}}\hat{\pi} \subset \mathcal{A}$ uses the ${\mathbb{Z}}\hat{\pi}$-bimodule structure on ${\mathbb{Z}}\pi$. There are two-sided ideals $\mathcal{I}\subset\mathcal{A}, \hat{\mathcal{I}}\subset\hat{\mathcal{A}}$ generated by \begin{gather*} x_1x_2 - x_1\mu x_2-x_1\otimes x_2 \\ 1_{\hat{\pi}}-(1-\mu)_{\pi} \end{gather*} where $x_1,x_2\in\pi$, $x_1x_2,x_1\mu x_2$ are viewed as elements in ${\mathbb{Z}}\pi$, and $1_{\hat{\pi}}$ denotes the element $1\in{\mathbb{Z}}\hat\pi$ while $(1-\mu)_{\pi}$ denotes the element $1-\mu\in{\mathbb{Z}}\pi$. Then \begin{align*} S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi) &\cong \mathcal{A}/\mathcal{I} \\ S^{\pi\pi}(\pi,\hat \pi) &\cong \hat{\mathcal{A}}/\hat{\mathcal{I}}. \end{align*} \end{remark} We conclude this subsection by noting that $S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi)$ is precisely the cord algebra of $K$. \begin{proposition} We have the following isomorphism of ${\mathbb{Z}}\hat\pi$-NC-algebras: \[ H^{\str}_0(K) \cong S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi). \] \end{proposition} \begin{proof} We use the cord-algebra formulation of $H^{\str}_0(K) \cong \operatorname{Cord}(K)$ from Definition~\ref{def:cordalg}. Let $K'$ be the parallel copy of $K$, and choose a base point $p'$ for $\pi = \pi_1(Q\setminus K)$ with $p' \in K'\setminus \{\ast\}$. Given a cord $\gamma$ of $K$, define $\widetilde{\gamma} \in \pi$ as in Remark~\ref{rmk:commute}: extend $\gamma$ to a closed loop $\widetilde{\gamma}$ in $Q\setminus K$ with endpoints at $p'$ by connecting the endpoints of $\gamma$ to $p'$ along $K'\setminus\{\ast\}$. Then the isomorphism $\phi :\thinspace \operatorname{Cord}(K) \to S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi)$ is the ring homomorphism defined by: \begin{align*} \phi(\gamma) &= \{1\}[\widetilde{\gamma}]\{1\} & \phi(\alpha) &= \{\alpha\}, \end{align*} for $\gamma$ any cord of $K$ and $\alpha$ any element of $\operatorname{Cord}(K)$ of the form $\lambda^a \mu^b$. 
The skein relations in $\operatorname{Cord}(K)$ from Definition~\ref{def:cordalg} are mapped by $\phi$ to: \begin{enumerate} \item $\{1\}[1]\{1\} = \{1\}-\{\mu\}$ \item $\{1\}[\mu\widetilde{\gamma}]\{1\} = \{\mu\}[\widetilde{\gamma}]\{1\}$ and $\{1\}[\widetilde{\gamma}\mu]\{1\} = \{1\}[\widetilde{\gamma}]\{\mu\}$ \item $\{1\}[\lambda\widetilde{\gamma}]\{1\} = \{\lambda\}[\widetilde{\gamma}]\{1\}$ and $\{1\}[\widetilde{\gamma}\lambda]\{1\} = \{1\}[\widetilde{\gamma}]\{\lambda\}$ \item $\{1\}[\widetilde{\gamma}_1\widetilde{\gamma}_2]\{1\} - \{1\}[\widetilde{\gamma}_1\mu\widetilde{\gamma}_2]\{1\} = \{1\}[\widetilde{\gamma}_1]\{1\}[\widetilde{\gamma}_2]\{1\}$. \end{enumerate} In $S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi)$, these follow from string relations (\ref{it:str4}), (\ref{it:str1}) and (\ref{it:str2}), (\ref{it:str1}) and (\ref{it:str2}), and (\ref{it:str3}), respectively. Thus $\phi$ is well-defined. It is straightforward to check that $\phi$ is an isomorphism (indeed, the string relations are constructed so that this is the case), with inverse $\phi^{-1}$ defined by \begin{align*} \phi^{-1}(\{\alpha\}) &= \alpha \\ \phi^{-1}(\{1\}[\widetilde{\gamma}]\{1\}) &= \widetilde{\gamma}, \end{align*} for $\alpha \in \hat\pi$ and $\widetilde{\gamma}\in\pi$: note that a closed loop at $p' \in K'\setminus\{\ast\}$ representing $\widetilde{\gamma}$ is also by definition a cord of $K$. \end{proof} \begin{remark} Similarly, one can show that $\hat{H}^{\str}_0(K) \cong S^{\pi\pi}(\pi,\hat \pi)$ as ${\mathbb{Z}}\pi$-NC-algebras. In the same vein, there is also a cord formulation for modified string homology $\hat{H}^{\str}_0(K)$ (as introduced at the end of Section~\ref{ss:string0}), along the lines of Definition~\ref{def:cordalg}: this is $\hat{\mathcal{A}}/\hat{\mathcal{I}}$, where $\hat{\mathcal{A}}$ is the non-unital algebra generated by nonempty products of cords (the difference from $\mathcal{A}$ being that $\hat{\mathcal{A}}$ does not contain words of the form $\lambda^a \mu^b$, which have no cords), and $\hat{\mathcal{I}}$ is the ideal of $\hat{\mathcal{A}}$ generated by skein relations \eqref{it:cord0} through \eqref{it:cord3} from Definition~\ref{def:cordalg}, without \eqref{it:cord1}. \end{remark} \subsection{The cord algebra and group rings}\label{ss:groupring} Having defined the cord algebra in terms of homotopy groups, we can now give an even more explicit interpretation not involving broken words, in terms of the group ring ${\mathbb{Z}}\pi$. Notation is as in Section~\ref{ss:htpy}: in particular, $K\subset Q$ is a framed oriented knot with tubular neighborhood $N$, $\pi = \pi_1(Q\setminus K)$, and $\hat\pi = \pi_1(\partial N)$. When $Q={\mathbb{R}}^3$, we assume for simplicity that the framing on $K$ is the Seifert framing. Before addressing the cord algebra $H^{\str}_0(K) \cong S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi)$ itself, we first note that the modified version $\hat{H}^{\str}_0(K) \cong S^{\pi\pi}(\pi,\hat \pi)$ is precisely ${\mathbb{Z}}\pi$. \begin{proposition} For a knot $K\subset Q$, we have an isomorphism as ${\mathbb{Z}}\pi$-NC-algebras $$ S^{\pi\pi}(\pi,\hat \pi) \cong {\mathbb{Z}}\pi. 
$$ \label{prop:groupring} \end{proposition} \begin{proof} The map ${\mathbb{Z}}\pi \to S^{\pi\pi}(\pi,\hat \pi)$, $x \mapsto [x]$, has inverse $\phi$ given by \begin{align*} \phi([x_1]\{\alpha_1\}[x_2]\{\alpha_2\}&\cdots [x_{n-1}]\{\alpha_{n-1}\}[x_n]) \\ &= x_1(1-\mu)\alpha_1x_2(1-\mu)\alpha_2\cdots x_{n-1}(1-\mu)\alpha_{n-1}x_n; \end{align*} note that $\phi$ is well-defined (just check the string relations) and preserves ring structure. \end{proof} The corresponding description of the cord algebra $S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi)$ is a bit more involved, and we give two interpretations. \begin{proposition} For a knot $K \subset Q$, we have a ${\mathbb{Z}}$-module isomorphism $$ S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi) \cong {\mathbb{Z}}[\lambda^{\pm 1}] \oplus {\mathbb{Z}}\pi. $$ For any $\alpha\in{\mathbb{Z}}[\lambda^{\pm 1}]$, the left and right actions of $\alpha$ on $S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi)$ induced from the ${\mathbb{Z}}\hat{\pi}$-NC-algebra structure on $S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi)$ coincide under this isomorphism with the actions of $\alpha$ on the factors of ${\mathbb{Z}}[\lambda^{\pm 1}] \oplus {\mathbb{Z}}\pi$ by left and right multiplication. \label{prop:directsum} \end{proposition} \begin{proof} The isomorphism ${\mathbb{Z}}[\lambda^{\pm 1}] \oplus {\mathbb{Z}}\pi \to S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi)$ sends $(\alpha,0)$ to $\{\alpha\}$ and $(0,x)$ to $\{1\}[x]\{1\}$. Note that this map commutes with left and right multiplication by powers of $\lambda$; for example, $\{\lambda^k\alpha\} = \lambda^k\{\alpha\}$ and $\{1\}[\lambda^k x]\{1\} = \{\lambda^k\}[x]\{1\} = \lambda^k\{1\}[x]\{1\}$. To see that the map is a bijection, note that the generators of $S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi)$ can be separated into ``trivial'' broken words of the form $\{\alpha\}$ and ``nontrivial'' broken words of length at least $3$. Using the string relations, we can write any trivial broken word uniquely as a sum of some $\{\lambda^a\}$ and some nontrivial broken words: $$ \{\lambda^a \mu^b\} = \{\lambda^a\} - \sum_{i=0}^{b-1} \{\lambda^a\}[\mu^{i}]\{1\}$$ if $b \geq 0$, and similarly for $b<0$. On the other hand, any nontrivial broken word in $S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi)$ can be written uniquely as a ${\mathbb{Z}}$-linear combination of words of the form $\{1\}[x]\{1\}$, $x\in\pi$: just use the map $\phi$ from the proof of Proposition~\ref{prop:groupring} to reduce any nontrivial broken word to broken words of length $3$, and then apply the identity $\{\alpha_1\}[x]\{\alpha_2\} = \{1\}[\alpha_1x\alpha_2]\{1\}$. \end{proof} \begin{proposition} For knots $K \subset {\mathbb{R}}^3$, string homology $H^{\str}_0(K)\cong S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi)$, and thus knot contact homology, detects the unknot $U$. More precisely, left multiplication by $\lambda-1$ on $S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi)$ has nontrivial kernel if and only if $K$ is unknotted. \label{prop:distinguish} \end{proposition} \begin{proof} First, if $K = U$, then $\lambda = 1$ in $\pi$, and so $$ (\lambda-1)\{1\}[1]\{1\} = \{\lambda\}[1]\{1\}-\{1\}[1]\{1\} = \{1\}[\lambda]\{1\}-\{1\}[1]\{1\} = 0 $$ in $H^{\str}_0(U)$, while $\{1\}[1]\{1\} \neq 0$ by the proof of Proposition~\ref{prop:directsum}. Next assume that $K \neq U$, and consider the effect of multiplication by $\lambda-1$ on ${\mathbb{Z}}[\lambda^{\pm 1}] \oplus {\mathbb{Z}}\pi$. Clearly this map is injective on the ${\mathbb{Z}}[\lambda^{\pm 1}]$ summand; we claim that it is injective on the ${\mathbb{Z}}\pi$ summand as well.
Indeed, suppose that some nontrivial sum $\sum a_i [x_i] \in {\mathbb{Z}}\pi$ is unchanged by multiplication by $\lambda$. Then $[\lambda^k x_1]$ must appear in this sum for all $k$, whence the sum is infinite since $\hat{\pi}$ injects into $\pi$ by the Loop Theorem. \end{proof} \begin{remark} It was first shown in \cite{Ng:1} that the cord algebra detects the unknot. That proof uses a relationship between the cord algebra and the $A$-polynomial, along with the fact that the $A$-polynomial detects the unknot \cite{DG}, which in turn relies on gauge-theoretic results of Kronheimer and Mrowka \cite{KM}. As noted previously, by contrast, the above proof that string homology detects the unknot uses only the Loop Theorem. Either proof shows that knot contact homology detects the unknot. However, we emphasize that for our argument, unlike the argument in \cite{Ng:1}, it is crucial that we use the fully noncommutative version of knot contact homology. \end{remark} We can recover the multiplicative structure on $S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi)$ under the isomorphism of Proposition~\ref{prop:directsum} as follows. On ${\mathbb{Z}}[\lambda^\pm] \oplus {\mathbb{Z}}\pi$, define a multiplication operation $*$ by $$ (\lambda^{k_1},x_1) * (\lambda^{k_2},x_2) = (\lambda^{k_1+k_2}, \lambda^{k_1} x_2 + x_1 \lambda^{k_2} + x_1x_2 - x_1\mu x_2). $$ It is easy to check that $*$ is associative, and that the isomorphism $S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi) \cong ({\mathbb{Z}}[\lambda^{\pm}] \oplus {\mathbb{Z}}\pi,*)$ now becomes an isomorphism of ${\mathbb{Z}}\hat{\pi}$-NC-algebras, where ${\mathbb{Z}}[\lambda^{\pm 1}] \oplus {\mathbb{Z}}\pi$ is viewed as a ${\mathbb{Z}}\hat{\pi}$-NC-algebra via the map ${\mathbb{Z}}[\hat{\pi}] \to {\mathbb{Z}}[\lambda^{\pm 1}] \oplus {\mathbb{Z}}\pi$ sending $\lambda$ to $(\lambda,0)$ and $\mu$ to $(1,0)-(0,1)$. We now turn to another formulation of string homology in terms of the group ring ${\mathbb{Z}}\pi$. This formulation is a bit cleaner than the one in Proposition~\ref{prop:directsum}, as the multiplication operation is easier to describe. \begin{proposition} For a knot $K\subset Q$, let $\mathfrak{R}$ denote the subring of ${\mathbb{Z}}\pi$ generated by $\hat{\pi}$ and $\im(1-\mu)$, where $1-\mu$ denotes the map ${\mathbb{Z}}\pi\to{\mathbb{Z}}\pi$ given by left multiplication by $1-\mu$. There is a ring homomorphism \[ \psi :\thinspace H^{\str}_0(K) \to \mathfrak{R} \] determined by $\psi(\{\alpha\}) = \alpha$ and $\psi(\{1\}[x]\{1\}) = x-\mu x$. If $\hat\pi \to \pi$ is an injection (in particular, if $K \subset{\mathbb{R}}^3$ is nontrivial), then $\psi$ is an isomorphism of ${\mathbb{Z}}\hat{\pi}$-NC-algebras. \label{prop:subring} \end{proposition} \begin{proof} It is easy to check that $\psi$ respects all of the string relations defining $S^{\hat{\pi}\hat{\pi}}(\pi,\hat \pi) \cong H^{\str}_0(K)$: the key relation $[x_1x_2] - [x_1\mu x_2] - [x_1]\{1\}[x_2]$ is sent to $(1-\mu)x_1x_2-(1-\mu)x_1\mu x_2-(1-\mu)x_1(1-\mu)x_2 = 0$. Thus $\psi$ is well-defined as a map $H^{\str}_0(K) \to \mathfrak{R}$. This map acts as the identity on $\hat{\pi}$ and thus is a ${\mathbb{Z}}\hat{\pi}$-NC-algebra map. Since $\psi$ is surjective by construction, it remains only to show that $\psi$ is injective when $\hat\pi\to\pi$ is injective. Suppose that \begin{equation} 0 = \psi\left(\sum_i a_i\{\lambda^i\} + \sum_j b_j\{1\}[x_j]\{1\}\right) = \sum_i a_i\lambda^i + \sum_j b_j (1-\mu)x_j \label{eq:injective} \end{equation} for some $a_i,b_j\in{\mathbb{Z}}$ and $x_j\in\pi$. 
We claim that $b_j = 0$ for all $j$, whence $a_i = 0$ for all $i$ since $\hat{\pi}$ injects into $\pi$ for $K$ nontrivial. Assume without loss of generality that the framing on $K$ is the $0$-framing (changing framing simply replaces $\lambda$ by $\lambda \mu^k$ for some $k$). Then the linking number with $K$ gives a homomorphism ${\rm lk}\colon\thinspace\pi\to{\mathbb{Z}}$ satisfying ${\rm lk}(\lambda) = 0$ and ${\rm lk}(\mu) = 1$. If $\sum_j b_j x_j$ is not a trivial sum, then let $x_\ell$ be the contributor to this sum of maximal linking number. The term $-b_\ell\mu x_\ell$ in $\sum_j b_j (1-\mu) x_j$ cannot be canceled by any other term in that sum; thus for (\ref{eq:injective}) to hold, $x_\ell$ must have linking number $-1$. But a similar argument shows that the contributor to $\sum_j b_j x_j$ of minimal linking number must have linking number $0$, contradiction. We conclude that $\sum_j b_j x_j$ must be a trivial sum, as claimed. \end{proof} \begin{remark} To be clear, as a knot invariant derived from knot contact homology, the cord algebra $H^{\str}_0(K)$ (for $K\neq U$) is the ring $\mathfrak{R} \subset {\mathbb{Z}}\pi$ along with the map ${\mathbb{Z}}\hat{\pi} = {\mathbb{Z}}[\lambda^{\pm 1},\mu^{\pm 1}] \to \mathfrak{R}$. Proposition~\ref{prop:subring} implies that the ${\mathbb{Z}}[\lambda^{\pm 1},\mu^{\pm 1}]$-NC-algebra structure on ${\mathbb{Z}}\pi = \hat{H}^{\str}_0(K)$ completely determines $H^{\str}_0(K)$. We do not know if $H^{\str}_0(K)$ determines $\hat{H}^{\str}_0(K)$ as well, nor whether $H^{\str}_0(K)$ is a complete knot invariant.\footnote{Added in revision: it has now been proven by Shende \cite{Shende}, and then reproven in \cite{ENSh}, that the Legendrian isotopy type of $\Lambda_K$ completely determines the knot $K$. The proof in \cite{ENSh} relies on the present paper and shows that an enhanced version of knot contact homology (or of $H^{\str}_0(K)$) determines $K$. The question of whether $H^{\str}_0(K)$ is a complete invariant remains open.} On the other hand, $\hat{H}^{\str}_0(K)$ as a ring is a complete knot invariant for prime knots in ${\mathbb{R}}^3$ up to mirroring, as we can see as follows. By Proposition~\ref{prop:groupring}, $\hat{H}^{\str}_0(K) \cong {\mathbb{Z}}[\pi]$, and for prime knots $K$, Gordon and Luecke \cite{GL} show that $\pi = \pi_1({\mathbb{R}}^3 \setminus K)$ determines $K$ up to mirroring. Furthermore, $\pi$ is a left-orderable group, and for left-orderable $G$ the ring isomorphism type of ${\mathbb{Z}}[G]$ determines the group isomorphism type of $G$ \cite{LR}. We thank Tye Lidman for pointing this out to us. \end{remark} We conclude this section with two examples. \begin{example} When $K$ is the unknot $U$, then $H^{\str}_0(U) \cong {\mathbb{Z}}[\lambda^{\pm 1},\mu^{\pm 1}]/((\lambda-1)(\mu-1))$, while $\hat{H}^{\str}_0(U) \cong {\mathbb{Z}}[\mu^{\pm 1}]$. \label{ex:unknot} The ring homomorphism $\psi$ from Proposition~\ref{prop:subring}, which is not injective, is given by $\psi(\lambda)=1$, $\psi(\mu)=\mu$. The isomorphism from Proposition~\ref{prop:directsum} is the (inverse of the) map \begin{align*} {\mathbb{Z}}[\lambda^{\pm 1}]\oplus{\mathbb{Z}}[\mu^{\pm 1}] &\to {\mathbb{Z}}[\lambda^{\pm 1},\mu^{\pm 1}]/((\lambda-1)(\mu-1)) \\ (\alpha,\beta) &\mapsto \alpha+(1-\mu)\beta.
\end{align*} As noticed by Lidman, this computation of $H^{\str}_0(U)$ along with Proposition~\ref{prop:subring} gives an alternative (and shorter) proof that knot contact homology detects the unknot (Proposition~\ref{prop:distinguish}), and more generally that this continues to hold even if the knot is not assumed to be Seifert framed (Corollary~\ref{cor:unknot}). \begin{proof}[Proof of Corollary~\ref{cor:unknot}] Suppose that $H^{\str}_0(K) \cong H^{\str}_0(U)$ where $K$ is a framed oriented knot and $U$ is the unknot with some framing. By changing the framing of both, we can assume that $K$ has its Seifert framing. If $K$ is knotted, then ${\mathbb{Z}}\pi$ has no zero divisors since $\pi$ is left-orderable, and thus $H^{\str}_0(K) \subset {\mathbb{Z}}\pi$ also has no zero divisors by Proposition~\ref{prop:subring}. On the other hand, $H^{\str}_0(U) \cong {\mathbb{Z}}[\lambda^{\pm 1},\mu^{\pm 1}]/((\lambda\mu^f-1)(\mu-1))$ for some $f\in{\mathbb{Z}}$. Thus $K$ must be the unknot and must further have the same framing as $U$. \end{proof} In \cite{GLid}, Gordon and Lidman extend this line of argument (i.e., applying Proposition~\ref{prop:subring}) to prove that knot contact homology detects torus knots as well as cabling and compositeness. \end{example} \begin{example} When $K$ is the right-handed trefoil $T$, a slightly more elaborate version of the calculation of the cord algebra from \cite{Ng:1} (see also \cite{Ngsurvey}) gives the following expression for $H^{\str}_0(T)$: it is generated by $\lambda^{\pm 1}$, $\mu^{\pm 1}$, and one more generator $x$, along with the relations: \begin{align*} \lambda\mu &= \mu\lambda \\ \lambda\mu^6 x &= x \lambda\mu^6 \\ -1+\mu+x-\lambda\mu^5 x \mu^{-3} x \mu^{-1} &= 0 \\ 1-\mu-\lambda\mu^4 x \mu^{-2} - \lambda\mu^5 x \mu^{-2} x \mu^{-1} &= 0. \end{align*} On the other hand, $\hat{H}^{\str}_0(T) = {\mathbb{Z}}\pi$ is the ring generated by $\mu^{\pm 1}$ and $a^{\pm 1}$ modulo the relation $\mu a \mu = a \mu a$; the longitudinal class is $\lambda = a\mu a^{-1}\mu a\mu^{-3}$. The explicit map from $H^{\str}_0(T)$ to ${\mathbb{Z}}\pi$ is given by: \begin{align*} \mu & \mapsto \mu \\ \lambda & \mapsto \lambda = a\mu a^{-1}\mu a\mu^{-3} \\ x & \mapsto (1-\mu) a \mu^{-1} a^{-1}. \end{align*} It can be checked that this map preserves the relations in $H^{\str}_0(T)$. \end{example} \section{Roadmap to the proof of Theorem~\ref{thm:main}}\label{sec:roadmap} The remainder of this paper is devoted to the proof of Theorem~\ref{thm:main}. To avoid getting lost in the details, we give here a roadmap to the proof and explain the technical issues to be addressed along the way. The proof follows the scheme that is described for a different situation in~\cite{CL} and consists of 3 steps. Let $\AA$ be the free ${\mathbb{Z}}\hat\pi$-NC-algebra generated by Reeb chords and $\partial_\Lambda:\AA\to\AA$ the boundary operator for Legendrian contact homology. For a Reeb chord $a$ and an integer $\ell\geq 0$ denote by $\mathcal{M}_\ell(a)$ the moduli space of $J$-holomorphic disks in $T^*Q$ with one positive puncture asymptotic to $a$ and boundary on $Q\cup L_K$ with $2\ell$ corners at which it switches between $L_K$ and $Q$. {\bf Step 1. 
}Show that $\mathcal{M}_\ell(a)$ can be compactified to a manifold with corners $\overline\mathcal{M}_\ell(a)$ and that the generating functions $\phi(a):=\sum_{\ell=0}^\infty\overline\mathcal{M}_\ell(a)$ (extended as algebra maps to $\AA$) satisfy the relation $$ \partial\phi = \phi\partial_\Lambda - \delta\phi, $$ where $\delta\overline\mathcal{M}_\ell(a)$ is the subset of elements in $\overline\mathcal{M}_\ell(a)$ that intersect $K$ at the interior of some boundary string. {\bf Step 2. }Construct a chain complex $(C_*(\Sigma),\partial+\delta)$ of suitable chains of broken strings such that $\phi$ induces a chain map $$ \Phi:(\AA,\partial_\Lambda)\to(C(\Sigma),\partial+\delta), $$ and the homology $H_0(\Sigma,\partial+\delta)$ agrees with the string homology $H_0^{\rm string}(K)$ as defined in Section~\ref{ss:string0}. {\bf Step 3. }Prove that $\Phi$ induces an isomorphism on homology in degree zero. \medskip Step 1 occupies Sections~\ref{S:mdlisp} to~\ref{sec:gluing}. It involves detailed descriptions of \begin{itemize} \item the behavior of holomorphic disks at corner points; \item compactifications of moduli spaces of holomorphic disks; \item transversality and gluing of moduli spaces. \end{itemize} In Step 2 (Sections~\ref{sec:holo} to~\ref{sec:chain}) we encounter the following problem: The direct approach to setting up the complex $(C(\Sigma),\partial+\delta)$ would involve chains in spaces of broken strings with varying number of switches. These spaces could probably be given smooth structures using the polyfold theory by Hofer, Wysocki and Zehnder~\cite{HWZ1}. Here we choose a different approach, keeping the number of switches fixed and inserting small ``spikes'' in the definition of the string operation $\delta=\delta_Q+\delta_N$. Since this involves non-canonical choices, one does not expect identities such as $\partial\delta+\delta\partial=0$ to hold strictly but only up to homotopy, thus leading to an $\infty$-structure as described by Sullivan in~\cite{Su}. We avoid $\infty$-structures by carefully defining $\delta$ via induction over the dimension of chains such that all identities hold strictly on the chain level. Step 3 (Section~\ref{sec:iso}) follows the scheme described in~\cite{CL}. This involves \begin{itemize} \item a length estimate for the boundary of holomorphic disks, which implies that $\Phi$ respects the filtrations of $\AA$ and $C(\Sigma)$ by the actions of Reeb chords and the total lengths of $Q$-strings, respectively. \item construction of a length-decreasing chain homotopy deforming $C(\Sigma)$ to chains $C(\Sigma_{\rm lin})$ of broken strings all of whose $Q$-strings are {\em linear straight line segments} (at this point we specialize to $Q={\mathbb{R}}^3$); \item Morse-theoretical arguments on the space $\Sigma_{\rm lin}$ to prove that $\Phi$ induces an isomorphism on degree zero homology. \end{itemize} \section{The chain map from Legendrian contact homology to string homology}\label{sec:chain} In this section we define a chain map $\Phi\colon C_*(\mathcal{R})\to C_{\ast}(\Sigma)$ from a complex computing Legendrian contact homology to the string chain complex defined in the previous section. 
The boundary operator on $C_*(\mathcal{R})$ is defined using moduli spaces of holomorphic disks in ${\mathbb{R}}\times S^{\ast}Q$ with Lagrangian boundary condition ${\mathbb{R}}\times\Lambda_{K}$ and the map $\Phi$ is defined using moduli spaces of holomorphic disks in $T^{\ast}Q$ with Lagrangian boundary condition $Q\cup L_{K}$, where the boundary is allowed to switch back and forth between the two irreducible components of the Lagrangian at corners as in Lagrangian intersection Floer homology. We will describe these spaces and their properties, as well as define the algebra and the chain map. In order not to obscure the main lines of argument, we postpone the technicalities involved in detailed proofs to Sections \ref{S:mdlisp} -- \ref{sec:gluing}. \subsection{Holomorphic disks in the symplectization}\label{sec:symp-disks} Consider a contact $(2n-1)$-manifold $(M,\lambda)$ with a closed Legendrian $(n-1)$-submanifold $\Lambda$. For the purposes of this paper we only consider the case that $M=S^*Q$ is the cosphere bundle of $Q={\mathbb{R}}^3$ with its standard contact form $\lambda=p\,dq$ and $\Lambda=\Lambda_K$ is the unit conormal bundle of an oriented framed knot $K\subset Q$, but the construction works more generally for any pair $(M,\Lambda)$ for which $M$ has no contractible closed Reeb orbits, see Remark~\ref{rem:cont-hom} below. Denote by $R$ the Reeb vector field of $\lambda$. A {\em Reeb chord} is a solution $a\colon [0,T]\to M$ of $\dot a=R$ with $a(0),a(T)\in\Lambda$. Reeb chords correspond bijectively to {\em binormal chords} of $K$, i.e., geodesic segments meeting $K$ orthogonally at their endpoints. As usual, we assume throughout that $\Lambda$ is chord generic, i.e., each Reeb chord corresponds to a Morse critical point of the distance function on $K \times K$. In order to define Maslov indices, one usually chooses for each Reeb chord $a\colon [0,T]\to M$ {\em capping paths} connecting $a(0)$ and $a(T)$ in $\Lambda$ to a base point $x_0\in\Lambda$. Then one can assign to each $a$ completed by the capping paths a {\em Maslov index} $\mu(a)$, see \cite[Appendix A]{CEL}. In the case under consideration ($M=S^{\ast}{\mathbb{R}}^3$ and $\Lambda=\Lambda_{K}$) the Maslov class of $\Lambda$ equals $0$, so the Maslov index does not depend on the choice of capping paths. It is given by $\mu(a)={\rm ind}(a)+1$, where ${\rm ind}(a)$ equals the index of $a$ as a critical point of the distance function on $K\times K$, see \cite{EENS}. We define the {\em degree} of a Reeb chord $a$ as \[ |a|:=\mu(a)-1={\rm ind}(a), \] and the degree of a word $\mathbf{b}=b_1b_2\cdots b_m$ of Reeb chords as $$ |\mathbf{b}| := \sum_{j=1}^{m}|b_j|. $$ Given $a$ and $\mathbf{b}$, we write $\mathcal{M}^{\rm sy}(a;\mathbf{b})$ for the moduli space of $J$-holomorphic disks $u\colon (D,\partial D)\to({\mathbb{R}}\times M,{\mathbb{R}}\times\Lambda)$ with one positive boundary puncture asymptotic to the Reeb chord strip over $a$ at the positive end of the symplectization, and $m$ negative boundary punctures asymptotic to the Reeb chord strips over $b_1,\dots, b_m$ at the negative end of the symplectization. Here $J$ is an ${\mathbb{R}}$-invariant almost complex structure on ${\mathbb{R}}\times M$ compatible with $\lambda$. For generic $J$, the moduli space $\mathcal{M}^{\rm sy}(a;\mathbf{b})$ is a manifold of dimension \[ \dim(\mathcal{M}^{\rm sy}(a;\mathbf{b}))=|a|-|\mathbf{b}|=|a|-\sum_{j=1}^{m}|b_j|, \] see Theorem \ref{t:sy}. 
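For concreteness, we record a consequence of these definitions which the reader may keep in mind (it follows directly from the dimension formula and the description of Reeb chords above): since Reeb chords of $\Lambda_K$ correspond to critical points of the distance function on the $2$-dimensional manifold $K\times K$, every Reeb chord $a$ has degree $|a|={\rm ind}(a)\in\{0,1,2\}$. For example, if $|a|=1$ and $\mathbf{b}=b$ is a single chord with $|b|=0$, then
\[
\dim\bigl(\mathcal{M}^{\rm sy}(a;b)\bigr)=|a|-|b|=1,
\]
so that after dividing by the ${\mathbb{R}}$-action discussed below this moduli space is $0$-dimensional.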
In fact, the moduli spaces correspond to the zero set of a Fredholm section of a Banach bundle which can be made transverse by perturbing the almost complex structure. Moreover, there exists a system of coherent (i.e., gluing-compatible) orientations of the corresponding index bundles over the configuration spaces, and this system induces orientations on all the moduli spaces. By our choice of almost complex structure, ${\mathbb{R}}$ acts on $\mathcal{M}^{\rm sy}(a;\mathbf{b})$ by translations in the target ${\mathbb{R}}\times M$ and we write $\mathcal{M}^{\rm sy}(a;\mathbf{b})/{\mathbb{R}}$ for the quotient, which is then an oriented manifold of dimension $|a|-|\mathbf{b}|-1$. Finally, we discuss the compactness properties of $\mathcal{M}^{\rm sy}(a;\mathbf{b})/{\mathbb{R}}$. The moduli space $\mathcal{M}^{\rm sy}(a;\mathbf{b})/{\mathbb{R}}$ is generally not compact but admits a compactification by multilevel disks, where a multilevel disk is a tree of disks with a top level disk in $\mathcal{M}^{\rm sy}(a;\mathbf{b}^{1})$, $\mathbf{b}^{1}=b^{1}_{1},\dots,b_{m_1}^{1}$, second level disks in $\mathcal{M}^{\rm sy}(b_{i}^{1};\mathbf{b}^{2,i})$ attached at the negative punctures of the top level disk, etc. See Figure~\ref{fig:sy-sy} below. It follows from the dimension formula above that the formal dimension of a multilevel disk (that is, of the total disk obtained as the union of its levels) is the sum of the dimensions of all its components. Consequently, for generic almost complex structure, if $\dim(\mathcal{M}^{\rm sy}(a;\mathbf{b}))=1$ then $\mathcal{M}^{\rm sy}(a;\mathbf{b})/{\mathbb{R}}$ is a compact 0-dimensional manifold, and if $\dim(\mathcal{M}^{\rm sy}(a;\mathbf{b}))=2$ then the boundary of $\mathcal{M}^{\rm sy}(a;\mathbf{b})/{\mathbb{R}}$ consists of two-level disks in which each level is a disk lying in a moduli space of dimension $1$ (possibly together with trivial Reeb chord strips). The simplest version of Legendrian contact homology would be defined by the free ${\mathbb{Z}}$-algebra generated by the Reeb chords, with differential counting rigid holomorphic disks. In the following subsection we will define a refined version which also incorporates the boundary information of holomorphic disks. \subsection{Legendrian contact homology}\label{sec:leg} In this subsection we define a version of Legendrian contact homology that will be directly related to the string homology of Section~\ref{sec:string-ref}, see \cite{E_rsft} for a similar construction in rational symplectic field theory. The usual definition of Legendrian contact homology is a quotient of our version. We keep the notation from Section~\ref{sec:symp-disks}. Fix an integer $m\geq 3$. For points $x,y\in\Lambda$ we denote by $P_{x,y}\Lambda$ the space of $C^m$ paths $\gamma:[a,b]\to\Lambda$ with $\gamma(a)=x$ and $\gamma(b)=y$ whose first $m$ derivatives vanish at the endpoints. Here the interval $[a,b]$ is allowed to vary. The condition at the endpoints ensures that concatenation of such paths yields again $C^m$ paths. Fix a base point $x_0\in\Lambda$ and denote by $\Omega_{x_0}\Lambda=P_{x_0x_0}\Lambda$ the Moore loop space based at $x_0$.
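Explicitly, with the usual Moore path conventions (spelled out here only to make the strict associativity used below transparent), the concatenation of $\gamma_1\colon[a_1,b_1]\to\Lambda$ and $\gamma_2\colon[a_2,b_2]\to\Lambda$ with $\gamma_1(b_1)=\gamma_2(a_2)$ is the path defined on $[a_1,\,b_1+(b_2-a_2)]$ by
\[
(\gamma_1\cdot\gamma_2)(t) =
\begin{cases}
\gamma_1(t), & t\in[a_1,b_1],\\
\gamma_2(t-b_1+a_2), & t\in[b_1,\,b_1+(b_2-a_2)].
\end{cases}
\]
Since domains are concatenated rather than rescaled, this operation is strictly associative, and the vanishing of the first $m$ derivatives at the endpoints ensures that the concatenation is again a $C^m$ path with the same vanishing property; this is what makes the ring structures on $\mathcal{R}$ and $C(\mathcal{R})$ introduced below strictly associative.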
\begin{figure} \labellist \small\hair 2pt \pinlabel ${\color{red} a_1}$ at 22 148 \pinlabel ${\color{red} a_2=a}$ at 111 148 \pinlabel ${\color{red} a_3}$ at 200 148 \pinlabel ${\color{red} b_1}$ at 38 4 \pinlabel ${\color{red} b_2}$ at 110 4 \pinlabel ${\color{red} b_3}$ at 182 4 \pinlabel ${\color{blue} \alpha_1}$ at 29 298 \pinlabel ${\color{blue} \alpha_2}$ at 46 219 \pinlabel ${\color{blue} \alpha_3}$ at 162 223 \pinlabel ${\color{blue} \alpha_4}$ at 140 318 \pinlabel ${\color{blue} \beta_1}$ at 26 114 \pinlabel ${\color{blue} \beta_2}$ at 75 87 \pinlabel ${\color{blue} \beta_3}$ at 146 87 \pinlabel ${\color{blue} \beta_4}$ at 196 114 \pinlabel ${\color{blue} x_0}$ at 93 359 \pinlabel ${\color{blue} u}$ at 109 110 \endlabellist \centering \includegraphics[width=0.5\textwidth]{figures/Reeb-string} \caption{The definition of $\partial(u)$ and $\partial(u) \,\cdot_{i}\, \mathbf{a}$.} \label{fig:Reeb-string} \end{figure} \begin{definition} A {\em Reeb string with $\ell$ chords} is an expression $\alpha_1a_1\alpha_2a_2\cdots \alpha_\ell a_\ell \alpha_{\ell+1}$, where the $a_i\colon[0,T_i]\to M$ are Reeb chords and the $\alpha_i$ are elements in the path spaces $$ \alpha_1\in P_{x_0a_1(T_1)},\qquad \alpha_i\in P_{a_{i-1}(0)a_i(T_i)} \text{ for }2\leq i\leq \ell,\qquad \alpha_{\ell+1}\in P_{a_\ell(0)x_0}. $$ \end{definition} See the top of Figure~\ref{fig:Reeb-string}. Note that the $\alpha_i$ and the {\em negatively traversed} Reeb chords $a_i$ fit together to define a loop in $M$ starting and ending at $x_0$. Concatenating all the $\alpha_i$ and $a_i$ in a Reeb string with the appropriate capping paths, we can view each $\alpha_i$ as an element in the based loop space $\Omega_{x_0}\Lambda$. However, we will usually not take this point of view. Boundaries of holomorphic disks in the symplectization give rise to Reeb strings as follows. Consider a holomorphic disk $u$ belonging to a moduli space $\mathcal{M}^{\rm sy}(a;\mathbf{b})$ as above, with Reeb chords $a:[0,T]\to M$ and $b_i:[0,T_i]\to M$, $i=1,\dots,\ell$. Its boundary arcs, taken in counterclockwise order with their induced orientations and projected to $\Lambda$, define paths $\beta_1,\dots,\beta_{\ell+1}$ in $\Lambda$ as shown in Figure~\ref{fig:Reeb-string}, i.e. $$ \beta_1\in P_{a(T)b_1(T_1)},\qquad \beta_i\in P_{b_{i-1}(0)b_i(T_i)} \text{ for }2\leq i\leq \ell,\qquad \beta_{\ell+1}\in P_{b_\ell(0)a(0)}. $$ We denote the alternating word of paths and Reeb chords obtained in this way as the boundary of $u$ by \begin{equation}\label{eq:pu} \partial(u) := \beta_1b_1\beta_2b_2\cdots\beta_\ell b_\ell\beta_{\ell+1}. \end{equation} Note that the $\beta_i$ and the negatively traversed Reeb chords $b_i$ fit together to define a path in $M$ from $a(T)$ to $a(0)$. We obtain from $\partial(u)$ a Reeb string if we extend $\beta_1$ and $\beta_{\ell+1}$ to the base point $x_0$ by the capping paths of $a$. For $\ell\geq 0$ we denote by $\mathcal{R}^\ell$ the space of Reeb strings with $\ell$ chords, equipped with the $C^m$ topology on the path spaces. Note that different collections of Reeb chords correspond to different components. Concatenation at the base point gives $$ \mathcal{R} := \amalg_{\ell\geq 0}\mathcal{R}^\ell $$ the structure of an H-space. Note that the sub-H-space $\mathcal{R}^0=\Omega_{x_0}\Lambda$ agrees with the Moore based loop space with its Pontrjagin product.
It carries two gradings: the degree $d$ as a singular chain, which we will refer to as the {\em chain degree}, and the degree $\sum_{i=1}^{\ell}|a_i|$ of the underlying Reeb chords, which we will refer to as the {\em chord degree}. For sign rules we think of the {\em chain coming first and the Reeb chords last}. The total grading is given by the sum of the two degrees. Recall that it does not depend on the choice of capping paths. Concatenation of Reeb strings at the base point and product of chains gives $C(\mathcal{R})$ the structure of a (noncommutative but strictly associative) graded ring. Note that it contains the subring $$ C(\mathcal{R}^0) = C(\Omega_{x_0}\Lambda). $$ Next we define the differential $$ \partial_\Lambda=\partial^{\rm sing}+\partial^{\rm sy}\colon C(\mathcal{R})\to C(\mathcal{R}). $$ Here $\partial^{\rm sing}$ is the singular boundary and $\partial^{\rm sy}$ is defined as follows. Pick a generic compatible cylindrical almost complex structure $J$ on the symplectization ${\mathbb{R}}\times M$. Consider a punctured $J$-holomorphic disk $u\colon D\to{\mathbb{R}}\times M$ in $\mathcal{M}^{\rm sy}(a;\mathbf{b})$. If the Reeb chord $a=a_i$ appears in a Reeb string $\mathbf{a}=\alpha_{1}a_1\dots a_m\alpha_{m+1}$, then we can replace $a_i$ by $\partial(u)$ to obtain a new Reeb string which we denote by $$ \partial(u) \,\cdot_{i}\, \mathbf{a} := \alpha_1a_1\cdots \widetilde\alpha_i\partial(u)\widetilde\alpha_{i+1}\cdots a_m \alpha_{m+1}. $$ Here $\partial(u)$ is defined in~\eqref{eq:pu} and the paths $\widetilde\alpha_i,\widetilde\alpha_{i+1}$ are the concatenations of $\alpha_i,\alpha_{i+1}$ with the paths $\beta_1,\beta_{\ell+1}$ in $\partial(u)$, respectively. See Figure~\ref{fig:Reeb-string}. For a chain $\mathbf{a}\in C(\mathcal{R})$ of Reeb strings of type $\mathbf{a}=\alpha_{1}a_1\dots a_m\alpha_{m+1}$ we now define $$ \partial^{\rm sy}(\mathbf{a}) := \quad \sum_{i=1}^m \!\! \sum_{\begin{smallmatrix} |a_i|-|\mathbf{b}|=1 \\ u\in\mathcal{M}^{\rm sy}(a_i;\mathbf{b})/{\mathbb{R}} \end{smallmatrix}} \varepsilon (-1)^{d+|a_1|+\cdots+|a_{i-1}|} \partial(u)\,\cdot_{i}\,\mathbf{a}, $$ where $d$ is the chain degree of $\mathbf{a}$ and $\varepsilon$ is the sign from the orientation of $\mathcal{M}^{\rm sy}(a_i;\mathbf{b})/{\mathbb{R}}$ as a compact oriented 0-manifold (i.e., points with signs). Note that $\partial^{\rm sy}$ preserves the chain degree and decreases the chord degree by $1$, whereas $\partial^{\rm sing}$ preserves the chord degree and decreases the chain degree by $1$. In particular, $\partial_\Lambda$ has degree $-1$ with respect to the total grading. The main result about the contact homology algebra that we need is summarized in the following theorem. \begin{thm}\label{thm:comt-hom} The differential $\partial_\Lambda\colon C(\mathcal{R})\to C(\mathcal{R})$ satisfies $\partial_\Lambda^2=0$ and the {\em Legendrian contact homology} $$ H^{\rm contact}(\Lambda) := \ker\partial_\Lambda/\im\partial_\Lambda $$ is independent of all choices. \end{thm} \begin{proof} In the case that we use it, for $M=S^*{\mathbb{R}}^{3}$ and $\Lambda=\Lambda_K$, the proof is an easy adaptation of the one in \cite{EES2,EESori} and \cite{Rizell}, see also \cite{EENS}. Consider first the equation for the differential.
The equation $\partial_\Lambda^2=0$ follows from our description of the boundary of the moduli spaces $\mathcal{M}^{\rm sy}(a;\mathbf{b})$ of dimension $2$ in Section~\ref{sec:symp-disks}, which shows that contributions to $(\partial^{\rm sy})^{2}$ are in oriented one-to-one correspondence with the boundary points of a compact oriented 1-manifold and hence cancel out. The relations $(\partial^{\rm sing})^2=0$ and $\partial^{\rm sing}\partial^{\rm sy} + \partial^{\rm sy}\partial^{\rm sing} =0$ are clear. To prove the invariance statement we use a bifurcation method similar to \cite[section 4.3]{EESori}. Consider a generic $1$-parameter family $(\Lambda_{s},J_{s})$, $s\in S=[0,1]$, of Legendrian submanifolds and almost complex structures. By genericity of the family there is a finite set of points $s_{1}<s_{2}<\dots < s_{m}$ such that in $S\setminus\{s_{1},\dots,s_{m}\}$ all Reeb chords of $\Lambda_{s}$ are transverse, all Reeb chords have distinct actions, and all holomorphic disks determined by $(\Lambda_{s},J_{s})$ have dimension at least $1$ (i.e.~if we write $\mathcal{M}^{{\rm sy}}_{s}$ for moduli spaces determined by $(\Lambda_{s},J_{s})$ then $\dim\mathcal{M}^{{\rm sy}}_{s}(a;\mathbf{b})\ge 1$ if the moduli space is nonempty). Furthermore, the points $s_{j}$ are of three kinds: \begin{itemize} \item handle slides, where all Reeb chords are nondegenerate but where there is a transversely cut out disk of formal dimension $0$ (i.e., there exists one $\mathcal{M}^{{\rm sy}}(a;\mathbf{b})$, with $\dim\mathcal{M}^{{\rm sy}}(a;\mathbf{b})=0$, which contains one ${\mathbb{R}}$-family of $J_{s_{j}}$-holomorphic disks with boundary on $\Lambda_{s_{j}}$, and this disk is transversely cut out as a solution of the parameterized problem); \item action switches, where two nondegenerate Reeb chords have the same action and their actions interchange; \item birth/death moments where there is one degenerate Reeb chord at which two Reeb chords cancel through a quadratic tangency. \end{itemize} To show invariance we first observe that if $[s',s'']\subset S$ is an interval which does not contain any $s_{j}$, then the Reeb chords of $\Lambda_{s}$, $s\in[s',s'']$ form 1-manifolds canonically identified with $[s',s'']$ and the actions of the different Reeb chord manifolds do not cross. Thus for Reeb chords $a,b_{1},\dots,b_{m}$ of $\Lambda_{s'}$ we get corresponding chords on $\Lambda_{s}$ for each $s\in [s',s'']=S'$ which we denote by the same symbols, suppressing the $s$-dependence below. We next define a chain map \[ \Phi\colon C(\mathcal{R}_{s'})\to C(\mathcal{R}_{s''}) \] which counts geometrically induced chains as follows. We introduce the notion of a {\em disk with lines of Reeb chords}. Such an object has a positive puncture at the Reeb chord $a$ over $s'$ and negative punctures at Reeb chords according to $\mathbf{b}$ over $s''$ and is given by a collection of disks $u_{1},\dots,u_{m}$ where the disk $u_{j}$ is a disk at $\sigma_{j}$, where $s'\le \sigma_{1}\le \sigma_{2}\le\dots\le \sigma_{m}\le s''$ and if $\sigma_{j}> \sigma_{k}$ for some $k$ then its positive Reeb chord is connected by a line in a Reeb chord manifold to a Reeb chord at the negative puncture of some $u_{r}$ for $\sigma_{r}< \sigma_{j}$. The collection of such objects naturally forms a moduli space, $\mathcal{M}^{{\rm sy}}_{S'}(a;\mathbf{b})$, where we glue two disks when the length of the line connecting them goes to zero. We define the chain map $\Phi$ as \[ \Phi(a) = \sum_{\mathbf{b}}\left[\mathcal{M}^{{\rm sy}}_{S'}(a;\mathbf{b})\right],
\] where the sum runs over all words $\mathbf{b}$ of Reeb chords and $\left[\mathcal{M}^{{\rm sy}}_{S'}(a;\mathbf{b})\right]$ denotes the chain of Reeb strings carried by the moduli space. The chain map equation $\partial_{\Lambda_{s''}}\Phi=\Phi\partial_{\Lambda_{s'}}$ follows immediately once one notices that the codimension one boundary of the moduli space consists of disks over the endpoints with lines of Reeb chords over $[s',s'']$ attached. (We point out that this construction is inspired by Morse-Bott arguments, compare \cite{EK}.) Consider the filtration in $C(\mathcal{R})$ which associates to a chain of Reeb strings the sum of actions of its Reeb chords. By Stokes' theorem the differential respects the filtration. The pure lines of Reeb chords (without disks) contribute to the map and show that \[ \Phi(a) = a + \Phi_{0}(a), \] where the action of $\Phi_{0}(a)$ is strictly smaller than that of $a$. It follows that $\Phi$ induces an isomorphism on the $E_{2}$-page of the action spectral sequence and hence is a quasi-isomorphism. In order to show invariance at the bifurcation moments we consider the deformation in a small interval $[s_{j}-\epsilon,s_{j}+\epsilon]$ around $s_{j}$. In this case we can construct a Lagrangian cobordism $L$ in the symplectization ${\mathbb{R}}\times M$ interpolating between the cylinders on $\Lambda_{s_{j}-\epsilon}$ and $\Lambda_{s_{j}+\epsilon}$, see \cite[Lemma A.2]{E_rsft}. If $a$ is a Reeb chord of $\Lambda_{s_{j}+\epsilon}$ and $\mathbf{b}$ is a word of Reeb chords of $\Lambda_{s_{j}-\epsilon}$ then let $\mathcal{M}^{{\rm sy},L}(a;\mathbf{b})$ denote the moduli space of holomorphic disks defined as $\mathcal{M}^{{\rm sy}}(a;\mathbf{b})$, see Section \ref{sec:symp-disks}, but with boundary condition given by $L$ instead of ${\mathbb{R}}\times\Lambda$. (Note that since $L$ is not ${\mathbb{R}}$-invariant, ${\mathbb{R}}$ does not in general act on $\mathcal{M}^{{\rm sy},L}(a;\mathbf{b})$.) We define a chain map \[ \Phi\colon C(\mathcal{R}_{+})\to C(\mathcal{R}_{-}) \] between the algebras at the positive and the negative ends (corresponding to $\Lambda_{s_{j}+\epsilon}$ and $\Lambda_{s_{j}-\epsilon}$, respectively) as follows: $\Phi$ is the identity map on chains, and on Reeb chords $a$ of $\Lambda_{s_{j}+\epsilon}$ the map $\Phi$ is given by \[ \Phi(a) = \sum_{\mathbf{b}}[\mathcal{M}^{{\rm sy},L}(a;\mathbf{b})], \] where $\mathbf{b}$ runs over all words of Reeb chords of $\Lambda_{s_j-\epsilon}$ and $[\mathcal{M}^{{\rm sy},L}(a;\mathbf{b})]$ denotes the chain of Reeb strings carried by the moduli space. SFT compactness and gluing as in \cite{E_rsft} show that the chain map equation $\partial_{\Lambda_{s_j-\epsilon}}\Phi=\Phi\partial_{\Lambda_{s_j+\epsilon}}$ holds. It remains to show that $\Phi$ is a quasi-isomorphism. Consider first the case that $s_{j}$ is a handle slide. Taking $\epsilon$ sufficiently small we find that for each Reeb chord $a$ on $\Lambda_{s_{j}-\epsilon}$ there is a unique holomorphic strip connecting it to the corresponding Reeb chord $a$ on $\Lambda_{s_{j}+\epsilon}$. (These strips converge to trivial strips as $\epsilon\to 0$.) It follows that for each generator $c$ (chord or chain), \[ \Phi(c)= c + \Phi_{0}(c), \] where the filtration degree of $\Phi_{0}(c)$ is strictly smaller than that of $c$. Thus $\Phi$ induces an isomorphism on the $E_{2}$-page of the action filtration spectral sequence and is hence a quasi-isomorphism. Consider second the case of an action switch. In this case we find exactly as in the handle slide case that \[ \Phi(c) = c + \Phi_{0}(c) \] for each $c$. The only difference is that now one action window contains two generators.
Since the two Reeb chords have the same action but lie at a positive distance apart, it follows by monotonicity and Stokes' theorem that the chain map induces an isomorphism also in this action window. We find as above that $\Phi$ is a quasi-isomorphism. Finally consider the case that $s_{j}$ is a birth moment where two new Reeb chords $a$ and $b$ are born (the death case is analogous). For $\epsilon>0$ sufficiently small we have \[ \partial^{\rm sy} a = b + \partial^{\rm sy}_{0}(a), \] where the action of $\partial^{\rm sy}_{0}(a)$ is strictly smaller than the action of $b$, see \cite[Lemma 2.14]{EES1}. As above we find that for any Reeb chord $c$ of $\Lambda_{s_{j}+\epsilon}$ we have \[ \Phi(c) = c + \Phi_{0}(c). \] If we filter by small action windows that contain one Reeb chord each, except for one that contains both $a$ and $b$ (note that the action of $a$ approaches the action of $b$ as $\epsilon\to 0$), we find again that $\Phi$ gives an isomorphism on the $E_{2}$-page and hence is a quasi-isomorphism. We conclude that we can subdivide the interval $S$ into finitely many pieces such that the algebras associated to the endpoints of each piece are quasi-isomorphic. The theorem follows. \end{proof} According to Theorem~\ref{thm:comt-hom}, $(C(\mathcal{R}),\partial_\Lambda)$ is a (noncommutative but strictly associative) differential graded (dg) ring containing the dg subring $$ \Bigl(C(\mathcal{R}^0),\partial_\Lambda\Bigr) = \Bigl(C(\Omega_{x_0}\Lambda),\partial^{\rm sing}\Bigr). $$ Thus $(C(\mathcal{R}),\partial_\Lambda)$ is a $(C(\mathcal{R}^0),\partial_\Lambda)$-NC-algebra in the sense of the following definition. \begin{definition} Let $(R,\partial)$ be a dg ring. An {\em $(R,\partial)$-NC-algebra} is a dg ring $(S,\partial_S)$ together with a dg ring homomorphism $(R,\partial)\to(S,\partial_S)$. \end{definition} It follows that the Legendrian contact homology $H^{\rm contact}(\Lambda)$ is an NC-algebra over the graded ring $$ H_*(\Omega_{x_0}\Lambda,\partial^{\rm sing}) \cong {\mathbb{Z}}\pi_1(\Lambda) \cong {\mathbb{Z}}[\lambda^{\pm 1},\mu^{\pm 1}]. $$ Here we have used that in our situation $\Lambda\cong T^2$ is a $K(\pi,1)$, so all the homology of its based loop space is concentrated in degree zero and agrees with the group ring of its fundamental group $\pi_1(\Lambda)\cong{\mathbb{Z}}^2$. {\bf Relation to standard Legendrian contact homology. } Recall that $C(\mathcal{R})$ is a double complex with bidegree (chain degree, chord degree), horizontal differential $\partial^{\rm sing}$, and vertical differential $\partial^{\rm sy}$. As observed above, the first page of the spectral sequence corresponding to the chord degree is concentrated in the $0$-th column and given by $$ \Bigl(\AA:=H_0(\mathcal{R},\partial^{\rm sing}),\partial^{\rm sy}\Bigr). $$ Generators of $\AA$ are words $\alpha_1a_1\alpha_2a_2\cdots \alpha_\ell a_\ell \alpha_{\ell+1}$ consisting of Reeb chords $a_i$ and {\em homotopy classes} of paths $\alpha_i$ satisfying the same boundary conditions as before. Note that $\AA$ is an NC-algebra over the subring $\AA^0=H_0(\mathcal{R}^0)\cong{\mathbb{Z}}\pi_1(\Lambda)$ (on which $\partial_\Lambda$ vanishes), and $\AA^k=H_0(\mathcal{R}^k)$ is the $k$-fold tensor product of the bimodule $\AA^1$ over the ring $\AA^0$. We denote by $$ \bar\AA := \AA/\mathcal{I} $$ the quotient of $\AA$ by the ideal $\mathcal{I}$ generated by the commutators $[a,\beta]$ of Reeb chords $a$ and $\beta\in\pi_1(\Lambda)$.
Since $\partial_\Lambda(\mathcal{I})\subset\mathcal{I}$, the differential descends to a differential $\bar\partial^{\rm sy}:\bar\AA\to\bar\AA$ whose homology $$ \bar H^{\rm contact}(\Lambda) := \ker\bar\partial^{\rm sy}/\im\bar\partial^{\rm sy} $$ is the usual Legendrian contact homology as defined in~\cite{EES1}. {\bf Length filtration. } The complex $(C(\mathcal{R}),\partial_\Lambda)$ is filtered by the {\em length} $$ L(\alpha_1a_1\alpha_2a_2\cdots \alpha_\ell a_\ell \alpha_{\ell+1}) := \sum_{i=1}^\ell L(a_i), $$ where $L(a)=\int_a\lambda$ denotes the action of a Reeb chord $a$, which agrees with its period and also with the length of the corresponding binormal cord. The length is preserved by the singular boundary operator $\partial^{\rm sing}$ and strictly decreases under $\partial^{\rm sy}$. \begin{remark}\label{rem:cont-hom} The construction of Legendrian contact homology in this subsection works for any pair $(M,\Lambda)$ such that $M$ has no contractible closed Reeb orbits. Examples include cosphere bundles $S^*Q$ of $n$-manifolds $Q$ with a metric of nonpositive curvature that are convex at infinity, with $\Lambda=\Lambda_K$ the unit conormal bundle of a closed connected submanifold $K\subset Q$. However, if $\Lambda$ is not a $K(\pi,1)$, then the coefficient ring $H_*(\Omega_{x_0}\Lambda,\partial^{\rm sing})$ will not be equal to the group ring of its fundamental group but contain homology in higher degrees. \end{remark} \subsection{Switching boundary conditions, winding numbers, and length}\label{ss:switching} We continue to consider $Q={\mathbb{R}}^{3}$ equipped with the flat metric and an oriented framed knot $K\subset Q$. In addition, we assume from now on that {\em $K$ is real analytic}; this can always be achieved by a small perturbation of $K$ not changing its knot type. We equip $T^{\ast} Q$ with an almost complex structure $J$ which agrees with an ${\mathbb{R}}$-invariant almost complex structure on the symplectization of $S^{\ast}Q$ outside a finite radius disk sub-bundle of $T^{\ast} Q$ and with the standard almost complex structure $J_{\rm st}$ on $T^{\ast}Q$ inside the disk sub-bundle of half that radius. An explicit formula for such $J$ is given in Section~\ref{sec:length-estimates2}. We point out that the canonical isomorphism $(T^{\ast}Q,J_{\rm st})\cong ({\mathbb{C}}^{3},i)$ identifies the fibre with ${\mathbb{R}}^3$ and the zero section with $i{\mathbb{R}}^3$. Recall that $L=Q\cup L_K$. Let $D$ be the closed unit disk with a boundary puncture at $1\in\partial D$ and let $u\colon (D,\partial D)\to (T^{\ast}Q,L)$ be a holomorphic disk with one positive puncture and switching boundary conditions. This means that the map $u$ is asymptotic to a Reeb chord at infinity at the positive puncture $1$ and that it is smooth outside an additional finite number of boundary punctures where the boundary switches, i.e., jumps from one irreducible component of $L$ to another (which may be the same one). At these additional boundary punctures, the holomorphic disk is asymptotic to some point in the clean intersection $K\subset L$, i.e., it looks like a corner of a disk in Lagrangian intersection Floer homology. The real analyticity of $K$ allows us to get explicit local forms for holomorphic disks near corners. 
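A model example to keep in mind (it is a special case of the local expansions discussed in the next paragraph and is included only for illustration): in local holomorphic coordinates ${\mathbb{C}}\times{\mathbb{C}}^{2}$ in which $Q$ corresponds to ${\mathbb{R}}\times{\mathbb{R}}^{2}$ and $L_K$ to ${\mathbb{R}}\times i{\mathbb{R}}^{2}$ (see Lemma~\ref{l:knotnbhd} below), the map
\[
u(z) = \bigl(z,\, c\, z^{1/2}\bigr), \qquad c\in{\mathbb{R}}^{2}\setminus\{0\},
\]
defined on a small half-disk in the closed upper half-plane (with the branch of $z^{1/2}$ continuous there), is holomorphic, maps the positive real axis to $Q$ and the negative real axis to $L_K$, and has a corner at $0$ asymptotic to the point $u(0)\in K$; in the terminology recalled below, its local winding number equals $\tfrac12$.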
We show in Lemma~\ref{l:knotnbhd} that there are holomorphic coordinates \[ {\mathbb{R}}\times (0,0)\subset U\subset {\mathbb{C}} \times {\mathbb{C}}^{2}, \] in which $K$ corresponds to ${\mathbb{R}}\times (0,0)$, the $0$-section $Q$ corresponds to ${\mathbb{R}}\times {\mathbb{R}}^{2}$, and the conormal $L_K$ to ${\mathbb{R}}\times i{\mathbb{R}}^{2}$. Consider now a neighborhood of a switching point of a holomorphic disk $u$ on the boundary of $D$, where we use $z$ in a half-disk $D_\varepsilon^+$ around $0$ in the upper half-plane as a local coordinate around the switching point in the source. According to Section~\ref{ss:series}, $u$ admits a Taylor expansion around $0$, with $u=(u_1,u_2)\in {\mathbb{C}}\times{\mathbb{C}}^{2}$: \begin{equation}\label{eq:Taylorswitch} u_1(z) = \sum_{k\in{\mathbb{N}}} b_k z^{k},\qquad u_{2}(z) = \sum_{k\in\frac12{\mathbb{N}}} c_k z^{k}. \end{equation} Here compared to Section~\ref{ss:series} we have divided the indices by $2$, so the $b_k$ and $c_k$ correspond to the $a_{2k}$ in Section~\ref{ss:series}. The coefficients $b_j$ are real constants, reflecting smoothness of the tangent component of $u$. The $c_k$ satisfy one of the conditions in Remark~\ref{rem:cases1-4}, i.e., they are either all real or all purely imaginary vectors in ${\mathbb{C}}^{2}$, and the indices are either all integers or all half-integers. Equivalently (and more adapted to the analytical study in Sections \ref{S:mdlisp} -- \ref{sec:gluing}) one can use $z$ in a neighborhood of infinity in the strip ${\mathbb{R}}\times [0,1]$ as a local coordinate in the source. Composing the Taylor expansions~\eqref{eq:Taylorswitch} with the biholomorphism \begin{equation}\label{eq:chi} \chi:{\mathbb{R}}_{\geq 0}\times[0,1]\stackrel{\cong}\longrightarrow D^+, \qquad z\mapsto -\exp(-\pi z) \end{equation} (see Figure~\ref{fig:exp}) \begin{figure} \labellist \small\hair 2pt \pinlabel $1$ at 6 92 \pinlabel $-1$ at 307 6 \pinlabel $1$ at 451 6 \endlabellist \centering \includegraphics[width=\textwidth]{figures/exp} \caption{ The biholomorphism $\chi$. } \label{fig:exp} \end{figure} one gets instead the Fourier expansions \begin{equation}\label{eq:Fourierswitch} u_1(z) = \sum_{k\in{\mathbb{N}}} (-1)^kb_k e^{-k\pi z},\qquad u_{2}(z) = \sum_{k\in\frac12{\mathbb{N}}} (-1)^kc_k e^{-k\pi z}. \end{equation} Recall from Section~\ref{ss:winding} that the \emph{local winding number} at the switch is the positive half-integer or integer which is the index of the first non-vanishing Fourier coefficients in the expansion of $u_{2}$ in~\eqref{eq:Fourierswitch}. The sum of the local winding numbers at all switching boundary punctures is the \emph{total winding number} of the disk. Since the number of switches from $L_K$ to $Q$ equals that from $Q$ to $L_K$, the total winding number is an integer. The following technical result, which is a special case of \cite[Theorem 1.2]{CEL}, will play a crucial role in the sequel. \begin{thm}[\cite{CEL}]\label{thm:finite} For a cord-generic real analytic knot $K\subset{\mathbb{R}}^3$ the total winding number, and in particular the number of switches, of any holomorphic disk $u\colon (D,\partial D)\to (T^{\ast}Q,L)$ with one positive puncture is uniformly bounded by a constant $\kappa$. \end{thm} \begin{remark} The necessary energy bound appearing in the corresponding statement in \cite{CEL} is automatic here, since in our present situation the energy is given by the action of the Reeb chord at the positive puncture, which only varies in a finite set. 
\end{remark} In view of this result, when we discuss compactness we need only consider sequences of holomorphic disks with a {\em fixed} finite number of switches, each of fixed winding number. As we prove in Section~\ref{S:mdlisp}, each moduli space of such holomorphic disks is for generic data a manifold that admits a natural compactification as a manifold with boundary with corners. We will specifically need such moduli spaces of dimension $0$, $1$, or $2$ and we give brief descriptions in these cases. Let $a$ be a Reeb chord of $\Lambda_{K}$. Let $q_{1},\dots,q_{m}$ be punctures in $\partial D$ and let $\mathbf{n}=(n_1,\dots,n_{m})$ be a vector of local winding numbers, so $n_j\in\left\{\tfrac12,1,\tfrac32,2,\dots\right\}$ is the local winding number at $q_j$. We write $\mathcal{M}(a;\mathbf{n})$ for the moduli space of holomorphic disks with positive puncture at the Reeb chord $a$ and switching punctures at $q_1,\dots,q_{m}$ with winding numbers according to $\mathbf{n}$. Define the nonnegative integer \[ |\mathbf{n}|:=\sum_{j=1}^{m}2(n_j-\tfrac12)\ge 0. \] \begin{theorem}\label{t:dim-moduli} For generic almost complex structure $J$, the moduli space $\mathcal{M}(a;\mathbf{n})$ is a manifold of dimension \[ \dim \mathcal{M}(a;\mathbf{n})=|a|-|\mathbf{n}|. \] Furthermore, the choice of a spin structure on $L_{K}$ together with the spin structure on ${\mathbb{R}}^{3}$ induces a natural orientation on $\mathcal{M}(a;\mathbf{n})$. \end{theorem} \begin{proof} This is a consequence of \cite[Theorem A.1]{CEL} and Lemma \ref{l:tv} below. \end{proof} Note that, due to Theorem~\ref{thm:finite}, any moduli space $\mathcal{M}(a;\mathbf{n})$ is empty if $\mathbf{n}$ has more than $\kappa$ components, i.e., there are more than $\kappa$ switches. \subsection{Moduli spaces of dimension zero and one}\label{s:mswitchdim0to1} For moduli spaces of dimension $\le 1$ with positive puncture at a Reeb chord of degree $\le 1$, we have the following. Theorem \ref{t:dim-moduli} implies that if $|a|=0$ then $\mathcal{M}(a;\mathbf{n})$ is empty if $|\mathbf{n}|>0$ and is otherwise a compact oriented $0$-manifold. Likewise, if $|a|=1$ then $\mathcal{M}(a;\mathbf{n})$ is empty if $|\mathbf{n}|> 1$ and is an oriented $0$-manifold if $|\mathbf{n}|=1$. Note that $|\mathbf{n}|=1$ implies that there is exactly one switch with winding number $1$ and that the winding numbers at all other switches equal $\tfrac12$. Finally, if all entries in $\mathbf{n}$ equal $\tfrac12$ then $\dim(\mathcal{M}(a;\mathbf{n}))=1$. It follows by Theorem \ref{t:[1,0]} that the 1-dimensional moduli spaces of disks with switching boundary condition admit natural compactifications to 1-manifolds with boundary. The next result describes the disk configurations corresponding to the boundary of these compact intervals. \begin{prop}\label{prop:simpleboundary} If $a$ is a Reeb chord of degree $|a|=1$ and if all entries of $\mathbf{n}$ equal $\tfrac12$, then the oriented boundary of $\mathcal{M}(a;\mathbf{n})$ consists of the following: \begin{description} \item[$(Lag)$] Moduli spaces $\mathcal{M}(a;\mathbf{n}')$, where $\mathbf{n}'$ is obtained from $\mathbf n$ by removing two consecutive $\tfrac12$-entries and inserting in their place a $1$. \item[$(sy)$] Products of moduli spaces \[ \mathcal{M}^{\rm sy}(a;\mathbf{b})/{\mathbb{R}} \;\times \; \Pi_{b_j\in\mathbf{b}}\, \mathcal{M}(b_j;\mathbf{n}_j), \] where $\mathbf{n}$ equals the concatenation of the $\mathbf{n}_j$. 
\end{description} \end{prop} \begin{figure} \labellist \small\hair 2pt \pinlabel $\frac{1}{2}$ at 18 14 \pinlabel $\frac{1}{2}$ at 54 14 \pinlabel $\frac{1}{2}$ at 90 14 \pinlabel $\frac{1}{2}$ at 126 14 \pinlabel $\frac{1}{2}$ at 162 14 \pinlabel $\frac{1}{2}$ at 198 14 \pinlabel $\frac{1}{2}$ at 342 14 \pinlabel $1$ at 387 14 \pinlabel $\frac{1}{2}$ at 432 14 \pinlabel $\frac{1}{2}$ at 486 14 \pinlabel $\frac{1}{2}$ at 522 14 \pinlabel ${\color{blue} N}$ at 34 150 \pinlabel ${\color{blue} N}$ at 181 150 \pinlabel ${\color{blue} N}$ at 358 150 \pinlabel ${\color{blue} N}$ at 505 150 \pinlabel ${\color{blue} N}$ at 71 102 \pinlabel ${\color{blue} N}$ at 143 102 \pinlabel ${\color{blue} N}$ at 459 111 \pinlabel ${\color{red} Q}$ at 36 54 \pinlabel ${\color{red} Q}$ at 108 54 \pinlabel ${\color{red} Q}$ at 180 54 \pinlabel ${\color{red} Q}$ at 387 54 \pinlabel ${\color{red} Q}$ at 504 54 \pinlabel ${\color{red} a}$ at 108 231 \pinlabel ${\color{red} a}$ at 432 231 \endlabellist \centering \includegraphics[width=\textwidth]{figures/Q-boundary} \caption{ Type $(Lag)$ boundary where an $N$-string disappears. } \label{fig:Q-boundary} \end{figure} \begin{figure} \labellist \small\hair 2pt \pinlabel $\frac{1}{2}$ at 18 14 \pinlabel $\frac{1}{2}$ at 54 14 \pinlabel $\frac{1}{2}$ at 90 14 \pinlabel $\frac{1}{2}$ at 126 14 \pinlabel $\frac{1}{2}$ at 162 14 \pinlabel $\frac{1}{2}$ at 198 14 \pinlabel $\frac{1}{2}$ at 342 14 \pinlabel $\frac{1}{2}$ at 378 14 \pinlabel $\frac{1}{2}$ at 486 14 \pinlabel $\frac{1}{2}$ at 522 14 \pinlabel $1$ at 432 73 \pinlabel ${\color{blue} N}$ at 34 150 \pinlabel ${\color{blue} N}$ at 181 150 \pinlabel ${\color{blue} N}$ at 358 150 \pinlabel ${\color{blue} N}$ at 505 150 \pinlabel ${\color{blue} N}$ at 71 102 \pinlabel ${\color{blue} N}$ at 143 102 \pinlabel ${\color{blue} N}$ at 432 111 \pinlabel ${\color{red} Q}$ at 36 54 \pinlabel ${\color{red} Q}$ at 108 54 \pinlabel ${\color{red} Q}$ at 180 54 \pinlabel ${\color{red} Q}$ at 360 54 \pinlabel ${\color{red} Q}$ at 504 54 \pinlabel ${\color{red} a}$ at 108 231 \pinlabel ${\color{red} a}$ at 432 231 \endlabellist \centering \includegraphics[width=\textwidth]{figures/N-boundary} \caption{ Type $(Lag)$ boundary where a $Q$-string disappears. 
} \label{fig:N-boundary} \end{figure} \begin{figure} \labellist \small\hair 2pt \pinlabel $\frac{1}{2}$ at 18 49 \pinlabel $\frac{1}{2}$ at 54 49 \pinlabel $\frac{1}{2}$ at 90 49 \pinlabel $\frac{1}{2}$ at 126 49 \pinlabel $\frac{1}{2}$ at 162 49 \pinlabel $\frac{1}{2}$ at 198 49 \pinlabel $\frac{1}{2}$ at 342 14 \pinlabel $\frac{1}{2}$ at 378 14 \pinlabel $\frac{1}{2}$ at 414 14 \pinlabel $\frac{1}{2}$ at 450 14 \pinlabel $\frac{1}{2}$ at 486 14 \pinlabel $\frac{1}{2}$ at 522 14 \pinlabel ${\color{blue} N}$ at 39 184 \pinlabel ${\color{blue} N}$ at 71 136 \pinlabel ${\color{blue} N}$ at 143 136 \pinlabel ${\color{blue} N}$ at 178 184 \pinlabel ${\color{blue} N}$ at 348 120 \pinlabel ${\color{blue} N}$ at 444 120 \pinlabel ${\color{blue} N}$ at 465 99 \pinlabel ${\color{blue} N}$ at 526 99 \pinlabel ${\color{blue} N}$ at 396 99 \pinlabel ${\color{blue} {\mathbb{R}} \times \Lambda_K}$ at 364 240 \pinlabel ${\color{blue} {\mathbb{R}} \times \Lambda_K}$ at 507 240 \pinlabel ${\color{blue} {\mathbb{R}} \times \Lambda_K}$ at 440 218 \pinlabel ${\color{red} Q}$ at 37 90 \pinlabel ${\color{red} Q}$ at 108 90 \pinlabel ${\color{red} Q}$ at 179 90 \pinlabel ${\color{red} Q}$ at 359 54 \pinlabel ${\color{red} Q}$ at 431 54 \pinlabel ${\color{red} Q}$ at 503 54 \pinlabel ${\color{red} a}$ at 108 270 \pinlabel ${\color{red} a}$ at 432 304 \pinlabel ${\color{red} b_1}$ at 400 184 \pinlabel ${\color{red} b_2}$ at 485 184 \endlabellist \centering \includegraphics[width=\textwidth]{figures/symp-boundary} \caption{ Type $(sy)$ boundary. } \label{fig:symp-boundary} \end{figure} \begin{proof} This is a consequence of Theorem~\ref{t:[1,0]}. To motivate the result, note that the first type of boundary corresponds to two switches colliding, see Figures~\ref{fig:Q-boundary} and~\ref{fig:N-boundary}. The second type corresponds to a splitting into a two level curve with one ${\mathbb{R}}$-invariant level (of dimension 1) in the symplectization and one rigid curve (of dimension 0) in $T^{\ast}Q$, see Figure~\ref{fig:symp-boundary}. By transversality, compactness, and the dimension formula this accounts for all the possible boundary phenomena, and by a gluing argument we find that any such configuration corresponds to a unique boundary point. \end{proof} We conclude this subsection by giving an alternate interpretation of the first boundary phenomenon in Proposition~\ref{prop:simpleboundary}. Let $\mathcal{M}^{\ast}(a;\mathbf{n})$ denote the moduli space corresponding to $\mathcal{M}(a;\mathbf{n})$, but with one extra marked point on the boundary of the disk. Then $\mathcal{M}^{\ast}(a;\mathbf{n})$ fibers over $\mathcal{M}(a;\mathbf{n})$ with fiber $\partial D-\{1,q_1,\dots,q_{m}\}$ and there is an evaluation map ${\rm ev}\colon\mathcal{M}^{\ast}(a;\mathbf{n})\to L$. It follows from Theorem \ref{t:emb} that for $|a|=1$ and $|\mathbf{n}|=0$ (and generic data), ${\rm ev}^{-1}(K)$ is a transversely cut out oriented $0$-manifold that projects injectively into $\mathcal{M}(a;\mathbf{n})$. We denote its image by $$ \delta\mathcal{M}(a;\mathbf{n}). $$ As the notation suggests, this space will be the natural domain for the string operations $\delta=\delta_Q+\delta_N$. 
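As a quick dimension check, note that this is consistent with Theorem~\ref{t:dim-moduli}: the free marked point raises the dimension of the moduli space by one, while the (generically transverse) condition that it is mapped to the $1$-dimensional knot $K$ inside the $3$-dimensional Lagrangian $L$ cuts the dimension down by two, so that
\[
\dim {\rm ev}^{-1}(K)=\bigl(|a|-|\mathbf{n}|+1\bigr)-2=|a|-|\mathbf{n}|-1,
\]
which vanishes precisely when $|a|=1$ and $|\mathbf{n}|=0$. The same count reappears in the next proposition, where the marked point is traded for a switching puncture of winding number $1$, raising $|\mathbf{n}|$ by one.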
\begin{prop}\label{prop:b=intersdim1} If $a$ is a Reeb chord of degree $|a|=1$ and if all entries $\mathbf{n}$ equal $\tfrac12$, then there is a natural orientation preserving identification between $\delta\mathcal{M}(a;\mathbf{n})$ and $\mathcal{M}(a;\mathbf{n}'')$, where $\mathbf{n}''$ is obtained from $\mathbf{n}$ by inserting in $\mathbf{n}$ a new entry equal to $1$ at the position given by the marked point. \end{prop} \begin{proof} This is a consequence of Theorem~\ref{t:emb}. Here is the idea. Consider local coordinates around the marked point in the source and around $K$ in the target. Then the Taylor expansions~\eqref{eq:Taylorswitch} with $c_{\frac12}=0$ and $c_{1}\ne 0$ give the map in $\delta\mathcal{M}(a;\mathbf{n})$ with the marked point corresponding to $0$. The corresponding Fourier expansions \eqref{eq:Fourierswitch} present the map as an element in $\mathcal{M}(a;\mathbf{n}'')$, where the marked point is replaced by a puncture. Conversely, translating the Fourier picture to the Taylor picture proves the other inclusion and hence equality holds. See Section \ref{ss:signsandchainmap} for a discussion of orientations of the moduli spaces involved. \end{proof} \subsection{Moduli spaces of dimension two}\label{s:mswitchdim2} For moduli spaces $\mathcal{M}(a;\mathbf{n})$ with positive puncture at a Reeb chord $a$ of degree $|a|=2$, Theorem~\ref{t:dim-moduli} implies the following: \begin{itemize} \item If $|\mathbf{n}|>2$ then $\mathcal{M}(a;\mathbf{n})=\varnothing$. \item If $|\mathbf{n}|=2$ then $\mathcal{M}(a;\mathbf{n})$ is a compact $0$-dimensional manifold. This can happen in two ways: either exactly one entry in $\mathbf{n}$ equals $\frac32$, or exactly two entries equal $1$ and all others equal $\frac12$. \item If $|\mathbf{n}|=1$ then $\mathcal{M}(a;\mathbf{n})$ is an oriented 1-manifold, exactly one entry in $\mathbf{n}$ equals $1$ and all others equal $\frac12$. \item If $|\mathbf{n}|=0$ then $\mathcal{M}(a;\mathbf{n})$ is an oriented 2-manifold and all entries in $\mathbf{n}$ equal $\frac12$. \end{itemize} It follows by Theorem \ref{t:[2,0]} that the 2-dimensional moduli spaces of disks with switching boundary condition admit natural compactifications to 2-manifolds with boundary and corners. The next result describes the disk configurations corresponding to the boundary and corner points of these compact surfaces, see Figures~\ref{fig:Lag-Lag-1}, \ref{fig:Lag-Lag-2}, \ref{fig:sy-Lag} and~\ref{fig:sy-sy}. \begin{figure} \labellist \small\hair 2pt \pinlabel $1$ at 164 172 \pinlabel $1$ at 88 95 \pinlabel $1$ at 232 95 \pinlabel $1$ at 190 10 \endlabellist \centering \includegraphics[height=0.5\textwidth]{figures/Lag-Lag-1} \caption{ Type $(Lag|Lag)^1$ corner. } \label{fig:Lag-Lag-1} \end{figure} \begin{figure} \labellist \small\hair 2pt \pinlabel $1$ at 164 172 \pinlabel $1$ at 47 42 \pinlabel $3/2$ at 181 10 \endlabellist \centering \includegraphics[height=0.5\textwidth]{figures/Lag-Lag-2} \caption{ Type $(Lag|Lag)^2$ corner. } \label{fig:Lag-Lag-2} \end{figure} \begin{figure} \labellist \small\hair 2pt \pinlabel $1$ at 19 20 \pinlabel $1$ at 167 3 \endlabellist \centering \includegraphics[height=0.6\textwidth]{figures/sy-Lag} \caption{ Type $(sy|Lag)$ corner. } \label{fig:sy-Lag} \end{figure} \begin{figure} \labellist \small\hair 2pt \pinlabel $T$ at 232 84 \endlabellist \centering \includegraphics[height=0.7\textwidth]{figures/sy-sy} \caption{ Type $(sy|sy)$ corner. 
} \label{fig:sy-sy} \end{figure} \begin{prop}\label{prop:hardboundary} If $a$ is a Reeb chord of degree $|a|=2$ and if all entries of $\mathbf{n}$ equal $\tfrac12$, then the 1-dimensional strata in the boundary of $\mathcal{M}(a;\mathbf{n})$ consist of the following configurations: \begin{description} \item[$(Lag)$] Moduli spaces $\mathcal{M}(a;\mathbf{n}')$, where $\mathbf{n}'$ is obtained from $\mathbf{n}$ by removing two consecutive $\tfrac12$-entries and inserting in their place a $1$. \item[$(sy)$] Products of moduli spaces \[ \mathcal{M}^{\rm sy}(a;\mathbf{b})/{\mathbb{R}} \;\times \; \Pi_{b_j\in\mathbf{b}}\, \mathcal{M}(b_j;\mathbf{n}_j), \] where $\mathbf{n}$ equals the concatenation of the $\mathbf{n}_j$. \end{description} The corner points in the boundary consist of the following configurations: \begin{description} \item[$(Lag|Lag)^1$] Moduli spaces $\mathcal{M}(a;\mathbf{n}')$, where $\mathbf{n}'$ is obtained from $\mathbf{n}$ by removing two pairs of consecutive $\tfrac12$-entries and inserting $1$'s in their places. \item[$(Lag|Lag)^2$] Moduli spaces $\mathcal{M}(a;\mathbf{n}'')$, where $\mathbf{n}''$ is obtained from $\mathbf{n}$ by removing three consecutive $\frac12$-entries and inserting a $\frac32$ in their place. \item[$(sy|Lag)$] Products of moduli spaces \[ \mathcal{M}^{\rm sy}(a;\mathbf{b})/{\mathbb{R}} \;\times \; \Pi_{b_j\in\mathbf{b}}\, \mathcal{M}(b_j;\mathbf{n}_j), \] where the concatenation of the $\mathbf{n}_j$ gives $\mathbf{n}$ with one consecutive pair of $\frac12$-entries removed and a $1$ inserted in their place. \item[$(sy|sy)$] Products of moduli spaces \[ \mathcal{M}^{\rm sy}(a;\mathbf{b})/{\mathbb{R}} \;\times \; \prod_{b_j\in\mathbf{b}}\, \Bigl(\mathcal{M}^{\rm sy}(b_j;\mathbf{c}_j)/{\mathbb{R}}\;\times\prod_{c_{jk}\in\mathbf{c_j}}\, \mathcal{M}(c_{jk};\mathbf{n}_{jk})\Bigr), \] where $\mathbf{n}$ equals the concatenation of the $\mathbf{n}_{jk}$, and all but one of the $\mathcal{M}^{\rm sy}(b_j;\mathbf{c}_j)$ are trivial strips over the Reeb chords $b_j$. \end{description} \end{prop} \begin{proof} This is a consequence of Theorem~\ref{t:[2,0]}. The descriptions of the boundary strata are analogous to the boundary phenomena of Proposition~\ref{prop:simpleboundary}. At a type $(Lag|Lag)^1$ corner we have two pairs of switches colliding. Local coordinates in the moduli space around this configuration are given by the lengths of the two corresponding short boundary segments, so a neighborhood of the corner is a product of two half-open intervals. At a type $(Lag|Lag)^2$ corner there are likewise two short boundary segments that give local coordinates on the moduli space, see Figure \ref{fig:Lag-Lag-2}. At a type $(sy|Lag)$ corner the two parameters are the length of the short boundary segment and the gluing parameter for the two-level curve. Finally, at a type $(sy|sy)$ corner the two parameters are the two gluing parameters for the three-level curve. \end{proof} We next give alternate interpretations of the boundary phenomena in Proposition~\ref{prop:hardboundary}. Recall the notation $\mathcal{M}^{\ast}(a;\mathbf{n})$ for the moduli space corresponding to $\mathcal{M}(a;\mathbf{n})$ in which the disks have an additional free marked point $*$ on the boundary. It comes with an evaluation map ${\rm ev}\colon \mathcal{M}^{\ast}(a;\mathbf{n})\to L$ and a projection $\pi:\mathcal{M}^{\ast}(a;\mathbf{n})\to \mathcal{M}(a;\mathbf{n})$ forgetting the marked point, and we denote $\delta\mathcal{M}(a;\mathbf{n})={\rm ev}^{-1}(K)$.
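The dimension formula of Theorem~\ref{t:dim-moduli} again gives a quick consistency check: the spaces $\mathcal{M}(a;\mathbf{n}')$ appearing in the $(Lag)$ strata above have $|\mathbf{n}'|=1$ and hence dimension $|a|-1=1$, as befits codimension one boundary strata, while the spaces appearing at the $(Lag|Lag)^1$ and $(Lag|Lag)^2$ corners have $|\mathbf{n}'|=|\mathbf{n}''|=2$ and are $0$-dimensional. Likewise, for $|a|=2$ and $|\mathbf{n}|=0$ the space $\mathcal{M}^{\ast}(a;\mathbf{n})$ is $3$-dimensional and the condition that the marked point maps to $K$ cuts the dimension down by two, so
\[
\dim\delta\mathcal{M}(a;\mathbf{n})=\bigl(|a|-|\mathbf{n}|+1\bigr)-2=1,
\]
in agreement with the description of $\delta\mathcal{M}(a;\mathbf{n})$ as an embedded curve in the following proposition.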
\begin{prop}\label{prop:b=intersdim2} If $a$ is a Reeb chord of degree $|a|=2$ and if all entries $\mathbf{n}$ equal $\tfrac12$, then there is a natural orientation preserving identification between $\delta\mathcal{M}(a;\mathbf{n})$ and $\mathcal{M}(a;\mathbf{n}'')$, where $\mathbf{n}''$ is obtained from $\mathbf{n}$ by inserting in $\mathbf{n}$ a new entry equal to $1$ at the position given by the marked point. The moduli space $\delta\mathcal{M}(a;\mathbf{n})\subset \mathcal{M}^\ast(a;\mathbf{n})$ is an embedded curve with boundary. Its boundary consists of transverse intersections with the boundary of $\mathcal{M}^\ast(a;\mathbf{n})$, corresponding to degenerations of type $(sy|Lag)$ and $(Lag|Lag)^1$ involving the marked point $*$, and to points in the interior of $\mathcal{M}^\ast(a;\mathbf{n})$, corresponding to degenerations of type $(Lag|Lag)^2$ involving the marked point $*$. The projection $\pi(\delta\mathcal{M}(a;\mathbf{n}))\subset \mathcal{M}(a;\mathbf{n})$ is an immersed curve with boundary and transverse self-intersections. Its boundary consists of transverse intersections with the boundary of $\mathcal{M}^\ast(a;\mathbf{n})$. See Figure~\ref{fig:delta-M}. \end{prop} \begin{figure} \labellist \small\hair 2pt \pinlabel ${\color{blue} (sy|sy)}$ at 0 162 \pinlabel ${\color{blue} (sy)}$ at 69 229 \pinlabel ${\color{blue} (sy|Lag)}$ at 108 318 \pinlabel ${\color{blue} (Lag)}$ at 213 294 \pinlabel ${\color{blue} (Lag|Lag)^2}$ at 323 316 \pinlabel ${\color{blue} (sy)}$ at 68 91 \pinlabel ${\color{blue} (sy|Lag)}$ at 107 12 \pinlabel ${\color{blue} (Lag)}$ at 215 30 \pinlabel ${\color{blue} (Lag|Lag)^1}$ at 324 12 \pinlabel ${\color{red} (Lag|Lag)^2}$ at 146 115 \endlabellist \centering \includegraphics[width=0.7\textwidth]{figures/delta-M} \caption{ The $2$-dimensional moduli space $\mathcal{M}(a;\mathbf{n})$ and the immersed curve $\pi(\delta\mathcal{M}(a;\mathbf{n}))$. } \label{fig:delta-M} \end{figure} \begin{proof} This is a consequence of Theorem~\ref{t:imm}. Here is a sketch. The proof of the first statement is analogous to that of Proposition~\ref{prop:b=intersdim1}, looking at Taylor and Fourier expansions. That $\delta\mathcal{M}(a;\mathbf{n})$ is an embedded curve with boundary follows from transversality of the evaluation map ${\rm ev}\colon \mathcal{M}^{\ast}(a;\mathbf{n})\to L$ to the knot $K$, which holds for generic almost complex structure. More refined transversality arguments show that the projection $\pi(\delta\mathcal{M}(a;\mathbf{n}))$ is an immersed curve with transverse self-intersections corresponding to holomorphic disks that meet the knot twice at non-corner points on their boundary. For the other statements, note that each stratum of $\delta\mathcal{M}(a;\mathbf{n})$ corresponds to a moduli space $\mathcal{M}(a;\mathbf{n'})$, where $\mathbf{n'}$ is obtained from $\mathbf{n}$ by inserting an entry $1$ corresponding to the marked point $*$. It follows from Proposition~\ref{prop:hardboundary} that boundary points of $\delta\mathcal{M}(a;\mathbf{n})$ correspond to degenerations of types $(sy|Lag)$, $(Lag|Lag)^1$ and $(Lag|Lag)^2$ involving the point $*$. The first two correspond to transverse intersections of $\pi(\delta\mathcal{M}(a;\mathbf{n}))$ with boundary strata of $\mathcal{M}(a;\mathbf{n})$ of types $(sy)$ and $(Lag)$, respectively. 
A dimension argument shows that degenerations of type $(Lag|Lag)^2$ involving the point $*$ cannot meet the boundary of $\mathcal{M}^\ast(a;\mathbf{n})$, so they correspond to boundary points of $\delta\mathcal{M}(a;\mathbf{n})$ in the interior of $\mathcal{M}^\ast(a;\mathbf{n})$. They appear in pairs corresponding to holomorphic disks in which the marked point $*$ has approached a corner from the left or right to form a new corner of weight $3/2$. In $\delta\mathcal{M}(a;\mathbf{n})$ the two configurations on a pair are distinct (formally, they are distinguished by the position of the marked point $*$ on the $3$-punctured constant disk attached at the weight $3/2$ corner), so they give actual boundary points. In the projection $\pi(\delta\mathcal{M}(a;\mathbf{n}))$ the two configuration become equal and thus give an interior point, hence $\pi(\delta\mathcal{M}(a;\mathbf{n}))$ has no boundary points in the interior of $\mathcal{M}(a;\mathbf{n})$. See Section \ref{ss:signsandchainmap} for a discussion of orientations of the moduli spaces involved in these arguments. \end{proof} \comment{ In order to define the chain map from $\AA$ to $\mathcal{C}$ we need singular chains rather than moduli spaces as parameter spaces for broken strings. To this end, if $\mathbf{n}$ have all entries equal to $\frac12$ then we fix for each moduli space $\mathcal{M}(c;\mathbf{n})$ of dimension $\le 2$ a triangulation that is compatible with the boundary and the corner structure, and so that the simplices are transverse to the stratified subsets $\mathcal{M}'(c;\mathbf{n})$. We write $\mathcal{M}^{\Delta}(c;\mathbf{n})$ for the compactified and triangulated moduli space corresponding to $\mathcal{M}(c;\mathbf{n})$. \begin{remark}\label{rmk:triangulation} More concretely the triangulated moduli spaces have the following properties: \begin{itemize} \item[$(\Delta_{0})$] If $|a|=0$ then $\mathcal{M}^{\Delta}(a;\mathbf{n})$ is a finite collection of oriented $0$-simplices. \item[$(\Delta_{1})$] If $|a|=1$ then $\mathcal{M}^{\Delta}(a;\mathbf{n})$ is a finite collection of oriented $1$-simplices with boundary points corresponding to the disk configurations described in Proposition~\ref{prop:simpleboundary} and containing the oriented $0$-manifold $\mathcal{M}'(a,\mathbf{n})$ in its interior. \item[$(\Delta_{2})$] If $|a|=2$ then $\mathcal{M}^{\Delta}(a;\mathbf{n})$ is a finite collection of oriented $2$-simplices. The boundary of the sum of all the collection corresponds to a triangulation of the moduli spaces of the disk configurations in the boundary described by Proposition~\ref{prop:hardboundary}, where corners appear as $0$-simplices and boundary edges are unions of $1$-simplices. Furthermore the stratified curve $\mathcal{M}'(a;\mathbf{n})$ is transverse to the triangulation also at the boundary. (Double points and endpoints lie in the interior of the 2-simplices, the curve does not meet $0$-simplices and is transverse to all $1$-simplices.) \end{itemize} \end{remark} } \subsection{The chain map} We can summarize the description of the moduli spaces of punctured holomorphic disks with switching boundary conditions in the preceding subsections as follows. 
For all Reeb chords $a$ and all integers $\ell\geq 0$ the compactified moduli spaces $$ \overline\mathcal{M}_\ell(a) := \overline\mathcal{M}(a;\underbrace{\tfrac12,\dots,\tfrac12}_{2\ell}) $$ are compact oriented manifolds with boundary and corners of dimension $|a|$ whose codimension $1$ boundaries satisfy the relations \begin{equation}\label{eq:domain-ell} \partial\overline\mathcal{M}_\ell(a) = \overline\mathcal{M}_\ell(\partial_\Lambda a) \cup -\delta\overline\mathcal{M}_{\ell-1}(a), \end{equation} where $\partial_\Lambda a=\partial^{\rm sy}a$ and $\delta\overline\mathcal{M}_{\ell-1}(a)$ is the closure in $\overline\mathcal{M}_{\ell-1}(a)$ of the set $$ \delta\mathcal{M}_{\ell-1}(a) := \delta\mathcal{M}(a;\underbrace{\tfrac12,\dots,\tfrac12}_{2\ell-2}) $$ introduced in Proposition~\ref{prop:b=intersdim2}. Again we refer to Section \ref{ss:signsandchainmap} for a description of the orientations involved. \begin{prop}\label{prop:chain-map-ell} There exist smooth triangulations of the spaces $\overline\mathcal{M}_\ell(a)$ and generic chains of broken strings $$ \Phi_\ell(a): \overline\mathcal{M}_\ell(a)\to\Sigma^\ell $$ (understood as singular chains by summing up their restrictions to the simplices of the triangulations) satisfying the relations \begin{equation}\label{eq:chain-map-ell} \partial\Phi_\ell(a) = \Phi_\ell(\partial_\Lambda a) -(\delta_Q+\delta_N)\Phi_{\ell-1}(a). \end{equation} \end{prop} \begin{proof} The idea of the proof is very simple: After connecting the end points of $a$ to the base point $x_0$ by capping paths, a suitable parametrization (explained below) of the boundary of a holomorphic disk $u\in\mathcal{M}_\ell(a)$ determines a broken string $\partial(u)\in\Sigma^\ell$. Thus we get maps $$ \widetilde\Phi_\ell(a):\mathcal{M}_\ell(a) \to \Sigma^\ell,\qquad u\mapsto\partial(u) $$ and the relations~\eqref{eq:chain-map-ell} should follow from~\eqref{eq:domain-ell}. However, the map $\widetilde\Phi_\ell(a)$ in general does not extend to the compactification $\overline\mathcal{M}_\ell(a)$ as a map to $\Sigma^\ell$ because on the boundary some $Q$- or $N$-string can disappear in the limit. We will remedy this by suitably modifying the maps $\widetilde\Phi_\ell(a)$ near the boundaries (inserting spikes). Before doing this, let us discuss parametrizations of the broken string $\partial(u)$ for $u\in\mathcal{M}_\ell(a)$. Near a switch we can pick holomorphic coordinates on the domain (with values in the upper half-disk) and the target (provided by Lemma~\ref{l:knotnbhd}) in which the normal projection of $u$ consists of two holomorphic functions near a corner as in Section~\ref{sec:holo}. The discussion in that section shows that in these coordinates $\partial(u)$ satisfies the matching conditions on the $m$-jets required in the definition of a broken string. We take near each corner a parametrization of $\partial(u)$ induced by such holomorphic coordinates and extend these parametrizations arbitrarily away from the corners to make $\partial(u)$ a broken string in the sense of Definition~\ref{def:string}. Note that the space of such parametrizations is contractible. Now we proceed by induction over $|a|=0,1,2$. {\bf Case $|a|=0$:} In this case $\overline\mathcal{M}_\ell(a)$ consists of finitely many oriented points and we set $\Phi_\ell(a)(u):=\partial(u)$ (picking a parametrization of the boundary as above). \smallskip {\bf Case $|a|=1$:} We proceed by induction on $\ell=0,1,\dots$.
For $\ell=0$, on the boundary $\partial\overline\mathcal{M}_0(a) = \overline\mathcal{M}_0(\partial_\Lambda a)$ we are already given the map $\Phi_0(\partial_\Lambda a)$. We extend it to a map $\Phi_0(a):\overline\mathcal{M}_0(a) \to \Sigma^0$ by sending $u$ to $\partial(u)$ with parametrizations matching the given ones on $\partial\overline\mathcal{M}_0(a)$, so that $\partial\Phi_0(a) = \Phi_0(\partial_\Lambda a)$ holds. Now suppose that we have already defined $\Phi_0(a),\dots,\Phi_{\ell-1}(a)$ such that the relations~\eqref{eq:chain-map-ell} hold up to $\ell-1$. According to \eqref{eq:domain-ell}, the boundary $\partial \overline\mathcal{M}_\ell(a)$ is identified with the union of domains of the maps on the right hand side of~\eqref{eq:chain-map-ell}. On the other hand, on the interior $\mathcal{M}_\ell(a)$ we are given the map $\widetilde{\Phi}_\ell(a)$ described above. Furthermore, by Proposition~\ref{prop:simpleboundary} and Remark \ref{r:breakingmodelclose}, elements $u$ close to the boundary points in $\delta\overline\mathcal{M}_{\ell-1}(a) \subset \partial\overline\mathcal{M}_\ell(a)$ have spikes (shrinking as $u$ tends to the boundary) roughly in the same direction as those on the boundary. So near $\partial\overline\mathcal{M}_\ell(a)$ we can interpolate between the map on the boundary given by the right hand side of~\eqref{eq:chain-map-ell} and the map $\widetilde\Phi_\ell(a)$ on the interior to obtain a map $\Phi_\ell(a):\overline\mathcal{M}_\ell(a)\to\Sigma^\ell$ satisfying~\eqref{eq:chain-map-ell}. Since the modification of $\widetilde\Phi_\ell(a)$ can be done away from the finite set $\delta\overline\mathcal{M}_\ell(a) \subset \mathcal{M}_\ell(a)$, $\Phi_\ell(a)$ is a generic $1$-chain of broken strings. This concludes the inductive step. Since we are dealing with $1$-chains, a smooth triangulation just amounts to a parametrization of the components of $\overline\mathcal{M}_\ell(a)$ by intervals whose boundary points avoid the set $\delta\overline\mathcal{M}_\ell(a)$. \smallskip {\bf Case $|a|=2$:} We proceed again by induction on $\ell=0,1,\dots$. For $\ell=0$, we again define $\Phi_0(a):\overline\mathcal{M}_0(a) \to \Sigma^0$ by sending $u$ to $\partial(u)$, with parametrizations matching the given ones on $\partial\overline\mathcal{M}_0(a)$, so that $\partial\Phi_0(a) = \Phi_0(\partial_\Lambda a)$ holds. Now suppose that we have already defined $\Phi_0(a),\dots\Phi_{\ell-1}(a)$ and triangulations of their domains such that they are generic $2$-chains of broken strings and the relations~\eqref{eq:chain-map-ell} hold up to $\ell-1$. As in the case of 1-chains, the boundary $\partial\overline\mathcal{M}_\ell(a)$ is identified via~\eqref{eq:domain-ell} with the union of domains of the right hand side of~\eqref{eq:chain-map-ell}, so as before we define $\Phi_\ell(a)$ on that boundary via the maps $\Phi_\ell(\partial_\Lambda a)$ resp.~$\delta\Phi_{\ell-1}(a)$. By induction hypothesis, these maps coincide at corner points. Note that the map $\delta\Phi_{\ell-1}(a)$ inserts spikes at the intersection points with the knot. According to Proposition~\ref{prop:hardboundary} and Remark \ref{r:breakingmodelclose}, elements $u$ close to the codimension one boundary strata $\delta\overline\mathcal{M}_{\ell-1}(a)$ have spikes roughly in the same direction as those on the boundary (shrinking in size as $u$ tends to the boundary). 
Elements close to a corner point where two boundary strata of $\delta\overline\mathcal{M}_{\ell-1}(a)$ meet have two spikes roughly in the same directions as those on the nearby boundary strata (which both shrink as $u$ tends to the corner point), see Remark \ref{r:breakingmodelclose}. So we can interpolate between the given map on the boundary $\partial\overline\mathcal{M}_\ell(a)$ and the map $\widetilde\Phi_\ell(a)$ on the interior $\mathcal{M}_\ell(a)$ to obtain a map $\Phi_\ell(a):\overline\mathcal{M}_\ell(a)\to\Sigma^\ell$ satisfying~\eqref{eq:chain-map-ell}. Recall that $\delta\overline\mathcal{M}_\ell(a)$ is an immersed $1$-dimensional submanifold with finitely many transverse self-intersections in the interior, and which meets the boundary transversely away from the corners. The modification of $\widetilde\Phi_\ell(a)$ can be done away from the finite set of self-intersections of $\delta\overline\mathcal{M}_\ell(a)$ in the interior. Moreover, the modification of $\widetilde\Phi_\ell(a)$ near the boundary only involves inserting spikes at switching points of broken strings, which can be performed away from the finitely many interior intersection points of the broken strings with the knot and thus does not affect $\delta\overline\mathcal{M}_\ell(a)$. We pick a smooth triangulation of $\overline\mathcal{M}_\ell(a)$ transverse to $\delta\overline\mathcal{M}_\ell(a)$ (i.e., transverse to its $1$-dimensional strata as well as its self-intersection points) and inducing the given triangulation on the boundary. By the discussion in the preceding paragraph, $\Phi_\ell(a)$ (interpreted as the sum over its restriction to simplices) is a generic $2$-chain of broken strings. This concludes the inductive step and thus the proof of Proposition~\ref{prop:chain-map-ell}. \end{proof} Given a Reeb chord $a$, we define $$ \Phi(a) := \sum_{\ell=0}^\kappa\Phi_\ell(a)\in C(\Sigma)=\bigoplus_{\ell=0}^\infty C(\Sigma^\ell). $$ Here $\kappa$ is the constant from the Finiteness Theorem~\ref{thm:finite}. The relation~\eqref{eq:chain-map-ell} for the chains $\Phi_\ell(a)$ translates into \begin{equation}\label{eq:chain-map} \partial\Phi(a) = \Phi(\partial^{\rm sy} a) -\delta\Phi(a),\qquad \delta=\delta_Q+\delta_N. \end{equation} Given a $d$-simplex of Reeb strings $\mathbf{a}=\alpha_{1}a_1\dots a_m\alpha_{m+1}:\Delta\to\mathcal{R}^m$ we define $$ \Phi(\mathbf{a}) := \alpha_{1}\Phi(a_1)\dots \alpha_m\Phi(a_m)\alpha_{m+1}\in C(\Sigma). $$ Here the boundary arcs are concatenated in the obvious way to obtain broken strings. For singular simplices $\Delta_i$ appearing as domains in $\Phi(a_i)$, the corresponding term in $\Phi(\mathbf{a})$ has by our orientation convention the domain $$ \Delta\times\Delta_1\times\cdots\times\Delta_m $$ in this order of factors. \begin{thm}\label{t:Phichainmap} The map $\Phi$ is a chain map from $(C_*(\mathcal{R}),\partial_\Lambda)$ to $(C_*(\Sigma),\partial+\delta_{Q}+\delta_{N})$. \end{thm} \begin{proof} Using~\eqref{eq:chain-map} we compute for $\mathbf{a}\in C_d(\mathcal{R})$ as above, with $*=d+|a_1|+\cdots+|a_{i-1}|$: \begin{align*} \partial\Phi(\mathbf{a}) &= \Phi(\partial^{\rm sing}\mathbf{a}) + \sum_{i=1}^m(-1)^{*} \alpha_{1}\Phi(a_1)\alpha_2\cdots \partial\Phi(a_i)\cdots\alpha_m\Phi(a_m)\alpha_{m+1} \cr &= \Phi(\partial^{\rm sing}\mathbf{a}) + \sum_{i=1}^m(-1)^{*} \alpha_{1}\Phi(a_1)\alpha_2\cdots \Bigl(\Phi(\partial^{\rm sy} a_i)-\delta\Phi(a_i)\Bigr)\cdots\alpha_{m+1} \cr &= \Phi(\partial^{\rm sing}\mathbf{a}) + \Phi(\partial^{\rm sy} \mathbf{a})-\delta\Phi(\mathbf{a}). 
\end{align*} Since $\partial_\Lambda=\partial^{\rm sing}+\partial^{\rm sy}$, this proves the theorem. \end{proof} {\bf Compatibility with length filtrations. } Holomorphic disks with switching boundary conditions have a length decreasing property that leads to the chain map $\Phi$ respecting the length (or action) filtration, which is central for our isomorphism proof. Let $u\in\mathcal{M}(a;\mathbf{n})$ be a holomorphic disk with $k$ boundary segments that map to $Q$. Let $\sigma_1,\dots,\sigma_{k}$ be the corresponding curves in $Q$ and let $L(\sigma_{i})$ denote the length of $\sigma_{i}$. Recall that the Reeb chord $a$ is the lift of a binormal chord on the link $K$ and that the action $\int_{a} pdq$ of $a$ equals the length of the underlying chord in $Q$, which we write as $L(a)$. In Section~\ref{sec:length-estimates2} we utilize the positivity of a scaled version of the contact form on holomorphic disks to show the following result (Proposition~\ref{prop:length-estimate}). \begin{prop}\label{prop:disklength} If $u\in \mathcal{M}(a;\mathbf{n})$ is as above then \[ \sum_{i=1}^{k} L(\sigma_{i}) \leq L(a), \] with equality if and only if $u$ is a trivial half strip over a binormal chord. \end{prop} Recall that both chain complexes $(C_*(\mathcal{R}),\partial_\Lambda)$ and $(C_*(\Sigma),\partial+\delta_{Q}+\delta_{N})$ carry length filtrations that were defined in Sections~\ref{sec:leg} and~\ref{ss:length-filt}, respectively. Recall also that the length filtration on $C_*(\Sigma)$ does not count the lengths of $Q$-spikes. Hence the insertion of $Q$-spikes in the definition of the chain map $\Phi$ does not increase length and Proposition~\ref{prop:disklength} implies \begin{cor}\label{cor:respect-length} The chain map $\Phi$ in Theorem~\ref{t:Phichainmap} respects the length filtrations, i.e., it does not increase length. \end{cor} \section{Properties of holomorphic disks}\label{S:mdlisp} In this section we begin our analysis of the holomorphic disks involved in the definition of the chain map from Legendrian contact homology to string homology. For the remainder of the paper, we consider the following setup: \begin{itemize} \item $Q$ is a real analytic Riemannian $3$-manifold without closed geodesics and convex at infinity (the main example being $Q={\mathbb{R}}^3$ with the flat metric); \item $K\subset Q$ is a real analytic knot with nondegenerate binormal chords; \item $L_K\subset T^{\ast}Q$ is the conormal bundle, $Q\subset T^{\ast}Q$ is the $0$-section, and $$ L = L_K\cup Q $$ is the singular Lagrangian with clean intersection $L_K\cap Q=K$. \end{itemize} The reader will notice that much of the discussion naturally extends to higher dimensional manifolds $Q$ and submanifolds $K\subset Q$. \subsection{Almost complex structures}\label{s:acs} Consider the subsets $$ S^*Q = \{(q,p)\;\bigl|\;|p|=1\}\subset D^*Q = \{(q,p)\;\bigl|\;|p|\leq 1\}\subset T^*Q $$ of the cotangent bundle. The canonical isomorphism \begin{equation*} {\mathbb{R}}\times S^*Q\to T^*Q\setminus Q,\qquad \bigl(s,(q,p)\bigr)\mapsto (q,e^s p) \end{equation*} intertwines the ${\mathbb{R}}$-actions given by translation resp.~rescaling. Let $\lambda=p\,dq$ be the canonical Liouville form on $T^*Q$ with Liouville vector field $p\partial_p$. Its restriction $\lambda_1$ to $S^*Q$ is a contact form with contact structure $\xi=\ker\lambda_1$ and Reeb vector field $R$. We denote the ${\mathbb{R}}$-invariant extensions of $\lambda_1,\xi,R$ to $T^*Q\setminus Q$ by the same letters. 
In geodesic normal coordinates $q_i$ and dual coordinates $p_i$ they are given by $$ \lambda_1=\frac{p\,dq}{|p|},\qquad R = \sum_i p_i\frac{\partial}{\partial q_i},\qquad \xi_{(q,p)} = \ker\lambda_1\cap\ker(p\,dp) = {\rm span}\Bigl\{R,p\frac{\partial}{\partial p}\Bigr\}^{\perp_{d\lambda_1}}. $$ Around each Reeb chord $c:[0,T]\to S^*Q$ with end points on $\Lambda_K=L_K\cap S^*Q$ we pick a neighborhood $U\times(-\varepsilon,T+\varepsilon)\subset S^*Q$, where $U$ is a neighborhood of the origin in ${\mathbb{C}}^{2}$, with the following properties: \begin{itemize} \item the Reeb chord $c$ corresponds to $\{0\}\times[0,T]$; \item the Reeb vector field $R$ is parallel to $\partial_t$, where $t$ is the coordinate on $(-\varepsilon,T+\varepsilon)$ and the contact planes project isomorphically onto $U$ along $R$; \item along $\{0\}\times(-\varepsilon,T+\varepsilon)$ the contact planes agree with ${\mathbb{C}}^2\times\{0\}$ and the form $d\lambda_1$ with $\omega_{\rm st}=dx_1\wedge dy_1+dx_2\wedge dy_2$; \item the Legendrian $\Lambda_{K}$ intersects $U\times(-\varepsilon,T+\varepsilon)$ in two linear subspaces contained in $U\times\{0\}$ and $U\times\{T\}$, respectively, whose projections to $U$ are transversely intersecting Lagrangian subspaces of $({\mathbb{C}}^2,\omega_{\rm st})$. \end{itemize} \begin{definition}\label{def:admissible} An almost complex structure $J$ on $T^*Q$ is called {\em admissible} if it has the following properties. \begin{enumerate} \item $J$ is everywhere compatible with the symplectic form $dp\wedge dq$. Moreover, $Q$ admits an exhaustion $Q_1\subset Q_2\subset\cdots$ by compact sets with smooth boundary such that the pullbacks $\pi^{-1}(\partial Q_i)$ under the projection $\pi:T^*Q\to Q$ are $J$-convex hypersurfaces. \item Outside $D^*Q$, $J$ agrees with an ${\mathbb{R}}$-invariant almost complex structure $J_1$ on the symplectization that takes the Liouville field $p\partial_p$ to the Reeb vector field $R$, restricts to a complex structure on the contact distribution $\xi$, and is compatible with the symplectic form $d\lambda_1$ on $\xi$. \item Outside the zero section, $J$ preserves the subspace ${\rm span}\{p\partial_p,R\}$ as well as $\xi$ and is compatible with the symplectic form $d\lambda_1$ on $\xi$. Along the zero section, $J$ agrees with the canonical structure $\frac{\partial}{\partial p_i}\mapsto\frac{\partial}{\partial q_i}$. \item $J$ is integrable near $K$ such that $Q$ and $K$ are real analytic. \item On each neighborhood $U\times(-\varepsilon,T+\varepsilon)$ around a Reeb chord as above, the restriction of $J_1$ to the contact planes is the pullback of the standard complex structure on $U\subset{\mathbb{C}}^2$ under the projection. \end{enumerate} \end{definition} \begin{remark} Conditions (i) and (ii) are standard conditions for studying holomorphic curves in $T^*Q$ and its symplectization ${\mathbb{R}}\times S^*Q$. Condition (iii) ensures the crucial length estimate for holomorphic curves in the next subsection. Condition (iv) is needed for the Finiteness Theorem~\ref{thm:finite} to hold. Condition (v) is added to facilitate our study of spaces of holomorphic disks and is convenient for fixing gauge when finding smooth structures on moduli spaces; it can probably be removed with a more involved analysis of asymptotics. \end{remark} \begin{remark} Note that an admissible almost complex structure remains so under arbitrary deformations satisfying (ii) that are supported outside $D^*Q$ and away from the Reeb chords. 
This gives us enough freedom to achieve transversality within the class of admissible structures in Section~\ref{sec:trans}. \end{remark} The Riemannian metric on $Q$ induces a canonical almost complex structure $J_{\rm st}$ on $T^*Q$ which in geodesic normal coordinates $q_i$ at a point $q$ and dual coordinates $p_i$ is given by $$ J_{\rm st}\left(\frac{\partial}{\partial q_i}\right)=-\frac{\partial}{\partial p_i},\qquad J_{\rm st}\left(\frac{\partial}{\partial p_i}\right)=\frac{\partial}{\partial q_i}. $$ More generally, for a positive smooth function $\rho\colon [0,\infty)\to(0,\infty)$ we define an almost complex structure $J_{\rho}$ by $$ J_\rho\left(\frac{\partial}{\partial q_i}\right)=-\rho(|p|)\frac{\partial}{\partial p_i},\qquad J_\rho\left(\frac{\partial}{\partial p_i}\right)=\rho(|p|)^{-1}\frac{\partial}{\partial q_i}. $$ If $\rho(r)=r$ for large $r$, then it is easy to check that $J_\rho$ satisfies the first part of condition (i) as well as conditions (ii) and (iii) in Definition~\ref{def:admissible}. If the metric is flat (i.e., $Q$ is ${\mathbb{R}}^3$ or a quotient of ${\mathbb{R}}^3$ by a lattice), then $J_{\rm st}$ is integrable and $J_\rho$ also satisfies the second part of (i) (choosing $Q_i$ to be round balls) and condition (iv). Condition (v) can then be arranged by deforming $J_\rho$ near infinity within the class of almost complex structures satisfying (ii). So we have shown the following. \begin{lemma}\label{lem:admissible} For $Q={\mathbb{R}}^3$ with the Euclidean metric there exist admissible almost complex structures in the sense of Definition~\ref{def:admissible}. \hfill $\square$ \end{lemma} \begin{remark} In fact, the almost complex structure $J_{\rm st}$ induced by the metric is integrable if and only if the metric is flat (this observation is due to M.~Gr\"uneberg, unpublished). So the preceding proof of Lemma~\ref{lem:admissible} does not carry over to general manifolds $Q$ (although the conclusion should still hold). \end{remark} \comment{ We will use the following notation for subsets of $T^{\ast}{\mathbb{R}}^3$. Let $(q,p)$ be coordinates on $T^{\ast}{\mathbb{R}}^{3}$ where $q\in{\mathbb{R}}^{3}$ and $p$ is a coordinate on the fiber. For $r>0$ we write \begin{align*} E^{\ast}_{r}{\mathbb{R}}^{3}&=\{(q,p)\colon p^{2}\ge r^{2}\},\\ D^{\ast}_{r}{\mathbb{R}}^{3}&=\{(q,p)\colon p^{2}\le r^{2}\},\\ S^{\ast}_r{\mathbb{R}}^{3}&=\{(q,p)\colon p^{2}=r^{2}\}. \end{align*} The restriction of the action form $p\,dq$ to $S^{\ast}_r{\mathbb{R}}^{3}$ is a contact form. Consider the symplectization ${\mathbb{R}}\times S^{\ast}{\mathbb{R}}^{3}$, $S^{\ast}{\mathbb{R}}^{3}=S_1^{\ast}{\mathbb{R}}^{3}$ with symplectic form $d(e^{t}(p\, dq))$, where $t$ is a coordinate in the ${\mathbb{R}}$-factor. We view $E^{\ast}_{1}{\mathbb{R}}^{3}$ as the positive half of the symplectization using the map $\phi\colon {\mathbb{R}}_+\times S^{\ast}_1{\mathbb{R}}^{3}\to E^{\ast}_{1}{\mathbb{R}}^{3}$ \[ \phi((q,p))=(q,e^{t}p). \] Then $\phi$ intertwines the symplectic forms $d(e^{t}(p\,dq))$ on ${\mathbb{R}}_+\times S^{\ast}{\mathbb{R}}^{3}$ and $dp\wedge dq$ on $E^{\ast}_{1}{\mathbb{R}}^{3}$. When studying spaces of holomorphic curves we will use almost complex structures on $T^{\ast}{\mathbb{R}}^{3}$ with the following properties: \begin{enumerate} \item On $D^{\ast}_{1/2}$, $J=J_0$ where $J_0$ is the standard complex structure on a $\frac12$-neighborhood of ${\mathbb{R}}^{3}\subset{\mathbb{C}}^{3}$. 
\item On $E_{1}^{\ast}=S^{\ast}{\mathbb{R}}^{3}$, $J$ is induced from a complex structure on the contact planes $\ker(pdq)$ that is compatible with the symplectic form $d(pdq)$ on these contact planes, is ${\mathbb{R}}$-invariant, and takes $\partial_{t}$ to the Reeb vector field $R$. \item $J$ is everywhere compatible with the symplectic form $dp\wedge dq$. \item Each Reeb chord $c$ of $\Lambda_{K}$, has a neighborhood $\phi\colon \tilde c\times U\to S^{\ast}{\mathbb{R}}^{3}$, where $U$ is a neighborhood of the origin in ${\mathbb{C}}^{2}$ and where $\tilde c$ is an open flow segment containing $c$ with the following properties. The Legendrian $\Lambda_{K}$ corresponds to the intersection of linear Lagrangian subspaces of $U$ at the Reeb chord endpoints and the contact planes project isomorphically to $U$. We take the complex structure in the contact planes to be induced by the standard complex structure in $U$. Note that this specifies the almost complex structure $J$ in a neighborhood of ${\mathbb{R}}\times c\subset E_1^{\ast}{\mathbb{R}}^{3}$. \end{enumerate} \begin{remark} Conditions (ii) and (iii) are standard conditions for studying holomorphic disk in cotangent bundles. Conditions (i) and (iv) are added to facilitate our study of spaces of holomorphic disks and are convenient for fixing gauge when finding smooth structures on moduli spaces. The properties needed for gauge fixing can probably be proved for a much larger class of complex structures with more a involved analysis of asymptotics. \end{remark} We will also study holomorphic disks in the symplectization ${\mathbb{R}}\times S^{\ast}{\mathbb{R}}^{3}$ and we use the ${\mathbb{R}}$-invariant extension of $J$ as above in the positive part of the symplectization $E_{1}^{\ast}{\mathbb{R}}^{3}$ to the whole symplectization. } The next result provides nice holomorphic coordinates near $K\subset T^{\ast}Q$. \begin{lemma}\label{l:knotnbhd} Suppose that $J$ satisfies condition (iv) in Definition~\ref{def:admissible}. Then for $\delta>0$ small enough there exists a holomorphic embedding from $S^{1}\times (-\delta,\delta)\times B^{4}_\delta$, where $B^{4}_\delta\subset {\mathbb{C}}^{2}$ is the ball of radius $\delta$, with its standard complex structure onto a neighborhood of $K$ in $T^{\ast}Q$ with complex structure $J$ with the following properties: \begin{itemize} \item $S^{1}\times \{0\}\times\{0\}$ maps onto $K$; \item $S^{1}\times\{0\}\times({\mathbb{R}}^{2}\cap B^{4}_\delta)$ maps to $Q$; \item $S^{1}\times\{0\}\times(i{\mathbb{R}}^{2}\cap B^{4}_\delta)$ maps to $L_K$. \end{itemize} Alternatively, we can arrange the last two properties with the roles of $Q$ and $L_K$ interchanged. \end{lemma} \begin{proof} This is proved in more generality in~\cite[Remark 3.2]{CEL}; for convenience we repeat the proof in the situation at hand. Consider the real analytic embedding $\gamma\colon S^{1}\to Q$ representing $K$. Pick a real analytic vector field $v$ on $Q$ which is nowhere tangent to $K$ along $K$. Let $v_1$ be the unit vector field along $K$ in the direction of the component of $v$ perpendicular to $\dot\gamma$. Then $v_1$ is a real analytic vector field along $K$. Let $v_2= \dot\gamma \times v_1$ be the unit vector field along $K$ which is perpendicular to both $\dot\gamma$ and $v_1$ and which is such that $(\dot\gamma,v_1,v_2)$ is a positively oriented basis of $TQ$. Consider $S^{1}\times D^{2}$ with coordinates $(s,\sigma_1,\sigma_2)$, $s\in{\mathbb{R}}/{\mathbb{Z}}$, $\sigma_j\in{\mathbb{R}}$. 
Since $\gamma$ is an embedding there exists $\rho>0$ such that \begin{equation}\label{e:emb1} \phi(s,\sigma_1,\sigma_2)=\gamma(s)+\sigma_1v_1(s)+\sigma_2 v_2(s) \end{equation} is an embedding for $\sigma_1^{2}+\sigma_2^{2}<\rho$. Note that the embedding is real analytic. Equip $S^{1}\times D^{2}$ with the flat metric and consider the induced complex structure on $T^{\ast}(S^1\times D^{2})$. The real analyticity of $\phi$ in \eqref{e:emb1} implies that it extends to a holomorphic embedding $\Phi$ from a neighborhood of $S^{1}\times D^{2}$ in $T^{\ast}(S^1\times D^{2})$ to a neighborhood of $K$ in $T^{\ast}Q$ (here we use integrability of $J$ near $K$). In fact, locally $\Phi$ is obtained by replacing the real variables $(s,\sigma_1,\sigma_2)$ in the power series expansion of the right hand side of \eqref{e:emb1} by their complexifications $(s+it,\sigma_1+i\tau_1,\sigma_2+i\tau_2)$. This proves the first assertion of the lemma. The alternative assertion follows from this one by precomposing $\Phi$ with multiplication by $i$ on $B^4_\delta$. \end{proof} \begin{remark} The coordinate system gives a framing of $K$ determined by the normal vector field $v$. By real analytic approximation we can take $v$ to represent any class of framings. \end{remark} \subsection{Length estimates}\label{sec:length-estimates2} In this subsection we show that the chain map $\Phi$ respects the length filtrations. This was shown in~\cite{CL} for the absolute case, i.e.~without the additional boundary condition $L_K$, and the arguments carry over immediately to the relative case. For completeness, we provide the proof in this subsection and we keep the level of generality of \cite{CL}, which is slightly more than what we use in this paper. As preparation, consider a smooth function $\tau\colon [0,\infty) \to [0,\infty)$ with $\tau'(s) \geq 0$ everywhere and $\tau(s)=0$ near $s=0$. Then $$ \lambda_\tau := \frac{\tau(|p|) p\,dq}{|p|} $$ defines a smooth $1$-form on $T^*Q$. \begin{lemma}\label{lem:nonneg} Let $J$ be an admissible almost complex structure on $T^*Q$ and $\tau$ a function as above. Then for all $v\in T_{(q,p)}T^*Q$ we have $$ d\lambda_\tau(v,Jv)\geq 0. $$ At points where $\tau(|p|)>0$ and $\tau'(|p|)>0$ equality holds only for $v=0$, whereas at points where $\tau(|p|)>0$ and $\tau'(|p|)=0$ equality holds if and only if $v$ is a linear combination of the Liouville field $p\,\partial_{p}$ and the Reeb vector field $R=p\,\partial_{q}$. \end{lemma} \begin{proof} By condition (iii) in Definition~\ref{def:admissible}, $J$ preserves the splitting $$ T(T^*Q)={\rm span}\{p\,\partial_p,R\}\oplus \xi $$ and is compatible with $d\lambda_1$ on $\xi$. Let us denote by $\pi_1:T(T^*Q)\to {\rm span}\{p\,\partial_p,R\}$ and $\pi_2:T(T^*Q)\to \xi$ the projections onto the direct summands. Since $\ker(d\lambda_1) = {\rm span}\{p\,\partial_p,R\}$, for $v\in T_{(q,p)}T^*Q$ we conclude $$ d\lambda_1(v,Jv) = d\lambda_1(\pi_2v,J\pi_2v)\geq 0, $$ with equality iff $v\in {\rm span}\{p\,\partial_p,R\}$. Next, we consider $$ d\lambda_\tau = \tau(|p|)d\lambda_1 + \frac{\tau'(|p|)}{|p|}p\,dp\wedge\lambda_1. $$ Since the form $p\,dp\wedge\lambda_1$ vanishes on $\xi$ and is positive on ${\rm span}\{p\,\partial_p,R\}$, we conclude $$ d\lambda_\tau(v,Jv) = \tau(|p|)d\lambda_1(\pi_2v,J\pi_2v) + \frac{\tau'(|p|)}{|p|}p\,dp\wedge\lambda_1(\pi_1v,J\pi_1v) \geq 0, $$ with equality iff both summands vanish. From this the lemma follows.
\end{proof} Let now $J$ be an admissible almost complex structure on $T^*Q$ and $$ u\colon (\Sigma,\partial \Sigma)\to (T^*Q,Q\cup L_K) $$ be a $J$-holomorphic curve with finitely many positive boundary punctures asymptotic to Reeb chords $a_1,\dots,a_s$ and with switching boundary conditions on $Q\cup L_K$. Let $\sigma_1,\dots,\sigma_k$ be the boundary segments on $Q$. Recall that $L(\sigma_i)$ denotes the Riemannian length of $\sigma_i$ and $L(a_j)=\int_{a_j}\lambda_1$ denotes the action of the Reeb chord $a_j$, which agrees with the length of the corresponding binormal chord. \begin{prop}\label{prop:length-estimate} With notation as above we have $$ \sum_{i=1}^kL(\sigma_i) \leq \sum_{j=1}^sL(a_j), $$ and equality holds if and only if $u$ is a branched covering of a half-strip over a binormal chord. \end{prop} \begin{proof} The idea of the proof is straightforward: integrate $u^*d\lambda_1$ over $\Sigma$ and apply Stokes' theorem. However, some care is required to make this rigorous because the $1$-form $\lambda_1$ is singular along the zero section. Fix a small $\delta>0$. For $i=1,\dots,k$ pick biholomorphic maps $\phi_i:[0,\delta]\times[0,1]\to N_i\subset\Sigma$ onto neighborhoods $N_i$ in $\Sigma$ of the $i^{th}$ boundary segment mapped to $Q$, so that $\phi_i(0,t)$ is a parametrization of the $i^{th}$ boundary segment. We choose $\delta$ so small that $N_i \cap N_j = \varnothing$ if $i\neq j$ and $u\circ\phi_i(\delta,\cdot)$ does not hit the zero section (the latter is possible because otherwise by unique continuation $u$ would be entirely contained in the zero section, which it is not by assumption). For fixed $i$ we denote the induced parametrization of $\sigma_i$ by $q(t):=u\circ\phi_i(0,t)\in Q$, so we can write $$ u\circ\phi_i(s,t)=(q(t)+v(s,t),s\dot{q}(t) + w(s,t)) $$ with $v(0,t)=0=w(0,t)$, and therefore $\frac {\partial v}{\partial t}(0,t)=0=\frac {\partial w}{\partial t}(0,t)$.
The hypothesis that $J$ is standard near the zero section (condition (iii) in Definition~\ref{def:admissible}) implies that $\frac {\partial v}{\partial s}(0,t)=0=\frac {\partial w}{\partial s}(0,t)$. Denoting $v_\delta=v(\delta,\cdot)$ and $w_\delta=w(\delta,\cdot)$ we compute \begin{align*} (u\circ\phi_i)^*\lambda_1|_{s=\delta} &= \frac{\langle \delta\dot{q} + w_\delta,\dot{q} + \dot{v_\delta} \rangle} {|\delta\dot{q} + w_\delta|} dt \\ &= \frac {\langle \dot{q} + \frac{w_\delta}{\delta}, \dot{q} + \dot{v_\delta}\rangle} {|\dot{q}+ \frac{w_\delta}{\delta}|} dt\\ &= \bigl(|\dot q| + O(\delta)\bigr)dt, \end{align*} where in the last line we have used that $\dot v_\delta=O(\delta)$ and $w_\delta=O(\delta^2)$. Pick $\varepsilon>0$ smaller than the minimal norm of the $p$-components of $u\circ\phi_i(\delta,\cdot)$ for all $i$. Pick a function $\tau\colon [0,\infty) \to [0,1]$ with $\tau'\geq 0$, $\tau(s)=0$ near $s=0$, and $\tau(s)=1$ for $s\geq \varepsilon$. By Lemma~\ref{lem:nonneg}, the form $\lambda_\tau = \frac{\tau(|p|)}{|p|}pdq$ on $T^*Q$ satisfies $u^*(d\lambda_\tau) \geq 0$. Note that $\lambda_\tau$ agrees with $\lambda_1 = \frac p {|p|}dq$ on the subset $\{|p|\geq \epsilon\} \subset T^*Q$, so the preceding computation yields $$ \int_{\{s=\delta\}}(u\circ\phi_i)^*\lambda_\tau = \int_{\{s=\delta\}}\bigl(|\dot q| + O(\delta)\bigr)dt = L(\sigma_i)+O(\delta) $$ for all $i$. Next, consider polar coordinates $(r,\varphi)$ around $0$ in the upper half plane $H^+$ near the $j^{th}$ positive puncture. Then the asymptotic behavior of $u$ near the punctures yields \[ \int_{\{r=\delta\}\cap H^+}u^*\lambda_\tau = L(a_j)+O(\delta). \] Now let $\Sigma_\delta\subset\Sigma$ be the surface obtained by removing the neighborhoods $\{r\leq \delta\}\cap H^+$ around the positive punctures and the neighborhoods $N_i$ of the boundary segments mapped to $Q$, see Figure \ref{fig:sigmadelta}. \begin{figure} \labellist \small\hair 2pt \pinlabel $L_K$ at 94 305 \pinlabel $L_K$ at 277 300 \pinlabel $L_K$ at 12 158 \pinlabel $L_K$ at 118 13 \pinlabel $L_K$ at 323 87 \pinlabel $Q$ at 42 247 \pinlabel $Q$ at 49 61 \pinlabel $Q$ at 228 10 \pinlabel $\Sigma$ at 342 18 \pinlabel $\Sigma_\delta$ at 178 162 \pinlabel $N_1$ at 69 230 \pinlabel $N_2$ at 79 83 \pinlabel $N_3$ at 217 39 \endlabellist \centering \includegraphics[width=.6\linewidth]{figures/sigmadelta-new} \caption{The domain $\Sigma_{\delta}$ is obtained from $\Sigma$ by removing small neighborhoods of the boundary arcs mapping to $Q$ and of the positive punctures. The punctures are denoted by x, and switches are denoted by dots.} \label{fig:sigmadelta} \end{figure} The boundary of $\Sigma_\delta$ consists of the arcs $\{r=\delta\}\cap H^+$ around the positive punctures, the arcs $\phi_i(\{s=\delta\})$ near the boundary segments mapped to $Q$ (negatively oriented), and the remaining parts of $\partial\Sigma$ mapped to $L_K$. Since $\lambda_\tau$ vanishes on $L_K$, the latter boundary parts do not contribute to its integral and Stokes' theorem combined with the preceding observations yields $$ 0 \leq \int_{\Sigma_\delta}u^*d\lambda_\tau = \int_{\partial\Sigma_\delta}u^*\lambda_\tau = \sum_{j=1}^sL(a_j) - \sum_{i=1}^kL(\sigma_i) + O(\delta). $$ Taking $\delta\to 0$ this proves the inequality in Proposition~\ref{prop:length-estimate}. Equality holds iff $u^*d\lambda_\tau$ vanishes identically, which by Lemma~\ref{lem:nonneg} is the case iff $u$ is everywhere tangent to ${\rm span}\{p\,\partial_p,R\}$. 
In view of the asymptotics at the positive punctures, this is the case precisely for branched coverings of half-strips over binormal chords. \end{proof} \subsection{Holomorphic half-strips} We consider the half-strip ${\mathbb{R}}_+\times[0,1]$ with coordinates $(s,t)$ and its standard complex structure. Let $J$ be an admissible almost complex structure on $T^*Q$ and $J_1$ the associated structure on ${\mathbb{R}}\times S^*Q$. A {\em holomorphic half-strip} in ${\mathbb{R}}\times S^*Q$ is a holomorphic map $$ u\colon{\mathbb{R}}_+\times[0,1]\to ({\mathbb{R}}\times S^*Q,J_1) $$ mapping the boundary segments ${\mathbb{R}}_+\times\{0\}$ and ${\mathbb{R}}_+\times\{1\}$ to ${\mathbb{R}}\times\Lambda_K$. Similarly, a holomorphic half-strip in $T^*Q$ is a holomorphic map $$ u\colon{\mathbb{R}}_+\times[0,1]\to (T^*Q,J) $$ mapping the boundary to $L=L_{K}\cup Q$. We write the components of a map $u$ into ${\mathbb{R}}\times S^*Q$ (or into $T^*Q\setminus D^*Q\cong {\mathbb{R}}_+\times S^*Q$) as $$ u=(a,f). $$ Recall from~\cite{BEHWZ} (see also~\cite{CEL}) that to any smooth map $u$ from a surface to ${\mathbb{R}}\times S^*Q$ or $T^*Q$ we can associate its {\em Hofer energy} $E(u)$. It is defined as the sum of two terms, the $\omega$-energy and the $\lambda$-energy, whose precise definition will not be needed here. The following result follows from \cite[Lemma B.1]{E_rsft}, see also \cite[Proposition 6.2]{BEHWZ}, in combination with well-known results in Lagrangian Floer theory, see e.g.~\cite{Floer_Lag_int}. \begin{prop}\label{prop:asympt} For each holomorphic half-strip $u$ in ${\mathbb{R}}\times S^*Q$ or $T^*Q$ of finite Hofer energy exactly one of the following holds: \begin{itemize} \item There exists a Reeb chord $c\colon [0,T]\to S^*Q$ and a constant $a_0\in{\mathbb{R}}$ such that $$ a(s,t)-Ts-a_0\to 0,\qquad f(s,t)\to c(Tt) $$ uniformly in $t$ as $s\to\infty$. We say that the map has a \emph{positive puncture} at $c$. \item There exists a Reeb chord $c\colon [0,T]\to S^*Q$ and a constant $a_0\in{\mathbb{R}}$ such that $$ a(s,t)+Ts-a_0\to 0,\qquad f(s,t)\to c(T-Tt) $$ uniformly in $t$ as $s\to\infty$. We say that the map has a \emph{negative puncture} at $c$. \item There exists a point $x_0$ on ${\mathbb{R}}\times\Lambda_K$ (resp.~$L$) such that $$ u(s,t)\to x_0 $$ uniformly in $t$ as $s\to\infty$. In this case $u\circ\chi^{-1}$, where $\chi:{\mathbb{R}}_+\times[0,1]\to D^+$ is the map from~\eqref{eq:chi}, extends to a holomorphic map on the half-disk mapping the boundary to ${\mathbb{R}}\times\Lambda_K$ (resp.~$L$). If $x_0\notin K$ then we say that $u$ has a \emph{removable puncture} at $x_0$, and if $x_0\in K$ then we say that $u$ has a {\em Lagrangian intersection puncture} at $x_0$. (These are the standard situations in ordinary Lagrangian intersection Floer homology.) \end{itemize} \end{prop} Because of our choice of almost complex structure we can say more about the local forms of the maps as follows. Consider first a Reeb chord puncture where the map approaches a Reeb chord $c$. Let $U\times(-\varepsilon,T+\varepsilon)$ be the neighborhood of $c$ as in Definition~\ref{def:admissible} (v) and note that the holomorphic half-strip is uniquely determined by the local projection to $U\subset{\mathbb{C}}^{2}$ where the complex structure is standard.
By a complex linear change of coordinates on ${\mathbb{C}}^{2}$ we can arrange that the two branches of the Legendrian $\Lambda_K$ through the end points of $c$ project to ${\mathbb{R}}^{2}$ and to the subspace spanned by the vectors $(e^{i\theta_1},0)$ and $(0,e^{i\theta_2})$, for some angles $\theta_1,\theta_2$. The ${\mathbb{C}}^{2}$-component $v$ of the map $u$ then has a Fourier expansion \begin{equation}\label{eq:chordasympt} v(z)=\sum_{n\ge 0} \left(c_{1;n}e^{-(\theta_1+n\pi) z}, c_{2;n}e^{-(\theta_2+n\pi) z}\right), \end{equation} where $c_{j;n}$ are real numbers. We call the smallest $n$ such that $(c_{1;n},c_{2;n})\ne 0$ the \emph{order of convergence} to the Reeb chord $c$. We have similar expansions near the Lagrangian intersection punctures. Lemma \ref{l:knotnbhd} gives holomorphic coordinates $(z_0,z_1)=(x_0+iy_0,x_1+iy_1)$ in ${\mathbb{C}}\times{\mathbb{C}}^{2}$ around any point $q_0\in K$ such that the Lagrangian submanifold $Q\subset T^*Q$ corresponds to $\{y_0=y_1=0\}$, the Lagrangian submanifold $L_K$ corresponds to $\{y_0=x_1=0\}$, and the almost complex structure $J$ corresponds to the standard complex structure $i$ on ${\mathbb{C}}^{3}$. Consider a holomorphic map $u\colon [0,\infty)\times[0,1]\to T^*Q$ such that $u(z)\to q\in K$ as $z\to\infty$ where $q$ lies in a small neighborhood of $q_0$ in $K$. We write $u$ in the local coordinates described above as $v=(v_0,v_1)$. Now Remark~\ref{rem:series} yields the following Fourier expansions for $v$. If $v([0,\infty)\times\{0\})\subset Q$ and $v([0,\infty)\times\{1\})\subset L_K$ then \begin{equation}\label{e:nearK1} v(z)=\left(\sum_{m\ge 0} c_{0;m} e^{-m\pi z}, \sum_{n\ge 0} c_{1;n+\frac12} e^{-(n+\frac{1}{2})\pi z}\right), \end{equation} where $c_{0;m}\in{\mathbb{R}}$ for all $m\in{\mathbb{Z}}_{\ge 0}$ and where $c_{1;n+\frac12}\in{\mathbb{R}}^{2}$ for all $n\in{\mathbb{Z}}_{\ge 0}$, in a neighborhood of $\infty$. If $v([0,\infty)\times\{0\})\subset L_K$ and $v([0,\infty)\times\{1\})\subset Q$ then \begin{equation}\label{e:nearK2} v(z)=\left(\sum_{m\ge 0} c_{0;m} e^{-m\pi z}, i\sum_{n\ge 0} c_{1;n+\frac12} e^{-(n+\frac{1}{2})\pi z}\right), \end{equation} where notation is as in \eqref{e:nearK1}. If $v([0,\infty)\times\{0\})\subset Q$ and $v([0,\infty)\times\{1\})\subset Q$ then \begin{equation}\label{e:nearK3} v(z)=\left(\sum_{m\ge 0} c_{0;m} e^{-m\pi z}, \sum_{n>0} c_{1;n} e^{-n\pi z}\right), \end{equation} where $c_{0;m}$ is as in \eqref{e:nearK1} and $c_{1;n}\in{\mathbb{R}}^{2}$ for all $n\in{\mathbb{Z}}_{>0}$. If $v([0,\infty)\times\{0\})\subset L_K$ and $v([0,\infty)\times\{1\})\subset L_K$ then \begin{equation}\label{e:nearK4} v(z)=\left(\sum_{m\ge 0} c_{0;m} e^{-m\pi z}, i\sum_{n>0} c_{1;n} e^{-n\pi z}\right), \end{equation} where notation is as in \eqref{e:nearK3}. We say that the smallest half-integer $n+\frac12$ in \eqref{e:nearK1} or \eqref{e:nearK2} such that $c_{1;n+\frac12}\ne 0$ or the smallest integer $n$ in \eqref{e:nearK3} or \eqref{e:nearK4} such that $c_{1;n}\ne 0$ is the {\em asymptotic winding number} of $u$ at its Lagrangian intersection puncture. \subsection{Holomorphic disks}\label{s:disks} Consider the closed unit disk $D\subset{\mathbb{C}}$ with $m+1$ cyclically ordered distinct points $z_0,\dots,z_m$ on $\partial D$. Set $\dot D:=D\setminus\{z_0,\dots,z_m\}$.
Consider a $J$-holomorphic map $u\colon\dot D\to {\mathbb{R}}\times S^{\ast} Q$ resp.~$T^{\ast}Q$ which maps $\partial D\setminus\{z_0,\dots,z_m\}$ to ${\mathbb{R}}\times\Lambda_{K}$ resp.~$L=Q \cup L_{K}$ and which has finite $\omega$-energy and $\lambda$-energy. Proposition~\ref{prop:asympt} shows that near each puncture $z_j$ the map $u$ either extends continuously, or it is positively or negatively asymptotic to a Reeb chord. We will use the following notation for such disks. A {\em symplectization disk (with $m \geq 0$ negative punctures)} is a $J$-holomorphic map $$ u\colon(\dot D,\partial \dot D)\to ({\mathbb{R}}\times S^{\ast} Q,{\mathbb{R}}\times\Lambda_{K}) $$ with positive puncture at $z_0$ and negative punctures at $z_1,\dots,z_m$. A {\em cobordism disk (with $m\ge 0$ Lagrangian intersection punctures)} is a $J$-holomorphic map $$ u\colon(\dot D,\partial \dot D)\to (T^{\ast}Q,L) $$ with positive puncture at $z_0$ and Lagrangian intersection punctures at $z_1,\dots,z_m$. Let $\mathbf{b}=b_1b_2\dots b_m$ be a word of $m$ Reeb chords. We write \[ \mathcal{M}^{\rm sy}(a,n_0;b_1,\dots,b_m)=\mathcal{M}^{\rm sy}(a,n_0;\mathbf{b}) \] for the moduli space of symplectization disks with positive puncture asymptotic to the Reeb chord $a$ where the order of convergence is $n_0$ and $m$ negative punctures (in counterclockwise order) asymptotic to the Reeb chords $b_1,\dots,b_m$. Here the points $z_0,\dots,z_m$ on $\partial D$ are allowed to vary and we divide by the action of M\"obius transformations on $D$. Note that ${\mathbb{R}}$ acts by translation on these moduli spaces. Similarly, let $\mathbf{n}=(n_1,\dots,n_m)$ be a vector of half-integers or integers. We write \[ \mathcal{M}(a,n_0;n_1,\dots,n_m)=\mathcal{M}(a,n_0;\mathbf{n}) \] for the moduli space of cobordism disks with positive puncture asymptotic to the Reeb chord $a$ where the order of convergence is $n_0$ and $m \geq 0$ Lagrangian intersection punctures with asymptotic winding numbers given by the integers or half-integers $n_j$. Note that the number of half-integers must be even for topological reasons (at each half-integer the boundary of $u$ switches from $Q$ to $L_K$ or vice versa). In both cases when $n_0=0$ we will suppress it from notation and simply write \[ \mathcal{M}^{\rm sy}(a;\mathbf{b})\text{ and }\mathcal{M}(a;\mathbf{n}), \] respectively. For a Reeb chord $c\colon[0,T]\to S^*Q$ of length $T$, the map $u_c\colon{\mathbb{R}} \times [0,1] \to {\mathbb{R}} \times S^*Q$ given by $u_c(s+it)=(Ts, c(Tt))$ is a $J$-holomorphic parametrization of ${\mathbb{R}} \times c$ and thus a symplectization disk with positive and negative puncture asymptotic to $c$. We call it the {\em Reeb chord strip} over $c$. \subsection{Compactness in ${\mathbb{R}}\times S^{\ast}Q$ and $T^{\ast} Q$}\label{S:cp} In this subsection we review the compactness results proved in \cite{CEL} for the moduli spaces of holomorphic disks discussed in Section \ref{s:disks}. Let us denote by a {\em source disk} ${\bf D}_m$ the unit disk with some number $m+1\geq 1$ of punctures $z_0,\dots,z_m$ on its boundary; we call $z_0$ the positive and $z_1,\dots,z_m$ the negative punctures.
A {\em broken source disk $\dot{\bf D}_m$ with $r\geq 1$ levels} with $m+1$ boundary punctures is represented as a finite disjoint union of punctured disks, \[ \dot {\bf D}_m={\bf D}^{1,1}\cup({\bf D}^{2,1}\cup\dots\cup {\bf D}^{2,l_2}) \cup \dots\cup({\bf D}^{r,1}\cup\dots\cup {\bf D}^{r,l_r}), \] where $({\bf D}^{j,1}\cup\dots\cup{\bf D}^{j,l_j})$ are the disks in the $j^{\rm th}$ level and we require the following properties: \begin{itemize} \item Each negative puncture $q$ of a disk ${\bf D}^{j,k}$ in the $j^{\rm th}$ level for $j<r$ is formally joined to the positive puncture of a unique disk ${\bf D}^{j+1,s}$ in the $(j+1)^{\rm th}$ level. We say that ${\bf D}^{j+1,s}$ is attached to ${\bf D}^{j,k}$ at the negative puncture $q$. \item The total number of negative punctures on level $r$ is $m$. \end{itemize} Note that a broken source disk with one level is just a source disk. We consider first compactness for curves in the symplectization. Let $\dot{\bf D}_m$ be a broken source disk as above. A {\em broken symplectization disk with $r$ levels} with domain $\dot {\bf D}_m$ is a collection $\dot v$ of $J$-holomorphic maps $v^{j,k}$ defined on ${\bf D}^{j,k}$ with the following properties: \begin{itemize} \item For each $1 \leq j \leq r$ and $1\le k\le l_{j}$, $v^{j,k}$ represents an element in \[ \mathcal{M}^{\rm sy}(a^{j,k};b^{j,k}_1,\dots,b^{j,k}_{s}). \] Moreover, for $j>1$, the Reeb chord $a^{j,k}$ at the positive puncture of $v^{j,k}$ matches the Reeb chord $b^{j-1,k'}$ at the negative puncture of $v^{j-1,k'}$ in ${\bf D}^{j-1,k'}$ at which ${\bf D}^{j,k}$ is attached. \item For each level $1 \leq j \leq r$, at least one of the maps $v^{j,k}$ is not a Reeb chord strip. \end{itemize} An {\em arc} in a source disk is an embedded curve that intersects the boundary only at its end points and away from the punctures. We say that a sequence of symplectization disks \[ \{u_j\}\subset\mathcal{M}^{\rm sy}(a;b_1,\dots,b_m) \] {\em converges to a broken symplectization disk} if there are disjoint arcs $\gamma_1,\dots,\gamma_k$ in the domains of $u_j$ which give the decomposition of the domain into a broken source disk in the limit and such that in the complement of these arcs, the maps $u_j$ converge to the corresponding map of the broken disk uniformly on compact subsets. \begin{thm}\label{t:cp_sy} Any sequence $\{u_{j}\} \subset \mathcal{M}^{\rm sy}(a;b_1,\dots,b_m)$ of symplectization disks has a subsequence which converges to a broken symplectization disk $\dot v$ with $r\geq 1$ levels. \end{thm} \begin{proof} Follows from \cite{BEHWZ} (see also \cite[Theorem 1.1]{CEL}). \end{proof} In order to describe the compactness result for moduli spaces of holomorphic disks in $T^{\ast}Q$ we first introduce a class of constant holomorphic disks and then the notion of convergence to a constant disk. A \emph{constant holomorphic disk} is a source disk ${\bf D}_m$, $m\ge 2$, a constant map into a point $q\in K$, and the following extra structure: Each boundary component is labeled by $L_K$ or by $Q$ and at each puncture $z_j$ there is an asymptotic winding number $n_j\in \{\frac12, 1,\frac32,\dots\}$ such that $n_{j}$ is a half-integer if the adjacent boundary components of $\dot{{\bf D}}_{m}$ are labeled by different components of $L=L_K\cup Q$ and an integer otherwise, and such that $n_0=\sum_{j=1}^{m}n_j$.
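As a simple illustration of this definition, consider the smallest case $m=2$: a source disk ${\bf D}_2$ with three boundary punctures $z_0,z_1,z_2$, the constant map to a point $q\in K$, boundary arcs labeled (in cyclic order starting at $z_0$) by $Q$, $L_K$, $Q$, and asymptotic winding numbers
\[
n_1=n_2=\tfrac12,\qquad n_0=n_1+n_2=1.
\]
At $z_1$ and $z_2$ the two adjacent boundary labels differ, so the winding numbers there are half-integers, while at $z_0$ both adjacent arcs carry the label $Q$ and $n_0$ is an integer, as required. This is the configuration that arises when two Lagrangian intersection punctures of winding number $\frac12$ collide, compare Remark~\ref{r:breakingmodelclose} and Section~\ref{ss:signsandchainmap}.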
A sequence of holomorphic maps $v_j\colon \dot {\bf D}_{m}\to T^{\ast}Q$ with boundary on $L$ \emph{converges to a constant holomorphic disk} if it converges uniformly to the constant map on any compact subset and if for all sufficiently large $j$, $v_j$ takes any boundary component labeled by $L_K$ or $Q$ to $L_K$ or $Q$, respectively, and if the asymptotic winding numbers at the negative punctures of the maps $v_j$ agree with those of the constant limit map at corresponding punctures. Let $\dot{\bf D}_m$ be a broken source disk with $r$ levels and suppose $1 \leq r_0\le r$. A {\em broken cobordism disk with $r_0$ non-constant levels} and domain $\dot {\bf D}_m$ is a collection $\dot v$ of $J$-holomorphic maps $v^{j,k}$ defined on ${\bf D}^{j,k}$ with the following properties. \begin{itemize} \item For $j<r_0$ and $1\le k\le l_{j}$, $v^{j,k}$ represents an element in \[ \mathcal{M}^{\rm sy}(a^{j,k};b^{j,k}_1,\dots,b^{j,k}_{s}). \] Moreover, for $j>1$, the Reeb chord $a^{j,k}$ at the positive puncture of $v^{j,k}$ matches the Reeb chord $b^{j-1,k'}$ at the negative puncture of $v^{j-1,k'}$ in ${\bf D}^{j-1,k'}$ at which ${\bf D}^{j,k}$ is attached. \item For each level $j<r_0$, at least one of the maps $v^{j,k}$ is not a Reeb chord strip. \item For $j=r_0$ and $1\le k\le l_{j}$, $v^{j,k}$ represents an element in \[\mathcal{M}(a^{j,k};n^{j,k}_1,\dots,n^{j,k}_{s})\] and, for $j>1$, the Reeb chord at the positive puncture of $v^{j,k}$ matches the Reeb chord at the negative puncture of $v^{j-1,k'}$ in ${\bf D}^{j-1,k'}$ at which ${\bf D}^{j,k}$ is attached. \item For $j>r_0$, $v^{j,k}$ is a constant map to the point $q\in K$ which is the image of the negative puncture of $v^{j-1,k'}$ in ${\bf D}^{j-1,k'}$, at which ${\bf D}^{j,k}$ is attached. Moreover, ${\bf D}^{j,k}$ has at least $3$ punctures and the winding number and labels at its positive puncture agree with those of the negative puncture where it is attached. (From the point of view of the source disk these constant levels encode degenerations of the conformal structure corresponding to colliding Lagrangian intersection punctures, see Section \ref{ss:constglu} for more details.) \end{itemize} We say that the disks in levels $j<r_0$ are the {\em symplectization disks}, that the disks in level $r_0$ are the {\em cobordism disks}, and that disks in levels $j>r_0$ are the {\em constant disks} of the broken disk. We define convergence to a broken cobordism disk in complete analogy with the symplectization case. \begin{thm}\label{t:cp} Let $\{u_{j}\} \subset \mathcal{M}(a;n_1,\dots,n_m)$ be a sequence of cobordism disks. Then $\{u_j\}$ has a subsequence which converges to a broken cobordism disk. \end{thm} \begin{proof} This is a consequence of \cite[Theorem 1.1]{CEL}. Note that the levels of constant disks are recovered from the sequence of source disks that converges to a broken source disk. \end{proof} \begin{remark}\label{r:breakingmodelclose} We consider the convergence implied by the Compactness Theorem \ref{t:cp} in more detail in a special case relevant to the description of our moduli spaces below. Consider a sequence of holomorphic disks $u_j$ as in the theorem that converges to a broken cobordism disk with top level $v$ and such that all disks on lower levels are constant. Let $q_\ell$ be a negative puncture of the top level $v$ and let $D_\ell$ be the (possibly broken) constant disk attached with its positive puncture at $q_\ell$.
Consider the sequence of domains of $u_j$ as a sequence of strips with slits $S_j$, see the discussion of standard domains in Section \ref{ssec:confrep} and Figure \ref{fig:stdom}. It follows from the proof of \cite[Theorem 1.1]{CEL} that there is a strip region $[-\rho_j,0]\times[0,1]\subset S_j$, where $\rho_j\to\infty$ as $j\to\infty$, such that in the limit the negative puncture $q_\ell$ of $v$ corresponds to $(-\infty,0]\times[0,1]$ and the positive puncture of the domain $D_\ell$ corresponds to $[0,\infty)\times[0,1]$ attached at this puncture. Assume that $q_\ell$ maps to $x\in K$ and consider the Fourier expansion of $v$ near $q_\ell$ in the local coordinates near $K$ perpendicular to the knot: \[ v(s+it)=e^{k_0\pi(s+it)}\sum_{k=0}^{\infty} c_k e^{k\pi(s+it)}, \] where $k_{0}\ge\frac12$ is a half-integer and $c_k$ are vectors in ${\mathbb{R}}^{2}$ or $i{\mathbb{R}}^{2}$, $c_0\ne 0$. We say that the complex line spanned by $c_0$ is the limiting tangent plane of $v$ at $q_\ell$. Writing $v$ using Taylor expansion as a map from the upper half plane with the puncture $q_\ell$ at the origin and taking the complex line of $c_0$ as the first coordinate we find that the normal component of $v$ at $x$ is given by \[ v(z) = \left(z^{k_0},{\mathcal O}(z^{k_0+1})\right), \] after suitable rescaling of the first coordinate. We next restrict to the case relevant to our applications, of a sequence of disks $u_j$ with a constant disk with three or four punctures splitting off. The three punctured disk is simpler, so we consider the case of a disk with four punctures splitting off. In this case, consider a vertical segment $\{\rho^{0}\}\times[0,1]$ in the stretching strip $[-\rho_j,0]\times [0,1]$. It subdivides the domain of $u_j$ into two components: $D_+$, containing the positive puncture, and its complement $D_-$. Consider the Fourier expansion of $u_j$ near this vertical segment. We have \[ u_j(s+it)=\sum_{k\ge k_0} c_{j;k} e^{-k\pi(s+it)}, \] where $k$ are half-integers and $c_{j;k}\in{\mathbb{R}}^{2}$ (or $i{\mathbb{R}}^{2}$). Since the winding number along the vertical segment is equal to the sum of the winding numbers at the negative punctures in the component $D_{-}$ that it bounds, we find that, for $j$ sufficiently large, $c_{j;k}=0$ for all $k<\frac32$, hence $k_0\ge \frac32$. Moreover, $c_{j;k_0}$ converges to a vector in the limiting tangent plane of $v$ at the newborn negative puncture. In the generic case, see Lemma \ref{l:tv}, this limiting vector is non-zero. We assume for definiteness in what follows that it is equal to $(1,0)$. Pick a conformal map taking $D_-$ to the half disk of radius $1$ in the upper half plane, with the vertical segment corresponding to the half circular arc and with the middle boundary puncture mapping to $0$. Then as $j\to\infty$ the locations of the other two punctures both converge to $0$ and, for large $j$, the projection to the first complex coordinate determines the location of the other two punctures. Moreover, the sum of the winding numbers at these three punctures equals $\frac32$ (i.e.~the winding number along the half circle of radius $1$). Consequently, we have, with $z$ a coordinate on the upper half plane, for all $j$ large enough \[ u_j(z)=\sqrt{z(z-\delta_j)(z-\epsilon_j)}\Bigl((1,0)+v_j+{\mathcal O}(z)\Bigr), \] where $\delta_j,\epsilon_j\to 0$ and $v_j\to 0\in{\mathbb{R}}^{2}$ as $j\to\infty$. It follows that disks in a limiting sequence eventually lie close to the model disk \eqref{eq:2-dimmodel} discussed in Section~\ref{ss:spikes}.
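For the reader's convenience we spell out the elementary count behind the winding numbers in this model. Near each of the three simple roots $0,\delta_j,\epsilon_j$ the function under the square root has a simple zero, so each of the three boundary punctures carries local winding number $\frac12$, while along the half circle $|z|=1$ we have
\[
\sqrt{z(z-\delta_j)(z-\epsilon_j)}=z^{3/2}\bigl(1+{\mathcal O}(\delta_j)+{\mathcal O}(\epsilon_j)\bigr),
\]
so the total winding along that arc is indeed $\frac12+\frac12+\frac12=\frac32$.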
There is a completely analogous and simpler analysis of the case when two punctures collide which shows that disks in a limiting sequence are close to the model disk \eqref{eq:1-dimmodel} of Section~\ref{ss:spikes} in the same sense. \end{remark} \section{Transversely cut out solutions and orientations}\label{sec:trans} In this section we show that the moduli spaces in Section \ref{S:mdlisp} are manifolds for generic almost complex structure $J$. To accomplish this, we first express each moduli space as the zero locus of a section of a bundle over a Banach manifold and then show, using an argument from \cite{EES2}, that these sections can be made transverse to the $0$-section by perturbing the almost complex structure. Here cases of disks with unstable domains require extra care: we stabilize their domains using extra marked points on the boundary. We control these marked points using disks with higher order of convergence to Reeb chords. \subsection{Conformal representatives and Banach manifolds}\label{ssec:confrep} In order to define suitable Banach spaces for our study of holomorphic curves we endow the domains of our holomorphic disks with cylindrical ends. For convenience we choose a particular such model for each conformal structure on the punctured disks. (The precise choice is not important since the space of possible choices of cylindrical ends is contractible.) A \emph{standard domain} $\Delta_{0}$ with one puncture is the unit disk in the complex plane with a puncture at $1$ and fixed cylindrical end $[0,\infty)\times[0,1]$ at this puncture. A \emph{standard domain} $\Delta_{1}$ with two punctures is the strip ${\mathbb{R}}\times[0,1]$. A \emph{standard domain} $\Delta_m([a_1,\dots,a_{m-1}])$ with $m+1\geq 2$ boundary punctures is a strip ${\mathbb{R}}\times[0,m]\subset{\mathbb{C}}$ with slits of small fixed width (and fixed shape) around half-infinite lines $(-\infty,a_j]\times \{j\}$, where $0<j<m$ is an integer, removed. See Figure \ref{fig:stdom}. \begin{figure} \centering \includegraphics[width=.6\linewidth, angle=180]{figures/stdom-new} \caption{A standard domain.} \label{fig:stdom} \end{figure} We say that $a_j\in{\mathbb{R}}$ is the $j^{\rm th}$ boundary maximum of $\Delta_m([a_1,\dots,a_{m-1}])$. The space of conformal structures $\mathcal{C}_m$ on the $(m+1)$-punctured disk is then represented as ${\mathbb{R}}^{m-1}/{\mathbb{R}}$ where ${\mathbb{R}}$ acts on vectors of boundary maxima by overall translation, see \cite[Section 2.1.1]{E}. The boundary of the space of conformal structures on an $(m+1)$-punctured disk in its compactification $\partial\mathcal{C}_m\subset\overline{\mathcal{C}}_m$ can then be understood as consisting of the several level disks which arise as some differences $|a_j-a_k|$ between boundary maxima approach $\infty$. We sometimes write $\Delta_m$ for a standard domain, suppressing its conformal structure $[a_1,\dots,a_{m-1}]$ from the notation. The breaking of a standard domain into a standard domain of several levels is compatible with the compactness results Theorems \ref{t:cp_sy} and \ref{t:cp}. In the proof of these results given in \cite{CEL}, after adding a finite number of additional punctures the derivatives of the maps are uniformly bounded and each component in the limit has at least two punctures and can thus be represented as a standard domain.
In particular, the domain right before the limit is the standard domain obtained by gluing these in the natural way and the arcs in the definition of convergence can be represented by vertical segments. Here a \emph{vertical segment} in a standard domain $\Delta_m\subset{\mathbb{C}}$ is a line segment in $\Delta_m$ parallel to the imaginary axis which connects two boundary components of $\Delta_m$. \subsection{Configuration spaces}\label{s:confcob} In this section we construct Banach manifolds which are configuration spaces for holomorphic disks. In order to show that all moduli spaces we use are manifolds we need to stabilize disks with one and two punctures by adding punctures in a systematic way. To this end we will use Sobolev spaces with extra weights. This is the reason for introducing somewhat more complicated spaces below. The constructions in this section parallel corresponding constructions in \cite{EES2} and \cite{ES}. We first define the configuration space for holomorphic disks in $T^{\ast}Q$ and then find local coordinates for this space showing that it is a Banach manifold. We then repeat this construction for disks in the symplectization. Below we are interested in the moduli spaces $\mathcal{M}(a,n_0;\mathbf{n})$ of holomorphic disks for $n_0=0$ or $n_0=1$ and $\mathbf{n}=(n_1,\dots,n_m)$, which we will describe as subsets of suitable configuration spaces $\mathcal{W}=\mathcal{W}(a;\delta_0;\mathbf{n})$. Here $\delta_0>0$ and $n_0$ are related as follows: Consider the standard neighborhood $(-\varepsilon,T+\varepsilon) \times U$ (with $U \subset {\mathbb{C}}^2$) of the Reeb chord $a:[0,T] \to S^*Q$ which we introduced on page \pageref{s:acs}. The projections to ${\mathbb{C}}^2$ of the tangent planes of $\Lambda_K$ at the two end points of $a$ intersect transversally, and we denote by $0<\theta'\pi\le \theta''\pi<\pi$ the two complex angles between them. Now for $n_0=0$ we choose $0<\delta_0<\theta'$ and for $n_0=1$ we choose $\theta''<\delta_0<1$. The space $\mathcal{W}$ fibers over the product space \[ B={\mathbb{R}}^{m-2}\times {\mathbb{R}}\times J(K). \] The first factor ${\mathbb{R}}^{m-2}$ is the space of conformal structures on the disk with $m+1$ boundary punctures. We represent the disk as a standard domain with the first boundary maximum at $0$ and ${\mathbb{R}}^{m-2}$ as the coordinates of the remaining $m-2$ boundary maxima. The second factor ${\mathbb{R}}$ corresponds to the shift in parameterization of the asymptotic trivial strips at the positive puncture. The third factor is itself a product with one factor for each negative puncture: \[ J(K)=J^{(r_1)}(K)\times\dots\times J^{(r_m)}(K). \] Here $r_{j}$ is the largest integer $<n_j$ and $J^{(r_{j})}(K)$ denotes the $r_{j}^{\rm th}$ jet-space of $K$. A point $(q_0,q_1,\dots,q_{r_j}) \in J^{(r_j)}(K)$ corresponds to the first Fourier (Taylor) coefficients of the map at the $j^{\rm th}$ negative puncture. Note that $J(K)$ depends on $\mathbf{n}=(n_1,\dots,n_m)$, but we omit this dependence from the notation. Fix a parameterization of each Reeb chord strip. If $\gamma\in{\mathbb{R}}^{m-2}$ then we write $\Delta[\gamma]$ for the standard domain with first boundary maximum at $0$ and the remaining boundary maxima given by the components of $\gamma$. If $\mathbf{n}=(n_1,\dots,n_m)\in (\frac12{\mathbb{Z}})^{m}$ with $\sum_j n_j\in{\mathbb{Z}}$ then we decorate the boundary components of $\Delta[\gamma]$ according to $\mathbf{n}$ as follows.
Start at the positive puncture and follow the boundary of $\Delta[\gamma]$ in the positive direction. Decorate the first boundary component by $L_K$ and then when we pass the $j^{\rm th}$ negative puncture we change the Lagrangian label (from $L_K$ to $Q$ or vice versa) if $n_j$ is a half-integer and do not change it if $n_j$ is an integer. Fix a smooth family of smooth maps \[ w_{\beta}\colon (\Delta[\beta_1],\partial\Delta[\beta_1])\to (T^{\ast}Q,L), \quad\beta=(\beta_1,\beta_2,\beta_3)\in B, \] with the following properties: \begin{itemize} \item $w_\beta$ respects the boundary decoration, i.e., it takes boundary components decorated by $L_K$ resp. $Q$ to the corresponding Lagrangian submanifold. \item $w_{\beta}$ agrees with the Reeb chord strip of $a$ shifted by $\beta_2$ in a neighborhood of the positive puncture. \item Consider standard coordinates ${\mathbb{C}}\times{\mathbb{C}}^{2}$ near the first component of $\beta_3^{j}\in J^{(r_{j})}(K)$. Then in a strip neighborhood of the $j^{\rm th}$ negative puncture, the ${\mathbb{C}}^{2}$-component of $w_{\beta}$ vanishes and the ${\mathbb{C}}$-component is given by \[ w_{\beta}(z)= \sum_{l=0}^{r_{j}} q_l e^{l\pi z}, \] where the $j^{\rm th}$ component $\beta_3^{j}$ of $\beta_{3}$ is \[ \beta_3^{j}=(q_0,q_1,\dots,q_{r_j})\in J^{(r_{j})}(K). \] \end{itemize} Let $0<\delta<\frac12$ and as before let either $0<\delta_0<\theta'$ or $\theta''<\delta_0<1$, where, as above, $\theta'\pi$ is the smallest complex angle at the Reeb chord $a$ and $\theta''\pi$ the largest. Let ${\mathcal H}_{\delta_0,\delta}(\beta_1)$ denote the Sobolev space of maps \[ w\colon\Delta[\beta_1]\to T^*{\mathbb{R}}^3\cong {\mathbb{R}}^6 \] with two derivatives in $L^{2}$ and finite weighted 2-norm with respect to the weight function $\eta_{\delta_0,\delta}$ with the following properties. \begin{itemize} \item $\eta_{\delta_0,\delta}$ equals $1$ outside a neighborhood of the punctures. \item $\eta_{\delta_0,\delta}(s+it)=e^{\delta_{0}\pi|s|}$ near the positive puncture. \item $\eta_{\delta_0,\delta}(s+it)=e^{(n_j-\delta)\pi|s|}$ near the $j^{\rm th}$ negative puncture. \end{itemize} Consider the bundle $E\to B$ with fiber over $\beta\in B$ given by ${\mathcal H}_{\delta_0,\delta}(\beta_1)$. Define the configuration space $\mathcal{W}=\mathcal{W}(a;\delta_0;\mathbf{n})\subset E$ to consist of all pairs $(\beta,w)$ such that $u=w_\beta+w$ satisfies the following: \begin{itemize} \item $u$ takes the boundary of $\Delta[\beta_1]$ to $L$ respecting the boundary decoration. \item $u$ is holomorphic on the boundary, i.e. the restriction (trace) of $\bar{\partial}_{J}u$ to $\partial\Delta[\beta_1]$ vanishes. \end{itemize} It is not hard to see that $\mathcal{W}$ is a closed subspace of $E$. In fact it is a Banach submanifold of the Banach manifold $E$. We will next explain how to find local coordinates on $\mathcal{W}$. Let $(\beta,w)\in \mathcal{W}$ and write $u=w_\beta+w$. In order to find local coordinates around $u$ we first consider the finite dimensional directions. Pick diffeomorphisms of the source $\Delta[\beta_1]$, \begin{equation}\label{eq:confvardiffeos} \phi_{\gamma},\; \gamma\in{\mathbb{R}};\qquad \psi_{\eta_{1}},\;\eta_{1}\in{\mathbb{R}}^{m-2}, \end{equation} corresponding to the second and first finite dimensional factors. Here $\phi_{\gamma}$ equals the identity outside a neighborhood of the positive end where it equals translation by $\gamma$, and $\psi_{\eta_1}\colon \Delta[\beta_1]\to\Delta[\beta_1+\eta_1]$ moves the boundary maxima according to $\eta_1$, see \cite[Section 6.2.3]{E}.
We next turn to the translations along the knot and the infinite dimensional component of the space. Using the coordinate map of Lemma \ref{l:knotnbhd} we import the flat metric on $T^{\ast}(S^{1}\times D^{2})$ to $T^{\ast}Q$ and extend this metric to a metric $h^1$ on all of $T^\ast Q$ so that $L_K$ is totally geodesic and flat near Reeb chord endpoints, see Definition~\ref{def:admissible} (v), and such that $h^1=ds^{2}+g$ on $T^*Q\setminus D^*Q\cong{\mathbb{R}}_+\times S^{\ast}Q$, where $g$ is a metric on $S^{\ast}Q$. Consider the standard almost complex structure in a neighborhood of the zero section $Q={\mathbb{R}}^{3}$ of $T^{\ast} Q$. Note that this almost complex structure agrees with the standard almost complex structure in the holomorphic neighborhood of $K$. Using the construction in \cite[Proposition 5.3]{EES1}, we extend it to an almost complex structure $J$ over all of $T^{\ast}Q$ with the following additional property near $L_{K}$. If $V$ is a vector field along a geodesic in the metric $h^{1}$ in $L_{K}$ then $V$ satisfies the Jacobi equation if and only if the vector field $JV$ does. To achieve this we might have to alter $h^{1}$ slightly near but not on $L_K$, see \cite[Equation (5.7)]{EES1} for the precise form of $h^{1}$ (corresponding to $\hat g$ in that equation). Note that this construction gives the standard almost complex structure near the knot. Let $h^0$ denote the standard flat metric on $T^{\ast}Q$ and note that it has the Jacobi field property discussed above along $Q$. Let \begin{equation}\label{eq:interpolmetric} h^\sigma, \quad 0\le \sigma\le 1 \end{equation} be the linear interpolation between the metrics $h^0$ and $h^1$. Consider the pullback bundle $u^{\ast}T(T^{\ast}Q)$. Note that the Riemannian metrics $h^\sigma$ on $T^{\ast}Q$ induce connections on this bundle which we denote by $\nabla^\sigma$. Let $\dot{\mathcal H}_{2,\delta,\delta_0,\mathbf{n}}(u)$ denote the linear space of sections $v$ of $u^{\ast}T(T^{\ast}Q)$ with the following properties: \begin{itemize} \item The partial derivatives of $v$ up to second order lie in $L^{2}_{\rm loc} (\Delta[\beta_1],u^{\ast}T(T^{\ast}Q))$. \item The restriction of $\nabla^{\sigma} v + J\circ\nabla^{\sigma} v\circ i$ to the boundary (sometimes called the trace of $\nabla^{\sigma} v + J\circ\nabla^{\sigma} v\circ i$) vanishes, where $\sigma=1$ for a boundary component mapping to $L_K$ and $\sigma=0$ for a component mapping to $Q$. \item With $\|\cdot\|_{\delta,\delta_0,\mathbf{n}}$ denoting the Sobolev $2$-norm weighted by $\eta_{\delta_0,\delta}$, we have $\|v\|_{\delta,\delta_0,\mathbf{n}}<\infty$. \end{itemize} Then $\dot{\mathcal H}_{2,\delta,\delta_0,\mathbf{n}}(u)$ equipped with the norm $\|\cdot\|_{\delta,\delta_0,\mathbf{n}}$ is a Banach space. Also fix $m+\sum_{j=1}^{m} r_j$ smooth vector fields $s^{j}_{k}$, $1\le j\le m$ and $0\le k\le r_j$ along $u$ with properties as above and with the following additional properties: \begin{itemize} \item The vector field $s^{j}_k$ is supported only near the $j^{\rm th}$ negative puncture in a half strip neighborhood which maps into the analytic neighborhood of the knot. \item In standard coordinates along the knot ${\mathbb{C}}\times{\mathbb{C}}^{2}$, the ${\mathbb{C}}^{2}$-component of $s^j_{k}$ equals $0$ and the ${\mathbb{C}}$-component is $s^{j}_{k}= e^{k\pi z}$. \end{itemize} We are now ready to define the local coordinate system. Write $\exp^{\sigma}$ for the exponential map in the Riemannian metric $h^{\sigma}$, $0\le \sigma\le 1$, from \eqref{eq:interpolmetric}.
The local coordinate system around $u$ has the form \[ \Psi_{u}\colon U_1\times U_{2}\times U_{3} \times \mathcal{U}\to \mathcal{W}, \] where $U_1\subset{\mathbb{R}}^{m-2}$, $U_{2}\subset{\mathbb{R}}$, $U_3\subset\Pi_{j=1}^{m}{\mathbb{R}}^{r_j+1}$, and $\mathcal{U}\subset \dot{\mathcal H}_{2,\delta,\delta_0,\mathbf{n}}(u)$ are small neighborhoods of the origin with coordinates $\gamma_j\in U_j$. Let $\sigma\colon \Delta[\beta_1]\to [0,1]$ be a smooth function that equals $0$ resp.~$1$ in a neighborhood of any boundary component that maps to $Q$ resp.~$L_K$ and that equals $0$ on $u^{-1}(D^{\ast} Q)$. For $u$ as above we then consider \[ \Psi_{u}(\gamma_1,\gamma_2,\gamma_3,v)(z)= \exp^{\sigma(z')}_{u(z')}\left(v(z')+\sum_{j=1}^{m}\sum_{k=0}^{r_j}{\gamma_3}^{j}_{k}s^{j}_{k}(z')\right),\quad z'=\phi_{\gamma_2}(\psi_{\gamma_{1}}(z)), \] see \eqref{eq:confvardiffeos} for the diffeomorphisms $\phi_{\gamma_{2}}$ and $\psi_{\gamma_{1}}$. Here $\gamma_{1}$ corresponds to variations of the conformal structure, $\gamma_{2}$ corresponds to shifts near the positive puncture, $\gamma_{3}$ is related to variations of the map near Lagrangian intersection punctures, and $v$ is a vector field along the curve. We use the exponential map to go from linearized variations to actual maps. \begin{lemma} The space $\mathcal{W}$ is a Banach manifold with local coordinates around $u$ given by $\Psi_{u}$. \end{lemma} \begin{proof} This is straightforward, see \cite[Lemma 3.2]{EES2} for an analogous result. \end{proof} Consider the bundle $\mathcal{E}$ over the configuration space $\mathcal{W}$ with fiber over $u$ the space of complex anti-linear maps \[ T\Delta[\beta_1]\to T(T^{\ast}Q). \] The $\bar{\partial}_{J}$-operator gives a section $u\mapsto (du+J\circ du\circ i)$ of this bundle and the moduli space $\mathcal{M}(a,n_0;\mathbf{n})$ is the zero locus of this section, where $n_0=0$ if $0<\delta_0<\theta'$ and $n_0=1$ if $\theta''<\delta_0<1$. The section is Fredholm and the formal dimension of the solution spaces is given by its index. We have the following dimension formula. \begin{lemma}\label{l:weightdimcob} The formal dimension of $\mathcal{M}(a,n_0;\mathbf{n})$ is given by \[ \dim(\mathcal{M}(a,n_0;\mathbf{n}))=|a|-2n_0-|\mathbf{n}|. \] \end{lemma} \begin{proof} The case $n_0=0$ follows from \cite[Theorem A.1 and Remark A.2]{CEL}. The fact that the index jumps when the exponential weight crosses the eigenvalues of the asymptotic operator is well known and immediately gives the other case, see e.g.~\cite[Proposition 6.5]{EES1}. \end{proof} We next consider a completely analogous construction of a configuration space for holomorphic disks in $\mathcal{M}^{\rm sy}(a,n_0;\mathbf{b})$. We discuss mainly the points where this construction differs from that above. Consider first the finite dimensional base. Here the situation is simpler and we take instead \[ B={\mathbb{R}}^{m-2}\times{\mathbb{R}}^{m+1}, \] where the first factor corresponds to conformal structures on the domain exactly as before and where the second factor corresponds to re-parameterizations of the trivial Reeb chord strips exactly as for the positive puncture before. We fix a smooth family of maps $w_{\beta}\colon\Delta[\beta_1]\to{\mathbb{R}}\times S^{\ast}Q$ which agrees with the prescribed Reeb chord strips near the punctures.
We next fix an isometric embedding of $S^{\ast}Q$ into ${\mathbb{R}}^{N}$ and consider the bundle of weighted Sobolev spaces with fiber over $\beta\in B$ the Sobolev space ${\mathcal H}_{n_0,\delta}$ of maps with two derivatives in $L^{2}$ with respect to the norm weighted by a function which equals $e^{\delta|s|}$ in the negative ends and $e^{(\delta+n_0)|s|}$ in the positive end. In analogy with the above we then fix (commuting) re-parameterization diffeomorphisms $\psi_{\beta_1}$ corresponding to changes of the conformal structure and $\phi_{\beta_2}$ corresponding to translation in the half strip neighborhoods. Again this then leads to a Fredholm section and its index gives the formal dimension of the moduli space. \begin{lemma}\label{l:weightdimsymp} The formal dimension of $\mathcal{M}^{\rm sy}(a,n_0;\mathbf{b})$ is given by \[ \dim\mathcal{M}^{\rm sy}(a,n_0;\mathbf{b})=|a|-2n_0-|\mathbf{b}|. \] \end{lemma} \begin{proof} See \cite[Theorem A.1 and Remark A.2]{CEL} and use the relation between weights and index, see e.g.~\cite[Proposition 6.5]{EES1}. \end{proof} \begin{remark}\label{rmk:confvaratpuncture} We consider, for future reference, the conformal variations of the domain in more detail. In the local coordinates around a map $w\colon \Delta_{m}\to T^{\ast} Q$ or $w\colon \Delta_{m}\to {\mathbb{R}}\times S^{\ast} Q$ defined above, the conformal variations correspond to a diffeomorphism that moves the boundary maxima of the domain. We take such a diffeomorphism to be a shift along a constant (and hence holomorphic) vector field $\tau$ in the real direction around the boundary maximum and then cut it off in nearby strip regions. Hence the corresponding linearized variation $L\bar{\partial}_{J}(\gamma)$ at $w$, where $\gamma$ is the first order variation of the complex structure on the domain corresponding to this shift, is \[ L\bar{\partial}_{J}(\gamma)=\partial_{J}w\circ \bar{\partial} \tau. \] We will sometimes use other ways of expressing conformal variations, where the variations are supported near a specific negative puncture rather than near a specific boundary maximum. To this end we first note that we may shift the conformal variation by any element $L\bar{\partial}_{J}(v)$ where $v$ is a vector field along $w$ in the Sobolev space ${\mathcal H}_{\delta}$. In particular we can shift $\gamma$ by $\bar{\partial}\sigma$ where $\sigma$ is a vector field along $\Delta_{m}$ that is constant near the punctures. In this way we get equivalent conformal variations $\gamma_{q}$ of the form \[ L\bar{\partial}_{J}(\gamma_{q})=\partial_{J}w\circ \bar{\partial}\tau_{q}, \] where $\tau_{q}$ is a vector field of the form \[ \tau_{q}(z) = \beta(s+it) e^{\pi(s+it)}, \] where $s+it$ is a standard coordinate in the strip neighborhood of the negative puncture $q$ and $\beta$ is a cut-off function equal to $1$ near the puncture and $0$ outside a strip neighborhood of the puncture. We refer to \cite[Section 2.1.1]{E} for details. \end{remark} \subsection{Transversality}\label{ss:trans} We next use the special form of our almost complex structure near Reeb chords in combination with an argument from \cite[Lemma 4.5]{EES2} to show that we can achieve transversality for the $\bar{\partial}_{J}$-section of $\mathcal{E}$ over $\mathcal{W}$ by perturbing the almost complex structure. In other words we need to show that the linearization $L\bar\partial_{J}$ of the section $\bar\partial_{J}$ is surjective.
\begin{lemma}\label{l:tv} For generic $J$ all solutions in $\mathcal{M}(a,n_0;\mathbf{n})$ and $\mathcal{M}^{\rm sy}(a, n_0;\mathbf{b})$ are transversely cut out. \end{lemma} \begin{proof} To see this we perturb the almost complex structure near the positive puncture. Consider the local projection to ${\mathbb{C}}^{2}$ near the Reeb chord. Here the Lagrangians correspond to two Lagrangian planes. Furthermore the holomorphic disks admit local Taylor expansions near the points that map to their intersection. The lemma now follows from the proof of \cite[Lemma 4.5]{EES2}. We sketch the argument. Let $U$ denote a neighborhood of the Reeb chord strip $C_{a}$ of $a$ for $\mathcal{M}^{\rm sy}$ or of the Reeb chord strip in $T^{\ast}Q\setminus D^{\ast}Q$ for $\mathcal{M}$. If $u$ is a holomorphic disk then $u^{-1}(U\cap C_{a})$ is the pre-image of the intersection point of the two Lagrangian planes under $u$ composed with the projection to ${\mathbb{C}}^{2}$. It follows by monotonicity that the preimage is a finite collection of points $\{q_{0},q_{1},\dots, q_{r}\}$, where $q_{0}$ is the positive puncture. If $q_{i}$ is an interior point, let $E_{i}$ denote a small disk around $q_{i}$; if $q_{i}$ is a boundary point, let $E_{i}$ denote a half-disk neighborhood of $q_{i}$. If the map $u$ has an injective point near the double point then a standard argument perturbing the almost complex structure there establishes the necessary transversality. We therefore assume that this is not the case. Consider the image of a small half disk $E_{0}$ near the positive puncture $q_{0}$, and note that the boundary arcs end at the positive puncture. Since the map is not injective there are neighborhoods (after renumbering) $E_{1},\dots,E_{m}$ whose images under $u$ contain the image $\gamma$ under $u$ of one of the boundary arcs of $E_{0}$. By analytic continuation, the images of these neighborhoods then intersect the Lagrangian sheet of the boundary arc $\gamma'$ that contains $\gamma$. Consequently, the map has multiplicity $m+1$ along $\gamma$ and multiplicity $m$ along $\gamma'-\gamma$. Consider a vector field in the cokernel of the linearized operator $L\bar{\partial}$. Perturbing the almost complex structure near $\gamma'-\gamma$ we see that the contributions from the anti-holomorphic cokernel vector field on $E_{1},\dots,E_{m}$ must vanish. By unique continuation, the contributions from $E_{1},\dots, E_{m}$ must then also cancel along $\gamma$ and it follows that there is nothing that cancels the perturbation in $E_{0}$ (just as if the map were injective in $E_{0}$). The desired transversality follows. \end{proof} \subsection{Stabilization of domains}\label{s:stab} For disks with at least three punctures the transversality results in Section \ref{ss:trans} directly give the solution spaces the structure of $C^{1}$-smooth manifolds. For the case of unstable domains this is not as direct since the solutions admit re-parameterizations that do not act with any uniformity on the associated configuration spaces. This is a well-known phenomenon and we resolve the problem by a gauge fixing procedure, adding marked points near the positive puncture. This construction was studied in detail in \cite[Appendix A.2]{ESrev} and in \cite[Sections 5.2 and 6]{ES} and we will refer to these articles for details. As we shall see below we need only consider moduli spaces of dimension $\le 2$.
Recall the neighborhood $(-\varepsilon,T+\varepsilon)\times U$, $U\subset{\mathbb{C}}^{2}$, of the Reeb chord $a$, see the discussion before Definition~\ref{def:admissible} on page~\pageref{def:admissible}, and the corresponding Fourier expansion of the ${\mathbb{C}}^{2}$-component of any holomorphic disk near $a$, see \eqref{eq:chordasympt} on page~\pageref{eq:chordasympt}. Consider a space $\mathcal{M}(a;\mathbf{n})$ of formal dimension $\le 1$. Then by Lemmas \ref{l:weightdimcob} and \ref{l:tv} the corresponding space $\mathcal{M}(a,1;\mathbf{n})$ is empty. Consequently, for any solution $u\in \mathcal{M}(a;\mathbf{n})$, the first Fourier coefficient of the ${\mathbb{C}}^{2}$-component of the map near $a$ is non-vanishing. Let $S_{0;\epsilon}$ and $S_{1;\epsilon}$ be circles in $\Lambda_{K}$ of radius $\epsilon>0$ around the Reeb chord endpoints of $a$. Non-vanishing of the first Fourier coefficient in combination with compactness then implies that for each solution $u$ there are two unique points in the boundary of the domain closest to the positive puncture that map to $S_{j;\epsilon}$, $j=0,1$, see Figure \ref{fig:marked}. We add punctures at these points. More precisely, we consider standard domains with two more punctures and require that the maps are asymptotic to points in $S_{j;\epsilon}$ at the extra punctures. In the above notation these would be ``Lagrangian intersection punctures'' in $S_{j;\epsilon}$ of local winding number $1$ in the direction normal to $S_{j;\epsilon}$. The transversality result of Lemma~\ref{l:tv} holds as before also for the solution spaces with extra punctures, so that they are $C^{1}$-manifolds. The asymptotic properties above then imply that the solutions with extra punctures capture all holomorphic disks. Consider next a space $\mathcal{M}^{\rm sy}(a;\mathbf{b})$ of formal dimension $\le 2$. Since any holomorphic curve in the symplectization can be translated we find that the corresponding space $\mathcal{M}^{\rm sy}(a,1;\mathbf{b})$ is again empty and we get a manifold structure by adding two marked points near the Reeb chord endpoints exactly as above. It remains then to consider the case of spaces $\mathcal{M}(a;\mathbf{n})$ of formal dimension $2$. Here the corresponding space $\mathcal{M}(a,1;\mathbf{n})$ has dimension $0$. There are then a finite number of solutions with this decay condition. Considering the Fourier expansion we can fix unique marked points for all solutions in a neighborhood $\mathcal{V}$ (in the configuration space) of these isolated solutions as above. For solutions outside $\mathcal{V}$ the first Fourier coefficients do not vanish and we can fix marked points as above. Note, however, that these will generally not be the same marked points. In this way we get two types of manifold charts: one for solutions inside $\mathcal{V}$ and one for solutions in a neighborhood of any map $u'$ with nonvanishing first Fourier coefficient which lies outside a smaller neighborhood $\mathcal{V}'\subset\mathcal{V}$ of the isolated solutions. To get a manifold structure for the moduli space we then need to study the transition maps, and to that end we use four marked points, see Figure \ref{fig:marked} and \cite[Section 5.2]{ES} for details. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{figures/marked-new} \caption{Top left: marked points for a disk near the disk with degenerate asymptotics. Top right: marked points for a disk outside a neighborhood of the disk with degenerate asymptotics. Lower: the four marked points in the intermediate region used to define the coordinate change.
The (black) lines represent the projections of the branches of $L_K \cup Q$ to ${\mathbb{C}}^2$, and the (blue) circles represent 3-spheres $S^3_\varepsilon$, which cut these local branches along the circles $S_{j;\epsilon}$ appearing in the text.} \label{fig:marked} \end{figure} A priori, the smooth structures on the moduli spaces above depend on the choice of gauge condition. However, using the fact that the $C^{0}$-norm of a holomorphic map controls all other norms, it is not hard to see that different gauge conditions lead to the same smooth structure. We also need to show that the compactness results, where sequences of curves converge to several level curves, are compatible with additional marked points. This is similar to the above. The compactness result we already have implies uniform convergence on compact sets and in particular it is possible to add marked points on the curves near the limit that correspond to the extra marked points on the unstable curves in the limit. As before we show that these extra marked points do not affect the moduli spaces. See \cite[Section A.3]{ES} for details. In conclusion, by adding marked points also on curves near broken limits we obtain versions of the compactness results Theorems \ref{t:cp_sy} and \ref{t:cp} where all domains involved are stable with marked points compatible with the several level breaking. \subsection{Index bundles and orientations}\label{ss:indexbndles} Viewing the $\bar{\partial}_{J}$-operator as a Fredholm section of a Banach bundle, its linearization defines an index bundle over the configuration space and an orientation of this index bundle gives an orientation of transverse solution spaces. Following Fukaya, Oh, Ohta, Ono~\cite[Section 8.1]{FO3I} one defines a coherent system of such orientations as follows. Fix spin structures of the two Lagrangians $L_K$ and $Q$, which we here can think of as trivializations of the respective tangent bundles. Consider a closed disk with boundary in one of the two Lagrangians and the linearized $\bar{\partial}_{J}$-operator acting on vector fields along this disk that are tangent to the Lagrangian along the boundary. Using the trivialization of the boundary condition, such an operator can be deformed to an operator on the disk with values in ${\mathbb{C}}^3$ and constant ${\mathbb{R}}^{3}$ boundary condition, with a copy of ${\mathbb{C}} P^{1}$ attached at the center with a complex linear operator. The first operator has trivial cokernel and a kernel that consists only of constant vector fields, and the orientation of ${\mathbb{C}}$ induces an orientation on the determinant bundle of the operator over ${\mathbb{C}} P^{1}$. This gives a canonical orientation over closed disks with trivialized boundary condition (that depends only on the homotopy class of the trivialization). Here we need to orient moduli spaces of disks with punctures. This was done in the setting of Legendrian contact homology in \cite{EESori}; we will give a sketch and refer to that reference for details. We reduce to the case of closed disks by picking so-called capping operators at all Reeb chords and along the Lagrangian intersection $K$ with an orientation of the corresponding determinant bundles. Here it is important that the capping operators are chosen in a consistent way. At Reeb chords there is a positive and a negative capping operator and we require that they glue to the standard orientation on the closed disk.
We also pick positive and negative capping operators at the Lagrangian intersection punctures satisfying the same conditions. Now, given a holomorphic disk in the symplectization or in $T^{\ast}Q$ we glue the capping operators to it and produce a closed disk. The standard orientation of the closed disk and the chosen orientations on the capping operators then give an orientation of the determinant line of the linearized operator over the disk, which, together with an orientation of the finite dimensional space of conformal structures on the punctured disk, in turn gives an orientation of the moduli space if it is transversely cut out. The gluing condition for the capping operators ensures that the resulting orientations of the moduli spaces are compatible with splittings into multi-level curves. In what follows we assume that spin structures on the Lagrangians and capping operators have been fixed and thus all our moduli spaces are oriented manifolds. \subsection{Signs and the chain map equation}\label{ss:signsandchainmap} Recall the chain map \[ \Phi\colon (C_*(\mathcal{R}),\partial_\Lambda)\to (C_*(\Sigma),\partial+\delta_{Q}+\delta_{N}) \] from Theorem \ref{t:Phichainmap}. Here we consider the signs of the operations $\delta_Q$ and $\delta_N$ in this formula. These operations are defined on chains of broken strings by taking the oriented preimage of $K$ under the evaluation map. In the map $\Phi$, the oriented chain is given by a moduli space of holomorphic disks. In order to deal with the evaluation maps on such spaces we present them as bundles over $Q$ as follows. Consider first the operation $\delta_Q$. Fix a point $q\in Q$ and an additional puncture on the boundary that we require to map to $q$. Concretely, we work on strips with slits and add a small positive exponential weight at the puncture mapping to $q$. Then we consider the bundle of such maps over $Q$ when we let $q$ vary in $Q$. The orientation of this space is induced from capping operators as described above. When we consider the corresponding boundary condition on the closed disk we find a vanishing condition for linearized variations at the marked point corresponding to the positive exponential weight. Thus if $\sigma$ denotes the orientation of the index bundle induced as above, then the orientation on the bundle with marked point mapping to $q$ is given by the orientation of the formal difference $\sigma \ominus TQ$. (The formal difference should be interpreted as in $K$-theory: the difference $\xi\ominus \eta$ of two bundles $\xi$ and $\eta$ is represented by a bundle $\zeta$ such that the direct sum $\zeta\oplus\eta$ is equivalent to $\xi$.) We point out that here and throughout this section orientations depend on ordering conventions, whether the point condition goes before or after the index bundle, etc. In calculations below we put point conditions after the index bundle, and put the fiber of bundles over $Q$ before the base. The orientation of the bundle corresponding to a point constraint $q$ varying over $Q$ is then given by $\sigma\ominus TQ\oplus TQ$. Finally, the orientation of the chain given by the preimage of $K$ under the evaluation map is then \begin{equation}\label{eq:cutori0} \sigma\ominus TQ\oplus TQ\oplus TK\ominus TQ=\sigma\oplus TK\ominus TQ.
\end{equation} In order to show that the chain map equation holds we must then show that there are choices of capping operators and orientations on $Q$ and $N$ so that this orientation agrees with the boundary orientation of the disk viewed as a point in the boundary of the moduli space of disks with two colliding Lagrangian punctures. Consider the capping operators $c_{QN}$ and $c_{NQ}$ for such a puncture going from $Q$ to $N$ and vice versa. These capping operators are standard $\bar{\partial}$-operators on a once punctured disk $D_{1}$ acting on ${\mathbb{C}}^{3}$-valued functions in a weighted Sobolev space that satisfy a Lagrangian boundary condition. We first describe the boundary conditions. For $c_{QN}$ the Lagrangian boundary condition $\lambda\colon \partial D_{1}\to \mathrm{Lag}_{3}$, where $\mathrm{Lag}_{3}$ denotes the Lagrangian Grassmannian of Lagrangian subspaces of ${\mathbb{C}}^{3}$, starts at the tangent space of $Q$ and ends at the tangent space of $N$. For $c_{NQ}$ the boundary condition instead starts at the tangent space of $N$ and ends at that of $Q$. More specifically, the tangent spaces of $Q$ and $N$ intersect along $TK$ and are perpendicular in the normal directions of $K$. We think of the normal directions to $K$ as ${\mathbb{C}}^{2}$ and the tangent spaces of $Q$ and $N$ as $i{\mathbb{R}}^{2}$ and ${\mathbb{R}}^{2}$, respectively. We take the boundary conditions of both capping operators $c_{QN}$ and $c_{NQ}$ to fix $TK$, to rotate by $\frac{\pi}{2}$ in one of the complex lines normal to the knot, and to rotate by $\frac{3\pi}{2}$ in the other. We next describe the weights at the puncture in $D_{1}$. We use a half strip neighborhood of the puncture and a Sobolev space with small positive exponential weight $\delta$, $0<\delta<\frac{\pi}{2}$, in this strip neighborhood. The index of the $\bar{\partial}$-operator with this boundary condition and weight equals $3$, see e.g.~\cite[Proposition 6.5]{EES1}. Recall from Section \ref{ss:indexbndles} that an orientation of the moduli space is induced from the capping operators together with an orientation on the space of conformal structures on the punctured disk. Here we think of variations of the conformal structure as vector fields moving the punctures along the boundary of the disk. We have one such vector field for each puncture, which gives an additional one dimensional oriented vector space associated to each puncture, see \cite[Section 3.4.1]{EESori} for details. For simplicity we write $c_{QN}$ and $c_{NQ}$ also for the sum of the index bundles of the capping operators described above and the one dimensional conformal variations associated to the respective punctures. Thus, in the calculations below $c_{QN}$ and $c_{NQ}$ have index $3+1=4$. We choose the orientations on $Q$ and $N$ so that the linear transformations between tangent spaces $TQ$ and $TN$ induced by the Lagrangian boundary conditions of $c_{QN}$ and $c_{NQ}$ take the orientation on $Q$ to that on $N$ and vice versa. The boundary orientation of the two-level disk (second level constant) is the fiber product over $K$ of the orientations of its levels. We view the top level disk as having a small positive exponential weight at the puncture mapping to $K$ and a cut-off local solution in the direction of $K$. In analogy with the above, its orientation is thus given by $\sigma\ominus TQ\oplus TK$.
The orientation of the constant disk (which has small negative weights at its positive puncture) is then $\sigma'\oplus c_{QN}\oplus c_{NQ}$, where $\sigma'$ is the standard orientation on the closed up boundary condition of the constant three punctured disk. The boundary orientation is thus \begin{equation}\label{eq:cutori1} (\sigma\ominus TQ\oplus TK)\oplus (\sigma'\oplus c_{QN}\oplus c_{NQ})\ominus TK. \end{equation} Now choose the orientation on $c_{QN}$ and $c_{NQ}$ so that the orientation of the index one problem on the constant disk with kernel in the direction of the knot induced by $\sigma'\oplus c_{QN}\oplus c_{NQ}$ is opposite to the orientation of $TK$. Then the orientation in \eqref{eq:cutori1} is $\sigma\oplus TK\ominus TQ$ (there is an orientation change when one permutes the odd-dimensional summands $TK$ and $TQ$), in agreement with \eqref{eq:cutori0}. For the sign of the operation $\delta_N$ we argue exactly as above replacing $Q$ with $N$ and we must compare the orientations $\sigma\oplus TK\ominus TN$ and \[ (\sigma\ominus TN\oplus TK)\oplus (\sigma'\oplus c_{NQ}\oplus c_{QN})\ominus TK. \] Compared to the above the main difference is that the summands $c_{NQ}$ and $c_{QN}$ have been permuted. However, as explained above, the index of each of these operators is $4$, so the orientation remains as before and the positive sign for $\delta_N$ is correct for the chain map. \section{Compactification of moduli spaces and gluing}\label{sec:gluing} In this section we show that the moduli spaces $\mathcal{M}(a;\mathbf{n})$ and $\mathcal{M}^{\rm sy}(a;\mathbf{b})/{\mathbb{R}}$ admit compactifications as manifolds with boundary with corners. Furthermore, we describe the boundary explicitly in terms of broken holomorphic disks. The smoothness of individual strata of the compactified moduli spaces is governed by the Transversality Lemma \ref{l:tv}. The Compactness Theorems \ref{t:cp} and \ref{t:cp_sy} describe disk configurations in the boundary of the compactification. The main purpose of this section is thus to show how to glue these configurations on the boundary to curves in the smooth part of the moduli space and thereby obtain boundary charts in the sense of manifolds with boundary with corners. Such gluing theorems were proved before in closely related situations and we will discuss details only when they differ from the standard cases. We first state the structural theorems in Section \ref{s:mfdbdrycrn} and then turn to the gluing results and their proofs in the following subsections. We work throughout this section with an almost complex structure $J$ such that Lemma \ref{l:tv} holds. Furthermore we assume that the domains of all holomorphic disks are stable, which can be achieved by adding marked points as explained in Section \ref{s:stab}. \subsection{Structure of the moduli spaces}\label{s:mfdbdrycrn} In this subsection we state the results on moduli spaces of holomorphic disks. As before there are two cases to consider, disks in the symplectization and disks in the cotangent bundle. The structural results all have the same flavor. Basically we show that a specified moduli space is a manifold with boundary with corners of dimension $\le 2$, and we describe the boundary strata as well as certain submanifolds important for our study. The proofs of the results are the main goal of the rest of the section.
Recall from Sections~\ref{s:confcob} and~\ref{ss:trans} (with $n_0=0$) that for generic $J$ the moduli spaces $\mathcal{M}(a;\mathbf{n})$ and $\mathcal{M}^{\rm sy}(a;\mathbf{b})$ are manifolds of dimensions $$ \dim\mathcal{M}(a;\mathbf{n})=|a|-|\mathbf{n}|,\qquad \dim\mathcal{M}^{\rm sy}(a;\mathbf{b})=|a|-|\mathbf{b}|. $$ Here $|a|={\rm ind}(a)$ is the degree of the Reeb chord $a$ (which takes only the values $0,1,2$), and to the vector of local winding numbers $\mathbf{n}=(n_1,\dots,n_{m})$ (where the $n_j$ are positive half-integers or integers) we have associated the nonnegative integer \[ |\mathbf{n}|=\sum_{j=1}^{m}2(n_j-\tfrac12)\ge 0. \] If either $\mathbf{n}$ or $\mathbf{b}$ is empty, the corresponding contribution to the index formula is $0$. If $a$ is a Reeb chord of $\Lambda_{K}\subset S^{\ast}Q$, then $0\le |a|\le 2$. Since $J_1$ is ${\mathbb{R}}$-invariant, $0$-dimensional moduli spaces in the symplectization consist only of Reeb chord strips. Thus the only non-empty moduli spaces $\mathcal{M}^{\rm sy}(a;\mathbf{b})$ of dimension $d^{\rm sy}$ are the following (write $\mathbf{b}=b_1\dots b_m$), see Figure \ref{fig:sympdisk}: \begin{itemize} \item {$[2,0]^{\rm sy}$}: If $|a|=2$ and $|\mathbf{b}|=0$ (i.e.~$|b_j|=0$ for all $j$) then $d^{\rm sy}=2$. \item {$[2,1]^{\rm sy}$}: If $|a|=2$ and $|\mathbf{b}|=1$ (i.e.~$|b_j|=0$ for all $j\ne s$ and $|b_s|=1$) then $d^{\rm sy}=1$. \item {$[1,0]^{\rm sy}$}: If $|a|=1$ and $|\mathbf{b}|=0$ then $d^{\rm sy}=1$. \end{itemize} \begin{figure} \labellist \small\hair 2pt \pinlabel $D$ at 64 91 \pinlabel $u$ at 184 107 \pinlabel ${\color{red} a}$ at 298 177 \pinlabel ${\color{red} b_1}$ at 256 4 \pinlabel ${\color{red} b_2}$ at 300 4 \pinlabel ${\color{red} b_3}$ at 342 4 \pinlabel $+$ at 64 162 \pinlabel $-$ at 0 61 \pinlabel $-$ at 63 22 \pinlabel $-$ at 127 61 \endlabellist \centering \includegraphics[width=.7\linewidth]{figures/sympdisk-new} \caption{Disks $u :\thinspace (D,\partial D) \to ({\mathbb{R}}\times S^*Q,{\mathbb{R}}\times\Lambda_K)$ in the symplectization.} \label{fig:sympdisk} \end{figure} Similarly, the only non-empty moduli spaces $\mathcal{M}(a;\mathbf{n})$ of dimension $d$ are the following (write $\mathbf{n}=n_1\cdots n_m$), see Figures \ref{fig:pihalf}, \ref{fig:pi}, \ref{fig:pipihalf}, \ref{fig:pipi}: \begin{itemize} \item {$[2,0]$}: If $|a|=2$ and all $n_j=\frac12$, then $|\mathbf{n}|=0$ and $d=2$. \item {$[2,1]$}: If $|a|=2$ and $n_j=\frac12$ for all $j\ne s$ and $n_s=1$, then $|\mathbf{n}|=1$ and $d=1$. \item {$[2,\frac32]$}: If $|a|=2$ and $n_j=\frac12$ for all $j\ne s$ and $n_s=\frac32$, then $|\mathbf{n}|=2$ and $d=0$. \item {$[2,2]$}: If $|a|=2$ and $n_j=\frac12$ for all $j\ne s,t$, and $n_s=n_t=1$, then $|\mathbf{n}|=2$ and $d=0$. \item {$[1,0]$}: If $|a|=1$ and all $n_j=\frac12$, then $|\mathbf{n}|=0$ and $d=1$. \item {$[1,1]$}: If $|a|=1$ and $n_j=\frac12$ for all $j\ne s$ and $n_s=1$, then $|\mathbf{n}|=1$ and $d=0$. \item {$[0,0]$}: If $|a|=0$ and $n_j=\frac12$ for all $j$, then $|\mathbf{n}|=0$ and $d=0$.
\end{itemize} \begin{figure} \labellist \small\hair 2pt \pinlabel ${\color{red} a}$ at 64 134 \pinlabel ${\color{red} Q}$ at 174 17 \pinlabel ${\color{blue} L_K}$ at 28 65 \pinlabel ${\color{blue} L_K}$ at 100 65 \pinlabel $\textstyle{\frac{1}{2}}$ at 42 10 \pinlabel $\textstyle{\frac{1}{2}}$ at 96 10 \endlabellist \centering \includegraphics[width=.6\linewidth]{figures/pihalf-new} \begin{align*} [2,0]: \hspace{4ex} & |a|=2, \hspace{2ex} \dim=2 \\ [1,0]: \hspace{4ex} & |a|=1, \hspace{2ex} \dim=1 \\ [0,0]: \hspace{4ex} & |a|=0, \hspace{2ex} \dim=0 \end{align*} \caption{Curves with $|\mathbf{n}|=0$.} \label{fig:pihalf} \end{figure} \begin{figure} \labellist \small\hair 2pt \pinlabel ${\color{red} a}$ at 75 134 \pinlabel ${\color{red} Q}$ at 174 17 \pinlabel ${\color{blue} L_K}$ at 28 65 \pinlabel ${\color{blue} L_K}$ at 82 65 \pinlabel ${\color{blue} L_K}$ at 116 65 \pinlabel $\textstyle{\frac{1}{2}}$ at 42 10 \pinlabel $1$ at 79 10 \pinlabel $\textstyle{\frac{1}{2}}$ at 114 10 \endlabellist \centering \includegraphics[width=.6\linewidth]{figures/pi-new} \begin{align*} [2,1]: \hspace{4ex} & |a|=2, \hspace{2ex} \dim=1 \\ [1,1]: \hspace{4ex} & |a|=1, \hspace{2ex} \dim=0 \end{align*} \caption{Curves with $|\mathbf{n}|=1$.} \label{fig:pi} \end{figure} \begin{figure} \labellist \small\hair 2pt \pinlabel ${\color{red} a}$ at 86 142 \pinlabel ${\color{red} Q}$ at 174 28 \pinlabel ${\color{blue} L_K}$ at 28 73 \pinlabel ${\color{blue} L_K}$ at 82 73 \pinlabel ${\color{blue} L_K}$ at 138 73 \pinlabel $\textstyle{\frac{1}{2}}$ at 42 18 \pinlabel $\textstyle{\frac{3}{2}}$ at 68 18 \endlabellist \centering \includegraphics[width=.6\linewidth]{figures/pipihalf-new} \begin{align*} [2,\textstyle{\frac{3}{2}}]: \hspace{4ex} & |a|=2, \hspace{2ex} \dim=0 \end{align*} \caption{Curves with $|\mathbf{n}|=2$.} \label{fig:pipihalf} \end{figure} \begin{figure} \labellist \small\hair 2pt \pinlabel ${\color{red} a}$ at 74 134 \pinlabel ${\color{red} Q}$ at 174 17 \pinlabel ${\color{blue} L_K}$ at 20 65 \pinlabel ${\color{blue} L_K}$ at 128 65 \pinlabel $\textstyle{\frac{1}{2}}$ at 34 10 \pinlabel $1$ at 65 10 \pinlabel $1$ at 92 10 \pinlabel $\textstyle{\frac{1}{2}}$ at 124 10 \endlabellist \centering \includegraphics[width=.6\linewidth]{figures/pipi-new} \begin{align*} [2,2]: \hspace{4ex} & |a|=2, \hspace{2ex} \dim=0 \end{align*} \caption{Curves with $|\mathbf{n}|=2$.} \label{fig:pipi} \end{figure} It follows from Theorem \ref{t:cp} and Lemma \ref{l:tv} (see also Section \ref{s:stab}) that the $0$-dimensional moduli spaces listed above are transversely cut out compact $0$-manifolds. The corresponding structure theorems for moduli spaces of dimension one and two are the following. Recall that ${\mathbb{R}}$ acts on holomorphic disks in the symplectization ${\mathbb{R}}\times S^{\ast}Q$ by translation. Dividing out this action, we obtain moduli spaces of dimension zero and one in the symplectization which have the following structure. \begin{thm}\label{t:sy} Moduli spaces of holomorphic disks in the symplectization satisfy the following. \begin{itemize} \item[{\rm (i)}] If $\mathcal{M}^{\rm sy}(a;\mathbf{b})$ is a moduli space of type $[2,1]^{\rm sy}$ or of type $[1,0]^{\rm sy}$, then $\mathcal{M}^{\rm sy}(a;\mathbf{b})/{\mathbb{R}}$ is a compact $0$-manifold. 
\item[{\rm (ii)}] If $\mathcal{M}^{\rm sy}(a;\mathbf{b})$ is a moduli space of type $[2,0]^{\rm sy}$, then $\mathcal{M}^{\rm sy}(a;\mathbf{b})/{\mathbb{R}}$ admits a natural compactification $\overline{{\mathcal{M}}^{\rm sy}}(a;\mathbf{b})$ which is a compact $1$-manifold with boundary. Boundary points of $\overline{{\mathcal{M}}^{\rm sy}}(a;\mathbf{b})$ correspond to two-level disks $\hat v$ where the level one disk $v^{1}$ is of type $[2,1]^{\rm sy}$, and where exactly one level two disk $v^{2,s}$ is of type $[1,0]^{\rm sy}$ and all other level two disks $v^{2,j}$, $j\ne s$, are trivial Reeb chord strips. \end{itemize} \end{thm} In the cotangent bundle we have moduli spaces of dimension zero, one, or two. We start with the $0$-dimensional case. \begin{thm}\label{t:0dimcob} Moduli spaces $\mathcal{M}(a;\mathbf{n})$ of holomorphic disks of types $[2,\frac32]$, $[2,2]$, $[1,1]$, or $[0,0]$ are compact $0$-dimensional manifolds. \end{thm} In the $1$-dimensional case we consider two cases separately. We first consider the case when $|\mathbf{n}|=0$. \begin{thm}\label{t:[1,0]} Moduli spaces $\mathcal{M}(a;\mathbf{n})$ of disks of type $[1,0]$ admit natural compactifications $\overline{\mathcal{M}}(a;\mathbf{n})$ which are $1$-manifolds with boundary. Boundary points of $\overline{\mathcal{M}}(a;\mathbf{n})$ correspond to the following. \begin{itemize} \item[{\rm (a)}] Two-level disks $\dot v$ where the level one disk $v^{1}$ has type $[1,1]$ and where the second level is a three punctured constant disk $v^{2}$ attached at the Lagrangian intersection puncture of $v^{1}$ where the asymptotic winding number equals $1$. \item[{\rm (b)}] Two-level disks $\dot v$ where the top level disk $v^{1}$ is a symplectization disk of type $[1,0]^{\rm sy}$ and where all the second level disks $v^{2,j}$, $1\le j\le k$, are of type $[0,0]$. \item[{\rm (c)}] If there are no entries in $\mathbf{n}$, then all points of the reduced moduli space $\mathcal{M}^{\rm sy}(a;\varnothing)/{\mathbb{R}}$ containing disks of type $[1,0]^{\rm sy}$ appear as boundary points. \end{itemize} \end{thm} In the second $1$-dimensional case $|\mathbf{n}|=1$ and we have the following. \begin{thm}\label{t:[2,1]} Moduli spaces $\mathcal{M}(a;\mathbf{n})$ of disks of type $[2,1]$ admit natural compactifications $\overline{\mathcal{M}}(a;\mathbf{n})$ which are $1$-manifolds with boundary. Boundary points of $\overline{\mathcal{M}}(a;\mathbf{n})$ correspond to the following. \begin{itemize} \item[{\rm (a)}] Two-level disks $\dot v$ where the level one disk $v^{1}$ has type $[2,2]$ and where the second level is a three punctured constant disk $v^{2}$ attached at the newborn Lagrangian intersection puncture of $v^{1}$ with winding number $1$. \item[{\rm (b)}] Two-level disks $\dot v$ where the top level disk $v^{1}$ is a symplectization disk of type $[2,1]^{\rm sy}$ and where the second level consists of disks $v^{2,j}$, $1\le j\le k$, such that for some $s$, $v^{2,s}$ has type $[1,1]$ and $v^{2,j}$ has type $[0,0]$ for $j\ne s$. \item[{\rm (c)}] Two-level disks $\dot v$ where the level one disk is of type $[2,\frac32]$ and where the second level disk is a constant three punctured disk attached at the Lagrangian intersection puncture with winding number $\frac32$. (Here the constant disk has winding number $\frac32$ at its positive puncture, and $1$ and $\frac12$ at its negative punctures.)
\end{itemize} \end{thm} \begin{remark}\label{r:1-dimcornercoord} In order to parameterize a neighborhood of the boundary points in Theorem~\ref{t:[1,0]}(a) and Theorem \ref{t:[2,1]}(a) one can use the local model~\eqref{eq:1-dimmodel} from Section~\ref{ss:spikes}. Here the location $\varepsilon>0$ of the puncture on the real axis can be used as local coordinate for the moduli space. Furthermore, the maps in the moduli space differ from the map in~\eqref{eq:1-dimmodel} by terms of order ${\mathcal O}(z^{2})$, so they have a spike that vanishes as $\varepsilon\to 0$ as shown at the top of Figure~\ref{fig:spike-families}. Similarly, in order to parameterize a neighborhood of the boundary points in Theorem \ref{t:[2,1]} (c) one can use the local model~\eqref{eq:2-dimmodel} from Section~\ref{ss:spikes} with $\delta=0$. Here the location $\varepsilon>0$ of the puncture on the real axis can be used as local coordinate for the moduli space. Furthermore, the maps in the moduli space differ from the map in~\eqref{eq:2-dimmodel} by terms of order ${\mathcal O}(z^{5/2})$, so they have a spike that vanishes as $\varepsilon\to 0$ as shown at the bottom of Figure~\ref{fig:spike-families} (with $\delta=0$). See Remark~\ref{r:1-dimfinal} for details. \end{remark} In the 2-dimensional case we have the following description of the structure of the moduli space which is naturally more involved. \begin{thm}\label{t:[2,0]} Moduli spaces $\mathcal{M}(a;\mathbf{n})$ of disks of type $[2,0]$ admit natural compactifications $\overline{\mathcal{M}}(a;\mathbf{n})$ which are $2$-manifolds with boundary with corners. The top-dimensional strata of the boundary have codimension $1$ in $\overline{\mathcal{M}}(a;\mathbf{n})$ and correspond to the following. \begin{itemize} \item[{\rm (a1)}] Two-level disks $\dot v$ where the top level disk $v^{1}$ has type $[2,1]$ and where the second level is a three punctured constant disk $v^{2}$ attached at the Lagrangian intersection puncture of $v^{1}$ where the asymptotic winding number equals $1$. \item[{\rm (b1)}] Two-level disks $\dot v$ where the top level disk $v^{1}$ is a symplectization disk of type $[2,0]^{\rm sy}$ and where all the second level disks $v^{2,j}$, $1\le j\le k$ are of type $[0,0]$. \item[{\rm (c1)}] Two-level disks $\dot v$ where the top level disk $v^{1}$ is a symplectization disk of type $[2,1]^{\rm sy}$ and where the second level consists of disks $v^{2,j}$, $1\le j\le k$ such that for some $s$, $v^{2,s}$ has type $[1,0]$ and $v^{2,j}$ has type $[0,0]$ for $j\ne s$. \end{itemize} The corner points on the boundary (i.e., the codimension two strata) of $\overline{\mathcal{M}}(a;\mathbf{n})$ correspond to the following. \begin{itemize} \item[{\rm (a2)}] Two-level disks $\dot v$ where the top level disk $v^{1}$ has type $[2,2]$ and where the second level consists of two three punctured constant disks $v^{2,1}$ and $v^{2,2}$ attached at the Lagrangian intersection punctures of $v^{1}$ where the winding numbers are $1$. \item[{\rm (b2)}] Three-level disks $\dot v$ where the top level disk $v^{1}$ is a symplectization disk of type $[2,1]^{\rm sy}$, where the second level disk $v^{2,s}$ is of type $[1,1]$ and all other second level disks $v^{2,j}$, $j\ne s$ are of type $[0,0]$, and where the third level consists of a constant three punctured disk $v^{3}$ attached at the Lagrangian intersection puncture of $v^{2,s}$ with winding number $1$. 
\item[{\rm (c2)}] Three-level disks $\dot v$ where the top level disk $v^{1}$ is a symplectization disk of type $[2,1]^{\rm sy}$, where the second level disk $v^{2,s}$ is of type $[1,0]^{\rm sy}$ and all other second level disks $v^{2,j}$ are Reeb chord strips, and where the third level consists of disks $v^{3,j}$ all of type $[0,0]$. \item[{\rm (d2)}] Two-level disks $\dot v$ where the top level disk $v^{1}$ has type $[2,\tfrac32]$ and where the second level consists of a 4-punctured constant disk $v^{2}$ attached at the Lagrangian intersection puncture of $v^{1}$ where the asymptotic winding number is $\tfrac32$. \end{itemize} \end{thm} \begin{remark}\label{r:2-dimcornercoord} In order to parameterize a neighborhood of the corner points in Theorem \ref{t:[2,0]} (d2) one can use the local model~\eqref{eq:2-dimmodel} from Section~\ref{ss:spikes}. Here the locations $(\varepsilon,\delta)$ of the punctures on the real axis can be used as local coordinates for the moduli space. Furthermore, the maps in the moduli space differ from the map in~\eqref{eq:2-dimmodel} by terms of order ${\mathcal O}(z^{5/2})$, so they have two spikes that vanish as $\varepsilon,\delta\to 0$ as shown at the bottom of Figure~\ref{fig:spike-families}. See Remark~\ref{r:2-dimfinal} for details. \end{remark} Some of the moduli spaces above admit natural maps into others by forgetting some Lagrangian intersection punctures. We next describe such maps. It is convenient to write \[ \mathbf{\tfrac12}^{s}=\tfrac12,\stackrel{s}{\dots},\tfrac12. \] We consider first the case when the target is a one dimensional moduli space. \begin{thm}\label{t:emb} Consider a moduli space $\mathcal{M}(a;\mathbf{\tfrac12}^{s},1,\mathbf{\tfrac12}^{t})$ of disks of type $[1,1]$. Forgetting the $(s+1)^{\rm th}$ Lagrangian intersection puncture we get a map \[ \mathcal{M}(a;\mathbf{\tfrac12}^{s},1,\mathbf{\tfrac12}^{t})\to \overline{\mathcal{M}}(a;\mathbf{\tfrac12}^{s+t}) \] into the compactified moduli space of disks of type $[1,0]$. This map is an embedding of a $0$-dimensional manifold into the interior of a $1$-manifold. \end{thm} Finally, we consider similar maps when the target space is two dimensional. \begin{thm}\label{t:imm} Consider a compactified moduli space $\overline{\mathcal{M}}(a;\mathbf{\tfrac12}^{s},1,\mathbf{\tfrac12}^{t})$ of disks of type $[2,1]$. Forgetting the $(s+1)^{\rm th}$ Lagrangian intersection puncture we get a map \[ \iota\colon\overline{\mathcal{M}}(a;\mathbf{\tfrac12}^{s},1,\mathbf{\tfrac12}^{t})\to \overline{\mathcal{M}}(a;\mathbf{\tfrac12}^{s+t})=\overline{\mathcal{M}} \] into the compactified moduli space of disks of type $[2,0]$. This map is an immersion of a $1$-dimensional manifold into a $2$-manifold with boundary with corners. Let $\overline{\mathcal{M}}_{s+1}$ denote the image of this immersion. Then $\overline{\mathcal{M}}_{s+1}$ consists of those disks for which some point in the $(s+1)^{\rm th}$ boundary arc hits $K$. Furthermore, $\overline{\mathcal{M}}_{s+1}$ and $\overline{\mathcal{M}}_{t+1}$ intersect (self-intersect if $s=t$) transversely at disks with two points hitting $K$ (this corresponds to disks of type $[2,2]$). The boundary of $\overline{\mathcal{M}}_{s+1}$ consists of points in the codimension one boundary of $\overline{\mathcal{M}}$ corresponding to disks as in Theorem \ref{t:[2,1]} (a) and (b) as well as to interior points corresponding to disks of type $[2,\frac32]$ as in Theorem \ref{t:[2,1]} (c).
Furthermore $\overline{\mathcal{M}}_{s+1}$ and $\overline{\mathcal{M}}_{s+2}$ with a common boundary point corresponding to a disk of type $[2,\frac32]$ fit together smoothly at this point. \end{thm} \subsection{Floer's Picard lemma}\label{s:Floer-Picard} In the following subsections we show that the broken disks in Theorems \ref{t:[1,0]}--\ref{t:[2,1]} can be glued in a unique way to give disks in the interior of the moduli space thus providing a standard neighborhood of the boundary of the moduli space inside the compactified moduli space. Our approach here is standard and starts from Floer's Picard lemma, see \cite{Fmem} for a proof. \begin{lemma}\label{l:FloerPicard} Let $f\colon B_1\to B_2$ be a smooth map of Banach spaces which satisfies $$ f(v)=f(0)+df(0)v+ N(v), $$ where $df(0)$ is Fredholm and has a right inverse $Q$ satisfying \begin{equation}\label{e:nlFloer} \|QN(u)-QN(v)\|\le G(\|u\|+\|v\|)\|u-v\| \end{equation} for some constant $G$. Let $B(0,r)$ be the $r$-ball centered at $0\in B_1$ and assume that \begin{equation}\label{e:appFloer} \|Qf(0)\|\le \frac{1}{8G}. \end{equation} Then for $r<\frac{1}{4G}$, the zero-set $f^{-1}(0)\cap B(0,r)$ is a smooth submanifold of dimension $\dim(\ker(df(0)))$ diffeomorphic to the $r$-ball in $\ker(df(0))$. \end{lemma} We will apply this result as well as a parameterized version of it, see \cite[Lemma 5.13]{ESrev}. In our case $f$ will be the $\bar\partial_J$-operator. To show existence of solutions near a broken solution we must thus establish three things: a sufficiently good approximate solution $w$ near the broken solution corresponding to $0$ in Lemma \ref{l:FloerPicard}, a right inverse for the linearization of the $\bar\partial_J$-operator at $w$, corresponding to $Q$ in Lemma \ref{l:FloerPicard}, and a quadratic estimate for the non-linear term in the Taylor expansion, corresponding to \eqref{e:nlFloer}. Here the Banach space $B_1$ will be a product of a weighted Sobolev space and a certain finite dimensional space that will serve as a neighborhood of the broken configuration and the Banach space $B_2$ will be a space of fields of complex antilinear maps. In addition to verifying uniform invertibility of the differential and the non-linear estimate we must also check that the gluing construction captures all solutions near the broken solution and that the natural change of coordinates (from the Banach space around the broken solution to the standard charts in the interior of the moduli space) is smooth. \subsection{Gluing constant disks}\label{ss:constglu} The boundary strata of the moduli spaces we study involve splitting off of constant disks and splitting off of disks in the symplectization. In this section we consider gluing constant disks. We first consider a configuration $\dot v$ as in Theorem \ref{t:[1,0]} (a), \ref{t:[2,1]} (a), or \ref{t:[2,0]} (a1). In all these cases the broken configuration is a two level disk where the second level consists of a constant $3$-punctured disk $v^{2}$ that is attached to the first level disk at a Lagrangian intersection puncture with asymptotic winding number $1$. After we have carried out the gluing argument in this case we will discuss modifications needed for the other cases of constant disk gluing. Assume that the first level disk $v^{1}$ has $m$ Lagrangian intersection punctures. We take the domain of $v^{1}$ to be the standard domain $\Delta^{1}\approx \Delta_{m}$. 
(As explained in Section \ref{s:stab}, we may assume that the domain is stable by adding extra marked points near the positive puncture.) Recall that we defined a functional analytic neighborhood $\mathcal{W}(a;\mathbf{n})$ of $v^{1}$, where $\mathcal{W}(a;\mathbf{n})$ is a product of an infinite dimensional weighted Sobolev space $\mathcal{H}(a;\delta,\mathbf{n})$ and a finite dimensional space which is an open neighborhood $B$ of the origin in ${\mathbb{R}}^{m-2}\times {\mathbb{R}}\times {\mathbb{R}}^{m}$, see Section \ref{s:confcob}. Here the first ${\mathbb{R}}^{m-2}$-component of an element in $B$ corresponds to variations of the conformal structure of $\Delta^{1}$, the second ${\mathbb{R}}$-factor to shifts of the map in the symplectization direction near the positive puncture, and the last ${\mathbb{R}}^{m}$-factor corresponds to shifts along the knot near the Lagrangian intersection punctures. Here we will write $\mathcal{W}^{1}$ for this neighborhood $\mathcal{W}(a;\mathbf{n})$ and think of it as a product \[ \mathcal{W}^{1}= \mathcal{W}^{1}_{0}\times B^{1} \] where $B^{1}$ is an open subset of ${\mathbb{R}}$, as follows. Let $q$ denote the negative puncture where the second level disk is attached. Then $B^{1}$ corresponds to shifts along the knot at $q$. Consider the negative puncture $q$ at which the constant three punctured disk $v^{2}\colon\Delta^2\to K$, where $\Delta^{2}\approx\Delta_{3}$ is a standard domain with three punctures, is attached and fix a half-strip neighborhood $Q=(-\infty,0]\times[0,1]$ of it such that $v^{1}(Q)$ lies entirely in the standard neighborhood of $K$ with complex analytic coordinates. For $\rho>0$, define a standard domain $\Delta_{\rho}\approx \Delta_{m+1}$ as follows. Remove the neighborhood $(-\infty,-\rho)\times[0,1]$ of $q$ from $\Delta^{1}$ and the neighborhood $(\rho,\infty)\times[0,1]$ of the positive puncture in $\Delta^{2}$, getting domains $\Delta_{\rho}^{1}$ and $\Delta_{\rho}^{2}$. The domain $\Delta_{\rho}$ is then obtained by identifying the boundary segments $\{-\rho\}\times[0,1]\subset\Delta_{\rho}^{1}$ and $\{\rho\}\times[0,1]\subset\Delta_{\rho}^{2}$. Then $\Delta_{\rho}$ contains the strip $Q_{\rho}\approx [-\rho,\rho]\times[0,1]$: \[ Q_{\rho}=\Delta_{\rho}-(\Delta_{0}^{1}\cup\Delta_{0}^{2}). \] We next define a pre-gluing $w_{\rho}\colon \Delta_{\rho}\to T^{\ast}Q$ (i.e., an approximate solution close to the broken disk $\dot v$) and a neighborhood of it in a suitably weighted space of maps. We start with the map. Fix complex analytic coordinates ${\mathbb{C}}\times{\mathbb{C}}^{2}$ around $p\in K$ on $T^{\ast}Q$, where $p$ is the point where the constant disk $v^{2}$ sits. Let $\phi\colon \Delta_{\rho}\to{\mathbb{C}}$ be a smooth function which equals $1$ on $\Delta_{{\rho/2}}^{1}$, equals $0$ on $\Delta_{\rho}^{2}$ and is real-valued and holomorphic on the boundary. (Holomorphic on the boundary just means that the restriction of $\bar{\partial}$ to the boundary vanishes. For example, if $s+it\in \mathbb{R}\times[0,1]$ are coordinates on the strip and $\phi(s)$ is an ordinary real valued cut-off function then a corresponding complex valued cut-off function that is holomorphic on the boundary is $\phi(s)+i\psi(t)\frac{d\phi}{ds}(s)$, where $\psi(t)$ is a small function with support near $\partial [0,1]$ such that $\psi(0)=\psi(1)=0$ and $\frac{d\psi}{dt}(0)=\frac{d\psi}{dt}(1)=1$.) 
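As a check that this example has the stated property, note that with $\bar{\partial}=\tfrac12(\partial_{s}+i\partial_{t})$ a direct computation gives \[ \bar{\partial}\Big(\phi(s)+i\psi(t)\tfrac{d\phi}{ds}(s)\Big) =\tfrac12\Big(\big(1-\tfrac{d\psi}{dt}(t)\big)\tfrac{d\phi}{ds}(s)+i\,\psi(t)\tfrac{d^{2}\phi}{ds^{2}}(s)\Big), \] which vanishes at $t=0$ and $t=1$ since there $\psi=0$ and $\tfrac{d\psi}{dt}=1$.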
Define \[ w_{\rho}(z)= \begin{cases} v^{1}(z), & z\in\Delta_{{\rho/2}}^{1},\\ \phi(z)v^{1}(z), & z\notin\Delta_{{\rho/2}}^{1}, \end{cases} \] where the last expression refers to the analytic coordinates around $p$, corresponding to $0$ in the coordinate system. Note then that $w_{\rho}$ takes the boundary $\partial \Delta_{\rho}$ to $L=L_{K}\cup Q$ and that $\bar{\partial}_{J}w_\rho$ is supported in $Q_{\rho}$. Furthermore, using the Fourier expansion of $v^{1}$ near $q$, \[ v^{1}(z)=\sum_{k\ge 1} c_{k} e^{k\pi z}, \] we find that \[ |\bar{\partial}_{J}w_{\rho}|_{C^{1}}={\mathcal O}(e^{-\pi\rho}). \] Define a weight function $\lambda_\rho\colon\Delta_{\rho}\to{\mathbb{R}}$ as follows, where $\eta_\delta$ denotes the weight function on $\Delta^{1}$, \[ \lambda_\rho(z)= \begin{cases} \eta_\delta(z) &\text{for }z\in \Delta^{1}_0,\\ e^{\delta(\rho-|\tau|)} &\text{for }z\in Q_\rho\approx[-\rho,\rho]\times[0,1],\\ 1 &\text{for }z\in\Delta^{2}_0. \end{cases} \] Let $\|\cdot\|_{k,\rho}$ denote the Sobolev norm with $k$ derivatives on $\Delta_{\rho}$ and weight function $\lambda_{\rho}$. From the above we then find \begin{equation} \|\bar\partial_J w_\rho\|_{1,\rho}\le |\bar\partial_J w_\rho|_{C^{1}}\int_{-\rho}^{\rho}e^{\delta(\rho-|\tau|)} d\tau= {\mathcal O}(e^{-(\pi-\delta)\rho}). \end{equation} We next define configuration spaces of maps giving neighborhoods of the approximate solutions $w_\rho$. As in Subsection \ref{s:confcob} this space is a direct sum of an infinite dimensional space and two finite dimensional summands. We first discuss the infinite dimensional summand. Define ${\mathcal H}_{2,\rho}(w_\rho)$ as the Sobolev space of vector fields $v$ along $w_\rho$ (i.e., sections of $w_\rho^{\ast}T(T^\ast Q)\to\Delta_{\rho}$) which satisfy the following requirements. \begin{itemize} \item If $\zeta\in\partial\Delta_{\rho}$ maps to $L_K$ (maps to $Q$) then $v(\zeta)$ is tangent to $L_K$ (resp. to $Q$). \item $\nabla v + J\circ \nabla v\circ i=0$ along $\partial\Delta_{\rho}$. \item Fix an endpoint $\zeta_0\in\partial\Delta_\rho$ of the vertical segment which separates the part of $\Delta_{\rho}$ which corresponds to $\Delta^{1}$ from that corresponding to $\Delta^{2}$. We require that $v(\zeta_0)=0$. \end{itemize} Here the first two requirements have counterparts in Section~\ref{s:disks} and the third is connected to the addition of certain cut-off solutions in the gluing region. We endow ${\mathcal H}_{2,\rho}(w_\rho)$ with the weighted Sobolev $2$-norm $\|\cdot\|_{2,\rho}$. Second, we discuss the finite dimensional factor $B_{\rho}=B_{\rho}^{1}\times B_{\rho}^{2}$. Here $B_{\rho}^{1}$ is an open neighborhood of the origin in ${\mathbb{R}}^{m-2}\times {\mathbb{R}}\times{\mathbb{R}}^{m-1}$ and agrees with the finite dimensional factor of $\mathcal{W}^{1}_{0}$ in the following sense. The first ${\mathbb{R}}^{m-2}$-factor corresponds to the conformal variations of $\Delta_{\rho}$ inherited from $\Delta^{1}$, the second ${\mathbb{R}}$-factor corresponds to shifts at the positive puncture, and the last ${\mathbb{R}}^{m-1}$-factor corresponds to shifts along the knot $K$ at Lagrangian intersection punctures that are also punctures of $\Delta^{1}$.
The second factor $B_{\rho}^{2}$ is an open neighborhood of the origin in ${\mathbb{R}}^{3}\times {\mathbb{R}}^{2}\times{\mathbb{R}}$, where the first ${\mathbb{R}}^{3}$-factor corresponds to constant vector fields along the Lagrangian in a neighborhood of $p$, supported in $Q_{\rho}$ and cut off in finite regions near the ends of $Q_{\rho}$ where the weight function $\lambda_{\rho}$ is uniformly bounded, and where the second factor corresponds to the shifts along $K$ supported at the Lagrangian intersection punctures that are also punctures of $\Delta^{2}$. Finally, the third ${\mathbb{R}}$-factor is a newborn conformal variation defined as follows. Consider the domain of the constant disk as a strip ${\mathbb{R}}\times[0,1]$ with positive puncture at $+\infty$, one negative puncture at $0$, and one at $-\infty$. Let $v$ be the constant vector field $\partial_\tau$ and note that its flow moves the puncture at $0$ and that in the standard model of the $3$-punctured disk this vector field looks like $c_1+{\mathcal O}(e^{-\pi|\tau|})$ at the puncture at $+\infty$ and at one of the punctures at $-\infty$, whereas it looks like $c_2 e^{-\pi\tau}+{\mathcal O}(1)$ at the other puncture at $-\infty$, where $c_j$, $j=1,2$, are real constants. We extend this vector field $v$ holomorphically over the gluing region $Q_\rho$ and then cut it off using a cut-off function $\beta$ with derivative supported near the end of $Q_{\rho}$ that comes from $\Delta^{1}$ where the weight function $\lambda_{\rho}$ is close to $1$. The conformal variation is then the complex anti-linear map $i\bar\partial(\beta v)$. Note that the conformal variation in $\Delta_{\rho}$ that is inherited from the conformal variation at $q$ in $\Delta^{1}$ can be identified with the linear combination of conformal variations as above for the two punctures from $\Delta^{2}$ which looks like $0+{\mathcal O}(e^{-\pi|\tau|})$ at the positive puncture. We take the ${\mathbb{R}}$-factor to correspond to this variation. (Note that this conformal variation is supported in $\Delta^{1}_{\rho}$ and agrees with the conformal variation in $\Delta^{1}$ corresponding to the negative puncture $q$ where the constant disk is attached.) \begin{remark}\label{rmk:gluingparameter} We note that there is a complementary linear combination of the two newborn conformal variations with non-zero leading constant term at the positive puncture of the constant disk that corresponds to the gluing parameter $\rho$ which, from the point of view of the domain, shifts the boundary maximum between the two new punctures, see Remark \ref{rmk:confvaratpuncture}. \end{remark} Let $\mathcal{E}_{1,\rho}$ denote the space of complex anti-linear maps $T\Delta_{\rho}\to w_\rho^{\ast}T(T^{\ast}Q)$, again weighted by $\lambda_\rho$. The linearization of the $\bar\partial_J$-operator at $w_\rho$ is then an operator \[ L\bar\partial_J\colon{\mathcal H}_{2,\rho}(w_\rho)\times B_{\rho}\to \mathcal{E}_{1,\rho}. \] \begin{lemma}\label{l:unifinv} The operators $L\bar\partial_J$ admit right inverses which are uniformly bounded as $\rho\to\infty$. \end{lemma} \begin{proof} The argument here is standard. Let $k_1,\dots, k_l$ be a basis of the kernel of the linearized operator at $v^{1}$. Fix a cut-off function $\beta$ which equals $1$ on the part of $\Delta_{\rho}$ corresponding to $\Delta^{1}$ and with first and second derivatives supported in $Q_\rho$ of size ${\mathcal O}(\rho^{-1})$. (Such a cut-off function exists since the length of $Q_\rho$ equals $2\rho$.)
We will establish an estimate \begin{equation}\label{e:estonL2c} \|v\|_{2,\rho}\le C\|L\bar\partial_J v\|_{1,\rho}, \end{equation} where $C>0$ is a constant, for $v$ in the $L^{2}$-complement of the subspace $\tilde K$ spanned by the cut-off solutions $\beta k_1,\dots,\beta k_l$. The lemma follows from this estimate. We argue by contradiction: assume that the estimate does not hold. Then there is a sequence $v_\rho$ in this $L^{2}$-complement with \begin{equation}\label{e:contra} \|v_\rho\|_{2,\rho}= 1,\quad\text{and}\quad \|L\bar\partial_J v_\rho\|_{1,\rho}\to 0. \end{equation} We write $v_\rho=u_\rho + b_\rho^{1} + b_\rho^{2}$, where $u_\rho\in {\mathcal H}_{2,\rho}(w_\rho)$, $b_\rho^{1}\in B^{1}_{\rho}$, and $b^{2}_\rho\in B^{2}_{\rho}$. Fix cut-off functions $\beta_j$ on $\Delta_{\rho}$, $j=1,2$, with the following properties. The function $\beta_1$ equals $1$ on $\Delta^{1}_{\rho/2}\subset \Delta_{\rho}$ and equals $0$ on $\Delta^{2}_{\rho}\subset\Delta_{\rho}$. Furthermore, $\beta_1 u_\rho$ is holomorphic on the boundary, and $|D\beta_1|={\mathcal O}(\rho^{-1})$. The function $\beta_2$ has similar properties but with support in $\Delta_{\rho}^{2}\subset\Delta_\rho$. We also let $\alpha$ be a similar cut-off function on $\Delta_{\rho}$, equal to $1$ on $Q_{\frac12\rho}$ and equal to $0$ outside $Q_{\rho-1}$. Since $|\bar\partial\beta_j^{\rho}|\to 0$ as $\rho\to \infty$ we then have \begin{align*} \left\|L\bar{\partial}_{J}(\beta_1 u_\rho+b_{\rho}^{1}+b_{\rho}^{2}|_{\Delta_{\rho}^{1}})\right\|_{1,\rho} &\le|\bar\partial\beta_j^{\rho}|\left\| u_\rho \right\|_{1,\rho} +\left\|L\bar{\partial}_{J}(u_\rho+b_{\rho}^{1}+b_{\rho}^{2}|_{\Delta_{\rho}^{1}})\right\|_{1,\rho}\\ &\le |\bar\partial\beta_j^{\rho}|\left\| u_\rho \right\|_{1,\rho} + \|L\bar\partial_J v_\rho\|_{1,\rho} \to 0, \end{align*} as $\rho\to\infty$. We then conclude from transversality of $v^{1}$ (i.e., invertibility of the linearized operator off of its kernel) that there exists a constant $M>0$ such that \begin{equation}\label{eq:tozero1} \left\|\beta_1 u_\rho+b_{\rho}^{1}+b_{\rho}^{2}|_{\Delta_{\rho}^{1}}\right\|_{2,\rho}\le M \left\|L\bar{\partial}_{J}(\beta_1 u_\rho+b_{\rho}^{1}+b_{\rho}^{2}|_{\Delta_{\rho}^{1}})\right\|_{1,\rho} \to 0. \end{equation} In particular the cut-off constant solution in the gluing region goes to $0$. Similarly we have \[ \left\|L\bar{\partial}_{J}(\beta_2u_\rho +b_{\rho}^{2}|_{\Delta_{\rho}^{2}})\right\|_{1,\rho}\to 0. \] We conclude from the invertibility of the standard operator on the three punctured disk that \begin{equation}\label{eq:stconst3punct} \left\|\beta_2u_\rho +b_{\rho}^{2}|_{\Delta_{\rho}^{2}}\right\|_{2,\rho}\to 0. \end{equation} After dividing the weight function in the gluing region $Q_{\rho/2}\approx [-\frac{\rho}{2},\frac{\rho}{2}]\times[0,1]$ by its maximum the problem on the gluing region converges to the $\bar\partial$-problem on the strip with ${\mathbb{R}}^{3}$-boundary condition and negative exponential weights at both ends (i.e.~with weight function $e^{-\delta|s|}$ in the coordinate $s+it$). This problem has a three-dimensional kernel spanned by constant solutions in ${\mathbb{R}}^{3}$. As mentioned above, the estimates \eqref{eq:tozero1} and \eqref{eq:stconst3punct} imply that the components along the constant solutions go to zero. This gives first that \[ \left\|L\bar\partial_J(\alpha u_\rho)\right\|_{1,\rho}\to 0, \] and then, by invertibility of $L\bar{\partial}_{J}$ on the complement of the kernel, also that $\|\alpha u_{\rho}\|_{2,\rho}\to 0$.
Our assumption thus implies that $\|v_\rho\|_{2,\rho} \to 0$. This contradicts \eqref{e:contra}. The lemma follows. \end{proof} The next thing to establish is the quadratic estimate for the non-linear term in the Taylor expansion of $\bar\partial_J$ around $w_{\rho}$, i.e., around the origin in ${\mathcal H}_{2,\rho}\times B_{\rho}$. We use the exponential map as in Section \ref{s:confcob} to define the local coordinate system around $w_{\rho}$ and the estimate for the non-linear term follows from a standard argument that uses the uniform bounds on the derivatives of the exponential map in our metric, see \cite[Lemma A.18]{ESrev} and also \cite{EES1, EES2}. In fact the standard argument gives the corresponding unweighted estimate but then the case of positive weights follows since the left hand side of the inequality is linear in the weight whereas the right hand side is quadratic. So the inequality follows for weights bounded from below. Note also that variations along the cut-off solutions in $B_{\rho}$ give contributions to the non-linear term only in the regions where the derivatives of the cut-off functions are supported and in such regions the weight functions have finite size. \begin{remark} It is essential here that the cut-off solutions are actual solutions to the non-linear equation since a small error term would give a large norm contribution because of the large weight function in $Q_\rho$, which in turn is key for the proof of the uniform invertibility of the differential in Lemma \ref{l:unifinv}. \end{remark} The final step is then to show surjectivity of the construction. More concretely, this means that we must show that any sequence of disks which converges in the sense of Subsection \ref{S:cp} to a broken disk eventually lies in a small $\|\cdot\|_{2,\rho}$-neighborhood of $w_\rho$. This follows once we show that any holomorphic disk in a $C^{0}$-neighborhood of the approximate solution is also close in $\|\cdot\|_{2,\rho}$-norm. The proof of that fact follows from the knowledge of explicit solutions in the region where the weight is big. Here $C^{0}$-control at the ends gives norm control, see \cite[Proof of Theorem A.21]{ESrev} or \cite[Proof of Theorem 1.3]{E}. This finishes the gluing results needed in the cases when we glue a single constant 3-punctured disk at a Lagrangian intersection puncture of winding number $1$. The remaining cases for gluing constant disks are proved by modifications of the above argument that we describe next. Consider first Theorem \ref{t:[2,0]} (a2). Here we replace the gluing parameter $\rho$ with two independent gluing parameters $(\rho_1,\rho_2)\in[0,\infty)^{2}$, one for each constant disk. Likewise we have two copies of the new finite dimensional factors in the configuration space. The gluing argument is then a word-by-word repetition of the above. Next consider broken disks as in Theorem \ref{t:[2,1]}(c). Here the exponential weight at the winding $\frac32$-puncture of $v^{1}$ is $\delta\in (\frac{\pi}{2},\pi)$ and the boundary condition in the strip $Q_{\rho}$ has different constant Lagrangians along the two boundary components. The cut-off solutions in $B^{2}_{\rho}$ change accordingly: instead of an ${\mathbb{R}}^{3}$-factor of cut-off solutions we have an ${\mathbb{R}}^{5}$-factor, ${\mathbb{R}}^{5}={\mathbb{R}}\times{\mathbb{R}}^{2}\times{\mathbb{R}}^{2}$. The ${\mathbb{R}}$-factor is a constant solution in the direction of the knot.
The first ${\mathbb{R}}^{2}$-factor contains cut-off solutions near the positive puncture of $\Delta_{\rho}^{2}$ of the form $c e^{\frac{\pi z}{2}}$ for $c$ a vector in the appropriate Lagrangian 2-space perpendicular to the knot, and the second ${\mathbb{R}}^{2}$-factor consists of cut-off solutions of the form $c e^{\frac{-\pi z}{2}}$. Then in Lemma \ref{l:unifinv} we replace \eqref{eq:stconst3punct} with the estimate on the three punctured disk with boundary condition corresponding to the constant disk. That is, in the directions perpendicular to the knot the boundary condition consists of two perpendicular Lagrangian planes along the two boundary components adjacent to the positive puncture and one of these planes between the two negative punctures. There is a small positive exponential weight at the negative punctures, while at the positive puncture we have the weight $\delta$ and the two cut-off solutions. In the directions perpendicular to the knot the $\bar{\partial}$-operator is then an isomorphism and the argument above proceeds as before. \begin{remark}\label{rmk:twoconst} In Theorem \ref{t:[2,1]} (c) there are two different constant disks and the corresponding boundary points cancel out. Geometrically this corresponds to pushing a winding $\frac12$ puncture through a winding $1$ puncture. \end{remark} Finally, we consider Theorem \ref{t:[2,0]} (d2). The argument here is the same as that just described for Theorem \ref{t:[2,1]} (c) with the only difference being that the 3-punctured constant disk should be replaced by a 4-punctured disk and that we invert the operator on the $L^{2}$ complement of the additional conformal variation in the 4-punctured disk. In fact, when the 4-punctured disk is broken into two levels it corresponds to the 3-level configuration with the two top levels as in Theorem \ref{t:[2,1]} (c) and a third level constant disk attached at the winding 1 puncture of the second level constant disk. \subsection{Symplectization gluing}\label{ss:sympglu} Consider a disk with two non-constant levels as in Theorem \ref{t:[2,0]} (b1) or (c1), Theorem \ref{t:[1,0]} (b) or (c), or Theorem \ref{t:[2,1]} (b). The argument needed to glue such configurations is similar to the one in Subsection \ref{ss:constglu} and we only sketch the details. There are again four steps: define an approximate solution, prove uniform invertibility of the differential, establish a quadratic estimate for the non-linear term, and show surjectivity of the construction. We consider first the case when we glue a symplectization disk to a disk in $T^{\ast}Q$ and discuss later the modifications needed when the second level also lives in the symplectization. Denote the top-level disk in the symplectization by $v^{1}\colon \Delta^{1}=\Delta_{m}\to {\mathbb{R}}\times S^{\ast}Q$ and the $m$ second level disks by $v^{2,j}\colon \Delta_{m_j}\to T^{\ast}Q$, $j=1,\dots,m$. Recall that by adding marked points we reduce to the case when all domains involved are stable, see Section \ref{s:stab}. Each symplectization disk lies in a natural ${\mathbb{R}}$-family. Let $t$ denote a standard coordinate on the ${\mathbb{R}}$-factor. Fix the unique map $v^{1}$ in this family that takes the largest boundary maximum in $\Delta^{1}$ to the slice $\{t=0\}$. By asymptotics at the negative punctures, for all $T>0$ sufficiently large $(v^{1})^{-1}(\{t\le -T\})$ consists of $m$ half strip regions with one component around each negative puncture of $v^{1}$.
Furthermore, as $T\to\infty$ the inverse image of the slice $\{t=-T\}$ converges to vertical segments at an exponential rate (since the map agrees with trivial Reeb chord strips up to exponential error). We fix such a slice and consider the vertical segments through its end point. Parameterize the neighborhoods of all the punctures cut at these vertical segments by $(-\infty,0]\times[0,1]$. For $\rho>0$, let $\Delta^{1}_{\rho}\subset \Delta^{1}$ be the subset obtained by removing $(-\infty,-\rho)\times[0,1]$ from the neighborhood $(-\infty,0]\times[0,1]$ of each negative puncture. Fix neighborhoods $[0,\infty)\times[0,1]$ of the positive puncture in each $\Delta^{2,j}$, $j=1,\dots,m$ in which the map is well approximated by the trivial strip at the positive puncture and let $\Delta^{2,j}_{\rho}\subset\Delta^{2,j}$ denote the subset obtained by removing $(\rho,\infty)\times[0,1]$ from this neighborhood. Let $\Delta_{\rho}$ denote the domain obtained by adjoining $\Delta^{2,j}_{\rho}$ to $\Delta^{1}_{\rho}$ by identifying the vertical segment at the positive puncture of $\Delta^{2,j}_{\rho}$ with the vertical segment of the negative puncture in $\Delta^{1}_{\rho}$ where $v^{2,j}$ is attached to $v^{1}$. Then we get $m$ strip regions $Q^{j}_{\rho}=[-\rho,\rho]\times[0,1]\subset\Delta_{\rho}$ around each vertical segment where the disks were joined. By interpolating between the two maps joined at each negative puncture using the standard coordinates near the Reeb chords we find a pregluing \[ w_{\rho}\colon \Delta_{\rho}\to T^{\ast}Q \] such that $\bar{\partial}_{J}w_{\rho}$ is supported only in the middle $[-1,1]\times[0,1]$ of each $Q_{\rho}^{j}$ and such that \[ |\bar{\partial}_{J}w_{\rho}|_{C^{1}}={\mathcal O}(e^{-\alpha\rho}), \] where $\alpha>0$ depends on the angle between the Lagrangian subspaces of the contact hyperplane obtained by moving the tangent space of $\Lambda_K$ at the Reeb chord start point to the tangent space of $\Lambda_K$ at the Reeb chord end point by the linearized Reeb flow. As in Section \ref{ss:constglu} we use a configuration space of maps in a neighborhood of $w_{\rho}$ that is a product of an infinite and a finite dimensional space of functions. We first consider the infinite dimensional factor. Define weight functions $\lambda_\rho\colon\Delta_{\rho}\to{\mathbb{R}}$ by patching (suitably scaled) weight functions $\eta_\delta$ of the domains of the broken disks where we take $0<\delta<\alpha$. In particular, we have $\lambda_\rho(\tau+it)=c_j e^{\delta|\tau|}$ for $\tau+it\in Q_\rho^{j}$. Then, writing $\|\cdot\|_{k,\rho}$ for the Sobolev $k$-norm with this weight, we have \[ \|\bar\partial_J w_\rho\|_{1,\rho}={\mathcal O}(e^{(\delta-\alpha)\rho}). \] We let ${\mathcal H}_{2,\rho}(w_\rho)$ denote the $\lambda_\rho$-weighted Sobolev space of vector fields along $w_\rho$ which are tangent to the Lagrangians, holomorphic on the boundary, and which satisfy the following vanishing condition. The map $w_{\rho}$ maps the strip regions $Q^{j}_{\rho}$ into small neighborhoods of the Reeb chord strips where we have standard coordinates ${\mathbb{R}}\times (-\epsilon, L+\epsilon)\times{\mathbb{C}}^{2}$ and we require that the ${\mathbb{R}}$-component of the vector field vanishes at one of the endpoints of the vertical segments where the disks were joined. Thus there are in total $m$ vanishing conditions. Next we discuss the finite dimensional factor $B_{\rho}=B^{0}_{\rho}\times B^{1}_{\rho}\times B^{2}_{\rho}$. 
The second factor $B^{1}_{\rho}$ is an open neighborhood of the origin in ${\mathbb{R}}$ corresponding to the shift at the positive puncture of $w_{\rho}$. The third factor $B^{2}_{\rho}$ contains all the conformal variations and the shifts inherited from the negative punctures of the second level disks. Thus $B^{2}_{\rho}$ is a neighborhood of the origin in \[ \Pi_{j=1}^{m} ({\mathbb{R}}^{m_j-2}\times{\mathbb{R}}^{m_j}). \] Finally, the first factor $B_{\rho}^{0}$ is an open neighborhood of the origin in a codimension one subspace of \[ ({\mathbb{R}}\times{\mathbb{R}}^{2})^{m}, \] where each (${\mathbb{R}}\times{\mathbb{R}}^{2}$)-factor corresponds to a specific second level disk. The ${\mathbb{R}}$-component at the $j^{\rm th}$ negative puncture of $v^{1}$ corresponds to a cut-off shifting vector field $a_j$ in the ${\mathbb{R}}$-direction of the symplectization supported in $Q^{j}_{\rho}$. The ${\mathbb{R}}^{2}$-component corresponds to the two newborn conformal variations in $\Delta^{2,j}_{\rho}$. As before these conformal variations have the form $\gamma = \bar{\partial} V$ where $V$ is a vector field along $\Delta_{\rho}$. The first factor of ${\mathbb{R}}^{2}$ corresponds to a variation $\gamma^{1,j}$ that agrees with the conformal variation at the negative puncture in $\Delta^{1}$ where $v^{2,j}$ is attached. The second factor is spanned by $\gamma^{2,j}=\bar{\partial} V_{2}$ where $V_{2}$ is the vector field in $\Delta^{2,j}_{\rho}\cup Q_{\rho}^{j}$ that corresponds to the translation along the real axis that moves all the boundary maxima in $\Delta^{2,j}_{\rho}$, cut off near the end of $Q^{j}_{\rho}$ in $\Delta_{\rho}^{1}$. The codimension one subspace is the orthogonal complement of the line given by the equation \[ \gamma^{2,1}=\gamma^{2,2}=\dots =\gamma^{2,m}. \] Note that this latter conformal variation corresponds to changing $\rho$. \begin{remark} The nature of the conformal variations $\gamma^{1,j}$ and $\gamma^{2,j}$ is easy to see using a different conformal model for the domain $\Delta_{\rho}$ as follows. Consider the domain of $\Delta^{1}$ as the upper half plane $H$ with positive puncture at $\infty$ and negative punctures along the real axis. The conformal variations of this domain can be viewed as translating the negative punctures along the real axis. To construct the domain $\Delta_{\rho}$ we think also of the domains $\Delta^{2,j}$ as upper half planes. Cut out small half disks of radius $c_j e^{-\alpha\rho}$ near the negative punctures of $\Delta^{1}$ and glue in the half disks in the domain $\Delta^{2,j}$ of radius $c_j e^{\alpha\rho}$ scaled by $e^{-2\alpha\rho}$. Now the conformal variation $\gamma^{1,j}$ corresponds to translating the whole half disk at the $j^{\rm th}$ negative puncture of $\Delta^{1}$ rigidly in the real direction, and the conformal variation $\gamma^{2,j}$ corresponds to keeping the small half disk fixed but scaling it so that its negative punctures move closer together. \end{remark} We use the neighborhood $\mathcal{W}_{\rho}={\mathcal H}_{2,\rho}\times B_{\rho}$ of $w_{\rho}$. In order to apply Lemma \ref{l:FloerPicard} we must first establish the counterpart of Lemma \ref{l:unifinv}. Here we invert the linearized operator on the $L^{2}$-complement of the subspace spanned by cut-off kernel elements in $\Delta^{1}$ and $\Delta^{2,j}$ defined as follows. The infinite dimensional components are simply cut-off vector fields.
For the finite dimensional components we identify the conformal variation at the $j^{\rm th}$ negative puncture of $\Delta^{1}$ with $\gamma^{1,j}$, the shift at this negative puncture with $a_j$, and the shift at the positive puncture of $\Delta^{2,j}$ with $\gamma^{2,j}$. To show uniform invertibility we then argue by contradiction as in the proof of Lemma \ref{l:unifinv}. Using the above identifications of finite dimensional factors, the result follows in a straightforward way. Finally, the two remaining steps, the quadratic estimate for the non-linear term and the surjectivity of the construction, are completely analogous to their counterparts in Subsection \ref{ss:constglu} and will not be discussed further. In the case that the second level disk lies in the symplectization as well, we start as above by fixing a representative for $v^{1}$ and a slice $\{t=-T\}$ after which this representative is well approximated by Reeb chord strips. We then fix representatives for all the non-trivial second level curves $v^{2,j}$ (of which there is only one in our case) that are translated sufficiently far so that they are well approximated by Reeb chord strips in the slice $\{t=-T\}$ at their positive punctures. We then repeat the argument above. \subsection{Point constraints on the knot}\label{ss:branchmark} An analogous construction allows us to express neighborhoods of disks with Lagrangian intersection punctures of winding number $1$ inside the space of disks with these punctures removed. In the analytical ${\mathbb{C}}\times{\mathbb{C}}^{2}$-coordinates around the knot a disk $v$ with such a puncture looks like \[ v(z)= \sum_{n\ge 0} c_n e^{-n\pi z}, \quad z\in[0,\infty)\times[0,1], \quad c_n\in{\mathbb{R}}^{3} \text{ or } c_n\in{\mathbb{R}}\times i{\mathbb{R}}^{2} \] with $c_0=(c_0',0)$, whereas a general disk looks the same way but has unrestricted $c_0$. We can thus construct a configuration space $\mathcal{W}$ for unrestricted disks in a neighborhood of $v$ as \[ \mathcal{W}=\mathcal{W}'\oplus {\mathbb{R}}^{2}, \] where $\mathcal{W}'$ is the configuration space for disks in a neighborhood of $v$ with Lagrangian intersection puncture of winding number $1$ and ${\mathbb{R}}^{2}$ is spanned by two cut-off constant solutions in the Lagrangian perpendicular to $K$. The zero-set of the $\bar{\partial}_{J}$-operator acting on $\mathcal{W}$ then gives a neighborhood of $v$ in the space of unrestricted disks. \subsection{Proofs of the structure theorems} The proofs of all the theorems on the structure of the compactified moduli spaces as manifolds with boundary with corners now follow the same pattern. Transversality and compactness results give the possible degenerations, and gluing gives neighborhoods of several-level disks in the boundary. The manifold structure in the interior is a consequence of standard Fredholm theory, whereas charts near the boundary are obtained from the conformal structures of the domains. \begin{proof}[Proof of Theorem \ref{t:sy}] Part (i) follows immediately from Lemma \ref{l:tv} and Theorem \ref{t:cp}. Consider part (ii). Lemma \ref{l:tv} and Theorem \ref{t:cp} imply that the broken disks listed are the only possible configurations in the boundary of the compactified moduli space. It follows from (the parameterized version of) Lemma \ref{l:FloerPicard} that the gluing parameter gives a parameterization of the boundary of the reduced moduli space.
Recall that we identified the gluing parameter with a certain conformal variation (that shifts all the boundary maxima in the second level disk) and we topologize a neighborhood of the broken configuration using the induced map to the compactified space of conformal structures. This establishes (ii). \end{proof} \begin{proof}[Proof of Theorem \ref{t:0dimcob}] The theorem follows immediately from Lemma \ref{l:tv} and Theorem \ref{t:cp}. \end{proof} \begin{proof}[Proof of Theorem \ref{t:[1,0]}] The proof is analogous to the proof of Theorem \ref{t:sy} (ii) except for (c). Here a disk without Lagrangian intersection punctures moves out as a rigid disk in the symplectization into the ${\mathbb{R}}$-invariant region, and the translations along ${\mathbb{R}}$ give a neighborhood of the boundary. \end{proof} \begin{proof}[Proof of Theorem \ref{t:[2,1]}] The argument is analogous to the proofs above and we explain only how to parameterize the boundary in the cases that differ from the above. Consider (b). Recall that we identified the gluing parameter with the conformal variation that translates all the boundary maxima in the second level disks uniformly. As above we use this to parameterize a neighborhood of the boundary. Finally, consider (c). Here again the boundary can be parameterized by the gluing parameter which corresponds to a conformal variation. In particular, the boundary point corresponds to a three punctured disk splitting off. As explained in Remark \ref{rmk:twoconst} there are two such disks and the corresponding boundaries of the moduli space naturally fit together to a smooth $1$-manifold. \end{proof} \begin{remark}\label{r:1-dimfinal}(cf.~Remark~\ref{r:1-dimcornercoord}). Consider a holomorphic disk near the codimension one boundary as in Theorem~\ref{t:[1,0]} (a) or Theorem~\ref{t:[2,1]} (a). Remark \ref{r:breakingmodelclose} gives a local model~\eqref{eq:1-dimmodel} for the disk, parameterized by a half disk in the upper half plane near the two colliding corners with one puncture at $0$ and one at $\epsilon>0$. The above proof shows that the newborn conformal variation, which here is the length of the stretching strip, can be used as a local coordinate in the moduli space near the corner. A conformal map that takes a vertical segment in the stretching strip to the upper arc in the unit circle and the boundary of the domain in the disk splitting off to the real line gives a smooth change of coordinates from this parameter to the coordinates given by $\epsilon$. Thus the local model \eqref{eq:1-dimmodel} used in the definition of the string operations is $C^{k}$-close to the actual moduli space, when both are viewed as parameterized by the coordinate $\epsilon$. A similar discussion applies to Theorem~\ref{t:[2,1]} (c), using the local model~\eqref{eq:2-dimmodel} with $\delta=0$. \end{remark} \begin{proof}[Proof of Theorem \ref{t:[2,0]}] Arguments for producing neighborhoods of codimension one boundary strata are similar to the above, so we discuss the codimension two parts. Consider a broken disk as in (a2). The gluing result needed in this case is analogous to the argument in Subsection \ref{ss:constglu}. Here, however, we attach two constant disks, producing approximate solutions $w_{\rho_1,\rho_2}$ depending on two independent variables $\rho_1,\rho_2\to\infty$. In this case there are two independent newborn conformal variations and the linearized $\bar\partial_J$-operator is inverted on the complement of their linear span.
It follows as above that the projection of the moduli space is an embedding into the space of conformal structures and we induce the corner structure from there. Note that this is consistent with our treatment of nearby codimension one boundary disks. The arguments in cases (b2), (c2), and (d2) follow the same lines. We produce approximate solutions depending on two independent variables. In case (b2) the linearized operator is inverted on the complement of the $2$-dimensional space spanned by the cut-off shift of the symplectization disk and the newborn conformal variation of the constant disk. In case (c2) the linearized operator is inverted on the complement of the (independent) shifts of the first and second level disks, and in case (d2) on the complement of the newborn conformal variation and the additional conformal variation in the constant 4-punctured disk. In all cases, the corner structure is induced from the corresponding structure on the space of conformal structures and the construction is compatible with nearby strata of lower codimension. \end{proof} \begin{remark}\label{r:2-dimfinal}(cf.~Remark~\ref{r:2-dimcornercoord}). Consider a holomorphic disk near the codimension two corner as in Theorem~\ref{t:[2,0]} (d2). Remark \ref{r:breakingmodelclose} gives a local model~\eqref{eq:2-dimmodel} for the disk, parameterized by a half disk in the upper half plane near the three colliding corners with one puncture at $0$ and the two others at boundary points $\delta< 0$ and $\epsilon>0$. The above proof shows that the newborn conformal variation (which here is the length of the stretching strip) together with the difference between the boundary maxima in the 4-punctured disk splitting off can be used as local coordinates in the moduli space near the corner. A conformal map that takes a vertical segment in the stretching strip to the upper arc in the unit circle and the boundary of the domain in the disk splitting off to the real line gives a smooth change of coordinates from these two parameters to the coordinates given by $(\epsilon,\delta)$. Thus the local model \eqref{eq:2-dimmodel} used in the definition of the string operations is $C^{k}$-close to the actual moduli space, when both are viewed as parameterized by the coordinates $(\epsilon,\delta)$. \end{remark} \begin{proof}[Proof of Theorem \ref{t:emb}] The theorem follows from the discussion in Subsection \ref{ss:branchmark}. \end{proof} \begin{proof}[Proof of Theorem \ref{t:imm}] The theorem follows from the discussion in Subsection \ref{ss:branchmark} in combination with the argument in the proof of Theorem \ref{t:[2,1]} (c). \end{proof} \section{Introduction}\label{sec:intro} To a smooth $n$-manifold $Q$ we can naturally associate a symplectic manifold and a contact manifold: its cotangent bundle $T^*Q$ with the canonical symplectic structure $\omega=dp\wedge dq$, and its unit cotangent bundle (with respect to any Riemannian metric) $S^*Q\subset T^*Q$ with its canonical contact structure $\xi=\ker(p\,dq)$. Moreover, a $k$-dimensional submanifold $K\subset Q$ naturally gives rise to a Lagrangian and a Legendrian submanifold in $T^*Q$ resp.~$S^*Q$: its conormal bundle $L_K=\{(q,p)\in T^*Q\mid q\in K,\;p|_{T_qK}=0\}$ and its unit conormal bundle $\Lambda_K=L_K\cap S^*Q$.
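Both constructions rest on a standard computation: tangent vectors to $L_K$ project to vectors tangent to $K$ under the bundle projection $T^*Q\to Q$, while $p|_{T_qK}=0$ along $L_K$, so \[ (p\,dq)\big|_{L_K}=0,\qquad \omega\big|_{L_K}=d(p\,dq)\big|_{L_K}=0, \] and since $\dim L_K=k+(n-k)=n$, the conormal bundle $L_K$ is Lagrangian; intersecting with $S^*Q$ shows that $\Lambda_K$ is Legendrian for $\xi=\ker(p\,dq)$. For a knot $K\subset{\mathbb{R}}^3$ the conormal bundle is a trivial rank two bundle over $S^1$, so $\Lambda_K$ is a $2$-torus.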
Symplectic field theory (SFT~\cite{EGH}) provides a general framework for associating algebraic invariants to a pair $(M,\Lambda)$ consisting of a contact manifold and a Legendrian submanifold; when applied to $(S^*Q,\Lambda_K)$, these invariants will be diffeotopy invariants of the manifold pair $(Q,K)$. The study of the resulting invariants was first suggested by Y.~Eliashberg. In this paper we concentrate on the case where $K$ is a framed oriented knot in $Q={\mathbb{R}}^3$. Moreover, we consider only the simplest SFT invariant: {\em Legendrian contact homology}. For $Q={\mathbb{R}}^3$, $S^*Q$ is contactomorphic to the $1$-jet space $J^1(S^2)$, for which Legendrian contact homology has been rigorously defined in \cite{EES2}. The Legendrian contact homology of the pair $(S^*{\mathbb{R}}^3,\Lambda_K)$ is called the \textit{knot contact homology} of $K$. We will denote it $H^{\rm contact}_*(K)$. In its most general form (see \cite{EENStransverse,Ngtransverse}), knot contact homology is the homology of a differential graded algebra over the group ring ${\mathbb{Z}}[H_2(S^*{\mathbb{R}}^3,\Lambda_K)] = {\mathbb{Z}}[\lambda^{\pm 1},\mu^{\pm 1},U^{\pm 1}]$, where the images of $\lambda,\mu$ under the connecting homomorphism generate $H_1(\Lambda_K) = H_1(T^2)$ and $U$ generates $H_2(S^*{\mathbb{R}}^3)$. The isomorphism class of $H^{\rm contact}_*(K)$ as a ${\mathbb{Z}}[\lambda^{\pm 1},\mu^{\pm 1},U^{\pm 1}]$-algebra is then an isotopy invariant of the framed oriented knot $K$. The topological content of knot contact homology has been much studied in recent years; see for instance \cite{AENV} for a conjectured relation, which we will not discuss here, to colored HOMFLY-PT polynomials and topological strings. One part of knot contact homology that has an established topological interpretation is its $U=1$ specialization. In \cite{Ng:2b,Ng:1}, the third author constructed a knot invariant called the \textit{cord algebra} $\operatorname{Cord}(K)$, whose definition we will review in Section~\ref{ss:cordalg}. The combined results of \cite{Ng:2b,Ng:1,EENS} then prove that the cord algebra is isomorphic as a ${\mathbb{Z}}[\lambda^{\pm 1},\mu^{\pm 1}]$-algebra to the $U=1$ specialization of degree $0$ knot contact homology. We will assume throughout this paper that we have set $U=1$;\footnote{However, we note that it is an interesting open problem to find a similar topological interpretation of the full degree $0$ knot contact homology as a ${\mathbb{Z}}[\lambda^{\pm 1},\mu^{\pm 1},U^{\pm 1}]$-algebra. } then the result is: \begin{thm}[\cite{Ng:2b,Ng:1,EENS}] $H^{\rm contact}_0(K)\cong \operatorname{Cord}(K)$. \label{thm:cordHC0} \end{thm} It has been noticed by many people that the definition of the cord algebra bears a striking resemblance to certain operations in string topology \cite{CS:1,S:1}. Indeed, Basu, McGibbon, Sullivan, and Sullivan used this observation in \cite{BMSS} to construct a theory called ``transverse string topology'' associated to any codimension $2$ knot $K\subset Q$, and proved that it determines the $U=\lambda=1$ specialization of the cord algebra. In this paper, we present a different approach to knot contact homology and the cord algebra via string topology. Motivated by the general picture sketched by the first and third authors in \cite{CL}, we use string topology operations to define the \textit{string homology $H^{\rm string}_*(K)$} of $K$. 
Then the main result of this paper is: \begin{thm}\label{thm:main} For any framed oriented knot $K\subset{\mathbb{R}}^3$, we have an isomorphism between $U=1$ knot contact homology and string homology in degree $0$, $$ H^{\rm contact}_0(K)\cong H_0^{\rm string}(K), $$ defined by a count of punctured holomorphic disks in $T^{\ast} {\mathbb{R}}^{3}$ with Lagrangian boundary condition $L_K\cup {\mathbb{R}}^{3}$. \end{thm} On the other hand, degree $0$ string homology is easily related to the cord algebra: \begin{prop}\label{prop:str-cord} For any framed oriented knot $K\subset{\mathbb{R}}^3$, we have an isomorphism $$ H^{\rm string}_0(K)\cong \operatorname{Cord}(K). $$ \end{prop} \noindent As a corollary we obtain a new geometric proof of Theorem~\ref{thm:cordHC0}. In fact, we even prove a slight refinement of the usual formulation of Theorem~\ref{thm:cordHC0}, as we relate certain noncommutative versions of the two sides where the coefficients $\lambda,\mu$ do not commute with everything; see Section~\ref{ss:cordalg} for the version of $\operatorname{Cord}(K)$ and Section~\ref{sec:leg} for the definition of $H^{\rm contact}_0(K)$ that we use. Our proof is considerably more direct than the original proof of Theorem~\ref{thm:cordHC0}, which was rather circuitous and went as follows. The third author constructed in \cite{Ng:2a,Ng:1} a combinatorial differential graded algebra associated to a braid whose closure is $K$, and then proved in \cite{Ng:2b,Ng:1} that the degree $0$ homology of this combinatorial complex is isomorphic to $\operatorname{Cord}(K)$ via a mapping class group argument. The second and third authors, in joint work with Etnyre and Sullivan \cite{EENS}, then proved that the combinatorial complex is equal to the differential graded algebra for knot contact homology, using an analysis of degenerations of holomorphic disks to Morse flow trees. Besides providing a cleaner proof of Theorem~\ref{thm:cordHC0}, the string topology formulation also gives a geometric explanation for the somewhat mystifying skein relations that define the cord algebra. Moreover, string homology can be directly related to the group ring ${\mathbb{Z}}\pi$ of the fundamental group $\pi = \pi_1({\mathbb{R}}^3\setminus K)$ of the knot complement: \begin{prop}[see Proposition~\ref{prop:subring}] \label{prop:str-fund} For a framed oriented knot $K \subset {\mathbb{R}}^3$, $H_0^{\rm string}(K) \cong H_0^{\rm contact}(K)$ is isomorphic to the subring of ${\mathbb{Z}}\pi$ generated by $\lambda^{\pm 1}$, $\mu^{\pm 1}$, and $\operatorname{im}(1-\mu)$, where $\lambda,\mu$ are the elements of $\pi$ representing the longitude and meridian of $K$, and $1-\mu$ denotes the map ${\mathbb{Z}}\pi \to {\mathbb{Z}}\pi$ given by left multiplication by $1-\mu$. \end{prop} As an easy consequence of Proposition~\ref{prop:str-fund}, we recover the following result from \cite{Ng:1}: \begin{cor}[see Section~\ref{ss:groupring}] Knot contact homology detects the unknot: if $H_0^{\rm contact}(K) \cong H_0^{\rm contact}(U)$ where $K$ is a framed oriented knot in ${\mathbb{R}}^3$ and $U$ is the unknot with any framing, then $K=U$ as framed oriented knots. \label{cor:unknot} \end{cor} \noindent The original proof of Corollary~\ref{cor:unknot} in \cite{Ng:1} uses the result that the $A$-polynomial detects the unknot \cite{DG}, which in turn relies on results from gauge theory \cite{KM}. 
By contrast, our proof of Corollary~\ref{cor:unknot} uses no technology beyond the Loop Theorem (more precisely, the consequence of the Loop Theorem that the longitude is null-homotopic in ${\mathbb{R}}^3\setminus K$ if and only if $K$ is unknotted). {\bf Organization of the paper. } In Section~\ref{sec:string} we define degree $0$ string homology and prove Proposition~\ref{prop:str-cord}, Proposition~\ref{prop:str-fund} and Corollary~\ref{cor:unknot}. The remainder of the paper is occupied by the proof of Theorem~\ref{thm:main}, beginning with an outline in Section~\ref{sec:roadmap}. After a digression in Section~\ref{sec:holo} on the local behavior of holomorphic functions near corners, which serves as a model for the behavior of broken strings at switches, we define string homology in arbitrary degrees in Section~\ref{sec:string-ref}. The main work in proving Theorem~\ref{thm:main} is an explicit description of the moduli spaces of holomorphic disks in $T^*{\mathbb{R}}^3$ with boundary on $L_K\cup{\mathbb{R}}^3$ and punctures asymptotic to Reeb chords. In Section~\ref{sec:chain} we state the main results about these moduli spaces and show how they give rise to a chain map from Legendrian contact homology to string homology (in arbitrary degrees). Moreover, we show that this chain map respects a natural length filtration. In Section~\ref{sec:iso} we construct a length decreasing chain homotopy and prove Theorem~\ref{thm:main}. The technical results about moduli spaces of holomorphic disks and their compactifications as manifolds with corners are proved in the remaining Sections~\ref{S:mdlisp}, \ref{sec:trans} and~\ref{sec:gluing}. {\bf Extensions. } The constructions in this paper have several possible extensions. Firstly, the definition of string homology and the construction of a homomorphism from Legendrian contact homology to string homology in degree zero work the same way for a knot $K$ in an arbitrary $3$-manifold $Q$ instead of ${\mathbb{R}}^3$ (the corresponding sections are actually written in this more general setting), and more generally for a codimension $2$ submanifold $K$ of an arbitrary manifold $Q$.\footnote{ In the presence of contractible closed geodesics in $Q$, this will require augmentations by holomorphic planes in $T^*Q$, see e.g.~\cite{CL}.} The fact that the ambient manifold is ${\mathbb{R}}^3$ is only used to obtain a certain finiteness result in the proof that this map is an isomorphism (see Remark~\ref{rem:R3}). If this result can be generalized, then Theorem~\ref{thm:main} will hold for arbitrary codimension $2$ submanifolds $K\subset Q$. Secondly, for knots in $3$-manifolds, the homomorphism from Legendrian contact homology to string homology is actually constructed in arbitrary degrees. Proving that it is an isomorphism in arbitrary degrees will require analyzing codimension three phenomena in the space of strings with ends on the knot, in addition to the codimension one and two phenomena described in this paper. \section*{Acknowledgments} We thank Chris Cornwell, Tye Lidman, and especially Yasha Eliashberg for stimulating conversations. This project started when the authors met at the Workshop ``SFT 2'' in Leipzig in August 2006, and the final technical details were cleaned up when we met during the special program on ``Symplectic geometry and topology'' at the Mittag-Leffler institute in Djursholm in the fall of 2015. 
We would like to thank the sponsors of these programs for the opportunities to meet, as well as for the inspiring working conditions during these events. The work of KC was supported by DFG grants CI 45/2-1 and CI 45/5-1. The work of TE was supported by the Knut and Alice Wallenberg Foundation and by the Swedish Research Council. The work of JL was supported by DFG grant LA 2448/2-1. The work of LN was supported by NSF grant DMS-1406371 and a grant from the Simons Foundation (\# 341289 to Lenhard Ng). Finally, we thank the referee for suggesting numerous improvements.
\section{Proof of the isomorphism in degree zero}\label{sec:iso} In the previous section we have constructed a chain map $\Phi:(C_*(\mathcal{R}),\partial_\Lambda)\to (C_{\ast}(\Sigma),D)$, where $D=\partial+\delta_{Q}+\delta_{N}$. In this section we finish the proof of Theorem~\ref{thm:main} by showing that the induced map $\Phi_*:H_0(\mathcal{R},\partial_\Lambda)\to H_0(\Sigma,D)$ in degree zero is an isomorphism. Whereas the results in the previous section hold for any $3$-manifold $Q$ with a metric of nonpositive curvature which is convex at infinity, in this section we need to restrict to the case $Q={\mathbb{R}}^3$ with its Euclidean metric. This restriction will allow us to obtain crucial control over the straightening procedure for $Q$-strings described in Proposition~\ref{prop:pl-lin} (see the comment in Remark~\ref{rem:R3} below).
As a first step, we will slightly extend the definition of broken strings to include piecewise linear $Q$-strings. A relatively simple approximation result will show that the inclusion of broken strings with piecewise linear $Q$-strings into all broken strings induces an isomorphism on string homology in degree 0. The central piece of the argument will then consist of deforming the complex of broken strings with piecewise linear $Q$-strings into the subcomplex of those with {\em linear} $Q$-strings. It is important that both of these reduction steps can be done {\em preserving the length filtration on $Q$-strings}. The final step of the argument then consists of comparing the contact homology $H_0(\mathcal{R},\partial_\Lambda)$ with the homology of the chain complex of broken strings with linear $Q$-strings. At this stage, we will use the length filtrations to reduce to the comparison of homology in degrees 0 and 1 in small length windows containing at most one critical value.
\subsection{Approximation by piecewise linear $Q$-strings}\label{ss:pl} In the following we enlarge the space of broken $C^m$-strings $\Sigma$, keeping the same notation, to allow $Q$-strings to be {\em piecewise $C^m$}. Here a $Q$-string $s_{2i}:[a_{2i-1},a_{2i}]\to Q$ is called piecewise $C^m$ if there exists a subdivision $a_{2i-1}=b_0<b_1<\cdots<b_r=a_{2i}$ such that the restriction of $s_{2i}$ to each subinterval $[b_{j-1},b_j]$ is $C^m$. For a generic $d$-chain $S:\Delta_d\to\Sigma^\ell$ ($d=0,1,2$) we require that the number of subdivision points on each $Q$-string is constant over the simplex $\Delta_d$. The subdivision points can vary smoothly over $\Delta_d$ but have to remain distinct. If for some subdivision point $b_j$ the two $C^m$-strings meeting at $b_j$ fit together in a $C^m$-fashion for all $\lambda\in\Delta_d$, then we identify $S$ with the generic $d$-chain obtained by removing the subdivision point $b_j$.
We allow $Q$-strings in a generic $d$-chain $S$ to meet the knot $K$ at a subdivision point $b_j$, provided at such a point the derivatives from both sides satisfy the genericity conditions in Definition~\ref{def:generic-chain}. If this occurs for some parameter value $\lambda^*\in\Delta_2$ in a generic $2$-chain, then we require in addition that the corresponding $Q$-string meets $K$ at the subdivision point $b_j(\lambda)$ for all $\lambda$ in the component of $\lambda^*$ in the domain $M_{\delta_Q}$ of $\delta_QS$ defined in Section~\ref{ss:string-op}. These conditions ensure that the operator $D=\partial+\delta_{Q}+\delta_{N}$ extends to generic chains of piecewise $C^m$ strings satisfying the relations in Proposition~\ref{prop:string-relations}.
The subspace $\Sigma_{\rm pl}\subset\Sigma$ of broken strings whose $Q$-strings are {\em piecewise linear} gives rise to an inclusion of a $D$-subcomplex \begin{equation}\label{eq:ipl} C_*(\Sigma_{\rm pl})\stackrel{i_{\rm pl}}\hookrightarrow C_*(\Sigma). \end{equation} For this to hold, we choose the $Q$-spikes inserted under the map $\delta_N$ to be degenerate $3$-gons, i.e., short segments orthogonal to the knot traversed back and forth. Then $C_*(\Sigma_{\rm pl})$ becomes a $D$-subcomplex.
We will also consider the subspace $\Sigma_{\rm lin} \subset \Sigma_{\rm pl}$ of broken closed strings whose $Q$-strings are (essentially) {\em linear}: any two points $x_1,x_2 \in K$ determine a unique line segment $[x_1,x_2]$ in ${\mathbb{R}}^3$ connecting them. For technical reasons, special care has to be taken when such a linear $Q$-string becomes very short. Indeed, near the diagonal $\Delta \subset K \times K$ we deform the segments to piecewise linear strings with one corner in such a way that at each point of the diagonal, instead of a segment of length zero we have a degenerate 3-gon as above, i.e., a short spike in the direction of the curvature of the knot (which we assume vanishes nowhere). Now $\Sigma_{\rm lin} \subset \Sigma_{\rm pl}$ consists of all broken closed strings whose $Q$-strings are constant speed parametrizations of such (possibly deformed) segments. In this way, \begin{equation}\label{eq:ilin} C_*(\Sigma_{\rm lin})\stackrel{i_{\rm lin}}\hookrightarrow C_*(\Sigma_{\rm pl}) \end{equation} will be an inclusion of a $D$-subcomplex.
Recall from Section~\ref{ss:length-filt} that these complexes are filtered by the length $L(\beta)$, i.e.~the maximum of the total length of $Q$-strings over all parameter values of the chain, where in the length we do not count $Q$-spikes. With these notations, we have the following approximation result.
\begin{prop}\label{prop:pl} There exist maps $$ {\mathbb{F}}_0:C_0(\Sigma) \to C_0(\Sigma_{\rm pl}),\qquad {\mathbb{F}}_1:C_1(\Sigma) \to C_1(\Sigma_{\rm pl}) $$ and $$ {\mathbb{H}}_0:C_0(\Sigma) \to C_1(\Sigma),\qquad {\mathbb{H}}_1:C_1(\Sigma) \to C_2(\Sigma) $$ satisfying with the map $i_{\rm pl}$ from~\eqref{eq:ipl}: \begin{enumerate} \item ${\mathbb{F}}_0i_{\rm pl}={\rm 1\mskip-4mu l}$ and $D{\mathbb{H}}_0= i_{\rm pl}{\mathbb{F}}_0-{\rm 1\mskip-4mu l}$; \item ${\mathbb{F}}_1i_{\rm pl}={\rm 1\mskip-4mu l}$ and ${\mathbb{H}}_0D + D{\mathbb{H}}_1 = i_{\rm pl}{\mathbb{F}}_1-{\rm 1\mskip-4mu l}$; \item ${\mathbb{F}}_0$, ${\mathbb{H}}_0$, ${\mathbb{F}}_1$ and ${\mathbb{H}}_1$ are (not necessarily strictly) length-decreasing.
\end{enumerate} \end{prop} \begin{proof} We first define ${\mathbb{F}}_0$ and ${\mathbb{H}}_0$. Given $\beta\in C_0(\Sigma)$, we pick finitely many subdivision points $p_i$ on the $Q$-strings in $\beta$ (which include all end points) and define ${\mathbb{H}}_0\beta$ to be the straight line homotopy from $\beta$ to the broken string ${\mathbb{F}}_0\beta$ whose $Q$-strings are the piecewise linear strings connecting the $p_i$. We choose the subdivision so fine that the $Q$-strings in ${\mathbb{H}}_0\beta$ remain transverse to $K$ at the end points and do not meet $K$ in the interior. The $N$-strings are just slightly rotated near the end points to match the new $Q$-strings, without creating intersections with $K$. Then ${\mathbb{H}}_0\beta$ is a generic $1$-chain in $\Sigma$ satisfying $$ \partial{\mathbb{H}}_0\beta={\mathbb{F}}_0\beta-\beta,\qquad \delta_Q{\mathbb{H}}_0\beta=\delta_N{\mathbb{H}}_0\beta=0. $$ If $\beta$ is already piecewise linear we include the corner points in the subdivision to ensure ${\mathbb{F}}_0\beta=\beta$, so that condition (i) holds. To define ${\mathbb{F}}_1$ and ${\mathbb{H}}_1$, consider a generic $1$-simplex $\beta:[0,1]\to\Sigma$. We pick finitely many smooth paths of subdivision points $p_i(\lambda)$ on the $Q$-strings in $\beta(\lambda)$ (which include all end points) and define ${\mathbb{H}}_1\beta$ to be the straight line homotopy from $\beta$ to the $1$-simplex ${\mathbb{F}}_1\beta$ whose $Q$-strings are the piecewise linear strings connecting the $p_i(\lambda)$. Here we choose the $p_i(\lambda)$ to agree with the ones in the definition of ${\mathbb{H}}_0$ at $\lambda=0,1$ as well as at the finitely many values $\lambda_j$ where some $Q$-string intersects the knot in its interior (so at such $\lambda_j$ the intersection point with $K$ is included among the $p_i(\lambda_j)$). Note that for this we may first have to add new subdivision points on the $Q$-strings on $\beta(\lambda)$ for $\lambda=0,1,\lambda_j$, which is allowed due to the identification above. Moreover, we choose the subdivision so fine that the $Q$-strings in ${\mathbb{H}}_1\beta$ remain transverse to $K$ at the end points and meet $K$ in the interior exactly at the values $\lambda_j$ above. The $N$-strings are just slightly rotated near the end points to match the new $Q$-strings, without creating new intersections with $K$ besides the ones already present in $\beta$ that are continued along the homotopy. Then ${\mathbb{H}}_1\beta$ is a generic $2$-chain in $\Sigma$ satisfying $$ (\partial{\mathbb{H}}_1+{\mathbb{H}}_0\partial)\beta={\mathbb{F}}_1\beta-\beta,\qquad (\delta_Q{\mathbb{H}}_1+{\mathbb{H}}_0\delta_Q)\beta=(\delta_N{\mathbb{H}}_1+{\mathbb{H}}_0\delta_N)\beta=0. $$ If $\beta$ is already piecewise linear we include the corner points in the subdivision to ensure ${\mathbb{F}}_1\beta=\beta$, so that condition (ii) holds. \end{proof} \subsection{Properties of triangles for generic knots}\label{ss:triangles} In our arguments, we will assume that the knot $K$ is generic. In particular, we will use that it has the properties listed in the following lemma. \begin{lemma}\label{lem:generic.knots} A generic knot $K\subset{\mathbb{R}}^3$ has the following properties: \begin{enumerate} \item \label{planes} There exists an $S\in{\mathbb{N}}$ such that each plane intersects $K$ at most $S$ times. \item The set $T \subset K$ of points whose tangent lines meet the knot again is finite (and each such tangent line meets the knot in exactly one other point). 
\end{enumerate} \end{lemma} \begin{proof} We prove part (i). For a generic knot $K$ parametrized by $\gamma:S^1={\mathbb{R}}/L{\mathbb{Z}}\to{\mathbb{R}}^3$, the first four derivatives $(\dot\gamma,\gamma^{(2)},\gamma^{(3)},\gamma^{(4)})$ span ${\mathbb{R}}^3$ at each $t\in S^1$. (For this, use the jet transversality theorem~\cite[Chapter 3]{Hir} to make the corresponding map $S^1\to({\mathbb{R}}^3)^4$ transverse to the codimension two subset consisting of quadruples of vectors that lie in a plane.) It follows that there exists an $\varepsilon>0$ such that $\gamma$ meets each plane at most $4$ times on a time interval of length $\varepsilon$. (Otherwise, taking a limit of quintuples of times mapped into the same plane whose mutual distances shrink to zero, we would find in the limit an order four tangency of $\gamma$ to a plane, which we have excluded.) Hence $\gamma$ can meet each plane at most $4L/\varepsilon$ times. The proof of part (ii) is contained in the proof of Lemma~\ref{lem:2-gons}(b) below. It relies on choosing $K$ such that its curvature vanishes nowhere. \end{proof} \comment{ In part (ii) we introduced the finite subset $T \subset K$ of points whose tangent lines meet the knot again. We denote by $B \subset K$ the subset of these ``other points'' on tangent lines to $K$. Standard transversality arguments yield \begin{lemma} Let $\sigma:[0,1] \to \Sigma^\ell$ be a $1$-simplex. With an arbitrarily small perturbation, we can achieve that for each $Q$-string $s_{2i}$ of $\sigma$ the evaluation map $$ \lambda \mapsto \bigl(s_{2i}(a_{2i-1}(\lambda)),s_{2i}(a_{2i}(\lambda))\bigr) $$ is transverse to the diagonal in $K \times K$ as well as to the sets $T \times K \cup K \times T$ and $B \times K \cup K \times B$ of end points of segments which are tangent to the knot at one end point. \hfill$\square$ \end{lemma} } Now we consider the space of triangles in ${\mathbb{R}}^3$ with pairwise distinct corners $x_1,x_2,x_3$ such that $x_1$ and $x_3$ lie on the knot $K$. Using an arclength parametrization $\gamma:S^1={\mathbb{R}}/L{\mathbb{Z}}\to K$ we identify this space with the open subset $$ \mathcal{T}=\{(s,x_2,r)\in S^1\times{\mathbb{R}}^3\times S^1\mid x_1=\gamma(s),\;x_2,\;x_3=\gamma(r)\text{ are distinct}\}. $$ We parametrize each triangle $[x_1,x_2,x_3]$ by the map (see Figure~\ref{fig:triangle-par}) $$ [0,1]^2\to{\mathbb{R}}^3,\qquad (u,t)\mapsto (1-t)x_1+t\bigl((1-u)x_2+ux_3\bigr). $$ \begin{figure} \labellist \small\hair 2pt \pinlabel ${\color{blue} x_1 = \gamma(s)}$ at 60 109 \pinlabel ${\color{blue} v_2}$ at 145 182 \pinlabel ${\color{blue} x_2}$ at 234 235 \pinlabel ${\color{blue} v_3}$ at 188 79 \pinlabel ${\color{blue} x_3 = \gamma(r)}$ at 328 56 \pinlabel ${(1-u)x_2+u x_3}$ at 315 172 \pinlabel ${(1-t)x_1+t((1-u)x_2+u x_3)}$ at 127 27 \endlabellist \centering \includegraphics[width=0.8\textwidth]{figures/triangle-par-new} \caption{ Parametrization of a triangle. } \label{fig:triangle-par} \end{figure} \begin{lemma}\label{lem:triangles} For a generic $1$-parameter family of triangles $\beta:[0,1]\to\mathcal{T}$, $\lambda\mapsto(s^\lambda,x_2^\lambda,r^\lambda)$ the following holds. (a) The evaluation map $$ {\rm ev}_\beta:[0,1]^3\to{\mathbb{R}}^3,\qquad (\lambda,u,t)\mapsto (1-t)x_1^\lambda+t\bigl((1-u)x_2^\lambda+ux_3^\lambda\bigr) $$ is transverse to $K$ on its interior, where we have set $x_1^\lambda=\gamma(s^\lambda)$ and $x_3^\lambda=\gamma(r^\lambda)$. 
(b) The map $(\lambda,u)\mapsto \frac{\partial{\rm ev}_\beta}{\partial t}(\lambda,u,0)$ meets the tangent bundle to $K$ transversely in finitely many points. At these points the triangle is tangent to the knot at $x_1^\lambda$ but not contained in its osculating plane.
(c) The points in (b) compactify the set ${\rm ev}_\beta^{-1}(K) \cap [0,1]^2 \times (0,1]$ to an embedded curve in $[0,1]^3$ transverse to the boundary. Its image in $[0,1]^2$ under the projection $(\lambda,u,t)\mapsto(\lambda,u)$ is an immersed curve with transverse self-intersections. \end{lemma}
\begin{proof} Part (a) follows from standard transversality arguments.
For part (b) we introduce $$ v_2:=x_2-x_1,\qquad v_3:=x_3-x_1,\qquad \nu:=\frac{v_2\times v_3}{|v_2\times v_3|}. $$ Thus $v_2,v_3$ are tangent to the sides of the triangle at $x_1$ and $\nu$ is a unit normal vector to the triangle. So the space of triangles that are tangent to the knot at $x_1$ is the zero set of the map $$ F:\mathcal{T}\to{\mathbb{R}},\qquad (s,x_2,r)\mapsto\langle\dot\gamma(s),\nu\rangle = \frac{\langle v_2,v_3\times\dot\gamma(s)\rangle}{|v_2\times v_3|}. $$ The last expression shows that along the zero set the variation of $F$ in direction $x_2$ (or equivalently $v_2$) is nonzero provided that $v_3\times\dot\gamma(s)\neq 0$. So $F^{-1}(0)$ is a transversely cut out hypersurface in $\mathcal{T}$ outside the set $\mathcal{T}_0$ where $v_3=\gamma(r)-\gamma(s)$ and $\dot\gamma(s)$ are collinear. By Lemma~\ref{lem:generic.knots}(ii) the set $\mathcal{T}_0$ has codimension $2$. Hence a generic curve $\beta:[0,1]\to\mathcal{T}$ avoids the set $\mathcal{T}_0$ and intersects $F^{-1}(0)$ transversely, which implies the first statement in (b). The second statement in (b) follows similarly from the fact that the set of triangles contained in the osculating plane at $x_1$ has codimension $2$ in $\mathcal{T}$.
For part (c), consider a point $(\lambda_0,u_0)$ as in (b). To simplify notation, let us shift the parameter interval such that $\lambda_0=0$ is an interior point. Then with the obvious notation $\nu^\lambda$ etc.\ the following conditions hold at $\lambda=0$: $$ a:=\langle\dot\gamma(s^0),\nu^0\rangle = 0,\quad b:=\langle\ddot\gamma(s^0),\nu^0\rangle \neq 0,\quad c:=\frac{d}{d\lambda}\bigl|_{\lambda=0}\langle\dot\gamma(s^\lambda),\nu^\lambda\rangle \neq 0. $$ Here the first condition expresses the fact that the triangle is tangent to the knot at $x_1^0$, the second one that the triangle is not contained in the osculating plane, and the third one the transversality of the map in (b) to the tangent bundle of $K$. Intersections of $K$ with triangles $\beta(\lambda)$ for $\lambda$ close to zero can be written in the form $\gamma(s^\lambda+s)$ with $s=O(\lambda)$ and must satisfy the equation $$ 0 = \Bigl\langle \gamma(s^\lambda+s)-\gamma(s^\lambda),\nu^\lambda\Bigr\rangle = \Bigl\langle s\dot\gamma(s^\lambda)+\frac{1}{2}s^2\ddot\gamma(s^\lambda) + O(s^3),\nu^\lambda\Bigr\rangle.
$$ Ignoring the trivial solution $s=0$, we divide by $s$ and obtain, using $s=O(\lambda)$: \begin{align*} 0 &= \Bigl\langle \dot\gamma(s^\lambda)+\frac{1}{2}s\ddot\gamma(s^\lambda) + O(s^2),\nu^\lambda\Bigr\rangle \cr &= \bigl\langle\dot\gamma(s^\lambda),\nu^\lambda\bigr\rangle + s\Bigl[\frac{1}{2}\bigl\langle\ddot\gamma(s^\lambda),\nu^\lambda\bigr\rangle + O(s)\Bigr]\cr &= a + \lambda\Bigl[c+O(\lambda)\Bigr] + s\Bigl[\frac{1}{2}b+O(\lambda)\Bigr]. \end{align*} Since $a=0$ and $b,c$ are nonzero, this equation has for each $\lambda$ a unique solution $s$ of the form $$ s = -\frac{2c}{b}\lambda + O(\lambda^2). $$ Now recall that by hypothesis $\dot\gamma(s^0)$ is a multiple of $(1-u^0)v_2^0+u^0v_3^0$. If it is a positive (resp.~negative) multiple, then only solutions with $s>0$ (resp.~$s<0$) will lie in the triangle. So in either case the solutions describe a curve with boundary and part (c) follows. \end{proof}
\begin{remark}\label{rem:trianges-par} Lemma~\ref{lem:triangles} shows that, given a generic $1$-parameter family of triangles $\beta:[0,1]\to\mathcal{T}$, the associated $2$-parameter family $(\lambda,u)\mapsto{\rm ev}_\beta(\lambda,u,\cdot)$ can be reparametrized in $t$ to look like the $Q$-strings in a generic $2$-chain of broken strings. To see the last condition (2e) in Definition~\ref{def:generic-chain}, consider a parameter value $(\lambda,u)$ as in Lemma~\ref{lem:triangles}(b). Since the triangle is not contained in the osculating plane at $x_1^\lambda$, the linear string $t\mapsto{\rm ev}_\beta(\lambda,u,t)$ deviates quadratically from the knot, so its projection normal to the knot has nonvanishing second derivative at $t=0$. Hence we can reparametrize it to make its second derivative vanish and its third derivative nonzero as required in condition (2e). We will ignore these reparametrizations in the following. \end{remark}
\begin{remark} Lemma~\ref{lem:triangles} remains true (with a simpler proof) if in the definition of the space of triangles $\mathcal{T}$ we allow $x_3$ to move freely in ${\mathbb{R}}^3$ rather than only on the knot; this situation will also occur in the shortening process in the next subsection. Let us emphasize that in the space $\mathcal{T}$ we require the points $x_1,x_2,x_3$ to be distinct. Now in a generic $1$-parameter family of triples $(x_1,x_2,x_3)$ with $x_1,x_3\in K$ the points $x_1,x_3$ may meet for some parameter values, so this situation is not covered by Lemma~\ref{lem:triangles}. See Remark~\ref{rem:crossing} below on how to deal with this situation. \end{remark}
\subsection{Reducing piecewise linear $Q$-strings to linear ones} In this subsection we deform chains in $\Sigma_{\rm pl}$ to chains in $\Sigma_{\rm lin}$, not increasing the length of $Q$-strings in the process.
The main result of this subsection is \begin{prop}\label{prop:pl-lin} For a generic knot $K$ there exist maps $$ {\mathbb{F}}_0:C_0(\Sigma_{\rm pl}) \to C_0(\Sigma_{\rm lin}),\qquad {\mathbb{F}}_1:C_1(\Sigma_{\rm pl}) \to C_1(\Sigma_{\rm lin}) $$ and $$ {\mathbb{H}}_0:C_0(\Sigma_{\rm pl}) \to C_1(\Sigma_{\rm pl}),\qquad {\mathbb{H}}_1:C_1(\Sigma_{\rm pl}) \to C_2(\Sigma_{\rm pl}) $$ satisfying with the map $i_{\rm lin}$ from~\eqref{eq:ilin}: \begin{enumerate} \item ${\mathbb{F}}_0i_{\rm lin}={\rm 1\mskip-4mu l}$ and $D{\mathbb{H}}_0= i_{\rm lin}{\mathbb{F}}_0-{\rm 1\mskip-4mu l}$; \item ${\mathbb{F}}_1i_{\rm lin}={\rm 1\mskip-4mu l}$ and ${\mathbb{H}}_0D + D{\mathbb{H}}_1 = i_{\rm lin}{\mathbb{F}}_1-{\rm 1\mskip-4mu l}$; \item ${\mathbb{F}}_0$, ${\mathbb{H}}_0$, ${\mathbb{F}}_1$ and ${\mathbb{H}}_1$ are (not necessarily strictly) length-decreasing. \end{enumerate} \end{prop}
\begin{proof} We assume that $K$ satisfies the genericity properties in Section~\ref{ss:triangles}. We first construct the maps ${\mathbb{H}}_0$ and ${\mathbb{F}}_0$. For each simplex $\beta \in C_0^{\rm pl}(\Sigma)$ we denote by $M(\beta)$ the total number of corners in the $Q$-strings of $\beta$, not counting the corners in $Q$-spikes (which are by definition $3$-gons). Connecting each corner to the starting point of its $Q$-string, we obtain $M(\beta)$ triangles connecting the various $Q$-strings to the segments between their end points. We define the {\em complexity} of $\beta \in C_0^{\rm pl}(\Sigma)$ to be the pair of nonnegative integers $$ c(\beta) := (M(\beta), I(\beta)), $$ where $I(\beta)$ is the number of interior intersection points of the first triangle with $K$ (we set $I(\beta)=0$ in the case $M(\beta)=0$, i.e. if there are no triangles). Note that by part (\ref{planes}) of Lemma~\ref{lem:generic.knots} we know that $I$ is bounded a priori by a fixed constant $S=S(K)$.
We define the maps ${\mathbb{H}}_0$ and ${\mathbb{F}}_0$ by induction on the lexicographical order on complexities $c(\beta)$. For $c(\beta)=(0,0)$ we set ${\mathbb{F}}_0\beta=\beta$ and ${\mathbb{H}}_0\beta=0$. For the induction step, let $\beta\in C^{\rm pl}_0(\Sigma)$ be a $0$-simplex and assume that ${\mathbb{F}}_0$ and ${\mathbb{H}}_0$ satisfying (i) and (iii) have been defined for all simplices of complexities $c<c(\beta)$. Let the first triangle of $\beta$ have vertices $x_1,x_2,x_3$, where $x_1$ is the starting point of the first $Q$-string which is not a segment, and $x_2$ and $x_3$ are the next two corners on that $Q$-string ($x_3$ might also be the end point). Since there are only finitely many intersections of the knot $K$ with the interior of the triangle (and none with its sides), we can find a segment connecting $x_2$ to a point $x_3'$ on the segment $x_1x_3$ which is so close to $x_3$ that the triangle $x_2x_3'x_3$ does not contain any intersection points with the knot. Let $h\beta\in C_1^{\rm pl}(\Sigma)$ be the $1$-simplex obtained by sweeping the first triangle by the family of segments from $x_1$ to a varying point $(1-u)x_2+ux_3'$ on the segment $[x_2,x_3']$, followed by the segment from that point to $x_3$ and the remaining segments to $x_4$ etc. See Figure~\ref{fig:triangle-new} (the point $y$ and the shaded region play no role here and are included for later use).
\begin{figure}[h] \labellist \small\hair 2pt \pinlabel $x_2$ at 97 339 \pinlabel $\beta$ at 52 208 \pinlabel $x_1$ at 7 9 \pinlabel $x_3'$ at 302 117 \pinlabel $x_3$ at 406 182 \pinlabel $x_4$ at 410 99 \pinlabel ${\color{blue} (1-u)x_2+ux_3'}$ at 308 278 \pinlabel ${\color{blue} f\beta}$ at 218 81 \pinlabel ${\color{red} y}$ at 118 186 \endlabellist \centering \includegraphics[width=0.5\textwidth]{figures/triangle-new2} \caption{Reducing the number of corner points.} \label{fig:triangle-new} \end{figure}
The $N$-string ending at $x_1$ (and if there is one, also the $N$-string starting at $x_3$) is ``dragged along'' without creating intersections with $K$, and all remaining $N$- and $Q$-strings remain unchanged in the process. The $1$-simplex $h\beta$ has boundary $\partial(h\beta)=\beta'-\beta$, where $\beta'$ is the $0$-simplex at the end of the sweep with first segment $[x_1,x_3]$. We define $$ f\beta:= Dh\beta+\beta = \beta'+\delta_Qh\beta+\delta_Nh\beta. $$ By construction we have $\delta_Nh\beta=0$ and $M(\beta')<M(\beta)$, hence $c(\beta')<c(\beta)$. The domain of $\delta_Qh\beta$ consists of those finitely many points where the triangle intersects $K$ in its interior, so that $\delta_Qh\beta$ consists of broken strings with one more $Q$-string (which is linear) and with the same total number of corners as $\beta$. But since the new first triangle is contained in the original first triangle for $\beta$, and one of the intersection points is now the starting point of the new $Q$-string, we have $I(\delta_Qh\beta)<I(\beta)$. Altogether we see that $c(f\beta)<c(\beta)$, so by induction hypothesis ${\mathbb{F}}_0$ and ${\mathbb{H}}_0$ are already defined on $f\beta$. We set $$ {\mathbb{F}}_0\beta:= {\mathbb{F}}_0f\beta \quad \text{\rm and} \quad {\mathbb{H}}_0\beta:= {\mathbb{H}}_0f\beta +h\beta $$ and verify that indeed (using condition (i) on $f\beta$) $$ D{\mathbb{H}}_0\beta = D{\mathbb{H}}_0f\beta +Dh\beta = {\mathbb{F}}_0f\beta -f\beta+f\beta-\beta = {\mathbb{F}}_0\beta-\beta, $$ so condition (i) continues to hold. Condition (iii) holds by induction hypothesis in view of $L(f\beta) \le L(\beta)$ and $L(h\beta)\leq L(\beta)$. Since every $\beta\in C_0^{\rm pl}(\Sigma)$ has finite complexity, this finishes the definition of ${\mathbb{F}}_0$ and ${\mathbb{H}}_0$.
We next construct the maps ${\mathbb{H}}_1$ and ${\mathbb{F}}_1$, following the same strategy. For this, we first extend the notion of complexity $c=(M,I)$ to 1-chains with piecewise linear $Q$-strings. For a $1$-simplex $\beta:[0,1]\to\Sigma^{\rm pl}$, we set $$ M(\beta) := \max_{\lambda\in[0,1]}M(\beta(\lambda)),\qquad I(\beta) := \max_{\lambda\in[0,1]}I(\beta(\lambda)). $$ Note that $I(\beta)$ is still bounded by the constant $S=S(K)$ in Lemma~\ref{lem:generic.knots}. Note also that, according to our definition of chains of piecewise linear strings, the number $M(\beta(\lambda))$ of corner points of $Q$-strings in $\beta(\lambda)$ is actually constant, equal to the maximal number $M(\beta)$. Observe that with this definition of complexity for $1$-chains, the maps $h_0:=h$ and ${\mathbb{H}}_0$ used in the argument for $0$-chains do not increase complexity.
Again our definition of ${\mathbb{F}}_1$ and ${\mathbb{H}}_1$ proceeds by induction on the lexicographic order on complexity. For simplices $\beta\in C_1^{\rm pl}(\Sigma)$ with $M=0$ we set ${\mathbb{F}}_1 \beta=\beta+ {\mathbb{H}}_0D\beta$ and ${\mathbb{H}}_1 \beta=0$. Then (ii) holds by construction, and (iii) holds since ${\mathbb{H}}_0$ and $D$ are length-decreasing.
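Let us spell out why (ii) holds in this base case. With ${\mathbb{H}}_1\beta=0$ the homotopy relation in (ii) reduces to $$ {\mathbb{H}}_0D\beta + D{\mathbb{H}}_1\beta = {\mathbb{H}}_0D\beta = {\mathbb{F}}_1\beta-\beta, $$ which holds by the definition of ${\mathbb{F}}_1\beta$. Moreover, if $\beta$ lies in the image of $i_{\rm lin}$, then every $Q$-string of every simplex of $D\beta$ is again a segment or a $Q$-spike (taking boundaries or splitting at an interior intersection point with $K$ produces segments, and the $Q$-spikes inserted by $\delta_N$ do not contribute corners), so each such simplex has complexity $(0,0)$; hence ${\mathbb{H}}_0D\beta=0$ and ${\mathbb{F}}_1\beta=\beta$, which gives ${\mathbb{F}}_1i_{\rm lin}={\rm 1\mskip-4mu l}$ on such simplices.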
For the induction step, let $\beta\in C_1^{\rm pl}(\Sigma)$ be a $1$-simplex, and assume that ${\mathbb{F}}_1$ and ${\mathbb{H}}_1$ satisfying (ii) and (iii) have been defined for all $1$-simplices of complexity $c<c(\beta)$. Using a parametrized version of sweeping the first triangle, we obtain a $2$-chain $h_1\beta\in C_2^{\rm pl}(\Sigma)$. By construction its boundary satisfies $\partial h_1\beta+h_0\partial\beta=\beta'-\beta$, where $\beta'$ is the $1$-simplex at the end of the sweep with first segment $[x_1,x_3]$, see Figure~\ref{fig:Hbeta}.
\begin{figure}[h] \labellist \small\hair 2pt \pinlabel $0$ at 12 297 \pinlabel $h_1\partial\beta$ at 1 160 \pinlabel $1$ at 12 64 \pinlabel $u$ at 12 32 \pinlabel $\beta$ at 417 297 \pinlabel $h_1\partial\beta$ at 410 179 \pinlabel $\beta'$ at 412 40 \pinlabel $Z\beta$ at 100 194 \pinlabel ${\color{blue} \delta_N\beta}$ at 206 297 \pinlabel ${\color{blue} \delta_N h_1 \beta}$ at 105 125 \pinlabel ${\color{blue} \delta_N h_1 \beta}$ at 235 169 \pinlabel ${\color{red} \delta_Q \beta}$ at 134 297 \pinlabel ${\color{red} \delta_Q \beta}$ at 294 297 \pinlabel ${\color{red} \delta_Q h_1 \beta}$ at 88 250 \pinlabel ${\color{red} \delta_Q h_1 \beta}$ at 283 228 \pinlabel ${\color{red} \delta_Q h_1 \partial \beta}$ at 418 209 \pinlabel ${\color{red} \delta_Q h_1 \partial \beta}$ at 418 151 \pinlabel ${\color{red} \delta_Q h_1 \beta}$ at 312 139 \pinlabel ${\color{red} \delta_Q \beta'}$ at 279 40 \endlabellist \centering \includegraphics[width=0.9\textwidth]{figures/Hbeta-new} \caption{The domain of $h_1\beta$.} \label{fig:Hbeta} \end{figure}
We now define \begin{align*} f_1\beta :&= Dh_1\beta + h_0D\beta + \beta \cr &= \beta' + (\delta_Qh_1+h_0\delta_Q)\beta + (\delta_Nh_1+h_0\delta_N)\beta. \end{align*} We claim that $c(f_1\beta)<c(\beta)$. To see this, we need to show that the three terms on the right hand side of the last displayed equation have complexity lower than $c(\beta)$. For $\beta'$ this holds because its $Q$-strings have one fewer corner, i.e.~$M(\beta')<M(\beta)$. The domain of $(\delta_Qh_1+h_0\delta_Q)\beta$ consists of the finitely many curves in which the first triangle intersects $K$ at an interior point $y$, so that $(\delta_Qh_1+h_0\delta_Q)\beta$ consists of broken strings with one more $Q$-string (which is linear) and with the same total number of corners as $\beta$. But {\em since the new first triangle (the shaded region in Figure~\ref{fig:triangle-new}) is contained in the original first triangle} for each parameter value in $\beta$, and one of the intersection points is now the starting point of the new $Q$-string, we have $I((\delta_Qh_1+h_0\delta_Q)\beta)<I(\beta)$. The domain of $(\delta_Nh_1+h_0\delta_N)\beta$ consists of the finitely many straight line segments $[u,1]\times\{\lambda\}$ emanating from the parameter values $(u,\lambda)$ corresponding to the tangencies of the triangle $[x_1,x_2,x_3]$ to the knot at $x_1$, see Figure~\ref{fig:Hbeta} where one such point of tangency is shown as $Z\beta$. So $(\delta_Nh_1+h_0\delta_N)\beta$ consists of broken strings with one more $Q$-spike and with the same total number of corners as $\beta$. But since the new triangle with corners $x_1,(1-u)x_2+ux_3',x_3$ is contained in the original first triangle at parameter value $\lambda$, and one of the intersection points with the knot is the corner point $x_1$ of the new triangle (which does not count towards $I$), we have $I((\delta_Nh_1+h_0\delta_N)\beta)<I(\beta)$ and the claim is proved.
According to the claim, ${\mathbb{F}}_1$ and ${\mathbb{H}}_1$ are defined on $f_1\beta$ and we set $$ {\mathbb{F}}_1\beta := {\mathbb{F}}_1 f_1\beta \quad \text{and} \quad {\mathbb{H}}_1\beta :={\mathbb{H}}_1f_1\beta +h_1\beta. $$ To distinguish the proposed extensions from the maps given by induction hypothesis, we temporarily call the extended versions $\mathcal{H}_1$ and $\mathcal{F}_1$, so we can write $$ \mathcal{F}_1 := {\mathbb{F}}_1 f_1 \quad \text{and} \quad \mathcal{H}_1 :={\mathbb{H}}_1f_1 +h_1 $$ without ambiguity. Recall also that in this notation $\mathcal{H}_0={\mathbb{H}}_0 f_0+h_0$. Now using $f_1=h_0D+Dh_1+{\rm 1\mskip-4mu l}$ we compute \begin{align*} D\mathcal{H}_1 +\mathcal{H}_0D &= D{\mathbb{H}}_1 f_1 +Dh_1+{\mathbb{H}}_0 f_0D +h_0D\\ &= ({\mathbb{F}}_1f_1-f_1-{\mathbb{H}}_0Df_1) + (f_1-{\rm 1\mskip-4mu l}-h_0D) + {\mathbb{H}}_0f_0D+h_0D\\ &= \mathcal{F}_1-{\rm 1\mskip-4mu l} +{\mathbb{H}}_0(f_0D-Df_1). \end{align*} Using $f_1=h_0D+Dh_1+{\rm 1\mskip-4mu l}$ again and $f_0=Dh_0+{\rm 1\mskip-4mu l}$, we find $Df_1=Dh_0D+D = (Dh_0+{\rm 1\mskip-4mu l})D = f_0D$, so that the last term in the displayed equation vanishes and the extensions $\mathcal{H}_1,\mathcal{F}_1$ have the required properties. This completes the induction step and hence the proof of Proposition~\ref{prop:pl-lin}. \end{proof}
\begin{remark}\label{rem:crossing} If in a $1$-simplex $\beta$ as in the preceding proof the third point $x_3$ of the first triangle is the end point of the corresponding $Q$-string and thus constrained to lie on the knot, then the points $x_1$ and $x_3$ can cross each other for some parameter values $\lambda$ in the chain. The homotopy $h_1\beta$ then shrinks the corresponding degenerate triangle at parameter $\lambda$ to a constant $Q$-string, which according to our convention from Section~\ref{ss:pl} we interpret as a linear $Q$-spike in the direction of the degenerate triangle. Incidentally, the segment $[x_2,x_3]$ is always short throughout the shortening process, so if $x_1$ and $x_3$ agree then the triangle is already a linear $Q$-spike without further shrinking. \end{remark}
\begin{remark} Definition~\ref{def:spike} implies that if a $Q$-string in $\beta$ in the preceding proof is a (piecewise linear) $Q$-spike, then it never intersects the knot in its interior and remains a $Q$-spike throughout the shortening process (which ends with a degenerate triangle as in Remark~\ref{rem:crossing}). This property ensures that ${\mathbb{H}}_0$ and ${\mathbb{H}}_1$ indeed do not increase length, which does not count $Q$-spikes. \end{remark}
\begin{remark}\label{rem:R3} The proof relies crucially on the (trivial) fact that the new triangle $[y,(1-u)x_2+ux_3',x_3]$ (the shaded region in Figure~\ref{fig:triangle-new}) obtained by splitting the $Q$-string at an intersection point $y$ with $K$ is contained in the old triangle $[x_1,x_2,x_3]$. {\em This is the only place where we use that the metric is Euclidean}; the rest of the proof works equally well for any metric of nonpositive curvature. \end{remark}
\subsection{Properties of linear $Q$-strings for generic knots} Now we consider the space of {\em $2$-gons}, i.e., straight line segments starting and ending on the knot. This space is canonically identified with $K\times K$ by associating to each $2$-gon its endpoints on $K$.
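As an illustration of what to expect (not needed for the arguments), note that a $2$-gon with distinct endpoints is a critical point of the distance function on $K\times K$ exactly when it meets $K$ orthogonally at both endpoints; these are the binormal chords appearing in Lemma~\ref{lem:2-gons} below. For a round planar circle, which is not generic in this sense, the binormal chords form a whole circle of diameters; it is this kind of degeneracy that the genericity assumption on $K$ rules out.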
We consider the squared distance function $$ E:K\times K\to{\mathbb{R}},\qquad E(x,y)=\frac{1}{2}|x-y|^2. $$ \begin{lemma}\label{lem:2-gons} For a generic knot $K\subset{\mathbb{R}}^3$ the following holds for the space $K\times K$ of $2$-gons (see Figure~\ref{fig:2-gons}). \begin{figure}[h] \labellist \small\hair 2pt \pinlabel $0$ at 3 3 \pinlabel $L$ at 268 3 \pinlabel $L$ at 3 268 \pinlabel $s$ at 317 19 \pinlabel $t$ at 17 323 \pinlabel $K\times K$ at 294 288 \pinlabel ${\color{blue} -\nabla E}$ at 72 207 \pinlabel ${\color{red} S_Q}$ at 124 230 \pinlabel ${\color{red} S_Q}$ at 231 129 \endlabellist \centering \includegraphics[width=0.5\textwidth]{figures/2-gons-new} \caption{The space of $2$-gons.} \label{fig:2-gons} \end{figure} (a) $E$ attains its minimum $0$ along the diagonal, which is a Bott nondegenerate critical manifold; the other critical points are nondegenerate binormal chords of index $0,1,2$. (b) The subset $S_Q\subset K\times K$ of $2$-gons meeting $K$ in their interior is a $1$-dimensional submanifold with boundary consisting of finitely many $2$-gons tangent to $K$ at one endpoint, and with finitely many transverse self-intersections consisting of finitely many $2$-gons meeting $K$ twice in their interior. (c) The negative gradient $-\nabla E$ is not pointing into $S_Q$ at the boundary points. \end{lemma} \begin{proof} (a) In terms of an arclength parametrization $\gamma$ of $K$ we write the energy as a function $E(s,t)=\frac{1}{2}|\gamma(s)-\gamma(t)|^2$. We compute its partial derivatives \begin{equation}\label{eq:partial-E} \begin{gathered} \frac{\partial E}{\partial s}= \langle\gamma(s)-\gamma(t),\dot\gamma(s)\rangle ,\qquad \frac{\partial E}{\partial t}= \langle\gamma(t)-\gamma(s),\dot\gamma(t)\rangle ,\cr \frac{\partial^2E}{\partial s^2}= |\dot\gamma(s)|^2+\langle\gamma(s)-\gamma(t),\ddot\gamma(s)\rangle ,\qquad \frac{\partial^2E}{\partial s\partial t}= -\langle\dot\gamma(s),\dot\gamma(t)\rangle ,\cr \frac{\partial^2E}{\partial t^2}=|\dot\gamma(t)|^2+\langle\gamma(t)-\gamma(s),\ddot\gamma(t)\rangle. \end{gathered} \end{equation} We see that critical points of $E$ are points on the diagonal $s=t$ and binormal chords (where $s\neq t$), and the Hessian of $E$ at $s=t$ equals $\left(\begin{smallmatrix}1&-1\\-1&1\end{smallmatrix}\right)$. Its kernel is the tangent space to the diagonal and it is positive definite in the transverse direction. This proves Bott nondegeneracy of the diagonal. Nondegeneracy of the binormal chords is achieved by a generic perturbation of $K$. \begin{figure} \labellist \small\hair 2pt \pinlabel $K$ at 240 304 \pinlabel $K$ at 390 317 \pinlabel $p$ at 143 144 \pinlabel $p_\xi$ at 81 194 \pinlabel $p_\eta$ at 200 182 \pinlabel $\dot{\gamma}(s)$ at 189 144 \pinlabel $q$ at 342 162 \pinlabel ${\color{blue} P}$ at 85 75 \pinlabel ${\color{blue} Q}$ at 337 18 \pinlabel ${\color{red} \ell_{\xi,\eta}}$ at 261 208 \endlabellist \centering \includegraphics[width=0.7\textwidth]{figures/tangency-new2} \caption{A $2$-gon becoming tangent to $K$ at an endpoint.} \label{fig:tangency} \end{figure} (b) We choose $K$ so that its curvature is nowhere $0$ (which holds generically). Then there exists $\delta>0$ such that no $2$-gon of positive length $<\delta$ intersects the knot in an interior point. Consider the tangential variety $\tau_{K}$ of $K$ (where $\gamma\colon [0,L]\to{\mathbb{R}}^{3}$ is a parametrization of $K$) \[ \tau_{K}:=\left\{\gamma(s)+r\dot \gamma(s)\mid s\in[0,L], r\in{\mathbb{R}}\right\}\subset{\mathbb{R}}^3. 
\] Since the curvature of $K$ is nowhere zero, there exists $\delta>0$ such that for each $s$ the line segment $\{\gamma(s)+r\dot \gamma(s)\mid r\in(-\delta,\delta)\}$ intersects $K$ only at $r=0$. Let $N(\delta)$ denote the union of these line segments. After small perturbation, the surface $\tau_{K}\setminus N(\delta)$ intersects $K$ transversely. This shows that there are finitely many 2-gons that are tangent to $K$ at one endpoint and that this is a transversely cut out $0$-manifold. Moreover, transversality implies that for each $2$-gon that is tangent to $K$ at one endpoint $p$, the tangent line $Q$ to $K$ at the other endpoint $q$ does not lie in the osculating plane $P$ (the plane spanned by the first two derivatives of $\gamma$) at $p$; see Figure~\ref{fig:tangency}. We claim that the $2$-gon $[p,q]$ is the boundary point of a unique local embedded curve of $2$-gons intersecting $K$ in their interior. To see this, we choose affine coordinates $(x,y,z)$ on ${\mathbb{R}}^3$ in which $p=(0,0,0)$, $q=(1,0,0)$, $P$ is the $(x,y)$-plane, and $Q$ is parallel to the $z$-axis. Then $K$ can be written near $p$ as a graph over the $x$-axis in the form $$ y=\kappa x^2+O(x^3),\quad z=O(x^3), $$ and near $q$ as a graph over the $z$-axis in the form $$ x=1+O(z^2),\quad y=O(z^2). $$ Here $2\kappa\neq0$ is the curvature of $K$ at $p$, and after a further reflection we may assume that $\kappa>0$. We fix a small $\varepsilon>0$ (to be chosen later) and consider points $\xi,\eta$ on the $x$-axis with $-\varepsilon<\xi<\eta<2\varepsilon$. Let $p_\xi,p_\eta$ be the points of $K$ near $p$ with $x$-coordinates $\xi,\eta$ and let $\ell_{\xi,\eta}$ be the line through $p_\xi$ and $p_\eta$. Let $\pi(x,y,z)=(x,z)$ be the projection onto the $(x,z)$-plane. Since the line $\ell_{\xi,\eta}$ is close to the $x$-axis and $K$ is tangent to the $z$-axis at $q$, the projected curves $\pi(\ell_{\xi,\eta})$ and $\pi(K)$ intersect in a unique point $r_{\xi,\eta}$ in the $(x,z)$-plane near $\pi(q)=(1,0)$. Let $f_\xi(\eta)$ denote the difference in the $y$-values between the points of $K$ and $\ell_{\xi,\eta}$ lying over $r_{\xi,\eta}$. Thus $f_\xi(\eta)$ is the ``distance in the $y$-direction'' between $\ell_{\xi,\eta}$ and $K$ near $q$. To compute the function $f_\xi(\eta)$, note that the slope of the line through the points $(\xi,\kappa\xi^2)$ and $(\eta,\kappa\eta^2)$ on the parabola $y=\kappa x^2$ equals $\kappa(\xi+\eta)$, so the $y$-value of this line at $x=1$ is of the form $\kappa(\xi+\eta) + O(\xi^2+\eta^2)$. The linear term persists for the function $f_\xi(\eta)$, hence $$ f_\xi(\eta) = \kappa(\xi+\eta) + O(\xi^2+\eta^2). $$ For $\varepsilon$ sufficiently small, we see that if $\xi\geq 0$, then $f_\xi(\eta)>0$ for all $\eta\in(\xi,2\varepsilon)$. Suppose therefore that $\xi<0$. Then for $\varepsilon$ sufficiently small we have $f_\xi(0)=\kappa\xi+O(\xi^2)<0$, $f_\xi(-2\xi)=-\kappa\xi+O(\xi^2)>0$, and $f_\xi'(\eta)=\kappa+O(|\xi|+|\eta|)>0$. Thus for every $\xi\in(-\varepsilon,0)$ there exists a unique $\eta(\xi)\in(\xi,2\varepsilon)$ such that $f_\xi(\eta(\xi))=0$, i.e., the line $\ell_{\xi,\eta(\xi)}$ intersects $K$ near $q$. Moreover, the point $\eta(\xi)$ depends smoothly on $\xi$ and satisfies $0<\eta(\xi)<-2\xi$. This shows that the $2$-gons with endpoints near $p,q$ intersecting $K$ in their interior form a smooth curve parametrized by $\xi\in(-\varepsilon,0)$, consisting of the corresponding segments of the lines $\ell_{\xi,\eta(\xi)}$. 
As this curve extends smoothly to $\xi=0$ by the $2$-gon $[p,q]$, the claim is proved. So we have shown that the subset $S_Q\subset K\times K$ avoids a neighborhood of the diagonal and is a $1$-manifold with boundary near the finitely many $2$-gons that are tangent to $K$ at an endpoint. Away from these sets, a generic perturbation of $K$ makes the evaluation map at the interior of the $2$-gons transverse to $K$. Since the condition that a chord meets $K$ in the interior is codimension one, and the condition that the tangent line at the intersection is parallel to the chord is of codimension three and can thus be avoided for generic $K$, we conclude that (b) holds. (c) Consider a boundary point of $S_Q$, i.e., a $2$-gon $[p,q]$ tangent to $K$ at one endpoint, say at $p$. Let $p=\gamma(s)$ and $q=\gamma(t)$ for an arclength parametrization of $K$ such that $\dot\gamma(s)$ is a positive multiple of $q-p$; see Figure~\ref{fig:tangency}. By equation~\eqref{eq:partial-E} we have $\frac{\partial E}{\partial s}= \langle p-q,\dot\gamma(s)\rangle<0$, so the parameter $s$ strictly increases in the direction of $-\nabla E$. On the other hand, the description in (b) shows that $s$ strictly decreases as we move into $S_Q$. Hence $-\nabla E$ is not pointing into $S_Q$ at $[p,q]$. \end{proof} \comment{ The next lemma describes the familiar Morse theoretic properties of the space of $2$-gons. \begin{lemma}\label{lem:eps} For a generic knot $K\subset{\mathbb{R}}^3$, for there exists a constant $\varepsilon>0$ with the following properties. (a) For each index $0$ binormal chord $c$, the connected component of $c$ in the space of $2$-gons of length in $[L(c)-\varepsilon,L(c)+\varepsilon]$ deformation retracts under the flow of $-\nabla E$ onto $c$. (b) For each index $1$ binormal chord $c$, the connected component of the unstable manifold $U_c$ of $c$ in the space of paths of $2$-gons of length $\leq L(c)+\varepsilon$ with boundary of length $\leq L(c)-\varepsilon$ deformation retracts under the flow of $-\nabla E$ onto $U_c$. \end{lemma} \begin{proof} According to Lemma~\ref{lem:2-gons}, for generic $K$, the function $E:K\times K\to{\mathbb{R}}$ has a Bott nondegenerate minimum along the diagonal and Morse critical points outside the diagonal. Since the length $L=\sqrt{2E}$ is strictly decreasing along the flow of $-\nabla E$, the lemma follows by standard Morse theoretic arguments. \end{proof} } More generally, for an integer $\ell\geq 1$ we consider the space $(K\times K)^\ell$ of $\ell$-tuples of $2$-gons with the energy and length functions $E^\ell,L^\ell:(K\times K)^\ell\to{\mathbb{R}}$, \begin{gather*} E^\ell(x_1,y_1,\dots,x_\ell,y_\ell) := \frac{1}{2}\sum_{i=1}^\ell|x_i-y_i|^2, \cr L^\ell(x_1,y_1,\dots,x_\ell,y_\ell) := \sum_{i=1}^\ell|x_i-y_i|. \end{gather*} As a consequence of Lemma~\ref{lem:2-gons}, $E^\ell$ is a Morse-Bott function whose critical manifolds are products $C_1\times\dots\times C_\ell$ of critical manifolds of $E$, so each $C_i$ is either a binormal chord or the corresponding diagonal. Note that the symmetric group $S_\ell$ acts on $(K\times K)^\ell$ preserving $E^\ell$ as well as the product metric. For $a>0$ we denote by $M^a\subset(K\times K)^\ell$ the collection of tuples $c=(c_1,\dots,c_\ell)$ of binormal chords of total length $L(c)=a$, and by $W^a$ the disjoint union of the unstable manifolds of points in $M^a$ under the flow of $-\nabla E^\ell$ (here $M^a$ and thus $W^a$ may be empty). 
Let $\phi^T:(K\times K)^\ell\to (K\times K)^\ell$ be the time-$T$ map of the flow of $-\nabla E^\ell$. \begin{lemma}\label{lem:eps} For a generic knot $K\subset{\mathbb{R}}^3$ and each $a>0$ there exist $\varepsilon_a>0$ and $T_a>0$ with the following property. For each $\varepsilon<\varepsilon_a$, $T\geq T_a$ and $\ell\in{\mathbb{N}}$ we have $$ \phi^T(\{L^\ell\leq a+\varepsilon\})\subset\{L^\ell\leq a-\varepsilon\}\cup V^a, $$ where $V^a$ is a tubular neighborhood of $W^a\cap\{L^\ell\geq a-\varepsilon\}$ in $\{a-\varepsilon\leq L^\ell\leq a+\varepsilon\}$. Moreover, tuples of $Q$-strings in $V^a$ do not intersect the knot $K$ in their interior. \end{lemma}
\begin{proof} Note that on $K\times K$ the length and energy are related by $L=\sqrt{2E}$, so they have the same critical points and $L$ is strictly decreasing under the flow of $-\nabla E$ outside the critical points. Since the flow of $-\nabla E^\ell$ is the product of the flows of $E$ in each factor, the same relation holds for any $\ell\in{\mathbb{N}}$: $L^\ell$ and $E^\ell$ have the same critical points and $L^\ell$ is strictly decreasing under the flow of $-\nabla E^\ell$ outside the critical points. Next recall from above that $E^\ell$ is a Morse-Bott function. In particular, the set of critical values of $E^\ell$, and thus also of $L^\ell$, is discrete. Given $a\in{\mathbb{R}}$, we pick $\varepsilon_a>0$ such that $a$ is the only critical value of $L^\ell$ in the interval $[a-\varepsilon_a,a+\varepsilon_a]$. (Since only finitely many binormal chords can appear in tuples of critical points of total length $a$, the constant $\varepsilon_a$ can be chosen independently of $\ell$.) For $\varepsilon<\varepsilon_a$, the familiar argument from Morse theory shows that $\phi^T(\{L^\ell\leq a+\varepsilon\})\subset\{L^\ell\leq a-\varepsilon\}\cup V^a_{\varepsilon,T}$, where $V^a_{\varepsilon,T}$ for large $T$ are tubular neighborhoods of $W^a\cap\{L^\ell\geq a-\varepsilon\}$ in $\{a-\varepsilon\leq L^\ell\leq a+\varepsilon\}$ that shrink to $W^a\cap\{L^\ell\geq a-\varepsilon\}$ as $T\to\infty$. For the last statement recall that, for a generic knot $K$, binormal chords do not meet $K$ in their interior. So for each $\ell\in{\mathbb{N}}$ there exists a neighborhood $U^a$ of $M^a$ in $(K\times K)^\ell$ such that tuples of $Q$-strings in $U^a$ do not intersect $K$ in their interior. We pick $T_a$ large enough and $\varepsilon_a$ small enough so that $V^a_{\varepsilon_a,T_a}$ is contained in $U^a$. By the argument as in the previous paragraph, the constants $\varepsilon_a$ and $T_a$ can be chosen independently of $\ell$ and the lemma is proved. \end{proof}
\comment{ Lemma~\ref{lem:eps} now implies \begin{cor}\label{cor:eps-a} For a generic knot $K\subset{\mathbb{R}}^3$ and $\ell\in{\mathbb{N}}$, each critical value of $E^\ell$ corresponds to a unique $S_\ell$-orbit of critical points. Moreover, for each $a\in{\mathbb{R}}$ there exists a constant $\varepsilon_a>0$ with the following properties. (a) If $a$ is not a critical value, then the space $\{L\leq a+\varepsilon_a\}$ of $\ell$-tuples of $2$-gons of total length $\leq a+\varepsilon$ deformation retracts under the flow of $-\nabla E^\ell$ onto $\{L\leq a-\varepsilon_a\}$. (b) For each tuple $c=(c_1,\dots,c_\ell)$ of binormal cords of total index $0$ and $L(c)=a$, the connected component of $c$ in the space $\{a-\varepsilon_a\leq L\leq a+\varepsilon_a\}$ deformation retracts under the flow of $-\nabla E^\ell$ onto $c$.
(c) For each tuple $c=(c_1,\dots,c_\ell)$ of binormal chords of total index $1$ and $L(c)=a$, the connected component of the unstable manifold $U_c$ of $c$ in the space of paths in $\{L\leq a+\varepsilon_a\}$ with boundary in $\{L\leq a-\varepsilon_a\}$ deformation retracts under the flow of $-\nabla E^\ell$ onto $U_c$. \hfill$\square$ \end{cor} }
\subsection{Shortening linear $Q$-strings}\label{ss:chain-hom}
We will need some homological algebra. Suppose we have the following algebraic situation:
\begin{itemize} \item a chain complex $(\mathcal{C},D=\partial+\delta)$ satisfying the relations $$ \partial^2=\delta^2=\partial\delta+\delta\partial=0, \quad\text{\rm and} $$ \item a chain map $f:(\mathcal{C},\partial)\to(\mathcal{C},\partial)$ and a chain homotopy $H:(\mathcal{C},\partial)\to(\mathcal{C},\partial)$ satisfying \begin{equation}\label{eq:iso1} \partial H+H\partial = f-{\rm 1\mskip-4mu l}, \end{equation} such that for every $c\in\mathcal{C}$ there exists a positive integer $S(c)$ with \begin{equation}\label{eq:iso3} (\delta H)^{S(c)}(c)=0. \end{equation} \end{itemize}
In our applications below, we will have $\delta= \delta_Q+\delta_N$, and the equation $\delta^2=0$ will follow from $$ \delta_Q^2=\delta_N^2=[\delta_Q,\delta_N]=0, $$ which is part of the statement that $D^2=0$ in our chain complex. Here, as usual, we denote the graded commutator of two maps $A,B$ by $$ [A,B] := AB - (-1)^{|A||B|}BA. $$ Set $H_0:=H$ and $f_0:=f$, and more generally for $d\geq 1$ define the maps \begin{equation}\label{eq:Hd} H_d:= H (\delta H)^d, \quad f_d := \sum_{i=0}^{d} (H \delta)^i f (\delta H)^{d-i}. \end{equation} It is also convenient to set $H_{-1}=0$. Note that the maps $f_d$ satisfy the recursion relation $f_{d+1}=f_d \delta H + H_d\delta f$.
\begin{lemma} For each $d\geq 1$ we have \begin{equation}\label{eq:iso4} [\partial,H_d]+[\delta,H_{d-1}]=f_d. \end{equation} \end{lemma}
\begin{proof} We prove this by induction on $d$. The case $d=1$ is an immediate consequence of \eqref{eq:iso1} and $[\delta,\partial]=0$. For the induction step we observe that \begin{align*} [\partial,H_{d+1}] &= \partial H_d\delta H + H_d\delta H\partial\cr &= [\partial,H_d]\delta H - H_d \partial \delta H -H_d\delta \partial H + H_d\delta f -H_d\delta\cr &= f_d \delta H -[\delta,H_{d-1}] \delta H + H_d\delta f -H_d\delta\cr &= f_{d+1} -\delta H_d - H_{d-1}\delta^2H - H_d\delta\cr &= f_{d+1} -[\delta,H_d]. \end{align*} Here in the second equality we have used \eqref{eq:iso1}, in the third equality the induction hypothesis and $[\delta,\partial]=0$, in the fourth equality the recursion relation above, and in the fifth equality we have used $\delta^2=0$. \end{proof}
In view of equation~\eqref{eq:iso3}, for each $c\in\mathcal{C}$ we have $H_dc=0$ and $f_dc=0$ for $$ d\geq S(c)+\max\bigl\{S(fc),S(f\delta H c),\dots,S(f(\delta H)^{S(c)-1}c)\bigr\}. $$ So the sums \begin{equation}\label{eq:BH} {\mathbb{H}}:= \sum_{d=0}^\infty H_d,\qquad {\mathbb{F}}:= \sum_{d=0}^\infty f_d \end{equation} are finite on every $c\in\mathcal{C}$. Summing up equation~\eqref{eq:iso4} for $d=1,\dots,e$ and using equation~\eqref{eq:iso1}, we obtain $$ [\partial,H_e] + [D,H_0+\cdots+H_{e-1}] = f_0+\cdots+f_e-{\rm 1\mskip-4mu l} $$ for all $e$, and hence \begin{equation*} [D,{\mathbb{H}}] ={\mathbb{F}} -{\rm 1\mskip-4mu l}. \end{equation*} This concludes the homological algebra discussion.
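For the reader's convenience, we also spell out the case $d=1$ of \eqref{eq:iso4}, which served as the base of the induction above: using \eqref{eq:iso1} twice and $\partial\delta+\delta\partial=0$ we compute
\begin{align*}
[\partial,H_1]+[\delta,H_0] &= \partial H\delta H + H\delta H\partial + \delta H + H\delta\cr
&= (f-{\rm 1\mskip-4mu l}-H\partial)\delta H + H\delta(f-{\rm 1\mskip-4mu l}-\partial H) + \delta H + H\delta\cr
&= f\delta H + H\delta f - H(\partial\delta+\delta\partial)H = f_1.
\end{align*}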
We now apply this construction to the space $\Sigma_{\rm lin}$ of broken strings with linear $Q$-strings as follows. We fix a large time $T>0$ and consider a generic $i$-chain $\beta$ in $\Sigma_{\rm lin}$, for $i=0,1$. Moving the $Q$-strings in $\beta$ by the flow of $-\nabla E$ for times $t\in[0,T]$ we obtain an $(i+1)$-chain in $(K\times K)^\ell$. We make this an $(i+1)$-chain $H^T\beta$ in $\Sigma_{\rm lin}$ by dragging along the $N$-strings without creating new intersections with the knot. In the case $i=1$, we moreover grow new $N$-spikes starting from the finitely many points $Z\beta$ where some $Q$-string becomes tangent to the knot at one end point, as shown in Figure~\ref{fig:Hbeta}. We define $f^T\beta$ as the boundary component of $H^T\beta$ at time $T$.
\begin{remark} Technically, we should be careful to arrange that $H$ maps generic chains to generic chains. This is easy for $0$-chains, but some care should be taken for $1$-chains, especially near the points $Z\beta$ where some $Q$-string becomes tangent to $K$ at one of its end points. \end{remark}
\begin{prop} For a generic knot $K$, the operations defined above yield for $i=0,1$ maps $$ f^T:C_i(\Sigma_{\rm lin})\to C_i(\Sigma_{\rm lin}),\qquad H^T:C_i(\Sigma_{\rm lin})\to C_{i+1}(\Sigma_{\rm lin}) $$ satisfying conditions~\eqref{eq:iso1} and~\eqref{eq:iso3}. \end{prop}
\begin{proof} Standard transversality arguments show that $f^T$ and $H^T$ map generic chains to generic chains, provided that we impose suitable genericity conditions on generic chains with respect to linear strings. Now condition~\eqref{eq:iso1} is clear by construction. For condition~\eqref{eq:iso3}, we use Lemma~\ref{lem:2-gons}(c). It implies that there exists a neighborhood $U\subset K\times K$ of the finitely many $2$-gons $\partial S_Q$ that are tangent to $K$ at one end point and an $\varepsilon>0$ with the following property: Each $2$-gon in $U\cap S_Q$ decreases in length by at least $\varepsilon$ under the flow of $-\nabla E$ before it meets $S_Q$ again, and the same holds for the longer $2$-gon resulting from splitting it at its intersection with the knot. On the other hand, if a $2$-gon in $S_Q\setminus U$ is split at its intersection with the knot, then both pieces are shorter by at least some fixed amount $\delta>0$. Hence each application of $H^T\delta_Q$ decreases the total length of $Q$-strings by at least $\min(\varepsilon,\delta)$, and since $L(\beta)$ is finite this can happen only finitely many times. \end{proof}
Applying definition~\eqref{eq:BH} to the maps $f^T$ and $H^T$, we obtain for $i=0,1$ length-decreasing maps $$ {\mathbb{F}}^T:C_i(\Sigma_{\rm lin})\to C_i(\Sigma_{\rm lin}),\qquad {\mathbb{H}}^T:C_i(\Sigma_{\rm lin})\to C_{i+1}(\Sigma_{\rm lin}) $$ satisfying \begin{equation}\label{eq:HT} D{\mathbb{H}}^T_0 ={\mathbb{F}}^T_0 -{\rm 1\mskip-4mu l},\qquad {\mathbb{H}}^T_0D+D{\mathbb{H}}^T_1 ={\mathbb{F}}^T_1 -{\rm 1\mskip-4mu l}. \end{equation} We now use these maps to compute the homology of $(C_i(\Sigma_{\rm lin}),D)$ in small length intervals. For $a\in{\mathbb{R}}$ and $i=0,1$ we denote by $\AA_i^a$ the free ${\mathbb{Z}}$-module generated by words $\gamma_1c_1\dots\gamma_\ell c_\ell\gamma_{\ell+1}$, $\ell\geq 0$, where $c_1,\dots,c_\ell$ are binormal chords of total length $a$ and of total index $i$, and the $\gamma_j$ are homotopy classes of paths in $\partial N$ connecting the $c_j$ to broken strings and not intersecting $K$ in their interior.
We define linear maps $$ \Theta:\AA_i^a\to H_i^{[a-\varepsilon,a+\varepsilon)}(\Sigma_{\rm lin},D) $$ as follows. For $i=0$, $\Theta$ sends $\gamma_1c_1\dots\gamma_\ell c_\ell\gamma_{\ell+1}$ to the homology class of the broken string $\widetilde\gamma_1c_1\dots\widetilde\gamma_\ell c_\ell\widetilde\gamma_{\ell+1}$, where $\tilde\gamma_j$ are representatives of the classes $\gamma_j$. For $i=1$, consider a word $\gamma_1c_1\dots\gamma_\ell c_\ell\gamma_{\ell+1}$ with exactly one binormal chord $c_k$ of index $1$ and all others of index $0$. Then $\Theta$ sends this word to the homology class of the $1$-chain $\widetilde\gamma_1c_1\dots\widetilde c_k\dots\widetilde\gamma_\ell c_\ell\widetilde\gamma_{\ell+1}$, where $\tilde\gamma_j$ are representatives of the classes $\gamma_j$ and $\widetilde c_k$ is the unstable manifold of $c_k$ in $(K\times K)\cap\{L\geq a-\varepsilon\}$, viewed as a $1$-chain by fixing some parametrization.
\begin{cor}\label{cor:lin-hom} For $a\in{\mathbb{R}}$ let $\varepsilon_a$ be the constant from Lemma~\ref{lem:eps}. Then for each $\varepsilon<\varepsilon_a$ the map $\Theta:\AA_i^a\to H_i^{[a-\varepsilon,a+\varepsilon)}(\Sigma_{\rm lin},D)$ is an isomorphism for $i=0$ and surjective for $i=1$. \end{cor}
\begin{proof} We first consider the case $i=1$. Fix $\varepsilon<\varepsilon_a$ and $T>T_a$, where $\varepsilon_a,T_a$ are the constants from Lemma~\ref{lem:eps}. Consider a relative $1$-cycle $\beta\in C_1^{[a-\varepsilon,a+\varepsilon)}(\Sigma_{\rm lin})$. In view of~\eqref{eq:HT}, $\beta$ is homologous to ${\mathbb{F}}^T\beta$. Recall from its definition in~\eqref{eq:Hd} and~\eqref{eq:BH} that each tuple of $Q$-strings appearing in ${\mathbb{F}}^T\beta$ is obtained by flowing some tuple of $Q$-strings for time $T$ (and maybe applying $H\delta_Q$ several times to the resulting tuple). Now we distinguish two cases. {\em Case 1: }$a$ is not the length of a word of binormal chords. Then in Lemma~\ref{lem:eps} the set $V^a$ is empty and it follows that all tuples of $Q$-strings in ${\mathbb{F}}^T\beta$ have length at most $a-\varepsilon$. This shows that $H_1^{[a-\varepsilon,a+\varepsilon)}(\Sigma_{\rm lin})=0$ and the map $\Theta$ is an isomorphism. {\em Case 2: }$a$ is the length of a word of binormal chords. For simplicity, let us assume that up to permutation there is only one word $w$ of length $a$ (the general case differs just in notation). By Lemma~\ref{lem:eps}, ${\mathbb{F}}^T\beta$ is a finite sum $\beta_1'+\beta_2'+\dots$ of relative $1$-cycles $\beta_\ell'$ in tubular neighborhoods $V^a$ of the unstable manifolds $W^a\cap\{L^\ell\geq a-\varepsilon\}$ of critical $\ell$-tuples of length $a$. Recall that critical $\ell$-tuples consist of binormal chords and $Q$-spikes (corresponding to constant $2$-gons). Using the operation $\delta_N$, we can replace $Q$-spikes by differences of $N$-strings to obtain a relative $1$-cycle $\beta''$ in $V^a$ homologous to ${\mathbb{F}}^T\beta$ which contains no $Q$-spikes. So each $1$-simplex $\beta_j''$ in $\beta''$ is a relative $1$-chain whose $Q$-strings lie in the tubular neighborhood $V_j$ of the unstable manifold of some permutation $w_j$ of $w$. Then the $N$-strings in $\beta''$ do not intersect the knot in their interior, and by Lemma~\ref{lem:eps} neither do the $Q$-strings. Thus each $\beta_j''$ is a relative cycle in $V_j$ with respect to the singular boundary $\partial$. We distinguish three subcases.
(i) If the total degree of the word $w$ is bigger than $1$, then its stable manifold for the flow of $-\nabla E$ has codimension bigger than $1$. So, after a small perturbation, each $\beta_j''$ will avoid the stable manifold of $w_j$ and will therefore have length at most $a-\varepsilon$ for sufficiently large $T$. This shows that, as in Case 1, both groups vanish and $\Theta$ is an isomorphism. (ii) If the degree of the word $w$ is $0$, then its unstable manifold is a point and thus each $V_j$ is contractible relative to $\{L\leq a-\varepsilon\}$. It follows that each relative cycle $\beta_j''$ is $\partial$-exact, and since no $\delta_Q$ and $\delta_N$ occurs also $D$-exact. Again we see that both groups vanish and $\Theta$ is an isomorphism. (iii) If the degree of the word $w$ is $1$, then each $V_j$ deformation retracts relative to $\{L\leq a-\varepsilon\}$ onto the $1$-dimensional unstable manifold $\widetilde w_j$ of $w_j$. It follows that each relative cycle $\beta_j''$ is $\partial$-homologous, and since no $\delta_Q$ and $\delta_N$ occurs also $D$-homologous, to a multiple of the $1$-chain of $Q$-strings $\widetilde w_j$ connected by suitable $N$-strings. By definition of $\Theta$, this shows that the $D$-homology class $[\beta'']=[\beta]$ lies in the image of $\Theta$. So $\Theta$ is surjective, which concludes the case $i=1$. In the case $i=0$, the proof of surjectivity is analogous but simpler than in the case $i=1$. For injectivity one considers ${\mathbb{F}}^T\beta$ for a $1$-chain $\beta$ in $\Sigma_{\rm lin}$ with $D\beta=\alpha$ for a given $0$-chain $\alpha$ and argues similarly. Note that this last step does not work to prove injectivity for $i=1$ because it would require considering ${\mathbb{F}}^T\beta$ for a $2$-chain $\beta$, which we have not defined (although this should of course be possible). \end{proof} \subsection{Proof of the isomorphism} Let $\Phi:(C_*(\mathcal{R}),\partial_\Lambda)\to (C_{\ast}(\Sigma),D)$ be the chain map constructed in the previous section. We now use the fact (Corollary~\ref{cor:respect-length}) that the map $\Phi$ preserves the length filtrations. Thus for $a<b<c$ we have the commuting diagram with exact rows of length filtered homology groups \begin{equation*} \begin{CD} H_1^{[b,c)}(\mathcal{R}) @>>> H_0^{[a,b)}(\mathcal{R}) @>>> H_0^{[a,c)}(\mathcal{R}) @>>> H_0^{[b,c)}(\mathcal{R}) @>>> 0 \\ @VV{\Phi_*}V @VV{\Phi_*}V @VV{\Phi_*}V @VV{\Phi_*}V @VVV \\ H_1^{[b,c)}(\Sigma) @>>> H_0^{[a,b)}(\Sigma) @>>> H_0^{[a,c)}(\Sigma) @>>> H_0^{[b,c)}(\Sigma) @>>> 0\,. \end{CD} \end{equation*} The main result of this section asserts that $\Phi_*$ is an isomorphism (resp.~surjective) for sufficiently small action intervals: \begin{prop}\label{prop:rel-hom} For each $a\in{\mathbb{R}}$ there exists an $\varepsilon_a>0$ such that for each $\varepsilon<\varepsilon_a$ the map $$ \Phi_*:H_0^{[a-\varepsilon,a+\varepsilon)}(\mathcal{R}) \to H_0^{[a-\varepsilon,a+\varepsilon)}(\Sigma) $$ is an isomorphism and the map $$ \Phi_*:H_1^{[a-\varepsilon,a+\varepsilon)}(\mathcal{R}) \to H_1^{[a-\varepsilon,a+\varepsilon)}(\Sigma) $$ is surjective. \end{prop} This proposition implies Theorem~\ref{thm:main} as follows. Since $H_0(\mathcal{R}) = \lim_{R\to\infty}H_0^{[0,R)}(\mathcal{R})$ and $H_0(\Sigma) = \lim_{R\to\infty}H_0^{[0,R)}(\Sigma)$, it suffices to show that $\Phi_*:H_0^{[0,R)}(\mathcal{R})\to H_0^{[0,R)}(\Sigma)$ is an isomorphism for each $R>0$. 
Now the compact interval $[0,R]$ is covered by finitely many of the open intervals $(a-\varepsilon_a,a+\varepsilon_a)$, with $a\in[0,R]$ and $\varepsilon_a$ as in Proposition~\ref{prop:rel-hom}. Thus, according to Proposition~\ref{prop:rel-hom}, there exists a partition $0=r_0<r_1<\cdots<r_N=R$ such that the maps $\Phi_*:H_0^{[r_{i-1},r_i)}(\mathcal{R})\to H_0^{[r_{i-1},r_i)}(\Sigma)$ are isomorphisms and $\Phi_*:H_1^{[r_{i-1},r_i)}(\mathcal{R})\to H_1^{[r_{i-1},r_i)}(\Sigma)$ are surjective for all $i=1,\dots,N$. To prove by induction that $\Phi_*:H_0^{[0,r_i)}(\mathcal{R})\to H_0^{[0,r_i)}(\Sigma)$ is an isomorphism for each $i=1,\dots,N$, consider the commuting diagram above with $a=0$, $b=r_{i-1}$ and $c=r_i$. By induction hypothesis for $i-1$ the second, fourth and fifth vertical maps are isomorphisms and the first one is surjective, so by the five lemma the third vertical map is an isomorphism as well. This proves the inductive step and hence Theorem~\ref{thm:main}.
\begin{proof}[Proof of Proposition~\ref{prop:rel-hom}] Let us denote the maps provided by Proposition~\ref{prop:pl} by ${\mathbb{F}}_i^{\rm pl},{\mathbb{H}}_i^{\rm pl}$ and the maps in Proposition~\ref{prop:pl-lin} by ${\mathbb{F}}_i^{\rm lin},{\mathbb{H}}_i^{\rm lin}$, $i=0,1$. A short computation shows that the maps \begin{align*} {\mathbb{F}}_i &:= {\mathbb{F}}_i^{\rm lin}\circ{\mathbb{F}}_i^{\rm pl}:C_i(\Sigma) \to C_i(\Sigma_{\rm lin}), \cr {\mathbb{H}}_i &:= {\mathbb{H}}_i^{\rm pl}+i_{\rm pl}\circ{\mathbb{H}}_i^{\rm lin}\circ{\mathbb{F}}_i^{\rm pl}:C_i(\Sigma) \to C_{i+1}(\Sigma) \end{align*} for $i=0,1$ satisfy, with the map $i:=i_{\rm pl}\circ i_{\rm lin}:C_*(\Sigma_{\rm lin})\hookrightarrow C_*(\Sigma)$, the following relations: \begin{enumerate} \item ${\mathbb{F}}_0i={\rm 1\mskip-4mu l}$ and $D{\mathbb{H}}_0= i{\mathbb{F}}_0-{\rm 1\mskip-4mu l}$; \item ${\mathbb{F}}_1i={\rm 1\mskip-4mu l}$ and ${\mathbb{H}}_0D + D{\mathbb{H}}_1 = i{\mathbb{F}}_1-{\rm 1\mskip-4mu l}$; \item ${\mathbb{F}}_0$, ${\mathbb{H}}_0$, ${\mathbb{F}}_1$ and ${\mathbb{H}}_1$ are (not necessarily strictly) length-decreasing. \end{enumerate} Conditions (i) and (ii) imply $D{\mathbb{F}}_1={\mathbb{F}}_0D$ and $i{\mathbb{F}}_1D=D({\rm 1\mskip-4mu l}+{\mathbb{H}}_1D)$, and therefore $$ {\mathbb{F}}_0(\im D)\subset\im D,\qquad {\mathbb{F}}_1(\ker D)\subset\ker D,\qquad {\mathbb{F}}_1(\im D)\subset i^{-1}(\im D). $$ Hence the ${\mathbb{F}}_i$ define chain maps between the chain complexes (where the left horizontal maps are the obvious inclusions) \begin{equation*} \begin{CD} \im D @>>> C_1(\Sigma) @>D>> C_0(\Sigma) \\ @VV{{\mathbb{F}}_1}V @VV{{\mathbb{F}}_1}V @VV{{\mathbb{F}}_0}V \\ i^{-1}(\im D) @>>> C_1(\Sigma_{\rm lin}) @>{D^{\rm lin}}>> C_0(\Sigma_{\rm lin})\,. \end{CD} \end{equation*} Note that the upper complex computes the homology groups $H_0(\Sigma)$ and $H_1(\Sigma)$, while the lower complex has homology groups $H_0(\Sigma^{\rm lin})$ and $$ \widehat H_1(\Sigma^{\rm lin}) := \ker D^{\rm lin}/i^{-1}(\im D). $$ Conditions (i) and (ii) show that ${\mathbb{F}}_0,{\mathbb{F}}_1$ induce isomorphisms between these homology groups (with inverses $i_*$), and in view of condition (iii) the same holds for length filtered homology groups.
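For completeness, here is the short computation behind the identities $D{\mathbb{F}}_1={\mathbb{F}}_0D$ and $i{\mathbb{F}}_1D=D({\rm 1\mskip-4mu l}+{\mathbb{H}}_1D)$ above: applying $D$ to the relation in (ii) and using $D^2=0$ gives $Di{\mathbb{F}}_1=D+D{\mathbb{H}}_0D$, which by (i) equals $i{\mathbb{F}}_0D$; since $i$ is an injective chain map, this yields the first identity. Composing the relation in (ii) with $D$ on the right gives $i{\mathbb{F}}_1D=D+D{\mathbb{H}}_1D=D({\rm 1\mskip-4mu l}+{\mathbb{H}}_1D)$.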
Setting $$ \Psi:={\mathbb{F}}_i\circ\Phi:(C_i(\mathcal{R}),\partial_\Lambda)\to (C_i(\Sigma^{\rm lin}),D),\qquad i=0,1, $$ it therefore suffices to prove: {\em For each $a\in{\mathbb{R}}$ there exists an $\varepsilon_a>0$ such that for each $\varepsilon<\varepsilon_a$ the map $$ \Psi_*:H_0^{[a-\varepsilon,a+\varepsilon)}(\mathcal{R}) \to H_0^{[a-\varepsilon,a+\varepsilon)}(\Sigma^{\rm lin}) $$ is an isomorphism and the map $$ \Psi_*:H_1^{[a-\varepsilon,a+\varepsilon)}(\mathcal{R}) \to \widehat H_1^{[a-\varepsilon,a+\varepsilon)}(\Sigma^{\rm lin}) $$ is surjective.} We take for $\varepsilon_a$ the constant from Lemma~\ref{lem:eps} and consider $\varepsilon<\varepsilon_a$. Then we have canonical isomorphisms $$ \Gamma:H_i^{[a-\varepsilon,a+\varepsilon)}(\mathcal{R}) \cong \AA^a_i,\qquad i=0,1 $$ to the groups $\AA^a_i$ introduced in the previous subsection. Recall the maps $\Theta:\AA_i^a\to H_i^{[a-\varepsilon,a+\varepsilon)}(\Sigma_{\rm lin},D)$ from Corollary~\ref{cor:lin-hom} which are an isomorphism for $i=0$ and surjective for $i=1$. We consider first the case $i=0$. By Proposition~\ref{prop:disklength}, for a binormal chord $c$ of index $0$ and length $a$ the moduli space of holomorphic disks with positive puncture $c$ and switching boundary conditions contains one component corresponding to the half-strip over $c$, and on all other components the $Q$-strings in the boundary have total length less than $a-\varepsilon$. This shows that the map $\Psi_*:H_0^{[a-\varepsilon,a+\varepsilon)}(\mathcal{R}) \to H_0^{[a-\varepsilon,a+\varepsilon)}(\Sigma^{\rm lin})$ agrees with $\Theta\circ\Gamma$ and is therefore an isomorphism. For $i=1$ we have a diagram \begin{equation*} \begin{CD} H_1^{[a-\varepsilon,a+\varepsilon)}(\mathcal{R}) @>{\Psi_*}>>\widehat H_1^{[a-\varepsilon,a+\varepsilon)}(\Sigma^{\rm lin}) \\ @V{\cong}V{\Gamma}V @AA{\Pi}A \\ \AA^a_1 @>{\Theta}>> H_1^{[a-\varepsilon,a+\varepsilon)}(\Sigma^{\rm lin}), \end{CD} \end{equation*} where $\Pi:H_1(\Sigma^{\rm lin}) = \ker D^{\rm lin}/\im D^{\rm lin} \to \ker D^{\rm lin}/i^{-1}(\im D) = \widehat H_1(\Sigma^{\rm lin})$ is the canonical projection. Since $\Pi$ and $\Theta$ are surjective, surjectivity of $\Psi_*$ follows once we show that the diagram commutes. To see this, consider a word $w=b_1\cdots b_kc$ of binormal chords of indices $|b_i|=0$ and $|c|=1$ and total length $a$. The $1$-dimensional moduli space of holomorphic strips with positive puncture asymptotic to $c$ and one boundary component on the zero section contains a unique component $\mathcal{M}_c$ passing through the trivial strip over $c$. By Proposition~\ref{prop:disklength}, for each other element in $\mathcal{M}_c$ the boundary on the zero section has length strictly less than $L(c)$. So, for $\varepsilon$ sufficiently small, the moduli space represents a generator of the local first homology at $c$. Since on all other components of the moduli space the $Q$-strings in the boundary have total length less than $a-\varepsilon$, the product of $\mathcal{M}_c$ with the half-strips over the $b_j$ gives $\Phi(w)\in C_1^{[a-\varepsilon,a+\varepsilon)}(\Sigma)$. Its image $\Psi(w)={\mathbb{F}}_1\circ\Phi(w)\in C_1^{[a-\varepsilon,a+\varepsilon)}(\Sigma_{\rm lin})$ is obtained from $\Phi(w)$ by shortening the $Q$-strings to linear ones. Since the tuples of $Q$-strings in $\Phi(w)$ were either $C^1$-close to $w$ (depending on $\varepsilon$) or had total length less that $a-\varepsilon$, the same holds for $\Psi(w)$. 
Hence $\Psi(w)$ is homologous (with respect to $\partial$, and therefore with respect to $D$) in $C_1^{[a-\varepsilon,a+\varepsilon)}(\Sigma_{\rm lin})$ to the unstable manifold of $w$ in $\Sigma_{\rm lin}$, which by definition equals $\Pi\circ\Theta\circ\Gamma(w)$. In the previous argument we have ignored the $N$-strings, always connecting the ends of $Q$-strings to the base point by capping paths. More generally, a generator of $H_1^{[a-\varepsilon,a+\varepsilon)}(\mathcal{R})\cong \AA^a_1$ is given by a word $\gamma_1c_1\cdots\gamma_\ell c_\ell\gamma_{\ell+1}$, where the $c_j$ are binormal chords with one of them of index $1$ and all others of index $0$, and the $\gamma_j$ are homotopy classes of $N$-strings connecting the end points and not intersecting $K$ in the interior. Now we apply the same arguments as above to the $Q$-strings, dragging along the $N$-strings, to prove commutativity of the diagram. This concludes the proof of Proposition~\ref{prop:rel-hom}, and thus of Theorem~\ref{thm:main}. \end{proof}
\section{Holomorphic functions near corners}\label{sec:holo}
In this section, we call a function $f:R\to{\mathbb{C}}$ on a subset $R\subset{\mathbb{C}}$ with piecewise smooth boundary {\em holomorphic} if it is continuous on $R$ and holomorphic in the interior of $R$.
\subsection{Power series expansions}\label{ss:series}
Denote by $D\subset{\mathbb{C}}$ the open unit disk and set \begin{align*} D^+ &:= \{z\in D\mid \Im(z)\geq 0\}, \cr Q^+ &:= \{z\in D\mid \Re(z)\geq 0,\,\Im(z)\geq 0\}. \end{align*} Consider a holomorphic function $f:Q^+\to{\mathbb{C}}$ (in the above sense, i.e.~continuous on $Q^+$ and holomorphic in the interior) with $f(0)=0$. We distinguish four cases according to the boundary conditions. {\em Case 1: }$f$ maps ${\mathbb{R}}_+$ to ${\mathbb{R}}$ and $i{\mathbb{R}}_+$ to $i{\mathbb{R}}$.\\ In this case, we extend $f$ to a map $f:D^+\to{\mathbb{C}}$ by the formula $$ f(z):=-\overline{f(-\bar z)},\qquad \Re(z)\leq 0,\Im(z)\geq 0, $$ and then to a map $f:D\to{\mathbb{C}}$ by the formula $$ f(z):=\overline{f(\bar z)},\qquad \Im(z)\leq 0. $$ The resulting map $f$ is continuous on $D$ and holomorphic outside the axes ${\mathbb{R}}\cup i{\mathbb{R}}$, hence holomorphic on $D$, and it maps ${\mathbb{R}}$ to ${\mathbb{R}}$ and $i{\mathbb{R}}$ to $i{\mathbb{R}}$. Thus it has a power series expansion $$ f(z) = \sum_{j=1}^\infty a_{2j-1}z^{2j-1},\qquad a_j\in{\mathbb{R}}. $$ This shows that each holomorphic function $f:Q^+\to{\mathbb{C}}$ mapping ${\mathbb{R}}_+$ to ${\mathbb{R}}$ and $i{\mathbb{R}}_+$ to $i{\mathbb{R}}$ is uniquely the restriction of such a power series. In particular, $f$ has an isolated zero at the origin unless it vanishes identically. Similar discussions apply in the other cases. {\em Case 2: }$f$ maps $({\mathbb{R}}_+,i{\mathbb{R}}_+)$ to $(i{\mathbb{R}},{\mathbb{R}})$. Then it has a power series expansion $$ f(z) = i\sum_{j=1}^\infty a_{2j-1}z^{2j-1},\qquad a_j\in{\mathbb{R}}. $$ {\em Case 3: }$f$ maps $({\mathbb{R}}_+,i{\mathbb{R}}_+)$ to $({\mathbb{R}},{\mathbb{R}})$. Then it has a power series expansion $$ f(z) = \sum_{j=1}^\infty a_{2j}z^{2j},\qquad a_j\in{\mathbb{R}}. $$ {\em Case 4: }$f$ maps $({\mathbb{R}}_+,i{\mathbb{R}}_+)$ to $(i{\mathbb{R}},i{\mathbb{R}})$. Then it has a power series expansion $$ f(z) = i\sum_{j=1}^\infty a_{2j}z^{2j},\qquad a_j\in{\mathbb{R}}.
$$ \begin{remark}\label{rem:cases1-4} We can summarize the four cases by saying that $f:Q^+\to{\mathbb{C}}$ is given by a power series $$ f(z) = \sum_{k=1}^\infty a_kz^k $$ with either only odd (in Cases 1 and 2) or only even (in Cases 3 and 4) indices $k$, and with the $a_k$ either all real (in Cases 1 and 3) or all imaginary (in Cases 2 and 4). Such holomorphic functions $f$ will appear as projections onto a normal direction of the holomorphic curves considered in Section~\ref{ss:switching} near switches. Then Case 1 corresponds to a switch from $Q$ to $N$, Case 2 to a switch from $N$ to $Q$, Case 3 to a switch from $N$ to $N$, and Case 4 to a switch from $Q$ to $Q$. \end{remark} \begin{remark}\label{rem:series} It will sometimes be convenient to switch from the positive quadrant to other domains. For example, the map $\psi(z):=\sqrt{z}$ maps the upper half disk $D^+$ biholomorphically onto $Q^+$. Thus in Case 1 the composition $f\circ\psi$ is a holomorphic function on $D^+$ which maps ${\mathbb{R}}_+$ to ${\mathbb{R}}$ and ${\mathbb{R}}_-$ to $i{\mathbb{R}}$, and it has an expansion in powers of $\sqrt{z}$ by $$ f\circ\psi(z) = \sum_{j=1}^\infty a_{2j-1}z^{j-1/2},\qquad a_j\in{\mathbb{R}}. $$ As another example, the map $\phi(s,t):=ie^{-\pi(s+it)/2}$ maps the strip $(0,\infty)\times[0,1]$ biholomorphically onto $Q^+\setminus\{0\}$. Thus in Case 1 the composition $f\circ\phi$ is a continuous function on ${\mathbb{R}}_+\times[0,1]$ which is holomorphic in the interior and maps ${\mathbb{R}}_+ \times \{0\}$ to $i{\mathbb{R}}$ and ${\mathbb{R}}_+\times \{1\}$ to ${\mathbb{R}}$, and it has a power series expansion $$ f\circ\phi(s,t) = -i\sum_{j=1}^\infty(-1)^j a_{2j-1}e^{-(2j-1)\pi(s+it)/2},\qquad a_j\in{\mathbb{R}}. $$ Similar discussions apply to the other cases. \end{remark} Let us consider once more the function $f:Q^+\to{\mathbb{C}}$ of Case 1 mapping $({\mathbb{R}}_+,i{\mathbb{R}}_+)$ to $({\mathbb{R}},i{\mathbb{R}})$. Its restrictions to $i{\mathbb{R}}_+$ resp.~${\mathbb{R}}_+$ naturally give rise to functions $f_-:(-1,0]\to{\mathbb{R}}$ resp.~$f_+:[0,1)\to{\mathbb{R}}$ via $$ f_-(t):=(-i)f(-it),\ t\leq 0,\qquad f_+(t):=f(t),\ t\geq 0.\qquad $$ Here and in the sequel we always use the isomorphism $(-i)=i^{-1}:i{\mathbb{R}}\to{\mathbb{R}}$ to identify $i{\mathbb{R}}$ with ${\mathbb{R}}$ in the target. So $f_\pm$ are related by $f_-=r_*f_+$, where the {\em reflection} $r_*f$ of a complex valued power series $f(t)=\sum_{k=1}^\infty a_kt^k$, $a_k\in{\mathbb{C}}^n$, is defined by $$ r_*f(t) := (-i)f(-it) = \sum_{k=1}^\infty(-i)^{k+1}a_kt^k. $$ (Note that the domain $\mathbb{C}$ and the target $\mathbb{C}$ play different roles here: multiplication by $(-i)$ on the domain comes from opening up the positive quadrant to the upper half plane, while multiplication by $(-i)$ in the target corresponds to the canonical rotation by $-J$ from $i\mathbb{R}\subset Q$ to $\mathbb{R}\subset N$.) The effect of $r_*$ on the power series expansion $f(t) = \sum_{j=1}^\infty a_{2j-1}t^{2j-1}$ in Case 1 is as follows: $$ r_*f(t) = (-i)\sum_{j=1}^\infty a_{2j-1}(-it)^{2j-1} = \sum_{j=1}^\infty (-1)^ja_{2j-1}t^{2j-1}, $$ so the coefficient $a_{2j-1}$ is changed to $(-1)^ja_{2j-1}$. Note that $a_1$ is changed to $-a_1$, which justifies the name ``reflection''. Now consider $f$ as in Case 2 mapping $({\mathbb{R}}_+,i{\mathbb{R}}_+)$ to $(i{\mathbb{R}},{\mathbb{R}})$. 
Here the restrictions to $i{\mathbb{R}}_+$ resp.~${\mathbb{R}}_+$ naturally give rise to functions $f_-:(-1,0]\to{\mathbb{R}}$ resp.~$f_+:[0,1)\to{\mathbb{R}}$ via $$ f_-(t):=f(-it),\ t\leq 0,\qquad f_+(t):=(-i)f(t),\ t\geq 0.\qquad $$ So $f_\pm$ are related by $f_-=-r_*f_+$, and the coefficient $a_{2j-1}$ in the power series expansion of $f_+$ is changed to $(-1)^{j+1}a_{2j-1}$. In particular, $a_1$ is unchanged so that $f_-$ and $f_+$ fit together to a function $(-1,1)\to{\mathbb{R}}$ of class $C^2$ (but not $C^3$). \subsection{Winding numbers}\label{ss:winding} Consider a holomorphic function $f:Q^+\to{\mathbb{C}}$ given by a power series $f(z)=\sum_{k=1}^\infty a_kz^k$ as in Cases 1-4 of the previous subsection. In each of these cases we define its {\em winding number} at $0$ as $$ w(f,0) := \frac{1}{2}\inf\{k\mid a_k\neq 0\}. $$ Note that the winding number is a half-integer in the first two cases and an integer in the last two cases. Also note that the winding number is given by $$ w(f,0) = \frac{1}{\pi}\int_{\gamma}f^*d\theta, $$ where $\gamma$ is a small arc in $Q^+$ connecting $(0,1)$ to $i(0,1)$. This can be seen, for example, by choosing $\gamma$ as a small quarter circle $Q^+\cap\partial D_\varepsilon$; then the symmetry of $f$ with respect to reflections at the coordinate axes implies $$ \frac{1}{\pi}\int_{\gamma}f^*d\theta = \frac{1}{4\pi}\int_{\partial D_\varepsilon}f^*d\theta = \frac{1}{4\pi}\cdot 2\pi\inf\{k\mid a_k\neq 0\} = w(f,0). $$ Next let $r>1$, denote by $D_r$ the open disk of radius $r$, by $$ H^+:=\{z\in{\mathbb{C}}\mid \Im(z)\geq 0\} $$ the upper half plane, and set $D_r^+:=D_r\cap H^+$. Consider a nonconstant continuous map $f:D_r^+\to{\mathbb{C}}$ which is holomorphic in the interior and maps the interval $(-r,r)$ to ${\mathbb{R}}\cup i{\mathbb{R}}$. Suppose that $f$ has no zeroes on the semi-circle $\partial D_1\cap H^+$. Then $f$ has finitely many zeroes $s_1,\dots,s_k$ in the interior of $D_1^+$ as well as finitely many zeroes $t_1,\dots,t_\ell$ in $(-1,1)$. (Finiteness holds because the holomorphic function $z\mapsto f(z)^2$ maps ${\mathbb{R}}$ to ${\mathbb{R}}$, and thus can only have finitely many zeroes by the Schwarz reflection principle and unique continuation.) Denote by $w(f,s_i)\in{\mathbb{N}}$ resp.~$w(f,t_j)\in\frac{1}{2}{\mathbb{N}}$ the winding numbers at the zeroes. Thus with the closed angular form $d\theta$ on ${\mathbb{C}}\setminus\{0\}$, $$ w(f,s_i) := \frac{1}{\pi}\int_{\alpha_i}f^*d\theta,\qquad w(f,t_j) := \frac{1}{\pi}\int_{\beta_j}f^*d\theta, $$ where $\alpha_i$ is a small circle around $s_i$ and $\beta_j$ is a small semi-circle around $t_j$ in $D_1^+$, both oriented in the counterclockwise direction. (Thus the $w(f,s_i)$ are even integers and the $w(f,t_j)$ are integers or half-integers). Denote by $\gamma$ the semi-circle $\partial D_1\cap H^+$ oriented in the counterclockwise direction. Then Stokes' theorem yields $$ \frac{1}{\pi}\int_\gamma f^*d\theta = \sum_{i=1}^k w(f,s_i) + \sum_{j=1}^\ell w(f,t_j). $$ Since all winding numbers are nonnegative, we have shown the following result. \begin{lemma}\label{lem:wind} Consider a nonconstant continuous map $f:D_r^+\to{\mathbb{C}}$ which is holomorphic in the interior and maps $(-r,r)$ to ${\mathbb{R}}\cup i{\mathbb{R}}$. Suppose that $f$ has no zeroes on the semi-circle $\gamma=\partial D_1\cap H^+$ and zeroes at $t_1,\dots,t_m\in(-1,1)$ (plus possibly further zeroes in $D_1^+$). Then $$ \frac{1}{\pi}\int_\gamma f^*d\theta \geq \sum_{j=1}^m w(f,t_j). 
$$ \end{lemma} More generally, for $n\geq 1$ consider a nonconstant continuous map $f:D_r^+\to{\mathbb{C}}^n$ which is holomorphic in the interior and maps $(-r,r)$ to ${\mathbb{R}}^n\cup i{\mathbb{R}}^n$. Suppose that $f$ has no zeroes on the semi-circle $\partial D_1\cap H^+$ and zeroes $z_1,\dots,z_m$ in $D_1^+$ (in the interior or on the boundary). For each direction $v\in S^{n-1}\subset{\mathbb{R}}^n$ we obtain a holomorphic map $f_v:=\pi_v\circ f$, where $\pi_v$ is the projection onto the complex line spanned by $v$. Fix a positive volume form $\Omega$ on $S^{n-1}$ of total volume $1$. Then there exists an open subset $V\subset S^{n-1}$ of measure $1$ such that for all $v\in V$, $f_v$ has zeroes precisely at the $z_j$ and their winding numbers are independent of $v\in V$. So we can define $$ w(f,z_j) := \int_V w(f_v,z_j)\Omega(v) = w(f_{v_0},z_j) $$ for any $v_0\in V$ and obtain \begin{corollary}\label{cor:wind} Consider a nonconstant continuous map $f:D_r^+\to{\mathbb{C}}^n$ which is holomorphic in the interior and maps $(-r,r)$ to ${\mathbb{R}}^n\cup i{\mathbb{R}}^n$. Suppose that $f$ has no zeroes on the semi-circle $\gamma=\partial D_1\cap H^+$ and zeroes at $t_1,\dots,t_m\in(-1,1)$ (plus possibly further zeroes in $D_1^+$). Then there exists an open subset $V\subset S^{n-1}$ of measure $1$ such that for every $v_0\in V$, $$ \frac{1}{\pi}\int_\gamma f_{v_0}^*d\theta = \int_V\left(\frac{1}{\pi}\int_\gamma f_v^*d\theta\right)\Omega(v) \geq \sum_{j=1}^m w(f,t_j). $$ \end{corollary} \subsection{Spikes}\label{ss:spikes} Consider again the upper half disk $D^+=\{z\in D\mid \Im(z)\geq 0\}$ and real points $-1<b_1<b_2<\cdots<b_\ell<1$. We are interested in holomorphic functions $f:D^+\setminus\{b_1,\dots,b_\ell\}$, continuous on $D^+$, mapping the intervals $[b_{i-1},b_i]$ alternatingly to ${\mathbb{R}}$ and $i{\mathbb{R}}$. We wish to describe models of 1- resp.~2-parameter families in which 2 resp.~3 of the $b_i$ come together. A model for such a 1-parameter family is \begin{equation}\label{eq:1-dimmodel} f_\varepsilon(z) := \sqrt{z(z-\varepsilon)},\qquad \varepsilon\geq 0 \end{equation} with zeroes at $0,\varepsilon$. A model for a 2-parameter family is \begin{equation}\label{eq:2-dimmodel} f_{\delta,\varepsilon}(z) := \sqrt{z(z+\delta)(z-\varepsilon)},\qquad \varepsilon,\delta\geq 0 \end{equation} with zeroes at $-\delta,0,\varepsilon$. Here we choose appropriate branches of the square root so that the functions become continuous. The images of these functions are shown in Figure~\ref{fig:spike-families}. \begin{figure} \labellist \small\hair 2pt \pinlabel $f_\varepsilon$ at 323 533 \pinlabel $f_{\delta,\varepsilon}$ at 323 191 \pinlabel ${\color{blue} 0}$ at 140 453 \pinlabel ${\color{blue} \varepsilon}$ at 181 453 \pinlabel ${\color{blue} 0}$ at 140 114 \pinlabel ${\color{blue} \varepsilon}$ at 181 114 \pinlabel ${\color{blue} -\delta}$ at 73 114 \endlabellist \centering \includegraphics[width=0.8\textwidth]{figures/spike-families} \caption{ Spikes in the model families $f_\varepsilon$ and $f_{\delta,\varepsilon}$. } \label{fig:spike-families} \end{figure} They show that $f_\varepsilon$ has a ``spike'' in the direction $i{\mathbb{R}}_+$ which disappears as $\varepsilon\to 0$, and $f_{\delta,\varepsilon}$ has two ``spikes'' in the directions ${\mathbb{R}}_-$ resp.~$i{\mathbb{R}}_+$ which disappear as $\delta$ resp.~$\varepsilon$ approaches zero. Based on these models, the notion of a ``spike'' will be formalized in Section~\ref{sec:string-ref}. 
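To make the boundary behavior of these models explicit (a routine check): on the real axis, $t(t-\varepsilon)\geq 0$ for $t\leq 0$ or $t\geq\varepsilon$ and $t(t-\varepsilon)\leq 0$ for $0\leq t\leq\varepsilon$, so $f_\varepsilon$ is real outside $[0,\varepsilon]$ and purely imaginary on $[0,\varepsilon]$, where (with the branch chosen as in Figure~\ref{fig:spike-families}) it runs from $0$ to $\tfrac{i\varepsilon}{2}$ and back; this is the spike, and it shrinks to the origin as $\varepsilon\to 0$. Similarly, $t(t+\delta)(t-\varepsilon)$ is nonnegative on $[-\delta,0]$ and nonpositive on $[0,\varepsilon]$, which produces the real spike over $[-\delta,0]$ and the imaginary spike over $[0,\varepsilon]$ in the second model.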
In the following section, functions with two spikes will appear in the following local model. Consider the $1$-parameter family of functions $f_a:Q^+\to{\mathbb{C}}$, $$ f_a(z)=i(az-z^3),\qquad a\in{\mathbb{R}}. $$ They map $({\mathbb{R}}_+,i{\mathbb{R}}_+)$ to $(i{\mathbb{R}},{\mathbb{R}})$ and thus correspond to Case 2 in Section~\ref{ss:series}. Via the identifications in that section, $f_a$ induces functions \begin{align*} f_-(a,t) &:= f_a(-it) = at+t^3, \qquad t\neq 0,\cr f_+(a,t) &:= (-i)f_a(t) = at-t^3, \qquad t\geq 0, \end{align*} which fit together to a $C^2$ (though not $C^3$) function $$ f(a,t) = at - {\rm sgn\,}(t)t^3,\qquad t\in{\mathbb{R}}. $$ In Case 1, one considers the functions $f_a(z)=-az+z^3$ mapping $({\mathbb{R}}_+,i{\mathbb{R}}_+)$ to $({\mathbb{R}},i{\mathbb{R}})$. Here the induced functions \begin{align*} f_-(a,t) &:= (-i)f_a(-it) = at+t^3, \qquad t\neq 0,\cr f_+(a,t) &:= f_a(t) = -at+t^3, \qquad t\geq 0 \end{align*} do not fit together to a $C^1$ function, but when we replace $f_+$ by $-f_+$ they fit together to the function $f(a,t)$ above. \section{String homology in arbitrary degree}\label{sec:string-ref} \subsection{Broken strings}\label{ss:brokenstring} Let $K$ be a framed oriented knot in some oriented 3-manifold $Q$. Fix a tubular neighborhood $N$ of $K$ and a diffeomorphism $N\cong S^1\times D^2$. Fix an integer $m\geq 3$ and a base point $x_0\in\partial N$. We also fix an $m$-jet of a curve passing through $x_0$ in $N$. Using the diffeomorphism $N \cong S^1 \times D^2$, this is equivalent to specifying suitable vectors $v_0^{(k)}\in{\mathbb{R}}^3$, $1\leq k\leq m$. The following definition refines the one given in Section~\ref{sec:string}, which corresponds to the case $m=1$. \begin{definition}\label{def:string} A {\em broken (closed) string with $2\ell$} switches on $K$ is a tuple $s=(a_1,\dots,a_{2\ell+1};s_1,\dots,s_{2\ell+1})$ consisting of real numbers $0=a_0<a_1<\dots<a_{2\ell+1}$ and $C^m$-maps $$ s_{2i+1}:[a_{2i},a_{2i+1}]\to N,\quad s_{2i}:[a_{2i-1},a_{2i}]\to Q $$ satisfying the following matching conditions at the end points $a_i$: (i) $s_1(0)=s_{2\ell+1}(a_{2\ell+1})=x_0$ and $s_1^{(k)}(0)=s_{2\ell+1}^{(k)}(a_{2\ell+1})=v_0^{(k)}$ for $1\leq k\leq m$. (ii) For $i=1,\dots,\ell$, $$ s_{2i}(a_{2i}) = s_{2i+1}(a_{2i})\in K,\qquad s_{2i-1}(a_{2i-1})=s_{2i}(a_{2i-1})\in K. $$ (iii) Denote by $\sigma_i$ the $D^2$-component of $s_i$ near its end points. Then for $i=1,\dots,\ell$ and $1\leq k\leq m/2$ (for the left hand side) resp.~$1\leq k\leq (m+1)/2$ (for the right hand side) \begin{gather*} \sigma_{2i}^{(2k)}(a_{2i}) = \sigma_{2i+1}^{(2k)}(a_{2i})=0,\qquad \sigma_{2i}^{(2k-1)}(a_{2i}) = (-1)^{k}\sigma_{2i+1}^{(2k-1)}(a_{2i}),\cr \sigma_{2i-1}^{(2k)}(a_{2i-1})=\sigma_{2i}^{(2k)}(a_{2i-1})=0,\qquad \sigma_{2i-1}^{(2k-1)}(a_{2i-1})=(-1)^{k+1}\sigma_{2i}^{(2k-1)}(a_{2i-1}). \end{gather*} \end{definition} We will refer to the $s_{2i}$ and $s_{2i+1}$ as {\em Q-strings} and {\em N-strings}, respectively. A typical picture of a broken string is shown in Figure~\ref{fig:1} on page~\pageref{fig:1}. Conditions (i) and (ii) in Definition~\ref{def:string} mean that the $s_i$ fit together to a continuous loop $s:[0,a_{2\ell+1}]\to Q$ with end points at $x_0$ (which fit together in $C^m$). Condition (iii) is motivated as follows: In Section~\ref{S:mdlisp} below, we consider almost complex structures $J$ on $T^*Q$ which are particularly well adapted to the immersed Lagrangian submanifold $Q \cup L_K \subset T^*Q$. 
For such a $J$, Lemma~\ref{l:knotnbhd} then provides a holomorphic embedding of a neighborhood $\mathcal{O}$ of $S^1 \times \{0\} \subset {\mathbb{C}} \times {\mathbb{C}}^2$ onto a neighborhood of $K \subset T^*Q$ mapping $\mathcal{O} \cap (S^1 \times i{\mathbb{R}}^2)$ to $Q$ and $\mathcal{O} \cap (S^1 \times {\mathbb{R}}^2)$ to $L_K$. Condition (iii) requires that the normal component $\sigma$ of $s$ at the switching points $a_i$ behaves like the boundary values of a holomorphic disk with boundary on $Q \cup L_K$ when projected to ${\mathbb{C}}^2$ in these coordinates near $K$. To see this, let us reformulate condition (iii). As in Section~\ref{ss:series}, to a complex valued polynomial $p(t)=\sum_{k=1}^mp_kt^k$, $p_k\in{\mathbb{C}}^2$, we associate its reflection $$ r_*p(t) = (-i)p(-it) = \sum_{k=1}^m(-i)^{k+1}p_kt^k. $$ Then two {\em real valued} polynomials $p(t)=\sum_{k=1}^mp_kt^k$ and $q(t)=\sum_{k=1}^mq_kt^k$, $p_k,q_k\in{\mathbb{R}}^2$, satisfy $r_*p=q$ if and only if for $1\leq k\leq m/2$ (on the left hand side) resp.~$1\leq k\leq (m+1)/2$ (on the right hand side) $$ p_{2k}=q_{2k}=0 \text{ and } p_{2k-1}=(-1)^kq_{2k-1}. $$ So in terms of the {\em normal Taylor polynomials} at the switching points $$ T^m\sigma_i(a_{i-1})(t) := \sum_{k=1}^m\frac{\sigma_i^{(k)}(a_{i-1})}{k!}t^k, \qquad T^m\sigma_i(a_i)(t) := \sum_{k=1}^m\frac{\sigma_i^{(k)}(a_i)}{k!}t^k, $$ condition (iii) is equivalent to the conditions $$ T^m\sigma_{2i}(a_{2i}) = r_*T^m\sigma_{2i+1}(a_{2i}),\qquad T^m\sigma_{2i-1}(a_{2i-1})=-r_*T^m\sigma_{2i}(a_{2i-1}). $$ These are precisely the conditions in Section~\ref{ss:series} describing the boundary behavior of holomorphic disks at a corner going from the imaginary to the real axis (Case 1, corresponding to a switch from $Q$ to $N$), resp.~from the real to the imaginary axis (Case 2, corresponding to a switch from $N$ to $Q$). \begin{remark} (a) The case $m=3$ suffices for the purposes of this paper. In fact, for $0$- and $1$-parametric families of strings we only need the conditions on the first derivatives (the case $m=1$ considered in Section~\ref{sec:string}), while for $2$-parametric families we also need the conditions on the second and third derivatives). Explicitly, condition (iii) for $m=3$ reads \begin{equation} \begin{gathered}\label{eq:m=3} \sigma_{2i}'(a_{2i}) = -\sigma_{2i+1}'(a_{2i}),\qquad \sigma_{2i-1}'(a_{2i-1})=\sigma_{2i}'(a_{2i-1}), \cr \sigma_{2i}''(a_{2i}) = \sigma_{2i+1}''(a_{2i}) = \sigma_{2i-1}''(a_{2i-1})=\sigma_{2i}''(a_{2i-1})=0, \cr \sigma_{2i}'''(a_{2i}) = \sigma_{2i+1}'''(a_{2i}),\qquad \sigma_{2i-1}'''(a_{2i-1})=-\sigma_{2i}'''(a_{2i-1}). \end{gathered} \end{equation} (b) In Definition~\ref{def:string} one could add the condition that all derivatives of the tangent components agree at switches (as it is the case for boundaries of holomorphic disks). However, we will not need such a condition and thus chose not to include it. Similarly, one could have required all the $s_j$ to be $C^\infty$ rather than $C^m$. \end{remark} We denote by $\Sigma^\ell$ the space of broken strings with $2\ell$ switches. We make it a Banach manifold by equipping it with the topology of ${\mathbb{R}}$ on the $a_j$ and the $C^m$-topology on the $s_j$. 
It comes with interior evaluation maps $$ {\rm ev}_i:(0,1)\times \Sigma^\ell\to Q \text{ resp.~}N,\qquad (t,s)\mapsto s_i\bigl((1-t)a_{i-1}+ta_i\bigr) $$ and corner evaluation maps $$ T_i:\Sigma^\ell\to ({\mathbb{R}}^2)^{\lfloor \frac{m+1}{2}\rfloor},\qquad s\mapsto T^m\sigma_i(a_i) \cong \Bigl(\sigma_i^{(2k-1)}(a_i)\Bigr)_{1\leq k\leq \lfloor \frac{m+1}{2}\rfloor}. $$ Moreover, concatenation at the base point $x_0$ yields a smooth map $$ \Sigma^\ell \times \Sigma^{\ell'}\mapsto \Sigma^{\ell+\ell'}. $$ \subsection{Generic chains of broken strings}\label{ss:chains} Now we define the generators of the string chain complex in degrees $d\in\{0,1,2\}$. Set $\Delta_0:=\{0\}$ and let $\Delta_d=\{(\lambda_1,\dots,\lambda_d)\in{\mathbb{R}}^d\mid \lambda_i\geq 0,\lambda_1+\dots+\lambda_d\leq 1\}$ denote the $d$-dimensional standard simplex for $d\geq 1$. It is stratified by the sets where some of the inequalities are equalities. Fix $m\geq 3$ as in the previous subsection. \begin{definition}\label{def:generic-chain} A {\em generic $d$-chain} in $\Sigma^\ell$ is a smooth map $S:\Delta_d\to \Sigma^\ell$ such that the maps ${\rm ev}_i\circ S:(0,1)\times \Delta_d\to Q$ and $T_i\circ S:\Delta_d\to ({\mathbb{R}}^2)^{\lfloor \frac{m+1}{2} \rfloor}$ are jointly transverse to $K$ resp.~jet-transverse to $0$ (on all strata of $\Delta_d$). \end{definition} Let us spell out what this means for $m=3$ in the cases $d=0,1,2$. \underline{$d=0$}: A generic $0$-chain is a broken string $s=(s_1,\dots s_{2\ell+1})$ such that (0a) $\dot\sigma_i(a_i)\neq 0$ for all $i$; (0b) $s_i$ intersects $K$ only at its end points. \smallskip \underline{$d=1$}: A generic 1-chain of broken strings is a smooth map $$ S:[0,1]\to\Sigma^\ell,\qquad \lambda\mapsto s^\lambda=(s_1^\lambda,\dots s_{2\ell+1}^\lambda) $$ such that (1a) $s^0$ and $s^1$ are generic strings; (1b) $\dot\sigma_i^\lambda(a_i^\lambda)\neq 0$ for all $i,\lambda$; (1c) for each $i$ the map $$ (0,1)\times(0,1)\to Q \text{ resp.~}N,\qquad (t,\lambda)\to s_i^\lambda\bigl((1-t)a_{i-1}^\lambda+ta_i^\lambda\bigr) $$ meets $K$ transversely in finitely many points $(t_a,\lambda_a)$. Moreover, distinct such intersections (even for different $i$) appear at distinct parameter values $\lambda_a$. \smallskip \underline{$d=2$}: A generic 2-chain of broken strings is a smooth map $$ S:\Delta_2\to\Sigma^\ell,\qquad \lambda\mapsto s^\lambda=(s_1^\lambda,\dots s_{2\ell+1}^\lambda) $$ such that (2a) the $s^\lambda$ at vertices $\lambda\in \Delta_2$ are generic strings; (2b) the restrictions of $S$ to edges of $\Delta_2$ are generic 1-chains; (2c) for each $i$ the map $$ (0,1)\times{\rm int} \Delta_2\to Q \text{ resp.~}N,\qquad (t,\lambda)\to s_i^\lambda\bigl((1-t)a_{i-1}^\lambda+ta_i^\lambda\bigr) $$ is transverse to $K$; moreover, we assume that the projection of the preimage of $K$ to $\Delta_2$ is an immersed submanifold $D_i \subset \Delta_2$ with transverse double points; (2d) for all $i,j$ the submanifolds $D_i,D_j\subset \Delta_2$ from (2c) meet transversely in finitely many points; (2e) for each $i$ the map $$ {\rm int} \Delta_2\to{\mathbb{R}}^2,\qquad \lambda\mapsto \dot\sigma_i^\lambda(a_i^\lambda) $$ meets $0$ transversely in finitely many points satisfying $(\sigma_i^\lambda)^{(3)}(a_i^\lambda)\neq 0$; moreover, these points do not meet the $D_j$. \smallskip We will see in the next subsection that the points in (2e) are limit points of both $D_i$ and $D_{i+1}$. 
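(For orientation, the dimension counts behind these conditions: $K$ has codimension $2$ in $Q$, so for a generic $2$-chain the preimage of $K$ under each evaluation map on $(0,1)\times{\rm int}\,\Delta_2$ is $1$-dimensional as in (2c), and $\lambda\mapsto\dot\sigma_i^\lambda(a_i^\lambda)$ maps the $2$-dimensional ${\rm int}\,\Delta_2$ to ${\mathbb{R}}^2$, so its transverse zeroes in (2e) are isolated; for a generic $1$-chain the corresponding preimages are finite and the corresponding zero sets are empty, which is the content of (1c) and (1b).)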
\subsection{String operations}\label{ss:string-op} \begin{figure} \labellist \small\hair 2pt \pinlabel $q_1$ at 72 28 \pinlabel ${\color{blue} p_1}$ at 193 145 \endlabellist \centering \includegraphics[width=0.5\textwidth]{figures/spike} \caption{ A spike with ends $(p,q)$. } \label{fig:spike} \end{figure} Now we define the relevant operations on generic chains of broken strings. Let $\partial$ denote the singular boundary operator, thus $$ \partial\{s^\lambda\} := s^1-s^0,\qquad \partial S := S|_{\partial\Delta_2} $$ for $1$-resp.~$2$-chains. For the definition of string coproducts we need the following \begin{definition}\label{def:spike} Let $p(t)=\sum_{k=1}^mp_kt^k$ and $q(t)=\sum_{k=1}^mq_kt^k$, $p_k,q_k\in{\mathbb{R}}^2$, be real polynomials with $\langle p_1,q_1\rangle<0$. A {\em spike} with ends $(p,q)$ is a $C^m$-function $f:[a,b]\to D^2$ with the following properties (see Figure~\ref{fig:spike}): (S1) the Taylor polynomials to order $m$ of $f$ at $a$ resp.~$b$ agree with $p$ resp.~$q$; (S2) $\langle f(t),p_1\rangle>0$ and $\langle f(t),q_1\rangle<0$ for all $t\in(a,b)$. \end{definition} \begin{remark}\label{rem:spikes-convex} Note that the spikes with fixed ends $(p,q)$ and fixed or varying $a<b$ form a convex (hence contractible) space. \end{remark} We choose a family of {\em preferred spikes ${\mathfrak s}_{p,q}:[0,1]\to D^2$} for all $(p,q)$ depending smoothly (with respect to the $C^m$-topology) on the coefficients of $p$ and $q$. Now we are ready to define the {\em string coproducts} $\delta_N,\delta_Q$ on generic $d$-chains for $d\leq 2$. \underline{$d=0$}: On $0$-chains set $\delta_N=\delta_Q=0$. \underline{$d=1$}: For a $1$-chain $\{s^\lambda\}_{\lambda\in[0,1]}$ let $(\lambda^j,b^j)$ be the finitely many values for which $s_{2i}^{\lambda^j}(b^j)\in K$ for some $i=i(j)$. Set $$ \delta_Q\{s^\lambda\} := \sum_{j}\varepsilon^j\Bigl(s_1^{\lambda^j},\dots, s^{\lambda^j}_{2i}|_{[a_{2i-1},b^j]},{\mathfrak s}^j, \hat s^{\lambda^j}_{2i}|_{[b^j,a_{2i}]}, \dots, \hat s^{\lambda^j}_{2\ell+1}\Bigr), $$ where ${\mathfrak s}^j={\mathfrak s}(\cdot-b^j):[b^j,b^j+1]\to N$ is a shift of the preferred spike ${\mathfrak s}$ with ends $\bigl(r_*T^m\sigma_{2i}^{\lambda^j}(b_j), T^m\sigma_{2i}^{\lambda^j}(b_j)\bigr)$ in the normal directions, with constant value $s_{2i}^{\lambda^j}(b^j)$ along $K$. The hat means shift by $1$ in the argument, and $\varepsilon^j=\pm 1$ are the signs defined in Figure~\ref{fig:2}. Loosely speaking, $\delta_Q$ inserts an {\em $N$-spike} at all points where some $Q$-string meets $K$. The operation $\delta_N$ is defined analogously, inserting a {\em $Q$-spike} where an $N$-string meets $K$. Note that by Definition~\ref{def:spike} the spikes stay in $N$ and meet $K$ only at their end points. 
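(Regarding Remark~\ref{rem:spikes-convex}: for a fixed interval $[a,b]$, convexity is immediate, since the Taylor conditions (S1) are affine and the two inequalities in (S2) are preserved under convex combinations, so $(1-\tau)f_0+\tau f_1$ is again a spike with ends $(p,q)$ for any two such spikes $f_0,f_1$ and $\tau\in[0,1]$. This is what is used below when spikes with prescribed ends are interpolated in the construction of $\delta_Q$ on $2$-chains.)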
\comment{ \begin{figure}[h] \begin{center} \epsfbox{cord_02.eps} \caption{The signs in the definition of $\delta_N$ and $\delta_Q$.} \label{fig:2} \end{center} \end{figure} }
\begin{figure} \labellist \small\hair 2pt \pinlabel $U^j$ at 57 306 \pinlabel $\lambda^j$ at 37 276 \pinlabel $D_\delta$ at 307 292 \pinlabel ${\mathbb{R}}^2$ at 31 93 \pinlabel $D_\varepsilon$ at 323 101 \pinlabel $\times (-\gamma,\gamma)$ at 100 265 \pinlabel $\times (-\gamma,\gamma)$ at 347 265 \pinlabel $\times (-\gamma,\gamma)$ at 361 65 \pinlabel $\psi\times{\rm 1\mskip-4mu l}$ at 199 286 \pinlabel $\cong$ at 199 257 \pinlabel $\sigma$ at 41 182 \pinlabel $\tilde{f}$ at 183 192 \pinlabel $\cong$ at 319 174 \pinlabel $\Phi\times{\rm 1\mskip-4mu l}$ at 362 174 \pinlabel $\Phi\circ\Psi$ at 198 67 \pinlabel $f$ at 181 4 \endlabellist \centering \includegraphics[width=250pt]{figures/f} \caption{ Construction of the map $f$. } \label{fig:f} \end{figure}
\underline{$d=2$}: Finally, consider a generic $2$-chain $S:\Delta_2\to\Sigma^\ell$. Let $\lambda^j\in{\rm int} \Delta_2$ be the finitely many points where $\dot\sigma_i^{\lambda^j}(a_i^{\lambda^j})=0$ for some $i=i(j)$. For the following construction see Figure~\ref{fig:f}. Let $\delta>0$ be a number $\leq 1$ such that the map $\psi:\lambda\mapsto \dot\sigma_i^{\lambda}(a_i^{\lambda})$ is a diffeomorphism from a neighborhood $U^j$ of $\lambda^j$ onto the $\delta$-disk $D_\delta\subset{\mathbb{R}}^2$ (such $\delta$ exists by condition (2e) in Section~\ref{ss:chains}). We choose $U^j$ so small that it contains no other $\lambda^i$. Let $\gamma>0$ be a number $\leq 1$ such that $|\sigma^\lambda(t+a_i^\lambda)|\leq 1$ for all $|t|\leq\gamma$. Consider the function $\sigma:U^j\times(-\gamma,\gamma)\to{\mathbb{R}}^2$ defined by $$ \sigma(\lambda,t) := \begin{cases} \sigma_i^\lambda(t+a_i^\lambda) :& t<0, \cr -\sigma_{i+1}^\lambda(t+a_i^\lambda) :& t\geq 0 \text{ if $i$ is even}, \cr \sigma_{i+1}^\lambda(t+a_i^\lambda) :& t\geq 0 \text{ if $i$ is odd}. \end{cases} $$ According to conditions~\eqref{eq:m=3}, the function $\sigma(\lambda,t)$ is smooth in $\lambda$ and of class $C^2$ but not $C^3$ in $t$. Define the function $$ \tilde f:D_\delta\times (-\gamma,\gamma)\to{\mathbb{R}}^2,\qquad (a,b,t)\mapsto \sigma\bigl(\psi^{-1}(a,b),t\bigr). $$ By construction we have $\frac{\partial\tilde f}{\partial t}(a,b,0)=(a,b)$ for all $(a,b)$. Moreover, by condition (2e) in Section~\ref{ss:chains} we have $v^j:=(\sigma_i^{\lambda^j})^{(3)} (a_i^{\lambda^j})\neq 0$. Let $\Psi$ be the rotation of ${\mathbb{R}}^2$ which maps $v^j$ onto a vector $(\mu,0)$ with $\mu>0$, let $\Phi:{\mathbb{R}}^2\to{\mathbb{R}}^2$ be multiplication by $6/\mu$, and set $\varepsilon:=6\delta/\mu$. Then the map \begin{equation}\label{eq:f} f:=\Phi\circ\Psi\circ\tilde f\circ(\Phi^{-1}\times{\rm 1\mskip-4mu l}):D_\varepsilon\times (-\gamma,\gamma)\to{\mathbb{R}}^2 \end{equation} satisfies \begin{gather*} f(a,b,0)=(0,0),\quad \frac{\partial f}{\partial t}(a,b,0)=(a,b),\cr \frac{\partial^2 f}{\partial t^2}(a,b,0)=(0,0),\quad \frac{\partial^3 f}{\partial t^3}(0,0,0)=\pm(6,0) \end{gather*} for all $(a,b)$. Here the map $f$ is $C^2$ but not $C^3$, and the statement about the third derivative $\frac{\partial^3 f}{\partial t^3}(0,0,0)$ means that it equals $+(6,0)$ from the left and $-(6,0)$ from the right.
Therefore, $f$ has a Taylor expansion (again considered for $t\leq 0$ and $t\geq 0$ separately) \begin{equation}\label{eq:taylor} f(a,b,t) = \Bigl(at-{\rm sgn\,}(t)t^3,bt\Bigr) + O(|a||t|^3+|b||t|^3+|t|^4). \end{equation} Here to simplify notation we tacitly assume that the restrictions of $f$ to $t\leq 0$ and $t\geq 0$ are $C^4$ rather than $C^3$. The following argument carries over to the $C^3$ case if we replace throughout $O(|t|^4)$ by $o(|t|^3)$. Consider first the model case without higher order terms, i.e.~the function $$ f^0(a,b,t) = \Bigl(at-{\rm sgn\,}(t)t^3,bt\Bigr). $$ Note that the first component $at-{\rm sgn\,}(t)t^3$ of $f^0$ is exactly the function that we encountered at the end of Section~\ref{ss:spikes}. The zero set of $f^0$ consists of three strata $$ \{t=0\}\cup \{b=0,a>0,t=\sqrt{a}>0\}\cup \{b=0,a<0,t=-\sqrt{-a}<0\}. $$ For $a>0$ and $b=0$ the function $$ f_a:[0,\sqrt{a}]\to{\mathbb{R}}^2,\qquad t\mapsto f^0(a,0,t)=(at-t^3,0) $$ is a spike with ends satisfying $$ f_a'(0)=(a,0),\quad f_a'(\sqrt{a})=(-2a,0),\quad f_a'''(0)=f_a'''(\sqrt{a})=-6. $$ Similarly, for $a<0$ the function $$ f_a:[-\sqrt{-a},0]\to{\mathbb{R}}^2,\qquad t\mapsto f^0(a,0,t)=(at+t^3,0) $$ is a spike with ends satisfying $$ f_a'(0)=(a,0),\quad f_a'(-\sqrt{-a})=(-2a,0),\quad f_a'''(0)=f_a'''(-\sqrt{-a})=+6. $$ So two families of spikes pointing in the same directions come together from both sides along the $a$-axis $\{b=0\}$ and vanish at $(a,b)=(0,0)$, see Figure~\ref{fig:spikes-vanishing}. The following lemma states that this qualitative picture persists in the presence of higher order terms. \begin{figure} \labellist \small\hair 2pt \pinlabel ${\mathbb{R}}\times\{0\}$ at 289 385 \pinlabel $-\varepsilon$ at 165 171 \pinlabel $\varepsilon$ at 407 171 \pinlabel $a$ at 522 182 \pinlabel $b$ at 369 345 \endlabellist \centering \includegraphics[width=0.8\textwidth]{figures/spikes-vanishing} \caption{ Two families of spikes vanishing at the origin. } \label{fig:spikes-vanishing} \end{figure} \begin{lemma}\label{lem:spike} Let $f:D_\varepsilon\times(-\gamma,\gamma)\to{\mathbb{R}}^2$ be a function satisfying~\eqref{eq:taylor}. Then for $\varepsilon$ and $\gamma$ sufficiently small there exist smooth functions $\beta(a,t)$ and $\tau(a)$ for $a\in[-\varepsilon,\varepsilon]\setminus\{0\}$ such that with $\beta(a):=\beta\bigl(a,\tau(a)\bigr)$ the zero set of $f$ in $D_\varepsilon\times(-\gamma,\gamma)$ consists of three strata $$ \{t=0\}\cup \{b=\beta(a),a>0,t=\tau(a)>0\}\cup \{b=\beta(a),a<0,t=\tau(a)<0\}. $$ The functions $\beta,\tau$ satisfy the estimates $$ \beta(a,t) = O(|a|t^2+|t|^3),\quad \tau(a)^2-a = O(a^{3/2}),\quad \beta(a) = O(a^{3/2}). $$ Moreover, the functions \begin{gather*} f_a:[0,\tau(a)]\to{\mathbb{R}}^2,\qquad t\mapsto f\bigl(a,\beta(a),t\bigr),\quad a>0,\cr f_a:[\tau(a),0]\to{\mathbb{R}}^2,\qquad t\mapsto f\bigl(a,\beta(a),t\bigr),\quad a<0 \end{gather*} are spikes with ends satisfying \begin{gather*} f_a'(0)=(a,0)+O(a^{3/2}),\quad f_a'\bigl(\tau(a)\bigr)=(-2a,0)+O(a^{3/2}). \end{gather*} \end{lemma} \begin{proof} We consider the case $a,t>0$, the case $a,t<0$ being analogous. Setting the second component in~\eqref{eq:taylor} to zero and dividing by $t$ yields $b=O(at^2+bt^2+t^3)$, which for $t$ sufficiently small can be solved for $b=\beta(a,t)$ satisfying the estimate $\beta(a,t) = O(at^2+t^3)$. 
Inserting this into the first component in~\eqref{eq:taylor}, setting it to zero and dividing by $t$ yields $$ a-t^2=O\Bigl(at^2+\beta(a,t)t^2+t^3\Bigr) = O(at^2+t^3), $$ which for $(a,t)$ sufficiently small can be solved for $t=\tau(a)$ satisfying the estimate $\tau(a)^2-a = O(a^{3/2})$. Inserting $t=\tau(a)$ in $\beta(a,t)$ we obtain the estimate $\beta(a) = O(a^{3/2})$. This proves the first assertions. Now consider the function $f_a(t)=f\bigl(a,\beta(a),t\bigr)$ for $t\in[0,\tau(a)]$ and $a>0$. Inserting $\beta(a) = O(a^{3/2})$ we find \begin{align*} f_a(t) &= (at-t^3,0) + O\Bigl(\beta(a)t+at^3+\beta(a)t^3+t^4\Bigr)\cr &= (at-t^3,0) + O(a^{3/2}t+at^3+t^4), \end{align*} and therefore $f_a'(t)=(a-3t^2,0)+O(a^{3/2}+at^2+t^3)$. This immediately gives $f_a'(0)=(a,0)+O(a^{3/2})$ and, using $\tau(a)=O(a^{1/2})$, also $f_a'\bigl(\tau(a)\bigr)=(-2a,0)+O(a^{3/2})$. It remains to prove that the functions $f_a$ are spikes in the sense of Definition~\ref{def:spike}. Write in components $f=(f^1,f^2)$ and $f_a=(f^1_a,f^2_a)$ and abbreviate $\tau:=\tau(a)$. We claim that there exist constants $\delta,D>0$ independent of $a,t$ such that for all $t\in[0,\tau]$ we have $$ f^1_a(t)\geq2\delta t(\tau^2-t^2),\qquad |f_a^2(t)|\leq Dt(a+t)(\tau-t). $$ For the first estimate, note that $$ \frac{1}{t}f^1_a(t) = a-t^2+O(a^{3/2}+at^2+t^3), $$ viewed as a function of $t^2$, has transversely cut out zero locus $t=\tau$ and is therefore $\geq2\delta(\tau^2-t^2)$ for some $\delta>0$. The second estimate holds because $$ \frac{1}{t}f^2_a(t) = O(a^{3/2}+at^2+t^3) $$ vanishes at $t=\tau$, so $|f^2_a(t)|\leq Dt(a+t)(\tau-t)$ for some constant $D$. Using these estimates as well as $f_a'(0)=(a,0)+O(a^{3/2})$ and $\tau=O(a^{1/2})$ we compute with a generic constant $C$ (independent of $a,t$): \begin{align*} \langle f_a'(0),f_a(t)\rangle &= \bigl(a+O(a^{3/2})\bigr)f_a^1(t) + \langle O(a^{3/2}),f_a^2(t)\rangle \cr &\geq a\delta t(\tau^2-t^2) - Ca^{3/2}t(\tau-t)(a+t) \cr &= at(\tau-t)\Bigl(\delta(\tau+t)-Ca^{1/2}(a+t)\Bigr) \cr &\geq a^{3/2}t(\tau-t)\bigl(\delta-C(a+t)\bigr), \end{align*} which is positive for $0<t<\tau$ and $a$ sufficiently small. An analogous computation, using $f_a'(\tau)=(-2a,0)+O(a^{3/2})$, shows $\langle f_a'(\tau),f_a(t)\rangle<0$, so $f_a$ is a spike. \end{proof} \begin{remark}\label{rem:spike-interpol} The spikes from Lemma~\ref{lem:spike} can be connected to the spike of the model function $f^0$ without higher order terms by rescaling: For $s\in(0,1]$ set \begin{align*} f^s(a,b,t) &:= \frac{1}{s^3}f(s^2a,s^2b,st) \cr &= \Bigl(at-{\rm sgn\,}(t)t^3,bt\Bigr) + sO(|a||t|^3+|b||t|^3+|t|^4) \cr & \overset{s\to 0}{\longrightarrow} \Bigl(at-{\rm sgn\,}(t)t^3,bt\Bigr). \end{align*} Thus for $|a|\leq \varepsilon$ the corresponding family of spikes $(f^s_a)_{s\in[0,1]}$ connects $f_a$ to the spike $f_a^0$. \end{remark} Now we return to the points $\lambda^j\in U^j$ and the corresponding maps $f:D_\varepsilon\times(-\gamma,\gamma)\to{\mathbb{R}}^2$ defined by~\eqref{eq:f}. After shrinking $\varepsilon,\gamma>0$ and replacing $U^j$ by $(\Phi\circ\psi)^{-1}(D_\varepsilon)\subset \Delta_2$ (where $\psi,\Phi$ are the maps defined above), we may assume that $\varepsilon,\gamma$ satisfy the smallness requirement in Lemma~\ref{lem:spike} for each $j$. Define \begin{gather*} M_{\tilde\delta_Q} := \bigcup_i({\rm ev}_{2i}\circ S)^{-1}(K) \setminus\bigcup_j\bigl(U^j\times(0,1)\bigr),\cr M_{\tilde\delta_N} := \bigcup_i({\rm ev}_{2i-1}\circ S)^{-1}(K) \setminus\bigcup_j\bigl(U^j\times(0,1)\bigr). 
\end{gather*} By construction, $M_{\tilde\delta_Q}$ and $M_{\tilde\delta_N}$ are 1-dimensional submanifolds with boundary of $\Delta_2\times(0,1)$. Define $\tilde\delta_QS:M_{\tilde\delta_Q}\to\Sigma^{\ell+1}$ by inserting preferred $N$-spikes at all points where some $Q$-string meets $K$ (via the same formula as the one above for $\delta_Q$ on $1$-chains), and similarly for $\tilde\delta_NS$. See Figure~\ref{fig:string-relations}. Note that the boundary $\partial M_{\tilde\delta_Q}$ consists of intersections with $\partial\Delta_2$ and with the boundaries $\partial U^j$. Thus each $j$ contributes a unique point $\lambda^j_Q$ to $\partial M_{\tilde\delta_Q}$, which corresponds in the above coordinates to $a=+\varepsilon$ if the associated index $i$ is odd and to $a=-\varepsilon$ if $i$ is even. Similarly, each $j$ contributes a unique point $\lambda^j_N$ to $\partial M_{\tilde\delta_N}$ which corresponds in the above coordinates to $a=-\varepsilon$ if the associated index $i$ is odd and to $a=+\varepsilon$ if $i$ is even. The broken strings $\tilde\delta_QS(\lambda^j_Q)$ and $\tilde\delta_NS(\lambda^j_N)$ are $C^m$-close for $|t|\geq\gamma$, and by Lemma~\ref{lem:spike} for $|t|<\gamma$ they both have a $Q$-spike and an $N$-spike {\em with the same first derivatives at the ends}. So, using convexity of the space of spikes with fixed ends (Remark~\ref{rem:spikes-convex}, see also Remark~\ref{rem:spike-interpol}), we can connect them by a short $1$-chain $S^j:[0,1]\to\Sigma^{\ell+1}$ with spikes in $[-\gamma,\gamma]$ (which we regard as $Q$-spikes.) We define $\delta_QS:M_{\delta_Q}\to\Sigma^{\ell+1}$ to be $\tilde\delta_QS$ together with the $1$-chains $S^j$, and we set $\delta_NS:=\tilde\delta_NS:M_{\delta_N}=M_{\tilde\delta_N}\to\Sigma^{\ell+1}$. Recall that the $1$-dimensional submanifold $M_{\tilde\delta_Q}\subset\Delta_2\times(0,1)$ is the union of the transversely cut out preimages of $K$ under the evaluation maps ${\rm ev}_{2i}\circ S:\Delta_2\times(0,1)\to Q$. Hence the coorientation of $K\subset Q$ and the orientation of $\Delta_2\times(0,1)$ induce an orientation on $M_{\tilde\delta_Q}$, and similarly for $M_{\tilde\delta_N}$. (The induced orientations depend on orientation conventions which will be fixed in the proof of Proposition~\ref{prop:string-relations} below.) We parametrize each connected component of $M_{\tilde\delta_Q}$ and $M_{\tilde\delta_N}$ by the interval $\Delta_1=[0,1]$ proportionally to arclength (with respect to the standard metric on $\Delta_2\times(0,1)$ and in the direction of the orientation, where for components diffeomorphic to $S^1$ we choose an arbitrary initial point). So we can view $\delta_QS:M_{\delta_Q}\to\Sigma^{\ell+1}$ and $\delta_NS:M_{\delta_N}\to\Sigma^{\ell+1}$ as generic $1$-chains, where we orient the $1$-chains $S^j$ such that the points $\tilde\delta_QS(\lambda^j_Q)$ appear with opposite signs in the boundary of $S^j$ and $M_{\tilde\delta_Q}$. \begin{prop}\label{prop:string-relations} On generic chains of degree $2$, the operations $\partial$, $\delta_Q$ and $\delta_N$ satisfy the relations \begin{gather*} \partial^2=\delta_Q^2=\delta_N^2=\delta_Q\delta_N+\delta_N\delta_Q=0 ,\cr \partial\delta_Q+\delta_Q\partial + \partial\delta_N+\delta_N\partial = 0. \end{gather*} In particular, these relations imply $$ (\partial+\delta_Q+\delta_N)^2=0. $$ \end{prop} \begin{proof} Consider a generic $2$-chain $S:\Delta_2\to\Sigma^\ell$. We continue to use the notation above and denote by $\pi:\Delta_2\times(0,1)\to\Delta_2$ the projection. 
The relation $\partial^2S=0$ is clear. Points in $\delta_Q^2S$ correspond to transverse self-intersections of $\pi(M_{\tilde\delta_Q})$, so each point appears twice with opposite signs, hence $\delta_Q^2S=0$ and similarly $\delta_N^2S=0$. Points in $\delta_Q\delta_NS+\delta_N\delta_QS$ correspond to transverse intersections of $\pi(M_{\tilde\delta_Q})$ and $\pi(M_{\tilde\delta_N})$, so again each point appears twice with opposite signs and the expression vanishes. Note that the broken strings corresponding to these points have two preferred spikes inserted at different places, so due to the uniqueness of preferred spikes with given end points the broken strings do not depend on the order in which the spikes are inserted. \begin{figure} \labellist \small\hair 2pt \pinlabel $S^j$ at 68 102 \pinlabel $\tilde\delta_NS$ at 42 35 \pinlabel $\tilde\delta_QS$ at 140 65 \pinlabel ${\color{blue} \Delta_2}$ at 99 227 \pinlabel ${\color{blue} U^j}$ at 63 139 \pinlabel ${\color{blue} \lambda^j_N}$ at 48 75 \pinlabel ${\color{blue} \lambda^j_Q}$ at 102 116 \endlabellist \centering \includegraphics[width=0.6\textwidth]{figures/string-relations} \caption{ The definition of $\delta_QS=\tilde\delta_QS+S^j$ and $\delta_NS=\tilde\delta_NS$. } \label{fig:string-relations} \end{figure} In order to achieve $\partial\delta_Q+\delta_Q\partial + \partial\delta_N+\delta_N\partial = 0$, we choose the orientation conventions for $M_{\tilde\delta_Q}$ and $M_{\tilde\delta_N}$ such that (see Figure~\ref{fig:string-relations}): \begin{itemize} \item $\partial\tilde\delta_QS+\tilde\delta_Q\partial S$ corresponds to the intersection points $\lambda^j_Q$ of $M_{\tilde\delta_Q}$ with the boundaries of the regions $U^j$, and similarly for $\partial\tilde\delta_NS+\tilde\delta_N\partial S$; \item the sign of $\lambda^j_Q$ as a boundary point of $M_{\tilde\delta_Q}$ is opposite to the sign of $\lambda^j_N$ as a boundary point of $M_{\tilde\delta_N}$. \end{itemize} Due to the choice of the $1$-chains $S^j$, it follows that $\partial\delta_QS+\delta_Q\partial S$ is the sum of the points $\delta_NS(\lambda^j_N)$ with suitable signs, and $\partial\delta_NS+\delta_N\partial S$ is the same sum with opposite signs, so the total sum equals zero. \end{proof} \subsection{The string chain complex} For $d=0,1,2$ and $\ell\geq 0$ let $C_d(\Sigma^\ell)$ be the free ${\mathbb{Z}}$-module generated by generic $d$-chains in $\Sigma^\ell$, and set $$ C_d(\Sigma) := \bigoplus_{\ell=0}^\infty C_d(\Sigma^\ell),\qquad d=0,1,2. $$ The string operations defined in Subsection~\ref{ss:string-op} yield ${\mathbb{Z}}$-linear maps $$ \partial:C_d(\Sigma^\ell)\to C_{d-1}(\Sigma^\ell),\qquad \delta_N,\delta_Q:C_d(\Sigma^\ell)\to C_{d-1}(\Sigma^{\ell+1}). $$ The induced maps $\partial,\delta_Q,\delta_N:C_d(\Sigma)\to C_{d-1}(\Sigma)$ satisfy the relations in Proposition~\ref{prop:string-relations}, in particular $$ D:=\partial+\delta_Q+\delta_N. $$ satisfies $D^2=0$. We call $\bigl(C_*(\Sigma),\partial+\delta_Q+\delta_N\bigr)$ the {\em string chain complex} of $K$, and we define the {\em degree $d$ string homology} of $K$ as the homology of the resulting complex, $$ H_d^{\rm string}(K) := H_d\Bigl(C_*(\Sigma),\partial+\delta_Q+\delta_N\Bigr),\qquad d=0,1,2. 
$$ Concatenation of broken strings at the base point $x_0$ (and the canonical subdivision of $\Delta_1\times\Delta_1$ into two $2$-simplices) yields products $$ \times:C_d(\Sigma^\ell)\times C_{d'}(\Sigma^{\ell'})\to C_{d+d'}(\Sigma^{\ell+\ell'}), \qquad d+d'\leq 2 $$ satisfying the relations \begin{equation}\label{eq:x} (a\times b)\times c = a\times (b\times c),\qquad D(a\times b)=Da\times b+(-1)^{\deg a} a\times Db \end{equation} whenever $\deg a+\deg b+\deg c\leq 2$. In particular, this gives $C_0(\Sigma)$ the structure of a (noncommutative but strictly associative) algebra over ${\mathbb{Z}}$ and $C_1(\Sigma),C_2(\Sigma)$ the structure of bimodules over this algebra. These structures induce on homology the structure of a ${\mathbb{Z}}$-algebra on $H_0^{\rm string}(K)$, and of bimodules over this algebra on $H_1^{\rm string}(K)$ and $H_2^{\rm string}(K)$. By definition, the isomorphism classes of the algebra $H_0^{\rm string}(K)$ and the modules $H_1^{\rm string}(K),H_2^{\rm string}(K)$ are clearly isotopy invariants of the framed oriented knot $K$. We can combine these invariants into a single graded algebra as follows. For $d>2$, we define $C_d(\Sigma^\ell)$ to be the free ${\mathbb{Z}}$-module generated by products $S_1\times \dots\times S_r$ of generic chains $S_i$ of degrees $1\leq d_i\leq 2$ in $\Sigma^{\ell_i}$ such that $d_1+\dots+d_r=d$ and $\ell_1+\dots+\ell_r=\ell$, modulo the submodule generated by $$ S_1\times \dots\times S_r - S'_1\times \dots\times S'_{r'} $$ for different decompositions of the same $d$-chain. Put differently, this submodule is generated by $$ S_1\times\dots S_i\times S_{i+1}\times\dots \times S_r - S_1\times\dots (S_i\times S_{i+1})\times\dots \times S_r, $$ where $S_i$ and $S_{i+1}$ are generic $1$-chains and $(S_i\times S_{i+1})$ is the associated generic $2$-chain. Note that for $d=2$ this definition of $C_2(\Sigma^\ell)$ agrees with the earlier one. We define $D=\partial+\delta_Q+\delta_N$ on $$ C_d(\Sigma) := \bigoplus_{\ell=0}^\infty C_d(\Sigma^\ell),\qquad d\geq 0 $$ by the Leibniz rule. This is well-defined in view of the second equation in~\eqref{eq:x} and satisfies $D^2=0$. Together with the product $\times$ this gives $C_*(\Sigma)$ the structure of a differential graded ${\mathbb{Z}}$-algebra. The {\em total string homology} $$ H_*^{\rm string}(K) := H_*\Bigl(C_*(\Sigma),D\Bigr) $$ inherits the structure of a graded ${\mathbb{Z}}$-algebra whose isomorphism class is an invariant of the framed oriented knot $K$. \begin{remark} Our definition of string homology of $K$ in degrees $>2$ in terms of product chains is motivated by Legendrian contact homology of $\Lambda K$ when $Q={\mathbb{R}}^{3}$ which is then generated by elements of degrees $\leq 2$. From the point of view of string topology, it would appear more natural to define string homology in arbitrary degrees in terms of higher dimensional generic chains of broken strings in the sense of Definition~\ref{def:generic-chain}. Similarly, for knot contact homology in other ambient manifolds, e.g. for $Q=S^{3}$, there are higher degree Reeb chords that contribute to the (linearized) contact homology. It would be interesting to see whether such constructions would carry additional information. \end{remark} \subsection{Length filtration}\label{ss:length-filt} Up to this point, the constructions have been fairly symmetric in the $Q$-and $N$-strings. 
However, as we will see below, the relation to Legendrian contact homology leads us to assign to $Q$-strings $s_{2i}$ their geometric length $L(s_{2i})$, and to $N$-strings length zero. Thus we define the {\em length} of a broken string $s=(s_1,\dots,s_{2\ell+1})$ by $$ L(s) := \sum_{i=1}^\ell L(s_{2i}), $$ where we do not include in the sum those $s_{2i}$ that are $Q$-spikes in the sense of Definition~\ref{def:spike}. We define the length of a generic $i$-chain $S:K\to\Sigma$ by $$ L(S) := \max_{k\in K}L\bigl(S(k)\bigr). $$ Then the subspaces $$ \mathcal{F}^\ell C_i(\Sigma):=\left\{\sum a_jS_j\in C_i(\Sigma)\mid L(S_j)\leq\ell \text{ whenever }a_j\neq 0\right\} $$ define a filtration in the sense that $\mathcal{F}^k C_i(\Sigma)\subset \mathcal{F}^\ell C_i(\Sigma)$ for $k\leq\ell$ and $$ D\Bigl(\mathcal{F}^\ell C_i(\Sigma)\Bigr)\subset \mathcal{F}^\ell C_{i-1}(\Sigma). $$ This {\em length filtration} will play an important role in the proof of the isomorphism to Legendrian contact homology in Section~\ref{sec:iso}. \begin{remark}\label{rem:length-spikes} The omission of the length of $Q$-spikes from the length of a broken string ensures that the operation $\delta_N$, which inserts $Q$-spikes, does not increase the length. Since $Q$-spikes do not intersect the knot in their interior, they are not affected by $\delta_Q$ and it follows that $D$ preserves the length filtration. \end{remark}
1,108,101,562,603
arxiv
\section{Introduction} Cellular automata (CA) are models of parallel computation, so when implementing them on a sequential architecture, one cannot simply update the cells one by one -- some cells would see already updated states and the resulting configuration would be incorrect. The simplest-to-implement solution is to hold two copies of the current configuration in memory, and map $(x,x) \mapsto (x,G(x)) \mapsto (G(x),G(x))$. This is wasteful in terms of memory, and one can, with a bit of thinking, reduce the memory usage to a constant by simply remembering a `wave' containing the previous values of the $r$ cells to the left of the current cell, where $r$ is the radius of the CA. Here, we study the situation where the additional memory usage can be, in a sense, dropped to zero -- more precisely we remember \emph{only} the current configuration $x \in S^\mathbb{Z}$, and to apply the cellular automaton we sweep a permutation $\chi : S^m \to S^m$ from left to right over $x$ (applying it consecutively to all length-$m$ subwords of $x$). The positions where the sweep starts may get incorrect values, but after a bounded number of steps, the rule should start writing the image of the cellular automaton. We formalize this in two ways, with `sliders' and `sweepers', which are two ways of formally dealing with the problem that sweeps `start from infinity'. It turns out that the cellular automata that admit a sliding rule are precisely the ones that are left-closing (Definition~\ref{def:LeftClosing}), and whose number of right stairs (see Definition~\ref{def:stair}) of length $3m$ divides $|\ST|^{3m}$ for large enough $m$. This can be interpreted as saying that the average movement `with respect to any prime number' is not to the right. See Theorem~\ref{thm:SliderCharacterization2} and Theorem~\ref{thm:SweeperCharacterization} for the precise statements, and Section~\ref{sec:Decidability} for decidability results. We introduce the sweeping hierarchy where left-to-right sweeps and right-to-left sweeps alternate, and the closing hierarchy where left-closing and right-closing CA alternate. We show that the two hierarchies coincide starting from the second step. We do not know if the hierarchies collapse on a finite level. \subsection{Preliminaries} We denote the set of integers by $\mathbb{Z}$. For integers $i\leq j$ we write $[i,j)$ for $\{ x\in\mathbb{Z}\mid i\leq x < j\}$ and $[i,j]$ for $[i,j)\cup\{j\}$; furthermore $\itoinfty{i}=\{ x\in\mathbb{Z} \mid i\leq x\}$ and $\minftytoi{i}=\{ x\in\mathbb{Z} \mid x< i\}$ have the obvious meaning. Thus $\itoinfty{0}$ is the set of non-negative integers which is also denoted by $\NN$. Occasionally we use notation for a set $M$ of integers in a place where a \emph{list} of integers is required. If no order is specified we assume the natural increasing order. If the reversed order is required we will write $M^R$. For sets $A$ and $B$ the set of all functions $f: A\to B$ is denoted $B^A$. For $f\in B^A$ and $M\subseteq A$ the restriction of $f$ to $M$ is written as $f|_M$ or sometimes even $f_M$. Finite words $w\in\ST^n$ are lists of symbols, e.g.\xspace mappings $w: [0,n) \to \ST$. Number $n$ is the length of the word. The set of all finite words is denoted by $\ST^*$. Configurations of one-dimensional CA are biinfinite words $x:\mathbb{Z}\to\ST$. Instead of $x(i)$ we often write $x_i$. We define the \emph{left shift} $\sigma : S^\mathbb{Z} \to S^\mathbb{Z}$ by $\sigma(x)_i = x_{i+1}$. 
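To fix intuition, the sweeping idea from the introduction can be phrased in executable form. The following Python sketch is our illustration only (it is not part of the original development, and it operates on finite words, so boundary effects replace the issue of sweeps starting from infinity); it applies a block rule $\chi$ of block length $m$ at consecutive positions from left to right, and can be used to reproduce the swap and exclusive-or examples discussed below.
\begin{verbatim}
# Illustration only: chi is a function on m-tuples of states; sweep_lr
# applies it at positions 0, 1, ..., len(x)-m of a finite word, i.e. it
# computes a finite left-to-right sweep of the block rule.
def sweep_lr(chi, m, x):
    x = list(x)
    for i in range(len(x) - m + 1):
        x[i:i + m] = chi(tuple(x[i:i + m]))
    return x

# Swap rule (a,b) -> (b,a): the sweep carries the leftmost symbol to the
# far right and shifts all other symbols one cell to the left.
swap = lambda ab: (ab[1], ab[0])
print(sweep_lr(swap, 2, ['s', 1, 2, 3, 4]))      # [1, 2, 3, 4, 's']

# Rule (a,b) -> (a+b, b) over {0,1}: away from the right boundary the
# sweep writes x_i + x_{i+1}, the exclusive-or CA with neighborhood {0,1}.
xor_right = lambda ab: ((ab[0] + ab[1]) % 2, ab[1])
print(sweep_lr(xor_right, 2, [1, 0, 1, 1, 0]))   # [1, 1, 0, 1, 0]
\end{verbatim}
The marker symbol \texttt{'s'} in the first call plays the role of the distinguished state in the swap example below.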
The restriction of $x$ to a subset $\minftytoi{i}$ gives a left-infinite word for which we write $x_{\minftytoi{i}}$; for a right-infinite word we write $x_{\itoinfty{i}}$. These are called \emph{half-infinite words}. Half-infinite words can also be shifted by $\sigma$, and this is defined using the same formula. The domain is shifted accordingly so for $x\in\ST^{[i,\infty)}$ we have $\sigma(x)\in\ST^{[i-1,\infty)}$. We use a special convention for concatenating words: Finite words `float', in the sense that they live in $\ST^n$ for some $n$, without a fixed position, and $u \cdot v$ denotes the concatenation of $u$ and $v$ as an element of $\ST^{|u| + |v|}$. Half-infinite configurations have a fixed domain $(-\infty,i]$ or $[i,\infty)$ for some $i$, which does not change when they are concatenated with finite words or other half-infinite configurations, while finite words are shifted suitably so that they fill the gaps exactly (and whenever we concatenate, we make sure this makes sense). More precisely, for $w \in \ST^*$ and $y \in \ST^{(-\infty, i]}$, we have $y \cdot w \in \ST^{(-\infty,i+|w|]}$ and for $w \in \ST^*$ and $z \in \ST^{[i,\infty)}$ we have $w \cdot z \in \ST^{[i-|w|,\infty)}$ (defined in the obvious way). For a word $w \in \ST^*$ and half-infinite words $y \in \ST^{\minftytoi{i}}$ and $z \in \ST^{\itoinfty{i+n}}$ we write $y \cdot w \cdot z$ for the obvious configuration in $\ST^\mathbb{Z}$, and this is defined if and only if $|w| = n$. The set $\ST^\mathbb{Z}$ of configurations is assigned the usual product topology generated by cylinders. A \emph{cylinder} defined by word $w\in\ST^n$ at position $i\in\mathbb{Z}$ is the set \[ [w]_{[i,i+n)} = \{x\in\ST^\mathbb{Z}\ |\ x_{[i,i+n)}=w\} \] of configurations that contain word $w$ in position $[i,i+n)$. Cylinders are open and closed, and the open sets in $\ST^\mathbb{Z}$ are precisely the unions of cylinders. We extend the notation also to half-infinite configurations, and define $$ [y]_{D} = \{x\in\ST^\mathbb{Z}\ |\ x_{D}=y\} $$ for $D = [i,\infty)$ and $D = (-\infty, i]$, and any $y \in \ST^D$. These sets are closed in the topology. \section{Sliders and sweepers} A \emph{block rule} is a function $\chi : \ST^m \to \ST^m$. Given a block rule $\chi$ we want to define what it means to ``apply $\chi$ from left to right once at every position''. We provide two alternatives, compare them and characterize which cellular automata can be obtained by them. The first alternative, called a slider, assumes a bijective block rule $\chi$ that one can slide along a configuration left-to-right or right-to-left to transition between a configuration $y$ and its image $f(y)$. The second alternative, called a sweeper, must consistently provide values of the image $f(y)$ when sweeping left-to-right across $y$ starting sufficiently far on the left. We first define what it means to apply a block rule on a configuration. \begin{definition} Let $\chi : \ST^m \to \ST^m$ be a block rule and $i\in \mathbb{Z}$. The application of $\chi$ at coordinate $i$ is the function $\chi^i:\ST^\mathbb{Z}\longrightarrow\ST^\mathbb{Z}$ given by $\chi^i(x)_{[i,i+m)} = \chi(x_{[i,i+m)})$ and $\chi^i(x)_j=x_j$ for all $j\not\in[i,i+m)$. More generally, for $i_1,\ldots,i_k \in \mathbb{Z}$ we write \[ \chi^{i_1,\ldots,i_k} = \chi^{i_k} \circ \cdots \circ \chi^{i_2} \circ \chi^{i_1}. \] \end{definition} When $m > 1$, it is meaningless to speak about ``applying $\chi$ to each cell simultaneously'': An application of $\chi$ changes the states of several cells at once.
Applying it slightly shifted could change a certain cell again, but in a different way. We next define finite and infinite sweeps of block rule applications with a fixed start position. \begin{definition} Given a block rule $\chi$ for $i,j\in \mathbb{Z}$, $i\leq j$, define $\chi^{[i,j]}=\chi^j\circ\cdots\circ \chi^i$; analogously let $\chi^{[i,j)}=\chi^{j-1}\circ\cdots\circ \chi^i$. % For any configuration $x\in\ST^{\mathbb{Z}}$ and fixed $i\in\mathbb{Z}$ the sequence of configurations $x^{(j)}=\chi^{[i,j]}(x)$ for $j\in\itoinfty{i}$ has a limit (point in the topological space $\ST^{\mathbb{Z}}$) for which we write $\toright{\chi}(x)$. Analogously, for a block rule $\xi$ the sequence of configurations $x^{(j)} = \xi^{[j,i)^R}(x)$ for $j\in \minftytoi{i}$ has a limit for which we write $\toleft{\xi}(x)$. \end{definition} It should be observed that in the definition of $\toright{\chi}(x)$ one has $i<j$ and the block rule is applied at successive positions from left to right. On the other hand $j<i$ is assumed in the definition of $\toleft{\xi}(x)$ and since the ${}^{{}^R}$ in $\xi^{[j,i)^R}$ indicates application of $\xi$ at the positions in the \emph{reverse} order, i.e.\ $i-1, i-2, \dots, j$, the block rule is applied from right to left. The reason the limits always exist in the definition is that the value of $x^{(j)}_i$ changes at most $m$ times, on the steps where the sweep passes over the cell $i$. \begin{example} \label{ex:swap} % Let $S=\{0,1\}$ and consider the block rule $\chi: S^{[0,2)}\to S^{[0,2)}: (a,b)\mapsto (b,a)$. % For consistency with the above definition denote by $\xi$ the inverse of $\chi$ (which in this case happens to be $\chi$ again). % Let $s\in S$ and $y\in S^{\mathbb{Z}}$. % We will look at the configuration $x\in S^{\mathbb{Z}}$ with \[ x_i = \begin{cases} y_{i+1}, &\text{ if } i<0 \\ s, &\text{ if } i=0 \\ y_{i}, &\text{ if } i>0 \end{cases} \] The application of $\chi$ successively at positions $0, 1, 2, \dots$ always swaps state $s$ with its right neighbor. % Since cell $j$ can only possibly change when $\chi^{j-1}$ or $\chi^j$ is applied, each cell enters a fixed state after a finite number of steps; see also the lower part of Figure~\ref{fig:ex-swap} starting at the row with configuration $x$. \begin{figure}[ht] \centering \small \begin{tabular}[b]{R@{ \ $\cdots$}|*{7}{B|}c@{$\cdots$\quad}C} \cline{2-8} \xi^{0-}(x) & y_{-3} & y_{-2} & y_{-1} & y_0 & y_1 & y_2 & y_3 & & y \\ \cline{2-8} \multicolumn{1}{c}{} & \multicolumn{7}{C}{\vdots} & \\ \cline{2-8} \xi^{[-3,0)^R}(x) & s & y_{-2} & y_{-1} & y_0 & y_1 & y_2 & y_3 & \\ \cline{2-8} \xi^{[-2,0)^R}(x) & y_{-2} & s & y_{-1} & y_0 & y_1 & y_2 & y_3 & \\ \cline{2-8} \xi^{[-1,0)^R}(x) & y_{-2} & y_{-1} & s & y_0 & y_1 & y_2 & y_3 & \\ \cline{2-8} \multicolumn{9}{c}{ }& \\ \cline{2-8} x\hphantom{)} & y_{-2} & y_{-1} & y_0 & s & y_1 & y_2 & y_3 & & x\\ \cline{2-8} \multicolumn{9}{c}{ }& \\ \cline{2-8} \chi^{[0,1)}(x) & y_{-2} & y_{-1} & y_0 & y_1 & s & y_2 & y_3 & \\ \cline{2-8} \chi^{[0,2)}(x) & y_{-2} & y_{-1} & y_0 & y_1 & y_2 & s & y_3 & \\ \cline{2-8} \chi^{[0,3)}(x) & y_{-2} & y_{-1} & y_0 & y_1 & y_2 & y_3 & s & \\ \cline{2-8} \multicolumn{1}{c}{} & \multicolumn{7}{C}{\vdots} & \\ \cline{2-8} \chi^{0+}(x) & y_{-2} & y_{-1} & y_0 & y_1 & y_2 & y_3 & y_4 & & z \\ \cline{2-8} \end{tabular} \caption{A sequence of configurations with the center cell at position $0$. 
Starting with configuration $x$ in the middle when going downward the swapping rule $\chi$ is applied to blocks $[0,1]$, $[1,2]$, etc., and going from $x$ upward rule $\xi=\chi$ is applied to blocks $[-1,0]$, $[-2,-1]$ and so on.} \label{fig:ex-swap} \end{figure} \end{example} \begin{example} \label{ex:RightXOR} Let $S=\{0,1\}$ and consider the block rule $\chi: S^{[0,2)}\to S^{[0,2)}: (a,b)\mapsto (a+b,b)$. Then sliding this rule over a configuration $x \in \{0,1\}^\mathbb{Z}$ produces the image of $x$ in the familiar exclusive-or cellular automaton with neighborhood $\{0,1\}$ (elementary CA 102). We will see in Example~\ref{ex:LeftXOR} that the exclusive-or CA with neighborhood $\{-1,0\}$ can not be defined this way. \end{example} \subsection{Definition of slider s} \begin{definition} \label{def:arelation} A bijective block rule $\chi$ with inverse $\xi$ defines a \emph{slider\ relation} $F \subset \ST^\mathbb{Z} \times \ST^\mathbb{Z}$ by $(y, z) \in F$ iff for some $x \in \ST^\mathbb{Z}$ and some $i\in\mathbb{Z}$ we have $\xi^{i-}(x) = y$ and $\chi^{i+}(x) = z$. % We call the pair $(x,i)$ a \emph{representation} of $(y, z)\in F$. \end{definition} Note that every $(x,i)\in \ST^\mathbb{Z}\times\mathbb{Z}$ is a representation of exactly one pair, namely $(\xi^{i-}(x),\chi^{i+}(x))\in F$. \begin{lemma} \label{lem:repr} Let $(x,i)$ be a representation of $(y, z)\in F$ under a bijective block rule $\chi$ of block length $n$. Then $x_{\minftytoi{i}} = z_{\minftytoi{i}}$ and $x_{\itoinfty{i+n}} = y_{\itoinfty{i+n}}$. \end{lemma} \begin{proof} Applying block rule $\chi$ at positions $j\geq i$ in $x$ never changes cells at positions $k<i$. % % Therefore $x_k = (\toright{\chi}(x))_k = z_k$ proving the first part. % The second part follows analogously. \end{proof} \begin{lemma} \label{lem:reprlemma} Let $(y,z)\in F$ be fixed. For all $i\in\mathbb{Z}$ denote $$R_i=\{x\in\ST^\mathbb{Z}\ |\ (x,i) \mbox{ is a representation of } (y, z)\}.$$ For $i<j$ the function $\chi^{[i,j)}: R_i\longrightarrow R_j$ is a bijection, with inverse $\xi^{[i,j)^R}$. All $R_i$ have the same finite cardinality. \end{lemma} \begin{proof} The claim follows directly from the definition and the facts that \begin{equation} \label{eq:reprlemma} \chi^{j+}\circ \chi^{[i,j)}=\chi^{i+} \mbox{ and } \xi^{j-}\circ \chi^{[i,j)} =\xi^{i-}, \end{equation} and that $\chi^{[i,j)}$ and $\xi^{[i,j)^R}$ are inverses of each other. More precisely, if $x\in R_i$ then $z=\chi^{i+}(x)=\chi^{j+}(\chi^{[i,j)}(x))$ and $y=\xi^{i-}(x)=\xi^{j-}(\chi^{[i,j)}(x))$ so $\chi^{[i,j)}(x)\in R_j$. This proves that $\chi^{[i,j)}$ maps $R_i$ into $R_j$. This map is injective. To prove surjectivity, we show that for any $x'\in R_j$ its pre-image $\xi^{[i,j)^R}(x')$ is in $R_i$. Composing the formulas in (\ref{eq:reprlemma}) with $\xi^{[i,j)^R}$ from the right gives $\chi^{j+}=\chi^{i+} \circ \xi^{[i,j)^R}$ and $\xi^{j-} =\xi^{i-}\circ \xi^{[i,j)^R}$, so as above we get $z=\chi^{j+}(x')=\chi^{i+}(\xi^{[i,j)^R}(x'))$ and $y=\xi^{j-}(x')=\xi^{i-}(\xi^{[i,j)^R}(x'))$, as required. The fact that the cardinalities are finite follows from Lemma~\ref{lem:repr}: there are at most $|\ST|^n$ choices of $x_{[i,i+n)}$ in $x\in R_i$. \end{proof} \begin{lemma} \label{lem:AruleClosed} A slider\ relation $F \subset \ST^\mathbb{Z} \times \ST^\mathbb{Z}$ defined by a bijective block rule $\chi$ is closed and shift-invariant, and the projections $(y,z)\mapsto y$ and $(y,z)\mapsto z$ map $F$ surjectively onto $\ST^\mathbb{Z}$. 
\end{lemma} \begin{proof} By Lemma~\ref{lem:reprlemma} every $(y,z)\in F$ has a representation $(x,0)$ at position $0$. Therefore, the relation $F$ is closed, being the image of the compact space $\ST^\mathbb{Z}$ under the continuous map $x \mapsto (\xi^{0-}(x), \chi^{0+}(x))$. Clearly $(x,i)$ is a representation of $(y,z)$ if and only if $(\sigma(x),i-1)$ is a representation of $(\sigma(y),\sigma(z))$. Hence the relation $F$ is shift-invariant. The image of $F$ under the projection $(y,z)\mapsto z$ is dense. To see this, consider any finite word $w$ and a configuration $x$ with $x_{[-|w|,0)}=w$. The pair $(x,0)$ represents some $(y,z)\in F$, and because $z=\chi^{0+}(x)$ we have $z_{[-|w|,0)}=w$. The denseness follows now from shift invariance and the fact that $w$ was arbitrary. The image of $F$ under the projection is closed so the image is the whole $\ST^\mathbb{Z}$. The proof for the other projection is analogous. \end{proof} \begin{corollary} \label{cor:arule} If $F \subset \ST^\mathbb{Z} \times \ST^\mathbb{Z}$ defined by a bijective block rule $\chi$ is a function (that is, if for all $y\in \ST^\mathbb{Z}$ there is at most one $z\in \ST^\mathbb{Z}$ such that $(y,z)\in F$) then this function $f:y\mapsto z$ is a surjective cellular automaton. \end{corollary} \begin{proof} Because the projections $(y,z)\mapsto y$ and $(y,z)\mapsto z$ are onto, the function $f$ is defined on all of $\ST^\mathbb{Z}$ and surjective. Because the relation $F \subset \ST^\mathbb{Z} \times \ST^\mathbb{Z}$ is closed, the function $f$ is continuous. As it is continuous and shift-invariant, it is a cellular automaton. \end{proof} \begin{definition} \label{def:arule} Let $\chi$ be a bijective block rule such that the slider\ relation it defines is a function $f:\ST^\mathbb{Z} \longrightarrow \ST^\mathbb{Z}$. The surjective cellular automaton $f$ is called the \emph{slider}\ defined by $\chi$. \end{definition} Example~\ref{ex:swap} indicates that the slider\ for the block rule swapping two states is the left shift. By Corollary~\ref{cor:arule} every slider\ is a surjective CA. But not every surjective CA is a slider. This will follow from an exact characterization of which cellular automata are slider s below. \subsection{Characterization of slider s} We start by improving Corollary~\ref{cor:arule}, by showing that slider s are left-closing cellular automata. \begin{definition} \label{def:LeftClosing} Two configurations $y$ and $y'$ are \emph{right-asymptotic} if there is an index $i\in\mathbb{Z}$ such that $y_{\itoinfty{i}}=y'_{\itoinfty{i}}$. They are called \emph{left-asymptotic} if there is an index $i\in\mathbb{Z}$ such that $y_{\minftytoi{i}} = y'_{\minftytoi{i}}$. % A CA $f$ is \emph{left-closing} if for any two different right-asymptotic configurations $y$ and $y'$ we have $f(y)\not= f(y')$. % Right-closing CA are defined symmetrically using left-asymptotic configurations. \end{definition} \begin{lemma} \label{lem:a-rule->lc} A slider\ is a left-closing cellular automaton. \end{lemma} \begin{proof} Let slider\ $f$ be defined by a bijective block rule $\chi : \ST^m \to \ST^m$, so that $f$ is a surjective cellular automaton. Let $\xi$ be the inverse of $\chi$. Suppose $f$ is not left-closing, so that there exist two distinct right-asymptotic configurations $y$ and $y'$ such that $f(y) = f(y')$. We may suppose the rightmost difference in $y$ and $y'$ is at the origin. Let $r$ be a radius for the local rule of $f$, where we may suppose $r \geq m$, and let $y_{[-2r, 2r]} = w_0 v, y'_{[-2r, 2r]} = w_1 v$ where $|w_0| = |w_1| = 2r+1$.
We can apply the local rule of $f$ to words, shrinking them by $r$ symbols on each side, and write $F : \ST^* \to \ST^*$ for this map. Since $y$ and $y'$ have the same $f$-image, we have $F(w_0 v) = F(w_1 v)$. Let $n$ be such that $2^n > |\ST|^m$ and for each $k \in \{0,1\}^n$, define the configuration \[ y_k = ...0000 w_{k_1} v w_{k_2} v \cdots v w_{k_n} v . 0000... \] where the right tail of $0$s begins at the origin. For each $y_k$, pick a point $x_k$ representing $(y_k, f(y_k))$ at the origin. By the pigeonhole principle, there exist $k \neq k'$ such that $(x_k)_{[0,m)} = (x_{k'})_{[0,m)}$. Let $j$ be the maximal coordinate where $k$ and $k'$ differ. Now, the rightmost difference in $y_k$ and $y_{k'}$ is in coordinate $R = -2r-1 - (4r+1)(n-j)$ (the last coordinate of the word $w_{k_j}$). We have $f(y_k)_{[R-r, \infty)} = f(y_{k'})_{[R-r, \infty)}$ by the assumption that $j$ is the rightmost coordinate where $k$ and $k'$ differ, and by $F(w_0 v) = F(w_1 v)$. Thus we also have $(x_k)_{[R-r, 0)} = (x_{k'})_{[R-r, 0)}$, since $\chi^{0+}(x_k) = f(y_k)$ and $\chi^{0+}(x_{k'}) = f(y_{k'})$ and these sweeps do not modify coordinates in $[R-r, 0)$. Recall that we have $(x_k)_{[0,m)} = (x_{k'})_{[0,m)}$ by the choice of $k$ and $k'$, so $(x_k)_{[R-r, m)} = (x_{k'})_{[R-r, m)}$. Now, we should have $\xi^{0-}(x_k) = y_k$ and $\xi^{0-}(x_{k'}) = y_{k'}$, in particular we should have $\xi^{0-}(x_k)_R \neq \xi^{0-}(x_{k'})_R$. But this is impossible: $\xi^{0-}(x_k)_R$ is completely determined by $(x_k)_{[R-m+1, m)}$ and similarly $\xi^{0-}(x_{k'})_R$ is determined by $(x_{k'})_{[R-m+1, m)}$, but $(x_k)_{[R-m+1, m)} = (x_{k'})_{[R-m+1, m)}$ since $(x_k)_{[R-r, m)} = (x_{k'})_{[R-r, m)}$ and $r \geq m$. \end{proof} In the rest of this section, we only consider the case when the slider relation $F$ that $\chi$ defines is a function. Next we analyze numbers of representations. We call a representation $(x,i)$ of a pair $(y,z)$ simply a representation of configuration $y$, because $z=f(y)$ is determined by $y$. Let $R(y,i)$ be the set of configurations $x$ such that $(x,i)$ is a representation of $y$. By Lemma~\ref{lem:repr} the elements of $R(y,i)$ have the form $x=f(y)_{\minftytoi{i}} \cdot w \cdot y_{\itoinfty{i+n}}$ for some word $w\in \ST^n$ where $n$ is the block length of $\chi$. By Lemma~\ref{lem:reprlemma} the cardinality of the set $R(y,i)$ is independent of $i$. Let us denote by $N(y)$ this cardinality. It turns out that the number is also independent of the configuration $y$. \begin{lemma} \label{lem:NIsSame} $N(y)=N(y')$ for all configurations $y,y'$. \end{lemma} \begin{proof} Let $n$ be the block length of rule $\chi$. \begin{enumerate} \item[(i)] Assume first that $y,y'$ are left-asymptotic. There is an index $i\in\mathbb{Z}$ such that $y_{\minftytoi{i}} = y'_{\minftytoi{i}}$. Then for any $z$ we have that $z_{\minftytoi{i}}y_{\itoinfty{i}}\in R(y,i-n)$ if and only if $z_{\minftytoi{i}}y'_{\itoinfty{i}}\in R(y',i-n)$. This gives a bijection between $R(y,i-n)$ and $R(y',i-n)$ so that $N(y)=\card{R(y,i-n)}=\card{R(y',i-n)}=N(y')$. \item[(ii)] Assume then that $y,y'$ are right-asymptotic. Also $f(y)$ and $f(y')$ are right-asymptotic so there is an index $i\in\mathbb{Z}$ such that $f(y)_{\itoinfty{i}} = f(y')_{\itoinfty{i}}$. Let $z_{[i,\infty)}$ be such that $x=f(y)_{\minftytoi{i}}z_{\itoinfty{i}}\in R(y,i)$. Then $\chi^{i+}(x) = f(y)$. Consider then $x'=f(y')_{\minftytoi{i}}z_{\itoinfty{i}}$ obtained by replacing the left half $f(y)_{\minftytoi{i}}$ by $f(y')_{\minftytoi{i}}$.
Because $f(y)_{\itoinfty{i}} = f(y')_{\itoinfty{i}}$ we have that $\chi^{i+}(x') = f(y')$. The configuration $y''$ represented by $(x',i)$ is right-asymptotic with $y'$ and satisfies $f(y'')=f(y')$. Because $f$ is left-closing by Lemma~\ref{lem:a-rule->lc}, we must have $y''=y'$. We conclude that $f(y)_{\minftytoi{i}}z_{\itoinfty{i}}\in R(y,i)$ implies that $f(y')_{\minftytoi{i}}z_{\itoinfty{i}}\in R(y',i)$, and the converse implication also holds by a symmetric argument. As in (i), we get that $N(y)=\card{R(y,i)}=\card{R(y',i)}=N(y')$. \item[(iii)] Let $y,y'$ be arbitrary. Configuration $y''=y_{\minftytoi{0}}y'_{\itoinfty{0}}$ is left-asymptotic with $y$ and right-asymptotic with $y'$. By cases (i) and (ii) above we have $N(y)=N(y'')=N(y')$. \end{enumerate} \end{proof} \noindent As $N(y)$ is independent of $y$ we write $N$ for short. Next we define right stairs. They were defined in~\cite{blockCA} for reversible cellular automata -- here we generalize the concept to other CA and show that the concept behaves well when the cellular automaton is left-closing. A right stair is a pair of words that can be extracted from two consecutive configurations $x$ and $f(x)$ that coincide with $y$ and $z$, respectively, as shown in Figure~\ref{fig:stair}. The precise definition is as follows. \begin{figure} \begin{center} \includegraphics[scale=0.3]{stairs-eps-converted-to.pdf} \end{center} \caption{A right stair $(v,w)$ of length $3m$ connecting $y$ and $z$, confirmed by $x$ at position $i=0$.} \label{fig:stair} \end{figure} \begin{definition} \label{def:stair} Let $f:\ST^\mathbb{Z}\longrightarrow \ST^\mathbb{Z}$ be a cellular automaton, and let $m$ be a positive integer. Let $y\in\ST^{\itoinfty{i+3m}}$ be a right infinite word and let $z\in\ST^{\minftytoi{i}}$ be a left-infinite word. \begin{itemize} \item A pair of words $(v,w)\in\ST^{2m}\times \ST^{2m}$ is a \emph{right stair connecting $(y,z)$} if there is a configuration $x\in\ST^{\mathbb{Z}}$ such that $vy=x_{\itoinfty{i+m}}$ and $zw=f(x)_{\minftytoi{i+2m}}$. \item The stair has \emph{length} $3m$ and it is \emph{confirmed} (at position $i$) by configuration $x$. \item We write $\Psi_{3m}(y,z)$ for the set of all right stairs of length $3m$ connecting $(y,z)$. \item We write $\Psi_{3m}$ for the union of $\Psi_{3m}(y,z)$ over all $y$ and $z$. \end{itemize} \end{definition} Due to shift invariance, $x$ confirms $(v,w)\in\Psi_{3m}(y,z)$ if and only if $\sigma(x)$ confirms $(v,w)\in\Psi_{3m}(\sigma(y),\sigma(z))$. This means that $\Psi_{3m}(y,z)=\Psi_{3m}(\sigma(y),\sigma(z))$, so it is enough to consider $i=0$ in Definition~\ref{def:stair}. In terms of cylinders, $(v,w)\in\Psi_{3m}$ if and only if $f([v]_{[m,3m)})\cap [w]_{[0,2m)} \neq\emptyset$. \begin{figure} \begin{center} \includegraphics[scale=0.3]{kurka-eps-converted-to.pdf} \end{center} \caption{(a) An illustration for Lemma~\ref{lem:kurka}, and (b) an illustration for Corollary~\ref{cor:kurkacorollary}(b) and for Lemma~\ref{lem:lcdiv->a-rule}.} \label{fig:kurka} \end{figure} We need the following known fact concerning left-closing CA. It appears as Proposition 5.44 in~\cite{kurkabook} where it is stated for right-closing CA. See Figure~\ref{fig:kurka}(a) for an illustration. \begin{lemma}[Proposition 5.44 in~\cite{kurkabook}] \label{lem:kurka} Let $f$ be a left-closing CA. 
For all sufficiently large $m\in\mathbb{N}$, if $s\in \ST^m$ and $t\in \ST^{2m}$ are such that $f([s]_{(m,2m]})\cap [t]_{(0,2m]} \neq\emptyset$ then for all $b\in \ST$ there exists a unique $a\in \ST$ such that $f([as]_{[m,2m]})\cap [bt]_{[0,2m]} \neq\emptyset$. \end{lemma} The condition $f([s]_{(m,2m]})\cap [t]_{(0,2m]} \neq\emptyset$ is just a way to write that there exists $x\in\ST^\mathbb{Z}$ with $x_{(m,2m]}=s$ and $f(x)_{(0,2m]}=t$. Note that the statement of the lemma has two parts: the existence of $a$ and the uniqueness of $a$. We need both parts in the following. A number $m$ is a \emph{strong\footnote{The word `strong' is added to distinguish this from the weaker closing radius obtained directly from the definition by a compactness argument.} left-closing radius} for a CA $f$ if it satisfies the claim of Lemma~\ref{lem:kurka}, and furthermore $m\geq 2r$ where $r\geq 1$ is a neighborhood radius of $f$. Next we state corollaries of the previous lemma, phrased for right stairs in place of $s$ and $t$ to be directly applicable in our setup. \begin{corollary} \label{cor:kurkacorollary} Let $f$ be a left-closing CA. Let $m$ be a strong left-closing radius. \begin{itemize} \item[(a)] $\Psi_{3m}(y,z)=\Psi_{3m}$ for all $y$ and $z$. \item[(b)] Let $(vc,wd)\in\Psi_{3m}$ for $c,d\in\ST$ and $v,w\in \ST^{2m-1}$. For every $b\in \ST$ there exists a unique $a\in \ST$ such that $(av,bw)\in\Psi_{3m}$. (See Figure~\ref{fig:kurka}(b) for an illustration.) \item[(c)] Every $(v,w)\in\Psi_{3m}(y,z)$ is confirmed by a unique $x$. \end{itemize} \end{corollary} \begin{proof} (a) Let $y,y'\in\ST^{\itoinfty{3m}}$ and $z,z'\in\ST^{\minftytoi{0}}$ be arbitrary. It is enough to prove that $\Psi_{3m}(y',z')\subseteq \Psi_{3m}(y,z)$. The claim then follows from this and shift invariance $\Psi_{3m}(y,z)=\Psi_{3m}(\sigma(y),\sigma(z))$. First we show that $\Psi_{3m}(y',z')\subseteq \Psi_{3m}(y,z')$. Let $(v,w)\in\Psi_{3m}(y',z')$ be arbitrary, so that there exists $x'\in [vy']_{\itoinfty{m}}$ such that $f(x')_{\minftytoi{2m}} = z'w$. Then $(v,w)\in\Psi_{3m}(y,z')$ is confirmed by the configuration $x''$ such that $x''_{\minftytoi{3m}}=x'_{\minftytoi{3m}}$ and $x''_{\itoinfty{3m}}=y$. Indeed, $x''_{\itoinfty{m}}=vy$, and because $m\geq r$, the radius of the local rule of $f$, we also have $f(x'')_{\minftytoi{2m}}=f(x')_{\minftytoi{2m}} = z'w$. Next we show that $\Psi_{3m}(y,z')\subseteq \Psi_{3m}(y,z)$. Let $(v,w)\in\Psi_{3m}(y,z')$. We start with finite extensions of $w$ on the left: we prove that for every finite word $u\in\ST^*$ we have $f([vy]_{[m,\infty)})\cap [uw]_{[-|u|,2m)}\neq \emptyset$. Suppose the contrary, and let $bu\in\ST^{k+1}$ be the shortest counterexample, with $b\in\ST$ and $u\in\ST^{k}$. (By the assumptions, the empty word is not a counterexample.) By the minimality of $bu$, there exists $x^r\in [vy]_{[m,\infty)}$ such that $f(x^r)_{[-k,2m)}=uw$. Choose $s=x^r_{[-k+m,-k+2m)}$ and $t=f(x^r)_{[-k,-k+2m)}$ and apply the existence part of Lemma~\ref{lem:kurka}. By the lemma, there exists a configuration $x^l$ such that $x^l_{[-k+m,-k+2m)}=x^r_{[-k+m,-k+2m)}$ and $f(x^l)_{[-k-1,-k+2m)}=b\cdot f(x^r)_{[-k,-k+2m)}$. Consider $x$ obtained by gluing together the left half of $x^l$ and the right half of $x^r$: define $x_{(-\infty,-k+2m)}=x^l_{(-\infty,-k+2m)}$ and $x_{[-k+m,\infty)}=x^r_{[-k+m,\infty)}$. Note that in the region $[-k+m,-k+2m)$ configurations $x^l$ and $x^r$ have the same value. 
By applying the local rule of $f$ with radius $r$ we also get that $f(x)_{[-k-1,-k+2m-r)}=f(x^l)_{[-k-1,-k+2m-r)}=b\cdot f(x^r)_{[-k,-k+2m-r)}$ and $f(x)_{[-k+m+r,2m)} =f(x^r)_{[-k+m+r,2m)}$. Because $m\geq 2r$ we have $-k+2m-r\geq -k+m+r$, so that $f(x)_{[-k-1,2m)}=b\cdot f(x^r)_{[-k,2m)}=buw$. We also have $x_{[m,\infty)}=x^r_{[m,\infty)}=vy$, so that $x$ proves that $bu$ is not a counterexample. Consider then the infinite extension of $w$ on the left by $z$: Applying the finite case above to each finite suffix of $z$ and by taking a limit, we see with a simple compactness argument that there exists $x\in [vy]_{[m,\infty)}$ such that $f(x)_{(-\infty,2m)}=zw$. This proves that $(v,w)\in\Psi_{3m}(y,z)$. \medskip \noindent (b) Let $(vc,wd)\in\Psi_{3m}$ and let $b\in\ST$ be arbitrary. Let $y\in\ST^{\itoinfty{3m}}$ be arbitrary, and let $z\in\ST^{\minftytoi{0}}$ be such that $z_{-1}=b$. By (a) we have that $(vc,wd)\in\Psi_{3m}(y,z)$. Let $x$ be a configuration that confirms this, so $x_{[m,\infty)}=vcy$ and $f(x)_{(-\infty,2m)}=zwd$. Let $a=x_{m-1}$. Because $x_{[m-1,3m-1)}=av$ and $f(x)_{[-1,2m-1)}=bw$, configuration $x$ confirms (at position $i=-1$) that $(av,bw)\in\Psi_{3m}$. Let us prove that $a$ is unique. Suppose that also $(a'v,bw)\in\Psi_{3m}$. We apply the uniqueness part of Lemma~\ref{lem:kurka} on $s$ and $t$ where $t=wd$ and $s$ is the prefix of $v$ of length $m$. Because $(a'v,bw)$ is a right stair, $f([a'v]_{[m-1,3m-1)})\cap [bw]_{[-1,2m-1)}\neq\emptyset$. Because $m-1\geq 2r-1\geq r$, the local rule of $f$ assigns $f(x)_{2m-1}=d$ for all $x\in [a'v]_{[m-1,3m-1)}$, so that $f([a'v]_{[m-1,3m-1)})\cap [bwd]_{[-1,2m)}\neq\emptyset$. But then $f([a's]_{[m-1,2m)})\cap [bt]_{[-1,2m)}\neq\emptyset$, so that by Lemma~\ref{lem:kurka} we must have $a'=a$. \medskip \noindent (c) Suppose $x\neq x'$ both confirm that $(v',w')\in\Psi_{3m}(y,z)$. Then $x_{\itoinfty{m}}=v'y=x'_{\itoinfty{m}}$. Let $k<m$ be the largest index such that $x_{k}\neq x'_{k}$. Extract $a,a',b,c,d\in\ST$ and $v,w\in\ST^{2m-1}$ from $x$ and $x'$ as follows: $avc=x_{[k,k+2m]}$ and $a'vc=x'_{[k,k+2m]}$ and $bwd=f(x)_{[k-m,k+m]}=f(x')_{[k-m,k+m]}$. Then $(vc,wd)\in\Psi_{3m}$ and $(av,bw), (a'v,bw)\in\Psi_{3m}$. This contradicts (b). \end{proof} Now we can prove another constraint on slider s. \begin{lemma} \label{lem:a-rule->div} Let $f$ be a slider. Let $m$ be a strong left-closing radius, and big enough so that $f$ is defined by a bijective block rule $\chi:\ST^{n}\longrightarrow \ST^{n}$ of block length $n=3m$. Let $N$ be the number of representations of configurations (independent of the configuration) with respect to $\chi$. Then $$N\cdot \card{\Psi_n}=\card{\ST}^n.$$ In particular, $\card{\Psi_n}$ divides $\card{\ST}^n$. \end{lemma} \begin{proof} Fix any $y\in\ST^{[3m,\infty)}$ and $z\in\ST^{(-\infty,0)}$. Denote $A=\{x\in \ST^\mathbb{Z}\ |\ x_{[3m,\infty)}=y \mbox{ and } f(x)_{(-\infty,0)}=z\}$. Consider the function $A\longrightarrow \Psi_{3m}(y,z)$ defined by $x\mapsto (x_{[m,3m)}, f(x)_{[0,2m)})$. It is surjective by the definition of $\Psi_{3m}(y,z)$, and it is injective by Corollary~\ref{cor:kurkacorollary}(c). Because $\Psi_{3m}(y,z)=\Psi_{3m}$ by Corollary~\ref{cor:kurkacorollary}(a), we see that $\card{A}=\card{\Psi_{3m}}$. For each $w\in \ST^{3m}$ define configuration $x^w= zwy$. The representations $(x,0)$ at position $0$ of the configurations $y\in A$ are precisely the pairs $(x^w,0)$ for $w\in \ST^{3m}$. Because each $y\in A$ has $N$ representations and there are $\card{\ST}^{3m}$ words $w$, we obtain $N\cdot\card{A}=\card{\ST}^{3m}$, that is, $N\cdot \card{\Psi_{3m}}=\card{\ST}^{3m}$.
\end{proof} Now we prove the converse: the constraints we have proved for slider s are sufficient. This completes the characterization of sliders. \begin{lemma} \label{lem:lcdiv->a-rule} Let $f$ be a left-closing cellular automaton, let $m$ be a strong left-closing radius, and assume that $\card{\Psi_n}$ divides $\card{\ST}^n$ for $n=3m$. Then $f$ is a slider. \end{lemma} \begin{proof} Let $N=\card{\ST}^n/\card{\Psi_n}$ and pick an arbitrary bijection $\pi:\Psi_n\times\{1,2,\dots ,N\}\longrightarrow\ST^n$. Let $f_\mathrm{loc} : \ST^{2m+1}\longrightarrow\ST$ be the local rule of radius $m$ for the cellular automaton $f$. Let us define a block rule $\chi:\ST^{n+1}\longrightarrow \ST^{n+1}$ as follows (see Figure~\ref{fig:kurka}). Consider any $c\in\ST$, any $k\in\{1,2,\dots ,N\}$ and any $(av,bw)\in \Psi_n$ where $a,b\in\ST$ and $v,w\in\ST^{2m-1}$. Let $d = f_\mathrm{loc}(avc)$. We set $\chi: \pi((av,bw),k)\cdot c \mapsto b\cdot \pi((vc,wd),k)$. This completely defines $\chi$, but to see that it is well defined we next show that $(vc,wd)$ is a right stair. By Corollary~\ref{cor:kurkacorollary}(a) we have that $(av,bw)\in\Psi_n(cy,z)$ for arbitrary $y,z$ so there is a configuration $x$ such that $x_{[m,\infty)}=avcy$ and $f(x)_{(-\infty,2m)}=zbw$. The local rule $f_\mathrm{loc}$ determines that $f(x)_{2m}=f_\mathrm{loc}(avc)=d$. It follows that $(vc,wd)\in \Psi_n(y,zb)$, confirmed by $x$ at position $i=1$. Now that we know that $\chi$ is well defined, let us prove that $\chi$ is a bijection. Suppose $\pi((av,bw),k)\cdot c$ and $\pi((a'v',b'w'),k')\cdot c'$ have the same image $b\cdot \pi((vc,wd),k)=b'\cdot \pi((v'c',w'd'),k')$. We clearly have $b=b'$, and because $\pi$ is a bijection, we have $v=v'$, $c=c'$, $w=w'$, $d=d'$ and $k=k'$. By Corollary~\ref{cor:kurkacorollary}(b) we also have that $a=a'$. As $\chi$ is a bijective block rule, it defines a slider\ relation $F$. We need to prove that for every configuration $y$, the only $z$ such that $(y,z)\in F$ is $z=f(y)$. Therefore, consider an arbitrary representation $(x,i)$ of $(y,z)\in F$. Write $x=z_{(-\infty,i)}\cdot \pi((av,bw),k)\cdot c \cdot y_{[i+n+1,\infty)}$ for letters $a,b,c\in\ST$, words $v,w\in \ST^{2m-1}$ and $k\in\{1,2,\dots ,N\}$. This can be done since $\pi$ is surjective, and all items in this representation are unique since $\pi$ is injective. We have $(av,bw)\in\Psi_n(cy,z)$ so by Corollary~\ref{cor:kurkacorollary}(c) there is a unique configuration $x'$ that confirms this. Then $x'_{[i+m,\infty)} = avc\cdot y_{[i+n+1,\infty)}$ and $f(x')_{(-\infty, i+2m)} = z_{(-\infty,i)}\cdot bw$. Associate $x'$ to $(x,i)$ by defining $g(x,i)=x'$. Let us show that $g(\chi^i(x),i+1)=g(x,i)$. By the definition of $\chi$ we have \[ \chi^i(x)=z_{(-\infty,i)}\cdot b\cdot \pi((vc,wd),k)\cdot y_{[i+n+1,\infty)} \] where $d=f_\mathrm{loc}(avc)$. To prove that $g(\chi^i(x),i+1)=x'=g(x,i)$ it is enough to show that $x'$ confirms $(vc,wd)\in\Psi_n(y,zb)$. But this is the case because $x'_{[i+m+1,\infty)} = vc\cdot y_{[i+n+1,\infty)}$ and $f(x')_{(-\infty, i+2m+1)} = z_{(-\infty,i)}\cdot bwd$. The fact that $f(x')_{i+2m}=d$ follows from $x'_{[i+m,i+3m]}=avc$ and $d=f_\mathrm{loc}(avc)$. By induction we have that $g(\chi^{[i,j)}(x),j)=x'$ for any $j\geq i$. Moreover, pair $(\chi^{[i,j)}(x),j)$ represents the same $(y,z)\in F$ as $(x,i)$. Therefore, $x'_{[j+n+1,\infty)}=y_{[j+n+1,\infty)}$ and $f(x')_{(-\infty, j)} = z_{(-\infty,j)}$ for all $j\geq i$. Let us look into position $p=i+n+m+1$.
Using any $j>p$ we get $f(x')_p=z_p$ and using $j=i$ we get $x'_{[p-m,p+m]}=y_{[p-m,p+m]}$. This means that $z_p=f_\mathrm{loc}(y_{[p-m,p+m]})$, that is, $z_p=f(y)_p$. Because $i$ was arbitrary, $p$ is arbitrary. We have that $z=f(y)$, which completes the proof. \end{proof} \begin{theorem} \label{thm:SliderCharacterization} The function $f$ admits a slider if and only if $f$ is a left-closing cellular automaton and $\card{\Psi_n}$ divides $|\ST|^n$ for $n = 3m$ where $m$ is the smallest strong left-closing radius. \end{theorem} We can state this theorem in a slightly more canonical (but completely equivalent) form by normalizing the length of stairs: By Corollary~\ref{cor:kurkacorollary}, for a left-closing cellular automaton $f$ the limit \begin{equation} \label{eq:lambda} \lambda_f = \lim_{m\to\infty} \frac{\card{\Psi_{3m}}}{\card{\ST}^{3m}} \end{equation} is reached in finite time, namely as soon as $m$ is a left-closing radius, and thus $\lambda_f$ is rational for left-closing $f$. In \cite{blockCA} it is shown that the map $f \mapsto \lambda_f$ gives a homomorphism from the group of reversible cellular automata into the rational numbers under multiplication. For a prime number $p$ and an integer $n$, write $v_p(n)$ for the largest exponent $k$ such that $p^k | n$. For prime $p$ and rational $r = m/n$, write $v_p(r) = v_p(m) - v_p(n)$ for the \emph{$p$-adic valuation} of $r$. \begin{theorem} \label{thm:SliderCharacterization2} The function $f$ is a slider if and only if $f$ is a left-closing cellular automaton and $v_p(\lambda_f) \leq 0$ for all primes $p$. \end{theorem} \begin{example} Let $A = \{0,1\} \times \{0,1,2\}$ and write $\sigma_2$ and $\sigma_3$ for the left shifts on the two tracks of $A^\mathbb{Z}$. Then consider $f = \sigma_2 \times \sigma_3^{-1}$. For this CA we have by a direct computation $|\Psi_3| = 2^2 \cdot 3^4$ so $\lambda_f = 2^2 \cdot 3^4 / 6^3$ so $v_3(\lambda_f) = 1 > 0$, and thus $f$ is not a slider. Similarly we see that $\sigma_3 \times \sigma_2^{-1}$ is not a slider. \end{example} \begin{example} \label{ex:LeftXOR} Let $S=\{0,1\}$ and consider the exclusive-or CA with neighborhood $\{-1,0\}$, i.e. $f(x) = x + \sigma^{-1}(x)$. Then $f$ is left-closing but a direct computation shows $v_2(\lambda_f) = 1 > 0$, so $f$ is not a slider. Compare with Example~\ref{ex:RightXOR}. \end{example} \subsection{Definition of sweeper s} An alternative approach not requiring bijectivity of $\chi$ is specified in the following: \begin{definition} \label{def:brelation} A block rule $\chi$ defines a \emph{sweeper\ relation} $F \subset \ST^\mathbb{Z} \times \ST^\mathbb{Z}$ by $(y, z) \in F$ iff some subsequence of $\chi^{0+}(y), \chi^{-1+}(y), \chi^{-2+}(y),\dots$ converges to $z$. \end{definition} \begin{lemma} \label{lem:SweepingBasics} The projection $(y,z)\mapsto y$ on the first component maps a sweeper\ relation $F$ surjectively onto $\ST^\mathbb{Z}$. The relation $F$ is a function $f$ if and only if for each configuration $y$ the limit $\lim_{i\to-\infty} \toright{\chi}(y)$ exists and equals $f(y)$. \end{lemma} \begin{proof} For every $y\in\ST^\mathbb{Z}$ the sequence $\chi^{0+}(y), \chi^{-1+}(y), \chi^{-2+}(y),\dots$ has a converging subsequence with some limit $z$. Then $(y,z)\in F$ so the projection is onto. If $z=\lim_{i\to-\infty} \toright{\chi}(y)$ exists then every subsequence of $\chi^{0+}(y), \chi^{-1+}(y), \chi^{-2+}(y),\dots$ converges to $z$ so $z$ is the unique configuration such that $(y,z)\in F$. 
Conversely, if $\lim_{i\to-\infty} \toright{\chi}(y)$ does not exist then $\chi^{0+}(y), \chi^{-1+}(y), \chi^{-2+}(y),\dots$ has two subsequences converging to distinct $z_1$ and $z_2$. In this case $(y,z_1)$ and $(y,z_2)$ are both in relation $F$. \end{proof} \begin{definition} \label{def:brule} Let $\chi$ be a block rule such that for each configuration $y$ the limit $z= \lim_{i\to-\infty} \toright{\chi}(y)$ exists. The function $y\mapsto z$ is called the \emph{sweeper}\ defined by~$\chi$. \end{definition} Before comparing the notions of sliders and sweepers we provide a result on a special kind of Mealy automata. \subsection{A note on finite Mealy automata} \label{subsec:mealy} In this section we consider Mealy automata with a set $Q$ of states, where the set $A$ of input symbols and the set of output symbols coincide. For convenience instead of pairs of elements we use words of length $2$. Thus, we denote by $\mu: Q A \to A Q$ the function mapping the current state $q$ and an input symbol $a$ to $\mu(qa)=a'q'$, where $q'$ is the new state of the automaton. The motivation for this is the following. When a block rule $\chi$ is sweeping over a configuration one can think of the block $q\in \ST^n$ where $\chi$ will be applied next as encoding the state of a Mealy automaton. The word $a\in\ST^n$ immediately to the right of it is the next input symbol. By applying $\chi$ at positions $0,1,\dots,n-1$ the word $qa$ is transduced into a word $a'q'\in\ST^{2n}$ where $a'$ can be considered the output symbol and $q'$ the next state of the automaton. When $\chi$ is bijective then clearly $\mu$ is bijective, too. Let $\delta:QA\to Q$ denote the function yielding only the new state of the Mealy automaton. The extension $\delta^*: QA^*\to Q$ to input \emph{words} is for all states $q$, all inputs $w\in A^*$ and $a\in A$ defined by $\delta^*(q\varepsilon)=q$ and $\delta^*(qwa)=\delta(\delta^*(qw)a)$. Because of the application we have in mind we now restrict ourselves to the case where $Q=A$ and speak of elements $e\in Q$. Let $\bar{e}=(\ldots,e_{-2},e_{-1},e_0)$ denote a sequence of elements which is infinite to the left. \begin{definition} A finite tail $\bar{e}_i=(e_{-i},\ldots,e_0)$ of $\bar{e}$ is \emph{good for $q$} if $\delta^*(\bar{e}_i)=q$. % An infinite sequence $\bar{e}$ is \emph{good for $q$} if infinitely many finite tails $\bar{e}_i=(e_{-i},\ldots,e_0)$ are good for $q$. A \emph{state $q$ is good}, if there is an infinite sequence $\bar{e}$ that is good for $q$. % Let $G\subseteq Q$ denote the set of good states and $B\subseteq Q$ the set of bad states. \end{definition} \begin{lemma} \label{lem:mealy} If $\mu$ is bijective then $G=Q$ and $B=\emptyset$. \end{lemma} \begin{proof} First, observe that the property of being good is preserved by $\delta$. If $g$ is good, then each $\delta(ga)$ is good, too: % If $\bar{e}$ is good for $g$, then $\bar{e}a$ is good for $\delta(ga)$ since $\delta^*(e_{-i},\ldots,e_0)=g$ implies $\delta^*(e_{-i},\ldots,e_0,a)=\delta(ga)$. % This means that $\mu(GA)\subseteq AG$. Since $\mu$ is injective and $\card{GA}=\card{AG}$, in fact $\mu(GA)= AG$. % Therefore $\mu(BA)\subseteq AB$, that is $\delta$ preserves bad states. % Now, assume that there indeed exists a bad state $b\in B$. % Consider $\bar{b}=(\ldots, b,b,b)$. % The states $b_i=\delta^*(b^i)$ are all bad, but at least one of them happens infinitely often, which would mean that it is good. Contradiction.
\end{proof} \subsection{Relation between sliders and sweepers} \label{subsec:a-rule-b-rule} Compared to Definition~\ref{def:arule} the advantage of Definition~\ref{def:brule} is that it does not require $\chi$ to be bijective. But as long as $\chi$ is bijective, there is in fact no difference. \begin{theorem} \label{thm:SweeperCharacterization} Let $\chi$ be a bijective block rule and $f$ a one-dimensional CA. % The slider relation defined by $\chi$ is equal to $f$ if and only if the sweeper relation it defines is equal to $f$. \end{theorem} The two implications are considered separately in Lemmata~\ref{lem:not-b=>not-a} and~\ref{lem:not-a=>not-b} below. For the remainder of this section let $\chi: \ST^n \to\ST^n $ always denote a bijective block rule and let $f:\ST^{\mathbb{Z}}\to\ST^{\mathbb{Z}}$ denote a one-dimensional CA (without stating this every time). \begin{lemma} \label{lem:not-b=>not-a} If $\chi$ is not a sweeper for $f$ then it is not a slider for $f$. \end{lemma} \begin{proof} If $\chi$ is not a sweeper for $f$ then there is a configuration $y$ for which the limit $\lim_{i\to-\infty} \chi^{\itoinfty{i}}(y)$ does not exist or is wrong. % In both cases there is a cell $j\in \mathbb{Z}$ and a state $s\in\ST$ such that $s\not=f(y)_j$ but $\chi^{\itoinfty{i}}(y)_j=s$ for infinitely many $i < j-n$. We will construct a configuration $x$ such that $\xi^{j-}(x)=y$, where $\xi$ is the inverse of $\chi$, and $\chi^{j+}(x)_j=s\not=f(y)_j$. % Therefore $\chi$ is not a slider for $f$ (see Def.~\ref{def:arule}). As a first step we subdivide the ``left part'' $(-\infty,j+n)$ of $\mathbb{Z}$ into \emph{windows} $W_k$ of length $n$. % For $k\geq 0$ let $p_k=-kn+j$ denote the smallest index in $W_k$, i.\,e.~$W_k=[p_k,p_{k-1})$ (where $p_{-1}=j+n$). % Analogously divide the ``left part'' of $y$ into words $y^{(k)}$ of length $n$ by setting $y^{(k)} = y|_{W_k}$ (see Fig.~\ref{fig:Wk}). \begin{figure}[ht] \centering \begin{tikzpicture}[x={(1mm,0mm)},y={(0mm,1mm)},decoration={brace,amplitude=2mm}] \draw (-3,0) node {$y$}; \draw (0,0) -- +(100,0); \foreach \x in {2,4,...,99} { \draw (\x,0) -- +(0,1) -- +(0,-1); } \foreach \x in {10,30,...,90} { \draw[thick] (\x,0) -- +(0,2) -- +(0,-2); } \draw (71,8) node[rotate=20,anchor=south west] (N0) {$p_0=j$}; \draw[->] (N0.south west) -- (71,2); \draw (51,8) node[rotate=20,anchor=south west] (N1) {$p_1=-n+j$}; \draw[->] (N1.south west) -- (51,2); \draw (31,8) node[rotate=20,anchor=south west] (N2) {$p_2=-2n+j$}; \draw[->] (N2.south west) -- (31,2); \draw [decorate] (90,-3) -- (70,-3) (80,-7) node {$W_0$}; \draw [decorate] (70,-3) -- (50,-3) (60,-7) node {$W_1$};; \draw [decorate] (50,-3) -- (30,-3) (40,-7) node {$W_2$};; \end{tikzpicture} \caption{For configuration $y$ the windows $W_k$ contain the words $y^{(k)}$.} \label{fig:Wk} \end{figure} Let $M$ denote the set $\{i \mid \chi^{\itoinfty{i}}(y)_j=s\}$. % $M$ contains infinitely many integers $i< p_1=j-n$. % Then there has to be a word $v^{(0)}\in\ST^n$ such that the set $M_0 = \{ i\in M \mid i<p_1 \text{ and } \chi^{[i,j)}(y)|_{W_0} = v^{(0)} \}$ is infinite. % Since $M_0\subseteq M$ certainly $\chi(v^{(0)})_0 = s$ holds.
\begin{figure}[ht] \centering \begin{tikzpicture}[x={(2mm,0mm)},y={(0mm,0.5mm)},decoration={brace,amplitude=2mm}] \draw (0,0) -- +(20,0); \foreach \x in {0,1,...,20} { \draw (\x,0) -- +(0,1) -- +(0,-1); } \foreach \x in {0,10.,20} { \draw[thick] (\x,0) -- +(0,2) -- +(0,-2); } \draw [decorate] ( 0,3) -- node[above=2mm] {$v^{(k+1)}$} (10,3); \draw [decorate] (10,3) -- node[above=2mm] {$y^{(k)}$} (20,3); \begin{scope}[yshift=-10]] \foreach \x in {0,1,...,9} { \draw[thick,gray!60!white] (\x,-\x) ++(0,-\x) ++(0,1) -- +(0,-2) ++(0,-1) -- ++ (10,0) ++(0,1) -- ++(0,-2); } \end{scope} \begin{scope}[yshift=-45] \draw (0,0) -- +(20,0); \foreach \x in {0,1,...,20} { \draw (\x,0) -- +(0,1) -- +(0,-1); } \foreach \x in {0,10.,20} { \draw[thick] (\x,0) -- +(0,2) -- +(0,-2); } \draw [decorate] (10,-3) -- node[below=2mm] {$x^{(k+1)}$} ( 0,-3); \draw [decorate] (20,-3) -- node[below=2mm] {$v^{(k)}$} (10,-3); \end{scope} \end{tikzpicture} \caption{Transition from $v^{(k+1)}y^{(k)}$ to $x^{(k+1)}v^{(k)}$ by applying $\chi$ from left to right at the positions indicated by the gray bars. Application of the inverse $\xi$ from right to left realizes the opposite transition from bottom to top.} \label{fig:a-b-1} \end{figure} For all $k\geq 0$ we now inductively define words $v^{(k+1)}$ and $x^{(k+1)}$ (all of length $n$) and along with them infinite sets $M_{k+1}$. % Since $M_k$ is infinite, there is a word $v^{(k+1)}$ such that the set \[ M_{k+1} = \{ i \in M_k \mid i< p_{k+2} \text{ and } \chi^{[i,p_k)}(y)|_{W_{k+1}} = v^{(k+1)} \} \] is infinite. % Since $M_{k+1}\subseteq M_k$ one has $\chi^{[p_{k+1},p_k)}(v^{(k+1)}y^{(k)}) = x^{(k+1)} v^{(k)}$ for some $x^{(k+1)}$ (see Fig.~\ref{fig:a-b-1}). % Since $\chi$ is bijective, for the inverse $\xi$ of $\chi$ we also have $v^{(k+1)}y^{(k)} = \xi^{[p_{k+1},p_k)^R}(x^{(k+1)} v^{(k)})$. % Note again that $\xi$ is applied from right to left. Now choose configuration $x= \cdots x^{(3)} \cdot x^{(2)} \cdot x^{(1)} \cdot v^{(0)} \cdot y_{\itoinfty{j+n}}$. On one hand $\chi^{j+}(x)$ already after the application of $\chi$ at position $j$ produces state $s$ there which never changes again. % Thus $\chi^{j+}(x)\not=f(y)$. On the other hand by construction, for all $k\geq 0$ we have $\xi^{[p_{k+1},p_k)^R}(x^{(k+1)} v^{(k)}) = v^{(k+1)}y^{(k)}$. % Therefore \begin{align*} \xi^{[p_1,p_0)^R}(x) &=\xi^{[p_1,p_0)^R}( \cdots x^{(3)} x^{(2)} x^{(1)} v^{(0)}y_{\itoinfty{j+n}}) \\ &=\cdots x^{(3)} x^{(2)} v^{(1)}y^{(0)}y_{\itoinfty{j+n}} \\ \intertext{and by induction for all $k\geq 0$} \xi^{[p_{k+1},p_0)^R}(x) &= \xi^{[p_{k+1},p_0)^R}( \cdots x^{(3)} x^{(2)} x^{(1)} v^{(0)}y_{\itoinfty{j+n}}) \\ &= \cdots x^{(k+1)} v^{(k)} y^{(k-1)}\cdots y^{(0)}y_{\itoinfty{j+n}} \end{align*} % Obviously one gets $\toleft{\xi}(x) =y$. \end{proof} \begin{lemma} \label{lem:not-a=>not-b} If $\chi$ is not a slider for $f$ then it is not a sweeper for $f$. \end{lemma} \begin{proof} If $\chi$ is not a slider for $f$ then there exists a configuration $x$ and an $i\in\mathbb{Z}$ such that for $z=\toright{\chi}(x)$ and $y=\toleft{\xi}(x)$, where $\xi$ denotes the inverse of $\chi$, one has $f(y) \not= z$. % Let $j$ be a cell where $f(y)_j\not= z_j$. % If $j<i+n$ we can instead of $x$ consider $x'=\xi^{[i-1,\dots,i-m]} (x)$ for some sufficiently large $m$. % Assume therefore that $j\geq i+n$. We will prove that there is a configuration $v$ such that for infinitely many positions $m$ the configuration $\chi^{\itoinfty{m}}(v)$ will not have the correct state at position $j$.
% Therefore the limit $\lim_{m\to-\infty}\chi^{\itoinfty{m}}(v)$ cannot exist and have the correct state at position $j$. Thus $\chi$ is not a sweeper for $f$. Below the abbreviation $\STT=\ST^n$ is used. Configuration $x$ is of the form $z_{\minftytoi{i}}\cdot w \cdot y_{\itoinfty{i+n}}$ for some $w\in\STT$. % Applying $\chi$ at position $i$ and further to the right produces the same result independent of what is to the left of $w$. % Therefore if $z_{\minftytoi{i}}$ is replaced by any $z'_{\minftytoi{i}}$ still the wrong state is produced at position $j$. Define a Mealy automaton with $Q=A$ by $\mu(qa)=\chi^{[0,n)}(qa)$ (observe that $qa\in\ST^{2n}$). % Since $\mu$ is bijective, one can now use the result from Lemma~\ref{lem:mealy} and conclude that there is a sequence $(\ldots,v^{(2)},v^{(1)})$, infinite to the left, of elements $v^{(k)}\in\STT$ such that \begin{equation} \text{$\delta^*(v^{(k)}\cdots v^{(1)})=w$ for infinitely many $k$.} \label{cond:star} \end{equation} % Let $v$ be the infinite to the left half-configuration obtained by concatenating all $v^{(k)}$, more precisely $v: \minftytoi{i+n}\to\ST$ where $v_{-kn+j+i} = v^{(k)}_j$ for all $k\geq 1$ and all $j\in[0,n)$. % Condition~(\ref{cond:star}) implies that for infinitely many $k\geq 1$ applying $\chi$ in $v$ from position $-kn+i$ up to but excluding $i$ produces $w$ at the end, i.\,e.~in the window $[i,i+n)$. In other words $\chi^{[-kn+i,i)}(v) = v'_{\minftytoi{i}} \cdot w$ ($v'$ depends on $k$ but doesn't matter). % Therefore for infinitely many $k$ \begin{align*} \chi^{[-kn+i,i)}(vy_{\itoinfty{i+n}}) &= \cdots \cdot w\cdot z_{\itoinfty{i+n}} \\ \text{and \quad} \chi^{\itoinfty{-kn+i}}(vy_{\itoinfty{i+n}}) &= \cdots \cdot w'\cdot z_{\itoinfty{i+n}} \end{align*} % Since we could assume that the position $j$ where $f(y)_j\not= z_j$ is in the interval $\itoinfty{i+n}$ one can conclude that $\chi$ is not a sweeper for $f$. \end{proof} While the slider and sweeper relations defined by a block rule are equal when at least one them defines a cellular automaton, sweeper relations can also define non-continuous functions. \begin{example} \label{ex:NotClosed} Let $\ST = \{\ns{0}{0},\ns{0}{1},\ns{1}{0},\ns{1}{1}\}$ and define $\chi : \ST^2 \to \ST^2$ by $\chi(\ns{1}{0} \ns{0}{0}) = \ns{0}{0} \ns{0}{1}$, $\chi(\ns{0}{0} \ns{0}{1}) = \ns{1}{0} \ns{0}{0}$, and $\chi(ab) = ab$ for $ab \notin \{\ns{1}{0} \ns{0}{0}, \ns{0}{0} \ns{0}{1}\}$. We claim that $\lim_{i \rightarrow -\infty} \chi^{i+}(x)$ is well-defined for all $x \in \ST^\mathbb{Z}$, so that the sweeper relation $\chi$ defines is a function. Let $x \in \ST^\mathbb{Z}$ be arbitrary, and let $n \in \mathbb{Z}$. We need to show that $\chi^{i+}(x)_n$ converges. Suppose first that for some $k < n$, we have $x_k = \ns{1}{a}$ for $a \in \{0,1\}$. Then for all $i < k$, the value $\chi^{i+}(x)_n$ is independent of the values $x_j \leq k$, since $\chi^{[i,k-1]}(x)_k = \ns{1}{a}$, meaning that the sweep is synchronized (in the sense that whatever information was coming from the left is forgotten and the sweep continues the same way) and $\chi^{i+}(x)_n$ is determined by $x_{[k,n]}$ for all $i < k$. Thus, in this case $\chi^{i+}(x)_n$ converges. Suppose then that for all $k < n$, $x_k = \ns{0}{a}$ for some $a \in \{0,1\}$. If $x_k = \ns{0}{0}$ for some $k < n$, then since $x_{k-1} \neq \ns{1}{0}$ we also have $\chi^{[i,k-2]}(x)_{k-1} \neq \ns{1}{0}$. 
% Thus, the value at $k$ does not change when $\chi$ is applied at $k-1$, and as in the previous paragraph, the sweep is synchronized at this position. Again $\chi^{i+}(x)_n$ is determined by $x_{[k,n]}$ for all $i < k$, so $\chi^{i+}(x)_n$ converges. In the remaining case, $x_k = \ns{0}{1}$ for all $k < n$. Then since $\chi(\ns{0}{1} \ns{0}{1}) = \ns{0}{1} \ns{0}{1}$, the rule is not applied in the left tail of $x$, and thus certainly $\chi^{i+}(x)_n$ converges. The function defined by the sweeper relation is not continuous at $\ns{0}{1}^\mathbb{Z}$ since $\chi^\mathbb{Z}(\ns{0}{1}^\mathbb{Z}) = \ns{0}{1}^\mathbb{Z}$ while for any $n \in \mathbb{N}$ we have \[ \chi^\mathbb{Z}(...\ns{0}{0} \ns{0}{0} \ns{0}{0} \ns{0}{1}^n . \ns{0}{1}^\mathbb{N}) = ...\ns{0}{0} \ns{0}{0} \ns{1}{0} \ns{1}{0}^n . \ns{1}{0}^\mathbb{N} \] \end{example} \section{Realization of bi-closing CA using LR and RL sliders} \label{sec:rev-ca} In the definition of a slider we use a left-to-right slide of the window to realize the CA transition. Of course, one can analogously define \emph{right-to-left sliders} and prove a characterization via right-closing CA. We can also alternate these two types of rules, and obtain a ladder-shaped hierarchy analogous to the Borel, arithmetic and polynomial hierarchies. \begin{definition} Let $\mathcal{R}$ denote the set of CA definable as slider relations with the window sliding from left to right, as in Definition~\ref{def:arule}. % Analogously let $\mathcal{L}$ denote the set of CA definable as right-to-left slider relations. % Denote $\Delta = \mathcal{L} \cap \mathcal{R}$. % Let now $\mathcal{L}_0 = \mathcal{R}_0 = \{\mathrm{id}\}$, and for all $k \in \mathbb{N}_0$ let $\mathcal{L}_{k+1} = \mathcal{L} \circ \mathcal{R}_{k}$ and $\mathcal{R}_{k+1} = \mathcal{R} \circ \mathcal{L}_{k}$. For all $n$, write $\Delta_n = \mathcal{L}_n \cap \mathcal{R}_n$. \end{definition} Note that in $\mathcal{L}_n$, there are $n$ sweeps (slider applications) in total, and the last sweep goes from right to left. We have $\mathcal{L}_1 = \mathcal{L}$, $\mathcal{R}_1 = \mathcal{R}$, $\Delta_1 = \Delta$. See Figure~\ref{fig:Hierarchy}.
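The following short Python sketch (not part of the formal development) illustrates the sweeps behind this hierarchy on a \emph{finite} word: a bijective block rule is applied at consecutive positions from left to right and then from right to left, as in the second level of the hierarchy just defined. The block rule used here is only a toy involution, the finite window ignores the boundary issues that the bi-infinite definitions handle, and all names in the sketch are ad hoc.
\begin{verbatim}
# Hypothetical illustration: applying a bijective block rule chi : S^n -> S^n
# as a left-to-right sweep and as a right-to-left sweep on a finite word.
# Boundary effects of the bi-infinite setting are ignored here.

def sweep_lr(chi, n, cells):
    """Apply chi at positions 0, 1, ..., len(cells)-n (left to right)."""
    cells = list(cells)
    for i in range(len(cells) - n + 1):
        cells[i:i + n] = chi(tuple(cells[i:i + n]))
    return cells

def sweep_rl(chi, n, cells):
    """Apply chi at positions len(cells)-n, ..., 1, 0 (right to left)."""
    cells = list(cells)
    for i in range(len(cells) - n, -1, -1):
        cells[i:i + n] = chi(tuple(cells[i:i + n]))
    return cells

# Toy bijective block rule on {0,1}^2: swap the blocks 10 and 01,
# fix 00 and 11.  It is an involution, hence bijective.
def chi(block):
    return {(1, 0): (0, 1), (0, 1): (1, 0)}.get(block, block)

x = [0, 1, 1, 0, 0, 1, 0]
y = sweep_lr(chi, 2, x)   # one left-to-right sweep
z = sweep_rl(chi, 2, y)   # followed by one right-to-left sweep
print(x, y, z)
\end{verbatim}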
\begin{figure} \begin{center} \begin{tikzpicture} \node (I0) at (-1,1) {}; \node[xshift=-4] (D0) at (I0) {$\Delta_1$}; \node (L1) at (0,0) {$\mathcal{L}_1$}; \node (I1) at (1,1) {}; \node[right = 0mm of I1] (D1) {$\Delta_2$}; \node (R1) at (0,2) {$\mathcal{R}_1$}; \node (L2) at (2,0) {$\mathcal{L}_2$}; \node (I2) at (3,1) {}; \node[right = 0mm of I2] (D2) {$\Delta_3$}; \node (R2) at (2,2) {$\mathcal{R}_2$}; \node (L3) at (4,0) {$\mathcal{L}_3$}; \node (I3) at (5,1) {}; \node[right = 0mm of I3] (D3) {$\Delta_4$}; \node (R3) at (4,2) {$\mathcal{R}_3$}; \node (L4) at (6,0) {$\mathcal{L}_4$}; \node (I4) at (7,1) {}; \node[right = 0mm of I4] (D4) {$\Delta_5$}; \node (R4) at (6,2) {$\mathcal{R}_4$}; \node (L5) at (8,0) {$\cdots$}; \node (R5) at (8,2) {$\cdots$}; \draw (I0) -- (R1); \draw (I0) -- (L1); \draw (L1) -- (R2); \draw (L1) -- (L2); \draw (R1) -- (L2); \draw (R1) -- (R2); \draw (L2) -- (R3); \draw (L2) -- (L3); \draw (R2) -- (L3); \draw (R2) -- (R3); \draw (L3) -- (R4); \draw (L3) -- (L4); \draw (R3) -- (L4); \draw (R3) -- (R4); \draw (L4) -- (R5); \draw (L4) -- (L5); \draw (R4) -- (L5); \draw (R4) -- (R5); \end{tikzpicture} \end{center} \vspace{-0.5cm} \caption{The sliding hierarchy.} \label{fig:Hierarchy} \end{figure} In Theorem~\ref{thm:slider-hierarchy=closing-hierarchy} below we will show a close relation between this ``slider hierarchy'' and a ``closingness hierarchy'' defined as follows, exactly analogously. Let $\mathcal{L}^{cl}$ denote the set of left-closing CA and $\mathcal{R}^{cl}$ the set of right-closing CA. Define $\mathcal{L}^{cl}_0 = \mathcal{R}^{cl}_0 = \{\mathrm{id}\}$ and for all $k$, $\mathcal{L}^{cl}_{k+1} = \mathcal{L}^{cl} \circ \mathcal{R}^{cl}_k$ and $\mathcal{R}^{cl}_{k+1} = \mathcal{R}^{cl} \circ \mathcal{L}^{cl}_k$. As always with such hierarchies, it is natural to ask whether they are infinite or collapse at some finite level. We do not know if either hierarchy collapses, but we show that after the first level, the hierarchies coincide. The main ingredients for the theorem are the following two lemmata. \begin{lemma} \label{lem:DividesPower} Let $f$ be a left-closing CA. For all $n$ large enough, $\card{\Psi_{n}}$ divides some power of $\card{\ST}$. \end{lemma} \begin{proof} Let $m$ be a strong left-closing radius for $f$. The number $m$ can be chosen as large as needed. Let $f_\mathrm{loc}$ be the local update rule of $f$ of radius $3m$. By Theorem 14.7 in~\cite{hedlund} there exist, for $k=3m$ chosen sufficiently large, \begin{itemize} \item positive integers $L,M$ and $R$ such that $L\cdot M\cdot R = |\ST|^{2k}$, \item pairwise different words $u_1,\dots,u_M$ of length $k$, \item sets ${\cal L}_1,\dots, {\cal L}_M\subseteq \ST^{2k}$ of words of length $2k$, each of cardinality $\card{{\cal L}_i}=L$, \item sets ${\cal R}_1,\dots, {\cal R}_M\subseteq \ST^{2k}$ of words of length $2k$, each of cardinality $\card{{\cal R}_i}=R$, \item a word $w$ of length $3k$ whose set of pre-images of length $5k$ under $f_\mathrm{loc}$ is precisely \[ \bigcup_{i=1}^M {\cal L}_iu_i{\cal R}_i. \] \end{itemize} See Figure~\ref{fig:hedlund} for an illustration.
\begin{figure} \begin{center} \includegraphics[scale=0.5]{hedlund-eps-converted-to.pdf} \end{center} \caption{Illustration of Theorem 14.7 in~\cite{hedlund}, with $L=4$, $R=3$.} \label{fig:hedlund} \end{figure} Let $y\in \ST^{[k,\infty)}$ be arbitrary and let $z\in\ST^{(-\infty,0)}$ be such that $z_{[-3k,0)}=w$. Define $A=\{x\in\ST^\mathbb{Z}\ |\ x_{[k,\infty)}=y, f(x)_{(-\infty,0)}=z\}$. By Corollary~\ref{cor:kurkacorollary} we know that $\card{\Psi_{3m}}=\card{A}$. \medskip \noindent (i) If $x\in A$ then $x_{[-4k,k)}$ is a pre-image of $w=z_{[-3k,0)}$. This means that for some $i\in\{1,\dots, M\}$, we have $x_{[-4k,k)}\in{\cal L}_iu_i{\cal R}_i$ and, in particular, $x_{[-2k,k)}\in u_i{\cal R}_i$. \medskip \noindent (ii) Conversely, let $i\in \{1,\dots, M\}$ and $v\in{\cal R}_i$ be arbitrary. Words in ${\cal L}_iu_iv$ are pre-images of $w$ so $f([u_iv]_{[-2k,k)})\cap [w]_{[-3k,0)} \neq\emptyset$. Because $f$ is left-closing and $k$ is a strong left-closing radius for $f$ there exists a unique $x\in A$ such that $x_{[-2k,k)}=u_iv$. \medskip \noindent From (i) and (ii) we can conclude that $\card{A}=M\cdot R$. Hence $L\cdot \card{\Psi_{3m}}=L\cdot \card{A}=L\cdot M\cdot R = |\ST|^{2k}$. \end{proof} \begin{lemma} \label{lem:ForLargeEnough} Let $f$ be a left-closing CA. Then for any large enough $n$, we have $\sigma^n \circ f \in \mathcal{R}$. \end{lemma} \begin{proof} By the previous lemma, we have $v_p(\lambda_f) = 0$ for all $p \nmid |\ST|$. Similarly as in \cite{blockCA} one sees that the map $g \mapsto \lambda_g$ is a homomorphism among left-closing CA, so \[ v_p(\lambda_{\sigma^n \circ f}) = v_p(\lambda_{\sigma^n} \cdot \lambda_{f}) = v_p(\lambda_f) - n v_p(|\ST|) \leq 0 \] for large enough $n$. The claim follows from Theorem~\ref{thm:SliderCharacterization2}. \end{proof} \begin{theorem} \label{thm:slider-hierarchy=closing-hierarchy} For each $k\in\mathbb{N}$ with $k\geq 2$ we have $\mathcal{L}_k = \mathcal{R}^{cl}_k$ and $\mathcal{R}_k = \mathcal{L}^{cl}_k$. \end{theorem} \begin{proof} By Lemma~\ref{lem:a-rule->lc} we have $f \in \mathcal{L} \implies f \in \mathcal{R}^{cl}$ and $f \in \mathcal{R} \implies f \in \mathcal{L}^{cl}$, so by induction $\mathcal{L}_k \subset \mathcal{R}^{cl}_k$ and $\mathcal{R}_k \subset \mathcal{L}^{cl}_k$. Suppose then that $f \in \mathcal{R}^{cl}_k$ and $k \geq 2$, so \[ f = f_1 \circ f_2 \circ \cdots \circ f_{k-1} \circ f_k \] where $f_i \in \mathcal{R}^{cl}$ for odd $i$ and $f_i \in \mathcal{L}^{cl}$ for even $i$. Then write \[ f = (f_1 \circ \sigma^{n_1}) \circ (f_2 \circ \sigma^{n_2}) \circ \cdots \circ (f_{k-1} \circ \sigma^{n_{k-1}}) \circ (f_k \circ \sigma^{n_k}) \] where $\sum_{i = 1}^k n_i = 0$ and for each odd $i$, $n_i \leq 0$ is small enough that $f_i \circ \sigma^{n_i} \in \mathcal{L}$ and for each even $i$, $n_i \geq 0$ is large enough that $f_i \circ \sigma^{n_i} \in \mathcal{R}$. This shows that $f \in \mathcal{L}_k$. Similarly $\mathcal{L}^{cl}_k \subset \mathcal{R}_k$, concluding the proof. \end{proof} A cellular automaton $f$ is \emph{bi-closing} if it is both left-closing and right-closing, i.e. $f \in \Delta^{cl}_1$. Such cellular automata are also called \emph{open}, since they map open sets to open sets.
By the previous result, every bi-closing CA can be realized by a left-to-right sweep followed by a right-to-left sweep, each given by a bijective block rule: \begin{theorem} Each bi-closing CA is in $\Delta_2$. \end{theorem} \section{Decidability} \label{sec:Decidability} In this section, we use our characterization of sliders and sweepers to show that their existence for a given CA is decidable. We also show that given a block rule, whether it defines some CA as a slider (equivalently as a sweeper) is decidable. We have seen that sweepers can also define shift-commuting functions which are not continuous. We show that this condition is also decidable. \begin{lemma} \label{lem:decidable-left-closing-strong-radius} Given a cellular automaton $f : \ST^\mathbb{Z} \to \ST^\mathbb{Z}$, it is decidable whether it is left-closing, and when $f$ is left-closing, a strong left-closing radius can be effectively computed. \end{lemma} \begin{proof} It is obviously decidable whether a given $m \in \mathbb{N}$ is a strong left-closing radius, since checking this requires only quantification over finite sets of words. This shows that being left-closing is semi-decidable, and that the radius $m$ can be computed when $f$ is left-closing. When $f$ is not left-closing, there exist $x, y$ such that $x_{[1,\infty)} = y_{[1,\infty)}$, $x_0 \neq y_0$ and $f(x) = f(y)$. A standard pigeonhole argument shows that there then also exists such a pair of points whose left and right tails are eventually periodic, showing that not being left-closing is semi-decidable. \end{proof} \begin{lemma} \label{lem:lambdaf} Given a left-closing cellular automaton $f : \ST^\mathbb{Z} \to \ST^\mathbb{Z}$, one can effectively compute the rational number $\lambda_f$ defined in Equation~\eqref{eq:lambda} on page \pageref{eq:lambda}. \end{lemma} \begin{proof} As observed after defining \eqref{eq:lambda}, the limit is reached in finite time, once $m$ is a strong left-closing radius. By the previous lemma, one can effectively compute a strong left-closing radius. \end{proof} \begin{theorem} \label{thm:decidable-slider-sweeper} Given a cellular automaton $f : \ST^\mathbb{Z} \to \ST^\mathbb{Z}$, it is decidable whether $f$ admits a slider (resp. a sweeper). \end{theorem} \begin{proof} By Theorem~\ref{thm:SweeperCharacterization}, a block rule is a sweeper for $f$ if and only if it is a slider for $f$, so in particular $f$ admits a slider if and only if it admits a sweeper. Theorem~\ref{thm:SliderCharacterization2} characterizes cellular automata admitting a slider as ones that are left-closing and satisfy $v_p(\lambda_f) \leq 0$ for all primes $p$. Decidability follows from the previous two lemmas. \end{proof} We now move on to showing that given a block rule, we can check whether its slider or sweeper relation defines a CA. In the rest of this section, we explain the automata-theoretic nature of both types of rules, which allows one to decide many properties of the slider and sweeper relations even when they do not define cellular automata. As is a common convention in automata theory, all claims in the rest of this section have constructive proofs (and thus imply decidability results), unless otherwise specified. We recall definitions from \cite{infauto} for automata on bi-infinite words.
A \emph{finite-state automaton} is $A = (Q, \ST, E, I, F)$ where $Q$ is a finite set of \emph{states}, $\ST$ the \emph{alphabet}, $E \subset Q \times \ST \times Q$ the \emph{transition relation}, $I \subset Q$ the set of \emph{initial states} and $F \subset Q$ the set of \emph{final states}. The pair $(Q, E)$ can be naturally seen as a labeled graph with labels in $\ST$. The \emph{language} of such an automaton $A$ is the set $\mathcal{L}(A) \subset \ST^\mathbb{Z}$ of labels of bi-infinite paths in $(Q, E)$ such that some state in $I$ is visited infinitely many times to the left (negative indices) and some state in $F$ infinitely many times to the right. Languages of finite-state automata are called \emph{recognizable}. If $A \subset \ST^{-\mathbb{N}}$ and $B \subset \ST^{\mathbb{N}}$, write $[A, B] \subset \ST^\mathbb{Z}$ for the set of configurations $x \in \ST^\mathbb{Z}$ such that for some $y \in A, z \in B$, $x_i = y_{i+1}$ for $i < 0$ and $x_i = z_i$ for $i \geq 0$. We need the following lemma. \begin{lemma}[Part of Proposition~IX.2.3 in \cite{infauto}] \label{lem:ZetaAutomatic} For a set $X \subset \ST^\mathbb{Z}$ the following are equivalent: \begin{itemize} \item $X$ is recognizable, \item $X$ is shift-invariant and a finite union of sets of the form $[A, B]$ where $B$ is $\omega$-recognizable (accepted by a B\"uchi automaton) and $A$ is the reverse of an $\omega$-recognizable set. \end{itemize} \end{lemma} In the theorems of this section, note that the set $\ST^\mathbb{Z} \times \ST^\mathbb{Z}$ is in a natural bijection with $(\ST^2)^\mathbb{Z}$. \begin{proposition} \label{prop:arule-relation-decidable} Let $\chi : \ST^m \to \ST^m$ be a bijective block rule. Then the corresponding \emph{slider\ relation} $F \subset (\ST^2)^\mathbb{Z}$ is recognizable. \end{proposition} \begin{proof} Let $\xi = \chi^{-1}$. The slider relation is defined as the pairs $y, z \in \ST^\mathbb{Z}$ such that for some representation $(x, 0)$ we have $\chi^{0-}(x) = y$ and $\xi^{0+}(x) = z$. For each $uw \in S^{[-m,m-1]}$ where $|u| = |w| = m$, we define recognizable languages $A_{uw} \subset (\ST^2)^{(-\infty,0)}$ and $B_{uw} \subset (\ST^2)^{[0, \infty)}$ such that the slider relation is $\bigcup_{uw \in S^{[-m,m-1]}} [A_{uw}, B_{uw}]$. For finite words, one-way infinite words and more generally patterns over any domain $D \subset \mathbb{Z}$, define the ordered applications of $\chi$ and $\xi$ (e.g. $\chi^{i+}$) with the same formulas as for $x \in \ST^\mathbb{Z}$, when they make sense. For each word $uw \in S^{[-m,m-1]}$, define the $\omega$-recognizable set $B_{uw} \subset (S^2)^\mathbb{N}$ containing those $(y, z)$ for which $\chi^{0+}(x) = z$ where $x \in \ST^\mathbb{N}$ satisfies $x_{[0,m-1]} = w$, $x_{[m,\infty)} = y_{[m,\infty)}$, and $\xi^{[-m+1,-1]^R}(uw)|_{[0,m-1]} = y_{[0,m-1]}$. One can easily construct a B\"uchi automaton recognizing this language, so $B_{uw}$ is $\omega$-recognizable. Then, for $uw \in S^{[-m,m-1]}$, define $A_{uw} \subset (\ST^2)^{-\mathbb{N}}$ as the set of those pairs $(y, z)$ such that $\xi^{0-}(zw)|_{(-\infty, 0)} = y$, where $zw \in \ST^{(-\infty, m-1]}$; note that $A_{uw}$ depends only on $w$. Again it is easy to construct a B\"uchi automaton for the reverse of $A_{uw}$. Now it is straightforward to verify that the slider relation of $\chi$ is \[ \bigcup_{uw \in S^{[-m, m-1]}} [A_{uw}, B_{uw}], \] which is recognizable by Lemma~\ref{lem:ZetaAutomatic} since the slider relation is always shift-invariant.
\end{proof} \begin{lemma} \label{lem:FunctionDecidable} Given a recognizable set $X \subset (S^2)^\mathbb{Z}$, interpreted as a binary relation over $S^\mathbb{Z}$, it is decidable whether $X$ defines a function. \end{lemma} \begin{proof} Since recognizable sets representing relations are closed under Cartesian products, projections and intersections (by standard constructions), if $X$ is recognizable then so is the `fiber product' $Y \subset (S^2)^\mathbb{Z}$ containing those pairs $(z,z')$ satisfying $\exists y: (y,z) \in X \wedge (y,z') \in X$. The diagonal $\Delta$ of $(S^2)^\mathbb{Z}$ containing all pairs of the form $(z,z)$ is also clearly recognizable. Since recognizable languages are closed under complementation \cite{infauto}, we obtain that $((S^2)^\mathbb{Z} \setminus \Delta) \cap Y$ is recognizable. This set is empty if and only if $X$ defines a function, proving decidability, since all proofs in this section are constructive and emptiness of a recognizable language is decidable using standard graph algorithms. \end{proof} The following is a direct corollary. \begin{theorem} Given a block rule, it is decidable whether it is the sliding rule of a CA. \end{theorem} We now discuss sweeping rules. \begin{proposition} \label{prop:brule-relation-decidable} Let $\chi : \ST^m \to \ST^m$ be a block rule. Then the corresponding \emph{sweeper\ relation} $F \subset (\ST^2)^\mathbb{Z}$ is recognizable. \end{proposition} \begin{proof} One can easily construct a finite-state automaton accepting the language $X \subset (\{0,1\}^2 \times \ST^2)^\mathbb{Z}$ containing those $(x, x', y, z) \in (\{0,1\}^2 \times \ST^2)^\mathbb{Z}$ where \[ \chi^{m+}(y)|_{[n,\infty)} = z|_{[n, \infty)} \] and $x_i = 1 \iff i = m$ and $x'_i = 1 \iff i = n$. Simply construct an automaton that checks that there is exactly one $1$-symbol on each of the first two tracks and that, once the $1$ on the first track has been seen, keeps in its state the current contents of the active window (where the block rule is being applied). When the $1$ is seen on the second track, it also starts checking that the image is correct. Since $X$ is described by an automaton and $|\ST| \leq 2^k$ for some $k$, an adaptation of \cite[Theorem~IX.7.1]{infauto} shows that there exists a monadic second-order formula over the successor function of $\mathbb{Z}$, i.e. some formula $\phi \in \mbox{MF}_2(<)$, that defines those tuples of sets of integers $(x, x', y_1, ..., y_k, z_1, ..., z_k)$ where $(y_1,...,y_k)$ codes some $y$ and $(z_1,...,z_k)$ some $z$ such that $(x, x', y, z)$ is in $X$. Since in any tuple $(x, x', y_1, ..., y_k, z_1, ..., z_k)$ that satisfies $\phi$ we have $|x| = |x'| = 1$, it is standard to modify $\phi$ into a formula $\phi'$ where $x, x'$ are replaced by first-order variables $i, j$ corresponding to the unique places in $x$ and $x'$ where the $1$ appears. Now the formula $\psi$ defined by \[ \forall j \in \mathbb{Z}: \forall n \in \mathbb{Z}: \exists i \leq n: \phi'(i, j, y_1,...,y_k, z_1,...,z_k) \] defines those tuples $(y_1, ..., y_k, z_1, ..., z_k)$ that code pairs $(y, z)$ which are in the sweeper relation for $\chi$. Another application of \cite[Theorem~IX.7.1]{infauto} then shows that the sweeper relation is recognizable. \end{proof} The sweeping relation need not be closed, as shown in Example~\ref{ex:NotClosed}. However, whether it is closed is decidable. \begin{lemma} \label{lem:ClosedDecidable} Given a recognizable $X \subset \ST^\mathbb{Z}$, it is decidable whether $X$ is closed.
\end{lemma} \begin{proof} Take an automaton recognizing $X$, remove all states from which an initial state is not reachable to the left and all states from which a final state is not reachable to the right. Turn all states into initial and final states. Now $X$ is closed if and only if the new automaton recognizes $X$, which is decidable by standard arguments. \end{proof} \begin{theorem} \label{thm:block-decidable-whether-sweeping} Given a block rule, it is decidable whether the sweeping relation it defines is a CA. \end{theorem} \begin{proof} The sweeping relation of a block rule defines a CA if and only if it is closed and defines a function. These are decidable by Lemma~\ref{lem:FunctionDecidable} and Lemma~\ref{lem:ClosedDecidable}, respectively. \end{proof} \section{Future work and open problems} To obtain a practical computer implementation method for cellular automata, one would need much more work. The radius of $\chi$ should be given precise bounds, and we would also need bounds on how long it takes until the sweep starts producing correct values. Future work will involve clarifying the connection between the radii $m$ of local rules $\chi : S^m \to S^m$ and the strong left-closing radii, the study of non-bijective local rules, and the study of sweeping rules on periodic configurations. On the side of theory, it was shown in Section~\ref{sec:rev-ca} that the hierarchy of left- and right-closing cellular automata corresponds to the hierarchy of sweeps starting from the second level. Neither hierarchy collapses on the first level, since there exist CA which are left-closing but not right-closing, from which one also obtains CA which are in $\mathcal{R}_1$ but not in $\mathcal{L}_1$. \begin{question} Does the hierarchy collapse on a finite level? Is every surjective CA in this hierarchy? \end{question} As we do not know which cellular automata appear on which levels, we do not know whether these levels are decidable. For example, we do not know whether it is decidable if a given CA is the composition of a left sweep and a right sweep. It seems likely that the theory of sliders can be extended to shifts of finite type. If $X$ is a subshift, say that a homeomorphism $\chi : X \to X$ is \emph{local} if its application modifies only a (uniformly) bounded set of coordinates. One can define sliding applications of such homeomorphisms exactly as in the case of $\ST^\mathbb{Z}$. \begin{question} Let $X \subset \ST^\mathbb{Z}$ be a transitive subshift of finite type. Which endomorphisms of $X$ are defined by a sliding rule defined by a local homeomorphism? \end{question} In \cite{blockCA}, block representations are obtained for cellular automata in one and two dimensions, by considering the set of stairs of reversible cellular automata. Since stairs play a fundamental role for sliders as well, it seems natural to attempt to generalize our theory to higher dimensions. \par \noindent \textbf{Acknowledgement.} The authors gratefully acknowledge partial support for this work by two short term scientific missions of the EU COST Action IC1405. \bibliographystyle{plain}
\section{Introduction} Diophantine approximation deals with the approximation of real numbers by rationals. A classic example is the set $J(\alpha)$ of all $\alpha$-well approximable numbers, \[ J(\alpha) = \{\, x \in \mathbb{R} : |x - p/q| < q^{-\alpha} \text{ for infinitely many } (p,q) \in \mathbb{Z} \times \mathbb{N} \,\}. \] Dirichlet showed that $J(\alpha) = \mathbb{R}$ for $\alpha = 2$ and Jarn\'{i}k \cite{jarnik} and Besicovitch \cite{besicovitch} showed that the Hausdorff dimension of $J(\alpha)$ is $2/\alpha$ for all $\alpha\geq 2$. The sets $J(\alpha)$ belong to a family of sets with an interesting large intersection property, first introduced by Falconer in \cite{falconerold, falconernew}. Falconer defined classes $\mathcal{G}^s$ of $G_{\delta}$ subsets of $\mathbb{R}^n$ with the property that any set in $\mathcal{G}^s$ has Hausdorff dimension at least $s$, and any countable intersection of bi-Lipschitz images of sets from $\mathcal{G}^s$ also belongs to $\mathcal{G}^s$. There are several equivalent ways to characterise the sets in $\mathcal{G}^s$ (see \cite{falconernew}). Falconer showed that the set $J (\alpha)$ is in the class $\mathcal{G}^{2/\alpha}$ \cite{falconerold}. This implies that any countable intersection of $J (\alpha)$ with sets from $\mathcal{G}^{2/\alpha}$ has Hausdorff dimension $2/\alpha$. Real numbers are typically represented by some imperfect truncation of their expansion to some given integer base. This motivates the classification of numbers according to the accuracy of their finite expansions by considering sets of the form, \begin{multline*} \mathcal{B}(\alpha) = \{ \, x \in \mathbb{R} : |x - p/2^n| < 2^{-\alpha n} \text{ for infinitely many } (p,n) \in \mathbb{Z} \times \mathbb{N} \,\}. \end{multline*} For each $\alpha$ the set $\mathcal{B} (\alpha)$ is of Hausdorff dimension $1/\alpha$. Moreover, each $\mathcal{B}(\alpha)$ belongs to the class $\mathcal{G}^{1/\alpha}$. We note that $\mathcal{B}(\alpha)=\mathcal{D}(\alpha)+\mathbb{Z}$ where, \begin{multline*} \mathcal{D}(\alpha) = \biggl\{ x \in [0,1] : \bigg|x-\sum_{i=1}^n \omega_i 2^{-i}\bigg|<2^{-n\alpha} \\ \text{ for infinitely many }\omega \in \{0,1\}^n, n \in \mathbb{N} \biggr\}. \end{multline*} For each $n \in \mathbb{N}$ we let $\mathcal{D}_n$ denote the set of all $n$-th level dyadic sums \begin{eqnarray*} \mathcal{D}_n := \left\lbrace \sum_{i=1}^n \omega_i 2^{-i}: \omega \in \{0,1\}^n\right\rbrace. \end{eqnarray*} The fact that $\mathcal{B}(\alpha)$ belongs to the class $\mathcal{G}^{1/\alpha}$ is essentially a consequence of the fact that each $\mathcal{D}_n$ is evenly distributed in $[0,1]$. This motivates the heuristic principle that if $\{\mathcal{D}_n\}_{n \in\mathbb{N}}$ were replaced by some other family of suitably well distributed sets then we should still obtain a set with large intersection properties. Now take some $\lambda \in \left(\frac{1}{2},1\right)$. Just as every number between zero and one may be written as a binary expansion, any number $x \in [0, \lambda(1-\lambda)^{-1}]$ may be written in the form, \begin{equation*} x=\sum_{i=1}^{\infty}\omega_i \lambda^i, \end{equation*} for some $(\omega_i)_{i=1}^{\infty} \in \{0,1\}^{\mathbb{N}}$. Following Pollicott and Simon \cite{lambda expansions deleted digits} we refer to $\left(\omega_i\right)_{i=1}^{\infty}$ as the $\lambda$-expansion of $x$. In this paper we shall study the approximation of real numbers by the finite truncations of their $\lambda$-expansions.
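As a concrete, informal illustration (not used in the proofs below), finite truncations of a $\lambda$-expansion can be produced greedily, digit by digit; for $\frac{1}{2}<\lambda<1$ the greedy choice always succeeds for $x \in [0,\lambda/(1-\lambda)]$, and the truncation error after $n$ digits is at most $\lambda^{n+1}/(1-\lambda)$. The Python sketch below and its function names are purely illustrative assumptions and are not taken from the cited literature.
\begin{verbatim}
# Illustrative sketch: the greedy digit choice omega_i = 1 whenever the
# remainder is at least lambda^i yields a lambda-expansion of x, and the
# error after n digits is bounded by lambda^(n+1)/(1-lambda).
def greedy_lambda_expansion(x, lam, n):
    digits, r = [], x
    for i in range(1, n + 1):
        if r >= lam ** i:
            digits.append(1)
            r -= lam ** i
        else:
            digits.append(0)
    return digits, r  # r = x - sum_i digits[i-1] * lam**i

lam, x = 0.6, 0.8          # here I_lambda = [0, lam/(1-lam)] = [0, 1.5]
digits, err = greedy_lambda_expansion(x, lam, 12)
print(digits, err, lam ** 13 / (1 - lam))  # err stays below the bound
\end{verbatim}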
Hence, we study sets of the form \begin{multline*} W_\lambda (\alpha) = \biggl\{\, x \in [0, \lambda/(1-\lambda)] : \bigg|x - \sum_{k=1}^n \omega_k \lambda^k \bigg| < 2^{-\alpha n},\\ \text{ for infinitely many } \omega \in \{0,1\}^n,\, n \in \mathbb{N} \,\biggr\}. \end{multline*} Since $W_\lambda (\alpha)$ is a subset of $[0,\lambda / (1 - \lambda)]$ it cannot belong to any class $\mathcal{G}^s$. Instead we will consider the corresponding versions of the classes $\mathcal{G}^s$ for subsets of an interval $I$, denoted by $\mathcal{G}^s (I)$. It is natural to conjecture that for almost every $\lambda$, $W_\lambda (\alpha)$ belongs to the class $\mathcal{G}^{1/\alpha} (I)$. This conjecture is motivated by our heuristic principle combined with results concerning the distribution of the $n$-th level $\lambda$-sums, \begin{eqnarray*} \mathcal{D}_{\lambda}(n):= \left\lbrace \sum_{i=1}^n \omega_i \lambda^{i}: \omega \in \{0,1\}^n\right\rbrace. \end{eqnarray*} This topic has attracted a great deal of interest since the time of Erd\H{o}s \cite{erdos}. Erd\H{o}s studied a class of measures known as \textit{infinite Bernoulli convolutions} formed by taking the distributions of the random variable \[ \sum_{k=1}^\infty \pm \lambda^k, \] for some $\lambda \in \left(\frac{1}{2},1\right)$, where in each term $+$ and $-$ are chosen independently and with equal probability. Erd\H{o}s proved the existence of an interval $(a,1)$ such that the infinite Bernoulli convolution is absolutely continuous for almost every $\lambda \in (a,1)$ \cite{erdos}. Erd\H{o}s also proved the existence of a countable family of $\lambda$ for which the corresponding infinite Bernoulli convolution is not absolutely continuous. Nonetheless it was conjectured that for almost every $\lambda \in \left(\frac{1}{2},1\right)$ the corresponding infinite Bernoulli convolution is absolutely continuous. In a breakthrough work, Solomyak answered this conjecture in the affirmative \cite{solomyak}. This implies that for typical $\lambda$ the sums $\mathcal{D}_{\lambda}(n)$ are fairly evenly distributed in the sense of Lebesgue. We shall show that for almost all $\lambda \in (\frac{1}{2}, \frac{2}{3})$, the set $W_\lambda (\alpha)$ belongs to the class $\mathcal{G}^{\frac{1}{\alpha}} (I)$. However, there is a dense set of $\lambda$ such that the dimension of $W_{\lambda}(\alpha)$ drops below $1/\alpha$. We also show that for any $\lambda \in (\frac{1}{2}, 1)$, the set $W_\lambda (\alpha)$ belongs to $\mathcal{G}^s (I)$, at least for $s = - \frac{1}{\alpha} \frac{\log \lambda}{\log 2}$. We also show that this estimate is sharp in the sense that there exists a countable set of $\lambda$ for which $\dimH W_\lambda (\alpha) = - \frac{1}{\alpha} \frac{\log \lambda}{\log 2}$, and hence $W_\lambda (\alpha)$ is not in the class $\mathcal{G}^s (I)$ for any $s$ larger than $- \frac{1}{\alpha} \frac{\log \lambda}{\log 2}$. \section{Notation and Statement of Results} We begin by defining the classes $\mathcal{G}^s (I)$ referred to in the introduction. One characterisation of Falconer's class $\mathcal{G}^s$ is as follows \cite{falconernew}: $\mathcal{G}^s$ is the set of all $G_{\delta}$ sets $A$ which have the property that for any countable collection $\{f_j\}_{j \in \mathbb{N}}$ of similarity transformations $f_j: \mathbb{R} \rightarrow \mathbb{R}$ we have, \[ \dimH \Bigl(\bigcap_{j \in \mathbb{N}} f_j (A) \Bigr) \geq s. \] The class $\mathcal{G}^s (I)$ may be defined in terms of $\mathcal{G}^s$.
\begin{defn} Given an interval $I$, the class $\mathcal{G}^s(I)$ is the class of subsets of $I$ given by $\mathcal{G}^s(I):= \left\lbrace A \subseteq I: A+\diam(I) \cdot \mathbb{Z} \in \mathcal{G}^s \right\rbrace.$ \end{defn} We let $I_{\lambda}$ denote the closed interval $[0,\lambda/(1-\lambda)]$ which consists of all points $x\in \mathbb{R}$ which may be written in the form $x=\sum_{i=1}^{\infty}\omega_i\lambda^{i}$ for some $\omega\in \{0,1\}^{\mathbb{N}}$. We shall consider the sets $W_{\lambda}(\alpha)$ of points which are $\alpha$-well-approximated by $\lambda$-expansions, \begin{equation*} W_{\lambda}(\alpha):= \bigcap_{m\in \mathbb{N}}\bigcup_{n\geq m} \bigcup_{\omega \in \{0,1\}^n}\biggl\{\, x \in I_\lambda : \bigg|x-\sum_{i=1}^n \omega_i \lambda^{i}\bigg|<2^{-n\alpha} \,\biggr\}. \end{equation*} \begin{theorem}\label{main bullet points} Choose $\alpha \in (1, \infty)$. \begin{enumerate} \item For all $\lambda \in \left(\frac{1}{2},1\right)$, $\dimH W_{\lambda}(\alpha)\leq \frac{1}{\alpha}$, \item For almost every $\lambda \in \left(\frac{1}{2},\frac{2}{3}\right)$, $W_{\lambda}(\alpha) \in \mathcal{G}^{s}(I_{\lambda})$ for $s=\frac{1}{\alpha}$, \item For a dense set of $\lambda \in \left(\frac{1}{2},1\right)$, $\dimH W_{\lambda}(\alpha) < \frac{1}{\alpha}$, \item For all $\lambda \in \left(\frac{1}{2},1\right)$, $W_{\lambda}(\alpha) \in \mathcal{G}^{s}(I_{\lambda})$ for $s=-\frac{\log \lambda}{\log 2}\frac{1}{\alpha}$, \item For a countable set of $\lambda \in \left(\frac{1}{2},1\right)$, $\dimH W_{\lambda}(\alpha)=-\frac{\log \lambda}{\log 2}\frac{1}{\alpha}$. \end{enumerate} \end{theorem} In addition to Theorem \ref{main bullet points} (2) we also have the following upper bound on the dimension of the set of exceptions. \begin{theorem}\label{theorem typical} Given $\alpha >1$ and $s\leq \frac{1}{\alpha}$ we have, \begin{equation*} \dimH \Bigl\{\, \lambda \in \Bigl( \frac{1}{2},\frac{2}{3} \Bigr): W_{\lambda}(\alpha) \notin \mathcal{G}^{s}(I_{\lambda}) \,\Bigr\} \leq s. \end{equation*} \end{theorem} The remainder of the paper is structured as follows. In Section \ref{s P of Th 2} we prove Theorem \ref{theorem typical}, which implies Theorem \ref{main bullet points} (2). In Section \ref{sec:lowerbound} we establish the uniform lower bound given in Theorem \ref{main bullet points} (4). In Section \ref{s covering args} we prove the upper bounds in Theorem \ref{main bullet points} parts (1), (3) and (5). \section{Proof of Theorem \ref{theorem typical}}\label{s P of Th 2} In this section we prove Theorem \ref{theorem typical}. The proof of this theorem is influenced by Rams' work on the dimension of the exceptional set for families of self-similar measures with overlaps \cite{rams}. For each $\lambda\in\left(\frac{1}{2},\frac{2}{3}\right)$, $n \in \mathbb{N}$, $k\in \mathbb{N}\cup\{0\}$ and $r\in \mathbb{N}$ we define a pair of proximity numbers \begin{align*} \tilde{P}_n(\lambda,k,r) &:= \# \biggl\{ (\omega,\kappa)\in \left(\{0,1\}^n\right)^2: \bigg|\sum_{i=1}^n(\omega_i-\kappa_i)\lambda^i\bigg|\leq r \cdot \lambda^{n+k} \biggr\},\\ P_n(\lambda,k,r) &:= \# \biggl\{ (\omega,\kappa)\in \left(\{0,1\}^n\right)^2: \bigg|\sum_{i=1}^n(\omega_i-\kappa_i)\lambda^i\bigg| \\ & \hspace{5cm} \leq r \cdot \lambda^{n+k}\text{ and }\omega_1\neq \kappa_1 \biggr\}. \end{align*} \begin{lemma}\label{proximity inequality} For all $n \in \mathbb{N}$ and $k\in \mathbb{N}\cup\{0\}$, $r\in \mathbb{N}$ we have, \[ \tilde{P}_n(\lambda,k,r)\leq 2^n+\sum_{l=1}^{n}2^{n-l}P_l(\lambda,k,r).
\] \end{lemma} \begin{proof} For notational convenience we let, \begin{eqnarray*} \mathcal{P}_n(\lambda,k,r):=\left\lbrace (\omega,\kappa)\in \left(\{0,1\}^n\right)^2: \bigg|\sum_{i=1}^n(\omega_i-\kappa_i)\lambda^i\bigg|\leq r \cdot \lambda^{n+k} \right\rbrace. \end{eqnarray*} We begin by writing, \begin{align}\label{ summands for different proximity numbers } \tilde{P}_n (\lambda,k,r) = \#\bigl\{ (\omega,\kappa)\in \mathcal{P}_n(\lambda,k,r) & : \omega = \kappa \bigr\} \\ + \sum_{l=1}^{n}\#\bigl\{ (\omega,\kappa)\in \mathcal{P}_n(\lambda,k,r) & : \omega_i=\kappa_i \text{ for }i\leq n-l \nonumber \\ & \phantom{:} \text{ and }\omega_{n-l+1}\neq \kappa_{n-l+1}\bigr\}. \nonumber \end{align} The cardinality of the first summand is clearly equal to $2^n$. Given $l \in \{1,\dots,n\}$ and a pair $(\omega,\kappa)\in \mathcal{P}_n(\lambda,k,r)$ with $\omega_i=\kappa_i$ for $i\leq n-l$ and $\omega_{n-l+1}\neq \kappa_{n-l+1}$, there exist some $\eta \in \{0,1\}^{n-l}$ and $\zeta,\xi \in \{0,1\}^l$ with $\zeta_1\neq \xi_1$ such that $\omega= \eta \zeta$ and $\kappa = \eta \xi$. It follows from the fact that $(\omega,\kappa)\in \mathcal{P}_n(\lambda,k,r)$ that, \begin{equation*} \lambda^{n-l} \bigg|\sum_{i=1}^l(\zeta_i-\xi_i)\lambda^i\bigg|\leq r\cdot \lambda^{n+k}. \end{equation*} Thus, \begin{equation*} \bigg|\sum_{i=1}^l(\zeta_i-\xi_i)\lambda^i\bigg|\leq r \cdot \lambda^{l+k}. \end{equation*} It follows that the number of elements of \begin{equation*} \left\lbrace (\omega,\kappa)\in \mathcal{P}_n(\lambda,k,r): \omega_i=\kappa_i \text{ for }i\leq n-l \text{ and }\omega_{n-l+1}\neq \kappa_{n-l+1}\right\rbrace \end{equation*} is equal to $P_l(\lambda,k,r)$ multiplied by the number of possible initial strings of length $n-l$. Thus, \begin{multline*} \#\left\lbrace (\omega,\kappa)\in \mathcal{P}_n(\lambda,k,r): \omega_i=\kappa_i \text{ for }i\leq n-l \text{ and }\omega_{n-l+1}\neq \kappa_{n-l+1}\right\rbrace \\ = 2^{n-l}P_l(\lambda,k,r). \end{multline*} Substituting into equation (\ref{ summands for different proximity numbers }) completes the proof of the lemma. \end{proof} To prove that $W_\lambda (\alpha)$ is in $\mathcal{G}^s (I_\lambda)$ we will need good estimates on the numbers $P_n (\lambda, k, r)$. We will obtain such estimates for almost all $\lambda \in (\frac{1}{2}, \frac{2}{3})$, and the first step towards this is the following lemma. \begin{lemma}[Shmerkin, Solomyak \cite{shmerkinsolomyak}] \label{Shmerkin Solomyak} For any $\varepsilon > 0$, there exists some $\delta>0$ such that for all polynomials of the form $g(\lambda)=1+\sum_{i=1}^n a_i\lambda^i$ with $a_i \in \{-1,0,1\}$ and for all $\lambda\in (\frac{1}{2},\frac{2}{3} - \varepsilon)$ we have $g'(\lambda)<-\delta$ whenever $g(\lambda)<\delta$. \end{lemma} Given $n \in \mathbb{N}$, a pair $(\omega,\kappa)\in (\{0,1\}^n)^2$ and $\gamma>0$ we let \begin{equation*} I_n(\omega,\kappa,\gamma):=\left\lbrace \lambda \in \left(\frac{1}{2},\frac{2}{3}\right): \bigg|\sum_{i=1}^n(\omega_i-\kappa_i)\lambda^i\bigg|\leq \gamma \right\rbrace. \end{equation*} \begin{lemma}\label{cor to Sh and Sol} Let $\delta$ be as in Lemma \ref{Shmerkin Solomyak}. Then for all $\gamma\in (0,\delta/2)$ and all pairs $(\omega,\kappa)\in (\{0,1\}^n)^2$ with $\omega_1 \neq \kappa_1$, $I_n(\omega,\kappa,\gamma)$ has diameter not exceeding $4\delta^{-1}\gamma$. \end{lemma} \begin{proof} Since $\omega_1\neq \kappa_1$ we may assume without loss of generality that $\omega_1=1$ and $\kappa_1=0$.
Choose $\gamma\in (0,\delta/2)$ and all pairs $(\omega,\kappa)\in \left(\{0,1\}\right)^n$ with $\omega_1 \neq \kappa_1$. Now let $g(\lambda):= \sum_{i=1}^{n}(\omega_i-\kappa_i)\lambda^{i-1}$, which is of the required form for Lemma \ref{Shmerkin Solomyak}. We note that $\lambda \in I_n(\omega,\kappa,\gamma)$ implies $|g(\lambda)|<\gamma/\lambda <\delta$. By Lemma \ref{Shmerkin Solomyak} $g'(\lambda)<-\delta$ whenever $g(\lambda)<\delta$. Suppose $I_n(\omega,\kappa,\gamma)\neq \emptyset$ and choose $\lambda_0:=\inf I_n(\omega,\kappa,\gamma)$. It follows from Rolle's theorem that for all $\lambda\geq \lambda_0$, $g(\lambda) \leq \gamma<\delta$ and hence, $g'(\lambda)<-\delta$. Hence, $I_n(\omega,\kappa,\gamma)\subseteq [\lambda_0,\lambda_0+4\delta^{-1}\gamma]$. \end{proof} Using the following result by Rams \cite{rams} we will prove our desired estimates for the numbers $P_n (\lambda, k, r)$. \begin{lemma}[Rams \cite{rams}] \label{Ram's combinatorial lemma} Suppose we have a family of sets $\{E_i\}_i$ with $E_i$ of diameter $d_i$. Let $\rho>0$ be some positive real number and $b\in \mathbb{N}$. Then, the set of points which belong to at least $b$ of the sets $E_i$ may be covered by some family of intervals $\{\tilde{E}_j\}_j$ so that $\tilde{E}_j$ has diameter $\tilde{d}_j$ with $\sup_j \tilde{d}_j \leq 4 \sup_i d_i$ and \begin{equation*} \sum_j \tilde{d}_j^{\rho} \leq 4^{\rho} \cdot \frac{1}{b} \sum_i d_i^{\rho}. \end{equation*} \end{lemma} For each $s$ we shall let \begin{equation*} A(s):= \bigcup_{r\in \mathbb{N}}\bigcap_{m\in \mathbb{N}}\bigcup_{n \geq m} \bigcup_{k\geq 0}\left\lbrace \lambda \in \left(\frac{1}{2},\frac{2}{3}\right): P_n(\lambda,k,r)>4^n\lambda^{s(n+k)} \right\rbrace. \end{equation*} \begin{lemma} For all $s \in (0,1)$ we have $\dimH A(s)\leq s$. \end{lemma} \begin{proof} Choose $\rho > s$ and take some $r\in \mathbb{N}$. Take $n\in \mathbb{N}$ with $\lambda^{n}<\delta/2$. Note that each $\lambda \in \left(\frac{1}{2},\frac{2}{3}\right)$ with $P_n(\lambda,k,r)>4^n\lambda^{s(n+k)}$ is contained within $I_n(\omega,\kappa,r\cdot\lambda^{n+k})$ for at least $\lceil 4^n\lambda^{s(n+k)} \rceil$ pairs $(\omega,\kappa)\in \left(\{0,1\}^n\right)^2$. Now by Lemma \ref{cor to Sh and Sol} each $I_n(\omega, \kappa,r\cdot\lambda^{n+k})$ has diameter not exceeding $4\delta^{-1}r\lambda^{n+k}$. Thus, by Lemma \ref{Ram's combinatorial lemma} we may cover \[ \left\lbrace \lambda \in \left(\frac{1}{2},\frac{2}{3}\right): P_n(\lambda,k,r)>4^n\lambda^{s(n+k)} \right\rbrace \] with a family of sets $A_{i}^n(s,k)$ of diameter no greater than $16r\delta^{-1}\lambda^{n+k}$ and satisfying, \begin{align*} \sum_{i} & \diam(A_i^n(s,k))^{\rho} \leq 4^{\rho}\cdot (4^{-n}\lambda^{-s(n+k)}) \cdot \\ & \hspace{4cm} \cdot \sum_{(\omega,\kappa)\in \left(\{0,1\}\right)^n} \diam\left(I_n(\omega,\kappa,r\lambda^{n+k})\right)^{\rho} \\ &\leq (4r)^{\rho}\cdot (4^{-n}\lambda^{-s(n+k)}) \cdot 2^{2n} \cdot \left(4\delta^{-1}\lambda^{n+k}\right)^{\rho}\\ &\leq (16r/\delta)^{\rho} \lambda^{(n+k)(\rho-s)}. \end{align*} Consequently, we may cover \[ \bigcup_{k\geq 0}\left\lbrace \lambda \in \left(\frac{1}{2},\frac{2}{3}\right): P_n(\lambda,k,r)>4^n\lambda^{s(n+k)} \right\rbrace \] with sets $A_i^n(s,k)$ of diameter no greater than $16r\delta^{-1}\lambda^n$ and satisfying, \begin{eqnarray*} \sum_k\sum_{i}\diam(A_i^n(s,k))^{\rho} &\leq & (16r/\delta)^{\rho}\left(1-\lambda^{\rho-s}\right)^{-1} \lambda^{n(\rho-s)}. 
\end{eqnarray*} It follows that for each $m\in \mathbb{N}$, \[ \bigcup_{n\geq m}\bigcup_{k\geq 0}\left\lbrace \lambda \in \left(\frac{1}{2},\frac{2}{3}\right): P_n(\lambda,k,r)>4^n\lambda^{s(n+k)} \right\rbrace \] may be covered by a family of sets $\bigcup_{n\geq m} \bigcup_k\bigcup_i A_i^n(s,k)$ of diameter not exceeding $16r\delta^{-1}\lambda^{m}$ with \begin{eqnarray*} \sum_{n\geq m} \sum_k\sum_{i}\diam(A_i^n(s,k))^{\rho} &\leq & (16r/\delta)^{\rho}\left(1-\lambda^{\rho-s}\right)^{-1} \cdot \sum_{n\geq m} \lambda^{n(\rho-s)} \\ &\leq & (16r/\delta)^{\rho}\left(1-\lambda^{\rho-s}\right)^{-2} \lambda^{m(\rho-s)}. \end{eqnarray*} For every $m\in \mathbb{N}$ we have, \begin{align*} \bigcap_{m \in \mathbb{N}} & \bigcup_{n \geq m} \bigcup_{k\geq 0}\left\lbrace \lambda \in \left(\frac{1}{2},\frac{2}{3}\right): P_n(\lambda,k,r)>4^n\lambda^{s(n+k)} \right\rbrace \\ \subseteq & \bigcup_{n\geq m}\bigcup_{k\geq 0}\left\lbrace \lambda \in \left(\frac{1}{2},\frac{2}{3}\right): P_n(\lambda,k,r)>4^n\lambda^{s(n+k)} \right\rbrace. \end{align*} Thus, \begin{equation*} \dimH\left(\bigcap_{m\in \mathbb{N}}\bigcup_{n \geq m} \bigcup_{k\geq 0}\left\lbrace \lambda \in \left(\frac{1}{2},\frac{2}{3}\right): P_n(\lambda,k,r)>4^n\lambda^{s(n+k)} \right\rbrace\right) \leq \rho. \end{equation*} $A(s)$ is a countable union of such sets and so $\dimH A(s)\leq \rho$. Since $\rho>s$ was arbitrary the Lemma holds. \end{proof} Let $\mathcal{D} = \{0,1\}$. For a natural number $n$ we denote by $\mathcal{D}^n$ the set of words $(\omega_1, \omega_2, \ldots, \omega_n)$ of length $n$ such that each $\omega_k$ is in $\mathcal{D}$. Similarly we denote the set of all such infinite sequences by $\Sigma$. If $\omega$ is an element of $\Sigma$ or $\mathcal{D}^l$ with $l \geq n$, then we let $\omega | n$ denote the element in $\mathcal{D}^n$ such that $\omega$ and $\omega | n$ are equal on the first $n$ places. Given an $\omega \in \mathcal{D}^{n}$ we define a function $g_\omega (x) = \sum_{i=1}^{n} \omega_i \lambda^i + \lambda^{n} x$. \begin{lemma}\label{lots of cylinders} Given a similarity map $f \colon \mathbb{R} \rightarrow \mathbb{R}$ defined by $f \colon x \mapsto rx+t$ for some fixed $r, t \in \mathbb{R}$, together with any closed interval $A$ with non-empty interior and $\diam(A) < r \diam\left(I_{\lambda}\right)$ there exists an integer $n(A,f) \in \mathbb{Z}$ and a finite string $\omega=\omega(A,f)\in \mathcal{D}^{\theta}$, with length $\theta$ depending only on the magnitude of the derivative $|f'|$ and the diameter $\diam(A)$ of $A$, such that the interval $f\left( g_{\omega}(I_{\lambda})+ n(A,f)\cdot \diam(I_{\lambda})\right)$ is contained within $A$ and has diameter at least $\lambda/4\cdot \diam(A)$. \end{lemma} \begin{proof} Since $\diam(A) < r \diam\left(I_{\lambda}\right)$, $\diam(f^{-1}(A)) < \diam\left(I_{\lambda}\right)$. Hence, the closed interval $f^{-1}(A)$ intersects at most two of the intervals \[ \left\lbrace I_{\lambda}+n \diam(I_{\lambda})\right\rbrace_{n \in \mathbb{Z}}. \] As such, we may choose $n(A,f) \in \mathbb{Z}$ so that, \[ \diam \left(f^{-1}(A) \cap \left(I_{\lambda}+n(A,f) \diam(I_{\lambda})\right)\right) \geq \frac{1}{2} \cdot \diam\left(f^{-1}(A)\right). \] Equivalently, $\diam \left(Z\right) \geq \frac{1}{2} \cdot \diam\left(f^{-1}(A)\right)$ where \[ Z=\left(f^{-1}(A)-n(A,f) \diam(I_{\lambda}) \right) \cap I_{\lambda}. \] Let $x$ denote the midpoint of $Z$. 
Since $x\in I_{\lambda}$ we may write $x = \sum_{i=1}^{\infty}\omega_i\lambda^i = \bigcap_{n\in\mathbb{N}}g_{\omega|n}(I_{\lambda})$. We choose $\theta$ so that \begin{equation*} \theta:= \bigg\lfloor \frac{\log\left((1-\lambda)\diam(A)/4|f'|\right)}{\log \lambda}\bigg\rfloor. \end{equation*} In particular, $\theta$ depends only upon the magnitude of the derivative $|f'|$ and the diameter $\diam(A)$ of $A$. Since $f$ is a similarity and $I_{\lambda}$ is of diameter $\lambda/(1-\lambda)$, it follows that \begin{eqnarray*} \diam\left(g_{\omega|\theta}(I_{\lambda})\right) &=& \frac{\lambda^{\theta+1}}{1-\lambda}\\ &<& \frac{\diam(A)}{4r}\\ &=& \frac{1}{2}\cdot \frac{\diam(f^{-1}(A))}{2}\\ &<&\frac{1}{2}\cdot \diam(Z). \end{eqnarray*} Since $x$ is the midpoint of $Z$ and $g_{\omega|\theta}(I_{\lambda})$ contains $x$ we have \[ g_{\omega|\theta}(I_{\lambda}) \subseteq Z \subseteq f^{-1}(A)-n(A,f) \cdot \diam(I_{\lambda}). \] Hence, \[ f\left(g_{\omega|\theta}(I_{\lambda})+n(A,f) \cdot \diam(I_{\lambda})\right) \subseteq A. \] Moreover, \begin{eqnarray*} \diam\left(g_{\omega|\theta}(I_{\lambda})\right) &=& \frac{\lambda^{\theta+1}}{1-\lambda}\\ &\geq& \lambda \cdot \frac{\diam(A)}{4r}. \end{eqnarray*} Thus, \begin{equation*} \diam\left(f\left(g_{\omega|\theta}(I_{\lambda})+n(A,f) \cdot \diam(I_{\lambda})\right) \right)\geq \frac{\lambda}{4}\cdot \diam(A). \qedhere \end{equation*} \end{proof} Given a positive number $r>0$ and a finite set $\Omega$ and two functions $\varphi_1, \varphi_2: \Omega \rightarrow \mathbb{R}$ we shall let \begin{equation*} N_r(\varphi_1,\varphi_2):=\#\left\lbrace (x,y)\in \Omega^2 : |\varphi_1(x)-\varphi_2(y)|\leq r\right\rbrace. \end{equation*} \begin{lemma}\label{translation lemma} Given $r > 0$, any finite set $\Omega$, any function $\varphi \colon \Omega \rightarrow \mathbb{R}$ and any $t \in \mathbb{R}$, we have $N_r (\varphi,\varphi+t) \leq 4 \cdot N_r (\varphi,\varphi)$. \end{lemma} \begin{proof}[Proof of Lemma \ref{translation lemma}] Since the inequality $|\varphi_1(x)-\varphi_2(y)|\leq r$ holds if and only if $|\varphi_1(x)/r-\varphi_2(y)/r| \leq 1$, it is sufficient to prove the lemma in the case $r=1$. For each $n \in \mathbb{Z}$ we let $a_{n}:= \# \left(\Omega \cap \varphi^{-1}[n,n+1)\right)$. Given any pair $(a,b) \in \Omega^2$ with $\varphi(a),\varphi(b) \in [n, n+1)$ for some $n \in \mathbb{Z}$ we have $|\varphi(a)-\varphi(b)| \leq 1$. For each $n\in \mathbb{Z}$ there are at least $a_n^2$ such pairs, so $N_1 (\varphi,\varphi) \geq \sum_{n \in \mathbb{Z}} a_n^2$. Now suppose $a, b \in \Omega$, $\varphi(a) \in [n, n+1)$, $|\varphi(a)-(\varphi(b)+t)| \leq 1$. Since $n \leq \varphi(a) <n+1$, so $n-1 \leq \varphi(b)+t < n+2$, and so \[ n-(\lceil t\rceil+1) \leq n-1-t \leq \varphi(b) < n+2-t<(n-(\lfloor t \rfloor-1)) + 1. \] Hence, $\varphi(b)$ is in $[n-p, n-p+1)$ for some integer $p$ with $\lfloor t \rfloor-1 \leq p \leq \lceil t\rceil+1$. Thus, for each $a \in \Omega$ with $\varphi(a) \in [n, n+1)$ we have \begin{align*} \#\{\, b \in \Omega &: |\varphi(a)-(\varphi(b)+t)| \leq 1 \,\} \\ &\leq \sum_{\lfloor t \rfloor-1 \leq p \leq \lceil t\rceil+1} \# \left(\Omega \cap \varphi^{-1}[n-p,n-p+1)\right) \\ &= \sum_{\lfloor t \rfloor-1 \leq p \leq \lceil t\rceil+1} a_{n-p}. \end{align*} Thus, for each $n \in \mathbb{N}$, \begin{multline*} \#\left\lbrace (a,b) \in \Omega^2: \varphi(a) \in [n, n+1), |\varphi(a)-(\varphi(b)+t)| \leq 1 \right\rbrace \\ \leq \sum_{\lfloor t \rfloor-1 \leq p \leq \lceil t\rceil+1} a_n \cdot a_{n-p}. 
\end{multline*} Hence, $N_1 (\varphi, \varphi+t)\leq \sum_{n \in \mathbb{Z}} \sum_{\lfloor t \rfloor-1 \leq p \leq \lceil t\rceil+1}a_n a_{n-p}$. Thus, by Cauchy--Schwarz we have, \begin{align*} N_1 (\varphi, \varphi+t)& \leq \sum_{\lfloor t \rfloor-1 \leq p \leq \lceil t\rceil+1}\sum_{n \in \mathbb{Z}} a_n a_{n-p}\\ & \leq \sum_{\lfloor t \rfloor-1 \leq p \leq \lceil t\rceil+1}\sqrt{\sum_{n \in \mathbb{Z}} a_n^2 \cdot \sum_{n \in \mathbb{Z}} a_{n-p}^2}\\ & \leq \sum_{\lfloor t \rfloor-1 \leq p \leq \lceil t\rceil+1}\sum_{n \in \mathbb{N}} a_n^2 \\ & \leq 4 N_1 (\varphi,\varphi). \qedhere \end{align*} \end{proof} \begin{remark} It is natural to ask whether or not $4$ is the optimal constant possible in Lemma \ref{translation lemma}. Matthew Aldridge has provided an inductive demonstration that $N_r(\varphi,\varphi+t) < 2 \cdot N_r (\varphi,\varphi)$, whilst Oliver Roche-Newton has produced a family of counterexamples showing that such a bound is optimal. \end{remark} \begin{lemma}\label{Constant depending on r} Suppose $\lambda \notin A(s)$ and $r\in \mathbb{N}$. Then there exists a constant $C(r)>0$, such that for all $n \in \mathbb{N}$ and all $k\in \mathbb{N}\cup\{0\}$, \begin{equation*} \tilde{P}_n(\lambda,k,r)\leq C(r) \cdot 2^n+4^{n}n\lambda^{s(n+k)}. \end{equation*} \end{lemma} \begin{proof} Suppose $\lambda\notin A(s)$ and $r\in \mathbb{N}$. Then there exists some $N_0\in \mathbb{N}$ such that for all $n\geq N_0$ and all $k\in \mathbb{N}\cup \{0\}$, $P_n(\lambda,k,r) \leq 4^n\lambda^{s(n+k)}$. Thus, if we take $C:=1+\sum_{l=1}^{N_0}2^{-l}P_l(\lambda,0,r)$ then by Lemma \ref{proximity inequality} then we have, \begin{eqnarray*} \tilde{P}_n(\lambda,k,r)&\leq& 2^n+\sum_{l=1}^{n}2^{n-l}P_l(\lambda,k,r)\\ &\leq& 2^n\left(1+\sum_{l=1}^{N_0}2^{-l}P_l(\lambda,k,r)\right)+ \lambda^{sk} \cdot \sum_{l=N_0+1}^n 2^{n-l}\cdot (4\lambda^s)^l\\ &\leq& 2^n\left(1+\sum_{l=1}^{N_0}2^{-l}P_l(\lambda,0,r)\right)+ \lambda^{sk} \cdot \sum_{l=N_0+1}^n (4\lambda^s)^n\\ &\leq& C\cdot 2^n+ 4^{n}n\lambda^{s(n+k)}, \end{eqnarray*} where we used the fact that $\lambda \geq \frac{1}{2}$, so $4 \lambda^s\geq 2$. \end{proof} \begin{prop} Suppose $\lambda \notin A(s)$ for some $s\leq \frac{1}{\alpha}$. Then $W_{\lambda}(\alpha) \in \mathcal{G}^s(I_{\lambda})$. \end{prop} \begin{proof} To prove the proposition we begin by fixing $\lambda\notin A(s)$, $\alpha>1$ and a sequence of similarity maps $\{f_j\}_{j\in \mathbb{N}}$. We shall show that \begin{equation*} \dimH\left( \bigcap_{j\in\mathbb{N}} f_j\left(W_{\lambda}(\alpha)+\diam(I_{\lambda})\cdot \mathbb{Z} \right)\right)\geq s. \end{equation*} To do so we shall construct a subset $\Lambda \subset \bigcap_{j\in\mathbb{N}} f_j\left(W_{\lambda}(\alpha)+\diam(I_{\lambda})\cdot \mathbb{Z} \right)$ supporting a measure $\nu$ with correlation dimension $s$. Without loss of generality we may assume that $f_1:x \mapsto 2x$. We begin by choosing a sequence of natural numbers $(j(q))_{q\in \mathbb{N} \cup\{0\}}$ so that $j(0)=1$ and for each $k\in \mathbb{N}$, \begin{equation}\label{Infinite k} \#\{\, q : j(q) = k \,\} = \infty. \end{equation} Let $\Sigma_* = \{\emptyset\} \cup_n \mathcal{D}^n$. 
We shall recursively construct sequences of integers $(\gamma_q)_{q \in \mathbb{N}}$, $(\hat{\gamma}_q)_{q \in \mathbb{N}}$, $(\theta_q)_{q \in \mathbb{N}}$ and $(m_q)_{q\in \mathbb{N}}$ along with closed intervals $(\Delta_{\omega})_{\omega\in \Sigma_*}$ and $(\hat{\Delta}_{\omega})_{\omega\in \Sigma_*}$ and positive reals $(\delta_n)_{n\in \mathbb{N}\cup\{0\}}$, $(\hat{\delta}_n)_{n\in \mathbb{N}\cup\{0\}}$ with the property that for any $\omega \in \Sigma_*$ and $\eta \in \mathcal{D}$, \begin{equation*} \Delta_{\omega}\supseteq \hat{\Delta}_{\omega}\supseteq \Delta_{\omega\eta}. \end{equation*} Moreover, given any word $\omega \in \mathcal{D}^n$ for some $n \in \mathbb{N}\cup \{0\}$ we have $\diam(\Delta_{\omega})=\delta_n$ and $\diam(\hat{\Delta}_{\omega})=\hat{\delta}_n$. We also have $\hat{\delta}_n\leq \delta_n \leq \lambda^{n+1}/(1-\lambda)$ for all $n\in \mathbb{N}\cup\{0\}$. In addition, $\lambda^{\gamma_q} < \lVert f_{j(q)}' \rVert_{\infty}$ for $q \geq 1$. First let $\gamma_0=\hat{\gamma}_0=\theta_0=m_0=0$, $\Delta_{\emptyset}=\hat{\Delta}_{\emptyset}=I_{\lambda}$ and $\delta_0=\hat{\delta}_0=\lambda/(1-\lambda)$. Suppose we have chosen $\gamma_l$, $\theta_l$ and $m_l$ for $l\leq q$ and for all $n \leq \Gamma (q) := \sum_{l\leq q}\gamma_l$ we have defined $\delta_n$, $\hat{\delta}_n$ and for $\omega\in \mathcal{D}^n$ we have $\Delta_{\omega}$ and $\hat{\Delta}_{\omega}$, all satisfying the required properties. For the inductive step we first apply Lemma \ref{lots of cylinders} to obtain $\left(\omega(\kappa)\right)_{\kappa \in \mathcal{D}^{\Gamma (q)}}$ and $\left(n(\kappa)\right)_{\kappa \in \mathcal{D}^{\Gamma (q)}}$ with $n(\kappa)=n (\hat{\Delta}_{\kappa},f_{j(q)}) \in \mathbb{Z}$ and $\omega(\kappa)=\omega (\hat{\Delta}_{\kappa},f_{j(q)}) \in \Sigma_*$ for each $\kappa \in \mathcal{D}^{\Gamma (q)}$ so that, \begin{itemize} \item[(1)] $f_{j(q)}\left( g_{\omega(\kappa)}(I_{\lambda})+n(\kappa)\diam(I_{\lambda})\right) \subseteq \hat{\Delta}_{\kappa}$, \item[(2)] $\diam\left(f_{j(q)}\left(g_{\omega(\kappa)}(I_{\lambda})+n(\kappa)\diam(I_{\lambda})\right)\right) \geq \frac{\lambda}{4}\cdot \diam (\hat{\Delta}_{\kappa})$. \end{itemize} By supposition, $\diam (\hat{\Delta}_{\kappa}) = \delta_{\Gamma (q)}$ for all $\kappa \in \mathcal{D}^{\Gamma (q)}$. Consequently, by Lemma \ref{lots of cylinders} the length of $|\omega(\kappa)|$ is uniform over all $\kappa \in \mathcal{D}^{\Gamma (q)}$. We denote this uniform length by $\theta_{q+1}$. Choose $\gamma_{q+1},\hat{\gamma}_{q+1} \in \mathbb{N}$ so that, \begin{eqnarray*} \gamma_{q+1}&>& q\gamma_q\theta_{q+1} \cdot (-\log \delta_{\Gamma (q)}),\\ \gamma_{q+1}&>& \frac{\log |f'_{j(q+1)}|}{\log \lambda} ,\\ \hat{\gamma}_{q+1}&=&\gamma_{q+1}+\theta_{q+1}. \end{eqnarray*} and let \begin{equation*} m_{q+1}:=\bigg\lfloor \left(\frac{\log 2^{-\alpha}}{\log \lambda}-1\right)\hat{\gamma}_{q+1} -\frac{\log(1-\lambda)}{\log \lambda}\bigg\rfloor+1, \end{equation*} so that \begin{equation}\label{m q inequalities} \lambda^{\hat{\gamma}_{q+1}+m_{q+1}}/(1-\lambda)< 2^{-\alpha \hat{\gamma}_{q+1}} \leq \lambda^{\hat{\gamma}_{q+1}+m_{q+1}-1}/(1-\lambda). \end{equation} Given $\kappa \in \mathcal{D}^{\Gamma (q)}$ and $\tau\in \mathcal{D}^{l}$ for some $l\leq \gamma_{q+1}$ we define \begin{equation*} \Delta_{\kappa \tau}:=f_{j(q)}\left(g_{\omega(\kappa)}\circ g_{\tau}(I_{\lambda})+n(\kappa)\cdot \diam(I_{\lambda})\right). 
\end{equation*} Thus, for all $\omega\in \mathcal{D}^{\Gamma (q) + l}$ for some $l\leq \gamma_{q+1}$ we have, \begin{eqnarray*} \diam\left(\Delta_{\omega}\right)= \delta_{\Gamma (q) + l} := |f_{j(q)}'|\cdot \lambda^{\theta_{q+1}+l+1}/(1-\lambda). \end{eqnarray*} Moreover, for $l<\gamma_{q+1}$ we let $\hat{\Delta}_{\kappa \tau}:=\Delta_{\kappa \tau}$ and for $l=\gamma_{q+1}$, \begin{equation*} \hat{\Delta}_{\kappa \tau}:=f_{j(q)}\left( g_{\omega(\kappa)}\circ g_{\tau}\circ (g_0)^{m_{q+1}}(I_{\lambda})+n(\kappa) \cdot \diam(I_{\lambda})\right). \end{equation*} Hence, for all $\omega\in \mathcal{D}^{\Gamma (q) + l}$ for some $l<\gamma_{q+1}$ we have, \begin{eqnarray*} \diam (\hat{\Delta}_{\omega}) = \hat{\delta}_{\Gamma (q) + l} := \delta_{\Gamma (q) + l}, \end{eqnarray*} and for $\omega\in \mathcal{D}^{\Gamma (q+1)}$ \begin{eqnarray*} \diam (\hat{\Delta}_{\omega}) = \hat{\delta}_{\Gamma (q+1)} := |f_{j(q)}'|\cdot \lambda^{\theta_{q+1}+\gamma_{q+1}+m_{q+1}+1}/(1-\lambda). \end{eqnarray*} It follows that for all $\eta \in \mathcal{D}^{\Gamma (q+1)}$ \begin{multline} \label{Good approximant prop} \hat{\Delta}_{\eta}\subseteq f_{j(q)} \Biggl( \bigcup_{\omega \in \mathcal{D}^{\hat{\gamma}_{q+1}}} \biggl\{\, x \in I_{\lambda}: \\ : \biggl| x-\sum_{i=1}^{\hat{\gamma}_{q+1}} \omega_i \lambda^{i} \biggr| < 2^{-\hat{\gamma}_{q+1}\alpha} \,\biggr\}+\mathbb{Z} \diam(I_{\lambda})\Biggr). \end{multline} In this way we have defined two families of closed intervals $(\Delta_{\omega})_{\omega\in\Sigma_*}$ and $(\hat{\Delta}_{\omega})_{\omega\in\Sigma_*}$ with the property that for any $\omega \in \Sigma_*$ and $\eta \in \mathcal{D}$, \begin{equation*} \Delta_{\omega}\supseteq \hat{\Delta}_{\omega}\supseteq \Delta_{\omega\eta}, \end{equation*} and given any $\omega \in \mathcal{D}^n$, $\diam(\Delta_{\omega})\leq \lambda^{n+1}/(1-\lambda)$. Thus, we may define a map $\pi:\Sigma \rightarrow I_{\lambda}$ by \begin{equation*} \pi(\omega):=\bigcap_{n\in \mathbb{N}}\Delta_{\omega|n}=\bigcap_{n\in \mathbb{N}}\hat{\Delta}_{\omega|n}. \end{equation*} By construction we also have $\delta_n\geq (\lambda^2/2)\, \hat{\delta}_{n-1}$ for all $n\in \mathbb{N}$. We let $\Lambda:=\pi\left(\Sigma\right)$. By Equations (\ref{Good approximant prop}) and (\ref{Infinite k}) we have \begin{equation*} \Lambda \subset \bigcap_{j\in\mathbb{N}} f_j\left(W_{\lambda}(\alpha)+\mathbb{Z} \cdot \diam(I_{\lambda})\right). \end{equation*} Thus, to complete the proof it suffices to show that $\dimH \Lambda \geq s$. In order to do this we shall define a measure supported on $\Lambda$ with the property \begin{equation*} \Cdim(\nu):=\liminf_{r\rightarrow 0}\frac{1}{\log r}\log \int \nu\left(B_r(x)\right) d\nu(x) \geq s. \end{equation*} That is, the correlation dimension $\Cdim(\nu)$ of $\nu$ is at least $s$. This implies that the Hausdorff dimension of $\nu$, and hence that of $\Lambda$, is at least $s$ (see \cite[Section 17]{Pesin}). We do this by taking $\mu$ to be the $\left(\frac{1}{2},\frac{1}{2}\right)$-Bernoulli measure on $\Sigma$ and $\nu$ its projection by $\pi$, $\nu:=\mu\circ \pi^{-1}$. In order to estimate $\Cdim(\nu)$ we require good upper bounds on the number of intervals $\hat{\Delta}_{\omega}$ of a given level which are close to one another. \begin{lemma}\label{first estimate for t close} Suppose $\rho>1$ and $\lambda \notin A(s)$. 
Then there exists a constant $C$ depending only on $\rho$ and $\lambda$ such that for any pair $\eta,\zeta \in \mathcal{D}^{\Gamma (q)}$ for some $q\in \mathbb{N}$, $n=l + \Gamma (q)$ for some $l \leq \gamma_{q+1}$ and $\hat{\delta}_n \leq t \leq \rho \cdot \delta_n$ we have, \begin{equation*} \#\left\lbrace (\kappa,\tau)\in \mathcal{D}^l : d (\hat{\Delta}_{\eta\kappa}, \hat{\Delta}_{\zeta\tau} ) < t\right\rbrace \leq C \cdot 2^l+4^{l}l \cdot(\rho \lambda)^{-s}\left(\frac{t}{\hat{\delta}_{n-l}}\right)^s. \end{equation*} \end{lemma} \begin{proof} We begin by noting that for each pair $(\kappa,\tau)\in \mathcal{D}^l$ we have \begin{eqnarray*} f_{j(q)}\circ g_{\omega(\eta)}\circ g_{\kappa}(0)+f_{j(q)}(n(\eta)) &\in & \hat{\Delta}_{\eta\kappa}\\ f_{j(q)}\circ g_{\omega(\zeta)}\circ g_{\tau}(0)+f_{j(q)}(n(\zeta)) &\in & \hat{\Delta}_{\zeta\tau}. \end{eqnarray*} Since every $\hat{\Delta}_{\eta\kappa}, \hat{\Delta}_{\zeta\tau}$ has diameter $\hat{\delta}_n$, $t\geq \hat{\delta}_n$ we have, \begin{multline*} \# \bigl\lbrace\, (\kappa,\tau)\in \mathcal{D}^l : d (\hat{\Delta}_{\eta\kappa}, \hat{\Delta}_{\zeta\tau} ) < t \,\bigr\rbrace \\ \leq \#\Bigl\{\, (\kappa,\tau)\in \mathcal{D}^l:\big| f_{j(q)}\circ g_{\omega(\eta)}\circ g_{\kappa}(0)- f_{j(q)}\circ g_{\omega(\zeta)}\circ g_{\tau}(0) + \\ + (f_{j(q)}(n(\eta))-f_{j(q)}(n(\zeta)))\big|< 2t \,\Bigr\} \end{multline*} Since $f_{j(q)}$ is affine we have, \begin{multline*} f_{j(q)}\circ g_{\omega(\zeta)}\circ g_{\tau}(0) \\ = \left( f_{j(q)}\circ g_{\omega(\eta)}\circ g_{\tau}(0)+\left( f_{j(q)}\circ g_{\omega(\zeta)}(0)- f_{j(q)}\circ g_{\omega(\eta)}(0)\right)\right). \end{multline*} By applying Lemma \ref{translation lemma} we obtain \begin{align*} &\#\Bigl\{\, (\kappa,\tau)\in \mathcal{D}^l:\big| f_{j(q)}\circ g_{\omega(\eta)}\circ g_{\kappa}(0)- f_{j(q)}\circ g_{\omega(\zeta)}\circ g_{\tau}(0) + \\ & \hspace{6cm} +(f_{j(q)}(n(\eta))-f_{j(q)}(n(\zeta)))\big|< 2t \,\Bigr\} \\ &\leq 4 \#\left\lbrace\, (\kappa,\tau)\in \mathcal{D}^l:\big| f_{j(q)}\circ g_{\omega(\eta)}\circ g_{\kappa}(0)- f_{j(q)}\circ g_{\omega(\eta)}\circ g_{\tau}(0)\big|< 2t \,\right\rbrace. \end{align*} We note that \begin{eqnarray*} \lVert (f_{j(q)}\circ g_{\omega(\eta)})' \rVert_{\infty}&\geq& \frac{1-\lambda}{\lambda}\cdot \diam\left(f_{j(q)}\circ g_{\omega(\kappa)}(I_{\lambda}) \right)\\ &\geq& \frac{1-\lambda}{\lambda}\cdot \frac{\lambda}{2}\cdot \diam (\hat{\Delta}_{\kappa} )\\ &=& (1-\lambda) \cdot \hat{\delta}_{n-l}/2. \end{eqnarray*} Piecing the above together we have \begin{align*} \# \bigl\{\, (\kappa,\tau) &\in \mathcal{D}^l:d (\hat{\Delta}_{\eta\kappa}, \hat{\Delta}_{\zeta\tau}) < t \,\bigr\} \\ &\leq 4\cdot \bigl\{\, (\kappa,\tau)\in \mathcal{D}^l : \big| f_{j(q)}\circ g_{\omega(\eta)}\circ g_{\kappa}(0)- \\ & \hspace{5.5cm} - f_{j(q)}\circ g_{\omega(\eta)}\circ g_{\tau}(0)\big|< 2t\,\bigr\} \\ &\leq 4\cdot \#\bigl\{ (\kappa,\tau)\in \mathcal{D}^l:\big|g_{\kappa}(0)- g_{\tau}(0)\big|< 2t \lVert (f_{j(q)}\circ g_{\omega(\eta)})'\rVert_\infty^{-1} \bigr\rbrace \\&\leq 4\cdot \#\biggl\lbrace (\kappa,\tau)\in \mathcal{D}^l:\big|g_{\kappa}(0)-g_{\tau}(0)\big|< \frac{4t}{(1-\lambda)\hat{\delta}_{n-l}}\biggr\rbrace. \end{align*} Since $\delta_n \leq \hat{\delta}_{n-l}\cdot \lambda^l$ and $t \leq \rho \delta_n$ we have, \begin{eqnarray*} \frac{4t}{(1-\lambda)\hat{\delta}_{n-l}} &\leq& \frac{4 \rho}{1-\lambda} \cdot \lambda^l. 
\end{eqnarray*} Now choose $k\in \mathbb{N}\cup\{0\}$ so that \begin{eqnarray*} \frac{4 \rho}{1-\lambda} \cdot \lambda^{l+k+1}<\frac{4t}{(1-\lambda)\hat{\delta}_{n-l}} \leq \frac{4 \rho}{1-\lambda} \cdot \lambda^{l+k}. \end{eqnarray*} By applying Lemma \ref{Constant depending on r} we have \begin{align*} \#\{\, (\kappa,\tau) &\in \mathcal{D}^l : d (\hat{\Delta}_{\eta\kappa}, \hat{\Delta}_{\zeta\tau}) < t \,\} \\ &\leq 4\cdot \#\{\, (\kappa,\tau)\in \mathcal{D}^l:\big|g_{\kappa}(0)-g_{\tau}(0)\big|<4 \rho (1-\lambda)^{-1} \lambda^{l+k} \,\} \\ &= 4\cdot \tilde{P}_l \Bigl(\lambda,k,\frac{4 \rho}{1-\lambda} \Bigr)\\ &\leq C\Bigl( \frac{4 \rho}{1-\lambda}\Bigr) \cdot 2^l+4^{l}l\lambda^{s(l+k)}\\ &\leq C\Bigl( \frac{4 \rho}{1-\lambda}\Bigr) \cdot 2^l+4^{l}l \cdot(\rho \lambda)^{-s} \Bigl( \frac{t}{\hat{\delta}_{n-l}} \Bigr)^s. \qedhere \end{align*} \end{proof} \begin{lemma}\label{second estimate for r close} Suppose $\rho>1$ and $\lambda \notin A(s)$. Then there exists a constant $C$ depending only on $\rho$ and $\lambda$ such that given $q\in \mathbb{N}$ and $n=l + \Gamma (q)$ for some $l \leq \gamma_{q+1}$ and $\hat{\delta}_n \leq t \leq \rho \cdot \delta_n$ we have, \begin{multline*} \# \{\, (\kappa,\tau)\in \mathcal{D}^n : d (\hat{\Delta}_{\kappa}, \hat{\Delta}_{\tau}) < t \,\} \\ \leq 4^{\Gamma (q-1)} \cdot \biggl( C \cdot 2^{\gamma_q}+4^{\gamma_q}\gamma_q \cdot \lambda^{-s} \biggl( \frac{\hat{\delta}_{n-l}}{\hat{\delta}_{n-l-\gamma_q}} \biggr)^s \biggr)\\ \cdot \left(C \cdot 2^l+4^{l}l \cdot(\rho \lambda)^{-s}\left(\frac{t}{\hat{\delta}_{n-l}}\right)^s\right). \end{multline*} \end{lemma} \begin{proof} First note that if $\eta \in \mathcal{D}^{n-l}$ and $\alpha \in \mathcal{D}^l$, $\hat{\Delta}_{\eta\alpha}\subseteq \hat{\Delta}_{\eta}$. Hence, \begin{align*} & \#\left\lbrace (\kappa,\tau)\in \mathcal{D}^n:d (\hat{\Delta}_{\kappa}, \hat{\Delta}_{\tau}) < t\right\rbrace \\ &= \sum_{(\kappa,\tau)\in \mathcal{D}^n}\chi_{\left\lbrace (\kappa',\tau')\in \mathcal{D}^n: d\left(\hat{\Delta}_{\kappa'}, \hat{\Delta}_{\tau'}\right)< t\right\rbrace} \\ &= \sum_{(\eta,\zeta)\in \mathcal{D}^{n-l}}\chi_{\{ (\eta',\zeta')\in \mathcal{D}^{n-l}: d (\hat{\Delta}_{\eta'}, \hat{\Delta}_{\zeta'} )< t \}}\cdot \sum_{(\alpha,\beta)\in \mathcal{D}^l} \chi_{\{ (\alpha',\beta')\in \mathcal{D}^{l}: d (\hat{\Delta}_{\eta\alpha'}, \hat{\Delta}_{\zeta\beta'} )< t \}}\\ &= \sum_{(\eta,\zeta)\in \mathcal{D}^{n-l}}\chi_{\{ (\eta',\zeta')\in \mathcal{D}^{n-l}: d (\hat{\Delta}_{\eta'}, \hat{\Delta}_{\zeta'} ) < t \}} \cdot \\ & \hspace{4cm} \cdot \#\bigl\{ (\alpha,\beta)\in \mathcal{D}^l: d (\hat{\Delta}_{\eta\alpha}, \hat{\Delta}_{\zeta\beta}) < t \}. 
\end{align*} By applying Lemma \ref{first estimate for t close} along with the fact that $t\leq \rho \delta_{n}\leq \rho \hat{\delta}_{n-l}$, \begin{align*} &\# \left\lbrace (\kappa,\tau)\in \mathcal{D}^n:d (\hat{\Delta}_{\kappa}, \hat{\Delta}_{\tau} )< t\right\rbrace \\ &\leq \#\left\lbrace (\eta,\zeta) \in \mathcal{D}^{n-l}:d (\hat{\Delta}_{\eta},\hat{\Delta}_{\zeta}) < t \right\rbrace \cdot \left(C 2^l+4^{l}l \cdot(\rho \lambda)^{-s}\left(\frac{t}{\hat{\delta}_{n-l}}\right)^s\right)\\ &\leq \#\left\lbrace (\eta,\zeta) \in \mathcal{D}^{n-l}:d (\hat{\Delta}_{\eta},\hat{\Delta}_{\zeta}) < \rho \hat{\delta}_{n-l} \right\rbrace \cdot \\ & \hspace{4cm} \cdot \left(C 2^l+4^{l}l \cdot(\rho \lambda)^{-s}\left(\frac{t}{\hat{\delta}_{n-l}}\right)^s\right). \end{align*} Now clearly $\rho \hat{\delta}_{n-l} \in [\hat{\delta}_{n-l},\rho \hat{\delta}_{n-l}]$ and so we may apply the above reasoning to the first term to obtain, \begin{align*} &\# \left\lbrace (\eta,\zeta) \in \mathcal{D}^{n-l}:d (\hat{\Delta}_{\eta},\hat{\Delta}_{\zeta}) < \rho \hat{\delta}_{n-l} \right\rbrace \\ &\leq \#\left\lbrace (\alpha,\beta) \in \mathcal{D}^{n-l-\gamma_q}:d (\hat{\Delta}_{\alpha},\hat{\Delta}_{\beta}) < \rho \hat{\delta}_{n-l-\gamma_q} \right\rbrace \cdot \\ & \hspace{4cm} \cdot \biggl( C \cdot 2^{\gamma_q}+4^{\gamma_q}\gamma_q \cdot \lambda^{-s} \biggl( \frac{\hat{\delta}_{n-l}}{\hat{\delta}_{n-l-\gamma_q}} \biggr)^s \biggr)\\ &\leq \#\mathcal{D}^{2\sum_{p<q}\gamma_p} \cdot \biggl( C \cdot 2^{\gamma_q}+4^{\gamma_q}\gamma_q \cdot \lambda^{-s} \biggl( \frac{\hat{\delta}_{n-l}}{\hat{\delta}_{n-l-\gamma_q}} \biggr)^s \biggr). \end{align*} Piecing these two inequalities together completes the proof of the lemma. \end{proof} Recall that to complete the proof we must obtain the following inequality, \begin{equation*} \Cdim(\nu)=\liminf_{r\rightarrow 0}\frac{1}{\log r}\log \int \nu\left(B_r(x)\right) d\nu(x) \geq s. \end{equation*} Choose $r\in \left(0,\lambda/(1-\lambda)\right)$ and take $n$ to be the least integer satisfying $\hat{\delta}_n<r$. It follows that $r \leq \hat{\delta}_{n-1}< 2/\lambda^2 \cdot \delta_n$. Given $\kappa\in \mathcal{D}^n$ and a sequence $\omega$ such that $\kappa = \omega | n$, we have \begin{equation*} \#\lbrace\, \tau \in \mathcal{D}^n : \hat{\Delta}_{\tau}\cap B_r(\pi(\omega))\neq \emptyset \,\rbrace \leq \#\lbrace\, \tau \in \mathcal{D}^n : d (\hat{\Delta}_{\tau},\hat{\Delta}_{\kappa}) < r \,\rbrace. \end{equation*} Hence, \begin{eqnarray*} \nu\left(B(\pi(\omega),r)\right) \leq \#\lbrace\, \tau \in \mathcal{D}^n : d (\hat{\Delta}_{\tau},\hat{\Delta}_{\kappa}) < r \,\rbrace \cdot 2^{-n}. \end{eqnarray*} Since $\nu=\mu\circ \pi^{-1}$ we have, \begin{align*} \int \nu\left(B_r(x)\right) & d\nu(x) =\int \nu\left(B_r(\pi(\omega))\right) d\mu(\omega)\\ &\leq \sum_{\kappa\in \mathcal{D}^n} \mu([\kappa]) \bigl(\# \lbrace\, \tau \in \mathcal{D}^n : d (\hat{\Delta}_{\tau},\hat{\Delta}_{\kappa}) < r \,\rbrace \cdot 2^{-n}\bigr)\\ &= 4^{-n} \# \lbrace\, (\kappa,\tau) \in (\mathcal{D}^n)^2 : d (\hat{\Delta}_{\tau},\hat{\Delta}_{\kappa}) < r \,\rbrace. 
\end{align*} Now note that $\hat{\delta}_n< r\leq 2/\lambda^2 \delta_n \leq 8 \delta_n$ so by Lemma \ref{second estimate for r close} we have, \begin{align} \label{integral key estimate with r involved} \int & \nu\left(B_r(x)\right) d\nu(x) \\ &\leq 4^{-n} \cdot 4^{\Gamma (q-1)} \cdot \biggl( C \cdot 2^{\gamma_q}+4^{\gamma_q}\gamma_q \cdot \lambda^{-s} \biggl( \frac{\hat{\delta}_{n-l}}{\hat{\delta}_{n-l-\gamma_q}} \biggr)^s \biggr) \cdot \nonumber \\ & \hspace{4cm} \cdot \left(C \cdot 2^l+4^{l}l \lambda^{-s} \left( \frac{r}{\hat{\delta}_{n-l}} \right)^s \right) \nonumber \\ &\leq \biggl(C \cdot 2^{-\gamma_q}+\gamma_q \cdot \lambda^{-s} \biggl(\frac{\hat{\delta}_{n-l}}{\hat{\delta}_{n-l-\gamma_q}} \biggr)^s \biggr) \cdot \nonumber \\ & \hspace{4cm} \cdot \left(C \cdot 2^{-l}+l \lambda^{-s} \left( \frac{r}{\hat{\delta}_{n-l}} \right)^s \right). \nonumber \end{align} where $q$ is chosen so that $n=l + \Gamma (q)$ and $0\leq l<\gamma_{q+1}$. Now since $m_q \geq \left(\frac{\log 2^{-\alpha}}{\log \lambda}-1\right)\gamma_q$, \begin{eqnarray*} \frac{\hat{\delta}_{n-l}}{\hat{\delta}_{n-l-\gamma_q}} \leq \lambda^{\gamma_q+m_q} \leq 2^{-\alpha \gamma_q}, \end{eqnarray*} and provided $l>0$ we have \begin{eqnarray*} \frac{r}{\hat{\delta}_{n-l}} \leq \frac{8\delta_n}{\hat{\delta}_{n-l}} \leq \lambda^l. \end{eqnarray*} Note that $\frac{1}{2}\leq \lambda$ and since $s\leq \frac{1}{\alpha}$, we have $2^{-\gamma_q}\leq \left(2^{-\alpha\gamma_q}\right)^s$. Thus, by Equation~(\ref{integral key estimate with r involved}), if $l>0$ we have \begin{eqnarray} \label{integral key estimate with r removed l>0} \int \nu\left(B_r(x)\right) d\nu(x) &\leq & (2 C \lambda^{-s})^2 \cdot \gamma_q \left(2^{-\alpha \gamma_q}\right)^s \cdot l \left(\lambda^l \right)^s, \end{eqnarray} and if $l=0$ we have, \begin{eqnarray} \label{integral key estimate with r removed l=0} \int \nu\left(B_r(x)\right) d\nu(x) &\leq & (2 C^2 \lambda^{-s}) \cdot \gamma_q \left(2^{-\alpha \gamma_q}\right)^s. \end{eqnarray} By the inequality (\ref{m q inequalities}) we have, \begin{eqnarray}\label{r is not too small} r&>& \hat{\delta}_n \geq \hat{\delta}_{\Gamma (q)} \cdot \frac{ \lambda}{2} \cdot \lambda^l\\ \nonumber &\geq &\hat{\delta}_{\Gamma (q-1)} \left(\frac{ \lambda}{2}\right)^2 \cdot \lambda^{\gamma_q + m_q} \cdot \lambda^l\\ \nonumber &\geq &\hat{\delta}_{\Gamma (q-1)} \left(\frac{ \lambda}{2}\right)^2 \cdot \lambda^{\hat{\gamma}_q+m_q} \cdot \lambda^l\\ \nonumber &\geq & \frac{\lambda^3}{4}\cdot\hat{\delta}_{\Gamma (q-1)} \cdot 2^{-\alpha\gamma_q -\alpha \theta_q} \cdot \lambda^{l}. \end{eqnarray} Now by construction, for each $q\in \mathbb{N}$, $\gamma_{q}>q \log ( \hat{\delta}_{\Gamma(q-1)} )^{-1} \cdot \theta_q$, so if we define \begin{equation*} \iota(q):= \frac{-\log \left(\lambda^3/ \log 4\right) - \log \hat{\delta}_{\Gamma (q-1)}+\theta_q\alpha \log 2}{\gamma_q\alpha \log 2}, \end{equation*} we have $\iota(q) \rightarrow 0$ as $q \rightarrow \infty$. Moreover, by (\ref{r is not too small}), \begin{equation*} \frac{\gamma_q\log{2}+ l\log \lambda^{-1}}{-\log r} \geq \frac{1}{1+\iota(q)}. \end{equation*} Substituting into Equations (\ref{integral key estimate with r removed l>0}) and (\ref{integral key estimate with r removed l=0}) and noting that $q\rightarrow \infty$ as $r\rightarrow 0$ we have, \begin{equation*} \Cdim(\nu)=\liminf_{r\rightarrow 0}\frac{1}{\log r}\log \int \nu\left(B_r(x)\right) d\nu(x) \geq s. \end{equation*} This completes the proof of the Proposition. 
\end{proof} \section{$\beta$-shifts and a uniform lower bound} \label{sec:lowerbound} Let $1 < \beta \leq 2$. Given a real number $x \in \mathbb{R}$ we let $\lfloor x \rfloor$ and $\{x\}$ denote, respectively, the integer and fractional parts of $x$. Consider the $\beta$-transformation $f_\beta \colon [0,1) \to [0,1)$ defined by $x \mapsto \{\beta x \}$. Given $x \in [0,1]$ we let $\omega^{\beta}_n(x):=\lfloor \beta f_{\beta}^{n-1}(x) \rfloor$ and \begin{equation*} S_{\beta}:=\closure \bigl\{\, (\omega^{\beta}_n(x))_{n \in \mathbb{N}}: x \in [0,1) \,\bigr\}. \end{equation*} Let $\pi_{\beta} \colon S_{\beta} \rightarrow [0,1]$ be defined by $(\omega_n)_{n \in \mathbb{N}} \mapsto \sum_{n \in \mathbb{N}} \omega_n \beta^{-n}$, and let $\sigma \colon S_{\beta} \rightarrow S_{\beta}$ denote the left shift operator on $S_{\beta}$. Note that $\pi_{\beta} \circ \sigma= f_{\beta}\circ \pi_{\beta}$. Parry proved in \cite{parry} that the shift space $S_\beta$ can be written as \[ S_\beta = \{\, (\omega_1, \omega_2, \ldots) \in \{0,1\}^\mathbb{N} : \sigma^k (\omega_1, \omega_2, \ldots) \leq (\omega_n^\beta(1^-))_{n \in \mathbb{N}} \ \forall k \,\}, \] where $\leq$ is the lexicographical order and $\omega_n^\beta (1^-)$ denotes the limit in the product topology of $\omega_n^\beta (x)$ as $x \to 1$. Moreover, Parry proved that $S_{\beta}$ is a subshift of finite type if and only if the sequence $(\omega_n^\beta (1))_{n \in \mathbb{N}}$ terminates with infinitely many zeroes, and that a sequence $(\omega_n)_{n\in\mathbb{N}}$ equals $(\omega_n^\beta (1))_{n\in\mathbb{N}}$ for some $\beta$ if and only if it satisfies \begin{equation} \label{eq:developmentsof1} (\omega_k, \omega_{k+1}, \ldots) < (\omega_1, \omega_2, \ldots) \end{equation} for all $k > 1$. In the set of sequences satisfying \eqref{eq:developmentsof1}, the subset of sequences terminating with infinitely many zeroes is dense. This implies that the set of $\beta$ for which the sequence $(\omega_n^\beta(1))_{n \in \mathbb{N}}$ terminates with infinitely many zeroes is dense in $(1,2)$. Hence $S_\beta$ is a subshift of finite type for a dense set of $\beta$. The following theorem allows us to transfer results from subshifts of finite type to arbitrary $\beta$-shifts. It is a strengthened version of Theorem~2 from \cite{farmperssonschmeling}, that follows immediately by replacing Lemma~6 in \cite{farmperssonschmeling}, by Lemma~1 in \cite{farmpersson}. \begin{theorem}[F\"{a}rm, Persson]\label{Farm Persson finite type theorem} Let $\beta \in (1,2)$ and let $(\beta_n)_{n \in \mathbb{N}}$ be any sequence with $1<\beta_n < \beta$ for all $n$, such that $\beta_n \rightarrow \beta$ as $n \rightarrow \infty$. Suppose $E \subset S_{\beta}$ and $\pi_{\beta_n}\left(E \cap S_{\beta_n}\right)$ is in the class $\mathcal{G}^s (I)$ for all $n$. If $F$ is a $G_{\delta}$ with $F \supset \pi_{\beta}\left(E \cap S_{\beta}\right)$, then $F$ is also in the class $\mathcal{G}^s (I)$. \end{theorem} For $\kappa > 0$, we consider the sets \[ A_\beta (\kappa) = \{\, x \in [0,1] : 0 \leq T_\beta^n (x) \leq \beta^{-\kappa n} \text{ infinitely often} \, \}. \] We shall use the following theorem which allows us to restrict our attention to the case where $S_{\beta}$ is a subshift of finite type. \begin{theorem} \label{the:beta} For any $1 < \beta \leq 2$ we have $A_\beta (\kappa) \in \mathcal{G}^s ([0,1])$ for $s = \frac{1}{1+\kappa}$. 
\end{theorem} \begin{remark} We note that the bound $s \leq \frac{1}{1+\kappa}$ is sharp since an easy covering argument, using the fact that $T_{\beta}$ has topological entropy $\log \beta$, shows that the Hausdorff dimension of $A_\beta (\kappa)$ is not larger than $\frac{1}{1+\kappa}$. \end{remark} \begin{proof} We let \[ A_{\beta,n} (\kappa) = \biggl\{ x : 0 \leq x - y \leq 2^{- \gamma n} \text{ for some } y = \sum_{k=1}^n \frac{a_k}{\beta^k},\ (a_k)_{k \in \mathbb{N}} \in S_\beta \, \biggr\}, \] and note that $A_\beta (\kappa)$ can be written as $A_\beta (\kappa) = \limsup_{n\to \infty} A_{\beta,n} (\kappa)$. By Theorem \ref{Farm Persson finite type theorem} it suffices to prove the theorem in the special case where $S_\beta$ is a subshift of finite type. When $S_\beta$ is a subshift of finite type there are constants $c_1$ and $c_2$ such that \begin{equation} \label{eq:cylinderlength} c_1 \beta^{-n} \leq | \pi_\beta ([a_1, a_2, \ldots, a_n]) | \leq c_2 \beta^{-n}. \end{equation} This implies that the number of cylinders of size $n$, denoted by $N (n)$, satisfies \begin{equation} \label{eq:numberofcylinders} c_2^{-1} \beta^n \leq N(n) \leq c_1^{-1} \beta^n. \end{equation} Using these estimates we may complete the proof by following the method of \cite[Example~8.9]{falconerbook}. \end{proof} \begin{corollary}\label{cor:lowerbound} For any $\lambda \in \left(\frac{1}{2},1\right)$ and $\alpha>1$ we have $W_{\lambda}(\alpha) \in \mathcal{G}^s (I_\lambda)$ for $s = \frac{-\log \lambda}{\alpha \log 2}$. \end{corollary} \begin{proof} Take $\beta =\lambda^{-1}$ and $\kappa = \frac{\alpha \log 2}{\log \beta}-1$. It follows that $A_{\beta}(\kappa) \subset W_{\lambda}(\alpha)$, so $W_\lambda (\alpha) \in \mathcal{G}^s ([0,1])$ follows immediately from Theorem \ref{the:beta}. Now, the self-similar structure of $W_\lambda (\alpha)$ implies that $W_\lambda (\alpha) \in \mathcal{G}^s (I_\lambda)$. \end{proof} \section{Covering arguments and upper bounds}\label{s covering args} Each of the upper bounds from Theorem \ref{main bullet points} parts (1), (3) and (5) will rely on the following simple relationship between the growth in the number of $n$th level $\lambda$ sums and the dimension of $W_{\lambda}(\alpha)$. Given $\lambda \in \left(\frac{1}{2},1\right)$ and $n \in \mathbb{N}$ we let \begin{equation*} F_{\lambda, n} := \biggl\{\, \sum_{k=1}^n a_k \lambda^k : a_k \in \{0,1\} \,\biggr\}, \end{equation*} and let \begin{equation*} \tau(\lambda):= \limsup_{n \rightarrow \infty} \frac{\log \#F_{\lambda,n}}{n \log 2}. \end{equation*} \begin{lemma}\label{tau cover lemma} For all $\lambda \in \left(\frac{1}{2},1\right)$ and $\alpha >1$ the Hausdorff dimension of $W_{\lambda}(\alpha)$ is bounded above by $\tau(\lambda)/\alpha$. \end{lemma} \begin{proof} This may be deduced by a standard covering argument. See for example the first paragraph in the proof of Jarn\'{i}k's theorem from \cite[Section 10.3]{falconerbook}. \end{proof} Our first corollary establishes Theorem \ref{main bullet points} (1). \begin{corollary} For all $\lambda \in \left(\frac{1}{2},1\right)$ and $\alpha >1$ the Hausdorff dimension of $W_{\lambda}(\alpha)$ is bounded above by $1/\alpha$. \end{corollary} \begin{proof} This is immediate from Lemma \ref{tau cover lemma} combined with the fact that $\#F_{\lambda,n} \leq 2^n$ so $\tau(\lambda) \leq 1$ for all $\lambda \in \left(\frac{1}{2},1\right)$. \end{proof} Our second corollary establishes Theorem \ref{main bullet points} (3). 
\begin{corollary} There exists a dense family $\Gamma \subset \left(\frac{1}{2},1\right)$ such that for all $\lambda \in \Gamma$, $\dimH W_{\lambda}(\alpha)<1/\alpha$. \end{corollary} \begin{proof} Our approach is based on \cite{ss}. We let $\Gamma$ denote the set of $\lambda \in \left(\frac{1}{2},1\right)$ such that for some finite word $\left(\omega_i\right)_{i=1}^{n} \in \{0,1\}^n$ we have $1=\sum_{i=1}^n\omega_i \lambda^i$. To see that $\Gamma$ is dense in $\left(\frac{1}{2},1\right)$ first fix $\lambda_0 \in \left(\frac{1}{2},1\right)$ and $\epsilon\in (0,1-\lambda_0)$. Then there exists an infinite string $\left(\omega_i\right)_{i=1}^{\infty} \in \{0,1\}^{\mathbb{N}}$ with $1=\sum_{i=1}^{\infty}\omega_i \lambda_0^i$. Let $k$ be the smallest $q$ with $\omega_q =1$ and choose $n$ so that $\sum_{i=1}^{n}\omega_i \lambda_0^i>1-\epsilon^k$. Then for some $\lambda \in \left(\lambda_0, \lambda_0+\epsilon\right)$ we have $\sum_{i=1}^{n}\omega_i \lambda^i=1$, so $\left(\lambda_0, \lambda_0+\epsilon\right)\cap \Gamma \neq \emptyset$. By Lemma \ref{tau cover lemma} it suffices to show $\tau(\lambda)<1$ for all $\lambda \in \Gamma$. But if $\lambda \in \Gamma$ then for some finite word $\left(\omega_i\right)_{i=1}^{n} \in \{0,1\}^n$ we have $\lambda^{q(n+1)+1}=\sum_{i=1}^n\omega_i \lambda^{i+1+q(n+1)}$ for all $q \in \mathbb{N}$. It follows that for all $q \in \mathbb{N}$, \begin{align*} F_{\lambda, q(n+1)}=\\ =\biggl\{\, \textstyle \sum\limits_{i=0}^{q-1} \sum\limits_{j=1}^{n+1} & a_{i(n+1)+j}\lambda^{i(n+1)+j} :\\ ( & a_{i(n+1)+1},\ldots, a_{i(n+1)+(n+1)} ) \in \{0,1\}^{n+1} \,\biggr\}\\ =\biggl\{\, \textstyle \sum\limits_{i=0}^{q-1} \sum\limits_{j=1}^{n+1} & a_{i(n+1)+j}\lambda^{i(n+1)+j} :\\ ( & a_{i(n+1)+1},\ldots, a_{i(n+1)+(n+1)} ) \in \{0,1\}^{n+1}\backslash\{(1,0,\ldots,0)\} \,\biggr\}. \end{align*} Thus, for each $q$ we have \begin{equation*} \#F_{\lambda, q(n+1)} \leq \left(2^{n+1}-1\right)^q, \end{equation*} so for all $l \in \mathbb{N}$, \begin{equation*} \#F_{\lambda, l} \leq \#F_{\lambda, \lceil l/(n+1) \rceil (n+1)} \leq \left(2^{n+1}-1\right)^{\lceil l/(n+1) \rceil}. \end{equation*} Thus, $\tau(\lambda) \leq \log \left(2^{n+1}-1\right)/\left((n+1) \log 2\right)<1$. \end{proof} Finally we complete the proof of Theorem \ref{main bullet points} (5). \begin{defn} A \textit{multinacci number} is a positive real $\lambda$ which satisfies an equation of the form $\lambda^m+\cdots+\lambda=1$ for some $m \in \mathbb{N}$. \end{defn} We note that there are countably many multinacci numbers, all of which are contained within the interval $\left( \frac{1}{2}, 1\right)$. The largest multinacci number is the golden ratio $\frac{\sqrt{5}-1}{2}$. \begin{theorem} Let $\lambda$ be a multinacci number. Then the Hausdorff dimension of $W_\lambda (\alpha)$ is $- \frac{\log \lambda}{\log 2} \frac{1}{\alpha}$. \end{theorem} \begin{proof} Put \begin{align*} S_1 \colon x &\mapsto \lambda x, \\ S_2 \colon x &\mapsto \lambda (x + 1). \end{align*} Let us first consider the case $m = 2$. Then $\lambda = \frac{\sqrt{5} - 1}{2}$ and $S_1 \circ S_2 \circ S_2 = S_2 \circ S_1 \circ S_1$. Hence, when defining $W_\lambda (\alpha)$ we need only consider sequences where the word $011$ is forbidden, since replacing the word $011$ in a sequence by the word $100$ yields the same point. Hence, if we put \[ F_{\lambda, n} = \biggl\{\, \sum_{k=1}^n a_k \lambda^k : a_k \in \{0,1\} \,\biggr\}, \] then we have \[ F_{\lambda, n} = \biggl\{\, \sum_{k=1}^n a_k \lambda^k : a_k \in \{0,1\},\ (a_k, a_{k+1}, a_{k+2}) \neq (0,1,1) \,\biggr\}. 
\] The subshift in which $011$ is forbidden is a subshift of finite type, with adjacency matrix \[ A = \left[ \begin{array}{cccc} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{array} \right]. \] One checks that $\lambda^{-1} = \frac{\sqrt{5}+1}{2}$ is the largest eigenvalue of $A$. Hence there is a constant $K$ such that $\# F_{\lambda, n} < K \lambda^{-n}$. Hence $\tau(\lambda)= -\frac{\log \lambda}{\log 2}$, so by Lemma \ref{tau cover lemma} the Hausdorff dimension of $W_{\lambda} (\alpha)$ is at most $- \frac{\log \lambda}{\log 2} \frac{1}{\alpha}$. But by Corollary~\ref{cor:lowerbound} it is at least $- \frac{\log \lambda}{\log 2} \frac{1}{\alpha}$. For a general $m \geq 2$ we proceed similarly. Assume $\lambda$ is such that $S_1 \circ S_2^m = S_2 \circ S_1^m$. This implies that $\lambda$ satisfies the equation \begin{equation} \label{eq:multinacci} \lambda^m + \lambda^{m-1} + \cdots + \lambda = 1. \end{equation} As before, for the set $F_{\lambda, n}$, we need only consider sequences where the word \[ 0 \underbrace{11 \ldots 1}_{m} \] is forbidden. This is again a subshift of finite type, and it can be represented using a $2^m \times 2^m$ adjacency matrix given by \[ A = \left[ \begin{array}{ccccccc} 1 & 1 & & \\ & & 1 & 1 \\ & & & & \ddots \\ & & & & & 1 & 0 \\ 1 & 1 & & \\ & & 1 & 1 \\ & & & & \ddots \\ & & & & & 1 & 1 \end{array} \right]. \] By the Perron--Frobenius theorem, the eigenvalue of largest modulus of this matrix is a positive number, and it has a corresponding eigenvector with positive elements. Let $v = [v_1\ \cdots\ v_{2^m}]^\mathrm{T}$ be such an eigenvector and let $\mu$ be the eigenvalue. It is not hard to see that the equation $A v = \mu v$ implies that \[ \begin{array}{rcl} v_1 & = & v_{2^{m-1}+1}, \\ v_2 & = & v_{2^{m-1}+2}, \\ & \vdots & \\ v_{2^{m-1}-1} & = & v_{2^m - 1}. \end{array} \] Let $1 \leq k < 2^{m-2}-1$. Looking at row $k$ and row $2^{m-2}+k$ in the equation $Av = \mu v$, we see that $v_k = v_{2^{m-2}+k}$. Continuing in this fashion we conclude that all $v_k$ with odd $k$ are equal. Without loss of generality we can therefore assume that $v_k = 1$ for odd $k$. If we look at the first row of the matrices in the equation $A v = \mu v$, we see that $\mu v_1 = v_1 + v_2 = 1 + v_2$. We continue, and looking at the second row, we see that $\mu v_2 = v_3 + v_4 = 1 + v_4$. Hence we have \[ \mu = \mu v_1 = 1 + v_2 = 1 + \mu^{-1} (1 + v_4). \] Similarly we get $\mu v_4 = 1 + v_8$, and so \[ \mu = 1 + \mu^{-1} + \mu^{-2} (1 + v_8). \] We can continue this process, using the equations \[ \mu v_{2^k} = 1 + v_{2^{k+1}}, \] which are valid for $0 \leq k \leq m-3$, to conclude \[ \mu = 1 + \mu^{-1} + \cdots + \mu^{-m+2} (1 + v_{2^{m-2}}). \] But we have $\mu v_{2^{m-2}} = v_{2^{m-1}-1} = 1$, hence \[ \mu = 1 + \mu^{-1} + \cdots + \mu^{-m+2} + \mu^{-m+1}, \] or equivalently \[ \mu^m = 1 + \mu + \cdots + \mu^{m-1}. \] Comparing with the equation \eqref{eq:multinacci}, this implies that we have $\mu = \lambda^{-1}$. The rest is just as for the case $m = 2$ above. We have that $\# F_{\lambda, n} < K \mu^n = K \lambda^{-n}$, and therefore $\tau(\lambda)= -\frac{\log \lambda}{\log 2}$, so by Lemma \ref{tau cover lemma} the Hausdorff dimension of $W_{\lambda} (\alpha)$ is at most $- \frac{\log \lambda}{\log 2} \frac{1}{\alpha}$; the matching lower bound follows once more from Corollary~\ref{cor:lowerbound}. \end{proof}
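\begin{remark} The counting estimate used above is easy to check numerically. The short Python sketch below is only an illustration and not part of the argument (the script, the function name \texttt{count\_level\_sums} and the choice of test values are ours). It computes $\# F_{\lambda, n}$ exactly in the case $m=2$, $\lambda = \frac{\sqrt{5}-1}{2}$, by encoding each partial sum as an integer pair $(p,q)$ standing for $p+q\lambda$ and reducing powers via $\lambda^2 = 1-\lambda$; since $\lambda$ is irrational, distinct pairs correspond to distinct sums, so no floating point comparisons are needed. The printed ratio $\log \# F_{\lambda,n}/(n \log 2)$ decreases towards $-\frac{\log \lambda}{\log 2} \approx 0.694$, with the expected $O(1/n)$ correction coming from the multiplicative constant $K$.
\begin{verbatim}
import math

def count_level_sums(n):
    # Exact count of #F_{lambda,n} for lambda = (sqrt(5)-1)/2.
    power = (0, 1)      # lambda^1 = 0 + 1*lambda, stored as (p, q)
    sums = {(0, 0)}     # achievable sums, each pair (p, q) means p + q*lambda
    for _ in range(n):
        a, b = power
        sums |= {(p + a, q + b) for (p, q) in sums}
        power = (b, a - b)   # lambda^(k+1) = b + (a - b)*lambda
    return len(sums)

lam = (math.sqrt(5.0) - 1.0) / 2.0
target = -math.log(lam) / math.log(2.0)
for n in (5, 10, 15, 20):
    count = count_level_sums(n)
    ratio = math.log(count) / (n * math.log(2.0))
    print(n, count, round(ratio, 4), round(target, 4))
\end{verbatim}
\end{remark}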
1,108,101,562,605
arxiv
\chapter*{List of Publications} \centerline{ \bf Published and submitted articles} \vspace{0.5cm} \begin{description} \item P.~Bandyopadhyay, C.~Corian\`o, A.~Costantini and L.~Delle Rose\\ \textit{Bounds on the Conformal Scale of a Minimally Coupled Dilaton and Multi-Leptonic Signatures at the LHC}\\ arXiv:1607.01933 [hep-ph] \item P.~Bandyopadhyay, C.~Corian\`o and A.~Costantini\\ \textit{A General Analysis of the Higgs Sector of the $Y=0$ Triplet-Singlet Extension of the MSSM at the LHC}\\ arXiv:1512.08651 [hep-ph]. Accepted for publication in Phys. Rev. D (2016). \item P.~Bandyopadhyay, C.~Corian\`o and A.~Costantini\\ \textit{Probing the hidden Higgs bosons of the $Y = 0$ triplet- and singlet-extended Supersymmetric Standard Model at the LHC}\\ JHEP {\bf 1512} (2015) 127, doi:10.1007/JHEP12(2015)127\\ arXiv:1510.06309 [hep-ph] \item P.~Bandyopadhyay, C.~Corian\`o and A.~Costantini\\ \textit{Perspectives on a supersymmetric extension of the standard model with a Y = 0 Higgs triplet and a singlet at the LHC}\\ JHEP {\bf 1509} (2015) 045, doi:10.1007/JHEP09(2015)045\\ arXiv:1506.03634 [hep-ph] \item C.~Corian\`o, A.~Costantini, M.~Dell'Atti and L.~Delle Rose\\ \textit{Neutrino and Photon Lensing by Black Holes: Radiative Lens Equations and Post-Newtonian Contributions}\\ JHEP {\bf 1507} (2015) 160, doi:10.1007/JHEP07(2015)160\\ arXiv:1504.01322 [hep-ph] \item C.~Corian\`o, A.~Costantini, L.~Delle Rose and M.~Serino\\ \textit{Superconformal sum rules and the spectral density flow of the composite dilaton (ADD) multiplet in $\mathcal{N}=1$ theories}\\ JHEP {\bf 1406} (2014) 136, doi:10.1007/JHEP06(2014)136\\ arXiv:1402.6369 [hep-th] \end{description} \begin{description} \centerline{\bf Conference Proceedings} \item A.~Costantini, L.~Delle Rose and M.~Serino\\ \textit{Sum rules and spectral density flow in QCD and in superconformal theories}, EPJ Web Conf.\ {\bf 80} (2014) 00017, doi:10.1051/epjconf/20148000017\\ arXiv:1409.5075 [hep-th] \item P.~Bandyopadhyay, C.~Corian\`o and A.~Costantini\\ \textit{Higgs bosons: discovered and hidden, in extended Supersymmetric Standard Models at the LHC}\\ arXiv:1604.00228 [hep-ph] \end{description} \chapter*{Introduction} Our description of the laws of Nature, from the point of view of fundamental physics, is tightly connected with the concept of symmetry. In fact, symmetries play a crucial role in our understanding of the fundamental interactions, and the Standard Model of the elementary particles (SM), which provides the best phenomenological description of the subatomic world, is entirely built around the two fundamental principles of Lorentz and of gauge symmetries, formulated according to a well established quantum field theory (QFT) language.\\ The gauge structure of the SM has been tested, by now, for almost half a century with an incredible success and it is expected that additional gauge simmetries will likely emerge, as the experimental search moves towards even higher energy. This search may not be quick and simple, rather it will require an extraordinary effort and several years to be formulated in a consistent framework, as shown by the enormous scientific effort which has already gone into the construction of the LHC and of the experimental detectors. \\ Given the daisy chain of successes built around the gauge principle, it is therefore obvious that, at theoretical level, the study of possible symmetric extensions of this model has to engage with the study of realistic patterns of their breakings and of their restorations. 
\\ The recent validation of the Higgs mechanism for the generation of the mass of the particles of the SM indicates that we should anchor our experimental efforts to the banks of this principle, keeping however an open view about possible unexpected events which may appear along the way. More exotic scenarios, with the appearance of new symmetries, unrelated to the gauge structure, such as the conformal symmetry, remain an open possibility and will be central to this work. Theorists are still allowed wide options for molding new scenarios, since there are several important features of the current phenomenology which find no answer within the SM. Such are the gauge hierarchy problem and the unsolved question of the origin of the masses of the light neutrinos, not to mention the issue of triviality for fundamental scalars at high energy. At this time, we still do not have definitive results about the structure of the scalar field which appears in the Higgs mechanism, for instance regarding its fundamental or composite nature. Likewise we also cannot exclude that a new symmetry may be playing an important role in it. These considerations provide the leitmotif for the first part of this thesis work, which collects five studies of specific extensions of the SM dealing with direct analysis of the scalar sector of gauge theories. This part is organised as follows.\\ The first two chapters, one of theoretical and the other of phenomenological nature, address the role of a dilaton and of conformal symmetries together with their breaking, at the phenomenological level. A dilaton is a state that appears as a signature of the breaking of a conformal symmetry, and the approximate scale invariance of the SM as we run towards the UV, where one could reach a scale of symmetry restoration, gives realistic motivations for such an analysis. \\ In particular, the issue of whether a dilaton can be fundamental or composite remains quite open. These two chapters both address this specific topic, though in two different contexts, the first one supersymmetric, where fundamental open questions about the nature of this state are addressed, and the other non-supersymmetric and phenomenological. \\ A dilaton is the Nambu-Goldstone mode which results from the breaking of conformal symmetry and couples to the trace anomaly of the SM. We just recall that the anomalous breaking of the conformal symmetry, as for a chiral symmetry, is characterised by the appearance of an intermediate state which interpolates between the anomalous current and the other (vector) currents of a trilinear correlator. As we show in Chapter 2, the interaction of a dilaton with the anomaly provides a significant enhancement of the production rates of massless vector states in a typical trilinear vertex where this particle is involved. \\ Similarly, in the chiral case it is well known that such an anomalous interaction is responsible for the enhancement of the decay of the pion into two photons, and one wonders whether this enhancement could be true on a more general basis, and valid for all the anomalous vertices. Chapter 1 gives an affirmative answer to this question, showing the presence of a theoretical link between chiral and conformal anomaly interactions on a very rigorous basis and by a direct perturbative analysis. 
This test, as we are going to show, can be directly performed in the case of a superconformal theory, which also provides the natural framework for addressing this point, thanks to the appearance of a superconformal anomaly multiplet. Here trace and chiral anomalies appear in a similar way. \\ We show that there is a complete analogy between the chiral and the conformal cases. In particular we prove that the origin of either a chiral or a conformal anomaly is in the emergence of effective scalar or pseudoscalar degrees of freedom in the corresponding effective action. This interpretation is supported by the identification of a sum rule - which is completely fixed by the anomaly - for all the components of a supersymmetric multiplet and by the appearance of what we call a ``spectral density flow'' as we reach the conformal limit. We show, using a mass deformation of the superconformal theory, that the form factors responsible for the generation of the anomalies exhibit the same spectral density, with a branch cut that turns into a pole as the mass deformation parameter is removed. In other words, the flow generates a sequence of spectral functions which converges to a delta function in the conformal limit, signalling the exchange of a massless state in the intermediate channel. The intermediate exchange of scalar or pseudoscalar states, identified as an axion, a dilaton and a dilatino, as mediators of either chiral or conformal anomalies, is therefore firmly proved. \\ The study of dilaton interactions is discussed further in Chapter 2, where we investigate the current limits on the possible discovery of a conformal scale at the LHC. An in-depth phenomenological analysis of the possible channels mediated by an intermediate dilaton is presented, showing that such a scale is currently constrained by a lower bound of 5 TeV, which is not stringent enough to exclude it altogether from the future experimental analysis at ATLAS and CMS. \\ In Chapters 3 and 4 we move to an analysis of a supersymmetric model with an extended Higgs sector, the TNMSSM, which is a scale invariant scalar theory with a Higgs superfield in a triplet representation of weak $SU(2)$. The model represents a significant departure from the NMSSM, manifesting a wider scalar spectrum which we have investigated in great detail. The inclusion of Higgs scalars belonging to higher representations of the gauge structure remains an open possibility of considerable theoretical and experimental interest, which the current and future analyses at the LHC have to confront. The presence of a light pseudoscalar in its spectrum is certainly one of its most significant features, which has been addressed in Chapter 4. The identification of the relevant region of parameter space of this model which is currently allowed at the LHC has been investigated by comparing the signal with direct simulations of the background. \\ Part 2 of this thesis can be read independently and presents an application of correlators which are affected by a conformal anomaly in a gravitational context. The same $TVV$ correlator (with $T$ denoting the stress-energy tensor of a gauge theory and $V$ the vector current) which is studied in Chapters 1 and 2, is used in the analysis of semiclassical lensing in gravity. As typical in the course of a theoretical analysis, fundamental results in a certain area may be quickly applied to other areas which appear to be unrelated. 
Conformal anomalies are indeed fundamental, and the propagation of a photon in a curved gravitational background is affected by the same anomaly which shows up in the interaction of a dilaton at the LHC via its coupling to two photons. We just recall that the coupling of the SM to gravity occurs via the energy momentum tensor of the theory and that the dilatation current $J_D$ of a given theory has a divergence which is given by the trace of the same tensor. It is therefore not a big surprise that the $TVV$ correlator can be used to describe the propagation of photons and neutrinos in gravitational backgrounds. This analysis hinges on a previous study in which it has been shown that the anomaly form factor of the $TVV$ vertex in the SM, where $V$ is the electromagnetic current, induces a small change in Einstein's formula for the deflection, which has been quantified. In this final chapter we develop a complete formulation of these radiative effects in the interaction of photons and neutrinos in a gravitational background, extending the formalism of gravitational lensing to the semiclassical case. \part{Theoretical and Phenomenological Aspects of a Superconformal Theory}\label{pI} \chapter{The Superconformal anomaly multiplet in an $\mathcal{N}=1$ theory} \section{Synopsis} In this chapter we investigate a supersymmetric Yang-Mills theory in its superconformal phase and its corresponding anomalies. We show that there is a unifying feature of the chiral and conformal anomalies appearing in correlators involving the Ferrara-Zumino supercurrent and two vector supercurrents. These are characterised by the presence of massless anomaly poles in their related anomaly form factors. The states associated to these massless poles are interpreted as a dynamical realization of a scalar (dilaton), a pseudoscalar (an axion) and a fermion (dilatino) interpolating between such a current and the two vector currents of the anomaly vertex. The appearance of a dilaton in a scale invariant theory is connected to the breaking of a conformal symmetry, a fact that can be simply illustrated with a realistic example. \\ A dilaton may appear in the spectrum of different extensions of the Standard Model not only as a result of the compactification of extra spacetime dimensions, but also as an effective state, related to the breaking of a dilatation symmetry. The Standard Model is not a scale-invariant theory, but can be such in its defining classical Lagrangian if we slightly modify the scalar potential with the introduction of a dynamical field $\Sigma$. This extension allows one to restore this symmetry, which must be broken at a certain scale, where $\Sigma$ acquires a vacuum expectation value. This task is accomplished by the replacement of every dimensionful parameter $m$ of the defining Lagrangian according to the prescription $m \rightarrow m \frac{\Sigma}{\Lambda}$, where $\Lambda$ is the classical conformal breaking scale. Establishing the size of this scale is a fundamental issue which may require considerable effort at the phenomenological level. \\ In the case of the SM, classical scale invariance can be easily restored by the simple change in the scalar potential briefly described above. 
This is defined modulo a constant, therefore we may consider, for instance, two equivalent choices \begin{eqnarray} V_1(H, H^\dagger)&=& - \mu^2 H^\dagger H +\lambda(H^\dagger H)^2 = \lambda \left( H^\dagger H - \frac{\mu^2}{2\lambda}\right)^2 - \frac{\mu^4}{4 \lambda}\nonumber \\ V_2(H,H^\dagger)&=&\lambda \left( H^\dagger H - \frac{\mu^2}{2\lambda}\right)^2 \end{eqnarray} which generate two different scale-invariant extensions \begin{eqnarray} V_1(H,H^\dagger, \Sigma)&=&- \frac{\mu^2\Sigma^2}{\Lambda^2} H^\dagger H +\lambda(H^\dagger H)^2 \nonumber \\ V_2(H,H^\dagger, \Sigma)&=& \lambda \left( H^\dagger H - \frac{\mu^2\Sigma^2}{2\lambda \Lambda^2}\right)^2 \,, \end{eqnarray} where $H$ is the Higgs doublet, $\lambda$ is its dimensionless coupling constant, while $\mu$ has the dimension of a mass and, therefore, is the only term involved in the scale invariant extension.% The invariance of the potential under the addition of constant terms, typical of any Lagrangian, is lifted once we require the presence of a dilatation symmetry. One can immediately check that only the second choice $(V_2)$ guarantees the existence of a stable ground state with a spontaneously broken phase. In $V_2$ we parameterize the Higgs, as usual, around the electroweak vev $v$ and indicate with $\Lambda$ the vev of the dilaton field $\Sigma = \Lambda + \rho$, setting all the Goldstone modes generated in the breaking of the gauge symmetry to zero, as customary in the unitary gauge. \\ The potential $V_2$ has a massless mode due to the existence of a flat direction. Performing a diagonalization of the mass matrix we define the two mass eigenstates $\rho_0$ and $h_0$, which are given by \begin{equation} \left( \begin{array}{c} {\rho_0}\\ h_0 \\ \end{array} \right) =\left( \begin{array}{cc} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \\ \end{array} \right) \left( \begin{array}{c} \rho\\ {h} \\ \end{array} \right) \end{equation} with \begin{equation} \cos\alpha=\frac{1}{\sqrt{1 + v^2/\Lambda^2}}\qquad \qquad \sin\alpha=\frac{1}{\sqrt{1 + \Lambda^2/v^2}}. \end{equation} We denote with ${\rho_0}$ the massless dilaton generated by this potential, while $h_0$ will describe a massive scalar, interpreted as a new Higgs field, whose mass is given by \begin{equation} m_{h_0}^2= 2\lambda v^2 \left( 1 +\frac{v^2}{\Lambda^2}\right) \qquad \textrm{with} \qquad v^2=\frac{\mu^2}{\lambda}, \end{equation} and with $m_h^2=2 \lambda v^2$ being the mass of the Standard Model Higgs. The Higgs mass, in this case, is corrected by the new scale of the spontaneous breaking of the dilatation symmetry ($\Lambda$), which remains a free parameter. The vacuum degeneracy of the scale-invariant model can be lifted by the introduction of extra (explicit breaking) terms which give a small mass to the dilaton field. To remove such degeneracy, one can introduce, for instance, the term \begin{equation} \mathcal{L}_{break} = \frac{1}{2} m_{\rho}^2 {\rho}^2 + \frac{1}{3!}\, {m_{\rho}^2} \frac{{\rho}^3}{\Lambda} + \dots \, , \end{equation} where $m_{\rho}$ represents the dilaton mass. It is clear that in this approach the coupling of the dilaton to the anomaly has to be added by hand. The obvious question to address, at this point, is if one can identify in the effective action of the Standard Model an effective state which may interpolate between the dilatation current of the same model and the final state with two neutral currents, for example with two photons. 
Such a state can be identified in ordinary perturbation theory in the form of an anomaly pole. We are entitled to interpret this scalar exchange as a composite state whose interactions with the rest of the Standard Model are defined by the conditions of scale and gauge invariance. \subsection{Dilaton coupling to the anomaly} We will show rigorously, in the supersymmetric case, that this state couples to the conformal anomaly by a direct analysis of the $J_DVV$ correlator, in the form of an anomaly pole, with $J_D$ and $V$ being the dilatation and the vector currents respectively. The correlator is extracted from the more general supersymmetric 3-point function involving the FZ current and two vector supercurrents, as already mentioned in the introduction. Poles in a correlation function are usually there to indicate that a specific state can be created by a field operator in the Lagrangian of the theory, or, alternatively, as a composite particle of the same elementary fields. Obviously, a perturbative hint of the existence of such intermediate state does not correspond to a complete description of the state, in the same way as the discovery of an anomaly pole in the $AVV$ correlator of QCD (with $A$ being the axial current) is not equivalent to a proof of the existence of the pion. Nevertheless, massless poles extracted from the perturbative effective action do not appear for no reasons, and this should be sufficient to justify a more complete analysis of the 1-loop effective action of classical conformal theories. \\ Originally, the appearance of classical scalar degrees of freedom in the context of gravitational interactions has been pointed out starting from several analysis of the $TVV$ vertex, where $T$ stands for the energy momentum tensor of a given theory and $V$ denotes the vector current \cite{Giannotti:2008cv, Armillis:2009pq}. Subsequently it has been shown that the same pole is inherited by the dilatation current, in the $J_D VV$ vertex, being the two vertices very closely related. We recall that the dilatation current can be defined as \begin{equation} J_D^\mu(z)= z_\nu T^{\mu \nu}(z) \qquad \textrm{with} \qquad \partial\cdot J_D = {T^\mu}_\mu. \label{def} \end{equation} The $T^{\mu\nu}$ has to be symmetric and on-shell traceless for a classical scale-invariant theory, and includes, at quantum level, the contribution from the trace anomaly together with the additional terms describing the explicit breaking of the dilatation symmetry. The contribution of the conformal anomaly, in flat space, is summarised by the equation \begin{equation} \label{anomz} T^\mu_\mu= \beta F_{\alpha\beta} F^{\alpha\beta} \end{equation} which holds for a classical scale invariant theory (i.e. with $T_\mu^\mu=0$), with the right hand side of (\ref{anomz}) related to the $\beta$ function $(\beta)$ of the gauge theory and to $F_{\mu\nu}$, the field strength of the vector particle (V). A similar equation holds in the case of chiral anomaly \begin{equation} \partial_\rho j_5^\rho= a_n F\tilde F \end{equation} for the chiral anomaly, with $j_5^\rho$ denoting the axial vector current, and with $\tilde{F}=1/2 \epsilon^{\mu\nu\alpha\beta} F_{\alpha\beta}$ being the dual of the field strength of the gauge field. We recall that the $U(1)_A$ current is characterized by an anomaly pole which describes the interaction between the Nambu-Goldstone mode, generated by the breaking of the chiral symmetry, and the gauge currents. 
In momentum space this corresponds to the nonlocal vertex \begin{equation} \label{AVVpole} V_{\textrm{anom}}^{\lambda \mu\nu}(k,p,q)= \frac{k^\lambda}{k^2}\epsilon^{\mu \nu \alpha \beta}p_\alpha q_\beta +... \end{equation} with $k$ being the momentum of the axial-vector current and $p$ and $q$ the momenta of the two photons. In the equation above, the ellipsis refers to terms which are suppressed at large energy. In this regime, this allows one to distinguish the operator accounting for the chiral anomaly (i.e. $\square^{-1}$ in coordinate space) from the contributions due to mass corrections. Polology arguments can be used to relate the appearance of such a pole to the pion state around the scale of chiral symmetry breaking. We refer to \cite{Giannotti:2008cv} and \cite{Armillis:2009pq, Armillis:2010qk} for more details concerning the analysis of this correlator in the QED and QCD cases, while the discussion of the $J_D VV$ vertex can be found in \cite{CDS}. Using the relation between $J_D^\mu$ and the EMT $T^{\mu\nu}$ we introduce the $J_DVV$ correlator \begin{eqnarray} \Gamma_D^{\mu\alpha\beta}(k,p) &\equiv& \int d^4 z\, d^4 x\, e^{-i k \cdot z + i p \cdot x}\, \left\langle J^\mu_D(z) V^\alpha(x)V^\beta(0)\right\rangle \label{gammagg} \end{eqnarray} which can be related to the $TVV$ correlator \begin{eqnarray} \Gamma^{\mu\nu\alpha\beta}(k,p)&\equiv& \int d^4 z\, d^4 x\, e^{-i k \cdot z + i p \cdot x}\, \left\langle T^{\mu \nu}(z) V^\alpha(x) V^\beta(0)\right\rangle \end{eqnarray} according to \begin{eqnarray} \Gamma_D^{\mu\alpha\beta}(k,p)&=& i \frac{\partial}{\partial k^\nu}\Gamma^{\mu\nu\alpha\beta}(k,p) \,. \end{eqnarray} As we have already mentioned, this equation allows us to identify a pole term in the $J_DVV$ diagram from the corresponding pole structure in the $TVV$ vertex. The analysis presented below shows that supersymmetry provides the natural framework where this pole-like behaviour is reproduced both in the chiral and the conformal anomaly parts of the superconformal anomaly vertex. \section{Spectral analysis of supersymmetric correlators} Dilaton fields are expected to play a very important role in the dynamics of the early universe and are present in almost any model which attempts to unify gravity with the ordinary gauge interactions (see for instance \cite{Gasperini:2007ar}). Important examples of these constructions are effective field theories derived from strings, describing their massless spectra, but also theories of gravity compactified on extra dimensional spaces, where the dilaton (graviscalar) emerges in 4 spacetime dimensions from the extra components of the higher dimensional metric (see for instance \cite{LopesCardoso:1991zt,LopesCardoso:1992yd,Derendinger:1991kr,Derendinger:1991hq,Derendinger:1985cv}). In these formulations, due to the geometrical origin of these fields, the dilaton is, in general, a fundamental (i.e. not a composite) field. Other extensions, also of significant interest, in which a fundamental dilaton induces a gauge connection for the abelian Weyl symmetry in a curved spacetime, have been considered (see the discussion in \cite{Codello:2012sn,Buchmuller:1988cj, Coriano:2013nja}). However, also in this case, the link of this fundamental particle to gravity renders it a crucial player in the physics of the early universe, and not a particle to be searched for at colliders. In fact, its interaction with ordinary matter should be suppressed by the Planck scale, except if one considers scenarios with large extra dimensions. 
More recently, following an independent route, several extensions of the Standard Model with an {\em effective} dilaton have been considered. They conjecture the existence of a scale-invariant extension of the Higgs sector \cite{Goldberger:2007zk, CDS,Coriano:2012dg}. In this case the breaking of the underlying conformal dynamics, in combination with the spontaneous breaking of the electroweak symmetry \cite{Coriano:2013nja}, suggests, in fact, that the dilaton could emerge as a composite field, appearing as a Nambu-Goldstone mode of the broken conformal symmetry. A massless dilaton of this type could acquire a mass by some explicit potential and could mix with the Higgs of the Standard Model.\\ By reasoning in terms of the conformal symmetry of the Standard Model, which should play a role at high energy, the dilaton would be the physical manifestation of the trace anomaly in the Standard Model, in analogy to the pion, which is interpolated by the $U(1)_A$ chiral current and the corresponding $\langle AVV \rangle$ (axial-vector/vector/vector) interaction in QCD. As in the $\langle AVV \rangle$ case, this composite state should be identified with the anomaly pole of the related anomaly correlator (the $\langle TVV \rangle$ diagram, with $T$ the energy momentum tensor (EMT)), at least at the level of the 1-particle irreducible (1PI) anomaly effective action \cite{Coriano:2012nm}. Considerations of this nature bring us to the conclusion that the effective massless Nambu-Goldstone (NG) modes which should appear as a result of the existence of global anomalies should be looked for in specific perturbative form factors under special kinematical limits. For this reason they are easier to investigate in the on-shell anomaly effective action, with a single mass parameter which drives the conformal/superconformal deformation. This action has the advantage of being gauge invariant and easier to compute than its off-shell relative. To fully exploit the analogy between chiral and conformal anomalies, one should turn to supersymmetry, where the correlation between poles and anomalies should be more direct. In fact, in an ordinary quantum field theory, the $\langle TVV \rangle$ diagram (and the corresponding anomaly action) is characterized, as we are going to show, by pole structures both in those form factors which multiply tensors that contribute to the trace anomaly and in those which do not. For this reason we turn our attention to the effective action of the superconformal (the Ferrara-Zumino, FZ) multiplet, where chiral and conformal anomalies share similar signatures, being part of the same multiplet. Therefore one would expect that supersymmetry should help in clarifying the significance of these singularities in the effective action. \\ We are going to prove rigorously in perturbation theory that the anomaly of the FZ multiplet is associated with the exchange of three composite states in the 1PI superconformal anomaly action. These have been discussed in the past, in the context of the spontaneous breaking of the superconformal symmetry \cite{Dudas:1993mm}. They are identified with the anomaly poles present in the effective action, extracted from a supersymmetric correlator containing the superconformal hypercurrent and two vector currents, and correspond to the dilaton, the dilatino and the axion. This exchange is identified by a direct analysis of the anomalous correlators in perturbation theory or by the study of the flow of their spectral densities under massive deformations. 
The flow describes a 1-parameter family of spectral densities - one family for each component of the correlator - which satisfy mass independent sum rules, and are, therefore, independent of the superpotential. This behaviour turns a dispersive cut of the spectral density $\rho(s,m^2)$ into a pole (i.e. a $\delta(s)$ contribution) as the deformation parameter $m$ goes to zero. Moreover, denoting with $k^2$ the momentum square of the anomaly vertex, each of the spectral densities induces on the corresponding form factor a $1/k^2$ behaviour also at large $k^2$, as a consequence of the sum rule. \\ We also recall that the partnership between dilatons and axions is not new in the context of anomalies, and it has been studied in the past - for abelian gauge anomalies - in the case of the supersymmetric St\"uckelberg multiplet \cite{Kors:2004ri, Coriano:2008xa, Coriano:2008aw, Coriano:2010ws}. \\ The three states associated to the three anomaly poles mentioned above, are described - in the perturbative picture - by the exchange of two collinear particles. These are a fermion/antifermion pair in the axion case, a fermion/antifermion pair and a pair of scalar particles in the dilaton case, and a collinear scalar/fermion pair for the dilatino. The Konishi current will be shown to follow an identical pattern and allows the identification of extra states, one for each fermion flavour present in the theory. \\ This pattern appears to be general in the context of anomalies, and unique in the case of supersymmetry. In fact, we are going to show that in a supersymmetric theory anomaly correlators have a single pole in each component of the anomaly multiplet, a single spectral flow and a single sum rule, proving the existence of a one-to-one correspondence between anomalies and poles in these correlators. \section{Theoretical framework} In this section we review the definition and some basic properties of the Ferrara-Zumino supercurrent multiplet, which from now on we will denote also as the \emph{hypercurrent}, in order to distinguish it from its fermionic component, usually called the {\em supercurrent}. \\ We consider a $\mathcal N=1$ supersymmetric Yang-Mills theory with a chiral supermultiplet in the matter sector. In the superfield formalism the action is given by \begin{eqnarray} \label{SUSYactionSF} S = \left( \frac{1}{16 g^2 T(R)} \int d^4 x \, d^2 \theta \, \textrm{Tr} W^2 + h.c. \right) + \int d^4 x \, d^4 \theta \, \bar \Phi e^V \Phi + \left( \int d^4 x \, d^2 \theta \, \mathcal W(\Phi) +h.c. \right) \end{eqnarray} where the supersymmetric field strength $W_A$ and gauge vector field $V$ are contracted with the hermitian generators $T^a$ of the gauge group to which the chiral superfield $\Phi$ belongs. In particular \begin{eqnarray} V =2 g V^a T^a \,, \qquad \mbox{and} \qquad W_A = 2 g W^a_A T^a = -\frac{1}{4} \bar D^2 e^{- V} D_A \, e^V \,, \end{eqnarray} with $\textrm{Tr} \, T^a T^b = T(R) \delta^{ab}$. 
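For orientation, and assuming the standard normalization of the fundamental generators, the Dynkin indices appearing in this normalization take, for a gauge group $SU(N)$, the values \begin{equation} T(\textrm{fund})=\frac{1}{2}\,, \qquad T(A)=N\,, \end{equation} which may be kept in mind when comparing the chiral and vector multiplet contributions below.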
\\ In order to clarify our conventions we give the component expansion of the chiral superfield $\Phi$ \begin{eqnarray} \label{PHIexpansion} \Phi_i = \phi_i + \sqrt{2} \theta \chi_i + \theta^2 F_i \,, \end{eqnarray} and of the superfields $W_A^a$ and $V^a$ in the Wess-Zumino gauge \begin{eqnarray} \label{GAUGEexpansion} W^a_A &=& \lambda^a_A +\theta_A \, D^a - (\sigma^{\mu \nu} \theta)_A F^a_{\mu\nu} + i \theta^2 \, \sigma^{\mu}_{ A \dot B} \mathcal D_\mu \bar \lambda^{a \, \dot B} \,, \\ V^a &=& \theta \sigma^\mu \bar \theta A^a_\mu + \theta^2 \bar \theta \bar \lambda^a + \bar \theta^2 \theta \lambda^a + \frac{1}{2} \theta^2 \bar \theta^2 \left( D^a + i \partial_\mu A^{a \, \mu} \right) \,, \end{eqnarray} where $\phi_i$ is a complex scalar and $\chi_i$ its superpartner, a left-handed Weyl fermion, $A^a_\mu$ and $\lambda^a$ are the gauge vector field and the gaugino respectively, $F^a_{\mu\nu}$ is the gauge field strength while $F_i$ and $D^a$ correspond to the $F$- and $D$-terms. Moreover, we have defined $\sigma^{\mu\nu}=(i/4)(\sigma^\mu \bar \sigma^\nu -\sigma^\nu \bar \sigma^\mu )$. \\ Using the component expansions introduced in Eq.(\ref{PHIexpansion}) and (\ref{GAUGEexpansion}) we obtain the supersymmetric lagrangian in the component formalism, which we report for convenience \begin{eqnarray} \label{SUSYlagrangianCF} \mathcal L &=& - \frac{1}{4} F^a_{\mu\nu} F^{a \, \mu\nu} + i \lambda^a \sigma^\mu \mathcal D_\mu^{ab} \bar \lambda^b + ( \mathcal D_{ij}^\mu \phi_j )^\dag (\mathcal D_{ik \, \mu} \phi_k) + i \chi_j \sigma_\mu \mathcal D_{ij}^{\mu \, \dag} \bar \chi_i \nonumber \\ && - \sqrt{2} g \left( \bar \lambda^a \bar \chi_i T^a_{i j} \phi_j + \phi_i^\dag T^a_{ij} \lambda^a \chi_j \right) - V(\phi, \phi^\dag) - \frac{1}{2} \left( \chi_i \chi_j \mathcal W_{ij}(\phi) + h.c. \right) \,, \end{eqnarray} where the gauge covariant derivatives on the matter fields and on the gaugino are defined respectively as \begin{eqnarray} \mathcal D^\mu_{ij} = \delta_{ij} \partial^\mu + i g A^{a \, \mu} T^a_{ij} \,, \qquad \mathcal D_\mu^{ac} = \delta^{ac} \partial^\mu -g \, t^{abc} A^b_\mu \,, \end{eqnarray} with $t^{abc}$ the structure constants of the adjoint representation, and the scalar potential is given by \begin{eqnarray} V(\phi, \phi^\dag) = \mathcal W^\dag_i(\phi^\dag) \mathcal W_i(\phi) + \frac{1}{2} g^2 \left( \phi_i^\dag T^a_{ij} \phi_j \right)^2 \,. \end{eqnarray} For the derivatives of the superpotential we have used the following definitions \begin{eqnarray} \mathcal W_i(\phi) = \frac{\partial \mathcal W(\Phi)}{\partial \Phi_i} \bigg| \,, \qquad \mathcal W_{ij}(\phi) = \frac{\partial^2 \mathcal W(\Phi)}{\partial \Phi_i \partial \Phi_j} \bigg| \,, \end{eqnarray} where the symbol $|$ on the right indicates that the quantity is evaluated at $\theta = \bar \theta = 0$. \\ Notice that in the above equations the $F$- and $D$-terms have been removed by exploiting their equations of motion.
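As a simple illustration of these definitions, consider a single chiral superfield with a quadratic and a cubic term in the superpotential (the couplings $m$ and $\lambda$ are introduced here only for the purpose of this example): \begin{equation} \mathcal W(\Phi)=\frac{m}{2}\,\Phi^2+\frac{\lambda}{3}\,\Phi^3 \qquad \Longrightarrow \qquad \mathcal W_1(\phi)= m\,\phi+\lambda\,\phi^2\,, \qquad \mathcal W_{11}(\phi)= m+2\,\lambda\,\phi\,. \end{equation} The quadratic (mass) term is precisely the kind of deformation which, as discussed below, breaks the classical conformal symmetry explicitly, while the cubic term preserves it.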
Having defined the model, we can introduce the Ferrara-Zumino hypercurrent \begin{eqnarray} \label{Hypercurrent} \mathcal J_{A \dot A} = \textrm{Tr}\left[ \bar W_{\dot A} e^V W_A e^{- V}\right] - \frac{1}{3} \bar \Phi \left[ \stackrel{\leftarrow}{\bar \nabla}_{\dot A} e^V \nabla_A - e^V \bar D_{\dot A} \nabla_A + \stackrel{\leftarrow}{\bar \nabla}_{\dot A} \stackrel{\leftarrow}{D_A} e^V \right] \Phi \,, \end{eqnarray} where $\nabla_A$ is the gauge-covariant derivative in the superfield formalism whose action on chiral superfields is given by \begin{eqnarray} \nabla_A \Phi = e^{-V} D_A \left( e^V \Phi \right)\,, \qquad \bar \nabla_{\dot A} \bar \Phi = e^{V} \bar D_{\dot A} \left( e^{-V} \bar \Phi \right)\,. \end{eqnarray} The conservation equation for the hypercurrent $\mathcal J_{A \dot A}$ is \begin{eqnarray} \label{HyperAnomaly} \bar D^{\dot A} \mathcal J_{A \dot A} = \frac{2}{3} D_A \left[ - \frac{g^2}{16 \pi^2} \left( 3 T(A) - T(R)\right) \textrm{Tr}W^2 - \frac{1}{8} \gamma \, \bar D^2 (\bar \Phi e^V \Phi)+ \left( 3 \mathcal W(\Phi) - \Phi \frac{\partial \mathcal W(\Phi)}{\partial \Phi} \right) \right] \,, \end{eqnarray} where $\gamma$ is the anomalous dimension of the chiral superfield. \\ The first two terms in Eq. (\ref{HyperAnomaly}) describe the quantum anomaly of the hypercurrent, while the last is of classical origin and it is entirely given by the superpotential. In particular, for a classical scale invariant theory, in which $\mathcal W$ is cubic in the superfields or identically zero, this term identically vanishes. If, on the other hand, the superpotential is quadratic the conservation equation of the hypercurrent acquires a non-zero contribution even at classical level. This describes the explicit breaking of the conformal symmetry. We can now project the hypercurrent $\mathcal J_{A \dot A}$ defined in Eq.(\ref{Hypercurrent}) onto its components. The lowest component is given by the $R^\mu$ current, the $\theta$ term is associated with the supercurrent $S^\mu_A$, while the $\theta \bar \theta$ component contains the energy-momentum tensor $T^{\mu\nu}$. In the $\mathcal N=1$ super Yang-Mills theory described by the Lagrangian in Eq. (\ref{SUSYlagrangianCF}), these three currents are defined as \begin{eqnarray} \label{Rcurrent} R^\mu &=& \bar \lambda^a \bar \sigma^\mu \lambda^a + \frac{1}{3} \left( - \bar \chi_i \bar \sigma^\mu \chi_i + 2 i \phi_i^\dag \mathcal D^\mu_{ij} \phi_j - 2 i (\mathcal D^\mu_{ij} \phi_j)^\dag \phi_i \right) \,, \\ \label{Scurrent} S^\mu_A &=& i (\sigma^{\nu \rho} \sigma^\mu \bar \lambda^a)_A F^a_{\nu\rho} - \sqrt{2} ( \sigma_\nu \bar \sigma^\mu \chi_i)_A (\mathcal D^{\nu}_{ij} \phi_j)^\dag - i \sqrt{2} (\sigma^\mu \bar \chi_i) \mathcal W_i^\dag(\phi^\dag) \nonumber \\ &-& i g (\phi^\dag_i T^a_{ij} \phi_j) (\sigma^\mu \bar \lambda^a)_A + S^\mu_{I \, A}\,, \\ \label{EMT} T^{\mu\nu} &=& - F^{a \, \mu \rho} {F^{a \, \nu}}_\rho + \frac{i}{4} \left[ \bar \lambda^a \bar \sigma^\mu (\delta^{ac} \stackrel{\rightarrow}{\partial^\nu} - g \, t^{abc} A^{b \, \nu} ) \lambda^c + \bar \lambda^a \bar \sigma^\mu (- \delta^{ac} \stackrel{\leftarrow}{\partial^\nu} - g \, t^{abc} A^{b \, \nu} ) \lambda^c + (\mu \leftrightarrow \nu) \right] \nonumber \\ &+& ( \mathcal D_{ij}^\mu \phi_j )^\dag (\mathcal D_{ik}^\nu \phi_k) + ( \mathcal D_{ij}^\nu \phi_j )^\dag (\mathcal D_{ik}^\mu \phi_k) + \frac{i}{4} \left[ \bar \chi_i \bar \sigma^\mu ( \delta_{ij} \stackrel{\rightarrow}{\partial^\nu} + i g T^a_{ij} A^{a \, \nu} ) \chi_j \right. \nonumber \\ &+& \left. 
\bar \chi_i \bar \sigma^\mu ( - \delta_{ij} \stackrel{\leftarrow}{\partial^\nu} + i g T^a_{ij} A^{a \, \nu} ) \chi_j + (\mu \leftrightarrow \nu) \right] - \eta^{\mu\nu} \mathcal L + T^{\mu\nu}_I \,, \end{eqnarray} where $\mathcal L$ is given in Eq.(\ref{SUSYlagrangianCF}) and $S^\mu_I$ and $T^{\mu\nu}_I$ are the terms of improvement in $d=4$ of the supercurrent and of the EMT respectively. As in the non-supersymmetric case, these terms are necessary only for a scalar field and, therefore, receive contributions only from the chiral multiplet. They are explicitly given by \begin{eqnarray} S^\mu_{I \, A} &=& \frac{4 \sqrt{2}}{3} i \left[ \sigma^{\mu\nu} \partial_\nu (\chi_i \phi_i^\dag) \right]_A \,, \\ T^{\mu\nu}_I &=& \frac{1}{3} \left( \eta^{\mu \nu} \partial^2 - \partial^\mu \partial^\nu \right) \phi^\dag_i \phi_i \,. \end{eqnarray} The terms of improvement are automatically conserved and guarantee, for $\mathcal W(\Phi) = 0$, upon using the equations of motion, the vanishing of the classical trace of $T^{\mu\nu}$ and of the classical gamma-trace of the supercurrent $S^\mu_A$. % The anomaly equations in the component formalism, which can be projected out from Eq. (\ref{HyperAnomaly}), are \begin{eqnarray} \label{AnomalyR} \partial_\mu R^\mu &=& \frac{g^2}{16 \pi^2} \left( T(A) - \frac{1}{3} T(R) \right) F^{a \, \mu\nu} \tilde F^a_{\mu\nu} \,, \\ \label{AnomalyS} \bar \sigma_\mu S^\mu_A &=& - i \frac{3 \, g^2}{8 \pi^2} \left( T(A) -\frac{1}{3} T(R) \right) \left( \bar \lambda^a \bar \sigma^{\mu\nu} \right)_A F^a_{\mu\nu }\,, \\ \label{AnomalyT} \eta_{\mu\nu} T^{\mu\nu} &=& - \frac{3 \, g^2}{32 \pi^2} \left(T(A) - \frac{1}{3} T(R) \right) F^{a \, \mu\nu} F^a_{\mu\nu} \,. \end{eqnarray} The first and the last equations are respectively extracted from the imaginary and the real part of the $\theta$ component of Eq.(\ref{HyperAnomaly}), while the gamma-trace of the supercurrent comes from the lowest component. \section{The perturbative expansion in the component formalism} In this section we will present the one-loop perturbative analysis of the one-particle irreducible correlators, built with a single current insertion contributing - at leading order in the gauge coupling constant - to the anomaly equations previously discussed. \\ We define the three correlation functions, $\Gamma_{(R)}$, $\Gamma_{(S)}$ and $\Gamma_{(T)}$ as \begin{eqnarray} \label{RSTCorrelators} \delta^{ab} \, \Gamma_{(R)}^{\mu\alpha\beta}(p,q) &\equiv& \langle R^{\mu}(k)\, A^{a \, \alpha}(p) \, A^{b \, \beta}(q) \rangle \qquad \langle RVV \rangle \,, \nonumber \\ \delta^{ab} \, \Gamma_{(S) \, A\dot B}^{\mu\alpha}(p,q) &\equiv& \langle S^{\mu}_A (k) \, A^{a \, \alpha}(p) \, \bar \lambda^b_{\dot B}(q) \rangle \qquad \langle SVF \rangle \,, \nonumber \\ \delta^{ab} \, \Gamma_{(T)}^{\mu\nu\alpha\beta}(p,q) &\equiv& \langle T^{\mu\nu}(k) \, A^{a \, \alpha}(p) \, A^{b \, \beta}(q)\rangle \qquad \langle TVV \rangle \,, \end{eqnarray} with $k = p+q$ and where we have factorized, for the sake of simplicity, the Kronecker delta on the adjoint indices. These correlation functions have been computed at one-loop order in the dimensional reduction scheme (DRed). The Feynman rules used for the computation are given in \cite{CCDS}. We recall that in this scheme the tensor and scalar loop integrals are computed in the analytically continued spacetime while the sigma algebra is restricted to four dimensions. \\ We will present the results for the matter chiral and gauge vector multiplets separately, for on-shell gauge external lines. 
The one-particle irreducible correlation functions of the Ferrara-Zumino multiplet are ultraviolet (UV) divergent, as one can see from a direct computation, and we need a suitable renormalization procedure in order to get finite results. In particular we have explicitly checked that, at one-loop order, among the three correlators defined in Eq. (\ref{RSTCorrelators}), only those with $S^\mu_A$ and $T^{\mu\nu}$ require a UV counterterm. The renormalization of the correlation functions is guaranteed by replacing the bare operators in Eq. (\ref{Scurrent}) and Eq. (\ref{EMT}) with their renormalized counterparts. This introduces the renormalized parameters and wave-function renormalization constants which are fixed by some conditions that specify the renormalization scheme. In particular, for the correlation functions we are interested in, the bare $S^\mu_A$ and $T^{\mu\nu}$ current become \begin{eqnarray} \label{RenormalizedST} S^\mu_A &=& i Z_\lambda^{1/2} Z_A^{1/2}(\sigma^{\nu \rho} \sigma^\mu \bar \lambda^a_R)_A F^a_{R \, \nu\rho} + \ldots \,, \nonumber \\ T^{\mu\nu} &=& Z_A \left( - F_R^{a \, \mu \rho} {F^{a \, \nu}}_{R \, \rho} + \frac{1}{4} \eta^{\mu\nu} F_R^{a \, \rho \sigma} F^a_{R \, \rho \sigma} \right) + \ldots \,, \end{eqnarray} where the suffix $R$ denotes renormalized quantities. $Z_A$ and $Z_\lambda$ are the wave-function renormalization constants of the gauge and gaugino field respectively, while the ellipses stand for all the remaining operators. In the previous equations we have explicitly shown only the contributions from which, at one-loop order, we can extract the counterterms needed to renormalize our correlation functions. All the other terms, not shown, play a role at higher perturbative orders.\\ Expanding the wave-function renormalization constants at one-loop as $Z = 1 + \delta Z$ we obtain the vertices of the counterterms \begin{eqnarray} \label{counterterms} \delta[S^{\mu}_A(k) A^{a \, \alpha}(p) \bar \lambda^b_{\dot B}(q)] &=& \left( \delta Z_A + \delta Z_\lambda \right) \, p_\rho \, \left( \sigma^{\alpha \rho} \sigma^{\mu} \right)_{A \dot B} \,, \nonumber \\ \delta[T^{\mu\nu}(k)A^{a \, \alpha}(p) A^{b \, \beta}(q)] &=& \delta Z_A \, \delta^{ab} \left\{ p \cdot q \, C^{\mu\nu\alpha\beta} + D^{\mu\nu\alpha\beta}(p,q) \right\} \,, \end{eqnarray} with $p$ and $q$ outgoing momenta. The $\delta Z$ counterterms can be defined, for instance, by requiring a unit residue of the full two-point functions on the physical particle poles. This implies that \begin{eqnarray} \delta Z_A = - \frac{\partial}{\partial p^2} \Sigma^{(AA)}(p^2) \bigg|_{p^2 = 0} \qquad \mbox{and} \qquad \delta Z_\lambda = - \Sigma^{(\lambda \bar \lambda)}(0) \,, \end{eqnarray} where the one-loop corrections to the gauge and gaugino two-point functions are defined as \begin{eqnarray} \Gamma^{(AA)}_{\mu\nu}(p) &=& - i \delta^{ab} \left( \eta_{\mu\nu} - \frac{p_\mu p_\nu}{p^2} \right) \Sigma^{(AA)}(p^2) \,, \\ \Gamma^{(\lambda \bar \lambda)}_{A \dot B}(p) &=& i \delta^{ab} \, p_{\mu} \sigma^{\mu}_{A \dot B} \, \Sigma^{(\lambda \bar \lambda)}(p^2) \,, \end{eqnarray} with \begin{eqnarray} \Sigma^{(AA)}(p^2) &=& \frac{g^2}{16 \pi^2} p^2 \left\{ T(R) \, \mathcal B_0(p^2,m^2) - T(A) \, \mathcal B_0(p^2,0) \right\} \,, \\ \Sigma^{(\lambda \bar \lambda)}(p^2) &=& \frac{g^2}{16 \pi^2} \left\{ T(R) \, \mathcal B_0(p^2,m^2) + T(A) \, \mathcal B_0(p^2,0)\right\} \,. 
\end{eqnarray} Using the previous expressions we can easily compute the wave-function renormalization constants \begin{eqnarray} \delta Z_A &=& - \frac{g^2}{16 \pi^2} \left\{ T(R) \, \mathcal B_0(0,m^2) - T(A) \, \mathcal B_0(0,0) \right\} \,, \nonumber \\ \delta Z_\lambda &=& - \frac{g^2}{16 \pi^2} \left\{ T(R) \, \mathcal B_0(0,m^2) + T(A) \, \mathcal B_0(0,0)\right\} \,, \end{eqnarray} and therefore obtain the one-loop counterterms needed to renormalize our correlators. In the following we will always present results for the renormalized correlation functions. \\ It is interesting to observe that, according to Eq. (\ref{counterterms}), the one-loop counterterm to the supercurrent correlation function is identically zero for the vector gauge multiplet, due to a cancellation between $\delta Z_A$ and $\delta Z_\lambda$: their $T(A)$ parts, the only ones relevant for the vector multiplet, are equal and opposite. Therefore we expect a finite result for the vector supermultiplet contribution to $\Gamma^{\mu\alpha}_{(S)}$. Indeed this is the case, as we will show below.\\ The correctness of our computations is ensured by the check of several Ward identities. These arise from gauge invariance and from the conservation of the energy-momentum tensor and of the supercurrent. In particular, for the three point correlators defined above, we have \begin{eqnarray} \label{VectorWI} && p_\alpha \, \Gamma_{(R)}^{\mu\alpha\beta}(p, q) = 0 \,, \qquad \qquad q_\beta \, \Gamma_{(R)}^{\mu\alpha\beta}(p, q) = 0 \,, \nonumber \\ && p_\alpha \, \Gamma_{(S)}^{\mu\alpha}(p, q) = 0 \,, \nonumber \\ && p_\alpha \, \Gamma_{(T)}^{\mu\nu\alpha\beta}(p, q) = 0 \,, \qquad \qquad q_\beta \, \Gamma_{(T)}^{\mu\nu\alpha\beta}(p, q) = 0 \,, \end{eqnarray} which follow from the conservation of the vector current, and \begin{eqnarray} \label{TensorWI} i \, k_\mu \, \Gamma_{(S)}^{\mu\alpha}(p, q) &=& - 2 p_\mu \, \sigma^{\mu \alpha} \hat \Gamma_{(\lambda \bar \lambda)}(q) - i \sigma_\mu \hat \Gamma^{\mu \alpha}_{(AA)}(p)\,, \nonumber \\ i \, k_\mu \, \Gamma_{(T)}^{\mu\nu\alpha\beta}(p, q) &=& q_\mu \hat \Gamma^{\alpha \mu}_{(AA)}(p) \eta^{\beta \nu} + p_\mu \hat \Gamma^{\beta \mu}_{(AA)}(q) \eta^{\alpha \nu} - q^\nu \hat \Gamma^{\alpha \beta}_{(AA)}(p) - p^\nu \hat \Gamma^{\alpha \beta}_{(AA)}(q) \,, \end{eqnarray} which follow from the conservation of the supercurrent and of the EMT, where $\hat \Gamma_{(AA)}$ and $\hat \Gamma_{(\lambda \bar \lambda)}$ are the renormalized self-energies. Their derivation follows closely the analysis presented in \cite{Armillis:2010qk}. Notice that, for on-shell gauge and gaugino external lines, the two identities in Eq. (\ref{TensorWI}) simplify considerably because their right-hand sides vanish identically. \section{The supercorrelator in the on-shell and massless case} In this section we discuss the explicit results of the computation of the supercorrelator when the components of the external vector supercurrents are on-shell and the superpotential of the chiral multiplet is absent. We will consider first the contributions due to the exchange of the chiral multiplet, followed by a subsection in which we address the exchange of a virtual vector multiplet. \subsection{The chiral multiplet contribution} We start from the chiral multiplet, presenting the result of the computation for massless fields and on-shell gauge and gaugino external lines.
\begin{figure}[t] \centering \subfigure[]{\includegraphics[scale=0.5]{plots/rchiralf1.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=0.5]{plots/rchiralf2.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=0.5]{plots/rchirals1.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=0.5]{plots/rchirals2.pdf}} \\ \subfigure[]{\includegraphics[scale=0.5]{plots/rchirals3.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=0.5]{plots/rchirals4.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=0.5]{plots/rchirals5.pdf}} \caption{The one-loop perturbative expansion of the $\langle RVV \rangle$ correlator with a massless chiral multiplet running in the loops. \label{Fig.Rchiral}} \end{figure} \begin{figure}[t] \centering \subfigure[]{\includegraphics[scale=0.5]{plots/schiral1.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=0.5]{plots/schiral2.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=0.5]{plots/schiral3.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=0.5]{plots/schiral4.pdf}} \caption{The one-loop perturbative expansion of the $\langle SVF \rangle$ correlator with a massless chiral multiplet running in the loops. \label{Fig.Schiral}} \end{figure} \begin{figure}[t] \centering \subfigure[]{\includegraphics[scale=0.5]{plots/tchiralf1.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=0.5]{plots/tchiralf2.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=0.5]{plots/tchiralf3.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=0.5]{plots/tchiralf4.pdf}} \\ \subfigure[]{\includegraphics[scale=0.5]{plots/tchirals1.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=0.5]{plots/tchirals2.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=0.5]{plots/tchirals3.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=0.5]{plots/tchirals4.pdf}} \\ \subfigure[]{\includegraphics[scale=0.5]{plots/tchirals5.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=0.5]{plots/tchirals6.pdf}} \hspace{.5cm} \caption{The one-loop perturbative expansion of the $\langle TVV \rangle$ correlator with a massless chiral multiplet running in the loops. The last diagram, being a massless tadpole, is identically zero in dimensional regularization. \label{Fig.Tchiral}} \end{figure} The diagrams defining the one-loop expansion of the $\Gamma_{(R)}$ correlator are shown in Fig.~(\ref{Fig.Rchiral}). They consist of triangle and bubble topologies with fermions, since the scalars do not contribute. The explicit result for a massless chiral multiplet with on-shell external gauge bosons is given by \begin{eqnarray} \label{RChiralOSMassless} \Gamma_{(R)}^{\mu\alpha\beta}(p,q) = - i \frac{g^2 \, T(R)}{12 \pi^2} \frac{k^\mu}{k^2} \varepsilon[p, q, \alpha ,\beta] \,. \end{eqnarray} The correlator in Eq.(\ref{RChiralOSMassless}) satisfies the vector current conservation constraints given in Eq.(\ref{VectorWI}) and the anomalous equation of Eq.(\ref{AnomalyR}) \begin{eqnarray} \label{AnomalyRmom} i k_\mu \, \Gamma_{(R)}^{\mu\alpha\beta}(p,q) = \frac{g^2 \, T(R)}{12 \pi^2} \, \varepsilon[p, q, \alpha ,\beta] \,. \end{eqnarray} The anomalous structure of Eq. (\ref{RChiralOSMassless}), characterized by a $1/k^2$ pole term, comes with no surprise, since in the on-shell case and for massless fermions (which are the only fields contributing to the $\langle RVV \rangle$ at this perturbative order) we recover the usual structure of the $\langle AVV \rangle$ diagram.
The perturbative expansion of the $\Gamma^{\mu\alpha}_{(S) \, A \dot B}$ correlation function is depicted in Fig.~(\ref{Fig.Schiral}). For simplicity we will remove, from now on, the spinorial indices from the corresponding expressions. The explicit result for a massless chiral supermultiplet with on-shell external gauge and gaugino lines is then given by \begin{eqnarray} \label{SChiralOSMassless} \Gamma^{\mu\alpha}_{(S)}(p,q) = - i \frac{g^2 T(R)}{6 \pi^2 \, k^2} s_1^{\mu\alpha} + i \frac{g^2 T(R)}{64 \pi^2} \Phi_2(k^2,0) \, s_2^{\mu\alpha} \,, \end{eqnarray} where the form factor $\Phi_2(k^2,0)$ is defined as \begin{eqnarray} \label{Phi2massless} \Phi_2(k^2,0) = 1 - \mathcal B_0(0,0) + \mathcal B_0(k^2,0) \,, \end{eqnarray} and the two tensor structures are \begin{eqnarray} s_1^{\mu\alpha} &=& \sigma^{\mu \nu} k_\nu \, \sigma^\rho k_\rho \, \bar \sigma^{\alpha \beta} p_\beta \,,\nonumber \\ s_2^{\mu\alpha} &=& 2 p_\beta \, \sigma^{\alpha \beta} \sigma^\mu \,. \end{eqnarray} The $\mathcal B_0$ function appearing in Eq.(\ref{Phi2massless}) is a two-point scalar integral defined in Appendix \ref{AppScalarIntegrals}. Notice that the form factor multiplying the second tensor structure $s_2$ is ultraviolet finite, due to the renormalization procedure, but has an infrared singularity inherited by the counterterms in Eq.~(\ref{counterterms}). \\ It is important to observe that the only pole contribution comes from the anomalous structure $s_1^{\mu\alpha}$, which shows that the origin of the anomaly has to be attributed to a unique fermionic pole ($\sigma^\rho k_\rho / k^2$) in the correlator, in the form factor multiplying $s_1^{\mu\alpha}$. It is easy to show that Eq.~(\ref{SChiralOSMassless}) satisfies the vector current and EMT conservation equations. Moreover, the anomalous equation reads as \begin{eqnarray} \label{AnomalySmom} \bar \sigma_{\mu} \, \Gamma^{\mu\alpha}_{(S)}(p,q) = \frac{g^2 T(R)}{ 4 \pi^2} \bar \sigma^{\alpha \beta} p_\beta \,, \end{eqnarray} where only the first tensor structure contributes to the $\sigma$-trace of the correlator. This result is clearly in agreement with Eq.(\ref{AnomalyS}), after Fourier transform $(\mathcal{F.T.})$ owing to \begin{eqnarray} \mathcal{F.T.} \left\{ \frac{i}{2} \frac{\delta^2 F_{\mu\nu} \bar \sigma^{\mu\nu} \bar \lambda }{\delta A_\alpha(x) \delta \bar \lambda (y)} \right\} = \bar \sigma^{\alpha \beta} p_\beta \,. \end{eqnarray} Notice also that \begin{eqnarray} \mathcal{F.T.} \left\{ \frac{\delta^2 S^\mu}{\delta A_\alpha(x) \delta \bar \lambda (y)} \right\} = s_{2}^{\mu\alpha} \,. \end{eqnarray} The diagrams appearing in the perturbative expansions of the $\Gamma_{(T)}$ are depicted in Fig.(\ref{Fig.Tchiral}). They consist of triangle and bubble topologies. There is also a tadpole-like contribution, Fig.(\ref{Fig.Tchiral}j), which is non-zero only in the massive case. 
\\ The explicit expression of the $\Gamma_{(T)}$ correlator for a massless chiral supermultiplet and on-shell gauge lines is given by \begin{eqnarray} \label{TChiralOSMassless} \Gamma_{(T)}^{\mu\nu\alpha\beta}(p,q) = - \frac{g^2 \, T(R)}{24 \pi^2 \, k^2} t_{1S}^{\mu\nu\alpha\beta}(p,q) + \frac{g^2 \, T(R)}{16 \pi^2} \Phi_2(k^2,0) \, t_{2S}^{\mu\nu\alpha\beta}(p,q) \,, \end{eqnarray} where $\Phi_2$ is defined in Eq.(\ref{Phi2massless}) and \begin{eqnarray} t_{1S}^{\mu\nu\alpha\beta}(p,q) &\equiv& \phi_1^{\mu\nu\alpha\beta}(p,q) = (\eta^{\mu\nu} k^2 - k^\mu k^\nu) u^{\alpha\beta}(p,q)\,, \\ t_{2S}^{\mu\nu\alpha\beta}(p,q) &\equiv& \phi_3^{\mu\nu\alpha\beta}(p,q) = (p^\mu q^\nu + p^\nu q^\mu) \eta^{\alpha\beta} + p \cdot q (\eta^{\alpha\nu} \eta^{\beta\mu} + \eta^{\alpha\mu} \eta^{\beta\nu}) - \eta^{\mu\nu} u^{\alpha \beta}(p,q) \nonumber \\ &-& (\eta^{\beta\nu}p^\mu + \eta^{\beta\mu}p^\nu)q^\alpha - (\eta^{\alpha\nu}q^\mu + \eta^{\alpha\mu}q^\nu)p^\beta\,, \end{eqnarray} where $\phi_1^{\mu\nu\alpha\beta}, \phi_3^{\mu\nu\alpha\beta}$ and $u^{\alpha\beta}$ are given in Eqs. (\ref{phitensors}) and (\ref{utensor}). As in the previous cases we have explicitly checked all the Ward identities originating from gauge invariance and conservation of the energy-momentum tensor. As one can easily verify by inspection, only the first one of the two tensor structures is traceful (indeed $\eta_{\mu\nu} \, t_{1S}^{\mu\nu\alpha\beta} = 3 \, k^2 \, u^{\alpha\beta}$, while $t_{2S}$ is traceless) and contributes to the anomaly equation of the $\Gamma_{(T)}$ correlator \begin{eqnarray} \label{AnomalyTmom} \eta_{\mu\nu} \, \Gamma_{(T)}^{\mu\nu\alpha\beta}(p,q) = - \frac{g^2 \, T(R)}{8 \pi^2} u^{\alpha\beta}(p,q) \,. \end{eqnarray} The comparison of Eq.(\ref{AnomalyTmom}) to Eq.(\ref{AnomalyT}) is evident if one recognizes that \begin{eqnarray} \mathcal{F.T.} \left\{ - \frac{1}{4} \frac{\delta^2 F_{\mu\nu}F^{\mu\nu}}{\delta A_\alpha(x) \delta A_\beta(y)} \right\} = u^{\alpha\beta}(p,q) \,. \end{eqnarray} For completeness we give also the inverse Fourier transform of $t_{2S}^{\mu\nu\alpha\beta}(p,q)$ which is obtained from \begin{eqnarray} \mathcal{F.T.} \left\{ \frac{\delta^2 T^{\mu\nu}_{gauge}}{\delta A_\alpha(x) \delta A_\beta(y)} \right\} = t_{2S}^{\mu\nu\alpha\beta}(p,q) \,, \end{eqnarray} where $T^{\mu\nu}_{gauge}$ is the pure gauge part of the energy-momentum tensor. Notice that $t_{2S}$ is nothing but the tree-level vertex with two on-shell gauge fields on the external lines. As in the previous subsection, concerning the supersymmetric current $S^\mu_A$, also for this correlator there is only one structure containing a pole term, which appears in the only form factor (the one multiplying $t_{1S}$) with a non-vanishing trace. Differently from the non supersymmetric case, such as in QED and QCD, with fermions or scalars running in the loops, as shown in Eqs.~(\ref{RVectorOS}), (\ref{SVectorOS}), and (\ref{TVectorOS}), there are {\em no extra poles} in the traceless structures of the decomposition of the correlators, proving that in a supersymmetric theory the signatures of all the anomalies in the $\langle \mathcal{J} \mathcal{V} \mathcal{V} \rangle$ correlator are due only to anomaly poles, one in each channel. \subsection{The vector multiplet contribution} Finally, we come to a discussion of the perturbative results for the vector (gauge) multiplet contributions to the three anomalous correlation functions presented in the previous sections. Notice that, due to the quantization of the gauge field, gauge-fixing and ghost terms must be taken into account, increasing the complexity of the computation.
This technical problem is completely circumvented with on-shell gauge boson and gaugino external lines, which is the case analyzed in this work. \\ Concerning the diagrammatic expansion, the topologies of the various contributions defining the three correlators are analogous to those illustrated in the massless chiral case. The explicit results are given by \begin{eqnarray} \label{RVectorOS} \Gamma_{(R)}^{\mu\alpha\beta}(p,q) &=& i \frac{g^2 \, T(A)}{4 \pi^2} \frac{k^\mu}{k^2} \varepsilon[p, q, \alpha ,\beta] \,, \\ \label{SVectorOS} \Gamma_{(S)}^{\mu\alpha}(p,q) &=& i \frac{g^2 T(A)}{2 \pi^2 \, k^2} s_1^{\mu\alpha} + i \frac{g^2 T(A)}{64 \pi^2} V(k^2) \, s_2^{\mu\alpha} \,, \\ \label{TVectorOS} \Gamma_{(T)}^{\mu\nu\alpha\beta}(p,q) &=& \frac{g^2 \, T(A)}{8 \pi^2 \, k^2} t_1^{\mu\nu\alpha\beta}(p,q) + \frac{g^2 \, T(A)}{16 \pi^2} V(k^2) \, t_{2}^{\mu\nu\alpha\beta}(p,q) \,, \end{eqnarray} where \begin{eqnarray} V(k^2) = -3 + 3 \, \mathcal B_0(0,0) - 3 \, \mathcal B_0(k^2,0) - 2 k^2 \, \mathcal C_0(k^2,0) \,. \end{eqnarray} The tensor expansion of the correlators is the same as in the previous cases. The only differences are in the form factors. In particular, the first in each of them is the only one responsible for the anomaly and is multiplied, with respect to the chiral case, by a factor $-3$ and by a different group factor. The result reproduces exactly the anomaly equations (\ref{AnomalyR}), (\ref{AnomalyS}) and (\ref{AnomalyT}). Concerning the ultraviolet divergences of these correlators, the explicit computation shows that the vector multiplet contribution to $\Gamma_{(S)}^{\mu\alpha}$ is indeed finite at one-loop order before any renormalization. This confirms a result obtained in the analysis of the renormalization properties of these correlators presented in a previous section, where the vanishing of the counterterm of $\Gamma^{\mu\alpha}_{(S)}$ for the vector multiplet was shown. Also for the vector multiplet, the result is similar, since the only anomaly poles present in the three correlators (\ref{RVectorOS}), (\ref{SVectorOS}) and (\ref{TVectorOS}) are those belonging to anomalous structures. We conclude that in all the cases discussed so far, the signature of an anomaly, in a superconformal theory, is an anomaly pole. \section{ The supercorrelator in the on-shell and massive case} We now extend our previous analysis to the case of a massive chiral multiplet. This will turn out to be extremely useful in order to discuss the general behaviour of the spectral densities away from the conformal point. \begin{figure}[t] \centering \subfigure[]{\includegraphics[scale=0.6]{plots/rchiralm1.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=0.6]{plots/schiralm2.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=0.6]{plots/tchiralm1.pdf}} \caption{A sample of diagrams for a massive chiral multiplet, with mass insertions on the fermion propagators. \label{Fig.Massivechiral}} \end{figure} The diagrammatic expansion of the three correlators for a massive chiral multiplet in the loops gets enlarged by a larger set of contributions characterized by mass insertions on the $S^\mu_A$ and $T^{\mu\nu}$ vertices and on the propagators of the Weyl fermions. A sample of these is shown in Fig. (\ref{Fig.Massivechiral}).
An explicit computation, in this case, gives \begin{eqnarray} \label{RChiralOSMassive} \Gamma_{(R)}^{\mu\alpha\beta}(p,q) &=& i \frac{g^2 \, T(R)}{12 \pi^2} \, \Phi_1(k^2,m^2) \, \frac{k^\mu}{k^2} \varepsilon[p, q, \alpha ,\beta] \,, \\ \label{SChiralOSMassive} \Gamma^{\mu\alpha}_{(S)}(p,q) &=& i \frac{g^2 T(R)}{6 \pi^2 \, k^2} \, \Phi_1(k^2,m^2) \, s_1^{\mu\alpha} + i \frac{g^2 T(R)}{64 \pi^2} \, \Phi_2(k^2,m^2) \, s_2^{\mu\alpha} \,, \\ \label{TChiralOSMassive} \Gamma_{(T)}^{\mu\nu\alpha\beta}(p,q) &=& \frac{g^2 \, T(R)}{24 \pi^2 \, k^2} \, \Phi_1(k^2,m^2) \, t_{1S}^{\mu\nu\alpha\beta}(p,q) + \frac{g^2 \, T(R)}{16 \pi^2} \, \Phi_2(k^2,m^2) \, t_{2S}^{\mu\nu\alpha\beta}(p,q) \,, \end{eqnarray} with \begin{eqnarray} \Phi_1(k^2,m^2) &=& - 1 - 2\, m^2 \, \mathcal C_0(k^2,m^2) \,, \nonumber \\ \Phi_2(k^2,m^2) &=& 1 - \mathcal B_0(0,m^2) + \mathcal B_0(k^2,m^2) + 2 m^2 \mathcal C_0(k^2,m^2) \,. \label{exp1} \end{eqnarray} The expressions above show that the only modification introduced by the mass corrections is in the form factors, while the tensor structure remains unchanged. In particular, since $\Phi_1(k^2,m^2) \to -1$ and $\Phi_2(k^2,m^2) \to \Phi_2(k^2,0)$ as $m \to 0$, the massless results of the previous section are smoothly recovered in this limit. \\ As we have previously discussed, if the superpotential is quadratic in the chiral superfield, the hypercurrent conservation equation develops a classical (non-anomalous) contribution describing the explicit breaking of the conformal symmetry. Therefore, in this case, the anomaly equations (\ref{AnomalyRmom}),(\ref{AnomalySmom}), and (\ref{AnomalyTmom}) must be modified in order to account for the mass dependence. The new conservation equations for a massive chiral supermultiplet become \begin{eqnarray} i k_\mu \, \Gamma^{\mu\alpha\beta}_{(R)}(p,q) &=& -\frac{g^2 T(R)}{12\pi^2} \Phi_1(k^2,m^2) \varepsilon[p,q,\alpha,\beta] \,, \\ \bar \sigma_{\mu} \, \Gamma^{\mu\alpha}_{(S)}(p,q) &=& - \frac{g^2 T(R)}{ 4 \pi^2} \Phi_1(k^2,m^2) \bar \sigma^{\alpha \beta} p_\beta \,, \\ \eta_{\mu\nu} \, \Gamma^{\mu\nu\alpha\beta}_{(T)}(p,q) &=& \frac{g^2 T(R)}{8\pi^2} \Phi_1(k^2,m^2) u^{\alpha\beta}(p,q) \,. \end{eqnarray} It is interesting to observe that supersymmetry prevents the appearance of new structures in the conservation equations, at least for these correlation functions, since the explicit classical breaking terms amount just to a correction of the anomaly coefficient. This is not the case for non-supersymmetric theories \cite{Giannotti:2008cv, Armillis:2009pq}. \section{Comparing supersymmetric and non supersymmetric cases: sum rules and extra poles in the Standard Model} In this section and in the following one, we compare the structure of the spectral densities between supersymmetric and non supersymmetric theories in the presence of mass terms, looking for the additional sum rules not directly related to the anomalies, which may be present in the $\langle TVV \rangle$ and $\langle AVV \rangle$ correlators. We anticipate that these are found in the $\langle TVV \rangle$ in the non supersymmetric case in all the gauge invariant sectors of the Standard Model. We start our analysis with the conformal anomaly action of QCD, described by the EMT-gluon-gluon vertex, and then move to the EMT-$\gamma\gamma$ vertex in the complete electroweak theory. Obviously, the spectral densities develop anomaly poles in the limit in which all the other scales of the vertices are sent to zero; by these we mean the fermion masses, the $W$ mass and the external virtualities. Moreover, we identify the explicit form of the sum rules satisfied in perturbation theory.
\subsection{The extra pole of QCD} For definiteness we focus our attention on a specific gauge theory, QCD. We write the whole amplitude $\Gamma^{\mu\nu\alpha\beta}(p,q)$ of the $\langle TVV \rangle$ diagram in QCD as \begin{eqnarray} \Gamma^{\mu\nu\alpha\beta}(p,q) = \Gamma_q^{\mu\nu\alpha\beta}(p,q) + \Gamma_g^{\mu\nu\alpha\beta}(p,q), \end{eqnarray} having separated the quark $(\Gamma_q)$ and the gluons/ghosts $(\Gamma_g)$ contributions. We have omitted the colour indices for simplicity, being the correlator diagonal in colour space. As described before in Section \ref{TVVsection} for the massless case, also in the massive case the amplitude $\Gamma$ is expressed in terms of 3 tensor structures. In the $\overline{MS}$ scheme these are given by \cite{Armillis:2010qk} \begin{equation} \Gamma^{\mu\nu\alpha\beta}_{q/g}(p,q) = \, \sum_{i=1}^{3} \Phi_{i\,q/g} (k^2,m^2)\, \phi_i^{\mu\nu\alpha\beta}(p,q)\,. \label{Gamt} \end{equation} For on-shell and transverse gluons, only 3 invariant amplitudes contribute, which for the quark loop case are given by \begin{eqnarray} \Phi_{1\, q} (k^2,m^2) &=& \frac{g^2}{6 \pi^2 k^2} \bigg\{ - \frac{1}{6} + \frac{ m^2}{k^2} - m^2 \mathcal C_0(k^2,m^2) \bigg[\frac{1}{2 \, }-\frac{2 m^2}{ k^2}\bigg] \bigg\} \,, \\ \Phi_{2\, q} (k^2,m^2) &=& - \frac{g^2}{4 \pi^2 k^2} \bigg\{ \frac{1}{72} + \frac{m^2}{6 k^2} + \frac{ m^2}{2 k^2} \mathcal D (k^2,m^2) + \frac{ m^2}{3 } \mathcal C_0(k^2,m^2 )\, \left[ \frac{1}{2} + \frac{m^2}{k^2}\right] \bigg\} \,, \\ \Phi_{3\,q} (k^2,m^2) &=& \frac{g^2}{4 \pi^2} \bigg\{ \frac{11}{72} + \frac{ m^2}{2 k^2} + m^2 \mathcal C_0(k^2,m^2) \,\left[ \frac{1}{2 } + \frac{m^2}{k^2}\right] + \frac{5 \, m^2}{6 k^2} \mathcal D (k^2,m^2) + \frac{1}{6} \mathcal B_0^{\overline{MS}}(k^2, m^2) \bigg\},\nonumber\\ \label{masslesslimit} \end{eqnarray} where the on-shell scalar integrals $\mathcal D (k^2,m^2)$, $\mathcal C_0(k^2, m^2)$ and $\mathcal B_0^{\overline{MS}}(k^2, m^2)$ are given in Appendix \ref{AppScalarIntegrals}. \\ Here we concentrate on the two form factors which are unaffected by renormalization, namely $\Phi_{1,2 q}$. Both admit convergent dispersive integrals of the form \begin{eqnarray} \Phi_{1,2 q}(k^2,m^2) &=& \frac{1}{\pi} \int_0^{\infty} ds \frac{\rho_{1,2 q}(s,m^2)}{s-k^2} \,, \end{eqnarray} in terms of spectral densities ${\rho_{1,2 q}(s,m^2)}$. From the explicit expressions of these two form factors, the corresponding spectral densities are obtained using the relations \begin{eqnarray} && \textrm{Disc}\left( \frac{1}{s^2}\right) = 2i\pi \delta'(s),\nonumber\\ && \textrm{Disc}\left( \frac{\mathcal C_0(s,m^2)}{s^2} \right)= - \frac{2i \pi}{s^3} \log \frac{1+\sqrt{\tau(s,m^2)}}{1-\sqrt{\tau(s,m^2)}}\theta(s-4 m^2) + i\pi \delta'(s) A(s), \label{disc1} \end{eqnarray} where $A(s)$ is defined in Eq.(\ref{As}) and we have used the general relation \begin{equation} \left( \frac{1}{x +i \epsilon}\right)^n - \left( \frac{1}{x -i \epsilon}\right)^n=(-1)^n \frac{2 \pi i}{(n-1)!}\delta^{(n-1)}(x) \,, \end{equation} with $\delta^{(n)}(x)$ the $n$-th derivative of the delta function. 
The contribution proportional to $\delta'(s)$ in Eq.(\ref{disc1}) can be rewritten in the form \begin{eqnarray} \delta'(s) A(s) = -\delta(s) A'(0)+ \delta'(s) A(0), \qquad \mbox{with} \quad A(0)=-\frac{1}{m^2} \,, \quad A'(0)=-\frac{1}{12 m^4} \,, \end{eqnarray} giving for the spectral densities \begin{eqnarray} \label{rhoq} \rho_{1q}(s,m^2) &=& \frac{g^2}{12 \pi} \frac{m^2}{s^2} \tau(s,m^2) \log \frac{1+\sqrt{\tau(s,m^2)}}{1-\sqrt{\tau(s,m^2)}} \theta(s-4m^2) \,, \nonumber \\ \rho_{2q}(s,m^2) &=& \frac{-g^2}{12 \pi} \left[ \frac{3 m^2}{2 s^2} \sqrt{\tau(s,m^2)} - \frac{m^2}{s} \left( \frac{1}{2 s} + \frac{m^2}{s^2} \right) \log \frac{1+\sqrt{\tau(s,m^2)}}{1-\sqrt{\tau(s,m^2)}} \right] \theta(s-4m^2) \,. \end{eqnarray} Both functions are characterized by a two particle cut starting at $4m^2$, with $m$ the quark mass. Notice also that in this case there is a cancellation of the localized contributions related to the $\delta(s)$, showing that for nonzero mass there are no pole terms in the dispersive integral. The crucial difference, with respect to the supersymmetric case discussed above, is that now we have two independent sum rules \begin{eqnarray} \frac{1}{\pi} \int_{0}^{\infty} ds \, \rho_{1 q}(s,m^2) = \frac{g^2}{36 \pi^2} \,, \qquad \qquad \frac{1}{\pi} \int_0^{\infty} ds \, \rho_{2q}(s,m^2) = \frac{g^2}{288 \pi^2} \,, \end{eqnarray} one for each form factor, as can be verified by a direct integration (a numerical cross-check is also sketched below). We can normalize both densities as \begin{equation} \bar{\rho}_{1 q}(s,m^2)\equiv \frac{36 \pi^2}{g^2}\rho_{1 q}(s,m^2) \,, \qquad \bar{\rho}_{2 q}(s,m^2) \equiv \frac{288 \pi^2}{g^2}\rho_{2 q}(s,m^2) \end{equation} in order to describe the two respective flows, which are homogeneous, since both densities carry the same physical dimension and both converge to a $\delta(s)$ as the quark mass $m$ is sent to zero \begin{equation} \lim_{m\to 0}\bar{ \rho}_{1 q}=\lim_{m\to 0}\bar{ \rho}_{2 q}=\delta(s). \end{equation} Indeed at $m=0$, $\Phi_{1,2 q}$ are just given by pole terms, while $\Phi_{3q}$ is logarithmic in momentum \begin{eqnarray} \Phi_{1\,q} (k^2,0) &=& - \frac{g^2}{36 \pi^2 k^2}, \qquad \Phi_{2\,q} (k^2,0) = - \frac{g^2}{288 \pi^2 \, k^2}, \\ \Phi_{3\,q} (k^2,0) &=& - \frac{g^2}{288 \pi^2} \, \left( 12 \log \left( -\frac{k^2}{\mu^2} \right ) - 35\right), \qquad \mbox{for} \quad k^2<0. \end{eqnarray} It is then clear, from this comparative analysis, that the supersymmetric and the non supersymmetric anomaly correlators can be easily differentiated with regard to their spectral behaviour. In the non supersymmetric case the spectral analysis of the $\langle TVV \rangle$ correlator shows the appearance of two flows, one of them anomalous, the other not. A similar pattern is found in the gluon sector, which obviously is not affected by the mass term. In this case the on-shell and transverse condition on the external gluons leads to three very simple form factors whose expressions are \begin{eqnarray} \Phi_{1\,g}(k^2) &=& \frac{11 \, g^2}{72 \pi^2 \, k^2} \, C_A \,, \qquad \Phi_{2\,g}(k^2) = \frac{g^2}{288 \pi^2 \, k^2} \, C_A \,, \\ \Phi_{3\,g}(k^2) &=& - \frac{g^2}{8 \pi^2} C_A \bigg[ \frac{65}{36} + \frac{11}{6} \mathcal{ B}_0^{\overline{MS}}(k^2,0) - \mathcal {B}_0^{\overline{MS}}(0,0) + k^2 \,\mathcal C_0(k^2,0) \bigg]. \label{gl2} \end{eqnarray} The $\overline{MS}$ renormalized scalar integrals can be found in Appendix \ref{AppScalarIntegrals}.
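Before moving on, as a quick numerical cross-check of the two quark-sector sum rules quoted above, the spectral densities of Eq. (\ref{rhoq}) can be integrated directly. The following minimal Python sketch (with the illustrative choice $g=m=1$ and assuming the standard definition $\tau(s,m^2)=1-4m^2/s$; the function names are ours and do not refer to any published code) reproduces $g^2/(36\pi^2)$ and $g^2/(288\pi^2)$ to the accuracy of the quadrature:
\begin{verbatim}
# Minimal numerical check of the sum rules for rho_{1q} and rho_{2q}
# (illustrative sketch: units g = m = 1, tau(s,m^2) = 1 - 4 m^2/s).
import numpy as np
from scipy.integrate import quad

g, m = 1.0, 1.0
m2 = m**2

def tau(s):
    return 1.0 - 4.0*m2/s

def L(s):
    t = np.sqrt(tau(s))
    return np.log((1.0 + t)/(1.0 - t))

def rho1q(s):
    # rho_{1q}(s, m^2) for s > 4 m^2
    return g**2/(12*np.pi) * m2/s**2 * tau(s) * L(s)

def rho2q(s):
    # rho_{2q}(s, m^2) for s > 4 m^2
    return -g**2/(12*np.pi) * (1.5*m2/s**2*np.sqrt(tau(s))
                               - m2/s*(0.5/s + m2/s**2)*L(s))

I1, _ = quad(rho1q, 4*m2, np.inf)
I2, _ = quad(rho2q, 4*m2, np.inf)
print(I1/np.pi, g**2/(36*np.pi**2))    # both ~ 2.8e-3
print(I2/np.pi, g**2/(288*np.pi**2))   # both ~ 3.5e-4
\end{verbatim}
The agreement does not depend on the value chosen for $m$, as expected from the mass independence of the sum rules.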
Also in this case, it is clear that the simple poles in $\Phi_{1\,g}$ and $\Phi_{2\,g}$, the two form factors which are not affected by the renormalization, are accounted for by two spectral densities which are proportional to $\delta(s)$. The anomaly pole in $\Phi_{1\,g}$ is accompanied by a second pole in the non anomalous form factor $\Phi_{2\,g}$. Notice that $\Phi_{3 g}$ is affected by renormalization, and as such it is not considered relevant in the spectral analysis. \subsection{$\langle TVV \rangle$ and the two spectral flows of the electroweak theory} \begin{figure}[t] \centering \includegraphics[scale=0.8]{plots/HVVImpr.pdf} \caption{Amplitude with the graviton--Higgs mixing vertex generated by the term of improvement. The blob represents the SM Higgs-$VV'$ vertex at one loop.} \label{HVVImpr} \end{figure} The point illustrated above can be extended to the entire electroweak theory by looking at some typical diagrams which manifest a trace anomaly. The simplest case is the $\langle TVV \rangle$ in the full electroweak theory, where $V$, in this case, denotes on-shell photons. At one loop level it is given by the vertex $\Gamma^{\mu\nu\alpha\beta}$ and expanded into two terms \begin{equation} \label{TVVew} \Gamma^{\mu\nu\alpha\beta}(p,q)=\Sigma^{\mu\nu\alpha\beta}(p,q) +\Delta^{\mu\nu\alpha\beta}(p,q)\,, \end{equation} where $\Sigma^{\mu\nu\alpha\beta}(p,q)$ is the fully irreducible contribution, corresponding to topologies of triangles, bubbles and tadpoles. In this case $\Sigma^{\mu\nu\alpha\beta}(p,q)$ is given by the expression \cite{Coriano:2011ti,Coriano:2011zk,CDS} \begin{eqnarray} \Sigma ^{\mu\nu\alpha\beta}(p,q) = \Sigma_{F}^{\mu\nu\alpha\beta}(p,q) + \Sigma_{B}^{\mu\nu\alpha\beta}(p,q) + \Sigma_{I}^{\mu\nu\alpha\beta}(p,q), \end{eqnarray} corresponding to the exchange of fermions ($\Sigma_{F}$), gauge bosons ($\Sigma_{B}$) and to a term of improvement $(\Sigma_{I})$. The latter is generated by an EMT of the form \begin{eqnarray} T^I_{\mu\nu} = - \frac{1}{3} \bigg[ \partial_{\mu} \partial_{\nu} - \eta_{\mu\nu} \, \Box \bigg] \mathcal H^\dag \mathcal H = - \frac{1}{3} \bigg[ \partial_{\mu} \partial_{\nu} - \eta_{\mu\nu} \, \Box \bigg] \bigg( \frac{H^2}{2} + \frac{\phi^2}{2} + \phi^{+}\phi^{-} + v \, H \bigg) \,, \end{eqnarray} and is responsible for a bilinear mixing between the EMT and the Higgs field. \\ The term $\Delta^{\mu\nu\alpha\beta}(p,q)$ in Eq.(\ref{TVVew}) comes from the insertion of the EMT of improvement given above, combined with the Standard Model $H\gamma\gamma$ vertex. The relevant diagram is reported in Fig. (\ref{HVVImpr}). The inclusion of this term is necessary, as follows from a careful analysis of the Ward identities, shown in \cite{Coriano:2011zk}. \\ The fully irreducible contributions are expanded as \begin{eqnarray} \Sigma ^{\mu\nu\alpha\beta}_{F}(p,q) &=& \, \sum_{i=1}^{3} \Phi_{i\,F} (s,0, 0,m_f^2) \, \phi_i^{\mu\nu\alpha\beta}(p,q)\,, \\ \Sigma ^{\mu\nu\alpha\beta}_{B}(p,q) &=& \, \sum_{i=1}^{3} \Phi_{i\,B} (s,0, 0,M_W^2) \, \phi_i^{\mu\nu\alpha\beta}(p,q)\,, \\ \Sigma ^{\mu\nu\alpha\beta}_{I}(p,q) &=& \Phi_{1\,I} (s,0, 0,M_W^2) \, \phi_1^{\mu\nu\alpha\beta}(p,q) + \Phi_{4\,I} (s,0, 0,M_W^2) \, \phi_4^{\mu\nu\alpha\beta}(p,q) \,, \end{eqnarray} with $s=k^2=(p+q)^2$, $\phi_i^{\mu\nu\alpha\beta}(p,q)$ given in Eq.
(\ref{phitensors}) and \begin{equation} \phi_4^{\mu\nu\alpha\beta}(p,q) = (s \, \eta^{\mu\nu} - k^{\mu}k^{\nu}) \, \eta^{\alpha\beta}, \end{equation} while the $\Delta$ term reads as \begin{eqnarray} \Delta^{\mu\nu\alpha\beta}(p,q) &=& \Delta^{\mu\nu\alpha\beta}_I (p,q) \nonumber \\ &=& \Psi_{1\, I} (s,0, 0,m_f^2,M_W^2,M_H^2) \, \phi_1^{\mu\nu\alpha\beta}(p,q) + \Psi_{4 \, I} (s,0, 0,M_W^2) \, \phi_4^{\mu\nu\alpha\beta}(p,q)\, . \label{DAA} \end{eqnarray} This is built by combining the tree level vertex for EMT/Higgs mixing, coming from the improved EMT, and the Standard Model $H\gamma\gamma$ correlator at one-loop.\\ The spectral densities of the fermion contributions, related to $\Sigma_F$, have a structure similar to those computed above in Eq. (\ref{rhoq}), with $\rho_{\Phi_{1F}} \sim \rho_{1 q}(s)$ and $\rho_{\Phi_{2F}} \sim \rho_{2 q} (s)$. Therefore we have two sum rules and two spectral flows also in this case, following the pattern discussed before for the spectral densities in Eq. (\ref{rhoq}).\\ A similar analysis on the two form factors $\Phi_B$ in the gauge boson sector gives \begin{equation} \rho_{\phi_{1 B}}(s)=\frac{ 2 M_W^2}{s^3} (2 M_W^2 - s)\, \alpha \log\left(\frac{1+ \sqrt{\tau(s,M_W^2)} }{1-\sqrt{\tau(s,M_W^2)} }\right)\theta(s- 4 M_W^2) \end{equation} while $\rho_{\phi_{2 B}}$ has the same functional form as $\rho_{\phi_{2 F}}$, modulo an overall factor, with $m$, the fermion mass, replaced by the $W$ mass $M_W$. Notice that both $\rho_{\phi_{1 B}}$ and $\rho_{\phi_{2 B}}$, as well as $\rho_{\phi_{1 F}}$ and $\rho_{\phi_{2 F}}$, are free of resonant contributions, since the corresponding diagrams are massive. \\ Coming to the form factors in $\Sigma_I$, one realizes that the spectral density of $\Phi_{1I}$ shares the same functional form as $\rho_\chi$, extracted from Eq. (\ref{spectralrho}), and there is clearly a sum rule associated with it. Also in this case, this result is accompanied by the $1/k^2$ behaviour of the corresponding form factor, due to the anomaly. \\ Finally, for the case of $\Psi_{1I}$, one can also show that the spectral density finds support only above the two particle cuts, which start at the thresholds $4 m^2$ and $4 M_W^2$. In this case there is no sum rule and the contribution is not affected by an anomaly pole, as expected, since the virtual loop corresponds to the $H\gamma\gamma$ vertex (see Fig. \ref{HVVImpr}). \subsection{The non-transverse $\langle AVV \rangle$ correlator } Before closing the analysis on the spectral densities of non supersymmetric theories, we pause for a few comments on the structure of the $\langle AVV \rangle$ diagram, which, as we are going to show, is affected by a single flow even if we do not impose the transversality condition on the two photons. We consider once more the anomaly vertex as parameterized in Eq. (\ref{anom1}), and consider the second form factor $A_{4+6}\equiv A_4 + A_6$, which contributes to the anomaly loop for non transverse (but on-shell) photons. The expression of $A_6$, the anomalous form factor, has been given in Eq. (\ref{a6}), while $A_4$ is given by \begin{equation} A_4(k^2,m^2)=-\frac{1}{2 \pi^2 k^2}\left[ 2 - \sqrt{\tau(k^2,m^2)}\log \frac{\sqrt{\tau(k^2,m^2)}+1}{\sqrt{\tau(k^2,m^2)}-1} \right], \qquad k^2<0 \label{a4} \end{equation} and $A_{4+6}$ takes the form \begin{equation} A_{4+6}(k^2,m^2)=\frac{1}{2\pi^2 k^2}\left[ -1 + \sqrt{\tau(k^2,m^2)} \log\frac{\sqrt{\tau(k^2,m^2)} + 1}{\sqrt{\tau(k^2,m^2)} - 1} + \frac{m^2}{k^2} \log^2 \frac{\sqrt{\tau(k^2,m^2)} + 1}{\sqrt{\tau(k^2,m^2)} - 1} \right].
\end{equation} Its discontinuity is given by \begin{equation} \textrm{Disc}\,A_{4+6}(k^2,m^2)= - 2 i \pi\left[ \frac{\sqrt{\tau(k^2,m^2)}}{k^2} + \frac{2 m^2}{(k^2)^2}\log \frac{\sqrt{\tau(k^2,m^2)}+1}{\sqrt{\tau(k^2,m^2)}-1} \right] \theta(k^2 - 4m^2). \end{equation} Notice that in this case there is no sum rule satisfied by this spectral density, since it is not integrable along the cut. Coming to the spectral density for the anomaly coefficient $A_6$, this is proportional to the density of $\chi(s,m^2)$ given in Eq. (\ref{spectralrho}) and shares the same behaviour found for $\rho_{\chi}(s,m^2)$, as expected. This analysis shows that in the $\langle AVV \rangle$ case one encounters a single sum rule and a single massive flow which degenerates into a $\delta(s)$ behaviour, as in the supersymmetric case. This condition remains valid also for non-transverse vector currents. It is then clear that the crucial difference between the non supersymmetric and the supersymmetric case is carried by the $\langle TVV \rangle$ diagram, due to the extra sum rule discussed above. \subsection{Cancellations in the supersymmetric case} In order to further clarify how the cancellation of the extra poles occurs for the supersymmetric $\langle TVV \rangle$, we consider the non-anomalous form factor $f_2$ in a general theory (given in Eqs. (\ref{FFfermions},\ref{FFscalars},\ref{FFgauge})), with $N_f$ Weyl fermions, $N_s$ complex scalars and $N_A$ gauge fields. We work, for simplicity, in the massless limit. In this case the non anomalous form factor $f_2$, which is affected by pole terms, after combining the scalar, fermion and gauge contributions, can be written in the form \begin{eqnarray} \label{F2total} f_2(k^2) &=& \frac{N_f}{2} f_2^{(f)}(k^2) + N_s \, f_2^{(s)}(k^2) + N_A \, f_2^{(A)}(k^2) \nonumber \\ &=& \frac{g^2}{144 \pi^2 \, k^2} \left[ - \frac{N_f}{2} T(R_f) + N_s \, \frac{T(R_s)}{2} + N_A \, \frac{T(A)}{2} \right] \,, \end{eqnarray} where the fermions give a negative contribution with respect to scalar and gauge fields. If we turn to a $\mathcal N=1$ Yang-Mills gauge theory, which is the theory that we are addressing, we need to consider in the anomaly diagrams the virtual exchanges both of a chiral and of a vector supermultiplet. In the first case the multiplet is built out of one Weyl fermion and one complex scalar, therefore in Eq.(\ref{F2total}) we have $N_f = 1, N_s = 1, N_A = 0$ with $T(R_f) = T(R_s)$. With this matter content the form factor vanishes. \\ For a vector multiplet, on the other hand, we have one vector field and one Weyl fermion, both belonging to the adjoint representation, and we obtain $N_f=1, N_s=0, N_A=1$ with $T(R_f) = T(A)$. Also in this case all the contributions in the $f_2$ form factor sum up to zero. It is then clear that the cancellation of the extra poles in the $\langle TVV \rangle$ is a specific feature of supersymmetric Yang-Mills theories, due to their matter content, not shared by an ordinary gauge theory. A corollary of this is that in a supersymmetric theory we have just one spectral flow driven by the deformation parameter $m$, accompanied by one sum rule for the entire deformation. \section{The anomaly effective action and the pole cancellations for $\mathcal{N}=4$ } The presence of poles in the effective action is associated either with fundamental fields in the defining Lagrangian or with the exchange of intermediate bound states. Here we present the quantum effective action obtained from the three-point correlation functions discussed previously.
We consider the massless case for the chiral supermultiplet and on-shell external gauge bosons and gauginos. The anomalous part is given by the three terms \begin{equation} S_{\textrm{anom}}= S_{\textrm{axion}} + S_{\textrm{dilatino}} + S_{\textrm{dilaton}} \end{equation} which are \begin{eqnarray} S_{\textrm{axion}}&=& - \frac{g^2}{4 \pi^2} \left( T(A) - \frac{T(R)}{3} \right) \int d^4 z \, d^4 x \, \partial^\mu B_\mu(z) \, \frac{1}{\Box_{zx}} \, \frac{1}{4} F_{\alpha\beta}(x)\tilde F^{\alpha\beta}(x) \\ S_{\textrm{dilatino}} &=& \frac{g^2}{2 \pi^2} \left(T(A) - \frac{T(R)}{3} \right) \int d^4 z \, d^4 x \bigg[ \partial_\nu \Psi_\mu(z) \sigma^{\mu\nu} \sigma^\rho \frac{\stackrel{\leftarrow}{\partial_\rho}}{\Box_{zx}} \, \bar \sigma^{\alpha\beta} \bar \lambda(x) \frac{1}{2} F_{\alpha\beta}(x) + h.c. \bigg] \\ S_{\textrm{dilaton}}&=&- \frac{g^2}{8 \pi^2} \left(T(A) - \frac{T(R)}{3} \right) \int d^4 z \, d^4 x \, \left(\Box h(z) - \partial^\mu \partial^\nu h_{\mu\nu}(z) \right) \, \frac{1}{\Box_{zx}} \,\frac{1}{4} F_{\alpha\beta}(x) F^{\alpha\beta}(x) \end{eqnarray} We show in Fig.~\ref{RST} the three types of intermediate states which interpolate between the Ferrara-Zumino hypercurrent and the gauge field $(A)$ and the gaugino $(\lambda)$ of the final state. The axion is identified by the collinear exchange of a bound fermion/antifermion pair in a pseudoscalar state, generated in the $\langle RVV \rangle$ correlator. In the case of the $\langle SVF \rangle$ correlator, the intermediate state is a collinear scalar/fermion pair, interpreted as a dilatino. In the $\langle TVV \rangle$ case, the collinear exchange is a linear combination of a fermion/antifermion and scalar/scalar pairs. The non-anomalous contribution is associated with the extra term $S_0$ which is given by \begin{eqnarray} S_0 &=& \frac{g^2}{16 \pi^2} \int d^4 z \, d^4 x \, h_{\mu\nu}(z) \left( T(R) \, \tilde \Phi_2(z-x) + T(A) \, \tilde V(z-x) \right) T^{\mu\nu}_{gauge}(x) \nonumber \\ &+& \frac{g^2}{64 \pi^2} \int d^4 z \, d^4 x \bigg[ i \, \Psi_\mu(z) \left( T(R) \, \tilde \Phi_2(z-x) + T(A) \, \tilde V(z-x) \right) S^{\mu}_{gauge}(x) + h.c. \bigg] \,, \end{eqnarray} where $\tilde \Phi_2(z-x)$ and $\tilde V(z-x)$ are the Fourier transforms of $\Phi_2(k^2,0)$ and $V(k^2)$ respectively. Their contributions in position space correspond to nonlocal logarithmic terms. \begin{figure}[t] \centering \subfigure{\includegraphics[scale=0.6]{plots/Rpole.pdf}} \hspace{.5cm} \subfigure{\includegraphics[scale=0.6]{plots/Spole.pdf}} \subfigure{\includegraphics[scale=0.6]{plots/Tpole.pdf}} \caption{The collinear diagrams corresponding to the exchange of a composite axion (top right), a dilatino (top left) and the two sectors of an intermediate dilaton (bottom). Dashed lines denote intermediate scalars.} \label{RST} \end{figure} The relation between anomaly poles, spectral density flows and sum rules appears to be a significant feature of supersymmetric theories affected by anomalies. It is then clear that supersymmetric anomaly-free theories should be free of such contributions in the anomaly effective action. In this respect, it is natural to turn to the $\mathcal N = 4$ theory, which is free of anomalies, in order to verify and validate this reasoning. Indeed the $\beta$ function of the gauge coupling constant in this theory has been shown to vanish up to three loops, and there are several arguments supporting its vanishing to all perturbative orders.
As a consequence, the anomaly coefficient in the trace of the energy-momentum tensor, being proportional to the $\beta$ function, must vanish identically, and the same occurs for the other anomalous components, related to the $R$ and to the $S$ currents in the Ferrara-Zumino supermultiplet. \\ We recall that in the $\mathcal{N}=4$ theory the spectrum contains a gauge field $A^\mu$, four complex fermions $\lambda^i$ ($i=1,2,3,4$) and six real scalars $\phi_{ij} = - \phi_{ji}$ $(i,j=1,2,3,4)$. All fields are in the adjoint representation of the gauge group. From the point of view of the $\mathcal N=1$ SYM, this theory can be interpreted as describing a vector and three massless chiral supermultiplets, all in the adjoint representation. Therefore the $\langle TVV \rangle$ correlator in $\mathcal N=4$ can be easily computed from the general expressions in Eqs. (\ref{TChiralOSMassless}) and (\ref{TVectorOS}) which give \begin{eqnarray} \label{residualV} \Gamma^{\mu\nu\alpha\beta}_{(T)}(p,q) = \frac{g^2 \, T(A)}{16 \pi^2} \left[ V(k^2) + 3 \Phi_2(k^2,0)\right] t_{2S}^{\mu\nu\alpha\beta}(p,q) = - \frac{g^2 \, T(A)}{8 \pi^2} k^2 \, \mathcal C_0(k^2,0) \, t_{2S}^{\mu\nu\alpha\beta}(p,q) \,. \label{gamma} \end{eqnarray} One can immediately observe from the expression above the vanishing of the anomalous form factor proportional to the traceful tensor structure $t_{1S}^{\mu\nu\alpha\beta}$. The partial contributions to the same form factor, which can be computed using Eqs. (\ref{TChiralOSMassless}) and (\ref{TVectorOS}) for the various components, are all affected by pole terms, but they add up to give a form factor whose residue at the pole is proportional to the $\beta$ function of the $\mathcal N=4$ theory. It is then clear that the vanishing of the conformal anomaly, via a vanishing $\beta$ function, is equivalent to the cancellation of the anomaly pole for the entire multiplet. \\ Notice also that the only surviving contribution in Eq. (\ref{gamma}), proportional to the traceless tensor structure $t_{2S}^{\mu\nu\alpha\beta}$, is finite. This is due to the various cancellations between the UV singular terms from $V(k^2)$ and $\Phi_2(k^2,0)$ which give a finite correlator without the necessity of any regularization. We recall that the cancellation of infinities and the renormalization procedure, as we have already seen in the $\mathcal N=1$ case, involves only the form factor of the tensor $t_{2S}^{\mu\nu\alpha\beta}$, which gets renormalized with a counterterm proportional to that of the two-point function $\langle AA \rangle$, and hence to the gauge coupling. For this reason the finiteness of the second form factor, and hence of the entire $\langle TVV \rangle$ in $\mathcal N=4$, is directly connected to the vanishing of the anomalous term, since its non-renormalization naturally requires the vanishing of the $\beta$ function. \chapter{ Dilaton Phenomenology at the LHC with the $TVV$ vertex} \section{Synopsis} In this chapter we explore the potential for the discovery of a dilaton of mass $O(200-500)$ GeV in a classically scale/conformal invariant extension of the Standard Model by investigating the size of the corresponding breaking scale $\Lambda$ at the LHC. In particular, we address the recent bounds on $\Lambda$ derived from Higgs boson searches. We investigate whether such a dilaton can be produced via gluon-gluon fusion, presenting rates for its decay either into a pair of Higgs bosons or into two heavy gauge bosons, which can give rise to multi-leptonic final states.
We include a detailed analysis via PYTHIA-FastJet of the dominant Standard Model backgrounds, at a centre of mass energy of 14 TeV. We show that early data of $\sim 20$ fb$^{-1}$ can certainly probe the region of parameter space where such a dilaton is allowed. A conformal scale of 5 TeV is allowed by the current data, for almost all values of the dilaton mass investigated. \section{Introduction} An important feature of the electroweak sector of the Standard Model (SM) is its approximate scale invariance, which holds if the quadratic terms of the Higgs potential are absent. These terms are obviously necessary in order for the theory to be in a spontaneously broken phase with a vacuum expectation value (vev) $v$ which is fixed by the experiments. \\ The issue of incorporating a mechanism of spontaneous symmetry breaking of a gauge symmetry while preserving the scale invariance of the Lagrangian is a subtle one, which naturally leads to the conclusion that the breaking of this symmetry has to be dynamical, with the inclusion of a dilaton field. In this case the mass of the dilaton should be attributed to a specific symmetry-breaking potential, probably of non-perturbative origin. A dilaton, in this case, is likely to be a composite \cite{CDS} state, with a conjectured behaviour which can be partly discussed using the conformal anomaly action. The absence of any dimensionful constant in a tree level Lagrangian is, in fact, a necessary condition in order to guarantee the scale invariance of the theory. This is also the framework that we will consider, which is based on the requirement of {\em classical} scale invariance. A stricter condition, for instance, lies in the (stronger) requirement of quantum scale invariance, with correlators which, in some cases, are completely fixed by the symmetry and incorporate the anomaly \cite{OP, EO, BMS1,BMS2,BMS3}. In the class of theories that we consider, the invariance of the Lagrangian under special conformal transformations is automatically fulfilled by the condition of scale invariance. For this reason we will refer to the breaking of such a symmetry as a conformal breaking.\\ Approaching a scale invariant theory from a non scale-invariant one requires all the dimensionful couplings of the model to be turned into dynamical fields, with a compensator ($\Sigma(x)$) which is rendered dynamical by the addition of a scalar kinetic term. It is then natural to couple such a field both to the anomaly and to the explicit (mass-dependent) extra terms which appear in the classical trace of the stress-energy tensor. \\ The inclusion of an extra $\Sigma$-dependent potential in the scalar sector of the new theory is needed in order to break the conformal symmetry at the TeV scale, with a dilaton mass which remains, essentially, a free parameter. We just mention that for a classically scale invariant extension of the SM Lagrangian, the choice of the scalar potential has to be appropriate, in order to support a spontaneously broken phase of the theory, such as the electroweak phase \cite{CDS}. For such a reason, the two mechanisms of electroweak and scale breaking have to be directly related, with the electroweak scale $v$ and the conformal breaking scale $\Lambda$ linked by a simple expression. At the same time, the invariance of the action under a change induced by a constant shift of the potential, which remains unobservable in a non scale-invariant theory, becomes observable and affects the vacuum energy of the model and its stability.
\\ The goal of our work is to elaborate on a former theoretical analysis \cite{CDS} of dilaton interactions, by discussing the signatures and the phenomenological bounds on a possible state of this type at the LHC, using the current experimental constraints. Some of the studies carried out so far address a state of geometrical origin ({\em the radion}) \cite{GRW}, which shares several of the properties of a (pseudo) Nambu-Goldstone mode of a broken conformal symmetry, except, obviously, its geometric origin and its possible compositeness. Other applications are in inflaton physics (see for instance \cite{AR}). \\ The production and decay mechanisms of a dilaton, either as a fundamental or a composite state, are quite similar to those of the Higgs field, except for the presence of a suppression related to a conformal scale ($\Lambda$) and of a direct contribution derived from the conformal anomaly. As we are going to show, the latter causes an enhancement of the dilaton decay modes into massless states, which is maximized if its coupling $\xi$ is conformal. In the phenomenological study that we present below we do not consider possible modifications of the production and decay rates of this particle typical of the dynamics of a bound state, if a dilaton is such. This point would require a separate study that will be addressed elsewhere. We just mention that there are significant indications from the study of conformal anomaly actions \cite{CDS,CCDS}, both in ordinary and in supersymmetric theories, that the conformal anomaly manifests itself with the appearance of anomaly poles in specific channels. These interpolate with the dilatation current \cite{CDS}, similarly to the behaviour manifested by an axial-vector current in $AVV$ diagrams. The exchange of these massless poles is therefore the natural signature of anomalies in general, be they chiral or conformal \cite{ACD0}. Concerning the conformal ones, these analyses have been fully worked out in perturbation theory in a certain class of correlators ($TVV$ diagrams) \cite{Giannotti:2008cv,Armillis:2009pq}, starting from QED. We have included one section (section \ref{non0xi}) where we briefly address these points, in view of some recent developments and prospects for future studies. In this respect, the analysis that we present should be amended with the inclusion of corrections coming from a possible wave function of the dilaton in the production/decay processes involving such a state. These possible developments require specific assumptions which we are not going to discuss in great detail in the current study but on which we will briefly comment prior to our conclusions. \section{Classical scale invariant extensions of the Standard Model and dilaton interactions} \label{revv} A scale invariant extension of the SM, at tree level, can be trivially obtained by promoting all the dimensionful couplings in the scalar potential, which now includes quartic and quadratic Higgs terms, to dynamical fields. The new field ($\Sigma(x)=\Lambda e^{\rho(x)/\Lambda}$) is accompanied by a conformal scale ($\Lambda$) and introduces a dilaton field $\rho(x)$, as a fluctuation around the vev of $\Sigma(x)$ \begin{equation} \Sigma(x)= \Lambda + \rho(x) + O(\rho^2), \qquad \qquad \langle \Sigma(x) \rangle=\Lambda, \qquad \qquad \langle \rho(x) \rangle =0. \end{equation} The leading interactions of the dilaton with the SM fields are obtained through the divergence of the dilatation current.
This corresponds to the trace of the energy-momentum tensor $T^\mu_{\mu \, SM}$ computed on the SM fields \begin{eqnarray}\label{tmunu} \mathcal L_{int} = -\frac{1}{\Lambda}\rho T^\mu_{\mu\,SM}. \end{eqnarray} The interactions of the dilaton to the massive states are very similar to those of the Higgs, except that $v$ is replaced by $\Lambda$. The distinctive feature between the dilaton and the SM Higgs emerges in the coupling with photons and gluons. One-loop expressions for the decays into all the neutral currents sector has been given in \cite{CDS}, while leading order decay widths of $\rho$ in some relevant channels (fermions, vector and Higgs pairs) are easily written in the form (for a minimally coupled dilaton, with $\xi=0$) \begin{align} &\Gamma_{\rho\to \bar ff}=N_f^c\frac{m_\rho}{8\pi}\frac{m_f^2}{\Lambda^2}\left(1-4\frac{m_f^2}{m_\rho^2}\right)^{3/2},\label{ffWidth}\\ &\Gamma_{\rho\to VV}=\delta_V\frac{1}{32\pi}\frac{m_\rho^3}{\Lambda^2}\left(1-4\frac{m_V^2}{m_\rho^2}+12\frac{m_V^4}{m_\rho^4}\right)\sqrt{1-4\frac{m_V^2}{m_\rho^2}},\label{vvWidth}\\ &\Gamma_{\rho\to HH}=\frac{1}{32\pi}\frac{m_\rho^3}{\Lambda^2}\left(1+2\frac{m_H^2}{m_\rho^2}\right)^2\sqrt{1-4\frac{m_H^2}{m_\rho^2}}\label{hhWidth}. \end{align} The one-loop expression for decays into $\gamma\gamma$ is \begin{eqnarray} \Gamma(\rho \rightarrow \gamma\gamma) &=& \frac{\alpha^2\,m_{\rho}^3}{256\,\Lambda^2\,\pi^3} \, \bigg| \beta_{2} + \beta_{Y} -\left[ 2 + 3\, x_W +3\,x_W\,(2-x_W)\,f(x_W) \right] \nonumber \\ && + \frac{8}{3} \, x_t\left[1 + (1-x_t)\,f(x_t) \right] \bigg|^2. \, \label{PhiGammaGamma}\nonumber\\ \end{eqnarray} Here, the contributions to the decay, beside the anomaly term, come from the $W$ and the fermion (top) loops. $\beta_2 (= 19/6)$ and $\beta_Y (= -41/6)$ are the $SU(2)_L$ and $U(1)_Y$ $\beta$ functions, while the $x_i$'s are proportional to the ratios between the mass of each particle in the loops $m_i$ and the $\rho$ mass. In general, we have defined the variable \begin{equation} \label{x} x_i = \frac{4\, m_i^2}{m^2_\rho} \, , \end{equation} with the index "$i$" labelling the corresponding massive virtual particles. The leading fermionic contribution in the loop comes from the top quark via $f(x_t)$, while $f(x_W)$ denotes the contribution of the $W$-loop. The function $f(x)$ is given by \begin{eqnarray} \label{fx} f(x) = \begin{cases} \arcsin^2(\frac{1}{\sqrt{x}})\, , \quad \mbox{if} \quad \, x \geq 1 \\ -\frac{1}{4}\,\left[ \ln\frac{1+\sqrt{1-x}}{1-\sqrt{1-x}} - i\,\pi \right]^2\, , \quad \mbox{if} \quad \, x < 1. \end{cases} \end{eqnarray} related to the scalar three-point master integral through the relation \begin{equation} \label{C03m} C_0(s,m^2) = - \frac{2}{s} \, f(\frac{4\,m^2}{s}) \, . \end{equation} The decay rate of a dilaton into two gluons is given by \begin{eqnarray} \Gamma(\rho \rightarrow gg) &=& \frac{\alpha_s^2\,m_\rho^3}{32\,\pi^3 \Lambda^2} \, \bigg| \beta_{QCD} + x_t\left[1 + (1-x_t)\,f(x_t) \right] \bigg|^2 \,, \label{ggWidth} \end{eqnarray} where $\beta_{QCD}$ is the QCD $\beta$ function and we have taken the top quark as the only massive fermion, with $x_i$ and $f(x_i)$ defined in Eq. (\ref{x}) and Eq. (\ref{fx}) respectively. 
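For convenience, the following is a minimal numerical sketch of Eqs.~(\ref{fx}), (\ref{PhiGammaGamma}) and (\ref{ggWidth}). The reference values of $\alpha$, $\alpha_s$, $m_W$ and $m_t$, as well as the numerical value used for $\beta_{QCD}$, are illustrative inputs and are not fixed by the discussion above.
\begin{verbatim}
import cmath, math

def f(x):
    # Loop function of Eq. (fx): arcsin^2(1/sqrt(x)) for x >= 1 and
    # -(1/4)[ln((1+sqrt(1-x))/(1-sqrt(1-x))) - i*pi]^2 for x < 1.
    if x >= 1.0:
        return cmath.asin(1.0 / math.sqrt(x))**2
    s = math.sqrt(1.0 - x)
    return -0.25 * (cmath.log((1.0 + s) / (1.0 - s)) - 1j * math.pi)**2

def width_rho_gammagamma(m_rho, Lam, alpha=1.0/137.036, m_W=80.4,
                         m_t=173.0, beta2=19.0/6.0, betaY=-41.0/6.0):
    # Gamma(rho -> gamma gamma) of Eq. (PhiGammaGamma); masses in GeV.
    xW, xt = 4.0*m_W**2/m_rho**2, 4.0*m_t**2/m_rho**2
    amp = (beta2 + betaY
           - (2.0 + 3.0*xW + 3.0*xW*(2.0 - xW)*f(xW))
           + (8.0/3.0)*xt*(1.0 + (1.0 - xt)*f(xt)))
    return alpha**2 * m_rho**3 / (256.0 * Lam**2 * math.pi**3) * abs(amp)**2

def width_rho_gg(m_rho, Lam, alpha_s=0.118, m_t=173.0, beta_QCD=7.0):
    # Gamma(rho -> g g) of Eq. (ggWidth); beta_QCD is an illustrative input.
    xt = 4.0*m_t**2/m_rho**2
    amp = beta_QCD + xt*(1.0 + (1.0 - xt)*f(xt))
    return alpha_s**2 * m_rho**3 / (32.0 * math.pi**3 * Lam**2) * abs(amp)**2

# Example: a 400 GeV dilaton with a conformal scale Lambda = 5 TeV
print(width_rho_gammagamma(400.0, 5000.0), width_rho_gg(400.0, 5000.0))
\end{verbatim}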
\\ Differently from the production cross section, the dependence of the decay rates in Eq.~(\ref{ffWidth})-Eq.~(\ref{hhWidth}) on the conformal scale $\Lambda$ amounts to an overall factor, so that the branching ratios \begin{eqnarray} \mathit{Br}(\rho\to\bar X X)=\frac{\Gamma_{\rho\to\bar X X}}{\sum_X\Gamma_{\rho\to \bar X X}}, \end{eqnarray} are $\Lambda$-independent. \\ We show in Fig.~\ref{brrhoh}(a) the decay branching ratios of the dilaton as a function of its mass, while in Fig.~\ref{brrhoh}(b) we plot the corresponding decay branching ratios for a SM-like heavy Higgs boson, here assumed to be of a variable mass. For a light dilaton with $m_\rho < 200$ GeV the dominant decay mode is into two gluons ($gg$), while for a dilaton of larger mass ($m_\rho > 200$ GeV) the same channels which are available for the SM-like Higgs ($ZZ, WW, \bar{t} t$) are now accompanied by a significant $gg$ mode. From the two figures it is easily observed that the two-gluon rate in the Higgs case is at the level of a few per mille, while in the dilaton case it is just slightly below 10$\%$. \begin{figure}[t] \begin{center} \hspace*{-2cm} \mbox{\subfigure[]{ \includegraphics[width=0.5\linewidth]{plots/BrDil.pdf}}\hskip 15pt \subfigure[]{\includegraphics[width=0.5\linewidth]{plots/BrHiggs.pdf}}} \caption{The mass dependence of the branching ratios of the dilaton (a) and of the Higgs boson (b).}\label{brrhoh} \end{center} \end{figure} \section{Production of the dilaton}\label{prod} The main production process of the dilaton at the LHC is through gluon fusion, as for the Higgs boson, with a suppression induced by the conformal breaking scale $\Lambda$, which lowers the production rates. Even in this less favourable situation, if confronted with the Higgs production rates of the SM, the dilaton phenomenology can still be studied at the LHC. \\ We calculate the dilaton production cross-section via gluon fusion by rescaling the SM Higgs production cross-section with the ratio of the dilaton and Higgs decay widths into two gluons. The dilaton production cross-section with the incoming gluons can thus be written as \begin{eqnarray}\label{ggrho} \sigma_{gg\to\rho}=\sigma_{gg\to H}\,\,\frac{\Gamma_{\rho\to gg}}{\Gamma_{H\to gg}} , \end{eqnarray} where we use the same factorization scale in the DGLAP evolution of the parton distribution functions (PDF) of \cite{HCWG}. The width of $\rho\to gg$ is given in Eq.~(\ref{ggWidth}) and we can use the same expression to calculate the width of $H\to gg$, replacing the breaking scale $\Lambda$ with $v$ and setting $\beta_{QCD}\equiv 0$. The ratio of the two widths appearing in Eq.~(\ref{ggrho}) is then given by \begin{eqnarray}\label{Wrhoh} \frac{\Gamma_{\rho\to gg}}{\Gamma_{H\to gg}}=\frac{v^2}{\Lambda^2}\frac{m^3_\rho}{m^3_H}\frac{\left|\beta_{QCD}+ x_t\left[1 + (1-x_t)\,f(x_t)\right] \right|^2}{\left| x_t\left[1 + (1-x_t)\,f(x_t)\right] \right|^2} . \end{eqnarray} In Fig.~\ref{ggvbf} we present the production cross-section of the dilaton at the LHC at 14 TeV centre of mass energy mediated by (a) gluon fusion and (b) vector boson fusion, versus $m_\rho$. Shown are the variations of the same observables for three conformal breaking scales with $\Lambda=1, 5, 10$ TeV. Notice that the contribution from the gluon fusion is about a factor $10^4$ larger than the vector boson fusion.
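The rescaling in Eqs.~(\ref{ggrho}) and (\ref{Wrhoh}) can be illustrated with a short numerical sketch, in which the top-loop factor is evaluated at $m_\rho$ in the numerator and at $m_H$ in the denominator, following the prescription of replacing $\Lambda$ with $v$ and setting $\beta_{QCD}\equiv 0$ for the Higgs case. The value of $\sigma_{gg\to H}$ used in the example and the numerical value of $\beta_{QCD}$ are purely illustrative inputs.
\begin{verbatim}
import cmath, math

def f(x):
    # Loop function of Eq. (fx)
    if x >= 1.0:
        return cmath.asin(1.0 / math.sqrt(x))**2
    s = math.sqrt(1.0 - x)
    return -0.25 * (cmath.log((1.0 + s) / (1.0 - s)) - 1j * math.pi)**2

def top_amp(m, m_t=173.0):
    # x_t [1 + (1 - x_t) f(x_t)], with x_t = 4 m_t^2 / m^2
    xt = 4.0*m_t**2/m**2
    return xt*(1.0 + (1.0 - xt)*f(xt))

def sigma_gg_to_rho(sigma_gg_to_H_SM, m_rho, Lam,
                    m_H=125.0, v=246.0, beta_QCD=7.0):
    # Eq. (ggrho): rescale the SM Higgs gluon-fusion cross section by the
    # width ratio of Eq. (Wrhoh); beta_QCD is an illustrative input.
    ratio = (v**2/Lam**2) * (m_rho**3/m_H**3) \
            * abs(beta_QCD + top_amp(m_rho))**2 / abs(top_amp(m_H))**2
    return sigma_gg_to_H_SM * ratio

# Example: Lambda = 5 TeV, m_rho = 400 GeV, with a hypothetical value
# (in fb) for the SM gluon-fusion cross section at that mass
print(sigma_gg_to_rho(1.0e4, 400.0, 5000.0))
\end{verbatim}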
\begin{figure}[thb] \begin{center} \hspace*{-1cm} \mbox{\subfigure[]{ \includegraphics[width=0.5\linewidth]{plots/ggFusion.pdf}}\hskip 15pt \subfigure[]{\includegraphics[width=0.5\linewidth]{plots/VBfusion.pdf}}} \caption{The mass dependence of the dilaton cross-section via gluon fusion (a) and vector boson fusion (b) for three different choices of the conformal scale, $\Lambda=1, 5, 10$ TeV respectively.}\label{ggvbf} \end{center} \end{figure} \subsection{Bounds on the dilaton from heavy Higgs searches at the LHC} \begin{figure}[t] \begin{center} \hspace*{-1cm} \mbox{\subfigure[]{ \includegraphics[width=0.5\linewidth]{plots/dilatonZZ.pdf}}\hskip 15pt \subfigure[]{\includegraphics[width=0.5\linewidth]{plots/dilatonWW.pdf}}} \hspace*{-1cm} \mbox{\subfigure[]{\includegraphics[width=0.5\linewidth]{plots/dilatonTauTau.pdf}}\hskip 15pt \subfigure[]{\includegraphics[width=0.5\linewidth]{plots/dilatonHH.pdf}}} \caption{The mass bounds on the dilaton from heavy scalar decays to (a) $ZZ$ \cite{CMSzz}, (b) $W^\pm W^\mp$ \cite{CMSww}, (c) $\bar\tau\tau$ \cite{CMStautau} and (d) to $H\,H$ \cite{CMShh} for three different choices of conformal scale, $\Lambda=1, 5, 10$ TeV respectively.}\label{bzzww} \end{center} \end{figure} Since the mass of the dilaton is a free parameter, and given the similarities of the main production and decay channels of this particle with those of the Higgs boson, several features of the production and decay channels in the Higgs sector, with the due modifications, are shared also by the dilaton case. As we have already mentioned, the production cross-section depends sensitively on $\Lambda$, as shown in Eqs.~(\ref{ggrho}) and (\ref{Wrhoh}). Bounds on this breaking scale have been imposed by the experimental searches for a heavy, SM-like Higgs boson at the LHC, heavier than the $125$ GeV Higgs, $H_{125}$. \\ We have investigated the bounds on $\Lambda$ coming from the following datasets: \begin{itemize} \item{} the 4.9 $\rm{fb}^{-1}$ (at 7 TeV) and 19.7 $\rm{fb}^{-1}$ (at 8 TeV) datasets for a heavy Higgs decaying into $Z\,Z$ \cite{CMSzz}, $W^\pm W^\mp$ \cite{CMSww}, $\bar\tau\tau$ \cite{CMStautau} and \item{} the 19.7 fb$^{-1}$ datasets (at 8 TeV) for the decay in $H\,H$ \cite{CMShh} from CMS \item{} the 20.3 fb$^{-1}$ at 8 TeV data from ATLAS for the decay of the heavy Higgs into $Z\,Z$ \cite{ATLASzz} and $W^\pm W^\mp$ \cite{ATLASww}. \end{itemize} The dotted line in each plot presents the upper bound on the cross-section, i.e. the $\mu$ parameter in each given mode, defined as \begin{eqnarray}\label{mu} \mu_{XY}=\frac{\sigma_{gg\to H}{{Br}(H\to XY)}}{{{\sigma_{gg \to H}}_{SM}}{{Br}(H \to XY)_{SM}}}. \end{eqnarray} In Fig. \ref{bzzww} we show the dependence of the 4-lepton ($2 l\, 2\nu$) channel on the mass of the $\rho$ at its peak, assuming $Z\,Z$, $W^\pm W^\mp$, $\bar\tau\tau$ and $H\,H$ intermediate states. The three continuous lines in violet, green and brown correspond to three different values of the conformal scale, equal to 1, 5 and 10 TeV respectively. The SM predictions are shown in red. The dashed blue line separates the excluded and the admissible regions, above and below the blue curve respectively, which sets an upper bound of exclusion obtained from a CMS analysis. A similar study is shown in Fig. \ref{bzzwwA}, limited to the $Z\,Z$ and $W^\pm W^\mp$ channels, where we report the corresponding bound presented, in this case, by the ATLAS collaboration.
Both the ATLAS and CMS data completely exclude the $\Lambda=1$ TeV case, whereas the $\Lambda = 5$ TeV case has only a small tension with the CMS analysis of the $W^\pm W^\mp$ channel if $m_\rho\sim 160$ GeV. Any value of $\Lambda \geq 5$ TeV is not ruled out by the current data.\\ In Table \ref{cross} we report the values of the gluon fusion cross-section for three benchmark points (BP) that we have used in our phenomenological analysis. We have chosen $\Lambda = 5$ TeV, and the factorization scale in the evolution of the parton densities has been chosen in concordance with that of the Higgs working group \cite{HCWG}. In the following subsection we briefly discuss some specific features of the dilaton phenomenology at the LHC, which will be confronted with a PYTHIA based simulation of the SM background. \begin{figure}[t] \begin{center} \hspace*{-2cm} \mbox{\subfigure[]{ \includegraphics[width=0.55\linewidth]{plots/dilatonZZatlas.pdf}}\hskip 15pt \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/dilatonWWatlas.pdf}}} \caption{The mass bounds on the dilaton from heavy scalar decays to (a) $ZZ$ \cite{ATLASzz} and (b) $W^\pm W^\mp$ \cite{ATLASww} for three different choices of conformal scale, $\Lambda=1, 5$ and $10$ TeV respectively.}\label{bzzwwA} \end{center} \end{figure} \begin{table}[t] \begin{center} \hspace*{-1.0cm} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|c||c|c|c||} \hline\hline Benchmark& $m_\rho$& $gg\to \rho$\\ Points &GeV& in fb \\ \hline BP1 &200&6906.62\\ \hline BP2 &260&3847.45\\ \hline BP3 &400&1229.25\\ \hline \hline \end{tabular} \caption{Dilaton production cross-section via gluon fusion at the LHC at 14 TeV, for the three selected benchmark points, with $\Lambda=5$ TeV.}\label{cross} \end{center} \end{table} \subsection{Dilaton phenomenology at the LHC} Fig.~\ref{ggd} shows the production and decay processes mediated by an intermediate dilaton at the LHC. We can see from Fig.~\ref{brrhoh}(a) that some of the most interesting decays of the dilaton are into two on-shell SM Higgs bosons $H\,H$, or into a real/virtual pair $H\,H^*$ and gauge boson pairs. The corresponding SM Higgs boson then further decays into $WW^*$ and/or $ZZ^*$. Certainly these gauge bosons and their leptonic decays will give rise to multi-leptonic final states with missing transverse energy ($\not\!\!{E_T}$) via the chain \begin{eqnarray} \label{dcy} pp &\to& \rho \to H\,H^* \nonumber\\ &\to & WW^*, WW^* \nonumber \\ & \to & 4\ell +\not\!\!{E_T}, \, 3\ell + 2j +\not\!\!{E_T}. \end{eqnarray} \begin{figure}[thb] \begin{center} \mbox{\subfigure[]{ \includegraphics[width=0.4\linewidth]{plots/rhotoHH.pdf}} \hspace*{1cm} \subfigure[]{\includegraphics[width=0.35\linewidth]{plots/rhotoVV.pdf}}} \caption{The Feynman diagrams showing the dilaton production via gluon-gluon fusion and its decay to (a) a pair of Higgs bosons, which further decay into gauge boson pairs, and (b) a pair of gauge bosons.}\label{ggd} \end{center} \end{figure} As shown above, there are distinct intermediate states mediating the decay of the dilaton into four $W^\pm$ bosons on/off-shell which give rise to $3\ell +\not\!\!{E_T}$ and $4\ell +2j +\not\!\!{E_T}$ final states. When we demand that one of the SM Higgs bosons $h$ decays to $ZZ^*$ and the other to $WW^*$, we gain a factor of two in multiplicity and generate a final state of the form $ 6\ell +\not\!\!{E_T}$, $ 4\ell + \geq 2j +\not\!\!{E_T}$ and $3\ell + 4j +\not\!\!{E_T}$ (i.e.
4 leptons, plus at least 2 jets accompanied by missing $E_T$) as in \begin{eqnarray} \label{dcy1} pp &\to& \rho \to H H^* \nonumber\\ &\to & WW^*, ZZ^* \nonumber \\ & \to & 6\ell +\not\!\!{E_T}, \, 4\ell + \geq 2j +\not\!\!{E_T},\, 3\ell + 4j +\not\!\!{E_T}. \end{eqnarray} Though the SM Higgs boson decay branching ratios to $ZZ^*$ are relatively small, $\sim 3\%$, when the dilaton decays via an intermediate $ZZ^*$, final states with several leptons are expected as in \begin{eqnarray} \label{dcy2} pp &\to& \rho \to H H^* \nonumber\\ &\to & ZZ^*, ZZ^* \nonumber \\ & \to & 8\ell,\, 6\ell + 2j,\, 4\ell + 4j. \end{eqnarray} From the last decay channel, final states with multiple charged leptons and zero missing energy are now allowed, a case which we will explore next. \\ The SM gauge boson branching ratios to charged leptons are very small, especially for channels mediated by a $Z$. Therefore leptonic final states of higher multiplicities will be suppressed compared to those of lower multiplicity. For this reason we will restrict the choice of the leptonic final states in our simulation to $\geq 3\ell +X$ and $\geq 4\ell +X$. The requirement of $\geq 3\ell$ and $\geq 4\ell$ already allows us to reduce most of the SM backgrounds, although not completely, due to some irreducible components, as we are going to discuss next. \section{Collider simulation}\label{colsim} We analyse dilaton production by gluon-gluon fusion, followed by its decay either to a pair of SM-like Higgs bosons ($\rho \to H_{125} H_{125}$) or to a pair of gauge bosons ($WW$, $ZZ$). The $H_{125}$ thus produced will further decay into gauge boson pairs, i.e. $W^\pm W^\mp$ and $ZZ$, giving rise to mostly leptonic final states, as discussed above. When the intermediate decays of one or more gauge bosons in the hadronic modes are considered, we get leptons associated with extra jets in the final states. For $m_{\rho} < 2m_{H_{125}}$ the dilaton decays to two on-shell $H_{125}$ states are not kinematically allowed. In that case we consider its direct decay into gauge boson pairs, $W^\pm W^\mp, ZZ$. In the following subsections we consider the two cases separately, and we analyze final states at the LHC at 14 TeV, simulating the contributions coming from the SM backgrounds. \\ For this goal we have implemented the model in SARAH \cite{sarah} and generated the model files for CalcHEP \cite{calchep}, which were later used to produce the SLHA decay file containing the decay rates and the corresponding mass spectra. The generated events have then been simulated with {\tt PYTHIA} \cite{pythia} via the SLHA interface \cite{slha}. The simulation at hadronic level has been performed using {\tt Fastjet-3.0.3} \cite{fastjet} with the {\tt CAMBRIDGE AACHEN} algorithm and a jet size $R=0.5$ for the jet formation, with the following selection criteria: \begin{itemize} \item the calorimeter coverage is $\rm |\eta| < 4.5$ \item minimum transverse momenta of the jets $ p_{T,min}^{jet} = 20$ GeV and the jets are ordered in $p_{T}$ \item leptons ($\rm \ell=e,~\mu$) are selected with $p_T \ge 20$ GeV and $\rm |\eta| \le 2.5$ \item no jet should be accompanied by a hard lepton in the event \item $\Delta R_{lj}\geq 0.4$ and $\Delta R_{ll}\geq 0.2$ \item Since an efficient identification of the leptons is crucial for our study, we additionally require the leptons to be isolated, by constraining the hadronic activity within a cone of $\Delta R = 0.3$ around each of them.
This is defined by the condition that the total hadronic transverse momentum within the specified cone does not exceed $0.15\, p^{\ell}_T$. \end{itemize} \subsection{Benchmark points} We have carried out a detailed analysis of the signal and of the background in a possible search for a light dilaton. For this purpose we have selected three benchmark points as given in Table~\ref{diltnbr}. The decay branching ratios given in Table~\ref{diltnbr} are independent of the conformal scale. For the benchmark point 1 (BP1), the dilaton is assumed to have a light mass of $200$ GeV, and its decay to the $H_{125}$ pair is not kinematically allowed. For this reason, as already mentioned, we look for slightly different final states in the analysis of such points. It appears evident that the dilaton may decay into gauge boson pairs when they are kinematically allowed. Such decays still remain dominant even after the $t\bar{t}$ mode opens up. This prompts us to study dilaton decays into $ZZ$, $WW$ via $3\ell$ and $4\ell$ final states. In the alternative case in which the dilaton also decays into a SM Higgs pair ($H_{125}$) along with gauge boson pairs, we have additional jets or leptons in the final states. This is due to the fact that the $H_{125}$ Higgs decays to the $WW$ and $ZZ$ pairs with one of the two gauge bosons off-shell (see Table~\ref{hbr}). We select two such points where this occurs, denoted as BP2 and BP3, which are shown in Table~\ref{diltnbr}. Below we are going to present a separate analysis for each of the two cases. \\ \begin{table}[t] \begin{center} \hspace*{-1.0cm} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|c||c|c|c||} \hline\hline Decay&BP1 & BP2&BP3 \\ Modes &$m_\rho$ = 200 GeV&$m_\rho$ = 260 GeV&$m_\rho$ = 400 GeV\\ \hline HH&-&0.245&0.290\\ \hline $W^\pm W^\mp$&0.639&0.478&0.408\\ \hline ZZ&0.227&0.205&0.191\\ \hline $\tau\tau$&$2.54\times10^{-4}$&$7.8\times10^{-5}$&$2.05\times10^{-5}$\\ \hline $\gamma \gamma$&$9.28\times10^{-5}$&$2.88\times10^{-5}$&$4.33\times10^{-6}$\\ \hline $gg$&$0.131$&$0.0691$&$0.0390$\\ \hline \hline \end{tabular} \caption{The benchmark points for a light dilaton with their mass-dependent decay branching ratios.}\label{diltnbr} \end{center} \end{table} \begin{table}[t] \begin{center} \renewcommand{\arraystretch}{1.2} \begin{tabular}{||c||c|c|c|c|c|c||} \hline\hline Decay Modes &$W^\pm W^\mp$&$Z\,Z$&$\bar b b$&$\bar \tau \tau$&$gg$&$\gamma\,\gamma$\\ \hline $H_{125}$&0.208&0.0259&0.597&0.0630&0.0776&$2.30\times10^{-3}$\\ \hline \hline \end{tabular} \caption{The corresponding branching ratios of the SM Higgs boson with a mass of 125 GeV.}\label{hbr} \end{center} \end{table} The leptons in the final state are produced from the decays of the gauge bosons, which can come, in turn, either from the decay of the dilaton or from that of the $H_{125}$. In such cases, for a sufficiently heavy dilaton, the four lepton signature ($4\ell$) of the final state is quite natural and the momentum configuration of the leptons will be boosted. In Fig.~\ref{lpf}(a) we show the multiplicity distribution of the leptons and in Fig.~\ref{lpf}(b) their $p_T$ distribution for the chosen benchmark points. Here the lepton multiplicity has been subjected to some basic cuts on the lepton transverse momenta ($p_T\geq 20$ GeV) and to the isolation criteria given earlier in this section. Thus soft and non-isolated leptons are automatically cut out from the distribution.
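As a purely schematic illustration of the lepton selection and isolation criteria of section~\ref{colsim}, the following minimal sketch applies the $p_T$, $\eta$, isolation and $\Delta R$ requirements to hypothetical leptons, hadrons and jets represented as simple $(p_T,\eta,\phi)$ tuples, rather than to the actual PYTHIA/FastJet objects used in the simulation.
\begin{verbatim}
import math

def delta_R(a, b):
    # a, b are (pT, eta, phi) tuples; standard Delta R separation.
    dphi = abs(a[2] - b[2])
    if dphi > math.pi:
        dphi = 2.0*math.pi - dphi
    return math.sqrt((a[1] - b[1])**2 + dphi**2)

def select_leptons(leptons, hadrons, jets):
    # Lepton cuts listed above: pT >= 20 GeV, |eta| <= 2.5, hadronic
    # activity within Delta R = 0.3 below 0.15*pT(lepton), and
    # Delta R(l, j) >= 0.4, Delta R(l, l) >= 0.2.
    selected = []
    for l in leptons:
        if l[0] < 20.0 or abs(l[1]) > 2.5:
            continue
        iso = sum(h[0] for h in hadrons if delta_R(l, h) < 0.3)
        if iso > 0.15 * l[0]:
            continue
        if any(delta_R(l, j) < 0.4 for j in jets):
            continue
        if any(delta_R(l, l2) < 0.2 for l2 in selected):
            continue
        selected.append(l)
    return selected

# Hypothetical example: one hard isolated lepton, one soft lepton
leptons = [(45.0, 1.1, 0.3), (12.0, -0.7, 2.0)]
hadrons = [(3.0, 1.15, 0.32), (8.0, 2.0, -1.0)]
jets    = [(60.0, -2.1, 2.9)]
print(select_leptons(leptons, hadrons, jets))
\end{verbatim}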
From Fig.~\ref{lpf}(b) it is clear that the leptons in BP3 can have a very hard transverse momentum ($p_T \sim 200$ GeV), as the corresponding dilaton has a mass of $400$ GeV. Notice that the di-lepton invariant mass distribution in Fig.~\ref{mll12} presents a mass peak around $m_Z$ for the signal (BP2) but not for the dominant SM top/antitop ($t\bar{t}$) background. This will be used later as a potential selection cut in order to reduce some of the SM backgrounds. \begin{figure}[bht] \begin{center} \hspace*{-2cm} \mbox{\subfigure[]{ \includegraphics[width=0.55\linewidth]{plots/nL.pdf}} \hspace*{.5cm} \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/pT.pdf}}} \caption{The (a) lepton multiplicity and (b) lepton $p_T$ distribution for the benchmark points.}\label{lpf} \end{center} \end{figure} \begin{figure}[thb] \begin{center} \hspace*{-2cm} \includegraphics[width=0.6\linewidth]{plots/mll12.pdf} \caption{The di-lepton invariant mass distribution for the signal BP2 and the background $t\bar{t}$.}\label{mll12} \end{center} \end{figure} \subsection{Light dilaton: $m_{\rho} < 2m_{H_{125}}$} In this subsection we analyse final states with at least three ($\geq 3\ell +X +\not\!\!{E_T}$) and at least four ($\geq 4\ell +X +\not\!\!{E_T}$) leptons (inclusive) and missing transverse energy that can result from the decays of the dilaton into $ZZ$, and we consider the potential SM backgrounds. The reason for considering the $3\ell$ final states is that one of the four leptons ($4\ell$) could be missed. This is in general possible due to the presence of additional kinematical cuts introduced when hadronic final states are accompanied by leptons. We present a list of the number of events for the $3\ell$ and $4\ell$ final states in Table~\ref{BP1n} for BP1, and the dominant SM backgrounds, at an integrated luminosity of 100 fb$^{-1}$ at the LHC. The potential SM backgrounds come from the $t\bar{t}Z$ and $tZW$ sectors, from intermediate gauge boson pairs ($VV$) and from the triple gauge boson vertices $VVV$ ($V: W^\pm, Z$). Due to the large $t\bar{t}$ cross-section, with the third and fourth lepton originating from the corresponding $b$ decays, this background appears to be an irreducible one. For this reason we are going to apply successive cuts for its further reduction, as described in Table~\ref{BP1n}.
\begin{table}[t] \begin{center} \hspace*{-1.0cm} \renewcommand{\arraystretch}{1.3} \begin{tabular}{|c||c||c|c|c|c|c||} \hline\hline Final states&\multicolumn{1}{|c||}{Benchmark}&\multicolumn{5}{|c||}{Backgrounds } \\ \hline &BP1 & $t\bar{t}$& $t\bar{t}Z$ &$tZW$&$VV$& $VVV$\\ \hline \hline $\geq 3\ell \,+\, \not\!\!{p_T} \leq 30\, \rm{GeV}$&494.97&275.52&65.17&22.29&6879.42&765.11\\ $\,+\,|m_{ll}-m_Z|<5\,\rm{GeV}$&384.47&68.88&62.68&20.93&2514.92&16.16\\ $\,+\,n_{\rm{b_{jet}}}=0$&377.56&9.84&17.64&10.08&2479.66&15.13\\ \hline Significance&7.00&\multicolumn{5}{|c||}{}\\ \hline $\mathcal L_5$&51 fb$^{-1}$&\multicolumn{5}{|c||}{}\\ \hline \hline $\geq 4\ell \,+\, \not\!\!{p_T} \leq 30\, \rm{GeV}$&273.96&0.00&3.32&1.36&1655.99&34.18\\ $\,+\,|m_{ll}-m_Z|<5\,\rm{GeV}$&218.71&0.00&3.11&1.16&627.38&4.44\\ \hline Significance&7.48&\multicolumn{5}{|c||}{}\\ \hline $\mathcal L_5$&45 fb$^{-1}$&\multicolumn{5}{|c||}{}\\ \hline \hline \end{tabular} \caption{Numbers of events for the $3\ell+\not\!\!{p_T}$ and $4\ell$ final states for BP1 and the dominant SM backgrounds, at an integrated luminosity of $100$ fb$^{-1}$.}\label{BP1n} \end{center} \end{table} The primary signal that is considered is characterised by the kinematical cut $3\ell +\not\!\!{p_T} \leq 30$ GeV. The choice of a very low missing $p_T$ is justified because when both $Z$'s decay to charged lepton pairs they give rise to $\geq 3\ell$ and $\geq 4\ell$ final states which are neutrinoless. The theoretical prediction of no missing energy, however, cannot be fully satisfied, as the missing transverse momentum $\not\!\!{p_T}$ is calculated by estimating the total visible $p_T$ of the jets and of the leptons after the threshold cuts. Next we demand that the di-lepton pair is characterised by an invariant mass around the $Z$ mass, i.e., $\,|m_{ll}-m_Z|<5\,\rm{GeV}$, which reduces the $t\bar{t}$, $VV$ and $VVV$ backgrounds quite significantly. A further requirement of no $b$-jet (i.e., $n_b=0$) reduces the $t\bar{t}$, $t\bar{t}Z$ and $tZW$ backgrounds. By looking at the signal, we observe that these cuts do not affect the signal number for BP1. After imposing all the cuts, we find that an integrated luminosity of $\mathcal{O}(51)$ fb$^{-1}$ is required for a $5\sigma$ reach in this final state. The demand of $4\ell$ of course reduces the background but also reduces the signal event numbers. In this case $\mathcal{O}(45)$ fb$^{-1}$ of integrated luminosity is required for a $5\sigma$ discovery. \subsection{Heavy dilaton: $m_{\rho} > 2m_{H_{125}}$} In this case we consider points where $m_{\rho} > 2m_{H_{125}}$, allowing decays of the dilaton to $H_{125}$ pairs. For this purpose we have chosen two benchmark points, one with $m_{\rho}=260$ GeV - where the channel $\rho \to\, H_{125}H_{125}$ is just open - and another one with $m_{\rho}=400$ GeV, where even the $\rho \to t\bar{t}$ channel is open. The decay mode via a $H_{125}$ pair, in turn decaying into gauge boson pairs, gives additional jets which accompany the $3\ell$ and $4\ell$ final states and help in a further reduction of the SM backgrounds. Table~\ref{4l} presents the number of expected events generated at the BP2 and BP3 benchmark points for the signal and for the dominant SM backgrounds. Here we have considered $\geq 3\ell$ and $\geq 4\ell$ final states respectively, at an integrated luminosity of $1000$ fb$^{-1}$. The dominant backgrounds are as before, and listed in Table~\ref{4l}.
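Before discussing the effect of additional cuts, we note that the significances and the $5\sigma$ luminosities $\mathcal L_5$ quoted in Table~\ref{BP1n}, and in Table~\ref{4l} below, are consistent with the simple estimator $S/\sqrt{S+B}$, with the signal and background event numbers scaled linearly in the integrated luminosity. A minimal sketch:
\begin{verbatim}
import math

def significance(S, B):
    # Simple S/sqrt(S+B) estimator for the expected significance.
    return S / math.sqrt(S + B)

def lumi_for_5sigma(S, B, lumi):
    # Luminosity needed for 5 sigma, with S and B scaling linearly in the
    # integrated luminosity (the significance grows like sqrt(L)).
    return lumi * (5.0 / significance(S, B))**2

# Last row of cuts in Table (BP1n), >= 3 lepton channel at 100 fb^-1
S = 377.56
B = 9.84 + 17.64 + 10.08 + 2479.66 + 15.13
print(round(significance(S, B), 2), round(lumi_for_5sigma(S, B, 100.0)))
# -> 7.0 and ~51 fb^-1, as quoted in the table
\end{verbatim}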
Notice that if we demand the tagging of at least two additional jets and the $b$-jet veto, we can reduce the backgrounds even further. The result shows that in the case of BP2 and BP3 a dilaton signal could be discovered at an integrated luminosity of $\mathcal{O}(130)$ and $\mathcal{O}(570)$ fb$^{-1}$ respectively for the $\geq 3\ell$ final state. For the $\geq 4\ell$ final state a $5\sigma$ discovery reach can be achieved even with 114 fb$^{-1}$ and 374 fb$^{-1}$ of integrated luminosity for BP2 and BP3 respectively. \begin{table}[t] \begin{center} \hspace*{-1.0cm} \renewcommand{\arraystretch}{1.3} \begin{tabular}{|c||c|c||c|c|c|c|c||} \hline\hline Final states&\multicolumn{2}{|c||}{Benchmark}&\multicolumn{5}{|c||}{Backgrounds } \\ \hline & BP2&BP3 & $t\bar{t}$& $t\bar{t}Z$ &$tZW$&$VV$& $VVV$\\ \hline \hline $\geq 3\ell$&3882.08&1642.28&10725.9&4790.19&1364.73&177140&53660.2\\ $\,+\, n_{\rm{b_{jet}}}=0$&3812.82&1627.53&5510.54&1550.38&664.92&176167&53604.8\\ $\,+\,n_{\rm{jet}}\geq2$&2677.82&1255.06&2952.08&1469.43&579.62&29165.5&324.28\\ \hline Significance&13.89&6.64&\multicolumn{5}{|c||}{}\\ \hline $\mathcal L_5$&130 fb$^{-1}$&568 fb$^{-1}$&\multicolumn{5}{|c||}{}\\ \hline \hline $\geq 4\ell$&1400.47&678.55&0.00&502.26&149.27&17338.1&2379.06\\ $\,+\,n_{\rm{jet}}\geq2\,+\, n_{\rm{b_{jet}}}=0$&865.68&448.68&0.00&147.36&48.46&2334.44&36.13\\ \hline Significance&14.78&8.17&\multicolumn{5}{|c||}{}\\ \hline $\mathcal L_5$&114 fb$^{-1}$&374 fb$^{-1}$&\multicolumn{5}{|c||}{}\\ \hline \hline \end{tabular} \caption{We present the event numbers for the $\geq 3\ell$ and $\geq 4\ell$ final states for the benchmark points BP2 and BP3 and the dominant SM backgrounds, at an integrated luminosity of $1000$ fb$^{-1}$.}\label{4l} \end{center} \end{table} \begin{figure}[thb] \begin{center} \hspace*{-2cm} \mbox{\subfigure[]{ \includegraphics[width=0.54\linewidth]{plots/invMASSll.pdf}} \hspace*{.5cm} \subfigure[]{\includegraphics[width=0.57\linewidth]{plots/invMASSlljj.pdf}}} \caption{The invariant mass distribution for the benchmark points and the dominant SM backgrounds for the $4\ell$ and $2\ell2j$ final states respectively, at an integrated luminosity of 100 fb$^{-1}$.}\label{invmass} \end{center} \end{figure} Next we try to reconstruct the dilaton mass peak from the $\geq 4\ell$ and $2\ell\, 2j$ channels. In the first case we consider the isolated $4\ell$'s after enforcing the basic cuts, and then demand that the di-leptons are coming from the $Z$ boson mass peak. This guarantees that we are reconstructing either the $\rho\to ZZ$ or the $\rho \to H_{125}H_{125}\to ZZ+X$ decay channel. Fig.~\ref{invmass}(a) shows the plot of the invariant mass distributions $m_{4\ell}$ for all three benchmark points, along with the dominant backgrounds. The presence of a clear mass peak certainly allows the reconstruction of the dilaton mass. We have selected the number of events around the mass peaks, i.e., $|m_{4\ell} -m_{\rho}|\leq 10$ GeV for the benchmark points, which are shown in Table~\ref{4lpeak} at an integrated luminosity of 100 fb$^{-1}$. It is clear that for the BP1 and BP2 benchmark points the mass peak can be resolved with very early data at the LHC, with a 14 TeV run.\\ Fig.~\ref{invmass}(b) shows the invariant mass distribution, where we consider a pair of charged leptons around the $Z$ mass peak, i.e., $|m_{\ell\ell}-m_Z|<5\,\rm{GeV}$, as well as a pair of jets, i.e., $|m_{jj}-m_Z|< 10\,\rm{GeV}$.
Such di-jet pairs and di-lepton pairs are then combined in all possible ways to evaluate the $m_{\ell\ell jj}$ mass distribution, as shown in Fig.~\ref{invmass}(b). The $Y$ axis of the figure counts such possible pairings, while the $X$ axis indicates the mass scale. We see that the correct combinations produce a peak which sits around the benchmark masses. We have also included the dominant backgrounds, with all their combinations, in the reconstruction of the invariant mass $m_{\ell\ell jj}$. In Table~\ref{2l2jlpeak} we list the results around the mass peak, i.e. for $|m_{2\ell2j} -m_{\rho}|\leq 10$ GeV. It is easily observed that such a constraint can be a very handy guide to identify the resonance mass peak using very early data at the LHC with 14 TeV. \begin{table}[t] \begin{center} \hspace*{-1.0cm} \renewcommand{\arraystretch}{1.5} \begin{tabular}{c||c|c|c||} \hhline{~===} &\multicolumn{3}{c||}{Number of events in}\\ &\multicolumn{3}{c||}{$|m_{4\ell} -m_{\rho}|\leq 10$ GeV}\\ \hhline{~===} & BP1&BP2&BP3\\ \hline \hline \multicolumn{1}{||c||}{Signal}&396&194&30\\ \hline \multicolumn{1}{||c||}{Background}&108&77&18\\ \hline \multicolumn{1}{||c||}{Significance}&17.64&11.78&4.33\\ \hline \hline \end{tabular} \caption{We present the event numbers for the $\geq 4\ell$ final state around the dilaton mass peak, i.e. $|m_{4\ell} -m_{\rho}|\leq 10$ GeV, for the benchmark points and the backgrounds at an integrated luminosity of 100 fb$^{-1}$.}\label{4lpeak} \end{center} \end{table} \begin{table}[t] \begin{center} \hspace*{-1.0cm} \renewcommand{\arraystretch}{1.5} \begin{tabular}{c||c|c|c||} \hhline{~===} &\multicolumn{3}{c||}{Number of events in}\\ &\multicolumn{3}{c||}{$|m_{\ell\ell jj} -m_{\rho}|\leq 10$ GeV}\\ \hhline{~===} & BP1&BP2&BP3\\ \hline \hline \multicolumn{1}{||c||}{Signal}&14727&8371&1390\\ \hline \multicolumn{1}{||c||}{Background}&10887&6706&1234\\ \hline \multicolumn{1}{||c||}{Significance}&92.02&68.17&27.13\\ \hline \hline \end{tabular} \caption{We present the event numbers for the $2\ell\,2j$ final state around the dilaton mass peak, i.e. $|m_{2\ell2j} -m_{\rho}|\leq 10$ GeV, for BP1, BP2, BP3 and the backgrounds at an integrated luminosity of 100 fb$^{-1}$.}\label{2l2jlpeak} \end{center} \end{table} \section{Perspectives on compositeness and $\xi$ dependence} \label{non0xi} In our analysis the dilaton has been treated as a fundamental state, with interactions which are dictated by Eq.~(\ref{tmunu}). The perturbative analysis that follows from this interaction does not take into account possible effects of compositeness, which would involve the wave function of this state both in its production and decay. In this respect, this treatment is quite similar to the study of the $\pi\to \gamma\gamma$ decay using only the divergence of the interpolating axial-vector current rather than the pion itself, with its hadronic wave function now replaced by the divergence of the dilatation current $J_D$. Those effects could modify the predictions that emerge from our analysis. \\ Another possible modification of our results will certainly be linked to a nonzero value of the $\xi$ parameter. The search for a viable signal of a nonminimal dilaton at the LHC requires a completely independent calibration of the kinematical cuts that we have discussed. While we hope to address this point in a future work, we can however obtain a glimpse of the dependence of the signal (production/decays) as a function of $\xi$.
\\ This behaviour is clearly illustrated in Fig.~\ref{diffxi}, where the decay rates of a conformal dilaton into massless and massive states are shown for different values of the improvement coefficient $\xi$. Fig.~\ref{diffxi}(a), (d) show the decay branching fractions into gluon and photon pairs respectively. We see that for $\xi=1/6$ they are enhanced compared to other values of $\xi$. Similarly, the massive gauge boson modes are suppressed for $\xi=1/6$, as can be seen from Fig.~\ref{diffxi}(b),(c). In Fig.~\ref{xsgldi} we present the production cross-sections for the di-gluon and di-photon final states. Notice that for $\xi=1/6$ these two modes have much larger rates than for other $\xi$ cases. Unlike the minimal case of $\xi=0$, the $\xi=1/6$ case can be studied via di-jet or di-photon final states. It is expected that a dilaton which arises from the breaking of a conformal symmetry should be described by a conformal coupling $\xi=1/6$, at least in the high energy limit. The signature of such a state, if composite, is in the anomaly pole of correlators involving the dilatation current and two vector currents, as pointed out in \cite{CDS}. The dilatation current inherits the same pole from the $TVV$ correlator \cite{Giannotti:2008cv,ACD2,ACD3}, while the explicit/non perturbative breaking of the conformal symmetry would then be responsible for the generation of its mass. \\ In a more general framework, the possibility of having similar states in superconformal theories has been extensively discussed in \cite{CCDS} from a perturbative perspective. It has been shown, for instance, that classical superconformal theories are characterised by a complete alignment in their conformal anomaly multiplets. An axion/dilaton/dilatino composite multiplet would then be the natural manifestation of this alignment found in the superconformal anomaly action. \begin{figure}[thb] \begin{center} \hspace*{-1.5cm} \mbox{\subfigure[]{ \includegraphics[width=0.55\linewidth]{plots/BrRhoGGXi.pdf}} \hspace*{0.3cm} \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/BrRhoZZXi.pdf}}} \hspace*{-1.5cm} \mbox{\subfigure[]{ \includegraphics[width=0.55\linewidth]{plots/BrRhoWWXi.pdf}} \hspace{0.3cm} \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/BrRhoGammaGammaXi.pdf}}} \caption{The decay branching ratios of the dilaton into (a) gluon, (b)-(c) massive gauge boson and (d) photon pairs for different values of the $\xi$ parameter.}\label{diffxi} \end{center} \end{figure} \begin{figure}[thb] \begin{center} \hspace*{-1.3cm} \mbox{\subfigure[]{ \includegraphics[width=0.55\linewidth]{plots/diphotonsignal.pdf}} \hspace*{0.3cm} \subfigure[]{ \includegraphics[width=0.55\linewidth]{plots/digluonsignal.pdf}}} \caption{Di-gluon and di-photon signal of a dilaton for a varying $\xi$.}\label{xsgldi} \end{center} \end{figure} \chapter{The Higgs sectors with an $SU(2)$ Triplet in a Superconformal Theory} \section{Synopsis} In this and in the following chapter we are going to investigate a superconformal model with an extended Higgs sector, from a strictly phenomenological perspective. The model is characterised by a superpotential which includes a triplet superfield of $SU(2)$ and a singlet. We will be investigating this theory in a phase in which supersymmetry is broken, and, from this perspective, the effects of the superconformal symmetry are masked by the presence of soft-breaking mass terms.
We anticipate that, as a result of this analysis, three massless states of the physical spectrum, $H_4, A_1$ and $\chi_{10}$, should be identified as the Goldstone modes of a superconformal symmetry. The first is a scalar state, the second a pseudoscalar, and the third a neutralino. The main features of the scalar sector and the corresponding bounds on the parameter space of this model emerging from the recent experimental data are discussed. \section{Introduction} With the recent discovery of the Higgs boson at the Large Hadron Collider, the mechanism responsible for the breaking of the electroweak symmetry has finally been uncovered and it has been shown to involve at least one scalar field along the lines of the Standard Model (SM) description. This discovery has removed, at least in part, previous doubts about the real existence of a scalar with Higgs-like properties in our Universe. Both the CMS \cite{CMS, CMS2} and the ATLAS \cite{ATLAS} experimental collaborations have confirmed the discovery of a Higgs boson, by an analysis of the $\gamma \gamma, ZZ^*,$ and $WW^*$ decay channels of the Higgs particle - as predicted by the Standard Model (SM) - at a confidence level of more than $5\sigma$, except for the $WW^*$ decay rate, which has been recorded with a $4.7\sigma$ accuracy by CMS \cite{CMS2}. The fermionic decay modes, instead, have still to reach the $5\sigma$ accuracy, and show some disagreement in the results elaborated by the two experimental collaborations. Clearly, the disagreement of the experimental results with the predictions of the SM opens the possibility of further investigations of the Higgs sector. \\ For such reasons, it is widely believed that the SM is not a complete theory, since it is unable, for instance, to account for the neutrino masses, and is also affected, in the scalar sector, by the gauge hierarchy problem \cite{hierarchy}. The widespread interest in the study of a possible supersymmetric extension of this model has always been motivated by the goal of finding a natural and elegant solution to this problem. In fact, supersymmetry protects the Higgs mass from the undesired quadratic divergences introduced by the radiative corrections in the scalar sector of the SM, and it does so by the inclusion of superpartners. \\ In the minimal supersymmetric extension of the SM (MSSM) the conditions of analyticity of the superpotential and of absence of the gauge anomalies require a minimal extension of the scalar sector with two Higgs superfields, in the form of $SU(2)$ doublets carrying opposite hypercharges $(Y)$. Supersymmetric extensions are, in general, characterized by a large set of additional parameters which render their phenomenological study quite involved. For this reason, in the recent past, the interest has turned towards models, such as the constrained minimal supersymmetric extension (cMSSM/mSUGRA), with only 5 new parameters, generated at a large supergravity scale, quite close to the Planck scale \cite{cMSSM}. Unlike the SM case, in the MSSM the tree-level mass of the lightest Higgs ($h_1$) $m_{h_1}$ is not a free parameter but it is constrained to lie below the mass of the Z gauge boson, $m_Z$ ($m_{h_1}\leq m_Z$). This constraint has been in tension with the results of the experimental searches at LEP-2, which have failed to detect any CP-even Higgs below $m_Z$ and which had established a lower bound of $114.5$ GeV for the SM Higgs boson \cite{LEPb}.
With the recent discovery of a CP-even Higgs boson around 125 GeV \cite{CMS, CMS2, ATLAS}, the resolution of this conflict is, therefore, mandatory. To avoid the conflict between the MSSM prediction for the Higgs and the LHC results, one needs to consider the effect of the radiative corrections which could lift the bound on the Higgs mass in this model. It has been shown - and it is now well known - that in the case of the MSSM the significant radiative corrections come from the stop-top corrections, especially at low $\tan{\beta}$, due to large Yukawa couplings and to the presence of colour charges. This has triggered analyses envisioning scenarios with a heavy stop, which require a very high supersymmetric (SUSY) scale for the most constrained supersymmetric models like mSUGRA/cMSSM, AMSB, etc \cite{cMSSMb}. In the case of the phenomenological MSSM (pMSSM) there are two possibilities: a very large third generation SUSY mass scale and/or a large splitting between the stop mass eigenstates \cite{pMSSMb}. The second case leads to large soft trilinear couplings $\gtrsim 2$ TeV \cite{pMSSMb}, which brings back the fine-tuning problem in a different way. A possible way to address the fine tuning problem is to consider an extended Higgs sector. In this respect, there are some choices which could resolve it, based on the inclusion of one singlet \cite{nmssm} and of one or more triplet superfields of appropriate hypercharges \cite{tssmHiggsb}. In particular, the addition of a $Y=0$ hypercharge triplet superfield gives large tree-level as well as one-loop corrections to the Higgs masses, and relaxes the fine tuning problem of the MSSM by requiring a lower SUSY mass scale \cite{tssmyzero, DiChiara:2008rg}. There are some special features of these extensions which are particularly interesting and carry specific signatures. For instance, the addition of an $SU(2)$ triplet Higgs sector with $Y=0,\pm 2$ hypercharge induces $H^\pm-W^\mp-Z$ couplings mediated by the non-zero vacuum expectation value (vev) of the Higgs triplet, due to the breaking of the custodial symmetry \cite{tssmch1prime,tssmch1}. Another original feature of the $Y=\pm 2$ hypercharge triplets is the presence of doubly charged Higgs bosons in the spectrum \cite{tssmch2}. There are also other significant constraints which are typical of these scenarios, and which may help in the experimental analysis. In the supersymmetric Higgs triplet extension, the vev of the triplet $v_T$ is strongly constrained by the $\rho$ parameter \cite{rho}, which leads to $v_T\lesssim 5$ GeV in the case of $Y=0$ triplets. In the same case, this value of $v_T$ can account for the value of the mixing parameter $\mu_D$ of the 2-Higgs doublets (or $\mu$-term), which remains small in the various possible scenarios. Another dynamical way to generate a $\mu_D$ term is by adding a SM gauge singlet superfield to the spectrum \cite{nmssm}, as in the NMSSM. Thus a triplet-singlet extended supersymmetric SM, built on the superpotential of the MSSM, can address both the fine tuning issue and resolve, at the same time, the problem of the $\mu$-term of the two Higgs doublets \cite{tnssm, FileviezPerez:2012gg, Agashe:2011ia}. We will see that the addition of a discrete symmetry in this model removes the mass terms from the superpotential and its continuum limit generates a Nambu-Goldstone pseudoscalar particle in the spectrum, characterising some of its most significant features.
In the MSSM we have two Higgs doublets giving masses to up and down type quarks respectively. After EWSB we have two CP-even light neutral Higgs bosons, among which one can be the discovered Higgs around 125 GeV, a CP-odd neutral Higgs boson and a charged Higgs boson pair. Observation of a charged Higgs boson will be an obvious proof of the existence of another Higgs doublet, which is necessary in the context of supersymmetry. Searches for the extended Higgs sector by looking for charged Higgs bosons at the LHC are not new. In fact, both the CMS and ATLAS collaborations have investigated scenarios with charged Higgs bosons, even under the assumption of these being lighter than the top quark ($m_{H^\pm}\leq m_t$). In this case, the channel in question has been the $pp\to t\bar{t}$ production channel, with one of the top quarks decaying into $b H^\pm$. In the opposite case of a charged Higgs heavier than the top ($m_{H^\pm}\geq m_t$), the most studied channels have been the $bg \to tH^\pm$ and $pp \to tb H^\pm$, with the charged Higgs decaying into $\tau \nu_\tau$ \cite{ChCMS, ChATLAS}. We recall that both doublet type charged and neutral Higgs bosons couple to fermions with Yukawa interactions which are proportional to the mixing angle of the up and down type $SU(2)$ doublets. The extension of the MSSM with a SM gauge singlet, i.e. the NMSSM \cite{Ellwanger}, has a scalar which does not couple to fermions or gauge bosons and thus changes the search phenomenology. Similar extensions are possible with only $SU(2)$ triplet superfields with $Y=0, \pm 2$ hypercharges \cite{pbas1, pbas2, DiChiara, pbas3, EspinosaQuiros}. In the case of $Y=0$, the neutral part of the triplet scalar does not couple to the $Z$ boson and does not contribute to its mass, whereas non-zero hypercharge triplets contribute to both the $W^\pm$ and $Z$ masses. The supersymmetric extensions of the Higgs sectors with $Z_3$ symmetry have the common feature of a light pseudoscalar in the spectrum, known as the R-axion in the literature. Such a feature is common to the NMSSM with $Z_3$ symmetry \cite{Ellwanger} and also to extensions with singlet and triplet(s) with appropriate hypercharges \cite{TNMSSM1, TNMSSM2, tnssm, tnssma}. In this chapter we consider an extension of the MSSM with an $SU(2)$ triplet superfield of $Y=0$ hypercharge and a SM gauge singlet superfield, referred to as the TNMSSM \cite{TNMSSM1, TNMSSM2}, with $Z_3$ symmetry. The main motivation to work with a $Y=0$ triplet is that it is the simplest triplet extension in the supersymmetric context, where the triplet only contributes to the $W^\pm$ mass. For a model with non-zero hypercharges we need at least two triplets, and we are also constrained by both the $W^\pm$ and $Z$ masses \cite{tnssma}. The light pseudoscalar in this model is mostly singlet and hence does not have any coupling to fermions or gauge bosons. For this reason such a light pseudoscalar is still allowed by the earlier LEP \cite{LEPb} data and by the current LHC data \cite{CMS, CMS2, ATLAS}. Similarly, the triplet type Higgs bosons also do not couple to fermions \cite{pbas1, pbas2, DiChiara, pbas3}, which makes a light triplet-like charged Higgs still allowed by the charged Higgs searches \cite{ChCMS, ChATLAS}, and such Higgs bosons have to be looked for in different production as well as decay modes. General features of this model have been presented in \cite{TNMSSM1}, while a more detailed investigation of the hidden pseudoscalar has been discussed by us in \cite{TNMSSM2}.
The existence of the light pseudoscalar makes the phenomenology of the Higgs sector very rich, for both the neutral and the charged sectors, along with other signatures. In the TNMSSM we have three physical charged Higgs bosons $h^\pm_{1,2,3}$, two of which are triplet type in the gauge basis. The neutral part of the Higgs sector has four CP-even ($h_{1,2,3,4}$) and three CP-odd ($a_{1,2,3}$) states. In the gauge basis two of the CP-even states are doublet-like, one of which should be the discovered Higgs around $125$ GeV, while the remaining ones are one triplet-type and one singlet-type state. For the CP-odd states, there is one doublet-type, one triplet-type and one singlet-type state. Often it is the singlet-like pseudoscalar which becomes very light, which makes the phenomenology very interesting. The mass spectrum often splits into several regions with distinct doublet/triplet blocks. The goal of our analysis will be to address the main features of this complete spectrum, characterising its main signatures in the complex environment of a hadron collider. \section{The Model}\label{model} We consider a scale invariant superpotential $W_{TNMSSM}$ with an extended Higgs sector containing a $Y=0$ SU(2) triplet $\hat{T}$ and a SM gauge singlet ${\hat S}$ (see \cite{tnssm, FileviezPerez:2012gg}) on top of the superpotential of the MSSM. We recall that the inclusion of the singlet superfield in the superpotential of the MSSM realizes the NMSSM superpotential. We prefer to separate the complete superpotential of the model into a MSSM part, \begin{equation} W_{MSSM}= y_t \hat U \hat H_u\!\cdot\! \hat Q - y_b \hat D \hat H_d\!\cdot\! \hat Q - y_\tau \hat E \hat H_d\!\cdot\! \hat L\ , \label{spm} \end{equation} where ''$\cdot$'' denotes a contraction with the Levi-Civita symbol $\epsilon^{ij}$, with $\epsilon^{12}=+1$, and combine the singlet superfield $(\hat{S})$ and the triplet contributions into a second superpotential \begin{equation} W_{TS}=\lambda_T \hat H_d \cdot \hat T \hat H_u\, + \, \lambda_S \hat S \hat H_d \cdot \hat H_u\,+ \frac{\kappa}{3}\hat S^3\,+\,\lambda_{TS} \hat S \textrm{Tr}[\hat T^2] \label{spt} \end{equation} with \begin{equation} W_{TNMSSM}=W_{MSSM} + W_{TS}. \end{equation} The triplet and doublet superfields are given by \begin{equation}\label{spf} \hat T = \begin{pmatrix} \sqrt{\frac{1}{2}}\hat T^0 & \hat T_2^+ \cr \hat T_1^- & -\sqrt{\frac{1}{2}}\hat T^0 \end{pmatrix},\qquad \hat{H}_u= \begin{pmatrix} \hat H_u^+ \cr \hat H^0_u \end{pmatrix},\qquad \hat{H}_d= \begin{pmatrix} \hat H_d^0 \cr \hat H^-_d \end{pmatrix}. \end{equation} Here $\hat T^0$ is a complex neutral superfield, while $\hat T_1^-$ and $\hat T_2^+$ are the charged Higgs superfields. Note that $(\hat{T}_1^-)^*\neq \hat{T}_2^+$. Only the MSSM Higgs doublets couple to the fermion multiplet via Yukawa couplings as in Eq.~(\ref{spm}), while the singlet and the triplet superfields generate the supersymmetric $\mu_D$ term after their neutral parts acquire vevs, as shown in Eq.~(\ref{spt}). In any scale invariant supersymmetric theory with a cubic superpotential, the complete Lagrangian with the soft SUSY breaking terms has an accidental $Z_3$ symmetry, i.e. the invariance under the multiplication of all the components of the chiral superfields by the phase $e^{2\pi i/3}$.
These soft-breaking terms are given by \begin{eqnarray}\nonumber V_{soft}& =&m^2_{H_u}|H_u|^2\, +\, m^2_{H_d}|H_d|^2\, +\, m^2_{S}|S|^2\, +\, m^2_{T}|T|^2\,+\, m^2_{Q}|Q|^2 + m^2_{U}|U|^2\,+\,m^2_{D}|D|^2 \\ \nonumber &&+(A_S S H_d\!\cdot\! H_u\, +\, A_{\kappa} S^3\, +\, A_T H_d\!\cdot\! T H_u \, +\, A_{TS} S \textrm{Tr}[T^2]\\ &&\,+\, A_U U H_u\!\cdot\! Q\, +\, A_D D H_d\!\cdot\! Q + \rm{h.c.}), \label{softp} \end{eqnarray} while the D-terms are given by \begin{equation} V_D=\frac{1}{2}\sum_k g^2_k ({ \phi^\dagger_i t^a_{ij} \phi_j} )^2 . \label{dterm} \end{equation} In this article we assume that all the coefficients involved in the Higgs sector are real, in order to preserve CP invariance. The breaking of the $SU(2)_L\times U(1)_Y$ electroweak symmetry is obtained by giving real vevs to the neutral components of the Higgs fields \begin{equation} <H^0_u>=\frac{v_u}{\sqrt{2}}, \quad <H^0_d>=\frac{v_d}{\sqrt{2}}, \quad <S>=\frac{v_S}{\sqrt{2}}, \quad <T^0>=\frac{v_T}{\sqrt{2}}, \end{equation} which give masses to the $W^\pm$ and $Z$ bosons \begin{equation} m^2_W=\frac{1}{4}g^2_L(v^2 + 4v^2_T), \quad m^2_Z=\frac{1}{4}(g^2_L \, +\, g^2_Y)v^2, \quad v^2=(v^2_u\, +\, v^2_d) , \end{equation} and also generate the $ \mu_D=\frac{\lambda_S}{\sqrt 2} v_S+ \frac{\lambda_T}{2} v_T$ term. The non-zero triplet contribution to the $W^\pm$ mass leads to a deviation of the tree-level expression of the $\rho$ parameter, \begin{equation} \rho= 1+ 4\frac{v^2_T}{v^2} . \end{equation} Thus the triplet vev is strongly constrained by the global fit on the measurement of the $\rho$ parameter \cite{rho}, \begin{equation} \rho =1.0004^{+0.0003}_{-0.0004} , \end{equation} which restricts its value to $v_T \leq 5 $ GeV. In our numerical analysis we have chosen $v_T =3 $ GeV. \section{Tree-level Higgs masses}\label{treel} To determine the tree-level mass spectrum, we first consider the tree-level minimisation conditions, \begin{equation}\label{mnc} \partial_{\Phi_i}V|_{vev}=0; \quad V=V_D\, +\,V_F\,+\,V_{soft}, \quad <\Phi_{i,r}>=\frac{v_{i}}{\sqrt 2},\quad \Phi_i=H^0_{u},H^0_{d}, S, T^0, \end{equation} where we have defined the vacuum parameterizations of the fields in the Higgs sector as \begin{equation} H^0_u=\frac{1}{\sqrt{2}}(H^0_{u,r} + i H^0_{u,i}), \quad H^0_d=\frac{1}{\sqrt{2}}(H^0_{d,r} + i H^0_{d,i}), \quad S=\frac{1}{\sqrt{2}}(S_r + i S_i), \quad T^0=\frac{1}{\sqrt{2}}(T^0_{r} + i T^0_{i}). 
\end{equation} from which the soft-breaking masses are derived in the form \begin{align}\label{mnc2} m^2_{H_u}=& \frac{v_d}{2\,v_u} \left(\sqrt{2} A_S v_S-v_T \left(A_T+\sqrt{2} v_S \lambda _T \lambda_{TS}\right)+\lambda _S \left(\kappa v_S^2+v_T^2 \lambda_{TS}\right)\right)\nonumber\\ &-\frac{1}{2}\left(\lambda _S^2 \left(v_d^2-v_S^2\right)+ \frac{1}{2}\lambda _T^2\left(v_d^2+v_T^2\right)+\sqrt{2} \lambda _S v_S \lambda _T v_T\right)\nonumber\\ &+\frac{1}{8}(v_d^2 - v_u^2) \left(g_L^2+g_Y^2\right), \end{align} \begin{align} m^2_{H_d}=& \frac{v_u}{2\,v_d} \left(\sqrt{2} A_S v_S-v_T \left(A_T+\sqrt{2} v_S\lambda _T \lambda _{TS}\right)+\lambda _S \left(\kappa v_S^2+v_T^2 \lambda _{TS}\right)\right)\nonumber\\ &-\frac{1}{2} \left(\lambda _S^2 \left(v_u^2+v_S^2\right)+\frac{1}{2}\lambda _T^2 \left(v_u^2+v_T^2\right)- \sqrt{2} \lambda _S v_S \lambda _T v_T\right)\nonumber\\ &+\frac{1}{8}(v_u^2 - v_d^2) \left(g_L^2+g_Y^2\right), \end{align} \begin{align} m^2_S=& \frac{1}{2 \sqrt{2} v_S}\left(v_T \left(\lambda _T \left(\lambda _S \left(v_d^2+v_u^2\right)-2 v_d v_u \lambda _{TS}\right)-2 A_{TS} v_T\right)+2 A_S v_d v_u\right)\nonumber\\ &-\frac{A_{\kappa} v_S}{\sqrt{2}}+\kappa v_d v_u \lambda _S-\frac{1}{2} \lambda _S^2 \left(v_d^2+v_u^2\right)-\kappa ^2 v_S^2-\kappa v_T^2 \lambda _{TS}-2 v_T^2 \lambda _{TS}^2, \end{align} \begin{align} m^2_T=& \frac{1}{4 v_T}\left(\sqrt{2} v_S \lambda _T \left(\lambda _S \left(v_d^2+v_u^2\right)-2 v_d v_u \lambda _{TS}\right)-2 A_T v_d v_u\right)-\sqrt{2} A_{TS} v_S\nonumber\\ &+\lambda _{TS} \left(v_d v_u \lambda _S-v_S^2 \left(\kappa +2 \lambda _{TS}\right)\right)-\frac{1}{4} \lambda _T^2 \left(v_d^2+v_u^2\right)-v_T^2 \lambda _{TS}^2. \end{align} It can be shown that the second derivative of the potential with respect to the fields satisfy the tree-level stability constraints. The neutral CP-even mass matrix in this case is $4$-by-$4$, since the mixing terms involve the two $SU(2)$ Higgs doublets, the scalar singlet $S$ and the neutral component of the Higgs triplet. After electroweak symmetry breaking, the neutral Goldstone gives mass to the $Z$ boson and the charged Goldstone bosons give mass to the $W^\pm$ boson. Being the Lagrangean CP-symmetric, we are left with four CP-even, three CP-odd and three charged Higgs bosons as shown below \begin{eqnarray}\label{hspc} \rm{CP-even} &&\quad \quad \rm{CP-odd} \quad\quad \rm{charged}\nonumber \\ h_1, h_2, h_3, h_4 &&\quad \quad a_1, a_2, a_3\quad \quad h^\pm_1, h^\pm_2, h^\pm_3. \end{eqnarray} The neutral Higgs bosons are combination of doublets, triplet and singlet, whereas the charged Higgses are a combination of doublets and triplet only. We will denote with $m_{h_i}$ the corresponding mass eigenvalues, assuming that one of them will coincide with the 125 GeV Higgs $(h_{125})$ boson detected at the LHC. The scenarios that we consider do not assume that this is the lightest eigenvalue which is allowed in the spectrum of the theory. Both scenarios with lighter and heavier undetected Higgs states will be considered. In particular, we will refer to those in which one or more Higgses with a mass lower than 125 GeV is present, to {\em hidden Higgs} scenarios. \\ At tree-level the maximum value of the lightest neutral Higgs has additional contributions from the triplet and the singlet sectors respectively. 
The numerical value of the upper bound on the lightest CP-even Higgs mass can be extracted from the relation \begin{equation}\label{hbnd} m^2_{h_1}\leq m^2_Z(\cos^2{2\beta} \, +\, \frac{\lambda^2_T}{g^2_L\,+\,g^2_Y }\sin^2{2\beta}\, +\, \frac{2\lambda^2_S}{g^2_L\,+\,g^2_Y }\sin^2{2\beta}), \qquad \tan\beta=\frac{v_u}{v_d}, \end{equation} whose right-hand side contains two additional contributions from the triplet and the singlet. These can raise the allowed tree-level Higgs mass. Both contributions are proportional to $\sin{2\beta}$, and thus they can be large for a low value of $\tan{\beta}$, as shown in Figure~\ref{mht}. The plots indicate that for higher values of $\lambda_{T,S}$ a lightest tree-level Higgs boson mass of $\sim 125$ GeV can easily be achieved. For generic parameters, the quantum corrections needed in order to raise the mass bound are thus much smaller than in the MSSM. In the case of the MSSM, as we have already mentioned, at tree-level $m_h\leq m_Z$, and we need a correction $\raise0.3ex\hbox{$\;>$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 35$ GeV to match the experimental value of the discovered Higgs boson mass, which leads to a fine-tuning of the SUSY parameters. In fact, this requires that the allowed parameter space of the MSSM is characterized either by large SUSY masses or by large splittings among the mass eigenvalues. \begin{figure}[t] \begin{center} \includegraphics[width=0.6\linewidth]{plots/upperboundmh1.pdf} \caption{The maximum value of the tree-level lightest CP-even Higgs mass as a function of $\tan{\beta}$ for (i) $\lambda_T=0.8,\, \lambda_S=0.1$ (in red), (ii) $\lambda_T=0.1,\, \lambda_S=0.8$ (in green) and (iii) $\lambda_T=0.8,\, \lambda_S=0.8$ (in blue).}\label{mht} \end{center} \end{figure} We have first investigated the tree-level mass spectrum of the Higgs bosons and analysed the prospect of a $\sim 125$ GeV Higgs boson along with the hidden Higgs scenarios. We have looked for tree-level mass eigenvalues where at least one of them corresponds to the Higgs discovered at the LHC. For this purpose we have performed an initial scan of the parameter space \begin{eqnarray}\label{parat} |\lambda_{T, S, TS}| \leq 1, \quad |\kappa|\leq 3, \quad |v_s|\leq 1 \, \rm{TeV}, \quad 1\leq \tan{\beta}\leq 10, \end{eqnarray} and searched for a CP-even Higgs boson around $100-150$ GeV, requiring that at least one of the four eigenvalues $m_{h_i}$ falls within the interval $123$ GeV $\leq m_{h_i}\leq 127$ GeV at one-loop. Figure~\ref{hihj}(a) presents the mass correlation between $m_{h_1}$ and $m_{h_2}$, where we have a CP-even neutral Higgs boson in the $100\leq m_{h_i}\leq 150$ GeV range. The candidate Higgs boson around $125$ GeV will be determined at the one-loop level, by including positive and negative radiative corrections, in the next section. The mass correlation plot at tree-level shows that there are solutions with a very light $h_1$, $m_{h_1}\leq 100$ GeV, which should be confronted with the LEP data \cite{LEPb}. At LEP, searches for the Higgs boson were conducted via the $e^+ e^- \to Z h$ and $e^+ e^- \to h_1 h_2$ channels (in models with multiple Higgs bosons), with the fermionic decay modes ($h\to b\bar{b},\tau \bar{\tau}$ and $Z\to \ell \ell$). 
The higher centre-of-mass energy at LEP II (210 GeV) allowed one to set a lower bound of 114.5 GeV on the SM-like Higgs boson and of 93 GeV on the MSSM-like Higgs boson in the maximal mixing scenario \cite{LEPb}. Interestingly, neither the triplet (in our case) nor the singlet-type Higgs boson couples to the $Z$ or to leptons (see Eq.~(\ref{spt})), and as such they are not excluded by the LEP data. We mark such points, with $\geq 90\%$ triplet/singlet components, which can evade the LEP bounds, in green. In Figure~\ref{hihj}(a) one can immediately see that the model allows for some very light Higgs bosons ($m_{h_1}\leq 100$ GeV). We expect that the possibility of such a hidden Higgs would be explored at the LHC with 14 TeV centre-of-mass energy, whereas the points where $h_1$ is mostly a doublet ($\geq 90\%$) could be ruled out by the LEP data. The points with a mixed scenario for $h_1$ (with doublet, triplet and singlet components) are marked in blue. We remark that a triplet of non-zero hypercharge would not easily satisfy the constraints from LEP, due to its coupling to the $Z$ boson. For the points with $m_{h_1/a_1}\leq 100$ GeV which are mostly doublet (the red ones) it is very hard to satisfy the LEP bounds \cite{LEPb}. This is because, being doublet-like, such a $h_1$ would have been produced at LEP and would have decayed into fermion pairs, which were searched for extensively at LEP. On the other hand the singlet- and triplet-like points (green points) are very difficult to produce at LEP, due to the absence of a coupling to the $Z$ boson, which controls the dominant production channels. This is true for both $e^+e^- \to Zh_1$ and $e^+e^- \to h_1a_1$. Such triplet- and singlet-like states also have reduced decay widths into charged lepton pairs, due to their lack of coupling to fermions. This makes the green points more suitable candidates for the hidden Higgs bosons, both CP-even and CP-odd. However, such a parameter space is highly constrained by the data on the discovered Higgs boson around 125 GeV at the LHC. So far the discovery of the Higgs boson at the LHC has reached $5\sigma$ or more in the channels $h_{125}\to \gamma\gamma, WW^*, ZZ^*$. Effectively this requires that the candidate Higgs around 125 GeV is mostly doublet-like, with decay branching fractions within the uncertainties given by the CMS and ATLAS experiments at the LHC. Such requirements rule out a vast number of parameter points, including some with triplet- and/or singlet-like hidden Higgs boson(s). In section~\ref{Hdata} we consider such constraints coming from the Higgs data at the LHC and from the existing LEP data. Figure~\ref{hihj}(b) shows the mass correlation between ${h_3}$ and $h_4$ for the same region (\ref{parat}) of the parameter space. We see that although there are points characterized by a mass $m_{h_3}$ lighter than 500 GeV, states with $m_{h_4}\leq 500$ GeV are less probable. \begin{figure}[t] \begin{center} \mbox{\hskip -15 pt\subfigure[]{\includegraphics[width=0.55\linewidth]{plots/TLmh1mh2redgreen.pdf}} \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/TLmh3mh4.pdf}}} \caption{Tree-level CP-even Higgs mass correlations (a) $m_{h_1}$ vs $m_{h_2}$ and (b) $m_{h_3}$ vs $m_{h_4}$, where we have a candidate $\sim 125$ GeV Higgs boson. The colors refer to the character of the $h_1$ mass eigenstate, describing the weights of the doublet, singlet and triplet contributions in their linear combinations. 
Red points are $>90\%$ doublets-like, the green points are either $\geq 90\%$ triplet-like or singlet-like and blue points are mixtures of doublet and triplet/singlet components. The linear combinations corresponding to green points are chosen to satisfy the constraints from LEP onto Z and lepton final states. }\label{hihj} \end{center} \end{figure} Figure~\ref{aiaj} shows the mass correlations of the CP-odd neutral Higgs bosons. Specifically, Figure~\ref{aiaj}(a) presents the analysis of the mass correlation between $a_1$ and $a_2$. The plot shows that there exists the possibility of having a pseudo-scalar $a_1$ lighter than 100 GeV, accompanied by a CP-even $\sim 125$ GeV Higgs boson. Note that a very light pseudoscalar Higgs in the MSSM gets strong bounds from LEP \cite{LEPb}. In this case, for a high $\tan{\beta}$, the pair production process $e^+e^- \to h A$, where $A$ is the pseudoscalar of the MSSM, is the most useful one, providing limits in the vicinity of 93 GeV for $m_A$ \cite{LEPb}. In the TNMSSM instead, if the light pseudoscalar Higgs bosons are either of triplet or singlet type then they do not couple to the $Z$, which makes it easier for these states to satisfy the LEP bounds. For this purpose, points which are mostly-triplet or -singlet ($90\%$) have been marked in green; points which are mostly-doublet ($90\%$) in red, whereas the mixed points have been marked in blue as before. Certainly, mass eigenvalues labelled in green would be much more easily allowed by the LEP data, but they would also be able to evade the recent bounds from the LHC $H\tau \tau $ decay mode for a pseudoscalar Higgs \cite{Htautau}. This occurs because neither the triplet nor the singlet Higgs boson couple to fermions (See Eq.~(\ref{spt})). Figure~\ref{aiaj}(b) presents the correlation between $a_2$ and $a_3$ where the same colour code applies for the structure of $a_2$. As one can easily realize from the figure, there are plenty of green coloured points which represent triplet/singlet type $a_2$ states, which can easily evade the recent bounds on pseudoscalar states derived at the LHC \cite{Htautau}. \begin{figure}[t] \begin{center} \mbox{\hskip -15 pt\subfigure[]{\includegraphics[width=0.55\linewidth]{plots/TLmA1mA2redgreen.pdf}} \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/TLmA2mA3redgreen.pdf}}} \caption{Tree-level CP-odd Higgs mass correlations (a) $m_{a_1}$ vs $m_{a_2}$ and (b) $m_{a_2}$ vs $m_{a_3}$, where we have a candidate $\sim 125$ GeV Higgs boson. The red points are $>90\%$ doublets-like and the green points are $\geq 90\%$ triplet-like. The blue points are mixtures of doublet and triplet components for $a_1$ in (a) and for $a_2$ in (b) respectively.}\label{aiaj} \end{center} \end{figure} Figure~\ref{chichj} shows the correlation of the three charged Higgs bosons for the region in parameter space where we can have a $\sim 125$ GeV Higgs candidate. Figure~\ref{chichj}(a) shows that there are allowed points for a charged Higgs of light mass ($m_{h^\pm_1} \lsim 200$ GeV) correlated with a heavier charged Higgs $h^\pm_2$. Only Higgses of doublet and triplet type can contribute to the charged Higgs sector. We have checked the structure of the lightest charged Higgs $h^\pm_1$ in Figure~\ref{chichj}(a), where the red points correspond to $\geq 90\%$ doublet, while the green points correspond to $\geq 90\%$ triplet and the blue points to doublet-triplet mixed states. 
Charged Higgs bosons which are mostly triplet-like in their content (the green points) do not couple to the fermions (see Eq.~(\ref{spt})), and thus can easily evade the bounds on a light charged Higgs derived at the LHC from the $H^\pm \to \tau \nu$ decay channel \cite{chHb}. This kind of triplet charged Higgs boson would also be hard to produce through the conventional decay of the top quark; new production modes as well as decay modes open up due to the $h^\pm_i - Z-W^\mp$ vertex \cite{tssmch1}. In particular, vector boson fusion (VBF) production of a single charged Higgs becomes possible thanks to this non-zero $h^\pm_i - Z-W^\mp$ vertex \cite{tssmch1}. Apart from the $h^\pm_i \to ZW^\pm$ channels, the $h^\pm_i \to a_1(h_1)W^\pm$ channels are also allowed, for very light neutral Higgs bosons ($a_1/h_1$). Figure~\ref{chichj}(b) presents the correlation between $m_{h^\pm_2}$ and $m_{h^\pm_3}$. We have used for $h^\pm_2$ the same colour conventions as in the previous plots. We see that there are only a few triplet-type $h^\pm_2$ states (green points), most of the allowed mass points being doublet-triplet mixed states (blue points). \begin{figure}[t] \begin{center} \mbox{\hskip -15 pt\subfigure[]{\includegraphics[width=0.55\linewidth]{plots/TLmhc1mhc2redgreen.pdf}} \hskip 25 pt \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/TLmhc2mhc3redgreen.pdf}}} \caption{Tree-level charged Higgs mass correlations (a) $m_{h^\pm_1}$ vs $m_{h^\pm_2}$ and (b) $m_{h^\pm_2}$ vs $m_{h^\pm_3}$, where we have a candidate $\sim 125$ GeV Higgs boson. The red points are $>90\%$ doublet-like and the green points are $\geq 90\%$ triplet- or singlet-like. The blue points are mixtures of doublet and triplet/singlet components, for $h^\pm_1$ in (a) and for $h^\pm_2$ in (b) respectively.}\label{chichj} \end{center} \end{figure} \section{Strong and weak sectors}\label{storngw} The TNMSSM contains an additional triplet superfield, which is a colour singlet with electroweak charge, and a singlet superfield (see Eq.~(\ref{spt})) which is not charged under $SU(3)_c\times SU(2)_L\times U(1)_Y$. Therefore, the strong sector of the model is the same as in the MSSM, but supersymmetric F-terms affect the squark mass matrices and contribute to their off-diagonal terms. These generate additional entries in the stop mass matrix from the triplet and singlet vevs, as shown below. These terms are proportional to $\lambda_T v_T$ and $\lambda_S v_S$ respectively, and generate an effective $\mu_D$-term in the model. The triplet contribution is of course restricted, due to the bounds coming from the $\rho$ parameter \cite{rho}. Thus, a large effective $\mu_D$ term can be spontaneously generated by the vev of the singlet, $v_S$. Figure~\ref{stm} shows the mass splitting between the $\tilde{t}_2$ and $\tilde{t}_1$ stops versus $\lambda_S$, for several $v_S$ choices and with $A_t=0$. Large mass splittings can be generated without a large parameter $A_t$, by a suitably large $v_S$, which is a common choice if the singlet is gauged under an extra $U(1)'$ \cite{uprime}, due to the mass bounds on the additional gauge boson $Z'$ \cite{zprime}. 
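A rough numerical cross-check of this behaviour can be obtained by diagonalising the $2\times 2$ stop mass matrix of Eq.~(\ref{stop}) below. The short sketch that follows relies on simplifying assumptions that are ours and not part of the scan: common soft masses $m_{Q_3}=m_{\bar u_3}=1$ TeV, $A_t=0$, $v_T=3$ GeV, $\tan\beta=2$, and the numerically small D-term pieces are dropped.
\begin{verbatim}
import numpy as np

# Sketch: stop mass splitting driven by the singlet vev, cf. Eq. (stop).
# With A_t = 0 the off-diagonal entry is the F-term piece
#   (Y_t v_d / 2) (v_T lambda_T / sqrt(2) - v_S lambda_S).
v, mt, vT, lT, tanb = 246.0, 173.0, 3.0, 0.8, 2.0   # GeV, illustrative
vu = v * tanb / np.sqrt(1.0 + tanb**2)
vd = v / np.sqrt(1.0 + tanb**2)
Yt = np.sqrt(2.0) * mt / vu
mQ3 = mu3 = 1000.0                                  # assumed soft masses (GeV)

def stop_masses(lS, vS, At=0.0):
    m11 = mt**2 + mQ3**2                            # D-terms neglected
    m22 = mt**2 + mu3**2
    m12 = At * vu / np.sqrt(2.0) \
          + 0.5 * Yt * vd * (vT * lT / np.sqrt(2.0) - vS * lS)
    return np.sqrt(np.linalg.eigvalsh([[m11, m12], [m12, m22]]))

for vS in (500.0, 1000.0, 2000.0):
    m1, m2 = stop_masses(0.8, vS)
    print(vS, round(m2 - m1, 1))   # splitting grows with lambda_S * v_S
\end{verbatim}
For nearly degenerate diagonal entries the splitting scales essentially linearly with $\lambda_S v_S$, which is the trend displayed in Figure~\ref{stm}.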
The mass matrices for the stop and the sbottom are given by \begin{eqnarray}\label{stop} M_{\tilde{t}}=\left( \begin{array}{cc} m^2_t+m^2_{Q_3}+\frac{1}{24} \left(g_Y^2-3g_L^2\right) \left(v_u^2-v_d^2\right)\qquad & \frac{1}{\sqrt{2}} A_t v_u+\frac{Y_t v_d}{2} \left( \frac{v_T \lambda _T}{\sqrt 2}- v_S \lambda _S\right) \\ \\ \frac{1}{\sqrt{2}} A_t v_u+\frac{Y_t v_d}{2} \left( \frac{v_T \lambda _T}{\sqrt 2}- v_S \lambda _S\right) & m_t^2+m^2_{\bar{u}_3}+\frac{1}{6} \left(v_d^2-v_u^2\right) g_Y^2 \end{array} \right) \end{eqnarray} \begin{figure}[hbt] \begin{center} {\includegraphics[width=0.6\linewidth]{plots/stopsplitting500.pdf}} \caption{The mass splitting between the stop mass eigen states ($\tilde{t}_{2,1}$) vs $\lambda_S$ for $A_t=0$ with $v_S=500, 1000, 2000$ GeV respectively.}\label{stm} \end{center} \end{figure} \begin{eqnarray}\label{sbt} M_{\tilde{b}}=\left( \begin{array}{cc} m_b^2+m^2_{Q_3}+\frac{1}{24} \left(g_Y^2+3g_L^2\right) \left(v_u^2-v_d^2\right)\qquad & \frac{1}{\sqrt{2}} A_b v_d+\frac{Y_b v_u}{2} \left( \frac{v_T \lambda _T}{\sqrt 2}- v_S \lambda _S\right) \\ \\ \frac{1}{\sqrt{2}} A_b v_d+\frac{Y_b v_u}{2} \left( \frac{v_T \lambda _T}{\sqrt 2}- v_S \lambda _S\right) & m_b^2+m^2_{\bar{d}_3}+\frac{1}{12} \left(v_u^2-v_d^2\right) g_Y^2 \\ \end{array} \right) \end{eqnarray} In the electroweak sector the neutralino ($\tilde{\chi}^0_{i=1,..6}$ ) and chargino ($\tilde{\chi}^\pm_{i=1,2,3}$ ) sector are enhanced due to the extra Higgs fields in the superpotential given in (\ref{spt}). The neutralino sector is now composed of $\tilde{B},\, \tilde{W}_3, \,\tilde{H}_u, \, \tilde{H}_d, \, \tilde{T}_0, \, \tilde{S}$. The corresponding mass matrix is thus now 6-by-6 and given by \begin{eqnarray}\label{ntln} M_{\tilde{\chi}^0}&=\left( \begin{array}{cccccc} M_1 & 0 & -\frac{1}{2} g_Y v_d & \frac{1}{2}g_Y v_u & 0 & 0 \\ 0 & M_2 & \frac{1}{2}g_L v_d & -\frac{1}{2} g_L v_u & 0 & 0 \\ -\frac{1}{2} g_Y v_d & \frac{1}{2}g_L v_d & 0 & \frac{1}{2}v_T \lambda _T-\frac{1}{\sqrt{2}}v_S \lambda_S & \frac{1}{2}v_u \lambda _T & -\frac{1}{\sqrt{2}}v_u \lambda _S \\ \frac{1}{2}g_Y v_u & -\frac{1}{2} g_L v_u & \frac{1}{2}v_T \lambda _T-\frac{1}{\sqrt{2}}v_S \lambda_S & 0 & \frac{1}{2}v_d \lambda _T & -\frac{1}{\sqrt{2}}v_d \lambda _S \\ 0 & 0 & \frac{1}{2}v_u \lambda _T & \frac{1}{2}v_d \lambda _T & \sqrt{2} v_S \lambda _{TS} & \sqrt{2} v_T \lambda _{TS} \\ 0 & 0 & -\frac{1}{\sqrt{2}}v_u \lambda _S & -\frac{1}{\sqrt{2}}v_d \lambda _S & \sqrt{2} v_T \lambda _{TS} & \sqrt{2} \kappa v_S. \\ \end{array} \right)\nonumber\\ \end{eqnarray} The triplino ($\tilde{T}_0$) and the singlino ($\tilde{S}$) masses and mixings are spontaneously generated by the corresponding vevs. The triplino and singlino are potential dark matter candidates and have an interesting phenomenology as they do not couple directly to the fermion superfields. The doublet-triplet(singlet) mixing is very crucial in determining the rare decay rates as well as the dark matter relic densities. Unlike the neutralino sector, the singlet superfield does not contribute to the chargino mass matrix, and hence the MSSM chargino mass matrix is extended by the triplets only. 
The chargino mass matrix in the basis of $\tilde{W}^+, \tilde{H}^+_{u}, \tilde{T}^+_{2} (\tilde{W}^-, \tilde{H}^-_{d}, \tilde{T}^-_{1} )$ takes the form \begin{eqnarray}\label{chn} M_{\tilde{\chi}^\pm}=\left( \begin{array}{ccc} M_2 & \frac{1}{\sqrt{2}}g_L v_u & -g_L v_T \\ \frac{1}{\sqrt{2}}g_L v_d & \frac{1}{\sqrt{2}}v_S \lambda _S+\frac{1}{2}v_T \lambda _T & \frac{1}{\sqrt{2}}v_u \lambda _T \\ g_L v_T & -\frac{1}{\sqrt{2}}v_d \lambda _T & \sqrt{2} v_S \lambda _{TS} \\ \end{array} \right). \end{eqnarray} The chargino decays also have an interesting phenomenology due to the presence of a doublet-triplet mixing. \section{Higgs masses at one-loop}\label{onel} To study the effect of the radiative correction to the Higgs masses, we calculate the one-loop Higgs mass for the neutral Higgs bosons via the Coleman-Weinberg effective potential given in Eq.~(\ref{cwe}) \begin{align}\label{cwe} V_{\rm CW}=\frac{1}{64\pi^2}{\rm STr}\left[ \mathcal{M}^4 \left(\ln\frac{\mathcal{M}^2}{\mu_r^2}-\frac{3}{2}\right)\right], \end{align} where $\mathcal{M}^2$ are the field-dependent mass matrices, $\mu_r$ is the renormalization scale, and the supertrace includes a factor of $(-1)^{2J}(2J+1)$ for each particle of spin J in the loop. We have omitted additional charge and colour factors which should be appropriately included. The corresponding one-loop contribution to the neutral Higgs mass matrix is given by Eq.~(\ref{1Lmh}) \begin{align} (\Delta\mathcal{M}^2_h)_{ij} &=\left.\frac{\partial^2 V_{\rm{CW}}(\Phi)}{\partial \Phi_i\partial \Phi_j}\right|_{\rm{vev}} -\frac{\delta_{ij}}{\langle \Phi_i\rangle}\left.\frac{\partial V_{\rm{CW}}(\Phi)}{\partial \Phi_i}\right|_{\rm{vev}} \nonumber\\ &=\sum\limits_{k}\frac{1}{32\pi^2} \frac{\partial m^2_k}{\partial \Phi_i} \frac{\partial m^2_k}{\partial \Phi_j} \left.\ln\frac{m_k^2}{\mu_r^2}\right|_{\rm{vev}} +\sum\limits_{k}\frac{1}{32\pi^2} m^2_k\frac{\partial^2 m^2_k}{\partial \Phi_i\partial \Phi_j} \left.\left(\ln\frac{m_k^2}{\mu_r^2}-1\right)\right|_{\rm{vev}} \nonumber\\ &\quad-\sum\limits_{k}\frac{1}{32\pi^2}m^2_k \frac{\delta_{ij}}{\langle \Phi_i\rangle} \frac{\partial m^2_k}{\partial \Phi_i} \left.\left(\ln\frac{m_k^2}{\mu_r^2}-1\right)\right|_{\rm{vev}}\ ,\quad \Phi_{i,j}=H^0_{u,r},H^0_{d,r},S_r,T^0_r\ . \label{1Lmh} \end{align} Here, $m^2_k$ is the set of eigenvalues of the field-dependent mass matrices given in the equation above, and we remind that the real components of the neutral Higgs fields are defined as \begin{eqnarray} &H^0_u=\frac{1}{\sqrt{2}}(H^0_{u,r} + i H^0_{u,i}), \quad H^0_d=\frac{1}{\sqrt{2}}(H^0_{d,r} + i H^0_{d,i}),\nonumber\\ &S=\frac{1}{\sqrt{2}}(S_r + i S_i), \quad T^0=\frac{1}{\sqrt{2}}(T^0_{r} + i T^0_{i}). \end{eqnarray}. For simplicity we drop the supertrace expressions in Eq. (\ref{1Lmh}), but for each particle the supertrace coefficient should be taken into account. Having characterized the entire sector of the TNMSSM, we gear up for the numerical evaluation of the one-loop neutral Higgs masses in the model. We have already seen in Eq.~\ref{hbnd} that for low $\tan{\beta}$ the contribution of the radiative corrections required in order to reach the $\sim 125$ GeV Higgs mass, overcoming the tree-level bound in (\ref{hspc}), is reduced. This is due to the additional Higgs and higgsinos running in the loops. 
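As a quick numerical illustration of this point, the bound in Eq.~(\ref{hbnd}) can be evaluated directly. The short script below is only a sketch of this estimate, assuming $v=246$ GeV, $m_Z=91.19$ GeV and trading $g_L^2+g_Y^2$ for $4 m_Z^2/v^2$:
\begin{verbatim}
import numpy as np

# Tree-level upper bound of Eq. (hbnd):
#   m_h1^2 <= m_Z^2 cos^2(2b)
#             + (lambda_T^2 + 2 lambda_S^2)/(gL^2+gY^2) m_Z^2 sin^2(2b)
# Using m_Z^2 = (gL^2+gY^2) v^2 / 4, the second term becomes
#   (lambda_T^2/4 + lambda_S^2/2) v^2 sin^2(2b).
v, mZ = 246.0, 91.19   # GeV (assumed inputs)

def mh1_max(tanb, lT, lS):
    c2b = (1.0 - tanb**2) / (1.0 + tanb**2)
    s2b = 2.0 * tanb / (1.0 + tanb**2)
    return np.sqrt(mZ**2 * c2b**2 + (lT**2/4.0 + lS**2/2.0) * v**2 * s2b**2)

for tanb in (1.5, 2.0, 4.0, 10.0):
    print(tanb, round(mh1_max(tanb, 0.8, 0.8), 1))
# For tan(beta) ~ 1.5-2 the bound already exceeds 125 GeV, while for
# tan(beta) = 10 it falls back towards m_Z, reproducing the trend of the
# tree-level bound plot.
\end{verbatim}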
In our analysis we have chosen the following subregion of the parameter space \begin{eqnarray}\label{scan} &|\lambda_{T, S, TS}| \leq 1, \quad |\kappa|\leq 3, \quad |v_s|\leq 1 \, \rm{TeV}, \quad 1\leq \tan{\beta}\leq 10,\nonumber\\ &|A_{T, S, TS, U, D}|\leq 500,\qquad|A_\kappa|\leq1500, \qquad m^2_{Q_3, \bar{u}_3, \bar{d}_3}\leq1000,\\ &65\leq|M_{1, 2}|\leq1000,\nonumber \end{eqnarray} that we have used in the computation of the Higgs boson mass. In this scan, we have included the radiative corrections to the mass eigenvalues of the neutral sector at one-loop order and retained only those sets of eigenvalues which contain one 125 GeV CP-even Higgs. We have selected the range $65\leq|M_{1, 2}|\leq1000$ in order to avoid the constraints from the Higgs invisible decay, and we use $\mu_r=500$ GeV for the numerical calculation. \begin{figure}[] \begin{center} \mbox{\hskip -15 pt\subfigure[]{\includegraphics[width=0.55\linewidth]{plots/OLdeltamh1lt.pdf}} \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/OLdeltamh1ls.pdf}}} \mbox{\subfigure[]{\includegraphics[width=0.7\linewidth]{plots/OLdeltamh1kappa.pdf}} } \caption{The radiative corrections at one-loop for $m_{h_1}$ vs (a) $\lambda_T$, (b) $\lambda_S$ and (c) $\kappa$. The red points include only the strong (top-stop, bottom-sbottom) corrections, the blue points include the weak corrections without the Higgs bosons (higgsinos, gauge bosons and gauginos), and the black points show the total (strong + weak + Higgs bosons) corrections. }\label{dmh1vsp} \end{center} \end{figure} Figure~\ref{dmh1vsp} shows the radiative corrections to $m_{h_1}$, defined as $\Delta m_{h_{1}}=m^{1-\rm{loop}}_{h_1} - m^{\rm{tree}}_{h_1}$, plotted against (a) $\lambda_T$, (b) $\lambda_S$ and (c) $\kappa$ respectively. The red points show the corrections to $m_{h_1}$ from the strong sector, due to the contributions generated by top-stop and bottom-sbottom running in the loops. The blue points include the corrections from the weak sector with gauge bosons, gauginos and higgsinos, and the black points take into account the total corrections, which include the strong, weak and Higgs-sector contributions. As one can deduce from the plots, the corrections (top-stop, bottom-sbottom) coming from the strong interactions are independent of the triplet and singlet Higgs couplings, as expected, with a maximum shift of 50 GeV with respect to the tree-level mass eigenvalue.\\ In the triplet-singlet extension we have four CP-even and three CP-odd neutral Higgs bosons and three charged Higgs bosons, as shown in Eq.~(\ref{hspc}). These enhance both the Higgs and the higgsino contributions to the radiative corrections. The weak corrections (blue points) are dominated by the large number of higgsinos, which contribute negatively to the mass; their size tends to increase for large values of the Higgs couplings ($\lambda_{T,S}$ and $\kappa$).\\ Finally, the black points show the sum of all the sectors, which is positive in sign, due to the large number of scalars contributing in the loop, with an extra factor of two for the charged Higgs bosons. This factor of two originates from the CW expression of the potential, and accounts for their multiplicity $(\pm)$. Such scalar contributions increase with the values of the corresponding couplings $\lambda_T, \lambda_S, \kappa$. From Figure~\ref{dmh1vsp} one can immediately notice that the electroweak radiative corrections could be sufficient in order to fulfill the requirement of a $\sim 125$ GeV Higgs mass, without any contribution from the strong sector. 
\begin{figure}[t] \begin{center} \mbox{\hskip -15 pt \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/OLmh1mst1.pdf}} \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/OLmh1tanbeta.pdf}}} \caption{The variation of the one-loop lightest CP-even Higgs mass $m_{h_1}$ with (a) the lightest stop mass $m_{\tilde{t}_1}$, and (b) with $\tan{\beta}$, respectively. The yellow band shows the candidate Higgs mass $123\leq m_{h_1} \leq 127$ GeV. }\label{mh1lvsp} \end{center} \end{figure} To illustrate this point, in Figure~\ref{mh1lvsp}(a) we have plotted the lightest CP-even neutral Higgs mass at one-loop versus the lighter stop mass ($m_{\tilde{t}_1}$). We have used the same color coding conventions of the tree-level analysis. The red points are mostly doublets ($\geq 90\%$), the green points are mostly triplet/singlet($\geq 90\%$) and blue points are mixed ones, as explained in section~\ref{treel}. The yellow band shows the Higgs mass range $123\leq m_{h_1} \leq 127$ GeV. We notice that a $\sim 125$ GeV CP-even neutral Higgs could be achieved by requiring a stop of very low mass, as low as 100 GeV. This is due to the presence of additional tree-level and radiative corrections from the Higgs sectors. Thus, in the case of extended SUSY scenarios like the TNMSSM, the discovery of a $\sim 125$ GeV Higgs boson does not put a stringent lower bound on the required SUSY mass scale, and one needs to rely on direct SUSY searches for that. In Figure~\ref{mh1lvsp}(b) we present the dependency of the one-loop corrected Higgs mass of the lightest CP-even neutral Higgs on $\tan{\beta}$. The distribution of points is clearly concentrated at low values of $\tan{\beta} \lsim 4$. This is due to the additional contributions on the tree-level Higgs masses, which are maximal in the same region of $\tan{\beta}$ (see Eq.~(\ref{hbnd})). It is then clear that an extended Higgs sector reduces the amount of fine-tuning \cite{tssmyzero} needed in order to reproduce the mass of the discovered Higgs boson, compared to constrained supersymmetric scenarios. The latter, in general, require much larger supersymmetric mass scales beyond the few TeV \cite{cMSSMb} region. Compared to the pMSSM, this also represents an improvement, as it does not require large mixings in the stop masses in order to have the lighter stop mass below a TeV \cite{pMSSMb}. \begin{figure} \begin{center} \mbox{\hskip -15 pt \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/OLmh1mh2redgreenS.pdf}} \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/OLmA1mA2redgreenS.pdf}}} \caption{The mass correlations at one-loop (a) $m_{h_1}-m_{h_2}$ and (b) $m_{a_1}-m_{a_2}$ where we have a CP-even candidate Higgs in the mass range $123\leq m_{h_i} \leq 127$ GeV. Red, blue and green points are defined as in Figure 2. }\label{mhimhj} \end{center} \end{figure} \subsection{Hidden Higgs bosons} Next we investigate the case in which we have one or more hidden Higgs bosons, lighter in mass than 125 GeV, scalars and/or pseudoscalars. In Figure~\ref{mhimhj} we present the mass correlations at one-loop for (a) $m_{h_1}-m_{h_2}$ and (b) $m_{a_1}-m_{a_2}$, where we have a CP-even candidate Higgs boson in the mass range $123\leq m_{h_i} \leq 127$ GeV. The red points are mostly doublets ($\geq 90\%$), green points are mostly triplets/singlets ($\geq 90\%$) and blue points are mixed ones, as already explained. 
The green points have a high chance of evading the LEP bounds \cite{LEPb}, showing that the possibility of having a hidden scalar sector is realistic, even after taking into account the radiative corrections to the mass spectrum. A closer inspection of Figure~\ref{mhimhj}(a) reveals that there are points where both $m_{h_1}$ and $m_{h_2}$ are less than $100$ GeV, showing that there is the possibility of having two CP-even hidden Higgs bosons. In that case $h_3$ is the candidate Higgs at $\sim 125$ GeV. Similarly, Figure~\ref{mhimhj}(b) shows the possibility of having two hidden pseudoscalars. The arguments presented in section~\ref{treel} apply to the Higgs masses at one-loop as well. These imply that for $m_{h_1/a_1}\leq 123$ GeV the green points could evade the bounds from LEP and the LHC, the red points would be ruled out, and the blue points need to be carefully confronted with the data. In section~\ref{Hdata} we analyse such scenarios in detail. The lightest pseudoscalar present in the spectrum, as we are going to discuss below, can play a significant role in cosmology. In fact, it is crucial in enhancing the dark matter annihilation cross-section, which is needed in order to obtain the correct dark matter relic density in the universe \cite{Arina:2014yna}. \section{$\beta$-functions and the running of the couplings}\label{beta} We have implemented the model in SARAH (version 4.5.5) \cite{sarah} in order to generate the vertices and the model files for CalcHep \cite{calchep}, and have generated the $\beta$ functions for the dimensionless couplings and the other soft parameters. The $\beta$ functions for $\lambda_{T, S, TS}$, $\kappa, g_Y, g_L, g_c, y_{t, b}$ are given in the appendix \ref{RGs}. \begin{figure}[t] \begin{center} \mbox{\subfigure[]{\hskip -15 pt \includegraphics[width=0.55\linewidth]{plots/rglt.pdf}} \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/rglslt.pdf}}} \mbox{\subfigure[]{\hskip -15 pt \includegraphics[width=0.55\linewidth]{plots/rgltlslts.pdf}} \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/rgltlskappa.pdf}}} \caption{The running of the dimensionless Higgs couplings $\lambda_{T, S, TS}$ and $\kappa $ with the log of the ratio of the scales ($\ln{\mu/\mu_0}$) for $\tan{\beta}=1.5$ (solid lines) and $\tan{\beta}=10$ (dashed lines), where $\mu_0=M_Z$. We show the corresponding evolution for (a) $\lambda_{T}=0.8$, (b) $\lambda_{T, S}=0.8$, (c) $\lambda_{T, S, TS}=0.8$ and (d) $\lambda_{T, S}=0.8, \kappa=2.4$, chosen at the scale $\mu_0$, respectively.}\label{rgv} \end{center} \end{figure} To analyse the perturbativity of the couplings we have selected four different scenarios and identified the cut-off scale ($\Lambda$) in the renormalization group evolution at which one of the couplings hits the Landau pole and becomes non-perturbative ($\lambda_i(\Lambda)=4\pi$). Figure~\ref{rgv}(a) presents a mostly-triplet scenario at the electroweak scale, where we choose $\lambda_T=0.8$, $\lambda_{S,TS}=0.1, \kappa=0.3$ at the scale $\mu_0=M_Z$, for $\tan{\beta}=1.5$ (solid lines) and $\tan{\beta}=10$ (dashed lines). For lower values of $\tan{\beta}$ ($\tan{\beta}=1.5$) the triplet coupling $\lambda_T$ becomes non-perturbative already at a scale $\Lambda \sim 10^{9-10}$ GeV, similarly to the behaviour found in the triplet-extended MSSM \cite{FileviezPerez:2012gg, nardini}. For larger values of $\tan{\beta}$ ($\tan{\beta}=10$) all the couplings remain perturbative up to the Grand Unification (GUT) scale ($\Lambda \sim 10^{16}$ GeV). 
Figure~\ref{rgv}(b) presents the case where $\lambda_{T, S}=0.8$ at $\mu_0=M_Z$. We see that although the $\tan{\beta}$ dependence becomes less pronounced, the theory becomes non-perturbative at a lower scale, $\Lambda \sim 10^{8}$ GeV. From Figure~\ref{rgv}(c) it is evident that if, on top of $\lambda_T$ and $\lambda_S$, we also choose $\lambda_{TS}=0.8$ at $\mu_0=M_Z$, the $\tan{\beta}$ dependence almost disappears. In this case the theory becomes more constrained, with a cut-off scale $\Lambda \sim 10^{6}$ GeV. Finally, Figure~\ref{rgv}(d) illustrates the effect of a larger value of $\kappa$, the singlet self-coupling, with $\kappa=2.4$ at $\mu_0=M_Z$. The perturbative behaviour of the theory comes under question at a scale as low as $10^4$ GeV. Such a large value of $\kappa$ at the electroweak scale thus restricts the upper scale of the theory to lie below 10 TeV, unless one extends the theory with an extra sector\footnote{For the scan in Eq.~\ref{scan} we select $|\kappa|\leq 3 $. The perturbativity of the parameter points has to be checked explicitly.}. Choosing relatively lower values of $\lambda_{TS}$ and $\kappa$ would allow the theory to stay perturbative up to $10^{8-10}$ GeV even with $\lambda_{T, S}$ as large as $0.8$. The choice of larger values of $\lambda_{T, S}$ increases the tree-level contributions to the Higgs mass (see Eq.~(\ref{hbnd})) as well as the radiative corrections, via the additional Higgs bosons exchanged in the loops. Both of these contributions significantly reduce the amount of supersymmetric fine-tuning required by a Higgs boson of $\sim 125$ GeV in the spectrum, with respect both to a generic and to a constrained MSSM scenario. Obviously, the addition of the triplet spoils the gauge coupling unification under the renormalization group evolution. This feature is already evident in the triplet-extended MSSM \cite{FileviezPerez:2012gg, nardini}. \section{Fine-tuning}\label{finet} The minimisation conditions given in Eq.~(\ref{mnc2}) relate the $Z$ boson mass to the soft-breaking parameters in the form \begin{eqnarray} M_Z^2&=&\mu^2_{\rm{soft}}-\mu_{\rm{eff}}^2\\ \mu_{\rm{eff}}&=&v_S \lambda_S-\frac{1}{\sqrt 2}v_T\lambda_T, \quad \mu^2_{\rm{soft}}=2\frac{m_{H_d}^2-\tan^2\beta\, m_{H_u}^2}{\tan^2\beta -1}. \end{eqnarray} It is also convenient to introduce the additional parameter \begin{eqnarray}\label{ft} \mathcal{F}&=&\left|\ln\frac{\mu^2_{\rm{soft}}-\mu_{\rm{eff}}^2}{\mu^2_{\rm{soft}}}\right|, \end{eqnarray} characterizing the ratio between $M^2_Z$ and $\mu^2_{\rm{soft}}$, which can be considered a measure of the fine-tuning. Unlike the MSSM, here the $\mu_{\rm{eff}}$ parameter is generated spontaneously by the singlet and triplet vevs. Notice that while the triplet contribution is bounded by the $\rho$ parameter \cite{rho}, the singlet vev is unbounded and may drive $\mu_{\rm{eff}}$ to a large value. Similarly, the soft parameters $m_{H_u, H_d}$, which are determined by the minimisation condition (\ref{mnc2}), can be very large, and thus can make $\mu^2_{\rm{soft}}$ also large. Finally, to reproduce the $Z$ boson mass we need large cancellations between these terms, which leads to the well-known fine-tuning problem of the MSSM and of other supersymmetric scenarios. 
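As a rough numerical orientation on the size of $\mathcal{F}$ (a simple estimate, not a result of the scan), note that the two relations above combine into $\mathcal{F}=\left|\ln\left(M^2_Z/\mu^2_{\rm{soft}}\right)\right|$, so that for instance \begin{equation} \mu^2_{\rm{soft}}\sim 10^6\,\,{\rm GeV}^2 \quad \Longrightarrow \quad \mathcal{F}\simeq\left|\ln\frac{(91\,\,{\rm GeV})^2}{10^6\,\,{\rm GeV}^2}\right|\simeq 4.8\, , \end{equation} which anticipates the values $\mathcal{F}\sim 5$ found below for large $\lambda_{T,S}$.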
\begin{figure}[bht] \begin{center} \mbox{\subfigure[]{\hskip -15 pt \includegraphics[width=0.55\linewidth]{plots/TLEWfinetuning.pdf}} \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/OLEWfinetuning.pdf}}} \caption{The fine-tuning measures $\mu^2_{\rm{soft}}$ and $-\mu^2_{\rm{eff}}$ versus the singlet vev $v_S$, at (a) tree-level and (b) one-loop, for a candidate Higgs with mass in the range $120\leq m_{h_1} \leq 130$ GeV. The violet points represent $\mu^2_{\rm{soft}}$ values for $\lambda_{S,T}\geq 0.5$ and the blue points $\mu^2_{\rm{soft}}$ values for $\lambda_{S,T}<0.5$. The green points represent $\mu^2_{\rm{eff}}$ values for $\lambda_{S,T}\geq 0.5$ and the orange points $\mu^2_{\rm{eff}}$ values for $\lambda_{S,T}< 0.5$. The red line shows the $Z$ boson mass $M_Z$.}\label{fnt} \end{center} \end{figure} We show in Figure~\ref{fnt}(a) plots of $\mu^2_{\rm{soft}}$ and $-\mu^2_{\rm{eff}}$ versus the singlet vev $v_S$ for tree-level candidate Higgs masses in the interval $120\leq m_{h_1} \leq 130$ GeV. Figure~\ref{fnt}(b) presents the same plots, but with $m_{h_1}$, the candidate Higgs mass, calculated at one-loop. The violet points represent $\mu^2_{\rm{soft}}$ values for which $\lambda_{S,T}\geq 0.5$, and the points in blue refer to values of $\mu^2_{\rm{soft}}$ with $\lambda_{S,T}<0.5$. The green points mark values of $\mu^2_{\rm{eff}}$ with $\lambda_{S,T}\geq 0.5$, and the orange points refer to $\mu^2_{\rm{eff}}$ values with $\lambda_{S,T}< 0.5$. We see that for low $\lambda_{T,S}$ both $\mu^2_{\rm{soft}}$ and $-\mu^2_{\rm{eff}}$ (blue and orange points) are small, so that the cancellation required in order to reproduce the $Z$ boson mass is also small. This leads to less fine-tuning, measured by $\mathcal{F}< 1$. Unfortunately, such points are few in number in the tree-level case, since they require the extra contributions from the triplet and the singlet in order to reproduce the $\sim 125$ GeV Higgs mass. For $\lambda_{T,S}\geq 0.5$, $\mu^2_{\rm{soft}}$ and $-\mu^2_{\rm{eff}}$ (the violet and green points) are both very large, leading to large cancellations and thus to a fine-tuning parameter $\mathcal{F}\sim 5$ for $\mu^2_{\rm{soft}},-\mu^2_{\rm{eff}} \sim 10^6$ GeV$^2$. \\ Comparing Figure~\ref{fnt}(a) and Figure~\ref{fnt}(b) we see that the tree-level Higgs mass requires more fine-tuning, with $\mu^2_{\rm{soft}, \rm{eff}} \sim 10^6$ GeV$^2$ for large $\lambda_{T, S}$. The situation improves significantly at one-loop due to the contributions from the radiative corrections. This is due to the fact that there are more solutions with low values of the couplings, $\lambda_{T, S}<0.5$, compared to the tree-level case and, on top of this, the required fine-tuning is reduced both for high and for low $\lambda_{T,S}$ ($\mathcal{F}\lesssim 2$). This fine-tuning measure is a theoretical estimate, but it is constrained by the lightest chargino mass bound from LEP ($m_{\tilde{\chi}^\pm_1}> 104$ GeV), which results in $\mu_{\rm{eff}}>104$ GeV and $\mathcal{F}\raise0.3ex\hbox{$\;>$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 0.2$. We have performed a run of $m^2_{H_u, H_d}$ using the corresponding $\beta$-functions for large $\lambda_{T,S}$, from the electroweak scale ($M_Z$) up to a high-energy scale $\sim 10^{9,10}$ GeV, where the couplings become non-perturbative. It can be shown that $m^2_{H_u, H_d}$ and $\mu^2_{\rm{soft}}$ do not blow up unless the couplings $\lambda_{T,S}$ hit the Landau pole. 
The requirement of perturbativity of the evolution gives stronger bounds on the range of validity of the theory, and the fine-tuning parameter is a good indicator at the electroweak scale. In the case of the MSSM, the large effective quartic coupling comes from the strong SUSY sector, which also increases $m^2_{H_u}$ and other parameters. However the situation changes in the case of extended Higgs sectors, which give additional tree-level as well as quantum corrections to the Higgs masses. These reduce the need for a large $m^2_{H_u}$. In our case the singlet and the triplet contribute substantially at tree-level for low $\tan\beta$ and also contribute at the quantum level. If the Higgs mass has to be reproduced already at tree-level, the extra tree-level contributions demand very large couplings, $\lambda_{T,S} \sim 0.8$, which in turn make $\mu_{\rm{eff}}$, and thus the fine-tuning, very large. In the case of the Higgs mass at one-loop, instead, the extra contributions from the extended Higgs sector are shared between the tree-level terms and the quantum corrections, which reduces the requirement of large $\lambda_{T,S}$. This reduces $\mu_{\rm{eff}}$ and hence the fine-tuning $\mathcal{F}$. \section{A light pseudoscalar in the spectrum}\label{axion} In the limit in which the $A_i$ parameters in Eq.~(\ref{softp}) go to zero, the discrete $Z_3$ symmetry of the Lagrangian is promoted to a continuous $U(1)$ symmetry, given by Eq.~(\ref{csmy}). \begin{figure} \begin{center} \mbox{\subfigure[]{ \hskip -15 pt \includegraphics[width=0.55\linewidth]{plots/OLmA1mh1lps.pdf}} \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/OLmA1mh1axi.pdf}}} \caption{The lightest CP-even Higgs boson mass $m_{h_1}$ vs the lightest pseudo-scalar mass $m_{a_1}$ at one-loop (top-stop and bottom-sbottom corrections). The violet-yellow band indicates the candidate Higgs mass range $123\leq m_{h_1} \leq 127$ GeV. The violet band marks the points with $m_{a_1}\leq 1$ MeV, where the $a_1 \to e^+e^-$ decay is kinematically forbidden. Red, blue and green points are defined as in Figure 2.}\label{mAA1} \end{center} \end{figure} This symmetry is spontaneously broken by the vevs of the doublet, triplet and singlet fields, and the spectrum should then contain a physical massless pseudoscalar, $a_1$, the Nambu-Goldstone boson of the symmetry. The soft-breaking parameters will then lift the mass of $a_1$, turning it into a pseudo-Goldstone mode whose mass will depend on the $A_i$. The symmetry takes the form \begin{eqnarray}\label{csmy} (\hat{H}_u,\hat{H}_d, \hat{T},\hat{ S}) \to e^{i\phi}(\hat{H}_u,\hat{H}_d, \hat{T},\hat{ S}) . \end{eqnarray} If this symmetry is softly broken by very small parameters $A_i$ of $\mathcal{O} (1)$ GeV, we get a very light pseudoscalar \cite{nmssm, Agashe:2011ia} which could be investigated at the cosmological level. Notice that the vector-like nature of the symmetry decouples this pseudoscalar from any anomalous behaviour. We are going to briefly investigate some features of the $a_1$ state in the context of the recent Higgs discovery, and we will consider two different realizations. In the first case we consider a scenario where such a continuous symmetry is broken very softly. In this case we choose the $A_i$ parameters to be $\mathcal{O}(1)$ GeV. We expect the pseudo-Goldstone boson to be very light, with a mass $\mathcal{O}(1)$ GeV. In Figure~\ref{mAA1} we show the mass correlation between the lightest CP-even neutral Higgs boson $h_1$ and the lightest massive CP-odd neutral Higgs boson $a_1$. 
The red points are of doublet type, the green points represent massive states of triplet/singlet type and the blue points represent mixed contributions to the $a_1$ pseudoscalar. The violet-yellow band presents the region of parameter space where $h_1$ is the candidate Higgs, with a mass $123\leq m_{h_1} \leq 127$ GeV. It is rather clear from Figure~\ref{mAA1}(a) that there are plenty of points in the parameter space where there could be a hidden pseudoscalar Higgs boson, with or without a CP-even hidden scalar. Such a light pseudoscalar boson is subject to strong experimental bounds from the LEP searches \cite{LEPb} and from the bottomonium decay rates \cite{bottomium}. In the mass range of 5.5--14 GeV, when it couples to fermions, it is also strongly bounded by the recent CMS data at the LHC \cite{cmsab}. For the triplet/singlet-like green points these bounds can be evaded quite easily, since these states do not couple to fermions and do not couple to the $Z$ boson (the singlet does not couple to the gauge bosons at all). Of course, for the physical mass eigenstates the doublet-triplet/singlet mixing is crucial in the characterization of the allowed parameter space. For a mass of the $a_1$ of $\mathcal{O}(100)$ MeV, the decays to $\pi \gamma \gamma$, $\pi \pi \pi$ could be interesting channels to investigate in order to search for this state \cite{infnr}. The simpler 2-particle decay channel $a_1\to \pi\pi$ is not allowed, due to the CP conservation of the model. Through its mixing with the doublets, this state decays into fermion pairs $e^+e^-, \mu \bar{\mu}, \tau\bar{\tau}$, if kinematically allowed. Notice that there is no discrete symmetry which protects this state from decaying, and this prevents it from being a stable dark matter candidate \cite{Mambrini:2011ri}. Now, if we choose the $A_i$ parameters to be of $\mathcal{O}(1)$ MeV, then we get a very light pseudoscalar boson with a mass of the same order, as shown in Figure~\ref{mAA1}(b). Such a boson cannot decay to $\mu \bar{\mu}$ or $\tau\bar{\tau}$ kinematically. Following the same reasoning, if its mass is $<1$ MeV, then even the $a_1 \to e^+e^-$ channel is closed and only the photon channel remains open for its decay. In this case the $a_1$ resembles an axion-like particle, and can be a dark matter candidate only if its lifetime is larger than the age of the universe \cite{infnr, Bernabei:2005ca}. The pseudoscalar, in this case, couples to photons at one-loop through its doublet component, which provides a direct interaction with the fermions.\\ We recall that the lifetime of a light pseudoscalar decaying into two photons is given by Eq.~(\ref{agg}) \cite{Bernabei:2005ca} \begin{eqnarray}\label{agg} \tau_a =\frac{64 \pi}{g^2_{a\gamma\gamma}m^3_{a}} \end{eqnarray} where $g_{a\gamma\gamma}$ is the effective pseudoscalar-photon coupling, which is proportional to the doublet-triplet/singlet mixing. Notice that the $a_1$ shares some of the behaviour of axion-like particles, which carry a mass that is unrelated to their typical decay constant, and as such are not described by a standard Peccei-Quinn construction. They find a consistent description in the context of extensions of the SM with extra anomalous abelian symmetries \cite{ga0, ga1} and carry a direct anomalous (contact) interaction with photons. Such an interaction is absent in the case of the $a_1$ state. \\ Along with the lightest neutralino of the TNMSSM, this particle can be a dark matter candidate. 
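To get a feel for the numbers involved in Eq.~(\ref{agg}), the lifetime can be evaluated directly. The sketch below is purely illustrative: the value used for $g_{a\gamma\gamma}$ is an arbitrary assumption, introduced only to show the scaling with the coupling and the mass, and is not a prediction of the model.
\begin{verbatim}
import numpy as np

hbar   = 6.582e-25    # GeV*s, conversion from natural units
t_univ = 4.35e17      # age of the universe in seconds (~13.8 Gyr)

def tau_a(g_agg, m_a):
    # Eq. (agg): tau = 64 pi / (g^2 m^3), with g in 1/GeV and m in GeV
    return 64.0 * np.pi / (g_agg**2 * m_a**3) * hbar   # in seconds

# Illustrative inputs: m_a = 0.5 MeV (below the e+e- threshold) and a
# strongly suppressed effective coupling, taken here as 1e-16 /GeV.
m_a, g_agg = 0.5e-3, 1.0e-16
print(tau_a(g_agg, m_a) / t_univ)   # > 1: longer-lived than the universe
\end{verbatim}
Since $\tau_a$ scales as $1/g^2_{a\gamma\gamma}$, only a sufficiently suppressed doublet admixture makes the lifetime exceed the age of the universe, as required for $a_1$ to act as a dark matter candidate.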
In the supersymmetric context a similar scenario, with two dark matter candidates, has been discussed in \cite{ga2}. The role of this pseudoscalar state in the context of the recent FERMI results on the 1-3 GeV gamma-ray excess from the galactic center \cite{Arina:2014yna} is under investigation for this model \cite{pb3}. \section{$\sim 125$ GeV Higgs and LHC data}\label{Hdata} In this section we consider the one-loop Higgs mass spectrum, including only the corrections coming from quarks and squarks, in light of the recent results from the LHC \cite{CMS,CMS2, ATLAS} and of the existing data from LEP \cite{LEPb}. In particular, we consider the uncertainties in the decay modes of the Higgs to $WW^*$, $ZZ^*$ and $\gamma\gamma$ in a conservative way \cite{CMS, ATLAS}. We explore the scenario where one of the CP-even neutral scalars is the candidate $\sim 125$ GeV Higgs boson, within the mass range $123\leq m_{h_i}\leq 127$ GeV, and investigate the possibility of having one or more light scalars, CP-even and/or CP-odd, allowed by the LEP data and consistent with the recent Higgs decay branching fractions at the LHC. We recall that in the TNMSSM the triplet- and singlet-type Higgs bosons do not couple to the $Z$ boson, but the triplet couples to the $W^\pm$ bosons, which results in modified $h_i\, W^\pm\,W^\mp$ vertices given by \begin{eqnarray} h_i\,W^\pm\,W^\mp = \frac{i}{2}\,g_L^2\left(v_u\mathcal{R}^S_{i1} +v_d\mathcal{R}^S_{i2} +4\,v_T\mathcal{R}^S_{i4}\right), \end{eqnarray} where the rotation matrix $\mathcal{R}^S_{ij}$ is defined by $h_i= \mathcal{R}^S_{ij} H_j$. The vertices $h_i\,Z\,Z$ are given by \begin{eqnarray} h_i\,Z\,Z = \frac{i}{2}\left(g_L\cos\theta_W+g_Y\sin\theta_W\right)^2\left(v_u\mathcal{R}^S_{i1} +v_d\mathcal{R}^S_{i2}\right), \end{eqnarray} where $\theta_W$ is the Weinberg angle. The Yukawa part of the superpotential is just the MSSM one. Hence the couplings of the CP-even sector to the up/down-type quarks and to the charged leptons are \begin{eqnarray} h_i\,u \,\bar u = -\frac{i}{\sqrt2}y_u \mathcal{R}^S_{i1},\\ h_i\,d \,\bar d = -\frac{i}{\sqrt2}y_d \mathcal{R}^S_{i2},\\ h_i\,\ell \,\bar\ell = -\frac{i}{\sqrt2}y_\ell \mathcal{R}^S_{i2}, \end{eqnarray} respectively. On the other hand, in the Higgs boson decay into di-photons there are more virtual particles contributing in the loop than in the SM. This is due to the enlarged Higgs and strong sectors, whose charged states have a non-zero coupling to the photon. In particular there are three charginos ($\chi_{1,2,3}^\pm$), three charged Higgs bosons ($h_{1,2,3}^\pm$), the stops ($\tilde{t}_{1,2}$) and the sbottoms ($\tilde{b}_{1,2}$). Compared to the MSSM and the NMSSM we have two additional charged Higgs bosons and one additional chargino which contribute to the decay. The decay rate in the di-photon channel is given by \begin{align}\label{gammagamma} \Gamma(h\rightarrow\gamma\gamma)&=\frac{\alpha\,m_h^3}{1024\,\pi^3} \Big|\frac{g_{hWW}}{m_W^2}\,A_1(\tau_W)+\sum_{ \chi_i^\pm,\,t,\, b}2\frac{g_{hf\bar{f}}}{m_f}N^c_f\, Q_f^2\, A_{1/2}(\tau_f)\\ &+\sum_{h_i^\pm,\,\tilde{t}_i,\,\tilde{b}_i}\frac{g_{hSS}}{m_S^2}N_S^c\,Q_S^2\,A_0(\tau_S)\Big|^2,\nonumber \end{align} where $N_{f, S}^c$ are the colour factors of the fermions and scalars, $Q_{f, S}$ their electric charges in units of $|e|$, and $\tau_i=\frac{m_h^2}{4\,m_i^2}$. 
$A_0, \,A_{1/2}$ and $A_1$ are the spin-0, spin-1/2 and spin-1 loop functions \begin{eqnarray} &&A_0(x)=-\frac{1}{x^2}\left(x-f(x)\right),\\ &&A_{1/2}(x)=\frac{2}{x^2}\left(x+(x-1)f(x)\right),\\ &&A_1(x)=-\frac{1}{x^2}\left(2\,x^2+3\,x+3(2\,x-1)f(x)\right), \end{eqnarray} with the analytic continuation \begin{eqnarray} f(x)=\left\{ \begin{array}{lr} \arcsin^2(\sqrt{x})& x\leq1\\ -\frac{1}{4}\left(\ln\frac{1+\sqrt{1-1/x}}{1-\sqrt{1-1/x}}-i\pi\right)^2& x>1 \end{array}\right. \end{eqnarray} In the limit of heavy particles in the loop, we have $A_0\rightarrow 1/3$, $A_{1/2}\rightarrow 4/3$ and $A_1\rightarrow-7$.\\ Using the expressions above, we study the decay rate of the discovered Higgs boson ($h_{125}$) into di-photons in this model. We also check the consistency of light scalar(s) and/or light pseudoscalar(s) with the current data from the LHC and the older LEP data. Such an analysis is presented in Figure~\ref{higgsdata}. Figure~\ref{higgsdata}(a) shows the hidden Higgs scenarios with one $a_1$ and/or one $h_1$ below 123 GeV, which are realized in a significant portion of the parameter space. \begin{figure} \begin{center} \mbox{\subfigure[]{\hskip -15 pt \includegraphics[width=0.55\linewidth]{plots/lpsWZgood.pdf}} \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/axiWZgood.pdf}}} \caption{The lightest CP-even Higgs boson mass $m_{h_1}$ vs the lightest pseudo-scalar mass $m_{a_1}$ at one-loop (top-stop and bottom-sbottom corrections), consistent with the Higgs data from CMS, ATLAS and LEP. The red points correspond to the case where $m_{h_1}\sim m_{125}$, the orange points correspond to mass values of $m_{h_1}$ and $m_{a_1}$ where $m_{h_2}\sim m_{125}$, and all of them satisfy the $ZZ^*$, $WW^*$ bounds at $1\sigma$ and the $\gamma\gamma$ bound at the $2\sigma$ level from both CMS and ATLAS. The red (orange) points which satisfy the $\gamma\gamma$ result at $1\sigma$ are marked green (blue). Very light pseudoscalar masses $m_{a_1}\leq 1$ MeV are shown in panel (b), which is a zoom of the small mass region of (a).}\label{higgsdata} \end{center} \end{figure} We first consider the results coming from both CMS and ATLAS for the decay of the Higgs into the $WW^*$ and $ZZ^*$ modes at $1\sigma$ \cite{CMS, CMS2, ATLAS}, and also consider the cross-section bounds from LEP \cite{LEPb}. The allowed mass values are shown as red points, for which the lightest CP-even Higgs boson ($h_1$) is the detected Higgs at $\sim 125$ GeV. Clearly we see that many light pseudo-scalars ($\leq 100$ GeV) are allowed. The orange points present the scenario where $m_{h_2}\sim m_{125}$, which leaves both $h_1$ and $a_1$ hidden ($< 125$ GeV). We have performed additional tests of such points and compared them with the results for the decay of the Higgs boson into di-photons at the LHC, both from CMS \cite{cmsgamma} and ATLAS \cite{atlasgamma}. The red points (with one hidden Higgs boson) which satisfy the $h_{125}\to \gamma \gamma$ rate at the $1\sigma$ level are marked in green. The orange points (with two hidden Higgs bosons), when allowed at the $1\sigma$ level, are marked in blue. Notice that all the points in Figure~\ref{higgsdata} are allowed at $1\sigma$ by the $WW^*$, $ZZ^*$ channels and at $2\sigma$ by the $\gamma \gamma$ channel. These requirements automatically bring the fermionic decay modes closer to the SM expectation. Of course the uncertainties on these decay widths leave room for $h_{125}\to a_1a_1/h_1h_1$. Notice also the presence of very light pseudoscalar mass values near $m_{a_1} \sim 0$. 
Figure~\ref{higgsdata}(b) is a zoom of this region, where such solutions are shown for $m_{a_1} \leq 1$ MeV. The points in this case correspond to possible $a_1$ states which do not decay into any charged fermion pair ($m_{a_1} \leq 2m_{e}$) and have an interesting phenomenology, as briefly pointed out in section~\ref{axion}. The fact that such mass values only allow a decay of this particle to two photons via doublet mixing mediated by a fermion loop, makes the $a_1$ a possible dark matter candidate, being long lived. Two hidden Higgs bosons render the phenomenology very interesting, allowing both the $h_{125} \to a_1 a_1$ and the $h_{125} \to h_1 h_1$ decay channels \cite{hdlh, ehdc}. In Figure~\ref{bps} we show some of the points in this model as benchmark points (BMP's), which are allowed both by LHC \cite{CMS, ATLAS} and LEP \cite{LEPb} data. \begin{figure} \begin{center} \mbox{\hskip -15 pt\subfigure[]{\includegraphics[width=0.55\linewidth]{plots/BMP1.pdf}} \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/BMP2.pdf}}} \mbox{\subfigure[]{\includegraphics[width=0.55\linewidth]{plots/BMP3.pdf}} } \caption{We show the benchmark points of the model which are allowed both by LHC \cite{CMS, ATLAS} and LEP\cite{LEPb} data. The neutral Higgs spectrum has been calculated at one-loop and the rest of the spectrum at tree-level.}\label{bps} \end{center} \end{figure} The neutral Higgs spectrum has been calculated at one-loop order and the remaining states at tree-level. Figure~\ref{bps}(a) shows a point (BMP1) where we have a hidden pseudoscalar ($a_1$) with mass $\mathcal{O}(10^{-1})$ MeV and another triplet/singlet-like hidden CP-even scalar ($h_1$) with a mass around $\sim 93$ GeV. In this case the candidate Higgs boson is $h_2$, taken around $125$ GeV. This point also have a triplet type very light charged Higgs boson at a mass around 90 GeV, which is not excluded by the recent charged Higgs bounds from the LHC \cite{chHb}. Figure~\ref{bps}(b) shows a benchmark point (BMP2) where we have a pseudoscalar around 37 GeV, and the lightest scalar and charged Higgs bosons around 100 GeV. Figure~\ref{bps}(c) shows a trivial (SM-like) solution where we have a doublet-type CP-even Higgs around $\sim 125$ GeV, with the other states decoupled. In the next study we are going to analyse such points through a detailed collider simulation \cite{pb3}. \section{Phenomenology of the TNMSSM}\label{pheno} \begin{figure} \begin{center} \mbox{\subfigure[]{ \includegraphics[width=0.3\linewidth]{plots/Whihcj.pdf}} \hskip 25 pt \subfigure[]{\includegraphics[width=0.3\linewidth]{plots/WWhi.pdf}}} \mbox{\subfigure[]{ \includegraphics[width=0.3\linewidth]{plots/ZWhci.pdf}} \hskip 25 pt \subfigure[]{\includegraphics[width=0.25\linewidth]{plots/vbfusion.pdf}}} \caption{The new and modified production channels for the Higgs bosons at the LHC.}\label{higgsprd} \end{center} \end{figure} The TNMSSM extends the Higgs sector as well as the electroweak chargino-neutralino sectors by additional higgsino contributions. Both the triplet and singlet fields do not couple to the fermions but affect the phenomenology to a large extent. In the context of the recent Higgs discovery, searches for additional Higgs bosons, both neutral and charged, are timely. In particular, if an extended Higgs sector will be discovered at the LHC, it will be crucial to determine the gauge representation which such states belong to, by investigating its allowed decays modes. 
We have seen from Eq.~(\ref{spt}) that a triplet with $Y=0$ hypercharge couples to the $W^\pm$ bosons and contributes to their mass. On the other hand, the singlet does not directly couple to any of the gauge bosons. In the case of Higgs mass eigenstates which carry a doublet-triplet-singlet mixing, we need to look either for their direct production processes at the LHC or take into consideration the possibility of their cascade production from other Higgses or supersymmetric particles. \subsection{Productions} We have detailed a model with a rich Higgs sector, with additional Higgs bosons of triplet and singlet type. We recall that the relevant production processes of a Higgs boson which is an SU(2) doublet at the LHC \cite{anatomy1, anatomy2} are the gluon-gluon fusion (GGF) and vector boson fusion channels, followed by the channels of associated production with gauge bosons and fermions. In our case, the production channels for the new Higgs bosons are different, due to their different couplings to the gauge bosons and fermions. We list below the possible additional production channels for the neutral and charged Higgs bosons at the LHC. \begin{itemize} \item \underline{ Neutral Higgs boson production in association with a charged Higgs boson:} The triplet couples only to the $W^\pm$ boson. Thus a neutral Higgs (doublet or triplet) can be produced in association with a charged Higgs boson (doublet or triplet) via $W^\pm$ exchange. As shown in Figure~\ref{higgsprd}(a), a light charged Higgs boson in the TNMSSM can easily be explored via the production channel $\bar{q} q' \to h_i h^\pm_j$. \item \underline{ Neutral Higgs boson production in association with $W^\pm$: } A triplet- or doublet-type neutral Higgs boson can be produced via $\bar{q}q' \to W^\pm h_i$, as shown in Figure~\ref{higgsprd}(b). A triplet admixture modifies the $h_i-h^\pm_j-W^\mp$ couplings by an additional term proportional to the vev of the triplet. \item \underline{Charged Higgs boson production in association with $W^\pm$: } A triplet with $Y=0, \pm2$ hypercharge has a non-zero tree-level $Z-W^\pm-h^\mp_i$ coupling. This leads to additional contributions to $q\bar{q} \to W^\pm h^\mp_i$, as shown in Figure~\ref{higgsprd}(c). \item \underline{Production of a charged Higgs boson in vector boson fusion:} The non-zero $Z-W^\pm-h^\mp_i$ coupling leads to vector boson fusion ($Z, W$ fusion) producing a charged Higgs boson, as shown in Figure~\ref{higgsprd}(d). This mode is absent in 2-Higgs doublet models (2HDM), in the MSSM and in the NMSSM. This is a unique feature of the $Y=0, \pm2$ hypercharge triplet-extended scenarios. \item \underline{Singlet Higgs production:} The singlet in this model is not charged under any of the gauge groups, and hence the direct production of such a singlet at the LHC is impossible. Gauging this singlet under an extra $U(1)'$ gauge group would open new production channels via the additional gauge boson ($Z'$). Most of the extra $Z'$ models are subject to a bound on the $Z'$ mass, $m_{Z'} \gtrsim 2.79$ TeV \cite{LHCZ'}, which makes such channels less promising at the LHC. In our case such a singlet-type Higgs boson would only be produced via its mixing with the Higgs bosons of doublet and triplet type. \end{itemize} \subsection{Decays} The smoking gun signatures for the model would be the decays of the doublet, triplet and singlet states that are produced.
Different F-term contributions can generate these types of mixing and the corresponding decay vertices. \begin{figure} \begin{center} \mbox{\subfigure[]{ \includegraphics[width=0.23\linewidth]{plots/hhh.pdf}} \hskip 25 pt \subfigure[]{\includegraphics[width=0.23\linewidth]{plots/hWW.pdf}}} \mbox{\subfigure[]{ \includegraphics[width=0.23\linewidth]{plots/hZW.pdf}} \hskip 25 pt \subfigure[]{\includegraphics[width=0.23\linewidth]{plots/hhW.pdf}} \hskip 25 pt \subfigure[]{\includegraphics[width=0.23\linewidth]{plots/haZ.pdf}} } \caption{The new and modified decay channels of the Higgs bosons at the LHC.}\label{higgdcy} \end{center} \end{figure} \begin{itemize} \item \underline{Higgs decays to Higgs pairs:} The candidate Higgs boson around 125 GeV can in this case decay into two hidden Higgs bosons, if this channel is kinematically allowed, as can be seen in Figure~\ref{higgdcy}(a). Such hidden Higgs boson(s) could be either scalar or pseudoscalar in nature. The discovered Higgs is however $99\%$ CP-even \cite{CMS2}, which forbids any CP-violating decay of the type $h_{125} \to a_i h_j$. However, the CP-conserving decays like $h_{125} \to a_i a_j$ and/or $h_{125} \to h_i h_j$ are allowed. Such decays should be carefully investigated on the basis of the current Higgs data at the LHC. If the two light Higgs bosons are mostly singlet or triplet, then it is easy to evade the bounds from LEP \cite{LEPb}. As we have already pointed out, a singlet Higgs does not couple to any of the SM gauge bosons, and even the triplet type does not couple to the $Z$ boson. Such a light Higgs boson could decay into $\tau$ pairs only through its mixing with the doublet-type Higgs bosons, since neither the singlet nor the triplet couples to fermions (see Eq.~(\ref{spt})). The mixing angle is also constrained by data on bottomonium decay, for a very light neutral Higgs boson ($\lesssim 8$ GeV) \cite{bottomium}. \\ The decay of a Higgs boson into other Higgs bosons depends on the cubic coupling, which is proportional to the vevs of the Higgs fields, and thus it is very sensitive to the values of $v_i$. It therefore requires an analysis of the allowed decay widths of the Higgs boson into a light Higgs pair using LHC data \cite{hdlh}. \item \underline{Higgs decays to $W^\pm W^\mp$:} The triplet couples to $W^\pm$ via its non-zero SU(2) charge, which is at variance with the analogous coupling of the doublet. This modifies the decay width of $h_i \to W W$ (Figure~\ref{higgdcy}(b)). The recent data show some disagreement, within sizeable uncertainties, between the CMS \cite{CMS, CMS2} and ATLAS \cite{ATLAS} results in the $h_{125} \to WW^*$ channel. The measurement of this decay channel thus becomes even more crucial under the assumption of a triplet admixture. \item \underline{Charged Higgs decays to $Z\,W^\pm$:} We know that the triplet-type charged Higgs has a non-zero tree-level coupling to $Z\,W$, for a non-zero triplet vev, as shown in Figure~\ref{higgdcy}(c). This opens up the possible decay modes $h^\pm_i \to Z W^\pm$, which are absent in the 2HDM and in the MSSM at tree-level. \item \underline{Charged Higgs decays to $h_j(a_j)W^\pm$:} A doublet- or triplet-type charged Higgs boson can decay to a lighter neutral Higgs and a $W^\pm$ (Figure~\ref{higgdcy}(d)). The possibility of a very light triplet/singlet-like neutral Higgs makes this decay mode more interesting than in the case of the CP-violating MSSM \cite{cpv}.
\item \underline{Higgs decays to $a_j Z$:} In the MSSM the CP-odd and the heavy CP-even Higgs bosons are almost degenerate, so $h_i \to a_j Z$ is not kinematically allowed. The introduction of a triplet and of a singlet adds two more massive CP-odd Higgs bosons, and the degeneracy is lifted. In this case we have a relatively light CP-odd Higgs state, which makes $h_i \to a_j Z$ possible, as shown in Figure~\ref{higgdcy}(e). This scenario is also possible in the context of the CP-violating MSSM, where a very light pseudoscalar Higgs boson arises from the large mixing between the Higgs CP eigenstates \cite{pb2}, and in the NMSSM, due to the presence of an additional scalar \cite{nmssm}. \item \underline{Higgs decays to fermion pairs:} In a scenario where the decays of a triplet- and/or singlet-type Higgs boson into gauge bosons and other Higgs bosons are kinematically forbidden, the only permitted decays are into light fermion pairs, viz. $bb, \tau\tau$ and $\mu\mu$. Even such decays are only possible through mixing with the doublet-type Higgs bosons. When such mixing angles are very small this can result in displaced charged-lepton signatures. \end{itemize} \subsection{Possible signatures}\label{sign} The unusual production and decay channels lead to some very interesting phenomenology which could be tested in the next run of the LHC and at future colliders. From the testability point of view, one could use the data from the discovered Higgs boson at $\sim 125$ GeV to derive bounds on the Higgs boson decaying into a Higgs boson pair \cite{hdlh}, or use the existing bounds from LEP \cite{LEPb} on the production of Higgs boson pairs. We have already taken these bounds into account by ensuring that the hidden Higgs boson is mostly of singlet or of triplet type. Given the uncertainty in the Higgs decay branching fractions in the different modes and the absence of direct bounds on the non-standard decays of the Higgs boson into a Higgs boson pair ($h_{125}\to a_i a_j/h_i h_j$), this remains phenomenologically an interesting scenario. Below we list different possible signatures that could be tested at the LHC at 13/14 TeV. \begin{itemize} \item The singlet and doublet F-terms generate the doublet-triplet-triplet vertex, which is proportional to $\lambda_S \lambda_{TS}$ and $\lambda^2_T$. This would provide a signature of a doublet-type Higgs decaying into two triplet-type Higgs bosons, which, in turn, do not decay into fermions. Similarly the F-terms of $H_u$ and $H_d$ generate vertices involving triplet-singlet-doublet which are proportional to $\lambda_T \lambda_S$. The triplet F-term also contributes to this mixing, with a term proportional to $\lambda_T \lambda_{TS}$. Thus the relative sign between the two contributions becomes important. In the case of the $\sim 125$ GeV Higgs boson, it can decay into two triplet-like hidden scalars or pseudoscalars, which in turn decay into off-shell $W^\pm$s only. This type of decay can be searched for via very soft jets or leptons coming from the off-shell $W^\pm$s. The signatures could be the $4\ell +\not\!\!{p_T}$ or $4j+2\ell +\not\!\!{p_T}$ channels, where the jets and the leptons are very soft. On the other hand, both the triplet and the singlet hidden Higgses can decay to fermion pairs ($b\bar{b}$, $c\bar{c}, e^+e^-, \mu\bar{\mu}, \tau \bar{\tau}$) via their mixing with the doublets. The recent bounds on these non-standard decays have been calculated for the LHC \cite{ehdc}. Such decays give $4\ell, \, 2b+ 2\ell$ final states, where the leptons are very soft.
For the triplet-type hidden Higgs bosons it would be interesting to analyze the competition between the four-body and the two-body decays (which depends on the triplet-doublet mixing). Demanding the presence of soft leptons and jets in the final states allows one to reduce the SM backgrounds at the LHC. If the mixing is very small, this could lead to displaced charged leptonic final states, similar to those of a Higgs boson decay in an $R$-parity violating supersymmetric scenario \cite{pbrp}. Due to its coupling to both the up- and the down-type doublets, this vertex could be tested both at low and at high $\tan{\beta}$. \item The singlet does not contribute to the charged Higgs mass eigenstates, so the charged Higgs bosons can be either of doublet or of triplet nature. In the case of a heavy doublet type, the heavier charged Higgs can decay into a triplet-type neutral Higgs (CP-even or odd) and a triplet-type charged Higgs ($H^\pm_{u,d} \to T^0 T^\pm_{1,2}$). The coupling is proportional to ($g^2_L -\lambda^2_T$). The lighter triplet-type charged Higgs then mostly decays into an on-shell or off-shell $Z\, W^\pm$ pair. This is a generic signature of $Y=0, \pm2$ hypercharge triplets with a non-zero triplet vev, which breaks the custodial symmetry of the Higgs potential. The relatively light triplet (either CP-odd or CP-even) neutral Higgs can decay via an on/off-shell $W^\pm$ boson pair, which leads to leptonic final states. Final states with multiple leptons ($>3\ell$), multiple jets ($>4$) and missing energy could be the signature of this model. Depending on the off-shell decays, a few leptons or jets could be soft. \item In other cases a triplet-type heavier charged Higgs can decay into a doublet-type neutral Higgs and a triplet-type charged Higgs. These couplings are proportional to ($\frac{g^2_L}{2} -\lambda^2_T$) and can give rise to $3\ell +2b +\not\!\!{p_T}$ and $3\ell +2\tau +\not\!\!{p_T}$ final states. Here the $b$ and $\tau$ pairs are expected from the decay of the neutral doublet-type Higgs boson. \item Unlike the neutral Higgs bosons, the up- and down-type doublet charged Higgs bosons mix only with the triplets. The couplings are again proportional to a combination of $\lambda_S \lambda_{T}$ and $\lambda_T \lambda_{TS}$. In this case the doublet (triplet) charged Higgs state will decay into a triplet (doublet) charged Higgs and a singlet neutral Higgs boson. As the singlet does not couple to any SM particle, it can only decay through its mixing with the doublets and triplets. Decays of such singlets into leptons (in the case of mixing with the doublets) and into off-shell or on-shell $W^\pm$ pairs are determined by the mixing only. In a fine-tuned region where such mixing is very small this decay channel can lead to a displaced vertex of charged leptons, whose measurement can provide information about the mixing. \end{itemize} \section{Higgs spectrum and the experimental constraints}\label{scans} As already pointed out, in the Higgs sector there are four CP-even neutral ($h_1,h_2,h_3,h_4$), three CP-odd neutral ($a_1,a_2, a_3$) and three charged Higgs bosons ($h_1^\pm,h_2^\pm,h_3^\pm$). In general the interaction eigenstates are obtained via a mixing of the two Higgs doublets, the triplet and the singlet scalar. However, the singlet does not contribute to the charged Higgs bosons, which are mixed states generated only by the $SU(2)$ doublets and triplets.
The rotations from the gauge eigenstates to the interaction eigenstates are \begin{eqnarray}\label{chmix} h_i= \mathcal{R}^S_{ij} H_j\nonumber\\ a_i= \mathcal{R}^P_{ij} A_j\\ h^\pm_i= \mathcal{R}^C_{ij} H^\pm_j\nonumber \end{eqnarray} where the eigenstates on the left-hand side are interaction eigenstates whereas those on the right-hand side are gauge eigenstates. Explicitly we have $h_i=(h_1,h_2,h_3,h_4)$, $H_i=(H^0_{u,r},H^0_{d,r},S_r,T^0_r)$, $a_i=(a_0,a_1,a_2,a_3)$, $A_i=(H^0_{u,i},H^0_{d,i},S_i,T^0_i)$, $h_i^\pm=(h_0^\pm,h_1^\pm,h_2^\pm,h_3^\pm)$ and $H_i^+=(H_u^+,T_2^+,H_d^{-*},T_1^{-*})$. Using these definitions we can write the doublet and triplet fractions for the scalar and pseudoscalar Higgs bosons as \begin{eqnarray} h_i|_{D}=(\mathcal{R}^S_{i,1})^2+(\mathcal{R}^S_{i,2})^2, \,\, a_i|_{D}=(\mathcal{R}^P_{i,1})^2+(\mathcal{R}^P_{i,2})^2 \end{eqnarray} \begin{eqnarray} h_i|_{S}=(\mathcal{R}^S_{i3})^2, \,\, a_i|_{S}=(\mathcal{R}^P_{i3})^2 \end{eqnarray} \begin{eqnarray} h_i|_T=(\mathcal{R}^S_{i4})^2, \,\, a_i|_T=(\mathcal{R}^P_{i4})^2 \end{eqnarray} and the triplet and doublet fractions of the charged Higgs bosons as \begin{eqnarray} h_i^\pm|_D=(\mathcal{R}^C_{i1})^2+(\mathcal{R}^C_{i3})^2, \,\, h_i^\pm|_T=(\mathcal{R}^C_{i2})^2+(\mathcal{R}^C_{i4})^2 . \end{eqnarray} We call a scalar (pseudoscalar) Higgs boson doublet-like if $h_i|_D(a_i|_D)\geq\,90\%$, singlet-like if $h_i|_S(a_i|_S)\geq\,90\%$ and triplet-like if $h_i|_T(a_i|_T)\geq\,90\%$. Similarly a charged Higgs boson will be doublet-like if $h_i^\pm|_D\geq\,90\%$ or triplet-like if $h_i^\pm|_T\geq\,90\%$. If the discovered Higgs is the lightest CP-even boson, $h_1\equiv h_{125}$, then $h_1$ must be doublet-like and the lightest CP-odd and charged Higgses must be triplet/singlet-like, in order to evade the experimental constraints from LEP \cite{LEPb} on the pseudoscalar and charged Higgses. LEP searched for the Higgs boson via the $e^+ e^- \to Z h$ and $e^+e^- \to h_1h_2$ channels (in models with multiple Higgs bosons) and their fermionic decay modes ($h \to \bar{b}b,\bar\tau \tau$ and $Z \to \ell\ell$). The higher centre of mass energy at LEP II (210 GeV) allowed a lower bound of 114.5 GeV to be set on the SM-like Higgs boson, and of 93 GeV on the MSSM-like Higgs boson in the maximal mixing scenario \cite{LEPb}. Interestingly, neither the triplet- nor the singlet-type Higgs boson couples to the $Z$ or to leptons (see Eq.~\ref{spt}), and we have checked explicitly that the requirement of a $\geq 90\%$ singlet and/or triplet fraction is sufficient for the light pseudoscalar to be allowed by the LEP data. We have also checked explicitly the LHC-allowed parameter space for the light pseudoscalar; the details can be found in \cite{TNMSSM2}. Later we also discuss how the criterion of a $\geq 90\%$ singlet/triplet fraction is enough to fulfil the constraints coming from the $B$-observables. Similar constraints on the structure of the Higgses must be imposed if $h_2\equiv h_{125}$. To scan the parameter space we have used a code written by us, in which we have randomly selected $1.35\times10^6$ points that realize the EWSB mechanism at tree-level.
In particular, we have performed the scan using the following criteria for the couplings and the soft parameters \begin{eqnarray}\label{scan} &|\lambda_{T, S, TS}| \leq 1, \, |\kappa|\leq 3, \, |v_s|\leq 1 \, \rm{TeV}, \, 1\leq \tan{\beta}\leq 10,\nonumber\\ &|A_{T, S, TS, U, D}|\leq 1\, \rm{GeV},\,|A_\kappa|\leq 3\, \rm{GeV},\\ &65\leq|M_{1, 2}|\leq10^3\,\rm{GeV},\, 3\times10^2\leq m_{Q_3, \bar{u}_3, \bar{d}_3}\leq10^3\,\rm{GeV}.\nonumber \end{eqnarray} We have selected those points which have one of the four Higgs bosons with a one-loop mass of $\sim125$ GeV, with one-loop minimization conditions, and, out of the $1.35\times10^6$ points, over $10^5$ pass this constraint. On this set of Higgs candidates we have imposed the constraints on the structure of the lightest CP-even, CP-odd and charged Higgses. The number of points with $h_1\equiv h_{125}$ doublet-like and $a_1$ singlet-like is about 70\%, but we have just one point where $h_1\equiv h_{125}$ is doublet-like and $a_1$ is triplet-like. If we add the requirement that the lightest charged Higgs is triplet-like, we find that the fraction of points with $h_1\equiv h_{125}$ doublet-like, $a_1$ singlet-like and $h_1^\pm$ triplet-like is 26\%. The case of $h_2\equiv h_{125}$ doublet-like allows more possibilities, because in this case we also have to check the structure of $h_1$. However we find only 75 points where $h_1$ is triplet-like, $h_2\equiv h_{125}$ is doublet-like and $a_1$ is singlet-like. This selection is insensitive to the charged Higgs selection, i.e. we still have 75 points with $h_1$ triplet-like, $h_2\equiv h_{125}$ doublet-like, $a_1$ singlet-like and $h_1^\pm$ triplet-like.\\ The LHC constraints have been imposed on those points with $h_1\equiv h_{125}$, because they provide better statistics. For these points we demand that \begin{eqnarray}\label{LHCdata} &\mu_{WW^*}=0.83\pm0.21,\quad\mu_{ZZ^*}=1.00\pm0.29\\ &\mu_{\gamma\gamma}=1.12\pm0.24\nonumber \end{eqnarray} at the 1$\sigma$ confidence level \cite{CMS2}. The LHC selection gives us 12223 points out of the 26776 points that have $h_1\equiv h_{125}$ doublet-like, $a_1$ singlet-like and $h_1^\pm$ triplet-like. \begin{figure}[thb] \begin{center} \mbox{\hskip -10pt\subfigure[]{\includegraphics[width=0.45\linewidth]{plots/degtriph2h1pm.pdf}} \subfigure[]{\includegraphics[width=0.45\linewidth]{plots/a2ch2mass.pdf}}} \mbox{\hskip -10pt\subfigure[]{\includegraphics[width=0.45\linewidth]{plots/degtripa2h1pm.pdf}} \subfigure[]{\includegraphics[width=0.45\linewidth]{plots/h2ch2mass.pdf}} } \caption{We show the triplet fraction of $h_2$ (a) and $a_2$ (c) as a function of the mass difference $|\Delta m_{h_2/a_2\, h^\pm_1}|$ between $h_2/a_2$ and $h^\pm_1$ respectively. We plot the mass correlation between $a_2$ and $h_2^\pm$ (b) and between $h_2$ and $h_2^\pm$ (d). These exhaust the possible hierarchies for the triplet eigenstates. We mark in red the points with both $a_2$ and $h_2^\pm$ doublet-type, in purple the points with $a_2$ triplet-type and $h_2^\pm$ doublet-type or vice versa, and in green the points with both $a_2$ and $h_2^\pm$ triplet-like.}\label{ch1h2a2} \end{center} \end{figure} Apart from the LEP \cite{LEPb} and LHC \cite{CMS2} constraints, we also ensure the validity of the constraints coming from the $B$-observables. For this reason we require the light pseudoscalar $a_1$ to be $\geq 90\%$ singlet-type and the light charged Higgs $h^\pm_1$ to be $\geq 90\%$ triplet-type.
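The composition requirements and the signal-strength window used in this selection can be summarised by a short sketch. The Python fragment below is a minimal illustration only (the rotation-matrix row and the signal strengths are hypothetical numbers standing in for the actual spectrum output of our scan code): it classifies a state through the doublet/singlet/triplet fractions defined above and applies the $1\sigma$ window of Eq.~(\ref{LHCdata}).
\begin{verbatim}
# Minimal sketch of the selection applied to the scanned points; the rotation
# matrix row and the signal strengths below are hypothetical examples.
import numpy as np

def classify(row, threshold=0.90):
    # row: i-th row of R^S or R^P in the (H_u, H_d, S, T) ordering of Eq. (chmix)
    doublet = row[0]**2 + row[1]**2
    singlet = row[2]**2
    triplet = row[3]**2
    if doublet >= threshold: return "doublet-like"
    if singlet >= threshold: return "singlet-like"
    if triplet >= threshold: return "triplet-like"
    return "mixed"

LHC_1SIGMA = {"WW": (0.83, 0.21), "ZZ": (1.00, 0.29), "gaga": (1.12, 0.24)}
def passes_lhc(mu):
    return all(abs(mu[ch] - c) <= err for ch, (c, err) in LHC_1SIGMA.items())

print(classify(np.array([0.04, 0.03, 0.10, 0.99])))          # triplet-like
print(passes_lhc({"WW": 0.95, "ZZ": 1.10, "gaga": 1.05}))    # True
\end{verbatim}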
A very light scalar or pseudoscalar, with a mass around $1-10$ GeV, is subject to strong bounds from bottomonium decays to $a_1\gamma$ \cite{bottomonium1}. The decay rate for $\Upsilon \to a_1 \gamma$ can be approximated as follows \begin{equation} \mathcal{Br}(\Upsilon \to a_1 \gamma)=\mathcal{Br}(\Upsilon \to a_1 \gamma)_{SM}\times g^2_{a_1 b\bar{b}}, \end{equation} where $g_{a_1 b\bar{b}}$ is the down-type Yukawa coupling reduced with respect to the SM one \cite{bottomonium}. We have checked explicitly that the requirement of a more than $90\%$ singlet-type $a_1$ and low $\tan{\beta}$ ensures that we remain in the allowed region. Another important constraint for a light pseudoscalar comes from $\mathcal{Br}(B_s \to \mu \mu)$, which can be summarised as follows \cite{bottomonium} \begin{equation} \mathcal{Br}(B_s \to \mu \mu)\simeq \frac{2\tau_{B_s}M^5_{B_s}f^2_{B_s}}{64\pi}|C|^2( \mathcal{R}^P_{12})^4, \end{equation} with \begin{eqnarray} C=\frac{G_F\alpha}{\sqrt2\pi}V_{tb}V^*_{ts}\frac{\tan^3\beta}{4\sin^2\theta_w}\frac{m_\mu m_t |\mu_r|}{m_W^2(m^2_{a_1}-m^2_{B_s})}\frac{\sin2\theta_{\tilde t}}{2}\Delta f_3\nonumber\\ \end{eqnarray} where $\Delta f_3=f_3(x_2)-f_3(x_1)$, $x_i=m^2_{\tilde t_i}/|\mu_r|^2$, $f_3(x)=x\ln x/(1-x)$, $\theta_{\tilde t}$ is the stop mixing angle and $\mathcal{R}^P_{12}$ is the rotation angle, defined in Eq.~\ref{chmix}, which gives the coupling of the down-type Higgs ($H_d$) to leptons and down-type quarks. The requirement of a mostly singlet $a_1$ ($\geq 90\%$) on the data set ensures that we stay well below the current upper limit \cite{lhcb}. Another constraint that affects models with extra Higgs bosons, especially charged Higgs bosons, comes from the rare decay $B\to X_s \gamma$. The charged Higgs bosons which are doublet in nature couple to quarks via the Yukawa couplings and contribute to the rare decay $B\to X_s \gamma$. Similar contributions also come from the charginos which couple to the quarks, namely the doublet-type higgsinos and the wino. However, charged Higgs bosons or charginos which are triplet in nature do not couple to the fermions and thus do not contribute to such decays \cite{pbas1,pbas2}. If the light charged Higgs bosons are triplet in nature, the dominant Wilson coefficients $F_{7,8}$ are suppressed by the charged Higgs rotation angles $\mathcal{R}^C_{11,13}$ defined in Eq.~\ref{chmix}. The requirement that the light charged Higgs boson is mostly triplet ($\geq 90\%$) enables us to avoid the constraint from $\mathcal{Br}(B\to X_s \gamma)$ \cite{pbas1,pbas2}. \begin{figure}[thb] \begin{center} \mbox{\subfigure[]{ \includegraphics[width=0.6\linewidth]{plots/cartoonmass.pdf}}} \caption{A typical mass hierarchy of the scalar sector, with the singlet in blue, the doublets in red and the triplet Higgs bosons in green. The eigenstates of the triplet sector with $a_2/h_2$ or $h_2/a_2$ are alternative: if $h_1^\pm$ pairs with the neutral $h_2$, then $h_2^\pm$ is mass degenerate with the pseudoscalar $a_2$ (and vice versa). }\label{cartoon} \end{center} \end{figure} In Figure \ref{ch1h2a2}(a) we plot the triplet fraction of $h_2$ as a function of the mass splitting between $h_2$ and $h_1^\pm$. The lightest charged Higgs is selected to be triplet-like ($\geq 90\%$). It is evident that in the case of mass degeneracy between $h_2$ and $h_1^\pm$ the triplet-like structure of $h_1^\pm$ is imposed also on $h_2$. In Figure \ref{ch1h2a2}(b) we plot the mass correlation between $a_2$ and $h_2^\pm$.
We use the following color code: we mark in red the points with both $a_2$ and $h_2^\pm$ doublet-type, in purple the points with $a_2$ triplet-type and $h_2^\pm$ doublet-type or vice versa, and in green the points with both $a_2$ and $h_2^\pm$ triplet-like. In the zoomed plot the dashed line indicates a configuration of mass degeneracy. It is evident that the mass degeneracy between $a_2$ and $h_2^\pm$ implies that both of them are triplet-like. As we have depicted in Figure \ref{cartoon}, there could be an exchange between $a_2$ and $h_2$ in the triplet pairs, shown in green. For this reason we also illustrate the other possible hierarchy path in Figures \ref{ch1h2a2}(c) and \ref{ch1h2a2}(d). As one may notice, the two sets of plots are qualitatively similar, although there is a quantitative difference between the red points of Figures \ref{ch1h2a2}(b) and \ref{ch1h2a2}(d). The points in the latter are closer than the former to the line of mass degeneracy. \begin{figure}[thb] \begin{center} \mbox{\hskip -10pt\subfigure[]{\includegraphics[width=0.55\linewidth]{plots/degsingh4a1.pdf}} \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/msmh4.pdf}} } \caption{We show the singlet fraction of $h_4$ as a function of the mass difference $|\Delta m_{h_4\, a_1}|$ between the two states $h_4$ and $a_1$ (a), and the mass correlation between $h_4$ and $m_S$ (b).}\label{h4a1} \end{center} \end{figure} Figure \ref{h4a1}(a) shows that the more $h_4$ is decoupled, compared to $a_1$, the more it tends to be a singlet-like eigenstate. We recall that $a_1$ is a pseudo NG mode and hence it is naturally light. From Figure \ref{h4a1}(b) it is evident that $h_4$ takes the soft mass $m_S$ coming from the singlet. \begin{figure}[htb] \begin{center} \mbox{\hskip -10pt\subfigure[]{\includegraphics[width=0.55\linewidth]{plots/a3h3pm.pdf}} \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/h3h3pm.pdf}}} \caption{Scattered plots of the mass correlation between $a_3$ and $h_3^\pm$ (a) and between $h_3$ and $h_3^\pm$ (b). The color code is defined as follows: we mark in red the points where $h_3, a_3, h^\pm_3$ are mostly doublet-like ($\geq90\%$) and in green the points where they are mostly triplet-like.}\label{mcrl2} \end{center} \end{figure} Figure~\ref{mcrl2}(a) shows the mass correlation between $h^\pm_3$ and $a_3$, while Figure~\ref{mcrl2}(b) shows the same correlation between $h^\pm_3$ and $h_3$, where all of them are doublet-like and marked in red. It is easily seen that all three doublet-like Higgs bosons $h^\pm_3$, $h_3$ and $a_3$ remain degenerate. There are only 7 points which behave like triplets, and they are shown in green. Thus it is evident from the above analysis that eigenstates dominated by the same representation (i.e. mostly singlet or mostly triplet) tend to be hierarchically clustered. In the case of a $Z_3$-symmetric Lagrangian, the light pseudoscalar is actually a pseudo NG mode of a continuous $U(1)$ symmetry of the Higgs potential, also known as the R-axion \cite{Ellwanger}, and remains very light across the entire allowed parameter space. Though the interaction eigenstates are a mixture of the gauge eigenstates, there seems to be a pattern for the various representations of the Higgs sector. A given representation tends to keep its masses in the same block, i.e., the masses of the scalar, pseudoscalar and charged components of the triplets form a mass block different from that of the doublet Higgs sector.
A typical mass hierarchy is shown in Figure~\ref{cartoon}, where a light pseudoscalar, which is a pseudo NG boson, lies hidden below $100$ GeV and the scalar state $h_4$ takes a heavy mass $\sim m_S$ and is therefore decoupled from the low energy spectrum. There is a CP-even Higgs boson of doublet type around $125$ GeV and doublet-like heavy Higgs bosons of larger mass ($h^\pm_3, h_3, a_3$), shown in red. Apart from the doublet and singlet interaction eigenstates, we have two triplets $T_1$ and $T_2$ which then form two different sets, ($h^\pm_1, h_2/a_2$) and ($h^\pm_2, a_2/h_2$), in the mass hierarchy, shown in green. Of course this is not the most general situation, but it follows from the phenomenological constraints applied to the scanned points of the parameter space. We recall again that these constraints include a scalar Higgs boson with a mass around 125 GeV which satisfies the LHC constraint of Eq.~\ref{LHCdata}, and no light doublet-like pseudoscalar or charged Higgs boson. We take care of the latter by requiring that the lightest pseudoscalar is mostly singlet and the lightest charged Higgs boson is mostly triplet. \section{Charged Higgs bosons and their structure}\label{chcdcy} In this section we describe the features of the charged Higgs sector, emphasizing the role of the rotation angles in the limit $|\lambda_T|\simeq0$. The charged Higgs bosons are a mixture of two doublet and two triplet fields, as can be seen from Eq.~\ref{chH}, \begin{equation}\label{chH} h^\pm_i= \mathcal{R}^C_{i1}H_u^+ + \mathcal{R}^C_{i2}T_2^+ + \mathcal{R}^C_{i3}H_d^{-*} + \mathcal{R}^C_{i4}T_1^{-*} \end{equation} with $\mathcal{R}^C_{i1, i3}$ and $\mathcal{R}^C_{i2, i4}$ determining the doublet and triplet parts respectively. In general $\mathcal{R}^C_{ij}$ is a function of all the vevs, $\lambda_{T, TS, S}$ and the $A_i$ parameters, and we can write schematically \begin{eqnarray}\label{rc} \mathcal{R}^C_{ij} = f^C_{ij}\left(v_u, v_d, v_T, v_S, \lambda_T, \lambda_{TS}, \lambda_S, A_i\right). \end{eqnarray} The charged Higgs mass matrix, given in the appendix (Eq.~\ref{chMM}), shows a similar dependence on the parameters. However, the charged Goldstone mode, expressed in terms of the gauge eigenstates, is a function only of the vevs and the gauge couplings, as we expect from the Goldstone theorem. \begin{eqnarray}\label{gstn} h_0^\pm=\pm N_T \left(\sin\beta H_u^+\, -\cos\beta H_d^{-*} \,\mp\sqrt2\,\frac{v_T}{v}(T_2^+ +T_1^{-*})\right)\, ,\quad N_T=\frac{1}{\sqrt{1+4\frac{v_T^2}{v^2}}} \end{eqnarray} Eq.~\ref{gstn} gives the explicit expression of the charged Goldstone mode, and we can see that it is independent of any other couplings or parameters. Among the three kinds of vevs entering the charged Goldstone mode, the triplet vev is very small ($v_T\lesssim 5$ GeV) due to its contribution to the $W^\pm$ boson mass, as already discussed. The triplet vev, being restricted by the $\rho$ parameter \cite{rho}, makes the charged Goldstone always doublet-type. However, among the massive states in the gauge basis, two are triplet-like and one is doublet-like. We shall see later that this small triplet contribution to the Goldstone boson protects one of the three physical charged Higgs bosons from becoming purely triplet-like.
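The composition of the charged Goldstone mode in Eq.~(\ref{gstn}) makes this point explicit. A minimal numerical sketch (illustrative only, with $v_T$ fixed at its maximal value allowed by the $\rho$ parameter) shows that the Goldstone boson is almost entirely doublet-like, carrying only a triplet fraction of order $4 v_T^2/v^2$; by orthogonality, a comparable doublet fraction necessarily remains in one of the massive charged states.
\begin{verbatim}
# Doublet/triplet composition of the charged Goldstone mode of Eq. (gstn);
# v and v_T values are indicative (v_T <~ 5 GeV from the rho parameter).
import numpy as np

v, vT = 246.0, 5.0
NT = 1.0/np.sqrt(1.0 + 4.0*vT**2/v**2)

doublet_fraction = NT**2                      # sin^2(beta) + cos^2(beta) part
triplet_fraction = NT**2*4.0*vT**2/v**2       # T_2^+ and T_1^{-*} components
print(doublet_fraction, triplet_fraction)     # ~ 0.998, ~ 0.002 (sums to 1)
\end{verbatim}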
\begin{figure}[thb] \begin{center} \includegraphics[width=0.6\linewidth]{plots/chaSTRUCT.pdf} \includegraphics[width=0.6\linewidth]{plots/chaSTRUCT1.pdf} \caption{Triplet component of the massive charged Higgs bosons versus $\lambda_T$.}\label{chpslmbda} \end{center} \end{figure} In Figure~\ref{chpslmbda} we show the structure of the charged Higgs bosons as a function of $|\lambda_T|$, where we demand the lightest charged Higgs massive state to be mostly triplet. One can see that for a non-zero $\lambda_T$ their tendency is to mix. However, as we move towards the $|\lambda_T|\simeq 0$ region, one of the charged Higgs bosons gives away the $\sim (\frac{v_T}{v})^2$ triplet part to the charged Goldstone and fails to become 100\% triplet (see the blue points in Figure~\ref{chpslmbda}). In models where the $A_T$ parameter is proportional to $\lambda_T$, the mixing induced by the soft parameter $A_T$ automatically goes to zero in this limit. However the mixing of doublet and triplet in the charged Goldstone comes from the corresponding vevs and is independent of $\lambda_T$ or $A_T$, as can be seen from Eq.~\ref{gstn}. Since all the other massive charged Higgs bosons are orthogonal to the Goldstone boson, a similar mixing is induced in the massive states as well. This mixing goes to zero only when the triplet does not play any role in EWSB, i.e. $v_T=0$. However, for non-zero $\lambda_T$ and $A_T$, additional mixings arise for the massive eigenstates. Any one of the three massive charged Higgs bosons can show this feature, but we see it only for $h_1^\pm$ because our selection criteria demand that $h_1^\pm$ be triplet-like. Thus, for a non-zero triplet vev, even with $|\lambda_T|=0$ a complete decoupling of the doublet and triplet representations is not possible. Therefore by `decoupling limit' we mean $|\lambda_T|\simeq 0$ from here onwards. In this decoupling limit either $h^\pm_2$ or $h^\pm_3$ becomes completely triplet-type. A similar conclusion was reached for the triplet extension of the supersymmetric standard model \cite{EspinosaQuiros}. \begin{figure}[thb] \begin{center} \includegraphics[width=0.78\linewidth]{plots/R2R4.pdf} \caption{Correlations of the rotation angles of the lightest charged Higgs boson $h^\pm_1$ as a function of $\lambda_T$.}\label{ssoslmbda} \end{center} \end{figure} \begin{table} \begin{center} \renewcommand{\arraystretch}{1.4} \begin{tabular}{||c||c|c||} \hline &$10^{-2}<|\lambda_T|<1$&$|\lambda_T|<10^{-2}$\\ \hline \rm{sign} $\mathcal{R}^C_{12}$ $\mathcal{R}^C_{14}$&+ or -&+\\ \hline \end{tabular} \caption{The sign of the product $\mathcal{R}^C_{12}$ $\mathcal{R}^C_{14}$. The sign of the two rotation angles of the lightest charged Higgs boson plays a crucial role in the interactions of a triplet-like charged Higgs boson. In the limit $|\lambda_T|\sim0$ these two rotation angles have the same sign. This feature has important consequences for the interaction, and hence the cross-section, of the lightest charged Higgs boson in various channels.}\label{r2r4s} \end{center} \end{table} The decoupling limit $|\lambda_T|\sim 0$ not only affects the structure of the charged Higgs bosons, where two of them become triplet-like and one doublet-like, but also affects their couplings via the corresponding rotation angles. In Figure~\ref{ssoslmbda} we show the rotation matrix elements for the light charged Higgs boson $h^\pm_1$ as a function of $|\lambda_T|$.
We can see that when $\lambda_T$ becomes very small the mixing angles in the triplet component of the light charged Higgs boson $h^\pm_1$, $\mathcal{R}^C_{12}$ and $\mathcal{R}^C_{14}$, as defined in Eq.~\ref{chH}, take the same sign, unlike in the general case. We will see later that the presence of the same sign in $\mathcal{R}^C_{12}$ and $\mathcal{R}^C_{14}$ in the decoupling limit causes an enhancement of some production channels and a suppression of others. \section{Decays of the charged Higgs bosons}\label{chdcys} As briefly mentioned above, the phenomenology of the Higgs decay sector of the TNMSSM, as discussed in \cite{TNMSSM1}, is affected by the presence of a light pseudoscalar which induces new decay modes. In this section we consider its impact on the decay of a light charged Higgs boson $h^\pm_1$. Along with the existence of the light pseudoscalar, which opens up the $h^\pm_1 \to a_1 W^\pm$ decay mode, the triplet-like charged Higgs adds new decay modes, not possible otherwise. In particular, a $Y=0$ triplet-like charged Higgs boson gets a new decay mode into $ZW^\pm$, which is a signature of custodial symmetry breaking. Apart from that, the usual doublet-like decay modes into $\tau\nu$ and $tb$ are present via the mixing with the doublets. \subsection{$h_i^\pm \to W^\pm h_j/a_j$} The trilinear couplings of the charged Higgs bosons with the scalar (pseudoscalar) Higgs bosons and the $W^\pm$ are given by \begin{align}\label{hachW} g_{h_i^\pm W^\mp h_j}&=\frac{i}{2}g_L\Big(\mathcal R_{j2}^S\mathcal R_{i3}^C-\mathcal R_{j1}^S\mathcal R_{i1}^C+\sqrt2\mathcal R_{j4}^S\left(\mathcal R_{i2}^C+\mathcal R_{i4}^C\right)\Big), \\ g_{h_i^\pm W^\mp a_j}&=\frac{g_L}{2}\Big(\mathcal R_{j1}^P\mathcal R_{i1}^C+\mathcal R_{j2}^P\mathcal R_{i3}^C+\sqrt2\mathcal R_{j4}^P\left(\mathcal R_{i2}^C-\mathcal R_{i4}^C\right)\Big). \end{align} Both the triplet and the doublet carry $SU(2)$ charges, so they couple to the $W^\pm$ boson. The neutral Higgs boson entering this coupling has to be of doublet (triplet) type for a doublet (triplet) type charged Higgs boson. For the phenomenological studies we have considered a doublet-like Higgs boson around $125$ GeV, a light triplet-like charged Higgs boson $\lesssim 200$ GeV and a very light singlet-type pseudoscalar $\sim 20$ GeV. Hence the mixing angles become very important. In the next few sections we will see how the various rotation angles of the charged Higgs bosons and their relative signs determine the strength of the couplings and thus the decay widths. Eq.~\ref{hachW} shows that for the $h_i^\pm \to W^\pm h_j$ decay the rotation angles $\mathcal R^C_{i2}$ and $\mathcal R^C_{i4}$ enter additively, whereas for $h_i^\pm \to W^\pm a_j$ they enter subtractively. The decay width of a massive charged Higgs boson into a $W$ boson and a scalar (or pseudoscalar) boson is given by \begin{eqnarray} \label{chwah} \Gamma_{h_i^\pm\rightarrow W^\pm h_j/a_j}&=&\frac{G_F}{8\sqrt2\pi}m^2_{W^\pm}|g_{h_i^\pm W^\mp h_j/a_j}|^2 \,\sqrt{\lambda(1,x_W,x_{h_j/a_j})}\,\lambda(1,y_{h_i^\pm},y_{h_j/a_j}) \end{eqnarray} where $x_{W,h_j}=\frac{m^2_{W,h_j}}{m^2_{h_i^\pm}}$ and $y_{h_i^\pm,h_j}=\frac{m^2_{h_i^\pm,h_j}}{m^2_{W^\pm}}$, and similarly for $a_j$. \begin{figure}[thb] \begin{center} \includegraphics[width=0.7\linewidth]{plots/gHpmA1W.pdf} \caption{Correlation of $g_{h^\pm_1 W^\mp a_1}$ with $\mathcal{R}^C_{12}$ and $\mathcal{R}^C_{14}$.
For the blue points in the II and IV quadrants the low values of the coupling are due to the selection of a singlet-like $a_1$, which means that $\mathcal{R}^P_{13}\sim1$, whereas for the blue points in the I and III quadrants the low value of $|g_{h^\pm_1 W^\mp a_1}|$ comes from the cancellation between $\mathcal{R}^C_{12}$ and $\mathcal{R}^C_{14}$.}\label{ghmp1a1W} \end{center} \end{figure} Figure~\ref{ghmp1a1W} shows the dependence of the $g_{h^\pm_1 W^\mp a_1}$ coupling on the triplet components of the lightest charged Higgs eigenstate, i.e., $\mathcal{R}^C_{12}$ and $\mathcal{R}^C_{14}$. We have seen from Figure~\ref{ssoslmbda} and Table~\ref{r2r4s} the behaviour of $\mathcal{R}^C_{12}$ $\mathcal{R}^C_{14}$ as a function of $\lambda_T$, i.e. that for $\lambda_T \sim 0$ they take the same sign. We can see that in the decoupling limit, i.e. for $\lambda_T\sim 0$, the coupling decreases because $\mathcal{R}^C_{12}$ and $\mathcal{R}^C_{14}$ take the same sign and tend to cancel, cfr. Eq.~\ref{hachW}. A low value of this coupling can also arise when the pseudoscalar Higgs boson ($a_j$) is singlet-like, which means that $\mathcal{R}^P_{j3}\sim1$. The situation is just the opposite in the case of $g_{h^\pm_1 W^\mp h_1}$, as one can see from Figure~\ref{ghmp1h1W}. Here, in the decoupling limit, the coupling $g_{h^\pm_1 W^\mp h_1}$ is enhanced. In Figure~\ref{ghmp1h1W} we can also see some blue points with low $\mathcal{R}^C_{12}$, $\mathcal{R}^C_{14}$. In this case the charged Higgs boson is not triplet-like and the suppression of the coupling is due to the accidental cancellation of $\Big(\mathcal R_{12}^S\mathcal R_{13}^C-\mathcal R_{11}^S\mathcal R_{11}^C\Big)$, cfr. Eq.~\ref{hachW}. This cancellation is of course not related to the limit $\lambda_T\sim 0$. We will see later how this affects the corresponding production processes. \begin{figure}[thb] \begin{center} \includegraphics[width=0.7\linewidth]{plots/gHpmH1W.pdf} \caption{Correlation of $g_{h^\pm_1 W^\mp h_1}$ with $\mathcal{R}^C_{12}$ and $\mathcal{R}^C_{14}$. The coupling is enhanced when $\mathcal{R}^C_{12}$ and $\mathcal{R}^C_{14}$ are small, i.e. for a doublet-like charged Higgs $h_1^\pm$. The enhancement in the I and III quadrants is related to the same sign of $\mathcal{R}^C_{12}$ and $\mathcal{R}^C_{14}$, cfr. Eq.~\ref{hachW}.}\label{ghmp1h1W} \end{center} \end{figure} \subsection{$h_i^\pm \to W^\pm Z$} The charged sector of a theory with scalar triplet(s) is very interesting due to the tree-level interactions $h_i^\pm-W^\mp-Z$ for $Y=0, \pm 2$ hypercharge triplets, which break the custodial symmetry \cite{pbas3,EspinosaQuiros,tnssm, tnssma}. In the TNMSSM this coupling is given by \begin{eqnarray}\label{zwch} g_{h_i^\pm W^\mp Z}&=&-\frac{i}{2}\left(g_L\, g_Y\left(v_u\sin\beta\,\mathcal R^C_{i1}-v_d\cos\beta\,\mathcal R^C_{i3}\right)+\sqrt2\,g_L^2v_T\left(\mathcal R^C_{i2}+\mathcal R^C_{i4}\right)\right), \end{eqnarray} where the rotation angles are defined in Eq.~\ref{chmix}. The on-shell decay width is given by \begin{eqnarray}\label{chzw} \Gamma_{h_i^\pm\rightarrow W^\pm Z}&=&\frac{G_F\,\cos^2\theta_W}{8\sqrt2\pi}m^3_{h_i^\pm}|g_{h_i^\pm W^\mp Z}|^2\,\sqrt{\lambda(1,x_W,x_Z)}\left(8\,x_W\,x_Z+(1-x_W-x_Z)^2\right) \end{eqnarray} where $\lambda(x,y,z)=(x-y-z)^2-4\,y\,z$ and $x_{Z,W}=\frac{m^2_{Z,W}}{m^2_{h_i^\pm}}$ \cite{Asakawa}.
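Before turning to the numerical results, the sign structure discussed above can be illustrated with a short sketch. The Python fragment below implements the couplings of Eq.~(\ref{hachW}) up to the overall phase; the rotation entries are hypothetical, chosen only to mimic a triplet-like $h^\pm_1$ in the decoupling limit ($\mathcal{R}^C_{12}\simeq\mathcal{R}^C_{14}$) together with triplet-like neutral states, and they do not correspond to any point of our scan.
\begin{verbatim}
# Illustration of the additive/subtractive structure of Eq. (hachW); the
# rotation entries are hypothetical, mimicking the decoupling limit where
# R^C_12 and R^C_14 have the same sign. Overall factors of i are dropped.
import numpy as np
gL = 0.65                                   # approximate SU(2) coupling

def g_hW_scalar(RS, RC):                    # g_{h_i^pm W^mp h_j}
    return 0.5*gL*(RS[1]*RC[2] - RS[0]*RC[0]
                   + np.sqrt(2)*RS[3]*(RC[1] + RC[3]))

def g_hW_pseudo(RP, RC):                    # g_{h_i^pm W^mp a_j}
    return 0.5*gL*(RP[0]*RC[0] + RP[1]*RC[2]
                   + np.sqrt(2)*RP[3]*(RC[1] - RC[3]))

RC1        = np.array([0.03, 0.70, 0.03, 0.71])   # triplet-like h_1^pm
RS_triplet = np.array([0.0, 0.0, 0.0, 1.0])       # triplet-like scalar
RP_triplet = np.array([0.0, 0.0, 0.0, 1.0])       # triplet-like pseudoscalar

print(g_hW_scalar(RS_triplet, RC1))   # enhanced:   ~ R^C_12 + R^C_14
print(g_hW_pseudo(RP_triplet, RC1))   # suppressed: ~ R^C_12 - R^C_14
\end{verbatim}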
\begin{figure}[thb] \begin{center} \includegraphics[width=0.7\linewidth]{plots/gHpmZW.pdf} \caption{Correlation of $g_{h^\pm_1 W^\mp Z}$ with $\mathcal{R}^C_{12}$ and $\mathcal{R}^C_{14}$.}\label{ghmp1WZ} \end{center} \end{figure} Figure~\ref{ghmp1WZ} shows the dependence of $g_{h_i^\pm W^\mp Z}$ on $\mathcal{R}^C_{12}$ and $\mathcal{R}^C_{14}$. We see that for $\lambda_T \sim 0$ $\mathcal{R}^C_{12}$ and $\mathcal{R}^C_{14}$ take the same sign, and hence the $h_i^\pm-W^\mp-Z$ coupling is enhanced. \subsection{$h_i^\pm \to t b$} Besides the non-zero $h^\pm_i-W^\mp-Z$ coupling at tree-level due to custodial symmetry breaking, the charged Higgs bosons can also decay into fermions through the Yukawa interaction given below, \begin{eqnarray} g_{h_i^+ \bar u d}=i\left(y_u\,\mathcal R^C_{i1}\,\mathtt{P_L}+y_d\,\mathcal R^C_{i3}\,\mathtt{P_R}\right) \end{eqnarray} which is governed by the doublet part of the charged Higgs bosons. The decay width at leading order is \begin{align}\label{chtb} \Gamma_{h_i^\pm\rightarrow u\,d}&=\frac{3}{4}\frac{G_F}{\sqrt2\pi}m_{h_i^\pm}\sqrt{\lambda(1,x_u,x_d)}\Bigg[(1-x_u-x_d)\,\left(\frac{m^2_u}{\sin^2\beta}(\mathcal R^C_{i1})^2+\frac{m_d^2}{\cos^2\beta}(\mathcal R^C_{i3})^2\right)\nonumber\\ &\hspace{4.5cm}-4\frac{m_u^2m_d^2}{m^2_{h_i^\pm}}\frac{\mathcal R^C_{i1}\mathcal R^C_{i3}}{\sin\beta\cos\beta}\Bigg] \end{align} where $x_{u,d}=\frac{m^2_{u,d}}{m^2_{h_i^\pm}}$. The QCD corrections to the leading-order formula are the same as in the MSSM and are given in \cite{anatomy2}. The decay of the charged Higgs bosons into quarks is then suppressed in the case of triplet-like eigenstates, as one can easily see from the expression above. In Figure~\ref{ghmp1tb} we show the effective Yukawa couplings ($y_u\,\mathcal R^C_{i1}$ and $y_d\,\mathcal R^C_{i3}$) of the top and bottom quarks, respectively, as a function of $\tan\beta$. The dominant contribution comes from the top for small $\tan\beta$, as expected. \begin{figure}[thb] \begin{center} \includegraphics[width=0.7\linewidth]{plots/gHpmUd.pdf} \caption{Correlation of $y_t\mathcal{R}^C_{11}$ and $y_b\mathcal{R}^C_{13}$ as a function of $\tan{\beta}$.}\label{ghmp1tb} \end{center} \end{figure} \section{Decay branching ratios of the charged Higgs bosons}\label{ch1dcy} Equipped with the possibility of new decay modes, we finally analyse such scenarios with the data satisfying the various theoretical and experimental constraints. The points here have a CP-even neutral Higgs boson around 125 GeV which satisfies the LHC constraint given in Eq.~\ref{LHCdata}. To study the decay modes and calculate the branching fractions we have implemented our model in \texttt{SARAH$\_$4.4.6} \cite{sarah} and we have generated the model files for \texttt{CalcHEP$\_$3.6.25} \cite{calchep}.
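To illustrate the interplay between the $ZW^\pm$ and $tb$ modes discussed below, the following minimal Python sketch evaluates Eq.~(\ref{chzw}) and Eq.~(\ref{chtb}) at leading order. The numerical inputs (the mixing angles $\mathcal{R}^C_{11}$, $\mathcal{R}^C_{13}$ and the $g_{h_1^\pm W^\mp Z}$ value) are hypothetical, chosen only to mimic a mostly triplet-like $h_1^\pm$ with a small doublet admixture; the point of the exercise is the different scaling of the two widths, $m^3_{h_1^\pm}$ against $m_{h_1^\pm}$.
\begin{verbatim}
# Leading-order widths of Eqs. (chzw) and (chtb) for illustrative inputs;
# RC11, RC13 and g_{h1 W Z} are hypothetical numbers, not scan output.
import numpy as np
GF, mW, mZ, mt, mb = 1.166e-5, 80.4, 91.2, 173.0, 4.18   # GeV units
cw2 = (mW/mZ)**2

def lam(x, y, z):
    return (x - y - z)**2 - 4.0*y*z

def gamma_WZ(m, g):
    xW, xZ = (mW/m)**2, (mZ/m)**2
    return GF*cw2/(8*np.sqrt(2)*np.pi)*m**3*g**2*np.sqrt(lam(1, xW, xZ)) \
           *(8*xW*xZ + (1 - xW - xZ)**2)

def gamma_tb(m, RC11, RC13, tanb):
    xu, xd = (mt/m)**2, (mb/m)**2
    b = np.arctan(tanb); sb, cb = np.sin(b), np.cos(b)
    bracket = (1 - xu - xd)*(mt**2/sb**2*RC11**2 + mb**2/cb**2*RC13**2) \
              - 4*mt**2*mb**2/m**2*RC11*RC13/(sb*cb)
    return 0.75*GF/(np.sqrt(2)*np.pi)*m*np.sqrt(lam(1, xu, xd))*bracket

for m in (250.0, 400.0, 600.0):   # charged Higgs masses in GeV
    print(m, gamma_WZ(m, 0.05), gamma_tb(m, 0.05, 0.05, tanb=5.0))
\end{verbatim}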
\begin{figure}[thb] \begin{center} \mbox{\subfigure[]{ \includegraphics[width=0.23\linewidth]{plots/hhW.pdf}} \hspace*{.5cm} \subfigure[]{ \includegraphics[width=0.23\linewidth]{plots/hZW.pdf}} \hspace*{.5cm} \subfigure[]{ \includegraphics[width=0.23\linewidth]{plots/chtaunutb.pdf}}} \caption{The decay channels of the light charged Higgs boson relevant for this section.}\label{chdcyfig} \end{center} \end{figure} \begin{figure}[thb] \begin{center} \mbox{\subfigure[]{\includegraphics[width=.5\linewidth]{plots/ch1Brs.pdf}} \subfigure[]{\includegraphics[width=.5\linewidth]{plots/ch1BrNC.pdf}}} \caption{The branching ratios for the decay of the lightest charged Higgs boson $h^\pm_1$ into non-supersymmetric (a) and supersymmetric modes (b).}\label{ch1br} \end{center} \end{figure} Figure~\ref{ch1br}(a) presents the decay branching ratios of the light charged Higgs boson $h^\pm_1$ into non-supersymmetric modes. This includes the $a_1W^\pm$, $h_1W^\pm$, $ZW^\pm$, $tb$ and $\tau\nu$ channels. The points in Figure~\ref{ch1br} include a discovered Higgs boson at $\sim 125$ GeV and a triplet-like light charged Higgs boson $h_1^\pm$. When $a_1$ is singlet-type, the $a_1W^\pm$ decay mode is suppressed in spite of being kinematically open. One can notice that, since $h^\pm_1$ is triplet-like, the branching ratio into $ZW^\pm$ can be very large, even close to $100\%$. When the $tb$ mode is kinematically open, the $ZW^\pm$ mode gets an apparent suppression, but it increases again for a charged Higgs boson of larger mass ($m_{h^\pm_1}\sim 400$ GeV). This takes place because the $h^\pm_i \to ZW^\pm$ decay width is proportional to $m_{h^\pm_i}^3$, unlike the $tb$ one, which is proportional to $m_{h^\pm_i}$ (see Eq.~\ref{chzw} and Eq.~\ref{chtb}). The variation of these two decay widths, as a function of $m_{h^\pm_1}$, is shown in Figure~\ref{ch1dc}.\\ Figure~\ref{ch1br}(b) shows the decays of the lightest charged Higgs boson into the supersymmetric modes, i.e. into charginos $\tilde{\chi}^\pm_i$ and neutralinos $\tilde{\chi}^0_j$, when these modes are kinematically allowed. We observe that for a charged Higgs boson of relatively higher mass, $m_{h^\pm_i} \gtrsim 300$ GeV, these modes open up and can have very large branching ratios. \begin{figure}[bht] \begin{center} \includegraphics[width=.5\linewidth]{plots/ch1Ws.pdf} \caption{The decay widths of the lightest charged Higgs boson $h^\pm_1$ to $tb$ and $ZW^\pm$.}\label{ch1dc} \end{center} \end{figure} Apart from the lightest charged Higgs boson, there are two additional charged Higgs bosons, $h^\pm_2$ and $h^\pm_3$. As we have pointed out many times, we have selected data points for which the light charged Higgs boson is triplet-type. Certainly, in the decoupling limit, i.e. when $|\lambda_T|\simeq 0$, one of $h^\pm_{2,3}$ is triplet-like and the other one is doublet-like. The points that we have generated, which also satisfy the precondition of allowing an $h_{125}$ in the spectrum, have $h^\pm_2$ as a triplet-like and $h^\pm_3$ as a doublet-like Higgs boson, cfr. Figure~\ref{chpslmbda}.
\begin{figure}[thb] \begin{center} \mbox{\subfigure[]{ \includegraphics[width=0.5\linewidth]{plots/ch2Br.pdf}} \subfigure[]{\includegraphics[width=0.5\linewidth]{plots/ch2BrNC.pdf}}} \mbox{\subfigure[]{\includegraphics[width=0.5\linewidth]{plots/ch2BrHS.pdf}}} \caption{The branching ratios of the decay of the charged Higgs boson $h^\pm_2$ into non-supersymmetric modes (a), supersymmetric modes (b) and Higgs bosons (c).}\label{ch2br} \end{center} \end{figure} In Figure~\ref{ch2br} we present the decay branching ratios of the second charged Higgs boson $h^\pm_2$. Figure~\ref{ch2br}(a) shows the ratios into $\tau\nu$, $tb$, $a_1W^\pm$, $h_1 W^\pm$ and $Z h^\pm_1$. As one can observe, $tb$ and $a_1 W^\pm$ are the dominant modes, reaching up to $\sim 90\%$ and $\sim80\%$ respectively. Figure~\ref{ch2br}(b) shows the branching ratios into the supersymmetric modes with neutralinos and charginos, when these are kinematically allowed. For some benchmark points these modes can have branching ratios as large as $\sim 60\%$. Figure~\ref{ch2br}(c) shows the ratios for $h^\pm_2$ decaying into two scalars, i.e. into $h_1^\pm h_{1,2}$ and $h^\pm_1 a_1$, with the $h^\pm_1 a_1$ final state being the dominant one. \begin{figure}[thb] \begin{center} \mbox{\subfigure[]{ \includegraphics[width=0.44\linewidth]{plots/ch3Br.pdf}}} \mbox{\subfigure[]{\includegraphics[width=0.44\linewidth]{plots/ch3BrNC.pdf}}} \mbox{\subfigure[]{\includegraphics[width=0.44\linewidth]{plots/ch3BrHS.pdf}}} \mbox{\subfigure[]{\includegraphics[width=0.44\linewidth]{plots/ch3BrHS2.pdf}}} \caption{The branching ratios of the decay of the charged Higgs boson $h^\pm_3$ into non-supersymmetric modes (a), supersymmetric modes (b), the lightest charged Higgs boson $h^\pm_1$ in association with the neutral Higgs bosons (c) and the second lightest charged Higgs boson $h^\pm_2$ in association with the neutral Higgs bosons (d).}\label{ch3br} \end{center} \end{figure} Figure~\ref{ch3br} presents the decays of the third charged Higgs boson $h^\pm_3$. From Figure~\ref{ch3br}(a) we can see that, for a large part of the parameter space, $a_1W^\pm$ is the mode with the largest branching fraction and can be probed at the LHC. Even though the $tb$ mode is kinematically open, it is not the dominant one. Figure~\ref{ch3br}(b) shows that the $\tilde{\chi}^0_2 \tilde{\chi}^\pm_1$ mode is kinematically open and is also one of the most important. Figure~\ref{ch3br}(c) shows the branching ratios for the decay modes into the lightest charged Higgs boson in association with the neutral Higgs bosons. It is evident that the $h^\pm_1 a_1$ mode is the most important one, through which one can probe more than one charged Higgs boson and also the light pseudoscalar. In Figure~\ref{ch3br}(d) the branching ratios are shown for the heaviest charged Higgs boson $h^\pm_3$ decaying to the second lightest charged Higgs boson $h^\pm_2$ in association with the neutral Higgs bosons. Again the light pseudoscalar mode can have large branching ratios. \section{Production channels of a light charged Higgs boson}\label{ch1prod} The triplet nature of the charged Higgs bosons adds a few new production processes at the LHC, along with the doublet-like charged Higgs production processes. For a doublet-like charged Higgs boson the production processes are dominated by top quark decay for a light charged Higgs boson ($m_{h^\pm_i} < m_t$), or by $b g \to t h^\pm_i$ for $m_{h^\pm_i} > m_t$, which are governed by the corresponding Yukawa coupling and $\tan{\beta}$, as in the 2HDM, the MSSM and the NMSSM.
In the TNMSSM, however, the charged Higgs bosons can be triplet-like, and hence do not couple to fermions. Fermionic channels, including top and bottom and in general all the fermions, are then suppressed. The presence of the $h_i^\pm-W^\mp-Z$ vertex generates new production channels and also modifies the known processes for the production of a charged Higgs boson $h^\pm_i$. In the following subsections we address the dominant and characteristically different production mechanisms for the light charged Higgs boson $h^\pm_1$ at the LHC. For this purpose we select in the parameter space the benchmark points with a discovered Higgs boson around $125$ GeV and with a lightest charged Higgs boson $h^\pm_1$ that is triplet-like ($\geq 90\%$). The cross-sections are calculated for the LHC at a centre of mass energy of 14 TeV. We have performed our analysis at leading order with $\mathtt{CalcHEP\_3.6.25}$ \cite{calchep}, using the CTEQ6L \cite{6teq6l} set of parton distributions and a renormalization/factorization scale $Q=\sqrt{\hat{s}}$, where $\hat{s}$ denotes the total centre of mass energy squared at parton level. \subsection{Associated $W^\pm$ } The dominant channels, mediated by the neutral Higgs bosons, the $Z$ boson and the quarks, are shown in Figure~\ref{prodchW}. Figure~\ref{prodchW}(b), which describes the $Z$ mediation, requires the non-zero $h_1^\pm-W^\mp-Z$ vertex, which is absent in theories without a $Y=0,\pm2$ triplet-extended Higgs sector. For a doublet-like charged Higgs, the only contributions come from the neutral Higgs-mediated diagrams in the s-channel and the top-quark-mediated diagram in the t-channel (see Figures~\ref{prodchW}(a), (c)). For low $\tan{\beta}$ the t-channel contribution in $b\bar{b}$ fusion is very large, due to the large Yukawa coupling. We will see that this doublet admixture still affects the production cross-section at low $\tan{\beta}$. \begin{figure}[thb] \begin{center} \mbox{\subfigure[]{ \includegraphics[width=0.35\linewidth]{plots/Wchha.pdf}} \hskip 15pt \subfigure[]{\includegraphics[width=0.35\linewidth]{plots/ZWhci.pdf}}} \mbox{\subfigure[]{\includegraphics[width=0.3\linewidth]{plots/chWtChannel.pdf}} } \caption{The Feynman diagrams for the charged Higgs production in association with a $W^\pm$ boson at the LHC.}\label{prodchW} \end{center} \end{figure} \begin{figure}[thb] \begin{center} \includegraphics[width=0.7\linewidth]{plots/WHpmCSmass.pdf} \caption{The production cross-section of $h^\pm_1W^\mp$ at the LHC versus the lightest charged Higgs boson mass $m_{h^\pm_1}$. The red points are $\geq 90\%$ doublet-like, the green ones $\geq 90\%$ triplet-like and the blue ones correspond to mixed-type light charged Higgs bosons.}\label{ch1Wcs} \end{center} \end{figure} The contribution of $h_1$ is subdominant because $h_1$ and $h_1^\pm$ are selected to be mostly doublet and triplet respectively, in order to satisfy the LHC data. The coupling of a totally triplet charged Higgs boson with a totally doublet neutral Higgs boson and a $W$ boson is not allowed by gauge invariance. For the lightest triplet-like charged Higgs boson, one of the degenerate neutral Higgs bosons, either $h_2$ or $a_2$, is also triplet-like and fails to contribute as a mediator in the $b\bar{b}$ fusion mode (Figure~\ref{prodchW}(a)). The other relevant neutral Higgs boson, which is not degenerate with the lightest charged Higgs boson $h^\pm_1$, contributes to the $b\bar{b}$ fusion production process via its doublet admixture.
Thus the doublet-triplet mixing plays an important role even when we are trying to produce a light charged Higgs boson which is triplet-like. This feature has also been observed in the Triplet Extended Supersymmetric Standard Model (TESSM) \cite{pbas3}. Even the off-shell doublet-type neutral Higgs mediation ($h_{125}$) in the s-channel via gluon-gluon fusion fails to give a sufficient contribution to the $h^\pm_1 W^\mp$ final state. We checked this process at the LHC for a centre of mass energy of 14 TeV and a triplet-like charged Higgs of mass $\sim 300$ GeV, finding a $h^\pm_1 W^\mp$ cross-section below $\mathcal{O}(10^{-3})$ fb. In Figure~\ref{ch1Wcs} we present the associated production cross-section for a light charged Higgs boson $h^\pm_1$ as a function of the light charged Higgs boson mass $m_{h^\pm_1}$. The red points are $\geq 90\%$ doublet-like, the green ones $\geq 90\%$ triplet-like and the blue ones correspond to mixed-type light charged Higgs bosons. It can be seen that, as the doublet fraction grows, the production cross-section also grows. At $\lambda_T\simeq0$ the lightest charged Higgs cannot be completely triplet-like, due to the residual doublet fraction of order $\frac{v_T}{v}$. In this limit the cross section follows the line given by the green points in Figure~\ref{ch1Wcs}. As we have seen in the previous section, for $\lambda_T\neq 0$ the coupling $g_{h_1^\pm W^\mp Z}$ is very small even if the lightest charged Higgs is completely triplet-like. This means that the $Z$ propagator (cfr. Figure~\ref{prodchW}(b)) does not contribute. However, since for $\lambda_T\neq 0$ the triplet fraction of $h_1^\pm$ is not fixed, the cross-section can be enhanced or reduced compared to the $|\lambda_T|\simeq0$ case. \subsection{Associated $Z$ } \begin{figure}[thb] \begin{center} \mbox{\subfigure[]{ \includegraphics[width=0.35\linewidth]{plots/Zchch.pdf}}\hskip 15pt \subfigure[]{\includegraphics[width=0.35\linewidth]{plots/ZchW.pdf}}} \caption{The Feynman diagrams for the charged Higgs production in association with a $Z$ boson at the LHC.}\label{prodch1Z} \end{center} \end{figure} Unlike the previous case, the charged Higgs production in association with a $Z$ does not have sizeable contributions from the doublet part of the Higgs boson spectrum. For instance, the doublet nature of the charged Higgs allows its exchange in the s-channel, as shown in Figure~\ref{prodch1Z}(a), via an annihilation process ($q \bar{q}'$) which requires quarks of different flavours. The contributions from the valence $u/\bar{d}$, $\bar{u}/d$ distributions in a $pp$ collision are strongly suppressed by the small Yukawa couplings. On the other hand, contributions from heavier generations such as $c/\bar{b},\bar{c} /b$ are suppressed by the CKM mixing angles and the involvement of sea quarks in the initial state. Nevertheless, in the case of the TNMSSM, a non-zero $h_1^\pm-W^\mp-Z$ vertex gives an extra contribution to this production process, which is absent in the case of doublet-like charged Higgs bosons. In fact, for $\lambda_T \simeq 0$, which corresponds to what we have called the decoupling limit, the $T^+_1$ and $T^-_2$ interaction eigenstates contribute additively to the $h_1^\pm-W^\mp-Z$ coupling, as can be seen from Eq.~\ref{zwch} and also from Figures~\ref{ssoslmbda} and \ref{ghmp1WZ}. However, we can see from Figure~\ref{ch1Zcs} that the $h^\pm_1Z$ production cross-section is smaller than the corresponding production in association with a $W^\pm$.
This is due to the fact that there are no other efficient contributions besides the channel with the $W^\pm$ in the propagator, as discussed earlier. \begin{figure}[thb] \begin{center} \includegraphics[width=0.7\linewidth]{plots/ZHpmCSmass.pdf} \caption{The production cross-section of the light charged Higgs boson $h^\pm_1$ in association with a $Z$ boson versus the light charged Higgs boson mass $m_{h^\pm_1}$.}\label{ch1Zcs} \end{center} \end{figure} \subsection{Associated $h_1$} We have then considered the production of the charged Higgs boson in association with a scalar Higgs boson $h_i$. It is clear from Figure~\ref{prodchhi} that there are two contributions to this channel, one via the doublet-type charged Higgs boson and another mediated by the $W^\pm$ boson. However, the charged-Higgs-mediated diagrams are suppressed, for the same reasons discussed earlier for the associated $Z$ production. Both the triplet and doublet Higgs bosons couple to the $SU(2)$ gauge boson $W^\pm$. However, a careful look at the vertex, given in Eq.~\ref{hachW}, shows that their mixing angles can appear with relative signs. In general, the neutral Higgs boson entering the coupling has to be of doublet (triplet) type for a doublet (triplet) type charged Higgs boson. This behaviour can be seen from Figure~\ref{ch1h1cs}, where we plot the production cross-section versus the mass of the lightest charged Higgs boson, $m_{h^\pm_1}$. The colour code for the charged Higgs boson remains as before. It is quite evident that, for a triplet-like charged Higgs boson, the cross-sections in association with $h_1$, which is mostly doublet, are very small, except for the $\lambda_T\simeq0$ points. We can see the enhanced cross-section for the mostly doublet charged Higgs boson in association with the doublet-like $h_1$ (red points). The situation is different for $\lambda_T\simeq0$, where it is easy to produce a mostly triplet charged Higgs boson in this channel due to the enhancement of the $h^\pm_1-W^\mp-h_1$ coupling, given in Eq.~\ref{hachW}. This is due to the fact that for $\lambda_T\simeq0$ the rotation angles $\mathcal{R}^C_{12}$ and $\mathcal{R}^C_{14}$ of the triplet sector, which appear in the coupling given in Eq.~\ref{hachW}, take the same sign (in the decoupling limit, see Figure~\ref{ssoslmbda}). \begin{figure}[thb] \begin{center} \mbox{\subfigure[]{ \includegraphics[width=0.35\linewidth]{plots/chchh.pdf}}\hskip 15pt \subfigure[]{\includegraphics[width=0.35\linewidth]{plots/Whihcj.pdf}}} \caption{The Feynman diagrams for the charged Higgs production in association with a $h_i$ boson at the LHC.}\label{prodchhi} \end{center} \end{figure} \begin{figure}[thb] \begin{center} \includegraphics[width=0.7\linewidth]{plots/h1HpmCSmass.pdf} \caption{The production cross-section of a light charged Higgs boson $h^\pm_1$ in association with the $h_1$ boson versus the light charged Higgs boson mass $m_{h^\pm_1}$.}\label{ch1h1cs} \end{center} \end{figure} \subsection{Associated $a_1$} \begin{figure}[thb] \begin{center} \mbox{\subfigure[]{ \includegraphics[width=0.35\linewidth]{plots/chcha.pdf}}\hskip 15pt \subfigure[]{\includegraphics[width=0.35\linewidth]{plots/chWa.pdf}}} \caption{The Feynman diagrams for the charged Higgs production in association with an $a_i$ boson at the LHC.}\label{prodchai} \end{center} \end{figure} Similarly, we can also produce the charged Higgs boson in association with a pseudoscalar Higgs boson, as shown in Figure~\ref{prodchai}.
Here we also include the two contributions coming from $h^\pm_i$ and $W^\pm$ respectively, even though, as before, the contribution from the charged Higgs propagator is negligible. Figure~\ref{ch1a1cs} presents the variation of the cross-section with the mass of the lightest charged Higgs boson. The cross-section stays very low for the triplet-like points (green ones) and reaches a maximum around 10 fb for doublet-like and mixed points (red and blue ones). For the $\lambda_T\simeq0$ points, the triplet ($T^+_1, T^{-*}_2$) rotation angles $\mathcal{R}^C_{i2, i4}$ appear with a relative sign in the coupling $h^\pm_i-W^\mp-a_j$, as can be seen in Eq.~\ref{hachW}. The $h^\pm_1 a_1$ cross-section thus gets a suppression in the decoupling limit, i.e. for $|\lambda_T|\simeq 0$, unlike the $h_ih^\pm_1$ case discussed in the previous section. \begin{figure}[thb] \begin{center} \includegraphics[width=0.7\linewidth]{plots/a1HpmCSmass.pdf} \caption{The production cross-section of a light charged Higgs boson $h^\pm_1$ in association with the $a_1$ boson versus the light charged Higgs boson mass $m_{h^\pm_1}$.}\label{ch1a1cs} \end{center} \end{figure} \subsection{Charged Higgs pair production } Here we move to the description of the pair production of the lightest charged Higgs boson $h^\pm_1$. The Feynman diagrams for this process are given in Figure~\ref{prodchij}, with the neutral Higgses and the $Z,\gamma$ bosons contributing to the process. However, if the lightest charged Higgs boson $h^\pm_1$ is triplet-like, the diagrams of Figure~\ref{prodchij}(a) give a smaller contribution to the cross-section. In fact $a_1$ is selected to be singlet-like, so it does not couple to the fermions, and the diagram with $h_{125}$ in the propagator is subdominant. The reason is that the coupling $g_{h_1^\pm h_1^\mp h_1}$ of a totally doublet scalar Higgs boson with two totally triplet charged Higgs bosons is prevented by gauge invariance. The triplet charged Higgs pair production is more suppressed than the single triplet-like charged Higgs production via a doublet-like neutral Higgs boson. In that case the pair production cross-section via off-shell doublet-type neutral Higgs mediation ($h_{125}$) in the s-channel through gluon-gluon fusion is below $\mathcal{O}(10^{-6})$ fb. Hence for a triplet-like $h_1^\pm$ the diagrams of Figure~\ref{prodchij}(b) are the most relevant ones. The coupling of a pair of $h_1^\pm$ to the $Z$ and the $\gamma$ bosons is shown in Figure~\ref{ZphoHpmHpm} as a function of the doublet fraction. The coupling $g_{h_1^\pm h_1^\mp \gamma}$ is independent of the structure of $h_1^\pm$, as it should be because of the $U(1)_{\rm{em}}$ symmetry. In fact the value of this coupling is just the value of the electric charge. Conversely, the coupling of the $Z$ boson to a pair of charged Higgs bosons depends on the structure of the charged Higgs. When the charged Higgs boson is totally doublet its coupling approaches the MSSM value $\frac{g_L}{2}\frac{\cos\,2\theta_w}{\cos\theta_w}$. If the charged Higgs boson is totally triplet the value of the coupling is $g_L\cos\theta_w$, the same as that of the $W^\pm-W^\mp-Z$ interaction.
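To make the size of these couplings concrete, the following minimal numerical sketch evaluates the two limiting values of $g_{h_1^\pm h_1^\mp Z}$ together with the photon coupling, assuming for illustration $\alpha_{\rm em}(M_Z)\simeq 1/128$ and $\sin^2\theta_w\simeq 0.231$; the numbers are only indicative of the hierarchy between the doublet-like and triplet-like cases and are not taken from our parameter scan.
\begin{verbatim}
# Illustrative check of the g(h1+ h1- X) couplings quoted in the text.
# Assumed inputs: alpha_em(M_Z) ~ 1/128 and sin^2(theta_W) ~ 0.231.
import math

alpha_em = 1.0 / 128.0          # electromagnetic coupling at M_Z (assumption)
sin2_tw  = 0.231                # sin^2(theta_W) (assumption)

e   = math.sqrt(4.0 * math.pi * alpha_em)   # electric charge
sw  = math.sqrt(sin2_tw)
cw  = math.sqrt(1.0 - sin2_tw)
g_L = e / sw                                # SU(2)_L gauge coupling

# photon coupling: fixed by U(1)_em, independent of the doublet/triplet content
g_gamma = e

# Z coupling in the two limiting cases discussed in the text
g_Z_doublet = 0.5 * g_L * (1.0 - 2.0 * sin2_tw) / cw   # (g_L/2) cos(2 theta_W)/cos(theta_W)
g_Z_triplet = g_L * cw                                  # g_L cos(theta_W), as for W+ W- Z

print("g(h+ h- gamma)           = %.3f" % g_gamma)
print("g(h+ h- Z), doublet-like = %.3f" % g_Z_doublet)
print("g(h+ h- Z), triplet-like = %.3f" % g_Z_triplet)
\end{verbatim}
With these inputs the triplet-like $Z$ coupling is larger than the doublet-like one by roughly a factor of three, which illustrates why the pair production of a triplet-like $h_1^\pm$ remains sizeable despite the suppression of the Higgs-mediated diagrams.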
\begin{figure}[thb] \begin{center} \mbox{\subfigure[]{ \includegraphics[width=0.35\linewidth]{plots/chchha.pdf}}\hskip 15pt \subfigure[]{\includegraphics[width=0.35\linewidth]{plots/chchZ.pdf}}} \caption{Feynman diagrams for the production of a charged Higgs boson pair $h^\mp_i h^\pm_j$ at the LHC, mediated by the Higgs bosons, the $Z$ and the $\gamma$ bosons.}\label{prodchij} \end{center} \end{figure} \begin{figure}[thb] \begin{center} \subfigure[]{\hspace{-.5cm} \includegraphics[width=0.7\linewidth]{plots/ZphoHpmHpm.pdf}} \caption{Value of the coupling $g_{h_1^\pm h_1^\mp X}$ as a function of the doublet fraction of the lightest charged Higgs boson. In the case of the photon this coupling is just the value of the electric charge.}\label{ZphoHpmHpm} \end{center} \end{figure} In Figure~\ref{ch1ch1cs} we show the variation of the cross-section with respect to the lightest charged Higgs boson mass $m_{h^\pm_1}$. The colour code of the points is the same as before. We can see that for triplet-like points with mass around $\sim 100$ GeV the cross-section reaches the pb level. This large cross-section makes this production a viable channel to probe a light triplet-type charged Higgs boson at the LHC. We discuss the corresponding phenomenology in section~\ref{pheno}. \begin{figure}[thb] \begin{center} \includegraphics[width=0.8\linewidth]{plots/HpmHpmCSmass.pdf} \caption{The production cross-section of a light charged Higgs boson pair $h^\pm_1h^\mp_1$ versus the light charged Higgs boson mass $m_{h^\pm_1}$.}\label{ch1ch1cs} \end{center} \end{figure} \subsection{Vector boson fusion} Neutral Higgs boson production via vector boson fusion is the second most dominant production mode in the SM. Even in the 2HDM or the MSSM this production mode of the neutral Higgs boson is one of the leading ones. However, no such channel exists for a charged Higgs boson, as the $h^\pm_i-W^\mp-Z$ vertex vanishes at tree-level as long as custodial symmetry is preserved. The introduction of a $Y=0$ triplet breaks the custodial symmetry at tree-level, giving a non-zero $h^\pm_i-W^\mp-Z$ vertex, as shown in Eq.~\ref{zwch}. This vertex gives rise to the striking production channel of vector boson fusion into a single charged Higgs boson, which is absent in the MSSM and in the 2-Higgs-doublet model (2HDM) at tree-level. This is a signature of triplets with $Y=0, \pm 2$, which break custodial symmetry at the tree level. \begin{figure}[t] \begin{center} \mbox{\subfigure[]{ \includegraphics[width=0.35\linewidth]{plots/vbfusion.pdf}}} \caption{The Feynman diagram for the charged Higgs production via vector boson fusion at the LHC.}\label{prodvvfch} \end{center} \end{figure} \begin{figure}[thb] \begin{center} \includegraphics[width=0.7\linewidth]{plots/VBFHpmCSmass.pdf} \caption{The production cross-section of a light charged Higgs boson via vector boson fusion versus the light charged Higgs boson mass $m_{h^\pm_1}$.}\label{VBFcs} \end{center} \end{figure} Figure~\ref{VBFcs} shows the cross-section variation with respect to the lightest charged Higgs boson mass $m_{h^\pm_1}$. As expected, doublet-like points (in red) have very small cross-sections, and for the mixed points (in blue) we see a little enhancement. Green points describe the cross-sections for the triplet-like points. We see that a triplet-like charged Higgs boson does not necessarily guarantee large values for the cross-section.
As one can notice from Eq.~\ref{zwch}, the coupling $g_{h_1^\pm W^\mp Z}$ is a function of $\mathcal{R}^{C}_{12}$ and $\mathcal{R}^{C}_{14}$, and their relative sign plays an important role. From Figure~\ref{ghmp1WZ} we see that only in the decoupling limit, where $\lambda_T=0$, both $\mathcal{R}^{C}_{12}$ and $\mathcal{R}^{C}_{14}$ take the same sign, thereby enhancing the $h_1^\pm- W^\mp -Z$ coupling and thus the cross-section. It can be seen that only for lighter masses, $\sim 150-200$ GeV, the cross-section is around a few fb. Such triplet-like charged Higgs bosons can be probed at the LHC in a single charged Higgs production channel without the top quark. This channel can thus be used to distinguish them from the known single charged Higgs production mode in association with a top quark, which characterises a doublet-like charged Higgs boson. \subsection{Associated top quark} \begin{figure}[thb] \begin{center} \mbox{\subfigure[]{ \includegraphics[width=0.35\linewidth]{plots/tHpmsch.pdf}}\hskip 20pt \subfigure[]{\includegraphics[width=0.3\linewidth]{plots/tHpmtch.pdf}}} \caption{The Feynman diagrams for the charged Higgs production in association with a top quark at the LHC.}\label{prodtch} \end{center} \end{figure} In the TNMSSM the triplet sector does not couple to fermions, which causes a natural suppression of the production of a triplet-like charged Higgs boson in association with a top quark. The only way for this channel to be allowed is via the mixing with the doublets. Figure~\ref{prodtch} shows the Feynman diagrams of such production processes, which are dominant and take place via $b$-quark and gluon fusion. They are highly dependent on the value of $\tan{\beta}$ \cite{djuadi, moretti}. \begin{figure}[thb] \begin{center} \includegraphics[width=0.7\linewidth]{plots/topHpmCSmass.pdf} \caption{The production cross-section of a light charged Higgs boson in association with a top quark versus the light charged Higgs boson mass $m_{h^\pm_1}$.}\label{tchcs} \end{center} \end{figure} Figure~\ref{tchcs} shows the production cross-section as a function of the lightest charged Higgs boson mass, where the green points correspond to linear combinations which are mostly triplet ($\raise0.3ex\hbox{$\;>$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 90\%$), the red points to those which are mostly doublet ($\raise0.3ex\hbox{$\;>$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 90\%$) and the blue points are of mixed type. Triplet-like points have a naturally suppressed cross-section, whereas the doublet-like points have large cross-sections, which can reach the pb level. The mixed points lie in between, with cross-sections of $\mathcal{O}(100)$ fb. One can also notice an enhanced line in the green points, which corresponds to $|\lambda_T| \simeq 0$. As already explained in the previous sections, in this limit some portion ($\sim(\frac{v_T}{v})^2$) of the lightest charged Higgs boson $h^\pm_1$ remains of doublet type, as shown in Figure~\ref{chpslmbda}, and is responsible for the enhancement of the cross-section. Thus, not finding a charged Higgs boson in this channel does not mean that it is completely ruled out; it can simply come from a higher representation of $SU(2)$. \section{Charged Higgs boson phenomenology}\label{pheno} As already pointed out before, the TNMSSM with a $Z_3$ symmetry allows a very light singlet-like pseudoscalar in its spectrum, which turns into a pseudo-NG mode in the limit of small soft parameters $A_i$ \cite{TNMSSM1}.
The existence of such a light and still hidden scalar prompts the decay of a light charged Higgs boson $h^\pm_1 \to a_1 W^\pm$. Of course the gauge invariant structure of the vertex further restricts such a decay mode, which is only allowed by the mass mixing of the singlet with the doublets or the triplet. In extended supersymmetric scenarios with only a triplet, one cannot naturally obtain such a light triplet-like pseudoscalar, because imposing a $Z_3$ symmetry would be impossible due to the existence of the $\mu$ term, which is necessary to satisfy the lightest chargino mass bound \cite{pbas3}. The existence of a light pseudoscalar mode has been observed and studied in the context of the NMSSM \cite{han, colepa, guchait, pbsnkh}. Unlike in the NMSSM, in the TNMSSM with a $Z_3$ symmetry the decay $h^\pm_1 \to ZW^\pm$ is possible for a triplet-type light charged Higgs boson. Below we discuss the phenomenology of such charged Higgs bosons at the LHC. {For this phenomenological analysis we have selected three benchmark points, named BP1, BP2 and BP3, given in Table~\ref{BP}. \begin{table} \begin{center} \renewcommand{\arraystretch}{1.4} \begin{tabular}{||c||c|c|c|c|c||} \hline \hline &$m_{h_1^\pm}$&$m_{a_1}$&$\mathcal{Br}(a_1W^\pm)$&$\mathcal{Br}(Z\,W^\pm)$&$\mathcal{Br}(\tau\nu_\tau)$\\ \hline BP1&179.69&41.22&$9.7\times10^{-1}$&$2.1\times10^{-2}$&$1.3\times10^{-4}$\\ \hline BP2&112.75&29.77&$9.9\times10^{-1}$&$6.3\times10^{-5}$&$5.5\times10^{-3}$\\ \hline BP3&172.55&48.94&$6.3\times10^{-5}$&$9.8\times10^{-1}$&$2.4\times10^{-3}$\\ \hline \hline \end{tabular} \caption{The mass of $h_1^\pm$ and the mass of $a_1$ (both in GeV), together with the relevant branching ratios, for the three benchmark points used in the phenomenological analysis.}\label{BP} \end{center} \end{table} All of them are characterised by a triplet-like charged Higgs boson $h_1^\pm$, which makes the charged Higgs branching fractions into fermions, e.g. $\mathcal{Br}(h_1^\pm\to\tau\nu_\tau)$ or $\mathcal{Br}(h_1^\pm\to t\,b)$, strongly suppressed. We choose this scenario of a triplet-like charged Higgs boson to look for new physics signals that are not present in the two-Higgs-doublet model (2HDM), the MSSM or the NMSSM. The benchmark points maximise the following quantities: \begin{itemize} \item BP1: \\ $\sigma_{pp\to h_1^\pm h_1^\mp} \times \mathcal{Br}(h_1^\pm \to a_1W^\pm)\mathcal{Br}(h_1^\mp \to Z\,W^\mp)$, \item BP2:\\ $\sigma_{pp\to h_1^\pm h_1^\mp} \times \mathcal{Br}(h_1^\pm \to a_1W^\pm)\mathcal{Br}(h_1^\mp \to a_1W^\mp)$, \item BP3:\\ $\sigma_{pp\to h_1^\pm h_1^\mp} \times \mathcal{Br}(h_1^\pm \to Z\,W^\pm)\mathcal{Br}(h_1^\mp \to Z\,W^\mp)$. \end{itemize} We will discuss the final state searches along with the dominant SM backgrounds below, starting from BP1 through BP3. A detailed collider study is in preparation \cite{pbch}. If the lightest charged Higgs boson is pair produced, it can have the following decay topologies \begin{eqnarray}\label{fs1} pp &\to& h^\pm_1h^\mp_1\nonumber \\ &\to & a_1 W^\pm Z W^\mp \nonumber \\ &\to & 2\tau (2b)+ 2j+ 3\ell +\not\!\!{E_T} \nonumber \\ & \to & 2\tau (2b)+ 4\ell +\not\!\!{E_T} . \end{eqnarray} Eq.~\ref{fs1} shows the case when one of the charged Higgs bosons decays to $a_1 W^\pm$, which is a signature of the existence of a singlet-type pseudoscalar, and the other one decays to $ZW^\pm$, which is the triplet signature. We thus end up with an $a_1 +2 W^\pm +Z$ intermediate state.
Depending on the decays of the gauge bosons (hadronic or leptonic) and of the light pseudoscalar (into $b$ or $\tau$ pairs), we can have final states with multiple leptons plus two $b$- or $\tau$-jets. The tri-lepton and four-lepton backgrounds are generally rather low in the SM. In this case they are further tagged with a $b$- or $\tau$-jet pair, which makes these channels even cleaner. As mentioned earlier, a detailed signal and background study is in progress as a separate work \cite{pbch}. However, in Table~\ref{finalstates} we present the event numbers in the $ \geq3\ell+2\tau+\not\!\!{E_T}$ and $\geq3\ell+2b+\not\!\!{E_T}$ final states at an integrated luminosity of 1000 fb$^{-1}$, for both BP1 and the dominant SM backgrounds. The demand of $\geq 3\ell$ over $4\ell$ was chosen to enhance the signal numbers. The kinematical cuts on the momenta, the various isolation cuts and the tagging efficiencies for $b$-jets \cite{btag} and $\tau$-jets \cite{tautag} reduce the final state numbers. The $b$-tagging efficiency has been chosen to be $0.5$, while the $\tau$-jet tagging efficiency varies significantly with the momentum of the $\tau$-jet ($30-70\%$); both are taken into account in the final state numbers. For the $\geq3\ell+2\tau+\not\!\!{E_T}$ and $\geq3\ell+2b+\not\!\!{E_T}$ final states the dominant backgrounds mainly come from the triple gauge boson productions $ZZZ$ and $ZWZ$ respectively. We can see that $\geq3\ell+2b+\not\!\!{E_T}$ reaches around $3\sigma$ of signal significance at an integrated luminosity of 1000 fb$^{-1}$. However, a point with larger branchings into both the $a_1W^\pm$ and $ZW^\pm$ decay modes could be probed with much less data. In the case of the TESSM \cite{pbas1, pbas3} we have only the triplet signature of the charged Higgs boson decaying into $ZW^\pm$, which is different with respect to the doublet-like charged Higgs boson. On the other hand, in the NMSSM we only have the $a_1 W^\pm$ decay \cite{han, colepa, guchait, pbsnkh}, which is characterised by a different signature with respect to the MSSM \cite{ChCMS,ChATLAS}. In comparison, Eq.~\ref{fs1} provides a gold-plated mode in the search for an extended Higgs sector, as predicted by the TNMSSM. Finding both the $a_1 W^\pm$ and $Z W^\pm$ decay modes at the LHC would prove the existence of both the singlet and the triplet of the model. However, as we can see in Figure~\ref{muCH}, it is very difficult to find points where both $\mathcal{Br}( h^\pm_1 \to ZW^\pm)$ and $\mathcal{Br}( h^\pm_1 \to a_1W^\pm)$ are enhanced at the same time. Nevertheless, as the final states carry the signatures of both singlet- and triplet-type Higgs bosons, it is worth exploring them at high luminosity, or even at higher energy (beyond 14 TeV), at the LHC in the future. \begin{figure}[thb] \begin{center} \mbox{\subfigure[]{ \includegraphics[width=0.5\linewidth]{plots/muCHARGEDaW.pdf}} \subfigure[]{\includegraphics[width=0.5\linewidth]{plots/muCHARGEDZW.pdf}}} \caption{The signal strength for the pair production of the lightest charged Higgs boson in the intermediate channels of Eqs.
\ref{fs1}, \ref{fs2}, \ref{fs2b}, \ref{fs2t} (a) and \ref{fs2a}, \ref{fs2c}, \ref{fs2ta} (b) as a function of the mass of the lightest charged Higgs boson.}\label{muCH} \end{center} \end{figure} \begin{table} \begin{center} \renewcommand{\arraystretch}{1.5} \begin{tabular}{||c|c|c||c|c||} \hline\hline \multicolumn{3}{||c||}{\multirow{2}{*}{Decay Channels}}&\multicolumn{2}{c||}{$\#$ of Events}\\ \cline{4-5} \multicolumn{3}{||c||}{}&Signal&Backgrounds\\ \hline \hline \multirow{2}{*}{\rotatebox{90}{BP1}}&\multirow{2}{*}{$\;a_1W^\pm\,ZW^\mp\;$} &$\geq3\ell+2\tau+\not\!\!{E_T}$&1&6\\ \cline{3-5} &&$\geq3\ell+2b+\not\!\!{E_T}$&21&39\\ \hline \hline \multirow{2}{*}{\rotatebox{90}{BP2}}&$\;a_1W^\pm\,\tau\nu_\tau\;$ &$3\tau+1\ell+\not\!\!{E_T}$&13&$<1$\\ \cline{3-5} \cline{3-5} &$\;a_1W^\pm\,a_1W^\mp\;$&$\;2b+2\tau+2\ell+\not\!\!{E_T}\;$&164&38\\ \hline \hline \multirow{3}{*}{\rotatebox{90}{BP3}}&$\;ZW^\pm\,\tau\nu_\tau\;$ &$1\tau+3\ell+\not\!\!{E_T}$&9&19\\ \cline{3-5} \cline{3-5} &\multirow{2}{*}{$\;Z\,W^\pm\,Z\,W^\mp\;$} &$\geq5\ell+\not\!\!{E_T}$&228&23\\ \cline{3-5} &&$\;\geq1\ell+2b+2\tau+\not\!\!{E_T}\;$&29&246\\ \hline\hline \end{tabular} \caption{The final state numbers for the benchmark points and backgrounds at an integrated luminosity of 1000 fb$^{-1}$.}\label{finalstates} \end{center} \end{table} } The light charged Higgs boson can also decay to $\tau\nu$ for $m_{h^\pm_1} < m_t$ and to $tb$ for $m_{h^\pm_1} > m_t$, via its doublet fraction. The charged Higgs pair production then has the signatures given in Eq.~\ref{fs2} and Eq.~\ref{fs2a}, with one of the charged Higgs bosons decaying to $\tau\nu$ and the other one to $a_1 W^\pm$ or $Z W^\pm$, respectively. Eq.~\ref{fs2} and Eq.~\ref{fs2a} probe the existence of the singlet, doublet and triplet representations at the same time. The final states with one or more $\tau$-jets along with charged leptons reduce the SM backgrounds, but nevertheless $t\bar{t}Z$ and $tZW^\pm$ contribute. \begin{eqnarray}\label{fs2} pp &\to & h^\pm_1h^\mp_1 \nonumber \\ & \to & a_1 W^\pm \tau\nu \nonumber \\ & \to & 3\tau /(2b +1\tau) +1 \ell +\not\!\!{E_T}, \end{eqnarray} \begin{eqnarray}\label{fs2a} pp &\to & h^\pm_1h^\mp_1 \nonumber \\ &\to & Z W^\pm \tau\nu \nonumber \\ & \to & 1(3)\tau + 3(1)\ell +\not\!\!{E_T}. \end{eqnarray} Thus these final states would play a very crucial role in determining whether the mechanism of EWSB incorporates a finer structure with respect to our current description, with a single Higgs doublet. { In Table~\ref{finalstates} we present the number of events in the $3\tau+1\ell+\not\!\!{E_T}$ final state for the channel $a_1W^\pm\,\tau\nu_\tau$ and in the $1\tau+3\ell+\not\!\!{E_T}$ final state for the channel $ZW^\pm\,\tau\nu_\tau$, at an integrated luminosity of 1000 fb$^{-1}$. As already stated, we chose a triplet-like charged Higgs boson $h_1^\pm$ and hence the branching into $\tau\nu_\tau$ is suppressed, being a signature decay mode of a doublet-type charged Higgs boson. In both cases the dominant backgrounds are the triple gauge boson productions $ZZZ$ and $ZWZ$. We can see that $3\tau+1\ell+\not\!\!{E_T}$ reaches more than $3\sigma$ of signal significance at an integrated luminosity of 1000 fb$^{-1}$. } There are, of course, two other possibilities for the decays of a pair of charged Higgs bosons, that is when both the charged Higgs bosons decay to $a_1W^\pm$ or $ZW^\pm$.
\begin{eqnarray}\label{fs2b} pp &\to& h^\pm_1h^\mp_1\nonumber \\ &\to & a_1 W^\pm a_1 W^\mp \nonumber \\ &\to & 2\tau +2b+ 2j+ 1\ell +\not\!\!{E_T} \nonumber \\ & \to & 4\tau (4b)+ 2\ell +\not\!\!{E_T}\nonumber\\ & \to & 2b+2\tau+2\ell+\not\!\!{E_T}. \end{eqnarray} \begin{eqnarray}\label{fs2c} pp &\to& h^\pm_1h^\mp_1\nonumber \\ &\to & Z W^\pm Z W^\mp \nonumber \\ &\to & 2j+ 4\ell +\not\!\!{E_T} \nonumber \\ & \to & 6\ell +\not\!\!{E_T} \nonumber\\ & \to &2b+2\tau+2\ell+\not\!\!{E_T}. \end{eqnarray} These channels can prove the existence of the singlet and the triplet representations separately. { For the decay channel $h^\pm_1h^\mp_1\to a_1 W^\pm a_1 W^\mp$ we have considered the $2b+2\tau+2\ell+\not\!\!{E_T}$ final state for the signal and background analysis. This is because the final states with $\geq1\ell$ have $\bar t t$ as the dominant background and hence are strongly suppressed. For $2b+2\tau+2\ell+\not\!\!{E_T}$ the dominant backgrounds are $ZZZ$ and $\bar t tZ$, and we can see from Table~\ref{finalstates} that the signal significance is more than $10\sigma$ for an integrated luminosity of 1000 fb$^{-1}$. A $5\sigma$ signal significance can be achieved with an integrated luminosity of $\approx$ 200 fb$^{-1}$ at the LHC with 14 TeV center of mass energy. In the case of $h^\pm_1h^\mp_1\to Z\, W^\pm Z\, W^\mp$ we look into the $\geq5\ell+\not\!\!{E_T}$ and $\geq1\ell+2b+2\tau+\not\!\!{E_T}$ final states, where the demand of $\geq 1\ell$ over $2\ell$ was chosen to enhance the signal numbers. The $\geq5\ell+\not\!\!{E_T}$ final state has the triple gauge boson productions $ZZZ$ and $ZWZ$ as dominant backgrounds. This is one of the cleanest final states, and we can see from Table~\ref{finalstates} that it has more than $14\sigma$ of signal significance at an integrated luminosity of 1000 fb$^{-1}$. The integrated luminosity for $5\sigma$ of signal significance is 120 fb$^{-1}$. The dominant backgrounds for the $\geq1\ell+2b+2\tau+\not\!\!{E_T}$ final state are the triple gauge bosons $ZZZ$ and $ZWZ$ as well as $\bar t tZ$. The $\bar t tZ$ background is the most dominant one in this case and suppresses the signal significance, as one can immediately realise by looking at Table~\ref{finalstates}. } For a charged Higgs boson heavier than the top quark the channel $h^\pm_1 \to t b$ is kinematically allowed. If one of the charged Higgs bosons decays to $tb$ and the other one decays to $ a_1 W^\pm$ we have the final states given by Eq.~\ref{fs2t}. When the other charged Higgs boson decays to $ZW^\pm$, the production of $h^\pm_1h^\mp_1$ results in the final states of Eq.~\ref{fs2ta} \begin{eqnarray}\label{fs2t} pp&\to & h^\pm_1h^\mp_1 \nonumber \\ &\to & a_1 W^\pm t b\nonumber \\ & \to & 2\tau + 2b + 2 W \nonumber \\ & \to & 2\tau +2b + 2\ell +\not\!\!{E_T}, \end{eqnarray} \begin{eqnarray}\label{fs2ta} pp&\to & h^\pm_1h^\mp_1 \nonumber \\ &\to & Z W^\pm t b\nonumber \\ &\to & 2\tau + 2b + 2 W \nonumber \\ & \to & 2\tau +2b + 2\ell +\not\!\!{E_T}\, \nonumber\\ & \rm{or}& \,2b+ 4\ell +\not\!\!{E_T}. \end{eqnarray} The signal related to the intermediate states of the pair production and the decays of the lightest charged Higgs boson in the channels of Eqs.~\ref{fs1}, \ref{fs2}, \ref{fs2a}, \ref{fs2t} and \ref{fs2ta} is reported in Figure~\ref{muCH}.
We can clearly see that for a light charged Higgs boson ($m_{h^\pm_1} \raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 200$ GeV) the decay modes into a light pseudoscalar can be probed rather easily at the LHC, but probing $a_1W^\pm$ and $ZW^\mp$ together, i.e., the existence of a light pseudoscalar and of the triplet decay modes, needs higher luminosity. Another signature of this model could be the existence of the heavier charged Higgs bosons $h^\pm_{2,3}$, which could be produced at the LHC. For our selected points $h^\pm_2$ is triplet-like and $h^\pm_3$ is doublet-like. Following our discussion in section~\ref{ch1dcy}, such heavy charged Higgs bosons can decay dominantly to $a_1 h^\pm_1$ or $h_1 h^\pm_1$, as shown in Eq.~\ref{fs3} and Eq.~\ref{fs4}. The lighter charged Higgs boson can then decay into final states with $a_1 W^\pm$ or $Z W^\pm$, giving $2\tau (2b)+ 3\ell +\not\!\!{E_T}$ and $4\tau (4b)+ 1\ell +\not\!\!{E_T}$ final states \begin{eqnarray}\label{fs3} pp\to h^\pm_{2,3} +X& \to & a_1/h_1\, h^\pm_1 \nonumber \\ & \to & 2\tau (2b)+ Z W^\pm \nonumber \\ & \to & 2\tau (2b)+ 3\ell + \not\!\!{E_T}, \end{eqnarray} \begin{eqnarray}\label{fs4} pp\to h^\pm_{2,3} +X & \to & a_1/h_1\, h^\pm_1 \nonumber \\ & \to & 2\tau (2b)+ a_1 + W^\pm \nonumber \\ & \to & 4\tau (4b)+ 1\ell + \not\!\!{E_T}. \end{eqnarray} Searching for the above signatures is certainly necessary not only in order to discover a charged Higgs boson but also to determine whether scalars in higher representations of $SU(2)$ are involved in the mechanism of EWSB. \chapter{Simulating a Light Pseudoscalar in the TNMSSM} \section{Synopsis} This chapter is devoted to the phenomenological analysis of a light pseudoscalar state present in the TNMSSM, whose mass is entirely generated by the soft-breaking terms of the theory. As we have already discussed in the introduction to Chapter 1, modes of this type are associated with the presence of flat directions in the potential of a superconformal theory. The goal of the chapter is to provide an in-depth analysis of the possible signatures of this state and of the constraints emerging from a comparison of the predictions of the model against the current LHC data. \section{Introduction} The success of the Standard Model (SM) in explaining the gauge structure of the fundamental interactions has reached its height with the discovery of a scalar particle with most of the properties of the SM Higgs boson - as a 125 GeV mass resonance - at the LHC. With this discovery, the mechanism of spontaneous breaking of the gauge symmetry, which in a gauge theory such as the SM is mediated by a Higgs doublet, has been confirmed, but the possible existence of an extended Higgs sector, at the moment, cannot be excluded. The identification of a new boson by the CMS \cite{CMS, CMS2} and ATLAS \cite{ATLAS} experiments has so far involved only the $WW^*$, $ZZ^*$ and $\gamma\gamma$ channels - using data at 7 and at 8 TeV - at more than $5\sigma$ confidence level for the $Z$ and $\gamma$ cases, and slightly below in the $W$ channel. However, the fermionic decay modes of the new boson, together with other exotic decay modes, are yet to be discovered. Clearly, they are essential in order to establish with better precision the mechanism of electroweak symmetry breaking (EWSB), which is crucial in the SM dynamics.
The new data collection at the LHC at 13 TeV center of mass energy - which will be upgraded to 14 TeV in the future - will probably provide new clues about some possible extensions of the SM, raising large expectations both at the theoretical and at the experimental level. \\ The SM is not a completely satisfactory theory, even with its tremendous success, since it does not provide an answer to long-standing issues, most prominently the gauge-hierarchy problem. This is instead addressed by the introduction of supersymmetry, which, among its benefits, allows gauge coupling unification and, in its R-parity conserving version, also provides a neutral particle as a dark matter candidate. The absence of any supersymmetric signal at the LHC and the recent observation of a Higgs boson $(h_{125})$ of 125 GeV in mass require either a high SUSY mass scale or larger mixings between the scalar tops \cite{pMSSMb}. The situation is more severe for more constrained SUSY scenarios like mSUGRA \cite{cMSSMb}, which merge supersymmetric versions of the SM with minimal supergravity below the Planck scale. In the current situation, extensions of the Higgs sector with the inclusion of one or more electroweak doublets and/or of triplets of different hypercharges - in combination with SM gauge singlets - are still theoretical possibilities in both supersymmetric and non-supersymmetric extensions of the SM. We have recently shown that a supersymmetric extension of the SM with a $Y=0$ triplet and a singlet Higgs superfield \cite{TNSSMo}, called the TNMSSM, is still a viable scenario, compatible with the recent LHC results and the previous constraints from LEP, while respecting several other direct and indirect experimental limits. Building on our previous analysis, here we are going to show that the same model allows a light pseudoscalar in the spectrum, which could have been missed both by older searches at LEP \cite{LEPb} and by the recent ones at the LHC \cite{CMS, CMS2, ATLAS}.\\ Concerning the possible existence of an extended Higgs sector, the observation of a Higgs boson decaying into two light scalar or pseudoscalar states would be one of its direct manifestations. This detection would also allow us to gather significant information about the cubic couplings of the Higgs and, overall, about its potential. However, so far neither the CMS nor the ATLAS collaboration has presented direct bounds on the decays of the Higgs $h_{125}$ into two scalars. If such scalars are very light ($m_\Phi\lsim 100$ GeV), then they cannot be part of the spectrum of an ordinary CP-conserving minimal supersymmetric extension of the SM (MSSM). In fact, in that case they are predicted to be accompanied by a heavy pseudoscalar or by a charged Higgs boson. The only possibilities which are left open require CP-violating scenarios, where one can have a light scalar with a mostly CP-odd component \cite{CPVMSSM}. Such scenarios, however, are in tension with the recent observations of the decay mode $h\to \tau \tau$ \cite{CPVMSSMb}.\\ The natural possibilities for such hidden Higgs bosons are those scenarios characterized by an extended Higgs sector. In the next-to-minimal supersymmetric standard model (NMSSM) with a $Z_3$ symmetry, such a light pseudoscalar is part of the spectrum in the form of a pseudo Nambu-Goldstone mode \cite{NMSSMps}. This situation gets even more interesting with the addition of triplets of appropriate hypercharge assignments \cite{TNSSMo,tnssm}, as in the TNMSSM.
In the case of $Y=0$ Higgs triplet- and singlet-extended scenarios, the triplet does not couple to the $Z$ boson, the singlet does not couple to any gauge boson, and neither of them couples to fermions.\\ At LEP the Higgs boson was searched for in the mass range below $114.5$ GeV via the production processes $e^+e^-\to Zh$ and $e^+e^-\to h_ia_j$ (in scenarios with two Higgs doublets), involving a scalar $(h_i)$ and a pseudoscalar $(a_j)$ with fermionic final states. The $Y=0$ TNMSSM thus becomes a natural candidate for such a hidden Higgs possibility and therefore can evade the LEP bounds \cite{LEPb}. However, the situation gets slightly more complicated for Higgs triplets of non-zero hypercharge, because they do couple to the $Z$ boson. In this chapter we will focus our attention on decays of the Higgs boson into light scalars and pseudoscalars ($h_{125} \to h_ih_j/a_ia_j$). Such light scalars or pseudoscalars, when characterized by a mostly triplet or singlet component, do not couple directly to fermions but decay to fermion pairs ($b$ or $\tau$) via their mixing with Higgs bosons of doublet type under $SU(2)$. Thus their final states are often populated by $b$-quarks and by $\tau$ and $\mu$ leptons. The corresponding leptons and jets are expected to be rather soft, depending on the masses of the hidden scalars. If the doublet-triplet/singlet mixings in the Higgs sector are very small, they can give rise to the typical leptonic signature of charged displaced vertices. The goal of our analysis is to provide a direct characterization of the final states in the decay of a Higgs-like particle which can be helpful in the search for such hidden scalars at the LHC. \section{Higgs decays into two gluons}\label{ggh} In the SM the most efficient production process of the Higgs boson is gluon-gluon fusion (Figure~\ref{glutri}). The amplitude is mediated by a quark loop, which involves all the quarks of the SM, although the third generation, and in particular the top quark, gives the dominant contribution. \begin{figure}[t] \begin{center} \includegraphics[width=0.3\linewidth]{plots/gluontriplet.pdf} \caption{A Feynman diagram depicting the coupling of gluons to the triplet/singlet, via their mixing with the doublets.}\label{glutri} \end{center} \end{figure} In supersymmetric theories the situation is slightly different, because there are the up-type and down-type Higgs doublets $\hat H_u$ and $\hat H_d$ that couple to the up-type and down-type quarks/squarks respectively. Besides the sparticle contributions, the main difference between the SM and supersymmetric theories comes from the coupling of the Higgs bosons to fermions. These couplings are given by \begin{eqnarray} g_{h_i u\bar u} = -\frac{i}{\sqrt2}y_u \mathcal{R}^S_{i1},\\ g_{h_i d\bar d} = -\frac{i}{\sqrt2}y_d \mathcal{R}^S_{i2},\\ g_{h_i\ell\bar\ell} = -\frac{i}{\sqrt2}y_\ell \mathcal{R}^S_{i2}, \end{eqnarray} where $\mathcal{R}^S_{ij}$ is the rotation matrix of the CP-even sector. This means that the top/bottom contribution can be suppressed or enhanced, depending on the structure of $h_i$. The production cross-section for $g,g\rightarrow h_i$ is related to the decay width of $h_i\rightarrow g,g$.
At leading order, this decay width is given by \begin{align}\label{gluongluon} \Gamma(h_i\rightarrow g,g)&=\frac{G_F\,\alpha_s^2\,m_h^3}{36\sqrt2\,\pi^3} \left|\frac{3}{4}\sum_{q=t,\, b} \frac{g_{h_i q\bar q}}{(\sqrt2G_F)^{1/2}m_q}\, A_{1/2}(\tau^i_q)+\sum_{\tilde{q}=\tilde t, \, \tilde b}\frac{g_{h_i\tilde q\tilde q}}{m^2_{\tilde q}}A_0(\tau^i_{\tilde q})\right|^2, \end{align} where $A_0$ and $A_{1/2}$ are the spin-0 and spin-1/2 loop functions \begin{eqnarray} &&A_0(x)=-\frac{1}{x^2}\left(x-f(x)\right),\\ &&A_{1/2}(x)=\frac{2}{x^2}\left(x+(x-1)f(x)\right), \end{eqnarray} with the analytic continuations \begin{eqnarray} f(x)=\left\{ \begin{array}{lr} \arcsin^2(\sqrt{x})& x\leq1\\ -\frac{1}{4}\left(\ln\frac{1+\sqrt{1-1/x}}{1-\sqrt{1-1/x}}-i\pi\right)^2& x>1 \end{array}\right. \end{eqnarray} and $\tau^i_j=\frac{m_{h_i}^2}{4\,m_j^2}$. We show in Figure~\ref{hgg} the decay width of $h_{1,2}\rightarrow g,g$. In general, this decay width can be very different from the SM one in the case of supersymmetric theories with an extended Higgs sector, like the TNMSSM. In fact, in the latter case only the doublet Higgses couple to the fermions, as shown in Eq. (\ref{spm}). This implies that if the Higgs is mostly triplet- or singlet-like, the fermion couplings are suppressed by $\mathcal{R}^S_{i1,2}$, in the limit of low $\tan\beta$. In Figure~\ref{hgg} the dashed line is the SM decay width and the color code is defined as follows: we mark in red the up-type Higgs (>90\%), in blue the down-type, in green the triplet/singlet-type and in gray the mixed type. A look at Figure~\ref{hgg}(a) and (b) shows that for low $\tan\beta$ the decay width of a triplet/singlet-type Higgs is heavily suppressed. This occurs because the triplet and singlet Higgses couple to fermions only through their mixing with the $SU(2)$ doublets. It is also rather evident that the shapes of the decay widths for Higgses of up-type and of mixed type are similar to those of the SM Higgs, for a large range of the mass of the extra Higgses. Figure~\ref{hgg}(a) shows that for a light Higgs which takes the role of $h_{125}$, the SM decay width can be reproduced by the down-type Higgs of the TNMSSM, even in the case of low $\tan\beta$. Figure~\ref{hgg}(c) and (d) instead show that for a high value of $\tan\beta$ the decay width is dominated by the down-type Higgs, hence by the bottom quark. However, it is still possible to have a SM-like decay width mediated by the top quark. In Figure~\ref{hgg}(d) it is quite evident that the bottom quark contribution has the same shape as in the MSSM \cite{anatomy2}. In this case the TNMSSM decay width of the Higgs is very different from the SM one for $m_h\gsim200$ GeV. \begin{figure}[t] \begin{center} \mbox{\hskip -20 pt\subfigure[]{\includegraphics[width=0.55\linewidth]{plots/widthmasslowbetah1.pdf}} \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/widthmasslowbetah2.pdf}}} \mbox{\hskip -20 pt\subfigure[]{\includegraphics[width=0.55\linewidth]{plots/widthmasshighbetah1.pdf}} \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/widthmasshighbetah2.pdf}}} \caption{We show a comparison between the SM and the TNMSSM predictions for the decay width of $h_1\rightarrow g,g$ (a), $h_2\rightarrow g,g$ (b) for $1<\tan\beta<15$ and $h_1\rightarrow g,g$ (c), $h_2\rightarrow g,g$ (d) for $20<\tan\beta<40$. We use the color code to distinguish among the up-type (>90\%) (red), down-type (blue), triplet/singlet-type (green) and mixed-type Higgses (gray).}\label{hgg} \end{center} \end{figure}
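As an illustration of these expressions, the short sketch below evaluates Eq.~(\ref{gluongluon}) numerically in the SM-like limit, i.e. with the reduced quark couplings $g_{h_i q\bar q}/[(\sqrt2 G_F)^{1/2} m_q]$ set to one and the squark loops neglected; the values of $\alpha_s$ and of the quark masses are illustrative assumptions and not the output of our scan.
\begin{verbatim}
# Minimal numerical sketch of Eq. (gluongluon) in the SM-like limit: reduced
# quark couplings set to one, squark loops neglected.  The inputs below
# (alpha_s, quark masses) are illustrative assumptions.
import cmath, math

G_F     = 1.1663787e-5   # Fermi constant [GeV^-2]
alpha_s = 0.11           # strong coupling at the Higgs mass scale (assumption)

def f(x):
    """Loop integral with its analytic continuation for x > 1."""
    if x <= 1.0:
        return cmath.asin(math.sqrt(x))**2
    y = math.sqrt(1.0 - 1.0 / x)
    return -0.25 * (cmath.log((1.0 + y) / (1.0 - y)) - 1j * math.pi)**2

def A_half(x):
    """Spin-1/2 loop function."""
    return 2.0 * (x + (x - 1.0) * f(x)) / x**2

def A_zero(x):
    """Spin-0 loop function (squark loops, not used in the SM-like limit)."""
    return -(x - f(x)) / x**2

def gamma_h_gg(m_h, quark_loops=((173.0, 1.0), (4.18, 1.0))):
    """LO width h -> gg in GeV; quark_loops = (mass, reduced coupling) pairs."""
    amp = sum(0.75 * c * A_half(m_h**2 / (4.0 * m_q**2)) for m_q, c in quark_loops)
    return G_F * alpha_s**2 * m_h**3 / (36.0 * math.sqrt(2.0) * math.pi**3) * abs(amp)**2

print("Gamma(h -> gg) at 125 GeV: %.2e GeV" % gamma_h_gg(125.0))
\end{verbatim}
Rescaling the reduced couplings by the appropriate mixing factors then reproduces the suppression of the triplet/singlet-like points visible in Figure~\ref{hgg}.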
\section{Higgs decays into pseudoscalars}\label{psdcy} The most important consequence of the $Z_3$ symmetry of the potential is that the mass of the pseudoscalar is in the GeV range, $m_{a_1}\sim\mathcal O(10)$ GeV, if we choose $A_{S, T, TS, \kappa, U, D}\sim\mathcal O(1)$ GeV. In this situation the decay $h_{125}\rightarrow a_1,a_1$ can be kinematically allowed. We study the decay $h_{125}\rightarrow a_1,a_1$ through its decay width, given by \begin{eqnarray}\label{haaWidth} \Gamma_{h_i\rightarrow a_j,a_j}=\frac{G_F}{16\sqrt2\pi}\frac{M_Z^4}{M_{h_i}}\left(1-\frac{4\,M_{a_j}^2}{M_{h_i}^2}\right)^{1/2}\left|\frac{g_{h_ia_ja_j}}{i M_Z^2/v}\right|^2, \end{eqnarray} where the $g_{h_ia_ja_j}$ coupling is given in the appendix. In Figure~\ref{Whaa}(a) and (b) we plot this decay width as a function of $\lambda_S$ and $\lambda_T$ respectively. Figure~\ref{Whaa}(a) shows that for $\left|\lambda_S\right|\gsim0.3$ we have scenarios in which the Higgs of doublet type decays into pseudoscalars of singlet type, while Figure~\ref{Whaa}(b) shows no particular structure in the dependence of $\Gamma_{h_1\rightarrow a_1,a_1}$ on $\lambda_T$. Being interested in the fermionic final states of the decay of the SM-like Higgs into the light pseudoscalar $a_1$, $h_{125}\rightarrow a_1,a_1$, we collect the relevant couplings of the pseudoscalars to fermions, which are given by \begin{eqnarray} g_{a_i u\bar u} = -\frac{\gamma_5}{\sqrt2}y_u \mathcal{R}^P_{i1},\\ g_{a_i d\bar d} = -\frac{\gamma_5}{\sqrt2}y_d \mathcal{R}^P_{i2},\\ g_{a_i\ell\bar\ell} = -\frac{\gamma_5}{\sqrt2}y_\ell \mathcal{R}^P_{i2}. \end{eqnarray} Because the triplet, as well as the singlet, does not couple to the fermions, each $a_i$ will decay into fermions only through its mixing with the doublet Higgses. This means that if $a_1$ is mostly of triplet or singlet component, its fermionic decay will be suppressed by the rotation elements $\mathcal{R}^P_{i1,2}$. An interesting consequence of this property is that this highly suppressed decay can generate a displaced vertex for the fermionic final states. \begin{figure}[t] \begin{center} \mbox{\hskip -20 pt\subfigure[]{\includegraphics[width=0.55\linewidth]{plots/ha1a1Widthls.pdf}} \subfigure[]{\includegraphics[width=0.55\linewidth]{plots/ha1a1Widthlt.pdf}}} \caption{We plot the decay width of the $h_{125}$ to two pseudoscalars (a) with respect to $\lambda_S$ and (b) with respect to $\lambda_{T}$. The red and orange coloured bands show the regions where $\mathcal{B}(h_{125} \to a_1 a_1)=20\%\, , 10\%$ respectively. }\label{Whaa} \end{center} \end{figure} \section{Phenomenology and benchmark points}\label{secbps} In Table~\ref{bps} we show the mass spectrum along with the other parameters which are necessary for the identification of three benchmark points. Together with the recent Higgs data we have also considered the recent bounds on the stop and sbottom masses \cite{thridgensusy} and the mass bounds on the lightest chargino from LEP \cite{chargino}. We have also taken into account the recent bounds on the charged Higgs boson mass from both CMS \cite{ChCMS} and ATLAS \cite{ChATLAS}. These have been derived in their searches for light charged Higgs bosons from the decay of a top quark, with subsequent decays to $\tau \bar{\nu}$.
The benchmark points 1 and 2 (BP1 and BP2) are characterized by one hidden Higgs boson, corresponding to a pseudoscalar particle of singlet type with a mass of $\sim 20$ and $57$ GeV respectively. BP3, instead, has two hidden Higgs bosons, one of them a pseudoscalar of singlet type around $\sim 37$ GeV and a second (scalar) one of triplet type, around $\sim 118$ GeV in mass. In the cases of BP1 \& BP2, $h_1$ is the discovered Higgs boson $h_{125}$, whereas for BP3 it is $h_2$. \begin{figure}[hbt] \begin{center} \mbox{\subfigure[]{\includegraphics[width=0.3\linewidth]{plots/a1a1decay.pdf}}\hskip 30 pt \subfigure[]{\includegraphics[width=0.3\linewidth]{plots/a1mix.pdf}}} \mbox{\subfigure[]{\includegraphics[width=0.35\linewidth]{plots/gluonfusion.pdf}}} \caption{Pseudoscalar (triplet/singlet) pair production from a Higgs boson produced via gluon-gluon fusion, and the pseudoscalar decays via their mixing with the doublets.}\label{glutri} \end{center} \end{figure} \begin{table} \begin{center} \renewcommand{\arraystretch}{1.2} \begin{tabular}{||c||c|c|c||} \hline\hline Benchmark&BP1&BP2&BP3 \\ Points & &&\\ \hline\hline $m_{h_1}$ & {\color{red}$\sim 125$} & {\color{red}$\sim 125$} & {\color{blue}$117.73 $} \\ \hline $m_{h_2}$ & 183.58 &$162.59 $ & {\color{red}$\sim 125$} \\ \hline $m_{h_3}$& 614.14 &$982.59$ & $791.37$ \\ \hline $m_{h_4}$ & 965.75 &$1560.7$ & $1051.6$ \\ \hline \hline $m_{a_1}$ & \color{blue}20.50& \color{blue}$57.02$& \color{blue}$36.79$ \\ \hline $m_{a_2}$ &435.83 &$644.50$ & $620.81$ \\ \hline $m_{a_3}$& 659.20 &$1018.1$ & $831.51$ \\ \hline \hline $m_{h^\pm_1}$ & \color{blue}182.84 &\color{blue}$162.25$ &\color{blue}$117.47$ \\ \hline $m_{h^\pm_2}$ & 436.04 & $644.55$ & $620.86$ \\ \hline $m_{h^\pm_3}$& 626.23 & $989.77$ & $805.58$ \\ \hline \hline \end{tabular} \caption{Benchmark points for a collider study consistent with the $\sim 125$ GeV Higgs mass; all masses are in GeV, with the $h_{i=1,2,3,4}$ and $a_{i=1,2,3}$ masses calculated at one-loop and the $h^{\pm}_{i=1,2,3}$ masses at tree level. We color in red the states which are mostly doublets ($>90\%$) and in blue those which are mostly triplet/singlet ($>90\%$). The points are consistent with the $2\sigma$ limits of $h_{125}\to WW^*, ZZ^*, \gamma\gamma$ \cite{CMS, ATLAS}.}\label{bps} \end{center} \end{table} We now turn our attention to the decay of the discovered Higgs boson $h_{125}$ into a light pseudoscalar pair $a_1a_1$. Table~\ref{hdcy2} shows the branching ratios for the decay of $h_{125}$ in the case of the three benchmark points that we have selected. The table shows that for BP1 this branching ratio $(\mathcal{B})$ is the lowest, $\mathcal{B}(h_{125}\to a_1a_1)\sim 10\%$, while for BP3 it is the highest, $\mathcal{B}(h_{125}\to a_1a_1)\sim 18\%$. The observed decay modes are consistent with the $2\sigma$ limits of $h_{125}\to WW^*, ZZ^*, \gamma\gamma$ \cite{CMS, ATLAS}. Such light pseudoscalars - though mostly singlet or triplet - decay to the fermionic pairs which are kinematically allowed, via their mixing with the $H_u$ and $H_d$ doublets. This is because neither the singlet nor the triplet Higgses couple to fermions (see Eq.~\ref{spt}). For the benchmark point BP3 there is another hidden scalar which is CP-even, with a mass around $\sim 118$ GeV. $h_{125}$ cannot decay into this state $h_1$, as it is kinematically forbidden. If this $h_1$ is produced by other means it can have two-body decays to fermion pairs, as in the case of the $a_1$, via the mixing with the doublets.
It will also have three-body decays ($WW^*$, $ZZ^*$) via its $SU(2)$ triplet charge and its mixing with the doublets. \begin{table} \begin{center} \renewcommand{\arraystretch}{1.4} \begin{tabular}{||c||c|c|c|c|c|c|c||} \hline\hline Benchmark&\multicolumn{7}{|c||}{Branching ratios}\\ \hline Points & $a_1 a_1$& $h_1 h_1$ & $a_1 Z$ &\; $W^+ W^-$ \; & \;$b\bar{b}$ \;&\;$\tau \bar{\tau}$&$\mu\bar\mu$ \\ \hline\hline BP1 &0.106 & - &$4.02\times10^{-7}$ & 0.138 &0.695 & 0.042&$1.50\times10^{-4}$\\ \hline BP2 & 0.162 & - & $1.43\times10^{-8}$ & 0.136 & $0.645$ &$0.039$&$1.39\times10^{-4}$ \\ \hline BP3 & $0.178$ & - & $1.93\times10^{-6}$ & 0.137 & $0.628$ & $0.038$&$1.35\times10^{-4}$ \\ \hline\hline \end{tabular} \caption{Decay branching ratios of $h_{125}$ for the three benchmark points, where the $h_{125}$ mass is calculated at tree level. The kinematically forbidden decays are marked with dashes. The points are consistent with the $2\sigma$ limits of $h_{125}\to WW^*, ZZ^*, \gamma\gamma$ \cite{CMS, ATLAS}.}\label{hdcy2} \end{center} \end{table} \begin{table} \begin{center} \renewcommand{\arraystretch}{1.4} \begin{tabular}{||c||c|c|c||} \hline\hline Benchmark&\multicolumn{3}{|c||}{Branching ratios}\\ \hline Points& \;$b\bar{b}$ \;&\;$\tau \bar{\tau}$&$\mu\bar\mu$ \\ \hline\hline BP1 & $0.939$ & $0.061$&$2.20\times10^{-4}$ \\ \hline BP2 & $0.943$ & $0.057$&$2.04\times10^{-4}$\\ \hline BP3 & $0.942$ & $0.058$& $2.07\times10^{-4}$\\ \hline\hline \end{tabular} \caption{Decay branching ratios of $a_1$ for the three benchmark points $BP_i$.}\label{a1dcy2} \end{center} \end{table} For these benchmark points we have computed the production cross-section of the $h_{125}$ Higgs boson via the gluon-gluon fusion channel at the LHC. Table~\ref{Hcrosssec} presents the cross-sections, which include the associated K-factors from the Higgs Cross Section Working Group \cite{HCWG}. In the next section we simulate the production of such light pseudoscalars from the decay of the $h_{125}$. The choice of this particular production process is motivated by its large cross-section and by the rather clean final states that ensue, which favour the extraction of the pseudoscalar $a_1$ pair. \begin{table} \begin{center} \renewcommand{\arraystretch}{1.4} \begin{tabular}{||c||c|c|c||} \hline\hline ECM&\multicolumn{3}{|c||}{$\sigma(gg\to h_{125})$ in pb}\\ in TeV&\multicolumn{3}{|c||}{for benchmark points}\\ \hline &BP1 & BP2 &BP3 \\ \hline 13&41.00&41.00&41.00\\ \hline 14&46.18&46.18&46.18\\ \hline \hline \end{tabular} \caption{Cross-section of $gg\to h_{125}$ at the LHC for center of mass energies of 13 and 14 TeV for the three benchmark points.}\label{Hcrosssec} \end{center} \end{table} \section{Signature and collider simulation}\label{sigsim} The discovered Higgs boson $h_{125}$ can decay into two light pseudoscalars, which further decay into $\tau$ or $b$ pairs. The $b$ and $\tau$ channels are therefore the relevant ones to look into in the search for such hidden decays. For this purpose we have implemented the model in SARAH \cite{sarah} and we have generated the model files for CalcHEP \cite{calchep}. These have been used to generate the SLHA decay file, containing the decay branching ratios and the corresponding mass spectra. The generated events have then been simulated with {\tt PYTHIA} \cite{pythia} via the SLHA interface \cite{slha}.
The simulation at hadronic level has been performed using {\tt Fastjet-3.0.3} \cite{fastjet} with the {\tt Cambridge-Aachen} algorithm. We have selected a jet size $R=0.5$ for the jet formation, with the following criteria: \begin{itemize} \item the calorimeter coverage is $\rm |\eta| < 4.5$; \item the minimum transverse momentum of the jet is $ p_{T,min}^{jet} = 10$ GeV and jets are ordered in $p_{T}$; \item leptons ($\rm \ell=e,~\mu$) are selected with $p_T \ge 10$ GeV and $\rm |\eta| \le 2.5$; \item no jet should be accompanied by a hard lepton in the event; \item $\Delta R_{lj}\geq 0.4$ and $\Delta R_{ll}\geq 0.2$; \item since an efficient identification of the leptons is crucial for our study, we additionally require the hadronic activity within a cone of $\Delta R = 0.3$ around each isolated lepton to be $\leq 0.15\, p^{\ell}_T$, where $p^{\ell}_T$ is the transverse momentum of the lepton. \end{itemize} A minimal implementation sketch of these requirements is given below. We keep the cuts on the $p_T$ of the leptons and the jets relatively low ($p_T \ge 10$ GeV), as they will be generated from the decays of the lighter pseudoscalars. The $h_{125}$, once produced via gluon-gluon fusion, will decay into two very light pseudoscalars ($m_{a_1}\sim 20$ GeV for BP1). The light pseudoscalars will then decay further into $b$ or $\tau$ pairs (see Table~\ref{a1dcy2}). The parton level signatures would be $4b$, $4\tau$ and $2b+2\tau$. In reality, this description is expected to change due to hadronization and to the contributions from initial- and final-state radiation in the presence of $b$ quarks and of $\tau$ leptons. The number of jets can indeed increase or decrease due to these effects. The tagging efficiency of a $b$-quark jet ($b_{\rm{jet}}$) is determined through the reconstruction of the secondary vertex and it is therefore momentum dependent. For this purpose we have taken - for the $b_{\rm{jet}}$'s from $t\bar{t}$ - the single-jet tagging efficiency equal to $0.5$, while for the remaining components of the final state we have followed closely the treatment of \cite{btag}. In the case of the $\tau_{\rm{jet}}$ we have considered the hadronic decay of the $\tau$ to be characterized by at least one charged track within $\Delta R \leq 0.1$ of the candidate $\tau_{\rm{jet}}$ \cite{tautag}. \begin{figure}[hbt] \begin{center} \includegraphics[width=0.33\linewidth, angle=-90]{plots/bjetp.pdf} \includegraphics[width=0.33\linewidth, angle=-90]{plots/ipt.pdf} \caption{ $p^{b_j}_T$ distribution (left) and $p^{\ell}_T$ distribution (right) for $t\bar{t}$ and for the signal in BP2.}\label{ptjl} \end{center} \end{figure} Figure~\ref{ptjl} (left) shows the $b_{\rm{jet}}$ $p_T$ coming from the pseudoscalar decays in the case of BP2, together with the dominant background $t\bar{t}$. Clearly one may observe that the $b_{\rm{jet}}$'s coming from the signal (BP2) are rather soft, mostly with $p_T \lesssim 50$ GeV. Figure~\ref{ptjl} (right) shows the transverse momentum $p_T$ of the lepton coming from the signal (BP2) and from the dominant backgrounds $t\bar{t}$ and $ZZ$. This clearly shows that the signal leptons are very soft ($p_T \lesssim 40$ GeV) compared to the corresponding backgrounds.
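The following is a minimal sketch of the object selection and lepton isolation listed above, written for simple $(p_T,\eta,\phi)$ tuples; it is only meant to make the criteria explicit and is not the actual analysis code, which relies on {\tt Fastjet} and {\tt PYTHIA}.
\begin{verbatim}
# Sketch of the acceptance and isolation criteria on toy (pT, eta, phi) tuples.
# Thresholds follow the list given in the text; this is not the analysis code.
import math

def delta_R(a, b):
    """Angular separation between two (pT, eta, phi) objects."""
    deta = a[1] - b[1]
    dphi = abs(a[2] - b[2])
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.sqrt(deta**2 + dphi**2)

def select(jets, leptons, hadrons):
    """Return (selected jets, selected isolated leptons)."""
    # calorimeter coverage and minimum jet pT; jets ordered in pT
    jets = sorted((j for j in jets if j[0] >= 10.0 and abs(j[1]) < 4.5),
                  key=lambda j: -j[0])
    # lepton acceptance
    leps = [l for l in leptons if l[0] >= 10.0 and abs(l[1]) <= 2.5]
    # lepton-jet and lepton-lepton separation
    leps = [l for l in leps
            if all(delta_R(l, j) >= 0.4 for j in jets)
            and all(delta_R(l, m) >= 0.2 for m in leps if m is not l)]
    # isolation: hadronic activity within Delta R = 0.3 of the lepton
    # must not exceed 15% of the lepton pT
    leps = [l for l in leps
            if sum(h[0] for h in hadrons if delta_R(l, h) < 0.3) <= 0.15 * l[0]]
    return jets, leps
\end{verbatim}
Here {\tt jets}, {\tt leptons} and {\tt hadrons} are lists of $(p_T,\eta,\phi)$ tuples, and the hadronic activity is simply the scalar $p_T$ sum of the hadrons falling inside the isolation cone.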
\begin{figure}[hbt] \begin{center} \includegraphics[width=0.33\linewidth, angle=-90]{plots/jetm.pdf} \includegraphics[width=0.33\linewidth, angle=-90]{plots/taupt.pdf} \caption{(Left) jet-multiplicity ($n_{\textrm{jet}}$) distributions and (right) $p^{\tau}_T$ distributions for signal events coming from the pseudoscalar $a_1$ decays for BP2 and the dominant SM backgrounds $t\bar{t}$, $ZZ$.}\label{njtaupt} \end{center} \end{figure} Next we have investigated the number of jets in the final states after hadronization. Figure~\ref{njtaupt} (left) shows the number of jets for the signal (BP2) and for the dominant background $t\bar{t}$. Due to the low cuts in $p_T$, the number of final state jets increases, in this case, both for the signal and for the background. The difference between the two is still prominent: the signal peaks around 4 jets and $t\bar{t}$ around 6. Thus a requirement of a relatively low number of jets in the final state will remove the dominant $t\bar{t}$ contribution quite effectively. Figure~\ref{njtaupt} (right) shows the transverse momentum ($p^{\tau}_T$) distribution of the $\tau$ at parton level for the signal in BP2 and the dominant $\tau\tau$ backgrounds coming from $ZZ$ and $t\bar{t}$. Clearly, the condition $p^{\tau}_T \lesssim 50$ GeV will effectively reduce the background contributions to the final state. \subsection{$2b+2\tau$} In the case of the TNMSSM, the discovered Higgs boson can also decay into a pair of lighter mass eigenstates, $a_1a_1$ and/or $h_1h_1$. The possibility of producing such light states, especially as singlet-like pseudoscalars, has been discussed in \cite{TNSSMo}, and it is shown in Table~\ref{bps}. Table~\ref{hdcy2} presents the branching ratios for the decay of $h_{125}$ for the three benchmark points that we have selected. Notice that the branching ratio into the pseudoscalar pair, $\mathcal{B}(h_{125}\to a_1 a_1)$, is about 10-20\%. The $a_1$ pair then decays into $b$ and $\tau$ pairs with the rates shown in Table~\ref{a1dcy2}. We have selected a final state with $2b+2\tau$, where one of the $a_1$'s decays into a $\tau$ pair and the other one decays into a $b$ pair. This also enhances the combinatorial factor and thus the number of events in the final state. The dominant SM backgrounds in this case come from $t\bar{t}$, $ZZ$ and $b\bar{b}Z$. Figure~\ref{njtaupt} (left) shows that the requirement of a low number of jets, $n_j \leq 5$, will suppress the $t\bar{t}$ backgrounds. A similar effect is obtained by requiring a low $p_T$ for the $\tau_{\rm{jet}}$'s and $b_{\rm{jet}}$'s ($p_T \lesssim 50$ GeV). The corresponding $\tau$ decays give rise to very soft neutrinos, and therefore, by demanding a low missing $p_T$, $\leq 30$ GeV, we can reduce the backgrounds even further. The $b$ and $\tau$ tagging come with their own efficiencies \cite{btag} and \cite{tautag}, but this also helps in suppressing the other multi-jet backgrounds of the SM. In Table~\ref{2b2tau13} and Table~\ref{2b2tau14} we present the number of events for the three benchmark points, coming both from the signal and from the SM backgrounds at the LHC, for a center of mass energy of 13 TeV and 14 TeV respectively. The tables also show how these numbers change with each additional cut. We ask for a final state with $n_j\leq 5$, in which we demand the presence of at least two $b_{\rm{jet}}$'s and two $\tau_{\rm{jet}}$'s. In our notation, this request is indicated in the form: $n_j\leq 5\,[2b_{\rm{jet}}\,+ \,2\tau_{\rm{jet}}]$.
We will be using the ampersand \& (a logical {\em and}) to combine additional constraints on the event, either in the form of particle/jet multiplicities or kinematical restrictions, and define the signal as $$\rm{sig1}: n_j\leq 5\,[2b_{\rm{jet}}\, + \,2\tau_{\rm{jet}}]\,\&\,\not\!\!{p_T} \leq 30 \, \rm{GeV}.$$ In the expression above, we have also required that the missing transverse momentum is smaller than 30 GeV $(\&\,\not\!\!{p_T} \leq 30 \, \rm{GeV})$. In addition we apply some other cuts on the signal in order to reduce the backgrounds. For instance, in Table~\ref{2b2tau13} we introduce a long sequence of such cuts (first column). In the case of BP1 the significance, after these selections, is $4.00\, \sigma$. The two additional conditions $p_1$ and $p_2$ are then applied as alternative clauses, and are enclosed in separate rows. \\ The first sequential cuts include the $b_{\rm{jet}}$ pair invariant mass vetoes around $m_Z$ and around $m_{h_{125}}$, i.e. the conditions $|m_{bb}-m_Z|>10$ GeV and $|m_{bb}-m_{h_{125}}|>10$ GeV, where $m_Z$ is the mass of the $Z$ gauge boson and $m_{h_{125}}$ is the Higgs mass (125 GeV). Similarly, we also put a veto on the invariant mass of the $\tau_{\rm{jet}}$ pair: $|m_{\tau\tau}-m_Z|>10$ GeV and $|m_{\tau\tau}-m_{h_{125}}|>10$ GeV. Finally, since we are searching for hidden Higgs bosons, we demand that $m_{\tau\tau}<125$ GeV and $m_{bb}<125$ GeV, where $m_{bb}$ and $m_{\tau\tau}$ are the invariant masses of the $b$ and $\tau$ pairs. \begin{table}[htb] \begin{center} \hspace*{-1.0cm} \renewcommand{\arraystretch}{1} \begin{tabular}{|c||c|c|c||c|c|c|c|c||} \hline\hline Final states&\multicolumn{3}{|c||}{Benchmark}&\multicolumn{5}{|c||}{Backgrounds } \\ \hline &BP1 & BP2&BP3 & $t\bar{t}$& $ZZ$ & $Z h$ &$b\bar{b}h$& $b\bar{b}Z$\\ \hline \hline $n_j\leq 5\,[2b_{\rm{jet}}+ 2\tau_{\rm{jet}}$]&\multirow{2}{*}{220.10}&\multirow{2}{*}{591.46}&\multirow{2}{*}{310.19}&\multirow{2}{*}{1824.08}&\multirow{2}{*}{199.50}&\multirow{2}{*}{39.56}&\multirow{2}{*}{11.87}&\multirow{2}{*}{4903.05}\\ $\&\,\not\!\!{p_T} \leq 30$ GeV&&&&&&&&\\ &&&&&&&&\\ $\&\, p_T^{bj_{1,2}}\leq 50$ GeV&\multirow{2}{*}{211.30}&\multirow{2}{*}{568.14}&\multirow{2}{*}{289.02}&\multirow{2}{*}{410.83}&\multirow{2}{*}{73.04}&\multirow{2}{*}{7.87}&\multirow{2}{*}{3.96}&\multirow{2}{*}{2941.83}\\ $\& \, |m_{bb}-m_Z|>10$ GeV&&&&&&&&\\ &&&&&&&&\\ $ \&\, |m_{bb}-m_{h_{125}}|>10$ GeV&211.30&565.32&289.02&386.18&73.04&7.52&3.96&2614.96\\ &&&&&&&&\\ $ \&\, |m_{\tau\tau}-m_Z|>10$ GeV&211.30&560.37&289.02&312.23&62.13&6.29&3.46&2397.04\\ &&&&&&&&\\ $\&\, |m_{\tau\tau}-m_{h_{125}}|>10$ GeV&211.30&560.37&289.02&287.58&62.13&6.18&2.97&2397.04\\ &&&&&&&&\\ $\&\, m_{\tau\tau}<125$ GeV&211.30&560.37&289.02&254.71&62.13&6.18&2.97&2397.04\\ &&&&&&&&\\ $\&\, m_{bb}<125$ GeV&211.30&559.66&289.02&230.06&62.13&6.07&2.97&2288.09\\ &&&&&&&&\\ \hline Significance&4.00&9.98&5.39&\multicolumn{5}{|c||}{}\\ \hline \hline \multirow{3}{*}{$\&\, p_1:|m_{bb}-m_{a_1}|\leq 10$ GeV}&\multirow{3}{*}{198.82}&\multirow{3}{*}{281.95}&\multirow{3}{*}{216.04}&24.65&0.00&0.22&0.49&326.87\\ &&&&65.73&26.16&1.46&0.49&1307.48\\ &&&&65.73&8.72&1.34&1.00&435.83\\ \hline Significance&8.47&6.87&8.01&\multicolumn{5}{|c||}{}\\ \hline \hline \multirow{3}{*}{$\&\, p_2:|m_{\tau\tau}-m_{a_1}|\leq 10$ GeV}&\multirow{3}{*}{205.29}&\multirow{3}{*}{229.66}&\multirow{3}{*}{203.63}&65.73&3.27&0.33&0.00&0.00\\ &&&&73.95&28.34&1.46&0.49&762.70\\ &&&&41.08&13.08&1.57&1.48&0.00\\ \hline 
Significance&12.40&6.94&12.65&\multicolumn{5}{|c||}{}\\ \hline \hline \end{tabular} \caption{The number of events for a $n_j\leq 5\,[2b_{{\rm{jet}}}+ 2\tau_{\rm{jet}}]\,\&\not\!\!{p_T} \leq 30$ GeV final state at 100 fb$^{-1}$ of luminosity at the LHC, for a center of mass energy of 13 TeV. We require that the original signal has a number of jets $\leq 5$, of which 2 are $b_{\rm{jet}}$'s and 2 are $\tau_{\rm jet}$'s, with a missing $p_T\, (\not\!\!{p_T}) \leq$ 30 GeV. We have denoted with $p_T^{bj_{1,2}}$ the transverse momentum of the $b_{\rm{jet}}$'s, with the two $b$'s labelled as 1 and 2. The final states are selected by imposing a long list of sequential cuts on the event, indicated with an ampersand (\&). The two additional options $p_1$ and $p_2$ are, however, alternatives, and are imposed as separate constraints (a logical {\em or}). For this reason they are enclosed in separate rows.}\label{2b2tau13} \end{center} \end{table} From Table~\ref{2b2tau13} and Table~\ref{2b2tau14} we deduce that the most dominant SM backgrounds are those from $t\bar{t}$, $ZZ$, $Zh$, $b\bar{b}h$ and $b\bar{b}Z$. Though the 125 GeV bound on the two invariant masses substantially reduces most of the backgrounds, the $b\bar{b}Z$ rate still remains relatively large. At this stage the signal significances for the two benchmark points BP2 and BP3 both cross the $5\,\sigma$ value at an integrated luminosity of 100 fb$^{-1}$, reaching $9.98\, \sigma$ and $5.39 \,\sigma$ respectively, for a center of mass energy of 13 TeV. In the case of BP1 this value is at the level of $4 \,\sigma$. This is expected, given that in the case of BP2 the branching ratio $\mathcal{B}(h_{125}\to a_1a_1)$ is about $16\%$ (see Table~\ref{hdcy2}) and the pseudoscalar is relatively heavy, with a mass around $57$ GeV. The $\tau_{\rm jet}$'s and $b_{\rm{jet}}$'s coming from the decays of the $a_1$ are relatively harder (characterized by a larger momentum) than for the benchmark points BP1 and BP3, so fewer events are cut out by the $p_T$ thresholds. Thus for BP2 we can reach a $5\sigma$ level of signal significance already at an integrated luminosity of 25 fb$^{-1}$, for a center of mass energy of 13 TeV. The signal significance stays very similar at 14 TeV, with little improvement for each of the $BP_i$'s: the values are $4.47 \,\sigma$, $10.18\, \sigma$ and $5.98 \,\sigma$ respectively for BP1, BP2 and BP3. 
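For reference, the significances quoted in these tables are consistent (up to rounding of the event numbers shown) with the simple event-counting estimate $$\mathcal{S}\,=\,\frac{S}{\sqrt{S+B}},$$ where $S$ and $B$ are the total numbers of signal and background events surviving the cuts. For instance, for BP1 in Table~\ref{2b2tau13} one has $S=211.30$ and $B\simeq 2589$ after the last sequential cut, which reproduces the quoted $4.00\,\sigma$.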
\begin{table} \begin{center} \hspace*{-1.0cm} \renewcommand{\arraystretch}{1} \begin{tabular}{|c||c|c|c||c|c|c|c|c||} \hline\hline Final states&\multicolumn{3}{|c||}{Benchmark}&\multicolumn{5}{|c||}{Backgrounds }\\ \hline &BP1 & BP2&BP3 & $t\bar{t}$& $ZZ$ & $Z h$ &$b\bar{b}h$& $b\bar{b}Z$\\ \hline \hline $n_j\leq 5\,[2b_{\rm{jet}}+ 2\tau_{\rm{jet}}$]&\multirow{2}{*}{253.10}&\multirow{2}{*}{641.50}&\multirow{2}{*}{361.69}&\multirow{2}{*}{1530.66}&\multirow{2}{*}{223.72}&\multirow{2}{*}{40.35}&\multirow{2}{*}{19.77}&\multirow{2}{*}{4657.83}\\ $\&\,\not\!\!{p_T} \leq 30$ GeV&&&&&&&&\\ &&&&&&&&\\ $\&\, p_T^{bj_{1,2}}\leq 50$ GeV&\multirow{2}{*}{248.41}&\multirow{2}{*}{605.68}&\multirow{2}{*}{337.04}&\multirow{2}{*}{294.36}&\multirow{2}{*}{85.11}&\multirow{2}{*}{7.80}&\multirow{2}{*}{7.19}&\multirow{2}{*}{3432.09}\\ $\&\, |m_{bb}-m_Z|>10$ GeV&&&&&&&&\\ &&&&&&&&\\ $\&\, |m_{bb}-m_{h_{125}}|>10$ GeV&248.41&604.89&337.04&294.36&85.11&7.43&7.19&3432.09\\ &&&&&&&&\\ $\&\, |m_{\tau\tau}-m_Z|>10$ GeV&248.41&597.73&337.04&255.11&70.52&6.09&5.39&2819.21\\ &&&&&&&&\\ $\&\, |m_{\tau\tau}-m_{h_{125}}|>10$ GeV&248.41&597.73&337.04&255.11&70.52&5.97&2.40&2819.21\\ &&&&&&&&\\ $\&\, m_{\tau\tau}<125$ GeV&248.41&596.93&337.04&255.11&69.30&5.85&2.40&2819.21\\ &&&&&&&&\\ $\&\, m_{bb}<125$ GeV&248.41&596.93&337.04&196.24&69.30&5.85&2.40&2574.07\\ &&&&&&&&\\ \hline Significance&4.47&10.18&5.98&\multicolumn{5}{|c||}{}\\ \hline \hline \multirow{3}{*}{$\&\,p_1:|m_{bb}-m_{a_1}|\leq 10$ GeV}&\multirow{3}{*}{236.43}&\multirow{3}{*}{326.32}&\multirow{3}{*}{279.49}&9.81&2.43&0.37&0.00&490.30\\ &&&&68.68&31.61&1.83&1.20&1348.32\\ &&&&29.43&15.81&1.46&0.00&490.30\\ \hline Significance&8.70&7.74&9.79&\multicolumn{5}{|c||}{}\\ \hline \hline \multirow{3}{*}{\&\,$p_2:|m_{\tau\tau}-m_{a_1}|\leq 10$ GeV}&\multirow{3}{*}{241.64}&\multirow{3}{*}{248.32}&\multirow{3}{*}{279.49}&19.62&6.08&0.49&0.00&0.00\\ &&&&58.87&24.32&1.58&0.00&1103.17\\ &&&&49.06&14.59&1.10&1.80&122.57\\ \hline Significance&14.78&6.56&12.93&\multicolumn{5}{|c||}{}\\ \hline \hline \end{tabular} \caption{The number of events for a $n_j\leq 5\,[2b_{{\rm{jet}}}+ 2\tau_{\rm{jet}}]\, \& \not\!\!{p_T} \leq 30$ GeV final state at 100 fb$^{-1}$ of luminosity at the LHC, for a center of mass energy of 14 TeV.}\label{2b2tau14} \end{center} \end{table} \begin{figure}[bht] \begin{center} \includegraphics[width=0.33\linewidth, angle=-90]{plots/invjjbp14.pdf} \includegraphics[width=0.33\linewidth, angle=-90]{plots/invtautau.pdf} \caption{ Invariant mass distribution of $b_{\rm{jet}}$'s (left) and $\tau_{\rm{jet}}$'s (right) for $t\bar{t}$ and for the signal in BP2.}\label{invdis} \end{center} \end{figure} Next we have analyzed the invariant mass distributions of the $b_{\rm{jet}}$ pair for the same benchmark points. Figure~\ref{invdis} (left) presents the $b_{\rm{jet}}$ pair invariant mass distributions for the signal in BP1 and BP2, with the dominant SM backgrounds coming from $t\bar{t}$ and $b\bar{b}Z$. These results suggest that, given the integrated luminosity, it is possible to resolve the resonant peak in the mass distribution of the signal. To further clarify this point, we select events with $|m_{bb}-m_{a_1}|\leq 10$ GeV, which we label as $p_1$. The resolution of these peaks depends on the specific benchmark point, but this selection reduces the $b\bar{b}Z$ background drastically in those cases in which $m_{a_1}$ is well separated from the $Z$ gauge boson mass $m_Z$. 
The signal significances for all the benchmark points cross the $5\sigma$ level at an integrated luminosity of 100 fb$^{-1}$, and at 13 TeV they are equal to $8.47\, \sigma$, $6.87\, \sigma$ and $8.01\, \sigma$ for BP1, BP2 and BP3 respectively. At a center of mass energy of 14 TeV the significances are $8.70\, \sigma$, $7.74 \,\sigma$ and $9.79 \,\sigma$ in the three cases. Finally, we simulate the $\tau_{\rm{jet}}$ invariant mass distributions, as they are expected to be cleaner than the $b_{\rm{jet}}$ distributions. Figure~\ref{invdis} (right) shows the invariant mass distributions for both the signals in BP1 and BP3, and the SM backgrounds from $t\bar{t}$ and $b\bar{b}Z$. For this purpose, similarly to the previous case, we select those events with $|m_{\tau\tau}-m_{a_1}|\leq 10$ GeV. For the points which are far away from the $Z$ mass, namely BP1 and BP3, the signal significance improves significantly, to $12.40 \,\sigma$ and $12.65 \,\sigma$ respectively, whereas for BP2 it is $6.94 \,\sigma$. At a center of mass energy of 14 TeV these values are $14.78\, \sigma$, $6.56 \,\sigma$ and $12.93 \,\sigma$ for BP1, BP2 and BP3 respectively. \subsection{$3\tau$} In this subsection we consider the case in which both pseudoscalars decay into $\tau$ pairs, so that we expect to see a final state with $4\tau$'s. Of course, due to the lower branching ratio in the $a_1\to \tau\bar{\tau}$ mode, the final state numbers are not very promising at low luminosities. On top of that, due to the low $\tau$-tagging efficiency for $\tau$'s of low $p_T$, the final state number is further reduced \cite{tautag}. Keeping this in mind, we search for final states where we have at least three $\tau$'s. We tag such $\tau$'s via hadronic $\tau_{\rm{jet}}$'s, as explained earlier. \begin{table}[h] \begin{center} \hspace*{-1.0cm} \renewcommand{\arraystretch}{1} \begin{tabular}{|c||c|c|c||c|c|c||} \hline\hline Final states&\multicolumn{3}{|c||}{Benchmark}&\multicolumn{3}{|c||}{Backgrounds } \\ \hline &BP1 & BP2&BP3 &$ZZ$ &$ZW^\pm$&$h Z$\\ \hline \hline $n_j\leq 5\,[\geq 3\tau_{\rm{jet}}]$&95.71&199.27&137.21&186.42&437.17&20.68\\ &&&&&&\\ $\&\, |m_{\tau\tau}-m_Z|>10$ GeV&94.79&197.15&135.02&163.53&363.43&17.42\\ &&&&&&\\ $\&\, m_{\tau\tau}\leq125$ GeV&94.79&197.15&135.02&158.07&326.56&16.07\\ &&&&&&\\ \&\, $p_T^{\tau_{j_1}}\leq 100\, \&\, p_T^{\tau_{j_{2,3}}}\leq 50$ GeV&87.85&184.43&123.34&99.21&210.69&8.31\\ \hline Significance&4.41&8.22&5.93&\multicolumn{3}{|c||}{}\\ \hline\hline \multirow{3}{*}{$\&\, p_1:|m_{\tau\tau}-m_{a_1}|\leq 10$ GeV}&\multirow{3}{*}{48.55}&\multirow{3}{*}{54.41}&\multirow{3}{*}{64.96}&4.36&21.07&0.90\\ &&&&44.70&89.54&2.70\\ &&&&26.16&42.14&3.82\\ \hline Significance&5.61&3.93&5.55&\multicolumn{3}{|c||}{}\\ \hline \hline \end{tabular} \caption{The number of events for a $n_j\leq 5\,[\geq 3\tau_{\rm{jet}}]$ final state at 100 fb$^{-1}$ of luminosity at the LHC with 13 TeV center of mass energy.}\label{3tau13} \end{center} \end{table} The dominant SM backgrounds, in this case, come from the associated production of $Z$ bosons, i.e. from $ZZ$, $ZW^\pm$ and $Zh$, along with the triple gauge boson production channels, namely $ZZZ$, $ZZW^\pm$, $ZW^\pm W^\mp$ and $W^\pm W^\mp W^\pm$. However, the triple gauge boson backgrounds are found to be negligible after imposing the cuts ($\lesssim 0.1$ events at 100 fb$^{-1}$). 
Table~\ref{3tau13} and Table~\ref{3tau14} show the expected numbers of events for the three benchmark points $BP_i$, together with the dominant backgrounds, at an integrated luminosity of 100 fb$^{-1}$. The final state that we are looking for is characterized by a number of jets $n_j\leq 5$, among which at least three are tagged as $\tau_{\rm{jet}}$'s, and is defined as $$\textrm{sig}2: n_j\leq 5\,[\geq 3\tau_{\rm{jet}}].$$ We then add some further kinematical cuts to reduce the backgrounds, as before. These cuts include the invariant mass veto on the $\tau_{\rm{jet}}$ pair, $|m_{\tau\tau}-m_Z|>10$ GeV, and we also demand that $m_{\tau\tau}\leq125$ GeV, which allows us to search for hidden resonances. Finally, we also demand softer second and third $\tau_{\rm{jet}}$'s by implementing the cuts $p_T^{\tau_{j_1}}\leq 100\, \&\, p_T^{\tau_{j_{2,3}}}\leq 50$ GeV. From Table~\ref{3tau13} and Table~\ref{3tau14} one deduces that the $ZW^\pm$ channel remains the most dominant background of all. The signal significances at this stage for the three benchmark points are $4.41 \,\sigma$, $8.22\, \sigma$ and $5.93\, \sigma$ for BP1, BP2 and BP3 respectively, at an integrated luminosity of 100 fb$^{-1}$ and a center of mass energy of 13 TeV. At 14 TeV these numbers are $3.79 \,\sigma$, $8.38 \, \sigma$ and $5.81\, \sigma$. As in the previous case, we try to select events around the pseudoscalar mass peak by imposing the constraint $p_1:|m_{\tau\tau}-m_{a_1}|\leq 10$ GeV. The mass resolution depends on the mass value of $a_1$, but BP1 and BP3 now have more than a $5\sigma$ signal significance. For BP2 $m_{a_1} \sim 57$ GeV, and the contributions from the backgrounds involving $ZZ$ and $ZW^\pm$ are more significant than for BP1 and BP3. The signal significances at 13 TeV, with an integrated luminosity of 100 fb$^{-1}$, are $5.61\, \sigma$, $3.93\, \sigma$ and $5.55\, \sigma$ for BP1, BP2 and BP3 respectively. These values change for collisions at 14 TeV and equal $5.16\, \sigma$, $4.00 \,\sigma$ and $6.03\, \sigma$ in this second case. \begin{table} \begin{center} \hspace*{-1.0cm} \renewcommand{\arraystretch}{1} \begin{tabular}{|c||c|c|c||c|c|c||} \hline\hline Final states&\multicolumn{3}{|c||}{Benchmark}&\multicolumn{3}{|c||}{Backgrounds } \\ \hline &BP1 & BP2&BP3 & $ZZ$ &$ZW^\pm$&$h Z$\\ \hline \hline $n_j\leq 5\,[\geq 3\tau_{\rm{jet}}]$&96.34&224.45&146.73&200.62&499.20&18.28\\ &&&&&&\\ $\&\, |m_{\tau\tau}-m_Z|>10$ GeV&94.78&222.85&142.62&178.73&408.70&15.11\\ &&&&&&\\ $\&\, m_{\tau\tau}\leq125$ GeV&94.78&222.06& 141.80&165.36&382.43&13.65\\ &&&&&&\\ $\&\, p_T^{\tau_{j_1}}\leq100\,\& \,p_T^{\tau_{j_{2,3}}}\leq 50$ GeV&82.80&205.34& 133.58&121.59&265.66&7.56\\ \hline Significance&3.79&8.38&5.81&\multicolumn{3}{|c||}{}\\ \hline\hline \multirow{3}{*}{$\&\, p_1:|m_{\tau\tau}-m_{a_1}|\leq 10$ GeV}&\multirow{3}{*}{46.35}&\multirow{3}{*}{62.08}&\multirow{3}{*}{79.74}&12.16&20.44&1.71\\ &&&&54.71&122.61&2.44\\ &&&&25.53&67.14&2.56\\ \hline Significance&5.16&4.00&6.03&\multicolumn{3}{|c||}{}\\ \hline \hline \end{tabular} \caption{The number of events for a $n_j\leq 5\,[\geq 3\tau_{\rm{jet}}]$ final state at 100 fb$^{-1}$ of luminosity at the LHC, for a center of mass energy of 14 TeV. }\label{3tau14} \end{center} \end{table} \subsection{$2b+2\mu$} The branching ratio of the pseudoscalar decay to $\mu\bar{\mu}$ is $\mathcal{O}(10^{-4})$, which makes this channel difficult to observe. 
If we demand that one of the two pseudoscalars decays into a $b\bar{b}$ pair and the other into a $\mu\bar{\mu}$ pair, the effective cross-section may increase, firstly due to the large branching ratio of $a_1\to b\bar{b}$ and, secondly, due to a combinatorial factor of 2 from the presence of two pseudoscalars. This gives us the option of investigating a final state with $2b+2\mu$. Table~\ref{2b2mu13} and Table~\ref{2b2mu14} show the corresponding $2\mu$ final state event numbers for the benchmark points and the dominant SM backgrounds, which include $t\bar t$, $ZZ$, $Zh$, $b\bar b h$ and $b \bar b Z$, at an integrated luminosity of 1000 fb$^{-1}$. We first consider the $2\mu \,\&\, p_T^{\ell_{1,2}}\leq 50$ GeV final state, largely dominated by the SM backgrounds (see Tables~\ref{2b2mu13} and~\ref{2b2mu14}). Then we impose further requirements on the number of jets and their transverse momentum ($p_T$), by defining the signal as $$\textrm{sig}3: \, n_j\leq 3\,[2b_{\rm jet}]\, \& \,n_{\mu}\geq 2 \,[|m_{\mu\mu}-m_Z|> 5 \,\rm{GeV}]\, \& \,p_T^{{\mu,j}_{1,2}}\leq 50\, \rm{GeV}\, \& \,\not\!\!{p_T} \leq 30\, \textrm{GeV}.$$ The $\mu$-pair invariant mass veto around the $Z$ mass $(|m_{\mu\mu}-m_Z|> 5 \,\rm{GeV})$, together with the condition of having softer $b_{\rm{jet}}$'s in the final state ($p_T^{j_{1,2}}\leq 50 \,\rm{GeV}$), reduces the SM backgrounds coming from the $Z$ bosons quite drastically. Finally, since this final state -- in an ideal situation -- should not have any missing energy, we also demand that $\not\!\!{p_T} \leq 30$ GeV. To reduce the backgrounds even further, and to ensure that we select signatures of the light pseudoscalar decay below $125$ GeV, we impose additional constraints on the $\mu$-pair and on the $b_{\rm{jet}}$-pair invariant masses, around the $Z$ mass and the mass of $h_{125}$. These are given by $|m_{\mu\mu}-m_{h_{125}}|>5\, \textrm{ GeV}$, $|m_{bb}-M_Z|\geq 10\, \textrm{ GeV}$ and $|m_{bb}-m_{h_{125}}|>10 \, \textrm{ GeV}$.\\ At this stage, only in the case of BP2 does the signal significance reach the $3.31 \,\sigma$ value, while for BP1 and BP3 the values are $1.03 \,\sigma$ and $1.83 \,\sigma$ respectively, at 13 TeV. At a center of mass energy of 14 TeV, instead, the values are $1.08 \,\sigma$, $2.64\, \sigma$ and $1.18\, \sigma$ respectively for BP1, BP2 and BP3. 
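To make the cut flow of sig3 explicit, the short sketch below applies the multiplicity requirements and the invariant-mass vetoes described above to a generic event record. The event structure (dictionaries with $(p_T,\eta,\phi)$ kinematics and a missing-$p_T$ entry), the helper {\tt inv\_mass} and all names are hypothetical, chosen only for illustration; they do not reflect the actual analysis code.
\begin{verbatim}
import math

def inv_mass(p1, p2):
    """Invariant mass of two (approximately massless) objects given as (pt, eta, phi)."""
    pt1, eta1, phi1 = p1
    pt2, eta2, phi2 = p2
    return math.sqrt(2.0 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2)))

M_Z, M_H = 91.19, 125.0

def passes_sig3(event):
    """event: dict with 'jets', 'bjets', 'muons' (lists of (pt, eta, phi)) and 'met'.
    Hypothetical structure, used only to illustrate the sig3 selection."""
    jets, bjets, muons = event["jets"], event["bjets"], event["muons"]
    if len(jets) > 3 or len(bjets) < 2 or len(muons) < 2:
        return False
    if event["met"] > 30.0:                       # missing p_T <= 30 GeV
        return False
    if any(pt > 50.0 for pt, _, _ in muons[:2]):  # soft leading muons
        return False
    if any(pt > 50.0 for pt, _, _ in bjets[:2]):  # soft leading b-jets
        return False
    m_mumu = inv_mass(muons[0], muons[1])
    m_bb = inv_mass(bjets[0], bjets[1])
    # Z and h_125 mass vetoes on the mu-pair and b-pair invariant masses
    if abs(m_mumu - M_Z) <= 5.0 or abs(m_mumu - M_H) <= 5.0:
        return False
    if abs(m_bb - M_Z) < 10.0 or abs(m_bb - M_H) <= 10.0:
        return False
    return True
\end{verbatim}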
\begin{table}[t] \begin{center} \hspace*{-2cm} \renewcommand{\arraystretch}{1} \begin{tabular}{|c||c|c|c||c|c|c|c|c||} \hline\hline Final states&\multicolumn{3}{|c||}{Benchmark}&\multicolumn{5}{|c||}{Backgrounds }\\ \hline &BP1 & BP2&BP3 & $t\bar t$&$ZZ$& $Zh$ & $b\bar b h$ &$b \bar b Z$\\ \hline \hline $2\mu\, \& \, p_T^{\ell_{1,2}}\leq 50$ GeV& 1877.23&3660.42&3167.55&909080&132161&2669.20&657.71&$6.3\times10^6$\\ &&&&&&&&\\ $\&\,n_j\leq 3\,\&\, b_{\rm jet}\geq 2$&\multirow{3}{*}{69.36}&\multirow{3}{*}{226.13}&\multirow{3}{*}{124.07}&\multirow{3}{*}{4765.60}&\multirow{3}{*}{457.87}&\multirow{3}{*}{15.73}&\multirow{3}{*}{14.83}&\multirow{3}{*}{28.60}\\ $\&\,|m_{\mu\mu}-m_Z|> 5$ GeV&&&&&&&&\\ $\&\,p_T^{j_{1,2}}\leq 50\,\rm{GeV}\,\&\, \not\!\!{p_T} \leq 30$ GeV&&&&&&&&\\ $\&\, |m_{\mu\mu}-m_{h_{125}}|>5$ GeV&\multirow{2}{*}{69.36}&\multirow{2}{*}{226.13}&\multirow{2}{*}{124.07}&\multirow{2}{*}{4190.45}&\multirow{2}{*}{359.76}&\multirow{2}{*}{14.61}&\multirow{2}{*}{14.83}&\multirow{2}{*}{28.60}\\ $\&\,|m_{bb}-M_Z|\geq 10$ GeV&&&&&&&&\\ $\&\, |m_{bb}-m_{h_{125}}|>10$ GeV&69.36&226.13&124.07&4026.11&359.76&13.49&14.83&28.60\\ \hline Significance&1.03&3.31&1.83&\multicolumn{5}{|c||}{}\\ \hline \hline \multirow{3}{*}{$\&\, p_1:|m_{bb}-m_{a_1}|\leq 10$ GeV}&\multirow{3}{*}{64.73}&\multirow{3}{*}{98.93}&\multirow{3}{*}{80.28}&328.66&0.00&0.00&4.94&19.67\\ &&&&1150.32&141.72&5.62&9.89&9.53\\ &&&&492.99&43.61&2.25&0.00&0.00\\ \hline Significance&3.17&2.63&3.23&\multicolumn{5}{|c||}{}\\ \hline \hline \multirow{3}{*}{$\&\, p_2:|m_{\mu\mu}-m_{a_1}|\leq 5$ GeV}&\multirow{3}{*}{41.61}&\multirow{3}{*}{148.40}&\multirow{3}{*}{72.98}&328.66&43.61&1.12&0.00&0.00\\ &&&&575.15&32.70&0.00&0.00&9.53\\ &&&&410.83&21.80&1.12&4.94&0.00\\ \hline Significance&2.04&5.36&3.22&\multicolumn{5}{|c||}{}\\ \hline \hline \end{tabular} \caption{The number of events for the $n_j\leq 3\,[2b_{\rm{jet}}]\, \&\, \geq 2\mu\, \& \,\not\!\!{p_T} \leq 30$ GeV final state at 1000 fb$^{-1}$ of luminosity at the LHC, for a center of mass energy of 13 TeV. The constraint $(\& \geq 2\mu)$ requires the presence of at least 2 muons. The clause ($\&\, b_{\rm jet}\geq 2$) demands at least 2 $b$-quark jets, denoted as $b_{\rm jet}$. }\label{2b2mu13} \end{center} \end{table} Next we try to enhance the mass peak resolution on the $bb$ and $\mu\mu$ invariant mass distributions by imposing the two constraints (denoted as $p_1,p_2$) $$p_1:|m_{bb}-m_{a_1}|\leq 10\ \textrm{GeV} \quad \textrm{and} \quad p_2:|m_{\mu\mu}-m_{a_1}|\leq 5\ \textrm{GeV}.$$ At a center of mass energy of 13 TeV, the $m_{bb}$ peaks are characterized by about a $3\, \sigma$ signal significance, i.e. $3.17\,\sigma$, $2.63 \, \sigma$ and $3.23\, \sigma$ respectively for BP1, BP2 and BP3, at an integrated luminosity of 1000 fb$^{-1}$. At 14 TeV the corresponding values are $4.16\, \sigma$, $1.93 \, \sigma$ and $2.08 \, \sigma$ for the three benchmarks (see Table~\ref{2b2mu14}). \\ The constraint $p_2:|m_{\mu\mu}-m_{a_1}|\leq 5$ GeV brings BP2 to $5.36 \,\sigma$, BP1 to $2.04\, \sigma$ and BP3 to $3.22 \,\sigma$, for a center of mass energy of 13 TeV. At 14 TeV the significances are $4.71\, \sigma$, $3.82 \, \sigma$ and $3.00 \, \sigma$ in the three cases, respectively. 
\begin{table} \begin{center} \hspace*{-2cm} \renewcommand{\arraystretch}{1} \begin{tabular}{|c||c|c|c||c|c|c|c|c||} \hline\hline Final states&\multicolumn{3}{|c||}{Benchmark}&\multicolumn{5}{|c||}{Backgrounds } \\ \hline &BP1 & BP2&BP3 & $t\bar t$&$ZZ$& $Zh$ & $b\bar b h$ &$b\bar b Z$\\ \hline \hline $2\mu\,\&\, p_T^{\ell_{1,2}}\leq 50$ GeV& \multirow{2}{*}{2281.00}&\multirow{2}{*}{4011.37}&\multirow{2}{*}{3362.13}&\multirow{2}{*}{788683}&\multirow{2}{*}{141428}&\multirow{2}{*}{2926.71}&\multirow{2}{*}{946.42}&\multirow{2}{*}{$7\times10^6$}\\ &&&&&&&&\\ $\&\,n_j\leq 3\,\&\, b_{\rm jet}\geq 2$&\multirow{3}{*}{67.70}&\multirow{3}{*}{167.14}&\multirow{3}{*}{ 73.99}&\multirow{3}{*}{5102.21}&\multirow{3}{*}{583.61}&\multirow{3}{*}{20.72}&\multirow{3}{*}{17.97}&\multirow{3}{*}{10.72}\\ $\&\,|m_{\mu\mu}-m_Z|> 5$ GeV&&&&&&&&\\ $\&\,p_T^{j_{1,2}}\leq 50\,\rm{GeV}\,\&\, \not\!\!{p_T} \leq 30$ GeV&&&&&&&&\\ $\&\,|m_{\mu\mu}-m_{h_{125}}|>5$ GeV&\multirow{2}{*}{67.70}&\multirow{2}{*}{167.14}&\multirow{2}{*}{73.99}&\multirow{2}{*}{3630.42}&\multirow{2}{*}{510.66}&\multirow{2}{*}{9.75}&\multirow{2}{*}{11.98}&\multirow{2}{*}{0.00}\\ $\&\,|m_{bb}-M_Z|\geq 10$ GeV&&&&&&&&\\ $\&\,|m_{bb}-m_{h_{125}}|>10$ GeV&67.70&167.14&73.99&3336.06&498.50&9.75&11.98&0.00\\ \hline Significance&1.08&2.64&1.18&\multicolumn{5}{|c||}{}\\ \hline \hline \multirow{3}{*}{$\&\, p_1:|m_{bb}-m_{a_1}|\leq 10$ GeV}&\multirow{3}{*}{67.70}&\multirow{3}{*}{79.60}&\multirow{3}{*}{ 57.54}&196.24&0.00&0.00&0.00&0.00\\ &&&&1373.67&255.33&1.22&0.00&0.00\\ &&&&686.83&24.32&2.44&0.00&0.00\\ \hline Significance&4.16&1.93&2.08&\multicolumn{5}{|c||}{}\\ \hline \hline \multirow{3}{*}{$\&\, p_2:|m_{\mu\mu}-m_{a_1}|\leq 5$ GeV}&\multirow{3}{*}{41.66}&\multirow{3}{*}{103.47}&\multirow{3}{*}{ 45.21}&0.00&36.47&0.00&0.00&0.00\\ &&&&588.72&36.47&0.00&5.99&0.00\\ &&&&98.12&85.11&0.00&0.00&0.00\\ \hline Significance&4.71&3.82&3.00&\multicolumn{5}{|c||}{}\\ \hline \hline \end{tabular} \caption{The number of events for the $n_j\leq 3\,[2b_{\rm{jet}}]\, \&\, \geq 2\mu\, \&\,\not\!\!{p_T} \leq 30$ GeV final state at 1000 fb$^{-1}$ of luminosity at the LHC, for a center of mass energy of 14 TeV.}\label{2b2mu14} \end{center} \end{table} \subsection{$2\tau+2\mu$} In this subsection we discuss a scenario where one of the pseudoscalars decays into a $\tau$ pair and the second one into a $\mu$ pair. Due to the low branching ratios of these two modes, even with a large integrated luminosity, the signal remains small. It is however accompanied by SM backgrounds for such final states ($2\tau+2\mu$) which are quite suppressed. As in the previous cases, we tag the $\tau$ via its hadronic decay into a $\tau_{\rm{jet}}$ \cite{tautag}. The threshold $p_T$ cuts both for the $\tau_{\rm{jet}}$'s and for the muons are kept as low as 10 GeV, since we are considering the decay of a very light pseudoscalar. 
\begin{table} \begin{center} \hspace*{-1cm} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|c||c|c|c||c|c||} \hline\hline Final states&\multicolumn{3}{|c||}{Benchmark}&\multicolumn{2}{|c||}{Backgrounds } \\ \hline &BP1 & BP2&BP3 & $ZZ$& $Zh$ \\ \hline \hline $2\mu\, \&\, n_j\leq 3\,[2\tau_{\rm{jet}}$]&\multirow{2}{*}{16.18}&\multirow{2}{*}{14.13}&\multirow{2}{*}{29.19}&\multirow{2}{*}{490.58}&\multirow{2}{*}{28.10}\\ $\&\,p_T^{\ell_{1,2}}\,\&\,p_T^{j_{1,2}}\leq 50$ GeV&&&&&\\ $\&\,|m_{\mu\mu}-m_Z|\geq 5$ GeV&16.18&14.13&29.19&218.03&9.00\\ $\&\,|m_{\tau\tau}-m_Z|>10$ GeV&16.18&14.13&29.19&163.53&9.00\\ $\&\, m_{\tau\tau}<125$ GeV&16.18&14.13&29.19&152.62&7.87\\ \hline Significance&1.22&1.07&2.12&\multicolumn{2}{|c||}{}\\ \hline \hline \multirow{3}{*}{$\&\,p_1:|m_{\tau\tau}-m_{a_1}|\leq 10$ GeV}&\multirow{3}{*}{11.56}&\multirow{3}{*}{14.13}&\multirow{3}{*}{21.90}&0.00&0.00\\ &&&&54.51&1.12\\ &&&&32.70&1.12\\ \hline Significance&3.40&1.70&2.93&\multicolumn{2}{|c||}{}\\ \hline \hline \multirow{3}{*}{$\&\, p_2:|m_{\mu\mu}-m_{a_1}|\leq 5$ GeV}&\multirow{3}{*}{6.94}&\multirow{3}{*}{7.07}&\multirow{3}{*}{0.00}&0.00&0.00\\ &&&&0.00&0.00\\ &&&&43.61&2.25\\ \hline Significance&2.63&2.65&-&\multicolumn{2}{|c||}{}\\ \hline \hline \end{tabular} \caption{The number of events for the $n_j\leq 3\,[2\tau_{\rm{jet}}] \,\&\,\geq 2\mu \,\&\,\not\!\!{p_T} \leq 30$ GeV final state at 1000 fb$^{-1}$ of luminosity at the LHC, for a center of mass energy of 13 TeV.}\label{2ta2mu13} \end{center} \end{table} The results of this analysis are reported in Table~\ref{2ta2mu13} and Table~\ref{2ta2mu14}, where we present the number of events for the benchmark points and the dominant SM backgrounds, for a center of mass energy of 13 and 14 TeV and an integrated luminosity of 1000 fb$^{-1}$. We search for a muon pair and at least two $\tau$'s in the final state. Though the muons ($\mu$) will be detected as charged leptons, the $\tau$'s will be detected via their hadronic decays as $\tau_{\rm{jet}}$'s \cite{tautag}. Since the two pseudoscalars are light, we require both the $\mu$'s and the $\tau_{\rm{jet}}$'s to be rather soft (i.e. $p_T^{\ell_{1,2}},\, p_T^{j_{1,2}} \leq 50$ GeV) in the final state. This defines the signal as $$\textrm{sig}4: \,n_j\leq 3\,[2\tau_{\rm{jet}}]\, \& \, \geq 2\mu \, \&\, \not\!\!{p_T} \leq 30\, \rm{GeV}.$$ Tagging both muons and requiring the cut $p_T \leq 50$ GeV on the transverse momentum of the $\tau_{\rm{jet}}$'s suppresses much of the hard SM backgrounds, favouring the search for a low mass resonance, in this case a light pseudoscalar. The dominant backgrounds in this case come from the SM $ZZ$ and $hZ$ channels. The background due to the $a_1 Z$ channel is negligible, due to the mostly-singlet nature of the $a_1$. We have also checked for other triple gauge boson contributions to this final state, but they are all either zero or negligible. To reduce the SM backgrounds further we apply a veto on the mass peak of the $Z$ boson, by requiring that $|m_{\mu\mu}-m_Z|\geq 5$ GeV and $|m_{\tau\tau}-m_Z|>10$ GeV respectively. As one may deduce from Table~\ref{2ta2mu13} and Table~\ref{2ta2mu14}, the application of these two cuts, though it reduces the SM backgrounds quite drastically, does not affect the signal, which remains unchanged. Finally, we apply the constraint $m_{\tau\tau}<125$ GeV to ensure the search for hidden scalars, i.e. $m_{a_1}< 125$ GeV, which causes an even larger suppression of the background. 
At this level the signal significances are still below $3\sigma$ at 13 TeV and reach $3.20 \, \sigma$ only in the case of the benchmark point BP3, at 14 TeV. Next we apply the constraint $p_1:|m_{\tau\tau}-m_{a_1}|\leq 10$ GeV to favour the search for a possible mass peak of the pseudoscalar, and this enhances the signal significance to $3.40\, \sigma$, $1.70\, \sigma$ and $2.93\, \sigma$ respectively for BP1, BP2 and BP3 at 13 TeV. At 14 TeV these numbers are $2.47 \,\sigma$, $2.51\, \sigma$ and $3.27\, \sigma$ respectively. Similar peaks in the $\mu$-pair invariant mass distribution, i.e. with $p_2:|m_{\mu\mu}-m_{a_1}|\leq 5$ GeV, give signal significances of $2.63 \,\sigma$ and $2.65\, \sigma$ for BP1 and BP2, at a center of mass energy of 13 TeV. BP3 in this case runs out of statistics. At 14 TeV the signal significances are $2.05 \,\sigma$, $2.82\, \sigma$ and $2.04 \,\sigma$ respectively. The leptonic modes thus need higher luminosities, $\gtrsim 2000$ fb$^{-1}$, in order to reach the discovery limit for a light pseudoscalar. \begin{table} \begin{center} \hspace*{-1cm} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|c||c|c|c||c|c||} \hline\hline Final states&\multicolumn{3}{|c||}{Benchmark}&\multicolumn{2}{|c||}{Backgrounds } \\ \hline &BP1 & BP2&BP3 & $ZZ$& $Zh$\\ \hline \hline $2\mu\,\&\, n_j\leq 3\,[2\tau_{\rm{jet}}$]&\multirow{2}{*}{15.62}&\multirow{2}{*}{31.84}&\multirow{2}{*}{41.10}&\multirow{2}{*}{498.50}&\multirow{2}{*}{20.72}\\ $\&\,p_T^{\ell_{1,2}}\,\&\,p_T^{j_{1,2}}\leq 50$ GeV&&&&&\\ $\&\,|m_{\mu\mu}-m_Z|\geq 5$ GeV&15.62&31.84&41.10&145.90&7.31\\ $\&\, |m_{\tau\tau}-m_Z|>10$ GeV&15.62&31.84&41.10&121.58&3.66\\ $\&\, m_{\tau\tau}<125$ GeV&15.62&31.84&41.10&121.58&2.44\\ \hline Significance&1.32&2.55&3.20&\multicolumn{2}{|c||}{}\\ \hline \hline \multirow{3}{*}{$\&\,p_1:|m_{\tau\tau}-m_{a_1}|\leq 10$ GeV}&\multirow{3}{*}{15.62}&\multirow{3}{*}{15.92}&\multirow{3}{*}{28.77}&24.32&0.00\\ &&&&24.32&0.00\\ &&&&48.63&0.00\\ \hline Significance&2.47&2.51&3.27&\multicolumn{2}{|c||}{}\\ \hline \hline \multirow{3}{*}{$\&\,p_2:|m_{\mu\mu}-m_{a_1}|\leq 5$ GeV}&\multirow{3}{*}{5.21}&\multirow{3}{*}{7.96}&\multirow{3}{*}{12.33}&0.00&1.22\\ &&&&0.00&0.00\\ &&&&24.32&0.00\\ \hline Significance&2.05&2.82&2.04&\multicolumn{2}{|c||}{}\\ \hline \hline \end{tabular} \caption{The number of events for the $n_j\leq 3\,[2\tau_{\rm{jet}}]\,\&\,\geq 2\mu\,\&\,\not\!\!{p_T} \leq 30$ GeV final state at 1000 fb$^{-1}$ of luminosity at the LHC, for a center of mass energy of 14 TeV.}\label{2ta2mu14} \end{center} \end{table} \chapter*{Conclusions}\label{concl} We have considered a scenario with an extended Higgs sector characterized by a $Y=0$ hypercharge $SU(2)$ triplet and a gauge singlet superfield, along with the remaining MSSM superfields. The triplet vev is restricted by the $\rho$ parameter, hence $\mu_{\rm{eff}}$ is generated spontaneously mostly by the singlet vev. In models with a gauged $U(1)'$ symmetry the singlet could be invoked in the mass generation of the extra gauge boson $Z'$ by spontaneous symmetry breaking. This would require a large singlet vev $v_S$, due to the recent bounds on an extra $Z'$ coming from the analyses at the LHC \cite{LHCZ'}. We have first investigated the masses of the Higgs sector of the model at tree-level. 
The lightest tree-level Higgs state, in this case, is not bound to lie below $M_Z$, due to the additional contributions from the triplet and the singlet, which are proportional to their respective couplings and are enhanced at low $\tan{\beta}$. This allows one to reduce the size of the quantum corrections needed in order to reach $\sim 125$ GeV at one-loop, compared to the MSSM or to other constrained MSSM scenarios. We have then extended our analysis to one-loop level. The requirement of a one-loop Higgs boson with mass around $\sim 125$ GeV puts some indirect bounds on the masses of the particles contributing to the radiative corrections. For this purpose we have included the one-loop contributions using the Coleman-Weinberg potential. We have also presented results for the neutralino and chargino spectra, together with the stop and sbottom mass matrices. We have calculated the full one-loop Higgs masses, considering both the weak and the strong sectors. We have also shown that the gauge boson-gaugino-higgsino sectors mostly contribute negatively to the mass eigenstates, while the stop-top, sbottom-bottom and Higgs sectors contribute positively. Due to the large number of scalars, seven neutral and three charged Higgs bosons, the Higgs self-corrections can be larger than the strong corrections in the large $\lambda_{T,S}$ limit. This substantially reduces the indirect lower bounds on the stop and sbottom masses. Thus in the TNMSSM the discovery of a $\sim 125$ GeV Higgs boson does not put a stringent lower bound on the stop and sbottom masses, and one has to rely on direct search results for the lower bounds on the SUSY mass scale. We have implemented the model in SARAH3.5 \cite{sarah} in order to generate the vertices and other model files for CalcHEP \cite{calchep}. The beta-functions have also been generated at one-loop. We have addressed the issue of the perturbativity of the couplings at higher scales by running the corresponding renormalization group equations upwards from the electroweak scale. This has shown that the couplings of the model at the electroweak scale need to be restricted to certain values. For example, even with a value of $\lambda_{T,S}\sim 0.8$ at the electroweak scale, the theory remains perturbative up to $10^{8-10}$ GeV. Setting all the couplings to large values ($\lambda_{T,S} \sim 0.8$, $\kappa\sim 2.4$), the upper scale of the perturbative evolution is lowered to $10^{4-6}$ GeV. The issue of fine-tuning at the electroweak scale has also been discussed in this context. We have seen that although the tree-level mass spectrum is highly fine-tuned for larger $\lambda_{T,S}$, the amount of fine-tuning is reduced after the inclusion of the radiative corrections. The prospects for hidden Higgs(es), i.e. scalars and/or pseudoscalars with mass lower than the current Higgs mass, have been discussed quite thoroughly. We have seen that in the rich Higgs spectrum of the model there are several possibilities for having one or more hidden neutral Higgs bosons ($\lesssim125$ GeV), both CP-even and CP-odd. A special scenario emerges when we break the continuous $U(1)$ symmetry softly by the parameters $A_i$. This leads to the appearance of a very light pseudoscalar state of $\mathcal{O}(1)$ GeV to $\mathcal{O}(1)$ MeV in mass, which has its own interesting phenomenology. Finally, we have discussed the doublet-triplet-singlet mixing, which influences the production and decays of the neutral and charged Higgs bosons at the LHC. 
The existence of a $h^\pm_i-W^\mp-Z$ tree-level vertex, due to the triplet, impacts both the production and the decay channels of the charged Higgs bosons \cite{tssmch1}. In the presence of a light pseudoscalar, the $h_i\to Z a_j$ channel also becomes a possibility. Neither the triplet nor the singlet states couple to the fermions, which leads to some very interesting phenomenology. This property also has an impact on rare decays like $B_s\to \mu^+ \mu^-$ and $b\to s \gamma$ \cite{tssmyzero, infnr}. Given the rich phenomenology and the specific predictions of this model, the current analyses at the LHC and at future colliders could be able to test and shed light on this scenario by looking at its interesting signatures. We focus our attention on a typical mass spectrum with a doublet-like CP-even Higgs boson around 125 GeV, a light triplet-like charged Higgs boson and a light singlet-like pseudoscalar. The existence of a light singlet-like pseudoscalar and of a triplet-like charged Higgs boson enriches the phenomenology at the LHC and at future colliders. In general we expect to have mixing between the doublet- and triplet-type charged Higgs bosons. We find that in the decoupling limit, $\lambda_T \simeq 0$, one should expect two triplet-like and one doublet-like massive charged Higgs bosons. However, since the Goldstone boson is a linear combination which includes a triplet contribution $\sim {v_T}/{v}$ (see Eq.~\ref{gstn}), one of the massive eigenstates cannot be $100\%$ triplet-like. Recent searches by both CMS \cite{ChCMS} and ATLAS \cite{ChATLAS} have been conducted for a charged Higgs boson mainly of doublet type and coupled to fermions. For this reason such a state can be produced in association with the top quark and can decay to $\tau\nu$. Clearly, these searches have to be reinvestigated in order to probe the possibility of triplet representations of $SU(2)$ in the Higgs sector. The breaking of the custodial symmetry via a non-zero triplet vev generates the $h_i^\pm-W^\mp-Z$ vertex at tree-level in the TNMSSM. This leads to the vector boson fusion channel for the charged Higgs boson, which is not present in the MSSM or in the 2HDM. On top of that, the $Z_3$ symmetric superpotential of the TNMSSM has a light pseudoscalar $a_1$ as a pseudo-Nambu-Goldstone mode of a global $U(1)$ symmetry, known as the ``$R$-axion'' in the literature. The latter can also be found in the context of the $Z_3$ symmetric NMSSM. In this case the light charged Higgs boson can decay to $a_1 W^\pm$ \cite{han, colepa, guchait, pbsnkh}, just like in the TNMSSM. In the context of the CP-violating MSSM, such modes can arise due to the possibility of a light Higgs boson $h_1$ and of CP-violating interactions: a charged Higgs boson can decay to $h_1 W^\pm$ \cite{CPVMSSM}, just as in our case. Therefore, one of the challenges at the LHC will be to distinguish among such models, once such a mode is discovered. Triplet charged Higgs bosons with $Y=0$, however, have some distinctive features, because they do not couple to the fermions, while the fusion channel $ZW^\pm$ is allowed. The phenomenology of such a triplet-like charged Higgs boson has already been studied in the context of the TESSM \cite{pbas3}. Such charged Higgs bosons also affect the predictions for $B$-observables \cite{pbas1, pbas2}, due to the missing coupling to the fermions and to the $Z$ boson. 
However in the TESSM, even though the charged Higgs boson decays to $ZW^\pm$ \cite{pbas3}, the possibility of a light pseudoscalar is not so natural \cite{pbas1, pbas2, DiChiara, pbas3}. Indeed, one way to distinguish between the TESSM and the TNMSSM is to exploit the prediction of a light pseudoscalar in the second model, besides the light triplet-type charged Higgs boson. We expect that such a Higgs boson in the TNMSSM will be allowed to decay both to $ZW^\pm$ and to $a_1 W^\pm$, the former being a feature of the triplet nature of this state, and the latter of the presence of an $R$-axion in the spectrum of the model. We have investigated the discovery potential of the light pseudoscalar sector which is present in this model. Our analysis has been performed assuming as a production mechanism the gluon-gluon fusion channel of the 125 GeV Higgs $h_{125}$, and focused on the current experimental rates of its decays into $WW^*$, $ZZ^*$ and $\gamma\gamma$ derived at the LHC. Given the current uncertainties in these discovered modes, as well as in other (fermionic) modes of the Higgs, we have investigated the possibility that such uncertainties are compatible with the production of two light pseudoscalars, predicted by the TNMSSM, which have so far been undetected. Benchmarking three points in the parameter space of the model, we have proposed and simulated final states of the form $2b+2\tau$, $3\tau$, $2b+2\mu$ and $2\tau +2\mu$, derived from the decays of such pseudoscalars. A PYTHIA-FastJet based simulation of the dominant SM backgrounds shows that, depending on the benchmark points, such light pseudoscalars can be probed with early LHC data ($\sim 25$ fb$^{-1}$) at 13 and 14 TeV. The $2\tau+2\mu$ decay modes of such states, though much cleaner compared to other channels, need a higher luminosity ($\sim 2000$ fb$^{-1}$) in order to be significant. Nevertheless, such muon final states will be crucial for precision mass measurements of the $a_1$. In this case, due to the $Z-a_1-a_1$ coupling, one may consider the production of an $a_1$ pair directly at tree-level, and this can enhance the signal strength by about $10\%$. The identification of such hidden scalars would certainly be a signal in favour of an extended Higgs sector, but establishing the triplet and singlet $SU(2)$ representations of these extra states would require more detailed searches. Clearly, there are some other distinctive features of this model with respect to the NMSSM. The NMSSM does not have any extra charged Higgs bosons compared to the MSSM, while the TNMSSM has an extra triplet-like charged Higgs boson which does not couple to fermions and can decay via $h^\pm \to Z W^\pm$. This possibility changes the direct bounds derived from searches for a charged Higgs boson at the LHC, as well as the indirect bounds from flavour physics. These changes are due to the doublet-triplet mixing in the charged Higgs and chargino sectors of the triplet extended model \cite{tripch}. Such sectors can be very useful in order to establish the $SU(2)$ content of the extra scalars, since in this model a very light triplet-like charged Higgs state cannot be ruled out \cite{pbancc}. Finally, the superpartners of these triplet- and singlet-like scalars can be dark matter candidates. In particular, a light pseudoscalar sector provides the much needed annihilation channel in order to obtain the correct dark matter relic density. 
As we have seen, both direct and indirect constraints can play a significant role in the searches for scalars in higher representations of the $SU(2)$ gauge symmetry, setting a clear distinction with respect to the ordinary doublet construction, which is typical of the SM. \part{Applications to Gravitational Lensing of the TVV and TFF correlators} \chapter{Radiative Effects in Gravitational Lensing } \section{Synopsis} This chapter develops an application of the trilinear vertices $TVV$ and $TFF$, where $F$ denotes a fermion, in our case a neutrino, computed at one-loop in the SM, to the propagation of photons and neutrinos in a gravitational background. These vertices, as already pointed out in previous chapters, are responsible for the tree-level interaction between gravity and the fields of the SM. The study builds on previous analyses of the same topic, with the idea of investigating the role of the conformal anomaly in the process of gravitational lensing. As already pointed out in \cite{Coriano:2014gia}, the effect of the conformal anomaly manifests itself in corrections to the classical Einstein formula for the deflection in General Relativity (GR). Here we propose a method to incorporate radiative effects in the classical lens equations of neutrinos and photons. The study is performed for a Schwarzschild metric, generated by a point-like source, and expanded in the Newtonian potential at first order. We use a semiclassical approach, where the perturbative corrections to neutrino scattering, evaluated at one-loop in the Standard Model, are compared with the Einstein formula for the deflection using an impact parameter formulation. As just mentioned, for this purpose we use the renormalized expression of the graviton/fermion/fermion vertex presented in previous studies. We show the agreement between the classical and the semiclassical formulations for values of the impact parameter $b_h$ of the neutrinos of the order of $b_h\sim 20$, measured in units of the Schwarzschild radius. The analysis is then extended with the inclusion of the post-Newtonian corrections in the external gravitational field, showing that this extension finds application in the case of the scattering of a neutrino/photon off a primordial black hole. The energy dependence of the deflection, generated by the quantum corrections, is then combined with the standard formulation of the classical lens equations. We illustrate our approach by detailed numerical studies, using as a reference both the thin lens and the nonlinear Virbhadra-Ellis lens. \section{Introduction} According to classical GR, massless particles follow null spacetime geodesics which bend significantly in the presence of very massive sources. 
The gravitational lensing induced on their spatial trajectories provides important information on the underlying distributions of matter and, possibly, of dark matter, which act as sources of the gravitational field.\\ Several newly planned weak lensing experiments, such as the Dark Energy Survey (DES) \cite{DES} and the Large Synoptic Survey Telescope (LSST) \cite{LSST}, both ground based, or, from space, the Wide-Field Infrared Survey Telescope (WFIRST) \cite{WFIRST} and Euclid \cite{EUCLID}, are expected to push forward, in the near future, the boundaries of our knowledge in cosmology.\\ In the analysis of the deflection by a single compact and spherically symmetric source, one significant variable, beside the mass of the source, is the impact parameter of the incoming particle beam, measured with respect to the center of the source, which determines the size of the deflection. It is very convenient to measure the impact parameter $(b)$, which is characteristic of a given collision, in units of the Schwarzschild radius $r_s\equiv 2 G M $, defining $b_h\equiv b/r_s$. In the Newtonian approximation for the external background, this allows one to scale out the entire mass dependence of the lensing event. \\ For an impact parameter of the beam of the order of $10^5-10^6$, the corresponding deflection is rather weak, of the order of 1-2 arcseconds, as in the case of a photon skimming the sun. Stronger lensing effects are predicted as the particle beam nears a black hole, with deflections which may reach 30 arcseconds or more. These are obtained for impact parameters $b_h$ of the order of $2\times 10^4$. Even larger deflections, of 1 to 2 degrees or a significant fraction of them, are generated in scatterings which proceed closer to the event horizon \cite{Coriano:2014gia}. In fact, as we are going to show, for closer encounters, with the beam located between 20 and 100 $b_h$, such angular deflections are around $10^{-2}$ radians in size, as predicted by classical GR. A high energy cosmic ray of 10-100 GeV will then interact with the field of the source by exchanging momenta far above the MeV region, and will necessarily be sensitive to radiative effects, such as those due to the electroweak corrections.\\ Interactions with such momentum exchanges cannot be handled by an effective Newtonian potential, as derived, for instance, from the (loop corrected) scattering amplitude. We recall that, in general, in the derivation of such a potential one has to take into account only the non-analytic terms in the momentum transfer $q$. These are obtained from a given amplitude and/or gravitational form factor of the incoming particle after an expansion at small momentum. The analytic terms in the expansion correspond to contact interactions which are omitted from the final form of the potential, since they are proportional to Dirac delta functions. \\ As one can easily check by a direct analysis, the non-analytic contributions originate from massless exchanges in the loops, and they approximate the full momentum dependence of the radiative corrections only for momentum transfers far below the MeV region. Therefore, the validity of the method requires that the typical impact parameter of the beam, for a particle with an energy of a few GeV, be of the order of $10^6$ Schwarzschild radii and not less. For this reason, if we intend to study a lensing event characterized by a close encounter between a cosmic ray and a black hole, we need to resort to an alternative approach, which does not suffer from these limitations. 
\\ Finally, with the photon sphere located at $b_h\sim 2.5$ for a Schwarzschild metric, one expects that very strong deflections are experienced by a beam in scattering events running close to such a value of the impact parameter. This is also the radial distance from the black hole center at which the scattering angle diverges. A simple expansion of the Einstein formula for the deflection shows that this singularity is logarithmic \cite{Coriano:2014gia}. In such extreme cases the beam circulates around the source one or more times before escaping to infinity, generating a set of relativistic images \cite{Bozza:2001xd}. This is also the region where the simple Newtonian approach, discussed in \cite{Coriano:2014gia}, fails to reproduce the classical GR prediction, as expected. \\ \section{Comparing classical and semiclassical effects} The analysis of possible extensions of the classical GR prediction for lensing, with the inclusion also of quantum effects in the interaction between the particle source and the deflector (lens), has not drawn much attention in the past, except for a couple of very original proposals \cite{Delbourgo:1973xe, Berends:1974gk}. While these effects are expected to be small, even for huge gravitational sources such as massive/supermassive black holes, they could provide, in principle, a way to test the impact of quantum gravity and of other radiative corrections on the propagation of cosmic rays. Close encounters of a beam with a localized source, which could be a large black hole or a neutron star, are expected to be quite common in our universe, although the probability of identifying a lensing event characterized by a close alignment between the source, the lens and an earth based detector, especially for neutrinos, is exceedingly small \cite{Mena:2006ym}. The situation might be more promising for photons in close encounters with primordial black holes, revealed by resorting to spaceborne detectors. \\ One such detector is the FERMI satellite \cite{FERMI}, with source beams given by Gamma Ray Bursts (GRBs) \cite{Gould1992}, which could detect fringes between primary and secondary paths of the GRBs on its ultra sensitive camera, generated by a gravitational time delay. This approach was termed ``femtolensing'' in \cite{Gould1992}, due to the size of the Einstein radius characteristic of these events, which was estimated to be of the order of a femtoarcsecond. As shown in \cite{Gould1992}, a classical GR analysis based on the thin lens equation can be applied quite straightforwardly also to this extreme situation.\\ An important point which needs to be addressed, in this case, concerns the quantum features of these types of lensing events, since the Schwarzschild radius of a primordial black hole is comparable to the wavelength of a gamma ray photon. Our analysis draws a path in this direction. The deflection of photons, as pointed out in the past and in a recent work \cite{Coriano:2014gia}, can be compared at the classical and quantum level by equating the classical gravitational cross section, written in terms of the impact parameter of the incoming photon beam, to the perturbative cross section. The latter is expanded in ordinary perturbation theory with the inclusion of the corresponding radiative corrections. The result is a differential equation for the impact parameter of the beam, whose solution provides the link between the two descriptions. 
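Schematically, and assuming the standard classical relation between the impact parameter and the differential cross section, the matching condition can be written as
\begin{equation}
\frac{b}{\sin\alpha}\left|\frac{db}{d\alpha}\right| \,=\, \frac{d\sigma}{d\Omega}\bigg|_{\rm pert}(E,\alpha),
\end{equation}
where $\alpha$ is the scattering angle; once the perturbative cross section on the right-hand side is computed at a given order, this relation can be integrated to extract $b_h$ as a function of the beam energy $E$ and of $\alpha$.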
In particular, the energy dependence, naturally present in the cross section starting at one-loop order, allows one to derive a new formula which relates $b_h$ to the energy $E$ of the beam and to the angle of deflection $\alpha$, $b_h(E, \alpha)$. This dependence, which is absent in Einstein's formula, propagates into all the equations for the usual observables of any lensing process: magnifications, cosmic shears, the light curve of microlensing events and Shapiro time delays. Clearly, such a dependence implies, as noted in \cite{Accioly1}, that radiative corrections induce a violation of the classical equivalence principle of General Relativity. The violation of the equivalence principle, viewed from a quantum perspective, is not surprising, since this principle is inherently classical and requires the localization of the point particle trajectory on a geodesic. It can be summarized in the statement that an experiment will not be able to determine the nature of the point particle which is subjected to gravity, except for its mass. The notion of a point particle clearly clashes with the quantum description, which is, on the other hand, inherently tied to Heisenberg's uncertainty principle. For this reason, one expects that the inclusion of radiative corrections will cause a violation of such a principle. Gravity, in this approach, is treated as an external background, and the transition amplitude involves, on the quantum side, in the photon case, the $TVV$ vertex, where $T$ denotes the energy momentum tensor (EMT) of the Standard Model and $V$ the electromagnetic current. In the fermion case ($f$), the corresponding vertex is the $Tff$, with $f$ denoting a neutrino. The comparison between the classical and the semiclassical formula for the deflection derived by this method can then be performed at the numerical level, as shown in \cite{Coriano:2014gia} for photons. The energy dependence of the bending angle, for a given impact parameter of the photon beam, though small, is found to become more pronounced at higher energies, due to the logarithmic growth of the electroweak corrections with the energy. The goal of our present work is to propose a procedure which allows one to include these effects in the ordinary lens equations, illustrating in some detail how this approach can be implemented in a complete numerical study. We mention that our semiclassical analysis is quite general, and applies both to macroscopic and to microscopic black holes. In the case of macroscopic black holes the procedure has to stop at the Newtonian level in the external field. In fact, the post-Newtonian corrections, though calculable, render the perturbative expansion in the external (classical) gravitational potential divergent, due to the macroscopic value of the Schwarzschild radius. On the other hand, in the case of primordial black holes, the very same corrections play a significant role in the deflection of a cosmic ray, and lead to a substantial modification of the classical formulas. We will first extend a previous analysis of photon lensing \cite{Coriano:2014gia}, developed along similar lines, to the neutrino case, presenting a numerical study of the complete one-loop corrections derived from the electroweak theory. The formalism uses a retarded graviton propagator, with the effects of the back-reaction of the scattered beam on the source not included, as in a typical scattering problem off a static external potential. 
In this case, however, because of the presence of a horizon, we search for a lower bound on the size of the impact parameter of the collision above which the classical GR prediction and the quantum one overlap. Indeed, above this bound the two descriptions are in complete agreement. As already mentioned above, both in the fermion and in the photon case \cite{Coriano:2014gia}, this bound can be reasonably taken to lie around 20 ${b_h}$, which is quite close to the horizon of the classical source. For smaller values of $b_h\, (4< b_h <20)$, the two approaches are in disagreement, since the logarithmic singularity in the angle of deflection, once the beam gets close to the photon sphere, starts playing a significant role. This is expected, given the assumption of a weak field for the gravitational coupling, which corresponds to the Newtonian approximation in the metric. We then deal with the implementation of the semiclassical deflection within the formalism of the classical lens equations. We use the energy dependence of the angular deflection to derive new lens equations, which are investigated numerically. We quantify the impact of these effects both in the thin lens approximation, where the trigonometric relations in the lens geometry are expanded to first order, and for a lens with deflection terms of higher order included. As an example, in this second case, we have chosen the Virbhadra-Ellis lens equation \cite{Virbhadra:2002ju}. The observables that we discuss are limited to the solutions of these equations and to their magnifications, although time delays, shears and the light curves of a typical microlensing event can be easily included in this framework. We anticipate that the effects that we quantify are small and lie in the milliarcsecond region, remaining quite challenging to detect at the experimental level. We hope, though, that the framework that we propose can draw further interest to this topic in the future, both at the theoretical and at the phenomenological level. Finally, we discuss the post-Newtonian formulation of the impact parameter formalism, and apply it to the case of a compact source with a microscopic Schwarzschild radius. This is the only case in which the gravitational corrections to the Newtonian cross section can be consistently included in our approach in a meaningful way. We then summarize our analysis and discuss in the conclusions some possible future extensions of our work. \section{Gravitational interaction of neutrinos} \label{Sec.TheorFram} We start our analysis with a brief discussion of the structure of the gravitational interaction of neutrinos, building on the results of \cite{Coriano:2012cr, Coriano:2013iba}, to which we refer for additional details, and which we are going to specialize to the case of a massless neutrino. An analysis of gravity coupled to the fermion sector is contained in \cite{Degrassi:2008mw}. We simply recall that the dynamics of the Standard Model in external gravity is described by the action \begin{equation} S = S_G + S_{SM} + S_{I}= -\frac{1}{\kappa^2}\int d^4 x \sqrt{-{g}}\, R+ \int d^4 x \sqrt{-{g}}\mathcal{L}_{SM} + \chi \int d^4 x \sqrt{-{g}}\, R \, H^\dag H. \, \label{thelagrangian} \end{equation} This includes the Einstein term $\mathcal{S}_G$, the Standard Model action $\mathcal{S}_{SM}$ and a term $\mathcal{S}_I$ involving the Higgs doublet $H$ \cite{Callan:1970ze}, called the term of improvement. $\mathcal{S}_{SM}$ is obtained by extending the ordinary Lagrangian of the Standard Model to a curved metric background. 
The coefficient $\chi$ is a parameter which, at this stage, is arbitrary and which, at the special value $\chi\equiv\chi_c=1/6$, guarantees the renormalizability of the model at leading order in the expansion in $\kappa$. \\ Deviations from the flat metric $\eta_{\mu\nu}=(+,-,-,-)$ will be parametrized in terms of the gravitational coupling $\kappa$, with $\kappa^2= 16 \pi G$ and with $G$ being the gravitational Newton's constant. At this order the metric is given as $g_{\mu\nu}=\eta_{\mu\nu} + \kappa h_{\mu\nu}$, with $h_{\mu\nu}$ describing its fluctuations. We will consider two spherically symmetric and static cases, corresponding to the Schwarzschild and Reissner-Nordstrom metrics. The first, in the weak field limit and in isotropic form, is given by \begin{equation} ds^2\approx\left(1- \frac{2 G M}{|\vec{x}|}\right)dt^2 -\left(1 + \frac{2 G M}{|\vec{x}|}\right)d\vec{x}\cdot d\vec{x} \label{SCH3}. \end{equation} In this case the fluctuation tensor takes the form \begin{eqnarray} h_{\mu\nu}(x) &=& \frac{2 G M}{\kappa |\vec{x}|}\bar{S}_{\mu\nu}, \qquad \bar{S}_{\mu\nu}\equiv \eta_{\mu\nu}-2 \delta^0_{\mu}\delta^0_{\nu}. \label{hh} \end{eqnarray} The inclusion of higher order terms in the weak field expansion will be discussed in the following sections. \\ The coupling of the gravitational fluctuations to the fields of the Standard Model involves the EMT, which is defined as \begin{equation} T_{\mu\nu}=\frac{2}{\sqrt{-g}}\frac{\delta \left(S_{SM}+S_{I}\right)}{\delta g^{\mu\nu}} \bigg|_{g=\eta} \, \label{stmn} \end{equation} with a tree-level coupling summarized by the action \begin{equation} \mathcal{S}_{int}=-\frac{\kappa}{2}\int d^4 x \, T_{\mu\nu} h^{\mu\nu} \,, \label{inter} \end{equation} where $T_{\mu \nu}$ is symmetric and covariantly conserved. The complete expression of the EMT of the Standard Model, including ghost and gauge-fixing contributions, can be found in \cite{Coriano:2011zk}. \\ The Higgs field is parameterized in the form \begin{equation} H = \left(\begin{array}{c} -i \phi^{+} \\ \frac{1}{\sqrt{2}}(v + h + i \phi) \end{array}\right) \end{equation} in terms of $h$, $\phi$ and $\phi^{\pm}$, which denote the physical Higgs and the Goldstone bosons of the $Z$ and of the $W$'s respectively. $v$ is the Higgs vacuum expectation value. The terms in $\mathcal{S}_I$ generate an extra contribution to the EMT, which is given by \begin{eqnarray} \label{Timpr} T^{\mu\nu}_I = - 2 \chi (\partial^\mu \partial^\nu - \eta^{\mu \nu} \Box) H^\dag H = - 2 \chi (\partial^\mu \partial^\nu - \eta^{\mu \nu} \Box) \left( \frac{h^2}{2} + \frac{\phi^2}{2} + \phi^+ \phi^- + v \, h\right) \, , \end{eqnarray} the term of improvement, which is multiplied by the arbitrary constant $\chi$. As mentioned above, it is mandatory to choose the value $\chi=1/6$ for any insertion of the EMT on the correlators of the Standard Model. These are found to be ultraviolet finite only if $T^{\mu\nu}_I$ is included \cite{Callan:1970ze,Coriano:2011zk,Freedman:1974gs}.\\ We will be dealing with the $T f \bar{f}$ vertex, where $T$ denotes the EMT and $f\equiv \nu_f$ a neutrino of flavour $f$, and work in the limit of zero mass of the neutrinos. The vertex, to lowest order, is obtained from the EMT of the neutrino.
For instance, the explicit expression of the EMT for the (left-handed, $\nu\equiv \nu_L$) electron neutrino is given by \begin{equation} \begin{split} T^{\nu^e}_{\mu\nu} =& \frac{i}{4}\bigg\{\bar\nu^e\gamma_\mu\stackrel{\rightarrow}{\partial}_\nu\nu^e - \bar\nu^e\gamma_\mu\stackrel{\leftarrow}{\partial}_\nu\nu^e + \frac{2e}{\sin2\theta_W}\bar\nu^e \gamma_\mu\frac{1-\gamma^5}{2}\nu^e Z_\nu \\ &- 2i\frac{e}{\sqrt{2}\sin\theta_W}\bigg(\bar\nu^e \gamma_\mu \frac{1-\gamma^5}{2}e\,W^+_\nu + \bar e\gamma_\mu\frac{1-\gamma^5}{2}\nu^e\,W^-_\nu\bigg) \\ & + (\mu \leftrightarrow \nu) \biggr\} - \eta_{\mu\nu} \mathcal{L}_{\nu^e} \,, \end{split} \end{equation} with \begin{equation} \begin{split} \mathcal{L}_{\nu_e} &= i\bar\nu^e\gamma^\mu\partial_\mu\nu^e + \frac{e}{\sin2\vartheta_{\rm{W}}}\bar\nu^e \gamma^\mu\frac{1-\gamma^5}{2}\nu^e Z_\mu \\ &+ \frac{e}{\sqrt{2}\sin\vartheta_{\rm{W}}}\bigg(\bar\nu^e \gamma^\mu \frac{1-\gamma^5}{2}e\, W^+_\mu + \bar e\gamma^\mu\frac{1-\gamma^5}{2}\nu^e\, W^-_\mu\bigg).\\ \end{split} \end{equation} In momentum space, in the case of a massless fermion, the vertex takes the form \begin{equation} V^{(0) \mu\nu}=\frac{i}{4}\left( \gamma^\mu(p_1 +p_2)^\nu + \gamma^\nu(p_1 +p_2)^\mu - 2 \eta^{\mu\nu}(\ds{p}_1+\ds{p}_2)\right), \end{equation} while in the case of neutrinos we have \begin{equation} V_\nu^{(0)\mu\nu}=V^{(0)\mu\nu}\,P_L \end{equation} with $P_L=(1-\gamma_5)/2$ being the chiral projector. We will denote with \begin{equation} \hat{T}^{(0) \mu\nu}=\bar u(p_2)V^{(0) \mu\nu}u(p_1), \end{equation} the corresponding invariant amplitude, a notation that we will use also at one-loop level in the electroweak expansion. We introduce the two linear combinations of momenta $p = p_1 + p_2$ and $q = p_1 - p_2$ to express our results. %
It has been shown that the general $Tf\bar{f}$ vertex, for any fermion $f$ of the Standard Model, decomposes into six different contributions \cite{Coriano:2012cr}, but in the case of a massless neutrino only three amplitudes at one-loop level are left, denoted as \begin{eqnarray} \label{hatT} \hat T^{\mu\nu} =\hat T^{\mu\nu}_{Z} + \hat T^{\mu\nu}_{W} + \hat T^{\mu\nu}_{CT}. \end{eqnarray} In the expression above, the subscripts indicate the contributions mediated by virtual $Z$ and $W$ gauge bosons, while $CT$ indicates the contribution from the counterterm. We show in Fig. \ref{diagrams} some of the typical topologies appearing in their perturbative expansion.\\ Two of them are characterized by a typical triangle topology, while the others denote terms where the insertion of the EMT and of the fermion field occur at the same point. The computation of these diagrams is rather involved and has been performed in dimensional regularization using the on-shell renormalization scheme. %
\begin{figure}[t] \centering \subfigure[]{\includegraphics[scale=0.7]{plots/triangle1.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=0.7]{plots/triangle2.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=0.7]{plots/bubble1.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=0.7]{plots/bubble2.pdf}} \caption{The one-loop Feynman diagrams of the neutrino vertex in a gravitational background. The dashed lines can be $Z$ and $W$. \label{diagrams}} \end{figure} Neutrino interactions, in the limit of massless neutrinos, involve only a few of the structures of the $Tf\bar{f}$ tensor decomposition presented in \cite{Coriano:2012cr}.
In this case we are left with only one tensor structure and hence only one form factor for each sector \begin{eqnarray} \hat T^{\mu\nu}_Z &=& i \, \frac{G_F}{16 \pi^2 \sqrt{2}} f^{Z}_1(q^2, m_Z) \, \bar u(p_2) \, O^{\mu\nu}_{C 1} \, u(p_1) \,, \nonumber \\ \hat T^{\mu\nu}_W &=& i \, \frac{G_F}{16 \pi^2 \sqrt{2}} f^{W}_1(q^2, m_f, m_W) \, \bar u(p_2) \, O^{\mu\nu}_{C 1} \, u(p_1) \,, \label{expans} \end{eqnarray} where we have defined the vertex \begin{eqnarray} \label{chiralbasis} O^{\mu\nu}_{C 1} &=& \left( \gamma^\mu \, p^\nu + \gamma^\nu \, p^\mu \right) P_L. \end{eqnarray} The counterterms needed for the renormalization of the vertex can be obtained by promoting the counterterm Lagrangian of the Standard Model from a flat spacetime to the curved background, and then extracting the corresponding Feynman rules, as for the bare one. We obtain \begin{eqnarray} \label{TCT} \hat T^{\mu \nu}_{CT} = - \frac{i}{4} \Sigma^L(0)\,\bar u(p_2)O^{\mu\nu}_{C 1}u(p_1), \end{eqnarray} where we have denoted with $\Sigma^L$ the neutrino self-energy \begin{eqnarray} \Sigma^L(p^2) = \frac{G_F}{16 \pi^2 \sqrt{2}} \bigg[ \Sigma^L_Z (p^2) + \Sigma^L_W(p^2) \bigg], \end{eqnarray} which is a combination of the self-energy contributions \begin{eqnarray} &&\Sigma^L_W (p^2) = - 4\bigg[ \left( m_f^2 + 2 m_W^2 \right) \mathcal B_1 \left( p^2, m_f^2, m_W^2 \right) + m_W^2 \bigg]\\ && \Sigma^L_Z (p^2) = -2 m_Z^2\bigg[ 2 \, \mathcal B_1 \left( p^2, 0, m_Z^2 \right) +1 \bigg] \,, \end{eqnarray} with \begin{eqnarray} \mathcal B_1 \left( p^2, m_0^2, m_1^2 \right) = \frac{m_1^2 -m_0^2}{2 p^2} \bigg[ \mathcal B_0(p^2, m_0^2, m_1^2) - \mathcal B_0(0, m_0^2, m_1^2) \bigg] -\frac{1}{2} \mathcal B_0(p^2, m_0^2, m_1^2), \end{eqnarray} expressed in terms of the scalar two-point function $\mathcal{B}_0$. We have denoted with $m_Z $ and $m_W$ the masses of the $Z$ and $W$ gauge bosons, with $q^2$ the virtuality of the incoming momentum of the EMT, and with $m_f$ the mass of the fermion of flavor $f$ running in the loops.
\\ The explicit expressions of the form factors appearing in (\ref{expans}) are given by \begin{eqnarray} f^Z_1&=&-2\,m_Z^2-\frac{4\,m_Z^4}{3\,q^2}+\left(2+\frac{7\,m_Z^2}{3\,q^2}\right)\,\mathcal A_0(m_Z^2)-\left(\frac{17\,m_Z^2}{6}+\frac{7\,m_Z^4}{q^2}+\frac{4\,m_Z^6}{q^4}\right)\,\mathcal B_0(q^2, 0, 0)\nonumber\\ &&+\frac{2}{3\,q^4}\,m_Z^2(2\,m_Z^2+q^2)\,(3m_Z^2+2q^2)\,\mathcal B_0(q^2, m_Z^2, m_Z^2)\nonumber\\ &&-\frac{4}{q^4}\,m_Z^6\,(m_Z^2+q^2)\,\mathcal C_0(0, m_Z^2, m_Z^2)-\frac{1}{q^4}m_Z^2\,(m_Z^2+q^2)^2(4\,m_Z^2+q^2)\,\mathcal C_0(m_Z^2, 0, 0), \end{eqnarray} with $\mathcal C_0$ denoting the scalar 3-point function, and with the form factor $f^W_1$, related to the exchange of the $W$'s, given by \begin{align} f^W_1&=\frac{m_f^2}{2}-4\,m_W^2+\frac{4}{3\,q^2}\,(m_f^4+m_f^2\,m_W^2-2\,m_W^4)-\frac{1}{3\,q^2}(m_f^2+2\,m_W^2)\left(\mathcal A_0(m_f^2)-\mathcal A_0(m_W^2)\right)\nonumber\\ &-\frac{2}{q^2}\Big(m_f^4+m_f^2\,m_W^2-2m_W^2\,(m_W^2+q^2)\Big)\,\mathcal B_0(0, m_f^2, m_W^2)+\frac{1}{6\,q^4}\Big(-24\,m_f^6-10\,m_f^4\,q^2\nonumber\\ &+m_f^2\,(72\,m_W^4+46\,m_W^2\,q^2+q^4)-2\,m_W^2\,(24\,m_W^4+42\,m_W^2\,q^2+17\,q^4)\Big)\,\mathcal B_0(q^2, m_f^2, m_f^2)\nonumber\\ &+\frac{1}{3\,q^4}\Big(12\,m_f^6+12\,m_f^4\,q^2+4\,m_W^2\,(2\,m_W^2+q^2)(3\,m_W^2+2\,q^2)\nonumber\\ &+m_f^2\,(-36\,m_W^4-16\,m_W^2\,q^2+q^4)\Big)\,\mathcal B_0(q^2, m_W^2, m_W^2)+2\Big(m_f^4+\frac{2}{q^4}\,(m_f^2-m_W^2)^3\,(m_f^2+2\,m_W^2)\nonumber\\ &+\frac{1}{q^2}\left(3\,m_f^6-4\,m_f^4\,m_W^2+5\,m_f^2\,m_W^4+4\,m_W^6\right)\Big)\,\mathcal C_0(m_f^2, m_W^2, m_W^2)\nonumber\\ &+\frac{1}{q^2}\Big(4\,m_f^8+m_f^6\,(q^2-4\,m_W^2)-2\,m_W^2\,(m_W^2+q^2)^2\,(4\,m_W^2+q^2)\nonumber\\ &-m_f^4\,(2\,m_W^2+q^2)\,(6\,m_W^2+q^2)\Big)\,\mathcal C_0(m_W^2, m_f^2, m_f^2)\nonumber\\ &+\frac{m_f^2}{q^2}\,(20\,m_W^6+25\,m_W^4\,q^2+6\,m_W^2\,q^4)\,\mathcal C_0(m_W^2, m_f^2, m_f^2). \end{align} Since the computations are rather involved, the correctness of the results above has been checked against appropriate Ward identities, whose general structure has been discussed in \cite{Coriano:2011zk}. As an example, by requiring the invariance of the generating functional of the theory under a diffeomorphic change of the spacetime metric, one derives the following Ward identity \begin{eqnarray} \label{WI} q_{\mu} \, \hat T^{\mu\nu}& =& \bar u(p_2) \bigg\{ p_2^{\nu} \, \Gamma_{\bar f f}(p_1) - p_1^{\nu} \, \Gamma_{\bar f f}(p_2) + \frac{q_\mu}{2} \left( \Gamma_{\bar f f}(p_2) \, \sigma^{\mu\nu} - \sigma^{\mu\nu} \, \Gamma_{\bar f f}(p_1) \right) \bigg\} u(p_1) \,, \end{eqnarray} where $ \Gamma_{\bar f f}(p)$ is the fermion two-point function, diagonal in flavor space \cite{Coriano:2012cr}. From this equation one obtains \begin{eqnarray} 0 &=& f^Z_1 - \frac{1}{4} \Sigma^L_Z(0) \nonumber \\ 0 &=& f^W_1 - \frac{1}{4} \Sigma^L_W(0), \end{eqnarray} which, as one can check, are identically satisfied by the explicit expressions of $f^Z_1$ and $f^W_1$ given above. \\ In the case of MeV neutrinos, the expressions of the two form factors simplify considerably, since the typical momentum transfer $q^2=-4 E^2 \sin^2(\theta/2)$ is small. These expansions, in fact, are useful in the case of scattering and lensing of neutrinos far from the region of the event horizon, at distances of the order of $10^3-10^6$ horizon units.
As we are going to see, an expansion in $q^2$ provides approximate analytical expressions of the $b_h(\alpha)$ relation, connecting the impact parameter to the angle of deflection $\alpha$, valid at momentum transfers which are small compared to the electroweak scale, i.e. $q^2/m_W^2 \ll1$. We will come back to illustrate this point more closely in the following sections. \\ In these cases the expression of the renormalized $f^Z_1$ form factor takes the form \begin{eqnarray} f^{Z\,(ren)}_{low\,q}=-\frac{11}{18} q^2, \end{eqnarray} while the $W$ form factor is slightly lengthier \begin{eqnarray} f^{W\,(ren)}_{low\,q}=& - \dfrac{q^2}{36\,(m_f^2-m_W^2)^4}\Biggl[ 5\,m_f^8-98\,m_f^6m_W^2+243\,m_f^4m_W^4-194\,m_f^2m_W^6+44\,m_W^8 \nonumber\\ &+6\left(10\,m_f^6m_W^2-15\,m_f^4m_W^4+2\,m_f^2m_W^6 \right) \ln \left( m_f^2 / m_W^2\right) \Biggr] . \end{eqnarray} \section{Cross Sections for photons, massive fermions and scalars} \begin{figure}[t] \centering \includegraphics[width=0.65\textwidth]{plots/Deflection.pdf} \caption{The deflection of the trajectory of a massless particle $P$ approaching a black hole. } \label{picx} \end{figure} Before coming to a discussion of the 1-loop effects in the scattering of neutrinos, we briefly summarize the results for the leading order cross sections for fermions, photons and scalars in an external static background \cite{Accioly1,Coriano:2012cr,Coriano:2013iba}. We just recall that the scattering matrix element is written as \begin{equation} i\mathcal{S}_{if}=-\frac{\kappa}{2}\int_ V d^4 x \langle p_2 | h_{\mu\nu}(x) T^{\mu\nu}(x)| p_1 \rangle \label{volume}, \end{equation} where ${ V}$ is the integration volume where the scattering occurs, which gives \begin{eqnarray} \langle p_2 |h_{\mu\nu}(x) T^{\mu\nu}(x)| p_1 \rangle&=& h_{\mu\nu}(x) \bar{\psi}(p_2)V^{\mu\nu}\psi(p_1) e^{i q\cdot x}. \end{eqnarray} Denoting with $i$ and $f$ the initial and final neutrino, we have introduced plane waves normalized as \begin{equation} \psi_{i}(p_{1})={\mathcal{N}_{i} }u(p_{1}), \qquad \mathcal{N}_{i}=\sqrt{\frac{1}{E_{1} V}}, \qquad \bar{u}(p_1)u(p_1)=1, \end{equation} and similarly for $\psi_f$, while $V$ denotes a finite volume. $E_1$ ($E_2$) denotes the energy of the incoming (outgoing) particle.\\ In momentum space the matrix element is given by \begin{eqnarray} i \mathcal{S}_{fi}= -\frac{\kappa}{2} h_{\mu\nu}(q) \bar{\psi}(p_2)V^{\mu\nu}\psi(p_1) = -\frac{\kappa}{2} h_{\mu\nu}(q) \mathcal{N}_i\mathcal{N}_f \hat{T}^{\mu\nu} \label{sfi} \end{eqnarray} in terms of the gravitational fluctuations in momentum space $h_{\mu\nu}(q)$. For a static external field the energies of the incoming/outgoing fermions are conserved ($E_1=E_2\equiv E$). \\ The Fourier transform of $h_{\mu\nu} $ in momentum space is given by \begin{eqnarray} h_{\mu\nu}(q_0,\vec{q})&=&\int d^4 x e^{i q\cdot x} h_{\mu\nu}(x), \end{eqnarray} which for a static field can be expressed as \begin{equation} h_{\mu\nu}(q_0,\vec{q})=2 \pi \delta(q_0) h_{\mu\nu}(\vec{q} ), \label{h1} \end{equation} in terms of a single form factor $h_0(\vec{q})$ \begin{equation} h_{\mu\nu}(\vec{q})\equiv h_0(\vec{q}) \bar{S}_{\mu\nu} \qquad \textrm{with}\qquad h_0(\vec{q})\equiv \left(\frac{\kappa M}{2 \vec{q}^2}\right).
\label{h2} \end{equation} The squared matrix element in each case takes the general form \begin{equation} \label{eq:sfi} \left|iS_{fi}\right|^2=\frac{\kappa^2}{16 V^2 E_1 E_2}\, 2\pi \delta(q_0)\, \,\mathcal{T} \,\frac{1}{2} \, \mathcal{J}^{\mu\nu\rho\sigma}(p_1,p_2) \,h_{\mu\nu}(\vec{q}\,) \,h_{\rho\sigma}(\vec{q}\,), \, \end{equation} where $\mathcal{T}$ is the transition time. Specifically, in the case of a massive (Dirac) fermion one obtains \begin{equation} \mathcal{J}^{\mu\nu\rho\sigma}_f(p_1,p_2)= \rm{tr} \left[ (\ds{p}_2+m) V_m^{\mu\nu}(p_1,p_2)( \ds{p}_1+m) V_m^{\rho\sigma}(p_1,p_2) \right] \,, \end{equation} where the $V_m^{\mu\nu}$ vertex is in this case given by \begin{equation} \label{eq:hff} V_m^{\mu\nu}(p_1,p_2)=\frac{i}{4} \Bigl( \gamma^{\mu} (p_1+p_2)^{\nu} + \gamma^{\nu}(p_1+p_2)^{\mu} - 2\eta^{\mu\nu} (\ds{p}_1+\ds{p}_2 - 2 m )\Bigr) \, \end{equation} which gives a cross section \begin{equation} \label{eq:crosssecFerm} \left.\frac{d \sigma}{d \Omega}\right|^{(0)}_f= \Biggl( \frac{G M}{\sin^2 (\theta/2)} \Biggr)^{\!2} \left( \cos^2(\theta/2) + \frac{1}{4} \frac{m^2}{|\vec{p}_1|^2} + \frac{1}{4} \frac{m^4}{|\vec{p}_1|^4} + \frac{3}{4} \frac{m^2}{|\vec{p}_1|^2} \cos^2(\theta/2) \right)\,. \end{equation} In the case of a neutrino, the corresponding cross section is obtained by sending the fermion mass $m$ of the related Dirac cross section to zero, giving \begin{equation} \left.\frac{d \sigma}{d \Omega}\right|_\nu^{(0)} =\left(\frac{G M }{\sin^2\frac{\theta}{2}}\right)^2\cos^2\frac{\theta}{2} \label{leading}, \end{equation} which is energy independent. Notice that the inclusion of the chiral projector $P_L$ in the expression of the neutrino amplitude, which carries a factor $1/2$, makes the neutrino and Dirac cross sections coincide. The same $1/2$ factor, in the Dirac case, appears in the average over the two states of helicity, while the axial-vector terms induced by $P_L$ are trivially zero (see \cite{ChangCorianoGordon} for typical studies of polarized processes).\\ In the photon case one obtains \begin{equation} \mathcal{J}^{\alpha\beta\rho\sigma}_{\gamma}(k_1,k_2)= \sum_{\lambda_1,\lambda_2} V^{\alpha\beta\kappa\lambda} (k_1,k_2) e_{\kappa}(k_1,\lambda_1) e_{\lambda}^{*}(k_2,\lambda_2) V^{\rho\sigma\mu\nu}(k_1,k_2) e_{\mu} (k_2,\lambda_2) e^{*}_{\nu}(k_1,\lambda_1)\,, \end{equation} where $e_{\mu}$ denotes the polarization vector of the photon, with an interaction vertex which is given by \begin{equation} \label{eq:hAA} V^{\mu\nu\alpha\beta}(k_1,k_2)=i \bigg\{ \left( k_1 \cdot k_2 \right) C^{\mu\nu\alpha\beta} + D^{\mu\nu\alpha\beta}(k_1,k_2) \bigg\}\,, \end{equation} where \begin{gather*} C_{\mu\nu\rho\sigma} = \eta_{\mu\rho}\, \eta_{\nu\sigma} +\eta_{\mu\sigma} \, \eta_{\nu\rho} -\eta_{\mu\nu} \, \eta_{\rho\sigma} \,, \\ D_{\mu\nu\rho\sigma} (k_1, k_2) = \eta_{\mu\nu} \, k_{1 \, \sigma}\, k_{2 \, \rho} - \biggl[\eta_{\mu\sigma} \, k_{1 \, \nu} \, k_{2 \, \rho} + \eta_{\mu\rho} \, k_{1 \, \sigma} \, k_{2 \, \nu} - \eta_{\rho\sigma} \, k_{1 \, \mu} \, k_{2 \, \nu} + (\mu\leftrightarrow\nu)\biggr] \,. \end{gather*} The cross section for a photon is then given by \begin{equation} \label{eq:crsechVV} \left.\frac{d \sigma}{d \Omega}\right|^{(0)}_\gamma= (G M)^2\cot^4 (\theta/2) \,.
\end{equation} Finally, in the case of a scalar the corresponding expression is given by \begin{equation} \mathcal{J}^{\alpha\beta\rho\sigma}_{s}(p_1,p_2)=V_s^{\alpha\beta}(p_1,p_2)V_s^{\rho\sigma}(p_1,p_2)\,, \end{equation} with \begin{equation} V^{\mu\nu}_s=-i\left\{ p_{1\,\rho} p_{2\,\sigma} C^{\mu\nu\rho\sigma} - 2\chi \left[ \left( p_1+p_2 \right)^{\mu} \left( p_1+p_2 \right)^{\nu} - \eta^{\mu\nu} (p_1+p_2)^2 \right] \right\} \,, \end{equation} where we have included both the minimal contribution and the term of improvement \cite{Coriano:2011zk}. For a conformally coupled scalar $\chi=1/6$. The cross sections, in this case, are given by \begin{equation} \label{eq:crsechSS} \left.\frac{d \sigma}{d \Omega}\right|^{(0)}_s= \left\{ \begin{array}{l} (GM)^2\csc^4 (\theta/2)\qquad\chi=0\\ \left(\frac{ G M}{3}\right)^2\cot^4 (\theta/2)\qquad\chi=1/6 \end{array} \right. \\ \end{equation} We show in Fig. \ref{fig:TreeLevel} these three cross sections at different energies, normalized as $\tilde{\sigma}=\sigma/(2 G M)^2$. In panel (a) we consider the scattering of a massive fermion, together with the massless limit, which applies in the neutrino case. We have included in (b) and (c) two enlargements of (a) which show how the massive and the massless cross sections tend to overlap for energies of the order of 1 GeV. In panel (d) we show the cross sections for the photon ($s=1$), for the neutrino ($s=1/2$) and for the conformally coupled scalar ($s=0$). \begin{figure}[t] \centering \subfigure[]{\includegraphics[scale=.52]{./plots/Graph3.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=.52]{./plots/Graph2.pdf}} \subfigure[]{\includegraphics[scale=.52]{./plots/Graph1.pdf}} \subfigure[]{\includegraphics[scale=.65]{./plots/Spin.pdf}} \caption{Normalized ($\tilde{\sigma}=\sigma/(2 G M)^2$) cross sections for massive and massless fermions. In the massive case $m$ is the electron mass (a). Two enlargements of (a) are in (b) and (c). Panel (d) shows the cross sections for photons ($s=1$), massless neutrinos ($s=1/2$) and conformally coupled scalars ($s=0$). } \label{fig:TreeLevel} \end{figure} \subsection{The neutrino cross section at 1-loop} In the neutrino case, at 1-loop level, Eq. (\ref{leading}) is modified into the form \begin{equation} \label{sigmaOL} \frac{d \sigma}{d \Omega}=G^2M^2\frac{\cos^2\theta/2}{\sin^4\theta/2}\left\{1+\frac{4\,G_F}{16\,\pi^2 \sqrt{2}} \left[ \, f^W_1(E,\theta) + f^Z_1(E,\theta) - \frac{1}{4} \Sigma^L_Z - \frac{1}{4} \Sigma^L_W\right] \right\}.
\end{equation} In the massless limit for the neutrinos, loop corrections do not induce flavor transition vertices, such as those computed in \cite{Coriano:2013iba}.\\ \begin{figure}[t] \centering \subfigure[]{\includegraphics[scale=.45]{plots/sezene.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=.35]{plots/sezene2.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=.35]{plots/sezene3.pdf}} \caption{Differential cross section for MeV neutrinos in units of $r_s^2$, with $r_s$ the Schwarzschild radius.\label{OLMeV}} \label{diff1} \end{figure} In the case of neutrinos of an energy $E$ in the MeV range, the expression above simplifies considerably and takes the form \begin{align} \frac{d\sigma}{d\Omega}=&\,G^2 M^2 \frac{\cos^2\theta/2}{\sin^4\theta/2} \Biggl\{ 1 + \frac{G_F}{\pi^2 \sqrt{2}} \Biggl[ \frac{11}{18} + \frac{1}{36\,(m_f^2-m_W^2)^4} \Biggl( 5\,m_f^8-98\,m_f^6m_W^2+243\,m_f^4m_W^4 \nonumber\\ & -194\,m_f^2m_W^6 +44\,m_W^8 +\,\, 6\, \Bigl( 10\,m_f^6m_W^2-15\,m_f^4m_W^4+2\,m_f^2m_W^6 \Bigr) \ln \dfrac{m_f^2}{m_W^2} \Biggr) \Biggr] E^2 \sin^2\frac{\theta}{2} \Biggr\} . \end{align} We show in Fig. \ref{diff1} three plots of the tree level and one-loop cross sections for an energy of the incoming neutrino beam of 1 MeV, for two different angular regions (plots $(a)$ and $(b)$), together with a global plot of the entire cross section (plot ($c$)) for the rescaled differential cross section $d\tilde{\sigma}/d\Omega\equiv 1/r_s^2\,\, d\sigma/d\Omega$. Notice that the tree-level and one-loop results are superimposed. We can resolve the differences between the two by zooming in on specific angular regions and varying the energy of the incoming beam. The result of this analysis is shown in Fig. \ref{diff2}, where in plots $(a)$ and $(b)$ we show the rescaled cross section $d\tilde{\sigma}/d\Omega$ as a function of the scattering angle $\theta$, for three values of the incoming neutrino beam energy equal to $1$ GeV, $1$ TeV and $1$ PeV. PeV neutrino events are rare, due to the almost structureless cosmic ray spectrum, which falls dramatically with energy. They could be produced, though, as secondaries from the decays of primary protons of energy around the GZK \cite{Greisen:1966jv, Zatsepin:1966jv} cutoff, and as such they are part of our analysis, which we try to keep as general as possible. \\ It is clear from these two plots that the tree-level and the one-loop result are superimposed at low energies, with a difference which becomes slightly more pronounced at higher energies. A similar behaviour is noticed in the cross section for scatterings at larger angles. Also in this case the radiative corrections tend to grow as the energy of the incoming beam increases. This behaviour is expected to affect the size of the angle of deflection $\alpha$ as we approach the singular region of a black hole. In fact, $\alpha$ is obtained by integrating the semiclassical equation (\ref{semic}), introduced below, and large deviations are expected as the impact parameter $b_h$ reaches the photon sphere. As we are going to illustrate in the next sections, the $b_h(E, \alpha)$ relation is significantly affected by the behaviour of the cross section at large $\theta$ as the trajectory approaches the photon sphere at $r=3/2\, r_s$. This is the closest radial distance allowed to a particle approaching the black hole from infinite distance without being trapped.
Therefore, these differences in $\tilde{\sigma}$ for large $\theta$ are going to render $b_h$ sensitive to the changes in energy of the neutrino beam for such close encounters with a black hole. \begin{figure}[t] \centering \subfigure[]{\includegraphics[scale=.6]{plots/sezene4.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=.4]{plots/sezene5.pdf}} \caption{Differential cross section: tree level and one-loop contribution for a wide range of energies.\label{energies}} \label{diff2} \end{figure} \section{Impact parameter formulation of the semiclassical scattering} As pointed out in previous studies \cite{Coriano:2014gia,Delbourgo:1973xe,Coriano:2013iba, Berends:1975ah}, the computation of the angle of deflection for a fermion or a photon involves a simple semiclassical analysis, in which one introduces the impact parameter representation of the specific classical cross section and equates it to the quantum one. The classical/semiclassical scattering process is illustrated in Fig.~\ref{picx}, with $\alpha$ denoting the angle of deflection. By assuming that the incoming particle is moving along the $z$ direction, with the source localized at the origin, and denoting with $\theta$ the scattering angle present in the quantum cross section, we have the relation \begin{equation} \frac{b}{\sin\theta}\left|\frac{d b}{d\theta}\right|=\frac{d \sigma}{d\Omega} \label{semic} \end{equation} between the impact parameter $b$ and $\theta$, as measured from the $z$-direction. This semiclassical equation \cite{Delbourgo:1973xe, Berends:1975ah} allows one to relate the quantum and the classical features of the interaction between the particle beam and the gravitational source. The explicit expression of $b(\alpha)$, at least for small deflection angles, which correspond to large values of the impact parameter, can be found analytically at Born level and, for small momentum transfers, also at one-loop, but has to be obtained numerically otherwise. The solution of (\ref{semic}) takes the general form \begin{equation} b_h^2({\alpha})=b_h^2(\bar{\theta}) +2\int_{\alpha}^{\bar{\theta}} d\theta' \sin\theta' \frac{d \tilde\sigma}{d\Omega'}, \label{intg} \end{equation} with $b_h^2(\bar{\theta})$ denoting the constant of integration. The semiclassical scattering angle $\alpha$ is obtained from (\ref{intg}) as a boundary value of the integral in $\theta$ of the quantum cross section. As discussed in \cite{Coriano:2014gia}, the integration constant derived from (\ref{intg}) has to be set to zero (for $\bar\theta=\pi$) in order for the solution of (\ref{semic}) to match the classical GR result for a very large $b_h$. In the case of a point-like gravitational source and of neutrino deflection, one obtains from (\ref{semic}) the differential equation \begin{eqnarray} \frac{d b^2}{d\theta}&=&- 2 \left(\frac{G M }{\sin^2\frac{\theta}{2}}\right)^2\cos^2\frac{\theta}{2}\, \sin\theta. \label{semic1} \end{eqnarray} Notice that the variation of $b$ with the scattering angle $\theta$ is negative, since the impact parameter decreases as $\theta$ grows, as we approach the center of the massive source. A comparison of this expression with the analogous relation in the photon case $(\gamma)$ shows that the two equations differ by a simple prefactor \begin{eqnarray} \frac{d b^2}{d\theta} =\frac{1}{\cos^2\frac{\theta}{2}}\frac{d b^2}{d\theta}\Big|_\gamma \qquad \textrm{with} \qquad \frac{d b^2}{d\theta}\Big|_\gamma = - 2 \,G^2 M^2\,\cot^4\frac{\theta}{2}\, \sin\theta.
\end{eqnarray} The solution of (\ref{semic1}) takes the form \begin{equation} \label{classb} b^2(\alpha)=4\,G^2M^2\left(-1+\csc^2\frac{\alpha}{2}+2\ln\left(\sin\frac{\alpha}{2}\right)\right), \end{equation} and in the small $\alpha$ (i.e. large $b$) limit takes the asymptotic form \begin{equation} b = G M\left(\frac{4}{\alpha} -\frac{\alpha}{3}(1+\ln\,8-3\ln\alpha)\right) + {\cal O}(\alpha^2) \label{blocal} \end{equation} which allows us to identify the deflection angle as \begin{equation} \alpha\sim 4 \frac{G M}{b} \label{impact} \end{equation} in agreement with Einstein's prediction for the angular deflection. This is the result expected from the classical (GR) analysis. The inversion of the asymptotic expansion (\ref{blocal}) generates the asymptotic behaviour \begin{eqnarray} \alpha=\frac{2}{b_h} -\frac{2}{b_h^3}(\ln b_h +\frac{1}{3})+\frac{3}{b_h^5}(\ln^2 b_h-\frac{1}{5})+ \mathcal{O}(1/b_h^7) \label{inv1} \end{eqnarray} which corresponds to the general functional form \begin{equation} \label{genex} \alpha= \frac{2}{b_{h}} + \sum_{k\geq1} \frac{a_{2k}}{b_{h}^{2k}} + \sum_{k\geq1} \frac{1}{b_h^{2k+1}} \left( a_{2k+1} + d_1 \ln b_h + d_2 \ln^2 b_h + \cdots +d_k\ln^k b_h \right) \,. \end{equation} The analytic inversion of (\ref{blocal}), given by (\ref{inv1}), is very stable under an increase of the order of the asymptotic expansion over a rather large interval of $b_h$, from low to very high values. Solutions (\ref{inv1}) and (\ref{genex}) can be obtained by an iterative (fixed point) procedure, which generates a sequence of approximations $\alpha_0\to\alpha_1\to\ldots\to\alpha_n$ to $\alpha(b_h)$, implemented after a Laurent expansion of (\ref{blocal}) and the use of the initial condition $\alpha_0=2/b_h$. The approach can be implemented also at one-loop and with the inclusion of the post-Newtonian corrections, if necessary. The logarithmic corrections present in (\ref{genex}) are a genuine result of the quantum approach and, as we are going to discuss below, are not present in the classical formula for the deflection. Radiative and post-Newtonian effects, not included in (\ref{inv1}), give an expression for $\alpha(b_h)$ which coincides with the form (\ref{genex}), with specific coefficients $(a_n, d_n)$ which are energy dependent. This is at the origin of the phenomenon of light dispersion (gravitational rainbow) induced by the quantum corrections, which is absent at classical level \cite{Accioly1}. Eq. (\ref{genex}) will play a key role in our proposal for the inclusion of the radiative corrections in the classical lens equation. Such an equation will relate the angular position of the source in the absence of lensing, $\beta$, to $\alpha(b)$.
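As a simple illustration of how the inversion can be handled numerically, the short \texttt{Python} sketch below inverts the exact tree-level relation (\ref{classb}), written in horizon units $b_h=b/(2GM)$, and compares the result with the truncated expansion (\ref{inv1}). It is only a minimal numerical cross-check: the iterative refinement is implemented here as a Newton iteration seeded with $\alpha_0=2/b_h$, rather than through the Laurent-expansion fixed point described above, and the values of $b_h$ are illustrative.
\begin{verbatim}
import numpy as np

def bh2_tree(alpha):
    # eq. (classb) in horizon units b_h = b/(2 G M):
    # b_h^2 = -1 + csc^2(alpha/2) + 2 ln sin(alpha/2)
    s = np.sin(0.5 * alpha)
    return -1.0 + 1.0 / s**2 + 2.0 * np.log(s)

def alpha_numeric(bh, n_iter=20):
    # Newton iteration on F(alpha) = bh2_tree(alpha) - bh^2,
    # seeded with the Einstein value alpha_0 = 2/b_h
    alpha = 2.0 / bh
    for _ in range(n_iter):
        s, c = np.sin(0.5 * alpha), np.cos(0.5 * alpha)
        F  = bh2_tree(alpha) - bh**2
        dF = -c / s**3 + c / s          # d(b_h^2)/d(alpha)
        alpha -= F / dF
    return alpha

def alpha_series(bh):
    # eq. (inv1), truncated at O(1/b_h^5)
    L = np.log(bh)
    return 2.0/bh - 2.0/bh**3 * (L + 1.0/3.0) + 3.0/bh**5 * (L**2 - 0.2)

for bh in (20.0, 50.0, 100.0):
    print(f"b_h = {bh:6.1f}  numeric = {alpha_numeric(bh):.8f}"
          f"  series = {alpha_series(bh):.8f}")
\end{verbatim}
The same root-finding step can be reused, with the one-loop or post-Newtonian $b_h^2(E,\alpha)$ in place of the tree-level relation, whenever an analytic inversion is not available.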
We give, for completeness, the analogous expressions in the case of the scalar and of a massive fermion. For a massless scalar we have the relation \begin{eqnarray} \alpha = \frac{2}{3\,b_h}-\frac{1}{b^3_h}\left(\frac{12\,\ln 3 - 1}{243}+\frac{4}{81}\ln b_h\right)+\mathcal O (1/b_h^5), \end{eqnarray} while for a massive fermion the corresponding expression becomes more involved and takes the form \begin{align}\label{massf} \alpha &= \frac{8\,E^4}{4\,E^4-2\,E^2m_f^2+m_f^4}\frac{1}{b_h}-\frac{1}{b_h^3}\left[\frac{8\,E^4}{3(2\,E^2-m_f^2)(4\,E^4-2\,E^2m_f^2+m_f^4)^2}\times \right.\nonumber\\ &\left.\times \left(m_f^6+8\,E^6(1+\ln 8)+E^4m_f^2\ln 64 -6E^4(4\,E^2+m_f^2)\ln \frac{2}{1-\frac{m_f^2}{2\,E^2}}\right)\right.\nonumber\\ &\left.+\frac{4\,E^4(4\,E^2+m_f^2)}{8\,E^6-8\,E^4m_f^2+4\,E^2m_f^4-m_f^6}\ln b_h\right] +\mathcal O(1/b_h^5), \end{align} where $E$ and $m_f$ are the energy and the mass of the fermion respectively. One can easily check that in the limit $E\gg m_f$ Eq.~(\ref{massf}) reproduces the formula for the massless fermion (neutrino). The angular deflection is much less enhanced in the scalar case compared to the remaining cases, showing a systematic difference with respect to the classical prediction from Einstein's deflection integral. The angular deflection in the scalar case is significantly affected by the choice of $\chi$, the free coupling of the scalar field to the external curvature $R$. \subsection{Bending at 1-loop} Moving to the one-loop expression given in (\ref{sigmaOL}), we can derive an analytic solution of the corresponding semiclassical equation (\ref{semic}) for $b=b(E,\alpha)$, in the limit of small momentum transfers. For this purpose we perform an expansion of (\ref{sigmaOL}) in $q^2/m_W^2$ up to $\mathcal{O}((q^2/m_W^2)^2)$ and solve (\ref{semic}) in this approximation for $b_h^2(E, \alpha)$, obtaining \begin{eqnarray} \label{OLb} b_h^2(E, \alpha)&=&\left[-1+\csc^2\frac{\alpha}{2}+2\ln\left(\sin\frac{\alpha}{2}\right)\right]+C_1(E)\left[1+\cos\alpha+4\ln\left(\sin \frac{\alpha}{2}\right)\right]+C_2(E)\cos^4 \frac{\alpha}{2}\nonumber\\ &&+20\,D_2(E)\ln\left(\sin\frac{\alpha}{2}\right)-4\,F_2(E)\cos \alpha-8\,D_2(E)\cos \alpha\ln\left(\sin^2\frac{\alpha}{2}\right)-G_2(E)\cos 2\alpha\nonumber\\ &&-2\,D_2(E)\cos 2\alpha\ln\left(\sin^2\frac{\alpha}{2}\right)-E_2(E), \end{eqnarray} with the coefficients $C_1, C_2, D_2, E_2, F_2$ and $G_2$ being functions of the energy and of the masses of the weak gauge bosons. The impact parameter $b_h(\alpha)$, as shown in the appendix, has a dependence on the angular deflection $\alpha$ which can be summarized by an expression of the form \begin{eqnarray} \label{btheta} b_h(E, \alpha) &=& \frac{2}{\alpha}+c(E)\,\alpha+d(E)\,\alpha\,\ln(\alpha)+f(E)\,\alpha^3+g(E)\,\alpha^3\ln \alpha+h(E)\,\alpha^3\ln^2 \alpha + \mathcal{O}(\alpha^5)\nonumber\\ \end{eqnarray} that we can invert in order to get $\alpha(E, b_h)$. This is given by \begin{eqnarray} \alpha(E, b_h)&=&\frac{2}{b_h}-\frac{1}{b_h^3}\Big[\big(2+4\,C_1(E)\big)\log b_h + \mathcal{A}(E)\Big] + \mathcal{O}(1/b_h^5)\nonumber \\ \mathcal{A}(E) &=& -2\,C_1(E)-C_2(E)+E_2(E)+4F_2(E)+G_2(E)+\frac{2}{3} . \end{eqnarray} \begin{figure}[t] \centering \subfigure[]{\includegraphics[scale=.45]{plots/balphatev.pdf}} \hspace{.5cm} \subfigure[]{\includegraphics[scale=.6]{plots/balphapev.pdf}} \caption{Plots of the impact parameter $b_h$ versus $\alpha$, the angle of deflection, for $20<b_h<100$, for the classical and the quantum solutions.
\label{bthetamev1}} \end{figure} We show in Fig.~\ref{bthetamev1} some plots of the impact parameter $b_h$ as a function of the deflection angle in a range closer to the horizon of a black hole, computed using the Newtonian approximation derived from the metric (\ref{SCH3}). The region involved covers the interval between 20 and 100 horizons. The numerical results refer to the GR solution and to the full one-loop prediction respectively. The classical expression and the quantum one start to differ as we approach the value $b_h\sim 20$, and exhibit a dependence on the energy of the incoming beam. Shown are the plots corresponding to neutrinos of energies in the TeV and the PeV range respectively. In these regions the lensing is very strong, corresponding to deflections of $10^3$ arcseconds and larger. As the neutrino (or photon) beam gets closer to the photon sphere ($r_0=3/2\, r_s$), which sets the minimal radius of closest approach, the angular deflection diverges. The divergence can be parameterized by an integer $n$, with $\alpha_n=2 \pi n$, and $n$ tending to infinity. The integer is the winding number of the beam path around the photon sphere. In the external neighborhood of the point of closest approach the beam still escapes to infinity, forming an infinite set of images which are parameterized by the same integer $n$ \cite{Bozza:2001xd}. \section{$1/b^n$ contributions to the deflection } It is interesting to compare the classical GR prediction for the deflection with the result of (\ref{genex}), by resorting to a similar expansion for the deflection integral. This has been studied quite carefully in the literature, especially in the limit of strong lensing \cite{Amore:2006pi,Keeton:2005jd}. The $1/b_h^n$ expansion has been shown to appear quite naturally in the post-Newtonian approach applied to the Einstein integral for light deflection. \\ We recall that Einstein's expression in GR is given by the integral \begin{equation} \alpha(r_0)=\int_{r_0}^\infty dr \frac{2}{r^2}\left[ \frac{1}{r_0^2}\left(1 -\frac{2 G M}{r_0}\right) - \frac{1}{r^2}\left(1 -\frac{2 G M}{r}\right)\right]^{-1/2} -\pi \label{exactT} \end{equation} and can be re-expressed in the form \begin{equation} \alpha=2\int_0^1 \frac{dy}{\sqrt{1 - 2 s - y^2 + 2 s y^3}} -\pi, \end{equation} with the variable $s\equiv r_s/ (2 r_0)$ being related to the ratio between the Schwarzschild radius and the distance of closest approach between the particle and the source, $r_0$. Additional information on $\alpha(r_0)$ is obtained via an expansion of the integrand in powers of $s$ and a subsequent integration. This method shows that the result can be cast in the form \begin{equation} \alpha(b_h)=\frac{a_1}{b_h} +\frac{a_2}{b_h^2} +\frac{a_3}{b_h^3} + \frac{a_4}{b_h^4} + \frac{a_5}{b_h^5}+\ldots \label{exp} \end{equation} with \begin{equation} a_1=2, \qquad a_2=\frac{15}{16}\pi, \qquad a_3=\frac{16}{3}, \qquad a_4=\frac{3465}{1024}\pi, \qquad a_5=\frac{112}{5}. \end{equation} The coefficients $a_i$ differ from those given in \cite{Keeton:2005jd} (up to $a_7$) just by a normalization. They are obtained by re-expressing $s=s(r_0)$ in terms of the impact parameter $b_h$ using the relation \begin{equation} \label{change} b_h=x_0 \left( 1 -\frac{1}{x_0}\right)^{-1/2} \end{equation} between the impact parameter and the radial distance of closest approach, having redefined $x_0 \equiv r_0/(2 \, G M)$. A numerical cross-check of the truncated expansion (\ref{exp}) against the exact integral (\ref{exactT}) is sketched below.
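The sketch below (again in \texttt{Python}, and intended only as an illustrative check under the stated conventions) evaluates the exact deflection integral in horizon units after the substitution $y\to 1-u^2$, which removes the integrable endpoint singularity of the integrand, and compares it with the series (\ref{exp}); the values of $x_0$ are illustrative, and the agreement improves as $x_0$ grows.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def alpha_exact(x0):
    # Exact deflection (exactT) with r_0 = x0 * 2GM, so that s = 1/(2 x0).
    # The substitution y = 1 - u^2 makes the integrand regular at u = 0.
    s = 1.0 / (2.0 * x0)
    P = lambda y: 1.0 - 2.0*s - y**2 + 2.0*s*y**3
    integrand = lambda u: 2.0 * (2.0 * u) / np.sqrt(P(1.0 - u**2))
    val, _ = quad(integrand, 0.0, 1.0)
    return val - np.pi

def alpha_series(bh):
    # eq. (exp): a1 = 2, a2 = 15 pi/16, a3 = 16/3, a4 = 3465 pi/1024, a5 = 112/5
    a = [2.0, 15*np.pi/16, 16.0/3, 3465*np.pi/1024, 112.0/5]
    return sum(an / bh**(n+1) for n, an in enumerate(a))

def bh_of_x0(x0):
    # eq. (change): b_h = x0 (1 - 1/x0)^(-1/2)
    return x0 / np.sqrt(1.0 - 1.0/x0)

for x0 in (5.0, 10.0, 50.0):
    bh = bh_of_x0(x0)
    print(f"x0 = {x0:5.1f}  b_h = {bh:8.3f}"
          f"  exact = {alpha_exact(x0):.6f}  series = {alpha_series(bh):.6f}")
\end{verbatim}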
Relation (\ref{change}) can also be brought into the form \begin{eqnarray} \label{form1} x_0=\frac{2\,b_h}{\sqrt 3}\cos\left[\frac{1}{3}\cos^{-1}\left(-\frac{3^{3/2} }{2\,b_h}\right)\right]. \end{eqnarray} An expression equivalent to (\ref{form1}) can be found in \cite{Coriano:2014gia}. Eq. (\ref{form1}) can be given in a $1/b_h$ expansion \begin{equation} x_0=b_h-\frac{1}{2}-\frac{3}{8 \, b_h} -\frac{1}{2\, b_h^2} -\frac{105}{128\, b_h^3} -\frac{3}{2\, b_h^4} +\mathcal{O}(1/b_h^5), \label{inversion} \end{equation} which will turn out to be useful below. We can invert (\ref{exp}), obtaining the relation \begin{align} b_h(\alpha)=&\frac{2}{\alpha}+\frac{a_2}{2}+\frac{\alpha}{8}\big(2\,a_3-a_2^2\big)+\frac{\alpha^2}{16}\big(a_2^3-3a_2\,a_3+2\,a_4\big)\nonumber\\ &+\frac{\alpha^3}{128}\big(8\,a_5-16\,a_2\,a_4-8\,a_3^2+20\,a_2^2a_3-5a_2^4\big)+\mathcal{O}(\alpha^4), \end{align} which differs from (\ref{genex}) by the absence of logarithmic terms in the impact parameter $b_h$ and by the energy independence of the coefficients. The inclusion of the extra contributions mentioned above, in the classical GR expression, becomes relevant in the case of strong lensing. The inclusion of the additional $1/b_h^n $ terms in the expansion of the angular deflection can be extended to the case of a continuous distribution of sources/deflectors. This provides a simple generalization of the standard approach to classical lensing for such distributions. \section{Lens equations and $1/b^n$ corrections} The standard approach to gravitational lensing in GR is based on an equation, derived from a geometrical construction, which relates the angular position of the image ($\theta_I$) to that of the source ($\beta$), with an intermediate angular deflection ($\alpha$) generated on the lens plane. In this section we are going to briefly review this construction, which is based on the asymptotic expression for the angular deflection ($\alpha\sim 2/b_h$), and discuss its extension when one takes into account more general expansions of $\alpha(b_h)$ of the form given by Eq. (\ref{exp}). The extension that we consider covers the case of a thin lens and concerns only the extra $1/b_h^n$ terms derived from classical GR. The discussion is preliminary to the analysis of the next section, where we will consider the inclusion of the radiative effects, parameterized by (\ref{genex}), into the classical lens equation. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{plots/LensingPlanes.pdf} \caption{Geometric construction of the lens for a continuous distribution of sources. Shown are the plane $S$ of the source distribution and the plane of the lens $L$. The line $OI$ identifies the direction at which the observer sees the image after the angular deflection $\alpha$.} \label{lenspicture} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.35\textwidth]{plots/GravitationalLensing.pdf} \caption{The thin lens geometric construction where the source S, the lens V and the observer O lie on the same plane. Notice that the figure is not to scale, since $D_{OL}$ and $D_{LS}$ are far larger than the length of $VR$.} \label{geo} \end{figure} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{plots/PlanarLens.pdf} \caption{Geometrical construction for the primary I$_p$ and secondary I$_s$ images generated by the two geodesics of the isotropic emission.
Shown are the source S, the lens, represented by the dotted circle, the observer O and the primary $I_p$ and secondary $I_s$ angular positions involved in the discussion.} \label{prim} \end{figure} \subsection{The lens geometry} We show in Fig.~\ref{lenspicture} the lens geometry in the case of a continuous distribution of sources and deflectors. A simplified picture of the geometry, with pointlike source and deflector, is shown in Fig.~\ref{geo}. We indicate with $\vec{\beta}$ the oriented angle between the optical axis $(OP)$ (taken as the $z$ axis) and the unlensed direction of the source $(OS)$. $\vec{\theta}_I$ denotes the angle formed by the visual line of the image $(OI)$ with the optical axis. We also denote with $D_{OL}$ the distance between the observer and the lens plane; with $D_{LS}$ the distance between the lens plane and the source plane and with $D_{OS}$ the distance of the source plane from the observer. $\hat{\alpha}$ is the (oriented) angle of deflection, measured clockwise like all the other angles appearing in the geometrical construction. We also introduce the relations, valid for $D_{LS}, D_{OL}$ much larger than the size of the lens, typical of a linear lens, \begin{equation} \vec{\eta}\equiv \vec{PS}=\vec{\beta} D_{OS} \qquad \qquad \vec{SI}=\hat{\vec{\alpha}} D_{LS} \qquad \qquad \vec{PI}=\vec{\theta}_I D_{OS}. \end{equation} The thin lens equation follows from the approximate geometrical relation \begin{equation} \label{lin} \vec{PI}=\vec{PS} +\vec{SI} \qquad \textrm{ i.e.} \qquad \vec{\beta}=\vec{\theta}_I -\hat{\vec{\alpha}}\frac{D_{LS}}{D_{OS}}. \end{equation} Denoting with $\vec{\xi}$ a 2-D vector in the lens plane, it is convenient to introduce two scales $\eta_0$ and $\xi_0$ defined as \begin{eqnarray} \vec{\eta}=\eta_0\,\vec y\qquad \qquad\vec \xi\equiv\vec{VR}=\xi_0\vec x\qquad \qquad \frac{\eta_0}{\xi_0}=\frac{D_{OS}}{D_{LS}}. \end{eqnarray} Using the lens equation in the geometric relation \begin{eqnarray} \frac{|\vec{PI}|}{|\vec{VR}|}=\frac{D_{OS}}{D_{OL}}, \end{eqnarray} we find the relation \begin{eqnarray} \label{thin} \vec y=\vec x - \hat{\vec \alpha} \frac{D_{LS}\,D_{OL}}{D_{OS}\,\xi_0}\equiv \vec x -\vec \alpha \qquad \qquad \textrm{with} \qquad\qquad\vec \alpha=\hat{\vec \alpha} \frac{D_{LS}\,D_{OL}}{D_{OS}\,\xi_0}, \end{eqnarray} which defines the thin lens equation. It is possible to give a simpler form to the equation above if we go back to (\ref{lin}) and perform simple manipulations on the angular dependence. On the lens plane (Fig.~\ref{geo}) the equation takes the scalar form \begin{eqnarray} \label{thin1} \beta=\theta_I- \alpha \frac{D_{LS}}{D_{OS}}, \end{eqnarray} which can be extended to the case of stronger lensing by the inclusion of the contributions of the $1/b^n$ corrections in $\alpha(b)$. Use of the Einstein relation $\alpha=4 G M/b$ and of the relation $b\sim\theta_I D_{OL}$ brings (\ref{thin1}) into the typical form \begin{equation} \beta=\theta_I -\frac{\theta_E^2}{\theta_I} \qquad \theta_E^2 =\frac{D_{LS}}{D_{OS}}\frac{4 G M}{D_{OL}}, \end{equation} which defines the thin lens approximation, with $\theta_E$ being the Einstein radius. For a source $S$ aligned along the optical axis - the segment connecting the observer, the lens and the plane of the source - together with the deflector and the observer $O$ (see Fig.~\ref{prim}), i.e. for $\beta=0$, the images will form radially at an opening $\theta_I=\theta_E$ and appear as a circle perpendicular to the lens plane.
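For orientation on the scales involved, the minimal sketch below evaluates $\theta_E$ and the two image positions of the thin lens for an illustrative configuration; the mass and the distances are assumptions, chosen close to those used in the figures of the following sections, and the image positions simply follow from the quadratic thin lens equation, coinciding with the closed-form expressions quoted next.
\begin{verbatim}
import numpy as np

# Illustrative configuration (an assumption): M = 1e6 solar masses,
# D_OL = 10 kpc, D_OS = 19 kpc, D_LS = D_OS - D_OL.  G = c = 1, lengths in km.
GM, kpc = 1.4766e6 * 1.0e6, 3.0857e16
D_OL, D_OS = 10.0 * kpc, 19.0 * kpc
D_LS = D_OS - D_OL
arcsec = np.pi / (180.0 * 3600.0)

theta_E = np.sqrt(4.0 * GM * D_LS / (D_OS * D_OL))   # Einstein radius (rad)

beta = 1.0 * arcsec                                   # unlensed source position
disc = np.sqrt(beta**2 + 4.0 * theta_E**2)
theta_plus, theta_minus = 0.5*(beta + disc), 0.5*(beta - disc)

print("theta_E  =", theta_E / arcsec, "arcsec")       # ~0.6 arcsec here
print("images   =", theta_plus / arcsec, theta_minus / arcsec, "arcsec")
\end{verbatim}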
For a generic $\beta$, instead, the primary and secondary image solutions are given by the well-known expressions \begin{equation} \label{img} \theta_{I {\pm}}=\frac{\beta}{2} \pm \frac{1}{2}\left(\beta^2 + 4 \theta_E^2\right)^{1/2}. \end{equation} It is quite straightforward to extend this derivation with the inclusion of the $1/b^n$ corrections in the $\alpha(b)$ relation and test their effect numerically \cite{Keeton:2005jd}. This is part of a possible improvement of the ordinary (quadratic) thin lens equation which can be investigated more generally in conditions of strong lensing. In that case one can also adopt an equation which includes deflections of higher orders, as we will discuss in the following sections. For the moment we just mention that the inclusion of the higher order $1/b^n$ contributions given by (\ref{exp}) modifies (\ref{thin1}) into the form \begin{equation} \label{thin2} \beta=\theta_I - \frac{\theta_E^2}{\theta_I}-\sum_{n\geq 2} \frac{\theta_E^{(n)}}{\theta_I^n}, \end{equation} with \begin{equation} \theta_E^{(n)}\equiv r_s^n a_n \frac{D_{LS}}{D_{OS} D_{OL}^n}. \end{equation} Another observable that we will investigate numerically is the lens magnification. For this purpose we recall that light beams are subject to deflections both as a whole and locally, due to their bundle structure. Rays which travel closer to the deflector are subject to a stronger deflection compared to those that travel further away. This generates a difference in the solid angles under which the source is viewed by the observer in the unlensed and in the lensed cases. In the simple case of an axi-symmetric lens the ratio between the two solid angles can be defined in the scalar form \begin{eqnarray} \mu=\left|\left(\frac{\partial\beta}{\partial\theta_I}\frac{\sin\beta}{\sin\theta_I}\right)\right|^{-1}. \end{eqnarray} In the case of a thin lens (\ref{thin}), the analogous expression is given by \begin{eqnarray} \mu^{(0)}_\pm\equiv\left(\frac{\partial\beta}{\partial\theta_I}\frac{\beta}{\theta_I}\right)^{-1}. \end{eqnarray} For this lens the analysis simplifies quite drastically. Using the expression of the two images $\theta_{I\pm}$ given in (\ref{img}) one obtains the simple expression for the primary and secondary images \begin{equation} \mu_{\pm}=\pm\left(1 -\left(\frac{\theta_E}{\theta_{I\pm}}\right)^4\right)^{-1}, \label{magni} \end{equation} where the Einstein angle is defined as usual \begin{eqnarray} \theta_E=\sqrt{4\,G M\frac{x}{D_{OL}}}\qquad\qquad \textrm{with}\qquad \qquad x=\frac{D_{LS}}{D_{OS}}. \end{eqnarray} It is convenient to measure the angular variables in terms of the Einstein angle $\theta_E$, as $\bar{\beta}\equiv \beta/\theta_E$, $\bar{\theta}\equiv \theta_I/\theta_E$, with \begin{equation} \bar{\theta}_{I \pm}=\frac{\bar{\beta}}{2} \pm \sqrt{1 + \frac{\bar{\beta}^2}{4}}. \end{equation} The total magnification then takes the rather simple form \begin{equation} \mu\equiv \mu_+ + \mu_-=\frac{2 + \bar{\beta}^2}{\bar\beta\sqrt{4 + \bar\beta^2}}. \end{equation} This equation is commonly used to calculate the light curve in the microlensing case. We refer to \cite{Mao:2008kp} for a short review on this point. \subsection{Nonlinear effects in strong deflections} In conditions of strong lensing, the linear approximations in the trigonometric expressions are not accurate enough and one has to turn to a fully nonlinear description of the geometry, expressed in terms of the angular variables involved.
We illustrate this point by taking as an example a typical lens equation, which in our case is given by the Virbhadra-Ellis construction (VE) \cite{Virbhadra:1999nm}. \\ Following Fig.~\ref{geo}, we recall that the VE lens equation is based on the geometrical relation \cite{Virbhadra:1999nm,Bozza:2008ev} \begin{figure}[t] \centering \subfigure[]{\includegraphics[scale=0.6]{plots/VElensclqu.pdf}}\hspace{.5cm} \subfigure[]{\includegraphics[scale=0.6]{plots/VElensdist.pdf}} \caption{(a): $\beta(\theta_I)$ for the Virbhadra-Ellis lens equation in the neutrino case, for a black hole with $M=10^6\,\rm{M}_{\odot}$ and with $D_{OL}$=10 kpc, $D_{OS}$=19 kpc. Shown are the numerical solutions for the classical and for the energy-dependent case. (b): $\beta(\theta_I)$ as in (a) but for a 1 GeV neutrino beam.} \label{VHc1} \end{figure} \begin{equation} \overline{PS}=\overline{PI} -\overline{SI}, \end{equation} which gives \begin{eqnarray} \label{lenseq} D_{OS}\tan\beta=D_{OS}\tan{\theta}_I-D_{LS}(\tan{\theta_I} +\tan(\alpha-{\theta}_I)), \end{eqnarray} under the assumption that the point $R$ in Fig. \ref{geo} lies on the vertical plane of the lens. ${\theta}_I$ is the angle at which the image is viewed by the observer and $\beta$ is the unlensed angular position of the source. Within this approximation we can use the geometric relation \begin{equation} b=D_{OL}\sin{\theta}_I, \label{bt} \end{equation} which allows one to relate the image position ${\theta}_I$ to the angular deflection of the beam $\alpha$. Notice that this approximate relation is justified by the fact that the distances $D_{OL}$ and $D_{OS}$ are very large compared to the radius of closest approach $r_0$. In this limit the two segments $\overline{VH}$ and $\overline{VT}$ are treated as equal.\\ We recall that (\ref{lenseq}) is not the only lens equation that one can write down, but, differently from Eq. (\ref{thin}), it can be used in the case of strong lensing. It takes into account the nonlinear contributions to the angular deflection by the introduction of the $\tan(\beta)$ and $\tan(\theta_I)$ terms, which in (\ref{thin}) are not included. We refer to \cite{Bozza:2008ev} for a review of possible lens equations. \section{Radiative effects and the geometry of lensing } Turning to our case study, radiative effects in the lens equations can be introduced by replacing the expression of the angular deflection generated by the deflector on the lens plane, which is a function of the impact parameter $b$ $(\alpha=\alpha(b))$, with the new, energy dependent relation $\alpha(b,E)$ whose general form is given by (\ref{genex}). \\ For simplicity we consider a pointlike source, and a pointlike deflector, as shown in Fig. \ref{geo}. We recall that for a massless particle the geodesic motion is determined in terms of the energy $E$ and of the angular momentum $L$ at the starting point of the trajectory. The gravitational deflection, however, can be written only as a function of the impact parameter $b$ of the source, with $b=L/E$, which is an important result of the classical approach.
For a further clarification of this aspect, which differs from the semiclassical analysis we are interested in, we briefly overview the classical case, using the lens geometry as a reference point for our discussion.\\ For a source located on the source plane at an angular opening $\beta$ (in the absence of the deflector), the initial conditions can be expressed in terms of the two components of the initial momentum $ \vec p=(p_r,p_\phi)$ on the plane of the geodesic, or, equivalently, by the pairs $(p_r, E)$ or $(p_\phi,E)$, with $E$ the initial energy of the beam. We recall that for a Schwarzschild metric these are defined as \begin{equation} p_r = \left( 1- \frac{2 G M }{r} \right)^{\!-1} \dot{r}, \qquad p_{\vartheta} = -r^2 \dot{\vartheta}, \qquad p_t = \left( 1- \frac{2 G M}{r} \right) \dot{t}, \qquad p_{\phi} = - r^2 \sin^2 \vartheta \dot{\phi} \,. \end{equation} We have denoted with $\dot{x}\equiv dx/d s$ the derivative with respect to the affine parameter. $p_t$ and $p_\phi$ are related to the energy and to the angular momentum as $p_t=E$ and $p_\phi=-L$, with the motion taking place on the plane $\vartheta=\pi/2$ $(p_\vartheta=0)$. They are constrained by the mass-shell condition \begin{equation} \left(1-\frac{2 G M }{r} \right) (p^t)^2 - \left( 1-\frac{2 G M}{r} \right)^{\!-1} (p^r)^2 - r^2 (p^\phi)^2=0, \end{equation} with $(p^r=\dot r, p^\phi=\dot \phi,p^t=\dot t)$. The lens equation, usually written as \begin{equation} L(\beta,\theta_I)=0, \end{equation} can also be written, equivalently, in the form of a constraint between $\beta$ and $b$ using (\ref{bt}). We can use any of the independent variables mentioned above. For a given initial momentum of the beam, emitted from the plane of the source, the lens equation will then determine the position of the source in such a way that the geodesic motion will reach the observer at its location on the optical axis. In particular, an interesting description emerges if we choose as initial conditions the angular position of the source $(\beta)$ and the value of the impact parameter $b$. These two conditions fix the direction of the trajectory of the beam at its origin on the source plane. In these last variables, the lens equation will then determine one of the two in terms of the other in such a way that the outgoing geodesic will reach the observer. \\ The inclusion of an energy dependence in the angle of deflection $\alpha$ renders this picture slightly more complex. For instance, the lens equation will now depend on 3 parameters, which can be chosen to be $(\beta, \theta_I, E)$ or $(\beta,p_r,p_\phi)$ or any other equivalent combination, with one of the three fixed in terms of the other two by the equation itself. For a monochromatic and spherical source of energy $E$, fixed at a position $\beta$, emitting a beam with a given impact parameter $b$ with respect to the deflector, the lens equation may not have a real solution, since the deflector may disperse the beam in such a way that it will never reach the observer. For a fixed spherical source which emits photons or neutrinos of any energy, one can look for solutions in the reduced variables $b, E$. Since $b$ is related to the primary and secondary images $\theta_{I\pm}$, the beam that reaches the observer will be characterized by a unique energy $E$, assuming that the images are detected at angular positions $\theta_{I\pm} $. \\ The argument above can be repeated by using any combination of three independent kinematic variables among those mentioned above.
\\ Having clarified this point, we now move to a description of the actual implementation of the lens equation in this extended framework. The angular location of the image $\theta_I$ and the impact parameter are related in the geometry of the lens by Eq. (\ref{bt}), and this allows one to search for solutions of the lens equation (\ref{lenseq}) in regions characterized by smaller values of the impact parameter ($ 20 <b_h <100$) where the angular deflections are stronger. \\ The key to the derivation of the radiative lens equation is given by Eqs. (\ref{genex}) and (\ref{bt}). Combining the two relations we obtain \begin{align} \label{genexp} \alpha(b(\theta_I,E))=&\frac{4 G M}{D_{OL} \sin\theta_I} + \sum_{n\geq 1} \frac{A_{2n}}{\left(D_{OL} \sin\theta_I\right)^{2n}}\nonumber\\ &+ \sum_{n\geq 1}\left( \frac{2 G M}{D_{OL} \sin\theta_I}\right)^{2n+1}\left(A_{2n+1} + D_1 \ln^n \left(\frac{D_{OL}}{2 G M} \sin\theta_I\right) +\ldots \right), \end{align} where the ellipsis refers to the extra logarithmic contributions present in Eq. (\ref{genex}). The expression above is known analytically if we manage to solve explicitly the semiclassical equation (\ref{semic}), otherwise it has to be found by a numerical fit. However, it is clear that the ansatz for the fit has, in any case, to coincide with Eqs.~(\ref{genex}) and (\ref{genexp}), due to the typical functional forms of the solutions of Eq. (\ref{semic}). For instance, in the case of a thin lens, the modifications embodied in (\ref{genexp}) can be incorporated into the new equation \begin{eqnarray} \label{thin3} \beta=\theta_I- \alpha(b(\theta_I,E)) \frac{D_{LS}}{D_{OS}}, \end{eqnarray} which is an obvious generalization of (\ref{thin2}), the latter being valid only in the classical GR case. As we are going to illustrate below, (\ref{thin3}) can be studied numerically for several geometrical configurations, which are obtained by varying the lensing parameters $D_{LS}$ and $D_{OL}$.\\ A similar approach can be followed for the VE or for any other classical lens equation. The insertion of $\alpha({\theta_I},E)$ given by (\ref{genex}) into (\ref{lenseq}) generates the radiative lens equation \begin{equation} \label{lens1} D_{OS}\tan\beta=D_{OS}\tan{\theta}_I-D_{LS}(\tan{\theta}_I +\tan(\alpha({\theta}_I,E)-{\theta}_I)), \end{equation} which takes into account also the quantum corrections and is now, unlike (\ref{lenseq}), energy dependent. At this point it is clear that all the lens observables, such as magnifications, shears, microlensing light curves etc., follow rather directly from this general prescription. \\ For instance, we can determine for the Virbhadra-Ellis lens the expression for the magnification using the radiative (semiclassical) expression \begin{eqnarray} \label{mueq} \mu&=&\frac{\chi_1}{\chi_2} \\ \chi_1&=&D_{OS}\sin\theta_I\,(1+((D_{OL}\tan\theta_I+(D_{OL}-D_{OS})\tan(\alpha(\theta_I,E)-\theta_I))/D_{OS})^2)^{3/2}, \nonumber \end{eqnarray} \begin{eqnarray} \chi_2 &=&(D_{OL}\tan\theta_I+(D_{OL}-D_{OS})\tan(\alpha(\theta_I,E)-\theta_I))\nonumber \\ &&\times (\sec^2\theta_I+(D_{OL}-D_{OS})/D_{OS}(\sec^2\theta_I +\sec^2(\alpha(\theta_I,E)-\theta_I)\,(\alpha^{\prime}(\theta_I,E)-1)))\nonumber, \end{eqnarray} where $\alpha^{\prime}\equiv \partial \alpha/\partial \theta_I$. As is clear from Eqs. (\ref{lens1}) and (\ref{mueq}), both equations are very involved, although they can be investigated very accurately at numerical level.
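To make the procedure concrete, the sketch below (a minimal \texttt{Python} illustration; the geometry, the truncation of $\alpha(b_h)$ at $\mathcal{O}(1/b_h^3)$ and the values of the correction coefficients are assumptions made only for the example) solves the radiative VE equation for the primary image by a direct root search and evaluates the magnification from the numerical derivative of $\beta(\theta_I)$.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Illustrative geometry (an assumption, close to the figure captions):
# M = 1e6 solar masses, D_OL = 10 kpc, D_OS = 19 kpc.  G = c = 1, lengths in km.
GM, kpc = 1.4766e6 * 1.0e6, 3.0857e16
rs = 2.0 * GM
D_OL, D_OS = 10.0 * kpc, 19.0 * kpc
D_LS = D_OS - D_OL
arcsec = np.pi / (180.0 * 3600.0)

def alpha_of_bh(bh, a3=0.0, d1=0.0):
    # eq. (genex) truncated at 1/bh^3; a3, d1 stand for the energy-dependent
    # coefficients (hypothetical values); a3 = d1 = 0 gives the classical limit
    return 2.0/bh + (a3 + d1 * np.log(bh)) / bh**3

def beta_of_theta(theta, a3=0.0, d1=0.0):
    # Virbhadra-Ellis relation (lens1) solved for beta at a given theta_I
    bh = D_OL * np.sin(theta) / rs
    alpha = alpha_of_bh(bh, a3, d1)
    t = np.tan(theta) - (D_LS / D_OS) * (np.tan(theta) + np.tan(alpha - theta))
    return np.arctan(t)

def primary_image(beta, a3=0.0, d1=0.0):
    # root search of beta_of_theta(theta_I) = beta for the primary image
    return brentq(lambda t: beta_of_theta(t, a3, d1) - beta, beta, 2.0e-5)

def magnification(theta, a3=0.0, d1=0.0, h=1.0e-12):
    # mu = |(dbeta/dtheta)(sin beta / sin theta)|^{-1}, central difference
    db = (beta_of_theta(theta + h, a3, d1)
          - beta_of_theta(theta - h, a3, d1)) / (2.0 * h)
    return 1.0 / abs(db * np.sin(beta_of_theta(theta, a3, d1)) / np.sin(theta))

beta = 1.0 * arcsec
ti = primary_image(beta)
print("primary image:", ti / arcsec, "arcsec   magnification:", magnification(ti))
\end{verbatim}
At impact parameters of the order of the Einstein radius the correction coefficients are numerically irrelevant; appreciable energy-dependent effects require the strong-deflection region $20<b_h<100$ discussed above, where the same root search applies once the full numerical $\alpha(b_h,E)$ is used in place of the truncated form.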
It is also possible to discuss the analytical form of the solutions within the formalism of the $1/b^n$ expansion. In fact, we are entitled to expand all the observables of the fully nonlinear lens in the angular deflection $\alpha$, and work at a certain level of accuracy in the angular parameters. In this work, however, we prefer to proceed with a direct numerical analysis of the full equations, both for the thin and for the VE lens, leaving the discussion of the explicit solutions to a future work. \section{Post-Newtonian corrections: the case of primordial black holes} We have seen in the previous sections that the $b_h(\alpha)$ expression for the deflection does not suffer from any apparent divergence (from the gravity or external field side) due to the well-defined structure of the Newtonian cross section. The expression given in (\ref{leading}), in fact, is similar to the ordinary Rutherford scattering encountered in electrodynamics.\\ The dependence of the resulting cross section on the scale $G M/c^2$, the Schwarzschild radius, manifests itself as an overall dimensionful constant. Therefore, the electroweak corrections - and the logarithmic dependence on the energy of the terms in the expansion that follows - do not appear in combination with the macroscopic scale $r_s$. This allows, in principle, an extension of the perturbative computation up to any order in the electroweak coupling constant $\alpha_w$. It is also clear that this result is expected to be valid for any renormalizable field theoretical model, when combined with an external static gravitational field of Coulomb type, as in the case of the Newtonian limit of GR.\\ From now on, we will use the notation nPN to indicate the (post-Newtonian) order in the potential at which we expand the Schwarzschild metric. For instance, contributions of a certain nPN order involve corrections in the external field proportional to $\Phi^{n+1}$, with 0PN denoting the ordinary (lowest order) Newtonian (i.e. zeroth post-Newtonian) contributions proportional to $\Phi$, as given in Eq. (\ref{h2}). The inclusion of the higher order corrections in the external potential modifies this simple picture, due 1) to the need to introduce a cutoff regulator in the computation of the Fourier transform of the higher powers of the Newtonian potential and 2) to the presence of the Schwarzschild radius $r_s$ in the actual expansion. These features emerge already at the first post-Newtonian order (1PN) for an uncharged black hole and at order 0PN for the Reissner-Nordstrom (RN) metric (charged black hole). \\ Both points 1) and 2) are, in a way, expected, since the microscopic expression for the transition matrix element given by (\ref{volume}) cannot be extrapolated to the case of a macroscopic source, with the presence of a macroscopic scale such as the black hole horizon. This seems to indicate that the success of the Newtonian approximation is essentially due to the rescaling of $r_s$ found in the expression of the cross section, which is a feature of this specific order, and is therefore limited to a $1/r$ potential. It is then natural to ask if there is any other realistic case in which the post-Newtonian corrections can be included in an analysis of this type. Obviously, the answer is affirmative, as long as we require that $r_s$ is microscopic and that the energy of the beam, which is an independent variable of a scattering event, is at most of the order of $1/r_s$.
Under these conditions, we are then allowed to extend our analysis through higher orders in $\Phi$, with scatterings in which the dimensionless parameter $r_s q$, with $q$ the momentum transfer, is at most of $\mathcal{O}(1)$. This specific situation is encountered in the case of primordial black holes, where $r_s$ can be microscopic, and we are going to illustrate it in some detail below. \subsection{Post Newtonian contributions in classical GR} To illustrate this point we extend the expansion of the Schwarzschild metric beyond the 0PN order given in (\ref{SCH3}). A similar expansion will be performed on the RN metric. \\ For this purpose, it is convenient to perform a change of coordinates on the Schwarzschild metric \begin{eqnarray} \label{ds} ds^2=\left(1-\frac{2\,G\,M}{r}\right)dt^2-\left(1-\frac{2\,G\,M}{r}\right)^{-1}dr^2-r^2d\Omega \end{eqnarray} in such a way that it takes an isotropic form. The radial change of coordinates is given by \begin{eqnarray} r=\rho\left(1+\frac{G\,M}{2\rho}\right)^2 \end{eqnarray} which allows us to rewrite (\ref{ds}) as \begin{eqnarray} ds^2=A(\rho)dt^2-B(\rho)(d\rho^2+\rho^2\,d\Omega) \end{eqnarray} with \begin{eqnarray} \label{ABsch} A(\rho)=\frac{(1-G\,M/2\rho)^2}{(1+G\,M/2\rho)^2}\qquad\qquad B(\rho)=(1+G\,M/2\rho)^4 \,. \end{eqnarray} Post-Newtonian (weak field) corrections can be obtained by an expansion of $A$ and $B$ taking $M/\rho\ll1$. Up to third order in $\Phi$ this is given by \begin{eqnarray} &&A(\rho)=1+2\,\Phi+2\,\Phi^2+\frac{3}{2}\,\Phi^3\\ &&B(\rho)=1-2\,\Phi+\frac{3}{2}\,\Phi^2-\frac{1}{2}\,\Phi^3. \end{eqnarray} In the RN spacetime for a charged black hole the analysis runs along similar lines. The interest in this metric is due to the fact that the lowest order potential, in this case, involves charge-dependent $1/r^2$ contributions which, for an uncharged black hole, appear at first post-Newtonian order (1PN). The metric, in this case, is given by the expression \begin{eqnarray} \label{RN} ds^2=\left(1-\frac{2\,G\,M}{r}+\frac{G\,Q^2}{r^2}\right)dt^2-\left(1-\frac{2\,G\,M}{r}+\frac{G\,Q^2}{r^2}\right)^{-1}dr^2-r^2d\Omega, \end{eqnarray} with $Q$ denoting the overall charge of the black hole. It has two concentric horizons which become degenerate in the maximally charged case. The two horizons are the solutions of the equation \begin{eqnarray} \left(1-\frac{2\,G\,M}{r}+\frac{G\,Q^2}{r^2}\right)=0 \end{eqnarray} given by $r=G\,M\pm\sqrt{G^2M^2-G\,Q^2}$. The RN black hole has a maximum allowed charge $Q=M\sqrt G$, in order to avoid a naked singularity. In this case, the radial change of variables which brings the metric into isotropic form is given by \begin{eqnarray} r=\rho\left(1+\frac{G\,M+\sqrt G\,Q}{2\rho}\right)\left(1+\frac{G\,M-\sqrt G\,Q}{2\rho}\right), \end{eqnarray} so that the RN spacetime in isotropic coordinates is \begin{align} \label{RNiso} ds^2=&\frac{\left(1-\frac{G^2\,M^2-G\,Q^2}{4\rho^2}\right)^2}{\left(1+\frac{G\,M+\sqrt G\,Q}{2\rho}\right)^2\left(1+\frac{G\,M-\sqrt G\,Q}{2\rho}\right)^2}dt^2\nonumber\\ &-\left(1+\frac{G\,M+\sqrt G\,Q}{2\rho}\right)^2\left(1+\frac{G\,M-\sqrt G\,Q}{2\rho}\right)^2(d\rho^2+\rho^2\,d\Omega).
\end{align} We just recall that for a massless particle in this metric background the angle of deflection and the impact parameter are given by the expressions \begin{eqnarray} &&\alpha(r_0)=2\,\int^\infty_{r_0}\frac{dr}{r\sqrt{\frac{r^2}{r_0^2}\left(1-\frac{2\,GM}{r_0}+\frac{GQ^2}{r_0^2}\right)-\left(1-\frac{2\,GM}{r}+\frac{GQ^2}{r^2}\right)}}-\pi\\ &&b(r_0)=\frac{r_0}{\sqrt{1-\frac{2\,GM}{r_0}+\frac{GQ^2}{r_0^2}}} \end{eqnarray} where $r_0$ is the closest distance of approach. It is convenient to normalize $r$, $r_0$ and $Q$ to the Schwarzschild radius $r_s=2\, G M$ and introduce the variables \begin{eqnarray} x=\frac{r}{2\, G M}\qquad x_0=\frac{r_0}{2\, G M}\qquad q=\frac{Q}{2\, G M}. \end{eqnarray} With these redefinitions the deflection can be expressed in the form \cite{Eiroa:2002mk} \begin{eqnarray} \alpha(x_0)=G(x_0)\,\mathrm{F}(\phi_0, \lambda)-\pi \label{elli} \end{eqnarray} with \begin{eqnarray} G(x_0)=\frac{4\,x_0}{\sqrt{1-\frac{1}{x_0}+\frac{q^2}{x_0^2}}}\frac{1}{\sqrt{(r_1-r_3)(r_2-r_4)}} \end{eqnarray} and with \begin{eqnarray} \mathrm{F}(\phi_0,\lambda)=\int_0^{\phi_0}(1-\lambda\,\sin^2\phi)^{-1/2}d\phi \end{eqnarray} being an elliptic integral of the first kind with arguments \begin{eqnarray} &&\phi_0=\arcsin \sqrt{\frac{r_2-r_4}{r_1-r_4}}\\ &&\lambda=\frac{(r_1-r_4)(r_2-r_3)}{(r_1-r_3)(r_2-r_4)}. \end{eqnarray} The $r_i$ are the roots of the fourth order polynomial \begin{eqnarray} P(x)=x^4+\frac{x_0^2}{1-\frac{1}{x_0}+\frac{q^2}{x_0^2}}\,(x-x^2-q^2) \end{eqnarray} ordered so that $r_1>r_2>r_3>r_4$. The comparison between the Schwarzschild and RN deflection angles is shown in Figure \ref{schRN}. The plots describe the behaviour of the angular deflection as a function of the impact parameter $b_h$ for a RN and a Schwarzschild metric in the region with $10 < b_h < 50$ (top left) and $4 < b_h < 10$ (top right) for the maximally charged case. The differences tend to be very pronounced as we approach the horizon of the Schwarzschild metric.\\ As pointed out in \cite{Amore:2006pi} in the Schwarzschild case, the $1/b$ expansion for the deflection angle does not reproduce the photon sphere singularity of the Schwarzschild metric, which is captured by the exact GR expression in terms of elliptic functions given in (\ref{elli}), but it represents nevertheless an improvement with respect to the $0PN$ order. Expanding the RN metric for $M/\rho\ll1$ up to the third order, the $2PN$ approximation gives \begin{eqnarray} &&A(\rho)=1-\frac{2\,G\,M}{\rho}+\frac{2\,G^2\,M^2+G\,Q^2}{\rho^2}-\frac{3\,G^3\,M^3+5\,G^2\,M\,Q^2}{2\,\rho^3}\\ &&B(\rho)=1+\frac{2\,G\,M}{\rho}+\frac{3\,G^2\,M^2-G\,Q^2}{2\,\rho^2}+\frac{G^3\,M^3-G^2\,M\,Q^2}{2\,\rho^3}. \end{eqnarray} Inserting this expansion into the deflection integral, we can account in a systematic way for the $1/b$ corrections to the angle of deflection $\alpha$ \begin{eqnarray} \label{alphaRN} \alpha(b)=4\,\frac{G\,M}{b}+\left(5-\frac{G\,Q^2}{M^2}\right)\frac{3\pi}{4}\,\frac{G^2M^2}{b^2}+\left(\frac{128}{3}-16\frac{G\,Q^2}{M^2}\right)\,\frac{G^3M^3}{b^3}. \end{eqnarray} The deflection (\ref{alphaRN}) in the maximally charged case is given by the expression \begin{eqnarray} \alpha^{\textit{m.c.}}=4\,\frac{GM}{b}+3\pi\,\frac{(GM)^2}{b^2}+\frac{80}{3}\,\frac{(GM)^3}{b^3}. \end{eqnarray} In the next subsection we are going to illustrate how the inclusion of these expansions at nPN order affects the computation of the quantum corrections to the angular deflection.
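Before doing so, we note that the exact representation (\ref{elli}) is itself straightforward to evaluate numerically. A minimal sketch (in Python, assuming numpy and scipy), valid for a closest approach $x_0$ outside the photon sphere, where all four roots of $P(x)$ are real:
\begin{verbatim}
# Sketch: exact deflection from the elliptic-integral representation (elli),
# for the Schwarzschild (q = 0) and Reissner-Nordstrom (q != 0) metrics.
# x0 = r_0/(2GM); q is the dimensionless charge parameter entering
# 1 - 1/x + q^2/x^2  (q = 1/2 in the maximally charged case).
import numpy as np
from scipy.special import ellipkinc     # incomplete elliptic integral F(phi, m)

def deflection(x0, q=0.0):
    metric = 1.0 - 1.0/x0 + q**2/x0**2
    c = x0**2 / metric
    # roots of P(x) = x^4 - c x^2 + c x - c q^2, ordered r1 > r2 > r3 > r4
    r1, r2, r3, r4 = np.sort(np.roots([1.0, 0.0, -c, c, -c*q**2]).real)[::-1]
    G    = 4.0*x0 / (np.sqrt(metric) * np.sqrt((r1 - r3)*(r2 - r4)))
    phi0 = np.arcsin(np.sqrt((r2 - r4)/(r1 - r4)))
    lam  = (r1 - r4)*(r2 - r3) / ((r1 - r3)*(r2 - r4))
    return G * ellipkinc(phi0, lam) - np.pi

for x0 in (2.0, 3.0, 5.0, 10.0):
    print(x0, deflection(x0), deflection(x0, q=0.5))
\end{verbatim}
As $x_0$ approaches the photon sphere the two largest roots coalesce, so that $\lambda\to 1$ and $\phi_0\to\pi/2$, and the elliptic integral diverges, reproducing the photon sphere singularity mentioned above.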
The corrections are embodied in a geometric form factor whose expression is entirely controlled by the $1/b$ expansion. \begin{figure}[t] \centering \subfigure[]{\includegraphics[scale=.5]{plots/maxcharg.pdf}}\hspace{.5cm} \subfigure[]{\includegraphics[scale=.5]{plots/singRN.pdf}} \caption{Comparison of the deflection angle for the Schwarzshild case and the maximally charged Reissner-Nordstrom case in the near-horizon (a), in the very-near-horizon (b).\label{schRN}} \end{figure} \subsection{Quantum effects at 2nd PN order} The inclusion of the PN corrections to the external background requires a recalculation of the cross section, with the inclusion of the additional terms in the fluctuation of the metric in momentum space. As usual we consider a static source, so that the metric is written as \begin{equation} h_{\mu\nu}(q)=2\pi\delta(q_0) h_{\mu\nu}(\vec{q}\,). \end{equation} At leading order in the external field $\Phi$ both the timelike and the spacelike components are equal ( $h_{00}\equiv h_{ii}$), while at higher orders they are expressed in terms of two form factors $h_0$ and $h_1$ \begin{equation} h_{\mu\nu}(\vec{q}\,)=h_0(\vec{q}\,) \delta_{0\mu}\delta_{0\nu}+h_1(\vec{q}\,)\bigl( \eta_{\mu\nu}-\delta_{0\mu}\delta_{0\nu} \bigr), \end{equation} which at higher order in the weak external field are given by \begin{equation} \label{eq:ffh0h1} \begin{split} h_0(\vec{q}\,)&=-\frac{2}{\kappa} \int d^3\vec{x}\, \biggl[ \frac{\Phi}{c^2} + \biggl( \frac{\Phi}{c^2} \biggl)^{\!\!2} + \frac{3}{4} \biggl( \frac{\Phi}{c^2} \biggl)^{\!\!3\,} \biggr] \rm{e}^{i \vec{q} \cdot \vec{x}} \\ h_1(\vec{q}\,)&= -\frac{2}{\kappa} \int d^3\vec{x}\, \biggl[ -\frac{\Phi}{c^2} +\frac{3}{4} \biggl( \frac{\Phi}{c^2} \biggl)^{\!\!2} - \frac{1}{4} \biggl( \frac{\Phi}{c^2} \biggl)^{\!\!3\,} \biggr] \rm{e}^{i \vec{q} \cdot \vec{x}}, \end{split} \end{equation} where we have explicitly reinstated the dependence on the speed of light. Below we will conform to our previous notations in natural units, with $c=1$. \begin{itemize} \item{\bf \large Neutrinos} \end{itemize} The computation, at this stage, follows rather closely the approach of the previous sections, giving for the averaged squared matrix element in the neutrino case \begin{equation} \left|iS_{fi}\right|^2=\frac{\kappa^2}{16 V^2 E_1 E_2} 2\pi \delta(q_0) \,\mathcal{T} \,\frac{1}{2} \, \rm{tr}\biggl[ \ds{p}_2 V^{\mu\nu}(p_1,p_2) \ds{p}_1 V^{\rho\sigma}(p_1,p_2) \biggr] h_{\mu\nu}(\vec{q}\,) h_{\rho\sigma}(\vec{q}\,), \end{equation} with $\mathcal{T}$ being the time of the transition, and the differential cross section \begin{equation} d \sigma=\frac{d \mathcal{W}}{\mathcal{T}}=\frac{\left|S_{fi}\right|^2}{j_i \mathcal{T}}dn_f. \end{equation} We have denoted with $d n_f$ the density of final states in the transition amplitude, and with $j_i$ the incoming flux density. After integration over the final states, and using $|\vec{p}_1|=|\vec{p}_2|$, we obtain the expression \begin{equation} \left.\frac{d\sigma}{d\Omega}\right|_{\rm{2PN}}^{(0)}=\frac{\kappa^2}{16 \pi^2} E^4 \cos^2\frac{\theta}{2} F_g(q)^2, \end{equation} where we have introduced the gravitational form factor of the external source \begin{equation} \label{fg} F_g(q)\equiv \Bigl( h_0(\vec{q}\,)-h_1(\vec{q}\,) \Bigr). \end{equation} Notice the complete analogy between the corrections coming from a distributed source charge, for a potential scattering in quantum mechanics, and the gravity case. 
In the evaluation of $F_g$ in momentum space we are forced to introduce a cutoff $\Lambda$, since the Fourier transforms of the cubic contributions in $\Phi$ are divergent. The singularity is generated by the integration around the region $r\sim 0$ in the Fourier transform of the potential. The relevant integrals in this case are given by \begin{equation} \begin{split} I_n&=\int d^3\vec{x}\,\, \frac{1}{{|\vec {x}|}^n} \rm{e}^{i\vec{q}\cdot \vec{x}}\\ \end{split} \end{equation} with \begin{equation} I_1=\frac{4\pi}{\vec{q}\,^2}, \qquad I_2 = \frac{2\pi^2}{|\vec{q}|}, \end{equation} and with $I_3$ requiring a regularization with an ultraviolet cutoff in space ($\Lambda$) \begin{equation} I_3 = \frac{4\pi}{|\vec{q}\,| } \int_\Lambda^{\infty} dr\,\, \frac{\sin(|\vec{q}\,| r)}{r^2}.\\ \end{equation} The choice of $\Lambda$ is dictated by simple physical considerations. Given the fact that consistency of the expansion requires that $r_s q \lesssim \mathcal{O}(1)$, it is clear that the appropriate choice for the regulator is to identify it with the Schwarzschild radius, i.e. $\Lambda\sim r_s$. Expressed in terms of the cutoff, we obtain for the geometric form factors the expressions \begin{equation} \begin{split} h_0(\vec{q}\,)&=-\frac{2}{\kappa} \biggl[ - \frac{4 \pi}{|\vec{q}\,|^2}GM + \frac{2\pi^2}{|\vec{q}\,|}(GM)^2 - \frac{3}{4} \frac{4\pi}{|\vec{q}\,|}\biggl( \frac{\sin(\Lambda |\vec{q}\,|)}{\Lambda} - |\vec{q}\,| \rm{Ci}(\Lambda |\vec{q}\,| )\biggl)(GM)^{3} \biggr] \\ h_1(\vec{q}\,)&= -\frac{2}{\kappa} \biggl[ \frac{4 \pi}{|\vec{q}\,|^2}GM +\frac{3}{4} \frac{2\pi^2}{|\vec{q}\,|}(GM)^2 + \frac{1}{4} \frac{4\pi}{|\vec{q}\,|}\biggl( \frac{\sin(\Lambda |\vec{q}\,|)}{\Lambda} - |\vec{q}\,| \rm{Ci}(\Lambda |\vec{q}\,|) \biggl)(GM)^{3} \biggr] , \\ \end{split} \end{equation} where we have denoted by $\rm{Ci}$ the cosine integral function \begin{equation} \rm{Ci}(x)= \int_{\infty}^x dt\, \frac{\cos t}{t}. \end{equation} From the previous equations we obtain the cross section \begin{equation} \left.\frac{d\sigma}{d\Omega}\right|_{\rm{2PN}}^{(0)}=\frac{1}{4 \pi^2}E^4 \cos^2\frac{\theta}{2} \biggl[ \frac{8 \pi}{|\vec q\,|^2}GM - \frac{\pi^2}{2|\vec{q}\,|}(GM)^2 + \frac{4\pi}{|\vec{q}\,|} \biggl( \frac{\sin(\Lambda |\vec{q}\,|)}{\Lambda} - |\vec{q}\,| \mathrm{Ci}(\Lambda |\vec{q}\,|) \biggl)(GM)^{3} \biggr]^2, \end{equation} which is valid at Born level and includes the weak field corrections up to the third order in $\Phi$. In the expression of the cross sections, we use the subscript nPN, with $n=0,1,2$, to indicate an $n$-th order expansion of the metric in the gravitational potential, while the superscripts ((0), (1) and so on) label the perturbative order in $\alpha_w$. The leading order cross section at order 2PN, for instance, takes the form \begin{equation} \left.\frac{d\sigma}{d\Omega}\right|_{\rm{2PN}}^{(0)}=\left.\frac{d\sigma}{d\Omega}\right|_{\rm{0PN}}^{(0)}\,\mathcal{PN}_2(E,\theta) , \end{equation} with \begin{align} \mathcal{PN}_2(E, \theta)\equiv&\biggl[1-\frac{\pi}{8}(GM)\,E\sin\frac{\theta}{2}+\frac{1}{2}(GM)^2\,E \sin\frac{\theta}{2}\biggl( \frac{1}{\Lambda}\sin\left(2\,\Lambda\,E\sin\frac{\theta}{2}\right)\nonumber\\ &- 2E\sin\frac{\theta}{2} \mathrm{Ci}\biggl(2\,\Lambda\,E\sin\frac{\theta}{2}\biggr)\biggr)\biggr]^2, \label{PN} \end{align} where we have factorized the tree level result ${d\sigma}/{d\Omega}|_{\rm{0PN}}^{(0)}$ given in (\ref{leading}).
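The factor $\mathcal{PN}_2$ is elementary to evaluate once the cutoff is fixed at $\Lambda=r_s=2GM$; a minimal numerical sketch (in Python, assuming numpy and scipy, and working in natural units in which $GM$ has dimensions of an inverse energy):
\begin{verbatim}
# Sketch: the post-Newtonian factor PN_2(E, theta) of Eq. (PN), with the cutoff
# identified with the Schwarzschild radius, Lambda = r_s = 2 G M.
import numpy as np
from scipy.special import sici            # sici(x) returns (Si(x), Ci(x))

def PN2(E, theta, GM, Lam=None):
    """Post-Newtonian form factor multiplying the 0PN Born cross section."""
    Lam = 2.0*GM if Lam is None else Lam
    y = np.sin(theta/2.0)
    ci = sici(2.0*Lam*E*y)[1]
    bracket = (1.0 - (np.pi/8.0)*GM*E*y
               + 0.5*(GM**2)*E*y*(np.sin(2.0*Lam*E*y)/Lam - 2.0*E*y*ci))
    return bracket**2

# illustrative values for a microscopic horizon, with G M E of order one
for GME in (0.1, 0.5, 1.0):
    print(GME, PN2(E=GME, theta=np.pi/2.0, GM=1.0))
\end{verbatim}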
The post-Newtonian form factor $\mathcal{PN}_2(E, \theta)$ induces an energy dependence of the cross section which is unrelated to the electroweak corrections. The analysis, in fact, can be extended at one loop in the electroweak theory. In this case, a lengthy computation gives the 2PN result \begin{equation} \frac{d\sigma}{d\Omega}\biggr|_{\rm{2PN}}^{\rm{(1)}} =\frac{d\sigma}{d\Omega }\biggr|_{\rm{0PN}}^{(0)} \left[ 1 + \frac{4 G_F}{16 \pi^2\sqrt{2}} \left( f_W^1(E,\theta) + f_Z^1(E,\theta) - \frac{1}{4} \Sigma^L_W - \frac{1}{4} \Sigma^L_Z \right) \right]\,\mathcal{PN}_2(E, \theta), \label{third} \end{equation} where we have inserted the one loop expression given in \eqref{sigmaOL}.\\ We can obtain an explicit solution of the corresponding semiclassical equation at order 1PN. Using the expression of the $\mathcal{PN}$ function at this order \begin{eqnarray} \mathcal{PN}_1(E, \theta)&\equiv&\biggl[1- \frac{\pi}{8}(GM)\,E\sin\frac{\theta}{2}\biggr]^2 \end{eqnarray} on the right hand side of (\ref{PN}) in order to generate the 1PN cross section at Born level, and solving the corresponding semiclassical equation (\ref{semic}) we obtain \begin{align} \label{nbpn} \left.b^2\right|_{\rm{1PN}}^{(0)}&=4\,(GM)^2\Big(-1+\csc^2\frac{\alpha}{2}+2\ln\sin\frac{\alpha}{2}\Big) + E\, (GM)^3\pi\Big(4+(\cos\alpha-3)\csc\frac{\alpha}{2}\Big)\nonumber\\ &-\frac{1}{32}E^2(GM)^4\pi^2\Big(1+\cos\alpha+4\ln\sin\frac{\alpha}{2}\Big). \end{align} At this point, we can invert Eq.~(\ref{nbpn}) for $\alpha(b)$ obtaining \begin{align} \left.\alpha\right|_{\rm{1PN}}^{(0)}&=\frac{2}{b_h}- \frac{\pi}{2}\frac{1}{b^2_h}E\,(GM)-\frac{1}{b^3_h}\Big(\ln b_h\,\Big(2-\frac{\pi^2}{32}\,E^2(GM)^2\Big)+\frac{2}{3}- E\,(GM)\,\pi\nonumber\\ &-\frac{3\pi^2}{64}\,E^2(GM)^2\Big) + \mathcal O(b_h^4) \end{align} for the tree level post Newtonian one.\\ For the Reissner-Nordstrom geometry the situation is similar. The post-Newtonian form factor is then given by \begin{align} \left.\mathcal{PN}(E, \theta)\right|^{RN}=&\left[1- \frac{\pi}{8}(GM)\big(1+3\frac{Q^2}{G\,M^2}\big)\,E\sin\frac{\theta}{2}\right.\nonumber\\ &\left.+(GM)^2\big(1+\frac{Q^2}{G\,M^2}\big)\,E \sin\frac{\theta}{2}\biggl( \frac{1}{\Lambda}\sin\left(2\,\Lambda\,E\sin\frac{\theta}{2}\right) - 2E\sin\frac{\theta}{2} \mathrm{Ci}\biggl(2\,\Lambda\,E\sin\frac{\theta}{2}\biggr)\biggr)\right]^2, \label{PNrn} \end{align} and the impact parameter in the 1PN approximation is \begin{align} \label{nbpnRN} \left.b^2\right|_{\rm{1PN}}^{(0)\,RN}&=4\,(GM)^2\Big(-1+\csc^2\frac{\theta}{2}+2\ln\sin\frac{\theta}{2}\Big)+E\, (GM)^3\big(1+3\frac{Q^2}{G\,M^2}\big)\pi\Big(4+(\cos\theta-3)\csc\frac{\theta}{2}\Big)\nonumber\\ &-\frac{1}{32}E^2(GM)^4\big(1+3\frac{Q^2}{G\,M^2}\big)^2\pi^2\Big(1+\cos\theta+4\ln\sin\frac{\theta}{2}\Big) \end{align} The inversion formula in this case is \begin{align} \left.\alpha\right|_{\rm{1PN}}^{(0)\,RN}=&\frac{2}{b_h}-\frac{\pi}{2}\frac{1}{b^2_h}E\,(GM)\big(1+3\frac{Q^2}{G\,M^2}\big)-\frac{1}{b^3_h}\Big(\ln b_h\,\Big(2-\frac{\pi^2}{32}\,E^2(GM)^2\big(1+3\frac{Q^2}{G\,M^2}\big)^2\Big)\nonumber\\ &+\frac{2}{3}-E\,(GM)\big(1+3\frac{Q^2}{G\,M^2}\big)\,\pi-\frac{3\pi^2}{64}\,E^2(GM)^2\big(1+3\frac{Q^2}{G\,M^2}\big)^2\Big)+ \mathcal O(b_h^4). \end{align} \begin{itemize} \item{\bf \large Photons} \end{itemize} We can extend the analysis presented above for neutrinos to the photon case. 
Here the cross section takes the form \begin{equation} \left.\frac{d\sigma}{d\Omega}\right|_{\gamma,\rm{2PN}}^{(0)}=\frac{\kappa^2}{16\pi^2} E^4 \cos^4\frac{\theta}{2} F_g(q)^2 \end{equation} and, as in the neutrino case, we have \begin{equation} \left.\frac{d\sigma}{d\Omega}\right|_{\gamma,\rm{2PN}}^{(0)}=\left.\frac{d\sigma}{d\Omega}\right|_{\gamma,\rm{0PN}}^{(0)}\,\mathcal{PN}_2(E, \theta) \end{equation} where we inserted the tree level cross section for the photon \begin{equation} \left.\frac{d\sigma}{d\Omega}\right|_{\gamma,\rm{0PN}}^{(0)}= (GM)^2 \cot^4 \frac{\theta}{2} \,. \end{equation} In the 0PN Newtonian limit, this cross section has been computed in \cite{Coriano:2014gia}, and takes the form \begin{eqnarray} \label{phDCS} \left.\frac{d \sigma}{d \Omega}\right|_{\gamma,\rm{0PN}}^{\rm{(1)}}=\left.\frac{d\sigma}{d\Omega}\right|_{\gamma,\rm{0PN}}^{(0)}\left\{1+2\left[ \sum_{{f_k}} N^c_{f_k} F^3_{f_k}(E, \theta, m_{f_k}, Q_{f_k})+F^3_W(E, \theta)\right]\right\} \end{eqnarray} where \begin{align} F^3_{f_k}(E, \theta)=&\frac{1}{36}\frac{\alpha_w}{\pi}\frac{Q^2_{f_k}}{E^2}(35\,E^2-39\,m^2_{f_k}\csc^2\theta/2)\nonumber\\ &-\frac{1}{12}\frac{\alpha_w}{\pi}\frac{Q^2_{f_k}}{E^2}(4\,E^2-5\,m^2_{f_k}\csc^2\theta/2)\sqrt{1+m^2_{f_k}\frac{\csc^2\theta/2}{E^2}}\log\left(\frac{1+\sqrt{1+m^2_{f_k}\frac{\csc^2\theta/2}{E^2}}}{-1+\sqrt{1+m^2_{f_k}\frac{\csc^2\theta/2}{E^2}}}\right)\nonumber\\ &+\frac{1}{16}\frac{\alpha_w}{\pi}\frac{m^2_{f_k}Q^2_{f_k}}{E^4}\csc^4\theta/2\left(E^2\cos\theta-E^2+m^2_{f_k}\right)\log^{2}\left(\frac{1+\sqrt{1+m^2_{f_k}\frac{\csc^2\theta/2}{E^2}}}{-1+\sqrt{1+m^2_{f_k}\frac{\csc^2\theta/2}{E^2}}}\right) \end{align} and \begin{align} F^3_W(E, \theta)=&-\frac{1}{24}\frac{\alpha_w}{\pi}\frac{1}{E^2}(125\,E^2-39\,m^2_W\csc^2\theta/2)\nonumber\\ &+\frac{1}{8}\frac{\alpha_w}{\pi}\frac{1}{E^2}(14\,E^2-5\,m^2_W\csc^2\theta/2)\sqrt{1+m^2_W\frac{\csc^2\theta/2}{E^2}}\log\left(\frac{1+\sqrt{1+m^2_W\frac{\csc^2\theta/2}{E^2}}}{-1+\sqrt{1+m^2_W\frac{\csc^2\theta/2}{E^2}}}\right)\nonumber\\ &-\frac{1}{32}\frac{\alpha_w}{\pi}\frac{1}{E^4}\left(16\,E^4-16\,E^2\,m^2_W\csc^2\theta/2+3\,m_W^4\csc^4\theta/2\right)\log^2\left(\frac{1+\sqrt{1+m^2_W\frac{\csc^2\theta/2}{E^2}}}{-1+\sqrt{1+m^2_W\frac{\csc^2\theta/2}{E^2}}}\right) \end{align} are the relevant electroweak form factors entering in the computation. In the previous equations the sum $f_k$ is over all Standard Model fermions, with $m_{fk}$ and $Q_{f_k}$ their masses and charges. $N^c_{f_k}$ is 1 for leptons and 3 for quarks. Proceeding similarly to the neutrino case, the one loop cross section in the 2PN approximation takes the form \begin{equation} \left.\frac{d \sigma}{d \Omega}\right|_{\gamma,\rm{2PN}}^{\rm{(1)}}=\left.\frac{d \sigma}{d \Omega}\right|_{\gamma,\rm{0PN}}^{\rm{(1)}}\,\mathcal{PN}_2(E, \theta), \label{twop} \end{equation} with $\mathcal{PN}_2$ given by (\ref{PN}), which can be inserted again in (\ref{semic}) and investigated numerically. Solving at order 1PN the analogous of (\ref{twop}), the solution of (\ref{semic}) gives \begin{align} \label{pbpn} \left.b^2\right|_{\gamma,\rm{1PN}}^{(0)}&=2\,(GM)^2\Big(-1+2\,\csc^2\frac{\alpha}{2}+\cos\alpha+8\ln\sin\frac{\alpha}{2}\Big)-\frac{2}{3}E\,(GM)^3\pi\Big(1+3\,\csc\frac{\alpha}{2}\Big)\Big(\cos\frac{\alpha}{4}-\sin\frac{\alpha}{4}\Big)^6\nonumber\\ &-\frac{1}{256}E^2(GM)^4\pi^2\Big(11+12\cos\alpha+\cos2\alpha+32\ln\sin\frac{\alpha}{2}\Big). 
\end{align} In the photon case the inversion formulae at orders 0PN and 1PN are given by \begin{eqnarray} \left.\alpha\right|_{\gamma, \rm{0PN}}^{(0)}=\frac{2}{b_h}-\frac{1}{b_h^3}\Big(4\,\ln b_h-\frac{1}{3}\Big)-\frac{1}{b_h^5}\Big(12\,\ln^2 b_h+10\,\ln b_h+\frac{17}{20}\Big) + \mathcal O(b_h^7) \end{eqnarray} and \begin{align} \left.\alpha\right|_{\gamma, \rm{1PN}}^{(0)}=&\frac{2}{b_h}-\frac{1}{b_h^2}\frac{\pi}{2}E\,(GM)-\frac{1}{b_h^3}\Big(\ln b_h\big(4-\frac{1}{16}\,\pi^2E^2(GM)^2\big)-\frac{1}{64}\,\pi^2E^2(GM)^2\nonumber\\ &-\frac{4}{3}\,\pi\,E\,(GM)-\frac{1}{3}\Big) + \mathcal O(b_h^4) \end{align} respectively. \subsection{Range of applicability} The structure of the one-loop 2PN result for neutrinos and photons shows the complete factorization between the quantum corrections and the background-dependent contributions. While the former are process dependent, the latter are general. Obviously, this result is not unexpected, and follows rather closely other typical cases in potential scattering in quantum mechanics. An example is the case of an electron scattering off a finite charge distribution characterized by a geometrical size $R$, where the finite size corrections are all contained in a geometric form factor. \\ We recall that for a Coulomb interaction of the form $V( r )=e^2/r$, the cross section is given in terms of the pointlike $(p)$ amplitude \begin{equation} f(\theta)_{\textrm{p}} =- 2 \frac{m e^2}{\vec{q}^{\,2}} \label{point} \end{equation} with $\vec{q}= \vec{k} - \vec{k}'$ the momentum transfer, $|\vec{q}|=2 |\vec{k}|\sin\theta/2$, and $\vec{k}$ ($\vec{k}'$) the initial (final) momentum of the electron, of charge $e$. The scattering angle is measured with respect to the z-direction of the incoming electron. The charge of the static source has also been normalized to $e$. The corresponding cross section is given by \begin{equation} \frac{d\sigma}{d\Omega}_\textrm{p}=|f(\theta)_{\textrm{p}}|^2= \frac{(2 m) ^2 e^4}{16 k^4 \sin^4\theta/2}, \end{equation} and the modification induced by the size of the charge distribution $(\rho(x))$ is contained in \begin{equation} F( \vec{q})=\int d \vec{x}\rho( \vec{x}) e^{i \vec{q}\cdot \vec{x}} \end{equation} with \begin{equation} \frac{d\sigma}{d\Omega}=\frac{d\sigma}{d\Omega}_\textrm{p}|F( q)|^2 . \end{equation} For a uniform charge density, for instance, the geometrical form factor $F(\vec{q})$, which is the transform of the charge distribution, introduces a dimensionless variable $ q R$ in the cross section, absent in the point-like (Coulomb) case, and takes the form \begin{equation} F(q)=3 \frac{\sin(q\, R) - q\, R \cos(q\, R)}{(q\, R)^3}. \label{coulomb} \end{equation} The expression above is valid for $q\, R\lesssim 1$, and the presence of the geometrical form factor is responsible for the fluctuations measured in the cross section as a result of the finite extension of the charged region. \\ In the analysis of the nPN corrections in gravity, the situation is clearly analogous, with the size of the horizon taking the role of the classical charge radius $R$. For ordinary (macroscopic) horizons (e.g. of a km size) $r_s\sim G M$ invalidates the perturbative expansion due to the appearance of the $G M E$ parameter in the expression of the post Newtonian factor $\mathcal{P N}(E,\theta)$, which is small only if $E\lesssim 1/(GM)$, a choice which is not relevant for our analysis, since it applies to particle beams whose energy is in the very far infrared.
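In the opposite regime of microscopic horizons, where $G M E$ can be of order one, the inversion formulae above are easily evaluated; a minimal sketch (in Python, assuming numpy), anticipating the primordial black hole discussion below:
\begin{verbatim}
# Sketch: photon deflection alpha(b_h) at Born level in the 0PN and 1PN
# approximations, with b_h = b/(2GM) and x = G M E.  The 1PN expression
# reduces to the 0PN one (to the order kept) as x -> 0.
import numpy as np

def alpha_photon_0PN(bh):
    L = np.log(bh)
    return 2/bh - (4*L - 1/3.0)/bh**3 - (12*L**2 + 10*L + 17/20.0)/bh**5

def alpha_photon_1PN(bh, x):
    L = np.log(bh)
    return (2/bh - (np.pi/2.0)*x/bh**2
            - (L*(4.0 - (np.pi**2/16.0)*x**2) - (np.pi**2/64.0)*x**2
               - (4.0/3.0)*np.pi*x - 1/3.0)/bh**3)

for bh in (20.0, 50.0, 100.0):
    print(bh, alpha_photon_0PN(bh), alpha_photon_1PN(bh, x=1.0))
\end{verbatim}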
\\ \begin{figure}[t] \centering \subfigure[]{\includegraphics[scale=.7]{plots/miniBH.pdf}}\hspace{.5cm} \subfigure[]{\includegraphics[scale=.7]{plots/pnfunc.pdf}}\hspace{.5cm} \subfigure[]{\includegraphics[scale=.7]{plots/pnfunc1.pdf}}\hspace{.5cm} \caption{Comparison of nPN approximations for $\alpha(b)$ in the photon case with $\rm{M}_{PBH}=10^{-16}\,\rm{M}_\odot$ and for $E G M \approx 1$ (a). In (b) and (c) we show the $\mathcal{PN}$ function for different energies.\label{pnPBH}} \end{figure} By imposing that the cutoff $\Lambda$ coincides with the Schwarzschild radius ($\Lambda\equiv r_s$), one can immediately realize that the post-Newtonian expansion is organized only in terms of this parameter ($G M E$). In the regions of strong deflections, which are those that concern our analysis, we can reasonably assume that $y\equiv \sin\theta/2\sim \mathcal{O}(1)$, if we use the GR prediction to estimate the bending angle. This allows us to discuss the convergence of the PN expansion only in terms of the energy $E$ of the incoming beam and of the size of the horizon. The analogue of the charge oscillations given by (\ref{coulomb}), in the gravitational case, is then uniquely related to the post-Newtonian function $\mathcal{PN}$, and hence to the size of the parameter $ \Lambda E\sim r_s E$ which defines its expansion in powers of the gravitational potential. Assuming a small value of $x\equiv G M E$, we can indeed rewrite (\ref{PN}) via a small-$x$ expansion, obtaining \begin{equation} \mathcal{PN}(x,y)= 1 -\frac{\pi}{8} x y + x^2 y^2\left( 1 -\gamma_E - \log x - \log 4 y\right). \end{equation} This expression can be used to investigate the range of applicability of these corrections in terms of the two factors appearing in $x$, the energy of the incoming beam and the size of the horizon of the gravitational source. The requirement that such a parameter be small defines a unique range of applicability of such corrections in the quantum case. \\ One possible application of the formalism, which renders the PN corrections to the gravitational scattering quite sizable, is in the context of primordial black holes \cite{Carr:1974nx}, which have found a renewed interest in the current literature \cite{Carr:2014eya,Khlopov:2008qy}. \\ We just mention that primordial black holes (PBHs) have been considered a candidate component of dark matter since the 70's, and conjectured to have formed in the early universe by the gravitational collapse of large density fluctuations, with their abundances and sizes tightly constrained by various theoretical arguments. These range from Hawking radiation, which causes their decay to occur at a faster rate compared to a macroscopic (solar mass) black hole, to bounds from their expected microlensing events and from their influence on the CMB \cite{Clesse:2015wea}. For instance, the mechanism of thermal emission by Hawking radiation sets a significant lower bound on their mass ($\sim5\times 10^{14} g$), in order for them to survive up to the present age of the universe. This bound is also compatible with other constraints, such as those coming from the possible interference of their decay with the formation of light elements at the nucleosynthesis time. With the launch of the FERMI gamma ray space telescope \cite{FERMI}, this kind of component has attracted renewed and widespread interest.
The unprecedented sensitivity of its detector in the measurement of interferometric patterns generated by high energy sources (femtolensing events), such as Gamma Ray Bursts \cite{Gould1992}, has allowed new bounds on their abundances to be derived \cite{Barnacka:2012bm}. The hypothesis of having PBHs as a dominant component of the dark matter of the universe provides remarkable constraints on their allowed mass values, except for a mass range $10^{18} \textrm{kg} < M_{\textrm{PBH}} <10^{23} \textrm{kg}$, where it has been argued that they can still account for the majority of it. In other mass ranges several analyses indicate that the PBH fraction of dark matter cannot exceed $1\% $ of the total \cite{Clesse:2015wea}. \\ PN corrections turn out to be significant for PBHs in this mass range, due to the large variation induced on the $\mathcal{P N}$ function by the 1PN and 2PN terms. These may play a considerable role in a PBH mediated lensing event. We illustrate this behaviour by showing plots of the post Newtonian behaviour of the relevant expressions for lensing. In Fig. \ref{pnPBH} (a) we plot the angular deflection as a function of the impact parameter for the Newtonian (0PN) approximation and for the corresponding post-Newtonian corrections. We have considered a primordial black hole with a mass of $10^{-16}$ $M_{\odot}$, which carries a microscopic Schwarzschild radius (300 fm), and have chosen $E=1/(GM)=0.6$ MeV for the incoming photon beam. The impact of the corrections on the gravitational cross section is quite large, as one can easily figure out from panel (b), where we plot the factor $\mathcal{P N}$ as a function of the Schwarzschild radius for these compact massive objects, for $b_h\sim$ 1 fm. For a more massive primordial black hole, with $200 < b_h <1000$, the pattern is quite similar, as shown in panel (c). In both cases the post-Newtonian corrections appear to be significant, of the order of 15-20 $\%$, and could be included in a more accurate analysis of lensing for these types of dark matter candidates. \section{Conclusions} We have presented a discussion of neutrino lensing at 1-loop in the electroweak theory. In our approach the gravitational field is a static background, and the propagating matter fields are obtained by embedding the Standard Model Lagrangian on a curved spacetime, as discussed in previous works \cite{Coriano:2013iba,Coriano:2011zk}. As in a previous study \cite{Coriano:2014gia}, also in our current case the field theoretical corrections to the gravitational deflection are in close agreement with the predictions of general relativity. The agreement holds both asymptotically, for very large distances from the center of the black hole, of the order of $10^6$ horizon sizes $(b_h)$, and quite close to the photon sphere $(\sim 20\, b_h)$. In this respect, the similarity of the results for photons and neutrinos indicates the consistency of the semiclassical approach that we have implemented. As noticed in \cite{Accioly1}, the inclusion of the quantum effects causes the appearance of an energy dependent dispersion of a particle beam, which implies a violation of the classical equivalence principle. Various types of lens equations have been formulated in the past using classical GR, and we have illustrated the modifications induced on their expressions by the radiative corrections. We have then developed a formalism which allows us to include the semiclassical results, due to the radiative effects in the propagation of a photon or a neutrino, in a typical lensing event.
We have considered both the case of a thin lens, which is quadratic in the deflection angle, and the fully nonlinear case, taking as an example the Virbhadra-Ellis lens equation. Radiative and post-Newtonian effects induce an energy dependence of the angle of deflection, with the appearance of extra $1/b^n$ suppressed contributions and of extra logarithms of the impact parameter, which we have studied numerically for some realistic geometric configurations. In general, radiative effects are significant only for configurations of the source/lens/observer which involve small impact parameters in the deflection $(b_h\sim 20)$, and require angular resolutions in the region of a few milliarcseconds. Our results are valid for a Schwarzschild metric, considered both in the Newtonian and in the post-Newtonian approximation, but they can be extended to other metrics as well. \\ We have also discussed the consistency of the post-Newtonian approach. We have shown that such corrections can be consistently taken into account in the case of microscopic horizon sizes, such as those of primordial black holes. These corrections have been shown to factorize and to be accounted for by a post-Newtonian function. Our analysis can be extended in several directions, from the case of Kerr-Newman metrics to the study of microlensing and Shapiro delays, and to dynamical gravity. \chapter*{Conclusions} The aim of this thesis has been to present some consequences of classical conformal symmetry in some extensions of the SM, both in the supersymmetric and in the non supersymmetric cases. In all cases we have stressed the possible physical implications of the scenarios that we have investigated. The analysis of Chapter 1 has been centered on general features of supersymmetric theories, where we have analysed some important aspects of anomaly actions, proving that the manifestation of a conformal anomaly lies in the presence of massless effective degrees of freedom which interpolate as intermediate states in an anomaly vertex. We have shown that this universal feature is typical both of chiral and of conformal anomalies. For a dilatation symmetry, broken by the trace anomaly, the intermediate state is interpreted as an effective dilaton. Dilatons, axions and dilatinos, interpreted as composite states, are identified in the UV as anomaly poles of fundamental theories, but their manifestation in the IR could be related to possible nonlinear realizations of the same theories. More studies are obviously necessary in order to reach more conclusive results regarding the role played by these degrees of freedom in a low energy context. \\ In Chapter 2 we have turned to a phenomenological analysis of a dilaton state in the context of the SM. The study, in this case, concerns a fundamental state, for which we have quantified the impact of the current constraints coming from Higgs searches at the LHC. The bounds on the conformal scale that we have extracted are not too restrictive, showing that a dilaton can still be searched for at the LHC and is not excluded. \\ Chapters 3 and 4 deal with a specific superconformal theory, the TNMSSM, which extends the NMSSM with one extra triplet and a scalar singlet superfield. These two chapters focus on the Higgs sector of the model, characterising its spectrum and the possible implications for the discovery at the LHC of the hidden Higgses it predicts.
Although we have not commented much upon it, we have verified that the model allows a massless supermultiplet, associated to the superconformal symmetry of the model. Chapter 4 has been entirely dedicated to a study of the pseudoscalar state present in the model, and we have discussed several constraints on the allowed parameter space coming from recent ATLAS and CMS data. \\ Finally, in Chapter 5, we have discussed an application of the TVV anomaly vertex to the gravitational deflection of photons in a Schwarzschild background. This original analysis, which has been developed as a by-product of previous investigations of conformal anomalies, shows how a fundamental result derived from the study of specific interactions carries far-reaching implications in astroparticle physics and semiclassical GR. In particular, we have introduced for the first time the notion of a ``radiative lens equation'', using an original approach which we hope can be further studied and extended in the future.
1,108,101,562,606
arxiv
\section{Introduction} This paper is organised as follows. In the next section we describe virtual links and give reasons for studying them. In section 3 the conditions are given for a 2$\times$2 matrix, $S$, to be a {\it linear switch} or {\it switch} for short, see \cite{FJK}. The entries in the switch lie in some associative ring, $R$, with identity which need not have commutative multiplication. This allows us to define in section 5 an $R$-module for any virtual link. The definition of the $R$-module comes from a labelling of a diagram of the virtual link considered in section 4. If the ring allows determinants then a sequence of polynomials in one or more variables can be defined and this is treated in section 6. Virtual links also include classical links but in that case these polynomials are constant as we see in section 7. We would like to thank Hugh Morton for suggestions in this section and for reading through an earlier draft. Finally various classification schemes of switches are given together with tables. \vfill\eject \section{Virtual Links} In this section we consider virtual links. For more details see \cite{K, FRS}. A diagram of a classical knot or link can be described by the Gauss code. However not all Gauss codes can be realised as {\it classical} diagrams of knots or links. Their realization may depend on the introduction of {\it virtual crossings}. These are crossings which are neither above nor below in space but just indicate that the journey of one arc intersects the journey of another arc. Virtual links are represented by oriented diagrams with ordinary crossings as for classical knots and links together with these virtual crossings. In addition to their application as a geometric realization of the combinatorics of a Gauss code, virtual links have physical, topological and homological applications. In particular, virtual links may be taken to represent a particle in space and time which disappears and reappears. A virtual link may be represented, up to stabilisation, by a link diagram on a surface. Finally, an element of the second homology of a rack space can be represented by a labelled virtual link, see \cite{FRS}. Since the rack spaces form classifying spaces for classical links, the study of virtual links may give information about classical knots and links. A {\bf diagram} for a virtual link is a 4-regular plane graph with extra structure at its nodes representing the three types of crossings in the link. A classical crossing of either sign is represented in the diagram in the usual way. A virtual crossing is represented by two crossing arcs with a small circle placed around the crossing point. The graph also lies implicitly on a two-dimensional sphere $S^{2}$. {\bf Semi-arcs} go from one classical crossing of the graph to another, ignoring virtual crossings. This is distinct from a classical link diagram where the {\it arcs} go from one undercrossing to another. Two such diagrams are {\bf equivalent} if there is a sequence of moves of the types indicated in the figures below taking one diagram to the other. They are the generalised Reidemeister moves and are local in character. We show the classical Reidemeister moves as part (A) of Figure 1. These classical moves are part of virtual equivalence, where no changes are made to the virtual crossings. Taken by themselves, the virtual crossings behave as diagrammatic permutations. Specifically, we have the flat Reidemeister moves (B) for virtual crossings as shown in Figure 1.
In Figure 1 we also illustrate a basic move (C) that interrelates real and virtual crossings. In this move an arc going through a consecutive sequence of two virtual crossings can be moved across a single real crossing. In fact, it is a consequence of moves (B) and (C) for virtual crossings that an arc going through any consecutive sequence of virtual crossings can be moved anywhere in the diagram keeping the endpoints fixed and writing the places where the moved arc now crosses the diagram as new virtual crossings. This is shown schematically in Figure 2. We call the move in Figure 2 the {\bf detour}, and note that the detour move is equivalent to having all the moves of type (B) and (C) of Figure 1. This extended move set (Reidemeister moves plus the detour move or the equivalent moves (B) and (C)) constitutes the move set for virtual knots and links. \diagram \diagram \section{Switches: Definition and Examples} In this section we define the conditions needed for a 2$\times$2 matrix, $S$, to be a {\it linear switch} or {\it switch} for short. The conditions, divided into subsections, are invertibility, the Yang-Baxter equations and the existence of sideways matrices. The reasons for these conditions should become clear in the next section. Note that the definition of a switch in \cite{FJK} is more general. \vfill\eject \subsection{Inverting a 2$\times$2 Matrix} Suppose $S$ is the $2\times 2$ matrix with entries in a ring $R$, $$S=\pmatrix{A & B \cr C & D\cr}.$$ The ring $R$ is associative and has a multiplicative identity element 1 but need not have commutative multiplication. We call an element of the ring a {\bf unit} if it has a two sided multiplicative inverse. The proof of the following lemma may be safely left to the reader. \lemma{The matrix $S$ is a unit in the ring of 2$\times$2 matrices with entries in $R$ if either $B, D$ and $\Delta=B^{-1}A-D^{-1}C$ are units or $A, C$ and $\Delta'=C^{-1}D-A^{-1}B$ are units. In the first case $$S^{-1}=\pmatrix{\Delta^{-1}B^{-1} & -\Delta^{-1}D^{-1} \cr -D^{-1}C\Delta^{-1}B^{-1} & B^{-1}A\Delta^{-1}D^{-1}\cr}.$$ In the second case $$S^{-1}=\pmatrix{C^{-1}D\Delta'^{-1}A^{-1} & -A^{-1}B\Delta'^{-1}C^{-1} \cr -\Delta'^{-1}A^{-1} & \Delta'^{-1}C^{-1}\cr}.$$ } \hfill$\square$\par\rm The hypothesis for the above lemma is sufficient but not quite necessary. For example $\pmatrix{1&\alpha\cr 0&1}$ is invertible even if $\alpha$ is not. Note that the condition for an inverse to exist can also be written $B^{-1}A\ne D^{-1}C$ if the ring is a division ring. The theory of inverting matrices with entries in a non-commutative ring can differ from the commutative case. As an example consider the following $2\times 2$ matrix with entries in the quaternions $\hbox{\Bbb H}$. Let $$S=\pmatrix{1& k\cr i & j\cr}\hbox{ then }S^{-1}=\pmatrix{1/2& -i/2\cr -k/2 & -j/2\cr}.$$ The ``obvious'' determinant $j-ki$ is zero. Moreover the transpose of $S$ has no inverse since then $\Delta=0$. \subsection{The Yang-Baxter Equations} The next condition to consider is $$ (S\times id)(id\times S)(S\times id)=(id\times S)(S\times id)(id\times S).$$ Here $S\times id$ and $id\times S$ are the 3$\times$3 matrices $$S\times id=\pmatrix{A&B&0\cr C&D&0\cr 0&0&1\cr}\quad id\times S=\pmatrix{1&0&0\cr 0&A&B\cr 0&C&D\cr}.$$ These equations are a specialization of the set theoretic Yang-Baxter equations considered by V. Drinfeld in \cite{Dr}.
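The scalar conditions implied by this matrix equation can be extracted mechanically; the following is a minimal sketch (in Python, assuming sympy is available) which expands both sides with noncommuting entries and prints the entry-by-entry differences, reproducing exactly the seven relations listed next.
\begin{verbatim}
# Sketch: expand (S x id)(id x S)(S x id) - (id x S)(S x id)(id x S)
# for noncommuting entries A, B, C, D; the nonzero entries are the
# seven scalar Yang-Baxter conditions.
import sympy as sp

A, B, C, D = sp.symbols('A B C D', commutative=False)
one, zero = sp.Integer(1), sp.Integer(0)

S_id = sp.Matrix([[A, B, zero], [C, D, zero], [zero, zero, one]])
id_S = sp.Matrix([[one, zero, zero], [zero, A, B], [zero, C, D]])

diff = (S_id*id_S*S_id - id_S*S_id*id_S).applyfunc(sp.expand)
for i in range(3):
    for j in range(3):
        if diff[i, j] != 0:
            print((i, j), diff[i, j])
\end{verbatim}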
The Yang-Baxter equations imply the seven equations $$\matrix{% 1: A=A^2+BAC\hfill &2: [B,A]= BAD\hfill \cr 3: [C,D]= CDA\hfill &4: D=D^2+CDB\hfill \cr 5: [A,C]= DAC\hfill &6: [D,B]= ADB \hfill \cr \hfill 7: [C,B]=& ADA-DAD \hfill \cr}$$ where $[X,Y]$ denotes the commutator $XY-YX$. \lemma{Assume that either $A, B, C$ or $B, C, D$ are units and the seven equations above are satisfied. Then $A-1, D-1$ are also units and the seven equations can be reduced to the first four equations. By a further refinement these equations can be reduced to two equations in two unknowns.} {\bf Proof } Assume that $A, B, C$ are units. Firstly we write $C, D$ in terms of $A, B$ using equations 1 and 2. So $$\matrix{% C=&A^{-1}B^{-1}A-A^{-1}B^{-1}A^2=A^{-1}B^{-1}A(1-A) & D=&1-A^{-1}B^{-1}AB\cr}.$$ We see that 5 is easily satisfied if we substitute these values and clearly $A-1$ and $D-1$ are units. Now look at equation 4. The right hand side minus the left hand side is $A^{-1}B^{-1}A\Theta B$ where $$\Theta=BA^{-1}B^{-1}A-A^{-1}B^{-1}AB-A+B^{-1}AB.$$ The same difference for equation 6 is $-\Theta B$ and for equation 7 is $\Theta(1-A)$. So $\Theta=0$ implies equations 6 and 7. The converse is also true since $B, A-1$ are units. It is also easily seen that $(D-1)B^{-1}(A-1)C^{-1}=1$ and we note that for future use. Since $C,\ D$ can be eliminated using equations 1 and 2 we are finally left with two equations, in two unknowns $A$ and $B$. A symmetric argument works if $B, C, D$ are units.\hfill$\square$\par\rm The number of equations can be reduced to one, see \cite{B,F}. \subsection{The Final Definition and Sideways Matrices} Summing up, a matrix $S=\pmatrix{A & B \cr C & D\cr}$ is a {\bf linear switch} if {\parindent=20pt 1. $B, C$ are units and either $D$ and $\Delta=B^{-1}A-D^{-1}C$ are units or $A$ and $\Delta'=C^{-1}D-A^{-1}B$ are units. 2. The four Yang-Baxter equations $$\matrix{% 1: A=A^2+BAC\hfill &2: [B,A]= BAD\hfill \cr 3: [C,D]= CDA\hfill &4: D=D^2+CDB\hfill \cr} $$ are satisfied. } The {\bf sideways matrices} $S^+_-$ and $S^-_+$ are defined by $$S^+_-=\pmatrix{% DB^{-1}&C-DB^{-1}A\cr B^{-1}&-B^{-1}A\cr}\quad S^-_+=\pmatrix{% -C^{-1}D&C^{-1}\cr B-AC^{-1}D&AC^{-1}\cr} $$ Note for future reference that both $S^+_-$ and $S^-_+$ are invertible if $S$ is and that $(S^{-1})^+_-=(S^-_+)^{-1}$ and $(S^{-1})^-_+=(S^+_-)^{-1}$. Also $$S^+_-(a,a)=(\lambda a,\lambda a)\hbox{ and } S^-_+(a,a)=(\lambda^{-1} a,\lambda^{-1} a)$$ where $\lambda=B^{-1}(1-A)=(1-D)^{-1}C$. So the sideways matrices preserve the diagonal. This has the curious consequence that a linear switch which is a birack is also a biquandle in the sense of \cite{FJK}. It is an easy exercise to verify that the only switches with a commutative ring are $$\pmatrix{ 1-\mu\lambda& \mu \cr\lambda & 0\cr} \quad\pmatrix{0 & \mu \cr \lambda & 1-\mu\lambda\cr}$$ for some units $\lambda,\ \mu$ in the ring. Either is called the {\bf Alexander} switch, see \cite{FJK}. Note that $$\pmatrix{1&0\cr 0&\lambda^{-1}\cr}\pmatrix{1-\mu\lambda&\mu\cr\lambda&0\cr} \pmatrix{1&0\cr 0&\lambda\cr}=\pmatrix{1-\mu\lambda&\mu\lambda\cr 1&0\cr}.$$ The matrix on the right is a version of the Burau matrix. The identity is invertible and a solution of the Yang-Baxter equations but is not a linear switch because $B, C$ are zero. For a non-commutative example assume that the ring is the quaternion division algebra $\hbox{\Bbb H}$. Then $$S=\pmatrix{% 1+i&-j\cr j&1+i\cr}$$ is called the {\bf Budapest switch}. It is only one of many such solutions found by analysis and a computer search. 
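Such solutions, and the verification of any particular candidate, are easy to check by machine; a minimal numerical sketch (in Python, assuming numpy) which tests the Budapest switch against the matrix Yang-Baxter equation and for invertibility, representing quaternions as $2\times 2$ complex matrices:
\begin{verbatim}
# Sketch: check the Budapest switch S = ((1+i, -j), (j, 1+i)) numerically,
# with the quaternions 1, i, j realised as 2x2 complex matrices.
import numpy as np

E = np.eye(2, dtype=complex)
I = np.array([[1j, 0], [0, -1j]])
J = np.array([[0, 1], [-1, 0]], dtype=complex)

A, B, C, D = E + I, -J, J, E + I
Z = np.zeros((2, 2), dtype=complex)

S      = np.block([[A, B], [C, D]])
S_x_id = np.block([[A, B, Z], [C, D, Z], [Z, Z, E]])
id_x_S = np.block([[E, Z, Z], [Z, A, B], [Z, C, D]])

print("Yang-Baxter:", np.allclose(S_x_id @ id_x_S @ S_x_id,
                                  id_x_S @ S_x_id @ id_x_S))
print("invertible :", abs(np.linalg.det(S)) > 1e-12)
\end{verbatim}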
It is easily checked that if $$S=\pmatrix{A & B \cr C & D\cr}$$ is a linear switch then so is $$S(t)=\pmatrix{A & tB \cr t^{-1}C & D\cr}$$ where $t$ is any variable in the centre of the ring. By analogy with the quaternionic case we call $t$ a {\bf real} variable. Further switches are given by $S^{-1}$, because $S\ \longrightarrow\ S^{-1}$ is an antiautomorphism, and $S^\dagger=\pmatrix{D&C\cr B&A\cr}$ is also a switch, by inspection. If the entries of $S$, a linear switch, are quaternions, then $S^*=\pmatrix{\overline{A}&\overline{C}\cr \overline{B}& \overline{D}\cr}$ is a solution, because Hermitian involution is also an antiautomorphism, and so is $S^{\dagger*}=\pmatrix{\overline{D}&\overline{B}\cr \overline{C}& \overline{A}\cr}$. \section{Labelling Diagrams} In this section, given a switch $S$ with entries in $R$, we define a {\it labelling} or colouring, $\cal{L}$, of the semi-arcs of a virtual link diagram, $D$, by elements of $R$, in such a way that after a Reidemeister move converting $D$ into $D'$ there is a uniquely defined labelling ${\cal L}'$ of $D'$ which is unchanged outside of the disturbance caused by the Reidemeister move. It follows that if $D_1$ and $D_2$ are diagrams representing the same virtual link and $D_1\ \longrightarrow\ \cdots\ \longrightarrow\ D_2$ is a sequence of Reidemeister moves transforming $D_1$ into $D_2$, then any labelling ${\cal L}_1$ of $D_1$ is transferred via the sequence of Reidemeister moves to a labelling ${\cal L}_2$ of $D_2$. In particular the set of labellings of $D_1$ is in bijective correspondence with the set of labellings of $D_2$, albeit not by a uniquely defined bijection. Let the edges of a positive real crossing in a diagram be arranged diagonally and called geographically {\it NW, SW, NE} and {\it SE}. Assume that initially the crossing is oriented and the edges oriented towards the crossing from left to right, i.e. west to east. The {\bf input} edges, oriented towards the crossing, are in the west and the edges oriented away from the crossing, the {\bf output} edges, are in the east. Let $R$ be a labelling set and let $a$ and $b$ be labellings from $R$ of the input edges with $a$ labelling SW and $b$ labelling NW. For a positive crossing, $a$ will be the label of the undercrossing input and $b$ the label of the overcrossing input. Suppose now that $S(a,b)^T=(c,d)^T$. Then we label the undercrossing output NE by $d$ and we label the overcrossing output SE by $c$. For a negative crossing the direction of labelling is reversed. So $a$ labels SE, $b$ labels NE, $c$ labels SW and $d$ labels NW. Finally, for a virtual crossing the labellings carry across the strings. This corresponds to the twist function $T(a,b)=(b,a)$. Manturov in \cite{M} has generalized this to $T(a,b)=(\epsilon b,\epsilon^{-1}a)$. However, as we shall see later, this generalization leads to the same polynomial considered by Silver and Williams, \cite{SW}. \vfill\eject The following figure shows the labelling for the three kinds of crossings. \diagram \centerline{$c=Aa+Bb\quad d=Ca+Db$} It is convenient to think of the action from left to right on a positive crossing as being the action of $S$, the action from right to left as being $S^{-1}$, the action from top to bottom as being $S^-_+$ and the action from bottom to top as being $S^+_-$. For a negative crossing the actions are equal but with opposite orientation.
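As a concrete illustration of these labelling rules, the following minimal sketch (in Python, assuming sympy) propagates a pair of labels through a positive crossing for the Alexander switch, and checks that the sideways matrix $S^+_-$ preserves the diagonal with eigenvalue $\lambda_0=B^{-1}(1-A)$, the property used below for Reidemeister 1 moves. Commuting symbols suffice here since the Alexander switch lives in a commutative ring.
\begin{verbatim}
# Sketch: label propagation at a positive crossing and the diagonal property
# of the sideways matrix, for the Alexander switch S = ((1-mu*lam, mu),(lam, 0)).
import sympy as sp

lam, mu, a, b = sp.symbols('lambda mu a b')
A, B, C, D = 1 - mu*lam, mu, lam, sp.Integer(0)

c, d = A*a + B*b, C*a + D*b          # outputs (c, d) = S (a, b)
print("c =", c, "  d =", d)

S_pm = sp.Matrix([[D/B, C - D*A/B], [1/B, -A/B]])        # sideways matrix S^+_-
out  = sp.simplify(S_pm * sp.Matrix([a, a]))
lam0 = sp.simplify((1 - A)/B)                            # B^{-1}(1 - A) = lambda
assert all(sp.simplify(o - lam0*a) == 0 for o in out)
print("S^+_-(a, a) =", list(out), "  lambda_0 =", lam0)
\end{verbatim}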
\diagram \theorem{The sets of labellings of two diagrams representing the same virtual link are in bijective correspondence.} {\bf Proof } Full details can be found in \cite{FJK}. It should now not be difficult for the reader to see how a labelling on a diagram is extended to a labelling on the result of a Reidemeister move. For example, consider a Reidemeister 1 move which introduces a new classical crossing. Then the new semi-arc needs a consistent label. But this happens because the sideways matrices preserve the diagonal. A Reidemeister 2 move either extends the labelling because $S$ is invertible or because one of the sideways operators is invertible. A Reidemeister 3 move labelling follows from the Yang-Baxter equations. \hfill$\square$\par\rm \section{The Invariant $R$-module} Assume for simplicity that we are dealing with a knot. The link case is similar and details can safely be left to the reader. The set of labellings of a diagram by the ring $R$ with fixed switch $S$ is a right $R$-module and we can give a finite (square) presentation. More precisely, let $D$ be a diagram of a virtual link with $n$ classical crossings. The {\bf semi-arcs} of the diagram run from one classical crossing to the next. The virtual crossings are ignored. Following the orientation of the knot, label the semi-arcs with $R$-variables $x_1, x_2, \ldots, x_{2n}$. By an $R$-variable we mean a symbol standing in for any element of $R$. At each crossing there is a relation of the form $$\pmatrix{A&B\cr C&D\cr}\pmatrix{x_i\cr x_j\cr}= \pmatrix{x_{j+1}\cr x_{i+1}\cr} \hbox{ or } \pmatrix{A&B\cr C&D\cr}\pmatrix{x_i\cr x_j\cr}= \pmatrix{x_{j-1}\cr x_{i-1}\cr}$$ depending on whether the crossing is positive or negative. As is the usual custom, indices are taken modulo $2n$. The relations can now be written in matrix form as $M{\bf x}={\bf 0}$ where $M$ is a $2n\times 2n$ matrix and ${\bf x}=(x_1,x_2, \ldots, x_{2n})^T$. The non-zero entries in each row of the matrix are $A, B, -1$ or $C, D, -1$. Let ${\cal M}={\cal M}(S,D)$ be the module defined by these relations. We now show that the modules defined by diagrams representing the same virtual link are isomorphic. We do this by showing that a single Reidemeister move defines an isomorphism. The proof has the same structure as the proof, say, that the Alexander module of a classical link is an invariant, as in \cite{A}, but we give the details because of the care needed due to non-commutativity. \theorem{The module ${\cal M}$ defined above is invariant under the Reidemeister moves and is therefore a virtual link invariant.} {\bf Proof} Any module defined by a presentation of the form $M{\bf x}={\bf 0}$ is invariant under the following moves and their inverses applied to the matrix $M$. {\parindent=20pt \item{1.} permutations of rows and columns, \item{2.} multiplying any row on the left or any column on the right by a unit, \item{3.} adding a left multiple of a row to another row or a right multiple of a column to another column, \item{4.} changing $M$ to $\pmatrix{x&{\bf u}\cr{\bf 0}&M\cr}$ where $x$ is a unit, ${\bf u}$ is any row vector and ${\bf 0}$ is a zero column vector, \item{5.} repeating a row. } An {\bf elementary} matrix of type 1 is a permutation matrix. An elementary matrix of type 2 is the identity matrix with one diagonal entry replaced by a unit and an elementary matrix of type 3 is a square matrix with zero entries except for $1$'s down the diagonal and one other entry off the diagonal.
The operations $i.$ above for $i=1, 2, 3$ are equivalent to multiplying $M$ on the right or left by an elementary matrix of type $i$. For example any switch can be written as a product of elementary matrices as follows $$\pmatrix{% A & B \cr C & D\cr}= \pmatrix{% A & 0 \cr 0 & 1} \pmatrix{% 1 & 0 \cr C & 1\cr} \pmatrix{% 1 & 0 \cr 0 & C\Delta'\cr} \pmatrix{% 1 & A^{-1}B \cr 0 & 1} $$ if $A$ and $C\Delta'$ are units and a similar formula if $D$ and $B\Delta$ are units. Now consider the module ${\cal M}$ defined above. Clearly the presentation is unaltered by any of the basic moves which involve the virtual crossing. So we look to see the changes induced by the classical Reidemeister moves and check that the presentation matrix is only changed by the above 5 moves. Firstly, consider a Reidemeister move of the first kind. \diagram This introduces (or deletes) two new equal generators $x_{n+1}=x_{n+2}$. Because $S^-_+$ and $S^+_-$ preserve the diagonal, (the biquandle condition, see \cite{FJK}) the output ($x_{n+3}$) is the same as the input ($x_{n}$). The generator $x_{n+1}$ is equal to $\lambda^{-1}x_{n}$ where $\lambda=B^{-1}(1-A)$. So up to reordering of the columns the relation matrix is changed by $$M\ \Leftrightarrow\ \pmatrix{M&\matrix{{\bf 0}&{\bf 0}\cr}\cr \matrix{{\bf 0}&0\cr {\bf 0}&1\cr}&\matrix{1&-1\cr\lambda&0\cr}\cr} \sim\pmatrix{M&\matrix{{\bf 0}&{\bf 0}\cr}\cr \matrix{{\bf 0}&0\cr {\bf 0}&1\cr}&\matrix{0&-1\cr\lambda&0\cr}\cr}$$ Since $\lambda$ is a unit this does not alter the module. There are other possible inversions and mirror images of the above which can be dealt with in a similar fashion. Secondly, consider a Reidemeister move of the second kind. \diagram Again the outputs are unchanged from the inputs $x_j, x_i$ because of the relation $S^{-1}S=1$. Two new generators $x_{i+1}$ and $x_{j+1}$ are introduced (or deleted). They are related by the equations $$x_{j}=Ax_{i+1}+Bx_{j+1}\hbox{ and }x_{i}=Cx_{i+1}+Dx_{j+1}.$$ This has the following effect on the relation matrix. $$M\ \Leftrightarrow\ \pmatrix{M&\matrix{{\bf 0}&{\bf 0}\cr}\cr \matrix{{\bf 0}&0&-1\cr{\bf 0}&-1&0\cr}&\matrix{A& B \cr C& D\cr}\cr}. $$ Since $S$ is a product of elementary matrices this does not alter the module. The other possible inversions and mirror images of the above can be dealt with in a similar fashion but it is worth looking at the case where the two arcs run in opposite directions. The right outputs are unchanged from the left inputs by the relation $S^+_-(S^+_-)^{-1}=1$. \diagram The changes to the relation matrix are given by $$M\ \Leftrightarrow\ \pmatrix{M&\matrix{{\bf 0}&{\bf 0}\cr}\cr \matrix{{\bf 0}&-1&A\cr{\bf 0}&0&C\cr}&\matrix{B& 0 \cr D&-1\cr}\cr}\sim \pmatrix{M&\matrix{{\bf 0}&{\bf 0}\cr}\cr \matrix{{\bf 0}&-1&A\cr{\bf 0}&0&0\cr}&\matrix{B& 0 \cr 0&-1\cr}\cr} $$ The module is unaltered because $B$ is a unit. Finally, consider a Reidemeister move of the third kind. \diagram The outputs $x_{i+2}, x_{j+2}, x_{k+2}$ are unaltered by the Reidemeister move because of the Yang-Baxter equations. The inner generators $x_{i+1}, x_{j+1}, x_{k+1}$ are related to the inputs $x_{i}, x_{j}, x_{k}$ by the following matrix $$\pmatrix{C&DA&DB\cr 0&C&D\cr 0&A&B\cr}$$ and the inner generators $x'_{i+1}, x'_{j+1}, x'_{k+1}$ are related to $x_{i}, x_{j}, x_{k}$ by the following matrix $$\pmatrix{C&D&0\cr A&B&0\cr AC&AD&B\cr}.$$ Both are the product of elementary matrices and the proof is finished. 
\hfill$\square$\par\rm \section{The Invariant Polynomials} We have seen that associated to every linear switch $S$ and virtual link diagram $D$ there is an $R$-module, ${\cal M}$, which is a link invariant. Moreover this module has a finite presentation of the form $M{\bf x}={\bf 0}$. If the ring is commutative then there is a sequence of invariants of ${\cal M}$, the {\bf elementary ideals}, $E_i$, which can be defined from the presentation. The details can be found in the book by Crowell and Fox \cite{CF}. Suppose an $R$-module is defined by a matrix $M$ (not necessarily square but with $c$ columns and $r$ rows, $r\ge c$). Let $E_0$ be the ideal generated by the largest possible sub-determinants of $M$, that is those of size $c\times c$. Let $E_1$ be the ideal generated by the $(c-1)\times (c-1)$ sub-determinants of $M$ and so on. The elementary ideals form an ascending chain of ideals $$\{0\}\subset E_0\subset E_1\subset\cdots\subset E_c=E_{c+1}=\cdots=R.$$ The ideal $E_{c-1}$ is generated by the elements of $M$. In our situation the matrix $M$ is square so $E_0$ is principal, generated by the determinant of $M$. This has the following interpretation. Let $\hbox{\rm adj} M$ denote the adjugate matrix of $M$ consisting of the codimension 1 minors of $M$. Let $(x_1,\ldots,x_n)$ be the generators of $\cal M$. Multiplying the defining equations on the left by $\hbox{\rm adj} M$ yields the equation $$\det(M)(x_1,\ldots,x_n)=0.$$ It follows that if $(x_1,\ldots,x_n)\ne0$ then either $\det(M)=0$ in the ring or ${\cal M}$ has torsion. Assume now that $R$ is a gcd-ring. That is, $R$ is an integral domain such that any set of elements has a gcd which is well defined up to multiplication by a unit. Examples of gcd-rings are the (Laurent) polynomial rings $p[t_1^{\pm1},\ldots,t_k^{\pm1}]$ where $p$ is a field or the integers $\hbox{\Bbb Z}$. It follows that we can define the $i$-th {\bf ideal polynomial} by the rule $$\Delta_i({\cal M})=\hbox{gcd}\{E_i\}.$$ Then $\Delta_i({\cal M})$ is the generator of the smallest principal ideal containing $E_i$. We have a chain of divisions $$1=\cdots=\Delta_n|\Delta_{n-1}|\cdots|\Delta_0=\det(M)|0.$$ Let us illustrate this with the Alexander switch. Here $R=\hbox{\Bbb Z}[\lambda,\lambda^{-1},\mu,\mu^{-1}]$ and the switch is defined by the matrix $$\pmatrix{0 & \mu \cr \lambda & 1-\mu\lambda\cr}.$$ We shall call $\det(M)=\Delta_0(K)$ the 0-th {\bf Alexander polynomial} where $K$ is the class of the diagram $D$. For classical knots $K$, it is always the case that $\Delta_0(K)=0$. However this need not be true for virtual knots or links. For example consider the {\bf virtual trefoil} as shown in the figure. If we label as indicated then the module has a presentation with 3 generators $a, b, c$ and relations $c=\mu a,\ a=\mu\lambda b+ \mu(1-\mu\lambda)a,\ b=\lambda c+(1-\mu\lambda)(\lambda b+(1-\mu\lambda)a)$. \diagram If we eliminate $c=\mu a$ we arrive at the following equations. $$\matrix{(\mu-\lambda\mu^2-1)a&\hfill+\lambda\mu b=&0\cr (\lambda^2\mu^2-\lambda\mu+1)a&\hfill+(\lambda-\lambda^2\mu-1)b=&0\cr}$$ The determinant of these equations is $$\Delta=(\lambda-1)(\lambda\mu-1)(\mu-1).$$ Note that the fundamental rack (and hence the group defined by the Wirtinger relations) is trivial. Consider the Kishino knots $K_1, K_2$ and $K_3$ illustrated below. $$\ddiagram\qquad\ddiagram\qquad\ddiagram$$ All are ways of forming the connected sum of two unknots.
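The determinant calculation for the virtual trefoil above is easily checked by computer algebra. The following minimal sketch (in Python with the sympy library, which is of course not part of the original calculation) reproduces $\Delta$ from the eliminated relation matrix, up to multiplication by a unit:
\begin{verbatim}
import sympy as sp

lam, mu = sp.symbols('lambda mu')

# Relation matrix of the virtual trefoil after eliminating c = mu*a
M = sp.Matrix([[mu - lam*mu**2 - 1,        lam*mu],
               [lam**2*mu**2 - lam*mu + 1, lam - lam**2*mu - 1]])

# Zeroth Alexander polynomial: det(M), defined up to multiplication by a unit
print(sp.factor(M.det()))   # equals (lambda-1)(lambda*mu-1)(mu-1) up to a unit
\end{verbatim}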
Under the symmetry which changes positive crossings into negative crossings and vice-versa leaving virtual crossings alone, $K_1$ is transformed into $K_2$ and $K_3$ is invariant. Both have trivial racks and Jones polynomial. The Alexander polynomial $\Delta_0$ is zero in all three cases. On the other hand for $K_1$, the 1st Alexander polynomial, $\Delta_1$ is $1+\mu-\lambda\mu$ and for $K_2$, $\Delta_1$ is $1+\lambda-\lambda\mu$. Since these are neither units nor associates in the ring, $K_1, K_2$ are non trivial and non amphich\ae ral in the above sense. Since $\Delta_1$ in the case of $K_1, K_2$ is not an Alexander polynomial in the form $f(\lambda\mu)$ it follows that $K_1, K_2$ are not equivalent to classical knots, see \cite{S,W}. The 1st Alexander polynomial $\Delta_1$ of $K_3$ is 1. So we need new invariants to distinguish $K_3$ from the trivial knot. The Alexander switch is the only commutative case so we need a non-commutative ring. In the case of the entries in the switch this means that the pairs $(A, B)$, $(A, C)$, $(D, B)$, $(D, C)$, may not commute although other pairs may. Firstly we need a definition of the determinant of a matrix with entries in a non-commutative ring such as the quaternions. There are various definitions of determinants in this case and we refer to \cite{As} for details. For our purposes let $M_n(R)$ denote the ring of $n\times n$ matrices with entries in the ring $R$. We will only consider $R=\hbox{\Bbb R},\ \hbox{\Bbb C},\ \hbox{\Bbb H}$, the reals, complex numbers and quaternions respectively or polynomials with coefficients in these rings. Any quaternion $Q$ can be written as a pair of complex numbers $a, b$ by the formula $Q=a+bj$. There is an injective ring homomorphism $\psi: M_n(\hbox{\Bbb H})\ \longrightarrow\ M_{2n}(\hbox{\Bbb C})$ given by $$\psi(a+bj)=\pmatrix{a&b\cr -\overline{b}&\overline{a}\cr}.$$ Define the {\bf Study} determinant by $$\hbox{\rm d}(M)=\det(\psi(M)).$$ Note that the definition is slightly different from the one given in \cite{As} but its properties are the same. This Study determinant has the following properties {\parindent=20pt \item{1.} $\hbox{\rm d}(M)=0$ if and only if $M$ is singular. \item{2.} $\hbox{\rm d}(MN)=\hbox{\rm d}(M)\hbox{\rm d}(N)$ \item{3.} $\hbox{\rm d}(M)$ is unaltered by adding a left multiple of a row to another row or a right multiple of a column to another column. \item{4.} $\hbox{\rm d}\pmatrix{x&{\bf u}\cr{\bf 0}&M\cr} =|x|^2\hbox{\rm d}(M)$ where ${\bf u}$ is any row vector and ${\bf 0}$ is a zero column vector. \item{4'.} $\hbox{\rm d}\pmatrix{x&{\bf 0}\cr{\bf v}&M\cr} =|x|^2\hbox{\rm d}(M)$ where ${\bf v}$ is any column vector and ${\bf 0}$ is a zero row vector. \item{5.} $\hbox{\rm d}(M^*)=\hbox{\rm d}(M)$ where $M^*$ denotes Hermitian conjugate. \item{6.} If $M$ is non-singular then $\hbox{\rm d}(M)$ is a positive real number. \item{7.} $\hbox{\rm d}(M)$ is unaltered by permutations of rows and columns. } We now define the elementary ideals in the non-commutative case. As before, let $E_0$ be the ideal generated by the largest possible sub-determinants of $M$. Let $E_1$ be the ideal generated by the second largest possible sub-determinants of $M$ {\it plus} $E_0$ and so on. This extra condition is needed in order that the elementary ideals form an ascending chain and we cannot expand by rows or columns as in the case of commutative rings. It is not hard to see that these ideals are invariant under the 5 moves above. Let $S$ be a switch with quaternion entries. 
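The map $\psi$ and the Study determinant are straightforward to evaluate in practice. A minimal numerical sketch (Python with numpy; the quaternionic matrix is assumed to be supplied in the form $A+Bj$ with complex matrices $A$ and $B$, exactly as in the definition of $\psi$ above):
\begin{verbatim}
import numpy as np

def psi(A, B):
    """Complex 2n x 2n image of the quaternionic matrix A + B j
    (A, B complex n x n arrays), following the definition in the text."""
    return np.block([[A, B], [-B.conj(), A.conj()]])

def study_det(A, B):
    """Study determinant d(M) = det(psi(M))."""
    return np.linalg.det(psi(A, B))

# Example: the Budapest switch S = [[1+i, -j], [j, 1+i]] written as A + B j
A = np.array([[1 + 1j, 0], [0, 1 + 1j]])
B = np.array([[0, -1], [1, 0]], dtype=complex)
print(study_det(A, B))   # positive real (= 1 up to rounding), as in property 6
\end{verbatim}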
We can make the entries lie in $\hbox{\Bbb H}[t,t^{-1}]$ by replacing $S$ with $S(t)$. The polynomials $\Delta_{i}^{\hbox{\Bbb H}}(D,S)$ are defined as in the commutative case. That is $$\Delta_{i}^{\hbox{\Bbb H}}(D,S)=\hbox{gcd}\{E_i\}.$$ By the properties of the Study determinant they will be real (Laurant) polynomials with roots in complex conjugate pairs and are defined up to multiplication by a unit. We can normalise the polynomials $\Delta_{i}^{\hbox{\Bbb H}}(D,S)$ by multiplying by a suitable unit $\pm t^n$ so that we get a genuine polynomial with a positive constant term. For a worked calculation of $\Delta_{0}^{\hbox{\Bbb H}}$, consider again the diagram of the virtual trefoil above. The equations on the generators $a, b, c$ are $$\eqalign{ tBa+Ab-c=&0\cr ADa+(t^{-1}AC-1)b+tBc=&0\cr (t^{-1}CD-1)a+t^{-1}C^2b+DAb=&0\cr} .$$ Eliminating $c=tBa+Ab$ the relation matrix is $$\pmatrix{ AD+t^2B^2&t^{-1}AC+tBA-1\cr t^{-1}CD+tDB-1& t^{-1}C^2+DA\cr}.$$ If we now take the values $A=1+i, B=-j, C=j, D=1+i$ of the Budapest switch, the normalised determinant is $\Delta_0^{\hbox{\Bbb H}}=1+2t^2+t^4$. For the three Kishino knots $\Delta_0^{\hbox{\Bbb H}}=0$ and $\Delta_1^{\hbox{\Bbb H}}=1+(5/2)t^2+t^4$ showing in particular that $K_3$ is non-trivial. The polynomials $\Delta_i^{\hbox{\Bbb H}}$ are defined by deleting rows and columns of a presentation matrix $M$ and taking the determinant of the image under the automorphism $\psi$ which now has complex entries. But we can reverse this procedure and obtain a new set of polynomials $\Delta_i^{\hbox{\Bbb C}}$. The even order polynomials $\Delta_{2i}^{\hbox{\Bbb C}}$ will divide $\Delta_i^{\hbox{\Bbb H}}$. We can continue to the real case under the well known embedding of the complex numbers in the ring of $2\times 2$ matrices. The resulting polynomials are denoted by $\Delta_i^{\hbox{\Bbb R}}$. At the moment we have not calculated any examples but they may provide useful invariants. \section{The Classical Case} This section suggests that these polynomials do not provide more information about classical knots and links. However it is a method of distinguishing virtual knots from classical ones. The idea behind the proofs is to use the fact that any classical link is the closure of a braid, \cite{B}. This result can easily be extended to virtual links. {\bf Virtual} braids are defined in a similar manner to classical braids. We consider a diagramatic representation of a virtual braid as a set of $n$ strings travelling from a vertical line of $n$ points on the left to a translated copy of these points on the right. The strings are to be monotonic in the left to right motion. They may cross as in the classical case with a right handed cross (corresponding to $\sigma_i$) or they may incur a left handed cross (corresponding to $\sigma_i^{-1}$) or they may have a virtual crossing. The virtual crossing is indicated algebraically by $\tau_i$. The group of virtual braids on $n$ strings is denoted by $VB_n$. Kamada's presentation of the virtual braid group is as follows, \cite{KK}. Let generators of $VB_n$ be $\sigma_1, \ \sigma_2, \ ..., \sigma_{n-1}$ where $\sigma_i$ corresponds to the real positive crossing of the $i$-th and $(i+1)$-th string and $\tau_1, \ \tau_2, \ ..., \ \tau_{n-1}$ where $\tau_i$ corresponds to the virtual crossing of the $i$-th and $(i+1)$-th string. 
The following relations hold: {\parindent=20pt \item{i)}Braid relations $$\matrix{ \sigma_i \sigma_j= \sigma_j \sigma_i ,\qquad |i-j|>1 \cr \sigma_i \sigma_{i+1} \sigma_i=\sigma_{i+1} \sigma_i \sigma_{i+1}\hfill \cr } $$ \item{ii)}Permutation group relations $$\matrix{ {\tau_i}^2=1 \hfill \cr \tau_i \tau_j= \tau_j \tau_i ,\qquad |i-j|>1 \cr \tau_i \tau_{i+1} \tau_i=\tau_{i+1} \tau_i \tau_{i+1}\hfill \cr } $$ \item{iii)}Mixed relations $$\matrix{ \sigma_i \tau_j= \tau_j \sigma_i ,\qquad |i-j|>1 \cr \sigma_i \tau_{i+1} \tau_i=\tau_{i+1} \tau_i \sigma_{i+1}\hfill \cr } $$} At this stage the reader may care to compare these relations with the relations for the braid-permutation group, see \cite{FRR}. The braid-permutation group is a quotient of the virtual braid group. The corresponding knot-like objects are called {\bf welded} knots. Any linear switch $S$ with entries in the ring $R$ defines a representation of the braid group $B_n$ into the group of invertible $n\times n$ matrices with entries in $R$ by sending the standard generator $\sigma_i$ to $S_i=(id)^{i-1}\times S \times (id)^{n-i-1}$. Denote the representation by $\rho=\rho(S, n)$. This can be extended to virtual braids by sending $\tau_i$ to $T_i=(id)^{i-1}\times T \times (id)^{n-i-1}$ where $T=\pmatrix{0&1\cr 1&0\cr}$. Denote the extended representation by $\rho'=\rho'(S, T, n)$. \theorem{1. For all linear switches $S$ the representation, $\rho(S, n)$, of the braid group $B_n$ is equivalent to $\rho(S(t), n)$. 2. There exists a non-zero vector ${\bf z}=(z_0, z_1, \ldots, z_{n-1})^T$ such that $S_i({\bf z})={\bf z}$. In particular the representation $\rho$ is reducible. 3. The representation, $\rho'(S(t), T, n)$, of the virtual braid group, $VB_n$, is equivalent to \hfil\break $\rho'(S, T(t), n)$ where $T(t)=\pmatrix{0&t\cr t^{-1}&0\cr}$.} {\bf Proof } Let $\Lambda=\mathop{\rm diag}\nolimits\{1, t, t^2, \ldots, t^{n-1}\}$. Then $\Lambda^{-1}S_i\Lambda=S(t)_i$ and $\Lambda^{-1}T_i\Lambda=T(t)_i$. The vector ${\bf z}$ is defined by ${\bf z}=(1,\lambda, \lambda^2, \ldots, \lambda^{n-1})^T$ where $\lambda=B^{-1}(1-A)$. \hfill$\square$\par\rm This shows that there is no gain in the representation of $B_n$ by replacing $S$ with $S(t)$. It also shows that Manturov's invariant in \cite{M} is equivalent to the generalised Alexander polynomial. \theorem{For all classical knots and all switches where the polynomials are defined, $\Delta_i$ is constant (independant of $t$) and in particular $\Delta_0=0$.} {\bf Proof } Consider a diagram where the link is the closure of a classical braid. Suppose that the switch is $S(t)$. Then the defining matrix is of the form $M-I$ where $M$ is the representing matrix of the braid group and $I$ is the identity matrix. By the above $\Lambda(M-I)\Lambda^{-1}$ has switch $S(1)$. Moreover $(M-I){\bf z}=0$ so any determinant of $M-I$ will be zero. \hfill$\square$\par\rm As a consequence of this theorem it follows that the Kishino knot $K_3$ is not a classical knot. It also verifies, see \cite{SW}, that the Silver-Williams version of the Alexander polynomial is just the original for classical knots in the form $\Delta(\lambda\mu)$. We illustrate this result by calculating this constant for the Budapest switch. Recall that the determinant of a knot is the value of the Alexander polynomial evaluated at $-1$. 
\theorem{For the Budapest switch, $S=\pmatrix{ 1+i&-j\cr j&1+i\cr}$ and for any classical knot, $\Delta_1$ is the square of the knot determinant.} {\bf Proof } Define the $n\times n$ lower triangular matrix ${\cal B}_n$ inductively as follows. Put ${\cal B}_1=1$ and suppose that ${\cal B}_{n-1}$ is defined with bottom row $x_1,x_2,\ldots,x_{n-1}$. Then ${\cal B}_n$ is obtained from ${\cal B}_{n-1}$ by adjoining the bottom row $y_1,y_2,\ldots,y_{n}$ where $$y_i=kx_i,\ i=1,2,\ldots,n-2,\ y_{n-1}=(k-j)x_{n-1}\hbox{ and }y_n=jx_{n-1}.$$ So for example $${\cal B}_3=\pmatrix{1&0&0\cr k-j&j&0\cr i-1& 1-i&-1\cr}.$$ Then ${\cal B}_n^{-1}S_i{\cal B}_n=D_i$ where $D=\pmatrix{0&1\cr -1&2\cr}$, which is the Burau matrix with variable $-1$. The Study determinant now gives the square of the Alexander polynomial evaluated at $-1$. \hfill$\square$\par\rm It follows from the proof above that the representation of the classical braid group when $S$ is the Budapest switch is equivalent to a variant of the Burau representation. We conjecture that this is always the case. \section{Some Classifications of Switches} Consider now $2\times2$ linear switches with quaternion entries. We have already seen that a switch $S$ also defines the switch $S(t)$. The next lemma looks at the constraints on the entries of $S$. We can write any quaternion $Q\in\hbox{\Bbb H}$ as the sum of a real part ${\cal R}(Q)\in \hbox{\Bbb R}$, and a purely quaternionic part ${\cal P}(Q)\in \hbox{\Bbb R}^3$. Let $S^3$ denote the group of unit quaternions and let $S^2$ denote the set of pure unit quaternions. So $S^2$ is $\sqrt{-1}$. Let $\hbox{\Bbb C}$ be the complex numbers in $\hbox{\Bbb H}$ and let $S^1$ denote the group of unit complex numbers. \lemma{Let $S=\pmatrix{A & B \cr C & D\cr}$, with quaternion entries, be a switch. Then, in the non-commutative case, $|A|<2$ and ${\cal R}(A)>0$. If $|A|=1$ then ${\cal R}(A)=1/2$. Similar constraints hold for $D$. Finally $|B||C|=1$.} {\bf Proof}\quad From the equations satisfied by the entries we have $A=1-D^{-1}C^{-1}DC$ and \hfil\break $D^{-1}C^{-1}DC\in S^3-\{1\}$. So $|A|\le 2$ and ${\cal R}(A)>0$. The case $|A|= 2$ cannot occur since then $A=2$ which is the commutative case. Suppose $A=r+P$ where $r={\cal R}(A)$. Then if $|A|^2=1=r^2+|P|^2$, we use the fact that $|1-A|^2=1=(1-r)^2+|P|^2$ to see that ${\cal R}(A)=1/2$. Similar constraints hold for $D=1-A^{-1}B^{-1}AB$. Finally note that $1-A=A^{-1}BAC=D^{-1}C^{-1}DC$. So $|B||C|=1$.\hfill$\square$\par\rm We now consider all the switches over special subrings of $\hbox{\Bbb H}$ starting with the ring, $R$, of quaternions with integer coefficients, namely $$R=\{a+bi+cj+dk|a,b,c,d\in\hbox{\Bbb Z}\}.$$ By the above $A=1\pm i,\ 1\pm j,\ 1\pm k$. Note that say $A=1+i+j$ cannot occur because $|A-1|=1$. Similarly for $D$. The entries $B, C$ can only take the values $\pm i,\ \pm j,\ \pm k$. Call a switch of {\bf Budapest} type if $$S=\pmatrix{ 1+U&-V\cr V&1+U\cr}$$ where $U,V\in S^2$ and $U\perp V$. \lemma{All switches with entries in the ring $R$ of quaternions with integer coefficients are of Budapest type with $U,V$ lying in the set $\{\pm i,\ \pm j,\ \pm k\}$.} \hfill$\square$\par\rm Let $\xi=1/2+1/2i+1/2j+1/2k$. The {\bf Hurwitz}-ring, $H$, is defined to be the set $$H=\{n\xi+mi+pj+rk|n,m,p,r\in\hbox{\Bbb Z}\}.$$ Let us extend the coefficients of the switches to include elements of $H$. The new possible values for $A$ and $D$ are $1/2\pm i/2\pm j/2 \pm k/2$. The new possible values for $B$ and $C$ are $\pm 1/2\pm i/2\pm j/2 \pm k/2$. 
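The constraints of the lemma above, and the fixed vector ${\bf z}$ of the theorem of the previous section, can be verified numerically for the Budapest switch by representing quaternions as $2\times2$ complex matrices (the $n=1$ case of the map $\psi$). A small sketch, again in Python with numpy and not part of the original text:
\begin{verbatim}
import numpy as np

# Quaternions 1, i, j, k as 2x2 complex matrices: q = a + b j -> [[a, b], [-conj(b), conj(a)]]
ONE = np.eye(2, dtype=complex)
I = np.array([[1j, 0], [0, -1j]])
J = np.array([[0, 1], [-1, 0]], dtype=complex)
K = I @ J                                      # k = i j

inv = np.linalg.inv

# Budapest switch entries
A, B, C, D = ONE + I, -J, J, ONE + I

# Relations quoted in the proof of the lemma
print(np.allclose(A, ONE - inv(D) @ inv(C) @ D @ C))   # True
print(np.allclose(D, ONE - inv(A) @ inv(B) @ A @ B))   # True

# Fixed vector z = (1, lambda) with lambda = B^{-1}(1 - A):  S z = z
lam = inv(B) @ (ONE - A)                       # equals k for the Budapest switch
print(np.allclose(A + B @ lam, ONE))           # True: A + B*lambda = 1
print(np.allclose(C + D @ lam, lam))           # True: C + D*lambda = lambda
\end{verbatim}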
A computer search for all solutions with entries in $H$ is given in table 2 at the end of the paper. \section{Some Calculations of the Polynomials} We now calculate the polynomials for various choices of the matrix $S$. This not only allows us to distinguish various virtual knots but also helps to distinguish the construction of the polynomial by $S$. We now pick 4 switches from the more than 150 discovered by computer search. We have been able to reduce the number of switches by 50\% from earlier versions using the results of \cite{BF}. $$\vbox{ \offinterlineskip \halign{\strut \vrule \hfil \quad $#$ \ \hfil \vrule & \hfil \ $\small #$ \ \hfil \vrule & \hfil \ $\small #$ \ \hfil \vrule & \hfil \ $\small #$ \ \hfil \vrule & \hfil \ $\small #$ \ \hfil \vrule\cr \noalign{\hrule} & A & B & C & D\cr \noalign {\hrule} 1 & 1+i & -j & j & 1+i \cr \noalign {\hrule} 2 & 1+i & -1/2+i/2+j/2-k/2 & -1/2+i/2-j/2+k/2 & 1/2+i/2+j/2+k/2 \cr \noalign {\hrule} 3 & 1/2+i/2+j/2+k/2 & -1/2+i/2-j/2+k/2 & -1/2+i/2+j/2-k/2 & 1+i \cr \noalign {\hrule} 4 & 1+i & -1+i-k & -1/3+i/3+k/3 & 1/3+i/3+j2/3 \cr \noalign {\hrule} }}$$ Note that 1 is the Budapest switch, 2, and 3 have coefficients in the Hurwitz ring and 4 is an anomalous type thrown up in the computer search. The table below gives the corresponding normalised polynomials, $\Delta_0$, for the virtual trefoil. $$\vbox{ \offinterlineskip \halign{\strut \vrule \hfil \quad $#$ \quad \hfil \vrule & \hfil \ $#$ \ \hfil \vrule\cr \noalign{\hrule} Switch & \Delta_0 \cr \noalign {\hrule} 1 & t^4+2t^2+1\cr \noalign {\hrule} 2 & 3/4t^4+3/2t^3+9/4t^2+3/2t+3/4 \cr \noalign {\hrule} 3 & 3/64t^4+3/16t^3+9/16t^2+3/4t+3/4\cr \noalign {\hrule} 4 & 9t^4+12t^3+10t^2+4t+1\cr \noalign {\hrule} }}$$ For the Kishino knots, in all cases $\Delta_0$ is zero. The next table shows $\Delta_1$, the 1-st ideal polynomial for these knots as determined by the same set of switches. $$\vbox{ \offinterlineskip \halign{\strut \vrule \hfil \quad $#$ \quad \hfil \vrule & \hfil \ $#$ \ \hfil \vrule & \hfil \ $#$ \ \hfil \vrule & \hfil \ $#$ \ \hfil \vrule\cr \noalign{\hrule} Switch & K1 & K2 & K3 \cr \noalign {\hrule} 1 & t^4+5/2t^2+1 & t^4+5/2t^2+1 & t^4+5/2t^2+1 \cr \noalign {\hrule} 2 & 1/2t^4+3/2t^2+1 & 2t^4+3t^2+1 & 75/64 \cr \noalign {\hrule} 3 & 1/8t^4+3/4t^2+1 & 1/32t^4+3/8t^2+1 & 75/64 \cr \noalign {\hrule} 4 & 3t^4+7/2t^2+1 & 27t^4+21/2t^2+1 & 58381/36450 \cr \noalign {\hrule}}}$$ Now consider the following knot which has trivial Jones' polynomial and trivial fundamental rack. \diagram In this case for the commutative Alexander switch $$\Delta_0=(s-1)(st-1)(t^2-1)$$ up to multiplication by a unit. For the quaternion switches we get $$\vbox{ \offinterlineskip \halign{\strut \vrule \hfil \quad $#$ \quad \hfil \vrule & \hfil \ $#$ \ \hfil \vrule\cr \noalign{\hrule} Switch & \Delta_0 \cr \noalign {\hrule} 1 & t^8-2t^4+1 \cr \noalign {\hrule} 2 & 2t^8+4t^7+4t^6-3t^4-2t^3+t^2+2t+1 \cr \noalign {\hrule} 3 & 1/512t^8+1/128t^7+1/128t^6-1/32t^5-3/32t^4+1/2t^2+t+1 \cr \noalign {\hrule} 4 & 243t^8+324t^7+216t^6+36t^5-24t^4-12t^3+4t^2+4t+1 \cr \noalign {\hrule} }}$$ \vfill\eject \section{Questions and further developements} It seems likely that the methods described above will not yield new invariants for real knots and links. However it is quite possible that in identifying the old invariants new properties of these will be found. This awaits a further paper. On the other hand these methods provide a rich tableaux of virtual knot and link invariants. 
Much more can be done to understand the set of switches described in section 8. For example are there 2$\times$2 matrices in which the entries are polynomials in a real variable with quaternionic coefficients other than those described in the dodge which replaces $S$ by $S(t)$? The answer must surely be yes. However this has so far proved too much for a computer search to cope with. We can consider the set of switches with quaternion entries as points of some quaternionic variety. Since each point gives rise to a polynomial and there are many such polynomials it is likely that this variety has many components. Each component contains a copy of the real line. But is that all? In general we have a universal non-commutative algebra. This is given by the four relations of section 2. What can we say about this algebra? Certainly more than is indicated by its representation onto the quaternionic variety. \section{ References} \refe \cite{A} J. W. Alexander, Topological Invariants of Knots and Links, Trans. Amer. Math. Soc. 30 275-306 (1928) \cite{As} Helmer Aslaksen, Quaternionic Determinants, Math. Intel. Vol 18 no. 3 (1996) \cite{JB} J. Birman, Braids, Links and Mapping Class Groups, Princeton University Press (1975) \cite{BF} S. Budden and R. Fenn, The Equation $[B,(A-1)(A,B)]=0$ and Virtual Knots and Links, preprint. \cite{CF} R.H. Crowell and R.H. Fox, Introduction to Knot Theory. Ginn and Co. (1963) \cite{Dr} V. Drinfeld, On some Unsolved Problems in Quantum Group Theory, Quantum Groups, Lectures Notes in Maths. 1510, Springer 1-8 (1990) \cite{FJK} R. Fenn, M. Jordan, L. Kauffman, The Birack; an Invariant of Virtual Knots and Links, to appear in Topology and its Applications. Available from \hfill\break www.maths.sussex.ac.uk///Staff/RAF/Maths/ \cite{FR} R. Fenn, C. Rourke. Racks and Links in Codimension Two. JKTR, No. 4, 343-406 (1992). \cite{FRR} R. Fenn, R. Rimanyi and C. Rourke The braid-permutation group. Topology 36, No.1, 123-135 (1997). \cite{KK} N. Kamada and S. Kamada, Abstract Link Diagrams and Virtual Knots,\hfill\break preprint. \cite{K} L.Kauffman. Virtual Knot Theory, European J. Comb. Vol 20, 663-690, (1999) \cite{M} V. O. Manturov, On Invariants of Virtual Links, Acta Math. 00 1-15 (2002) \cite{KS} Toshimasa Kishino and Shin Satoh, A note on non-classical virtual knots, preprint \cite{SW}D.S. Silver and S.G. Williams, Polynomial Invariants of Virtual Links, JKTR 12 (2003) 987-1000. \section{Tables} In this section we give some examples of switches found by a combination of theory and computer search. If the switch $S$ appears then the alternatives, $S^{-1}, S^\dagger, S^*$, are not included: neither are those obtained by a permutation of $i, j, k$ up to sign. This pruning considerably reduces the number of switches. The first table contains switches with coefficients in the Hurwitz ring. Table 2 contains switches where at least two of the entries have integer coefficients. Since $|B||C|=1$ we have restricted ourselves to integer coefficients for $B$ and $C$ in the set $\{-1,0,1\}$. The tables have overlaps. For example switches 1 are both variants of the Budapest switch. The software to search for switches and calculate polynomials can be downloaded from http://www.layer8.co.uk/maths/braids/ \newcount\tablelinenum \tablelinenum=0 \def\nextline{\global\advance\tablelinenum by 1 \the\tablelinenum } \bigskip \centerline{\bf Table 1. 
Hurwitz Coefficients} \bigskip \tablelinenum=0 \hbox{\kern-3em$$\vbox{ \offinterlineskip \halign{\strut \vrule \hfil \quad $\small #$ \ \hfil \vrule & \hfil \ $\small #$ \ \hfil \vrule & \hfil \ $\small #$ \ \hfil \vrule & \hfil \ $\small #$ \ \hfil \vrule & \hfil \ $\small #$ \ \hfil \vrule\cr \noalign{\hrule} & A & B & C & D\cr \noalign{\hrule} \nextline & 1+i & j & -j & 1+i \cr \noalign{\hrule} \nextline & 1+i & 1/2-1/2i+1/2j+1/2k & 1/2-1/2i-1/2j-1/2k & 1/2+1/2i+1/2j-1/2k \cr \noalign{\hrule} \nextline & 1+i & -1/2+1/2i+1/2j+1/2k & -1/2+1/2i-1/2j-1/2k & 1/2+1/2i-1/2j+1/2k \cr \noalign{\hrule} \nextline & 1/2+1/2i+1/2j+1/2k & 1/2+1/2i-1/2j-1/2k & 1/2-1/2i+1/2j-1/2k & 1+k \cr \noalign{\hrule} \nextline & 1/2+1/2i+1/2j+1/2k & -1/2+1/2i+1/2j-1/2k & -1/2-1/2i+1/2j+1/2k & 1+j \cr \noalign{\hrule}}}$$} \centerline{\bf Table 2. At least Two Integer Coefficients} \bigskip \tablelinenum=0 \hbox{\kern-3em$$\vbox{ \offinterlineskip \halign{\strut \vrule \hfil \quad $\small #$ \ \hfil \vrule & \hfil \ $\small #$ \ \hfil \vrule & \hfil \ $\small #$ \ \hfil \vrule & \hfil \ $\small #$ \ \hfil \vrule & \hfil \ $\small #$ \ \hfil \vrule\cr \noalign{\hrule} & A & B & C & D\cr \noalign{\hrule} \nextline & 1-j & -k & k & 1-j \cr \noalign{\hrule} \nextline & 1+i & 1/2j+1/2k & -j-k & 1+i \cr \noalign{\hrule} \nextline & 1+i & 1-i-j-k & 1/4-1/4i+1/4j+1/4k & 1/2+1/2i-1/2j+1/2k \cr \noalign{\hrule} \nextline & 1-i & -j-k & 1/2j+1/2k & 1-i \cr \noalign{\hrule} \nextline & 1+j & 1-j-k & 1/3-1/3j+1/3k & 1/3+2/3i+1/3j \cr \noalign{\hrule} \nextline & 1-k & -1-i-j-k & -1/4+1/4i+1/4j-1/4k & 1/2-1/2i+1/2j-1/2k \cr \noalign{\hrule} \nextline & 1-k & -1-j-k & -1/3+1/3j-1/3k & 1/3-2/3i-1/3k \cr \noalign{\hrule} \nextline & 1/2+1/2i-1/2j-1/2k & -1/4+1/4i-1/4j+1/4k & -1-i-j-k & 1-j \cr \noalign{\hrule} \nextline & 1/2+1/2i+1/2j-1/2k & 1/4+1/4i-1/4j+1/4k & 1-i-j-k & 1+j \cr \noalign{\hrule} \nextline & 1/3+2/3i-1/3j & -1/3-1/3j+1/3k & -1-j-k & 1-j \cr \noalign{\hrule} \nextline & 1/3-2/3i+1/3k & 1/3+1/3j-1/3k & 1-j-k & 1+k \cr \noalign {\hrule}}}$$} \bye
\section{Introduction} The collective quadrupole degrees of freedom in nuclei yield rotational and vibrational excitations. They are described by the Bohr-Mottelson Hamiltonian \cite{BM}. The wavefunctions for rotations and $\beta$ and $\gamma$ vibrations in strongly deformed axially symmetric nuclei have been greatly improved in the Rotation Vibration Model (RVM) by Faessler and Greiner \cite{Fae1,Fae2,Fae3,Fae4}, taking the interaction between the rotations and vibrations into account with an axially symmetric equilibrium deformation. \par Recently, the Ba and Xe region with mass numbers A$\approx$120 to 130 has been studied experimentally [6, 9-15] and interpreted by several models: these works compare different approaches of the Interacting Boson Approximation (IBA) in the U(5) and the O(6) limits (with different E2 operators) \cite{Iachello}, the axial Rotation Vibration Model (RVM), and the Asymmetric Rotor Model (ARM) \cite{Dawydow}. They exclude the ARM \cite{Lie,Sei} because it cannot describe a K=0 band built on $\gamma$ vibrations \cite{Sei} and because of the wrong staggering of the $2^{+}$, $3^{+}$, $4^{+}$, $5^{+}$, $\ldots$ excitation energies in the K=2 band. The argument with the K=0 band built on $\gamma$ vibrations is trivial: the ARM does not contain $\gamma$ vibrations and thus cannot describe such a band. Because the staggering in the ($K$=2) quasi $\gamma$ band is not described correctly in the ARM, but is given correctly in the RVM, the staggering seems to be due to the rotation vibration interaction. Although the RVM \cite{Fae1,Fae2,Fae3,Fae4} does quite well in describing the data, the Cologne group concludes \cite{Sei} that the O(6) IBA with the additional parameter $\chi$ in the E2 transition operator agrees on average better with the data. Here, we want to show that this is connected with the restriction of the RVM \cite{Fae1,Fae2,Fae3,Fae4} to axial symmetry, while the IBA also allows for triaxiality (O(6) = $\gamma$-unstable limit). We present here an extension of the axial RVM to triaxiality (Triaxial Rotation Vibration Model = TRVM). This model has the same number of parameters as the IBA used in \cite{Lie,Sei}. We obtain an agreement with the data as good as that of the IBA.\par In Section 2 we present the model, which is an extension of the RVM \cite{Fae1,Fae2,Fae3,Fae4} allowing for a triaxial equilibrium deformation (TRVM). In Section 3 we compare the excitation energies and the branching ratios in the best measured Xe and Ba isotopes. Section 4 summarizes the main results. \section{The Triaxial Rotation Vibration Model} In the Rotation Vibration Model \cite{Fae1,Fae2,Fae3,Fae4} one characterizes, as in the Bohr-Mottelson Model \cite{BM}, the surface of the nucleus by quadrupole deformations: \begin{equation} R(\theta,\phi)=R_0(1+\sum_{\mu}\alpha_{2\mu}Y_{2\mu}(\theta,\phi)) \end{equation} The deformation parameters $\alpha_{2\mu}$ are considered as dynamical variables and depend classically on time. Their behaviour is governed by the Hamiltonian: \begin{equation}\label{Ham} H=\frac{1}{2}B\sum_{\mu}\dot{\alpha}_{2\mu}^{\dag}\dot{\alpha}_{2\mu}+V(\alpha_{2\mu}) \end{equation} The information about the equilibrium shape of the nucleus is contained in the potential energy $V(\alpha_{2\mu})$.
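The quadrupole parametrization of the surface is easy to evaluate numerically. A minimal sketch (Python with numpy and scipy; purely illustrative and not part of the model, and note scipy's argument convention for the spherical harmonics):
\begin{verbatim}
import numpy as np
from scipy.special import sph_harm

def surface_radius(theta, phi, alpha2, R0=1.0):
    """R(theta, phi) = R0 (1 + sum_mu alpha_{2 mu} Y_{2 mu}(theta, phi)).
    alpha2 maps mu = -2..2 to deformation parameters; theta is the polar and
    phi the azimuthal angle.  scipy's sph_harm takes the azimuthal angle
    first: sph_harm(m, l, phi, theta)."""
    R = np.ones_like(theta, dtype=complex)
    for mu, a in alpha2.items():
        R += a * sph_harm(mu, 2, phi, theta)
    return R0 * R.real   # real for hermitian (physically allowed) alpha_{2 mu}

# Example: an axially symmetric shape with only alpha_{20} = 0.25 non-zero
theta = np.linspace(0.0, np.pi, 5)
print(surface_radius(theta, np.zeros_like(theta), {0: 0.25}))
\end{verbatim}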
The five quadrupole degrees of freedom $\alpha_{2\mu}$ can be replaced by the three Euler angles $(\phi, \theta, \psi)$ for rotations and two deformation parameters \begin{equation}\label{trans} \alpha_{2\mu}={\cal D}_{\mu 0}^2(\phi, \theta, \psi) (\beta_0 + a_0^{\prime}(t))+ ({\cal D}_{\mu 2}^2+{\cal D}_{\mu -2}^2) (a_2 + a_2^{\prime}(t))\hspace*{1cm}. \end{equation} Here $\beta_0$ and $a_2$ give the equilibrium shape of the nucleus while $a_0^{\prime}(t)$ and $a_2^{\prime}(t)$ describe vibrations around this shape. They are connected with the Bohr-Mottelson parameters through: \begin{eqnarray} \beta_0 + a_0^{\prime}(t) & = & \beta \cos(\gamma)\nonumber\\ a_2 + a_2^{\prime}(t) & = & \frac{1}{\sqrt{2}}\beta \sin(\gamma) \end{eqnarray} In deriving the Hamiltonian for the TRVM we follow very closely the derivation for the axial RVM in Ref.\cite{Fae3}. With the transformation (\ref{trans}) the Hamiltonian (\ref{Ham}) can be written (see eqs. (3) and (4) of Ref.\cite{Fae3}): \begin{equation}\label{Ham1} H=T+V=T_{rot}+T_{vib}+T_{rotvib}+V_{a_0 a_2}(a_0^{\prime}, a_2^{\prime}) \end{equation} The different terms are obtained by the straightforward transformation (\ref{trans}) and the expansion in powers of $a_0^{\prime}/\beta_0$ and $a_2^{\prime}/a_2$ up to second order. This assumes that the $\beta$ and $\gamma$ deformation is large compared to the vibrational amplitudes. This approach therefore cannot describe spherical or nearly spherical nuclei. The individual terms in the TRVM Hamiltonian are given by: \begin{eqnarray}\label{Ham2} T_{rot} & = & \frac{\hat{{\bf I}}^2-\hat{I}_3^2}{2I_0}+ \frac{\hat{I}_3^2}{16Ba_2^2}\nonumber\\ T_{vib} & = & -\frac{\hbar^2}{2B}(\frac{\partial^2}{\partial a_0^{\prime 2}}+ \frac{1}{2}\frac{\partial^2}{\partial a_2^{\prime 2}})\nonumber\\ T_{rotvib} & = & \frac{\hat{{\bf I}}^2-\hat{I_3}^2}{2I_0}f_0(\beta_0,a_2,a_0^{\prime},a_2^{\prime})\\ {} & {} & + \frac{\hat{I}_{+}^2+\hat{I}_{-}^2}{2I_0}f_1(\beta_0,a_2,a_0^{\prime},a_2^{\prime})\nonumber\\ {} & {} & + \frac{\hat{I}_3^2}{16Ba_2^2}f_2(a_2,a_2^{\prime})+2\epsilon\frac{a_0^{\prime}}{\beta_0} \nonumber \end{eqnarray} The functions $f_0$, $f_1$, and $f_2$ are obtained by the expansion mentioned above: \begin{eqnarray} f_0 & = & -2\frac{a_0^{\prime}}{\beta_0}+3\frac{a_0^{\prime 2}}{\beta_0^2}+ \frac{2}{\beta_0^2}(a_2^2+2a_2a_2^{\prime}+a_2^{\prime 2}) \nonumber\\ f_1 & = & \frac{1}{3}\sqrt{6}\frac{1}{\beta_0}(a_2+a_2^{\prime})- \sqrt{6}\frac{1}{\beta_0^2}a_0^{\prime}(a_2+a_2^{\prime})\nonumber \\ f_2 & = & -2\frac{a_2^{\prime}}{a_2}+3\frac{a_2^{\prime 2}}{a_2^2}\nonumber \end{eqnarray} For the potential energy we assume a harmonic oscillator potential around the equilibrium shape: \begin{equation}\label{Ham3} V(a_0^{\prime},a_2^{\prime})=\frac{1}{2}C_0a_0^{\prime 2}+C_2a_2^{\prime 2} \end{equation} For the diagonalization of the Hamiltonian we choose a basis of eigenstates of the Hamiltonian $H_0=T_{rot}+T_{vib}+V$.
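As an aside, the expansion functions $f_0$, $f_1$, $f_2$ and the relation to the Bohr-Mottelson parameters given above are simple algebraic expressions; a direct transcription into code (an illustrative Python sketch, not part of the model itself) reads:
\begin{verbatim}
import numpy as np

def f_functions(beta0, a2, a0p, a2p):
    """Second-order expansion functions f0, f1, f2 entering T_rotvib."""
    f0 = (-2.0*a0p/beta0 + 3.0*(a0p/beta0)**2
          + 2.0/beta0**2 * (a2**2 + 2.0*a2*a2p + a2p**2))
    f1 = (np.sqrt(6.0)/3.0 * (a2 + a2p)/beta0
          - np.sqrt(6.0) * a0p*(a2 + a2p)/beta0**2)
    f2 = -2.0*a2p/a2 + 3.0*(a2p/a2)**2
    return f0, f1, f2

def bohr_mottelson(beta, gamma):
    """(beta, gamma) -> (beta0 + a0', a2 + a2') of the transformation above."""
    return beta*np.cos(gamma), beta*np.sin(gamma)/np.sqrt(2.0)
\end{verbatim}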
The eigenstates of the unperturbed Hamiltonian $H_0$ are easily obtained and are labelled by the total angular momentum $I$, the projection $K$ on the intrinsic 3-axis and the number of phonons of the $\beta$ $(n_0)$ and $\gamma$ $(n_2)$ vibrations: \begin{equation}\label{basis} |IK,n_2 n_0\rangle= \left(\frac{2I+1}{16\pi^2}\frac{1}{1+\delta_{K0}} \right)^{\frac{1}{2}}({\cal D}_{MK}^I+(-)^I{\cal D}_{M-K}^I) \sqrt{\frac{1}{n_0 !}}\left(\hat{\beta}_0^{\dag}\right)^{n_0} |0\rangle \sqrt{\frac{1}{n_2 !}}\left(\hat{\beta}_2^{\dag}\right)^{n_2} |0\rangle \end{equation} Due to the rotation vibration part $T_{rotvib}$ of the Hamiltonian, several eigenstates of $H_0$ are mixed. The unperturbed energies, the eigenvalues of $H_0$, are easily obtained. They are given by: \begin{eqnarray} E^{IK}_{n_2n_0} & = & (I(I+1)-K^2)\frac{\hbar^2}{2I_0}+\frac{K^2}{16Ba_2^2}+ (n_0+\frac{1}{2})E_{\beta}+(n_2+\frac{1}{2})E_{\gamma}\nonumber\\ \mbox{with}\hspace*{0.5cm}I& = & \left\{\begin{array}{lc} 0,2,4,6,\ldots & \mbox{for K=0}\\ K,K+1,K+2,\ldots & \mbox{for K$\neq$0} \end{array}\right.\nonumber\\ E_{\beta}=\hbar\sqrt{\frac{C_0}{B}}& ; &E_{\gamma}=\hbar\sqrt{\frac{C_2}{B}}\\ \epsilon& = &\frac{\hbar^2}{I_0}\nonumber\\ I_0 & = & 3B\beta_0^2\nonumber \end{eqnarray} The operators $\hat{\beta}^{\dag}$ are creation operators for harmonic oscillations. We diagonalize the Hamiltonian (\ref{Ham1}), (\ref{Ham2}), (\ref{Ham3}) in the complete basis (\ref{basis}). For numerical reasons the Hilbert space has to be truncated. We choose the 31 lowest basis states with $K\leq 6$ and up to two phonons ($n_2+n_0\leq 2$). In notation (\ref{basis}) the most important states $|IK,n_2 n_0\rangle$ are: \begin{eqnarray} |I0,00\rangle & \ldots\ldots & \mbox{ground state band}\nonumber\\ |I2,00\rangle & \ldots\ldots & \mbox{K=2 quasi}\hspace*{1mm}\gamma \hspace*{1mm}\mbox{band}\nonumber\\ |I0,10\rangle & \ldots\ldots & \mbox{one-phonon}\hspace*{1mm}\gamma \hspace*{1mm}\mbox{band}\\ |I0,01\rangle & \ldots\ldots & \mbox{one-phonon}\hspace*{1mm}\beta \hspace*{1mm}\mbox{band}\nonumber\\ |I4,00\rangle & \ldots\ldots & \mbox{K=4 quasi}\hspace*{1mm}\gamma \hspace*{1mm}\mbox{band}\nonumber \end{eqnarray} The TRVM has, like the IBA, four independent parameters: the vibration energies $E_{\beta}$ and $E_{\gamma}$, the inverse moment of inertia $\epsilon=\hbar^2/I_0$ and the triaxial equilibrium deformation $a_2$. \par The Hilbert space is too limited to describe the full variation of the moment of inertia. As in the competing IBA \cite{Kir} we make for the energies the Lipas ansatz \begin{equation} E=E_0/(1+\alpha_L E_0)\hspace*{1cm} \end{equation} Here $E_0$ is the excitation energy obtained after diagonalization. The Lipas parameter $\alpha_L$ is quite small ($\approx 10^{-4}\,$keV$^{-1}$). It describes a variable moment of inertia. We would like to note that the number of parameters including the Lipas parameter is the same in the TRVM and in the IBA (including the effective boson charge in the IBA). In the O(6) limit the branching ratios of the IBA depend only on the parameter $\chi$. But similarly most of the branching ratios in the TRVM depend essentially only on the effective triaxiality $a_2+a_2^{\prime}$. In addition, one could practically omit the $\beta$ band and therefore the parameter $E_{\beta}$ in the TRVM, since the results for the branching ratios depend only very weakly on the $\beta$ band and the agreement would be equally good without it. On the other hand, this also means that our prediction for the $\beta$ bandhead is not reliable.
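For orientation, the unperturbed band energies and the Lipas correction above can be written compactly in code. The following sketch (Python; the parameter values are placeholders for illustration and are not the fitted values of Tables 1 and 2) generates the unperturbed ground-state band:
\begin{verbatim}
def unperturbed_energy(I, K, n2, n0, eps, E_beta, E_gamma, a2_over_beta0):
    """E^{IK}_{n2 n0} of the text.  The K^2/(16 B a_2^2) term is rewritten with
    I_0 = 3 B beta_0^2 and eps = hbar^2/I_0 as (3/16) eps / (a_2/beta_0)^2."""
    rot = 0.5*eps*(I*(I + 1) - K**2) + (3.0/16.0)*eps*K**2/a2_over_beta0**2
    vib = (n0 + 0.5)*E_beta + (n2 + 0.5)*E_gamma
    return rot + vib

def lipas(E0, alpha_L):
    """Lipas ansatz E = E0/(1 + alpha_L E0), mimicking a variable moment of inertia."""
    return E0/(1.0 + alpha_L*E0)

# Example: unperturbed ground-state band (K = 0, no phonons), placeholder parameters in keV
eps, E_beta, E_gamma, a2_over_beta0, alpha_L = 30.0, 1200.0, 1100.0, 0.15, 1.0e-4
band = [unperturbed_energy(I, 0, 0, 0, eps, E_beta, E_gamma, a2_over_beta0)
        for I in range(0, 11, 2)]
print([round(lipas(E - band[0], alpha_L), 1) for E in band])  # excitation energies above 0+
\end{verbatim}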
Both models therefore effectively have the same number of parameters. \section{Comparison with the Data} \subsection{Energy spectra} In Figure 1 we display our results for the energy spectrum of $^{130}$Ba. The free parameters which give an overall best fit of the experimental data from Ref.\cite{Kir} are listed in Table 1. For comparison, Figure 1 also shows the calculated IBA values and the experimental results. For the low spin states in the ground state band both theoretical results are in nice agreement with experiment. As a result of the applied Lipas fitting procedure, lower spin states such as the 4$^+$, 6$^+$ and also the 8$^+$ state lie slightly higher, while the 10$^+$ state lies a little below the experimental values. This is due to the stronger effect of the Lipas procedure on higher spin states.\par For the excited $K$=2 band (quasi $\gamma$ band) we obtain in the TRVM the experimentally observed staggering of even-odd angular momentum states. A pure triaxial rotor model without vibrations but with a $\gamma$ deformation of the nuclear surface does not exhibit this feature. The staggering thus seems to be the result of the rotation vibration coupling. We would like to note that the staggering increases with excitation energy in the TRVM. In contrast, in the IBA the staggering seems to be constant with growing excitation energy. Both models agree qualitatively with the data in the $\gamma$ band but not in detail.\par The $K$=0 band is interpreted in the TRVM as the $\gamma$ one-phonon band. The bandhead ($0^+$) at 1179 keV is fitted through the vibration energy $E_{\gamma}$. For the TRVM the 0$^+$--2$^+$ splitting is nicely reproduced and in better agreement with experiment than the IBA. The $\beta$ vibrational bandhead is at around 1150 keV. Up to now, the levels of this band have not been identified in the experimental analysis, which might be difficult because the two 0$^+$ energies are very close. Since our values for the experimentally observed energy levels are almost unaffected by the $\beta$ band, we do not display it in Figure 1.\par \marginpar{Table 1} \marginpar{Figure 1} In Figure 2 we show our results for the $^{126}$Xe spectrum. There, a different set of parameters must be used in order to fit the experimental data best. This set is displayed in Table 2. The agreement with the experimental level spacings is about the same in the two theoretical models. This holds for the ground state band as well as for the quasi $\gamma$ band. In the TRVM the staggering in the low lying states is rather weak. The free parameters $a_2 /\beta_0$ and $E_{\beta}$ have to be increased relative to those in $^{130}$Ba to get the best agreement. The excited $0^+$ band is in good agreement with experiment and the level spacings obtained in the TRVM are in better agreement with the experiment than the IBA calculations.\\[12pt] \marginpar{Table 2} \marginpar{Figure 2} \subsection{E2 Branching Ratios} The electric quadrupole operator can again be obtained from the one in Ref.\cite{Fae3} by replacing $a_2^{\prime}\rightarrow a_2+a_2^{\prime}$. In collective intrinsic coordinates, it is given by: \begin{eqnarray} m(E2,\mu)& = &\frac{3Z}{4\pi}R_0^2\left[{\cal D}_{\mu 0}^2\left(\beta_0 \left(1+\frac{2}{7}\left(\frac{5}{\pi}\right)^{\frac{1}{2}}\beta_0\right) \right)+{\cal D}_{\mu 0}^2 a_0^{\prime}\left(1+\frac{4}{7} \left(\frac{5}{\pi}\right)^{\frac{1}{2}}\beta_0\right)+\right.\nonumber\\ {}&{}& {\cal D}_{\mu 0}^2\frac{2}{7}\left(\frac{5}{\pi}\right)^{\frac{1}{2}} (a_0^{\prime 2}-2(a_2+a_2^{\prime})^2)+\\ {}&{}&\left.
({\cal D}_{\mu 2}^2+ {\cal D}_{\mu -2}^2)\left(\left(1-\frac{4}{7}\left(\frac{5} {\pi}\right)^{\frac{1}{2}}\beta_0\right)(a_2+a_2^{\prime})- \frac{4}{7}\left(\frac{5}{\pi}\right)^{\frac{1}{2}}a_0^{\prime}(a_2+ a_2^{\prime})\right)\right]\nonumber \end{eqnarray} The reduced E2 transition probabilities are defined by \cite{Fae1,Fae2,Fae3,Fae4}: \begin{equation} B(E2;I_i\rightarrow I_f)=\frac{2I_f+1}{2I_i+1}| \langle I_i||m(E2)||I_f\rangle|^2 \end{equation} We have calculated the measured B(E2) ratios in order to compare them with the experimental results for both $^{130}$Ba and $^{126}$Xe. In Table 3 the results for $^{130}$Ba are given. We see a fair agreement between the results of the TRVM and the experiment, as well as those of the IBA, except for the transition from the initial $4^{+}_{3}$ state. Here our result is about two orders of magnitude larger, while the IBA result is two orders of magnitude lower than the experimental ratio B(E2,$4^{+}_{3}\rightarrow 2^{+}_{2}$)/ B(E2,$4^{+}_{3}\rightarrow 4^{+}_{2}$). A similar discrepancy exists also for the transition ($4^+_3 \rightarrow 2^+_1$), as can be seen from the last entry in Table 3. Any B(M1) contribution to the $4^+_3\rightarrow 4^+_2$ transition cannot cure this disagreement.\marginpar{Table 3}\\[12pt] Table 4 shows the ratios of certain B(E2) transitions in $^{126}$Xe. Here the agreement between the models and the experiment is much better, and both the Interacting Boson Approximation and the Triaxial Rotation Vibration Model agree nicely with the data. This might be because only pure quadrupole transitions are used in Xe to define the ratios, while in the case of Ba the uncertainty in the E2/M1 ratio of the normalising states is ignored and the transitions are assumed to be pure E2 transitions.\\[12pt] \marginpar{Table 4} \section{Conclusions} The Triaxial Rotation Vibration Model (TRVM) is successfully applied to Ba and Xe nuclei to obtain low lying energy levels. These results compare well with those obtained in the O(6) limit of the IBA and with the experimental values, both for the spectra and for the relative E2 transition rates. The bandhead of the $\beta$ vibrational band has not been seen in experiment so far. Unfortunately, we are not able to specify very precisely the energy region of the bandhead or of other levels of the $\beta$ band, since our calculated values are not very sensitive to the parameter $E_{\beta}$.\\[12pt] {\bf Acknowledgment}: We would like to thank Dr.\ Ingo Wiedenh\"{o}ver and Prof.\ Peter von Brentano for providing us with the IBA results and for fruitful discussions.
\section{Introduction} Since the \cite{Babcock1960} measurement of the surface magnetic field $B_s$ (the average over the stellar visible disk of the magnetic field modulus) of the star HD\,215441, extensive measurements of $B_s$ for Magnetic Chemically Peculiar (MCP) stars have been reported by \cite{Preston1971}, \cite{Mathys1997} (=M97) and \cite{Mathys2017} (=M17). Nowadays, $B_s$ can be directly and accurately measured from the Zeeman splitting of the Fe{\sc ii}\,6149.258\,\AA\, spectral line, because its Zeeman pattern reduces to a doublet, the two $\pi$ subcomponents coinciding with the two $\sigma$ components \citep{Mathys1990}. In the framework of the oblique rotator model by \cite{Babcock1949} and \cite{Stibbs1950}, MCP stars present photometric, spectroscopic, and magnetic variability with a single period as a consequence of the stellar rotation. Measuring the rotational period from the modulation of the surface magnetic field given by the Fe{\sc ii}\,6149.258\,\AA\, line is probably the most reliable approach for very long-term variables: all the necessary information is encoded in a single spectrum, with no need for standard stars or zero-points as in the case of photometry. This paper reports the results of an observational campaign started in 2001 to measure and monitor the surface field of 36 MCP stars whose rotational period was expected to be very long, even decades, because of the sharpness of their spectral lines. Field measurements are from high-resolution spectroscopy of the Fe{\sc ii}\,6149.258\,\AA\, line. To extend the time frame as much as possible, we have also explored all public astronomical archives and mined the literature. We have also analyzed some of our spectra dating back to 1995. In Section 2, we present: 1) the high-resolution spectrographs used, 2) the reduction methods, 3) the procedure to measure $B_s$ from the Fe{\sc ii}\,6149.258\,\AA\, spectral line, and 4) the method to establish the variability period. In Section 3, we present the determined rotational periods star by star. Where possible, these periods have been checked against, or determined together with, the effective magnetic field $B_e$ (the average over the stellar visible disk of the magnetic field components along the line of sight) and/or photometric measurements. In the conclusions, we present the relation found between rotational periods and magnetic field strength. \begin{table} \caption{List of spectrographs used to measure the stellar surface magnetic fields. The spectral resolution R [k] = $\frac{\lambda}{\Delta\lambda}$/1000 is given. For each instrument, $N$ is the number of $B_s$ measurements from the spectra acquired here. A two-letter identification is used for each instrument.} \begin{tabular}{llrrl}\multicolumn2c{ \rm Spectrograph@Telescope} & {\rm R [k]} &{N}& {\rm Reference} \\\hline CAOS@OAC & CS & 71 & 123 & \cite{Leone2016a} \\ HARPS@ESO\,3.6m & HS & 115 & 20 & \cite{Mayor2003} \\ HARPS-N@TNG & HN & 115 & 81 & \cite{Cosentino2013} \\ UCLES@AAT & UC & 120 & 22 & \cite{Horton2012} \\ SARG@TNG & SG & $\le$ 164 & 53 & \cite{Gratton2001} \\ CES@ESO\,3.6m & CE & 220 & 2 & \cite{Enard1982} \\ \hline \end{tabular}\label{Tab_Spectrographs} \end{table} \begin{table} \caption{Archives hosting high-resolution spectra of long-period magnetic chemically peculiar stars.
For any instrument, $N$ is the number of $B_s$ measurements from archive spectra.} \begin{tabular}{llrrl} \multicolumn{2}{c}{\rm Spectrograph@Telescope} & \rm R [k]& N & \rm Reference \\\hline [email protected] & EE & 40 & 6 &\cite{Baranne1996} \\ [email protected] & FS & 48 & 6 &\cite{Kaufer1999} \\ EMMI@ESO-NTT & EM & 60 & 2 &\cite{Dodorico1990} \\ NES@BTA & NS & 60 & 6 &\cite{Panchuk09}\\ ESPaDOns@CFHT & ES & 65 & 85 &\cite{Silvester2012} \\ NARVAL@TBL & NL & 65 & 16 &\cite{Silvester2012} \\ UCLES@AAT & UC & 90 & 10 &\citet{Horton2012} \\ UVES@ESO-UT2 & US & 100 & 72 &\cite{Dekker2000}\\ [email protected] & HS & 115 & 350 &\cite{Mayor2003}\\ GECKO@CFHT & GO & 120 & 21 & \cite{Glaspey1993} \\ [email protected] & CE & 220 & 7 &\cite{Enard1982} \\ \hline \end{tabular}\label{Tab_Archive} \end{table} \begin{table*} \scriptsize \caption{Observed stars and here adopted ephemeris for the $B_s$ variability. The last column is used to state if the $B_s$ and $B_e$ variabilities are in phase as expected for a magnetic dipole. } \begin{tabular}{clcllcc} \hline \label{Tab_Periods} {HD / HDE} & \multicolumn2c{Literature Ephemeris} & {Reference} & \multicolumn2c{ Here determined or adopted Ephemeris } \\ & {JD = 240000+} & {Period (days)} & & {JD = 240000+} & Period (days) \\\hline 965 & $B_e^{\rm min}$ = 51000.0 & 6030$\pm$200 & \cite{Mathys2019_HD965} & $B_e^{\rm min}$ = 51000.0 & 6030 \\ \hline \multirow{2}{*}{2453} & $B_e^{\rm min}$ = 42213.0 & 521$\pm$2 & \cite{Mathys2017} & \multirow{2}{*}{ $B_e^{\rm min}$ = 48440.911 } & \multirow{2}{*}{518.2} \\ & c$_{1}^{\rm max}$ = 48440.911 & 518.2$\pm$0.5 & \cite{Pyper2017} & & \\ \hline \multirow{2}{*}{9996} & y$^{\rm max}$ = 53016.610 &7850$\pm$100 & \cite{Pyper2017} & \multirow{2}{*}{$B_s^{\rm max}$ = 49200.0 } & \multirow{2}{*}{7850} \\ & $B_e^{\rm min}$ = 33301.360 &7936.522 & \cite{Bychkov2019} & & & \\ \hline \multirow{3}{*}{12288} & $B_e^{\rm max}$ = 48499.87 & 34.9$\pm$0.2 & \cite{Wade2000} & \multirow{3}{*}{$B_e^{\rm max}$ = 51131.9} & \multirow{3}{*}{34.993$\pm$0.003} \\ & v$^{\rm max}$ = 51131.772 & 34.99$\pm$0.01 & \cite{Pyper2017} & & & \\ & V$^{\rm max}$ = 57218.6 & 35.73$\pm$ 0.2 & \cite{Bernhard2020} & & & \\ \hline \multirow{2}{*}{\rm \,14437 } & V$^{\rm max}$ = 57077.7 & 26.78$\pm$0.1 & \cite{Bernhard2020} & \multirow{2}{*}{v$^{\rm max} = 49230.528$} & \multirow{2}{*}{26.734$\pm$0.007} \\ & $B_e^{\rm max}$ = 48473.846 & 26.87$\pm$0.02 & \cite{Wade2000} & & \\ \hline \multirow{2}{*}{\rm \,18078 } & \multirow{2}{*}{$B_{s}^{\rm max} = 49930.0$} & \multirow{2}{*}{$1358\pm$12} & \cite{Pyper2017} & \multirow{2}{*}{$B_{s}^{\rm max} = 49916$} & \multirow{2}{*}{$1352\pm$6} \\ & & & \cite{Mathys2016} & & \\\hline 29578 & & $>>$ 1800 & \cite{Mathys2017} &$B_{\rm s}^{\rm max}$ = 51950.0 & 4000\,/\,9230 \\\hline 47103 & & $>$ 10 & \cite{Wraight2012} & $B_{\rm s}^{\rm max}$ = 50098.99 & 17.683$\pm$0.004 \\\hline 50169 & & 10600$\pm$300 & \cite{Mathys2019_HD50169} & $B_{\rm s}^{\rm max}$ = 41600.0 &10600$\pm$300 \\\hline 51684 & $B_{\rm s}^{\rm max}$ = 49947.0 & 371$\pm$ 6 & \cite{Mathys2019_HD50169} & $B_{\rm s}^{\rm max}$ = 53617 & 366$\pm$1 \\\hline 55719 & & $ >>$ 3650 & \cite{Mathys2017} & \,\,\,\,\,\,\,\,48500 & $\ge$ 14000 \\\hline 61468 & $B_e^{\rm max}$ = 50058.5 & 322$\pm$3 & \cite{Mathys2017} & $B_e^{\rm max}$ = 50058.5 & 321$\pm$1 \\\hline 75445 & & 6.291$\pm$ 0.002\,\, ? 
& \cite{Mathys2017} & & $>$ 5000 \\\hline \multirow{2}{*}{81009 } & v$^{\rm min}$ = 48646.878 & 33.987$\pm$0.002 & \cite{Pyper2017} & \multirow{2}{*}{$B_{\rm s}^{\rm max} = 48645.9$} & \multirow{2}{*}{33.987$\pm$0.002} \\ & v$^{\rm max}$ = 44483.420 & 33.984$\pm$0.055 & \cite{Wade2000_HD81009} & & \\ \hline 93507 & $B_e^{\rm min}$ = 49800.0 & 556$\pm$ 22 & \cite{Mathys1997} & $B_{\rm s}^{\rm max}$ = 48965.0 & 562$\pm$5 \\ \hline 94660 & $B_{\rm s}^{\rm min}$ = 47000.0 & 2800$\pm$200 & \cite{Mathys2017} & $B_{\rm s}^{\rm max}$ = 48284.0 & 2830$\pm$140 \\ \hline \multirow{2}{*}{110066} & \multicolumn2c{phot.\,not\,variable} & \cite{Pyper2017} & & \multirow{2}{*}{ $>$ 10\,500} \\ & $B_{\rm s}^{\rm min}$ = 49826.738 & 6.4769$\pm$0.0011 & \cite{Bychkov2020} & & \\ \hline \multirow{2}{*}{116114} & m$^{\rm max}$ = 54352.057 & 5.3832 & \cite{Wraight2012} & \multirow{2}{*}{$B_{\rm s}^{\rm max} = 40350.0$} & \multirow{2}{*}{ $>$ 17700 } \\ & $B_e^{\rm max}$ = 47539.000 &27.61 & \cite{Mathys2017} & & \\ \hline \multirow{2}{*}{126515} & $B_e^{\rm max}$ = 37015.000 & 129.95 & \cite{Mathys2017} & \multirow{2}{*}{$B_e^{\rm max} = 37015.0$} & \multirow{2}{*}{129.95} \\ & v$^{\rm min}$ = 52031.708 & 129.95$\pm$0.02 & \cite{Pyper2017} & & \\ \hline \multirow{3}{*}{137949} & \multicolumn2c{many decades} & \cite{Landstreet2014} & & \multirow{3}{*}{ $>$ 10\,000} \\ & \multicolumn2c{phot. not variable } & \cite{Pyper2017} & & \\ & $B_e^{\rm max}$ = 38166 & 5195 & \cite{Mathys2017} & & \\ \hline 142070 & $B_e^{\rm max}$ = 49878.2 & 3.3718 & \cite{Mathys2017} & $B_e^{\rm max}$ = 49878.2 & 3.3721$\pm$0.0002 \\\hline 144897 & $B_e^{\rm max} $ = 49133.7 & 48.57$\pm$0.15 & \cite{Mathys2017} & $B_e^{\rm max}$ = 491157.1 & 48.60$\pm$0.02 \\ \hline 150562 & & $>$ 1600 & \cite{Mathys2017} & $B_e^{\rm max}$ = 54317.0 & 2100$\pm$200 \\\hline 154708 & $B_e^{\rm max}$ = 54257.740 & 5.363$\pm$0.003 & \cite{Landstreet2014} & $B_e^{\rm max}$ = 53662.57 & 5.367$\pm$0.001 \\ \hline 318107 & $B_e^{\rm max}$ = 48800.000 & 9.7088$\pm$0.0007 & \cite{Bailey2011_HDE318107} & $B_e^{\rm max}$ = 48800.0 & 9.7089$\pm$0.0002 \\ \hline 165474 & & $>>$3300 & \cite{Mathys2017} & $B_e^{\rm max} $ = 52150.0 & $\ge$ 9900 \\ \hline 166473 & $B_{\rm s}^{\rm max}$ = 48660.0 & 3836$\pm$30 & \cite{Mathys2020_HD166473} & $B_{\rm s}^{\rm max}$ = 48660.0 & 3836 \\ \hline 177765 & & $>> $1800 & \cite{Mathys2017} & & $\ge$13500\\ \hline 178892 & V$^{\rm max}$ = 52708.562 & 8.2549 & \cite{Semenko_HD178892} & $B_s^{\rm max}$ = 52696.850 & 8.2572$\pm$0.0016 \\ \hline 187474 & $B_e^{\rm min}$ = 46766.000 & 2345 & \cite{Mathys2017} & $B_{\rm e}^{\rm min}$ = 47870.0 & 2329$\pm$60 \\ \hline \multirow{2}{*}{188041} & v$^{\rm min}$ = 49904.860 & 223.826$\pm$0.040 & \cite{Pyper2017} & \multirow{2}{*}{$B_e^{\rm max}$ = 49797.921} & \multirow{2}{*}{223.826} \\ & $B_e^{\rm max}$ = 46319.5 & 223.78$\pm$0.10 & \cite{Mathys2017} & & \\ \hline 192678 & $B_e^{\rm max}$ = 44890.170 & 6.4193$\pm$0.003 & \cite{Pyper2017} & $B_s^{\rm max}$ = 49112.76 &6.4199$\pm$0.0001\\ \hline 335238 & \hspace{0.5cm} 47000 & 48.7$\pm$0.1 &\cite{Mathys2017} & $B_e^{\rm max}$ = 57222.7 & 48.985$\pm$0.007 \\ \hline 201601 & $B_e^{\rm min}$ = 52457.1 & 35462.5$\pm$1149 & \cite{Bychkov_HD201601} & $B_e^{\rm max}$ = 52200.0 & 90.49$\times$365.25 \\\hline \multirow{2}{*}{208217} & $B_e^{\rm max}$ = 47028.0 & 8.44475$\pm$0.00011 & \cite{Mathys2017} & \multirow{2}{*}{$B_s^{\rm max}$ = 47027.094} & \multirow{2}{*}{8.445$\pm$0.005} \\ & & 8.317$\pm$0.001 & \cite{David-Uraz2019} & & \\\hline 
\multirow{2}{*}{216018} & & $>$ 10 & \cite{Wraight2012} & \multirow{2}{*}{$B_e^{\rm max}$ = 49531.870} & \multirow{2}{*}{$34.044\pm$0.007} \\ & & $>>$2000 & \cite{Mathys2017} & & \\ \hline \end{tabular} \end{table*} \section{Observations, Archives, data reduction and period search} We have carried out high-resolution spectroscopy of 36 MCP stars with the different instruments listed in Table\,\ref{Tab_Spectrographs}, for a total of 412 new spectra. Data have been reduced by using IRAF routines as described in \cite{Leone2017}. To extend as much as possible the time frame for determining the variability periods, we have exploited all public archives storing high-resolution spectra (Table\,\ref{Tab_Archive}) covering the Fe{\sc ii} 6149.258 \AA\, line. A total of 581 spectra have been retrieved. Following \cite{Mathys1990, Mathys2017, Mathys1997}, measurements of the surface magnetic field $B_s$ have been obtained from the wavelength separation $\Delta\lambda$ of the Zeeman subcomponents of the Fe{\sc ii} 6149.258 \AA\, line: \begin{center} $B_s[G] = 20974\, \Delta\lambda$ [\AA] \end{center} in the weak field approximation. \begin{figure}\center \includegraphics[width=0.45\textwidth]{2G_Fit_HD116114.pdf} \includegraphics[width=0.45\textwidth]{3G_Fit_HD965.pdf} \caption{Distances of the Fe{\sc ii}\,6149.258\,\AA\, line Zeeman components are measured from a 2-gaussian fit (a) and from a 3-gaussian fit including the still unidentified spectral line at $\sim$6148.84\,\AA\ (b).} \label{Fig_gaussians} \end{figure} \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD965_Hs.pdf} \includegraphics[width=0.45\textwidth]{HD965_He.pdf} \caption{HD\,965. $B_s$ and $B_e$ variation.} \label{Fig_HD965} \end{figure} The wavelengths of the Zeeman subcomponents have been obtained with a 2-gaussian fit (top panel of Figure\,\ref{Fig_gaussians}). The error in a surface magnetic field measurement comes from the propagation of the errors in the determination of the subcomponent positions. If a blend is present with the not yet identified spectral line at $\sim$6148.84\,\AA, a third gaussian component has been considered (bottom panel of Figure\,\ref{Fig_gaussians}). In this paper, the measurements of the surface magnetic field by \cite{Mathys1997} (=M97) and \cite{Mathys2017} (=M17) are fundamental. The huge observational effort of these authors represents a significant enlargement of the time base for many stars. Whenever possible, we have retrieved the original spectroscopic data of these authors and obtained a measure of the surface magnetic field. On average, our measurements are 50 G smaller than the Mathys values. For this reason, whenever the original spectra of Mathys and coworkers were not available, we have combined the $B_s$ values published by M97 and M17 with our values after applying this shift. Variability periods are here determined from the Lomb-Scargle \citep{Press1989} periodogram of the surface magnetic field measurements: $LS(B_s)$. If necessary, the periodograms of other observables ($LS(O^i)$) have also been computed and the final period determined as the position of the highest peak in the product: $LS(B_s, O^1, O^2, ....)=\frac{LS(B_s)}{LS^{max}(B_s)} \times\Pi_i\frac{LS(O^i)}{LS^{max}(O^i)}$. We assume that spurious peaks in any periodogram, due to data sampling and noise, are not in coincidence, so that they cancel out or are mitigated in the product. Normalizing each periodogram to its maximum value is equivalent to assigning the same weight to all datasets.
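The field measurement and the period search described above lend themselves to a compact implementation. The sketch below (Python, using scipy and astropy; a schematic outline under these assumptions, not the actual reduction pipeline used here) fits the two Zeeman subcomponents and combines the normalized Lomb-Scargle periodograms:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from astropy.timeseries import LombScargle

def two_gaussians(wl, c1, c2, d1, d2, s1, s2, cont):
    """Two absorption gaussians on a flat continuum (the resolved Zeeman doublet)."""
    return (cont - d1*np.exp(-0.5*((wl - c1)/s1)**2)
                 - d2*np.exp(-0.5*((wl - c2)/s2)**2))

def surface_field(wl, flux, p0):
    """Mean field modulus from the split of Fe II 6149.258 A (weak-field approximation)."""
    popt, pcov = curve_fit(two_gaussians, wl, flux, p0=p0)
    delta_lambda = abs(popt[1] - popt[0])      # separation of the two subcomponents [A]
    return 20974.0*delta_lambda                # B_s in gauss

def combined_periodogram(frequency, datasets):
    """Product of Lomb-Scargle periodograms, each normalized to its own maximum."""
    total = np.ones_like(frequency)
    for time, value in datasets:
        power = LombScargle(time, value).power(frequency)
        total *= power/power.max()
    return total
\end{verbatim}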
Uncertainty in period determination is always controversial and a large number of methods has been proposed in the literature to estimate the period error. In the present framework, the papers by \cite{Pyper2017} (=P17) and M17 are relevant for many of the stars presented here. Determining the periods with the Scargle method, P17 estimate their precision by {\it ``the practical method of comparing the two good data sets most widely separated in time and determined how much the period had to be changed to see a definite shift in phase''}. \cite{Mathys2019_HD965} estimate the period uncertainty {\it ``by plotting a phase diagram of the measurements for a series of tentative values of the period around the one suggested by the periodogram, one can visually identify the period value that minimizes the phase shifts between field determinations from different rotation cycles, and constrain the range around that value for which those phase shifts remain reasonably small.''} We prefer a criterion based on the $\sigma$ value of a gaussian fit of the highest peak in the periodogram as the uncertainty in the period determination. If the literature period is coincident within the errors with the value determined here and has been determined with a smaller error, we have adopted the literature period. \section{Single Stars} Table\,\ref{Tab_Periods} lists the 36 MCP stars that are the object of this paper and reports the ephemerides from the literature and the ones adopted here. For each star, the measured surface field values are listed, from Table\,\ref{Tab_HD965} to Table\,\ref{Tab_HD216018}, with their errors, the Heliocentric Julian Date (HJD) and the spectrograph used, coded as in Tables\,\ref{Tab_Spectrographs} and \ref{Tab_Archive}. Field measurements by Mathys and coworkers are indicated with asterisks in the figures hereafter. If data from the literature are used, sources and symbols are given in the corresponding section and figure, respectively. Hereafter, if different photometric data sets are available to better determine the variability period of a star, these are overplotted after an ad hoc shift. \subsection{HD\,965} \cite{Mathys2019_HD965} found that the $B_s$ measurements of HD\,965, collected between 1993 and 2008, do not show significant variations. In contrast, these authors found the $B_e$ measurements, collected between 1995 and 2017, variable with a period of 6030$\pm$200 days. We have obtained high-resolution spectra of HD\,965 between 2001 and 2021, and retrieved spectra from the CFHT and ESO archives. The list of our $B_s$ measurements of HD\,965 is given in Table\,\ref{Tab_HD965}. There is no clear evidence of a periodically variable surface magnetic field: the average value is $B_s = 4240\pm$70 G, where the r.m.s. is comparable to the error of a single field measure. However, the $B_s$ measurements folded with the \cite{Mathys2019_HD965} period (Table\,\ref{Tab_Periods}) present a double wave variation, and their fit suggests minima in coincidence with the extrema of the effective magnetic field, while the $B_s$ maxima are in coincidence with the null values (Fig.\,\ref{Fig_HD965}). The magnetic field is certainly not dominated by the dipolar component. \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD2453_Hs.pdf} \includegraphics[width=0.45\textwidth]{HD2453_He.pdf} \includegraphics[width=0.45\textwidth]{HD2453_c1.pdf} \caption{HD\,2453.
Variation of $B_s$, $B_e$ and of the $c_1$ index by \citet{Wolff1975} (+) and \citet{Pyper2017} ($\triangle$).} \label{Fig_HD2453} \end{figure} \subsection{HD\,2453}\label{Sec_HD2453} Two periods have been almost simultaneously published to account for the HD\,2453 variability: 1) 521$\pm$2 days by M17 from $B_e$ measurements and 2) 518.2$\pm$0.5 days by P17 from Str\"omgren photometry. Our $B_s$ measurements of HD\,2453 (Table\,\ref{Tab_HD2453}) extend the 2700-day time coverage by M17 to 11000 days. However, with an average value of $B_s = 3690\pm$60 G and a scatter comparable to the error of a single field measure, it is not possible to ascertain a clear variability of the surface magnetic field. The single $B_e$ measurement obtained by \cite{Romanyuk2016} on JD = 2\,455\,075.417, equal to $B_e$ = $-$1160$\pm$50 G, makes the 518.2 day period the most probable. The Lomb-Scargle analysis of the $B_e$ (by M17, \cite{Wolff1975, Romanyuk2016}) and $c_1$ (by \cite{Wolff1975, Pyper2017}) measurements produces an $LS(B_s, B_e, c_1)$ peaking at 517.7$\pm$1.8 days. We have then assumed the P17 period to fold the $B_s$, $B_e$ and $c_1$ measurements (Fig.\,\ref{Fig_HD2453}). With this period, $B_s$ presents a single wave variation that is in phase with the light curve and with a maximum in coincidence with the negative $B_e$ extremum. This is expected for a dominant dipole component of the magnetic field. \subsection{HD\,9996} From $B_e$ measurements, \cite{Metlova2014, Bychkov2019} concluded that the variability period of HD\,9996 is 7936.522 days ($\sim$21.7 yr). \cite{Pyper2017} determined a photometric value of 7850 days. M17 assumed the 7936.522 day period, resulting in an incomplete phase coverage with the $B_s$ maximum coincident in phase with the $B_e$ minimum. Even though the folded $B_s$ values were larger than 4000 G, M17 reported an unsplit Fe{\sc ii} 6149.258\,\AA\, line in the spectrum of HD\,9996 obtained on JD = 2\,450\,797.312. We have observed HD\,9996 between 2001 and 2021; in addition, we have retrieved two GECKO spectra, one obtained in 2000, from the CFHT archive. Our $B_s$ measurements of HD\,9996 are listed in Table\,\ref{Tab_HD9996}. We find that the variability period given by P17 is representative of the $B_s$ and $B_e$ variations of HD\,9996 (Figure\,\ref{Fig_HD9996}), with a well-defined maximum and a rather flat minimum. This is not rare for this class of stars (see HD\,318107 or HD\,335238 later in the text), although it could be that the observed plateau is a consequence of the difficulty in measuring such a weak $B_s$ field because of the merging of the Zeeman subcomponents of the Fe{\sc ii} 6149.258 \AA\, line (see Figure\,\ref{Fig_HD9996}). Near-infrared lines, which are more sensitive because of the $\lambda^2$ dependence of the Zeeman splitting \citep{Leone2003}, should be preferred to determine the real minimum of the $B_s$ variation of HD\,9996. Figure\,\ref{Fig_HD9996} also shows some of these spectra ordered in time and it confirms the \cite{Preston1970_HD9996} finding that the equivalent widths of chromium and rare-earth spectral lines change out of phase. The $B_s$ maximum is coincident with the negative extremum of $B_e$, while from phase 0.3 to 0.7 $B_e$ presents a maximum and $B_s$ is constant at its minimum value. If the flat minimum is real, HD\,9996 presents a magnetic field that is not purely dipolar. \begin{figure} \includegraphics[width=0.45\textwidth]{HD9996_spectrum.pdf} \includegraphics[width=0.45\textwidth]{HD9996_Hs.pdf} \includegraphics[width=0.45\textwidth]{HD9996_He.pdf} \caption{HD\,9996. 
Top, chunk of spectra with the Fe{\sc ii} 6149.258 \AA\, line, the Cr{\sc ii} 6147.154\,\AA\, line and the Nd{\sc iii} 6145.070\,\AA\, line marked. Normalized spectra are arbitrarily shifted and ordered by decreasing value of $B_s$. Cr (6147$\lambda$) and Nd (6145$\lambda$) abundances change out of phase. Central and lower panels show the $B_s$ and $B_e$ variability.} \label{Fig_HD9996} \end{figure} \begin{figure} \includegraphics[width=0.45\textwidth]{HD12288_Hs.pdf} \includegraphics[width=0.45\textwidth]{HD12288_He.pdf} \includegraphics[width=0.45\textwidth]{HD12288_phot.pdf} \caption{HD\,12288. Variability of $B_s$, $B_e$ by \citet{Wade2000} and photometry by \citet{Wolff1973} ($*$), HIPPARCOS ($\triangle$) and MASCARA ($+$).} \label{Fig_HD12288} \end{figure} \subsection{HD\,12288} \cite{Bernhard2020} found that HD\,12288 is a photometric variable with the ephemeris JD(V$^{\rm max}$) = 2\,457\,218.6 + 35.73$\pm$0.03 E days and a peak-to-peak difference equal to 0.02 magnitudes. According to P17, this star is variable in the Str\"omgren $u$, $v$ and $b$ filters with the 34.99 day period, but it presents a constant Str\"omgren $y$ magnitude. A period of 34.9 days was adopted by M17 to discuss the magnetic variability of HD\,12288. We have observed HD\,12288 eight times and obtained one spectrum from the ELODIE archive and one from the CFHT archive. Consistent with P17, we found the TESS (600-1000 nm) magnitudes obtained from JD = 2\,458\,790 to 2\,458\,841 to be constant (7.63688$\pm$0.00006). A Lomb-Scargle analysis of our (Table\,\ref{Tab_HD12288}), M97 and M17 $B_s$ measurements, together with the \cite{Wade2000} $B_e$ measurements and the photometry by \cite{Wolff1973}, MASCARA \citep{Talens2017, Bernhard2020}, and HIPPARCOS \citep{vanLeeuwen2007}, gives the highest peak of $LS(B_s, B_e, Mag.)$ at 34.993$\pm$0.003 days. Data are folded in Fig.\,\ref{Fig_HD12288}. It seems that HD\,12288 presents a singular behavior among MCP stars: $B_s$ is almost constant (at the maximum value of 8.5 kG) along half of the rotation period, when $B_e$ changes from $-3$ kG to zero. Then $B_s$ decreases to 7.5 kG when $B_e$ goes back to $-3$ kG. This star is brighter when $B_e$ is null. \subsection{HD\,14437} The variability period of HD\,14437 was determined to be 26.87$\pm$0.02 days by \cite{Wade2000} from $B_e$ measurements. From MASCARA data, \cite{Bernhard2020} found a photometric period of 26.78$\pm$0.01 days. The M97 and M17 $B_s$ measurements cover from 1991 to 1997, while our 11 spectra were obtained from 2001 to 2021. We performed a Lomb-Scargle analysis of our (Table\,\ref{Tab_HD14437}), M97 and M17 $B_s$ measurements together with the \cite{Wade2000} $B_e$ measurements and photometry by HIPPARCOS, TESS, and MASCARA. The period that best reproduces all variations is 26.734$\pm$0.007 days (Fig.\,\ref{Fig_HD14437}). $B_e$ presents a single-wave variability, while a double-wave variation of $B_s$ cannot be ruled out. The light variability is in phase with the $B_s$ modulation and the magnetic equator is the brightest region (as is the case for HD\,12288). \subsection{HD\,18078} From $B_s$ and $B_e$ measurements, the variability period of HD\,18078 has been determined by \cite{Mathys2016_HD18078} to be 1358$\pm 12$ days. We have obtained high-resolution spectra of HD\,18078 from 2003 up to 2021 with SARG, CAOS, and HARPS-N. Adopting the \cite{Mathys2016_HD18078} period, our $B_s$ measurements (Table\,\ref{Tab_HD18078}) are slightly ahead in phase. 
A simultaneous Lomb-Scargle analysis gives $LS(B_s, B_e)$ with the main peak at 1352$\pm$7 days. HIPPARCOS and Str\"omgren $y$ \citep{Mathys2016_HD18078} photometry are also shown in Fig.\,\ref{Fig_HD18078}. \begin{figure} \includegraphics[width=0.45\textwidth]{HD14437_Hs.pdf} \includegraphics[width=0.45\textwidth]{HD14437_He.pdf} \includegraphics[width=0.45\textwidth]{HD14437_phot.pdf} \caption{HD\,14437. Variability of $B_s$, $B_e$ by \citet{Wade2000} and photometry by HIPPARCOS ($\triangle$), TESS ($\diamond$), and MASCARA ($+$).} \label{Fig_HD14437} \end{figure} \begin{figure} \includegraphics[width=0.45\textwidth]{HD18078_Hs.pdf} \includegraphics[width=0.45\textwidth]{HD18078_He.pdf} \includegraphics[width=0.45\textwidth]{HD18078_phot.pdf} \caption{HD\,18078. Variability of $B_s$, $B_e$ by \citet{Mathys2016_HD18078} and photometry by HIPPARCOS ($\triangle$) and \citet{Mathys2016_HD18078} ($+$).} \label{Fig_HD18078} \end{figure} \subsection{HD\,29578} From spectra collected between JD = 2\,449\,298 and 2\,451\,084 ($\sim$1786 days), M17 found that the surface magnetic field of HD\,29578 changes with a period much longer than 1800 days. We have obtained one spectrum of HD\,29578 with UCLES at the AAT on JD = 2\,457\,056.992; unfortunately, the Fe{\sc ii} 6149.258 \AA\, line region was not included. From other lines, the surface magnetic field was estimated to be $\sim$2900 G. We have also obtained one UVES and two FEROS spectra from the ESO archive. These $B_s$ measurements (Table\,\ref{Tab_HD29578}), plus the ones by M97, M17 and \cite{Ryabchikova2004}, extend the time frame of magnetic measurements to 7759 days. We have performed a Lomb-Scargle analysis of all available $B_s$ measurements and found in the periodogram two comparable peaks at 4000 and 9230 days (Table\,\ref{Tab_Periods}). Fig.\,\ref{Fig_HD29578} shows the $B_s$ and the $B_e$ (M17) measurements folded with both periods. It appears that new measurements are necessary to determine the period of HD\,29578. Fig.\,\ref{Fig_HD29578_spectrum} shows the variation of the HD\,29578 spectrum in time, from JD = 2\,451\,946 (when the Zeeman subcomponents are clearly visible) to 2\,457\,056, when these overlap. Again, NIR high-resolution spectroscopy seems to be advantageous to define the $B_s$ variability of HD\,29578. \begin{figure*} \includegraphics[width=0.45\textwidth]{HD29578_Hs_P4000.pdf} \includegraphics[width=0.45\textwidth]{HD29578_Hs_P9370.pdf} \includegraphics[width=0.45\textwidth]{HD29578_He_P4000.pdf} \includegraphics[width=0.45\textwidth]{HD29578_He_P9370.pdf} \caption{HD\,29578. Variability of $B_s$ and $B_e$; values by \citet{Ryabchikova2004} are shown with ({\scriptsize $\star$}) symbols. Left panels show the folding with P = 4000 days. Right panels with P = 9370 days. } \label{Fig_HD29578} \end{figure*} \begin{figure}\center \includegraphics[width=0.23\textwidth]{HD29578_spectrum1.pdf} \includegraphics[width=0.23\textwidth]{HD29578_spectrum2.pdf} \caption{HD\,29578. Chunks of spectra ordered for decreasing $B_s$ values. The JD - 2\,400\,000 is reported. The [email protected] spectrum (JD = 2\,451\,946.528) goes from 6120 to 6150\,\AA. The UCLES@AAT spectrum (JD = 2\,457\,056.992) does not include the Fe{\sc ii}\,6149.258\,\AA\, line; the Zeeman split of other metal lines gives a surface magnetic field of about 2800 G.} \label{Fig_HD29578_spectrum} \end{figure} \subsection{HD\,47103} From $B_s$ measurements obtained between JD = 2\,449\,816 and 2\,451\,085, M17 found HD\,47103 to be variable with an extremely long period. 
Our measurements of $B_s$ (Table\,\ref{Tab_HD47103}) extend the time baseline to 7600 days. We have computed the Lomb-Scargle periodogram $LS(B_s)$ of the $B_s$ measurements, both obtained here and taken from the literature \citep{Babel1997, Mathys2017}, and the Lomb-Scargle periodogram of the $B_e$ measurements by \cite{Elkin1997} and M17. Hence, $LS(B_s, B_e)$ peaks at 17.683$\pm$0.004 days. Fig.\,\ref{Fig_HD47103} presents the $B_s$ and $B_e$ variability. \begin{figure} \includegraphics[width=0.45\textwidth]{HD47103_Hs_P17_683.pdf} \includegraphics[width=0.45\textwidth]{HD47103_He_P17_683.pdf} \caption{HD\,47103. $B_s$ and $B_e$ measurements folded with the period of 17.683 days. Circles are by \citet{Elkin1997}.} \label{Fig_HD47103} \end{figure} \subsection{HD\,50169} \cite{Mathys2019_HD50169} found the $B_e$ variability period of HD\,50169 to be equal to 10600$\pm$300 days. This value is also representative of their measurements of the surface magnetic field, which, however, cover only 9856 days, an interval shorter than a full rotation cycle. Our $B_s$ measurements (Table\,\ref{Tab_HD50169}) slightly extend the time frame to 10100 days. Nevertheless, these data confirm the \cite{Mathys2019_HD50169} period and establish the shape of the minimum of the $B_s$ variability. The previous uncertainty of 300 days is based on the $B_e$ measurements and cannot be reduced here. Data are folded in Fig.\,\ref{Fig_HD50169} with the ephemeris given in Table\,\ref{Tab_Periods}. \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD50169_Hs.pdf} \caption{HD\,50169.} \label{Fig_HD50169} \end{figure} \subsection{HD\,51684} From ten measurements of $B_s$ collected between JD = 2\,450\,162 and 2\,451\,086, M17 concluded that the variability period of HD\,51684 is 371$\pm$6 days. In the ESO archive, we found 5 additional UVES spectra spread over 330 days. A Lomb-Scargle analysis of the Mathys and our (Table\,\ref{Tab_HD51684}) measurements gives a slightly shorter variability period: 366$\pm$1 days. Data are folded in Fig.\,\ref{Fig_HD51684}. \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD51684_Hs.pdf} \caption{HD\,51684.} \label{Fig_HD51684} \end{figure} \subsection{HD\,55719} M17 analyzed the $B_s$ measurements, collected from JD = 2\,447\,285 to 2\,451\,086, and found the rotational period of HD\,55719 to be much longer than ten years. Our (Table\,\ref{Tab_HD55719}) and literature measurements of $B_s$ cover more than 10000 days. We find that the surface magnetic field of HD\,55719 has been continuously decreasing over the last 27 years, implying, under the assumption of a simple sinusoidal variation, a rotation period not shorter than 14000 days (38 years) (Fig.\,\ref{Fig_HD55719}). \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD55719_Hs_Time.pdf} \caption{HD\,55719. } \label{Fig_HD55719} \end{figure} \subsection{HD\,61468} M17 found the $B_s$ of HD\,61468 to be variable with a 322$\pm 3$ day period. We have obtained a spectrum of this star with HARPS-North on JD = 2\,457\,340.717 and measured a surface magnetic field $B_s = 6260\pm 85$ G. A Lomb-Scargle periodogram of the M17 and our $B_s$ values presents two merged peaks centered at 321 and 325.5 days. A sine fit of the data favors a variability period of 321$\pm$1 days. Fig.\,\ref{Fig_HD61468} shows the periodic variability of the surface field of HD\,61468 with the ephemeris given in Table\,\ref{Tab_Periods}. 
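As an illustration of the folding and sine fitting used to discriminate between such close periodogram peaks, the short Python sketch below folds a set of epochs with a trial ephemeris and compares the $\chi^2$ of a single-wave fit for the two candidate periods. The input values are synthetic placeholders, not the actual measurements, and \texttt{scipy} is assumed for the least-squares fit.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def fold(hjd, hjd0, period):
    # rotational phase of each epoch for a trial ephemeris
    return ((hjd - hjd0) / period) % 1.0

def single_wave(phase, mean, amp, phi0):
    return mean + amp * np.sin(2.0 * np.pi * phase + phi0)

# Synthetic stand-ins for B_s epochs and values (true period 321 d).
rng = np.random.default_rng(1)
hjd = np.sort(rng.uniform(2_450_000.0, 2_458_000.0, 30))
bs = 6000.0 + 300.0 * np.sin(2.0 * np.pi * hjd / 321.0) + rng.normal(0.0, 80.0, 30)
err = np.full(30, 80.0)

chi2 = {}
for trial in (321.0, 325.5):
    phase = fold(hjd, 2_450_000.0, trial)
    p0 = [bs.mean(), 0.5 * (bs.max() - bs.min()), 0.0]
    popt, _ = curve_fit(single_wave, phase, bs, sigma=err, p0=p0)
    chi2[trial] = np.sum(((bs - single_wave(phase, *popt)) / err) ** 2)

adopted = min(chi2, key=chi2.get)  # period whose folded sine fit has the lower chi^2
\end{verbatim}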
\begin{figure}\center \includegraphics[width=0.45\textwidth]{HD61468_Hs.pdf} \caption{HD\,61468.} \label{Fig_HD61468} \end{figure} \subsection{HD\,75445} M17 found HD\,75445 to be a small amplitude $B_s$ variable with a period of 6.291 days. Our measurements (Table\,\ref{Tab_HD75445}), combined with the surface magnetic field values by M17 and \cite{Ryabchikova2004}, span the JD = 2\,449\,457 - 2\,454\,205 ($\sim$13 yr) interval and give an average value of $<B_s> = 2970\pm$55 G. A Lomb-Scargle analysis results in a large number of comparable peaks, with the highest three at 2.5026, 3.2926 and 6.5500 days. The constant magnitude (6.9000$\pm$0.0003) measured with TESS over 50 days (JD = 2\,458\,517 - 2\,458\,568), together with the previous $B_s$ r.m.s. (comparable to the measurement error), suggests that, if any, the variability period of HD\,75445 is much longer than 13 years (Fig.\,\ref{Fig_HD75445}). \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD75445_Hs_Time.pdf} \caption{HD\,75445. } \label{Fig_HD75445} \end{figure} \subsection{HD\,81009} From $B_s$ and $B_e$ measurements, \cite{Wade2000_HD81009} determined the variability period of HD\,81009 to be 33.984$\pm 0.055$ days. P17 concluded that the photometric variability period of this star is 33.987$\pm 0.002$ days. Our $B_s$ measurements (Table\,\ref{Tab_HD81009}) extend the time coverage of $B_s$ from 1954 days to 8210 days. A Lomb-Scargle analysis of the $B_s$ data confirms the period found by P17. Fig.\,\ref{Fig_HD81009} shows the $B_s$ periodic variability of HD\,81009. \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD81009_Hs.pdf} \caption{HD\,81009.} \label{Fig_HD81009} \end{figure} \subsection{HD\,93507} The period of the $B_s$ and $B_e$ variabilities of HD\,93507 is 556$\pm$22 days (M17). A Lomb-Scargle analysis of the $B_s$ measurements by M17 and ours (Table\,\ref{Tab_HD93507}), together with the $B_e$ measurements by \cite{Mathys2017}, gives a variability period equal to 562$\pm$5 days (Fig.\,\ref{Fig_HD93507}). \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD93507_Hs.pdf} \includegraphics[width=0.45\textwidth]{HD93507_He.pdf} \caption{HD\,93507. Variability of $B_s$. } \label{Fig_HD93507} \end{figure} \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD94660_Hs.pdf} \caption{HD\,94660.} \label{Fig_HD94660} \end{figure} \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD110066_Hs_Time.pdf} \caption{HD\,110066. } \label{Fig_HD110066} \end{figure} \subsection{HD\,94660} \cite{Hensberge1993} discovered HD\,94660 to be a photometric variable with a period close to 2700 days. From spectra acquired on 12 different nights between May 2001 and April 2014, \cite{Bailey2015} found that this star belongs to a binary system with an orbital period of 804 days and that the rotational period is 2800 days. We have retrieved the ESO and CFHT archive spectra published by \cite{Bailey2015} and 11 high-resolution spectra of HD\,94660 obtained between January 1998 and December 2009 with UCLES at the AAT. Our $B_s$ measurements are listed in Table\,\ref{Tab_HD94660}. The Lomb-Scargle periodogram peaks at 2832 days. Fig.\,\ref{Fig_HD94660} shows the $B_s$ periodic variability of HD\,94660. Radial velocities have also been measured and combined with the values given by \cite{Bailey2015} to determine the orbital parameters following \cite{Catanzaro2016}. 
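For reference, a minimal Python sketch of the standard Keplerian radial-velocity model underlying such an orbital solution is given here; it is not the actual fitting code, which follows \cite{Catanzaro2016}. The orbital elements plugged in are those reported next, Kepler's equation is solved with a simple Newton iteration, and the time grid is purely illustrative.
\begin{verbatim}
import numpy as np

def kepler_E(M, e, tol=1.0e-10, itmax=100):
    # Solve Kepler's equation E - e*sin(E) = M by Newton-Raphson.
    E = np.array(M, dtype=float, copy=True)
    for _ in range(itmax):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def radial_velocity(t, P, T0, e, omega_deg, K, gamma):
    # RV(t) = gamma + K * [cos(nu + omega) + e*cos(omega)]
    M = 2.0 * np.pi * (((t - T0) / P) % 1.0)          # mean anomaly
    E = kepler_E(M, e)
    nu = 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(0.5 * E),
                          np.sqrt(1.0 - e) * np.cos(0.5 * E))
    w = np.radians(omega_deg)
    return gamma + K * (np.cos(nu + w) + e * np.cos(w))

# Model curve over one orbital cycle with the elements listed below (km/s, JD).
t = np.linspace(2_452_418.2, 2_452_418.2 + 849.1, 500)
rv = radial_velocity(t, P=849.1, T0=2_452_418.2, e=0.43,
                     omega_deg=263.5, K=17.9, gamma=18.4)
\end{verbatim}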
Orbital parameters of HD\,94660, period $P_{Orb.}$, periastron passage date $T_0$, orbit eccentricity $e$, argument of periastron $\omega$, amplitude of radial velocity variation $K$ and system velocity $\gamma$, are: \begin{center} \begin{tabular}{ccc} \hline $P_{Orb.}$ & 849.1$\pm$0.7 & $d$ \\ $T_0$ & 2\,452\,418.2$\pm$1.9 & JD \\ $e$ & 0.43$\pm$0.03 & \\ $\omega$ & 263.5$\pm$1.2 & $^o$ \\ $K$ & 17.9$\pm$0.5 & km\,s$^{-1}$\\ $\gamma$ & 18.4$\pm$0.3 & km\,s$^{-1}$\\ \hline \end{tabular} \end{center} \subsection{HD\,110066} \cite{Pyper2017} reported on a 12-year photometric campaign dedicated to HD\,110066 without any evidence of variability. M17 found that the $B_s$ variability of this star presents an amplitude of 100 G if the period is as long as 4900 days. \cite{Bychkov2020} have obtained new $B_e$ measurements and, considering the positive Babcock values incorrect, established a variability period of 6.4769 days. We have analyzed the TESS photometric data and found a constant magnitude (6.346238$\pm$0.000003) between JD = 2\,458\,900.0 and 2\,458\,926.5 (26 days). Our $B_s$ measurements of HD\,110066 are listed in Table\,\ref{Tab_HD110066}. Including the M97 and M17 values, we span a period of 28.8 years without any evidence of variability. All $B_s$ measurements are between 4040 and 4150 G, with errors not smaller than 50 G. On the basis of the $B_s$ measurements, we can only conclude that, if any, the variability period of HD\,110066 is longer than three decades (see Fig.\,\ref{Fig_HD110066}). \subsection{HD\,116114} From measurements collected between JD = 2\,448\,732 and 2\,451\,042, M17 found HD\,116114 to be a magnetic variable with the 27.61$\pm$0.08 day period. The amplitude of the $B_s$ variation is 33$\pm 9$ G and the amplitude of the $B_e$ variation is equal to 84$\pm 33$ G. M17 ruled out the 4.41156 day period determined by \cite{Romanyuk2014} from $B_e$. We have analyzed the TESS data of HD\,116114 (from JD = 2\,458\,570 to 2\,458\,595), and found a constant magnitude (6.76799$\pm 0.00001$) over a time scale of 25 days. Our $B_s$ measurements are listed in Table\,\ref{Tab_HD116114}. The $B_s$ measurements cover 8855 days, during which a continuously increasing field is observed. This sets the shortest possible period of HD\,116114 at 48 years (Fig.\,\ref{Fig_HD116114}). The $B_e$ measurements by \cite{MathysHubrig1997, Romanyuk2014, Mathys2017} do not present a clear trend. \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD116114_Hs.pdf} \includegraphics[width=0.45\textwidth]{HD116114_He.pdf} \caption{HD\,116114. $B_s$ and $B_e$ variability. A sine with the shortest possible period of 17700 days is plotted over the $B_s$ measurements. A linear fit of the $B_e$ measurements is also reported. } \label{Fig_HD116114} \end{figure} \subsection{HD\,126515} \cite{Leone2001} noted the coincidence of the extrema of literature light curves of the star HD\,126515 if a variability period equal to 129.9474 days is assumed. This period was also representative of the variability of the $B_e$ and $B_s$ measurements. P17 and M17 established a variability period of 129.95$\pm$0.02 days from photometric and magnetic measurements, respectively. By adding our $B_s$ measurements (Table\,\ref{Tab_HD126515}) to the literature data by \cite{Preston1970_HD126515}, M97 and M17, we span a total of 23250 days. A Lomb-Scargle analysis of these measurements confirms the validity of the 129.95 day period. 
Fig.\,\ref{Fig_HD126515} shows the $B_s$ variability and the variability of the $B_e$ measurements by \cite{Babcock1958, vandenHeuvel1971, Mathys1997, Wade2000_HD126515, Leone2001, Mathys2017}. \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD126515_Hs.pdf} \includegraphics[width=0.45\textwidth]{HD126515_He.pdf} \caption{HD\,126515. $B_s$ variability. Some of the \citet{Preston1970_HD126515} values ({\scriptsize $\star$}) are due to \citet{Babcock1958}. } \label{Fig_HD126515} \end{figure} The $B_s$ maximum is in coincidence with the negative $B_e$ extremum, while the $B_s$ minimum appears close in phase to a null value of $B_e$. This is evidence of the departure of the HD\,126515 magnetic field from the dipolar configuration. \subsection{HD\,137949} After 14 years of monitoring, no evidence of photometric variability has been observed by P17 in HD\,137949. From $B_e$ measurements spanning almost 50 years, M17 concluded that the variability period of HD\,137949 is 5195 days. By combining our measurements (Table\,\ref{Tab_HD137949}) with data available in the literature, we span 10584 days, corresponding to about twice the period found by Mathys and coworkers. A Lomb-Scargle analysis of the $B_s$ and $B_e$ data does not show any predominant peak in the periodogram. Fig.\,\ref{Fig_HD137949} shows $B_s$ and $B_e$ in time. Both trends are consistent with a slight increase that, if real, is indicative of a rotational period much longer than 27 years. Even if all the spectra of HD\,137949 recorded between 2002 and 2018 appear constant, with equal red and blue $\sigma$ components for all species, in the SARG spectra obtained on 2006 July 13, 14 and 15 the blue $\sigma$ components appear much weaker than the red ones. This difference is particularly large for chromium spectral lines and only marginal for iron lines. We have no information on the persistence and recurrence of this phenomenon; we can only state that it started later than 2006 April 10, and that it ended before 2006 August 1 (Fig.\,\ref{Fig_HD137949_Cr}). We wonder if such a change is a consequence of a partial occultation of HD\,137949 or of an occultation of photospheric regions rich in chromium by a small-size companion. \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD137949_Hs_Time.pdf} \includegraphics[width=0.45\textwidth]{HD137949_He_Time.pdf} \caption{HD\,137949. $B_s$ and $B_e$ measurements in time. The straight lines represent a linear fit of the data.} \label{Fig_HD137949} \end{figure} \begin{figure*}\center \includegraphics[trim=0.0cm 1.0cm 0.0cm 0.0cm,clip=true,width=0.48\textwidth]{HD137949_Cr6147.pdf} \includegraphics[trim=0.0cm 1.0cm 0.0cm 0.0cm,clip=true,width=0.48\textwidth]{HD137949_Cr6020.pdf} \caption{HD\,137949. In three consecutive nights, from JD = 2\,453\,930 to 2\,453\,932 (2006 July 13, 14 and 15), the SARG spectra present the red $\sigma$ Zeeman components deeper than the blue ones, especially for chromium lines. Other series of observations in consecutive days did not show the same behaviour. ESPADONS spectra obtained on JD = 2\,453\,836 and 2\,453\,949 do not show clear Zeeman components because of the reduced resolution as compared to the other spectrographs; however, they indicate that this phenomenon did not last longer than 110 days. 
} \label{Fig_HD137949_Cr} \end{figure*} \subsection{HD\,142070} According to \cite{Adelman2001}, HD\,142070 is a photometric periodic variable with the largest amplitude in the Str\"omgren $v$ filter and ephemeris: HJD(v$_{min}$) = 2\,450\,837.499 + 3.37189$\pm$0.00007 E. M17 found that this star is also a magnetic variable with a period of 3.3718$\pm$0.0011 days. Our measurements of $B_s$ are listed in Table\,\ref{Tab_HD142070}. The product $LS(B_s, B_e, v)$ of the periodograms of all available $B_s$ measurements, of the $B_e$ measurements by M17 and \cite{Romanyuk2014}, and of the \cite{Adelman2001} Str\"omgren $v$ photometry peaks at 3.3721$\pm$0.0002 days. Fig.\,\ref{Fig_HD142070} shows the double wave variation of $B_s$, whose extrema are in phase coincidence with the $B_e$ extrema of HD\,142070, with the ephemeris given in Table\,\ref{Tab_Periods}. \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD142070_Hs.pdf} \includegraphics[width=0.45\textwidth]{HD142070_He.pdf} \includegraphics[width=0.45\textwidth]{HD142070_phot.pdf} \caption{HD\,142070. Variability of $B_s$, $B_e$ \citep{Mathys2017, Romanyuk2014} and Str\"omgren $v$ magnitude.} \label{Fig_HD142070} \end{figure} \subsection{HD\,144897} The variability period of the surface magnetic field of HD\,144897 was determined by M17 to be equal to 48.57$\pm 0.15$ days from data collected over 7.1 years. To these values, we added our values of $B_s$ (Table\,\ref{Tab_HD144897}) for a total range of 22.5 years. A Lomb-Scargle analysis gives 48.60$\pm 0.02$ days. Fig.\,\ref{Fig_HD144897} shows all the available measurements of $B_s$ and $B_e$ \citep{Mathys2017} phased with the ephemeris given in Table\,\ref{Tab_Periods}. Both variations are rather sinusoidal and the $B_s$ minimum in coincidence with the $B_e$ maximum is evidence of a magnetic field far from the dipolar configuration. \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD144897_Hs.pdf} \includegraphics[width=0.45\textwidth]{HD144897_He.pdf} \caption{HD\,144897. $B_s$ and $B_e$ variability. } \label{Fig_HD144897} \end{figure} \subsection{HD\,150562} $B_s$ measurements of HD\,150562 have been published by M97 and M17 spanning an interval of 4.5 years; Mathys and coworkers concluded that the variability period of this star is longer than 4.5 years. With our $B_s$ values (Table\,\ref{Tab_HD150562}), the coverage is now 21.3 years. A Lomb-Scargle analysis of our and the Mathys measurements produces a periodogram with the highest peak at 2100 days. Fig.\,\ref{Fig_HD150562} shows the $B_s$ data folded with this period. In the same figure, the measurements by \cite{Bagnulo2015} and M17 are too scanty to outline the $B_e$ variability. At the phase of the maximum, we find an ``unexpected'' $B_s$ value obtained on JD = 2\,450\,171.802 that M17 ascribed to an unidentified cosmic ray. At the same phase, we find another ``unexpected'' value measured in a HARPS spectrum acquired on JD = 2\,454\,338.582. HD\,150562 should be observed in 2024 during the next $B_s$ maximum to check if this is the result of chance or to identify a physical reason (e.g. a partial occultation of the visible stellar disk by a secondary star). \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD150562_Hs.pdf} \includegraphics[width=0.45\textwidth]{HD150562_He.pdf} \caption{HD\,150562. $B_s$ and $B_e$ variability. 
$B_e$ measurements are by M17 ($*$) and \citet{Bagnulo2015}.} \label{Fig_HD150562} \end{figure} \subsection{HD\,154708} The Zeeman components of the Fe{\sc ii} 6149.258\,\AA\, line of HD\,154708 are separated by more than 1\,\AA, meaning that the field in the Zeeman approximation is about 2.4 T, one of the strongest known fields for a non-degenerate star. From $B_s$ and $B_e$ measurements, the variability period of HD\,154708 has been determined by \cite{Hubrig_HD154708} to be 5.3666$\pm$0.0007 days. \cite{Landstreet2014} found 5.363$\pm 0.003$ days. The $B_s$ values obtained here in the Zeeman approximation are given in Table\,\ref{Tab_HD154708}; because of the very strong magnetic field, these values are only indicative of the surface magnetic field strength, but they remain representative of the variability. A model of the HD\,154708 magnetic field has been published by \cite{Stift_HD154708}. Including in the Lomb-Scargle analysis the $B_e$ by \cite{Hubrig_HD154708} and the TESS magnitudes, the highest peak of their product $LS(B_s, B_e, TESS)$ is at 5.367$\pm$0.001 days. Fig.\,\ref{Fig_HD154708} shows the variability of the TESS photometry, $B_e$ and $B_s$ with the ephemeris given in Table\,\ref{Tab_Periods}. It appears that the light variability is driven by the longitudinal component of the field, while the $B_s$ variability lags by 0.15 in phase. \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD154708_Hs.pdf} \includegraphics[width=0.45\textwidth]{HD154708_He.pdf} \includegraphics[width=0.45\textwidth]{HD154708_TESS.pdf} \caption{HD\,154708. $B_s$, $B_e$ and TESS photometry variability.} \label{Fig_HD154708} \end{figure} \subsection{HDE\,318107} As to HDE\,318107, the $B_s$ and $B_e$ variability period of 9.7088$\pm$0.0007 days was determined by \cite{Bailey2011_HDE318107}. We have obtained 7 HARPS spectra from the ESO archive, and 1 GECKO spectrum and 2 ESPaDOns spectra from the CFHT archive. In addition, we have obtained 1 new spectrum with UCLES and 2 spectra with HARPS-North. Our 13 measurements of $B_s$ (Table\,\ref{Tab_HD318107}) have been combined with the M17 values for a Lomb-Scargle analysis that peaks at 9.7089$\pm 0.0001$ days. Fig.\,\ref{Fig_HD318107} shows the $B_s$ and $B_e$ (\cite{MathysHubrig1997}, \cite{Bagnulo2015}, M17) periodic variability of HDE\,318107. \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD318107_Hs.pdf} \caption{HDE\,318107. } \label{Fig_HD318107} \end{figure} \subsection{HD\,165474} M97 and M17 present $B_s$ measurements of HD\,165474 obtained between 1989 (JD = 2\,447\,642) and 1998 (JD = 2\,450\,972), leading to the conclusion that the field is always increasing and the variability period is much longer than nine years. In contrast, our $B_s$ measurements between 2004 (JD = 2\,453\,104) and 2015 (JD = 2\,457\,229) are always decreasing, with evidence of a new increase in the HARPS-North data obtained in 2018 and 2021 (Table\,\ref{Tab_HD165474}). Including the \cite{Preston1971} and \cite{Nielsen2002} $B_s$ measurements, and under the hypothesis of a single harmonic variability, the period is not shorter than 9900 days (27.1 yr): Fig.\,\ref{Fig_HD165474}. This value is also compatible with the measurements by \cite{Preston1971} and \cite{Nielsen2002}. As to $B_e$, the data are too scanty for any conclusion. \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD165474_Hs_Time.pdf} \includegraphics[width=0.45\textwidth]{HD165474_He_Time.pdf} \caption{HD\,165474. $B_s$ and $B_e$ measurements in time. 
It appears that $B_s$ is variable with a 9900 day period.} \label{Fig_HD165474} \end{figure} \subsection{HD\,166473} The $B_s$ variability of HD\,166473 has been very recently analyzed by \cite{Mathys2020_HD166473} from spectra also collected from the ESO and ESPaDOns archives. These authors determined a period of 3836 days. We have also obtained the HD\,166473 spectra from the ESO and CFHT archives and added one new measurement: $B_s$ = 7210$\pm$65 G, obtained by us with UCLES at the Anglo Australian Telescope on JD = 2\,457\,235.008. Our measurements (Table\,\ref{Tab_HD166473}) confirm this period, and Fig.\,\ref{Fig_HD166473} shows the $B_s$ variability with the ephemeris determined by Mathys and coworkers (Table\,\ref{Tab_Periods}). \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD166473_Hs.pdf} \caption{HD\,166473. $B_s$ variability.} \label{Fig_HD166473} \end{figure} \subsection{HD\,177765} Eight measurements of $B_s$ distributed from 1993 to 1998, plus one literature measurement obtained in 2010, led M17 to the conclusion that the variability period of HD\,177765 is longer than 17 years. We have obtained 1 FEROS spectrum and 1 UVES spectrum from the ESO archive and observed HD\,177765 once with UCLES and twice with HARPS-North. Our $B_s$ measurements (Table\,\ref{Tab_HD177765}) extend the temporal baseline from 6 days to almost 25 years and show a continuous increase in time. Under the assumption of a simple harmonic variation, the shortest possible period is 37 years (Fig.\,\ref{Fig_HD177765}). \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD177765_Hs_Time.pdf} \caption{HD\,177765. $B_s$ measurements in time.} \label{Fig_HD177765} \end{figure} \subsection{HD\,178892} From $B_e$ measurements and HIPPARCOS photometry, \cite{Semenko_HD178892} determined the variability period of HD\,178892 to be equal to 8.2549 days. As to this star, we retrieved 1 HARPS spectrum, 67 UVES spectra obtained from JD = 2\,455\,351.334 to JD = 2\,455\,351.396 that we have combined into a single spectrum, 1 NES spectrum and 5 ESPaDOns spectra. In addition, we have observed this star twice with HARPS-North. The measured values of $B_s$ are in Table\,\ref{Tab_HD178892}. The Lomb-Scargle analysis of our $B_s$ measurements and of the $B_e$ measurements by \cite{Ryabchikova_HD178892}, \cite{Kudryavtsev_HD178892} and \cite{Romanyuk_HD178892} results in an $LS(B_s, B_e)$ with a peak at 8.2572$\pm$0.0016 days. Figure\,\ref{Fig_HD178892} shows the folded $B_s$ and $B_e$ variabilities. \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD178892_Hs.pdf} \includegraphics[width=0.45\textwidth]{HD178892_He.pdf} \caption{HD\,178892: $B_s$ and $B_e$ variability.} \label{Fig_HD178892} \end{figure} \subsection{HD\,187474} From the $B_s$ measurements collected in the JD = 2\,447\,287 - 2\,451\,084 ($\sim$10 yr) interval, M17 established a variability period of 2345 days for HD\,187474. We have acquired high-resolution spectra of HD\,187474 twice with UCLES and twice with HARPS, extending the $B_s$ measurements (Table\,\ref{Tab_HD187474}) over more than 27 years. We have also observed HD\,187474 \citep[see][for details]{Gonzalez2014} with the HARPS polarimeter and obtained two measurements of $B_e$: \begin{center}HJD\hspace{15mm} $B_e$\\ 2\,456\,143.760\hspace{5mm} 2125$\pm$50 G\\ 2\,456\,145.685\hspace{5mm} 2025$\pm$50 G\\ \end{center} We have computed the periodogram of our and the M17 $B_s$ measurements and the periodogram of the $B_e$ measurements by \cite{Mathys1991}, \cite{MathysHubrig1997}, M17, \cite{Sikora2019} and us. 
The $LS(B_s, B_e)$ function peaks at 2329$\pm$60 days. Fig.\,\ref{Fig_HD187474} shows the folded $B_s$ and $B_e$ measurements. An almost constant surface field of 5 kG is measured while the longitudinal field changes from zero to a positive value of 2 kG and then back to zero (variability phase from 0.8 to 1.2). In contrast, the surface field increases from 5 to 6.3 kG while the longitudinal field goes through its negative extremum (variability phase from 0.2 to 0.8). \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD187474_Hs.pdf} \includegraphics[width=0.45\textwidth]{HD187474_He.pdf} \caption{HD\,187474. $B_s$ and $B_e$ variability.} \label{Fig_HD187474} \end{figure} \subsection{HD\,188041} \cite{Mikulasek2003} determined the photometric, magnetic and spectroscopic period of HD\,188041 to be equal to 223.826$\pm$0.040 days. This period has been adopted by P17 to fold their photometric data. From $B_e$ measurements (covering 50 yr), M17 concluded that the period is 223.78$\pm$0.10 days. Our $B_s$ measurements are listed in Table\,\ref{Tab_HD188041} and these extend the time coverage from 2711 days (7.4 yr) \citep{Mathys2017} to 10249 days (28 yr). The literature $B_e$ measurements are by \cite{Babcock1954, Babcock1958}, \cite{Wolff1969}, \cite{Mathys1991}, \cite{Mathys1997}, M17, \cite{Sikora2019}. The resulting $LS(B_s, B_e)$ presents a peak at 223.82$\pm 0.32$ days. The low precision in the period determination is also a consequence of the very small amplitude of the $B_s$ variability. In fact, the average value is $<B_s>$ = 3620 G and the $\sigma$ = 55 G is comparable to the typical error of these measurements. Figure\,\ref{Fig_HD188041} shows the $B_s$ and $B_e$ data folded with the \cite{Mikulasek2003} period. It appears that the $B_s$ maximum is in coincidence with the $B_e$ minimum. \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD188041_Hs.pdf} \includegraphics[width=0.45\textwidth]{HD188041_He.pdf} \caption{HD\,188041. $B_s$ and $B_e$ variability.} \label{Fig_HD188041} \end{figure} \subsection{HD\,192678} \cite{Pyper2017} found the period of 6.4193 days representative of the photometric and magnetic variability shown by HD\,192678. The same period was adopted by M17 to discuss the $B_s$ variability of this star. However, these authors have also pointed out that the $B_e$ measurements by \cite{Wade_HD192678} do not show any variability with this period. \cite{Bychkov2005} suggested that $B_e$ changes with a period equal to 12.91049 days. To determine the variability period of HD\,192678, we have measured $B_s$ from our and archive spectra (Table\,\ref{Tab_HD192678}) and retrieved the TESS photometric data. A Lomb-Scargle analysis of the TESS photometry rules out the 12.91049 day period, while $LS(B_s, B_{TESS})$ presents the maximum at 6.4199$\pm 0.0001$ days, which we have adopted as the period of the HD\,192678 variability (Fig.\,\ref{Fig_HD192678}). The same figure shows the $B_e$ measurements by \cite{Babcock1958} and \cite{Wade_HD192678} also folded; these, however, remain scattered. \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD192678_Hs.pdf} \includegraphics[width=0.45\textwidth]{HD192678_He.pdf} \includegraphics[width=0.45\textwidth]{HD192678_TESS.pdf} \caption{HD\,192678. $B_s$ and TESS photometric periodic variability. $B_e$ shows no clear evidence of variability.} \label{Fig_HD192678} \end{figure} \subsection{HDE\,335238} As to HDE\,335238, M17 established a variability period of 48.7$\pm 0.1$ days from $B_s$ measurements. 
Most of these data were clustered at the minimum value, with only two measurements shaping the $B_s$ maximum. A Lomb-Scargle analysis of our 22 measurements (Table\,\ref{Tab_HD335238}), together with the M97 and M17 ones, produces a periodogram with the highest peak at 48.98$\pm 0.02$ days. Fig.\,\ref{Fig_HD335238} shows the $B_s$ measurements folded with this period. \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD335238_Hs.pdf} \caption{HDE\,335238. $B_s$ variability.} \label{Fig_HD335238} \end{figure} \subsection{HD\,201601} The most recent determination of the variability period of HD\,201601 is 97.16 years by \cite{Bychkov_HD201601}, under the assumption of a sine variation of $B_e$. We have acquired high-resolution spectra of HD\,201601 for 24 years with CES (1 spectrum), SARG (8), UCLES (1), HARPS (2), CAOS (14), HARPS (2), and HARPS-North (10). In addition, 58 spectra have been obtained from the ESO, CFHT and TBL archives. The measured values of $B_s$ are listed in Table\,\ref{Tab_HD201601}. To extend as much as possible the time interval of the $B_s$ measurements, we have mined the available literature and found measurements and estimates that are also listed in Table\,\ref{Tab_HD201601} with the appropriate references. We have folded these $B_s$ measurements, plus the \cite{Evans1971} and \cite{Scholz1979} values, with the \cite{Bychkov_HD201601} period. We find the $B_s$ maximum at JD = 2\,452\,200 and a peak-to-peak variation not smaller than 1.3 kG. Among the large number of $B_e$ measurements reported in the literature, we here report a sample well distributed in time, from the first measurements by \cite{Babcock1958} to the present day (Fig.\,\ref{Fig_HD201601}). It appears that the $B_e$ minimum is coincident with the $B_s$ maximum. \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD201601_Hs.pdf} \includegraphics[width=0.45\textwidth]{HD201601_He.pdf} \caption{HD\,201601. $B_s$ (a) and $B_e$ (b) variability.} \label{Fig_HD201601} \end{figure} \subsection{HD\,208217} M17 found the $B_s$ measurements of HD\,208217, collected between JD = 2\,449\,213 and 2\,451\,085, to be variable with the period of 8.44475 days photometrically determined by \cite{Manfroid1997}. From TESS photometry, \cite{David-Uraz2019} determined a period equal to 8.317$\pm 0.001$ days. We have obtained 2 UVES spectra and 8 HARPS spectra from the ESO archive and acquired a new spectrum of HD\,208217 with HARPS, extending the time coverage to 6932 days. The $B_s$ measurements are in Table\,\ref{Tab_HD208217}. In addition, we have retrieved the TESS photometric data. A Lomb-Scargle analysis gives $LS(B_s, B_e, TESS)$ with a peak at 8.445 days (FWHM = 0.005). Figure\,\ref{Fig_HD208217} shows the $B_s$, $B_e$ (M17) and TESS variabilities. \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD208217_Hs.pdf} \includegraphics[width=0.45\textwidth]{HD208217_He.pdf} \includegraphics[width=0.45\textwidth]{HD208217_TESS.pdf} \caption{HD\,208217. $B_s$, $B_e$ and TESS-photometry variabilities. } \label{Fig_HD208217} \end{figure} \subsection{HD\,216018} From spectra collected between 1992 and 1998, M17 concluded that HD\,216018 presents, if any, a variability period longer than six years. Table\,\ref{Tab_HD216018} reports our measurements of $B_s$ from spectra collected between 2001 and 2018. These, plus the M17 measurements, give an average of $B_s = 5600 \pm 45$ G. 
We have computed the Lomb-Scargle periodograms of all $B_s$ data and of the $B_e$ measurements by M17 and \cite{Romanyuk2016}, and found the main peak of $LS(B_s, B_e)$ at 34.044$\pm$0.007 days. Figure\,\ref{Fig_HD216018} reports the folded $B_s$ and $B_e$ data. Indeed, the $B_s$ and $B_e$ variabilities present rather small amplitudes. \begin{figure}\center \includegraphics[width=0.45\textwidth]{HD216018_Hs_P34_044.pdf} \includegraphics[width=0.45\textwidth]{HD216018_He_P34_044.pdf} \caption{HD\,216018. $B_s$ and $B_e$ variability.} \label{Fig_HD216018} \end{figure} \section{Magnetic field strength and rotation period} Figure\,\ref{Fig_BvsP} shows the average surface field versus the rotation period for the stars considered here plus HD\,59435, HD\,65339, HD\,70311, HD\,116458 and HD\,200311 from M17. A general decrease of the field with the stellar rotation period appears. Even if it is safe to say that the top-right corner is empty, doubts remain for the bottom-left corner of Figure\,\ref{Fig_BvsP}, since it is impossible to measure weak surface fields from the separation of Fe{\sc ii} 6149.258\,\AA\, Zeeman components that are largely broadened by the stellar rotation. Thanks to the $\lambda^2$ dependence of the Zeeman splitting, an attempt could be made with near-infrared lines. The spread for a given value of the period is not justified by the random distribution of angles between the line of sight, the rotation axis and (if any) the magnetic symmetry axis. For a magnetic dipole, the ratio between the largest and smallest values of $B_s$ is only 1.2 \citep{Preston1969}. However, in our sample the most extreme case is represented by HD\,126515, with a ratio of 1.8. No clear dependence of the field strength on temperature or stellar radius appears. \begin{figure}\center \includegraphics[width=0.45\textwidth]{BvsP.pdf} \center{Rotation period (days)} \caption{Average surface magnetic field as a function of the rotation period. Colors of the filled circles indicate the stellar temperature. If available, the GAIA measure of the stellar radius is proportional to the radius of the empty circle. These radii go from 1.66\,R$_\odot$ (e.g. HD\,154708) to 10.47\,R$_\odot$ (HD\,59435).} \label{Fig_BvsP} \end{figure} \section{Conclusions} The surface field of 36 magnetic chemically peculiar stars has been monitored for 20 years. For each star, by adding archive and literature data, we have extended the time base as much as possible with the aim of determining even very long (decades) periods. Fields have been measured from the wavelength separation of the Zeeman components of the Fe{\sc ii} 6149.258\,\AA\, line under the weak-field approximation. For many stars, magnetic fields are so strong that the Paschen-Back regime should be the appropriate treatment \citep{Stift2008}. The inadequacy of the weak-field approximation does not affect the determination of the period, but rather the shape of the variability curve, whose amplitude is underestimated. For some stars only a lower limit for the period has been found: HD\,55719 with $P > 38$ yr, HD\,75445 with $P > 14$ yr, HD\,110066 with $P > 29$ yr, HD\,116114 with $P > 48$ yr, HD\,137949 with $P > 27$ yr, HD\,165474 with $P > 27$ yr and HD\,177765 with $P > 37$ yr. As to HD\,201601 (=$\gamma$\,Equ), the variability period is very close to one century, at the moment the longest established one for this class of stars. We found that the maximum of the surface field is coincident with the negative extremum of the longitudinal field. 
Figure\,\ref{Fig_BvsP} shows the average surface field versus the rotation periods determined here, or their lowest possible values. As a general rule, it appears that stars with very long periods show the weakest fields. \section*{Acknowledgments} Based on observations collected at the European Southern Observatory (ESO) and on data obtained from the ESO Science Archive Facility. Based on observations made with the Italian Telescopio Nazionale Galileo (TNG). The TNG is operated on the island of La Palma by the Fundaci\'on Galileo Galilei of the INAF (Istituto Nazionale di Astrofisica) at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. This paper includes data collected by the TESS mission, which are publicly available from the Mikulski Archive for Space Telescopes (MAST). Funding for the TESS mission is provided by NASA's Science Mission Directorate. We acknowledge financial contribution from the agreement ASI-INAF n.2018-16-HH.0 and from {\it Programma ricerca di Ateneo UNICT 2020-22 linea 2}. \bibliographystyle{mnras}
\section{Introduction} Outflows driven by star formation are thought to be a crucial driver of galaxy evolution. Strong stellar feedback caused by high star formation rate densities can launch outflows of ionized, neutral and molecular gas that potentially can escape the main body of a galaxy. Consequently, such outflowing gas removes the potential fuel for future star formation. Therefore, outflows can suppress and quench star formation, as also demonstrated by theoretical predictions and simulations \citep[e.g.][]{1986ApJ...303...39D,2017MNRAS.466.1213K,2018ApJ...857..116M}. Depending on the velocity of the outflow and a galaxy's escape velocity, outflowing gas can be re-accreted at later cosmic times (the so-called `galactic fountain') or leave the system altogether. This process thus has the potential to enrich the galactic disk and circum-galactic medium with heavy metals \citep[e.g.][]{Oppenheimer:2006eq,2010MNRAS.406.2325O,Hopkins:2012ez,Christensen:2018ka}. Galactic outflows are a multi-phase phenomenon and are observed across the electromagnetic spectrum from X-ray \citep[e.g.][]{2007ApJ...658..258S}, UV \citep[e.g.][]{2005ApJ...619L..99H}, optical such as H$\alpha$ \citep[e.g.][]{2009ApJ...696..192W} to IR \citep[e.g.][]{2009ApJ...700L.149V}, cold dust \citep[e.g.][]{2010A&A...518L..66R}, PAH emission \citep[e.g.][]{2006ApJ...642L.127E}, and sub-millimeter to radio including H\textsc{i} \citep[e.g.][]{2013Natur.499..450B,2015ApJ...814...83L,Lucero:2015if}. Typically, large-scale outflow features at high relative velocity (100s-1000s km\,s$^{-1}$\xspace) are observed in the ionized and neutral gas, whereas molecular outflows often appear as smaller, more compact features \citep{Strickland:2002kp,Westmoquette:2011bp}. The latter are nonetheless important as they dominate the mass budget \citep{2015ApJ...814...83L}. In some galaxies, the gas phases seem to be stratified with an inner ionized outflow cone, a surrounding neutral shell, and molecular gas situated along the outer edge \citep[e.g.][]{2015ApJ...801...63M}. Typically, the outflows originate from an extended region, so the apparent outflow cone has its tip cut off. Molecular outflows are thus closely intertwined with feedback processes and star formation. The high-resolution structure and kinematic properties of (molecular) outflows have not yet been studied in great detail, primarily due to the lack of observations with sufficiently high resolution and sensitivity. Starburst galaxies are the obvious targets for studying star formation-driven outflows due to the high star formation rates (SFR) in these systems. Consequently, molecular outflows have been studied over the past years in a few nearby starbursts: M\,82 \citep{2002ApJ...580L..21W,2015ApJ...814...83L}, NGC\,253 \citep{2013Natur.499..450B,2017ApJ...835..265W,2018ApJ...867..111Z}, NGC\,1808 \citep{2018ApJ...856...97S}, and ESO320-G030 \citep{2016A&A...594A..81P}. NGC\,253 is one of the nearest starburst systems at a distance of 3.5\,Mpc \citep{Rekola:2005ha}. It is considered one of the prototypical starburst galaxies with a star formation rate surface density of $\Sigma_{SFR} \sim 10^2$\,M$_\odot$\xspace\,yr$^{-1}$\,kpc$^{-2}$ in the nuclear region and a molecular depletion time that is $\tau^{mol}_{dep} \sim 5-25$ times lower than what is found in local disks \citep{Leroy:2015ds}.
A galactic wind emerges from the central $\sim 200$\,pc of NGC\,253; it has been characterized in H$\alpha$, X-ray, as well as neutral and molecular gas emission \citep[e.g.][]{Sharp:2010jl,Turner:1985iy,Sturm:2011jb,Strickland:2000wd,Strickland:2002kp, Westmoquette:2011bp,2000ApJS..129..493H,2013Natur.499..450B,2017ApJ...835..265W}. Due to the close proximity, the starburst and galactic wind can be studied in detail and individual structures can be resolved. Studies of the molecular gas phase in NGC\,253 showed that its central starburst is fueled by gas accretion along the bar \citep{2004ApJ...611..835P}. The molecular ISM in the nuclear region is structured in several clumps that show high temperatures of $\sim 50$\,K \citep{2004ApJ...611..835P,Sakamoto:2011et,2019ApJ...871..170M}. From earlier low resolution observations \citep[$>20$\,pc, e.g.][]{2006ApJ...636..685S,Sakamoto:2011et} to recent observations at high resolution ($8\,\mathrm{pc}\times5$\,pc in \citealt{2017ApJ...849...81A} and 2\,pc in \citealt{2018ApJ...869..126L}), the number of molecular clumps associated with the starburst increased from $\sim5$ to 14. These studies find the clumps to be massive ($4-10 \times 10^4$\,M$_\odot$\xspace), compact ($<10$\,pc), chemically rich (up to $>19$ molecules detected in the 0.8\,mm band) and hot (up to 90\,K). Each clump likely hosts an embedded massive star cluster \citep{2018ApJ...869..126L}. Further structures in the molecular gas are shells and bubbles blown up by feedback from the intense star formation process. \citet{2006ApJ...636..685S} found two 100\,pc diameter superbubbles. \citet{2013Natur.499..450B} report molecular streamers\footnote{The term {\em streamer} here denotes structures with a high aspect ratio that are typically oriented roughly perpendicular to the disk and often show a velocity gradient.} originating from these shells with a lower limit to the outflow rate of $3-9$\,M$_\odot$\,yr$^{-1}$, about three times the star formation rate. This estimate was revisited by \citet{2018ApJ...867..111Z}, based on observations showing that the CO emission associated with the most prominent streamer is optically thick, which increases the estimate to $25-50$\,M$_\odot$\,yr$^{-1}$\xspace. As suggested by these studies, the outflow rate in NGC\,253 is a factor of a few to potentially $>10$ larger than the star formation rate. Hence, the impact of the outflows on the amount of material lost from the molecular gas reservoir, and thus the lifetime of the starburst, is significant. The availability of new data makes it interesting to revisit the determination of the mass outflow rate in NGC\,253, while also removing some limitations of previous determinations. \citet{2013Natur.499..450B} estimated the outflow rate from a few massive molecular streamers, but did not include potential diffuse outflowing gas. Also, resolution plays an important role in the ability to disentangle outflows from material in the starbursting disk. New ALMA band~7 observations provide excellent spatial resolution and reasonable surface brightness sensitivity. This information enables an increasingly accurate determination of the total mass outflow rate, and its impact on the starburst. In this work, we present ALMA \co32 observations carried out in cycles 3 and 4 that target the molecular gas in the central $\sim 750$\,pc of NGC\,253.
Together with ancillary band 3 and 6 data from our previous work \citep{2013Natur.499..450B,2015ApJ...801...63M,Leroy:2015ds,2018ApJ...867..111Z}, we have an inventory of three CO lines to study the molecular gas in the starbursting disk and a kinematically different component that includes the outflow. By decomposing the detected emission, we aim to measure the total molecular gas outflow rate in NGC\,253 and improve upon previous, less systematic, results. Throughout this paper, we adopt a distance of 3.5\,Mpc to NGC\,253 \citep{Rekola:2005ha} at which 1\arcsec\ corresponds to 17\,pc. We also define the ``center'' of the nuclear region of NGC\,253 to be the kinematic center at $\alpha, \delta = 00^h47^m33.134^s, -25^\circ17^m19.68^s$ as identified in \citet{MullerSanchez:2010dr}. The paper is structured as follows: In section~\ref{section: data reduction}, we describe the observational setup and data reduction, and show the results in the form of channel maps, moment maps and position-velocity diagrams. Our approach to separating gas in the star-forming disk from potentially outflowing gas is laid out in section~\ref{section: disk separation}. Section~\ref{section: results separated disk/non-disk} discusses the derived quantities such as CO luminosities, molecular gas masses, outflow rate, kinetic energy and momentum. Our conclusions are summarized in section~\ref{section: summary}. \section{Data Reduction and Imaging}\label{section: data reduction} \floattable \begin{deluxetable*}{lccc} \tablewidth{\linewidth} \tablecaption{Details of the datasets used in this analysis. \label{table: used datasets}} \tablehead{\colhead{} & \colhead{\co10} & \colhead{\co21} & \colhead{\co32}} \startdata ALMA ID & 2011.1.00172.S & 2012.1.00108.S & 2015.1.00274.S\\ spatial resolution & $1.85\arcsec \times 1.32\arcsec$ & $1.70\arcsec \times 1.02\arcsec$ & $0.17\arcsec \times 0.13\arcsec$\\ & 31.4\,pc $\times$ 22.4\,pc & 28.8\,pc $\times$ 17.3\,pc & 2.9\,pc $\times$ 2.2\,pc\\ spectral resolution & 5.0\,km\,s$^{-1}$\xspace & 5.0\,km\,s$^{-1}$\xspace & 2.5\,km\,s$^{-1}$\xspace\\ RMS noise per channel & 1.99\,mJy\,beam$^{-1}$\xspace & 2.19\,mJy\,beam$^{-1}$\xspace & 0.81\,mJy\,beam$^{-1}$\xspace\\ & 75\,mK & 29\,mK & 0.37\,K\\ \enddata \end{deluxetable*} \subsection{Data reduction} The data presented in this paper are based on observations in ALMA cycles~2, 3 and 4 in bands~3, 6 and 7 that cover the redshifted emission in NGC\,253 of \co10, \co21 and \co32 as well as other molecular lines. For data reduction and imaging of the band~3 and 6 data see \citet{2013Natur.499..450B}, \citet{Leroy:2015ds}, \citet{2015ApJ...801...63M} and \citet{2018ApJ...867..111Z}. Table~\ref{table: used datasets} gives an overview of the datasets used in this analysis. For the band~7 observations, we tuned the lower side band to $342.0-345.8$\,GHz and the upper side band to $353.9-357.7$\,GHz (total bandwidth 7.6\,GHz) with 976.6\,kHz channel width (corresponding to 0.8\,km\,s$^{-1}$\xspace). We targeted the central $\sim 750$\,pc of NGC\,253 in a linear four pointing mosaic with two configurations of the 12\,m array (12\,m compact and 12\,m extended, half power beam width $\sim 17\arcsec$) and a five pointing mosaic of the 7\,m array (ACA, half power beam width $\sim 30\arcsec$). Additional single dish observations with the total power array (TP) recover emission on large spatial scales. The baseline ranges covered by this setup are 8.9-49.0\,m, $15.1 - 783.5$\,m and $15.1 - 1813.1$\,m for the ACA and the two 12\,m setups, respectively.
The observations were carried out primarily in the first half of 2016 (TP: 07-Dec-2015 to 02-Aug-2016; ACA: 07-Dec-2015 to 23-Nov-2016; 12\,m compact configuration: 16-Apr-2016, 23-Apr-2016, 17-Jun-2016, 27-Jun-2016; 12\,m extended configuration: 30-Aug-2016, 03-Sep-2016). The total on-source observation time is $48^h45^m$, split across $27^h23^m$ (TP), $14^h57^m$ (ACA), $2^h37^m$ (12\,m compact) and $3^h59^m$ (12\,m extended). The calibrators were: J0006-0623 (bandpass); J0038-2459 (complex gain); the asteroid Pallas (absolute flux density); J0104-2416, J0106-2718 (both WVR). Visibilities of the 12\,m data are calibrated using the ALMA cycle~3 pipeline in \textsc{casa}\xspace 4.6.0 and the delivered calibration script. The other datasets are calibrated in \textsc{casa}\xspace 4.7.2 and the cycle~4 pipeline. In order to image the spectral lines, we subtract the continuum in the $U,V$ plane using a first order polynomial fitted to the channels that do not contain strong spectral lines. We reliably detect $>25$ lines in the range 342.0-345.0\,GHz and 353.9-357.7\,GHz besides the four strong lines \co32, HCN~(4-3)\xspace, HCO$^+$~(4-3)\xspace and CS~(7-6)\xspace. Most of these lines are weak and only detected in small spatial regions, so they do not affect the overall continuum fit and subtraction. \subsection{Imaging}\label{subsection: imaging} Combined imaging of the interferometric data is done with the \texttt{tclean} task in \textsc{casa}\xspace 5.4.0 which includes crucial bug fixes for ALMA mosaics\footnote{For details see NAASC memo 117 by the North American ALMA Science Center (NAASC) at \url{http://library.nrao.edu/public/memos/naasc/NAASC_117.pdf}.}. We regrid the visibilities during deconvolution to a spectral resolution of 2.5\,km\,s$^{-1}$\xspace. Applying a Briggs weighting scheme with robust parameter 0.5 results in a synthesized beam of $0.17\arcsec \times 0.13\arcsec$ (pixel scale $0.05\arcsec$). The images are cleaned to a level of $2.5 \times$ the RMS noise in line-free channels, i.e.\ $2.5 \times 0.81$\,mJy\,beam$^{-1}$\xspace ($2.5 \times 0.37$\,K), using a clean mask derived from a low resolution image of the compact 12\,m array \co32 data only. We correct the cleaned images for the mosaic sensitivity pattern (mosaic primary beam response pattern), combine them with the TP images using \texttt{feather} and finally convert the units to brightness temperature. For the final images, we do not consider the ACA data as they introduce large scale noise fluctuations towards the edge of the mosaic, which we attribute to decreasing sensitivity of the 12\,m data relative to the ACA data. These fluctuations obscure the regions where outflows have been found previously. This work requires accurate integrated flux measurements and a correct representation of the small scale structure, which are defined by single dish observations (TP) and long baselines (extended 12\,m), respectively. By checking the images without ACA data against the images including ACA data, we can confirm that neither the overall flux scale, nor the small scale structure is significantly altered. Data products for \co10 are shown in \citet{2013Natur.499..450B}, \citet{2015ApJ...801...63M} and \citet{Leroy:2015ds}; \citet{2018ApJ...867..111Z} presents the \co21 data. Imaging results for \co32 are presented in the following section.
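To make the imaging procedure explicit, the following is a minimal sketch of the corresponding \textsc{casa}\xspace (5.x) calls. It is illustrative only: the measurement set and image names are hypothetical, and parameters such as image size, threshold and clean mask are placeholders rather than the exact values used for the data presented here.
\begin{verbatim}
# Illustrative CASA 5.x task calls (run inside the CASA environment);
# file names and several parameter values are placeholders.
tclean(vis=['ngc253_12m_compact.ms', 'ngc253_12m_extended.ms'],
       imagename='ngc253_co32',
       specmode='cube', restfreq='345.796GHz', width='2.5km/s',  # regrid to 2.5 km/s
       gridder='mosaic', deconvolver='hogbom',
       weighting='briggs', robust=0.5,                           # Briggs, robust = 0.5
       cell='0.05arcsec', imsize=[2400, 2400],                   # placeholder image size
       threshold='2.0mJy',                                       # ~2.5 x 0.81 mJy/beam RMS
       usemask='user', mask='co32_12mcompact_lowres.mask',       # mask from low-res image
       niter=1000000)

# Correct for the mosaic primary-beam response, then add the total-power (TP) data.
impbcor(imagename='ngc253_co32.image', pbimage='ngc253_co32.pb',
        outfile='ngc253_co32.image.pbcor')
feather(imagename='ngc253_co32.feathered.image',
        highres='ngc253_co32.image.pbcor',
        lowres='ngc253_co32_tp.image')
\end{verbatim}
The subsequent conversion of the feathered cube to brightness temperature and the export to FITS are omitted from this sketch.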
In order to keep the amount of detail and contrast in the high resolution data, we do not match the spatial resolution to that of the data with the lowest resolution, but perform our analysis at the native resolution of each dataset. All further steps work on the data cubes masked at $5.0 \sigma$ (cf. table~\ref{table: used datasets}) and, where necessary, further masks. For generating the masks, we do not consider the non-uniform noise level caused by the mosaic sensitivity pattern but use the per channel RMS noise in the center of the field of view. \subsection{\co32 data presentation}\label{section: channel maps} \begin{figure*} \centering \includegraphics[width=\linewidth]{fig1.pdf} \caption{Channel maps of \co32 in NGC\,253. Every $16^{th}$ channel of 2.5\,km\,s$^{-1}$\xspace width is shown with the corresponding line-of-sight velocity ($\mathrm{v}_{sys} = 250$\,km\,s$^{-1}$\xspace) given in the upper right corner of each panel. The synthesized beam of $0.17" \times 0.13"$ is plotted in the lower right corner; it is hardly noticeable due to its small size. Contours are plotted at $10\sigma$, $20\sigma$, $40\sigma$, $80\sigma$ with an RMS noise of $\sigma = 0.37$\,K. Large structures are marked by dashed contours in those panels that show them most clearly. Further new shells are indicated by dashed circles.} \label{figure: CO channel map} \end{figure*} \begin{figure} \centering \includegraphics[width=\linewidth]{fig2a.pdf} \includegraphics[width=\linewidth]{fig2b.pdf} \includegraphics[width=\linewidth]{fig2c.pdf} \caption{\co32 moment maps of NGC\,253. \emph{top}: Integrated intensity map (moment 0); contours are shown from $250 - 8000$\,K\,km\,s$^{-1}$\xspace in factors of two. \emph{middle}: Velocity field (moment 1); contours are shown from 100\,km\,s$^{-1}$\xspace -- 400\,km\,s$^{-1}$\xspace in steps of 50\,km\,s$^{-1}$\xspace. \emph{bottom}: Moment 2 (corresponding to the velocity dispersion if the line profile were Gaussian); contours are shown from 0\,km\,s$^{-1}$\xspace -- 100\,km\,s$^{-1}$\xspace in steps of 20\,km\,s$^{-1}$\xspace. The color scale is chosen to saturate a few regions with dispersions $>100$\,km\,s$^{-1}$\xspace. All maps are generated from the data cube masked at a $5\sigma$ threshold per channel and confined to the collapsed clean mask to include only emission that has been processed by the clean algorithm.} \label{figure: CO moment maps} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{fig3.pdf} \caption{\co32 position-velocity diagram of NGC\,253 along the major axis centered on the kinematic center averaged over the full width of the field of view ($\sim 30\arcsec$). Pixels below $3\sigma$ are masked and contours are drawn at $10\sigma$, $20\sigma$, $40\sigma$, $80\sigma$ with an RMS noise of $\sigma = 0.37$\,K. Note the vertical spikes indicating high velocity dispersion due to outflowing gas.} \label{figure: co pV} \end{figure} In this section, we present the \co32 data in different representations. Channel maps (figure~\ref{figure: CO channel map}), moment maps (figure~\ref{figure: CO moment maps}) and a position-velocity (pV) diagram (figure~\ref{figure: co pV}) show the spatial and kinematic structures to be discussed and highlight the data quality. Figure~\ref{figure: CO channel map} shows channel maps of the image cube. To retain the intrinsic resolution, only every 16$^{th}$ channel (40\,km\,s$^{-1}$\xspace spacing) is shown here.
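As a concrete illustration of the $5\sigma$ per-channel clipping on which all further analysis steps operate, a minimal numpy sketch could look as follows. The file name is hypothetical, a plain three-dimensional cube in K is assumed, and the velocity axis is only indicative; the maps shown in this paper were produced with \textsc{casa}.
\begin{verbatim}
import numpy as np
from astropy.io import fits

cube = fits.getdata('ngc253_co32.feathered.fits')  # hypothetical file, (nchan, ny, nx), in K
rms  = 0.37    # K, CO(3-2) per-channel RMS from the data table
dv   = 2.5     # km/s channel width

masked = np.where(cube > 5.0 * rms, cube, np.nan)   # 5-sigma threshold per voxel

# Integrated intensity and intensity-weighted velocity as a simple cross-check;
# the velocity axis below is illustrative (in practice read it from the FITS header).
velax = 250.0 + dv * (np.arange(cube.shape[0]) - cube.shape[0] // 2)
with np.errstate(invalid='ignore', divide='ignore'):
    mom0 = np.nansum(masked, axis=0) * dv                                       # K km/s
    mom1 = np.nansum(masked * velax[:, None, None], axis=0) / np.nansum(masked, axis=0)
\end{verbatim}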
Besides the rotating disk of molecular gas, we clearly detect the prominent south-west (SW) streamer \citep{2017ApJ...835..265W} in the range $180 - 250$\,km\,s$^{-1}$\xspace (Fig.~\ref{figure: CO channel map}, panels 220\,km\,s$^{-1}$\xspace and 260\,km\,s$^{-1}$\xspace). Additional gas streamers are apparent between $\sim 60$\,km\,s$^{-1}$\xspace and $\sim 350$\,km\,s$^{-1}$\xspace towards the north and south of the disk, as can be seen for example in the panels at 260\,km\,s$^{-1}$\xspace or 340\,km\,s$^{-1}$\xspace. Several notable molecular shells are present between 180\,km\,s$^{-1}$\xspace and 340\,km\,s$^{-1}$\xspace. Besides the (super-)shells at the eastern (left) and western (right) edge of the map that have been previously identified by \citet{2006ApJ...636..685S} and \citet{2013Natur.499..450B}, further smaller shell-like structures are located along the molecular disk. We calculate image moments (figure~\ref{figure: CO moment maps}) with \texttt{immoments} in \textsc{casa}, using emission above $5\sigma$, to obtain the moment~0 (integrated intensity), moment~1 (intensity-weighted line-of-sight velocity) and moment~2 (intensity-weighted velocity dispersion) maps. Note that due to the complex line shapes, the moment~2 map does not directly correspond to the velocity dispersion, which would only be the case for Gaussian line profiles. The maps are further constrained to the region defined by the collapsed clean mask to limit them to emission that has been processed by the clean algorithm. Figure~\ref{figure: co pV} shows the kinematic structure of NGC\,253 as a pV diagram along the major axis of the disk ($\mathrm{PA} = 55^\circ$) averaged over the full width of the field of view ($\sim 30\arcsec$) centered on the kinematic center. The pV cut shows several high velocity dispersion structures extending from a rotating disk, indicative of outflows. \section{Separating disk and non-disk emission}\label{section: disk separation} \subsection{Separating disk and non-disk emission in position-position-velocity space}\label{subsection: ppV separation} Our goal is to account for all the molecular wind, separating outflowing molecular gas from foreground or background disk emission. A clean separation in 2D position-position space cannot be easily accomplished due to the $78^\circ$ inclination of NGC\,253. At this high inclination, outflows and disk emission are co-spatial in projection. Kinematic information from line-of-sight velocities, however, makes it possible to disentangle the outflow. Note that this becomes increasingly difficult as the velocity vector aligns with the plane of the sky, resulting in line-of-sight velocities that are systemic. From H$\alpha$ kinematic modeling, the NGC\,253 outflow is approximately bi-conical with an axis normal to the disk and an opening angle of $\sim60^\circ$ \citep{Westmoquette:2011bp}, and thus the range of possible projection angles is large (see \citealt{2015ApJ...801...63M} for a sketch). Note that because the cone opening angle is larger than the angle between the axis of the cone and the plane of the sky, gas in the approaching and receding cones can have both blue- and red-shifted velocities with respect to systemic. The launching of molecular gas occurs within the disk through star formation feedback; thus, the outflows originate from the same location in position-position-velocity (ppV) space as disk molecular clouds.
Outflows will therefore blend into the disk near their launching sites, which makes disentanglement increasingly difficult closer to the starburst region. Systematically separating emission corresponding to the disk and the outflow in ppV space is challenging. Algorithmically, this separation is simpler in a lower dimensional space, obtained by slicing the data cube into a collection of 2D position-velocity diagrams. In what follows we identify kinematic components in these diagrams, which we then project back to 3D ppV space. In order to avoid introducing biases, we model the large-scale disk velocity field and use this model as the basis of the kinematic separation. \subsection{Definition of components}\label{subsection: definition disk non-disk} Images of the center of NGC\,253 on large scales show an elongated gas structure (figure~\ref{figure: CO moment maps} top) with a regular velocity field (figure~\ref{figure: CO moment maps} middle) that roughly matches a rotating disk disturbed by streaming motions from a bar \citep[e.g.][]{2004ApJ...611..835P}. The elongated gas structure is consistent with a highly inclined disk of molecular gas, or possibly a ring-like structure as observed in other galaxies \citep[for example, NGC\,1512, NGC\,1808;][]{2016ApJ...823...68S,2018ApJ...857..116M}. Similar structures break up at higher spatial resolution into two embracing spiral arms or complex non-closed orbits in the Milky Way center \citep{2015MNRAS.453..739K,2016MNRAS.457.2675H,2018MNRAS.481....2S}. Superimposed on this large scale structure, there are smaller features that are not part of the large-scale pattern of rotation and streaming motions. Some of them have high aspect ratios in channel maps and line-of-sight velocity gradients, both typical of outflows. Local deviations from the large-scale velocity field can also be due to infalling gas, or clumps of gas that do not follow the global pattern, perhaps due to a cloud-cloud collision. Henceforth, we will refer to the bulk of the molecular gas that moves according to the large-scale velocity field in the central regions of NGC\,253 as the \emph{disk}. We assume this large-scale velocity field consists of rotation and streaming motions. The term \emph{non-disk} refers to any gas that is not following the ppV structure of the disk. By this definition, non-disk gas encompasses material from features that may be attributed to a variety of physical processes, including outflow and infall. Structures of outflowing gas are frequently referred to by names that describe their kinematic or spatial appearance, such as ``streamer''. We will use the term outflow to denote localized structures with morphology and kinematics consistent with gas moving away from the disk, as inferred from their location in ppV space. Typical signatures are a velocity that is inconsistent with rotation in the plane of the disk, and a high aspect ratio oriented roughly perpendicular to the disk major axis. Note that similar kinematic and structural properties can arise in infalling gas clouds. We will assume that all molecular gas with these characteristics is outflowing, which is likely the case for the majority of the material in NGC\,253. \subsection{Position-velocity slicing}\label{subsection: slicing} Kinematic analyses typically depend on high signal-to-noise ratios (SNR) because faint features can easily drown in noisy spectra.
As a trade-off between the need for high SNR and the desire to include as much faint emission as possible, we conduct the following analysis on data cubes masked at the $5\sigma$ level (cf. table~\ref{table: used datasets}). We split the ppV cubes into position-velocity (pV) slices along the major axis of NGC\,253 as shown in figure~\ref{figure: slice positions}. The slices assume the kinematic center is $\alpha,\delta = 00^h47^m33.134^s, -25^\circ17^m19.68^s$ \citep{MullerSanchez:2010dr}, and are oriented along the major axis of the projected CO emission with $\mathrm{PA} = 55^\circ$. The area sliced is chosen to cover the region for which we have overlapping \co10, (2--1) and (3--2), and to cover the full length of the SW streamer outflow feature \citep[17.5\arcsec,][]{2017ApJ...835..265W}. These requirements are fulfilled by slices of $50\arcsec$ (850\,pc) length (major axis) and covering $50\arcsec$ (850\,pc) along the minor axis (figure~\ref{figure: slice positions}). To reduce the problems introduced by splitting features across slices, each slice is $5.0\arcsec$ (85\,pc) wide, and we overlap slices by half their width ($2.5\arcsec$, 42\,pc). A sample pV diagram is shown in figure~\ref{figure: sample slice} for the central slice, which runs along the major axis (offset $0.0\arcsec$). A complete set of pV diagrams is given in appendix~\ref{appendix: all pVs}. The resolution differences between our three transitions, a factor of $\sim 100$ in beam solid angle, are apparent in figure~\ref{figure: sample slice}. In the high angular resolution \co32, small features with large linewidths are common. These features are blurred out in the lower resolution \co10 and (2--1). \begin{figure} \centering \includegraphics[width=\linewidth]{fig4a.pdf} \includegraphics[width=\linewidth]{fig4b.pdf} \includegraphics[width=\linewidth]{fig4c.pdf} \caption{Size and orientation of the position-velocity slices overlaid on the integrated intensity image of \co10 (top), \co21 (middle) and \co32 (bottom). Each slice is $5.0\arcsec$ wide and overlaps adjacent slices by $2.5\arcsec$.} \label{figure: slice positions} \end{figure} \subsection{Modeling the disk} We derive a model for the velocity of the disk component from the \co10 observations using the kinematic fitting tool \texttt{diskfit} \citep{2007ApJ...664..204S,2010MNRAS.404.1733S,2015arXiv150907120S}. Because the \co10 observations cover the largest area among our observations, we use them to derive the model; the additional information provided by the \co21 and/or \co32 data is negligible in terms of the bulk motions of the gas. We obtain a \co10 velocity field by computing the first moment of the cube after masking it at $20\sigma$ (1.26\,K), in order to represent the velocity of the bright emission. We show the details of the fit parameters and a comparison to the \co10 velocity field in Appendix~\ref{appendix: model}. In each pV slice, we use the velocity profile of the \texttt{diskfit} model to define the local disk velocity. We consider CO emission to be consistent with the disk component when the velocity difference from the model is within the local observed velocity range, $\Delta v$. This velocity range varies spatially and depends on distance $x$ from the major axis, increasing towards the center due to the combined effects of higher intrinsic velocity dispersion and projection.
For the success of this analysis, it is crucial that $\Delta v$ is broad enough to cover the observed velocity range of the disk but also narrow enough in order not to classify potential outflows as disk. The definition of $\Delta v$ is thus a crucial source of uncertainty for the derived quantities. We parametrize the velocity range of the disk as \begin{equation} \Delta v\,(x) = 120\,\exp \left( - \left(\frac{x}{2.5}\right)^2 \right) + 100, \label{equation: delta v} \end{equation} \noindent with $\Delta v$ in km\,s$^{-1}$ and $x$ in arcsec. We find this empirical relation to fit the pV data best; variations as small as 10-20\% already show a noticeable mismatch, as discussed in appendix~\ref{appendix: disk velocity range}. Note that the parameters (120, 2.5 and 100) in equation~\ref{equation: delta v} are visually selected to fit the pV diagrams as well as possible. The quality of this definition can be assessed from figure~\ref{figure: sample slice} and appendix~\ref{appendix: all pVs}: The velocity ranges are wide enough to include obvious emission of the disk but do not extend into the kinematically distinct features (potential outflows) that appear as spikes. This is most apparent for \co32 as this line offers the highest spatial resolution. The effect of a 10\% change in the velocity range $\Delta v$ corresponds to up to 0.1\,dex variations in the derived quantities (cf. appendix~\ref{appendix: disk velocity range}). \subsection{Selecting the components} We use the modelled velocity field and the $\Delta v$ relation together to define a ``disk mask'' over ppV space, corresponding to emission that is consistent with disk rotation. We show in figure~\ref{figure: sample slice} the central pV slices for \co10, (2--1) and (3--2). We show in Appendix~\ref{appendix: all pVs} the complete set of pV slices. Note that the CO emission extends beyond the disk mask. These extensions are not symmetric and are due to non-disk gas and projection effects. At $\sim78^\circ$ inclination, gas flowing perpendicular to the galaxy disk towards the south (negative slice offsets) is primarily approaching us and seen at lower velocities relative to the disk emission. Similarly, outflow emission toward the north is primarily at velocities higher than the disk emission. Consequently, emission in pV slices shifts from lower to higher velocities relative to the disk model when the offset from the major axis increases (see figure~\ref{figure: all pV diagrams} in appendix~\ref{appendix: all pVs}). We designed the disk mask to be wide enough to capture the disk emission but exclude the asymmetric component caused by outflows. We define a ``non-disk'' mask that is the mathematical complement of the disk mask, with the addition of removing emission from known sources (portions of the spiral arms) that were not included in our model of the central disk and are not of interest for this analysis. \begin{figure} \centering \includegraphics[width=\linewidth]{fig5a.pdf} \includegraphics[width=\linewidth]{fig5b.pdf} \includegraphics[width=\linewidth]{fig5c.pdf} \caption{Position-velocity diagram of the central slice (offset $0.0\arcsec$) showing the construction of the disk/non-disk masks. The background images show the flux density above $5.0 \sigma$ for \co10 (top), \co21 (center) and \co32 (bottom) on an identical gray scale. Contours are drawn at 1, 2, 4, 8 and 16\,K. The central velocity for our model of the disk emission is illustrated by the red line. The golden-shaded area denotes the disk mask.
Similar figures for other offsets are shown in Appendix~\ref{appendix: all pVs}. Note that the different transitions have different angular resolutions.} \label{figure: sample slice} \end{figure} \subsection{Identifying outflows in the non-disk component} We identify three different types of structures in the non-disk component (figure~\ref{figure: separated moments}, contours in figure~\ref{figure: nondisk zoomin} highlight these structures): (1) Emission that is co-located with the central disk and bar in projection. This is visible as a ridge in \co32 (inner contour in figure~\ref{figure: nondisk zoomin}), and also present but less apparent in \co10 and \co21. The structure is unlikely to be an outflow. It appears more likely to be an additional kinematic component of the disk/bar that is not included in the model we used for the separation. We therefore do not consider this gas to contribute to the total mass outflow rate. (2) Emission associated with the so-called western superbubble, located to the west and north of the central starburst region \citep[][shown by the western contour in figure~\ref{figure: nondisk zoomin}]{2006ApJ...636..685S,2013Natur.499..450B}. This feature is already known to be kinematically distinct from the surrounding gas. Part of it is likely the base of the northern outflow cone (giving rise, for example, to the NW streamers identified by \citeauthor{2013Natur.499..450B}), but it is difficult to know what portion of the emission should be associated with a net outflow. In our calculations below we exclude this feature from the total outflow rate of NGC\,253, although it likely contributes some outflowing gas. (3) The remaining gas associated with the non-disk component is organized in small clumps along the edge of the disk region or beyond it. Some of this gas is not discernible as individual structures, particularly in the \co10 and \co21 cubes, perhaps due to the resolution but maybe also due to the excitation conditions, and instead constitutes extended regions of diffuse emission. Some of the emission is located in well-defined structures known to be part of the outflow, such as the SW streamer, which is apparent in all CO transitions. In summary, the non-disk component consists of these three sub-components: structures that we associate with a net ``outflow,'' structures that are part of the ``western superbubble,'' and structures that are co-located with the ``disk''. The latter is not associated with the outflow, while parts of the western superbubble may contribute to it. Below we calculate properties for the two components, disk and non-disk, and for their sub-components individually where it is feasible to do so. \section{Results}\label{section: results separated disk/non-disk} The process described in the previous section allows us to estimate the properties of the galactic outflow and other structures. A 2D representation of the separated data cubes is shown in figure~\ref{figure: separated moments} in the form of moment maps for integrated intensity (moment 0) and intensity-weighted velocity (moment 1) for all three CO transitions. Striping artifacts due to the pV cuts used in the separation method are present in the disk and non-disk components, visible as straight lines parallel to the major axis. This is primarily an aesthetic issue. We tested their effects on the fluxes and derived velocities by varying the slice width and found them to be negligible. \begin{figure*} \caption{Comparison between original moment maps and separated disk/non-disk components.
The outline of the maps is defined by the observed field of view and the square region considered for separating the kinematic components.} \subfloat[ Moment 0 (integrated intensity) of the original image (\emph{top}), disk component (\emph{middle}) and non-disk component (\emph{bottom}). The logarithmic color scales are identical for all panels and chosen to also show the fainter non-disk component, at the cost of saturating the inner regions of the disk. Contours are drawn at $\log \left( F\ \lbrack \mathrm{K\,km\,s}^{-1} \rbrack \right) = 1.7, 2.0, 2.3, 2.6, 2.9, 3.2, 3.5$; for clarity, only every other contour is drawn for \co32. ]{ \centering \includegraphics[width=\linewidth]{fig6a.pdf} } \label{figure: separated moments} \end{figure*} \begin{figure*} \ContinuedFloat \centering \subfloat[ Moment 1 (intensity-weighted velocity) of the original image (\emph{top}), disk component (\emph{middle}) and non-disk component (\emph{bottom}). Contours are drawn at 150, 200, ..., 350\,km\,s$^{-1}$\xspace. The noise edge visible in \co32 is due to the primary beam correction required to derive accurate fluxes. ]{ \centering \includegraphics[width=\linewidth]{fig6b.pdf} } \stepcounter{figure} \end{figure*} \begin{figure} \centering \includegraphics[width=\linewidth]{fig7a.pdf} \includegraphics[width=\linewidth]{fig7b.pdf} \caption{A zoom-in on the non-disk component of \co32 (the bottom left panels in figure~\ref{figure: separated moments}a and \ref{figure: separated moments}b). \emph{top}: Moment~0 (integrated intensity) with contours at $\log \left( F\ \lbrack \mathrm{K\,km\,s}^{-1} \rbrack \right) = 2.0, 2.5, 3.0$. \emph{bottom}: Moment~1 (intensity-weighted velocity). The thick contours show the regions discussed in the text (section~\ref{section: outflow rate}): gas that is kinematically not consistent with disk rotation but co-spatial with the disk in projection, and the western superbubble to the north-west of the disk.} \label{figure: nondisk zoomin} \end{figure} \floattable \begin{deluxetable*}{lccccccc} \tablewidth{\linewidth} \tablecaption{Results of separating disk from non-disk emission in NGC\,253. Uncertainties for these quantities are discussed in the corresponding subsections of section~\ref{section: results separated disk/non-disk}.
\label{table: results}} \tablehead{ \colhead{quantity} & \colhead{unit} & \multicolumn{2}{c}{\co10} & \multicolumn{2}{c}{\co21} & \multicolumn{2}{c}{\co32}\\ \colhead{} & \colhead{} & \colhead{disk} & \colhead{non-disk} & \colhead{disk} & \colhead{non-disk} & \colhead{disk} & \colhead{non-disk} } \startdata \multicolumn{2}{l}{\bf luminosity}\\ L$_\mathrm{CO}$ & K\,km\,s$^{-1}$\,pc$^2$\xspace & $2.8 \times 10^8$ & $4.2 \times 10^7$ & $2.3 \times 10^8$ & $4.5 \times 10^7$ & $1.8 \times 10^8$ & $1.2 \times 10^7$ \\ fraction & \% & 87 & 13 & 84 & 16 & 94 & 6.5 \\ \midrule \multicolumn{2}{l}{\bf molecular gas mass $\mathrm{M}_\mathrm{mol}$ \tablenotemark{\dag}}\\ total\tablenotemark{a} & M$_\odot$\xspace & $3.1 \times 10^8$ & $4.5 \times 10^7$ & $3.1 \times 10^8$ & $6.1 \times 10^7$ & $2.9 \times 10^8$ & $2.0 \times 10^7$ \\ \phantom{---}outflow\tablenotemark{b} & M$_\odot$\xspace & & $2.7 \times 10^7$ & & $4.1 \times 10^7$ & & $8.3 \times 10^6$ \\ \phantom{---}superbubble\tablenotemark{c} & M$_\odot$\xspace & & $8.9 \times 10^6$ & & $8.9 \times 10^6$ & & $5.8 \times 10^6$ \\ \phantom{---}other-disk\tablenotemark{d} & M$_\odot$\xspace & & $7.6 \times 10^6$ & & $7.4 \times 10^6$ & & $5.8 \times 10^6$ \\ \midrule \multicolumn{2}{l}{\bf molecular mass outflow rate $\dot{M}$} \tablenotemark{\ddag}\\ \phantom{---}outflow (continuous)\tablenotemark{e} & M$_\odot$\,yr$^{-1}$\xspace & & 14 & & 20 & & 2.7 \\ \phantom{---}outflow (constant)\tablenotemark{f} & M$_\odot$\,yr$^{-1}$\xspace & & 29 & & 39 & & 4.8 \\ \midrule \multicolumn{2}{l}{\bf kinetic energy $\mathrm{E}_\mathrm{kin}$ \tablenotemark{\S}}\\ \phantom{---}outflow (continuous)\tablenotemark{e} & erg & & $3.9 \times 10^{54}$ & & $4.5 \times 10^{54}$ & & $6.5 \times 10^{53}$ \\ \phantom{---}outflow (constant)\tablenotemark{f} & erg & & $2.5 \times 10^{54}$ & & $3.1 \times 10^{54}$ & & $4.3 \times 10^{53}$ \\ \midrule \multicolumn{2}{l}{\bf momentum $\mathrm{P}$ \tablenotemark{\P}}\\ \phantom{---}outflow (continuous)\tablenotemark{e} & M$_\odot$\,km\,s$^{-1}$\xspace & & $6.9 \times 10^8$ & & $8.7 \times 10^8$ & & $1.2 \times 10^8$ \\ \phantom{---}outflow (constant)\tablenotemark{f} & M$_\odot$\,km\,s$^{-1}$\xspace & & $4.8 \times 10^8$ & & $6.4 \times 10^8$ & & $8.0 \times 10^7$ \\ \enddata \tablenotetext{a}{CO line luminosity of all emission considered consistent with disk rotation (disk) and not consistent with disk rotation (non-disk), respectively.} \tablenotetext{b}{Non-disk excluding the western superbubble and the gas that is co-spatial with the projected disk.} \tablenotetext{c}{Non-disk emission belonging to the western superbubble as defined by \citet{2006ApJ...636..685S}.} \tablenotetext{d}{Non-disk gas that is co-spatial with the disk in projection.
See section~\ref{section: outflow rate} for the definition.} \tablenotetext{e}{Outflowing gas as defined by note $^b$ under the assumption of continuous mass ejection without accelerations to the gas after ejection.} \tablenotetext{f}{Outflowing gas as defined by note $^b$ under the assumption of an approximately constant starting mass outflow rate over the lifetime of the starburst.} \tablenotetext{\dag}{Molecular gas mass derived using a conversion factor for \co10 emission of $\mathrm{X}_{\mathrm{CO}} = 0.5\times10^{20}\,\mathrm{cm}^{-2}\,\left(\mathrm{K\,km\,s}^{-1}\right)^{-1}$, including the contribution of Helium, and assuming CO brightness temperature line ratios of $r_{21} = 0.80$ and $r_{31} = 0.67$ for \co21 and \co32 relative to \co10.} \tablenotetext{\ddag}{Deprojected molecular mass outflow rate. $50^{th}$ percentile best estimate assuming a flat distribution of outflow inclinations for the unknown geometry.} \tablenotetext{\S}{Deprojected kinetic energy of the molecular gas. $50^{th}$ percentile best estimate assuming a flat distribution of outflow inclinations for the unknown geometry.} \tablenotetext{\P}{Deprojected momentum of the molecular gas. $50^{th}$ percentile best estimate assuming a flat distribution of outflow inclinations for the unknown geometry.} \tablecomments{Sources of error are discussed and quantified in the respective subsections of section \ref{section: results separated disk/non-disk}.} \end{deluxetable*} \subsection{CO luminosities} We quantify in table~\ref{table: results} the CO luminosities of the disk and non-disk components. We measure luminosities of $2.8 \times 10^8$\,K\,km\,s$^{-1}$\,pc$^2$\xspace, $2.3 \times 10^8$\,K\,km\,s$^{-1}$\,pc$^2$\xspace and $1.8 \times 10^8$\,K\,km\,s$^{-1}$\,pc$^2$\xspace for \co10, (2--1) and (3--2), respectively, in the central disk of NGC\,253. The non-disk component is, naturally, much fainter, with luminosities of $\sim 4.2 \times 10^7$\,K\,km\,s$^{-1}$\,pc$^2$\xspace for \co10, $\sim 4.5 \times 10^7$\,K\,km\,s$^{-1}$\,pc$^2$\xspace for (2--1) and $\sim 1.2 \times 10^7$\,K\,km\,s$^{-1}$\,pc$^2$\xspace for (3--2). These correspond to approximately 12.9\%, 16.4\% and 6.5\% of the total luminosity. These luminosities are measured over the sliced area (cf.\ figure~\ref{figure: slice positions}) for which the coverage is not the same among the datasets. We therefore also measure luminosities integrated over the same spatial region, here defined as the overlap between the datasets. This overlap amounts to 885\,\arcsec$^2$ ($2.55\times10^5$\,pc$^2$). The luminosities in the overlap area are: disk: $2.6 \times 10^8$\,K\,km\,s$^{-1}$\,pc$^2$\xspace, $2.1 \times 10^8$\,K\,km\,s$^{-1}$\,pc$^2$\xspace and $1.8 \times 10^8$\,K\,km\,s$^{-1}$\,pc$^2$\xspace; non-disk: $2.6 \times 10^7$\,K\,km\,s$^{-1}$\,pc$^2$\xspace, $2.4 \times 10^7$\,K\,km\,s$^{-1}$\,pc$^2$\xspace and $1.2 \times 10^7$\,K\,km\,s$^{-1}$\,pc$^2$\xspace for \co10, (2--1) and (3--2), respectively. An interesting result coming out of our decomposition is that not all the material we identify as ``outflow'' is in well-defined structures such as the streamers identified by \citet{2013Natur.499..450B}. Correctly estimating the outflow rate requires also accounting for a diffuse, extended component. It is important to compare our fluxes to measurements in the literature.
\citet{1996A&A...305..421M} find a \co21 luminosity of $1.2 \times 10^6$\,K\,km\,s$^{-1}$\xspace\,arcsec$^2$, which translates\footnote{adjusting from the distance $\mathrm{D} = 2.5$\,Mpc used by \citeauthor{1996A&A...305..421M} to the $\mathrm{D} = 3.5$\,Mpc assumed here} to $3.5 \times 10^8$\,K\,km\,s$^{-1}$\,pc$^2$\xspace or $1.3$ times our measurement. Their observations cover $80\arcsec \times 60\arcsec$, an area similar to our \co10 observations (but $\sim 4$ times larger than the area of our \co32 observations). For the outflow \co10 luminosity, \citet{2013Natur.499..450B} derive an estimate of $2.0 \times 10^7$\,K\,km\,s$^{-1}$\,pc$^2$\xspace by summing over individually identified molecular outflow features. This includes flux from the ``superbubble'' component, so it is probably better compared to the sum of our ``outflow'' and ``superbubble'' components of $\sim3.6\times10^7$\,K\,km\,s$^{-1}$\,pc$^2$\xspace. Given the large methodological differences and the importance of the diffuse emission, these numbers are in reasonable agreement. \subsection{Masses of components}\label{section: mass distribution} The total gas mass $M$ is estimated from the CO line luminosity, using the conversion factor $\mathrm{X}_{\mathrm{CO}} = 0.5\times10^{20}\,\mathrm{cm}^{-2}\,\left(\mathrm{K\,km\,s}^{-1}\right)^{-1}$ corresponding to $\alpha_\mathrm{CO} = 1.1\,\mathrm{M}_\odot\,\left(\mathrm{K\,km\,s}^{-1}\,\mathrm{pc}^{2}\right)^{-1}$ discussed by \citet{Leroy:2015ds} for the central starburst region. This value accounts for the effects of moderate optical depth, high velocity dispersion, and warm gas temperatures that are likely to dominate the central regions of NGC\,253. The masses we report include the contribution of Helium to the total mass. To compute masses using the \co21 and \co32 transitions we assume typical line ratios of $r_{21} = 0.80$ and $r_{31} = 0.67$ relative to \co10 as implied by \citet{2018ApJ...867..111Z}. Note that we do not measure line ratios from the images but adopt a uniform factor to keep the mass measurements from the three observed CO lines independent. Table~\ref{table: results} lists the masses corresponding to the disk and the non-disk components. Uncertainty in the mass estimates arises primarily from the assumed conversion factor and the apportioning of emission among the different components. The calibration uncertainty for the flux measurements is $\sim10-15\%$ for the ALMA observations. Overall, we adopt a systematic error of a factor of $\sim2$ for the derived masses. The molecular masses derived from the three CO transitions are very similar. They match within 10\% for the disk component, and within 50\% for the non-disk component. We estimate the total gas mass in the center of NGC\,253 to be $\sim 3.5 \times 10^8$\,M$_\odot$ (adding the disk and non-disk components), with estimates in the range of $3.1-3.6 \times 10^8$\,M$_\odot$\xspace for the different transitions. About 85\% of the total mass is in the disk component. The masses estimated in the non-disk components using \co10 and \co21 are fairly similar at $4.5 \times 10^7$\,M$_\odot$\xspace and $6.1 \times 10^7$\,M$_\odot$\xspace whereas in \co32 we detect a lower $2.0 \times 10^7$\,M$_\odot$\xspace, a consequence of the lower luminosity. The non-disk masses are primarily contributed by the outflow component ($\sim 50$\%). About $20-30$\% of the mass is in the western superbubble and $12-30$\% is co-spatial with the disk but kinematically distinct.
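The conversion from the measured luminosities to the masses listed in table~\ref{table: results} is simple enough to spell out. The following minimal sketch uses the constants quoted above and the luminosities from the table, and reproduces the tabulated masses to within rounding.
\begin{verbatim}
ALPHA_CO = 1.1            # Msun (K km/s pc^2)^-1 for CO(1-0), Helium included
RATIO = {'co10': 1.0, 'co21': 0.80, 'co32': 0.67}   # r21, r31 relative to CO(1-0)

L_disk    = {'co10': 2.8e8, 'co21': 2.3e8, 'co32': 1.8e8}   # K km/s pc^2, from the table
L_nondisk = {'co10': 4.2e7, 'co21': 4.5e7, 'co32': 1.2e7}

def mol_mass(L_co, line):
    """Molecular gas mass in Msun from a CO luminosity in K km/s pc^2."""
    return ALPHA_CO * L_co / RATIO[line]

M_disk    = {line: mol_mass(L, line) for line, L in L_disk.items()}
M_nondisk = {line: mol_mass(L, line) for line, L in L_nondisk.items()}
# e.g. M_disk['co10'] ~ 3.1e8 Msun and M_nondisk['co32'] ~ 2.0e7 Msun,
# matching the tabulated values within rounding
\end{verbatim}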
It is important to compare these mass estimates to previous results for the total molecular gas mass in NGC\,253, noting that our analysis covers the central $45\arcsec \times 25\arcsec$ ($750\,\mathrm{pc} \times 400\,\mathrm{pc}$). Towards the east, $\sim 10\%$ of the known molecular gas close to the center is not covered by our \co32 observations and thus not considered in this analysis. The agreement with previous measurements is very good. \citet{1996A&A...305..421M} reported a mass of $1.3 \times 10^8$\,M$_\odot$\xspace over a similar area ($80\arcsec \times 60\arcsec$ in the center of NGC\,253), but this was based on a different distance and $\mathrm{X}_{\mathrm{CO}}$. After correcting for those differences, their luminosity corresponds to $4.2 \times 10^8$\,M$_\odot$\xspace, consistent with our measurement. Using the same distance and the same 1--0 observations, \citet{Leroy:2015ds} measure a molecular mass of $3.5 \times 10^8$\,M$_\odot$\xspace. \citet{2018ApJ...860...23P} report a total gas mass of $4.5 \times 10^8$\,M$_\odot$\xspace\ derived from the sub-mm dust spectral energy distribution, which is consistent with our result given the very different methodologies. No estimates in the literature separate the ``disk'' and ``non-disk'' components as we do above. Previous estimates of the outflowing mass range from a lower limit of $6.6 \times 10^6$\,M$_\odot$\xspace calculated for the optically thin limit \citep{2013Natur.499..450B}, to $2-4 \times 10^7$\,M$_\odot$\xspace when accounting for optical depth \citep{2018ApJ...867..111Z}. Since we identify an outflowing mass of $\sim5\times10^7$\,M$_\odot$\xspace, the agreement with the latter estimate is fairly good. Note, however, that these studies derive the outflowing mass from individual features rather than using the position-velocity information as we do here in a systematic way. \subsection{Mass outflow rate}\label{section: outflow rate} \begin{figure} \centering \includegraphics[width=\linewidth]{fig8a.pdf} \\ \vspace{0.2cm} \includegraphics[width=\linewidth]{fig8b.pdf} \caption{Deprojected molecular mass outflow rate averaged over 0.1\,Myr as a function of time since ejection (\emph{top}) and as a function of deprojected distance between outflow and launching site (\emph{bottom}). The top panel implicitly assumes continuous mass ejection without accelerations to the gas after ejection, while the lower panel assumes an approximately constant starting mass outflow rate over the lifetime of the starburst. The shaded area indicates approximate errors ($16^{th}$ to $84^{th}$ percentile), which are dominated by uncertainties in the deprojection geometry. Dotted lines represent the ranges where confusion with gas in the disk occurs and where the limited field-of-view affects the completeness.} \label{figure: outflow rate} \end{figure} A mass flow rate is defined as the mass passing through a surface per unit time. In our case, we are interested in the flow of molecular gas mass through a virtual closed surface around the center of NGC\,253 at a given distance. Note that an individual outflow feature observed over a certain length, such as the SW streamer, can develop in at least two ways: as the distance of an outflow from its origin corresponds to time since ejection times velocity, continuously outflowing gas results in extended streaming structures. Gas ejected in a single ejection event in the past with a distribution of ejection velocities, on the other hand, will also result in an extended streamer.
In reality, gas will be ejected with a distribution of velocities at a varying rate over a period of time, and in order to interpret the measurements we need to make some simplifying assumptions. We choose two edge cases to span the range of different interpretations: (1) the gas does not experience accelerations after being launched \citep[however, see ][]{2017ApJ...835..265W}, and (2) the gas outflow rate is approximately constant with time. For all calculations, however, we assume that the projected direction of flow is perpendicular to the central plane of the bar and that CO emission traces the mass with a constant conversion factor. We compute both the outflow rate as a function of distance and as a function of time. If we assume that the mass outflow rate has been approximately constant over the lifetime of the starburst, for example, a diminishing outflow rate as a function of distance would suggest that gas is either launched with or somehow develops a distribution of velocities. Conversely, if we assume that the present day velocity has been constant since the gas was ejected, we can derive a history of the mass outflow rate as a function of time and account for a variable mass outflow rate. Both interpretations are equally, but not simultaneously, valid. For the detailed calculation of the mass outflow rate, we proceed as follows. A mass outflow rate is $\dot{\mathrm{M}} = \mathrm{M} \,t^{-1}$ with mass M and relevant time scale $t$. For each image element $i$ (3D pixel or sometimes also called voxel), we calculate the outflow rate $\dot{m}_i$ of the gas that was ejected at time $t_{{\mathrm{eject}},i}$ over the time interval $\Delta t_{\mathrm{cross},i}$, as the ratio of the pixel mass ${m}_i$ to the pixel crossing interval $\Delta t_{\mathrm{cross},i}$. Ejection time and pixel crossing interval are functions of the outflow velocity $v_i$ and the distance $s_i$ between current pixel position and the launching site, and the pixel size in the direction of the flow $\Delta s$, respectively. We therefore compute \begin{eqnarray} \Delta t_{\mathrm{cross},i} &=& \frac{\Delta s}{v_i}\\ \dot{m}_i\ (t_{\mathrm{eject},i}) &=& \frac{m_i}{\Delta t_{\mathrm{cross},i}}\\ t_{\mathrm{eject},i} &=& \frac{s_i}{v_i} \end{eqnarray} \noindent obtaining a mass outflow rate $\dot{m}_i$, a distance $s_i$, and an ejection time $t_{\mathrm{eject},i}$ for each pixel in the ``outflow'' component. Note that this approach takes the 3D phase space information into account by treating pixels independently. Typically, a sightline shows multiple pixels with emission at different velocities that all contribute an outflow rate with their respective mass, distance and velocity. We then bin the outflow rates $\dot{m}_i$ by ejection time $t_{\mathrm{eject},i}$ and integrate over the time range $\left[\mathrm{T}_1, \mathrm{T}_2 \right]$ to obtain the average outflow rate in this time interval, \begin{equation} \dot{M} \left( \mathrm{T}_1, \mathrm{T}_2 \right) = \frac{\displaystyle \sum_i \dot{m}_i \left( \mathrm{T}_1 < t_{\mathrm{eject},i} < \mathrm{T}_2 \right) \Delta t_{\mathrm{cross},i} }{\displaystyle \mathrm{T}_2 - \mathrm{T}_1 }. \label{equation: outflow rate time} \end{equation} \noindent Similarly, binning by distance results in the average outflow rate at a given distance, \begin{equation} \dot{M} \left( \mathrm{D}_1, \mathrm{D}_2 \right) = \frac{\displaystyle \sum_i \dot{m}_i \left( \mathrm{D}_1 < s_i < \mathrm{D}_2 \right) \Delta s }{\displaystyle \mathrm{D}_2 - \mathrm{D}_1 }.
\label{equation: outflow rate distance} \end{equation} \noindent Performing binning on a sequence of time intervals yields the outflow rate history, while binning in distance tells us how far from the launching site a given fraction of the mass is able to escape. Calculating velocity $v$ and distance $s$ requires knowledge about the geometry and origin of each outflowing gas parcel. The simplest assumption, used here, is that on average outflows are launched in the plane of the central region of the galaxy, which corresponds to launching on the major axis. The distance $s$ is thus the projected distance to the major axis on the edge of an outflow cone with a given opening angle. Note that the outflow originates from an extended region in the disk; the term cone thus refers to a cut-off cone (called a frustum in geometry). Velocity $v$ is the difference between the velocity at the launching site and the current velocity of the outflow parcel, i.e.\ the velocity difference over the distance $s$. Both the velocity of the launching site and the projected distance are uncertain. The velocity changes by $\pm 25$\,km\,s$^{-1}$\xspace when an outflow originates from the northern/southern edge of the observed CO disk (above/below the plane), while the projected distance traveled by the gas changes by $\pm1.25\arcsec$ ($\pm20$\,pc). Distance $s$, ejection time scale $t_\mathrm{eject}$ and pixel crossing time scale $t_\mathrm{cross}$ are measured as projected quantities that need to be deprojected to account for the outflow geometry. The bright molecular streamers (the SW and SE streamers) seem to lie at the edge of the ionized outflow cone with $\sim60^\circ$ opening angle \citep{2013Natur.499..450B}. Assuming that all molecular outflows are along this cone, and that the axis of the cone is oriented perpendicular to the disk ($i = 78^\circ$), the range of effective inclination of outflowing gas can be anywhere between $\theta = 48^\circ$ and $\theta = 108^\circ$. Deprojected velocity, $v_\mathrm{depro} = v_\mathrm{obs} / \sin\theta$, and distance, $s_\mathrm{depro} = s_\mathrm{obs} / \cos\theta$, have a direct effect on the deprojected outflow rate, $\dot{m}_\mathrm{depro} = \dot{m}_\mathrm{obs} \tan\theta$, and also on the inferred time and distance evolution of the outflow rate. We use a Monte Carlo approach to derive the errors introduced by deprojection, assuming that the outflow direction has an equal probability of being in any direction along the surface of the outflow cone. Figure~\ref{figure: outflow rate} shows the molecular mass outflow rate as a function of time or distance, corresponding to the two alternative interpretations we discuss above: a flow where the distribution of material is interpreted as resulting from the history of mass outflow rate (top panel), and one where we show the mass outflow rate as a function of distance, which under the assumption of a constant outflow rate over the last several Myr can be interpreted as an efficiency of ejection to a given distance (bottom panel). Indeed, for an outflow with a distribution of velocities, the slower material will not travel as far in a given time, nor will it escape the galaxy if it does not have a high enough velocity. Close to the starburst region (or at small times since ejection) the mass outflow rate drops to zero, because it becomes increasingly difficult to separate the ``outflow'' component from the ``disk'' component.
At large values of distance or time it also drops to zero, due to a decreasing amount of outflowing molecular material detected far from the starburst (and the fact that the observations have a limited field-of-view). The constant outflow rate out to $\sim 300$\,pc is in tension with \citet{2018ApJ...853..173K} who find a steeply dropping ``cold'' ($\mathrm{T}<5050$\,K) component in their TIGRESS simulation. Within 400\,pc, they find the averaged mass loading factor to drop by two orders of magnitude. It is unlikely that the SFR in NGC\,253 has increased by two orders of magnitude within the past 1-2\,Myr, which would alter the observed constant outflow rate profile to be consistent with the \citet{2018ApJ...853..173K} simulation. Note, however, that their simulation recreates solar neighborhood-like conditions instead of a starburst. A direct comparison may thus not be possible. Our data show that the \emph{average} outflow rates within 20\arcsec (340\,pc) from the major axis are 29\,M$_\odot$\,yr$^{-1}$\xspace ($^{+0.48}_{-0.35}$\,dex), 39\,M$_\odot$\,yr$^{-1}$\xspace ($^{+0.49}_{-0.34}$\,dex) and 4.8\,M$_\odot$\,yr$^{-1}$\xspace ($^{+0.50}_{-0.39}$\,dex) for \co10, (2--1) and (3--2), respectively. Similarly, within the past 1.0\,Myr, the \emph{average} outflow rates are 14\,M$_\odot$\,yr$^{-1}$\xspace ($^{+0.25}_{-0.29}$\,dex), 20\,M$_\odot$\,yr$^{-1}$\xspace ($^{+0.27}_{-0.37}$\,dex) and 2.7\,M$_\odot$\,yr$^{-1}$\xspace ($^{+0.22}_{-0.56}$\,dex) for \co10, (2--1) and (3--2), respectively. The uncertainties, indicated by the $16^{th}$ to $84^{th}$ percentile in the Monte Carlo described above, are substantial at a factor of $2-3$. Real systematic uncertainties are even larger, since there can be conversion of molecular into atomic material \citep[c.f. ][]{2015ApJ...814...83L}, or, in general, variations in the CO-to-H$_2$ conversion. Note that the average outflow rates quoted above differ between the two representations, with the median outflow rate as a function of distance being about twice as high as a function of time. The outflowing mass is identical in both cases, and the difference arises solely from binning. Comparing between lines, it is apparent that as measured in \co32 the outflow rate is roughly one order of magnitude lower than for the lower two transitions. This is a direct consequence of the lower mass detected in \co32, and the smaller field-of-view of those observations. The \co32 observations cover only $\sim 12.5\arcsec$ ($\sim 210$\,pc projected) above/below the disk and thus miss significant amounts of non-disk gas. Their lower surface brightness sensitivity means we also fail to detect a diffuse non-disk component like the one seen in the two lower lines. The measurements in \co32 should thus be interpreted as a lower limit, and in that sense they are consistent with those for the lower two transitions. Overall, the deprojected total mass outflow rate in the starburst of NGC\,253 is most likely in the range $\sim 14-39$\,M$_\odot$\,yr$^{-1}$\xspace as derived from \co10 and \co21 with $\sim 0.4$\,dex uncertainty. The large spread arises due to different interpretations of the kinematics of the observed gas, while the errors are due to the unknown geometry. The majority of this outflow rate is contributed by massive outflows alongside the disk such as the SW/SE streamers, with a significant contribution by diffuse molecular gas.
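For clarity, the per-voxel bookkeeping behind equations~\ref{equation: outflow rate time} and \ref{equation: outflow rate distance} can be sketched as follows. The array names are hypothetical (one entry per ``outflow'' voxel, with masses in M$_\odot$\xspace, deprojected velocities in km\,s$^{-1}$\xspace and deprojected distances in pc); the Monte Carlo over cone orientations and the completeness cuts near the disk and at the edge of the field of view are omitted from this sketch.
\begin{verbatim}
import numpy as np

KM_PER_PC  = 3.086e13   # km per parsec
SEC_PER_YR = 3.156e7    # seconds per year

def rate_vs_time(m, v, s, t_edges_myr):
    """Average outflow rate (Msun/yr) binned by ejection time (Myr),
    i.e. the time-binned average of the text."""
    t_eject_myr = s * KM_PER_PC / v / SEC_PER_YR / 1e6
    rates = []
    for t1, t2 in zip(t_edges_myr[:-1], t_edges_myr[1:]):
        sel = (t_eject_myr > t1) & (t_eject_myr <= t2)
        # sum(mdot_i * dt_cross_i) reduces to the summed voxel mass in the bin
        rates.append(m[sel].sum() / ((t2 - t1) * 1e6))
    return np.array(rates)

def rate_vs_distance(m, v, s, ds_pc, d_edges_pc):
    """Average outflow rate (Msun/yr) binned by distance (pc),
    i.e. the distance-binned average of the text."""
    dt_cross_yr = ds_pc * KM_PER_PC / v / SEC_PER_YR   # voxel crossing time
    mdot = m / dt_cross_yr                             # Msun/yr per voxel
    rates = []
    for d1, d2 in zip(d_edges_pc[:-1], d_edges_pc[1:]):
        sel = (s > d1) & (s <= d2)
        rates.append((mdot[sel] * ds_pc).sum() / (d2 - d1))
    return np.array(rates)
\end{verbatim}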
The present day star formation rate in the central region of NGC\,253 is $1.7-2.8$\,M$_\odot$\,yr$^{-1}$\xspace, derived from radio continuum and far-infrared measurements \citep{Ott:2005il,Leroy:2015ds,2015MNRAS.450L..80B}. This results in a mass loading factor $\eta = \dot{M}_\mathrm{out} / \dot{M}_\mathrm{SFR}$ in the range $\eta \sim 5.4-23.5$. Note that this is for gas ejected as far as 340\,pc. We do not currently know what fraction of the gas makes it to the far regions of the halo, or reaches escape velocity from the system. Theoretical works suggest that most of the molecular outflow will not escape but rain back down on the galaxy (e.g. \citealt{Shapiro:1976ha} up to recent work by \citealt{2018ApJ...853..173K} or \citealt{2019MNRAS.tmp..540T}). In our data, no gas reaches the escape velocity of $v_\mathrm{esc} = 500$\,km\,s$^{-1}$\xspace \citep{2017ApJ...835..265W}. The uncertainty on $v_\mathrm{esc}$ is substantial, so a value lower by a factor of two is still plausible. At $v_\mathrm{esc} = 250$\,km\,s$^{-1}$\xspace, the fraction of gas above $v_\mathrm{esc}$ by mass is 0.5\%, 0.5\% and 6.0\% for \co10, \co21 and \co32, respectively. The mismatch between the lower transitions and \co32 implies that some high velocity gas can be found on small scales that is blurred out in the low resolution observations. This estimate of the molecular mass outflow rate is higher than the lower limit found by \citet{2013Natur.499..450B} for optically thin emission. The analysis of the CO line ratios in the SW streamer by \citet{2018ApJ...867..111Z} shows that the emission there is optically thick, which the authors used to rescale the \citet{2013Natur.499..450B} measurements, finding a galactic outflow rate in NGC\,253 of $25-50$\,M$_\odot$\,yr$^{-1}$\xspace. The result presented here, a mass outflow rate of $\sim 14-39$\,M$_\odot$\,yr$^{-1}$\xspace, is consistent with this number and is based on an independent and more complete methodology than the original work. From H$\alpha$ observations by \citet{Westmoquette:2011bp}, we can estimate the ionized outflow rate to be $\sim 4$\,M$_\odot$\,yr$^{-1}$\xspace using their ionized mass ($\mathrm{M} = 10^7$\,M$_\odot$\xspace) and typical velocity (200\,km\,s$^{-1}$\xspace) at the mean deprojected distance (510\,pc). X-ray observations yield comparable values. \citet{Strickland:2000wd} finds an upper limit of 2.2\,M$_\odot$\,yr$^{-1}$\xspace assuming a standard outflow velocity of 3000\,km\,s$^{-1}$\xspace. The upper limit reported in \citet{Strickland:2002kp} translates to 2.3\,M$_\odot$\,yr$^{-1}$\xspace when assuming a 3000\,km\,s$^{-1}$\xspace outflow velocity and a reasonable 10\% filling factor. These estimates scale linearly with the unknown velocity and also depend on the unknown metallicity and filling factor in the outflow. No estimates of the outflow rate in neutral gas are available in the literature, but it is arguably at a similar level. The molecular phase thus clearly dominates the mass budget in the outflow close to the disk, as found in other galaxies (e.g. M82, \citealt{2015ApJ...814...83L}) and in simulations (e.g. \citealt{2018ApJ...853..173K}).
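For concreteness, the following is a minimal sketch in Python of the Monte Carlo deprojection described above. The cone geometry ($60^\circ$ opening angle, axis perpendicular to the $i=78^\circ$ disk) follows the text, while the observed line-of-sight velocity, projected distance and pixel mass are hypothetical placeholders; the pixel-by-pixel bookkeeping of the actual analysis is not reproduced here.
\begin{verbatim}
import numpy as np

# Sketch of the Monte Carlo deprojection (hypothetical input numbers).
# The outflow is assumed to lie on the surface of a cone with ~60 deg
# opening angle whose axis is perpendicular to the disk (i = 78 deg),
# i.e. the axis is 78 deg away from the line of sight.
rng = np.random.default_rng(42)
half_opening = np.deg2rad(30.0)   # half of the 60 deg opening angle
axis_from_los = np.deg2rad(78.0)  # outflow axis angle to the line of sight

def angles_to_los(n):
    """Angle between random directions on the cone surface and the LOS."""
    phi = rng.uniform(0.0, 2.0 * np.pi, n)   # azimuth around the cone axis
    axis = np.array([np.sin(axis_from_los), 0.0, np.cos(axis_from_los)])
    e1 = np.array([np.cos(axis_from_los), 0.0, -np.sin(axis_from_los)])
    e2 = np.array([0.0, 1.0, 0.0])
    d = (np.cos(half_opening) * axis[None, :]
         + np.sin(half_opening) * (np.cos(phi)[:, None] * e1[None, :]
                                   + np.sin(phi)[:, None] * e2[None, :]))
    return np.arccos(d[:, 2])                 # theta between 48 and 108 deg

# Hypothetical observed (projected) quantities for one gas parcel:
v_obs = 50.0    # km/s, LOS velocity offset from the launching site
s_obs = 100.0   # pc, projected distance from the major axis
m_pix = 1.0e5   # Msun, molecular gas mass in the pixel

theta = angles_to_los(100000)
v_dep = v_obs / np.abs(np.cos(theta))            # deprojected velocity
s_dep = s_obs / np.sin(theta)                    # deprojected distance
mdot_obs = m_pix * (v_obs * 1.02e-6) / s_obs     # Msun/yr (km/s -> pc/yr)
mdot_dep = mdot_obs * np.abs(np.tan(theta))      # deprojected outflow rate

for name, q in [("v [km/s]", v_dep), ("s [pc]", s_dep),
                ("mdot [Msun/yr]", mdot_dep)]:
    lo, med, hi = np.percentile(q, [16, 50, 84])
    print(f"{name}: {med:.2f} (+{hi - med:.2f} / -{med - lo:.2f})")
\end{verbatim}
The $16^{th}$ to $84^{th}$ percentiles of the resulting distributions correspond to the deprojection-dominated uncertainties quoted in the text.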
\subsection{Outflow energy and momentum}\label{section: outflow energy} \begin{figure*} \centering \includegraphics[width=0.48\linewidth]{fig9a.pdf}~~ \includegraphics[width=0.48\linewidth]{fig9b.pdf} \\ \includegraphics[width=0.48\linewidth]{fig9c.pdf}~~ \includegraphics[width=0.48\linewidth]{fig9d.pdf} \caption{Deprojected kinetic energy (\emph{left}) and deprojected momentum (\emph{right}) of the molecular outflow averaged over 0.1\,Myr as a function of time since ejection (\emph{top}) and as a function of deprojected distance between outflow and launching site (\emph{bottom}). The top panel implicitly assumes continuous mass ejection without acceleration of the gas after ejection, while the lower panel assumes an approximately constant starting mass outflow rate over the lifetime of the starburst. The shaded area indicates approximate errors ($16^{th}$ to $84^{th}$ percentile), which are dominated by uncertainties in the deprojection geometry. Dotted lines represent the ranges where confusion with gas in the disk occurs and where the limited field-of-view affects the completeness.} \label{figure: outflow energy momentum} \end{figure*} Similar to the mass outflow rate, energy and momentum can be calculated as a function of time and distance, as shown in figure~\ref{figure: outflow energy momentum}. In equations~\ref{equation: outflow rate time} and \ref{equation: outflow rate distance} the molecular outflow rate $\dot{m}_i$ is replaced by kinetic energy $E_{\mathrm{kin},i} = \frac{1}{2} m_i\,v_i^2$ or momentum $P_i = m_i\,v_i$. As with the outflow rate, the dominant sources of error are the uncertainty in the launching site and the geometry, for which we Monte Carlo the errors as described before. Our median estimates with 16$^{th}$ and 84$^{th}$ percentile uncertainties are given below and in table~\ref{table: results}. The kinetic energy in the outflow integrated over the past 1.0\,Myr is $3.9 \times 10^{54}$\,erg ($^{+0.91}_{-0.75}$\,dex) in \co10, $4.5 \times 10^{54}$\,erg ($^{+0.94}_{-0.80}$\,dex) in \co21, and $6.5 \times 10^{53}$\,erg ($^{+0.58}_{-0.83}$\,dex) for \co32. Within 20\arcsec\ (340\,pc), the kinetic energies amount to $2.5 \times 10^{54}$\,erg ($^{+0.96}_{-0.65}$\,dex) in \co10, $3.1 \times 10^{54}$\,erg ($^{+0.96}_{-0.65}$\,dex) in \co21, and $4.3 \times 10^{53}$\,erg ($^{+0.98}_{-0.64}$\,dex) for \co32. For the reasons described above, the \co32 measurement is a lower limit and thus consistent with the results for the lower transitions. NGC\,253 does not appear to host an energetically important AGN, and the outflow is driven by the starburst. It is then interesting to compare our results for the kinetic energy to the energy released by the starburst. We assume the current star formation rate of $\sim 2.8$\,M$_\odot$\,yr$^{-1}$\xspace in the central region \citep{Ott:2005il,2015MNRAS.450L..80B}, which has been approximately constant over the last few Myr. The total energy $E_\mathrm{bol}$ produced by the starburst is simply the time-integrated bolometric luminosity $L_\mathrm{bol}$, which depends\footnote{We follow IAU resolution B2 that defines the bolometric magnitude in absolute terms and eliminates the dependence on the variable magnitude of the Sun.} on the bolometric magnitude $M_\mathrm{bol}$.
\begin{eqnarray} E_\mathrm{bol} &=& L_\mathrm{bol} \times \Delta t\\ &=& 3 \times 10^{35-0.4\,M_\mathrm{bol}} \left( \frac{SFR}{1\,\mathrm{M}_\odot\,\mathrm{yr}^{-1}} \right)\,\mathrm{erg\,s}^{-1} \times \Delta t \label{equation: bolometric energy} \end{eqnarray} According to Starburst99 \citep[figure~46 in][]{Leitherer:1999jt}, the bolometric magnitude of a starburst at an age of $10^7$\,yr to $10^8$\,yr is $M_\mathrm{bol} \sim -20.5$ for $\mathrm{SFR} = 1$\,M$_\odot$\,yr$^{-1}$\xspace. The total energy output of the starburst over the past 1\,Myr is thus $4.2 \times 10^{57}$\,erg. The observed kinetic energy of $\sim3.9-4.5\times10^{54}$\,erg in the outflow is a factor of $\sim 10^3$ lower, which places the coupling efficiency of outflow kinetic energy to starburst energy at $\sim 0.1$\%. In terms of kinetic energy only, the fraction is higher. Kinetic energy is supplied to the ISM primarily by supernovae and stellar winds; it can be estimated from the energy deposition rate according to \citet{Leitherer:1999jt}, as given in \citet{Chisholm:2017bu} and \citet{Murray:2005jt}. \begin{eqnarray} \dot{E}_\mathrm{SN} &=& 3 \times 10^{41} \left( \frac{SFR}{1\,\mathrm{M}_\odot\,\mathrm{yr}^{-1}} \right) \,\mathrm{erg\,s}^{-1} \label{equation: supernova energy} \end{eqnarray} Each SN releases approximately $10^{51}$\,erg in kinetic energy, with the progenitor releasing a similar amount of kinetic energy during its lifetime by winds \citep[e.g.][]{Leitherer:1999jt}. The approximate total kinetic energy released by SNe in the past 1\,Myr is then $\sim 5.3 \times 10^{55}$\,erg, compared to the $\sim3.9-4.5\times10^{54}$\,erg we observe in the outflow. Hence, the observed starburst is sufficient to kinetically power the measured molecular outflows with $\sim 8\%$ efficiency. The commonly adopted 50\% relative contribution of wind feedback is a first order estimate that is subject to environmental dependence and requires careful modeling to determine precisely \citep[e.g.][]{Leitherer:1999jt}. Furthermore, it should be noted that the observed outflow energy and its error are based on a fixed mass conversion factor that may vary. The uncertainty on the energy coupling efficiency is thus substantial and it should be understood as an order of magnitude comparison. The above calculation ignores the contribution of other energies, such as the turbulent energy within the molecular outflow and the kinetic energy of the neutral and ionized gas. \citet{2009ApJ...701.1636M} derived a kinetic energy of the ionized wind in NGC\,253 of $1.3 \times 10^{53}$\,erg, more than one order of magnitude lower than the molecular outflow kinetic energy. The molecular outflow is slower ($50-100$\,km\,s$^{-1}$\xspace on the scales observed here) than the ionized outflow \citep[up to $\sim 400$\,km\,s$^{-1}$\xspace,][]{2009ApJ...701.1636M} but also more massive. The ionized outflow thus has only a very small effect on the total kinetic energy and the coupling efficiency. Deprojected outflow momenta integrated over the past 1.0\,Myr are $6.9 \times 10^8$\,M$_\odot$\,km\,s$^{-1}$\xspace ($^{+0.50}_{-0.49}$\,dex), $8.7 \times 10^8$\,M$_\odot$\,km\,s$^{-1}$\xspace ($^{+0.57}_{-0.57}$\,dex) and $1.2 \times 10^8$\,M$_\odot$\,km\,s$^{-1}$\xspace ($^{+0.33}_{-0.59}$\,dex) for \co10, (2--1) and (3--2), respectively.
Within 20\arcsec\ (340\,pc) deprojected distance from the launching site the outflow momenta integrate to $4.8 \times 10^8$\,M$_\odot$\,km\,s$^{-1}$\xspace ($^{+0.48}_{-0.35}$\,dex) in \co10, $6.4 \times 10^8$\,M$_\odot$\,km\,s$^{-1}$\xspace ($^{+0.49}_{-0.34}$\,dex) in \co21 and $8.0 \times 10^7$\,M$_\odot$\,km\,s$^{-1}$\xspace ($^{+0.50}_{-0.39}$\,dex) in \co32. The momentum released initially by SNe is given by \citet{Murray:2005jt}: \begin{eqnarray} \dot{P}_\mathrm{SN} &=& 2 \times 10^{33} \left( \frac{SFR}{1\,\mathrm{M}_\odot\,\mathrm{yr}^{-1}} \right) \,\mathrm{g\,cm\,s}^{-2} \\ &=& 317 \left( \frac{SFR}{1\,\mathrm{M}_\odot\,\mathrm{yr}^{-1}} \right) \,\mathrm{M_\odot\,yr}^{-1}\,\mathrm{km\,s}^{-1} \label{equation: supernova momentum} \end{eqnarray} In 1\,Myr, a constant SFR of 2.8\,M$_\odot$\,yr$^{-1}$\xspace yields $8.9 \times 10^8$\,M$_\odot$\,km\,s$^{-1}$\xspace. Assuming a contribution by stellar winds of the same order \citep{Leitherer:1999jt}, the total momentum is $1.8 \times 10^9$\,M$_\odot$\,km\,s$^{-1}$\xspace or roughly twice the observed outflow momentum. SNe, however, gain significant amounts\footnote{Assuming a Salpeter-like IMF ($\alpha=2.35$, mass range $0.1-100$\,M$_\odot$\xspace, $\mathrm{Z} = 0.008$). The usual uncertainties related to the shape, upper mass cutoff and influence of binary stars apply.} of momentum by sweeping up surrounding material. From simulations, the total momentum supplied to the ISM is expected to be $2.8 \times 10^5$\,M$_\odot$\,km\,s$^{-1}$\xspace per SN (\citealt{Kim:2015iy} and references therein). For a constant SFR of 2.8\,M$_\odot$\,yr$^{-1}$\xspace over 1\,Myr, this amounts to $1.0 \times 10^{10}$\,M$_\odot$\,km\,s$^{-1}$\xspace, or $2.0 \times 10^{10}$\,M$_\odot$\,km\,s$^{-1}$\xspace when adopting a 50\% contribution by stellar winds, roughly $25-40$ times the observed momentum. The efficiency of transferring feedback momentum to outflow momentum is thus in the range $27-49$\% considering the initially available momentum, or $2.5-4$\% for the transfer of the total (swept-up) momentum to outflow momentum. These outflow momenta are much higher than the momentum currently produced by young ($<10$\,Myr) super star clusters in the starburst. \citet{2018ApJ...869..126L} list 14 candidate clusters that together produce $1.5\times10^7$\,M$_\odot$\,km\,s$^{-1}$\xspace measured from gas kinematics, a factor of $10-100$ lower than the observed outflow momentum. The currently forming (super) star clusters thus could not have launched the outflow; the feedback of another population of stars is needed to explain the observed outflows. This is indicative of the time delay of star formation feedback. Energy and momentum curves in figure~\ref{figure: outflow energy momentum} differ only by a factor of $v$ but follow a similar evolution. This implies that the median velocity at a given distance must be roughly constant along the outflow. As the curves as a function of distance are roughly constant within $50\,\mathrm{pc}<s<300\,\mathrm{pc}$, especially for kinetic energy, the outflow mass at a given distance must also be approximately constant along the outflow. The decline in energy and momentum below 50\,pc is caused by a decrease in outflow mass, again because both curves follow a similar trend. This is at least partially related to the difficulty of separating outflow from disk where the former emerges from the latter. The decrease could also be interpreted physically as the outflow sweeping up mass while emerging from the disk.
An estimation of the relative importance of these effects requires high-resolution modelling of the outflow, which is not yet possible because we do not know the detailed outflow geometry. The drop beyond $\sim 300$\,pc ($\sim 200$\,pc in \co32) is partially related to reaching the edge of the field-of-view. Discerning this effect from an actual decrease is not possible with our data as we do not know the inclination at every location in the outflow. The edge of the field-of-view thus corresponds to a range of deprojected distances from the disk, which gradually depresses the curve rather than producing a sudden drop. A physical reason for the decrease could be the destruction of the molecular gas, e.g. photo-dissociation by the intense starburst radiation or ionization. The kinetic energy and momentum evolution in figure~\ref{figure: outflow energy momentum} thus suggest both energy and momentum conservation along the outflow from $\sim50$\,pc to $\sim 300$\,pc, as well as an approximately constant molecular gas mass. When additionally assuming no acceleration of the outflow after launch, it becomes possible to study the time evolution. The corresponding plots (figure~\ref{figure: outflow rate} top and \ref{figure: outflow energy momentum} top) all show a peak within the past 0.5\,Myr and a steady decrease towards earlier gas ejection times. As with the decline towards zero distance, the decrease towards zero ejection time is most likely a methodological complication. From the peak at t$_\mathrm{eject} = 0.2-0.3$\,Myr, kinetic energy and momentum in the outflow drop by a factor of 10 within $\sim 2$\,Myr. This decline would be physically plausible if the starburst in NGC\,253 were very young, taking into account a time delay between the start of star formation, feedback and efficient outflow driving (superbubble breakout). For the observed age of the starburst of $20-30$\,Myr \citep{Rieke:1980hh,Engelbracht:1998cj}, this scenario is implausible. Time delays of $>20$\,Myr are longer than the lifetime of the most massive stars. A younger generation of massive stars at an age of $\sim 6$\,Myr \citep{Kornei:2009ee} may, however, drive the currently visible molecular outflows. If this is the case, a time delay between star formation and outflow launching of $\sim 4$\,Myr is implied. Outflow launching in this context means the time after which the outflow reaches a mass loading $\eta>1$. The time delay is 2\,Myr until the outflow carries more energy (momentum) than the kinetic energy (momentum) supplied by a single high-mass star. Note that these rough estimates depend on the assumption of no acceleration (positive or negative) of the outflow after being launched from the disk, which might be a close enough approximation on these scales of a few hundred parsecs. The very young ($\lesssim 1$\,Myr) and still deeply embedded super star clusters discussed recently by \citet{2017ApJ...849...81A} and \citet{2018ApJ...869..126L} are most likely too young to have affected the observed molecular outflow. \section{Summary and Conclusions}\label{section: summary} We present \co32 observations taken with ALMA that offer an unprecedented resolution of $\sim 0.15\arcsec$ ($\sim 2$\,pc) in the starbursting center of NGC\,253. The new high resolution data show structures consistent with previous lower resolution observations in other CO lines, revealing the complexity of the molecular ISM in a starburst on scales of a few parsecs.
We use archival \co10, \co21, and the new \co32 ALMA observations to perform a position-position-velocity decomposition of the emission into different structures. The bulk of the emission is associated with a rotating disk with streaming motions due to the bar. The rest of the emission is incompatible with a simple kinematic model of a disk plus a bar. This ``non-disk'' component is further decomposed into an outflow, an expanding superbubble (part of which may be associated with outflowing gas) and a potential second kinematic component within the disk. We find CO line luminosities of the disk component of $2.8\times10^8$\,K\,km\,s$^{-1}$\,pc$^2$\xspace, $2.3\times10^8$\,K\,km\,s$^{-1}$\,pc$^2$\xspace and $1.8\times10^8$\,K\,km\,s$^{-1}$\,pc$^2$\xspace for \co10, (2--1) and (3--2), respectively. The fractional luminosity of the non-disk component is small, amounting to $\sim7-16\%$ of the total. A significant amount of the outflow emission we identify is faint and diffuse, while part of the emission is in discrete, higher surface brightness structures (e.g., the SW streamer). Assuming a starburst conversion factor, we estimate the molecular gas mass from the three CO transitions. Masses match within 10\% for the disk component and within 50\% for the non-disk component. The total gas mass in the center of NGC\,253 is $\sim 3.6 \times 10^8$\,M$_\odot$, with $\sim 0.5 \times 10^8$\,M$_\odot$ in the non-disk component. We further estimate the deprojected molecular mass outflow rate, kinetic energy and momentum in the starburst of NGC\,253. The observed gas distribution can be interpreted to have formed in two ways: (1) by a constant starting mass outflow rate over the lifetime of the starburst and (2) through continuous gas ejection without acceleration of the gas after ejection. In the first interpretation, the molecular mass outflow rate averaged over a deprojected distance of 340\,pc (20\arcsec) from the launching site is $29-39$\,M$_\odot$\,yr$^{-1}$\xspace. Typical uncertainties are 0.4\,dex. The majority of this outflow rate is contributed by massive localized features such as the SW/SE streamers, with a significant contribution by diffuse molecular gas. The mass loading factor $\eta=\dot{M}_\mathrm{out}/\dot{M}_\mathrm{SFR}\sim14-20$ is relatively high. Due to the limited field-of-view of our observations, this $\eta$ applies to gas ejected as far away as 340\,pc: the fraction of mass that makes it to the far regions of the halo or escapes is not known. The kinetic energy of the molecular outflow within 340\,pc from the launching site is $2.5-3.1 \times 10^{54}$\,erg with a $\sim 0.8$\,dex error. The coupling efficiency of kinetic energy in the outflow to the total energy released by the starburst is $\sim 0.1$\%, while the coupling to the kinetic energy only is higher at $\sim 8$\%. Including other phases of the outflow would increase this efficiency. The kinetic energy of the ionized outflow is negligible relative to the molecular outflow. The outflow momenta within the same distance are $4.8-6.4 \times 10^8$\,M$_\odot$\,km\,s$^{-1}$\xspace (error $\sim 0.5$\,dex), which is $\sim 2.5-4$\% of the momentum supplied by SNe and winds. These best estimates for the physical properties of the outflow are derived from the \co10 and (2--1) observations. The very high resolution of the \co32 data is necessary to identify the outflow features that connect to the central regions.
When interpreting the outflow as a structure of constant velocity along the outflow, the time evolution can be reconstructed. We derive outflow rate, kinetic energy and momentum within the approximate dynamical time scale of 1\,Myr and find lower values compared to the previous interpretation. The difference is systematic at the $\sim 30-40$\% level. The outflow rate is $14-20$\,M$_\odot$\,yr$^{-1}$\xspace (0.3\,dex), kinetic energy $2.5-3.1 \times 10^{54}$\,erg (0.8\,dex) and momentum $4.8-6.4 \times 10^8$\,M$_\odot$\,km\,s$^{-1}$\xspace (0.5\,dex). For all measurements given above, we assume a fixed starburst mass conversion factor of $\mathrm{X}_{\mathrm{CO}} = 0.5\times10^{20}\,\mathrm{cm}^{-2}\,\left(\mathrm{K\,km\,s}^{-1}\right)^{-1}$. The quoted uncertainties are primarily systematic due to the unknown geometry of the outflow and its launching sites. A further uncertainty of $30-40$\% ($\sim 0.1$\,dex) comes from the assumptions regarding the outflowing material (constant starting mass over the lifetime of the starburst vs.\ continuous gas ejection without acceleration). These limitations need to be addressed in the future. In principle, ALMA can provide the very high resolution and sensitivity needed to enable this detailed view of a starburst also on larger scales than probed in this study. \software{CASA \citep{McMullin:2007tj}, astropy \citep{Collaboration:2013cd,Collaboration:2018ji}, APLpy \citep{Robitaille:2012wl}} \acknowledgements We would like to thank the anonymous referee for their constructive feedback which helped improve the paper. The authors thank Jan-Torge Schindler and Roberto Decarli for insightful discussion and advice. This paper makes use of the following ALMA data: ADS/JAO.ALMA \#2011.1.00172.S, \#2012.1.00108.S, and \#2015.1.00274.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Part of the work presented in this paper was carried out at the Finnish Center for Astronomy with ESO (FINCA). The work of AKL is partially supported by NASA ADAP grants NNX16AF48G and NNX17AF39G and by the National Science Foundation under grants No.~1615105, 1615109, and 1653300. \FloatBarrier \newpage
\section{Introduction} Nowadays, it is important to sense and understand what users prefer and need in recommender systems, which have become fundamental components of various applications. People always say ``Seeing is believing.'' \emph{Visual} information plays an important role in understanding user behaviors, especially in domains such as clothes, jewelry, house decorations and so on. It is crucial to investigate the visual dimensions of users' preferences and items' characteristics for better personalized recommendation. Recently, some studies have investigated visual features for user modeling, including clothing matching \cite{hu2015collaborative,mcauley2015image} and visual recommendation \cite{he2016ups,he2016vbpr}. However, these methods model items in a common visual feature space, which may fail to capture different styles of items. In Figure \ref{fig:origin}, we cluster items in the clothing subset of the Amazon dataset\footnote{http://jmcauley.ucsd.edu/data/amazon/} \cite{mcauley2015inferring,mcauley2015image}. The visual features used here are the Convolutional Neural Networks (CNN) visual features extracted from the Caffe reference model\footnote{bvlc\_reference\_caffenet from caffe.berkeleyvision.org} \cite{jia2014caffe,krizhevsky2012imagenet}, which have also been used in several existing works \cite{he2016sherlock,he2016ups,he2016vbpr,mcauley2015image}. We can observe that each category (e.g., ups, dresses, pants, shoes, bags and watches) of items is assigned to one cluster. It is obvious that different styles (e.g., casual, athletic and formal) of items cannot be distinguished in the figure, not even male and female styles. Items with similar styles are usually bought together, but they are not similar in the visual feature space. Thus, it is hard for a recommender to make reliable predictions in such a feature space. For example, the similarity between suit pants and leather shoes is much smaller than the similarity between suit pants and jeans. However, suit pants and leather shoes are usually bought together by the same user. Thus, we need to investigate the \emph{styles} of items, and eliminate the characteristics of categories from the representations of items. Accordingly, we assume that: \begin{equation} \label{eq:ass_style} item = style + category~. \end{equation} Based on the assumption in Equation \ref{eq:ass_style}, we propose a novel method called \textbf{DeepStyle}. In DeepStyle, images of items are fed into a deep CNN model. For each item, on the output layer of the CNN, we subtract a latent representation of the corresponding category from the visual feature vector generated by the CNN, and thus obtain the style features of items. Then, we incorporate style features with the widely-used Bayesian Personalized Ranking (BPR) \cite{rendle2009bpr} for personalized recommendation. \begin{figure}[!tb] \centering \includegraphics[width=1\linewidth]{./origin.pdf} \caption{Part of the clustering results of items in the Clothing subset of the Amazon dataset \cite{mcauley2015inferring,mcauley2015image} measured by CNN visual features \cite{jia2014caffe,krizhevsky2012imagenet}. Each row is a cluster. We can observe that each category of items is assigned to one cluster. Male and female items are not distinguished.
Different styles of clothing are not distinguished either.} \label{fig:origin} \end{figure} Moreover, as we eliminate the characteristics of categories from the representations of items, we need to find another way to make use of the categorical information, which is also important for modeling user behaviors. Here, we extract two components in users' selection behaviors (e.g., buying a pair of jeans): \emph{preferences} and \emph{demands}. Usually, a user has a category of items that he or she needs (demands), and then selects an item belonging to that category that he or she likes (preferences). Thus, we arrive at the assumption: \begin{equation} \label{eq:ass_demand} selection = preference + demand~. \end{equation} Usually, demands lead to categories (e.g., coats, shoes, sandals, shirts and dresses), while preferences lead to styles (e.g., casual, formal, fashionable and conservative). For example, when the weather becomes cold, you will need a coat (demands) of the casual style (preferences). Preferences on styles of a user are comparatively permanent and stable over a relatively long period, which can be modeled by the DeepStyle method introduced above. Demands on categories are temporary, and depend on situations and on what a user has bought before. For example, when the weather is hot, you will need sandals instead of boots. And when you have bought lots of shoes, you probably do not need another pair. Thus, predicting the category a user needs becomes a recommendation problem with contextual and sequential information. Context-aware recommendation \cite{liu2015cot,rendle2011fast,shi2014cars} and sequential recommendation \cite{rendle2010factorizing,wang2015learning,yu2016dream} are both extensively studied. In our recent work \cite{liu2016context}, the problem of context-aware sequential recommendation has been addressed, and a model called Context-Aware Recurrent Neural Networks (CA-RNN) is proposed. CA-RNN captures sequential and contextual information simultaneously in a Recurrent Neural Networks (RNN) architecture. It adjusts matrices in the conventional RNN formulation to a variety of contexts. However, models with the conventional RNN architecture may suffer from the vanishing or exploding gradient problem \cite{bengio1994learning}, and assigning each context a matrix requires too many parameters. Accordingly, we incorporate the Gated Recurrent Unit (GRU) \cite{chung2014empirical,chung2015gated} architecture, and propose a novel method, namely Context-Aware Gated Recurrent Unit (\textbf{CA-GRU}). In the formulation of CA-GRU, we incorporate context-aware gates for adjusting to different contexts. Compared with the context-aware matrices in CA-RNN, the context-aware gate vectors in CA-GRU require far fewer parameters. And with the GRU architecture, the vanishing or exploding gradient problem can be relieved. Finally, the predictions on preferences and demands, i.e., the predictions generated by DeepStyle and CA-GRU, can be aggregated for better prediction of users' selections, promoting the performance in visual recommendation. The main contributions of this work are listed as follows: \begin{itemize} \item We address two components in users' selection behaviors: preferences and demands, leading to styles and categories of items respectively. \item We propose DeepStyle for learning style features of items, and sensing users' preferences. \item We propose CA-GRU for context-aware sequential recommendation, which can be applied for better predicting users' demands on different categories of items.
\item Experiments conducted on real-world datasets demonstrate the effectiveness of our proposed methods in visual recommendation. \end{itemize} The rest of the paper is organized as follows. In section 2, we review some related works. Section 3 details our proposed DeepStyle and CA-GRU for learning users' preferences and demands respectively. In section 4, we report and analyze our experimental results. Section 5 concludes our work and discusses future research. \section{Related Works} In this section, we review some related works on visual recommendation, context-aware recommendation and sequential recommendation. \subsection{Visual Recommendation} Matrix Factorization (MF) \cite{koren2009matrix} and Bayesian Personalized Ranking (BPR) \cite{rendle2009bpr} have become the state-of-the-art approaches to recommender systems. Based on these methods, some extended methods focus on incorporating visual features for better user modeling. The importance of utilizing visual features of items in recommender systems has been stressed and proved \cite{di2014picture,goswami2011study,he2016sherlock}. Functional Pairwise Interaction Tensor Factorization (FPITF) \cite{hu2015collaborative} predicts the matching of clothes in outfits with tensor factorization. Personalized matching of items based on visual features has also been investigated \cite{mcauley2015image}. Deep CNNs have been incorporated to model visual features of clothing according to dyadic co-occurrences \cite{veit2015learning}. Collaborative Knowledge-base Embedding (CKE) \cite{zhang2016collaborative} incorporates knowledge bases, including visual knowledge, for recommender systems. Visual Bayesian Personalized Ranking (VBPR) \cite{he2016vbpr} extends the framework of BPR, and incorporates visual features for promoting the recommendation of items in implicit feedback scenarios. VBPR is further extended with dynamic dimensions to model the visual evolution of fashion trends in visual recommendation \cite{he2016ups}. The impact of categorical information on items' styles has been considered in Sparse Hierarchical Embeddings (Sherlock) \cite{he2016sherlock}. In Sherlock, the embedding matrices for transferring visual features to style features vary among different categories of items. However, Sherlock requires a prior category tree. Moreover, one embedding matrix for each category leads to a very large number of parameters to be learned, even though a sparse operation on the category tree is performed. \subsection{Context-aware and Sequential Recommendation} Context-aware recommendation \cite{adomavicius2011context} is a major topic in recommender systems. Multi-verse recommendation \cite{karatzoglou2010multiverse} uses tensor factorization to model $n$-dimensional contextual information. As an extension of tensor factorization, Factorization Machine (FM) can model a wide variety of contextual information by specifying contextual information as input dimensions and provides context-aware predictions \cite{rendle2011fast}. The Tensor Factorization for MAP maximization (TFMAP) model \cite{shi2012tfmap} uses tensor factorization and a Mean Average Precision (MAP) objective to model implicit feedback data with contextual information. Recently, the CARS2 \cite{shi2014cars} and Contextual Operating Tensor (COT) \cite{liu2015cot,wu2016contextual,wu2017contextual} models represent the common semantic effects of contexts as a contextual operating tensor and represent contexts as latent vectors.
The Hierarchical Interaction Representation (HIR) model \cite{liu2015collaborative,wu2017hierarchical} generates the interaction representation via tensor multiplication, which can be applied in context-aware recommender systems. Methods based on the Markov assumption are the most widely used models for sequential recommendation \cite{yang2010personalizing}. Via factorization of the probability transition matrices, Factorizing Personalized Markov Chain (FPMC) \cite{rendle2010factorizing} can provide better personalized prediction for sequential recommendation. The Hierarchical Representation Model (HRM) \cite{wang2015learning} learns the representation of behaviors in the last transaction and predicts behaviors for the next transaction. Recently, Recurrent Neural Networks (RNN) have achieved the state-of-the-art performance in sequential prediction. RNN based models have been applied successfully in many different scenarios, such as sentence modeling \cite{mikolov2010recurrent}, sequential click prediction \cite{zhang2014sequential}, location prediction \cite{liu2016strnn}, next basket recommendation \cite{yu2016dream} and session-based recommendation \cite{hidasi2016session}. Back Propagation Through Time (BPTT) \cite{rumelhart1988learning} is usually employed to learn the parameters of these RNN based models. More recently, Convolutional Neural Networks (CNN) have also been incorporated for sequential prediction \cite{tang2018personalized,yuan2019simple,wang2019towards}. Recently, the problem of context-aware sequential recommendation has been addressed for capturing sequential and contextual information simultaneously \cite{liu2016context,wu2017context}. To solve this problem, Context-Aware Recurrent Neural Networks (CA-RNN) is proposed based on an RNN architecture. CA-RNN focuses on two types of contexts: input contexts and transition contexts. Input contexts denote situations where users conduct behaviors, while transition contexts mean time intervals between adjacent behaviors in sequences. CA-RNN adjusts matrices in the conventional RNN formulation to a variety of contexts. However, models with the conventional RNN architecture may suffer from the vanishing or exploding gradient problem \cite{bengio1994learning}. Moreover, assigning each context a matrix in CA-RNN requires too many parameters, and may result in overfitting. \section{Proposed Methods} In this section, we detail our proposed methods. First, we give the notations used in this work. Then, we introduce DeepStyle and CA-GRU for learning users' preferences and demands respectively. Finally, we present the aggregation of DeepStyle and CA-GRU for better predicting users' selections in visual recommendation. \subsection{Notations} In this work, we focus on predicting users' implicit feedback, i.e., users' selections, on items. We have a set of users denoted as $\mathcal{U}$, and a set of items denoted as $\mathcal{I}$. Users may have selection behaviors on some items, where $\mathcal{I}^u$ denotes the set of items selected by user $u$. Each item $i$ is associated with an image describing its visual information, and belongs to a specific category $l_i$. The whole set of categories is denoted as $\mathcal{L}$. Moreover, users' selections on items are associated with timestamps. For each user $u$, the corresponding categories form a sequence $\mathcal{L}^u = \{l_{1}^u,l_{2}^u,..., l_{t}^u,... \}$, where $l_t^u$ denotes the category of the item selected by user $u$ at time step $t$.
At time step $t$ of the selection sequence of user $u$, there are an input context $c_{I,t}^u$, i.e., the situation in which the user conducts the behavior, and a transition context $c_{T,t}^u$, i.e., the time interval between the current timestamp and the previous timestamp. \subsection{DeepStyle} Conventional methods in visual recommendation mostly focus on modeling items in a common visual feature space. This may fail to capture different styles of items. As shown in Figure \ref{fig:origin}, items with similar styles may not be similar in the visual space at all. And categorical information is dominant in the common visual space. Thus, in visual recommendation, it is vital to eliminate the characteristics of categories from the representations of items. Accordingly, we propose the DeepStyle method for learning items' style features and users' preferences. First, for each item $i$, we feed the corresponding image into a deep CNN model, as shown in Figure \ref{fig:ds}. Following several representative works \cite{he2016sherlock,he2016ups,he2016vbpr,mcauley2015image} in visual recommendation, the CNN model applied is the Caffe reference model \cite{mcauley2015inferring,mcauley2015image}. It consists of $5$ convolutional layers followed by $3$ fully-connected layers. The model is pre-trained on 1.2 million ImageNet images\footnote{http://image-net.org/} for capturing some common visual concepts. On the output layer of the CNN model, there is a $4096$ dimensional visual feature vector denoted as ${{\mathbf{v}}_i} \in {\mathbb{R}^{4096}}$. Then, to obtain style features, according to Equation \ref{eq:ass_style}, we subtract items' latent categorical representations from the visual features generated by the CNN. For item $i$, we can calculate its style features as \begin{equation} \label{eq:style} {{\mathbf{s}}_i} = {\mathbf{E}}{{\mathbf{v}}_i} - {{\mathbf{l}}_i}~, \end{equation} where ${{\mathbf{s}}_i} \in {\mathbb{R}^{d}}$ denotes the style feature of item $i$, ${{\mathbf{l}}_i} \in {\mathbb{R}^{d}}$ denotes the categorical representation of the corresponding category $l_i$, ${{\mathbf{E}}} \in {\mathbb{R}^{d \times 4096}}$ is a matrix for transferring visual features to a lower dimensionality on the top layer, and $d$ is the dimensionality of the learned representations. \begin{figure}[!tb] \centering \includegraphics[width=1\linewidth]{./DS.pdf} \caption{The illustration of DeepStyle for learning styles of items and preferences of users. In DeepStyle, images of items are fed into a deep CNN model. For each item, on the output layer of the CNN, we subtract a latent representation of the corresponding category from the visual features generated by the CNN, and obtain the style features of items. The CNN architecture used here is the Caffe reference model \cite{mcauley2015inferring,mcauley2015image}, which consists of $5$ convolutional layers followed by $3$ fully-connected layers and is pre-trained on $1.2$ million ImageNet images. Finally, the style features are incorporated into a BPR framework \cite{rendle2009bpr} for personalized recommendation.} \label{fig:ds} \end{figure} Furthermore, similar to VBPR \cite{he2016vbpr}, we incorporate style features into the BPR \cite{rendle2009bpr} framework, which is the state-of-the-art method for modeling implicit feedback, for sensing the preferences of users.
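Before specifying the ranking model, the style-feature computation in Equation \ref{eq:style} can be illustrated with a minimal sketch in Python/NumPy. The dimensionalities and the random initializations below are hypothetical placeholders; in the actual model, $\mathbf{E}$ and the category representations are learned jointly with the BPR objective rather than fixed.
\begin{verbatim}
import numpy as np

# Sketch of Eq. (3): s_i = E v_i - l_{c(i)}, with hypothetical dimensions.
d, n_categories = 10, 74            # latent dimensionality, number of categories
rng = np.random.default_rng(0)

E = 0.01 * rng.standard_normal((d, 4096))          # projection of CNN features
L = 0.01 * rng.standard_normal((n_categories, d))  # latent category representations

def style_feature(v_i, category_id):
    """Style feature of one item: projected CNN feature minus its category vector."""
    return E @ v_i - L[category_id]

v_i = rng.random(4096)              # placeholder for the 4096-d Caffe CNN feature
s_i = style_feature(v_i, category_id=3)
print(s_i.shape)                    # (10,)
\end{verbatim}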
The prediction of user $u$ on item $i$ can be made as \begin{equation} \label{eq:style_predict} {{\hat y}_{u,i}} = {\left( {{{\mathbf{p}}_{u}}} \right)^T}\left( {{{\mathbf{s}}_i}{ + }{{\mathbf{q}}_i}} \right)~, \end{equation} where ${{\mathbf{p}}_u} \in {\mathbb{R}^{d}}$ denotes the latent representation of user $u$, and ${{\mathbf{q}}_i} \in {\mathbb{R}^{d}}$ denotes the latent representation of item $i$. For user $u$, with an arbitrary negative sample $i'$, the model needs to fit \begin{equation} \label{eq:style_dayu} {{\hat y}_{u,i}} > {{\hat y}_{u,i'}}~, \end{equation} where $i$ is a positive item, i.e., $i \in {\mathcal{I}^u}$, and $i'$ is a negative item, i.e., $i' \notin {\mathcal{I}^u}$. Then, in the BPR framework, we need to maximize the following probability \begin{equation} \label{eq:style_pro} p\left( {u,i > i'} \right) = g\left( {{{\hat y}_{u,i}} - {{\hat y}_{u,i'}}} \right)~, \end{equation} where the activation function $g(x)$ is usually chosen as \begin{equation} \label{eq:style_act} g\left( x \right) = \frac{1}{{1 + {e^{ - x}}}}~. \end{equation} Taking the negative log likelihood, we can equivalently minimize the following objective function \begin{equation} \label{eq:style_obj} {J_p} = \sum\limits_{u,i} {\ln \left( {1 + {e^{ - \left( {{{\hat y}_{u,i}} - {{\hat y}_{u,i'}}} \right)}}} \right)} + \frac{{{\lambda _p}}}{2}{\left\| {{{\mathbf{\theta }}_p}} \right\|^2}~, \end{equation} where ${\mathbf{\theta }}_p$ denotes all the parameters to be estimated in DeepStyle, and $\lambda _p$ is a hyper-parameter to control the strength of regularization. Then, the derivatives of $J_p$ with respect to all the parameters in DeepStyle can be calculated, and we can employ Stochastic Gradient Descent (SGD) to estimate the model parameters. The training procedure consists of two parts: the training of the deep CNN layers which generate visual features, and the training of the BPR layer which learns style features. These two parts of training are done alternately. This process is repeated iteratively until convergence is achieved. \subsection{CA-GRU} \begin{figure}[!tb] \centering \includegraphics[width=1\linewidth]{./CAGRU.pdf} \caption{The illustration of CA-GRU for learning users' demands. In CA-GRU, the update gate, activation gate and reset gate are adjusted to a variety of input contexts and transition contexts.} \label{fig:ca} \end{figure} After proposing DeepStyle for learning users' preferences on different styles of items, we need to predict users' demands on different categories of items. Usually, demands are temporary. They depend on external situations, e.g., weather and month, and on what a user has bought before. This makes demand prediction a problem of recommendation with both contextual and sequential information. In our previous work \cite{liu2016context}, we address the problem of context-aware sequential recommendation, and propose the CA-RNN method based on the RNN architecture. CA-RNN adjusts matrices in the RNN formulation to various contexts. However, CA-RNN has its own drawbacks: 1) models constructed on the conventional RNN architecture may suffer from the vanishing or exploding gradient problem \cite{bengio1994learning}; 2) assigning each context a matrix requires too many parameters to be learned. Accordingly, we propose the CA-GRU method based on the GRU architecture. First, we introduce the conventional GRU architecture \cite{chung2014empirical,chung2015gated}.
For modeling our problem, the formulation of GRU is \begin{equation} \label{eq:GRU} \begin{array}{l} {\mathbf{z}}_{{t}}^u = \sigma \left( {{{\mathbf{W}}_z}{\mathbf{l}}_{{t}}^u + {{\mathbf{M}}_z}{\mathbf{h}}_{{{t - 1}}}^u} \right)\\ {\mathbf{r}}_{{t}}^u = \sigma \left( {{{\mathbf{W}}_r}{\mathbf{l}}_{{t}}^u + {{\mathbf{M}}_r}{\mathbf{h}}_{{{t - 1}}}^u} \right)\\ {\mathbf{\tilde h}}_{{t}}^u = tanh\left( {{{\mathbf{W}}_h}{\mathbf{l}}_{{t}}^u + {{\mathbf{M}}_h}\left( {{\mathbf{h}}_{{{t - 1}}}^u \cdot {\mathbf{r}}_{{t}}^u} \right)} \right)\\ {\mathbf{h}}_{{t}}^u = \left( {1 - {\mathbf{z}}_{{t}}^u} \right) \cdot {\mathbf{h}}_{{{t - 1}}}^u + {\mathbf{z}}_{{t}}^u \cdot {{{\mathbf{\tilde h}}}_t} \end{array}~, \end{equation} where ${{\mathbf{h}}_{{t}}^u} \in {\mathbb{R}^{d}}$ denotes the hidden state of user $u$ at time step $t$, ${{\mathbf{l}}_{{t}}^u} \in {\mathbb{R}^{d}}$ denotes the representation of the category $l_t^u$, ${{\mathbf{z}}_{{t}}^u} \in {\mathbb{R}^{d}}$ and ${{\mathbf{r}}_{{t}}^u} \in {\mathbb{R}^{d}}$ are the update gate and the reset gate respectively, $\mathbf{W}_*$ and $\mathbf{M}_*$ are all $d \times d$ dimensional matrices. With the gating operations, GRU can relieve the vanishing or exploding gradients problem to a certain extent. According to the definition in \cite{liu2016context}, there are two types of contexts in sequences: input contexts and transition contexts. Input contexts are external contexts under which users conduct behaviors. Such contexts usually include location (home or working place), time (weekdays or weekends, morning or evening), weather (sunny or rainy), etc. Transition contexts denote time intervals between adjacent behaviors. It captures context-adaptive transition effects from past behaviors to future behaviors with different time intervals. That is to say, input and transition contexts affect current input elements and propagated previous hidden states respectively. Thus, the formulation in CA-GRU can be adjusted as \begin{equation} \label{eq:CA-GRU} \begin{array}{l} {\mathbf{z}}_{{t}}^u = \sigma \left( {{{\mathbf{W}}_z}{\mathbf{l}}_{{t}}^u + {{\mathbf{M}}_z}{\mathbf{h}}_{{{t - 1}}}^u + {{\mathbf{I}}_z}{\mathbf{c}}_{I,t}^u + {{\mathbf{T}}_z}{\mathbf{c}}_{T,t}^u} \right)\\ {\mathbf{a}}_{{t}}^u = \sigma \left( {{{\mathbf{W}}_a}{\mathbf{l}}_{{t}}^u + {{\mathbf{M}}_a}{\mathbf{h}}_{{{t - 1}}}^u + {{\mathbf{I}}_a}{\mathbf{c}}_{I,t}^u} \right)\\ {\mathbf{r}}_{{t}}^u = \sigma \left( {{{\mathbf{W}}_r}{\mathbf{l}}_{{t}}^u + {{\mathbf{M}}_r}{\mathbf{h}}_{{{t - 1}}}^u + {{\mathbf{T}}_r}{\mathbf{c}}_{T,t}^u} \right)\\ {\mathbf{\tilde h}}_{{t}}^u = tanh\left( {{{\mathbf{W}}_h}\left( {{\mathbf{l}}_{{t}}^u \cdot {\mathbf{a}}_{{t}}^u} \right) + {{\mathbf{M}}_h}\left( {{\mathbf{h}}_{{{t - 1}}}^u \cdot {\mathbf{r}}_{{t}}^u} \right)} \right)\\ {\mathbf{h}}_{{t}}^u = \left( {1 - {\mathbf{z}}_{{t}}^u} \right) \cdot {\mathbf{h}}_{{{t - 1}}}^u + {\mathbf{z}}_{{t}}^u \cdot {{{\mathbf{\tilde h}}}_t} \end{array}~, \end{equation} where ${{\mathbf{c}}_{I,t}^u} \in {\mathbb{R}^{d}}$ and ${{\mathbf{c}}_{T,t}^u} \in {\mathbb{R}^{d}}$ denote vector representations of input context $c_{I,t}^u$ and transition context $c_{T,t}^u$ respectively, $\mathbf{I}_*$ and $\mathbf{T}_*$ are all $d \times d$ dimensional matrices. Considering the update gate controls the updating weights between current states and previous states, we incorporate ${\mathbf{z}}_{{t}}^u$ with both input and transition contexts. 
Because the reset gate resets the propagated signals from previous states, we incorporate ${\mathbf{r}}_{{t}}^u$ with only transition contexts. Moreover, we add an activation gate ${{\mathbf{a}}_{{t}}^u} \in {\mathbb{R}^{d}}$ incorporating input contexts for modeling the activating operation on the input elements ${{\mathbf{l}}_{{t}}^u}$. With context-aware gate vectors, CA-GRU significantly reduces the number of parameters compared with CA-RNN. When making predictions at time step $t+1$, in addition to the operations discussed above, the current contexts $c_{I,t+1}^u$ and $c_{T,t+1}^u$ should also be considered. Then, the prediction of user $u$ at time step $t+1$ on category $j$ can be made as \begin{equation} \label{eq:CAGRU_predict} \begin{array}{l} {\mathbf{a}'}_{t + 1}^u = \sigma \left( {{\mathbf{I}'}{_a}{\mathbf{c}}_{I,t + 1}^u} \right)\\ {\mathbf{r}'}_{t + 1}^u = \sigma \left( {{\mathbf{T}'}{_r}{\mathbf{c}}_{T,t + 1}^u} \right)\\ {{\hat y}_{u,t + 1,j}} = {\left( {{\mathbf{h}}_{{t}}^u \cdot {\mathbf{r}'}_{t + 1}^u} \right)^T}\left( {{{\mathbf{l}}_j} \cdot {\mathbf{a}'}_{t + 1}^u} \right) \end{array}~, \end{equation} where ${\mathbf{a}'}_{t + 1}^u$ and ${\mathbf{r}'}_{t + 1}^u$ are the activation gate and the reset gate in the prediction scenario respectively, and, like $\mathbf{I}_*$ and $\mathbf{T}_*$, $\mathbf{I}'_*$ and $\mathbf{T}'_*$ are $d \times d$ dimensional matrices. Furthermore, we incorporate the cross-entropy loss for the learning of CA-GRU. We need to minimize the following objective function \begin{equation} \label{eq:CAGRU_obj} {J_d} = - \sum\limits_{u,t} {\frac{1}{{\left| \mathcal{L} \right|}}\sum\limits_{j \in \mathcal{L}} {{y_{u,t + 1,j}}\ln \left( {{{\hat y}_{u,t + 1,j}}} \right)} } + \frac{{{\lambda _d}}}{2}{\left\| {{{\mathbf{\theta }}_d}} \right\|^2}~, \end{equation} where ${\mathbf{\theta }}_d$ denotes all the parameters to be learned in CA-GRU, and $\lambda _d$ is the regularization parameter. According to the objective function, parameters in CA-GRU can be estimated with the commonly-used BPTT \cite{rumelhart1988learning} and SGD. Note that CA-GRU can be pre-trained with DeepStyle. That is to say, ${{\mathbf{l}}_{{t}}^u}$ can be initialized with the categorical representations learned in DeepStyle. With this trick, CA-GRU is able to capture some visual characteristics and correlations. \subsection{Summary of Proposed Methods} As shown in Figure \ref{fig:frame}, based on visual information, DeepStyle learns users' preferences on styles. And based on contextual information and sequential information, CA-GRU learns users' demands on categories. Finally, we need to aggregate the predictions on preferences and demands for better predicting users' selections, and generate a ranked list of items for each user at each predicted time step. First, based on the predicted results generated by CA-GRU, we pick the top $k$ categories with the largest probabilities. Items in these top $k$ categories form the former part of the list, and items in other categories form the latter part of the list. Second, in either part of the list, items are ranked according to their matching degree with the user's preferences as measured by DeepStyle. Then, the final recommended results are generated. The whole recommendation procedure is named DeepStyle+CA-GRU for predicting users' selections on items in visual recommendation. \section{Experiments} In this section, we introduce our experiments to evaluate the effectiveness of our proposed methods. First, we introduce our experimental settings.
Then, we give comparisons with some state-of-the-art methods in different aspects. Finally, we demonstrate the visualization of clustering results measured by the style features learned in DeepStyle. \begin{figure}[!tb] \centering \includegraphics[width=1\linewidth]{./framework.pdf} \caption{The summary of our proposed methods, i.e., DeepStyle and CA-GRU. Based on visual information, DeepStyle learns style features of items and preferences of users, as shown in the bottom left square. Based on contextual information and sequential information, CA-GRU learns users' demands on categories, as shown in the bottom right square. Then, we can obtain the top $k$ categories with the largest probabilities predicted by CA-GRU. Among these categories, items matching users' preferences will be recommended, as shown in the top square.} \label{fig:frame} \end{figure} \subsection{Experimental Settings} \begin{table*}[tb!] \centering \caption{Performance comparison on predicting users' selections on items measured by AUC. The dimensionality is $d=10$ on both datasets. Numbers of top categories are $k=3$ and $k=5$ on Clothing and Home respectively. CA-GRU is pre-trained and initialized with DeepStyle.} \begin{tabular}{ccccccc} \toprule dataset & setting & BPR & VBPR & Sherlock & DeepStyle & DeepStyle+CA-GRU \\ \midrule \multirow{2}[2]{*}{Clothing} & warm-start & 0.6183 & 0.7441 & 0.7758 & \textbf{0.7961} & \textbf{0.8075} \\ & cold-start & 0.5037 & 0.6915 & 0.7167 & \textbf{0.7317} & \textbf{0.7599} \\ \midrule \multirow{2}[2]{*}{Home} & warm-start & 0.5848 & 0.6845 & 0.7049 & \textbf{0.7155} & \textbf{0.7390} \\ & cold-start & 0.5053 & 0.6140 & 0.6322 & \textbf{0.6396} & \textbf{0.6711} \\ \bottomrule \end{tabular}% \label{tab:selection}% \end{table*}% Our experiments are conducted on two subsets of the Amazon dataset \cite{mcauley2015inferring,mcauley2015image}. In particular, we adopt the ``Clothing, Shoes and Jewelry'' subset and the ``Home and Kitchen'' subset, which are named the \textbf{Clothing} dataset and the \textbf{Home} dataset for short. The reason we choose these two datasets is that visual features are important when buying things such as clothes, shoes, jewelry, house decorations and so on. In particular, visual features have been proven to be useful in clothing recommendation \cite{he2016sherlock,he2016ups,he2016vbpr,mcauley2015image}. The Clothing dataset consists of $74$ categories, e.g., jeans, pants, shoes, shirts and dresses. The Home dataset contains $86$ categories, e.g., sheets, furniture, pillows and cups. In our experiments, we empirically set the regularization parameters as $\lambda_p = 0.01$ and $\lambda_d = 0.01$, and the learning rate for SGD is set to be $0.01$. For each dataset, we use $80\%$ of the instances for training, and the remaining $20\%$ for testing. Moreover, we remove users with fewer than 5 records or more than 100 records. There are two types of evaluation settings on both datasets during the testing procedure: \textbf{warm-start} and \textbf{cold-start}. The former focuses on measuring the overall ranking performance, while the latter captures the capability to recommend cold-start items, i.e., items with fewer than $5$ records during training, in the system. According to the timestamps in the two datasets, similar to the operations in \cite{liu2016context}, we extract input contexts and transition contexts for CA-GRU, CA-RNN and other context-aware methods. First, we can extract two kinds of input contexts: the seven days of the week and the twelve months of the year.
So, there are $84$ input context values in total. Second, we can extract time intervals between adjacent behaviors in sequences as transition contexts. Time intervals in both datasets are empirically discretized into several time bins, where the thresholds are one day, two days, three days, one week, half a month, one month, three months, half a year and one year. So, there are $10$ transition context values in total. Then, following some previous works \cite{he2016vbpr,rendle2009bpr}, for evaluating the performance of all the methods, we apply the Area Under the ROC Curve (\textbf{AUC}) metric: \begin{small} \begin{displaymath} AUC = \frac{1}{{\left| \mathcal{U} \right|}}\sum\limits_{u \in \mathcal{U}} {\frac{1}{{\left| {set\left( {i \in {\mathcal{I}^u},i' \notin {\mathcal{I}^u}} \right)} \right|}}\sum\limits_{i \in {\mathcal{I}^u},i' \notin {\mathcal{I}^u}} {\delta \left( {{{\hat y}_{u,i}} > {{\hat y}_{u,i'}}} \right)} }~, \end{displaymath} \end{small} where $\delta \left( . \right)$ is the indicator function, which outputs $1$ when the condition is met, and $0$ otherwise. The larger the AUC value, the better the performance. Moreover, to illustrate the effectiveness of our proposed methods, several groups of methods are compared: (1) To investigate the performance on predicting users' preferences on styles, some state-of-the-art methods in visual recommendation are compared: \textbf{BPR} \cite{rendle2009bpr}, \textbf{VBPR} \cite{he2016vbpr} and \textbf{Sherlock} \cite{he2016sherlock}. BPR is a widely-used method for modeling implicit feedback. Based on BPR, VBPR incorporates visual features of items. Sherlock extends VBPR, and takes categorical effects on styles into consideration. As in \cite{he2016sherlock,he2016vbpr}, the visual features used in VBPR and Sherlock are CNN features extracted from the Caffe reference model \cite{jia2014caffe,krizhevsky2012imagenet}. (2) To investigate the performance on predicting users' demands on categories, several methods in both context-aware recommendation and sequential recommendation are compared. Context-aware methods consist of \textbf{FM} \cite{rendle2011fast} and \textbf{COT} \cite{liu2015cot}, while sequential methods include \textbf{FPMC} \cite{rendle2010factorizing}, \textbf{HRM} \cite{wang2015learning}, \textbf{RNN} \cite{yu2016dream} and \textbf{GRU} \cite{hidasi2016session}. Moreover, \textbf{CA-RNN} \cite{liu2016context}, which can model contextual and sequential information simultaneously, is also compared. \subsection{Evaluation of DeepStyle} \begin{figure}[tb!] \centering \hspace{-6mm} \subfigure[Clothing.]{ \begin{minipage}[b]{0.25\textwidth} \centering \includegraphics[width=1\textwidth]{./d1.pdf} \label{dimen1} \end{minipage} } \hspace{-6mm} \subfigure[Home.]{ \begin{minipage}[b]{0.25\textwidth} \centering \includegraphics[width=1\textwidth]{./d2.pdf} \label{dimen2} \end{minipage} } \caption{Performance of DeepStyle, Sherlock and VBPR with varying dimensionality $d=[5,10,15,20]$ measured by AUC.} \label{fig:dim} \end{figure} \begin{table*}[tb!] \centering \caption{Performance comparison on context-aware sequential recommendation, i.e., predicting users' demands on categories, measured by AUC.
\subsection{Evaluation of DeepStyle}
\begin{figure}[tb!] \centering \hspace{-6mm} \subfigure[Clothing.]{ \begin{minipage}[b]{0.25\textwidth} \centering \includegraphics[width=1\textwidth]{./d1.pdf} \label{dimen1} \end{minipage} } \hspace{-6mm} \subfigure[Home.]{ \begin{minipage}[b]{0.25\textwidth} \centering \includegraphics[width=1\textwidth]{./d2.pdf} \label{dimen2} \end{minipage} } \caption{Performance of DeepStyle, Sherlock and VBPR with varying dimensionality $d=[5,10,15,20]$ measured by AUC.} \label{fig:dim} \end{figure}
\begin{table*}[tb!] \centering \caption{Performance comparison on context-aware sequential recommendation, i.e., predicting users' demands on categories, measured by AUC. The dimensionality is $d=10$ on both datasets.} \begin{tabular}{c|cc|cccc|ccc} \toprule \multirow{2}[4]{*}{dataset} & \multicolumn{2}{c|}{context-aware} & \multicolumn{4}{c|}{sequential} & \multicolumn{3}{c}{context-aware+sequential} \\ \cmidrule{2-10} & FM & COT & FPMC & HRM & RNN & GRU & CA-RNN & CA-GRU & CA-GRU(pre-trained) \\ \midrule Clothing & 0.6704 & 0.6716 & 0.6928 & 0.6973 & 0.7169 & 0.7422 & 0.7483 & \textbf{0.7582} & \textbf{0.7602} \\ Home & 0.6692 & 0.6711 & 0.6789 & 0.6846 & 0.7021 & 0.7268 & 0.7241 & \textbf{0.7351} & \textbf{0.7385} \\ \bottomrule \end{tabular}% \label{tab:demand}% \end{table*}%
Table \ref{tab:selection} shows the performance comparison among DeepStyle, Sherlock, VBPR and BPR under the warm-start and cold-start settings, with dimensionality $d=10$. We can clearly observe that methods incorporating visual features outperform the baseline method BPR by relatively large margins on both datasets. The advantages over BPR are even larger under the cold-start setting, which indicates that visual features can model properties of cold-start items when observations are scarce, and thus improve cold-start recommendation. Moreover, the methods modeling categorical effects on styles of items, i.e., Sherlock and DeepStyle, perform better than VBPR on both datasets under both settings. This shows that it is vital to take categorical information into consideration when modeling styles of items, and Sherlock is the strongest of the compared baselines in visual recommendation. Our proposed DeepStyle method outperforms all the compared methods. Compared with Sherlock, DeepStyle improves AUC values by $2.3\%$ and $1.1\%$ on Clothing and Home respectively under the warm-start setting, and by $1.5\%$ and $0.7\%$ under the cold-start setting. These improvements indicate the superiority of DeepStyle in learning style features of items and preferences of users. Moreover, we illustrate the dimensionality sensitivity of DeepStyle, Sherlock and VBPR in Figure \ref{fig:dim}. DeepStyle consistently outperforms Sherlock and VBPR. On both datasets, the performance of DeepStyle stays stable after $d=10$. This indicates that DeepStyle is not sensitive to the dimensionality, and the performance with $d=10$ is reported in the rest of our experiments.
\begin{figure}[tb!] \centering \hspace{-6mm} \subfigure[Clothing.]{ \begin{minipage}[b]{0.25\textwidth} \centering \includegraphics[width=1\textwidth]{./cd1.pdf} \label{dimen1} \end{minipage} } \hspace{-6mm} \subfigure[Home.]{ \begin{minipage}[b]{0.25\textwidth} \centering \includegraphics[width=1\textwidth]{./cd2.pdf} \label{dimen2} \end{minipage} } \caption{Performance of CA-GRU, CA-RNN and GRU with varying dimensionality $d=[5,10,15,20]$ measured by AUC.} \label{fig:cd} \end{figure}
\subsection{Evaluation of CA-GRU} To investigate the performance of CA-GRU in context-aware sequential recommendation, we compare it with CA-RNN and some state-of-the-art context-aware and sequential methods in Table \ref{tab:demand}. The dimensionality is $d=10$ on both datasets. Compared with the context-aware methods, i.e., FM and COT, the sequential methods, i.e., FPMC, HRM, RNN and GRU, perform better. This shows that sequential information is usually more important than contextual information in predicting users' demands. GRU significantly outperforms RNN, which indicates the advantage of the gating operations in alleviating the vanishing or exploding gradient problem.
GRU and CA-RNN have similar performance: CA-RNN performs better on Clothing, while GRU performs better on Home. This may be because sequences in the Home dataset are longer, so the performance benefits more from the gating operations. Moreover, CA-GRU outperforms all the compared methods on both datasets, and when pre-trained with DeepStyle, the performance improves slightly further. Compared with CA-RNN, CA-GRU improves AUC values by $1.2\%$ and $1.4\%$ on Clothing and Home respectively. This shows the advantage of CA-GRU in context-aware sequential recommendation, i.e., in predicting users' demands on different categories of items. The performance of CA-GRU, CA-RNN and GRU with varying dimensionality is also shown in Figure \ref{fig:cd}. CA-GRU clearly outperforms CA-RNN and GRU for all dimensionalities on both datasets, and the performance of CA-GRU stays stable after $d=10$. Accordingly, in the rest of our experiments, CA-GRU is reported with dimensionality $d=10$. Moreover, CA-RNN tends to overfit the data when the dimensionality is high, which confirms that the context-aware matrices in CA-RNN require too many parameters to be learned.
\begin{figure}[tb!] \centering \hspace{-6mm} \subfigure[Clothing.]{ \begin{minipage}[b]{0.25\textwidth} \centering \includegraphics[width=1\textwidth]{./ca1.pdf} \label{dimen1} \end{minipage} } \hspace{-6mm} \subfigure[Home.]{ \begin{minipage}[b]{0.25\textwidth} \centering \includegraphics[width=1\textwidth]{./ca2.pdf} \label{dimen2} \end{minipage} } \caption{Performance of DeepStyle+CA-GRU with varying numbers of top categories $k=[1,2,3,4,5,6,7,8,9]$ measured by AUC. The dimensionality is $d=10$ on both datasets. CA-GRU is pre-trained and initialized with DeepStyle.} \label{fig:top} \end{figure}
\subsection{Evaluation of DeepStyle+CA-GRU}
\begin{figure*}[!tb] \centering \includegraphics[width=1\linewidth]{./vis.pdf} \caption{Visualization of part of the clustering results of items in the Clothing dataset, based on the style features learned by DeepStyle. Items in one square belong to the same cluster. We can observe that items of one category are assigned to different clusters, male and female items are distinguished, and each cluster covers a distinct style of clothing.} \label{fig:vis} \end{figure*}
To better predict users' selections, we can aggregate the predictions of DeepStyle and CA-GRU. The performance of DeepStyle+CA-GRU with varying numbers of top categories $k$ is illustrated in Figure \ref{fig:top}. CA-GRU is pre-trained with DeepStyle, and the dimensionality is $d=10$ for both DeepStyle and CA-GRU. We can clearly observe that the performance of DeepStyle+CA-GRU is stable over a large range of $k$ under both the warm- and cold-start evaluations, which indicates that our proposed methods are not very sensitive to the parameter $k$. Integrating the results of the warm- and cold-start evaluations, we select $k=3$ and $k=5$ as the best parameters for DeepStyle+CA-GRU on Clothing and Home respectively. The performance of DeepStyle+CA-GRU with these parameters is also shown in Table \ref{tab:selection}. On both datasets, on the basis of DeepStyle, the performance is further improved by aggregating the predictions of preferences and demands. Under the warm-start evaluation, compared with Sherlock, DeepStyle+CA-GRU improves AUC values by $3.2\%$ and $2.5\%$ on Clothing and Home respectively. Under the cold-start evaluation, the improvements become $4.3\%$ and $3.9\%$ on Clothing and Home respectively.
The improvements are larger under the cold-start setting. This is because predicting users' demands on categories faces no cold-start problem, i.e., there is always enough data for learning the representation of each category during training. These significant improvements show the effectiveness of our proposed DeepStyle and CA-GRU in visual recommendation.
\subsection{Visualization} Based on the 10-dimensional style features learned by DeepStyle, items in the Clothing dataset are clustered into several distinct styles. Part of the clustering results is visualized in Figure \ref{fig:vis}. It is obvious that items of one category are assigned to different clusters, and different styles of items are distinguished. Female items are in the top two rows, and male items are in the bottom row. The left column covers formal and official styles of clothing, in which the middle square is closer to banquet-style clothing. Items in the middle column are mostly casual, school-style or street-style clothing for women and men. In the right column, items tend toward older styles, and the middle square most likely corresponds to the clothing style of middle-aged women. Each cluster clearly covers a distinct style of clothing. Notably, during the training of DeepStyle there is no supervision on styles at all. Thus, our proposed method is able to automatically capture different styles of items and thereby improve visual recommendation.
\section{Conclusions and Future Work} In this paper, we propose two novel methods, DeepStyle and CA-GRU, for learning users' preferences and demands in visual recommendation. Based on the CNN architecture and the BPR framework, DeepStyle subtracts categorical characteristics from the visual features of items, and thus obtains style features of items. CA-GRU incorporates context-aware gates into the GRU formulation to adjust to different contexts and to improve the modeling of context-aware sequential recommendation. Experimental results demonstrate the strong performance of our proposed methods in visual recommendation. In the future, we will investigate the following directions. First, we will investigate deeper visual characteristics of items to obtain the corresponding high-level semantic information. Second, we are going to analyze the long-term evolution of style features of items and of users' preferences. Third, we plan to broaden the scope of this research, which may include investigating images posted on various social media. \balance \bibliographystyle{ACM-Reference-Format}
\section*{Abstract} It is well accepted that, at the global scale, the Gutenberg-Richter (GR) law describing the distribution of earthquake magnitude or seismic moment has to be modified at the tail to properly account for the most extreme events. It is debated, though, how much additional time of earthquake recording will be necessary to properly constrain this tail. Using the global CMT catalog, we study how three modifications of the GR law that incorporate a corner-value parameter are compatible with the size of the largest observed earthquake in a given time window. Current data lead to a rather large range of parameter values (e.g., corner magnitude from 8.6 to 10.2 for the so-called tapered GR distribution). Updating this estimation in the future will strongly depend on the maximum magnitude observed, but, under reasonable assumptions, the range will be substantially reduced by the end of this century, contrary to claims in previous literature.
\section*{Introduction} Statistics of earthquake occurrence, in particular of the most extreme events, must be a fundamental input for assessing seismic hazard \cite{Mulargia}. The cornerstone model for describing the earthquake-size distribution is the Gutenberg-Richter (GR) law \cite{Utsu_GR,Kagan_book}. The original version of the GR law states that earthquake magnitudes follow an exponential distribution, and since this is a perfectly ``well-behaved'' distribution, with all statistical moments (such as the mean and the standard deviation) being finite, the problem of earthquake sizes would seem a rather trivial one. However, a physical interpretation of the meaning of the GR law needs a proper understanding of magnitude. In fact, magnitude presents several difficulties as a measure of earthquake size \cite{Ben_zion_review}, and a true physical quantity is given instead by the seismic moment \cite{Kanamori_77,Kanamori_rpp}. Due to the logarithmic dependence of magnitude on seismic moment, the GR law for the latter transforms into a power-law distribution, i.e., \begin{equation} f(x) \propto \frac 1 {x^{1+\beta}}, \mbox{ for } a \le x < \infty, \label{powerlawuno} \end{equation} where $x$ is the seismic moment, $f(x)$ the seismic-moment probability density, $a$ a lower cut-off below which the power law does not hold (presumably because of the incompleteness of the considered catalog for small earthquakes), and $1+\beta$ the power-law exponent, which takes values close to 1.6 or 1.7 (the symbol ``$\propto$'' denotes proportionality). It turns out that the solution to the physical interpretation of the GR law has a price to be paid: the power-law distribution, when $1+\beta$ is smaller than 2 (which is indeed the case), is not ``well behaved'', in the sense that the mean value of the seismic moment becomes infinite. The reason is that, for power-law distributed seismic moments, events in the tail of the distribution, despite having very small probability, bring an enormous contribution to the seismic-moment release \cite{Corral_FontClos}, and the seismic-moment sample mean does not converge, no matter how large the number of data is, due to the inapplicability of the law of large numbers to power-law distributions \cite{Corral_csf} such as that in Eq. (\ref{powerlawuno}). In consequence, since the GR law is unphysical when extended to the whole range of earthquake sizes, the tail of the distribution of seismic moment must deviate from the GR power-law shape \cite{Kagan_gji02}.
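This non-convergence is easy to visualize numerically. The following minimal Python sketch (ours, with illustrative parameter values rather than fitted ones) samples a pure power law with $\beta<1$ and shows that the running sample mean of the seismic moment never settles down:
\begin{verbatim}
import numpy as np

# Illustration of the failure of the law of large numbers for a pure power
# law with exponent 1+beta and beta < 1: the theoretical mean is infinite,
# so the sample mean keeps drifting upwards instead of converging.
rng = np.random.default_rng(0)
beta = 0.7          # illustrative tail exponent (1 + beta ~ 1.7)
a = 5.31e17         # lower cut-off in N*m (moment magnitude ~5.75)

# Inverse-transform sampling of F(x) = 1 - (a/x)^beta for x >= a
u = rng.random(10**6)
x = a / u ** (1.0 / beta)

for n in (10**3, 10**4, 10**5, 10**6):
    print(f"n = {n:>7d}   running sample mean = {x[:n].mean():.2e} N*m")
# The running mean typically grows by orders of magnitude with n, which is
# why the GR power law cannot hold for arbitrarily large earthquakes.
\end{verbatim}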
Due to the scarcity of data, the problem has to be approached at a global scale, or at least for a large subset of the global data (for instance, for subduction zones as a whole \cite{Kagan_gji02}). This approach has been followed by a number of authors \cite{Kagan_book,Kagan_gji02,Kagan_pageoph99,Godano_pingue,Main_ng,Kagan_tectono10,Bell_Naylor_Main,Corral_Deluca,Zoller_grl,Geist_Parsons_NH}. Essentially, a new parameter $M_c>0$ is introduced, providing a scale for the seismic moment of the largest (``non-GR'') earthquakes, in such a way that for $x \ll M_c$ the GR law can be considered to hold, but for $x \gg M_c$ the distribution clearly departs from this law, decaying faster than the GR power law. The values of ${M_c}$ are more easily read in terms of the corresponding (moment) magnitude $m_c$ \cite{Kanamori_77,Kanamori_rpp}, through the formula \begin{equation} m_c= 2 \left( \log_{10} {M_c} - 9.1\right) /3, \label{mcMc} \end{equation} where the seismic moment is measured in N$\cdot$m. As $m_c$ is sometimes referred to as the ``corner magnitude'', $M_c$ will accordingly be called the ``corner seismic moment'' \cite{Kagan_gji02}, independently of the specific probabilistic model (in practice, we will use $M_c$ in formulas and $m_c$ for reporting numerical values, and both will be referred to as ``corner parameters'' or ``corner values''). In this article we aim to further clarify to what extent the available observations can constrain ${M_c}$ or ${m_c}$, and how many more earthquakes (and thus, how many more years of recording) would likely be necessary to yield reasonably precise values of these estimates. Before proposing a rigorous statistical way to tackle these issues, we first need to assess a previously proposed approach \cite{Zoller_grl}.
\section*{Probabilistic models} We define the probabilistic models in terms of the cumulative distribution function, $F(x)$, which gives the probability that the random variable (the seismic moment) is equal to or smaller than a value $x$. This description is totally equivalent to the one in terms of the probability density, as both functions are related by $f(x)=dF(x)/dx$ and $F(x)=\int_a^x f(x') dx'$ (at some point we will also use the complementary cumulative distribution function, $S(x)=1-F(x)$). The distributions of our interest are: (i) the truncated power-law (TPL) distribution \cite{Zoller_grl}, \begin{equation} F_{tpl}(x) = \left[1-\left(\frac a {M_c} \right)^\beta \right]^{-1} \left[1-\left(\frac a x \right)^\beta \right], \mbox{ for } a \le x \le {M_c}; \label{Ftpl} \end{equation} (ii) the tapered (Tap) GR law \cite{Bell_Naylor_Main,Zoller_grl,Mulargia_Geller}, also called the Kagan distribution \cite{Vere_Jones_gji}, \begin{equation} F_{tap}(x)= 1-\left(\frac a x\right)^\beta e^{-(x-a)/{M_c}}, \mbox{ for } a \le x < \infty; \label{Ftap} \end{equation} (iii) the truncated gamma (TrG) distribution \cite{Main_ng,Serra_Corral}, \begin{equation} F_{trg}(x) = 1-\frac {\Gamma(-\beta,x/{M_c})} {\Gamma(-\beta,a/{M_c})}, \mbox{ for } a \le x < \infty; \label{Ftrg} \end{equation} with $\Gamma(\gamma,z)= \int_z^\infty x^{\gamma-1} e^{-x}dx$ the upper incomplete gamma function, which for $\gamma <0$ is defined only for $z>0$. All three $F(x)$ are zero for $x<a$, and $F_{tpl}(x)=1$ for $x\ge M_c$. The parameter $\beta$ has to be greater than zero, except in the TrG model, where it has no restriction. Of course, $M_c> 0$ and $a> 0$. The three distributions are graphically depicted in Figs. S1-S3 of the supporting information.
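For reference, the following minimal Python sketch (ours; the parameter values are illustrative, not fits) evaluates the three model distributions and the conversion from magnitude to seismic moment (the inverse of Eq. (\ref{mcMc})); the incomplete gamma function with negative first argument is obtained from the recurrence $\Gamma(1-\beta,z)=-\beta\,\Gamma(-\beta,z)+z^{-\beta}e^{-z}$:
\begin{verbatim}
import numpy as np
from scipy.special import gamma, gammaincc

# Illustrative evaluation of the three model CDFs (not fitted values).
a = 5.31e17                                   # lower cut-off in N*m

def moment(m):          # moment magnitude -> seismic moment (N*m)
    return 10.0 ** (1.5 * m + 9.1)

def F_tpl(x, beta, Mc):
    """Truncated power law; equals 1 for x >= Mc."""
    x = np.minimum(x, Mc)
    return (1.0 - (a / x) ** beta) / (1.0 - (a / Mc) ** beta)

def F_tap(x, beta, Mc):
    """Tapered GR (Kagan) distribution."""
    return 1.0 - (a / x) ** beta * np.exp(-(x - a) / Mc)

def upper_gamma_neg(beta, z):
    """Gamma(-beta, z) for 0 < beta < 1, via the recurrence relation."""
    return (z ** (-beta) * np.exp(-z)
            - gamma(1.0 - beta) * gammaincc(1.0 - beta, z)) / beta

def F_trg(x, beta, Mc):
    """Truncated gamma distribution."""
    return 1.0 - upper_gamma_neg(beta, x / Mc) / upper_gamma_neg(beta, a / Mc)

beta, Mc = 0.67, moment(9.0)                  # illustrative parameters
x = moment(8.0)                               # probe at magnitude 8
print(F_tpl(x, beta, Mc), F_tap(x, beta, Mc), F_trg(x, beta, Mc))
\end{verbatim}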
Note that for the TPL distribution $M_c$ is a truncation parameter, whereas for the Tap and TrG distributions it is a scale parameter (it sets the scale of $F(x)$ along the $x$-axis) \cite{Main_ng,Serra_Corral}. Namely, $f_{tpl}(x)$ goes abruptly (discontinuously) to zero at $x=M_c$, whereas for the other two distributions this point sets the scale at which the power law transforms smoothly into an exponential decay. So, the physical meaning of $M_c$ in the TPL is quite different from that in the other two models. Note also that the TPL is truncated both from below and from above (but the adjective refers to the truncation from above, $x\le {M_c}$), whereas the TrG and Tap are truncated only from below ($x \ge a$). Summarizing, all the considered distributions have two free parameters, $\beta$ and ${M_c}$ (or $\beta$ and $m_c$), with the value of $a$ fixed by the completeness of the earthquake catalog. In all cases, the limit ${M_c} \rightarrow \infty$ yields the usual power-law (PL) distribution \cite{Serra_Corral}, $F_{pl}(x)=1-(a/x)^\beta$ for $x\ge a$, which is equivalent to Eq. (\ref{powerlawuno}). Other works have considered different distributions, such as the Gumbel distribution in Ref. \cite{Lomnitz}, for which the power-law limit is not so clear.
\section*{State of the art} Several authors have addressed the problem of constraining the value of $M_c$ and related issues. In particular, Ref. \cite{Zoller_grl} studied the TPL and the Tap distributions (called there GR and MGR, respectively). It was claimed that, for global seismicity with magnitude above 5.75 (i.e., seismic-moment lower cut-off $a=5.31 \times 10^{17}$ N$\cdot$m), an enormous amount of data would be necessary in order to obtain reliable estimates of ${M_c}$ or ${m_c}$ (200,000 years are mentioned for the Tap distribution with $m_c\simeq 8.5$). Reasonable values proposed previously by other authors (for instance $m_c\simeq 9$ in Ref. \cite{Bell_Naylor_Main} for the Tap distribution) were discarded. The analysis was based on a single statistic: the maximum seismic moment $Y$ of the $N$ earthquakes with magnitude above 5.75 contained in the catalog; that is, $$ Y = \max\{X_1,X_2,\dots X_N\}. $$ Elementary probability theory allows one to obtain the probability distribution of the maximum $Y$ when the $N$ observations are independent \cite{Zoller_grl,Ross_firstcourse8} (independence is the maximum-entropy outcome when there is no constraint on the dependence between the observations \cite{Broderick}). Namely, the cumulative distribution function of this maximum is given by \begin{equation} F_{max}(y) = \mbox{Prob}[ Y \le y ] = [F(y)]^N, \label{Fmaxx} \end{equation} where $F(y)$ can be given by any of the distributions in Eqs. (\ref{Ftpl})-(\ref{Ftrg}), depending on the underlying statistical model. This approach constitutes an ``extreme'' limit of the classical block-maxima procedure used in extreme-value theory, considering just one single block \cite{Coles}. Figure \ref{figone} provides an illustration of $F_{max}(y)$; Figs. S4-S6 in the supporting information provide the full picture. Given a set of $N$ observations with empirical maximum $y_{emp}= \max\{x_1,x_2,\dots x_N\}$ and a modeling probability distribution $F(x)$, Z\"oller \cite{Zoller_grl} correctly argued that, if the data indeed come from $F(x)$, then $F_{max}(y_{emp})=\mbox{Prob}[ Y \le y_{emp} ]$ should not be too close to 1.
The reason is that proximity to 1 would mean that the empirical value $y_{emp}$ is too large in relation to the values of $Y$ that one can expect from the model distribution $F(x)$ and the number of earthquakes observed. Subsequently, this author introduced an \textit{ad-hoc} distinction between what he called ``not well-sampled'' distributions, characterized by a value of $F_{max}(y_{emp})=\mbox{Prob}[ Y \le y_{emp} ]$ that is large (close to 1), and ``well-sampled'' distributions, characterized by $F_{max}(y_{emp})$ small. The latter can be equivalently characterized by a large value of the complementary cumulative distribution at $y_{emp}$, that is, $S_{max}(y_{emp})=1-F_{max}(y_{emp})= \mbox{Prob}[ Y > y_{emp} ]$ large (close to one). In practice \cite{Zoller_grl}, \begin{equation} S_{max}(y_{emp})= \mbox{Prob}[ Y > y_{emp} ] > 0.99 \label{wellsampledness} \end{equation} for ``well-sampled'' distributions \cite{Zoller_grl}. We will explain below that this criterion cannot be sustained from a statistical point of view, and we will introduce a robust criterion instead. Analyzing global data from the centroid moment tensor (CMT) catalog \cite{Ekstrom2012,Dziewonski}, from January 1, 1977 to June 30, 2012 (including shallow, intermediate and deep events, $N=7,585$ for $x\ge a$), Z\"oller \cite{Zoller_grl} found that the maximum magnitude corresponds to the 2011 Tohoku earthquake, with magnitude 9.1 (note that the 2004 Sumatra earthquake had a combined multiple-source moment magnitude of 9.3, but only 9.0 with the standard CMT determination \cite{Tsai_grl}). In our work, we analyze the same dataset, for the sake of comparison. Then, this author \cite{Zoller_grl} evaluated the performance of the TPL and the Tap distributions for different fixed values of the parameter ${M_c}$. The considered values correspond to $m_c=8.5, 9, 9.5, \dots 12$, in addition to $m_c=9.2$. In contrast, it was stated that $\beta$ was estimated by maximum likelihood for fixed $M_c$. For the TPL model, a value of $m_c=9.2$ resulted in $\mbox{Prob}[ Y > y_{emp} ]=0.55$ \cite{Zoller_grl}, whereas $m_c=9.5$ and $m_c=10$ led to $\mbox{Prob}[ Y > y_{emp} ]$ very close to one, and even closer-to-one values were obtained for $m_c \ge 10.5$. Following the ``well-sampledness'' criterion, the value $m_c=9.2$ was discarded for the TPL model, despite having the maximum likelihood among all the parameter values considered, and values with $m_c \ge 10.5$, with much smaller likelihood, were preferred. However, no preference was shown between $m_c=10.5$ and any other higher value (for instance $m_c=12$), and all these models were considered equally likely. For the Tap model, the results and the conclusions \cite{Zoller_grl} were similar to those for the TPL model, and in this way the value $m_c=9$ (proposed in Ref. \cite{Bell_Naylor_Main}) was rejected despite yielding the maximum likelihood. The calculation of the number of data required to perform a reliable estimation of the parameter $M_c$ (or $m_c$) was carried out by imposing a minimum number of events $N_m$ such that the distribution becomes ``well-sampled'' \cite{Zoller_grl}, in the sense of Eq. (\ref{wellsampledness}). So, introducing Eq. (\ref{Fmaxx}) into Eq. (\ref{wellsampledness}), \begin{equation} S_{max}(y_{emp})= \mbox{Prob}[ Y > y_{emp} ] = 1 -[F(y_{emp})]^{N_m} > 0.99.
\label{previozoller} \end{equation} Note that, no matter the value of $F(y_{emp})$, if it is strictly smaller than 1, then for sufficiently large $N_m$ we will have $[F(y_{emp})]^{N_m} < 0.01$, and the condition will be fulfilled by any model, with any parameter value, if enough data are gathered (except for truncated models with $F(y_{emp})=1$). Imposing that the previous condition becomes an equality, one gets \begin{equation} N_m=\frac{|\ln 0.01|}{|\ln F(y_{emp})|}= \frac{7,585 | \ln 0.01|}{|\ln \left(1- \mbox{Prob}[ Y > y_{emp} ]\right)|}. \label{zollerformula} \end{equation} We will argue below that this equation (\ref{zollerformula}), used (but not made explicit) in previous research \cite{Zoller_grl}, does not hold for the problem under consideration. In this way, for the TPL model with $m_c=9.2$, accepting the value $\mbox{Prob}[ Y > y_{emp} ]=0.55$, the approach just outlined, Ref. \cite{Zoller_grl} [Eq. (\ref{zollerformula}) here], yields that $N_m$ has to be higher than 45,000 (corresponding to 212 years of earthquake recording, with about 214 earthquakes with $x \ge a$ per year). For the Tap model with $m_c=8.5$, for which $\mbox{Prob}[ Y > y_{emp} ]=0.0007$, one obtains that more than 200,000 years would be needed (from $N_m=50 \times 10^6$, roughly). Note the counterintuitive result that this approach leads to: the larger the corner seismic moment $M_c$, the fewer data are required for its estimation, as contained in Eq. (\ref{zollerformula}) (due to the decrease of $F(y_{emp})$ with $m_c$) and illustrated for the TPL model in Fig. \ref{numberofearthq}.
\section*{Proper testing using the maximum seismic moment} First, let us show that the previously used ``well-sampledness'' criterion \cite{Zoller_grl}, reproduced here in Eq. (\ref{wellsampledness}), is not appropriate. If the distribution $F(x)$ is a good model for the empirical data, what one expects is that both $\mbox{Prob}[ Y \le y_{emp} ]$ and $\mbox{Prob}[ Y > y_{emp}]$ are not too close to 1, let us say, below $1-(1-r) \alpha$ and $1-r \alpha$, respectively, at significance level $\alpha$ (with $r=1/2$ in the usual symmetric case and $\alpha=0.05$ or $0.01$). As both probabilities add up to one, the conditions can be written as \begin{equation} r\alpha < \mbox{Prob}[ Y \le y_{emp} ] < 1-(1-r)\alpha, \label{condition_alpha} \end{equation} or, equivalently, as $$ (1-r)\alpha < \mbox{Prob}[ Y > y_{emp} ] < 1-r\alpha, $$ i.e., the random variable $Y$ takes not-too-extreme values with probability $1-\alpha$ (e.g. 0.95 or 0.99). Note the profound difference between these conditions and the ``well-sampledness'' criterion \cite{Zoller_grl}, Eq. (\ref{wellsampledness}) here. Note also that, following this ``new'' criterion, the previous numerical results for the truncated power-law distribution \cite{Zoller_grl} seem to indicate (in contrast to the conclusions there) that all tested values of $m_c$ should be rejected at the 0.05 significance level (as Ref. \cite{Zoller_grl} reports $\mbox{Prob}[ Y > y_{emp} ] > 0.975$), except $m_c=9.2$ (the value of $\mbox{Prob}[ Y > y_{emp} ]$ for $m_c=9.5$ displayed in Fig. 3 of Ref. \cite{Zoller_grl} seems to be slightly above 0.975 and should be rejected as well, at least in the symmetric case $r=1/2$). For the Tap distribution, the only values of $m_c$ that should not be clearly rejected from the numerical results of Ref. \cite{Zoller_grl} (again in contrast with the conclusions of that reference) are $m_c=9$ and $m_c=9.2$ (for the rest of the $m_c$ values Ref.
\cite{Zoller_grl} reports $\mbox{Prob}[ Y > y_{emp} ]$ above 0.975 or below 0.025). But the numerical results of Ref. \cite{Zoller_grl} are not in correspondence with ours; our maximum-likelihood estimations for $\beta$ do not lead to $\mbox{Prob}[Y>y_{emp}]\simeq 1$ when $m_c$ is large ($m_c \ge 10$). What we find for those values is $\mbox{Prob}[Y>y_{emp}] < 0.975$, see Figs. \ref{figone} and S5 (and Fig. S6 for the TrG), so all large values of $m_c$ are allowed, in principle. Regarding the number of earthquakes required to constrain the corner parameters ($M_c$ or $m_c$), what is implicit behind Eq. (\ref{zollerformula}) is that a ``not well-sampled'' distribution (with $\mbox{Prob}[ Y > y_{emp} ]$ close to zero) is ``not well-sampled'' just because of ``bad luck'', that is, because the largest earthquake had a $y_{emp}$ much larger than expected from both the model $F(x)$ and the actual value of $N$. This bad luck is what leads to the rejection of the null hypothesis in usual statistical testing (and corresponds to the significance level, see Eq. (\ref{condition_alpha})). But, in the argument of Ref. \cite{Zoller_grl}, gathering more data would eventually lead to the accommodation of the theoretical distribution of the maximum to the empirical value $y_{emp}$, regardless of the model. Thus, in that assumption $y_{emp}$ is considered quenched, i.e., it does not grow despite the fact that the number of data increases. This is hard to justify.
\section*{Proper constraining of the corner seismic moment: TPL case} In this section we derive a proper statistical way to evaluate the number $N$ of earthquakes necessary to constrain the estimated value of $M_c$ or $m_c$ for the TPL distribution. In this case, our approach uses the distribution of the estimator of these quantities ($M_c$ and $m_c$) to calculate their statistical uncertainty as a function of $N$, and looks for the value of $N$ that reduces the uncertainty down to a desired range. This will necessarily depend on the true values of the parameters, which are unknown, and is also based on the assumption that the sample is representative of the whole population (otherwise, no inference is possible). For this purpose, let us focus on the truncated-power-law model, which has the peculiar property that the random variable $Y$ (the maximum seismic moment of the $N$ earthquakes) constitutes the maximum-likelihood estimator, $\hat M_c$, of the truncation parameter $M_c$; that is, $Y=\hat M_c$ for the TPL (or, equivalently, $\hat m_c$ for the magnitude). Then, inverting $F_{max}(y_p)=p$, with $y_p$ defining the $100p$-th percentile of the distribution of the maximum seismic moment (i.e., the distribution of $\hat M_c$), one can get the probability of any interval for $\hat M_c$. The limiting points of these intervals are, from Eqs. (\ref{Fmaxx}) and (\ref{Ftpl}), $$ y_{p,tpl}=\frac a{\sqrt[\beta]{1-p^{1/N}[1-(a/M_c)^\beta]}}, $$ and in terms of the magnitude, \begin{equation} m_{p,tpl}= \frac 2 3 \left[ \log_{10} \left( \frac a{\sqrt[\beta]{1-p^{1/N}[1-(a/M_c)^\beta]}} \right)- 9.1\right], \label{mp} \end{equation} using the relation between magnitude and seismic moment, with $m_{p}$ the $100p$-th percentile of the distribution of the maximum magnitude. For the true distribution, the resulting 95\%-probability intervals, $(m_{p,tpl},m_{p+0.95, tpl})$, should contain the empirical value of the maximum with probability $0.95$. These intervals are shown in Fig.
\ref{intervalos}, using the empirical value of $N$ in the global CMT catalog and different values of $M_c$, with $\beta$ fixed to $0.67$ and $p=0.025$ for symmetric intervals (we have checked that the final results do not depend much on this choice). Figure \ref{intervalos} shows that the ideal situation happens when the distribution of the maximum-likelihood estimator is very narrow, and then $\hat m_c\simeq m_c$, leading to the automatic recovery of the true value (a value very close to it, but below it, in fact). When $N$ is equal to the empirical value (considering the case previously studied in the literature \cite{Zoller_grl}, up to mid 2012), this happens for $m_c<8.5$. One could refer to this case as ``sampled enough'' (in sharp contrast with the previous terminology \cite{Zoller_grl}). On the contrary, when the upper limit of the interval, $m_{p+0.95}$, departs clearly from the true value of $m_c$, we may talk of undersampling (there is no hint of the real maximum $m_c$ after the $N$ observations, again in contrast with previous research \cite{Zoller_grl}). This is the case for $m_c > 10.5$ (for $N=7,585$), for which the intervals do not include the true value of $m_c$ (for instance, for $m_c=12$ the interval of the maximum goes from 9 to 11, roughly, see Fig. \ref{intervalos}). But note that this kind of undersampling would still allow ruling out the parameter values of the undersampled distributions if the empirical value of the maximum were outside the resulting interval (nevertheless, this is not the case for the actual value, see below). In the intermediate case ($8.5 < m_c < 10.5$ for the period under consideration), the intervals are wide but they reach the true value. We can use the previous argument to find the value of $N$ that leads to narrow 95\%-probability intervals for the estimation of $M_c$ or $m_c$ in the TPL model. Using Eq. (\ref{mp}), the width of the magnitude intervals, $\Delta=m_{p+0.95}-m_{p}$, is obtained as \begin{equation} \Delta_{tpl}= \frac 2 {3\beta} \log_{10} {\frac {1-[1-(a/M_c)^\beta] p^{1/N}} {1-[1-(a/M_c)^\beta] (p+0.95)^{1/N}} }. \label{ainvertir} \end{equation} Isolating $N$ as a function of $\Delta_{tpl}$ for given values of $M_c$ and $\beta$ yields the desired result. Notice that, in contrast to Ref. \cite{Zoller_grl} [Eq. (\ref{wellsampledness})], our approach does not need any empirical information (except the value of $\beta$). Going back to Fig. \ref{numberofearthq}, it includes the number of events necessary to obtain intervals of a fixed width, obtained by numerical inversion of Eq. (\ref{ainvertir}), as a function of $M_c$. The results are clearly different from the previous ones \cite{Zoller_grl}, as shown in the figure. Figure \ref{numberofearthq} is particularly useful for testing a specific value of $m_c$. If the real value of $m_c$ were 9.5 (the magnitude of the largest earthquake in the historical record \cite{Chile1960}, which is not contained in the CMT catalog), a 95\%-probability interval with width $\Delta=0.4$ (from 9.1 to 9.5, roughly) would be obtained after about $N=14,000$ events (corresponding to 65 years, reached in 2042). If one wants instead a width of $\Delta=0.2$ (yielding an interval from 9.3 to 9.5), the necessary $N$ is $36,400$, to be reached around the year 2147 (assuming that the TPL were the right model, that there is no dependence between the magnitudes, and that the long-term global earthquake rate and $\beta$ were constant).
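The numerical inversion of Eq. (\ref{ainvertir}) is straightforward; the following minimal Python sketch (ours, using the illustrative values quoted above) recovers the figures for $m_c=9.5$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Find the number of events N giving a 95%-probability interval of width
# Delta for the maximum magnitude under the TPL model (width formula above).
a, beta, p = 5.31e17, 0.67, 0.025
rate = 213.7                       # earthquakes per year with x >= a

def moment(m):
    return 10.0 ** (1.5 * m + 9.1)

def width_tpl(N, Mc):
    q = 1.0 - (a / Mc) ** beta
    return (2.0 / (3.0 * beta)) * np.log10(
        (1.0 - q * p ** (1.0 / N)) / (1.0 - q * (p + 0.95) ** (1.0 / N)))

def events_needed(mc, Delta):
    """(Real-valued) N with width_tpl(N, Mc) = Delta."""
    Mc = moment(mc)
    return brentq(lambda N: width_tpl(N, Mc) - Delta, 10.0, 1e9)

for Delta in (0.4, 0.2):
    N = events_needed(9.5, Delta)
    print(f"Delta = {Delta}: N ~ {N:,.0f} events, ~{N / rate:.0f} years")
\end{verbatim}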
It is important to realize that, in all the cases shown in the figure, the top value of the interval coincides with the real value. Although the probability that the estimated value is between $m_{p+0.95}$ and $m_c$ is $0.05-p$, the two values are very close, i.e., $m_{p+0.95}\simeq m_c$; this is due to the extreme sharpness of the density of the observed maximum close to $m_c$ (for instance, as in Fig. 1, where the vertical axis is logarithmic). So, the value of $N$ provided in the figure guarantees no undersampling. Note also that a 95\%-probability interval is a much stricter requirement than an interval corresponding to one standard deviation. We have just calculated the number of earthquakes required to estimate $M_c$ with a given uncertainty, for different hypothetical values of the true $M_c$. This does not make use of the empirical value $y_{emp}$ obtained in $35.5$ years. A different issue, then, is how $y_{emp}$ rules out, or not, the possible values of $M_c$. Figure \ref{intervalos} shows (in addition to the intervals of the maximum magnitude obtained from Eq. (\ref{mp})) the empirical value obtained for the period $1977$-$2012.5$. If the observed maximum magnitude (9.1 in the global CMT catalog) is inside the interval, there is no reason to reject the parameters of the model (with 95\% confidence); on the contrary, if the empirical value is outside, we should reject the parameters. The figure shows how, for the TPL model, no value of $m_c\ge 9.1$ can be rejected, i.e., any value of $m_c$ between $9.1$ and $\infty$ is compatible with the empirical result, and therefore the data do not allow us to determine an upper bound for $m_c$, although values of $m_c$ above 10 are close to rejection (with 95\% confidence; if we decreased the confidence or increased the number of data, an upper bound would appear). Indeed, considering the most recent data at the time of writing, up to the end of 2017 (when no other earthquake of magnitude larger than 9.1 has taken place), the range of compatible values of $m_c$ turns out to be $9.1$--$10.8$, as reported in Table \ref{table_data}. As an illustration, we also analyze what a hypothetical $y_{emp}$ corresponding to a magnitude 9.1 in a 71-year period (from 1977 to 2047, let us say) would imply. Table \ref{table_data} shows that this would constrain $m_c$ to be between 9.1 and 9.5, for 95\%-probability intervals, but if the maximum in the same period were 9.3, the allowed range would be between 9.3 and 10.3. In contrast, a maximum empirical value of 9.5 (or higher) in that period would yield $m_c$ unbounded from above again. Needless to say, we need to wait about 30 years to choose between these three answers.
\section*{Proper constraining of the corner seismic moment: Tap and TrG cases} Note that, although the maximum empirical value of the seismic moment is the maximum-likelihood estimator of $M_c$ only for the TPL distribution (out of the three considered models), we can still use the previous procedure to constrain the value of $M_c$ for any distribution, but with the resulting values of $M_c$ not related to maximum-likelihood estimation, in general. Thus, for the Tap distribution, the percentiles of the maximum seismic moment turn out to be, using Eqs. (\ref{Ftap}) and (\ref{Fmaxx}), $$ y_{p, tap}=\beta M_c W\left(\frac {a e^{a/(\beta M_c)}} {\beta M_c (1-p^{1/N})^{1/\beta}}\right), $$ with $W$ the Lambert W function \cite{Corless1996}, fulfilling $z=W(z e^z)$. And for the truncated gamma we get, using Eq.
(\ref{Ftrg}), $$ y_{p, trg}=M_c \Gamma_2^{-1} (-\beta, (1-p^{1/N})\Gamma(-\beta, a/M_c) ), $$ with $\Gamma_2^{-1}$ the inverse, with respect to its second argument, of the incomplete gamma function. In the same way as for the TPL, the empirical value $y_{emp}$ leads to an unbounded range of values of $m_c$ compatible with $y_{emp}$ for the original value of $N$ (7,585). These ranges go from $8.65$ to $\infty$ for the Tap distribution and from $8.8$ to $\infty$ for the TrG, with $\beta=0.67$. However, when one extends the analysis up to 2017, the ranges become bounded, although they remain large, see Table \ref{table_data}. This table also explores the values of these ranges in the future, depending on the hypothetical value of the maximum magnitude observed. We see that, in general, the ranges provided by the Tap distribution are somewhat wider than those provided by the TPL, whereas the TrG yields considerably larger ranges. This means that the number of data necessary to constrain the value of $m_c$ is larger for the TrG than for the other two distributions. The table also allows us to rule out the scenario that there will be no earthquakes larger than magnitude 9.1 before 2097 for a TPL distribution, as this scenario leads to the implausibility of having events larger than 9.3, contrary to what was observed in the magnitude 9.5 1960 Chile event (although the CMT catalog would probably underestimate the seismic moment of such an event \cite{Tsai_grl}).
\section*{Discussion} Before concluding, we briefly explore the implications of our results for the assessment of seismic hazard. Considering as an illustration the case of the tapered model, we have seen (Table 1) how the CMT data, up to 2017, are compatible with a range of values of the corner magnitude, from $m_{c\,min}=8.6$ to $m_{c\,max}=10.2$ (with 95\% confidence). Therefore, the resulting seismic-moment distribution (or, in the same way, the resulting magnitude distribution) will be a mixture (or combination) of the different $S_{tap}(x|M_c)$ (now we use the complementary cumulative distribution function and make explicit in the notation the dependence on the corner seismic moment $M_c$), with $M_c$ ranging from $M_{c\,min}$ to $M_{c\,max}$. Thus, \begin{equation} S_{mix}(x) =\int_{M_{c\,min}}^{M_{c\,max}} S_{tap}(x|M_c) \rho(M_c) d M_c, \label{laintegral} \end{equation} where the resulting distribution $S_{mix}(x)$ is no longer a Tap distribution but a mixture of Tap distributions with different $M_c$. The term $\rho(M_c)$ gives weight to the different values of $M_c$. The same equation holds for any other probabilistic model (such as the TPL and TrG). One could assume a uniform distribution of corner magnitudes (all values from $m_{c\,min}$ to $m_{c\,max}$ would be equally likely). Interestingly, for the distribution of the corner seismic moment this leads to the Jeffreys prior of a scale parameter, $\rho(M_c) \propto 1/M_c$. Under this choice, the integral in Eq. (\ref{laintegral}) can be easily evaluated by the Monte Carlo method. For the Tap model, the probability of an earthquake of magnitude 9.1 or larger (among all earthquakes with magnitude larger than 5.75) turns out to be $S_{mix}(x)=2.6 \times 10^{-4}$, corresponding to about 1 in 20 years. In comparison with the CMT catalog itself (1 such event in 35.5 years), this probability seems somewhat large. Even higher values of the magnitude or other models (TPL or TrG) also seem to lead to an overestimation of these probabilities.
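The Monte Carlo evaluation of Eq. (\ref{laintegral}) is simple; the following minimal Python sketch (ours, an illustration rather than the original computation) reproduces the order of magnitude of the quoted probability for the Tap model:
\begin{verbatim}
import numpy as np

# Monte Carlo evaluation of the mixture S_mix for the Tap model with the
# Jeffreys prior rho(Mc) ~ 1/Mc, i.e., a uniform prior on the corner
# magnitude mc between mc_min and mc_max.
rng = np.random.default_rng(1)
a, beta = 5.31e17, 0.67
mc_min, mc_max = 8.6, 10.2
rate = 213.7                      # events per year with magnitude >= 5.75

def moment(m):
    return 10.0 ** (1.5 * m + 9.1)

def S_tap(x, Mc):
    """Complementary CDF of the tapered GR distribution."""
    return (a / x) ** beta * np.exp(-(x - a) / Mc)

mc_samples = rng.uniform(mc_min, mc_max, 10**5)
x = moment(9.1)
S_mix = S_tap(x, moment(mc_samples)).mean()
print(f"S_mix ~ {S_mix:.1e}, i.e. one m>=9.1 event every "
      f"{1.0 / (rate * S_mix):.0f} years on average")
\end{verbatim}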
Naturally, this is the core problem in the statistics of extreme events: one has very few extreme events against which to contrast the estimations. As the result is highly sensitive to the choice of the distribution $\rho(M_c)$, this is a topic that deserves further study. Our results can also have applications for time-dependent hazard \cite{Mulargia_Geller}. If we know when the last earthquake of a given seismic moment $x$ or higher happened (a time $t$ ago), we can obtain the probability of recurrence in a given time period $\Delta$ from the present as $$ \mathcal{P}_{x,t,\Delta}= \mbox{Prob}[t < \mbox{ waiting time } \le t+\Delta \,| \, \mbox{ waiting time } > t] = 1 - \frac{S_w(t+\Delta \, | \, x)}{S_w(t \, | \,x)}, $$ where the subindex $w$ denotes that the distribution refers to the waiting time (not to the seismic moment). For a Poisson process $S_w$ is exponential with rate $\lambda_x$, and then we recover $$\mathcal{P}_{x,t,\Delta}= 1-e^{-\lambda_x \Delta} \simeq \lambda_x \Delta =R_a S(x) \Delta, $$ which turns out to be independent of $t$ and becomes essentially the same formula used above for time-independent hazard, with $R_a=213.7$ year$^{-1}$ (we have assumed $\Delta \ll \lambda_x^{-1}$). In order to obtain time-dependent hazard one needs to go beyond Poisson occurrence. At a global scale it has been pointed out that the gamma distribution can describe earthquake waiting times well \cite{Corral_prl.2004,Corral_calcutta}; nevertheless, for the sake of simplicity, we are going to illustrate the calculation with the Weibull distribution, which can give similar fits \cite{Morina_storms}. In this way, from the equation above we can write \begin{equation} \mathcal{P}_{x,t,\Delta}=1-\exp\left[\left(\frac{t}{c_x}\right)^\gamma - \left(\frac{t+\Delta}{c_x}\right)^\gamma \right], \label{mathcalP} \end{equation} with $\gamma$ and $c_x$ the shape and scale parameters of the Weibull distribution, respectively (the latter depending on $x$). The Poisson case is recovered in the particular limit $\gamma=1$. The scale parameter of the waiting-time distribution can be directly related to the seismic-moment distribution: on the one hand, the number of events per unit time (with seismic moment above $x$) is $R_a S(x)$; on the other hand, this number is also given by $1/\langle t(x) \rangle$, where $\langle t (x)\rangle$ is the mean waiting time for events above $x$. In the particular case of the Weibull distribution, this is given by $\langle t(x) \rangle = c_x g(\gamma)$ with $g(\gamma)=\Gamma(1+\gamma^{-1})$. Thus, $$ c_x=\frac 1 {g(\gamma) R_a S(x)}, $$ which, substituted into Eq. (\ref{mathcalP}), allows the calculation of the probability $\mathcal{P}_{x,t,\Delta}$. In the case $\Delta \ll t$ this can be simplified to $$ \mathcal{P}_{x,t,\Delta} \simeq 1-\exp\left[-\gamma \Delta \left(g(\gamma) R_a S(x)\right)^\gamma t^{\gamma-1} \right]. $$ In the context of this article, the seismic-moment distribution $S(x)$ could be substituted by the mixture over different values of $M_c$ given by Eq. (\ref{laintegral}). Nevertheless, the calculation of these probabilities requires an accurate fit of the waiting-time distributions $S_w(t \, | \, x)$ (i.e., the fitting of $\gamma$ and $c_x$ in the case of the Weibull distribution). This is left to future work.
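To make the use of Eq. (\ref{mathcalP}) concrete, the following minimal Python sketch (ours) evaluates the recurrence probability for a hypothetical Weibull shape parameter and corner magnitude; both values are purely illustrative, since the actual fitting is left to future work:
\begin{verbatim}
import numpy as np
from math import gamma as gamma_fn

# Time-dependent recurrence probability P_{x,t,Delta} for Weibull waiting
# times, with the scale parameter c_x tied to the seismic-moment
# distribution through c_x = 1 / (g(gamma) * R_a * S(x)).
R_a = 213.7                       # global rate of m >= 5.75 events (1/yr)
a, beta = 5.31e17, 0.67

def moment(m):
    return 10.0 ** (1.5 * m + 9.1)

def S_tap(x, Mc):
    return (a / x) ** beta * np.exp(-(x - a) / Mc)

def recurrence_prob(m, t, dt, gam, Mc):
    """Probability of an event of magnitude >= m within the next dt years,
    given that the last such event happened t years ago."""
    c_x = 1.0 / (gamma_fn(1.0 + 1.0 / gam) * R_a * S_tap(moment(m), Mc))
    return 1.0 - np.exp((t / c_x) ** gam - ((t + dt) / c_x) ** gam)

# Hypothetical example: magnitude >= 9.1, last event 10 years ago, horizon
# of 10 years, shape gamma = 0.7 and corner magnitude 9.0 (both assumed).
print(recurrence_prob(9.1, t=10.0, dt=10.0, gam=0.7, Mc=moment(9.0)))
\end{verbatim}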
\section*{Conclusions} Summarizing the main results of the article, we have reconsidered to what extent the available earthquake record can constrain the parameter that characterizes the tail of the global seismic-moment distribution: a corner seismic moment ($M_c$, or its corresponding moment magnitude $m_c$), for three different distributions (truncated power law, tapered GR, and truncated gamma). We have corrected some of the drawbacks of previous literature regarding the number of events necessary for such a purpose. The key point of our approach is to obtain the percentiles of the distribution of the maximum seismic moment of $N$ earthquakes, and to derive from them probability intervals that can be compared with the observed maximum seismic moment, $y_{emp}$. If $y_{emp}$ is inside the interval, there is no reason to reject the considered value of the corner parameter. Although currently (up to the end of 2017) the range of values of $m_c$ is rather wide, 80 years from now these ranges are expected to decrease substantially, depending crucially on the maximum value to be observed. For instance, if this were 9.3, the tapered model would lead to $m_c \simeq 9.1 \pm 0.3$ (95\% confidence), and the truncated gamma model to $9.35 \pm 0.45$ (see Table \ref{table_data} for more hypothetical examples). From here we conclude that the much longer periods of time estimated earlier are not justified. In addition, for the same reasons elaborated in this article, the standard errors of the corner parameters that we \cite{Serra_Corral} calculated previously for almost 37 years of shallow global seismicity using asymptotic likelihood theory do not provide a convenient description of the range of uncertainty in those parameters.
\section*{Acknowledgements} We appreciate the critical reading of \'Alvaro Gonz\'alez, as well as suggestions from the reviewers. The data used in this research can be downloaded from {\tt http://www.globalcmt.org} \cite{Ekstrom2012}.
\section{Introduction} Multisymmetric polynomials are the generalization of symmetric polynomials in $n$ variables $x_1 ,\ldots , x_n$ to $m \geq 2$ sets of $n$ variables. They have been classically studied in characteristic 0 (see Schl\"afli~\cite{schlaefli1852}, Junker~\cite{junker1893}, Weyl~\cite{weyl1939classical}, and for a historical overview see Domokos~\cite[Remark~2.6]{domokos2007}). In the setting of invariant theory symmetric polynomials appear as the elements of the invariant ring ${\mathbb K}[V]^{{\mathcal S}_n}$ of the standard representation $V = {\mathbb K}^n$ of the symmetric group ${\mathcal S}_n$, where ${\mathbb K}$ is a field. Multisymmetric polynomials form the invariant ring ${\mathbb K}[V^m]^{{\mathcal S}_n}$, where the action of ${\mathcal S}_n$ on $V$ is extended to a diagonal action of ${\mathcal S}_n$ on $V^m = V \oplus \ldots \oplus V$. The coordinate ring of $V^m$ is denoted by \[ {\mathbb K}[V^m] = {\mathbb K}[ \,x(j)_{i} \mid i = 1 ,\ldots , n,\, j = 1 ,\ldots , m],\] where $x(j)_{i}:V^m\to {\mathbb K}$ sends $(a_1,\ldots,a_m) \in V^m$ to the $i$-th coordinate $(a_j)_i$ of $a_j$. \noindent An obvious invariant inside ${\mathbb K}[V^m]$ is obtained for any $m$-tuple $\underline{k} = (k_1 ,\ldots , k_m) \in {\mathbb N}^m$, where ${\mathbb N}$ is the set of all non-negative integers, and any $1 \leq t \leq n$ by \begin{equation}\label{eqDefinitionOfF} \sigma_t(\un{k})=\sum\limits_{1\leq i_1<\ldots< i_t\leq n} x^{\un{k}}_{i_1} \cdot \ldots \cdot x^{\un{k}}_{i_t} \quad \in {\mathbb K}[V^m]^{{\mathcal S}_n},\end{equation} where \[ x^{\un{k}}_i=x(1)_i^{k_1}\cdot \ldots \cdot x(m)_i^{k_m} \quad \text{ (for $1\leq i\leq n$).}\] The invariants $\sigma_t(\un{k})$ are called \emph{elementary multisymmetric polynomials}. For short, we write $\tr(\un{k})=\sigma_1(\un{k})$ for the power sum \[\tr(\un{k}) = \sigma_1(\un{k})= \sum\limits_{i=1}^n x^{\un{k}}_i.\] There is a natural ${\mathbb N}^m$-grading on ${\mathbb K}[V^m]$, which is preserved by the ${\mathcal S}_n$-action. Under this grading $\sigma_t({\underline{k}})$ is multi-homogeneous of multi-degree $t\underline{k}$. Upper bounds on the degrees of elements of minimal generating sets for the ring of multisymmetric polynomials ${\mathbb K}[V^m]^{{\mathcal S}_n}$ were studied by Fleisch\-mann~\cite{fleischmann}, Vaccarino~\cite{vaccarino2005}, Domokos~\cite{domokos2009vector}, and a minimal generating set was explicitly described by Rydh~\cite{rydh2007}. In particular, ${\mathbb K}[V^m]^{{\mathcal S}_n}$ is known to be generated by the $m$ elements $\sigma_n(0,\ldots,0,1,0,\ldots,0)$, where 1 is at position $j$ for $1 \leq j \leq m$, together with the elements $\sigma_t(\un{k})$ with $1\leq t\leq n$ and $\un{k} \in {\mathbb N}^m$ satisfying $\gcd(\un{k})=1$ and $k_1<\frac{n}{t}$,$\ldots$, $k_m<\frac{n}{t}$ (see Corollary 5.3 of~\cite{domokos2009vector}). In case $\charakt({\mathbb K}) = 0$ or $\charakt({\mathbb K})> n$ the ring ${\mathbb K}[V^m]^{{\mathcal S}_n}$ is minimally (w.r.t. inclusion) generated as a ${\mathbb K}$-algebra by \begin{equation}\label{eqMinimalGen} \tr(\un{k})\;\; \text{ with } \quad \un{k} \in {\mathbb N}^m \text{ satisfying } 1 \leq k_1 + \ldots + k_m \leq n, \end{equation} see Domokos~\cite[Theorem 2.5]{domokos2009vector}. While the set (\ref{eqMinimalGen}) of generating invariants can not be improved, it is often helpful to consider a set of separating invariants, which can be significantly smaller than (\ref{eqMinimalGen}). 
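As a small computational illustration (ours, not part of the theory below), the elementary multisymmetric polynomials of (\ref{eqDefinitionOfF}) can be generated directly from the definition; the following Python/SymPy sketch does this for illustrative values of $n$ and $m$:
\begin{verbatim}
from itertools import combinations
from sympy import symbols, prod

# Elementary multisymmetric polynomials sigma_t(k) and power sums tr(k),
# built directly from the definition, for illustrative sizes n and m.
n, m = 3, 2
# x[i][j] stands for the variable x(j+1)_{i+1} of the (i+1)-th point in K^m.
x = [[symbols(f"x{j}_{i}") for j in range(1, m + 1)] for i in range(1, n + 1)]

def monomial(i, k):
    """x_i^k = x(1)_i^{k_1} * ... * x(m)_i^{k_m}."""
    return prod(x[i][j] ** k[j] for j in range(m))

def sigma(t, k):
    """Elementary multisymmetric polynomial sigma_t(k)."""
    return sum(prod(monomial(i, k) for i in idx)
               for idx in combinations(range(n), t))

def tr(k):
    """Power sum tr(k) = sigma_1(k)."""
    return sigma(1, k)

# sigma_2(1,1) = sum_{i<j} x(1)_i x(2)_i x(1)_j x(2)_j is invariant under
# simultaneously permuting the index i in both sets of variables.
print(sigma(2, (1, 1)))
print(tr((2, 1)))
\end{verbatim}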
In 2002 Derksen and Kemper~\cite{derksen2002computational} introduced the notion of separating invariants as a weaker concept than generating invariants. Given a subset $S$ of an invariant ring ${\mathbb K}[W]^{G}$, we say that elements $u,\, v \in W$ {\it can be separated by $S$} if there exists an invariant $f\in S$ with $f(u)\neq f(v)$. The set $S$ is called {\it separating} if any $u,\, v \in W$ that can be separated by ${\mathbb K}[W]^G$ can also be separated by $S$. It is a well-known fact about the invariants of a finite group $G$ that $u$, $v$ can be separated by ${\mathbb K}[W]^G$ if and only if they are not in the same orbit (see~\cite[Section 2.4]{derksen2002computationalv2}). Since the introduction of the notion of separating invariants, many results were found where they behave better than generating invariants. Some properties of separating invariants can be found in the second volume of the book by Derksen and Kemper~\cite[Section 2.4]{derksen2002computationalv2}. Explicit separating sets and upper degree bounds of separating sets have been calculated for some group actions, see e.g. \cite{derksen2018}, \cite{domokos2017}, \cite{draisma2008polarization}, \cite{dufresne2014}, \cite{dufresne2015separating}, \cite{elmer2014}, \cite{elmer2016}, \cite{kaygorodov2018}, \cite{kohls2013}, \cite{reimers2018}. In this paper we are interested in multi-homogeneous separating sets of ${\mathbb K}[V^m]^{{\mathcal S}_n}$. One way to obtain them is by \emph{expanding} a separating set for ${\mathbb K}[V^{m_0}]^{{\mathcal S}_n}$ for some $m_0 \leq m$. By this we mean the following construction. We say that an $m_0$-tuple $\un{j}\in{\mathbb N}^{m_0}$ is {\it $m$-admissible} if $1\leq j_1<\cdots < j_{m_0}\leq m$. For any $m$-admissible $\un{j}\in{\mathbb N}^{m_0}$ and $f\in {\mathbb K}[V^{m_0}]^{{\mathcal S}_n}$ we define the invariant $f^{(\un{j})}\in {\mathbb K}[V^{m}]^{{\mathcal S}_n}$ as the result of the following substitution of variables in $f$: \[ x(1)_i \to x(j_1)_i,\, \ldots, \, x(m_0)_i \to x(j_{m_0})_i \quad \text{ (for all } 1 \leq i \leq n).\] Given a set $S \subset {\mathbb K}[V^{m_0}]^{{\mathcal S}_n}$, we define its {\it expansion} $\expan{S}{m} \subset {\mathbb K}[V^m]^{{\mathcal S}_n}$ by \begin{equation}\label{DefExpansion} \expan{S}{m} = \{ f^{(\un{j})} \mid f \in S \text{ and } \un{j}\in{\mathbb N}^{m_0} \text{ is $m$-admissible}\}. \end{equation} As an example, for $S = \{ \sigma_t(2,1) \}\subset {\mathbb K}[V^2]^{{\mathcal S}_n}$ for some $t$ we have $$\expan{S}{3} = \{ \sigma_t(2,1,0),\,\sigma_t(2,0,1),\,\sigma_t(0,2,1) \}\subset {\mathbb K}[V^3]^{{\mathcal S}_n}.$$ The idea of studying separating sets of vector invariants that depend only on a smaller number of variables was considered by Domokos~\cite{domokos2007} for an $n$-dimensional representation $W$ of an arbitrary algebraic group $G$. Domokos showed that it is enough to consider a separating set of ${\mathbb K}[W^{m_0}]^G$ for $m_0 = 2 n$ and expand it to ${\mathbb K}[W^m]^G$ for $m \geq m_0$. This was later improved to $m_0 = n+1$ by Domokos and Szab\'o \cite{domokos2011helly}. Our main theorem is a strengthening of this result in the case of multisymmetric polynomials to $m_0 = \lfloor \frac{n}{2} \rfloor + 1$, where $\lfloor \frac{n}{2} \rfloor$ denotes the largest integer $\leq \frac{n}{2}$. \medskip \begin{thm}\label{TheoremMain} Assume that $m_0 \geq \lfloor \frac{n}{2} \rfloor + 1$ and that $S \subset {\mathbb K}[V^{m_0}]^{{\mathcal S}_n}$ is separating. 
Then for all $m \geq m_0$ the expansion $\expan{S}{m}$ as defined in (\ref{DefExpansion}) is separating for ${\mathbb K}[V^{m}]^{{\mathcal S}_n}$. \end{thm} \medskip By Remark~\ref{remark_new} below it is enough to prove Theorem~\ref{TheoremMain} for a particular separating set $S$. \medskip \begin{remark}(cf. \cite[Remark~1.3]{domokos2007})\label{remark_new} Assume that $S_{1}$ and $S_{2}$ are separating sets for ${\mathbb K}[V^{m_0}]^{{\mathcal S}_n}$ and assume that $m>m_0$. Then $\expan{S_1}{m}$ is separating for ${\mathbb K}[V^{m}]^{{\mathcal S}_n}$ if and only if $\expan{S_2}{m}$ is separating for ${\mathbb K}[V^{m}]^{{\mathcal S}_n}$. \end{remark} Let $\sigma(n)$ denote the minimal number $m_0$ such that the expansion of some separating set $S$ for ${\mathbb K}[V^{m_0}]^{{\mathcal S}_n}$ produces a separating set for ${\mathbb K}[V^m]^{{\mathcal S}_n}$ for all $m \geq m_0$. By Remark~\ref{remark_new} this is independent of $S$ and Theorem~\ref{TheoremMain} can be rephrased as \begin{equation}\label{eqSigma} \sigma(n) \leq \lfloor \frac{n}{2} \rfloor + 1.\end{equation} In Theorem~\ref{TheoremMainMin} in Section \ref{MinimalSeparating} we explicitly give minimal (w.r.t. inclusion) separating sets for $n = 2,\,3,\,4$ in case $\charakt({\mathbb K}) = 0$ or $\charakt({\mathbb K}) > n$. These sets also show that the upper bound (\ref{eqSigma}) is exact for $n \leq 4$. For $n = 3$ and ${\mathbb K} = {\mathbb C}$ the algebra ${\mathbb K}[V^m]^{{\mathcal S}_3}$ was studied in detail by Domokos and Pusk\'as \cite{domokos2012}. This paper is structured as follows. First, we establish a few basic results about partitions of the set $\{ 1 ,\ldots , n\}$, which are provided in Section~\ref{SectionPartitions}. Then we study the transitions from $n-1$ to $n$ and from $m-1$ to $m$ in Section~\ref{SectionLemmasForInduction}. The lemmas from Section~\ref{SectionLemmasForInduction} will be applied in the proofs of Theorems~\ref{TheoremMain} and~\ref{TheoremMainMin} in Sections~\ref{SectionProofOfTheorem1}~and~\ref{MinimalSeparating}. In Remark~\ref{RemarkFraction} we compare the asymptotic sizes of our multi-homogeneous separating set and the minimal generating set (\ref{eqMinimalGen}) in case $\charakt({\mathbb K}) = 0$ or $\charakt({\mathbb K}) > n$. \textbf{Acknowledgment.} We are very grateful to the anonymous referee for his or her helpful comments, in particular, for pointing us towards a characteristic-free version of the results. \section{Partitions of the set $[n]$}\label{SectionPartitions} Recall that a {\it partition} $A=\{I_1 ,\ldots , I_r\}$ of $[n] := \{ 1 ,\ldots , n\}$ is a set of non-empty subsets of $[n]$ with pairwise empty intersections such that $I_1\cup\ldots\cup I_r=[n]$. For a partition $A$ of $[n]$ denote by $G_A$ the subgroup of $G := {\mathcal S}_n$ that fixes the sets of $A$, i.e., \[ G_A := \{ \sigma \in {\mathcal S}_n \mid \sigma(I) = I \text{ for all } I \in A \}.\] The intersection partition of two partitions $A$ and $B$ of $[n]$ is \[ A \sqcap B := \{ I \cap J \mid I \in A,\, J \in B \} \setminus \{ \emptyset \}.\] Note that $\sqcap$ is an associative operation on the set of all partitions of $[n]$. For the corresponding subgroups we have \begin{equation} \label{eq1} G_{A \sqcap B} = G_A \cap G_B.
\end{equation} A vector $a = \begin{pmatrix}a_1 \\ \vdots \\ a_n \end{pmatrix} \in V$ defines an equivalence relation on $[n]$ through \[ i \sim j \quad \Longleftrightarrow \quad a_i = a_j .\] This equivalence relation defines a partition, denoted by $\parti(a)$, of the set $[n]$. In particular, $|\parti(a)|$ is the number of distinct coordinates $a_1,\ldots,a_n$ of $a$. The stabilizer of $a$ under the ${\mathcal S}_n$-action on $V$ is equal to the subgroup $G_{\parti(a)}$ defined above: \begin{align}\label{StabilizerOfVector} G_a &:= \{ \sigma \in {\mathcal S}_n \mid a_{\sigma(i)} = a_i \text{ for } i = 1 ,\ldots , n \} \\ &\,\,= \{ \sigma \in {\mathcal S}_n \mid \sigma(I) = I \text{ for all } I \in \parti(a) \} = G_{\parti(a)} .\notag \end{align} \medskip \begin{lem}\label{Lemma1OnPartitions} Let $A$ and $B$ be partitions of $[n]$. If $G_A \not\subset G_B$, then $|A \sqcap B| > |A|$. \end{lem} \begin{proof} The statement obviously follows from the fact that $G_{A}\subset G_{B}$ holds if and only if the partition $A$ is a refinement of the partition $B$. \end{proof} \medskip Next for a partition $A$ of $[n]$ we define \[ \min(A) := \min \{ |I| \mid I \in A \} .\] Clearly, if $|A| \geq \lfloor \frac{n}{2} \rfloor + 1$, then $\min(A) = 1$. This fact together with an iterated use of Lemma~\ref{Lemma1OnPartitions} implies the next statement. \medskip \begin{lem}\label{Lemma2OnPartitions} Let $A_1 ,\ldots , A_r$ be partitions of $[n]$ with $r \geq \lfloor \frac n 2 \rfloor$ such that for all $i = 1 ,\ldots , r$ we have \[ G_{A_1} \cap \ldots \cap G_{A_{i-1}} \not\subset G_{A_i}, \] where for $i = 1$ the empty intersection is interpreted as ${\mathcal S}_n$. Then $$\min(A_1 \sqcap \ldots \sqcap A_r) = 1.$$ \end{lem} \section{Lemmas for reduction}\label{SectionLemmasForInduction} In this section we assume that $n \geq 2$. To work with the case of $n-1$ we will introduce the following notations. Denote by $\widetilde{V} = {\mathbb K}^{n-1}$ the natural representation of ${\mathcal S}_{n-1}$. For $f \in {\mathbb K}[V^m]$ (resp. $S \subset {\mathbb K}[V^m]$) denote by $\widetilde{f}$ (resp. $\widetilde{S}$) the image of $f$ (resp. $S$) under the surjective ${\mathbb K}$-algebra homomorphism ${\mathbb K}[V^m] \to {\mathbb K}[\widetilde{V}^m]$ which maps $x(j)_n$ to $0$ for $j = 1 ,\ldots , m$. Clearly, this homomorphism restricts to a map \begin{equation*}{\mathbb K}[V^m]^{{\mathcal S}_n} \to {\mathbb K}[\widetilde{V}^m]^{{\mathcal S}_{n-1}}, \quad f \mapsto \widetilde{f}.\end{equation*} \noindent{}Furthermore, for $p=(a_1,\ldots,a_m)\in V^m$ define $\widetilde{p}\in \widetilde{V}^m$ by deleting the $n$-th coordinate of every entry $a_j$ of $p$. So the superscript $\,\,\widetilde{}\,\,$ always means ``the $n$-th coordinate is missing''. To study the transitions from $n-1$ to $n$ and from $m-1$ to $m$, we will use the following conditions in the formulations of the next lemmas. \medskip \begin{cond}\label{assump} Assume $m \geq 2$, $n \geq 2$, $S \subset {\mathbb K}[V^{m-1}]^{{\mathcal S}_n}$, $T \subset {\mathbb K}[V^m]^{{\mathcal S}_n}$, $p=(a_1 ,\ldots , a_m) \in V^m$ and $q = (b_1 ,\ldots , b_m) \in V^m$ satisfy \begin{enumerate} \item[$\bullet$] $S$ is separating for the ${\mathcal S}_n$-action on $V^{m-1}$, \item[$\bullet$] $\expan{S}{m}\subset T$, \item[$\bullet$] $\widetilde{T}$ is separating for the ${\mathcal S}_{n-1}$-action on $\widetilde{V}^m$, \item[$\bullet$] $p$ and $q$ can not be separated by $T$. \end{enumerate} \end{cond} \medskip The next lemma is trivial. 
\begin{lem}\label{LemmaInductionOnN} Assume that $p=(a_1 ,\ldots , a_m) \in V^m$ and $q = (b_1 ,\ldots , b_m) \in V^m$ satisfy $(a_1)_n = (b_1)_n ,\ldots , (a_m)_n = (b_m)_n$. Then $p$ and $q$ are in the same ${\mathcal S}_n$-orbit if and only if $\widetilde{p}$ and $\widetilde{q}$ are in the same ${\mathcal S}_{n-1}$-orbit. In particular, if $S \subset {\mathbb K}[V^m]^{{\mathcal S}_n}$ is separating, then $\widetilde{S} \subset {\mathbb K}[\widetilde{V}^m]^{{\mathcal S}_{n-1}}$ is also separating. \end{lem} \medskip \begin{lem}\label{LemmaFact0} Assume that Conditions~\ref{assump} hold. Then there exists a permutation $\sigma \in {\mathcal S}_n$ such that \[ \sigma q = (a_1 ,\ldots , a_{m-1},\, c) \quad \text{ (for some } c \in V).\] \end{lem} \begin{proof} Delete the last column vector from $p$ and $q$ to obtain \[ p' = (a_1 ,\ldots , a_{m-1}) \text{ and } q' = (b_1 ,\ldots , b_{m-1}) \in V^{m-1}.\] By the definition of the expansion, for all $f \in S$ we have $f^{(1,2 ,\ldots , m-1)} \in \expan{S}{m} \subset T$. Hence the assumption that $p$ and $q$ can not be separated by $T$, implies that for all $f \in S$ we have \begin{align*} f(p') = f^{(1,2 ,\ldots , m-1)}(p) = f^{(1,2 ,\ldots , m-1)}(q) = f(q').\end{align*} Since $S$ is separating, there exists $\sigma \in {\mathcal S}_n$ such that $\sigma q' = p'$. Hence with $c = \sigma b_m$ we obtain \[\sigma q = (a_1 ,\ldots , a_{m-1},\,c).\] \end{proof} \medskip We will often apply Lemma~\ref{LemmaFact0} to replace $q \in V^m$ by $\sigma q$. For this purpose we add the following trivial remark. \medskip \begin{remark}\label{RemarkReplacePAndQ} For $p$, $q \in V^m$, $S \subseteq K[V^m]^{{\mathcal S}_n}$ and $\sigma \in {\mathcal S}_n$ the statements \begin{enumerate} \item[$\bullet$] $p$ and $q$ can not be separated by $S$; \item[$\bullet$] $p$ and $q$ are in the same ${\mathcal S}_n$-orbit; \end{enumerate} do not change their validity when replacing $p$ or $q$ (or both) with $\sigma p$ or $\sigma q$, respectively. \end{remark} \medskip In the next two lemmas we will use the following notation. For an associative binary operation $\ast$ on a set $X$ and $a_1 ,\ldots , a_m \in X$ we write \[ a_1 \ast \ldots \ast \widehat{a_i} \ast \ldots \ast \widehat{a_j} \ast \ldots \ast a_m \] for \[ a_1 \ast \ldots \ast a_{i-1}\ast a_{i+1} \ast \ldots \ast a_{j-1}\ast a_{j+1} \ast \ldots \ast a_m\] where $i,\,j \in \{ 1 ,\ldots , m\}$. \medskip \begin{lem}\label{LemmaFact2} Assume that Conditions~\ref{assump} hold and that there exist $i,\,j \in \{1 ,\ldots , m\}$ with $i \neq j$ such that for the stabilizers in ${\mathcal S}_n$ we have \begin{equation}\label{eqLemmaFact2} G_{a_1} \cap \ldots \cap \widehat{G_{a_i}}\cap \ldots \cap \widehat{G_{a_j}}\cap \ldots \cap G_{a_{m}} \subset G_{a_i}. \end{equation} Then $p$ and $q$ are in the same ${\mathcal S}_n$-orbit. \end{lem} \begin{proof} Without loss of generality we can assume that $i=m-1$ and $j=m$. 
Furthermore, after applying Lemma~\ref{LemmaFact0} and Remark~\ref{RemarkReplacePAndQ}, we can assume that \[p=(a_1 ,\ldots , a_{m-1},\,a_m)\quad \text{ and } \quad q=(a_1 ,\ldots , a_{m-1},\,c).\] Consider the reduced vectors \begin{align*} p':= (a_1 ,\ldots , a_{m-2},\,a_m) \quad \text{ and } \quad q' := (a_1 ,\ldots , a_{m-2},\,c) \in V^{m-1}.\end{align*} For all $f \in S$ we have $f^{(1,2 ,\ldots , m-2,m)} \in \expan{S}{m} \subset T$ and hence \begin{align*} f(p') = f^{(1,2 ,\ldots , m-2,m)}(p) = f^{(1,2 ,\ldots , m-2,m)}(q) =f(q').\end{align*} Since $S$ is separating, then there exists $\sigma \in {\mathcal S}_n$ such that $\sigma q' = p'$. Then $\sigma c = a_m$ and $$\sigma \in G_{a_1} \cap \ldots \cap G_{a_{m-2}}.$$ Thus (\ref{eqLemmaFact2}) implies $\sigma \in G_{a_{m-1}}$ and $\sigma q = p$ follows. \end{proof} \medskip \begin{lem}\label{LemmaFact1} Assume that Conditions~\ref{assump} hold and that there exists $l \in \{ 1 ,\ldots , n\}$ such that $(a_1)_l = (b_1)_l ,\ldots , (a_m)_l = (b_m)_l$. Then $p$ and $q$ are in the same ${\mathcal S}_n$-orbit. \end{lem} \begin{proof} By Remark~\ref{RemarkReplacePAndQ} we can apply the transposition $\tau = (l,\,n)$ to both $p$ and $q$ and hence assume that $l=n$. Denote by $e$ the vector $(1,\ldots,1)^T \in V$. Then $c=(\gamma_1 e,\ldots, \gamma_m e)\in V^m$ is a fixed point of ${\mathcal S}_n$ for any $\gamma_1,\ldots,\gamma_m\in{\mathbb K}$. Consequently, $p$ and $q$ are in the same ${\mathcal S}_n$-orbit if and only if $p - c$ and $q - c$ are in the same ${\mathcal S}_n$-orbit. Let $c= ((a_1)_n \cdot e, \ldots, (a_m)_n \cdot e)\in V^m$. Replacing $p$ and $q$ by $p-c$ and $q-c$, respectively, we may assume that $0 = (a_j)_n= (b_j)_n$ for all $1\leq j\leq m$. Then $\widetilde{f}(\widetilde{p})=f(p)=f(q)=\widetilde{f}(\widetilde{q})$ for all $f\in T$. Since $\widetilde{T}$ is separating, $\widetilde{p}$ and $\widetilde{q}$ are in the same ${\mathcal S}_{n-1}$-orbit, and Lemma~\ref{LemmaInductionOnN} concludes the proof. \end{proof} \medskip \begin{lem}\label{LemmaFact3} Assume that Conditions~\ref{assump} hold and let $A_i$ denote the partitions $A_i := \parti(a_i)$ for $i \in \{1 ,\ldots , m\}$. Moreover, assume that there exist $i,\,j \in \{1 ,\ldots , m\}$ with $i\neq j$ such that \begin{equation}\label{eqLemmaFact3} \min(A_1 \sqcap \ldots \sqcap \widehat{A_i} \sqcap \ldots\sqcap \widehat{A_j} \sqcap \ldots \sqcap A_m) = 1.\end{equation} Then $p$ and $q$ are in the same ${\mathcal S}_n$-orbit. \end{lem} \begin{proof} Without loss of generality we can assume that $i=m-1$ and $j=m$. Furthermore, after applying Lemma~\ref{LemmaFact0} and Remark~\ref{RemarkReplacePAndQ}, we can assume that \[p=(a_1 ,\ldots , a_{m-1},\,a_m)\quad \text{ and } \quad q=(a_1 ,\ldots , a_{m-1},\,c).\] We consider the reduced vectors \begin{align*} p' := (a_1 ,\ldots , a_{m-2},\,a_m) \quad \text{ and } \quad q' := (a_1 ,\ldots , a_{m-2},\,c) \in V^{m-1}\end{align*} and conclude as in the proof of Lemma~\ref{LemmaFact2} that there exists a permutation $\sigma \in {\mathcal S}_n$ such that $\sigma q' = p'$. By (\ref{eq1}) and (\ref{StabilizerOfVector}) we have \[ \sigma \in G_{a_1} \cap \ldots \cap G_{a_{m-2}} = G_{A_1} \cap \ldots \cap G_{A_{m-2}} = G_{A_1 \sqcap \ldots \sqcap A_{m-2}}.\] Hence by (\ref{eqLemmaFact3}) there exists a position $l \in \{1 ,\ldots , n \}$ such that $\sigma(l) = l$. But since $\sigma c = a_m$, this implies $c_l = (a_m)_l$ and hence the $l$-th coordinates of the entries of $p$ and $q$ are the same. 
Lemma~\ref{LemmaFact1} concludes the proof. \end{proof} \section{Proof of Theorem~\ref{TheoremMain}}\label{SectionProofOfTheorem1} \begin{proof_of}{of Theorem~\ref{TheoremMain}}. By Remark~\ref{remark_new} we can assume that $S$ is a generating set for ${\mathbb K}[V^{m_0}]^{{\mathcal S}_n}$. We use induction on $n$. For $n = 1$ we have $m_0 \geq 1$ and we can assume that $S \supset \{x(1)_1\}$. Then $S^{[m]}$ generates the algebra ${\mathbb K}[x(1)_1,\ldots,x(m)_1]$. Now assume that $n\geq2$. Since $S\subset {\mathbb K}[V^{m_0}]^{{\mathcal S}_n}$ is separating, we know by Lemma~\ref{LemmaInductionOnN} that $\widetilde{S}$ is separating for the ${\mathcal S}_{n-1}$-action on $\widetilde{V}^{m_0}$. Clearly, \[ m_0 \geq \lfloor \frac{n-1}{2} \rfloor + 1,\] so by induction on $n$ we can use Theorem~\ref{TheoremMain} for $n-1$. It follows that \begin{equation}\label{eqSITilde} \text{$\expan{(\widetilde{S})}{m}$ is separating for the ${\mathcal S}_{n-1}$-action on $\widetilde{V}^{m}$.}\end{equation} Let us prove that $\expan{S}{m}$ is separating by induction on $m \geq m_0$. For $m = m_0$ there is nothing to show. Assume that $m>m_0$ and consider two vectors \[ p = (a_1 ,\ldots , a_{m-1},\,a_m) \in V^m \quad \text{ and } \quad q \in V^m \] that can not be separated by $\expan{S}{m}$. To conclude the proof, we have to show that $p$ and $q$ are in the same ${\mathcal S}_n$-orbit. By induction on $m$ we have that \begin{equation}\label{eqSIexpM-1} \text{$\expan{S}{m-1}$ is separating for the ${\mathcal S}_n$-action on $V^{m-1}$.}\end{equation} Since statements (\ref{eqSITilde}), (\ref{eqSIexpM-1}) hold and $\expan{(\expan{S}{m-1})}{m} = \expan{S}{m}$, Conditions~\ref{assump} are satisfied with $\expan{S}{m-1}$ taking the role of $S$ and $\expan{S}{m}$ taking the role of $T$. We apply Lemma~\ref{LemmaFact0} together with Remark~\ref{RemarkReplacePAndQ} and hence can assume that $p$ and $q$ are of the form \begin{equation*}p = (a_1 ,\ldots , a_{m-1},\,a_m),\quad q = (a_1 ,\ldots , a_{m-1},\,c) \in V^m.\end{equation*} Consider the partitions $A_i := \parti(a_i)$ for $i = 1 ,\ldots , m-1$ as defined in Section \ref{SectionPartitions}. We split the rest of the proof into two cases. \medskip \textbf{\underline{Case 1}}: Assume that there exists an $i \in \{1 ,\ldots , m-1\}$ such that \begin{equation*}G_{A_1} \cap \ldots \cap G_{A_{i-1}} \subset G_{A_i}.\end{equation*} Then inclusion (\ref{eqLemmaFact2}) is valid. It follows from Lemma~\ref{LemmaFact2} that $p$ and $q$ are in the same ${\mathcal S}_n$-orbit. \medskip \textbf{\underline{Case 2}}: Assume that for all $i \in \{1 ,\ldots , m-1\}$ we have \begin{equation*}G_{A_1} \cap \ldots \cap G_{A_{i-1}} \not\subset G_{A_i}.\end{equation*} We will only use these assumptions for $i\leq m -2$. Since \[ m-2 \geq m_0 - 1 \geq \lfloor \frac{n}{2} \rfloor,\] we can apply Lemma \ref{Lemma2OnPartitions} to $A_1,\ldots,A_{m-2}$ and obtain that \begin{equation*} \min(A_1 \sqcap \ldots \sqcap A_{m-2}) = 1.\end{equation*} Then Lemma~\ref{LemmaFact3} concludes the proof. \end{proof_of} \section{Minimal Separating Sets}\label{MinimalSeparating} This section deals with the question of \emph{minimality} with respect to inclusion of a separating set of ${\mathbb K}[V^m]^{{\mathcal S}_n}$ among all separating sets of ${\mathbb K}[V^m]^{{\mathcal S}_n}$. At first we show that the method of expansion behaves well in this regard when dealing with certain sets. 
We call a set of invariants $S \subset {\mathbb K}[V^m]^{{\mathcal S}_n}$ {\it elementary} if \begin{enumerate} \item[$\bullet$] any element of $S$ is equal to $\sigma_t(\un{k})$ for some $1\leq t\leq n$ and $\un{k}\in{\mathbb N}^m$; \item[$\bullet$] if $\sigma_t(\un{k})\in S$, then $\sigma_l(\un{k})\in S$ for all $1\leq l\leq t$. \end{enumerate} \medskip \begin{lem}\label{LemmaMinSep} Let $m_0,\,m$ be integers with $1 \leq m_0 \leq m$ and let $S \subset {\mathbb K}[V^{m_0}]^{{\mathcal S}_n}$ be an elementary set of invariants such that \begin{enumerate} \item[(a)] $S$ is a minimal separating set; \item[(b)] $\expan{S}{m}$ is a separating set; \item[(c)] if $\sigma_t(\un k) \in S$ and $k_i = 0$ for some $i\in\{1,\ldots,m_0\}$, then $\sigma_t(\un r) \in S$ for each $\un{r}\in{\mathbb N}^{m_0}$ from the list: $(0,k_1,\ldots,\widehat{k_i},\ldots,k_{m_0})$, $(k_1,0,k_2,\ldots,\widehat{k_i},\ldots,k_{m_0})$, $\ldots$, $(k_1,k_2,\ldots,\widehat{k_i},\ldots,k_{m_0},0)$. \end{enumerate} Then $\expan{S}{m}$ is also a minimal separating set. \end{lem} \begin{proof} We need to show that for every $F \in \expan{S}{m}$ the set $\expan{S}{m} \setminus \{F\}$ is \emph{not} separating. For such an $F$ by (\ref{DefExpansion}) there exists $f \in S$ and an $m$-admissible $\un j \in {\mathbb N}^{m_0}$ such that $F = f^{(\un j)}$. By assumption, $S \setminus \{f\}$ is not separating. So there exist \[ p = (a_1 ,\ldots , a_{m_0}) \quad \text{ and } \quad q = (b_1 ,\ldots , b_{m_0}) \in V^{m_0} \] with $f(p) \neq f(q)$ and $h(p) = h(q)$ for all $h \in S \setminus \{ f \}$. Consider \[ p'=(0,\ldots,0, \underbrace{a_1,}_{\text{position } j_1}0,\;\;\ldots\;\;,0, \underbrace{a_{m_0},}_{\text{position } j_{m_0}} 0, \ldots , 0) \in V^m\] and $q' \in V^m$ which is defined accordingly. Hence $$F(p') = f(p) \neq f(q) = F(q').$$ We finish the proof by showing \begin{equation*} H(p') = H(q')\text{ for all } H \in \expan{S}{m} \setminus \{F\}. \end{equation*} By (\ref{DefExpansion}) for any such $H$ there exist $h \in S$ and an $m$-admissible $\un l \in {\mathbb N}^{m_0}$ such that $H = h^{(\un l)}$. Since $S$ is elementary, $h = \sigma_t(\un k)$ with $t \in \{ 1 ,\ldots , n\}$ and $\un k \in {\mathbb N}^{m_0}$. Hence $H = \sigma_t(\un{k'})$ with \[ \un{k'} = (0,\ldots,0, \underbrace{k_1,}_{\text{position } l_1}0,\;\;\ldots\;\;,0, \underbrace{k_{m_0},}_{\text{position } l_{m_0}} 0, \ldots , 0)\in{\mathbb N}^m .\] If $\un{k'}$ has a non-zero entry at a position different from $j_1 ,\ldots , j_{m_0}$, then \[ H(p') = 0 = H(q')\] and we are done. Otherwise, $\un{k'}$ is zero at all positions $i \notin \{ j_1 ,\ldots , j_{m_0} \} \cap \{l_1 ,\ldots , l_{m_0} \}$. Let $\{k_{t_1},\ldots,k_{t_s}\}$ be the set of all non-zero elements of $\un{k}$, where $1\leq t_1<\cdots <t_s\leq m_0$ and $s\geq1$. Since $l_{t_1},\ldots, l_{t_{s}}\in \{ j_1 ,\ldots , j_{m_0}\}$, we set $l_{t_1}=j_{d_1},\ldots,l_{t_s}=j_{d_s}$ for some $1\leq d_1<\cdots< d_s\leq m_0$. Hence we can view $H$ as $H = \sigma_t(\un r)^{(\un j)}$ for \[ \un{r} = (0,\ldots,0, \underbrace{k_{t_1},}_{\text{position } d_1}0,\;\;\ldots\;\;,0, \underbrace{k_{t_s},}_{\text{position } d_s} 0, \ldots , 0)\in{\mathbb N}^{m_0} .\] Since $\sigma_t(\un k)\in S$, condition (c) implies that $\sigma_t(\un r) \in S$. The inequality $F \neq H$ implies $\sigma_t(\un r) \neq f$. It follows that $H(p') = \sigma_t(\un r)(p) = \sigma_t(\un r)(q) = H(q')$ and we are done. \end{proof} \medskip The following example shows that condition (c) from Lemma~\ref{LemmaMinSep} can not be removed. 
Namely, in case $\charakt({\mathbb K}) \neq 2$ and $n=2$ the set $$S = \{ \tr(1,0),\; \tr(2,0),\; \tr(0,1),\;\sigma_2(0,1),\;\tr(1,1) \}\subset {\mathbb K}[V^2]^{{\mathcal S}_2 }$$ is a minimal separating set and the set $S^{[3]}$ is separating, since we can apply the equality $2 \sigma_2(0,1) = \tr(0,1)^2 - \tr(0,2)$ to part (a) of Theorem~\ref{TheoremMainMin} (see below). Since invariants $h_1=\tr(0,1,0)$, $h_2=\tr(0,2,0)$, $h_3=\sigma_2(0,1,0)$ belong to $S^{[3]}$ and $2 h_3 = h_1^2- h_2$, the set $S^{[3]}$ is not a minimal separating set. In \cite[Theorem 2]{reimers2019} minimal separating sets for $m = 2$ and $n = 2,\,3,\,4$ were constructed in characteristic 0. \begin{remark}\label{RemarkNewExamples} Theorem~1 and Theorem~2 of \cite{reimers2019} remain true in the case of \linebreak$\charakt({\mathbb K}) > n$. \end{remark} Using our previous results we extend this to $m \geq 3$ in the following theorem. Note in particular, that for $n = 4$ and $m \geq 3$ we can not simply take the expansion of a separating set for $m = 2$. \medskip \begin{thm}\label{TheoremMainMin} Let $\charakt({\mathbb K}) = 0$ or $\charakt({\mathbb K}) > n$. Then for $n=2,\,3,\,4$ and $m \geq 2$ the set $T_{n,m}$ is a minimal separating set for the algebra of invariants ${\mathbb K}[V^{m}]^{{\mathcal S}_n}$, where \begin{enumerate}[label=(\alph*)] \item for $n=2$ let \begin{align*} &T_{2,2} = \{\tr(r,0),\tr(0,r),\tr(1,1) \mid r=1,2\}, \\ &T_{2,m}=\expan{T_{2,2}}{m} \quad \text{for all } m \geq 3; \end{align*} \item for $n=3$ let \begin{align*} &T_{3,2}=\{\tr(r,0),\tr(0,r),\tr(1,1),\tr(2,1) \mid r = 1,2,3\}, \\ &T_{3,m}=\expan{T_{3,2}}{m} \quad \text{ for all } m \geq 3;\end{align*} \item for $n=4$ let \begin{align*} &T_{4,2}=\{\tr(r,0),\tr(0,r),\tr(1,1),\tr(2,1),\tr(1,2),\tr(3,1)\mid r =1,2,3,4\},\\ &T_{4,3}=\expan{T_{4,2}}{3} \cup \{\tr(1,1,1)\},\\ &T_{4,m}=\expan{T_{4,3}}{m} \quad \text{for all $m \geq 4$.}\end{align*} \end{enumerate} \end{thm} \begin{proof} The case $m = 2$ follows from \cite[Theorem~2]{reimers2019} together with Remark~\ref{RemarkNewExamples}. For $n = 2$ and $n = 3$ we can take $m_0 = 2$ in Theorem~\ref{TheoremMain}. Since $T_{n,2}$ is separating, we have that $T_{n,m}$ is separating for all $m \geq 3$ by Theorem~\ref{TheoremMain}. The minimality of $T_{n,m}$ follows from Lemma~\ref{LemmaMinSep}. Assume $n = 4$. Let us show that \[ T_{4,3} \text{ is separating.}\] Consider $p = (a_1,\,a_2,\,a_3) \in V^3$ and $q \in V^3$ that can not be separated by $T_{4,3}$. Conditions~\ref{assump} are satisfied for $S := T_{4,2}$ and $T := T_{4,3}$, because $\widetilde{T_{4,3}} \supset T_{3,3}$ is separating. The combination of Lemma~\ref{LemmaFact0} and Remark~\ref{RemarkReplacePAndQ} allows us to assume that \begin{equation*} p = (a_1,\,a_2,\,a_3), \quad q = (a_1,\,a_2,\,c).\end{equation*} If for some $i \neq j$ with $i,\,j \in \{1,\,2,\,3\}$ we have $G_{a_i} \subset G_{a_j}$, we can apply Lemma~\ref{LemmaFact2} to obtain that $p$ and $q$ are in the same ${\mathcal S}_4$-orbit. Hence we can assume that for all $i \neq j$ we have $G_{a_i} \not\subset G_{a_j}$. Then in particular for all $i \in \{1,\,2,\,3\}$ we have $G_{a_i} \neq {\mathcal S}_4$ and $G_{a_i} \neq \{ \id \}$. The former means that not all coordinates of $a_i$ are equal, while the latter means that not all coordinates of $a_i$ are pairwise distinct. Therefore, for the corresponding partition $A_i := \parti(a_i)$ we know that $|A_i| \in \{ 2,\,3\}$ for all $i$. 
If $\min(A_i)=1$ for some $i$, then we can apply Lemma~\ref{LemmaFact3} to conclude that $p$ and $q$ are in the same ${\mathcal S}_4$-orbit. Thus the only remaining case is that for all $i \in \{ 1,\,2,\,3\}$ we have $|A_i| = 2$ and $\min(A_i)=2$. Then for each $i$ the vector $a_i \in {\mathbb K}^4$ takes exactly two distinct values, each in exactly two coordinates. After applying some permutation to $p$ and $q$ (see Remark~\ref{RemarkReplacePAndQ}) we can assume that \[ a_1 = \begin{pmatrix}\lambda_1 \\ \lambda_1 \\ \lambda_2 \\ \lambda_2 \end{pmatrix}, \quad a_2 = \begin{pmatrix}\mu_1\\ \mu_2 \\ \mu_1 \\ \mu_2 \end{pmatrix}, \quad a_3 = \begin{pmatrix}\nu_1\\ \nu_2 \\ \nu_2 \\ \nu_1 \end{pmatrix}\] with $\lambda_1,\,\lambda_2,\,\mu_1,\,\mu_2,\,\nu_1,\,\nu_2 \in {\mathbb K}$ such that \begin{equation*} \lambda_1\neq \lambda_2,\quad \mu_1\neq \mu_2, \quad \nu_1\neq \nu_2. \end{equation*} The equations $\tr(r,0,1)(p)=\tr(r,0,1)(q)$ with $r=0,1$ imply $c_1+c_2=c_3+c_4$. Similarly, $\tr(0,r,1)(p)=\tr(0,r,1)(q)$ with $r=0,1$ imply $c_1+c_3 = c_2+c_4$. Since $\charakt({\mathbb K})\neq2$, we obtain $c_3=c_2$ and $c_4=c_1$. If $c = a_3$, then $p = q$ and we are done. So let us assume $c \neq a_3$. Then the equations $\tr(0,0,r)(p) = \tr(0,0,r)(q)$ with $r = 1,2$ imply that \[ c = \begin{pmatrix}\nu_2\\ \nu_1 \\ \nu_1 \\ \nu_2 \end{pmatrix}.\] Since $\tr(1,1,1)$ belongs to $T_{4,3}$, we obtain \begin{align*} 0 &= \tr(1,1,1)(p) - \tr(1,1,1)(q) \\ &= \lambda_1 \mu_1 (\nu_1 - \nu_2) + \lambda_1 \mu_2 (\nu_2 - \nu_1) + \lambda_2 \mu_1 (\nu_2 - \nu_1) + \lambda_2 \mu_2 (\nu_1 - \nu_2) \\ &= (\nu_1 - \nu_2) (\mu_1 - \mu_2) (\lambda_1 - \lambda_2);\end{align*} a contradiction. This shows that $T_{4,3}$ is separating. Now let us show that $T_{4,3}$ is a minimal separating set. For $f \in T_{4,3}$ with $f \neq \tr(1,1,1)$ the minimality of $T_{4,2}$ implies that $T_{4,3} \setminus \{f \}$ is not separating. For $f = \tr(1,1,1)$ consider the vectors $p=(a_1,\,a_2,\,a_3)$ and $q=(a_1,\,a_2,\,c) \in V^3$ with \[ a_1 = \begin{pmatrix}1\\1\\2\\2\end{pmatrix}, \quad a_2= \begin{pmatrix}1\\2\\1\\2\end{pmatrix},\quad a_3= \begin{pmatrix}1\\2\\2\\1\end{pmatrix},\quad c= \begin{pmatrix}2\\1\\1\\2\end{pmatrix}.\] Here $\tr(1,1,1)(p) \neq \tr(1,1,1)(q)$, but $p$ and $q$ can not be separated by $T_{4,3} \setminus \{ \tr(1,1,1) \}$. So the theorem is proven for $n = 4$ and $m = 3$. Since for $n = 4$ we can take $m_0 = 3$ in Theorem~\ref{TheoremMain}, the set $T_{4,m}$ is separating for all $m \geq 4$. The minimality of $T_{4,m}$ follows from Lemma~\ref{LemmaMinSep}. \end{proof} \medskip Now we assume that $\charakt({\mathbb K}) = 0$ or $\charakt({\mathbb K})> n$ and compare the asymptotic sizes of multi-homogeneous generating and separating sets when $n$ is fixed and $m$ tends to infinity. Denote by $M_m$ the minimal generating set of ${\mathbb K}[V^m]^{\mathcal{S}_n}$ from (\ref{eqMinimalGen}) depending on $m$. Theorem~\ref{TheoremMain} allows us to use $M_{m_0}$, where $m_0 := \lfloor \frac n 2 \rfloor + 1$, to construct separating sets of ${\mathbb K}[V^m]^{{\mathcal S}_n}$ for $m > m_0$. We define \[ S_m := \begin{cases}M_m \quad &\text{ for } m \leq m_0, \\[1mm] \expan{M_{m_0}}{m} \quad &\text{ for } m> m_0. \end{cases} \] By Theorem~\ref{TheoremMain}, $S_m$ is a separating set for all $m$. For two functions $f,\,h:{\mathbb N}\to {\mathbb R}_{>0}$ we write $f\sim h$ if $\lim_{m\to\infty}\frac{f(m)}{h(m)}=1$. \medskip \begin{remark}\label{RemarkFraction} Let $\charakt({\mathbb K}) = 0$ or $\charakt({\mathbb K})> n$.
Then as functions of $m$ \[ \frac{|S_m|}{|M_m|}\sim \binom{n}{m_0} \frac{n!}{m_0!} \; \frac{1}{m^{n-m_0}}. \] In particular for $n \geq 3$, we have \[ \lim_{m \to \infty} \frac{|S_m|}{|M_m|} = 0.\] \end{remark} \bigskip \bibliographystyle{siam}
\section{Introduction} \label{sec.intro} Ad impressions sold through real-time bidding (RTB) auctions are responsible for an ever-increasing portion of company expenditures as well as the revenue of large ad providers like Google, Facebook, Amazon, Microsoft, and Yahoo!. In particular, Google is responsible for upwards of 50 billion ad impressions on average per day \cite{Tunuguntla_2021}, with a corresponding revenue on the order of \$100 million. While companies bidding in these advertising campaigns do not participate in every auction throughout the day, they are often required to participate enough to spend their allotted budget in this time. Thus, great interest is placed on adequately pacing budgets throughout the day to not spend too hastily at the beginning of a day and miss out on better impressions, or spend too frugally until the close of day. Our model here takes the perspective of a digital service provider (DSP) who is given a user's budget and target spend amount and must optimally pace their spending. The optimization process for RTB maximizes the key performance indicators (KPI) for an ad campaign subject to the budgeting constraints for each time period \cite{Cai_2017,Jauvion_2020,lobos2018optimal,Zhang_2014}. The goal of smoothly spreading a budget across a campaign is to sustain the influence of an advertiser across the broad spectrum of impressions with which they interact. The research literature on the problem instance of optimally spending budgets in their entirety is limited, and either excludes comprehensive analysis of the stability of a proper budget pacing mechanism in favor of simulating these results \cite{Cai_2017,lee2013,Xu_2015}, or does not impose the realistic constraints that are met in the advertising economy \cite{tapia_2015,lobos2018optimal}. As such, we bring an analytic approach in combination with simulated results to bridge the gap in the literature. \subsection{Main Ideas} \label{sec.intro_main} In this paper, we present an online approach to smoothly and \emph{uniformly} pacing an advertiser's budget over a fixed-length advertising campaign. Our approach invokes an iterative control feedback mechanism to estimate the proper \emph{average} bid to submit over each time period, which is later manipulated by a mechanism (potentially hidden to the bidder) to compute an ``actual spent amount'' for that period so that the total budget is approximately spent in a uniform fashion throughout the campaign. The algorithm relies on simplistic scaling of bids in response to learning of this latent mechanism in a naturalistic way, and is currently in implementation at two major companies with which the first author is affiliated. Specifically, we analytically derive the stability and convergence time of this natural learning algorithm that has analogously appeared in other online bidding literature \cite{lee2013,Zhang_2014}. Though simple, this algorithm exhibits low convergence times for the problem of budget pacing and possesses interesting dynamics in accordance with other one-dimensional mappings like the logistic map. \subsection{Organization} \label{sec.intro_org} The paper is organized as follows. In Section \ref{sec.back}, we discuss the budgeting problem at a high level and previous work in the field. In Section \ref{sec.opt}, we describe our model and give a comprehensive analysis of the dynamics exhibited.
In Section \ref{sec.simul}, we compare the theoretical insights with simulated results and further discuss the extensions of our implementation to capture different pacing schemes in Section \ref{sec.realworld}. Lastly, in Section \ref{sec.conc}, we conclude with a discussion of our methodology, its limitations, and where further work is needed. \section{Background and Related Work} \label{sec.back} In this section, we discuss the problem of budget pacing with the joint problem of bid optimization, and subsequently discuss the related work, noting where these results fall short of the reality of RTB schemes. \subsection{Problem Statement} \label{sec.back_prob} We consider an online bid optimization problem subject to budgeting constraints in the following way: throughout a given campaign, there are $T$ total auction periods being conducted sequentially (indexed by $t \in \{0,1,\ldots,T-1\}$). For example, these periods could correspond to each minute in a day, which would amount to $\approx 10,000$ auctions in each period. An advertiser must decide how much to bid on average for impressions during each period of the campaign, denoting their average bid as $b_t$ for the $t$-th period. It is important to note that for each impression, this base bid can be rescaled in a manner corresponding to the value of the impression for that advertiser. For example: as originally formulated in \cite{Perlich_2012}, the bid value of the $i$-th impression is rescaled by the probability of its conversion $p_i$, divided by the average conversion (denoted $\overline p$). This is often done for individual impressions of great interest to an advertiser; however, in our formulation, $b_t$ is submitted \textit{on average} in each period. We assume an advertiser has a budget $B > 0$ for the $T$ auctions. The key aspect of real-time bidding in a finite time period is that each advertiser is trying to both deliver their budget smoothly and spend it nearly entirely \cite{Agarwal_2014,Avadhanula_2021,Balseiro_2019,Balseiro_2021,Conitzer_2021,lee2013,Xu_2015}. The added complication of the smooth delivery problem we focus on in the present study is that oftentimes, when an advertiser submits a bid in an auction, this may not be the true value they pay for that impression. To reiterate this novel context, consider the simplistic instance where the advertising campaign is conducted using a first-price auction. The amount spent in each time period is thus the proportion of auctions won multiplied by the submitted bid; the underlying actual spent amount function is therefore a linear scaling of the submitted bids and generalizes the expenditure across hundreds of thousands of auctions. In response, the pacing of an advertiser's budget relies on learning the actual cost of submitted bids to adequately scale their bid values to meet desired spending goals in each period. We can now formally frame the budget pacing problem as: \begin{align} \textbf{minimize } &B - \sum_{t=1}^Ts_t \\ \textbf{subject to } &\left|\frac{B}{T} - s_t\right| \leq \epsilon \end{align} where the optimization is framed using the information for all the auctions; however, the problem itself is online. As a result, it is clear we need an algorithm that quickly learns the actual spend function $f(b)$ in a small portion of the total number of auctions and subsequently uniformly paces their bidding for the duration of the campaign.
Of crucial importance, we note that the spent amount function in each period implicitly contains information on the number of impressions won and overall surplus generated based on the value of impressions won, subject to the payment rules of whichever auction mechanism is implemented for those impressions. Lastly, it is important to note that the spent amount function is \emph{subject to change} at different times in a campaign and can be adjusted by the advertiser when manipulating their target spend amount for different times. For example, if the algorithm is being utilized to exhaust a daily budget, the advertiser may emphasize impressions in the morning and input a higher target spend amount for that portion of the day, while reducing this target for later times in the day. With slight modifications of our algorithm we can further capture these instances (see Section \ref{sec.realworld}). \subsection{Related Work} \label{sec.back_rel} Real-time bidding strategies have an expansive literature \cite{Celli_2021,Feng_2018,Geyik_2015,Karaca_2019,Nedelec_2019,Weed_2016}; however, the field of optimal budget pacing is relatively new and, as such, the literature thus far is often limited in scope \cite{Agarwal_2014,Balseiro_2019,Balseiro_2021,borgs_2007}. We here examine the methodologies of a select number of other studies whose work closely aligns with our own, the aspects of real-time bidding they encompass, and where they fall short. Most closely related to our work is that of Lee et al. \cite{lee2013}, which devises a comparable algorithm for pacing the budget while also maximizing the performance indicators of impressions. In particular, their algorithm scales bids between successive rounds to ensure smooth delivery of the budget without early exit. However, this paper maximizes impression value by estimating a threshold for advertisers to begin bidding. As a result, the algorithm bids more highly on the valuable impressions, but risks overspending and early exit from the campaign in these instances. Our algorithm instead ensures that agents are present throughout the entire campaign so that no impression is missed out on and guarantees sustained presence in an ad space. Additionally, the work of Lee et al. does not account for the real-world quotas that advertisers must meet, forcing them to spend their budgets nearly in their entirety. Our algorithm accounts for this constraint and utilizes it in the scaling of bid values. The work done by Xu et al. \cite{Xu_2015} also aligns well with our study, emphasizing smooth delivery of budget and minimizing the probability of early exit from a campaign to ensure a sustained presence for advertisers. When omitting performance goals, their algorithm is comparable to ours as it optimizes bidding by scaling bids based on the discrepancy between desired spending amount and the actuality. Additionally, they optimize over a fixed penalty function, similar to our latent spent function. However, their algorithm is not robust to changes in this penalty over time and fails to provide analytic results on the convergence and stability of this algorithm. Our study attempts to bridge the gap between the analytic and computational studies on optimal budget pacing. Lastly, the work of Xu et al. also fails to consider the real-world constraint of budgeting quotas. Fernandez-Tapia \cite{tapia_2015} examines the pacing problem from a different perspective to the above, deriving analytical results through variational calculus techniques.
This work compares the optimal bidding strategy when RTBs occur at a linear rate (fixed spacing between bids) and at a nonlinear rate. As most papers, including our own, make the simplifying assumption that auctions occur at such a high rate that we need not consider unequal arrival of the impressions to bid on, this more technical study highlights a more complex and realistic setting of auctions. However, while impressions may arrive in this more complicated fashion, advertisers do not in practice base their strategies around this assumption. As such, our algorithm is better suited for real-world implementation. The problem of finding an optimal bidding strategy for an advertiser who does not know their own valuation, subject to budgeting constraints, has also been studied as an extension of the multi-armed bandit problem as in \cite{Avadhanula_2021}. Avadhanula et al. provide bidding schemes over discrete and continuous bid spaces for $m$ platforms simultaneously with regret lower bounds of $\Omega \left( \sqrt{mOPT} \right)$ and $\Omega \left( m^{1/3}B^{2/3} \right)$ -- where $B$ is the budget and $OPT$ the performance of the optimal bidding strategy. This model, however, is limited in that the auction format is restricted to a second-price auction payout mechanism, whereas our approach provides a more general framework which can be applied to any auction mechanism. \section{Optimal Bidding} \label{sec.opt} In this section, we detail the algorithm for learning the latent spent function and uniformly pacing an advertiser's budget throughout the campaign. We first present the algorithm with the intuition behind its definition and proceed by analyzing the convergent dynamics and efficiency. We emphasize uniformly pacing an advertiser's budget, or target spend amount, so that all auction periods are weighted equally. The primary concern is to spend the entirety of their budget while staying active in the campaign for as long as possible. We define the metrics by which a budget pacing algorithm should be measured in order to assess its overall effectiveness in online real-time bidding systems. \begin{enumerate} \item Fast convergence to a stable bidding strategy relative to the overall campaign duration. \item Adaptive capacity for changes in the display auction mechanism (manipulated on the supplier end). \item Duration spent in the optimal bidding state before necessary deviation to fulfill budget quotas by the end of campaign time. \end{enumerate} Ensuring good performance metrics is complex in the online market due to the speed and frequency with which impressions are sold, yet we prove in this section that our natural learning algorithm performs well due in large part to its simplicity and efficient computation. \subsection{Pacing Algorithm} \label{sec.opt_algo} We begin by assuming an advertiser has two pieces of information: its budget, $B$, and the number of auction periods in the campaign, $T$ (i.e., auction periods per day). Intuitively, the advertiser who is trying to uniformly pace their budget will initially bid its average budget for the corresponding number of auctions, $b_0 = \frac{B}{nT}$ (where $n$ is the number of auctions per period). However, in general, the actual spent amount is much larger than the input bid and the convergence time remains low regardless of this initial bid selection, so any choice suffices.
Once an initial average bid is set, it is utilized for the entire first period ($t=0$), once again being scaled for each impression in accordance with their value, and the agent spends $s_0$ throughout this time. Following the initial bid, an advertiser now has the information of how much was actually spent in the first period and can assess the discrepancy between the \textit{desired} amount to be spent and the \textit{actual} amount. Let $B_r^t$ denote the remaining budget after period $t$ and $s_{t}$ the actual amount spent in this period; then we can define the scaling factor $\frac{B_r^t}{T-t} \cdot \frac{1}{s_{t}}$ as the ratio of the amount the advertiser wants to spend on average in each remaining auction period to how much it spent on the previous. Using this ratio as a budget pacing factor, we have the following iterative scheme: \begin{align*} b_{t+1} = \frac{B_r^t}{s_t (T-t)} \cdot b_t \end{align*} \begin{algorithm}[] \caption{Budget Smoothing} \label{alg:algorithm} \begin{algorithmic}[1] \STATE \textbf{Input}: $B, T, b_t, (s_0, ..., s_t)$ \\ \STATE \textbf{Output}: $b_{t+1}$ \STATE $B_r^t = B - \sum_{i=0}^t s_i$ \COMMENT{remaining budget after time $t$} \STATE $s_{\text{opt}} = \frac{B_r^t}{T-t}$ \COMMENT{optimal spend amount} \STATE $s_{\text{act}} = s_t$ \COMMENT{actual spend amount} \STATE $\alpha = \frac{s_{\text{opt}}}{s_{\text{act}}}$ \STATE \textbf{return} $\alpha \cdot b_t$ \end{algorithmic} \end{algorithm} \textsc{Algorithm} \ref{alg:algorithm} takes as input the budget ($B$), number of periods in a campaign ($T$), current period ($t$), the most recent bid value ($b_t$), and the history of previously spent amounts ($\{s_i\}_{i=0}^t$), outputting a new average bid value scaled based on this information. As we will see in the remainder of this section, this intuitive update algorithm converges to a fixed point that uniformly paces an advertiser's budget within a small fraction of the total auction time, as well as undergoing interesting bifurcations when parameters of the system are tuned. A brief summary of the results for latent spent functions in the family $f(b) = cb^k$, where $k \in \mathbb{R}$ and $c \in \mathbb{R}_{+}$, is provided in Table 1. We additionally reiterate that we here assume the budget $B$ is a fixed value; however, it can be adjusted by advertisers for different time periods while retaining the same convergent dynamics (see Section \ref{sec.realworld}). \begin{table}[t] \centering \caption{Summary of the dynamics of \textsc{Algorithm}~\ref{alg:algorithm} for latent spent functions of the form $f(b) = cb^k$.} \begin{tabular}{c|c} \textbf{Parameter Range} & \textbf{Dynamics} \\ \hline $k \leq 0$ & Unstable fixed point \\ \hline $k \in (0,1)$ & Stable fixed point with convergence time $\propto \frac{\ln(k)}{\ln(1-k)}$ \\ \hline $k = 1$ & Stable fixed point with convergence in one iteration \\ \hline $k \in (1,2)$ & Stable fixed point with convergence time $\propto \frac{\ln(2-k)}{\ln(k-1)}$ \\ \hline $k \geq 2$ & Instability requiring guard rails on spending \end{tabular} \end{table} In analyzing the efficiency of this learning algorithm, we consider various reasonable actual spent functions $f(b) = s$ and their divergent or convergent behavior to an optimal strategy. In the following analysis we consider only \textit{continuous} spent functions, and as such simplify our analysis by examining polynomial functions which can approximate any such function -- a consequence of the Stone-Weierstrass theorem \cite{rudin}.
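Before turning to the analysis of these spent functions, we note for concreteness that the update in \textsc{Algorithm}~\ref{alg:algorithm} transcribes directly into a few lines of Python. The sketch below is ours and purely illustrative (the function and variable names are not part of any production implementation); it simply applies the multiplicative correction $\alpha = s_{\text{opt}}/s_{\text{act}}$ to the previous average bid, with the period index inferred from the length of the spend history.
\begin{verbatim}
def budget_smoothing(B, T, b_t, spent):
    """One step of Algorithm 1 (sketch).

    B     : total budget for the campaign
    T     : number of auction periods
    b_t   : average bid submitted in the most recent period t
    spent : list [s_0, ..., s_t] of amounts actually spent so far
    """
    t = len(spent) - 1        # index of the most recent period
    B_r = B - sum(spent)      # remaining budget after period t
    s_opt = B_r / (T - t)     # desired spend per remaining period
    s_act = spent[-1]         # actual spend in period t
    alpha = s_opt / s_act     # pacing factor
    return alpha * b_t        # next average bid b_{t+1}
\end{verbatim}
A full campaign then alternates this update with an observation of the latent spent amount $s_{t+1} = f(b_{t+1})$ for the newly submitted average bid.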
\subsection{Linear and Semi-Linear Spent Function} \label{sec.opt_lin} In the first step of analyzing the natural learning algorithm, we consider the most simplistic form of the latent spent function: a \textit{linear} spent amount. We assume that $f(b_t) = cb_t$ for some $c > 0$: the situation in which the agent running the ad sale simply scales incoming bids by a fixed constant factor. We reiterate that such a simple scaling can effectively capture the dynamics of a repeated first-price auction system wherein the actual amount spent by a bidder is proportional to the number of auctions won in a given period. In this elementary case, we see that, for any initial bid placed, an advertiser adapts to the latent spent function in \textit{exactly} one iteration, bidding the optimal uniform amount for the remainder of the campaign. \begin{theorem} \label{thm:linear} For a linear spent function, $f(b) = cb$ where $c > 0$, and initial bid $b_0$, \textsc{Algorithm} \ref{alg:algorithm} converges to a fixed point bid value in exactly one iteration. \end{theorem} In many practical instances, it is also important to consider the \textit{semi-linear} spent function. We define the semi-linear spent function as $f(b_t) = \min\{cb_t,M\}$\footnote{Additionally, if we use the semi-linear function with a maximum rather than minimum, we see identical convergence rates.} where both $c,M > 0$. This is analogous to when the administration of the ad sales both scales an advertiser's bids by a fixed constant factor and also imposes ``guard rails'' on the spending: requiring the amount spent to be no greater than $M$. This is meant to protect against excessive spending by an advertiser. Despite this limitation on bids, we see that the algorithm still converges in one iteration under the semi-linear function when $M > \frac{B}{cT}$. \begin{theorem} \label{thm:semi_lin} For a semi-linear spent function, $f(b) = \min\{cb,M\}$ where $c > 0$, initial bid $b_0 < M$, and $M > \frac{B}{cT}$, \textsc{Algorithm} \ref{alg:algorithm} converges to a fixed point average bid value in exactly one iteration. \end{theorem} Additionally, for $M \gg 1$, the semi-linear function is identical to the linear case. As a result, proving Theorem \ref{thm:semi_lin} encompasses the result of Theorem \ref{thm:linear}. \begin{proof}[Theorem \ref{thm:semi_lin}] Assume $b_0 \neq b^*$: \begin{align*} b_1 &= \text{\textsc{Algorithm} 1}(B,T,1,b_0,s_0) \\ &= \frac{B - \min\{cb_0,M\}}{\min\{cb_0,M\}T} \cdot b_0 \\ &= \frac{B - cb_0}{cT} \\ b_2 &= \text{\textsc{Algorithm} 1}(B,T,2,b_1,(s_0,s_1)) \\ &= \frac{B - \min\{cb_0,M\} - \min\{cb_1,M\}}{\min\{cb_1,M\}(T-1)} \cdot b_1 \\ &= \frac{B - cb_0 - \min\{c \cdot \frac{B - cb_0}{cT},M\}}{\min\{c \cdot \frac{B - cb_0}{cT},M\}(T-1)} \cdot \frac{B - cb_0}{cT} \\ &= \frac{B - cb_0}{cT} \\ &= b_1 \end{align*} \end{proof} \begin{proof}[Theorem \ref{thm:linear}] Using the result of Theorem \ref{thm:semi_lin} we take the limit as $M \rightarrow \infty$, thus giving $\min\{cb,M\} = cb$ with convergence in one iteration. \end{proof} Before proceeding to give theoretical bounds on the convergence times of our algorithm for the cases where we do not see one-iteration convergence, we must first define the ``general spent function'' and assess the fixed points of our system. \subsection{General Spent Function} \label{sec.opt_gen} In the most complex case, we consider a general spent function of the form $f(b_t) = cb_t^k$ where $k \neq 1$. Here we identify the fixed point of the system and assess its stability.
\begin{theorem} \label{thm:stable} For $k \in (0,2)$, Algorithm 1 has exactly one stable fixed point for spent amount functions of the form $f(b) = cb^k$ where $c > 0$. \end{theorem} \begin{proof} The fixed point of our system occurs when the output of Algorithm 1 is equal to the input $b_t$. Solving this equation yields: \begin{gather*} b^* = \left(\frac{B - \sum_{i=0}^{t-1}cb_i^k}{c(T-t+1)}\right)^{1/k} \tag{3.3.1} \label{eq:fpt} \end{gather*} Note that, in general, if $b_0$ is not a fixed point, then the fixed point for period $t$ is given by the formula above. If instead $b_0$ is a fixed point, the system will thus bid this value on average for the entire campaign and the point has the simpler formula given by the following: \begin{gather*} b^* = \left(\frac{B}{c(T+1)}\right)^{1/k} \tag{3.3.2} \label{eq:fp1} \end{gather*} which is the special case of \eqref{eq:fpt} where $t=0$. As such it suffices to examine the more general formula to encompass all results. In order to prove stability of the above fixed points, we linearize around the point $b^*$ and assess whether the point is attracting or repelling by examining the derivative at that point \cite{strogatz:1994}. If we let $|\lambda| = |g'(b^*)|$ denote the magnitude of the derivative of the update map $g$ at the fixed point, then we have linear stability when $|\lambda| < 1$ and instability when $|\lambda| > 1$. Now we can analyze our system's fixed point. First we differentiate the function in \textsc{Algorithm} \ref{alg:algorithm} to get the following: \begin{gather*} \frac{d}{db_t}\left(\frac{B-\sum_{i=1}^tcb_i^k}{cb_t^k(T-t)} \cdot b_t\right) \\ = \frac{-c + (1-k)b_t^{-k}(B-\sum_{i=1}^{t-1}cb_i^k)}{c(T-t)} \tag{3.3.3} \label{eq:deriv} \end{gather*} Plugging in the fixed point from \eqref{eq:fpt} gives the following stability estimate \begin{align*} |\lambda| = \left|1 - k\left(\frac{T-t+1}{T-t}\right)\right| \approx |1-k| \text{ for } t \ll T \tag{3.3.4} \label{eq:stab} \end{align*} Thus for $0 < k < 2$, the system is in general attracted to the fixed point defined in \eqref{eq:fpt}, and the point is unstable outside of this parameter range. \end{proof} \subsection{Convergence Time Upper Bound} \label{sec.opt_lb} The crucial result of this paper is a bound on the convergence time of \textsc{Algorithm} \ref{alg:algorithm}. In contrast, recent papers examining optimal online budget pacing schemes have not analytically derived an estimate of the convergence time to these optimal strategies \cite{lee2013}. \begin{theorem} \label{thm:conv} For $|1 - k| < 1$, \textsc{Algorithm} \ref{alg:algorithm} has a bounded distance from the stable bid value at time $t$ defined by: \begin{align*} \epsilon := |b_t - b^*| \leq \gamma^{-1/k} \cdot \frac{|1-k|^{t - 1 + \frac{1}{k}}}{1-|1-k|} \end{align*} \textit{where $\gamma = \frac{cT}{B}$. Subsequently, we have an upper bound on the convergence time to the stable bid value given by:} \begin{align*} t \leq \frac{k-1}{k} + \frac{\ln\left|\epsilon \gamma^{1/k} (1 - |1-k|)\right|}{\ln|1-k|} \tag{3.4.1} \label{eq:conv} \end{align*} \end{theorem} \noindent We can see that the convergence time is dependent upon the parameters of our RTB campaign, namely the budget, length, and degree of the latent spent function. In order to prove the theorem, we first need a crucial lemma from the theory of metric spaces, proven in \cite{fixpt_thrm}.
\begin{lemma}[Banach Fixed Point Theorem] \label{lemma:banach} Assume that $\Omega \subset \mathbb{R}^n$ is closed, and that $G: \Omega \mapsto \Omega$ is a contraction, that is, there exists $0 \leq L < 1$ such that \begin{align*} \|G(x) - G(y)\| \leq L\|x - y\|, \end{align*} \textit{for all $x,y \in \Omega$, where $\|\cdot\|$ is any norm on $\mathbb{R}^n$. Then the function $G$ has a unique fixed point $x^* \in \Omega$. Additionally, let $x_0 \in \Omega$ be arbitrary and define the sequence $\{x_t\}_{t=1}^{\infty} \subset \Omega$ by $x_t = G(x_{t-1})$. Then we have the estimate} \begin{align*} \|x_{t} - x^*\| &\leq \frac{L^t}{1-L}\|x_1 - x_0\| \end{align*} In particular, the sequence $\{x_t\}_{t=1}^{\infty}$ converges to $x^*$. \end{lemma} Invoking this well-studied result of Banach requires only that the mapping is contractive. We henceforth proceed in proving the theorem by demonstrating this crucial property of \textsc{Algorithm} 1, and applying Lemma \ref{lemma:banach}. \vspace{-2mm} \begin{proof}[Theorem \ref{thm:conv}] To invoke Lemma \ref{lemma:banach} we need to estimate an adequate Lipschitz constant $L$ and put a proper bound on the distance between two points in the sequence defined by our algorithm. We first note that \begin{gather*} |G(x) - G(y)| \leq L|x - y| \\ \Rightarrow \frac{|G(x) - G(y)|}{|x - y|} \leq L \leq |G'(x^*)| \end{gather*} where $x^* \in (x,y)$ gives us an easily computable bound on the sequence when $|G'(x)| < 1$. Note that we now use the absolute value rather than other norms since our algorithm uses scalar valued functions. Stemming from our analysis in Section \ref{sec.opt_gen}, a Lipschitz bound for the dynamics when $0 < k < 2$ is $L = |1-k|$. The remaining component needed to apply Lemma \ref{lemma:banach} is a bound on the distance $|b_1 - b_0|$. For any $b_0$ that is not the fixed point, we can look for the maximal distance to $b_0$ by taking the derivative of $b_1 - b_0$ and setting it equal to 0. Assume without loss of generality that $b_1 - b_0 > 0$. \begin{align*} \frac{d}{db_0}|b_1 - b_0| &= \frac{d}{db_0}\left|\frac{B-cb_0^k}{cb_0^k(T-1)} \cdot b_0 - b_0\right| \\ &= \frac{-c + |1-k|b_0^{-k}B}{c(T-1)} - 1 \end{align*} Setting the derivative equal to 0 and finding the maximal $b_0$ gives \begin{align*} b_0 = \left(\frac{B|1-k|}{cT}\right)^{1/k} \end{align*} which gives the distance value \begin{align*} |b_1 - b_0| = \left|\frac{T-|1-k|}{|1-k| \cdot (T-1)} \cdot \left(\frac{B|1-k|}{cT}\right)^{1/k}\right| \tag{3.4.2} \label{eq:maxdis} \end{align*} Now plugging in our Lipschitz constant and maximal distance from \eqref{eq:maxdis} to the estimate from Lemma \ref{lemma:banach} yields \begin{align*} |b_t - b^*| &\leq \frac{|1-k|^{t-1}}{1-|1-k|} \cdot \left|\frac{T-|1-k|}{T-1} \cdot \left(\frac{B|1-k|}{cT}\right)^{1/k}\right| \\ &\approx \frac{|1-k|^{t-1}}{1-|1-k|} \cdot \left|\left(\frac{B|1-k|}{cT}\right)^{1/k}\right| \end{align*} Lastly, we can define $\gamma := \frac{cT}{B}$ to get the simplified estimate \begin{align*} |b_t - b^*| \leq \gamma^{-1/k} \cdot \frac{|1-k|^{t - 1 + \frac{1}{k}}}{1-|1-k|} \end{align*} and finally, we can rearrange this inequality to give us a bound on the convergence time: \begin{align*} t &\leq \frac{k-1}{k} + \frac{\ln\left|\gamma^{1/k} (1 - |1-k|) |b_t-b^*| \right|}{\ln|1-k|} \\ &= \frac{k-1}{k} + \frac{\ln\left|\epsilon \gamma^{1/k} (1 - |1-k|)\right|}{\ln|1-k|} \end{align*} thus, we have the result.
\end{proof} Lastly, it is important to note that up to this point, we have only considered latent spent functions which are monomial (consisting of one term). This is a simplifying assumption which is justified by bounding polynomial cost functions with monomials, detailed in the following lemma (the proof can be found in Appendix A). \begin{lemma} \label{lemma:bounded} For a spent function defined using the general form $c_1b^{k_1} + c_2b^{k_2} + \ldots + c_mb^{k_m}$ where $k_1 > k_2 > \ldots > k_m$ and $b > 0$, the convergence time of \textsc{Algorithm} \ref{alg:algorithm} is bounded by that of monomial functions. \end{lemma} \subsection{Loss of Stability} \label{sec.opt_insta} As noted in Section \ref{sec.opt_gen}, when $k > 2$ the fixed point for average bidding becomes unstable. As a result, \textsc{Algorithm} \ref{alg:algorithm} does not converge for a spent function of quadratic or higher degree. In this instance, however, we can once again utilize guard rails on spending to obtain convergence to cyclic bidding behavior. The following theorem, proven in Appendix A, demonstrates the importance of guard rails on spending to ensure adequate budget pacing for advertisers adapting to the RTB auction mechanism. The proof invokes the same strategy as Theorem \ref{thm:linear}, where we now find values $b_0$ which are fixed points of \textsc{Algorithm} \ref{alg:algorithm} applied twice. \begin{theorem} \label{thm:chaos} For a spent function of the form $\min\{cb^k,M\}$ where $c >0$ and $\frac{B}{cT} < M < B$, \textsc{Algorithm} \ref{alg:algorithm} undergoes a period doubling bifurcation at $k = 2$, leading to oscillatory behavior between two average bid values for each iteration $t$. \end{theorem} This period doubling is further indicative of a canonical path to chaos \cite{strogatz:1994} and as such a loss of stability for our system. \section{Simulation Results} \label{sec.simul} In this section we validate our model against simulated results for various different parameter values. These experiments are meant to illustrate the empirical behavior of our simplified model, as we are limited in the use of real-world data. We first examine the stable region where the spent function $f(b) = b^k$ has degree $k < 2$ and compute the fast convergence time. We subsequently analyze the instability that arises for spent functions where $k \geq 2$, demonstrating that the system undergoes a bifurcation to produce a 2-cycle, followed by period doubling en route to chaotic behavior. \subsection{Stable Region Simulation} \label{sec.simul_stab} Advertisers are given an arbitrary initial average bid for the first period, and subsequently bid based on the optimal bidding algorithm. We henceforth set the budget for all simulations at $B = 50000$, $T = 1000$, and an amount spent function defined as $f(b) = \min\{b^k,100\}$. Additionally, we once again make the simplifying assumption that the spent functions are monomials, allowing us to study the dynamics exclusively as a function of the degree $k$. We first consider $k = 0.5$; evaluating \eqref{eq:conv} with the given parameter set gives an upper bound on the iterations to convergence of $t \leq 31$. In simulating the model we see convergence within an error tolerance $\epsilon = 10^{-6}$ in 23 iterations, approximately 3\% of the total auction period, with 99.90\% of the budget spent. For $k = 1$, we see convergence in exactly one iteration, as predicted, with 99.90\% of the budget spent.
Lastly, at $k=1.4$, we predict the time to convergence to be at most $t = 19$ and in simulation we see convergence in 18 iterations with 99.87\% of the budget spent. Thus, our theoretical results for the stable region are further validated in simulation. \subsection{Path to Chaos} \label{sec.simul_chaos} We here show the interesting dynamics that arise when $k \geq 2$. At $k=2$ we have the birth of 2-cycles in the dynamics, never converging on a singular fixed point but remaining oscillatory (as demonstrated in Section \ref{sec.opt_insta}). As we further increase the parameter, higher order cycles emerge until the system devolves into aperiodic behavior, more formally known as chaos \cite{strogatz:1994}. The results of this parameter tuning align with the bifurcations exhibited by the logistic map ($f(x) = rx(1-x)$) \cite{strogatz:1994}. Figure 1 shows the dynamics of the average bidding on each iteration as we increase $k$. As expected, for $k < 2$, we have quick convergence to a stable average bidding strategy, but, as we see, for $k > 2$ periodic behavior emerges. We plot the behavior for $k=2.3$ and observe the oscillatory behavior from the period doubling bifurcations after $k = 2$. To further emphasize the bifurcation, we generate first and second iterate maps for our algorithm: these plots show the output of \textsc{Algorithm} \ref{alg:algorithm} after applying it once (first iterate) and the output of applying the algorithm twice (second iterate). We plot these two curves with the line $y=x$, or the line of fixed points. The points at which the first and second iterate maps intersect this line are thus the fixed points of the mappings. As we see in Figure 1, the first iterate map always has exactly one intersection with $y=x$, the general fixed point from Section \ref{sec.opt_gen}. We also see that the second iterate map always shares this intersection point; however, once $k > 2$, the curve intersects $y=x$ three times (the three points found in the proof of Theorem \ref{thm:chaos}) aligning with the periodic behavior. \begin{figure} \centering \includegraphics[scale=0.5]{figure_draft2h-01.png} \caption{\label{fig:sub2} Average bid value in the first 10\% of the total campaign time and iterate maps for three different $k$-values. Row 1 plots the average bid for each period of the campaign for $k = 0.5, 1.4$, and 2.3 respectively. The second row plots the corresponding first iterate maps (solid red line), second iterate map (dashed blue line) and $y=x$ (solid black line).} \end{figure} \section{Real-World Implementation} \label{sec.realworld} The algorithm presented is easy to implement and can be specifically tailored to a wide range of applications. As noted previously, slight variants on the presented algorithm are used at several companies which tailor the bidding to tertiary interests that we discuss in this section.\footnote{Variations of the bid pacing algorithm presented in this paper are currently implemented at two major companies with which the first author was previously involved. Due to proprietary data restrictions at these companies, we focus mainly on important theoretical aspects of the algorithm along with simulation results in this paper.} We focus on two specific variations: fluctuating target spend amounts and subthreshold budgets; however, the simplicity of our algorithm allows for a plethora of adjustments for specific needs.
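To make the ease of implementation concrete, and as a common reference point for the two variations below, the following driver is a minimal simulation sketch of ours: it assumes the \texttt{budget\_smoothing} function sketched in Section~\ref{sec.opt_algo} and the guard-railed monomial spent function used in the simulations of Section~\ref{sec.simul}; all parameter values are illustrative. Both variations below only change the budget value that is handed to this update.
\begin{verbatim}
def run_campaign(B=50000.0, T=1000, c=1.0, k=0.5, M=100.0, b0=None):
    """Simulate a campaign against the latent spent function
    f(b) = min(c * b**k, M); returns the bid history and total spend."""
    b = B / T if b0 is None else b0        # naive initial average bid
    spent, bids = [], []
    for t in range(T):
        s = min(c * b ** k, M)             # amount actually spent in period t
        spent.append(s)
        bids.append(b)
        remaining = B - sum(spent)
        if t == T - 1 or remaining <= 0:   # campaign over or early exit
            break
        b = budget_smoothing(B, T, b, spent)
    return bids, sum(spent)

# For k = 0.5 the bid settles within a few tens of periods and roughly
# 99.9% of the budget is spent, consistent with Section 4.1; for k > 2
# the bid instead oscillates between two values, as described in
# Section 4.2.
bids, total = run_campaign(k=0.5)
\end{verbatim}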
\subsection{Non-Uniform Pacing} While our main analysis demonstrates the stability of average bidding for the desired goal of uniform pacing, the algorithm is easily adapted to meet changing target spend amounts throughout the campaign. For instance, if an advertiser wants to decrease spending during the morning since the bulk of their target market is not online, they may decrease their target spend amount until later in the day. A variant of this form is easily achieved by adjusting the submitted budget for any given time period(s). This is most easily done by adding another input parameter $\kappa \in (0,1]$ which scales the budget by the multiplier at any given time point. Pseudocode for such a variation on our algorithm is presented in Algorithm \ref{alg:algorithm_nonu}. We note that in spite of this tailoring to a specific need, the convergence properties that were theoretically and experimentally justified remain. \begin{algorithm}[H] \caption{Non-Uniform Pacing} \label{alg:algorithm_nonu} \begin{algorithmic}[1] \STATE \textbf{Input}: $B, T, b_t, (s_0, ..., s_t), \kappa$ \\ \STATE{$B_v \leftarrow \kappa \cdot B$} \\ \STATE{$b_{t+1} = $ \textsc{Algorithm 1($B_v, T, b_t, (s_0, ..., s_t)$)}} \STATE \textbf{Output}: $b_{t+1}$ \end{algorithmic} \end{algorithm} \subsection{Subthreshold Budgets} In the instance where users submit a budget to the DSP that is too small for meaningful uniform pacing, the provider can implement a variant on our algorithm for forcibly exhausting a budget early in the campaign period, thus resulting in the user needing to update with a larger budget for the next campaign.\footnote{We once again note that this is common practice at certain companies, in an effort to force users to meet a minimum budget threshold.} This is achieved by implementing a ``virtual budget'' where the DSP simply scales up the input budget and thus the algorithm establishes a higher average bid than is feasible for the user's actual budget, leading to early exit from the campaign. While this outcome is counter to the presented practicality of the algorithm, it is an effective means by which a provider can ensure that advertisers are submitting a high enough budget to be competitive in the market space. This variant's pseudocode is presented in Algorithm \ref{alg:algorithm_sub}. We reiterate that although the \emph{intent} of this adapted algorithm is not in line with the primary justification for our presented results, it nonetheless retains the convergence properties of Section \ref{sec.opt}. \begin{algorithm}[H] \caption{Subthreshold Budget} \label{alg:algorithm_sub} \begin{algorithmic}[1] \STATE \textbf{Input}: $B, T, b_t, (s_0, ..., s_t)$ \\ \IF{$B < \tau$ for some threshold $\tau$} \STATE{$B_v \leftarrow \sigma \cdot B$ for some $\sigma > 1$ fixed by the provider} \\ \STATE{$b_{t+1} =$ \textsc{Algorithm 1($B_v, T, b_t, (s_0, ..., s_t)$)}} \ELSE \STATE{$b_{t+1} =$ \textsc{Algorithm 1($B, T, b_t, (s_0, ..., s_t)$)}} \ENDIF \STATE \textbf{Output}: $b_{t+1}$ \end{algorithmic} \end{algorithm} \section{Conclusion} \label{sec.conc} Bid optimization has become the central issue of budget pacing in online RTB problems. Our work has presented and analyzed a natural learning algorithm that ensures fast convergence times to optimal bids for uniform pacing of an advertiser's budget. The algorithm is additionally robust to fluctuations in the underlying mechanism of the display ad sales, in the form of a latent spent function.
The efficiency of our algorithm is demonstrated both analytically and numerically through simulations, and these results support the feasibility of its implementation in real-world systems. Additionally, the algorithm does not rely on a synthesis of neural learning systems or more complex parameter estimation: it simply learns optimal behavior by comparing performance in the current iteration to the previous one \cite{Perlich_2012}. Future work should validate our model and results against real-world data and examine different families of spent amount functions. \bibliographystyle{plain}
\section{Introduction} \label{sect:intro} 3D-spectrographs (3DS), or Integral Field Units (IFUs), exist at many observatories, providing spectra for a large number of spatial elements (``spaxels'') within a 2-dimensional field-of-view, rather than only along a traditional 1-dimensional spectrograph slit. Depending on the instrument, hundreds or even thousands of spectra are recorded simultaneously in a single exposure. While the instrumentation suite is diverse and based on various principles of operation (image slicers, lens-arrays, fiber-bundles or combinations of these), compromises with respect to field-of-view, spatial sampling, wavelength coverage, and spectral resolution have to be made due to the limited detector space. Since its commissioning in May 2001, the Astrophysical Institute Potsdam (AIP) has successfully operated PMAS, the Potsdam Multi-Aperture Spectrophotometer, at the Calar Alto 3.5~m Telescope in southern Spain (Roth et al.\ 2004, Kelz et al.\ 2003a). An overall description of the PMAS instrument is given by Roth et al.\ (2005), hereafter referred to as paper~I. While PMAS is a unique spectrophotometer, covering a wide wavelength range from 350~nm to 900~nm, its standard~IFU, a fiber-coupled lens-array, provides 256 spectra and is limited to a maximum integral field-of-view of 16$\times$16 arcseconds on the sky. Driven by the ``Disk Mass'' project (Verheijen et al.\ 2004), which requires imaging spectroscopy of nearby face-on galaxies (with typical sizes of 1 arcminute) at intermediate spectral resolution (of R$\ge$8000), a science case was put forward to develop a larger IFU for PMAS. Based on the experience with the SparsePak bundle, which was constructed and commissioned for the 3.5m WIYN telescope at Kitt Peak (Bershady et al.\ 2004, 2005), the PPak (PMAS fiber Package) fiber bundle was designed and built at the AIP in 2003 as part of the ULTROS project (ultra-deep optical spectroscopy with PMAS). This new IFU was produced on a short timescale of approximately six months and with a budget of less than 20,000 Euro for the hardware components (mainly lenses, filters and fibers). PPak was successfully integrated into PMAS in December 2003 and commissioned in spring 2004. The PPak-mode of PMAS is now fully operational and routinely employed for the Disk Mass project, as well as for a variety of other common-user programmes that require a large integral field. \begin{table*} \tabletypesize{\small} \caption{Selected IFU instrumental parameters with corresponding spectral capabilities. \label{tab:IFUs}} \begin{tabular}{lcccrccrcc} \tableline \tableline Instrument & Telescope & FoV & Spaxel & Spaxel & Filling & Range\tablenotemark{4}& Resol.\tablenotemark{4} &\multicolumn{2}{c}{Grasp} \\ & diameter & (max.)
& size\tablenotemark{1} & number\tablenotemark{2} & factor\tablenotemark{3} & ($\lambda_{cov}$) & ($\lambda/\Delta \lambda$) & specific\tablenotemark{5} & total\tablenotemark{6} \\ & [m] & [arcsec] & [arcsec] & & & [\AA] & &[arcsec$^2$m$^2$]&[arcmin$^2$m$^2$]\\ \tableline PMAS-PPak{$^7$} & CA \hfill 3.5 & 74$\times$64 & 2.68 $\oslash$ & 331+36 & 0.60 & 400 & 8000 & 47\phantom{.4} & 4.23 \\ SparsePak & WIYN \hfill 3.5 & 72$\times$71 & 4.69 $\oslash$ &75+\phantom{3}7& 0.25 & 260 & 12000 & 138\phantom{.4} & 2.87 \\ DensePak & WIYN \hfill 3.5 & 45$\times$30 & 2.81 $\oslash$ &91+\phantom{3}4& 0.42 & 260 & 20000 & 49\phantom{.4} & 1.25 \\ INTEGRAL & WHT \hfill 4.2 & 34$\times$29 & 2.70 $\oslash$ & 115+20 & 0.67 & 360 & 4200 & 73\phantom{.4} & 2.32 \\ \tableline VIMOS & VLT \hfill 8.2 & 54$\times$54 & 0.67$\times$0.67 & 6400 & 1.00 & 350 & 220 & 23\phantom{.4} & 40.50\phantom{7} \\ SAURON & WHT \hfill 4.2 & 41$\times$33 & 0.94$\times$0.94 & 1577 & 1.00 & 540 & 1250 & 11\phantom{.4} & 4.77 \\ SPIRAL{$^8$} & AAT \hfill 3.9 & 22$\times$11 & 0.7 $\times$0.7 & 512 & 1.00 & 330 & 7600 & 5.4 & 0.77 \\ PMAS-LARR{$^9$} & CA \hfill 3.5 & 16$\times$16 & 1.0 $\times$1.0 & 256 & 1.00 & 700 & 6000 & 8.2 & 0.58 \\ OASIS & WHT \hfill 4.2 & 17$\times$12 & 0.42 hex. & $\sim$1100 & 1.00 & 370 & 2650 & 2.3 & 0.71 \\ GMOS & Gemini~ \hfill 8.1 & 7$\times$5 & 0.2 hex. & 1000+500 & 1.00 & 280 & 1700 & 1.8 & 0.49 \\ \tableline \\ \end{tabular} \small{$^1$}{~value corresponding to the max. FoV; may depend on fore-optics magnification} \\ \small{$^2$}{~dedicated sky-spaxels are listed separately}\\ \small{$^3$}{~bare fiber bundles (upper part); lens-array-types (lower part)} \\ \small{$^4$}{~selected values only; may span wide range depending on configuration and wavelength} \\ \small{$^5$}{~specific grasp = telescope area [m$^2$] $\times$ spaxel size [arcsec$^2$]} \\ \small{$^6$}{~total grasp = telescope area [m$^2$] $\times$ spaxel size [arcsec$^2$] $\times$ number of spaxels } \\ \small{$^7$}{~PPak-IFU and PMAS spectrograph with 2nd order gratings} \\ \small{$^8$}{~SPIRAL with Littrow spectrograph (de-commissioned)} \\ \small{$^9$}{~Lens-array IFU and PMAS spectrograph with 1st order gratings} \\ \end{table*} As a bare fiber-bundle IFU, PPak is based on earlier developments, such as DensePak (Barden \& Wade 1988), INTEGRAL (Arribas et al.\ 1998) and SparsePak (Bershady et al.\ 2004). Similar to the above instruments, PPak opts for rather large fibers that cannot properly sample the (seeing-limited) image but collect more flux and allow for wide fields. PPak and SparsePak span 74$\times$64 and 72$\times$71 arcseconds on the sky, respectively, and provide the largest fields-of-view of any IFU available worldwide (see Table~\ref{tab:IFUs}). Additionally, a single PPak fiber, with 5.7~arcsec$^2$ on the sky, collects twice as much light at the 3.5m telescope as a single spatial element of the VIMOS-IFU (LeFevre et al.\ 2003) at the 8.2m-VLT (see specific grasp in Table~\ref{tab:IFUs}). PPak is attached to the efficient PMAS fiber-spectrograph, which provides resolutions R=$\lambda/\Delta \lambda$ from 800 to 8000, or corresponding spectral powers (defined as the product of the number of spectral resolution elements and the resolution, N$_{\Delta \lambda} \times \lambda/\Delta \lambda$) between 10$^6$ and 10$^7$. The PPak fibers and optics are optimized for wavelengths between 400 and 900~nm.
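To make the grasp columns of Table~\ref{tab:IFUs} concrete, the following short Python sketch evaluates the two definitions given in the table notes for PPak. The effective collecting area of the 3.5~m telescope (aperture minus central obstruction) is not quoted in the text, so the $\approx$8.3~m$^2$ used here is an assumption, and the results therefore only reproduce the tabulated values approximately.
\begin{verbatim}
import math

A_TEL    = 8.3     # effective collecting area of the 3.5 m telescope [m^2] (assumed)
D_SPAXEL = 2.68    # PPak spaxel (fiber) diameter on the sky [arcsec]
N_SPAXEL = 331     # object fibers in the central bundle

spaxel_area    = math.pi * (D_SPAXEL / 2.0) ** 2       # ~5.6 arcsec^2
specific_grasp = A_TEL * spaxel_area                   # ~47 arcsec^2 m^2
total_grasp    = specific_grasp * N_SPAXEL / 3600.0    # ~4.3 arcmin^2 m^2
\end{verbatim}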
\begin{figure*} \epsscale{2.0} \plotone{Figure1-07.eps} \caption{The principle of operation for the two PMAS IFUs at the Cassegrain focal station. The LARR-IFU is on-axis and consists of fore-optics, a lens-array (LARR) and a fiber-bundle (dashed lines). The PPak-IFU is located off-axis and features a focal reducer lens (FORED) and a bare fiber-bundle. A dedicated PPak calibration unit can illuminate additional fibers (dashed-dotted lines). Both bundles connect to one spectrograph (solid outline). The acquisition and guiding system (dotted outline) can be used by both modes (see text for further explanation).} \label{fig:ppak_principle} \epsscale{1.0} \end{figure*} Therefore, PPak is ideally suited for spectroscopic studies of extended astronomical objects with low surface brightness, such as the outskirts of spiral galaxies, where sufficient signal is more important than detailed spatial resolution. The PMAS+PPak configuration offers a powerful combination of light-collecting power or grasp, wavelength range, and spectral resolution. \\ This paper is organized as follows: \S~\ref{sect:design} presents the instrument and its opto-mechanical design. \S~\ref{sect:mai} summarizes the manufacture, assembly, and integration of the PPak components. \S~\ref{sect:operations} describes the operational procedure during observation, the data reduction, and visualization tools. The instrument performance at the telescope and test results are given in \S~\ref{sect:performance}. \section{Instrument Description} \label{sect:design} The baseline parameters for the PPak development were to provide a contiguously sampled field-of-view of at least 1~arcminute across, with high specific grasp per spaxel and adequate resolution at the spectrograph. PPak had to be designed as an unforeseen add-on module to the existing, Cassegrain-mounted PMAS instrument; therefore, certain space constraints dictated the overall design. Likewise, the PMAS spectrograph hardware and performance had to be taken as given. The existing PMAS grating set (see table~2 of paper~I) includes gratings with 300, 600 and 1200 l/mm, of which two can be used in the 2nd spectral order. Taking advantage of anamorphic demagnification (see section \ref{sect:2ndorder}), spectral resolutions of R$\sim$8000 can be achieved with the I1200 and J1200 gratings, if the width of the pseudo-slit of the PMAS spectrograph does not exceed 150~microns, which in turn limits the fiber core diameter. Around 400 of these fibers can be accommodated at the spectrograph slit with acceptable separations and cross-talk. The PMAS spectrograph accepts an F/3 beam, which implies that, allowing for some focal ratio degradation, the fibers can be fed with up to F/3.3 in the focal plane, which sets the plate scale and fiber grasp. The purchase of gratings with even higher line density, or of holographic reflection gratings, was not considered at the time of the PPak development. Fig.~\ref{fig:ppak_principle} illustrates the principle of operation for both the pre-existing lens-array IFU and the new PPak IFU: for the lens-array mode, the telescope focal plane image is magnified by a fore-optics (dashed box)~(Roth et al.\ 2003), then spatially sampled by a lens-array and re-configured by an optical fiber module (Kelz et al.\ 2003b). PPak, on the other hand, is equipped with a focal reducer lens (FORED) in front of the telescope's focal plane, which maximizes the field-of-view and provides the required plate scale and f-number.
The subsequent fiber bundle (solid line) by-passes the fore-optics and lens-array and bridges the distance of around 3~meters to the spectrograph. Additional fibers (dash-dotted) connect the spectrograph to a dedicated calibration unit. At the spectrograph entrance, the fibers from both the lens-array and the PPak-IFU form two parallel slits, of which only one is active during observing. The PPak-IFU is placed 6 arcminutes off-axis, as not to obstruct the existing field for the direct imaging camera. This allows to use the A\&G optics (dotted box) for target acquisition and guiding in both the PPak- and lens-array-IFU mode. The various components are described in more detail in the following subsections. \subsection{Focal Reducer Lens} \label{sect:FORED_design} \begin{figure*}[ht!] \centerline{ \epsscale{1.7} \plotone{Figure2-05.eps}} \caption{Ray tracing of the focal reducer system. The original telescope focal plane with its F/10 rays is shown to the right (dotted lines). The focal reducer converts the rays to F/3.3 (solid lines) and creates a focal plane 6~mm after the last lens. (optical design by U.L.)} \label{fig:fored_ray} \end{figure*} \begin{figure*} [ht!] \plotone{Figure3-03.eps} \caption{Spot diagrams at the focal plane of the focal reducer system at design wavelengths of: 405, 436, 480, 546, 643, 852~nm, and for a polychromatic spot. The top and bottom rows represent on-axis and 3.5~mm off-axis spots respectively. The box size is 100~$\mu$m, corresponding to 1.8~arcseconds, which is smaller than the diameter of a fiber core.} \label{fig:spotdiagrams} \epsscale{1.0} \end{figure*} The focal reducer lens (FORED) immediately in front of the fiber bundle reduces the focal length of the Calar Alto 3.5m telescope from 35000~mm to 11550~mm, changes the plate-scale from $5''.89$ per mm to $17''.85$ per mm, and converts the telescope F-number from F/10 to F/3.3. The focal reducer is a 4-lens system, consisting of a triplet and a thick singlet lens (see Fig.~\ref{fig:fored_ray}), which creates the required telecentric exit rays. The system has four glass-to-air interfaces, which are treated with anti-reflective coatings. The individual lenses of the triplet are made of LF5, BaK1, LF5, respectively, while the fourth lens is made of BaK1. The thickness of the triplet is 28~mm, that of the single lens 64~mm, while their diameters are 50~mm. The lenses are optimized for a wavelength interval between 400 and 850 nm, i.e. neglecting the blue part of the spectrum that otherwise is accessible with the PMAS spectrograph. The image quality, that the focal reducer provides, was optimized to match the fiber size of 150 $\mu$m and does not deteriorate the point-spread-function beyond the typical seeing (see spot diagrams in Fig.~\ref{fig:spotdiagrams}). As the light is coupled to optical fibers, telecentric exit rays are required. The focal reducer lens creates a telecentric field of 7 mm (=125 arcsec) in diameter. Additionally, the system can be placed up to 65~mm off-axis, which is necessary due to space constraints within the existing PMAS instrument. The off-axis position of the focal reducer causes a small telecentric offset of 0.25~degree. While in principle this offset could be compensated completely with a tilt correction of the entire fiber array, it was considered a small and negligible change in terms of the incoming F-number (see Fig.~12 of Bershady et al. 2004). 
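The plate scale and fiber footprint quoted above follow directly from the focal lengths; the following minimal Python check uses only numbers given in the text (206265 is the radian-to-arcsecond conversion factor).
\begin{verbatim}
F_TEL = 35000.0   # native focal length of the 3.5 m telescope [mm]
F_RED = 11550.0   # reduced focal length behind the FORED [mm]
D_TEL = 3500.0    # telescope aperture [mm]
CORE  = 0.150     # fiber core diameter [mm]

def plate_scale(f_mm):
    # on-sky scale in arcsec per mm for a system of focal length f_mm
    return 206265.0 / f_mm

print(plate_scale(F_TEL))           # ~5.89 arcsec/mm at F/10
print(plate_scale(F_RED))           # ~17.86 arcsec/mm behind the focal reducer
print(F_RED / D_TEL)                # ~3.3, the reduced focal ratio
print(CORE * plate_scale(F_RED))    # ~2.68 arcsec per 150 micron fiber core
\end{verbatim}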
A common, compact and stiff mount for the focal reducer and the PPak-IFU was designed in order to make sure that the fiber bundle is firmly attached to the focal reducer image plane, which is located 6~mm behind the lens (see Fig.~\ref{fig:CAD_FORED_mount}). This mount also allows the insertion of bandpass or interference filters (with diameters of 50~mm or 2~inch) in front of the lens. Any mechanical flexure due to a changing gravity vector was calculated to be 4~$\mu$m ($\approx$ 0$''$.1) in the worst case. In the spatial direction, this is much smaller than the fiber sampling size (of 150~$\mu$m) or effects caused by seeing (0$''$.5 $\approx$ 30~$\mu$m). In terms of focal accuracy, this amount of flexure is negligible. Note that, due to the limited pointing accuracy of the telescope, it is practically impossible to measure flexure effects of this magnitude, if they are present at all. \begin{figure}[ht!] \begin{center} \plotone{Figure4-03.eps} \caption{Three CAD views of the mount that holds the focal reducer lenses, the fiber head and optional filters. Due to space constraints, the fibers are bent by 90$^\circ$ below the IFU and exit sideways. A flexure analysis of the mount stability vs. inclination yields a maximum deformation of 4~$\mu$m at the top part with respect to the fixed mount plate. (mechanical design by S.M.B.)} \vspace{5mm} \label{fig:CAD_FORED_mount} \end{center} \end{figure} \subsection{PPak IFU} \begin{figure}[h] \plotone{Figure5-03.eps} \caption{Layout and dimensions of the PPak-IFU. The central hexagon is made up of 331 object fibers, surrounded by six sky-IFUs. Note that only the white circles represent active fibers, while the black ones are protective buffers. Each circle represents the combined fiber core, cladding and buffer material. While the physical size of the central IFU is just 4 mm, its coverage on the sky is more than 1 minute of arc.} \label{fig:PPak_size} \end{figure} The final PPak design features 331 fibers in a densest-packed hexagonal grid with a maximum diameter of 74$^{\prime\prime}$, while each fiber projects to 2$^{\prime\prime}$.68 in diameter on the sky. The fiber-to-fiber pitch is 3.6 arcseconds. The projected fiber area of 5.7 arcsec$^2$ is comparable to that of DensePak or INTEGRAL, but smaller than that of SparsePak. However, the larger number of fibers gives the observer much freedom to apply adaptive binning of spaxels to increase the signal-to-noise ratio. An additional 36 fibers are distributed amongst six `mini-IFUs' placed 72$^{\prime\prime}$ away from the center to sample the surrounding sky (see Fig.~\ref{fig:PPak_size}). Not shown are 15 extra fibers that are not part of the IFU but are connected to a calibration unit. These calibration fibers can be illuminated with light from spectral line lamps during the science exposures. This provides a synchronous spectral calibration and keeps track of any image shifts at the spectrograph detector (see section 4.9 of paper~I). A fair number of additional fibers (shown in black) are placed around the active (white) fibers for protective buffering and to avoid increased FRD edge-effects (see Figs.~8 and 9 in Bershady et al.\ 2004) for the outer science fibers. These buffer fibers have a length of $\sim$ 70~mm and terminate inside the mount, just below the IFU head. Table~\ref{tab:PPak_fibers} gives a summary of the total fiber breakdown.
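As a rough cross-check of this layout, the quoted field dimensions follow from the fiber count and the on-sky pitch alone. The short Python sketch below assumes an ideal densest-packed hexagonal bundle with the 3.6~arcsecond pitch quoted above (the real bundle is slightly irregular, as described in \S~\ref{sect:mai}).
\begin{verbatim}
import math

N_FIBERS = 331     # object fibers in the central bundle
PITCH    = 3.6     # fiber-to-fiber distance on the sky [arcsec]
D_FIBER  = 2.68    # projected fiber core diameter [arcsec]

# An ideal densest-packed hexagon with n rings holds 3n(n+1)+1 fibers.
n_rings = int(round((-1 + math.sqrt(1 + 4 * (N_FIBERS - 1) / 3)) / 2))   # -> 10

corner_to_corner = 2 * n_rings * PITCH + D_FIBER              # ~74.7 arcsec
flat_to_flat     = n_rings * PITCH * math.sqrt(3) + D_FIBER   # ~65.0 arcsec
fiber_footprint  = math.pi * (D_FIBER / 2) ** 2               # ~5.6 arcsec^2
\end{verbatim}
Both extents agree with the quoted 74$\times$64 arcsecond field to within about one arcsecond.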
\begin{table}[ht] \caption{Breakdown of total number of fibers for PPak.} \begin{center} \begin{tabular}{rrrr} \tableline \tableline \# Fibers & $\circ$ Active & $\bullet$ Buffer & $\circ+\bullet$ Total \\ \tableline object & 331 & 216 & 547 \\ sky & 36 & 186 & 222 \\ calibration & 15 & 22 & 37 \\ \tableline total & 382 & 424 & 806 \\ \tableline \end{tabular} \end{center} \label{tab:PPak_fibers} \end{table} \begin{figure}[h] \plotone{Figure6-03.ps} \caption{Internal fiber transmission plot for 4m of FIP fiber as supplied by the manufacturer Polymicro Technologies LLC. PPak uses 3.5m long fibers of this series, with an NA=0.22, a core diameter of 150$\mu$m and a core/cladding ratio of 1:1.1. No additional transmission measurements were done within this project. Note that end losses or losses from FRD effects are not included in the above data.} \label{fig:fiber-trans} \end{figure} \subsection{Fiber Slit} The output ends of the fibers are placed side-by-side to form a long fiber-slit. For practical reasons and to assist data reduction, the overall fiber-slit is divided into 12 blocks (called slitlets). A slitlet is 7.5~mm wide and features 32 v-grooves with a spacing of 0.234~mm (see Fig.~\ref{fig:fibslit}, right). Given a spectrograph magnification of 0.6 and 15~$\mu$m pixels, a fiber core projects to 6 pixels at the detector, with a pitch of 9.4 pixels. The chosen spacing is a trade-off between minimizing cross-talk (which favors wide separations) and limiting the overall slit length (which suffers from edge vignetting). Contrary to other IFUs that are add-on units to existing spectrographs, PMAS features a dedicated fiber-spectrograph (see paper~I). The spectrograph collimator is an f=450~mm, F/3 system, and therefore the optics accept the whole fiber output cone without the need for additional beam-converting micro-lenses. The spectrograph optics require a curved fiber-slit that directly couples to the first lens of the collimator. The fibers are mounted parallel to each other, but terminate at different lengths, to form a curved focal surface (see Fig.~\ref{fig:fibslit}, left). \subsection{Slit to Sky Mapping} \begin{figure*}[ht!] \epsscale{1.5} \plotone{Figure7.ps} \caption{In the focal plane, five main segments can be identified which map to distinct regions along the slit (see Tab.~\ref{tab:PPak_slit}). North is up and east is left. The orientation is fixed on the sky as the instrument cannot be rotated.} \label{fig:PPak_layout} \epsscale{1.0} \end{figure*} The focal plane geometry of the PPak IFU is largely determined by the Disk Mass project. The arrangement of the fibers in the focal plane, and how they map to the slit, follows a quasi-random scheme. The central IFU can be divided into five main segments (see Fig.~\ref{fig:PPak_layout}). The central segment, with fibers numbered 148 to 184, maps to the central part of the slit. The fibers in the two intermediate segments of the IFU (1 to 66 for the northern part, and 266 to 331 for the southern part) map to the edges of the slit. The fibers in the two outer IFU segments (67 to 147 for the northern part, and 185 to 265 for the southern part) terminate halfway between the center and the edges of the slit (see Tab.~\ref{tab:PPak_slit}). A reason for this arrangement was that, for typical targets such as galaxies, the surface brightness falls off rapidly away from the center. As the outer fibers carry the weaker signals, any additional vignetting or aberration effects, which occur predominantly at the edge of the slit, should be avoided.
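As an aside, the detector-plane numbers quoted in the Fiber Slit subsection above follow directly from the fiber geometry and the spectrograph magnification; a two-line Python check, using only values given in the text:
\begin{verbatim}
CORE_UM, PITCH_UM = 150.0, 234.0   # fiber core diameter and v-groove pitch [micron]
MAG, PIXEL_UM     = 0.6, 15.0      # spectrograph magnification and CCD pixel [micron]
print(CORE_UM * MAG / PIXEL_UM)    # -> 6.0 pixels per fiber image
print(PITCH_UM * MAG / PIXEL_UM)   # -> 9.36, i.e. ~9.4 pixels fiber-to-fiber
\end{verbatim}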
Care was taken that fibers which are adjacent in the focal plane are well separated along the slit. The aim was to minimize any systematic effects that purely depend on the location of fibers within the spectrograph. Note that this is an opposite philosophy to other instrument layouts, such as SPIRAL (Lee \& Taylor 2000), where adjacent fibers on the sky remain adjacent at the slit. Also note that the central row of the IFU, starting in the east with fiber number 185 and ending in the west with number 92, contains fibers from all 5 segments. This implies that drifting a star across the central row produces 21 spectra distributed over the entire CCD, nearly sampling the full range of optical paths through the spectrograph. Each slitlet carries 3 sky fibers. Those 3 sky fibers are distributed over three non-adjacent sky-IFUs. In this way, each triplet of sky fibers on a slitlet spans a triangular area on the sky, which surrounds the main IFU. In other words, the sky is sampled symmetrically around the object and is well distributed within the slit, again to avoid any instrumental biases. This scheme results in 36 sky-fibers altogether, which is twice the `optimum' number of dedicated sky fibers as calculated by Bershady et al.\ (2004, Fig.~3), but helps to limit systematic errors in the sky subtraction. \begin{table}[h] \setlength{\tabcolsep}{1mm} \caption{The location of object ({1--331}), sky ({\bf S1--S36}), \& calib\-ration ({\bf C1--C15}) fibers and gaps ($\times$) along the slit (1,1$\rightarrow$12,32). Underlined numbers indicate a new IFU segment.} {\footnotesize \begin{center} \begin{tabular}{c|cccccccccccc} \tableline \tableline Groove~~~&\multicolumn{12}{c}{Slitlet number} \\ number~~~& 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\ \tableline 1 &{\bf C1}&{\bf C3}&{\bf C4}&{\bf C5}&{\bf C6}&{\bf C7}&{\bf C9}&{\bf C10}&{\bf C11}&{\bf C12}&{\bf C13}&{\bf C14}\\ 2 &{\bf C2}& 27 & 55 & 83 & 111 & 139 & 166 & 194 & 222 & 250 & 278 & 306 \\ 3 & $\times$& 28 & 56 & 84 & 112 & 140 & 167 & 195 & 223 & 251 & 279 & 307 \\ 4 & 1 & 29 & 57 & 85 & 113 & 141 & 168 & 196 & 224 & 252 & 280 & 308 \\ 5 & 2 & 30 & 58 & 86 & 114 & 142 & 169 & 197 & 225 & 253 & 281 & 309 \\ 6 &{\bf S1}&{\bf S4}&{\bf S7}&{\bf S10}&{\bf S13}&{\bf S16}&{\bf S19}&{\bf S22}&{\bf S25}&{\bf S28}&{\bf S31}&{\bf S34}\\ 7 & 3 & 31 & 59 & 87 & 115 & 143 & 170 & 198 & 226 & 254 & 282 & 310 \\ 8 & 4 & 32 & 60 & 88 & 116 & 144 & 171 & 199 & 227 & 255 & 283 & 311 \\ 9 & 5 & 33 & 61 & 89 & 117 & 145 & 172 & 200 & 228 & 256 & 284 & 312 \\ 10 & 6 & 34 & 62 & 90 & 118 & 146 & 173 & 201 & 229 & 257 & 285 & 313 \\ 11 & 7 & 35 & 63 & 91 & 119 &\underline{147}&174& 202 & 230 & 258 & 286 & 314 \\ 12 & 8 & 36 & 64 & 92 & 120 & 148 & 175 & 203 & 231 & 259 & 287 & 315 \\ 13 & 9 & 37 & 65 & 93 & 121 & 149 & 176 & 204 & 232 & 260 & 288 & 316 \\ 14 & 10 & 38 &\underline{66}& 94 & 122 & 150 & 177 & 205 & 233 & 261 & 289 & 317 \\ 15 & 11 & 39 & 67 & 95 & 123 & 151 & 178 & 206 & 234 & 262 & 290 & 318 \\ 16 & 12 & 40 & 68 & 96 & 124 & 152 & 179 & 207 & 235 & 263 & 291 & 319 \\ 17 &{\bf S2}&{\bf S5}&{\bf S8}&{\bf S11}&{\bf S14}&{\bf S17}&{\bf S20}&{\bf S23}&{\bf S26}&{\bf S29}&{\bf S32}&{\bf S35}\\ 18 & 13 & 41 & 69 & 97 & 125 & 153 & 180 & 208 & 236 & 264 & 292 & 320 \\ 19 & 14 & 42 & 70 & 98 & 126 & 154 & 181 & 209 & 237 &\underline{265}&293& 321 \\ 20 & 15 & 43 & 71 & 99 & 127 & 155 & 182 & 210 & 238 & 266 & 294 & 322 \\ 21 & 16 & 44 & 72 & 100 & 128 & 156 & 183 & 211 & 239 & 267 & 295 & 323 \\ 22 & 17 & 45 & 73 & 101 & 129 & 157
&\underline{184}&212& 240 & 268 & 296 & 324 \\ 23 & 18 & 46 & 74 & 102 & 130 & 158 & 185 & 213 & 241 & 269 & 297 & 325 \\ 24 & 19 & 47 & 75 & 103 & 131 & 159 & 186 & 214 & 242 & 270 & 298 & 326 \\ 25 & 20 & 48 & 76 & 104 & 132 & 160 & 187 & 215 & 243 & 271 & 299 & 327 \\ 26 & 21 & 49 & 77 & 105 & 133 & 161 & 188 & 216 & 244 & 272 & 300 & 328 \\ 27 & 22 & 50 & 78 & 106 & 134 & 162 & 189 & 217 & 245 & 273 & 301 & 329 \\ 28 &{\bf S3}&{\bf S6}&{\bf S9}&{\bf S12}&{\bf S15}&{\bf S18}&{\bf S21}&{\bf S24}&{\bf S27}&{\bf S30}&{\bf S33}&{\bf S36}\\ 29 & 23 & 51 & 79 & 107 & 135 & 163 & 190 & 218 & 246 & 274 & 302 & 330 \\ 30 & 24 & 52 & 80 & 108 & 136 & 164 & 191 & 219 & 247 & 275 & 303 & 331 \\ 31 & 25 & 53 & 81 & 109 & 137 & 165 & 192 & 220 & 248 & 276 & 304 & $\times$\\ 32 & 26 & 54 & 82 & 110 & 138 &{\bf C8}& 193 & 221 & 249 & 277 & 305 &{\bf C15}\\ \tableline \end{tabular} \end{center} } \label{tab:PPak_slit} \end{table} \subsection{Fiber Loop Box} Stress on optical fibers increases the focal ratio degradation (FRD) with consequences for the overall optical system performance (see Barden 1998, Parry \& Carrasco 1990, Ramsey 1988, Schmoll et al.\ 2003, and references therein). To minimize FRD, and therefore the loss of information, the fibers are inserted into protective, friction-free 3-layer furcation tubings, and only bent with tolerable radii. Roughly 2~m behind the IFU and some 40~cm in front of the fiber-slit, the protective tubing is interrupted, to allow the fibers to form a loop. The fiber loops are placed inside an enclosed box of 30 $\times$ 30~cm, where they are being kept in groups of 32 and placed into separate sections, divided by Teflon sheets (see Fig.~\ref{fig:loopbox}). The loop box serves two functions: Any pull on a fiber results in a change of the individual loop diameter, which avoids the fibers from being torn. Secondly, the loops provide a reservoir of extra fiber length, which is needed during assembly and integration. Note, that both the IFU and the spectrograph are mounted at the Cassegrain station and remain fixed with respect to each other. Therefore, no stress-relief cabling, as for bench-mounted instruments with long ($>$10m) fiber length is required. \subsection{PPak Calibration Unit} Fifteen fibers that are distributed along the fiber-slit are diverted from the rest of the fiber-bundle, as their input ends are not placed at the telescope focal plane but connected to the PPak Calibration Unit (PPCU). This unit is made from standard {\em OWIS} laboratory equipment and consists of five liquid light guides ({\em Lumatec}, Germany), a white diffuser screen, a relay lens and the calibration fibers themselves (see Fig.~\ref{fig:PPCU}). Four liquid light guides illuminate the diffuser screen with light from the various calibration lamps (such as Halogen continuum, Mercury, Neon and Thorium/Argon). A fraction of the light is picked-up by the relay lens and focused onto the calibration fibers, with the same F-number as for the science fibers, and with an object distance at infinity. As the lamps are placed within electronic boxes and feature individual shutters, any combination of calibration light (and respective exposure times) can be fed into the calibration unit. The calibration fibers can be illuminated separately from or simultaneously with the object fibers, allowing the observer flexibility with respect to calibration strategy and needs. 
Finally, a change of lamps or a swap to a spare lamp is easily done by re-connecting the light guide(s), without any changes to the calibration unit itself. \begin{center} \begin{figure}[ht!] \plotone{Figure8-03.ps} \caption{The PPak Calibration Unit: Liquid-light-guides illuminate a white diffuser screen, from which the light reflects backwards. A relay lens couples the light with the correct F-number into the calibration fibers. An optional filter can be inserted. Up to 5 light-guides offer the possibility to combine the light from different lamps.} \label{fig:PPCU} \end{figure} \end{center} \section{Manufacture, assembly and integration} \label{sect:mai} \subsection{Lens Optics} The fabrication of the focal reducer lens was contracted to {\em Pr\"azisionsoptik Gera}, (Germany), based on specifications from the optical design calculations by U.L. (see \S~\ref{sect:FORED_design}). These calculations were repeated as soon as the glasses had been procured from {\em Schott}, (Germany), and the index of refraction had been measured at the design wavelengths in order to optimize the design. Note, that due to its thickness, the fourth lens was made from two pieces. The individual lenses were cemented using optical compound K57, produced by {\em Carl Zeiss Jena}. All glass--air surfaces were treated with a Balzers broad-band AR coating, yielding a transmission of $>$98\% across the design wavelength range. The first lens of the spectrograph collimator (diameter D=100~mm, curvature R=218mm), to which the fiber-slit connects, was produced by {\em Carl Zeiss Jena} (Germany). \subsection{Fiber Cables} After tests in the AIP laboratories, silica/silica, step-index fibers with core/cladding/buffer diameters of 150/165/195 microns, low OH core and NA=0.22 of the series FIP150165195 from {\em Polymicro Technologies Inc.} (USA) were selected (see Fig.~\ref{fig:fiber-trans}). The fiber bundle was manufactured as follows: Firstly, the fibers were cut to a length of 3.5~m and one end was polished manually (smallest grain size=0.3 $\mu$m). The reason for polishing the exit surfaces at this early stage was, that the assembled fiber-slit is curved and therefore difficult to polish. At the input end, the fibers were assembled into the IFU head first and polished afterwards. Copying parts of the SPIRAL-B design (Lee \& Taylor 2000), a three-layer polypropylene-KEVLAR-PVC furcation tube from {\em Northern Lights Cable} (USA) was cut to lengths and fitted with connector screws on both ends. Altogether 367 fibers were inserted into 12 protective tubings, each tube carrying the object and sky fibers from one slitlet. The 15 calibration fibers were put into an additional tube. \subsection{Fiber-Slit Assembly} \begin{center} \begin{figure}[h] \plotone{Figure9-02.eps} \caption{Left: photo of the fiber-slit unit with all 382 fibers and 13 cables in place. Note the overall curvature of the slit. Right: a magnified view of an individual slitlet (out of twelve) with 32~fibers (width = 7.6~mm, fiber-spacing = 0.234~mm)} \label{fig:fibslit} \end{figure} \end{center} \vspace{-7mm} The polished ends of the fibers were glued onto the 12 slitlets, using EPO-TEK 301-2 non-shrinking two-component epoxy from {\em Polytek}. Each slitlet typically holds 28 object fibers, 3 sky fibers and 1 calibration fiber (see Tab.~\ref{tab:PPak_slit} for details). The outermost calibration fibers ({\bf C1} and {\bf C2} at one end, and {\bf C15} at the other end of the slit) are separated from the object fibers by one empty groove. 
Three sky fibers (from 3 different sky-IFUs) are distributed uniformly amongst the 28 object fibers. To ensure a correct and repeatable fiber alignment, an assembly jig was produced that holds both the fibers and the slitlet in place and allows an accurate end termination of each fiber against a dummy surface with the correct curvature. After the correct alignment was verified visually, the fibers were clamped temporarily and then glued onto the slitlet block. Given that the v-grooves are 100 times longer than a fiber diameter, and that manufacturing tolerances of $<$0.01~mm were achieved, the alignment and the end positioning of the fibers was accurate to within a few microns. Altogether, twelve slitlets, each carrying 31 or 32 fibers, were mounted side-by-side on a common stage, creating a fiber-slit of 94~mm length in total. The slitlets can be moved and locked individually. In this way, any length variations between slitlets are irrelevant, as each slitlet can be brought forward until the fibers touch the collimator lens surface. The overall unit includes mounts for the cables, the fiber-slit and the collimator lens as well as protective covers (Fig.~\ref{fig:fibslit}, left). The fibers were repeatedly cleaned with methanol and water in an ultra-sonic bath prior to and after the assembly, to remove any contamination from the surfaces. The quality was controlled by inspections of the fiber ends using a video microscope. Illumination of individual fibers yielded their positions within each slitlet, and the fibers were labeled according to a pre-defined position table (see Fig.~\ref{fig:PPak_layout}). \subsection{Fiber-Head Assembly} At the input (i.e. the IFU) side, the individually labeled object and buffer fibers were ordered and pre-assembled row-by-row on a piece of sticky tape. No additional glue was applied at this stage. A mount was manufactured at the AIP workshop that features a central hexagonal opening, with precision steps to aid the correct fiber alignment. The milling precision was on the order of 1/100~mm (=5\% of a fiber diameter), which was more than adequate to ensure that the fibers form a dense-packed arrangement. The main IFU was built up by inserting each row of fibers (27 rows in total) into the hexagonal mount, with the fibers extending $\approx$5~mm beyond the mount surface. For practical reasons, this was done separately for the two halves (see Fig.~\ref{fig:IFUbuilt}), which were put together and locked mechanically thereafter. In fact, the correct alignment was limited not by the precision of the mount, but by the accumulated size tolerances of the large number of fibers. The maximum deviation from a regular hexagonal grid occurs for one outermost row and was measured to be of the order of 10\% of a fiber diameter, corresponding to a misalignment of approximately 0.3~arcsec. \begin{center} \begin{figure}[h!] \plotone{Figure10-03.ps} \caption{Top and left: The assembly of the IFU was done row-by-row for each half separately; the halves were then merged into a single mount. Bottom right: Image of the PPak fiber head taken with a $\times 20$ magnification to determine the exact fiber positions. The surrounding rows of darker fibers are inactive, but protect and buffer the central science and sky fibers.} \label{fig:IFUbuilt} \end{figure} \end{center} \vspace{-8mm} Six additional circular drillings around the central opening accommodate the sky-fiber IFUs. The assembly of the sky-IFUs proceeded similarly, following the same row-by-row approach.
Each sky-IFU consists of 6 sky and 31 short buffer fibers (Fig.~\ref{fig:PPak_size}), which form a densest pack of 7 fibers across. This was inserted into a steel ferrule with a matching inner diameter of 1.4~mm. Subsequently, the ferrules were glued into the drillings of the mount (Fig.~\ref{fig:IFUpolish}, left). After assembly, the IFU-mount was pointed downward and the extending fibers were immersed in a bath of epoxy, which worked its way upward in-between the fibers and through the mount by means of capillary force. In this way, the fibers are glued together and to the metal mount without introducing additional stress. After the epoxy had cured, the entire fiber head, including the six sky-IFUs, was polished using a custom-made polishing stage (as described in Kelz et al.\ 2004). Polishing sheets from {\em Newport} and {\em Data Optics}, featuring grain sizes from 30 to 0.3~$\mu$m, were used. The surface quality was inspected regularly during the polishing process using a video microscope with 4--16 times magnification (Fig.~\ref{fig:IFUpolish}, right). This allowed the projection of highly magnified fiber images onto a monitor. In conjunction with a variety of viewing angles and illuminations, scratches on the order of 1~micron were visible and could be polished out. The end requirements were to have no obvious surface defects, such as partial breakages of core or cladding material, no scratches larger than 1\% of the fiber diameter ($\leq$1.5~$\mu$m), and a visually flat and perpendicular endface. \begin{center} \begin{figure}[h] \plotone{Figure11-03.ps} \caption{Left: Magnified view of the assembled IFU (including sky-IFUs) before polishing. Right: a video inspection image with $\times$ 8 magnification of the polished fibers.} \label{fig:IFUpolish} \end{figure} \end{center} \vspace{-8mm} \subsection{Integration of PPak into PMAS} Optical gel (code 0406, n=1.46) from {\em Cargille Laboratory} was applied between the spectrograph collimator lens and the fiber-slit(s) to match the refractive indices. The loop-box was filled and closed (see Fig.~\ref{fig:loopbox}), and all 13 PPak-cables inserted into a common flexure tube. Inspections revealed that no fibers were broken or damaged during the process of manufacture, assembly and integration. \begin{center} \begin{figure}[h] \plotone{Figure12-03.ps} \caption{Left: photo of the fiber-loop box, showing how a section of bare fibers is looped and inserted into a compartment. Right: photo of the fully filled, but still open loop box. The cables to the left connect to the slit, the ones to the right to the IFUs. 16 cables contain the 256 LARR-fibers, 12 cables the 367 PPak-fibers, one cable the 15 calibration fibers.} \label{fig:loopbox} \end{figure} \end{center} During the construction and initial commissioning, the PPak bundle was a single entity from end to end. This implied that the existing lens-array fiber module needed to be dismounted from the PMAS instrument to make space for the PPak module. As this is a time-consuming and potentially hazardous undertaking, both fiber modules, i.e. the fiber-slits and loop-boxes, were merged into a single unit (called double-IFU) in October 2004. The double-slit consists of two parallel rows, featuring the 256 lens-array fibers and the 382 PPak fibers (see Fig.~\ref{fig:DIFU}). The spacing between the slits is approximately 2~mm. The fore-optics and the two IFUs remain physically separate units.
While both IFUs cannot be used simultaneously, it is easy and safe to change the configuration during daytime. A change of modes between the lens-array-IFU and the PPak-IFU involves a hardware switch to select a different shutter, to reconnect the internal lamps to the respective calibration units, and to cover the IFU that is not in use. \begin{center} \begin{figure}[h] \plotone{Figure13-03.eps} \caption{Zero-order image of the double fiber-slit. The fainter rows with 16 blocks belong to the 16 $\times$ 16 lens-array-IFU, the brighter rows with 12 blocks belong to PPak. For better clarity, the slit (physical size=94mm) is split in two; the top image connects to the left of the bottom one.} \label{fig:DIFU} \end{figure} \end{center} \section{Operations} \label{sect:operations} \subsection{Calibration} While the entire lens-array-IFU can be illuminated from a deployable internal calibration unit, the position of the PPak-IFU, being off-axis and in front of the telescope focal plane, is outside the opto-mechanical range of the original calibration unit. Instead, PPak calibrations must be performed with dome or sky flatfield exposures. Flatfield exposures with external light sources (continuum, arc lamps) yield the fiber-to-fiber responses, the wavelength calibration, and the position information required for the purpose of accurately tracing and subsequently extracting the spectra from a recorded image. These calibration images should be obtained at least once per night and for each grating setting. The best spectrograph focus is found by illuminating the calibration fibers only, using the internal spectral line lamps. This will yield well-separated emission spots across the entire CCD chip. Preferably, the calibration fibers should always be illuminated by a spectral line lamp while a science exposure is taken. This allows the tracing of any image shifts or spectrograph de-focus during the data reduction for each science frame (see Fig.~\ref{fig:rawframe}). \begin{center} \begin{figure}[h] \plotone{Figure14-02.ps} \caption{Raw CCD frame illustrating the basic features of the PPak data format: 382 spectra, grouped in 12 blocks (vertical axis) versus wavelength (horizontal axis). A central gap is provided for use with a mosaic CCD. The dark arcs are absorption lines (not wavelength calibrated). The bright dots are emission line spots in the calibration spectra (see inset and Fig.~\ref{fig:PPak_anamag} for a magnified area near the center of the CCD). The distribution of light within the slit illustrates the mapping philosophy (compare with Fig.~\ref{fig:PPak_layout} and Table~\ref{tab:PPak_slit}).} \label{fig:rawframe} \end{figure} \end{center} \subsection{Instrument Control Software} The AG-OPTICS system for acquisition and guiding has been described in paper~I. As opposed to the lens-array-IFU, which is positioned at the center of the A\&G field-of-view (of 3$'$.2 $\times$ 3$'$.2), the PPak-IFU is located off-axis, 295$''$ to the South and 206$''$ to the West. Since the offset is known to the instrument control software, the basic functionality for field acquisition and guiding remains unchanged. The A\&G instrument control software (pics\_ag) has an option to over-plot the PPak outline, or even the positions of the 331 fibers, onto a freshly obtained acquisition frame and allows for an offset pointing to center the object of interest onto the PPak-IFU. The position of a guiding box around a guide-star on the A\&G frame can be stored and recalled.
This allows for the accurate re-positioning of a guide-star on subsequent nights to within 0.2 arcseconds. Note that this procedure has been successfully applied using the finer-sampling lens-array IFU, which, due to its distance, is more likely to be subject to relative flexure than the closely mounted PPak unit (see Fig.~\ref{fig:ppak_principle}). The IDL-based PMAS Instrument Control Software (PICS) includes some additional features for PPak operations, namely the option to include calibration light within the science frames (e.g. 5 $\times$ 10~seconds of ThAr distributed equally within an overall exposure time of 30~minutes, say) and to center, offset or mosaic-point the PPak-IFU. Note that the nod-and-shuffle (or beam-switching) mode that is available for the lens-array-IFU is possible with PPak too, but at the expense of higher cross-talk, as the spacing between the PPak spectra on the detector is smaller. \subsection{Data Reduction Software} \begin{figure}[h] \plotone{Figure15-03.ps} \caption{Graphical User Interface (GUI) of the PPak-online data reduction software, written in IDL by T.B. The display contains the (not yet calibrated) stacked spectra (top panel), re-constructed maps at various wavelength cuts (central panels) and a spectrum of one selected spaxel (bottom panel).} \label{fig:PPak_online} \end{figure} Based on the ``P3d'' data reduction software package (Becker 2002, see paper I), an adapted version for PPak (PPAK\_online) was written by T.B. The program can be used for a quick-look inspection of the data quality and for a reconstruction of maps while observing at the telescope. The code is written in IDL and allows the user to process the raw data and to remove the specific instrumental signature. The subroutines include bias and dark subtraction, cosmic cleaning, spectra tracing, flexure compensation, spectra extraction, flat-fielding, and wavelength calibration. The full P3d package additionally provides a correction for CCD pixel-to-pixel response variations, stray-light modeling, and a wavelength-dependent fiber response calibration. There are also various custom-made IDL utilities for the visualization of stacked spectra, maps, and individual (or co-added) spectra (see Fig.~\ref{fig:PPak_online}). These utilities are available within GUIs and from the IDL command line, supporting the use of scripts. It is also possible to call the E3D visualization tool from within PPAK\_online, providing further features, such as interpolated maps, line fitting, etc.\ (see S\'anchez 2004, S\'anchez et al.\ 2004). PPak data have also been successfully reduced using the hydra package within IRAF. \section{Performance} \label{sect:performance} \subsection{Throughput} The instrumental throughput was obtained using two methods. Firstly, the observed flux of spectrophotometric standard stars was compared to tabulated values from the literature. Secondly, the relative throughput of domeflat exposures for the lens-array-IFU and the PPak-IFU was determined. The reason for the second approach is that the actual atmospheric conditions at Calar Alto are often either non-photometric or not known well enough to determine the true instrumental response unambiguously. However, the instrumental efficiency using the lens-array-IFU was well established previously (see paper~I), and therefore a relative response measurement can yield the PPak-throughput. Fig.~\ref{fig:PPak_effi1} plots the directly measured PMAS+PPak efficiency. The lower (dotted) curve gives the total throughput, from top of the atmosphere to the detector.
It was obtained by comparing the flux of the spectrophotometric standard star BD~+75~325, observed on Nov. 20, 2004 at an airmass of 1.3, to the expected flux as given by Oke (1990). This total efficiency $\eta$ includes the instrument $\eta_{ins}$, the atmosphere $\eta_{atm}$, and the telescope $\eta_{tel}$. The middle (dashed) curve represents the efficiency from top of the telescope to the detector, i.e. taking atmospheric extinction and airmass into account. The atmospheric extinction coefficients for each wavelength were calculated by scaling typical extinction tables for Calar Alto (Hopp \& Fernandez 2001) to $k_{ext}$=0.14 mag and $\eta_{atm}$=0.85, which was the measured extinction in the V-band during the night. The top (solid) curve is the pure instrumental throughput $\eta_{ins}$, from the telescope focal plane to the detector. The instrumental configuration included the PPak-IFU without any filters and the spectrograph with the V300 grating in 1st order (300~l/mm, blaze angle=4.3$^\circ$, $\alpha$=16$^\circ$, $\lambda_{cen}$=542~nm). The throughput of the primary mirror was derived from reflectivity measurements obtained routinely at Calar Alto\footnote{\url{http://dbserv.caha.es/iris/index.asp}}. Assuming a similar value of 75\% for the secondary reflectivity, the telescope efficiency was estimated to be $\eta_{tel}$=0.57 in V at the time of observation. Note, that the plots in Fig.~\ref{fig:PPak_effi1} present {\em lower limits} in that the flux lost outside the finite aperture of a single fiber was neglected. Applying an aperture correction (along the lines of CCD aperture photometry techniques, e.g.\ Howell 1989) based on the measured seeing FWHM of 1$''$.1 in V, and assuming a Gaussian approximation to the point-spread-function, we estimate a correction factor of 1.15, i.e.\ a peak efficiency of 31\%. \begin{figure}[h] \plotone{Figure16-04.ps} \caption{PMAS+PPak efficiency, measured from a single-fiber standard star integration. Plotted are the total throughput (dotted curve), the efficiency of telescope and instrument combined (dashed curve), and the pure instrumental efficiency (solid curve), taking the atmospheric extinction and the telescope reflectivity into account. Note: the total response curve is based on telescope transmission under less than optimal conditions (dust).} \label{fig:PPak_effi1} \end{figure} This value agrees reasonably well with the comparison of domeflat exposures, taken with both the lens-array and the PPak-IFU, while the spectrograph setup, using a V300 grating, remained unchanged. The PPak configuration was found to have a 1.5 times higher throughput, than the lens-array-IFU, resulting in a peak efficiency of 30\% rather than 20\%, respectively. As the coupling of both fiber-slits towards the spectrograph is similar and the length differences between the lens-array and PPak-fibers is only 1.5~m, we attribute this difference in efficiency to the fore-optics and input coupling in particular. As shown in \S~4.3 of paper~I, the lens-array IFU throughput suffers from two effects: firstly from light losses caused by the extended parts of the lens-array PSFs (i.e. stray light and diffraction spikes) and secondly from the median mis-alignment between the micro-pupil images and the fiber cores. Both these problems are not present in the PPak-design, so that a bare fiber-bundle fed by a large fore-optics lens is more efficient. 
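For readers who wish to repeat this bookkeeping, the decomposition of the total efficiency into atmospheric, telescope and instrumental terms can be sketched in Python as follows. The extinction coefficient, airmass and mirror reflectivities are the values quoted above, whereas the total efficiency passed in as input is a placeholder: the 13\% used in the example is hypothetical, chosen only to show that such a value would imply an instrument-only efficiency of roughly 27\%.
\begin{verbatim}
def efficiency_breakdown(eta_total, k_ext=0.14, airmass=1.3,
                         refl_primary=0.75, refl_secondary=0.75):
    """Split a measured top-of-atmosphere efficiency into its factors."""
    eta_atm = 10.0 ** (-0.4 * k_ext * airmass)   # ~0.85 atmospheric transmission
    eta_tel = refl_primary * refl_secondary      # ~0.57 for a two-mirror telescope
    eta_ins = eta_total / (eta_atm * eta_tel)    # instrument-only efficiency
    return eta_atm, eta_tel, eta_ins

print(efficiency_breakdown(0.13))   # hypothetical 13% total -> ~27% instrumental
\end{verbatim}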
Instrumental throughput estimates for other PMAS gratings were bootstrapped from the lens-array-IFU efficiency data as shown in Fig.~15 of paper~I. Using a grating blazed in R (600~l/mm, blaze angle=13.9$^\circ$, $\lambda$=530--810~nm), the instrumental efficiency peaks at 36\% between 600--700~nm. Note that the PPak configuration has not been optimized in the blue and that both the image quality of the focal reducer lens and the transmission of the fibers and lenses decrease rapidly below 400~nm. The groove density of the gratings (e.g. 300, 600 or 1200 l/mm) only has a minor effect on the efficiency. Those gratings which are used in 2nd order (I1200, J1200) show a significantly lower efficiency due to intrinsic grating effects and a geometrical overfill of the grating at large tilt angles (see~\S~\ref{sect:2ndorder}). \subsection{Fiber Response and Cross-talk} Visual inspection of the fiber bundle, illuminated either from the front or from the back, yielded an apparently uniform fiber response across the slit and the IFU, respectively. This was confirmed by sky- and domeflat exposures taken at the telescope. Fig.~\ref{fig:PPak_cdc1} shows a cross-dispersion cut, roughly at 500~nm, through a raw domeflat exposure with the illuminated 331 `object' and 36 `sky' fibers. The vignetting of the spectrograph optics towards the edges and the degree of flatness of the fiber-to-fiber response can be judged from this plot. Note that there is an overall slope in the intensity level, which is believed to result from two effects: the slit is not perfectly centered on the optical axis of the spectrograph, and the undersized flatfield screen in the dome does not in fact provide a uniform and flat illumination across the field. Fig.~\ref{fig:PPak_cdc2} is a zoomed version of the previous figure, showing a central slitlet only. The maxima of the normalized intensities range between 0.89 and 0.98. To zeroth order, the fiber core projects to 6~pixels on the CCD. The pitch between individual fibers is 9.4~pixels, resulting in moderate cross-talk and a typical minimum intensity between adjacent spectra of 20\% of the peak level. \begin{figure}[h] \includegraphics[width=0.35\textwidth,angle=90]{Figure17-03.ps} \caption{A cross-dispersion cut through a raw `domeflat' exposure featuring all 367 spectra that belong to the PPak-IFU (the calibration fibers were not illuminated). Note the gaps between the twelve slitlet blocks. } \label{fig:PPak_cdc1} \end{figure} \begin{figure}[h] \includegraphics[width=0.35\textwidth,angle=90]{Figure18-03.ps} \caption{Typical fiber-to-fiber throughput variation. Zoom of Fig.~\ref{fig:PPak_cdc1}: Shown is one slitlet with 31 spectra, which are well separated down to 20\% of the peak value.} \label{fig:PPak_cdc2} \end{figure} The final amount of cross-talk depends not only on the instrumental performance, but also on the actual data reduction. A profile-fitting extraction method is capable of allocating the overlapping wings of the flux distribution to the individual spectra. In this way, cross-talk can be disentangled better than with an extraction scheme that sums a fixed number of pixels (see Becker\ 2002). Apart from spectra extraction, other parameters, such as the option to use on-chip binning in the spatial direction, or the (sagittal and tangential) focus setting of the spectrograph, will result in different levels of cross-talk. Whether these are of concern, and what level is acceptable, will depend on the particular science case.
\subsection{Scattered Light} Scattered light effects depend mainly on the selected grating, wavelength range and order, i.e.\ the grating position. In particular, ghost images were noticed at certain second order wavelength settings, where the grating is overfilled and its mount is highly inclined towards the incoming beam. The exact origin of these ghost images is the subject of further investigation. In addition to ghost images, most complex optical systems show extended wings of the point-spread-function at low intensity levels, which are generally attributed to ``scattered light'', although the precise physical origin of this ``halo'' around the PSF core is difficult to assess. The relatively large separation of the PPak calibration fibers presented us with the opportunity to obtain high signal-to-noise cross-dispersion profiles from well-exposed continuum lamp calibration exposures, and map the scattered light level as illustrated in Fig.~\ref{fig:ScatLight}. The overlapping individual scattered light halos from these calibration spectra generate a level of $\approx$0.1\% of the peak counts in the areas in between these spectra. Despite the fact that these light levels are low, they can contaminate the neighboring spectra, depending on the exact method of spectra extraction. Using a box-like extraction with a width of 10 fiber-pitches to both sides around a single illuminated spectrum, the contribution from the integrated scattered light can be on the order of 8--14\%. This value depends strongly on the assumed width of a spectrum, the position of the spectrum on the chip and the wavelength. \begin{center} \begin{figure}[h] \plotone{Figure19-03.ps} \caption{Normalized average of 100 columns in the cross-dispersion direction near the center of the CCD. Only the 15 calibration fibers are illuminated with halogen light, while the 367 IFU fibers are dark. This allows the measurement of the extent of the wings of the flux distribution to low light levels.} \label{fig:ScatLight} \end{figure} \end{center} Becker (2002) has implemented in the P3d data reduction package a profile-fitting extraction method, based on empirically determined cross-dispersion profiles as a function of wavelength for each spectrum; it is, however, presently only available for data obtained with the PMAS lens-array IFU. The iterative scheme of this code is capable of (1) measuring and eliminating the crosstalk between adjacent spectra (on the order of 0.5\% for the lens-array), and (2) simultaneously solving for a model of diffuse scattered light over the face of the detector (less than 1\% of the average peak intensity). While this level was considered negligible for normal PMAS data, the application of the scattered light model to data from the MPFS instrument at the Selentchuk 6m Telescope, which does have significant stray-light patterns, proved to be essential to eliminate systematic errors (Becker 2002). Implementing a profile-fitting extraction routine for PPak data, as well as a thorough characterization of the cross-talk and stray-light properties for the various grating setups at different grating tilts, is a goal for a future upgraded version of the P3d software. \subsection{Usage in the Second Spectral Order} \label{sect:2ndorder} PMAS was initially designed and built as a spectrophotometer for low/medium spectral resolution, with the goal of maximizing wavelength coverage rather than spectral resolution.
More recently, the main science driver for retrofitting PMAS with the PPak-IFU demanded R$\approx$8000 (Verheijen et al.\ 2004), which could not be satisfied with the current fiber size (i.e. pseudo-slit-width) and any of the standard gratings in 1st order. A test with two gratings in 2nd order (I1200: blaze angle=37$^\circ$, 1200~l/mm and J1200: blaze angle=46$^\circ$, 1200~l/mm), however, yielded satisfactory results in the wavelength region near 520~nm. Fig.~\ref{fig:PPak_anamag} illustrates that not only the roughly two-fold increase of linear dispersion, but also the effect of anamorphic demagnification helps to improve the spectral resolution (e.g.\ Schweizer 1979). Considering the basic grating equation, $\sin(\alpha) + \sin(\beta) = n g \lambda$, where $\alpha$ and $\beta$ are the angles between the grating normal and the incident and diffracted beams, respectively, $n$ is the spectral order, $g$ the groove density in [l/mm], and $\lambda$ the wavelength, the anamorphic (de)magnification $r$ of the slit width is given by $r = {\cos(\alpha)}/{\cos(\beta)}$. For a typical setup in first order, $r$ is close to unity, and the effect is often negligible. In second order, however, the grating tilt is more extreme so that $r$ becomes significantly different from 1. The upper panel of Fig.~\ref{fig:PPak_anamag} shows a small central region of a raw CCD frame from a sky flatfield exposure which was taken on Nov.~8, 2004, using the V600 grating in 1st order with a grating tilt of $\alpha$=15$^\circ$. The anamorphic magnification of this setup is $r$=1.08. Therefore, the two rows of calibration spectra with ThAr emission lines present a nearly perfectly round spot appearance. The corresponding plot with emission line profiles reveals a FWHM of $\approx$3 pixels (from a 2$\times$2 binned CCD frame), matching exactly the expected width of 6 unbinned pixels. \begin{figure}[h] \plotone{Figure20-02.eps} \caption{Anamorphic magnification. Top: V600 1st order flatfield exposure with two ThAr calibration spectra from a 100$\times$100 binned pixel region near the center of the CCD (binning factor 2$\times$2). Bottom: J1200 in 2nd order (backward), same binning factor. Both plots are shown with a negative greyscale stretch. As a result of anamorphic demagnification, the 2nd order emission line spots are significantly compressed in the (horizontal) direction of dispersion, compared with the perfectly round appearance in first order. The anamorphic factors are r=1.08 and r=0.49, respectively. The spectra to the right show the corresponding ThAr emission line profiles for the same region. Note that anamorphic magnification acts only in the direction of dispersion. For a more detailed explanation, see text.} \label{fig:PPak_anamag} \end{figure} The lower panel shows the same situation for a setup with the J1200 grating in second order, mounted in the ``{\em backward}'' orientation (see paper~I), i.e.\ the grating normal facing the camera, and a grating tilt of $\alpha$=63$^\circ$. This sky flatfield exposure was obtained on May~4, 2005. In contrast to the V600 exposure from above, the emission line spots are now significantly compressed in the dispersion direction, and the line profiles are extremely sharp with a FWHM of $\approx$1.5 pixels (2$\times$ binned). The anamorphic magnification is $r$=0.49. The resolving power obtained in this configuration with the 150~$\mu$m PPak fibers is R$\approx$7900.
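To illustrate the effect, the following short Python sketch evaluates the grating equation and the anamorphic factor for the J1200 second-order setup quoted above. As a simplification it takes the quoted grating tilt as the incidence angle and ignores the fixed collimator--camera angle of the spectrograph, so the numbers are approximate.
\begin{verbatim}
import math

def anamorphic_factor(alpha_deg, order, grooves_per_mm, wavelength_nm):
    # Grating equation: sin(alpha) + sin(beta) = n * g * lambda
    a = math.radians(alpha_deg)
    sin_b = order * grooves_per_mm * wavelength_nm * 1e-6 - math.sin(a)
    b = math.asin(sin_b)
    return math.cos(a) / math.cos(b)          # r = cos(alpha) / cos(beta)

r = anamorphic_factor(62.5, 2, 1200, 515.0)   # J1200, 2nd order, backward
print(round(r, 2))        # ~0.49, as quoted in the text
print(round(6 * r, 1))    # fiber image: ~3 unbinned (~1.5 binned) pixels wide
\end{verbatim}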
Note that, in order to maintain sharp images and this relatively high spectral resolution, careful focusing, restricting exposure times, and avoiding hour angles with adverse flexure effects (see paper~I) are of utmost importance. Use of the high-resolution mode comes at the expense of a lower throughput. Table~\ref{GRATEFF} lists the relative efficiencies $T_{rel}$ (derived from count rates in ADU/s/\AA) of the I1200 and J1200 gratings in second order, mounted forward (fwd) and backward (bwd), respectively, with reference to the V1200 grating in first order. \begin{table}[ht] \caption{Relative grating throughput $T_{rel}$ and (de)magnification $r$ at $\lambda$=515~nm} \label{GRATEFF} \begin{center} \begin{tabular}{lrcrcr} \tableline \tableline Grating & Blaze[$^{\circ}$] & $n$ & $\alpha$[$^{\circ}$] & $r$ & $T_{rel}[\%]$ \\ \tableline V600 (fwd) & 8.6 & 1st & 11.5 & 1.14 & 100 \\ V1200 (fwd) & 17.5 & 1st & 1.7 & 1.31 & 100 \\ V1200 (bwd) & 17.5 & 1st & 40.3 & 0.76 & 95 \\ I1200 (fwd) & 36.8 & 2nd & 20.5 & 2.03 & 40 \\ I1200 (bwd) & 36.8 & 2nd & 62.5 & 0.49 & 55 \\ J1200 (fwd) & 46.0 & 2nd & 20.5 & 2.03 & 52 \\ J1200 (bwd) & 46.0 & 2nd & 62.5 & 0.49 & 70 \\ \tableline \end{tabular} \end{center} \end{table} \section{Summary} \label{sect:summary} A new integral field unit, based on the fiber-bundle technique and providing high grasp and a large field of view, was developed and successfully commissioned for the existing PMAS 3D instrument. The central PPak-IFU features 331 object fibers, which, projected by the 3.5~m Calar Alto telescope, span a hexagonal field-of-view of 74$\times$64~arcseconds with a filling factor of 60\%. The individual spaxel (fiber) size is 2$^{\prime\prime}$.7 across, yielding a total grasp of 15200~arcsec$^2$m$^2$ at this telescope (a simple consistency check of this number is sketched at the end of this section). An additional 36 fibers are distributed over six sky-IFUs, which surround the main IFU at a distance of 72$^{\prime\prime}$ from the field center, allowing good coverage and subtraction of the sky background. For calibration purposes, 15 fibers can be illuminated independently with arc lamps during a science exposure, allowing the spectral resolution and image shifts to be tracked. A summary of the technical parameters is given in Table~\ref{tab:pmas_param}. Further details regarding the PMAS spectrograph, available gratings and filters are given in paper~I, or can be found online\footnote{\url{http://www.caha.es/pmas}}. The combination of spaxels with high grasp and the PMAS spectrograph with high efficiency and wide wavelength coverage makes PPak a powerful tool for the study of extended low-surface-brightness objects, which require a high light-collecting power and a large field-of-view. Fig.~\ref{fig:UGC463} gives an example: the galaxy UGC~463, which was observed for the Disk Mass project. Despite the rather crude sampling of the fibers, the basic morphological structures of the galaxy seen in the POSS-II image (spiral arms, star clumps, etc.) are clearly visible in the PPak reconstructed image. Apart from the ability to create mono- and polychromatic images from the resulting data, one exposure with PPak yields 331 spatially resolved spectra of the target. The high number of fibers covering the outer, fainter parts of the galaxy offers the observer the option to adaptively bin spaxels so as to further increase the signal-to-noise ratio.
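For completeness, the grasp quoted above can be checked with a simple back-of-the-envelope estimate: the solid angle covered by the 331 object fibers multiplied by the effective collecting area of the telescope. The central-obstruction diameter used in the sketch below is an assumed, illustrative value, not an official Calar Alto figure.
\begin{verbatim}
import math

# Back-of-the-envelope grasp estimate in arcsec^2 m^2.
n_fibers  = 331    # object fibers of the central PPak IFU
d_fiber   = 2.68   # fiber diameter on the sky [arcsec]
d_primary = 3.5    # telescope aperture [m]
d_obstr   = 1.4    # assumed central obstruction [m] (illustrative)

omega = n_fibers * math.pi * (d_fiber / 2.0) ** 2     # [arcsec^2]
area  = math.pi * ((d_primary / 2.0) ** 2
                   - (d_obstr / 2.0) ** 2)            # [m^2]
# yields roughly 15000 arcsec^2 m^2, cf. the 15200 quoted above
print(f"grasp ~ {omega * area:.0f} arcsec^2 m^2")
\end{verbatim}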
\begin{figure*}[ht!] \begin{center} \includegraphics[width=0.3\textwidth,angle=-90]{Figure21-03.ps} \caption{Comparison between the POSS-II R-band image (left panel) and the PPak reconstructed image of the galaxy UGC~463 (right panel). The PPak data were obtained using the V300 grating, centered at $\approx$5300~\AA. Once reduced, the reconstructed image was created using E3D (S\'anchez 2004): a 3D cube was created adopting a natural-neighbor interpolation scheme onto a common grid of 1$''$.35/pixel. After that, a 2D image was produced by co-adding the flux in the wavelength range between 4500 and 6000~\AA.} \label{fig:UGC463} \end{center} \end{figure*} During 2004, PPak was available on a shared-risk basis, as its usage involved a complex change-over between the lens-array and PPak fibers, which needed to be done by AIP staff. With the integration of the two parallel fiber-slits, the PPak-IFU was permanently installed and is offered as a common-user instrument from 2005 onwards. Two further upgrades are scheduled: a new mount to ease the exchange of order-separating filters, and adjustments to the PPak data reduction software. Since commissioning, PPak has attracted considerable interest from observers, with the result that 9 observing runs with a total of 26 nights, plus 9 `service buffer A' nights, were granted to PMAS-PPak within its first year by the Calar Alto TAC. In 2005, approximately 50\% of the time allocated to PMAS is being used with the PPak-IFU. Throughout these runs, the PPak module has worked without failure. \begin{table} \caption{Summary of main PPak + PMAS instrumental parameters.} \begin{center} \begin{tabular}{lr} \tableline \tableline {\bf PPak-IFU:} & \\ \tableline design principle & focal reducer + fiber-bundle \\ focal reducer lens & F/10 to F/3.3, dia=50~mm \\ plate-scale & $17''.85$/mm \\ fiber configuration & 331 object, 36 sky, 15 calibration \\ PPak - FoV & $74'' \times 64''$ (hexagonal packed) \\ spatial sampling & $2''.68$ per fiber diameter \\ fiber pitch & $3''.5$ fiber-to-fiber \\ IFU-filter & 2~inch / 50~mm round filter \\ PPCU-filter & 1~inch / 25~mm round filter \\ wavelength range & 400--900~nm (`dry' fibers) \\ \tableline {\bf PMAS spectrograph:} & \\ \tableline PPak fiber slit & 0.15 $\times$ 94~mm, 382 fibers \\ slit-filter & 140 $\times$ 35~mm filter \\ collimator & fully refractive, 450~mm, F/3 \\ camera & fully refractive, 270~mm, F/1.5 \\ reflective gratings & 1200, 600 and 300~l/mm \\ detector & SITe 2k $\times$ 4k, 15~$\mu$m pixels \\ linear dispersion & 0.53, 1.2, 2.6~\AA/pixel (m=1) \\ resolution & 0.3~\AA/pixel, R$\approx$8000 (m=2) \\ wavelength coverage & 600--3400~\AA/frame (1st order) \\ & 400~\AA/frame (2nd order)\\ \tableline {\bf PMAS A\&G camera:} & \\ \tableline A\&G FoV & 3.4 $\times$ 3.4 arcminutes \\ A\&G plate scale & 0.2 arcseconds/pixel \\ A\&G-filter & 4 $\times$ 2~inch/50~mm round filters \\ \tableline \end{tabular} \end{center} \label{tab:pmas_param} \end{table} \vspace{5mm} \section*{ACKNOWLEDGMENTS} The authors would like to thank Ute Tripphahn and the mechanical workshop for the realization of various components of the PPak-IFU, and Thomas Hahn and Thomas Fechner (all AIP) for their help during commissioning. MV would like to thank Matthew Bershady for valuable discussions regarding design and construction issues. PPak was developed within the \mbox{ULTROS} project, which is funded by the German Ministry of Education and Research (BMBF) through Verbundforschung grant 05-\mbox{AE2BAA/4}.
AK, MR and MV gratefully acknowledge travel support from the Deutsche Forschungsgemeinschaft (DFG). We are grateful for the excellent support we received from the Calar Alto staff, in particular Nicol\'as Cardiel, during commissioning and the subsequent observing runs with PPak. Special thanks to David Lee (formerly AAO) for insights into aspects of the SPIRAL-B manufacture. \vspace{-5mm}